TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 lost lock about an hour ago but is relocking well; so far up to TRANSITION_FROM_ETMX.
Lockloss 1403820932
Ending a 15 hour lock. This lock had the ETMX wiggles as well, but they happened almost a second prior to the lockloss and were larger than I have seen before.
FAMIS 26260
Laser Status:
NPRO output power is 1.821W (nominal ~2W)
AMP1 output power is 67.15W (nominal ~70W)
AMP2 output power is 137.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 32 days, 23 hr 34 minutes
Reflected power = 22.75W
Transmitted power = 105.2W
PowerSum = 127.9W
FSS:
It has been locked for 0 days 11 hr and 42 min
TPD[V] = 0.6936V
ISS:
The diffracted power is around 2.0%
Last saturation event was 0 days 11 hours and 43 minutes ago
Possible Issues:
PMC reflected power is high
FSS TPD is low
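The "Possible Issues" flags amount to range checks on the readbacks above. A minimal sketch of that logic, where the band edges are assumptions chosen for illustration (not official operating limits):

```python
# Illustrative range check behind the "Possible Issues" list: compare each
# readback against a nominal band and flag anything outside it.
# NOTE: the band edges below are assumptions for illustration only.
NOMINALS = {
    # name: (current reading, nominal low, nominal high)
    "AMP1 output power [W]": (67.15, 65.0, 72.0),
    "AMP2 output power [W]": (137.3, 135.0, 140.0),
    "PMC reflected power [W]": (22.75, 0.0, 20.0),
    "FSS TPD [V]": (0.6936, 0.8, 1.0),
}

def flag_issues(readings):
    """Return the names of readbacks that fall outside their nominal band."""
    return [name for name, (value, low, high) in readings.items()
            if not low <= value <= high]

print(flag_issues(NOMINALS))  # flags the high PMC reflected power and low FSS TPD
```

With these assumed bands, the script reproduces the two issues noted above and nothing else.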
Sun Jun 30 10:09:55 2024 INFO: Fill completed in 9min 52secs
TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 7.5 hours, noise and range look okay.
TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Three locklosses this shift, all with unknown causes; only two showed the ETMX motion beforehand that we've been seeing. Recovery from each was fairly straightforward, though the ALS-X PLL was problematic and kept unlocking while recovering from the first lockloss.
H1 has been observing for 1 hour.
LOG: No log for this shift.
Lockloss @ 06:00 UTC - link to lockloss tool
Locked for 49 minutes. No obvious cause; larger ETMX motion right before the lockloss this time.
H1 back to observing at 07:01 UTC. Automatic relock except for manually adjusting ETMX to lock ALS X faster.
Lockloss @ 04:15 UTC - link to lockloss tool
Locked for 38 minutes. No obvious cause, but I see the familiar small ETMX hit almost a half second before the lockloss.
H1 back to observing at 05:13 UTC
We lost lock just as I was beginning HVAC shutdowns to take advantage of the nearly 160 Mpc range. When we regained lock, the range was only about what it was for my last shutdown (77477), so I will defer. Here are the times for what I did do:
Start of shutdown (UTC) | End of shutdown (UTC) | Equipment shut down |
---|---|---|
16:25 | 16:37 | Office area HVAC |
16:50 | 16:57 (lockloss 16:55:49) | Chiller, all turbines, office area HVAC, split minis in CER |
Lockloss @ 23:57 UTC - link to lockloss tool
Ends lock at 6 hours. No obvious cause, and I don't see the ETMX motion prior to this lockloss as we've seen in the past.
H1 back to observing at 03:41 UTC.
ALS X PLL unlocking caused frequent interruptions in this relock. Eventually made it to DRMI, where PRM needed adjustments after going through MICH_FRINGES.
TITLE: 06/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with a straightforward relock, and then a delayed calibration measurement. We've been locked for 5.5 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
16:23 | PEM/FAC | Robert | Site | n | HVAC shutdowns | 19:02 |
19:03 | SQZ | Terry | Opt Lab | local | SHG work | 00:57 |
TITLE: 06/29 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked for 5 hours.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
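The GPS stamps above can be reproduced from the UTC times: GPS time counts seconds from 1980-01-06 00:00:00 UTC and does not apply leap seconds, so for 2024 dates GPS is ahead of UTC by 18 s. A minimal sketch (the fixed 18 s offset is an assumption that holds from 2017 onward):

```python
from datetime import datetime, timezone

# GPS epoch: 1980-01-06 00:00:00 UTC. GPS time has no leap seconds, so it is
# currently ahead of UTC by 18 s (assumed constant here; valid since 2017).
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 18  # leap-second offset in seconds

def utc_to_gps(dt):
    """Convert a timezone-aware UTC datetime to GPS seconds."""
    return (dt - GPS_EPOCH).total_seconds() + GPS_MINUS_UTC

start = datetime(2024, 6, 29, 21, 11, 45, 566107, tzinfo=timezone.utc)
print(f"{utc_to_gps(start):.6f}")  # ≈ 1403730723.566107, the simulines start stamp
```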
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure whether this pops up frequently and this is just the first time I've caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex is running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
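Since this failure has usually cleared on a re-run, one mitigation would be a small retry wrapper around the excitation start. A sketch, where `start_excitation` is a hypothetical stand-in for the real `drive.start()` call in simuLines.py, not the actual API:

```python
import time

class AWGError(Exception):
    """Stand-in for awg.AWGError, so the sketch is self-contained."""

def start_with_retry(start_excitation, attempts=3, wait=5.0):
    """Call start_excitation(), retrying on transient AWG slot errors.

    start_excitation is a hypothetical zero-argument callable wrapping the
    real excitation start; this is an illustration, not the simulines API.
    """
    for attempt in range(1, attempts + 1):
        try:
            return start_excitation()
        except AWGError as err:
            if attempt == attempts:
                raise  # persistent failure: give up and surface the error
            print(f"attempt {attempt} failed ({err}); retrying in {wait} s")
            time.sleep(wait)
```

A wrapper like this would hide a truly transient slot failure while still surfacing a persistent one, matching the "rarely reappears when re-running" behavior described above.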
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
The CDS environment was stable between June 22 (last good run) and June 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On June 27 and June 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
Andrei, Naoki, Sheila
For the upcoming measurement of the FC backscattering, we need to calibrate the length change of the FC. To do this, we calculated the transfer function from the GS_SL FC control signal [Hz] to FC2 displacement [μm]. We followed the steps in Diagram.png to get the result. The plot bode_all_datasets.png contains all the datasets used.
The resulting transfer function is presented in Tranfer_func.png (the Result curve is the transfer function). The result was exported to a frequency/magnitude/phase dataset, which can be found in result_data.txt. The remaining .txt files contain all the datasets used for this calculation.
Assuming that an FC resonance frequency shift Δf = c/2L corresponds to an FC length change ΔL = λ/2 (λ = 532 nm, L = 300 m), then Δf/ΔL = c/(L·λ) = 1.88×10^12 Hz/m = 1.88×10^6 Hz/μm. Multiplying the transfer function by this coefficient gives an open-loop unity gain frequency of 39.4 Hz. The open-loop gain plot (after multiplication) can be found in openloop_gain.png.
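The conversion coefficient quoted above can be checked numerically; a minimal sketch:

```python
# Check of the Δf/ΔL coefficient: a resonance shift of c/2L corresponds to a
# length change of λ/2, so Δf/ΔL = c / (L * λ).
c = 299_792_458.0  # speed of light [m/s]
L = 300.0          # filter cavity length [m]
lam = 532e-9       # green wavelength [m]

df_over_dL = c / (L * lam)  # [Hz/m]
print(f"{df_over_dL:.3g} Hz/m = {df_over_dL * 1e-6:.3g} Hz/um")  # 1.88e+12 Hz/m
```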
For the FC2 suspension plant, we used sus_um in the H1:CAL-CS_SUM_PRCL_PRM filter bank; sus_um is the PRM suspension plant in units of μm/count. Although FC2 and PRM are both HSTS suspensions, the FC2 and PRM M2 and M3 actuation strengths differ by a factor of 0.326/2.83 according to the suspension control design summary table on the door of the control room, as shown in the attachment (TACQ for FC2, TACQ*** for PRM). So we added this factor for the FC2 M3 path.
The long ramp time was confusing me while I was trying to adjust the TMS to speed up locking, so I looked into what needed such a long ramp. I thought it was the TMS servo, but it seems to only use the TEST bank. I couldn't find another place in ISC_LOCK or the ALS guardians that specifically referenced this bank, so I changed it to 2 sec to match TMSY and then accepted it in SDF safe and observe. We made it through this acquisition at 2 sec, so maybe we're OK.
Lockloss 1403715367
There was ground motion from an earthquake just after the lock loss, but the lock loss itself seemed to be quite sudden. ETMX sees those usual wiggles that we often see.
Robert had just started running HVAC shutdown tests, but this seems very unlikely to be the cause.
Back to Observing at 1800UTC
I touched up TMSX P, PRM Y, and a bit of BS P to speed up acquisition and avoid an initial alignment.
Our ALS-X PLL beatnote seems to have gone quite bad ~10 hours ago, dropping below -30 dBm. It recovered during this acquisition, but I worry we will have more issues with this.
The RefCav TPD is reading as low as 0.69 V, showing a warning on DIAG_MAIN, and has been falling over the past couple of weeks. Since this is likely due to the increased PMC loss lowering the output of the cavity, I don't expect that fixing the RefCav alignment will recover much, but I can try an alignment this evening if the IFO loses lock.