H1_MANAGER alerted me that it needed intervention. A large earthquake was coming through and had tripped the ISIs for the ITMs and ETMs. ETMY stages M0 and R0 had also tripped (not yet sure if that was just due to the ISI tripping).
There was also a notification on verbals from 08:51 UTC, a minute after everything tripped, saying "ETMY hardware watchdog trip imminent", but when I checked the hardware watchdog screen everything looked normal with no countdowns. Once it looked like the worst of the earthquake had passed and the ISI values were all within range, I reset R0 and M0 for ETMY and reset all four ISI watchdogs. We are still in LARGE_EQ mode so we haven't started relocking yet, but it looks like we are close to leaving it and should be good to go for relocking.
It looks like ETMY M0 and R0 tripped because they barely went over their watchdog limits, and that this was due to the ISI stages tripping (ndscope). Similar to what I did in 81668, I'll raise the thresholds for M0 and R0 a bit. Thankfully, the thresholds we established for the other stages were all good guesses and are still fine.
Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
---|---|---|---
M0 | 100 | 106 | 150
R0 | 120 | 122 | 175
L1 | 170 | 134 | 170 (unchanged)
L2 | 270 | 168 | 270 (unchanged)
TITLE: 01/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 02:36 UTC
Overall a pretty quiet shift with one random lockloss (alog 82227). There doesn't seem to have been an EX glitch, and the environment was stable (same microseism as the last few days and low wind). We managed to get back up to NLN in 1 hour, with DRMI locking and ENGAGE_ASC_FOR_FULL_IFO taking the longest to get past.
About 1.5 hours into the lock, SQZ unlocked and took us out of OBSERVING. It relocked automatically and we were observing again 4 minutes later.
LOG:
None
Unknown cause lockloss - not environmental and no EX saturation.
TITLE: 01/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 163Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: I ran the calibration sweep this afternoon; there was a lockloss shortly after. I tried restarting the IOC for the lab dust monitors, but it still couldn't connect to LAB2. DIAG_MAIN is reporting "ISS diff power low". We've been locked for ~2 hours.
LOG: No log.
TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 22:34 UTC
Closes FAMIS 26353. Last checked in alog 82116.
All trends look good. No fans above or near the (general) 0.7 ct threshold. Screenshots attached.
Lockloss at 20:03; we had just finished the calibration measurement 3 minutes earlier.
22:35 UTC Observing
The wind started picking up during the measurement.
Simulines start:
PST: 2025-01-11 11:36:44.979063 PST
UTC: 2025-01-11 19:36:44.979063 UTC
GPS: 1420659422.979063
Simulines stop:
PST: 2025-01-11 12:00:27.281127 PST
UTC: 2025-01-11 20:00:27.281127 UTC
GPS: 1420660845.281127
Files:
2025-01-11 20:00:27,195 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,203 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,208 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,213 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,218 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250111T193645Z.hdf5
Closes FAMIS 26346.
Laser Status:
NPRO output power is 1.851W
AMP1 output power is 70.19W
AMP2 output power is 137.0W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 25 days, 0 hr 12 minutes
Reflected power = 25.46W
Transmitted power = 104.6W
PowerSum = 130.0W
FSS:
It has been locked for 0 days 15 hr and 6 min
TPD[V] = 0.8127V
ISS:
The diffracted power is around 2.5%
Last saturation event was 0 days 15 hours and 7 minutes ago
Possible Issues:
PMC reflected power is high
Sat Jan 11 10:18:17 2025 INFO: Fill completed in 18min 14secs
TCmins [-110C, -107C] OAT (10C, 50F)
TITLE: 01/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 05:05 UTC
Overall a calm shift with one slightly annoying lockloss during an OPO temp optimization. Reacquisition was slow due to the environment (high winds and an earthquake) but was ultimately fully automatic after an initial alignment. Lockloss alog 82215.
Other than this, the SQZ OPO temp had to be adjusted, so I went out of OBSERVING for 6 minutes (05:05 UTC to 05:11 UTC) and adjusted it successfully, bringing the range from ~138 Mpc to ~158 Mpc (screenshot attached).
The microseism seems to be coming down slowly and the high winds have calmed down.
LOG:
None
Lockloss of unknown cause, but it's reasonable to assume it was either microseism- or squeeze-related (though I'm unsure if SQZ can cause a LL like that).
The lockloss happened as I was adjusting the SQZ OPO temperature, since over the last hour there had been a steady decline in the H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT channel. After noticing this, I went out of OBSERVING and into CORRECTIVE MAINTENANCE to fix it. The adjustment was actually working (the channel signal was getting higher with each tap) when the LL happened. This coincided with a 5.2 EQ in Ethiopia, which I do not think was the cause but which may have moved things in this high-microseism state. I've attached the trends from the SQZ overview and OPO temp, including the improvements made. If it is possible to induce a LL by changing the OPO temp, then this is likely what happened, though the LL did not occur while the temp was actively being adjusted, as the 3rd screenshot shows.
It seems that the microseism, the earthquake, and the SQZ issues also coincided with some 34 mph gusts, so I will conclude that it was mostly environmental.
Short update on locking: after a very slow initial alignment, locking took some time to get through DRMI but managed after going to PRMI (and losing lock due to the BS in between). We are now sitting at DARM_OFFSET, but signals have not converged after 10 minutes due to a passing 5.0 from Guatemala (which I believe we will survive).
As Ibrahim said, the OPO temp adjustment would not cause a lockloss.
However, we can see at this time on Friday, and two days before, that the SQZ angle servo and ASC seem to get into a strange ~13-minute oscillation when the OPO temperature is bad and the SQZ angle is around 220 deg. See attached plots. We are not sure why this is. Now that we are a week out from the OPO crystal move (82134), the OPO temperature is becoming more stable, but it will still need adjusting for the next ~week.
TITLE: 01/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
IFO Has been locked for 9 hours.
It's been a perfect day for Observing.
Nothing to really report.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End
---|---|---|---|---|---|---
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07 |
16:09 | FAC | Kim | Optics Lab | n | technical cleaning | 16:39 |
22:12 | VAC | Jordon & Janos | EX | No | Going to EX Mech room to check & get Vacuum Equipment | 22:43 |
23:37 | PCAL | Mr. Llamas | PCAL Lab | YES | Starting a PCAL lab measurement | 23:48 |
TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 15:24 UTC
Jennie W, Sheila,
On Wednesday, Sheila and I compared the HAM6 throughput measurement taken in April 2024, in which we stepped the DARM offset and compared the DARM-sensitive power at the AS port with the power on the DCPDs, against the HAM6 efficiency for this light predicted from the known losses in the squeezer noise budget.
We found that combining these two results gives a prediction of 12.2% unknown losses, which roughly matches the 10.4% unknown losses predicted from squeezing measurements (alog 82097).
HAM6 throughput measured in April 2024 from alog 79146: 80.2%
HAM6 known losses (from the Google sheet): (1-0.00072)*(1-0.015)*(1-0.0096)*(1-0.044)*(1-0.02) = 91.3% expected HAM6 throughput
Unknown HAM6 losses based on this measurement = 1-0.802/0.913 = 12.2%
Yesterday I looked at Camilla and Elenna's DARM step measurement from 21st October 2024, and found the unknown loss is 10.2% using the HAM6 throughput measured in October, which matches the 10.4% unknown losses predicted from squeezing measurements (alog 82097).
HAM6 throughput measured in October 2024 from alog 82204: 82%
Unknown HAM6 losses based on this measurement: 1-0.82/0.913 = 10.2%
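As a sanity check on the numbers above, here is a minimal Python sketch of the arithmetic (the known-loss factors and measured throughputs are simply the values quoted in this entry):

# Sketch of the HAM6 unknown-loss arithmetic above (values copied from this entry).
known_losses = [0.00072, 0.015, 0.0096, 0.044, 0.02]

expected_throughput = 1.0
for loss in known_losses:
    expected_throughput *= (1.0 - loss)  # ~0.913 expected HAM6 throughput from known losses

def unknown_loss(measured_throughput):
    # Fraction of light lost beyond the known losses.
    return 1.0 - measured_throughput / expected_throughput

print(f"Expected HAM6 throughput: {expected_throughput:.1%}")
print(f"April 2024 (measured 80.2%): {unknown_loss(0.802):.1%} unknown loss")  # ~12.2%
print(f"October 2024 (measured 82%): {unknown_loss(0.82):.1%} unknown loss")   # ~10.2%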
Fri Jan 10 10:10:06 2025 INFO: Fill completed in 10min 3secs
Jordan confirmed a good fill curbside. TCmins [-92C, -90C] OAT (1C, 34F)
TITLE: 01/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY:
H1 just recently locked and started Observing 7 minutes before I walked in.
Everything looks like it is working well at first glance, except H1:PEM-CS_DUST_LAB2 still doesn't work.