TITLE: 01/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 3mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.64 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 14:03 UTC the previous day (10 hr lock!)
Microseism is increasing and violins are elevated.
We've been locked and observing for over 7 hours; secondary microseism looks to have peaked.
Sun Jan 12 10:18:16 2025 INFO: Fill completed in 18min 13secs
TCmins [-106C, -99C] OAT (3C, 38F)
TITLE: 01/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.76 μm/s
QUICK SUMMARY:
Got called at 14:15 UTC due to assistance required. I *believe* the issue was that the NLN timer had run out, but by the time I logged on we were in Observing. We had been in OMC_WHITENING for about an hour, presumably damping violins. In total it looks like we spent almost 3.5 hours trying to relock after completing an initial alignment. The strange thing is that, looking at the H1_MANAGER log, the NLN timer ran out at 12:42 UTC, triggering H1_MANAGER to enter ASSISTANCE_REQ, but like I said, I did not get called until 14:15 UTC. There was also something weird going on with the IFO_NOTIFY guardian: when I logged on it was rapidly cycling between ALERT_ACTIVE and WAITING. The cycling looks to have started once we got into Observing, so that might just have been due to the alert no longer being needed (although I never noticed it doing that on other similar occasions, so maybe that's just me missing it).
All is good and we've been Observing for 30 minutes.
This is a bit of a conflict between H1_MANAGER and IFO_NOTIFY. Since the IFO was able to get back to Observing on its own, even after H1_MANAGER timed out and triggered the initial Assistance_Required (H1_MANAGER state) and Alert_Active (IFO_NOTIFY state), IFO_NOTIFY would clear its alert when we got back to observing, but H1_MANAGER would not. Since H1_MANAGER was still in Assistance_Required, IFO_NOTIFY would bounce between Alert_Active and Waiting. I'll add something to H1_MANAGER to also clear the alert when we get back into observing, to avoid this in the future.
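Roughly the kind of check I mean (just a sketch, not the actual H1_MANAGER code; the state name, channel name, and NLN index here are assumptions):

    # Sketch of a Guardian-style check, assuming H1_MANAGER has an
    # ASSISTANCE_REQ state and can see the ISC_LOCK state index via ezca.
    # State/channel names and the 600 index are illustrative assumptions.
    from guardian import GuardState

    NLN_INDEX = 600  # assumed nominal-low-noise index for ISC_LOCK

    class ASSISTANCE_REQ(GuardState):
        request = False

        def run(self):
            # If the IFO made it back to observing on its own, clear the
            # alert here instead of leaving IFO_NOTIFY to bounce between
            # ALERT_ACTIVE and WAITING.
            if ezca['GRD-ISC_LOCK_STATE_N'] >= NLN_INDEX:
                log('IFO recovered on its own; clearing assistance request')
                return 'NOMINAL'  # jump back to the nominal monitoring state
            return False  # otherwise keep waiting for the operator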
I have no idea why Oli wasn't called until 14 UTC, as the states of both IFO_NOTIFY and H1_MANAGER definitely changed two hours earlier. I'll check with Dave to see if Twilio threw any errors.
Alerted by H1_MANAGER that it needed intervention. There was a large earthquake coming through that had tripped the ISIs for the ITMs and ETMs. Also, ETMY stages M0 and R0 had tripped (not yet sure if that is just due to the ISI tripping).
There was also a notification on verbals at 08:51 UTC, a minute after everything tripped, saying "ETMY hardware watchdog trip imminent", but when I checked the hardware watchdog screen everything looked normal with no countdowns. Once it looked like the worst of the earthquake had passed and the ISI values were all within range, I reset R0 and M0 for ETMY and reset all four ISI watchdogs. We are still in LARGE_EQ mode so we haven't started relocking yet, but it looks like we are close to leaving it and should be good to go for relocking.
It looks like ETMY M0 and R0 tripped because they barely went over the watchdog limit, and that this was caused by the ISI stages tripping (see ndscope). Similar to what I did in 81668, I'll raise the thresholds for M0 and R0 by a bit (table below). Thankfully it looks like the thresholds we established for the other stages were all good guesses and are still fine.
| Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold |
| M0 | 100 | 106 | 150 |
| R0 | 120 | 122 | 175 |
| L1 | 170 | 134 | 170 (unchanged) |
| L2 | 270 | 168 | 270 (unchanged) |
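For reference, bumping the thresholds is just a couple of channel writes, something like the sketch below (the BLRMS threshold channel names here are guesses and would need confirming on the SUS watchdog MEDM screen before writing anything):

    # Hypothetical sketch using pyepics; the threshold channel names below
    # are assumptions, not verified H1 SUS watchdog channels.
    from epics import caput

    new_thresholds = {
        'H1:SUS-ETMY_M0_WDMON_BLRMS_MAX': 150,  # was 100, peaked at 106
        'H1:SUS-ETMY_R0_WDMON_BLRMS_MAX': 175,  # was 120, peaked at 122
        # L1 (170) and L2 (270) left unchanged; their post-lockloss peaks
        # stayed below threshold
    }

    for channel, value in new_thresholds.items():
        caput(channel, value)
        print(channel, '->', value)

(Any change like this would presumably also show up as an SDF diff to accept.)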
TITLE: 01/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 02:36 UTC
Overall pretty quiet shift with one random lockloss (alog 82227). There doesn't seem to have been an EX glitch and the environment was stable (same microseism as last few days and low wind). We managed to get back up to NLN in 1 hour, with DRMI Locking and ENGAGE_ASC_FOR_FULL_IFO taking the longest to get past.
About 1.5hrs into the lock, SQZ unlocked and took us out of OBSERVING. It relocked automatically and we were observing again 4 minutes later.
LOG:
None
Unknown cause lockloss - not environmental and no EX saturation.
TITLE: 01/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 163Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: I ran the calibration sweep this afternoon; lockloss shortly after. I tried restarting the IOC for the lab dust monitors, but it still couldn't connect to LAB2. DIAG_MAIN is reporting "ISS diff power low". We've been locked for ~2 hours.
LOG: No log.
TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 22:34 UTC
Closes FAMIS 26353. Last checked in alog 82116.
All trends look good. No fans above or near the (general) 0.7 ct threshold. Screenshots attached.
Lockloss at 20:03 UTC; we had just finished the calibration measurement 3 minutes earlier.
22:35 UTC Observing
Simulines start (note: the wind started picking up during the measurement):
PST: 2025-01-11 11:36:44.979063 PST
UTC: 2025-01-11 19:36:44.979063 UTC
GPS: 1420659422.979063
Simulines stop:
PST: 2025-01-11 12:00:27.281127 PST
UTC: 2025-01-11 20:00:27.281127 UTC
GPS: 1420660845.281127
Files:
2025-01-11 20:00:27,195 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,203 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,208 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,213 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250111T193645Z.hdf5
2025-01-11 20:00:27,218 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250111T193645Z.hdf5
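As a sanity check, the UTC/GPS stamps above can be cross-checked with astropy (just a sketch):

    # Cross-check of the logged simulines start time; astropy handles the
    # UTC-to-GPS leap-second offset.
    from astropy.time import Time

    start = Time('2025-01-11 19:36:44.979063', scale='utc')
    print(start.gps)  # ~1420659422.979, matching the GPS stamp above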
Closes FAMIS 26346
Laser Status:
NPRO output power is 1.851W
AMP1 output power is 70.19W
AMP2 output power is 137.0W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 25 days, 0 hr 12 minutes
Reflected power = 25.46W
Transmitted power = 104.6W
PowerSum = 130.0W
FSS:
It has been locked for 0 days 15 hr and 6 min
TPD[V] = 0.8127V
ISS:
The diffracted power is around 2.5%
Last saturation event was 0 days 15 hours and 7 minutes ago
Possible Issues:
PMC reflected power is high
Sat Jan 11 10:18:17 2025 INFO: Fill completed in 18min 14secs
TCmins [-110C, -107C] OAT (10C, 50F)
TITLE: 01/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 05:05 UTC
Overall calm shift with one slightly annoying LL during an OPO temp optimization. The reacquisition was slow due to the environment (high winds and an earthquake) but was ultimately fully automatic after an initial alignment. Lockloss alog 82215.
Other than this, the SQZ OPO temp had to be adjusted, so I went out of OBSERVING for 6 minutes (5:05 UTC to 5:11 UTC) and adjusted it successfully, bringing the range from ~138 to ~158 Mpc (screenshot attached).
Otherwise, the microseism seems to be coming down slowly and high winds have calmed down.
LOG:
None
Unknown cause lockloss but reasonable to assume it was either microseism or squeeze related (though I'm unsure if SQZ can cause a LL like that).
The lockloss happened as I was adjusting the SQZ OPO temperature, since over the last hour there had been a steady decline in the H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT channel. After noticing this, I went out of OBSERVING and into CORRECTIVE MAINTENANCE to fix it. It was actually working (the channel signal was getting higher with each tap) when the LL happened. This coincided with a 5.2 EQ in Ethiopia, which I do not think was the cause but may have moved things while in this high-microseism state. I've attached the trends from the SQZ overview and OPO temp, including the improvements made. If it is possible to induce a LL by changing the OPO temp, then this is likely what happened, though the LL did not happen while the temp was actually being adjusted, as the 3rd screenshot shows.
It seems that the microseism, the earthquake, and the SQZ issues also coincided with some 34 mph gusts, so I will conclude that it was mostly environmental.
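For anyone wanting to reproduce the trend, the RF6 decline can be pulled with gwpy, something like the sketch below (the GPS time is a placeholder, not the actual lockloss time):

    # Sketch for pulling the hour of H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT before
    # the lockloss; substitute the real lockloss GPS time.
    from gwpy.timeseries import TimeSeries

    lockloss_gps = 1420500000  # placeholder GPS time, not the actual lockloss
    data = TimeSeries.get('H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT',
                          lockloss_gps - 3600, lockloss_gps)
    plot = data.plot()
    plot.show()  # the steady decline should be visible over the last hour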
Short update on locking: after a very slow initial alignment, locking took some time to get through DRMI but managed after going to PRMI (and losing lock due to the BS in between). We are now sitting at DARM_OFFSET, but signals have not converged after 10 minutes due to a passing 5.0 from Guatemala (which I believe we will survive).
As Ibrahim said, the OPO temp adjustment would not cause a lockloss.
However, we can see that at this time on Friday, and two days before, the SQZ angle servo and ASC seem to get into a strange ~13 minute oscillation when the OPO temperature is bad and the SQZ angle is around 220 deg. See attached plots. We are not sure why this is. Now that we are a week on from the OPO crystal move (82134), the OPO temperature is becoming more stable, but it will still need adjusting for the next ~week.