Lockloss @ 03:25 UTC - link to lockloss tool
The NPRO/FSS had been glitching with high frequency for about a minute prior to the lockloss, so I feel confident saying it's the cause here.
H1 back to observing at 04:23 UTC. Fully automatic relock.
Lockloss @ 01:09 UTC - link to lockloss tool
No obvious cause; doesn't look like the FSS, and I don't see evidence of an ETM glitch or DARM wiggle.
H1 returned to observing at 02:18 UTC after a fully automatic relock.
TITLE: 10/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 21:31 UTC
Mostly quiet shift with one fussy lock acquisition.
Lock acquisition had no major problems, just a slew of small annoyances (unknown-cause lockloss, alog 80773):
Other:
LOG:
None
TITLE: 10/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 16mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY: H1 has been observing for about 1.5 hours.
Closes FAMIS 26307. Last checked in alog 80649. The only new difference from last week's check is a service mode error ("Service mode error, see SYSSTAT.adl").
Laser Status:
NPRO output power is 1.827W (nominal ~2W)
AMP1 output power is 64.44W (nominal ~70W)
AMP2 output power is 137.4W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked for 1 day, 22 hr, 40 minutes
Reflected power = 24.82W
Transmitted power = 100.8W
PowerSum = 125.7W
FSS:
It has been locked for 0 days 0 hr and 4 min
TPD[V] = 0.8048V
ISS:
The diffracted power is around 2.4%
Last saturation event was 0 days 0 hours and 5 minutes ago
Possible Issues:
AMP1 power is low
PMC reflected power is high
Service mode error, see SYSSTAT.adl
Looks like I had left the PSL computer in service mode after work on Tuesday; fortunately, this doesn't actually affect anything operationally, so no harm done. I've now taken the computer out of service mode.
I also added 100 mL of water to the chiller, since Ibrahim had gotten the warning from Verbal today.
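As a side note, the weekly power readings above can be sanity-checked against their nominal values with a small script. This is a minimal sketch; the tolerance bands are my own illustrative assumptions based on the "nominal" notes in the status list, not official PSL alarm thresholds:

```python
# Compare this week's PSL readings against nominal bands.
# Bands are illustrative guesses from the "nominal" notes above,
# NOT official PSL alarm thresholds.
READINGS = {
    "NPRO output power [W]": (1.827, (1.8, 2.2)),     # nominal ~2 W
    "AMP1 output power [W]": (64.44, (66.5, 73.5)),   # nominal ~70 W, +/-5% assumed
    "AMP2 output power [W]": (137.4, (135.0, 140.0)), # nominal 135-140 W
}

def check(readings):
    """Return {name: 'OK' or 'CHECK'} for each reading vs. its band."""
    return {
        name: "OK" if lo <= value <= hi else "CHECK"
        for name, (value, (lo, hi)) in readings.items()
    }

for name, status in check(READINGS).items():
    print(f"{name}: {status}")
```

With these assumed bands, only AMP1 gets flagged, consistent with the "Possible Issues" list above.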
Lockloss due to unknown cause. Not PSL this time. Still investigating.
Sun Oct 20 10:13:53 2024 INFO: Fill completed in 13min 49secs
TITLE: 10/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 09:46 UTC (5 hr lock).
Range is a bit low, so I'll see what that's about.
H1 called for assistance at 09:42 UTC after the NLN timer expired; by the time I logged in 4 minutes later, we were observing (09:46 UTC). There had been a high-state lockloss at LASERNOISE_SUPPRESSION.
TITLE: 10/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Fairly quiet shift with one lockloss followed by an automatic relock. H1 has now been observing for over 2 hours.
Lockloss @ 01:32 UTC - link to lockloss tool
No single obvious cause, but there looks to be a sizeable ETMX glitch about a half second before the lockloss.
H1 back to observing at 02:47 UTC. Fully automatic relock.
TITLE: 10/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: It's been a windy day with a few short locks. Relocking has been easy; we've now been locked for just over 3 hours.
LOG: No log
TITLE: 10/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY: H1 has been locked and observing for almost 3 hours.
Ryan C is taking over for the last 4 hours, but here's what happened in the first 4:
After issues getting the IMC locked during SRC_ALIGN, we managed to automatically get to NLN, observing at 16:04 UTC.
SQZ Tuning: Dropped out of OBSERVING 17:14-17:26 UTC
Calibration Sweep: Dropped out of OBSERVING 18:34 UTC - Lockloss 19:05 UTC
Lockloss (alog 80760): FSS caused a lockloss during Simulines.
Timing glitch followup (Dave): While yesterday's EY timing glitch had no consequences, Dave said it has been glitching by quite erratic amounts lately and has recommended another power supply swap (like the recent EX one).
Fast Shutter Guardian Glitch Followup: Dave found that yesterday's guardian fast shutter malfunction was caused by a 5 s delay in data-grabbing, though why that happened is unknown. He will post an alog about this soon.
Ran the usual calibration sweep following the wiki. IFO was thermalized (locked for 2.5 hours, ~3 hours since MAX_POWER). Monitor attached. Times are in GPS.
Broadband
Start: 1413398319
End: 1413398630
Simulines
Start: 1413398808
End (Lockloss): 1413399944
No files were written due to the abort from the lockloss. Here's the error message:
2024-10-19 19:05:22,079 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2024-10-19 19:05:22,080 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
2024-10-19 19:05:22,080 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2024-10-19 19:05:22,080 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
2024-10-19 19:05:22,080 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2024-10-19 19:05:22,080 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2024-10-19 19:05:22,080 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
ICE default IO error handler doing an exit(), pid = 3195655, errno = 32
PDT: 2024-10-19 12:05:26.374801 PDT
UTC: 2024-10-19 19:05:26.374801 UTC
GPS: 1413399944.374801
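For reference, the GPS stamps in this entry can be converted to UTC with the fixed GPS-UTC leap-second offset (18 s, constant since January 2017). A minimal stdlib sketch, which ignores any future leap seconds:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 18  # leap-second offset, constant since January 2017

def gps_to_utc(gps_seconds):
    """Convert a GPS timestamp to a UTC datetime (fixed leap-second offset)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

print(gps_to_utc(1413398808))         # Simulines start: 2024-10-19 18:46:30+00:00
print(gps_to_utc(1413399944.374801))  # lockloss, matches the UTC stamp above
```

The second conversion reproduces the UTC timestamp logged above (2024-10-19 19:05:26 UTC).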
PSL caused a lockloss during the last part of the Simulines calibration.
IMC and ASC lost lock within 5ms of one another. (plot attached).
20:21 UTC Observing
Looking at more channels around this lockloss, I'm not entirely sure the FSS was at fault in this case. In a longer-term trend (starting 6 seconds before the lockloss, see screenshot), there were a handful of glitches in the FSS_FAST_MON channel, but no significant corresponding NPRO temperature or output power changes at those times, and the EOM drive had not reached the high levels we've seen in the past when this has caused locklosses. Zooming in (see other screenshot), the first of these channels to change is FSS_FAST_MON, but instead of glitching it looks to actually be moving. Soon after, there's a drop in AS_A_DC_NSUM, which I believe indicates the IFO was losing lock; 5 ms later the IMC starts to lose lock. I suppose we're at the mercy of data sampling rates here, but this may add some more context to these kinds of locklosses.