Test for Thursday morning at 7.45 am, assuming we are thermalised.
conda activate labutils
python auto_darm_offset_step.py
Wait until the program has finished (~20 mins).
Turn the OMC ASC back on by putting the master gain slider back to 0.020.
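In case it's useful, here is a minimal Python sketch of the same sequence. It only assumes what is written in the steps above (the labutils conda environment, auto_darm_offset_step.py in the working directory, and conda on the PATH); the printed reminder at the end is just that, a reminder.

# Minimal sketch of the test sequence above; assumes the labutils conda
# environment exists and auto_darm_offset_step.py is in the working directory.
import subprocess, time

start = time.time()
# Run the DARM offset step script inside the labutils environment (~20 mins).
subprocess.run(
    ["conda", "run", "-n", "labutils", "python", "auto_darm_offset_step.py"],
    check=True,
)
print(f"auto_darm_offset_step.py finished after {(time.time() - start) / 60:.1f} min")
print("Now turn OMC ASC back on: set the master gain slider back to 0.020.")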
[Fil, Erik]
Hardware watchdog at EX started its countdown when disconnected from either satellite amp.
TITLE: 09/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: One lockloss from the fire alarm, easy relock. We've been locked for just over 2 hours.
LOG: No log.
There was a small LVEA temperature excursion from AHU2 not restarting after the fire alarm. It looks mostly leveled out now.
01:30 UTC Lockloss when the fire alarm went off. It was reported as a "trouble" alarm; I called Richard and then reset the alarm on the panel in the CUR.
03:03 UTC Observing
TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Observing at 153 Mpc and have been Locked for over 1.5 hours. Relocking after today's lockloss went fine aside from needing to touch up ALSX and ALSY a bit. Weirdly, the first two times ALSX locked and turned on its WFS, after a bit the WFS pulled the alignment away and caused ALSX to unlock. I set everything up so I could turn the WFS off the next time, but then there was no problem.
LOG:
14:30 UTC Relocking and in DRMI_LOCKED_CHECK_ASC
15:09 NOMINAL_LOW_NOISE
15:11 Observing
18:59 Superevent S250901cb
19:38 Earthquake mode activated - I moved us into earthquake mode because of some ground motion that USGS hadn't seen yet
19:46 Back to CALM
20:07 Lockloss
- ALSX and Y needed my help for locking
- Twice ALSX WFS pulled ALSX away and unlocked it
21:54 NOMINAL_LOW_NOISE
21:55 Observing
Back on July 28th, we doubled the bias on ETMX in an effort to survive more DARM Glitch locklosses (previously referred to as ETMX Glitch locklosses) (86027). Now that we've been at this new bias for a month, I have made some visuals to compare the number of locklosses from low noise before and after the change. We wanted to stay at this bias for at least a few weeks because at times throughout O4 we've had stretches of a couple of weeks where we barely had any locklosses caused by DARM Glitch, and we wanted to make sure we weren't just entering one of those stretches.
TLDR;
In August we spent more time locked, with longer locks and fewer locklosses. The difference between August and the other months of O4 is drastic, and to me the plots make it look like this is due to the bias change on ETMX.
Not TLDR:
A bird's-eye view:
O4 All Locklosses vs DARM Glitch Locklosses
I've posted similar versions of this figure before; it gives a visual representation of the number of locklosses we've had during O4 that have been attributed to DARM Glitch versus every other lockloss. From this plot, you can see that since we doubled the ETMX bias, we have seen fewer ETMX glitch locklosses, and fewer locklosses in general!
A more in-depth examination:
I've decided to compare the month of August to the other full months of Observing we've had in O4c: February, March, June, and July. April had only a couple of days of Observing, and in May we were fully venting. The important thing to note here, though, is that in June we started Observing 5 days into the month and there was a lot of commissioning going on for the rest of the month, so the June data points aren't the best comparison.
O4 Lock Length Distribution by Month
The x-axis shows the number of hours we had been locked at each NLN lockloss during the month, and the y-axis shows how many locklosses occurred after that lock length. Each month's plot has the same x- and y-axis limits, and on the right side of each plot is the total number of locklosses from low noise for the month as well as the average lock length.
You can see the distributions for February, March, June, and July all look pretty similar, with the majority of locklosses happening after 10 hours or less of being locked, and the average lock length for those four months is around 5.5 hours. February, March, and July also all have a similar number of locklosses, while June has about a quarter fewer, partially due to commissioning at the beginning of the month and partially for unknown reasons.
August, however, is completely different. The distribution of lock lengths is much wider, and not just by one or two longer locks: in August there were 9 locks that were longer than any during the other four months. The distribution for the shorter locks is also much flatter. This results in an average lock length of 13.3 hours, more than double that of the other months. There were also approximately half as many locklosses during August, which is a drastic drop; even compared to June we had far fewer locklosses.
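For anyone who wants to remake this kind of figure, here is a minimal numpy/matplotlib sketch of how the per-month histograms could be built. The lock-length lists, the bins, and the axis limits are made-up placeholders, not the values behind the attached plots.

# Minimal sketch of the per-month lock length histograms (placeholder data,
# bins, and axis limits; not the exact values used for the attached figure).
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical lock lengths (hours) at each NLN lockloss, keyed by month.
lock_lengths = {
    "February": [2.1, 5.0, 7.3, 1.2],
    "March": [4.4, 6.1, 0.8],
    "June": [3.3, 9.0],
    "July": [5.5, 2.2, 8.8],
    "August": [20.1, 33.5, 12.0],
}

bins = np.arange(0, 50, 2)  # 2-hour bins out to 50 hours
fig, axes = plt.subplots(len(lock_lengths), 1, sharex=True, sharey=True, figsize=(6, 10))

for ax, (month, lengths) in zip(axes, lock_lengths.items()):
    ax.hist(lengths, bins=bins)
    ax.set_title(month)
    # Annotate total locklosses and average lock length on the right side of each panel.
    ax.text(0.98, 0.8, f"N = {len(lengths)}\navg = {np.mean(lengths):.1f} hr",
            transform=ax.transAxes, ha="right")

axes[-1].set_xlabel("Lock length at lockloss [hours]")
fig.supylabel("Number of locklosses")
plt.tight_layout()
plt.show()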
Lockloss Stats per Month for O4c
Here's a table I made with some more lock and lockloss stats: the number of days in that month, the number of locklosses during that period, the number of those locklosses attributed to DARM Glitches, the average lock length, and the total time spent locked during that period. The new info here is the total time locked: in August we spent about 25 days locked, which is 6 days more than the next-highest month, July, and is a big jump compared to the total locked time in February, March, and July.
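As a rough sketch of how the table quantities could be computed from a list of lock segments, something like the following would work; the segments and the DARM Glitch flags below are hypothetical stand-ins, not output from the actual lockloss tools.

# Minimal sketch of the per-month stats in the table (days in period, locklosses,
# DARM Glitch locklosses, average lock length, total time locked).
# The lock segments and tags below are hypothetical stand-ins for the real data.
from datetime import datetime

locks = [
    # (lock start, lock end, lockloss tagged as DARM Glitch?)
    (datetime(2025, 8, 1, 2, 0), datetime(2025, 8, 1, 20, 30), False),
    (datetime(2025, 8, 2, 1, 15), datetime(2025, 8, 2, 6, 45), True),
]
period_start, period_end = datetime(2025, 8, 1), datetime(2025, 9, 1)

days_in_period = (period_end - period_start).days
lock_hours = [(end - start).total_seconds() / 3600 for start, end, _ in locks]
n_locklosses = len(locks)
n_darm_glitch = sum(1 for *_, glitch in locks if glitch)
avg_lock_length = sum(lock_hours) / n_locklosses
total_locked_days = sum(lock_hours) / 24

print(f"Days in period:         {days_in_period}")
print(f"Locklosses:             {n_locklosses}")
print(f"DARM Glitch locklosses: {n_darm_glitch}")
print(f"Average lock length:    {avg_lock_length:.1f} hr")
print(f"Total time locked:      {total_locked_days:.1f} days")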
TITLE: 09/01 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Link to report here.
Summary:
Lockloss at 2025-09-01 20:07 UTC after almost 5 hours locked
21:55 UTC Observing
Mon Sep 01 10:08:48 2025 INFO: Fill completed in 8min 44secs
TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Currently Relocking and in DRMI_LOCKED_CHECK_ASC
Two locklosses last night:
2025-09-01 06:55 UTC - There is a possible 1.6 Hz MICH ringup starting two seconds before the lockloss. Tagging ISC
2025-09-01 11:11 UTC - Earthquake. Seems like it was pretty large and local, so unsure if we would have survived it even if someone had used the ASC Hi Gains state
15:11 UTC Observing
TITLE: 09/01 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: One lockloss with an easy recovery; YARM just took a little while. We've been locked for 45 minutes.
LOG: No log
04:19 UTC Observing
Lockloss 02:54 UTC
04:19 UTC Observing
TITLE: 08/31 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently Observing at 158 Mpc and have been Locked for 5 hours. The lockloss was fast and easy to recover from - the IFO just needed my help grabbing ALSY for the same reason we've been seeing for a while: SCAN_ALIGNMENT had taken it to a good-looking spot (above 1), but it wasn't high enough to catch. A few taps were all that was needed.
We had another test of using the ASC Hi Gains button! During the earthquake, peakmon started creeping up above 500, and picket fence was showing multiple stations in yellow or orange. I tried to hold off on leaving Observing as long as possible since the ground motion wasn't that bad yet, and I eventually started the transition to ASC Hi Gain when I saw peakmon hit 900. But peakmon kept climbing, and by the time we started transitioning over we were above 1200 and increasing quickly. Luckily we didn't lose lock, but we were close, so I definitely should've transitioned over sooner! This earthquake went up to 2200 on peakmon. Afterwards, the transition back went smoothly.
LOG:
14:30 UTC Observing and Locked for almost 3 hours
17:17 Lockloss
- Had to adjust ALSY a bit for it to be able to catch
18:30 NOMINAL_LOW_NOISE
18:33 Observing
20:06 Earthquake mode activated
20:17 I took us out of Observing and selected ASC Hi Gain to try and ride out the earthquake
20:34 Back to regular NLN gains and back into Observing
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
18:25 | PEM | Robert | LVEA | n | Carefully taking photos for Virgo | 18:30 |
TITLE: 08/31 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 7mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Test for Thursday morning at 7.45 am, assuming we are thermalised; I rewrote the instructions above and took out the last part.
conda activate labutils
python auto_darm_offset_step.py
Wait until the program has finished (~15 mins).
Turn the OMC ASC back on by putting the master gain slider back to 0.020.
Commissioners will turn off the OMC ASC and close the beam diverter once heating has finished, then do the DARM offset step and other tests, before turning the ASC back on and opening the beam diverter ahead of cooling OM2 back down.
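Purely as an ordering sketch of that sequence (not something to run as-is): the EPICS channel names below are placeholders I've made up, since the OMC ASC and beam diverter steps are normally done from the MEDM screens; only the ordering is taken from the plan above.

# Ordering sketch only: the channel names are placeholders, not the real ones.
import subprocess
from epics import caput  # pyepics

caput("H1:OMC-ASC_MASTER_GAIN", 0.0)      # placeholder: turn OMC ASC off
caput("H1:SYS-MOTION_C_BDIV_A_CLOSE", 1)  # placeholder: close the beam diverter

# DARM offset step and other tests while OM2 is still hot.
subprocess.run(
    ["conda", "run", "-n", "labutils", "python", "auto_darm_offset_step.py"],
    check=True,
)

caput("H1:OMC-ASC_MASTER_GAIN", 0.020)    # placeholder: turn OMC ASC back on
caput("H1:SYS-MOTION_C_BDIV_A_OPEN", 1)   # placeholder: open the beam diverter
# then cool OM2 back down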