Lockloss @ 19:35 UTC after 10.5 hrs locked - link to lockloss tool
No obvious cause, but this lockloss was fast (much like others of late).
H1 dropped observing from 18:30 to 19:00 UTC for regularly scheduled calibration measurements, which ran without issue. A screenshot of the calibration monitor medm and the calibration report are attached.
Broadband runtime: 18:30:45 to 18:35:54 UTC
Simulines runtime: 18:36:41 to 18:59:56 UTC
We had to rerun the report to account for pro-spring in the model. Calibration looks better now -- sensing model is within 2% above 20 Hz and 5% below 20 Hz, report attached. I also updated the .ini file to now account for the pro-spring behavior.
More detailed steps:
- Set is_pro_spring to True in the pydarm_H1.ini of report 20250614T183642Z.
- Regenerated report 20250614T183642Z (in terminal, ran: pydarm report --regen --skip-gds 20250614T183642Z).
- Copied the pydarm_H1.ini file at /ligo/groups/cal/H1/ifo to pydarm_H1.ini.250610 to save the previous configuration.
- In /ligo/groups/cal/H1/ifo/pydarm_H1.ini, set is_pro_spring to True.
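For reference, here is a minimal sketch of that workflow, not the exact edits made (the ini file was edited by hand); the [sensing] section placement of is_pro_spring is my assumption, and only the pydarm command quoted above is taken from this entry:

```python
# Sketch only: back up the active ini, flip the pro-spring flag, regenerate the report.
import shutil
import subprocess
import configparser

ini_path = '/ligo/groups/cal/H1/ifo/pydarm_H1.ini'
shutil.copy2(ini_path, ini_path + '.250610')           # keep the previous configuration

cfg = configparser.ConfigParser(interpolation=None)
cfg.read(ini_path)
cfg.set('sensing', 'is_pro_spring', 'True')            # assumed section/key placement
with open(ini_path, 'w') as f:
    cfg.write(f)

# Regenerate the calibration report without pushing to GDS (command quoted above)
subprocess.run(['pydarm', 'report', '--regen', '--skip-gds', '20250614T183642Z'], check=True)
```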
Sat Jun 14 10:09:44 2025 INFO: Fill completed in 9min 40secs
TITLE: 06/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 7mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for 5.5 hours. Calibration measurements planned for this morning at 18:30 UTC.
Before going back to bed, I wanted to check violins and noticed that ITMx MODE13 was ringing up again. I'm guessing the settings RyanC had going were not being used. So I put in the settings RyanC had 2 hrs ago for the last lock, and within 2 min they quickly damped out the rung-up MODE13.
I have NOT made a change/updated lscparams.
NEW Settings:
ITMx MODE13: FM1 + FM4 + FM10 + gain = -0.2
OLD Settings:
ITMx MODE13: FM1 + FM2 + FM4 + FM10 + gain = 0.0
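For anyone repeating this by hand, a hedged sketch of how the new settings could be applied with ezca; the MODE13 filter-bank name is my assumption based on the usual SUS L2 damping-path naming, and in a guardian context the ezca object is already provided:

```python
from ezca import Ezca

ezca = Ezca()  # IFO prefix taken from the environment; guardian normally supplies this object

# Assumed filter bank for the ITMx violin mode 13 damping path
bank = 'SUS-ITMX_L2_DAMP_MODE13'
ezca.switch(bank, 'FM2', 'OFF')                 # FM2 was in the old settings, not the new ones
ezca.switch(bank, 'FM1', 'FM4', 'FM10', 'ON')   # filters listed above
ezca[bank + '_GAIN'] = -0.2                     # new damping gain
```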
OK, going back to bed. Sleep...we'll see.
These modes came up before; I'm hoping I am addressing them correctly this time! :)
Once these were CONFIRMED, H1 was automatically taken to OBSERVING at 09:20 UTC.
(RyanC, CoreyG)
Got a Wake Up Call at 12:11am PDT, but I was sort of already awake. RyanC was up after his shift and we were both watching H1 remotely. He had been battling H1 most of his shift, made headway toward the end, and then handed off info on how things were going (the winds started dying down around 9pm-ish). A few things he did for H1:
Now that I'm sort of awake, I'm wondering why the Earth was playing with us with that earthquake which literally caught us by surprise seconds after we made it to Observing last night.
Although there were some Alaska quakes around the time of the lockloss, they were under Mag2.6.
So I'm assuming it was the Mag5.0 off the coast of Chile, which was roughly 30 min before the lockloss. I guess that's fine, but why were there no Verbal notifications of an earthquake? Why didn't SEI_ENV transition from CALM to EARTHQUAKE?
Looking at the seismic BLRMS, the last time SEI_ENV transitioned to EARTHQUAKE was about 12 hrs ago at 0352 UTC (see attached screenshot) during RyanC's shift (but he was just getting done dealing with winds at that time, so H1 was down anyway). But after that EQ there were a few more earthquakes, which were smaller than the 0352 one, but not by much, and certainly big enough to knock H1 out at 0725 with the Chilean coast earthquake. Perhaps it was a unique EQ because it was off the Pacific coast, albeit the South American coast.
Just seems like H1 should have been able to handle this pesky measly Mag5.0 EQ that the Earth taunted us with after a rough night---literally seconds after we had hit the OBSERVING button! :-/
TITLE: 06/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: A quaternary of issues, it seems: environmental (wind and an earthquake), hardware, and software. The new DAMP_BOUNCE_ROLL state seems to be killing the lock every time it engages, so I've commented it out.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD ദ്ദി(⎚_⎚) | 05:18 |
00:46 | OMC | Keita | LVEA | Y | Investigate OMC PZT | 00:56 |
I checked the screenshots when I got home and saw that we had been sitting in DAMP_BOUNCE_ROLL for ~20 minutes with the state completed, so I hopped on and requested it to move on to NLN. I'm not sure why H1_MANAGER wasn't moving it on, as the REQUEST was set to NLN as it should have been.
ITMX13 decided it didn't want to damp again, so I had to find some new settings. FM1 + FM4 + FM10 G = -0.2 seems to be working. I've set its gain to zero in lscparams in the meantime and reloaded the node.
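For context, here is a purely hypothetical illustration of the kind of per-mode entry that gets zeroed out until good settings are found; the actual dictionary name and layout in lscparams.py are different, this is only meant to show the idea:

```python
# Hypothetical structure for illustration only -- not the actual lscparams.py contents.
violin_damp_settings = {
    'ITMX': {
        'MODE13': {'FMs': ['FM1', 'FM4', 'FM10'], 'gain': 0.0},  # gain held at zero for now
    },
}
```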
Ryan had a hard time locking the OMC, and there were no DCPD_SUM spikes as he moved the PZT offset manually. We saw nothing on the OMC trans camera either.
I found that the OMC PZT2 monitor dropped to zero-ish at 14:23 PDT (21:23 UTC). That coincides with vacuum-related activity for HAM1, not sure if they are related.
In the mezzanine I found that the output of the HV driver was zero.
Pressing VSET and ISET, I saw that the driver was set up for 110V, 80mA. Pressing RECALL -> ENTER didn't do anything. I also noticed at this point that the unit was somehow in CC (constant current) mode, which is usually automatically determined by the power supply; it should be in CV mode.
I turned the output off, asked Ryan to turn the PZT offset to zero (which means the middle of 0-100V range, i.e. 50V, so I should have asked -50V offset), power cycled the unit just because, pressed VSET again (it was still 110V), pressed ENTER, turned the output ON, and it started working again.
Ryan moved the PZT offset and the HV monitor responded. Shortly after this the IFO lost lock but I don't think that was related to the HV.
Corey, Craig and I had the exact same issue 2 weeks ago.
This is twice in a few weeks. Either we have a PZT drawing too much current or a power supply failing. We will swap the power supply Tuesday.
Jordan, Janos: Today at ~14:00, after a lockloss, we valved out the annulus Pfeiffer aux carts. We let them run until both AIPs turned over. This happened first with the HAM1 AIP (Noble diode), which, despite the higher gas load, turned over earlier. At ~16:30 the HAM2 AIP (StarCell) also turned over, so at ~16:45 I stopped the Pfeiffer carts, thereby eliminating the 2 biggest noise sources. Also, today at ~14:00 we valved in the main IP (IP13), which brought the pressure down to ~5.3E-7 Torr, and it stabilizes at ~5.8E-7 Torr. The main turbo's backing cart is still on (standing on vibration damping pads); we are planning to valve out the turbo early next week.
TITLE: 06/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 11mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
We've lost lock at POWER_10Ws twice in a row, within 5 seconds of entering the state. I'm worried about how rung up the violins will be now, as they looked large right before the 2nd lockloss. I'm going to stop at CHECK_VIOLINS on my way up now. Both locklosses tag ADS_EXCURSION.
Since we've been seeing the ETMY roll mode consistently ringing up over the start of lock stretches and that it can cause locklosses after long enough, Sheila modified the 'DAMP_BOUNCE' [502] state of ISC_LOCK to now engage damping of this mode with a gain of 40. The state has also been renamed to 'DAMP_BOUNCE_ROLL'. I have accepted the gain of 40 and timeramp of 5 sec in the OBSERVE.snap table of h1susetmy and only the timeramp in the SAFE.snap table (screenshots attached; we had originally set the gain at 30 but then updated it to 40, which I forgot to take a screenshot of).
We are still unsure as to why this roll mode has been ringing up since the vent, but so far Elenna has ruled out the SRCL feedforward and theorizes it could be from ASC, specifically CHARD_P (see alog84982 and comments).
I think this is causing us locklosses: twice we've lost lock in this state as it turned on while I slowly stepped through the states, and twice we've lost it a few seconds into POWER_10Ws when GRD was moving automatically. I reduced the gain from 40 to 30 (SVN committed and reloaded ISC_LOCK; I had to first commit the DAMP_BOUNCE_ROLL state edits) and doubled the tramp to 10 (SDFed in SAFE).
The reduced gain and increased tramp didn't stop it from killing the lock, as soon as it engaged we lost lock. I've commented it out from ISC_LOCK - line 3937.
I think the BOUNCE_ROLL channel was mistyped in ISC_LOCK: the line is ezca['SUS-ETMY_M0_DAMP_R_GAIN'] = 40 where it should be ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 40? I should have noticed this earlier.
I edited the channel in ISC_LOCK to add "DARM_" but I did not get a chance to reload before we went into Observing.
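For clarity, a hedged sketch of what the corrected lines could look like once reloaded; only the GAIN channel name and the gain/ramp values are from this entry, the TRAMP channel name is inferred by analogy, and in ISC_LOCK the ezca object is provided by guardian:

```python
from ezca import Ezca

ezca = Ezca()  # shown here only to make the sketch self-contained

# Corrected channel with "DARM_" included, using the reduced gain and doubled ramp noted above
ezca['SUS-ETMY_M0_DARM_DAMP_R_TRAMP'] = 10  # ramp time in seconds (was 5)
ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 30   # roll-mode damping gain (was 40)
```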
Back to observing at 21:08 UTC. Ran an alignment where I touched up PRM by-hand, then lock acquisition was fully automatic with no SDF diffs.
Lockloss could have been the smallest of ETMX glitches? (attached, from lockloss tool)