Thu Nov 13 10:13:43 2025 INFO: Fill completed in 13min 39secs
Gerardo confirmed a good fill curbside.
[Joan-Rene Merou, Alicia Calafat, Sheila Dwyer, Jenne Driggers] We entered the LVEA and went to the Beer Garden. There, we turned off the Low Voltage ITM ESD Driver D1600092, first the 15V switch and then the medium voltage switch. To turn it back on, the switches should be re-enabled in the opposite order. With the voltage request set to 0 and the chassis powered off, we unplugged the SHV cables going to the chamber and plugged them into Robert's ground boxes, which we used to ground them to the rack, which is in turn grounded to the chamber. This was done at both drivers (see attached photos). Afterwards, we changed the code in /opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py so that LOWNOISE_COIL_DRIVERS will go to LOWNOISE_ESD_ETMY instead of TRANSITION_FROM_ETMX. This was done by editing lines 6670 and 6674, moving the ", 15" step from line 6670 to line 6674. Finally, we communicated the change to the operator and loaded the guardian.
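For readers unfamiliar with how the lock-acquisition path is steered, the sketch below illustrates the kind of change described: moving the ", 15" step so that LOWNOISE_COIL_DRIVERS hands off to LOWNOISE_ESD_ETMY rather than TRANSITION_FROM_ETMX. This is purely illustrative; the tuple structure, the helper function, and the placement of the 15 are assumptions and do not reproduce the actual ISC_LOCK.py code.

```python
# Illustrative only; this is NOT the real ISC_LOCK.py structure. It just shows the
# effect of moving the ", 15" step so LOWNOISE_COIL_DRIVERS routes to LOWNOISE_ESD_ETMY.

# Before the change (commented out): the weighted step pointed at TRANSITION_FROM_ETMX.
# LOCK_PATH = [
#     ("LOWNOISE_COIL_DRIVERS", "TRANSITION_FROM_ETMX", 15),
#     ("TRANSITION_FROM_ETMX", "LOWNOISE_ESD_ETMY"),
# ]

# After the change: the ", 15" step now sends LOWNOISE_COIL_DRIVERS to LOWNOISE_ESD_ETMY.
LOCK_PATH = [
    ("LOWNOISE_COIL_DRIVERS", "LOWNOISE_ESD_ETMY", 15),
]

def next_state(path, state):
    """Return the state that follows `state` in the lock path, or None if terminal."""
    for edge in path:
        if edge[0] == state:
            return edge[1]
    return None

if __name__ == "__main__":
    print(next_state(LOCK_PATH, "LOWNOISE_COIL_DRIVERS"))  # -> LOWNOISE_ESD_ETMY
```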
It appears that the grounding did not decrease the amplitude of the combs. As seen in the attached figure, the relative amplitude of the first harmonics of the combs remains mostly the same before and after the change on November 13th.
EPO-Tagging for photo of ESD work
M. Todd, S. Dwyer
We wanted to get a couple of measurements of the OMC mismatch with the SQZ beam, changing only one of the ZMs at a time away from its nominal setting. OM2 was cold for all of these measurements.
| Measurement | Time [GPS] | OMC Mismatch [%] |
|---|---|---|
| ZM4 = 6.2, ZM5 = -0.4 (nominal) | 1447088389 | 2.8 |
| ZM4 = 4, ZM5 = -0.4 | 1447088919 | 2.4 |
| ZM4 = 6.2, ZM5 = -4.5 | 1447089464 | 10.5 |
It seems that ZM4 is not a very strong actuator for changing the mode at the OMC.
During PRC Align, the IMC unlocked and couldn't relock because the FSS was oscillating a lot - PZT MON showed it moving all over the place, and I couldn't even take the IMC to OFFLINE or DOWN because the PSL ready check was failing. To try to fix the oscillation, I turned off the autolock in the Loop Automation section of the FSS screen, then re-enabled it after a few seconds; after that we were able to go to DOWN fine, and I was able to relock the IMC.
TJ said this has happened to him and to a couple other operators recently.
Took a look at this, see attached trends. What happened here is the FSS autolocker got stuck between states 2 and 3 due to the oscillation. The autolocker is programmed so that, if it detects an oscillation, it jumps immediately back to State 2 to lower the common gain and ramp it back up, hopefully clearing the oscillation. It does this via a scalar multiplier of the FSS common gain that ranges from 0 to 1, which ramps the gain from 0 dB to its previous value (15 dB in this case); it does not touch the gain slider, it does it all in a block of C code called by the front end model. The problem here is that 0 dB is not generally low enough to clear the oscillation, so it gets stuck in this State 2/State 3 loop and has a very hard time getting out of it. This is seen in the lower-left plot of H1:PSL-FSS_AUTOLOCK_STATE: it never gets to State 4 but continuously bounces between States 2 and 3; the autolocker does not lower the common gain slider, as seen in the center-left plot. If this happens, turning the autolocker off then on again is most definitely the correct course of action.
We have an FSS guardian node that also raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10 dB, which is low enough to clear the majority of oscillations. So why not use this during lock acquisition? When an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker will fight each other. This conflict makes lock acquisition take much longer, several tens of minutes, so the guardian node is not engaged during RefCav lock acquisition.
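A minimal sketch of the difference, using the numbers quoted above (15 dB slider value, -10 dB slider minimum); the linear shape of the ramp and the variable names are assumptions, not the real autolocker or guardian code:

```python
# Minimal sketch: compare the autolocker's gain ramp floor (0 dB) with the
# guardian node's slider minimum (-10 dB).

SLIDER_GAIN_DB = 15.0           # FSS common gain slider value at the time (from the log)
GUARDIAN_SLIDER_MIN_DB = -10.0  # slider minimum the guardian node can reach (from the log)

def autolocker_gain_db(scalar, slider_db=SLIDER_GAIN_DB):
    """Effective common gain while the autolocker ramps its 0-to-1 scalar multiplier."""
    return scalar * slider_db   # floor is 0 dB, which is often not low enough

if __name__ == "__main__":
    for s in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"autolocker scalar {s:0.2f} -> {autolocker_gain_db(s):5.1f} dB")
    print(f"guardian slider minimum      -> {GUARDIAN_SLIDER_MIN_DB:5.1f} dB")
```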
Talking with TJ this morning, he asked if the FSS guardian node could handle the autolocker off/on if/when it gets stuck in this State 2/State 3 loop. On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing this. For OPS: In the interim, if this happens again please do not wait for the oscillation to clear on its own. If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom. If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.
Slight correction to the above. The autolocker did not get stuck between states 2 and 3, as there is no path from state 3 to state 2 in the code. What's happening is the autolocker goes into state 4, detects an oscillation, then immediately jumps back to state 2; so this is a loop from states 2 -> 3 -> 4 -> 2 due to the oscillation and the inability of the autolocker gain ramp to effectively clear it. This happens at the clock speed of the FSS front end computer, while the channel that monitors the autolocker state is only a 16 Hz channel, so the monitor channel is nowhere close to fast enough to see all of the state changes the code goes through during an oscillation.
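For reference, a rough sketch of how the proposed guardian-side handling might look, along the lines of the manual fix described above. The enable channel name, the poll/window values, the "stuck" heuristic, and the dict-like ezca accessor usage are all assumptions, not an actual implementation; only H1:PSL-FSS_AUTOLOCK_STATE is a channel quoted in the log (shown here without the IFO prefix).

```python
# Rough sketch of a guardian-side check-and-cycle for a stuck FSS autolocker.
import time

AUTOLOCK_STATE = "PSL-FSS_AUTOLOCK_STATE"    # 16 Hz monitor of the autolocker state (real channel)
AUTOLOCK_ENABLE = "PSL-FSS_AUTOLOCK_ENABLE"  # hypothetical autolocker on/off channel

def autolocker_looks_stuck(ezca, window=10.0, poll=0.25):
    """Heuristic: over `window` seconds the monitor never shows state 4 or above,
    i.e. the autolocker keeps falling back into the 2 -> 3 -> 4 -> 2 loop."""
    t_end = time.time() + window
    while time.time() < t_end:
        if ezca[AUTOLOCK_STATE] >= 4:
            return False
        time.sleep(poll)
    return True

def cycle_autolocker(ezca, off_time=2.0):
    """The manual fix described above: autolocker off, wait a little, back on."""
    ezca[AUTOLOCK_ENABLE] = 0
    time.sleep(off_time)
    ezca[AUTOLOCK_ENABLE] = 1
```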
VACSTAT detected a vacuum glitch at 03:00:24 this morning originating at PT427 (EY beam tube ion pump station, about 1000 feet from EY). The vacuum pressure rapidly increased from 1.4e-09 to 1.4e-07 Torr, then rapidly pumped back down to nominal in 8 seconds. The glitch was detected soon afterwards by all of the gauges in EY; they only increased from around 1.0e-09 to 2.0e-09 Torr and took around 25 minutes to pump down.
The glitch was seen at MY at a much reduced amplitude about 6 minutes after the event.
H1 was not locked at the time.
TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 1.22 μm/s
QUICK SUMMARY:
Currently in DOWN due to excessively high secondary microseism. We're supposed to have calibration measurements and commissioning today. I'll try for a bit to get us back up, but I doubt we'll get past DRMI since the microseism is worse now than it was last night for Ryan or TJ.
I restarted VACSTAT at 08:18 to clear its alarm. Tyler resolved the RO alarm at 06:17 and now the CDS ALARM is GREEN again.
The useism continues to grow. I'll keep the IFO in down and see where things are at again in a few hours.
TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ
SHIFT SUMMARY: H1 was happily observing until mid-shift when the microseism just got too high and caused a lockloss. Haven't been able to make it past DRMI since then due to the ground motion. I'm leaving H1 in 'DOWN' since it's not having any success, but TJ says he'll check on things overnight.
LOG:
Lockloss @ 02:44 UTC - link to lockloss tool
Possibly caused by ground motion, as everything looked to be moving quite a lot at the time, and the secondary microseism band has been rising this evening. Environment plots on the lockloss tool show a quick increase in motion at the time of the lockloss also.
TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Observing at 140 Mpc and have been Locked for over 16.5 hours. One short drop out of Observing due to the squeezer unlocking, but that relocked automatically and we've been Observing ever since. GRB-Short E617667 came in today at 21:29 UTC.
LOG:
15:30UTC Observing and Locked for over 7.5 hours
15:52 Out of Observing due to SQZ unlock
15:56 Back into Observing
21:29 GRB-Short E617667
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:00 | FAC | Randy | YTube | n | Caulking up those joints | 22:57 |
| 17:04 | FAC | Kim | MX | n | Tech clean | 18:34 |
| 18:21 | | Sheila, Kar Meng | Optics Lab | y(local) | OPO prep (Sheila out 18:57) | 19:10 |
| 18:51 | VAC | Gerardo | MX | n | Looking for case | 20:07 |
| 20:09 | | Corey | Optics Lab | n | Cleaning optics | 23:33 |
| 21:26 | | Kar Meng | Optics Lab | y(local) | OPO prep | 23:28 |
| 22:01 | | RyanS | Optics Lab | n | Cleaning optics | 23:33 |
| 22:10 | | TJ | Optics Lab | n | Spying on Corey and RyanS | 22:22 |
| 22:59 | | Matt | Optics Lab | n | Grabbing wipes | 23:16 |
| 23:34 | | Matt | Prep Lab | n | Putting wipes away | 23:35 |
Tyler has the reverse osmosis water conditioning system offline overnight. The CDS alarm system has an active cell-phone bypass for this channel, which expires tomorrow afternoon. This should be the only channel in CDS ALARM.
Bypass will expire:
Thu Nov 13 05:05:22 PM PST 2025
For channel(s):
H0:FMC-CS_WS_RO_ALARM
TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.56 μm/s
QUICK SUMMARY: H1 has been locked for 16 hours and observing the whole day.
Wed Nov 12 10:08:17 2025 INFO: Fill completed in 8min 13secs
Gerardo confirmed a good fill curbside.
Closes FAMIS#27534, last checked 87787
I was supposed to do this last week but I was out, so I'm doing it now. The last time it was done was 10/28, so this is comparing to measurements from two weeks ago.
Nothing of note, everything looks very similar to how it looked a couple weeks ago.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.37 μm/s
QUICK SUMMARY:
Observing at 148 Mpc and have been Locked for over 7.5 hours. Currently in a stand-down due to Superevent S251112cm, which came in at 11/12 15:19 UTC.
TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A completely uneventful shift with H1 observing throughout. Now been locked for almost 7 hours.
As per WP 12876 I updated the run number server with a new version to allow more testing of the new frame writer (h1daqfw2), without allowing it to modify external state in CDS.
The run number server tracks channel list configuration changes in the frames. The basic idea is that the frame writers create a checksum/hash of the channel list, send it to the run number server, and get a run number to include in the frame. This is then used by the nds1 server to optimize multi-frame queries: if it knows the configuration hasn't changed, some of the structures can be re-used between frames, which has been measured at about a 30% speed-up.
This update added a new port/interface that the server listens on. It behaves a little differently: it will only return the current run number (or 0 if the hash doesn't match) and will not increment the global state, so it is safe for a test system to use.
Now the new frame writer can automatically query the run number server to get the correct value to put in the frame (previously we had been setting it via an EPICS variable). One step closer to the new frame writer being in production.
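A minimal sketch of the bookkeeping described above, under the assumption that the server keeps a single current hash and counter; the class and method names are invented for illustration and are not the actual server code. The split between the two interfaces is what lets h1daqfw2 stamp frames with the correct run number without being able to advance the production counter.

```python
# Sketch (assumed logic, not the real server): a matching channel-list hash returns the
# current run number; the normal interface bumps the run number on a new hash, while the
# new test/read-only interface returns 0 instead and never mutates state.
import hashlib

class RunNumberServer:
    def __init__(self):
        self.current_hash = None
        self.run_number = 0

    @staticmethod
    def hash_channel_list(channels):
        """Checksum of the frame's channel-list configuration."""
        return hashlib.sha256("\n".join(sorted(channels)).encode()).hexdigest()

    def get_run_number(self, channel_hash):
        """Normal interface: increment the run number when the configuration changes."""
        if channel_hash != self.current_hash:
            self.current_hash = channel_hash
            self.run_number += 1
        return self.run_number

    def peek_run_number(self, channel_hash):
        """New read-only interface: never increments, returns 0 on a hash mismatch."""
        return self.run_number if channel_hash == self.current_hash else 0
```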
WP12274
FAMIS28946
We rebooted the h1guardian1 machine today for 3 things:
All 168 nodes came back up and Erik confirmed that nds0 was seeing the traffic after the machine reboot.
Server order is set in the guardian::lho_guardian profile in puppet.