TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.36 μm/s
SHIFT SUMMARY:
Lockloss from NLN; the lockloss tool failed again.
The first channels that I was able to see deviate from their nominal operation were the ETMX L3 channels.
Screenshot attached.
Relocking notes:
Relocked after another IA that unlocked and faulted the IMC after saturating MC2. Doing a manual IA does not cause this issue.
Once past DRMI, it locked without issue until NLN, where the 1 Hz ring-up happened on ASC-INP1-P. Elenna had me use the ASC high gain button for about a minute.
Once that had properly damped the oscillation and the ASC low gain button had been used, we were able to do some Observing.
Observing reached at 00:15:01 UTC
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:50 | FAC | Randy | Mid X | No | Sealing the Beam tube | 21:59 |
| 16:31 | PEM | Robert | LVEA | N | Setting up for Shaker injections, & now shutting down shakers | 19:16 |
| 16:32 | ISS | Matt | Control Rm | N | ISS injections | 17:16 |
| 16:33 | FAC | Nellie | Mid Y | N | Technical Cleaning | 17:33 |
| 16:33 | FAC | Kim | Mid Y | N | Technical cleaning | 17:25 |
| 18:29 | CDS | Erik | EY | N | swapping fibers at the racks. | 20:29 |
| 23:50 | CAL | Tony | PCAL lab | LOCAL | Prep for end station meas tomorrow | 00:35 |
| 00:49 | CAL | Rick | Receiving | N | Pick up PSL parts | 01:49 |
| 00:52 | OPS | Ryan | Optics lab | N | Check on dust monitor | 01:02 |
TITLE: 11/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.44 μm/s
QUICK SUMMARY:
At about 1100 PST I swapped the fiber pair for Ethernet IPC testing from the MSR in the corner station to EY. The new 4k pair is connected via patch fiber 24 at EY, replacing patch fiber 33.
FAMIS 31110
The PSL was restarted on Thursday following an interlock trip, and the PS3 pump current was increased (alog87866). Since then, AMP2 power has been very slightly lower, but so has PMC reflected power. ISS diffracted power has been low the past couple of days as well.
Lockloss at 18:37 UTC. The lockloss tool seemed to have failed.
There was an unknown lockloss during commissioning.
Related Lockloss scopes attached.
Relocking notes:
Initial alignment railed MC2 during PRC align again today, twice. Perhaps this is the start of a trend?
I just ran a manual IA.
After that relocking was pretty fast.
I accepted some squeezer SDF changes to get back into Observing. Tagging SQZ.
Got back to Observing at 20:39:50 UTC.
Superevent candidate S251103f
Mon Nov 03 10:11:37 2025 INFO: Fill completed in 11min 33secs
Gerardo confirmed a good fill curbside.
TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 17mph Gusts, 12mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY:
This morning the secondary u-seism has dropped enough to lock!
Oli was woken up and had suspected some fast shutter errors, but was able to get H1 relocked!
When I walked in Oli had H1 in OMC_Whitening already!
H1 went back to Observing at 15:41 UTC.
The wind forecast for the next 24 hours looks fantastic for Observing.
The ETMX Hardware Watchdog started showing activity again around Wed 20oct2025. This does not appear to be related to any maintenance or Satellite Amp work. As the plot shows, its activity is very closely correlated with the IFO lock state; this was not the case earlier this year before it went quiet.
Got called by H1 Manager due to the fast shutter check during CHECK_AS_SHUTTERS failing. I've been looking at the trigger power and various SYS-PROTECTION channels (medm, ndscope), and it looks to me like the fast shutter test failed only because we lost lock right as it was checking. You can see the trigger power oscillating along with DRMI as DRMI is right about to fail. However, it does look like the fast shutter did not fire for this test.
So it looked like just a bad coincidence to me, BUT when I went to try opening the shutter manually via the shutter controller, it wouldn't open. I ended up using the FORCE OPEN button, which did work, and after that I was able to open and close it fine normally.
Looking back further, it seems the shutter doesn't close and open every time. On ndscope, the few times we got up to CHECK_AS_SHUTTERS over the past day, the fast shutter did not close/open for the test during that state (I thought it was supposed to be opened and closed every relock during CHECK_AS_SHUTTERS?). At 11/02 14:59 UTC it had functioned correctly, but then at the next CHECK_AS_SHUTTERS, it didn't.
I wanted to go just to CHECK_AS_SHUTTERS to see if I could confirm that the failure was coincidental and caused by a lockloss during the test. I got up to CHECK_AS_SHUTTERS and it did run the test by itself, but as the shutter closed, it caused a lockloss (ndscope). After this test, though, all of the issues on the AS_PORT_PROTECTION screen had cleared, so I went up again. I did accidentally leave the FAST_SHUTTER guardian in SHUTTER_FAILURE, though (last time it got reset by my INITing), so that probably affected the test I tried at the next CHECK_AS_SHUTTERS. When we got there, it didn't run the test. I realized the FAST_SHUTTER guardian was in the wrong state and corrected it, but the test still didn't run, yet the ISC_LOCK state marked itself as finished. I manually ran the test from the AS_PORT_PROTECTION screen, and it caused a lockloss. The test failed, and the AS_PORT_PROTECTION screen is back to having lots of red, with the fault this time saying 'NotReopening'.
Now that it's after 6am Pacific, I'll call Fil or Marc because it definitely seems like something is wrong with the fast shutter, but I'm still a bit confused as to what it is.
P.S. MC2 watchdog just tripped while trying to relock the IMC, but it's okay now
Jenne, Fil, Oli
I INITed ISC_LOCK and decided to try CHECK_AS_SHUTTERS a third time. This time, everything worked correctly (the fast shutter closed and opened) and the test passed (ndscope). I checked with Fil, who said it might have just been a logic mix-up, and Jenne, who confirmed that we should be good to power up now that the test passed. We are heading up!
TITLE: 11/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The lockloss tool does not seem to be working. I was able to lock DRMI and PRMI a few times tonight; I got as high as MAX_POWER. We are in DRMI with the flashes looking decent at the end of the shift.
LOG: No log.
03:00 UTC IA
DRMI locked within 30 seconds now
04:01 UTC lockloss at MAX_POWER, after it had finished the state
04:27 UTC DRMI_LOCKED_CHECK_ASC lockloss
TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.72 μm/s
QUICK SUMMARY:
Once again, microseism has been elevated all day. There was a very slight drop in the microseism earlier in the day, during which I was able to get up to CHECK_AS_SHUTTERS; lockloss from that state at 19:49 UTC.
Since then I have gotten past DRMI only once.
LOG:
No log
Robert, Tony
My beating shaker data from Thursday commissioning was ruined by large scattering glitches in DARM. We also saw these glitches on Friday. I found that they were coincident with brief seismic and microphone signals (Figure 1). The coupling site has to be EX because the signals appear in DARM before they reach any other station (Figure 2). I was able to determine that they could be coming from the direction of one crater-marked site on the east side of YTC, but not from a more central location that Tony and I had found. The propagation velocity of the 5-20 Hz signal is the same for both seismic and acoustic sensors, about 337 m/s, indicating that the dominant propagation is through the air rather than the ground, though it couples to the ground locally. The linear attenuation in the ground at these frequencies is much greater than it is in the air.
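To make the velocity estimate concrete: with two sensors of known separation, the arrival-time difference of the transient gives the propagation speed directly, and ~337 m/s matches sound in air rather than seismic speeds in the ground. Below is a minimal sketch of that calculation with synthetic data; the separation, sample rate, and cross-correlation approach are illustrative assumptions, not the actual PEM analysis code.

```python
# Sketch only: estimate propagation speed from the arrival-time difference
# of a transient seen by two sensors a known distance apart.
import numpy as np

def propagation_speed(sig_a, sig_b, fs, separation_m):
    """Cross-correlate two sensor signals and convert the lag of the
    correlation peak into a propagation speed in m/s."""
    sig_a = sig_a - np.mean(sig_a)
    sig_b = sig_b - np.mean(sig_b)
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_a) - 1)
    delay_s = lag_samples / fs
    return separation_m / abs(delay_s)

# Synthetic example: a pulse arriving 0.3 s later at a sensor 100 m away
# gives ~333 m/s, close to the speed of sound in air (placeholder numbers).
fs = 256.0
t = np.arange(0, 10, 1 / fs)
near = np.exp(-((t - 3.0) ** 2) / 0.01)
far = np.exp(-((t - 3.3) ** 2) / 0.01)
print(propagation_speed(near, far, fs, separation_m=100.0))  # ~333 m/s
```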
The scattering noise produced at EX in DARM has harmonics of about 10 Hz (Figure 1, page 2). This is about the frequency of one of the worst resonances of the cryobaffles (9.7 Hz here: 56857). This is a reminder that we did not reduce the light reflected back into the interferometer from the cryobaffles; we just reduced their usual velocity by damping them. The explosions apparently kick the EX one pretty hard. It would be interesting if Detchar could keep track of the YTC glitches and also see if they ever knock us out of lock.
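As background on why a baffle resonance shows up as harmonics in DARM (this is the standard scattered-light relation, stated here for context rather than taken from this measurement): light re-scattered from a surface moving by $x(t)$ along the beam acquires a phase $\phi(t) = (4\pi/\lambda)\,x(t)$, and the resulting noise extends up to the fringe frequency

$$ f_{\mathrm{fringe}}(t) = \frac{2\,|v(t)|}{\lambda}, $$

so periodic baffle motion at ~9.7 Hz appears in DARM as harmonics of that motion frequency, with more harmonics the larger the baffle velocity. Damping reduces $v(t)$ but, as noted above, not the amount of light reflected back.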
TITLE: 11/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.77 μm/s (3 min average of 1.036 μm/s from a script I have running; a sketch of the idea is at the end of this entry)
QUICK SUMMARY:
Sun Nov 02 10:11:30 2025 INFO: Fill completed in 11min 26secs
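For reference, here is a minimal sketch of what a 3-minute running-average script like the one mentioned above could look like. The channel name, the gwpy data access, and the simple boxcar averaging are assumptions for illustration, not the operator's actual script.

```python
# Hypothetical 3-minute running average of a ground-motion BLRMS channel.
# The channel name and data source are assumptions, not the real script.
import numpy as np
from gwpy.timeseries import TimeSeries

CHANNEL = "H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M"  # assumed secondary-microseism band channel

def three_minute_average(start, end):
    data = TimeSeries.get(CHANNEL, start, end)            # fetch from NDS/frames
    window = int(180 * data.sample_rate.value)            # 3 minutes worth of samples
    kernel = np.ones(window) / window
    return np.convolve(data.value, kernel, mode="valid")  # running mean, channel's native units

if __name__ == "__main__":
    avg = three_minute_average("2025-11-02 23:00", "2025-11-02 23:30")
    print(f"latest 3-min average: {avg[-1]:.3f}")
```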
TITLE: 11/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.59 μm/s
QUICK SUMMARY:
When I walked into the Control Room, H1 was locking DRMI_1F.
I took the Observing mode back to Microseism after taking control of H1.
H1 has been unlocked for at least the past 24 hours.
Oli's owl shift was looking pretty rough last night.
But the secondary useism looks to have fallen over the last 12 hours, AND the wind forecast is looking great.
So there is some hope we can get locked today!
Got called again because we can't get past OFFLOAD_DRMI_ASC due to excessively high secondary microseism. We've had three DRMI catches in the last hour, all three of which died before finishing offloading DRMI ASC. Locklosses were from TRANSITION_DRMI_TO_3F, TURN_ON_BS_STAGE2, and DRMI_LOCKED_CHECK_ASC. It definitely looks like the issue is the high secondary microseism. The secondary microseism isn't as bad as it was earlier today, but it's still pretty high (nuc5). Looking at the secondary microseism channels in the Z direction (since that's where it's moving us the most), it looks like the level of microseism that we're seeing right now is the level that we last lost lock at (although we didn't lose lock because of the high microseism - there was some weird wiggle).
So since the secondary microseism looks to be heading downward, if this current DRMI can't catch and we lose lock, I'm going to put the detector in DOWN for now and come back in a couple hours to try again. Hopefully by then the secondary microseism will have gone down a bit further and we'll be able to lock.
Time to try relocking again. Going to give this a couple hours to try getting past DRMI, and if it can't I'll put the detector in IDLE for the rest of the night.
Still no luck with getting past DRMI_LOCKED_CHECK_ASC, but secondary microseism levels look to be similar to what we relocked with last time, so I'll let it keep trying. It has been catching multiple times, but it can't stay locked more than a few seconds most times. The length of the DRMI locks and the places in the code where it loses lock are different every time, so I don't think it's a different issue.
Going to keep letting the IFO try relocking.
After remotely rebooting h1seiey and seeing that the 4th ADC is now completely absent from the PCI bus, the next test was a power cycle of the IO Chassis.
Procedure was:
Stop h1seiey models, fence from Dolphin and power down the computer.
Dave (@EY):
Power down the IO Chassis (front switch, then rear switch)
Power down the 16bit-DAC AI Chassis (to prevent overvoltage when the IO Chassis is powered up)
Power up the IO Chassis (rear switch, then front switch).
The chassis did not power up. Tracking it back, the +24V DC power strip was unpowered, and the laser interlock chassis, which is also plugged into this strip, was powered down. This tripped all the lasers.
Fil came out to EY with a spare ADC
Dave & Fil (@EY):
We opened the IO Chassis and removed the 4th ADC. With this slot empty, Fil powered up the DC power supply with the IO Chassis on; it did not trip.
We powered down the IO Chassis to install a new ADC. We are skipping the slot the old ADC was in, because it could be a bad slot.
The second DAC was moved from A2-slot4 to A3-slot1, the new ADC was installed in A2-slot4, leaving the suspect A2-slot3 empty.
We powered the IO Chassis on with no problems, then we powered up the h1seiey computer. The models started with no issues, and I was able to reset the SWWD.
The chassis was buttoned up, pushed into the rack, and the AI Chassis were powered back up.
Marc is fixing a cracked cap on the old ADC so we can test it offline.
ADCs:
| old ADC (Removed) | 110204-18 |
| new ADC (Installed) | 210128-28 |
Tripped power supply location:
Updated as-built drawing for h1seiey IO Chassis
Here is the +24VDC power supply after we got everything going again. It is drawing about 3.5 A.
Testing of ADC 110204-18
After Marc replaced the cracked capacitor he had discovered, this ADC (pulled from h1seiey) was tested on the DTS on Thursday 30oct2025.
The x7eetest1 IO Chassis was used. The ADC was installed into A3 by itself; no interface card or ribbon was attached. The chassis powered up with no problems. The ADC was not visible on the PCI bus (lspci and showcards).
Looks like this card is broken and not usable.
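For completeness, here is a rough sketch of how the PCI-visibility check could be scripted on a test stand. It is not a site tool, and the lspci search pattern below is a placeholder that would need to match the actual ADC card's vendor/bridge string; the real check was done by hand with lspci and showcards.

```python
# Hypothetical helper: report whether a card matching a given pattern
# enumerates on the PCI bus. The pattern is a placeholder, not the ADC's
# actual lspci identifier.
import subprocess

def card_visible(pattern: str) -> bool:
    """Return True if any lspci output line contains the pattern."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    hits = [line for line in out.splitlines() if pattern.lower() in line.lower()]
    for line in hits:
        print(line)
    return bool(hits)

if __name__ == "__main__":
    print("ADC visible on PCI bus:", card_visible("General Standards"))  # placeholder pattern
```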