TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 17mph Gusts, 12mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY:
This morning the secondary u-seism has dropped enough to lock!
Oli was woken up due to some suspected fast shutter errors, but was able to get H1 relocked!
When I walked in, Oli already had H1 in OMC_WHITENING!
H1 went back to Observing at 15:41 UTC.
The wind forecast for the next 24 hours looks fantastic for Observing.
The ETMX Hardware Watchdog started showing activity again around Wed 20oct2025. This does not appear to be related to any maintenance or Satellite Amp work. As the plot shows, its activity is very closely correlated with the IFO lock state; this was not the case earlier this year before it went quiet.
Got called by H1 Manager due to the fast shutter check during CHECK_AS_SHUTTERS failing. I've been looking at the trigger power and various SYS-PROTECTION channels (medm, ndscope), and it looks to me like the fast shutter test failed only because we lost lock right as it was checking. You can see the trigger power oscillating along with DRMI right as DRMI is about to fail. However, it does look like the fast shutter did not fire for this test.
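For reference, a minimal sketch of how this kind of coincidence check can be done with gwpy, pulling the trigger power alongside a DRMI buildup signal so the oscillation and the lockloss line up in time. The channel names and time window below are placeholders, not the exact ones used here:

from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

# Placeholder channel names: swap in the actual SYS-PROTECTION trigger
# power channel and a DRMI buildup channel.
channels = [
    "H1:ASC-AS_A_DC_NSUM_OUT_DQ",   # stand-in for the AS trigger power
    "H1:LSC-POP_A_LF_OUT_DQ",       # stand-in for a DRMI buildup signal
]
# Made-up 5-minute window around the failed CHECK_AS_SHUTTERS test
data = TimeSeriesDict.get(channels, "2025-11-03 13:40", "2025-11-03 13:45")

# Stack the two traces on shared time axes
plot = Plot(*data.values(), separate=True, sharex=True)
plot.savefig("fast_shutter_test_vs_drmi.png")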
So it looked like just a bad coincidence to me, BUT when I went to try opening the shutter manually via the shutter controller, it wouldn't open. I ended up using the FORCE OPEN button, which did work, and after that I was able to open and close it fine normally.
Looking back further, it seems it doesn't close and open every time. On ndscope, for the few times we got up to CHECK_AS_SHUTTERS over the past day, the fast shutter did not close/open for the test during that state (I thought it was supposed to get closed and opened every relock during CHECK_AS_SHUTTERS?). At 11/02 14:59 UTC it had functioned correctly, but at the next CHECK_AS_SHUTTERS it didn't.
I wanted to go just to CHECK_AS_SHUTTERS to see if I could confirm that the failure was coincidental with a lockloss during the test, so I got up to CHECK_AS_SHUTTERS and it ran the test by itself, but as the shutter closed, it caused a lockloss (ndscope). After this test, though, all of the issues on the AS_PORT_PROTECTION screen had cleared, so I went up again. I did accidentally leave the FAST_SHUTTER guardian in SHUTTER_FAILURE though (last time it got reset by my INITing), so that probably affected the next test on the next CHECK_AS_SHUTTERS. When we got there, it didn't run the test. I realized the FAST_SHUTTER guardian was in the wrong state and corrected it, but the test still didn't run, even though the ISC_LOCK state marked itself as finished. I manually ran the test from the AS PORT PROTECTION screen, and it caused a lockloss. The test failed and the AS PORT PROTECTION screen is back to having lots of red, with the fault this time saying 'NotReopening'.
Now that it's after 6am Pacific, I'll call Fil or Marc because it definitely seems like something is wrong with the fast shutter, but I'm still a bit confused as to what it is.
P.S. MC2 watchdog just tripped while trying to relock the IMC, but it's okay now
Jenne, Fil, Oli
I INITed ISC_LOCK and decided to try CHECK_AS_SHUTTERS a third time. This time, everything worked correctly (fast shutter closed and opened) and the test passed (ndscope). I checked with Fil, who said it might have just been a logic mix-up, and with Jenne, who confirmed that we should be good to power up now that the test passed. We are heading up!
TITLE: 11/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The lockloss tool does not seem to be working. I was able to lock DRMI and PRMI a few times tonight; the highest I got was MAX_POWER. We are in DRMI with the flashes looking decent at the end of the shift.
LOG: No log.
03:00 UTC IA
DRMI locked within 30 seconds now
04:01 UTC lockloss at MAX_POWER, after it had finished the state
04:27 UTC DRMI_LOCKED_CHECK_ASC lockloss
TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.72 μm/s
QUICK SUMMARY:
Once again microseism has been elevated all day. There was a very slight drop in the microseism earlier in the day where I was able to get up to CHECK_AS_SHUTTERS, with a lockloss from that state @ 19:49 UTC.
Since then I have gotten past DRMI only once.
LOG:
No log
Robert, Tony
My beating shaker data from Thursday commissioning was ruined by large scattering glitches in DARM. We also saw these glitches on Friday. I found that they were coincident with brief seismic and microphone signals (Figure 1). The coupling site has to be EX because the signals appear in DARM before they reach any other station (Figure 2). I was able to determine that they could be coming from the direction of one crater-marked site on the east side of YTC, but not from a more central location that Tony and I had found. The propagation velocity of the 5-20 Hz signal is the same for both seismic and acoustic sensors, about 337 m/s, indicating that the dominant propagation is through the air, not the ground, though it couples into the ground locally. The linear attenuation in the ground at these frequencies is much greater than it is in the air.
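As a sanity check on the ~337 m/s figure, the speed just comes from the arrival-time lag of the burst between two sensors a known distance apart. The numbers below are illustrative only, not the measured values from the figures:

# Illustrative numbers only; substitute the actual sensor separation
# and the measured time lag of the burst between the two sensors.
sensor_separation_m = 4000.0   # assumed distance between the two sensors
arrival_delay_s = 11.9         # assumed lag of the burst between them

speed = sensor_separation_m / arrival_delay_s
print(f"propagation speed ~ {speed:.0f} m/s")
# ~336 m/s is close to the speed of sound in air (~340 m/s) and well below
# typical near-surface seismic speeds, pointing to an acoustic path.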
The scattering noise produced at EX in DARM has harmonics of about 10 Hz (Figure 1, page 2). This is about the frequency of one of the worst resonances of the cryobaffles (9.7 Hz here: 56857 ). This is a reminder that we did not reduce the light reflected back into the interferometer from the cryobaffles, we just reduced their usual velocity by damping them. The explosions apparently kick the EX one pretty hard. It would be interesting if Detchar could keep track of the YTC glitches and also see if they ever knock us out of lock.
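As a rough illustration of how kicked baffle motion maps into DARM (a sketch assuming a made-up baffle amplitude, not a measured one): light rescattered from a surface moving by x(t) picks up phase 4*pi*x(t)/lambda, so the instantaneous fringe frequency is 2*|dx/dt|/lambda, and the resulting arches repeat at the motion frequency, i.e. harmonics near 10 Hz.

import numpy as np

lam = 1064e-9   # laser wavelength [m]
f0 = 9.7        # cryobaffle resonance [Hz]
A = 0.5e-6      # assumed (made-up) baffle displacement amplitude [m]

v_max = 2 * np.pi * f0 * A        # peak baffle velocity for sinusoidal motion
f_fringe_max = 2 * v_max / lam    # highest instantaneous fringe frequency
print(f"arches spaced at {f0} Hz, extending up to ~{f_fringe_max:.0f} Hz")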
TITLE: 11/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.77 μm/s, 3 min avg of 1.036 μm/s from a script I have going
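A minimal sketch of such an averaging script, assuming one of the ground STS BLRMS channels (the channel name here is a guess, not necessarily the one the script uses):

from gwpy.time import tconvert
from gwpy.timeseries import TimeSeries

chan = "H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M"   # assumed channel name
end = tconvert("now")
start = end - 180                                # trailing 3-minute window
data = TimeSeries.get(chan, start, end)
print(f"3 min avg: {data.mean().value:.3f}")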
QUICK SUMMARY:
Sun Nov 02 10:11:30 2025 INFO: Fill completed in 11min 26secs
TITLE: 11/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.59 μm/s
QUICK SUMMARY:
Walking into the Control Room, H1 was locking DRMI_1F.
I took the Observing mode back to Microseism after taking control of H1.
H1 has been unlocked for at least the past 24 hours.
Oli's owl shift was looking pretty rough last night.
But the secondary useism looks to have fallen over the last 12 hours AND the wind forecast is looking great.
So there is some hope we can get locked today!
Got called again because we can't get past OFFLOAD_DRMI_ASC due to excessively high secondary microseism. We've had three DRMI catches in the last hour, all three of which died before finishing offloading DRMI ASC. Locklosses were from TRANSITION_DRMI_TO_3F, TURN_ON_BS_STAGE2, and DRMI_LOCKED_CHECK_ASC. It definitely looks like the issue is the high secondary microseism. The secondary microseism isn't as bad as it was earlier today, but it's still pretty high (nuc5). Looking at the secondary microseism channels in the Z direction (since that's where it's moving us the most), it looks like the level of microseism that we're seeing right now is the level that we last lost lock at (although we didn't lose lock because of the high microseism - there was some weird wiggle).
So since the secondary microseism looks to be heading downward, if this current DRMI can't catch and we lose lock, I'm going to put the detector in DOWN for now and come back in a couple hours to try again. Hopefully by then the secondary microseism will have gone down a bit further and we'll be able to lock.
Time to try relocking again. Going to give this a couple hours to try getting past DRMI, and if it can't I'll put the detector in IDLE for the rest of the night.
Still no luck with getting past DRMI_LOCKED_CHECK_ASC, but secondary microseism levels look to be at a similar level to what we relocked with last time, so I'll let it keep trying. It has been catching multiple times, but it can't stay locked for more than a few seconds most times. The length of the DRMI locks and the places in the code where it loses lock are different every time, so I don't think it's a separate issue.
Going to keep letting the IFO try relocking.
Got called 6 minutes into my OWL shift due to the BSC2 ISI ST2 tripping during MICH_BRIGHT_LOCKED (in INITIAL_ALIGNMENT). The rest of the initial alignment finished fine, so we're attempting to lock now, although I know the very high microseism has been making that really hard today.
TITLE: 11/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The secondary microseism is at about the same level as the start of the shift, but the wind is much lower. We have not been able to lock DRMI or PRMI for more than a few seconds this shift. The alignment keeps running away; we get stuck in a loop of PRMI to CHECK_MICH. I also adjusted the SQZ SHG_FIBR_REJ power while we were trying to get DRMI.
LOG: No log.
00:02 UTC IA
After going into CHECK_MICH then back to PRMI it looks worse
02:08 UTC IA
04:50 UTC IA
TITLE: 11/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
H1 has been locked and Observing for 0 hours all day due to high microseism and high wind speeds.
Microseism seems to be slowly coming down.
No calibration was done today.
LOG:
No Log
FAMIS 27401 PSL Status Report - Weekly
Laser Status:
NPRO output power is 1.836W
AMP1 output power is 70.51W
AMP2 output power is 139.2W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 2 days, 1 hr 40 minutes
Reflected power = 24.68W
Transmitted power = 107.0W
PowerSum = 131.7W
FSS:
It has been locked for 0 days 0 hr and 42 min
TPD[V] = 0.5318V
ISS:
The diffracted power is around 3.4%
Last saturation event was 0 days 4 hours and 37 minutes ago
Possible Issues:
PMC reflected power is high
TITLE: 11/01 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 24mph Gusts, 16mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.83 μm/s
QUICK SUMMARY:
TITLE: 11/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 39mph Gusts, 30mph 3min avg
Primary useism: 0.11 μm/s
Secondary useism: 0.77 μm/s
QUICK SUMMARY:
No calibration was done due to H1 being unlocked.
H1 hasn't been able to get past DRMI all morning.
Secondary microseism has fallen a little bit.... but the wind has picked up the slack. :(
The wind forecast is not looking great either.
After remotely rebooting h1seiey and seeing that the 4th ADC is now completely absent from the PCI bus, the next test was a power cycle of the IO Chassis.
Procedure was:
Stop h1seiey models, fence from Dolphin and power down the computer.
Dave (@EY):
power down the IO Chassis (front switch, then rear switch)
Power down the 16bit-DAC AI Chassis (to prevent overvoltage when the IO Chassis is powered up)
Power up the IO Chassis (rear switch, then front switch).
The chassis did not power up. Tracking it back, the +24V DC power strip was unpowered, and the laser interlock chassis, which is also plugged into this strip, was powered down. This tripped all the lasers.
Fil came out to EY with a spare ADC
Dave & Fil (@EY):
We opened the IO Chassis and removed the 4th ADC. With this slot empty, Fil powered up the DC power supply with the IO Chassis on; it did not trip.
We powered down the IO Chassis to install a new ADC. We are skipping the slot the old ADC was in, because it could be a bad slot.
The second DAC was moved from A2-slot4 to A3-slot1, the new ADC was installed in A2-slot4, leaving the suspect A2-slot3 empty.
We powered the IO Chassis on with no problems, then powered up the h1seiey computer. The models started with no issues, and I was able to reset the SWWD.
The chassis was buttoned up, pushed into the rack, and the AI Chassis were powered back up.
Marc is fixing a cracked cap on the old ADC so we can test it offline.
ADCs:
| old ADC (Removed) | 110204-18 |
| new ADC (Installed) | 210128-28 |
Tripped power supply location:
Updated as-built drawing for h1seiey IO Chassis
Here is the +24VDC power supply after we got everything going again. It is drawing about 3.5A
Testing of ADC 110204-18
After Marc replaced the cracked capacitor he had discovered, this ADC (pulled from h1seiey) was tested on the DTS on Thursday 30oct2025.
The x7eetest1 IO Chassis was used. The ADC was installed into A3 by itself; no interface card or ribbon was attached. The chassis powered up with no problems. The ADC was not visible on the PCI bus (lspci and showcards).
Looks like this card is broken and not usable.