Writing this down for future people recovering from power outages:
We weren't able to lock the IMC; the alignment was not good, but not terrible. With 2 W into the IMC, H1:IMC-TRANS_OUT16 was about 10000 counts, whereas with the IMC well aligned it is about 11000 counts. MC2 trans NSUM, however, was about 6 counts, when it should be 200-300 counts.
On the whitening screen for MC2 trans (LSC > Whitening) there were "Invalid data chn" errors for channels 4, 3, 2, and 1. Elenna found alog 12120 by searching for the error message; Daniel, Patrick, and Fil reproduced a solution like the one described there.
TITLE: 09/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Locked for about 10 hours when a brief power outage on site took everything down. We have been in recovery since (see the list of alogs below). We are currently getting the IMC to lock, after realizing that there was an issue with the whitening; Daniel logged into Beckhoff to fix it.
Power outage alogs:
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:11 | CEBEX | Bubba, contractors | MY | n | CEBEX building prep | 23:04 |
| 15:51 | FAC | Christina | OSB rec. | n | Rollup door to drop off packages | 15:56 |
| 16:41 | FAC | Randy, Eric | MY | n | Check on something | 19:59 |
| 17:18 | SYS | Betsy | Opt Lab | n | Parts | 17:22 |
| 19:19 | VAC | Eric | Ends | n | Checking HVAC at the end stations | 20:04 |
| 19:20 | VAC | Gerardo | LVEA | n | Reset Ion pumps for RGAs | 19:43 |
| 19:21 | VAC | Jordan | Ends | n | Reset ion pumps for RGAs | 19:51 |
| 19:25 | - | Fil | Ends | n | Turning on lasers and checking HV | 20:25 |
| 19:36 | VAC | Travis | LVEA | n | Grab a part | 19:42 |
| 19:36 | VAC | Travis | MY | n | Pump work | 19:58 |
| 19:47 | PSL | Richard | LVEA | n | PSL diode controller power cycle at PSL racks | 19:58 |
| 19:48 | SEI | Jim | Mech room | n | HEPI pump startup | 20:04 |
| 19:59 | VAC | Travis | LVEA | n | Parts | 20:16 |
| 20:04 | SEI | Jim | Ends | n | HEPI pump stations | 21:24 |
| 20:07 | CDS | Erik | EX | n | Power dolphin extender | 20:34 |
| 20:35 | CDS | Fil | EX | n | HEPI troubleshooting | 21:10 |
| 21:25 | SYS | Betsy | LVEA | n | Taking pictures of HAM2/3 | 21:41 |
| 21:38 | VAC | Travis, Janos | MY | n | Pump work | 22:22 |
| 21:40 | EE | Marc | LVEA | n | Checking out fast shutter electronics | 21:58 |
TITLE: 09/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
IFO is DOWN due to a POWER OUTAGE
We're still recovering H1, with the IMC being a big hiccup in recovery.
1. Both corner station HV interlocks tripped; reset. The Fast Shutter, HAM6 PZT, and ITM ESD high voltage were re-enabled.
2. PSL and SQZ high voltage enabled.
3. Safety Laser Interlock System tripped; reset. Lasers enabled.
4. ETMX ESD HV interlock tripped; reset, HV enabled.
5. ETMY ESD HV was found powered on.
6. Found 2 ETMY SEI ISI coil drivers with overtemp errors. The reset button on the front panel cleared all faults. Need to check that all ECRs have been applied to these chassis.
7. Jim reported issues bringing back ETMX HEPI. A normal power OFF/ON did not bring the main pump controller panel online. Asked Patrick to check the Beckhoff software; no issues found. The unit eventually powered on when the ON button was activated during troubleshooting.
8. The EY Pulizzi was found unresponsive and was power cycled. Erik restarted the code. This restored power to the WiFi and the ITMY camera.
The PSL is fully recovered with all subsystems and watchdogs enabled following the site power outage earlier today. I'll add more details in a comment to this entry later.
I restored all the sliders on IFO_ALIGN_COMPACTEST, using my restore script, to their values from before the glitch (19:00 UTC).
Remotely turned on the TCS X and Y chillers.
I restarted the ETMX and ETMY HWS code, following the TCS wiki. The ITMs were already running; the ITM lasers turned back on by themselves.
I had some difficulty connecting to the corner HEPI pump controller after the power outage. The workstations have gotten too far ahead of the old Athena controller, so when I attempted to ssh to the pump station, I got:
Unable to negotiate with XX.XXX.X.XX port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
I had to add some options to ssh:
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oHostKeyAlgorithms=+ssh-dss controls@h1hpipumpctrll0
After also deleting the old host key, I was able to connect.
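To avoid retyping those options every time, the same legacy algorithms can be pinned in a per-host stanza in ~/.ssh/config. This is a sketch only; the host name is taken from the command above, and these weak SHA-1/DSS algorithms should be enabled only for this one legacy controller, never globally:

```
Host h1hpipumpctrll0
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-dss
```

With this in place, a plain `ssh controls@h1hpipumpctrll0` should negotiate the old key exchange without extra flags.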
The end stations were easier with their Beckhoff controllers. EX wouldn't start pumping at first, but I think the VFD just needed to be powered off more completely. I had turned it off for 5 seconds when I first arrived, which apparently wasn't enough to reset the VFD. Fil turned it off for maybe 20 seconds, and fans came on when he powered it back up, which I didn't hear when I power cycled it earlier. EY came right back up.
We had a site wide power outage around 12:11 local time. Recovery of CDS has started.
I've turned the alarm system off; it was producing too much noise.
We are recovering front end models.
Jonathan, Erik, Richard, Fil, Patrick, EJ, TJ, RyanS, Dave:
CDS is recovered. CDSSDF showing WAPs are on, FMCSSTAT showing LVEA temp change.
Alarms are back on (currently no active alarms). I had to restart the locklossalert.service; it had gotten stuck.
The BPA dispatcher on duty said they had a breaker at the Benton substation open and reclose; at that time, they did not have a known cause for the breaker operation. Hanford Fire called to report a fire off Route 4, by Energy Northwest, near the 115 kV BPA power lines. After discussions with the BPA dispatcher, the bump on the line (the breaker operation) may have been caused by a fault on the BPA 115 kV line, which also caused the fire. BPA was dispatching a line crew to investigate.
J. Kissel
Ops corps -- please run the following DARM FOM template on the top, front-and-center, wall display from 08:00 to 12:00 PDT on Saturday (9/13).
/opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc30/
H1_DARM_FOM_wO1.xml
Details:
For this upcoming Saturday's tours only, we'd like to celebrate how much the detector sensitivity has improved in the past 10 years by displaying the early O1 sensitivity (from G1501223), rather than show the L1 trace (whose live sensitivity will still be captured by the display of BNS range).
As such, I've augmented the standard template from
/opt/rtcds/userapps/release/isc/h1/scripts/H1_DARM_FOM.xml
with the following changes.
Functional:
** Changed the pwelch FFT chunk "% overlap" parameter to 50%, rather than 75%, which is appropriate for the Hanning window specified to be used.
** Changed the number of rolling exponential averages parameter to be 10 rather than 3.
- Updated the H1 reference from April 11 2024 (representing O4B sensitivity) to yesterday, September 09 2025, so as to better represent the O4C sensitivity
- Imported G1501223 H1 "start of O1" representative sensitivity, replacing the L1 reference.
** these will have the effect of "slowing down" the live traces, and/or making "glitches contaminate the sensitivity for longer," but this comes with the benefit of a better (less noisy) estimate of the current stationary noise. That makes it a more apples-to-apples comparison with the O1 sensitivity.
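The 50%-overlap choice for a Hann(ing) window can be sanity-checked outside the DTT template. A minimal sketch, using scipy rather than the template itself (scipy, the channel stand-in of white noise, and the segment length are my choices here, not what the FOM uses): overlapped Hann windows at 50% sum to a constant, so every sample gets roughly equal weight, and a flat-spectrum input comes out at its known PSD level.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1024.0
x = rng.standard_normal(int(64 * fs))   # 64 s of unit-variance white noise

# 50% overlap is the natural pairing with a Hann window: the overlapped
# windows tile the data nearly uniformly, unlike 75% which re-weights
# samples and further correlates adjacent segments.
f, pxx = welch(x, fs, window='hann', nperseg=1024, noverlap=512)

# For unit-variance white noise the one-sided PSD is flat at 2/fs.
print(pxx[1:-1].mean(), 2 / fs)
```

The same tiling argument is why changing the overlap in the template changes only the effective number of averages, not the calibrated level of the trace.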
Aesthetic:
- Added "Displacement" to the "aLIGO DARM" title
- Synchronized the format of the legend entries for the references
- Updated the O1 reference trace color so it can be seen a bit better
- Re-ordered the last three traces from "PCALY, GWINC, PCALX" to "PCALX, PCALY, GWINC" (while preserving the colors and symbols)
- Moved the legend to be centered in the window, but still showing violin modes.
The attached screenshot is of this template running live yesterday afternoon.
The hope is that this version of the central display will still fully function as a monitor of how the detector is doing for observational readiness, but also work as a good display piece for the tours.
This can be reverted to the standard template as soon as the tours are done.
Satellite amplifiers for MC1 and MC3 were swapped during maintenance (15:00-19:00 UTC) on July 22, 2025 (85922), and the MC2 satamp had been swapped before that (85770). I wanted to see if we could determine any improvement in noise in the LSC and ASC input mode cleaner channels.
LSC Channel:
- H1:IMC-L_OUT_DQ
ASC Channels:
- H1:IMC-MC2_TRANS_PIT_OUT_DQ
- H1:IMC-MC2_TRANS_YAW_OUT_DQ
- H1:IMC-WFS_A_DC_PIT_OUT_DQ
- H1:IMC-WFS_A_DC_YAW_OUT_DQ
- H1:IMC-WFS_B_DC_PIT_OUT_DQ
- H1:IMC-WFS_B_DC_YAW_OUT_DQ
- H1:IMC-WFS_A_I_PIT_OUT_DQ
- H1:IMC-WFS_A_Q_PIT_OUT_DQ
- H1:IMC-WFS_A_I_YAW_OUT_DQ
- H1:IMC-WFS_A_Q_YAW_OUT_DQ
- H1:IMC-WFS_B_I_PIT_OUT_DQ
- H1:IMC-WFS_B_Q_PIT_OUT_DQ
- H1:IMC-WFS_B_I_YAW_OUT_DQ
- H1:IMC-WFS_B_Q_YAW_OUT_DQ
I looked at many times before and after these swaps, looking for the lowest-noise time for each channel as the best representative of the noise level we can achieve, and settled on a set of before and after times that differ depending on the channel.
BEFORE:
2025-06-18 09:10 UTC (DARK RED)
2025-07-07 09:37 UTC (DARK BLUE)
AFTER:
2025-08-01 07:05 UTC (GREEN)
2025-08-01 10:15 UTC (PINK)
2025-08-02 08:38 UTC (SEA GREEN)
2025-09-05 05:23 UTC (ORANGE)
These measurements were taken with 47 averages and a 0.01 Hz BW.
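Those settings pin down roughly how much data each trace needed. A quick sketch, assuming Welch-style averaging where each FFT segment is 1/BW long (the overlap fraction is my assumption, not stated above):

```python
# Rough total data length implied by the quoted spectrum settings.
# Assumption (not stated in the log): segments of length 1/BW, with a
# fixed fractional overlap between consecutive segments.
def measurement_duration(n_avg, bw_hz, overlap=0.5):
    """Total seconds of data needed for n_avg averaged segments."""
    seg = 1.0 / bw_hz                    # 100 s per segment for 0.01 Hz BW
    return seg * (1 + (n_avg - 1) * (1 - overlap))

print(measurement_duration(47, 0.01, overlap=0.5))  # 2400 s with 50% overlap
print(measurement_duration(47, 0.01, overlap=0.0))  # 4700 s with no overlap
```

Either way, each spectrum represents tens of minutes of quiet data, which is why finding comparable "best" times before and after the swaps took some searching.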
Results:
| Channel | Comments |
|---|---|
| H1:IMC-L_OUT_DQ | Here the best 'after swap' time that I was able to find is noticeably worse than the best before time, so either the swap made the LSC noise worse, or we just haven't been able to reach that level of lowest noise again since the swap. |
| H1:IMC-MC2_TRANS_PIT_OUT_DQ | Small noise drop between 0.8-3 Hz |
| H1:IMC-MC2_TRANS_YAW_OUT_DQ | Slight noise drop between 0.6-3 Hz? |
| H1:IMC-WFS_A_DC_PIT_OUT_DQ | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz |
| H1:IMC-WFS_A_DC_YAW_OUT_DQ | Noise between 6-10 Hz has dropped slightly |
| H1:IMC-WFS_B_DC_PIT_OUT_DQ | No difference |
| H1:IMC-WFS_B_DC_YAW_OUT_DQ | No difference |
| H1:IMC-WFS_A_I_PIT_OUT_DQ | Looks like the noise above 1 Hz has dropped slightly |
| H1:IMC-WFS_A_Q_PIT_OUT | No difference |
| H1:IMC-WFS_A_I_YAW_OUT | No difference |
| H1:IMC-WFS_A_Q_YAW_OUT | No difference |
| H1:IMC-WFS_B_I_PIT_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz. Showing the sea green AFTER trace to verify that the pink AFTER bump seen at 10 Hz is not an issue caused by the satamp swap. |
| H1:IMC-WFS_B_Q_PIT_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz. Showing the sea green AFTER trace to verify that the pink AFTER bump seen at 10 Hz is not an issue caused by the satamp swap. |
| H1:IMC-WFS_B_I_YAW_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz |
| H1:IMC-WFS_B_Q_YAW_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz |
The SQZ filter cavity version of this: 86624
The plots from LHO:86253 show the before vs. after M1 OSEM noise performance for MC1, MC2, and MC3. Comparing the page 3 summaries of each of these .pdfs shows that:
- We only expect change between ~0.2 and ~8 Hz. So any improvement seen in the 7 to 9.5 Hz region is very likely *not* related to the sat amp whitening change.
- For MC2, the LF and RT OSEM degrees of freedom: both the before and after traces show that L and Y are well above the expected noise floor for the entire region below 10 Hz.
- The change in all DOFs is a broadband change in noise performance; not only is there nothing *sensed* by these OSEMs above ~5 Hz, there is also no change there, so any resonance-like features that change in the IMC signals will not be related to the sat amp whitening filter change.
We should confirm that the data used for the MC2 "before vs. after" were both taken with the IMC length control OFF (i.e. the IMC was OFFLINE).
Back to the IMC metrics:
(1) Only IMC-L and the MC2 TRANS signals appear to be dominated by residual seismic / suspension noise below 10 Hz. Hence (a) I believe the improvement shown in the MC2 TRANS QPD, though I would have expected more, and (b) I believe that we're seeing something limited by seismic / suspension noise in IMC-L, but the fact that it got worse doesn't agree with the top-mass OSEMs' stated improvement from LHO:86253, so I suspect the story is more complicated.
(2) It's clear that all of the DC "QPD" signals from the IMC WFS are not reading out anything seismically related below 10 Hz. Maybe this ASD shape is a function of the beam position? Maybe the real signal is drowned out by ADC noise, i.e. the signal is only well-whitened above 10 Hz? Whatever it is, it's drowning out the improvements that we might have expected to see: we don't see *any* of the typical residual seismic / suspension noise features.
(3) The WFS B RF signals appear to be swamped by this same color of noise.
The WFS A RF signals show a similar color of noise, but lower in magnitude, so some residual seismic / suspension peaks pop up above it. It's still not low enough to show where we might expect the improvements: the troughs between resonances. Given the quality of this metric, I'm not mad that we didn't see an improvement. So, again, I really think only the metrics in (1), i.e. IMC-L and MC2 TRANS, are showing noise that is/was dominated by residual seismic / suspension noise below 10 Hz. That's why we see no change in the IMC WFS signals: not because we didn't make an improvement in what they're *supposed* to be measuring. We'll have to figure out what happened with IMC-L between June and September 2025, but the plethora of new high-Q (both "mechanical" and "digital") features between 3 and 10 Hz suggests it has nothing to do with the sat amp whitening improvement. The slope of increased broadband noise between 1 and 3 Hz doesn't match what we would expect, nor does it match the demonstrated improvement in the local M1 OSEM sensors. We should check the DC signal levels, or change the window type, to be sure this isn't spectral leakage.
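The spectral-leakage worry can be illustrated with a toy check (scipy and all the numbers here are my choices for illustration; the real check would use the actual IMC-L data in DTT): a strong narrow line sitting between FFT bins leaks a broadband skirt with a boxcar window, but far less with a high-dynamic-range window, so swapping the window type quickly reveals whether a low-frequency slope is leakage or real noise.

```python
import numpy as np
from scipy.signal import periodogram

fs, T = 256.0, 16.0
t = np.arange(int(fs * T)) / fs
f0 = 10.0 + 1.0 / (2 * T)           # worst case: line centered between bins
x = np.sin(2 * np.pi * f0 * t)      # stand-in for a strong DC/line component

_, p_box = periodogram(x, fs, window='boxcar')
f, p_bh = periodogram(x, fs, window='blackmanharris')

i = int(np.argmin(np.abs(f - 30.0)))  # look 20 Hz away from the line
# If the floor 20 Hz away drops by orders of magnitude with the better
# window, the boxcar floor was leakage, not real broadband noise.
print(p_box[i] / p_bh[i])
```

If the IMC-L low-frequency slope stays put when the window is changed, it's real noise; if it collapses, it was leakage from the DC level.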
Our range hasn't been the most stable during this lock. There's one drop in particular that goes down about 8 Mpc, to 145 Mpc, for 20 minutes or so (attachment 1). Running our range comparison script, and changing the span appropriately (thanks Oli!), the drop looks to be very broadband, from ~40-200 Hz. The SQZ BLRMS see something around that time (attachment 2).
Sheila had me run another template for that time. The template, which was copied from one of Camilla's, I've now saved in the sqz userapps (attachment 3). Green is the trace from this lock.
Lockloss due to an EQ that hit before EQ mode could activate. Seems to be either very local or very large; not on USGS yet.