Search criteria
Section: X1
Task: SUS

Reports until 15:12, Friday 19 December 2025
H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 15:12, Friday 19 December 2025 (88624)
HAM1 vent work for JAC - JM1 and JM3 health check begins.

This morning, I started taking transfer function measurements on the two new Tip Tilt suspensions (JM1 and JM3 for JAC) recently installed in the HAM1 chamber. The screenshots are attached below. The chamber had strong purge air flowing, which created a noisy environment, and I had to drive the suspensions really hard to get decent coherence. Travis dialed the purge air down, which helped (however, other ongoing LVEA work was partially saturating the DAC).

This is the first of many measurements to be taken (including OSEM spectra, etc.); however, JM1 looks great - ignore the magnitude for now and we will sort it out in the new year. The resonance peaks are where they should be - especially for JM1, just as when I tested it in the triples lab.

Similarly, JM3 is also behaving as it did in the lab (RyanC took that measurement in the triples lab) - a couple of peaks in the L and P DOFs are off by 0.15 Hz (due to longer wires; not much control over this). The Y DOF has cross-coupling from the L DOF at 1.25 Hz, but if you look at this damped TF plot (taken with L and P damping kept on), the cross-coupling is gone.
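For a rough sense of scale on the wire-length effect, here is a minimal sketch using a simple-pendulum approximation (an assumption for illustration only - the real tip-tilt mode frequencies are set by the full suspension geometry, and the nominal length below is made up):

    import numpy as np

    g = 9.81  # m/s^2

    def pendulum_freq(length_m):
        """Simple-pendulum mode frequency in Hz for a given wire length."""
        return np.sqrt(g / length_m) / (2.0 * np.pi)

    L0 = 0.25                        # assumed nominal wire length [m], giving ~1 Hz
    f0 = pendulum_freq(L0)
    f1 = pendulum_freq(1.05 * L0)    # e.g. a wire 5% longer
    # To first order, df/f = -dL/(2L): longer wires push the modes down in frequency.
    print(f"f0 = {f0:.3f} Hz, f1 = {f1:.3f} Hz, shift = {f0 - f1:.3f} Hz")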

More tests are in the pipeline and better results are expected; however, this is a decent start.

The templates are stored at the following locations:

JM1

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/JM1/SAGM1/Data/

2025-12-19_1820_H1SUSJM1_M1_WhiteNoise_L_0p02to50Hz.xml
2025-12-19_1820_H1SUSJM1_M1_WhiteNoise_P_0p02to50Hz.xml
2025-12-19_1820_H1SUSJM1_M1_WhiteNoise_Y_0p02to50Hz.xml

JM3

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/JM3/SAGM1/Data/

2025-12-19_2100_H1SUSJM3_M1_WhiteNoise_L_0p02to50Hz.xml
2025-12-19_2100_H1SUSJM3_M1_WhiteNoise_P_0p02to50Hz.xml
2025-12-19_2100_H1SUSJM3_M1_WhiteNoise_Y_0p02to50Hz.xml

2025-12-19_2100_H1SUSJM3_M1_WhiteNoise_Y_0p02to50HzDampingON_LP.xml - L and P DOF damping ON during the measurement.

Purge air in HAM1 was set to its nominal flow once the measurements were complete.

Images attached to this report
H1 SUS (PSL, SUS)
rahul.kumar@LIGO.ORG - posted 15:41, Thursday 18 December 2025 - last comment - 07:33, Friday 19 December 2025(88599)
HAM1 vent work - JM1 and JM3 plugged into the electronics chain - OLCs re-taken and offsets applied on the MEDM screen

Betsy, Rahul

We found that SUS JM1 had a faulty quadrupus cable, which we replaced today. Next, I took OLCs for both JM1 and JM3 in the HAM1 chamber. I applied the offsets and gains and then centered the BOSEMs - and they look good. Next, I will start checking the health of the electronics chain and the suspension itself (i.e., by taking transfer function measurements).
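For reference, a minimal sketch of how OSEMINF offsets and gains can be derived from a measured open light current (OLC) in counts. The convention assumed here (offset recenters about half-light, gain normalizes the open-light span to a standard count range) and the normalization target are assumptions for illustration, not values taken from this entry:

    def osem_offset_gain(olc_counts, full_range_counts=30000.0):
        """Offset/gain from an OSEM open light current (OLC) measurement.

        Assumed convention: the offset recenters the signal about half-light,
        and the gain rescales the open-light span so the OSEMINF output runs
        over roughly +/- full_range_counts/2.
        """
        offset = -olc_counts / 2.0
        gain = full_range_counts / olc_counts
        return offset, gain

    # Example with a made-up OLC value; the real JM1/JM3 numbers are in the
    # attached screenshots and are not reproduced here.
    offset, gain = osem_offset_gain(22000.0)
    print(f"offset = {offset:.0f} counts, gain = {gain:.3f}")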

The offsets and gains for JM1 are recorded in this screenshot - accepted in the SDF (safe).

The offsets and gains for JM3 are recorded in this screenshot - accepted in the SDF (safe).

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 15:54, Thursday 18 December 2025 (88600)

Oli, Rahul

We started damping both suspensions and found that the voltmons were not working (Dave found that their gains were set to zero).

With the voltmons ON, both suspensions were damping fine - no overflows on this 28-bit DAC.

rahul.kumar@LIGO.ORG - 15:59, Thursday 18 December 2025 (88601)

Adding pictures of JM1 and JM3 I took today.

Images attached to this comment
corey.gray@LIGO.ORG - 07:33, Friday 19 December 2025 (88607)EPO

Tagging EPO for JM photos

H1 SUS (CDS, SUS)
rahul.kumar@LIGO.ORG - posted 09:42, Thursday 18 December 2025 - last comment - 11:00, Thursday 18 December 2025(88588)
HAM1: JM3 (JAC) and PM1 Satamps replaced (in LVEA), JM2 and JM3 cables swapped (in CER)

Fil, Rahul

This morning we replaced the unmodified satamps for JM3/PM1 (both TT suspensions) with the modified version, as per alog 88584. Details are given below:

Old satamp s/n - S1200173

New Satamp s/n - S2500407

The ADC counts of the BOSEMs on JM3 and PM1 have dropped by approximately 25% due to the above change, hence we will have to compensate for that on the filter side (Oli is currently on it).

Also, we have swapped the cables for JM2 (mount) with JM3 (SUS TT) in the CER - as per alog 88584, this is now consistent with the wiring diagram.

Comments related to this report
ryan.crouch@LIGO.ORG - 10:09, Thursday 18 December 2025 (88589)

A plot of some OSEM values before and after the satamp swap for PM1.

Images attached to this comment
oli.patane@LIGO.ORG - 11:00, Thursday 18 December 2025 (88593)

Here's the characterization data and fit results for S2500407, assigned to JM3 / PM1 M1's ULURLLLR OSEMs.

This sat amp is a US 8CH sat amp, D1002818 / D080276. The data was taken per methods described in T080062-v3, using the diagrammatic setup shown on PAGE 1 of the Measurement Diagrams from LHO:86807.

The data was processed and fit using ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Scripts/plotresponse_S2500407_H1_JM3PM1_M1_ULLLURLUR_20250915.m

Explicitly, the fit to the whitening stage zero and pole, the transimpedance feedback resistor, and foton design string are:

Optic  Stage  Serial_Number  Channel_Number  OSEM_Name  Zero_Pole_Hz  R_TIA_kOhm  Foton_Design
JM3    M1     S2500407       CH1             UL         0.0927:5.07   -121        zpk([5.07],[0.0927],1,"n")
                             CH2             UR         0.0949:5.18   -121        zpk([5.18],[0.0949],1,"n")
                             CH3             LL         0.0949:5.19   -121        zpk([5.19],[0.0949],1,"n")
                             CH4             LR         0.0955:5.22   -121        zpk([5.22],[0.0955],1,"n")
PM1    M1                    CH5             UL         0.0931:5.09   -121        zpk([5.09],[0.0931],1,"n")
                             CH6             UR         0.0942:5.15   -121        zpk([5.15],[0.0942],1,"n")
                             CH7             LL         0.0935:5.105  -121        zpk([5.105],[0.0935],1,"n")
                             CH8             LR         0.0969:5.29   -121        zpk([5.29],[0.0969],1,"n")
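As a sanity check on how these fit parameters get used, here is a minimal sketch (not the production code) evaluating one fitted whitening stage - zero at 0.0927 Hz, pole at 5.07 Hz for CH1/UL - against its inverse foton compensation zpk([5.07],[0.0927],1,"n"). Both are normalized to unity gain at DC, a simplifying assumption here since the overall scale is handled separately via the transimpedance / absolute calibration:

    import numpy as np

    def zp_response(freqs_hz, zero_hz, pole_hz):
        """Frequency response of a single real zero/pole pair, unity gain at DC."""
        s = 1j * np.asarray(freqs_hz, dtype=float)
        return (1 + s / zero_hz) / (1 + s / pole_hz)

    f = np.logspace(-2, 2, 5)                       # 0.01 Hz to 100 Hz
    whitening    = zp_response(f, 0.0927, 5.07)     # CH1 / UL fitted sat amp whitening
    compensation = zp_response(f, 5.07, 0.0927)     # inverse, per the foton design string

    for fi, w, c in zip(f, whitening, compensation):
        print(f"{fi:8.3f} Hz  |whitening| = {abs(w):7.2f}  |whitening*comp| = {abs(w * c):.3f}")

The product is flat (unity) at all frequencies, i.e. the foton filter undoes the analog whitening up to an overall gain.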

The attached plot and a machine-readable .txt version of the above table are also found in ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/2025-09-15_USDualSatAmp_S2500407_D080276-v3_fitresults.txt

Per usual, R_TIA_kOhm is not used in the compensation filter -- but after ruling out an adjustment in the zero frequency (by zeroing the phase residual at the lowest few frequency points), Jeff nudged the transimpedance a bit to get the magnitude scale to within ~0.25%, as shown in the attached results. Any scaling like this will instead be accounted for in the absolute calibration step, i.e. Side Quest 4 from G2501621, a la what was done for the PR3 and SR3 top masses in LHO:86222 and LHO:84531 respectively.

H1 SYS (ISC, PSL, SUS)
rahul.kumar@LIGO.ORG - posted 16:36, Wednesday 17 December 2025 - last comment - 08:58, Friday 19 December 2025(88578)
Vent activities in HAM 1 for the Jitter Attenuation Cavity (JAC work)

Betsy, Fil, Rahul

Today we kicked off the installation activities in the HAM1 chamber for the Jitter Attenuation Cavity (JAC). Listed below are the items we placed on the ISI table - they are all roughly positioned and dog-clamped.

1. Tip Tilt JM1 - now connected to the electronics chain; having some issues with the BOSEM ADC counts, etc.; we will continue looking into it.

2. Tip Tilt JM3 - now connected to the electronics chain, BOSEMs centered; will proceed with health checks once the chassis and electronics chain look okay.

3. The two periscopes for the JAC, Type 121 and 132 - assembly report posted by Jennie - 88574.

4. Some optics on Siskiyou mount were also added to the table.

I am attaching pictures showing the above-mentioned items added to the table - and, for comparison, a picture (here) taken before anything was added.

Fil also performed ground loop checks on JM1 and JM3 and did not find any issues with them.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 08:58, Friday 19 December 2025 (88610)EPO

EPO-Tagging for JAC installation 

H1 CDS (SUS)
erik.vonreis@LIGO.ORG - posted 16:34, Wednesday 17 December 2025 (88581)
h1susham1 modified and restarted

[Oli, Erik]

Oli rearranged some ADC channels to match the current design.  The model was rebuilt and restarted.  No DAQ restart needed.

X1 SUS
ryan.crouch@LIGO.ORG - posted 10:30, Wednesday 17 December 2025 - last comment - 10:59, Wednesday 17 December 2025(88569)
JM3 transfer function results

After taking the first transfer function, we saw that Length looked weird, so we measured the wire lengths, then replaced one of the wires and remeasured JM3. Before replacing the wire, and after replacing the wire. The wire replacement made everything look nicer and shifted the peaks to more correct locations. While it is still not perfect and looks a little different, we should be able to damp it; the magnitude differences are likely from the coil drivers.

The measurements are located in: /ligo/svncommon/SusSVN/sus/trunk/HTTS/X1/JM3/SAGM1/Data/

After replacing the wire:
2025-12-11_2300_X1SUSJM3_M1_WhiteNoise_L_0p02to50Hz.xml
2025-12-11_2300_X1SUSJM3_M1_WhiteNoise_P_0p02to50Hz.xml
2025-12-11_2300_X1SUSJM3_M1_WhiteNoise_Y_0p02to50Hz.xml

Before replacing the wire:

2025-12-11_2000_X1SUSJM3_M1_WhiteNoise_L_0p02to50Hz.xml
2025-12-11_2000_X1SUSJM3_M1_WhiteNoise_P_0p02to50Hz.xml
2025-12-11_2000_X1SUSJM3_M1_WhiteNoise_Y_0p02to50Hz.xml

Non-image files attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 10:59, Wednesday 17 December 2025 (88572)

Comparing JM3 to some previous TT measurements for OM3, PM1, and RM2.

Non-image files attached to this comment
H1 GRD (OpsInfo, SUS)
thomas.shaffer@LIGO.ORG - posted 09:59, Wednesday 17 December 2025 (88570)
Created and started SUS_JM1 and SUS_JM3 nodes

Ryan S and I edited sustools.py and susconst.py, created SUS_JM{1,3}.py, and then started the two new nodes. Other than the susconst file, these are common code, so I will let LLO know. There were a few inconsistencies, and it looks like PM1 was forgotten in a few places, so we added those instances in. That node has been working, so I don't think it has been an issue, but now it's consistent with the other nodes.

We briefly tested the node by tripping the suspension, which isn't actually hooked up yet, and running it through all but its coil switching states. All good. Ryan and possibly others will edit the MEDM screens that need the new JMs on them.

Images attached to this report
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 16:19, Tuesday 16 December 2025 (88561)
BBSS Inspection Fixture Fit-Checked

BBSS Inspection Fixture (D2200364) parts were fit-checked and helicoiled. Since we need the actual container for the full assembly, I tried to assemble everything except that. What I assembled was:

1. Inspection Ring Assembly - ~100 "feet" (D2400031) were helicoiled, 12 of which were attached successfully to the inspection rings (which also were helicoiled)

2. The locating shaft (D1000347) and sleeve (D1000346) were fit-checked with the set screw and helicoiled. The 3/8-16 screws that go into the helicoils are too tight for some to go in, so I left the tangs in until I can investigate. This is the only weird thing I encountered.

3. Wing nuts and other holes were fit-checked and fit well and easily.

See pictures attached.

Images attached to this report
H1 SUS (CDS, IOO, SUS)
jeffrey.kissel@LIGO.ORG - posted 11:27, Tuesday 16 December 2025 (88546)
SUS IM1, IM2, IM3 and IM4 Online and Running post sush2a and sush2b merge to sush12
J. Kissel, O. Patane

After having sorted out issues with the (non-existent) binary IO (see LHO:88542 and LHO:88544), and after Oli performed all the "usual" controls updates that follow a 32CH DAC upgrade, all of the HAUX suspensions -- IM1, IM2, IM3, IM4 -- are now confirmed functional, damped, and aligned.

We've left their guardians in the ALIGNED state.

That means all SUS in HAM2 are fully functional on the new merged sush12 computer / IO chassis and following the drawing D0902810-v12. See analog changes in LHO:88519 and software changes in LHO:88527.
H1 SUS (CDS, IOO, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 09:00, Monday 15 December 2025 (88513)
h1sush2a, h1sush2b SUS taken to SAFE; SEI HAM1 and HAM2 brought to ISI_DAMPED_HEPI_OFFLINE -- merge of sush2a and sush2b systems into sush12 system begins!
J. Kissel

After offloading the IMC WFS (LHO:88510), and saving alignment offsets in the respective safe.snaps (LHO:88511 and LHO:88512), I've brought all SUS on the h1sush2a and h1sush2b computers to SAFE (that's MC1, MC3, PRM, PR3 and IM1, IM2, IM3, IM4, RM1, RM2, PM1, respectively). I've also brought the HAM1 and HAM2 SEI systems to ISI_DAMPED_HEPI_OFFLINE.

This is all in prep for the h1sush2a and h1sush2b merge into h1sush12, which is coincident with the DACs in these chassis getting upgraded to 32 CH 28-bit DACs per ECRs E2400409 and E2500296 scheduled via WP 12901.
H1 SUS (CDS, IOO, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:48, Monday 15 December 2025 (88512)
h1sush2b SUS models have OPTICALIGN alignment offsets saved in their safe.snaps
J. Kissel


Saved opticalign alignment sliders for SUS on the h1sush2b computer in the h1susim and h1sushtts models; 
    h1susim
        H1SUSIM1
        H1SUSIM2
        H1SUSIM3
        H1SUSIM4
    h1sushtts (which will become h1susham1)
        H1SUSRM1
        H1SUSRM2
        H1SUSPM1

Ready for the h1sush2a + h1sush2b = h1sush12 IO chassis merge!
Images attached to this report
H1 IOO (CDS, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:36, Monday 15 December 2025 (88510)
IMC WFS Offloaded
J. Kissel

In prep for the sush12 upgrade, I've offloaded the IMC WFS to the MC1, MC2, and MC3 OPTICALIGN sliders. This was done with the built-in "MCWFS_OFFLOADED" state in the IMC_LOCK guardian. There wasn't much alignment change requested by the WFS control signal; just ~5 counts on MC1 and MC3.
I'll now save the alignment offsets into the SUS safe.snaps.
H1 PEM (DetChar, ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:51, Thursday 11 December 2025 - last comment - 09:06, Thursday 11 December 2025(88473)
It's been ... WINDY.
J. Kissel

Post Dec 4th power outage, we've had an EPIC week of windstorms that has inhibited recovery efforts, which has delayed upgrade progress. The summary pages (on their 24-hour cadence) and the OPS logs / environment summary don't really convey this well, so here's a citable link to show how bad last Friday (12/05), Monday (12/08), and Wednesday (12/10) were in terms of wind. Given the normal work weekend, it means that we really haven't had a conducive environment to recover from even a normal lockloss, let alone a 2-hour site-wide power outage.

The attached screenshot is of the MAX minute trends (NOT the MEAN, to convey how bad it was) of wind speed at each station in UTC time. 
The 16:00 UTC hour mark is 08:00 PST -- the rough start of the human work day, so the vertical grid is marking the work days.
The arrow (and period where there's red-dashed 0 MPH no data) shows the 12/04 power outage.
The horizontal bar shows the weekend when we humans were trying to recover ourselves and not the IFO.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:06, Thursday 11 December 2025 (88475)
Oh right -- and also on Monday, even though the wind wasn't *that* bad, the Earth was mad from the aftershocks of the 7.0 mag Alaskan EQ, and there were end-station Software Watchdog trips related to it that -- because of an oversight in watchdog calibration -- scared everyone into thinking we should "stand down until we figure out if this was because of the hardware upgrades or the power outage." See LHO:88399 and LHO:88415. So, Monday was a wash for environmental reasons too.

Images attached to this comment
H1 SUS
oli.patane@LIGO.ORG - posted 17:00, Tuesday 09 December 2025 - last comment - 11:10, Wednesday 10 December 2025(88445)
Estimators seemingly caused 0.6 Hz oscillations again

Jeff, Oli

Earlier, while trying to relock, we were seeing locklosses preceded by a 0.6 Hz oscillation seen in the PRG. Back in October we had a time when the estimator filters were installed incorrectly and caused a 0.6 Hz lock-stopping oscillation (87689). Even though we haven't made any changes to the estimators in over a month, I decided to try turning them all off (PR3 L/P/Y, SR3 L/P/Y). During the next lock attempt, there were no 0.6 Hz oscillations. I checked the filters and settings and everything looks normal, so I'm not sure why this was happening.

I took spectra of the H1:SUS-{PR3,SR3}_M1_ADD_{L,P,Y}_TOTAL_MON_DQ channels for each suspension and each DOF during two similar times, before and after the power outage. I wanted the After time to be while we were in MICROSEISM, since it seems like maybe the ifo isn't liking the normal WINDY SEI_ENV right now, so I wanted both the Before and After times to be in a SEI_ENV of MICROSEISM and in the same ISC_LOCK states. I chose the After time to be 2025/12/09 18:54:30 UTC, when we were in an initial alignment, and then found a Before time of 2025/11/22 23:07:21 UTC.

Here are the spectra for PR3 and SR3 for those times. PR3 looks fine for all DOFs, and SR3 P looks to be a bit elevated between 0.6 - 0.75 Hz, but it doesn't look like it should be enough of a difference to cause oscillations.

Then, while talking to Jeff, we discovered that the difference in overall noise in the total damping for L and P changed depending on the seismic state we were in, so I made a comparison between the MICROSEISM and CALM SEI_ENV states (PR3, SR3). The USEISM time was 2025/12/09 12:45:26 UTC and the CALM time was 2025/12/09 08:54:08 UTC, with a BW of 0.02. The only difference in the total drive is seen in L and P, where it's higher below 0.6 Hz when we are in CALM.
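For reproducibility, here is a minimal sketch of how these comparison spectra could be generated with GWpy, using the channel names and times quoted above (the 0.02 Hz BW maps to a 50 s FFT length; the stretch duration, overlap, and averaging choices here are assumptions):

    from gwpy.time import to_gps
    from gwpy.timeseries import TimeSeries

    channels = [f"H1:SUS-{optic}_M1_ADD_{dof}_TOTAL_MON_DQ"
                for optic in ("PR3", "SR3") for dof in ("L", "P", "Y")]

    times = {"CALM":   to_gps("2025-12-09 08:54:08"),
             "USEISM": to_gps("2025-12-09 12:45:26")}

    duration = 600        # seconds of data per state (assumed)
    fftlength = 50        # 1 / 50 s = 0.02 Hz bandwidth, matching the BW quoted above

    asds = {}
    for label, start in times.items():
        for chan in channels:
            data = TimeSeries.get(chan, start, start + duration)
            asds[(label, chan)] = data.asd(fftlength=fftlength, overlap=fftlength / 2)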

So during those 0.6 Hz locklosses earlier today, we were in USEISM. Is it possible that the estimators combined with the USEISM state create an unstable configuration?

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 08:51, Wednesday 10 December 2025 (88456)

This is possibly true. The estimator filters are designed/measured using a particular SEI environment, so it is expected that they would underperform when we change the SEI loops/blends.

Additionally, we use the GS13 signal for the ISI-->SUS transfer function. It might be the case that the different amount of in-loop/out-of-loop-ness of the GS13 does something to the transfer functions. I don't have any mathematical conclusions from it yet, but Brian and I will think about it.

jeffrey.kissel@LIGO.ORG - 11:10, Wednesday 10 December 2025 (88458)SEI, SUS
I'm pretty confident that the estimators aren't a problem, or are at least a red herring.

Just clarifying the language here -- "oscillation" is an overloaded term. And remember, we're in "recovery" mode from last Thursday's power outage -- so literally *everything* is suspect, wild guesses are being thrown around like flour in a bakery, and we only get brief, unrepeatable evidence that something's wrong, separated by tens of minutes.

The symptom was "we're trying 6 different things at once to get the IFO going. Huh -- the ndscope time series of the IFO build-ups as we're locking looked like it grew exponentially to lockloss in one lock stretch, and in another it just got noisier halfway through the lock stretch. What happened? Looks like something at 0.6 Hz."

We're getting to "that point" in the lock acquisition sequence maybe once every 10 minutes.
There's an entire rack's worth of analog electronics that went dark in the middle of this, as one leg of its DC power failed (LHO:88446).
The microseism is higher than usual and we're between wind storms, so we're trying different ISI blend configurations (LHO:88444).
We're changing around the global alignment because we think suspensions moved again during the "big" HAM2 ISI trip at the power outage (LHO:88450).
There's an IFO-wide CDS crash after a while that requires all front-ends to be rebooted, with the suspicion that our settings configuration file tracking system might have been bad (LHO:88448)...

Everyone in the room thinks "the problem" *could* be the thing they're an expert in, when it's likely a convolution of many things.

Hence, Oli trying to turn OFF the estimators.
And near that time, we switched the configuration of the sensor correction / blend filters of all the ISIs (switching the blends from WINDY to MICROSEISM -- see LHO:88444).

So -- there were:
    - only one, *maybe* two instances where an "oscillation" is seen, in the sense of "positive feedback" or "exponential growth of the control signal."
    - only one "oscillation" where it's "excess noise in the frequency region around 0.6 Hz," and the check of whether it actually *is* at 0.6 Hz again isn't rigorous.

That happens to be the frequency of the lowest L and P modes of the HLTSs, PR3 and SR3.
BUT -- Oli shows in their plots that:
    - Before vs. after the power outage, when looking at times when the ISI platforms are in the same blend state, the PR3 and SR3 control is the same.
    - Comparing the control requests when the ISI platforms are in microseism vs. windy blends shows the expected change in control authority from the ISI input, as the change in shape of the PR3 and SR3 ASDs between ~0.1 and ~0.5 Hz matches the change in shape of the blends.

Attached is an ndscope of all the relevant signals -- or at least the signals in question -- for verbal discussion later.


Images attached to this comment
H1 SUS (CDS, SUS)
jeffrey.kissel@LIGO.ORG - posted 15:54, Tuesday 09 December 2025 (88446)
H1 SUS-C1's -18V_DC Power Fails (MC2, PR2, SR2's Coil Drivers, AAs, AIs, and BI/BO Chassis)
J. Driggers, R. Short, D. Barker, J. Kissel, R. McCarthy, M. Pirello, F. Clara
WP 12925
FRS 36300

The power supply for the negative rail (-18V) of the SUS-C1 rack in the CER -- the right power supply in VDC-C1, U3-U1 -- failed on 2025-12-09 22:55:40 UTC. This rack houses the coil drivers, AAs, AIs, and BI/BO chassis -- all the analog electronics for SUS-MC2, SUS-PR2, and SUS-SR2.

We found the issue via DIAG_MAIN, which said "OSEMs in Fault" calling out MC2, PR2, and SR2.
We confirmed that the IMC wasn't locking, and MC2 couldn't push the IMC through fringes.
Also, the OSEM PD inputs on all stages of these suspensions were digital zero (not even ADC noise).

Marc/Fil/Richard were quick on the scene and with the replacement. 
We brought the HAM3 and HAM4 ISIs to ISI_DAMPED_HEPI_OFFLINE in prep for a front-end model / IO chassis restart if necessary.
Un-managed the MC2, PR2, and SR2 guardians by bringing their MODE to AUTO.
Used those guardians to bring those SUS to SAFE.
- Richard/Marc/Fil replaced the failed -18V power supply.
- While there, the +18 V supply had already been flagged in Marc's notes for replacement, so we replaced that as well (see D2300167).

Replaced failed +18V power supply S1201909 with new power supply S1201944
Replaced failed -18V power supply S1201909 with new power supply S1201957.

The rack was powered back up, and the suspensions and seismic were restored by 2025-12-09 23:31 UTC. The suspensions appear fully functional.

Awesome work team!
H1 SUS (CDS, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 16:14, Monday 08 December 2025 (88415)
Weekend ETMY Software Watchdog Trips were Because of L2 to R0 Longitudinal Tracking not being blocked by USER WD
J. Kissel, J. Warner

Trending around this morning to understand the reported ETMY software watchdog (SWWD) trips over the weekend (LHO:88399 and LHO:88403), Jim and I conclude that -- while unfortunate -- nothing in software, electronics, or hardware is doing anything wrong or broken; we just had a whopper of an Alaskan earthquake (see the USGS report for EQ us6000rsy1 at 2025-12-06 20:41:49 UTC) and a few big aftershocks.

Remember, since the upgrade to the 32CH, 28-bit DACs last week, both end stations' DAC outputs will "look CRAZY" to all those who are used to looking at the number of counts of a 20-bit DAC. Namely, the maximum number of counts is a factor of 2^8 = 256x larger than previously, saturating at +/- 2^27 = +/- 134217728 [DAC counts] (as opposed to +/- 2^19 = +/- 524288 [DAC counts]).

The real conclusion: Both SWWD thresholds and USER WD Sensor Calibration need updating; they were overlooked in the change of the OSEM Sat Amp whitening filter from 0.4:10 Hz to 0.1:5.3 Hz per ECR:E2400330 / IIET:LHO:31595.
The watchdogs use a 0.1 to 10 Hz band-limited RMS as their trigger signal, and the digital ADC counts they use (calibrated into either raw ADC voltage or microns, [um], of top mass motion) will see a factor of anywhere from 2x to 4x increase in RMS value for the same OSEM sensor PD readout current. In other words, the triggers are "erroneously" a factor of 2x to 4x more sensitive to the same displacement.
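To make the numbers above concrete, here is a small sketch (assuming simple single zero/pole whitening models with unity DC gain) comparing the old 0.4:10 Hz and new 0.1:5.3 Hz whitening magnitudes across the 0.1 - 10 Hz watchdog band, along with the DAC saturation change; the in-band ratio comes out roughly 2x to 4x over most of the band:

    import numpy as np

    # DAC output range: 20-bit (+/- 2**19 counts) vs 28-bit (+/- 2**27 counts)
    old_max, new_max = 2**19, 2**27
    print(f"saturation: {old_max} -> {new_max} counts ({new_max // old_max}x larger)")

    def whitening(freqs_hz, zero_hz, pole_hz):
        """Single zero/pole whitening response, unity gain at DC."""
        s = 1j * freqs_hz
        return (1 + s / zero_hz) / (1 + s / pole_hz)

    f = np.array([0.1, 0.3, 1.0, 3.0, 10.0])     # Hz, spanning the WD BLRMS band
    old = np.abs(whitening(f, 0.4, 10.0))        # pre-ECR E2400330 whitening (0.4:10 Hz)
    new = np.abs(whitening(f, 0.1, 5.3))         # post-ECR whitening (0.1:5.3 Hz)
    for fi, o, n in zip(f, old, new):
        print(f"{fi:5.1f} Hz: old {o:6.2f}, new {n:6.2f}, ratio {n / o:.2f}")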

As these two watchdog trigger systems are currently mis-calibrated, I put all references to their RMS amplitudes in quotes, i.e. ["um"]_RMS for the USER WDs and ["mV"]_RMS for the SWWDs, and quote a *change* in value when possible.
Note -- any quote of OSEM sensors (i.e. the OSEM-basis OSEMINF_{OSEM}_OUT_DQ and EULER-basis DAMP_{DOF}_IN1_DQ channels) in [um] is correctly calibrated, and the ground motion sensors (and any band-limited derivatives thereof; the BLRMS and PeakMons) are similarly well-calibrated.

Also: The L2 to R0 tracking went into oscillation because the USER WDs didn't trip. AGAIN -- we really need to TURN OFF this loop programmatically until high in the lock acquisition sequence. It's too hidden -- from a user interface standpoint -- for folks to realize that it should never be used, and is always suspect, when the SUS system is barely functional (e.g. when we're vented, or after a power outage, or after a CDS hardware / software change, etc.).

Here's the timeline leading up to the first SUS/SEI software watchdog trip, which helped us understand that there's nothing wrong with the software / electronics / hardware; instead, it was the giant EQ that tripped things originally, and the subsequent trips were because of an overlooked watchdog trigger sensor vs. threshold mis-calibration coupled with the R0 tracking loops.
2025-12-04 
    20:25 Sitewide Power Outage.
    22:02 Power back on.

2025-12-05
    02:35 SUS-ETMY watchdog untripped, suspension recovery
    20:38 SEI-ETMY system back to FULLY ISOLATED (large gap in recovery between SUS and SEI due to SEI GRD non-functional because the RTCDS file system had not yet recovered)
    20:48 Locking/Initial alignment start for recovery.

2025-12-06 
    20:41:49 Huge 7.0 Mag EQ in Alaska

    20:46:30 First P- and S-waves hit the observatory; the corner station peakmon (in Z) is around 15 [um/s]_peak (30-100 mHz band).
             SUS-ETMY sees this larger motion; motion on the M0 OSEM sensors in the 0.1 to 10 Hz band increases from 0.01 ["um"]_RMS to 1 ["um"]_RMS.
             SUS-SWWD, using the same sensors in the same band but calibrated into ADC volts, increases from 0.6 ["mV"]_RMS to ~5 ["mV"]_RMS.

    20:51:39 ISI-ETMY ST1 USER watchdog trips because the T240s have tilted off into saturation, killing ST1 isolation loops
             SUS-ETMY sees the large DC shift in alignment from the "loss" of ST1, and 
             SUS-ETMY sees the very large motion, increasing to ~100 ["um"]_RMS (with the USER WD threshold set to 150 ["um"]_RMS) -- the USER WD never trips. But -- peak motion is oscillating in the 300 ["um"]_peak range (though not close to saturating the ADC).
             SUS-SWWD reports an RMS voltage increase to 500 ["mV"]_RMS (with the SWWD threshold set to 110 ["mV"]_RMS) -- starts the alarm count-down of 600 [sec] = 10 [min].

    20:51:40 ISI-ETMY ST2 USER watchdog trips ~0.5 sec later as the GS13s go into saturation, and actuators try hard to keep up with the "missing" ST1 isolation
             SUS-ETMY really starts to shake here. 

    20:52:36 The peak Love/Rayleigh waves hit the site, with the corner station Z motion peakmon reporting 140 [um/s], and the 30 - 100 mHz BLRMS reporting 225 [um/s].
             At this point it's clear from the OSEMs that the mechanical system (either the ISI or the QUAD) is clanking against its earthquake stops, as the OSEMs show saw-tooth-like waveforms.

    20:55:39 SWWD trips for suspension, shutting off suspension DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             The SUS is still ringing, naturally recovering from the still-large EQ motion and the uncontrolled ISI.
    
    20:59:39 SWWD trips for seismic, shutting off all DAC output for HEPI and ISI ETMY
             SUS-ETMY OSEMs don't really notice -- the suspension is still naturally ringing down with a LOT of displacement. There is a noticeable small alignment shift as HEPI sloshes to zero.

    21:06    SUS-ETMY SIDE OSEM stops looking like a saw-tooth, the last one to naturally ring down. After this, all SUS signals look wobbly, but normal.
             ISI-ETMY ST2 GS-13 stops saturating
 
    21:08    SUS-ETMY LEFT OSEM stops exceeding the SWWD threshold, the last one to do so.

2025-12-07
    00:05    HPI-ETMY and ISI-ETMY USER WDs are untripped, though it was a "tripped again; reset" messy restart for HPI because we didn't realize that the SWWD needed to be untripped.
             The SEI manager state was trying to get back to DAMPED, which includes turning on the ISO loops for HPI.
             Since no HPI or ISI USER WDs know about the SWWD DAC shut-off, they "can begin" to do so, "not realizing" there is no physical DAC output.
             The ISI's local damping is "stable" without DACs because there's just not a lot that these loops do and they're AC coupled.
             HPI's feedback loops, which are DC coupled, will run away.

    00:11    SUS and SEI SWWD is untripped

    00:11:44 HPI USER WD untripped, 

    00:12    RMS of OSEM motion begins to ramp up again, the L / P OSEMs start to show an oscillation at almost exactly 2 Hz.
             The R0 USER WD never tripped, which allowed the H1 SUS ETMY L2 (PUM) to R0 (TOP) DC coupled longitudinal loop to flow out to the DAC.
             with the Seismic system in DAMPED (HEPI running, but ST1 and ST2 of the ISIs only lightly damped), and
             with the M0 USER WD still tripped and the main chain without any damping or control,
             after HEPI turned on, causing a shift in the alignment of the QUAD, changing the distance / spacing of the L2 stage, and
             the L2 "witness" OSEMs started feeding back the undamped main chain L2 motion to the reaction chain R0 stage, and it slowly began oscillating in positive feedback. See the R0 turn ON vs. SWWD annotated screenshot.
             Looking at the recently measured open loop gain of this longitudinal loop -- taken with the SUS in its nominally DAMPED condition and the ISI ISOLATED -- there's a damped mode at 2 Hz.
             It seems very reasonable that this mode is a main chain mode, and when undamped it would destroy the gain margin at 2 Hz and go unstable. See the R0Tracking_OpenLoopGain annotated screenshot from LHO:87529.
             And as this loop pushes on the main chain, with an only-damped ISI, it's entirely plausible that the R0 oscillation coupled back into the main chain, causing a positive feedback loop.
             
    
    00:22    The main chain OSEM RMS exceeds the SWWD threshold again, as the positive feedback gets out of control peaking around ~300 ["mV"]_RMS, and the USER WD says ~100 ["um"]_RMS. Worst for the pitch / longitudinal sensors, F1, F2, F3.
             But again, this does NOT trip the R0 USER WD, because the F1, F2, F3 R0 OSEM motion is "only" 80 ["um"]_RMS still below the 150 ["um"]_RMS limit.

    00:27    SWWD trips for suspensions AGAIN as a result, shutting off all DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             THIS kills the R0 tracking drive, and with it the positive feedback.
    
    00:31    SWWD trips for seismic AGAIN, shutting off all DAC output for HEPI and ISI ETMY

    15:59    SWWDs are untripped, and because the SUS USER WD is still tripped, the same L2 to R0 instability happens again.
             This is where the impression that "the watchdogs keep tripping; something's broken" entered in.
             
    16:16    SWWD for sus trips again
    
    16:20    SWWD for SEI trips again 

2025-12-08
    15:34    SUS-ETMY USER WD is untripped, main chain damping starts again, and recovery goes smoothly.
    
    16:49    SUS-ETMY brought back to ALIGNED
    
Images attached to this report
Non-image files attached to this report
H1 CDS (SUS)
filiberto.clara@LIGO.ORG - posted 22:01, Thursday 04 December 2025 (88375)
Power Supply Failure for EY SUS-C1/C2 Racks

The positive 18V power supply, which provides power to racks SUS-C1 and SUS-C2, failed after the power outage. The fan seized and the power supply tripped off. The power supply was replaced.

New Power Supply  installed - S1300291

F. Clara, M. Pirello

H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 18:27, Thursday 04 December 2025 - last comment - 18:57, Thursday 04 December 2025(88369)
SUS - power outage recovery

Oli, Ryan S, Rahul

Except for ETMX and ETMY (timing error, work currently ongoing), we have recovered all other suspensions by un-tripping their WDs and setting them to SAFE for tonight. The INMONs look fine - I eyeballed them all (BOSEMs and AOSEMs), nothing out of the ordinary.

For ETMX and ETMY, Dave is currently performing a computer restart, following which they will be set to SAFE as well.

 

Comments related to this report
ryan.short@LIGO.ORG - 18:57, Thursday 04 December 2025 (88371)

Once CDS reboots were finished, I took all suspensions to either ALIGNED or MISALIGNED so that they're damped overnight.

H1 ISC (CAL, GRD, SUS)
elenna.capote@LIGO.ORG - posted 16:58, Wednesday 03 December 2025 (88346)
Locking Summary

[Sheila, Jeff, Elenna, Corey]

Today we relocked with some issues. We think that the soft close of the gate valves shook the ITM green cameras enough that the green references were no longer good, which caused many alignment problems through CARM offset reduction.

The SRC ASC is OFF in DRMI ASC at this time, by having the use_DRMI_ASC['SRC1'/'SRC2'] flag set to False. When locking, SRM and SR2 might need to be moved by hand before offloading DRMI ASC.
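Purely as a hypothetical illustration of what a per-loop enable flag like use_DRMI_ASC looks like in use (this is not the actual ISC guardian code, and the loop names other than SRC1/SRC2 are assumed):

    # Hypothetical sketch only -- the real flag lives in the DRMI ASC guardian logic.
    use_DRMI_ASC = {'PRC1': True, 'PRC2': True, 'MICH': True,
                    'SRC1': False, 'SRC2': False}   # SRC ASC left off for now

    def engage_drmi_asc(flags):
        for loop, enabled in flags.items():
            if not enabled:
                continue   # SRC1/SRC2 skipped; SRM/SR2 get moved by hand instead
            print(f"engaging DRMI ASC loop {loop}")

    engage_drmi_asc(use_DRMI_ASC)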

Sheila will write a more detailed alog about the process of updating the green references, because we think we have a better method now. In summary, we have updated the green camera references, SDFed them in safe, and run an initial alignment after the lockloss.

Other guardian changes:

Tony, Oli, and I have been watching violin mode damping. So far, there don't seem to be any problems.

By eye, Jeff and I think the PCAL X crosses do not match the line height in CAL DELTA L, so there may be some calibration mismatch. I will run a cal measurement when we are thermalized.

The squeezer is not working; Daniel and others are trying to troubleshoot now.