Comparing now to a time before the power outage, the ISS diffracted power is the same (3.6%). The PMC transmission has dropped by 2%, and the PMC reflection has increased by 3.5%.
Looking at IMC refl, compared to before the power outage when the IMC was locked with 2W input power:
time | IMC refl | IMC refl % of before outage | MC2 trans | MC2 trans % of before outage |
9/10 8:12 UTC (before power outage, 2W IMC locked) | 0.74 | (reference) | 317 | (reference) |
9/11 00:22 UTC (IMC relocked at 2W after outage) | 1.15 | 155% | 312 | 98% |
9/11 1:50 UTC (after first 60W and quick lockloss, IMC relocked at 2W) | 1.27 | 171% | 310 | 97% |
9/11 18:17 UTC (after overnight 60W lock, one IMC ASC loop off) | 2.06 | 278% | 278 | 87% |
9/11 19:22 UTC (after all IMC ASC on) | 2.13 | 287% | 301 | 95% |
9/11 21:17 UTC (after sitting at 2W for 3 hours) | 2.03 | 274% | 303 | 95% |
The attached screenshot shows that the ISS second loop increased the IMC input power during the overnight 60W lock to keep the IMC circulating power constant.
After the lockloss today, we took the IMC offline for a bit, and I moved the IMC PZTs back to around where they had been before the outage. The time today with the IMC offline was September 11, 2025 18:07:32 - 18:11:32 UTC. I then found the last time from before the outage when we had the IMC offline, September 02, 2025 17:03:02 - 17:19:02 UTC, and verified that the pointing for MC1 P and Y was about the same and that the IMC PZTs were in the same general area.
I then looked at IMC-WFS_{A,B}_DC_{PIT,YAW,SUM}_OUT during these times. The dtt, where blue is the Sept 2 time and red is the time from today, seems to show similar traces. The ndscopes (Sept 2, Sept 11) show many of those channels to be in the same place, but a few aren't: H1:IMC-WFS_A_DC_PIT_OUT has changed by 0.3, H1:IMC-WFS_A_DC_YAW_OUT has changed by 0.6, and H1:IMC-WFS_B_DC_YAW_OUT has changed by 0.07. However, after this we relocked the IMC for a while and then took it offline again between 19:25:22 - 19:31:22 UTC, and these values changed. I've added that trace to the dtt in green, and it still looks about the same, though many of the WFS values have changed.
I don't really think this is related to the poor range, but it seems that one of the CPSs on HAM3 has excess high-frequency noise and has been noisy for a while.
The first image shows 30+ day trends of the 65-100 Hz and 130-200 Hz BLRMS for the HAM3 CPSs. Something happened about 30 days ago that caused the H2 CPS to get noisy at higher frequency.
The second image shows RZ location trends for all the HAM ISIs for the last day, around the power outage. HAM3 shows more RZ noise after the power outage.
The last image shows ASDs comparing the HAM2 and HAM3 horizontal CPSs. HAM3 H2 shows much more noise above 200 Hz.
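For reference, a minimal sketch of how a band-limited RMS trend like this can be reproduced offline with gwpy; the channel name, times, and stride below are placeholders for whichever HAM3 CPS channel is being trended.

```python
# Sketch: band-limited RMS (BLRMS) trend of an ISI CPS channel with gwpy.
# The channel name here is an assumed placeholder, not necessarily the exact
# channel used for the attached trends.
from gwpy.timeseries import TimeSeries

chan = 'H1:ISI-HAM3_CPSINF_H2_IN1_DQ'   # assumed HAM3 H2 CPS channel
data = TimeSeries.get(chan, 'Sep 10 2025 00:00', 'Sep 11 2025 00:00')

# Band-pass into the two bands shown in the first image, then take an
# RMS trend with a 60 s stride.
blrms_65_100 = data.bandpass(65, 100).rms(60)
blrms_130_200 = data.bandpass(130, 200).rms(60)

blrms_65_100.plot(ylabel='BLRMS [counts]')
```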
Since finding this, I've tried power cycling the CPS on HAM3 and reseating the card, but so far that has not fixed the noise. Since this has been going on for a while, I will wait until maintenance to try to either fix or replace the card for this CPS.
I've replaced the noisy CPS and adjusted the CPS setpoint to maintain the global yaw alignment: I looked at the free-hanging (ISO loops off) position before and after the swap and changed the RZ setpoint so that the delta between the isolated and free-hanging RZ position was the same with the new sensor. The new sensor doesn't show either the glitching or the high-frequency noise that the old sensor had. I also changed the X and Y setpoints, but those only changed by a few microns and should not affect IFO alignment.
Tony, TJ, Dave:
After the power outage the CS dust monitors (Diode room, PSL enclosure, LVEA) started recording very large numbers (~6e+06 PCF). TJ quickly realized this was most probably a problem with the central pump and resolved that around 11am today.
2-day trend attached.
The pump was running especially hot and the gauge showed no vacuum pressure. I turned off the pump and checked the hose connections. The filter for the bleed valve was loose, the bleed screw was fully open, and one of the pump filters was very loose. After checking these I turned it back on and it immediately sucked down to 20 inHg. I trimmed it back to 19 inHg and then rechecked a few hours later to confirm it had stayed at that pressure. The pump was also running much cooler at that point.
Sheila, Ryan S, Tony, Oli, Elenna, Camilla, TJ ...
From Derek's analysis 86848, Sheila was suspicious that the ISS second loop was causing the glitches, see attached. Also attached is the ISS turning on in a normal lock vs. in this low-range glitchy lock; it's slightly more glitchy in this lock.
The IMC OLG was checked 86852.
Sheila and Ryan unlocked the ISS 2nd loop at 16:55 UTC. This did not cause a lockloss, although Sheila found that the IMC WFS sensors saw a shift in alignment, which is unexpected.
See the attached spectrum of IMC_WFS_A_DC_YAW and LSC_REFL_SERVO (identified in Derek's search); red is with the PSL 2nd loop ON, blue is with it off. There are no large differences, so maybe the ISS 2nd loop isn't to blame.
We lost lock at 17:42 UTC; unknown why, but the buildups started decreasing 3 minutes before the lockloss. It could have been from Sheila changing the alignment of the IMC PZT.
Sheila found IMC REFL DC channels are glitching whether the IMC is locked or unlocked, plot attached. But this seems to be the case even before the power outage.
Once we unlocked, Oli put the IMC PZTs back to their location from before the power outage; attached is what the IMC cameras look like locked at 2W after the WFS converged for a few minutes.
Here are three spectrograms of the IMC WFS DC YAW signal: before the power outage, after the power outage with the ISS on, after the power outage with the ISS off.
I don't see glitches in the before outage spectrogram, but I do see glitches in BOTH the ISS on and ISS off spectrograms. The ISS off spectrogram shows more noise in general but it appears that it has the same amount of nonstationary noise.
IM4 trans with IMC locked at 2W before the power outage was PIT, YAW: 0.35,-0.11. After the outage PIT, YAW: 0.39,-0.08. Plot attached. These are changes of 0.04 in PIT, 0.03 in YAW.
Oli and Ryan found that the alignment on the IMC REFL WFS was the same but PSL-ISS_SECONDLOOP was different; this makes us think the alignment change is in the MC or IMs. The IM OSEMs haven't changed considerably.
Ryan, Sheila
Ryan and Sheila noticed that PSL-PMC_TEMP changed ~0.5 deg without the setpoint changing. After looking into this, we compared to the April power outage, where it came back 1 deg cooler. We therefore don't think this is causing us any issues. Both power outages are plotted in the attachment.
We did notice, though, that some glitches in PMC-TEMP_OUT started ~31st May 2025 and have been present since; plot attached. Tagging PSL.
Thu Sep 11 10:08:19 2025 INFO: Fill completed in 8min 15secs
Oli, Sheila, Elenna
We went out to the racks and hooked up the SR785 to measure the IMC olg. The results show close to an 80 kHz UGF (see attachment). This lines up with the results from the last time this was measured by Sheila and Vicky here.
We also checked the demod phase by measuring the ratio of Q/I, which we decided should be a small number. We found Q/I was -22 dB, so we think that seems reasonable, but we don't have anything to compare against.
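For a rough sense of scale (assuming the PDH signal is nominally entirely in I), a Q/I ratio of -22 dB corresponds to 10^(-22/20) ≈ 0.08, i.e. a residual demodulation phase error of roughly arctan(0.08) ≈ 4.5 degrees.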
TITLE: 09/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
Range seems low, but SQZ_MAN is in FREQ_DEP_SQZ.
The SQZ ASD plot on nuc 33 wasn't running and when I logged in to hit start, it gave me a connection error.
After a reboot of Nuc 33, the high-frequency squeezing could be looking better.
The H(t) Triggers screen looks WILD... some frequencies have had issues all night.
We got the high-frequency squeezing back to its nominal -4.5 dB: we noticed that ZM6 PIT had changed 250 urad in the power outage and that the SCAN_ALIGNMENT_FDS state didn't have the range to bring it back. Nothing else changed more than ~30 urad. I moved ZM6 back to its old alignment and reran SCAN_ALIGNMENT_FDS, then ran SCAN_ANGLE_FDS and checked that H1:SQZ-ADF_OMC_TRANS_SQZ_ANG was close to 0 (it was -1.5) before going back to FREQ_DEP_SQZ, which turns the ADF angle servo back on.
Our range then increased to 145 Mpc.
Related to the poor range challenges reported in 86844, there is an extremely high rate of glitches, close to 1 glitch with SNR > 8 every 10 seconds, an increase in rate of 2 orders of magnitude compared to the day before. The first HVeto run of the day shows that H1:IMC-WFS_A_DC_YAW_OUT_DQ is correlated with these glitches at a significance of 850 (10-20 is the usual threshold for an "interesting" significance) and witnesses 70% of all of the glitches in the most recent lock.
I've attached an example of these glitches in strain data and the mentioned IMC WFS channel, showing that there is a clear visual correlation between them.
Other channels that are correlated with these glitches are H1:LSC-MCL_IN1_DQ, H1:IMC-DOF_4_P_IN1_DQ, and H1:LSC-POP_A_RF9_I_ERR_DQ (among many other IMC, LSC, ASC, and PSL channels). The full HVeto results can be found here.
One additional observation is that the rate of these glitches is so high that it is possible they could be subtracted post-facto with standard linear subtraction techniques.
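As a rough illustration of what such a linear subtraction could look like (not a claim about which pipeline would actually be used), here is a minimal frequency-domain Wiener-style sketch, assuming the strain and witness (IMC WFS) time series have already been fetched as equal-length numpy arrays at a common sample rate; all names and parameters here are placeholders.

```python
# Sketch: coherence-based (Wiener-style) subtraction of a witness channel
# from a target channel. Variable names and parameters are placeholders.
import numpy as np
from scipy.signal import csd, welch

def estimate_tf(witness, target, fs, nperseg):
    """Estimate the witness->target transfer function H(f) = Pxy / Pxx."""
    f, pxy = csd(witness, target, fs=fs, nperseg=nperseg)
    _, pxx = welch(witness, fs=fs, nperseg=nperseg)
    return f, pxy / pxx

def subtract(witness, target, fs, nperseg):
    """Subtract the coherent part of the witness from the target (FFT domain)."""
    f, H = estimate_tf(witness, target, fs, nperseg)
    # Interpolate H onto the FFT frequencies of the full record
    freqs = np.fft.rfftfreq(len(target), d=1 / fs)
    H_full = np.interp(freqs, f, H.real) + 1j * np.interp(freqs, f, H.imag)
    prediction = np.fft.irfft(H_full * np.fft.rfft(witness), n=len(target))
    return target - prediction
```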
TITLE: 09/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in NLN and OBSERVING (quite poorly) - We're going to OBS with low range (alog 86844)
Post-outage recovery allowed us to get into NLN, but as we got there our range was sitting at a cool 103 Mpc, with a few culprits identified by Elenna and Sheila.
Getting to NLN was easy enough, with one issue being a shutter test guardian issue that was fixed after a few INITs, and another being the EY HV being off, which caused a lockloss when it was turned back on (this happened at OMC_WHITENING): alog 86842. Elenna and Camilla helped me reconcile ~100 SDF diffs (alog 86843 and attached).
SQZ:
SQZ was and still is bad, but we managed to improve it by running the OPO temp and angle adjustments (manual and auto), improving the range up to 130 Mpc. Unknown why it's still not the same as before.
IMC:
The IMC was the other point of discussion, but we don't really know if this is a problem. Sheila and Elenna were stepping through the IMC PZT whilst watching MC_REFL (which looked different). Ultimately we still don't know.
We do know that our calibration is fine, so it's just high noise that is keeping us 25 Mpc below our usual. This noise seems to be coming from everywhere according to the coherence checks (attached).
Commissioning will continue tomorrow to fix whatever is causing this. On the bright side, the high violins damped while we were locked.
LOG:
Our range is very low even with squeezing (about 130 Mpc). There seem to be some problems with squeezing, but there is also significant jitter coherence from 10-40 Hz that is abnormal. The noise on the IMC WFS has increased by about 2x at low frequency but decreased at high frequency, so there is much more power on them now. Sheila trended IMC refl and MC2 trans: IMC refl is higher than normal and MC2 trans is lower than normal. We can also see that the MC refl camera "looks different" (calling this "tea leaf reading"). The PRCL and SRCL coherence is also much higher than normal.
"It just looks like there is not much squeezing"- Sheila
We've tried some different things to move the IMC alignment around, but nothing seems to be working and we are tired. Our plan is to come back to this in the morning.
We think that something from today's power outage has caused these problems, but we are not sure what.
Screenshots include:
low range coherence check from Ibrahim
IMC WFS spectrum comparison (blue is yesterday, red is now)
sqz comparison; this also shows that squeezing is not causing the low frequency extra noise
Here is a longer measurement of the IMC WFS and LSC coherence with calib strain. I can't get the coherence to run with calib strain clean right now. I'm not sure how well the cleaning is performing, but I do know that the strain and clean channels look slightly different above 100 Hz, so some jitter is being removed.
Looking at IMC REFL DC at 2W, it was 60% higher after the power outage than before; MC2 trans is 2% lower than before the power outage.
At 60W input power, IMC REFL is 190% of what it was before the power outage, MC2 trans is 96% of what it was.
We trended the MC suspensions and PZT drive; the suspensions are within the range of where they were before the power outage, but the PZT is different. We tried opening the PZT loop (MC2 trans to PZT), increasing the MC WFS gain from 0.04 to 0.16, and walking the PZT offset. It seemed like moving this in either direction in yaw made things worse; moving the PZT back towards its old position in pitch also made things worse.
One suspicion could be that the beam from the PSL has shifted in alignment.
We had several SDF diffs. I think most of the issues could have been due to the safe snap being different than observe and the reboot from the power outage restoring old values. Specifically, the OAF online cleaning model coefficients for the jitter cleaning were all changed. I trended back the values and they had taken on the old coefficients from before I retrained the jitter cleaning. There was a similar issue with the TCS sim model, which Matt had updated recently.
After I accepted the OAF and TCS sim values in OBSERVE, I loaded the safe.snap to see if I could update them in safe. However, the safe.snap file did not have the same differences as in observing: the OAF model had 71 diffs from the jitter coefficients in observe, but in safe there are 6 diffs and none of them are the jitter coefficients.
Meanwhile, if I do the same thing for the TCS sim model and load the safe.snap, there are NO diffs.
So I am wondering if this is a weird "burt restore" issue due to the power outage, and whatever file was restored was very old and/or wrong. Tagging CDS so we can figure out what happened.
Camilla advised Ibrahim to revert some of the diffs in the HWS model.
We also had to change the PZT offset for the IMC PZT during the recovery process, which we had already accepted in safe so I also accepted in OBSERVE.
Attached are the TCS sim and ASCIMC acceptances. I didn't screenshot the OAF acceptance because the list was very long (but now I wish I had so we could figure out what went wrong with the restore).
With help from E. Bonilla, M. Todd, and S. Dwyer
Some background:
The HARD loop open loop gain transfer function can provide information about the arm power. The radiation pressure within the arm cavity adds an additional torsional stiffness term that stiffens the hard mode and softens the soft mode, causing the eigenmodes of the suspension to shift up (hard mode) or down (soft mode) in frequency.
We can measure this shift in frequency by taking the open loop gain of the hard loop and dividing out the known digital controller to measure the high-power hard plant.
The highest frequency pole in the suspension plant is the most interesting to study, as this is the hard mode that is most susceptible to the arm power. Edgard's technical document on the Sidles Sigg modes in the BHQS, T2300150, gives a good explanation as to why. Note that his document covers the BHQS design, which differs from the QUAD design in several respects, but the underlying physics is the same. Figure 2 on page 3 demonstrates the behavior of the suspension eigenmodes with respect to increasing intracavity power, demonstrating that the highest frequency hard mode will shift in frequency the most compared to the other modes (the opposite is true for the soft mode). Figure 2 will appear different for the QUAD model in pitch due to the large cross coupling of length to pitch (which mixes length modes into the shifting as well, yuck).
Some equations:
Following his document, we can use Equation 4 as a first order approximation for the hard mode frequency:
f_hard = 1/(2*pi) * sqrt((k_4 + k_hard) / I_4)
where k_4 is the torsional stiffness of the fourth eigenmode of the QUAD, k_hard is the torsional stiffness induced by the radiation pressure torque and I_4 is the moment of inertia of the fourth eigenmode of the QUAD.
k_hard can be shown to depend on arm power via:
k_hard = P_arm * L_arm / c * gamma_hard
where gamma_hard = ((ge + gi) - sqrt((ge - gi)^2 + 4)) / (ge*gi - 1), the sign choice that gives the stiffening (hard-mode) branch and reproduces the arm powers quoted below
and P_arm is the arm cavity power, L_arm is the arm cavity length, and ge,i is the g factor of the ETM or ITM (ge,i = 1 - L_arm / Re,i)
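As a sanity check of these expressions, here is a minimal Python sketch of the forward model (hard-mode frequency as a function of arm power), using the QUAD parameters and hot X-arm g factors quoted later in this entry; the arm length is assumed to be the nominal 3994.47 m.

```python
# Sketch: hard-mode frequency vs. arm power from the expressions above.
import numpy as np

c = 299_792_458.0        # speed of light [m/s]
L_arm = 3994.47          # arm length [m] (nominal value, assumed)
I_4 = 0.419              # moment of inertia of the 4th QUAD eigenmode [kg m^2]
k_4 = 19.7118            # torsional stiffness of the 4th eigenmode [N m/rad]

def gamma_hard(ge, gi):
    """Geometric factor for the hard mode (minus sign before the sqrt)."""
    return ((ge + gi) - np.sqrt((ge - gi) ** 2 + 4)) / (ge * gi - 1)

def f_hard(P_arm, ge, gi):
    """Hard-mode frequency [Hz] for a given arm power [W]."""
    k_hard = P_arm * L_arm / c * gamma_hard(ge, gi)
    return np.sqrt((k_4 + k_hard) / I_4) / (2 * np.pi)

# Example with the hot X-arm g factors quoted below:
print(f_hard(332.1e3, ge=-0.7781, gi=-1.0485))   # ~2.60 Hz
```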
The arm power estimate will then depend on the measured hard mode frequency, the QUAD stiffness and moment of inertia, and the g factors of the test masses (i.e., their radii of curvature). The last of these is what makes this measurement somewhat degenerate; we know that the radii of curvature of the test masses change from their design values due to the applied ring heater power and the absorbed power from self heating. However, Matt has been working lately to understand what these values are by checking the higher order mode spacing and finesse models. If we use the higher order mode spacing, and known measurements of the absorbed power from the ITM Hartmann wavefront sensors, we can remove some of the degeneracy.
Estimating the test mass RoCs:
In alog 86107, Sheila estimates the location of the X and Y arm higher order modes:
Using ge*gi = cos^2(pi*f_HOM/FSR) I find that the ge*gi for the X arm is 0.8158 and the Y arm is 0.8178
Matt reports that the ITMX absorbed power is 160 mW and ITMY is 140 mW. His new estimate for the coupling of the self heating is -20.3 uD/W for the ITMs (G2501909, slide 11). The ITM radii of curvature are reported on galaxy as 1940.3 m for ITMX and 1940.2 m for ITMY.
The ITM defocus when we are in the hot state can be calculated via
D = Dc + B * P_rh + Ai * P_self
where Dc = 1/R_cold, B is the ring heater coupling factor in D/W, P_rh is the known ring heater power applied to the test masses, Ai is the coupling above, and P_self is the absorbed power reported above
This gives the hot RoCs for the ITMs as 1949.94 m for ITMX and 1950.96 m for ITMY.
Using the product of the g factors above, we can then estimate the hot RoCs for the ETMs as 2246.54 m for ETMX and 2243.08 m for ETMY.
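A short sketch of this RoC bookkeeping (going from the ITM hot RoC plus the ge*gi product to the ETM hot RoC), assuming the nominal 3994.47 m arm length; this just reproduces the arithmetic above for the X arm, it is not a new tool.

```python
# Sketch: hot RoC bookkeeping for the X arm.
L_arm = 3994.47                  # arm length [m], nominal value (assumed)

R_itmx_hot = 1949.94             # ITMX hot RoC [m] from the defocus relation above
gegi_x = 0.8158                  # from ge*gi = cos^2(pi * f_HOM / FSR)

g_itmx = 1 - L_arm / R_itmx_hot          # ITMX g factor
g_etmx = gegi_x / g_itmx                 # ETMX g factor from the product
R_etmx_hot = L_arm / (1 - g_etmx)        # ETMX hot RoC

print(g_itmx, g_etmx, R_etmx_hot)        # ~ -1.0485, -0.778, ~2246.5 m
```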
Calculating the arm power:
These numbers give the following g factors:
Mirror | g factor |
ITMX | -1.0485 |
ITMY | -1.0474 |
ETMX | -0.7781 |
ETMY | -0.7808 |
Using the QUAD model parameters I found in /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/MatlabTools/QuadModel_Production/h1itmy.m and some help from Edgard, I found:
I_4 | 0.419 kg m^2 |
k_4 | 19.7118 Nm |
I remeasured both the CHARD pitch and DHARD pitch transfer functions. I divided out the current controllers and used the InteractiveFitting program written by Gabriele to fit the plant. Both measurements give the same result: f_hard = 2.603 Hz
Combining all of the above numbers gives:
P_arm = 332.1 kW, when using the X arm parameters
P_arm = 328.3 kW, when using the Y arm parameters
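A minimal sketch of the inversion used here (measured f_hard back to an arm power), which reproduces the X-arm number above to within rounding; the arm length is again assumed to be the nominal 3994.47 m.

```python
# Sketch: invert the measured hard-mode frequency to an arm power estimate.
import numpy as np

c = 299_792_458.0
L_arm = 3994.47                 # [m], nominal value (assumed)
I_4, k_4 = 0.419, 19.7118       # QUAD 4th-eigenmode parameters from above
f_meas = 2.603                  # measured hard-mode frequency [Hz]

def gamma_hard(ge, gi):
    return ((ge + gi) - np.sqrt((ge - gi) ** 2 + 4)) / (ge * gi - 1)

# Radiation-pressure stiffness implied by the measured frequency
k_hard = I_4 * (2 * np.pi * f_meas) ** 2 - k_4

# X-arm g factors from the table above
P_arm_x = k_hard * c / (L_arm * gamma_hard(ge=-0.7781, gi=-1.0485))
print(P_arm_x / 1e3)            # ~332 kW
```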
To be clear, Sheila and I don't think these are the arm powers in the X and Y arm, since the f_hard value will depend on the average power between the arms in the CHARD and DHARD transfer functions. Instead, these values provide a possible range of arm powers.
I am currently reporting these values without any uncertainty (bad!), since the uncertainty will depend on the measurement uncertainty of the OLG, the HOM spacing, and self heating estimate. Once I have a better sense of all of these uncertainties, I will update here.
Furthermore, there are higher order corrections that can be applied to the estimate of the hard mode frequency. For example, Eq 22 in Edgard's document estimates the additional effect due to the effective spring between the PUM and test mass. However, that estimate is not exactly correct for the QUAD model, since the higher order correction will need to account for the length-to-pitch cross coupling. The yaw model may be simpler to use, so I plan to remeasure the HARD yaw OLGs and use them to calculate another arm power estimate.
Overall, it is interesting to put this result in the context of our other arm power estimates.
I stated that it may be simpler to use the yaw mode measurement to calculate the arm power; however, that is not possible. Attached is a figure that demonstrates the hard and soft yaw mode shift with arm power using the QUAD model (Fig 68 in my thesis). We believe we are somewhere in the 300-400 kW region of this plot. At these powers, the yaw hard and soft modes have not fully decoupled from each other, which means that the approximation that we can use this mode to calculate the arm power is not valid. Edgard's technical document goes into further detail about this approximation. This is validated by the open loop gain measurements I made for DHARD and CHARD Y here, which show that the mode is still near 3 Hz.
However, the pitch mode has fully decoupled, so the values I report above are valid, excepting whatever higher order correction is required from the length-to-pitch cross coupling.
Incidentally, this probably means we could damp the yaw hard mode from the top mass at this power, but that is an entirely different discussion.
As we got back up towards observing and entered the OMC_WHITENING state, I saw a guardian message that "ETMY HV ESD appears off". Looking at the suspension screen, I saw that even though the ETMY bias voltage was set, the H1:SUS-ETMY_L3_ESDAMON_DC_OUT16 channel was reading zero. Trending it back, I saw it should be reading something like 200.
Ibrahim, Keita, and I decided that the button we should probably press was the "HV ON/OFF" switch at the bottom of the ETMY screen which is the H1:SUS-ETMY_BIO_L3_RESET switch. Keita said "this is probably going to cause a lockloss" and I said "yeah probably" and then I pressed it and it caused a lockloss. However, I don't think we could have gone into observing like that, because that bias voltage is set to cancel out ground noise. In hindsight, Keita and I realized we should have probably: 1) set the bias offset to zero, 2) turned on the HV switch, 3) then slowly ramped up the ETMY ESD bias offset to the nominal value.
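For illustration only, a minimal sketch of what that hindsight procedure might look like if scripted with pyepics; the PV name, nominal value, and ramp parameters here are hypothetical placeholders, not the actual settings.

```python
# Sketch: slowly ramp the ETMY ESD bias back to nominal after re-enabling HV.
# The PV name and values below are hypothetical placeholders.
import time
from epics import caput   # pyepics

BIAS_PV = 'H1:SUS-ETMY_L3_LOCK_BIAS_OFFSET'   # placeholder PV name
NOMINAL = 4.9e5                                # placeholder nominal bias value

# Hindsight procedure: 1) zero the bias offset, 2) turn the HV switch on by
# hand, 3) ramp the offset back up slowly.
caput(BIAS_PV, 0)
# ... operator turns the HV switch on here ...
nsteps = 50
for i in range(1, nsteps + 1):
    caput(BIAS_PV, NOMINAL * i / nsteps)
    time.sleep(2)      # a gentle ~100 s ramp overall
```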