Since the power outage, when we've been in the higher 2W locking states, we have been seeing some 'breathing' of a spot in the upper right of the PRM camera. Before the power outage, there was some scattering seen there, but it looked different (now it looks like a thumbprint and has a clear outline when fully there) and didn't 'breathe' like we see now.
Following the steps detailed in 86935, we were able to get to the engage ASC for full IFO state.
I ran the code for engaging IFO ASC by hand, and there were no issues. I did move the alignment around by hand to make sure the buildups were good and the error signals were reasonable. Ryan reset the green references once all the loops, including the soft loops, were engaged.
We held for a bit at 2W DC readout to confer on the plan. We decided to power up and monitor IMC REFL. We checked that the IMC REFL power made sense:
I ran guardian code to engage the camera servos so we could see what the low frequency noise looked like. It looked much better than it did the last time we were here!
We then stopped just before laser noise suppression. With IMC REFL down by half, we adjusted many gains up by 6 dB. We determined that the check on line 5939, where the IMC REFL gain is required to be below 2, should now require it to be below 8. I updated and loaded the guardian.
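For reference, here is a minimal sketch of the kind of check that changed (the channel name and function here are placeholders, not the actual ISC_LOCK code; the gains are presumably in dB, so raising them by 6 dB moves the threshold from 2 to 8):

    # Hypothetical sketch of the guardian check, not the actual ISC_LOCK code
    IMC_REFL_GAIN_MAX = 8  # was 2 before the IMC REFL attenuation and +6 dB gain change

    def imc_refl_gain_ok(ezca):
        # 'IMC-REFL_SERVO_IN1GAIN' is a placeholder channel name
        return ezca['IMC-REFL_SERVO_IN1GAIN'] < IMC_REFL_GAIN_MAX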
We ran laser noise suppression with no issues.
Then I realized that we actually want to increase the power out of the PSL so that the power at IM4 trans matches the value before the power outage; due to the IMC issues, that power has dropped from about 56 W to about 54 W.
I opened the ISS second loop with the guardian, and then stepped up PSL requested power from 60 W to 63 W. This seemed to get us the power out we wanted.
Then, while we were sitting at this slightly higher power, we had a lockloss. The lockloss appears to be an IMC lockloss (as in the IMC lost lock before the IFO).
The IMC REFL power had been increasing, which we expected from the increase in input power. However, it looks like the IMC REFL power was increasing even more than it should have been. This doesn't make any sense.
Since we were down, we again took the IMC up to 60 W and then 63 W. We did not see the same IMC REFL power increase that we just saw when locked.
I am attaching an ndscope. I used the first time cursor to show when we stepped up to 63 W. You can see that between this first time cursor and the second time cursor, the IMC REFL power increases and the IM4 trans power drops. However, the ISS second loop was NOT on. We also did NOT see this behavior when we stepped up to 60 W during the power-up sequence. Finally, we could not replicate this behavior when we held in DOWN and increased the input power with the IMC locked.
It is possible that our sensitivity is back to nominal. Here is a comparison of three lock times: the first before the power outage, the second during the lock just after the power outage, and the third today after we turned off ADS while locked.
These settings were not nominal for the final reference (green on the attached plot):
The low frequency noise is not quite at the level of the "before outage" trace, but it is also not as bad as the orange trace.
After engaging full IFO ASC and soft loops this afternoon, I updated the ITM camera and ALS QPD offsets and accepted them in the appropriate SAFE SDF tables. After the Beckhoff reboots and PSL optic inspections, we'll run an initial alignment to solidify these alignment setpoints. They will need to be accepted in the OBSERVE tables once we're eventually back to NLN.
We had a lockloss while in LASER_NOISE_SUPPRESSION (575), and looking at ASC-AS_A, the light on the PD dropped at the same time as DARM lost lock, so it was an IMC lockloss (the lockloss webpage is still inaccessible, but the command line tool worked for me after waiting a while).
The previous channel list contained channels which were non-zero before the glitch and became zero afterwards.
I've extended the analysis to look for channels which were varying before the glitch and became flat-lined afterwards; the flat-line value is shown in parentheses (a sketch of the test is included after the list).
auxcs.txt: H1:SYS-ETHERCAT_AUXCORNER_INFO_CB_QUEUE_2_PERCENT (varying→flat:8.000e-05)
auxcs.txt: H1:SYS-TIMING_C_FO_A_PORT_12_NODE_GENERIC_PAYLOAD_1 (varying→flat:9.100e+06)
auxcs.txt: H1:SYS-TIMING_C_FO_A_PORT_12_NODE_XOLOCK_MEASUREDFREQ (varying→flat:9.100e+06)
auxcs.txt: H1:SYS-TIMING_C_FO_B_PORT_11_NODE_GENERIC_PAYLOAD_0 (varying→flat:1.679e+04)
auxcs.txt: H1:SYS-TIMING_C_FO_B_PORT_11_NODE_GENERIC_PAYLOAD_17 (varying→flat:2.620e+02)
auxcs.txt: H1:SYS-TIMING_C_FO_B_PORT_11_NODE_PCIE_HASEXTPPS (varying→flat:1.000e+00)
auxcs.txt: H1:SYS-TIMING_C_FO_B_PORT_2_NODE_GENERIC_PAYLOAD_13 (varying→flat:5.830e+02)
auxcs.txt: H1:SYS-TIMING_C_FO_B_PORT_3_NODE_GENERIC_PAYLOAD_13 (varying→flat:6.470e+02)
auxcs.txt: H1:SYS-TIMING_X_GPS_A_DOP (varying→flat:3.000e-01)
auxcs.txt: H1:SYS-TIMING_Y_GPS_A_DOP (varying→flat:3.000e-01)
sqzcs.txt: H1:SQZ-FIBR_LOCK_BEAT_FREQUENCYERROR (varying→flat:2.000e+00)
sqzcs.txt: H1:SQZ-FREQ_ADF (varying→flat:-3.200e+02)
sqzcs.txt: H1:SQZ-FREQ_LASERBEATVSDOUBLELASERVCO (varying→flat:2.000e+00)
sqzcs.txt: H1:SYS-ETHERCAT_SQZCORNER_CPUUSAGE (varying→flat:1.200e+01)
tcsex.txt: H1:SYS-ETHERCAT_TCSENDX_CPUUSAGE (varying→flat:1.200e+01)
tcsey.txt: H1:AOS-ETMY_BAFFLEPD_4_ERROR_CODE (varying→flat:6.400e+01)
tcsey.txt: H1:AOS-ETMY_BAFFLEPD_4_ERROR_FLAG (varying→flat:1.000e+00)
tcsey.txt: H1:AOS-ETMY_ERROR_CODE (varying→flat:1.000e+01)
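For reference, the varying-to-flat test itself is essentially the following (a minimal sketch assuming the before/after minute-trend samples are already loaded as numpy arrays; the names are illustrative, not the actual script):

    import numpy as np

    def flatline_value(before, after):
        # Returns the value a channel is stuck at if it varied before the
        # glitch but is constant afterwards, otherwise None.
        if np.ptp(before) > 0 and np.ptp(after) == 0:
            return float(after[0])
        return None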
Comparing IM4 trans alignment with 2W in before the power outage to now. Plot attached.
IM4 trans pitch alignment is the same; yaw alignment differs by 0.06. So the alignment changes are minimal.
Power on IM4 trans is slightly (~3%) lower: NSUM power on IM4 trans was 1.891 for 2.026 W in (ratio 0.933), and is now 1.780 for 1.974 W in (ratio 0.902).
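Spelling out the arithmetic behind the ~3% figure (just restating the numbers above):

    \[
      \frac{1.891}{2.026} \approx 0.933, \qquad
      \frac{1.780}{1.974} \approx 0.902, \qquad
      1 - \frac{0.902}{0.933} \approx 3.3\%
    \]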
Here are the steps Sheila and I took that almost got us to prep ASC for full IFO:
We measured all the LSC gains and found that the PRCL gain was maybe dropping and causing locklosses, so
We then went to PREP_ASC_FOR_FULL_IFO, but we forgot to reduce the PRCL2 gain, so this probably caused a lockloss since DRMI moved to POP.
We will keep trying this and maybe do full IFO ASC next.
On the next attempt, I added this additional step:
Hi y'all!
Summary:
Mon Sep 15 10:07:58 2025 INFO: Fill completed in 7min 54secs
I** have written a program to compare slow controls channels before and after the Wed 10sep2025 power glitch to see if there are any which look like they may have been broken and need further investigation.
(** Full disclosure, it was actually written by AI (Claude-code) and runs on my GC laptop safely contained in a virtual machine running Deb13 Trixie)
As a first pass, the code is looking for dead channels. These are channels which were active before the glitch, and are flat-line zero following.
The code gets its channels to analyze from the slow controls INI files.
Using Jonathan's simple_frames python module, it reads two minute-trend GWF frame files. For "before" I'm using the 10:00-11:00 Wednesday file (an hour before the glitch), and for "after" I'm using the Thu 00:00-01:00 file. In both cases H1 was locked.
I'll post the results as comments to this alog.
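For anyone curious, the dead-channel pass boils down to something like this (a rough sketch; the frame-reading call is a stand-in, since the real script uses simple_frames and I'm not reproducing its API here):

    import numpy as np

    def is_dead(before, after):
        # active before the glitch, flat-line zero afterwards
        return np.any(before != 0) and np.all(after == 0)

    def find_dead_channels(channels, read_trend):
        # read_trend(chan, epoch) is a placeholder that returns the minute-trend
        # samples for one channel from the 'before' (Wed 10:00-11:00) or
        # 'after' (Thu 00:00-01:00) file
        return [c for c in channels
                if is_dead(read_trend(c, 'before'), read_trend(c, 'after'))]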
ini_file: H1EPICS_ECATAUXCS.ini
num_chans: 23392
dead_chans: 10
H1:PSL-ENV_LASERRMTOANTERM_DPRESS
H1:SYS-ETHERCAT_AUXCORNER_INFO_CB_QUEUE_2_USED
H1:SYS-PROTECTION_AS_TESTNEEDED
H1:SYS-PROTECTION_AS_TESTOUTDATED
H1:SYS-TIMING_C_FO_A_PORT_13_CRCERRCOUNT
H1:SYS-TIMING_C_MA_A_PORT_14_NODE_UPLINKCRCERRCOUNT
H1:SYS-TIMING_C_MA_A_PORT_8_NODE_FANOUT_DELAYERR
H1:SYS-TIMING_X_FO_A_UPLINKCRCERRCOUNT
H1:SYS-TIMING_Y_FO_A_PORT_6_CRCERRCOUNT
H1:SYS-TIMING_Y_FO_A_PORT_6_NODE_PCIE_OCXOLOCKED
ini_file: H1EPICS_ECATAUXEX
num_chans: 1137
dead_chans: 0
ini_file: H1EPICS_ECATAUXEY
num_chans: 1137
dead_chans: 0
ini_file: H1EPICS_ECATISCCS
num_chans: 2618
dead_chans: 1
H1:ISC-RF_C_AMP24M1_POWEROK
ini_file: H1EPICS_ECATISCEX
num_chans: 917
dead_chans: 0
ini_file: H1EPICS_ECATISCEY
num_chans: 917
dead_chans: 0
ini_file: H1EPICS_ECATTCSCS
num_chans: 1729
dead_chans: 1
H1:TCS-ITMY_CO2_LASERPOWER_RS_ENC_INPUTA_STATUS
ini_file: H1EPICS_ECATTCSEX
num_chans: 353
dead_chans: 0
ini_file: H1EPICS_ECATTCSEY
num_chans: 353
dead_chans: 0
ini_file: H1EPICS_ECATSQZCS
num_chans: 3035
dead_chans: 0
TITLE: 09/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: I'll start by taking H1 through an initial alignment then relock up through CARM_5_PICOMETERS, since it sounds like that's still the latest stable state H1 can get to.
Attempt 1:
CARM_5_PICOMETERS reached by guardian. With the TR_CARM offset at -52, we could then set the TR_CARM gain to 2.1, then step the TR_CARM offset to -56. Then we could set the DHARD P gain to -30 and the DHARD Y gain to -40.
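In ezca terms, the by-hand part was roughly the following (channel names below are placeholders, not checked against the actual channel list):

    # rough sketch of the by-hand steps; channel names are placeholders
    ezca['LSC-TR_CARM_OFFSET'] = -52
    ezca['LSC-TR_CARM_GAIN'] = 2.1
    ezca['LSC-TR_CARM_OFFSET'] = -56
    ezca['ASC-DHARD_P_GAIN'] = -30
    ezca['ASC-DHARD_Y_GAIN'] = -40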
Then we ran CARM_TO_TR with the guardian. We could step the TR_REFLAIR9 offset to -0.03 and things looked stable, but things started to ring up at 2 Hz when we stepped to -0.02.
It seems like the increased DHARD gain helped keep things more stable than last night.
Plot attached of the lockloss; Elenna pointed out we need to look at the faster channels. The oscillation started once the REFLAIR9 offset was at -0.02 and higher. It's a 17 Hz to 18 Hz wobble, also seen growing in all the LSC signals, which makes more sense for the fast frequency.
This same 17 Hz LSC wobble was also seen in the last lockloss last night (plot attached).
I posted the following message in the Detchar-LHO mattermost channel:
Hey detchar! We could use a hand with some analysis on the presence and character of the glitches we have been seeing since our power outage Wednesday. They were first reported here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=86848 We think these glitches are related to some change in the input mode cleaner since the power outage, and we are doing various tests like changing alignment and power, engaging or disengaging various controls loops, etc. We would like to know if the glitches change from these tests.
We were in observing from roughly GPS time 1441604501 to 1441641835 after the power outage, with these glitches and broadband excess noise from jitter present. The previous observing period from roughly GPS 1441529876 to 1441566016 was before the power outage and these glitches and broadband noise were not present, so it should provide a good reference time if needed.
After the power outage, we turned off the intensity stabilization loop (ISS) to see if that was contributing to the glitches. From 1441642051 to 1441644851, the ISS was ON. Then, from 1441645025 to 1441647602 the ISS was OFF.
Starting from 1441658688, we decided to leave the input mode cleaner (IMC) locked with 2 W input power and no ISS loop engaged. Then, starting at 1441735428, we increased the power to the IMC from 2 W to 60 W, and engaged the ISS. This is where we are sitting now. Since the interferometer has been unlocked since yesterday, I think the best witness channels out of lock will be the IMC channels themselves, like the IMC wavefront sensors (WFS), which Derek reports are a witness for the glitches in the alog I linked above.
To add to this investigation:
We attenuated the power on IMC REFL, as reported in alog 86884. We have not gone back to 60 W since, but it would be interesting to know a) if there were glitches in the IMC channels at 2 W before the attenuation, and b) if there were glitches at 2 W after the attenuation. We can also take the input power to 60 W without locking to check if the glitches are still present.
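If it helps, here is a quick sketch of pulling a candidate witness channel for the ISS ON/OFF stretches with gwpy (the channel name is just a guess at a representative IMC WFS channel; swap in whichever witness works best):

    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:IMC-WFS_A_DC_PIT_OUT_DQ'  # placeholder witness channel

    iss_on  = TimeSeries.get(CHAN, 1441642051, 1441644851)  # ISS ON stretch
    iss_off = TimeSeries.get(CHAN, 1441645025, 1441647602)  # ISS OFF stretch

    # compare the two stretches, e.g. with spectrograms
    sg_on  = iss_on.spectrogram2(fftlength=4, overlap=2)
    sg_off = iss_off.spectrogram2(fftlength=4, overlap=2)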