(In the attached 1-week trend, the [-7 day, -2 day] range is before the power outage and [-1 day, 0 day] is after the long high-power lock that followed the outage. Ignore [-2 day, -1 day] because the ISS 2nd loop was messing with the AOM during that time.)
ISS QPD (top right) shows a clearly measurable shift in the alignment, mostly in DX (YAW). The QPD is downstream of the PMC. The calibration is nominally in mm, so the shift reads as just 5.4 um, but there is a focusing lens in front of the QPD and unfortunately I don't currently know its focal length or the lens-to-QPD distance, so the only firm statement is that the alignment changed downstream of the PMC in YAW.
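For reference, here is a minimal sketch of how that 5.4 um reading could be turned into an angle once the lens parameters are known, assuming the QPD sits one focal length behind the lens (in which case the spot displacement measures pure beam angle); the focal length below is a placeholder, not a measured value:

```python
# Hypothetical conversion, only valid IF the QPD sits at the lens focal plane;
# the actual focal length and lens-to-QPD distance are not known.
f_lens = 0.25                    # m, assumed focal length (placeholder)
dx_qpd = 5.4e-6                  # m, observed DX shift from the nominal mm calibration
theta_yaw = dx_qpd / f_lens      # rad, implied beam YAW change at the focal plane
print(f"Implied YAW change: {theta_yaw * 1e6:.1f} urad")
```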
BEA (Bull's Eye Sensor, top left) also shows an alignment shift upstream of the PMC. The input matrix of this one is funny, so don't read too much into the magnitude of the change; I'm not even sure whether it was mostly in PIT or in YAW. (The beam size, 2nd from the top in the left column, also shows a change, but I won't trust that because that output is expected to change when there is an alignment change.)
In our first long lock after the power outage, the ISS second loop increased the input power to the IMC by 0.1 W/hour for 10 hours, MC2 trans was stable, and IMC refl DC increased by 1.7 mW per hour (first screenshot).
Last night we left the IMC at 2 W and saw no further degradation (second screenshot).
Now we have increased the power to 60 W for an hour and see no further degradation after the initial transient, during which the ASC was moving and there may have been a thermal transient (third screenshot).
Daniel suggested that, since the power on the IMC REFL PD (IOT2L D1300357) has increased by a factor of around 3 (it was 19 mW with 60 W in, now 60 mW with 60 W in), we should reduce the power on this diode, as it doesn't work well with high powers. H1:IMC-REFL_DC_OUT16
At 2 W input, it was 0.73 mW and is now 2 mW. Aim for a factor of 2 reduction (to make it simple to reduce the electronics gains).
Aim to reduce the power on IMC REFL to 1 mW using the IOT2L waveplate when there is 2 W incident on the IMC.
We decided to do this with the IMC OFFLINE. Ryan S put 0.1 W into the IMC with the PSL waveplate, which gives 2.37 mW on H1:IMC-REFL_DC_OUT16.
Aim to reduce this by a factor of 2, to 1.18 mW.
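As a rough sketch of the required waveplate rotation, assuming the attenuation stage acts like a half-wave plate followed by a polarizer so the transmitted power goes as cos^2(2*theta) away from the maximum-transmission orientation (the actual optic layout on IOT2L is an assumption here, and the current setting is treated as maximum transmission for this illustration):

```python
import numpy as np

# Factor-of-2 attenuation target for IMC REFL with the IMC offline.
P0 = 2.37e-3        # W on H1:IMC-REFL_DC_OUT16 before attenuation (treated as max transmission)
P_target = 1.18e-3  # W, the factor-of-2 goal
# Assumed model: P = P_max * cos^2(2*theta), theta measured from max transmission.
dtheta = 0.5 * np.degrees(np.arccos(np.sqrt(P_target / P0)))
print(f"Rotation from max transmission for a factor of 2: {dtheta:.1f} deg")  # ~22.5 deg
```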
A quick calculation of the IMC visibility:
Before power outage:
IMC refl when offline = 46.9
IMC refl when online at 2 W = 0.815
1 - 0.815/46.9 = 0.983
After power outage:
IMC refl when offline = 46.5
IMC refl when online at 2 W = 2.02
1 - 2.02/46.5 = 0.957
So we think we have lost about 2.6% of the visibility.
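A minimal sketch of the visibility arithmetic above (the values are just the numbers quoted, in whatever units the channel reports):

```python
# IMC visibility: fraction of incident light coupled into the cavity,
# estimated from reflected power when unlocked vs. locked.
def visibility(refl_offline, refl_locked):
    return 1.0 - refl_locked / refl_offline

vis_before = visibility(46.9, 0.815)   # ~0.983, before the power outage
vis_after = visibility(46.5, 2.02)     # ~0.957, after the power outage
print(f"before: {vis_before:.3f}  after: {vis_after:.3f}  lost: {vis_before - vis_after:.3f}")
```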
I posted the following message in the Detchar-LHO mattermost channel:
Hey detchar! We could use a hand with some analysis on the presence and character of the glitches we have been seeing since our power outage Wednesday. They were first reported here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=86848 We think these glitches are related to some change in the input mode cleaner since the power outage, and we are doing various tests like changing alignment and power, engaging or disengaging various controls loops, etc. We would like to know if the glitches change from these tests.
We were in observing from roughly GPS time 1441604501 to 1441641835 after the power outage, with these glitches and broadband excess noise from jitter present. The previous observing period from roughly GPS 1441529876 to 1441566016 was before the power outage and these glitches and broadband noise were not present, so it should provide a good reference time if needed.
After the power outage, we turned off the intensity stabilization loop (ISS) to see if that was contributing to the glitches. From 1441642051 to 1441644851, the ISS was ON. Then, from 1441645025 to 1441647602 the ISS was OFF.
Starting from 1441658688, we decided to leave the input mode cleaner (IMC) locked with 2 W input power and no ISS loop engaged. Then, starting at 1441735428, we increased the power to the IMC from 2 W to 60 W, and engaged the ISS. This is where we are sitting now. Since the interferometer has been unlocked since yesterday, I think the best witness channels out of lock will be the IMC channels themselves, like the IMC wavefront sensors (WFS), which Derek reports are a witness for the glitches in the alog I linked above.
To add to this investigation:
We attenuated the power on IMC refl, as reported in alog 86884. We have not gone back to 60 W since, but it would be interesting to know a) whether there were glitches in the IMC channels at 2 W before the attenuation, and b) whether there were glitches at 2 W after the attenuation. We can also take the input power to 60 W without locking to check if the glitches are still present.
Ryan S, Keita, Oli, Sheila
Using the instructions found here, we used the netgpib script to get data from an IMC OLG measurement (already checked yesterday, 86852).
Oli and I adapted Craig's quick tf plot script to plot two TFs on top of each other, and found some data from Oct 31st 2024 where the IMC OLG was measured with 2 W input power (80979).
We adjusted the gain of the 2024 measurement to match today's at the first point. The IMC cavity pole is at 8.8 kHz, so since this measurement cuts off at 10 kHz it will be difficult to get information about the IMC cavity pole from it.
This script is in sheila.dwyer/IOO/IMC_OLG/quick_2tfs_plot.py; legend entries and file names are hardcoded rather than passed as arguments.
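For anyone reproducing this without the script, here is a minimal sketch of the overlay (the file names and column layout are assumptions for illustration, not the actual contents of quick_2tfs_plot.py):

```python
import numpy as np
import matplotlib.pyplot as plt

def load_tf(path):
    # assumed export format: frequency [Hz], real part, imaginary part
    f, re, im = np.loadtxt(path, unpack=True)
    return f, re + 1j * im

f_new, tf_new = load_tf("imc_olg_2025-09-12.txt")   # hypothetical file name
f_old, tf_old = load_tf("imc_olg_2024-10-31.txt")   # hypothetical file name

# rescale the 2024 measurement so its first-point magnitude matches today's
tf_old *= np.abs(tf_new[0]) / np.abs(tf_old[0])

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.loglog(f_new, np.abs(tf_new), label="2025-09-12, 2 W")
ax_mag.loglog(f_old, np.abs(tf_old), label="2024-10-31, 2 W (rescaled)")
ax_ph.semilogx(f_new, np.degrees(np.angle(tf_new)))
ax_ph.semilogx(f_old, np.degrees(np.angle(tf_old)))
ax_mag.set_ylabel("magnitude")
ax_ph.set_ylabel("phase [deg]")
ax_ph.set_xlabel("frequency [Hz]")
ax_mag.legend()
plt.show()
```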
Fri Sep 12 10:08:39 2025 INFO: Fill completed in 8min 35secs
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY: No observing time for H1 overnight. Investigations will continue today into issues caused by the site power outage on Wednesday.
Things looking relatively stable overnight. Some trends and current snapshot of MC TRANS and REFL attached.
Here's an image of a time when the IMC was locked at 2W before the power outage to compare to: archive screenshot
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: NONE
SHIFT SUMMARY:
IFO is in DOWN and MAINTENANCE
We are still working on figuring out what happened post-outage that is yielding a glitchy interferometer when locked.
Due to the risk of damaging the interferometer, the decision has been made to cancel the OWL shift, shorten the EVE shift and stay in DOWN until tomorrow.
PSL dust is fluctuating but has been increasing over the last 2-3 hours. I assume this is due to the PSL work that was done 2-3 hours ago; this has not been an issue in the past, but I've attached a plot.
LOG:
None
Closes FAMIS 26686. Last checked in alog 86718
Everything below threshold with the outage clearly visible as a blip (mostly in MY and MX fans).
Plots attached.
Originally pointed out by Camilla or Elenna earlier today, but I wanted to record it here in case it can help us figure out what the issue is. During last night's low-range lock after the power outage (2025-09-11 03:00:47 - 17:42:34 UTC), our glitch rate was much higher than it typically is, and the glitches were mainly confined to several specific frequencies (main summary page, glitches page). I've been able to confirm some of these frequencies, but there are some lines whose exact frequencies I haven't been able to narrow down yet.
Here are the frequencies I confirmed, as well as guesses for the other lines:
16.91
24-ish
29.37
37.32
47.41
60-ish
76-ish
90-ish
120-ish
156.97
199.44
250-ish
321.95
409.04
510-ish
660.30
I've plotted the frequencies lined up next to each other, as well as the difference in frequency between each line and the previous one, and we can see a slow exponential increase in the spacing between the glitch line frequencies. The yellow points are the ones that are in roughly the right range but whose exact values I haven't confirmed.
Additionally, once we turned the ISS Second Loop off at 16:55 UTC, the glitches previously appearing between 500 and 1000 Hz stopped almost altogether, the glitches at 409 Hz and below became a lot more common and louder, and we also saw some extra glitches start above 4000 Hz. We understand the glitches above 4000 Hz, but we aren't sure why the glitches between 500 and 4000 Hz would stop when we did this.
Hoping this might help shine a light on some possible electronics issue?
The exponential behavior noted in this alog is related to how frequencies are chosen for the sine-Gaussian wavelets used by Omicron. This type of frequency behavior is what we would expect for broadband glitches, and unfortunately, it does not relate to their physical source in this case.
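As an illustration of that point, a geometric frequency grid (the kind of spacing an Omicron-like tiling uses for its sine-Gaussian template frequencies) produces successive differences that grow by a constant ratio, i.e. exponentially; the grid parameters below are illustrative only, not Omicron's actual tiling settings:

```python
import numpy as np

# Illustrative geometric frequency grid (NOT Omicron's actual tiling settings).
f0, ratio, n = 16.91, 1.27, 12
freqs = f0 * ratio ** np.arange(n)   # geometrically spaced template frequencies
diffs = np.diff(freqs)               # spacing between neighbouring frequencies
print(np.round(freqs, 1))            # shows the same kind of spacing as the listed lines
print(np.round(diffs[1:] / diffs[:-1], 3))  # constant ratio -> exponential growth of the spacing
```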
TITLE: 09/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 15mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
IFO is DOWN for MAINTENANCE
In brief: something is broken and we don't know why. You can use this alog as an index of the ideas and tests that our experts have had today:
Much like yesterday, guesses involved the IMC, so Sheila and Ryan went into the PSL but found no immediate culprits. The IFO was extremely glitchy while observing, so we made the call not to try locking at the risk of damaging something. If commissioners think of anything to test, they will let me know.
Relevant alogs from throughout the day:
Alogs are still being written as it's been a long and busy day. I will add more throughout the shift.
TITLE: 09/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1 started the day locked and observing, but something was strange: h(t) triggers across multiple frequencies.
We went to commissioning and skipped the calibration due to the very obvious fact that the IFO was not running well.
Sheila and Oli went out on the floor to inject some test signals into H1:IMC-REFL_SERVO_COMEXCEN.
Sheila, Elenna, Camilla, Ryan, Daniel, Oli, & TJ spent most of the day trying to find out why the IFO was behaving so abnormally.
A journalist came into the control room for a quick tour with Fred.
There are fears that we may accidentally burn something if we try to lock, and perhaps the EVE and OWL shifts may be canceled.
Dust counts in the anti-room were very high today. After TJ went to the Mech Mez to check and adjust the dust pumps, he discovered that the vacuum lines for the dust monitor pumps were not securely connected, which could explain the dust. After that, the dust levels did seem to fall.
Ryan & Sheila went into the PSL room to see if there were any obvious issues.
Rahul checked out the input arm suspensions and didn't find anything strange.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:57 | FAC | Nellie | Optics lab | N | Technical Cleaning | 15:13 |
| 16:06 | IMC | Sheila, Elenna, Oli | LVEA | N | Checking on the IMC | 18:06 |
| 17:54 | SEI | Jim | LVEA | N | Tripping HAM3 | 18:24 |
| 18:48 | PSL&PEM | Ryan S | LVEA | N | Checking the PSL environmental controls | 19:18 |
| 20:25 | JAC | Corey | Optics / vac lab | N | working on Jac table | 20:25 |
| 20:43 | IMC | Daniel & Elenna | LVEA | N | Checking for oscillations on IMC | 21:13 |
| 22:11 | PSL | Sheila & Ryan S | PSL room | Yes | Checking for smell of burnt in PSL | 22:56 |
Continuing from alog 86631, here's 24 days with DM10 mounted up high on BSC2 on a catwalk. The 3 Tuesdays are labeled, and I also plotted DM6 as a comparison; it has not moved and has stayed in the mega cleanroom.
Comparing now to a time before the power outage, the ISS diffracted power is now the same (3.6%). The PMC transmission has dropped by 2%, and the PMC reflection has increased by 3.5%.
Looking at IMC refl relative to before the power outage, when the IMC was locked with 2 W input power (a small sketch of the percentage computation follows the table):
| time | IMC refl | refl percent of before outage | MC2 trans | MC2 trans % of before outage |
|---|---|---|---|---|
| 9/10 8:12 UTC (before power outage, 2 W IMC locked) | 0.74 | | 317 | |
| 9/11 00:22 UTC (IMC relocked at 2 W after outage) | 1.15 | 155% | 312 | 98% |
| 9/11 1:50 UTC (after first 60 W and quick lockloss, IMC relocked at 2 W) | 1.27 | 171% | 310 | 97% |
| 9/11 18:17 UTC (after overnight 60 W lock) **had one IMC ASC loop off | 2.06 | 278% | 278 | 87% |
| 9/11 19:22 UTC (after all IMC ASC on) | 2.13 | 287% | 301 | 95% |
| 9/11 21:17 UTC (after sitting at 2 W for 3 hours) | 2.03 | 274% | 303 | 95% |
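A small sketch of how the percentage columns above are computed, relative to the 9/10 pre-outage row:

```python
# Percent-of-baseline for the table above (baseline: IMC refl 0.74, MC2 trans 317).
refl_base, trans_base = 0.74, 317
rows = {
    "9/11 00:22": (1.15, 312),
    "9/11 1:50": (1.27, 310),
    "9/11 18:17": (2.06, 278),
    "9/11 19:22": (2.13, 301),
    "9/11 21:17": (2.03, 303),
}
for t, (refl, trans) in rows.items():
    print(f"{t}: refl {refl / refl_base:.0%} of baseline, MC2 trans {trans / trans_base:.0%}")
```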
The attached screenshot shows that the ISS second loop increased the IMC input power during the overnight 60W lock to keep the IMC circulating power constant.
Tony, TJ, Dave:
After the power outage the CS dust monitors (Diode room, PSL enclosure, LVEA) started recording very large numbers (~6e+06 PCF). TJ quickly realized this was most probably a problem with the central pump and resolved that around 11am today.
2-day trend attached.
The pump was running especially hot and the gauge showed no vacuum pressure. I turned off the pump and checked the hose connections. The filter for the bleed valve was loose, the bleed screw was fully open, and one of the pump filters was very loose. After checking these I turned the pump back on and it immediately pulled down to 20 inHg. I trimmed it back to 19 inHg and rechecked a few hours later to confirm it had stayed at that pressure. The pump was also running much cooler at that point.
We had a site wide power outage around 12:11 local time. Recovery of CDS has started.
I've turned the alarms system off, it was producing too much noise.
We are recovering front end models.
Jonathan, Erik, Richard, Fil, Patrick, EJ, TJ, RyanS, Dave:
CDS is recovered. CDSSDF showing WAPs are on, FMCSSTAT showing LVEA temp change.
Alarms are back on (currently no active alarms). I had to restart the locklossalert.service, it had gotten stuck.
The BPA dispatcher on duty said they had a breaker at the Benton substation open and reclose. At that time, they did not have a known cause for the breaker operation. Hanford Fire called to report a fire off Route 4 by Energy Northwest near the 115 kV BPA power lines. After discussions with the BPA dispatcher, the bump on the line or breaker operation may have been caused by a fault on the BPA 115 kV line that also caused the fire. BPA was dispatching a line crew to investigate.