(Anna I., Jordan V., Gerardo M.)
Late entry.
Last Tuesday we attempted to "activate" the NEG pump located on one of the top ports of the output mode cleaner tube, but we opted to do a "conditioning" instead due to issues encountered with the gauge associated with this NEG pump. What prompted the work on the NEG pump was a bump noted in the pressure inside the NEG housing by its gauge; a bump in pressure was also noted inside the main vacuum envelope. See attachment for the pressure at the NEG housing (PT193) and the pressure at PT170.
Side note: Two modes of heating for the NEG pump are available, ACTIVATION and CONDITIONING. The difference between the two options is the maximum applied temperature: for "conditioning" the max temperature is 250 °C, whereas for "activation" the max temperature is 550 °C. The maximum temperature is held for 60 minutes.
We started by making sure that the isolation valve was closed; we checked it and it was closed. We connected a pump cart plus can turbo pump to the system and pumped down the NEG housing volume; the aux cart dropped down to 5.4x10^-05 torr, ready for "activation". However, we noted no response from the gauge attached to this volume: the PT193 signal remained flat. So we aborted the "activation" and instead ran a "conditioning" on the NEG. After the controller finished running the program, we allowed the system to cool down while pumping on it, then the NEG housing was isolated from the active pumping, the hoses and can turbo were removed, and the system was returned to "nominal status", i.e. no mechanical pumping on it.
Patrick logged in to h0vaclx, but the system showed the gauge as nominal, and it would not allow the filaments to be turned on because the pressure reading is high.
Long story short, the gauge is broken. Next week we'll visit NEG1, and its gauge PT191.
ls /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO -lt | head -n 6
total 295
-rw-r--r-- 1 guardian controls 160 Sep 9 07:58 ETMX_12_Hz_1441465134.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ETMY_12_Hz_1441464662.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ITMY_15_Hz_1441464646.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ITMX_13_Hz_1441464644.txt
-rw-r--r-- 1 guardian controls 160 Sep 2 07:58 ETMX_12_Hz_1440860334.txt
In-Lock SUS Charge Measurements did indeed run this Tuesday.
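As a cross-check on the run time, here's a minimal sketch (not part of the measurement infrastructure) for converting the GPS seconds embedded in the file names to UTC; it assumes the current GPS-UTC leap-second offset of 18 s rather than using a timing library:

from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset, unchanged since 2017

def gps_to_utc(gps_seconds):
    # convert a GPS timestamp, e.g. the 1441465134 in ETMX_12_Hz_1441465134.txt, to UTC
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

print(gps_to_utc(1441465134))  # 2025-09-09 14:58:36 UTC, i.e. Tuesday morning local time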
python3 all_single_charge_meas_noplots.py
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1440859844
I've put together a number of histograms that can be used for a quick-glance view of locking state performance.
Sheila & Elenna wanted some DRMI stats dating back to Aug 19th.
Sheila and I went out to IOT2L. We measured the power before and after the splitter called "IO_MCR_BS1" in this diagram of IOT2L. It is listed as a 50/50 beamsplitter, but it cannot possibly be one, because we measure about 2.7 mW just before that splitter, 2.45 mW on the IMC refl path, and 0.25 mW on the WFS path. So we think it must be a 90/10.
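Just to spell out that arithmetic (numbers copied from above):

# measured powers on IOT2L around IO_MCR_BS1
p_in = 2.7          # mW, just before the splitter
p_imc_refl = 2.45   # mW, IMC refl path
p_wfs = 0.25        # mW, WFS path
print(p_imc_refl / p_in, p_wfs / p_in)  # ~0.91 and ~0.09, i.e. roughly 90/10, not 50/50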
Sheila then adjusted the half-wave plate upstream of this splitter so that the light on IMC refl dropped from 2.4 mW to 1.2 mW, as measured on the IMC refl diode.
We compensated for this change by raising the IMC gains by 6 dB in the guardian (halving the power halves the error signal amplitude, and 20*log10(2) ≈ 6 dB). I doubled all the IMC WFS input matrix gains, SDF attached.
SDFed the IMC PZT position that we think we like.
After the waveplate adjustment and updating of gains in Guardian, we took an IMC OLG measurement with the IMC locked at 2W input. The UGF was measured at 35.0 kHz.
(In the attached 1-week trend, the [-7 day, -2 day] range is before the power outage and [-1 day, 0 day] is after the long high-power lock post power outage. Ignore [-2 day, -1 day] because the ISS 2nd loop was messing with the AOM during that time.)
The ISS QPD (top right) shows a clearly measurable shift in the alignment, mostly in DX (YAW). The QPD is downstream of the PMC. The calibration is supposed to be in mm, so nominally the shift is just 5.4 um, but there is a focusing lens in front of it and unfortunately I don't know the focal length or the distance between the lens and the QPD for now, so the only thing I can say is that the alignment changed downstream of the PMC in YAW.
The BEA (Bull's Eye Sensor, top left) also shows an alignment shift upstream of the PMC. The input matrix of this one is funny, so don't read too much into the magnitude of the change; I'm not even sure whether it was mostly in PIT or in YAW. (The beam size, 2nd from the top in the left column, also shows a change, but I don't trust that because that output is supposed to change when there's an alignment change.)
In our first long lock after the power outage, the ISS second loop increased the input power to the IMC by 0.1 W/hour for 10 hours; MC2 trans was stable, and IMC refl DC increased by 1.7 mW per hour (first screenshot).
Last night we left the IMC at 2 W and saw no further degradation (second screenshot).
Now we have increased the power to 60 W for an hour and see no further degradation after the initial transient, during which the ASC was moving and maybe there was a thermal transient (third screenshot).
Daniel suggested that since the power on the IMC REFL PD (IOT2L D1300357) has increased by a factor of around 3 (it was 19 mW at 60 W in, now 60 mW at 60 W in), we should reduce the power on this diode, as it doesn't work well with high powers. H1:IMC-REFL_DC_OUT16
At 2 W input it was 0.73 mW and is now 2 mW. Aim for a factor of 2 reduction (to make it simple to reduce the electronics gains).
Aim to reduce power on IMC REFL to 1mW using IOT2L waveplate when there is 2W incident on IMC.
Decided we should do this with the IMC OFFLINE. Ryan S put 0.1W into the IMC with the PSL waveplate. This gives 2.37mW on H1:IMC-REFL_DC_OUT16.
Aim to reduce this by a factor of 2 to 1.18mW.
A quick calculation of the IMC visibility:
Before power outage:
IMC refl when offline = 46.9
IMC refl when online at 2 W = 0.815
1 - 0.815/46.9 = 0.982
After power outage:
IMC refl when offline = 46.5
IMC refl when online at 2 W = 2.02
1 - 2.02/46.5 = 0.956
So we think we have lost 2.6% of the visibility
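For reference, a minimal sketch of the same visibility arithmetic, using the numbers above:

# visibility V = 1 - P_refl(locked) / P_refl(unlocked), from the IMC refl values above
def visibility(refl_offline, refl_online):
    return 1.0 - refl_online / refl_offline

v_before = visibility(46.9, 0.815)  # before the power outage
v_after = visibility(46.5, 2.02)    # after the power outage
print(round(v_before, 3), round(v_after, 3), round(v_before - v_after, 3))
# ~0.983, ~0.957, ~0.026, i.e. about 2.6% of the visibility lost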
I posted the following message in the Detchar-LHO mattermost channel:
Hey detchar! We could use a hand with some analysis on the presence and character of the glitches we have been seeing since our power outage Wednesday. They were first reported here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=86848 We think these glitches are related to some change in the input mode cleaner since the power outage, and we are doing various tests like changing alignment and power, engaging or disengaging various controls loops, etc. We would like to know if the glitches change from these tests.
We were in observing from roughly GPS time 1441604501 to 1441641835 after the power outage, with these glitches and broadband excess noise from jitter present. The previous observing period from roughly GPS 1441529876 to 1441566016 was before the power outage and these glitches and broadband noise were not present, so it should provide a good reference time if needed.
After the power outage, we turned off the intensity stabilization loop (ISS) to see if that was contributing to the glitches. From 1441642051 to 1441644851, the ISS was ON. Then, from 1441645025 to 1441647602 the ISS was OFF.
Starting from 1441658688, we decided to leave the input mode cleaner (IMC) locked with 2 W input power and no ISS loop engaged. Then, starting at 1441735428, we increased the power to the IMC from 2 W to 60 W, and engaged the ISS. This is where we are sitting now. Since the interferometer has been unlocked since yesterday, I think the best witness channels out of lock will be the IMC channels themselves, like the IMC wavefront sensors (WFS), which Derek reports are a witness for the glitches in the alog I linked above.
To add to this investigation:
We attenuated the power on IMC refl, as reported in alog 86884. We have not gone back to 60 W since, but it would be interesting to know a) if there were glitches in the IMC channels at 2 W before the attenuation, and b) if there were glitches at 2 W after the attenuation. We can also take the input power to 60 W without locking to check whether the glitches are still present.
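For anyone who wants to poke at these times, here's a minimal sketch (not part of the detchar request) of pulling one IMC witness channel over the ISS ON/OFF segments quoted above with gwpy; the channel name below is my guess for a glitch witness and should be swapped for whichever channel Derek used:

from gwpy.timeseries import TimeSeries

# GPS segments quoted above: ISS second loop ON, then OFF (after the power outage)
segments = {"ISS_ON": (1441642051, 1441644851), "ISS_OFF": (1441645025, 1441647602)}
channel = "H1:IMC-WFS_A_DC_SUM_OUT_DQ"  # assumed witness channel, substitute as appropriate

for label, (t0, t1) in segments.items():
    data = TimeSeries.get(channel, t0, t1)           # fetch from NDS2/frames
    qspec = data.q_transform(outseg=(t0, t0 + 60))   # quick-look Q-scan of the first minute
    plot = qspec.plot()
    plot.savefig(f"imc_witness_{label}.png")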
Ryan S, Keita, Oli, Sheila
Using the instructions found here, we used the netgpib script to get data from an IMC OLG measurement (already checked yesterday, 86852).
Oli and I adapted Craig's quick TF plot script to plot two TFs on top of each other, and found some data from Oct 31st 2024 where the IMC OLG was measured with 2 W input power (80979).
We adjusted the gain of the 2024 measurement to match today's at the first point. The IMC cavity pole is at 8.8 kHz, so since this measurement cuts off at 10 kHz it will be difficult to get information about the IMC cavity pole from this measurement.
This script is in sheila.dwyer/IOO/IMC_OLG/quick_2tfs_plot.py; legend entries and file names are hardcoded rather than passed as arguments.
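This is not the actual quick_2tfs_plot.py, but a minimal sketch of the same idea, assuming the netgpib data have been exported as text files with columns of frequency [Hz], magnitude [dB], and phase [deg] (the real column layout and file names will differ):

import numpy as np
import matplotlib.pyplot as plt

def load_tf(path):
    # assumed columns: frequency [Hz], magnitude [dB], phase [deg]
    return np.loadtxt(path, unpack=True)

f_old, mag_old, ph_old = load_tf("imc_olg_oct2024.txt")  # hypothetical file names
f_new, mag_new, ph_new = load_tf("imc_olg_today.txt")

# shift the 2024 trace so its first point matches today's measurement
mag_old = mag_old + (mag_new[0] - mag_old[0])

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(f_old, mag_old, label="Oct 2024 (gain matched at first point)")
ax_mag.semilogx(f_new, mag_new, label="today")
ax_mag.set_ylabel("magnitude [dB]")
ax_mag.legend()
ax_ph.semilogx(f_old, ph_old)
ax_ph.semilogx(f_new, ph_new)
ax_ph.set_xlabel("frequency [Hz]")
ax_ph.set_ylabel("phase [deg]")
fig.savefig("imc_olg_comparison.png")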
Fri Sep 12 10:08:39 2025 INFO: Fill completed in 8min 35secs
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY: No observing time for H1 overnight. Investigations will continue today into issues caused by the site power outage on Wednesday.
Things looking relatively stable overnight. Some trends and current snapshot of MC TRANS and REFL attached.
Here's an image of a time when the IMC was locked at 2W before the power outage to compare to: archive screenshot
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: NONE
SHIFT SUMMARY:
IFO is in DOWN and MAINTENANCE
We are still working on figuring out what happened post-outage that is yielding a glitchy interferometer when locked.
Due to the risk of damaging the interferometer, the decision has been made to cancel the OWL shift, shorten the EVE shift and stay in DOWN until tomorrow.
PSL dust counts are fluctuating but have been increasing in the last 2-3 hours. I assume this is due to the PSL work that was done 2-3 hours ago; this has not been an issue in the past, though. I've attached a plot.
LOG:
None
Originally pointed out by Camilla or Elenna earlier today, but I wanted to record it here in case it can help us figure out what the issue is. During last night's low-range lock after the power outage (2025-09-11 03:00:47 - 17:42:34 UTC), our glitch rate was way higher than it typically is, and the glitches were mainly confined to several specific frequencies (main summary page, glitches page). I've been able to pin down some of these frequencies, but there are some lines whose exact frequencies I haven't been able to narrow down yet.
Here are the frequencies I confirmed, as well as guesses for the other lines:
16.91
24-ish
29.37
37.32
47.41
60-ish
76-ish
90-ish
120-ish
156.97
199.44
250-ish
321.95
409.04
510-ish
660.30
I've plotted the lines next to each other, as well as the difference in frequency between each line and the previous one, and we can see a slow exponential increase in the spacing between the glitch line frequencies. The yellow points are the ones that are only known to be in roughly the correct range, not at their exact values.
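As a quick numerical check (a sketch using only the confirmed lines from the list above), the ratio between neighbouring confirmed lines is roughly constant, which is what an exponential (geometric) spacing looks like:

import numpy as np

# confirmed glitch line frequencies [Hz] from the list above (the "-ish" lines are omitted)
lines = np.array([16.91, 29.37, 37.32, 47.41, 156.97, 199.44, 321.95, 409.04, 660.30])

print("spacings [Hz]:", np.round(np.diff(lines), 2))
print("ratios:", np.round(lines[1:] / lines[:-1], 3))
# adjacent confirmed lines give ratios near ~1.27; the larger jumps fall where only
# approximate ("-ish") lines sit in between, consistent with geometric spacing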
Additionally, once we turned the ISS Second Loop off at 16:55 UTC, the glitches previously appearing between 500 and 1000 Hz stopped almost altogether, the glitches at 409 Hz and below became a lot more common and louder, and we also saw some extra glitches start above 4000 Hz. We understand the glitches above 4000 Hz, but we aren't sure why the glitches between 500 and 4000 Hz would stop when we did this.
Hoping this might help shine a light on some possible electronics issue?
The exponential behavior noted in this alog is related to how frequencies are chosen for the sine-Gaussian wavelets used by Omicron. This type of frequency behavior is what we would expect for broadband glitches, and unfortunately, it does not relate to their physical source in this case.
We had a site wide power outage around 12:11 local time. Recovery of CDS has started.
I've turned the alarms system off, it was producing too much noise.
We are recovering front end models.
Jonathan, Erik, Richard, Fil, Patrick, EJ, TJ, RyanS, Dave:
CDS is recovered. CDSSDF showing WAPs are on, FMCSSTAT showing LVEA temp change.
Alarms are back on (currently no active alarms). I had to restart the locklossalert.service, it had gotten stuck.
The BPA dispatcher on duty said they had a breaker at the Benton substation open and reclose. At that time, they did not have a known cause for the breaker operation. Hanford Fire called to report a fire off Route 4 by Energy Northwest near the 115 kV BPA power lines. After discussions with the BPA dispatcher, the bump on the line, or breaker operation, may have been caused by a fault on the BPA 115 kV line that also caused the fire. BPA was dispatching a line crew to investigate.