Jenne and I noticed that the SEI_DIFF node was reporting "chambers not nominal" and was stuck in its 'DOWN' state as a result. We realized this was because the HAM7 ISI is tripped, likely due to vent prep and impending door removal, so we removed HAMs 7 and 8 from the list of chambers checked in SEI_DIFF's "isi_guardstate_okay()" function. After loading the node, SEI_DIFF successfully went to 'BSC2_FULL_DIFF_CPS'.
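For illustration only (not the actual guardian code), the change amounts to dropping HAM7 and HAM8 from the chamber list that isi_guardstate_okay() loops over; the chamber list and channel pattern below are assumptions:

# Minimal sketch, assuming each checked ISI guardian node publishes its state
# name in a GRD-..._STATE_S channel; chamber list and channel naming are
# illustrative, not the real SEI_DIFF code.
CHAMBERS = ['BS', 'ITMX', 'ITMY', 'ETMX', 'ETMY',
            'HAM2', 'HAM3', 'HAM4', 'HAM5', 'HAM6']  # HAM7 and HAM8 removed for the vent

def isi_guardstate_okay(ezca, nominal='ISOLATED'):
    """Return True only if every checked ISI chamber reports its nominal state."""
    for chamber in CHAMBERS:
        if ezca['GRD-ISI_{}_STATE_S'.format(chamber)] != nominal:
            return False
    return True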
The picomotor controllers were not working. The software side looked OK, but there was no physical drive signal. The TwinCAT system showed an error message about a "nonsensical priority order of the PLC tasks". In the past, we ignored these messages without any problems. After fixing this issue and re-activating the system, the controllers started working again. Not sure if it just needed a restart, or if the priority order has now become important. More investigation is needed.
The atomic clock has been resynchronized with GPS. The tolerance has been reduced to <1000ns again.
M1300464 - Preparing the aLIGO Interferometer for Pumpdown or Vent
The following high voltage power supplies were powered off in preparation for the HAM 7 vent.
1. MER Mezzanine - HAM7 PSAMS HV
2. MER Mezzanine - HAM7 Piezo HV
The PSAMs were ramped down - Sheila (alog 88414)
HAM 7 Pico High Controller disabled - Gerardo
SQZ OPO TEC Servo disabled - Gerardo
Since the SQZ laser is now off in preparation for the HAM7 vent, and we still want to keep trying to lock the IFO in the meantime, I've switched the "ignore_sqz" flag in lscparams.py from False to True. ISC_LOCK has been loaded.
This sent ISC_LOCK into error in a few places, so I've flipped the flag back and will revisit the logic at a later time.
I've fixed the logic in ISC_LOCK so that ignoring SQZ_MANAGER when the flag is raised now works as intended. I'm leaving the "ignore_sqz" flag set to True, and all changes have been loaded.
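For context, the intended behavior is roughly the following; only the lscparams "ignore_sqz" flag comes from this entry, while the function, node interface, and requested state shown here are illustrative placeholders, not the real ISC_LOCK code:

# Minimal sketch of how an "ignore_sqz" flag can gate squeezer handling.
import lscparams

def manage_squeezer(sqz_manager_node):
    """Skip squeezer handling entirely when the ignore_sqz flag is raised."""
    if lscparams.ignore_sqz:
        # SQZ laser is off for the HAM7 vent: don't request or wait on
        # SQZ_MANAGER, just report success so locking can proceed.
        return True
    sqz_manager_node.set_request('FREQ_DEP_SQZ')  # hypothetical request name
    return sqz_manager_node.arrived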
I replaced a failed disk on cdsfs0. zpool status told us:
raidz3-1                 DEGRADED  0  0  0
  sdk                    ONLINE    0  0  0
  3081545339777090432    OFFLINE   0  0  0  was /dev/sdq1
  sdq                    ONLINE    0  0  0
  sdn                    ONLINE    0  0  0
This hinted that the failed disk was /dev/sdq. When identifying the physical disk behind /dev/sdq (dd if=/dev/sdq of=/dev/null, which does a continuous read of the disk to make it light up), it pointed to a disk in the caddy marked 1:17. I then told zfs to fail /dev/sdq1, and reads started showing up on the disk (as identified by the LEDs).
To be safe I took the list of drives shown by zpool status, and the list of drives listed by the OS (looking in /dev/disk/by-id). I then identified every disk on the system by doing a long read from it (to force the LED). There was a jump in caddies from 1:15 to 1:17. After accounting for all disks, I figured the bad disk was in the 1:16 slot. I then pulled that disk; zpool status showed no other issues.
After replacing the disk I had to create a gpt partition on it using parted.
Then I replaced the disk:
zpool replace fs0pool 3081545339777090432 /dev/sdz
Now it is resilvering.
raidz3-1                 DEGRADED  0  0  0
  sdk                    ONLINE    0  0  0
  replacing-1            OFFLINE   0  0  0
    3081545339777090432  OFFLINE   0  0  0  was /dev/sdq1
    sdz                  ONLINE    0  0  0  (resilvering)
  sdq                    ONLINE    0  0  0
  sdn                    ONLINE    0  0  0
  sdj                    ONLINE    0  0  0
  sdm                    ONLINE    0  0  0
We need to retire this array. There are hints of problems on other disks.
FAMIS 31115
This week's check serves as a comparison of how things in the PSL came back after the long power outage last Thursday. Overall, things look good, with the exception that alignment into the PMC and RefCav is certainly needed (not surprising after the laser goes down), but we're waiting on full picomotor functionality to be restored before doing that. As is, the alignment is good enough for now.
One can see that after the system was recovered Thursday afternoon, the environmental controls for the enclosure were in a weird state for about a day (see Jason's alog), which caused oscillations in the amplifier pump diode currents and thus in the output power from AMP2. This has been fixed and behavior appears to be back to normal.
Additionally, the differential pressure sensor between the anteroom and laser room seems to have been fixed by the outage. Hooray.
Mon Dec 08 10:12:15 2025 Fill completed in 12min 12secs
Sheila, Filiberto
In prep for the HAM7 vent, we ramped off the PSAMS servo and the requested voltage. This procedure is different since the addition of the PSAMS servo; the screenshot shows how to get to the screens to ramp it off.
We first turned off the servo input, then ramped its gain from 1 to 0 with a 30 second ramp. Then, with a 100 second ramp, we turned off the offset for the requested voltage. This needs to be done for ZM4, ZM5, and ZM2.
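A rough sketch of that sequence per optic is below; the channel names are placeholders (not the real PSAMS EPICS channels), and only the ordering and the 30 s / 100 s ramp times follow this entry:

import time

def ramp_off_psams(ezca, optic):
    """Ramp off the PSAMS servo and requested-voltage offset for one optic."""
    prefix = 'SQZ-{}_PSAMS'.format(optic)       # placeholder channel prefix
    ezca[prefix + '_SERVO_INPUT_SWITCH'] = 0    # 1. turn off the servo input
    ezca[prefix + '_SERVO_TRAMP'] = 30          # 2. ramp servo gain 1 -> 0 over 30 s
    ezca[prefix + '_SERVO_GAIN'] = 0
    time.sleep(30)
    ezca[prefix + '_OFFSET_TRAMP'] = 100        # 3. ramp off the requested-voltage offset over 100 s
    ezca[prefix + '_OFFSET'] = 0
    time.sleep(100)

# called once each for ZM4, ZM5 and ZM2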
We also set the guardians to DOWN in prep for losing high voltage to the OPO, SHG, and PMC.
We didn't do the pico controllers yet since Marc and Daniel are debugging them.
M. Todd, S. Dwyer, J. Driggers
| Measurement | Value [uD/W] | Notes |
| Ring Heater Coupling to Substrate Lens | -21.0 +/- 0.3 | Relative to the modeled coupling, this is 79 +/- 1 % efficiency, compared to the predicted 75-80% efficiency from arm cavity measurements. Modeled couplings assuming 100% efficiency report around -26.5 uD/W. |
| SR3 Heater Coupling to Substrate Lens | ITMX HWS: 4.7 +/- 0.2; ITMY HWS: 4.6 +/- 0.1 | The ITMX HWS seems to be noisier than ITMY, but the two give very similar mean estimates. The estimate from Gouy phase measurements is around 5.0 uD/W. |
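For reference, the quoted ring heater efficiency is just the ratio of the measured to the modeled coupling: 21.0 / 26.5 ≈ 0.79, i.e. roughly 79% of the coupling expected at 100% efficiency.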
We turned on inverse ring heater filters to speed up the heating for those (using nominal values for the settings). Because of the weekend mayhem with the earthquakes we did not get a SUPER long HWS transient measuring the full response, but we could get a pretty good estimate of the ring heater effect on the substrate thermal lens without any other heating in the measurement. This is good to compare to modeled values that we have.
I also turned on the SR3 heater on Sunday to get estimates of the coupling of SR3 heating to the defocus of SR3. To do this, Jenne helped me untrip a lot of the SUS watchdogs for the optics relevant to the HWS. About 3 hours after the SR3 heater was turned on, the watchdogs must have tripped again and misaligned the optics. But fortunately we got the cooldown data for this as well, and it's all pretty consistent. These measurements suggest a 4.7 uD/W coupling for SR3 heating, which is very similar to the modeled coupling from Gouy phase measurements at different SR3 heater powers.
Overall, while these measurements provide more pieces to the puzzle, they make previous analyses a bit more confusing, requiring some more thought (as usual).
TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering // Earthquake
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: LARGE_EQ
QUICK SUMMARY:
When I try to run an ndscope I get the following error:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.
Aborted
*I think I was trying to launch ndscope in a terminal where I was sshed into a different computer/environment.*
As a follow-up note: ndscope is working for Ryan, and we are not entirely sure what the issue was; maybe the console was in a strange conda environment. When we looked at it, ndscope started fine.
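If this comes up again, a quick check of the usual suspects from the failing session could look like this (a minimal sketch, nothing ndscope-specific):

import os

# An unset DISPLAY (e.g. plain ssh without X forwarding), a forced
# QT_QPA_PLATFORM, or an unexpected conda environment are the usual culprits
# for the Qt "xcb" error above.
for var in ('DISPLAY', 'QT_QPA_PLATFORM', 'CONDA_DEFAULT_ENV'):
    print('{} = {!r}'.format(var, os.environ.get(var)))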
M. Todd, J. Driggers
I wanted to turn on SR3 and watch the HWS while it heats up, to compare to models of the defocus we expect. Due to the earthquake yesterday, however, most of the suspension watchdogs had tripped and would not let me re-align the ITMs, BS, and SR3 to make sure the HWS were aligned.
With Jenne's help, we untripped the watchdogs and took the ISIs to isolated for the ITMs, BS, and SR3. I then turned the SR3 heater on to 4 W, and I will watch to make sure it responds well. It should be relatively well thermalized in about 5 hours, at which point I'll turn it off.
M. Todd
I went back in to turn off the SR3 heater and discovered that the watchdogs for the BS and ITMY had tripped again. I reset the watchdogs but was not successful in putting their ISIs back to fully isolated because of a 'WATCHDOG TRIP : SUBORDINATE' error.
Regardless, the suspensions report that they're aligned and SR3 is cooling down, so the HWS should be able to get the cooldown data as well.
Sun Dec 07 10:07:10 2025 Fill completed in 7min 7secs
Another SWWD trip of ETMY
The ETMY suspension rings up within 15 minutes of reset, causing a SWWD trip of SEI (reset at 08:00, tripped at 08:15 this morning).
I'll keep SUS tripped for now and re-enable SEI in bypass mode until we can determine what is going on with h1susetmy.
BSC8 annulus ion pump (AIP) railed late yesterday around 10:02 pm local time, see attached plot.
After the pump railed there has been no noted effect inside the main vacuum envelope. The annulus system will be diagnosed at the next opportunity.
The last failure of this ion pump was mentioned on 2/19/2002.