I replaced a failed disk on cdsfs0. zpool status told us:
    raidz3-1                 DEGRADED  0  0  0
      sdk                    ONLINE    0  0  0
      3081545339777090432    OFFLINE   0  0  0  was /dev/sdq1
      sdq                    ONLINE    0  0  0
      sdn                    ONLINE    0  0  0
This hinted that the failed disk was /dev/sdq. To identify the physical disk behind /dev/sdq I ran a continuous read from it (dd if=/dev/sdq of=/dev/null), which makes the disk's activity LED light up; it pointed to a disk in the caddy marked 1:17. I then told ZFS to fail /dev/sdq1, and reads then started showing up on that disk (as identified by the LEDs).
To be safe I took the list of drives shown by zpool status and the list of drives listed by the OS (looking in /dev/disk/by-id). I then identified every disk in the system by doing a long read from it (to force the LED on). There was a jump in the caddies from 1:15 to 1:17. After accounting for all disks, I concluded the bad disk was in the 1:16 slot. I then pulled that disk; zpool status showed no other issues.
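For reference, a minimal sketch of that identification loop (the by-id glob, the skip rule, and the read size are assumptions to adapt to the actual enclosure):

    for d in /dev/disk/by-id/scsi-*; do
        [[ "$d" == *-part* ]] && continue   # skip partition symlinks, read whole disks
        echo "reading $d -- watch which caddy LED lights up"
        dd if="$d" of=/dev/null bs=1M count=2048 status=none
    done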
After installing the replacement disk I had to create a GPT partition on it using parted.
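Something along these lines (a sketch; /dev/sdz is the new disk per the replace command below, and the exact partition layout ZFS expects depends on how the pool was originally built):

    parted -s /dev/sdz mklabel gpt    # write a fresh GPT label to the new disk
    parted -s /dev/sdz print          # confirm the label took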
Then I replaced the disk in the pool:

    zpool replace fs0pool 3081545339777090432 /dev/sdz

Now it is resilvering:
    raidz3-1                   DEGRADED  0  0  0
      sdk                      ONLINE    0  0  0
      replacing-1              OFFLINE   0  0  0
        3081545339777090432    OFFLINE   0  0  0  was /dev/sdq1
        sdz                    ONLINE    0  0  0  (resilvering)
      sdq                      ONLINE    0  0  0
      sdn                      ONLINE    0  0  0
      sdj                      ONLINE    0  0  0
      sdm                      ONLINE    0  0  0
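To keep an eye on the resilver progress, something like this works (a sketch):

    watch -n 60 'zpool status fs0pool'   # the scan: line reports percent done and time to go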
We need to retire this array. There are hints of problems on other disks.
FAMIS 31115
This week's check serves as a comparison of how things in the PSL came back after the long power outage last Thursday. Overall, things look good, with the exception that alignment into the PMC and RefCav certainly needs work (not surprising after the laser goes down), but we're waiting on full picomotor functionality to be restored before doing that. As is, the alignment is good enough for now.
One can see that after the system was recovered Thursday afternoon, the environmental controls for the enclosure were in a weird state for about a day (see Jason's alog), which caused oscillations in the amplifier pump diode currents and thus in the output power from AMP2. This has been fixed and behavior appears to be back to normal.
Additionally, the differential pressure sensor between the anteroom and laser room seems to have been fixed by the outage. Hooray.
Mon Dec 08 10:12:15 2025 Fill completed in 12min 12secs
Sheila, Filiberto
In prep for the HAM7 vent, we ramped off the PSAMS servo and the requested voltage. This procedure has changed since the addition of the PSAMS servo; the screenshot shows how to get to the screens used to ramp it off.
We first turned off the servo input, then ramped its gain from 1 to 0 with a 30 second ramp. Then, with a 100 second ramp, we turned off the offset for the requested voltage. This needs to be done for ZM4, ZM5, and ZM2; a hedged sketch of the equivalent command-line steps is below.
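As a rough sketch only, the sequence for one optic might look like the following; the channel names are hypothetical placeholders (the real paths are on the screens in the screenshot), and the same steps repeat for each of ZM4, ZM5, and ZM2:

    caput H1:AWC-ZM4_PSAMS_SERVO_INEN 0     # hypothetical channel: turn off the servo input
    caput H1:AWC-ZM4_PSAMS_SERVO_TRAMP 30   # hypothetical channel: set a 30 s ramp time
    caput H1:AWC-ZM4_PSAMS_SERVO_GAIN 0     # ramp the servo gain from 1 to 0
    caput H1:AWC-ZM4_PSAMS_OFS_TRAMP 100    # hypothetical channel: set a 100 s ramp time
    caput H1:AWC-ZM4_PSAMS_OFS_ON 0         # turn off the requested-voltage offset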
We also set the guardians to DOWN in prep for losing high voltage to the OPO, SHG, and PMC.
We didn't do the pico controllers yet since Marc and Daniel are debugging them.
M. Todd, S. Dwyer, J. Driggers
| Measurement | Value [uD/W] | Notes |
| Ring Heater Coupling to Substrate Lens | -21.0 +/- 0.3 | Relative to the modeled coupling this is 79 +/- 1% efficiency, compared to the 75-80% efficiency predicted from arm cavity measurements. Modeled couplings assuming 100% efficiency give around -26.5 uD/W. |
| SR3 Heater Coupling to Substrate Lens | ITMX HWS: 4.7 +/- 0.2; ITMY HWS: 4.6 +/- 0.1 | The ITMX HWS seems to be noisier than ITMY, but both give very similar mean estimates. The estimate from Gouy phase measurements is around 5.0 uD/W. |
We turned on the inverse ring heater filters to speed up the heating for those (using nominal values for the settings). Because of the weekend mayhem with the earthquakes we did not get a super long HWS transient measuring the full response, but we did get a pretty good estimate of the ring heater effect on the substrate thermal lens without any other heating in the measurement. This is good to compare against the modeled values that we have.
I also turned on the SR3 heater on Sunday to get estimates of the coupling of SR3 heating to the defocus of SR3. To do this, Jenne helped me untrip a lot of the SUS watchdogs for the optics relevant to the HWS. About 3 hours after the SR3 heater was turned on, the watchdogs must have tripped again and misaligned the optics. Fortunately we got the cooldown data as well, and it's all pretty consistent. These measurements suggest a 4.7 uD/W coupling for SR3 heating, which is very similar to the coupling modeled from Gouy phase measurements at different SR3 heater powers.
Overall, while these measurements provide more pieces to the puzzle, they make previous analyses a bit more confusing, requiring some more thought (as usual).
TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering // Earthquake
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: LARGE_EQ
QUICK SUMMARY:
When I try to run ndscope I get the following error:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.
Aborted
*I think I was trying to launch ndscope in a terminal that was sshed into a different computer/environment.*
As a follow-up note: ndscope is working for Ryan. We are not entirely sure what the issue was; maybe the console was in a strange conda environment. When we looked at it, ndscope started fine.
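For anyone who hits this again, a quick sketch of the usual checks (the hostname below is a placeholder). That error generally means the Qt xcb plugin cannot reach an X server from the shell that launched the program:

    echo "$DISPLAY"          # empty or stale means Qt has no X display to connect to
    ssh -Y controls@opsws    # placeholder host; reconnect with X11 forwarding enabled
    which ndscope            # confirm the expected (conda) environment provides ndscope
    ndscope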
M. Todd, J. Driggers
I wanted to turn on SR3 and watch the HWS while it heats up, to compare to models of the defocus we expect. However, due to the earthquake yesterday most of the suspension watchdogs had tripped and would not let me re-align the ITMs, BS, and SR3 to make sure the HWS were aligned.
With Jenne's help, we untripped the watchdogs and took the ISIs to isolated for the ITMs, BS, and SR3. I then turned the SR3 heater on to 4 W, and I will watch to make sure it responds well. It should be relatively well thermalized in about 5 hours, at which point I'll turn it off.
M. Todd
I went back in to turn off the SR3 heater and discovered that the watchdogs for the BS and ITMY had tripped again. I reset the watchdogs but was not successful in putting their ISIs into fully isolated because of a 'WATCHDOG TRIP : SUBORDINATE' error.
Regardless, the suspensions report that they're aligned and SR3 is cooling down, so the HWS should be able to get the cooldown data as well.
Sun Dec 07 10:07:10 2025 Fill completed in 7min 7secs
Another SWWD trip of ETMY
The ETMY suspension rings up within 15 minutes of a reset, causing a SWWD trip of SEI (reset at 08:00, tripped at 08:15 this morning).
I'll keep SUS tripped for now and reenable SEI in bypass mode until we can determine what is going on with h1susetmy.
The BSC8 annulus ion pump (AIP) railed late yesterday around 10:02 pm local time; see attached plot.
After the pump railed there was no noted effect inside the main vacuum envelope. The annulus system will be diagnosed at the next opportunity.
Last failure of this ion pump was mentioned on 2/19/2002.
BSC SWWDs were tripped around 13:00 this afternoon following a 7.0 mag EQ in Alaska.
After Dave texted me about the EQ, I logged in, reset all the watchdogs, set all of the ISIs to damped, and turned off the sensor correction. This should be a safe enough state until people get back in on Monday.
SUSETMY tripped a second time on Saturday after its reset, I untripped it at 08:00 Sunday.
We are still recovering CDS from Thursday's power outage. All critical systems have been recovered and IFO locking was started yesterday.
Both the Alarms and Alerts systems are operational and sending cell phone texts, but I'm having issues sending alert emails.
Here is a brief summary of what we are currently working on:
Timing:
Timing has two issues. The main one is with the EY timing fanout chassis: its single-mode link to MY (port 15) is showing delay issues. Despite this, h1pemmy appears to have good timing.
The secondary issue is that the atomic clock in the MSR has time-jumped by 0.4 s and needs resyncing.
Disk Failures:
The MSR file cluster has lost 4 disks. Resilvering for 2 of them is ongoing.
h1susauxh2 Power Supply Failure:
One of h1susauxh2's power supplies has failed, FRS36264.
EDC Disconnected Channels:
EDC currently has 574 disconnected channels, relating to auxiliary IOCs which still need to be started.
CDS OVERVIEW
Referencing the attached CDS Overview:
Range 7-segment LED service needs to be restarted.
Range of -1 Mpc shows an outstanding GDS issue
EDC disconnect count mentioned above
Timing RED because of issues covered above
Picket Fence WHITE, needs restarting (see list below)
CDS Alarm RED due to EDC disconnect count
CDS SDF YELLOW because of remote power control and FCES-WIFI issues
Missing Auxiliary EPICS IOCs
List may not be complete:
Picket Fence, End Station HWS, Mains Power (CS, EX), SUS Violin monitor, ncalx, cds load mon h1vmboot1, cal inj exttrig, range led, Observation mode
The EY Geist Watchdog1250 actually worked well for a few days after the power outage and has only recently failed again, suggesting a possible power supply issue.