Closes FAMIS #27621, last checked in alog 88194
Everything looks okay except the PMC REFL, which is high. It appears to have been higher since the PSL was recovered after yesterday's power outage, but we are also still recovering and more work will need to be done on the PSL anyway, so this is likely expected and will be adjusted.
Laser Status:
NPRO output power is 1.83W
AMP1 output power is 70.67W
AMP2 output power is 139.9W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 0 days, 17 hr 9 minutes
Reflected power = 25.84W
Transmitted power = 105.6W
PowerSum = 131.5W
FSS:
It has been locked for 0 days 16 hr and 19 min
TPD[V] = 0.5027V
ISS:
The diffracted power is around 4.0%
Last saturation event was 0 days 0 hours and 0 minutes ago
Possible Issues:
PMC reflected power is high
Following yesterday's power outage, primary FAC systems (HVAC, domestic water supply, fire alarm/suppression) all seem to be operating normally. Fire panels required acknowledgement of power supply failures, and battery backups did their job. The issue plaguing the supply fan drive at the FCES revisited us this morning, but it is highly unlikely that it arose from the outage. Temperatures are within normal ranges in all areas. Eric continues to physically walk systems down to ensure normal operation of points in systems not visible via FMCS. C. Soike E. Otterman R. McCarthy T. Guidry
TITLE: 12/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
QUICK SUMMARY: Still in the midst of recovery from the power outage yesterday. Looks like all suspensions stayed damped overnight except the BS, which tripped at 0422 UTC. SEI is still in the non-nominal state Jim left it in yesterday (alog88373). Many MEDM screens, scripts, and other processes are running *very* slowly.
Recovery will continue, one forward step at a time.
Recovery of the front-end systems was hampered by a slow /opt/rtcds file system, an issue which still persists. There are no logs on h1fs0 indicating the problem, and the ZFS file system itself appears to be fully functional. Hourly backups by h1fs1 continue normally. At this point we cannot rule out a network issue.
The alarm system is running but does not appear to be operational; again, no errors are being logged and channel access appears to be working.
Jim reported a bad model initialization from its safe.snap, most probably caused by the slow /opt/rtcds. I had to restart many models several times to get past a "burt restore" timeout for similar reasons.
We may need to restart h1fs0 in the morning and remount /opt/rtcds on all the NFS clients.
The positive 18V power supply which provides power to racks SUS-C1 and SUS-C2 failed after the power outage. The fan seized and the power supply tripped off. The power supply was replaced.
New Power Supply installed - S1300291
F. Clara, M. Pirello
WP 12915
Corner Station Controls Chassis 4 Wiring Diagram - E1101222
EtherCAT Corner Station Chassis 4 - D1101266
Modifications to the Beckhoff chassis - E2000499
The EtherCAT Corner Station Chassis 4 was modified per E2000499. New Beckhoff terminals were installed to support JAC and BHD. A new rear panel was installed to accommodate the new rear adapter boards and shutter control connectors. The DIN rail power terminal blocks were relocated, making the existing power cables too short; new power cables were installed to the EtherCAT couplers and power terminals. All field cabling was reconnected, except for ISC_313. The Beckhoff software will need to be updated.
EtherCAT Corner Station Chassis 4 - Serial Number S1107450
F. Clara, D. Sigg
Seems like all of the models are running, but many of the guardians are still down. What I was able to get up and running:
HEPI pump stations for all 3 buildings. Patrick had to help some with the end station Beckhoff. I had to adjust the float switch for the overflow tank at EX; we were riding just at the trip level there. The corner station Athena looked dead when I powered on the power supply on the mezzanine, but I think some of the LEDs are just dead. Good thing we plan to replace that soon.
Both BRSs are running. Patrick had to power cycle the EX Beckhoff, after which we were able to restart the software. The damping settings got reverted (highthreshold and lowthreshold), so I had to fix those through the CLI. Heater settings also got lost; both BRSs cooled off a bit over the several hours we were blind and will take a little while to warm back up. We will probably keep watching for a while to make sure everything is okay, but I think we are good.
Some of the chambers can be isolated through Guardian, but many have dead Guardian nodes. I manually engaged damping where I couldn't get the ISI isolated via Guardian.
None of the blends, sensor correction, or CPS diff are alive enough to recover yet. I have manually disabled sensor correction for the corner; the ends can't be isolated yet. I guess that will wait for tomorrow.
Oli, Ryan S, Rahul
Except for ETMX and ETMY (timing error, work currently ongoing), we have recovered all other suspensions by untripping the watchdogs and setting them to SAFE for tonight. The INMONs look fine; eyeballed them all (BOSEMs and AOSEMs), nothing out of the ordinary.
For ETMX and ETMY, Dave is currently performing a computer restart, following which they will be set to SAFE as well.
Once CDS reboots were finished, I took all suspensions to either ALIGNED or MISALIGNED so that they're damped overnight.
TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
After yesterday's CDS & H1-locking work, today we transitioned to starting prep work for the upcoming vent of HAM7.
However, there was more CDS work today related to ISC Slow Controls... but in the middle of this...
There was a 90-minute power outage on site!!
LOG:
Recovery milestones:
22:02 Power back up
22:04 CDS / GC seeing what's on/functional, then bring infrastructure back up
22:10 VAC team starts the Kobelco to support the air pressure that's keeping the corner station gate valves from closing
22:13 Phones are back
22:17 LHO GC Internet's back
22:25 GC wifi back up, alog and CDS back up
22:40 RyanS, Jason, Patrick got PSL Beckhoff back up
22:55 VAC in LVEA/FCES back up
22:57 CDS back up (only controls)
23:27 PSL computer back
00:14 Safety interlock is back
00:14 HV and ALSY on at EY
00:35 opslogin0 back up
00:37 CS HV back up
00:53 CS HEPI pump station controller back up
01:05 CO2X and CO2Y back up
01:10 HV and ALSY on at EY
01:22 PSL is alive
Note: EY 24MHz oscillator had to be power cycled to resync
Casualties:
- lost ADC on seiH16
- 18V power supply failure at EY in SUS rack
Log:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:32 | VAC | Gerardo | MX, EX, MY, EY | n | Bringing back VAC computers | 23:41 |
| 23:32 | VAC | Jordan | LVEA, FCES | n | Bringing back VAC computers | 22:55 |
| 22:54 | PSL | RyanS, Jason | LVEA | n | Turning PSL chiller back on | 23:27 |
| 23:03 | VAC | Jordan | LVEA | n | Checking out VAC computer | 23:41 |
| 23:56 | SAF | Richard | EY | n | Looking at safety system | 00:25 |
| 00:02 | JAC | Jennie | JOAT, OpticsLab | n | Putting components on table | 01:24 |
| 00:19 | SAF | TJ | EX | n | Turning on HV and ALSX | 00:49 |
| 00:38 | TCS | RyanC | LVEA | n | Turning CO2 lasers back on | 01:06 |
| 01:03 | EE | Marc | CER | n | Power cycling io chassis | 01:14 |
| 01:06 | PSL | RyanS, Jason | LVEA | n | Checking makeup air for PSL | 01:10 |
| 01:07 | JAC | Jennie | LVEA | n | Grabbing parts | 01:14 |
| 01:08 | Beckhoff | Patrick | CR | - | BRSX recovery | 02:20 |
| 01:08 | SEI | Jim | EX, EY | - | HEPI pump station recovery | 01:58 |
| 01:25 | SEI | Patrick | EX | n | BRSX troubleshooting | 01:54 |
| 01:27 | EE | Marc | EX | n | Looking at RF sources | 02:09 |
| 02:00 | EE | Fil | EY | n | Power cycling SUSEY | 02:19 |
| 02:09 | VAC | Gerardo | MX | N | Troubleshooting VAC computer | 02:27 |
| 02:12 | EE | Marc | CER | N | Checking power supplies | 02:16 |
Since Oli's alog, I tried to keep a rough outline of the goings-on:
Marc and Fil went down to EY to replace the failed power supply, which brought life back to the EY front ends.
Dave noticed several models across site had timing errors, so he restarted them.
Gerardo continued to troubleshoot VAC computers at the mid-stations.
Once CDS boots were finished, I brought all suspension Guardians to either ALIGNED or MISALIGNED so that they're damped overnight.
I started to recover some of the Guardian nodes that didn't come up initially. When TJ started the Guardian service earlier, it took a very long time, but most of the nodes came up and he put them into good states for the night. The ones that didn't come up (still white on the GRD overview screen) I've been able to revive with a 'guardctrl restart' command, but I can only do a couple at a time or else the process times out. Even this way, the nodes take several minutes to come online. I got through many of the dead nodes, but I did not finish as I am very tired.
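For reference, a minimal sketch of reviving the remaining dead nodes a couple at a time so the restart doesn't time out (node names here are placeholders; the real list would come from whatever is still white on the GRD overview):

```python
import subprocess

# Placeholder node names; restart in small batches to avoid the guardctrl timeout.
dead_nodes = ["SUS_EXAMPLE_A", "SUS_EXAMPLE_B", "ISI_EXAMPLE_C", "ISI_EXAMPLE_D"]

batch_size = 2
for i in range(0, len(dead_nodes), batch_size):
    batch = dead_nodes[i:i + batch_size]
    # Assumes 'guardctrl restart' accepts multiple node names, as used above.
    subprocess.run(["guardctrl", "restart", *batch], check=True)
```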
Main things still to do for recovery: (off the top of my head)
R. Short, P. Thomas, J. Oberling
We have recovered the PSL after today's power outage. Some notes for the future:
I've attached a picture of the Settings table for PSL sensor calibration and operating hours for future reference. Again, our persistent operating hours (which track the total uptime of PSL laser components; OPHRS A in the attached picture) will continue to be wrong, as we cannot update this value. The current operating hours, which track the operating hours of the currently operating components (i.e. we've been running this specific NPRO for X hours; OPHRS in the attached picture), are correct.
We have the PMC and FSS RefCav locked, but have left the ISS OFF overnight while the PMC settles. The PMC requires a beam alignment tweak (normal after an extended time off, like a 90 minute power outage) but we don't yet have Beckhoff so we don't have access to our picomotor mounts. I'll tweak the beam alignment tomorrow once Beckhoff has been recovered.
[Sheila, Eric, Karmeng]
We checked the NLG of the OPO without cavity lock; the threshold power is roughly 3.7 mW, the same order as the test performed at MIT (E2500270).
Checked the crystal alignment and found 9 good dual-resonant positions, from -2689 um to 1223 um, with 490 um separation between each point. Everything seems to be in a good position.
There is also a clipped dual-resonance position at -3190 um. The crystal edges are roughly at 1500 um and -3200 um.
The red alignment does not shift between the various crystal positions.
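As a quick sanity check on the quoted spacing (a trivial sketch using only the numbers above):

```python
# 9 dual-resonant positions spanning -2689 um to 1223 um, i.e. 8 intervals.
span_um = 1223 - (-2689)
print(span_um / 8)  # ~489 um per step, consistent with the quoted ~490 um separation
```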
Power has been out for a few minutes so far. There is emergency lighting. Luckily, since it was lunchtime, there were no people out on the floor. Recovery begins!
FAMIS 38886, last checked in alog88025
Both the HAM5 and BS chambers were in 'ISI_DAMPED_HEPI_OFFLINE' at the time this was run, but all other chambers were nominal.
There are 15 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -1.565 [V]
ETMX T240 2 DOF Y/V = -1.633 [V]
ETMX T240 2 DOF Z/W = -0.886 [V]
ITMX T240 1 DOF X/U = -2.256 [V]
ITMX T240 1 DOF Z/W = 0.467 [V]
ITMX T240 3 DOF X/U = -2.386 [V]
ITMY T240 3 DOF X/U = -1.073 [V]
ITMY T240 3 DOF Z/W = -2.895 [V]
BS T240 1 DOF Y/V = -0.323 [V]
BS T240 2 DOF Z/W = 0.327 [V]
BS T240 3 DOF X/U = -0.547 [V]
BS T240 3 DOF Z/W = -0.401 [V]
HAM8 1 DOF X/U = -0.638 [V]
HAM8 1 DOF Y/V = -0.729 [V]
HAM8 1 DOF Z/W = -1.11 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.007 [V]
ETMX T240 1 DOF Y/V = -0.049 [V]
ETMX T240 1 DOF Z/W = -0.017 [V]
ETMX T240 3 DOF X/U = -0.005 [V]
ETMX T240 3 DOF Y/V = -0.056 [V]
ETMX T240 3 DOF Z/W = -0.051 [V]
ETMY T240 1 DOF X/U = 0.045 [V]
ETMY T240 1 DOF Y/V = 0.198 [V]
ETMY T240 1 DOF Z/W = 0.249 [V]
ETMY T240 2 DOF X/U = -0.073 [V]
ETMY T240 2 DOF Y/V = 0.218 [V]
ETMY T240 2 DOF Z/W = 0.037 [V]
ETMY T240 3 DOF X/U = 0.268 [V]
ETMY T240 3 DOF Y/V = 0.076 [V]
ETMY T240 3 DOF Z/W = 0.147 [V]
ITMX T240 1 DOF Y/V = 0.254 [V]
ITMX T240 2 DOF X/U = 0.151 [V]
ITMX T240 2 DOF Y/V = 0.258 [V]
ITMX T240 2 DOF Z/W = 0.223 [V]
ITMX T240 3 DOF Y/V = 0.104 [V]
ITMX T240 3 DOF Z/W = 0.094 [V]
ITMY T240 1 DOF X/U = 0.074 [V]
ITMY T240 1 DOF Y/V = 0.123 [V]
ITMY T240 1 DOF Z/W = -0.013 [V]
ITMY T240 2 DOF X/U = 0.017 [V]
ITMY T240 2 DOF Y/V = 0.22 [V]
ITMY T240 2 DOF Z/W = 0.148 [V]
ITMY T240 3 DOF Y/V = 0.091 [V]
BS T240 1 DOF X/U = 0.184 [V]
BS T240 1 DOF Z/W = -0.223 [V]
BS T240 2 DOF X/U = 0.094 [V]
BS T240 2 DOF Y/V = -0.224 [V]
BS T240 3 DOF Y/V = 0.004 [V]
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.625 [V]
STS EY DOF Z/W = 2.321 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.498 [V]
STS A DOF Y/V = -0.915 [V]
STS A DOF Z/W = -0.485 [V]
STS B DOF X/U = 0.154 [V]
STS B DOF Y/V = 0.939 [V]
STS B DOF Z/W = -0.337 [V]
STS C DOF X/U = -0.685 [V]
STS C DOF Y/V = 0.77 [V]
STS C DOF Z/W = 0.505 [V]
STS EX DOF X/U = -0.195 [V]
STS EX DOF Y/V = -0.111 [V]
STS EX DOF Z/W = 0.112 [V]
STS EY DOF Y/V = 1.271 [V]
STS FC DOF X/U = 0.071 [V]
STS FC DOF Y/V = -1.254 [V]
STS FC DOF Z/W = 0.567 [V]
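Roughly what the check above does, as a minimal sketch (not the actual FAMIS script): flag any proof mass whose centering voltage magnitude exceeds the threshold, 0.3 V for the T240s and 2.0 V for the STSs.

```python
def flag_out_of_range(readings, threshold):
    """Return the subset of {sensor: voltage [V]} readings beyond the threshold."""
    return {name: v for name, v in readings.items() if abs(v) > threshold}

# Example values taken from the listings above.
t240 = {"ETMX T240 2 DOF X/U": -1.565, "ETMX T240 1 DOF X/U": 0.007}
sts = {"STS EY DOF X/U": -4.625, "STS EX DOF X/U": -0.195}
print(flag_out_of_range(t240, 0.3))  # -> only ETMX T240 2 DOF X/U
print(flag_out_of_range(sts, 2.0))   # -> only STS EY DOF X/U
```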
Last night I took a calibration measurement (broadband and simulines) after the IFO was locked for about 3.5 hours. This generated calibration report 20251204T035347Z.
The results from the broadband and simulines measurements on the front page look reasonable. The plot label indicates that these are from PCAL X and Y, but I think the broadband and simulines are run on PCAL Y only, so I don't know whether PCAL X agrees with Y. I believe that, as of last night, the DAC for PCAL X had been swapped.
I made no changes to the pydarm ini file to account for the DAC change before running the report.
Jeff and I looked over the report and have concluded that the changes we see are very minimal, and do not require any updates to the calibration at this time.
There is a 1% change in the L3 actuation strength, comparing the measurements from 11/18 and 12/04. This could be from the DAC change or it could be from charging. Either way, it is small enough that kappa TST is likely correcting for it properly. The overall systematic error is still around 1%, as it was before the DAC change.
We don't think there needs to be any changes to the pydarm ini file at this time.
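Schematically (this is an illustration of the reasoning, not the pydarm model itself), a change in the L3/TST actuation strength is absorbed by the time-dependent correction factor kappa_TST that the calibration pipeline applies, so a 1% shift does not require a new static model:

```python
# Illustrative numbers only (arbitrary units, not from the report).
A_tst_model = 1.00   # TST actuation strength assumed by the static model
A_tst_now = 1.01     # ~1% stronger, e.g. from the DAC swap and/or charging
kappa_tst = A_tst_now / A_tst_model
print(f"kappa_TST ~ {kappa_tst:.2f}")  # tracked and applied online, so the static model is unchanged
```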
Thu Dec 04 10:15:45 2025 INFO: Fill completed in 15min 42secs
The plot has a zoomed y-scale to reduce the number of divisions and show more detail.
Jennie W, Keita,
Since we don't have an easy way of scanning the input beam in the vertical direction, Keita used the pitch of the PZT steering mirror to do the scan and we read out the DC voltages for each PD.
The beam position can be inferred from the pictured setup - see photo. As the pitch actuator on the steering mirror is rotated, the Allen key sitting in the hole in the pitch actuator moves up and down relative to the ruler.
height on ruler above table = height of centre of actuator wheel above table + sqrt((Allen key thickness/2)^2 + (Allen key length)^2) * sin(ang - delta_theta)
where ang is the angle of the actuator wheel and delta_theta is the angle from the centre line of the Allen key to the corner used to point at the graduations on the ruler.
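The same relation in code, as a small sketch (the example dimensions are made up, just to show how the formula is evaluated):

```python
import numpy as np

def height_on_ruler(wheel_centre_height, key_thickness, key_length, ang, delta_theta):
    """Height of the Allen-key pointer on the ruler above the table (same units as the lengths)."""
    # Distance from the actuator wheel centre to the corner of the Allen key.
    pointer_radius = np.sqrt((key_thickness / 2.0)**2 + key_length**2)
    return wheel_centre_height + pointer_radius * np.sin(ang - delta_theta)

# Example with made-up dimensions in mm and angles in radians:
print(height_on_ruler(100.0, 2.5, 50.0, np.deg2rad(40.0), np.deg2rad(1.4)))
```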
The first measurement, from the alignment Keita found yesterday that minimised the vertical dither coupling, is shown. It shows the voltage on each PD vs. height on the ruler.
From this, and from the low DC voltages we saw on the QPD and some PDs yesterday, Keita and I realised we had gone too far towards the edge of the QPD and some PDs.
So in the afternoon Keita realigned onto all of the PDs.
Today, as we were doing measurements on it, Keita realised we still had the small aperture piece in place on the array, so we moved that for our second set of measurements.
The plots of voltage vs. ruler position and voltage vs. pitch wheel angle are attached.
Keita did a few more measurements in the vertical scan after I left on Friday; attached is the updated scan plot.
He also then set the pitch to the middle of the range (165mm on the scale in the graph) and took a horizontal scan of the PD array using the micrometer that the PZT mirror is mounted on. See second graph.
From the vertical scan of the PD array it looks like diodes 2 and 6, which are in a vertical line in the array, are not properly aligned. We are not sure if this is an issue with one of the beam paths through the beamsplitters/mirrors that split the light into the four directions for each vertical pair of diodes, or if these diodes are just aligned incorrectly.
The above plots are not relevant any more as PD positions were adjusted since, but here are additional details for posterity.
Calculating the rotation angle of the knob doesn't mean anything on its own; it must be converted to a meaningful number like the displacement of the beam on the PD. This wasn't done for the above plots, but was done for the plots with the final PD positions.
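For reference, one way that conversion can be done, as a rough sketch with hypothetical numbers (the actual adjuster pitch, lever arm, and mirror-to-PD distance for this setup were not recorded here):

```python
# Hypothetical geometry, for illustration only.
thread_pitch = 0.25e-3   # m of screw travel per knob revolution (assumed)
lever_arm = 0.025        # m from the mirror pivot to the adjuster tip (assumed)
distance_to_pd = 0.5     # m from the steering mirror to the PD array (assumed)

def beam_displacement(knob_turns):
    """Approximate beam displacement on the PD for a given knob rotation."""
    mirror_tilt = knob_turns * thread_pitch / lever_arm  # rad, small-angle approximation
    beam_angle = 2.0 * mirror_tilt                       # reflection doubles the tilt
    return beam_angle * distance_to_pd                   # m at the PD array

print(f"1/4 turn -> {beam_displacement(0.25) * 1e3:.2f} mm on the PD")
```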