While TJ was running initial alignment for the green arms, I noticed that the ALS X beam on the camera appeared to be too far to the right side of the screen. The COMM beatnote was at -20 dBm, when it is normally between -5 and -10 dBm. I checked both the PR3 top mass OSEMs and the PR3 oplev. The top mass OSEMs did not indicate any significant change in position, but the oplev seems to indicate a significant change in yaw. PR3 yaw was around -11.8 but changed to around -11.2 after the power outage. It also appears that the ALS X beam is closer to its usual position on the camera.
I decided to try moving PR3 yaw. I stepped it by 2 urad, which brought the oplev back to -11.8 in yaw and the COMM beatnote to -5 dBm. Previous slider value: -230.1, new slider value: -232.1.
The DIFF beatnote may not be great yet, but we should wait for beamsplitter alignment before making any other changes.
Actually, this may not have been the right thing to do. I trended the oplevs and top mass osems of ITMX and ETMX and compared their values during today's initial alignment before moving PR3, to the last initial alignment we did before the power outage. They are mostly very similar except for ETMX yaw.
| Optic | P then [urad] | P now [urad] | Y then [urad] | Y now [urad] |
|---|---|---|---|---|
| ITMX oplev | -7.7 | -7.8 | 5.8 | 5.6 |
| ETMX oplev | 2.7 | 3.1 | -11.5 | -3.1 |
I put the PR3 yaw slider back to its previous value of -230.1 until we can figure out why ETMX yaw has moved so much.
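To make the comparison explicit, here is a minimal Python sketch that computes the pitch/yaw deltas from the table above and flags any oplev that moved by more than an assumed threshold (the 1 urad cutoff is illustrative, not a calibrated alignment tolerance).

```python
# Sketch: flag oplevs that moved significantly across the power outage.
# Values are copied from the table above (urad); the 1 urad threshold is
# an illustrative choice, not an official tolerance.
oplevs = {
    "ITMX": {"P": (-7.7, -7.8), "Y": (5.8, 5.6)},
    "ETMX": {"P": (2.7, 3.1),  "Y": (-11.5, -3.1)},
}

THRESHOLD_URAD = 1.0

for optic, dofs in oplevs.items():
    for dof, (then, now) in dofs.items():
        delta = now - then
        flag = "MOVED" if abs(delta) > THRESHOLD_URAD else "ok"
        print(f"{optic} {dof}: {then:+.1f} -> {now:+.1f} urad (delta {delta:+.1f})  {flag}")
```

Running this flags only ETMX yaw (a change of about +8.4 urad), consistent with the conclusion above.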
Closes FAMIS38811, last checked in alog87797.
We can see yesterday's outage on every plot as expected, as well as the BRS issues from October 21st. There doesn't look to be any trend in the driftmon, and ETMY still looks to have been slowly increasing in temperature before the outage. The aux plot looks about the same as during the last check, except that ETMY DAMP CTL looks to have come back at a different, lower spot; the EY SEI configuration looks like it's not quite fully recovered, so that may be why.
All systems have been recovered enough to start relocking at this point. We will not be relocking the IFO fully at this time though since the ring heaters have been off and their time to thermalize would take too long. We will instead be running other measurements that will not need the ring heaters on.
There is a good chance that other issues will pop up, but we are starting initial alignment now.
[Oli, Jenne]
Oli brought all of the IMC optics back to where they had been yesterday before the power outage. We squiggled MC2 and the IMC PZT around until we could lock on the TEM00 mode, and let the WFS (with 10x usual gain) finish the alignment. We offloaded the IMC WFS using the IMC guardian. We then took the IMC to OFFLINE, and moved MC1 and the PZT such that the DC position on the IMC WFS matched a time from yesterday when the IMC was offline before the power outage. We relocked, let the alignment run again, and again offloaded using the guardian. Now the IMC is fine, and ready for initial alignment soon.
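For the record, here is a minimal sketch of how that "match yesterday's DC position" comparison could be done offline with gwpy, assuming NDS access; the channel names and GPS times are placeholders, not the exact ones we used.

```python
# Sketch only: compare current IMC WFS DC spot positions against a reference
# time when the IMC was offline before the power outage. Channel names and
# GPS times below are placeholders.
from gwpy.timeseries import TimeSeries

REF_START = 1421000000   # placeholder GPS time (IMC offline, pre-outage)
NOW_START = 1421090000   # placeholder GPS time (IMC offline, post-recovery)
AVG_SEC = 60

channels = [
    "H1:IMC-WFS_A_DC_PIT_OUT16",   # placeholder channel names
    "H1:IMC-WFS_A_DC_YAW_OUT16",
]

for chan in channels:
    ref = TimeSeries.get(chan, REF_START, REF_START + AVG_SEC).mean()
    now = TimeSeries.get(chan, NOW_START, NOW_START + AVG_SEC).mean()
    print(f"{chan}: ref={ref.value:+.3f}, now={now.value:+.3f}, diff={(now - ref).value:+.3f}")
```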
I restarted all the dust monitor IOCs, they all came back nicely. I then reset the alarm levels using the 'check_dust_monitors_are_working' script.
Marc, Daniel
We finished the software changes for EtherCAT Corner Station Chassis 4.
We found 2 issues related to the power outage:
Baffle PD chassis in EX has a bench supply that needs to be turned on by hand.
The system is back up and running.
Jonathan, EJ, Richard, TJ, Dave:
Slow /opt/rtcds file access has been fixed.
All Guardian nodes running again.
The long-range-dolphin issue was due to the end station Adnaco chassis being down, Richard powered these up and all IPCs are working correctly now.
Cameras are now working again.
Currently working on:
- GDS broadcaster and DAQ min trend archived data
- Alarms and Alerts
Closes FAMIS#27621, last checked in alog 88194
Everything is looking okay except PMC REFL, which is high. It does look like it's been higher since the PSL was recovered after yesterday's power outage, but we are also still recovering and more PSL work remains, so this may be expected and will likely be adjusted later.
Laser Status:
NPRO output power is 1.83W
AMP1 output power is 70.67W
AMP2 output power is 139.9W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 0 days, 17 hr 9 minutes
Reflected power = 25.84W
Transmitted power = 105.6W
PowerSum = 131.5W
FSS:
It has been locked for 0 days 16 hr and 19 min
TPD[V] = 0.5027V
ISS:
The diffracted power is around 4.0%
Last saturation event was 0 days 0 hours and 0 minutes ago
Possible Issues:
PMC reflected power is high
Following yesterday's power outage, primary FAC systems (HVAC, domestic water supply, fire alarm/suppression) all seem to be operating normally. Fire panels required acknowledgement of power supply failures, and battery backups did their job. The issue plaguing the supply fan drive at the FCES revisited us this morning, but it is highly unlikely that it arose from the outage. Temperatures are within normal ranges in all areas. Eric continues to physically walk down systems to ensure normal operation of points not visible via FMCS. C. Soike, E. Otterman, R. McCarthy, T. Guidry
TITLE: 12/0 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
QUICK SUMMARY: Still in the midst of recovery from the power outage yesterday. Looks like all suspensions stayed damped overnight except the BS, which tripped at 0422 UTC. SEI is still in the non-nominal state Jim left it in yesterday (alog88373). Many MEDM screens, scripts, and other processes are running *very* slowly.
Recovery will continue, one forward step at a time.
Recovery of the front end systems was hampered by a slow /opt/rtcds file system, an issue which still persists. There are no logs on h1fs0 indicating the problem, and the ZFS file system itself appears to be fully functional. Hourly backups by h1fs1 continue normally. At this point we cannot rule out a network issue.
The alarms system is running but does not seem to be operational, again no errors are being logged and channel access appears to be working.
Jim reported a model that initialized badly from its safe.snap, most probably caused by the slow /opt/rtcds. I had to restart many models several times to get past a "burt restore" timeout for similar reasons.
We may need to restart h1fs0 in the morning and remount /opt/rtcds on all the NFS clients
The positive 18V power supply which provides power to racks SUS-C1 and SUS-C2 failed after the power outage. The fan seized and the power supply tripped off. The power supply was replaced.
New Power Supply installed - S1300291
F. Clara, M. Pirello
WP 12915
Corner Station Controls Chassis 4 Wiring Diagram - E1101222
EtherCAT Corner Station Chassis 4 - D1101266
Modifications to the Beckhoff chassis - E2000499
The EtherCAT Corner Station Chassis 4 was modified per E2000499. New Beckhoff terminals were installed to support JAC and BHD. A new rear panel was installed to accommodate the new rear adapter boards and shutter control connectors. The DIN rail power terminal blocks were relocated, making the existing power cables too short, so new power cables were installed to the EtherCAT couplers and power terminals. All field cabling was reconnected, except for ISC_313. The Beckhoff software will need to be updated.
EtherCAT Corner Station Chassis 4 - Serial Number S1107450
F. Clara, D. Sigg
Seems like all of the models are running, but many of the guardians are still down. What I was able to get up and running:
HEPI pump stations for all 3 buildings. Patrick had to help some with the end station Beckhoff. I had to adjust the float switch for the overflow tank at EX; we were riding just at the trip level there. The corner station Athena looked dead when I powered on the power supply on the mezzanine, but I think some of the LEDs are just dead. Good thing we plan to replace that soon.
Both BRSs are running. Patrick had to power cycle the EX Beckhoff, and we were then able to restart the software. The damping settings got reverted (highthreshold and lowthreshold), so I had to fix those through the CLI. Heater settings also got lost; both BRSs cooled off a bit over the several hours we were blind and will take a little while to warm back up. I'll probably keep watching for a while to make sure everything is okay, but I think we are good.
Some of the chambers are able to be isolated through guardian, but many have dead guardian nodes. I manually engaged damping where I couldn't get the ISI isolated via guardian.
None of the blend, sensor correction, or CPS diff configuration is alive enough to recover yet. I have manually disabled sensor correction for the corner; the ends can't be isolated yet. I guess that will wait for tomorrow.
Oli, Ryan S, Rahul
Except for ETMX and ETMY (timing error, work currently ongoing), we have recovered all other suspensions by un-tripping the watchdogs and setting them to SAFE for tonight. The INMONs look fine - I eyeballed them all (BOSEMs and AOSEMs), nothing out of the ordinary.
For ETMX and ETMY, Dave is currently performing a computer restart, following which they will be set to SAFE as well.
Once CDS reboots were finished, I took all suspensions to either ALIGNED or MISALIGNED so that they're damped overnight.
TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
After CDS & H1-Locking work of yesterday, today we transitioned to starting prep work for the upcoming vent of HAM7.
However, there was more CDS work today which was related to ISC Slow Controls....but in the middle of this...
There was a 90min power outage on site!!
LOG:
Recovery milestones:
22:02 Power back up
22:04 CDS / GC seeing what's on/functional, then bring infrastructure back up
22:10 VAC team starts the Kobelco to support the air pressure that's keeping the corner station gate valves from closing
22:13 Phones are back
22:17 LHO GC Internet's back
22:25 GC wifi back up, alog and CDS back up
22:40 RyanS, Jason, Patrick got PSL Beckhoff back up
22:55 VAC in LVEA/FCES back up
22:57 CDS back up (only controls)
23:27 PSL computer back
00:14 Safety interlock is back
00:14 HV and ALSY on at EY
00:35 opslogin0 back up
00:37 CS HV back up
00:53 CS HEPI pump station controller back up
01:05 CO2X and CO2Y back up
01:10 HV and ALSY on at EY
01:22 PSL is alive
Note: EY 24MHz oscillator had to be power cycled to resync
Casualties:
- lost ADC on seiH16
- 18V power supply failure at EY in SUS rack
Log:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:32 | VAC | Gerardo | MX, EX, MY, EY | n | Bringing back VAC computers | 23:41 |
| 23:32 | VAC | Jordan | LVEA, FCES | n | Bringing back VAC computers | 22:55 |
| 22:54 | PSL | RyanS, Jason | LVEA | n | Turning PSL chiller back on | 23:27 |
| 23:03 | VAC | Jordan | LVEA | n | Checking out VAC computer | 23:41 |
| 23:56 | SAF | Richard | EY | n | Looking at safety system | 00:25 |
| 00:02 | JAC | Jennie | JOAT, OpticsLab | n | Putting components on table | 01:24 |
| 00:19 | SAF | TJ | EX | n | Turning on HV and ALSX | 00:49 |
| 00:38 | TCS | RyanC | LVEA | n | Turning CO2 lasers back on | 01:06 |
| 01:03 | EE | Marc | CER | n | Power cycling IO chassis | 01:14 |
| 01:06 | PSL | RyanS, Jason | LVEA | n | Checking makeup air for PSL | 01:10 |
| 01:07 | JAC | Jennie | LVEA | n | Grabbing parts | 01:14 |
| 01:08 | Beckhoff | Patrick | CR | - | BRSX recovery | 02:20 |
| 01:08 | SEI | Jim | EX, EY | - | HEPI pump station recovery | 01:58 |
| 01:25 | SEI | Patrick | EX | n | BRSX troubleshooting | 01:54 |
| 01:27 | EE | Marc | EX | n | Looking at RF sources | 02:09 |
| 02:00 | EE | Fil | EY | n | Power cycling SUSEY | 02:19 |
| 02:09 | VAC | Gerardo | MX | N | Troubleshooting VAC computer | 02:27 |
| 02:12 | EE | Marc | CER | N | Checking power supplies | 02:16 |
Since Oli's alog, I tried to keep a rough outline of the goings-on:
Marc and Fil went down to EY to replace the failed power supply, which brought life back to the EY front ends.
Dave noticed several models across site had timing errors, so he restarted them.
Gerardo continued to troubleshoot VAC computers at the mid-stations.
Once CDS boots were finished, I brought all suspension Guardians to either ALIGNED or MISALIGNED so that they're damped overnight.
I started to recover some of the Guardian nodes that didn't come up initially. When TJ started the Guardian service earlier, it took a very long time, but most of the nodes came up and he put them into good states for the night. The ones that didn't come up (still white on the GRD overview screen) I've been able to revive with a 'guardctrl restart' command, but I can only do a couple at a time or else the process times out. Even this way, the nodes take several minutes to come online. I got through many of the dead nodes, but I did not finish as I am very tired.
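For whoever continues this, here is a rough sketch of batching those restarts so 'guardctrl restart' doesn't time out; the node names, batch size, and wait time are placeholders rather than what was actually run tonight.

```python
# Sketch: restart dead Guardian nodes a couple at a time, since restarting
# too many at once makes 'guardctrl restart' time out. Node names, batch
# size, and wait time below are illustrative placeholders.
import subprocess
import time

dead_nodes = ["SUS_ETMX", "SUS_ETMY", "ISI_HAM2", "ISI_HAM3"]  # placeholders
BATCH_SIZE = 2
WAIT_SEC = 300   # nodes take several minutes to come back online

for i in range(0, len(dead_nodes), BATCH_SIZE):
    batch = dead_nodes[i:i + BATCH_SIZE]
    print("Restarting:", ", ".join(batch))
    subprocess.run(["guardctrl", "restart", *batch], check=True)
    time.sleep(WAIT_SEC)
```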
Main things still to do for recovery: (off the top of my head)
Jennie W, Keita
Since we don't have an easy way of scanning the input beam in the vertical direction, Keita used the pitch of the PZT steering mirror to do the scan and we read out the DC voltages for each PD.
The beam position can be inferred from the pictured setup - see photo. As the pitch actuator on the steering mirror is rotated, the allen key in the hole in the pitch actuator moves up and down relative to the ruler.
height on ruler above table = height of centre of actuator wheel above table + sqrt((allen key thickness / 2)^2 + (allen key length)^2) * sin(ang - delta_theta)
where ang is the angle the actuator wheel is at and delta_theta is the angle from the centre line of the allen key to its corner which is used to point at the gradations on the ruler.
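As a sanity check of that geometry, here is a short Python version of the formula above; the numbers in the example call are placeholders, not our measured dimensions.

```python
# Sketch of the ruler-height formula above. All numbers in the example call
# are placeholders; only the geometry follows the expression in the text.
import numpy as np

def height_on_ruler(wheel_centre_height, allen_key_thickness, allen_key_length,
                    ang, delta_theta):
    """Height of the allen-key corner on the ruler, measured above the table.

    ang         : angle of the actuator wheel (radians)
    delta_theta : angle between the allen key's centre line and the corner
                  used to point at the ruler gradations (radians)
    """
    lever = np.sqrt((allen_key_thickness / 2) ** 2 + allen_key_length ** 2)
    return wheel_centre_height + lever * np.sin(ang - delta_theta)

# Example with placeholder numbers (metres and radians):
print(height_on_ruler(0.10, 0.003, 0.05, np.deg2rad(30), np.deg2rad(5)))
```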
The first measurement, from the alignment Keita found yesterday that minimised the vertical dither coupling, is shown. It shows voltage on each PD vs. height on the ruler.
From this, and from the low DC voltages we saw on the QPD and some PDs yesterday, Keita and I realised we had gone too far towards the edge of the QPD and some PDs.
So in the afternoon Keita realigned onto all of the PDs.
Today, as we were doing measurements on it, Keita realised we still had the small aperture piece in place on the array, so we moved that for our second set of measurements.
The plots of voltage vs. ruler position and voltage vs. pitch wheel angle are attached.
Keita did a few more measurements in the vertical scan after I left on Friday; attached is the updated scan plot.
He also then set the pitch to the middle of the range (165mm on the scale in the graph) and took a horizontal scan of the PD array using the micrometer that the PZT mirror is mounted on. See second graph.
From the vertical scan of the PD array it looks like diodes 2 and 6, which are in a vertical line in the array, are not properly aligned. We are not sure if this is an issue with one of the beam paths through the beamsplitters/mirrors that split the light into the four directions for each vertical pair of diodes, or if these diodes are just misaligned.
The above plots are no longer relevant as the PD positions have since been adjusted, but here are additional details for posterity.
The rotation angle of the knob by itself doesn't mean anything; it must be converted to a meaningful number like the displacement of the beam on the PD. This wasn't done for the above plots, but was done for the plots with the final PD positions.