HAM1 Before Photos: (HAM1 chamber open just under 90min for this activity)
This morning, before the deinstall activities began, I took the opportunity to photo-document the HAM1 optical layout. Keita requested photos recording the layout with respect to the iLIGO bolt pattern, because rough alignment of optical components on the new SEI ISI for HAM1 will be done using the bolt patterns of the Optics Tables; so I took more photos than usual (top views and angled views with a focus on the REFL path). I took high-resolution photos with the Canon 60D DSLR camera as well as with my camera phone.
The photos are being populated in this Google Drive folder: https://drive.google.com/drive/folders/1yDKp7aByA_TYJ12c8j8BnZM_pd1Q2DBZ?usp=sharing
Each photo is named with reference to an updated layout Camilla Compton made which labels all of the beam dumps, but I also had to use an older layout to preserve naming, since the layout currently on HAM1 looks like D1000313-v16 (which is also referenced in the photo names).
The above folder has the Canon photos, and I'll be adding the camera phone images next.
Contamination Control Notes:
Tagging ISC, SUS, SYS, and SEI. Rest in power HAM1 Stack!
The Mitsubishi condensing units at the H2 and MSR have had their low ambient hoods enabled for the season. Due to the large ambient temperature swings that occur in Spring, this may cause less uniform temperature trending in the MSR until the overnight temperatures moderate.
The annual air handler maintenance was done at the FCES this morning. No issues have been observed since.
The DAQ's EDC is green again. Erik and Camilla started the HWS EX and EY IOCs which were the last needed to complete the set.
Daniel and Patrick confirmed that the two Beckhoff terminals lost at EX are the AOS baffle PDs, which are not immediately needed but will be required before returning to O4. For now I have "greened up" the CDS overview by configuring DEV4 to expect 125 of its 127 terminals.
To remind us that DEV4 is in a degraded state, and that DEV1 was degraded in Dec 2024 when it lost its illuminators, these are shown with a darker green block.
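For reference, here is a minimal sketch (not the site code; the PV name is a hypothetical placeholder) of the logic behind a "green but degraded" block: compare the terminal count the IOC reports with the count we currently expect, and with the full set of 127.
```python
# A minimal sketch, not the site code; the PV name below is a hypothetical placeholder.
from epics import caget

FULL_TERMINAL_COUNT = 127      # total terminals on DEV4 when fully healthy
EXPECTED_TERMINAL_COUNT = 125  # temporarily reduced: AOS baffle PD terminals lost

detected = caget("H1:CDS-DEV4_TERMINAL_COUNT")  # hypothetical PV name

if detected is None:
    state = "UNKNOWN"
elif detected < EXPECTED_TERMINAL_COUNT:
    state = "ERROR"      # fewer terminals than even the reduced expectation
elif EXPECTED_TERMINAL_COUNT < FULL_TERMINAL_COUNT:
    state = "DEGRADED"   # shown as a darker green block on the overview
else:
    state = "OK"

print(f"DEV4: detected={detected}, state={state}")
```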
Attached are screenshots showing the terminals that are currently in error. The h1ecatc1 computer froze up part way through looking at these with remote desktop. I put a terminal on it and physically power cycled it.
Tue Apr 08 10:10:05 2025 INFO: Fill completed in 10min 1secs
Morning inspection of Kobelco, water pump and drying towers completed, all nominal.
Purge air dewpoint prior to in chamber work, measured at HAM1 port of entry was -42 C. Measurement will be repeated at YBM prior to BSC8 -X door removal.
C&E Trenching arrived yesterday morning and began work on uncovering/locating the potable water leak. Ultimately, the leak was in very close proximity to where I first noticed the persistently wet pavement. It was directly beneath the secondary high voltage lines that feed the chiller yard, among other things. The cause of the leak turned out to be a small crack impregnated with a rock, near a smashed/deformed section of pipe. There is no clear explanation as to how the pipe was damaged in the first place; the trenching team suggested that it may have been damaged just before or during installation. Interestingly enough, this section of pipe was found to have two prior repairs made to it, as seen in the photo with the two pipe couplers right next to each other. Based on the date printed on the pipe, these repairs were made during construction sometime in '96.
Important note: with the potable supply valved out at the water room, I replenished the holding tank to a level of 57.8" as read at FMCS. Once the pipe repair had been made, I reintroduced the potable supply and the tank level fell to 52.5". In conjunction with this, the Magmeter gauge prior to the line repair read xx6171 gallons; post repair the gauge read xx6460. However, I don't have much confidence in the Magmeter readout in this scenario, as the line turbulence causes some egregious (beyond -250 gpm) reverse-flow readings while the line recharges. After the repair, and with staff water usage still suspended, I held the line pressure at roughly 80 psi for 30 minutes or so and observed no leaks. There were also no drops in system pressure, nor was there any flow readout at the Magmeter gauge - both important and reassuring improvements.
R. McCarthy, T. Guidry
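As a quick sanity check of the numbers above (an illustrative sketch, not an official calculation): the Magmeter leading digits are elided in this entry, so the subtraction below assumes the hidden digits did not roll over, and no tank geometry is assumed for converting inches to gallons.
```python
# Illustrative only: assumes the elided leading digits of the Magmeter readings
# ("xx6171", "xx6460") did not roll over; no tank geometry is assumed.
pre_repair_gal = 6171      # last digits of the Magmeter reading before repair
post_repair_gal = 6460     # last digits after the repair / line recharge

tank_level_pre_in = 57.8   # holding tank level (inches, FMCS) with supply valved out
tank_level_post_in = 52.5  # level after reintroducing the potable supply

metered_gal = post_repair_gal - pre_repair_gal         # ~289 gallons registered
tank_drop_in = tank_level_pre_in - tank_level_post_in  # ~5.3 inches of tank level

print(f"Magmeter registered ~{metered_gal} gal during recharge (readout suspect)")
print(f"Holding tank dropped {tank_drop_in:.1f} in while the line recharged")
```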
Tagging EPO for the photos!
Closes FAMIS#26038, last checked 83718
Just like last week, everything is elevated due to the vent.
TITLE: 04/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 18mph Gusts, 15mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
Plans for today:
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
Travis, Melina, Jordan
Today we were able to install the assembly for the new 1000 AMU RGA (D2300253). It is located on the Y-beam manifold near BSC8.
The assembly had previously been built in the staging building, and the final 8" CF connection to the gate valve on the YBM was made today. A temporary support stand is currently attached; a more permanent support and bump-protection cage will be installed later.
The assembly was helium leak checked at all joints, with a He dwell of ~5 seconds per joint. No He signal was seen above the HLD background of ~1.7E-10 Torr-l/s.
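For clarity, a minimal sketch of the pass criterion used during the leak check: a joint is considered leak-tight here if the He signal never rose above the leak detector background during the dwell. The joint names and peak readings below are made-up examples.
```python
# Illustrative sketch of the leak-check pass criterion; example readings are made up.
HLD_BACKGROUND = 1.7e-10  # Torr-l/s, helium leak detector background quoted above

def joint_passes(peak_signal: float, background: float = HLD_BACKGROUND) -> bool:
    """True if the peak He signal during the ~5 s dwell stayed at or below background."""
    return peak_signal <= background

for joint, peak in {"8in CF to gate valve": 1.6e-10, "RGA nipple": 1.7e-10}.items():
    print(joint, "PASS" if joint_passes(peak) else "FAIL")
```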
The assembly will continue to pump down overnight on a turbo, then will transition to the assembly's 10 l/s diode ion pump.
TITLE: 04/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
There was some good vent progress made today, but a good chunk of the day was spent recovering all systems from last night's power outage.
Vent progress made today:
VAC - 1000 AMU RGA WORK - YBT
HAM1 - Contam Control checks
COREY - HAM1 Pictures
ISC - MC Flashing Peek to confirm MC and PSL PZT aligned
FAC - Clean BSC8 door and area
HAM1 - COAX Cable length measurement (disconnect at component)
Overview of biggest issues caused by the outage:
- Power supply for h1susb123 SUS C5 (SUS C5) tripped off (83787)
- ITM and BS OSEM counts were all sitting very low, around 3000. The power supply was flipped back on without turning the coil drivers off, causing it to trip again. OSEM counts returned to normal for ITMX and the BS, but the ITMY coil driver filter banks for M0, L1, and L2 were flashing ROCKER SWITCH DEATH, meaning all OSEM values for that stage are less than 5 counts (a rough sketch of this condition is given after this list).
- Initially remedied by turning coil drivers and power supply off, then power supply back on and then turning coil drivers back on one at a time.
- 17:46 ROCKER SWITCH DEATH started flashing again for ITMY. ITMs and BS all taken to SAFE, power supply for SUS C5 swapped. Checked before swapping - power supply was still on and had not tripped.
- Tried power cycling the satamp - didn't work
- SatAmp replacement worked
- ISI ITMY corner1 interface chassis power regulator was blown (83781)
- Swapped with a spare
- Timing error for Corner A fanouts, digital port 13, analog port 14 (83777)
- Power cycled timing comparator and it came back fine
- Timing error for Master, digital port 8, analog port 9 -
- PSL down due to Beckhoff chassis off (83785)
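As promised above, here is a rough sketch of the ROCKER SWITCH DEATH condition (channel naming is assumed, not verified): the warning corresponds to every OSEM readback on a given suspension stage sitting below 5 counts, which is what the screens show when the stage's satamp/power is not really up.
```python
# Rough sketch only; the OSEM readback channel pattern below is assumed, not verified.
from epics import caget

def stage_is_dead(optic: str, stage: str, osems: list, threshold: float = 5.0) -> bool:
    """True if all OSEM readbacks for this stage read below `threshold` counts."""
    values = []
    for osem in osems:
        # Hypothetical channel pattern; the real readback names may differ.
        value = caget(f"H1:SUS-{optic}_{stage}_OSEMINF_{osem}_INMON")
        if value is not None:
            values.append(value)
    return bool(values) and all(v < threshold for v in values)

# Example: ITMY top stage (M0) OSEMs
if stage_is_dead("ITMY", "M0", ["F1", "F2", "F3", "LF", "RT", "SD"]):
    print("ITMY M0: ROCKER SWITCH DEATH (all OSEMs < 5 counts)")
```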
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:30 | FAC | Kim, Nelly | EY | n | Tech clean | 15:23 |
15:26 | EE | Jeff | CER | n | Checking power supply | 15:46 |
15:41 | | Camilla | CER | n | Checking on Jeff | 15:46 |
15:44 | VAC | Travis, Jordan | LVEA | n | BSC8 annulus vent | 16:26 |
15:47 | | Jeff, Jim | CER | n | Checking racks again | 15:58 |
15:50 | FAC | Kim, Nelly | LVEA | n | Tech clean | 19:51 |
16:06 | PSL | RyanS | LVEA | n | Power cycling PSL chassis | 16:51 |
16:09 | EE | Richard | CER | n | Turning power supply back on | 16:09 |
16:11 | | Jeff, Richard | CER | n | Cycling coil drivers | 16:23 |
16:20 | FAC | C+U | water tower | n | Excavating to find leak | 18:20 |
16:29 | EE | Marc | CER | n | ISI chassis work | 16:44 |
16:46 | EE | Richard | Power supply room | n | | 16:50 |
16:47 | PSL | RyanS, Jason | LVEA | n | PSL ion pump power cycling | 17:23 |
16:52 | ISC | Camilla | LVEA | n | Cleaning IOT2 | 17:54 |
16:58 | VAC | Travis, Jordan | LVEA | n | Prep for RGA install, includes craning | 19:04 |
16:59 | AOS | Corey | LVEA | n | Photos + contamination control | 18:33 |
17:07 | EE | Jeff, Marc | CER | n | Power cycling timing comparator | 17:21 |
17:17 | EPO | Betsy | LVEA | N | Helping Corey | 18:17 |
16:40 | FAC | Tony, Randy | LVEA | n | Breaking BSC8 bolts | 17:23 |
17:31 | FAC | Chris | EY, EX | n | FAMIS tasks | 18:38 |
17:45 | EE | Jeff, Marc | CER | n | Replacing SUS C5 power supply | 18:40 |
17:47 | ISI | Jim | Ends | n | Restoring HEPI and checking BRS lasers | 18:29 |
18:00 | CDS | Tony, Erik | EX, EY | n | Dolphin issues, wifi issues | 19:04 |
18:37 | SEI | Jim | HAM8, EY, EX | n | EY HEPI, EX ISI coil drivers | 21:01 |
18:37 | ISC | Keita | LVEA | n | HAM1 RF work | 19:30 |
18:40 | EE | Jeff, Marc | CER, LVEA | n | Power cycling satamp | 19:21 |
18:42 | ISC | Camilla | LVEA | n | HAM1 RF cables | 20:30 |
18:48 | | Betsy | LVEA | n | Checking on HAM1 work | 20:30 |
19:01 | CDS | Jonathan | CER | n | Moving a switch | 21:07 |
19:05 | CDS | Patrick | EX | n | Beckhoff for BRS | 19:54 |
19:44 | OPO | Corey | LVEA | N | Grab camera equipment by HAM1 garb room | 19:48 |
20:13 | ISC | Keita | LVEA | N | HAM1 RF work cont | 20:31 |
20:29 | VAC | Jordan, Melina | LVEA | n | RGA work | 23:30 |
20:43 | EE | Marc & Daniel | LVEA | N | Measure cables HAM1 | 22:04 |
21:23 | CDS | Erik | MY, EY, EX | n | Weather sensor work at MY/EY, HWS work at EY/EX | 22:54 |
21:25 | SEI | Jim | LVEA | YES | HAM1 HEPI work | 23:05 |
21:26 | PSL | Keita | LVEA | n | Work on PSL FSS issue | 22:10 |
21:41 | PSL | Jason, RyanS | LVEA | n | Staring at PSL racks | 21:54 |
21:47 | | Camilla | LVEA | YES | Transitioning to LASER HAZARD | 22:02 |
21:54 | PSL | Jason | LVEA | YES | Looking at lockout box | 22:02 |
22:13 | (⌐■_■) | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 23:24 |
22:24 | ISC | Camilla, Betsy, Keita, Elenna | LVEA | YES | Opening lightpipe & opening HAM2 viewport | 23:11 |
22:25 | IAS | Jason | LVEA | YES | Setting up FARO for tomorrow | 22:33 |
22:50 | | Richard | LVEA | YES | Checking that people are working safely | 23:05 |
23:06 | CDS | Erik | EY | n | Fixing dust monitor | 23:32 |
23:13 | ISC | Camilla | LVEA | YES | Going back to laser safe | 23:22 |
[Betsy, Keita, Camilla, Richard M., Elenna]
After the power outage, we wanted to get some confirmation of PZT pointing out of the PSL. Betsy, Keita and Camilla went in the cleanroom with the IR viewer and indicator cards and took the covers off the IMC refl and trans viewports on HAM2. Keita found the beam out of the IMC REFL viewport, and I adjusted the IMC PZT in pitch and yaw under direction from Keita. I accidentally moved the pitch slider a bit at the beginning due to user error. Then, we took large steps of 1000 or 2000 counts to move the offsets in pitch and yaw.
Start values: pitch: 22721, yaw: 5488
End values: pitch: 21192, yaw: 6488
Then, Betsy found the beam out of the IMC TRANS viewport and Richard marked the beam spot approximately on the outside of the cleanroom curtain with a red sharpie. This is super rough but gives us a general location of the beam. We think this is a good enough pointing recovery to proceed with vent activities.
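For the record, here is a minimal sketch of the coarse PZT offset stepping described above, assuming EPICS access; the channel names are placeholders, not the verified IMC PZT offset channels.
```python
# Minimal sketch of the coarse PZT walk; channel names are placeholders only.
from epics import caget, caput

PZT_CHANNELS = {
    "pitch": "H1:ASC-IMC_PZT_PIT_OFFSET",  # hypothetical name
    "yaw":   "H1:ASC-IMC_PZT_YAW_OFFSET",  # hypothetical name
}

def step_pzt(dof: str, step_counts: int) -> float:
    """Move one PZT offset by step_counts and return the new value."""
    channel = PZT_CHANNELS[dof]
    current = caget(channel)
    if current is None:
        raise RuntimeError(f"could not read {channel}")
    new_value = current + step_counts
    caput(channel, new_value)
    return new_value

# Example of the 1000-2000 count steps used above (directions picked by whoever
# is watching the beam at the viewport):
# step_pzt("pitch", -1000)
# step_pzt("yaw", +1000)
```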
During this work, the PSL waveplate rotator was set to 200 mW and then de-energized. I haven't re-energized it since we won't need higher power for a while.
PSL laser pipe was temporarily opened for this task and was closed after.
Attached video shows the flashing of the beam in MC TRANS path. The alignment from the PSL through the IMC is not crazy. Any further refinement should be done after we start pumping down the corner before we install HAM1 optics.
Accepting PZT offsets in h1ascimc SAFE.snap table.
Oli, Ibrahim, RyanC
We took a look at the OSEMs' current positions for the suspensions post power outage to make sure the offsets are still correct. The previously referenced "golden time" was GPS 1427541769 (the last DRMI lock before the vent). While we did compare against this time, we mainly set them back to where they were before the power outage.
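Here is a minimal sketch of how that comparison can be pulled up (assuming NDS access through gwpy; the pre-outage GPS time and the optic subset are placeholders, not the values we actually used): average the M1 DAMP P/Y input monitors over a few seconds at the golden time and at a pre-outage time, then compare.
```python
# Minimal sketch assuming NDS access via gwpy; PRE_OUTAGE_GPS and the optic
# subset are placeholders, not the values actually used.
from gwpy.timeseries import TimeSeries

GOLDEN_GPS = 1427541769       # "golden time": last DRMI lock before the vent
PRE_OUTAGE_GPS = 1427900000   # placeholder for a time just before the power outage

def mean_value(channel: str, gps: int, span: int = 10) -> float:
    """Average a channel over `span` seconds starting at `gps`."""
    return TimeSeries.get(channel, gps, gps + span).mean().value

for optic in ["IM1", "IM2", "IM3", "IM4", "SR2", "SR3", "SRM"]:  # example subset
    for dof in ["P", "Y"]:
        chan = f"H1:SUS-{optic}_M1_DAMP_{dof}_INMON"
        golden = mean_value(chan, GOLDEN_GPS)
        pre_outage = mean_value(chan, PRE_OUTAGE_GPS)
        print(f"{chan}: golden={golden:+.1f}, pre-outage={pre_outage:+.1f}")
```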
Input:
IM1_P: 368.5 -> 396.1, IM1_Y: -382.7 -> -385.4
IM2_P: 558.0 -> 792.0, IM2_Y: -174.7 -> -175.7
IM3_P: -216.3 -> -207.7, IM3_Y: 334.0 -> 346.0
IM4_P: -52.4 -> -92.4, IM4_Y: 379.5 -> 122.5
SR2_P: -114.3 -> -117.6, SR2_Y: 255.2 -> 243.6
SRM_P: 2540.3 -> 2478.3, SRM_Y: -3809.1 -> -3825.1
SR3_P: 439.8 -> 442.4, SR3_Y: -137.7 -> -143.9
Output:
ZM6_P: 1408.7 -> 811.7, ZM6_Y: -260.1 -> -206.1
OM1_P: -70.9 -> -90.8, OM1_Y: 707.2 -> 704.5
OM2_P: -1475.8 -> -1445.0, OM2_Y: -141.2 -> -290.8
OM3_P: Didn't adjust, OM3_Y: Didn't adjust
Input:
PRM_P: -1620.8 -> -1672 (Changed by -51.2), PRM_Y: 579.6 -> -75.6 (Changed by -655.2)
PR2_P: 1555 -> 1409 (Changed by -146), PR2_Y: 2800.7 -> -280.8 (Changed by -3081.5)
PR3_P: -122.2 -> -151 (Changed by -28.8), PR3_Y: -100 -> -232.4 (Changed by -132.4)
MC1_P: 833.3 -> 833.3, MC1_Y: -2230.6 -> -2230.6 (No change)
MC2_P: 591.5 -> 591.5, MC2_Y: -580.4 -> -580.4 (No change)
MC3_P: -20.3 -> -20.3, MC3_Y: -2431.1 -> -2431.1 (No change)
Attached are plots showing the offsets (and their relevant M1 OSEMs) before and after the suspension was reliably aligned.
Comparing QUAD, BS, and TMS pointing before and after the outage. All had come back up from the power outage with a slightly different OPTICALIGN OFFSET for P and Y: when the systems came back up, the OPTICALIGN OFFSETS were read from the SUS SDF safe files, and since those channels aren't monitored by SDF, they held older offset values. I set the offset values back to what they were before the power outage, but still had to adjust them to get the top masses pointed back to where they were before the outage (a rough sketch of this restore-and-trim procedure follows the table below).
'Before' refers to the OPTICALIGN OFFSET values before the outage, and 'After' is what I changed those values to in order to get the DRIFTMON channels to match where they were before the outage.
SUS | DOF | Before | After |
---|---|---|---|
ITMX | P | -114.5 | -109.7 |
ITMX | Y | 110.1 | 110.1 (no change) |
ITMY | P | 1.6 | 2.0 |
ITMY | Y | -17.9 | -22.4 |
ETMX | P | -36.6 | -45.7 |
ETMX | Y | -146.2 | -153.7 |
ETMY | P | 164.6 | 160.6 |
ETMY | Y | 166.7 | 166.7 (no change) |
BS | P | 96.7 | 96.7 (no change) |
BS | Y | -393.7 | -393.7 (no change) |
TMSX | P | -88.9 | -89.9 |
TMSX | Y | -94.3 | -92.3 |
TMSY | P | 79.2 | 82.2 |
TMSY | Y | -261.9 | -261.9 (no change) |
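As referenced above, a rough sketch of the restore-and-trim procedure (channel names approximate, and the sign convention, tolerance, and step size are assumptions, not the values actually used): put the OPTICALIGN offset back to its pre-outage value, then trim it until the DRIFTMON readback matches its pre-outage value.
```python
# Rough sketch only: channel names are approximate, and the sign convention,
# tolerance, and step size below are assumptions, not the values actually used.
import time
from epics import caget, caput

def restore_and_trim(optic: str, dof: str, pre_outage_offset: float,
                     target_driftmon: float, tol: float = 0.5,
                     trim_step: float = 0.1, max_trims: int = 50) -> float:
    """Restore one OPTICALIGN offset, then trim until DRIFTMON matches the target."""
    offset_chan = f"H1:SUS-{optic}_M0_OPTICALIGN_{dof}_OFFSET"  # approximate name
    drift_chan = f"H1:SUS-{optic}_M0_DRIFTMON_{dof}_INMON"      # hypothetical name

    offset = pre_outage_offset
    caput(offset_chan, offset)
    for _ in range(max_trims):
        time.sleep(10)  # let the top stage settle before reading back
        error = caget(drift_chan) - target_driftmon
        if abs(error) < tol:
            break
        offset -= trim_step if error > 0 else -trim_step  # assumed sign convention
        caput(offset_chan, offset)
    return offset

# Example from the table above: ITMX P ended at -109.7 starting from -114.5.
# restore_and_trim("ITMX", "P", pre_outage_offset=-114.5, target_driftmon=...)
```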
The HAM7/8 suspensions were brought back; details in 83774.
Betsy, Oli, Camilla
We rechecked that everything was put back to this time, looking at the M1 DAMP channels for each optic, e.g. H1:SUS-OM1_M1_DAMP_{P,Y}_INMON:
For each ndscope, the first t-cursor is on the original "good DRMI time" we had planned to go back to, and the second t-cursor is on the time we actually went back to.
I checked the rest of the optics and verified that they were all put back to pointing where they were before the power outage. I've also used one of the horizontal cursors to mark where the optic was at the "good DRMI time", and the other cursor marks where the optic is currently.
Slow start on vent tasks today - besides the lack of water onsite this morning and the particularly nasty power outage, which left a number of systems slow to come back, we popped into HAM2 to keep moving on a few of the next steps. Corey has captured detailed pictures of components and layouts, and Camilla and I have logged all of the LSC and ASC PD serial numbers and cable numbers. We removed all connections at these PD boxes, and Daniel is out making the RF cable length measurements. Operators were realigning all suspensions to a golden DRMI time they chose as a reference for any fall-back times. Jason and Ryan are troubleshooting other PSL items that are misbehaving. We are gearing up to take a side road from today's plan and look for flashes out of HAM2, to convince ourselves that the PSL PZT pointing and the alignment restoration of some suspensions are roughly correct.
Betsy and I also removed the septum plate VP cover to allow the PSL beam into HAM2 for alignment check work 83794, it was placed on HAM1.
The counts were pretty low over the weekend, peaking at ~30 counts in the 0.3 µm bin and 10 counts in the 0.5 µm bin.
The EY dust monitor died with the power outage and has not come back after a process restart; it's having connection issues.
The EY dust monitor came back overnight.