Closes FAMIS#26662, last checked 86750

Laser Status:
NPRO output power is 1.873W
AMP1 output power is 69.91W
AMP2 output power is 138.9W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN

PMC:
It has been locked 3 days, 3 hr 49 minutes
Reflected power = 24.56W
Transmitted power = 104.1W
PowerSum = 128.7W

FSS:
It has been locked for 0 days 3 hr and 22 min
TPD[V] = 0.5336V

ISS:
The diffracted power is around 4.5%
Last saturation event was 0 days 5 hours and 44 minutes ago

Possible Issues:
PMC reflected power is high
FSS TPD is low
Closes FAMIS#26657, last checked 86805
Everything is looking good this week; some of the elevated noise from last week has come down.
Fri Sep 19 10:09:17 2025 INFO: Fill completed in 9min 13secs
The ALS COMM beatnote is reduced by about 3 dB (it was -4 dBm before the power outage and is now -7 dBm). This is probably OK; our past experience is that we start to have trouble with ALS when the ground motion is high and the beatnote is around -15 dBm.
The attached screenshot shows that the power arriving on ISCT1 can explain this: the first cursor marks the power outage and the second cursor marks when the power through the EOM was reduced (86966).
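As a quick back-of-the-envelope check (not from the alog, just the standard dBm conversion), a 3 dB drop corresponds to roughly half the beatnote power:

```python
def dbm_to_mw(level_dbm):
    """Convert a power level in dBm to mW (0 dBm = 1 mW)."""
    return 10 ** (level_dbm / 10.0)

before = dbm_to_mw(-4)    # ~0.40 mW before the power outage
after = dbm_to_mw(-7)     # ~0.20 mW now
trouble = dbm_to_mw(-15)  # ~0.03 mW, where ALS has historically struggled

print(f"before {before:.2f} mW, after {after:.2f} mW, ratio {after / before:.2f}")
```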
TCS Chiller Water Level Top-Off - BiWeekly FAMIS 27824
CO2Y:
Before: 10.4
After: 10.4
Water Added: 0ml
CO2X:
Before: 30.4ml
After: 30.4ml
Water Added: 0ml
Updated the Google sheet.
TITLE: 09/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Calibration & Commissioning went well enough.....Butttt, right after that ended, the world started to shake and hasn't really stopped yet.
We sat in Idle for a few hours while the aftershocks rolled through and we waited for the ground motion to slow down.
Ground motion is approaching the point that we can start locking soon.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:33 | FAC | Randy | MX & MY | N | Taking Inventory | 15:23 |
14:52 | FAC | Kim | Optics lab & VAC prep | N | Technical cleaning | 15:17 |
17:40 | PSL | Jason | Optics lab | N | Acclimating an NPRO laser | 18:25 |
17:40 | FAC | Randy | MY | N | Taking inventory | 18:05 |
20:09 | VAC | Jordan & Gerardo | LVEA | N | Craning & vacuuming | 21:13 |
20:11 | Tour | Sam +2 | LVEA, MSR & overpass | N | Quick tour inside the LVEA | 23:07 |
20:14 | PEM | TJ | LVEA | N | Getting parts for contamination control dust monitor | 21:35 |
20:15 | FAC | Randy | LVEA | N | stashing parts | 21:00 |
21:05 | SPI | Jeff, Betsy, Cammy, Randy | LVEA | N | Getting optics ready for cleaning | 21:20 |
22:39 | View | Richard & Co | Roof | N | Giving a tour to guests | 23:08 |
Edited from yesterday (9-18-2025) to point to the correct lockloss alog and a screenshot of the Seismic BLRMS.
Lockloss at 2025-09-19 15:19 UTC. We were waiting for an earthquake to come through; we had just entered earthquake mode and I had taken us to Hi ASC Gain mode because it looked like we might need it, but we lost lock. Checking now whether the ASC high gains caused the lockloss, since it wasn't the ground motion.
The lockloss seems to have happened 5 seconds after we finished transitioning to ASC Hi Gain, so I'm not sure what caused it. You can see the ground motion was not very high.
18:51 UTC Back to Observing
TITLE: 09/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.10 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
Observing at 150 Mpc and have been Locked for over 1.5 hours. We're at the tail end of a small earthquake that's coming through
TITLE: 09/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Nice shift with H1 currently locked 3+hrs post the big M7.8 Russia EQ. Have not had any big EQs to knock us out during this shift.
LOG:
(ElennaC, CoreyG, JennieW, JimW)
After Elenna was able to fix the CARM 5 PICOMETERS state for ISC_LOCK (see her alog 87012), H1 made it back to NLN with no issues, other than scares from EQ alerts (none of which turned out to be bad).
However, H1 could not go to Observing due to SDFs from SEI (see attached which shows HAM7 ISI + SEI PROC).
Looks like in all the hubbub from the big EQ, HAM7 ISI was skipped in recovery (it was still stuck in DAMPING vs. ISOLATED, so we had the big orange box around HAM7 on the OPS Overview). If I were a gambling man, I could have just taken the HAM7 Guardian to ISOLATED and hoped for no lockloss, but I opted to contact Jim. Jim, and also Jennie W, offered another choice: take SQZ to DOWN, ISOLATE HAM7, and then restore SQZ.
Was not sure of the easiest/best way to do the latter, so I just did this:
(Jordan V., Gerardo M.)
We took the opportunity of the IFO being out of lock due to an earthquake to go into the LVEA and stage some of the components that will aid in the removal and replacement of the ion pump for the HAM6 annulus. We moved a couple of complete aux pump carts to the East bay. We also placed a bottle of nitrogen inside the large access area. Ready for Tuesday.
Result 1: We can make it fine up through DHARD WFS. Reminder: DHARD WFS now comes after CARM OFFSET REDUCTION. CARM 5 PICOMETERS is still the problem. The AS air camera shows lots of moving beams during offset reduction without DHARD on, but that seems repeatably OK for locking. (The first time we went through this sequence we were still in earthquake mode, which made this a fun stress test.)
Below are the locklosses experienced.
Lockloss #1:
2025-09-19_00:05:44.210977Z ISC_LOCK executing state: CARM_5_PICOMETERS (309)
2025-09-19_00:05:44.211404Z ISC_LOCK [CARM_5_PICOMETERS.enter]
2025-09-19_00:05:44.222412Z ISC_LOCK [CARM_5_PICOMETERS.main] timer['CARM_ramp'] = 10.0
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '1.81979900598526']
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ['Using LSC-TR_CARM_OFFSET of ', '-47.13799969222334', ' for last step of CARM reduction.']
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_TRAMP => 5
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_OFFSET => -52
2025-09-19_00:05:53.400224Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] = 5.0
2025-09-19_00:05:57.263417Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.8649354875087738']
2025-09-19_00:05:58.400264Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] done
2025-09-19_00:06:01.261270Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.8532866835594177']
2025-09-19_00:06:01.261939Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_GAIN => 2.1
2025-09-19_00:06:01.262711Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] = 5.0
2025-09-19_00:06:05.262141Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.8590858280658722']
2025-09-19_00:06:06.262933Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] done
2025-09-19_00:06:09.261966Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.7854519486427307']
2025-09-19_00:06:09.263923Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_OFFSET => -56
2025-09-19_00:06:14.401909Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] = 5.0
2025-09-19_00:06:14.402503Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-PD_DOF_MTRX_SETTING_4_17 => -1.952
2025-09-19_00:06:14.505997Z ISC_LOCK JUMP target: LOCKLOSS
2025-09-19_00:06:14.512463Z ISC_LOCK [CARM_5_PICOMETERS.exit]
2025-09-19_00:06:14.573087Z ISC_LOCK JUMP: CARM_5_PICOMETERS->LOCKLOSS
So this happened right after the ramp-up of the PRCL gain by 60% and the CARM offset step to -56. However, the ring-up was at ~63 Hz, and PRCL doesn't go unstable at that frequency, so it's hard to see how the PRCL gain ramp was the cause.
Plan: measure OLGs before CARM 5 and figure out what the issue is. Possible culprits: TR CARM, RF DARM, PRCL
Lockloss #2:
Seems like the -56 TR CARM offset is the culprit. Commented out this step in the guardian code.
Decided to reduce the PRCL gain increase from 60% to 30% (since, based on the measurement, 60% might be too much). Edited the guardian code.
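For context on what edits of this kind look like, here is a minimal sketch of a Guardian state in the style of the ISC_LOCK log excerpt above. The channel names are taken from that log, but the state structure, the assumed nominal PRCL matrix value, and the variable names are illustrative, not the actual CARM_5_PICOMETERS source.

```python
from guardian import GuardState

# 'ezca' is provided by the Guardian runtime; everything below is an
# illustrative sketch, not the real CARM_5_PICOMETERS implementation.

PRCL_NOMINAL = -1.22   # assumed nominal matrix element (-1.952 / 1.6)

class CARM_5_PICOMETERS(GuardState):
    def main(self):
        # First CARM offset step, as in the log excerpt.
        ezca['LSC-TR_CARM_TRAMP'] = 5
        ezca['LSC-TR_CARM_OFFSET'] = -52
        self.timer['CARM_ramp'] = 10.0

    def run(self):
        if not self.timer['CARM_ramp']:
            return False
        ezca['LSC-TR_CARM_GAIN'] = 2.1

        # Step commented out after lockloss #2: the -56 TR CARM offset
        # looked like the culprit.
        # ezca['LSC-TR_CARM_OFFSET'] = -56

        # PRCL gain increase reduced from 60% to 30% above nominal:
        # was: ezca['LSC-PD_DOF_MTRX_SETTING_4_17'] = 1.6 * PRCL_NOMINAL  # -1.952
        ezca['LSC-PD_DOF_MTRX_SETTING_4_17'] = 1.3 * PRCL_NOMINAL         # ~-1.59
        return True
```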
Plan: let the code run with above adjustments, do not adjust DARM gain.
Result 2: SUCCESS. Guardian code tested without intervention.
Uploaded a new MEDM screen mapping the locations of the RGAs at the site; it is non-functional and meant only to be used as a display.
Most of the RGAs in use are Pfeiffer brand, capable of reading up to 100 amu, and have red badges with RGA on them. There are two types of Pfeiffer RGAs: Prisma Pro and Prisma Plus.
There is one RGA at the corner station with a gray badge in the LVEA; this one is an Extrel brand, capable of 1000 amu.
Outstanding!
TITLE: 09/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 15mph Gusts, 6mph 3min avg
Primary useism: 0.19 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Still recovering from an earthquake, but TJ handed over H1 having just started an initial alignment. The plan is to proceed with locking as normal after the alignment, but we'll see how reacquisition goes since this is (1) our first attempt at locking after a 40+hr lock AND (2) only our second lock since the 6 days of being down due to the power outage, when H1 needed some ISC_LOCK changes to get to NLN.
ALSO, since the EQ earlier today was a big one, we will also assess locking feasibility given possible aftershocks. If all looks good we'll proceed with the owl shift, but if the earth is a-rumblin', we may need to cancel the owl; will consult with Jenne/TJ if that is the case.
Initial Alignment is almost done!
To avoid the very unlikely scenario that we have another power glitch that puts the dust monitor vacuum pump in reverse again, I've disconnected the vacuum line leading to the dust monitors in the PSL enclosure and the anteroom. The dust monitors themselves are left on, so this means that we should ignore the counts reported by these dust monitors. We will not have dust counts in these rooms until this is rectified.
I did this by removing the 3/4" tee underneath the south side of HAM1 and placing a length of 3/8" tubing with an unoccupied quick disconnect into the empty 3/4" tube. This 3/8" tube came from the other line of the tee, i.e., one went to the PSL and one to this unoccupied line. See annotated pictures.
Elenna, Camilla
We ran auto_darm_offset_step.py after activating labutils following Jennie's 86623 instructions at ~17:30UTC to 17:45UTC. We left the POP beam divertor open and did not turn off the OMC ASC. We were in No SQZ at the time.
Repeated at 18:20 to 18:35UTC with the OMC ASC off.
I processed both measurements. With OMC ASC ON, the fit gives a contrast defect of 0.77-0.86 mW. However with OMC ASC OFF, the fit gives a contrast defect of 1.23-1.28 mW. I am confused, because the motivation for running with the OMC ASC off was "the OMC ASC goes bad when we changed the DARM offset and this makes the measurement worse".
I tried to look back at previous contrast defect measurements. Jennie measured the contrast defect in June while adjusting CO2 annular power 85434. Her measurements report values ranging between 1 and 1.1 mW for different CO2Y powers. Not sure if those are directly comparable, given her changes of the CO2s. I also believe she has been running these measurements with the ASC off.
I measured the contrast defect with the OMC ASC ON in 80804 (circa October 2024) to be 0.742 mW.
Just comparing the ASC ON measurements, the contrast defect has possibly gotten worse since the October measurement. Comparing the measurement at the 255 Hz line, the contrast defect has increased by 16% (0.862/0.742). Comparing the data measured at 410 Hz, the contrast defect is the same (0.771/0.771).
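For context on how the contrast defect number is extracted: the DCPD power depends roughly quadratically on the DARM offset, and the extrapolated power at zero offset is the contrast defect. Below is a minimal sketch of that kind of fit with made-up placeholder numbers; it is not the actual auto_darm_offset_step.py analysis.

```python
import numpy as np

# Placeholder data: DARM offset steps and the DCPD sum power at each step.
darm_offset_pm = np.array([6.0, 8.0, 10.0, 12.0, 14.0])   # picometers
dcpd_power_mw = np.array([15.5, 26.5, 41.1, 58.5, 79.6])  # milliwatts

# Fit P = k * x**2 + P_cd: the offset-squared term is carrier light from
# the DARM offset, and the intercept is light independent of the offset,
# i.e. the contrast defect.
k, p_cd = np.polyfit(darm_offset_pm**2, dcpd_power_mw, 1)
print(f"contrast defect ~ {p_cd:.2f} mW")
```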
In regards to the "FSS TPD is low" warning, this is simply because we've recalibrated that PD as of Tuesday (alog86972), so the nominal "good" value has changed and I forgot to update this status script accordingly. Since we just realigned the FSS path and are happy with the results, this TPD reading of about 0.54 V is acceptable.
PMC REFL being high is not a new issue, but it doesn't hurt to be reminded about it.
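For anyone curious why the warning fires, the status script is essentially comparing the TPD reading against a stored nominal value, so the fix is just updating that nominal. A hedged sketch of that kind of check (the thresholds, names, and structure here are assumptions, not the actual script):

```python
FSS_TPD_NOMINAL = 0.54   # V; assumed new nominal after the Tuesday recalibration
FSS_TPD_LOW_FRAC = 0.9   # assumed: warn if more than 10% below nominal

def check_fss_tpd(tpd_volts):
    """Return a warning string if the FSS TPD reading looks low, else None."""
    if tpd_volts < FSS_TPD_LOW_FRAC * FSS_TPD_NOMINAL:
        return "FSS TPD is low"
    return None

# With the updated nominal, the reading from the report above (0.5336 V)
# no longer trips the warning.
print(check_fss_tpd(0.5336))  # -> None
```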