(Jordan V., Gerardo M.)
We took advantage of the IFO being out of lock due to an earthquake to go into the LVEA and stage some of the components that will aid in the removal and replacement of the ion pump for the HAM6 annulus. We moved a couple of complete aux pump carts to the East bay, and we also placed a bottle of nitrogen inside the large access area. Ready for Tuesday.
Result 1: We can make it fine up through DHARD WFS. Reminder, DHARD WFS now comes after CARM OFFSET REDUCTION. CARM 5 PM still is the problem. The AS air camera shows lots of moving beams during offset reduction without DHARD on, but that seems repeatably ok for locking. (The first time we went through this sequence we were still in earthquake mode, which made this a fun stress test.)
Below are the locklosses experienced.
Lockloss #1:
2025-09-19_00:05:44.210977Z ISC_LOCK executing state: CARM_5_PICOMETERS (309)
2025-09-19_00:05:44.211404Z ISC_LOCK [CARM_5_PICOMETERS.enter]
2025-09-19_00:05:44.222412Z ISC_LOCK [CARM_5_PICOMETERS.main] timer['CARM_ramp'] = 10.0
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '1.81979900598526']
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ['Using LSC-TR_CARM_OFFSET of ', '-47.13799969222334', ' for last step of CARM reduction.']
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_TRAMP => 5
2025-09-19_00:05:48.263712Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_OFFSET => -52
2025-09-19_00:05:53.400224Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] = 5.0
2025-09-19_00:05:57.263417Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.8649354875087738']
2025-09-19_00:05:58.400264Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] done
2025-09-19_00:06:01.261270Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.8532866835594177']
2025-09-19_00:06:01.261939Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_GAIN => 2.1
2025-09-19_00:06:01.262711Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] = 5.0
2025-09-19_00:06:05.262141Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.8590858280658722']
2025-09-19_00:06:06.262933Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] done
2025-09-19_00:06:09.261966Z ISC_LOCK [CARM_5_PICOMETERS.run] ['REFL_DC at reference offset: ', '0.7854519486427307']
2025-09-19_00:06:09.263923Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-TR_CARM_OFFSET => -56
2025-09-19_00:06:14.401909Z ISC_LOCK [CARM_5_PICOMETERS.run] timer['CARM_ramp'] = 5.0
2025-09-19_00:06:14.402503Z ISC_LOCK [CARM_5_PICOMETERS.run] ezca: H1:LSC-PD_DOF_MTRX_SETTING_4_17 => -1.952
2025-09-19_00:06:14.505997Z ISC_LOCK JUMP target: LOCKLOSS
2025-09-19_00:06:14.512463Z ISC_LOCK [CARM_5_PICOMETERS.exit]
2025-09-19_00:06:14.573087Z ISC_LOCK JUMP: CARM_5_PICOMETERS->LOCKLOSS
So this happened right after the PRCL gain was ramped up by 60% and the CARM offset was stepped to -56. However, the ring-up was at ~63 Hz, and PRCL doesn't go unstable at that frequency, so it's hard to see how the PRCL gain ramp was the cause.
Plan: measure OLGs before CARM 5 and figure out what the issue is. Possible culprits: TR CARM, RF DARM, PRCL
Lockloss #2:
Seems like the -56 TR CARM offset is the culprit. Commented out this step in the guardian code.
Decided to reduce the PRCL gain increase from 60% to 30% (since, based on the measurement, 60% might be too much). Edited the guardian code.
Plan: let the code run with above adjustments, do not adjust DARM gain.
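For reference, below is a minimal sketch of what these two guardian edits might look like. This is not the actual ISC_LOCK code: the TR CARM channel names come from the log above, while the PRCL gain channel, the exact 1.6/1.3 factors, and the surrounding structure are illustrative.

    # Sketch only -- guardian-style pseudo-edit, not the real ISC_LOCK state code.
    # Final TR CARM offset step, suspected in lockloss #1, now commented out:
    # ezca['LSC-TR_CARM_TRAMP'] = 5
    # ezca['LSC-TR_CARM_OFFSET'] = -56
    # PRCL gain increase reduced from 60% to 30% (channel name hypothetical):
    ezca['LSC-PRCL_GAIN'] = ezca['LSC-PRCL_GAIN'] * 1.3   # was * 1.6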
Result 2: SUCCESS. Guardian code tested without intervention.
Uploaded a new MEDM screen mapping the locations of the RGAs at the site; it is non-functional and only to be used as a display.
Most of the RGAs in use are Pfeiffer brand, capable of reading up to 100 amu, with red badges labeled RGA. There are two types of Pfeiffer RGAs: Prisma Pro and Prisma Plus.
There is one RGA at the corner station with a gray badge in the LVEA; this is an Extrel brand, capable of 1000 amu.
TITLE: 09/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 15mph Gusts, 6mph 3min avg
Primary useism: 0.19 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Still recovering from an earthquake, but TJ handed over H1 as he had just started an Initial Alignment. The plan is to proceed with locking as normal after the alignment, but we'll see how reacquisition goes since this is (1) our first attempt at locking after a 40+hr lock AND (2) only our second lock since the 6 days of downtime due to the power outage, when H1 needed some ISC_LOCK changes to get to NLN.
ALSO, since this EQ earlier today was a big one, we will also assess locking feasibility given possible aftershocks---if all looks good we'll proceed with the owl shift, but if the earth is still a-rumblin', we may need to cancel the owl---will consult with Jenne/TJ if this is the case.
Initial Alignment is almost done!
TITLE: 09/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Calibration & Commissioning went well enough..... Butttt, right after that ended the world started to shake and hasn't really stopped yet.
We sat in Idle for a few hours while the aftershocks rolled through and we waited for the ground motion to slow down.
Ground motion is approaching the point that we can start locking soon.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:33 | FAC | Randy | MX & MY | N | Taking Inventory | 15:23 |
14:52 | FAC | Kim | Optics lab & VAC prep | N | Technical cleaning | 15:17 |
17:40 | PSL | Jason | Optics lab | N | Acclimating an NPRO laser | 18:25 |
17:40 | FAC | Randy | MY | N | Taking inventory | 18:05 |
20:09 | VAC | Jordan & Gerardo | LVEA | N | Craning & vacuuming | 21:13 |
20:11 | Tour | Sam +2 | LVEA, MSR & Overpass | N | Quick tour inside the LVEA | 23:07 |
20:14 | PEM | TJ | LVEA | N | Getting part for contamination control dust monitor | 21:35 |
20:15 | FAC | Randy | LVEA | N | stashing parts | 21:00 |
21:05 | SPI | Jeff, Betsy, Cammy, Randy | LVEA | N | Getting optics ready for cleaning | 21:20 |
22:39 | View | Richard & Co | Roof | N | Giving tour to guests | 23:08 |
To avoid the very unlikely scenario that another power glitch puts the dust monitor vacuum pump in reverse again, I've disconnected the vacuum line leading to the dust monitors in the PSL enclosure and the anteroom. The dust monitors themselves are left on, so we should ignore the counts they report. We will not have dust counts in these rooms until this is rectified.
I did this by removing the 3/4" tee underneath the South side of HAM1 and placing a length of 3/8" tubing with an unoccupied quick disconnect into the empty 3/4" tube. This 3/8" tube came from the other line of the tee, i.e., one went to the PSL and one to this unoccupied line. See annotated pictures.
Lockloss from 7.8M EQ from Kamchatsky, Russia.
The following watchdogs all tripped:
ISIs: All, including both stages of the BS.
SUS: MC3, IM2, IM3, IM4, FC1, PR2, MC2, SRM, SR2, SR3, OPO
Those who hoped for a lockloss now have all afternoon to complete their "during lockloss" tasks.
Holding H1 in Idle while the world shakes.
Elenna, Camilla
We ran auto_darm_offset_step.py after activating labutils following Jennie's 86623 instructions, from ~17:30 UTC to 17:45 UTC. We left the POP beam diverter open and did not turn off the OMC ASC. We were in No SQZ at the time.
Repeated at 18:20 to 18:35UTC with the OMC ASC off.
I processed both measurements. With OMC ASC ON, the fit gives a contrast defect of 0.77-0.86 mW. However with OMC ASC OFF, the fit gives a contrast defect of 1.23-1.28 mW. I am confused, because the motivation for running with the OMC ASC off was "the OMC ASC goes bad when we changed the DARM offset and this makes the measurement worse".
I tried to look back at previous contrast defect measurements. Jennie measured the contrast defect in June while adjusting CO2 annular power 85434. Her measurements report values ranging between 1 and 1.1 mW for different CO2Y powers. I'm not sure if those are directly comparable, given her changes to the CO2s. I also believe she has been running these measurements with the ASC off.
I measured the contrast defect with the OMC ASC ON in 80804 (circa October 2024) to be 0.742 mW.
Comparing just the ASC ON measurements, this one against October's, the contrast defect has possibly gotten worse. At the 255 Hz line, the contrast defect has increased by 16% (0.862/0.742 mW). At the 410 Hz line, the contrast defect is unchanged (0.771/0.771 mW).
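For context, the fit behind these contrast defect numbers can be thought of as a simple quadratic in the DARM offset: the carrier power on the DCPDs goes roughly as P = P_cd + k*x^2, and the intercept P_cd is the contrast defect. A minimal sketch of that idea (this is not auto_darm_offset_step.py; the offsets and powers below are made up for illustration):

    import numpy as np

    # Hypothetical DARM offsets (pm) and corresponding DCPD powers (mW)
    darm_offset_pm = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
    dcpd_power_mw = np.array([1.1, 1.4, 1.8, 2.4, 3.1])

    # Model: P = P_cd + k * x^2 -> fit P against x^2 and read off the intercept
    k, p_cd = np.polyfit(darm_offset_pm**2, dcpd_power_mw, 1)
    print(f"contrast defect ~ {p_cd:.2f} mW, quadratic coefficient {k:.4f} mW/pm^2")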
Thu Sep 18 10:09:17 2025 INFO: Fill completed in 9min 13secs
Calibration sweep ran at 8:30 local (15:30 UTC).
pydarm measure --run-headless bb
.....
...[computer noises]...
....
2025-09-18 08:37:08,412 bb measurement complete.
2025-09-18 08:37:08,413 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250918T153158Z.xml
2025-09-18 08:37:08,413 all measurements complete.
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_20250212.ini;gpstime
PDT: 2025-09-18 08:39:17.330118 PDT
UTC: 2025-09-18 15:39:17.330118 UTC
GPS: 1442245175.330118
...
..... [more computing noises] ....
...
2025-09-18 16:02:03,289 | INFO | Finished gathering data. Data ends at 1442246540.0
2025-09-18 16:02:03,505 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2025-09-18 16:02:03,505 | INFO | Commencing data processing.
2025-09-18 16:02:03,505 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2025-09-18 16:02:39,467 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,474 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,477 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,481 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250918T153918Z.hdf5
2025-09-18 16:02:39,485 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250918T153918Z.hdf5
PDT: 2025-09-18 09:02:39.614492 PDT
UTC: 2025-09-18 16:02:39.614492 UTC
GPS: 1442246577.614492
pydarm report --skip-gds
...
.... [Final Computer Noises] ....
...
2025-09-18 09:08:18,990 report generation complete.
2025-09-18 09:08:18,990 report file: /ligo/groups/cal/H1/reports/20250918T153918Z/H1_calibration_report_20250918T153918Z.pdf
2025-09-18 09:08:18,990 displaying report /ligo/groups/cal/H1/reports/20250918T153918Z/H1_calibration_report_20250918T153918Z.pdf...
It looks like the ini file generated in this report has the old ETMX ESD bias actuation values, so I updated it and regenerated the report.
The L3 actuation strength is different by nearly 2% compared to the 8/23 report. This is possibly due to charging. Tagging sus.
TITLE: 09/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
H1 has been locked & Observing for 35.5 hours now.
All systems seem to be running smoothly.
Violins looking good at below 10e-17 this morning.
Potential Commissioning tasks today:
TITLE: 09/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Nice quiet shift with H1 now locked for about 26hrs.
LOG: 2340 Betsy & Jason out of the optics lab
TITLE: 09/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1's been locked since last night! (20.5hrs now) H1 was in need of an ISC_LOCK node LOAD, but the H1 SQZ briefly dropped us out of Observing, so Tony was able to take this LOAD off the To Do List.
Seismically, there were a couple of EQs which required EQ Mode, but H1 rode right through them (last night and during Tony's shift)....no "ASC Hi Gn" transition was needed.
From last night, it looks like TJ & Jim took care of BRSy, and the violins were fixed by Ryan C (and thanks for the note on its finicky gain!) :)
Smooth sailing for H1 with it being locked for 24.5hrs & observing.
TITLE: 09/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
H1 Has mostly been locked and Observing all day.
But we dropped from Observing at 15:18 UTC to apply violin damping.
Back to Observing at 15:22 UTC
And again dropping to commissioning when the SQZr lost lock at 23:07 UTC.
Back to Observing at 23:11:12 UTC.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SPI | Jeff | Optics Lab | N | Dropping off parts | 18:26 |
16:39 | FAC | MacMiller | VPW | N | Contractors quietly working at VPW | 00:39 |
17:10 | EOM | Rick | Optics lab | N | Looking for parts. | 18:10 |
17:37 | PSL/IO | Jason & Betsy | Optics lab | N | Crystal photography | 19:47 |
18:19 | SPI | Camilla | Optics lab | N | Helping Jeff & Jason | 19:00 |
21:58 | EOM | Jason & Betsy | Optics lab | N | Crystal photography | 23:58 |
22:48 | SPI | Jeff | Optics Lab | N | Dropping off parts. | 23:00 |
SQZ lost lock as the PMC PZT got to the end of its range (plot). There is already a SQZ_MANAGER checker that will re-lock the PMC if the PZT is under 5V during FDS_READY_IFO (which it wasn't), so there's nothing that needs to be done to avoid this in the future.
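For illustration, the kind of check described above (relock the PMC if its PZT is near the end of range) might look roughly like the sketch below. The channel name, threshold handling, and jump target are illustrative, not the actual SQZ_MANAGER code; ezca and log are the guardian-provided interfaces.

    PZT_MIN_VOLTS = 5.0

    def run(self):
        # Hypothetical channel name for the SQZ PMC PZT voltage
        pzt_volts = ezca['SQZ-PMC_PZT_VOLTS']
        if pzt_volts < PZT_MIN_VOLTS:
            log('PMC PZT near end of range; requesting PMC relock')
            return 'RELOCK_PMC'   # illustrative jump target
        return True               # otherwise stay in FDS_READY_IFO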
We have now been locked for over 16 hours.
IMC REFL DC power is steady at 18.5 mW
IMC WFS A is at 0.95 mW and IMC WFS B is at 0.75 mW
The IMC power in is 62 W and the power at IM4 trans is 56.7 W
MC2 trans is about 9670 [mystery units]
This is reasonable power for IMC refl, but the WFS power is very low. These are the jitter witnesses, and jitter subtraction is not performing as well as it was before the power outage. I can think of several possible reasons for this, but I'm sure that having less than a mW of power isn't helping.
We may want to consider a) increasing the power on the IMC refl path, b) changing the splitter between IMC refl and IMC WFS to a 50/50 instead of a 90/10, or c) some combination of the first two options that gets us reasonable power on both IMC refl and IMC WFS.
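As a back-of-envelope check of option b), using the powers quoted above and assuming the current pickoff is a 90/10 split with 90% going to the REFL DC diode (an assumption, not something measured here):

    # Hypothetical 50/50 vs 90/10 comparison using the numbers quoted above
    refl_dc_mw = 18.5
    wfs_mw = 0.95 + 0.75
    total_mw = refl_dc_mw + wfs_mw        # ~20.2 mW reaching the splitter

    refl_dc_5050 = 0.5 * total_mw         # ~10 mW on IMC REFL DC
    wfs_each_5050 = 0.5 * total_mw / 2.0  # ~5 mW on each WFS
    print(refl_dc_5050, wfs_each_5050)

Under that assumption, a 50/50 split would put a few mW on each WFS while keeping roughly 10 mW on REFL DC, which is the trade-off option b) is weighing.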
The numbers are confirmed to have held through the entire 40 hours of this most recent lock (killed by earthquake).