At the time of starting these measurements, we had only been Locked for 1.5 hours, so we were not fully thermalized
CALIBRATION_MONITOR screen and pydarm report are attached
Broadband
2025-08-13 16:13:34 - 16:18:45 UTC
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250813T161334Z.xml
Simulines
2025-08-13 16:19:57 - 16:43:04 UTC
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250813T161958Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250813T161958Z.hdf5
At the time of starting these measurements, we had only been Locked for 1 hour (1hr10min since MAX_POWER), so we were not fully thermalized
CALIBRATION_MONITOR screen and pydarm report are attached
Broadband
2025-08-13 15:31:30 - 15:37:02 UTC
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250813T153151Z.xml
Simulines
2025-08-13 15:38:19 - 16:01:22 UTC
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250813T153820Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250813T153820Z.hdf5
TITLE: 08/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently Observing at 153 Mpc and have been Locked for 10 minutes
Last night's lockloss at 2025-08-13 12:59UTC has no obvious cause, but I did notice that the dumped power into HAM6 has an interesting shape, with two bumps instead of the usual one. However, I found a lockloss from 2025-07-28 03:13UTC whose power trace has the same shape, so it's probably just an alignment effect, especially since the last lockloss happened in such a way that the fast shutter didn't even need to fire (86325).
TITLE: 08/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Nice shift with H1 locked 6hrs15min. We did have 3 drop-outs due to TCS ITMy CO2 (all quick, automatic recoveries). ~45min after the 3rd drop, there was superevent S250813k. Ended the night checking out peak Perseids just before the moon rose.
LOG:
Just had a couple of drops from Observing due to TCS_ITMY_CO2 guardian saying laser is unlocked and needed to find a new locking point. (Attached are the two occurrences thus far).
TITLE: 08/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently Observing at 145 Mpc and have been Locked for 50 minutes. Relocking went pretty well today with the only concern being the fast shutter not firing during the last lockloss (86324), but luckily it seems like we were okay and that was expected in that situation.
LOG:
14:30UTC Locked for 5.5 hours and running magnetic injections
14:40 Back into Observing
14:45 Out of Observing for SUS charge measurements
14:57 Lockloss
19:18 Started relocking
- Initial alignment - BS ADS convergence took 16+ minutes (86320)
20:47 NOMINAL_LOW_NOISE
20:55 Observing
21:12 Lockloss
22:44 NOMINAL_LOW_NOISE
22:46 Observing
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 19:33 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 15:33 |
| 15:00 | FAC | Kim, Nelly | LVEA | YES | Tech clean | 15:09 |
| 15:04 | FAC | Randy, Chris | LVEA | n | Craning around BSC2 | 18:46 |
| 15:09 | FAC | Kim | EX | n | Tech clean | 16:14 |
| 15:10 | FAC | Nelly | EY | n | Tech clean | 16:10 |
| 15:10 | PSL | Jason | PSL | YES | RefCav Alignment | 16:38 |
| 15:20 | VAC | Janos, Travis | MX, MY | n | Pump install | 19:21 |
| 15:22 | EE | Fil | CER, LVEA | n | Cable pulling | 18:14 |
| 15:24 | | Camilla | LVEA | n | Transitioning to LASER SAFE | 15:40 |
| 15:27 | EE | Marc | CER/LVEA | n | Pulling cables | 18:47 |
| 15:31 | VAC | Gerardo | LVEA | n | Removing turbo pump | 17:31 |
| 15:42 | | Christina, Nichole | LVEA | n | 3IFO inventory | 18:25 |
| 15:55 | | Richard | LVEA | n | Surveying the floor (people) for any weaknesses | 16:12 |
| 15:56 | PEM | Sam | LVEA | n | Talking to Fil and looking at accelerometers | 16:12 |
| 16:12 | FAC | Nelly | HAM Shack | n | Tech clean | 17:09 |
| 16:13 | EPO | Amber, Tour | LVEA | n | Tour | 16:31 |
| 16:16 | FAC | Kim | HAM Shack | n | Tech clean | 17:09 |
| 16:16 | SEI | Jim | LVEA | n | HEPI accumulator checks | 17:34 |
| 16:35 | EPO | Sam, Tooba, +1 | LVEA | n | Tour | 17:31 |
| 17:05 | EPO | Mike +2 Spokane Review | LVEA | n | Tour | 18:34 |
| 17:06 | EE | Jackie | LVEA | n | Joining Fil and Marc | 18:47 |
| 17:13 | FAC | Nelly, Kim | LVEA | n | Tech clean | 18:19 |
| 17:34 | | Richard | LVEA | n | Checking on work | 17:51 |
| 17:44 | | Camilla | LVEA | n | Looking for Richard | 17:51 |
| 18:02 | | Richard, Tooba, +1 | Roof | n | Being on the roof | 18:14 |
| 18:34 | | Camilla | LVEA | YES | Transitioning LVEA to laser hazard | 18:47 |
| 18:41 | SQZ | Sheila, Matt, Jennie | LVEA | YES | SQZT0 table work | 20:04 |
| 18:46 | EPO | Mike, Spokane Review | YARM | n | Driving down YARM | 20:21 |
| 18:47 | SQZ | Camilla | LVEA | YES | Joining SQZ crew | 20:04 |
| 18:49 | LASER | LVEA is LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 09:49 |
| 20:21 | VAC | Janos, Travis | MY | n | Continuing pump work | 22:17 |
| 20:29 | | Christina, Nichole | MX, MY | n | 3IFO | 21:59 |
| 23:15 | VAC | Janos | MY | n | Turning off pump | 00:45 |
TITLE: 08/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Got the hand-off from Oli, which was standard other than their mention of the Fast Shutter note after a lockloss from their first locking attempt post-Maintenance. H1 has currently been locked for almost an hour.
(Oh, and Oli also mentioned on leaving that they & Elenna noticed a Verbal Alarm for a PR3 saturation about 20min after getting into Observing; they mentioned this is an odd thing for PR3.)
Operator Checksheet NOTES:
Ivey, Edgard, and Brian have created new estimator fits (86233) and blend filters (86265) for the SR3 Y estimator, and we have new rate channels (86080), so we were excited to be able to take new estimator measurements (last time 85615).
Unfortunately, there were issues with installing the new filters, so I had to make do with the old ones: for the estimator filters, I used the fits from fits_H1SR3_2025-06-30.mat, and the blend filters are from Estimator_blend_doublenotch_SR3yaw.m, aka the DBL_notch filter and not the new skinny notch. These are the same filters used in the testing from 85615.
So the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters (85471), and this measurement has the more precise 'best possible' compensation filters (85746). Good for us to see how much of a difference the generic vs best possible compensation filters make.
Unfortunately, due to the filter installation issues, as well as still needing to re-set up the estimator channels following the channel name changes, I also didn't have much time to run the tests, so the actual test with the estimator was only 5 minutes long. Hopefully this is enough for at least a preliminary view of how it's working, and next week we can run a full test with the more recent filters. Like last time, the transition between the OSEM damping and the estimator damping was very smooth, and the noise out of the estimator was visibly smaller than with the regular damping (ndscope1).
Measurement times
SR3 Y damp -0.1
2025-08-12 18:28:00 - 18:44:00 UTC
SR3 Y damp -0.1, OSEM damp -0.4
2025-08-12 18:46:46 - 19:03:41 UTC
SR3 Y damp -0.1, Estimator damp -0.4
2025-08-12 19:09:00 - 19:16:51 UTC
Attached below are plots of the OSEM yaw signal, the M3 yaw optical lever witness sensor signal, and the drive request from light damping, full damping (current setting), and estimator damping modes from Oli's recent estimator test.
The blue trace is the light damping mode, the red trace is the full damping mode, and the yellow trace is the estimator damping.
The first plot is of the OSEM signal. The spectrum is dominated by OSEM noise. The blue, light damping trace shows where the suspension resonances are (around 1, 2, and 3 Hz). Under estimator damping, the resonances don't show up, as expected.
The second plot is of the OPLEV signal. It is much more obvious from this plot that the estimator is damping at the resonances as expected. Between the first and second peaks, as well as between the second and third, the yellow trace of the estimator damping mode is below the red trace of the full damping mode. This is good, as the estimator damping is expected to be better than the current full damping mode between the peaks. There is some noise between 3 and 4 Hz from the estimator. The light damping trace also shows a noticeable amount of excess noise between 10 and 15 Hz. We suspect this is due to ground motion from maintenance: the third, fourth, and fifth plots show comparisons between ground motion in July (when the light damping trace was 'normal') and August. There is excess noise in X, Y, and Z in August compared to July.
The sixth plot is of the drive requests. This data was pulled from a newly installed 512 samples/sec channel, while the previous analysis for a test in July (see: LHO: 85745) was done using a channel that was sampling at 16 samples/sec. The low frequency full damping drive request differs significantly between July and August, likely because aliasing effects caused the July data to be unreliable. Otherwise, the estimator is requesting less drive above 5 Hz as expected. We note that the estimator rolls off sharply above 10 Hz.
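The aliasing point can be illustrated with a generic sketch (these are not the actual H1 channels; the 12 Hz test tone and durations are made up for illustration): a drive component above the Nyquist frequency of a slow 16 samples/sec channel folds down to a spurious low frequency, while the 512 samples/sec channel records it correctly.

```python
import numpy as np

def dominant_freq(x, fs):
    """Return the frequency (Hz) of the largest FFT bin, DC excluded."""
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0  # ignore the DC bin
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

f_sig = 12.0                          # hypothetical drive component at 12 Hz
t_fast = np.arange(0, 8, 1 / 512)     # 512 samples/sec, like the new channel
t_slow = np.arange(0, 8, 1 / 16)      # 16 samples/sec, like the old channel
x_fast = np.sin(2 * np.pi * f_sig * t_fast)
x_slow = np.sin(2 * np.pi * f_sig * t_slow)  # sampled with no anti-alias filter

print(dominant_freq(x_fast, 512))  # 12.0 Hz: correct
print(dominant_freq(x_slow, 16))   # 4.0 Hz: aliased, 16 - 12 = 4
```

Since 12 Hz is above the 8 Hz Nyquist frequency of the slow channel, it appears at 16 - 12 = 4 Hz, which is how low-frequency content in a slow channel can be unreliable.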
The last plot is of the theoretical drive requests overlaid onto the empirical drive requests. We see that the major features of the estimator drive request are accounted for, as expected.
Oli intends to install the filter and the new, clean fits (see LHO: 86366) next Tuesday to test the yaw estimator once more. Hopefully the installation is smooth!
I would like to clarify from my initial alog that when I said "the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters", that was a lie!! The measurements used to calibrate and determine the correct response drives were taken before the satellite amplifiers were swapped for SR3, so even the OSEMINF calibration did not account for the new satellite amplifiers. The calibration we had in place at the time was therefore not very accurate, so we can't really compare this measurement to the last one.
Oli told me that the TCS CO2Y chassis tripped off during maintenance this morning. This is not surprising, as there was a lot of craning work going on near that rack and the CO2 chassis are known to be fussy, see FRS 6639.
When I went to untrip it, both indicator lights were red, but after keying it off and on, it turned on with no issues.
Lockloss at 2025-08-12 21:12UTC after 25 minutes locked
Oli, Elenna, Keita, Sheila, Jennie, Camilla
Oli noticed that ISC_LOCK has a check showing that the fast shutter didn't fire. Elenna and Oli then confirmed this.
We found that the fast shutter did not fire because the amount of light at the AS port was not high enough to reach its trigger threshold. There was no danger here and it should not have fired. This is a rare occurrence where the light goes towards the input rather than the output and is caught by baffles.
See plots of today's lockloss with no spike of light and no fast shutter closing via HAM6 GS13 vs a normal (earthquake) LL where the light spikes (especially as seen on Keita's 81080 VP power monitor) and the fast shutter closes.
It is unusual that we have had this rare type of lockloss twice in a couple of months (27th June: 85383). So that this can be monitored, I added this plot to the "lockloss select" command by putting it into /sys/h1/templates/locklossplots.
Keita requested we check that the power on ASC-AS_C is at its normal level with a 2W DRMI lock. It is within the normal range of the last three locks, see attached.
I checked the vacuum channels and there was no excursion around the lockloss time, so we didn't burn anything.
Back to Observing 22:46 UTC
Sheila, Jennie, Matt, Camilla. WP#12750
Followed what we did in 82881, but did not need to adjust alignment through the EOM.
Starting with:
Then, with 0V on the ISS AOM controlmon, increased to 73mW (85% throughput). Then put 5V on the controlmon and started to maximize the 1st order beam. We decided not to completely maximize the 1st order beam, as this reduces overall throughput and our current aim is to increase total green throughput, not ISS AOM range.
Ended with:
Sheila then adjusted the alignment into the fiber and maximized H1:SQZ-OPO_REFL_DC_POWER from 1.55 to 2.8.
This allowed us to have an OPO setpoint of 80uW with 20mW going into the fiber, with 5V on the controlmon and a spare 20mW that we could give to the fiber. Note that after ~1 hour, when we got to NLN, the controlmon had decreased from 5 to 2.5, so the power to the AOM may need to be increased next time we lose lock, possibly due to SQZT0 temperature changes now that the doors are on and the fans are off.
There was a high-pitched noise in SQZT7, close to the -Y side of the table; we couldn't figure out what it was and may have someone from EE help us with it another week.
I ran the SCAN_OPOTEMP guardian state (twice, as it was far off) once we were close to NLN.
WP12746 h1omc0 Low Noise ADC Autocal Test
EJ, Jonathan, Dave:
For a first test we restarted h1iopomc0 three times to run an autocal on the low-noise ADC; each time the autocal failed. The next test will be to power-cycle the existing card, possibly leading to replacement.
WP12755 TW1 Raw Minute Trend Offload
Dave:
h1daqtw1 was configured to write its raw minute trends into a new local directory to isolate the last 6 months of data from the running system. The NDS service on h1daqnds1 was restarted to serve these data from this location while the file transfer progresses.
Restarts
Tue12Aug2025
LOC TIME HOSTNAME MODEL/REBOOT
09:06:26 h1omc0 h1iopomc0 <<< three ADC AUTOCAL tests
09:08:12 h1omc0 h1iopomc0
09:08:58 h1omc0 h1iopomc0
09:09:12 h1omc0 h1omc <<< start user models
09:09:26 h1omc0 h1omcpi
11:51:19 h1daqnds1 [DAQ] <<< reconfigure for TW1 offload
TW1 offload status as of 07:30 Wed: 70% complete. ETA 15:15 this afternoon.
Unplugged an unused extension cord by the PSL racks.
High bay and LVEA and entrance lights turned off, paging system off. Oli checked WAP is off.
Everything else looked good.
Yesterday, we installed the BTRP (Beam Tube Roughing Pump) adapter flange on the 13.25" gate valve just to the -X side of GV13. This included installing an 8" GV onto the roughing pump port of the adapter, moving the existing gauge tree onto the new adapter, and installing a 2.75" blank on an unused port. All of the new CF joints were helium leak tested, and no signal was seen above the ~9e-11 torrL/s background of the leak detector.
The assembly is currently valved out of the BT vacuum volume via the 13.25" GV and is being pumped down via a small turbo and aux cart. Therefore, the PT-343 gauge reading is only reporting the BTRP assembly pressure, not the main BT pressure, so it can be ignored until further notice that it has been valved back in. This system has been pumping via aux cart or leak detector since ~2pm yesterday and will continue to be pumped until it is in the pressure range of the BT volume. The aux cart is isolated by foam under the wheels, but some noise may be noticed by DetChar folks, hence the DetChar tag on this report.
A before/after pair of photos. As the conductance is very bad in this complex volume, we're aiming to pump it until next Tuesday. The estimated pressure rise of the main volume after valving in this small volume next Tuesday is less than E-12 Torr (after equalizing), in other words, negligible.
Some backstage snapshots of the great teamwork of Travis, Janos, and me on installing these: Pic. 1 - "before"; 2,3 - 90% complete.
As of Tuesday, August 12, the pumps have been shut off and removed from this system, and the gauge tree valved back in to the main volume. Noise/vibration and pressure monitoring at MX should be back to nominal.
LOTO has now been applied to both the handle of the hand angle valve and the hand gate valve. Also, components have been added to the header, only 1 piece away from the booster pump.
Something kicked SRM and caused this lockloss. SRM was also kicked 20 seconds earlier, but we were able to recover from that.