Just had a couple of drops from Observing due to the TCS_ITMY_CO2 guardian reporting that the laser was unlocked and needed to find a new locking point. (Attached are the two occurrences thus far.)
TITLE: 08/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently Observing at 145 Mpc and have been Locked for 50 minutes. Relocking went pretty well today, with the only concern being the fast shutter not firing during the last lockloss (86324), but luckily it seems we were okay and the shutter not firing was expected in that situation.
LOG:
14:30UTC Locked for 5.5 hours and running magnetic injections
14:40 Back into Observing
14:45 Out of Observing for SUS charge measurements
14:57 Lockloss
19:18 Started relocking
- Initial alignment - BS ADS convergence took 16+ minutes (86320)
20:47 NOMINAL_LOW_NOISE
20:55 Observing
21:12 Lockloss
22:44 NOMINAL_LOW_NOISE
22:46 Observing
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 19:33 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 15:33 |
| 15:00 | FAC | Kim, Nelly | LVEA | YES | Tech clean | 15:09 |
| 15:04 | FAC | Randy, Chris | LVEA | n | Craning around BSC2 | 18:46 |
| 15:09 | FAC | Kim | EX | n | Tech clean | 16:14 |
| 15:10 | FAC | Nelly | EY | n | Tech clean | 16:10 |
| 15:10 | PSL | Jason | PSL | YES | RefCav Alignment | 16:38 |
| 15:20 | VAC | Janos, Travis | MX, MY | n | Pump install | 19:21 |
| 15:22 | EE | Fil | CER, LVEA | n | Cable pulling | 18:14 |
| 15:24 | | Camilla | LVEA | n | Transitioning to LASER SAFE | 15:40 |
| 15:27 | EE | Marc | CER/LVEA | n | Pulling cables | 18:47 |
| 15:31 | VAC | Gerardo | LVEA | n | Removing turbo pump | 17:31 |
| 15:42 | | Christina, Nichole | LVEA | n | 3IFO inventory | 18:25 |
| 15:55 | | Richard | LVEA | n | Surveying the floor (people) for any weaknesses | 16:12 |
| 15:56 | PEM | Sam | LVEA | n | Talking to Fil and looking at accelerometers | 16:12 |
| 16:12 | FAC | Nelly | HAM Shack | n | Tech clean | 17:09 |
| 16:13 | EPO | Amber, Tour | LVEA | n | Tour | 16:31 |
| 16:16 | FAC | Kim | HAM Shack | n | Tech clean | 17:09 |
| 16:16 | SEI | Jim | LVEA | n | HEPI accumulator checks | 17:34 |
| 16:35 | EPO | Sam, Tooba, +1 | LVEA | n | Tour | 17:31 |
| 17:05 | EPO | Mike +2 Spokane Review | LVEA | n | Tour | 18:34 |
| 17:06 | EE | Jackie | LVEA | n | Joining Fil and Marc | 18:47 |
| 17:13 | FAC | Nelly, Kim | LVEA | n | Tech clean | 18:19 |
| 17:34 | | Richard | LVEA | n | Checking on work | 17:51 |
| 17:44 | | Camilla | LVEA | n | Looking for Richard | 17:51 |
| 18:02 | | Richard, Tooba, +1 | Roof | n | Being on the roof | 18:14 |
| 18:34 | | Camilla | LVEA | YES | Transitioning LVEA to laser hazard | 18:47 |
| 18:41 | SQZ | Sheila, Matt, Jennie | LVEA | YES | SQZT0 table work | 20:04 |
| 18:46 | EPO | Mike, Spokane Review | YARM | n | Driving down YARM | 20:21 |
| 18:47 | SQZ | Camilla | LVEA | YES | Joining SQZ crew | 20:04 |
| 18:49 | LASER | LVEA is LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 09:49 |
| 20:21 | VAC | Janos, Travis | MY | n | Continuing pump work | 22:17 |
| 20:29 | | Christina, Nichole | MX, MY | n | 3IFO | 21:59 |
| 23:15 | VAC | Janos | MY | n | Turning off pump | 00:45 |
TITLE: 08/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Got the hand-off from Oli, which was standard other than their mention of the Fast Shutter note after a lockloss from their first locking attempt post-maintenance. H1 has currently been locked for almost an hour.
(Oh, and Oli also mentioned on leaving that they and Elenna noticed a Verbal Alarm for a PR3 saturation about 20 min after getting into Observing; they mentioned this is an odd thing for PR3.)
Operator Checksheet NOTES:
Ivey, Edgard, and Brian have created new estimator fits (86233) and blend filters (86265) for the SR3 Y estimator, and we have new rate channels (86080), so we were excited to be able to take new estimator measurements (last time 85615).
Unfortunately, there were issues with installing the new filters, so I had to make do with the old filters: for the estimator filters, I used the fits from fits_H1SR3_2025-06-30.mat, and the blend filters are from Estimator_blend_doublenotch_SR3yaw.m, aka the DBL_notch filter and not the new skinny notch. These are the same filters used in the testing from 85615.
So the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters (85471), and this measurement has the more precise 'best possible' compensation filters (85746). Good for us to see how much of a difference the generic vs best possible compensation filters make.
Unfortunately, due to the filter installation issues, as well as still needing to set the estimator channels back up following the channel name changes, I also didn't have much time to run the tests, so the actual test with the estimator was only 5 minutes long. Hopefully this is okay enough for at least a preliminary view of how it's working, and then next week we can run a full test with the more recent filters. Like last time, the transition between the OSEM damping and the estimator damping was very smooth, and the noise out of the estimator was visibly smaller than with the regular damping (ndscope1).
Measurement times
SR3 Y damp -0.1
2025-08-12 18:28:00 - 18:44:00 UTC
SR3 Y damp -0.1, OSEM damp -0.4
2025-08-12 18:46:46 - 19:03:41 UTC
SR3 Y damp -0.1, Estimator damp -0.4
2025-08-12 19:09:00 - 19:16:51 UTC
Attached below are plots of the OSEM yaw signal, the M3 yaw optical lever witness sensor signal, and the drive request from light damping, full damping (current setting), and estimator damping modes from Oli's recent estimator test.
The blue trace is the light damping mode, the red trace is the full damping mode, and the yellow trace is the estimator damping.
The first plot is of the OSEM signal. The spectrum is dominated by OSEM noise. The blue, light damping trace shows where the suspension resonances are (around 1, 2, and 3 Hz). Under estimator damping, the resonances don't show up, as expected.
The second plot is of the OPLEV signal. It is much more obvious from this plot that the estimator is damping at the resonances as expected. Between the first and second peaks, as well as the second and third peaks, the yellow trace of the estimator damping mode is below the red trace of the full damping mode. This is good because the estimator damping is expected to be better than the current full damping mode between the peaks. There is some estimator noise between 3 and 4 Hz. The light damping trace also sees a noticeable amount of excess noise between 10 and 15 Hz. We suspect this is due to ground motion from maintenance: the third, fourth, and fifth plots show comparisons between ground motion in July (when the light damping trace was 'normal') and August. There is excess noise in X, Y, and Z in August when compared to July.
The sixth plot is of the drive requests. This data was pulled from a newly installed 512 samples/sec channel, while the previous analysis for a test in July (see: LHO: 85745) was done using a channel that was sampling at 16 samples/sec. The low frequency full damping drive request differs significantly between July and August, likely because aliasing effects caused the July data to be unreliable. Otherwise, the estimator is requesting less drive above 5 Hz as expected. We note that the estimator rolls off sharply above 10 Hz.
The last plot is of the theoretical drive requests overlaid onto the empirical drive requests. We see that the major features of the estimator drive request are accounted for, as expected.
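For anyone wanting to reproduce these spectra, a minimal gwpy sketch is below; the oplev channel name and FFT settings are assumptions, not necessarily what was used for the attached plots.

```python
# Hedged sketch: compare the SR3 M3 oplev yaw ASD for the three damping
# configurations above. Channel name and FFT settings are guesses.
from gwpy.timeseries import TimeSeries
from gwpy.plot import Plot

CHAN = 'H1:SUS-SR3_M3_OPLEV_YAW_OUT_DQ'  # assumed oplev witness channel

spans = {
    'light damping (-0.1)':           ('2025-08-12 18:28:00', '2025-08-12 18:44:00'),
    'full OSEM damping (-0.1, -0.4)': ('2025-08-12 18:46:46', '2025-08-12 19:03:41'),
    'estimator damping (-0.1, -0.4)': ('2025-08-12 19:09:00', '2025-08-12 19:16:51'),
}

asds = {}
for label, (start, end) in spans.items():
    data = TimeSeries.get(CHAN, start, end)           # fetch via NDS
    asds[label] = data.asd(fftlength=64, overlap=32)  # amplitude spectral density

plot = Plot(*asds.values())
ax = plot.gca()
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlim(0.1, 20)
ax.legend(list(asds.keys()))
plot.save('sr3_oplev_damping_comparison.png')
```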
Oli intends to install the filter and the new, clean fits (see LHO: 86366) next Tuesday to test the yaw estimator once more. Hopefully the installation is smooth!
I would like to clarify from my initial alog that when I said that "the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters", that was a lie!! The measurements used for calibrating and figuring out the correct response drives were taken before the satellite amplifiers were swapped for SR3, so even the OSEMINF calibration was not done with the new satellite amplifiers in mind. The calibration we had in at the time was therefore not very accurate to what was actually going on, so we can't really compare this measurement to the last one.
Oli told me that the TCS CO2Y Chassis tripped off during Maintenance this morning. This is not surprising, as there was a lot of craning work going on near that rack and the CO2 chassis are known to be fussy, see FRS 6639.
When I went to untrip it, both indicator lights were red, but after keying it off/on it turned on with no issues.
Lockloss at 2025-08-12 21:12UTC after 25 minutes locked
Oli, Elenna, Keita, Sheila, Jennie, Camilla
Oli noticed that ISC_LOCK had flagged a check that the fast shutter didn't fire. Elenna and Oli then confirmed this.
We found that the fast shutter did not fire because the light at the AS port did not reach the threshold needed to fire it. There was no danger here and it should not have fired. This is a rare occurrence where the light goes towards the input rather than the output and is caught by baffles.
See plots of today's lockloss, with no spike of light and no fast shutter closing as seen on the HAM6 GS13, vs. a normal (earthquake) lockloss where the light spikes (especially as seen on Keita's 81080 VP power monitor) and the fast shutter closes.
It is unusual that we have had this rare type of lockloss twice in a couple of months (27th June: 85383). So that this can be monitored, I added this plot to the "lockloss select" command by putting it into /sys/h1/templates/locklossplots.
Keita requested that we check that the power on ASC-AS_C is at its normal level with a 2W DRMI lock. It is within the normal range of the last three locks, see attached.
I checked the vacuum channels and there was no excursion around the lockloss time, so we didn't burn anything.
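The kind of trend check described above can be scripted; here is a rough gwpy sketch, where both channel names and the exact window around the 21:12 UTC lockloss are guesses rather than the channels actually checked.

```python
# Rough sketch only: trend the AS port power and a HAM6 GS13 around the
# 21:12 UTC lockloss to confirm there was no light spike and no shutter motion.
# Both channel names are assumptions.
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:ASC-AS_C_NSUM_OUT_DQ',        # AS port power (assumed name)
    'H1:ISI-HAM6_BLND_GS13Z_IN1_DQ',  # HAM6 GS13 vertical signal (assumed name)
]
data = TimeSeriesDict.get(channels, '2025-08-12 21:11:30', '2025-08-12 21:12:30')
plot = data.plot()
plot.save('ham6_lockloss_check.png')
```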
Back to Observing 22:46 UTC
Sheila, Jennie, Matt, Camilla. WP#12750
Followed what we did in 82881, but did not need to adjust alignment through the EOM.
Starting with:
Then with 0V on the ISS AOM controlmon, increased to 73mW (85% throughput). Then put 5V on the controlmon and started to maximize the 1st order beam. We decided not to completely maximize the 1st order beam, as this reduces overall throughput and our current aim is to increase total green throughput, not ISS AOM range.
Ended with:
Sheila then adjusted the alignment into the fiber and maximized H1:SQZ-OPO_REFL_DC_POWER from 1.55 to 2.8.
This allowed us to have an OPO setpoint of 80uW with 20mW going into the fiber, with 5V on the controlmon and a spare 20mW that we could give to the fiber. Note that after ~1 hour, when we got to NLN, the controlmon had decreased from 5 to 2.5, so the power to the AOM may need to be increased next time we lose lock. This is maybe due to SQZT0 temperature changes now that the doors are on and the fans are off.
There was a high pitched noise in SQZT7, close to the -Y side of the table, we couldn’t figure out what it was and maybe will have someone from EE help us with it another week.
I ran the SCAN_OPOTEMP guardian state (twice, as it was far off) once we were close to NLN.
WP12746 h1omc0 Low Noise ADC Autocal Test
EJ, Jonathan, Dave:
For a first test we restarted h1iopomc0 three times to run an autocal on the low-noise ADC; each time the autocal failed. The next test will be to power cycle the existing card, possibly leading to its replacement.
WP12755 TW1 Raw Minute Trend Offload
Dave:
h1daqtw1 was configured to write its raw minute trends into a new local directory to isolate the last 6 months of data from the running system. The NDS service on h1daqnds1 was restarted to serve these data from the new location while the file transfer progresses.
Restarts
Tue12Aug2025
LOC TIME HOSTNAME MODEL/REBOOT
09:06:26 h1omc0 h1iopomc0 <<< three ADC AUTOCAL tests
09:08:12 h1omc0 h1iopomc0
09:08:58 h1omc0 h1iopomc0
09:09:12 h1omc0 h1omc <<< start user models
09:09:26 h1omc0 h1omcpi
11:51:19 h1daqnds1 [DAQ] <<< reconfigure for TW1 offload
TW1 offload status as of 07:30 Wed: 70% complete. ETA 15:15 this afternoon.
Unplugged an unused extension cord by the PSL racks.
High bay, LVEA, and entrance lights turned off; paging system off. Oli checked that the WAP is off.
Everything else looked good.
Today Oli and I saw that the ADS convergence checker for the beamsplitter was taking forever to return True during initial alignment, despite the fact that the signals appeared well-converged. The convergence threshold is set on line 1158 of ALIGN_IFO.py, and it is 1.5 for both pitch and yaw. Watching the log, the yaw output quickly dropped below this value, while the pitch output hovered between about 1.8 and 6. I tried raising the value to 5, but pitch still stayed mostly above that value. I finally changed it to 10, and the state completed. Overall, we were waiting for convergence for over 16 minutes. It seems like the convergence values for pitch and yaw should be different. It took about two minutes for the pitch and yaw ADS outputs to reach their steady values. On ndscope minute trend, the yaw average value appears to be around zero, while for pitch the average value is around -4. The convergence checker averages over 20 seconds.
The value is still 10, but that might be too high.
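To illustrate, a sketch of what separate pitch and yaw thresholds could look like is below; the channel names and the cdsutils call are placeholders, not the actual ALIGN_IFO.py code.

```python
# Illustration only: per-DOF convergence thresholds for the BS ADS check.
# Channel names are placeholders; the real check lives in ALIGN_IFO.py.
import cdsutils

PIT_THRESHOLD = 10   # pitch output settles around -4, so it needs a looser threshold
YAW_THRESHOLD = 1.5  # yaw converges near zero, so the original value is fine

def bs_ads_converged():
    # average the ADS outputs over the last 20 seconds, as the checker does now
    pit = abs(cdsutils.avg(-20, 'H1:ASC-ADS_PIT_BS_OUTPUT'))  # placeholder channel
    yaw = abs(cdsutils.avg(-20, 'H1:ASC-ADS_YAW_BS_OUTPUT'))  # placeholder channel
    return pit < PIT_THRESHOLD and yaw < YAW_THRESHOLD
```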
As TJ, Elenna, and Sheila had chatted about, I think this is due to the 'slow let-go' integrator being turned off on the BS pit M1 stage.
I suggest that on Monday (or maybe Tuesday?) we modify the gen_PREP_FOR_MICH state of ALIGN_IFO to engage FM1 of H1:SUS-BS_M1_LOCK_P so that we have the integrator engaged for all of the MICH alignment use cases (MICH bright which we actually use for initial alignment, MICH dark which we used to use for init alignment, and MICH with ALS, which we use for aligning MICH when the green arms are locked).
It probably doesn't matter if the integrator in FM1 is left on or not, since these states are only used at low power, and the DOWN state of ISC_DRMI gets it turned off.
I'll coordinate with other commissioners to make that change early next week.
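For concreteness, the change could look something like the sketch below inside the guardian state (an assumption about the implementation, not the actual ALIGN_IFO code).

```python
# Sketch only: engage the slow let-go integrator when preparing for MICH alignment.
# Inside a guardian state the `ezca` object is provided by the framework.
def main(self):
    # turn on FM1 (the integrator) in the BS M1 pitch lock filter bank
    ezca.switch('SUS-BS_M1_LOCK_P', 'FM1', 'ON')
    # ISC_DRMI's DOWN state is expected to turn it back off later
```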
WP 12749
Drawing D0902810-v11
Installation of the field cabling for the JAC tip/tilts was completed. Cables were pulled from the CER SUS-C3 rack to the SUS-R1 field rack. The DB25 cables from the Sat Amp chassis were pulled to the HAM1 D4 and D6 flanges.
F. Clara, M. Pirello, D. Sigg
WP 12756
Rack and Cable Tray Layout D1002704-v8
A new electrical rack was positioned next to the SUS-R4 rack. The new rack will be designated SUS-R7. The cable tray, DC power strips, and power junction box will be installed in the upcoming weeks.
Randy, F. Clara, J. Figueroa, and M. Pirello
The FSS RefCav TPD has been on a downward trend again, so I tuned up the alignment on the PSL table this morning. All work was done with the ISS ON and diffracting ~3.8% unless otherwise noted. As usual, I began with a power budget of the FSS path:
The AOM's single pass diffraction efficiency was a little low, so I touched up the AOM alignment to improve it. I also tweaked the alignment of mirror M21 (that reflects the RefCav beam back through the AOM for the double pass) to improve the double pass diffraction efficiency (since the AOM moved). Finally I slightly tweaked the EOM alignment to center the beam on the input and output apertures and measured the power transmitted through the EOM:
I then tweaked the alignment into the RefCav using the alignment iris and manually tweaking the picomotor-equipped mirror mounts. The RefCav locked on the first attempt, with an initial TPD of ~0.735 V. I then used the picomotors to fine tune the alignment and managed to get a TPD of ~0.845 V. To end, I unlocked the RefCav and tweaked the beam alignment onto the RefCav's RFPD, then measured the RefCav visibility using the RFPD voltage:
I then turned off the ISS (we turn it off while the enclosure returns to thermal equilibrium, because during this process the PMC transmission can change; we don't want the ISS to actuate on the laser power until things stabilize), left the enclosure, and returned it to Science Mode operation. I let the enclosure return to thermal equilibrium for about an hour and then checked on things. When I turned the ISS back ON I had to increase the ISS RefSignal from -1.99 V to -1.98 V to hold our diffracted power at ~4%. I then did one final tweak of the RefCav alignment using our picomotor mirrors; I was able to get the RefCav TPD up to ~0.854 V. At this point the RefCav tune-up is complete and the PSL is ready for post-maintenance IFO recovery. This closes LHO WP 12751.
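(As a reminder for readers, the visibility quoted from the RFPD voltages above is typically computed as one minus the ratio of locked to unlocked reflected power; a toy calculation with placeholder numbers:)

```python
# Placeholder numbers only: RefCav visibility from the RFPD voltages,
# visibility = 1 - (locked reflected power / unlocked reflected power).
v_unlocked = 1.00  # RFPD voltage with the cavity unlocked (placeholder)
v_locked = 0.20    # RFPD voltage with the cavity locked (placeholder)
visibility = 1 - v_locked / v_unlocked
print(f"RefCav visibility: {visibility:.1%}")
```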
Tue Aug 12 10:07:51 2025 INFO: Fill completed in 7min 48secs
Lockloss at 2025-08-12 14:57 UTC, right before the start of maintenance. Haven't looked yet to see if it was caused by the SUS charge measurements or by something else.
Oli, Camilla
This lockloss occurred while the excitation on the ETMX ESD was ramping up, but it doesn't look related to that. It looks like an ETM_GLITCH type lockloss.
Interestingly, at this time our DARM-to-L3 control had been moved from ETMX to ITMX, and you can clearly see the glitch in ITMX_L3, see attached. As Sheila has been telling us all along, this is a clear indicator that the issue has nothing to do with the ETMX SUS glitching and is caused by DARM, with the SUS just being the witness.
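A sketch of how the attached comparison could be regenerated with gwpy is below; the channel names and the window around the 14:57 UTC lockloss are assumptions.

```python
# Sketch only: compare the L3 drive on ITMX and ETMX around the lockloss to see
# the glitch on whichever optic holds DARM control. Channel names are assumed.
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:SUS-ITMX_L3_MASTER_OUT_UL_DQ',  # assumed quadrant/channel
    'H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ',  # assumed quadrant/channel
]
data = TimeSeriesDict.get(channels, '2025-08-12 14:56:30', '2025-08-12 14:57:30')
plot = data.plot()
plot.save('itmx_etmx_l3_lockloss.png')
```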
TITLE: 08/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
Currently have been Locked for over 5.5 hours and out of Observing to run injections