TITLE: 08/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 29mph Gusts, 18mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
WP12760
Marc, Oli, Fil, Ryan C, Dave:
Marc is onsite preparing to drive to EX to replace the failed 18bit-DAC card in h1susex IO Chassis.
Front end is fenced and powered down.
At approximately 23:55 Friday 15 August 2025 PDT all models on h1susex failed with ADC timeout.
lspci showed that Adnaco2 slot 4 did not look quite right. This is the 3rd 18bit-DAC, which is used only by the h1sustmsx model and was replaced recently during lock-loss investigations.
I power cycled h1susex following the procedure:
fence from Dolphin, stop all models, power down via terminal, wait, power up via IPMI.
Now lspci is not reporting any card in A2-S4, indicating this 18bit DAC has failed.
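For reference, a minimal sketch of the remote power-up step, assuming standard ipmitool chassis commands; the Dolphin fencing and model stop/power-down steps are site-specific and only indicated by comments, and the BMC hostname and credentials are placeholders, not the real values:

import subprocess
import time

# Placeholder BMC hostname/credentials for h1susex (assumptions)
BMC = ["ipmitool", "-I", "lanplus", "-H", "h1susex-bmc", "-U", "admin", "-P", "password"]

def chassis(action):
    # Standard ipmitool chassis power command: status / on / off
    return subprocess.run(BMC + ["chassis", "power", action],
                          capture_output=True, text=True, check=True).stdout.strip()

# 1) fence the node from the Dolphin fabric (site script, not shown)
# 2) stop all models and power down via the terminal (site procedure, not shown)
time.sleep(60)                    # 3) wait for the host to fully power down
print(chassis("on"))              # 4) power the front end back up via IPMI
print(chassis("status"))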
I was alerted to an issue by H1 MANAGER at midnight. One of the DACs at ETMX failed, taking down ETMX and TMSX and tripping the SEI software watchdog. I called Dave, and he tried restarting the computer, but it didn't work. Richard said to try calling Fil and, if he didn't answer, to wait until morning; Fil didn't answer, so I'll try again at 6am to get him on the phone and see what can be done. For now, I've put ISC_LOCK in IDLE and bypassed the software watchdog to restore the ISI and HEPI to their nominal states.
TITLE: 08/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Smooth sailing shift with H1 locked over 23hrs even with our windy night (it's been 20-25mph steady winds at the corner for the last 12hrs).
LOG:
For FAMIS #26591: All fan trends look nice and flat!
TITLE: 08/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 14mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Easy & smooth shift handoff (as it should be) with H1 locked 17.75hrs! And it has even been windy the last 7hrs (Corner winds over 20mph).
Operator Checksheet NOTES:
TITLE: 08/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: We stayed locked the entire shift, 17.5 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
18:49 | LASER | LVEA is LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 09:49 |
17:14 | VAC | Travis | MidY | N | Check on roughing pumps | 17:28 |
18:02 | ISC | Keita, Jennie | Optics lab | LOCAL | ISS array work | 19:06 |
20:55 UTC There were both a plane and a helicopter flying over the site (DETCHAR)
Summary:
This post provides an estimate of the thermal transition caused by changing the JAC input power from 0 W to 100 W. The result will be used for the design of the JAC heater.
The main conclusion is: changing the JAC input power from 0 W to 100 W results in a cavity length change of ≈ 0.2 µm (corresponding to ≈ 100 V of PZT actuation), which remains within the PZT range. No heater-based compensation is required.
Details:
Two possible types of state change are considered. One is the effect in which light scattered from the mirrors is absorbed by the cavity body, heating it and causing the entire cavity to expand. This is expected to have a long transition time. The other is expansion of the mirrors themselves due to absorption, which shortens the cavity length and is expected to have a relatively short time constant.
The attached plot shows one hour of trend data starting from 2024-11-14 19:30 UTC. From top to bottom, the traces are: PMC transmitted power, PMC temperature from the thermistor, heater input, and PZT input.
This lock occurred after the PMC had been unlocked for about three hours, so it represents the transition from a cold state to a hot state. Just before locking, the PMC temperature was about 307.0 K, and immediately after locking, the PZT input voltage was 370 V.
Although the heater input changes appear to cause rapid changes in the PMC temperature, these are not reliable indicators of the actual body temperature change. The observed change is about 0.4 K, but given aluminum's thermal expansion coefficient of about 10^-5 /K, even a 0.1 K change would shift the cavity length by one FSR, which would exceed the PZT range and cause lock loss. This suggests that the rapid apparent temperature changes are due to the thermistor picking up radiation from the heater or sensing the local temperature near the heater, rather than the true body temperature.
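As a rough scale check (taking an illustrative cavity length of L ≈ 0.5 m, an assumed value not given in this post): ΔL = α·L·ΔT ≈ (10^-5 /K) × (0.5 m) × (0.1 K) = 0.5 µm ≈ λ/2 at 1064 nm, i.e. about one FSR.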
Therefore, to evaluate the body temperature change, we need to disregard these fast variations and look at the average thermistor signal. This shows that the change is less than 0.1 K over one hour, with the highest temperature occurring immediately after lock. This indicates that the observed temperature change is due to the heater input change, and that the body expansion can be neglected.
The important point is that, although the temperatures just after lock and at t = 3500 s are almost the same, the PZT input voltage changes by about 100 V. This is interpreted as a change in the mirror state. A 100 V change corresponds to a cavity length change of about 0.2 µm, which can be attributed to the change in input power.
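For reference, the implied PZT actuation coefficient is ΔL/ΔV ≈ 0.2 µm / 100 V = 2 nm/V, assuming an approximately linear PZT response over this range.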
TITLE: 08/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
On August 5th, Randy and I removed two braces from the BSC work platform at EY. These were used to stabilize the work platform by connecting it to the flange around the BSC chamber, and they had been in place since the platform was originally installed. These were the only two still in place.
Fri Aug 15 10:08:39 2025 INFO: Fill completed in 8min 36secs
TITLE: 08/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Quiet shift with H1 observing for almost the whole duration except for a lockloss right at the end. Initial alignment just finished, now starting main locking sequence.
LOG:
Lockloss @ 04:45 UTC after 18+ hours locked - link to lockloss tool
Perhaps caused by the smallest of ETMX glitches starting in L3 right before the lockloss? Doesn't exactly look to me like a usual one of these, though. Otherwise, no obvious cause.
Jumping right into an alignment given the long lock stretch.
TITLE: 08/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Locked for 13 hours. We had calibration and commissioning this morning, where we rode through a few earthquakes with help from some new configuration (alog86364). Ryan S noticed ITMX mode 2 has been growing slowly for this entire lock. It was never damped because it was almost nonexistent at the start of the lock. He will address it when practical.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
18:49 | LASER | LVEA is LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 09:49 |
19:22 | SPI | Jeff | Opt Lab | yes | Inventory | 19:25 |
TITLE: 08/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 25mph Gusts, 16mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY: H1 has been locked for 12.5 hours and observing since commissioning wrapped up this morning.
I noticed the ~500 Hz violin mode peak is high in DARM and saw that indeed ITMX mode 2 is elevated and has been growing this entire lock stretch. It looks like the gain was never turned on for this mode, which happens if the monitor for a mode is low enough when the full-power damping is turned on. Since the violin damping Guardian sees the mode is growing, it's constantly zeroing the gain in the damping filters, so we've been unable to turn on the nominal gain of -15.
To fix this, we would have to take H1 out of observing, take the VIOLIN_DAMPING node to 'DAMPING_ON_SIMPLE', then back to 'DAMP_VIOLINS_FULL_POWER' so that the node resets its "mode growing" checks and sees that the mode needs damping. I plan to do this if either L1 drops out of observing or the 500 Hz peak in DARM gets up to 10^-16 m/√Hz.
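A minimal sketch of that reset using pyepics, assuming the Guardian node's request/state channels follow the usual H1:GRD-<NODE>_REQUEST / _STATE naming (verify the state names against the VIOLIN_DAMPING node before use):

import time
from epics import caget, caput   # pyepics

NODE = 'H1:GRD-VIOLIN_DAMPING'

def request_and_wait(state, timeout=120):
    # Request a Guardian state and poll until the node reports it
    caput(NODE + '_REQUEST', state)
    t0 = time.time()
    while caget(NODE + '_STATE', as_string=True) != state:
        if time.time() - t0 > timeout:
            raise TimeoutError('%s did not reach %s' % (NODE, state))
        time.sleep(1)

# With H1 out of observing: cycle the node so it re-evaluates the growing mode
request_and_wait('DAMPING_ON_SIMPLE')
request_and_wait('DAMP_VIOLINS_FULL_POWER')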
I ran a bruco yesterday that showed lots of low-ish broadband coherence with SRCL, PRCL, and MICH.
I have a previous good measurement of the PRCL coupling, so I used that to iteratively fit an improved feedforward. Based on the last test I did of the PRCL feedforward, I am not certain that just adjusting the gain is good enough right now to reduce the coupling significantly. screenshot
I also measured and tried to fit a better SRCL feedforward. The challenge continues to be fitting the low and high frequency behaviors of the SRCL coupling simultaneously. The misfit of the high frequency behavior continues to create a broad bump of SRCL noise around 100-300 Hz.
I made some small improvements of the SRCL coupling, but there is still room for more improvement. screenshot
I still need to remeasure and refit the MICH feedforward.
The new SRCL FF is in FM6 and the PRCL FF is in FM8. I updated the guardian and the SDF.
Follow up bruco taken after these changes shows large reduction in PRCL coherence, some reduction of SRCL coherence. Minimal change in MICH coherence.
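For illustration only, a generic least-squares fit of a small zero-pole-gain model to a measured coupling transfer function, standing in for the kind of iterative feedforward fitting described above (this is not the actual fitting code; the synthetic data and model orders are assumptions):

import numpy as np
from scipy.optimize import least_squares
from scipy.signal import freqs_zpk

def residuals(params, f, tf_meas, nz, npoles):
    # params = [zeros (Hz), poles (Hz), gain]; compare model to measurement
    zeros, poles, gain = params[:nz], params[nz:nz + npoles], params[-1]
    _, model = freqs_zpk(-2*np.pi*zeros, -2*np.pi*poles, gain, worN=2*np.pi*f)
    err = model - tf_meas
    return np.concatenate([err.real, err.imag])

# Synthetic stand-in for a measured coupling TF (one zero, two poles)
f = np.logspace(0, 3, 200)
_, tf_meas = freqs_zpk(-2*np.pi*np.array([10.0]),
                       -2*np.pi*np.array([1.0, 300.0]), 5.0, worN=2*np.pi*f)

# Fit a 2-zero / 2-pole model to the measurement
x0 = np.array([5.0, 50.0, 0.5, 100.0, 1.0])
fit = least_squares(residuals, x0, args=(f, tf_meas, 2, 2))
print('zeros (Hz):', fit.x[:2], 'poles (Hz):', fit.x[2:4], 'gain:', fit.x[-1])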
Ivey, Edgard, and Brian have created new estimator fits (86233) and blend filters (86265) for the SR3 Y estimator, and we have new rate channels (86080), so we were excited to be able to take new estimator measurements (last time 85615).
Unfortunately, there were issues with installing the new filters, so I had to make do with the old filters: for the estimator filters, I used the fits from fits_H1SR3_2025-06-30.mat, and the blend filters are from Estimator_blend_doublenotch_SR3yaw.m, aka the DBL_notch filter and not the new skinny notch. These are the same filters used in the testing from 85615.
So the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters (85471), and this measurement has the more precise 'best possible' compensation filters (85746). Good for us to see how much of a difference the generic vs best possible compensation filters make.
Unfortunately, due to the filter installation issues, as well as still needing to re-set up the estimator channels following the channel name changes, I also didn't have much time to run the tests, so the actual test with the estimator was only 5 minutes long. Hopefully this is enough for at least a preliminary view of how it's working, and next week we can run a full test with the more recent filters. Like last time, the transition between the OSEM damping and the estimator damping was very smooth, and the noise out of the estimator was visibly smaller than with the regular damping (ndscope1).
Measurement times
SR3 Y damp -0.1
2025-08-12 18:28:00 - 18:44:00 UTC
SR3 Y damp -0.1, OSEM damp -0.4
2025-08-12 18:46:46 - 19:03:41 UTC
SR3 Y damp -0.1, Estimator damp -0.4
2025-08-12 19:09:00 - 19:16:51 UTC
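For reference, a sketch of how the three spans above could be compared offline with gwpy; the channel name, fftlength, and plot limits are assumptions:

from gwpy.timeseries import TimeSeries
from gwpy.plot import Plot

CHAN = 'H1:SUS-SR3_M3_OPLEV_YAW_OUT_DQ'   # hypothetical witness channel name
spans = {
    'damp -0.1':                      ('2025-08-12 18:28:00', '2025-08-12 18:44:00'),
    'damp -0.1, OSEM damp -0.4':      ('2025-08-12 18:46:46', '2025-08-12 19:03:41'),
    'damp -0.1, estimator damp -0.4': ('2025-08-12 19:09:00', '2025-08-12 19:16:51'),
}

plot = Plot()
ax = plot.add_subplot(xscale='log', yscale='log')
for label, (t0, t1) in spans.items():
    data = TimeSeries.get(CHAN, t0, t1)          # fetch via NDS
    ax.plot(data.asd(fftlength=60, overlap=30), label=label)
ax.set_xlim(0.1, 20)
ax.set_ylabel('ASD')
ax.legend()
plot.savefig('sr3_yaw_damping_comparison.png')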
Attached below are plots of the OSEM yaw signal, the M3 yaw optical lever witness sensor signal, and the drive request from light damping, full damping (current setting), and estimator damping modes from Oli's recent estimator test.
The blue trace is the light damping mode, the red trace is the full damping mode, and the yellow trace is the estimator damping.
The first plot is of the OSEM signal. The spectrum is dominated by OSEM noise. The blue, light damping trace shows where the suspension resonances are (around 1, 2, and 3 Hz). Under estimator damping, the resonances do not show up, as expected.
This second plot is of the OPLEV signal. It is much more obvious from this plot that the estimator is damping at the resonances as expected. Between the first and second peaks, as well as between the second and third, the yellow trace of the estimator damping mode is below the red trace of the full damping mode. This is good, since the estimator damping is expected to do better than the current full damping mode between the peaks. There is some noise from the estimator between 3 and 4 Hz. The light damping trace also sees a noticeable amount of excess noise between 10 and 15 Hz. We suspect this is due to ground motion from maintenance: the third, fourth, and fifth plots show comparisons between ground motion in July (when the light damping trace was 'normal') and August. There is excess noise in X, Y, and Z in August compared to July.
The sixth plot is of the drive requests. This data was pulled from a newly installed 512 samples/sec channel, while the previous analysis for a test in July (see: LHO: 85745) was done using a channel that was sampling at 16 samples/sec. The low frequency full damping drive request differs significantly between July and August, likely because aliasing effects caused the July data to be unreliable. Otherwise, the estimator is requesting less drive above 5 Hz as expected. We note that the estimator rolls off sharply above 10 Hz.
The last plot is of the theoretical drive requests overlaid onto the empirical drive requests. We see that the major features of the estimator drive request are accounted for, as expected.
Oli intends to install the filter and the new, clean fits (see LHO: 86366) next Tuesday to test the yaw estimator once more. Hopefully the installation is smooth!
I would like to clarify from my initial alog that when I said "the only difference between the last estimator test and this one is that the last test had the generic satamp compensation filters", that was a lie!! The measurements used for calibrating and figuring out the correct response drives were taken before the satellite amplifiers were swapped for SR3, so even the OSEMINF calibration was not done with the new satellite amplifiers in mind. The calibration we had in place at the time was therefore not very accurate to what we had going on, so we can't really compare this measurement to the last one.
Today Oli and I saw that the ADS convergence checker for the beamsplitter was taking forever to return True during initial alignment, despite the fact that the signals appeared well-converged. The convergence threshold is set on line 1158 of ALIGN_IFO.py, and it is 1.5 for both pitch and yaw. Watching the log, the yaw output quickly dropped below this value, while the pitch output hovered between about 1.8 and 6. I tried raising the value to 5, but pitch still stayed mostly above that value. I finally changed it to 10, and the state completed. Overall, we were waiting for convergence for over 16 minutes. It seems like the convergence thresholds for pitch and yaw should be different. It took about two minutes for the pitch and yaw ADS outputs to reach their steady values. On ndscope minute trend, the yaw average value appears to be around zero, while for pitch the average value is around -4. The convergence checker averages over 20 seconds.
The value is still 10, but that might be too high.
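A hypothetical sketch of how separate pitch/yaw thresholds could look (this is not the actual ALIGN_IFO.py code; the channel names, threshold values, and the cdsutils averaging convention are assumptions):

import cdsutils

# Looser limit for pitch, which settles around -4; tighter for yaw, which sits near zero
CONVERGENCE_THRESHOLD = {'P': 10.0, 'Y': 1.5}

def bs_ads_converged(avg_sec=20):
    for dof, thresh in CONVERGENCE_THRESHOLD.items():
        # average the ADS output over the last avg_sec seconds (negative time = past data)
        value = cdsutils.avg(-avg_sec, 'H1:ASC-ADS_%s3_DOF_OUT16' % dof)
        if abs(value) > thresh:
            return False
    return True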
As TJ, Elenna, and Sheila had chatted about, I think this is due to the 'slow let-go' integrator being turned off on the BS pit M1 stage.
I suggest that on Monday (or maybe Tuesday?) we modify the gen_PREP_FOR_MICH state of ALIGN_IFO to engage FM1 of H1:SUS-BS_M1_LOCK_P (see the sketch below) so that we have the integrator engaged for all of the MICH alignment use cases: MICH bright, which we actually use for initial alignment; MICH dark, which we used to use for initial alignment; and MICH with ALS, which we use for aligning MICH when the green arms are locked.
It probably doesn't matter if the integrator in FM1 is left on or not, since these states are only used at low power, and the DOWN state of ISC_DRMI gets it turned off.
I'll coordinate with other commissioners to make that change early next week.
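A minimal sketch of the proposed gen_PREP_FOR_MICH change, written in the usual Guardian style (the ezca instance is provided by the Guardian environment; treat this as an outline rather than the actual edit):

# Inside the gen_PREP_FOR_MICH-generated state's main():
ezca.switch('SUS-BS_M1_LOCK_P', 'FM1', 'ON')   # engage the slow let-go integrator
# The DOWN state of ISC_DRMI turns it back off, so leaving FM1 on here should be harmless.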
I started an initial alignment at 15:05 UTC
16:24 UTC Observing