Edgard, Ivey, Brian.
Relevant FRS ticket: 32526
We made modifications to the HLTS_W_EST and estimator library parts to add DQ channels that monitor the total drive request to the M1 OSEMs with and without the estimator damping. In passing, we made a few changes to the names of channels on the EST block (by modifying ESTIMATOR_PARTS.mdl) to make them a bit more readable/less redundant. These changes will affect only the H1 SR3/PR3 models.
The changes were committed to the userapps svn under revision 32426.
Oli mentioned that they will do a model restart to get these changes in on Tuesday, as long as we got the changes in before Monday.
The estimator MEDM screens haven't been updated yet, but I think Brian will get to it on Monday.
____________
This is a summary of the library part changes [see attached.pdf for screenshots of these changes in the library parts]:
SIXOSEM_T_STAGE_MASTER_W_EST.mdl
HLTS_MASTER_W_EST.mdl
Added two DQ channels to the top level: M1_ADD_P_TOTAL (512 Hz) and M1_ADD_Y_TOTAL (512 Hz)
ESTIMATOR_PARTS.mdl
Prompted by noticing on-off behavior at around 20.2 Hz in today's daily strain spectrogram, I've done some additional investigation into the source and behavior of this line:
The 20.2 Hz line, which is currently prominent in DARM, first appeared in accelerometer and microphone data from the corner station on June 9. The first appearance of this line that I found was in the PSL mics, as shown in this spectrogram. This line then appeared in DARM in the first post-vent locks a few days later. The summary of work from June 9 does not show anything obvious to me that would be the source of this new noise.
This feature also turns off and on multiple times during the day. An example from today can be seen in this spectrogram. Most corner station microphones and accelerometers exhibit this feature, but it is most pronounced visually in the PSL microphone spectrograms. I was unable to identify any other non-PEM channels that showed the same on-off behavior, but this does reveal many change points that should aid in tracking down the source. Almost every day, this line exhibits abrupt on-off features at different times of the day and for varying durations. Based on my initial review, these change points appear to be more likely during the local daytime (although not at any specific time). When the line first appeared, it was usually in the "off" state and then turned on for short periods. However, this has slowly changed, so that now the line is generally in the "on" state and turns off for brief periods.
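To make these change points easier to tabulate, one option is to trend a band-limited RMS of a PSL microphone around 20.2 Hz over a day. Below is a minimal sketch with gwpy; the channel name, band edges, and stride are my assumptions rather than what was used for the spectrograms.

```python
from gwpy.timeseries import TimeSeries

# Assumed PEM channel name; the exact PSL microphone channel may differ.
CHANNEL = "H1:PEM-CS_MIC_PSL_CENTER_DQ"
START, END = "2025-07-18 00:00", "2025-07-19 00:00"

# Fetch a day of data, band-pass around the 20.2 Hz line, and trend its RMS.
data = TimeSeries.get(CHANNEL, START, END)
band = data.bandpass(19.9, 20.5)   # narrow band around the line
blrms = band.rms(60)               # 60 s RMS trend

# Abrupt steps in this trend mark the on/off change points.
plot = blrms.plot()
plot.savefig("psl_mic_20p2Hz_blrms.png")
```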
For background, I attempted to push a new calibration on 7/3 to account for the change in the SRCL offset that we made on 6/26, but it failed due to the broadband PCAL measurement showing a larger uncertainty than we had beforehand (see 85529). Since then, we have been running with the same calibration we have had since 6/10, which has a low error (~3%), but is based on a model that we know to be incorrect. Namely, the model created and pushed on 6/10 has a small, positive spring, and we now believe that DARM has no spring to at least 10 Hz. We are especially confused because we expected the model change to be focused around the 10-30 Hz region, since this is the band where we expect significant change due to the SRCL offset, but the measurement shows large, >5%, error at 100 Hz.
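For reference, this is a sketch of where the spring enters the usual sensing-function model (following standard LIGO calibration conventions; residual and delay terms omitted):

```latex
C(f) \;\propto\; \frac{f^{2}}{f^{2} + f_{s}^{2} - i f f_{s}/Q} \cdot \frac{1}{1 + i f/f_{\mathrm{cc}}}
```

Here f_s and Q are the detuned-spring frequency and quality factor and f_cc is the coupled-cavity pole. A positive-spring model has f_s^2 > 0, and "no spring to at least 10 Hz" corresponds to f_s -> 0, which mainly changes the response below a few tens of Hz.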
I have made a series of plots of a variety of PCAL broadband measurements from different points since 6/10, comparing PCAL against both GDS CALIB STRAIN and CAL DELTA L (a rough sketch of how such a comparison can be computed follows the plot descriptions below).
Plot 1 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/11, after we pushed a new calibration modeled with a positive spring. The calibration at this point was very good; the calibration line uncertainties showed errors of 3% or less. However, this plot already shows something a bit confusing: a difference between CAL DELTA L and GDS CALIB STRAIN, where GDS CALIB STRAIN has a higher uncertainty around 70-200 Hz. We believe the application of the kappas should further reduce the uncertainty of GDS CALIB STRAIN.
Plot 2 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/26, after we changed the SRCL offset. The calibration report generated from that day indicates that the sensing function is flatter with the adjusted SRCL offset. Because the calibration still expects a spring, we were not surprised to see that the low frequency uncertainty changed.
Plot 3 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3, after we pushed a new calibration which was supposed to account for the flatter sensing function. However, we saw that the uncertainty increased at 100 Hz, which we did not expect. This measurement was run slightly early, during the "TDCF burn-in", so it may not be an accurate look at the effect of the new calibration.
Plot 4 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3, after we pushed the new calibration and had only been relocked for 10 minutes. The uncertainty was even larger than in the previous measurement. We were also very confused that CAL DELTA L changed significantly compared to plot 3. We're not sure whether the kappas were significantly different from 1, which could also cause problems in GDS CALIB STRAIN when applied.
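As referenced above, here is a rough sketch of the kind of broadband comparison these plots show, done as a simple magnitude-only ASD ratio. The channel names and GPS times are placeholders, and the real analysis properly calibrates the PCAL readback into strain.

```python
from gwpy.timeseries import TimeSeries

# Assumed channel names and placeholder GPS times; the real comparison uses the
# official broadband PCAL injections and calibrates the PCAL RX PD into strain.
PCAL = "H1:CAL-PCALY_RX_PD_OUT_DQ"
GDS = "H1:GDS-CALIB_STRAIN"
START, END = 1435000000, 1435000600  # placeholder GPS span covering an injection

pcal = TimeSeries.get(PCAL, START, END)
gds = TimeSeries.get(GDS, START, END)

# Magnitude-only comparison via the ratio of averaged ASDs over the injection band.
ratio = gds.asd(fftlength=16, overlap=8) / pcal.asd(fftlength=16, overlap=8)
plot = ratio.plot(xscale="log", yscale="log")
plot.savefig("gds_over_pcal_magnitude.png")
```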
TITLE: 07/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 10mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Unknown Lockloss 2025-07-18 23:23:49
Relocking notes:
Ran initial Alignment to get relocked quickly.
But that ran us through SDF Revert, which undid Corey's changes.
Re-offloaded SR3 Offsets
H1:SUS-SR3_M1_DITHER_P_OUTPUT was 32 and is now 0.
TITLE: 07/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
H1's been locked the entire Day shift with a lock of over 10hrs! Fairly quiet day with decent triple coincidence. There was a forecast of a Red Flag wind day, but it's not been horrible; Corner Station gusts have gotten over 30mph, but nothing worse than that so far.
Did not get to offload the SR.....SCRATCH THAT! H1 had a lockloss right at the end of the shift, so I took the opportunity to Offload the SR3 Pitch Offset (see alog 85855)....but SDF Revert took the Offset back to 32.3.
LOG:
I've taken a first look at the data that Camilla and Matt took in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=85813.
In the past, I've started modeling these data sets by assuming a fixed arm power and finding the IFO readout losses needed to fit the measured shot noise without squeezing at 2kHz. This time, inspired in part by a comment from Begum, I instead used only the known IFO readout losses and attributed the rest to mode mismatch between the IFO and OMC; this allows us to place an upper limit on the mode mismatch from the IFO to the OMC.
From the google sheet, I will include SRC losses as a known readout loss (although they are listed as sqz injection losses, I think for the IFO they are readout losses): 0.99 (SRC) * 0.995 (OFI) * 0.9993 (OM1) * 0.985 (OM3) * 0.9904 (QPD) * 0.956 (OMC) * 0.98 (QE), giving 10% known readout losses. The known injection losses (not including SRC losses) are then 0.985 (OPO) * 0.99^3 (3 SFI passes) * 0.99 (FC QPDs) * 0.99 (other HAM7 loss) * 0.99 (OFI), giving 7.2%.
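Just to make the bookkeeping above reproducible, a tiny sketch of the same arithmetic (the individual efficiencies are the numbers quoted from the google sheet):

```python
import numpy as np

# Known IFO readout efficiencies from the google sheet:
# SRC, OFI, OM1, OM3, QPD, OMC, QE
readout = np.prod([0.99, 0.995, 0.9993, 0.985, 0.9904, 0.956, 0.98])

# Known squeezer injection efficiencies (not including SRC losses):
# OPO, 3 SFI passes, FC QPDs, other HAM7 loss, OFI
injection = 0.985 * 0.99**3 * 0.99 * 0.99 * 0.99

print(f"known readout loss   = {1 - readout:.1%}")    # ~10%
print(f"known injection loss = {1 - injection:.1%}")  # ~7%
```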
Fitting the level of squeezing and anti-squeezing at 2kHz suggests an NLG of 13.2 (fairly close to Camilla's measurement of 13.4) and a total efficiency of 0.752 using the Aoki equations (treating mismatches as losses). Looking at the interactive sqz gui, the IFO to OMC mismatch reduces the measured sqz and anti-squeezing at 2kHz, but the mismatch phase only has an impact below about 400Hz.
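As a sketch of what that fit is doing, the standard lossy single-mode OPO relations (Aoki-style, with all mismatches folded into one total efficiency and phase noise neglected) map an NLG and efficiency to observed squeezing/anti-squeezing; plugging in the fitted values from above:

```python
import numpy as np

def sqz_asqz_db(nlg, eta):
    """Observed squeezing and anti-squeezing (dB) for a lossy OPO.

    Standard single-mode relations with normalized pump amplitude x, where
    NLG = 1/(1 - x)**2 and eta is the total efficiency (mismatches treated
    as losses, phase noise neglected).
    """
    x = 1 - 1 / np.sqrt(nlg)
    v_sqz = 1 - eta * 4 * x / (1 + x) ** 2
    v_asqz = 1 + eta * 4 * x / (1 - x) ** 2
    return 10 * np.log10(v_sqz), 10 * np.log10(v_asqz)

# Fitted values quoted above: NLG of 13.2 and total efficiency of 0.752.
print(sqz_asqz_db(13.2, 0.752))
```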
Using only the known readout (10%) and injection (7.2%) losses, and perfect sqz to OMC mode matching, we can get some limit on the amount of OMC to IFO mode mismatch that can be compatible with our 2kHz squeezing. A 5.1% mismatch (which would imply 355kW in the arm cavity) seems too high to be compatible with our squeezing, while a mode mismatch of 3.7% with 350kW in the arm does seem compatible if the sqz to OMC mode matching is perfect. So we can take 3.7% as an upper limit on the IFO to OMC mode mismatch that is compatible with the known squeezing losses. The data could be compatible with mode mismatches as low as 2.3% (345kW in the arms) without introducing any extra losses. Any unknown squeezer losses, like excess crystal losses, would reduce this amount. The upper limit on arm power that we'd infer from this is 350kW, but this depends sensitively on what we assume the non-quantum noise is at 2kHz. I will try to redo this estimate soon using the cross-correlation data that Elenna is working on to have more confident limits on the arm power.
These first two plots show that this model isn't well tuned at a few hundred Hz; I haven't yet tried to set the SRCL offset, the homodyne angle, or the OMC to IFO mismatch phase. At first glance it does not seem like I will be able to make this match well by adjusting the mismatch phase.
The last two plots show the squeezing level in dB, just so that we have a plot we can look at. The script to make these plots is committed here.
Lockloss toward the end of the shift, but took the opportunity to do the SR3 Pitch Offset Offload (per Oli's alog 85830).
Sheila ran me through what we should change the SR3 Pitch to once we zero/offload the SR3 Dither Offset:
Made the change above, but an SDF Revert undid the Offset change! So, I zeroed the SR3 Pit Dither Offset once again. Now we are waiting for DRMI to lock.
Attached are screenshots of the (1) ndscope showing the offload and (2) medms involved.
This is for FAMIS #26431.
Laser Status:
NPRO output power is 1.87W
AMP1 output power is 70.35W
AMP2 output power is 141.0W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 3 days, 2 hr 42 minutes
Reflected power = 23.67W
Transmitted power = 105.6W
PowerSum = 129.3W
FSS:
It has been locked for 0 days 8 hr and 10 min
TPD[V] = 0.8367V
ISS:
The diffracted power is around 3.8%
Last saturation event was 0 days 8 hours and 10 minutes ago
Possible Issues:
PMC reflected power is high
Preeti, Gaby
With the help of Ashley Patron's Eqlock script, we calculated the locklosses caused by EQs for each observing run. This is part of a study investigating the correlation between microseism and duty cycle (alog), so we chose the winter months (Nov, Dec, Jan, and Feb) of each observing run and calculated the vertical ground velocity from the z-channel, and the horizontal velocity as the quadrature sum of the x and y channels, at the time of each lockloss due to an EQ. We also did the same study for LLO (alog).
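For clarity, the horizontal velocity here is the quadrature sum of the two horizontal seismometer channels, evaluated at the lockloss time:

```latex
v_{\mathrm{horiz}} = \sqrt{v_{x}^{2} + v_{y}^{2}}
```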
Conclusion:
Fri Jul 18 10:09:26 2025 INFO: Fill completed in 9min 23secs
Gerardo confirmed a good fill curbside.
TITLE: 07/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
H1's been locked for 1.25hrs, with 4 locklosses overnight, each with decent recovery times. Microseism is trending down over the last 24hrs; the forecast is for high winds (red flag warning) in the afternoon.
The lockloss that occurred at 2025-07-18 08:00 UTC (not tagged as a glitch) was preceded by what appears to be a large kick in the yaw ASC. The first attachment shows the CSOFT Y and CHARD Y signals a few seconds before the lockloss. This is also apparent in the test mass L2 signals in the second attachment.
Of the 4 locklosses overnight, we had (1) ETMx Glitch lockloss.
The other two locklosses last night seem, by eye, to have the same behavior (2025-07-18 12:07:24 UTC and 2025-07-18 04:20:15 UTC). Within ~100 ms of the lockloss time, there is something glitchy in the DARM error signal, where the error signal drops sharply. It looks like the glitchy behavior starts in DARM IN1 slightly before ETMX L3 starts behaving weirdly, but that's hard to tell since I'm just looking at ndscopes.
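For anyone who wants to look at the same traces offline rather than in ndscope, here is a minimal gwpy sketch; the channel names are my guesses for the DARM error signal and one quadrant of the ETMX L3 drive, so adjust as needed.

```python
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

# Assumed channel names; the ETMX L3 drive could also be inspected per quadrant.
channels = [
    "H1:LSC-DARM_IN1_DQ",
    "H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ",
]
t_ll = to_gps("2025-07-18 12:07:24")  # one of the lockloss times quoted above

# Grab a couple of seconds around the lockloss and stack the traces.
data = TimeSeriesDict.get(channels, t_ll - 2, t_ll + 0.5)
plot = Plot(*(data[c] for c in channels), separate=True, sharex=True)
plot.savefig("lockloss_darm_vs_etmx_l3.png")
```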
TITLE: 07/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Relocking and in MOVE_SPOTS. Relocking after the first lockloss of my shift was hands off, and this relock has been hands off this time so far too, so it's been easy. I did not get a chance to offload the SR3 dither offset onto the sliders (85830), but that can easily be done later, especially since it's already been like that for over 5 years!
We had GRB-Short E581624 come in while we were Observing earlier
LOG:
23:30UTC Observing and have been Locked for 3 hours
00:14 GRB-Short E581624
02:02 Lockloss
03:34 NOMINAL_LOW_NOISE
03:36 Observing
04:20 Lockloss
Lockloss at 2025-07-18 04:20 UTC after 46 minutes locked
Lockloss at 2025-07-18 02:02 UTC after 5.5 hours locked
03:36 UTC Observing
During relocking I reloaded the h1asc model filters so that Elenna's new filters could be added (diffs).
Jennie, Rahul
On Tuesday Rahul and I took the measurements for the horizontal coupling in the ISS array currently on the optical table.
The QPD read 9500e-7 W (about 0.95 mW).
The X position was 5.26 V, the Y position was -4.98 V.
PD | DC Voltage [mV] pk-pk | AC Voltage [mV] pk-pk |
1 | 600 | 420 |
2 | 600 | 380 |
3 | 600 | 380 |
4 | 600 | 420 |
5 | 800 | 540 |
6 | 800 | 500 |
7 | 600 | 540 |
8 | 800 | 540 |
After thinking about this data I realise we need to retake it, as we should record the mean value for the DC-coupled measurements. This was with a 78V signal applied from the PZT driver, an input dither signal of 2 Vpp at 100Hz on the oscilloscope, and I think 150 mA pump current on the laser.
Rahul, Jennie W
Yesterday we went back into the lab and retook the DC and AC measurements with the horizontal dither on, this time measuring using the 'mean' setting and without changing the overall input pointing from what it was for the above measurement.
PD | DC Voltage [V] mean | AC Voltage [V] mean |
1 | -4.08 | -0.172 |
2 | -3.81 | 0.0289 |
3 | -3.46 | 0.159 |
4 | -3.71 | 0.17 |
5 | -3.57 | -0.0161 |
6 | -3.5 | 0.00453 |
7 | -2.91 | 0.187 |
8 | -3.36 | 0.0912 |
QPD direction | Mean Voltage [V] | Pk-Pk Voltage [V] |
X | 5.28 | 2.20 |
Y | -4.98 | 0.8 |
QPD sum is roughly 5V.
Next time we need to plug in the second axis of the PZT driver so as to take the dither coupling measurement in the vertical direction.
Lockloss at 2025-07-17 04:11 UTC due to a power issue with ETMX and TMSX. Currently in contact with Dave and Fil is on his way in.
ETMX M0 and R0 watchdogs tripped
ETMX and TMSX OSEMs are in FAULT
ETMX ESD off
ETMX HWWD notified that it would trip soon, so SEI_ETMX was preemptively put into ISI_OFFLINE_HEPI_ON to keep ISI from getting messed up when it trips
H1SUSETMX ADC channels zeroed out at 21:11:39. SWWDs did not trip because there is no RMS on the OSEM signals, but the HWWD completed its 20 minute countdown and powered down the three ISI coil drivers at 21:32. This indicates ETMX's top stage OSEMs have lost power.
I've opened WP12692 to cover Fil going to EX to investigate.
During the recovery, the +24VDC power supply for the SUS IO Chassis was glitched, which stopped all the h1susex and h1susauxex models. To recover, I first did a straightforward reboot of h1susauxex (no Dolphin); it came back with no issues.
Rebooting h1susex was more involved; recall that the EX Dolphin switch was damaged by the 06 April 2025 power outage and has no network control. The procedure I used to reboot h1susex was:
When h1susex came back, I verified all the IO Chassis cards were present (they were all there).
I unpaused the SEI and ISC IPC by writing a 0 to their IPC_PAUSE channels.
The HWWD came back in nominal state.
I reset the SUS SWWD DACKILLs and unbypassed the SEI SWWD.
DIAG_RESET to clear all the IPC errors (it did so) and clear DAQ CRCs (they cleared).
Handed systems over to control room (Oli and Ryan S).
From Fil:
-18VDC Power supply had failed and was replaced.
Power supply is in rack VDD-2, location U25-U28, right-hand supply, label [SUS-C1 C2]
old supply (removed) S1202024
new supply (installed) S1300288
Last night's HWWD sequence is shown below. Reminder that at +40mins the SUS part of the HWWD trips, which sets bit2 of the STAT. This opens internal relay switches, but since we don't route the SUS drives through the HWWD unit (too noisy) this has no effect on operations. The delay between 22:52 and 23:20 is because h1iopsusex was down between 23:01 and 23:20.
Fan motor seized on failed power supply.
Wed16Jul2025
LOC TIME HOSTNAME MODEL/REBOOT
23:15:13 h1susauxex h1iopsusauxex
23:15:26 h1susauxex h1susauxex
23:20:21 h1susex h1iopsusex
23:20:34 h1susex h1susetmx
23:20:47 h1susex h1sustmsx
23:21:00 h1susex h1susetmxpi