Finally had the opportunity (due to an H1 lockloss during my shift) to address the SR3 Dither Pitch Offset Oli noted last week (alog 85830), which I first looked at during a lockloss at the end of my shift Friday evening (alog 85855). My changes to the Dither Pitch Offset did not hold then because the H1SUSSR3 SDF needed its safe.snap changed/accepted.
It's been ages since I've updated a safe.snap, so pardon the less elegant steps I took to update the SR3 SDF here. Basically I:
Both updates are screenshotted separately! Ha. The new SR3 OPTICALIGN_P_OFFSET had already been at its new offset (457.9, which used to be 445.8) since Friday (and aligned). So now that the SR3 pointing changed with the Dither Offset going to zero, I ran a new alignment.
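For reference, a minimal sketch of the channel writes involved (not the procedure actually used: the dither offset was zeroed and the new bias accepted through the SDF/MEDM screens). The offset channel names below are my assumptions, patterned after channels quoted elsewhere in this log:

    from epics import caget, caput

    # Hedged sketch: zero the SR3 dither pitch offset and set the new pitch bias.
    # Channel names are assumptions; confirm on the SUS SR3 MEDM screen first.
    print(caget('H1:SUS-SR3_M1_DITHER_P_OFFSET'))       # assumed channel name
    caput('H1:SUS-SR3_M1_DITHER_P_OFFSET', 0.0)         # zero the Dither Pitch Offset
    caput('H1:SUS-SR3_M1_OPTICALIGN_P_OFFSET', 457.9)   # new Pitch bias (was 445.8)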
Currently, H1 has locked DRMI, and I'm sure the final step here would be updating the observe.snap with the changes noted above.
ADDENDUM: Currently stuck at PREP_DC_READOUT_TRANSITION, where the OMC can't lock. I'm wondering if this is due to the SR3 change: the SR3 Top Mass had been at one spot since Friday evening, and now, with the Dither Offset zeroed, the SR3 Top Mass is back to the pointing we had for the last few weeks up until Friday night. (See attached ndscope trend over the weekend.)
Sun Jul 20 10:08:56 2025 INFO: Fill completed in 8min 52secs
1) Dust Monitor Check Notifications for LVEA5 & LAB2
Ran the "check_dust_monitors_are_working" script the last two mornings and received notifications for the following:
2) Access System "Flashing Doors"
3) LHO Control Room Screenshots & FOMs
TITLE: 07/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Whoa, big earthquake overnight, as RyanC notes in his wake-up calls. Crazy 20hrs: high winds first, which then died down and tagged in the big EQ! Seismic motion was elevated 8hrs ago from the EQ and finally calmed down within the last hour. H1's been locked for the last 2+hrs. Nice to see that the violins weren't rung up after the shaky night!
11:03 UTC guardian called again after not being able to find IR.
11:13 - 11:49 UTC I decided to run an initial alignment. Xarm IR struggled, so I trended the IMs and had to move IM2 a bunch in pitch to return it to its previous, pre-watchdog-trip position.
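(I did the trending in ndscope, but as an illustration of the step, a short sketch with gwpy; the IM2 pitch channel name and the times are assumptions:)

    from gwpy.timeseries import TimeSeries

    # Fetch a stretch of IM2 pitch from before the watchdog trip and note the
    # value to steer back to. Channel name and times are illustrative only.
    pre_trip = TimeSeries.get('H1:SUS-IM2_M1_DAMP_P_INMON',
                              '2025-07-20 06:00', '2025-07-20 08:00')
    print('IM2 pitch before the trip:', pre_trip.mean())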
11:50 UTC Back to regular locking
We found IR and DRMI locked after less than 30 seconds.
12:01 GRD called; we got hit by 2 large, semi-close earthquakes from the eastern Russian peninsula, a 6.7 then a 7.4. A few ISIs and suspensions tripped; it'll be a few hours till the ground motion comes down enough to relock. We were going through DRMI_ASC at the time of the earthquakes.
TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Inherited a Locked IFO.
Dropped from Observing at 3:35:45 UTC for SQZ_FC locking Issues.
I followed the instructions for FC troubleshooting found here.
We went back into Observing at 3:47:32 UTC.
Wind started to pick up.
Lockloss potentially from an Alaskan 4.7M Earthquake.
Locking Notes:
Initial Alignment was run and completed.
TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 23mph Gusts, 17mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
I have attached a series of pictures from the LIGO Hanford Corner Station roof.
Conditions are not smoky at all here. No fires or smoke can be seen.
Came back down from the roof and immediately heard these from Verbals:
GRB-Short E582309 02:16:06 UTC
SuperEvent S250720J 02:16:58
GRB-Short E582309 02:17:09
SuperEvent S250720J 02:21:27
I'm not sure why there are duplicates like this.
TITLE: 07/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 9mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
H1 has been locked for over 12 Hours!
A stand-down alerts failure happened earlier. Ryan pointed out these instructions to me: https://cdswiki.ligo-wa.caltech.edu/wiki/Ryan%20Crouch?highlight=%28Ryan%29%7C%28Crouch%29
We were able to get it up and running again fairly quickly.
TITLE: 07/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Another nice DAY shift with H1 being locked for more than the entire DAY shift (over 13hrs!).
Since H1 was locked the entire shift, I did not get another chance at removing the SR3 Pit OFFSET. So it is still there, and SR3 is at the new Pit Bias I put in yesterday (and it was aligned to this last night). When we want to fix this, we'll need to take the SR3 Pit Offset to 0.0 and then run an alignment.
Attempted the Saturday Calibration, but it was most likely not successful (I ran a 2nd Calibration at the end of the shift, which was SUCCESSFUL!).
LOG:
NOTE: Saw that L1 had a lockloss, so took the opportunity to run my 2nd Calibration of the day (WITHOUT any CTRL-C's!!!)
Measurement NOTES:
Attached is a screenshot of the Calibration Monitor + pdf of the Pydarm Report (but note I only ran "pydarm report" vs. "pydarm report --skip-gds").
Summary:
Measurement NOTES:
Attached is a screenshot of the Calibration Monitor; unfortunately, I did not get to run a PyDarm Report. I'm assuming this is due to my CTRL-C from the headless measurement noted above, because at the end of this measurement there was an SDF Diff as well! Luckily, Tony was here and he was able to take care of it. The SDF was for PCAL Y (medm is SiteMap/CAL EY), and it was related to the In-Loop (OFS) PD (H1:CAL-PCALY_OFS_PD_OUT16) being railed at -7.8. Tony fixed this by toggling the Loop Enable Button (H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE) to Off and then On. This is all mentioned at the top of the PCal Known Issues wiki.
Once the SDF was cleared, H1 was taken back to Observing, but there was discussion about trying to run the calibration again since L1 was still relocking. Opted not to drop out of Observing for this since we had already been out of Observing for over 30min.
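A minimal sketch of the toggle Tony performed (channel names as quoted above; in practice this was done from the CAL EY MEDM screen, and the PCal Known Issues wiki should be consulted first):

    import time
    from epics import caget, caput

    # Toggle the PCAL Y optical follower servo enable Off and then On to clear the
    # railed OFS PD. Assumes a standard Off/On enum (0/1) for the enable switch.
    caput('H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE', 0)   # Off
    time.sleep(1)
    caput('H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE', 1)   # On
    print(caget('H1:CAL-PCALY_OFS_PD_OUT16'))             # should come off the -7.8 rail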
Smooth sailing thus far with H1 locked for almost 9hrs (H1 even rode through two M5+ earthquakes off the Guatemalan coast!). Delayed the Saturday Calibration to allow L1 to thermalize after their recent lockloss, and will start the calibration in about 30min.
Sat Jul 19 10:09:44 2025 INFO: Fill completed in 9min 40secs
TITLE: 07/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1's been locked almost 4.5hrs with a decent night; microseism continues to drop and is below the 50th percentile and winds have been calm the last 7hrs.
TITLE: 07/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
I inherited an unlocked IFO. After a few DRMI locking attempts, I was able to relock the IFO.
But the changes to SR3 H1:SUS-SR3_M1_DITHER_P_OUTPUT were reverted by SDF Revert after a DRMI lockloss.
H1 was locked at NLN at 2:26:39 UTC
And Observing at 2:18:19 UTC.
SQZ_manager Dropped from FREQ_DEP_SQZ and took H1 into commissioning @ 2:28:38 UTC.
SQZ_FC is Stuck between GR_SUS_LOCKING and Down.
H1:SUS-FC2_M1_OPTICALIGN_P_OFFSET & its Yaw counterpart were moved to relock the FC.
We got back to Observing at 3:00:01 UTC
LOG:
No Log.
Edgard, Ivey, Brian.
Relevant FRS ticket : 32526
We made modifications to the HLTS_W_EST and estimator library parts to add DQ channels to monitor the total drive request to the M1 OSEMs with and without the estimator damping. In passing, we made a few changes to the names of channels on the EST block (by modifying ESTIMATOR_PARTS.mdl) to make them a bit more readable/less redundant. These changes will affect only the H1 SR3/PR3 models.
The changes were committed to the userapps svn under revision 32426.
Oli mentioned that they will do a model restart to get these changes in on Tuesday, as long as we got the changes in before Monday.
The estimator MEDM screens haven't been updated yet, but I think Brian will get to it on Monday.
____________
This is a summary of the library part changes [see attached.pdf for screenshots of these changes in the library parts]:
SIXOSEM_T_STAGE_MASTER_W_EST.mdl
HLTS_MASTER_W_EST.mdl
Added two DQ channels to the top level: M1_ADD_P_TOTAL @ 512 and M1_ADD_Y_TOTAL @ 512
ESTIMATOR_PARTS.mdl
Prompted by me noticing on-off behaviors in the daily strain spectrogram for today at around 20.2 Hz, I've done some additional investigations into the source and behavior of this line:
The 20.2 Hz line, which is currently prominent in DARM, first appeared in accelerometer and microphone data from the corner station on June 9. The first appearance of this line that I found was in the PSL mics, as shown in this spectrogram. This line then appeared in DARM in the first post-vent locks a few days later. The summary of work from June 9 does not show anything obvious to me that would be the source of this new noise.
This feature also turns off and on multiple times during the day. An example from today can be seen in this spectrogram. Most corner station microphones and accelerometers exhibit this feature, but it is most pronounced visually in the PSL microphone spectrograms. I was unable to identify any other non-PEM channels that showed the same on-off behavior, but this does reveal many change points that should aid in tracking down the source. Almost every day, this line exhibits abrupt on-off features at different times of the day and for varying durations. Based on my initial review, these change points appear to be more likely during the local daytime (although not at any specific time). When the line first appeared, it was usually in the "off" state and then turned on for short periods. However, this has slowly changed, so that now the line is generally in the "on" state and turns off for brief periods.
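For anyone wanting to reproduce these plots, a short sketch of the kind of spectrogram I looked at (gwpy; the PSL microphone channel name and the times are assumptions, adjust to taste):

    from gwpy.timeseries import TimeSeries

    # Spectrogram of a PSL microphone zoomed around 20.2 Hz to look for the
    # on/off switching. Channel name and time window are illustrative only.
    mic = TimeSeries.get('H1:PEM-CS_MIC_PSL_CENTER_DQ',
                         '2025-07-20 18:00', '2025-07-20 22:00')
    spec = mic.spectrogram(60, fftlength=10, overlap=5) ** (1/2.)
    plot = spec.crop_frequencies(15, 25).plot(norm='log')
    plot.show()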
Looking into past alogs, I noticed that I reported this same issue last summer in alog 79948. Additional discussion about this line can be found in the detchar-requests repository (requires authentication). In this case, the line appeared in late spring and disappeared in early autumn of 2024. No source was identified before the line disappeared.
Going back further, I also see the same feature appearing in late spring and disappearing in early autumn of 2023. The presence of the line is hence correlated with the outside temperature, likely related to some aspect of the air conditioning system that is only needed when it is (roughly) hotter outside than inside. This also means that we can expect this line to remain present in the data until autumn unless mitigation measures are taken.
I looked briefly into the 20 Hz Noise without much success. Comparing the floor accelerometers, the noise is louder in the EBAY than the LVEA (although the signal of the EBAY accelerometer doesn't look good since the vent). The next closest is HAM1 followed by BS. So the noise is around the -X-Y corner of the LVEA, likely in the EBAY, Transition Area or Optics Lab because HAM6 sees less motion than HAM1 and EBAY sees the most.
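A sketch of that comparison, assuming gwpy; the floor accelerometer channel names and times here are my guesses for illustration, and the actual sensors compared may differ:

    from gwpy.timeseries import TimeSeriesDict

    # Compare the 20.2 Hz peak height in ASDs of corner-station floor
    # accelerometers. Channel names and times are assumptions; substitute the
    # sensors of interest.
    chans = ['H1:PEM-CS_ACC_EBAY_FLOOR_Z_DQ',
             'H1:PEM-CS_ACC_LVEAFLOOR_HAM1_Z_DQ',
             'H1:PEM-CS_ACC_LVEAFLOOR_BS_Z_DQ']
    data = TimeSeriesDict.get(chans, '2025-07-20 18:00', '2025-07-20 19:00')
    for name, ts in data.items():
        peak = ts.asd(fftlength=100, overlap=50).crop(20.0, 20.4).max()
        print(name, peak)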
For background, I attempted to push a new calibration on 7/3 to account for the change in the SRCL offset that we made on 6/26, but it failed due to the broadband PCAL measurement showing a larger uncertainty than we had beforehand (see 85529). Since then, we have been running with the same calibration we have had since 6/10, which has a low error (~3%) but is based on a model that we know to be incorrect. Namely, the model created and pushed on 6/10 has a small, positive spring, and we now believe that DARM has no spring down to at least 10 Hz. We are especially confused because we expected the model change to be focused around the 10-30 Hz region, since this is the band where we expect significant change due to the SRCL offset, but the measurement shows large, >5%, error at 100 Hz.
I have made a series of plots comparing a variety of PCAL broadband measurements from different points since 6/10, measuring PCAL with GDS CALIB STRAIN and CAL DELTA L.
Plot 1 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/11, after we pushed a new calibration modeled with a positive spring. The calibration at this point was very good; the calibration line uncertainties showed error of 3% or less. However, this plot already shows something a bit confusing: a difference between CAL DELTA L and GDS CALIB STRAIN, where GDS CALIB STRAIN has higher uncertainty around 70-200 Hz. We believe the application of the kappas should further reduce the uncertainty of GDS CALIB STRAIN.
Plot 2 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/26, after we changed the SRCL offset. The calibration report generated from that day indicates that the sensing function is flatter with the adjusted SRCL offset. Because the calibration still expects a spring, we were not surprised to see the low-frequency uncertainty change.
Plot 3 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3, after we pushed a new calibration which was supposed to account for the flatter sensing function. However, we saw that the uncertainty increased at 100 Hz, which we did not expect. This measurement was run slightly early, during the "TDCF burn-in", so it may not have been an accurate look at the effect of the new calibration.
Plot 4 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3, after we pushed the new calibration and had only been relocked for 10 minutes. The uncertainty was even larger than in the previous measurement. We were also very confused that CAL DELTA L changed significantly compared to plot 3. We're not sure if the kappas were significantly different from 1, which could also cause problems in GDS CALIB STRAIN when applied.
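For reference, a sketch of how such a ratio plot can be produced (assuming gwpy; channel names and the time window are illustrative, and the real comparison was done with our standard PCAL broadband analysis tools, which handle the calibration of the PCAL channel properly):

    from gwpy.timeseries import TimeSeries

    # Relative magnitude of GDS CALIB STRAIN against the PCAL RX PD during a
    # broadband injection, via a ratio of ASDs (same fftlength so the frequency
    # bins line up). This only gives the shape of the comparison, not a
    # calibrated uncertainty.
    start, end = '2025-07-03 19:00', '2025-07-03 19:10'
    pcal = TimeSeries.get('H1:CAL-PCALY_RX_PD_OUT_DQ', start, end)
    gds = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)

    ratio = gds.asd(fftlength=10, overlap=5) / pcal.asd(fftlength=10, overlap=5)
    plot = ratio.crop(10, 400).plot(xscale='log')
    plot.show()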