First I tried taking H1:IOP-LSC0_RLF_FREQ_OFS down towards zero but we lost FC lock at H1:IOP-LSC0_RLF_FREQ_OFS = -8. FC would not re-lock until I brought it back to -15.
Wanted to scan the FC detuning from -15 to -60, but the command didn't seem to work with negative steps.
Then, 5 s after moving the FC detuning from -15 to -60, the IFO lost lock (1437146099). The lockloss appears unrelated to the FC detuning change, as the FC lost lock after ETMX_L3 started to become unstable (plot). There was slightly increased noise at low frequency, but not enough scatter to cause a lockloss.
Ideally I would have run ezcastep H1:IOP-LSC0_RLF_FREQ_OFS -s 30 '+1,45' to get 30-second steps from -60 to -15. We've been at -44 before with no issues (83526), but I wanted to make the low-frequency noise worse so we could fit for the best detuning.
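Since ezcastep wouldn't take the negative steps, a plain loop over the channel would do the same scan. Below is a minimal sketch assuming the python ezca library used by guardian is available in the control-room environment; the step size and dwell time mirror the command above, and the Ezca constructor/prefix handling is an assumption, not a tested recipe.

# Minimal sketch of the intended detuning scan, assuming the 'ezca' EPICS wrapper.
import time
from ezca import Ezca

ezca = Ezca(ifo='H1')                      # assumed to prepend the 'H1:' prefix
CHAN = 'IOP-LSC0_RLF_FREQ_OFS'             # FC detuning offset channel from this entry

start, stop, step, dwell = -60, -15, 1, 30   # +1 count steps, 30 s per step
value = start
while value <= stop:
    ezca[CHAN] = value                     # write the new detuning offset
    time.sleep(dwell)                      # dwell at each step before moving on
    value += step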
I followed the pydarm deployment instructions here to update the LHO pydarm install. This is the 20250721.0 tag of pydarm, which includes a bug fix for high-frequency roam line handling, letting us indicate the correct reference model in the pydarm_H1.ini file. This is not the default cds conda environment, but it is the default you get when typing pydarm at a command line, or it can be invoked specifically by running "conda activate /ligo/groups/cal/conda/pydarm".
TITLE: 07/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 113Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Arrived to a Dust Alarm in the PSL within the last hour (there have been winds around the corner station for the last 2 hrs, but only about 15 mph).
For H1 over the last 12 hrs we have had 4 locks (the current one is 90 min), where 2 (of 3) locklosses were ETMx glitches, and it looks like all reacquisition was automatic overnight.
H1 is scheduled for Commissioning from 15-19utc (8-noonPDT).
Notes From Ops Shift Check Sheet
1) Dust Monitor Check Notifications for LVEA5 & LAB2
Ran the "check_dust_monitors_are_working" script the last two mornings and received notifications for the following:
2) Access System "Flashing Doors"
3) LHO Control Room Screenshots & FOMs
TITLE: 07/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Started the Shift unlocked due to earthquake.
Got back to observing for 1 minute before a lockloss.
Then relocked again for an hour and 17 minutes before another Unknown Lockloss from NLN.
Relocking now, currently at LOW_NOISE_COIL_DRIVERS; we should be locked soon.
TITLE: 07/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Went up to the Roof to check for fires or smoke. Took some pictures in all directions.
Inherited an unlocked H1 after a large earthquake from Alaska.
Did an Initial Alignment which had an SRM WD trip towards the end, so I reran it without any issues.
We reached NOMINAL_LOW_NOISE at 01:25 UTC.
Observing at 01:27 UTC.
And lost lock to an Unknown Lockloss at 01:29 UTC.
TITLE: 07/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Flurries of earthquakes from Japan. Had another HAM6 High Voltage power supply failure. Able to finally finish the change for removing the SR3 Dither Pitch Offset. Shift ended with a M6.2 Alaska earthquake which took H1 down.
LOG:
(Corey, Elenna [remote], Tony)
Recent Backstory Alogs:
Today, while locking H1, noticed that we were stuck in PREP_DC_READOUT_TRANSITION. DIAG_MAIN was mentioning an OMC issue. The OMC_LOCK guardian was in a loop of trying to sweep the PZT to FIND_CARRIER, but Elenna noticed that the PZT was not sweeping. Then she remembered the previous issues!
(I was on the wrong trail, because I had assumed the SR3 change I made somehow was the cause of the issue---it was NOT.)
We still had the OMC_LOCK in a non-ISC_LOCK-managed AUTO state and we were not totally sure of how to have ISC_LOCK remanage OMC_LOCK. Tony found a 2015 alog from Patrick which we used to get OMC_LOCK managed again. H1 continued on and has been in OBSERVING.
Added a new comment for today's HAM6 Power Supply Issue to the already-opened FRS Ticket 34433.
OPS NOTE: If you are having trouble locking the OMC (no light on the HAM6 OMC Trans camera, stuck at PREP_DC_READOUT_TRANSITION, and no evidence of the OMC PZT2 working), it might be due to this power supply!
Finally had the opportunity (due to an H1 lockloss during my shift) to address the SR3 Dither Pitch Offset Oli noted last week (alog 85830), which I first looked at during a lockloss at the end of my shift Friday evening (alog 85855)---but my changes to the Dither Pitch Offset did not hold because the H1SUSSR3 SDF needed its safe.snap changed/accepted.
It's been ages since I've updated a safe.snap, so pardon the less elegant steps I took to update the SR3 SDF here---Basically I
Both updates are screenshotted separately! Ha. The new SR3 OPTICALIGN_P_OFFSET had already been at its new offset (457.9..., which used to be 445.8) since Friday (and aligned). So now that the SR3 pointing changed with the Dither Offset going to zero, I ran a new alignment.
Currently, H1 has locked DRMI, and I'm sure the final step here would be updating the observe.snap with the changes noted above.
ADDENDUM: Currently stuck at PREP_DC_READOUT_TRANSITION, where the OMC can't lock. I'm wondering if this is due to the SR3 change, where the SR3 Top Mass had been at one spot since Friday evening, and now, with the Dither Offset zeroed, the SR3 Top Mass is back to the pointing we had for the last few weeks up until Friday night (see attached ndscope trend over the weekend).
Sun Jul 20 10:08:56 2025 INFO: Fill completed in 8min 52secs
TITLE: 07/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Whoa, big earthquake overnight as RyanC notes in his wake-up calls. Crazy 20hrs with high winds first which then died and tagged in the big EQ! Seismic motion elevated 8hrs ago for the EQ, and finally calmed down within the last hour. H1's been locked for the last 2+hrs. Nice to see that the violins weren't rung up after the shaky night!
11:03 UTC guardian called again after not being able to find IR.
11:13 - 11:49 UTC I decided to run an initial alignment. Xarm IR struggled, so I trended the IMs and had to move IM2 a bunch in pitch to return it to its previous, pre-watchdog-trip position.
11:50 UTC Back to regular locking
We found IR and DRMI locked after less than 30 seconds.
12:01 GRD called; we got hit by two large, semi-close earthquakes from the eastern Russian peninsula, a 6.7 then a 7.4. A few ISIs and suspensions tripped; it'll be a few hours till the ground motion comes down enough to relock. We were going through DRMI_ASC at the time of the earthquakes.
TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Inherited a Locked IFO.
Dropped from Observing at 3:35:45 UTC for SQZ_FC locking Issues.
I followed the instructions for FC troubleshooting found here.
We went back into Observing at 03:47:32 UTC.
Wind started to pick up in speed.
Lockloss potentially from an Alaskan 4.7M Earthquake.
Locking Notes:
Initial Alignment was run and completed.
Prompted by me noticing on-off behaviors in the daily strain spectrogram for today at around 20.2 Hz, I've done some additional investigations into the source and behavior of this line:
The 20.2 Hz line, which is currently prominent in DARM, first appeared in accelerometer and microphone data from the corner station on June 9. The first appearance of this line that I found was in the PSL mics, as shown in this spectrogram. This line then appeared in DARM in the first post-vent locks a few days later. The summary of work from June 9 does not show anything obvious to me that would be the source of this new noise.
This feature also turns off and on multiple times during the day. An example from today can be seen in this spectrogram. Most corner station microphones and accelerometers exhibit this feature, but it is most pronounced visually in the PSL microphone spectrograms. I was unable to identify any other non-PEM channels that showed the same on-off behavior, but this does reveal many change points that should aid in tracking down the source. Almost every day, this line exhibits abrupt on-off features at different times of the day and for varying durations. Based on my initial review, these change points appear to be more likely during the local daytime (although not at any specific time). When the line first appeared, it was usually in the "off" state and then turned on for short periods. However, this has slowly changed, so that now the line is generally in the "on" state and turns off for brief periods.
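For anyone wanting to reproduce this kind of look at the line, a short gwpy sketch along these lines works; the channel name and times below are placeholders, not necessarily the ones used for the plots above.

from gwpy.timeseries import TimeSeries

# Placeholder channel/times; pick a CS microphone and a stretch with known on/off behavior.
data = TimeSeries.get('H1:PEM-CS_MIC_PSL_CENTER_DQ', 'Jul 21 2025 15:00', 'Jul 21 2025 19:00')
spec = data.spectrogram(stride=60, fftlength=30, overlap=15) ** (1/2.)   # ASD spectrogram
plot = spec.crop_frequencies(15, 25).plot(norm='log')                    # zoom around 20.2 Hz
plot.savefig('psl_mic_20Hz_spectrogram.png')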
Looking into past alogs, I noticed that I reported this same issue last summer in alog 79948. Additional discussion about this line can be found in the detchar-requests repository (requires authentication). In this case, the line appeared in late spring and disappeared in early autumn of 2024. No source was identified before the line disappeared.
Going back further, I also see the same feature appearing in late spring and disappearing in early autumn of 2023. The presence of the line is hence correlated with the outside temperature, likely related to some aspect of the air conditioning system that is only needed when it is (roughly) hotter outside than inside. This also means that we can expect this line to remain present in the data until autumn unless mitigation measures are taken.
I looked briefly into the 20 Hz noise without much success. Comparing the floor accelerometers, the noise is louder in the EBAY than in the LVEA (although the signal from the EBAY accelerometer hasn't looked good since the vent). The next closest is HAM1, followed by BS. So the noise is around the -X-Y corner of the LVEA, likely in the EBAY, Transition Area, or Optics Lab, because HAM6 sees less motion than HAM1 and the EBAY sees the most.
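As a rough illustration of that comparison, something like the following would pull the band level around 20 Hz from several floor accelerometers; the channel names and times here are guesses/placeholders, not necessarily the sensors actually used.

from gwpy.timeseries import TimeSeriesDict

# Placeholder channel names; compare the ~20 Hz band level across floor accelerometers.
channels = [
    'H1:PEM-CS_ACC_EBAY_FLOOR_Z_DQ',
    'H1:PEM-CS_ACC_LVEAFLOOR_HAM1_Z_DQ',
    'H1:PEM-CS_ACC_LVEAFLOOR_BS_Z_DQ',
    'H1:PEM-CS_ACC_LVEAFLOOR_HAM6_Z_DQ',
]
data = TimeSeriesDict.get(channels, 'Jul 21 2025 16:00', 'Jul 21 2025 16:30')
for name, ts in data.items():
    asd = ts.asd(fftlength=60, overlap=30)
    print(name, asd.crop(19.5, 20.5).max())   # peak level in a band around 20 Hz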
Sheila, Camilla
We ran a couple of squeezing angle scans to check the settings of the ADF servo.
One thing that we realized is that the ADF Q demod signal is divided by H1:SQZ-ADF_OMC_TRANS_Q_NORM rather than multiplied, which is what we had thought. We changed the coefficient from 0.18 to 5.8. The first png attachment shows that this transforms the blue ellipse into the orange one. It would be a bit better if we first adjusted the demod phase to maximize the Q signal, so that the ellipse would be aligned along the axis and the rescaled version would be more like a circle. However, you can see in the right-side plot that this gives us a reasonably linear readback of sqz angle as we change the RF6 demod angle (which is actually cabled up to RF3 phase) around 150 degrees, where our best squeezing is.
Camilla turned the servo back on in sqzparams.
For future reference, a slightly better way to do this would be to move the demod phase to maximize Q, do a scan and set H1:SQZ-ADF_OMC_TRANS_Q_NORM to the ratio (max of Q)/ (max of I). Then you can do a smaller scan around the point with the best squeezing, and in sqzparams set sqz_ang_adjust_ang to the readback angle that you think is best.
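A minimal sketch of that bookkeeping (channel name from this entry; the scan-array handling and ezca usage details are assumptions, not the actual SQZ guardian code):

import numpy as np
from ezca import Ezca

ezca = Ezca(ifo='H1')   # assumed to prepend the 'H1:' prefix

def set_adf_q_norm(i_scan, q_scan):
    """Set Q_NORM from arrays of ADF I/Q demod readbacks recorded during a sqz
    angle scan, taken after the demod phase has been set to maximize Q."""
    ratio = np.max(np.abs(q_scan)) / np.max(np.abs(i_scan))   # (max of Q)/(max of I)
    ezca['SQZ-ADF_OMC_TRANS_Q_NORM'] = ratio                  # Q readback is divided by this
    return ratio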
This didn't work at the start of today's lock as the ADF frequency had been left near 10 kHz. Once I put the ADF back to 322 Hz, it seemed to work fine.
For operators, this means that if the squeezing looks bad, running SCAN_SQZANG_FDS alone won't change the SQZ angle. You would need to:
If the servo is running away, try the above instructions; if that doesn't work, the servo can be turned off by setting use_sqz_angle_adjust = False in sqz/h1/guardian/sqzparams.py. Please alog and tag SQZ.
Since we've had this servo running, the range has been higher and sqz more stable, see attached.