TITLE: 06/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 122Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Lockloss @ 20:52 UTC after just under 15 hrs locked - link to lockloss tool
No obvious cause.
Ran an initial alignment doing PRC align by hand, then main locking was fully automatic.
Once at low noise and before observing, I noticed SQZ looked poor and the SQZ BLRMS were worse than at the start of the last lock stretch. I cycled SQZ_MANAGER through 'SCAN_SQZANG_FDS' to try to find a better SQZ angle, but the Guardian wasn't able to find a spot where both the BLRMS and the BNS range looked good, so I manually searched around at different angles and eventually settled on 139 with a "cleaned" range of 153Mpc. Started observing at 22:34 UTC.
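For reference, a minimal sketch of that kind of manual angle search, stepping the angle and watching a BLRMS readback. Both PV names, the scan range, and the settle time are assumptions/placeholders on my part, not verified H1 channels, so treat this as pseudocode for the procedure rather than a ready-to-run script.

```python
# Hedged sketch of a manual SQZ angle scan: step the angle, let the BLRMS
# settle, and keep track of the best-looking point. PV names are ASSUMED.
import time
from epics import caget, caput

ANGLE_PV = "H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG"  # assumed SQZ angle PV
BLRMS_PV = "H1:SQZ-HF_BLRMS_OUT"                 # hypothetical BLRMS readback

best_angle, best_blrms = None, float("inf")
for angle in range(120, 161, 5):        # coarse scan around the old setting
    caput(ANGLE_PV, angle)
    time.sleep(30)                      # let squeezing and the BLRMS settle
    blrms = caget(BLRMS_PV)
    print(f"angle={angle:3d}  BLRMS={blrms:.3g}")
    if blrms < best_blrms:
        best_angle, best_blrms = angle, blrms

print("best angle found:", best_angle)
caput(ANGLE_PV, best_angle)             # 139 was the winner in this lock
```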
State of H1: Observing at 151Mpc
H1 has now been locked and observing for 13 hours. Earthquake rolled through a couple hours ago, but otherwise a quiet morning.
Sun Jun 15 10:12:39 2025 INFO: Fill completed in 12min 35secs
FAMIS 26426
Laser Status:
NPRO output power is 1.852W
AMP1 output power is 70.39W
AMP2 output power is 140.6W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 27 days, 1 hr 38 minutes
Reflected power = 23.08W
Transmitted power = 105.6W
PowerSum = 128.7W
FSS:
It has been locked for 0 days 11 hr and 32 min
TPD[V] = 0.8236V
ISS:
The diffracted power is around 4.0%
Last saturation event was 0 days 12 hours and 51 minutes ago
Possible Issues:
PMC reflected power is high
TITLE: 06/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked and observing for almost 9 hours and range looks good.
TITLE: 06/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: We stayed locked for most of the shift, ~6.75 hours. I ran a coherence measurement; there's a lot of coherence with CHARD_P above 10 Hz and with CHARD_Y below 10 Hz (see the sketch after this entry). We're currently relocking, at DRMI.
LOG: No log
03:52 UTC lockloss
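A minimal sketch of that kind of DARM/CHARD coherence check with gwpy. The DQ channel names follow the usual H1 conventions but are assumptions here, and the time window is only an approximate slice of the lock stretch.

```python
# Sketch of a DARM/CHARD coherence check. Channel names and the time window
# are assumptions; adjust them to the actual lock stretch.
from gwpy.timeseries import TimeSeries

start, end = "2025-06-16 01:00", "2025-06-16 02:00"   # approximate window
darm = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)

for chan in ("H1:ASC-CHARD_P_OUT_DQ", "H1:ASC-CHARD_Y_OUT_DQ"):
    asc = TimeSeries.get(chan, start, end)
    # resample DARM down to the ASC rate so the coherence estimate lines up
    coh = darm.resample(float(asc.sample_rate.value)).coherence(
        asc, fftlength=8, overlap=4)
    below10 = coh.crop(0, 10).value.max()
    above10 = coh.crop(10, 60).value.max()
    print(f"{chan}: peak coherence <10 Hz {below10:.2f}, 10-60 Hz {above10:.2f}")
```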
TITLE: 06/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Only one lockloss today and two purposeful drops from observing for calibration sweeps and SQZ fixing. DARM high frequency still doesn't look amazing, so there might need to be some SQZ angle adjustment. H1 has now been locked for 2.5 hours.
TITLE: 06/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 9mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
I took H1 out of observing for 15 minutes starting at 22:00 UTC when I noticed the range looked low and the SQZ_MANAGER Guardian reported that the SQZ ASC AS42 wasn't on.
This reminded me of the SQZ ASC issues we've been seeing early in lock stretches, so I requested SQZ_MANAGER to 'RESET_SQZ_ASC_FDS' and then back to 'FREQ_DEP_SQZ', but the SQZ_LO_LR Guardian reported low OMC_RF3 power. I then set SQZ_MANAGER to 'DOWN' while I trended the ZMs against the last lock, when squeezing was good. ZM6 was very far off according to its OSEMs (several hundred µrad in both pitch and yaw), so I moved it back to where it was at the end of the last lock and requested SQZ_MANAGER to 'FREQ_DEP_SQZ' again. This time there were no holdups and squeezing was restored, so I promptly returned H1 to observing. BNS range seems to have recovered and the SQZ BLRMS look better as well.
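For the record, a minimal sketch of that ZM trend comparison with gwpy. The OSEM witness channel names follow the usual H1 SUS naming but are assumptions on my part, as are the time windows (end of the previous good-SQZ lock vs. just before the reset).

```python
# Sketch: compare ZM6 pitch/yaw OSEM witnesses between the end of the last
# good-SQZ lock and just before the SQZ problem. Names/times are ASSUMED.
from gwpy.timeseries import TimeSeriesDict

chans = ["H1:SUS-ZM6_M1_DAMP_P_INMON", "H1:SUS-ZM6_M1_DAMP_Y_INMON"]
good = TimeSeriesDict.get(chans, "2025-06-14 19:20", "2025-06-14 19:30")
bad  = TimeSeriesDict.get(chans, "2025-06-14 21:50", "2025-06-14 22:00")

for chan in chans:
    drift = bad[chan].value.mean() - good[chan].value.mean()
    print(f"{chan}: moved {drift:+.0f} urad since the last good-SQZ lock")
```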
Lockloss @ 19:35 UTC after 10.5 hrs locked - link to lockloss tool
No obvious cause, but this lockloss was fast (much like others of late).
H1 dropped observing from 18:30 to 19:00 UTC for regularly scheduled calibration measurements, which ran without issue. A screenshot of the calibration monitor medm and the calibration report are attached.
Broadband runtime: 18:30:45 to 18:35:54 UTC
Simulines runtime: 18:36:41 to 18:59:56 UTC
We had to rerun the report to account for pro-spring in the model. Calibration looks better now -- sensing model is within 2% above 20 Hz and 5% below 20 Hz, report attached. I also updated the .ini file to now account for the pro-spring behavior.
More detailed steps:
- Set is_pro_spring to True in the pydarm_H1.ini in report 20250614T183642Z
- Reran report 20250614T183642Z (in terminal, ran $ pydarm report --regen --skip-gds 20250614T183642Z)
- Copied the pydarm_H1.ini file at /ligo/groups/cal/H1/ifo as pydarm_H1.ini.250610 to save the previous configuration
- In /ligo/groups/cal/H1/ifo/pydarm_H1.ini, set is_pro_spring to True
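A minimal sketch of the backup-and-edit step on the site ini, assuming is_pro_spring appears as a plain "key = value" line. This is just the hand edit, not pydarm's own tooling.

```python
# Sketch of the ini update: back up /ligo/groups/cal/H1/ifo/pydarm_H1.ini and
# flip is_pro_spring to True. Assumes the flag is a simple "key = value" line.
import shutil

ini_path = "/ligo/groups/cal/H1/ifo/pydarm_H1.ini"
shutil.copy2(ini_path, ini_path + ".250610")   # save the previous configuration

with open(ini_path) as f:
    lines = f.readlines()

with open(ini_path, "w") as f:
    for line in lines:
        if line.strip().startswith("is_pro_spring"):
            f.write("is_pro_spring = True\n")
        else:
            f.write(line)
```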
Sat Jun 14 10:09:44 2025 INFO: Fill completed in 9min 40secs
TITLE: 06/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 7mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for 5.5 hours. Calibration measurements planned for this morning at 18:30 UTC.
Before going back to bed, I wanted to check violins and noticed that ITMx MODE13 was ringing up again. I'm guessing the settings RyanC had going were not being used, so I put in what RyanC had 2 hrs ago for the last lock, and within 2 min it quickly damped out the rung-up MODE13.
I have NOT made a change/updated lscparams.
NEW Settings:
ITMx MODE13: FM1 + FM4 + FM10 + gain = -0.2
OLD Settings:
ITMx MODE13: FM1 + FM2 + FM4 + FM10 + gain = 0.0
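For reference, a hedged sketch of applying the NEW settings from a script with the Guardian EPICS wrapper (ezca). The constructor usage and the L2 DAMP filter-bank name are assumptions, so check them against the MEDM screen before trusting anything like this.

```python
# Sketch of switching ITMx MODE13 to the NEW settings via ezca.
# The prefix style and filter-bank name are ASSUMED, not verified.
from ezca import Ezca

ezca = Ezca(prefix="H1:")                # assumed constructor usage
bank = "SUS-ITMX_L2_DAMP_MODE13"         # assumed violin-mode damping bank

ezca.switch(bank, "FM2", "OFF")          # FM2 was on in the OLD settings
ezca.switch(bank, "FM1", "FM4", "FM10", "ON")
ezca[bank + "_GAIN"] = -0.2              # NEW gain (OLD gain was 0.0)
```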
OK, going back to bed. Sleep...we'll see.
These ones came up before---I'm hoping I am addressing them correctly this time! :)
Once these were CONFIRMED, H1 was automatically taken to OBSERVING at 920utc
(RyanC, CoreyG)
Got a Wake Up Call at 1211amPDT, but I was sort of already awake. RyanC was up after his shift and we were both watching H1 remotely. He was battling H1 for most of his shift, made headway toward the end, and then handed off info on how things were going (the winds started dying down around 9pm-ish). A few things he did for H1:
Now that I'm sort of awake, I'm wondering why the Earth was playing with us with that earthquake which literally caught us by surprise seconds after we made it to Observing last night.
Although there were some Alaska quakes around the time of the lockloss, they were under Mag2.6.
So I'm assuming it was the Mag5.0 off the coast of Chile, which hit roughly 30 min before the lockloss. I guess that's fine, but why were there no Earthquake notifications from Verbal? Why didn't SEI_ENV transition from CALM to EARTHQUAKE?
Looking at the seismic BLRMS, the last time SEI_ENV transitioned to EARTHQUAKE was about 12 hrs ago at 0352utc (see attached screenshot) during RyanC's shift (but he was just getting done dealing with winds at that time, so H1 was down anyway). After that EQ there were a few more earthquakes, smaller than the 0352 one but not by much, and the Chilean coast quake was certainly big enough to knock H1 out at 0725. Perhaps it was a unique EQ, since it was off the Pacific coast, albeit the South American coast.
Just seems like H1 should have been able to handle this pesky measly Mag5.0 EQ that the Earth taunted us with after a rough night---literally seconds after we had hit the OBSERVING button! :-/
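A minimal sketch of the comparison above: trend the EQ-band ground BLRMS around the 0352 EARTHQUAKE transition and around the 0725 lockloss. The station/channel name and the exact times are assumptions; swap in the ones from the attached screenshot.

```python
# Sketch: compare EQ-band ground motion at the 0352 EARTHQUAKE transition
# vs. the 0725 lockloss. Channel name and times are ASSUMED placeholders.
from gwpy.timeseries import TimeSeries

chan = "H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M"   # 30-100 mHz EQ band (assumed)

eq = TimeSeries.get(chan, "2025-06-14 03:40", "2025-06-14 04:20")
ll = TimeSeries.get(chan, "2025-06-14 07:10", "2025-06-14 07:40")

print("peak EQ-band BLRMS around 0352 EARTHQUAKE transition:", eq.value.max())
print("peak EQ-band BLRMS around 0725 lockloss:             ", ll.value.max())
```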
ASC ran away and gave a notification to reset it, so at 23:07 we dropped observing to do so. We also had to move the angle back, since it had been reset.
23:10 UTC Observing