Lockloss @ 02/17 05:16 UTC after 3 hours locked due to an earthquake. The earthquake didn't shake the ground too hard, but we still lost lock because secondary microseism is also quite high.
TITLE: 02/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Oli
SHIFT SUMMARY: H1 had been locked for 3 hours, but as I'm posting this, H1 is unlocked and running an alignment. There was one lockloss this morning, and much of the shift was spent attempting to relock H1, with several failed attempts. After relocking, we noticed the range was noticeably lower than in recent locks and that DARM looked poor at higher frequencies, pointing to SQZ not being optimized.
SQZ Troubleshooting: I started by simply taking SQZ_MANAGER to 'DOWN' and then back up to 'FREQ_DEP_SQZ' to see if that would fix the range issue, but the FC ASC was not turning on after the FC locked. SQZ-FC_ASC_TRIGGER_INMON was not reaching its threshold of 3 to turn the ASC on, so I thought improving the FC alignment by adjusting FC2 might help. Ultimately, after following the FC alignment instructions from the SQZ troubleshooting wiki, the FC would not lock in IR at all, so I reverted my FC2 move and instead adjusted the OPO temperature. Again following the instructions on the wiki, with SQZ_MANAGER in 'LOCK_CF', I was able to improve SQZ-CLF_REFL_RF6_ABS_OUTPUT, and the FC was then able to lock successfully with ASC on. After taking SQZ_MANAGER all the way up to 'FREQ_DEP_SQZ', I adjusted the OPO temperature again (in the other direction), since both the inspiral range and the SQZ BLRMS looked worse than when we had started. I was able to bring these back to about where they were when I dropped observing, so ultimately this adventure didn't provide much improvement. H1 then returned to observing.
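For anyone wanting to watch that trigger channel outside of MEDM, here is a minimal sketch using pyepics. The 'H1:' prefix, the polling loop, and the threshold handling are illustrative assumptions, not the Guardian's actual logic.

```python
# Sketch only: poll the FC ASC trigger monitor and report whether it has crossed
# the level that would let the FC ASC engage. The threshold of 3 comes from the
# log above; the full channel name (with "H1:" prefix) is an assumption.
import time
from epics import caget

CHANNEL = "H1:SQZ-FC_ASC_TRIGGER_INMON"   # assumed full EPICS name
THRESHOLD = 3.0                           # trigger level quoted in the log

for _ in range(30):                       # watch for ~30 seconds
    value = caget(CHANNEL, timeout=2)
    if value is None:
        print("channel not reachable")
    elif value >= THRESHOLD:
        print(f"{CHANNEL} = {value:.2f} -> above threshold, FC ASC should engage")
    else:
        print(f"{CHANNEL} = {value:.2f} -> still below threshold")
    time.sleep(1)
```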
LOG:
Lockloss @ 02/17 00:49 UTC after over 3 hours locked
TITLE: 02/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 140 Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY:
Currently Observing and have been locked for 3 hours. Our range was low, which we figured was due to squeezing being much worse than last lock and not improving with thermalization. We popped out of Observing to adjust squeezing, but it was giving us issues and we had trouble getting IR to lock (see Ryan's alog). We were finally able to get the filter cavity locked by adjusting the OPO temperature, but our squeezing was still poor, so Ryan adjusted the OPO temperature again and got a little more squeezing out (about -1 dB), but not much. The range is basically back to where it was before all this.
Ryan's alog: 82840
Lockloss @ 17:03 UTC - link to lockloss tool
ETMX glitch lockloss, no other obvious cause.
H1 back to observing at 21:35 UTC. The relock was long but mostly automatic, with two locklosses at TRANSITION_FROM_ETMX and several more around ALS and DRMI, possibly because of the high microseism.
Sun Feb 16 10:11:43 2025 INFO: Fill completed in 11min 40secs
TCmins [-54C, -53C] OAT (+1C, 34F) DeltaTempTime 10:11:46
TITLE: 02/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY: H1 has been locked for 4.5 hours. Inch or two of new snow on the ground this morning.
TITLE: 02/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 04:21 UTC
Overall a calm shift with one lockloss (alog 82832). Lock re-acquisition was slow due to high and increasing microseism; while we didn't lose lock during re-acquisition, DRMI, PRMI, and MICH_FRINGES took a long time. Squeezing is still bad, but the range trace seems to be on the rise.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | 06:13 |
| 00:09 | PEST | Robert | Optics Lab | N | Termite Wellness Check (and part inventory) | 03:08 |
Lockloss - no immediate environmental or other cause. Will continue investigating as we relock.
Back to NLN as of 04:21 UTC
TITLE: 02/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: One lockloss this shift at the end of the calibration sweeps, and H1 has been trying to relock since. [OpsInfo] I've put new damping settings for ETMY mode 20 into the Guardian and loaded them; these have been working well the past couple of days.
TITLE: 02/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.60 μm/s
QUICK SUMMARY:
IFO is in LOCKING
We've been having a lot of issues, starting with a lockloss due to a failed cal sweep. Since then, DAY Ops (Ryan S) has experienced locking issues at ALS, IR, DRMI, and TRANSITION_ETMX. Interestingly, despite the high microseism, ALS locking doesn't look noisy and the IFO seems to lose lock without evidence of high ground motion.
Will continue investigating, troubleshooting and locking.
Back to OBSERVING as of 01:04 UTC
While attempting to lock H1 this afternoon, I would occasionally notice the ALS-X transmission get a bit noisy, then drop out entirely, followed by a lockloss. On a couple of the more recent instances of this, there was also an MC2 saturation callout right before the lockloss, leading me to believe the IMC was having trouble staying locked. The RefCav alignment has been dropping over the past several days (likely due to the change in weather) and has needed an adjustment, so, thinking that might help the IMC, I paused between lock attempts to touch up the RefCav alignment using the picomotor mirrors in the FSS path. In the few minutes I spent, I was only able to improve the signal on the TPD from about 630 mV to 740 mV, but I think I was about at the limit of what I could get. I'm not sure if this helped at all, as H1 still has yet to relock, but I figure it's still an improvement.
Following the wiki instructions, I ran broadband and Simulines calibration sweeps this morning. This was with the updated Simulines sweep from Vlad with the last frequency point cut out; however, H1 again lost lock towards the end of the Simulines injections (link to lockloss tool). See previous investigations into these locklosses in alog 82704.
The following injections were the ones still running at the time of the lockloss at 20:01:53 UTC:
The broadband sweep ran from 19:30:50 to 19:36:06 UTC, and the Simulines sweep ran from 19:36:53 to 20:01:53 UTC (ended at the lockloss). The 7.68 Hz DARM injection is clearly seen on the lockloss trends.
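As a rough illustration of how that line could be checked offline, the gwpy sketch below pulls DARM just before the lockloss and resolves the 7.68 Hz injection in an ASD. The date, the choice of H1:GDS-CALIB_STRAIN as the DARM proxy, and NDS2 access are assumptions, not what the lockloss tool actually does.

```python
# Sketch only: look for the 7.68 Hz Simulines line in DARM just before the lockloss.
from gwpy.timeseries import TimeSeries

DATE = "2025-02-15"                        # assumed; substitute this entry's actual date
start = f"{DATE} 19:50:00"                 # mid-sweep, well before the 20:01:53 UTC lockloss
end = f"{DATE} 20:01:50"

darm = TimeSeries.fetch("H1:GDS-CALIB_STRAIN", start, end)
asd = darm.asd(fftlength=64, overlap=32)   # long FFT to resolve a narrow line near 7.68 Hz
print(asd.crop(7, 9))                      # the injection should stand out in this band
```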
Lockloss @ 21:15 UTC - link to lockloss tool
The lockloss tool tags this as WINDY (although wind speeds were only up to around 20 mph), and there seems to be an 11 Hz oscillation that starts about a second before the lockloss and is seen by all quads, PRCL, MICH, SRCL, and DARM.
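A quick way to look for that ring-up offline would be something like the gwpy sketch below; the channel list and date are assumptions for illustration, not a record of what was actually checked.

```python
# Sketch only: bandpass a few LSC error signals around 11 Hz in the ~20 s spanning
# the lockloss and watch for the RMS growing in the final second.
from gwpy.timeseries import TimeSeries

DATE = "2025-02-15"                            # assumed entry date
start, end = f"{DATE} 21:14:45", f"{DATE} 21:15:05"

for chan in ("H1:LSC-DARM_IN1_DQ", "H1:LSC-PRCL_IN1_DQ", "H1:LSC-MICH_IN1_DQ"):
    data = TimeSeries.fetch(chan, start, end)
    band = data.bandpass(9, 13)                # isolate content around 11 Hz
    rms = band.rms(0.25)                       # 0.25 s RMS; a ring-up shows as growing RMS
    print(chan, rms.max())
```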
H1 back to observing at 23:11 UTC. Fully automatic relock after I started an initial alignment soon after the last lockloss.
After H1 reached NLN, I ran the A2L script (unthermalized) for both P & Y on all quads. Results here:
| Optic | Initial | Final | Diff |
|---|---|---|---|
| ETMX P | 3.35 | 3.6 | 0.25 |
| ETMX Y | 4.94 | 4.92 | -0.02 |
| ETMY P | 5.48 | 5.64 | 0.16 |
| ETMY Y | 1.29 | 1.35 | 0.06 |
| ITMX P | -0.67 | -0.64 | 0.03 |
| ITMX Y | 2.97 | 3.0 | 0.03 |
| ITMY P | -0.06 | -0.03 | 0.03 |
| ITMY Y | -2.51 | -2.53 | -0.02 |
New A2L gains were updated in lscparams and ISC_LOCK was loaded. I also REVERTED all outstanding SDF diffs from running the script (time-ramps on quads and ADS matrix changes in ASC). The A2L gains themselves are not monitored.
Ran a coherence check for a time right after returning to observing to check the A2L gains (see attached). Sheila comments that this looks just a bit better than the last time this was run and checked, on Feb 11th (alog 82737).
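For reference, an illustrative (not the site script) version of such a coherence check could look like the sketch below; the channel choices, times, and fft settings are assumptions.

```python
# Sketch only: coherence between DARM and one angular control signal after
# returning to observing; low coherence in the bucket suggests well-tuned A2L gains.
from gwpy.timeseries import TimeSeries

DATE = "2025-02-15"                          # assumed entry date
start, end = f"{DATE} 23:15:00", f"{DATE} 23:30:00"

darm = TimeSeries.fetch("H1:GDS-CALIB_STRAIN", start, end)
asc = TimeSeries.fetch("H1:ASC-CHARD_P_OUT_DQ", start, end)   # assumed ASC channel

darm = darm.resample(asc.sample_rate)        # match sample rates before computing coherence
coh = darm.coherence(asc, fftlength=10, overlap=5)
print(coh.crop(10, 60))
```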
Another lockloss that looks just like this one was seen at 02/16 09:48 UTC, with the same 11 Hz oscillation 1 second before the lockloss, seen in the same places.
BS Camera stopped updating just like in alogs:
This takes the Camera Servo guardian into a never-ending loop (and takes ISC_LOCK out of Nominal and H1 out of Observe). See attached screenshot.
So, I had to wake up Dave so he could restart the computer & process for the BS Camera. (Dave mentioned there is a new computer for this camera to be installed soon and it should help with this problem.)
As soon as Dave got the BS camera back, the CAMERA_SERVO node returned to nominal, but since I had accepted the SDF diffs for ASC that appeared when this issue started, I had to go back and ACCEPT the correct settings. Then we automatically went back to Observing.
OK, back to trying to go back to sleep again! LOL
Full procedure is as follows (a rough scripted sketch of the final h1digivideo2 steps is given after the list):
Open BS (cam26) image viewer, verify it is a blue-screen (it was) and keep the viewer running
Verify we can ping h1cam26 (we could) and keep the ping running
ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux"
IOS commands: "enable", "configure terminal", "interface gigabitEthernet 0/35"
Power down h1cam26 with the "shutdown" IOS command, verify pings to h1cam26 stop (they did)
After about 10 seconds power the camera back up with the IOS command "no shutdown"
Wait for h1cam26 to start responding to pings (it did).
ssh onto h1digivideo2 as user root.
Delete the h1cam26 process (kill -9 <pid>), where the PID is given in the file /tmp/H1-VID-CAM26_server.pid
Wait for monit to restart CAM26's process, verify image starts streaming on the viewer (it did).
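The sketch below is a rough Python version of the h1digivideo2 half of this procedure (the switch-port power cycle is left manual). The host name and PID file path are the ones quoted above; everything else is an assumption, and it would need to run as root on h1digivideo2.

```python
# Sketch only: wait for h1cam26 to answer pings after the power cycle, then kill
# the stale CAM26 server process so monit can restart it.
import os
import signal
import subprocess
import time

HOST = "h1cam26"
PIDFILE = "/tmp/H1-VID-CAM26_server.pid"

# Wait until the power-cycled camera answers pings again
while subprocess.call(["ping", "-c", "1", "-W", "2", HOST],
                      stdout=subprocess.DEVNULL) != 0:
    print(f"waiting for {HOST} to respond...")
    time.sleep(2)

# Kill the stale camera server process; monit should then restart it
with open(PIDFILE) as f:
    pid = int(f.read().strip())
os.kill(pid, signal.SIGKILL)
print("killed old CAM26 server; waiting for monit to restart it")
```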
FRS: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320
Forgot once again to note timing for this wake-up. This wake-up was at 2:33 AM PST (10:33 UTC), and I was roughly done with the work about 45 minutes after phoning Dave for help.
02:11 Back to Observing