TITLE: 02/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 130Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.64 μm/s
QUICK SUMMARY: H1 has been locked for just over an hour but only observing for about half that as SQZ was having issues locking (see Corey's alog).
1452 UTC (652am PST): Wake-up call due to No Squeezing.
SQZ Symptoms & Solution:
TITLE: 02/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently relocking and in ENGAGE_DRMI_ASC. After we lost lock right at the end of Ryan's shift, we were able to get back up relatively easily after I ran an initial alignment. We got knocked out by an earthquake though, and we just finished running another initial alignment. The only difficulty in getting relocked for me has been ALSX and Y - they still tend to drop out every once in a while, even right after an initial alignment.
After trying and failing to improve squeezing earlier today (two locks ago) with Ryan, squeezing (and subsequently range) also wasn't great when we got relocked this last lock. However, an hour in, we dropped out due to the SQZ filter cavity unlocking, and when it relocked, squeezing was back to being around -4 dB. Not sure what was changed there, but it got us back to around 155 Mpc, which was very appreciated, even if it's not an amazing range.
LOG:
00:49 Lockloss
- Started an initial alignment since ALS flashes didn't look good and I knew there would be issues with ALSY dropping out
02:08 NOMINAL_LOW_NOISE
02:11 Observing
02:57 Out of Observing due to SQZ FC unlocking
03:01 Back into Observing
05:16 Lockloss
- Ran an IA
I was looking at the lockloss sheet and noticed that this weekend, there were three locklosses that were similar to each other. Ryan Short noted one in an alog a couple days ago: 82809.
The locklosses look similar to what we sometimes see when we lose lock due to ground motion from an earthquake or wind, but in all three cases there was no earthquake motion and the wind was below 20 mph. In each case, an 11 Hz oscillation starts about 1 second before the lockloss and can be seen in DARM, all QUADs, MICH, PRCL, and SRCL.
Lockloss times:
There was another instance of a lockloss that looked just like these on February 7th 2025 at 05:47UTC as well.
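A quick way to screen future candidates for this signature, sketched below assuming gwpy/NDS2 data access; the GPS time is a placeholder and H1:GDS-CALIB_STRAIN stands in for DARM (the QUAD/MICH/PRCL/SRCL channels could be substituted):

from gwpy.timeseries import TimeSeries

# Placeholder GPS time of a candidate lockloss; substitute the time reported
# by the lockloss tool.
t_lockloss = 1423717000

# Fetch the last ~10 s of calibrated strain before the lockloss (stand-in for
# DARM). Requires NDS2 access to LIGO data.
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', t_lockloss - 10, t_lockloss)

# Band-pass around 11 Hz; a growing oscillation in the final ~1 s before the
# lockloss would match the signature described above.
ring = darm.bandpass(9, 13)
plot = ring.plot()
plot.savefig('lockloss_11Hz_check.png')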
Some more of these weird locklosses from the last day:
Lockloss @ 02/17 05:16 UTC after 3 hours locked due to an earthquake. The earthquake didn't shake the ground too hard, but we still lost lock because secondary microseism is also really high.
TITLE: 02/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Oli
SHIFT SUMMARY: H1 had been locked for 3 hours, but as I'm posting this, H1 is unlocked and running an alignment. One lockloss this morning, then much of the shift was spent attempting to relock H1 with several failed attempts. After relocking, we noticed the range was noticeably lower than in recent locks and that DARM at higher frequencies looked poor, pointing to SQZ not being optimized.
SQZ Troubleshooting: I started by trying to take SQZ_MANAGER to 'DOWN' and then back up to 'FREQ_DEP_SQZ' to see if that would fix the range issue, but the FC ASC was not turning on after the FC locked. The SQZ-FC_ASC_TRIGGER_INMON was not reaching its threshold of 3 to turn ASC on, so I thought improving FC alignment by adjusting FC2 might help. Ultimately, after following the FC alignment instructions from the SQZ troubleshooting wiki, the FC would not lock at all in IR, so I reverted my FC2 move and then attempted to adjust the OPO temperature. Again following the instructions on the wiki, with SQZ_MANAGER in 'LOCK_CF,' I was able to improve SQZ-CLF_REFL_RF6_ABS_OUTPUT and the FC was able to lock successfully with ASC on. After then taking SQZ_MANAGER all the way up to 'FREQ_DEP_SQZ,' I again adjusted the OPO temperature (in the other direction), as both the inspiral range and SQZ BLRMS looked worse than when we had started. I was able to bring these back to about where they were when I dropped observing, so ultimately this adventure didn't provide much improvement. H1 then returned to observing.
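As a side note, the trigger-level check described above can be watched programmatically; a minimal sketch, assuming pyepics channel access from a control-room machine and an 'H1:' prefix on the channel name (the threshold of 3 is from the text):

import time
from epics import caget  # pyepics; assumes channel access is available

# Trigger channel named above; the 'H1:' prefix is an assumption.
TRIGGER = 'H1:SQZ-FC_ASC_TRIGGER_INMON'
THRESHOLD = 3.0  # level the trigger needs to reach for FC ASC to turn on

# Print the trigger level once per second for a minute, noting whether it has
# crossed the threshold, e.g. while adjusting FC2 or the OPO temperature.
for _ in range(60):
    value = caget(TRIGGER)
    if value is None:
        print(f'{TRIGGER}: no response')
    else:
        state = 'above' if value >= THRESHOLD else 'below'
        print(f'{TRIGGER} = {value:.2f} ({state} threshold {THRESHOLD})')
    time.sleep(1)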
LOG:
Lockloss @ 02/17 00:49 UTC after over 3 hours locked
02:11 Back to Observing
TITLE: 02/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 140 Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY:
Currently Observing and have been Locked for 3 hours. Our range was bad, which we figured was due to squeezing being way worse than last lock and not getting better with thermalization. We popped out of Observing to adjust squeezing, but it was giving us issues and we had trouble getting IR to lock (see Ryan's alog). We were finally able to get the filter cavity locked by adjusting the OPO temp, but our squeezing was still bad, so Ryan adjusted the OPO temperature again and got a tiny bit of squeezing out (-1 dB), but not much. Range is basically back to where it was before all this.
Ryan's alog: 82840
Lockloss @ 17:03 UTC - link to lockloss tool
ETMX glitch lockloss, no other obvious cause.
H1 back to observing at 21:35 UTC. The relock was long but pretty much automatic; there were two locklosses at TRANSITION_FROM_ETMX and many more around ALS and DRMI, possibly because of the high microseism.
Sun Feb 16 10:11:43 2025 INFO: Fill completed in 11min 40secs
TCmins [-54C, -53C] OAT (+1C, 34F) DeltaTempTime 10:11:46
TITLE: 02/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY: H1 has been locked for 4.5 hours. Inch or two of new snow on the ground this morning.
TITLE: 02/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 04:21 UTC
Overall calm shift with one lockloss (alog 82832). Lock re-acquisition was slow due to high and increasing microseism - while we didn't lose lock during re-acquisition, DRMI, PRMI, and MICH_FRINGES took a long time. Squeezing is still bad, but the range trace seems to be on the rise.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | 06:13 |
00:09 | PEST | Robert | Optics Lab | N | Termite Wellness Check (and part inventory) | 03:08 |
Lockloss - no immediate environmental or other cause. Will continue investigating as we relock.
Back to NLN as of 04:21 UTC
Lockloss @ 21:15 UTC - link to lockloss tool
The lockloss tool tags this as WINDY (although wind speeds were only up to around 20 mph), and there seems to be an 11 Hz oscillation that starts about a second before the lockloss, seen by all quads, PRCL, MICH, SRCL, and DARM.
H1 back to observing at 23:11 UTC. Fully automatic relock after I started an initial alignment soon after the last lockloss.
After H1 reached NLN, I ran the A2L script (unthermalized) for both P & Y on all quads. Results here:
Quad / DOF | Initial | Final | Diff
ETMX P | 3.35 | 3.6 | 0.25 |
ETMX Y | 4.94 | 4.92 | -0.02 |
ETMY P | 5.48 | 5.64 | 0.16 |
ETMY Y | 1.29 | 1.35 | 0.06 |
ITMX P | -0.67 | -0.64 | 0.03 |
ITMX Y | 2.97 | 3.0 | 0.03 |
ITMY P | -0.06 | -0.03 | 0.03 |
ITMY Y | -2.51 | -2.53 | -0.02 |
New A2L gains were updated in lscparams and ISC_LOCK was loaded. I also REVERTED all outstanding SDF diffs from running the script (time-ramps on quads and ADS matrix changes in ASC). The A2L gains themselves are not monitored.
Ran a coherence check for a time right after returning to observing to check the A2L gains (see attached). Sheila comments that this looks just a bit better than the last time this was run and checked, on Feb 11th (alog 82737).
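For reference, this kind of DARM/ASC coherence check can be reproduced with gwpy; a minimal sketch, assuming NDS2 access, with a placeholder time and an illustrative ASC channel (not necessarily the exact set used for the attached check):

from gwpy.timeseries import TimeSeries

# Placeholder GPS start time shortly after returning to observing.
start = 1423800000
duration = 600  # ten minutes

# DARM plus one example angular control signal; the ASC channel here is
# illustrative only, and other quad/ASC channels can be swapped in.
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + duration)
asc = TimeSeries.get('H1:ASC-CHARD_P_OUT_DQ', start, start + duration)

# Match sample rates before computing coherence (DARM is typically stored at
# a higher rate than the ASC channel).
darm = darm.resample(asc.sample_rate)

# Coherence spectrum between the two; lower broadband coherence after an A2L
# update suggests reduced angle-to-length coupling.
coh = darm.coherence(asc, fftlength=10, overlap=5)
plot = coh.plot()
plot.savefig('a2l_coherence_check.png')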
Another lockloss that looks just like this one was seen on 02/16 at 09:48 UTC. Same 11 Hz oscillation 1 second before the lockloss, seen in the same places.
BS Camera stopped updating just like in alogs:
This takes the Camera Servo guardian into a never-ending loop (and takes ISC_LOCK out of Nominal and H1 out of Observe). See attached screenshot.
So, I had to wake up Dave so he could restart the computer & process for the BS Camera. (Dave mentioned there is a new computer for this camera to be installed soon and it should help with this problem.)
As soon as Dave got the BS camera back, the CAMERA SERVO node got back to nominal, but since I had accepted the SDF diffs for ASC that appeared when this issue started, I had to go back and ACCEPT the correct settings. Then we automatically went back to Observing.
OK, back to trying to go back to sleep again! LOL
Full procedure is:
Open BS (cam26) image viewer, verify it is a blue-screen (it was) and keep the viewer running
Verify we can ping h1cam26 (we could) and keep the ping running
ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux"
IOS commands: "enable", "configure terminal", "interface gigabitEthernet 0/35"
Power down h1cam26 with the "shutdown" IOS command, verify pings to h1cam26 stop (they did)
After about 10 seconds power the camera back up with the IOS command "no shutdown"
Wait for h1cam26 to start responding to pings (it did).
ssh onto h1digivideo2 as user root.
Delete the h1cam26 process (kill -9 <pid>), where the pid is given in the file /tmp/H1-VID-CAM26_server.pid
Wait for monit to restart CAM26's process, verify image starts streaming on the viewer (it did).
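For future reference, a minimal sketch of how the power-cycle and process-restart steps above might be scripted, assuming ssh access to cdsmonitor and h1digivideo2 and that the network_automation shell accepts commands piped on stdin (the hostnames, IOS commands, and pid-file path are taken from the steps above; everything else is an assumption):

import subprocess
import time

CAMERA = 'h1cam26'

# IOS commands from the procedure above, targeting the camera's switch port.
IOS_PREAMBLE = 'enable\nconfigure terminal\ninterface gigabitEthernet 0/35\n'

def camera_pings(count=3):
    """Return True if h1cam26 answers ping."""
    result = subprocess.run(['ping', '-c', str(count), CAMERA],
                            capture_output=True)
    return result.returncode == 0

def switch_port(action):
    """Send a shutdown / no shutdown to the camera's switch port by piping
    IOS commands into the network_automation shell on cdsmonitor. If that
    shell is strictly interactive, run these steps by hand instead."""
    subprocess.run(
        ['ssh', 'cdsmonitor', 'network_automation', 'shell', 'sw-lvea-aux'],
        input=IOS_PREAMBLE + action + '\nend\nexit\n',
        text=True, check=True)

# 1. Power the camera port down and confirm pings stop.
switch_port('shutdown')
time.sleep(10)
if camera_pings():
    raise RuntimeError(f'{CAMERA} still responding after port shutdown')

# 2. Power the port back up and wait for the camera to respond again.
switch_port('no shutdown')
while not camera_pings():
    time.sleep(5)

# 3. Kill the stale CAM26 server process on h1digivideo2 (as root); monit
#    should then restart it and the image should start streaming again.
subprocess.run(
    ['ssh', 'root@h1digivideo2',
     'kill -9 $(cat /tmp/H1-VID-CAM26_server.pid)'],
    check=True)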
FRS: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320
Forgot once again to note timing for this wake-up. This wake-up was at 233am PST (1033 UTC), and I was roughly done with this work about 45 min after phoning Dave for help.