Ryan S, Camilla
Corey (alog 82849), Ryan, and I have had some issues with SQZ this morning. Corey adjusted the SHG temperature to get enough green power to lock the OPO, but after that the SQZ and range were bad. The FR3 signal and FC WFS were very noisy.
Ryan tried to take SQZ out, checked the OPO temperature (fine), reset the SQZ angle from 220deg back to a more sensible 190deg, and then put SQZ back in again, but he couldn't get the FC to lock. He followed the steps in the SQZ wiki to touch up the FC1/FC2 alignment and got H1:SQZ-FC_TRANS_C_LF_OUTPUT to >120 (plot attached), but we were still losing the FC at TRANSITION_TO_IR_LOCKING. The FC also seemed unstable when locked on green. While we were troubleshooting, the IFO lost lock. Unsure if this is an ASC issue; FC ASC trends attached (POS for Y and P were moving much more than usual), plus SQZ ASC trends (ZM4 PIT changes a lot).
After the lock loss, the SQZ FC seemed to lock stably in green with H1:SQZ-FC_TRANS_C_LF_OUTPUT = 160 (plot attached). This is higher than usual and it's not clear what changed!
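As a side note, here is a minimal sketch of how one could trend this channel around the relock to confirm the jump from ~120 to ~160, assuming gwpy and NDS2 access from a control-room machine (the GPS window below is a placeholder, not the real times):

    # Trend the FC green transmission around the lockloss/relock (gwpy + NDS2 assumed).
    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:SQZ-FC_TRANS_C_LF_OUTPUT'
    start, end = 1423900000, 1423907200   # placeholder GPS times; substitute real ones

    data = TimeSeries.get(CHAN, start, end)
    print(f'{CHAN}: min={data.value.min():.1f}, max={data.value.max():.1f}')
    data.plot(ylabel=CHAN).savefig('fc_trans_trend.png')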
Ryan mentioned that something similar happened to Oli over the weekend, where the range was bad but SQZ unlocked, re-locked, and came back good (plot attached); that case seemed to be the OPO PZT moving to a better place (we know it likes to be ~90 rather than in the 50s).
After relocking, everything was fine, but the FC ASC wasn't turning on because the trigger signal was too low (2.5, below the 3.0 threshold). Ryan decreased H1:SQZ-FC_ASC_TRIGGER_THRESH_ON from 3.0 to 2.0. This signal has been slowly decreasing, maybe since we decreased the OPO trans from 80uW to 60uW last week (plot attached). Ryan accepted the change in SDF and then checked SQZ_MANAGER, SQZ_FC, and sqzparams to confirm this value isn't set anywhere in the Guardian.
FC ASC lower trigger threshold accepted in SDF. It is not set anywhere by Guardian nor is this model's SAFE.snap table pushed during SDF_REVERT.
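For future reference, a minimal sketch of the check-and-lower operation described above, assuming pyepics and EPICS channel access from a control-room machine (the channel names are from this entry; the guard logic is illustrative, and in practice the change still needs to be accepted in SDF):

    # Check the FC ASC trigger signal against its ON threshold and lower the
    # threshold if the signal can no longer reach it (pyepics assumed).
    from epics import caget, caput

    trigger = caget('H1:SQZ-FC_ASC_TRIGGER_INMON')
    thresh_on = caget('H1:SQZ-FC_ASC_TRIGGER_THRESH_ON')
    print(f'trigger={trigger:.2f}, threshold={thresh_on:.2f}')

    if trigger < thresh_on:
        # Move the threshold below the observed trigger level, as done here (3.0 -> 2.0).
        caput('H1:SQZ-FC_ASC_TRIGGER_THRESH_ON', 2.0)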
After a while, Camilla and I again dropped H1 out of observing as even after thermalization, BNS range and squeezing weren't looking good. We decided to reset the SQZ ASC and angle in case they were set at a bad reference point. I took SQZ_MANAGER to 'RESET_SQZ_ASC_FDS' and adjusted the SQZ angle to optimize DARM and SQZ BLRMS. To reset the angle servo here, I adjusted SQZ-ADF_OMC_TRANS_PHASE to make SQZ-ADF_OMC_TRANS_SQZ_ANG oscillate around 0 (ended with a total change of -10deg), then requested SQZ_MANAGER back to 'FREQ_DEP_SQZ' to turn servos back on, and finally accepted on SDF (attached) to return to observing. It's been about 20 minutes since then, and so far H1 is observing with a much better steady range at around 160Mpc.
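For illustration, a sketch of the ADF phase adjustment described above, assuming pyepics; the channel names are from this entry, but the step size, sign convention, and settling time are guesses, and on the real system this is done by hand while watching the readback:

    # Step the ADF demod phase until the reported SQZ angle oscillates around zero.
    import time
    from epics import caget, caput

    PHASE = 'H1:SQZ-ADF_OMC_TRANS_PHASE'
    ANGLE = 'H1:SQZ-ADF_OMC_TRANS_SQZ_ANG'

    for _ in range(20):
        ang = caget(ANGLE)
        if abs(ang) < 1.0:                  # close enough to zero
            break
        step = -1.0 if ang > 0 else 1.0     # sign convention is an assumption
        caput(PHASE, caget(PHASE) + step)
        time.sleep(2)                       # let the readback settle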
What happened that needed the ADF servo setpoint to be updated with a different SQZ angle? Investigation is ongoing.
Attaching the plot from after Corey got the SQZ locked and it was bad; you can see that one of the loops appears to be oscillating. Compare to the normal SQZ plot, also attached.
Mon Feb 17 10:13:43 2025 INFO: Fill completed in 13min 40secs
TCmins [-49C, -48C] OAT (+2C, 36F) DeltaTempTime 10:13:43
TITLE: 02/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 130Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.64 μm/s
QUICK SUMMARY: H1 has been locked for just over an hour but only observing for about half that as SQZ was having issues locking (see Corey's alog).
1452 UTC (0652 PST): Wake-up call due to No Squeezing.
SQZ Symptoms & Solution:
TITLE: 02/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently relocking and in ENGAGE_DRMI_ASC. After we lost lock right at the end of Ryan's shift, we were able to get back up relatively easily once I ran an initial alignment. We then got knocked out by an earthquake, and we just finished running another initial alignment. The only difficulty in relocking for me has been ALSX and ALSY; they still tend to drop out every once in a while, even right after an initial alignment.
After Ryan and I tried and failed to improve squeezing earlier today (two locks ago), squeezing (and subsequently range) also wasn't great when we relocked this last lock. However, an hour in, we dropped out of observing due to the SQZ filter cavity unlocking, and when it relocked, squeezing was back to around -4dB. Not sure what changed there, but it got us back to around 155Mpc, which was very appreciated, even if it's not an amazing range.
LOG:
00:49 Lockloss
- Started an initial alignment since the ALS flashes didn't look good and I knew there would be issues with ALSY dropping out
02:08 NOMINAL_LOW_NOISE
02:11 Observing
02:57 Out of Observing due to SQZ FC unlocking
03:01 Back into Observing
05:16 Lockloss
- Ran an IA
I was looking at the lockloss sheet and noticed that this weekend, there were three locklosses that were similar to each other. Ryan Short noted one in an alog a couple days ago: 82809.
The locklosses look similar to what we sometimes see when we lose lock due to ground motion from an earthquake or wind, but in all three cases there was no elevated earthquake motion and the wind was below 20mph. In each, an 11 Hz oscillation starts about 1 second before the lockloss and can be seen in DARM, all QUADs, MICH, PRCL, and SRCL.
Lockloss times:
There was another instance of a lockloss that looked just like these on February 7th 2025 at 05:47UTC as well.
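For anyone wanting to reproduce the signature, a sketch of how one might look for the 11 Hz ring-up in DARM before one of these locklosses, assuming gwpy and NDS2 access (the GPS time below is a placeholder, not one of the real events):

    # Look for the ~11 Hz oscillation in the seconds before a lockloss.
    from gwpy.timeseries import TimeSeries

    lockloss_gps = 1423891234   # placeholder; take a real time from the lockloss tool
    darm = TimeSeries.get('H1:LSC-DARM_IN1_DQ', lockloss_gps - 8, lockloss_gps)

    # Band-limited RMS around 11 Hz should ramp up ~1 s before the lockloss.
    print(darm.bandpass(9, 13).rms(0.25))

    # A q-scan of the same stretch makes the oscillation easy to see.
    darm.q_transform(frange=(5, 30)).plot().savefig('lockloss_11hz.png')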
Some more of these weird locklosses from the last day:
Lockloss @ 02/17 05:16 UTC after 3 hours locked due to an earthquake. The earthquake didn't shake the ground too hard, but we still lost lock because secondary microseism is also really high.
TITLE: 02/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Oli
SHIFT SUMMARY: H1 has been locked for 3 hours. As I'm posting this, H1 is unlocked and running an alignment. One lockloss this morning, then much of the shift was spent attempting to relock H1 with several failed attempts. After relocking, we noticed the range was noticeably lower than recent locks and that DARM in higher frequencies looked poor, pointing to SQZ not being optimized.
SQZ Troubleshooting: I started by taking SQZ_MANAGER to 'DOWN' and back up to 'FREQ_DEP_SQZ' to see if that would fix the range issue, but the FC ASC was not turning on after the FC locked. SQZ-FC_ASC_TRIGGER_INMON was not reaching its threshold of 3 to turn the ASC on, so I thought improving the FC alignment by adjusting FC2 might help. Ultimately, after following the FC alignment instructions from the SQZ troubleshooting wiki, the FC would not lock at all in IR, so I reverted my FC2 move and instead adjusted the OPO temperature. Again following the instructions on the wiki, with SQZ_MANAGER in 'LOCK_CF', I was able to improve SQZ-CLF_REFL_RF6_ABS_OUTPUT, and the FC locked successfully with ASC on. After then taking SQZ_MANAGER all the way up to 'FREQ_DEP_SQZ', I adjusted the OPO temperature again (in the other direction), as both the inspiral range and SQZ BLRMS looked worse than when we had started. I was able to bring these back to about where they were when I dropped observing, so ultimately this adventure didn't provide much improvement. H1 then returned to observing.
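For illustration, a sketch of the OPO temperature walk described above, assuming pyepics; the setpoint channel name, step size, and settling time are assumptions (the RF6 readback channel is from this entry), and in practice this is done by hand from the MEDM screen:

    # Walk the OPO temperature and keep the setpoint that maximizes CLF RF6.
    import time
    from epics import caget, caput

    TEMP = 'H1:SQZ-OPO_TEC_SETTEMP'              # assumed setpoint channel name
    RF6 = 'H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT'

    best_temp, best_rf6 = caget(TEMP), caget(RF6)
    for step in [0.001] * 5 + [-0.001] * 10:     # small walk up, then back down
        caput(TEMP, caget(TEMP) + step)
        time.sleep(10)                           # thermal settling time (a guess)
        rf6 = caget(RF6)
        if rf6 > best_rf6:
            best_temp, best_rf6 = caget(TEMP), rf6
    caput(TEMP, best_temp)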
LOG:
Lockloss @ 02/17 00:49 UTC after over 3 hours locked
02:11 Back to Observing
TITLE: 02/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 140 Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY:
Currently Observing and have been Locked for 3 hours. Our range was bad, which we figured was due to squeezing being much worse than last lock and not improving with thermalization. We popped out of Observing to adjust the squeezing, but it was giving us issues and we had trouble getting IR to lock (see Ryan's alog). We were finally able to get the filter cavity locked by adjusting the OPO temp, but our squeezing was still bad, so Ryan adjusted the OPO temperature again and got a little squeezing back (-1dB), but not much. Range is basically back to where it was before all this.
Ryan's alog: 82840
Lockloss @ 17:03 UTC - link to lockloss tool
ETMX glitch lockloss, no other obvious cause.
H1 back to observing at 21:35 UTC. A long but pretty much automatic relock: two locklosses at TRANSITION_FROM_ETMX and many more around ALS and DRMI, possibly because of the high microseism.
Sun Feb 16 10:11:43 2025 INFO: Fill completed in 11min 40secs
TCmins [-54C, -53C] OAT (+1C, 34F) DeltaTempTime 10:11:46
Lockloss @ 21:15 UTC - link to lockloss tool
The lockloss tool tags this as WINDY (although wind speeds were only up to around 20mph), and there seems to be an 11Hz oscillation starting about a second before the lockloss, seen by all quads, PRCL, MICH, SRCL, and DARM.
H1 back to observing at 23:11 UTC. Fully automatic relock after I started an initial alignment soon after the last lockloss.
After H1 reached NLN, I ran the A2L script (unthermalized) for both P & Y on all quads. Results here:
Optic  | Initial | Final | Diff
ETMX P |  3.35   |  3.6  |  0.25
ETMX Y |  4.94   |  4.92 | -0.02
ETMY P |  5.48   |  5.64 |  0.16
ETMY Y |  1.29   |  1.35 |  0.06
ITMX P | -0.67   | -0.64 |  0.03
ITMX Y |  2.97   |  3.0  |  0.03
ITMY P | -0.06   | -0.03 |  0.03
ITMY Y | -2.51   | -2.53 | -0.02
New A2L gains were updated in lscparams and ISC_LOCK was loaded. I also REVERTED all outstanding SDF diffs from running the script (time-ramps on quads and ADS matrix changes in ASC). The A2L gains themselves are not monitored.
Ran a coherence check for a time right after returning to observing to check the A2L gains (see attached). Sheila comments that this looks just a bit better than the last time this was run and checked, on Feb 11th (alog 82737).
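For reference, a sketch of the kind of DARM/ADS coherence measurement referenced above, assuming gwpy and NDS2 access; the ADS channel name and time span are illustrative, not the ones actually used:

    # Coherence between DARM and one angular dither channel after the A2L update.
    from gwpy.timeseries import TimeSeriesDict

    start, end = 1423900000, 1423900600          # placeholder 10-minute span
    chans = ['H1:LSC-DARM_IN1_DQ', 'H1:ASC-ADS_PIT3_DEMOD_I_OUT_DQ']

    data = TimeSeriesDict.get(chans, start, end)
    coh = data[chans[0]].coherence(data[chans[1]], fftlength=8)
    coh.plot(xscale='log').savefig('a2l_coherence.png')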
Another lockloss that looks just like this one was seen on 02/16 at 09:48 UTC: the same 11 Hz oscillation 1 second before the lockloss, seen in the same places.
BS Camera stopped updating just like in alogs:
This takes the CAMERA_SERVO Guardian node into a never-ending loop (and takes ISC_LOCK out of nominal and H1 out of Observing). See attached screenshot.
So, I had to wake up Dave so he could restart the computer & process for the BS Camera. (Dave mentioned there is a new computer for this camera to be installed soon and it should help with this problem.)
As soon as Dave got the BS camera back, the CAMERA_SERVO node returned to nominal, but I had accepted the ASC SDF diffs that appeared when this issue started, so I had to go back and ACCEPT the correct settings. Then we automatically went back to Observing.
OK, back to trying to go back to sleep again! LOL
Full procedure is:
Open BS (cam26) image viewer, verify it is a blue-screen (it was) and keep the viewer running
Verify we can ping h1cam26 (we could) and keep the ping running
ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux"
IOS commands: "enable", "configure terminal", "interface gigabitEthernet 0/35"
Power down h1cam26 with the "shutdown" IOS command, verify pings to h1cam26 stop (they did)
After about 10 seconds power the camera back up with the IOS command "no shutdown"
Wait for h1cam26 to start responding to pings (it did).
ssh onto h1digivideo2 as user root.
Delete the h1cam26 process (kill -9 <pid>), where the PID is given in the file /tmp/H1-VID-CAM26_server.pid
Wait for monit to restart CAM26's process, verify image starts streaming on the viewer (it did).
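The last two steps could in principle be scripted; a rough sketch, assuming passwordless ssh to h1digivideo2 and that the switch-port power cycle has already been done by hand (purely illustrative, not a tested tool):

    # Wait for h1cam26 to come back, then kill the stale server so monit restarts it.
    import subprocess
    import time

    def ping_ok(host):
        """Return True if a single ping to host succeeds."""
        return subprocess.run(['ping', '-c', '1', '-W', '2', host],
                              capture_output=True).returncode == 0

    while not ping_ok('h1cam26'):
        time.sleep(2)

    subprocess.run(['ssh', 'root@h1digivideo2',
                    'kill -9 $(cat /tmp/H1-VID-CAM26_server.pid)'])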
FRS: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320
Forgot once again to note timing for this wake-up: it was at 2:33 AM PST (1033 UTC), and I was done with this work roughly 45 minutes after phoning Dave for help.