TITLE: 11/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 26mph Gusts, 19mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
Lockloss from NLN during commissioning tagged as windy: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1415382893
Holding H1 in DOWN with the IMC locked but the ISS turned off, to test whether the ISS is the cause of the IMC glitches we have been seeing that break our locks.
The secondary microseism is elevated and the wind is gusting above 30 mph, so it's a good time to test this.
IMC Lockloss @ 12:37:44 UTC even with the ISS turned off.
22:05:49 UTC the FSS Autolocker H1:PSL-FSS_AUTOLOCK_ON was turned off.
We seem to be having lots of locklosses during the TRANSITION_FROM_ETMX and LOWNOISE_ESD_ETMX guardian states. With Camilla's help, I looked through the lockloss tool to see whether these are related to the IMC locklosses. "TRANSITION_FROM_ETMX" and "LOWNOISE_ESD_ETMX" are states 557 and 558, respectively.
For reference, transition from ETMX is the state where DARM control is shifted to IX and EY, and the ETMX bias voltage is ramped to our desired value. Control is then handed back over to EX. Then, in lownoise ESD, some low pass is engaged and the feedback to IX and EY is disengaged.
In total since the start of O4b (April 10), there have been 26 locklosses from "transition from ETMX"
From the start of O4b (April 10) to now, there have been 22 locklosses from "lownoise ESD ETMX"
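As a cross-check on these numbers, here is a rough sketch of how one could count such locklosses outside the lockloss tool, by trending the ISC_LOCK guardian state channel and counting drops out of states 557/558. The channel name and the "drop below 100 means lockloss" heuristic are my assumptions, not how the lockloss tool actually works, and the guardian channel is 16 Hz, so fetch in day-long chunks rather than the whole run at once:

```python
from gwpy.timeseries import TimeSeries

STATES = {557: 'TRANSITION_FROM_ETMX', 558: 'LOWNOISE_ESD_ETMX'}

def count_locklosses(start, end):
    """Count transitions out of STATES that fall back to a low state."""
    counts = {num: 0 for num in STATES}
    grd = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', start, end)
    vals = grd.value.astype(int)
    for a, b in zip(vals[:-1], vals[1:]):
        if a in counts and b < 100:  # fell back toward DOWN = lockloss
            counts[a] += 1
    return counts

# Example: one day (GPS times approximate); loop over O4b in chunks like this.
print(count_locklosses(1415318418, 1415318418 + 86400))
```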
Trending the microseismic channels, the increase in microseism seems to correspond with the worsening of the lockloss rate for lownoise ESD ETMX. I think, when correcting for the IMC locklosses, the transition from ETMX lockloss rate is about the same, whereas the lownoise ESD ETMX lockloss rate has increased significantly.
Here is a look at what these locklosses actually look like.
I trended the various ETMX, ETMY, and ITMX suspension channels during these transitions. The first two attachments here show a side-by-side set of scopes, with the left showing a successful transition from Nov 10 and the right showing a failed transition from Nov 9. It appears that in both cases, the ITMX L3 drive has a ~1 Hz oscillation that grows in magnitude until the transition. Both the good and bad times show a separate oscillation right at the transition. This occurs right at the end of state 557, so the locklosses within 1 or 2 seconds of state 558 likely result from whatever the steps at the end of state 557 do. The second screenshot zooms in to highlight that the successful transition has a different oscillation that rings down, whereas the unsuccessful transition fails right where this second ring-up occurs.
I grabbed a quick trend of the microseism, and it looks like the ground motion approximately doubled around Sept 20 (third screenshot). I grabbed a couple of recent successful and unsuccessful transitions since Sept 20, and they all show similar behavior. A successful transition from Sept 19 (fourth attachment) does not show the first 1 Hz ring up, just the second fast ring down after the transition.
I tried to look back at a time before the DAC change at ETMX, but I am having trouble getting any data that isn't minute trend. I will keep trying for earlier times, but this already indicates an instability in this transition that our long ramp times are not avoiding.
Thanks to some help from Erik and Jonathan, I was able to trend raw data from Nov 2023 (hint: use nds2). Around Nov 8 2023, the microseism was elevated, ~300 counts average on the H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M channel, compared to ~400 counts average this weekend. The attached scope compares the transition on Nov 8 2023 (left) versus this weekend, Nov 10 (right). One major difference here is that we now have a new DARM offloading scheme. A year ago, this transition also appears to have involved some instability causing an oscillation in ETMX L3, but the instability now creates a larger disturbance that rings down in both the ITMX L3 and ETMX L3 channels.
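For anyone else needing raw (full-rate) data from that far back, a minimal nds2 sketch; server/port are the usual LHO values, and the SUS channel name is an illustrative example, not necessarily the one trended here:

```python
import nds2

# Usual LHO NDS server/port; may differ depending on where you run this.
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
start = 1383436818  # ~Nov 8 2023 00:00 UTC (approximate)
chans = ['H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ',        # example ESD drive quadrant
         'H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M']  # microseism BLRMS
bufs = conn.fetch(start, start + 600, chans)       # 10 minutes of raw data
for b in bufs:
    print(b.channel.name, b.data.mean())
```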
Final thoughts for today:
Sheila and I took a look at these plots again. It appears that the 3 Hz oscillation has the potential to saturate L3, which might be causing the lockloss. The ring-up repeatedly occurs within the first two seconds of the gain ramp, which is set to 10 seconds. We decided to shorten this ramp to 2 seconds. I created a new variable on line 5377 in the ISC guardian called "etmx_ramp" and set it to 2. The ramps from the EY/IX to EX L3 control are now set to this value, as well as the timer. If this is bad, this ramp variable can be changed back.
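For reference, a minimal sketch of what that change looks like. This is not a verbatim excerpt of the ISC_LOCK guardian; the filter-bank name and timer key below are illustrative:

```python
# Module-level ramp variable, replacing the hard-coded 10 s value.
etmx_ramp = 2  # seconds; revert to 10 if this makes things worse

# Inside the transition state, the EY/IX -> EX L3 gain ramps and the wait
# timer now share the same value (names below are illustrative):
ezca.get_LIGOFilter('SUS-ETMX_L3_LOCK_L').ramp_gain(1.0, ramp_time=etmx_ramp,
                                                    wait=False)
self.timer['ETMXramp'] = etmx_ramp
```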
Checked that there was no significant downstream alignment shift after the IMC WFS were centered on September 10th (80018). Checking this because it was the same week the "IMC" locklosses started.
All downstream (IM4 TRANS and POPA/B) QPD PIT/YAW values are the same within 0.05 (range -1 to 1) before and after the IMC WFS centering. There is some NSUM change in IM4 TRANS, probably because this PD is clipped. Plot attached.
Mon Nov 11 10:17:12 2024 INFO: Fill completed in 17min 9secs
Jordan confirmed a good fill.
After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla
Conclusions: The NPRO glitches aren't new; something changed to make us less able to survive them in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream (81155).
Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with command 'lockloss select' or 'lockloss show 1415370858'.
Updated the list of things that have been checked above and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged both IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal locklosses and are (mostly) the only ones we saw before September.
Sheila and I went out to the floor to plug in the cable to run frequency noise injections (AO2).
I switched us to REFL B only (79037) to run the injections. I first ran the frequency noise injection template. Then, I disengaged the 10 Hz pole and engaged the digital gain to run the ISS injections (low, medium, and high frequency). I was fiddling with the IMC WFS pitch injection template to get it to higher frequency for a jitter injection when we lost lock.
Next step is to at least use the intensity and frequency measurements to determine how the noises are coupling to DARM (would be nice to have jitter too, but we'll see).
Commit 464b5256 in aligoNB git repo.
These results show that above 3 kHz, some of the intensity noise couples through frequency noise. There is no measurable intensity noise coupling through frequency noise at mid or low frequency.
The frequency noise projection and coupling in this plot are divided by √2 to account for the fact that frequency noise is measured on one sensor here, but usually is controlled on two REFL sensors. This may not be correct for high frequency (roughly above 4 kHz), where CARM is gain limited.
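For clarity, the √2 is the standard uncorrelated-sensor argument; a minimal sketch of the algebra, assuming equal, uncorrelated sensing noise on the two REFL sensors:

```latex
% Averaging two sensors halves each noise term; uncorrelated noises add in
% quadrature, so the averaged sensing noise ASD is down by sqrt(2):
\[
  e_{\mathrm{avg}} = f + \tfrac{1}{2}\left(n_A + n_B\right)
  \quad\Rightarrow\quad
  \sqrt{S_{n,\mathrm{avg}}} = \frac{\sqrt{S_n}}{\sqrt{2}} .
\]
```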
code lives in /ligo/gitcommon/NoiseBudget/simplepyNB/Laser_noise_projections.ipynb
TITLE: 11/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY:
Ryan C was called last night at 9:24 UTC.
When I walked in, Sheila was in the TeamSpeak chat, so maybe she was helping get it locked first thing in the morning.
NLN reached at 15:37 UTC
Observing reached at 15:40 UTC
H1 called for assistance at 09:29 UTC after the initial alignment timer expired due to the IMC unlocking and not relocking; we were in PREP_FOR_SRY. I took us to DOWN and the IMC locked; I then went back into locking, where DRMI locked very quickly and looked pretty decent.
lockloss at 10:07 UTC from LOWNOISE_ESD_ETMX, doesn't look like an IMC lockloss
11:03 UTC Observing
TITLE: 11/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
IFO is LOCKING at CHECK_AS_SHUTTERS
Overall a smooth shift: we were locked for two thirds of it until the PSL glitch most likely took us down. Since then, the glitching has been happening without much gap for relocking; we've had gaps of 20 or so minutes without glitching, but that's been the maximum so far since the lockloss. Lockloss alog 81181.
Other than that, we dropped out of OBSERVING very briefly due to a TCSY CO2 Laser SDF diff that fixed itself.
LOG:
None
Closes FAMIS 26016. Last checked in alog 80912.
Changes from last check:
Lockloss seemingly PSL-caused, judging from the IMC and ASC-A signals losing lock within 10 ms of each other (attached). About 700 ms after the glitching started, we lost lock. Interestingly, the IMC is not glitching for minutes post-lockloss as it usually does; the IMC locked and stayed locked in the new acquisition.
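For anyone repeating the timing check, a rough sketch using gwpy. The channel names are my guesses for the IMC transmission and AS port signals, and the 50%-drop threshold is arbitrary:

```python
from gwpy.timeseries import TimeSeries

t0 = 1415382893  # substitute the GPS of the lockloss under study
chans = ['H1:IMC-TRANS_OUT_DQ',         # IMC transmitted power (assumed name)
         'H1:ASC-AS_A_DC_NSUM_OUT_DQ']  # AS port DC power (assumed name)
for c in chans:
    ts = TimeSeries.get(c, t0 - 1, t0 + 0.5)
    n_base = int(ts.sample_rate.value * 0.5)
    base = ts.value[:n_base].mean()                  # pre-lockloss level
    dropped = ts.times.value[ts.value < 0.5 * base]  # first 50% drop
    print(c, 'drops at GPS', dropped[0] if len(dropped) else 'no drop found')
```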
Spoke way way (way) too soon. The glitching has continued since the lockloss, which is much longer than usual. I've been staying in DOWN and attempting to lock if we pass 5 minutes without glitching, which happened twice, only to lose lock to it around DRMI or FIND_IR...
Watching the channels closely for when it stops so I can actually lock. The screenshot shows that the longest time without glitching since the lockloss is that first 19 minutes.
TITLE: 11/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
The morning was a little shaky, but we persevered through the problematic IMC glitches to get to a lock by 22:35 UTC
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
18:30 | PEM | Robert | LVEA | Yes | Getting more spools of cable | 18:39 |
18:48 | PEM | Robert | CER | No | Checking cables | 20:10 |
21:29 | BBSS | Ibrahim | Staging Building | N | BBSS inventory | 00:29 |
TITLE: 11/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.37 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 22:35 UTC
Recovered from a massive 6.8 EQ. Nothing else of note.
This lockloss seems to have the AOM driver monitor glitching right before the lockloss (ndscope), similar to what I had noticed in the two other locklosses from this past weekend (81037).
03:45 UTC Observing
In 81073 (Tuesday, November 5) Jason/Ryan S gave the ISS more range to stop it running out of range and going unstable. But on Tuesday evening, Oli again saw this type of lockloss (81089). Not sure if we've seen it since then.
The channel to check in the ~second before the lockloss is H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ, I added this to the lockloss trends shown by the 'lockloss' command line tool via /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/psl_fss_imc.yaml
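A quick-look sketch of that check (assumes gwpy and LHO data access; the GPS time below is just an example lockloss from this log):

```python
from gwpy.timeseries import TimeSeries

t_lockloss = 1415382893  # example GPS; substitute the lockloss of interest
data = TimeSeries.get('H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ',
                      t_lockloss - 5, t_lockloss + 1)
plot = data.plot()
ax = plot.gca()
ax.set_ylabel('AOM driver mon')
ax.axvline(t_lockloss, color='r', linestyle='--')  # mark the lockloss time
plot.savefig('aom_driver_mon_lockloss.png')
```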
I checked all the "IMC" tagged locklosses since Wednesday 6th and didn't see any more of these.
This happened before we fixed the NPRO mode hopping problem on Wednesday, Nov 6th. Not seeing any more of these locklosses since then lends credence to our suspicion that the ISS was not responsible for these locklosses; the NPRO mode hopping was. (NPRO mode hops cause the power output from the PMC to drop; the ISS sees this drop and does its job by lowering the diffracted power %; once the diffracted power % hits 0, the ISS unlocks and/or goes unstable.)
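A sketch of checking that chain on a given lockloss. The diffraction channel name is the one I believe LHO uses, but verify it, and the threshold is arbitrary:

```python
from gwpy.timeseries import TimeSeries

t0 = 1415382893  # example lockloss GPS; substitute the one of interest
diff = TimeSeries.get('H1:PSL-ISS_DIFFRACTION_AVG', t0 - 30, t0 + 1)
print('minimum diffracted power [%]:', diff.value.min())
if diff.value.min() < 0.5:  # near zero -> ISS out of range / unstable
    print('Diffracted power bottomed out just before this lockloss')
```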
Reran all O4b locklosses on the lockloss tool with the most up-to-date code, which includes the "IMC" tag.
I used the lockloss tool data from the emergency OFI vent (back online ~23rd August) until today and did some Excel wizardry (attached) to make the two attached plots, showing the number of locklosses per day tagged with IMC and without the IMC tag. I made these plots for both locklosses from Observing only and all NLN locklosses (which can include commissioning/maintenance times).
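For reproducibility, roughly the same counting step in code instead of Excel, assuming a hypothetical CSV export of the lockloss tool with 'gps' and 'tags' columns:

```python
import pandas as pd
from gwpy.time import from_gps

df = pd.read_csv('o4b_locklosses.csv')  # hypothetical lockloss-tool export
df['date'] = df['gps'].apply(lambda t: from_gps(t).date())
df['imc'] = df['tags'].str.contains('IMC', na=False)
daily = df.groupby(['date', 'imc']).size().unstack(fill_value=0)
daily.plot(kind='bar', stacked=True)  # locklosses per day, split by IMC tag
```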
Attaching plots from zooming in on a few locklosses:
Time | Tags | Zoom Plot | Wide Plot |
2024-10-31 12:12:26.993164 UTC (IMC) | OBSERVE IMC REFINED | annotated plot | plot |
2024-10-30 12:16:33.504883 UTC (IMC) | OBSERVE IMC REFINED | annotated plot | plot |
2024-10-30 16:09:21.548828 UTC (Normal) | OBSERVE REFINED OMC_DCPD | N/A | annotated plot |
The 10-31 zoom plot notes the frame rates of the channels: ASC, REFL, and NPRO_PWR are 2 kHz and the GS13 is 4 kHz; the others are 16 kHz.
Since September 18th we've had 21 locklosses from NLN tagged as FSS_OSCILLATION; of these, 20 also had the IMC tag. Since September 12th, we've had 49 locklosses from NLN tagged IMC, so roughly 40% of these IMC locklosses also have the FSS oscillation tag. Since the NPRO was swapped, we don't have any tagged FSS_OSCILLATION.
(Reminder: the FSS_OSCILLATION tag is an old tag, intended for a different problem, but it flags times where the FSS fast mon goes above a threshold.)
Updated plot attached of NLN locklosses tagged with and without the IMC tag.