Ibrahim, Tony, Vicky, Ryan S, Jason
Two screenshots below answer the question of whether the PMC mixer glitches on its own with the FSS and ISS out. It does.
PSL team is having a think about what this implies.
Jason, Ryan, Ibrahim, Elenna, Daniel, Vicky - This test had only PSL + PMC for ~2 hours, from 2024/11/11 23:47:45 UTC to 2024/11/12 02:06:14 UTC. No FSS, ISS, or IMC.
Mixer glitch #1 @ 1415406517 - 1st PSL PMC mixer REFL glitches. Again, only PSL + PMC. No FSS, ISS, IMC.
The squeezer was running TTFSS, which locks the squeezer laser frequency to the PSL laser frequency plus a 160 MHz offset. With SQZ TTFSS running, for glitch #1, the squeezer witnessed glitches in the SQZ TTFSS FIBR MIXER and in the PMC and SHG demod error signals.
This seems to suggest the squeezer is following real free-running PSL laser frequency glitches, since there is no PSL FSS servo actuating on the PSL laser frequency.
- also suggests the 35 MHz PSL (+SQZ) PMC LO VCO is not the main issue, since SQZ witnesses the PSL glitches in the SQZ FIBR MIXER.
- also suggests the PMC PZT HV is not the issue. Without the PSL FSS, any PMC PZT HV glitches should not become PSL laser frequency glitches. Caveat: the cabling was not disconnected (the test was done from the control room), so analog glitches could still propagate.
Mixer glitch #2 @ 1415408786. Trends here.
Somehow this looks pretty different from glitch #1. The PSL-PMC_MIXER glitches are not clearly correlated with NPRO power changes. SQZ-FIBR_MIXER sees the glitches, and SQZ-PMC_REFL_RF35 also sees them. But notably SHG_RF24 does NOT see the glitches, unlike in glitch #1.
For the crazy glitches at ~12 minutes (end of scope) - the SQZ TTFSS MIXER + PMC + SHG all see the big glitch, and there seem to be some (weak) NPRO power glitches too.
Mixer glitches #3 @ 1415409592 - Trends. Ryan and I are wondering if these are the types of glitches that bring the IMC down. But, checking the earlier IMC lockloss (tony 81199), we can't tell from the trends.
Here - huge PSL freq glitches that don't obviously correlate with PSL NPRO power changes (though maybe a slight step after the first round of glitches?). But these PSL glitches are clearly observed across the squeezer TTFSS + PMC + SHG signals (literally everywhere).
The scope I'm using is at /ligo/home/victoriaa.xu/ndscope/PSL/psl_sqz_glitches.yaml (sorry it runs very slow).
Thinking about the overall set of tests done today, see annotated trends.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1 is holding ISC_LOCK in IDLE with the IMC locked, BUT with the ISS and the FSS turned OFF.
Note: the FSS has to be ON to lock the IMC; once locked, turn it off.
Using this command to open the ndscope ~/../sheila.dwyer/ndscope/PSL/PSL_fast_channels.yaml
We are watching H1:PSL-PMC_MIXER_OUT_DQ for glitches.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
16:20 | FAC | Karen | Optics & Vac Prep | N | Technical Cleaning | 16:47 |
16:43 | PSL-ISS | Sheila | Ctrl Rm | N | ISS investigations. | 19:43 |
16:47 | FAC | Karen | MY | N | Technical cleaning | 17:57 |
17:14 | PSL-ISS | Sheila & Elenna | LVEA | YES | Plugging in a cable for ISS injection | 17:27 |
18:39 | FAC | Kim | MX | N | Technical Cleaning | 19:27 |
I was trying to trace some of the noise back to when it started, and noticed an increase in the magnitude of the peak noise on the PMC and the Bullseye.
TITLE: 11/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 31mph Gusts, 23mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.55 μm/s
QUICK SUMMARY:
IFO is in DOWN and CORRECTIVE MAINTENANCE
Due to microseism and high winds, we are unable to lock. As such, we're using this as an opportunity to troubleshoot and debug possible issues in the PSL/IMC locking story.
Camilla has a great summary of the PSL story so far in alog 81193.
We have been sitting with the ISS off and the IMC unlocked (MC2 misaligned), and we see that there are a lot of glitches and the reference cavity isn't staying locked.
The second screenshot shows that this glitch also shows up in the PMC mixer signal, and less obviously in the PMC high voltage and ISS channels.
We are now sitting with the reference cavity unlocked to see if we see something like this in the PMC without the FSS on, as of 22:05:49 UTC.
Edited to add: We sat with the PMC locked without seeing glitches for 30 minutes, then tried locking the FSS again. The glitches did not return in 30 minutes, so we can't say from this test if the glitches are from the FSS or the PMC.
TITLE: 11/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 26mph Gusts, 19mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
Lockloss from NLN during commissioning, tagged as windy: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1415382893
Holding H1 in DOWN with the IMC locked but the ISS turned off, to test if the ISS is the cause of the IMC glitches we have been seeing that break our locks.
The secondary Microseism is elevated and the Wind is gusting above 30 MPH, so it's a good time to try to test this.
IMC Lockloss @ 12:37:44 UTC even with the ISS turned off.
22:05:49 UTC the FSS autolocker H1:PSL-FSS_AUTOLOCK_ON was turned off.
We seem to be having lots of locklosses during the TRANSITION_FROM_ETMX and LOWNOISE_ESD_ETMX guardian states. With Camilla's help, I looked through the lockloss tool to see whether these are related to the IMC locklosses or not. "TRANSITION_FROM_ETMX" and "LOWNOISE_ESD_ETMX" are states 557 and 558, respectively.
For reference, transition from ETMX is the state where DARM control is shifted to IX and EY, and the ETMX bias voltage is ramped to our desired value. Then control is handed back over to EX. Then, in lownoise ESD, some low-pass filtering is engaged and the feedback to IX and EY is disengaged.
In total, since the start of O4b (April 10), there have been 26 locklosses from "transition from ETMX".
From the start of O4b to now, there have been 22 locklosses from "lownoise ESD ETMX".
Trending the microseismic channels, the increase in microseism seems to correspond with the worsening lockloss rate for lownoise ESD ETMX. I think, when correcting for the IMC locklosses, the transition from ETMX lockloss rate is about the same; however, the lownoise ESD ETMX lockloss rate has increased significantly.
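The per-state tallies above amount to a simple count over the lockloss tool's output. A minimal sketch of that counting, with made-up events standing in for the real lockloss-tool data (the function name and event format are illustrative, not the actual tool's API):

```python
from collections import Counter
from datetime import datetime, timezone

def lockloss_counts(events, start, end):
    """Count locklosses per guardian state within [start, end).

    events: iterable of (utc_datetime, state_name) pairs, e.g. as
    collected from the lockloss tool.  Returns state_name -> count.
    """
    return Counter(state for t, state in events if start <= t < end)

# Illustrative usage with made-up events (not real lockloss data):
utc = timezone.utc
events = [
    (datetime(2024, 5, 1, tzinfo=utc), "TRANSITION_FROM_ETMX"),
    (datetime(2024, 9, 25, tzinfo=utc), "LOWNOISE_ESD_ETMX"),
    (datetime(2024, 11, 9, tzinfo=utc), "LOWNOISE_ESD_ETMX"),
]
o4b = lockloss_counts(events,
                      datetime(2024, 4, 10, tzinfo=utc),   # start of O4b
                      datetime(2024, 11, 12, tzinfo=utc))
```

Splitting the same tally into before/after a microseism change (e.g. Sept 20) gives the rate comparison described above.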
Here is a look at what these locklosses actually look like.
I trended the various ETMX, ETMY, and ITMX suspension channels during these transitions. The first two attachments here show a side-by-side set of scopes, with the left showing a successful transition from Nov 10 and the right showing a failed transition from Nov 9. It appears that in both cases, the ITMX L3 drive has a ~1 Hz oscillation that grows in magnitude until the transition. Both the good and bad times show a separate oscillation right at the transition. This occurs right at the end of state 557, so the locklosses within 1 or 2 seconds of state 558 likely result from whatever the steps at the end of state 557 do. The second screenshot zooms in to highlight that the successful transition has a different oscillation that rings down, whereas the unsuccessful transition fails right where this second ring-up occurs.
I grabbed a quick trend of the microseism, and it looks like the ground motion approximately doubled around Sept 20 (third screenshot). I grabbed a couple of recent successful and unsuccessful transitions since Sept 20, and they all show similar behavior. A successful transition from Sept 19 (fourth attachment) does not show the first 1 Hz ring up, just the second fast ring down after the transition.
I tried to look back at a time before the DAC change at ETMX, but I am having trouble getting any data that isn't minute trend. I will keep trying for earlier times, but this already indicates an instability in this transition that our long ramp times are not avoiding.
Thanks to some help from Erik and Jonathan, I was able to trend raw data from Nov 2023 (hint: use nds2). Around Nov 8 2023, the microseism was elevated, ~300 counts average on the H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M channel, compared to ~400 counts average this weekend. The attached scope compares the transition on Nov 8 2023 (left) versus this weekend, Nov 10 (right). One major difference here is that we now have a new DARM offloading scheme. A year ago, this transition also appears to have involved some instability causing oscillation in ETMX L3, but the instability now creates a larger disturbance that rings down in both the ITMX L3 and ETMX L3 channels.
Final thoughts for today:
Sheila and I took a look at these plots again. It appears that the 3 Hz oscillation has the potential to saturate L3, which might be causing the lockloss. The ring-up repeatedly occurs within the first two seconds of the gain ramp, which was set to a 10 second ramp. We decided to shorten this ramp to 2 seconds. I created a new variable on line 5377 in the ISC guardian called "etmx_ramp" and set it to 2. The ramps from the EY/IX to EX L3 control are now set to this value, as well as the timer. If this is bad, this ramp variable can be changed back.
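As a sanity check on the reasoning behind shortening the ramp: assuming a simple linear gain ramp (the linear model is an assumption here, not the guardian's actual implementation), a 10 s ramp is only 20% complete at the ~2 s mark where the ring-up repeatedly starts, whereas a 2 s ramp has already finished by then.

```python
def gain_ramp(g_start, g_end, ramp_time, t):
    """Value of a linear gain ramp at time t (seconds after ramp start)."""
    if t >= ramp_time:
        return g_end
    return g_start + (g_end - g_start) * t / ramp_time

# With the old 10 s ramp, the gain is only 20% of the way through at
# t = 2 s, where the ring-up was repeatedly observed to start; with the
# new etmx_ramp = 2 s the ramp has already completed by then.
mid_old = gain_ramp(0.0, 1.0, 10.0, 2.0)  # 0.2
mid_new = gain_ramp(0.0, 1.0, 2.0, 2.0)   # 1.0
```

The hope is that completing the handoff before the oscillation has time to grow avoids the L3 saturation; if not, the etmx_ramp variable can simply be set back to 10.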
Checked that there was no significant downstream alignment shift after the IMC WFS were centered on September 10th 80018. Checking because this was the same week the "IMC" locklosses started.
All downstream (IM4 TRANS and POPA/B) QPD PIT/YAW values are the same within 0.05 (range -1 to 1) before and after the IMC WFS centering. There is some NSUM change in IM4 TRANS, probably because this PD is clipped. Plot attached.
Mon Nov 11 10:17:12 2024 INFO: Fill completed in 17min 9secs
Jordan confirmed a good fill.
After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla
Conclusions: The NPRO glitches aren't new; something changed to make us less able to survive them in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream 81155.
Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with the commands 'lockloss select' or 'lockloss show 1415370858'.
Updated the list of things that have been checked above and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal locklosses, and (mostly) the only ones we saw before September.
Sheila and I went out to the floor to plug in the cable to run frequency noise injections (AO2).
I switched us to REFL B (79037) only to run the injections. I first ran the frequency noise injection template. Then, I disengaged the 10 Hz pole and engaged the digital gain to run the ISS injections (low, medium, and high frequency). I was fiddling with the IMC WFS pitch injection template to get it to higher frequency for a jitter injection when we lost lock.
Next step is to at least use the intensity and frequency measurements to determine how the noises are coupling to DARM (would be nice to have jitter too, but we'll see).
Commit 464b5256 in aligoNB git repo.
These results show that above 3 kHz, some of the intensity noise couples through frequency noise. There is no measurable intensity noise coupling through frequency noise at mid or low frequency.
The frequency noise projection and coupling in this plot are divided by √2 to account for the fact that frequency noise is measured on one sensor here, but is usually controlled on two REFL sensors. This may not be correct at high frequency (roughly above 4 kHz), where CARM is gain limited.
code lives in /ligo/gitcommon/NoiseBudget/simplepyNB/Laser_noise_projections.ipynb
TITLE: 11/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY:
Ryan C was called last night at 9:24 UTC.
When I walked in, Sheila was in the TeamSpeak chat, so maybe she was helping out with getting it locked first thing in the morning.
NLN reached at 15:37 UTC
Observing reached at 15:40 UTC
H1 called for assistance at 09:29 UTC after the initial alignment timer expired from the IMC unlocking and not relocking; we were in PREP_FOR_SRY. I took us to DOWN and the IMC locked, then I went back into locking, where DRMI locked very quickly and looked pretty decent.
lockloss at 10:07 UTC from LOWNOISE_ESD_ETMX, doesn't look like an IMC lockloss
11:03 observing
This lockloss seems to have the AOM driver monitor glitching right before the lockloss (ndscope), similar to what I had noticed in the two other locklosses from this past weekend (81037).
03:45 UTC Observing
In 81073 (Tuesday 05 November) Jason/Ryan S gave the ISS more range to stop it from running out of range and going unstable. But on Tuesday evening, Oli again saw this type of lockloss 81089. Not sure if we've seen it since then.
The channel to check in the ~second before the lockloss is H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ, I added this to the lockloss trends shown by the 'lockloss' command line tool via /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/psl_fss_imc.yaml
I checked all the "IMC" tagged locklosses since Wednesday 6th and didn't see any more of these.
This happened before we fixed the NPRO mode hopping problem, which we did on Wednesday, Nov 6th. Not seeing any more of these locklosses since then lends credence to our suspicion that the ISS was not responsible for these locklosses; the NPRO mode hopping was. (NPRO mode hops cause the power output from the PMC to drop; the ISS sees this drop and does its job by lowering the diffracted power %; once the diffracted power % hits 0, the ISS unlocks and/or goes unstable.)
Reran all O4b locklosses on the lockloss tool with the most up-to-date code, which includes the "IMC" tag.
I used the lockloss tool data since the emergency OFI vent (back online ~23rd August) until today and did some Excel wizardry (attached) to make the two attached plots, showing the number of locklosses per day with and without the IMC tag. I made these plots for just locklosses from Observing and for all NLN locklosses (which can include commissioning/maintenance times), including:
Attaching plots from zooming in on a few locklosses:
Time | Tags | Zoom Plot | Wide Plot |
---|---|---|---|
2024-10-31 12:12:26.993164 UTC (IMC) | OBSERVE IMC REFINED | annotated plot | plot |
2024-10-30 12:16:33.504883 UTC (IMC) | OBSERVE IMC REFINED | annotated plot | plot |
2024-10-30 16:09:21.548828 UTC (Normal) | OBSERVE REFINED OMC_DCPD | N/A | annotated plot |
The 10-31 zoom plot notes the sample rates of the channels: ASC, REFL, and NPRO_PWR are 2 kHz and the GS13 is 4 kHz; the others are 16 kHz.
Since September 18th we've had 21 locklosses from NLN tagged FSS_OSCILLATION; of these, 20 also had the IMC tag. Since September 12th, we've had 49 locklosses from NLN tagged IMC, so roughly 40% of these IMC locklosses have the FSS oscillation tag. Since the NPRO was swapped, we don't have any tagged FSS_OSCILLATION.
(Reminder, the FSS_OSCILLATION tag is an old tag, intended for a different problem, but it tags times where the FSS fast mon goes above a threshold.)
Updated plot attached of NLN locklosses tagged with and without the IMC tag.
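The per-day counting behind these plots (splitting locklosses by the IMC tag) can be sketched in a few lines; the events and dates below are made up for illustration and are not the actual spreadsheet data:

```python
from collections import defaultdict
from datetime import date

def daily_imc_split(events):
    """Tally locklosses per day, split by presence of the 'IMC' tag.

    events: iterable of (utc_date, tags) pairs, where tags is a set of
    lockloss-tool tag strings.  Returns {date: {'IMC': n, 'no_IMC': m}}.
    """
    daily = defaultdict(lambda: {"IMC": 0, "no_IMC": 0})
    for day, tags in events:
        daily[day]["IMC" if "IMC" in tags else "no_IMC"] += 1
    return dict(daily)

# Illustrative usage with made-up events (not real lockloss data):
split = daily_imc_split([
    (date(2024, 10, 30), {"OBSERVE", "IMC", "REFINED"}),
    (date(2024, 10, 30), {"OBSERVE", "REFINED", "OMC_DCPD"}),
    (date(2024, 10, 31), {"OBSERVE", "IMC", "REFINED"}),
])
```

The same split, restricted to events also tagged OBSERVE, gives the "just locklosses from Observing" variant of the plot.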
TJ and I turned the CO2s up from 1.7 W to 2.0 W at 2024/09/10 15:10 UTC, before we lost lock for Tuesday maintenance. We then lost lock at 2024/09/10 15:18 UTC. We did this after the mention in 79989 that PRCL to CHARD coupling could be reduced by changing the CO2 power, but we didn't have time to think about whether we should make an injection.
Checked that when we turned the CO2s up for 7 minutes, this had no noticeable effect on the squeezing level or SQZ alignment; plot attached. But maybe 7 minutes isn't long enough to see any changes; we could try again with more time, or try turning the CO2s off before Tuesday maintenance.
Checking because LLO sees that squeezing and SQZ alignment/mode matching may be related to changes in CO2 power (or alignment); see 72244.