Image 1: PSL top eurocrate
From left to right: ISS inner loop, old ISS second loop (depowered), reference cavity heater (depowered), and TTFSS field box.
Image 2: PSL bottom eurocrate
From left to right: injection locking field box (depowered), injection locking servo (depowered), monitor field box, PMC locking field box, and PMC locking servo.
Tue Nov 12 10:07:17 2024 INFO: Fill completed in 7min 13secs
Jordan confirmed a good fill curbside.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 18mph Gusts, 14mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.78 μm/s
QUICK SUMMARY:
Good Maintenance Tuesday Morning Everyone.
H1 was in move spots when I walked in and shortly thereafter caught a lockloss.
So In_Lock_SUS_Charge Measurements will not be run this week.
List of expected Maintenance Items:
After discussion with Tony we decided not to install his new PCAL Guardian nodes today, which means no DAQ restart today (my VACSTAT ini change is target-of-opportunity if a DAQ restart were to happen for other reasons)
H1 called for help when the initial alignment timer expired at 09:00 UTC; we were locking green arms with the Y-arm struggling to lock. Winds were semi-high and microseism is above the 90th percentile. I adjusted ETMY in pitch and it was finally able to lock. I finished the IA at 09:32 UTC.
Struggling to hold the arms due to FSS oscillations and the wind knocking them out. Per windy.com, the wind isn't predicted to calm down until 15:00 UTC, and then only for an hour before it rises even higher than it is currently. The secondary microseism isn't helping either.
As of 10:30 UTC I can hold the arms, but the FSS keeps oscillating; holding in DOWN until it passes (it only took a few minutes).
10:47 ENGAGE_ASC_FOR_FULL_IFO lockloss
11:07 UTC RESONANCE lockloss from DRMI losing it
11:23 UTC RESONANCE lockloss again from DRMI
11:28 UTC FSS keeps oscillating
11:46 PREP_ASC lockloss from the IMC
12:00 UTC ENGAGE_ASC lockloss
The 3-minute average for the wind is just over 20 mph with gusts over 25; combined with the secondary microseism fully above the 90th percentile, relocking this morning is unlikely.
12:35 UTC LOWNOISE_COIL_DRIVERS lockloss
TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
IFO is LOCKING at ACQUIRE_PRMI
For the first third of shift, commissioners were troubleshooting and investigating the PSL glitch after a new finding emerged: the glitch is seen in the PMC mixer with the IMC offline and the ISS and FSS off (alog 81207 and its many comments). IFO was in MAINTENANCE, wind gusts were over 30mph, and microseism was very high (troughs higher than the 90th-percentile line and peaks higher than the 100th-percentile line for secondary).
For the second third of shift, I was attempting to lock since the winds had calmed down, but got caught up during initial alignment since the SRM M3 watchdog would trip incessantly during SRC align. I assume this is due to the microseism being so high, since this doesn't normally happen and that's the main condition that's different. After initial alignment finished, IFO reached NLN fully automatically. After an SDF diff clearance (attached), we were observing for a whole 27 minutes!
For the third third of shift, I was attempting to relock following a lockloss (alog 81215) caused by a 50mph wind gust that even my less-than-LIGO-sensitive ears could detect. I also confirmed it wasn't the PSL. I waited in DOWN until the gusts got back below 30mph and attempted to relock. The PSL glitch happened a few times in between (since the IMC was faulting but winds were low), but we got all the way to FIND_IR until another 38mph gust came through. Now, we're stably past ALS but PRMI locks and unlocks every few minutes due to the environment.
LOG:
None
Lockloss due to a 50mph wind gust. I heard the wind shake the building and immediately thereafter, a lockloss.
I also checked specifically whether it was an IMC glitch, and it wasn't (IMC and ASC channels lost lock at different times, ~250ms apart, and the IMC relocked right after the lockloss).
Short 37 min lock.
Ibrahim, Tony, Vicky, Ryan S, Jason
Two screenshots below answering the question of whether the PMC Mixer glitches on its own with FSS and ISS out. It does.
PSL team is having a think about what this implies.
Jason, Ryan, Ibrahim, Elenna, Daniel, Vicky - This test had only PSL + PMC for ~2 hours, from 2024/11/11 23:47:45 UTC to 02:06:14 UTC. No FSS, ISS, or IMC.
Mixer glitch #1 @ 1415406517 - 1st PSL PMC mixer REFL glitches. Again, only PSL + PMC. No FSS, ISS, IMC.
The squeezer was running TTFSS, which locks the squeezer laser frequency to the PSL laser frequency + 160 MHz offset. With SQZ TTFSS running, for glitch #1, the squeezer witnessed glitches in SQZ TTFSS FIBR MIXER, and PMC and SHG demod error signals.
This seems to suggest the squeezer is following real PSL free-running laser frequency glitches, since there is no PSL FSS servo actuating on the PSL laser frequency.
- also suggests the 35 MHz PSL (+SQZ) PMC LO VCO is not the main issue, since SQZ witnesses the PSL glitches in the SQZ FIBR MIXER.
- also suggests PMC PZT HV is not the issue. Without PSL FSS, any PMC PZT HV glitches should not become PSL laser frequency glitches. Caveat: the cabling was not disconnected (the test was done from the control room), so analog glitches could still propagate.
Mixer glitch #2 @ 1415408786. Trends here.
Somehow looks pretty different from glitch #1. PSL-PMC_MIXER glitches are not clearly correlated with NPRO power changes. SQZ-FIBR_MIXER sees the glitches, and SQZ-PMC_REFL_RF35 also sees glitches. But notably the SHG_RF24 does NOT see the glitches, unlike in glitch #1.
For the crazy glitches at ~12 minutes (end of scope) - the SQZ TTFSS MIXER + PMC + SHG all see the big glitch, and there seem to be some (weak) NPRO power glitches too.
Mixer glitches #3 @ 1415409592 - Trends. Ryan and I are wondering if these are the types of glitches that bring the IMC down? But, checking the earlier IMC lockloss (tony 81199), can't tell from trends.
Here - huge PSL freq glitches that don't obviously correlate with PSL NPRO power changes (though maybe a slight step after the first round of glitches?). But these PSL glitches are clearly observed across the squeezer TTFSS + PMC + SHG signals (literally everywhere).
The scope I'm using is at /ligo/home/victoriaa.xu/ndscope/PSL/psl_sqz_glitches.yaml (sorry it runs very slow).
Thinking about the overall set of tests done today, see annotated trends.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1 is holding ISC_LOCK in IDLE with the IMC locked, BUT with the ISS and the FSS turned OFF.
Note: the FSS has to be ON to lock the IMC; once locked, turn it off.
Using this command to open the ndscope: ndscope ~/../sheila.dwyer/ndscope/PSL/PSL_fast_channels.yaml
We are watching H1:PSL-PMC_MIXER_OUT_DQ for glitches.
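For posterity, a minimal offline sketch of the same check (assuming gwpy is available on the workstations; the GPS span, taken around mixer glitch #1, and the threshold factor are illustrative, not what we actually used):

```python
# Sketch only: flag PMC mixer excursions by thresholding against the RMS.
from gwpy.timeseries import TimeSeries

start, end = 1415406000, 1415406600   # illustrative span around glitch #1
mixer = TimeSeries.get('H1:PSL-PMC_MIXER_OUT_DQ', start, end)

rms = mixer.value.std()
threshold = 8 * rms                   # assumed factor; tune by eye
glitch_samples = mixer.times.value[abs(mixer.value) > threshold]
print(len(glitch_samples), 'samples above', threshold)
```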
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
16:20 | FAC | Karen | Optics & Vac Prep | N | Technical Cleaning | 16:47 |
16:43 | PSL-ISS | Sheila | Ctrl Rm | N | ISS investigations. | 19:43 |
16:47 | FAC | Karen | MY | N | Technical cleaning | 17:57 |
17:14 | PSL-ISS | Sheila & Elenna | LVEA | YES | Plugging in a cable for ISS injection | 17:27
18:39 | FAC | Kim | MX | N | Technical Cleaning | 19:27 |
I was trying to trace some of the noise back to when it started, and noticed an increase in the magnitude of the peak noise on the PMC and the Bullseye.
We seem to be having lots of locklosses during the transition from ETMX/lownoise ESD ETMX guardian states. With Camilla's help, I looked through the lockloss tool to see if these are related to the IMC locklosses or not. "TRANSITION_FROM_ETMX" and "LOWNOISE_ESD_ETMX" are states 557 and 558 respectively.
For reference, transition from ETMX is the state where DARM control is shifted to IX and EY, and the ETMX bias voltage is ramped to our desired value. Then control is handed back over to EX. Then, in lownoise ESD, some low pass is engaged and the feedback to IX and EY is disengaged.
In total since the start of O4b (April 10), there have been 26 locklosses from "transition from ETMX"
From the start of O4b (April 10) to now, there have been 22 locklosses from "lownoise ESD ETMX"
Trending the microseismic channels, the increase in microseism seems to correspond with the worsening of this lockloss rate for lownoise ESD ETMX. I think, when correcting for the IMC locklosses, the transition from ETMX lockloss rate is about the same. However, the lownoise ESD ETMX lockloss rate has increased significantly.
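For anyone repeating the count without the site lockloss tool, a rough sketch of the tally (the lockloss GPS times below are placeholders; the guardian state channel follows the usual GRD naming convention):

```python
# Sketch only: tally which ISC_LOCK state each lockloss came from.
from collections import Counter
from gwpy.timeseries import TimeSeries

lockloss_gps = [1415300000, 1415350000]   # placeholder times for illustration

counts = Counter()
for t in lockloss_gps:
    state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', t - 5, t)
    counts[int(state.value[-1])] += 1     # state just before the lockloss

print('transition from ETMX (557):', counts[557])
print('lownoise ESD ETMX (558):   ', counts[558])
```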
Here is a look at what these locklosses actually look like.
I trended the various ETMX, ETMY and ITMX suspension channels during these transitions. The first two attachments here show a side by side set of scopes, with the left showing a successful transition from Nov 10, and the right showing a failed transition from Nov 9. It appears that in both cases, the ITMX L3 drive has a ~1 Hz oscillation that grows in magnitude until the transition. Both the good and bad times show a separate oscillation right at the transition. This occurs right at the end of state 557, so the locklosses within 1 or 2 seconds of state 558 are likely the result of whatever the steps at the end of state 557 do. The second screenshot zooms in to highlight that the successful transition has a different oscillation that rings down, whereas the unsuccessful transition fails right where this second ring-up occurs.
I grabbed a quick trend of the microseism, and it looks like the ground motion approximately doubled around Sept 20 (third screenshot). I grabbed a couple of recent successful and unsuccessful transitions since Sept 20, and they all show similar behavior. A successful transition from Sept 19 (fourth attachment) does not show the first 1 Hz ring up, just the second fast ring down after the transition.
I tried to look back at a time before the DAC change at ETMX, but I am having trouble getting any data that isn't minute trend. I will keep trying for earlier times, but this already indicates an instability in this transition that our long ramp times are not avoiding.
Thanks to some help from Erik and Jonathan, I was able to trend raw data from Nov 2023 (hint: use nds2). Around Nov 8 2023, the microseism was elevated, ~300 counts average on the H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M channel, compared to ~400 counts average this weekend. The attached scope compares the transition on Nov 8 2023 (left) versus this weekend, Nov 10 (right). One major difference here is that we now have a new DARM offloading scheme. A year ago, it appears that this transition also involved some instability causing some oscillation in ETMX L3, but the instability now creates a larger disturbance that rings down in both the ITMX L3 and ETMX L3 channels.
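For reference, a rough sketch of the nds2 pull (the server name and the SUS channel name are assumptions; the BLRMS channel is the one quoted above, and the GPS start is only approximately Nov 8 2023):

```python
# Sketch only: fetch full-rate data older than what dataviewer trends offer.
import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
channels = ['H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M',
            'H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ']   # second name is illustrative
start = 1383440000                               # roughly Nov 8 2023
stop = start + 60

for buf in conn.fetch(start, stop, channels):
    print(buf.channel.name, buf.channel.sample_rate, buf.data.mean())
```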
Final thoughts for today:
Sheila and I took a look at these plots again. It appears that the 3 Hz oscillation has the potential to saturate L3, which might be causing the lockloss. The ring-up repeatedly occurs within the first two seconds of the gain ramp, which is set to a 10 second ramp. We decided to shorten this ramp to 2 seconds. I created a new variable on line 5377 in the ISC guardian called "etmx_ramp" and set that to 2. The ramps from the EY/IX to EX L3 control are now set to this value, as well as the timer. If this is bad, this ramp variable can be changed back.
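For anyone reading this later, a hedged sketch of the shape of the change (not a copy of ISC_LOCK; the filter-bank channel names and the state body are illustrative):

```python
from guardian import GuardState
# ezca is provided by the guardian environment at runtime

etmx_ramp = 2  # seconds; the ramp (and matching timer) used to be 10

class LOWNOISE_ESD_ETMX(GuardState):
    def main(self):
        # ramp EY/IX L3 control off and EX L3 control back on over etmx_ramp
        for optic in ['ETMY', 'ITMX', 'ETMX']:
            ezca['SUS-%s_L3_LOCK_L_TRAMP' % optic] = etmx_ramp
        ezca['SUS-ETMY_L3_LOCK_L_GAIN'] = 0
        ezca['SUS-ITMX_L3_LOCK_L_GAIN'] = 0
        ezca['SUS-ETMX_L3_LOCK_L_GAIN'] = 1
        self.timer['ramp'] = etmx_ramp   # wait matches the ramp length

    def run(self):
        return self.timer['ramp']
```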
Closes FAMIS#28454, last checked 80607
CO2 trends looking good (ndscope1)
HWS trends looking good (ndscope2)
You can see in the trends when the ITMY laser was swapped about 15-16 days ago.
Trend shows that the ITMY HWS code stopped running. I restarted it.
Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (Erik said it's related to a display), attached.
We checked that the memory on h1hwsmsr1 is fine. Erik traced this back to matplotlib trying to make a plot and failing because there was no display to make the plot on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it thinks there's a display but then can't plot to it. This error isn't happening on h1hwsmsr (ITMX). I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the stream image to show), but Erik found this was making the session think there was a display when there wasn't.
Fix: quit the tmux session, log in without those options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
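An alternative belt-and-suspenders fix we could also consider (a sketch only, not the actual HWS code; the function and its arguments just mirror the names in the log) is to pin matplotlib to a non-interactive backend before pyplot is imported, so the contour call never looks for a display:

```python
import matplotlib
matplotlib.use('Agg')            # render off-screen; no X display needed
import matplotlib.pyplot as plt
import numpy as np

def get_extrema_from_gradients(grad_x, grad_y):
    # Illustrative stand-in for the hws_gradtools routine: contour the
    # gradient magnitude without ever opening a window.
    mag = np.hypot(grad_x, grad_y)
    fig, ax = plt.subplots()
    contours = ax.contour(mag)
    plt.close(fig)               # nothing is displayed or saved
    return contours
```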
In the process of investigating the locklosses due to FSS glitching and working on spare chassis for the FSS in the PSL, we compared the power spectrum of the PZT monitor between H1 and L1. We found some difference in the power spectrum, plot attached.
I discovered today that LLO is indeed doing some additional digital filtering on their FSS_FASTMON channel, which would very likely explain the difference in spectra Marc shows above. Just looking at the MEDM screen for the filter bank, it shows three filters in use (called "cts2V", "NPRO", and "toMHz") while LHO is using none; parameters of these are attached in a screenshot. I'm not entirely sure what the purpose of these is, but from what I can tell there is an additional pole at 10 Hz, which would explain the 1/f-looking drop in noise towards higher frequencies.
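As a sanity check of that reading, a quick sketch of the magnitude response of a single real pole at 10 Hz (unity DC gain assumed; the actual cts2V/NPRO/toMHz parameters are in the attached screenshot):

```python
import numpy as np
from scipy import signal

f = np.logspace(0, 4, 500)              # 1 Hz to 10 kHz
p = 2 * np.pi * 10                       # 10 Hz pole in rad/s
w, h = signal.freqs([p], [1, p], worN=2 * np.pi * f)   # H(s) = p / (s + p)

# response is flat below 10 Hz and rolls off ~1/f above it
for fi, hi in zip(f[::125], abs(h[::125])):
    print('%8.1f Hz   |H| = %.3e' % (fi, hi))
```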