TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 26mph Gusts, 19mph 3min avg
Primary useism: 0.67 μm/s
Secondary useism: 0.77 μm/s
QUICK SUMMARY:
Long maintenance day, high microseism, high winds, M5.0 Mexico earthquake... but the IMC was just locked in the last few minutes!
TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
H1 was held in IDLE all day while we had a long maintenance day with a focus on the PSL's TTFSS repair.
The PSL Crew is still in the PSL room, re-installing the TTFSS.
Secondary microseism is still elevated, and the wind is still elevated as well.
Beckhoff updates were done today at 4PM as well.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 21:41 | SAF | Laser | LVEA | Yes | LVEA is laser HAZARD | 08:21 |
| 15:32 | FAC | Karen | Optics & VAC Prep Labs | No | Technical cleaning | 15:42 |
| 15:42 | FAC | Karen | EY | No | Technical cleaning | 17:01 |
| 15:55 | FAC | Nelli | HAM Shaq | No | Technical cleaning | 16:41 |
| 15:56 | FAC | Kim | EX | No | Technical cleaning | 17:21 |
| 16:30 | SEI | Neil, Jim, Fil | LVEA Biergarten | Yes | Installing geophone seismometer | 19:23 |
| 16:46 | 3IFO | Tyler | LVEA | Yes | 3IFO checks | 17:22 |
| 16:57 | PSL | Jason & Ryan S | PSL Room | Yes | Working on PSL, PMC, TTFSS | 18:41 |
| 17:11 | SUS | Rahul | EY & EX | No | SUS measurements | 18:39 |
| 17:13 | CDS | Richard | CER | No | Maximizing the window for maintenance | 17:16 |
| 17:15 | N2 | NorCo | Y arm | No | N2 fill; another Nor Cal truck also went down the Y arm | 19:15 |
| 17:22 | FAC | Tyler | Mid stations | No | 3IFO checks | 19:22 |
| 17:39 | PSL | Daniel | PSL racks | Yes | Removing power from old PSL rack equipment | 19:39 |
| 17:45 | VAC | Travis & Janos | EY & MX | No | Checking vacuum pump status in receiving areas | 19:39 |
| 17:51 | FAC | Kim & Karen | LVEA | Yes | Technical cleaning; Karen out early | 19:04 |
| 18:30 | SEI | Neil D. | LVEA | Yes | Taping a geophone to the floor | 18:50 |
| 18:57 | SEI | Neil | CER | No | Turning on a switch in the CER | 19:14 |
| 19:05 | FAC | Kim | High bay | No | Technical cleaning | 20:03 |
| 20:08 | VAC | Janos | EY, MX | No | Vacuum hardware check | 22:08 |
| 20:58 | PSL | Daniel | LVEA PSL racks | Yes | Checking on the FSS | 21:51 |
| 21:14 | FAC | Erik, Tyler | Mid Y | No | Handling the air; Tyler back early | 22:36 |
| 21:41 | ISC | Christina | EY, MX | No | Inventory | 22:16 |
| 21:47 | PSL | Ryan S & Jason | PSL | Yes | Replacing the TTFSS | 01:48 |
| 22:01 | VAC | Travis | MX | No | Vacuum systems check | 22:20 |
| 22:07 | EE | Fil | CER | No | Turning on high voltage | 22:17 |
| 23:22 | VAC | Janos | Mid X | No | Measuring vacuum system equipment | 23:34 |
| 23:47 | SEI | Jim | CER | No | Checking cables | 23:52 |
| 23:54 | CDS | Erik | CER | No | Getting a laptop | 00:00 |
FAMIS 26336
Just over 6 days ago there was an increase in H0:VAC-EX_FAN1_570_2_ACC_INCHSEC.
Both the H0:VAC-MR_FAN1_170_1 and _2 ACC_INCHSEC channels saw an increase in noise in the last 3 days.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 33mph Gusts, 26mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.85 μm/s
QUICK SUMMARY:
17:11 UTC Test T524646 EXTTRIG: SNEWS alert active
Maintenance agenda:
Saw 1 PSL glitch ~50 minutes after Jason/RyanS removed the TTFSS box and calibrated the H1:PSL-PWR_HPL_DC_OUT_DQ channel. Otherwise it was quiet for ~3 hours.
Trends here. The glitch is visible on the mixer with a peak-to-peak of +/-0.1, similar to what was seen yesterday with the FSS autolocker off. These are not the biggest mixer glitches we have seen, but they are among the larger ones (like glitch 3, or the end of glitch 2 yesterday). Jason notes the temperature feedback cable is still plugged in (though the FSS box has been removed).
We expect the mixer output to be fuzzier (bigger RMS) when the FSS is off, because the FSS is not running to control the laser frequency noise w.r.t. the refcav. So we should be seeing the free-running NPRO laser linewidth with the FSS off, which is expected to be larger than with the FSS on (i.e., no need to be thrown off by the mixer output fuzziness with the FSS off).
The VEAs at Mid X and Mid Y have four temperature sensors located inside the exterior wall. These four sensors are averaged by the automation system for the purpose of temperature control. The location in the exterior wall means all four of these sensors are influenced by outdoor ambient air and consistently read higher or lower than the interior of the VEA, depending on the season. At both mid stations, one of these sensors will be relocated to the interior of the room. Then, two of the sensors in the exterior wall will be taken out of the averaging, so that one interior sensor and one wall sensor will be used to compute the average space temperature. This will have an impact on temperature trends for the mid stations; although these are not critical like the temperatures at the corner and end stations, anyone keeping an eye on the trends will notice a significant change.
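As a rough illustration of the averaging change (a sketch only; the sensor names below are placeholders, not the real FMCS channels):

```python
# Hypothetical sketch of the mid-station space-temperature averaging change.
# Sensor names are placeholders for illustration, not actual FMCS channels.

def vea_avg_old(wall_1, wall_2, wall_3, wall_4):
    """Old scheme: average of the four exterior-wall sensors."""
    return (wall_1 + wall_2 + wall_3 + wall_4) / 4.0

def vea_avg_new(interior, wall):
    """New scheme: one relocated interior sensor plus one remaining wall sensor."""
    return (interior + wall) / 2.0

# Example: wall sensors biased warm by outdoor air read higher than the room
print(vea_avg_old(21.8, 21.9, 22.0, 21.7))   # dominated by wall readings
print(vea_avg_new(20.9, 21.8))               # interior sensor now has equal weight
```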
Image 1: PSL top eurocrate
From left to right: ISS inner loop, old ISS second loop (depowered), reference cavity heater (depowered), and TTFSS field box.
Image 2: PSL bottom eurocrate
From left to right: injection locking field box (depowered), injection locking servo (depowered), monitor field box, PMC locking field box, and PMC locking servo.
Tue Nov 12 10:07:17 2024 INFO: Fill completed in 7min 13secs
Jordan confirmed a good fill curbside.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 18mph Gusts, 14mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.78 μm/s
QUICK SUMMARY:
Good Maintenance Tuesday Morning Everyone.
H1 was in MOVE_SPOTS when I walked in and shortly thereafter caught a lockloss.
So the In_Lock_SUS_Charge measurements will not be run this week.
List of expected maintenance items:
After discussion with Tony we decided not to install his new PCAL Guardian nodes today, which means no DAQ restart today (my VACSTAT ini change is target-of-opportunity if a DAQ restart were to happen for other reasons)
H1 called for help when the initial alignment timer expired at 09:00 UTC. We were locking green arms with the Y arm struggling to lock; winds were moderately high and microseism was above the 90th percentile. I adjusted ETMY in pitch and it was finally able to lock. I finished the initial alignment at 09:32 UTC.
Struggling to hold the arms due to FSS oscillations and the wind knocking them out. According to windy.com, the wind isn't predicted to calm down until 15:00 UTC, and then only for an hour before it rises even higher than it is currently. The secondary microseism isn't helping either.
As of 10:30 UTC I can hold the arms, but the FSS keeps oscillating; holding in DOWN until it passes (it only took a few minutes).
10:47 UTC ENGAGE_ASC_FOR_FULL_IFO lockloss
11:07 UTC RESONANCE lockloss from DRMI losing lock
11:23 UTC RESONANCE lockloss again from DRMI
11:28 UTC FSS keeps oscillating
11:46 UTC PREP_ASC lockloss from the IMC
12:00 UTC ENGAGE_ASC lockloss
The 3-minute average for the wind is just over 20 mph with gusts over 25; combined with the secondary microseism fully above the 90th percentile, relocking this morning is unlikely.
12:35 UTC LOWNOISE_COIL_DRIVERS lockloss
TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
IFO is LOCKING at ACQUIRE_PRMI
For the first third of the shift, commissioners were troubleshooting and investigating the PSL glitch after a new finding emerged: the glitch is seen in the PMC mixer with the IMC offline and the ISS and FSS off (alog 81207 and its many comments). The IFO was in MAINTENANCE, wind gusts were over 30mph, and microseism was very high (troughs higher than the 90% line and peaks higher than the 100% line for secondary).
For the second third of the shift, I was attempting to lock since the winds had calmed down, but got caught up during initial alignment because the SRM M3 watchdog would trip incessantly during SRC align. I assume this is due to the microseism being so high, since this doesn't normally happen and it's the primary unique condition. After initial alignment finished, the IFO reached NLN fully automatically. After an SDF diff clearance (attached), we were observing for a whole 27 minutes!
For the final third of the shift, I was attempting to relock following a lockloss (alog 81215) caused by a 50mph wind gust that even my less-than-LIGO-sensitive ears could detect. I also confirmed it wasn't the PSL. I waited in DOWN until the gusts got back below 30mph and attempted to relock. The PSL glitch happened a few times in between (since the IMC was faulting while winds were low), but we got all the way to FIND_IR until another 38mph gust came through. Now, we're stably past ALS, but PRMI locks and unlocks every few minutes due to the environment.
LOG:
None
Lockloss due to a 50mph wind gust. I heard the wind shake the building and immediately thereafter, a lockloss.
I also specifically checked whether it was an IMC glitch, and it wasn't (the IMC and ASC channels lost lock at different times, ~250 ms apart, and the IMC locked right back up after the lockloss).
Short 37 min lock.
Ibrahim, Tony, Vicky, Ryan S, Jason
Two screenshots below answer the question of whether the PMC mixer glitches on its own with the FSS and ISS out. It does.
PSL team is having a think about what this implies.
Jason, Ryan, Ibrahim, Elenna, Daniel, Vicky - This test had only PSL + PMC for ~2 hours, from 2024/11/11 23:47:45 UTC to 02:06:14 UTC. No FSS, ISS, or IMC.
Mixer glitch#1 @ 1415406517 - 1st PSL PMC mixer REFL glitches. Again, only PSL + PMC. No FSS, ISS, IMC.
The squeezer was running TTFSS, which locks the squeezer laser frequency to the PSL laser frequency + 160 MHz offset. With SQZ TTFSS running, for glitch #1, the squeezer witnessed glitches in SQZ TTFSS FIBR MIXER, and PMC and SHG demod error signals.
This seems to suggest the squeezer is following real PSL free-running laser frequency glitches, since there is no PSL FSS servo actuating on the PSL laser frequency.
- It also suggests the 35 MHz PSL (+SQZ) PMC LO VCO is not the main issue, since the squeezer witnesses the PSL glitches in the SQZ FIBR MIXER.
- It also suggests the PMC PZT HV is not the issue. Without the PSL FSS, any PMC PZT HV glitches should not become PSL laser frequency glitches. Caveat: the cabling was not disconnected (the test was run from the control room), so analog glitches could still propagate.
Mixer glitch #2 @ 1415408786. Trends here.
Somehow looks pretty different than glitch #1. PSL-PMC_MIXER glitches are not clearly correlated with NPRO power changes. SQZ-FIBR_MIXER sees the glitches, and SQZ-PMC_REFL_RF35 also sees glitches. But notably the SHG_RF24 does NOT see the glitches, unlike before in glitch #1.
For the crazy glitches at ~12 minutes (end of scope) - the SQZ TTFSS MIXER + PMC + SHG all see the big glitch, and there seem to be some (weak) NPRO power glitches too.
Mixer glitches #3 @ 1415409592 - Trends. Ryan and I are wondering if these are the types of glitches that bring the IMC down, but checking the earlier IMC lockloss (Tony, 81199), we can't tell from the trends.
Here - huge PSL freq glitches that don't obviously correlate with PSL NPRO power changes (though maybe a slight step after the first round of glitches?). But these PSL glitches are clearly observed across the squeezer TTFSS + PMC + SHG signals (literally everywhere).
The scope I'm using is at /ligo/home/victoriaa.xu/ndscope/PSL/psl_sqz_glitches.yaml (sorry it runs very slow).
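For anyone who wants to pull the same signals without the (slow) ndscope template, a minimal gwpy sketch along these lines should work; the channel names here are assumptions and should be checked against the template:

```python
# Minimal sketch for grabbing data around the mixer glitch times noted above.
# Channel names are assumptions; verify against psl_sqz_glitches.yaml.
from gwpy.timeseries import TimeSeriesDict

glitch_gps = [1415406517, 1415408786, 1415409592]   # glitches #1, #2, #3

channels = [
    'H1:PSL-PMC_MIXER_OUT_DQ',    # assumed PSL PMC mixer channel
    'H1:SQZ-FIBR_MIXER_OUT_DQ',   # assumed SQZ TTFSS fiber mixer channel
]

for gps in glitch_gps:
    # fetch 30 s either side of each glitch
    data = TimeSeriesDict.get(channels, gps - 30, gps + 30)
    for name, ts in data.items():
        print(gps, name, 'min:', ts.value.min(), 'max:', ts.value.max())
```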
Thinking about the overall set of tests done today, see the annotated trends.
We seem to be having lots of locklosses during the TRANSITION_FROM_ETMX and LOWNOISE_ESD_ETMX guardian states. With Camilla's help, I looked through the lockloss tool to see whether these are related to the IMC locklosses or not. "TRANSITION_FROM_ETMX" and "LOWNOISE_ESD_ETMX" are states 557 and 558, respectively.
For reference, transition from ETMX is the state where DARM control is shifted to IX and EY, and the ETMX bias voltage is ramped to our desired value. Then control is handed back over to EX. Then, in lownoise ESD, a low pass is engaged and the feedback to IX and EY is disengaged.
In total since the start of O4b (April 10), there have been 26 locklosses from "transition from ETMX"
From the start of O4b (April 10) to now, there have been 22 locklosses from "lownoise ESD ETMX"
Trending the microseismic channels, the increase in microseism seems to correspond with the worsening of this lockloss rate for lownoise ESD ETMX. I think, when correcting for the IMC locklosses, the transition-from-ETMX lockloss rate is about the same. However, the lownoise ESD ETMX lockloss rate has increased significantly.
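For reference, a rough way to reproduce counts like these without the lockloss tool is to scan the ISC_LOCK guardian state channel; the sketch below is an approximation (minute trends, assumed thresholds), not the tool's actual method:

```python
# Rough sketch only -- NOT the lockloss tool. Flags minutes where the ISC_LOCK
# state reached 557/558 and fell back to a low (DOWN-ish) state within the
# same minute, using minute trends of the guardian state number.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 'Apr 10 2024', 'Nov 12 2024'
smax = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N.max,m-trend', start, end)
smin = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N.min,m-trend', start, end)

# Threshold of 100 for "back near DOWN" is an assumption for illustration.
candidates = np.isin(smax.value, [557, 558]) & (smin.value < 100)
print('candidate lockloss minutes from states 557/558:', int(candidates.sum()))
```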
Here is a look at what these locklosses actually look like.
I trended the various ETMX, ETMY and ITMX suspension channels during these transitions. The first two attachments here show a side-by-side set of scopes, with the left showing a successful transition from Nov 10, and the right showing a failed transition from Nov 9. It appears that in both cases, the ITMX L3 drive has a ~1 Hz oscillation that grows in magnitude until the transition. Both the good and bad times show a separate oscillation right at the transition. This occurs right at the end of state 557, so the locklosses within 1 or 2 seconds of state 558 likely result from whatever the steps in state 557 do. The second screenshot zooms in to highlight that the successful transition has a different oscillation that rings down, whereas the unsuccessful transition fails right where this second ring-up occurs.
I grabbed a quick trend of the microseism, and it looks like the ground motion approximately doubled around Sept 20 (third screenshot). I grabbed a couple of recent successful and unsuccessful transitions since Sept 20, and they all show similar behavior. A successful transition from Sept 19 (fourth attachment) does not show the first 1 Hz ring up, just the second fast ring down after the transition.
I tried to look back at a time before the DAC change at ETMX, but I am having trouble getting any data that isn't minute trend. I will keep trying for earlier times, but this already indicates an instability in this transition that our long ramp times are not avoiding.
Thanks to some help from Erik and Jonathan, I was able to trend raw data from Nov 2023 (hint: use nds2). Around Nov 8 2023, the microseism was elevated, ~300 counts average on the H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M channel, compared to ~400 counts average this weekend. The attached scope compares the transition on Nov 8 2023 (left) versus this weekend, Nov 10 (right). One major difference here is that we now have a new DARM offloading scheme. A year ago, it appears that this transition also involved some instability causing an oscillation in ETMX L3, but the instability now creates a larger disturbance that rings down in both the ITMX L3 and ETMX L3 channels.
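For anyone else fighting the minute-trend limitation, a minimal nds2 fetch along these lines works for pulling raw data from 2023; the hostname, GPS window, and channel list here are illustrative only:

```python
# Minimal nds2 fetch sketch for raw (full-rate) data from late 2023.
# Host, GPS window, and channels are illustrative assumptions.
import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)

# roughly Nov 8 2023 UTC; pick exact times from the trends
gps_start = 1383436818
gps_stop = gps_start + 600

chans = ['H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ',          # assumed ETMX L3 drive channel
         'H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M']    # microseism BLRMS

for buf in conn.fetch(gps_start, gps_stop, chans):
    print(buf.channel.name, buf.channel.sample_rate, len(buf.data))
```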
Final thoughts for today:
Sheila and I took a look at these plots again. It appears that the 3 Hz oscillation has the potential to saturate L3, which might be causing the lockloss. The ring-up repeatedly occurs within the first two seconds of the gain ramp, which is set to a 10 second ramp. We decided to shorten this ramp to 2 seconds. I created a new variable on line 5377 in the ISC guardian called "etmx_ramp" and set it to 2. The ramps from the EY/IX to EX L3 control are now set to this value, as well as the timer. If this is bad, this ramp variable can be changed back.
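For the record, the change is conceptually along these lines; this is a hypothetical fragment only (the real code is in ISC_LOCK around line 5377), and the filter-bank names and gains are placeholders:

```python
# Hypothetical fragment in the style of ISC_LOCK guardian code -- NOT the real
# state implementation. `ezca` and `self.timer` are provided by the guardian
# framework; filter names and gain values below are placeholders.
etmx_ramp = 2  # seconds; the ramp (and its timer) previously used 10 s

# inside the relevant GuardState.main():
ezca.get_LIGOFilter('SUS-ETMY_L3_LOCK_L').ramp_gain(0, ramp_time=etmx_ramp, wait=False)
ezca.get_LIGOFilter('SUS-ITMX_L3_LOCK_L').ramp_gain(0, ramp_time=etmx_ramp, wait=False)
ezca.get_LIGOFilter('SUS-ETMX_L3_LOCK_L').ramp_gain(1, ramp_time=etmx_ramp, wait=False)
self.timer['ETMXramp'] = etmx_ramp  # wait the same length as the ramp
```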
Sheila and I went out to the floor to plug in the cable to run frequency noise injections (AO2).
I switched us to REFL B only (79037) to run the injections. I first ran the frequency noise injection template. Then, I disengaged the 10 Hz pole and engaged the digital gain to run the ISS injections (low, medium, and high frequency). I was fiddling with the IMC WFS pitch injection template to get it to higher frequency for a jitter injection when we lost lock.
Next step is to at least use the intensity and frequency measurements to determine how the noises are coupling to DARM (would be nice to have jitter too, but we'll see).
Commit 464b5256 in aligoNB git repo.
These results show that above 3 kHz, some of the intensity noise couples through frequency noise. There is no measurable intensity noise coupling through frequency noise at mid or low frequency.
The frequency noise projection and coupling in this plot are divided by rt2 to account for the fact that frequency noise is measured on one sensor here, but usually is controlled on two REFL sensors. This may not be correct for high frequency (roughly above 4 kHz), where CARM is gain limited.
code lives in /ligo/gitcommon/NoiseBudget/simplepyNB/Laser_noise_projections.ipynb
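For clarity, the rt2 correction amounts to something like the following; this is a standalone sketch with placeholder arrays and names, not the notebook's actual variables:

```python
# Sketch of the rt2 (sqrt(2)) correction described above. Arrays are
# placeholders standing in for measured quantities, not real data.
import numpy as np

freqs = np.logspace(1, 4, 500)                      # Hz
freq_coupling_single = 1e-9 * np.ones_like(freqs)   # coupling measured on REFL B only
freq_noise_asd = 1e-4 / freqs                       # placeholder frequency noise ASD

# In normal operation CARM is sensed on two REFL diodes, so the single-sensor
# coupling (and hence the projection) is divided by sqrt(2).
freq_coupling_two_sensor = freq_coupling_single / np.sqrt(2)
darm_projection = np.abs(freq_coupling_two_sensor) * freq_noise_asd
```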
Closes FAMIS #28454; last checked in 80607.
CO2 trends looking good (ndscope1)
HWS trends looking good (ndscope2)
You can see in the trends when the ITMY laser was swapped about 15-16 days ago.
Trend shows that the ITMY HWS code stopped running. I restarted it.
Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (Erik said this is related to a display); see attached.
We checked that the memory on h1hwsmsr1 is fine. Erik traced this back to matplotlib trying to make a plot and failing because there was no display to make the plot on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it's trying to make this plot and thinks there's a display, but then can't plot to it. This error isn't happening on h1hwsmsr (ITMX). I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the streamed image to show), but Erik found this was making the session think there was a display when there wasn't.
Fix: quit the tmux session, log in without those options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
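As an aside, a backend-level guard like the one below would also keep matplotlib from looking for a display; this is only an illustration of the failure mode, not the change that was made on h1hwsmsr1:

```python
# Illustrative guard only -- not the fix that was applied (which was logging in
# without X forwarding). Forcing a non-interactive backend lets contour plots
# render off-screen even if the session inherits a stale DISPLAY from ssh -Y.
import matplotlib
matplotlib.use('Agg')          # must be set before pyplot is imported
import matplotlib.pyplot as plt
```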