Back in March (76269), Jeff and I updated all the suspension watchdogs (besides OFIS, OPOS, and HXDS, since those were already up to date) to use better BLRMS filtering and to output in µm. We set the suspension watchdog thresholds to values between 100 and 300 µm, but those values were somewhat arbitrary since there was previously no way to see how far the stages move in different scenarios. We have since raised a few of the thresholds after suspensions tripped when they probably shouldn't have, and this is a continuation of that work.
During the large earthquake that hit us on December 5th, 2024 18:46 UTC, all ISI watchdogs tripped as well as some of the stages on several suspensions. After a cursory look, all suspensions that tripped only had either the bottom or bottom+penultimate stage trip, meaning that with the exception of the single suspensions, the others' M1 stage damping should have stayed on.
We wanted to go through and check whether the trips may have just been because of the movement from the ISIs tripping. If that is the case, we want to raise the suspension watchdog thresholds for those stages so that these suspensions don't trip every single time their ISI trips, especially if the amount that they are moving is still not very large.
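As a rough illustration of what these watchdogs compute (a minimal sketch only, not the front-end code; the band edges, sample rate, and calibration below are my assumptions), the check amounts to band-limiting each OSEM signal, converting to µm, taking the RMS, and comparing against the threshold:

```python
# Minimal sketch of a BLRMS-style watchdog check (not the actual front-end code).
# Band edges, sample rate, and calibration factor are placeholder assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 256            # OSEM signal sample rate [Hz] (assumed)
BAND = (0.1, 10.0)  # band-pass corners [Hz] (assumed)
THRESHOLD_UM = 150  # watchdog threshold in micrometers

def blrms_um(osem_counts, counts_to_um):
    """Band-limit the OSEM signal and return its RMS in micrometers."""
    sos = butter(4, BAND, btype="bandpass", fs=FS, output="sos")
    filtered = sosfilt(sos, osem_counts * counts_to_um)
    return np.sqrt(np.mean(filtered**2))

def watchdog_tripped(osem_counts, counts_to_um):
    """True if the band-limited RMS exceeds the threshold."""
    return blrms_um(osem_counts, counts_to_um) > THRESHOLD_UM
```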
Suspension stages that tripped:
Triples:
- MC3 M3
- PR3 M2, M3
- SRM M2, M3
- SR2 M2, M3
Singles:
- IM1 M1
- OFI M1
- TMSX M1
MC3 (M3) (ndscope1)
When the earthquake hit and we lost lock, all stages were moving due to the earthquake, but once the HAM2 ISI tripped 5 seconds after the lockloss, the rate at which the OSEMs were moving quickly increased, so the excess motion appears to be mainly due to the ISI trip (ndscope2).
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 86 | 150 (unchanged) |
M2 | 150 | 136 | 175 |
M3 | 150 | 159 | 200 |
PR3 (M2, M3) (ndscope3)
Looks to be the same issue as with MC3.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 72 | 150 (unchanged) |
M2 | 150 | 162 | 200 |
M3 | 150 | 151 | 200 |
SRM (M2, M3) (ndscope4)
Once again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 84 | 150 (unchanged) |
M2 | 150 | 165 | 200 |
M3 | 150 | 174 | 225 |
SR2 (M2, M3) (ndscope5)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM4 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 102 | 150 (unchanged) |
M2 | 150 | 182 | 225 |
M3 | 150 | 171 | 225 |
IM1 (M1) (ndscope6)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM2 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 175 | 225 |
OFI (M1) (ndscope7)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 209 | 250 |
TMSX (M1) (ndscope8)
This one seems a bit questionable - it looks like some of the OSEMs were already moving quite a bit before the ISI tripped, and there isn't as clear a point where they started moving more once the ISI had tripped (ndscope9). I will still raise the suspension trip threshold for this one, since it doesn't need to be raised very much and stays within a reasonable range.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 100 | 185 | 225 |
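For reference, the kind of check behind the "Max BLRMS reached after lockloss" columns above can be reproduced with a few lines of gwpy; the channel name and times below are placeholders and have not been checked against the actual H1 channel list:

```python
# Sketch of pulling the peak watchdog BLRMS after a lockloss.
# The channel name and GPS time are placeholders, not verified values.
from gwpy.timeseries import TimeSeries

channel = "H1:SUS-MC3_M3_WD_OSEMAC_BLRMS"   # hypothetical channel name
lockloss_gps = 1417458378                   # placeholder GPS time

# look at the two minutes following the lockloss
data = TimeSeries.get(channel, lockloss_gps, lockloss_gps + 120)
peak_um = data.max().value
print(f"Peak BLRMS in the 2 min after lockloss: {peak_um:.0f} um")
# A new threshold would then be chosen with some margin above this peak.
```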
We just had an earthquake come through and trip some of the ISIs, including HAM2, and with that IM2 tripped (ndscope1). I checked whether the movement in IM2 was caused by the ISI trip, and sure enough it was (ndscope2). I will be raising the suspension watchdog threshold for IM2 up to 200 µm.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 152 | 200 |
Yet another earthquake! The earthquake that hit us December 9th at 23:10 UTC tripped almost all of our ISIs, and we had three suspension stages trip as well, so here's another round of trying to figure out whether they tripped because of the earthquake or because of the ISI trips. The three suspensions that tripped are different from the ones I updated the thresholds for earlier in this alog.
I will not be making these changes right now since that would knock us out of Observing, but the next time we are down I will make the changes to the watchdog thresholds for these three suspensions.
Suspension stages that tripped:
- MC2 M3
- PRM M3
- PR2 M3
MC2 (M3) (ndscope1)
It's hard to tell for this one what caused M3 to trip (ndscope2), but I will up the threshold here for M3 anyway, since even if the trip was directly caused by the earthquake, the ISI tripping definitely wouldn't have helped!
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 88 | 150 (unchanged) |
M2 | 150 | 133 | 175 |
M3 | 150 | 163 | 200 |
PRM (M3) (ndscope3)
For this one it's pretty clear that it was because of the ISI tripping (ndscope4).
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 44 | 150 (unchanged) |
M2 | 150 | 122 | 175 |
M3 | 150 | 153 | 200 |
PR2 (M3) (ndscope5)
Again, for this one it's pretty clear that it was because of the ISI tripping (ndscope6).
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
M1 | 150 | 108 | 150 (unchanged) |
M2 | 150 | 129 | 175 |
M3 | 150 | 158 | 200 |
While the microseism has gone down a small bit, locking has still been difficult due to EQs.
First, two back-to-back EQs (mag 4s and 5s) killed a potential lock near NLN (LASER_NOISE_SUPPRESSION).
Then, we got to NLN and were OBSERVING for 36 mins, but another two back-to-back EQs (mag 4s and 5s) caused a lockloss.
Finally, just now while relocking, two larger EQs (mag 6.5 and 6.3) from Japan are coming through.
All of this is made much worse by the secondary microseism, which is already over 1 µm/s.
As such, I will bring H1 to DOWN and ENVIRONMENT and wait until the Earth rings down a bit.
Closes FAMIS 26329; last checked in alog 81539.
Laser Status:
NPRO output power is 1.849W
AMP1 output power is 70.27W
AMP2 output power is 137.2W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 11 days, 16 hr 6 minutes
Reflected power = 22.82W
Transmitted power = 105.8W
PowerSum = 128.7W
FSS:
It has been locked for 0 days 0 hr and 21 min
TPD[V] = 0.8091V
ISS:
The diffracted power is around 3.4%
Last saturation event was 0 days 0 hours and 21 minutes ago
Possible Issues: None reported
Lockloss 36 mins into NLN due to 2 back-to-back EQs (mag 4.5 and 5.8) that happened during very high microseism. Relocking now.
Sun Dec 08 10:12:22 2024 Fill completed in 12min 19secs
Note to VAC: Texts were sent for this fill, but no emails
Jonathan, Dave:
The alarms service on cdslogin stopped reporting around 1am this morning. The symptoms were that the status file was not being updated (causing the alarm block on the CDS Overview MEDM to turn PURPLE) and the report file was not being updated. Presumably no alarms would have been sent from this time onwards.
At 08:10 I restarted the alarms.service on cdslogin. A new report file was created but not written to, and the /tmp/alarm_status.txt file was not changed (still frozen at 01:00), but I did get a startup text. Then, 14 minutes later, the files started being written. I raised a test alarm and got a text, but no email.
At 09:38, after not getting a keepalive email at 09:00 or any SSH login emails, I rebooted cdslogin. Same behavior as at 08:10: report file created but not written to, tmp file not created, startup text sent successfully. After 14 minutes alarms started running and writing to the file system, and test alarms were texted, but no emails at all.
Jonathan is going to check on bepex.
Jonathan rebooted bepex which has fixed the no-email problem with alarms and alerts. I raised a test alarm and alert to myself and got both texts and emails.
Alarms got stuck again around noon today, presumably due to a recurring bepex issue. I have edited the code to skip trying to use bepex and to only use Twilio for texts. alarms.service was restarted on cdslogin at 16:48 PST.
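For future debugging, something like the staleness check sketched below could flag the frozen status file before the keepalive emails are missed; the path, age limit, and Twilio details are assumptions/placeholders, not what the alarms code actually does:

```python
# Sketch of a staleness check for the alarms status file.
# Path and age limit are assumptions; Twilio credentials/numbers are placeholders.
import os
import time
from twilio.rest import Client

STATUS_FILE = "/tmp/alarm_status.txt"
MAX_AGE_S = 15 * 60   # alert if the file hasn't been touched in 15 minutes (assumed)

def status_is_stale():
    """True if the status file is missing or older than MAX_AGE_S."""
    try:
        age = time.time() - os.path.getmtime(STATUS_FILE)
    except FileNotFoundError:
        return True
    return age > MAX_AGE_S

if status_is_stale():
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")       # placeholder credentials
    client.messages.create(
        to="+15095551234", from_="+15095556789",       # placeholder numbers
        body="alarms status file on cdslogin looks frozen")
```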
TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 20mph Gusts, 16mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.71 μm/s
QUICK SUMMARY:
IFO is in ENVIRONMENT and LOCKING. IFO stayed in DOWN all last night due to high winds and microseism.
Since last night, the microseism has leveled off and even gone down a bit. The wind hasn't changed much. Attempting to lock to see where we get to.
OBSERVING as of 18:12 UTC.
Got very close to NLN earlier (LASER_NOISE_SUPPRESSION) but lost lock due to 3 back-to-back mid-magnitude (4s and 5s) EQs that were exacerbated by very high microseism.
TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ/Oli
SHIFT SUMMARY: Currently unlocked. We've been sitting in DOWN for the past hour while the secondary microseism stays really high. We also currently have a smallish earthquake coming through. I will set the ifo to start trying to relock again.
Earlier while trying to relock, we were having issues with the ALSX crystal frequency. When this is a consistent issue we have to fix it by going out to the end station to adjust the crystal temperature. I trended the ALSX channels alongside the EX VEA temperatures, and it looks like a couple of the temperatures went down, with section D down by almost one degree F, right around when we started having crystal frequency issues. The wind was also blowing into the VEA, which we know since the dust counts were high at the time. I believe it's possible that the wind was cooling down the air in the part of the VEA near the ALS box and changing the temperature of the crystal enough to affect the beatnote. I only have this one screenshot right now (ndscope), but I trended back a few months and saw a possible correlation between when we get into the CHECK_CRYSTAL_FREQUENCY state for ALSX, the temperature inside the EX VEA, and the dust counts indicating wind entering the VEA. It's hard to know for sure, especially because the air/wind outside is now much colder than it was a couple months ago, but it would be interesting to know the location of the D section and look for these correlations more closely. Tagging ISC.
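A rough sketch of how that trend comparison could be scripted (both channel names are guesses on my part and would need checking against the real guardian and EX VEA temperature channels):

```python
# Sketch: trend the ALSX guardian state against an EX VEA temperature channel.
# Both channel names below are guesses / placeholders, not verified.
import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:GRD-ALS_XARM_STATE_N",           # guardian state number (placeholder name)
    "H1:PEM-EX_TEMP_VEA_SECTIOND_DEGF",  # section D VEA temperature (placeholder name)
]
start, end = "2024-12-01", "2024-12-09"   # arbitrary trend window

data = TimeSeriesDict.get(channels, start, end)

fig, axes = plt.subplots(2, 1, sharex=True)
for ax, name in zip(axes, channels):
    ts = data[name]
    ax.plot(ts.times.value, ts.value, label=name)
    ax.legend(loc="upper right")
axes[0].set_ylabel("guardian state")
axes[1].set_ylabel("temp [F]")
axes[1].set_xlabel("GPS time [s]")
plt.show()
```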
LOG:
22:15 started an initial alignment
22:40 initial alignment done, relocking
- ALSX beatnote issue - CHECK_CRYSTAL_FREQUENCY
- toggled force/no force
- finally caught with no force
- ALSX beatnote issue again
- toggled force/no force and enable/disable
00:01 Put ifo in DOWN since we can't get past DRMI due to the high microseism
00:29 tried relocking
01:06 back to DOWN
02:38 Trying relocking again
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:03 | PEM | Robert | LVEA | YES | Finish setting up for Monday | 23:48 |
TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is DOWN due to MICROSEISM/ENVIRONMENT since 22:09 UTC
First 6-7 hrs of shift were very calm and we were in OBSERVING for majority of the time.
The plan is to stay in DOWN and intermittently try to lock but the last few attempts have resulted in 6 pre-DRMI LLs with 0 DRMI acquisitions. Overall, microseism is just very high.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:03 | PEM | Robert | LVEA | YES | Finish setting up for Monday | 23:48 |
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09 |
TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.78 μm/s
QUICK SUMMARY:
Currently in DOWN and trying to wait out the microseism a bit. Thankfully wind has gone back down
Trying to gather more info about the nature of these M3 SRM WD trips in light of OWL Ops being called (at least twice in recent weeks) to press one button.
Relevant Alogs:
Part 1 of this investigation: 81476
Tony OWL Call: alog 81661
TJ OWL Call: alog 81455
TJ OWL Call: alog 81325
It's mentioned in some more OPS alogs but no new info.
Next Steps:
Lockloss @ 12/07 22:09 UTC. Possibly due to a gust of wind, since the wind at EY jumped from the lower 20s to almost 30 mph in the same minute as the lockloss. A possible contributor could also be the secondary microseism - it has been rising quickly over the last several hours and is now up to 2 µm/s.
Calibration sweep done using the usual wiki.
Broadband Start Time: 1417641408
Broadband End Time: 1417641702
Simulines Start Time: 1417641868
Simulines End Time: 1417643246
Files Saved:
2024-12-07 21:47:09,491 | INFO | Commencing data processing.
2024-12-07 21:47:09,491 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-12-07 21:47:46,184 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,191 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,196 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,200 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,205 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241207T212404Z.hdf5
ICE default IO error handler doing an exit(), pid = 2104592, errno = 32
PST: 2024-12-07 13:47:46.270025 PST
UTC: 2024-12-07 21:47:46.270025 UTC
GPS: 1417643284.270025
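For anyone cross-checking these timestamps, the UTC/GPS conversion can be reproduced with astropy (just a convenience sketch; the 18 s difference is the current leap-second count):

```python
# Convenience sketch: reproduce the UTC <-> GPS conversion for the stamp above.
from astropy.time import Time

t = Time("2024-12-07 21:47:46.270025", scale="utc")
print(t.gps)                                      # ~1417643284.27 (GPS leads UTC by 18 s)
print(Time(1417643284.270025, format="gps").utc.iso)
```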
Sat Dec 07 10:09:18 2024 INFO: Fill completed in 9min 15secs
TITLE: 12/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.44 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 11:47 UTC (4 hrs)
There was one lockloss last night and a known issue where SUS SRM WD trips during initial alignment. OWL was called (alog 81661) to untrip it.
TITLE: 12/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Aligning
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
IFO stuck in initial alignment because of an SRM watchdog (H1:SUS-SRM_M3_WDMON_STATE) trip.
The watchdog tripped while we were in initial alignment, not before, and was not due to ground motion.
I logged in and discovered the trip, reset the watchdog, and reselected myself for Remote OWL notifications.
SUS SDF drive align L2L gain change accepted.
Just commenting that this is not a new issue. TJ and I were investigating it earlier and had early thoughts that SRM was catching on the wrong mode during SRC alignment in ALIGN_IFO, either during the re-alignment of SRM (pre-SRC align) or after the re-misalignment of SRM. This results in the guardian thinking that SRC is aligned when it actually isn't, which leads to saturations and trips. Again, we think this is the case as of 11/25 but are still investigating. I have an alog about it here: 81476.
Ansel reported that a peak in DARM that interfered with the sensitivity to the Crab pulsar followed a similar time-frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale, and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don't usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
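As a sketch of this kind of time-frequency comparison (the microphone channel name, band, and times are placeholders, not the ones actually used), one can follow the peak frequency in each spectrogram and compare the tracks:

```python
# Sketch: compare the time-frequency track of a narrow peak in DARM and in a
# microphone channel by following the loudest bin in a band of each spectrogram.
# The microphone channel name, GPS span, and band are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1417300000, 1417303600      # placeholder GPS span
band = (95, 105)                         # placeholder band around the peak [Hz]

def peak_track(channel):
    """Return the peak frequency in `band` for each 30 s spectrogram slice."""
    data = TimeSeries.get(channel, start, end)
    spec = data.spectrogram(30).crop_frequencies(*band)
    return spec.frequencies.value[np.argmax(spec.value, axis=1)]

darm_track = peak_track("H1:GDS-CALIB_STRAIN")
mic_track = peak_track("H1:PEM-CS_MIC_LVEA_BS_DQ")   # placeholder mic channel
print(np.corrcoef(darm_track, mic_track)[0, 1])      # similarity of the two tracks
```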
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y but this did make the chiller wobbly so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much, since we had the chillers off for a long period on October 25th (80882) when we flushed the chiller line, and the issue was seen before that date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0 gpm to 3.7 gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller was already at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab band.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction) and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do indeed see a large peak move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
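For completeness, a sketch of the kind of band zoom used in these comparisons (the time spans and the exact Crab band edges below are my assumptions):

```python
# Sketch: overlay strain ASDs from before/after a flow-rate change in a narrow
# band around the Crab search frequency (~59.3 Hz; band edges and spans assumed).
import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries

channel = "H1:GDS-CALIB_STRAIN"
band = (59.0, 59.6)                      # approximate Crab band [Hz] (assumed edges)
spans = {"Dec 07": ("2024-12-07 08:00", "2024-12-07 12:00"),
         "Dec 18": ("2024-12-18 08:00", "2024-12-18 12:00")}

fig, ax = plt.subplots()
for label, (t0, t1) in spans.items():
    asd = TimeSeries.get(channel, t0, t1).asd(fftlength=600, overlap=300)
    zoom = asd.crop(*band)
    ax.plot(zoom.frequencies.value, zoom.value, label=label)
ax.set_yscale("log")
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel("ASD [1/sqrt(Hz)]")
ax.legend()
plt.show()
```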