Camilla C, TJ S
This morning we had another period where our range was fluctuating by almost 40 Mpc, previously seen on Dec 1 (alog81587) and further back in May (alog78089). Camilla and I decided to turn off both TCS CO2s for a short period just to completely rule them out, since previously there was a correlation between these range dips and a TCS ISS channel. We saw no positive change in DARM during this short test, but we didn't want to go too long and lose lock. The CO2s were requested to have no output power from 16:12:30-16:14:30 UTC.
The past times that we have seen this range loss, the H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ channel and the H1:SQZ-FC_LSC_DOF2_OUT_DQ channel had noise that correlated with the loss, but the ISS channel showed nothing different this time (attachment 2). We were also in a state of no squeezing at the time, so it's possible that this is a completely different type of range loss.
DetChar, could we run Lasso or check on HVETO for a period during the morning lock with our noisy range?
Here is a link to a lasso run during this time period. The two channels with the highest coefficients are a midstation channel H1:PEM-MY_RELHUM_ROOF_WEATHER.mean and a HEPI pump channel H1:HPI-PUMP_L0_CONTROL_VOUT.mean.
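For context on what that lasso run does: roughly, it regresses the range trend against the minute-trend means of many auxiliary channels and ranks channels by their fit coefficients. A minimal sketch of that kind of ranking is below, assuming standardized channel trends and an arbitrary regularization strength; this is not the DetChar lasso tool itself, and the function name and preprocessing are illustrative.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def rank_channels(range_ts, aux_ts, channel_names, alpha=0.01):
    # Fit the range trend against standardized auxiliary channel trends and
    # return channels sorted by absolute lasso coefficient (largest first).
    X = StandardScaler().fit_transform(np.column_stack(aux_ts))
    y = np.asarray(range_ts) - np.mean(range_ts)
    model = Lasso(alpha=alpha).fit(X, y)
    order = np.argsort(np.abs(model.coef_))[::-1]
    return [(channel_names[i], model.coef_[i]) for i in order]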
SEI seismometer mass check - Monthly - Closes FAMIS 26496. Last Checked alog 80959
Averaging Mass Centering channels for 10 [sec] ...
2024-12-09 14:00:13.262669
There are 14 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.751 [V]
ETMX T240 2 DOF Y/V = -0.833 [V]
ETMX T240 2 DOF Z/W = -0.352 [V]
ITMX T240 1 DOF X/U = -1.576 [V]
ITMX T240 1 DOF Y/V = 0.388 [V]
ITMX T240 1 DOF Z/W = 0.488 [V]
ITMX T240 2 DOF Y/V = 0.308 [V]
ITMX T240 3 DOF X/U = -1.657 [V]
ITMY T240 3 DOF X/U = -0.762 [V]
ITMY T240 3 DOF Z/W = -2.097 [V]
BS T240 1 DOF Y/V = -0.329 [V]
BS T240 3 DOF Z/W = -0.41 [V]
HAM8 1 DOF Y/V = -0.415 [V]
HAM8 1 DOF Z/W = -0.7 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.049 [V]
ETMX T240 1 DOF Y/V = 0.002 [V]
ETMX T240 1 DOF Z/W = 0.005 [V]
ETMX T240 3 DOF X/U = 0.029 [V]
ETMX T240 3 DOF Y/V = -0.001 [V]
ETMX T240 3 DOF Z/W = 0.014 [V]
ETMY T240 1 DOF X/U = 0.092 [V]
ETMY T240 1 DOF Y/V = 0.202 [V]
ETMY T240 1 DOF Z/W = 0.263 [V]
ETMY T240 2 DOF X/U = -0.032 [V]
ETMY T240 2 DOF Y/V = 0.233 [V]
ETMY T240 2 DOF Z/W = 0.071 [V]
ETMY T240 3 DOF X/U = 0.286 [V]
ETMY T240 3 DOF Y/V = 0.127 [V]
ETMY T240 3 DOF Z/W = 0.184 [V]
ITMX T240 2 DOF X/U = 0.182 [V]
ITMX T240 2 DOF Z/W = 0.29 [V]
ITMX T240 3 DOF Y/V = 0.175 [V]
ITMX T240 3 DOF Z/W = 0.134 [V]
ITMY T240 1 DOF X/U = 0.111 [V]
ITMY T240 1 DOF Y/V = 0.129 [V]
ITMY T240 1 DOF Z/W = -0.006 [V]
ITMY T240 2 DOF X/U = 0.059 [V]
ITMY T240 2 DOF Y/V = 0.218 [V]
ITMY T240 2 DOF Z/W = 0.147 [V]
ITMY T240 3 DOF Y/V = 0.099 [V]
BS T240 1 DOF X/U = -0.127 [V]
BS T240 1 DOF Z/W = 0.157 [V]
BS T240 2 DOF X/U = -0.014 [V]
BS T240 2 DOF Y/V = 0.072 [V]
BS T240 2 DOF Z/W = -0.057 [V]
BS T240 3 DOF X/U = -0.136 [V]
BS T240 3 DOF Y/V = -0.284 [V]
HAM8 1 DOF X/U = -0.298 [V]
Assessment complete.
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.133 [V]
ETMX T240 1 DOF Y/V = -0.112 [V]
ETMX T240 1 DOF Z/W = -0.159 [V]
ETMX T240 3 DOF X/U = -0.1 [V]
ETMX T240 3 DOF Y/V = -0.226 [V]
ETMX T240 3 DOF Z/W = -0.103 [V]
ETMY T240 1 DOF X/U = 0.022 [V]
ETMY T240 1 DOF Y/V = 0.09 [V]
ETMY T240 1 DOF Z/W = 0.148 [V]
ETMY T240 2 DOF X/U = -0.098 [V]
ETMY T240 2 DOF Y/V = 0.152 [V]
ETMY T240 2 DOF Z/W = 0.049 [V]
ETMY T240 3 DOF X/U = 0.155 [V]
ETMY T240 3 DOF Y/V = 0.041 [V]
ETMY T240 3 DOF Z/W = 0.089 [V]
ITMX T240 1 DOF Z/W = -0.092 [V]
ITMX T240 3 DOF Z/W = -0.289 [V]
ITMY T240 1 DOF X/U = -0.25 [V]
ITMY T240 2 DOF X/U = -0.08 [V]
ITMY T240 2 DOF Y/V = -0.185 [V]
HAM8 1 DOF X/U = -0.287 [V]
Assessment complete.
Averaging Mass Centering channels for 10 [sec] ...
2024-12-09 14:00:32.260633
There are 1 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -2.37 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.472 [V]
STS A DOF Y/V = -0.961 [V]
STS A DOF Z/W = -0.448 [V]
STS B DOF X/U = 0.281 [V]
STS B DOF Y/V = 0.954 [V]
STS B DOF Z/W = -0.387 [V]
STS C DOF X/U = -0.777 [V]
STS C DOF Y/V = 0.737 [V]
STS C DOF Z/W = 0.654 [V]
STS EX DOF X/U = -0.145 [V]
STS EX DOF Y/V = -0.044 [V]
STS EX DOF Z/W = 0.139 [V]
STS EY DOF Y/V = 0.035 [V]
STS EY DOF Z/W = 1.273 [V]
STS FC DOF X/U = 0.216 [V]
STS FC DOF Y/V = -1.054 [V]
STS FC DOF Z/W = 0.66 [V]
Assessment complete.
J. Freed,
SR2 M1 stage BOSEMs (specifically T2, T3) show strong coupling below 20 Hz
Today I did damping loop injections on all 6 BOSEMs on the M1. This is a continuation of the work done previously for ITMX, ITMY, PR2, PR3, PRM
As with PRM, gains of 300 and 600 were collected (300 is labeled as ln or L).
The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/bosem/SR2/scripts, while the diaggui files are at /ligo/home/joshua.freed/bosem/SR2/data.
Neil and Jim swapped HS-1s on the LVEA floor last Tuesday morning (2024-12-03). We repeated the huddle test and both vertical HS-1s (serial nos. 1151 and 1152) work brilliantly. The two huddle images attached are not of HS-1-1151 nor HS-1-1152, but of HS-1-LHO-Brian-Lantz; the setup for the huddle tests of all 3 HS-1 units was the same. The plot shows a seismic signal (blue) and the noise floor of HS-1-1152 (green).
FAMIS 31063
No major events of note this week, except that PMC REFL seems to have been consistently on the rise over the past week. The total power increase over that time has been about 0.5 W, with no major corresponding decrease in PMC TRANS.
FMCS STAT is red on the CDS Overview due to a warming of the LVEA Zone2a area. Attached 7 day trend shows it did something similar yesterday (Sunday) morning.
Mon Dec 09 10:21:24 2024 INFO: Fill completed in 21min 20secs
Gerardo confirmed a good fill curbside. Long fill with temps starting positive, hence the strange looking trend.
Summary of Week:
More detailed information can be found on the DQ Shift Report page.
After the SQZ HV tripped off and was brought back (81689), the PSAMS servos (80685) railed. The attached plot shows the output of SAMS_SERVO at its limit and the strain gauge not at the PSAMS target.
To fix this, on each H1:AWC-ZM{2,4,5}_M2_SAMS_SERVO (small box to the left of REQUESTED PZT(V)), I increased the ramp time from 2 seconds to 5 seconds, ramped the gain to 0, cleared the history, ramped the gain back to 1, and then put the ramp time back.
After these were back to their nominal, the alignments of the ZMs went back close to nominal and we were able to lock the FC.
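For future reference, a minimal scripted version of the same clear-history sequence is sketched below, assuming the SAMS servos are standard CDS filter modules whose _TRAMP, _GAIN, and _RSET fields are reachable through ezca; the channel suffixes and sleep times are assumptions, not taken from the MEDM screen.

import time
from ezca import Ezca

ezca = Ezca()  # picks up the IFO prefix from the environment

for zm in (2, 4, 5):
    servo = 'AWC-ZM%d_M2_SAMS_SERVO' % zm
    ezca[servo + '_TRAMP'] = 5   # lengthen the ramp time from 2 s to 5 s
    ezca[servo + '_GAIN'] = 0    # ramp the gain to zero
    time.sleep(6)                # wait out the ramp
    ezca[servo + '_RSET'] = 2    # clear the filter history
    ezca[servo + '_GAIN'] = 1    # ramp the gain back to 1
    time.sleep(6)
    ezca[servo + '_TRAMP'] = 2   # restore the nominal ramp time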
It appears that we have lost the gauge for Y2-8; nothing to do for now since this is a monitoring tool. We will assess tomorrow, but if the system needs to be charged, then maybe there will be enough sunlight today.
Lost the gauge early Saturday morning.
Earthquake caused Lockloss.
TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 97 Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING without squeezing.
There were issues with the squeeze high voltage, so observing without squeezing until this is investigated today. (per Camilla's alog 81689)
We also have planned commissioning time today from 8AM-11AM (30 mins earlier than usual due to an all-hands meeting).
Other than that, it seems the wind is low, the microseism is slowly coming down, and the barrage of Aleutian Islands earthquakes has ended.
TITLE: 12/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing without squeezing and have been Locked for 1 hour.
All throughout the day we weren't able to lock because of the many earthquakes, but in the last hour I was finally able to relock the IFO without losing lock from ground motion.
When we got to NLN we couldn't go into Observing because the squeezer PMC wouldn't lock - the message said "Cannot lock PMC. Check SQZ laser power". Last time this happened we were able to get it to lock just by re-requesting LOCKED for the PMC guardian, but this time it didn't work. I called Camilla, and we saw that although the PZT thought it was scanning to try and lock, the PZT volts were not changing and were just sitting out of range, around -12 volts, and that 10 hours ago something had happened to the PZT volts where they suddenly glitched down below -200, which is much lower than they ever go.
We weren't able to get it working again, so we used Ryan Short's script to change the nominal SQZ states. I accepted the SDFs, and we went to Observing at 06:31 UTC. Here is Camilla's alog with an ndscope. Tagging SQZ.
LOG:
22:32 Started an initial alignment
22:58 Initial alignment done, relocking
- Lockloss from LOCKING_ARMS_GREEN
- Lockloss from OFFLOAD_DRMI_ASC
- Lockloss from ENGAGE_ASC_FOR_FULL_IFO
- Lockloss from RESONANCE due to large EQ hitting
00:23 Put IFO in DOWN to wait out earthquake
- HAM2, HAM3, HAM5 ISI tripped
- IM2 tripped
02:58 Started an initial alignment
03:22 Initial alignment done, relocking
- Lockloss from PREP_DC_READOUT_TRANSITION due to earthquake
03:43 Sitting in DOWN while earthquake passes
04:48 Trying to relock
05:46 NOMINAL_LOW_NOISE
- Issues getting PMC to lock - see above
06:31 Observing without squeezing
Since the SQZ_ANG_ADJUST guardian had a conditional for the nominal state depending on sqzparams.use_sqz_angle_adjus, the observing-without-squeezing code doesn't change it, so we edited ADJUST_SQZ_ANG.py ourselves.
Now that we've been observing for months using this state, I'm removing the conditional so it is always nominal = 'ADJUST_SQZ_ANG_ADF' and adding a note to change this in sqzparams.
I will ask Ryan to change the permissions so we can have SQZ_ANG_ADJUST included in his switch_nom_sqz_states.py script.
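Roughly, the edit to ADJUST_SQZ_ANG.py amounts to the following (illustrative only; the original conditional is not reproduced verbatim and the flag name is taken from the text above):

# Before: the nominal state depended on a sqzparams flag, roughly
#   nominal = 'ADJUST_SQZ_ANG_ADF' if sqzparams.use_sqz_angle_adjus else <other state>
# After: the nominal state is fixed, with a note in sqzparams to change it there instead
nominal = 'ADJUST_SQZ_ANG_ADF'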
I've fixed the group permissions on the switch_nom_sqz_states.py script to be 'controls' and added the SQZ_ANG_ADJUST node to be updated along with the others with a nominal up state of 'ADJUST_SQZ_ANG_ADF'.
I also added several models to the sqz_models list in the script that will allow for SDF diffs when going to the no-SQZ configuration. Since the IFO will be observing in a non-nominal configuration, the intent is to not accept SDF diffs associated with this temporary change. The full list of models that will now be excluded is as follows:
It appears that the SQZ HV (PMC/SHG/OPO PZT, ZM2,4,5 PSAMS) went down at 12/08 20:33UTC. Trends attached.
Oli found that the SQZ PMC wasn't locking. We tried enabling the PZT manually, and although it thought it was scanning, the PZT volts remained at -12; Oli found it had been like this since 12/08 20:33 UTC. We then realized that all of the PSAMS voltages were off, as well as the other PZTs. Oli has taken the IFO into observing without SQZ for tonight so the CDS/EE team can look at this tomorrow.
This was likely caused by a reported HAM7 interlock trip. Tagging Vacuum.
Looking at the HAM7 pressure doesn't seem to show anything at the time of the trip, though.
Richard untripped the HV in the Mech room mezzanine at 2024/12/09 16:12:14 UTC this morning. The ZM2/4/5 SAMS railed; we brought them back in 81697 by clearing the servos' histories.
I logged into h0vacly and looked up the status of PT153. See "Latched Device Error Details" in the attached screenshot. According to the manual, this appears to translate to "Electronics failure, Non-volatile memory failure, invalid internal communication".
Closes FAMIS#28382, last checked 80571
Here are December 3rd's plots for ITMX, ITMY, ETMX, and ETMY. There are no new points for ITMX since the coherence for the bias drive bias off measurement was 0.07, which is below the threshold of 0.1 that we have set. We had previously concluded (79597) that this may be because of no charge build-up on the test mass, assuming that we are calculating our coherences correctly.
Previous issue with measurement processing script:
There had been some issues with processing the In-Lock SUS Charge Measurements for this past week, so I looked at the script and was able to fix the issue.
Originally, the way the script figured out whether a measurement still needed to be processed was to make a list of all the measurements taken during the entered time period (the default is the past month) and a list of all processed measurements from the same period, and then use nested for loops to check whether a processed file existed for each data file. That normally works fine, but in this case the last time we had taken and processed measurements was over a month ago, so when the script tried to make a list of the processed files from within the last month, there weren't any and the list was empty. When it then tried to iterate over each data file and each processed file, it skipped the entire processed-file iteration because the list was empty.
The range over which it looks for unprocessed files can be changed or set to look at all files in each directory, but that shouldn't be required in this case (or most cases), so I edited the script: I added a catch so that when the coefficient file list is empty, it runs all measurements in the data directory list. This update works and has been committed to the SVN.
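A rough sketch of the added catch (not the actual script; directory layout and filename conventions here are assumptions):

from pathlib import Path

def find_unprocessed(data_dir, processed_dir):
    # Return data files that do not yet have a matching processed (coefficient) file.
    data_files = sorted(Path(data_dir).glob('*.xml'))
    processed_stems = {p.stem for p in Path(processed_dir).glob('*.txt')}

    # The fix: if no processed files fall within the search window, the old
    # nested-loop comparison never flagged anything; an empty list now simply
    # means every data file still needs processing.
    if not processed_stems:
        return data_files

    return [f for f in data_files if f.stem not in processed_stems]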
Currently in DOWN due to more earthquakes (nuc5). I was able to run an initial alignment and start relocking, but more earthquakes came in and we lost lock at PREP_DC_READOUT_TRANSITION a few minutes ago because of the ground motion. Since a couple more earthquakes were just announced and we just entered earthquake mode, I'll have us sit in DOWN for a bit while the earthquakes roll through, and hopefully we can get back to NLN soon. Thankfully it does look like secondary microseism is slowly starting to go down.
Back in March (76269) Jeff and I updated all the suspension watchdogs (besides OFIS, OPOS, and HXDS, since those were already up to date) to use better BLRMS filtering and to output in µm. We set the suspension watchdog thresholds to values between 100 and 300 µm, but these values were set somewhat arbitrarily since there was previously no way to see how far the stages move during different scenarios. We had already upped a few of the thresholds after having some suspensions trip when they probably shouldn't have, and this is a continuation of that.
During the large earthquake that hit us on December 5th, 2024 18:46 UTC, all ISI watchdogs tripped as well as some of the stages on several suspensions. After a cursory look, all suspensions that tripped only had either the bottom or bottom+penultimate stage trip, meaning that with the exception of the single suspensions, the others' M1 stage damping should have stayed on.
We wanted to go through and check whether the trips may have just been because of the movement from the ISIs tripping. If that is the case, we want to raise the suspension watchdog thresholds for those stages so that these suspensions don't trip every single time their ISI trips, especially if the amount that they are moving is still not very large.
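As a conceptual sketch of what these thresholds mean (the real watchdogs run in the front-end models; the band edges, sample rate, and filter choice below are assumptions): the band-limited RMS of each OSEM signal, calibrated into µm, is compared against the per-stage threshold.

import numpy as np
from scipy import signal

def blrms_trip(osem_um, fs=256.0, band=(0.1, 10.0), threshold_um=150.0):
    # Band-limit an OSEM displacement trace (already calibrated into um),
    # take its RMS, and compare it against the watchdog threshold.
    sos = signal.butter(4, band, btype='bandpass', fs=fs, output='sos')
    filtered = signal.sosfiltfilt(sos, osem_um)
    rms = np.sqrt(np.mean(filtered**2))
    return rms, rms > threshold_um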
Suspension stages that tripped:
Triples:
- MC3 M3
- PR3 M2, M3
- SRM M2, M3
- SR2 M2, M3
Singles:
- IM1 M1
- OFI M1
- TMSX M1
MC3 (M3) (ndscope1)
When the earthquake hit and we lost lock, all stages were moving due to the earthquake, but once HAM2 ISI tripped 5 seconds after the lockloss, the rate at which the OSEMs were moving quickly accelerated, so the excess motion looks to mainly be due to the ISI trip (ndscope2).
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 86 | 150 (unchanged) |
| M2 | 150 | 136 | 175 |
| M3 | 150 | 159 | 200 |
PR3 (M2, M3) (ndscope3)
Looks to be the same issue as with MC3.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 72 | 150 (unchanged) |
| M2 | 150 | 162 | 200 |
| M3 | 150 | 151 | 200 |
SRM (M2, M3) (ndscope4)
Once again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 84 | 150 (unchanged) |
| M2 | 150 | 165 | 200 |
| M3 | 150 | 174 | 225 |
SR2 (M2, M3) (ndscope5)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM4 saturated and the ISI watchdogs tripped.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 102 | 150 (unchanged) |
| M2 | 150 | 182 | 225 |
| M3 | 150 | 171 | 225 |
IM1 (M1) (ndscope6)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM2 saturated and the ISI watchdogs tripped.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 175 | 225 |
OFI (M1) (ndscope7)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 209 | 250 |
TMSX (M1) (ndscope8)
This one seems a bit questionable - it looks like some of the OSEMs were already moving quite a bit before the ISI tripped, and there isn't as much of a clear place where they started moving more once the ISI had tripped (ndscope9). I will still be raising the suspension trip threshold for this one, just because it doesn't need to be raised very much and is within a reasonable range.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 100 | 185 | 225 |
We just had an earthquake come through and trip some of the ISIs, including HAM2, and with that tripped IM2 (ndscope1). I checked to see if the movement in IM2 was caused by the ISI trip, and sure enough it was (ndscope2). I will be raising the suspension watchdog threshold for IM2 up to 200.
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 152 | 200 |
Yet another earthquake! The earthquake that hit us on December 9th at 23:10 UTC tripped almost all of our ISIs, and we had three suspension stages trip as well, so here's another round of trying to figure out whether they tripped because of the earthquake or because of the ISI trips. The three suspensions that tripped are different from the ones whose thresholds I updated earlier in this alog.
I will not be making these changes right now since that would knock us out of Observing, but the next time we are down I will make the changes to the watchdog thresholds for these three suspensions.
Suspension stages that tripped:
- MC2 M3
- PRM M3
- PR2 M3
MC2 (M3) (ndscope1)
It's hard to tell for this one what the cause of M3 tripping was (ndscope2), but I will up the threshold here for M3 anyway, since even if the trip was directly caused by the earthquake, the ISI tripping definitely wouldn't have helped!
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 88 | 150 (unchanged) |
| M2 | 150 | 133 | 175 |
| M3 | 150 | 163 | 200 |
PRM (M3) (ndscope3)
For this one it's pretty clear that it was because of the ISI tripping (ndscope4).
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 44 | 150 (unchanged) |
| M2 | 150 | 122 | 175 |
| M3 | 150 | 153 | 200 |
PR2 (M3) (ndscope5)
Again, for this one it's pretty clear that it was because of the ISI tripping (ndscope6).
| Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
| M1 | 150 | 108 | 150 (unchanged) |
| M2 | 150 | 129 | 175 |
| M3 | 150 | 158 | 200 |