The broadband glitching that was present in the early hours of Dec 11 (UTC) appears to have suddenly and entirely stopped at 10:36:30 UTC - this sharp feature can be seen in the daily range plot. I completed a series of LASSO investigations around this time in the hopes that such a sharp feature would make it easier for LASSO to identify correlations. I find a number of trend channels that have drastic changes at the same time as this turn-off point related to TCS-ITMY_CO2, ALS-Y_WFS, and SUS-MC3_M1.
The runs I completed are linked here:
Run #1 was a generic run of LASSO in the hopes of identifying a correlation. While no channel was highlighted as strongly correlated to the entire time period, this run does identify H1:TCS-ITMY_CO2_QPD_B_SEG2_OUTPUT (rank 11) and H1:TCS-ITMY_CO2_QPD_B_SEG2_INMON (rank 15) as having a drastic feature at the turn-off point (example figure). Based on this information, I launched targeted runs #2 and #3.
Run #2 is a run of LASSO using H1:TCS-ITMY_CO2_QPD_B_SEG2_OUTPUT as the primary channel to correlate against. This was designed to identify any additional channels that may show a drastic change in behavior at the same time. Channels of interest from this run include H1:ALS-Y_WFS_B_DC_SEG3_OUT16 (example figure) and H1:ALS-Y_WFS_B_DC_MTRX_Y_OUTMON (example figure). SEISMON channels were also found to be correlated, but this is likely a coincidence.
Run #3 targets the same turn-off point, but with the standard sensemon range as the primary channel. This run revealed an additional channel with a change in behavior at the time of interest, H1:SUS-MC3_M1_DAMP_P_INMON (example figure).
Based on these runs, the TCS-ITMY_CO2 and ALS-Y_WFS channels are the best leads for additional investigations into the source of this glitching.
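For context, the kind of ranking a LASSO run produces can be sketched as follows — a minimal, hypothetical example using scikit-learn's `Lasso` on synthetic data (the channel names, data, and `alpha` value are illustrative; this is not the actual site LASSO pipeline):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def rank_channels(primary, aux, names, alpha=0.01):
    """Fit a sparse linear model of the primary channel from auxiliary
    channels and rank them by |coefficient| (larger = more correlated)."""
    X = StandardScaler().fit_transform(aux)          # (n_samples, n_channels)
    y = (primary - primary.mean()) / primary.std()
    model = Lasso(alpha=alpha).fit(X, y)
    order = np.argsort(-np.abs(model.coef_))
    return [(names[i], model.coef_[i]) for i in order if model.coef_[i] != 0]

# Synthetic demo: one auxiliary channel shares a sharp step ("turn-off"
# feature) with the primary; the other is pure noise.
rng = np.random.default_rng(0)
t = np.arange(1000)
step = (t > 600).astype(float)
primary = step + 0.1 * rng.standard_normal(1000)
aux = np.column_stack([step + 0.1 * rng.standard_normal(1000),
                       rng.standard_normal(1000)])
print(rank_channels(primary, aux, ["CHAN_WITH_STEP", "CHAN_RANDOM"])[0][0])
```

The sparsity of the L1 penalty is what lets a run like this surface a short list of candidate channels rather than weak correlations everywhere.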
TITLE: 12/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY:
I ran the lowrange coherence check for a good and bad range time during this current lock.
I looked through the suspension driftmon scopes from the MEDM IFO_ALIGN_COMPACTEST screen, and most looked normal compared to other locks. The main thing that looked strange was a small step in MC3_P at the same time as the range got better; I didn't see this behaviour in previous locks with bad range stretches, though.
I looked at the OPLEV BLRMS as well; the main thing I saw on these scopes was that the BS's BLRMS increased, largely in yaw, during the bad range times of this current lock. I don't see as obvious a jump in other lock stretches.
Jenne suggested that we look at top-mass vertical OSEMs to check for sagging that could be causing touching. I checked the quads, BS, and output arm and saw no drifts that correlate with the low range periods.
I looked at verticals for MC{1,2,3}, PR{M,2,3}, and FC{1,2}, and the only odd thing I noticed is that FC2 vertical seems to move more during the bad range times than the good. Most of them see a small seasonal downward sag over the past 40 days.
I've looked at spectra (using 64 sec of data split into 16 second chunks with 50% overlap) of the top mass osems for all suspensions, comparing between start times of 1417933576 (bad time) and 1417950319 (a little while after the sharp improvement). None of the spectra have any of the classic 'we're rubbing' peaks. I've noted a few that I want to re-plot and zoom in (RM1 L, RM2 L, OMC L P V, FC1 P Y R T). I'll also re-look at MC3, since that one is one of our most 'suspicious' optics right now.
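The spectrum settings described above (64 s of data, 16 s chunks, 50% overlap) can be sketched with `scipy.signal.welch`; the sample rate and timeseries below are placeholders, not real OSEM data:

```python
import numpy as np
from scipy.signal import welch

fs = 256                     # Hz; illustrative OSEM sample rate (assumption)
duration = 64                # seconds of data, as described above
rng = np.random.default_rng(1)
data = rng.standard_normal(fs * duration)   # placeholder for an OSEM timeseries

# 16 s segments with 50% overlap -> 7 averaged segments from 64 s of data
nperseg = 16 * fs
f, psd = welch(data, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
asd = np.sqrt(psd)           # amplitude spectral density

print(f[1] - f[0])           # frequency resolution: 1/16 s = 0.0625 Hz
```

Rubbing would typically show up here as new, sharp peaks in the check-time ASD that are absent from the reference-time ASD.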
I attach the spectra that I made of these potentially suspicious optics. The main conclusion is that none of these are actually very suspicious; since these are my *most* suspicious, we are probably not rubbing. But I'll make a few more plots of these suspensions. In all of these plots the blue 'reference time' is when the IFO is locked with good sensitivity, and the orange 'check time' is when the IFO is locked with poor sensitivity.
I replotted the 'suspicious' top mass spectra using DTT. I don't find anything suspicious or interesting on FC1 or MC3.
OMC is a little strange, in that it has a set of peaks that all change frequency in the same way (first attachment). I'm not sure that this is meaningful for today's investigation though.
RM1 and RM2 are both quite strange looking in the 5-9 Hz range. They both pick up a forest of peaks in Length (and a little bit in Pit, and maybe a teeny bit in Yaw). Second attachment. Going to look further into these, maybe at other times as well. Robert said that these both saw some motion on the summary page, but their motion didn't seem to correlate with the reduction in range.
TITLE: 12/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 102Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Currently Observing and have been Locked for almost 5 hours. Our range is still all over the place unfortunately. I jumped in and out of Observing a few times by turning squeezing on and off to check for differences with the range (81753), but didn't find anything.
LOG:
00:30 Relocking
01:08 NOMINAL_LOW_NOISE
01:14 Observing
02:57 Went out of Observing and turned off SQZ to see if that fixes the mystery noise
03:09 Back to FDS
03:19 Turned off SQZ
03:33 Back to FDS and back to Observing
Attached plot shows that the low range glitchy behavior happens independent of whether we have SQZ injected or not. Traces show the SQZ/NO SQZ times Oli noted in green/yellow with comparisons of good SQZ and NO SQZ times in blue and red.
Yesterday, in 81724, it seemed like the behavior stopped before we took the no-SQZ time. Each trace is: 0.1 Hz BW, 50% overlap, 100 averages ≈ 500 seconds (~10 minutes). /ligo/home/camilla.compton/Documents/H1_DARM_FOM_s_glitchy.xml
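A quick sanity check of the trace-duration arithmetic quoted above (this just reproduces the 0.1 Hz / 50% overlap / 100 averages bookkeeping):

```python
# Duration of one averaged spectrum trace at the quoted settings.
bw = 0.1                     # Hz, analysis bandwidth
seg = 1.0 / bw               # 10 s per FFT segment
n_avg = 100                  # number of averages
overlap = 0.5                # 50% overlap

# With 50% overlap, each average after the first adds half a segment of new data
total = seg + (n_avg - 1) * seg * (1 - overlap)
print(total)                 # 505 s, i.e. ~500 seconds / ~10 minutes
```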
Conclusions:
I ran BRUCOs on the bad (1417932916: 2024/12/11 06:14UTC) and good (1417949016: 2024/12/11 10:43UTC) times in the attached plot.
Main differences in 20-100Hz region:
[Jason, Masayuki]
The PMC heater calibration was performed last week (Tuesday). The calibration involved adjusting the temperature loop set point and monitoring the corresponding changes in PMC temperature and length. The results were validated against previous measurements and compared to similar evaluations at LLO. Additionally, the necessity of the heater for JAC operations was assessed, highlighting that it would be needed for long-term operation.
0.30 μm / 0.023 K = 13 μm/K
This result is consistent with the thermal expansion coefficient of aluminum, 22e-6/K, and the approximate PMC length of 0.5 m.
13 μm/K × 0.67 K/V = 8.7 μm/V
The PZT can be railed in a day from this measurement, so we would need the heater for JAC operation.
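The calibration arithmetic above can be reproduced directly (the numbers are the measured values from this entry; the expansion-coefficient cross-check is the same one quoted above):

```python
# Reproducing the PMC heater calibration arithmetic from this entry.
alpha_al = 22e-6             # 1/K, thermal expansion coefficient of aluminum
pmc_length = 0.5             # m, approximate PMC length

dL_dT = 0.30 / 0.023         # μm/K: measured 0.30 μm of length per 0.023 K
expected = alpha_al * pmc_length * 1e6   # μm/K predicted from thermal expansion
dL_dV = dL_dT * 0.67         # μm/V, using the 0.67 K/V set-point coefficient

print(round(dL_dT), round(expected), round(dL_dV, 1))   # 13, 11, 8.7
```

The ~13 μm/K measurement landing within a factor of order unity of the 11 μm/K expansion estimate is what makes the result "consistent" above.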
Spikes observed at LHO caused glitches in transmitted power, as shown in the attached plot, and may require resolution. Redesigning the filters could potentially mitigate these anomalies.
Just went out of Observing and took SQZ manager to no squeezing to check if the noise issues are related to squeezing. We'll be doing no squeezing for 10mins, SQZ for 10, no sqz for 10, and then back to sqzing and Observing
Back to just Observing as of 03:33 UTC
Looks like the range was still changing a decent amount when squeezing was off (ndscope), which lines up with previous observations: we were also seeing this noise during lock stretches a few nights ago when we weren't squeezing for multiple hours.
Today Sheila realigned the beam on IM4 trans QPD (81735). This beam has been clipped at least since the O4 break due to various reasons (in this particular alog we tried to fix it, but then undid the fix, I'm adding here so we have a general time reference for when we first noticed the problem: 76291). As a result, the PRG calibration for O4b has been fishy, since it is normalized by the IM4 trans QPD power. Today, with the beam properly centered, I checked the PRG calibration to make sure it's still good.
Craig did this after ITMY was replaced: 58327. I followed his same steps with help from Ryan S and Sheila. First, we ran initial alignment on the y arm, which uses the green camera references. We waited until that alignment was converged and then stepped through the input_align_yarm states in the ALIGN_IFO guardian. I recorded the IM4 trans value as well as the y arm transmission value.
When we run a normal initial alignment, we lock input align with the x arm, so I used that opportunity to similarly record the x arm transmission value and IM4 trans. As a note: the green input alignment references are set when we converge the full IFO ASC, therefore I believe the single arm alignment after we run the green initial alignment is probably very close to the full IFO alignment.
Then, we paused in DARM_TO_DC_READOUT in the ISC LOCK guardian; this is just after all the full IFO ASC converges at 2W. I grabbed the x and y arm transmission values and IM4 trans.
Channels: H1:IMC-IM4_TRANS_NSUM_OUT16, H1:LSC-TR_X_NORM_OUT16, H1:LSC-TR_Y_NORM_OUT16
Single arm lock values:
Y-arm trans: 0.91, IM4 trans: 1.95
X-arm trans: 1.01, IM4 trans: 1.98
Full lock values:
Y-arm trans: 1684
X-arm trans: 1708
IM4 trans: 1.98
Calculation:
PRM transmission = 0.031
PRG = Tp * Y-arm trans (full ifo) / Y-arm trans (yarm only) * Input power (yarm only) / Input power (full ifo) (copied directly from Craig's alog)
PRG from Y arm = 56.5
PRG from X arm = 54.0
PRG reported from H1:LSC-PR_GAIN_OUT16 at the 2W lock time: 54
Therefore, I think our PRG calibration is correct. We can begin to cultivate a good O4b power up data set for modeling.
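The PRG formula can be sketched numerically; plugging in the Y-arm values logged above reproduces the 56.5 figure (the helper function and argument names are illustrative):

```python
# Power recycling gain via Craig's method, with the values logged above.
Tp = 0.031                           # PRM power transmission

def prg(arm_full, arm_single, im4_single, im4_full, Tp=Tp):
    """Tp times the arm build-up ratio (full IFO vs. single arm),
    corrected for the input-power difference via the IM4 trans readings."""
    return Tp * (arm_full / arm_single) * (im4_single / im4_full)

# Y-arm numbers from this entry: 1684 (full lock), 0.91 (single arm),
# IM4 trans 1.95 (single arm) and 1.98 (full lock).
print(round(prg(1684, 0.91, 1.95, 1.98), 1))   # -> 56.5, matching the log
```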
TITLE: 12/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Long maintenance day with a great many activities; everything wrapped up late afternoon, and initial alignment started around 23:40. The biggest issue we've encountered is that the FSS autolocker is having trouble locking the RefCav; it can grab the resonance but loses it after about a second, so locking has been taking quite a while (I did it manually twice this afternoon to speed things along). However, once the FSS is locked it seems happy; the only issue appears to be with relocking. Otherwise, initial alignment ran smoothly and main locking started at 00:24. Currently up to PREP_ASC_FOR_FULL_IFO.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 16:45 |
15:40 | PEM | Robert | LVEA | Y | Viewports | 15:52 |
15:41 | TCS | Camilla | LVEA | Y | Guillotines | 15:52 |
15:58 | FAC | Chris | EndY then X | N | Check paths are clear for crane inspections | 18:27 |
16:00 | PEM | Robert | LVEA | Y | Viewports | 16:04 |
16:02 | FAC | Nelly | FCES | N | Tech clean | 16:51 |
16:03 | FAC | Karen | EndY | N->Y | Tech clean | 16:51 |
16:03 | FAC | Kim | EndX | N | Tech clean | 17:10 |
16:03 | CAL | Tony, Dripta, Francisco | PCAL lab | LOCAL | Grab measurement equipment | 16:10 |
16:11 | CAL | Tony, Dripta | EndY | Y | PCAL measurement | 19:30 |
16:12 | PEM | Robert | LVEA | Y | Viewports, in and out till ~16:30 | 16:16 |
16:14 | OPS | Camilla | LVEA | Y -> N | LASER transition | 16:35 |
16:15 | CDS | Jonathan, Dave | Remote | N | OAF0 work, virtual machines, H0 will go down | 22:06 |
16:24 | VAC | Jordan | Mech room | N | Start up purge air | 16:49 |
16:46 | FAC/OPS | Richard | LVEA | N | Walkaround | 17:01 |
16:52 | PSL | Jason, RyanS | PSL enc | Y | PMC mode matching | 19:21 |
16:53 | OPS | LVEA | LVEA | N | LVEA IS LASER HAZARD | 04:16 |
16:53 | FAC | Tyler | LVEA | N | Crane inspections | 19:51 |
17:13 | EE | Fil | HAM7 | N | VAC gauge, out at 19:00 | 19:48 |
17:13 | VAC | Janos, Jordan | Ends | N | Mech room pump checks | 18:09 |
17:29 | FAC | Kim | LVEA | N | Tech clean | 19:02 |
17:37 | EE | Marc, Fernando | LVEA | N | ISC picomotors inspection in at 18:18 | 19:07 |
17:54 | FAC | Eric | EndX | N | Ceiling sensors | 18:34 |
17:59 | VAC | Travis | LVEA | N | Close gatevalves 5 & 7 | 18:27 |
18:09 | VAC | Jordan | LVEA | N | Join Travis on gate valves | 18:27
18:37 | TCS | TJ, Camilla | EndX | Y | HWS work | 20:02 |
19:17 | VAC | Travis, Jordan, Gerardo | LVEA | N | Close GVs 5 & 7 | 19:39 |
19:50 | VAC | Fil, Gerardo | LVEA, HAM7 | N | VAC Gauge, reset HV | 20:05 |
20:19 | TCS | Camilla | LVEA | N | Untrip CO2X | 20:22 |
20:42 | FAC | Tyler | EndX then Y | N | Crane inspection | 23:00 |
20:57 | PEM | Robert | LVEA | N | Setup shaker for commissioning later this week | 21:57
21:14 | IAS | Jason, RyanC, Mitchell | LVEA | N | FARO surveying | 23:30 |
21:15 | CAL | Tony, Dripta | PCal Lab | Local | Post-maintenance measurement | 22:12 |
21:29 | FAC | Chris | LVEA | N | FAMIS checks | 21:52 |
21:47 | ISC | Camilla | EX | YES | Beam profiling | 23:12 |
22:44 | VAC | Gerardo | LVEA | N | Picture on HAM7 | 23:00 |
23:39 | SAF | Oli, Ibrahim | LVEA | YES | Sweep & transition to HAZARD | 00:15 |
23:44 | VAC | Jordan, Janos | LVEA | - | Turn off purge air | 23:58 |
23:49 | SAF | Fil | LVEA | - | Moving crane | 00:15 |
23:51 | CAL | Tony | PCal Lab | Local | Measurement | 23:58 |
TITLE: 12/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C / Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY:
Just started relocking after finishing an initial alignment. Little bit of a spike in the secondary microseism but nothing too bad
Sheila D, TJ S
There have been a number of SRM M3 watchdog trips around the SRY locking step of initial alignment over the last few weeks (alog81670 for example). Today Sheila and I discussed locking without the FE triggering at all, instead having the ALIGN_IFO node watch for good flashes and turn on the necessary filters and banks itself. I tried this out today, but I wasn't able to get it to lock any more reliably than what we have now. No matter the method, it would often catch with low AS_A values, ~2500 on NSUM vs the normal 5000. From there, SRM would start to be driven before the node realized it wasn't quite locked. I tried adding code in ACQUIRE_SRY to turn the SRM M1 LOCK L input and the SRCL FM4 filters on and off, and even clear the history if necessary, but this didn't work without long settling periods between attempts.
I ended up keeping ISC_library.is_locked('SRY') looking at AS_A_DC_NSUM_OUTPUT > 4000. This value is a bit higher than before, but safer: it lets the node run through Down, reset drives and integrators, and let SRM settle a bit. I don't think this is a fix, barely even a band-aid; it will need some more thought on how to only catch on the correct mode.
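A minimal sketch of the threshold check described above (a hypothetical stand-in for the real ISC_library.is_locked logic, which reads live EPICS channels via the guardian infrastructure):

```python
# Hypothetical sketch of the SRY lock check: require the AS_A DC NSUM power
# to exceed a threshold before declaring SRY locked, so the low-power
# (~2500 count) wrong-mode catches are rejected.
AS_A_LOCK_THRESHOLD = 4000   # raised threshold; normal good flashes are ~5000

def sry_is_locked(as_a_dc_nsum):
    """Return True only when AS_A flashes indicate a well-aligned catch."""
    return as_a_dc_nsum > AS_A_LOCK_THRESHOLD

print(sry_is_locked(5000), sry_is_locked(2500))  # good catch vs. bad mode
```

As noted above, a pure power threshold can't distinguish every mode, which is why this is only a band-aid.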
Oli, Ibrahim
IFO has been swept.
Of note:
Per the WP12246:
Visual inspections were performed in the LVEA to trace the wires and verify the picomotor controllers' existence, connections, and spares (physically only). The information gathered was updated in document E1200072. Important findings were made, and the document now closely matches the real installation. Electrical investigations will follow to determine the spares. The rack names as well as the physical locations of the controllers were verified using the O5 ISC wiring diagram D1900511.
Marc, Fernando