Since we've got OM2 warm now, I've updated the jitter cleaning coefficients. It seems to have added one or two Mpc to the new SENSMON2 calculated sensitivity [Notes on new SENSMON2 below].
The first plot shows the SENSMON2 range as well as an indicator of when the cleaning was changed (bottom panel; the spike up marks the changeover).
The second plot shows the effect in spectral form. The pink circle is at roughly the same frequencies in all three panels. The reference traces are data taken before the jitter cleaning was updated (i.e., the coefficients we've been using for many months, trained on cold-OM2 data), and the live traces use the jitter coefficients newly trained today.
I've saved the previous OBSERVE.snap file in /ligo/gitcommon/NoiseCleaning_O4/Frontend_NonSENS/lho-online-cleaning/Jitter/CoeffFilesToWriteToEPICS/h1oaf_OBSERVE_valuesInPlaceAsOf_24June2024_haveBeenLongTime_TraintedOM2cold.snap, so that is the file we should revert the jitter coefficients to if we turn off the OM2 heater.
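If we do need to revert by hand, a minimal sketch of pushing those saved values back to EPICS with pyepics is below. This is illustrative only: the parsing assumes simple channel/count/value records and skips the BURT header, the PV prefix is an assumption, and in practice we would use the standard burt/SDF restore tools.

from epics import caput   # pyepics

def restore_snap(snap_path, prefix='H1:OAF'):
    """Write scalar PV values from a snap file back to EPICS (sketch only)."""
    with open(snap_path) as f:
        for line in f:
            parts = line.split()
            # skip BURT header/comment lines and anything not matching the assumed prefix
            if len(parts) < 3 or not parts[0].startswith(prefix):
                continue
            pv, value = parts[0], parts[-1]
            caput(pv, float(value))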
Notes on SENSMON2, which was installed last Tuesday:
I added a new version of the DARM BLRMS FOM based on Minyo's template (alog77815). The top two plots are the inverted BLRMS, so positive changes on these should correlate with positive changes in the range. The units of the top plots should be close to Mpc, so it is important to look for changes in the top plots since the scale is so much larger.
Using this in conjunction with the low range checks should help us diagnose what has been changing our range lately.
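As a rough illustration of the inverted-BLRMS idea behind the FOM (a sketch only; the channel, band, stride, and scale factor are assumptions, not the actual SENSMON2 configuration):

from gwpy.timeseries import TimeSeries

# Sketch: band-limited RMS of DARM, inverted so that lower noise trends upward
# like the range. Channel, band, stride, and scaling are placeholders.
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', 'June 24 2024 15:00', 'June 24 2024 17:00')
blrms = darm.bandpass(20, 60).rms(10)   # 10 s BLRMS in an assumed 20-60 Hz band
inverted = 1.0 / blrms                  # noise down -> trace up, tracking the range
fom = inverted * 1e-22                  # arbitrary factor toward ~Mpc-like numbers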
Mon Jun 24 10:09:54 2024 INFO: Fill completed in 9min 51secs
Jordan confirmed a good fill curbside.
Derek Davis, Gravity Spy user ZngabitanT
In previous cases where there were OM2 temperature transitions, Gravity Spy users had noted a specific glitch class (example 1) that is present for ~tens of minutes after the transition begins. This issue was also previously noted in alog 71735. Specific times where OM2 transitions were correlated with Gravity Spy glitches are listed in this comment.
With this morning's OM2 heat-up (see alog 78573), these glitches were noted again. This can be seen in these example gravity spy subjects (example 2, example 3). This behavior can also be seen in the glitch gram for the relevant hour near 300 Hz (the OM2 transition began at 12:30 UTC).
More notes from Gravity Spy users about this glitch class (referred to as "bike chains") can be found on this zooniverse talk page. Many thanks to all the volunteers who contributed to these investigations!
The GC UPS detected a power glitch and was on battery power between 20:48:15 and 20:48:20 PDT. The attached plot shows the three phases in the corner station. The CDS UPS in the MSR did not report at this time.
TITLE: 06/24 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
H1 has been locked for almost 7 hours; the range is running a little low after the No Squeezing test earlier this morning. Winds are much calmer after last night's wind storm.
Mon Commissioning is planned from 8:30-11:30am PT (1530-1830 UTC).
10 minutes of no-squeezing time started at 12:18 UTC June 24th; back to squeezing at ~12:29 UTC. This is so we have a comparison without squeezing before OM2 starts heating up (78573).
Unmonitored the OM2 heater channel when the script changed it; now we are back in observing while OM2 heats up.
Thermistor 2 has thermalized completely, but thermistor 1 still shows some thermal transient settling. Unlike what we've seen in the past (71087), the optical gain did not change with this TSAMS change.
There is coherence with DHARD Y, and SRCL coherence has increased as expected if the DARM offset has changed.
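For reference, a quick sketch of how this sort of coherence check can be done with gwpy (the channel names and time span below are assumptions, not the template actually used):

from gwpy.timeseries import TimeSeriesDict

# Assumed channels and span; compute DARM coherence with DHARD Y and SRCL.
chans = ['H1:GDS-CALIB_STRAIN', 'H1:ASC-DHARD_Y_OUT_DQ', 'H1:LSC-SRCL_OUT_DQ']
data = TimeSeriesDict.get(chans, 'June 24 2024 18:00', 'June 24 2024 19:00')
darm = data[chans[0]]
for name in chans[1:]:
    coh = darm.coherence(data[name], fftlength=16, overlap=8)
    print(name, float(coh.max()))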
Here are some jitter injections, looking at the coupling change between OM2 cold and hot. The pitch jitter coupling (without cleaning) seems worse with OM2 hot, which is different than our previous OM2 tests.
TITLE: 06/24 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Two locklosses this shift and several hours of downtime due to high winds. H1 has been locked for about 30 minutes.
LOG: No log for this shift.
Lockloss @ 06:38 UTC - link to lockloss tool
H1 was observing for 3 minutes before again losing lock from an unknown cause. There was more significant ETMX motion about a half second before the lockloss in this case, but I'm unsure where it came from.
H1 back to observing at 07:38 UTC. Fully automated relock.
Lockloss @ 01:12 UTC - link to lockloss tool
No obvious cause; maybe from suddenly higher winds? Gusts have recently hit close to 40mph and microseism is past the 50th percentile.
H1 is back to observing as of 06:35 UTC after ~5 hours of downtime from high winds causing locking difficulties.
TITLE: 06/23 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: H1 has been locked for almost 4 hours. After I arrived, the OPO pump ISS saturated and dropped H1 out of observing for ~2 minutes until Guardian brought everything back. This strangely happened twice more before I decided to take SQZ_MANAGER to DOWN to try to reset things; after doing that and bringing it back to FREQ_DEP_SQZ, things seem okay now.
Vicky, Begum, Camilla: Vicky and Begum noted that the CLF ISS and SQZ laser are glitchy.
Vicky's plot (attached) shows that the CLF ISS glitches started with O4b.
The timeline below shows that the SQZ laser glitches started May 20th and aren't related to TTFSS swaps. DetChar - Request: Do you see these glitches in DARM since May 20th?
Summary pages screenshots from: before glitches started, first glitch May 20th (see top left plot 22:00UTC), bad glitches since then.
Missed point:
In addition to the previous report, I should note that the glitches started on May 9th and occurred several more times even before May 25th.
Glitches are usually accompanied by increased noise in the H1:SQZ-FIBR_EOMRMS_OUT_DQ and H1:SQZ-FIBR_MIXER_OUT_DQ channels.
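As an illustration, flagging these glitch times from the two channels above could look roughly like this (a sketch only; the span, stride, and threshold are made up):

import numpy
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:SQZ-FIBR_EOMRMS_OUT_DQ', 'H1:SQZ-FIBR_MIXER_OUT_DQ']
data = TimeSeriesDict.get(chans, 'May 20 2024 20:00', 'May 21 2024 00:00')
for name, ts in data.items():
    rms = ts.rms(1)                             # 1 s RMS trend
    threshold = 5 * numpy.median(rms.value)     # assumed glitch threshold
    glitch_times = rms.times[rms.value > threshold]
    print(name, len(glitch_times), 'seconds above threshold')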
Andrei, Camilla
Camilla swapped the TTFSS fiber box (78641) on June 25th in hopes that this would resolve the glitch issue.
However, it made no difference: see the figure (from 20:40 UTC, which is when the TTFSS box was swapped).
State of H1: Observing at 157Mpc, locked for 6.5 hours.
Quiet shift so far except for another errant Picket Fence trigger to EQ mode, just like the ones seen last night (alog78404), at 02:42 UTC (tagging SEI).
That makes two triggers in a short time. If the false triggers are an issue, we should consider triggering on the picket fence only if there's a Seismon alert.
The picket-fence-only transition was commented out last weekend, on the 15th, by Oli. We will now only transition on picket fence signals if there is a live Seismon notification.
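In other words, the gating now amounts to logic along these lines (a hypothetical sketch, not the actual Guardian code; the names are placeholders):

# Placeholder sketch of the SEI_ENV gating described above: a picket fence
# trigger only moves us to the earthquake state if Seismon also has a live
# alert, so picket fence alone can no longer cause a transition.
def should_transition_to_eq(picket_fence_triggered, seismon_alert_live):
    return picket_fence_triggered and seismon_alert_live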
Thanks Jim,
I'm back from my vacation and will resume work on the picket fence to see if we can fix these errant triggers this summer.
Sheila, Camilla, Robert
A change in the ETMX measured charge over the O4a-O4b break (e.g. 78114 ) suggested that there might be a change in electronics ground fluctuation coupling. This is because the hypothesized mechanism for ground fluctuation coupling depends on the test mass being charged so that, as the potential of electronics (such as the ring heater and the ESD itself) near the test mass fluctuate with electronics ground potential, there is a fluctuating force on the test mass.
We swept the bias (see figure) and found that the minimum in coupling had changed from an ESD bias of about 150 V in August of 2023 (72118) to 58 V now, with the coupling difference between the two settings being a factor of about ten (in other words, if we stuck with the old setting the coupling would be nearly ten times worse). Between January of 2023 and August of 2023, the minimum coupling changed from about 130 V to about 150 V, with the coupling difference between the two settings being less than a factor of two. The second page of the figure is from the August alog, showing the difference in the coupling between then and now. I checked the differences across the break for ETMY, ITMY, and ITMX, and the coupling differences across the break were not much more than a factor of two, so the change in ETMX, about a factor of ten, seems particularly large, as might be expected for a significant charge change.
I started working out the gain adjustment that we need in order to change the bias. To get to an offset of 70 V while preserving the DARM UGF, we need 436.41 in the L3 drivealign L2L gain and an offset of 2.0 in the BIAS filter.
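For context on that scaling, a minimal sketch of the first-order relation, assuming the linearized ESD actuation strength is proportional to the bias voltage, so the drivealign gain must scale inversely with the bias to hold the DARM UGF fixed (all numbers below are illustrative placeholders, not the measured settings):

# Sketch (assumption: linearized ESD force ~ V_bias * V_signal, so the
# actuation strength scales with the bias and the drivealign gain must
# scale inversely to keep the DARM UGF fixed). Values are illustrative.
def drivealign_gain_for_new_bias(gain_old, v_bias_old, v_bias_new):
    return gain_old * v_bias_old / v_bias_new

# e.g. moving from an (illustrative) 150 V bias to a 70 V offset roughly
# doubles the required drivealign gain:
print(drivealign_gain_for_new_bias(200.0, 150.0, 70.0))   # ~428.6 with made-up inputs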