Mon Jun 24 10:09:54 2024 INFO: Fill completed in 9min 51secs
Jordan confirmed a good fill curbside.
Derek Davis, Gravity Spy user ZngabitanT
In previous cases where there were OM2 temperature transitions, Gravity Spy users had noted a specific glitch class (example 1) that is present for ~tens of minutes after the transition begins. This issue was also previously noted in alog 71735. Specific times where OM2 transitions were correlated with Gravity Spy glitches are listed in this comment.
With this morning's OM2 heat-up (see alog 78573), these glitches were noted again. This can be seen in these example Gravity Spy subjects (example 2, example 3). This behavior is also visible in the glitchgram for the relevant hour near 300 Hz (the OM2 transition began at 12:30 UTC).
More notes from Gravity Spy users about this glitch class (referred to as "bike chains") can be found on this zooniverse talk page. Many thanks to all the volunteers who contributed to these investigations!
The GC UPS detected a power glitch and was on battery power between 20:48:15 and 20:48:20 PDT. The attached plot shows the three phases in the corner station. The CDS UPS in the MSR did not report at this time.
TITLE: 06/24 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
H1's been locked almost 7hrs; the range is running a little low after the No Squeezing test earlier this morning. Winds are much calmer after last night's wind storm.
Mon Commissioning is planned from 8:30-11:30am PT (1530-1830 UTC).
10 minutes of no-squeezing time started at 12:18 UTC June 24th, back to squeezing at ~12:29. This gives us a no-squeezing comparison from before OM2 starts heating up (alog 78573).
Unmonitored the OM2 heater channel when the script changed it, now we are back in observing while OM2 heats up.
Thermistor 2 has thermalized completely, but thermistor 1 still shows some thermal transient settling. Unlike what we've seen in the past (71087), the optical gain did not change with this TSAMS change.
There is coherence with DHARD Y, and SRCL coherence has increased as expected if the DARM offset has changed.
Here are some jitter injections, looking at the coupling change between OM2 cold and hot. The pitch jitter coupling (without cleaning) seems worse with OM2 hot, which is different than our previous OM2 tests.
TITLE: 06/24 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Two locklosses this shift and several hours of downtime due to high winds. H1 has been locked for about 30 minutes.
LOG: No log for this shift.
Lockloss @ 06:38 UTC - link to lockloss tool
H1 was observing for 3 minutes before again losing lock from an unknown cause. There was more significant ETMX motion about a half second before the lockloss in this case, but I'm unsure where it came from.
H1 back to observing at 07:38 UTC. Fully automated relock.
Lockloss @ 01:12 UTC - link to lockloss tool
No obvious cause; maybe from suddenly higher winds? Gusts have recently hit close to 40mph and microseism is past the 50th percentile.
H1 is back to observing as of 06:35 UTC after ~5 hours of downtime from high winds causing locking difficulties.
TITLE: 06/23 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: H1 has been locked for almost 4 hours. Shortly after I arrived, the OPO pump ISS saturated and dropped H1 out of observing for 2 minutes before Guardian brought everything back. Strangely, this happened twice more, so I took SQZ_MANAGER to DOWN to try to reset things; after doing that and bringing it back to FREQ_DEP_SQZ, things seem okay now.
TITLE: 06/23 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: RyanS
SHIFT SUMMARY:
Mostly straightforward shift, with a big Mexico earthquake in the middle and some squeezer issues that dropped us out of Observing in the last few minutes (taking SQZ_MANAGER to DOWN fixed the loop of the squeezer going down). There was also a superevent in the last few minutes!
LOG:
This was an obvious & quick lockloss due to a 5.2 EQ in Mexico (looks like it took down L1 about 10min before H1). Here in the control room:
Sun Jun 23 10:10:59 2024 INFO: Fill completed in 10min 56secs
TITLE: 06/23 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
GWISTAT has not been listing V1's Observing status (been in "No Data" state for several days); yesterday the V1 operator mentioned they are aware and working on it.
H1's been locked for almost 2.5hrs, but scanning Verbal Alarms, it said we weren't in Observing. H1 GRD-IFO was in the MANAGED state overnight, so I took it to AUTOMATIC; this automatically popped H1 back to OBSERVING. All that is understandable, but I'm confused by the Range FOM (attached): a quick glance would suggest H1 was observing, since we have the H1 Clean trace (dark red) on the range FOM. Maybe I'm not interpreting this correctly.
NEVERMIND!!! Since GRD-IFO was in MANAGED overnight for the quick lockloss (which was automatically recovered from), the OBSERVATORY MODE remained in OBSERVING after that lockloss. I'm assuming the "H1 Clean Range" channel on the range FOM/nuc27 is non-zero whenever the OBSERVATORY MODE is set to the OBSERVING state.
So:
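The guess above about the "H1 Clean Range" trace could be sketched as follows. This is purely illustrative: the function, mode code, and channel handling are hypothetical stand-ins, not the actual FOM/nuc27 code or real EPICS values.

```python
# Hypothetical sketch of the suspected FOM behavior: the "H1 Clean Range"
# trace is populated only while OBSERVATORY MODE reads OBSERVING, regardless
# of the actual Guardian IFO state. All names and values are illustrative.

OBSERVING_MODE = 65  # placeholder mode code, not the real EPICS enum value


def clean_range_sample(observatory_mode: int, range_mpc: float) -> float:
    """Value the clean-range trace would plot for one sample."""
    if observatory_mode == OBSERVING_MODE:
        return range_mpc  # trace follows the range while "observing"
    return 0.0  # trace drops to zero outside OBSERVING
```

Under this assumption, the trace staying non-zero through the overnight lockloss is consistent with OBSERVATORY MODE never leaving OBSERVING while GRD-IFO was MANAGED.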
TITLE: 06/23 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Two locklosses this shift, one from an ITMY ISI trip and the other from an unknown cause. Most locking this evening was automated, at least.
LOG: No log for this shift.
Lockloss @ 03:00 UTC - link to lockloss tool
No obvious cause. As usual with these, ETMX and LSC-DARM saw the first motion before lockloss.
H1 back to observing at 06:32 UTC.
Relocking time extended due to a M6.0 EQ that came through after DRMI locked. Also had to wait in OMC_WHITENING for 45 minutes to damp violins. Otherwise an automatic relock.
State of H1: Observing at 157Mpc, locked for 6.5 hours.
Quiet shift so far except for another errant Picket Fence trigger to EQ mode at 02:42 UTC, just like the ones seen last night (alog 78404) (tagging SEI).
That makes two such triggers in a short time. If these false triggers are an issue, we should consider transitioning on Picket Fence only if there is a live Seismon alert.
The Picket-Fence-only transition was commented out by Oli last weekend, on the 15th. We will now only transition on Picket Fence signals if there is a live Seismon notification.
Thanks Jim,
I'm back from my vacation and will resume work on the picket fence to see if we can fix these errant triggers this summer.