TITLE: 09/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
SHIFT SUMMARY:
The only thing noteworthy was a possible Superevent candidate: S250910b @ 00:07:52 UTC.
I didn't really hear any thunder either.
LOG:
No log
TITLE: 09/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
H1 has been locked for 2.75 hours.
All subsystems seem to be running smoothly.
The weather report suggests that there may be thunderstorms soon.
TITLE: 09/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: It was a pretty standard maintenance day, with a few aftershocks from the Oregon earthquake slowing down reacquisition. Observing for 2.75 hours.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:44 | SYS | Betsy | Opt Lab | n | In and out of lab for stuffs, quick into the LVEA for more parts | 16:35 |
| 14:47 | CEBEX | Bubba, contractors | MY | n | Drilling and surveying | 22:41 |
| 15:00 | SYS | Randy | OSB | n | Forklifting to mech room | 15:22 |
| 15:00 | FAC | Kim | EX | n | Tech clean | 16:02 |
| 15:00 | FAC | Nelly | EY | n | Tech clean | 15:49 |
| 15:08 | VAC | Jordan, Gerardo, Anna | LVEA | n | NEG pump regen, GV checks, valve in NEG | 17:58 |
| 15:10 | FAC | Chris | site | n | Forklifting from staging to woodshop area | 17:00 |
| 15:15 | CDS | Fil | LVEA | n | Cable trays near HAM5 with Ken & Drilling with Randy | 18:54 |
| 15:15 | FAC | Ken | LVEA | n | LVEA light replacement | 18:49 |
| 15:22 | FAC | Christina | VPW | n | Fork lifting | 15:42 |
| 15:22 | SYS | Randy | LVEA | n | W & E bay craning | 17:22 |
| 15:30 | SYS | Mitchell | LVEA | n | Grabbing parts | 15:50 |
| 15:43 | OPS | Richard | LVEA | n | Checking on things | 16:03 |
| 15:47 | VAC | Janos | MX, MY | n | Pump install work continuing | 18:55 |
| 15:49 | FAC | Nelly, Kim | FCES | N | Tech clean | 16:55 |
| 15:58 | GRD | TJ | CR | n | h1guardian1 reboot | 16:07 |
| 16:38 | PEM | Sam, Jonathan (student) | LVEA | n | Tour/ looking at PEM things | 17:34 |
| 16:50 | ISC | Daniel | LVEA | n | OMC whitening chassis removal | 17:16 |
| 16:51 | FAC | Mitchell | Mid X | N | moving pelican cases | 17:38 |
| 16:51 | TSC | TJ | Mech rm -> LVEA | N | Following chiller lines | 17:32 |
| 16:55 | safety | McCarthy | LVEA | N | Checking on the LVEA safety | 17:42 |
| 16:56 | FAC | Kim & Nelly | High bay & LVEA | N | Technical cleaning, garbing supplies | 18:34 |
| 17:07 | SUS | Ryan C | Ctrl Rm | N | ETMX OPLEV SUS charge measurements | 18:41 |
| 17:27 | OPT | Camilla | LVEA, OpticsLab | n | Looking for parts | 18:04 |
| 17:39 | PCAL | Tony, Mitchell | PCAL Lab | - | Looking for parts | 17:53 |
| 17:52 | SYS | Randy | EY | n | Check in receiving | 18:17 |
| 17:54 | CDS | Marc | LVEA | n | Check in with Fil | 18:04 |
| 18:18 | SYS | Randy | LVEA | n | Take some measurements | 18:34 |
| 18:32 | - | Richard, student | OSB roof | n | Roof tour | 18:47 |
| 18:42 | VAC | Jordan, Anna, Gerardo | LVEA | n | Shut down NEG pump | 19:14 |
| 18:51 | - | Oli | LVEA | n | Sweep | 18:58 |
| 19:09 | FAC | Tyler | MY | n | Checking with crew down there | 21:11 |
| 19:51 | - | Fil, Betsy | LVEA | n | Measure something | 20:11 |
| 21:47 | SPI | Jeff | OpticsLab | n | Figuring out what is in the bag | 21:59 |
I processed one hour of no-squeezing data taken in November 2024 (81468) for the purpose of running a correlated noise budget. This data was taken shortly after noise budget injections were run (80596), so I was able to use the measurement of the jitter noise to perform jitter subtraction from 100-1000 Hz, similar to my work in 85899.
I followed the same procedure as I documented in 85899: I plotted the whitened time series and saw two glitches, which I excised from the data by removing the individual segments with the glitches. I then mean-averaged the remaining segments. As a part of the correlated noise data collection in Nov 2024, we took a full calibration measurement, which I was able to use to generate a model to calibrate the DCPD signals. I used the IMC WFS yaw signal as a witness to subtract jitter from 100-1000 Hz. I then calculated the full correlated noise estimate also as described in 85899 (in that log I called it the "full classical noise estimate", which I now realize is confusing).
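For reference, here is a minimal sketch of this kind of mean-averaged spectral estimate with coherence-based witness subtraction. It is not the actual analysis code: the sample rate, segment length, and the assumption that glitchy stretches have already been excised from the input arrays are all mine, and the real pipeline works on the calibrated DCPD signals.

```python
# Illustrative sketch: PSD of a calibrated target signal (e.g. the DCPD sum)
# with the witness-coherent power (e.g. IMC WFS yaw) removed in a chosen band.
# Assumes glitch-containing segments were already cut from the time series.
import numpy as np
from scipy.signal import welch, csd

def subtract_witness(target, witness, fs, nperseg, band=(100.0, 1000.0)):
    """Return (f, PSD) of `target` with witness-coherent power removed in `band`."""
    f, p_tt = welch(target, fs=fs, nperseg=nperseg, average='mean')
    _, p_ww = welch(witness, fs=fs, nperseg=nperseg, average='mean')
    _, p_tw = csd(target, witness, fs=fs, nperseg=nperseg, average='mean')
    coh = np.abs(p_tw) ** 2 / (p_tt * p_ww)       # magnitude-squared coherence
    cleaned = p_tt.copy()
    in_band = (f >= band[0]) & (f <= band[1])
    cleaned[in_band] *= (1.0 - coh[in_band])      # remove the coherent fraction
    return f, cleaned
```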
The comparison of the Nov 2024 time (O4b) and June 2025 time is shown here.
The noise below 100 Hz is lower in June 2025; however, the noise above 100 Hz is higher. It also appears that the slope of the noise above 100 Hz has changed slightly. The high-frequency noise is also different, possibly because the frequency noise coupling to DARM has changed.
I also plotted the ratio of the noise from 40-300 Hz, showing that the noise below 100 Hz is reduced by up to 10% in amplitude, while above 100 Hz it has increased by up to 10% in amplitude.
I added the GWINC total thermal noise trace to the plot above and took the ratio of the correlated noise to that trace to highlight the change in noise. The ratio shows that the overall slope and amplitude of the excess noise, compared to the full thermal noise trace, have changed.
And going back even further, the correlated noise budget was run in O4a in Dec 2023, which we used in the O4 detector paper. We only had 900 seconds of data, so the results are noisier. The three traces are compared, along with their ratios to the GWINC thermal noise trace. I did not do a jitter subtraction on the Dec 2023 data. The nearest valid calibration report is from Oct 27, which is what I used to calibrate the data.
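A small sketch of the ratio-to-GWINC computation behind these comparisons, assuming the GWINC total thermal noise trace has been exported to frequency/ASD arrays (all names here are hypothetical, not the actual script):

```python
import numpy as np

def ratio_to_thermal(freq, measured_asd, gwinc_freq, gwinc_asd, band=(40.0, 300.0)):
    """Ratio of a measured correlated-noise ASD to the GWINC thermal trace,
    restricted to `band`, with the GWINC trace interpolated onto the
    measurement frequency grid."""
    thermal_asd = np.interp(freq, gwinc_freq, gwinc_asd)
    ratio = measured_asd / thermal_asd
    keep = (freq >= band[0]) & (freq <= band[1])
    return freq[keep], ratio[keep]
```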
Maintenance recovery was fairly straightforward, but there were a few more aftershocks from Oregon that caused us to lose lock while acquiring. We eventually made it back up at 2047 UTC, with violins very elevated. I'll work on damping those a bit faster than the automated settings.
A cable tray support was installed near HAM5.
F. Clara, R. McCarthy, Randy
The attached plot shows trends of OSEM movements and the subsequent watchdog activity.
Summary: we had three SWWD SEI trips (ETMY, ITMY, ITMX) followed by two SWWD SUS trips (ETMY, ITMY). We had one HWWD trip, ITMY.
Color code ETMY=BLUE, ITMY=RED, ITMX=GREEN, ETMX=GOLD.
Top panel: ETMY top stage 6 OSEMS
Second panel: ITMY top stage 6 OSEMS
Third panel: ITMX top stage 6 OSEMS
Fourth panel: ETMX top stage 6 OSEMS
Fifth panel: SWWD
Bottom panel: HWWD
Timeline (all times PDT, Monday 8th September 2025)
(1) [21:11:44]: all four SWWD SEI timers start countdown
(2) [21:15:27]: ETMX goes good again (not shown, but ditto for ETMX HWWD)
(3) [21:16:42]: SWWD SEI trip; SEI IOP countdowns start and trip in five minutes' time
(4) [21:26:41]: ETMY SWWD SUS trip; its second timer is 10 minutes
(5) [21:31:54]: ITMY SWWD SUS trip, it has the original 15 minute timer, closely followed by:
(6) [21:32:29]: ITMY HWWD trip after its 20 minute countdown
(7) [21:38:14]: ITMY HWWD input goes good, ready for its reset button to be physically pushed
(8) [21:38:00]: ITMX SWWD SUS does not trip; its input occasionally dips into the good region, resetting the timer
(9) [21:42:00]: All SWWDs are reset; their inputs are good after the EQ has cleared
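For anyone unfamiliar with these chains, here is a toy sketch of the reset-on-good countdown behavior that explains items (1) through (9) above. The durations are read off the timeline, and the real watchdogs live in the front-end models, so this is illustrative only:

```python
from dataclasses import dataclass

@dataclass
class CountdownWatchdog:
    """Trips only if its input stays 'bad' for `duration_s` continuously."""
    duration_s: float
    elapsed_s: float = 0.0
    tripped: bool = False

    def step(self, input_is_good: bool, dt_s: float = 1.0) -> bool:
        if self.tripped:
            return True
        if input_is_good:
            self.elapsed_s = 0.0   # a dip back into the good region restarts the clock
        else:
            self.elapsed_s += dt_s
            if self.elapsed_s >= self.duration_s:
                self.tripped = True
        return self.tripped

# Rough stages per the timeline: SWWD SEI ~5 min, then the SEI IOP 5 min,
# SWWD SUS 10 or 15 min, and the HWWD at 20 min. ITMX never tripped its SUS
# stage because its OSEM RMS occasionally went good, resetting that timer.
swwd_sei = CountdownWatchdog(duration_s=5 * 60)
swwd_sus = CountdownWatchdog(duration_s=15 * 60)
hwwd = CountdownWatchdog(duration_s=20 * 60)
```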
J. Kissel

Continuing my low-level, background, fact-building exercises on satellite amplifiers (the only previous installment thus far being LHO:85348), here I compare the three types of satellite amplifiers
(a) D0900900 / D0901284 :: UK 4CH SatAmp
(b) D1002818 / D080276 :: US 8CH SatAmp
(c) D1900089 / D1900217 :: US 4CH SatAmp
again, but now comparing their transimpedance sign and frequency response. Namely, if the system design intent was
- the US 8CH SatAmp has been wired up to the OSEM in a POSITIVE reverse bias configuration, with ANODE (A) connected to the NEGATIVE terminal of the transimpedance opamp, and
- the US and UK 4CH SatAmps have been wired up to the OSEM in a NEGATIVE reverse bias configuration, with CATHODE (K) connected to the NEGATIVE terminal of the transimpedance opamp,
then these had better have differently signed overall transimpedance in order to achieve the same sign at the ADC.

As such, I performed a similar frequency response measurement on an example instantiation of the D1002818 and D1900089 chassis as I've been doing for the UK 4 channel satamps -- see T080062 for the full procedure; the "only" difference is the physical connection of the SR785 to the device under test. To describe that, I attach here a collection of diagrams of the physical setup for each chassis.

The results are clear: driving a POSITIVE voltage through a series resistor into the NEGATIVE input of the transimpedance op amp (i.e. mimicking the flow of PD current [conventional current, not electron flow] into the NEGATIVE input) results in a
(a) NEGATIVE differential output voltage for a D0900900 UK 4CH chassis,
(b) POSITIVE differential output voltage for a D1002818 US 8CH chassis,
(c) NEGATIVE differential output voltage for a D1900089 US 4CH chassis.
Note, the NEGATIVE input to each chassis' transimpedance amp is connected, across pages of the circuit diagram, to connector pins labeled as
(a) K (CATHODE) for a D0900900 UK 4CH chassis,
(b) A (ANODE) for a D1002818 US 8CH chassis,
(c) K (CATHODE) for a D1900089 US 4CH chassis,
implying the correct outer-layer usage.

The second attachment shows the frequency response of the three chassis. Note all of the tested instantiations are pre-ECR E2400330 whitening filter change, so they all have the old z:p = ~0.4:10 Hz frequency response. The legend in the caption of the upper right panel shows the predicted z:p values from the schematic, and the data in the upper and lower right panels show that particular z:p divided out of the data to show how close these instantiations are to the ideal. Yes, the 8CH satamp frequency response is slightly different from the two 4CH responses; this will be rectified with ECR E2400330. The deviation of all three responses from ideal is consistent with the diversity of the instantiations of the UK 4CH sat amps, namely the uncertainty in the whitening filter stage capacitance value (which should be a total of 20 [uF], but can vary up to 5%; see LHO:85396).

As such, when using these chassis, it is critical that the chassis-to-flange and in-vacuum wiring is connected in such a way that connects this NEGATIVE input to the right terminal of the OSEM PD. For a detailed breakdown of each stage of the sat amp frequency response and sign, to understand *why* and *how* they're different, check out G2500980.
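As a footnote to the frequency response discussion, here is a minimal sketch of the ideal pre-ECR whitening shape quoted above (zero at ~0.4 Hz, pole at 10 Hz); dividing a measured SR785 transfer function by this ideal response is what flattens the right-hand panels. The frequency grid, the unity-DC-gain normalization, and the variable names are my own illustration, not the measurement code:

```python
# Illustrative only: ideal single zero:pole whitening response, z:p = 0.4:10 Hz.
import numpy as np
from scipy import signal

f_hz = np.logspace(-1, 3, 500)            # 0.1 Hz to 1 kHz
z = [-2 * np.pi * 0.4]                    # zero at 0.4 Hz, in rad/s
p = [-2 * np.pi * 10.0]                   # pole at 10 Hz, in rad/s
k = 10.0 / 0.4                            # assumed normalization: unity gain at DC

_, ideal_tf = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f_hz)

# With a measured complex transfer function `measured_tf` on the same grid,
# the deviation from ideal is simply:
#   residual = measured_tf / ideal_tf
```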
LVEA Swept and ready for Observing
I was getting lots of overflows when I started the measurement, so I tried lowering the drive_amp by 20% and retrying. This didn't work; I went from the usual 11000 down to under 100 and still saw lots of overflows. Rahul suggested halving the gain of the ESD output filters with the original 11000 drive_amp, from 1.0 to 0.5, and this stopped all the overflows during the measurements. I still got a few thousand overflows when the script ended and was restoring settings, though.
This measurement, like previous recent ones (since the ESD bias voltage increase?), has very large errors. There wasn't any work going on at EX and the wind wasn't too bad, gusting up to 20 mph.
Despite the huge error bars, the charge generally seems lower than the last measurement in most quadrants/DOFs.
FAMIS Link: 26656
The only CPS channel that looks elevated at high frequencies (see attached) is the following:
In the bash window we get this note:
"BSC high freq noise is elevated for these sensor(s)!!!: ITMY_ST1_CPSINF_H3 "
Tue Sep 09 10:06:51 2025 INFO: Fill completed in 6min 47secs
FAMIS 28949
Reboot at 1600 UTC (0900 PT). Every node came back up without issue; Ryan C helped me bring nodes back to their nominal requests for maintenance.
TITLE: 09/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: Locked for 4 hours; PEM and charge measurements running. Maintenance day will begin at 0800 PT. Rough recovery last night from the earthquake and electrical storm, but Ibrahim got us back up.
Got the wake-up call at 3:38am PDT. And literally while I was waking up and trying to remember what to do for the NOTIFY + MANAGER guardian nodes (I just "init-ed" both...is that the right thing? that's what both of their User Messages said to do), I noticed that there was a Message that said there was an issue with the SQZ MANAGER.
As I was getting ready to open up the SQZ guardians and assess (I'm pretty sure I spelled that right--I remember Hugh giving me grief for misspelling it in an alog before!), H1 was already back to OBSERVING. I literally did nothing! H1 was in OMC Whitening at 0932 UTC, sent the wake-up call at 1037 UTC, and automatically went to Observing on its own at 1043 UTC. I'm going to go back to bed ASAP!
Great job getting H1 thru all the environmental catastrophes at the end of your shift, Ibrahim! :)
Corey, you were woken up because we were still in OMC_Whitening damping violins when the timer for the "OPS Wake Up" (IFO_Notify) Guardian ran past an hour (the timer for this state) and thus rang your alarm clock early.
Ibrahim did buy you some time for some more Z's by waiting until we were in OMC_Whitening to set the Remote OWL Shift button, but the Violins just took so long to damp (1 hour 10 min).
The SQZ subsystem was working just fine in this case. Less than a second after ISC_LOCK hit NLN, SQZ_MAN had completed, and 9 seconds later we were Observing.
Unfortunate timing, really. Sometimes H1 just wants you to watch while it locks itself, I guess.
TITLE: 09/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in OMC_WHITENING and LOCKING
The first 5 hours of this extended shift were quiet and well behaved. Truly the calm before the earthquake before the storm. Lockloss alog 86792.
Thankfully the next 3 hours were action-packed due to 2 environmental events: a 5.8 EQ (and 4.9 aftershock) off the coast of Oregon and an electrical storm power glitch.
The Earthquake (alog 86793)
5.8 EQ from Oregon downed the IFO before SEI or SUS had a chance to prep.
The Storm (alog 86795)
The storm caused a slight power glitch that turned off the high voltage for both HAM6 and the PSL.
The Aftershock:
While relocking in INITIAL_ALIGNMENT, we got an aftershock (4.9 EQ) from the same location, which prompted EQ mode to turn on.
Initial alignment finished, I began locking, and DRMI caught immediately; thanks to the awesome power of nature, this shift was not boring. Except of course, the OMC is not locking (despite turning the high voltage back on) - see below for the 3rd installment nobody asked for.
The OMC:
The OMC high voltage also tripped, though I did turn it on (narrator: or so he thought).
Other:
The DARM FOM is having trouble connecting to NDS, so I can't see the violin modes, but they're extremely high (maybe due to the possibly fake measured 573 Lin Velocity of that Oregon EQ). Violin MEDM attached.
Anyway, I should log off now before I summon a tornado or something. Honestly, this has been a fantastic learning opportunity (and don't worry, I have the day off tomorrow).
LOG:
After the watchdogs were reset, a lightning strike caused a power glitch. The GC UPS reported going onto and off of battery power.
h1susauxh2 and h1susauxex went offline; it turned out they were rebooting themselves, and they came back after a few minutes.
PSL is in a bad way; Ibrahim is calling for support.
The lights in my house flickered at the time I saw the sus-aux machines go down. We have been having a storm roll over us for the past hour, moving from Oregon northwards.
Ibrahim is on his way to the CER MEZ to reset the PSL REFCAV high-voltage supply.
Back on.
Here are details of the power glitches due to the electrical storm Monday night. We had a large glitch at 21:51:26 followed 5 seconds later by a smaller glitch.