After the SQZ improved H1 range, about 15min later (1832utc) there was a notification of an incoming M6.0 Solomon Islands earthquake...this caused a lockloss at 1856utc. H1's lock lasted 14hrs22min.
(V1 & L1 are still observing.)
After almost 24 hrs of H1's range drifting down about 10Mpc (from about 156 down to 146Mpc, see attachment #1), and while I was looking at instructions for SQZ Checks and Improve Squeezing, H1 dropped out of Observing due to the Squeezer at 1809utc (coincidentally around the time of the CP1 overfill). The CLF_LR, FC, and MANAGER nodes went down and then relocked automatically.
As it was relocking, the SQZ came back with a better OPO Temperature (after the SQZ came back, I tried small OPO TEC Temp slider changes, but it was already optimized), and it looks like the SQZ automatically improved H1 range! (see attachment #2)
Was hoping to see the "H1 Live" trace be better at high frequency, but it is still higher than the reference above 1500Hz (see attachment #3).
Sun Feb 23 10:06:25 2025 INFO: Fill completed in 6min 22secs
TCs started at lower temps because of the warm-up outside, so I reduced the trip temps from -30C to -60C. TCmins [-134C, -130C] OAT (+12C, 53F) DeltaTempTime 10:06:25
TITLE: 02/23 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 11mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
H1's been locked 11hrs. There's been a slow downward drift in H1 range over the last 24hrs (through two locks). I entered the latest settings for ITMY MODE-5 by hand a few minutes ago. Microseism is still high but decreasing slightly, and it's been breezy (gusts up to 25mph) for the last 12hrs.
TITLE: 02/23 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Lockloss from an earthquake
Relocking took a few tries; after an Initial Alignment it locked just fine.
Nominal_Low_noise Reached at 04:33 UTC
Observing at 04:35 UTC
The nominal settings for the violins seem to be working well, so I did not change them.
H1 has been locked for 1.5 hours.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! (⌐■_■) | 06:13 |
| 21:30 | pcal | tony | pcal.lab | yes | pcal work | 22:15 |
| 02:30 | PEM | Robert | LVEA | Yes | Covering Viewports. | 02:39 |
Lockloss due to an M5.6 earthquake from the Solomon Islands, or a number of smaller earthquakes from the same region.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi/event/1424312644
TITLE: 02/22 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
Nice shift for H1 with a lock over 12.5hrs. Also ran the Saturday calibration. Microseism is still high.
LOG:
TITLE: 02/23 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
H1 has been Locked for 12.5 hours.
If Unlocked:
Violin ITMY Mode 5 settings were changed by hand by the Day operator because Rahul had mentioned to Corey that ITMY Mode 5 should not have a gain of zero.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82975
Other than that all systems are running well.
Earlier in the shift, I happened to notice IY Mode-1's gain was not at nominal (it was at 0 instead of the nominal -10); this mode was NOT rung up and looked decent, with an OUTPUT around ~2 that was slightly increasing. (See attachment #1 for a look at this mode over the last 24hrs.)
I had tried to take the gain to -10 while in Observing, but VIOLIN_DAMPING would immediately take the gain back to 0. I chatted with Rahul and he mentioned that this sometimes happens. We decided to hold off on this gain change for a bit, because Rahul was not sure whether taking the VIOLIN_DAMPING guardian from DAMPING_VIOLINS_FULL_POWER to DAMPING_ON_SIMPLE would take us out of Observing, but we were going to drop out of Observing for calibration shortly anyway.
VIOLIN_DAMPING Guardian State Change Takes H1 Out Of OBSERVING
At 1930utc, I took the violins guardian to DAMPING_ON_SIMPLE, and this took us out of OBSERVING. At this point I ran the calibration. Once the calibration measurement was complete, at 2000utc, I took IY Mode-1's gain from 0 to its nominal of -10.0.
I then returned the violin guardian to DAMPING_VIOLINS_FULL_POWER. (See attachment #2 for a look at when the gain was restored.)
Measurement NOTES:
Sat Feb 22 10:08:23 2025 INFO: Fill completed in 8min 19secs
TCmins [-81C, -79C] OAT (+4C,40F) DeltaTempTime 10:08:23
While filling out the Lock Reacquisition Survey, noticed that for the two locks overnight:
TITLE: 02/22 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
H1's been locked almost 4hrs. H1 has not shown the glitchy behavior for the last 12hrs (on Omicron) and our violins are looking much better, too!
Environmentally, secondary microseism continued its increase and is now squarely at the 95th percentile; winds are calm, but there was a little windstorm 3-5hrs ago with gusts above 20mph.
Calibration is scheduled for 1130am PST (1930utc).
TITLE: 02/22 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 was locked for 6 Hours and 58 Minutes...
And Then... Unknown lockloss 15 Minutes before the end of the Shift.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1424238353
I had H1 do an Initial alignment and started normal locking before leaving.
The Violin Mode IY Mode 5 may need to be adjusted again, as it was previously set to FM 6,8,10 with a gain of +0.01 instead of its nominal state, which was working well.
LOG:
No Log
Strange dip in H0:VAC-MY_FAN2_270, 2 and a half days ago.
Averaging Mass Centering channels for 10 [sec] ...
2025-02-21 19:13:29.266265
There are 16 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.872 [V]
ETMX T240 2 DOF Y/V = -1.013 [V]
ETMX T240 2 DOF Z/W = -0.359 [V]
ETMY T240 3 DOF X/U = 0.313 [V]
ITMX T240 1 DOF X/U = -1.715 [V]
ITMX T240 1 DOF Y/V = 0.441 [V]
ITMX T240 1 DOF Z/W = 0.52 [V]
ITMX T240 2 DOF Y/V = 0.337 [V]
ITMX T240 3 DOF X/U = -1.779 [V]
ITMY T240 3 DOF X/U = -0.723 [V]
ITMY T240 3 DOF Z/W = -2.288 [V]
BS T240 1 DOF Y/V = -0.325 [V]
BS T240 3 DOF Z/W = -0.397 [V]
HAM8 1 DOF X/U = -0.307 [V]
HAM8 1 DOF Y/V = -0.421 [V]
HAM8 1 DOF Z/W = -0.687 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.091 [V]
ETMX T240 1 DOF Y/V = 0.038 [V]
ETMX T240 1 DOF Z/W = 0.101 [V]
ETMX T240 3 DOF X/U = 0.107 [V]
ETMX T240 3 DOF Y/V = 0.05 [V]
ETMX T240 3 DOF Z/W = 0.051 [V]
ETMY T240 1 DOF X/U = 0.129 [V]
ETMY T240 1 DOF Y/V = 0.217 [V]
ETMY T240 1 DOF Z/W = 0.287 [V]
ETMY T240 2 DOF X/U = -0.034 [V]
ETMY T240 2 DOF Y/V = 0.248 [V]
ETMY T240 2 DOF Z/W = 0.096 [V]
ETMY T240 3 DOF Y/V = 0.168 [V]
ETMY T240 3 DOF Z/W = 0.19 [V]
ITMX T240 2 DOF X/U = 0.229 [V]
ITMX T240 2 DOF Z/W = 0.295 [V]
ITMX T240 3 DOF Y/V = 0.172 [V]
ITMX T240 3 DOF Z/W = 0.164 [V]
ITMY T240 1 DOF X/U = 0.13 [V]
ITMY T240 1 DOF Y/V = 0.166 [V]
ITMY T240 1 DOF Z/W = 0.083 [V]
ITMY T240 2 DOF X/U = 0.054 [V]
ITMY T240 2 DOF Y/V = 0.286 [V]
ITMY T240 2 DOF Z/W = 0.192 [V]
ITMY T240 3 DOF Y/V = 0.123 [V]
BS T240 1 DOF X/U = -0.126 [V]
BS T240 1 DOF Z/W = 0.193 [V]
BS T240 2 DOF X/U = -0.002 [V]
BS T240 2 DOF Y/V = 0.106 [V]
BS T240 2 DOF Z/W = -0.047 [V]
BS T240 3 DOF X/U = -0.099 [V]
BS T240 3 DOF Y/V = -0.283 [V]
Assessment complete.
Averaging Mass Centering channels for 10 [sec] ...
2025-02-21 19:16:03.265814
There are 1 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -2.336 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.46 [V]
STS A DOF Y/V = -0.842 [V]
STS A DOF Z/W = -0.555 [V]
STS B DOF X/U = 0.251 [V]
STS B DOF Y/V = 0.955 [V]
STS B DOF Z/W = -0.313 [V]
STS C DOF X/U = -0.86 [V]
STS C DOF Y/V = 0.793 [V]
STS C DOF Z/W = 0.694 [V]
STS EX DOF X/U = 0.016 [V]
STS EX DOF Y/V = -0.038 [V]
STS EX DOF Z/W = 0.123 [V]
STS EY DOF Y/V = -0.066 [V]
STS EY DOF Z/W = 1.362 [V]
STS FC DOF X/U = 0.192 [V]
STS FC DOF Y/V = -1.1 [V]
STS FC DOF Z/W = 0.662 [V]
Assessment complete.
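The two assessments above follow the same pattern: average each mass-centering channel for 10 s, then flag any proof mass whose mean offset exceeds the threshold (0.3 V for T240s, 2.0 V for STSs). A minimal sketch of that split, not the actual site script; the channel names and values are copied from the log output above just for illustration:

```python
# Sketch of the proof-mass range check reported above. `readings` stands in
# for the 10 s channel averages; thresholds follow the log (0.3 V T240, 2.0 V STS).

def assess(readings, threshold):
    """Split averaged readings into out-of-range and in-range sets."""
    out = {ch: v for ch, v in readings.items() if abs(v) > threshold}
    ok = {ch: v for ch, v in readings.items() if abs(v) <= threshold}
    return out, ok

t240 = {
    "ETMX T240 2 DOF X/U": -0.872,  # out of range in the log above
    "ETMX T240 1 DOF X/U": 0.091,   # within range
    "ITMX T240 1 DOF X/U": -1.715,  # out of range
}
out, ok = assess(t240, 0.3)
print(f"There are {len(out)} T240 proof masses out of range ( > 0.3 [V] )!")
for ch, v in out.items():
    print(f"{ch} = {v} [V]")
```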
TITLE: 02/22 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
H1 is currently Locked and has been Observing for 1.5 hours.
Our plan is to continue to Observe for the rest of the night.
I am told I likely need to keep an eye on PI mode 8, and Violin mode IY Mode 5 as it may require non-nominal settings.
Pi Mode 8 channel name to watch = H1:SUS-PI_PROC_COMPUTE_MODE8_NORMLOG10RMSMON
Since the Ring Heater settings were changed, I'll likely have to do an Initial Alignment after each Lockloss.
Everything else seems to be functioning normally.
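Keeping an eye on the PI Mode 8 channel named above could be scripted as a simple threshold watch. This is only a hypothetical sketch: `read_channel` is a stand-in for a real EPICS read (e.g. a caget-style client), and the 0.5 threshold and canned value are illustrative, not the site's actual alarm level:

```python
# Hypothetical threshold watch for the PI Mode 8 monitor channel from the log.
# read_channel is a stand-in so the sketch runs offline; on site this would
# be an EPICS channel-access read. The threshold is illustrative only.

CHANNEL = "H1:SUS-PI_PROC_COMPUTE_MODE8_NORMLOG10RMSMON"

def read_channel(name):
    # Canned value standing in for a live EPICS read.
    return {CHANNEL: 0.8}.get(name, 0.0)

def check_pi_mode(threshold=0.5):
    """Return the current monitor value and whether it exceeds the threshold."""
    value = read_channel(CHANNEL)
    return value, value > threshold

value, rung_up = check_pi_mode()
print(f"{CHANNEL} = {value:.2f} (rung up: {rung_up})")
```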
TITLE: 02/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
LOG:
WP12339
Dave:
The offload of the past 6 months of raw minute trend files from h1daqtw1's SSD-RAID to permanent storage is complete (disk usage was reduced from 92% to 2%).
There have been a few other alogs about this already:
H1's range is slightly improved after the initial drop (possibly due to PRCL and A2L tuning in the commissioning window), but the large glitches are still present. They were not present in the early part of yesterday's lock, but reappeared after TJ took a calibration measurement around 22 UTC (they were present all night and caused the retraction of a GW alert), and they are not present so far in the current lock, which was acquired automatically. Looking at the summary page plots, the glitches that come and go are those with SNR between 10 and 30, at 60 Hz and just below 50 Hz. (I can't read the precise frequency from the summary page plot, to know whether these glitches line up with the PR3 roll mode or not.) When the glitches are on, they have a rate of something like 2e-2 Hz (for SNR 10-20), which means roughly a glitch every 50 seconds, more if we include the SNR 20-30 glitches. Hveto doesn't identify an auxiliary channel that can veto these glitches.
Looking at the range plot, there seem to be pretty regular drops in range when the glitches at 50 and 60 Hz are present; zooming in, the spacing between these range drops is 5-6 minutes, although I'm using a channel that only updates every minute. Looking at the DCPD sum or ESD drive channel time series doesn't show an obvious way to get a better handle on the timing of these larger, less frequent glitches.
The correlation with the PI channel found by Jane's Lasso run (82944) seems to continue: the PI channel keeps getting rung up in the time periods when the range has these drops every 5 minutes.
Editing to add:
It looks like the MODE8 channel first got elevated like this on Feb 5th, and there have been a number of instances where this channel was elevated along with glitches at 60 Hz and just below 50 Hz:
Following up Sheila's investigation, I've made a set of slides comparing the glitching seen in the strain channel against the 10.4 kHz PI channel for many of the times highlighted above. We can see a clear correlation between this channel and the presence of the glitching in strain.
Within the glitchy periods, there seem to be correlations between the MODE8 (TMS X QPDs bandpassed from 10kHz-10.8kHz) and MODE24 (DCPDs) monitors and the range drops that are somewhat regular during these glitchy periods (see screenshot).
We don't have an equivalent monitor set up for TMS Y QPDs.
Daniel and I looked at this time in DTT watching exponential averages, and it does seem that the mode at 10432 Hz is going up and down by a few orders of magnitude; at least once this seems to happen after the glitch that shows up in the GW band. The mode directly below it is also going up and down. Watching this with exponential averaging tends to crash DTT, but some screenshots are attached.
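The exponential averaging used in DTT weights recent spectra more heavily than old ones, which is why a mode ringing up and down by orders of magnitude shows so clearly in the averaged trace. A toy illustration of that weighting (not DTT itself; the spectra and alpha are made up):

```python
import numpy as np

def exp_average(spectra, alpha=0.3):
    """Exponentially average a sequence of PSDs: new average =
    alpha * latest + (1 - alpha) * previous average."""
    avg = None
    for psd in spectra:
        avg = psd if avg is None else alpha * psd + (1 - alpha) * avg
    return avg

# Flat toy spectrum with one line (bin 5) that rings up 100x in later segments.
quiet = np.ones(10)
loud = quiet.copy()
loud[5] = 100.0

avg = exp_average([quiet, quiet, loud, loud])
print(avg[5], avg[4])  # the rung-up line dominates its quiet neighbors
```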