Mon Dec 02 10:12:17 2024 INFO: Fill completed in 12min 13secs
Gerardo confirmed a good fill curbside.
WP12223 FRS32776
Jonathan, Erik, Dave:
cdslogin is currently down for a rebuild following a disk failure yesterday. Alarms and Alerts are currently offline. EDC has 58 disconnected channels (cdslogin epics_load_mon and raccess channels).
Rebuild is finished. Alarms are working again.
Sheila, Dripta, Camilla. Repeat of 78776
Turned off H1:LSC-SRCL1_OFFSET with a ramp time of 10s. Turning the offset off made the FCC drop ~2%. See attached plot. Left H1:LSC-SRCL1_OFFSET off.
Turned the SRC1 ASC loop inputs off (with a 0.1s ramp time this would have quickly turned off the offsets too, but the offsets were already off). Set the ramp to 10s, turned the gain to 0, cleared history, then set the gain back to 4.
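For reference, this sequence can be scripted; below is a minimal pyepics sketch of the ramp-down / clear-history / restore steps. The ASC-SRC1 pitch and yaw filter bank channel names are assumptions (not checked against the MEDM screens); the 10s ramp and nominal gain of 4 come from this entry.

```python
# Minimal sketch of the gain ramp-down / clear-history sequence described above.
# Channel names for the SRC1 ASC banks are assumed; RSET=2 is the standard CDS
# filter-module "clear history" write.
from epics import caput
import time

IFO = 'H1'
BANKS = ['ASC-SRC1_P', 'ASC-SRC1_Y']   # assumed pitch and yaw filter banks
RAMP = 10                              # seconds, matching the 10s ramp used here
NOMINAL_GAIN = 4

for bank in BANKS:
    prefix = f'{IFO}:{bank}'
    caput(f'{prefix}_TRAMP', RAMP)         # set the gain ramp time
    caput(f'{prefix}_GAIN', 0)             # ramp the loop gain to zero
    time.sleep(RAMP + 1)                   # wait for the ramp to complete
    caput(f'{prefix}_RSET', 2)             # clear the filter history
    caput(f'{prefix}_GAIN', NOMINAL_GAIN)  # restore nominal gain with the input still off
```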
We then started stepping SRM Y in steps of 0.5. This was too big and we lost lock (sorry). It seems we actually moved 500+ urad rather than the expected 0.5 urad, and this caused a lockloss <0.5s later, plot attached. Sorry. I moved the alignment sliders back while Oli was locking as they were getting SRM saturations.
We would have aimed to increase H1:CAL-CS_TDEP_F_C_OUTPUT with SRM moved (SRC2 was left closed and would have moved SR2 to compensate). This SRM location could then have been used for the SRC1 ASC offsets, and we could then have taken a SQZ FIS dataset (e.g. 80318) to find the best SRCL offset.
Lockloss @ 12/02 16:58 UTC after 3:39 locked due to commissioning activities
Back to Observing as of 20:08UTC
TITLE: 12/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.46 μm/s
QUICK SUMMARY:
Observing and have been Locked for over 2 hours. Secondary useism looks to be going up a bit in the past few minutes.
Lockloss from last night - December 02 @ 11:48 UTC
Last night's lockloss has some interesting features in the signals right before it.
TITLE: 12/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A long relock today due to DRMI locking struggles and lock losses at Transition_From_ETMX. Once back up to observing, the range hasn't been very stable, hovering between 140-150 Mpc. I was waiting for a drop in triple coincidence before trying to tune squeezing, but that didn't happen before the end of my shift. The 15-50 Hz region in particular looks especially poor.
LOG:
Plot of range and glitches attached.
It didn't look like squeezing was explicitly the issue last night when the range was low, as SQZ was stable, sqz plot. The range seemed to correct itself on its own as we stayed in Observing. If it happens again, we could go to NoSQZ or FIS to check it's not backscatter from SQZ or FC.
Oli and I ran a Bruco during the low range time, bruco website, but Sheila noted that the noise looks non-stationary, like scatter, so a bruco isn't the best way of finding the cause.
I ran a range comparison using 50 minutes of data from a time in the middle of the bad range and a time after it stopped, with good range. Excess noise looks to be mostly below 100 Hz for sensmon; for DARM the inflection point looks to be at 60 Hz, and there is broadband noise, but low frequency again seems larger.
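For anyone repeating this comparison, a rough gwpy sketch is below. The GPS start times are placeholders (not the actual times used), and the channel and FFT settings are assumptions.

```python
# Rough sketch of a 50-minute good-range vs low-range ASD comparison using gwpy.
# GPS start times below are placeholders; channel and FFT settings are assumed.
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:GDS-CALIB_STRAIN'
DURATION = 50 * 60                # 50 minutes of data
T_BAD = 1417150000                # hypothetical start of the low-range stretch
T_GOOD = 1417160000               # hypothetical start of the good-range stretch

bad = TimeSeries.get(CHANNEL, T_BAD, T_BAD + DURATION)
good = TimeSeries.get(CHANNEL, T_GOOD, T_GOOD + DURATION)

# Median-averaged ASDs to suppress individual glitches, then overlay for comparison
asd_bad = bad.asd(fftlength=8, overlap=4, method='median')
asd_good = good.asd(fftlength=8, overlap=4, method='median')

plot = asd_good.plot(label='good range', color='C0')
ax = plot.gca()
ax.plot(asd_bad, label='low range', color='C1')
ax.set_xlim(10, 1000)
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('ASD [1/rtHz]')
ax.legend()
plot.savefig('range_comparison_asd.png')
```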
I also checked these same times in my "Misaligned" GUI, which compares SUS top mass OSEMs, witness sensors, and oplev average motion to compare alignments for relocking and to look for drifts. It doesn't look all that useful here; the whole IFO is moving together throughout the lock. I ran it for separate times within the good range block as well and it looks pretty much the same.
As discussed in today's commissioning meeting, if this low range with glitches on Omicron at low frequency happens again, can the operator take SQZ_MANAGER to NO_SQUEEZING for 10 minutes so that we can check this isn't caused by backscatter from something in HAM7/8. Tagging OpsInfo.
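For completeness, a minimal sketch of how that check could be scripted is below. It assumes the standard guardian request channel naming (H1:GRD-SQZ_MANAGER_REQUEST) and assumes FREQ_DEP_SQZ is the nominal state to return to; in practice the operator would simply make the request from the SQZ_MANAGER guardian screen.

```python
# Sketch only: request NO_SQUEEZING for 10 minutes, then return to nominal squeezing.
# The guardian request channel name and the FREQ_DEP_SQZ state name are assumptions.
from epics import caput
import time

caput('H1:GRD-SQZ_MANAGER_REQUEST', 'NO_SQUEEZING')   # request the no-squeezing state
time.sleep(10 * 60)                                   # hold for the 10-minute backscatter check
caput('H1:GRD-SQZ_MANAGER_REQUEST', 'FREQ_DEP_SQZ')   # assumed nominal squeezing state
```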
Runs of HVeto on this data stretch indicate extremely high correlations between strain glitches and glitches in SQZ FC channels. The strongest correlation was found with H1:SQZ-FC_LSC_DOF2_OUT_DQ.
The full HVeto results can be seen here: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20241202/1417132818-1417190418/
An example of H1 strain data and the channel highlighted by HVeto can be seen in the following attached plots:
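As a side note, a rough gwpy sketch of how similar time-frequency plots could be regenerated is below; the GPS time is a placeholder, not one of the actual HVeto triggers.

```python
# Illustrative sketch (not the attached plots): q-transforms of strain and the
# HVeto-flagged FC channel around a single glitch time. T_GLITCH is a placeholder.
from gwpy.timeseries import TimeSeries

T_GLITCH = 1417160000              # hypothetical glitch time from the HVeto round
SPAN = 8                           # seconds of data around the glitch

strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', T_GLITCH - SPAN/2, T_GLITCH + SPAN/2)
fc = TimeSeries.get('H1:SQZ-FC_LSC_DOF2_OUT_DQ', T_GLITCH - SPAN/2, T_GLITCH + SPAN/2)

for name, data in [('strain', strain), ('fc_lsc_dof2', fc)]:
    q = data.q_transform(outseg=(T_GLITCH - 1, T_GLITCH + 1))
    plot = q.plot()
    plot.gca().set_yscale('log')
    plot.savefig(f'qscan_{name}.png')
```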
Derek also kindly ran lasso for that time period (link to lasso run) and the top correlated channel is H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ. Back in May we were seeing correlations between drops in range, FC alignment, and the values in this same TCS channel (78089). Here's a screenshot of the range vs. that channel - the TCS channel matches how it was looking back in May. As stated in that May alog thread, the cable for this channel was, and still is, unplugged :(
FINALLY made it! After lots of struggles getting DRMI to lock and 3 lock losses from Transition_From_ETMX, we are now observing.
We've now had two lock losses from the Transition_From_ETMX state, or immediately after, while trying to reacquire. Unlike back on Nov 22 (alog81430), SRCL seems fine. For the first one, the lockloss tool 1417133960 shows the IMC-SERVO_SPLITMON channel saturating ~5 sec before the lock loss, then there are some odd LSC signals 40 msec before the tool tagged the lock loss (attachment 1). This might just be the lock loss itself though. The second one (lockloss tool 1417136668) hasn't tagged anything yet, but ETMY has a glitch 2.5 sec before the lock loss and ETMX seems to move more from that point (attachment 2).
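For a quick look at these two events, something like the gwpy sketch below can pull the relevant channels around each lockloss GPS time. The channel list beyond the ones named above is an assumption, and the stored DQ names may carry extra suffixes.

```python
# Quick-look sketch: fetch channels of interest around each lockloss time and plot
# them on stacked axes. Channel list and plotting details are assumptions.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

CHANNELS = [
    'H1:IMC-SERVO_SPLITMON',              # named in the entry; stored DQ name may differ
    'H1:LSC-DARM_IN1_DQ',
    'H1:SUS-ETMX_L2_MASTER_OUT_UR_DQ',    # hypothetical ETMX L2 drive channel
]

for gps in (1417133960, 1417136668):      # lockloss tool event times from this entry
    data = TimeSeriesDict.get(CHANNELS, gps - 10, gps + 2)
    plot = Plot(*data.values(), separate=True, sharex=True)
    plot.savefig(f'lockloss_{gps}.png')
```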
Another one while I was looking at the code for the Transition_From_ETMX state. We were there for a few minutes before I noticed the CHARD & DHARD inputs ringing up. Unsure how to save it, I just requested it to move on, but that led to a lock loss.
I ended up changing SUS-ITMX_L3_LOCK_BIAS_TRAMP from 30 to 25 to hopefully move to a safer place sooner. Since it was already changed from 60 a week ago, I didn't want to go too short. It worked this one time.
Sheila, Camilla
We had another lockloss this morning with an ETMX_L2 glitch beforehand, plot, and it seems that even a successful transition this morning had a glitch too, though smaller, plot. We looked at the ISC_LOCK.py code and it's not yet obvious what's causing this glitch. The successful transition also had a DARM wobble up to 2e6, plot, but when we have the locklosses, DARM goes to ~10e6.
While looking at all the filters, we found that the ETMX_L2_LOCK_L ramp time is 20s, screenshot, although we only wait 10s in ISC_LOCK. We will edit this tomorrow when we are not observing. We don't think this will affect the glitch, as there is no input/output to this filter at the time of the glitch.
The only thing that seems like it could cause the glitch is DARM1 FM1 being turned off; we don't yet understand how, and we had similar issues that we thought we solved in 77640.
This morning I edited ETMX_L2_LOCK_L FM1 ramp time down to 10s, reloaded coefficients.
TITLE: 12/01 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Just had a ~19.5hr lock! (was hoping we'd make it past 22.5hrs for a new record since the new NPRO swap)
Other than that a fairly quiet shift. (I'm under the weather and having TJ tag in.)
LOG:
TITLE: 12/01 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.46 μm/s
QUICK SUMMARY:
Just lost lock before I showed up. Currently acquiring DRMI. The useism is increasing, peaking above the 95th percentile. Wind is low.
Smooth sailing GW-detector-wise with H1 at 17hrs8min. (But on nuc28 the LockClock died, so I ran the launch startup script, which restarts the LockClock. So you have to add about 16hr13min to whatever time the LockClock shows for this current lock.)
Dave phoned in to address issues with the cdslogin computer.
What Else: Francisco and Neil are onsite. In the last few hours the microseism has continued to increase and is now peaking above the 95th percentile line.
Sun Dec 01 10:08:08 2024 Fill completed in 8min 5secs
No texts/emails for this fill due to cdslogin issues.
cdslogin is in a strange state: not quite down, but not sending alarms or alerts. It is pingable, and I am still using it as an ssh tunnel for my NoMachine connection to opslogin0, but it does not accept any new ssh logins and my shell on it cannot find any commands.
File system error starting at 03:42:20 this morning
I power cycled cdslogin remotely via IPMI (10:50 power down, 10:52 power up) to force a fsck. The system came back up in operational mode, the systemd services alarms and locklossalerts started normally.
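For the record, the remote power cycle could be scripted roughly as below. The BMC hostname, IPMI user, and credential handling are placeholders; the actual cycle was done by hand over IPMI.

```python
# Sketch of the remote power cycle via ipmitool. BMC address and user are
# placeholders; the IPMI password is read from the IPMI_PASSWORD env var (-E).
import subprocess
import time

BMC_HOST = 'cdslogin-ipmi'    # hypothetical BMC address
BMC_USER = 'admin'            # hypothetical IPMI user

def ipmi_power(action):
    """Run an ipmitool chassis power command against the BMC."""
    return subprocess.run(
        ['ipmitool', '-I', 'lanplus', '-H', BMC_HOST, '-U', BMC_USER, '-E',
         'chassis', 'power', action],
        check=True)

ipmi_power('off')     # 10:50 power down
time.sleep(120)       # ~2 minutes, matching the 10:50 -> 10:52 gap
ipmi_power('on')      # 10:52 power up, letting the boot run its fsck
```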
These services write to the local file system, which is presumably why they were down when the local FS switched to read-only mode.
Two improvements spring to mind:
Make alarms and alerts memory resident only, no reliance on any file system.
Make these services portable to cdsssh if cdslogin became unusable.
There is no indication of any mains power issues at 03:42 this morning. No UPS reports, and the three phases of the corner station mains-mon look good throughout.