At 16:14:31 PDT h1daqdc0 crashed. Its EPICS IOC stopped running, resulting in white boxes on MEDM. Its last log entry was at 15:56.
I connected a monitor to its VGA port; the console was showing the login prompt, but the cursor was not flashing and an attached keyboard was unresponsive.
Erik and I rebooted the machine by pressing the front panel RESET button. It booted and started with no problems.
Currently we don't know why dc0 froze this way.
TITLE: 06/18 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 27mph Gusts, 14mph 3min avg
Primary useism: 0.11 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Currently relocking and in MOVE_SPOTS. Wind and ground motion are still a bit high.
Lockloss of unknown cause. It could be attributed to the wind, which is on the rise and expected to keep increasing tonight; gusts were over 30mph at the time of the lockloss.
It seems that a length kick in EY was the first thing seen causing the lockloss. Lockloss report.
This lockloss could have been caused by a ring-up at a frequency between 11 and 13 Hz in the LSC loops.
Here is the lockloss tool link: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1434317297
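For reference, here is a minimal gwpy sketch of how one might look for that 11-13 Hz ring-up in the minute before the lockloss. The channel name is only a stand-in for whichever LSC loop signal you want to check; the GPS time comes from the lockloss tool link above.
```python
# Minimal sketch (not the exact analysis used here): look for a ring-up
# between 11 and 13 Hz before the lockloss. Channel name is an assumption.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1434317297
chan = 'H1:LSC-DARM_IN1_DQ'   # stand-in for the LSC loop signal of interest

data = TimeSeries.get(chan, lockloss_gps - 60, lockloss_gps)

ring = data.bandpass(11, 13)  # isolate the suspected 11-13 Hz band
rms = ring.rms(1.0)           # 1-second RMS trend; a steady rise suggests a ring-up

plot = rms.plot()
plot.savefig('lsc_11_13Hz_rms.png')
```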
23:50 Observing
There were a few SDF diffs in ramp times before going to observing, which I think came from running various scripts such as the A2L and DARM offset step scripts. I reverted all of the changes so we (hopefully) don't get another SDF diff the next time we lock.
I ran a noise budget injection into DC6 (centering loop for POP WFS), using a broadband excitation from 10-100 Hz. Based on the results in the DTT template (attached), there is no measurable contribution to noise in DARM when injecting about 100x above ambient in the DC6 P control (bottom left plot; the reference trace shows the quiet time, the live trace shows the injection). We can include this channel in the ASC noise budget, but our code won't even generate a trace since the reference DARM and injection DARM shown here are exactly the same (top left plot; the blue reference trace shows the quiet time, the red live trace shows the injection time).
At a later time I will check DC6 Y.
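For anyone repeating this comparison outside of DTT, here is a minimal quiet-vs-injection sketch with gwpy; the GPS spans and the DARM channel name are placeholders, not the actual times or channels used for this measurement.
```python
# Sketch only: quiet/injection spans are placeholders, and the DARM channel
# name is an assumption.
from gwpy.timeseries import TimeSeries

darm = 'H1:GDS-CALIB_STRAIN'
quiet_span = (1434300000, 1434300300)   # placeholder quiet time
inj_span   = (1434301000, 1434301300)   # placeholder injection time

quiet = TimeSeries.get(darm, *quiet_span).asd(fftlength=10, overlap=5)
inj   = TimeSeries.get(darm, *inj_span).asd(fftlength=10, overlap=5)

# A ratio of ~1 across 10-100 Hz is consistent with no measurable contribution
ratio = inj / quiet
plot = ratio.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_xlim(10, 100)
plot.savefig('darm_injection_over_quiet.png')
```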
Ran the A2L script for all suspensions during commissioning today. Screenshot attached.
Wed Jun 18 10:11:15 2025 INFO: Fill completed in 11min 11secs
H1 ISI CPS Noise Spectra Check - FAMIS 26048
NEW and IMPROVED H1 ISI CPS Noise Spectra check: now includes HAM1!
HAM1 currently has some very loud VAC equipment attached to it, which is running and may be why HAM1 looks so terrible relative to the rest of the HAMs.
Sheila, Camilla
We can see large SQZ changes dependent on the OPO PZT value; we've seen this before. Some alignment changes from this PZT should be compensated by FC AS and FC beamspot control. The FC beamspot control has been off since the vent, but we've turned it on again in the hope of reducing this dependency.
Yesterday we needed to turn the ASC on to improve high-frequency sqz 85147, and since we've started using the THERMALIZATION guardian 85083 to slowly adjust the SRCL offset, our squeezing and ASC error signals are reduced slightly (see below). We have turned the SQZ ASC back on, as we expect this new guardian will stop the ASC from running away.
Now that we have the THERMALIZATION guardian working, the ADF-measured sqz angle change has reduced (see below), so we want to try turning SQZ_ANG_SERVO back on, which will take a little tuning of settings. You can see in this plot that when the OPO PZT changed, the servo would have adjusted the sqz angle too.
Also touched the SHG launch waveplates to decrease the rejected power in H1:SQZ-SHG_FIBR_REJECTED_DC_POWER.
SQZ ASC was running away at the start of the lock so I've turned SQZ ASC off again.
I tried re-measuring the sensing matrix; the result was different from the one measured in September 80373 with the YAW sensor swapped (see output of /sqz/h1/scripts/ASC/AS42_sensing_matrix_cal.py below), but when I tried it again later the sensing matrix seemed different. I expect you need to start in good squeezing for it to work well, which we were not when I tried it.
Plan to remeasure the sensing matrix more carefully (as in 80373) and then try a new one again.
Also tried the SQZ_ANG_ADJUST servo using the ADF today; since the ASC was running away this was confusing, so it was left off.
Jonathan, Erik, Dave, Ibrahim, Tony:
Following the front end restarts yesterday there has been a spate of Guardian AWG connection issues, e.g. alog 85130.
Erik recommended that we reboot h1guardian1 at the next opportunity to force new front end connections for all Guardian nodes, rather than restarting each node as the problem arises.
Following a lockloss this morning, the control room gave us the go-ahead to reboot h1guardian1 at 08:49 Wed 18 Jun 2025 PDT. Like TJ's last reboot 12 days ago, h1guardian1 came back and restarted all of its nodes quickly and without any problems.
However our two guesses are:
Reminder that for the past month HAM1 pressure has been reported by a temporary "H1" version of the old cold-cathode PT100A (called H1:VAC-LY_X0_PT100B_PRESS_TORR), which uses an ADC in h0vacly and calculates the pressure from the raw voltage signal.
Yesterday Patrick installed the h0vaclx full Beckhoff readout of this gauge via an ethernet connection. The channel name for this gauge from now onwards is H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR
For now I'm showing both channels on the Vacuum MEDM and FOMs, H1:VAC-LY_X0_PT100B_PRESS_TORR upper, H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR lower.
Note that the H1 channel reads a slightly bigger number; the voltage signal increased slightly when Gerardo plugged the ethernet cable into h0vaclx yesterday.
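A minimal sketch for trending the two gauge readbacks together, assuming both slow channels are retrievable over NDS (the time span below is arbitrary):
```python
# Sketch: compare the temporary H1 readback against the new H0 Beckhoff readout.
# Assumes both slow channels are available over NDS; the span is arbitrary.
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:VAC-LY_X0_PT100B_PRESS_TORR',
         'H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR']

data = TimeSeriesDict.get(chans, 'Jun 18 2025 00:00', 'Jun 18 2025 12:00')

plot = data.plot()
plot.gca().set_ylabel('Pressure [Torr]')
plot.savefig('ham1_pt100_comparison.png')
```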
TITLE: 06/18 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 11:57 UTC
Looks like we were able to recover fully automatically from a M5.8 earthquake in Mexico last night.
TITLE: 06/18 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Currently Observing at 145 Mpc and have been locked for almost 2 hours. Everything is looking good.
Two hours into our previous lock, our range was slowly dropping and we could see that SQZ wasn't looking very good. Trending back the optic alignments, Camilla saw that ZM4 and ZM6 weren't where they were supposed to be because the ASC hadn't been offloaded before all the updates earlier today (ndscope1). To fix this, we popped out of Observing and turned on the SQZ ASC for three minutes, which made the squeezing better (ndscope2)! This better squeezing has persisted into this next lock stretch.
After the 01:30 lockloss (85145), I sat in DOWN for a couple of minutes while restarting a few Guardian nodes, since awg was restarted earlier today and this should help with the guardian node error (like in 85130). These are the ones I could think of that probably all use awg: ALS_{X,Y}ARM, ESD_EXC_{I,E}TM{X,Y}, SUS_CHARGE, PEM_MAG_INJ. Tagging Guardian.
LOG:
23:30 Observing and have been Locked for over 1 hour
00:24 Popped out of Observing quickly to run the SQZ ASC for a few minutes
00:27 Back into Observing
01:30 Lockloss
- Sat in DOWN for a couple minutes while I restarted some Guardian nodes
- We ended up in CHECK_MICH_FRINGES but that didn't help, so I started an IA
03:05 NOMINAL_LOW_NOISE
03:08 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | SAF | LASER SAFE | LVEA | SAFE | LVEA is LASER SAFE ദ്ദി( •_•) | 15:37 |
20:49 | ISS | Keita, Rahul | Optics lab | LOCAL | ISS array work (Rahul out 23:45) | 00:28 |
A trend of H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT shows increased motion, including two peaks, during the 10 minutes post-RCG-upgrade when OMC0 (see alog 85120) was clobbering IPCs.
The attached screenshot has cursors at the approximate start and end of OMC0 clobbering IPCs. The RMS remained high until guardian was started 30 minutes later, after which ITMY continued to ring until guardian was restarted again.
We will attempt to trace the clobbered IPCs to see if they plausibly could have driven ITMY.
The attached list shows the mapping from OMC0 IPCs to the IPCs that were clobbered during the ten minutes OMC0 was running on the wrong IPC table.
ITMX, which received the same clobbered channel as ITMY, also showed a spike in movement during the same period, but it was properly stilled by guardian.
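A minimal sketch of the RMS trend described above; the GPS span is a placeholder bracketing the upgrade window (use the cursor times from the screenshot), and the ITMX channel name is assumed by analogy with the ITMY one.
```python
# Sketch: reproduce the ITM OSEM RMS trends around the OMC0 IPC-clobbering window.
# GPS span is a placeholder; ITMX channel name is an assumption.
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT',
         'H1:IOP-SUS_ITMX_WD_OSEM1_RMSOUT']

data = TimeSeriesDict.get(chans, 1434320000, 1434324000)   # placeholder span

plot = data.plot()
plot.savefig('itm_osem_rms_trend.png')
```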
FW0 full-frame gap due to the crash and restart:
Jun 18 16:14 H-H1_R-1434323584-64.gwf
Jun 18 16:26 H-H1_R-1434324288-64.gwf
FRS34505
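For reference, the size of the gap implied by the file names above; each H-H1_R frame file covers 64 seconds starting at the GPS time in its name.
```python
# Quick arithmetic check of the frame gap from the two file names above.
last_before = 1434323584   # Jun 18 16:14 frame (last written before the crash)
first_after = 1434324288   # Jun 18 16:26 frame (first written after the restart)
frame_len = 64

gap_start = last_before + frame_len
gap_seconds = first_after - gap_start
print(gap_start, gap_seconds, gap_seconds // frame_len)   # 1434323648, 640 s, 10 frames
```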