TITLE: 06/17 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Currently Observing at 160 Mpc and have been locked for over one hour. Two locklosses during my shift, but both were easy to relock from.
LOG:
23:00 UTC Detector Observing and Locked for 21 hours
23:22 Kicked out of Observing due to two camera channels, ASC-CAM_PIT1_INMON and ASC-CAM_YAW1_INMON, glitching or restarting
- The cameras weren't able to turn back on fully and gave the warning message "[channel_name] is stuck! Going back to ADS"
- Referencing alog 77499, we contacted Dave and he restarted camera 26 (BS cam)
23:43 Back into Observing
01:37 Lockloss
- During relocking, COMM wasn't able to get IR high enough
- I stalled ALS_COMM and tried adjusting the COMM offset by hand, but it still wasn't working
02:21 I started an initial alignment
02:43 Initial alignment done, relocking
03:27 NOMINAL_LOW_NOISE
03:29 Started running SQZ alignment (SDF)
03:38 Observing
05:28 Lockloss
05:35 Lockloss from LOCKING_ALS, started an initial alignment
06:00 Initial alignment done, starting relocking
06:43 NOMINAL_LOW_NOISE
06:45 Observing
Lockloss @ 06/17 05:28 UTC from unknown causes. Definitely not from wind or an earthquake
Closes FAMIS#26310, last checked in alog 78362
Corner Station Fans (attachment1)
- All fans are looking normal and within range.
Outbuilding Fans (attachment2)
- All fans are looking normal and within range.
Lockloss at 06/17 01:37 UTC - looks like it may have been due to a jump in the wind speed? (wind-trend sketch below)
03:39 UTC Observing
Link to report.
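Regarding the possible wind jump at the 01:37 UTC lockloss: below is a minimal gwpy sketch of how one could trend the wind around that time to check the hypothesis. The wind channel name is a placeholder assumption, not verified; substitute whichever PEM weather channel you normally trend.
# Minimal sketch: trend wind speed around the 06/17 01:37 UTC lockloss.
# The channel name below is a placeholder, not verified.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

lockloss = to_gps('2024-06-17 01:37:00')   # UTC
wind = TimeSeries.get('H1:PEM-EY_WIND_ROOF_WEATHER_MPH',
                      lockloss - 600, lockloss + 60)   # 10 min before, 1 min after
plot = wind.plot()
ax = plot.gca()
ax.set_ylabel('Wind speed [mph]')
ax.axvline(float(lockloss), color='r', linestyle='--')   # mark the lockloss time
plot.savefig('wind_around_lockloss.png')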
TITLE: 06/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO was in NLN and OBSERVING as of 06:05 UTC (21hr 37 min lock) but is NOW in CORRECTIVE_MAINTENANCE while we briefly restart the ADS Camera.
About 5 minutes before my shift ended, the ADS Pitch1 INMON and Yaw1 INMON got stuck. It seems that they keep trying to turn on, but can't get past TURN_ON_CAMERA_FIXED_OFFSET. This has happened before (alog 77499) and it is likely that the cameras just need to be restarted. We did not lose lock. Oli (incoming Op) has called Dave and they are working on it.
LOG:
None
Dave restarted the camera servo for camera 26 and we are back in Observing as of 23:43 UTC
TITLE: 06/16 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Observing and locked for 21.5 hours.
IFO is in NLN and OBSERVING (Now 18hr 35 min lock!)
Nothing else of note.
Sun Jun 16 10:10:34 2024 INFO: Fill completed in 10min 30secs
Note the TCs did not reach -200C because of the lower outside temps this morning (15C, 59F).
TITLE: 06/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
IFO has been in NLN and OBSERVING since 06:06 UTC (12hr 45 min lock)
NUC27 Glitch screen is giving a warning: "Cluster is down, glitchgram is not updated", which I haven't seen before.
TITLE: 06/16 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: We are Observing at 158 Mpc and have been locked for 6 hours now. The only issues I had today were the OPO ISS maxing out at 54 uW and therefore not being able to catch at the 80 uW threshold, which needed a setpoint adjustment, and some trouble getting PRMI and DRMI to lock while relocking after the 06/16 00:01 UTC lockloss, which makes sense since the wind was still a bit high at the time. After that the night has been quiet.
LOG:
23:00 Detector relocking and at DARM_TO_RF
23:39 NOMINAL_LOW_NOISE
- SQZ OPO ISS pump having trouble locking. The OPO transmission couldn't go higher than 54.6 uW
- Adjusted the OPO temp, but the current temp was the best, so I put it back. Reloaded the OPO guardian (so I had changed nothing!). The OPO was able to get up to 72 uW after this, but still not to 80 uW.
- Naoki came on TeamSpeak and lowered the threshold to 70 uW, and it caught very soon after that.
00:01 While dealing with the OPO, we lost lock. We had been locked for 22 minutes
00:29 Lockloss from ACQUIRE_DRMI
00:30 Started an initial alignment
00:51 Initial alignment done, relocking
01:04 Lockloss from ACQUIRE_DRMI
01:52 NOMINAL_LOW_NOISE
01:56 Observing
02:01 Our range was low so I took us out of Observing and ran the sqz tuning guardian states
02:10 Back to Observing, with a 7Mpc increase in range
05:25 Left Observing and started calibration measurements
05:53 Calibration measurements done, running sqz alignment (new optic offsets accepted)
06:05 Back into Observing
Calibration was run between 06/16 05:25 and 05:53 UTC
Calibration monitor screenshot
Broadband (2024/06/16 05:25 - 05:30 UTC)
File: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240616T052532Z.xml
Simulines (2024/06/16 05:32 - 05:53 UTC)
Files:
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240616T053211Z.hdf5
06/16 05:24 UTC I took us out of Observing to run a calibration sweep that we weren't able to run earlier.
06/16 06:05 UTC Back into Observing after running calibration sweep and tuning squeeze
Currently Observing at 152 Mpc and have been locked for 55 mins. We had a lockloss at 06/16 00:01 UTC, 22 minutes into NOMINAL_LOW_NOISE, and relocking required some hands-on help with PRMI and DRMI, but we were able to get back up eventually. The wind is slowly dying down and is around 25 mph now.
The HAM4 annulus ion pump signal railed at about 07:50 UTC on 06/15/2024. No immediate attention is required: per the trend of PT120, an adjacent gauge, the internal pressure does not appear to be affected. The HAM4 AIP will be assessed next Tuesday.
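As a cross-check, something like the following gwpy sketch could be used to pull the AIP signal and the PT120 trend side by side. Both channel names here are placeholders (assumptions), not the real VAC channel names.
# Minimal sketch: compare the HAM4 annulus ion pump signal against the nearby
# PT120 gauge around the time the AIP railed (~07:50 UTC on 06/15/2024).
# Both channel names are placeholders -- substitute the real H1:VAC-* channels.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps
from gwpy.plot import Plot

t0 = to_gps('2024-06-15 07:50:00')
span = (t0 - 6 * 3600, t0 + 6 * 3600)      # +/- 6 hours around the railing
aip = TimeSeries.get('H1:VAC-LY_HAM4_AIP_CURRENT_MA', *span)
pt120 = TimeSeries.get('H1:VAC-LY_PT120_PRESS_TORR', *span)
plot = Plot(aip, pt120, separate=True, sharex=True)   # stacked panels, shared time axis
plot.savefig('ham4_aip_vs_pt120.png')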
06:45 Observing
I haven't figured out the cause, but this lockloss generally follows the pattern that we have seen for other 'DARM wiggle'** locklosses. There are a couple of extra things that I noted and want to have recorded, even if they mean nothing.
Timeline (attachment1, attachment2-zoomed, attachment3-unannotated)
Note: I am taking the lockloss as starting at 1402637320.858, since that is when we see DARM and ETMX L3 MASTER_OUT lose control and fail to regain it. The times below are milliseconds before this time.
It also kind of looks like ASC-CSOFT_P_OUT and ASC-DSOFT_P_OUT get higher in frequency in the ten seconds before the lockloss (attachment4), which is something I had previously noticed happening in the 2024/05/01 13:19 UTC lockloss (attachment5). However, that May 1st lockloss was NOT a DARM wiggle lockloss.
** DARM wiggle - when there is a glitch seen in DARM and ETMX L3 MASTER_OUT, then DARM goes back to looking normal before losing lock within the next couple hundred milliseconds
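For anyone wanting to reproduce the timeline plots, here is a minimal gwpy sketch that grabs DARM and one ETMX L3 MASTER_OUT quadrant around the lockloss time used above. The specific DARM and quadrant channels chosen here are assumptions; swap in whichever channels/quadrants you normally inspect for these locklosses.
# Minimal sketch: fetch DARM and an ETMX L3 drive quadrant around the lockloss
# (GPS 1402637320.858) to look at the 'DARM wiggle' by eye.
# Channel choices are assumptions; use your preferred DARM/quadrant channels.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

t_lockloss = 1402637320.858
channels = ['H1:CAL-DELTAL_EXTERNAL_DQ',         # DARM
            'H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ']   # ETMX L3 drive, UL quadrant
data = TimeSeriesDict.get(channels, t_lockloss - 2, t_lockloss + 1)
plot = Plot(*data.values(), separate=True, sharex=True)   # one panel per channel
for ax in plot.axes:
    ax.axvline(t_lockloss, color='r', linestyle='--')     # mark the lockloss time
plot.savefig('darm_wiggle_lockloss.png')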