TITLE: 09/08 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Just got into Observing after waiting in OMC_WHITENING for a while to damp violins - they seem to have gotten rung up by a lockloss we had from ENGAGE_ASC_FOR_FULL_IFO. Otherwise this last relocking process went smoothly, and the earlier lockloss today was also uncomplicated, aside from needing to wait a good while for ALSY WFS 3 Yaw to converge.
LOG:
14:30UTC Observing and Locked for 16:50 hours
15:21 Lockloss after 17:39 hours locked
15:41 Going to run an initial alignment after locking green arms
- ALS_YARM was sitting in the INITIAL_ALIGNMENT state for a while without starting offloading because WFS_3_Y was taking a while to get under the threshold. I took ALS_YARM to UNLOCKED just in case, then back to INITIAL_ALIGNMENT_OFFLOAD, and it eventually converged (see the sketch after this log).
16:36 Initial alignment done, relocking
17:18 NOMINAL_LOW_NOISE
17:21 Observing
21:45 Lockloss
22:16 Lockloss from ENGAGE_ASC_FOR_FULL_IFO
23:30 NOMINAL_LOW_NOISE
23:30 Observing
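For reference, a Guardian re-request like the one in the 15:41 entry can be done from a Python prompt; here is a minimal sketch using pyepics, with the channel names assumed to follow the usual H1:GRD-<NODE>_REQUEST / _STATE convention (an illustration, not a verified site procedure):

# Minimal sketch, assuming pyepics and the usual Guardian request channels.
import time
from epics import caget, caput

NODE = 'ALS_YARM'  # Guardian node from the 15:41 entry above

# Drop the node to UNLOCKED, then re-request the offload state.
caput(f'H1:GRD-{NODE}_REQUEST', 'UNLOCKED')
time.sleep(5)  # give the node a moment to get there
caput(f'H1:GRD-{NODE}_REQUEST', 'INITIAL_ALIGNMENT_OFFLOAD')

# Watch the state readback (channel name assumed) while it converges.
print(caget(f'H1:GRD-{NODE}_STATE', as_string=True))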
TITLE: 09/08 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Lockloss @ 09/08 21:45UTC after 4.5 hours locked
Currently Observing at 158Mpc and have been Locked for 2.5 hours. Quiet day with nothing to report.
Lockloss @ 09/08 15:21UTC after 17:39 locked
17:21 Observing
Sun Sep 08 08:09:39 2024 INFO: Fill completed in 9min 35secs
TITLE: 09/08 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Observing at 158Mpc and have been Locked for almost 17 hours. Everything is looking normal, but the dust monitor alarm for the optics lab was going off, so I'll make sure we don't have more sand appearing in there.
TITLE: 09/08 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 21:52 UTC (7 hr 20 min lock), aside from some minor squeezer exceptions. Otherwise a very smooth day with zero locklosses during my shift.
The squeezer has been acting up today:
23:04 UTC COMMISSIONING: The squeezer was far from optimal, and while still observing we were only getting a range of ~120 Mpc. Oli and I went into commissioning to run the temperature optimization script before trying to relock it (a sketch of what such a script might do follows below). While this was happening, Naoki called, said he thought it was a Pump ISS issue, and took hold of the IFO to fix it. He was successful and we were back to observing at our recent ~155 Mpc. OBSERVING again as of 23:27 UTC.
01:53 UTC COMMISSIONING: The squeezer unlocked, sending us into commissioning, but within a few minutes it relocked automatically while I watched. We were OBSERVING again as of 01:58 UTC.
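For context, the temperature optimization mentioned at 23:04 amounts to scanning the OPO crystal temperature setpoint while watching SQZ-CLF_REFL_RF6_ABS and keeping the setting that maximizes it (the same signal used by hand in the 09/07 log below). A minimal sketch of that idea using pyepics; the channel names, scan range, and settle time here are assumptions, not the site script:

# Hedged sketch of a temperature optimization scan; not the actual H1 script.
import time
import numpy as np
from epics import caget, caput

SETPOINT = 'H1:SQZ-OPO_TEC_SETTEMP'          # assumed OPO temperature setpoint channel
SIGNAL   = 'H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT'  # assumed readback of the RF6 signal

t0 = caget(SETPOINT)
scan = t0 + np.linspace(-0.05, 0.05, 21)     # small scan around the current setpoint

best_t, best_sig = t0, -np.inf
for t in scan:
    caput(SETPOINT, t)
    time.sleep(10)                           # let the TEC and signal settle
    sig = caget(SIGNAL)
    if sig > best_sig:
        best_t, best_sig = t, sig

caput(SETPOINT, best_t)                      # leave the setpoint at the maximum
print(f'best setpoint {best_t:.4f}, RF6 {best_sig:.3f}')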
Other:
We rode through a 6.0 mag EQ from Tonga; EQ mode triggered successfully.
Dust is high in the Optics Lab - Oli told me yesterday that there's a strange accumulation of sand by a dust monitor and that some measures were taken to remove it, though perhaps more has built up. The 300nm PCF monitor is at RED alert and the 500nm PCF monitor is at YELLOW.
LOG:
None
TITLE: 09/07 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We have been Locked for close to 2 hours. Not currently Observing because Naoki is adjusting the squeezer, since our range is quite poor right now. Quiet day with one lockloss and easy relocking.
LOG:
14:30UTC Observing and Locked for 7:47 hrs
15:28 Plane passes overhead
15:38 Superevent S240907cg
18:30 Left Observing to run calibration sweep
19:04 Calibration measurements finished, back into Observing
19:36 Lockloss
20:20 We started going through CHECK_MICH_FRINGES for the second time so I took us to DOWN and started an initial alignment
20:41 Initial alignment done, relocking
21:41 NOMINAL_LOW_NOISE
- OPO couldn't catch lock; I lowered opo_grTrans_setpoint_uW to 69 in sqzparams (see the sketch after this log), reloaded OPO, locked the ISS, and then adjusted the OPO temperature a bit until SQZ-CLF_REFL_RF6_ABS was maximized. Accepted the new temperature setpoint and went into Observing.
21:52 Observing
23:04 Left Observing to run sqz alignment
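For reference, the sqzparams change described at 21:41 is a one-line edit to the squeezer Guardian parameter file before reloading the OPO node. A minimal sketch of what that line looks like (the surrounding file contents are assumed, only the parameter name and value come from the log above):

# sqzparams.py (hypothetical excerpt)
# Lowered so the OPO could catch lock, per the 09/07 21:41 entry.
opo_grTrans_setpoint_uW = 69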
TITLE: 09/07 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 136Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 13mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 21:52 UTC
We have another installment of transfer functions for the BBSS. These were taken on August 30th and the results can be found in $(sussvn)/BBSS/X1/BS/SAGM1/Results/2024-08-30_2300_tfs/; I've attached the pdf. We've been dealing with the mystery of the F1 OSEM on M1 sometimes drifting downwards (see 79941 - very important info), and though we believe the drift does not affect the transfer functions, we still want to make it clear that this measurement was taken shortly after adjusting for the drift, and over the next few days we confirmed that it was still drifting down.

We also did a comparison by plotting this measurement set next to the set from July 12 (79181), which was taken close to when the new (shorter by 9mm) wire loop was installed and was also a time when we had F1 drift, although in that case the measurements were taken over a week after F1 started drifting, so it had already travelled most of the way down that it was going to travel (it drifts down in an exponential-decay-like fashion). I also added the measurements from back in January when we completed the first iteration (75787).

It's interesting to see how the July and August Pitch TFs line up with each other around 1Hz as compared to the January measurement and the model. The location of this peak depends heavily on the distance between the M1 blades and the center of mass of M1, so this shift makes sense: the M1 blade heights have been changed since the initial build in January, and the current model doesn't yet reflect this change (a comparison of how d1 changes the model is also plotted against this measurement).
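As a rough rigid-body illustration of why that peak tracks the blade height (a simplification in my own notation, not the full triple-suspension model): for a stage of mass $m$ and pitch moment of inertia $I$ supported at points a distance $d$ above its center of mass, the gravitational restoring torque is roughly $mgd\,\theta$, putting the pitch resonance near

$f_{\mathrm{pitch}} \approx \frac{1}{2\pi}\sqrt{\frac{mgd}{I}}$

so changing the effective attachment height (d1) moves the peak roughly as $\sqrt{d_1}$.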
Lockloss @ 09/07 19:36UTC after nearly 13 hours Locked
21:52 UTC Observing
Currently Observing right around 160Mpc and have been Locked for 12.5 hours. We dropped Observing a little while ago to run a calibration sweep, but we've been back Observing for over half an hour now.
Calibration measurements run. Before starting the measurements we had been Locked for 11 hours 45 minutes. Calibration monitor screenshot attached.
Measurements completed, but we got this error in simulines - not sure if this exact error is already known, but I thought it wouldn't hurt to attach it just in case.
Broadband (18:30 - 18:35UTC)
Output file:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240907T183023Z.xml
Simulines (18:38 - 19:01UTC)
Output files:
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240907T183846Z.hdf5
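For anyone wanting to poke at one of these simulines outputs, they are plain HDF5; a minimal way to list everything inside (assuming h5py is available; nothing about the internal layout is assumed):

# Sketch: walk every group/dataset name in one of the simulines output files.
import h5py

path = '/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240907T183846Z.hdf5'
with h5py.File(path, 'r') as f:
    f.visit(print)  # prints each group and dataset name, one per line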
Sat Sep 07 08:08:26 2024 INFO: Fill completed in 8min 23secs
Ibrahim, Oli, Jeff, Betsy, Joe, Others
alog 79079: Recent Post-TF Diagnostic Check-up - one of the early discoveries of the drift and pitch instability.
alog 79181: Recent M1 TF Comparisons. More recent TFs have been taken (found at: /ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data on the X1 network). We are waiting on updated confirmation of model parameters in order to know what we should correctly be comparing our measurements to. We just confirmed d4 a few days ago following the bottom wire loop change and now seek to confirm d1 and what that means with respect to our referential calibration block.
alog 79042: First investigation into the BOSEM drift - still operating erroneously under the temperature assumption.
alog 79032: First discovery of the drift issue, originally erroneously thought to be part of the diurnal temperature-driven suspension sag (where I thought that some blades sagging more than others contributed to the drift in pitch).
We think that this issue is related to the height of the blades for these reasons:
We need to know how the calibration block converts to model parameters in d1 and whether that's effective or physical d1 in the model. Then we can stop using referential units.
Update to the triplemodelcomp_2024-08-30_2300_BBSS_M1toM1 file Ibrahim attached: the legend has been corrected. In the previous version I had the description for the July 12th measurement as 'New wire loop, d1=-1.5mm, no F1 drift', but there was actually F1 drift during that measurement - it had just started over a week before, so the OSEM values weren't declining as fast as they had been earlier that week. I also want to be more specific about what d1 means in that context, so in this updated version I changed July's d1 to d1_indiv to better show that the -1.5mm value is the same for each blade, whereas for the August measurements (now posted) we have d1_net, because the blade heights differ by multiple tenths of a millimeter but still average out to the same -1.5mm.
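To be explicit about that notation (mine, not from the model files): with per-blade heights $d_{1,i}$,

$d_{1,\mathrm{net}} = \frac{1}{N}\sum_{i=1}^{N} d_{1,i}$

so d1_indiv describes the July case where every blade sits at the same -1.5mm, while d1_net describes the August case where the individual $d_{1,i}$ differ by a few tenths of a millimeter but average to the same -1.5mm.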
Back to Observing at 23:30 UTC
CDS reports 12 disconnected channels, all related to NUC33. The NUC could probably use a restart: I can't VNC into it and pinging it gives nothing back; it's frozen.
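For reference, a minimal reachability check of the sort described above (the hostname here is a hypothetical stand-in, not the actual CDS name for NUC33):

# Sketch: ping a host and report whether it responds.
import subprocess

host = 'nuc33'  # assumed hostname
ok = subprocess.run(['ping', '-c', '3', host],
                    capture_output=True).returncode == 0
print(f'{host} is', 'reachable' if ok else 'not responding; likely needs a power cycle')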