Lockloss @ 09/08 21:45UTC after 4.5 hours locked
Currently Observing at 158Mpc and have been Locked for 2.5 hours. Quiet day with nothing to report.
Lockloss @ 09/08 15:21UTC after 17:39 locked
Sun Sep 08 08:09:39 2024 INFO: Fill completed in 9min 35secs
TITLE: 09/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Observing at 158Mpc and have been Locked for almost 17 hours. Everything is looking normal, but the dust monitor alarm for the optics lab was going off, so I'll make sure we don't have more sand appearing in there.
TITLE: 09/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 21:52 UTC (7hr 20 min lock) with some minor squeeze exceptions. Otherwise, very smooth day with 0 locklosses during my shift.
The squeezer has been acting up today:
23:04 UTC COMMISSIONING: Squeezer was far from optimal, and while still observing we were only getting ~120 Mpc of range. As such, Oli and I went into commissioning to run the temperature optimization script before trying to relock it. While this was happening, Naoki called and said he thought it was a pump ISS issue and then took hold of the IFO to fix it. He was successful and we were back to observing at our recent ~155 Mpc. OBSERVING again as of 23:27 UTC
01:53 UTC COMMISSIONING: Squeezer unlocked, sending us into commissioning, but within a few minutes it relocked again automatically (I was watching it do so). We were OBSERVING again as of 01:58 UTC.
Other:
We successfully rode through a 6.0 magnitude EQ from Tonga; EQ mode triggered as expected.
Dust is high in the Optics Lab - I was told by Oli yesterday that there's a strange accumulation of sand by a dust monitor and that some measures were taken to remove the sand, though perhaps more has built up. The 300nm PCF monitor is at RED alert and the 500nm PCF monitor is at YELLOW.
LOG:
None
TITLE: 09/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We have been Locked for close to 2 hours. Not currently Observing because Naoki is adjusting some squeezer settings since our range is quite low right now. Quiet day with one lockloss and easy relocking.
LOG:
14:30UTC Observing and Locked for 7:47hrs
15:28 Plane passes overhead
15:38 Superevent S240907cg
18:30 Left Observing to run calibration sweep
19:04 Calibration measurements finished, back into Observing
19:36 Lockloss
20:20 We started going through CHECK_MICH_FRINGES for the second time so I took us to DOWN and started an initial alignment
20:41 Initial alignment done, relocking
21:41 NOMINAL_LOW_NOISE
- OPO couldn't catch lock, so I lowered opo_grTrans_setpoint_uW to 69 in sqzparams, reloaded the OPO, locked the ISS, and then adjusted the OPO temperature a bit until SQZ-CLF_REFL_RF6_ABS was maximized (a rough sketch of that kind of scan is included after this log). Accepted the new temperature setpoint and went into Observing
21:52 Observing
23:04 Left Observing to run sqz alignment
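For reference, here is a rough sketch of the kind of coarse temperature scan described in the 21:41 entry above. This is not the script we actually ran; the channel names (SQZ-OPO_TEC_SETTEMP and SQZ-CLF_REFL_RF6_ABS_OUTPUT) and the ezca setup are assumptions and would need to be checked before use.

import time
import numpy as np
from ezca import Ezca  # Guardian EPICS wrapper; inside Guardian an ezca object already exists

ezca = Ezca(prefix='H1:')

TEMP_CH = 'SQZ-OPO_TEC_SETTEMP'            # assumed OPO crystal temperature setpoint channel
SIGNAL_CH = 'SQZ-CLF_REFL_RF6_ABS_OUTPUT'  # assumed readback for the CLF REFL RF6 magnitude

def scan_opo_temperature(span=0.02, steps=11, settle=5.0):
    """Step the OPO temperature around the current setpoint and keep the
    value that maximizes the RF6 signal (coarse, open-loop scan)."""
    t0 = ezca[TEMP_CH]
    best_temp, best_sig = t0, -np.inf
    for temp in np.linspace(t0 - span, t0 + span, steps):
        ezca[TEMP_CH] = temp
        time.sleep(settle)                 # let the TEC and the signal settle
        sig = ezca[SIGNAL_CH]
        if sig > best_sig:
            best_temp, best_sig = temp, sig
    ezca[TEMP_CH] = best_temp              # leave the best setpoint in place
    return best_temp, best_sig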
TITLE: 09/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 13mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 21:52 UTC
We have another installment of transfer functions for the BBSS. These were taken on August 30th; results can be found in $(sussvn)/BBSS/X1/BS/SAGM1/Results/2024-08-30_2300_tfs/ and I've attached the pdf. We've been dealing with the mystery of the F1 OSEM on M1 sometimes drifting downwards (see 79941 - very important info), and though we believe the drift does not affect the transfer functions, we still want to make it clear that this measurement was taken shortly after adjusting for the drift, and over the next few days we confirmed that it was still drifting down.

We also did a comparison by plotting this measurement set next to the set from July 12 (79181), which was close to when the new (shorter by 9mm) wire loop was installed, and which was also a time when we had F1 drift; in that case, though, the measurements were taken over a week after F1 started drifting, so it had already travelled most of the way down that it was going to travel (it drifts down in an exponential-decay-like fashion). I also added the measurements from back in January when we completed the first iteration (75787).

It's interesting to see how the July and August Pitch TFs line up with each other around 1Hz compared to the January measurement and the model. The location of this peak depends heavily on the distance between the M1 blades and the center of mass of M1, so this shift makes sense since the M1 blade heights have been changed since the initial build in January, and the current model doesn't yet reflect this change (a comparison of how d1 changes the model is also plotted alongside this measurement).
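As a back-of-envelope illustration of why this peak moves with d1 (a single-stage toy, not the BBSS model): if the pitch restoring torque comes from gravity acting at the break-off height d above the center of mass, then the pitch resonance goes as omega^2 proportional to d, so the peak frequency scales as sqrt(d). The values below are illustrative only.

import numpy as np

def fractional_peak_shift(d_ref_mm, d_new_mm):
    """Fractional change in the pitch peak frequency when the effective
    break-off-to-COM distance changes from d_ref to d_new, assuming
    f_pitch ~ sqrt(d) and ignoring blade stiffness, wire flexure, and
    coupling to the other stages."""
    return np.sqrt(d_new_mm / d_ref_mm) - 1.0

d_ref = 1.5  # mm, magnitude of the nominal d1 quoted above
for d_new in (1.2, 1.5, 1.8, 2.0):  # made-up values for illustration
    print(f"|d1| = {d_new:.1f} mm -> pitch peak shifts by "
          f"{100 * fractional_peak_shift(d_ref, d_new):+.1f}% relative to {d_ref} mm")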
Lockloss @ 09/07 19:36UTC after nearly 13 hours Locked
21:52 UTC Observing
Currently Observing right around 160Mpc and have been Locked for 12.5 hours. We dropped Observing a little while ago to run a calibration sweep, but we've been back Observing for over half an hour now.
Calibration measurements run. Before starting the measurements we had been Locked for 11 hours 45 minutes. Calibration monitor screenshot attached.
Measurements completed, but we got this error in simulines - not sure if this exact error is already known, but I thought it wouldn't hurt to attach it just in case.
Broadband (18:30 - 18:35UTC)
Output file:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240907T183023Z.xml
Simulines (18:38 - 19:01UTC)
Output files:
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240907T183846Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240907T183846Z.hdf5
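For anyone who wants to look at these offline, a minimal sketch for inspecting one of the simulines output files with h5py (the internal group names aren't listed here, so it just walks whatever is in the file):

import h5py

fname = '/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240907T183846Z.hdf5'

with h5py.File(fname, 'r') as f:
    def show(name, obj):
        # print every group and dataset, with shape/dtype for datasets
        if isinstance(obj, h5py.Dataset):
            print(f"{name}  shape={obj.shape}  dtype={obj.dtype}")
        else:
            print(f"{name}/")
    f.visititems(show)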
Sat Sep 07 08:08:26 2024 INFO: Fill completed in 8min 23secs
TITLE: 09/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Currently Observing and have been Locked for almost 8 hours. Nice change compared to the past 24+ hours of short locks.
TITLE: 09/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is ALIGNING (Initial Alignment)
We have been experiencing oddly short locks (<2 hrs) for the last 5 locks. While locking itself is easy and automatic, it seems that as we approach the 1.5 hr mark, the HAM6 ISI gets saturated (at least in my 3 locks in the last day or so). Additionally, PI 24 has been ringing up but damping more slowly than in the last few shifts. Moreover, we're getting ALS locklosses with no immediately apparent cause (no EQs and no elevated microseism). I've had to do an initial alignment for both of my lock acquisitions. The last one happened 30 minutes before shift-end, but I plan to investigate it in tomorrow's shift.
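For that follow-up, this is the kind of trend I'd pull to compare the PI 24 ring-up between locks (sketch only; the channel name is a guess and should be checked against the DAQ, and the times come from the 09/06 day-shift log below):

from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:SUS-PI_PROC_COMPUTE_MODE24_RMSMON'  # guessed PI mode 24 monitor channel
start = '2024-09-06 20:14'  # approximate start of the 2.25 hr lock (UTC)
end = '2024-09-06 22:27'    # lockloss time (UTC)

rms = TimeSeries.get(CHANNEL, start, end)  # fetch from NDS/frames
plot = rms.plot()
plot.gca().set_ylabel('PI mode 24 RMS [arb.]')
plot.savefig('pi24_trend.png')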
LOG:
None
TITLE: 09/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Lots of locklosses but pretty easy relocks
LOG:
14:30 Observing and Locked for 30 mins
15:53 Lockloss after almost 2 hours locked https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79946
16:06 Lockloss from TRANSITION_DRMI_TO_3F, starting initial alignment
16:29 Initial alignment done, relocking
17:20 NOMINAL_LOW_NOISE
17:23 Observing
18:51 Lockloss after 1.5 hours https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79949
- Had to restore ETMY and TMSY to values from the previous lock's LOCKING_ARMS_GREEN, and then still had to touch them up to get ALSY to lock
20:14 Observing
22:27 Lockloss after 2.25 hours https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79954
22:28 Started an initial alignment
- Couldn't get to SRC_ALIGN, so I went to ACQUIRE_SRY and then moved SRM to minimize the error signals in SRC1 (a rough sketch of that kind of move is after this log). SRC then locked and offloaded fine
22:56 Initial alignment done, relocking
23:42 NOMINAL_LOW_NOISE
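Rough sketch of the SRM walk mentioned in the 22:28 entry above. In practice this was done from the alignment sliders while watching the SRC1 error signals; the channel names below are assumptions, not verified.

import time
from ezca import Ezca

ezca = Ezca(prefix='H1:')

SLIDER = 'SUS-SRM_M1_OPTICALIGN_P_OFFSET'  # assumed SRM pitch alignment offset
ERROR = 'ASC-SRC1_P_INMON'                 # assumed SRC1 pitch error readback

def walk_srm_pitch(step=0.5, max_steps=20, settle=3.0):
    """Step SRM pitch in whichever direction reduces |SRC1 pitch error|,
    reversing and halving the step whenever a move makes it worse."""
    best = abs(ezca[ERROR])
    direction = +1
    for _ in range(max_steps):
        ezca[SLIDER] += direction * step
        time.sleep(settle)
        err = abs(ezca[ERROR])
        if err < best:
            best = err
        else:
            ezca[SLIDER] -= direction * step  # undo the bad step
            direction *= -1
            step *= 0.5
    return best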
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
14:49 | FAC | Kim | OpticsLab | n | Tech clean | 15:05 |
15:38 | FAC | Mitchell | EX, EY | n | Dust monitor checks | 16:31 |
16:11 | FAC | Kim | OpticsLab | n | Tech clean - vacuuming up sand | 16:56 |
16:19 | PCAL | Francisco | PCAL Lab | y(local) | Checking for sand | 16:31 |
16:20 | OPT | Sheila | OptLab | n | Checking for sand | 16:31 |
18:37 | FAC | Kim | OptLab | n | Checking for more sand | 18:47 |
21:46 | FIT | Vicky | YARM | n | Running fast | 22:18 |
Lockloss @ 09/06 22:27UTC after 2:13 locked. Not sure why these locks have been so short.
23:44 Observing
Ibrahim, Oli, Jeff, Betsy, Joe, Others
alog 79079: Recent Post-TF Diagnostic Check-up - one of the early discoveries of the drift and pitch instability.
alog 79181: Recent M1 TF Comparisons. More recent TFs have been taken (found at: /ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data on the X1 network). We are waiting on updated confirmation of model parameters in order to know what we should correctly be comparing our measurements to. We just confirmed d4 a few days ago following the bottom wire loop change and now seek to confirm d1 and what that means with respect to our referential calibration block.
alog 79042: First investigation into the BOSEM drift - still operating erroneously under the temperature assumption.
alog 79032: First discovery of the drift issue, originally and erroneously thought to be part of the diurnal temperature-driven suspension sag (where I thought that some blades sagging more than others contributed to the drift in pitch).
We think that this issue is related to the height of the blades for these reasons:
We need to know how the calibration block converts to model parameters in d1 and whether that's effective or physical d1 in the model. Then we can stop using referential units.
Update to the triplemodelcomp_2024-08-30_2300_BBSS_M1toM1 file Ibrahim attached - there is an update to the legend. In that version I had the description for the July 12th measurement as 'New wire loop, d1=-1.5mm, no F1 drift', but there was actually F1 drift during that measurement - it had just started over a week before, so the OSEM values weren't declining as fast as they had been earlier that week. I also want to be more specific about what d1 means in that context, so in this updated version I changed July's d1 to d1_indiv to hopefully better show that that value of -1.5mm is the same for each blade, whereas for the August measurements (now posted) we have d1_net, because the blade heights differ by a few tenths of a millimeter but still average out to the same -1.5mm.
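To make the d1_indiv vs d1_net distinction concrete (the per-blade numbers and blade count below are invented; only the -1.5 mm average matches the measurements):

import numpy as np

d1_indiv = [-1.5, -1.5, -1.5, -1.5]       # mm, July case: every blade at the same offset
d1_net_blades = [-1.2, -1.4, -1.6, -1.8]  # mm, August case: heights spread by a few tenths of a mm

print(np.mean(d1_indiv))       # -1.5
print(np.mean(d1_net_blades))  # -1.5, same net d1 despite the spread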
17:21 Observing