FAMIS 26301
Laser Status:
NPRO output power is 1.828W (nominal ~2W)
AMP1 output power is 64.52W (nominal ~70W)
AMP2 output power is 136.7W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked for 0 days, 11 hr, 34 minutes
Reflected power = 22.33W
Transmitted power = 104.0W
PowerSum = 126.3W
FSS:
It has been locked for 0 days 1 hr and 1 min
TPD[V] = 0.7807V
ISS:
The diffracted power is around 2.4%
Last saturation event was 0 days, 1 hour, and 2 minutes ago
Possible Issues: (both are known)
AMP1 power is low
PMC reflected power is high
Sat Sep 28 08:13:16 2024 INFO: Fill completed in 13min 12secs
TITLE: 09/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY: H1 just lost lock at 14:16 UTC, 8:34 into the lock stretch. I don't notice anything obvious as to the cause, except maybe some ETMX motion immediately prior, but this doesn't look like one of the ETMX "glitches" we've seen before. I've been trying to lock the green arms for 30 minutes, so I'll start touching them up by hand.
DIAG_MAIN is reporting "TCS_LASER: Y power too low," and that seems to be the case since the CO2Y laser relocked at 10:13 UTC this morning, when the laser output dropped from 30.20W to 29.94W (screenshot attached, tagging TCS).
This caused H1 to drop observing for only about a minute and a half.
H1 back to Observing at 16:05 UTC. Fully automatic relock after an initial alignment that also ran automatically.
TITLE: 09/28 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Currently relocking and at ACQUIRE_DRMI_1F. The CAL team cleared us to go into Observing 45 minutes ago while we were still locked, and we got in 10 minutes of observation time before losing lock from an FSS oscillation.
Before that, the calibration team had been troubleshooting for the entirety of my shift, so I had just been trying to keep us locked and running broadband measurements when they needed them. The current status of the calibration work is that they tried multiple things, ultimately without success. The changes they tried have been reverted, and they will be writing up their own alog and continuing work on Monday.
While trying to relock after both the first lockloss of my shift and this last one, we got stuck in READY because IMC_LOCK was in FAULT with the DIAG_MAIN messages "PSL_FSS: PZT MON is high, may be oscillating" and "PSL_FSS_TPD: RefCav transmission low, fix alignment". I tried taking the IMC to DOWN and then back to LOCKED and tried toggling the FSS loop automation OFF and then back ON, but neither seemed to help. Both times the IMC was eventually able to lock, though I'm not sure that had anything to do with my toggling. Once the IMC was locked, the first relock went smoothly, and this relock is going well so far.
So the FSS oscillations are not only causing locklosses but are now also starting to delay relocking. Tagging ISC.
LOG:
22:16UTC Lockloss
22:19 Lockloss from LOCKING_ARMS_GREEN
22:20 Lockloss from LOCKING_ARMS_GREEN
22:21 Lockloss from LOCKING_ARMS_GREEN
IMC in FAULT with the messages "PSL_FSS: PZT MON is high, may be oscillating" and "PSL_FSS_TPD: RefCav transmission low, fix alignment"
- Taking IMC_LOCK to DOWN+INIT and then back to LOCKED didn't work
- Toggled FSS loop automation autolock OFF/ON; the IMC eventually locked a couple of minutes after doing this, so not sure if it helped
23:29 NOMINAL_LOW_NOISE
23:29 NLN_CAL_MEAS
23:30 Started broadband calibration measurement
23:36 measurement done (PCALY2DARM_BB_20240927T233009Z.xml), back to NOMINAL_LOW_NOISE
23:41 NLN_CAL_MEAS
23:42 Started broadband calibration measurement #2
23:48 Measurement done (PCALY2DARM_BB_20240927T234226Z.xml), back to NOMINAL_LOW_NOISE
00:11 Lockloss
00:34 Started an initial alignment
00:52 Initial alignment done, relocking
01:37 NOMINAL_LOW_NOISE
01:43 NLN_CAL_MEAS
01:44 Started broadband calibration measurement #3
01:50 Measurement done (PCALY2DARM_BB_20240928T014428Z.xml)
01:50 NOMINAL_LOW_NOISE
03:00 NLN_CAL_MEAS
03:01 Started broadband calibration measurement
03:08 Measurement done, back to NOMINAL_LOW_NOISE
03:08 Lockloss due to earthquake https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi/event/1411528136
04:05 NOMINAL_LOW_NOISE
04:16 Observing
04:22 Lockloss due to FSS oscillation starting ~30 seconds before the lockloss
FSS oscillating and won't lock - same DIAG_MAIN messages as earlier
04:39 Got it locked, starting an initial alignment
04:56 Initial alignment done, relocking
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:44 | | Keita | CER | n | Putting laptop away | 22:46
Currently relocking and at MOVE_SPOTS - we lost lock about 30 minutes ago from a nearby earthquake.
The CAL team has been spending the last several hours troubleshooting the issues with the H1 calibration. We've had two other locklosses during my shift besides this latest earthquake one; both were easy to relock from, but they slightly slowed down the calibration troubleshooting.
Note: There is an issue with the flag that decides whether or not we have clean data for H1:CDS-SENSEMON_CAL_SNSC_EFFECTIVE_RANGE_MPC. We have not been Observing since 09/27 20:25 UTC (the past 5 hours), but diaggui and ndscope are both showing data for the CLEAN channel.
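One hypothetical way to see the inconsistency from offsite (a sketch using gwpy; the observing-segment flag name is an assumption, the range channel name is from above):

    # Sketch: compare observing segments against the CLEAN range channel.
    from gwpy.segments import DataQualityFlag
    from gwpy.timeseries import TimeSeries

    start, end = '2024-09-27 20:25', '2024-09-28 01:25'  # the ~5 h window above
    observing = DataQualityFlag.query('H1:DMT-ANALYSIS_READY:1', start, end)
    clean = TimeSeries.get('H1:CDS-SENSEMON_CAL_SNSC_EFFECTIVE_RANGE_MPC', start, end)
    # With no observing segments in this window, the CLEAN channel should be
    # empty/zero, but it is showing data -- which is the issue reported above.
    print(observing.active, clean.value.max())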
TITLE: 09/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 13mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.52 μm/s
QUICK SUMMARY:
Currently relocking at LOWNOISE_LENGTH_CONTROL. Once we get to NLN we will be running a broadband calibration sweep as part of our continuing work to update/fix the calibration.
TITLE: 09/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: n/a
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 10mph Gusts, 4mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.54 μm/s
QUICK SUMMARY:
Ran simulines calibration measurement for Louis and then stayed out of Observing a little extra while they worked on calibration results.
Measurement NOTES:
There was some commissioning time this morning to get the SRCL offset we needed before running a desperately-needed calibration measurement. We then lost lock while running simulines. Relocked automatically, and we are now running another simulines measurement.
Once Sheila pushed a new SRC detuning, I ran a broadband measurement, then started simulines. We lost lock a little over a minute after starting; not sure of the cause yet.
Simulines start:
PDT: 2024-09-27 11:08:44.473513 PDT
UTC: 2024-09-27 18:08:44.473513 UTC
GPS: 1411495742.473513
Lock loss 1810UTC
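For reference, the UTC/GPS pair above can be reproduced with gwpy's time utilities (a minimal sketch, not part of the measurement itself; gwpy handles the leap-second bookkeeping internally):

    # Converting the simulines start time between UTC and GPS.
    from gwpy.time import to_gps, from_gps

    gps = to_gps('2024-09-27 18:08:44.473513')  # -> 1411495742.473513
    utc = from_gps(gps)                         # back to a UTC datetime
    print(gps, utc)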
Lockloss 1411472854
The lock loss tool tagged FSS oscillation. Trending H1:PSL-FSS_FAST_MON_OUT_DQ shows that something started to get noisy 35 seconds before the lock loss. I didn't see any other strangeness in our usual LSC, ASC, ETMX, or DARM signals.
Unlike what Ryan C saw on Sept 21, the IMC refl servo splitmon and tidal seem stable when the FSS starts to get noisy and the PC mon channel starts to drift.
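For anyone repeating this trend, a minimal sketch of the kind of query involved (an assumed workflow, not the exact commands used; the GPS time is from the lockloss tool entry above):

    # Fetch and plot the FSS fast monitor around the lockloss time.
    from gwpy.timeseries import TimeSeries

    t_lockloss = 1411472854  # GPS time from the lockloss tool
    data = TimeSeries.get('H1:PSL-FSS_FAST_MON_OUT_DQ',
                          t_lockloss - 120, t_lockloss + 5)
    plot = data.plot()
    plot.show()  # the noise onset is visible ~35 s before the lockloss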
Took another data set of FIS with different SRCL offsets, to try to set the SRCL detuning for calibration measurement, similar to 79903.
Here are some plots from this code, made by borrowing heavily from Vicky's repo here and from the Noise budget repo. I will put this into a repo soon, perhaps here.
The first plot shows the spectra with different SRCL offsets in, always with the squeezing angle optimized for kHz squeezing. The no-squeezing model isn't well verified; I've used an SRCL detuning of 0, which we know isn't correct. We use this no-squeezing model to subtract from the no-squeezing measurement to estimate the non-quantum noise, shown in gray here. The SRC detuning doesn't change this estimate much without squeezing injected.
The next plot is a re-creation of Vicky's brontosaurus plot, as in 79951. The non-quantum noise estimate is subtracted from each of the FIS curves, which are then plotted in dB relative to the no-squeezing model. Each of those shows a squeezing data set with a model, where I adjusted the SRCL offset in the model by hand based on this plot. The subtraction is needed to make the impact of the SRCL offset clear.
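In equation form, the subtraction behind these curves is roughly the following (a sketch with hypothetical array names; all inputs are PSDs on a common frequency vector):

    import numpy as np

    def sqz_db_rel_no_sqz(psd_fis, psd_nonquantum, psd_model_nosqz):
        """Squeezing level in dB relative to the no-squeezing quantum model:
        subtract the non-quantum noise estimate from the measured FIS PSD,
        then take the ratio to the modeled no-squeezing quantum noise."""
        return 10 * np.log10((psd_fis - psd_nonquantum) / psd_model_nosqz)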
The final plot shows the linear fit of the SRC detuning to SRCL offset, which gives us the SRCL offset we should use to move toward 0 detuning (-191 counts).
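The fit itself is a simple linear one; a minimal sketch with made-up (offset, detuning) values (only the -191 count result is from the real fit):

    import numpy as np

    # Hypothetical pairs standing in for the measured datasets
    srcl_offsets = np.array([-120.0, -160.0, -200.0, -240.0])  # counts
    detunings = np.array([1.42, 0.62, -0.18, -0.98])           # fitted SRC detuning

    slope, intercept = np.polyfit(srcl_offsets, detunings, 1)
    offset_for_zero_detuning = -intercept / slope  # the real fit gives ~ -191 counts
    print(offset_for_zero_detuning)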
Fri Sep 27 08:14:05 2024 INFO: Fill completed in 14min 1secs
TITLE: 09/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.44 μm/s
QUICK SUMMARY: Locked for one and a half hours, with some shorter locks overnight as well. Verbal didn't mention any PIs for the last lockloss, but the length of the lock is suspicious; more investigation is needed. Our range isn't optimal, and the squeezing looks poor at higher frequencies based on the nuc33 FOM.
TITLE: 09/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
IFO is LOCKING at ENGAGE_ASC_FOR_FULL_IFO. Fully auto so far...
Shift was mostly quiet. Despite locking issues today, the IFO locked pretty automatically following Tony's initial alignment. We did lose lock during LOWNOISE_ESD_ETMX (558), though Guardian brought it back to NLN shortly after and we were swiftly in Observing.
In terms of PI modes, there was one very harsh ringup 23 minutes into NLN (or 34 minutes after MAX_POWER). This gave three verbal alarms for PI 24, but the damping, though already at maximum, was able to bring it down. No other ringups.
There was one lockloss at 04:08 UTC, probably attributable to the environment: the combination of rising secondary microseism and winds over 35mph (alog 80323).
TCS Work from today has 2 SDF Diffs (screenshot attached).
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:39 | PCAL | Tony, Neil | EX | N | PCAL Computer Acquisition | 23:51 |
00:21 | PEM | Robert | Y-arm | N | Looking for parts | 00:21 |
Lockloss during Post-Event Standdown. Cause unknown, not environmental. Relocking now but having difficulties with both ALSY and ALSX.