H1 General (ISC, SQZ)
oli.patane@LIGO.ORG - posted 00:05, Wednesday 11 June 2025 - last comment - 11:22, Wednesday 11 June 2025 (84958)
Ops Eve Shift End

TITLE: 06/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 135 Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:

We are Observing! We've been Locked for 1 hour. We reached NOMINAL_LOW_NOISE an hour ago, after a couple of locklosses that were sorted out with Elenna's help (see below), and then took a couple of unthermalized broadband calibration measurements (84959, 84960). I also just adjusted the sqz angle and was able to get better squeezing in the 1.7 kHz band, but the 350 Hz band squeezing is very bad. I am selecting DOWN so that if we lose lock overnight, we don't try to relock.

Early in the relocking process, we were having issues with DRMI and PRMI not catching, even though we had really good DRMI flashes. I finally gave up and went to run an initial alignment, but we had a bit of a detour when an SDF error sent Big Noise (TM) into PM1, tripping the software WD and then the HAM1 ISI and HAM1 HEPI as well. Once we got that figured out, we went through a full initial alignment with no issues.

Relocking, we had two locklosses from LOWNOISE_ASC at the same spot. Here are their logs (first, second). There were no ASC oscillations before the locklosses, so it doesn't seem to be due to the 1 Hz issues from earlier (849463). Looking at the logs, both happened right after turning on FM4 (DcntrlLP) for DHARD P. Elenna took a look at that filter and noticed that the ramp-on time might be too short; she changed it from 5 s to 10 s and updated the wait time in the guardian to match. She loaded it all in, and it worked!!
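
For anyone unfamiliar with the pattern, here is a minimal sketch (not the real ISC_LOCK code; the state, channel, and timer names are made up) of how a guardian state can engage a filter module and wait out its ramp before moving on, which is the kind of wait time that had to be updated to match the new filter ramp:

    from guardian import GuardState

    # Sketch only: names are hypothetical, and 'ezca' is normally provided by the
    # guardian environment rather than imported.
    DHARD_P_RAMP_SEC = 10  # should be >= the ramp time set on the filter in foton

    class ENGAGE_DHARD_P_LOWNOISE(GuardState):
        def main(self):
            # turn on the low-noise controller (FM4) in the DHARD P filter bank
            ezca.switch('ASC-DHARD_P', 'FM4', 'ON')
            # start a timer matching the filter ramp so the state doesn't
            # complete (and the next state doesn't start) before the ramp is done
            self.timer['dhard_p_ramp'] = DHARD_P_RAMP_SEC

        def run(self):
            # hold here until the ramp timer expires, then report done
            return bool(self.timer['dhard_p_ramp'])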

As a strange aside, after the second LOWNOISE_ASC lockloss, I went into manual IA to align PRX, but there was no light on ASC-AS_A. I left manual IA, went through DOWN and SDF_REVERT again, then went back into manual IA and found the same issue at PRX. Looking at the ASC screen, I noticed that the fast shutter was closed. I selected OPEN for the fast shutter and it opened fine. This was a weird issue??

LOG:

23:30 UTC Locked and getting data for the new calibration
23:43 Lockloss
    - Started an initial alignment, trying to run it automatically now that PRC align is bypassed in the state graph (84950)
    - Tried relocking, couldn't get DRMI or PRMI to catch, even with really good DRMI flashes
    - Went to manual initial alignment to just do PRX by hand, but saw that the HAM1 ISI IOP DACKILL had tripped
        - Then HAM1 HEPI tripped, and I had to put PM1 in SAFE because huge numbers were coming in through the LOCK filter
        - It was due to an SDF error and was corrected
    - Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
    - Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
    - Tried going to manual IA to align PRX, but there was no light on ASC-AS_A. Went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. The fast shutter turned out to be closed; selected OPEN for it and it opened fine.
06:03 NOMINAL_LOW_NOISE
06:07 Started BB calibration measurement
06:12 Calibration measurement done
06:36 BB calibration measurement started
06:41 Calibration measurement done
07:02 Back into Observing

Start Time | System | Name    | Location | Lazer_Haz | Task                    | Time End
00:50      | VAC    | Gerardo | LVEA     | YES       | Climbing around on HAM1 | 00:58
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 00:13, Wednesday 11 June 2025 (84962)

Unfortunately, since we don't want the IFO trying to relock all night if we lose lock, I have to select DOWN, but that means the request for ISC_LOCK is not in the right spot for us to stay in Observing. So we won't be Observing overnight, but we will stay Locked (at least until we lose lock, at which point we will go to DOWN).

elenna.capote@LIGO.ORG - 11:22, Wednesday 11 June 2025 (84973)

Here is some more information about the problems Oli faced last night and how they were fixed.

PM1 saturations:

Unfortunately, this problem was an error on my part. Yesterday, Sheila and I were making changes to the DC6 centering loop, which feeds back to PM1. As part of updating the loop design, I SDFed the new filter settings, but inadvertently also SDFed the input of DC6 to be ON in the safe snapshot. We don't want this; SDF is supposed to revert all the DC centering loop inputs to OFF when we lose lock. Because of this mistake, a large junk signal came in through the input of DC6 and was sent to the suspension, which railed PM1 and then tripped the HAM1 ISI. Once I realized what was happening, I logged in and had Oli re-SDF the inputs of DC6 P and Y to be OFF.

You can see this mistake in my attached screenshot of the DC6 SDF; I carelessly missed the "IN" among the list of differences.

DHARD P filter engagement:

In order to avoid some control instabilities, Sheila and I have been reordering some guardian states. Specifically, we moved the LOWNOISE ASC state to run after LOWNOISE LENGTH CONTROL. This should not have caused any problems, except that Oli noticed we lost lock twice at the exact same point in the locking process: right at the end of LOWNOISE ASC, when the DHARD P low-noise controller (FM4) is engaged. I attached the two guardian logs Oli sent me demonstrating this.

I took a look at the FM4 step response in foton and noticed that it is actually quite long, while the ramp time of the filter was set to 5 seconds. I also looked at the DARM signal right before each lockloss and saw that DARM IN1 had a large excursion away from zero, as if it was being kicked. My hypothesis is that the impulse from engaging the new DHARD P filter was kicking DARM. This guardian state used to run BEFORE we switch the coil drivers to low bandwidth, so maybe the low-bandwidth coil drivers can't handle that kind of impulse.
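
As a toy illustration of that kind of check (not the real DcntrlLP design; the 0.1 Hz low-pass, sample rate, and 1% settling criterion below are all made-up stand-ins), here is how one could compare a filter's step-response settling time to the engagement ramp with scipy:

    import numpy as np
    from scipy import signal

    fs = 512.0       # stand-in sample rate for the illustration
    ramp_sec = 5.0   # the ramp time that turned out to be too short

    # Stand-in low-pass controller; the real DHARD P FM4 design lives in foton.
    b, a = signal.butter(2, 0.1 / (fs / 2.0))  # 0.1 Hz 2nd-order low-pass

    # Discrete step response over 30 s
    n = int(30.0 * fs)
    tout, yout = signal.dstep((b, a, 1.0 / fs), n=n)
    step = np.squeeze(yout)

    # Settling time: last moment the response is more than 1% from its final value
    err = np.abs(step - step[-1]) / abs(step[-1])
    settle_sec = tout[np.nonzero(err > 0.01)[0][-1]]

    print(f"step response settles in ~{settle_sec:.1f} s; ramp is {ramp_sec:.0f} s")
    if settle_sec > ramp_sec:
        print("ramp is shorter than the settling time -> engaging can kick the loop")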

I changed the ramp time of the filter to 10 seconds, and we proceeded through the state on the next attempt just fine.

Images attached to this comment
Non-image files attached to this comment