Since we've been seeing the ETMY roll mode consistently ring up at the start of lock stretches, and since it can cause locklosses if left long enough, Sheila modified the 'DAMP_BOUNCE' [502] state of ISC_LOCK to now engage damping of this mode with a gain of 40. The state has also been renamed to 'DAMP_BOUNCE_ROLL'. I have accepted the gain of 40 and ramp time of 5 sec in the OBSERVE.snap table of h1susetmy, and only the ramp time in the SAFE.snap table (screenshots attached; we had originally set the gain at 30 but then updated it to 40, which I forgot to take a screenshot of).
We are still unsure as to why this roll mode has been ringing up since the vent, but so far Elenna has ruled out the SRCL feedforward and theorizes it could be from ASC, specifically CHARD_P (see alog84982 and comments).
I think this is causing us locklosses: twice we've lost lock in this state as the damping turns on while I slowly stepped through the states, and twice we've lost lock a few seconds into POWER_10Ws while GRD was moving through automatically. I reduced the gain from 40 to 30 (SVN committed and reloaded ISC_LOCK; I had to first commit the DAMP_BOUNCE_ROLL state edits) and doubled the tramp to 10 sec (SDFed in SAFE).
The reduced gain and increased tramp didn't stop it from killing the lock; as soon as it engaged, we lost lock. I've commented it out of ISC_LOCK (line 3937).
I think the BOUNCE_ROLL channel was mistyped in ISC_LOCK: the line is ezca['SUS-ETMY_M0_DAMP_R_GAIN'] = 40 where I think it should be ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 40. I should have noticed this earlier.
I edited the channel in ISC_LOCK to add "DARM_" but I did not get a chance to reload before we went into Observing.
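For reference, a minimal sketch of how the corrected lines could look in the DAMP_BOUNCE_ROLL state, assuming the standard _TRAMP/_GAIN channel pair for that filter bank (this is illustrative, not copied from ISC_LOCK):

    # Hypothetical sketch of the corrected damping engagement; values follow the text above.
    ezca['SUS-ETMY_M0_DARM_DAMP_R_TRAMP'] = 10  # doubled ramp time, in seconds
    ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 30   # gain reduced from 40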
Fri Jun 13 10:07:30 2025 INFO: Fill completed in 7min 27secs
Jordan confirmed a good fill curbside.
The squeezer ASC was misaligning the squeezer early in the lock as it has been doing this week.
Ryan took us out of observing to deal with this and the roll mode. I went to no squeezing and reset the AS42 offsets for the no-squeezing configuration, a little more than an hour after power-up. These offsets have changed with thermalization in the past.
I reset the SQZ ASC using the "graceful clear history". Once the squeezing was injected, the RF3 level was too low to lock, so I adjusted ZM6 manually. I could perhaps have done this (without resetting the offsets) by asking SQZ_MANAGER to RESET_SQZ_ASC, as Oli and Camilla suggested last night.
I accepted the offsets in the observe.snap, but forgot about the safe.snap. Ryan verified that SQZASC is not included in SDF revert, so this will be fine for the weekend, but we should accept these in safe.snap sometime soon.
If we have another lock today, we can see if resetting the offsets has helped with the ASC issue during thermalization. If we do not, we can set the flag to not run SQZ ASC over the weekend.
WP12620
09:35 I started the copy of the past 6 months of raw minute trend files from h1daqtw0 to h1ldasgw0-RAID using h1daqfw0. This copy typically takes about 26 hours. It is running in nice mode to minimize its impact on daqd.
Fully automatic relock except for an unsurprising adjustment of PRM to lock DRMI.
Also accepted a batch of SDFs to start observing:
TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: Looks like H1 unlocked at 14:00 and just finished up running an initial alignment. Starting lock acquisition now.
The cause of the range drop and eventual lockloss this morning appears to be from the problematic roll mode we've been seeing recently (see alog84982).
If I see that it's still rung up once H1 relocks, I'll apply the damping gain of 30 that seemed to work yesterday evening.
Received a phone call at 1:20am PDT.
Saw that H1 was at NLN, and ready to go to Observing, but could not due to SDF Diffs for LSC & SUSETMY (see attached screenshot):
1) LSC
I was not familiar with these channels, so I went through the exercise of trying to find their MEDM screen, but for the life of me I could not get there! The closest I got was LSC Overview / IMC-MCL Filter Bank, but they were not on that MEDM (I probably spent 30 minutes looking everywhere and in between with no luck). I looked at these channels in ndscope and they were at their nominal values for the last lock. I also looked in the alog and only saw SDF entries for them from 2019 & 2020. Ultimately, I just decided to do a REVERT (and luckily, H1 did not lose lock).
2) SUSETMY
Then H1 automatically went back to Observe.
Maybe Guardian, for some reason, took these channels to these settings? At any rate, I'm going to try to go back to sleep since it has been an hour already (hopefully this does not happen for the next lock!).
These MCL trigger thresholds come from the IMC_LOCK Guardian and are set in the 'DOWN' and 'MOVE_TO_OFFLINE' states.
In 'DOWN', the trigger ON and trigger OFF thresholds are set at 1300 and 200, respectively, for the IMC to prepare to lock as seen in the setpoints from Corey's screenshot.
In 'MOVE_TO_OFFLINE', the trigger ON and trigger OFF thresholds are set at 100 and 90, respectively (for <4W input), as seen in the EPICS values from Corey's screenshot.
So, it would seem that after the lower thresholds were set when taking the IMC offline sometime recently, they were incorrectly accepted in SDF. I'll accept them as the correct values in the OBSERVE.snap table once H1 is back up to low noise, as I expect they'll show up as a difference again.
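For illustration only, a minimal sketch of what those Guardian writes might look like through ezca; the channel names below are assumptions based on the IMC-MCL_FM_TRIG entries in the SDF screenshot, not copied from IMC_LOCK:

    # Hypothetical sketch; channel names assumed from the SDF entries.
    # In 'DOWN', preparing the IMC to lock:
    ezca['IMC-MCL_FM_TRIG_THRESH_ON'] = 1300
    ezca['IMC-MCL_FM_TRIG_THRESH_OFF'] = 200
    # In 'MOVE_TO_OFFLINE', for <4 W input:
    ezca['IMC-MCL_FM_TRIG_THRESH_ON'] = 100
    ezca['IMC-MCL_FM_TRIG_THRESH_OFF'] = 90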
TITLE: 06/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Observing at 148 Mpc and have been Locked for almost 6 hours. I just turned the roll damping off for ETMY so we don't get any SDF diffs if we have to relock overnight.
When we first got back up, we noticed the ETMY roll mode ringing up again, which we had hoped was solved by Elenna's changes to the SRCL FF (84998). We tried turning damping back on at the gain we had used in previous locks (20), but this didn't seem to be doing anything this time. We eventually tried a gain of 30 instead, and this finally started damping it.
Near the beginning of the lock, we also had the SQZer unlock, and it was having trouble relocking by itself because the LO could not relock due to the ASC having pulled ZM4 and ZM6 out of range. To fix this, Camilla took us to NO_SQZ, then cleared the history on the P and Y lock filters on ZM4/6. Afterwards, she verified that we would've also been able to do this by just taking SQZ_MANAGER to RESET_SQZ_ASC, and then back to FDS once that finishes.
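As an aside, a minimal sketch of how that history clear could be done from ezca, assuming the standard filter-module _RSET channel (writing 2 is the 'CLEAR HISTORY' action) and placeholder names for the ZM4/ZM6 lock filter banks:

    # Hypothetical sketch; bank names are placeholders for the ZM4/6 P and Y lock filters.
    for bank in ['SQZ-ASC_ZM4_P', 'SQZ-ASC_ZM4_Y', 'SQZ-ASC_ZM6_P', 'SQZ-ASC_ZM6_Y']:
        ezca[bank + '_RSET'] = 2  # clear filter history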
In the hours after that, we've had the SQZer unlock multiple times, which is why we've been popping in and out of Observing so much, but each time it has been able to relock itself fine.
LOG:
21:30 Working on PRMI
21:41 Decided to just try and relock because wind is rising
- Started an IA due to DRMI not being able to catch
- Lost lock during ENGAGE_ASC_FOR_FULL_IFO (the glitch that happens during it got too big)
23:32 NOMINAL_LOW_NOISE
23:39 Observing
23:41 Quickly out of Observing to turn on ETMY Roll damping
23:41 Back into Observing
23:54 Out of Observing due to turning ETMY roll damping back on
23:54 Back into Observing
00:01 Out of Observing due to SQZer unlocking
- LO could not relock due to the ASC running away
00:15 Back into Observing
00:17 Out of Observing to try bumping up the ETMY Roll damp gain
00:17 Back into Observing
02:17 Out of Observing due to SQZ unlock
02:20 Back into Observing
02:49 Out of Observing due to SQZ unlock
02:57 Back into Observing
03:37 Out of Observing due to SQZ unlock
03:38 Back into Observing
05:11 Out of Observing to turn the ETMY roll damping off
05:12 Back into Observing
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:34 | PEM | Robert | LVEA | YES | Getting LVEA ready for Observing | 22:02 |
Observing at 145 Mpc and have been Locked for 4 hours. We've been bumped out of Observing a couple times due to SQZ unlocking, but it's been able to get everything back in order by itself each time.
Jennie, Sheila, Camilla
Ran the userapps/.../sqz/h1/scripts/SCAN_PSAMS.py script, with it optimizing SQZ using the 350Hz BLRMS rather than the high-frequency ones to set the SQZ angle.
We tried the below values; see attached ndscope:
Note we don't think that the 350Hz BLRMS is a good measure of the range, as times when the yellow 350Hz BLRMS looks good don't agree with times of good range. We should add an 80-250Hz BLRMS in the place of BLRMS_2, with the hope that it would follow the range better.
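As a rough illustration of the suggested band, a minimal offline sketch of an 80-250 Hz band-limited RMS computed with scipy (this is just bandpass-then-RMS on a generic time series, not the front-end BLRMS implementation):

    import numpy as np
    from scipy import signal

    def blrms(x, fs, f_lo=80.0, f_hi=250.0):
        # Illustrative offline BLRMS: bandpass the data, then take the RMS.
        sos = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        y = signal.sosfiltfilt(sos, x)
        return np.sqrt(np.mean(y**2))

    # Example with placeholder data; in practice x would be a range- or DARM-related series.
    fs = 16384.0
    x = np.random.randn(int(60 * fs))
    print(blrms(x, fs))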
TITLE: 06/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is LOCKING in ACQUIRE_DRMI_1F
(Started my shift cover at around 10AM PT)
Today we had planned commissioning until 21:00 UTC. We were locked for the majority of it, with much of the commissioning time being spent on SQZ.
We experienced an ETM glitch lockloss (not human- nor environment-caused) - alog 85004. After this, we stopped at PRMI_LOCKED to work on PRMI ASC, but after trying for a while and not making much progress, the commissioners decided to continue normal locking.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD | 05:18
16:05 | PEM | Robert | LVEA | Y | Remove VP covers for CPY evaluation WP#12618 | 22:55 |
17:05 | FAC | Eric | EX | N | EX Chiller Work | 17:35 |
17:06 | FAC | Tyler, Philip the Bee Guy | EX | N | Bee Removal | 17:06 |
17:06 | FAC | Tyler | EX | N | Tyler | 17:06 |
19:16 | PSL | Jennie, Keita | Optics Lab | Local | ISS Work | 22:16 |
21:34 | PEM | Robert | LVEA | YES | Getting LVEA ready for Observing | 22:34 |
Lost lock due to the ETM Glitch while commissioning.
2025-06-12 23:39 UTC Back to Observing
Accepted SDFs for SUSPROC (filter for roll modes) and for LSC (IMC-MCL_FM_TRIG thresholds) to get into Observing. The control room at the time didn't know who changed the MCL thresholds, so I just accepted them.
TITLE: 06/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 28mph Gusts, 21mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently unlocked and trying to relock the detector before the wind gets worse. Previously, work was being done to fix PRMI ASC, but that's been abandoned for now since the wind is supposed to get worse and we can't keep the green arms locked for long in wind this bad.
WP12620
Dave, Oli:
I have isolated the raw minute trends for the time period 16dec2024 - 12jan2025 from the current data; these static files are now ready for transfer to spinning media.
h1daqnds0 was reconfigured to serve the past 6 months from this temporary location, and with Oli's permission its DAQD process was restarted at 13:44.
Sheila, Camilla
The aim was to follow 83594 with more FIS data points every 10 deg to get a better SQZ picture, skipping FIS ASQZ. The plan was to do this in the same lock stretch as Kevin's ADF scan so that we could compare results, but we lost lock before data was taken.
Type | Time (UTC) | Angle | DTT Ref |
No SQZ | 20:07:00 - 20:12:00 | N/A | ref 0 |
opo_grTrans_setpoint_uW | Amplified Max | Amplified Min | UnAmp | Dark | NLG (usual) | OPO Gain |
95 | 0.005316 | 9.54e-5 | 0.0002687 | -1.26e-5 | 18.9 | -8 |
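As a sanity check, the quoted NLG follows from the table values if the dark offset is subtracted from both the amplified max and unamplified levels (assuming that is the convention behind the 'NLG (usual)' column):

    # Worked example using the numbers in the table above.
    amp_max = 0.005316
    unamp = 0.0002687
    dark = -1.26e-5
    nlg = (amp_max - dark) / (unamp - dark)
    print(round(nlg, 1))  # -> 18.9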
Elenna, Camilla, Jenne, Ryan Short, Tony
We saw an increase in the DCPD monitor screen, which the roll monitor identified as the ETMX roll mode. The existing settings from a long time ago used AS A YAW to damp roll modes; I attempted to damp this by actuating on ETMY, which seems to have worked with a gain of 20 (settings in screenshot).
Camilla trended the monitor, and sees that this started to ring up slowly around 7 am today.
We've gone back to observing with this gain setting, but we plan to turn it off when this looks well damped (if we are still locked by the end of the eve shift, we should turn it off so the owl operator doesn't get woken up by SDF diffs).
Since we were able to damp it by actuating on ETMY, this is likely the ETMY roll mode, which indicates that either the ASC or the LSC feedforward is driving it.
I just took a look at the SRCL feedforward, and I see a pair of high Q zeros/poles (Q ~ 1600!) at about 13.5 Hz that I missed (facepalm). That seems a bit low, since this roll mode appears to be around 13.7 Hz, but we should still remove that anyway. We can take care of this tomorrow during commissioning and that might prevent the driving of this mode.
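To give a sense of how sharp a Q ~ 1600 feature at 13.5 Hz is, here is a minimal sketch of the magnitude response of a single zero/pole pair with those round numbers (the actual zero/pole placement and detuning in the foton filter are not known here, so this is purely illustrative):

    import numpy as np
    from scipy import signal

    # Illustrative zero/pole pair near 13.5 Hz with Q ~ 1600 (continuous-time).
    fz, fp, Q = 13.5, 13.55, 1600.0
    wz, wp = 2 * np.pi * fz, 2 * np.pi * fp
    num = [1.0, wz / Q, wz**2]   # zero pair
    den = [1.0, wp / Q, wp**2]   # pole pair

    f = np.linspace(13.0, 14.0, 200001)
    _, h = signal.freqs(num, den, worN=2 * np.pi * f)
    mag_db = 20 * np.log10(np.abs(h))
    print('gain swing across 13-14 Hz: %.1f dB to %.1f dB' % (mag_db.min(), mag_db.max()))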
I can't think of any ASC control that would have a significant drive at 13.7 Hz.
I just removed two high Q features at 6.5 and 13.5 Hz that were in the SRCL feedforward. I kept the same filter but removed the features, so there should be no SDF or guardian changes. My hope is that this will prevent the roll mode from being rung up, so I have turned the gain off and SDFed it. Attached is a screenshot of the SDF change for ETMY roll.
Unfortunately, it appears that the ETMY roll mode is still ringing up, so the SRCL feedforward is not the cause. Another possibility is the ASC. The CHARD P control signal is larger now around 13.7 Hz than it was in April. The attached plot shows a reference trace from April and live trace from last night's lock. I don't know if this is enough to drive this mode. The bounce roll notches are engaged on ETMY L2, and have 40 dB attenuation for the roll mode between 13.4 and 13.7 Hz.
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting such that we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up / moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break too. One year ago this was not happening, plot.
These large BS PIT changes began 5th to 6th July 2024 (plot). This is the day shift from when the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT): 78877; at the time we were doing PR2 spot moves. There was also a SUS computer restart (78892), but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we were heating a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract this. Then after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move, see attached (top two left plots). In the green bottom-mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot from before the PR2 move. Below is a list of other PR2 positions we tried; all the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.
To avoid this heating and BS drift, we should move back towards a PR2 yaw closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
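For context, a minimal sketch of what such a soft let-go could look like, assuming it is done by ramping the BS pitch ASC output down slowly rather than clearing it at once (the bank name and ramp time here are placeholders, not what's actually in ISC_DRMI):

    # Hypothetical sketch only; the real logic lives in the ISC_DRMI guardian.
    bank = 'ASC-MICH_P'           # BS pitch ASC loop (placeholder name)
    ezca[bank + '_TRAMP'] = 30    # long ramp time in seconds (placeholder value)
    ezca[bank + '_GAIN'] = 0      # ramp the output to zero instead of an abrupt clear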
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' to further make the optical lever signal stay more overall flat.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and settled where it wants to be for the next acquisition of DRMI. Attached is one such example.
In contrast, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.