TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Somewhat uneventful day with just one lockloss that we're still trying to come back up from. H1 is relocking and currently up to LOCKING_ALS.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD | Ongoing |
14:51 | FAC | Randy | MY | N | Plugging in forklift | 15:17 |
21:13 | VAC | Jordan, Janos | LVEA | - | Valving in/out HAM1 pumps | 21:29 |
21:16 | CDS | Dave | FCES | N | Check on I/O chassis | 21:36 |
21:30 | TCS | Tony | MER | N | TCS chiller checks | 22:07 |
22:08 | PEM | Robert | LVEA | - | Set up for picture taking | 22:26 |
Kiet and Sheila,
Following up on the investigation posted in aLOG 84136, we examined the impact of higher-order violin mode harmonics on the contamination region.
We found that subtracting the violin peaks near 1000 Hz (1st harmonic) from those near 1500 Hz (2nd harmonic) results in frequency differences that align with many of the narrow lines observed in the contamination region around 500 Hz.
Violin peaks that we used (from the O4a+b run-average spectra):
F_n1 = {1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083} Hz
F_n2 = {1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333, 1486.36153, 1485.76708} Hz
Out of the 35 possible difference pairs (one from each set), 27 matched known lines in the contamination region to within 1/1800 Hz (~0.56 mHz), most within 0.1 mHz. Considering that each region actually contains >30 peaks, the number of matching pairs likely increases significantly, helping explain the dense forest of lines in the contamination region.
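For anyone who wants to reproduce this, here is a minimal sketch of the difference-pair matching described above. The peak lists are the ones quoted in this entry; the contamination_lines list is a placeholder to be filled with the narrow lines from the Fscan run-average data.

import itertools

# Violin peaks from the O4a+b run-average spectra (quoted above), in Hz
F_n1 = [1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083]  # 1st harmonic region
F_n2 = [1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333,
        1486.36153, 1485.76708]                                       # 2nd harmonic region

# Placeholder: narrow lines observed in the ~500 Hz contamination region, in Hz
contamination_lines = []

TOL = 1.0 / 1800.0  # matching tolerance in Hz (~0.56 mHz)

matches = []
for f2, f1 in itertools.product(F_n2, F_n1):
    diff = f2 - f1
    for line in contamination_lines:
        if abs(diff - line) < TOL:
            matches.append((f2, f1, diff, line))

print(f"{len(matches)} of {len(F_n1) * len(F_n2)} difference pairs match a known line")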
Next steps:
The Fscan run average data are available here (interactive plots):
Fundamental region (500 Hz):
1st harmonic region (1000 Hz):
2nd harmonic region (1500 Hz):
TITLE: 06/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 11mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
We've lost lock at POWER_10Ws twice in a row, within 5 seconds of entering the state. I'm worried about how rung up the violins will be now, as they looked large right before the 2nd lockloss. I'm going to stop at CHECK_VIOLINS on my way up now. Both locklosses tag ADS_EXCURSION.
TCS Chiller Water Level Top-Off - BiWeekly FAMIS 27817
TCSY: Found at 10.4 and added 0 mL
TCSX: Found at 30.3 and added 50 mL
Everything looks like it's functioning well.
Lockloss @ 21:11 UTC - link to lockloss tool
No obvious cause, but looks to have been sudden. We're taking advantage of the downtime to have the VAC team address pumps on HAM1.
I took the opportunity to go into the FCES CER to take photos of h1cdsh8's I/O chassis. I was in there for about 5 minutes starting at 14:21.
Yesterday we ran a bruco on Francisco's post-vent SQZ time from 84996. Link to bruco here.
Command used: python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1433758866 --length=600 --outfs=4096 --fres=0.1 --dir=/home/camilla.compton/public_html/brucos/1433758866 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt
Links to a few coherences, although I haven't done a deep dive: SRCL (some 100-250 Hz), PRCL, MICH (bad 2-4 Hz), PSL ISS 2nd loop, and the jitter peaks can be seen in the IMC WFS.
The high coherence with CHARD P is probably coming from excess noise in CHARD P from HAM1, 84863. Jim is set to do further HAM1 ISI tuning tomorrow, so we can recheck this coherence later. We also have plans to rerun the noise budget injections to check if the CHARD coupling has changed.
We could do an iterative feedforward to take care of the residual LSC coherence, which mainly seems to be coming from MICH LSC.
We should also determine how much the MICH ASC coherence is limiting DARM and maybe change the loop design again.
Much of the other coherence seems to be jitter.
Closes FAMIS26392
The CS fans look fine, although MR_FAN5_170_2 is a bit noisy.
For the outbuilding fans, there's a periodic noise increase on a few of them: EY_FAN2_470_2, EX_FAN1_570_{1,2}, and MX_FAN2_370_1.
A summary of (our knowledge about) the 2nd loop array units is available on DCC: https://dcc.ligo.org/LIGO-D1101059
We opened the container of S1202966 in the optics lab for inspection. This is a unit removed from LHAM2 in 2016.
We found no damage (1st picture), all photodiodes and the QPD look OK, no chipping of the optics, but many components are missing.
I decided to partially disassemble the damaged/contaminated S1202967 to send some of the parts to C&B and keep them as last-resort spares. Jennie sent the following parts to C&B. There are deep scuffs that appear to be the result of repeated metal-to-metal contact/scratching, but the parts should be OK for use once they go through C&B.
The ISS array cover might be salvageable, but the place where the poles attach is bent so badly that bending it back might break it. See the 2nd picture; the surface is supposed to be flat.
Brief update about the following two items. No spare was found at LHO as of Jun/18/2025, so in the worst case we will use the parts salvaged from the damaged S1202967 assembly (they were already sent to C&B).
Since we've been seeing the ETMY roll mode consistently ringing up over the start of lock stretches, and since it can cause locklosses after long enough, Sheila modified the 'DAMP_BOUNCE' [502] state of ISC_LOCK to now engage damping of this mode with a gain of 40. The state has also been renamed to 'DAMP_BOUNCE_ROLL'. I have accepted the gain of 40 and ramp time of 5 sec in the OBSERVE.snap table of h1susetmy, and only the ramp time in the SAFE.snap table (screenshots attached; we had originally set the gain at 30 but then updated it to 40, which I forgot to take a screenshot of).
We are still unsure as to why this roll mode has been ringing up since the vent, but so far Elenna has ruled out the SRCL feedforward and theorizes it could be from ASC, specifically CHARD_P (see alog84982 and comments).
I think this is causing us locklosses: twice we've lost lock in this state as the damping turns on when I slowly stepped through the states, and twice we've lost it a few seconds into POWER_10Ws when GRD was moving through automatically. I reduced the gain from 40 to 30 (SVN committed and reloaded ISC_LOCK; I had to first commit the DAMP_BOUNCE_ROLL state edits) and doubled the tramp to 10 (SDFed in SAFE).
The reduced gain and increased tramp didn't stop it from killing the lock; as soon as it engaged, we lost lock. I've commented it out of ISC_LOCK (line 3937).
I think the BOUNCE_ROLL channel was mistyped in ISC_LOCK: the line is ezca['SUS-ETMY_M0_DAMP_R_GAIN'] = 40 where it should presumably be ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 40. I should have noticed this earlier.
I edited the channel in ISC_LOCK to add "DARM_" but I did not get a chance to reload before we went into Observing.
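For reference, here is a minimal sketch of what the corrected turn-on in the DAMP_BOUNCE_ROLL state could look like. This is not the actual ISC_LOCK code: the _TRAMP channel name and the GuardState wrapper are assumptions on my part; only the corrected gain channel name is quoted from the discussion above.

from guardian import GuardState

class DAMP_BOUNCE_ROLL(GuardState):
    # Sketch only. ezca is provided by the Guardian runtime; the gain channel
    # below uses the corrected spelling discussed above, the TRAMP channel is assumed.
    index = 502

    def main(self):
        ezca['SUS-ETMY_M0_DARM_DAMP_R_TRAMP'] = 10  # ramp time (s), doubled from 5
        ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 30   # damping gain, reduced from 40
        return True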
Fri Jun 13 10:07:30 2025 INFO: Fill completed in 7min 27secs
Jordan confirmed a good fill curbside.
The squeezer ASC was misaligning the squeezer early in the lock, as it has been doing this week.
Ryan took us out of observing to deal with this and the roll mode. I went to no squeezing and reset the AS42 offsets for no squeezing, a little more than 1 hour after power up. These offsets have changed with thermalization in the past.
I reset the sqz ASC using the "graceful clear history". Once the squeezing was injected, the RF3 level was too low to lock, so I adjusted ZM6 manually. I could perhaps have done this (without resetting the offsets) by asking SQZ_MANAGER to RESET_SQZ_ASC, as Oli and Camilla suggested last night.
I accepted the offsets in the observe.snap, but forgot about the safe.snap. Ryan verified that SQZASC is not included in SDF revert, so this will be fine for the weekend, but we should accept these in safe.snap sometime soon.
If we have another lock today, we can see if resetting the offsets has helped with the ASC issue during thermalization. If we do not, we can set the flag to not run SQZ ASC over the weekend.
WP12620
09:35 I started the copy of the past 6 months of raw minute trend files from h1daqtw0 to h1ldasgw0-RAID using h1daqfw0. This copy typically takes about 26 hours. It is running in nice mode to minimize its impact on daqd.
Fully automatic relock except for an unsurprising adjustment of PRM to lock DRMI.
Also accepted a batch of SDFs to start observing:
TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: Looks like H1 unlocked at 14:00 and just finished up running an initial alignment. Starting lock acquisition now.
The cause of the range drop and eventual lockloss this morning appears to be from the problematic roll mode we've been seeing recently (see alog84982).
If I see that it's still rung up once H1 relocks, I'll apply the damping gain of 30 that seemed to work yesterday evening.
Received a phone call at 1:20 AM PDT.
Saw that H1 was at NLN, and ready to go to Observing, but could not due to SDF Diffs for LSC & SUSETMY (see attached screenshot):
1) LSC
I was not familiar with these channels, so I went through the exercise of trying to find their MEDM screen, but for the life of me I could not get there! The closest I got was LSC Overview / IMC-MCL Filter Bank, but they were not on that screen (I probably spent 30 min looking everywhere and in between with no luck). Looking at these channels in ndscope, they were at their nominals for the last lock. I also looked in the alog and only saw SDF entries for them from 2019 & 2020. Ultimately, I just decided to do a REVERT (and luckily, H1 did not lose lock).
2) SUSETMY
Then H1 automatically went back to Observe.
Maybe Guardian, for some reason, took these channels to these settings? At any rate, I'm going to try to go back to sleep since it has been an hour already (hopefully this does not happen for the next lock!).
These MCL trigger thresholds come from the IMC_LOCK Guardian and are set in the 'DOWN' and 'MOVE_TO_OFFLINE' states.
In 'DOWN', the trigger ON and trigger OFF thresholds are set at 1300 and 200, respectively, for the IMC to prepare to lock as seen in the setpoints from Corey's screenshot.
In 'MOVE_TO_OFFLINE', the trigger ON and trigger OFF thresholds are set at 100 and 90, respectively (for <4W input), as seen in the EPICS values from Corey's screenshot.
So, it would seem that after the lower thresholds were set when taking the IMC offline sometime recently, they were incorrectly accepted in SDF. I'll accept them as the correct values in the OBSERVE.snap table once H1 is back up to low noise, as I expect they'll show up as a difference again.
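For context, here is a hedged sketch of how these two IMC_LOCK states could be setting the thresholds. The trigger-threshold channel names below are hypothetical stand-ins, not the real EPICS records; the values are the ones from Corey's screenshot.

from guardian import GuardState

class DOWN(GuardState):
    # Sketch only; channel names are hypothetical stand-ins.
    def main(self):
        ezca['IMC-MCL_TRIG_THRESH_ON'] = 1300  # prepare the IMC to lock
        ezca['IMC-MCL_TRIG_THRESH_OFF'] = 200
        return True

class MOVE_TO_OFFLINE(GuardState):
    # Sketch only; channel names are hypothetical stand-ins.
    def main(self):
        ezca['IMC-MCL_TRIG_THRESH_ON'] = 100   # lower thresholds for <4 W input
        ezca['IMC-MCL_TRIG_THRESH_OFF'] = 90
        return True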
TITLE: 06/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Observing at 148 Mpc and have been Locked for almost 6 hours. I just turned the Roll damping off for ETMY so we don't get any sdf diffs if we have to relock overnight.
When we first got back up, we noticed the ETMY roll mode ringing up again, which we had hoped was solved by Elenna's changes to the SRCL FF (84998). We tried turning damping back on where we had had it in previous locks (at 20), but this didn't seem to be doing anything this time. We eventually tried a gain of 30 instead, and this finally started damping it.
Near the beginning of the lock, we also had the SQZer unlock, and it had trouble relocking by itself because the ASC had pulled ZM4 and ZM6 out of range so the LO could not relock. To fix this, Camilla took us to NO_SQZ, then cleared the history on the P and Y lock filters on ZM4/6. Afterwards, she verified that we would've also been able to do this by just taking SQZ_MANAGER to RESET_SQZ_ASC, and then back to FDS once that finishes.
In the hours after that, we've had the SQZer unlock multiple times, which is why we've been popping in and out of Observing so much, but each time it has been able to relock itself fine.
LOG:
21:30 Working on PRMI
21:41 Decided to just try and relock bc wind is rising
- Started an IA due to DRMI not being able to catch
- Lost lock during ENGAGE_ASC_FOR_FULL_IFO (the glitch that happens during it got too big)
23:32 NOMINAL_LOW_NOISE
23:39 Observing
23:41 Quickly out of Observing to turn on ETMY Roll damping
23:41 Back into Observing
23:54 Out of Observing due to turning ETMY roll damping back on
23:54 Back into Observing
00:01 Out of Observing due to SQZer unlocking
- LO could not relock due to the ASC running away
00:15 Back into Observing
00:17 Out of Observing to try bumping up the ETMY Roll damp gain
00:17 Back into Observing
02:17 Out of Observing due to SQZ unlock
02:20 Back into Observing
02:49 Out of Observing due to SQZ unlock
02:57 Back into Observing
03:37 Out of Observing due to SQZ unlock
03:38 Back into Observing
05:11 Out of Observing to turn the ETMY roll damping off
05:12 Back into Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:34 | PEM | Robert | LVEA | YES | Getting LVEA ready for Observing | 22:02 |
Observing at 145 Mpc and have been Locked for 4 hours. We've been bumped out of Observing a couple times due to SQZ unlocking, but it's been able to get everything back in order by itself each time.
Adding a plot comparing the PSDs before and after removing the peaks that can be identified by this method.