TITLE: 06/14 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: A mix of issues this shift: environmental (wind and an earthquake), hardware, and software. The new DAMP_BOUNCE_ROLL state seems to be killing the lock every time it engages, so I've commented it out.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD | 05:18 |
00:46 | OMC | Keita | LVEA | Y | Investigate OMC PZT | 00:56 |
I checked the screenshots when I got home and saw that we had been sitting in DAMP_BOUNCE_ROLL for ~20 minutes with the state completed, so I hopped on and requested it to move on to NLN. I'm not sure why H1_MANAGER wasn't moving it on, as the REQUEST was set to NLN as it should have been.
ITMX13 decided it didn't want to damp again, so I had to find some new settings. FM1 + FM4 + FM10 G = -0.2 seems to be working. I've set its gain to zero in lscparams in the meantime and reloaded the node.
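For context, applying those settings by hand amounts to a filter-module switch plus a gain write on the mode's damping filter bank. A minimal sketch, run in a Guardian/ezca context like the ISC_LOCK line quoted further down; the bank name SUS-ITMX_L2_DAMP_MODE13 is my assumption for "ITMX13", so check the actual bank before using:

# sketch only; `ezca` is the Guardian-provided EPICS wrapper
bank = 'SUS-ITMX_L2_DAMP_MODE13'               # assumed bank name for ITMX mode 13
ezca.switch(bank, 'FM1', 'FM4', 'FM10', 'ON')  # engage the three filter modules
ezca[bank + '_GAIN'] = -0.2                    # gain that seems to damp the mode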
Ryan had a hard time locking the OMC, and there were no DCPD_SUM spikes as Ryan moved the PZT offset manually. We saw nothing on the OMC trans camera either.
I found that the OMC PZT2 monitor dropped to zero-ish at 14:23 PDT (21:23 UTC). That coincides with vacuum-related activity for HAM1, not sure if they are related.
In the mezzanine I found that the output of the HV driver was zero.
Pressing VSET and ISET, I saw that the driver was set to 110V / 80mA. Pressing RECALL -> ENTER didn't do anything. I also noticed at this point that the unit was somehow in CC (constant current) mode, which is usually automatically determined by the power supply; it should be in CV (constant voltage) mode.
I turned the output off, asked Ryan to turn the PZT offset to zero (which means the middle of 0-100V range, i.e. 50V, so I should have asked -50V offset), power cycled the unit just because, pressed VSET again (it was still 110V), pressed ENTER, turned the output ON, and it started working again.
Ryan moved the PZT offset and the HV monitor responded. Shortly after this the IFO lost lock but I don't think that was related to the HV.
Corey, Craig and I had the exact same issue 2 weeks ago.
This is twice in a few weeks. Either we have a PZT drawing too much current or a power supply failing. We will swap the power supply Tuesday.
Jordan, Janos: Today at ~14:00, after a lockloss, we valved out the annulus Pfeiffer aux carts. We let them run until both AIPs turned over. This happened first with the HAM1 AIP (noble diode), which, despite the higher gas load, turned over earlier. At ~16:30 the HAM2 AIP (StarCell) also turned over, so at ~16:45 I stopped the Pfeiffer carts, eliminating the two biggest noise sources. Also, today at ~14:00 we valved in the main IP (IP13), which brought the pressure down to ~5.3E-7 Torr; it is stabilizing at ~5.8E-7 Torr. The main turbo's backing cart is still on (standing on vibration damping pads); we are planning to valve out the turbo early next week.
TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Somewhat uneventful day with just one lockloss that we're still trying to come back up from. H1 is relocking and currently up to LOCKING_ALS.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD | Ongoing |
14:51 | FAC | Randy | MY | N | Plugging in forklift | 15:17 |
21:13 | VAC | Jordan, Janos | LVEA | - | Valving in/out HAM1 pumps | 21:29 |
21:16 | CDS | Dave | FCES | N | Check on I/O chassis | 21:36 |
21:30 | TCS | Tony | MER | N | TCS chiller checks | 22:07 |
22:08 | PEM | Robert | LVEA | - | Set up for picture taking | 22:26 |
Kiet and Sheila,
Following up on the investigation posted in aLOG 84136, we examined the impact of higher-order violin mode harmonics on the contamination region.
We found that subtracting the violin peaks near 1000 Hz (1st harmonic) from those near 1500 Hz (2nd harmonic) results in frequency differences that align with many of the narrow lines observed in the contamination region around 500 Hz.
Violin peaks that we used (from O4a+b run-average spectra):
F_n1 = {1008.69472,1008.81764, 1007.99944,1005.10319,1005.40083} Hz
F_n2 = {1472.77958,1466.18903,1465.59417,1468.58861, 1465.02333, 1486.36153, 1485.76708} Hz
Out of the 35 possible difference pairs (one frequency from each set), 27 matched known lines in the contamination region to within 1/1800 Hz (~0.56 mHz), most within 0.1 mHz. Considering that each region actually contains >30 peaks, the number of matching pairs likely increases significantly, helping explain the dense forest of lines in the contamination region.
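For reference, the matching check amounts to forming every F_n2 - F_n1 difference and comparing it against the list of lines in the ~500 Hz contamination region. A minimal sketch; the contamination_lines values below are placeholders, not the actual line list:

import itertools

f_n1 = [1008.69472, 1008.81764, 1007.99944, 1005.10319, 1005.40083]   # Hz, 1st harmonic peaks
f_n2 = [1472.77958, 1466.18903, 1465.59417, 1468.58861, 1465.02333,
        1486.36153, 1485.76708]                                        # Hz, 2nd harmonic peaks

contamination_lines = [457.3, 460.1, 464.0]   # placeholder values only
tol = 1.0 / 1800.0                            # ~0.56 mHz matching tolerance

matches = [(f2, f1, f2 - f1, line)
           for f2, f1 in itertools.product(f_n2, f_n1)
           for line in contamination_lines
           if abs((f2 - f1) - line) < tol]

print(f"{len(matches)} of {len(f_n1) * len(f_n2)} difference pairs match a listed line")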
Next steps:
The Fscan run average data are available here (interactive plots):
Fundamental region (500 Hz):
1st harmonic region (1000 Hz):
2nd harmonic region (1500 Hz):
TITLE: 06/13 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 11mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
We've lost lock at POWER_10Ws twice in a row, within 5 seconds of entering the state. I'm worried about how rung up the violins will be now, as they looked large right before the 2nd lockloss. I'm going to stop at CHECK_VIOLINS on my way up now. Both locklosses tag ADS_EXCURSION.
TCS Chiller Water Level Top-Off - BiWeekly FAMIS 27817
TCSY: Found at 10.4 and added 0 mL
TCSX: Found at 30.3 and added 50 mL
Everything looks like it's functioning well.
Lockloss @ 21:11 UTC - link to lockloss tool
No obvious cause, but looks to have been sudden. We're taking advantage of the downtime to have the VAC team address pumps on HAM1.
I took the opportunity to go into the FCES CER to take photos of h1cdsh8's I/O chassis. I was in there for about 5 minutes starting at 14:21.
Yesterday we ran a bruco on Francisco's post-vent SQZ time from 84996. Link to bruco here.
Command used: python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1433758866 --length=600 --outfs=4096 --fres=0.1 --dir=/home/camilla.compton/public_html/brucos/1433758866 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt
Links to a few coherences, although I haven't done a deep dive: SRCL (some 100-250 Hz), PRCL, MICH (bad 2-4 Hz), PSL ISS 2nd loop; the jitter peaks are also visible in IMC WFS.
The high coherence with CHARD P is probably coming from excess noise in CHARD P from HAM1, 84863. Jim is set to do further HAM1 ISI tuning tomorrow, so we can recheck this coherence later. We also have plans to rerun the noise budget injections to check if the CHARD coupling has changed.
We could do an iterative feedforward to take care of the residual LSC coherence, which mainly seems to be coming from MICH LSC.
We should also determine how much the MICH ASC coherence is limiting DARM and maybe change the loop design again.
Much of the other coherence seems to be jitter.
Closes FAMIS26392
For the CS fans, they look fine, although MR_FAN5_170_2 is a bit noisy.
For the OUT building fans, there's a periodic noise increase on a few different fans. EY_FAN2_470_2, EX_FAN1_570_{1,2} and MX_FAN2_370_1.
Summary of (our knowledge about) 2nd loop array units are available on DCC: https://dcc.ligo.org/LIGO-D1101059
We opened the container of S1202966 in the optics lab for inspection. This is a unit removed from LHAM2 in 2016.
We found no damage (1st picture), all photodiodes and the QPD look OK, no chipping of the optics, but many components are missing.
I decided to disassemble the damaged/contaminated S1202967 partially to send some of the parts to C&B and keep them as the last-resort spares. Jennie sent the following parts to C&B. There are deep scuffs which should be the result of repeated metal-to-metal contact/scratching, but they should be OK for use once they go through C&B.
The ISS array cover might be salvageable, but the place where the poles are attached is bent so badly that bending it back might break it. See the 2nd picture; the surface is supposed to be flat.
Brief update about the following two items. No spare was found at LHO as of Jun 18, 2025, so in the worst case we will use the parts salvaged from the damaged S1202967 assembly (they were already sent to C&B).
Since we've been seeing the ETMY roll mode consistently ring up over the start of lock stretches, and since it can cause locklosses after long enough, Sheila modified the 'DAMP_BOUNCE' [502] state of ISC_LOCK to now engage damping of this mode with a gain of 40. The state has also been renamed to 'DAMP_BOUNCE_ROLL'. I have accepted the gain of 40 and timeramp of 5 sec in the OBSERVE.snap table of h1susetmy and only the timeramp in the SAFE.snap table (screenshots attached; we had originally set the gain at 30 but then updated it to 40, which I forgot to take a screenshot of).
We are still unsure as to why this roll mode has been ringing up since the vent, but so far Elenna has ruled out the SRCL feedforward and theorizes it could be from ASC, specifically CHARD_P (see alog84982 and comments).
I think this is causing us locklosses: twice we've lost lock in this state as it turned on while I slowly stepped through the states, and twice we've lost it a few seconds into POWER_10Ws when GRD was moving automatically. I reduced the gain from 40 to 30 (SVN committed and reloaded ISC_LOCK; I had to first commit the DAMP_BOUNCE_ROLL state edits) and doubled the tramp to 10 (SDFed in SAFE).
The reduced gain and increased tramp didn't stop it from killing the lock, as soon as it engaged we lost lock. I've commented it out from ISC_LOCK - line 3937.
I think the BOUNCE_ROLL channel was mistyped in ISC_LOCK: the line is ezca['SUS-ETMY_M0_DAMP_R_GAIN'] = 40, where it should presumably be ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 40. I should have noticed this earlier.
I edited the channel in ISC_LOCK to add "DARM_" but I did not get a chance to reload before we went into Observing.
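For the record, a sketch of what that line in the DAMP_BOUNCE_ROLL state should look like once it is re-enabled, paraphrased from this entry rather than copied from ISC_LOCK:

ezca['SUS-ETMY_M0_DARM_DAMP_R_GAIN'] = 30   # corrected channel name; gain reduced from 40 to 30 this shift
# (the 10 s ramp time was handled via SDF in SAFE.snap rather than in this state, per the entry above)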
Fri Jun 13 10:07:30 2025 INFO: Fill completed in 7min 27secs
Jordan confirmed a good fill curbside.
The squeezer ASC was misaligning the squeezer early in the lock as it has been doing this week.
Ryan took us out of observing to deal with this and the roll mode. I went to no squeezing and reset the AS42 offsets for no squeezing, a little more than 1 hour after power up. These offsets have changed with thermalization in the past.
I reset the SQZ ASC using the "graceful clear history". Once the squeezing was injected, the RF3 level was too low to lock, so I adjusted ZM6 manually. I could perhaps have done this (without resetting the offsets) by asking SQZ_MANAGER to RESET_SQZ_ASC, as Oli and Camilla suggested last night.
I accepted the offsets in the observe.snap, but forgot about the safe.snap. Ryan verified that SQZASC is not included in SDF revert, so this will be fine for the weekend, but we should accept these in safe.snap sometime soon.
If we have another lock today, we can see if resetting the offsets has helped with the ASC issue during thermalization. If we do not, we can set the flag to not run SQZ ASC over the weekend.