Lockloss @ 00:45 UTC after an almost 12 hr lock - link to lockloss tool
No obvious cause.
TITLE: 07/04 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 25mph Gusts, 20mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for almost 11 hours, but is currently not observing as I'm troubleshooting an issue with the SQZ filter cavity. I can consistently bring everything with the squeezer up to the point where the FC is needed, and lock the FC up to the SQZ_FC Guardian's 'IR_FOUND' state, but every time I try moving on, the FC drops out. Investigation still ongoing.
H1 back to observing at 00:18 UTC.
Ultimately, after trying several things that didn't completely fix the issue (trending the FC1/FC2 OSEMs, adjusting FC1/FC2 to maximize FC green transmission, and doing a "graceful clear history" on the filter cavity ASC), I enlisted Sheila for guidance. I ran her new 'SCAN_OPOTEMP' Guardian state to optimize the OPO temperature (then gave it a couple more steps to maximize CLF_REFL_RF6), then requested SQZ_MANAGER all the way up to 'FREQ_DEP_SQZ', which worked perfectly this time (a rough Python-driven sketch of this sequence is below). The improvement of CLF_REFL_RF6 from 0.35 to 0.39 must have been just enough to make the filter cavity happy.
There were outstanding SDF diffs on h1ascsqzfc, possibly from my history clearing, which Sheila said were okay to just revert (screenshot attached). When I did this, I saw the green FC spot wobble, but not unlock. I then returned H1 to observing.
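For reference, a minimal sketch of what the state-request sequence could look like if driven from a Python session rather than the MEDM/Guardian screens. This assumes the usual Guardian node EPICS interface (H1:GRD-<NODE>_REQUEST / _STATE channels) and pyepics; it is not the exact procedure used tonight.

```python
# Hedged sketch: request a Guardian state over EPICS and poll until the node
# reports being in that state. Channel naming follows the usual
# H1:GRD-<NODE>_REQUEST / _STATE convention (an assumption, not verified here).
import time
from epics import caget, caput

def request_and_wait(node, state, poll_s=5, timeout_s=600):
    """Request `state` on Guardian `node`, return True once the node reports it."""
    caput(f'H1:GRD-{node}_REQUEST', state)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if caget(f'H1:GRD-{node}_STATE', as_string=True) == state:
            return True
        time.sleep(poll_s)
    return False

# Optimize the OPO temperature first, then ask the manager for FDS.
if request_and_wait('SQZ_OPO_LR', 'SCAN_OPOTEMP'):
    request_and_wait('SQZ_MANAGER', 'FREQ_DEP_SQZ')
```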
TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
H1 has been locked all day and is still locked at NLN... but...
Dropped out of observing at 22:54:25 UTC due to SQZ_FC dropping to DOWN.
A few tries later it locked and H1 returned to observing for just a second, only for SQZ_FC to unlock again at 22:58:55 UTC.
Several tries after that, and the FC still cannot seem to lock itself reliably.
We have tried following the directions found in the SQZ_FC section of this SQZr Troubleshooting document.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:37 | FAC | Tyler | Machine Shop | N | Checking in on machine shop status | 23:06 |
PSL Status Report - Weekly FAMIS 26429
Laser Status:
NPRO output power is 1.856W
AMP1 output power is 69.91W
AMP2 output power is 140.0W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 17 days, 1 hr 18 minutes
Reflected power = 23.39W
Transmitted power = 105.6W
PowerSum = 129.0W
FSS:
It has been locked for 0 days 6 hr and 52 min
TPD[V] = 0.8037V
ISS:
The diffracted power is around 3.6%
Last saturation event was 0 days 6 hours and 52 minutes ago
Possible Issues:
PMC reflected power is high
Fri Jul 04 10:08:40 2025 INFO: Fill completed in 8min 36secs
TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
When I arrived, H1 was in NOMINAL_LOW_NOISE but not observing. Range: 120 Mpc.
SQZ_LO_LR was stuck in LOCKING_OMC[15] with an error message: "Beam diverter is open but no 3 MHz on OMC, could be an alignment problem."
Things I have done already:
Init'd SQZ_LO_LR and then SQZ_MANAGER.
"Beam diverter is open but no 3MHz on OMC, could be an alignment problem" - trend ZM4,5,6 alignment while trending H1:SQZ-OMC_TRANS_RF3_DEMOD_RFMON. If the alignment of any has changed, move them back slowly while aiming to increase H1:SQZ-OMC_TRANS_RF3_DEMOD_RFMON.
Found here: Troubleshooting SQZr, from the Ops wiki.
Slider values had not changed at all over the course of several hours, but the OSEMs had.
Sheila called while I was setting up my ndscope to check the OSEMs and mentioned that ZM5 likely didn't matter, as it is ZM4 and ZM6 that get moved.
She had me clear the history on H1:SUS-ZM{4,6}_M1_LOCK_{L,P,Y}_OUTPUT before I adjusted the ZM4 and ZM6 sliders.
The H1:SUS-ZM{4,6}_M1_LOCK_{L,P,Y}_GAIN values were changed after first turning up their TRAMPs.
Then I cleared the history on those locking filters.
Once that was done, I simply moved them back slowly while aiming to increase H1:SQZ-OMC_TRANS_RF3_DEMOD_RFMON, as the troubleshooting doc suggested (a rough sketch of this kind of procedure is below).
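For illustration only, a hedged sketch of this clear-history-then-nudge procedure using pyepics. The slider channel names (M1_OPTICALIGN_*_OFFSET), the RSET=2 clear-history convention, and the step size are assumptions, not values taken from the actual recovery.

```python
# Hedged sketch: clear the LOCK filter histories on ZM4/ZM6, then step an
# alignment slider in small increments, keeping only steps that increase the
# SQZ-OMC 3 MHz monitor. Channel names and step size are placeholders.
import time
from epics import caget, caput

RFMON = 'H1:SQZ-OMC_TRANS_RF3_DEMOD_RFMON'

def clear_lock_history(optic, dofs=('L', 'P', 'Y')):
    for dof in dofs:
        # writing 2 to a filter module's RSET channel clears its history
        caput(f'H1:SUS-{optic}_M1_LOCK_{dof}_RSET', 2)

def nudge_toward_max(slider, step=0.5, max_steps=10, settle_s=5):
    best = caget(RFMON)
    for _ in range(max_steps):
        caput(slider, caget(slider) + step)
        time.sleep(settle_s)
        now = caget(RFMON)
        if now < best:
            caput(slider, caget(slider) - step)  # step made things worse: undo
            break
        best = now

for optic in ('ZM4', 'ZM6'):
    clear_lock_history(optic)
    for dof in ('P', 'Y'):
        nudge_toward_max(f'H1:SUS-{optic}_M1_OPTICALIGN_{dof}_OFFSET')
```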
Then, as Sheila advised, I ran SQZ_MANAGER through SCAN_ALIGNMENT_FDS[105], which further touched up the alignment.
After accepting SDF diffs, we got back to observing at 15:57 UTC at 149 Mpc.
TITLE: 07/04 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: I did not see any PI ring-ups during my shift. Ended the shift with a superevent!
LOG: No log
00:50 - 01:21 UTC I dropped Observing to run a calibration measurement
04:31 UTC Superevent S250704ab
05:00 UTC lockloss :(
We had been locked for just over 3 hours, the circulating power was ~378.5kW in each arm, a little under the usual 380kW.
Broadband:
Start: 2025-07-04 00:50:39
Stop: 2025-07-04 00:55:50
Data: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250704T005039Z.xml
Simulines:
Start: 2025-07-04 00:56:56.978617 UTC // GPS: 1435625834.978617
Stop: 2025-07-04 01:20:13.539672 UTC // GPS: 1435627231.539672
Data:
2025-07-04 01:20:13,381 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,389 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,394 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,398 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,403 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250704T005657Z.hdf5
TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Today was Thursday commissioning from 8am to 12:30pm; H1 was locked for 1 hr 45 min of the commissioning window, and a few commissioning tasks were squeezed into this time.
H1 continues to be plagued by PI modes ringing up and causing locklosses, so today different settings continued to be tried (the latest and current ETMY ring heater setting is 1.5 W for both the upper and lower segments). For some of these changes, we have had to try different violin mode settings (for ETMY Mode 20); see alog 85527.
Also, the input pointing, which was lost due to the HAM2 SEI power supply swap on Maintenance Day, was restored by Elenna (alog 85533).
Did not get to run the calibration suite today, so we hope to do this after H1 gets to 3 hrs locked during Ryan C's shift tonight.
LOG:
I started adding a state to the OPO guardian this morning while we were unlocked that scans the OPO temperature while the CLF is locked, and sets it to the temperature that maximizes the CLF 6 MHz power in reflection off the OPO.
The code is based on the SCAN_SQZ_ANG guardian state and it produces plots here: https://lhocds.ligo-wa.caltech.edu/exports/SQZ/GRD/OPOTEMP_SCAN/OPOTEMP_SCAN_250703142801.png
The screenshot shows that this example was a 1-minute scan with a 0.02 degree C range; this was a bit too fast for the TEC servo to keep up with, so I've slowed it down to a 2-minute scan for the next time we try this out.
Operators can use it to adjust the OPO temp by opening the SQZ_OPO_LR guardian and requesting SCAN_OPOTEMP. I haven't tried this while squeezing was injected into the IFO, but it should work fine to do it then (just not while in observing). For future work we could add management that runs this each time the IFO relocks, but I haven't done that yet.
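As a rough illustration of the approach (not the actual SCAN_OPOTEMP code), a Guardian state that steps the OPO TEC setpoint through a small range and then settles at the value that maximized the 6 MHz CLF reflection might look like the sketch below. The setpoint/readback channel names, scan range, and timing are placeholders, and ezca here is the channel-access object Guardian provides to node code.

```python
# Hedged sketch of an OPO temperature scan Guardian state, loosely following
# the SCAN_SQZ_ANG pattern described above. Channel names, scan range, and
# step timing are placeholders, not the values in the real SCAN_OPOTEMP.
import numpy as np
from guardian import GuardState

TEMP_SET = 'SQZ-OPO_TEC_SETTEMP'          # assumed setpoint channel
CLF_RF6  = 'SQZ-CLF_REFL_RF6_ABS_OUTPUT'  # assumed readback channel

class SCAN_OPOTEMP(GuardState):
    request = True

    def main(self):
        center = ezca[TEMP_SET]
        # ~0.02 degC total range, 40 points
        self.points = np.linspace(center - 0.01, center + 0.01, 40)
        self.readings = []
        self.i = 0
        self.timer['step'] = 0

    def run(self):
        if not self.timer['step']:
            return False
        if self.i > 0:
            # record the readback for the setpoint applied on the previous step
            self.readings.append(ezca[CLF_RF6])
        if self.i < len(self.points):
            ezca[TEMP_SET] = self.points[self.i]
            self.i += 1
            self.timer['step'] = 3   # ~3 s per point -> roughly a 2 minute scan
            return False
        # scan finished: settle at the temperature that maximized CLF RF6
        best = self.points[int(np.argmax(self.readings))]
        ezca[TEMP_SET] = best
        return True
```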
Also note: this morning I set the sqz params to stop using SQZ ASC and start using the ADF servo for the squeezing angle. These two can't run at the same time, and I think that controlling the squeezing angle would be more beneficial right now. However, with various other things going on we didn't get to test this, and we are now observing with both the ASC and the ADF servo off. We momentarily dropped out of observing to engage the servo. I also added to the SQZ_ANG_ADJUST guardian an if statement that updates the nominal state by looking at sqzparams to check whether the servo is supposed to be on or off (sketched below). This will be compatible with the script that allows observing without squeezing this weekend, but that script hard-codes what the nominal state is, so in the long run we need to do something else.
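The nominal-state logic mentioned above might look something like the sketch below; the sqzparams flag and the state names are placeholders for whatever the module and guardian actually define.

```python
# Hedged sketch: pick SQZ_ANG_ADJUST's nominal state from sqzparams rather
# than hard-coding it. `use_adf_servo` and the state names are placeholders.
import sqzparams

if getattr(sqzparams, 'use_adf_servo', False):
    nominal = 'ADF_SERVO_ON'
else:
    nominal = 'DOWN'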
TITLE: 07/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146 Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Jennie Wright, Rahul
Earlier, we measured the QPD readout to be around 44.1 V/mm (in the X direction). This afternoon we dithered the input beam (spot size radius = 200 microns, see LHO alog 85458) in the horizontal and vertical directions at 100 Hz using the signal generator (amplitude 2 V peak-to-peak, offset 2.5 V). We did this after centering the beam on the QPD (output was approximately 10,500 counts). The output voltage applied to the dither was around 80 V. The results are given below, with a worked conversion sketched after them.
Horizontal dithering:
On the QPD the dither amplitude X (peak-to-peak) was estimated to be 2.2 V, mean 6.5 V
Beam motion estimate X = 50 microns
Dither amplitude Y (peak-to-peak) was estimated to be 1.2 V, mean -355 mV
Vertical dithering:
On the QPD the dither amplitude X (peak-to-peak) was estimated to be 1.0 V, mean 4.45 V
Beam motion estimate X = 20 microns
Dither amplitude Y (peak-to-peak) was estimated to be 1.62 V, mean -7.96 V.
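As a quick cross-check of the numbers above, the dither voltages convert to beam motion through the measured ~44.1 V/mm QPD responsivity:

```python
# Convert the peak-to-peak dither voltages above to beam motion using the
# measured QPD responsivity of ~44.1 V/mm in X.
QPD_V_PER_MM = 44.1

for label, dither_vpp in [('horizontal dither, X', 2.2), ('vertical dither, X', 1.0)]:
    motion_um = dither_vpp / QPD_V_PER_MM * 1000.0   # mm -> microns
    print(f'{label}: {dither_vpp} Vpp -> ~{motion_um:.0f} microns')

# horizontal dither, X: 2.2 Vpp -> ~50 microns
# vertical dither, X:   1.0 Vpp -> ~23 microns (quoted above as ~20 microns)
```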
During some ndscope testing, Erik ran into a file descriptor limit on h1daqnds1 that required us to restart daqd. The daqd had stopped accepting requests, giving a log message that the accept call had failed. We bumped the file descriptor limit up, and that solved the issue. We have made runtime changes (no restart required) on h1daqnds[01]. We have put the new limits into the daqd puppet, but have not applied them yet. We will reconcile the systems with puppet next week, after the long weekend.
with help online from Joe B
I revised this title to "attempted" because this push has failed and we reverted to the calibration from 20250610T224009Z.
Today I pushed a new calibration from calibration report 20250628T190643Z. We changed the SRCL offset on 6/26, which had a small effect on the sensing function, enough that Joe and I (with input from Sheila) decided to push a new calibration. With the change in the sensing function, I tagged a previous report on 6/26 with epoch-sensing and epoch-hfrl. When I regenerated the report, I set is_pro_spring to empty, since the previous iteration of the report showed that there was very little spring in the DARM sensing measurement, at least down to 10 Hz. I confirmed with Joe that the resulting corner plots for the sensing, which show very poor fits for F_spring and Q, are OK; this is because the pipeline is unable to fit any appreciable spring in the sensing function.
I checked the GDS filter results by eye, and confirmed they all looked flat. I then went ahead to push this new calibration, following steps we took last time, specifically:
pydarm commit 20250628T190643Z --valid
pydarm export --push 20250628T190643Z
pydarm upload 20250628T190643Z
pydarm gds restart
Then, we waited ten minutes to begin the calibration measurement. This is where I made an error: I checked that the GDS calib strain channels all looked sensible, and I saw some lines updating on grafana, so I assumed we were good to go. Corey began a calibration suite, which starts with a broadband measurement. However, the broadband results were not very good, and we lost lock right at the end of the measurement. This was my mistake: I never checked whether kappaC and kappaTST had settled, which it looks like they hadn't (a sketch of such a check is below). So, I think we need to relock and check the calibration again. If it still looks poor, we can revert to the previous calibration. This must be done before we go back to observing.
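For next time, a minimal sketch of the check that was missed: poll the time-dependent correction factors and confirm they have settled near 1 before starting the suite. The CAL-CS TDEP channel names below are what I believe the front end publishes, but should be confirmed before relying on this.

```python
# Hedged sketch: confirm kappa_C and kappa_TST have settled near 1.0 after a
# calibration push, before starting a calibration measurement.
from epics import caget

KAPPAS = {
    'kappa_C':   'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
    'kappa_TST': 'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',
}

def kappas_settled(tolerance=0.02):
    """Return True if every kappa is within `tolerance` of 1.0."""
    ok = True
    for name, chan in KAPPAS.items():
        value = caget(chan)
        print(f'{name} = {value:.4f}')
        if abs(value - 1.0) > tolerance:
            ok = False
    return ok

if not kappas_settled():
    print('Kappas have not settled yet; wait before starting the cal suite.')
```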
Follow-up edit below:
We relocked after the push above and remeasured, and saw the calibration was even worse than before (orange trace). We think this may be a fit error in the L2 actuation function, but we're not sure. Joe helped me revert the calibration to the previous version, from 20250610T224009Z. Corey and I ran an early broadband and saw the error was better (red trace). Still not the best, but we were not thermalized yet. Hopefully we can try a new push next week with a better cal.
Kevin, Matt, Sheila, Elenna, Corey
After yesterday's ETMY RH change to avoid the 10 kHz PI (85514), we lost lock 3 times overnight due to 80 kHz PIs.
Apparently the RH change also changed the violin mode damping phase for the ETMY 1 kHz mode (mode 20, 1000.307 Hz), 85526; this did not cause the earlier locklosses, but the mode is growing to the point where it is dominating the DCPD RMS in this lock, and it responded well to Elenna's sign flip.
Kevin took a look at the 10 kHz higher-order modes, second attachment. The top panel shows how the higher-order modes were thermalizing before the ring heater change; the bottom panel shows three locks since the ring heater change, which happened a few minutes before 2 hours into the lock, so these can be compared to the green trace in the top panel. The X arm higher-order modes are sitting around 10.6 kHz at this point in the self-heating thermalization; the Y arm modes were below the scale on this plot before the change and are now moving up to around 10480 Hz. If they gain another 20 Hz as the self-heating keeps thermalizing, similar to the X arm, this would put them around 10500 Hz, which doesn't have a lot of acoustic modes visible here.
We don't want to revert the ring heater change from last night, as that setting had the Y arm sitting right below the forest of acoustic modes. Matt estimates that we could move the Y arm below this forest of acoustic modes by going to 1.5 or 1.6 W per segment, but that would mean our two arm modes are more different from each other. First we tried lowering the power a little more, to 0.9 W per segment, to see if that helps the 80 kHz modes. Then Matt estimated that we could put the Y arm second-order mode in a similar location to the X arm's by using 0.6 W per segment, so we've now set them to that.
Matt took a reference here before the PI rang up; it looks like the PI is at 80297 Hz. I got a reference that includes the peak and the lockloss transient, which shows the frequency as 80298 Hz. The signal used for the PI damping is the DCPDs downconverted at 80 kHz, with a bandpass that goes from 294 to 298.5 Hz, so our peak is within the bandpass (see the quick check below). The PLL set frequency was 299 Hz; we lowered this to 297.5 Hz. It was being sent to ETMX for damping, which did appear to work once, but not as the mode grew. We think this is a Y arm PI, since we have been changing the Y arm ring heaters, so I've changed the output matrix to send this to ETMY.
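A quick arithmetic check that the measured PI line lands inside the damping bandpass after the 80 kHz downconversion:

```python
# Check that the observed PI line falls inside the bandpass of the
# downconverted (80 kHz) DCPD damping signal described above.
pi_freq_hz = 80298.0            # frequency from the lockloss reference
lo_freq_hz = 80000.0            # downconversion frequency
band_hz = (294.0, 298.5)        # bandpass on the downconverted signal

beat_hz = pi_freq_hz - lo_freq_hz
in_band = band_hz[0] <= beat_hz <= band_hz[1]
print(f'downconverted PI frequency: {beat_hz} Hz, in band: {in_band}')
# -> 298.0 Hz, which is inside the 294-298.5 Hz bandpass
```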
I SDFed the ETMY ring heater change to 1.5 W in observe.
For this new ring heater power (1.5 W), the ETMY Mode 20 violin started to ring up again with the default settings (+30 deg phase, -1.0 gain).
Took the gain to +1.0, but this also rang up the violin.
It's been about an hour, and this setting has been damping this mode:
Have not updated lscparams since we are still figuring out a ring heater power which works for H1. Once we find a good ring heater setting, we should update lscparams (if necessary).
Sheila, Matt, Elenna
In the midst of trying to diagnose various PI issues, we noticed the DCPD sum was slowly increasing, but not from PIs. We eventually figured out it was ETMY violin mode 20. The gain was set to -1; I flipped it to +1 and the mode then started being damped down.
Sheila updated the ETMY ring heater again to 1.5 W, and it looks like ETMY mode 20 phase has flipped back, and a damping gain of -1 is working once again.
It appears that gains of both -1 and +1 are not damping the mode now. Corey is trying different phase filters.
Back to observing at 02:09 UTC, then another lockloss with no obvious cause at 02:10 UTC.
Back to observing at 03:17 UTC, fully automatic relock.