Today we've had to run many initial alignments and mess around with mirror alignments to cajole DRMI into locking. In the first part of the day, we were coming back from a big earthquake which had kept us down for hours. I don't know what we can attribute the struggle to lock to, but it can't be related to BS heating since we had been down for many hours.
Later in the day, we were relocking after a few hours up, and just after the lockloss we proceeded to DRMI locking. The alignment was very poor. The guardian quickly took us to PRMI and then MICH FRINGES because of the very bad alignment. This is perhaps a sign that the BS slow release method isn't working so well, since it should keep the alignment decent just after lockloss. I cleared the history on the BS M1 pitch lock bank, and proceeded to help the MICH FRINGES and PRMI state by hand. Once we got back to DRMI we were able to lock relatively quickly.
I am suggesting that our DRMI locking problems are maybe not an alignment issue because this morning, after several alignments, DRMI took a very long time to lock, and just after the lockloss this afternoon, when the slow release method should have helped keep the alignment good, the alignment was very bad. Maybe we should look into the triggers or engagement of the locking to see if there is a problem there. Just based on my experience this afternoon, I would want to turn off the BS slow release, or recheck that it is doing what we want it to do.
TITLE: 07/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Relocking notes:
Today started off with an Initial_Alignment that was interrupted by earthquake mode being reactivated, which prompted another Initial_Alignment.
While trying to relock there were many attempts at DRMI -> PRMI -> Check_Mich_Fringes which yielded no locking results.
Another Initial Alignment happened, which allowed us to lock DRMI, but we caught a lockloss at LOW_NOISE_COIL_DRIVERS, and on the next attempt at OMC_WHITENING.
After that H1 bounced around through DRMI -> PRMI -> Check_Mich_Fringes again until another Initial_Alignment was run.
We finally got back to NLN at 20:54 UTC and were locked for a little over 2 hours before the 4pm Lockloss struck.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:31 | SQZ | Matt & Sheila | SQZT0 | Local | Adjusting the OPO Crystal | 18:25 |
| 15:38 | FAC | Kim & Nelly | MX & MY | N | Kim, Technical Cleaning at MX, Nelly MY. | 16:41 |
| 16:28 | PEM | Sam & Robert | LVEA HAM1 & 2 | N | Installing accelerometer | 17:28 |
| 20:32 | PEM | Robert | LVEA | N | Turning off PEM apparatuses | 20:34 |
| 20:48 | ISS | Rahul | Optics Lab | Local | Working in Optics lab, Likely on ISS | 20:53 |
| 21:47 | VAC | Jordan | MX | N | Getting rack | 22:17 |
Today while in DRMI ASC, and while trying to debug other problems with DRMI acquisition, Ryan, Tony, and I saw that the DRMI ASC started pulling the buildups in a bad direction, which made no sense. We were trying to figure out which loops were the culprit when I saw that the SRC1 offsets were engaged. These offsets had been put in place during the problems with the OFI, and we don't run with these offsets in full lock anymore. I turned the offsets off and the buildups started moving in the good direction again. This is very confusing, because we've probably been running like this for ages without a problem; today it was suddenly a problem. I commented out the lines in the ISC_DRMI guardian state PREP_DRMI_ASC where these offsets are turned on, and loaded.
TITLE: 07/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 just lost lock after a couple hours of being locked, so starting relocking now.
Lockloss @ 23:14 UTC after almost 2.5 hrs locked - link to lockloss tool
No obvious cause.
H1 back to observing at 00:46 UTC. Automatic relock, except that a small adjustment to FC2 pitch was needed to lock the filter cavity.
I also ran a 'SCAN_SQZANG_FDS' once at low noise to improve the SQZ angle, which helped bring the BNS range up to about 142Mpc, but not quite back to the 150Mpc we were hoping for. Perhaps a SQZ alignment scan would've helped too, but I did not take the time to do that here.
M. Todd, S. Dwyer
We moved the OPO crystal around again, following the work we did last week. We had much better luck this time with the translation stage controller (until the end, where we seemed to lose where we were).
After we took the measurement at -1100, we decided that, given the hysteresis in the controller, the better way to go back to a "position" was to set our OPO temperature to the value that corresponded to that position and move the controller until we re-found the green/IR co-resonance.
We tried this, and then got very lost, but eventually found our way back close to our high NLG and ~31.4 OPO crystal temperature, although it was not at the position we expected it to be. This method may be refined in the future: instead of doing finite steps with the controller to map out the NLG parameter space, we could sweep the OPO temperature and follow it with the controller position via the co-resonance.
Regardless, we are at a much higher NLG than when we started!
| Position [counts] | Max | Thermistor | Green Launch | Unamplified | Dark | NLG | Pthres [uW] | P [uW] | Notes |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 6.70E-02 | 30.28 | 29.4 | 7.11E-03 | -2.70E-05 | 9.39 | 117.51 | 105 | 9:22:03 AM |
| 100 | 4.52E-02 | 30.14 | 22.5 | 7.11E-03 | -2.70E-05 | 6.34 | 124.67 | 105 | 9:45:12 AM |
| -100 | 5.98E-02 | 30.214 | 27.7 | 7.11E-03 | -2.70E-05 | 8.38 | 119.22 | 105 | 9:59:40 AM |
| -300 | 7.83E-02 | 30.42 | 30.7 | 7.11E-03 | -2.70E-05 | 10.97 | 115.53 | 105 | 10:07:48 AM |
| -500 | 1.08E-01 | 30.719 | 30 | 7.11E-03 | -2.70E-05 | 15.14 | 112.43 | 105 | 10:16:46 AM |
| -700 | 1.20E-01 | 31.164 | 27 | 7.11E-03 | -2.70E-05 | 16.82 | 111.64 | 105 | 10:26:05 AM |
| -900 | 2.11E-01 | 31.397 | 31 | 7.11E-03 | -2.70E-05 | 29.57 | 108.68 | 105 | 10:33:52 AM |
| -1100 | 1.62E-01 | 31.348 | 27 | 7.11E-03 | -2.70E-05 | 22.70 | 109.84 | 105 | 10:48:45 AM |
| -2880 | 1.70E-01 | 31.364 | 27 | 7.11E-03 | -2.70E-05 | 23.82 | 109.60 | 105 |
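As a sanity check on the table: the NLG column is consistent with (Max - Dark)/(Unamplified - Dark), and the Pthres column with P x NLG/(NLG - 1). This is my inference from the numbers, not necessarily how the spreadsheet computes them; a minimal check in Python against two of the rows:

```python
# Sanity check of the NLG and Pthres columns against two spot rows from the
# table above. The relations below are inferred from the numbers, not taken
# from the analysis code.
rows = [
    # (position, amplified max, unamplified, dark, NLG from table, Pthres from table, P [uW])
    (0,    6.70e-2, 7.11e-3, -2.70e-5,  9.39, 117.51, 105),
    (-900, 2.11e-1, 7.11e-3, -2.70e-5, 29.57, 108.68, 105),
]

for pos, amp, unamp, dark, nlg_tab, pth_tab, p in rows:
    nlg = (amp - dark) / (unamp - dark)      # dark-subtracted amplified / unamplified
    pthres = p * nlg / (nlg - 1.0)           # implied threshold power in uW
    print(f"pos {pos:5d}: NLG {nlg:5.2f} (table {nlg_tab}), "
          f"Pthres {pthres:6.2f} uW (table {pth_tab})")
```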
I think that what has been happening is that we have slowly drifted away from our phase matching temperature in the OPO as we have moved spots. Conceptually, we'd like to adjust the translation stage for co-resonance with the temperature servo set to keep the crystal at the phase matching temperature. In reality, the actual temperature that we get for a particular temperature servo set point is not the same for different crystal positions because of local heating from green absorption. This last week's experience leaves me more convinced that we can't rely on the translation stage counts for information about where we are in the crystal.
In the end, we did improve the nonlinear gain today, but our squeezing angle servo was not doing well in the previous lock and seemed to be causing large range fluctuations. We dropped out of observing for a few moments to turn off the servo and scan the squeezing angle, but this didn't restore us to a good range. Now we have lost lock for some other reason, and I've set the OPO trans power set point down to 80 uW (it was 105), which according to RF6 gives us a similar NLG to what we had before the crystal move. I've adjusted the OPO temperature for this green power.
For operators: When we relock, we will want to run scan sqz ang again before going to observing, and we might need to run it again later once the IFO has thermalized.
18:52 UTC lockloss while we were in OMC_WHITENING damping violins
Below is the summary of the DQ shift for the week from 2025-06-16 to 2025-06-22
The full DQ shift report with day-by-day details and plots is available here.
Mon Jul 07 10:07:08 2025 INFO: Fill completed in 7min 4secs
Gerardo confirmed a good fill curbside.
TITLE: 07/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 5mph Gusts, 1mph 3min avg
Primary useism: 0.29 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
H1 Unlocked due to earthquake when I arrived.
I think the ground motion has come down a fair amount, and in a few minutes we will be leaving EQ mode.
I've started an Initial Alignment to start getting H1 locked again.
12:39 UTC H1 called for help for the initial alignment timer expiring, I found Yarm to be notifying "Find by hand" so I helped it out and it locked after a few taps of ETMY and ITMY.
13:07 UTC finished IA, back to locking
13:52 UTC lockloss at POWER_25Ws from a 6.3 from New Zealand, holding in DOWN
11:14 UTC lockloss from an ETMX glitch
TITLE: 07/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 138Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Quiet shift where H1 remained locked throughout, although I did drop observing once briefly to run a SQZ angle scan, but it didn't make an improvement (afterwards I remembered the angle servo is running now, so it makes sense that it was already optimized). Current lock stretch is up to 9.5 hours.
Link to DQ Shift Summary here
TITLE: 07/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 138Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
I had to run an initial alignment twice this morning before H1 would get past DRMI, and we eventually reached NLN at 16:41:30 UTC.
@ 14:07 UTC: unknown lockloss.
Relocking needed another initial alignment to get past DRMI; we eventually arrived back at NLN at 19:26 UTC.
@ 21:01 UTC: I noticed the range drop and started to troubleshoot range-related SQZ issues.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:18 | PEM | Robert | LVEA +X | N | Turning on Current Clamp | 16:22 |
| 18:20 | PEM | Robert | LVEA HAM 1 | N | More PEM commissioning | 19:06 |
| 19:11 | PEM | Robert | LVEA | N | Shutting down PEM amps | 19:13 |
I noticed that the Range started to drop, and the guardians had some things to tell me.
I followed the instructions in the guardian messages and requested RESET_SQZ_ASC_FDS[97].
Immediately, ZM4 started to saturate and wouldn't stop, even when I took SQZ_MAN to DOWN.
I eventually took ZM4 to DAMPED for a few minutes while I tried to find a solution.
One of the solutions I tried was to "gracefully clear the SQZ-ASC offsets".
Eventually I figured out that the ZM4 slider needed to be moved in pitch since it had drifted off quite far.
I got SQZ_MAN back to FREQ_DEP_SQZ to see if that ZM4 move made a difference.
It did not, so I then requested SCAN_ALIGNMENT_FDS to see if that would help.
I then took a low-range coherence plot and a comparison plot from before and after the range fell in this lock.
Then I adjusted the OPO temperature and requested SCAN_ALIGNMENT_FDS again.
I was only able to get the range back up to 138 Mpc.
We had been locked for just over 3 hours; the circulating power was ~378.5 kW in each arm, a little under the usual 380 kW.
Broadband:
Start: 2025-07-04 00:50:39
Stop: 2025-07-04 00:55:50
Data: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250704T005039Z.xml
Simulines:
Start: 2025-07-04 00:56:56.978617 UTC // GPS: 1435625834.978617
Stop: 2025-07-04 01:20:13.539672 UTC // GPS: 1435627231.539672
Data:
2025-07-04 01:20:13,381 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,389 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,394 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,398 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,403 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250704T005657Z.hdf5
Using a script called histogram.py, which calls statecounter2.0, I was able to determine how many times ISC_LOCK spent more than 60 seconds in ACQUIRE_DRMI_1F, using minute trends (a sketch of the counting logic is below the numbers).
I broke this up into pre-vent and post-vent:
Pre vent (Jan 1st to April 1st, 2025):
Data points: 601
Max duration: 19 min
Average: 3.56 min
Post vent (June 1st, 2025 to now):
Data points: 170
Max duration: 24 min
Average: 5.18 min
Post-vent breakdown:
June 1st to June 16th:
Data points: 100
Max duration: 24 min
Average: 5.04 min
June 16th to now:
Data points: 70
Max duration: 13 min
Average: 5.39 min
Link to a Google sheet with all the exported data and GPS times.
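For reference, here is a minimal sketch of the kind of counting this relies on, assuming the minute trend of the ISC_LOCK state number has already been fetched into an array. The state number used for ACQUIRE_DRMI_1F is an assumption and should be checked against the guardian state list; this is not the actual histogram.py/statecounter code.

```python
import numpy as np

# Sketch: find contiguous stretches that ISC_LOCK spends in a given state,
# using a minute trend of the state-number channel. The state number below
# is an assumption; check it against the ISC_LOCK guardian state list.
ACQUIRE_DRMI_1F = 101

def state_durations(state_min_trend, target, min_minutes=2):
    """Durations (in minutes) of contiguous runs at `target`, keeping runs of
    at least `min_minutes` (roughly 'more than 60 seconds' in minute trends)."""
    in_state = np.asarray(state_min_trend) == target
    durations, run = [], 0
    for flag in in_state:
        if flag:
            run += 1
        else:
            if run >= min_minutes:
                durations.append(run)
            run = 0
    if run >= min_minutes:
        durations.append(run)
    return durations

# Toy example: a 3 minute DRMI acquisition followed by a 12 minute one.
fake = [101] * 3 + [102] * 5 + [2] * 4 + [101] * 12 + [102] * 2
durs = state_durations(fake, ACQUIRE_DRMI_1F)
print(len(durs), "segments, max", max(durs), "min, average", sum(durs) / len(durs), "min")
```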
I copied Tony's awesome spreadsheet, and replotted the data sets while thinking about what they mean.
I have the same 4 data sets that Tony has (Jan-April, all of June, and then June broken into early June and late June, with the divider being the time that I enabled the 'slow let go' of the BS pitch control). However, I've fixed all the x-axes to be 0-25 minutes. I've also set the y-axes to be (0, number of lock segments), so that they are roughly normalized. In the subtitle of each plot I note the percentage of the segments that are 10 mins or longer (actually, from the data set, the percent that have a value of 9 mins or greater). Since we have a 10 minute timer in the guardian that will flip over to trying PRMI or MICH locking, this percentage should help capture the number of locks that take a long time to acquire DRMI.
Notably, the fraction of segments that take a long time is about 2x larger after the BS slow let go was enabled, if we look at the percent in late June (48% take a long time) versus the percent in early June (28% take a long time) :( But both of these are much higher than the 18% that took a long time before the vent.
This may mean that the slow letting go of the BS, as currently enabled, is not helpful.
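For completeness, the "takes a long time" fraction quoted above is just the fraction of DRMI acquisition segments with durations of 9 minutes or more; a tiny sketch (the duration lists here are placeholders, not the real data sets):

```python
# Fraction of DRMI acquisition segments that take a long time (>= 9 minutes in
# the minute-trend data, i.e. the 10 minute guardian timer is about to fire).
def long_fraction(durations_min, threshold=9):
    return sum(d >= threshold for d in durations_min) / len(durations_min)

# Placeholder duration lists in minutes; the real ones come from statecounter.
early_june = [3, 5, 4, 12, 7, 2, 10]
late_june = [11, 4, 15, 9, 3, 13]
print(f"early June: {100 * long_fraction(early_june):.0f}% long, "
      f"late June: {100 * long_fraction(late_june):.0f}% long")
```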
If the statecounter.py code is able to, it could be interesting to get similar statistics, but have the durations start when we leave state 18 (Arms_off_resonance) and the duration end when we get to state 102 (DRMI_locked_check_ASC). That would enable us to more accurately see the total length of time it takes during an acquisition sequence. If we do this, we'd want to count and then exclude from the statistics the number of times we 'give up' and lose lock or do an initial alignment.
Updated Statecounter to find the times that a channel's value is between 2 user-selected states, and refined a script, which calls Statecounter, that makes histograms specifically for the DRMI histogram investigations (a sketch of the between-states timing is below the GPS times).
tconvert jan 1 2025 = 1419724818
tconvert apr 1 2025 = 1427500818
tconvert jun 1 2025 = 1432771218
tconvert jun 16 2025 = 1434067218
tconvert now = 1435883245
The GPS times are rounded off by Jim's rounding function so they fit into minute-trend windows.
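Here is a minimal sketch of the between-states timing, again working from a minute trend of the ISC_LOCK state number. The state numbers 18 and 102 are taken from Sheila's suggestion above, and attempts that never reach the end state simply don't contribute a duration; this is not the actual Statecounter code.

```python
import numpy as np

# Sketch: time (in minutes) from leaving one ISC_LOCK state to first reaching
# another, e.g. from leaving ARMS_OFF_RESONANCE (18) to DRMI_LOCKED_CHECK_ASC (102).
def between_states(state_min_trend, start_state=18, end_state=102):
    s = np.asarray(state_min_trend)
    durations = []
    t_leave = None
    for i in range(1, len(s)):
        if s[i - 1] == start_state and s[i] != start_state:
            t_leave = i                    # just left the start state
        if t_leave is not None and s[i] == end_state:
            durations.append(i - t_leave)  # reached the end state
            t_leave = None
    return durations

# Toy example: two acquisition sequences, 8 and 15 minutes long.
fake = [18] * 5 + [101] * 8 + [102] * 3 + [18] * 4 + [101] * 15 + [102] * 2
print(between_states(fake))  # -> [8, 15]
```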
I made a minor mistake here: I only commented out the lines in ISC_DRMI where the offset is SET, but I didn't comment out the lines where the offset is turned ON. However, ISC_LOCK sets a random offset in SRC1 as well, and doesn't turn it on, so we were in a situation where ISC_LOCK set a weird offset and then ISC_DRMI turned it ON. This meant that today the DRMI ASC came on in a very strange way and pulled the alignment far off. I have now commented out all lines in both ISC_DRMI and ISC_LOCK that set these offsets or turn them on. Hopefully this won't be an issue again.
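To make the failure mode concrete, here is a schematic of the two-node interaction (not the actual guardian code; the channel name and offset value are placeholders): one node loads an offset value it never enables, and the other blindly switches the offset on, picking up whatever stale value was left there.

```python
# Schematic only: channel name and value are placeholders, and `ezca` is the
# channel-access object that Guardian provides to its nodes.

def isc_lock_side(ezca):
    # What ISC_LOCK was doing (now commented out): load an SRC1 offset value
    # without ever turning it on itself.
    ezca['ASC-SRC1_P_OFFSET'] = 0.3          # placeholder channel and value

def isc_drmi_prep_drmi_asc(ezca):
    # What ISC_DRMI's PREP_DRMI_ASC was doing (now commented out): switch the
    # offset ON, inheriting whatever stale value was loaded above.
    ezca.switch('ASC-SRC1_P', 'OFFSET', 'ON')
```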