TITLE: 10/31 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147 Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Locked at 147 Mpc, and we have been Locked for 4.5 hours. Relocking today took a while due to a lockloss right after CLOSE_BEAM_DIVERTERS/beginning of OMC_WHITENING (87873). Coming back up went smoothly - I just went state-by-state from LASER_NOISE_SUPPRESSION onwards without issue.
2.5 hours in is where it got interesting (87876). I noticed the DCPDs diverging. Since this happened yesterday (87842) and had been due to violin mode ETMX mode 6, I first turned the damping off for that mode, since trying to damp the mode more yesterday had made it ring up more. This clearly didn't help, since the DCPDs started diverging at a faster rate, so I slowly upped the gain from its nominal of 5 up to 13. It seemed to work for a while - the DCPDs levelled out - but then the ETMX damp mode output started increasing again, this time parabolically, and the DCPD MIN and MAX started wiggling. At this point I turned the gain back down to 9, but then the DCPDs started diverging again, so I turned it back up to 13. I also tried no gain as well as nominal gain again, but the DCPDs immediately started diverging. Here's a look at the relationship I was seeing between the damp gain, the damp output, and the DCPDs. I called Rahul. Eventually he decided to try turning off the drive for ETMX mode 1, since it is at a similar frequency, and that immediately fixed the problem. Currently he is looking into better settings, but otherwise he will leave damping off for both of them. So what happened yesterday was probably that the guardian turned off ETMX mode 1, and that solved the problem.
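For context, the gain changes above were made by hand. Below is a minimal sketch of the kind of slow gain stepping involved, assuming pyepics; the channel name, step size, and wait time are illustrative guesses, not the exact values used.

```python
# Minimal sketch (not the actual procedure used): slowly step a violin-mode
# damping gain toward a target while the operator watches the DCPD trends.
# The channel name, step size, and wait time below are assumptions.
import time
from epics import caget, caput  # pyepics

GAIN_CHAN = 'H1:SUS-ETMX_L2_DAMP_MODE6_GAIN'  # assumed channel name
TARGET_GAIN = 13.0
STEP = 1.0
WAIT_S = 60.0  # give the mode time to respond before the next step

gain = caget(GAIN_CHAN)
while gain < TARGET_GAIN:
    gain = min(gain + STEP, TARGET_GAIN)
    caput(GAIN_CHAN, gain)
    print(f'set {GAIN_CHAN} = {gain}')
    time.sleep(WAIT_S)  # watch the DCPDs / damp mode output here
```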
LOG:
23:30UTC Relocking and in ADS_TO_CAMERAS
- Had to turn on Hi ASC gain due to 1 Hz ringup
- Lockloss after CLOSE_BEAM_DIVERTERS/start of OMC_WHITENING
- Got up to LASER_NOISE_SUPPRESSION, waited
- ADS_TO_CAMERAS, waited, turned on Hi ASC gain due to 1 Hz ringup, waited, went back to regular ASC gains
- Stepped through each state individually and waited a few moments, nothing happened
00:38 NOMINAL_LOW_NOISE
- 00:38 Observing
- ~2.5 hours into the lock, DCPDs start diverging
- 04:49 Out of Observing to try and find better damping settings for ETMX mode 6
- 04:58 Back into Observing
- 04:59 Out of Observing to try and find better damping settings for ETMX modes 6 and 1
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 23:42 | PCAL | Tony | PCAL Lab | y(local) | Putting covers on the spheres | 23:47 |
Observing at 150 Mpc and have been Locked for almost 3 hours. About 2.5 hours into the lock I noticed that the DCPDs were diverging. Since this happened yesterday (87842) and had been due to violin mode ETMX mode 6, I first turned the damping off for that mode, since yesterday trying to damp the mode more made it ring up more (but also brought the DCPDs back together??). This clearly didn't help, since the DCPDs started diverging at a faster rate, so I've slowly been upping the gain from its nominal of 5 to now 11, which seems to be damping it. I'll probably bump the gain up a bit more to get the DCPDs to start converging.
As expected, the time from finishing the MAX_POWER state to when the DCPDs visibly start diverging is about the same for yesterday's lock vs today's, roughly 2hr 37mins (yesterday's lock, this lock).
Jeff, Oli
Back in 85674 we updated the model parameter set for the BBSS, bbssopt.m, to better line up with what we were seeing in our measurements at LHO X1 and LLO X2. We ended up adjusting the value of (physical) d4 down by 1 mm, so the original FDR physical d4 value of 2.6775 mm was changed in the model to a value of 1.6775 mm.
Since then, we have confirmed that LHO's physical d4 value on the dummy mass is currently measured to be 2.73 mm above the dummy centerline (87767), and that LLO's physical d4 value on the dummy mass is currently measured to be 2.5715 mm above the centerline. In 87781 I showed that a +/- 3 mm error in the primary prism's location compared to the original FDR-designated physical d4 location would only shift the frequency of that peak by +/- 0.1 Hz, so we don't need to worry about anything with the placement of the primary prisms.
As Jeff said in an email:
"The plots [in 87781] (and specifically pages 1&7 for longitudinal and 5 & 11) show:(1) where the value of d4 ends up — even +/- 3.0 [mm] from the FDR +4.0 [mm] effective d4 or +2.67 [mm] physical d4 only moves the first pitch mode up or down by 0.1 Hz. We’re talking about only a +/-1 [mm] range around the FDR value.- Unimpactful for seismic isolation- Unimpactful for ISC control, and- Easily accounted for in the top mass damping loop design.(2) The data that drove us to *think* that (i.e. fit) the physical d4 of 1.67 [mm] on the dummy masses was “in between” a physical d of 1.67 [mm] and 2.67 [mm]. This was *the only* reason 1.67 [mm] was ever brought up instead of 2.67 [mm].- There was tons of confusion early on about d4, given:: the confusion about physical vs. effective ds (hopefully now cleared up) and:: the error in dummy mass being installed / built upside down, and:: the site-to-site confusion about what to measure,but all of that is resolved now, and we’re confident that both dummy masses were built “correctly” and the install teams measure, consistently, on the dummy metal build, a physical d4 of 2.67 [mm].In conclusion — the jig is designed to land the BBSS M3 stage prism on the real optic at a physical d4 of +2.67 [mm], or effective d4 of +4.0 [mm]. That is a totally fine value for d4, and is we’ll call it “ the correct” value, as that’s what the value was in the FDR."
Thus, we are going to revert the physical d4 value in our bbssopt.m model back to what it was originally, d4 = 2.6775 mm, since that's closer to the actual physical measurements that we are seeing, has a very small effect on the resonances, and was the original intended physical d4 value.
The bbssopt.m file has been updated in /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/TripleModel_Production/bbssopt.m, r12764. It has also been updated in T2000599 (v5).
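As a quick sanity check on the sensitivity argument above, here is a back-of-the-envelope estimate (assuming the first pitch mode frequency shifts roughly linearly with physical d4 at the ~0.1 Hz per 3 mm rate from 87781) of how much the site-measured d4 values would move that mode relative to the restored model value:

```python
# Back-of-the-envelope check: assume the first pitch mode shifts roughly
# linearly with physical d4, at ~0.1 Hz per 3 mm (from alog 87781).
SENSITIVITY_HZ_PER_MM = 0.1 / 3.0   # ~0.033 Hz/mm

d4_model = 2.6775                                              # mm, value restored in bbssopt.m
measured = {'LHO dummy mass': 2.73, 'LLO dummy mass': 2.5715}  # mm, from the measurements above

for name, d4 in measured.items():
    df = SENSITIVITY_HZ_PER_MM * (d4 - d4_model)
    print(f'{name}: d4 offset {d4 - d4_model:+.4f} mm -> ~{df:+.4f} Hz shift')
# Both offsets are ~0.1 mm or less, so the expected shifts are a few mHz --
# negligible compared to the +/- 0.1 Hz quoted for a full +/- 3 mm error.
```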
TITLE: 10/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY:
We were just about to get back to NLN (I'd had to turn the Hi gains on), and we were doing well, but then we lost lock right after CLOSE_BEAM_DIVERTERS, suspiciously like what was happening on Tuesday (87826).
00:38 UTC Back to Observing. I stepped one-by-one through the states above LASER_NOISE_SUPPRESSION, pausing in between each state and turning the Hi ASC gains on/off as needed, and we were fine and didn't lose lock.
TITLE: 10/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: 3 lock losses during my shift today; the first two looked vaguely similar, but we haven't come to any conclusion as to what the cause was. The last one was due to an ADC card failure at SEI EY. During troubleshooting of this the laser interlock was tripped and we had to recover all of the site lasers after the ADC card was fixed. We ran an initial alignment and we are now back to NLN. For both of the relocks in the morning the 1 Hz ringup didn't show up, but we just saw it come up for this lock. Oli pressed the new high ASC button.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:43 | FAC | Randy | Xarm | n | BTE sealing, starting at EX | 18:43 |
| 15:32 | PEM | Robert | LVEA | n | Measurement setup | 17:32 |
| 16:01 | IO | Rahul | Opt Lab | LOCAL | JAC optics | 17:56 |
| 20:25 | CDS | Dave, Fil | EY | n | h1iopseiey troubleshoot | 21:42 |
| 21:07 | - | Jeff S | Arms | n | Running arms | 22:09 |
| 21:11 | FAC | Tyler & APEX contractor | Mid Y | N | Contractors doing work in Mid Y | 23:11 |
| 21:22 | Safe T | Danny | Mid X | N | Inspecting Harnesses | 23:22 |
| 21:24 | - | Oli, Tony | LVEA | n | Turning lasers back on | 21:44 |
| 21:25 | - | Fil | EX | n | Turning on ALS laser | 21:45 |
| 22:01 | IO | Keita | Opt Lab | LOCAL | JAC optics | 22:38 |
M. Todd, E. Capote, J. Wright, S. Dwyer
We are attempting to understand discrepancies between model estimates and measurements of various parameters in the corner cavities. In particular, we have measurements of the PRC and SRC Gouy phase [alog 66215, alog 66211], along with input mode (IMC) overlap with the OMC mode from OMC scans [alog 87342], and beam profile measurements at the REFL and POP ports in single bounce, with ITMY misaligned [alog 84307].
Using the LHO O4 yaml file in the modeling repo, we can model what we expect these values to be given our understanding of the optic geometries and the lengths between optics. We've already identified a few issues with the model, as several estimates disagree with the measurements. The first example of this was the SR2 position relative to the SRM and SR3: it was discovered that the model had not accounted for a 5 mm shift of the SR2 towards the BS, adding roughly 5 mm to each of the corresponding lengths in the model.
This does not explain all of the discrepancies with measurements, however, so we are exploring how uncertainty in optic geometries can affect the model. Below is a table summarizing this effort so far.
The PR3/SR3 uncertainty in the RoC is +/- 6 mm. The PR3 LIGO-reported RoC is 36.021 m, while the vendor reports 36.006 m. The SR3 LIGO-reported RoC is 36.013 m, while the vendor reports 36.00 m. At each bound of these uncertainties, we analyze the beam overlap between the input mode propagated to the POP port and the q-parameter fit from the measurements. We also analyze the mode overlap between the input mode and the OMC mode, with ITMX misaligned (this was measured to be 91.7%).
| PR3 RoC [m] | SR3 RoC [m] | POP Measurements Overlap [%] | Input Mode to OMC Overlap with ITMX misaligned [%] | PRC Gouy (x, y) [deg] | SRC Gouy (x, y) [deg] |
|---|---|---|---|---|---|
| 36.015 | 36.013 | 70.86 | 87.03 | 23.80, 24.17 | 16.97, 17.86 |
| 36.021 (LIGO nominal) | 36.013 | 65.92 | 85.05 | 23.16, 23.50 | 16.97, 17.86 |
| 36.027 | 36.013 | 61.27 | 82.69 | 22.60, 22.90 | 16.97, 17.86 |
| 36.015 | 36.007 | 70.86 | 89.47 | 23.80, 24.17 | 19.36, 20.16 |
| 36.021 | 36.007 | 65.92 | 87.31 | 23.16, 23.50 | 19.36, 20.16 |
| 36.006 (vendor report) | 36.013 | 78.23 | 92.44 | 24.89, 25.34 | 19.36, 20.16 |
| (from measurements) |  | 100 | 91.7 | 20.5 +/- 0.2 | 19.5 +/- 0.4 |
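For reference, the overlap columns above are fundamental-mode overlaps computed from complex beam parameters. Below is a minimal sketch of that calculation using the standard circular-Gaussian power-coupling formula; the example q values are made up for illustration and are not our measured or modeled ones.

```python
# Minimal sketch: power overlap between two fundamental (circular) Gaussian
# modes given their complex beam parameters q1, q2 at the same plane, using
# the standard formula eta = 4*Im(1/q1)*Im(1/q2) / |1/q1 - conj(1/q2)|**2.
# The example q values below are made up for illustration, not our data.

def mode_overlap(q1: complex, q2: complex) -> float:
    a1, a2 = 1.0 / q1, 1.0 / q2
    return 4.0 * a1.imag * a2.imag / abs(a1 - a2.conjugate())**2

# Example: two beams with slightly different waist positions / Rayleigh ranges.
q_model = complex(-1.2, 3.0)      # q = (z - z_waist) + i*zR, values illustrative
q_measured = complex(-1.0, 2.7)
print(f'overlap = {100 * mode_overlap(q_model, q_measured):.2f} %')
```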
After remotely rebooting h1seiey and seeing that the 4th ADC is now completely absent from the PCI bus, the next test was a power cycle of the IO Chassis.
Procedure was:
Stop h1seiey models, fence from Dolphin and power down the computer.
Dave (@EY):
Power down the IO Chassis (front switch, then rear switch).
Power down the 16-bit DAC AI Chassis (to prevent overvoltage when the IO Chassis is powered up).
Power up the IO Chassis (rear switch, then front switch).
The chassis did not power up. Tracking it back, the +24V DC power strip was unpowered, and the laser interlock chassis, which is also plugged into this strip, was powered down. This tripped all the lasers.
Fil came out to EY with a spare ADC
Dave & Fil (@EY):
We opened the IO Chassis and removed the 4th ADC. With this slot empty, Fil powered up the DC power supply with the IO Chassis on; it did not trip.
We powered down the IO Chassis to install a new ADC. We are skipping the slot the old ADC was in, because it could be a bad slot.
The second DAC was moved from A2-slot4 to A3-slot1, the new ADC was installed in A2-slot4, leaving the suspect A2-slot3 empty.
We powered the IO Chassis on with no problems, then we powered up the h1seiey computer. The models started with no issues, I was able to reset the SWWD.
The chassis was buttoned up, pushed into the rack, and the AI Chassis were powered back up.
Marc is fixing a cracked cap on the old ADC so we can test it offline.
ADCs:
| old ADC (Removed) | 110204-18 |
| new ADC (Installed) | 210128-28 |
Tripped power supply location:
Updated as-built drawing for h1seiey IO Chassis
Here is the +24VDC power supply after we got everything going again. It is drawing about 3.5A
Testing of ADC 110204-18
After Marc replaced the cracked capacitor he had discovered, this ADC (pulled from h1seiey) was tested on the DTS on Thursday, 30 Oct 2025.
x7eetest1 IO Chassis was used. The ADC was installed into A3 by itself, no interface card or ribbon was attached. The chassis powered up with no problems. The ADC was not visible on the PCI bus (lspci and showcards).
Looks like this card is broken and not usable.
The PSL is now back online after the interlock trip. During recovery I did a quick beam alignment into the PMC, as PMC Refl came back at ~26.5W, 1.2W higher than before the interlock trip. After the alignment I was able to get PMC Refl down to ~25.4W. I then changed the pump diode current for power supply 3 (Amp2, pump diodes 1 and 2), to 9.4A from 9.3A. This made no change to Amp2 Out but brought PMC Trans back to ~106.2W and PMC Refl back to ~25.2W. The FSS and ISS engaged without issue. I had to change the ISS RefSignal to -2.02V from -1.99V to get the % diffracted power back to ~4.0; with the ISS engaged PMC Trans is now ~106.0W. The % diffracted power will likely slowly change as things reach thermal equilibrium, as it usually does, so I'll monitor this over the weekend.
The watchdogs are back on and the PSL is good to go.
Thu Oct 30 10:07:57 2025 INFO: Fill completed in 7min 53secs
Gerardo confirmed a good fill curbside.
front end server h1seiey went down at 13:05 with an ADC timeout on the last ADC. Restart of the server did not bring the ADC back. Dave is headed out with a replacement.
This could be the cause of the last lockloss.
3rd lock loss of the day. This one was not like the others though, it was very fast.
Lock loss caused by h1iopseiey ADC card failure - alog 87862
A little late to observing since we thought that the estimator was injecting noise, but it's possible that it was some of Robert's equipment. Still not entirely sure but the scattering that we saw in DARM looks to be gone for now.
J. Kissel
Comparison of local performance metrics for H1 SUS PR3 after turning on the LP estimators this morning:
- 2025-10-30 13:00 UTC -- only Y estimator ON
- 2025-10-30 17:00 UTC -- all LPY estimators ON
Both times are when the IFO is in nominal low noise with no commissioning happening.
In short -- L is far more limited by residual suspension point motion than P or Y, so the reduction in off-resonance drive is not *as* awesome as in P or Y. But there is reduction in off-resonance drive, and damping on resonance is about the same. Nice! More plots to come...
Round 2 of testing the guardianization of turning on and off the high ASC gains (Round 1 - alog 87462). SEI_ENV will now automatically move us into the high gain ASC state when (a) we are in the earthquake state and (b) there is an incoming or ongoing earthquake that is at or below the dotted line on the "rasta plot". The transition takes 11 seconds to complete, and it will transition back when the ground motion is low enough to bring us out of the earthquake state.
I started testing with a few 10 and 5 second waits between steps, just as is done in the script that we currently use. Once those ran successfully a few times, I started to decrease the wait times between steps. Eventually, I had success transitioning all the ASC at the same time, then the FF 10 seconds after. Since this was the same configuration that I had the last time I tried this, I tried to reproduce the lock loss by requesting the High ASC state, then immediately requesting the Low ASC state. This did, again, cause a lock loss. To avoid this I have a wait timer in the High state so it won't switch quickly from one to the other (see the sketch below).
Transitioning back out of the high ASC state has the same thresholds as the earthquake state currently. We didn't want to transition back and then have to do it all over again, or wait in earthquake for another 10 minutes for it to calm down. We might make this a bit shorter or smarter after we've seen it work a few times.
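For reference, here is a minimal guardian-style sketch of the wait-timer guard described above; the state name, timer lengths, and helper functions are illustrative assumptions, not the actual SEI_ENV code.

```python
# Illustrative sketch only: a high-ASC-gain guardian state that engages the
# ASC gains at once, brings the FF back 10 s later, and uses a minimum-dwell
# timer so an immediate High -> Low request can't retrigger the lock loss
# seen in testing. State/helper names and the dwell time are assumptions.
from guardian import GuardState


def engage_high_asc_gains():
    """Placeholder for switching the ASC loops to their high-gain settings."""


def engage_high_asc_ff():
    """Placeholder for re-engaging the ASC feedforward after the gain switch."""


class HIGH_ASC_GAINS(GuardState):
    def main(self):
        engage_high_asc_gains()
        self.timer['ff_delay'] = 10      # engage FF 10 s after the gains
        self.timer['min_dwell'] = 60     # assumed minimum time in this state
        self.ff_engaged = False

    def run(self):
        if not self.ff_engaged and self.timer['ff_delay']:
            engage_high_asc_ff()
            self.ff_engaged = True
        # Only report complete (allowing a transition back to the low-gain
        # state) once the minimum dwell time has elapsed.
        return self.ff_engaged and self.timer['min_dwell']
```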
| Time (hhmmss UTC) | Transition to | Notes |
|---|---|---|
| 150251 | High | 10/5s timers |
| 150457 | Low | 10/5s timers |
| 150616 | High | Repeat of above |
| 150724 | Low | Repeat of above |
| 150754 | High | 1/5s timers |
| 150930 | Low | 1/5s timers |
| 151113 | High | All ASC engaged at once |
| 151218 | Low | All ASC engaged at once |
| 151326 | High | All ASC engaged at once |
| 151340 | Low | Lock loss |
I forgot that this would eventually trigger IFO_NOTIFY if the high gain state were to keep us out of Observing for longer than 10 minutes while IFO_NOTIFY was running. I've changed IFO_NOTIFY to not notify when the SEI_ENV node is in the high ASC or transition states.
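A minimal sketch of the kind of check this adds, assuming pyepics and the usual guardian state-string channel; the channel and state names here are assumptions, not the exact ones in IFO_NOTIFY.

```python
# Illustrative sketch only: suppress the "out of Observing too long"
# notification while SEI_ENV is intentionally holding the high-ASC-gain
# configuration. Channel and state names are assumptions.
from epics import caget

SUPPRESS_STATES = {'HIGH_GAIN_ASC', 'TRANSITION_UP', 'TRANSITION_DOWN'}

def should_notify() -> bool:
    sei_env_state = caget('H1:GRD-SEI_ENV_STATE_S', as_string=True)
    return sei_env_state not in SUPPRESS_STATES
```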
M. Todd
Today I ran another OMC scan (following last week's instructions, alogs 87316 and 87342) after the morning lockloss to see if I could get a better measurement of the hot-state overlaps.
I was still limited to about 12 minutes after the lockloss, so we expect some amount of difference in the full hot state.
I've plotted the results on top of last week's scans, in yellow. Analysis of this will follow in a comment.