Thu Aug 21 10:08:44 2025 INFO: Fill completed in 8min 41secs
J. Kissel, O. Patane

2025-08-21 15:34 UTC Ramped over from using OSEM-only damping to using the GS13 Y Estimator and "light" OSEM damping.
2025-08-21 16:05 UTC Turned OFF yaw estimator (back to OSEM-only damping for Y), turned ON P estimator.
2025-08-21 16:36 UTC Turned ON yaw estimator, P estimator remains ON. Woo!
2025-08-21 16:50 UTC Both estimators turned OFF (and immediately, calibration measurements start)

- IFO reached nominal low noise at 14:10 UTC (so the IFO is ~1.25 hours into thermalization, which we typically quote as "done" at 3.0 hours [not a 1/e time constant])
- EQ band is low at 0.02 [um/s]_RMS (most recent EQ was 4.5 [hr] ago; 5.8 [mag] just north of Japan)
- uSeism band is medium (for LHO summer) at 0.1 [um/s]_RMS
- Wind is less than 5-10 [mph]

- 2025-07-01 These are in use after the M1 OSEMs have had their satamps upgraded (LHO:85463)
- 2025-07-14 The upgraded satamp response was perfectly compensated with measured z:p's from testing in the EE lab (LHO:85746)
- 2025-07-29 These are in use after the M1 OSEMs have had their absolute calibration improved (LHO:86070)

After the above, the HAM5 / SR3 estimator design was informed by the following measurements:
- 2025-07-29 (In the presence of an SR3 SUSPOINT basis ISI EXC) (ISI GS13s projected to SR3 SUSPOINT Y) to (SR3 M1 DAMP Y), LHO:86075
- 2025-08-05 M1 to M1, LHO:86202
- 2025-08-05 (In the presence of an SR3 SUSPOINT basis ISI EXC) (ISI GS13s projected to SR3 SUSPOINT P) to (SR3 M1 DAMP P), LHO:86203
- 2025-08-05 M1 to M1, LHO:86203

Design of estimator and blend filter aLOGs are quoted below by each filter.
2025-08-19 Oli installed / configured these estimators this past Tuesday (LHO:86455)

::: PIT FILTERS :::
Design aLOG: LHO:86430

M1_EST_P_FUSION_MODL_SUSP_P_2GAP
"ISI_fit" FM1
EPICS Gain: +1.0
sos(-0.001012906275385808, [-2.0001366554573741; 1.000136668010563; -1.999948594076947; 0.99994860612347214; -1.999955359352324; 0.99995565554604537; -1.999976011494571; 0.99997638571005953; -2.000046972353724; 1.0000477507088701; -1.9999816676161819; 0.99998231095588275; -1.999807905043806; 0.9998079477319114; -1.9999858561539881; 0.9999859195985662; -1.999989374293667; 0.99998949933234416; -1.9999885936664199; 0.99998867482680542; -1.9999967240112111; 0.99999852427171887; -1.9999908757192211; 0.99999261046439736])
"to_um/nm" FM2
zpk([],[],1.000000000000000,"n") %% Yes, this is a gain of 1.0 and thus is not doing anything. This is vestigial. Calibration has been absorbed into the "ISI_fit."

M1_EST_P_FUSION_MODL_DRV_P_2GAP
"M1_fit" FM1
EPICS Gain: +1.0
sos(6.9937614616314464e-08, [1.9999999999998279; 0.99999999999995282; -1.9999796747664531; 0.99998031671408905; -1.9999989658263551; 0.99999907355206619; -1.999984525961469; 0.99998458794634182; -1.999972378962787; 0.99997245232369414; -1.999986427026998; 0.99998650968797875; -1.9999982894453341; 0.99999986582789491; -1.9999908361882079; 0.99999257469114344])
"to_um/cts" FM2
zpk([],[],1.000000000000000,"n") %% Yes, this is a gain of 1.0 and thus is not doing anything. This is vestigial. Calibration has been absorbed into the "M1_fit."
Design aLOG: LHO:86452

M1_EST_P_FUSION_MEAS_BP
"pit_v1" FM1
EPICS Gain: +1.0
sos(0.1000427204344847, [-0.99977476062437198; 0; -0.99998082542398548; 0; -1.999507407187904; 0.99950802568624952; -1.9999459253790719; 0.99994656777201008; -1.9997448445054919; 0.99974638260765558; -1.9999542865584909; 0.99995602687005281])

M1_EST_P_FUSION_MODL_BP
"pit_v1" FM1
EPICS Gain: +1.0
sos(0.8999572795655153, [-1; 0; -0.99998082542398548; 0; -1.999987516253469; 0.99998815869468505; -1.9999459253790719; 0.99994656777201008; -1.9999884578241209; 0.99999019525631272; -1.9999542865584909; 0.99995602687005281])

Copies of M1_DAMP_P: (EPICS Gain: -0.1)
P_DAMP_OSEM EPICS Gain: -0.4
P_DAMP_FUSION EPICS Gain: -0.4

The filters ON in each of these damping loop controller filter banks:
- rolloff_P: soft roll-off of sensor noise, zpk([0;2;5],[20;20;30;30],0.005,"n")
- boost_P: extra gain at first P resonance, zpk([0.415914+i*0.587721;0.415914-i*0.587721;0.67;0.77],[0.246255+i*0.676579;0.246255-i*0.676579;0.131518+i*0.707886;0.131518-i*0.707886],1,"n")
- norm_P: nominal unit conversion from um to ADC ct, zpk([],[],43.478,"n")
- x0.628: gain adjustment from absolute calibration, gain(0.628)
- bounceRoll: highest V and R mode notch tailored for SR3, ellip("BandStop",4,1,60,27.5,28.0)ellip("BandStop",4,1,60,45.0,45.5)gain(1.25893)
- ellip_P: aggressive roll-off of sensor noise, zpk([0.64975+i*9.7463;0.64975-i*9.7463],[1.22183+i*5.87428;1.22183-i*5.87428;2.7978],1,"n")

(total OSEM gain or OSEM+Estimator gain of -0.5 for P, the prior-to-estimator "nominal" gain we'd been running since Aug 15 2023, LHO:72249)

::: YAW FILTERS :::
Design aLOG: LHO:86233

M1_EST_Y_FUSION_MODL_SUSP_Y_2GAP
"ISI_fit" FM1
EPICS Gain: +1.0
sos(-0.00097333255316384561, [-2; 0.99999999999999989; -1.999987505826812; 0.99998828408658169; -1.9999917387445989; 0.99999222786356268; -1.999991053658418; 0.99999120602937952; -1.9999951079581699; 0.99999667180704854; -1.9999907068801011; 0.99999239185345823])
"to_um/nm" FM2
zpk([],[],1.000000000000000,"n") %% Yes, this is a gain of 1.0 and thus is not doing anything. This is vestigial. Calibration has been absorbed into the "ISI_fit."

M1_EST_Y_FUSION_MODL_DRV_Y_2GAP
"M1_fit" FM1
EPICS Gain: +1.0
sos(1.1347068995497109e-08, [2.000000000001652; 0.99999999999677247; -1.9999881576357379; 0.99998893434766245; -1.999998372312229; 0.99999974909962519; -1.99998928164089; 0.99999097015263916; -1.999998867104356; 0.99999912434806359; -1.99999256126361; 0.99999271377731203])
"to_um/cts" FM2
zpk([],[],1.000000000000000,"n") %% Yes, this is a gain of 1.0 and thus is not doing anything. This is vestigial. Calibration has been absorbed into the "M1_fit."
Design aLOG: LHO:86265

M1_EST_Y_FUSION_MEAS_BP
"SKY_notch" FM1
EPICS Gain: +1.0
sos(0.000183409288171444, [1; 0; -0.99996165121563274; 0; -1.999942013670952; 0.99994237961280041; -1.999812801308257; 0.99981448778963355; -1.9999004183717279; 0.99990166730434937; -1.999852310365053; 0.99985308742131895; -1.9999736341441301; 0.99997367996470554; -1.9999216555380299; 0.99992180839221256])

M1_EST_Y_FUSION_MODL_BP
"SKY_notch" FM1
EPICS Gain: +1.0
sos(0.9998165907118286, [-1.0000000000000011; 0; -0.99996165121563274; 0; -1.999961208202631; 0.99996289480915956; -1.999812801308257; 0.99981448778963355; -1.9999698386597631; 0.99997061576169488; -1.9998523103650541; 0.9998530874213194; -1.9999842083331081; 0.99998436119207135; -1.9999216555380299; 0.99992180839221256])

Copies of M1_DAMP_Y: (EPICS Gain: -0.1)
Y_DAMP_OSEM EPICS Gain: -0.4
Y_DAMP_FUSION EPICS Gain: -0.4

- rolloff_Y: soft roll-off of sensor noise, zpk([0;1],[20;30;30],0.01,"n")
- boost_Y: extra gain at first Y resonance, zpk([0.9;1.1],[0.173648+i*0.984808;0.173648-i*0.984808],1,"n")
- norm_Y: nominal unit conversion from um to ADC ct, zpk([],[],43.478,"n")
- x0.757: gain adjustment from absolute calibration, gain(0.757)
- bounceRoll: highest V and R mode notch tailored for SR3, ellip("BandStop",4,1,60,27.5,28.0)ellip("BandStop",4,1,60,45.0,45.5)gain(1.25893)
- ellip_Y: aggressive roll-off of sensor noise, zpk([0.97195+i*9.7195;0.97195-i*9.7195],[1.0264+i*4.9347;1.0264-i*4.9347;2.7978],1,"n")

(total OSEM gain or OSEM+Estimator gain of -0.5 for Y, the prior-to-estimator "nominal" gain we'd been running since Aug 15 2023, LHO:72249)
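As I read the design, each MEAS_BP / MODL_BP pair is a complementary blend: the "measured" path and the "model" path band-passes are built so their sum is unity (note the paired gains above, e.g. 0.10004 + 0.89996 = 1 for pitch), so the fused estimator preserves the calibrated plant response. Below is a minimal, hypothetical scipy sketch of that sum-to-one check with a toy 2nd-order pair. It is not the SR3 design above, just an illustration of the property any blend pair should satisfy.

import numpy as np
import scipy.signal as sig

f = np.logspace(-2, 2, 1000)                  # Hz
w = 2 * np.pi * f                             # rad/s

# Toy "model-path" blend: 2nd-order analog low-pass, trusted below ~1 Hz
b_lp, a_lp = sig.butter(2, 2 * np.pi * 1.0, btype='low', analog=True)
# Complementary "measured-path" blend: H(s) = 1 - L(s), same denominator
b_hp = np.polysub(a_lp, b_lp)

_, L = sig.freqs(b_lp, a_lp, worN=w)
_, H = sig.freqs(b_hp, a_lp, worN=w)

# A proper complementary blend pair sums to unity at every frequency
print("max |L + H - 1| =", np.max(np.abs(L + H - 1.0)))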
Also, the H1SUSSR3.txt filter file that includes all of this goodness is /opt/rtcds/lho/h1/chans/filter_archive/h1sussr3/H1SUSSR3_1439664560.txt which was auto-committed without message to the userapps repo location /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR3.txt rev 32818 on 2025-08-20.
Following Tuesday night's false positive alarm for HAM1 PT100_MOD2 due to a sensor glitch (alog 86472), this morning I increased the delta-trip values for this gauge from 1.0e-07 to 3.0e-07 Torr for all three lookbacks.
The Tuesday alarm was due to the delta-p glitching up to 2.0e-07; the slope channels only reached 50% of their trip values.
VACSTAT was restarted at 08:37 with these new values.
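For context, here is a toy sketch of the delta-trip logic described above: for each lookback window, the pressure rise over the window is compared against the trip value. The lookback lengths and data handling below are made up for illustration; this is not VACSTAT's actual code.

DELTA_TRIP_TORR = 3.0e-07   # the new setting noted above

def delta_trip_alarm(pressure_history, lookback_samples):
    """pressure_history: list of pressure samples in Torr, oldest to newest."""
    latest = pressure_history[-1]
    for n in lookback_samples:            # e.g. three lookbacks of different length
        if len(pressure_history) < n + 1:
            continue
        delta = latest - pressure_history[-1 - n]
        if delta > DELTA_TRIP_TORR:
            return True                   # pressure rose more than the trip value
    return False

# Example: a 2.0e-07 Torr glitch (like Tuesday's) no longer trips the 3.0e-07 threshold
history = [1.0e-06] * 100 + [1.2e-06]
print(delta_trip_alarm(history, lookback_samples=[10, 30, 60]))   # False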
TITLE: 08/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: Locked for 25 minutes after an earthquake knocked us out and it recovered all on its own. No alarms. Planned calibration and commissioning today, though with us being unthermalized at the start, we will need to rearrange a few items.
TITLE: 08/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 46Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Smooth and uneventful shift with H1 locked entire shift and currently at 6.75hrs.
LOG: n/a
TITLE: 08/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Locked for 1 hour after a fast lock loss (alog86483). Reacquisition was straightforward, but I did need to run an initial alignment.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:54 | FAC | Kim | Opt Lab | n | Tech clean | 17:54 |
16:27 | FAC | Tyler, HFD | Vertex | n | Hydrant fix | 17:55 |
17:55 | FAC | Kim | MX | n | Tech clean | 18:33 |
18:35 | ISC | Camilla | Opt Lab | n | Parts bin | 19:26 |
20:03 | FAC | Tyler | Vertex | n | Taking photos of the leaky hydrant | 20:08 |
21:02 | SPI | Jeff | Opt Lab | n | Inventory | 21:17 |
21:05 | SPI | Rick, Dripta, Tony | PCAL lab | yes | SPI BS meas. | 22:41 |
TITLE: 08/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1's been locked almost an hour (and TJ reported reacquisition was straightforward - good to know after yesterday's post-maintenance locking issues!). Winds are low and microseism is low (below the 50th percentile); here's to smooth sailing!
J. Kissel. Dimensions of the Glenair picomotor Mighty Mouse connector, for Science.
No obvious cause, looks very fast.
Since the HAM1 vent, I have done a few different measurements of the ASC that provide information on how to calibrate WFS signals from counts to microradians. Here is a summary:
CHARD, INP1 and PRC2 results come from this alog
DHARD results come from this alog
SRM results come from this alog (if you are comparing values, I made a power normalization error in the linked alog)
BS results were taken but never alogged (shame on me)
All of these measurements were taken by notching all ASC loops at 8.125 Hz and injecting an 8.125 Hz line in the desired DoF. The OSEM witness channels provide the urad reference.
Unless otherwise specified, the witness channels are the bottom-stage OSEMs.
DoF | Input Matrix | Calibration | Notes |
---|---|---|---|
CHARD P | -1 * REFL A 45 I + 0.6 * REFL B 45 I | 0.0161 urad [ETMY L2] / ct [REFL A 45 I], 0.0109 urad [ETMY L2] / ct [REFL B 45 I] | measured as ETMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence near 1 |
CHARD Y | -1 * REFL A 45 I + 0.8 * REFL B 45 I | 0.0113 urad [ETMY L2] / ct [REFL A 45 I], 0.00965 urad [ETMY L2] / ct [REFL B 45 I] | measured as ETMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence near 1 |
DHARD P | 0.5 * AS A 45 Q - 0.5 * AS B 45 Q | 0.00312 urad [ETMX L2] / ct [AS A 45 Q], 0.00312 urad [ETMX L2] / ct [AS B 45 Q] | measured as ETMX L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence = 0.8, 10 averages |
DHARD Y | 0.5 * AS A 45 Q - 0.5 * AS B 45 Q | 0.00612 urad [ITMY L2] / ct [AS A 45 Q], 0.02 urad [ITMY L2] / ct [AS B 45 Q] | measured as ITMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence = 0.5, 10 averages |
PRC2 P (PR2) | 1 * POP X RF I | 0.00033 urad / ct | coherence 1 |
PRC2 Y (PR2) | 1 * POP X RF I | 0.000648 urad / ct | coherence 1 |
INP1 P (IM4) | 1.5 * REFL A 45 I + 1 * REFL B 45 I | 0.0104 urad / ct [REFL A 45 I], 0.00988 urad / ct [REFL B 45 I] | coherence 1 |
INP1 Y (IM4) | 2 * REFL A 45 I + 1 * REFL B 45 I | 0.0141 urad / ct [REFL A 45 I], 0.00608 urad / ct [REFL B 45 I] | coherence 1 |
MICH P (BS) | 1 * AS A 36 Q | 0.0161 urad [M2] / ct | measured as BS M2 WIT / AS A 36 Q, must transform into M3 urad, coherence near 1 |
MICH Y (BS) | 1 * AS A 36 Q | data not taken | |
SRC1 P (SRM) | 1 * AS A 72 Q | 16.9 urad / ct | coherence near 1 |
SRC1 Y (SRM) | 1 * AS A 72 Q | 10.6 urad / ct | coherence near 1 |
Here is data for MICH yaw and SRC2:
DoF | Input Matrix | Calibration | Notes |
---|---|---|---|
MICH Y | 1 * AS A 36 Q | 0.00248 urad [BS M2] / ct | measured as BS M2 WIT / AS A 36 Q, must transform into M3 urad |
SRC2 P (SRM + SR2) | 1 * AS_C | 33.4 urad [SR2 M3] / ct, 44.7 urad [SRM M3] / ct | SRC2 drive matrix is a combination of SRM and SR2: -7.6 * SRM + 1 * SR2 |
SRC2 Y (SRM + SR2) | 1 * AS_C | 20.9 urad [SR2 M3] / ct, 48.8 urad [SRM M3] / ct | SRC2 drive matrix is a combination of SRM and SR2: 7.1 * SRM + 1 * SR2 |
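For anyone reproducing these numbers, here is a rough sketch of how a counts-to-urad value can be pulled out of the line-injection data described above. The sample rate, segment length, and function name are placeholders (the real measurements were done with DTT templates), not the actual analysis code.

import numpy as np
import scipy.signal as sig

fs = 256.0        # Hz, assumed sample rate of the downsampled data
f_line = 8.125    # Hz, injected line frequency

def urad_per_count(osem_urad, wfs_counts, fs, f_line, nperseg=4096):
    # Transfer function estimate H = P_xy / P_xx  (x = WFS counts, y = OSEM witness urad)
    f, Pxy = sig.csd(wfs_counts, osem_urad, fs=fs, nperseg=nperseg)
    _, Pxx = sig.welch(wfs_counts, fs=fs, nperseg=nperseg)
    _, Cxy = sig.coherence(wfs_counts, osem_urad, fs=fs, nperseg=nperseg)
    i = np.argmin(np.abs(f - f_line))
    return np.abs(Pxy[i] / Pxx[i]), Cxy[i]     # (urad per ct, coherence at the line)

# cal, coh = urad_per_count(osem_urad_timeseries, wfs_counts_timeseries, fs, f_line)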
Continuing my efforts to check and update the LSC coupling (see 86423 and 86370) I ran yet another bruco, which showed that now the dominant low frequency coherence is coming from SRCL.
In review, I have:
I think we can do better. The main challenge is fitting the <100 Hz coupling while fitting the >100 Hz coupling, or at least not significantly worsening it. I think the fit would be easier if we made use of the parallel SRCLFF banks to fit a low and high frequency feedforward. This would require fitting the low frequency coupling, engaging it, and then remeasuring the high frequency coupling to fit separately.
I have what I think is a better fit to the low frequency coupling, currently saved in FM5 of the SRCLFF1 bank. To test this new filter:
Time permitting, if the new filter works, taking a better measurement of the coupling from 100 Hz and up while on the new filter would be a useful next step, which may require adjusting the excitation shape or increasing the excitation gain.
To motivate the use of commissioning time on this, I did a very simple coherence-based subtraction of the SRCL noise to see what it could get us in range improvement. I used the same time as the bruco I linked above, so this time includes the PRCL and MICH improvements. I copied some of Oli's range compare code to help me generate this nice plot comparing 1 hour of strain with coherence subtraction of the SRCL noise. Even if we can't improve the feedforward above 100 Hz, there is possibly a ~2 Mpc improvement available from below 100 Hz.
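For reference, a rough sketch of the coherence-based subtraction behind this estimate (not Oli's range-compare code, just the standard construction): the SRCL-coherent part of the DARM spectrum is removed as S_res(f) = S_darm(f) * (1 - C(f)), an upper bound on what an ideal linear, stationary feedforward could achieve. Data access and sample rate here are placeholders.

import numpy as np
import scipy.signal as sig

def coherence_subtracted_psd(darm, srcl, fs, nperseg=16384):
    f, S_darm = sig.welch(darm, fs=fs, nperseg=nperseg)
    _, C = sig.coherence(srcl, darm, fs=fs, nperseg=nperseg)
    S_res = S_darm * (1.0 - C)          # SRCL-coherent power removed
    return f, S_darm, S_res

# f, S_darm, S_res = coherence_subtracted_psd(darm_timeseries, srcl_timeseries, fs=16384)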
Wed Aug 20 10:09:36 2025 INFO: Fill completed in 9min 32secs
Gerardo confirmed a good fill curbside.
While a hydrant was being fixed on the backside of the OSB/vertex area, fire pump 1 was activated from 1725-1742 UTC. The Hanford FD pickup truck and our F150 were also back in that area. Their driving to and from that area, as well as some of their work, showed up on our ITMY seismometer.
TITLE: 08/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 11.5 hours, no alarms, calm environment. Looks like we dropped Observing at 1053-1057UTC because the SQZr lost lock and relocked.
Our range isn't as stable as it has been lately, and the Omicron glitch gram FOM shows some extra SNR around 20-30Hz during those times.
HFD are working on a hydrant, Tyler has requested the fire_pump alarms be bypassed for the next two hours.
Bypass will expire:
Wed Aug 20 12:31:32 PM PDT 2025
For channel(s):
H0:FMC-CS_FIRE_PUMP_1
H0:FMC-CS_FIRE_PUMP_2
We kept losing lock at DHARD WFS, and Jenne and I spent a lot of time figuring out why. Here are some notes about things that we tried:
How I solved the problem:
Some final notes about minor mistakes I made after:
Apologies for missing this detail in my original alog, but we had run two successful initial alignments. The first was run as usual after the end of the maintenance day. Green alignment converged properly, arm alignment looked good. We ran a second after the early problems with DHARD, thinking that maybe the alignment was poor. Same results, good green convergence, but problems with DHARD.
In general, the alignment of the arms and corners looked very good through this whole process. When I stepped in DHARD P and Y, I was making very small steps overall to correct the alignment, steps on the order of 0.01 or 0.03. In general, the buildups and the camera showed good alignment. The green arms also stayed well aligned when the DHARD signal engaged properly, which is telling me that our initial alignment was doing the correct thing for the ITMS. Is it possible that the CARM offset was too large? I have been wondering this, because if the alignment was decent but the WFS signal was junk because we were too far from resonance that might explain this behavior, and it lines up with the fact that the better fix was reducing the carm offset more before moving to REFL. I'm not sure if that makes sense though.
As a side note, DRMI was mostly locking very quickly throughout the night.
TJ, Ryan S, Elenna
Today we (once again) took some time to try to commission the PRM ASC loop in PRMI ASC.
We locked PRMI, and I engaged the beamsplitter ASC. I was able to see by moving PRM pitch around, and watching the buildups, that the "old" error signal, REFL A RF9 I, was a great error signal and only required a sign flip.
However, PRM yaw was harder. I checked REFL A and B RF9 I and neither signal worked. I checked POP X I next, and saw that the signal worked just fine. This doesn't make a lot of sense, but it works. I updated the guardian accordingly.
To test that the guardian changes work, we brought the ISC_DRMI guardian down, which unlocked PRMI, and re-requested PRMI ASC. We fixed a few guardian errors and tried again. It worked fine.
These are the changes made:
Loaded and tested!
Rereading this, I realized that saying it "didn't work" is very vague.
More words:
I stepped around with the PRM slider, watching both the POP18 and POP90 buildups. While watching the buildups, I looked to see when various signals crossed zero. For PRM pitch, REFL A 9 I clearly had a good zero crossing at the maximized buildup. The difference was the sign flip, which I tested by turning on the loop and seeing the error signal go the wrong direction (away from zero), and then the right direction (towards zero) when the gain sign was flipped. To maintain the gain sign as set by the guardian, I flipped the sign on the input matrix value from positive (pre-vent value) to negative.
For PRM yaw, the REFL signals did not cross zero when the buildups were maximized. However, the POP X RF signal did cross zero. I also watched the POP QPDs, which are sensitive to PRM, but also require some offset. I decided setting some offset and trying to use the REFL WFS was probably a bad idea, so I chose POP X RF yaw as the error signal. I calibrated it by measuring the signal difference in counts compared to the sliders steps I took, which are in urad. I checked the overall sign using the similar loop engagement test I described above.
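A trivial sketch of that slider-step calibration arithmetic, with made-up numbers rather than the measured ones: step the PRM slider by known amounts (urad), record the corresponding error signal change (counts), and take the slope.

import numpy as np

slider_steps_urad = [0.0, 0.5, 1.0, 1.5]        # hypothetical PRM yaw slider positions
popx_counts       = [0.0, 210.0, 420.0, 630.0]  # corresponding POP X RF yaw signal (made up)

slope_ct_per_urad = np.polyfit(slider_steps_urad, popx_counts, 1)[0]
cal_urad_per_ct = 1.0 / slope_ct_per_urad
print(f"{cal_urad_per_ct:.3e} urad/ct")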
Leo, Jennie, Camilla
Followed setup instructions from 80010. With 75 mW injected in the SEED beam, we had 1 mW on OPO_IR_PD_DC, slightly lower than last time. Also took SQZ_FC to FC_MISALIGNED, and opened the SQZ beam diverter and fast shutter. Have counts of ~60 on ASC-AS_A and B and 7e-4 on the ASC-OMC-A and B NSUMs. Jennie took the OMC PZT down to zero to start and then ran the template userapps/../omc/h1/templates/OMC_scan_sqz_beam.xml.
We repeated the mode scans with different ZM PSAMS offsets. The DC centering loops could account for the pitch change from the PSAMS; for the extreme PSAMS values, I increased the servo limiter from 85 to 95.
Jennie and Leo have the data to analyze.
See the ndscope with the channels we used to monitor attached.
This template is saved in /ligo/home/jennifer.wright/Documents/OMC_scan/20250819_SQZ_beam_scan_with_ZM_changes.yaml
Following up on these OMC scans: attached is the table with all computed mode-mismatch values.
Leo, Jennie W., Camilla

Below is a plot of the OMC scans fitted with a surface polynomial. The plot is from the presentation in T2500228, so the labels on the axes will be different. This can be plotted in Matlab simply using the following code block (requires the Curve Fitting Toolbox to use fit()).

OMCX = [5.5, 4, 2.3, 3, 8.1, 2.3, 2.3, 6, 9.5, 6, 6, 9, 3, 8, 4, 9.5, 9.5];
OMCY = [-0.8, 0.34, 0.2, 0.85, -0.4, -0.1, 0.2, -0.4, -0.6, -4.5, -5.3, 2, 2, -2, -2, -5, -0.6];
OMCdata = [1-4.048/100, 1-3.412/100, 1-3.074/100, 1-3.234/100, 1-5.073/100, ...
           1-3.176/100, 1-2.884/100, 1-2.174/100, 1-3.343/100, 1-9.490/100, 1-11.865/100, 1-6.282/100, 1-2.313/100, ...
           1-3.013/100, 1-3.021/100, 1-9.179/100, 1-3.243/100];
omc = fit([transpose(OMCX), transpose(OMCY)], transpose(OMCdata), 'poly22');
p = plot(omc);
Jennie W, Sheila D
Summary: tried to use an ETMX injection to tag light coming from the arms at the REFL port as we step the DARM offset; we didn't get a full measurement as we lost lock - seems unrelated.
During the commissioning period, and while Robert was doing shaker injections, I turned on a line at 74.37 Hz on SUS-ETMX_L3_CAL_EXC. This is the same point at which the ETMX CAL line input goes in (where the calibration excitation for ETMX is injected into the DRIVEALIGN matrix). The PSDs shown were measured about a minute before we lost lock, and it can be seen that the rms motion in the upper left of the ETMX bottom mass is not reaching its limit.
The settings I used in AWG are shown in this photo.
After checking the height of the line in DARM, OMC REFL, and the rms on the upper left ESD readout, I tried to take a DARM offset step measurement using autodarmoffsetstep.py (using the ETMX line instead of the PCAL lines to read out the optical gain). To do this, I commented out the setup_pcal_for_darm_offset_step and restore_pcal functions in autodarmoffsetstep.py before running it.
I had to stop the measurement twice to turn off the OMC ASC and put the X0 offset back to what it was before the measurement. One step into this measurement set, we then lost lock (around 2 minutes into the final attempted DARM offset measurement). We can't see any evidence so far that this was the cause of the lockloss, as the line had been running for 40 minutes or so without breaking the lock.
Data is saved in an xml in /ligo/gitcommon/jennifer.wright/git/DARM_OFFSET
Got the folder reference wrong for the xml file; it's actually in /ligo/home/jennifer.wright/git/2025/DARM_OFFSET.
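For reference, a sketch of the idea of using the 74.37 Hz line to read out relative optical gain versus DARM offset: demodulate the DARM error signal at the line frequency over each offset step and compare line heights (ignoring changes in loop suppression at the line frequency). This is an illustration only, not the modified autodarmoffsetstep.py.

import numpy as np

def line_amplitude(x, fs, f_line):
    """Single-bin demodulation of time series x at f_line (Hz).
    Exact for an integer number of cycles in the segment, approximate otherwise."""
    t = np.arange(len(x)) / fs
    lo = np.exp(-2j * np.pi * f_line * t)
    return 2.0 * np.abs(np.mean(x * lo))   # line amplitude in x's units

# With the suspension drive held fixed, the relative optical gain at each DARM offset
# step is the line height in DARM_ERR relative to a reference step, e.g.:
# rel_gain[i] = line_amplitude(darm_err_step[i], fs, 74.37) / line_amplitude(darm_err_step[0], fs, 74.37)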
As the next step in fine-tuning the SR3 Y estimator, we needed to retake the SUSPOINT to M1 measurements as well as OLTFs so they could be used to calculate the filter modules for the suspoint and drive estimators. I took those measurements today.
General setup:
- in HEALTH_CHECK (but damping back on)
- damping for Y changed from -0.5 to -0.1
- OSEMINF gains and DAMP FM7 turned on (and left on afterwards 86070)
SUSPOINT to M1:
Data for those measurements can be found in /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/Common/Data/2025-07-29_1730_H1ISIHAM5_ST1_WhiteNoise_SR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml r12492
M1 to M1 (OLTFs):
After this, the next steps are to take regular transfer functions with the above setup of Y having -0.1 damping
That data is in /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-07-29_1830_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12493
Reminder that these open loop transfer functions were taken with the damping Y gain of -0.1, so they should not be taken as 'nominal' OLTFs.
The M1 to M1 TFs were supposed to be regular TFs so here is the alog for those: 86202
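One practical note on using these OLTFs: since only the DAMP Y gain was changed (from the nominal -0.5 to -0.1 for these measurements), the measured open loop gain can be rescaled to the nominal-gain case by the gain ratio. A minimal sketch, assuming nothing else in the loop changed:

import numpy as np

def rescale_oltf(oltf_meas, g_meas=-0.1, g_nom=-0.5):
    """Rescale an open loop gain measured at damping gain g_meas to gain g_nom,
    assuming only the overall loop gain differs between the two configurations."""
    return np.asarray(oltf_meas) * (g_nom / g_meas)

# e.g. rescale_oltf(oltf_from_dtt_export) multiplies the measured loop gain by 5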
In the past few weeks we have seen rocky performance out of the Calibration pipeline and its IFO-tracking capabilities. Much, but not all, of this is due to [my] user error. Tuesday's bad calibration state is a result of my mishandling of the recent drivealign L2L gain changes for the ETMX TST stage (LHO:78403, LHO:78425, LHO:78555, LHO:79841).

The current practice adopted by LHO with respect to these gain changes is the following:
1. Identify that KAPPA_TST has drifted from 1 by some appreciable amount (1.5-3%), presumably due to ESD charging effects.
2. Calculate the necessary DRIVEALIGN gain adjustment to cancel out the change in ESD actuation strength. This is done in the DRIVEALIGN bank so that it's downstream enough to only affect the control signal being sent to the ESD. It's also placed downstream of the calibration TST excitation point.
3. Adjust the DRIVEALIGN gain by the calculated amount (if KAPPA_TST has drifted by +1%, this corresponds to a -1% change in the DRIVEALIGN gain).
3a. Do not propagate the new drivealign gain to CAL-CS.
3b. Do not propagate the new drivealign gain to the pyDARM ini model.

After step 3 above, it should be as if the IFO is back to the state it was in when the last calibration update took place, i.e. as if no ESD charging has taken place (since it's being canceled out by the DRIVEALIGN gain adjustments). It's also worth noting that after these adjustments the SUS-ETMX drivealign gain and the CAL-CS ETMX drivealign gain will no longer be copies of each other (see image below).

The reasoning behind 3a and 3b above is that by using these adjustments to counteract IFO changes (in this case ESD drift) from when it was last calibrated, operators and commissioners in the control room can comfortably take care of performing these changes without having to invoke the entire calibration pipeline. The other approach, adopted by LLO, is to propagate the gain changes to both CAL-CS and pyDARM each time and follow up with a fresh calibration push. This approach leaves less to 'be remembered', since CAL-CS, SUS, and pyDARM will always be in sync, but comes at the cost of having to turn a larger crank each time there is a change.

Somewhere along the way I updated the TST drivealign gain parameter in the pyDARM model even though I shouldn't have. At this point, I don't recall if I was confused because the two sites operate differently or if I was just running a test and left this parameter changed in the model template file by accident and subsequently forgot about it. In any case, the drivealign gain parameter change made its way through along with the actuation delay adjustments I made to compensate for both the new ETMX DACs and for residual phase delays that haven't been properly compensated for recently (LHO:80270). This happened in commit 0e8fad of the H1 ifo repo. I should have caught this when inspecting the diff before pushing the commit, but I didn't. I have since reverted this change (H1 ifo commit 41c516).

During the maintenance period on Tuesday, I took advantage of the fact that the IFO was down to update the calibration pipeline to account for all of the residual delays in the actuation path we hadn't been properly compensating for (LHO:80270). This is something that I've done several times before; a combination of the calibration pipeline working so well in O4 and the minor nature of the phase delay changes contributed to my expectation that we would come back online to a better calibrated instrument.
This was wrong. What I'd actually done was install a calibration configuration in which the CAL-CS drivealign gain and the pyDARM model's drivealign gain parameter were different. This is bad because pyDARM generates FIR filters that are used by the downstream GDS pipeline; those filters are embedded with knowledge of what's in CAL-CS by way of the parameters in the model file. In short, CAL-CS was doing one thing and GDS was correcting for another.

-- Where do we stand?

At the next available opportunity, we will take another calibration measurement suite and use it to reset the calibration one more time, now that we know what went wrong and how to fix it. I've uploaded a comparison of a few broadband PCAL measurements (image link). The blue curve is the current state of the calibration error. The red curve was the calibration state during the high profile event earlier this week. The brown curve is from last week's Thursday calibration measurement suite, taken as part of the regularly scheduled measurements.

-- Moving forward, I and others in the Cal group will need to adhere more strictly to the procedures we already have in place:
1. Double check that any changes include only what we intend at each step.
2. Commit all changes to any report in place immediately and include a useful log message (we also need to fix our internal tools to handle the report git repos properly).
3. Only update the calibration while there is a thermalized IFO that can be used to confirm that things come back properly, or, if done while the IFO is down, require Cal group sign-off before going to observing.
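For the record, a minimal sketch of the gain-adjustment arithmetic in step 3 above; the drivealign gain value here is a placeholder, not the actual H1 setting.

kappa_tst = 1.01                       # e.g. KAPPA_TST showing a +1% drift from ESD charging
drivealign_gain_old = 100.0            # placeholder value, not the real L2L gain
drivealign_gain_new = drivealign_gain_old / kappa_tst    # scale by 1/kappa to cancel the drift
print(drivealign_gain_new / drivealign_gain_old)         # ~0.990, i.e. a ~-1% gain change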
Posting here for historical reference.
The propagation of the correction of incorrect calibration was in an email thread between myself, Joseph Betzwieser, Aaron Zimmerman, Colm Talbot.
I had produced a calibration uncertainty with the necessary correction that would account for the effects of this issue, attached here as a text file and as an image showing how it compares against our ideal model (blue dashed) and the readouts of the calibration monitoring lines at the time (red pentagons).
Ultimately the PE team used the inverse of what I post here, since as a result of this incident it was discovered that PE had been ingesting the uncertainty in an inverted fashion up to this point.
I am also posting the original correction transfer function (the blue dashed line in Vlad's comment's plot) here from Vlad for completeness. It was created by calculating the modeled response of the interferometer that we intended to use at the time (R_corrected) over the response of the interferometer that was running live at the time (R_original), corrected for the online corrections (i.e. time dependent correction factors such as Kappa_C, Kappa_TST, etc). So to correct, one would take the calibrated data stream at the time, bad_h(t) = R_original(t) * DARM_error(t), and correct it via: corrected_h(t) = R_original(t) * DARM_error(t) * R_corrected / R_original(t).
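A minimal sketch of applying such a correction TF to a stretch of the calibrated stream, following corrected_h(t) = bad_h(t) * R_corrected / R_original. The (frequency, complex value) interpolation and the function name are assumptions for illustration, not the actual tooling used.

import numpy as np

def apply_correction(bad_h, fs, corr_freq, corr_tf):
    """Multiply the FFT of bad_h(t) by the correction TF (interpolated onto the FFT
    frequencies) and transform back to the time domain."""
    n = len(bad_h)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Interpolate real and imaginary parts of the correction onto the FFT frequencies
    corr = (np.interp(freqs, corr_freq, corr_tf.real)
            + 1j * np.interp(freqs, corr_freq, corr_tf.imag))
    return np.fft.irfft(np.fft.rfft(bad_h) * corr, n=n)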
So our understanding of what was wrong with the calibration around September 25th, 2024 00:00 UTC has improved significantly since then. We had 4 issues in total:
1) The above mentioned drivealign gain mismatch between the model, h1calcs, the interferometer, and the GDS calibration pipeline.
2) The ETMX L1 stage rolloff change that was not in our model (see LHO alog 82804).
3) LHO was not applying the measured SRC detuning to the front end calibration pipeline; we started pushing it in February 2025 (see LHO alog 83088).
4) The fact that pydarm doesn't automatically hit the load filters button for newly updated filters means sometimes humans forget to push that button (see for example LHO alog 85974). Turns out that night the optical gain filter in the H1:CAL-DARM_ERR filter bank had not been updated. Oddly enough, the cavity pole frequency filter bank had been updated, but I'm guessing the individual load button was pressed for just that one.

In the filter archive (/opt/rtcds/lho/h1/chans/filter_archive/h1calcs/), specifically H1CALCS_1411242933.txt, the inverse optical gain filter is 2.9083e-07, which is the same value as the previous file's gain. However, the model optical gains did change (3438377 in the 20240330T211519Z report, and 3554208 in the bad report that was pushed, 20240919T153719Z). The EPICS records for the kappa generation were updated, so we had a mismatch between the kappa_C value that was calculated and the optical gain to which it was applied - similar to the actuation issue we had. It should have changed by a factor of 0.9674 (3438377/3554208). This resulted in the monitoring lines showing ~3.5% error at the 410.3 Hz line during this bad calibration period. It also explains why there's a mismatch between the monitoring lines and the correction TFs we provided that night at high frequency. Normally the ratio between PCAL and GDS is 1.0 at 410.3 Hz, since the PCAL line itself is used to calculate kappa_C at that frequency and thus matches the sensing at that frequency to the line. See the grafana calibration monitoring line page.

I've combined all this information to create an improved TF correction factor and uncertainty plot, as well as more normal calibration uncertainty budgets. "calibration_uncertainty_H1_1411261218.png" is a normal uncertainty budget plot, with a correction TF from the above fixes applied. "calibration_uncertainty_H1_1411261218.txt" is the associated text file with the same data. "H1_uncertainty_systematic_correction.txt" is the TF correction factor that I applied, calculated with the above fixes. Lastly, "H1_uncertainty_systematic_correction_sensing_L1rolloff_drivealign.pdf" is the same style of plot Vlad made earlier, again with the above fixes.

I'll note the calibration uncertainty plot and text file were created on the LHO cluster, with the /home/cal/conda/pydarm conda environment, using the command:

IFO=H1 INFLUX_USERNAME=lhocalib INFLUX_PASSWORD=calibrator CAL_ROOT=/home/cal/archive/H1/ CAL_DATA_ROOT=/home/cal/svncommon/aligocalibration/trunk/ python3 -m pydarm uncertainty 1411261218 -o ~/public_html/O4b/GW240925C00/ --scald-config ~cal/monitoring/scald_config.yml -s 1234 -c /home/joseph.betzwieser/H1_uncertainty_systematic_correction.txt

I had to modify the code slightly to expand out the plotting range - it was much larger than the calibration group usually assumes. All these issues were fixed in the C01 version of the regenerated calibration frames.