Mon Aug 25 10:09:19 2025 INFO: Fill completed in 9min 16secs
Sheila, Camilla
We noticed that the high frequency SQZ was worse after the ASC SRCL1 Pit and Yaw offset changes last week (86422); see plot.
We ran SQZ_OPO_LR's SCAN_OPOTEMP, then SCAN_ALIGMENT_FDS (largest change was 8urad in ZM4 Y, plots here), then SCAN_SQZANG_FDS, which brought the ADF error signal to zero; this is great, as it means the ADF servo needs no adjustment. Altogether these three states brought the high frequency squeezing back to where it was a week ago, -4.3dB.
The ADF servo is very slow (~45 minutes), but this is preferable to it running away at the start of the lock.
In 86481, Elenna proposed using a new FF fit as the bruco showed SRCL coherence. This has improved the low frequency SRCL noise but moved the bump at 200Hz to 100Hz, overall a small improvement; see plot attached. Saved in ISC_LOCK and in the safe and observe SDFs.
I doubled the amplitude of the injection from 0.04 to 0.08, ran the template, and saved it as /lsc/h1/scripts/feedforward/SRCL_excitation.xml while FM5 was on, so that in the future we can try to fit a separate >100Hz filter as Elenna suggests in 86481. I didn't measure the pre-shaping as I expect we already have a recent measurement of that.
In preparation for creating a high frequency SRCL FF based on this data, I copied the SRCLFF1 FM10 highpass into FM10 of SRCLFF2 (not to be confused with a different, older "highpass" in FM1 of SRCLFF2).
I have updated the blend filters used for the SR3 P estimator to add in the OSEM damping for the P peaks at 0.6 and 0.7 Hz. The filters I installed were the ones Brian made last Thursday, pit_v2. The filters were installed into the filter banks SUS-SR3_M1_EST_P_FUSION_MEAS_BP and SUS-SR3_M1_EST_P_FUSION_MODL_BP, which hold the OSEM and model blend filters respectively. They were placed in FM2, and I swapped over from FM1 to FM2 in both banks. These changes were updated in the Observe and Safe sdf.
We turned the estimator on with these new filters at 2025-08-25 15:42UTC.
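As background, here is a minimal sketch of the complementary-blend idea behind pairing a measured-path bandpass with its model-path complement. This is an illustration of the technique only, not the actual pit_v2 filters or the SUS front-end implementation; the sample rate, corner frequencies, and signals below are all hypothetical.

```python
import numpy as np
from scipy import signal

fs = 256.0                          # Hz, illustrative sample rate
t = np.arange(0, 60, 1 / fs)
osem_pitch = np.random.normal(size=t.size)    # placeholder OSEM-derived P estimate
model_pitch = np.random.normal(size=t.size)   # placeholder model-derived P estimate

# Hypothetical band-pass covering the 0.6-0.7 Hz pitch peaks for the
# measured (OSEM) path; the model path gets the complement so the two
# contributions sum to one fused estimate.
b_bp, a_bp = signal.butter(2, [0.5, 0.8], btype="bandpass", fs=fs)

meas_part = signal.lfilter(b_bp, a_bp, osem_pitch)                   # MEAS_BP-like path
modl_part = model_pitch - signal.lfilter(b_bp, a_bp, model_pitch)    # complementary MODL_BP-like path
p_estimate = meas_part + modl_part
```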
TITLE: 08/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Observing at 151Mpc and have been Locked for over 4.5 hours. Commissioning today starting at 15:30UTC
DriptaB, TonyS, RickS
J. Kissel, quoting R. Savage: A few extra explanatory words for the uninitiated on how this measurement works and how the results were derived.

The uncertainties reported are the statistical variations for the measurements we made, highlighted in the attached plots. The authors have not attempted an assessment of potential systematic errors. I suspect that the largest sources of systematic error would likely result from
- deviations of the incident polarization (as defined by the plane of incidence of the beamsplitter) from pure p-pol, and
- deviations of the angle of incidence from 45 deg.
I also suspect that the errors we might have in this regard are much smaller than what you will have in the SPI installation, given the much longer path lengths measured here vs. the SPI in-chamber setup. The next largest source of systematic error might be the temperature dependence of the reflectivity of the beamsplitters; we did not attempt to quantify this. We do measure, and correct for, the temperature dependence of the power sensor responsivities and their dark levels during the measurements. I suspect these will have a negligible impact on the measurement results reported for this effort.

Regarding the measurement setup and the math used to derive the answers: the description of the responsivity ratio measurements given in D. Bhattacharjee et al., CQG 38.1 (2020): 015009 (P2000113) -- specifically the caption and text surrounding Figure 3 -- is the gist of the measurement method; simply replace "... the square root of the product of the ratios ..." with "... the square root of the quotient of the ratios ..." in that caption. This yields the beamsplitter ratio, T/R, rather than the responsivity ratio of the two integrating sphere PDs that the PCAL team is after (called \alpha_{W1W2} in the caption, but it could also be any two responsivities, \alpha_{WG}, \alpha_{RW}, etc). Only the following should impact the measurements:
- laser power variations that occur over the difference between the times of recording the two power sensor outputs (less than 0.1 sec), and
- variations of the reflectivity of the BS or of the responsivities of the two power sensors that occur over the time difference between measuring in the A-B and B-A configurations (less than 40 seconds).

We record four time series: the outputs of both power sensors (in volts) and the temperatures (in volts) recorded by sensors on the circuit boards of both power sensors. Any temperature variation in the power sensor time series is normalized out, leaving two conditioned voltage time series for a given physical arrangement of the PDs; these are proportional to the (power) transmission, T, and (power) reflection, R, of the beamsplitter. (The reflectivity of the A path's HR steering mirror -- which reflects light 90 [deg] to be parallel with the B path -- is measured and taken into account as well; see details below.) The responsivity of these PCAL integrating sphere + photodiode assemblies -- here we'll call them \rho_1 and \rho_2 -- is known to extremely high accuracy.

Each data point you see in the plot is the ratio of the BS ratio (T/R) obtained from one set of (two conditioned) time series with the sensors in one configuration to a second BS ratio (T/R) obtained with the PD positions swapped, i.e. accounting for the fact that
- what was the T time series (from the \rho_1 PD in the B position; the "A-B" configuration) becomes the R time series (from the \rho_1 PD in the A position; the "B-A" configuration), and
- what was the R time series (from the \rho_2 PD in the A position; the "A-B" configuration) becomes the T time series (from the \rho_2 PD in the B position; the "B-A" configuration).

So the math is

T/R = sqrt{ [(P x T x rho_1) / (P x R x rho_2)]_{A-B} / [(P x R x rho_1) / (P x T x rho_2)]_{B-A} } = sqrt{ (T/R)^2 }

where again
- P is the input power (in [W]),
- R and T are the beamsplitter (power) reflectivity and transmission,
- \rho_1 and \rho_2 are the two different working standards, and
- the subscripts _{A-B} and _{B-A} denote the two different physical configurations of the integrating spheres.

Assuming no other loss or absorption, the (power) reflectivity, R, displayed on the plots follows from
R + T = 1
=> 1 + T/R = 1/R
=> R = 1 / (1 + T/R).

As noted earlier, the powers (sensor outputs) for the transmitted path are multiplied by about 1.00035 to account for the transmissivity of the HR mirror that reflects the transmitted beam to the power sensor.
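To make the arithmetic above concrete, here is a minimal numerical sketch of forming the beamsplitter ratio from the two sensor configurations and converting it to a reflectivity. This is not the PCAL group's actual analysis code; the array names, noise levels, and signal values are hypothetical, and only the HR-mirror factor (~1.00035) and the final relations are taken from the text above.

```python
import numpy as np

# Hypothetical conditioned power-sensor time series (volts), already
# normalized for each sensor's temperature-dependent responsivity and
# dark level.  Values are illustrative only.
hr_corr = 1.00035  # transmitted-path HR steering-mirror correction (from the text)

# Configuration "A-B": rho_1 sensor in the transmitted (B) path,
#                      rho_2 sensor in the reflected  (A) path.
v_T_AB = hr_corr * np.random.normal(0.700, 1e-4, 10_000)   # ~ P * T * rho_1
v_R_AB = np.random.normal(0.300, 1e-4, 10_000)              # ~ P * R * rho_2

# Configuration "B-A": the two sensors physically swapped.
v_R_BA = np.random.normal(0.300, 1e-4, 10_000)              # ~ P * R * rho_1
v_T_BA = hr_corr * np.random.normal(0.700, 1e-4, 10_000)   # ~ P * T * rho_2

# Ratio formed in each configuration; the unknown responsivities rho_1,
# rho_2 and the laser power P cancel in the quotient of the two ratios.
ratio_AB = np.mean(v_T_AB / v_R_AB)   # = (T/R) * (rho_1/rho_2)
ratio_BA = np.mean(v_R_BA / v_T_BA)   # = (R/T) * (rho_1/rho_2)

T_over_R = np.sqrt(ratio_AB / ratio_BA)

# With no other loss or absorption, R + T = 1  =>  R = 1 / (1 + T/R).
R = 1.0 / (1.0 + T_over_R)
print(f"T/R = {T_over_R:.5f},  R = {R:.5f}")
```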
TITLE: 08/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
H1 has been locked for 41+ hours
All systems still running well.
No events to report.
LOG:
No Log
H1 ISI CPS Sensor Noise Spectra Famis 26545
No obvious or alarming changes from the last CPS Sensor Noise Spectra.
TITLE: 08/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 has been locked for 35 hours and 45 min.
All systems appear to be running smoothly.
TITLE: 08/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Observing at 152 Mpc and have been Locked for almost 37 hours. Once again nothing at all happened during my shift
LOG:
no log
Sun Aug 24 10:09:25 2025 INFO: Fill completed in 9min 21secs
TITLE: 08/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
Currently Observing at 150Mpc and have been Locked for almost 27 hours
Looks like there were a couple of GRB-Short alerts that came in early this morning (but no superevent candidates :( ):
TITLE: 08/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
H1 has been locked for 17+ hours.
All systems are still running smoothly.
No events to report.
LOG:
No Log
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
H1 has been locked for 12+ hours.
All systems seem to be running smoothly.
Secondary useism looks to be falling & the wind forecast is for single-digit wind speeds tonight, so it looks like a good night for Observing.
TITLE: 08/23 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We're Observing at 153 Mpc and have been Locked for almost 12 hours. Besides going out of Observing for the calibration measurements, I haven't had to do anything for the ifo. We had a GRB-Short alert come in at 20:11 UTC for E592892.
LOG:
14:30 UTC Observing and have been Locked for almost 3 hours
18:30 Dropped Observing to run calibration
19:02 Back into Observing
20:11 GRB-Short E592892
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 17:09 | EPO | Sam + tour | Overpass | n | Passing over | 17:39 |
Calibration suite run with IFO fully thermalized, having been Locked for over 6.5 hours.
Broadband
2025-08-23 18:31:57 - 18:37:08 UTC
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250823T183157Z.xml
Simulines
2025-08-23 18:38:37 - 19:01:56 UTC
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250823T183838Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250823T183838Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250823T183838Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250823T183838Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250823T183838Z.hdf5
Sat Aug 23 10:08:18 2025 INFO: Fill completed in 8min 14secs
TITLE: 08/23 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
Observing at 150Mpc and have been Locked for almost 3 hours
In the past few weeks we have seen rocky performance out of the calibration pipeline and its IFO-tracking capabilities. Much, but not all, of this is due to [my] user error. Tuesday's bad calibration state is a result of my mishandling of the recent drivealign L2L gain changes for the ETMX TST stage (LHO:78403, LHO:78425, LHO:78555, LHO:79841).

The current practice adopted by LHO with respect to these gain changes is the following (a numerical sketch of the adjustment appears at the end of this entry):
1. Identify that KAPPA_TST has drifted from 1 by some appreciable amount (1.5-3%), presumably due to ESD charging effects.
2. Calculate the DRIVEALIGN gain adjustment needed to cancel out the change in ESD actuation strength. This is done in the DRIVEALIGN bank so that it's downstream enough to only affect the control signal being sent to the ESD. It's also placed downstream of the calibration TST excitation point.
3. Adjust the DRIVEALIGN gain by the calculated amount (if kappaTST has drifted +1%, this corresponds to a -1% change in the DRIVEALIGN gain).
3a. Do not propagate the new drivealign gain to CAL-CS.
3b. Do not propagate the new drivealign gain to the pyDARM ini model.

After step 3 it should be as if the IFO is back in the state it was in when the last calibration update took place, i.e. as if no ESD charging has taken place (since it's being canceled out by the DRIVEALIGN gain adjustments). It's also worth noting that after these adjustments the SUS-ETMX drivealign gain and the CAL-CS ETMX drivealign gain will no longer be copies of each other (see image below). The reasoning behind 3a and 3b is that by using these adjustments to counteract IFO changes (in this case ESD drift) from when the IFO was last calibrated, operators and commissioners in the control room can comfortably take care of performing these changes without having to invoke the entire calibration pipeline. The other approach, adopted by LLO, is to propagate the gain changes to both CAL-CS and pyDARM each time and follow up with a fresh calibration push. That approach leaves less to 'be remembered', since CAL-CS, SUS, and pyDARM will always be in sync, but it comes at the cost of having to turn a larger crank each time there is a change.

Somewhere along the way I updated the TST drivealign gain parameter in the pyDARM model even though I shouldn't have. At this point, I don't recall if I was confused because the two sites operate differently or if I was just running a test, left this parameter changed in the model template file by accident, and subsequently forgot about it. In any case, the drivealign gain parameter change made its way through along with the actuation delay adjustments I made to compensate for both the new ETMX DACs and for residual phase delays that hadn't been properly compensated for recently (LHO:80270). This happened in commit 0e8fad of the H1 ifo repo. I should have caught this when inspecting the diff before pushing the commit, but I didn't. I have since reverted this change (H1 ifo commit 41c516).

During the maintenance period on Tuesday, I took advantage of the fact that the IFO was down to update the calibration pipeline to account for all of the residual delays in the actuation path we hadn't been properly compensating for (LHO:80270). This is something that I've done several times before; a combination of the fact that the calibration pipeline has been working so well in O4 and that the phase delay changes I was instituting were minor contributed to my expectation that we would come back online to a better calibrated instrument.
This was wrong. What I'd actually done was install a calibration configuration in which the CAL-CS drivealign gain and the pyDARM model's drivealign gain parameter were different. This is bad because pyDARM generates the FIR filters used by the downstream GDS pipeline, and those filters are embedded with knowledge of what's in CAL-CS by way of the parameters in the model file. In short, CAL-CS was doing one thing and GDS was correcting for another.

-- Where do we stand? At the next available opportunity, we will take another calibration measurement suite and use it to reset the calibration one more time, now that we know what went wrong and how to fix it. I've uploaded a comparison of a few broadband pcal measurements (image link). The blue curve is the current state of the calibration error. The red curve was the calibration state during the high profile event earlier this week. The brown curve is from last Thursday's calibration measurement suite, taken as part of the regularly scheduled measurements.

-- Moving forward, I and others in the Cal group will need to adhere more strictly to the procedures we already have in place:
1. Double check that any changes include only what we intend at each step.
2. Commit all changes to any report in place immediately and include a useful log message (we also need to fix our internal tools to handle the report git repos properly).
3. Only update the calibration while there is a thermalized IFO that can be used to confirm that things come back properly, or, if done while the IFO is down, require Cal group sign-off before going to observing.
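As referenced above, here is a minimal numerical sketch of the drivealign gain adjustment from steps 1-3. All values are hypothetical and illustrative; this is not a Cal group tool.

```python
# Hypothetical numbers illustrating the LHO drivealign adjustment convention.
kappa_tst = 1.015                 # KAPPA_TST has drifted +1.5% from 1
current_drivealign_gain = 184.0   # placeholder SUS-ETMX TST DRIVEALIGN L2L gain

# Scale the gain down by the same fraction the ESD strength drifted up,
# so the product (ESD strength x drivealign gain) returns to its value
# at the time of the last calibration update.
new_drivealign_gain = current_drivealign_gain / kappa_tst
print(f"set DRIVEALIGN gain to {new_drivealign_gain:.3f}")

# Per the LHO convention described above, do NOT copy this new gain into
# CAL-CS or into the pyDARM ini model.
```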
Posting here for historical reference.
The propagation of the correction for the incorrect calibration took place in an email thread between myself, Joseph Betzwieser, Aaron Zimmerman, and Colm Talbot.
I had produced a calibration uncertainty with the necessary correction to account for the effects of this issue, attached here as a text file and as an image showing how it compares against our ideal model (blue dashed) and the readouts of the calibration monitoring lines at the time (red pentagons).
Ultimately the PE team used the inverse of what I post here, since as a result of this incident it was discovered that PE had been ingesting the uncertainty in an inverted fashion up to this point.
I am also posting the original correction transfer function (the blue dashed line in Vlad's comment's plot) here from Vlad for completeness. It was created by calculating the modeled response of the interferometer that we intended to use at the time (R_corrected) over the response of the interferometer that was running live at the time (R_original), corrected for the online corrections (i.e. time dependent correction factors such as Kappa_C, Kappa_TST, etc.). So, to correct, one would take the calibrated data stream at the time,
bad_h(t) = R_original(t) * DARM_error(t),
and correct it via
corrected_h(t) = R_original(t) * DARM_error(t) * R_corrected / R_original(t) = bad_h(t) * R_corrected / R_original(t).
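For completeness, a minimal sketch of applying such a frequency-domain correction transfer function to a stretch of miscalibrated strain. This is not the PE group's actual code; the function and variable names are hypothetical, and the correction is assumed to be supplied as complex samples on a frequency grid.

```python
import numpy as np

def apply_correction(bad_h, fs, freqs_corr, tf_corr):
    """Multiply the FFT of the miscalibrated strain bad_h (sampled at fs)
    by the correction transfer function R_corrected / R_original, given as
    complex samples tf_corr on the grid freqs_corr, and return the
    corrected time series."""
    n = bad_h.size
    h_fft = np.fft.rfft(bad_h)
    f = np.fft.rfftfreq(n, d=1.0 / fs)

    # Interpolate the complex correction onto the FFT frequency grid.
    corr = np.interp(f, freqs_corr, tf_corr.real) \
         + 1j * np.interp(f, freqs_corr, tf_corr.imag)

    return np.fft.irfft(h_fft * corr, n=n)
```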
So our understanding of what was wrong with the calibration around September 25th, 2024 00:00 UTC has improved significantly since then. We had 4 issues in total:
1) The above-mentioned drivealign gain mismatch between the model, h1calcs, the interferometer, and the GDS calibration pipeline.
2) The ETMX L1 stage rolloff change that was not in our model (see LHO alog 82804).
3) LHO was not applying the measured SRC detuning to the front end calibration pipeline - we started pushing it in February 2025 (see LHO alog 83088).
4) The fact that pydarm doesn't automatically hit the load filters button for newly updated filters means sometimes humans forget to push that button (see for example LHO alog 85974). It turns out that night the optical gain filter in the H1:CAL-DARM_ERR filter bank had not been updated. Oddly enough, the cavity pole frequency filter bank had been updated, so I'm guessing the individual load button was pressed for that bank.

In the filter archive (/opt/rtcds/lho/h1/chans/filter_archive/h1calcs/), specifically H1CALCS_1411242933.txt, the inverse optical gain filter is 2.9083e-07, which is the same value as the previous file's gain. However, the model optical gains did change (3438377 in the 20240330T211519Z report, and 3554208 in the bad report that was pushed, 20240919T153719Z). The EPICS records for the kappa generation were updated, so we had a mismatch between the kappa_C value that was calculated and the optical gain to which it was applied - similar to the actuation issue we had. The filter should have changed by a factor of 0.9674 (3438377/3554208). This resulted in the monitoring lines showing ~3.5% error at the 410.3 Hz line during this bad calibration period. It also explains why there's a mismatch between the monitoring lines and the correction TFs we provided that night at high frequency. Normally the ratio between PCAL and GDS is 1.0 at 410.3 Hz, since the PCAL line itself is used to calculate kappa_C at that frequency and thus matches the sensing at that frequency to the line. See the grafana calibration monitoring line page.

I've combined all this information to create an improved TF correction factor and uncertainty plot, as well as more normal calibration uncertainty budgets. "calibration_uncertainty_H1_1411261218.png" is a normal uncertainty budget plot, with a correction TF from the above fixes applied. "calibration_uncertainty_H1_1411261218.txt" is the associated text file with the same data. "H1_uncertainty_systematic_correction.txt" is the TF correction factor that I applied, calculated with the above fixes. Lastly, "H1_uncertainty_systematic_correction_sensing_L1rolloff_drivealign.pdf" is the same style plot Vlad made earlier, again with the above fixes.

I'll note the calibration uncertainty plot and text file were created on the LHO cluster, with the /home/cal/conda/pydarm conda environment, using the command:
IFO=H1 INFLUX_USERNAME=lhocalib INFLUX_PASSWORD=calibrator CAL_ROOT=/home/cal/archive/H1/ CAL_DATA_ROOT=/home/cal/svncommon/aligocalibration/trunk/ python3 -m pydarm uncertainty 1411261218 -o ~/public_html/O4b/GW240925C00/ --scald-config ~cal/monitoring/scald_config.yml -s 1234 -c /home/joseph.betzwieser/H1_uncertainty_systematic_correction.txt
I had to modify the code slightly to expand out the plotting range - it was much larger than the calibration group usually assumes. All these issues were fixed in the C01 version of the regenerated calibration frames.
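For reference, the 0.9674 factor and the few-percent line error quoted above follow directly from the two model optical gains; a quick arithmetic sketch (only the two optical gain values are taken from the entry above):

```python
# Arithmetic behind the un-updated inverse optical gain filter described above.
old_optical_gain = 3438377          # from the 20240330T211519Z report
new_optical_gain = 3554208          # from the (bad) 20240919T153719Z report

# The H1:CAL-DARM_ERR inverse-optical-gain filter should have been rescaled
# by this factor when the new report was pushed, but it was not:
missing_factor = old_optical_gain / new_optical_gain
print(f"missing filter gain update: {missing_factor:.4f}")   # ~0.9674

# Leaving the old value in place scales that path by 1/missing_factor,
# i.e. roughly the ~3.5% error seen at the 410.3 Hz monitoring line.
print(f"resulting scale error: {1/missing_factor - 1:.1%}")  # ~3.4%
```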