J. Betzwieser, V. Bossilkov, L. Dartez, J. Kissel

We've updated the calibration to account for the errors that have been in play for a while, as discussed in LHO aLOG 82804. This comes after trying last week and running into issues; see 82935 and 83083.

The highlights of this change are:
(1) Employing a detuned sensing function for the first time in O4.
(2) Updating the computational delay in the actuation stages.
(3) Updating the UIM distribution filters to match what's installed in the true ETMX_L1_LOCK bank.
(4) Making sure the ETMX L3 DRIVEALIGN GAIN is self-consistent everywhere.

The pushed model parameter set includes measured interferometer parameters based on the 20250222T193656Z measurement (i.e. (1) and (2)); the other two changes, (3) and (4), are digital filter and settings self-consistency additions that were done by hand.

The summary of the resulting change in systematic error is shown in the only attachment, with black as "before" and blue and red as "after," where blue is with not-yet-great TDCFs and red is after 15 minutes of TDCF "burn in."

%%%%% Process %%%%%

After Joe / Louis / Vlad concocted a model parameter set they were happy with and tagged it as "valid," we did the following. (Quoted command lines starting with "$" are run on the LHO control room workstations; those starting with "@" are run on the LHO LDAS cluster.)

- Confirmed that I *don't* need to activate any special conda environment on the local LHO control room workstations.
- Checked the pydarm version to ensure it is what Louis wants it to be:
$ pydarm --version
20250227.0
i.e. this tag of the pydarm code: https://git.ligo.org/Calibration/pydarm/-/tags/20250227.0. (Note: to find these tags, go to the pydarm homepage https://git.ligo.org/Calibration/pydarm, then use the sidebar to navigate to "Code" > "Tags".)
- Locally checked the about-to-be-pushed calibration parameter set with
$ pydarm export 20250222T193656Z
Here, I trusted that all the "various DARM model transfer function values at calibration line frequencies" -- the so-called "EPICS records" -- had been validated by Joe / Louis / Vlad, but we went through the expected filter changes together.
- Also looked through the (a bit more) human-readable .json file for the report in the report directory, /ligo/groups/cal/H1/reports/20250222T193656Z,
$ firefox foton_filters_to_export.json
and sanity checked that version of what was about to be exported.
- HIT GO ON THE EXPORT:
$ pydarm export --push 20250222T193656Z
This implicitly addresses highlight (2) from above, since the updated computational delay is part of the pushed model parameter set.

Addressing highlight (1) from above,
- By hand, loaded filter coefficients on the CALCS model to install the three new DARM_ERR filters for the updated (inverse) sensing function model.
- By hand, turned on the new FM8 SRCD2N filter in the CAL-CS_DARM_ERR bank (FM9 and FM10, for the [inverse] cavity pole and optical gain, were already on, since they've been used consistently throughout the O4 run).
- In the CALCS SDF OBSERVE.snap file (which is the same as the safe.snap file), I first individually accepted the CAL-CS_DARM_ERR FM8 filter being ON, THEN accepted the ~80 "EPICS records" for the "model values at calibration line frequencies."

Addressing highlight (4) from above,
- We collectively reviewed that
. H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN (real, in-loop value),
. H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN (CALCS replica actuation model value), and
. the [actuation_x_arm] pydarm parameter tst_drive_align_gain (the pydarm parameter replica of the actuation model value)
were all self-consistently the current value of 198.664 (one way to script this check is sketched just below).
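(For the record, not what we actually ran: a minimal Python sketch of that three-way consistency check. It assumes pyepics is importable in the workstation environment and that the report's model parameter file is named pydarm_H1.ini -- both of those are my assumptions.)

# Minimal sketch of the DRIVEALIGN gain self-consistency check (assumptions noted above).
import configparser
from epics import caget

in_loop = caget('H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN')                    # real, in-loop SUS value
calcs_replica = caget('H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN')   # CALCS replica value

ini = configparser.ConfigParser()
ini.read('/ligo/groups/cal/H1/reports/20250222T193656Z/pydarm_H1.ini')   # assumed filename
pydarm_value = float(ini['actuation_x_arm']['tst_drive_align_gain'])     # pydarm parameter replica

print(in_loop, calcs_replica, pydarm_value)   # all three should read 198.664
assert abs(in_loop - calcs_replica) < 1e-3 and abs(calcs_replica - pydarm_value) < 1e-3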
Addressing highlight (3) from above,
- By hand, Louis copied the H1:SUS-ETMX_L1_LOCK_L FM6 "aL1L3" filter coefficients over into the equivalently named replica H1:CAL-CS_DARM_FE_ETMX_L1_LOCK_L FM6 filter, and loaded filter coefficients in the CALCS model again.

Then on to restarting the GDS pipeline,
- Accepted the H1CDSSDF table to capture CALIB_REPORT_ID, HASH, and GDS HASH.
- Joe pushed the validated report to the LHO LDAS cluster,
$ pydarm upload 20250222T193656Z
- I logged into the LDAS cluster and made sure that the report made it to the cluster OK,
$ ssh jeffrey.kissel@ldas-grid.ligo-wa.caltech.edu
@ cat /home/cal/archive/H1/reports/last-exported/id
20250222T193656Z
- Back on the workstation, I printed out the checksum for the GDS filters we were about to restart the GDS pipeline with,
$ cd /ligo/groups/cal/H1/reports/20250222T193656Z/
$ sha256sum gstlal_compute_strain_C00_filters_H1.npz
3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e  gstlal_compute_strain_C00_filters_H1.npz
- HIT GO ON RESTARTING THE GDS PIPELINE, back on the workstation:
$ pydarm gds restart
and watched the command-line output for the checksums it spat back about what it installed, to make sure they were the same as above (a scripted version of this comparison is sketched at the end of this section):
[...]
target GDS filter file sha256 checksum: 3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e
[...]
target GDS filter file sha256 checksum: 3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e
[...]
Connection to h1guardian1 closed.
2025-02-27 10:39:23 PDT
==============
- Waited 12 minutes, looked for data from H1 on web sites, and ran
$ pydarm gds status
- Now started to validate whether the push worked:
. Went to NLN_CAL_MEAS.
. Ran the excitation template,
$ diaggui /ligo/home/louis.dartez/pcaly2darm_bb_excitation.xml &
. Let the 300-average, ~10 minute excitation run to completion, but ignored the answer from it, since it's a poorly calibrated DELTAL_EXTERNAL / PCAL TF.
(A) After 4 to 6 minutes of running the excitation template, waiting for PCAL to show up in the DTT-accessible frames, we ran the better-calibrated template,
$ diaggui /ligo/home/louis.dartez/pcaly2darm_broadbands/pcaly2darm_broadbands_compare.xml &
updating its date and time to fall within the time of the above BB excitation template, and saving the answer to /ligo/home/jeffrey.kissel/2025-02-27/2025-02-27_185617UTC_H1_PCAL2DELTAL_BB_300avgs.xml
(B) Using the same "better calibrated offline measurement" template, we also gathered "before" data from last Saturday's suite.
. We thought (A) is definitely better than (B), but saw a hump at ~70 Hz. Suspecting this was the GDS kappas not yet burned in at a good measured value, we turned on the calibration lines for ~15 minutes, allowing GDS to compute good updated kappa values to be used during the broadband measurement.
(C) Took the measurement again after having the calibration lines ON for ~15 minutes: 2025-02-27_192525UTC_H1_PCAL2DELTAL_BB_300avgs.xml

The results are attached -- see 2025-02-27_192525UTC_H1_PAL2GDSL_BB_300avgs.png. This transfer function, a measure of the systematic error in the calibration, has much less error below 50 Hz, which was the goal. We're sad that there's still a ~2% error hump at 70 Hz, and that it is inverted relative to before, but we consider this good enough.

The observing segment that started 2025-02-27 20:18 UTC has this new calibration.
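(As promised above, the checksum comparison can be scripted rather than eyeballed. This is a minimal Python sketch using only the standard library; the path and expected digest are the ones quoted above.)

# Sketch: independently recompute the GDS filter file checksum and compare it against the
# value that `pydarm gds restart` reported installing (copied from the output above).
import hashlib

path = '/ligo/groups/cal/H1/reports/20250222T193656Z/gstlal_compute_strain_C00_filters_H1.npz'
expected = '3c12baf1aac516042212f233d3f2f574a8b77ef25a893da9a6244a2950c42d1e'

with open(path, 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)
print('MATCH' if digest == expected else 'MISMATCH -- do not trust the restart')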
For final clean up, I committed the following affected files to the userapps repo (rev 30829):
- CALCS filter file: /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt
- CALCS SDF safe / observe file: /opt/rtcds/userapps/release/cal/h1/burtfiles/h1calcs_safe.snap
- CDSSDF SDF safe / observe file: /opt/rtcds/userapps/release/cds/h1/burtfiles/h1cdssdf/h1cdssdf_safe.snap
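(Not part of what we did, just a note for the record: a quick way to confirm the userapps copy of the CALCS filter file matches the installed front-end copy is sketched below. It assumes the live foton file lives in the usual /opt/rtcds/lho/h1/chans/ location -- that path is my assumption, not something verified in this log.)

# Sketch: compare the committed userapps copy against the assumed live front-end copy.
import filecmp

live = '/opt/rtcds/lho/h1/chans/H1CALCS.txt'                          # assumed live location
repo = '/opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt'   # committed copy
print('identical' if filecmp.cmp(live, repo, shallow=False) else 'FILES DIFFER')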