The Calibration group has been working hard to produce and release the LHO uncertainty budget for the first half of O4a. The case at LHO has proven to be significantly more involved than LLO's for a variety of reasons.
- LHO started the run at a much higher power, which led to a much more severe "thermalization" effect for the first hours of each lock stretch
- the IFO was quite unstable at 76W, which meant that we had more locklosses & therefore spent more time "thermalizing"
- the power was lowered to 60W in late June
- there were significant periods during which we could not trust the online TDCFs
There are probably a few more reasons that I'm not recalling right now. The biggest challenge has been dealing with the thermalization periods, as they contribute the most to our uncertainty at low frequencies. Moreover, the thermalization effect is very different between the periods when the IFO was operating at 76W and when it was operating at 60W.
As part of uncertainty generation we processed the high frequency roaming Pcal lines to measure the sensing function and inform the uncertainty budget from 1-5 kHz. More on the implementation details of this process will be shared in another alog and linked here as a comment. One detail worth noting here has to do with the fact that for O4 we're using a new report-based calibration infrastructure based on pyDARM. Each time a new set of calibration measurements are taken, the data gets processed (by either a member of the Calibration team or an LHO operator) using the pyDARM tools. The data processing step produces a report PDF file and a multitude of by-product intermediate data files. These files and the PDF are all collectively called a "report."
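As context for the report-based workflow described above, here is a minimal sketch of how one might look up which report is "in effect" at a given time, assuming only the report-naming convention visible in this alog (IDs like 20230620T234012Z, i.e. UTC timestamps). The report IDs and helper names here are illustrative, not the actual pyDARM API.

```python
from datetime import datetime, timezone

# Hypothetical report IDs following the O4 naming convention (YYYYMMDDTHHMMSSZ).
report_ids = ["20230511T120000Z", "20230620T234012Z", "20230715T083000Z"]

def parse_report_id(report_id):
    """Parse a report timestamp string into an aware UTC datetime."""
    return datetime.strptime(report_id, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)

def report_in_effect(report_ids, when):
    """Return the most recent report at or before `when` (None if none exists)."""
    candidates = [r for r in report_ids if parse_report_id(r) <= when]
    return max(candidates, key=parse_report_id, default=None)

t = datetime(2023, 6, 25, tzinfo=timezone.utc)
print(report_in_effect(report_ids, t))  # -> 20230620T234012Z
```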
In December, I provided Lilli with a set of re-generated reports that included the newly processed high frequency roaming line data at LHO. This is similar to the workflow the Calibration group employed for LLO earlier in the run. She then calculated uncertainty budgets for the first half of O4a, ending on October 1, 2023. However, all uncertainty envelopes before GPS 1371427218 were clearly wrong. As an example, the last "bad" GPS time is 1371394818 (calibration_uncertainty_H1_1371394818.png) and the first "good" GPS time is 1371427218 (calibration_uncertainty_H1_1371427218.png).
As an additional check, Lilli overlaid the uncertainty budget against measurements taken directly with the systematic error monitoring lines. The uncertainty checks can be found at: https://ldas-jobs.ligo-wa.caltech.edu/~ling.sun/lho_unc_v0.5/monitoring/. Looking specifically at the checks corresponding to the "last bad" and "first good" GPS times, we can see that at GPS 1371394818 (uncertainty_consistency_check_H1_13713948180_13713984180_GDS-CALIB_STRAIN.png) the envelope disagrees with the data taken directly using the systematic error lines, while the same check at GPS 1371427218 (uncertainty_consistency_check_H1_13714272180_13714308180_GDS-CALIB_STRAIN.png) matches the direct measurement much more closely. This indicates 1) that the uncertainty envelope during the "bad" times is likely wrong (good news, since it means the calibrated data itself is not the problem) and 2) that there is probably a bug in how the data passed to the uncertainty envelope generation is processed (i.e. a bug or bad parameter in the report generation at LHO).
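The consistency check above amounts to asking, frequency by frequency, whether the response correction measured directly from the calibration lines lands inside the uncertainty envelope. A minimal sketch of that comparison, with entirely made-up numbers (the frequencies, envelope bounds, and line measurements below are illustrative, not real LHO data):

```python
import numpy as np

# Illustrative values only: envelope bounds on the magnitude correction factor
# at a few line frequencies, vs. the correction measured directly from the
# systematic error (calibration) lines at those same frequencies.
freqs = np.array([17.1, 410.3, 1083.7])   # Hz, hypothetical line frequencies
env_lo = np.array([0.97, 0.98, 0.96])     # envelope lower bound
env_hi = np.array([1.03, 1.02, 1.04])     # envelope upper bound
line_meas = np.array([1.01, 0.99, 1.10])  # direct line measurements

def consistent(lo, hi, meas):
    """Flag, per frequency, whether the direct measurement lies inside the envelope."""
    return (meas >= lo) & (meas <= hi)

ok = consistent(env_lo, env_hi, line_meas)
for f, flag in zip(freqs, ok):
    print(f"{f:7.1f} Hz: {'OK' if flag else 'disagrees with envelope'}")
```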
Over the past few weeks, I've been trying to find this bug. I first ruled out mismatches between the front end settings (filter files, FM slots, gains, etc.) between the CAL-CS copy of the DARM loop, the actual DARM loop, and the pyDARM parameter files that are meant to mirror the (actual) DARM loop (except, of course, for the 3.2kHz pole that we already knew was missing from CAL-CS at the time).
It turns out that our suspicion that there was a problem in the report generation was correct. This bug caused every report's pydarm_H1.ini file to be populated with wrong MCMC-fitted parameters for both the sensing and actuation functions from the start of O4a until roughly June 22, 2023. There are four places that the pyDARM report system records fitted MCMC values: the PDF report, an MCMC JSON file dump, an MCMC HDF5 file dump (that includes the entire MCMC chain), and the pyDARM parameter file that has been populated with the new parameters. I was able to identify the issue by comparing these four data products against each other to check for self-consistency. In the affected period, they all matched except for the pydarm_H1.ini files. The tool I wrote for this check is attached as check_ini_vs_chain.py, and its output is attached as out.txt. The output compares the fitted sensing and actuation parameters for all reports marked valid so far in O4a.
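The heart of such a self-consistency check is simply parsing two of the by-products and comparing the same parameter within a tolerance. Here is a toy sketch of that idea, assuming a JSON MCMC dump and an ini-style parameter file with a shared section/parameter name; the file contents, section name, and parameter value below are hypothetical stand-ins, not the actual report contents or check_ini_vs_chain.py.

```python
import configparser
import json
import math

# Hypothetical miniature versions of two report by-products:
# the MCMC result dump (JSON) and the regenerated pyDARM parameter file (ini).
mcmc_json = json.loads('{"sensing": {"coupled_cavity_pole_frequency": 444.7}}')

ini_text = """
[sensing]
coupled_cavity_pole_frequency = 410.2
"""
cfg = configparser.ConfigParser()
cfg.read_string(ini_text)

def mismatches(json_params, cfg, section, rel_tol=1e-4):
    """Return parameter names whose ini value disagrees with the MCMC dump."""
    bad = []
    for name, value in json_params[section].items():
        if not math.isclose(value, cfg.getfloat(section, name), rel_tol=rel_tol):
            bad.append(name)
    return bad

print(mismatches(mcmc_json, cfg, "sensing"))
```

In the healthy (post-June-22) reports such a comparison would return an empty list for every section; in the affected period the ini values would fall out of tolerance while the PDF, JSON, and HDF5 products still agreed with each other.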
Fixing this bug for report 20230620T234012Z and regenerating the uncertainty envelope and the consistency check with the systematic error lines at GPS 1371394818 resolves the discrepancy. See uncertainty_consistency_check_H1_1371391218_1371394818_GDS-CALIB_STRAIN.png.