E. Goetz, J. Kissel
When we last left off in the DARM loop calibration saga, we'd found that we didn't understand why the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function showed substantial systematic error, even though measurements agreed with the model for the individual components of the response function (C, A_T, A_P, and A_U) -- see LHO:63520. The remaining suspects included the approximation we make to remedy the low-frequency impact of the super-Nyquist and computational delay components of each term -- delaying the (A_T + A_P + A_U) actuation "ctrl" path before summing it with the (1/C) "residual" path (see LHO:63607).
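For concreteness, here's a minimal sketch of that approximation (hypothetical variable names and a hypothetical single-delay stand-in; the exact bookkeeping lives in T1900169):

    import numpy as np

    # Sketch of the "delay the ctrl path" approximation.  C, A_T, A_P, A_U
    # are complex frequency responses on a common frequency vector f; d_err
    # and d_ctrl are the residual and control signals in the frequency domain.
    def deltal_external_approx(f, d_err, d_ctrl, C, A_T, A_P, A_U, tau):
        # A single delay tau stands in for the super-Nyquist and
        # computational delay effects on the ctrl path; e.g., 7 clock cycles
        # at an assumed 16384 Hz front-end rate gives tau = 7 / 16384 s.
        delay = np.exp(-2j * np.pi * f * tau)
        return (1.0 / C) * d_err + delay * (A_T + A_P + A_U) * d_ctrl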
Investigating those super-Nyquist/computational delay effects made us realize we didn't really have the math fully figured out, so we revisited and upgraded our tech note on the subject, T1900169. This took us longer than expected because, in doing so, we made sure to future-proof the math to handle multiple-arm DARM actuation for each QUAD stage. That actuation style is already in play at L1, and will likely be considered at H1. As a side note -- we will say that, for as-of-yet-unclear noise benefit, increasing the number of actuators *drastically* increases the amount of complexity in the math and bookkeeping required to calibrate it correctly. Regardless, the upgrade to the tech note allowed us to finally, explicitly math out what's needed to correct the CAL-DELTAL_EXTERNAL_DQ channel for all of its artifacts, and CAL-PCALY_RX_PD_DQ for all of its.
Redoing that math made us realize that we got away with murder in the O3 version of the Python-based DARM loop model, aka pyDARM. While the latest generation of pyDARM has been reworked since O3 to be significantly better in terms of modularization, version control, and clarity of function, the underlying math was just a copy and paste from the O3 code. In short, that underlying code had us build up all of the individual components in one giant, clunky method, then divide out the stuff we *didn't* think was a part of the super-Nyquist effects, and treat the actuator corrections as multiplicative factors where we should have considered ratios of coherent sums. It made it *really* hard to debug, and hard to understand what needed changing when there *was* a change to the computational delays and super-Nyquist effects. Now, thanks to the recent hard work of Evan Goetz, the code has been re-written to instead build up C and A in a modular fashion, clearly calling out the super-Nyquist effects, and tightly following the math of T1900169 along the way. This code has now been released as pyDARM 0.0.3 (we'll get some lingering further updates for convenience when pyDARM 0.0.4 is released, but they won't change the new adherence to the now-accurate and clear math).
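To illustrate the "ratios of coherent sums" point -- a conceptual sketch only, with hypothetical names that are *not* the actual pyDARM 0.0.3 API:

    import numpy as np

    # Each actuation stage carries its own, explicitly called-out
    # super-Nyquist / computational delay factor g(f).  The correct
    # correction to the summed actuation path is a ratio of coherent sums...
    def actuation_correction(f, stages, sn_factors):
        num = np.sum([A(f) * g(f) for A, g in zip(stages, sn_factors)], axis=0)
        den = np.sum([A(f) for A in stages], axis=0)
        return num / den

    # ...NOT a product of per-stage multiplicative factors, which is what
    # the O3-era code effectively assumed:
    def actuation_correction_o3_style(f, sn_factors):
        return np.prod([g(f) for g in sn_factors], axis=0)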
While this hard work was going on for weeks in the background, in the foreground we fudged the calibration to give a flatter (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function -- see LHO:64096. I had left that exercise with a bad taste in my mouth because I knew that the above work still needed doing to correct the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function for its super-Nyquist artifacts, and because that work wasn't done, I didn't yet know how large of a correction it would be.
Here, finally, I demonstrate that the newly-mathed-out-and-pyDARM-coded-up corrections for these "super-Nyquist and computational delay" effects turn the intermediate raw (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function into a rather exquisite measure of the true systematic error. And that systematic error is indeed at beautifully low levels.
I do this in four ways with the attached Bode plots, each with three traces (a schematic of the trace algebra follows the list):
- the blue trace is the "raw" (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function with "the easy stuff" multiplied out:
:: the "PCAL Corrections" to (1 / CAL-PCALY_RX_PD_DQ), which include
* a "two poles at 1 Hz" whitening filter,
* an analog AA filter (a super-Nyquist effect), and
* a 65k to 16k digital AA filter (a super-Nyquist effect)
:: and the inverse of the whitening filter we use to prevent precision issues with CAL-DELTAL_EXTERNAL_DQ.
- the orange trace is the mathed-out impact of all the remaining flaws in CAL-DELTAL_EXTERNAL_DQ:
:: removal of the "treat the super-Nyquist as a computational delay" approximation,
:: multiplying back in the exact models of the computational delays and super-Nyquist effects from Section IV of T1900169
- the green trace is the blue trace multiplied by the orange trace.
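Schematically, that trace algebra looks like the following (a sketch with hypothetical helper names; the real correction transfer functions come from pyDARM 0.0.3 per Section IV of T1900169):

    import numpy as np

    # blue = raw TF with "the easy stuff" multiplied out
    def blue_trace(f, raw_tf, pcal_corrections, deltal_whitening):
        # pcal_corrections: the "two poles at 1 Hz" whitening and the
        # analog + digital AA filters, as callables returning complex arrays
        pcal = np.prod([g(f) for g in pcal_corrections], axis=0)
        return raw_tf * pcal / deltal_whitening(f)

    # orange = the remaining flaws in CAL-DELTAL_EXTERNAL_DQ: remove the
    # "treat it as a computational delay" approximation, multiply back in
    # the exact models
    def orange_trace(f, delay_approximation, exact_models):
        return exact_models(f) / delay_approximation(f)

    # green = blue * orange
    def green_trace(f, raw_tf, pcal_corrections, deltal_whitening,
                    delay_approximation, exact_models):
        return (blue_trace(f, raw_tf, pcal_corrections, deltal_whitening)
                * orange_trace(f, delay_approximation, exact_models))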
In the first and second attachments, I process the broad-band and swept-sine versions of the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer functions taken on 2022-07-22, prior to the fudging -- the black reference traces in the attachments of LHO:64096.
In the third and fourth attachments, I process the broad-band and swept-sine (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer functions taken on 2022-07-22, after the fudging -- the red traces in the attachments of LHO:64096.
Comparing the changes between the first and third, or the second and fourth, yields a beautifully consistent story:
- Prior to fudging, but after correcting for super-Nyquist effects, it looks obvious (in retrospect) that:
:: above a few hundred Hz, where the response function is dominated by the "residual" term, (1/C) -- we need to multiply by something like 1.08.
:: in the mid-frequency band around ~50 Hz, where it's a mix of the A_T and A_U stages -- we need to multiply by something like 1.05 (see the sketch after this list).
- After fudging, where we've found empirically that we need to do exactly those corrections, we get a beautifully flat transfer function in both magnitude and phase. The flat, zero phase across the band is particularly exciting because it's a critical and sensitive metric of success for this collection of coherently summed paths.
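As a sanity check on that band-by-band logic: the measured ratio is a ratio of coherent sums, so in any band where a single path dominates, the ratio limits to that path's scale factor. A minimal sketch (scale factors from item (4) below; everything else hypothetical):

    import numpy as np

    def measured_over_true(inv_C, A_T, A_P, A_U,
                           k_C=1.08, k_T=1.045, k_P=1.01):
        # inv_C, A_T, A_P, A_U: complex path responses on a common frequency
        # vector.  In a band where one path dominates, the ratio limits to
        # that path's factor; where paths mix, you get a coherent blend.
        num = k_C * inv_C + k_T * A_T + k_P * A_P + A_U
        den = inv_C + A_T + A_P + A_U
        return num / den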
Thus, I'm now*** finally confident in saying that the fudging I did on 2022-07-22 (LHO:64096) *actually* made the systematic error in the calibration lower.
You can also now clearly see that the orange trace -- the low-frequency impact of computational delay and super-Nyquist effects -- amounts to a wiggle of about 3% low at 80 Hz and 2% high at 200 Hz. Thus, you hopefully now also understand why I was reluctant to claim success when fudging the calibration to levels smaller than these wiggles. This is also a direct demonstration of how much systematic error is normally in the "raw" CAL-DELTAL_EXTERNAL_DQ channel.
So -- what's the path forward?
(1) I say now*** with three asterisks next to it because we just changed the ETMX PUM DACs yesterday -- LHO:64274. This has potentially modified the actuation strength of the PUM, which may change its contribution to the DARM response function, causing a new wiggle in the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function. So, we need to re-take these transfer functions and confirm whether anything has changed.
(2) We need to update the DTT templates for straight ASD measurements of CAL-DELTAL_EXTERNAL_DQ with these new super-Nyquist + computational delay corrections and the DELTAL whitening, and any templates that measure the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer function with the same super-Nyquist + computational delay corrections, the DELTAL whitening, and the PCAL corrections. The latter two (DELTAL whitening and PCAL corrections) haven't changed, but the super-Nyquist + computational delay corrections have. This is why the DTT template screenshots of this transfer function are flat-*ish* but not "just right" -- again with the phase being the glaring indicator that something's fishy.
(3) As a side benefit of this work, we've found that my hypothesis -- that "we need to change the relative delay between the (1/C) and (A_T+A_P+A_U) paths because we have new OMC DCPD electronics" was *the* source of systematic error -- is wrong. This is consistent with some side-investigations that Joe Betzwieser has done estimating the impact. We can safely leave this delay at 7 clock cycles.
(4) Then -- the hard work begins -- getting back to measuring and understanding why we need these 1.08*(1/C), 1.045*A_T, and 1.01*A_P fudge factors: taking the full set of sweeps again, revisiting the bookkeeping in the pyDARM model parameter set, and pushing measured values of these things to the front end.
Why? Because without that validation,
(a) we can't trust the model to give us the correct "various modeled DARM loop transfer function values at calibration line frequencies," aka EPICS records, which means the Time Dependent Correction Factors that use those EPICS records will be untrustworthy (see the schematic after item (5)), and
(b) we can't correctly model the estimate, or "budget," of the systematic error that serves as our usual cross-check on the point-estimate measurements of the same thing that the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALY_RX_PD_DQ) transfer functions provide.
(5) This also allows us to better understand the data that will newly be rolling off of the new PCALX calibration lines, which we can use to *constantly* measure the (CAL-DELTAL_EXTERNAL_DQ / CAL-PCALX_RX_PD_DQ) transfer function at all times. For now, this measurement will be analyzed in post-processing, and now we know the exact "low-frequency impact of super-Nyquist and computational delay effects" correction transfer function to use when doing it. AND it comes with its own validated pyDARM method!
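To make the dependency in (4)(a) and the post-processing in (5) concrete -- a schematic only, with hypothetical function names that are not the actual front-end, GDS, or pyDARM interfaces:

    # (4)(a): a time-dependent correction factor is a measured
    # calibration-line ratio normalized by a *modeled* value stored as an
    # EPICS record -- if the model is wrong, the inferred kappa is wrong
    # by the same factor.
    def kappa_estimate(measured_line_tf, modeled_line_tf_epics):
        return measured_line_tf / modeled_line_tf_epics

    # (5): the continuously measured (DELTAL / PCALX) line ratio gets the
    # exact super-Nyquist + computational delay correction applied in
    # post-processing.
    def corrected_line_ratio(f_line, raw_ratio, sn_delay_correction):
        return raw_ratio * sn_delay_correction(f_line)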
Let's get to work!