Reports until 14:51, Monday 21 September 2015
H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 14:51, Monday 21 September 2015 - last comment - 18:15, Tuesday 22 September 2015(21746)
Motivation to increase Actuation Path Delay before the SUM in CAL-DELTAL_EXTERNAL
J. Kissel on behalf of P. Fritschel & M. Wade

Peter and Maddie have been trying to understand the discrepancies seen between the CAL-CS front-end calibration and the (currently running offline) GDS pipeline -- see Maddie's comparisons in LHO aLOG 21638.

Peter put together an excellent summary on the calibration mailing list that's worth reproducing here because it motivates changing the actuation path delay in the CAL-CS model, which we intend to do tomorrow. We will change the actuation delay from its current 4 clock cycles to Peter's suggested 7 clock cycles.

On Sep 17, 2015, at 6:39 PM, Peter Fritschel  wrote:

Maddie, et al.,

I spent some time looking into this (GDS vs CALCS) today, and I think I have a few
insights to share.

Bottom line is that I think the GDS code is doing the right thing, and that the corrections
[to the front-end calibration that are used] make sense given the way things are done. And, I think there is a simple
way to make the CAL-CS output get closer to the GDS output.

As Maddie pointed out, the amplitude corrections we are seeing from the GDS code in the
bucket (50-300 Hz or so) are caused mainly by the phase from the anti-alias (AA) and 
anti-image (AI) filters, which are accounted for in the GDS model but not in the CAL-CS one. 

Maddie already gave some numbers for 100 Hz, and pointed out that the relative phase shift
she is applying (16.4 degrees) is 8 degrees larger than the relative phase shift that
the CAL-CS model applies (8.8 degrees, from 244 usec). I’m referring to the relative 
phase shift between the DELTAL_CTRL and DELTAL_RESIDUAL signals.

The first thing to note is that this difference is going to have different effects on the
L1 and H1 GDS calibration, because they have different DARM open loop gain transfer functions. 

The simple picture for the region we are talking about is we are looking at the errors
in the sum: 1 + a*exp(i*phi), as a function of small changes in phi. Here, the ‘1’ represents
the DARM error signal, ‘a’ represents the DARM control signal, and is less than one (but 
not much smaller than 1). ‘phi’ is the relative phase between the two channels, and it is
errors in this phase (or small changes to it) that we are talking about. The magnitude of the
sum is most sensitive to changes in phi for phi = 90 deg. So to bound the effect, assume
phi = 90 deg. At this point, the sensitivity is approximately:

   d|sum|/dphi = a

Sticking with 100 Hz as an example, the error in phi that GDS is correcting is 8 degrees,
or 0.14 rad. ‘a’ is the DARM open loop gain at 100 Hz, which is different for L1 
and H1:

   L1, a = 0.6 —>  d|sum| = 0.084
   H1, a = 0.4  —> d|sum| = 0.056
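Peter's bound can be checked numerically. A minimal sketch (the function name is mine, for illustration) that scans phi and confirms the largest change in |1 + a*exp(i*phi)| for an 8 degree perturbation is close to a*dphi:

```python
import numpy as np

def max_mag_error(a, dphi, n=200000):
    """Max change in |1 + a*exp(i*phi)| when phi is perturbed by dphi."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    f = np.abs(1.0 + a * np.exp(1j * phi))
    f_shift = np.abs(1.0 + a * np.exp(1j * (phi + dphi)))
    return np.max(np.abs(f_shift - f))

dphi = np.deg2rad(8.0)  # the 8 degree phase error at 100 Hz
for ifo, a in [("L1", 0.6), ("H1", 0.4)]:
    # Peter's bound: d|sum| ~ a * dphi
    print(f"{ifo}: numeric max = {max_mag_error(a, dphi):.3f}, "
          f"bound a*dphi = {a * dphi:.3f}")
```

The numeric scan recovers 0.084 for L1 and 0.056 for H1, matching the quoted numbers.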

These are the maximum possible errors, depending on ‘phi’. Maddie’s latest plots show a
correction at 100 Hz of 7% for L1, 3.5% for H1. Quite understandable.

For higher frequencies, the phase error is going to increase, but ‘a’ (open loop gain)
will decrease, so you need to look at both.

At these frequencies the phase shift/lag from the AA and AI filters (digital and analog)
is linear in frequency, so we can easily make the extrapolations. 

Maddie’s comparison plot shows that the biggest relative difference is at 250 Hz, where it
is 9%. At 250 Hz, the phase shift error is going to grow to (250/100)*8 = 20 deg = 0.35 rad.
For L1, the DARM OLG at 250 Hz is about 0.3 in magnitude (a). So the maximum error is:

  d|sum| = 0.105 = 10.5%. (vs. 9% observed)

For H1, Maddie’s plot shows a relative difference of about 8% at just below 300 Hz -- say 280 Hz.
The phase shift error will be (280/100)*8 = 22.4 deg = 0.4 rad. The H1 OLG at 280 Hz is about
0.2 in magnitude. So the maximum error would be:

   d|sum| = 0.08 = 8%. (vs. 8% observed)
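The two extrapolations above follow the same recipe: scale the 100 Hz phase error linearly in frequency, then multiply by the open loop gain magnitude at that frequency. A short sketch using the OLG values quoted in the email:

```python
import numpy as np

phase_err_100hz = 8.0  # degrees of phase error at 100 Hz

cases = [
    ("L1", 250.0, 0.3),  # (ifo, frequency in Hz, approx |OLG| there)
    ("H1", 280.0, 0.2),
]
for ifo, f, a in cases:
    # phase error grows linearly with frequency below ~500 Hz
    phase_err = np.deg2rad(phase_err_100hz * f / 100.0)
    print(f"{ifo} @ {f:.0f} Hz: phase error = {np.rad2deg(phase_err):.1f} deg, "
          f"max |sum| error = {a * phase_err:.3f}")
```

This reproduces the 10.5% (L1, 250 Hz) and 8% (H1, 280 Hz) maximum errors quoted above.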

I think the frequencies where the differences go to very small values in Maddie’s plots, like
150 Hz for LHO, are frequencies where phi = 0 mod pi, for which |sum| is to first order
insensitive to dphi. 

OK, so now I can believe that it is realistic to see the kinds of amplitude corrections
that Maddie is seeing, in ‘the bucket’. 

However, the above picture also suggests how CAL-CS should be able to get much closer to
the GDS output. The frequencies where this is an issue are where ‘a’ (OLG magnitude) is not
too small. But at these frequencies (below ~500 Hz), the phase lags from the AA/AI filters are 
very nearly linear in frequency. Thus, they can be well approximated by a time delay.

So here’s the suggestion: Why not increase the time delay that is applied in the CAL-CS 
model to approximate the AA/AI filter effects? Adding 3 more sample delays would come close:

   3 sample delay = 183 usec; phase shift at 100 Hz = 6.6 degrees
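For reference, the sample-delay arithmetic above can be checked with a short sketch, assuming the 16384 Hz front-end model rate (the function name is mine). It also shows the full 7-sample delay (the existing 4 plus Peter's 3), whose ~15.4 degrees at 100 Hz lands close to the 16.4 degree relative phase shift GDS applies:

```python
FS = 16384.0  # front-end model rate, Hz

def delay_phase_deg(n_samples, freq_hz, fs=FS):
    """Phase lag in degrees at freq_hz from a pure n-sample delay at rate fs."""
    return 360.0 * freq_hz * n_samples / fs

for n in (3, 7):
    print(f"{n} samples: {1e6 * n / FS:.0f} usec, "
          f"{delay_phase_deg(n, 100.0):.1f} deg at 100 Hz")
```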
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:15, Tuesday 22 September 2015 (21816)
Check out the attachment to LHO aLOG 21815 for a graphical representation of why seven 16 [kHz] clock-cycles were chosen.

Also in the above email, Peter has *not* included delay for the OMC DCPD signal chain, he has *only* considered extra delay from the AA and AI filtering.