H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:27, Monday 22 May 2023 - last comment - 17:54, Wednesday 24 May 2023(69796)
On Which Metric to Group / Trigger Sensing Function Systematic Error During Thermalization
J. Kissel, J. Rollins, E. Goetz (with ideas, support and inspiration from others)

Picking up where I left off on the saga of the IFO's thermalization vs. the DARM sensing function (LHO:69593), the open actions were:
    (a) more, lower, frequency points
    (b) synchronizing the thermalization period to a point in time 
    (c) more data sets
all to better understand whether the sensing function during each thermalization period evolves consistently -- since the first three examples we looked at were far from consistent.

Namely, we're still after a single metric of thermalization that we can use to create a look-up table of the form: "when the metric has value Z, apply this extra XX% / YY deg of frequency-dependent, modeled systematic error to the sensing function's systematic error budget."

I've not yet been able to execute (a), and probably won't before the start of the run (and thus maybe never). So we'll have to model what we can from these four frequency points. 

This aLOG focuses on (b) and (c). 

Regarding the number of data sets, (c), where LHO:69593 only characterized 3 data sets, here I've gathered 10, all since May 10th 2023. The times and durations are:
    UTC Start           UTC Stop            GPS Start     GPS Stop      Duration    Ref. ID     Notes
    2023-05-20 22:00    2023-05-21 02:00    1368655218    1368669618    4.00        1          (has systematic error lines amp increase)
    2023-05-20 15:40    2023-05-20 19:40    1368632418    1368646818    4.00        2          (has systematic error lines amp increase)
    2023-05-20 07:45    2023-05-20 11:45    1368603918    1368618318    4.00        3          (has systematic error lines amp increase)
    2023-05-19 19:30    2023-05-19 23:30    1368559818    1368574218    4.00        4          (has systematic error lines amp increase)
    2023-05-18 11:00    2023-05-18 15:00    1368442818    1368457218    4.00        5
    2023-05-17 18:48    2023-05-17 22:00    1368384498    1368396018    3.20        6
    2023-05-16 04:48    2023-05-16 08:47    1368247698    1368262038    3.98        7          (has glitch)
    2023-05-15 04:50    2023-05-15 08:50    1368161418    1368175818    4.00        8
    2023-05-14 08:15    2023-05-14 11:45    1368087318    1368099918    3.50        9
    2023-05-10 11:18    2023-05-10 15:18    1367752698    1367767098    4.00        10

Note, here, unlike with the few data sets before, we've (sadly) had enough lock losses -- and thus enough fresh thermalization stretches -- that I can select only lock stretches that are 4 hours or longer.
This way, if we go with *time* synchronization, we have enough data within each stretch for the data to comfortably be declared "thermalized."
 
Regarding synchronization of the data sets, (b), Jamie had the idea of synchronizing the sensing function to IFO arm power rather than to time, since the various ways of looking at the data from LHO:69593 indicate that the sensing function evolves as several mixed exponential, if not random, functions of time (especially within the first 30 minutes after the IFO achieves nominal input power). On the other hand, at least by eye and with what limited study we had, the arm power seemed to be evolving in the same way as the sensing function (and this intuitively made sense, given that the optical spring present in the sensing function is some relationship between the carrier power in the arm cavities vs. the power in the SRC).

The pdf attachment, sensingFunction_syserror_vs_power.pdf, shows the comparison between these 10 data sets.
    (i) Page 1: shows all 10 data sets' arm power as a function of time, and then
    (ii) Pages 2-11: show 3D bode plots, where the frequency, magnitude, and phase of the sensing function systematic error transfer function are shown against arm power (on the left) and time (on the right).

Note that each "arm power" data point over the 4-hour stretch is the median power over each two-minute stride of 16 Hz data, where each 16 Hz sample is the average of the two arm powers from the channels
    H1:ASC-X_PWR_CIRC_OUT16 
    H1:ASC-Y_PWR_CIRC_OUT16.
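The reduction described above can be sketched in a few lines of numpy (a hypothetical helper for illustration -- not the actual calibration pipeline code -- assuming the two circulating-power channels have already been fetched as arrays):

```python
import numpy as np

def arm_power_trend(x_pwr, y_pwr, fs=16, stride_sec=120):
    """Reduce two 16 Hz circulating-power time series to one trend:
    average the X and Y arm powers sample-by-sample, then take the
    median over each two-minute stride.  (Illustrative helper only.)"""
    mean_pwr = 0.5 * (np.asarray(x_pwr) + np.asarray(y_pwr))
    n = fs * stride_sec                    # samples per stride (1920)
    n_strides = len(mean_pwr) // n         # drop any partial final stride
    strides = mean_pwr[: n_strides * n].reshape(n_strides, n)
    return np.median(strides, axis=1)      # one power value per two minutes
```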

Here's what I observe from this collection of data sets:
    (1) These additional data sets' sensing functions are just as inconsistent in the first 30 minutes after achieving the nominal 76 W PSL input power as the previous three from LHO:69593.
    That means that if we're going to create a model of this systematic error to add to the collection of time-dependent response function systematic errors, as we did in O3, it's going to *have* to have large uncertainty, at least in the first 30 minutes.

    (2) The arm power vs. time evolution is also inconsistent from lock stretch to lock stretch, both in rate of change as well as in final thermalized arm power.
        (x) During the discussion of the plots in LHO:69593, during last week's LHO commissioning meeting, the suggestion was that the thermalization may be dependent on how cold the optics are at the start of heating them back up, or -- phrased differently -- if it's been a while since the last well-thermalized, high power lock segment, then the heating up may look different.
        This is consistent with what's seen here -- the orange data set (set 9, starting at 2023-05-14 08:15 UTC) has one of the lower starting powers, and achieves a higher power than the other 9 sets, at a faster rate. Unlike the other 9 data sets, set 9 had 13 hours of "dead time" prior to re-lock-acquisition, whereas most other data sets had "only" 3 hours or less, so the supposition is that in those other sets the optics had not yet fully cooled down.
        (y) Also, the thermalized power differs between stretches, by as much as 4-5 kW. This should be less of a big deal, because the SRC detuning will have landed on the SRCL offset value that *retunes* the response, which means the only things *left* changing with time, or from lock stretch to lock stretch, in the sensing function are the optical gain and cavity pole, which are constantly being measured and corrected for.
    (3) The arm power will be about as good a metric for thermalization as log(time) -- not perfect, but better than linear time.

So, we'll go with the arm power as the metric for thermalization, 'cause ...why not? We're winging this anyways, and we don't have time to research other channels or ideas (or even go back to other older ideas).

Now it's on to manipulating this data into arm power (as a proxy for time) slices, and then fitting the *collection* of 10 transfer function values at each arm power, such that we have a representative sensing function systematic error transfer function -- with quantified uncertainty -- at each arm power level. I.e., we'll "stack" these arm-power slices, feed them to the GPR fitting monster, and then create the desired look-up table of fits per arm power.
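As a sketch of that slicing-and-stacking step (with an illustrative data layout and invented names -- not the actual analysis code), pooling each stretch's transfer function measurements into shared power bins might look like:

```python
import numpy as np
from collections import defaultdict

def stack_by_power(tf_sets, bin_edges):
    """Regroup per-lock-stretch TF measurements into arm-power slices.
    tf_sets is a list of (powers, tfs) pairs: `powers` holds the
    two-minute-median arm power at each measurement, and `tfs` the
    complex sensing-function systematic-error TF measured there.
    Returns {bin_index: list of complex TFs} pooled across all
    stretches.  (Illustrative sketch only.)"""
    slices = defaultdict(list)
    for powers, tfs in tf_sets:
        idx = np.digitize(powers, bin_edges)  # which power slice each point lands in
        for i, tf in zip(idx, tfs):
            slices[int(i)].append(tf)
    return slices
```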

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:54, Wednesday 24 May 2023 (69898)
I've finally wrangled python's handling of 3D data structures enough to re-arrange the above data sets into a "stacked" large meta-collection of arm power slices from all 10 thermalization stretches, rather than keeping them segregated as 3D bode plots vs. time from each thermalization stretch.

Remember, the goal is to bin these sensing function systematic error transfer functions by similar arm power values, such that we can create a representative model of what the sensing function systematic error is doing for a given arm power at *any* time.
In other words, in the *response* function systematic error budget, which is computed at an arbitrary time, we trigger the application of "this" systematic error model given an arm cavity power of "that" value at that time.

I've conditioned the data I put into these meta-collections in the following way:
(1) How fine a grain do we need in terms of arm power? 1 kW steps? 2 kW steps?

    Given that the data collections contain arm powers that range from ~396 kW to ~435 kW, 1 kW steps means 40 different models. 
    1 kW steps seem like a good balance between
        - "a huge number of options for the look-up table" and 
        - "don't bin too many arm power values together, because in the beginning of thermalization, the arm power can span 10 kW in 15 minutes."
    This choice of 1 kW is admittedly arbitrary, or at least the metric for choosing the step size is human at the moment. "Perfect is the enemy of good enough."

(2) On the low-end of the arm powers, at the earliest parts of the thermalization, we don't have a lot of data, and the transfer function answer varies wildly. 
    So, I group all powers below 404 kW into one grouping.
    While this "all TFs from the 10 data sets, when the power is below 404 kW" collection spans a large magnitude and phase, a fit to this data set is legit, given
    (a) it's an accurate, semi-statistical representation of what's going on, but even if not,
    (b) this group will be used quite rarely, given that the start of the 4-hour data sets is always *before* we hit nominal low noise. (The temporary thermalization 
        lines were loud enough, and the low frequency noise was low enough, that we can start getting information about the detuning before we hit NOMINAL_LOW_NOISE.)

    So that knocks the number of power groups to 32.
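Numerically, the binning described in (1) and (2) above amounts to something like the following (edge values illustrative):

```python
import numpy as np

# 1 kW-wide bin edges spanning the thermalized range; anything below
# the first edge falls into the single <404 kW "early thermalization"
# group.  (Sketch of the binning logic, not the actual analysis code.)
edges = np.arange(404.0, 435.0, 1.0)   # 404, 405, ..., 434 kW: 31 edges
n_groups = len(edges) + 1              # floor group + 31 slices = 32

# np.digitize maps a measured arm power to its group index:
# 0 -> below 404 kW, 1..31 -> the 1 kW slices.
group = np.digitize([403.2, 404.5, 434.9], edges)
```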

(3) The final data conditioning is to remove the data set Ref ID 7 (starting on 2023-05-16 04:48 UTC) because the glitch in the IFO near the end of the thermalization 
    stretch spoils the fidelity of the sensing function systematic error measurement. So, I only use the thermalization from 9 data sets instead of 10. Still leaves 
    *plenty* of transfer functions at a given power to inform a GPR fit. 

OK, so let's show the results of this sorting and data conditioning.

sensingFunction_syserror_vs_power.pdf is an updated version of the same collection of 3D bode plots from the above aLOG (LHO:69796), but with the Ref ID 7 data set removed, and an updated first page which is labeled a bit better.

sensingFunction_syserror_vs_powerslices.pdf is the newly sorted and conditioned data, showing 
    Page 1: a repeat of the first page to show the power trend vs. time (this really helped guide the decisions I made regarding power bin step sizes and the upper and lower limits of the power bins, so I think it's useful to re-include)
    Pages 2 thru 33: the 32 groups of sensing function systematic error transfer functions, grouped by arm power.

As you page through them, you get the gratifying sense that the span of TF magnitude and phase of the data really reflects the range of TF magnitude and phase that had occurred in these nine data sets.
Further, you see, gratifyingly, that as the arm power increases, the spread of the sensing function's systematic error transfer function magnitude and phase gets smaller, tightening up, and the value of the error settles closer and closer to 1.0 magnitude and 0 deg phase -- or at least lands on the thermalized systematic error that we've seen before, e.g. in the sensing_gpr.png from the GPR collection in report 20230510T062635Z, which is currently being used in the response function uncertainty budget.

So, we'll use each of these pages of transfer function meta-collections as the input to the GPR fits that will inform the look-up table.
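For concreteness, here's a minimal, hand-rolled sketch of what a GPR fit of one power slice could look like: a squared-exponential kernel in plain numpy, acting as a stand-in for the actual "GPR fitting monster" in the calibration pipeline. All names and hyperparameters here are invented for illustration.

```python
import numpy as np

def gpr_fit(x_train, y_train, x_test, length=0.5, sigma_f=0.1, noise=1e-2):
    """Minimal Gaussian-process regression with a squared-exponential
    kernel.  x might be log10(frequency), y e.g. the systematic-error
    TF magnitude within one arm-power slice.  Returns the posterior
    mean and 1-sigma band at x_test.  (Illustrative sketch only.)"""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = k(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

Fitting, say, the TF magnitude vs. frequency for each power slice this way yields both a best-fit curve and a 1-sigma band, i.e. the "representative transfer function with quantified uncertainty" per bin that the look-up table needs.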




Non-image files attached to this comment