Reports until 16:34, Wednesday 21 June 2023
H1 CAL (DetChar, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 16:34, Wednesday 21 June 2023 - last comment - 16:50, Thursday 22 June 2023(70693)
Calibration Pushed / Updated for 60W; Systematic Error is within +/- 5% and +/- 3 deg as before at 75/76W
L. Dartez, J. Kissel

More details to come, but as of Jun 21 2023 23:30 UTC, we have updated the calibration to reflect the new IFO configuration with input power back at 60W and all of the other associated configuration changes, including but not limited to a SRCL offset of -175 [ct].
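For reference, the quality bound quoted in the title can be read as a tolerance on the complex ratio of measured to modeled response at each frequency: magnitude within +/- 5% of 1.0 and phase within +/- 3 deg of 0.0. A small illustrative check (hypothetical helper, not part of any calibration pipeline):

```python
import math

def within_tolerance(ratio, mag_tol=0.05, phase_tol_deg=3.0):
    # ratio: complex measured/modeled response at one frequency.
    # Passes if |ratio| is within +/- mag_tol of 1.0 and the phase
    # is within +/- phase_tol_deg of 0.0.
    mag_ok = abs(abs(ratio) - 1.0) <= mag_tol
    phase_deg = math.degrees(math.atan2(ratio.imag, ratio.real))
    return mag_ok and abs(phase_deg) <= phase_tol_deg

# 2% high in magnitude with 1 deg of phase: within the quoted bounds
r = 1.02 * complex(math.cos(math.radians(1.0)), math.sin(math.radians(1.0)))
print(within_tolerance(r))  # True
```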
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:59, Wednesday 21 June 2023 (70699)
The calibration update was pushed based on the second -175 [ct] SRCL offset sensing function data taken in LHO:70683.

Even though no new measurements of the actuators were taken, the N/ct actuation strength "free" parameters were also updated with, essentially, a new MCMC run on the most recent (old) data, from May 17 2023 (LHO:69684).

Here are the "free parameter" values exported to foton:
   $ pydarm export
       searching for 'last' report...
       found report: 20230621T211522Z
       using model from report: /ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1_site.ini
       filter file: /opt/rtcds/lho/h1/chans/H1CALCS.txt

          Hc: 3.4207e+06 :: 1/Hc: 2.9234e-07
          Fcc: 439.33 Hz

          Hau:  7.5083e-08 N/ct
          Hap:  6.2353e-10 N/ct
          Hat:  9.5026e-13 N/ct

    filters (filter:bank | name:design string):
       CS_DARM_ERR:10                O4_Gain:zpk([], [], 2.9234e-07)
       CS_DARM_CFTD_ERR:10           O4_Gain:zpk([], [], 2.9234e-07)
       CS_DARM_ERR:9                O4_NoD2N:zpk([439.32644887584786], [7000], 1.0000e+00)
       CS_DARM_ANALOG_ETMX_L1:4      Npct_O4:zpk([], [], 7.5083e-08)
       CS_DARM_ANALOG_ETMX_L2:4      Npct_O4:zpk([], [], 6.2353e-10)
       CS_DARM_ANALOG_ETMX_L3:4      Npct_O4:zpk([], [], 9.5026e-13)
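(A quick sanity check on the numbers above: the O4_Gain design string is just 1/Hc, the inverse of the fitted sensing optical gain, which takes DARM_ERR counts back toward meters. A one-liner confirms the rounding:)

```python
# Sanity check: the O4_Gain zpk gain is the inverse of the fitted
# sensing optical gain Hc (ct/m), rounded to 5 significant figures.
Hc = 3.4207e+06
print(f"{1.0 / Hc:.4e}")  # 2.9234e-07
```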

The calibration report on the MCMC fitting for free parameters, as well as the GPR fit based on the two measurements at -175 [ct] (20230621T211522Z in LHO:70683 and 20230621T191615Z in LHO:70671), is attached below for convenience, and has been archived on the LDAS cluster under 
    H1_calibration_report_20230621T211522Z.pdf
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 10:12, Thursday 22 June 2023 (70722)
For the primary metric of how the calibration's quality changed across the 75W-to-60W transition, the subsequent change of SRCL offset, and the calibration push, see LHO:70705. I copy and paste that attached image here for convenience.

Also repeating Louis:
The DTT template for this measurement is stored in  
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O4/H1/Measurements/FullIFOSensingTFs/
        20230621_systematic_error_deltal_external_gds_calib_strain.xml
Images attached to this comment
louis.dartez@LIGO.ORG - 11:19, Thursday 22 June 2023 (70729)
We changed is_pro_spring to False in the pyDARM parameter model set. Commit: 353de502.
jeffrey.kissel@LIGO.ORG - 16:50, Thursday 22 June 2023 (70735)
Attached are the raw, blow-by-blow notes I took during yesterday's calibration push, highlighting all the command-line commands and actions we needed to take in order to update the calibration.

Recapping here with a little more procedural clarity: 
(Any command recalled without a path can be run in any new, fresh terminal; we did not need to invoke any special conda environment, thanks to the hard work done behind the scenes by the pydarm-cmd team):

    (0) If at all possible, understand what you expect to change in the calibration ahead of time. 
        If that is *limited* to something changing that can only be measured with the full IFO, 
        i.e. you expect *only* a change in the "free parameters" (overall sensing function gain, 
        DARM cavity pole frequency, or any of the three ETMX UIM, PUM, TST actuator strengths) 
        then you run through the process outlined below, as we did yesterday. Other changes to the 
        DARM loop, like electronics changes or computational rearrangements, mean you have to do a 
        more in-depth characterization of that change, update the DARM loop model parameter set, 
        *then* start at (1).

    (1) Measure something new about the IFO. In this case we *knew* to expect a change in 
        the interferometric response of the IFO because of the ring heater changes and input 
        power change, so we remeasured the sensing function, expecting only the optical gain 
        and the cavity pole to change.
        
        $ pydarm measure --run-headless bb sens pcal
        
        We, of course, should be out of observing, and the ISC_LOCK guardian should be in NLN_CAL_MEAS.
        When the measurement is complete, you can do steps (2) through (6) with the IFO *back* 
        in NOMINAL_LOW_NOISE, and you can even go back in to OBSERVING during that time.

    (2) Process that measurement, and create the folder of material that's required for that 
        processing, as though it were a part of the on-going "epoch" of measurements where 
        you expect nothing to have changed about the DARM loop other than the time-dependent 
        corrections to the free parameters. This gives you a "report" that shows the residuals 
        between the last installed model of the IFO and your current measurement compared to 
        the rest of the measurement/model residuals in the inventory for that last "epoch." 
        In this way, you can confirm or refute your expectations of what has changed.
        
        $ pydarm report
        
        which generates the folder in 
        /ligo/groups/cal/H1/reports/20230621T191615Z/

    (3) Looking at the first results, we were disappointed that occasionally the MCMC fit 
        would land on a parameter-hyperspace island that had large SRC detuning spring 
        frequencies, even though the lower frequency limit of the data fed into the fitter 
        was ~80 Hz. As such, we adjusted the *default* model parameter set,
         
        /ligo/groups/cal/H1/ifo/pydarm_H1.ini
        changing the following parameter,
        Line 15    is_pro_spring = False
        and re-ran the report,
        $ pydarm report --force
        in order to re-run the MCMC. This worked, so we committed pydarm_H1.ini to the 
        ifo/H1/ repo as git hash 353de502.
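For intuition on what that flag changes: in the standard detuned-SRC parameterization from the LIGO calibration literature, the sensing function is an optical gain times a single cavity pole times an optical-spring term with spring frequency f_s and quality factor Q, and pro- vs anti-spring amounts to a sign convention on the detuning term. A rough numerical sketch follows -- an assumption-laden illustration, not pyDARM's actual model code:

```python
import numpy as np

def sensing_model(f, Hc, f_cc, f_s, Q, pro_spring=False):
    # Sketch of a detuned-SRC sensing function: optical gain Hc,
    # single cavity pole at f_cc, and an optical-spring term with
    # spring frequency f_s and quality factor Q. The sign convention
    # chosen here for pro- vs anti-spring is an assumption, not
    # necessarily pyDARM's exact implementation.
    f = np.asarray(f, dtype=float)
    sign = 1.0 if pro_spring else -1.0
    spring = f**2 / (f**2 + sign * f_s**2 - 1j * f * f_s / Q)
    pole = 1.0 / (1.0 + 1j * f / f_cc)
    return Hc * spring * pole

# As f_s -> 0 the spring term drops out, leaving just the fitted
# 439.33 Hz cavity pole, so |C(f_cc)| -> Hc / sqrt(2).
C = sensing_model([439.33], Hc=3.4207e6, f_cc=439.33, f_s=1e-6, Q=10.0)
```

In this sketch the flag only flips the sign of the f_s^2 detuning term; constraining it in the parameter set is one way to steer the MCMC away from the large-detuning island described above.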
 
    (4) After looking through the history of measurement/model residuals, you should 
        then have an understanding of what you want to *tag* as "valid" and of whether 
        your new measurement *is* in fact the boundary of a new epoch. This 
        may also be the time when you *don't* like what you see, so you modify the 
        controls settings of the IFO to change it further and go back to step (1). As you 
        can see from LHO aLOGs 70671, 70677, and 70683, 
        we were doing just that.

        In the end, we had *two* measurements in "the new epoch" that we liked, and one 
        measurement in the middle -- technically its *own* epoch -- that we didn't like. 

        So, after processing all the data and making no updates to the report tags so we 
        could see the whole history of the sensing function, we tagged the reports in the 
        following way

        $ cd /ligo/groups/cal/H1/reports/
        $ pydarm ls -r                  # before tagging
            20230620T234012Z valid          # Last valid 75W sensing function data set
            20230621T191615Z                # First new 60W data set, with SRCL offset -175 [ct]
            20230621T201733Z                # Second new 60W data set, with SRCL offset -165 [ct]
            20230621T211522Z                # Third new 60W data set, with SRCL offset -175 [ct]
        $ # validate the First and Third 60W data sets, both with -175 [ct] SRCL offset
        $ touch 20230621T191615Z/tags/valid 
        $ touch 20230621T191615Z/tags/epoch-sensing
        $ touch 20230621T211522Z/tags/valid  
        $ pydarm ls -r                  # after tagging
            20230620T234012Z valid
            20230621T191615Z valid epoch-sensing
            20230621T201733Z
            20230621T211522Z valid

    (5) Now that these tags are set up -- and specifically the epoch-sensing tag -- 
        only these new 60W data sets are included in the history, which means only 
        those data sets are stacked and fit to GPR. Thus, this report generation run will 
        be the "final" report that generates what we end up exporting out to the calibration 
        pipeline. Importantly, even though the epoch boundary is defined by the *first* 
        20230621T191615Z measurement, the parameters that will be installed are defined by 
        the MCMC of the *latest* 20230621T211522Z measurement. This works because we're 
        assuming the IFO is the same in this entire boundary, so we should get equivalent 
        answers (within uncertainty, and modulo time-dependent correction factors) if we MCMC 
        any of the measurements in the epoch.
        
        $ pydarm report --force
        
        yields a good report, with "free parameters," foton exports, FIR filters, MCMC 
        posteriors, and GPR fits that are ready to export to the calibration pipeline.

        Also note that all of these re-runs of the report ("pydarm report --force") 
        are *over-writing* the contents of the report, so if you want to save any interim 
        products you must move them out of the way to a different location and/or different name.
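The tag-driven selection in steps (4) and (5) can be sketched as follows -- a hypothetical re-implementation of the idea, not pydarm's code: the GPR stack takes every "valid" report at or after the most recent "epoch-sensing" report, relying on the timestamped report IDs sorting chronologically.

```python
# Hypothetical sketch of the tag logic described above, using the
# report IDs and tags from step (4).
reports = {
    "20230620T234012Z": {"valid"},
    "20230621T191615Z": {"valid", "epoch-sensing"},
    "20230621T201733Z": set(),
    "20230621T211522Z": {"valid"},
}

def epoch_reports(reports):
    # Report IDs (YYYYMMDDTHHMMSSZ) sort chronologically as strings.
    ids = sorted(reports)
    epoch_start = max(r for r in ids if "epoch-sensing" in reports[r])
    return [r for r in ids if r >= epoch_start and "valid" in reports[r]]

print(epoch_reports(reports))
# ['20230621T191615Z', '20230621T211522Z']
```

The 75W set falls before the epoch boundary and the untagged -165 [ct] set is excluded, matching the GPR inputs described above.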

    (6) We can validate what we are about to push out into the world with the dry run command,
        
        $ pydarm export
        
        where if you don't specify the report ID, it exports the latest report. In this case, 
        the latest is 20230621T211522Z, so this simplest use of the command is what we want. 
        That spits out text like what's shown in LHO:70699. 
   
        Another option is 
        
        $ pydarm status
        
        which spits out a comparison of what's *actually* installed in the front-end against 
        the latest report (which, in this case, is what we're *about* to install).
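Conceptually, that status comparison boils down to diffing two key/value sets. A toy sketch with hypothetical "installed" values (the real tool reads the front-end/EPICS state):

```python
# Illustrative sketch (not pydarm's actual code): diff the installed
# front-end values against the latest report's exported values. The
# "installed" numbers here are made up for illustration.
installed = {"Hc": 3.30e+06, "Fcc": 444.0}
report    = {"Hc": 3.4207e+06, "Fcc": 439.33}

diffs = {k: (installed.get(k), report[k])
         for k in report if installed.get(k) != report[k]}
for name, (old, new) in sorted(diffs.items()):
    print(f"{name}: installed={old} report={new}")
```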

    (7) If you're happy with what you see, then it's time to shove "the calibration" out into the world.
        (a) Presumably, the IFO is still locked, in NOMINAL_LOW_NOISE, and maybe even in OBSERVING. 
            Warn folks that you're about to take the IFO out of OBSERVING, and the DARM_FOM on the 
            wall is about to go nuts, but the IFO is fine. Try to do steps (b) through (g) as quickly 
            but accurately/completely/carefully as possible.

        (b) Push EPICS records to the front end and save new foton files.
            $ pydarm export --push

        (c) open up the CAL-CS GDS-TP screen, and look at the DIFF of filter coefficients. 
            Hit the LOAD_COEFFICIENTS button if you see what you expect from the DIFF.

        (d) on the same screen, open up the SDF OVERVIEW. Review the changes and accept if you see 
            what you expect.

        Now everything's updated in the front-end, so it's time to migrate stuff out to the cluster 
        so GDS and the uncertainty pipelines get updated. 

        (e) Add an additional tag to the report which you just pushed,
        
        $ touch /ligo/groups/cal/H1/reports/20230621T211522Z/tags/exported
        $ pydarm ls -r 
            20230620T234012Z valid
            20230621T191615Z valid epoch-sensing
            20230621T211522Z exported valid
        

        (f) Archive all the reports that are a part of this wonderful new epoch; archiving 
            pushes the whole report folder, so it includes the tags. Having the "exported" 
            tag is particularly important for the GDS pipeline. 
            
            /ligo/groups/cal/H1/reports$ arx commit 20230621T191615Z
            /ligo/groups/cal/H1/reports$ arx commit 20230621T211522Z 
            

        (g) Restart the gds pipeline, which picks up the .npz of filter coefficients from the latest 
            report marked with the "exported" tag. 
            
            $ pydarm gds restart
            

            This opens up prompts from both DMT machines, dmt1 and dmt2, to say "yes" to confirm that 
            you want to restart.

            After restarting the GDS pipeline, you can check the status of the machines as well,
            
            $ pydarm gds status
            

    (8) Once you're done with the GDS pipeline restart, then you've gotta wait ~2-5 minutes for 
        the pipeline to complete its restart. To check whether the pipeline is back up and running, 
        head to the "grafana" calibration monitoring page. 
        Presumably, the IFO is still in NOMINAL_LOW_NOISE, so eventually the live measurement of 
        response function systematic error should begin to reappear, hopefully even closer to 
        1.0 mag, 0.0 deg phase. While you're waiting, you can also pull up trends of 
            - the front-end computed TDCFs, see if those move closer to 1.0 (or in the case of f_cc, closer to the MCMC value)
            - the front-end computed DELTAL_EXTERNAL / PCAL systematic error transfer function, see if those move closer to 1.0
        In addition, the newest, latest *modeled* systematic error budget only gets triggered once 
        every hour, so you just have to be patient on that one, and check in later.

    (9) Once everything is settled, take the ISC_LOCK guardian to NLN_CAL_MEAS, and take a broad-band 
        PCAL injection for final, high-density frequency resolution, post-install validation a la LHO:70705
Non-image files attached to this comment