H1 SEI
jim.warner@LIGO.ORG - posted 12:47, Tuesday 12 September 2023 (72835)
HAM1 3dl4c single sensor un-epoxied coherence not as good as epoxied pairs

I looked at simplifying the HAM1 ground feedforward a little, by adding a single vertical sensor that wasn't epoxied to the floor. Not surprising that this doesn't have quite as good coherence as the 2 epoxied vertical sensors that are currently being used for the ff. Attached plot shows the different coherences. Red is the coherence between B_X (test sensor, no epoxy) and B_Z (epoxied sensor). Blue is the coherence between the test sensor and the HEPI Z L4Cs, green is the coherence between the epoxied sensor and the HEPI Z L4Cs. Brown is the coherence of the summed epoxied sensors with the HEPI Z L4Cs.

Comparing the green and blue coherences, it seems that epoxying might not be totally necessary. The sum of the epoxied sensors is more coherent with HAM1 HEPI than any of the single sensors, compare brown to blue or green.
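
For anyone reproducing this kind of check, here's a minimal sketch of computing a coherence like these with gwpy -- the channel names and times below are placeholders, not the actual sensors used for this measurement:

    from gwpy.timeseries import TimeSeries

    # placeholder channel names / span -- substitute the real sensor channels
    start, end = 'Sep 12 2023 16:00', 'Sep 12 2023 17:00'
    test = TimeSeries.get('H1:PEM-CS_HAM1_TEST_Z_DQ', start, end)       # un-epoxied test sensor
    hepi = TimeSeries.get('H1:HPI-HAM1_BLND_L4C_Z_IN1_DQ', start, end)  # HEPI Z L4C
    # magnitude-squared coherence, 64 s FFTs with 50% overlap
    coh = test.coherence(hepi, fftlength=64, overlap=32)
    plot = coh.plot(ylabel='Coherence')
    plot.show()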

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 12:35, Tuesday 12 September 2023 (72834)
fmcs-epics-cds computer restarted
Restarted the computer running the FMCS EPICS IOC to check that the changes made yesterday to the subnet mask persist through a reboot. They do.
LHO General
thomas.shaffer@LIGO.ORG - posted 12:27, Tuesday 12 September 2023 (72833)
Ops Day Mid Shift Report

Maintenance has finished. We are moving through initial alignment now.

H1 SEI (CDS, SEI)
erik.vonreis@LIGO.ORG - posted 11:28, Tuesday 12 September 2023 - last comment - 16:59, Thursday 14 September 2023(72831)
Picket Fence updated

The Picket Fence client was updated.  This new version points at a server with lower latency.

It also fixes some bugs, and reports the current time and start time of the service.

Comments related to this report
edgard.bonilla@LIGO.ORG - 16:59, Thursday 14 September 2023 (72892)

I merged this into the main code.

Thank you Erik!

H1 CAL (CDS)
jeffrey.kissel@LIGO.ORG - posted 10:25, Tuesday 12 September 2023 - last comment - 13:07, Wednesday 13 September 2023(72830)
h1calcs Model Rebooted; Gating for CALCS \kappa_U is now informed by KAPPA_UIM Uncertainty (rather than KAPPA_TST)
J. Kissel, D. Barker
WP #11423

Dave has graciously compiled, installed, and restarted the h1calcs model. This brings in the bug fix from LHO:72820, which fixes the issue identified in LHO:72819: the front-end CAL-CS KAPPA_UIM library block was receiving the KAPPA_TST uncertainty.

Thus h1calcs is now using rev 26218 of the library part /opt/rtcds/userapps/release/cal/common/models/CAL_CS_MASTER.mdl.

I'll confirm that the UIM uncertainty is the *right* uncertainty during the next nominal low noise stretch later today (2023-09-12 ~20:00 UTC).
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:37, Tuesday 12 September 2023 (72848)
Circa 9:30 - 10:00a PDT (2023-09-12 16:30-17:00 UTC)
Post-compile, but prior-to-install, Dave ran a routine foton -c check on the filter file to confirm that there were no changes in the
    /opt/rtcds/lho/h1/chans/H1CALCS.txt
besides "the usual" flip of the header (see IIET:11481 which has now become cds/software/advLigoRTS:589).

Also relevant, remember every front-end model's filter file is a softlink to the userapps repo,
    $ ls -l /opt/rtcds/lho/h1/chans/H1CALCS.txt 
    lrwxrwxrwx 1 controls controls 58 Sep  8  2015 /opt/rtcds/lho/h1/chans/H1CALCS.txt -> /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

Upon the check, he found that foton -c had actually changed filter coefficients.
Alarmed by this, he ran an svn revert on the userapps "source" file for H1CALCS.txt in
    /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

He walked me through what had happened, and what he did to fix it, *verbally* with me on TeamSpeak, and we agreed -- "yup, that should be fine."

Flash forward to NOMINAL_LOW_NOISE at 14:30 PDT (2023-09-12 20:25:57 UTC): TJ and I find that the GDS-CALIB_STRAIN trace on the wall looks OFF, and there are no impactful SDF DIFFs. I.e. TJ says "Alright Jeff... what'd you do..." seeing the front wall FOM show GDS-CALIB_STRAIN at 2023-09-12 20:28 UTC.

After some panic having not actually done anything but restart the model, I started opening up CALCS screens trying to figure out "uh oh, how can I diagnose the issue quickly..." I tried two things before I figured it out:
    (1) I go through the inverse sensing function filter (H1:CAL-CS_DARM_ERR) and look at the foton file ... realized -- looks OK, but if I'm really gunna diagnose this, I need to find the number that was installed on 2023-08-31 (LHO:72594)...
    (2) I also open up the actuator screen for the ETMX L3 stage (H1:CAL-CS_DARM_ANALOG_ETMX_L3) ... and upon staring for a second I see FM3 has a "TEST_Npct_O4" in it, and I immediately recognize -- just by the name of the filter -- that this is *not* the "HFPole" that *should* be there after Louis restored it on 2023-08-07 (LHO:72043).

After this, I put two-and-two together, and realized that Dave had "reverted" to some bad filter file. 

As such, I went to the filter archive for the H1CALCS model, and looked for the filter file as it stood on 2023-08-31 -- the last known good time:

/opt/rtcds/lho/h1/chans/filter_archive/h1calcs$ ls -ltr
[...]
-rw-rw-r-- 1 advligorts advligorts 473361 Aug  7 16:42 H1CALCS_1375486959.txt
-rw-rw-r-- 1 advligorts advligorts 473362 Aug 31 11:52 H1CALCS_1377543182.txt             # Here's the last good one
-rw-r--r-- 1 controls   advligorts 473362 Sep 12 09:32 H1CALCS_230912_093238_install.txt  # Dave compiles first time
-rw-r--r-- 1 controls   advligorts 473377 Sep 12 09:36 H1CALCS_230912_093649_install.txt  # Dave compiles the second time
-rw-rw-r-- 1 advligorts advligorts 473016 Sep 12 09:42 H1CALCS_1378572178.txt             # Dave installs his "reverted" file
-rw-rw-r-- 1 advligorts advligorts 473362 Sep 12 13:50 H1CALCS_1378587040.txt             # Jeff copies Aug 31 11:52 H1CALCS_1377543182.txt into current and installs it
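
(The restore itself was nothing fancier than copying the archived file over the current one and reloading it. A sketch in Python, with the paths from the listing above -- the actual operation was a shell "cp":)

    import shutil

    # last good archived filter file, per the listing above
    archive = '/opt/rtcds/lho/h1/chans/filter_archive/h1calcs/H1CALCS_1377543182.txt'
    # current filter file (a softlink into the userapps repo, see above)
    current = '/opt/rtcds/lho/h1/chans/H1CALCS.txt'
    shutil.copy(archive, current)
    # ... then load coefficients on the front end as usual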


Talking with him further in prep for this aLOG, we identify that when Dave said "I reverted it," he meant that he ran an "svn revert" on the userapps copy of the file, which "reverted" the file to the last time it was committed to the repo, i.e. 
    r26011 | david.barker@LIGO.ORG | 2023-08-01 10:15:25 -0700 (Tue, 01 Aug 2023) | 1 line

    FM CAL as of 01aug2023
i.e. before 2023-08-07 (LHO:72043) and before 2023-08-31 (LHO:72594).

Yikes! This is the calibration group's procedural bad -- we should be committing the filter file to the userapps svn repo every time we make a change.

So yeah, in doing normal routine things that all should have worked, Dave fell into a trap we left for him.

I've now committed the H1CALCS.txt filter file to the repo at rev 26254

    r26254 | jeffrey.kissel@LIGO.ORG | 2023-09-12 16:26:11 -0700 (Tue, 12 Sep 2023) | 1 line

    Filter file as it stands on 2023-08-31, after 2023-08-07 LHO:72043 3.2 kHz ESD pole fix and  2023-08-31 LHO:72594 calibration update for several reasons.


By 2023-09-12 20:50:44 UTC I had loaded in H1CALCS_1378587040.txt, which was a simple "cp" copy of H1CALCS_1377543182.txt, the last good filter file created during the 2023-08-31 calibration update, and the DARM FOM and GDS-CALIB_STRAIN returned to normal.

All of the panic and the fix happened before we went to OBSERVATION_READY at 2023-09-12 21:00:28 UTC, so there was no observation-ready segment with bad calibration.

I also confirmed that all was restored and well by checking in on both
 -- the live front-end systematic error in DELTAL_EXTERNAL_DQ using the tools from LHO:69285, and
 -- the low-latency systematic error in GDS-CALIB_STRAIN using the auto-generated plots on https://ldas-jobs.ligo-wa.caltech.edu/~cal/
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 13:07, Wednesday 13 September 2023 (72863)CDS
Just some retroactive proof from the last few days' worth of measurements and models of systematic error in the calibration.

First, a trend of the front-end computed values of systematic error, shown in 2023-09-12_H1CALCS_TrendOfSystematicError.png, which reviews the timeline of what happened.

Next, grabs from the GDS measured vs. modeled systematic error archive which show similar information but in hourly snapshots,
    2023-09-12 13:50 - 14:50 UTC 1378561832-1378565432 Pre-maintenance, pre-model-recompile, calibration good, H1CALCS_1377543182.txt 2023-08-31 filter file running.
    2023-09-12 19:50 - 20:50 UTC 1378583429-1378587029 BAD 2023-08-01, last-svn-commit, r26011, filter file in place.
    2023-09-12 20:50 - 21:50 UTC 1378587032-1378590632 H1CALCS_1378587040.txt copy of 2023-08-31 filter installed, calibration goodness restored.

Finally, I show the systematic error in GDS-CALIB_STRAIN trends from the calibration monitor "grafana" page, which shows that because we weren't in ANALYSIS_READY during all this kerfuffle, the systematic error as reported by that system was none the wiser that any of this had happened.

*phew* Good save team!!
Images attached to this comment
H1 SEI
ryan.short@LIGO.ORG - posted 10:23, Tuesday 12 September 2023 (72829)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS 25956, last checked in alog 72708

BSC high freq noise is elevated for these sensors:

ITMX_ST2_CPSINF_H1    
ITMX_ST2_CPSINF_H3

All other spectra look nominal.

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:10, Tuesday 12 September 2023 (72828)
Tue CP1 Fill

Tue Sep 12 10:07:43 2023 INFO: Fill completed in 7min 39secs

 

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 08:04, Tuesday 12 September 2023 (72826)
Ops Day Shift Start

TITLE: 09/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY: Locked for 22 hours. Ground motion has just calmed down from a M6.3 earthquake in Taiwan. PEM injections and SUS charge measurements have finished up, and maintenance has started.

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 06:53, Tuesday 12 September 2023 (72825)
Workstations updated

Workstations were updated and rebooted. This was an OS package update; conda packages were not updated.

H1 General
anthony.sanchez@LIGO.ORG - posted 00:06, Tuesday 12 September 2023 (72824)
Monday Ops Eve Shift End

TITLE: 09/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Very uneventful shift; a few earthquakes hit, but we survived all of them.
H1 has been locked for ~14.5 hours, without any issues.

During the next break in OBSERVING, we still need to reload ALIGN_IFO (alog 72811).

LOG: empty

 

LHO FMCS (PEM)
anthony.sanchez@LIGO.ORG - posted 00:02, Tuesday 12 September 2023 (72823)
Checking HVAC Fans Famis

Famis 26250

Fan vibrometers don't have any noise that reaches above the rough threshold of 0.7.
Screenshots attached.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 21:30, Monday 11 September 2023 (72822)
Monday Ops Eve Mid Shift

State of H1: Observing at 148 Mpc
Very quiet evening so far with just a couple of earthquakes passing through.
2:07 UTC GRB_Short E437024 candidate Standing Down https://gracedb.ligo.org/events/E437024/view/

H1 has been locked for 11.5 hours. No SQZr issues at all.

H1 SEI (SEI)
anthony.sanchez@LIGO.ORG - posted 18:59, Monday 11 September 2023 (72821)
Fixed DNS issue for h1cdsh8

I stumbled across this issue while working on a caQTdm-based FOM screen for SEI. The MEDM screen seemed to work just fine, but the caQTdm screen was broken.
Looking into the problem further, I found that the servers the data was getting pulled from for the rest of the screen were listed as:
FQDN:portNumber
But the HAM8 server was listed as:
FQDN:arpa.portNumberGibberish
So I scoured the linked files for server names and found nothing of the like in my caQTdm files.  

I spoke to Erik and showed him what I had found.
We opened the DNS settings and noticed the error.
Updated the DNS record for h1cdsh8, and it's now resolved. Committed the changes to SVN and updated the other servers.
 

H1 CAL (CDS)
jeffrey.kissel@LIGO.ORG - posted 17:33, Monday 11 September 2023 (72820)
Fixed Bug in CAL_CS_MASTER regarding KAPPA_UIM Gating Algorithm's Uncertainty Input
J. Kissel

After identifying a bug in the
    /opt/rtcds/userapps/release/cal/common/models/
        CAL_CS_MASTER.mdl
library that relates to the CAL-CS computation of KAPPA_UIM and its gating algorithm's uncertainty input (see LHO:72819), I've rectified the bug and committed the model change to the userapps repo as of rev 26218.

Attached is the "after" screenshot, which you can compare to LHO:72819's "before" screenshot.

Work permit to recompile the h1calcs model and install it ASAP is forthcoming.
Images attached to this report
LHO VE (DetChar, VE)
gerardo.moreno@LIGO.ORG - posted 16:30, Monday 11 September 2023 (72818)
LOTO on Kobelco Compressor

(Heath E., Gerardo M.)
Service tech Heath was on-site to diagnose/repair the leak on the Kobelco compressor. Unfortunately, to diagnose the system a couple of windows had to be lifted; the gaskets for such "windows" can't be reused, and Heath did not have replacements with him. The vendor is now trying to secure a "kit" to instead do an annual service on the unit. Meanwhile the compressor is LOTO, and will be down until repairs are completed, hopefully soon.

While the diagnosis was done, the compressor was powered on and ran for some time this morning.

The compressor ran for 7 minutes, starting at 17:12 UTC and ending at 17:19 UTC; the drying tower was isolated from the compressor.

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:29, Monday 11 September 2023 - last comment - 14:45, Monday 18 September 2023(72812)
Historical Systematic Error Investigations: Why MICH FF Spoiling UIM Calibration Line froze Optical Gain and Cavity Pole GDS TDCFs from 2023-07-20 to 2023-08-07
J. Kissel

I'm in a rabbit hole, and digging my way out by shaving yaks. The take-away if you find this aLOG TL;DR -- This is an expansion of the understanding of one part of multi-layer problem described in LHO:72622.

I want to pick up where I left off in modeling the detector calibration's response to thermalization, except using the response function, (1+G)/C, instead of just the sensing function, C (LHO:70150).

I need to do this for when 
    (a) we had thermalization lines ON during times of
    (b) PSL input power at 75W (2023-04-14 to 2023-06-21) and
    (c) PSL input power at 60W (2023-06-21 to now).

"Picking up where I left off" means using the response function as my metric of thermalization instead of the sensing function.

However, the measurement of the sensing function w.r.t. its model, C_meas / C_model, is made from the ratio of measured transfer functions (DARM_IN1/PCAL) * (DARMEXC/DARM_IN2), where only the calibration of PCAL matters. The measured response function w.r.t. its model, R_meas / R_model, on the other hand, is ''simply'' made by the transfer function of ([best calibrated product])/PCAL, where [best calibrated product] can be whatever you like, as long as you understand the systematic error and/or extra steps you need to account for before displaying what you really want.

In most cases, the low-latency GDS pipeline product, H1:GDS-CALIB_STRAIN, is the [best calibrated product], with the least amount of systematic error in it. It corrects for the flaws in the front end (super-Nyquist features, computational delays, etc.) and it corrects for ''known'' time dependence based on calibration-line-informed, time-dependent correction factors, or TDCFs (neither of which the real-time front-end product, CAL-DELTAL_EXTERNAL_DQ, does). So I want to start there, using the transfer function H1:GDS-CALIB_STRAIN / H1:CAL-DELTAL_REF_PCAL_DQ for my ([best calibrated product])/PCAL transfer function measurement.

HOWEVER, over the time periods when we had thermalization lines on, H1:GDS-CALIB_STRAIN itself had two major systematic errors that were *not* the thermalization. In short, those errors were:
    (1) between 2023-04-26 and 2023-08-07, we neglected to include the model of the ETMX ESD driver's 3.2 kHz pole (see LHO:72043) and
    (2) between 2023-07-20 and 2023-08-03, we installed a buggy bad MICH FF filter (LHO:71790, LHO:71937, and LHO:71946) that created excess noise as a spectral feature which polluted the 15.1 Hz, SUS-driven calibration line that's used to inform \kappa_UIM -- the time dependence of the relative actuation strength for the ETMX UIM stage. The front-end demodulates that frequency with a demod called SUS_LINE1, creating an estimate of the magnitude, phase, coherence, and uncertainty of that SUS line w.r.t. DARM_ERR.
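
As a side note on mechanics, here's a minimal sketch (mine, not the front-end implementation) of how such a single-line demodulation estimates a line's transfer function w.r.t. the other channel:

    import numpy as np

    def demod(x, t, f_line):
        # complex demodulation: mix down at the line frequency, average over the stride
        return np.mean(x * np.exp(-2j * np.pi * f_line * t))

    fs, dur, f_line = 16384, 128, 15.1              # sample rate [Hz], stride [s], SUS line [Hz]
    t = np.arange(0, dur, 1/fs)
    # sus_drive and darm_err stand in for the real channel data; synthesized here
    sus_drive = np.sin(2*np.pi*f_line*t)
    darm_err  = 0.5*np.sin(2*np.pi*f_line*t + 0.1) + 0.01*np.random.randn(t.size)
    tf = demod(darm_err, t, f_line) / demod(sus_drive, t, f_line)
    mag, phase = np.abs(tf), np.angle(tf)           # line magnitude and phase w.r.t. the drive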

When did we have thermalization lines on for 60W PSL input? Oh, y'know, from 2023-07-25 to 2023-08-09, exactly at the height of both of these errors. #facepalm
So -- I need to understand these systematic errors well in order to accurately remove them prior to my thermalization investigation.

Joe covers both of these flavors of error in LHO:72622.

However, after trying to digest the latter problem, (2), and his aLOG, I didn't understand why a spoiled \kappa_U alone had such impact -- since we know that the UIM actuation strength is quite unimpactful to the response function.

INDEED, (2) is even worse than "we're not correcting for the change in UIM actuation strength" -- because
    (3) Though the GDS pipeline (which finishes the calibration to form H1:GDS-CALIB_STRAIN) computes its own TDCFs from the calibration lines, GDS gates the values of its TDCFs with the front-end-, CALCS-computed uncertainty. So, in that way, the GDS TDCFs are still influenced by the front-end, CALCS computation of TDCFs.

So -- let's walk through that for a second.
The CALCS-computed uncertainty for each TDCF is based on the coherence between the calibration lines and DARM_ERR -- but in a crude, lazy way that we thought would be good enough in 2018 -- see G1801594, page 13. I've captured a current screenshot, First Image Attachment, of the present-day simulink model to confirm the algorithm is still the same as it was prior to O3.

In short, the uncertainty for the actuator strengths, \kappa_U, \kappa_P, and \kappa_T, is created by simply taking the larger of the two calibration line transfer functions' uncertainty that go in to computing that TDCF -- SUS_LINE[1,2,3] or PCAL_LINE1. 

HOWEVER -- because the optical gain and cavity pole, \kappa_C and f_CC, calculation depends on subtracting out the live DARM actuator (see the appearance of "A(f,t)" in the definition of "S(f,t)" in Eq. 17 from ), their uncertainty is crafted from the largest of the \kappa_U, \kappa_P, \kappa_T, AND PCAL_LINE2 uncertainties. It's the same uncertainty for both \kappa_C and f_CC, since they're both derived from the magnitude and phase of the same PCAL_LINE2.

That means the large SUS_LINE1 >> \kappa_U uncertainty propagates through this "greatest of" algorithm, and also blows out the \kappa_C and f_CC uncertainty as well -- which triggered the GDS pipeline to gate its 2023-07-20 TDCF values for \kappa_U, \kappa_C, and f_CC from 2023-07-20 to 2023-08-07.
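
In pseudo-code, that "greatest of" scheme is just the following (function and variable names are mine, not the front-end channel names):

    def actuator_unc(sus_line_unc, pcal_line1_unc):
        # \kappa_U, \kappa_P, \kappa_T: the larger of the SUS line and PCAL line uncertainties
        return max(sus_line_unc, pcal_line1_unc)

    def sensing_unc(kappa_u_unc, kappa_p_unc, kappa_t_unc, pcal_line2_unc):
        # \kappa_C and f_CC share one uncertainty: the largest of the three actuator
        # uncertainties and the PCAL_LINE2 uncertainty
        return max(kappa_u_unc, kappa_p_unc, kappa_t_unc, pcal_line2_unc)

    # e.g. a spoiled 15.1 Hz SUS_LINE1 blows out \kappa_U's uncertainty, which then
    # blows out the shared \kappa_C / f_CC uncertainty too:
    k_u_unc = actuator_unc(sus_line_unc=0.20, pcal_line1_unc=0.002)     # -> 0.20
    k_c_unc = sensing_unc(k_u_unc, 0.003, 0.004, pcal_line2_unc=0.002)  # -> 0.20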

THAT means that -- for better or worse -- when \kappa_C and f_CC are influenced by thermalization for the first ~3 hours after power-up, GDS did not correct for it. Thus, a third systematic error in GDS: (3).

*sigh*

OK, let's look at some plots.

My Second Image Attachment shows a trend of all the front-end computed uncertainties involved around 2023-07-20 when the bad MICH FF is installed. 
    :: The first row and last row show that the UIM uncertainty -- and the CAV_POLE uncertainty (again, used for both \kappa_C and f_CC) -- blow up once the bad MICH FF is installed.

    :: Remember GDS gates its TDCFs with a threshold of uncertainty = 0.005 (i.e. 0.5%), where the front-end gates with an uncertainty of 0.05 (i.e. 5%).

First PDF attachment shows in much clearer detail the *values* of both the CALCS and GDS TDCFs during a thermalization time that Joe chose in LHO:72622, 2023-07-26 01:10 UTC.

My Second PDF attachment breaks down Joe's LHO:72622 Second Image attachment into its components:
    :: ORANGE shows the correction to the "reference time" response function with the frozen, gated, GDS-computed TDCFs, by the ratio of the "nominal" response function (as computed from the 20230621T211522Z report's pydarm_H1.ini) to that same response function, but with the optical gain, cavity pole, and actuator strengths updated with the frozen GDS TDCF values,
        \kappa_C = 0.97828    (frozen at the low, thermalized value of the OM2 HOT state, reflecting the unaccounted-for change just one day prior on 2023-07-19; LHO:71484)
        f_CC = 444.4 Hz       (frozen)
        \kappa_U = 1.05196    (frozen at a large, noisy value, right after the MICH FF filter is installed)
        \kappa_P = 0.99952    (not frozen)
        \kappa_T = 1.03184    (not frozen, large at 3% because of the TST actuation strength drift)

    :: BLUE shows the correction to the "reference time" response function with the not-frozen, non-gated, CALCS-computed TDCFs, by the ratio of the "nominal" 20230621T211522Z response function to that same response function updated with the CALCS values,
        \kappa_C = 0.95820    (even lower than the OM2 HOT value, because this time is during thermalization)
        f_CC = 448.9 Hz       (higher because IFO mode matching and loss are better before the IFO thermalizes)
        \kappa_U = 0.98392    (arguably more accurate value, closer to the mean of a very noisy value)
        \kappa_P = 0.99763    (the same as GDS, to within noise or uncertainty)
        \kappa_T = 1.03073    (the same as GDS, to within noise or uncertainty)

    :: GREEN is a ratio of BLUE / ORANGE -- and thus a repeat of what Joe shows in his LHO:72622 Second Image attachment.

Joe was trying to motivate why (1), the missing ESD driver 3.2 kHz pole, is a separable problem from (2) and (3), the bad MICH FF filter spoiling the uncertainty in \kappa_U, \kappa_C, and f_CC, so he glossed over this issue. Further, what he plotted in his second attachment, akin to my GREEN curve, is the *ratio* between corrections, not the actual corrections themselves (ORANGE and BLUE), so it kind of hid this difference.
Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:21, Monday 11 September 2023 (72815)
This plot was created by create_no3p2kHz_syserror.py, and the plots posted correspond to the script as it was when the Calibration/ifo project git hash was 53543b80.
jeffrey.kissel@LIGO.ORG - 17:21, Monday 11 September 2023 (72819)
While shaving *this* yak, I found another one -- The front-end CALCS uncertainty for the \kappa_U gating algorithm incorrectly consumes \kappa_T's uncertainty.

The attached image highlights the relevant part of the 
    /opt/rtcds/userapps/release/cal/common/models/
        CAL_CS_MASTER.mdl
library part, at the CS > TDEP level.

The red ovals show what I refer to. The silver KAPPA_UIM, KAPPA_PUM, and KAPPA_TST blocks -- which are each instantiations of the ACTUATOR_KAPPA block within the CAL_LINE_MONITOR_MASTER.mdl library -- each receive the uncertainty output from the above-mentioned crude, lazy algorithm (see the first image from LHO:72812 above) via tag. The KAPPA_UIM block incorrectly receives the KAPPA_TST_UNC tag.

The proof is seen in the first row of the other image attachment from LHO:72812 above -- see that while the raw calibration line uncertainty (H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY) is high, the resulting "greater of the two" uncertainty (H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT) remains low, and matches the third row's uncertainty for \kappa_T (H1:CAL-CS_TDEP_KAPPA_TST_GATE_UNC_INPUT), the greater of H1:CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY and H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY.

You can see that this was the case even back in 2018, on page 14 of G1801594, so this has been wrong since before O3.

*sigh*

This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on ... 
I don't know gstlal-calibration well enough to confirm what channels are used. Clearly, from the 2023-07-26 01:10 UTC trend of GDS TDCFs, they're gated. But is that because H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY is used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

As such, I can't make a statement of how impactful this bug has been.

We should fix this, though.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:09, Tuesday 12 September 2023 (72832)
The UIM uncertainty bug has now been fixed and installed at H1 as of 2023-09-12 17:00 UTC. See LHO:72820 and LHO:72830, respectively.
jeffrey.kissel@LIGO.ORG - 14:45, Monday 18 September 2023 (72944)
J. Kissel, M. Wade

Following up on this:
    This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on [... are channels like] H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

I confirm from Maddie that 
    - The channels that are used to inform the GDS pipeline's gating algorithm are defined in the gstlal configuration file, which lives in the Calibration namespace of the git.ligo.org repo, under 
    git.ligo.org/Calibration/ifo/H1/gstlal_compute_strain_C00_H1.ini
where this config file was last changed on May 02 2023 with git hash 89d9917d.

    - In that file, the following config variables are defined (starting around Line 220 as of git hash version 89d9917d),
        #######################################
        # Coherence Uncertainty Channel Names #
        #######################################
        CohUncSusLine1Channel: CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        CohUncSusLine2Channel: CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        CohUncSusLine3Channel: CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        CohUncPcalyLine1Channel: CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY
        CohUncPcalyLine2Channel: CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY
        CohUncPcalyLine4Channel: CAL-CS_TDEP_PCAL_LINE4_UNCERTAINTY
        CohUncDARMLine1Channel: CAL-CS_TDEP_DARM_LINE1_UNCERTAINTY
      which are compared against a threshold, also defined in that file on Line 114,
        CoherenceUncThreshold: 0.01

    Note: the threshold is 0.01 i.e. 1% -- NOT 0.005 or 0.5% as described in the body of the main aLOG.

    - Then, inside the gstlal-calibration code proper,
        git.ligo.org/Calibration/gstlal-calibration/bin/gstlal_compute_strain
    whose last change (as of this aLOG) has git hash 5a4d64ce, there are lines of code buried deep that create the gating, around lines
        :: L1366 for \kappa_T,
        :: L1425 for \kappa_P, 
        :: L1473 for \kappa_U
        :: L1544 for \kappa_C
        :: L1573 for f_CC

    - From these lines one can discern what's going on, if you believe that calibration_parts.mkgate is a wrapper around gstlal's pipeparts.filters class, with method "gate" -- which links you to source code "gstlal/gst/lal/gstlal_gate.c" which actually lives under
        git.ligo.org/lscsoft/gstlal/gst/lal/gstlal_gate.c

    - I *don't* believe it (because I don't believe in my skills in following the gstlal rabbit hole), so I asked Maddie. She says: 
    The code uses the uncertainty channels (as pasted below) along with a threshold specified in the config (currently 0.01, so 1% uncertainty) and replaces any computed TDCF value for which the specified uncertainty on the corresponding lines is not met with a "gap". These gaps get filled in by the last non-gap value, so the end result is that the TDCF will remain at the "last good value" until a new "good" value is computable, where "good" is defined as a value computed during a time where the specified uncertainty channels are within the required threshold.
    The code is essentially doing sequential gating [per computation cycle] which will have the same result as the front-end's "larger of the two" method.  The "gaps" that are inserted by the first gate are simply passed along by future gates, so future gates only add new gaps for any times when the uncertainty channel on that gate indicates the threshold is surpassed.  The end result [at the end of computation cycle] is a union of all of the uncertainty channel thresholds.

    - Finally, she confirms that 
        :: \kappa_U uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        :: \kappa_P uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        :: \kappa_T uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        :: and both \kappa_C f_CC use
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY

So, repeating all of this back to make sure we all understand: if any one of these channels is above the GDS pipeline's threshold of 1% (not 0.5% as described in the body of the main aLOG), then the TDCF will be gated, and "frozen" at the last time *all* of these channels were below 1%.
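
To spell out that gate-and-hold behavior, a minimal sketch (mine, not gstlal's implementation):

    def gate_and_hold(tdcf, unc_channels, threshold=0.01):
        # a sample is "good" only if every relevant uncertainty channel is at or
        # below threshold; otherwise hold the last good value (None until the
        # first good sample arrives)
        held, last_good = [], None
        for i, value in enumerate(tdcf):
            if all(unc[i] <= threshold for unc in unc_channels):
                last_good = value
            held.append(last_good)
        return held

    # e.g. \kappa_U is gated on the PCAL_LINE1 and SUS_LINE1 uncertainties:
    # kappa_U_gated = gate_and_hold(kappa_U, [pcal_line1_unc, sus_line1_unc])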

This corroborates and confirms the hypothesis that the GDS pipeline, although slightly different algorithmically from the front-end, would gate all three TDCFs -- \kappa_U, \kappa_C, and f_CC -- if only the UIM SUS line, CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY was above threshold -- as it was from 2023-07-20 to 2023-08-07.
H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 12:40, Friday 01 September 2023 - last comment - 12:11, Friday 17 November 2023(72622)
Calibration uncertainty estimate corrections
This is a continuation of a discussion of the mis-application of the calibration model raised in LHO alog 71787, which was fixed on August 8th (LHO alog 72043), and of further issues with which time-varying factors (kappas) were applied while the ETMX UIM calibration line coherence was bad (see LHO alog 71790; fixed on August 3rd).

We need to update the calibration uncertainty estimates with the combination of these two problems where they overlap. The appropriate thing is to use the full DARM model, R = 1/C + (A_uim + A_pum + A_tst) * D, where C is the sensing function, A_{uim,pum,tst} are the individual ETMX stage actuation transfer functions, and D is the digital DARM filter. Although, it looks like we can just get away with an approximation, which will make implementation somewhat easier.
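Written out with the TDCFs folded in (a sketch in LaTeX notation, using the standard conventions rather than anything pulled from pydarm):

    R(f,t) = \frac{1}{\kappa_C(t)\, C\big(f;\, f_{\rm CC}(t)\big)}
           + \big[ \kappa_U(t) A_{\rm uim}(f) + \kappa_P(t) A_{\rm pum}(f) + \kappa_T(t) A_{\rm tst}(f) \big]\, D(f)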

As a demonstration of this, first I confirm I can replicate the 71787 result purely with models (no fitting). I take the pydarm calibration model response, R, correct it for the time-dependent correction factors (kappas) at the same time I took the GDS/DARM_ERR data, and then take the ratio with the same model except with the 3.2 kHz ETMX L3 HFPoles removed (the correction Louis and Jeff eventually implemented). This is the first attachment.

Next we calculate the expected error just from the wrong kappas being applied in the GDS pipeline due to poor UIM coherence. For this initial look, I choose GPS time 1374369018 (2023-07-26 01:10); you can see the LHO summary page here, with the upper-left plot showing the kappa_C discrepancy between GDS and the front end. Just this issue alone produces the second attachment.

We can then look at the effects of the 3.2 kHz pole being missing under two scenarios -- with the front-end kappas, and with the bad GDS kappas -- and see that the difference is pretty small compared to typical calibration uncertainties; here it's on the scale of a tenth of a percent at around 90 Hz. I can also plot the model with the front-end kappas (more correct at this time) over the model with the wrong GDS kappas, for a comparison in scale as well. This is the 3rd plot.

This suggests to me that the calibration group can just apply a single correction to the overall response function systematic error for the period where the 3.2 kHz HFPole filter was missing, and then, in addition, for the period where the UIM uncertainty was preventing the kappa_C calculation from updating, apply a further time-dependent correction factor, just multiplying the two.

As an example, the 4th attachment shows what this would look like for the gps time 1374369018.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:25, Monday 11 September 2023 (72817)
For further explanation of the impact of Frozen GDS TDCFs vs. Live CAL-CS Computed TDCFs on the response function systematic error, i.e. what Joe's saying with
    Next we calculate the expected error just from the wrong kappas being 
    applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose 
    GPS time 1374369018 (2023-07-26 01:10 UTC), you can see the LHO summary page here, with 
    the upper left plot showing the kappa_C discrepancy between GDS and front end.  
    So just this issue produces the second attachment.
and what he shows in his second attachment, see LHO:72812.
jeffrey.kissel@LIGO.ORG - 16:34, Thursday 14 September 2023 (72879)
I've made some more clarifying plots to help me better understand Joe's work above after getting a few more details from him and Vlad.

(1) GDS-CALIB_STRAIN is corrected for time dependence, via the relative gain changes, "\kappa," as well as for the new coupled-cavity pole frequency, "f_CC." In order to make a fair comparison between the *measured* response function, the GDS-CALIB_STRAIN / DARM_ERR live data stream, and the *modeled* response function, which is static in time, we need to update the response function with the time-dependent correction factors (TDCFs) at the time of the *measured* response function.

How is the *modeled* response function updated for time dependence? Given the new pydarm system, it's actually quite straightforward, given a DARM model parameter set (pydarm_H1.ini) and a good conda environment. Here's a bit of pseudo-code that captures what's happening conceptually:
    # Set up environment
    from gwpy.timeseries import TimeSeriesDict as tsd
    from copy import deepcopy
    import pydarm

    # Instantiate two copies of pydarm DARM loop model
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1.ini')
    darmModel_wTDCFs_obj = deepcopy(darmModel_obj)

    # Grab time series of the TDCFs from raw frames; chanList here is shorthand
    # for the five TDCF channels of one pipeline, in (kappa_C, f_CC, kappa_U,
    # kappa_P, kappa_T) order -- the full channel list is in (2) below -- and
    # starttime/endtime bracket the measurement time
    tdcfs = tsd.get(chanList, starttime, endtime, frametype='R', verbose=True)

    kappa_C = tdcfs[chanList[0]].value
    freq_CC = tdcfs[chanList[1]].value
    kappa_U = tdcfs[chanList[2]].value
    kappa_P = tdcfs[chanList[3]].value
    kappa_T = tdcfs[chanList[4]].value

    # Multiply in kappas, replace cavity pole, with a "hot swap" of the relevant parameter in the DARM loop model
    darmModel_wTDCFs_obj.sensing.coupled_cavity_optical_gain *= kappa_C
    darmModel_wTDCFs_obj.sensing.coupled_cavity_pole_frequency = freq_CC
    darmModel_wTDCFs_obj.actuation.xarm.uim_npa *= kappa_U
    darmModel_wTDCFs_obj.actuation.xarm.pum_npa *= kappa_P
    darmModel_wTDCFs_obj.actuation.xarm.tst_npv2 *= kappa_T

    # Extract the response function transfer function on your favorite frequency vector
    R_ref     = darmModel_obj.compute_response_function(freq)
    R_wTDCFs  = darmModel_wTDCFs_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_wTDCFs_over_ref = R_wTDCFs / R_ref


For all of this study, I started with the reference model parameter set that's relevant for these times in late July 2023 -- the pydarm_H1.ini from the 20230621T211522Z report directory, which I've copied over to a git repo as pydarm_H1_20230621T211522Z.ini.

(2) One layer deeper: some of what Joe's trying to explore in his plots above is the difference between the low-latency, GDS-pipeline-computed TDCFs and the real-time, CALCS-computed TDCFs -- because of the issues with the GDS pipeline computation discussed in LHO:72812.

So, in order to facilitate this study, we have to gather TDCFs from both GDS and CALCS pipeline. Here's the channel list for both:
    chanList = ['H1:GRD-ISC_LOCK_STATE_N',

                'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
                'H1:CAL-CS_TDEP_F_C_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_UIM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_PUM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',

                'H1:GDS-CALIB_KAPPA_C',
                'H1:GDS-CALIB_F_CC',
                'H1:GDS-CALIB_KAPPA_UIM_REAL',
                'H1:GDS-CALIB_KAPPA_PUM_REAL',
                'H1:GDS-CALIB_KAPPA_TST_REAL']
where the first channel in the list is the state of detector lock acquisition guardian for useful comparison.

(3) Indeed, for *most* of the above aLOG, Joe chooses an example from the times when the GDS and CALCS TDCFs are *the most different* -- in his case, 2023-07-26 01:10 UTC (GPS 1374369018) -- when the H1 detector is still thermalizing after power-up. They're *different* because the GDS values were frozen at what they had been on the day the calculation was spoiled by a bad MICH FF filter, 2023-08-04 -- and, importantly, when the detector *was* thermalized.

An important distinction that's not made above is that the *measured* data in his first plot is from LHO:71787 -- a *different* time, a day later, when the detector WAS thermalized -- 2023-07-27 05:03:20 UTC (GPS 1374469418).

Compare the TDCFs between the NOT THERMALIZED time, 2023-07-26 (first attachment here), and the 2023-07-27 THERMALIZED first attachment I recently added to Vlad's LHO:71787.

One can see in the 2023-07-27 THERMALIZED data that the frozen GDS and live CALCS TDCF answers agree quite well. For the NOT THERMALIZED time, 2023-07-26, \kappa_C, f_CC, and \kappa_U are quite different.

(4) So, let's compare the response function ratio, i.e. the systematic error transfer function, between the response function updated with GDS TDCFs vs. CALCS TDCFs for the two different times -- thermalized vs. not thermalized. This will be an expanded version of Joe's second attachment:
    - 2nd Attachment here: this exactly replicates Joe's plot, but shows more ratios to better get a feel for what's happening. Using the variables from the pseudo-code above, I'm plotting
        :: BLUE = eta_R_wTDCFs_CALCS_over_ref = R_wTDCFs_CALCS / R_ref
        :: ORANGE = eta_R_wTDCFs_GDS_over_ref = R_wTDCFs_GDS / R_ref
        :: GREEN = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS
    where the GREEN trace is showing what Joe showed -- both as the unlabeled BLUE trace in his second attachment, and the "FE kappa true R / applied bad kappa R" GREEN trace in his third attachment -- the ratio between response functions; one updated with CALCS TDCFs and the other updated with GDS TDCFs, for the NOT THERMALIZED time. 

    - 3rd Attachment here: this replicates the same traces, but with the TDCFs from Vlad's THERMALIZED time.

For both Joe's and my plots, because we think that the CALCS TDCFs are more accurate, and it's tradition to put the more accurate response function in the numerator, we show it as such. Comparing the two GREEN traces from my plots, it's much clearer that the difference between GDS and CALCS TDCFs is negligible for THERMALIZED times, and substantial during NOT THERMALIZED times.

(5) Now we bring in the complexity of the missing 3.2 kHz ESD pole. Unlike the "hot swap" of TDCFs in the DARM loop model, it's a lot easier to just create an "offline" copy of the pydarm parameter file with the ESD poles removed. That parameter file lives in the same git repo location, but is called pydarm_H1_20230621T211522Z_no3p2k.ini. So, with that, we just instantiate the model in the same way, but calling the different parameter file:
    # Instantiate two pydarm DARM loop models: the reference, and one without the 3.2 kHz ESD pole
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z.ini')
    darmModel_no3p2k_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z_no3p2k.ini')

    # Extract the response function transfer function on your favorite frequency vector
    R_ref = darmModel_obj.compute_response_function(freq)
    R_no3p2k = darmModel_no3p2k_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_nom_over_no3p2k = R_ref / R_no3p2k

where here, the response function without the 3.2 kHz pole is less accurate, so R_no3p2k goes in the denominator.

Without any TDCF correction, I show this eta_R_nom_over_no3p2k compared against Vlad's fit from LHO:71787 for starters.

(6) Now for the final layer of complexity: we need to fold in the TDCFs. This is where I think a few more traces and plots are needed, comparing the two THERMALIZED vs. NOT times, plus some clear math, in order to explain what's going on. In the end, I reach the same conclusion as Joe -- that the two effects, fixing the frozen GDS TDCFs and fixing the 3.2 kHz pole, are "separable" to good approximation -- but I'm slower than Joe is, and need things laid out more clearly.

So, on the pseudo-code side of things, we need another couple of copies of the darmModel_obj:
    - with and without 3.2 kHz pole 
        - with TDCFs from CALCS and GDS, 
            - from THERMALIZED (LHO71787) and NOT THERMALIZED (LHO72622) times:
    
        R_no3p2k_wTDCFs_CCS_LHO71787 = darmModel_no3p2k_wTDCFs_CCS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO71787 = darmModel_no3p2k_wTDCFs_GDS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_CCS_LHO72622 = darmModel_no3p2k_wTDCFs_CCS_LHO72622_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO72622 = darmModel_no3p2k_wTDCFs_GDS_LHO72622_obj.compute_response_function(freq)

        
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO71787 = R_wTDCFs_CCS_LHO71787 / R_no3p2k_wTDCFs_CCS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO71787 = R_wTDCFs_GDS_LHO71787 / R_no3p2k_wTDCFs_GDS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622 = R_wTDCFs_CCS_LHO72622 / R_no3p2k_wTDCFs_CCS_LHO72622
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622 = R_wTDCFs_GDS_LHO72622 / R_no3p2k_wTDCFs_GDS_LHO72622


Note, critically, that these ratios of with and without the 3.2 kHz pole -- both updated with the same TDCFs -- are NOT THE SAME THING as just the ratio of models updated with GDS vs. CALCS TDCFs, even though it might look like the "reference" and "no 3.2 kHz pole" terms cancel "on paper," if one naively thinks the operation is separable:

    [[ ( R_wTDCFs_CCS / R_ref ) * ( R_ref / R_no3p2k ) ]] / [[ ( R_wTDCFs_GDS / R_ref ) * ( R_ref / R_no3p2k ) ]]  #NAIVE
    which one might naively cancel terms to get down to
    [[ R_wTDCFs_CCS / R_no3p2k ]] / [[ R_wTDCFs_GDS / R_no3p2k ]]  #NAIVE
    and then to
    [[ R_wTDCFs_CCS ]] / [[ R_wTDCFs_GDS ]]  #NAIVE

    
So, let's look at the answer now, with all this context.
    - NOT THERMALIZED: this is a replica of what Joe shows in the third attachment for the 2023-07-26 time:
        :: BLUE -- the systematic error incurred from excluding the 3.2 kHz pole on the reference response function without any updates to TDCFs (eta_R_nom_over_no3p2k)
        :: ORANGE -- the systematic error incurred from excluding the 3.2 kHz pole on the CALCS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622, Joe's "FE kappa true R / applied R (no pole)")
        :: GREEN -- the systematic error incurred from excluding the 3.2 kHz pole on the GDS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622, Joe's "GDS kappa true R / applied R (no pole)")
        :: RED -- compared against Vlad's *fit* of the ratio of the CALCS-TDCF-updated, modeled response function to the (GDS-CALIB_STRAIN / DARM_ERR) measured response function

    Here, because the GDS TDCFs are different than the CALCS TDCFs, you actually see a non-negligible difference between ORANGE and GREEN. 

    - THERMALIZED:
        (Same legend, but the TIME and TDCFs are different)

    Here, because the GDS and CALCS TDCFs are the same-ish, you can't see that much of a difference between the two. 
    
    Also, note that even when we're using the same THERMALIZED time and corresponding TDCFs to be self-consistent with Vlad's fit of the measured response function, they still don't agree perfectly. So, there's likely still more systematic error going on in the thermalized time.

(7) Finally, I wanted to explicitly show the consequences of "just" correcting for the frozen GDS TDCFs and of "just" correcting the missing 3.2 kHz pole, to better *quantify* the statement that "the difference is pretty small compared to typical calibration uncertainties," as well as to show the difference between "just" the ratio of response functions updated with the different TDCFs (the incorrect model) and the "full" models.

    I show this in 
    - NOT THERMALIZED, and
    - THERMALIZED

For both of these plots, I show
    :: GREEN -- the corrective transfer function we would be applying if we only update the Frozen GDS TDCFs to Live CALCS TDCFs, compared with
    :: BLUE -- the ratio of corrective transfer functions,
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> only fixing the missing 3.2 kHz pole
    :: ORANGE -- the ratio of corrective transfer functions
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> the "second best thing to do" which is leave the Frozen TDCFs alone and correct for the missing 3.2 kHz pole 
       
     Even for the NOT THERMALIZED time, BLUE never exceeds 1.002 / 0.1 deg in magnitude / phase, and it's small compared to the simple "TDCF only" correction of frozen GDS TDCFs to live CALCS TDCFs, shown in GREEN. This helps quantify why Joe thinks we can separately apply the two corrections to the systematic error budget: because GREEN is much larger than BLUE.

    For the THERMALIZED time, in BLUE, that ratio of full models is even smaller, and, as expected, the ratio of simple TDCF-update models is also small.


%%%%%%%%%%
The code that produced this aLOG is create_no3p2kHz_syserror.py as of git hash 3d8dd5df.
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 12:11, Friday 17 November 2023 (74255)
Following up on this study just one step further, as I begin to actually correct data during the time period where both of these systematic errors are in play -- the frozen GDS TDCFs and the missing 3.2 kHz pole...

I craved one more set of plots to convey that "fixing the frozen GDS TDCFs and fixing the 3.2 kHz pole are separable to good approximation," showing the actual corrections one would apply in the different cases:
    :: BLUE = eta_R_nom_over_no3p2k = R_ref / R_no3p2k >> The systematic error created by the missing 3.2 kHz pole in the ESD model alone
    :: ORANGE = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS >> the systematic error created by the frozen GDS TDCFs alone
    :: GREEN = eta_R_nom_over_no3p2k * eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = the product of the two >> the approximation
    :: RED = a previously unshown eta that we'd actually apply to the data that had both errors = R_ref (updated with CALCS TDCFs) / R_no3p2k (updated with GDS TDCFs) >> the right thing (see the sketch below)
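
In code form, using the simplified array names from LHO:72879's pseudo-code (with R_no3p2k_wTDCFs_GDS standing in for the no-pole model updated with GDS TDCFs):

    # GREEN, the approximation: product of the two separately-computed corrections
    eta_approx = (R_ref / R_no3p2k) * (R_wTDCFs_CALCS / R_wTDCFs_GDS)
    # RED, the right thing: CALCS-TDCF-updated reference model over the
    # GDS-TDCF-updated model that's missing the 3.2 kHz pole
    eta_right = R_wTDCFs_CALCS / R_no3p2k_wTDCFs_GDS
    # their ratio quantifies the separability error, which the plots show is small
    separability_error = eta_approx / eta_right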

As above, it's important to look at both a thermalized case and a non-thermalized case, so I attach those two:
    NOT THERMALIZED, and
    THERMALIZED.

The conclusions are the same as above:
    - Joe is again right: the difference between the approximation (GREEN) and the right thing (RED) is small, even for the NOT THERMALIZED time.
But I think this version of the plots / traces better shows the breakdown of which effect contributes where on top of the approximation vs. "the right thing" -- and "the right thing" was never explicitly shown before. All the traces in my expanded aLOG, LHO:72879, had the reference model (or no-3.2 kHz-pole model) updated with either both CALCS TDCFs or both GDS TDCFs in the numerator and denominator, rather than "the right thing", where you have CALCS TDCFs in the numerator and GDS TDCFs in the denominator.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To create these extra plots, I added a few lines of "calculation" code and another 40-ish lines of plotting code to create_no3p2kHz_syserror.py. I've now updated it within the git repo, so it and the repo now have git hash 1c0a4126.
Non-image files attached to this comment