TITLE: 11/18 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
Watching H0:FMC-EX_VEATEMP_DEGF and H0:FMC-EY_VEATEMP_DEGF because they have seen about a degree and a half of fluctuation in the last 8 hours.
Everything else looks great; we have been locked for 12 hours and 50 minutes.
TITLE: 11/18 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Got back into Observing a few minutes ago after ending Commissioning. We're Observing at 156Mpc and everything is good besides temperature fluctuations at both EX and EY due to the setpoints being changed by 1.5 degrees F, so hopefully that'll level off without issue.
LOG:
16:00UTC Detector Observing and Locked for 4.5 hours
21:00 Commissioning
22:07 To NLN_CAL_MEAS
23:35 Back to NOMINAL_LOW_NOISE
23:52 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:43 | FAC | Cindi | WS | n | Laundry | 18:13 |
18:23 | FAC | Travis | GarbRoom | n | Grabbing supplies | 18:25 |
20:13 | FAC | Cindi | WS | n | Laundry | 20:43 |
21:05 | FAC | Travis | GarbRoom | n | Putting supplies back | 21:07 |
21:27 | SQZ | Sheila | CR | n | Adjusting SQZ (started 21:00) | 22:27 |
21:28 | ISC | Camilla | CR | n | Testing new MICHFF | 22:05 |
21:41 | ISC | Louis | Remote | n | Dropping gain on ETMX L2 line | 22:05
Gabriele, Camilla
After the LSC FF measurements taken on 15 Nov (74220), Gabriele refit the MICH FF. It's now in use: it was loaded into FM7 and saved in SDF and ISC_LOCK.
This reduces the Q of the 17.7Hz feature from ~300 to 90, allowing Louis to reduce the amplitude of KAPPA_TST 74259. We may still think about reducing this 17.7Hz feature more or notching it out of MICH.
The MICH comparison plot (pink = new) and Gabriele's DARM-MICH coherence plot (orange = new) are attached. The new MICH FF is overall better but is a little worse around 30Hz and 60Hz. It's been harder to fit over all frequencies since we swapped to using the ETMY PUM for the FF.
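For reference, a DARM-MICH coherence comparison like the attached can be remade with gwpy; this is a minimal sketch only, and the MICH channel name, GPS span, and FFT settings are assumptions rather than what Gabriele actually used:

# Minimal sketch: DARM-MICH coherence check around a FF change (assumed channel names and settings)
from gwpy.timeseries import TimeSeries

start, end = 1384293115, 1384293715  # example 10-minute span; repeat for a pre-change span

darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
mich = TimeSeries.get('H1:LSC-MICH_OUT_DQ', start, end)  # hypothetical MICH readback channel

# coherence spectrum; residual MICH coupling into DARM shows up as coherence near 17.7 Hz
# (both channels need a common sample rate; resample one if they differ)
coh = darm.coherence(mich, fftlength=32, overlap=16)

plot = coh.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_xlim(10, 100)
ax.set_ylabel('Coherence')
plot.show()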
I'm attaching a quick and dirty comparison of DARM zoomed in near the L3 SUS line at 17.6 Hz. The thin red line is DARM from October 9th, before the MICH FF swap on 10/12 (LHO:73420). The grey line is from yesterday, showing the effect of the filter installed on 10/12. The yellow trace is from earlier today, after the new filter was installed. Certainly the current filter seems to be better: the peak it produces is both farther from the 17.6 Hz line and lower in amplitude than that of the filter that has been installed for the last few weeks. Still, I was not able to completely undo the L3 SUS line amplitude changes from LHO:74145 and maintain TF uncertainty < 0.5% (amplitude reduced earlier today in LHO:74259).
There is another alog from Louis coming soon, but we have been looking at the PUM crossover and see that it isn't well described by the pyDARM model. Since the crossover measurement has low coherence and can cause locklosses, today we did some investigation using the DARM OLG measurement. The attached plot shows a DARM OLG measured with our normal A2L decoupling (P2L = 4, Y2L = 4.4) and with half those values (P2L = 2, Y2L = 2.2); you can see that the DARM OLG is different between 5 and 20 Hz.
Naoki, Camilla, Sheila
We had another look at some signals that we could use to track or servo the SQZ angle.
We turned the ADF back on, making a line at 1.3kHz in DARM, and tuned the demod phase for the ADF so that the SQZ angle readout was 0 for the angle we've been using for observing in this lock. Camilla added a bandstop at 1.3kHz in the SQZ BLRMS4, sum, and null channels and checked that the ADF isn't dominating those. We did a sweep of the CLF6 demod phase like this, with the dither amplitude reduced compared to Wednesday (0.01 instead of 0.03 CLK gain).
In the first attachment we are using BLRMS 4 to demodulate for the noise lock signal; the second cursor shows that the zero crossing of the ADF SQZ angle and the noise lock both correspond roughly with the minimum of noise in BLRMS 4 (the 1kHz BLRMS). This isn't the same phase as the one that minimizes the brown trace, a BLRMS centered at 350 Hz. This means we have a frequency dependence of the SQZ angle, so we should probably look into things like the SRCL offset that might be causing this.
Naoki then tuned the ADF phase with the SQZ angle set to minimize BLRMS3, as shown in the second screenshot from Camilla (at the beginning here you can see that Naoki set the ADF phase so SQZ ANG was zero for a CLF demod phase of 145). We repeated the sweep and see that we have lower SNR for the noise lock using this lower frequency BLRMS, and that the ADF as it is now wouldn't make a good error signal for this SQZ angle because it doesn't really go negative.
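As a rough illustration of what the noise-lock demodulation is doing with the BLRMS, here is a minimal numpy lock-in sketch; the dither frequency, sample rate, and averaging are placeholders, not the actual SQZ ANG servo settings:

# Minimal lock-in sketch: demodulate a BLRMS time series at the SQZ angle dither frequency
# (placeholder dither frequency, sample rate, and data; not the real servo parameters)
import numpy as np

fs = 128.0           # BLRMS sample rate (placeholder)
f_dither = 5.0       # SQZ angle dither frequency (placeholder)
t = np.arange(0, 60, 1 / fs)

blrms = 1.0 + 0.01 * np.random.randn(t.size)   # stand-in for the 1 kHz BLRMS during the dither

# multiply by sine/cosine at the dither frequency and average (crude low-pass)
# to get in-phase and quadrature error signals for the noise lock
i_err = 2 * np.mean(blrms * np.cos(2 * np.pi * f_dither * t))
q_err = 2 * np.mean(blrms * np.sin(2 * np.pi * f_dither * t))
print(i_err, q_err)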
Detchar: We are planning to leave the ADF on over the weekend, which will create a line at 1.3kHz. We are hoping to use this to track changes in the squeezing angle over time.
The ADF calculated SQZ angle (H1:SQZ-ADF_OMC_TRANS_SQZ_ANG) seems to follow our SQZ BLRMS over the weekend, plot attached. Unsure why the SQZ is different lock to lock, i.e. it sometimes changes over the first 6 hours (1 day ago) and is sometimes stable (-12 hours ago).
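A trend comparison like this can be pulled with gwpy; a minimal sketch, where the BLRMS channel name and the exact weekend span are assumptions (only H1:SQZ-ADF_OMC_TRANS_SQZ_ANG is named in the text):

# Minimal sketch: trend the ADF-derived SQZ angle against a SQZ BLRMS over the weekend
from gwpy.timeseries import TimeSeriesDict

start, end = '2023-11-18 00:00', '2023-11-20 00:00'   # placeholder weekend span
channels = ['H1:SQZ-ADF_OMC_TRANS_SQZ_ANG',           # named above
            'H1:SQZ-OMC_BLRMS_4_OUTPUT']              # hypothetical BLRMS channel name

data = TimeSeriesDict.get(channels, start, end)
plot = data[channels[0]].plot(label='ADF SQZ angle')
ax = plot.gca()
ax.plot(data[channels[1]], label='SQZ BLRMS 4')
ax.legend()
plot.show()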
(Jordan V., Gerardo M.)
Late entry.
Last Tuesday Jordan and I installed 4 o-ring valves on the following crosses: FC-B2, FC-B4, and FC-B5.
Details: on FC-B2 we installed two o-ring valves, one on the +X 2.75" CF port and the second on the -X 2.75" CF port; both valves were tested for leaks, and all new joints passed.
On the other two crosses, FC-B4 and FC-B5, we installed only one o-ring valve, one on each cross. The o-ring valves were installed on the +X 2.75" CF ports. Both valves were tested for leaks, all new joints passed.
I've dropped the amplitude gain of the 17.6 Hz SUS ETMX line (SUS_LINE3) down to 0.12 from 0.17 (LHO:74145), since Camilla and Gabriele set up and installed a new MICH FF filter (discussed in LHO:74139), which was engaged during today's commissioning period.
The command I used is below:
gpstime;val=0.12 && caput H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN $val && caput H1:SUS-ETMX_L3_CAL_LINE_SINGAIN $val && caput H1:SUS-ETMX_L3_CAL_LINE_COSGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN $val
PST: 2023-11-17 13:51:37.083452 PST
UTC: 2023-11-17 21:51:37.083452 UTC
GPS: 1384293115.083452
Old : H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN 0.1
New : H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN 0.12
Old : H1:SUS-ETMX_L3_CAL_LINE_SINGAIN 0.1
New : H1:SUS-ETMX_L3_CAL_LINE_SINGAIN 0.12
Old : H1:SUS-ETMX_L3_CAL_LINE_COSGAIN 0.1
New : H1:SUS-ETMX_L3_CAL_LINE_COSGAIN 0.12
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN 0.1
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN 0.12
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN 0.1
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN 0.12
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN 0.1
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN 0.12
lscparams.py has been updated to use the new gain
Lines 461-464:
cal_line_gains = {'ETMX_L1': 6.6, # for CLK, SIN, COS
'ETMX_L2': 9,
'ETMX_L3': 0.12
}
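For reference, the same six-channel update could also be scripted with pyepics instead of the shell one-liner; a minimal sketch using the channel names from the command above (the loop itself is just an illustration, not what was run):

# Minimal pyepics sketch of the same cal-line gain update
from epics import caput

val = 0.12
channels = [
    'H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN',
    'H1:SUS-ETMX_L3_CAL_LINE_SINGAIN',
    'H1:SUS-ETMX_L3_CAL_LINE_COSGAIN',
    'H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN',
    'H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN',
    'H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN',
]
for ch in channels:
    caput(ch, val)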
11/17 21:00UTC Dropped Observing for Commissioning
23:52UTC Observing
FAMIS26161
Added ~300mL to the TCSX chiller and nothing to the TCSY chiller. The TCSX chiller filter sock looked to have a large air bubble underneath, so I reseated it, hence the larger than expected fill amount.
Filters looked good and the Dixie LDU (Leak Detection Unit) was still dry.
Fri Nov 17 10:08:55 2023 INFO: Fill completed in 8min 51secs
Travis confirmed a good fill curbside. TC mins today were -114C and -104C, outside temp +1.1C.
Closes 26460, last completed in October
All looks nominal, barring a glitch that appears to have happened ~10 days ago, seen on all HEPIs.
TITLE: 11/17 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Detector in Observing and Locked for 4.5 hours. Looks like this last lockloss was from an earthquake.
The lockloss from last night, 11/17 08:32UTC, was definitely from an earthquake (EQResponse, Peakmon/LSC, SeismicFOM).
The detector correctly waited for the ground and oplevs to settle, and afterwards locked itself without any input and did not even go through Increase Flashes or an Initial Alignment.
TITLE: 11/17 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We've been locked for almost 7 hours.
For the relock we couldn't get DRMI or PRMI, so we went through CHECK_MICH, then PRMI twice; after the 2nd PRMI, DRMI was able to lock easily. The ISCT6 AS AIR camera flashed blue at 00:24 UTC while we were in CHECK_SHUTTERS (tagging CDS). Once we went through CLOSE_BEAM_DIVERTERS, the SQZ_FC node went into error, which Naoki then checked out and fixed.
01:09UTC In observing
02:39UTC EQ mode activated 5.7 from Myanmar, back to calm at 02:49UTC
03:03UTC GRB-Short E453298
First snow of the season? Small flurry on site around 04:00UTC and 07:00UTC onwards.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:53 | VAC | Gerardo | FCES | n | Vac checks | 00:14 |
Naoki, Sheila
In the AS72 sensing matrix measurement in alog 74106, Daniel suggested increasing the whitening gain of AS72 since it could be limited by ADC noise. We checked the whitening filters of AS72 A and B. Both of them have 12dB whitening gain, but one stage of whitening is engaged for AS72 A, while two stages of whitening are engaged for AS72 B. We decided to engage the 2-stage whitening for AS72 A, which is used in SRC1. The IFO locks without any problem with this additional whitening. We accepted some SDFs as shown in the attached figures. We will try to increase the whitening gain later.
I think the attached plot shows this was a good idea. I have some old data measuring RF72 with whitening on and off, but I never posted it because I couldn't figure out the correct RF72 transimpedance. The attached plot shows an estimate of the ADC noise level compared with the noise spectrum in lock. Naoki and Daniel were kind enough to help me figure out the proper transimpedance for RF72 (see 37065).
The measurement and calculation procedures are detailed in alog 66734. At the time, RF72 was using 1 stage of whitening with 12 dB whitening gain.
I think it's likely my shot noise calculation here is incorrect. Correcting that calculation is in progress...
During Tuesday maintenance, while IMC_LOCK was OFFLINE, I measured the AS72 dark noise with different whitening settings. The attached figure shows the dark noise of AS72 A Q PIT/YAW, which are used in SRC1. The 2-stage whitening and 12 dB whitening gain are the current nominal settings. In Elenna's previous measurement there was a bump around 25 Hz, but there is no bump in today's measurement.
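For context, a dark-noise ASD comparison of this sort can be made with gwpy; a minimal sketch, where the channel name, GPS spans, and FFT settings are assumptions (substitute the actual AS72 A DQ channels and measurement times):

# Minimal sketch: compare AS72 A dark-noise ASDs for two whitening configurations
from gwpy.timeseries import TimeSeries

chan = 'H1:ASC-AS_A_RF72_Q_PIT_OUT_DQ'     # hypothetical channel name
span_1stage = (1384200000, 1384200600)     # placeholder GPS span, 1-stage whitening, dark
span_2stage = (1384203000, 1384203600)     # placeholder GPS span, 2-stage whitening, dark

asd_1 = TimeSeries.get(chan, *span_1stage).asd(fftlength=8, overlap=4)
asd_2 = TimeSeries.get(chan, *span_2stage).asd(fftlength=8, overlap=4)

plot = asd_1.plot(label='1-stage whitening')
ax = plot.gca()
ax.plot(asd_2, label='2-stage whitening')
ax.set_xlim(1, 100)
ax.legend()
plot.show()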
I checked the FC GR SUS/VCO crossover as shown in the attached figure. The crossover is about 60 Hz, which is much larger than the blue reference from December 2022. This higher GR SUS gain could cause the instability during the transition from GR to IR. I reduced the GR SUS gain from 1 to 0.5 and the crossover became similar to the blue reference. I updated the SQZ_FC guardian to change this gain. Let's see if this helps with the recent failures of the FC GR to IR transition.
The green sus gain was defined in the GR_SUS_LOCKING state of the SQZ_FC guardian as follows.
ezca['SQZ-FC_LSC_INMTRX_SETTING_1_6'] = 0.5
In the previous alog, I changed the green sus gain by changing this line, but I think it is more convenient to define it in sqzparams, so I modified this line as follows.
self.GR_GAIN = sqzparams.fc_green_sus_gain
ezca['SQZ-FC_LSC_INMTRX_SETTING_1_6'] = self.GR_GAIN
And I defined fc_green_sus_gain = 0.5 in sqzparams.
I defined self.GR_GAIN = sqzparams.fc_green_sus_gain also in TRANSITION_IR_LOCKING state.
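For context, here is a minimal sketch of how the GR_SUS_LOCKING state might look with the gain pulled from sqzparams; only the two assignments quoted above come from the actual SQZ_FC guardian, and the surrounding state structure is illustrative:

# Illustrative guardian-state sketch (not the actual SQZ_FC code beyond the two quoted lines)
from guardian import GuardState
import sqzparams

class GR_SUS_LOCKING(GuardState):
    request = False

    def main(self):
        # pull the green sus gain from sqzparams (fc_green_sus_gain = 0.5)
        self.GR_GAIN = sqzparams.fc_green_sus_gain
        ezca['SQZ-FC_LSC_INMTRX_SETTING_1_6'] = self.GR_GAIN
        return True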
This is a continuation of a discussion of mis-application of the calibration model raised in LHO alog 71787, which was fixed on August 8th (LHO alog 72043), and of further issues with which time varying factors (kappas) were applied while the ETMX UIM calibration line coherence was bad (see LHO alog 71790, which was fixed on August 3rd). We need to update the calibration uncertainty estimates with the combination of these two problems where they overlap.
The appropriate thing is to use the full DARM model (1/C + (A_uim + A_pum + A_tst) * D), where C is the sensing function, A_{uim,pum,tst} are the individual ETMX stage actuation transfer functions, and D is the digital DARM filters. Although, it looks like we can just get away with an approximation, which will make implementation somewhat easier.
As a demonstration of this, first I confirm I can replicate the 71787 result purely with models (no fitting). I take the pydarm calibration model response, R, correct it for the time dependent correction factors (kappas) at the same time I took the GDS/DARM_ERR data, and then take the ratio with the same model except with the 3.2 kHz ETMX L3 HFPoles removed (the correction Louis and Jeff eventually implemented). This is the first attachment.
Next we calculate the expected error just from the wrong kappas being applied in the GDS pipeline due to poor UIM coherence. For this initial look, I choose GPS time 1374369018 (2023-07-26 01:10); you can see the LHO summary page here, with the upper left plot showing the kappa_C discrepancy between GDS and front end. Just this issue produces the second attachment.
We can then look at the effect of the missing 3.2 kHz pole for two possibilities, the front-end kappas and the bad GDS kappas, and see that the difference is pretty small compared to typical calibration uncertainties. Here it's on the scale of a tenth of a percent at around 90 Hz. I can also plot the model with the front-end kappas (more correct at this time) over the model with the wrong GDS kappas, for a comparison in scale as well. This is the 3rd plot.
This suggests to me the calibration group can just apply a single correction to the overall response function systematic error for the period where the 3.2 kHz HFPole filter was missing, and then, in addition, for the period where the UIM uncertainty was preventing the kappa_C calculation from updating, apply an additional correction factor that is time dependent, just multiplying the two. As an example, the 4th attachment shows what this would look like for the GPS time 1374369018.
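Stated compactly (a restatement in LaTeX form of the model and the proposed correction described above; the factorization in the second line is the approximation being argued for, not an exact identity):

R(f) = \frac{1}{C(f)} + \left[ A_{\mathrm{uim}}(f) + A_{\mathrm{pum}}(f) + A_{\mathrm{tst}}(f) \right] D(f)

\eta_R(f;t) \approx \eta_{3.2\,\mathrm{kHz}}(f) \times \eta_{\kappa}(f;t)

where \eta_{3.2 kHz} is the static correction for the missing HFPole filter and \eta_{\kappa} is the time-dependent correction for the frozen kappas.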
For further explanation of the impact of Frozen GDS TDCFs vs. Live CAL-CS Computed TDCFs on the response function systematic error, i.e. what Joe's saying with "Next we calculate the expected error just from the wrong kappas being applied in the GDS pipeline due to poor UIM coherence. For this initial look, I choose GPS time 1374369018 (2023-07-26 01:10 UTC), you can see the LHO summary page here, with the upper left plot showing the kappa_C discrepancy between GDS and front end. So just this issue produces the second attachment." and what he shows in his second attachment, see LHO:72812.
I've made some more clarifying plots to help me better understand Joe's work above after getting a few more details from him and Vlad.
(1) GDS-CALIB_STRAIN is corrected for time dependence, via the relative gain changes, "\kappa," as well as for the new coupled-cavity pole frequency, "f_CC." In order to make a fair comparison between the *measured* response function, the GDS-CALIB_STRAIN / DARM_ERR live data stream, and the *modeled* response function, which is static in time, we need to update the response function with the time dependent correction factors (TDCFs) at the time of the *measured* response function.
How is the *modeled* response function updated for time dependence? Given the new pydarm system, it's actually quite straightforward given a DARM model parameter set, pydarm_H1.ini, and a good conda environment. Here's a bit of pseudo-code that captures what's happening conceptually:

# Set up environment
from gwpy.timeseries import TimeSeriesDict as tsd
from copy import deepcopy
import pydarm

# Instantiate two copies of pydarm DARM loop model
darmModel_obj = pydarm.darm.DARMModel('pydarm_H1.ini')
darmModel_wTDCFs_obj = deepcopy(darmModel_obj)

# Grab time series of TDCFs
tdcfs = tsd.get(chanList, starttime, endtime, frametype='R', verbose=True)
kappa_C = tdcfs[chanList[0]].value
freq_CC = tdcfs[chanList[1]].value
kappa_U = tdcfs[chanList[2]].value
kappa_P = tdcfs[chanList[3]].value
kappa_T = tdcfs[chanList[4]].value

# Multiply in kappas, replace cavity pole, with a "hot swap" of the relevant parameter in the DARM loop model
darmModel_wTDCFs_obj.sensing.coupled_cavity_optical_gain *= kappa_C
darmModel_wTDCFs_obj.sensing.coupled_cavity_pole_frequency = freq_CC
darmModel_wTDCFs_obj.actuation.xarm.uim_npa *= kappa_U
darmModel_wTDCFs_obj.actuation.xarm.pum_npa *= kappa_P
darmModel_wTDCFs_obj.actuation.xarm.tst_npv2 *= kappa_T

# Extract the response function transfer function on your favorite frequency vector
R_ref = darmModel_obj.compute_response_function(freq)
R_wTDCFs = darmModel_wTDCFs_obj.compute_response_function(freq)

# Compare the two response functions to form a "systematic error" transfer function, \eta_R.
eta_R_wTDCFs_over_ref = R_wTDCFs / R_ref

For all of this study, I started with the reference model parameter set that's relevant for these times in late July 2023 -- the pydarm_H1.ini from the 20230621T211522Z report directory, which I've copied over to a git repo as pydarm_H1_20230621T211522Z.ini.
(2) One layer deeper, some of what Joe's trying to explore in his plots above is the difference between low-latency, GDS-pipeline-computed TDCFs and real-time, CALCS-pipeline-computed TDCFs, because of the issues with the GDS pipeline computation discussed in LHO:72812. So, in order to facilitate this study, we have to gather TDCFs from both pipelines. Here's the channel list for both:

chanList = ['H1:GRD-ISC_LOCK_STATE_N',
            'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
            'H1:CAL-CS_TDEP_F_C_OUTPUT',
            'H1:CAL-CS_TDEP_KAPPA_UIM_REAL_OUTPUT',
            'H1:CAL-CS_TDEP_KAPPA_PUM_REAL_OUTPUT',
            'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',
            'H1:GDS-CALIB_KAPPA_C',
            'H1:GDS-CALIB_F_CC',
            'H1:GDS-CALIB_KAPPA_UIM_REAL',
            'H1:GDS-CALIB_KAPPA_PUM_REAL',
            'H1:GDS-CALIB_KAPPA_TST_REAL']

where the first channel in the list is the state of the detector lock acquisition guardian, for useful comparison.
(3) Indeed, for *most* of the above aLOG, Joe chooses an example of times when the GDS and CALCS TDCFs are *the most different* -- in his case, 2023-07-26 01:10 UTC (GPS 1374369018) -- when the H1 detector is still thermalizing after power up.
They're *different* because the GDS calculation was frozen at the values they had on the day that the calculation was spoiled by a bad MICH FF filter, 2023-08-04 -- and importantly when the detector *was* thermalized. An important distinction that's not made above is that the *measured* data in his first plot is from LHO:71787 -- a *different* time, when the detector WAS thermalized, a day later -- 2023-07-27 05:03:20 UTC (GPS 1374469418). Compare the TDCFs between the NOT THERMALIZED time, 2023-07-26 (first attachment here), and the 2023-07-27 THERMALIZED first attachment I recently added to Vlad's LHO:71787. One can see that in the 2023-07-27 THERMALIZED data the Frozen GDS and Live CALCS TDCF answers agree quite well. For the NOT THERMALIZED time, 2023-07-26, \kappa_C, f_CC, and \kappa_U are quite different.
(4) So, let's compare the response function ratio, i.e. the systematic error transfer function ratio, between the response function updated with GDS TDCFs vs. CALCS TDCFs for the two different times -- thermalized vs. not thermalized. This will be an expanded version of Joe's second attachment:
- 2nd attachment here: this exactly replicates Joe's plot, but shows more ratios to better get a feel for what's happening. Using the variables from the pseudo-code above, I'm plotting
:: BLUE = eta_R_wTDCFs_CALCS_over_ref = R_wTDCFs_CALCS / R_ref
:: ORANGE = eta_R_wTDCFs_GDS_over_ref = R_wTDCFs_GDS / R_ref
:: GREEN = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS
where the GREEN trace is showing what Joe showed -- both as the unlabeled BLUE trace in his second attachment, and the "FE kappa true R / applied bad kappa R" GREEN trace in his third attachment -- the ratio between response functions, one updated with CALCS TDCFs and the other updated with GDS TDCFs, for the NOT THERMALIZED time.
- 3rd attachment here: this replicates the same traces, but with the TDCFs from Vlad's THERMALIZED time.
For both Joe's and my plots, because we think that the CALCS TDCFs are more accurate, and it's tradition to put the more accurate response function in the numerator, we show it as such. Comparing the two GREEN traces from my plots, it's much more clear that the difference between GDS and CALCS TDCFs is negligible for THERMALIZED times, and substantial during NOT THERMALIZED times.
(5) Now we bring in the complexity of the missing 3.2 kHz ESD pole. Unlike the "hot swap" of TDCFs in the DARM loop model, it's a lot easier just to create an "offline" copy of the pydarm parameter file with the ESD poles removed. That parameter file lives in the same git repo location, but is called pydarm_H1_20230621T211522Z_no3p2k.ini. So, with that, we just instantiate the model in the same way, but calling the different parameter file:

# Instantiate two copies of pydarm DARM loop model
darmModel_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z.ini')
darmModel_no3p2k_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z_no3p2k.ini')

# Extract the response function transfer function on your favorite frequency vector
R_ref = darmModel_obj.compute_response_function(freq)
R_no3p2k = darmModel_no3p2k_obj.compute_response_function(freq)

# Compare the two response functions to form a "systematic error" transfer function, \eta_R.
eta_R_nom_over_no3p2k = R_ref / R_no3p2k

where here, the response function without the 3.2 kHz pole is less accurate, so R_no3p2k goes in the denominator. Without any TDCF correction, I show this eta_R_nom_over_no3p2k compared against Vlad's fit from LHO:71787 for starters.
(6) Now for the final layer of complexity: we need to fold in the TDCFs. This is where I think a few more traces and plots are needed comparing the two times, THERMALIZED vs. NOT, plus some clear math, in order to explain what's going on. In the end, I come to the same conclusion as Joe, that the two effects -- fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole -- are "separable" to good approximation, but I'm slower than Joe is, and need things laid out more clearly.
So, on the pseudo-code side of things, we need another couple of copies of the darmModel_obj:
- with and without the 3.2 kHz pole,
- with TDCFs from CALCS and GDS,
- from THERMALIZED (LHO71787) and NOT THERMALIZED (LHO72622) times:

R_no3p2k_wTDCFs_CCS_LHO71787 = darmModel_no3p2k_wTDCFs_CCS_LHO71787_obj.compute_response_function(freq)
R_no3p2k_wTDCFs_GDS_LHO71787 = darmModel_no3p2k_wTDCFs_GDS_LHO71787_obj.compute_response_function(freq)
R_no3p2k_wTDCFs_CCS_LHO72622 = darmModel_no3p2k_wTDCFs_CCS_LHO72622_obj.compute_response_function(freq)
R_no3p2k_wTDCFs_GDS_LHO72622 = darmModel_no3p2k_wTDCFs_GDS_LHO72622_obj.compute_response_function(freq)

eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO71787 = R_wTDCFs_CCS_LHO71787 / R_no3p2k_wTDCFs_CCS_LHO71787
eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO71787 = R_wTDCFs_GDS_LHO71787 / R_no3p2k_wTDCFs_GDS_LHO71787
eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622 = R_wTDCFs_CCS_LHO72622 / R_no3p2k_wTDCFs_CCS_LHO72622
eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622 = R_wTDCFs_GDS_LHO72622 / R_no3p2k_wTDCFs_GDS_LHO72622

Note, critically, that these ratios of with and without the 3.2 kHz pole -- both updated with the same TDCFs -- are NOT THE SAME THING as just the ratio of models updated with GDS vs. CALCS TDCFs, even though it might look like the "reference" and "no 3.2 kHz pole" terms should cancel "on paper," if one naively thinks that the operation is separable:

[[ ( R_wTDCFs_CCS / R_ref ) * ( R_ref / R_no3p2k ) ]] / [[ ( R_wTDCFs_GDS / R_ref ) * ( R_ref / R_no3p2k ) ]]   #NAIVE

which one might naively cancel terms to get down to

[[ R_wTDCFs_CCS ]] / [[ R_wTDCFs_GDS ]]   #NAIVE

So, let's look at the answer now, with all this context.
- NOT THERMALIZED: This is a replica of what Joe shows in the third attachment for the 2023-07-26 time:
:: BLUE -- the systematic error incurred from excluding the 3.2 kHz pole on the reference response function, without any updates to TDCFs (eta_R_nom_over_no3p2k)
:: ORANGE -- the systematic error incurred from excluding the 3.2 kHz pole on the CALCS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622, Joe's "FE kappa true R / applied R (no pole)")
:: GREEN -- the systematic error incurred from excluding the 3.2 kHz pole on the GDS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622, Joe's "GDS kappa true R / applied (no pole)")
:: RED -- compared against Vlad's *fit*, the ratio of the CALCS-TDCF-updated, modeled response function to the (GDS-CALIB_STRAIN / DARM_ERR) measured response function
Here, because the GDS TDCFs are different than the CALCS TDCFs, you actually see a non-negligible difference between ORANGE and GREEN.
- THERMALIZED: (Same legend, but the TIME and TDCFs are different.) Here, because the GDS and CALCS TDCFs are the same-ish, you can't see that much of a difference between the two.
Also note that even when we're using the same THERMALIZED time and corresponding TDCFs to be self-consistent with Vlad's fit of the measured response function, they still don't agree perfectly. So there's likely still yet more systematic error in play during the thermalized time.
(7) Finally, I wanted to explicitly show the consequences of "just" correcting for GDS and of "just" correcting the missing 3.2 kHz pole, to be able to better *quantify* the statement that "the difference is pretty small compared to typical calibration uncertainties," as well as to show the difference between "just" the ratio of response functions updated with the different TDCFs (the incorrect model) and the "full" models. I show this in
- NOT THERMALIZED, and
- THERMALIZED
For both of these plots, I show
:: GREEN -- the corrective transfer function we would be applying if we only update the Frozen GDS TDCFs to Live CALCS TDCFs, compared with
:: BLUE -- the ratio of corrective transfer functions:
>> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole, against
>> only fixing the missing 3.2 kHz pole
:: ORANGE -- the ratio of corrective transfer functions:
>> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole, against
>> the "second best thing to do," which is to leave the Frozen TDCFs alone and correct for the missing 3.2 kHz pole
Even for the NOT THERMALIZED time, BLUE never exceeds 1.002 / 0.1 deg in magnitude / phase, and it's small compared to the "TDCF only" simple correction of Frozen GDS TDCFs to Live CALCS TDCFs, shown in GREEN. This helps quantify why Joe thinks we can separately apply the two corrections to the systematic error budget: because GREEN is much larger than BLUE. For the THERMALIZED time, in BLUE, that ratio of full models is even smaller, and, also as expected, the ratio of simple TDCF update models is also small.
%%%%%%%%%%
The code that produced this aLOG is create_no3p2kHz_syserror.py as of git hash 3d8dd5df.
Following up on this study just one step further, as I begin to actually correct data during the time period where both of these systematic errors are in play -- the frozen GDS TDCFs and the missing 3.2 kHz pole -- I craved one more set of plots to convey that "fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole are 'separable' to good approximation," showing the actual corrections one would apply in the different cases:
:: BLUE = eta_R_nom_over_no3p2k = R_ref / R_no3p2k
>> the systematic error created by the missing 3.2 kHz pole in the ESD model alone
:: ORANGE = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS
>> the systematic error created by the frozen GDS TDCFs alone
:: GREEN = eta_R_nom_over_no3p2k * eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = the product of the two
>> the approximation
:: RED = a previously unshown eta that we'd actually apply to the data that had both = R_ref (updated with CALCS TDCFs) / R_no3p2k (updated with GDS TDCFs)
>> the right thing
As above, it's important to look at both a thermalized case and a non-thermalized case, so I attach those two: NOT THERMALIZED, and THERMALIZED. The conclusions are the same as above:
- Joe is again right that the difference between the approximation (GREEN) and the right thing (RED) is small, even for the NOT THERMALIZED time.
But I think this version of the plots / traces better shows the breakdown of which effect is contributing where on top of the approximation vs. "the right thing," and "the right thing" was never explicitly shown. All the traces in my expanded aLOG, LHO:72879, had the reference model (or no-3.2 kHz-pole models) updated with either both CALCS TDCFs or both GDS TDCFs in the numerator and denominator, rather than "the right thing," where you have CALCS TDCFs in the numerator and GDS TDCFs in the denominator.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To create these extra plots, I added a few lines of "calculation" code and another 40-ish lines of plotting code to create_no3p2kHz_syserror.py. I've now updated it within the git repo, so it and the repo now have git hash 1c0a4126.