H1 PSL
oli.patane@LIGO.ORG - posted 08:26, Saturday 16 September 2023 (72910)
PSL Weekly FAMIS

Closes FAMIS#26209, last completed Sept 8th


Laser Status:
    NPRO output power is 1.833W (nominal ~2W)
    AMP1 output power is 67.19W (nominal ~70W)
    AMP2 output power is 134.8W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN

PMC:
    It has been locked 41 days, 0 hr 12 minutes
    Reflected power = 16.6W
    Transmitted power = 109.0W
    PowerSum = 125.6W

FSS:
    It has been locked for 0 days 19 hr and 29 min
    TPD[V] = 0.8165V

ISS:
    The diffracted power is around 2.4%
    Last saturation event was 0 days 19 hours and 29 minutes ago


Possible Issues: None

H1 General
oli.patane@LIGO.ORG - posted 08:04, Saturday 16 September 2023 (72909)
Ops DAY Shift Start

TITLE: 09/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:

Everything is looking good this morning. We're Observing and have been Locked for 18hrs.

Earthquake mode was activated between 13:04-13:14UTC.

LHO General
corey.gray@LIGO.ORG - posted 23:59, Friday 15 September 2023 (72907)
Fri Eve Ops Summary

TITLE: 09/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:  Uneventful shift (mostly) with H1 approaching 10hrs of lock.
LOG:

H1 TCS (OpsInfo, TCS)
corey.gray@LIGO.ORG - posted 20:20, Friday 15 September 2023 (72908)
TCSx Laser Bumps H1 Out Of Observing

Bumped out of Observing due to TCSx CO2 Laser unlocking.

( 2023-09-16_03:01:35.894099Z TCS_ITMX_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point )

The TCS_ITMX_CO2 guardian node was able to bring the laser back up within 70 sec, and then I took H1 back to OBSERVING.

LHO General
corey.gray@LIGO.ORG - posted 16:10, Friday 15 September 2023 (72906)
Friday Eve Status

TITLE: 09/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 9mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

Got a rundown of the day from Oli (not much to report for H1 other than the known investigations into EX glitches prior to locklosses). Microseism has been slowly increasing over the last 24 hrs and winds are low.

(My first time in the Control Room for a while and it is noticeably quieter in here--- i.e. less fan noise?)

H1 General
oli.patane@LIGO.ORG - posted 16:03, Friday 15 September 2023 (72905)
Ops DAY Shift Start

TITLE: 09/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Have now been Observing and Locked for 2 hours. Fairly quiet day today, with only the one lockloss.

15:00UTC Detector in Observing and Locked for 16hrs 22mins

18:55 Earthquake mode activated due to earthquake from Chile
19:05 Back to calm

19:54 Lockloss

21:04 Reached Nominal Low Noise
21:19 Observing


LOG:             

Start Time | System | Name   | Location  | Laser Haz | Task                | End Time
12:14      | FAC    | Karen  | Vac Prep  | n         | Tech clean          | 15:14
16:52      | FAC    | Cindi  | Wood Shop | n         | Overpass clean      | 17:01
17:21      | FAC    | Kim    | H2        | n         | Tech clean          | 17:30
20:07      | LAS    | Travis | FCES      | n         | Laser glasses check | 20:21
H1 SEI
erik.vonreis@LIGO.ORG - posted 14:35, Friday 15 September 2023 - last comment - 09:55, Tuesday 19 September 2023(72904)
BBB channel dropped from Picket Fence

The BBB channel was continually going into the alarm range.  It's been removed from Picket Fence FOM and EPICS.

The steps for removal were:

1. Edited /opt/rtcds/userapps/release/isi/h1/scripts/Picket-Fence/LHO-picket-fence.py.  Commented out the block at line 39 that adds the "BBB" channel.

2. VNC to nuc5.  Close the Picket Fence window.

3. ssh as controls to nuc5.  Run "start/launch.sh".  This restarts the Picket Fence FOM, which also sets the EPICS variables.
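
For reference, a rough sketch of the kind of change made in step 1; the real structure of LHO-picket-fence.py is not reproduced here, so the names below are hypothetical:

    # Hypothetical sketch only -- the actual LHO-picket-fence.py differs.
    # Each picket station is an entry in a list of seismometers to monitor;
    # dropping BBB just means commenting out its entry and restarting the FOM.
    STATIONS = [
        {"name": "NEW", "network": "UW"},    # hypothetical example entry
        {"name": "OTR", "network": "UW"},    # hypothetical example entry
        # {"name": "BBB", "network": "CN"},  # removed: continually in the alarm range
    ]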

Comments related to this report
brian.lantz@LIGO.ORG - 09:55, Tuesday 19 September 2023 (72963)

Thanks Erik.

FYI - There are a bunch of stations around the Vancouver area, but BBB is the only one hosted by PNSN. We've reached out to see if we can get access to a similar low-latency server so that we can hopefully find a quieter station to use. These stations are useful for monitoring incoming motion from Alaska.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 12:55, Friday 15 September 2023 - last comment - 14:19, Friday 15 September 2023(72901)
Lockloss

Lockloss at 09/15 19:54

Comments related to this report
oli.patane@LIGO.ORG - 14:19, Friday 15 September 2023 (72903)

21:19UTC Back Observing

H1 General
oli.patane@LIGO.ORG - posted 12:26, Friday 15 September 2023 (72900)
Ops DAY Midshift Update

We've been Locked for almost 21hours now. We've had a few EX saturations but nothing too big. Just finished riding out an earthquake from Chile.

H1 CDS
david.barker@LIGO.ORG - posted 12:07, Friday 15 September 2023 (72899)
Added GPS channels to Picket Fence MEDM

During this week's Tuesday maintenance, we added two GPS channels to the Picket Fence system:

H1:SEI-USGS_SERVER_START_GPS : Start time of server on nuc5
H1:SEI-USGS_SERVER_GPS: server's current processing time, updates every few seconds when running

I have added these channels to the Picket Fence MEDM, along with a "Server Running" status rectangle which turns RED if the SERVER_GPS time lags current GPS by a minute or more.

I also added Picket Fence to the CDS section of the CDS Overview. The circular RED/GREEN LED uses the same display logic as above, and the PKT button opens the Picket Fence MEDM.
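
For illustration, a rough Python sketch of the lag check described above (the MEDM display implements this internally; pyepics and astropy are assumed to be available):

    # Sketch of the staleness check: compare the server's reported GPS time to
    # the current GPS time and flag it if it lags by a minute or more.
    from epics import caget          # pyepics
    from astropy.time import Time    # for the current GPS time

    server_gps = caget('H1:SEI-USGS_SERVER_GPS')   # server's current processing time
    now_gps = Time.now().gps                       # current GPS time

    server_running = (now_gps - server_gps) < 60   # RED when this is False
    print('Server Running' if server_running else 'Server stalled (RED)')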

The attachment below shows the new H1CDS_PICKET_FENCE MEDM (computer generated) and a snippet of the CDS Overview showing the CDS section.
 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:16, Friday 15 September 2023 (72898)
Fri CP1 Fill

Fri Sep 15 10:08:59 2023 INFO: Fill completed in 8min 55secs

Gerardo confirmed a good fill curbside

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 08:03, Friday 15 September 2023 (72895)
Ops DAY Shift Start

TITLE: 09/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

Detector is Observing and has been Locked for 16hrs 22mins. Everything looks good

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:27, Friday 15 September 2023 - last comment - 10:20, Friday 15 September 2023(72894)
OPS Eve Shift Summary

TITLE: 09/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 22:53 UTC

Continued Lockloss (alog 72875) Investigations (Continued from alog 72876)

The same type of lockloss happened today during TJ’s Day shift. Instead of continuing exactly where I left off by inspecting the EX saturations, I briefly trended the lockloss select channels (as was done yesterday and today with TJ for separate locklosses of this type). From the LSC lockloss scopes (Screenshot 1), we can clearly see that H1:LSC-DARM_IN1_DQ saw the lockloss first, about 92 ms before any of the other LSC channels. From speaking with TJ the day earlier, this is a channel that goes back to the OMC DCPDs (if I recall correctly).

Before hunting the actuator down, I zoomed in on the channel and saw that its bumpy behavior started building up at 4:45:47 (Screenshot 2), a second before that lockloss. The second screenshot is just a zoom on the tiny hump seen in the first one.

 

Unfortunately, there was not enough time to continue investigating, but I will be picking this up next week. We essentially found that there is one particular channel, related to the OMC DCPDs, that shows a build-up followed by a violent kick that knocks everything out from time to time, causing locklosses. What I would like to know/ask:

  1. Are these just causing glitches, where the most powerful “spasms” are the lockloss-causing ones? Or does this species of glitch always cause a lockloss?
  2. How is this related to EX saturating 1.5 seconds before? This didn’t happen in today’s lockloss, yet there was a similar preliminary kick in this channel prior to the lockloss.
  3. Which actuator/suspension is this one related to? What more can we find out by looking at it? This is probably easy to find out (and also the most important) - I just ran out of time.
  4. How long has this type of lockloss causing glitch been happening? This would involve just trending that channel with various lockloss times, but also trending it with various BLRMs glitch times to investigate the mutual exclusivity of these two events. Is there a correlation high enough between these events that can inform a causation? Do these things sometimes cause EX saturations (the really violent ones like yesterday’s) or is that just a coincidence?
  5. Is this nature of spasm build-up indicative of anything to do with electronics, software etc.? A question for the experts.

Most of these questions hit on the same few pieces of evidence we have (EX saturation - potential red herring, OMC Channel kick - a new area to investigate) and the BLRMs glitch incidence (the evidence that it wasn’t external).
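
For anyone picking this up, a minimal gwpy sketch of the kind of trend described above, fetching H1:LSC-DARM_IN1_DQ around a lockloss (the GPS time below is a placeholder, not one of the actual locklosses):

    # Sketch (assumes gwpy and NDS/frame access); substitute the real lockloss GPS time.
    from gwpy.timeseries import TimeSeries

    t_lockloss = 1378800000                 # placeholder GPS time
    chan = 'H1:LSC-DARM_IN1_DQ'

    # Grab a few seconds around the lockloss to see the build-up before the kick
    data = TimeSeries.get(chan, t_lockloss - 5, t_lockloss + 1)
    plot = data.plot()
    ax = plot.gca()
    ax.axvline(t_lockloss, color='r', linestyle='--', label='lockloss')
    ax.set_ylabel('DARM_IN1 [counts]')
    ax.legend()
    plot.show()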

Other:

3 GRB-Short Alarms

Many glitches but 0 lockloss causing ones

LOG:

Start Time | System | Name                        | Location | Laser Haz | Task                        | End Time
20:22      | EPO    | Oregon Public Broadcasting  | Overpass | N         | Setting up timelapse camera | 20:59
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:20, Friday 15 September 2023 (72896)

Tony, Oli, Camilla

Good lockloss investigations Ibrahim. The lockloss tool shows these ETMX glitches in the ~2 seconds before the lockloss in the "Saturations" and "Length-Pitch-Yaw" plots. I think ETMX moving would cause a DARM glitch (so the DARM BLRMs would increase), or vice versa: DARM changing would cause ETMX to try to follow.  A faster ETMX channel to look at would be H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ (16384 Hz vs 16 Hz). You can see the frame rate of the channels using the command 'chndump | grep H1:SUS-ETMX_L3_MASTER_OUT_' or similar.

The attached plot shows that L1, L2, and L3 of ETMX all see these fast noisy glitches, but the OMC and DARM channels show a slower movement. Can this tell us anything about the cause?
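
A hedged sketch of how one might pull the fast suspension channel alongside DARM and check the sample rates with gwpy (an alternative to chndump; the GPS time is a placeholder):

    # Sketch: fetch the fast ETMX L3 drive channel and DARM together.
    from gwpy.timeseries import TimeSeriesDict

    t0 = 1378800000                                 # placeholder GPS time near a lockloss
    chans = ['H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ',     # fast (16384 Hz) ESD drive
             'H1:LSC-DARM_IN1_DQ']                  # DARM error signal

    data = TimeSeriesDict.get(chans, t0 - 5, t0 + 1)
    for name, ts in data.items():
        print(name, ts.sample_rate)                 # confirm each channel's rate

    plot = data.plot()                              # quick overlay of the glitches
    plot.show()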

See similar glitches in:

Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 19:58, Thursday 14 September 2023 (72893)
OPS Eve Midshift Update

IFO is in NLN and OBSERVING as of 22:53 UTC

Nothing else to report.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:33, Thursday 14 September 2023 (72891)
OPS Eve Shift Start

TITLE: 09/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 10mph Gusts, 7mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 22:53 UTC

H1 CDS
david.barker@LIGO.ORG - posted 10:47, Thursday 14 September 2023 - last comment - 10:14, Friday 15 September 2023(72882)
Added front-end model latched IPC error display to CDS Overview

We are now in day 113 of O4 and we have not had any spontaneous IPC receive errors on any model throughout this time.

During Tuesday maintenance this week I forgot to issue a DIAG_RESET on h1oaf after the pem models were restarted, and therefore it is showing latched IPC errors from this time which I just noticed today.

To elevate the visibility of latched transient IPC errors, I have added a new block on the CDS overview which will turn yellow if the model has a latched IPC error. This block does not differentiate between IPC type (shared-memory, local-dolphin, x-arm, y-arm). The new block is labeled lower case "i". Clicking on this block opens the model's IPC channel table.

The upper case "I" block remains as before; it turns red if there are any ongoing IPC errors (reported as a bit in the model's STATE_WORD).

To make space for this new block (located at the end by the CFC) I have reduced the width of the DAQ-STAT and DAQ-CFC triangles to the same width as the blocks (10 pixels).
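
As an aside, a rough sketch of what checking the ongoing-IPC-error bit looks like from the command line; the FEC number and bit position below are placeholders, not the real assignments:

    # Sketch only: read a model's STATE_WORD and test one bit with pyepics.
    from epics import caget

    IPC_ERROR_BIT = 4                                  # placeholder bit position
    state_word = int(caget('H1:FEC-10_STATE_WORD'))    # placeholder FEC number

    ipc_error = bool(state_word & (1 << IPC_ERROR_BIT))
    print('ongoing IPC error' if ipc_error else 'no ongoing IPC error')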

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:14, Friday 15 September 2023 (72897)

I have added a legend to the CDS Overview, showing what all the model status bits mean.

Clicking on the Legend button opens DCC-T2300380 pdf using the zathura image viewer.

H1 SEI (CDS, SEI)
erik.vonreis@LIGO.ORG - posted 11:28, Tuesday 12 September 2023 - last comment - 16:59, Thursday 14 September 2023(72831)
Picket Fence updated

The Picket Fence client was updated.  This new version points at a server with lower latency.

It also fixes some bugs, and reports the current time and start time of the service.

Comments related to this report
edgard.bonilla@LIGO.ORG - 16:59, Thursday 14 September 2023 (72892)

I merged this into the main code.

Thank you Erik!

H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 12:40, Friday 01 September 2023 - last comment - 12:11, Friday 17 November 2023(72622)
Calibration uncertainty estimate corrections
This is a continuation of a discussion of mis-application of the calibration model raised in LHO alog 71787, which was fixed on August 8th (LHO alog 72043), and of further issues with which time-varying factors (kappas) were applied while the ETMX UIM calibration line coherence was bad (see LHO alog 71790; this was fixed on August 3rd).

We need to update the calibration uncertainty estimates with the combination of these two problems where they overlap.  The appropriate thing is to use the full DARM model (1/C + (A_uim + A_pum + A_tst) * D), where C is sensing, A_{uim,pum,tst} are the individual ETMX stage actuation transfer functions, and D is the digital DARM filters.  It looks, though, like we can get away with an approximation, which will make implementation somewhat easier.
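
Schematically, the TDCF-corrected response function that enters these comparisons has the usual form (writing only the scalar kappas and the coupled-cavity pole explicitly; pydarm carries the full frequency dependence of C, the A's, and D internally):

    R(f; t) = (1 + i*f/f_cc(t)) / (kappa_C(t) * C_0(f))
              + [ kappa_U(t)*A_uim(f) + kappa_P(t)*A_pum(f) + kappa_T(t)*A_tst(f) ] * D(f)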

As a demonstration of this, first I confirm I can replicate the 71787 result purely with models (no fitting).  I take the pydarm calibration model Response, R, and correct it for the time dependent correction factors (kappas) at the same time I took the GDS/DARM_ERR data, and then take the ratio with the same model except with the 3.2 kHz ETMX L3 HFPoles removed (the correction Louis and Jeff eventually implemented).  This is the first attachment.

Next we calculate the expected error just from the wrong kappas being applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose GPS time 1374369018 (2023-07-26 01:10); you can see the LHO summary page here, with the upper-left plot showing the kappa_C discrepancy between GDS and front end.  So just this issue produces the second attachment.

We can then look at the effect of the 3.2 kHz pole being missing for two possibilities -- with the front-end kappas, and with the bad GDS kappas -- and see that the difference is pretty small compared to typical calibration uncertainties.  Here it's on the scale of a tenth of a percent at around 90 Hz.  I can also plot the model with the front-end kappas (more correct at this time) over the model with the wrong GDS kappas, for a comparison of scale as well.  This is the 3rd plot.

This suggests to me the calibration group can just apply a single correction to the overall response function systematic error for the period where the 3.2 kHz HFPole filter was missing, and then in addition, for the period where the UIM uncertainty was preventing the kappa_C calculation from updating, apply an additional correction factor that is time dependent, just multiplying the two.
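
In code, combining the two is just an element-wise product (a sketch; eta_no3p2k and eta_tdcf stand in for the two corrective transfer functions computed from the pydarm model ratios described above):

    # Sketch: combine the static (missing-pole) and time-dependent (kappa)
    # corrections by multiplication on a common frequency vector.
    import numpy as np

    freq = np.logspace(1, 4, 200)                      # Hz
    eta_no3p2k = np.ones(len(freq), dtype=complex)     # placeholder: R_with_pole / R_no3p2k
    eta_tdcf = np.ones(len(freq), dtype=complex)       # placeholder: R(FE kappas) / R(frozen GDS kappas)

    eta_total = eta_no3p2k * eta_tdcf                  # combined correction to the response
    mag_err = np.abs(eta_total)                        # magnitude correction
    pha_err = np.angle(eta_total, deg=True)            # phase correction [deg]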

As an example, the 4th attachment shows what this would look like for the gps time 1374369018.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:25, Monday 11 September 2023 (72817)
For further explanation of the impact of Frozen GDS TDCFs vs. Live CAL-CS Computed TDCFs on the response function systematic error, i.e. what Joe's saying with
    Next we calculate the expected error just from the wrong kappas being 
    applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose 
    GPS time 1374369018 (2023-07-26 01:10 UTC), you can see the LHO summary page here, with 
    the upper left plot showing the kappa_C discrepancy between GDS and front end.  
    So just this issue produces the second attachment.
and what he shows in his second attachment, see LHO:72812.
jeffrey.kissel@LIGO.ORG - 16:34, Thursday 14 September 2023 (72879)
I've made some more clarifying plots to help me better understand Joe's work above after getting a few more details from him and Vlad.

(1) GDS-CALIB_STRAIN is corrected for time dependence, via the relative gain changes, "\kappa," as well as for the new coupled-cavity pole frequency, "f_CC." In order to make a fair comparison between the *measured* response function, the GDS-CALIB_STRAIN / DARM_ERR live data stream, and the *modeled* response function, which is static in time, we need to update the response function with the time dependent correction factors (TDCFs) at the time of the *measured* response function. 

How is the *modeled* response function updated for time dependence? With the new pydarm system, it's actually quite straightforward, given a DARM model parameter set (pydarm_H1.ini) and a good conda environment. Here's a bit of pseudo-code that captures what's happening conceptually:
    # Set up environment
    from gwpy.timeseries import TimeSeriesDict as tsd
    from copy import deepcopy
    import pydarm

    # Instantiate two copies of pydarm DARM loop model
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1.ini')
    darmModel_wTDCFs_obj = deepcopy(darmModel_obj)

    # Grab time series of TDCFs (chanList, starttime, and endtime are assumed
    # to be defined; the channel list is given in (2) below)
    tdcfs = tsd.get(chanList, starttime, endtime, frametype='R', verbose=True)

    kappa_C = tdcfs[chanList[0]].value
    freq_CC = tdcfs[chanList[1]].value
    kappa_U = tdcfs[chanList[2]].value
    kappa_P = tdcfs[chanList[3]].value
    kappa_T = tdcfs[chanList[4]].value

    # Multiply in kappas, replace cavity pole, with a "hot swap" of the relevant parameter in the DARM loop model
    darmModel_wTDCFs_obj.sensing.coupled_cavity_optical_gain *= kappa_C
    darmModel_wTDCFs_obj.sensing.coupled_cavity_pole_frequency = freq_CC
    darmModel_wTDCFs_obj.actuation.xarm.uim_npa *= kappa_U
    darmModel_wTDCFs_obj.actuation.xarm.pum_npa *= kappa_P
    darmModel_wTDCFs_obj.actuation.xarm.tst_npv2 *= kappa_T

    # Extract the response function transfer function on your favorite frequency vector
    R_ref     = darmModel_obj.compute_response_function(freq)
    R_wTDCFs  = darmModel_wTDCFs_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_wTDCFs_over_ref = R_wTDCFs / R_ref


For all of this study, I started with the reference model parameter set that's relevant for these times in late July 2023 -- the pydarm_H1.ini from the 20230621T211522Z report directory, which I've copied over to a git repo as pydarm_H1_20230621T211522Z.ini.

(2) One layer deeper: some of what Joe's trying to explore in his plots above is the difference between the low-latency, GDS-pipeline-computed TDCFs and the real-time, CALCS-pipeline-computed TDCFs, which differ because of the issues with the GDS pipeline computation discussed in LHO:72812.

So, in order to facilitate this study, we have to gather TDCFs from both GDS and CALCS pipeline. Here's the channel list for both:
    chanList = ['H1:GRD-ISC_LOCK_STATE_N',

                'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
                'H1:CAL-CS_TDEP_F_C_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_UIM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_PUM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',

                'H1:GDS-CALIB_KAPPA_C',
                'H1:GDS-CALIB_F_CC',
                'H1:GDS-CALIB_KAPPA_UIM_REAL',
                'H1:GDS-CALIB_KAPPA_PUM_REAL',
                'H1:GDS-CALIB_KAPPA_TST_REAL']
where the first channel in the list is the state of detector lock acquisition guardian for useful comparison.
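
With that channel list, overlaying the two pipelines' kappa_C trends is a short script (a sketch; the GPS span is a placeholder):

    # Sketch: fetch both pipelines' TDCFs and overlay kappa_C.
    from gwpy.timeseries import TimeSeriesDict as tsd

    starttime, endtime = 1374368000, 1374370000        # placeholder GPS span
    tdcfs = tsd.get(chanList, starttime, endtime, frametype='R', verbose=True)

    plot = tdcfs['H1:CAL-CS_TDEP_KAPPA_C_OUTPUT'].plot(label='CALCS kappa_C')
    ax = plot.gca()
    ax.plot(tdcfs['H1:GDS-CALIB_KAPPA_C'], label='GDS kappa_C')
    ax.legend()
    plot.show()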

(3) Indeed, for *most* of the above aLOG, Joe chooses an example time when the GDS and CALCS TDCFs are *the most different* -- in his case, 2023-07-26 01:10 UTC (GPS 1374369018) -- when the H1 detector is still thermalizing after power up. They're *different* because the GDS calculation was frozen at the values they had on the day the calculation was spoiled by a bad MICH FF filter, 2023-08-04 -- and, importantly, when the detector *was* thermalized.

An important distinction that's not made above is that the *measured* data in his first plot is from LHO:71787 -- a *different* time, when the detector WAS thermalized, a day later -- 2023-07-27 05:03:20 UTC (GPS 1374469418).

Compare the TDCFs between NOT THERMALIZED time, 2023-07-26 first attachment here with the 2023-07-27 THERMALIZED first attachment I recently added to Vlad's LHO:71787.

One can see in the 2023-07-27 THERMALIZED data, the Frozen GDS and Live CALCS TDCF answers agree quite well. For the NOT THERMALIZED time, 2023-07-26, \kappa_C, f_CC, and \kappa_U are quite different.

(4) So, let's compare the response function ratio, i.e. systematic error transfer function ratio, between the response function updated with GDS TDCFs vs. CALCS TDCFs for the two different times -- thermalized vs. not thermalized. This will be an expanded version of Joe's second attachment:
    - 2nd Attachment here: this exactly replicates Joe's plot, but shows more ratios to better get a feel for what's happening. Using the variables from the pseudo-code above, I'm plotting
        :: BLUE = eta_R_wTDCFs_CALCS_over_ref = R_wTDCFs_CALCS / R_ref
        :: ORANGE = eta_R_wTDCFs_GDS_over_ref = R_wTDCFs_GDS / R_ref
        :: GREEN = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS
    where the GREEN trace is showing what Joe showed -- both as the unlabeled BLUE trace in his second attachment, and the "FE kappa true R / applied bad kappa R" GREEN trace in his third attachment -- the ratio between response functions; one updated with CALCS TDCFs and the other updated with GDS TDCFs, for the NOT THERMALIZED time. 

    - 3rd Attachment here: this replicates the same traces, but with the TDCFs from Vlad's THERMALIZED time.

For both Joe's and my plots, because we think that the CALCS TDCFs are more accurate, and it's tradition to put the more accurate response function in the numerator, we show it as such. Comparing the two GREEN traces from my plots, it's much more clear that the difference between GDS and CALCS TDCFs is negligible for THERMALIZED times, and substantial during NOT THERMALIZED times.

(5) Now we bring in the complexity of the missing 3.2 kHz ESD pole. Unlike the "hot swap" of TDCFs in the DARM loop model, it's a lot easier just to create an "offline" copy of the pydarm parameter file, with the ESD poles removed. That parameter file lives in the same git repo location, but is called pydarm_H1_20230621T211522Z_no3p2k.ini. So, with that, we just instantiate the model in the same way, but calling the different parameter file:
    # Set up environment (imports as above)
    # Instantiate two copies of pydarm DARM loop model
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z.ini')
    darmModel_no3p2k_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z_no3p2k.ini')

    # Extract the response function transfer function on your favorite frequency vector
    R_ref = darmModel_obj.compute_response_function(freq)
    R_no3p2k = darmModel_no3p2k_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_nom_over_no3p2k = R_ref / R_no3p2k

where here, the response function without the 3.2 kHz pole is less accurate, so R_no3p2k goes in the denominator.

Without any TDCF correction, I show this eta_R_nom_over_no3p2k compared against Vlad's fit from LHO:71787 for starters.

(6) Now for the final layer of complexity: we need to fold in the TDCFs. This is where I think a few more traces and plots are needed, comparing the THERMALIZED vs. NOT THERMALIZED times, plus some clear math, in order to explain what's going on. In the end, I reach the same conclusion as Joe -- that the two effects, fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole, are "separable" to good approximation -- but I'm slower than Joe is, and need things laid out more clearly.

So, on the pseudo-code side of things, we need another couple of copies of the darmModel_obj:
    - with and without 3.2 kHz pole 
        - with TDCFs from CALCS and GDS, 
            - from THERMALIZED (LHO71787) and NOT THERMALIZED (LHO72622) times:
    
        R_no3p2k_wTDCFs_CCS_LHO71787 = darmModel_no3p2k_wTDCFs_CCS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO71787 = darmModel_no3p2k_wTDCFs_GDS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_CCS_LHO72622 = darmModel_no3p2k_wTDCFs_CCS_LHO72622_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO72622 = darmModel_no3p2k_wTDCFs_GDS_LHO72622_obj.compute_response_function(freq)

        
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO71787 = R_wTDCFs_CCS_LHO71787 / R_no3p2k_wTDCFs_CCS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO71787 = R_wTDCFs_GDS_LHO71787 / R_no3p2k_wTDCFs_GDS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622 = R_wTDCFs_CCS_LHO72622 / R_no3p2k_wTDCFs_CCS_LHO72622
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622 = R_wTDCFs_GDS_LHO72622 / R_no3p2k_wTDCFs_GDS_LHO72622


Note, critically, that these ratios of with vs. without the 3.2 kHz pole -- both updated with the same TDCFs -- are NOT THE SAME THING as just the ratio of models updated with GDS vs. CALCS TDCFs, even though it might look like the "reference" and "no 3.2 kHz pole" terms should cancel "on paper" if one naively thinks that the operation is separable:
     
    [[ ( R_wTDCFs_CCS / R_ref ) * ( R_ref / R_no3p2k ) ]] / [[ ( R_wTDCFs_GDS / R_ref ) * ( R_ref / R_no3p2k ) ]]   # NAIVE
    which one might naively cancel terms to reduce to
    R_wTDCFs_CCS / R_wTDCFs_GDS   # NAIVE

    
So, let's look at the answer now, with all this context.
    - NOT THERMALIZED: This is a replica of what Joe shows in his third attachment for the 2023-07-26 time:
        :: BLUE -- the systematic error incurred from excluding the 3.2 kHz pole on the reference response function without any updates to TDCFs (eta_R_nom_over_no3p2k)
        :: ORANGE -- the systematic error incurred from excluding the 3.2 kHz pole on the CALCS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622, Joe's "FE kappa true R / applied R (no pole)")
        :: GREEN -- the systematic error incurred from excluding the 3.2 kHz pole on the GDS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622, Joe's "GDS kappa true R / applied (no pole)")
        :: RED -- compared against Vlad's *fit*: the ratio of the CALCS-TDCF-updated, modeled response function to the measured (GDS-CALIB_STRAIN / DARM_ERR) response function

    Here, because the GDS TDCFs are different than the CALCS TDCFs, you actually see a non-negligible difference between ORANGE and GREEN. 

    - THERMALIZED:
        (Same legend, but the TIME and TDCFs are different)

    Here, because the GDS and CALCS TDCFs are the same-ish, you can't see that much of a difference between the two. 
    
    Also, note that even when we're using the same THERMALIZED time and corresponding TDCFs to be self-consistent with Vlad's fit of the measured response function, they still don't agree perfectly. So, there's likely still more systematic error in play at the thermalized time.

(7) Finally, I wanted to explicitly show the consequences of "just" correcting the frozen GDS TDCFs and of "just" correcting the missing 3.2 kHz pole, to better *quantify* the statement that "the difference is pretty small compared to typical calibration uncertainties," as well as to show the difference between "just" the ratio of response functions updated with the different TDCFs (the incorrect model) and the "full" models.

    I show this in 
    - NOT THERMALIZED, and
    - THERMALIZED

For both of these plots, I show
    :: GREEN -- the corrective transfer function we would be applying if we only update the Frozen GDS TDCFs to Live CALCS TDCFs, compared with
    :: BLUE -- the ratio of corrective transfer functions,
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> only fixing the missing 3.2 kHz pole
    :: ORANGE -- the ratio of corrective transfer functions
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> the "second best thing to do" which is leave the Frozen TDCFs alone and correct for the missing 3.2 kHz pole 
       
     Even for the NOT THERMALIZED time, BLUE never exceeds 1.002 / 0.1 deg in magnitude / phase, and it's small compared to the "TDCF only" correction -- the simple correction of Frozen GDS TDCFs to Live CALCS TDCFs -- shown in GREEN. This helps quantify why Joe thinks we can separately apply the two corrections to the systematic error budget: GREEN is much larger than BLUE.

    For the THERMALIZED time, in BLUE, that ratio of full models is even less, and also as expected the ratio of simple TDCF update models is also small.


%%%%%%%%%%
The code that produced this aLOG is create_no3p2kHz_syserror.py as of git hash 3d8dd5df.
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 12:11, Friday 17 November 2023 (74255)
Following up on this study just one step further, as I begin to actually correct data during the time period where both of these systematic errors are in play -- the frozen GDS TDCFs and the missing 3.2 kHz pole...

I craved one more set of plots to convey that "fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole are separable to good approximation," showing the actual corrections one would apply in the different cases:
    :: BLUE = eta_R_nom_over_no3p2k = R_ref / R_no3p2k >> The systematic error created by the missing 3.2 kHz pole in the ESD model alone
    :: ORANGE = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS >> the systematic error created by the frozen GDS TDCFs alone
    :: GREEN = eta_R_nom_over_no3p2k * eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = the product of the two >> the approximation
    :: RED = a previously unshown eta that we'd actually apply to data that had both issues = R_ref (updated with CALCS TDCFs) / R_no3p2k (updated with GDS TDCFs) >> the right thing

As above, it's important to look at both a thermalized case and a non-thermalized case, so I attach those two:
    NOT THERMALIZED, and
    THERMALIZED.

The conclusions are the same as above:
    - Joe is again right that the difference between the approximation (GREEN) and the right thing (RED) is small, even for the NOT THERMALIZED time.
But I think this version of the plots / traces better shows the breakdown of which effect contributes where on top of the approximation vs. "the right thing," and "the right thing" was never explicitly shown. All the traces in my expanded aLOG, LHO:72879, had the reference model (or no-3.2 kHz-pole models) updated with either both CALCS TDCFs or both GDS TDCFs in the numerator and denominator, rather than "the right thing," where you have CALCS TDCFs in the numerator and GDS TDCFs in the denominator.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To create these extra plots, I added a few lines of "calculation" code and another 40-ish lines of plotting code to create_no3p2kHz_syserror.py. I've now updated it within the git repo, so it and the repo now have git hash 1c0a4126.
Non-image files attached to this comment