Reports until 08:59, Tuesday 19 September 2023
H1 TCS
camilla.compton@LIGO.ORG - posted 08:59, Tuesday 19 September 2023 (72959)
CO2Y PZT Driven at 45Hz

TJ, Camilla. From 15:20 to 15:37 we drove the CO2Y PZT at 80Hz and then 45Hz using H1:TCS-ITMY_CO2_PZT_OUT_GAIN_EXC via awggui, see attached. This was to check that we could drive a line using the CO2 PZT that we can later use to inject noise into DARM, to understand whether we'll need to re-install the CO2 ISS in the future (Aidan started T2200341). We hope to repeat this with H1 locked later.

Also saw that the CO2Y power meter that measures power injected into the IFO showed more lines than CO2X when no excitation was driving either PZT, see attached.

Images attached to this report
H1 CDS
erik.vonreis@LIGO.ORG - posted 08:40, Tuesday 19 September 2023 (72960)
Control room wall displays were restarted

Control room wall displays were updated and restarted.

LHO General
ryan.short@LIGO.ORG - posted 08:05, Tuesday 19 September 2023 (72958)
Ops Day Shift Start

TITLE: 09/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 6mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.51 μm/s
QUICK SUMMARY: H1 unlocked around 5 hours ago and was not able to relock (see alog 72955 for details). H1 is now down for Tuesday maintenance.

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 07:39, Tuesday 19 September 2023 (72956)
Workstations updated and rebooted

Workstations were updated and rebooted.  The update included OS packages and Conda packages.  The default Conda environment was also upgraded to Python 3.10.

A list of updated Conda packages can be found here in both the September 5th and 4th versions.

What's new in Python 3.10

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 07:27, Tuesday 19 September 2023 - last comment - 09:48, Tuesday 19 September 2023(72955)
Lockloss at 09:26 UTC

The IFO has been struggling to relock following this lockloss. It couldn't find DIFF IR a few times and I had to intervene (I only had to tap the DIFF offset once or twice before it found it), and I ended up doing an initial alignment as it couldn't get past PRMI. After the initial alignment we spent about 10 minutes in CARM_TO_TR before it reported "NO IR IN arms". It lost lock shortly after at 14:24 UTC.

Each relock seems to have the same trouble with DIFF IR: it is just 1 or 2 taps away, but the guardian reports no IR found.

Comments related to this report
ryan.crouch@LIGO.ORG - 07:44, Tuesday 19 September 2023 (72957)

Lost lock at CARM_TO_TR again

ryan.crouch@LIGO.ORG - 09:48, Tuesday 19 September 2023 (72962)

The issue was that it was not actually finding IR and was rushing through CHECK_IR, so the solution would be to hold it in CHECK_IR and let it fully find IR and settle before moving on.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 23:57, Monday 18 September 2023 (72948)
Mon EVE Ops Summary

TITLE: 09/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:  A windstorm rolled through this evening and it was surprising how long H1 was able to stay locked through it (it almost made it through, but eventually succumbed). The squeezer caused one drop from Observing.
LOG:

H1 General
corey.gray@LIGO.ORG - posted 20:55, Monday 18 September 2023 (72954)
Holding at Green Arms For 1/2-Full Hour Due To Winds

Green arms are now aligned, but H1 has had locklosses at early stages (i.e. FIND IR & LOCKING ALS) and winds are still hitting about 25mph, so I will hold at LOCKING ARMS GREEN and then try for more in about 30min.

LHO General
corey.gray@LIGO.ORG - posted 20:02, Monday 18 September 2023 - last comment - 20:33, Monday 18 September 2023(72952)
EVE Mid-Shift Status

The wind storm still rages, but it might be showing hints of calming.

There was a 2-min drop from observing due to the Squeezer's IR Filter Cavity unlocking at 0114 UTC (it came back on its own).

Comments related to this report
corey.gray@LIGO.ORG - 20:33, Monday 18 September 2023 (72953)

Lockloss at 0327 (Observatory mode taken to LOCK ACQUISITION at 0330 as I was making supper).

At the time winds were beginning to calm down.

Currently at INCREASE FLASHES for both arms.

H1 CAL (ISC, TCS)
jeffrey.kissel@LIGO.ORG - posted 17:29, Monday 18 September 2023 (72950)
Record of Thermalizing Lock Acquisitions at 60W PSL Input
J. Kissel

Incrementally crawling along as I build up all the necessary information in order to model the impact of thermalization on calibration during O4, I state here a record of "good" lock acquisitions when the IFO thermalized from "cold" to "thermalized" at 60W PSL input power, and critically, while the eight extra "thermalization" calibration lines were ON between 2023-07-25 and 2023-08-09. This is a complement to similar info for 75W PSL input power shown in LHO:69796.


60W
    UTC Start           UTC Stop            GPS Start     GPS Stop      Duration [hr]   Ref. ID
    2023-06-22 04:55    2023-06-22 08:55    1371444918    1371459318    4.00       1
    2023-07-27 23:42    2023-07-28 03:42    1374536538    1374550938    4.00       2
    2023-07-28 20:10    2023-07-29 00:10    1374610218    1374624618    4.00       3
    2023-07-30 02:50    2023-07-30 06:50    1374720618    1374735018    4.00       4
    2023-07-30 20:23    2023-07-31 00:23    1374783798    1374798198    4.00       5
    2023-07-31 09:35    2023-07-31 13:35    1374831318    1374845718    4.00       6
    2023-07-31 18:05    2023-07-31 22:05    1374861918    1374876318    4.00       7
    2023-08-02 07:51    2023-08-02 11:51    1374997878    1375012278    4.00       8
    2023-08-04 02:03    2023-08-04 06:03    1375149798    1375164198    4.00       9
    2023-08-05 12:57    2023-08-05 16:57    1375275438    1375289838    4.00       10
    2023-08-06 07:40    2023-08-06 11:40    1375342818    1375357218    4.00       11
    2023-08-08 21:52    2023-08-09 01:52    1375566738    1375581138    4.00       12

Here "good" means that the IFO thermalized, and then was stable and locked in nominal-low-noise for 4 hours after (typically, undisturbed and OBSERVING). 
The times were established by trending 
    - the lock-acquisition guardian state (H1:GRD-ISC_LOCK_STATE_N) vs.
    - PSL input power (H1:IMC-PWR_IN_OUT16) vs. 
    - Power in the arm cavities (H1:ASC-X_PWR_CIRC_OUT16, H1:ASC-Y_PWR_CIRC_OUT16)
zooming in on each lock acquisition 2023-07-25 to 2023-08-09 to confirm
    - "clean" ISC-LOCK_STATE_N = 600 status for the entire four hour duration, 
    - nothing screwy went on with the laser power (e.g., this is how I found that Jenne had turned ON the ISS 2nd loop on 2023-08-07 which caused a discrete, 2.5 W drop in arm power -- see LHO:72027)

I don't record *all* of the "good" times, but this is the vast majority -- our duty cycle has gotten so good that there were relatively few lock re-acquisitions over these 14 days!
Thankfully, there are 12 times that were "good," which should be enough to do a similar study of the sensing and response functions a la LHO:70150.
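For convenience, a minimal sketch of how one of these 4-hour stretches could be vetted by trending the guardian state and power channels -- an illustrative assumption, not the actual script used, and it assumes gwpy with NDS2 access to the H1 channels named above:

    from gwpy.timeseries import TimeSeriesDict

    # Ref. ID 2 from the table above (2023-07-27 23:42 to 2023-07-28 03:42 UTC)
    start, stop = 1374536538, 1374550938
    channels = [
        'H1:GRD-ISC_LOCK_STATE_N',    # lock-acquisition guardian state
        'H1:IMC-PWR_IN_OUT16',        # PSL input power
        'H1:ASC-X_PWR_CIRC_OUT16',    # X-arm circulating power
        'H1:ASC-Y_PWR_CIRC_OUT16',    # Y-arm circulating power
    ]
    data = TimeSeriesDict.get(channels, start, stop)

    # "clean" segment: ISC_LOCK stays in nominal low noise (state 600) throughout
    assert (data['H1:GRD-ISC_LOCK_STATE_N'].value == 600).all()

    # nothing screwy with the laser power, e.g. no discrete steps like the
    # 2.5 W ISS 2nd-loop drop mentioned above
    pin = data['H1:IMC-PWR_IN_OUT16'].value
    print('PSL input power spanned %.2f - %.2f W' % (pin.min(), pin.max()))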

Note that there is still plenty of stuff changing with the DARM loop and calibration spanning these times, namely that
    (1) From 2023-07-20 to 2023-08-03, a bad MICH2DARM feed-forward filter pollutes the 15.1 Hz ETMX UIM calibration line, freezing GDS time-dependent correction factors, which means the values are *frozen* during thermalization -- see LHO:72812, 
    (2) From the start of O4 thru 2023-08-07, we neglected to model the ETMX TST stage electronics correctly, leaving out a 3.2 kHz pole -- see LHO:72879, and
    (3) From 2023-08-04 to 2023-08-29, a bad SRCL2DARM feed-forward filter pollutes the 102.13 Hz calibration line with its impulse response for hours at the beginning of lock acquisition -- see LHO:72868

For future reference -- this list is pulled from T2300297, a greater list of changes throughout ER15 and O4.
 
H1 PEM (DetChar, PEM)
corey.gray@LIGO.ORG - posted 17:09, Monday 18 September 2023 - last comment - 17:38, Monday 18 September 2023(72949)
WIND & Range Nose-Diving Starting at 2320!

Starting around 30+ min ago, the wind gusts have passed the 30mph threshold and are touching 40mph!  This is leading to our local "perfect storm":  (1) high winds (see image #1) and (2) high microseism (see image #2).

Jeff mentioned the distinct noise bump on DARM between 20-80Hz (see image #3) is characteristic of stray light hitting our EX Cryobaffle when we have a noisy environment such as this.  This has also dropped our range down to around 100Mpc (from 140) [see image #4].

With all this said, it's impressive H1 is riding through this noisy Earth!  According to NOAA, we will be under a red flag warning (screenshot is image #5) for another ~3hrs (or 8pmPT / 0300utc).

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 17:38, Monday 18 September 2023 (72951)

~0018utc Wind gust hit around 50mph at EY!

H1 General
oli.patane@LIGO.ORG - posted 16:06, Monday 18 September 2023 (72947)
Ops DAY Shift End

TITLE: 09/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Now locked for 7 hours. The detector had issues finding the Y-arm IR this morning when trying to lock, but after a bit of help it was able to finish locking. The rest of the day was uneventful.

15:00UTC Came in to an unlocked detector running initial alignment

15:11 INITIAL_ALIGNMENT completed
15:11 I took the detector to DOWN and then started locking
15:13 IR Not Found - I touched it up

16:04 NOMINAL_LOW_NOISE
16:21 Observing

LOG:                                                                                                                          

Start Time  System  Name    Location              Laser_Haz  Task                   Time End
14:30       FAC     Karen   Vac Prep/Optics Lab   n          Tech clean             15:01
15:02       FAC     Karen   MY                    n          Tech clean             17:40
15:09       FAC     Ken     Wood Shop             n          Driving to wood shop   16:03
15:22       FAC     Kim     MX                    n          Tech clean             16:07
16:50       FAC     Tyler   -                     n          Look at AHU1           16:54
22:46       VAC     Jordan  Mech lab              n          Running pump           -
LHO General
corey.gray@LIGO.ORG - posted 16:06, Monday 18 September 2023 (72946)
Mon EVE Ops Transition

TITLE: 09/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 27mph Gusts, 22mph 5min avg
    Primary useism: 0.13 μm/s
    Secondary useism: 0.57 μm/s
QUICK SUMMARY:

Definitely more windy this evening as one walks into the OSB (and since we are chatting about environmentals, the microseism looks to be slowly coming down over the last 18hrs).  H1 has been locked for 7+ hrs.  I also see that Jordan is running a pump in the Mechanical Room/Lab.

Also got the hand-off from Oli to reload the SUS_PI guardian node after the next lockloss -----> Camilla asked to defer this task to the DAY shift tomorrow.

H1 TCS (ISC)
camilla.compton@LIGO.ORG - posted 15:27, Monday 18 September 2023 (72943)
CO2X Delivered Power has been decaying, Power should be bumped back to ER15 levels

As Jeff showed in 72627, CO2X delivered power has dropped 7% since May. At the start of May both CO2s were injecting 1.68W of annular CO2 power into the IFO but now CO2X is injecting 1.56W. CO2X laser power seems to be quickly decaying, see attached. We have been seeing this laser unlocking regularly and blaming the chiller 72653.

We expect that adjusting H1:TCS-ITMX_CO2_LASERPOWER_POWER_IN (max central power) from 4.844 to 4.28 and re-requesting 1.15W for CO2X would inject the correct 1.68W for CO2X (as the max laser power has decayed from 48.5W to 42.8W). We would need to check this and maybe re-run the calibration 66724 during Tuesday maintenance.
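As a quick sanity check of that rescaling (an illustrative sketch of the arithmetic only, not the calibration procedure in 66724):

    # The POWER_IN setting tracks the maximum laser power, so scale it by the
    # measured decay of the CO2X laser.
    old_max_power = 48.5    # W, max CO2X laser power when last calibrated
    new_max_power = 42.8    # W, current max CO2X laser power
    old_power_in  = 4.844   # H1:TCS-ITMX_CO2_LASERPOWER_POWER_IN (max central power)

    new_power_in = old_power_in * new_max_power / old_max_power
    print(round(new_power_in, 2))   # ~4.27, consistent with the ~4.28 quoted above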

Current requested power in lscparams.py is tcs_nom_annular_pwr = {'X': 1.15, 'Y': 1.1}, calibrated for central.

Compared the HOM peaks at the below times, all for observing H1 that had been locked for 13 hours (can't compare prior to June as we had 75W input). Can't see much of a difference apart from the x-arm peak moving between the end of June and the start of July. In 71284, Elenna showed the current state is different from April 2023.

Images attached to this report
LHO FMCS
tyler.guidry@LIGO.ORG - posted 14:41, Monday 18 September 2023 (72945)
Change to Heater Coil
This morning during my rounds to AHU-1, I noticed that the mechanical room was considerably cold (hovering around 64-65F). This is likely a product of the declining outside air temps. To raise the temp, Bubba and I commanded Heater Coil 6 on to a level of 30%. While this coil may only serve the mechanical room, there may be very minor changes to the adjacent spaces in LVEA zones 1 & 4. I will monitor this in the coming days.

T. Guidry B. Gateley
H1 SEI
erik.vonreis@LIGO.ORG - posted 14:35, Friday 15 September 2023 - last comment - 09:55, Tuesday 19 September 2023(72904)
BBB channel dropped from Picket Fence

The BBB channel was continually going into the alarm range.  It's been removed from Picket Fence FOM and EPICS.

The steps for removal were:

1. Edited /opt/rtcds/userapps/release/isi/h1/scripts/Picket-Fence/LHO-picket-fence.py.  Commented out the block at line 39 that adds the "BBB" channel.

2. VNC to nuc5.  Close the Picket Fence window.

3. ssh as controls to nuc5.  Run "start/launch.sh".  This restarts the Picket Fence FOM, which also sets the EPICS variables.

Comments related to this report
brian.lantz@LIGO.ORG - 09:55, Tuesday 19 September 2023 (72963)

Thanks Erik.

FYI - There are a bunch of stations around the Vancouver area, but BBB is the only one hosted by PNSN. We've reached out to see if we can get access to a similar low-latency server so that we can hopefully find a quieter station to use. These stations are useful for monitoring incoming motion from Alaska.

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:29, Monday 11 September 2023 - last comment - 14:45, Monday 18 September 2023(72812)
Historical Systematic Error Investigations: Why MICH FF Spoiling UIM Calibration Line froze Optical Gain and Cavity Pole GDS TDCFs from 2023-07-20 to 2023-08-07
J. Kissel

I'm in a rabbit hole, and digging my way out by shaving yaks. The take-away, if you find this aLOG TL;DR: this is an expansion of the understanding of one part of the multi-layer problem described in LHO:72622.

I want to pick up where I left off in modeling the detector calibration's response to thermalization except using the response function, (1+G)/C, instead of just the sensing function, C (LHO:70150). 

I need to do this for when 
    (a) we had thermalization lines ON during times of
    (b) PSL input power at 75W (2023-04-14 to 2023-06-21) and
    (c) PSL input power at 60W (2023-06-21 to now).

"Picking up where I left off" means using the response function as my metric of thermalization instead of the sensing function.

However, the measurement of the sensing function w.r.t. its model, C_meas / C_model, is made from the ratio of measured transfer functions (DARM_IN1/PCAL) * (DARMEXC/DARMIN2), where only the calibration of PCAL matters. The measured response function w.r.t. its model, R_meas / R_model, on the other hand, is ''simply'' made by the transfer function of ([best calibrated product])/PCAL, where the [best calibrated product] can be whatever you like, as long as you understand the systematic error and/or extra steps you need to account for before displaying what you really want.

In most cases, the low-latency GDS pipeline product, H1:GDS-CALIB_STRAIN, is the [best calibrated product], with the least amount of systematic error in it. It corrects for the flaws in the front-end (super-Nyquist features, computational delays, etc.) and it corrects for ''known'' time dependence based on calibration-line-informed, time-dependent correction factors or TDCFs (neither of which the real-time front-end product, CAL-DELTAL_EXTERNAL_DQ, does). So I want to start there, using the transfer function H1:GDS-CALIB_STRAIN / H1:CAL-DELTAL_REF_PCAL_DQ for my ([best calibrated product])/PCAL transfer function measurement.
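For concreteness, a minimal sketch of forming that transfer function -- an assumed method using gwpy, not the actual analysis code, with placeholder GPS times:

    from gwpy.timeseries import TimeSeriesDict

    start, stop = 1374536538, 1374536538 + 600   # placeholder 10-minute stretch
    chans = ['H1:GDS-CALIB_STRAIN', 'H1:CAL-DELTAL_REF_PCAL_DQ']
    data = TimeSeriesDict.get(chans, start, stop)

    # Estimate the ([best calibrated product])/PCAL transfer function as CSD/PSD;
    # R_meas/R_model is then read off at the PCAL calibration-line frequencies.
    # (Units here are strain/meter -- an overall arm-length factor still applies.)
    csd = data['H1:GDS-CALIB_STRAIN'].csd(data['H1:CAL-DELTAL_REF_PCAL_DQ'], fftlength=60)
    psd = data['H1:CAL-DELTAL_REF_PCAL_DQ'].psd(fftlength=60)
    tf = csd / psd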

HOWEVER, over the time periods when we had thermalization lines on, H1:GDS-CALIB_STRAIN itself had two major systematic errors that were *not* the thermalization. In short, those errors were:
    (1) between 2023-04-26 and 2023-08-07, we neglected to include the model of the ETMX ESD driver's 3.2 kHz pole (see LHO:72043) and
    (2) between 2023-07-20 and 2023-08-03, we installed a buggy MICH FF filter (LHO:71790, LHO:71937, and LHO:71946) that created excess noise as a spectral feature which polluted the 15.1 Hz, SUS-driven calibration line that's used to inform \kappa_UIM -- the time dependence of the relative actuation strength of the ETMX UIM stage. The front-end demodulates that frequency with a demod called SUS_LINE1, creating an estimate of the magnitude, phase, coherence, and uncertainty of that SUS line w.r.t. DARM_ERR.

When did we have thermalization lines on for 60W PSL input? Oh, y'know, from 2023-07-25 to 2023-08-09, exactly at the height of both of these errors. #facepalm
So -- I need to understand these systematic errors well in order to accurately remove them prior to my thermalization investigation.

Joe covers both of these flavors of error in LHO:72622.

However, after trying to digest the latter problem, (2), and his aLOG, I didn't understand why spoiled \kappa_U alone had such an impact -- since we know that the UIM actuation strength is quite unimpactful to the response function.

INDEED, (2) is even worse than "we're not correcting for the change in UIM actuation strength" -- because
    (3) Though the GDS pipeline (which finishes the calibration to form H1:GDS-CALIB_STRAIN) computes its own TDCFs from the calibration lines, GDS gates the value of its TDCFs with the front-end (CALCS) computed uncertainty. So, in that way, the GDS TDCFs are still influenced by the front-end, CALCS computation of TDCFs.

So -- let's walk through that for a second.
The CALCS-computed uncertainty for each TDCF is based on the coherence between the calibration lines and DARM_ERR -- but in a crude, lazy way that we thought would be good enough in 2018 -- see G1801594, page 13. I've captured a current screenshot (First Image Attachment) of the present-day simulink model to confirm the algorithm is still the same as it was prior to O3.

In short, the uncertainty for the actuator strengths, \kappa_U, \kappa_P, and \kappa_T, is created by simply taking the larger of the two calibration-line transfer function uncertainties that go into computing that TDCF -- SUS_LINE[1,2,3] or PCAL_LINE1.

HOWEVER -- because the optical gain and cavity pole (\kappa_C and f_CC) calculation depends on subtracting out the live DARM actuator (see the appearance of "A(f,t)" in the definition of "S(f,t)" in Eq. 17), their uncertainty is crafted from the largest of the \kappa_U, \kappa_P, and \kappa_T, AND PCAL_LINE2 uncertainties. It's the same uncertainty for both \kappa_C and f_CC, since they're both derived from the magnitude and phase of the same PCAL_LINE2.
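For reference, a hedged reconstruction of those relations following the standard conventions of the LIGO calibration papers (notation may differ slightly from the Eq. 17 cited above; C_model here denotes the reference sensing function with its cavity-pole response divided out). Both \kappa_C and f_CC come from the same S(f_2,t), measured at the PCAL_LINE2 frequency f_2 after the live actuator is subtracted, so an error in any of \kappa_U, \kappa_P, or \kappa_T leaks into both:

    S(f_2,t) = \left[ \frac{1}{C_\mathrm{model}(f_2)}
               \left( \frac{x_\mathrm{pc}(f_2,t)}{d_\mathrm{err}(f_2,t)}
               - D(f_2)\left[ \kappa_U(t) A_U(f_2) + \kappa_P(t) A_P(f_2) + \kappa_T(t) A_T(f_2) \right]
               \right) \right]^{-1}

    \kappa_C(t) = \frac{|S(f_2,t)|^2}{\mathrm{Re}\, S(f_2,t)}
    \qquad
    f_\mathrm{CC}(t) = - f_2 \, \frac{\mathrm{Re}\, S(f_2,t)}{\mathrm{Im}\, S(f_2,t)}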

That means the large SUS_LINE1 >> \kappa_U uncertainty propagates through this "greatest of" algorithm, and also blows out the \kappa_C and f_CC uncertainty as well -- which triggered the GDS pipeline to gate its 2023-07-20 TDCF values for \kappa_U, \kappa_C, and f_CC from 2023-07-20 to 2023-08-07.

THAT means that -- for better or worse -- when \kappa_C and f_CC are influenced by thermalization for the first ~3 hours after power up, GDS did not correct for it. Thus, a third systematic error in GDS, (3).
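To make the propagation concrete, here is a toy sketch of the "greater of" algorithm and the gating threshold described above -- an illustration only, not the front-end or GDS code:

    def actuator_unc(sus_line_unc, pcal_line1_unc):
        """Front-end-style uncertainty for kappa_U/P/T: larger of its two lines."""
        return max(sus_line_unc, pcal_line1_unc)

    def sensing_unc(kappa_u_unc, kappa_p_unc, kappa_t_unc, pcal_line2_unc):
        """Uncertainty used for both kappa_C and f_CC: largest of all the inputs."""
        return max(kappa_u_unc, kappa_p_unc, kappa_t_unc, pcal_line2_unc)

    # Bad MICH FF pollutes the 15.1 Hz line -> large SUS_LINE1 uncertainty ...
    kappa_u_unc = actuator_unc(sus_line_unc=0.20, pcal_line1_unc=0.002)
    # ... which then blows out the kappa_C / f_CC uncertainty too
    kc_fcc_unc = sensing_unc(kappa_u_unc, 0.002, 0.003, pcal_line2_unc=0.002)

    # GDS gates (freezes) a TDCF when the uncertainty exceeds its threshold
    # (0.5% as stated here; the comments below find the configured value is 1%)
    GDS_THRESHOLD = 0.005
    print([u > GDS_THRESHOLD for u in (kappa_u_unc, kc_fcc_unc)])   # [True, True]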

*sigh*

OK, let's look at some plots.

My Second Image Attachment shows a trend of all the front-end computed uncertainties involved around 2023-07-20 when the bad MICH FF is installed. 
    :: The first row and last row show that the UIM calibration-line uncertainty -- and the CAV_POLE uncertainty (again, used for both \kappa_C and f_CC) -- jump up once the bad MICH FF is installed.

    :: Remember GDS gates its TDCFs with a threshold of uncertainty = 0.005 (i.e. 0.5%), where the front-end gates with an uncertainty of 0.05 (i.e. 5%).

The First PDF attachment shows in much clearer detail the *values* of both the CALCS and GDS TDCFs during a thermalization time that Joe chose in LHO:72622, 2023-07-26 01:10 UTC.

My Second PDF attachment breaks down Joe's LHO:72622 Second Image attachment in to its components:
    :: ORANGE shows the correction to the "reference time" response function with the frozen, gated, GDS-computed TDCFs, by the ratio of the "nominal" response function (as computed from the 20230621T211522Z report's pydarm_H1.ini) to that same response function, but with the optical gain, cavity pole, and actuator strengths updated with the frozen GDS TDCF values,
        \kappa_C = 0.97828    (frozen at the low, thermalized value of the OM2 HOT configuration, reflecting the unaccounted-for change just one day prior on 2023-07-19; LHO:71484)
        f_CC = 444.4 Hz       (frozen)
        \kappa_U = 1.05196    (frozen at a large, noisy value, right after the MICH FF filter is installed)
        \kappa_P = 0.99952    (not frozen)
        \kappa_T = 1.03184    (not frozen, large at 3% because of the TST actuation strength drift)

    :: BLUE shows the correction to the "reference time" response function with the not-frozen, non-gated, CALCS-computed TDCFs, by the ratio of the "nominal" 20230621T211522Z response function to that same response function updated with the CALCS values,
        \kappa_C = 0.95820    (even lower than OM2 HOT value because this time is during thermalization)
        f_CC = 448.9 Hz       (higher because IFO mode matching and loss are better before the IFO thermalizes)
        \kappa_U = 0.98392    (arguably more accurate value, closer to the mean of a very noisy value)
        \kappa_P = 0.99763    (the same as GDS, to within noise or uncertainty)
        \kappa_T = 1.03073    (the same as GDS, to within noise or uncertainty)

    :: GREEN is a ratio of BLUE / ORANGE -- and thus a repeat of what Joe shows in his LHO:72622 Second Image attachment.

Joe was trying to motivate why (1), the missing ESD driver 3.2 kHz pole, is a separable problem from (2) and (3), the bad MICH FF filter spoiling the uncertainty in \kappa_U, \kappa_C, and f_CC, so he glossed over this issue. Further, what he plotted in his second attachment, akin to my GREEN curve, is the *ratio* between corrections, not the actual corrections themselves (ORANGE and BLUE), so it kind of hid this difference.
Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:21, Monday 11 September 2023 (72815)
This plot was created by create_no3p2kHz_syserror.py, and the plots posted correspond to the script as it was when the Calibration/ifo project git hash was 53543b80.
jeffrey.kissel@LIGO.ORG - 17:21, Monday 11 September 2023 (72819)
While shaving *this* yak, I found another one -- The front-end CALCS uncertainty for the \kappa_U gating algorithm incorrectly consumes \kappa_T's uncertainty.

The attached image highlights the relevant part of the 
    /opt/rtcds/userapps/release/cal/common/models/
        CAL_CS_MASTER.mdl
library part, at the CS > TDEP level.

The red ovals show what I refer to. The silver KAPPA_UIM, KAPPA_PUM, and KAPPA_TST blocks -- which are each instantiations of the ACTUATOR_KAPPA block within the CAL_LINE_MONITOR_MASTER.mdl library -- each receive the uncertainty output from the above-mentioned crude, lazy algorithm (see first image from above LHO:72812) via tag. The KAPPA_UIM block incorrectly receives the KAPPA_TST_UNC tag.

The proof is seen in the first row of the other image attachment from above LHO:72812 -- see that while the raw calibration line uncertainty (H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY) is high, the resulting "greater of the two" uncertainty (H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT) remains low, and matches the third row's uncertainty for \kappa_T (H1:CAL-CS_TDEP_KAPPA_TST_GATE_UNC_INPUT), the greater of H1:CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY and H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY.

You can see that this is the case even back in 2018 on page 14 of G1801594, so this has been wrong since before O3.

*sigh*

This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on ... 
I don't know gstlal-calibration well enough to confirm what channels are used. Clearly, from the 2023-07-26 01:10 UTC trend of GDS TDCFs, they're gated. But is that because H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY is used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

As such, I can't make a statement of how impactful this bug has been.

We should fix this, though.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:09, Tuesday 12 September 2023 (72832)
The UIM uncertainty bug has now been fixed and installed at H1 as of 2023-09-12 17:00 UTC. See LHO:72820 and LHO:72830, respectively.
jeffrey.kissel@LIGO.ORG - 14:45, Monday 18 September 2023 (72944)
J. Kissel, M. Wade

Following up on this:
    This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on [... are channels like] H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

I confirm from Maddie that 
    - The channels that are used to inform the GDS pipeline's gating algorithm are defined in the gstlal configuration file, which lives in the Calibration namespace of the git.ligo.org repo, under 
    git.ligo.org/Calibration/ifo/H1/gstlal_compute_strain_C00_H1.ini
where this config file was last changed on May 02 2023 with git hash 89d9917d.

    - In that file, the following config variables are defined (starting around Line 220 as of git hash version 89d9917d),
        #######################################
        # Coherence Uncertainty Channel Names #
        #######################################
        CohUncSusLine1Channel: CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        CohUncSusLine2Channel: CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        CohUncSusLine3Channel: CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        CohUncPcalyLine1Channel: CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY
        CohUncPcalyLine2Channel: CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY
        CohUncPcalyLine4Channel: CAL-CS_TDEP_PCAL_LINE4_UNCERTAINTY
        CohUncDARMLine1Channel: CAL-CS_TDEP_DARM_LINE1_UNCERTAINTY
      which are compared against a threshold, also defined in that file on Line 114,
        CoherenceUncThreshold: 0.01

    Note: the threshold is 0.01 i.e. 1% -- NOT 0.005 or 0.5% as described in the body of the main aLOG.

    - Then, inside the gstlal-calibration code proper, 
        git.ligo.org/Calibration/gstlal-calibration/bin/gstlal_compute_strain
    whose last change (as of this aLOG) has git hash 5a4d64ce, there are lines of code buried deep that create the gating, around lines
        :: L1366 for \kappa_T,
        :: L1425 for \kappa_P, 
        :: L1473 for \kappa_U
        :: L1544 for \kappa_C
        :: L1573 for f_CC

    - From these lines one can discern what's going on, if you believe that calibration_parts.mkgate is a wrapper around gstlal's pipeparts.filters class, with method "gate" -- which links you to source code "gstlal/gst/lal/gstlal_gate.c" which actually lives under
        git.ligo.org/lscsoft/gstlal/gst/lal/gstlal_gate.c

    - I *don't* believe it (because I don't believe in my skills in following the gstlal rabbit hole), so I asked Maddie. She says: 
    The code uses the uncertainty channels (as pasted below) along with a threshold specified in the config (currently 0.01, so 1% uncertainty) and replaces any computed TDCF value for which the specified uncertainty on the corresponding lines is not met with a "gap". These gaps get filled in by the last non-gap value, so the end result is that the TDCF will remain at the "last good value" until a new "good" value is computable, where "good" is defined as a value computed during a time where the specified uncertainty channels are within the required threshold.
    The code is essentially doing sequential gating [per computation cycle] which will have the same result as the front-end's "larger of the two" method.  The "gaps" that are inserted by the first gate are simply passed along by future gates, so future gates only add new gaps for any times when the uncertainty channel on that gate indicates the threshold is surpassed.  The end result [at the end of computation cycle] is a union of all of the uncertainty channel thresholds.

    - Finally, she confirms that 
        :: \kappa_U uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        :: \kappa_P uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        :: \kappa_T uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        :: and both \kappa_C f_CC use
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY

So, repeating all of this back to you to make sure we all understand: If any one of the channels is above the GDS pipeline's threshold of 1% (not 0.5% as described in the body of the main aLOG), then the TDCF will be gated, and "frozen" at the last time *all* of these channels were below 1%.

This corroborates and confirms the hypothesis that the GDS pipeline, although slightly different algorithmically from the front-end, would gate all three TDCFs -- \kappa_U, \kappa_C, and f_CC -- if only the UIM SUS line, CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY was above threshold -- as it was from 2023-07-20 to 2023-08-07.
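As a toy illustration of the gate-and-hold-last-good-value behaviour Maddie describes (a sketch only, not gstlal code; the threshold default matches the 0.01 value in the config file quoted above):

    import numpy as np

    def gate_and_hold(tdcf, unc_channels, threshold=0.01):
        """Replace TDCF samples with the last good value whenever ANY
        uncertainty channel exceeds the threshold (union of all gates)."""
        tdcf = np.asarray(tdcf, dtype=float)
        bad = np.zeros(len(tdcf), dtype=bool)
        for unc in unc_channels:
            bad |= np.asarray(unc) > threshold
        out = tdcf.copy()
        last_good = tdcf[0]          # assume the first sample is good
        for i in range(len(out)):
            if bad[i]:
                out[i] = last_good   # "gap" filled with last non-gap value
            else:
                last_good = out[i]
        return out

    kappa_u   = [1.00, 1.01, 1.05, 1.02, 0.99]
    sus1_unc  = [0.002, 0.003, 0.20, 0.15, 0.004]   # 15.1 Hz line spoiled
    pcal1_unc = [0.002, 0.002, 0.002, 0.002, 0.002]
    print(gate_and_hold(kappa_u, [sus1_unc, pcal1_unc]))
    # -> [1.   1.01 1.01 1.01 0.99]  (frozen while the SUS line is polluted)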