H1 General
oli.patane@LIGO.ORG - posted 00:04, Saturday 02 September 2023 (72633)
Ops EVE Shift End

TITLE: 09/02 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We are still Observing and have now been Locked for 10 hours. A few minutes ago we got a GRB-Short alert and a Stand Down (Fermi, labeled as NOT_GRB).

The top FOM on nuc26 lost its display connection for ~30 seconds, reporting that nothing was connected to the port, before turning back on by itself. Later in the evening the top FOM on nuc5 also lost contact with its computer for a few seconds before coming back on its own.


23:00UTC In Observing and detector has been Locked for 2hrs 4mins

06:55 GRB-Short on Verbals and Stand Down on lock clock


LOG:

no log

LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 23:21, Friday 01 September 2023 (72632)
HVAC Fan Vibrometer Check

Closes FAMIS#26248, last checked 72443

Corner Station Fans (attachment1)

All fans are looking good with no concerns. MR_FAN5_2 is very variable in noise level (as it usually is), and MR_FAN5_1 is the loudest at just below 0.4.

Outbuilding Fans (attachment2)

All fans are looking good and are well below any noise level that would be of concern to us.

Notes:

- Monday night, 08/29 at 6:13 UTC, EX_FAN1 turned off for a minute before a power glitch (coincidence??) turned it back on (attachment3).

- 09/01 16:39 UTC, something similar happened with EY_FAN2 (attachment4), although this one is probably related to the work that Randy and Tyler were doing at EY around that time (72628).

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 20:42, Friday 01 September 2023 (72631)
Ops EVE Midshift Update

We are still in Observing and have now been Locked for almost 7 hours.

H1 TCS (CAL, DetChar, FRS, ISC)
jeffrey.kissel@LIGO.ORG - posted 16:19, Friday 01 September 2023 (72627)
TCSX Laser Power Delivered Has Slowly Decreased from 1.7 to 1.6W
J. Kissel, C. Compton

For historical documentation purposes, I was poking around TCS trends today looking for changes.

Camilla keeps an excellent record of changes for all TCS settings; see LHO:70078 and then LHO:70616 for all requested changes from April 2023's ER15, through May 2023's O4 start, through June 2023's power down from 75 W to 60 W.

From these aLOGs, we know we want both TCS CO2 lasers to deposit 1.7 W of annular power on the input test masses.

In confirming this, I was trending the TCS CO2 lasers and found that the laser power delivered to
 - ITMX and ITMY took a 0.02 W step down on 2023-07-12 after the 18:00 UTC failures of the site-wide laser interlock system (LHO:71276),
 - ITMY was restored to 1.68 W after a table incursion for HWS SLED replacement on 2023-07-18 (LHO:71476),
 - ITMX was also restored to 1.67 W after the HWS SLED replacement on 2023-07-18 (LHO:71476),
 - BUT the ITMX CO2 power delivered has trended slowly downward since 2023-07-18, from 1.67 W to 1.57 W as of today, 2023-09-01 (a decrease of 0.1 W).
     - There is a slight increase of ITMX power on 2023-08-13 09:18 UTC (02:18 PDT, the middle of the night on Sunday morning), but
     - the loss of laser power resumes after the CO2 laser chiller swap a few days later, during the maintenance day of 2023-08-16 01:04 UTC (2023-08-15 18:04 PDT) (LHO:72220).

I attach a trend of the delivered power, as well as the laser head power, over the course of the run thus far.
The ITMX CO2 laser head power is also trending downward, in slow discrete steps, from 48.5 W to 44 W, a decrease of 4.5 W from 2023-07-18 to 2023-09-01.

I don't think the IFO cares about the 0.1 W decay in delivered CO2 laser power, but the trend is perhaps concerning, and ITMX is decaying faster than ITMY.
Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:16, Friday 01 September 2023 (72629)
Ops EVE Shift Start

TITLE: 09/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 142Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 19mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.17 μm/s
QUICK SUMMARY: Detector has been Locked for 2hrs 20mins. Winds picking up a bit, but don't look too bad. Taking over for Tony.

 

H1 General
anthony.sanchez@LIGO.ORG - posted 16:09, Friday 01 September 2023 (72628)
Friday Ops Day shift End

TITLE: 09/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
INCOMING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 10mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.18 μm/s 
QUICK SUMMARY:

15:56 UTC Dropped out of Observing because of a PI ringing up.
Camilla manually intervened, see alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72619
Back to Observing at 15:59 UTC

Lockloss:1377628699

Relocking started 18:38 UTC
Lockloss at 19:15 UTC while ISC_LOCK was at DHARD_WFS, likely due to poor alignment, suggested by Guardian going through CHECK_MICH_FRINGES and ACQUIRE_PRMI

Took ISC_LOCK to Initial Alignment, which then got stuck in MICH_BRIGHT. Changed BS pitch to get better peaks and valleys.

Locking started at 20:11 UTC
NOMINAL_LOW_NOISE reached @ 20:54 UTC
Incoming M6.1 earthquake from Russia at 21:11 UTC

SDF Diffs accepted https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72626
Observing reached at 21:13 UTC

Start Time | System | Name         | Location       | Laser_Haz | Task                                     | End Time
15:27      | FAC    | Randy        | EX & EY        | N         | Delivering supplies.                     | 16:05
16:19      | FAC    | Randy, Tyler | Mech Room & EY | N         | Pulling vac on air handlers. EY @ 16:56  | 17:20
17:53      | EE     | Marc         | MY             | N         | Electronics storage                      | 18:23
17:57      | FAC    | Randy        | EY             | N         | Possibly moving pallets of jacks.        | 18:19
18:42      | SQZ    | Vicki        | CTRL Rm        | N         | Checking SQZ levels                      | 19:12
18:43      | SEI    | Jim          | LVEA           | N         | Checking out H2 ITMY chamber.            | 18:48
H1 SQZ
anthony.sanchez@LIGO.ORG - posted 14:00, Friday 01 September 2023 (72626)
SDF Diffs Accepted.

SQZ SDF Diff accepted screenshot attached.

Images attached to this report
H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 12:40, Friday 01 September 2023 - last comment - 12:11, Friday 17 November 2023(72622)
Calibration uncertainty estimate corrections
This is a continuation of a discussion of the mis-application of the calibration model raised in LHO alog 71787, which was fixed on August 8th (LHO alog 72043), and of further issues with which time varying factors (kappas) were applied while the ETMX UIM calibration line coherence was bad (see LHO alog 71790), which was fixed on August 3rd.

We need to update the calibration uncertainty estimates with the combination of these two problems where they overlap.  The appropriate thing is to use the full DARM model, R = 1/C + (A_uim + A_pum + A_tst) * D, where C is the sensing function, A_{uim,pum,tst} are the individual ETMX stage actuation transfer functions, and D is the digital DARM filter.  However, it looks like we can get away with an approximation, which will make implementation somewhat easier.
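For orientation, here is a minimal numerical sketch of that response-function bookkeeping (the transfer functions below are toy stand-ins chosen purely for illustration, not the real pydarm model components):

    # Toy sketch of the full DARM response function, R = 1/C + (A_uim + A_pum + A_tst) * D
    import numpy as np

    freq = np.logspace(1, np.log10(5000), 500)   # 10 Hz to 5 kHz
    s = 2j * np.pi * freq

    C     = 3e6 / (1 + s / (2 * np.pi * 410))    # toy sensing with a ~410 Hz cavity pole
    A_tst = 4e-11 / s**2                         # toy TST stage: free-mass-like 1/f^2
    A_pum = 6e-12 / s**3                         # toy PUM stage: steeper roll-off
    A_uim = 8e-13 / s**4                         # toy UIM stage: steeper still
    D     = 1e5 * (1 + 2 * np.pi * 40 / s)       # toy digital DARM filter

    R = 1 / C + (A_uim + A_pum + A_tst) * D      # full response function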

As a first demonstration, I confirm I can replicate the 71787 result purely with models (no fitting).  I take the pydarm calibration model response, R, correct it for the time dependent correction factors (kappas) at the time I took the GDS/DARM_ERR data, and then take the ratio with the same model but with the 3.2 kHz ETMX L3 HFPoles removed (the correction Louis and Jeff eventually implemented).  This is the first attachment.

Next we calculate the expected error just from the wrong kappas being applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose GPS time 1374369018 (2023-07-26 01:10), you can see the LHO summary page here, with the upper left plot showing the kappa_C discrepancy between GDS and front end.  So just this issue produces the second attachment.

We can then look at the effects of the missing 3.2 kHz pole for two possibilities -- with the front end kappas, and with the bad GDS kappas -- and see that the difference is pretty small compared to typical calibration uncertainties.  Here it's on the scale of a tenth of a percent at around 90 Hz.  I can also plot the model with the front end kappas (more correct at this time) over the model with the wrong GDS kappas, for a comparison in scale as well.  This is the 3rd plot.

This suggests to me that the calibration group can apply a single correction to the overall response function systematic error for the period where the 3.2 kHz HFPole filter was missing, and then, for the period where the UIM uncertainty was preventing the kappa_C calculation from updating, apply an additional time dependent correction factor, simply multiplying the two.
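Schematically, that combination is just a frequency-by-frequency product (a sketch; the eta arrays are hypothetical placeholders for the two correction transfer functions, defined on a common frequency vector):

    import numpy as np

    def combined_correction(eta_no3p2k, eta_kappa_t):
        # eta_no3p2k : static correction for the missing 3.2 kHz HFPole
        # eta_kappa_t: time-dependent correction for the frozen kappas,
        #              recomputed for each stretch of affected data
        return np.asarray(eta_no3p2k) * np.asarray(eta_kappa_t)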

As an example, the 4th attachment shows what this would look like for the gps time 1374369018.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:25, Monday 11 September 2023 (72817)
For further explanation of the impact of Frozen GDS TDCFs vs. Live CAL-CS Computed TDCFs on the response function systematic error, i.e. what Joe's saying with
    Next we calculate the expected error just from the wrong kappas being 
    applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose 
    GPS time 1374369018 (2023-07-26 01:10 UTC), you can see the LHO summary page here, with 
    the upper left plot showing the kappa_C discrepancy between GDS and front end.  
    So just this issue produces the second attachment.
and what he shows in his second attachment, see LHO:72812.
jeffrey.kissel@LIGO.ORG - 16:34, Thursday 14 September 2023 (72879)
I've made some more clarifying plots to help me better understand Joe's work above after getting a few more details from him and Vlad.

(1) GDS-CALIB_STRAIN is corrected for time dependence, via the relative gain changes, "\kappa," as well as for the new coupled-cavity pole frequency, "f_CC." In order to make a fair comparison between the *measured* response function, the GDS-CALIB_STRAIN / DARM_ERR live data stream, and the *modeled* response function, which is static in time, we need to update the response function with the time dependent correction factors (TDCFs) at the time of the *measured* response function. 

How is the *modeled* response function updated for time dependence? Given the new pydarm system, it's actually quite straightforward, given a DARM model parameter set (pydarm_H1.ini) and a good conda environment. Here's a bit of pseudo-code that captures what's happening conceptually:
    # Set up environment
    import numpy as np
    from copy import deepcopy
    from gwpy.timeseries import TimeSeriesDict as tsd
    import pydarm

    # Instantiate two copies of the pydarm DARM loop model
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1.ini')
    darmModel_wTDCFs_obj = deepcopy(darmModel_obj)

    # Grab time series of TDCFs; chanList is defined below (index 0 is the
    # guardian state channel, indices 1-5 the CAL-CS TDCF channels), and
    # starttime / endtime bracket the comparison time
    tdcfs = tsd.get(chanList, starttime, endtime, frametype='R', verbose=True)

    # Take a representative (median) value of each TDCF over the queried stretch
    kappa_C = np.median(tdcfs[chanList[1]].value)
    freq_CC = np.median(tdcfs[chanList[2]].value)
    kappa_U = np.median(tdcfs[chanList[3]].value)
    kappa_P = np.median(tdcfs[chanList[4]].value)
    kappa_T = np.median(tdcfs[chanList[5]].value)

    # Multiply in kappas and replace the cavity pole, with a "hot swap" of the
    # relevant parameters in the DARM loop model
    darmModel_wTDCFs_obj.sensing.coupled_cavity_optical_gain *= kappa_C
    darmModel_wTDCFs_obj.sensing.coupled_cavity_pole_frequency = freq_CC
    darmModel_wTDCFs_obj.actuation.xarm.uim_npa *= kappa_U
    darmModel_wTDCFs_obj.actuation.xarm.pum_npa *= kappa_P
    darmModel_wTDCFs_obj.actuation.xarm.tst_npv2 *= kappa_T

    # Extract the response function transfer function on your favorite frequency vector
    freq = np.logspace(1, np.log10(5000), 1000)
    R_ref    = darmModel_obj.compute_response_function(freq)
    R_wTDCFs = darmModel_wTDCFs_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R
    eta_R_wTDCFs_over_ref = R_wTDCFs / R_ref


For all of this study, I started with the reference model parameter set that's relevant for these times in late July 2023 -- the pydarm_H1.ini from the 20230621T211522Z report directory, which I've copied over to a git repo as pydarm_H1_20230621T211522Z.ini.

(2) One layer deeper, some of what Joe's trying to explore in his plots above is the difference between the low-latency, GDS-pipeline-computed TDCFs and the real-time, CALCS-pipeline-computed TDCFs, which differ because of the issues with the GDS pipeline computation discussed in LHO:72812.

So, in order to facilitate this study, we have to gather TDCFs from both GDS and CALCS pipeline. Here's the channel list for both:
    chanList = ['H1:GRD-ISC_LOCK_STATE_N',

                'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
                'H1:CAL-CS_TDEP_F_C_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_UIM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_PUM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',

                'H1:GDS-CALIB_KAPPA_C',
                'H1:GDS-CALIB_F_CC',
                'H1:GDS-CALIB_KAPPA_UIM_REAL',
                'H1:GDS-CALIB_KAPPA_PUM_REAL',
                'H1:GDS-CALIB_KAPPA_TST_REAL']
where the first channel in the list is the state of the detector lock acquisition guardian, for useful comparison.
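As a concrete example of pulling one TDCF from both pipelines for comparison (a sketch assuming the channel names above and a working gwpy data-access setup; the GPS span is illustrative):

    from gwpy.timeseries import TimeSeriesDict

    # Illustrative span around the NOT THERMALIZED example time (GPS 1374369018)
    start, end = 1374365418, 1374372618

    data = TimeSeriesDict.get(
        ['H1:CAL-CS_TDEP_KAPPA_C_OUTPUT', 'H1:GDS-CALIB_KAPPA_C'],
        start, end, frametype='R', verbose=True)

    # Overplot the live CAL-CS kappa_C against the (frozen) GDS kappa_C
    plot = data['H1:CAL-CS_TDEP_KAPPA_C_OUTPUT'].plot(label='CAL-CS (live)')
    ax = plot.gca()
    ax.plot(data['H1:GDS-CALIB_KAPPA_C'], label='GDS (frozen)')
    ax.set_ylabel('kappa_C')
    ax.legend()
    plot.savefig('kappaC_gds_vs_calcs.png')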

(3) Indeed, for *most* of the above aLOG, Joe chooses an example time when the GDS and CALCS TDCFs are *the most different* -- in his case, 2023-07-26 01:10 UTC (GPS 1374369018) -- when the H1 detector is still thermalizing after power up. They're *different* because the GDS calculation was frozen at the values they had on the day the calculation was spoiled by a bad MICH FF filter, 2023-08-04 -- and, importantly, when the detector *was* thermalized.

An important distinction that's not made above is that the *measured* data in his first plot is from LHO:71787 -- a *different* time, when the detector WAS thermalized, a day later -- 2023-07-27 05:03:20 UTC (GPS 1374469418).

Compare the TDCFs between the NOT THERMALIZED time (2023-07-26, first attachment here) and the THERMALIZED time (2023-07-27, first attachment I recently added to Vlad's LHO:71787).

One can see that in the 2023-07-27 THERMALIZED data, the Frozen GDS and Live CALCS TDCF answers agree quite well. For the NOT THERMALIZED time, 2023-07-26, \kappa_C, f_CC, and \kappa_U are quite different.

(4) So, let's compare the response function ratio, i.e. the systematic error transfer function, between the response function updated with GDS TDCFs vs. CALCS TDCFs for the two different times -- thermalized vs. not thermalized. This will be an expanded version of Joe's second attachment:
    - 2nd attachment here: this exactly replicates Joe's plot, but shows more ratios to better get a feel for what's happening. Using the variables from the pseudo-code above, I'm plotting
        :: BLUE = eta_R_wTDCFs_CALCS_over_ref = R_wTDCFs_CALCS / R_ref
        :: ORANGE = eta_R_wTDCFs_GDS_over_ref = R_wTDCFs_GDS / R_ref
        :: GREEN = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS
    where the GREEN trace shows what Joe showed -- both as the unlabeled BLUE trace in his second attachment, and as the "FE kappa true R / applied bad kappa R" GREEN trace in his third attachment -- the ratio between response functions, one updated with CALCS TDCFs and the other updated with GDS TDCFs, for the NOT THERMALIZED time. 

    - 3rd attachment here: this replicates the same traces, but with the TDCFs from Vlad's THERMALIZED time.

For both Joe's and my plots, because we think the CALCS TDCFs are more accurate, and it's tradition to put the more accurate response function in the numerator, we show it as such. Comparing the two GREEN traces from my plots, it's much clearer that the difference between GDS and CALCS TDCFs is negligible for THERMALIZED times, and substantial during NOT THERMALIZED times.

(5) Now we bring in the complexity of the missing 3.2 kHz ESD pole. Unlike the "hot swap" of TDCFs in the DARM loop model, it's a lot easier to just create an "offline" copy of the pydarm parameter file with the ESD poles removed. That parameter file lives in the same git repo location, but is called pydarm_H1_20230621T211522Z_no3p2k.ini. So, with that, we instantiate the model in the same way, but call the different parameter file:
    # Set up environment
    import pydarm

    # Instantiate two copies of the pydarm DARM loop model
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z.ini')
    darmModel_no3p2k_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z_no3p2k.ini')

    # Extract the response function transfer function on your favorite frequency vector
    R_ref = darmModel_obj.compute_response_function(freq)
    R_no3p2k = darmModel_no3p2k_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_nom_over_no3p2k = R_ref / R_no3p2k

where here, the response function without the 3.2 kHz pole is less accurate, so R_no3p2k goes in the denominator.

Without any TDCF correction, I show this eta_R_nom_over_no3p2k compared against Vlad's fit from LHO:71787 for starters.

(6) Now for the final layer of complexity: we need to fold in the TDCFs. This is where I think a few more traces and plots are needed, comparing the THERMALIZED vs. NOT THERMALIZED times, plus some clear math, in order to explain what's going on. In the end, I reach the same conclusion as Joe, that the two effects -- fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole -- are "separable" to good approximation, but I'm slower than Joe is, and need things laid out more clearly.

So, on the pseudo-code side of things, we need another couple of copies of the darmModel_obj:
    - with and without 3.2 kHz pole 
        - with TDCFs from CALCS and GDS, 
            - from THERMALIZED (LHO71787) and NOT THERMALIZED (LHO72622) times:
    
        # (each darmModel_*_obj below is built as in the snippets above: instantiate
        # from the corresponding .ini, then hot-swap in the TDCFs from the named
        # pipeline (CCS = CAL-CS, GDS) at the named time (LHO71787 / LHO72622))
        R_no3p2k_wTDCFs_CCS_LHO71787 = darmModel_no3p2k_wTDCFs_CCS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO71787 = darmModel_no3p2k_wTDCFs_GDS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_CCS_LHO72622 = darmModel_no3p2k_wTDCFs_CCS_LHO72622_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO72622 = darmModel_no3p2k_wTDCFs_GDS_LHO72622_obj.compute_response_function(freq)

        
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO71787 = R_wTDCFs_CCS_LHO71787 / R_no3p2k_wTDCFs_CCS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO71787 = R_wTDCFs_GDS_LHO71787 / R_no3p2k_wTDCFs_GDS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622 = R_wTDCFs_CCS_LHO72622 / R_no3p2k_wTDCFs_CCS_LHO72622
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622 = R_wTDCFs_GDS_LHO72622 / R_no3p2k_wTDCFs_GDS_LHO72622


Note, critically, that these ratios of models with and without the 3.2 kHz pole -- both updated with the same TDCFs -- are NOT THE SAME THING as just the ratio of models updated with GDS vs. CALCS TDCFs, even though it might look like the "reference" and "no 3.2 kHz pole" terms should cancel "on paper," if one naively thinks the operation is separable:
     
    [[ ( R_wTDCFs_CCS / R_ref ) * ( R_ref / R_no3p2k ) ]] / [[ ( R_wTDCFs_GDS / R_ref ) * ( R_ref / R_no3p2k ) ]]  # NAIVE
    which one might naively cancel terms to get down to
    [[ R_wTDCFs_CCS ]] / [[ R_wTDCFs_GDS ]]  # NAIVE
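To quantify how non-separable the two fixes actually are, one can form both versions explicitly and compare (a sketch reusing the response-function arrays named in the pseudo-code above, for the NOT THERMALIZED LHO72622 time):

    # The "right thing": CALCS-TDCF-updated reference over GDS-TDCF-updated no-pole model
    eta_right = R_wTDCFs_CCS_LHO72622 / R_no3p2k_wTDCFs_GDS_LHO72622

    # The separable approximation: (TDCF fix alone) * (3.2 kHz pole fix alone)
    eta_approx = (R_wTDCFs_CCS_LHO72622 / R_wTDCFs_GDS_LHO72622) * (R_ref / R_no3p2k)

    # Their ratio measures the error of treating the two corrections as separable;
    # per the conclusions below, it stays small compared to typical calibration uncertainties
    separability_error = eta_approx / eta_right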

    
So, let's look at the answer now, with all this context.
    - NOT THERMALIZED: This is a replica of what Joe shows in the third attachment for the 2023-07-26 time:
        :: BLUE -- the systematic error incurred from excluding the 3.2 kHz pole on the reference response function without any updates to TDCFs (eta_R_nom_over_no3p2k)
        :: ORANGE -- the systematic error incurred from excluding the 3.2 kHz pole on the CALCS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622, Joe's "FE kappa true R / applied R (no pole)")
        :: GREEN -- the systematic error incurred from excluding the 3.2 kHz pole on the GDS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622, Joe's "GDS kappa true R / applied (no pole)")
        :: RED -- for comparison, Vlad's *fit* of the ratio of the CALCS-TDCF-updated, modeled response function to the measured (GDS-CALIB_STRAIN / DARM_ERR) response function

    Here, because the GDS TDCFs are different than the CALCS TDCFs, you actually see a non-negligible difference between ORANGE and GREEN. 

    - THERMALIZED:
        (Same legend, but the TIME and TDCFs are different)

    Here, because the GDS and CALCS TDCFs are the same-ish, you can't see that much of a difference between the two. 
    
    Also, note that even when we're using the same THERMALIZED time and corresponding TDCFs, to be self-consistent with Vlad's fit of the measured response function, they still don't agree perfectly. So there's likely still more systematic error in play at the thermalized time.

(7) Finally, I wanted to explicitly show the consequences of "just" correcting for the Frozen GDS TDCFs and of "just" correcting the missing 3.2 kHz pole, to be able to better *quantify* the statement that "the difference is pretty small compared to typical calibration uncertainties," as well as to show the difference between "just" the ratio of response functions updated with the different TDCFs (the incorrect model) and the "full" models.

    I show this in 
    - NOT THERMALIZED, and
    - THERMALIZED

For both of these plots, I show
    :: GREEN -- the corrective transfer function we would be applying if we only update the Frozen GDS TDCFs to Live CALCS TDCFs, compared with
    :: BLUE -- the ratio of corrective transfer functions,
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> only fixing the missing 3.2 kHz pole
    :: ORANGE -- the ratio of corrective transfer functions
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> the "second best thing to do" which is leave the Frozen TDCFs alone and correct for the missing 3.2 kHz pole 
       
     Even for the NOT THERMALIZED time, BLUE never exceeds 1.002 / 0.1 deg in magnitude / phase, and it's small compared to the "TDCF only" correction (the simple correction of Frozen GDS TDCFs to Live CALCS TDCFs), shown in GREEN. This helps quantify why Joe thinks we can separately apply the two corrections to the systematic error budget: GREEN is much larger than BLUE.

    For the THERMALIZED time, in BLUE, that ratio of full models is even smaller, and, as expected, the ratio of simple TDCF update models is also small.


%%%%%%%%%%
The code that produced this aLOG is create_no3p2kHz_syserror.py as of git hash 3d8dd5df.
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 12:11, Friday 17 November 2023 (74255)
Following up on this study just one step further, as I begin to actually correct data during the time period where both of these systematic errors are in play -- the frozen GDS TDCFs and the missing 3.2 kHz pole...

I craved one more set of plots to convey that "fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole are separable to good approximation," showing the actual corrections one would apply in the different cases:
    :: BLUE = eta_R_nom_over_no3p2k = R_ref / R_no3p2k >> the systematic error created by the missing 3.2 kHz pole in the ESD model alone
    :: ORANGE = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS >> the systematic error created by the frozen GDS TDCFs alone
    :: GREEN = eta_R_nom_over_no3p2k * eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = the product of the two >> the approximation
    :: RED = a previously unshown eta that we'd actually apply to the data that had both = R_ref (updated with CALCS TDCFs) / R_no3p2k (updated with GDS TDCFs) >> the right thing

As above, it's important to look at both a thermalized case and a non-thermalized case, so I attach those two:
    NOT THERMALIZED, and
    THERMALIZED.

The conclusions are the same as above:
    - Joe is again right that the difference between the approximation (GREEN) and the right thing (RED) is small, even for the NOT THERMALIZED time.
But I think this version of the plots / traces better shows the breakdown of which effect contributes where on top of the approximation vs. "the right thing," and "the right thing" was never explicitly shown. All the traces in my expanded aLOG, LHO:72879, had the reference model (or no 3.2 kHz pole model) updated with either both CALCS TDCFs or both GDS TDCFs in the numerator and denominator, rather than "the right thing," where you have CALCS TDCFs in the numerator and GDS TDCFs in the denominator.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To create these extra plots, I added a few lines of "calculation" code and another 40-ish lines of plotting code to create_no3p2kHz_syserror.py. I've now updated it within the git repo, so it and the repo now have git hash 1c0a4126.
Non-image files attached to this comment
H1 ISC
eleonora.polini@LIGO.ORG - posted 12:30, Friday 01 September 2023 (72625)
OMC scattered light noise study

I summarized the work done regarding the study of the scattered light produced by the output mode cleaner of LHO in the presentation in DCC document G2301690-v1. This could have been one of the candidates for the noise at 20-50 Hz caused by the up-conversion of the fast glitches at 2.6 Hz. According to this study, the amount of scattered light produced by the OMC could be sufficient to cause noise around those frequencies. Nevertheless, the relative movement between the OMC and SRM, when the interferometer is locked, is not sufficient to move the shelf of scattered light into the critical region for the sensitivity curve, as shown in Fig. 1.

In order to perform the study, I used the data taken by Camilla (in entries 71354 and 71742) to find the coupling factor G. I then calculated the scattered light noise considering different GPS times: in the absence of an earthquake, in the presence of an earthquake, and during a fast glitch at 2.6 Hz. The effect is that this noise remains at low frequency and thus should not cause extra noise in DARM.
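For context, the usual fringe-wrapping model behind such a projection looks roughly like this (a sketch under standard assumptions; the coupling factor G and the relative-motion time series are the measured inputs described above):

    import numpy as np
    from scipy import signal

    lam = 1.064e-6  # main laser wavelength [m]

    def scattered_light_asd(x_rel, fs, G):
        # The scattered field picks up a phase 4*pi*x/lambda; motion larger than
        # a fringe wraps the sine and upconverts low-frequency motion into a shelf
        phi = 4 * np.pi * np.asarray(x_rel) / lam
        noise_t = G * np.sin(phi)
        # One-sided amplitude spectral density of the projected noise
        f, psd = signal.welch(noise_t, fs=fs, nperseg=int(4 * fs))
        return f, np.sqrt(psd)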

Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 12:14, Friday 01 September 2023 (72623)
HAM1 X 3dl4c feedforward, and oversight in HEPI twist controls.

I got 2 horizontal L4Cs under HAM1 this week and was able to take measurements to try designing some filters for X feedforward. It's possible to get nice improvements in HAM1 motion from 1 Hz to beyond 10 Hz, with some improvements in ASC over those frequencies, but it's difficult to get filters that don't make the low frequency motion worse. When I tried a filter like Huyen's, I got broad excess noise below 0.1 Hz. When I high-pass the filter, I get a different noise injection at the high-pass frequency; I think I can put this in a place that doesn't affect the IFO, but it will take some tuning.

First plot compares the LLO filter (yellow, scaled to my HAM1 data), the filter I have running on HAM1 right now (red), and the ratio of TFs used for the FF filter design (blue).

Second plot compares some ASDs with the current filter running. For the top subplot, red and brown are with X FF off, green and light blue are on. For the other 2 subplots, red is X FF off, green is X FF on. I'm able to get good improvements above 1 Hz in the HEPI channels, and CHARD pitch even sees some improvement in places, maybe a factor of 2-ish around 6 Hz. Some of the other ASC signals seem to see small improvements as well. But the feedforward also causes some excess noise below 1 Hz, which all of the ASC channels seem to pick up.

After talking with Arnaud a bit, I think I may have just found a cause for the noise below 1 Hz. The HEPIs all have a "twist" path, a feedforward path that is needed to compensate for the HEPI actuator drives bending the crossbeams and thereby tilting the horizontal L4Cs. This path is only used on HAM1, because HAM1 is the only chamber that uses the L4Cs in-loop. The error point for this should be the Cartesian drive, and it is currently read at the output of the isolation loops, shown in the third image. Because we've never used the 3DL4C path before, no one had thought about the fact that this path should read out after the add block which sums the output of the isolation loops and the output of the 3DL4C feedforward path. This means the total drive signal going to the twist path doesn't account for the 3DL4C drive signal, so it's probably not being fully subtracted from the HEPI blended L4Cs. The fourth attachment shows how this path should be wired up. HAM1 has an active ECR, so I'll fix HAM1 on Tuesday (it has its own library part), but this should also be fixed in the general HEPI master part.
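In signal-flow terms, the fix just moves the twist-path pickoff downstream of the summing node (a toy sketch; the names are illustrative, not the actual HEPI model part names):

    def hepi_cartesian_drive(iso_loop_out, ff_3dl4c_out, twist_comp):
        # iso_loop_out : output of the isolation loops
        # ff_3dl4c_out : output of the 3DL4C feedforward path
        # twist_comp   : maps total drive to the tilt correction subtracted
        #                from the blended horizontal L4Cs
        total_drive = iso_loop_out + ff_3dl4c_out
        # Current (wrong) wiring reads the twist error point at iso_loop_out only,
        # so the 3DL4C contribution to crossbeam bending goes uncompensated;
        # the twist path should instead see the summed drive:
        return total_drive, twist_comp(total_drive)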

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 11:51, Friday 01 September 2023 (72624)
Friday Lockloss 1377628699

Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1377628699
Still waiting on analysis to finish running.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:12, Friday 01 September 2023 (72621)
Fri CP1 Fill

Fri Sep 01 10:07:14 2023 INFO: Fill completed in 7min 10secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 PSL
anthony.sanchez@LIGO.ORG - posted 09:51, Friday 01 September 2023 (72620)
PSL Weekly Famis 26207


Laser Status:
    NPRO output power is 1.831W (nominal ~2W)
    AMP1 output power is 67.19W (nominal ~70W)
    AMP2 output power is 135.2W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN

PMC:
    It has been locked 26 days, 1 hr 27 minutes
    Reflected power = 16.43W
    Transmitted power = 109.3W
    PowerSum = 125.8W

FSS:
    It has been locked for 0 days 11 hr and 45 min
    TPD[V] = 0.8494V

ISS:
    The diffracted power is around 2.3%
    Last saturation event was 0 days 9 hours and 38 minutes ago


Possible Issues: None

H1 SUS
camilla.compton@LIGO.ORG - posted 09:38, Friday 01 September 2023 (72619)
Commissioning 15:56 to 15:59UTC to Manually damp PI 24

Tony, Camilla

We went into Commissioning from 15:56 to 15:59 UTC as we needed to take SUS_PI to IDLE and manually change the phase of PI mode 24. It was circling through phases and ringing up, plot attached, with the t-cursor at where we changed SUS_PI to IDLE.

We are unsure why this would have changed today; Tony checked that it hasn't rung up this high in the last week (his plot attached). We should check the SUS_PI settings to avoid this in future locks.

There are instructions on how SUS_PI works in 68610 and 68379, but all we did was take SUS_PI to IDLE and change H1:SUS-PI_PROC_COMPUTE_MODE24_PLL_PHASE to 50, as shown in the attached image. Tony suggests that maybe SUS_PI's phase doesn't need to be monitored, as the phase already changes during observing.
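For reference, a minimal sketch of that intervention with standard EPICS tools (assuming pyepics and the usual GRD-<node>_REQUEST convention for guardian request channels; TDCF channel names as above):

    import epics

    # Request the SUS_PI guardian node to IDLE so it stops adjusting the PLL itself
    epics.caput('H1:GRD-SUS_PI_REQUEST', 'IDLE')

    # Manually set the mode-24 PLL phase (value chosen while watching the ringup respond)
    epics.caput('H1:SUS-PI_PROC_COMPUTE_MODE24_PLL_PHASE', 50)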

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:04, Friday 01 September 2023 (72618)
Friday Ops Day Shift Start

TITLE: 09/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 141Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY:

Inherited an IFO that has been Locked for 7 hours.
 

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 00:02, Friday 01 September 2023 (72616)
Lockloss at 07:00UTC

We lost lock right at 07:00, not sure why. There was a DCPD saturation right before.

H1 General
ryan.crouch@LIGO.ORG - posted 00:00, Friday 01 September 2023 - last comment - 00:15, Friday 01 September 2023(72614)
OPS Thursday eve shift summary

TITLE: 09/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: Quiet shift: 1 lockloss with a fairly automated relock (I touched two things). Locked for 1:07 as of 07:00 UTC

Lockloss at 04:35UTC

LOG:

No log for this shift

Comments related to this report
ryan.crouch@LIGO.ORG - 00:15, Friday 01 September 2023 (72617)

DRMI locked on its first try pretty quickly on the relock

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 21:38, Thursday 31 August 2023 - last comment - 23:04, Thursday 31 August 2023(72613)
Lockloss at 04:35, no obvious cause

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1377578169

Comments related to this report
ryan.crouch@LIGO.ORG - 23:04, Thursday 31 August 2023 (72615)

Reacquired Observing at 06:04 UTC

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:41, Tuesday 29 August 2023 - last comment - 17:03, Friday 01 September 2023(72530)
X-End BSC5 Annulus Ion Pump Replacement

(Janos C., Gerardo M.)

The old ion pump body was removed and replaced with a refurbished model (Galaxy type).  The annulus system was pumped down while the replacement took place.
After the 4-1/2" flange was torqued, the ion pump volume was added to the rest of the annulus system.  The aux cart pumping down the annulus system worked for 3 hours to get the pressure down; after the pressure reached 4.5x10^-5 Torr, the ion pump took over the pumping with no problem. After 20 minutes, the aux cart and "can" turbo were removed.  System back to normal.

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:03, Friday 01 September 2023 (72630)VE

Update on pumpdown progress; see attached trend data, a total of 3 days.

Images attached to this comment