Reports until 08:19, Thursday 22 June 2023
H1 General
anthony.sanchez@LIGO.ORG - posted 08:19, Thursday 22 June 2023 - last comment - 10:06, Thursday 22 June 2023(70712)
Thursday Ops Owl Shift End


TITLE: 06/22 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Inherited H1, which had already been locked for about 2 hours.

10:11 UTC GRB-Short Candidate E412714 Verbal Alarm 

14:00 UTC Increase in SensMon Range:
Jenne called and we spoke about making some changes to increase the sensitivity.
I did a simple caget to find out the values of H1:LSC-SRCLFF1_GAIN & H1:LSC-SRCLFF1_TRAMP, and a caput to change the gain to 2.1.
After the change I saw a noticeable increase in SensMon range.
IFO Current Status : NOMINAL_LOW_NOISE & OBSERVING  with a range of 140.6 Mpc

14:35 UTC Superevent Candidate S230622ba

 

Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:06, Thursday 22 June 2023 (70721)OpsInfo
See more details about SRCLFF1 gain recovery, which improved the BNS range from ~125 to ~140 Mpc in LHO:70710 and LHO:70720.

Nice work Tony & team!
H1 General
ryan.crouch@LIGO.ORG - posted 08:04, Thursday 22 June 2023 (70711)
OPS Thursday Day shift start

TITLE: 06/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

H1 General
anthony.sanchez@LIGO.ORG - posted 05:06, Thursday 22 June 2023 (70709)
Thursday Mid Owl Shift report.

IFO Current Status : NOMINAL_LOW_NOISE & Observing with a range of 128 Mpc for the past 7 hours.

H1 SEI
anthony.sanchez@LIGO.ORG - posted 04:13, Thursday 22 June 2023 (70708)
Famis H1 ISI CPS Noise Spectra Check - Weekly

FAMIS 19663: H1 ISI CPS Noise Spectra Check - Weekly task

 

Images attached to this report
Non-image files attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 02:12, Thursday 22 June 2023 (70705)
Comparison of broadband 'systematic error' using CAL-DELTAL_EXTERNAL and GDS-CALIB_STRAIN_CLEAN
I'm attaching a screenshot of a DTT template showing overlaid "systematic error" measurements from five different broadband excitations taken yesterday (Wed. 6/21/2023 PDT). The left column is a bode plot of CAL-DELTAL_EXTERNAL / PCALY_RX and the right is the same for GDS-CALIB_STRAIN_CLEAN / PCALY_RX. 

The trace colors and what they represent are included in the legend on the bottom left. They are also defined below.

The magenta trace was taken during a broadband PCAL excitation after the new 60W calibration was installed (on the front end and the GDS pipeline, LHO 70693). The blue, yellow, and gray traces were taken after the IFO was powered down to 60W but before the calibration was updated. As expected, they are very similar and show no discernible effects from having changed SRCL offset. We changed the SRCL offset to -165 [ct] as a quick detuning check (LHO 70677) but immediately changed it back after taking and inspecting broadband and sensing function measurements in that configuration.

The analogous measurements using GDS-CALIB_STRAIN_CLEAN (right column) show a much tighter grouping than those of CAL-DELTAL_EXTERNAL. This makes sense because the GDS pipeline applies time dependent corrections in real time (after re-timestamping). So differences between the traces on the right bode plot can be attributed to IFO differences that aren't covered by the TDCFs and other compensations in that pipeline.
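As a toy illustration of why the GDS traces group more tightly (a hypothetical sketch, not actual pipeline code): dividing the measured response by the time-dependent optical-gain correction factor kappa_C removes the drift that the fixed front-end reference model leaves in.

```python
import numpy as np

# Hypothetical sketch (not GDS pipeline code): the front-end
# CAL-DELTAL_EXTERNAL uses a fixed reference sensing model, while GDS
# divides out the measured time-dependent correction factor kappa_C.
f = np.logspace(1, 3, 200)      # Hz, band of the bode plots
kappa_C = 1.04                  # example 4% optical-gain drift (made up)

# measured/model ratio: the front end carries the full ~4% systematic error
ratio_frontend = kappa_C * np.ones_like(f)
# GDS applies the TDCF, so the residual ratio collapses back toward 1.0
ratio_gds = ratio_frontend / kappa_C

assert np.allclose(ratio_gds, 1.0)
```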


"PCALY2DARM" Broadband measurements used:
The following files are located in /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/.

- PCALY2DARM_BB_20230620T233459Z.xml : 75W, before RH change started, SRCL offset -265, red trace
- PCALY2DARM_BB_20230621T191103Z.xml : 60W, SRCL Offset -175, gray trace
- PCALY2DARM_BB_20230621T201220Z.xml : 60W, SRCL Offset -165, orange trace
- PCALY2DARM_BB_20230621T211010Z.xml : 60W, SRCL Offset -175, blue trace
- PCALY2DARM_BB_20230621T232125Z.xml : 60W, SRCL Offset -175, after calibration update, magenta trace

DTT template stored at /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O4/H1/Measurements/FullIFOSensingTFs/20230621_systematic_error_deltal_external_gds_calib_strain.xml


DTT reference traces:

T0: 2023-06-20 23:35:06
  Ref0: CAL-DELTAL_EXTERNAL / PCALY_RX_PD
  Ref1: GDS-CALIB_STRAIN_CLEAN / PCALY_RX_PD
T0: 2023-06-21 19:11:09
  Ref2: CAL-DELTAL_EXTERNAL / PCALY_RX_PD
  Ref3: GDS-CALIB_STRAIN_CLEAN / PCALY_RX_PD
T0: 2023-06-21 20:12:27
  Ref4: CAL-DELTAL_EXTERNAL / PCALY_RX_PD
  Ref5: GDS-CALIB_STRAIN_CLEAN / PCALY_RX_PD
T0: 2023-06-21 21:10:16
  Ref6: CAL-DELTAL_EXTERNAL / PCALY_RX_PD
  Ref7: GDS-CALIB_STRAIN_CLEAN / PCALY_RX_PD
T0: 2023-06-21 23:21:31
  Ref8: CAL-DELTAL_EXTERNAL / PCALY_RX_PD
  Ref9: GDS-CALIB_STRAIN_CLEAN / PCALY_RX_PD
Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 00:54, Thursday 22 June 2023 (70706)
Thursday Ops Owl Shift Start

TITLE: 06/22 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 127Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Current IFO Status : NOMINAL_LOW_NOISE & Observing at 60W

LHO General
thomas.shaffer@LIGO.ORG - posted 00:17, Thursday 22 June 2023 (70689)
Ops Eve Shift Summary

TITLE: 06/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
SHIFT SUMMARY: Lost lock after a 10+ hour lock. Relocking was automatic aside from a small code error. The range in this most recent lock has been lower than in the previous one (125Mpc vs 135Mpc). I think it's perhaps related to the SQZ configuration, but I'm not positive.
LOG:

H1 General
thomas.shaffer@LIGO.ORG - posted 22:09, Wednesday 21 June 2023 - last comment - 15:29, Tuesday 27 June 2023(70702)
Back to Observing 0504 UTC

Relock was fully auto after one lock loss while finding IR. There was a missing comma that brought ISC_LOCK into error in LOWNOISE_LENGTH_CONTROL; easy fix.

There were a few SDF diffs that look like they need to be accepted based on alog70648. Accepted with screenshots attached.

I turned on the CAL_AWG_LINES Guardian at request of Jeff. I had to change this node's nominal state to LINES_ON for it to be OK.

Comments related to this report
thomas.shaffer@LIGO.ORG - 23:47, Wednesday 21 June 2023 (70704)DetChar

I'm thinking that this lower range is related to the squeezer. I've attached a screenshot of the FDS DARM FOM where the live trace is above the reference at the same frequencies where DARM seems to be higher than normal. I followed the instructions on the Troubleshooting SQZ wiki to adjust the squeeze angle, but I wasn't able to make anything better, only worse.

I adjusted the sqz angle from 0630-0640 UTC.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 22:27, Wednesday 21 June 2023 (70703)

Our range is staying low at ~125Mpc, there seems to be extra noise from 20-60Hz. Investigating.

andrew.lundgren@LIGO.ORG - 01:56, Thursday 22 June 2023 (70707)DetChar, ISC
There's a lot more coherence of PRCL and SRCL in the ongoing lock than the previous one. The thermalization cal lines are also very high in DARM and CHARD - maybe they weren't turned on until this new lock though.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:13, Thursday 22 June 2023 (70718)CAL, DetChar
Tagging CAL regarding the turn on of CAL_AWG_LINES for this 60W lock stretch -- thanks TJ!

Tagging DetChar as well -- to note that we have eight extra calibration lines on during this nominal low noise stretch, which started at June 22 2023 05:04 UTC

These are on because we want to characterize the thermalization of the detector's DARM loop sensing and response functions now that we're operating at 60W rather than 75/76W. I hope to get a few more lock acquisitions with these extra lines on, and then we'll turn them off, as we had done for the start of the engineering run.

If you'd like to create a data quality flag, you can find the status of these lines "in one go" by looking at the CAL_AWG_LINES guardian state channel, 
    H1:GRD-CAL_AWG_LINES_STATE_N
The numerical value of the channel is 10.0 when the extra calibration lines are ON (the state is called LINES_ON), and 2.0 when the lines are OFF (the state is called IDLE). See CAL_AWG_LINES_StateGraph for the flow of the state graph.
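A minimal sketch of how such a flag condition could be expressed (the state-number-to-name mapping is taken from the description above; the helper name is made up):

```python
# Sketch of a line-status check based on the CAL_AWG_LINES guardian state
# values described above (10.0 = LINES_ON, 2.0 = IDLE); names are made up.
AWG_LINES_STATES = {10.0: "LINES_ON", 2.0: "IDLE"}

def cal_lines_on(state_value):
    """True when the extra calibration lines are active."""
    return AWG_LINES_STATES.get(state_value) == "LINES_ON"

assert cal_lines_on(10.0) is True
assert cal_lines_on(2.0) is False
```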
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:40, Thursday 22 June 2023 (70720)DetChar, OpsInfo
A retrospect on the lower range:

The cause was that the SRCLFF1 gain was not captured in SDF / Guardian during yesterday's power reduction from 75W to 60W, LHO:70648.

See LHO:70712 and LHO:70710 where Tony recovered the correct gain of 2.1.

@DetChar -- it might be worth creating a data quality flag for this:
    Observation segment start (with SRCLFF1 Gain at 1.0): 
    2023-06-22 05:04:38 UTC
               22:04:38 PDT
               1371445496 GPS
    Observation segment stop (with SRCLFF1 Gain at 1.0): 
    2023-06-22 13:53:47 UTC
               06:53:47 PDT
               1371477245 GPS
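The UTC/GPS pairs above can be cross-checked with a small conversion (a sketch assuming a fixed GPS-UTC leap-second offset of 18 s, valid in 2023; real analyses should use a proper library such as gwpy or lal):

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset as of 2023 (assumption: fixed offset)

def utc_to_gps(dt):
    """Convert an aware UTC datetime to GPS seconds (fixed-offset sketch)."""
    return int((dt - GPS_EPOCH).total_seconds()) + LEAP_SECONDS

start = utc_to_gps(datetime(2023, 6, 22, 5, 4, 38, tzinfo=timezone.utc))
stop = utc_to_gps(datetime(2023, 6, 22, 13, 53, 47, tzinfo=timezone.utc))
assert (start, stop) == (1371445496, 1371477245)  # matches the values above
```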
Images attached to this comment
jenne.driggers@LIGO.ORG - 11:17, Thursday 22 June 2023 (70728)

RyanC just added the SRCLFF1 gain of 2.1 to lscparams, saved, and reloaded the ISC_LOCK guardian, so if we need to relock it'll come back up with the correct gain. Note, though, that we expect to update this yet again later today.

thomas.shaffer@LIGO.ORG - 16:03, Thursday 22 June 2023 (70742)

Here's that SDF screenshot that I said I was going to attach. Turns out my tired brain had flipped the setpoint and EPICS values in the tables, doh! My fault.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 15:29, Tuesday 27 June 2023 (70892)

Adding a quick comment to Jeff's note about lines, with a bit of relevant info from ER15.

Abby Wang and Athena Baches recently analyzed lines in May 2023 data, grouping those that evolve similarly in time. They found a cluster of lines corresponding to the awg lines, but not including any other entries. This is good news; it implies that there are *not* strong narrow artifacts with very similar histories-- i.e. these lines aren't causing unexpected strong lines elsewhere, which ought to have shown up in the same cluster. (Note: it's still possible that there are weak artifacts which aren't caught by this analysis.)

The attached plots show what the time evolution looks like: each row is a line (corresponding to 11.475, 11.575, 15.175, 15.275, 24.4, and 24.5 Hz), with yellow = above threshold and blue = below threshold.

Images attached to this comment
H1 General (Lockloss)
thomas.shaffer@LIGO.ORG - posted 21:01, Wednesday 21 June 2023 (70700)
Lock loss 1371441081

1371441081

Lost lock after 10hours 35 min. No obvious cause initially.

H1 ISC (OpsInfo)
sheila.dwyer@LIGO.ORG - posted 16:40, Wednesday 21 June 2023 (70692)
PRCL gain

Jenne, Sheila, TJ

When operating with 75W input power, we had been using a thermalization guardian to increase the PRCL gain as the 9MHz gain dropped during thermalization.  Today we did a series of PRCL OLG measurements as the interferometer thermalized, and saw that the PRCL optical gain still drops, but not nearly as much.  For now, Jenne and TJ have set the guardians to no longer use the thermalization guardian, so its nominal state is DOWN. 

Our first PRCL gain measurement posted today, 70659, was taken 24 minutes after reaching 60W input power; a digital gain of 6 gave us a 25Hz UGF.  After thermalization was complete, Jenne found that a digital gain of 10 gave us a UGF of 30Hz.  The attached screenshot is that first measurement, showing that a digital gain of 10 would have been stable at this time.  Jenne has put the digital gain of 10 into the guardian, to be applied in the state LOWNOISE_LENGTH_CONTROL; we think this should be OK.  For comparison, at 75W input power the thermalization guardian took the PRCL gain from 6 to 37, so this thermalization is much less extreme. 

If there are any problems with locklosses at LOWNOISE_LENGTH_CONTROL or shortly after, it may be that the PRCL gain is too high.  A temporary solution in this case would be to just wait ~25 minutes after reaching 60W input power before doing LOWNOISE_LENGTH_CONTROL, i.e., wait at LOWNOISE_ASC for 20 minutes or so.  If this happens, we will find a solution in the guardian tomorrow.  

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 16:36, Wednesday 21 June 2023 - last comment - 16:44, Wednesday 21 June 2023(70694)
Observing 2334 UTC

We've finished up measurements related to the 60W input power change today and have cleaned up SDF and a few Guardians. We have been observing since 2334 UTC.

We had to change the nominal state for the LASER_PWR node from 75 to 60, and we changed the nominal state for the THERMALIZATION node to IDLE. This latter change might be temporary; more thought is needed.

Comments related to this report
jenne.driggers@LIGO.ORG - 16:44, Wednesday 21 June 2023 (70698)

Congratulations, all! 

This process went very smoothly, thanks to a lot of prep work by a lot of folks, and many teams working in parallel today.

H1 CAL (DetChar, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 16:34, Wednesday 21 June 2023 - last comment - 16:50, Thursday 22 June 2023(70693)
Calibration Pushed / Updated for 60W; Systematic Error is within +/- 5% and +/- 3 deg as before at 75/76W
L. Dartez, J. Kissel

More details to come, but as of Jun 21 2023 23:30 UTC, we have updated the calibration, to reflect the new IFO with input power back at 60W and all the associated other configuration changes including but not limited to a SRCL offset of -175 [ct].
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:59, Wednesday 21 June 2023 (70699)
The calibration update was pushed based on the second -175 [ct] SRCL offset sensing function data taken in LHO:70683.

Even though there were no new measurements of the actuators taken, the N/ct actuation strength "free" parameters were also updated with, essentially, a new MCMC run on the most recent data, from May 17 2023 (LHO:69684).

Here are the "free parameter" values exported to foton:
   $ pydarm export
       searching for 'last' report...
       found report: 20230621T211522Z
       using model from report: /ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1_site.ini
       filter file: /opt/rtcds/lho/h1/chans/H1CALCS.txt

          Hc: 3.4207e+06 :: 1/Hc: 2.9234e-07
          Fcc: 439.33 Hz

          Hau:  7.5083e-08 N/ct
          Hap:  6.2353e-10 N/ct
          Hat:  9.5026e-13 N/ct

    filters (filter:bank | name:design string):
       CS_DARM_ERR:10                O4_Gain:zpk([], [], 2.9234e-07)
       CS_DARM_CFTD_ERR:10           O4_Gain:zpk([], [], 2.9234e-07)
       CS_DARM_ERR:9                O4_NoD2N:zpk([439.32644887584786], [7000], 1.0000e+00)
       CS_DARM_ANALOG_ETMX_L1:4      Npct_O4:zpk([], [], 7.5083e-08)
       CS_DARM_ANALOG_ETMX_L2:4      Npct_O4:zpk([], [], 6.2353e-10)
       CS_DARM_ANALOG_ETMX_L3:4      Npct_O4:zpk([], [], 9.5026e-13)

The calibration report on the MCMC fitting for free parameters, as well as the GPR fit based on the two measurements at -175 [ct] (20230621T211522Z in LHO:70683 and 20230621T191615Z in LHO:70671) is attached below for convenience, but has been archived on the LDAS cluster under 
    H1_calibration_report_20230621T211522Z.pdf
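As a quick sanity check on the exported numbers (pure arithmetic on the values printed above): the O4_Gain filter installed in CS_DARM_ERR:10 is the reciprocal of the sensing gain Hc.

```python
# Cross-check of the exported values above: 1/Hc should reproduce the
# O4_Gain zpk gain installed in CS_DARM_ERR:10.
Hc = 3.4207e6        # sensing gain from the report
inv_Hc = 2.9234e-7   # gain exported to the O4_Gain filter

# agreement to better than 0.01% (the printed values are rounded)
assert abs(1.0 / Hc - inv_Hc) / inv_Hc < 1e-4
```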
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 10:12, Thursday 22 June 2023 (70722)
For the primary metric of how the calibration's quality changed across the 75W-to-60W transition, the SRCL offset change, and the calibration push, see LHO:70705. I copy and paste the attached image here for convenience.

Also repeating Louis:
The DTT template for this measurement is stored in  
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O4/H1/Measurements/FullIFOSensingTFs/
        20230621_systematic_error_deltal_external_gds_calib_strain.xml
Images attached to this comment
louis.dartez@LIGO.ORG - 11:19, Thursday 22 June 2023 (70729)
we changed is_pro_spring to False in the pyDARM parameter model set. Commit: 353de502.
jeffrey.kissel@LIGO.ORG - 16:50, Thursday 22 June 2023 (70735)
Attached are the raw, blow-by-blow notes I took during yesterday's calibration push, which highlight all the command-line commands and actions we needed to take in order to update the calibration.

Recapping here with a little more procedural clarity: 
(Any command given without a path can be run in any new, fresh terminal; we did not need to invoke any special conda environment, thanks to the hard work done behind the scenes by the pydarm-cmd team):

    (0) If at all possible, understand what you expect to change in the calibration ahead of time. 
        If that is *limited* to something changing that can only be measured with the full IFO, 
        i.e. you expect *only* a change in the "free parameters" (overall sensing function gain, 
        DARM cavity pole frequency, or any of the three ETMX UIM, PUM, TST actuator strengths) 
        then you run through the process outlined below as we did yesterday. Other changes to the 
        DARM loop, like electronics changes or computational arrangement mean you have to do a 
        more in-depth characterization of that thing, update the DARM loop model parameter set, 
        *then* start at (1).

    (1) Measure something new about the IFO. In this case we *knew* to expect a change in 
        the interferometric response of the IFO because of the ring heater changes and input 
        power change, so we remeasured the sensing function, expecting only the optical gain 
        and the cavity pole to change.
        
        $ pydarm measure --run-headless bb sens pcal
        
        We, of course, should be out of observing, and the ISC_LOCK guardian should be in NLN_CAL_MEAS.
        When the measurement is complete, you can do steps (2) through (6) with the IFO *back* 
        in NOMINAL_LOW_NOISE, and you can even go back in to OBSERVING during that time.

    (2) Process that measurement, and create the folder of material that's required for that 
        processing, as though it were a part of the on-going "epoch" of measurements where 
        you expect nothing to have changed about the DARM loop other than the time-dependent 
        corrections to the free parameters. This gives you a "report" that shows the residuals 
        between the last installed model of the IFO and you current measurement compared to 
        the rest of the measurement/model residuals in the inventory for that last "epoch." 
        In this way, you can confirm or refute your expectations of what has changed.
        
        $ pydarm report
        
        which generates the folder in 
        /ligo/groups/cal/H1/reports/20230621T191615Z/

    (3) Looking at the first results, we were disappointed that occasionally the MCMC fit 
        would land on a parameter hyperspace island with large SRC detuning spring 
        frequencies, even though the lower frequency limit of the data fed into the fitter 
        was ~80 Hz. As such, we adjusted the *default* model parameter set,
         
        /ligo/groups/cal/H1/ifo/pydarm_H1.ini
        changing the following parameter,
        Line 15    is_pro_spring = False
        and re-ran the report,
        $ pydarm report --force
        in order to re-run the MCMC. This worked so we committed pydarm_H1.ini to the 
        ifo/H1/ repo as git hash 353de502.
 
    (4) After looking through the history of measurement/model residuals, you should 
        then have an understanding of what you want to *tag* as "valid", and of whether 
        your new measurement *is* in fact the boundary of a new epoch. This
        may also be the time when you *don't* like what you see, so you modify the 
        controls settings of the IFO to change it further and go back to step (1). As you 
        can see from LHO aLOGs 70671, 70677, and 70683, 
        we were doing just that.

        In the end, we had *two* measurements in "the new epoch" that we liked, and one 
        measurement in the middle -- technically its *own* epoch -- that we didn't like. 

        So, after processing all the data and making no updates to the report tags so we 
        could see the whole history of the sensing function, we tagged the reports in the 
        following way

        $ cd /ligo/groups/cal/H1/reports/
        $ pydarm ls -r                  # before tagging
            20230620T234012Z valid          # Last valid 75W sensing function data set
            20230621T191615Z                # First new 60W data set, with SRCL offset -175 [ct]
            20230621T201733Z                # Second new 60W data set, with SRCL offset -165 [ct]
            20230621T211522Z                # Third new 60W data set, with SRCL offset -175 [ct]
        $ # validate the first and third 60W data sets, both with -175 [ct] SRCL offset
        $ touch 20230621T191615Z/tags/valid 
        $ touch 20230621T191615Z/tags/epoch-sensing
        $ touch 20230621T211522Z/tags/valid  
        $ pydarm ls -r                  # after tagging
            20230620T234012Z valid
            20230621T191615Z valid epoch-sensing
            20230621T201733Z
            20230621T211522Z valid

    (5) Then, now that these tags are set up -- and specifically the epoch-sensing tag -- 
        only these new 60W data sets are included in the history, which means only 
        those data sets are stacked and fit to GPR. Thus, this report generation run will 
        be the "final" report that generates what we end up exporting out to the calibration 
        pipeline. Importantly, even though the epoch boundary is defined by the *first* 
        20230621T191615Z measurement, the parameters that will be installed are defined by 
        the MCMC of the *latest* 20230621T211522Z measurement. This works because we're 
        assuming the IFO is the same in this entire boundary, so we should get equivalent 
        answers (within uncertainty, and modulo time-dependent correction factors) if we MCMC 
        any of the measurements in the epoch.
        
        $ pydarm report --force
        
        yields a good report, with "free parameters," foton exports, FIR filters, MCMC 
        posteriors, and GPR fits that are ready to export to the calibration pipeline.

        Also note that all of these re-runs of the report ("pydarm report --force") 
        are *over-writing* the contents of the report, so if you want to save any interim 
        products you must move them out of the way to a different location and/or different name.

    (6) We can validate what we are about to push out into the world with the dry run command,
        
        $ pydarm export
        
        where if you don't specify the report ID, then it exports the latest report. In this case, 
        the latest is 20230621T211522Z, so we do want to use this simplest use of this command. 
        That spits out text like what's shown in LHO:70699. 
   
        Another option is 
        
        $ pydarm status
        
        which spits out a comparison between what's *actually* installed in the front-end against 
        the latest report (which, in this case, is what we're *about* to install).

    (7) If you're happy with what you see, then it's time to shove "the calibration" out into the world.
        (a) Presumably, the IFO is still locked, in NOMINAL_LOW_NOISE, and maybe even in OBSERVING. 
            Warn folks that you're about to take the IFO out of OBSERVING, and the DARM_FOM on the 
            wall is about to go nuts, but the IFO is fine. Try to do steps (b) through (g) as quickly 
            but accurately/completely/carefully as possible.

        (b) push EPICs records to the front end and save new foton files.
            $ pydarm export --push

        (c) open up the CAL-CS GDS-TP screen, and look at the DIFF of filter coefficients. 
            Hit the LOAD_COEFFICIENTS button if you see what you expect from the DIFF.

        (d) on the same screen, open up the SDF OVERVIEW. Review the changes and accept if you see 
            what you expect.

        Now everything's updated in the front-end, so it's time to migrate stuff out to the cluster 
        so GDS and the uncertainty pipelines get updated. 

        (e) Add an additional tag to the report which you just pushed,
        
        $ touch /ligo/groups/cal/H1/reports/20230621T211522Z/tags/exported
        $ pydarm ls -r 
            20230620T234012Z valid
            20230621T191615Z valid epoch-sensing
            20230621T211522Z exported valid
        

        (f) Archive all the reports that are a part of this wonderful new epoch, which pushes the 
            whole folder so it includes the tags. Having the "exported" tag is particularly 
            important for the GDS pipeline. 
            
            /ligo/groups/cal/H1/reports$ arx commit 20230621T191615Z
            /ligo/groups/cal/H1/reports$ arx commit 20230621T211522Z 
            

        (g) Restart the gds pipeline, which picks up the .npz of filter coefficients from the latest 
            report marked with the "exported" tag. 
            
            $ pydarm gds restart
            

            This opens up prompts from both DMT machines, dmt1 and dmt2, to say "yes" to confirm that 
            you want to restart.

            After restarting the GDS pipeline, you can check the status of the machines as well,
            
            $ pydarm gds status
            

    (8) Once you're done with the GDS pipeline restart, then you've gotta wait ~2-5 minutes for 
        the pipeline to complete its restart. To check whether the pipeline is back up and running, 
        head to the "grafana" calibration monitoring page. 
        Presumably, the IFO is still in NOMINAL_LOW_NOISE, so eventually the live measurement of 
        response function systematic error should begin to reappear, hopefully even closer to 
        1.0 mag, 0.0 deg phase. While you're waiting, you can also pull up trends of 
            - the front-end computed TDCFs, see if those move closer to 1.0 (or in the case of f_cc, closer to the MCMC value)
            - the front-end computed DELTAL_EXTERNAL / PCAL systematic error transfer function, see if those move closer to 1.0
        In addition, the newest, latest *modeled* systematic error budget only gets triggered once 
        every hour, so you just have to be patient on that one, and check in later.

    (9) Once everything is settled, take the ISC_LOCK guardian to NLN_CAL_MEAS, and take a broad-band 
        PCAL injection for final, high-density frequency resolution, post-install validation a la LHO:70705
Non-image files attached to this comment
H1 SEI
jim.warner@LIGO.ORG - posted 13:55, Wednesday 21 June 2023 - last comment - 16:43, Wednesday 21 June 2023(70678)
EY HEPI pump after Beckhoff install

The ETMY HEPI PLC was replaced yesterday with a new Beckhoff unit, and so far everything seems to be working okay, though we'd probably only notice if things weren't working at all. The seismic stack isolated without problems yesterday when we were recovering.

The first attached image shows trends of the pressures over the last couple of days. It seems like, whatever else has happened, there is less overall noise in the pressure readbacks, which is probably a good thing. Pressures so far seem as stable as before.

The second image shows (top) ASDs before and after for the differential pressure error point for the pump controller and the output pressure readback on the pump station (PRESS4; this sensor is at the end of the output manifold on the pump cart), and (bottom) coherence with the Y HEPI L4Cs. The before ASDs both show some weird "anti-comb"(?) and are consistent with the greater readout noise seen in the timeseries trends. The after ASDs look nice and quiet by comparison. The coherence between the diff pressure and the Y HEPI L4Cs hasn't really changed, so I don't think we have made the HEPI motion worse.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 16:43, Wednesday 21 June 2023 (70697)

The less noisy pressure is consistent with what Hugh found way back in 2017 when they first started testing the EX Beckhoff unit.

There's also a 2-5 psi drop in the individual pressures sensors for EY, but the new levels line up well with the pressures we are seeing at EX.

H1 ISC
camilla.compton@LIGO.ORG - posted 08:19, Wednesday 21 June 2023 - last comment - 12:35, Wednesday 28 June 2023(70648)
Next lock at 60W input power.

This lock we will change the IFO input power to 60W, see alog 70497.

Already done:

Comments related to this report
jenne.driggers@LIGO.ORG - 08:32, Wednesday 21 June 2023 (70651)

Thermalization guardian commented out of ISC_LOCK for now.

sheila.dwyer@LIGO.ORG - 08:38, Wednesday 21 June 2023 (70653)

Reverted the change to ITMY A2L gains from 69082

jenne.driggers@LIGO.ORG - 08:38, Wednesday 21 June 2023 (70654)

NOISE_CLEAN will not turn on any NonSENS cleaning.  This means that GDS-CALIB_STRAIN_CLEAN will be the same as GDS-CALIB_STRAIN_NOLINES.

jeffrey.kissel@LIGO.ORG - 08:39, Wednesday 21 June 2023 (70655)CAL, DetChar, OpsInfo
I've turned *ON* the "thermalization" calibration lines, via the CAL_AWG_LINES guardian for this power up, in order to track the thermalization of the sensing function during a 60W power up (we did not turn on these lines until we were regularly at 75W, so we don't really have as clear of an analysis [e.g. LHO:69593] of the thermalization behavior during 60W)

Recall, the eight line frequencies (the highest at 24.5 Hz) are listed in LHO:69284.

They've been on and running since 2023-06-21 15:18:04 UTC.

Note, I have *not* coded this up to be turned on automatically in ISC_LOCK, as I hope that we'll get a few thermalization runs during these next two 8-hour periods, and I'll be present for them.
jeffrey.kissel@LIGO.ORG - 08:45, Wednesday 21 June 2023 (70656)CAL
As per discussion in LHO:70650, I've edited 
    /opt/rtcds/userapps/release/isc/h1/guardian/
        lscparams.py
in order to change the hard-coded value to which we set the SRCL offset, changing it from -265 [ct] that we've been using at 75/76W PSL input power, to -175 [ct] which we'd used at 60W PSL input power. This is line 526 (at the time I edited the params):

    offset = {'SRCL_MODEHOP':-800,
              'SRCL_RETUNE':-175  # updated 20230621
             }

While we're not confident this is the perfect value, it's certainly a fine place to start.

jenne.driggers@LIGO.ORG - 09:04, Wednesday 21 June 2023 (70658)

Reverted LSC FF filters, as noted by Elenna's config alog.

SRCLFF1 again uses FM2. MICHFF again uses FM6-9.  PRCLFF gain is commented out (so, should be left at zero from Down).

sheila.dwyer@LIGO.ORG - 09:12, Wednesday 21 June 2023 (70659)

PRCL OLG measured after the loop changes in LOWNOISE_LENGTH_CONTROL

Images attached to this comment
jenne.driggers@LIGO.ORG - 09:12, Wednesday 21 June 2023 (70660)

Sheila measured the PRCL OLG, having left the 'new' filters in place, and letting lownoise_length_control set PRCL1 gain to 6, and not using the thermalization guardian.

Our UGF is about 25 Hz, so a little lower than the target of 30 Hz, but stable and fine.  We'll re-check after a while of having been at full power.

jenne.driggers@LIGO.ORG - 10:07, Wednesday 21 June 2023 (70662)

Lowered CARM gain by hand by 6dB (lowered H1:LSC-REFL_SERVO_IN1GAIN and H1:LSC-REFL_SERVO_IN2GAIN by 1dB each, alternating, until I was down on each slider by 6dB).

When we just ran through LaserNoiseSuppression, we saw lots of excess noise, and Sheila measured the highest CARM UGF to be around 27 kHz, which is too high.  We then lowered the overall gain by 6dB.  Not yet in guardian.

EDIT: Lowering by 6dB brought our lowest UGF to ~12kHz, too low.  We re-increased by 3dB, so that in the end we've only reverted the 3dB increase that Elenna mentioned in the config alog.

EDIT2: this is now in guardian.
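As a rough sanity check of the gain steps above: for a loop whose open-loop gain falls roughly as 1/f near the crossover, the UGF scales linearly with the overall gain. A minimal sketch (the 1/f assumption is an approximation; the numbers are the ones quoted in this entry):

```python
def db_to_lin(db):
    """Convert a gain in dB to a linear amplitude factor."""
    return 10 ** (db / 20.0)

# Measured CARM UGF with the extra +6 dB of gain still in place:
ugf_high = 27e3  # Hz

# Removing 6 dB roughly halves the gain, so the UGF should roughly halve:
ugf_minus_6db = ugf_high / db_to_lin(6)   # ~13.5 kHz (measured ~12 kHz)

# Re-adding 3 dB lands between the two:
ugf_minus_3db = ugf_high / db_to_lin(3)   # ~19 kHz
```

The predicted ~13.5 kHz is close to the measured ~12 kHz; the mismatch just reflects that the loop shape is not exactly 1/f there.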

jenne.driggers@LIGO.ORG - 10:25, Wednesday 21 June 2023 (70664)

PRCL OLG remeasured longer into the lock, and the UGF was quite low.  I increased the PRCL1 gain to 10 (from the nominal, without-thermalization-guardian, 6), and the UGF is back to 30 Hz. 

We can likely afford to just put this gain of 10 into lownoise_length_control, but that would put our UGF at the beginning of the lock at 37 Hz with 24 deg of phase margin.  Probably fine, but much higher starts to be not fine.
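The 37 Hz figure can be cross-checked with a toy linear-scaling estimate, assuming the UGF moves linearly with loop gain (only approximate for a loop that is not exactly 1/f near crossover; the 25 Hz early-lock UGF at gain 6 is from the measurement above):

```python
# Toy scaling estimate: UGF proportional to loop gain.
ugf_gain6_early = 25.0                        # Hz, early-lock UGF at PRCL1 gain 6
ugf_gain10_early = ugf_gain6_early * 10 / 6   # ~41.7 Hz; quoted estimate is 37 Hz
```

The linear estimate slightly overshoots the quoted 37 Hz, consistent with the loop gain falling faster than 1/f near 30-40 Hz.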

jenne.driggers@LIGO.ORG - 11:26, Wednesday 21 June 2023 (70667)

I increased the gain of the SRCL FF (and measured that I did not need to change the gain of MICH FF).

Attached shows the reduction in coherence with SRCL when the gain of the SRCLFF1 filter bank is set to 2.1 (rather than 1.0). Blue is the old coherence, green is the updated coherence.  You can see that if we want to keep the coherence reduced at lower frequencies, we'll have to make a frequency-dependent change to the feedforward, but so far this at least helps.

I did this by injecting noise into SRCL (by just using the SRCL OLG measurement template's excitation, just set to exponential rather than fixed averages), and changed the feedforward gain until the noise in DARM seemed minimized above 30 Hz.  I did the same also for MICH, but found that the existing gain value of 0.97 was already the best.
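The tuning procedure above amounts to a one-parameter minimization: if the true coupling gain is g_true and the feedforward cancels a gain g_ff, the residual coupling into DARM goes as |g_true - g_ff|, so the injected-noise level in DARM is minimized where the FF gain matches the coupling. A toy sketch of that idea (using the tuned value 2.1 from this entry as the assumed optimum):

```python
import numpy as np

# Residual SRCL->DARM coupling as a function of feedforward gain; the
# excess noise seen in DARM during the injection is minimized where the
# FF gain matches the true coupling gain.
g_true = 2.1                          # assumed true coupling gain
gains = np.arange(0.5, 3.55, 0.05)    # scan of candidate FF gains
residual = np.abs(g_true - gains)     # proxy for excess DARM noise above 30 Hz
best_gain = gains[np.argmin(residual)]
```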

This means that I incidentally got SRCL and MICH olg measurements, which are the second and third attachments.

Images attached to this comment
jenne.driggers@LIGO.ORG - 12:35, Wednesday 21 June 2023 (70673)

Accepted FF-related SDFs.  Also accepted PRCL1 gain at 2 sec, since the thermalization guardian is off and won't set it to 30 sec.

Not shown, I also accepted the OAF-WHITENING gain at zero (which means that there's no NonSENS cleaning going forward).

Images attached to this comment
jenne.driggers@LIGO.ORG - 12:57, Wednesday 21 June 2023 (70676)

Another PRCL measurement, UGF is just a bit under 30 Hz.

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:32, Wednesday 21 June 2023 (70684)

Sheila plugged in the SR785 to the ISS second loop chassis similar to the photo in alog 61721.

Keita confirmed that Err1Mon is equivalent to our digital filter banks' In1 (so, before the excitation), and Err2Mon is equivalent to In2 (so, after the excitation).  The excitation BNC is likely the one plugged into the port under Err1Mon on the photo.

And, since the two monitor points have different gains, the UGF of the loop should be read off of the TF at the -20dB line.

With the ISS second loop gain H1:PSL-ISS_SECONDLOOP_GAIN at the 75W value of -5 dB, Sheila measured that we had a UGF of about 17kHz.  With the gain increased to -2dB, we have a UGF of about 21.7kHz.  I've accepted the value of -2dB into SDF.
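The -20dB readoff can be illustrated numerically: if the two monitor ports differ by a fixed relative electronics gain of a factor of 10 (an assumed value consistent with reading the UGF at the -20dB line), then the measured Err2Mon/Err1Mon transfer function crosses -20dB exactly where the open-loop gain crosses unity. A toy sketch with a 1/f loop at the 17kHz UGF quoted above:

```python
import numpy as np

# Toy 1/f open-loop gain with a 17 kHz UGF (the -5 dB measurement above):
f = np.logspace(3, 5, 2001)     # 1 kHz .. 100 kHz
g_open = 17e3 / f

# An assumed relative monitor-port gain of 10 (20 dB) shifts the measured
# TF down by 20 dB relative to the true open-loop gain:
measured_db = 20 * np.log10(g_open) - 20

# The UGF is the frequency where the measured TF crosses the -20 dB line:
ugf = f[np.argmin(np.abs(measured_db - (-20)))]
```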

Attached is a photo of the SR785 with the IFO at 60W and the ISS second loop gain at -2dB.

Images attached to this comment
jenne.driggers@LIGO.ORG - 16:41, Wednesday 21 June 2023 (70696)

We've changed the PRCL1 gain in lownoise_length_control to be 10.  Since this means that we don't need the Thermalization guardian, we'll just leave that in IDLE, and TJ has set its nominal to be IDLE (see alog 70694).

Sheila has written a separate alog 70692 for what to do if this is too much gain for PRCL at the beginning of the lock.

anthony.sanchez@LIGO.ORG - 07:13, Thursday 22 June 2023 (70710)

I did a simple caget to find out what the values of H1:LSC-SRCLFF1_GAIN & H1:LSC-SRCLFF1_TRAMP were, and a caput to change the gain to 2.1. 
After the change I saw a noticeable increase in SENSMON Range.

IFO Current Status : NOMINAL_LOW_NOISE & OBSERVING  with a range of 140.6 Mpc
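A minimal sketch of that channel-access sequence, with a plain dict standing in for EPICS (no real caget/caput calls here; the pre-change gain and ramp time are placeholders, not the actual recorded values):

```python
# Stand-in for the EPICS channel-access layer (mock, not real CA).
channels = {
    'H1:LSC-SRCLFF1_GAIN': 1.0,    # placeholder pre-change value
    'H1:LSC-SRCLFF1_TRAMP': 5.0,   # placeholder ramp time [s]
}

def caget(channel):
    """Read a channel value (mock)."""
    return channels[channel]

def caput(channel, value):
    """Write a channel value (mock)."""
    channels[channel] = value

# The sequence from the entry: read the current gain and ramp, then
# write the tuned feedforward gain back in.
old_gain = caget('H1:LSC-SRCLFF1_GAIN')
old_tramp = caget('H1:LSC-SRCLFF1_TRAMP')
caput('H1:LSC-SRCLFF1_GAIN', 2.1)   # restore the tuned SRCL FF gain
```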

Images attached to this comment
sheila.dwyer@LIGO.ORG - 12:35, Wednesday 28 June 2023 (70920)

This is a photo of the CARM OLG measurement referred to in 70662

Images attached to this comment
H1 ISC
daniel.sigg@LIGO.ORG - posted 12:01, Tuesday 20 June 2023 - last comment - 10:36, Sunday 02 July 2023(70611)
Interferometer Reflection Port RIN

Plot 1 shows the dark noise in LSC-REFL_A_LF and REFL_B_LF (yellow/black), with 10W laser input with the ISS second loop (red/blue) and without the ISS second loop (magenta/cyan). The detected power in REFL_A and REFL_B was about 16mW each for the 10W measurements. It will drop to about 10mW in full lock. So, we are now about a factor 4 above dark noise.

Plot 2 shows the RIN of the 2 REFL PDs and their average, together with the ISS second loop sensors. The ISS inner and outer sensors have about 7-8mW of light each, so the measurements are limited by the shot noise of the inner PD. In full lock these sensors see about 60mW.
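For reference, the shot-noise-limited RIN of a photodetector with DC photocurrent I is sqrt(2e/I) per root hertz. A quick sketch using a photocurrent of ~7.5 mA, roughly what the inner-loop sensor draws for the light levels quoted above:

```python
import math

E_CHARGE = 1.602176634e-19   # electron charge [C]

def shot_noise_rin(photocurrent_A):
    """Shot-noise-limited RIN [1/sqrt(Hz)] for a given DC photocurrent."""
    return math.sqrt(2 * E_CHARGE / photocurrent_A)

# ISS inner-loop sensor at roughly 7.5 mA:
rin_inner = shot_noise_rin(7.5e-3)    # ~6.5e-9 /sqrt(Hz)
```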

Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 16:58, Tuesday 20 June 2023 (70634)

Here are 2 plots when the interferometer is locked. The ISS second loop sensors see about 60mW of light, whereas for this time LSC-REFL_A/B see about 8mW each.

The first plot shows a large excess in REFL power fluctuations below ~200Hz. Even the flat part above 300Hz shows some excess. It should be about 70% of the 16mW measurements, but shows a very similar level. Looking at the coherence between REFL_A and B indicates that this is a real signal and not noise.

The second plot shows relative intensity noise. To get the curves calibrated correctly one should match the peak near 4.5kHz, since this seems to be real intensity noise from the laser. (There is a factor of 0.3 in the calibration of the RIN of REFL_A/B to account for the interferometer reflectivity at DC. This factor should be 1 when the interferometer isn't locked.)

Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 17:46, Tuesday 20 June 2023 (70636)

Here is comparison between early in the lock and after 4 hours.

The hump in the reflected power is clearly getting larger as time progresses, as is its coherence with PRCL. The input power as measured by the ISS second loop outer sensor doesn't have a large correlation with the reflected power (some is expected due to the shot noise of the inner sensor).

Q1: Why is PRCL coherent with the power in reflection? If there is a coupling, shouldn't it be at least second order?

Q2: What's the flat noise above 300Hz that we see in the reflection power?

Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 17:50, Tuesday 20 June 2023 (70639)

Here is the power trend during this lock.

Images attached to this comment
daniel.sigg@LIGO.ORG - 16:41, Wednesday 21 June 2023 (70695)

And these are the plots more than 7 hours into a 60W lock. The REFL PD now seems to be shot noise limited above 100Hz.

Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 11:38, Friday 23 June 2023 (70763)

Here is a comparison between the noise measured in reflection at 75W and 60W and against the dark noise. Some observations:

  1. The power in reflection dropped by about a factor of 2.5, much more than the ratio of the input powers.
  2. The noise also scaled down more than just shot noise, indicating that there was additional noise at 75W (also seen in the coherence).
  3. The measured noise at 60W is only a factor of 1.8 above dark noise, somewhat marginal.
  4. If we subtract the dark noise from the measured noise, we are a factor of 1.5 above ADC noise.
  5. Assuming this is all shot noise, the shot noise limited power becomes 2.8mW.
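Observations 1 and 2 can be made quantitative with simple ratios (a sketch; the 2.5 figure is the measured power drop quoted above):

```python
import math

input_power_ratio = 75 / 60     # ~1.25 expected from input power alone
refl_power_ratio = 2.5          # measured drop in reflected power

# If the noise were purely shot noise, its amplitude would scale as the
# square root of the detected power:
shot_noise_amp_ratio = math.sqrt(refl_power_ratio)   # ~1.58
```

The reflected power falling twice as fast as the input power is what points to the extra, non-shot-noise contribution at 75W.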
Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 16:03, Friday 30 June 2023 (70979)

The outer loop RIN is always reported about 8% higher than the inner loop one. This is not real. In the PSL ISS second loop model, both detector values are divided by the DC value of the inner loop detector. Since the outer loop detector sees about 8% more light, the RIN in the outer loop detector is overestimated by this amount. To get a better value, multiply by 0.922. With this correction both RIN spectra agree with each other.

Below is a better calibration of the REFL/ISS PDs, measured with 10W input and all TMs misaligned.

             Measured    Calibration
IMC-PWR_IN   9.855 W     1 W/W
PDSUMINNER   7.477 mA    0.7588 mA/W
PDSUMOUTER   8.115 mA    0.8235 mA/W
REFL_A_LF    16.70 mW    1.694 mW/W
REFL_B_LF    15.59 mW    1.582 mW/W
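The 0.922 correction follows directly from the two DC photocurrents in the calibration table above:

```python
# Both RIN channels are normalized by the inner PD's DC value, so the
# outer-loop RIN is overestimated by the ratio of the DC photocurrents:
i_inner = 7.477e-3   # A, PDSUMINNER from the calibration table
i_outer = 8.115e-3   # A, PDSUMOUTER
correction = i_inner / i_outer   # ~0.921, matching the quoted 0.922
```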
Images attached to this comment
daniel.sigg@LIGO.ORG - 10:36, Sunday 02 July 2023 (71023)

Here is a 60W trend for completeness.

Images attached to this comment