Reports until 16:00, Thursday 22 June 2023
H1 General
ryan.crouch@LIGO.ORG - posted 16:00, Thursday 22 June 2023 (70741)
OPS Thursday Day shift summary

TITLE: 06/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
SHIFT SUMMARY:

Lock #1:

Locked for 10:05 when I arrived.

Out of Observing at 17:15 for LSC FF injections LHO:70737, HAM1 FF LHO:70727, ITMY A2L LHO:70730, and PEM work; Jeff also turned off the CAL_AWG_LINES (LHO:70724).

Turned on EX wifi for Robert at 19:54, turned it back off when he was done at 20:57 (He also swept the VEA on his exit).

I changed the CAL_AWG_LINES code so that IDLE is the nominal state and the node will report OK (LHO:70736).

Back into Observing at 21:24, out at 22:44 for some more commissioning (MICHFF, more PEM, PSL ISS 2nd loop gain change).

LOG:                                                                                                                                                                                                                               

Start Time  System  Name              Location  Laser_Haz  Task                               Time End
16:22       VAC     Travis            EndY      N          Grab tools from mech room          16:52
17:16       CAL     Gabriele, Elenna  Remote    N          LSC FF injections                  17:58
17:47       VAC     Travis, Janos     MidX      N          Mech room                          18:25
18:08       CAL     Elenna            Remote    N          ITMY A2L measurement               18:42
18:44       PEM     Robert            CR        N          PEM tests                          20:44
19:39       PEM     Robert            EndX      N          PEM tests                          20:57
20:58       CAL     Elenna, Gabriele  Remote    N          FF test                            21:24
21:50       VAC     Gerardo           EndY      N          Search for pump parts, valve room  22:20
22:44       CAL     Elenna            Remote    N          MICHFF test                        22:48
22:45       PEM     Robert            LVEA      N          Prep measurement                   22:53
22:53       PEM     Robert            CR        N          PEM tests                          Ongoing
H1 ISC (PSL)
ryan.short@LIGO.ORG - posted 15:47, Thursday 22 June 2023 - last comment - 09:46, Friday 23 June 2023(70739)
ISS Second Loop Gain Updated Following Power-down to 60W

R. Short, J. Driggers

Daniel had noticed that the ISS second loop gain had been set back to -5 dB when the IFO relocked to 60W last night. Since we now want this at -2 dB in our 60W configuration (alog 70684), I've updated the ISS_acquisition_gain in lscparams.py to be -2 and committed to svn. While we were out of Observing this afternoon starting at 22:45 UTC, I changed the gain to -2 dB, accepted the change in SDF, and reloaded the IMC_LOCK guardian.
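
For reference, the change amounts to a one-line edit along these lines (a sketch in Python; the variable name is taken from the log text, the surrounding file structure is assumed):

    # lscparams.py (hypothetical excerpt)
    ISS_acquisition_gain = -2   # [dB] ISS second loop gain at 60W; was -5 (see alog 70684)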

Comments related to this report
daniel.sigg@LIGO.ORG - 09:46, Friday 23 June 2023 (70759)

This usually doesn't work without adjusting the input offset. Also, you would want to test this with the IMC alone, since the AC-coupling is finicky. On the other hand, changing the gain after the servo is engaged is trivial.

H1 ISC
gabriele.vajente@LIGO.ORG - posted 14:39, Thursday 22 June 2023 - last comment - 15:51, Thursday 22 June 2023(70737)
LSC feed-forward update

Elenna, Gabriele

We measured the necessary transfer functions to retune the LSC feed-forward for MICH and SRCL. As usual, we turned off both MICH and SRCL FF paths, measured and fitted the transfer functions.

It turns out that the optimal MICH transfer function changes depending on whether the SRCL FF is on or off. So in the current situation we have a new SRCL FF (that improves things) and we are running with the old MICH FF (which is better than the new fit).

We did another MICH measurement with the SRCL FF on, and found that the optimal transfer function is indeed different from what we measured with the SRCL FF off, and closer to the old MICH FF above 20 Hz. We have fitted the new transfer function and have a new filter ready to go.

All changes have been accepted in SDF and the guardian has been updated.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 15:51, Thursday 22 June 2023 (70740)

We tested the new MICH FF that was fitted on data measured with the SRCL FF on. Now we have no residual coherence with MICH, SRCL, or PRCL.

Images attached to this comment
H1 CAL (OpsInfo)
ryan.crouch@LIGO.ORG - posted 14:34, Thursday 22 June 2023 (70736)
CAL_AWG_LINES guardian node reporting OK

Jeff had turned off the CAL_AWG_LINES earlier today LHO 70724 and requested the guardian to IDLE, but its nominal state was still set to LINES_ON so the node was reporting NOT OK.

To fix this, I simply went into the code ($USERAPPS/cal/h1/guardian/CAL_AWG_LINES.py) and changed the nominal state from LINES_ON to IDLE at the top of the file (you can just change which line is commented), then reloaded the guardian to pick up the change. This can easily be reversed when we want the lines back on by undoing that change, saving, and reloading the guardian.
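
For the curious, the relevant construct looks something like this (a sketch, not the literal file contents; guardian uses the module-level 'nominal' variable to decide which state counts as OK):

    # top of $USERAPPS/cal/h1/guardian/CAL_AWG_LINES.py (hypothetical excerpt)
    # nominal = 'LINES_ON'   # uncomment when the extra cal lines should run
    nominal = 'IDLE'         # uncomment to make the quiet state report OK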

H1 ISC
elenna.capote@LIGO.ORG - posted 14:26, Thursday 22 June 2023 - last comment - 10:31, Monday 26 June 2023(70734)
Soft loop gains updated

The CSOFT P gain has been returned to 20, which is nominal for 60W. Trending back to our 60W configuration, we ran with a DSOFT P gain of 2.5 and a DSOFT Y gain of 5. These gains were raised to 20 each with the power up to 76W to improve stability. I have reduced these gains to 10. This should reduce any residual DSOFT coupling to DARM. I think we can operate with even lower gains, but I want to see a few more locks to confirm we are stable in the ASC before dropping those gains further.

This is updated in the guardian and observe SDF.

Comments related to this report
elenna.capote@LIGO.ORG - 10:31, Monday 26 June 2023 (70823)

I have lowered the DSOFT gains again by half so they are now both 5. This is 60W nominal for DSOFT Y, but still twice 60W nominal for DSOFT P (was 2.5). This is SDFed and updated in the guardian.

H1 ISC
elenna.capote@LIGO.ORG - posted 12:51, Thursday 22 June 2023 - last comment - 16:09, Wednesday 28 June 2023(70730)
ITMY A2L gains retuned

I retuned the ITMY A2L gains today to reduce CHARD coupling to DARM. I followed the same process Gabriele and I followed when we did this at 76W, alog 69082.

- I turned on all ASC 8.125 Hz notches

- I turned on the ADS lines (out of an abundance of caution, I probably didn't need to do this)

- I injected an 8.125 Hz line into CHARD P SM EXC using awggui

- I adjusted the ITMY P2L gain to reduce the height of that peak in DARM (an offline way to quantify the line height is sketched after this list)

- I turned off this excitation and then repeated the process with CHARD Y and the Y2L gain

- I turned off the notches and took us back to camera servos when finished
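
As noted above, here is a hedged offline sketch for quantifying the injected line height in DARM; the channel name and spectral settings are my assumptions, not what was used live in DTT:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    def line_height(t0, duration=120, f_line=8.125, chan='H1:GDS-CALIB_STRAIN'):
        """Return the DARM ASD value at the injected line frequency."""
        data = TimeSeries.get(chan, t0, t0 + duration)
        asd = data.asd(fftlength=40, overlap=20)  # 0.025 Hz bins resolve 8.125 Hz
        freqs = asd.frequencies.value
        return asd.value[np.argmin(np.abs(freqs - f_line))]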

New gains:

P2L: 0.1

Y2L: -0.2

Old gains:

P2L: -0.05

Y2L: -1.7

I was able to reduce the CHARD P line height in DARM by about a factor of 4, and the CHARD Y line height in DARM by about a factor of 10. These are similar to the values that Gabriele achieved at 76W. See attached plot, OMC DCPD SUM reference was the starting point, live trace was the end point with new A2Ls.

I updated the guardian to use these new A2L gains. These gains will be engaged in the move spots state when we adjust all the other A2Ls (as usual). I loaded the guardian after the change. I did not see any SDF diffs related to this change in observe, so I guess these channels are not monitored? There is no need to SDF in safe as these are guardian controlled.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 16:09, Wednesday 28 June 2023 (70927)

I gave the A2L gains another look, still injecting a CHARD line at 8 Hz. I made some improvement to both pitch and yaw with P2L = -0.1, and Y2L = -0.15. I think we see a small reduction in noise below 10 Hz from this change. The CHARD P coherence is also slightly reduced as a result. The new gains are in the guardian. I gave these another look because we have evidence of an alignment change from the new OM2 TSAMS setting, and we did see the ASC coherence change with the OM2 setting.

H1 ISC
elenna.capote@LIGO.ORG - posted 12:02, Thursday 22 June 2023 (70727)
HAM1 FF status
Since the LSC FF injections went quicker than expected, I used some extra time to take a look at the status of the HAM1 FF. Gabriele noted larger than expected coherence of CHARD P and Y with DARM in his bruco. Up until this morning we had still been running with the HAM1 FF tuned on 76W data from April 23. The FF was set for CHARD, PRC2 and INP1, both pitch and yaw loops. Comparing on and off times of the feedforward, it appeared that the HAM1 feedforward was actually injecting noise into the yaw loops in some cases, or having no effect. I immediately turned off all feedforward for the yaw loops (this does line up with memory that we had no need for feedforward in the yaw loops at 60W). It also appeared that the 76W HAM1 FF was worsening the noise in the pitch loops. I also reverted the pitch feedforward to the filters tuned on April 6th, during a thermalized 60W lock.

The plot attached shows all 6 loops (p/y of CHARD, INP1, and PRC2). The gold traces show the spectra with feedforward OFF. The blue traces show the spectra with the 76W feedforward ON. The red traces show the spectra after I reverted to the April 6 feedforward for the pitch loops, and turned the yaw loops feedforward off. This plot shows that yaw feedforward is unnecessary, and made the noise worse. The plot also shows that the April 6 pitch feedforward is decent, and does improve the noise.

It is difficult to see any change in DARM right now. I attribute this to higher LSC coherence. I think with the LSC feedforward retuned, we might actually see the improvement.


Quick edit add: these have been SDFed in both safe and observe.
Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 12:01, Thursday 22 June 2023 (70715)
OPS Thursday day shift midshift update

We've been locked for 14:03; commissioning work is currently ongoing.

We went out of Observing at 17:15 UTC for some planned commissioning work: LSC FF injections, ITMY A2L, and then PEM work (ongoing as of 19:00 UTC). Jeff also turned off the CAL AWG lines during this commissioning time.

We anticipate being in commissioning until around 21:00 UTC.

H1 CAL (DetChar, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 11:41, Thursday 22 June 2023 (70725)
Status of Modeled vs. Measured Calibration Response Function Systematic Error after 60W Calibration Push
J. Kissel, L. Dartez, L. Sun, M. Wade, L. Wade

After Louis' beautiful report of the *measured* systematic error in the H1:GDS-CALIB_STRAIN_CLEANED channel (LHO:70705) after the calibration update (LHO:70693) for the decrease to 60 W (LHO:70648), I wondered what the state of our *modeled* systematic error is; for this is the metric of fidelity that the search groups use.

After the IFO thermalizes, the modeled systematic error continues to agree with the measured error below ~30 Hz and above ~300 Hz, and the systematic error is low. However, the model continues to be discrepant with the measurement in the 50 to 150 Hz region, and large-ish (though not abnormally so), with measured values of ~5-6%, 2-3 deg, where the model claims consistency with no error.

As Maddie and Les advertised in LHO:70666, there now exists the following web home page location,
    https://ldas-jobs.ligo-wa.caltech.edu/~cal/
where you may now find a *constant* comparison between

    :: the *measured* systematic error in the response function, and associated uncertainty -- informed by the coherent transfer function between PCAL calibration lines that are sporadically placed in frequency throughout the DARM sensitivity and the final data product, GDS-CALIB_STRAIN (be it "no extension," _NOLINES, or _CLEANED; they all have the same calibration and thus the same systematic error).

    :: the *modeled* systematic error in the response function, and associated uncertainty -- informed, as in O3, by
        - The "reference model" parameter set, in this case that found in report 20230621T211522Z/pydarm_H1.ini
        - The time-independent "free parameter" posterior distribution from their MCMC runs to determine 
            . the optical gain and DARM coupled cavity pole frequency 
            . the three DARM actuator stage strengths 
          ~cal/archive/H1/reports/20230621T211522Z/ 
              sensing_mcmc_chain.hdf5
              actuation_L1_EX_mcmc_chain.hdf5, actuation_L2_EX_mcmc_chain.hdf5, actuation_L3_EX_mcmc_chain.hdf5
        - The "unknown, static, frequency-dependent systematic error" from the GPR fits of the inventory of
            . sensing function measurement / model (corrected for time-dependence)
            . actuation function measurement / model (corrected for time-dependence) 
          ~cal/archive/H1/reports/20230621T211522Z/ 
            sensing_gpr.hdf5
            actuation_L1_EX_gpr.hdf5
            actuation_L2_EX_gpr.hdf5
            actuation_L3_EX_gpr.hdf5
        - The time-dependent correction factor (TDCF) *values* for the optical gain, DARM coupled cavity pole frequency, and the three DARM actuator stage strengths 
        - The calibration-line-coherence-based uncertainty for the TDCFs
        - The statistical uncertainty on the PCAL amplitude, a la G2301163, that's hard-coded into the default ~cal/archive/H1/ifo/
            pydarm_uncertainty_H1.ini
          under the [pcal] section, as sys_unc = 0.0028 for H1

I attach two (thermalized) times of this comparison from the archive of these plots,
    https://ldas-jobs.ligo-wa.caltech.edu/~cal/archive/H1/uncertainty/
    FIRST ATTACHMENT 2023-06-20 11:50 UTC -- IFO at 75W with 2023-05-17 (20230510T062635Z) calibration from LHO:69696

    SECOND ATTACHMENT 2023-06-22 05:50 UTC -- IFO at 60W with 2023-06-21 (20230621T211522Z) calibration from LHO:70693

Note that both model vs. measurement in *these* plots are plotted as PCAL / GDS-CALIB_STRAIN -- i.e. a multiplicative transfer function to the GDS-CALIB_STRAIN data stream that would make it more accurate -- vs. in LHO:70705, where it's plotted in terms of GDS-CALIB_STRAIN / PCAL, where one would *divide* that transfer function into the data in order to make it more accurate.
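
In other words, the two conventions carry the same information, just inverted; a toy numerical check (all numbers invented for illustration):

    import math
    eta_pcal_over_gds = 1.05                   # these plots: PCAL / GDS-CALIB_STRAIN
    eta_gds_over_pcal = 1 / eta_pcal_over_gds  # LHO:70705: GDS-CALIB_STRAIN / PCAL
    h = 2.0e-21                                # some GDS-CALIB_STRAIN value
    # multiplying by one convention equals dividing by the other
    assert math.isclose(h * eta_pcal_over_gds, h / eta_gds_over_pcal)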

In both of these versions of the measurement, at both 75W and 60W, the model is discrepant from the measurements in similar ways between 50 - 150 Hz.

Since we have two clocks (model and measurement), it's tough to prove which one's right -- but I would point my finger at the systematic error model, since it's far more complicated in construction than the measurements. Also, Louis calibrated his DTT template with very little effort (*just* the two poles at 1 Hz for PCAL, and 3995 m arm length to convert strain to displacement), and it agrees (if you take the inverse) with Maddie / Les's work calibrating the constant PCAL lines.

Given that the high-frequency end of this comparison agrees (above 500 Hz), my guess is that there's something wrong with the modeled actuator.

Further, the *actual* measured systematic error is large (~6% at 80 Hz), the actuator is challenging to model, the actuator is contributing substantially to the response at those frequencies, and my experience from fudging the calibration indicates that one can manipulate the systematic error in those regions by adjusting the actuator gains.

So -- that's where we'll start looking to both get the modeled error to match measured error, and to improve the measured error (assuming the measurement is right).
Images attached to this report
H1 CAL (DetChar, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 10:20, Thursday 22 June 2023 - last comment - 10:35, Thursday 22 June 2023(70724)
Turned OFF CAL_AWG_LINES 2023-06-22 17:15:30 UTC
J. Kissel

While the LSC FF team stopped the observation segment, I took the opportunity to turn OFF the 8 extra CAL_AWG_LINES calibration lines.

If we lose lock again, either operator corps or I will turn them back on at the start of lock reacquisition. Again, the idea is to get ~4 or 5 clean lock acquisition stretches, so we assemble a good inventory to create a statistically sound model of the 60W thermalization (while acknowledging that clean*er* data is more valuable than getting high-number statistics).
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:35, Thursday 22 June 2023 (70726)CDS, DetChar, GRD, OpsInfo
OK, *really* turned OFF all lines at 2023-06-22 17:24:34 UTC
J. Kissel, E. Capote 

Elenna reported from afar that she still saw excitations running from the LSC-DARM1_EXC point, and sure enough -- somehow -- the CAL_AWG_LINES guardian had not properly ramped off the four DARM EXC calibration lines. YUCK.

I re-cycled the guardian through LINES_ON then back to IDLE. This turned on, then turned off BOTH PCAL and DARM excitations cleanly as expected.
Gross!

Warning, TJ, "I told you so" bells are going off in my head: he had been cautioning me all throughout the engineering run that the long-term stability of these python / guardian / awg calls is suspect... I don't even know how to start investigating, so I tag CDS and GRD in hopes there are folks out there that can help diagnose and solve these stale test point control issues.
LHO VE
david.barker@LIGO.ORG - posted 10:14, Thursday 22 June 2023 (70723)
Thu CP1 Fill

Thu Jun 22 10:06:34 2023 INFO: Fill completed in 6min 34secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 DetChar
gabriele.vajente@LIGO.ORG - posted 08:27, Thursday 22 June 2023 - last comment - 13:28, Thursday 22 June 2023(70713)
Coherences for 60W lock

Coherence from last night's lock at 60W, with the correct 2.1 gain for the SRCL FF:

https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1371427218/

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 13:28, Thursday 22 June 2023 (70731)

With the improvement of HAM1 FF and the ITMY A2L gains (70730, 70727), there is a reduction in CHARD and DHARD coherence. The attached plot shows the coherences after these changes. I think this indicates that these changes were good, and we can probably claim we are not limited (much) by ASC noise right now. The biggest contender for coherence is DHARD Y, but that is mostly below 10 Hz. We would probably also improve the coupling of CSOFT P slightly by dropping the gain back to 20 (discussed and agreed in commissioning meeting).

Images attached to this comment
gabriele.vajente@LIGO.ORG - 09:00, Thursday 22 June 2023 (70717)

Here's how much could be gained with MICH and SRCL subtraction (or better tuned FF) and with jitter subtraction.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 22:09, Wednesday 21 June 2023 - last comment - 15:29, Tuesday 27 June 2023(70702)
Back to Observing 0504 UTC

Relock was fully auto after one lock loss while finding IR. There was a missing comma that brought ISC_LOCK into error in LOWNOISE_LENGTH_CONTROL; easy fix.

There were a few SDF diffs that look like they need to be accepted based on alog 70648. Accepted, with screenshots attached.

I turned on the CAL_AWG_LINES Guardian at request of Jeff. I had to change this node's nominal state to LINES_ON for it to be OK.

Comments related to this report
thomas.shaffer@LIGO.ORG - 23:47, Wednesday 21 June 2023 (70704)DetChar

I'm thinking that this lower range is related to the squeezer. I've attached a screenshot of the FDS DARM FOM where the live trace is above the reference at the same frequencies where DARM seems to be higher than normal. I followed the instructions on the Troubleshooting SQZ wiki to adjust the squeeze angle, but I wasn't able to make anything better, only worse.

I adjusted the sqz angle from 0630-0640 UTC.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 22:27, Wednesday 21 June 2023 (70703)

Our range is staying low at ~125Mpc, there seems to be extra noise from 20-60Hz. Investigating.

andrew.lundgren@LIGO.ORG - 01:56, Thursday 22 June 2023 (70707)DetChar, ISC
There's a lot more coherence of PRCL and SRCL in the ongoing lock than the previous one. The thermalization cal lines are also very high in DARM and CHARD - maybe they weren't turned on until this new lock though.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:13, Thursday 22 June 2023 (70718)CAL, DetChar
Tagging CAL regarding the turn on of CAL_AWG_LINES for this 60W lock stretch -- thanks TJ!

Tagging DetChar as well -- to note that we have 8 extra calibration lines on during this nominal low noise stretch that started at June 22 2023 05:04 UTC.

These are in because we want to characterize the thermalization of the detector's DARM loop sensing and response functions now that we're operating at 60W rather than 75/76W. I hope to get a few more of these lock acquisitions with these extra lines on, and then we'll turn them off as we had done for the start of the engineering run.

If you'd like to create a data quality flag, you can find the status of these lines "in one go" by looking at the CAL_AWG_LINES guardian state channel, 
    H1:GRD-CAL_AWG_LINES_STATE_N
The numerical value of the channel is 10.0 when the extra calibration lines are ON (the state is called LINES_ON), and 2.0 when the lines are OFF (the state is called IDLE). See CAL_AWG_LINES_StateGraph for the flow of the state graph.
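
A minimal sketch of how such a flag could be generated with gwpy, assuming the channel and state values quoted above (the flag name is hypothetical):

    from gwpy.timeseries import TimeSeries

    def awg_lines_on_flag(start, end):
        """Segments where the 8 extra calibration lines were ON."""
        state = TimeSeries.get('H1:GRD-CAL_AWG_LINES_STATE_N', start, end)
        # LINES_ON == 10.0, IDLE == 2.0
        return (state == 10.0).to_dqflag(name='H1:CAL-AWG_LINES_ON:1')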
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:40, Thursday 22 June 2023 (70720)DetChar, OpsInfo
A retrospect on the lower range:

The culprit was that the SRCLFF1 gain was not captured in SDF / Guardian during yesterday's power reduction from 75W to 60W, LHO:70648.

See LHO:70712 and LHO:70710 where Tony recovered the correct gain of 2.1.

@DetChar -- it might be worth creating a data quality flag for this:
    Observation segment start (with SRCLFF1 Gain at 1.0): 
    2023-06-22 05:04:38 UTC
               22:04:38 PDT
               1371445496 GPS
    Observation segment stop (with SRCLFF1 Gain at 1.0): 
    2023-06-22 13:53:47 UTC
               06:53:47 PDT
               1371477245 GPS
Images attached to this comment
jenne.driggers@LIGO.ORG - 11:17, Thursday 22 June 2023 (70728)

RyanC just added the SRCLFF1 gain of 2.1 to lscparams, saved, and reloaded the ISC_LOCK guardian.  So, if we need to relock it'll come back on with the correct gain.  Note though, that we expect to update this yet again later today.

thomas.shaffer@LIGO.ORG - 16:03, Thursday 22 June 2023 (70742)

Here's that SDF screenshot that I said I was going to attach. Turns out my tired brain had flipped setpoint and epics value in the tables. Doh! My fault.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 15:29, Tuesday 27 June 2023 (70892)

Adding a quick comment to Jeff's note about lines, with a bit of relevant info from ER15.

Abby Wang and Athena Baches recently analyzed lines in May 2023 data, grouping those that evolve similarly in time. They found a cluster of lines corresponding to the awg lines, but not including any other entries. This is good news; it implies that there are *not* strong narrow artifacts with very similar histories-- i.e. these lines aren't causing unexpected strong lines elsewhere, which ought to have shown up in the same cluster. (Note: it's still possible that there are weak artifacts which aren't caught by this analysis.)

The attached plots show what the time evolution looks like: each row is a line (corresponding to 11.475, 11.575, 15.175, 15.275, 24.4, and 24.5 Hz), yellow = above threshold and blue = below threshold.

Images attached to this comment
H1 CAL (DetChar, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 16:34, Wednesday 21 June 2023 - last comment - 16:50, Thursday 22 June 2023(70693)
Calibration Pushed / Updated for 60W; Systematic Error is within +/- 5% and +/- 3 deg as before at 75/76W
L. Dartez, J. Kissel

More details to come, but as of Jun 21 2023 23:30 UTC, we have updated the calibration, to reflect the new IFO with input power back at 60W and all the associated other configuration changes including but not limited to a SRCL offset of -175 [ct].
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:59, Wednesday 21 June 2023 (70699)
The calibration update was pushed based on the second -175 [ct] SRCL offset sensing function data taken in LHO:70683.

Even though there were no new measurements of the actuators taken, the N/ct actuation strength "free" parameters were also updated with, essentially, a new MCMC run on the last, most recent, old data from May 17 2023 (LHO:69684).

Here're the following "free parameter" values exported to foton:
   $ pydarm export
       searching for 'last' report...
       found report: 20230621T211522Z
       using model from report: /ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1_site.ini
       filter file: /opt/rtcds/lho/h1/chans/H1CALCS.txt

          Hc: 3.4207e+06 :: 1/Hc: 2.9234e-07
          Fcc: 439.33 Hz

          Hau:  7.5083e-08 N/ct
          Hap:  6.2353e-10 N/ct
          Hat:  9.5026e-13 N/ct

    filters (filter:bank | name:design string):
       CS_DARM_ERR:10                O4_Gain:zpk([], [], 2.9234e-07)
       CS_DARM_CFTD_ERR:10           O4_Gain:zpk([], [], 2.9234e-07)
       CS_DARM_ERR:9                O4_NoD2N:zpk([439.32644887584786], [7000], 1.0000e+00)
       CS_DARM_ANALOG_ETMX_L1:4      Npct_O4:zpk([], [], 7.5083e-08)
       CS_DARM_ANALOG_ETMX_L2:4      Npct_O4:zpk([], [], 6.2353e-10)
       CS_DARM_ANALOG_ETMX_L3:4      Npct_O4:zpk([], [], 9.5026e-13)
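
As a quick sanity check on the exported numbers (just arithmetic on the values quoted above):

    Hc = 3.4207e6   # [ct/m] sensing gain from the MCMC
    print(1 / Hc)   # 2.9234e-07, matching the O4_Gain zpk([], [], 2.9234e-07) design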

The calibration report on the MCMC fitting for free parameters, as well as the GPR fit based on the two measurements at -175 [ct] (20230621T211522Z in LHO:70683 and 20230621T191615Z in LHO:70671) is attached below for convenience, but has been archived on the LDAS cluster under 
    H1_calibration_report_20230621T211522Z.pdf
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 10:12, Thursday 22 June 2023 (70722)
For the primary metric of how the calibration's quality changed across the 75W-to-60W transition, the SRCL offset change, and the calibration push, see LHO:70705. I copy and paste that attached image here for convenience.

Also repeating Louis:
The DTT template for this measurement is stored in  
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O4/H1/Measurements/FullIFOSensingTFs/
        20230621_systematic_error_deltal_external_gds_calib_strain.xml
Images attached to this comment
louis.dartez@LIGO.ORG - 11:19, Thursday 22 June 2023 (70729)
we changed is_pro_spring to False in the pyDARM parameter model set. Commit: 353de502.
jeffrey.kissel@LIGO.ORG - 16:50, Thursday 22 June 2023 (70735)
Attached are the raw, blow-by-blow notes I took during yesterday's calibration push that highlight all the command-line commands and actions we needed to take in order to update the calibration.

Recapping here with a little more procedural clarity: 
(Any command recalled without a path can be run in any new, fresh terminal; we did not need to invoke any special conda environment thanks to the hard work done behind the scenes by the pydarm-cmd team):

    (0) If at all possible, understand what you expect to change in the calibration ahead of time. 
        If that is *limited* to something changing that can only be measured with the full IFO, 
        i.e. you expect *only* a change in the "free parameters" (overall sensing function gain, 
        DARM cavity pole frequency, or any of the three ETMX UIM, PUM, TST actuator strengths) 
        then you run through the process outlined below as we did yesterday. Other changes to the 
        DARM loop, like electronics changes or computational arrangement mean you have to do a 
        more in-depth characterization of that thing, update the DARM loop model parameter set, 
        *then* start at (1).

    (1) Measure something new about the IFO. In this case we *knew* that we expect a change in 
        the inteferometric response of the IFO because of the ring heater changes and input 
        power change, so we remeasured the sensing function; and expected only the optical gain 
        and the cavity pole to change.
        
        $ pydarm measure --run-headless bb sens pcal
        
        We, of course, should be out of observing, and the ISC_LOCK guardian should be in NLN_CAL_MEAS.
        When the measurement is complete, you can do steps (2) through (6) with the IFO *back* 
        in NOMINAL_LOW_NOISE, and you can even go back in to OBSERVING during that time.

    (2) Process that measurement, and create the folder of material that's required for that 
        processing, as though it were a part of the on-going "epoch" of measurements where 
        you expect nothing to have changed about the DARM loop other than the time-dependent 
        corrections to the free parameters. This gives you a "report" that shows the residuals 
        between the last installed model of the IFO and you current measurement compared to 
        the rest of the measurement/model residuals in the inventory for that last "epoch." 
        In this way, you can confirm or refute your expectations of what has changed.
        
        $ pydarm report
        
        which generates the folder in 
        /ligo/groups/cal/H1/reports/20230621T191615Z/

    (3) Looking at the first results, we were disappointed that occasionally the MCMC fit 
        would land on a parameter hyperspace island that had large SRC detuning spring 
        frequencies, even though the lower frequency limit of the data fed into the fitter 
        was ~80 Hz. As such, we adjusted the *default* model parameter set,
         
        /ligo/groups/cal/H1/ifo/pydarm_H1.ini
        changing the following parameter,
        Line 15    is_pro_spring = False
        and re-ran the report,
        $ pydarm report --force
        in order to re-run the MCMC. This worked so we committed pydarm_H1.ini to the 
        ifo/H1/ repo as git hash 353de502.
 
    (4) After looking through the history of measurement/model residuals, you should 
        then have an understanding what you want to *tag* as "valid" and an understanding 
        of whether you new measurement *is* infact the boundary of a new epoch. This 
        may also be the time when you *don't* like what you see, so you modify the 
        controls settings of the IFO to change it further and go back to step (1). As you 
        can see from LHO aLOGs 70671, 70677, and 70683, 
        we were doing just that.

        In the end, we had *two* measurements in "the new epoch" that we liked, and one 
        measurement in the middle -- technically its *own* epoch -- that we didn't like. 

        So, after processing all the data and making no updates to the report tags so we 
        could see the whole history of the sensing function, we tagged the reports in the 
        following way

        $ cd /ligo/groups/cal/H1/reports/
        $ pydarm ls -r                  # before tagging
            20230620T234012Z valid          # Last valid 75W sensing function data set
            20230621T191615Z                # First new 60W data set, with SRCL offset -175 [ct]
            20230621T201733Z                # Second new 60W data set, with SRCL offset -165 [ct]
            20230621T211522Z                # Third new 60W data set, with SRCL offset -175 [ct]
        $ # validate the First and Third 60W data sets, both with -175 [ct] SRCL offset
        $ touch 20230621T191615Z/tags/valid 
        $ touch 20230621T191615Z/tags/epoch-sensing
        $ touch 20230621T211522Z/tags/valid  
        $ pydarm ls -r                  # after tagging
            20230620T234012Z valid
            20230621T191615Z valid epoch-sensing
            20230621T201733Z
            20230621T211522Z valid

    (5) Then, now that these tags are set up -- and specifically the epoch-sensing tag -- 
        only these new 60W data sets are included in the history, which means only 
        those data sets are stacked and fit to GPR. Thus, this report generation run will 
        be the "final" report that generates what we end up exporting out to the calibration 
        pipeline. Importantly, even though the epoch boundary is defined by the *first* 
        20230621T191615Z measurement, the parameters that will be installed are defined by 
        the MCMC of the *latest* 20230621T211522Z measurement. This works because we're 
        assuming the IFO is the same in this entire boundary, so we should get equivalent 
        answers (within uncertainty, and modulo time-dependent correction factors) if we MCMC 
        any of the measurements in the epoch.
        
        $ pydarm report --force
        
        yields a good report, with "free parameters," foton exports, FIR filters, MCMC 
        posteriors, and GPR fits that are ready to export to the calibration pipeline.

        Also note that all of these re-runs of the report ("pydarm report --force") 
        are *over-writing* the contents of the report, so if you want to save any interim 
        products you must move them out of the way to a different location and/or different name.

    (6) We can validate what we are about to push out into the world with the dry run command,
        
        $ pydarm export
        
        where if you don't specify the report ID, then it exports the latest report. In this case, 
        the latest is 20230621T211522Z, so this simplest form of the command is what we want. 
        That spits out text like what's shown in LHO:70699. 
   
        Another option is 
        
        $ pydarm status
        
        which spits out a comparison between what's *actually* installed in the front-end against 
        the latest report (which, in this case, is what we're *about* to install).

    (7) If you're happy with what you see, then it's time to shove "the calibration" out into the world.
        (a) Presumably, the IFO is still locked, in NOMINAL_LOW_NOISE, and maybe even in OBSERVING. 
            Warn folks that you're about to take the IFO out of OBSERVING, and the DARM_FOM on the 
            wall is about to go nuts, but the IFO is fine. Try to do steps (b) through (g) as quickly 
            but accurately/completely/carefully as possible.

        (b) push EPICS records to the front end and save new foton files.
            $ pydarm export --push

        (c) open up the CAL-CS GDS-TP screen, and look at the DIFF of filter coefficients. 
            Hit the LOAD_COEFFICIENTS button if you see what you expect from the DIFF.

        (d) on the same screen, open up the SDF OVERVIEW. Review the changes and accept if you see 
            what you expect.

        Now everything's updated in the front-end, so it's time to migrate stuff out to the cluster 
        so GDS and the uncertainty pipelines get updated. 

        (e) Add an additional tag to the report which you just pushed,
        
        $ touch /ligo/groups/cal/H1/reports/20230621T211522Z/tags/exported
        $ pydarm ls -r 
            20230620T234012Z valid
            20230621T191615Z valid epoch-sensing
            20230621T211522Z exported valid
        

        (f) Archive all the reports that are a part of this wonderful new epoch, which pushes the 
            whole folder so it includes the tags. Having the "exported" tag is particularly 
            important for the GDS pipeline. 
            
            /ligo/groups/cal/H1/reports$ arx commit 20230621T191615Z
            /ligo/groups/cal/H1/reports$ arx commit 20230621T211522Z 
            

        (g) Restart the gds pipeline, which picks up the .npz of filter coefficients from the latest 
            report marked with the "exported" tag. 
            
            $ pydarm gds restart
            

            This opens up prompts from both DMT machines, dmt1 and dmt2, to say "yes" to confirm that 
            you want to restart.

            After restarting the GDS pipeline, you can check the status of the machines as well,
            
            $ pydarm gds status
            

    (8) Once you're done with the GDS pipeline restart, then you've gotta wait ~2-5 minutes for 
        the pipeline to complete its restart. To check whether the pipeline is back up and running, 
        head to "grafana" calibration monitoring page. 
        Presumably, the IFO is still in NOMINAL_LOW_NOISE, so eventually the live measurement of 
        response function systematic error should begin to reappear, hopefully even closer to 
        1.0 mag, 0.0 deg phase. While you're waiting, you can also pull up trends of 
            - the front-end computed TDCFs, see if those move closer to 1.0 (or in the case of f_cc, closer to the MCMC value)
            - the front-end computed DELTAL_EXTERNAL / PCAL systematic error transfer function, see if those move closer to 1.0
        In addition, the newest, latest *modeled* systematic error budget only gets triggered once 
        every hour, so you just have to be patient on that one, and check in later.

    (9) Once everything is settled, take the ISC_LOCK guardian to NLN_CAL_MEAS, and take a broad-band 
        PCAL injection for final, high-density frequency resolution, post-install validation a la LHO:70705
Non-image files attached to this comment
H1 ISC
daniel.sigg@LIGO.ORG - posted 11:45, Tuesday 20 June 2023 - last comment - 13:52, Thursday 22 June 2023(70610)
REFL_LF Whitening Improved

The DC readouts of the 2 LSC REFL RF detectors were ADC-noise limited above ~200 Hz. Today we improved the whitening filters to give us 10x higher gain above 30 Hz.

In detail: D1102079, R20s were changed to 499Ohm from 1.58kOhm to extend the whitening filters from 1:10 to 1:30. Channel 4 was modified in ISC-R4 (LSC-REFL_A), and channel 2 in ISC-R1 (LSC-REFL_B).
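
The numbers are self-consistent if the whitening pole frequency scales inversely with R20; that scaling is an assumption about the D1102060 topology, not something verified here:

    r20_old, r20_new = 1.58e3, 499.0  # [Ohm]
    print(10 * r20_old / r20_new)     # ~31.7 Hz pole, i.e. roughly the quoted 1:30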

Serial numbers:

Chassis  Old       New       Channel Modified
ISC-R4   S1200460  S1200450  LSC-REFL_A Channel 4
ISC-R1   S1200450  S1200452  LSC-REFL_B Channel 2

The LSC model was modified to add the average of the 2 REFL RIN channels: LSC-REFL_RIN = (LSC-REFL_A_RIN + LSC-REFL_B_RIN) / 2.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:52, Thursday 22 June 2023 (70732)CDS, FRS, ISC, SYS
Quick links as an add-on to the documentation:
Chassis assembly drawing which Daniel cites: D1102079
Actual sub-assembly board drawing on which "R20" exists: D1102060 -- see page 3.

e-Traveler information on the serial numbers of these aLIGO LSC RFPD Interface chassis (not of the board that's actually changed)
    S1200450
    S1200452

Work permits for the changes:
    WP 11272 -- electronics change
    WP 11273 -- front-end model changes

Measurement results that were possible as a result of these changes: LHO:70611.

My interpretation of verbal conversation with Daniel on 2023-06-22 about those measurement results: "Improving the electronics didn't seem to help making the measurement better, but I'm not going to revert it 'cause it doesn't matter whether it's in or not. #somewords #somewords L1 doesn't care about this. (a) This is a standard whitening chassis used for almost all ISC PDs, so we're not going to change the generic drawing, and (b) other than the OMC DCPDs [which have their own whitening design], most ISC PDs are not shot noise limited, so this change wouldn't help them."

Or something.

The above e-travelers on this particular whitening are therefore the sustained documentation about this deviation from design.
H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:30, Wednesday 14 June 2023 - last comment - 15:00, Thursday 22 June 2023(70453)
OMC DCPD balance changed, 10 minutes of cross correlation data

Brina, Sheila

The DCPD balance matrix is normally set by Jeff using the method described in 47217.  Today I wanted to try setting the matrix so that the contributions of the two sensors to the DARM loop gain are equal, because I think this will make it easier to correct for the imbalance of the PDs while doing the offline cross correlation.  I compared pcal line heights in DCPD_A and DCPD_B at 17.1 Hz (A/B = h = 1.0277) and at 410.3 Hz (A/B = 1.0362).  We chose the 17 Hz value, and calculated the matrix elements: DCPD A to the SUM = 2/(h+1) = 0.9863 and B to the SUM = 2h/(h+1) = 1.0137.  
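
A worked check of the matrix arithmetic, using the values quoted above:

    h = 1.0277                  # A/B ratio of the 17.1 Hz pcal line
    a_to_sum = 2 / (h + 1)      # 0.9863
    b_to_sum = 2 * h / (h + 1)  # 1.0137
    # this normalization preserves the total gain: a_to_sum + b_to_sum == 2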

The attached text file has commands used to copy and paste into a guardian shell to swap the DARM loop to one DCPD, and change the matrix elements.  After doing the swap we measured the DARM OLG, which is the live trace in the attached dtt template. 

no sqz start: 1370816035 (Jun 14 2023 22:13:37 UTC). lockloss: 22:23:18 UTC; we were just sitting there collecting data with no SQZ.  The matrix elements have been reset by SDF.

Images attached to this report
Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 16:40, Thursday 15 June 2023 (70500)

deleted

sheila.dwyer@LIGO.ORG - 14:31, Wednesday 21 June 2023 (70682)

Here are some plots of the cross correlation for this time.  

The first is a comparison of the pyDARM model of the OLG to the measured OLG. At 24 Hz, the model was predicting 2% less gain than the measurement, so here I've scaled the model up by 2% and used that for the estimation of the correlated noise. 

The second plot is the DCPD sum ASD, loop corrected, compared to the cross correlation.  You can see that the cross correlation is above the DCPD sum by 7% at 24 Hz, which is incorrect.  I considered whether this could be because of the imbalance of the DCPDs; I will attach a note here that explains how I attempted to handle this imbalance in estimating the cross correlation.  (The DCPD_A and B channels are recorded before multiplying by the balance matrix; the sum channel is after that matrix.)  In the end this did not make a significant difference: the cross correlation still overestimates DARM by nearly 7% at 24 Hz when I corrected for this. 

The third plot shows the cross correlation compared to an estimate of the correlated noise obtained by subtracting the calculated shot noise from the DCPD SUM ASD in quadrature; this mostly agrees with the cross correlation except at low frequencies. 
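
For clarity, the quadrature subtraction used for the third plot amounts to the following (a sketch; the inputs are ASDs on a common frequency vector):

    import numpy as np

    def correlated_noise_asd(dcpd_sum_asd, shot_noise_asd):
        """Estimate the correlated-noise ASD by removing the modeled shot noise."""
        return np.sqrt(np.clip(dcpd_sum_asd**2 - shot_noise_asd**2, 0, None))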

These plots were made using the code at https://git.ligo.org/sheila-dwyer/cross-corelation, commit 3c60740b. I will attempt to make a comparison of this code with Craig's cross correlation code to see if this problem is still present there. 

Images attached to this comment
sheila.dwyer@LIGO.ORG - 15:00, Thursday 22 June 2023 (70738)

Here is a note describing how I corrected for the DCPD imbalance.  Once the matrix was reset as above, things become simple.  I will add a diagram to this note if I have time.

Daniel raised the point that perhaps a phase difference between the two PDs could explain a discrepancy at low frequency; in the correction I did, I assumed that the two paths were only imbalanced by a scalar gain. The attached png shows a transfer function between the two DCPD channels, taken at the time of one of yesterday's broadband pcal injections. The frequency dependence of this at first glance doesn't seem right to explain what we see, i.e., the error in the cross correlation doesn't have a wiggle between 20-30 Hz, although the error does seem to happen around the frequency of the cross correlation problem.

 

Images attached to this comment
Non-image files attached to this comment
H1 ISC
daniel.brown@LIGO.ORG - posted 18:07, Thursday 10 November 2022 - last comment - 14:26, Thursday 22 June 2023(65719)
OMC throughput vs OM2 TSAMs

Evan, Craig, Dan

Yesterday during the 25W lock at DC readout we were looking into the transmission of violin and calibration lines through the OMC whilst changing the OM2 TSAMS actuator. Assuming the calibrations of AS_C and the DCPDs are good, we can compare the ratio of the two to try to infer how much the DARM mode couples through the OMC.

Shown in the attached plot is this ratio at the 17 Hz cal line and the violin modes around 500 Hz. Nominally OM2 should have about a 1.7 m RoC, which is when TSAMs is about 55C. At this temperature the throughput is about 70%. Increasing the TSAMs temperature, the throughput reduced. Seeing this, we switched off TSAMs, let it cool down to 25C, and then left it off. During this cooldown the throughput kept increasing, peaking at better than 90% @ 25C.

The purple trace shown is the same measurement done when we reached 50W today with the TSAM @ 24C. The throughput appears to have significantly reduced again, down to 65%. We do not see as many violin modes now as most have been damped.

Overall these results seem confusing. They suggest quite extreme changes in the mode matching of the DARM mode into the OMC. I'm skeptical that going from 25->50W could really reduce the mode matching by this amount. Some more analysis of this will come tomorrow. We also need to look at how these ratios change in the new OMC REFL channel.

We are suspicious that there might still be some alignment issues too. We tried moving the OMC QPD offsets just now, but we lost lock shortly after picking a new one.

Images attached to this report
Comments related to this report
daniel.brown@LIGO.ORG - 13:06, Saturday 12 November 2022 (65760)

Looking at the DCPD/AS_C ratio again at 50W when changing the OM2 TSAMs last night. Also showing the ratio before the vent for the 17 Hz line and one of the 500 Hz violin modes.

It looks like putting OM2 back to its nominal RoC (~50C) returns the OMC line throughput to what it was pre-vent (4/9/2022). Having the TSAMs at 24C improves this ratio, up to about 70%.

Images attached to this comment
evan.hall@LIGO.ORG - 08:42, Monday 21 November 2022 (65911)

I looked again at the 25 W data and separately extracted the changes in AS C NSUM and DCPD sum, in addition to their ratio, at the PCAL drive frequency of 17.1 Hz. I used 0.86 A/W to convert the DCPD channel to milliwatts. The arm power and DARM offset during this lock is lower than the nominal high sensitivity, which is why the optical gain is only about 1 mW/pm.

GPS time    Temp  ASC/PCAL  DCPD/PCAL  DCPD/ASC
(s)         (C)   (mW/pm)   (mW/pm)    (mW/mW)
-----------------------------------------------
1352064408  65.0  1.75      1.08       0.62
1352064790  55.0  1.59      1.08       0.68
1352065329  45.0  1.42      1.08       0.76
1352066298  35.0  1.29      1.08       0.84
1352071194  25.0  1.16      1.07       0.93

The DCPD dc value remains servoed to 5.0 mA during this time (about 5.8 mW). I am not sure these data are consistent with the simple picture of a change in the OMC throughput η. We would expect to find that the ratio ASC/PCAL scales like 1/sqrt(η), while the ratio DCPD/PCAL scales like sqrt(η), thus making the overall ratio DCPD/ASC scale like η.
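
To make that concrete, here is a quick numerical check of the simple throughput picture against the table values (a sketch, not part of the original analysis):

    import numpy as np
    eta       = np.array([0.62, 0.68, 0.76, 0.84, 0.93])  # DCPD/ASC, expected ~ eta
    asc_pcal  = np.array([1.75, 1.59, 1.42, 1.29, 1.16])  # expected ~ 1/sqrt(eta)
    dcpd_pcal = np.array([1.08, 1.08, 1.08, 1.08, 1.07])  # expected ~ sqrt(eta)
    # under the simple picture this ratio would be constant across the sweep;
    # instead it drifts by ~20%, echoing the inconsistency noted above
    print(dcpd_pcal / np.sqrt(eta))  # [1.37, 1.31, 1.24, 1.18, 1.11]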

DTT template attached. The coherence with the OMC reflection photodiodes was iffy and I suggest this test should be repeated with more PCAL strength in order to get reliable values for the reflection measurement.

Addendum: I also looked at the subsequent 50 W lock (20 mA dc current) that Dan mentions in his comment. The result is qualitatively the same: there is no meaningful change in the optical gain DCPD/PCAL, and all the change occurs in ASC/PCAL.

Non-image files attached to this comment
evan.hall@LIGO.ORG - 14:26, Thursday 22 June 2023 (70733)

Attaching a plot of the heating and cooling of OM2. After a full throw of the heater, it will take about 2 hours to come within 2 °C of its steady-state temperature.

Non-image files attached to this comment