H1 SEI (SEI)
anthony.sanchez@LIGO.ORG - posted 18:59, Monday 11 September 2023 (72821)
Fixed DNS issue for h1cdsh8

I stumbled across this issue when working on a caQTdm based FOM screen for SEI. The MEDM screen seemed to work just fine, but the caQTdm screen was broken.
Looking into the problem further, I found that the servers the rest of the screen was pulling its data from were listed as:
FQDN:portnumber.
But the HAM8 Server was listed as:
FQDN:arpa.portNumberGibberish
So I scoured the linked files for server names and found nothing of the like in my caQTdm files.  

I spoke to Erik and showed him what I had found.
We opened the DNS settings and noticed the error.
Updated the DNS record for h1cdsh8 and it now resolves correctly. Committed the changes to SVN and updated the other servers.
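
For reference, a minimal Python sketch (a generic illustration only, not the tool we actually used) of checking a host's forward and reverse DNS records -- a stale or mismatched record tends to show up as a disagreement between the two, or as an *.in-addr.arpa-style name like the one above:

    import socket

    host = 'h1cdsh8'                                      # host from this entry
    ip = socket.gethostbyname(host)                       # forward (A record) lookup
    name, aliases, addresses = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
    print(f"{host} -> {ip} -> {name}")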
 

H1 CAL (CDS)
jeffrey.kissel@LIGO.ORG - posted 17:33, Monday 11 September 2023 (72820)
Fixed Bug in CAL_CS_MASTER regarding KAPPA_UIM Gating Algorithm's Uncertainty Input
J. Kissel

After identifying a bug in the
    /opt/rtcds/userapps/release/cal/common/models/
        CAL_CS_MASTER.mdl
library that relates to the CAL-CS computation of KAPPA_UIM and its gating algorithm's uncertainty input (see LHO:72819), I've rectified the bug and committed the model change to the userapps repo as of rev 26218.

Attached is the "after" screenshot, which you can compare to LHO:72819's "before" screenshot.

Work permit to recompile the h1calcs model and install it ASAP is forthcoming.
Images attached to this report
LHO VE (DetChar, VE)
gerardo.moreno@LIGO.ORG - posted 16:30, Monday 11 September 2023 (72818)
LOTO on Kobelco Compressor

(Heath E., Gerardo M.)
Service tech Heath was on-site to diagnose/repair the leak on the Kobelco compressor. Unfortunately, to diagnose the system a couple of windows had to be lifted; the gaskets for those "windows" can't be reused, and Heath did not have replacement gaskets with him. The vendor is now trying to secure a "kit" so that an annual service can be done on the unit instead. Meanwhile, the compressor is under LOTO and will be down until repairs are completed, hopefully soon.

While the diagnosis was being done, the compressor was powered on and ran for some time this morning.

The compressor ran for 7 minutes, starting at 17:12 UTC and ending at 17:19 UTC. The drying tower was isolated from the compressor.

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:29, Monday 11 September 2023 - last comment - 14:45, Monday 18 September 2023(72812)
Historical Systematic Error Investigations: Why MICH FF Spoiling UIM Calibration Line froze Optical Gain and Cavity Pole GDS TDCFs from 2023-07-20 to 2023-08-07
J. Kissel

I'm in a rabbit hole, and digging my way out by shaving yaks. The take-away, if you find this aLOG TL;DR: this is an expansion of the understanding of one part of the multi-layer problem described in LHO:72622.

I want to pick up where I left off in modeling the detector calibration's response to thermalization except using the response function, (1+G)/C, instead of just the sensing function, C (LHO:70150). 

I need to do this for when 
    (a) we had thermalization lines ON during times of
    (b) PSL input power at 75W (2023-04-14 to 2023-06-21) and
    (c) PSL input power at 60W (2023-06-21 to now).

"Picking up where I left off" means using the response function as my metric of thermalization instead of the sensing function.

However, the measurement of the sensing function w.r.t. its model, C_meas / C_model, is made from the ratio of measured transfer functions (DARM_IN1/PCAL) * (DARMEXC/DARMIN2), where only the calibration of PCAL matters. The measured response function w.r.t. its model, R_meas / R_model, on the other hand, is ''simply'' made from the transfer function ([best calibrated product])/PCAL, where the [best calibrated product] can be whatever you like, as long as you understand the systematic error and/or extra steps you need to account for before displaying what you really want.

In most cases, the low-latency GDS pipeline product, H1:GDS-CALIB_STRAIN, is the [best calibrated product], with the least amount of systematic error in it. It corrects for the flaws in the front-end (super-Nyquist features, computational delays, etc.) and it corrects for ''known'' time dependence based on calibration-line informed, time-dependent correction factors or TDCFs (neither of which the real-time front-end product, CAL-DELTAL_EXTERNAL_DQ, does). So I want to start there, using the transfer function H1:GDS-CALIB_STRAIN / H1:CAL-DELTAL_REF_PCAL_DQ for my ([best calibrated product])/PCAL transfer function measurement.
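(For reference, a minimal, generic sketch -- the function name is hypothetical and this is not the calibration pipeline's actual code -- of how such a measured transfer function is estimated: a Welch-averaged cross-spectral density divided by the power spectral density of the reference channel.)

    import numpy as np
    from scipy.signal import csd, welch

    def estimate_tf(x, y, fs, fftlength=64, overlap=0.5):
        """Estimate the transfer function from x to y (e.g. PCAL to GDS-CALIB_STRAIN),
        both sampled at fs [Hz], as CSD(x, y) / PSD(x)."""
        nperseg = int(fftlength * fs)
        noverlap = int(overlap * nperseg)
        f, Pxy = csd(x, y, fs=fs, nperseg=nperseg, noverlap=noverlap)
        _, Pxx = welch(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        return f, Pxy / Pxx   # complex-valued transfer function estimate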

HOWEVER, over the time periods when we had thermalization lines on, H1:GDS-CALIB_STRAIN had two major systematic errors of its own that were *not* the thermalization. In short, those errors were:
    (1) between 2023-04-26 and 2023-08-07, we neglected to include the model of the ETMX ESD driver's 3.2 kHz pole (see LHO:72043) and
    (2) between 2023-07-20 and 2023-08-03, we installed a buggy MICH FF filter (LHO:71790, LHO:71937, and LHO:71946) that created excess noise as a spectral feature which polluted the 15.1 Hz, SUS-driven calibration line that's used to inform \kappa_UIM -- the time dependence of the relative actuation strength for the ETMX UIM stage. The front-end demodulates that frequency with a demod called SUS_LINE1, creating an estimate of the magnitude, phase, coherence, and uncertainty of that SUS line w.r.t. DARM_ERR.

When did we have thermalization lines on for 60W PSL input? Oh, y'know, from 2023-07-25 to 2023-08-09, exactly at the height of both of these errors. #facepalm
So -- I need to understand these systematic errors well in order to accurately remove them prior to my thermalization investigation.

Joe covers both of these flavors of error in LHO:72622.

However, after trying to digest the latter problem, (2), and his aLOG, I didn't understand why a spoiled \kappa_U alone had such impact -- since we know that the UIM actuation strength has very little impact on the response function. 

INDEED, (2) is even worse than "we're not correcting for the change in UIM actuation strength" -- because 
    (3) Though the GDS pipeline (which finishes the calibration to form H1:GDS-CALIB_STRAIN) computes its own TDCFs from the calibration lines, GDS gates the value of its TDCFs with the front-end- (CALCS-) computed uncertainty. So, in that way, the GDS TDCFs are still influenced by the front-end (CALCS) computation of TDCFs.

So -- let's walk through that for a second.
The CALCS-computed uncertainty for each TDCF is based on the coherence between the calibration lines and DARM_ERR -- but in a crude, lazy way that we thought would be good enough in 2018 -- see G1801594, page 13. I've captured a current screenshot (First Image Attachment) of the present-day simulink model to confirm the algorithm is still the same as it was prior to O3. 

In short, the uncertainty for the actuator strengths, \kappa_U, \kappa_P, and \kappa_T, is created by simply taking the larger of the two calibration line transfer functions' uncertainties that go into computing that TDCF -- SUS_LINE[1,2,3] or PCAL_LINE1. 

HOWEVER -- because the optical gain and cavity pole, \kappa_C and f_CC, calculation depends on subtracting out the live DARM actuator (see the appearance of "A(f,t)" in the definition of "S(f,t)" in Eq. 17 from ), their uncertainty is crafted from the largest of the \kappa_U, \kappa_P, and \kappa_T, AND PCAL_LINE2 uncertainties. It's the same uncertainty for both \kappa_C and f_CC, since they're both derived from the magnitude and phase of the same PCAL_LINE2. 
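A minimal sketch of that "greater of" propagation, as described above (my paraphrase of the algorithm; the function name is hypothetical and this is not the actual front-end code):

    def tdcf_gate_uncertainties(unc_sus1, unc_sus2, unc_sus3, unc_pcal1, unc_pcal2):
        """Gating uncertainties for the TDCFs from the raw calibration-line uncertainties."""
        unc_kappa_U = max(unc_sus1, unc_pcal1)   # UIM: SUS_LINE1 vs PCAL_LINE1
        unc_kappa_P = max(unc_sus2, unc_pcal1)   # PUM: SUS_LINE2 vs PCAL_LINE1
        unc_kappa_T = max(unc_sus3, unc_pcal1)   # TST: SUS_LINE3 vs PCAL_LINE1
        # \kappa_C and f_CC depend on subtracting out the live DARM actuator, so their
        # shared uncertainty is the largest of all three actuator uncertainties and PCAL_LINE2
        unc_kappa_C = unc_f_CC = max(unc_kappa_U, unc_kappa_P, unc_kappa_T, unc_pcal2)
        return unc_kappa_U, unc_kappa_P, unc_kappa_T, unc_kappa_C, unc_f_CC

    # Example: a spoiled SUS_LINE1 (UIM) line blows out the \kappa_C / f_CC uncertainty too
    print(tdcf_gate_uncertainties(0.30, 0.002, 0.002, 0.001, 0.001))
    # -> (0.30, 0.002, 0.002, 0.30, 0.30)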

That means the large SUS_LINE1 >> \kappa_U uncertainty propagates through this "greatest of" algorithm, and also blows out the \kappa_C and f_CC uncertainty as well -- which triggered the GDS pipeline to gate its 2023-07-20 TDCF values for \kappa_U, \kappa_C, and f_CC from 2023-07-20 to 2023-08-07.

THAT means that --for better or worse-- when \kappa_C and f_CC are influenced by thermalization for the first ~3 hours after power up, GDS did not correct for it. Thus, a third systematic error in GDS, (3). 

*sigh*

OK, let's look at some plots.

My Second Image Attachment shows a trend of all the front-end computed uncertainties involved around 2023-07-20 when the bad MICH FF is installed. 
    :: The first row and last row show that the UIM uncertainty -- and the CAV_POLE uncertainty (again, used for both \kappa_C and f_CC) -- are blown out above the gating threshold during this period.

    :: Remember, GDS gates its TDCFs with a threshold of uncertainty = 0.005 (i.e. 0.5%), whereas the front-end gates with an uncertainty of 0.05 (i.e. 5%).

My First PDF attachment shows in much clearer detail the *values* of both the CALCS and GDS TDCFs during a thermalization time that Joe chose in LHO:72622, 2023-07-26 01:10 UTC.

My Second PDF attachment breaks down Joe's LHO:72622 Second Image attachment into its components:
    :: ORANGE shows the correction to the "reference time" response function with the frozen, gated, GDS-computed TDCFs, by the ratio of the "nominal" response function (as computed from the 20230621T211522Z report's pydarm_H1.ini) to that same response function, but with the optical gain, cavity pole, and actuator strengths updated with the frozen GDS TDCF values,
        \kappa_C = 0.97828    (frozen at the low, thermalized OM2 HOT value, reflecting the unaccounted-for change just one day prior on 2023-07-19; LHO:71484)
        f_CC = 444.4 Hz       (frozen)
        \kappa_U = 1.05196    (frozen at a large, noisy value, right after the MICH FF filter is installed)
        \kappa_P = 0.99952    (not frozen)
        \kappa_T = 1.03184    (not frozen, large at 3% because of the TST actuation strength drift)

    :: BLUE shows the correction to the "reference time" response function with the not-frozen, non-gated, CALCS-computed TDCFs, by the ratio of the "nominal" 20230621T211522Z response function to that same response function updated with the CALCS values,
        \kappa_C = 0.95820    (even lower than OM2 HOT value because this time is during thermalization)
        f_CC = 448.9 Hz       (higher because IFO mode matching and loss are better before the IFO thermalizes)
        \kappa_U = 0.98392    (arguably more accurate value, closer to the mean of a very noisy value)
        \kappa_P = 0.99763    (the same as GDS, to within noise or uncertainty)
        \kappa_T = 1.03073    (the same as GDS, to within noise or uncertainty)

    :: GREEN is a ratio of BLUE / ORANGE -- and thus a repeat of what Joe shows in his LHO:72622 Second Image attachment.

Joe was trying to motivate why (1) the missing ESD driver 3.2 kHz pole is a separable problem from (2) and (3), the bad MICH FF filter spoiling the uncertainty in \kappa_U, \kappa_C, and f_CC, so he glossed over this issue. Further, what he plotted in his second attachment, akin to my GREEN curve, is the *ratio* between corrections, not the actual corrections themselves (ORANGE and BLUE), so it kind of hid this difference. 
Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:21, Monday 11 September 2023 (72815)
This plot was created by create_no3p2kHz_syserror.py, and the plots posted correspond to the script as it was when the Calibration/ifo project git hash was 53543b80.
jeffrey.kissel@LIGO.ORG - 17:21, Monday 11 September 2023 (72819)
While shaving *this* yak, I found another one -- The front-end CALCS uncertainty for the \kappa_U gating algorithm incorrectly consumes \kappa_T's uncertainty.

The attached image highlights the relevant part of the 
    /opt/rtcds/userapps/release/cal/common/models/
        CAL_CS_MASTER.mdl
library part, at the CS > TDEP level.

The red ovals show to what I refer. The silver KAPPA_UIM, KAPPA_PUM, and KAPPA_TST blocks -- which are each instantiations of the ACTUATOR_KAPPA block within the CAL_LINE_MONITOR_MASTER.mdl library -- each receive the uncertainty output from the above mentioned crude, lazy algorithm (see first image from above LHO:72812) via tag. The KAPPA_UIM block incorrectly receives the KAPPA_TST_UNC tag.

The proof is seen in the first row of other image attachment from above LHO:72812 -- see that while the raw calibration line uncertainty (H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY) is high, the resulting "greater of the two" uncertainty (H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT) remains low, and matches the third row's uncertainty for \kappa_T (H1:CAL-CS_TDEP_KAPPA_TST_GATE_UNC_INPUT), the greater of H1:CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY and H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY.

You can see that this was the case even back in 2018 on page 14 of G1801594, so this has been wrong since before O3.

*sigh*

This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on ... 
I don't know gstlal-calibration well enough to confirm what channels are used. Clearly, from the 2023-07-26 01:10 UTC trend of GDS TDCFs, they're gated. But, is that because H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY is used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

As such, I can't make a statement of how impactful this bug has been.

We should fix this, though.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:09, Tuesday 12 September 2023 (72832)
The UIM uncertainty bug has now been fixed and installed at H1 as of 2023-09-12 17:00 UTC. See LHO:72820 and LHO:72830, respectively.
jeffrey.kissel@LIGO.ORG - 14:45, Monday 18 September 2023 (72944)
J. Kissel, M. Wade

Following up on this:
    This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on [... are channels like] H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

I confirm from Maddie that 
    - The channels that are used to inform the GDS pipeline's gating algorithm are defined in the gstlal configuration file, which lives in the Calibration namespace of the git.ligo.org repo, under 
    git.ligo.org/Calibration/ifo/H1/gstlal_compute_strain_C00_H1.ini
where this config file was last changed on May 02 2023 with git hash 89d9917d.

    - In that file, the following config variables are defined (starting around Line 220 as of git hash 89d9917d):
        #######################################
        # Coherence Uncertainty Channel Names #
        #######################################
        CohUncSusLine1Channel: CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        CohUncSusLine2Channel: CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        CohUncSusLine3Channel: CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        CohUncPcalyLine1Channel: CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY
        CohUncPcalyLine2Channel: CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY
        CohUncPcalyLine4Channel: CAL-CS_TDEP_PCAL_LINE4_UNCERTAINTY
        CohUncDARMLine1Channel: CAL-CS_TDEP_DARM_LINE1_UNCERTAINTY
      which are compared against a threshold, also defined in that file on Line 114,
        CoherenceUncThreshold: 0.01

    Note: the threshold is 0.01 i.e. 1% -- NOT 0.005 or 0.5% as described in the body of the main aLOG.

    - Then, inside the gstlal-calibration code proper, 
        git.ligo.org/Calibration/gstlal-calibration/bin/gstlal_compute_strain
    whose last change (as of this aLOG) has git hash 5a4d64ce, there are lines of code buried deep that create the gating, around lines 
        :: L1366 for \kappa_T,
        :: L1425 for \kappa_P, 
        :: L1473 for \kappa_U
        :: L1544 for \kappa_C
        :: L1573 for f_CC

    - From these lines one can discern what's going on, if you believe that calibration_parts.mkgate is a wrapper around gstlal's pipeparts.filters class, with method "gate" -- which links you to source code "gstlal/gst/lal/gstlal_gate.c" which actually lives under
        git.ligo.org/lscsoft/gstlal/gst/lal/gstlal_gate.c

    - I *don't* believe it (because I don't believe in my skills in following the gstlal rabbit hole), so I asked Maddie. She says: 
    The code uses the uncertainty channels (as pasted below) along with a threshold specified in the config (currently 0.01, so 1% uncertainty) and replaces any computed TDCF value for which the specified uncertainty on the corresponding lines is not met with a "gap". These gaps get filled in by the last non-gap value, so the end result is that the TDCF will remain at the "last good value" until a new "good" value is computable, where "good" is defined as a value computed during a time where the specified uncertainty channels are within the required threshold.
    The code is essentially doing sequential gating [per computation cycle] which will have the same result as the front-end's "larger of the two" method.  The "gaps" that are inserted by the first gate are simply passed along by future gates, so future gates only add new gaps for any times when the uncertainty channel on that gate indicates the threshold is surpassed.  The end result [at the end of computation cycle] is a union of all of the uncertainty channel thresholds.

    - Finally, she confirms that 
        :: \kappa_U uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        :: \kappa_P uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        :: \kappa_T uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        :: and both \kappa_C and f_CC use
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY

So, repeating all of this back to you to make sure we all understand: If any one of the channels is above the GDS pipeline's threshold of 1% (not 0.5% as described in the body of the main aLOG), then the TDCF will be gated, and "frozen" at the last time *all* of these channels were below 1%.

This corroborates and confirms the hypothesis that the GDS pipeline, although slightly different algorithmically from the front-end, would gate all three TDCFs -- \kappa_U, \kappa_C, and f_CC -- if only the UIM SUS line, CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY was above threshold -- as it was from 2023-07-20 to 2023-08-07.
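
A minimal sketch of that hold-last-good-value gating (my paraphrase of Maddie's description above; the function name is hypothetical and this is not actual gstlal code), where a TDCF sample survives only if *all* of its associated uncertainty channels are below threshold:

    import numpy as np

    def gate_tdcf(tdcf, uncertainty_channels, threshold=0.01):
        """tdcf: 1D array of computed TDCF samples.
        uncertainty_channels: list of 1D arrays, one per uncertainty channel for this TDCF."""
        gated = np.array(tdcf, dtype=float)
        good = np.all(np.vstack(uncertainty_channels) < threshold, axis=0)  # union of all gates
        last_good = gated[0]            # assume the first sample is usable
        for i in range(len(gated)):
            if good[i]:
                last_good = gated[i]    # update the "last good value"
            else:
                gated[i] = last_good    # gap: hold the last good value
        return gated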
H1 General
anthony.sanchez@LIGO.ORG - posted 16:07, Monday 11 September 2023 (72814)
Monday Ops Eve Shift Start

TITLE: 09/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:
IFO is behaving itself very nicely.
Squeezer is working as it should.

LHO General
ryan.short@LIGO.ORG - posted 16:00, Monday 11 September 2023 (72801)
Ops Day Shift Summary

TITLE: 09/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 146 Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: One lockloss this morning, but H1 came back up quickly and was observing for the rest of the shift. FMCS channels are back on the alarm handler.

LOG:

Start Time | System | Name | Location | Laser_Haz | Task | End Time
15:11 | FAC | Karen | Opt Lab | - | Technical cleaning | 15:45
15:18 | FAC | Kim | H2 | - | Technical cleaning | 15:44
16:07 | FAC | Karen | MY | - | Technical cleaning | 17:27
17:21 | FAC | Kim | MX | - | Technical cleaning | 18:10
17:50 | FAC | Ken | H2 | - | Electrical work | 19:03
18:05 | CDS | Patrick, Jonathan | CR | - | Working on FMCS IOC | 18:35
18:43 | FAC | Bubba, contractor | FCES | - | Check on AHU (outside) | 19:12
20:04 | FAC | Tyler | Mech | - | Check AHU | 20:07
20:12 | SUS | Jason | Opt Lab | Local | Prep oplev equipment | 20:41
20:23 | SEI | Jim | MX, MY | - | Looking for equipment | 20:51
21:36 | FAC | Tyler | MX, MY | - | Safety check of 3IFO equipment | 23:36
22:06 | VAC | Betsy, Janos | CER | - | Grab clean equipment | 22:08
H1 TCS
thomas.shaffer@LIGO.ORG - posted 15:57, Monday 11 September 2023 (72813)
TCS Chiller Water Level Top-Off - Biweekly

FAMIS26156

I added no water to either chiller as levels were good. Filters looked clean and clear. There was no water in the leak collection unit.

H1 SQZ (ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 14:06, Monday 11 September 2023 (72811)
SQZ Alignment States Removed from ALIGN_IFO

Since we haven't used the ALIGN_IFO guardian to run automated alignment of the SQZ system in some time, I've commented out the SQZ alignment states from the guardian, along with their corresponding edges.

These states are only commented in the code, not deleted, so automated SQZ alignment can be revisited in the future if desired. Changes are committed to svn, but ALIGN_IFO will need to be loaded when we're next out of observing.

LHO VE
david.barker@LIGO.ORG - posted 13:04, Monday 11 September 2023 (72810)
Mon CP1 Fill

Mon Sep 11 10:09:49 2023 INFO: Fill completed in 9min 45secs

Travis confirmed a good overfill curbside

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 12:15, Monday 11 September 2023 (72809)
EPICS FMCS readout back online
Chris (Apollo), Jonathan, Patrick

As part of the upgrade to the FMCS control system last week by Apollo, the subnet masks of the BACnet devices were changed from 255.255.255.0 to 255.255.0.0. The subnet mask of the computer running the BACnet to EPICS IOC has been 255.255.255.0. This needed to be changed to 255.255.0.0 to match the change to the BACnet devices. Jonathan and I did so this morning and this allowed the IOC to connect to the BACnet devices again. There are two more BACnet devices that Apollo still needs to change to 255.255.0.0. One is associated with the filter cavity station air handlers, the other to the LExC building. We do not translate any of the BACnet channels from the LExC building to EPICS, but we do for the filter cavity.
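
For the record, one way to see why the mask mismatch can matter -- a minimal sketch with hypothetical addresses (not the real ones): with a /24 mask, the IOC host doesn't consider a BACnet device on a different /24 of the same /16 to be on its local network, so they can't talk directly until the masks agree.

    import ipaddress

    ioc_old = ipaddress.ip_interface('10.20.1.15/255.255.255.0')   # hypothetical IOC address, old mask
    ioc_new = ipaddress.ip_interface('10.20.1.15/255.255.0.0')     # same address, new mask
    bacnet  = ipaddress.ip_address('10.20.3.42')                   # hypothetical BACnet device

    print(bacnet in ioc_old.network)   # False -- device looks off-subnet with the /24 mask
    print(bacnet in ioc_new.network)   # True  -- on the local network again with the /16 mask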

The IOC was restarted a few times during this troubleshooting with permission from the operator.
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 08:43, Monday 11 September 2023 - last comment - 09:55, Monday 11 September 2023(72800)
Lockloss @ 15:35 UTC

Lockloss @ 15:35 UTC - fast, no obvious cause.

LSC DARM loop shows first sign of a kick before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 09:55, Monday 11 September 2023 (72804)

Back to observing as of 16:51 UTC.

H1 SQZ (OpsInfo)
oli.patane@LIGO.ORG - posted 06:22, Monday 11 September 2023 - last comment - 08:52, Monday 11 September 2023(72798)
Trying to get to Observing with SQZ ISS Issues

09/11 07:05UTC Vicky and Tony made changes to SQZ_MANAGER to take it to NO_SQUEEZING to allow us to be in Observing overnight without squeezing. (SDF Diffs1)

07:10 Vicky further made some edits (I already forgot what the difference was for these, but it was in the same vein) (SDF Diffs2)

07:15UTC While figuring out the workaround for the squeezer issues, we noticed that the ALIGN_IFO and INIT_ALIGN nodes were also keeping us out of Observing since they were reading as being NOT_OK according to the IFO guardian, even though they were in their nominal states. INIT_ALIGN for example is nominally in IDLE, and had been in IDLE since the previous lockloss at 05:53UTC (72797). Tony attempted to get IFO to show INIT_ALIGN as OK by requesting INIT_ALIGN to DOWN and then to IDLE, but this caused the detector to lose lock.

07:48UTC Running INITIAL_ALIGNMENT because the alignment was a mess (stuck going through ACQUIRE_PRMI 3 times) as well as hoping that it would clear the issues with INIT_ALIGN, but we lost lock at PREP_FOR_PRX

07:58 restored optics to settings from time 1378428918 (~3hrs previous, during lock)
- restarting INITIAL_ALIGNMENT again

08:06 INITIAL_ALIGNMENT couldn't find green arms so we took the detector out of INITIAL_ALIGNMENT and into GREEN_ARMS_MANUAL and found green by hand for both arms
08:14 green found, we requested it to DOWN and then into INITIAL_ALIGNMENT
08:39 Finished INITIAL_ALIGNMENT, requested NOMINAL_LOW_NOISE

09:20 Reached NOMINAL_LOW_NOISE
- ISC_LOCK is NOT_OK due to SQZ_MANAGER having been changed
- INIT_ALIGN is again listed as NOT_OK even though it is in its nominal state
- ALIGN_IFO listed as NOT_OK and is currently trying to go from SET_SUS_FOR_FULL_FPMI -> SET_SUS_FOR_FULL_FPMI (attachment3)
    - we later figured out that INIT_ALIGN and ALIGN_IFO are somehow connected to the squeezer and squeezer ISS and so the ISS issue somehow causes them to be NOT_OK

09:41 Trying to revert all changes that were made tonight to see if that will get rid of the NOT_OKs so we can get the detector into Observing. We were hoping that we would then be able to bypass the ISS failing
- Tony set SQZ_MANAGER's nominal to NO_SQUEEZING to see if that would help
    - it did not (nominal was changed back to FREQ_DEP_SQZ)

09:42 - 10:20 Various other methods tried and thought out (sdf3)
- ex) taking SQZ_MANAGER to DOWN and then back to FDS - didn't work

10:27UTC - Finally got into Observing! Did this by following alog 70050 to try to get around the squeezer ISS issue the same way Tony had been doing earlier that evening:
- Since the script settings were what they were supposed to be, Tony just used the alog as a reference for taking SQZ_MANAGER to NO_SQUEEZING, taking SQZ_OPO_LR to LOCKED_CLF_DUAL_NO_ISS, then to LOCKED_CLF_DUAL to help the ISS pump relock
- ISS pump relocked, ALIGN_IFO, INIT_ALIGN, and ISC_LOCK changed to OK, and we accepted sdf diffs(sdf4) and got into Observing

After that, the ISS has unlocked and relocked multiple times, and the currently accepted SDF diffs are for the ISS being ON. We know that this isn't ideal because of how often the ISS is losing lock and subsequently taking us out of Observing, but after Vicky and Tony troubleshot this issue for 6 hours, with me then working with Tony for a further 3.5 hours, we felt that there was no more we would be able to do on a weekend night at 3:30am. Although the ISS will keep losing lock and taking us out of Observing, SQZ_MANAGER will eventually get the ISS back up, putting us back into Observing, and this way we could at least get some amount of Observing time in the next few hours until more people can come in and fix the issue when the workday starts.

Things to note/tldr:
- SQZ_MANAGER's nominal state is back to being FREQ_DEP_SQZ, so that doesn't need to be changed
- SQZ_MANAGER needs to be taken out of IFO ignore list
- SQZ ISS needs to be fixed (obviously)

Thank you to Tony for staying for 3.5 hours past his shift end and Vicky staying up until the early hours to help troubleshoot!!

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 08:52, Monday 11 September 2023 (72803)OpsInfo

SQZ_MANAGER has been removed from the exclude_nodes list and the IFO top node was loaded at 15:37 UTC.

H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 17:00, Saturday 09 September 2023 - last comment - 11:19, Monday 11 September 2023(72778)
HVAC couples at LVEA and EX but not EY: update on partial shutdown tests

Lance, Genevieve, Robert

Recently, we shut down specific components of the HVAC system in order to further understand the loss of about 10 Mpc to the HVAC system (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308 ). We noted that shutdown of the EX water pump had shown that the 52 Hz DARM peak is produced by the chilled water pump at EX.  Based on coupling studies during commissioning time yesterday, the coupling of the water pump can be predicted from shaking injections in the area around the EX cryo-baffle, supporting the hypothesis that the water pump couples at the undamped cryo-baffle (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72769 ). Here we report on other results of the shutdown tests that we have been able to do so far.

CS Fans SF1, 2, 3, 4, 5, and 6 cost roughly 6 Mpc – coupling via input jitter noise and unknown coupling.

Figure 1 shows that the range increased by about 6Mpc when only the CS turbines were shut down; no chillers or chilled water pumps were shut down. Figure 2, a comparison of DARM spectra before, during, and after the fan-only shutdown, shows that there were two major differences. First, a decrease in peaks associated with input jitter noise, particularly the 120 Hz peak. Second, a broad band reduction in noise between about 20 and 80 Hz. This is not consistent with input jitter noise and represents an unknown noise source that we haven’t found yet.

There is a third difference that could be coincidence. The 9.8 Hz ITM bounce modes are higher in the before and after of Figure 2. I was tempted to wonder if the broad band noise was upconversion from the 9.8 Hz peak. We also have harmonics of roughly 10 Hz in the spectrum every so often. I compared BLRMS of 8.5-10 Hz to BLRMS of 39-50 Hz but didn't see any obvious correlation. But I'm not sure this eliminates the possibility.

120 Hz peak in DARM due to periscope resonance matching new 120 Hz peak from HVAC, possibly due to a new leak in LVEA ducts.

Figure 3 shows that the 120 Hz peak in DARM went away when only SF1, 2, 3 and 4 were shut down. It also shows that the HVAC produces a broad peak between 115 and 120 Hz. I looked back, and the 120 Hz vibration peak from the HVAC appears to have started during HVAC work at the end of May / beginning of June. There was a period when flows were increased to a high level for a short time, which might have pushed apart a duct connection that is now whistling at 120 Hz. I think it would be worth checking for a leak in the ducts associated with SF1, 2, 3 and 4.

In addition to fixing a potential duct leak, we could mitigate the peak in DARM by moving the PSL periscope peak so that it doesn’t overlap with the HVAC peak. In the past I have moved PSL periscope resonances for similar reasons by attaching small weights.

EY HVAC does not contribute significantly to DARM noise

Figure 4 shows that an on/off/on/off/on/off/on series of EY fan, chiller, and water pump shutdowns does not seem to correlate with range.

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:19, Monday 11 September 2023 (72807)ISC, SYS
This data is the analysis of the 2023-Aug-18 data originally summarized in LHO:72331.
H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 20:05, Friday 08 September 2023 - last comment - 11:25, Monday 11 September 2023(72771)
Friday Ops Eve Mid Shift & SDF Diffs

After a back to back set of earthquakes and an initial alignment.
H1 has made it back to NOMINAL_LOW_NOISE @ 02:58UTC.
There were SDF Diffs I had to accept to get into Observing @03:01 UTC.
SUS-FC2_M1_OPTICALALIGN_P_OFFSET
SUS-FC2_M1_OPTICALALIGN_Y_OFFSET

Tagging SUS team to document the SDF Diffs.
 

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:17, Monday 11 September 2023 (72806)ISC, SQZ, SUS
Tagging ISC and SQZ teams:
    (1) Why are the FC2 alignment offsets monitored by the SDF system?
    (2) Why would we have expected a change in the FC2 alignment? Is this concurrent and/or a result of other commissioning / optimizing?
victoriaa.xu@LIGO.ORG - 11:25, Monday 11 September 2023 (72808)

1) FC2 alignment offsets in principle don't have to be monitored by the SDF system anymore, since we have commissioned ASC. While commissioning the squeezer, at some point it was helpful to do it this way before we set up ASC, but I don't think it makes a big difference now that filter cavity ASC is running. Sheila has already un-monitored several other alignment offsets in ZMs recently, so it should be fine to do that here as well.

2) FC2 alignment changed to help the filter cavity lock on a TEM00 mode after the string of earthquakes on Friday. There was significant pitch misalignment, so Naoki manually helped the FC green lock catch by aligning the FC2 (mostly pitch) slider. If FC does not catch green lock (similar to if ALS does not catch green lock due to misalignment) the system won't be able to go to FDS. So in this case, it can be nice to help it a bit by bumping FC2 P/Y sliders to get the green spot locking on FC green transmission camera (nuc33, bottom left).

H1 CAL (CAL)
joseph.betzwieser@LIGO.ORG - posted 12:40, Friday 01 September 2023 - last comment - 12:11, Friday 17 November 2023(72622)
Calibration uncertainty estimate corrections
This is a continuation of a discussion of mis-application of the calibration model raised in LHO alog 71787, which was fixed on August 8th (LHO alog 72043), and of further issues with which time-varying factors (kappas) were applied while the ETMX UIM calibration line coherence was bad (see LHO alog 71790, which was fixed on August 3rd).

We need to update the calibration uncertainty estimates with the combination of these two problems where they overlap.  The appropriate thing is to use the full DARM model (1/C + (A_uim + A_pum + A_tst) * D), where C is sensing, A_{uim,pum,tst} are the individual ETMX stage actuation transfer functions, and D is the digital darm filters.  Although, it looks like we can just get away with an approximation, which will make implementation somewhat easier.

As a demonstration of this, first I confirm I can replicate the 71787 result purely with models (no fitting).  I take the pydarm calibration model Response, R, and correct it for the time dependent correction factors (kappas) at the same time I took the GDS/DARM_ERR data, and then take the ratio with the same model except the 3.2 kHz ETMX L3 HFPoles removed (the correction Louis and Jeff eventually implemented).  This is the first attachment.

Next we calculate the expected error just from the wrong kappas being applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose GPS time 1374369018 (2023-07-26 01:10); you can see the LHO summary page here, with the upper left plot showing the kappa_C discrepancy between GDS and the front end.  So just this issue produces the second attachment.

We can then look at the effects of the 3.2 kHz pole being missing for two possibilities -- for the front-end kappas, and for the bad GDS kappas -- and see the difference is pretty small compared to typical calibration uncertainties.  Here it's on the scale of a tenth of a percent at around 90 Hz.  I can also plot the model with the front end kappas (more correct at this time) over the model of the wrong GDS kappas, for a comparison in scale as well.  This is the 3rd plot.

This suggests to me the calibration group can just apply a single correction to the overall response function systematic error for the period where the 3.2 kHz HFPole filter was missing, and then in addition, for the period where the UIM uncertainty was preventing the kappa_C calculation from updating, apply an additional correction factor that is time dependent, just multiplying the two.
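
In symbols (my notation, in the style of the pseudo-variables used later in the comments; not spelled out in the aLOG itself), the proposal is to compose the two corrections multiplicatively:

    # compose the two systematic-error corrections multiplicatively
    eta_R_total = eta_R_no3p2kHz_pole * eta_R_frozen_TDCFs   # the first a function of frequency only; the second also of time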

As an example, the 4th attachment shows what this would look like for the gps time 1374369018.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:25, Monday 11 September 2023 (72817)
For further explanation of the impact of Frozen GDS TDCFs vs. Live CAL-CS Computed TDCFs on the response function systematic error, i.e. what Joe's saying with
    Next we calculate the expected error just from the wrong kappas being 
    applied in the GDS pipeline due to poor UIM coherence.  For this initial look, I choose 
    GPS time 1374369018 (2023-07-26 01:10 UTC), you can see the LHO summary page here, with 
    the upper left plot showing the kappa_C discrepancy between GDS and front end.  
    So just this issue produces the second attachment.
and what he shows in his second attachment, see LHO:72812.
jeffrey.kissel@LIGO.ORG - 16:34, Thursday 14 September 2023 (72879)
I've made some more clarifying plots to help me better understand Joe's work above after getting a few more details from him and Vlad.

(1) GDS-CALIB_STRAIN is corrected for time dependence, via the relative gain changes, "\kappa," as well as for the new coupled-cavity pole frequency, "f_CC." In order to make a fair comparison between the *measured* response function, GDS-CALIB_STRAIN / DARM_ERR live data stream, and the *modeled* response function, which is static in time, we need to update the response function with the the time dependent correction factors (TDCFs) at the time of the *measured* response function. 

How is the *modeled* response function updated for time dependence? Given the new pydarm system, it's actually quite straightforward given a DARM model parameter set (pydarm_H1.ini) and a good conda environment. Here's a bit of pseudo-code that captures what's happening conceptually:
    # Set up environment
    from gwpy.timeseries import TimeSeriesDict as tsd
    from copy import deepcopy
    import pydarm

    # Instantiate two copies of pydarm DARM loop model
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1.ini')
    darmModel_wTDCFs_obj = deepcopy(darmModel_obj)

    # Grab time series of TDCFs (chanList is the channel list given in (2) below;
    # starttime/endtime bracket the measurement time of interest)
    tdcfs = tsd.get(chanList, starttime, endtime, frametype='R', verbose=True)

    kappa_C = tdcfs[chanList[0]].value
    freq_CC = tdcfs[chanList[1]].value
    kappa_U = tdcfs[chanList[2]].value
    kappa_P = tdcfs[chanList[3]].value
    kappa_T = tdcfs[chanList[4]].value

    # Multiply in kappas, replace cavity pole, with a "hot swap" of the relevant parameter in the DARM loop model
    darmModel_wTDCFs_obj.sensing.coupled_cavity_optical_gain *= kappa_C
    darmModel_wTDCFs_obj.sensing.coupled_cavity_pole_frequency = freq_CC
    darmModel_wTDCFs_obj.actuation.xarm.uim_npa *= kappa_U
    darmModel_wTDCFs_obj.actuation.xarm.pum_npa *= kappa_P
    darmModel_wTDCFs_obj.actuation.xarm.tst_npv2 *= kappa_T

    # Extract the response function transfer function on your favorite frequency vector
    R_ref     = darmModel_obj.compute_response_function(freq)
    R_wTDCFs  = darmModel_wTDCFs_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_wTDCFs_over_ref = R_wTDCFs / R_ref


For all of this study, I started with the reference model parameter set that's relevant for these times in late July 2023 -- the pydarm_H1.ini from the 20230621T211522Z report directory, which I've copied over to a git repo as pydarm_H1_20230621T211522Z.ini.

(2) One layer deeper, some of what Joe's trying to explore in his plots above is the difference between the low-latency, GDS-pipeline-computed TDCFs and the real-time, CALCS-computed TDCFs -- a difference that arises because of the issues with the GDS pipeline computation discussed in LHO:72812.

So, in order to facilitate this study, we have to gather TDCFs from both GDS and CALCS pipeline. Here's the channel list for both:
    chanList = ['H1:GRD-ISC_LOCK_STATE_N',

                'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT',
                'H1:CAL-CS_TDEP_F_C_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_UIM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_PUM_REAL_OUTPUT',
                'H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT',

                'H1:GDS-CALIB_KAPPA_C',
                'H1:GDS-CALIB_F_CC',
                'H1:GDS-CALIB_KAPPA_UIM_REAL',
                'H1:GDS-CALIB_KAPPA_PUM_REAL',
                'H1:GDS-CALIB_KAPPA_TST_REAL']
where the first channel in the list is the state of detector lock acquisition guardian for useful comparison.

(3) Indeed, for *most* of the above aLOG, Joe chooses an example of a time when the GDS and CALCS TDCFs are *the most different* -- in his case, 2023-07-26 01:10 UTC (GPS 1374369018) -- when the H1 detector is still thermalizing after power up. They're *different* because the GDS calculation was frozen at the values they had on the day the calculation was spoiled by a bad MICH FF filter, 2023-08-04 -- and importantly when the detector *was* thermalized.

An important distinction that's not made above, is that the *measured* data in his first plot is from LHO:71787 -- a *different* time, when the detector WAS thermalized, a day later -- 2023-07-27 05:03:20 UTC (GPS 1374469418).

Compare the TDCFs from the NOT THERMALIZED time (2023-07-26, first attachment here) with the 2023-07-27 THERMALIZED first attachment I recently added to Vlad's LHO:71787.

One can see in the 2023-07-27 THERMALIZED data, the Frozen GDS and Live CALCS TDCF answers agree quite well. For the NOT THERMALIZED time, 2023-07-26, \kappa_C, f_CC, and \kappa_U are quite different.

(4) So, let's compare the response function ratio, i.e. the systematic error transfer function ratio, between the response function updated with GDS TDCFs vs. CALCS TDCFs for the two different times -- thermalized vs. not thermalized. This will be an expanded version of Joe's second attachment:
    - 2nd Attachment here: this exactly replicates Joe's plot, but shows more ratios to better get a feel for what's happening. Using the variables from the pseudo-code above, I'm plotting
        :: BLUE = eta_R_wTDCFs_CALCS_over_ref = R_wTDCFs_CALCS / R_ref
        :: ORANGE = eta_R_wTDCFs_GDS_over_ref = R_wTDCFs_GDS / R_ref
        :: GREEN = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS
    where the GREEN trace is showing what Joe showed -- both as the unlabeled BLUE trace in his second attachment, and the "FE kappa true R / applied bad kappa R" GREEN trace in his third attachment -- the ratio between response functions; one updated with CALCS TDCFs and the other updated with GDS TDCFs, for the NOT THERMALIZED time. 

    - 3rd Attachment here: this replicates the same traces, but with the TDCFs from Vlad's THERMALIZED time.

For both Joe's and my plots, because we think that the CALCS TDCFs are more accurate, and it's tradition to put the more accurate response function in the numerator, we show it as such. Comparing the two GREEN traces from my plots, it's much more clear that the difference between GDS and CALCS TDCFs is negligible for THERMALIZED times, and substantial during NOT THERMALIZED times.

(5) Now we bring in the complexity of the missing 3.2 kHz ESD pole. Unlike the "hot swap" of TDCFs in the DARM loop model, it's a lot easier just to create an "offline" copy of the pydarm parameter file with the ESD poles removed. That parameter file lives in the same git repo location, but is called pydarm_H1_20230621T211522Z_no3p2k.ini. So, with that, we just instantiate the model in the same way, but call the different parameter file:
    # Instantiate two copies of the pydarm DARM loop model: the reference, and one without the 3.2 kHz pole
    darmModel_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z.ini')
    darmModel_no3p2k_obj = pydarm.darm.DARMModel('pydarm_H1_20230621T211522Z_no3p2k.ini')

    # Extract the response function transfer function on your favorite frequency vector
    R_ref = darmModel_obj.compute_response_function(freq)
    R_no3p2k = darmModel_no3p2k_obj.compute_response_function(freq)

    # Compare the two response functions to form a "systematic error" transfer function, \eta_R.
    eta_R_nom_over_no3p2k = R_ref / R_no3p2k

where here, the response function without the 3.2 kHz pole is less accurate, so R_no3p2k goes in the denominator.

Without any TDCF correction, I show this eta_R_nom_over_no3p2k compared against Vlad's fit from LHO:71787 for starters.

(6) Now for the final layer of complexity: we need to fold in the TDCFs. This is where I think a few more traces and plots are needed, comparing the two THERMALIZED vs. NOT times, plus some clear math, in order to explain what's going on. In the end, I come to the same conclusion as Joe -- that the two effects, fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole, are "separable" to good approximation -- but I'm slower than Joe is, and need things laid out more clearly.

So, on the pseudo-code side of things, we need another couple of copies of the darmModel_obj:
    - with and without 3.2 kHz pole 
        - with TDCFs from CALCS and GDS, 
            - from THERMALIZED (LHO71787) and NOT THERMALIZED (LHO72622) times:
    
        R_no3p2k_wTDCFs_CCS_LHO71787 = darmModel_no3p2k_wTDCFs_CCS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO71787 = darmModel_no3p2k_wTDCFs_GDS_LHO71787_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_CCS_LHO72622 = darmModel_no3p2k_wTDCFs_CCS_LHO72622_obj.compute_response_function(freq)
        R_no3p2k_wTDCFs_GDS_LHO72622 = darmModel_no3p2k_wTDCFs_GDS_LHO72622_obj.compute_response_function(freq)

        
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO71787 = R_wTDCFs_CCS_LHO71787 / R_no3p2k_wTDCFs_CCS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO71787 = R_wTDCFs_GDS_LHO71787 / R_no3p2k_wTDCFs_GDS_LHO71787
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622 = R_wTDCFs_CCS_LHO72622 / R_no3p2k_wTDCFs_CCS_LHO72622
        eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622 = R_wTDCFs_GDS_LHO72622 / R_no3p2k_wTDCFs_GDS_LHO72622


Note, critically, that these ratios of with and without the 3.2 kHz pole -- both updated with the same TDCFs -- are NOT THE SAME THING as just the ratio of models updated with GDS vs. CALCS TDCFs, even though it might look like the "reference" and "no 3.2 kHz pole" terms might cancel "on paper," if one naively thinks that the operation is separable:
     
    [[ ( R_wTDCFs_CCS / R_ref )*( R_ref / R_no3p2k ) ]] / [[ ( R_wTDCFs_GDS / R_ref )*( R_ref / R_no3p2k ) ]]  #NAIVE
    which one might naively cancel terms to get down to
    [[ R_wTDCFs_CCS ]] / [[ R_wTDCFs_GDS ]]  #NAIVE

    
So, let's look at the answer now, with all this context.
    - NOT THERMALIZED: This is a replica of what Joe shows in the third attachment for the 2023-07-26 time:
        :: BLUE -- the systematic error incurred from excluding the 3.2 kHz pole on the reference response function without any updates to TDCFs (eta_R_nom_over_no3p2k)
        :: ORANGE -- the systematic error incurred from excluding the 3.2 kHz pole on the CALCS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_CCS_LHO72622, Joe's "FE kappa true R / applied R (no pole)")
        :: GREEN -- the systematic error incurred from excluding the 3.2 kHz pole on the GDS-TDCF-updated, modeled response function (eta_R_wTDCFS_over_R_wTDCFs_no3p2k_GDS_LHO72622, Joe's "GDS kappa true R / applied (no pole)")
        :: RED -- compared against Vlad's *fit*: the ratio of the CALCS-TDCF-updated, modeled response function to the (GDS-CALIB_STRAIN / DARM_ERR) measured response function

    Here, because the GDS TDCFs are different than the CALCS TDCFs, you actually see a non-negligible difference between ORANGE and GREEN. 

    - THERMALIZED:
        (Same legend, but the TIME and TDCFs are different)

    Here, because the GDS and CALCS TDCFs are the same-ish, you can't see that much of a difference between the two. 
    
    Also, note that even when we're using the same THERMALIZED time and corresponding TDCFs to be self-consistent with Vlad's fit of the measured response function, they still don't agree perfectly. So, there's likely still more systematic error in play at the thermalized time.

(7) Finally, I wanted to explicitly show the consequences of "just" correcting the frozen GDS TDCFs and of "just" correcting the missing 3.2 kHz pole, to be able to better *quantify* the statement that "the difference is pretty small compared to typical calibration uncertainties," as well as to show the difference between "just" the ratio of response functions updated with the different TDCFs (the incorrect model) and the "full" models.

    I show this in 
    - NOT THERMALIZED, and
    - THERMALIZED

For both of these plots, I show
    :: GREEN -- the corrective transfer function we would be applying if we only update the Frozen GDS TDCFs to Live CALCS TDCFs, compared with
    :: BLUE -- the ratio of corrective transfer functions,
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> only fixing the missing 3.2 kHz pole
    :: ORANGE -- the ratio of corrective transfer functions
         >> the "best we could do," updating the response with Live TDCFs from CALCS and fixing the missing 3.2 kHz pole against
         >> the "second best thing to do" which is leave the Frozen TDCFs alone and correct for the missing 3.2 kHz pole 
       
     Even for the NOT THERMALIZED time, the BLUE trace never exceeds 1.002 / 0.1 deg in magnitude / phase, and it's small compared to the simple correction of Frozen GDS TDCFs to Live CALCS TDCFs (the "TDCF only" correction), shown in GREEN. This helps quantify why Joe thinks we can separately apply the two corrections to the systematic error budget, because GREEN is much larger than BLUE.

    For the THERMALIZED time, in BLUE, that ratio of full models is even smaller, and, as expected, the ratio of simple TDCF-update models is also small.


%%%%%%%%%%
The code that produced this aLOG is create_no3p2kHz_syserror.py as of git hash 3d8dd5df.
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 12:11, Friday 17 November 2023 (74255)
Following up on this study just one step further, as I begin to actually correct data during the time period where both of these systematic errors are in play -- the frozen GDS TDCFs and the missing 3.2 kHz pole...

I craved one more set of plots to convey that "fixing the Frozen GDS TDCFs and fixing the 3.2 kHz pole are 'separable' to good approximation," showing the actual corrections one would apply in the different cases:
    :: BLUE = eta_R_nom_over_no3p2k = R_ref / R_no3p2k >> The systematic error created by the missing 3.2 kHz pole in the ESD model alone
    :: ORANGE = eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = R_wTDCFs_CALCS / R_wTDCFs_GDS >> the systematic error created by the frozen GDS TDCFs alone
    :: GREEN = eta_R_nom_over_no3p2k * eta_R_wTDCFs_CALCS_over_R_wTDCFs_GDS = the product of the two >> the approximation
    :: RED = a previously unshown eta that we'd actually apply to the data that had both errors = R_ref (updated with CALCS TDCFs) / R_no3p2k (updated with GDS TDCFs) >> the right thing

As above, it's important to look at both a thermalized case and a non-thermalized case, so I attach those two:
    NOT THERMALIZED, and
    THERMALIZED.

The conclusions are the same as above:
    - Joe is again right that the difference between the approximation (GREEN) and the right thing (RED) is small, even for the NOT THERMALIZED time.
But I think this version of the plots / traces better shows the breakdown of which effect contributes where on top of the approximation vs. "the right thing," and "the right thing" was never explicitly shown before. All the traces in my expanded aLOG, LHO:72879, had the reference model (or no-3.2-kHz-pole model) updated with either both CALCS TDCFs or both GDS TDCFs in the numerator and denominator, rather than "the right thing," where you have CALCS TDCFs in the numerator and GDS TDCFs in the denominator.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To create these extra plots, I added a few lines of "calculation" code and another 40-ish lines of plotting code to create_no3p2kHz_syserror.py. I've now updated it within the git repo, so it and the repo now have git hash 1c0a4126.
Non-image files attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 16:12, Tuesday 29 August 2023 - last comment - 08:50, Monday 11 September 2023(72525)
SQZ measurement on homodyne: 16-18% unexplained losses

Vicky, Naoki, Sheila, Daniel

Details of homodyne measurement:

This morning Daniel and Vicky reverted the cable change to allow us to lock the local oscillator loop on the homodyne (undoing change described in 69013).  Vicky then locked the OPO on the seed using the dither lock, and increased the power into the seed fiber to 75mW (it can't go above 100mW for the safety of the fiber switch).  We then reduced the LO power so that the seed and LO power were matched on PDA, and adjusted the alignment of the sqz path to get good (~97%) visibility measured on PDA.  We removed the half wave plate from the seed path, without adjusting the rotation.  With it removed, we checked the visibility on PDB, and saw that the powers were imbalanced.

Polarization issue (revisiting the polarization of sqz beam, same conclusion as previous work):

There is a PBS in the LO path close to the homodyne, so we believe that the polarization should be set to horizontal at the beamsplitter in that path.  The LO power on the two PDs is balanced (imbalanced by 0.4%), so we believe this means that the beamsplitter angle was set correctly for p-polarized light as we found it, and there is no need to adjust the beamsplitter angle.  However, when we switched to the seed power, there is a 10% difference between the power on the two PDs without the half wave plate in the path.  We put the half wave plate back, and the powers were again balanced (with the HWP angle as we found it).  We believe this means that the polarization of the sqz path is not horizontal arriving at the homodyne, and that the half wave plate is restoring the polarization to horizontal. If the polarization rotation is happening on SQZT7, the half wave plate should be able to mitigate the problem; if it's happening in HAM7, it will look like a loss for squeezing in the IFO. Vicky re-adjusted the alignment of the sqz path after we put the HWP back in, because it slightly shifts the alignment.  After this the visibility measured on PDA is 95.7% (efficiency of 91.6%) and on PDB visibility is 96.9% (efficiency of 93.9%). 

SQZ measurements, unclipping:

While the IFO was relocking Vicky and Naoki measured SQZ, SN, ASQZ and mean SQZ on the homodyne and found 4.46dB sqz, 10.4dB mean sqz and 13.14dB anti-sqz measured from 500-550Hz.  Vicky then checked for clipping, and saw some evidence of small clipping (order 1% clipping with 10urad yaw dither on ZM2).  We went to the table to check that the problem wasn't in the path to the IR PD and camera, we adjusted the angle of the 50/50 beamsplitter that sends light to the camera, and set the angle of the camera to be more normal to the PD path.  This improved the image quality on the camera.  Vicky moved ZM3 to reduce the clipping seen by the IR PD slightly.  She restored good visibility by maximizing the ADF, and also adjusted both PSAMs, moving ZM4 from 100V to 95V.  (We use different PSAMs for the homodyne than the IFO).  After this, she re-measured sqz at 800-850Hz: 5.2dB sqz, 13.6dB anti-sqz, and 10.6dB mean sqz. 

Using the nonlinear gain of 11 (Naoki and Vicky checked its calibration yesterday) and the equations from Aoki, this sqz/asqz level implies a total efficiency of 0.72 without phase noise; the mean sqz measurement implies a total efficiency of 0.704. From the sqz loss spreadsheet we have 6.13% known HAM7 losses; if we also use the lower visibility measured using PDA, we should have a total efficiency for the homodyne of 0.916*0.9387 = 0.86.  This means that we would infer an extra 16-18% losses from these homodyne measurements, which seems too large for homodyne PD QE and optics losses in the path.  Since we believe that the polarization issue is reflected in the visibility, this means that these are extra losses in addition to any losses the IFO sees due to the polarization issue. 
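
For concreteness, here's a minimal sketch (my own reconstruction of the Aoki-style calculation with a hypothetical helper name, not the script actually used) that backs out the total efficiency from the nonlinear gain and the measured squeezing levels; it reproduces the ~0.72 and ~0.70 numbers quoted above:

    import numpy as np

    def efficiency_from_sqz(nonlinear_gain, sqz_dB, asqz_dB, mean_sqz_dB=None):
        """Infer total detection efficiency eta, assuming no phase noise.
        Quadrature variances relative to shot noise: V_± = 1 ± eta * 4x / (1 ∓ x)^2,
        with nonlinear gain g = 1 / (1 - x)^2, i.e. x = 1 - 1/sqrt(g)."""
        x = 1.0 - 1.0 / np.sqrt(nonlinear_gain)
        gen_asqz = 4 * x / (1 - x)**2          # generated anti-squeezing factor
        gen_sqz  = 4 * x / (1 + x)**2          # generated squeezing factor
        out = {'eta_from_sqz':  (1 - 10**(-abs(sqz_dB) / 10)) / gen_sqz,
               'eta_from_asqz': (10**(asqz_dB / 10) - 1) / gen_asqz}
        if mean_sqz_dB is not None:            # mean of the sqz and anti-sqz variances
            out['eta_from_mean'] = (10**(mean_sqz_dB / 10) - 1) / ((gen_asqz - gen_sqz) / 2)
        return out

    print(efficiency_from_sqz(11, sqz_dB=5.2, asqz_dB=13.6, mean_sqz_dB=10.6))
    # -> roughly 0.72 from sqz, 0.71 from anti-sqz, 0.70 from mean sqz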

Screenshot from Vicky shows the measurement made including the dark noise.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 14:50, Thursday 31 August 2023 (72582)

Including losses from phase noise of 20mrad, dark noise 21dB below shot noise, and a more accurate calibration of our measured non-linear gain to generated sqz level (from the ADF paper vs. the Aoki paper Sheila referenced), the total efficiency could marginally be increased to 0.74. This suggests 26% loss based on sqz/asqz. This is also consistent with the 27% loss calculated separately from the mean sqz and generated sqz levels.

From the sqz wiki, we could budget 17% known homodyne losses. This includes 7% in-chamber loss to the homodyne (opo escape efficiency * ham7 optics losses * bdiverter loss), and 11% HD on-table losses (incl. 2% optics losses on SQZT7, and visibility losses of 1 - 91.6% ≈ 8% as Sheila said above; note this visibility was measured before changing alignments for the -5.2dB measurement, so there remains some uncertainty from visibility losses).

In total, after including more loss effects (phase noise, dark noise), a more accurate generated sqz level, and updating the known losses -- of the 27% total HD losses observed, we can plausibly account for 17% known losses, lowering the unexplained homodyne losses to ~10-11% (this is still high).

Images attached to this comment
victoriaa.xu@LIGO.ORG - 08:50, Monday 11 September 2023 (72802)

From Sheila's alog LHO:72604 regarding the quantum efficiency of the homodyne photodiodes (99.6% QE for PDA, and 95% QE for PDB), if we accept this at face value (which could be plausible due to e.g. the angle of incidence on PD B), this would change the 1% budgeted HD PD QE loss to 5% loss. 
This increases the amount of total budgeted/known homodyne losses to ~21%: 1 - [0.985(opo)*0.953 (ham7)*0.99 (bdiverter) * 0.98(on-table optics loss)*0.95(PD B QE)*0.916(hd visibility)].
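
A quick check of that budget (same numbers as above, just multiplied out):

    # budgeted homodyne efficiency: opo * ham7 * bdiverter * on-table optics * PD-B QE * visibility
    known_efficiency = 0.985 * 0.953 * 0.99 * 0.98 * 0.95 * 0.916
    print(f"budgeted efficiency = {known_efficiency:.3f} -> known losses ~ {1 - known_efficiency:.0%}")  # ~21%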

From the 27% total HD losses observed, we can then likely account for about 21% known losses (~7% in-chamber, ~15% on-table), lowering unexplained homodyne losses to < 7%.

H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 19:22, Friday 18 August 2023 - last comment - 11:13, Monday 11 September 2023(72331)
DARM 52 Hz peak from chilled water pump at EX: HVAC shutdown times

Genevieve, Lance, Robert

To further understand the roughly 10Mpc lost to the HVAC (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308), we made several focussed shutdowns today. These manipulations were made during observing (with times recorded) because such HVAC changes happen automatically during observing and also, we were reducing noise rather than increasing it. The times of these manipulations are given below.

One early outcome is that the peak at 52 Hz in DARM is produced by the chilled water pump at EX (see figure). We went out and looked to see if the vibration isolation was shorted; it was not, though there are design flaws (the water pipes aren't isolated). We switched from CHWP-2 to CHWP-1 to see if the particular pump was extra noisy. CHWP-1 produced a similar peak in DARM at its own frequency. The peak in accelerometers is also similar in amplitude to the one from the water pump at EY. One possibility is that the coupling at EX is greater because of the undamped cryobaffle at EX.

 

Friday HVAC shutdowns; all times Aug. 18 UTC

15:26 CS SF1, 2, 3, 4 off

15:30:30 CS SF5 and 6 off

15:36 CS SF5 and 6 on

15:40 CS SF1, 2, 3, 4 back on

 

16:02 EY AH2 (only fan on) shut down

16:10 EY AH2 on

16:20 EY AH2 off

16:28 EY AH2 on

16:45 EY AH2 and chiller off

16:56:30 EY AH2 and chiller on

 

17:19:30 EX chiller only off, pump stays on

17:27 EX water pump CHWP-2 goes off

17:32 EX CHWP-2 back on, chiller back on right after

 

19:34:38 EX chiller off, CHWP-2 pump stays on for a while

19:45 EX chiller back on

 

20:20 EX started switch from chiller 2 to chiller 1 - slow going

21:00 EX Finally switched

21:03 EX Switched back to original, chiller 1 to chiller 2

 

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:29, Monday 21 August 2023 (72350)DetChar, FMP, ISC, OpsInfo
Turning Robert's reference to LHO:72308 into a hyperlink for ease of navigation.

Check out LHO:72297 for a bigger-picture representation of how the 52 Hz peak sits in the broader DARM sensitivity; from the time stamps in Elenna's plots, they were taken at 15:27 UTC, just after the corner station (CS) "SFs 1, 2, 3, 4" were turned off.

SF stands for "Supply Fans" i.e. those air handler unit (AHU) fans that push the cool air in to the LVEA. Recall, there are two fans per air handler unit, for the two air handler units (AHU1 and AHU2) that feed the LVEA in the corner station.

The channels that you can use to track the corner station's LVEA HVAC system are outlined more in LHO:70284, but in short, you can check the status of the supply fans via the channels
    H0:FMC-CS_LVA_AH_AIRFLOW_1   Supply Fan (SF) 1
    H0:FMC-CS_LVA_AH_AIRFLOW_2   Supply Fan (SF) 2
    H0:FMC-CS_LVA_AH_AIRFLOW_3   Supply Fan (SF) 3
    H0:FMC-CS_LVA_AH_AIRFLOW_4   Supply Fan (SF) 4
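
For example, a minimal gwpy sketch (assuming the usual site NDS/frame data access) for trending one of these supply fan channels around the shutdown times:

    from gwpy.timeseries import TimeSeries

    # Trend SF1 airflow across the 2023-Aug-18 corner station fan shutdown window
    airflow = TimeSeries.get('H0:FMC-CS_LVA_AH_AIRFLOW_1',
                             '2023-08-18 15:00:00', '2023-08-18 16:00:00')
    airflow.plot()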
jeffrey.kissel@LIGO.ORG - 13:13, Monday 28 August 2023 (72486)DetChar, ISC, SYS
My bad -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, and Robert's logging of times listed above is for 2023-Aug-18. 

Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station, as I'd errantly claimed above).
jeffrey.kissel@LIGO.ORG - 11:13, Monday 11 September 2023 (72805)FMP, ISC, OpsInfo
For these 2023-Aug-18 times mentioned in this LHO aLOG 72331, check out the subsequent analysis of impact in LHO:72778.