H1 General
ibrahim.abouelfettouh@LIGO.ORG - posted 21:49, Wednesday 13 September 2023 - last comment - 09:32, Thursday 14 September 2023(72875)
Lockloss 4:46 UTC

Lockloss due to another glitch of the same type as yesterday's lockloss (alog 72852), and less than 20 minutes apart, too. Attempting to re-lock automatically now while investigating the cause of the glitch.

Comments related to this report
thomas.shaffer@LIGO.ORG - 09:32, Thursday 14 September 2023 (72878)

Investigation in Ibrahim's summary, alog 72876.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:08, Wednesday 13 September 2023 (72874)
OPS Eve Midshift Update

IFO is in NLN as of 7:24 UTC (19:37 hr lock) and OBSERVING as of 23:12 UTC

Other:

IY violin modes 5 (and 6) have been going down steadily for the last 4 hrs since commissioning rang them up and a gain was applied to mode 5 (screenshot). All violins are now under 10⁻¹⁷ m/√Hz as of about 2:00 UTC.

 

Images attached to this report
H1 SQZ
victoriaa.xu@LIGO.ORG - posted 17:53, Wednesday 13 September 2023 - last comment - 11:13, Monday 25 September 2023(72870)
Brief test for fc backscatter with fc alignments, some strange lines in FC-LSC locking signal

Based on TJ's log 72857 about recent FC alignment drifts, I wanted to check out the situation with possible scattered light from the filter cavity since alignments can drift around (worth investigating more), and I wanted to check the CLF fiber polarization (this was basically fine).

Summary screenshot shows some measurements from today. I'm wondering if we might be closer than I realized to filter cavity backscatter. With an excitation that was ~5-10x greater than the ambient LSC control to FC2, it created scatter about 2x above DARM at 100 Hz. Based on previous (higher-freq) measurements (LHO:68022) and estimates (LHO:67586), I had thought ambient LSC control was about 10-fold below DARM; this suggests we are within a factor of ~5? Though, there is a lot of uncertainty (on my end) about how hard we are driving FC2 given the suspension/loop roll-offs/etc, so I need to think more about the scaling between measured scatter and the excitation.
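As a rough sanity check on that factor-of-~5 statement, here is a minimal sketch of the linear scaling, using only the numbers quoted above; the assumption that the scattered-light amplitude scales linearly with the FC2 drive is mine, not a measured fact.

    # Project ambient FC backscatter from the driven measurement, assuming the
    # scatter amplitude scales linearly with the FC2 length drive.
    excitation_factors = (5, 10)      # drive was ~5-10x the ambient LSC control to FC2
    driven_scatter_over_darm = 2.0    # driven scatter was ~2x DARM at 100 Hz

    for x in excitation_factors:
        ambient = driven_scatter_over_darm / x
        print(f"drive {x}x ambient -> projected ambient scatter ~{ambient:.1f} of DARM "
              f"(a factor of ~{1/ambient:.1f} below)")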

Strange line in FC-LSC error signal -- it seems to wander; I've seen it (or things like it) sometimes in other SQZ-related signals, but I haven't figured out where it comes from yet. It can be easily seen on the SQZ summary pages (sqz > subsystems tab) since Derek and Iara helped us set it up (thank you Detchar!!). I don't see it in CLF_ISS, but sometimes in other error signals. It's not clear to me that this is an issue if the peak is >100 Hz, but if it drifts to lower frequencies and this line is real/physical (not some random artifact), it could be problematic. The peak amplitude seems large enough that if it were in-band of the FC2 suspension and controls, it could plausibly get injected as real FC length noise and drive some measurable backscatter.

For the excitation -- I used the template from LHO:68022 but ran it to lower frequencies, in-band of FC-LSC. Compared to the FC error signal (specifically H1:SQZ-FC_WFS_A_I_SUM_OUTPUT, which is the input sensor to FC_LSC_DOF2_IN1), this DTT injection of 30,000 counts increased the integrated RMS of the in-loop error signal at 10 Hz by about 9-fold (= 3375 (w/ excitation) / 387 (ambient), measured from DTT RMS cursors). I injected into the fc-lsc loop at H1:SQZ-FC_LSC_DOF2_EXC, with various amplitudes (like 30k), and filter = butter("BandPass",4,10,300); this should then go to FC2_L for suspension feedback. I'm not sure that I'm using the best witness sensors for the actual length noise driven by this excitation, but I wasn't able to totally figure it out in time.
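For reference, a minimal sketch of that excitation shape and the RMS comparison is below; the scipy design only approximates the foton butter("BandPass",4,10,300) filter, and the sample rate is an assumption.

    import numpy as np
    from scipy import signal

    fs = 2048   # Hz, assumed rate for the sketch (not necessarily the real FC_LSC rate)

    # Approximate the foton band-pass used for the injection.
    sos = signal.butter(4, [10, 300], btype="bandpass", fs=fs, output="sos")

    # Shape white noise with it, as a stand-in for the 30,000-count DTT excitation.
    rng = np.random.default_rng(0)
    exc = 30000 * signal.sosfilt(sos, rng.standard_normal(60 * fs))
    print(f"excitation RMS: {exc.std():.0f} counts")

    # The check quoted above: ratio of in-loop error-signal RMS with/without the excitation.
    rms_with, rms_without = 3375.0, 387.0   # DTT RMS-cursor values
    print(f"RMS increase at 10 Hz: about {rms_with / rms_without:.1f}x")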

With this excitation going, I tried to walk the alignment to see if there was an alignment that minimizes backscatter, but I didn't figure this out in time. I tried to walk ZM2 with beam spot control off, with the plan of setting the QPD offsets where it landed; this was probably the wrong approach, since I wasn't able to set the QPD offsets in time. Maybe I should have walked the FC-QPD offsets with full ASC running at higher gain, since this loop is so slow. Might be worth trying this again for a bit; with the injection running, I wasn't sure if I was able to minimize scatter by walking ZM2 (pitch/yaw, maybe PSAMS?), but there were a couple of directions in both pitch and yaw that looked promising.

In the end, SDF diffs for ZM2 were accepted (ZM2 is not under asc-control), and I accepted the beam spot position change in the FC QPD Pitch offset (from 0.07 --> 0.08) while reverting the yaw change that I didn't figure out in time. I don't anticipate much overall change in the squeezer after these tests.

Images attached to this report
Comments related to this report
lee.mcculler@LIGO.ORG - 11:13, Monday 25 September 2023 (73090)

Do you have the SQZ laser noise eater on? We noticed a similarly wandering line before O3. See LLO43780 and LLO43822.

H1 ISC
jenne.driggers@LIGO.ORG - posted 17:08, Wednesday 13 September 2023 - last comment - 17:47, Wednesday 13 September 2023(72862)
Measurements for adjusting LSC FF

[Jenne, Gabriele, with thoughts and ideas from Elenna and Sheila]

The last few days, our sensitivity has been degrading a small amount, and Gabriele noted from a bruco that we're seeing increased MICH and SRCL coherence.  It hasn't even been a full 2 weeks since Gabriele and Elenna last tuned the MICH FF, so this is disappointing. Elenna has made the point in alog 72598 that the effectiveness of the MICH FF seems to be related to the actuation strength of the ETMX ESD.  We certainly see that the Veff of ETMX has been marching monotonically for the last few months in Ibrahim's alog 72849. After roughly confirming that this makes sense, Gabriele and I took measurements in preparation for soon switching the LSC FF to use the ITMY PUM, just like LLO does, in hopes that this makes us more immune to these gain changes.


Today, at Sheila's suggestion, I tried modifying the ETMX L3 DriveAlign L gain to counteract this actuation strength change.  (Fear not, I reverted the gain before commissioning ended, so our Observe segments do not have any change to any calibration.)  To check the effect of changing that drivealign value, I looked at both the DARM open loop gain and the coupling between MICH and DARM.
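For completeness, the arithmetic of that kind of compensation is just an inverse scaling; a minimal sketch is below, where the gain and kappa values are placeholders rather than the numbers actually used in this test.

    # If the ETMX ESD strength has drifted by kappa_TST relative to the reference model,
    # scaling the L3 DriveAlign L gain by 1/kappa_TST keeps the effective DARM actuation
    # (the plant the MICH FF was fit against) approximately unchanged.
    nominal_gain = 1.0     # placeholder for the nominal L3 DriveAlign L gain
    kappa_tst = 1.04       # placeholder for the measured ESD strength drift

    compensated_gain = nominal_gain / kappa_tst
    print(f"kappa_TST = {kappa_tst:.3f} -> set DriveAlign L gain to {compensated_gain:.4f}")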

This all seemed to jibe with the MICH FF effectiveness being related to the ETMX ESD actuation strength.  So, rather than try to track that, we decided to work on changing over to use the ITMY PUM like LLO does.  I note that our Transition_from_ETMX guardian state uses ITMX, not ITMY, so it should be safe to have made changes to the ITMY settings.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 17:47, Wednesday 13 September 2023 (72873)

Here is a first look at fitting the MICH and SRCL FF when actuating through ITMY PUM. We started by measuring the coupling with FF completely off, and we might want / need to iterate once or twice to get better results.

MICH: fit looks good, a filter of order 14 fits the transfer function reasonably well

zpk([-1.873934219410558+i*46.83216078589182;-1.873934219410558-i*46.83216078589182;-0.7341315698490372+i*62.0033340187172;-0.7341315698490372-i*62.0033340187172;-0.9693772618643728+i*65.63696788856699;-0.9693772618643728-i*65.63696788856699;-0.2306511845861754+i*111.0934300115408;-0.2306511845861754-i*111.0934300115408;-2.345238808833105+i*1219.259385509747;-2.345238808833105-i*1219.259385509747;166.8801527617445+i*2890.927112949171;166.8801527617445-i*2890.927112949171;60.92218422456504;-104.5628711283977],[-5.152689788327478+i*30.32659841775321;-5.152689788327478-i*30.32659841775321;-11.78911031695619+i*37.8625720690391;-11.78911031695619-i*37.8625720690391;-1.090075083133351+i*61.79155607437996;-1.090075083133351-i*61.79155607437996;-0.9661450057136016+i*65.6925963766675;-0.9661450057136016-i*65.6925963766675;-0.2109589422385647+i*111.481550695115;-0.2109589422385647-i*111.481550695115;-2.311985363993003+i*1219.293222488905;-2.311985363993003-i*1219.293222488905;-219.9709799569737+i*1294.77378378834;-219.9709799569737-i*1294.77378378834],-0.1367317709937509)

SRCL: as usual, it's hard to get a good fit. The predicted performance is a factor of 10 subtraction, which should be OK as a start. We might need to iterate.

zpk([-11.44480027611615+i*191.7889881463648;-11.44480027611615-i*191.7889881463648;-0.9328145827962181+i*266.494151512518;-0.9328145827962181-i*266.494151512518;-11.21909030586581+i*278.7781189198607;-11.21909030586581-i*278.7781189198607;-15.29341485126412+i*362.4319559041366;-15.29341485126412-i*362.4319559041366;-25.24254442245431+i*409.6141252691203;-25.24254442245431-i*409.6141252691203;-472.4077152209672+i*539.2943077769149;-472.4077152209672-i*539.2943077769149;-497.1568846370241+i*2312.902650596404;-497.1568846370241-i*2312.902650596404],[-0.7255682054509971+i*14.53648889678977;-0.7255682054509971-i*14.53648889678977;-11.0173094335847+i*191.6358938485432;-11.0173094335847-i*191.6358938485432;-0.9817914931295652+i*266.5227875729932;-0.9817914931295652-i*266.5227875729932;-11.91696627440176+i*278.5709318199229;-11.91696627440176-i*278.5709318199229;-15.26184825001648+i*362.5155688498652;-15.26184825001648-i*362.5155688498652;-24.47199659066153+i*410.2863096146389;-24.47199659066153-i*410.2863096146389;-227.951089876243+i*1896.119580343166;-227.951089876243-i*1896.119580343166],0.0007550286305133534)

Filters not yet uploaded to foton. Note the plots do not include the additional high pass filters that we are using, so the low frequency amplitude of the two LSC FF is lower.
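For anyone wanting to sanity-check fits like these before loading them into foton, here is a minimal sketch of evaluating such a zpk with scipy; it assumes the roots are quoted in Hz (as in the strings above) and ignores foton's sign/normalization conventions, and the root lists are truncated, so treat the numbers as illustrative only.

    import numpy as np
    from scipy import signal

    # First complex pair of the MICH fit above, rounded and truncated for brevity.
    zeros_hz = np.array([-1.8739 + 46.832j, -1.8739 - 46.832j])
    poles_hz = np.array([-5.1527 + 30.327j, -5.1527 - 30.327j])
    gain = -0.1367

    # Convert Hz to rad/s for an s-domain evaluation, then sweep 5-500 Hz.
    sys = signal.ZerosPolesGain(2*np.pi*zeros_hz, 2*np.pi*poles_hz, gain)
    freqs = np.logspace(np.log10(5), np.log10(500), 500)
    _, resp = signal.freqresp(sys, w=2*np.pi*freqs)
    print(f"|TF| at 100 Hz ~ {abs(resp[np.argmin(abs(freqs - 100))]):.3g}")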

Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:24, Wednesday 13 September 2023 (72869)
OPS Eve Shift Start

TITLE: 09/13 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 8mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY:

IFO is in NLN as of 7:24 UTC (16 hr lock) and OBSERVING (after Wednesday commissioning) as of 23:12 UTC

LHO General
thomas.shaffer@LIGO.ORG - posted 16:18, Wednesday 13 September 2023 (72864)
Ops Day Shift Summary

TITLE: 09/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Locked for 15.5 hours, we just finished up 4.5 hours of commissioning and calibration time.
LOG:

Start Time | System | Name            | Location | Laser Haz | Task                                   | End Time
15:03      | FAC    | Randy, Tyler    | MX, MY   | n         | Craning, moving equipment              | 17:26
16:10      | FAC    | Karen           | Opt Lab  | n         | Tech cleaning                          | 16:12
18:24      | FAC    | Randy           | MY       | n         | Picture of chiller                     | 19:20
18:39      | CAL    | TJ              | CR       | n         | Running CAL sweep                      | 18:59
19:05      | ISC    | Jenne           | CR       | n         | FF measurement                         | 20:24
19:53      | FAC    | Tyler           | EY       | n         | Check line pressures on the mezzanine  | 20:25
21:59      | SQZ    | Vicky           | Remote   | n         | SQZ injections, measurements, and more | 23:00
22:25      | VAC    | Gerardo, Jordan | EY       | n         | Cleaning up glycol spill               | 23:02
H1 CAL (DetChar, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 14:57, Wednesday 13 September 2023 (72867)
Calibration Systematic Error Monitor Line moved BACK to 102.13 from temporary 104.23 Hz
J. Kissel (after consult with J. Driggers)

While poking around the front-end computed response function systematic error yesterday and today (LHO:72848 and LHO:72863), I was reminded that one of the calibration lines used to measure the response function systematic error has been showing completely bogus large numbers for "a long time."

It's the 102.13 or 104.23 Hz PCALX calibration line.

I realize the systematic error measured at this frequency is wrong because we never updated the frequency in the DARM loop model parameter file. 
(See details below as to why this matters for both GDS and CALCS computation of the response function systematic error at this frequency.)

For reasons I outline below, it's easier to fix the problem by changing the calibration line frequency *back* to 102.13 Hz, so I've done so as of 2023-09-13 20:17:45 UTC, and the error estimate has returned to functional values around unity magnitude and zero phase by 2023-09-13 20:19:45 UTC (i.e. after the 2-minute gating, medianing, averaging algorithm catches up.)

All of this was done while we were *out* of observing during today's commissioning window, so the *next* observation ready segment will pick up this change around 2023-09-13 ~23:00 UTC.

%%%%%%%%%%
Details & Motivation
%%%%%%%%%%

2023-06-21 Calibration updates to reflect power down to 60W PSL input, 20230621T211522Z pydarm_H1.ini file correctly has 102.13 Hz in the list of cal_line_sys_pcalx_frequencies. LHO:70693

2023-08-04 Friday afternoon, SRCL feed-forward filter re-tuned, informed by a measurement that mistakenly had all calibration lines still ON, including the 102.13 Hz line. So, the FF measurement fitter installed a super-high-Q zero:pole pair to "compensate" for it. The resulting filter was installed in the SRCL FF path, starting a period of huge interference between this high-Q zero:pole pair in the SRCL FF and the 102.13 Hz calibration line itself. LHO:71961

2023-08-08 Next week Tuesday, DetChar Team finds "some feature" at 102.12833 Hz, and folks blame the calibration line itself since it's originally identified as appearing Saturday morning. LHO:72064

2023-08-09 That same week Wednesday, out of panic, and as the start of a "grasping at straws" test, we moved the 102.13 Hz PCALX calibration line to 104.23 Hz LHO:72108 -- but in our rush, we forgot/didn't realize that we needed to update the down-stream computations that are impacted by this change:
    (a) The front-end demodulator that turns this line into something useful in CAL-CS needs its local oscillator frequency to change to match,
    (b) The cal_line_sys_pcalx_frequencies list within pydarm_H1.ini needs updating as well -- or else the "DARM loop model transfer function values at calibration line frequencies" EPICs records are wrong for the new frequency, which means both GDS and CAL-CS are interpreting this line incorrectly.

2023-08-29 Problem with SRCL FF filters is finally identified and removed, and the story about what happened on 2023-08-04 finally becomes clear. LHO:72537

2023-08-31 Calibration updated in order to account for the past two month's worth of commissioning and ETMX ESD actuation strength drift, but the 20230830T213653Z pydarm_H1.ini file incorrectly still has 102.13 Hz in its cal_line_sys_pcalx_frequencies list. LHO:72594
    >> I didn't aLOG it explicitly, but on this day, I noticed the (a) mistake above, and changed the CAL-CS DEMOD local oscillator frequency to 104.23 Hz for the front-end demodulation and estimation of systematic error. Because (b) was still going on, it didn't fix the problem.

2023-09-13 TODAY Realized the issue with the front-end systematic error calculation was still present because of (b), the cal_line_sys_pcalx_frequencies list in the pydarm_H1.ini file, and reasoned it would be way easier to change the PCAL OSC and DEMOD OSC frequency back to 102.13 Hz than to go through the entire rigamarole of updating the calibration and "pushing new EPICs records." So, I moved the calibration line *back* to 102.13 Hz by changing the PCALX OSC7 frequency, and remembered to update the LINE8 DEMOD LO frequency to match.
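As an illustration of why the DEMOD LO has to track the PCAL line frequency exactly, here is a minimal single-bin demodulation sketch on synthetic data; the rate, duration, and averaging below are assumptions, not the actual CAL_CS_MASTER implementation.

    import numpy as np

    fs, dur = 16384, 120                       # assumed rate and a 2-minute stretch
    t = np.arange(int(fs * dur)) / fs
    f_line = 102.13                            # PCALX line frequency (Hz)
    x = 1e-3 * np.sin(2*np.pi*f_line*t + 0.3)  # synthetic channel containing the line

    def demod(data, f_lo):
        """Single-frequency IQ demodulation: complex line amplitude at f_lo."""
        return 2 * np.mean(data * np.exp(-2j*np.pi*f_lo*t))

    good = demod(x, 102.13)   # LO matches the line
    bad = demod(x, 104.23)    # LO left at the old frequency
    print(f"matched LO   : amplitude {abs(good):.2e}")
    print(f"mismatched LO: amplitude {abs(bad):.2e} -> no coherent line, bogus ratio downstream")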

First attachment: The graphical MEDM interface of where I changed the EPICs records that define the PCAL OSC7 calibration line frequency, and the CALCS LINE8 DEMOD LO frequency.
Second attachment: The PCAL OSC7 frequency change has been saved in the H1CALEX SDF system. (Only the OBSERVE.snap is shown, because the h1calex model's safe and OBSERVE.snaps are the same file, soft-linked together.)
Third attachment: The CALCS LINE8 DEMOD LO frequency change to match has been saved in the H1CALCS SDF system. (Only the OBSERVE.snap is shown, because the h1calcs model's safe and OBSERVE.snaps are the same file, soft-linked together.)
Fourth attachment: A graphical representation of the above mentioned timeline

%%%%%%%%%%%%%%
Calibration Line List Update
%%%%%%%%%%%%%%

Freq (Hz)   Actuator                   Purpose                      Channel that defines Freq             Changes Since Last Update (LHO:72108)     
15.6        ETMX UIM (L1) SUS          \kappa_UIM excitation        H1:SUS-ETMY_L1_CAL_LINE_FREQ          No change
16.4        ETMX PUM (L2) SUS          \kappa_PUM excitation        H1:SUS-ETMY_L2_CAL_LINE_FREQ          No change
17.1        PCALY                      actuator kappa reference     H1:CAL-PCALY_PCALOSC1_OSC_FREQ        No change
17.6        ETMX TST (L3) SUS          \kappa_TST excitation        H1:SUS-ETMY_L3_CAL_LINE_FREQ          No change
33.43       PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC4_OSC_FREQ        No change
53.67         |                            |                        H1:CAL-PCALX_PCALOSC5_OSC_FREQ        No change
77.73         |                            |                        H1:CAL-PCALX_PCALOSC6_OSC_FREQ        No change
102.13        |                            |                        H1:CAL-PCALX_PCALOSC7_OSC_FREQ        FREQUENCY CHANGE; THIS ALOG
283.91        V                            V                        H1:CAL-PCALX_PCALOSC8_OSC_FREQ        No change
284.01      PCALY                      PCALXY comparison            H1:CAL-PCALY_PCALOSC4_OSC_FREQ        No change
410.3       PCALY                      f_cc and kappa_C             H1:CAL-PCALY_PCALOSC2_OSC_FREQ        No change
1083.7      PCALY                      f_cc and kappa_C monitor     H1:CAL-PCALY_PCALOSC3_OSC_FREQ        No change
1153.2      PCALY                      Old "cancelling" line        H1:CAL-PCALY_PCALOSC9_OSC_FREQ        Added to the list, but has been on for all of O4.
n*500+1.3   PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC1_OSC_FREQ        No change (n=[2,3,4,5,6,7,8])
Images attached to this report
H1 CAL
thomas.shaffer@LIGO.ORG - posted 12:01, Wednesday 13 September 2023 (72861)
Calibration Simulines ran at 1378665426

Simulines start:

PDT: 2023-09-13 11:36:48.648354 PDT
UTC: 2023-09-13 18:36:48.648354 UTC
GPS: 1378665426.648354
 

Simulines end:
PDT: 2023-09-13 11:58:49.591980 PDT
UTC: 2023-09-13 18:58:49.591980 UTC
GPS: 1378666747.591980
 

Files:

2023-09-13 18:58:49,243 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230913T183650Z.hdf5
2023-09-13 18:58:49,260 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230913T183650Z.hdf5
2023-09-13 18:58:49,269 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230913T183650Z.hdf5
2023-09-13 18:58:49,279 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230913T183650Z.hdf5
2023-09-13 18:58:49,289 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230913T183650Z.hdf5

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 11:34, Wednesday 13 September 2023 (72859)
Out of Observing for Commissioning at 1830 UTC

Starting with a calibration measurement.

H1 General
thomas.shaffer@LIGO.ORG - posted 11:05, Wednesday 13 September 2023 - last comment - 14:13, Wednesday 13 September 2023(72857)
Alignments before/after Sept 12 maintenance locks

Last night we had some alignment issues after a lock loss (alog 72854) and Jenne noticed that our jitter noise is higher (alog 72853). Comparing two DARM spectra from the summary pages in the first attachment, from two days ago to now, the ~120 Hz noise is worse, though our higher frequency noise seems better now.

In terms of alignment, Ryan C's Misaligned GUI shows ITMY P being further off compared to near the end of the Sept 11 lock before maintenance, but I'm not seeing that in top mass OSEM values as seen in the 3rd attachment. Interestingly, L1 P seems to be slightly shifted from the reference time, but this isn't seen in M0 or the oplevs.

This GUI also points to SR2 being slightly off in P and Y, which it does seem to be compared to the reference time, but overall doesn't seem that far off from where it normally moves (6th attachment). PRM is another contender with larger alignment moves, but nothing out of the normal between locks (7th attachment).

The ETMs see similar movement as well (ETMX, ETMY). Perhaps obviously, the most movement we see is during our move spots [508] and max power [520] states.

FC2 has been moving much more in the last 5 days. Not sure if this is a symptom or a cause, but perhaps this might point to some of our SQZ issues. FC1 shows a similar story, though it doesn't seem to be as drastic.

All of this to say that I don't notice any major alignment differences, aside from FCs, from before maintenance to now, but there are a few interesting bits to spend some more time looking at.
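For repeating this kind of before/after comparison programmatically, a hedged sketch using gwpy is below; the channel name and times are just examples (not the exact ones trended above), and NDS data access is assumed.

    import numpy as np
    from gwpy.timeseries import TimeSeries

    chan = "H1:SUS-ITMY_M0_DAMP_P_IN1_DQ"               # example top-mass witness channel
    before = ("2023-09-11 20:00", "2023-09-11 20:10")   # near the end of the Sept 11 lock
    after = ("2023-09-13 18:00", "2023-09-13 18:10")    # during the current lock

    ref = np.median(TimeSeries.get(chan, *before).value)
    now = np.median(TimeSeries.get(chan, *after).value)
    print(f"{chan}: median before {ref:.3f}, after {now:.3f}, shift {now - ref:+.3f}")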

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 11:44, Wednesday 13 September 2023 (72860)

Since we're into commissioning for the afternoon, I've retrained the jitter cleaning (and left the laser noise cleaning with the same training it's had for a few weeks), and it seems to very effectively remove this new larger version of the jitter peak.  (Blue in the attachment is the uncleaned data, red is the cleaned data.)
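For context, "retraining" here boils down to re-estimating the witness-to-DARM coupling; a minimal frequency-domain version on synthetic data is sketched below. The real cleaning pipeline and its channel set are more involved, so this is illustrative only.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    fs, n = 1024, 1024 * 256
    witness = rng.standard_normal(n)                   # stand-in for a jitter witness channel
    target = 0.5 * witness + rng.standard_normal(n)    # stand-in for DARM = coupled jitter + other noise

    # "Training": estimate the witness->target coupling per frequency bin as CSD/PSD.
    f, p_ww = signal.welch(witness, fs, nperseg=4*fs)
    _, p_wt = signal.csd(witness, target, fs, nperseg=4*fs)
    coupling = p_wt / p_ww

    # Coherence sets how much target power the subtraction can remove at each frequency.
    _, coh = signal.coherence(witness, target, fs, nperseg=4*fs)
    print(f"mean |coupling| ~ {np.mean(np.abs(coupling)):.2f} (true 0.5); "
          f"mean removable power fraction ~ {np.mean(coh):.2f}")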

Images attached to this comment
victoriaa.xu@LIGO.ORG - 14:13, Wednesday 13 September 2023 (72866)OpsInfo, SQZ

From TJ's plots, it looks like the FC1/2 alignments were drifting, but not necessarily that they are more noisy. Some record of recent FC/SQZ alignment and adjustments, in descending order of most to least suspicious: 

  • After earthquakes on Friday 9/8 (~5 days ago), FC alignment needed manual alignment in pitch to recover 72771; FC2 slider diffs showed up as SDF diffs that were accepted. If this alignment brought the system to a clippy position, I could imagine anything from scattered light to sub-optimal FC control introducing issues. We are likely within a factor of 10 for sqz/fc backscatter, but this was never tested down to frequencies below 100 Hz, and it is likely worth doing a backscatter test again in this realistic low-freq regime. Could be worth it to keep investigating this backscatter issue; from tests later in the day 72870 it looks closer than I thought originally.
  • At some point in the last 2-3 weeks, fiber polarizations were aligned, but I am not sure if they were optimized for the fiber delivering FC control signals; this is worth checking. Checked this later (72870), and it is probably not an issue.
  • On 8/30, FC related alignments were tested 72579, on the same day that Rahul/Fil/Dave had earlier fixed the bad ADC to resolve the FC1 T3 osem issue 72740.
  • ~8/30 we had also adjusted alignments for the homodyne, but this *should* all be set back. I haven't seen clear evidence of outstanding issues after the homodyne alignments.
  • On 9/6, Naoki re-measured the SQZ ASC sensing matrix (i.e., ZM5-6) 72725, and we basically set it back to the previous thermalized values we had been using; b/c this is downstream of FC, I wouldn't think this affects FC at all.

Re: long night of SQZ issues -- I still have no clear idea of what happened then; there is not a consistent or clear story for the many glitches seen in the trends looking back. But there are visible glitches corresponding to the spontaneous unlocks that night as far upstream as the SHG (which is 1-2 layers upstream of FC), even when the squeezer was DOWN and the FC was unlocked and doing nothing. So I believe the problem was very likely upstream of FC. That's to say, while FC issues might have resulted from the more low-level issues, or from the same root cause as those glitches, I don't think FC itself was causing the glitches that kept the squeezer down that night.

LHO VE
david.barker@LIGO.ORG - posted 10:17, Wednesday 13 September 2023 (72858)
Wed CP1 Fill

Wed Sep 13 10:06:26 2023 INFO: Fill completed in 6min 22secs

Jordan confirmed a good fill curbside.

Images attached to this report
LHO FMCS
thomas.shaffer@LIGO.ORG - posted 08:02, Wednesday 13 September 2023 (72856)
Ops Day Shift Start

TITLE: 09/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY: Locked for 7.5 hours, calm environment.

CDS Overview OK, no alarms.

LHO FMCS
tyler.guidry@LIGO.ORG - posted 07:35, Wednesday 13 September 2023 (72855)
AHU 1 Supply Temp Changes, Maintenance and Reconfiguration
There are two issues to address in this entry regarding the configuration of the corner station AHU1.

1: Persistent flooding of the condensate trays was occurring, which at times caused water to reach the floor of the AHU, requiring that it be vacuumed up. The cause of improper draining was suspected to be a clogged drain line. The line was flushed both with compressed air and with pressurized water. Following that, a vacuum was pulled on each of the two drain lines with a wet/dry shop-vac type vacuum. Once the trap at the drain is primed, the drain line should now function without issue.

2: The likely cause of the moisture being present in the first place is the way the AHU was moving air. Unlike the neighboring AHU (2), the face bypass dampers in AHU1 above the cooling coil were OPEN and the dampers directly below, which allow flow across the coil, were CLOSED. The attached screen grab illustrates the desired configuration. The cause for the bypass dampers to have flipped was found in the supply temperature settings, which were changed from a temp control mode of "auto reset" at a 52F setpoint to a temp control mode of "manual" with a 45F setpoint; this now mirrors AHU2.

This change has been in effect since midday Monday 9/11.

B. Gateley T. Guidry 
Images attached to this report
H1 CAL (CDS)
jeffrey.kissel@LIGO.ORG - posted 10:25, Tuesday 12 September 2023 - last comment - 13:07, Wednesday 13 September 2023(72830)
h1calcs Model Rebooted; Gating for CALCS \kappa_U is now informed by KAPPA_UIM Uncertainty (rather than KAPPA_TST)
J. Kissel, D. Barker
WP #11423

Dave has graciously compiled, installed and restarted the h1calcs model. In doing so, that brings in the bug fix from LHO:72820, which fixes the issue that the front-end, CAL-CS KAPPA_UIM library block was receiving the KAPPA_TST uncertainty identified in LHO:72819.

Thus h1calcs is now using rev 26218 of the library part /opt/rtcds/userapps/release/cal/common/models/CAL_CS_MASTER.mdl.

I'll confirm that the UIM uncertainty is the *right* uncertainty during the next nominal low noise stretch later today (2023-09-12 ~20:00 UTC).
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:37, Tuesday 12 September 2023 (72848)
Circa 9:30 - 10:00a PDT (2023-09-12 16:30-17:00 UTC)
Post-compile, but prior-to-install, Dave ran a routine foton -c check on the filter file to confirm that there were no changes in the
    /opt/rtcds/lho/h1/chans/H1CALCS.txt
besides "the usual" flip of the header (see IIET:11481 which has now become cds/software/advLigoRTS:589).

Also relevant, remember every front-end model's filter file is a softlink to the userapps repo,
    $ ls -l /opt/rtcds/lho/h1/chans/H1CALCS.txt 
    lrwxrwxrwx 1 controls controls 58 Sep  8  2015 /opt/rtcds/lho/h1/chans/H1CALCS.txt -> /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

Upon the check, he found that foton -c had actually changed filter coefficients.
Alarmed by this, he ran an svn revert on the userapps "source" file for H1CALCS.txt in
    /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

He walked me through what had happened, and what he did to fix it, *verbally* with me on TeamSpeak, and we agreed -- "yup, that should be fine."

Flash forward to NOMINAL_LOW_NOISE at 14:30 PDT (2023-09-12 20:25:57 UTC): TJ and I find that the GDS-CALIB_STRAIN trace on the wall looks OFF, and there are no impactful SDF DIFFs. I.e., TJ says "Alright Jeff... what'd you do..." seeing the front wall FOM show GDS-CALIB_STRAIN at 2023-09-12 20:28 UTC.

After some panic having not actually done anything but restart the model, I started opening up CALCS screens trying to figure out "uh oh, how can I diagnose the issue quickly..." I tried two things before I figured it out:
    (1) I go through the inverse sensing function filter (H1:CAL-CS_DARM_ERR) and look at the foton file ... and realize -- looks OK, but if I'm really gunna diagnose this, I need to find the number that was installed on 2023-08-31 (LHO:72594)...
    (2) I also open up the actuator screen for the ETMX L3 stage (H1:CAL-CS_DARM_ANALOG_ETMX_L3) ... and upon staring for a second I see FM3 has a "TEST_Npct_O4" in it, and I immediately recognize -- just by the name of the filter -- that this is *not* the "HFPole" that *should* be there after Louis restored it on 2023-08-07 (LHO:72043).

After this, I put two-and-two together, and realized that Dave had "reverted" to some bad filter file. 

As such, I went to the filter archive for the H1CALCS model, and looked for the filter file as it stood on 2023-08-31 -- the last known good time:

/opt/rtcds/lho/h1/chans/filter_archive/h1calcs$ ls -ltr
[...]
-rw-rw-r-- 1 advligorts advligorts 473361 Aug  7 16:42 H1CALCS_1375486959.txt
-rw-rw-r-- 1 advligorts advligorts 473362 Aug 31 11:52 H1CALCS_1377543182.txt             # Here's the last good one
-rw-r--r-- 1 controls   advligorts 473362 Sep 12 09:32 H1CALCS_230912_093238_install.txt  # Dave compiles first time
-rw-r--r-- 1 controls   advligorts 473377 Sep 12 09:36 H1CALCS_230912_093649_install.txt  # Dave compiles the second time
-rw-rw-r-- 1 advligorts advligorts 473016 Sep 12 09:42 H1CALCS_1378572178.txt             # Dave installs his "reverted" file
-rw-rw-r-- 1 advligorts advligorts 473362 Sep 12 13:50 H1CALCS_1378587040.txt             # Jeff copies Aug 31 11:52 H1CALCS_1377543182.txt into current and installs it


Talking with him further in prep for this aLOG, we identify that when Dave said "I reverted it," he meant that he ran an "svn revert" on the userapps copy of the file, which "reverted" the file to the last time it was committed to the repo, i.e. 
    r26011 | david.barker@LIGO.ORG | 2023-08-01 10:15:25 -0700 (Tue, 01 Aug 2023) | 1 line

    FM CAL as of 01aug2023
i.e. before 2023-08-07 (LHO:72043) and before 2023-08-31 (LHO:72594).

Yikes! This is the calibration group's procedural bad -- we should be committing the filter file to the userapps svn repo every time we make a change.

So yeah, in doing normal routine things that all should have worked, Dave fell into a trap we left for him.

I've now committed the H1CALCS.txt filter file to the repo at rev 26254

    r26254 | jeffrey.kissel@LIGO.ORG | 2023-09-12 16:26:11 -0700 (Tue, 12 Sep 2023) | 1 line

    Filter file as it stands on 2023-08-31, after 2023-08-07 LHO:72043 3.2 kHz ESD pole fix and  2023-08-31 LHO:72594 calibration update for several reasons.


By 2023-09-12 20:50:44 UTC I had loaded in H1CALCS_1378587040.txt, which was a simple "cp" copy of H1CALCS_1377543182.txt, the last good filter file that was created during the 2023-08-31 calibration update,...
and the DARM FOM and GDS-CALIB_STRAIN returned to normal. 

All of the panic and the fix happened prior to us going to OBSERVATION_READY at 2023-09-12 21:00:28 UTC, so there was no observation ready segment that had bad calibration.

I also confirmed that all was restored and well by checking in on both
 -- the live front-end systematic error in DELTAL_EXTERNAL_DQ (using the tools from LHO:69285), and
 -- the low-latency systematic error in GDS-CALIB_STRAIN using the auto-generated plots on https://ldas-jobs.ligo-wa.caltech.edu/~cal/
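Given how easy this trap was to fall into, here is a hedged sketch of a sanity check that would have caught it: compare the live chans file against the intended archive copy before/after loading. The paths are the ones quoted above; this is not an official CDS tool.

    import hashlib
    from pathlib import Path

    def digest(path):
        """SHA-256 of a filter file, for a quick identity check."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    live = "/opt/rtcds/lho/h1/chans/H1CALCS.txt"
    known_good = "/opt/rtcds/lho/h1/chans/filter_archive/h1calcs/H1CALCS_1377543182.txt"

    if digest(live) == digest(known_good):
        print("live H1CALCS.txt matches the 2023-08-31 archive copy")
    else:
        print("MISMATCH: live filter file differs from the intended archive copy -- investigate before trusting the calibration")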
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 13:07, Wednesday 13 September 2023 (72863)CDS
Just some retro-active proof from the last few days worth of measurements and models of systematic error in the calibration.

First, a trend of the front-end computed values of systematic error, shown in 2023-09-12_H1CALCS_TrendOfSystematicError.png which reviews the time-line of what had happened.

Next, grabs from the GDS measured vs. modeled systematic error archive which show similar information but in hourly snapshots,
    2023-09-12 13:50 - 14:50 UTC 1378561832-1378565432 Pre-maintenance, pre-model-recompile, calibration good, H1CALCS_1377543182.txt 2023-08-31 filter file running.
    2023-09-12 19:50 - 20:50 UTC 1378583429-1378587029 BAD 2023-08-01, last-svn-commit, r26011, filter file in place.
    2023-09-12 20:50 - 21:50 UTC 1378587032-1378590632 H1CALCS_1378587040.txt copy of 2023-08-31 filter installed, calibration goodness restored.

Finally, I show the systematic error in GDS-CALIB_STRAIN trends from the calibration monitor "grafana" page, which shows that because we weren't in ANALYSIS_READY during all this kerfuffle, the systematic error as reported by that system was none-the-wiser that any of this had happened.

*phew* Good save team!!
Images attached to this comment
H1 ISC (OpsInfo)
camilla.compton@LIGO.ORG - posted 17:20, Wednesday 30 August 2023 - last comment - 13:31, Wednesday 13 September 2023(72572)
MICH FF Retuned

Elenna, Gabriele, Camilla 

This afternoon we updated the MICH feedforward; it is now back to around the level it was last Friday, comparison attached. Last done in 72430. It maybe needed to be redone so soon because of the 72497 alignment changes on Friday.

The code for excitations and analysis has been moved to /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/

Elenna updated the guardian to engage FM1 rather than FM9, and the SDF was accepted. New filter attached. I forgot to accept this in the h1lsc safe.snap and will ask the operators to accept MICHFF FM1 when we lose lock or come out of observe (72431), tagging OpsInfo.

Attached is a README file with instructions.

Images attached to this report
Non-image files attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 15:13, Thursday 31 August 2023 (72605)

Accepted FM1 in the LSC safe.snap

jeffrey.kissel@LIGO.ORG - 13:31, Wednesday 13 September 2023 (72865)CAL, DetChar
Calling out a line from the above README instructions that Jenne pointed me to, which confirms my suspicion about why the bad FF filter's high-Q feature showed up at 102.128888 Hz, right next to the 102.13 Hz calibration line:
    "IFO in Commissioning mode with Calibration Lines off (to avoid artifacts like in alog#72537)."

in other words -- go to NLN_CAL_MEAS to turn off all calibration lines before taking active measurements that inform any LSC feed forward filter design.

Elenna says the same thing -- quoting the paragraph from LHO:72537 later added in edit:

How can we avoid this problem in the future? This feature is likely an artifact of running the injection to measure the feedforward with the calibration lines on, so a spurious feature right at the calibration line appeared in the fit. Since it is so narrow, it required incredibly fine resolution to see it in the plot. For example, Gabriele and I had to bode plot in foton from 100 to 105 Hz with 10000 points to see the feature. However, this feature is incredibly evident just by inspecting the zpk of the filter, especially if you use the "mag/Q" of foton and look for the poles and zeros with a Q of 3e5 (!!). If we ensure to both run the feedforward injection with cal lines off and/or do a better job of checking our work after we produce a fit, we can avoid this problem.
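That "look for poles and zeros with a Q of 3e5" check is easy to automate; below is a minimal sketch that computes Q for complex s-plane roots quoted in Hz (as foton's mag/Q view does) and flags suspicious ones. The example root and the threshold are made up for illustration.

    import numpy as np

    def q_of_root(root_hz):
        """Q of a complex s-plane root quoted in Hz: Q = |s| / (2 |Re(s)|)."""
        return abs(root_hz) / (2.0 * abs(root_hz.real))

    def flag_high_q(roots_hz, threshold=1e4):
        return [(r, q_of_root(r)) for r in roots_hz if r.imag != 0 and q_of_root(r) > threshold]

    # A benign root next to one resembling the ~102.13 Hz artifact described above.
    candidates = np.array([-5.15 + 30.3j, -1.7e-4 + 102.128888j])
    for root, q in flag_high_q(candidates):
        print(f"suspicious root at {abs(root):.3f} Hz with Q ~ {q:.1e} -> check against the cal-line list")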
H1 ISC
gabriele.vajente@LIGO.ORG - posted 11:42, Friday 04 August 2023 - last comment - 15:11, Wednesday 13 September 2023(71961)
MICH and SRCL feedforward measured and fitted

This morning between 9:20am and 10:00am I injected some noise to retune the MICH and SRCL feedforward. New filters have been uploaded with name '8-4-23'. Unfortunately the IFO lost lock before I was ready to test them.

We should wait for the IFO to thermalize after relock, and then test the two new filters.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 14:29, Friday 04 August 2023 (71965)

Tested the new feedforward fits, they work better than the old ones, so we'll leave them running. ISC_LOCK updated and reloaded.

Images attached to this comment
oli.patane@LIGO.ORG - 15:02, Friday 04 August 2023 (71966)

SDF diffs Accepted

Images attached to this comment
gabriele.vajente@LIGO.ORG - 15:23, Friday 04 August 2023 (71968)

Interestingly, retuning the FF reduced the 52 Hz peak in DARM.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:11, Wednesday 13 September 2023 (72868)CAL, DetChar, OpsInfo
Much later, we've identified that this routine filter update was informed by a measurement of the IFO incorrectly taken while calibration lines were still on. This caused the measurement fitter to create a filter that tries to "compensate" for the high-Q feature with some equally high-Q zero:pole pairs LHO:72537.

Once installed, the impulse response of this zero:pole pair causes hours-long ring-ups in the IFO sensitivity right at the calibration line frequency as the two features mix LHO:72064.

The procedure for taking LSC FF measurements has been rectified now (see LHO:72572), to explicitly call out that calibration lines MUST be turned OFF if you're gathering a set of measurements to be used for FF filter creation.
H1 CAL
vladimir.bossilkov@LIGO.ORG - posted 08:29, Friday 28 July 2023 - last comment - 12:31, Tuesday 12 December 2023(71787)
H1 Systematic Uncertainty Patch due to misapplication of calibration model in GDS

First observed as a persistent mis-calibration in the systematic error monitoring Pcal lines, which measure PCAL / GDS-CALIB_STRAIN, affecting both LLO and LHO [LLO Link] [LHO Link], and characterised by these measurements consistently disagreeing with the uncertainty envelope.
It is presently understood that this arises from bugs in the code producing the GDS FIR filters, which introduce a sizeable discrepancy; Joseph Betzwieser is spear-heading a thorough investigation to correct this.

I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ / GDS-CALIB_STRAIN, where the numerator is further corrected for the kappa values of the sensing, cavity pole, and the 3 actuation stages (GDS does the same corrections internally). This gives a transfer function of the difference induced by errors in the GDS filters.

Attached in this aLog, and its sibling aLog in LLO, is this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Attached also is a text file of this uncertainty correction for application in pyDARM to produce the final uncertainty, in the format of [Frequency, Real, Imaginary].
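A minimal sketch of this kind of ratio, done as a CSD/PSD transfer-function estimate on synthetic data, is below; the real measurement additionally applies the kappa and cavity-pole corrections to the numerator, which is not reproduced here.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(2)
    fs, n = 4096, 4096 * 512
    darm_err = rng.standard_normal(n)            # stand-in for the corrected CAL-DARM_ERR_DBL_DQ
    b, a = signal.butter(1, 500, fs=fs)          # pretend GDS differs by a mild low-pass
    gds = signal.lfilter(b, a, darm_err)         # stand-in for GDS-CALIB_STRAIN

    f, p_dd = signal.welch(darm_err, fs, nperseg=16*fs)
    _, p_dg = signal.csd(darm_err, gds, fs, nperseg=16*fs)
    eta = p_dd / p_dg                            # "numerator / GDS" systematic-error transfer function
    i = np.argmin(abs(f - 410.3))
    print(f"eta at {f[i]:.1f} Hz: |eta| = {abs(eta[i]):.3f}, phase = {np.degrees(np.angle(eta[i])):+.1f} deg")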

Images attached to this report
Non-image files attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 15:33, Friday 28 July 2023 (71798)

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).

Images attached to this comment
ling.sun@LIGO.ORG - 13:02, Thursday 17 August 2023 (72299)

After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.

The command is:

python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config  ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config  ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/

The uncertainty is estimated at 1374612632 (span 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (span an hour).

 

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:01, Wednesday 13 September 2023 (72871)
J. Kissel, J. Betzwieser

FYI: The time at which Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is 
    2023-07-27 05:03:20 UTC
    2023-07-26 22:03:20 PDT
    GPS 1374469418

This is a time when the IFO was well thermalized.

The values used for the TDCFs at this time were
    \kappa_C  = 0.97764456
    f_CC      = 444.32712 Hz
    \kappa_U  = 1.0043616 
    \kappa_P  = 0.9995768
    \kappa_T  = 1.0401824

The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).

Note these TDCF values list above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the value exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See attached .pdf which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
 - is bad during that time (LHO:72812), and
 - unimpactful w.r.t. the overall calibration.
So the fact that 
    :: the GDS calculation is frozen and
    :: the CALCS calculation is noisy, but is quite close to the frozen GDS value is coincidental, even though
    :: the ~25 minute mean of the CALCS is actually around ~0.98 rather than the instantaneous value of 1.019
is inconsequential to Vlad's conclusions.

Non-image files attached to this comment
louis.dartez@LIGO.ORG - 00:54, Tuesday 12 December 2023 (74747)
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad's (orange), and the modeled correction evaluated using a dense frequency spacing (blue); see eta_3p2khz_correction.png. The denser frequency spacing recovers an error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction seems to do quite well.
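For a sense of scale, a hedged sketch of just the single real pole that was missing is below; this is only the raw 3.2 kHz pole response, not the full modeled correction through the DARM loop that the attached text file contains.

    import numpy as np

    f_pole = 3200.0                                    # Hz, the ESD pole that was left out
    freqs = np.array([100.0, 400.0, 500.0, 600.0, 1000.0])

    h = 1.0 / (1.0 + 1j * freqs / f_pole)              # single real pole, unity DC gain
    for f, val in zip(freqs, h):
        print(f"{f:6.0f} Hz: |H| = {abs(val):.4f}, phase = {np.degrees(np.angle(val)):+.2f} deg")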
Images attached to this comment
Non-image files attached to this comment
ling.sun@LIGO.ORG - 12:31, Tuesday 12 December 2023 (74758)

The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023) (see LHO:72135)
