Reports until 08:02, Friday 18 August 2023
LHO General
austin.jennings@LIGO.ORG - posted 08:02, Friday 18 August 2023 (72313)
Ops Day Shift Start

TITLE: 08/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.16 μm/s 
QUICK SUMMARY:

- H1 still locked, just hit 33 hours

- All systems ok, seismic motion is low

H1 General
anthony.sanchez@LIGO.ORG - posted 00:01, Friday 18 August 2023 - last comment - 16:26, Friday 18 August 2023(72311)
Thursday Ops Eve Shift End

TITLE: 08/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
1:27 UTC PSL dust monitor alerts going off during high winds again.
3:29 UTC random unknown drop out of OBSERVING. I saw no SDF diffs within a few seconds of the event and took the IFO back to OBSERVING.
6:42 UTC another random unknown drop out of OBSERVING. Again I saw no SDF diffs within a few seconds of the event and took the IFO back to OBSERVING.
Other than that it's been a quiet night.

Current LOCK 25 hours!

LOG: Coyotes chewing on sprinkler nozzles out back.

 

Comments related to this report
naoki.aritomi@LIGO.ORG - 16:26, Friday 18 August 2023 (72328)SQZ

The drops out of observing at 3:29 UTC and 6:42 UTC seem to be due to an SDF diff of syscssqz, according to the attached DIAG_SDF log. The SQZ TTFSS COMGAIN and FASTGAIN changed at those times, but they are not monitored now (alog72915), so it should be another SDF diff related to the TTFSS change.

Images attached to this comment
H1 ISC (AWC, ISC)
keita.kawabe@LIGO.ORG - posted 20:44, Thursday 17 August 2023 (72309)
Do we reduce mode matching loss by simply replacing OM2?

Summary:

This is my attempt to constrain the mode matching parameters of the IFO beam on the OMC, to see whether we'd do any better on MM with an even "colder" OM2. This is somewhat of a moot point because, empirically, the noise is lower with hot OM2 than with cold even though the mode matching is the opposite; but if we solve the noise problem/mystery, we could think about improving the MM loss.

The answer is, it depends. We might gain some if we're lucky, but not much: 1% or 2% at most.

Details:

From Louis' calibration comparison of hot vs. cold OM2 (alog 70907), we know that the optical gain increases by about a factor of 1.02 for cold OM2 relative to hot. Given the same DC power coming through the OMC (enforced by the DARM loop), the optical gain is proportional to sqrt(1 - MMLoss).

Imagine that the light falling on the OMC is characterized by its waist position offset from the OMC's waist (normalized by 2x the OMC Rayleigh range) and its waist size (normalized by the OMC waist size). Pick an arbitrary point (posHot, sizeHot) in this MM parameter space and compute the MM loss, assuming that OM2 is hot. Calculate the beam parameter upstream of OM2, change the OM2 ROC to the cold number, and propagate the upstream beam down to the OMC again. This light is represented by a different point (posCold, sizeCold); compute its MM loss.

If sqrt((1-coldLoss)/(1-hotLoss)) is close enough to the measured optical gain ratio of 1.02, the pair of points (posHot, sizeHot) and (posCold, sizeCold) is compatible with the measurement. If you do this for all possible (posHot, sizeHot), you obtain two arcs, one corresponding to hot and the other to cold OM2, that are compatible with the measurement. (If this is hard to visualize, you might want to read my alog 71145, which does a different calculation but is based on a similar mode matching model.)
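To make the model concrete, here is a minimal sketch of the loss and compatibility calculation in these normalized coordinates (my own illustration, not the script that produced the attached plots; it assumes the standard power-coupling formula for two coaxial Gaussian beams):

# Minimal sketch, not the script used for the attached plots.
# pos  = waist position offset / (2 x OMC Rayleigh range)
# size = waist size / OMC waist size
import numpy as np

def mm_loss(pos, size):
    """Mode matching loss for a normalized (pos, size) mismatch."""
    coupling = 4.0 / ((size + 1.0 / size) ** 2 + (2.0 * pos / size) ** 2)
    return 1.0 - coupling

def compatible(pos_hot, size_hot, pos_cold, size_cold, ratio=1.02, tol=1e-3):
    """Is a (hot, cold) pair consistent with the measured gain ratio?"""
    gain_ratio = np.sqrt((1.0 - mm_loss(pos_cold, size_cold))
                         / (1.0 - mm_loss(pos_hot, size_hot)))
    return abs(gain_ratio - ratio) < tol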

In the first attachment, the lowest and middle arcs correspond to hot and cold OM2. Green arrows show the change from hot to cold for selected pairs of points. Note that this doesn't constrain the loss itself; as an arbitrary constraint to save calculation time, I assumed an MM loss of 20%.

Now, given that reality is somewhere on the middle arc, could we gain anything by making OM2 colder? I quickly added more cooling (OM2 ROC = 2.25 m); that's represented by the top arc and red arrows.

In the second attachment, which is the same as the first but with my hand-scribbles: if we're on the right half(-ish) of the arc, encircled with the cyan line, making OM2 colder won't do us any good. If, OTOH, we're on the left half(-ish), encircled with the orange line, a colder OM2 will give us some (but not a huge) gain.

The third plot shows the MM loss of the cold-OM2 arc on the X axis and the colder-OM2 MM loss on the Y axis. It shows that if, e.g., reality is ~1% MM loss for the cold OM2 and we're lucky (i.e. on the orange-encircled part of the arc), the loss will go down to ~0.5±0.2% or so with an even colder OM2; but if we're unlucky (i.e. on the cyan-encircled part of the arc), the loss will go up to ~1.3±0.3%. If reality is 10% MM loss for the cold OM2, by going colder you'll get either 8.5% or 11.7%.

(I was hoping that the SR3 heater data (Daniel's alog 68884) would further constrain the parameters, but it turns out that we should have waited longer before turning off the heater. If you look at the calibration factor corresponding to the optical gain, it seemed as if there was no change, certainly nothing larger than 0.2% (last plot). However, this was expected, as 2 W for 10 minutes only gives us 1.22 uD (using the 4.75 uD/W in LLO alog 27262 and the 70 min time constant in Aidan's comment on Daniel's alog), which makes essentially zero MM loss change if you start from cold OM2. Had we waited long enough, we'd have reached 9.5 uD. By looking at how much the MM loss got worse with 9.5 uD, we should be able to exclude some area from the arcs.)

Images attached to this report
H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 17:53, Thursday 17 August 2023 - last comment - 10:26, Monday 21 August 2023(72308)
PEM activities Wed. and Thurs.: PEM injections, bias sweep, HVAC shutdown breaks 160 Mpc (for a few minutes)

Lance, Genevieve, Robert

PEM injections

We are redoing part of the formal PEM injection program, which took place for a week before the run, because the 75 W to 60 W input power change reduced vibration coupling by factors of 2 or 3. On Wednesday, during the commissioning slot, we finished acoustic injections at EY and started shaker injections at EY. On Thursday we finished the main shaker injections at EY.

ETMY bias sweep with electronic ground injections

Fig. 1 shows that the ETMY bias setting that minimizes our electronic ground injection coupling has possibly changed a little, but not much, since January.

HVAC shutdown

I shut down the HVAC, site-wide, from about 15:23 to 15:37 Aug 17 UTC. The range increased by a little less than 10 Mpc, to a little over 160 Mpc. Elenna posted a spectrum for the best part of the site-wide shutdown (see below). The shutdown and especially the restart took time, so Fig. 2 shows just the effect of the chilled water system at EX. We also shut down SF4 only (the fan pushing 11,000 CFM at the CS) from 21:18 to 21:32 UTC, but did not see an improvement in range. We will continue with a focused study tomorrow in order to better understand the locations of the problems.

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:26, Monday 21 August 2023 (72349)DetChar, FMP, ISC
For Elenna's ASD of GDS-CALIB_STRAIN comparing the HVAC ON vs. OFF times that Robert mentions, see LHO:72297.
LHO General
thomas.shaffer@LIGO.ORG - posted 16:03, Thursday 17 August 2023 (72293)
Ops Day Shift Summary

TITLE: 08/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've been locked for 17 hours and just finished a 1.5-hour commissioning period. Robert did some HVAC on/off tests today (wp#11374) that resulted in some brief, large range gains. The optics lab dust counts have spiked a few times today while no one was in there. They quickly go back down, but we should keep an eye on this trend and address it if it continues.
LOG:

Start Time System Name Location Laser_Haz Task End Time
15:47 PEM Robert Site n HVAC testing 18:13
16:16 PEM/FAC Robert EY n Turning on/off chillers 16:29
17:06 FAC Cindi MY n Tech clean 18:29
17:46 FAC Karen Opt lab n Tech clean 18:30
18:30 CC Oli Opt lab n Dust monitor famis 18:50
21:37 CAL TJ CR n CAL measurement 22:07
21:37 SEI Jim CR n HAM1 FF test 22:55
21:56 PEM Robert EY n Shaker measurement 22:55

 

H1 General
anthony.sanchez@LIGO.ORG - posted 16:03, Thursday 17 August 2023 (72306)
Thursday Ops Eve Shift Start

TITLE: 08/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 13mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:
Inherited H1 with a 17-hour lock in NOMINAL_LOW_NOISE && OBSERVING.
HVAC turned back on, and I'll be watching the temps tonight.

H1 OpsInfo (CDS)
thomas.shaffer@LIGO.ORG - posted 15:41, Thursday 17 August 2023 (72305)
Script to announce live LLO Observing status, or general channel change

Control room users have asked more than once for some type of alert for when LLO goes into or out of Observing. I've made this, heavily based on our read-only LLO medm command. It creates an EPICS connection to LLO after you input your ligo.org password, and then uses our text-to-speech program (picotts) to announce any changes to L1:GRD-IFO_STATE (the LLO intention bit). I put it in userscripts, so:

To run it, enter llo_status_live in your terminal, then enter your password.
 
This uses a more general script that alerts on any change to a given channel. Great if you're waiting around for something to finish up. It lives in (userapps)/cds/common/scripts/alert_channel_change.py
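For reference, a minimal sketch of the idea (my illustration, not the actual script; it assumes pyepics is available and uses espeak as a stand-in for picotts):

#!/usr/bin/env python3
# Sketch only, not the userapps script: announce changes to one channel.
import subprocess
import time

import epics  # pyepics

CHANNEL = "L1:GRD-IFO_STATE"

def announce(text):
    print(text)
    subprocess.run(["espeak", text], check=False)  # stand-in for picotts

def on_change(pvname=None, char_value=None, **kwargs):
    announce(f"{pvname} is now {char_value}")

pv = epics.PV(CHANNEL, callback=on_change, auto_monitor=True)

while True:  # callbacks arrive on a background CA thread
    time.sleep(1)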
H1 General (DetChar, PEM)
richard.mccarthy@LIGO.ORG - posted 15:17, Thursday 17 August 2023 - last comment - 15:22, Thursday 17 August 2023(72303)
Shop Vac running in Mechanical Room

Robert and I went into the fan rooms to check on Supply Fan 4. When we entered the hall between the fan rooms, there was water on the floor. The last time this happened it was a clogged condensate drain, so we checked the Fan 1 and 2 room and, sure enough, found more water.

From about 2:30 PM until 3 PM PDT, we (Randy, Fil, and Richard) ran a shop vac outside the fan rooms where the condensate drains exit the enclosure. We were able to drain the condensate pans, so no additional water should run onto the floor.

We have left the water on the floor in the fan rooms; cleaning it up will be a Tuesday (maintenance) activity.

Thank you Randy for your assistance.

Comments related to this report
jenne.driggers@LIGO.ORG - 15:22, Thursday 17 August 2023 (72304)

While obviously this was critical maintenance that needed to be done regardless of IFO state, it happens that we were in commissioning mode (not Observe) during this time, doing driven calibration measurements. So there should be no effect on any Observing-mode data quality for this segment, and no need for any special DQ flags or investigations.

H1 CAL
thomas.shaffer@LIGO.ORG - posted 15:07, Thursday 17 August 2023 (72301)
CAL BB and Simulines run

Followed the usual instructions on wiki/TakingCalibrationMeasurements.

Simulines start GPS: 1376343784.859387
Simulines end GPS: 1376345111.425289
 

2023-08-17 22:04:53,037 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,057 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,068 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,079 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,089 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230817T214248Z.hdf5
 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 12:07, Thursday 17 August 2023 (72298)
Thu CP1 Fill

Thu Aug 17 10:08:19 2023 INFO: Fill completed in 8min 15secs

 

Images attached to this report
H1 ISC (PEM)
elenna.capote@LIGO.ORG - posted 11:38, Thursday 17 August 2023 - last comment - 13:15, Monday 28 August 2023(72297)
DARM with and without HVAC

Robert did an HVAC-off test. Here is a comparison of GDS CALIB STRAIN NOLINES from earlier in this lock and during the test. I picked both times off the range plot, choosing times with no glitches.

Improvements: removal of the 120 Hz jitter peak, an apparent reduction of the 52 Hz peak, and broadband noise reduction at low frequency (scatter noise?).

I have attached a second plot showing the low-frequency (1-10 Hz) spectrum of OMC DCPD SUM, which shows no appreciable change in the low-frequency portion of DARM from this test.
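For anyone wanting to reproduce this kind of comparison offline, a rough gwpy sketch (my illustration, not the DTT setup used for the attached plots; the channel name and times are assumptions based on the text above):

# Rough sketch: compare ASDs of the strain channel at two times.
# Assumes gwpy with NDS/frame access; times approximate the HVAC-off window.
from gwpy.timeseries import TimeSeries

chan = "H1:GDS-CALIB_STRAIN_NOLINES"  # assumed channel name
hvac_off = TimeSeries.get(chan, "2023-08-17 15:27", "2023-08-17 15:32")
hvac_on = TimeSeries.get(chan, "2023-08-17 12:00", "2023-08-17 12:05")

plot = hvac_on.asd(fftlength=16).plot(label="HVAC on")
ax = plot.gca()
ax.plot(hvac_off.asd(fftlength=16), label="HVAC off")
ax.set_xlim(10, 1000)
ax.legend()
plot.save("hvac_asd_comparison.png")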

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:57, Thursday 17 August 2023 (72302)DetChar, FMP, OpsInfo, PEM
Reminders from the summary pages as to why we got so much BNS range improvement from removing the 52 Hz and 120 Hz features shown in Elenna's ASD comparison.
Pulled from https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230817/lock/range/.

The range integrand shows ~15 and ~5 Mpc/rtHz reductions at the 52 and 120 Hz features.

The BNS range time series shows a brief ~15 Mpc improvement at 15:30 UTC during Robert's HVAC OFF tests.
Images attached to this comment
elenna.capote@LIGO.ORG - 11:50, Friday 18 August 2023 (72321)

Here is a spectrum of the MICH, PRCL, and SRCL error signals at the time of this test. The most visible change is the reduction of the 120 Hz jitter peak also seen in DARM. There might be some reduction in noisy peaks around 10-40 Hz in these signals, but the effect is small enough that it would be useful to repeat this test to see if we can trust the improvement.

Note: the spectra have strange shapes, I think related to some whitening or calibration effect that I haven't accounted for in making these plots. I know we have properly calibrated versions of the LSC spectra somewhere, but I am not sure where. For now these serve as a relative comparison.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:46, Monday 21 August 2023 (72352)DetChar, FMP, PEM
According to Robert's follow-up / debrief aLOG (LHO:72331) and the time stamps in the bottom left corner of Elenna's DTT plots, she is using the time 2023-08-17 15:27 UTC, which corresponds to when Robert had turned off all four of the supply fans (SF1, SF2, SF3, and SF4) in the corner station (CS) air handler units (AHU) 1 and 2 that supply the LVEA, around 2023-08-17 15:26 UTC.
jeffrey.kissel@LIGO.ORG - 13:15, Monday 28 August 2023 (72487)DetChar, PEM, SYS
My bad -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, while the times from Robert's LHO:72331 that I listed above are for 2023-Aug-18.

Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station, as I'd errantly claimed above).
H1 PSL
thomas.shaffer@LIGO.ORG - posted 10:07, Thursday 17 August 2023 (72296)
Added 100mL to PSL chiller

Fil informed me that the PSL chiller was alarming. Oli and I went out there; it had a low-level alarm. We added 100 mL and brought it back near the max level. The last fill logged was exactly a month ago.

H1 CDS
david.barker@LIGO.ORG - posted 08:38, Thursday 17 August 2023 (72295)
FMCS alarms while Robert runs his tests

Please disregard FMCS chiller alarms for the next hour while Robert runs his tests which require the chillers to be shut down for short periods.

H1 General
anthony.sanchez@LIGO.ORG - posted 00:15, Thursday 17 August 2023 - last comment - 16:19, Thursday 17 August 2023(72291)
Wednesday Ops Eve Shift End

TITLE: 08/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Lockloss 23:29 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376263782

A change was made to ISC_LOCK that will lead to an SDF Diff that will need to be accepted.

Trouble locking DRMI even after PRMI
Ran Initial Alignment.

Elevated Dust levels in Optics labs again.

Locking process started at 00:31 UTC.
1:16 UTC: made it to NOMINAL_LOW_NOISE.
1:35 UTC: made it to Observing.

Lockloss from NLN @ 2:22 UTC, which I at first attributed to a PI ring-up, followed by a series of locklosses at LOWNOISE_LENGTH_CONTROL. Edit: it was not certain at all, in fact.

Relocking went smoothly until we lost lock at LOWNOISE_LENGTH_CONTROL @ 3:11 UTC.

Relocking went through PRMI and took a while; lost lock at LOWNOISE_LENGTH_CONTROL again @ 4:14 UTC.

I have lost lock twice at LOWNOISE_LENGTH_CONTROL tonight. I am concerned that it may be due to alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262.

That alog describes a change to that state.
The counter-argument is that I got past this state earlier tonight, in my first lock of the night, which was AFTER ISC_LOCK was loaded.
I'm doing an initial alignment again tonight to see if it's just poor alignment instead, and to buy myself some time to investigate.
I posted my findings in Mattermost and in the lockloss alog above, then called in the commissioners to see if there was anything else I should look at.
Lines 5471 and 5472 of ISC_LOCK.py were changed, and with Danielle and Jenne's help pointing to line 5488 for another change that was reverted, locking went well from LOWNOISE_LENGTH_CONTROL all the way up to NLN.

See comments in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262 for specifics.

Made it to NOMINAL_LOW_NOISE @ 5:58 UTC
Made it to Observing @ 6:02 UTC

 

LOG: empty

Comments related to this report
thomas.shaffer@LIGO.ORG - 08:28, Thursday 17 August 2023 (72294)

I'm not seeing any PI modes ringing up during the 02:22 UTC lockloss, or during any of the other locklosses from yesterday.

Images attached to this comment
anthony.sanchez@LIGO.ORG - 16:19, Thursday 17 August 2023 (72307)

By "lockloss from NLN @ 2:22 UTC almost certainly because of a Pi ring up" I really mean I thought I had a smoking gun for that first lockloss yesterday, but just didn't understand the Arbitrary "y" cursors on the plots for the PI monitors.
My apologies to the PI team for making poor assumptions.

H1 General
anthony.sanchez@LIGO.ORG - posted 23:14, Tuesday 15 August 2023 - last comment - 22:12, Thursday 17 August 2023(72264)
FAMIS 25079

FAMIS 25079
In-lock SUS charge measurement
While searching for the files created by the in-lock SUS charge measurements, I noticed that there were multiples of a few of the files created today in the directory /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO:


ls -l  | grep "Aug 15"
-rw-r--r-- 1               1010 controls   160 Aug 15 07:50 ETMY_12_Hz_1376146243.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:22 ETMY_12_Hz_1376148152.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 07:50 ITMX_14_Hz_1376146241.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:22 ITMX_14_Hz_1376148154.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 07:50 ITMY_15_Hz_1376146220.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:08 ITMY_15_Hz_1376147322.txt
-rw-r--r-- 1               1010 controls   160 Aug 15 08:21 ITMY_15_Hz_1376148134.txt


Listing all files, filtering for only those containing the string ETMX, and then filtering those for "Aug 15", with the following command:
ls -l | grep "ETMX" | grep "Aug 15"

This returned no files, which means that while it looks like the measurement was run twice, it never completed ETMX.
I'm not sure whether the analysis will run without all the files.
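A quick way to check which optics have measurement files for a given day (a hypothetical helper of mine, not part of the measurement code):

# Hypothetical helper: list in-lock charge measurement files per optic
# for one date, to spot an incomplete set like the missing ETMX above.
from datetime import datetime
from pathlib import Path

rec = Path("/opt/rtcds/userapps/release/sus/common/scripts/quad/"
           "InLockChargeMeasurements/rec_LHO")
day = datetime(2023, 8, 15).date()

for optic in ("ETMX", "ETMY", "ITMX", "ITMY"):
    hits = sorted(p.name for p in rec.glob(f"{optic}_*Hz_*.txt")
                  if datetime.fromtimestamp(p.stat().st_mtime).date() == day)
    print(optic, hits if hits else "no files -- measurement incomplete?")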

SUS_CHARGE LOG:
2023-08-15_15:26:18.969345Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:26:53.512031Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:26:53.524359Z SUS_CHARGE RELOAD requested.  reloading system data...
2023-08-15_15:26:53.527151Z SUS_CHARGE Traceback (most recent call last):
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 566, in run
2023-08-15_15:26:53.527151Z     self.reload_system()
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 327, in reload_system
2023-08-15_15:26:53.527151Z     self.system.load()
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/system.py", line 400, in load
2023-08-15_15:26:53.527151Z     module = self._load_module()
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/system.py", line 287, in _load_module
2023-08-15_15:26:53.527151Z     self._module = self._import(self._modname)
2023-08-15_15:26:53.527151Z   File "/usr/lib/python3/dist-packages/guardian/system.py", line 159, in _import
2023-08-15_15:26:53.527151Z     module = _builtin__import__(name, *args, **kwargs)
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 1109, in __import__
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap_external>", line 786, in exec_module
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap_external>", line 923, in get_code
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap_external>", line 853, in source_to_code
2023-08-15_15:26:53.527151Z   File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
2023-08-15_15:26:53.527151Z   File "/opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py", line 67
2023-08-15_15:26:53.527151Z     ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L').ramp_gain(lscparams.ETMX_GND_MIN_DriveAlign_gain, ramp_time=20, wait=False)
2023-08-15_15:26:53.527151Z                                                                                                                                  ^
2023-08-15_15:26:53.527151Z IndentationError: unindent does not match any outer indentation level
2023-08-15_15:26:53.527151Z SUS_CHARGE LOAD ERROR: see log for more info (LOAD to reset)
2023-08-15_15:29:10.009828Z SUS_CHARGE LOAD REQUEST
2023-08-15_15:29:10.011001Z SUS_CHARGE RELOAD requested.  reloading system data...
2023-08-15_15:29:10.050137Z SUS_CHARGE module path: /opt/rtcds/userapps/release/sus/h1/guardian/SUS_CHARGE.py
2023-08-15_15:29:10.050393Z SUS_CHARGE user code: /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
2023-08-15_15:29:10.286761Z SUS_CHARGE system archive: code changes detected and committed
2023-08-15_15:29:10.331427Z SUS_CHARGE system archive: id: 9b481a54e45bfda96fa2f39f98978d76aa6ec7c0 (162824613)
2023-08-15_15:29:10.331427Z SUS_CHARGE RELOAD complete
2023-08-15_15:29:10.332868Z SUS_CHARGE calculating path: SWAP_TO_ITMX->INJECTIONS_COMPLETE
2023-08-15_15:29:14.129521Z SUS_CHARGE OP: EXEC
2023-08-15_15:29:14.129521Z SUS_CHARGE executing state: SWAP_TO_ITMX (11)
2023-08-15_15:29:14.135913Z SUS_CHARGE W: RELOADING @ SWAP_TO_ITMX.main
2023-08-15_15:29:14.158532Z SUS_CHARGE [SWAP_TO_ITMX.enter]
2023-08-15_15:29:14.276536Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_TRAMP => 10
2023-08-15_15:29:14.277081Z SUS_CHARGE [SWAP_TO_ITMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2023-08-15_15:29:17.820392Z SUS_CHARGE REQUEST: DOWN
2023-08-15_15:29:17.821281Z SUS_CHARGE calculating path: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:17.822235Z SUS_CHARGE new target: DOWN
2023-08-15_15:29:17.822364Z SUS_CHARGE GOTO REDIRECT
2023-08-15_15:29:17.822669Z SUS_CHARGE REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:29:17.824392Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.895303Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:17.958976Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.018262Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.079443Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.130595Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.197848Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.253456Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.318549Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.378993Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.446375Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.507978Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.576823Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.641493Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.695114Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.774571Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.822999Z SUS_CHARGE REDIRECT wait for worker completion...
2023-08-15_15:29:18.823662Z SUS_CHARGE REDIRECT timeout reached. worker terminate and reset...
2023-08-15_15:29:18.831141Z SUS_CHARGE worker terminated
2023-08-15_15:29:18.849938Z SUS_CHARGE W: initialized
2023-08-15_15:29:18.871834Z SUS_CHARGE W: EZCA v1.4.0
2023-08-15_15:29:18.872835Z SUS_CHARGE W: EZCA CA prefix: H1:
2023-08-15_15:29:18.872835Z SUS_CHARGE W: ready
2023-08-15_15:29:18.872980Z SUS_CHARGE worker ready
2023-08-15_15:29:18.883790Z SUS_CHARGE EDGE: SWAP_TO_ITMX->DOWN
2023-08-15_15:29:18.884081Z SUS_CHARGE calculating path: DOWN->DOWN
2023-08-15_15:29:18.886386Z SUS_CHARGE executing state: DOWN (2)
2023-08-15_15:29:18.891745Z SUS_CHARGE [DOWN.enter]
2023-08-15_15:29:18.893116Z Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
2023-08-15_15:29:20.216958Z SUS_CHARGE [DOWN.main] All nodes taken to DOWN, ISC_LOCK should have taken care of reverting settings.
 

ESD_EXC_ETMX LOG:
2023-08-01_15:07:01.324869Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-01_15:07:01.325477Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-01_15:07:01.324869Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-01_15:07:01.325477Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:02:53.269349Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:08:26.888655Z ESD_EXC_ETMX calculating path: DOWN->DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX REQUEST: DOWN
2023-08-15_15:29:20.255431Z ESD_EXC_ETMX calculating path: DOWN->DOWN


ESD_EXC_ITMX log:
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the measurement
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Ramping on bias on ITMX ESD
2023-08-15_15:22:16.034430Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:22:18.266457Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_SW1 => 8
2023-08-15_15:22:18.517569Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS => OFF: OFFSET
2023-08-15_15:22:18.518166Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 20
2023-08-15_15:22:18.518777Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1.0
2023-08-15_15:22:38.431399Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2.0
2023-08-15_15:22:41.264244Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_SW1S => 5124
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L => ONLY ON: INPUT, DECIMATION, FM4, FM5, OUTPUT
2023-08-15_15:22:41.515470Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] all done
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX EDGE: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX calculating path: COMPLETE->COMPLETE
2023-08-15_15:22:41.632738Z ESD_EXC_ITMX executing state: COMPLETE (30)
2023-08-15_15:22:41.636417Z ESD_EXC_ITMX [COMPLETE.enter]
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX calculating path: COMPLETE->DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX new target: DOWN
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX GOTO REDIRECT
2023-08-15_15:22:41.764636Z ESD_EXC_ITMX REDIRECT requested, timeout in 1.000 seconds
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX REDIRECT caught
2023-08-15_15:22:41.768046Z ESD_EXC_ITMX [COMPLETE.redirect]
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX EDGE: COMPLETE->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:22:41.824688Z ESD_EXC_ITMX executing state: DOWN (1)
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_on
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping bias_drive_bias_off
2023-08-15_15:22:41.827615Z ESD_EXC_ITMX [DOWN.main] Stopping L_drive_bias_off
2023-08-15_15:22:41.923244Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_on
2023-08-15_15:22:42.059154Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_on
2023-08-15_15:22:42.216133Z ESD_EXC_ITMX [DOWN.main] Clearing bias_drive_bias_off
2023-08-15_15:22:42.349505Z ESD_EXC_ITMX [DOWN.main] Clearing L_drive_bias_off
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX REQUEST: DOWN
2023-08-15_15:29:20.260953Z ESD_EXC_ITMX calculating path: DOWN->DOWN
2023-08-15_15:18:31.953103Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.enter]
2023-08-15_15:18:34.481594Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:18:34.482160Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:19:36.482043Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] timer['Injection duration'] done
2023-08-15_15:19:36.516842Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Injection finished
2023-08-15_15:19:38.908908Z ESD_EXC_ITMX [L_DRIVE_WITH_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:19:39.011256Z ESD_EXC_ITMX EDGE: L_DRIVE_WITH_BIAS->TURN_BIAS_OFF
2023-08-15_15:19:39.011836Z ESD_EXC_ITMX calculating path: TURN_BIAS_OFF->COMPLETE
2023-08-15_15:19:39.012099Z ESD_EXC_ITMX new target: BIAS_DRIVE_NO_BIAS
2023-08-15_15:19:39.018534Z ESD_EXC_ITMX executing state: TURN_BIAS_OFF (15)
2023-08-15_15:19:39.019024Z ESD_EXC_ITMX [TURN_BIAS_OFF.enter]
2023-08-15_15:19:39.019710Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] Ramping off bias on ITMX ESD
2023-08-15_15:19:39.020547Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 0
2023-08-15_15:19:58.934813Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_OFFSET => 0
2023-08-15_15:19:58.935544Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_TRAMP => 2
2023-08-15_15:19:58.935902Z ESD_EXC_ITMX [TURN_BIAS_OFF.main] ezca: H1:SUS-ITMX_L3_LOCK_BIAS_GAIN => 1
2023-08-15_15:20:01.140528Z ESD_EXC_ITMX EDGE: TURN_BIAS_OFF->BIAS_DRIVE_NO_BIAS
2023-08-15_15:20:01.141391Z ESD_EXC_ITMX calculating path: BIAS_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:20:01.142015Z ESD_EXC_ITMX new target: L_DRIVE_NO_BIAS
2023-08-15_15:20:01.143337Z ESD_EXC_ITMX executing state: BIAS_DRIVE_NO_BIAS (16)
2023-08-15_15:20:01.144372Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.enter]
2023-08-15_15:20:03.673255Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:20:03.673786Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:21:05.674028Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:21:05.697880Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:21:07.987796Z ESD_EXC_ITMX [BIAS_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_LOCK_BIAS_EXC
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX EDGE: BIAS_DRIVE_NO_BIAS->L_DRIVE_NO_BIAS
2023-08-15_15:21:08.072581Z ESD_EXC_ITMX calculating path: L_DRIVE_NO_BIAS->COMPLETE
2023-08-15_15:21:08.073301Z ESD_EXC_ITMX new target: RESTORE_SETTINGS
2023-08-15_15:21:08.076744Z ESD_EXC_ITMX executing state: L_DRIVE_NO_BIAS (17)
2023-08-15_15:21:08.079417Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.enter]
2023-08-15_15:21:10.597939Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] Starting 14Hz Sine injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:21:10.598481Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.main] timer['Injection duration'] = 62
2023-08-15_15:22:12.598413Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] timer['Injection duration'] done
2023-08-15_15:22:12.633547Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Injection finished
2023-08-15_15:22:15.937968Z ESD_EXC_ITMX [L_DRIVE_NO_BIAS.run] Stopping injection on H1:SUS-ITMX_L3_DRIVEALIGN_L2L_EXC
2023-08-15_15:22:16.018077Z ESD_EXC_ITMX EDGE: L_DRIVE_NO_BIAS->RESTORE_SETTINGS
2023-08-15_15:22:16.018395Z ESD_EXC_ITMX calculating path: RESTORE_SETTINGS->COMPLETE
2023-08-15_15:22:16.018676Z ESD_EXC_ITMX new target: COMPLETE
2023-08-15_15:22:16.019499Z ESD_EXC_ITMX executing state: RESTORE_SETTINGS (25)
2023-08-15_15:22:16.019891Z ESD_EXC_ITMX [RESTORE_SETTINGS.enter]
2023-08-15_15:22:16.020220Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Finished with all excitations
2023-08-15_15:22:16.033260Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Saved GPS times in logfile: /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO/ITMX_14_Hz_1376148154.txt
2023-08-15_15:22:16.033411Z ESD_EXC_ITMX [RESTORE_SETTINGS.main] Restoring things to the way they were before the me

Comments related to this report
camilla.compton@LIGO.ORG - 14:55, Wednesday 16 August 2023 (72280)

These are both valid charge measurements; we could analyze either or both (and check the answers are the same). We repeated the measurements while troubleshooting the issue in 72219. We have now fixed the issue (a typo) in SUS_CHARGE that was preventing the last ETMX measurement from being taken.

anthony.sanchez@LIGO.ORG - 22:12, Thursday 17 August 2023 (72310)

I just analyzed the first batch of in-lock charge measurements.

There are 13-14 plot points on most of the other plots but only 10 for ETMX.
 

Images attached to this comment
H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 13:49, Tuesday 08 August 2023 - last comment - 14:15, Thursday 17 August 2023(72063)
Movement of Pcal Xend upper beam position to test impact on Pcal X/Y comparison

Julianna Lewis, TonyS, RickS

This morning we moved the upper (inner) Pcal beam at X-end down by 5 mm at the entrance aperture of the Pcal Rx sensor to test the impact on the calibration of the Pcal system at X-end.

We expect that the impact on the calibration of the Xend Pcal will be given by the dot product of the Pcal beam displacement vector and the interferometer beam offset vector (see the comment below for the expression).

The work proceeds as follows:

We expect that this upper beam movement will change the unintended rotation of the test mass and thereby change the calibration of the Rx output by about 0.2%. This assumes that we have moved at roughly 45 deg. with respect to the roughly 22 mm interferometer beam offset, and that the offset on the surface of the ETM is roughly half of that seen at the Rx module: (-2.5 mm / 2) x (22 mm x 0.707) x 0.94 hop/mm^2 = -18 hop (hundredths of one percent). So we expect the X/Y comparison factor to change from about 1.0000 to 0.9982.
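A back-of-the-envelope restatement of that arithmetic (my own check; the factor assignments are as I read them, with 0.94 hop/mm^2 standing in for M/I in these units):

# Back-of-the-envelope check of the quoted -18 hop estimate.
c_y = -2.5        # mm: beam moved 5 mm at the Rx aperture -> ~2.5 mm on the ETM
b_y = 22 * 0.707  # mm: ~22 mm IFO beam offset projected at ~45 deg
coupling = 0.94   # hop/mm^2 (hundredths of a percent per mm^2)

delta = 0.5 * c_y * b_y * coupling  # (M/2I) * c_y * b_y, in hop
print(f"{delta:.1f} hop -> X/Y ratio ~ {1 + delta / 1e4:.4f}")
# -18.3 hop -> X/Y ratio ~ 0.9982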

Images attached to this report
Comments related to this report
richard.savage@LIGO.ORG - 14:15, Thursday 17 August 2023 (72300)

The sign of the expected change in the X/Y calibration ratio is opposite to what is written in this entry.

The change observed (after minus before) should be given by (1/2)(M/I) \vec{c} \cdot \vec{b}. For a vertical displacement of a Pcal beam (c_y), this reduces to (M/2I) x c_y x b_y. Thus, a reduction in the X/Y ratio indicates that the sign of the interferometer beam offset is opposite to that of the Pcal beam displacement.

If we move the Pcal beam down and observe a decrease in the X/Y ratio, it would indicate that the interferometer beam is displaced from center in the upward direction.

See this aLog entry

H1 CAL
vladimir.bossilkov@LIGO.ORG - posted 08:29, Friday 28 July 2023 - last comment - 12:31, Tuesday 12 December 2023(71787)
H1 Systematic Uncertainty Patch due to misapplication of calibration model in GDS

This was first observed as a persistent mis-calibration in the systematic error monitoring Pcal lines, which measure PCAL / GDS-CALIB_STRAIN, affecting both LLO and LHO [LLO Link] [LHO Link], characterised by these measurements consistently disagreeing with the uncertainty envelope.
It is presently understood that this arises from bugs in the code producing the GDS FIR filters, which introduce a sizeable discrepancy; Joseph Betzwieser is spear-heading a thorough investigation to correct this.

I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ / GDS-CALIB_STRAIN, where the numerator is further corrected for the kappa values of the sensing, the cavity pole, and the 3 actuation stages (GDS applies the same corrections internally). This gives a transfer function of the difference induced by errors in the GDS filters.
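In outline, the measurement amounts to something like this gwpy sketch (my illustration, not the actual analysis code; the kappa corrections to the numerator are omitted, and the times are taken from Jeff's comment below):

# Outline only: transfer function between the front-end DARM error channel
# and the GDS strain output. The kappa corrections applied in the real
# measurement are omitted here.
from gwpy.timeseries import TimeSeriesDict

start = 1374469418  # thermalized reference time (see comment below)
end = start + 384   # 8 averages of 48 s FFTs

data = TimeSeriesDict.get(
    ["H1:CAL-DARM_ERR_DBL_DQ", "H1:GDS-CALIB_STRAIN"], start, end)
num = data["H1:CAL-DARM_ERR_DBL_DQ"]
den = data["H1:GDS-CALIB_STRAIN"]

# rough estimate of num/den (mind the CSD conjugation convention)
tf = num.csd(den, fftlength=48) / den.psd(fftlength=48)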

Attached to this aLog, and to its sibling aLog at LLO, are this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Also attached is a text file of this uncertainty correction for application in pyDARM to produce the final uncertainty, in the format [Frequency, Real, Imaginary].
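The attached text file can be read back into a complex correction like so (a sketch, assuming the [Frequency, Real, Imaginary] format above; the filename is hypothetical):

import numpy as np

freq, re, im = np.loadtxt("H1_gds_correction.txt", unpack=True)
eta = re + 1j * im  # multiply the modeled response by this in pyDARM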

Images attached to this report
Non-image files attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 15:33, Friday 28 July 2023 (71798)

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).

Images attached to this comment
ling.sun@LIGO.ORG - 13:02, Thursday 17 August 2023 (72299)

After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.

The command is:

python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config  ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config  ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/

The uncertainty is estimated at 1374612632 (spanning 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (spanning an hour).

 

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:01, Wednesday 13 September 2023 (72871)
J. Kissel, J. Betzwieser

FYI: The time Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is
    2023-07-27 05:03:20 UTC
    2023-07-26 22:03:20 PDT
    GPS 1374469418

This is a time when the IFO was well thermalized.

The values used for the TDCFs at this time were
    \kappa_C  = 0.97764456
    f_CC      = 444.32712 Hz
    \kappa_U  = 1.0043616 
    \kappa_P  = 0.9995768
    \kappa_T  = 1.0401824

The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).

Note these TDCF values listed above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the values exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See the attached .pdf, which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
 - is bad during that time (LHO:72812), and
 - is unimpactful w.r.t. the overall calibration.
So the facts that
    :: the GDS calculation is frozen,
    :: the CALCS calculation is noisy but, coincidentally, quite close to the frozen GDS value, and
    :: the ~25 minute mean of the CALCS value is actually around ~0.98 rather than the instantaneous 1.019
are inconsequential to Vlad's conclusions.

Non-image files attached to this comment
louis.dartez@LIGO.ORG - 00:54, Tuesday 12 December 2023 (74747)
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad's (orange), and the modeled correction evaluated using a dense frequency spacing (blue); see eta_3p2khz_correction.png. The denser frequency spacing recovers an error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction does quite well.
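For intuition, the response of a single missing real pole looks like the following (a sketch under my assumption that the correction is exactly a one-pole response at 3.2 kHz; the attached file is the authoritative modeled correction):

import numpy as np

f = np.logspace(1, np.log10(5000), 500)  # 10 Hz to 5 kHz
f_pole = 3200.0                          # Hz
eta = 1.0 / (1.0 + 1j * f / f_pole)      # ~1-2% magnitude deviation below 1 kHz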
Images attached to this comment
Non-image files attached to this comment
ling.sun@LIGO.ORG - 12:31, Tuesday 12 December 2023 (74758)

The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023); see LHO:72135.

H1 CAL (CAL)
madeline.wade@LIGO.ORG - posted 11:21, Wednesday 21 June 2023 - last comment - 06:56, Friday 18 August 2023(70666)
Tool for plotting calibration systematic error line transfer function measurements with uncertainty estimate

[M. Wade, L. Wade]

The CalMonitor git repository contains a tool that can be used to plot the systematic error line transfer function measurements (see aLOG 69285) alongside a given calibration uncertainty estimate. This tool has been used to plot the transfer function of PCAL/GDS-CALIB_STRAIN at each line frequency alongside the uncertainty estimate, and it can now also be used to plot the transfer function of PCAL/CAL-DELTAL_EXTERNAL_DQ (which is the inverse of what is computed in the front-end channels referenced in aLOG 69285).

In order to plot the transfer function of PCAL/CAL-DELTAL_EXTERNAL_DQ alongside an uncertainty estimate, you will need access to the data via NDS, which can be obtained on any of the LDAS machines.  The script for producing the plot is called calunc_consistency_monitor and can be run with the following command line:

$ python /path/to/calmonitor/bin/calunc_consistency_monitor --cal-consistency-config /path/to/calmonitor/config/calunc_consistency_configs_CALCS_H1.ini --start-time GPSSTARTTIME --end-time GPSENDTIME --uncertainty-file /path/to/file/uncertaintyfile.txt --output-dir /path/to/output/directory

In order to plot the transfer function of PCAL/GDS-CALIB_STRAIN alongside an uncertainty estimate, you will need access to the data stored in InfluxDB on the respective site clusters.  Please reach out to Maddie if you would like to know how to obtain these credentials.  The command for producing this plot is:

$ python /path/to/calmonitor/bin/calunc_consistency_monitor --scald-config /path/to/calmonitor/config/scald_config.yml --cal-consistency-config /path/to/calmonitor/config/calunc_consistency_configs_H1.ini --start-time GPSSTARTTIME --end-time GPSENDTIME --uncertainty-file /path/to/file/uncertaintyfile.txt --output-dir /path/to/file/uncertaintyfile.txt

The IFO environment variable must be set before launching the script. This can be done with

$ export IFO=H1

The config files in the git repository referenced above (the GDS strain config file and the CALCS strain config file) are configured to include transfer function measurements at a given line frequency only when the uncertainty of the measurement, as derived from the coherence, is less than 2%.
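That coherence-derived uncertainty can be estimated with the standard Bendat & Piersol formula; a sketch of the 2% cut (my illustration of the criterion, not the CalMonitor code):

import numpy as np

def tf_relative_uncertainty(g2, n_avg):
    # relative 1-sigma uncertainty of a TF magnitude estimated from
    # n_avg averages at (magnitude-squared) coherence g2
    return np.sqrt((1.0 - g2) / (2.0 * n_avg * g2))

# keep a line-frequency measurement only if its uncertainty is below 2%
g2, n_avg = 0.999, 10
keep = tf_relative_uncertainty(g2, n_avg) < 0.02  # True here (~0.7%)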

The script will write a plot to the specified output directory with the filename convention uncertainty_consistency_check_GPSSTARTTIME_GPSENDTIME_STRAINCHANNELNAME.png. Attached are plots of the PCAL/STRAIN transfer function measurements at the systematic error lines for both the CAL-DELTAL_EXTERNAL_DQ strain channel, with corrections applied, and the GDS-CALIB_STRAIN channel, for GPS times 1371391575-1371395175, alongside the uncertainty budget found in calibration_uncertainty_H1_1371405027.txt.

The shaded dots on top of the uncertainty envelope are the transfer function measurements at each measured line frequency. If the dots are colored red, fewer than 68% of the measurements lie inside the 68% confidence interval for this time period. If the dots are colored green, at least 68% of the measurements lie inside the 68% confidence interval for this time period.

Images attached to this report
Comments related to this report
madeline.wade@LIGO.ORG - 06:56, Friday 18 August 2023 (72312)

Just fixing a typo in the above alog for the record. Corrected lines below:

In order to plot the transfer function of PCAL/GDS-CALIB_STRAIN alongside an uncertainty estimate, you will need access to the data stored in InfluxDB on the respective site clusters.  Please reach out to Maddie if you would like to know how to obtain these credentials.  The command for producing this plot is:

$ python /path/to/calmonitor/bin/calunc_consistency_monitor --scald-config /path/to/calmonitor/config/scald_config.yml --cal-consistency-config /path/to/calmonitor/config/calunc_consistency_configs_H1.ini --start-time GPSSTARTTIME --end-time GPSENDTIME --uncertainty-file /path/to/file/uncertaintyfile.txt --output-dir /path/to/file/output/directory
