H1 General
anthony.sanchez@LIGO.ORG - posted 16:03, Thursday 17 August 2023 (72306)
Thursday Ops Eve Shift Start

TITLE: 08/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 13mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:
Inherited H1 with a 17-hour lock in NOMINAL_LOW_NOISE && OBSERVING.
HVAC turned back on, and I'll be watching the temps tonight.

H1 OpsInfo (CDS)
thomas.shaffer@LIGO.ORG - posted 15:41, Thursday 17 August 2023 (72305)
Script to announce live LLO Observing status, or general channel change

Control room users have asked more than once for some type of alert for when LLO goes into or out of Observing. I've made one, heavily based on our read-only LLO medm command. It creates an EPICS connection to LLO after you input your ligo.org password, and then uses our text-to-speech program (picotts) to announce any changes to L1:GRD-IFO_STATE (the intention bit). I put it in userscripts:

To run it, enter llo_status_live into your terminal and then enter your password.
 
This uses a more general script that alerts on any channel changes for a given channel. Great if you're waiting around for something to finish up. It lives in (userapps)/cds/common/scripts/alert_channel_change.py
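For context, here is a minimal sketch of how a channel-change announcer like this can be structured. Assumptions: pyepics for the EPICS connection and a pico2wave/aplay pair standing in for the picotts wrapper; this is not the actual llo_status_live or alert_channel_change.py code, and it omits the ligo.org authentication step described above.

    # Hedged sketch only -- not the production script.
    import subprocess
    import tempfile
    import time
    import epics  # pyepics

    def announce(text):
        # Speak `text` via pico2wave + aplay (assumed TTS backend).
        with tempfile.NamedTemporaryFile(suffix='.wav') as wav:
            subprocess.run(['pico2wave', '-w', wav.name, text], check=True)
            subprocess.run(['aplay', wav.name], check=True)

    def on_change(pvname=None, value=None, **kw):
        # pyepics fires this callback whenever the monitored channel changes.
        announce('{} changed to {}'.format(pvname, value))

    pv = epics.PV('L1:GRD-IFO_STATE')
    pv.add_callback(on_change)
    while True:
        time.sleep(1)  # keep the process alive so callbacks keep firing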
H1 General (DetChar, PEM)
richard.mccarthy@LIGO.ORG - posted 15:17, Thursday 17 August 2023 - last comment - 15:22, Thursday 17 August 2023(72303)
Shop Vac running in Mechanical Room

Robert and I went into the fan rooms to check on Supply Fan 4.  When we entered the hall between the fan rooms there was water on the floor.  The last time this happened it was a clogged condensate drain, so we checked the Fan 1 and 2 room and, sure enough, found more water.

From 2:30 PM until 3 PM PST we (Randy, Fil, and Richard) ran a shop vac outside the fan rooms where the condensate drains exit the enclosure.  We were able to drain the condensate pans, so no additional water should run onto the floor.

We have left the water on the floor in the fan rooms; cleaning it up will be a Tuesday activity.

Thank you Randy for your assistance.

Comments related to this report
jenne.driggers@LIGO.ORG - 15:22, Thursday 17 August 2023 (72304)

While obviously this is critical maintenance that needed to be done regardless of IFO state, it happens that we were in Commissioning mode (not Observe) during this time, doing driven calibration measurements.  So, there should be no effect on any Observing mode data quality for this segment, and no need for any special DQ flags or investigations.

H1 CAL
thomas.shaffer@LIGO.ORG - posted 15:07, Thursday 17 August 2023 (72301)
CAL BB and Simulines run

Followed the usual instructions on wiki/TakingCalibrationMeasurements.

Simulines start GPS: 1376343784.859387
Simulines end GPS: 1376345111.425289
 

2023-08-17 22:04:53,037 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,057 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,068 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,079 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230817T214248Z.hdf5
2023-08-17 22:04:53,089 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230817T214248Z.hdf5
 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 12:07, Thursday 17 August 2023 (72298)
Thu CP1 Fill

Thu Aug 17 10:08:19 2023 INFO: Fill completed in 8min 15secs

 

Images attached to this report
H1 ISC (PEM)
elenna.capote@LIGO.ORG - posted 11:38, Thursday 17 August 2023 - last comment - 13:15, Monday 28 August 2023(72297)
DARM with and without HVAC

Robert did an HVAC off test. Here is a comparison of GDS CALIB STRAIN NOLINES from earlier on in this lock and during the test. I picked both times off the range plot from a time with no glitches.

Improvement from removal of 120 Hz jitter peak, apparent reduction of 52 Hz peak, and broadband noise reduction at low frequency (scatter noise?).

I have attached a second plot showing the low frequency (1-10 Hz) spectrum of OMC DCPD SUM, showing no appreciable change in the low frequency portion of DARM from this test.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:57, Thursday 17 August 2023 (72302)DetChar, FMP, OpsInfo, PEM
Reminders from the summary pages as to why we got so much BNS range improvement from removing the 52 Hz and 120 Hz features shown in Elenna's ASD comparison.
Pulled from https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230817/lock/range/.

Range integrand shows ~15 and ~5 Mpc/rtHz reduction at the 52 and 120 Hz features.

BNS range time series shows brief ~15 Mpc improvement at 15:30 UTC during Robert's HVAC OFF tests.
Images attached to this comment
elenna.capote@LIGO.ORG - 11:50, Friday 18 August 2023 (72321)

Here is a spectrum of the MICH, PRCL, and SRCL error signals at the time of this test. The most visible change is the reduction of the 120 Hz jitter peak also seen in DARM. There might be some reduction in noisy peaks around 10-40 Hz in these signals, but the effect is small enough that it would be worth repeating this test to see if we can trust that improvement.

Note: the spectra have strange shapes, I think related to some whitening or calibration effect that I haven't accounted for in making these plots. I know we have properly calibrated versions of the LSC spectra somewhere, but I am not sure where. For now these serve as a relative comparison.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:46, Monday 21 August 2023 (72352)DetChar, FMP, PEM
According to Robert's follow-up / debrief aLOG (LHO:72331) and the timestamps in the bottom left corner of Elenna's DTT plots, she is using the time 2023-08-17 15:27 UTC, which corresponds to when Robert turned off all four of the supply fans (SF1, SF2, SF3, and SF4) in the corner station (CS) air handler units (AHU) 1 and 2 that supply the LVEA, around 2023-08-17 15:26 UTC.
jeffrey.kissel@LIGO.ORG - 13:15, Monday 28 August 2023 (72487)DetChar, PEM, SYS
My bad -- Elenna's aLOG shows the sensitivity on 2023-Aug-17, while the times from Robert's LHO:72331 listed above are for 2023-Aug-18.

Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e., not just the corner station as I'd erroneously claimed above).
H1 PSL
thomas.shaffer@LIGO.ORG - posted 10:07, Thursday 17 August 2023 (72296)
Added 100mL to PSL chiller

Fil informed me that the PSL chiller was alarming. Oli and I went out there and it had a low level alarm. We added 100mL and brought it back near the max level. Last log was from exactly a month ago.

H1 CDS
david.barker@LIGO.ORG - posted 08:38, Thursday 17 August 2023 (72295)
FMCS alarms while Robert runs his tests

Please disregard FMCS chiller alarms for the next hour while Robert runs his tests which require the chillers to be shut down for short periods.

LHO General
thomas.shaffer@LIGO.ORG - posted 07:55, Thursday 17 August 2023 (72292)
Ops Day Shift Start

TITLE: 08/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 6mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: Locked for 9 hours, range has a few points almost touching 160Mpc. Nice.

H1 General
anthony.sanchez@LIGO.ORG - posted 00:15, Thursday 17 August 2023 - last comment - 16:19, Thursday 17 August 2023(72291)
Wednesday Ops Eve Shift End

TITLE: 08/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Lockloss 23:29 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376263782

A change was made to ISC_LOCK that will lead to an SDF Diff that will need to be accepted.

Trouble locking DRMI even after PRMI
Ran Initial Alignment.

Elevated Dust levels in Optics labs again.

Locking process started at 00:31 UTC
01:16 UTC made it to NOMINAL_LOW_NOISE
01:35 UTC made it to Observing

Lockloss from NLN @ 2:22 UTC, almost certainly because of a PI ring-up, and a series of locklosses at LOWNOISE_LENGTH_CONTROL. Edit*: It was not certain at all, in fact.

Relocking went smoothly until we lost lock at LOWNOISE_LENGTH_CONTROL @ 3:11 UTC.

Relocking went through PRMI and took a while; lost lock at LOWNOISE_LENGTH_CONTROL again @ 4:14 UTC.

I have lost lock twice at LOWNOISE_LENGTH_CONTROL tonight. I am concerned that it may be due to alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262,

which is where a change was made to that state.
The counterargument is that I got past this state earlier in my first lock of the night, which was AFTER ISC_LOCK was loaded.
I'm doing an initial alignment again tonight to see if it's just poor alignment instead, and to buy myself some time to investigate.
I posted my findings in Mattermost and in the lockloss alog above, then called in the commissioners to see if there was anything else I should look at.
Lines 5471 and 5472 were changed, and with Danielle and Jenne pointing to line 5488 for another change that was reverted in ISC_LOCK.py, locking went well from LOWNOISE_LENGTH_CONTROL all the way up to NLN.

See comments in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262 for specifics.

Made it to NOMINAL_LOW_NOISE @ 5:58 UTC
Made it to Observing @ 6:02 UTC

 

LOG: empty

Comments related to this report
thomas.shaffer@LIGO.ORG - 08:28, Thursday 17 August 2023 (72294)

I'm not seeing any PI modes coming up during the 02:22 UTC lockloss, or any of the other locklosses from yesterday.

Images attached to this comment
anthony.sanchez@LIGO.ORG - 16:19, Thursday 17 August 2023 (72307)

By "lockloss from NLN @ 2:22 UTC almost certainly because of a PI ring-up" I really mean I thought I had a smoking gun for that first lockloss yesterday, but just didn't understand the arbitrary "y" cursors on the plots for the PI monitors.
My apologies to the PI team for making poor assumptions.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:27, Wednesday 16 August 2023 - last comment - 22:27, Wednesday 16 August 2023(72287)
Wednesday Lockloss #1

Initial unknown lockloss from NLN (maybe PI related?):
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376274187
ScreenShot 1 ScreenShot 2 ScreenShot 3
2023-08-17_02:05:06.841133Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:05:12.031195Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:05:22.397983Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:15:15.962160Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:15:21.158619Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SUS_PI: has notification
2023-08-17_02:22:49.089168Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: SQZ_MANAGER: has notification
2023-08-17_02:22:49.343171Z ISC_LOCK [NOMINAL_LOW_NOISE.run] Unstalling IMC_LOCK
2023-08-17_02:22:49.571864Z ISC_LOCK JUMP target: LOCKLOSS
2023-08-17_02:22:49.578399Z ISC_LOCK [NOMINAL_LOW_NOISE.exit]
2023-08-17_02:22:49.629938Z ISC_LOCK JUMP: NOMINAL_LOW_NOISE->LOCKLOSS
2023-08-17_02:22:49.629938Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2023-08-17_02:22:49.632228Z ISC_LOCK new target: DOWN
2023-08-17_02:22:49.636266Z ISC_LOCK executing state: LOCKLOSS (2)
2023-08-17_02:22:49.647637Z ISC_LOCK [LOCKLOSS.enter]


Followed by a lockloss at LOWNOISE_LENGTH_CONTROL

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1376277430


2023-08-17_03:16:40.676453Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_13 => -1
2023-08-17_03:16:40.676886Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCL1_OFFSET => 0
2023-08-17_03:16:40.677417Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-MICHFF_TRAMP => 10
2023-08-17_03:16:40.677727Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCLFF1_TRAMP => 10
2023-08-17_03:16:40.677943Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] timer['wait'] = 10
2023-08-17_03:16:50.678146Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-17_03:16:50.717927Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICHFF_GAIN => 1
2023-08-17_03:16:50.718315Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCLFF1_GAIN => 1
2023-08-17_03:16:50.718524Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] = 10
2023-08-17_03:16:52.342688Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] Unstalling OMC_LOCK
2023-08-17_03:16:52.344038Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] Unstalling IMC_LOCK
2023-08-17_03:16:52.576674Z ISC_LOCK JUMP target: LOCKLOSS
2023-08-17_03:16:52.576674Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.exit]
2023-08-17_03:16:52.634809Z ISC_LOCK JUMP: LOWNOISE_LENGTH_CONTROL->LOCKLOSS
2023-08-17_03:16:52.634809Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2023-08-17_03:16:52.635554Z ISC_LOCK new target: DOWN
2023-08-17_03:16:52.674469Z ISC_LOCK executing state: LOCKLOSS (2)
2023-08-17_03:16:52.674675Z ISC_LOCK [LOCKLOSS.enter]

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 22:16, Wednesday 16 August 2023 (72288)

Relocking went through PRMI and took a while; lost lock at LOWNOISE_LENGTH_CONTROL again @ 4:14 UTC.


2023-08-17_04:13:58.490058Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_11 => -1
2023-08-17_04:13:58.490906Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_13 => 1
2023-08-17_04:13:58.491239Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_13 => -1
2023-08-17_04:13:58.491568Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCL1_OFFSET => 0
2023-08-17_04:13:58.492108Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-MICHFF_TRAMP => 10
2023-08-17_04:13:58.492395Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCLFF1_TRAMP => 10
2023-08-17_04:13:58.492619Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] timer['wait'] = 10
2023-08-17_04:14:08.492962Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-17_04:14:08.528400Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICHFF_GAIN => 1
2023-08-17_04:14:08.528870Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCLFF1_GAIN => 1
2023-08-17_04:14:08.529186Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] = 10
2023-08-17_04:14:10.595463Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] Unstalling IMC_LOCK
2023-08-17_04:14:10.830018Z ISC_LOCK JUMP target: LOCKLOSS
2023-08-17_04:14:10.830018Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.exit]
2023-08-17_04:14:10.893208Z ISC_LOCK JUMP: LOWNOISE_LENGTH_CONTROL->LOCKLOSS
2023-08-17_04:14:10.893208Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2023-08-17_04:14:10.895425Z ISC_LOCK new target: DOWN
2023-08-17_04:14:10.899923Z ISC_LOCK executing state: LOCKLOSS (2)
2023-08-17_04:14:10.900490Z ISC_LOCK [LOCKLOSS.enter]


I have lost lock twice at LOWNOISE_LENGTH_CONTROL tonight. I am concerned that it may be due to alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72262,

which is where a change was made to that state.
The counterargument is that I got past this state earlier in my first lock of the night, which was AFTER ISC_LOCK was loaded.
I'm doing an initial alignment again tonight to see if it's just poor alignment instead.
If I still cannot get past LOWNOISE_LENGTH_CONTROL after that, I will revert the changes mentioned in the alog above.

 

H1 ISC
elenna.capote@LIGO.ORG - posted 22:06, Tuesday 15 August 2023 - last comment - 23:13, Wednesday 16 August 2023(72262)
Failed attempt to debug Lownoise Length Control

Despite some great efforts from Jenne and Jeff to track down the source, we are still seeing a 102 Hz line rung up right at the end of the lownoise_length_control state. Since we had a random lockloss, I asked Tony to take us to lownoise_esd_etmx and I tried walking through lownoise length control by hand (copying the guardian code line by line into the shell).

The lines 5427-5468 ramp various gains to zero, set up the filters and drive matrix for the LSC feedforward, and prepare for the SRCL offset. These lines run fine and do not ring up the 102 Hz line.

I was able to run the first action line of the run state, which sets the MICH FF gain to 1 (line 5480). This ran fine, no 102 Hz line. Then I ran the next line to turn on the SRCL FF gain (line 5481). This caused an immediate lockloss (huh?), despite the fact that this code has run many times just fine.

On the next lock attempt, I tried running the MICH and SRCL gain lines at the exact same time. Also immediate lockloss.

I have no idea why this is such an issue. All it does is ramp the gains to 1 (the tramps are set on a previous line to 3 seconds).

Both of these locklosses seem to ring up a test mass bounce mode, suggesting that the SRCL FF (I assume) is kicking a test mass pretty hard.

This might be a red herring, or maybe it's a clue. I don't see any 102 Hz line during these locklosses though.

The offending lines:

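            # Turn on the MICH and SRCL feedforward; the TRAMPs were set to 3 seconds earlier in the state, so these gains ramp on over 3 s.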
            ezca['LSC-MICHFF_GAIN']  = lscparams.gain['MICHFF']
            ezca['LSC-SRCLFF1_GAIN'] = lscparams.gain['SRCLFF1'] * lscparams.dc_readout['sign']
Comments related to this report
elenna.capote@LIGO.ORG - 22:50, Tuesday 15 August 2023 (72263)

I think it's pretty clear that this is an LSC feedforward problem. I attached two ndscopes of the ETMX L3 master outs, one zoomed in and one zoomed out. The massive oscillation in the signal is the 102 Hz line, which I first begin to see in the time series starting at 05:24:32 UTC and some milliseconds. This corresponds exactly to the time in the guardian log when the LSC feedforward gain is ramped on (see the copied guardian log below).

2023-08-16_05:24:22.551541Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_10 => 1
2023-08-16_05:24:22.551849Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_10 => -1
2023-08-16_05:24:22.552715Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_11 => 1
2023-08-16_05:24:22.553143Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_11 => -1
2023-08-16_05:24:22.554026Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_1_13 => 1
2023-08-16_05:24:22.554332Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-OUTPUT_MTRX_2_13 => -1
2023-08-16_05:24:22.554746Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] ezca: H1:LSC-SRCL1_OFFSET => 0
2023-08-16_05:24:22.555266Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.main] timer['wait'] = 10
2023-08-16_05:24:32.555489Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-16_05:24:32.593316Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICHFF_GAIN => 1
2023-08-16_05:24:32.594884Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCLFF1_GAIN => 1

2023-08-16_05:24:32.595219Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] = 1
2023-08-16_05:24:33.595397Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] timer['wait'] done
2023-08-16_05:24:33.660429Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-SRCL1_GAIN => -7.5
2023-08-16_05:24:33.660711Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-PRCL1_GAIN => 10.0
2023-08-16_05:24:33.661306Z ISC_LOCK [LOWNOISE_LENGTH_CONTROL.run] ezca: H1:LSC-MICH1_GAIN => 3.2
Images attached to this comment
elenna.capote@LIGO.ORG - 16:38, Wednesday 16 August 2023 (72282)OpsInfo

I have added two new lines to lownoise_length_control that increase the LSC FF ramp time from 3 to 10 seconds. These lines are at the end of the main state, right before the run state. I also increased the timer in the first step of the run state to wait 10 seconds after the FF gains are set, before moving to the next part of the run state, which changes the LSC gains.

This will result in two SDF diffs for the feedforward ramp times in the LSC model. They can be accepted. Tagging Ops
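For reference, a minimal sketch of the kind of change described above, written in guardian ISC_LOCK style (ezca, lscparams, and self.timer are supplied by the guardian state context). This is not the actual ISC_LOCK.py diff; the values are taken from the revert comment further down this thread.

    # Sketch only: end of LOWNOISE_LENGTH_CONTROL.main -- lengthen the LSC FF ramps.
    ezca['LSC-MICHFF_TRAMP'] = 10     # previously 3 seconds
    ezca['LSC-SRCLFF1_TRAMP'] = 10    # previously 3 seconds

    # Sketch only: first step of LOWNOISE_LENGTH_CONTROL.run -- engage the FF gains,
    # then wait out the full ramp before the LSC gain changes that follow.
    ezca['LSC-MICHFF_GAIN'] = lscparams.gain['MICHFF']
    ezca['LSC-SRCLFF1_GAIN'] = lscparams.gain['SRCLFF1'] * lscparams.dc_readout['sign']
    self.timer['wait'] = 10           # previously 1 second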

anthony.sanchez@LIGO.ORG - 18:23, Wednesday 16 August 2023 (72285)

SDF changes accepted picture attached.

Images attached to this comment
anthony.sanchez@LIGO.ORG - 23:13, Wednesday 16 August 2023 (72289)

Reverted these changes to make it past LOWNOISE_LENGTH_CONTROL

 5471       ezca['LSC-MICHFF_TRAMP'] = 3    # changed back to 3 from 10
 5472       ezca['LSC-SRCLFF1_TRAMP'] = 3   # changed back to 3 from 10

And

 5488       self.timer['wait'] = 1          # changed back to 1 from 10
Images attached to this comment
daniel.sigg@LIGO.ORG - 23:01, Wednesday 16 August 2023 (72290)

This may not be a problem with a filter kick, but with the filter making some loop unstable and driving up the 102 Hz line. I suspect that changing the aux DOF gains immediately afterwards makes it stable again. If so, slowing down the transition only makes it worse. We may need to reorder the steps.

H1 ISC (AWC, DetChar-Request, ISC)
keita.kawabe@LIGO.ORG - posted 13:39, Tuesday 15 August 2023 - last comment - 20:52, Wednesday 16 August 2023(72241)
OM2 Beckhoff cable disconnected, voltage reference is used as a heater driver input

As a follow-up to alog 72061, a battery-operated voltage reference was connected to the OM2 heater chassis. The Beckhoff cable was disconnected for now.

Please check if the 1.66Hz comb is still there.

Comments related to this report
keita.kawabe@LIGO.ORG - 13:45, Tuesday 15 August 2023 (72244)

Beckhoff output was 7.15V across the positive and negative input of the driver chassis (when the cable was connected to the chassis), so the voltage reference was set to 7.15V.

We used REED R8801 because its output was clean (4th pic) while CALIBRATORS DVC-350A was noisy (5th pic).

Images attached to this comment
ansel.neunzert@LIGO.ORG - 13:50, Tuesday 15 August 2023 (72247)

detchar-request git issue for tracking purposes.

keita.kawabe@LIGO.ORG - 11:10, Wednesday 16 August 2023 (72277)

As you can see from one of the pictures above, the unit is powered with AC supply so we can leave it for a while.

keita.kawabe@LIGO.ORG - 20:52, Wednesday 16 August 2023 (72286)CDS, ISC

How to recover from power outage

If there is a power outage, the voltage reference won't come back automatically. Though I hope we never need this instruction, I'll be gone for a month and Daniel will be gone for a week, so I'm writing this down just in case.

0. The instruction manual for the voltage reference (R8801) is in the case of the unit, inside the cabinet in the EE shop where all the voltage references are stored. Find it and bring it to the floor.

1. The voltage reference and the DC power supply are on top of the work table by HAM6. See the 2nd picture in the above alog.

2. The DC supply will be ON as soon as the power comes back. Confirm that the output voltage is set to ~9V. If not, set it to 9V.

3. Press the yellow power button of the voltage reference to turn it on. You'll have to press it longer than you think is required. See the 1st picture in the above alog.

4. Press the "V" button to set the unit to voltage source mode. Set the voltage to 7.15V. Use right/left buttons to move cursor to the decimal place you'd like to change, and then use up/down buttons to change the number.

5. Most likely, a funny icon that you'll never guess to mean "Auto Power Off" will be displayed at the top left corner of the LCD. Now is the time to look at the LCD description on page 4 of the manual to confirm that it's indeed the Auto Power Off icon.

6. If the icon is indeed there (i.e. the unit is in Auto Power Off mode), press power button and V button at the same time to cancel Auto Power Off. You'll have to press the buttons longer than you think is required. If the icon doesn't go away, repeat.

7. Confirm that the LCD of R8801 looks exactly like the 1st picture of the above alog. You're done.

H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 13:49, Tuesday 08 August 2023 - last comment - 14:15, Thursday 17 August 2023(72063)
Movement of Pcal Xend upper beam position to test impact on Pcal X/Y comparison

Julianna Lewis, TonyS, RickS

This morning we moved the upper (inner) Pcal beam at X-end down by 5 mm at the entrance aperture of the Pcal Rx sensor to test the impact on the calibration of the Pcal system at X-end.

We expect that the impact on the calibration of the Xend Pcal will be given by the dot product of the Pcal beam displacement and the interferometer beam offset (see the expression in the comment below).

The work proceeds as follows:

We expect that this upper beam movement will change the unintended rotation of the test mass and change the calibration of the Rx output by about 0.2%. This assumes that we have moved at roughly 45 deg. with respect to the roughly 22 mm interferometer beam offset and that the offset on the surface of the ETM is roughly half of that seen at the Rx module: -2.5 mm / 2 x 22 mm x 0.707 x 0.94 hop/mm^2 = -18 hop (hundredths of one percent).  So we expect to see the X/Y comparison factor change from about 1.000 to 0.9982.
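Spelling out the arithmetic above (the reading of the factors is mine: the 0.707 is the cos(45 deg) projection onto the interferometer beam offset direction, and hop = hundredths of one percent):

    (-2.5 mm / 2) x 22 mm x 0.707 x 0.94 hop/mm^2 ≈ -18 hop = -0.18%,

which is where the expected shift of the X/Y comparison factor from about 1.000 to about 0.9982 comes from.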

Images attached to this report
Comments related to this report
richard.savage@LIGO.ORG - 14:15, Thursday 17 August 2023 (72300)

The sign of expected change in the X/Y calibration ratio is opposite to what is written in this entry.

The change observed (after minus before) should be given by (1/2) \vec{c} \cdot \vec{b} x M/I. For a vertical displacement of a Pcal beam (c_y), this reduces to (M/2I) x c_y x b_y. Thus, a reduction in the X/Y ratio indicates that the sign of the interferometer beam offset is opposite to that of the Pcal beam displacement.

If we move the Pcal beam down and observe a decrease in the X/Y ratio, it would indicate that the interferometer beam is displaced from center in the upward direction.

See this aLog entry

H1 CAL
vladimir.bossilkov@LIGO.ORG - posted 08:29, Friday 28 July 2023 - last comment - 12:31, Tuesday 12 December 2023(71787)
H1 Systematic Uncertainty Patch due to misapplication of calibration model in GDS

First observed as a persistent mis-calibration in the systematic error monitoring Pcal lines, which measure PCAL / GDS-CALIB_STRAIN, affecting both LLO and LHO [LLO Link] [LHO Link], characterised by these measurements consistently disagreeing with the uncertainty envelope.
It is presently understood that this sizeable discrepancy arises from bugs in the code producing the GDS FIR filters; Joseph Betzwieser is spearheading a thorough investigation to correct it.

I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ / GDS-CALIB_STRAIN, where the numerator is further corrected for the kappa values of the sensing, the cavity pole, and the 3 actuation stages (GDS does the same corrections internally). This gives a transfer function of the difference induced by errors in the GDS filters.
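A minimal sketch (not the actual pipeline used here) of how such a ratio can be estimated with gwpy, assuming NDS/frame access to both channels; the kappa and cavity-pole corrections to the numerator described above are omitted:

    from gwpy.timeseries import TimeSeries

    # Well-thermalized stretch of lock; the start time and the 384 s / 48 s FFT
    # parameters are the ones quoted in Jeff Kissel's comment below.
    start = 1374469418
    end = start + 384

    darm = TimeSeries.get('H1:CAL-DARM_ERR_DBL_DQ', start, end)
    gds = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)

    # Averaged transfer function from DARM_ERR to CALIB_STRAIN (CSD/PSD estimate,
    # 8 non-overlapping 48 s FFTs). NOTE: no TDCF corrections are applied here.
    tf = darm.transfer_function(gds, fftlength=48, overlap=0)
    print(abs(tf))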

Attached to this aLog, and to its sibling aLog at LLO, are this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Also attached is a text file of this uncertainty correction for application in pyDARM to produce the final uncertainty, in the format [Frequency, Real, Imaginary].

Images attached to this report
Non-image files attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 15:33, Friday 28 July 2023 (71798)

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).

Images attached to this comment
ling.sun@LIGO.ORG - 13:02, Thursday 17 August 2023 (72299)

After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.

The command is:

python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config  ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config  ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/

The uncertainty is estimated at 1374612632 (span 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (span an hour).

 

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:01, Wednesday 13 September 2023 (72871)
J. Kissel, J. Betzwieser

FYI: The time Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is 
    2023-07-27 05:03:20 UTC
    2023-07-26 22:03:20 PDT
    GPS 1374469418

This is a time when the IFO was well thermalized.

The values used for the TDCFs at this time were
    \kappa_C  = 0.97764456
    f_CC      = 444.32712 Hz
    \kappa_U  = 1.0043616 
    \kappa_P  = 0.9995768
    \kappa_T  = 1.0401824

The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).

Note these TDCF values listed above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the values exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See the attached .pdf, which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
 - is bad during that time (LHO:72812), and
 - unimpactful w.r.t. the overall calibration.
So the fact that 
    :: the GDS calculation is frozen, and
    :: the CALCS calculation is noisy but happens (coincidentally) to be quite close to the frozen GDS value, even though
    :: the ~25 minute mean of the CALCS is actually around ~0.98 rather than the instantaneous value of 1.019,
is inconsequential to Vlad's conclusions.

Non-image files attached to this comment
louis.dartez@LIGO.ORG - 00:54, Tuesday 12 December 2023 (74747)
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad's (orange), and the modeled correction evaluated using a dense frequency spacing (blue); see eta_3p2khz_correction.png. The denser frequency spacing recovers an error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction seems to do quite well.
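For orientation, a minimal sketch of what a single-pole factor at 3.2 kHz looks like, assuming the modeled correction is essentially the pole itself (whether the applied correction is this factor or its inverse depends on the convention used in the attached text file):

    eta(f) = 1 / (1 + i f / f_p), with f_p = 3.2 kHz,

so |eta| deviates from unity by roughly 1-2% across 400-600 Hz (|eta(500 Hz)| ≈ 0.99), consistent with the size of the effect quoted above.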
Images attached to this comment
Non-image files attached to this comment
ling.sun@LIGO.ORG - 12:31, Tuesday 12 December 2023 (74758)

The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023) (see LHO:72135)
