LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:05, Tuesday 12 September 2023 (72845)
OPS Eve Shift Start

TITLE: 09/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

IFO is in NLN and Observing as of 21:00 UTC

LHO General
thomas.shaffer@LIGO.ORG - posted 15:59, Tuesday 12 September 2023 (72827)
Ops Day Shift Summary

TITLE: 09/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Maintenance day recovery was fairly easy, considering that h1iopsusb123 went down and tripped the associated SEI platforms. Initial alignment and locking were automated. We've been observing for about 2.5 hours now.
LOG:

Start Time System Name Location Laser_Haz Task End Time
15:03 VAC Gerardo MX, EX n Turbo pump checks 19:34
15:03 FAC Kim EX n Tech clean 16:08
15:03 FAC Karen EY n Tech clean 16:13
15:09 FAC Richard LVEA(BSC) n Temp sensors 15:42
15:26 VAC Jordan, Janos LVEA - FTIR 18:49
15:27 PEM Dave remote n Restart h1pemcs, h1pemex, h1pemey 19:13
15:28 CAL Jeff and Dave control room n restart h1calcs 18:52
15:30 FMCS Patrick control room n Restart FMCS EPICS IOC 18:52
15:31 DAQ Dave remote n Restart DAQ (incl EDCU) after model changes 19:13
15:31 SEI Erik control room n Install new Picket Fence server and EPICS IOC 18:53
15:44   Chris LVEA n FRS 16:53
15:52 CDS/SUS Fil LVEA - BS, SRM, PRM coil driver board mods 18:25
16:01 - Matt H OSB roof n Pictures on viewing deck 16:20
16:09 FAC Kim & Karen HAM Shack n Tech clean 16:49
16:32 SEI Jim LVEA - Swap ITMY CPS card, HAM1 L4C testing 18:51
16:53 FAC Chris FCES - Scavenger hunt 17:50
16:54 AOS Jason, Austin LVEA YES ITMX Oplev swap 19:32
17:11 ENG Betsy, Matt Biergarten YES Planning for O5 18:09
17:26 FAC Karen, Kim LVEA - Tech clean 18:43
17:32 PEM Richard LVEA YES Temp 17:42
17:50   Chris MY, EY n Scavenger hunt 18:19
18:34 PEM Fil LVEA YES Looking at PEM Temp sensors 18:45
18:56 VAC Janos, Jordan LVEA - Pictures 19:11
19:25 PEM Richard LVEA n Measure temperature sensors 19:32
20:44 PEM Fil CER n Reseat temperature cables 20:47
H1 CDS
david.barker@LIGO.ORG - posted 15:59, Tuesday 12 September 2023 - last comment - 16:04, Tuesday 12 September 2023(72843)
CDS Maintenance Summary: Tuesday 12th September 2023

Recovery of h1susb123

TJ, Fil, Erik, Jonathan, Dave:

During the CER in-rack work a timing glitch stopped all the models on h1susb123. After the rack work was completed, we restarted the models by soft booting h1susb123 (sudo systemctl reboot).

When the computer came back we noted that the first IO Chassis Adnaco backplane (the one with the timing card on it) was not visible from the computer, but the other 3 were.

We did a second restart; this time Erik power cycled the IO Chassis and I powered down h1susb123. This time the system came back up with a full complement of cards.

WP11414 Picket Fence Update

Erik, Dave:

New picket fence server code was started on nuc5, along with a new IOC on cdsioc0. Two new GPS channels were added which record the time the server was started and the last time picket fence obtained and processed its seismic data. A DAQ + EDC restart was required.

The new channels are  H1:SEI-USGS_SERVER_GPS and H1:SEI-USGS_SERVER_START_GPS
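
As a quick liveness check, these channels can be compared against the current GPS time. A minimal sketch, assuming pyepics is available; the 30 second staleness threshold is an illustrative choice, not a documented requirement:

    # Hypothetical liveness check for the Picket Fence server using the
    # two new GPS channels. Assumes pyepics; 30 s threshold is illustrative.
    import time
    from epics import caget

    def gps_now():
        # GPS epoch is 1980-01-06; as of 2023 GPS leads UTC by 18 leap
        # seconds, so GPS ~= Unix time - 315964800 + 18.
        return time.time() - 315964800 + 18

    def picket_fence_alive(stale_after=30.0):
        start = caget('H1:SEI-USGS_SERVER_START_GPS')  # server start time
        last = caget('H1:SEI-USGS_SERVER_GPS')         # last data update
        print('server up since GPS %d, last update GPS %d' % (start, last))
        return (gps_now() - last) < stale_after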

WP11423 h1calcs bug fix

Jeff, Dave:

A new h1calcs model was installed, no DAQ restart was needed. Please see Jeff's alog for details.

WP11420 Add calibrated PEM BSC temperature channels to DAQ

Richard, Fil, Robert, Dave:

New h1pemcs, h1pemex and h1pemey models were installed. The BSC temperature raw_adc_counts are now converted to DegF using DCC-T0900287 settings. The raw counts DAQ channel keeps its name; for each BSC, new DEGF and DEGC fast 256Hz channels were added to the DAQ, alongside equivalent MON slow channels. A DAQ restart was required.
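
For illustration, the conversion chain now done in the front end amounts to something like the sketch below; the slope and offset are placeholders, since the real values come from the T0900287 calibration and are not reproduced here.

    # Placeholder sketch of the raw_adc_counts -> DegF -> DegC conversion.
    # SLOPE and OFFSET are hypothetical stand-ins for the T0900287 settings.
    SLOPE_DEGF_PER_COUNT = 1.0e-3   # hypothetical calibration slope
    OFFSET_DEGF = 32.0              # hypothetical calibration offset

    def counts_to_degf(counts):
        return SLOPE_DEGF_PER_COUNT * counts + OFFSET_DEGF

    def degf_to_degc(deg_f):
        # standard Fahrenheit -> Celsius relation, as used for the DEGC channels
        return (deg_f - 32.0) * 5.0 / 9.0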

Reboot of FMCS IOC

Patrick, Jonathan, Dave:

The FMCS IOC computer was rebooted to verify it starts OK following last week's FMCS activity. I recovered the alarm settings.

DAQ Restart

Jonathan, Dave:

The DAQ was restarted for the above PEM and Picket Fence changes. Other than a second restart of gds1 being needed, this was an unremarkable restart.

 

Comments related to this report
david.barker@LIGO.ORG - 16:04, Tuesday 12 September 2023 (72844)

Tue12Sep2023
LOC TIME HOSTNAME     MODEL/REBOOT
09:42:46 h1oaf0       h1calcs     <<< New calcs model


09:48:16 h1susb123    ***REBOOT*** <<< First restart, soft reboot
09:50:23 h1susb123    h1iopsusb123
09:56:23 h1susb123    ***REBOOT***  <<<< Second restart, power cycle IO Chassis + computer
09:58:47 h1susb123    h1iopsusb123
09:59:00 h1susb123    h1susitmy   
09:59:13 h1susb123    h1susbs     
09:59:26 h1susb123    h1susitmx   
09:59:39 h1susb123    h1susitmpi  


11:48:41 h1oaf0       h1pemcs     <<< New PEM models
11:49:13 h1iscex      h1pemex     
11:50:02 h1iscey      h1pemey     


11:53:17 h1daqdc0     [DAQ] <<< DAQ 0-leg
11:53:27 h1daqfw0     [DAQ]
11:53:28 h1daqnds0    [DAQ]
11:53:28 h1daqtw0     [DAQ]
11:53:36 h1daqgds0    [DAQ]
11:54:15 h1susauxb123 h1edc[DAQ] <<< EDC
12:01:48 h1daqdc1     [DAQ] <<< DAQ 1-leg
12:01:59 h1daqfw1     [DAQ]
12:02:00 h1daqnds1    [DAQ]
12:02:00 h1daqtw1     [DAQ]
12:02:08 h1daqgds1    [DAQ]
12:02:50 h1daqgds1    [DAQ] <<< 2nd GDS1 restart
 
 

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 14:39, Tuesday 12 September 2023 (72841)
Famis task: Maint turbo pump functionality test for the X-End and X-Mid.

The turbo pump stations at X-End and X-Mid were taken through their functionality tests; no issues were noted with either system.

EX
Scroll pump hours: 6311.9
Turbo pump hours: 263.4
Crash bearings life: 100%

MX
Scroll pump hours: 204.9
Turbo pump hours: 111.4
Crash bearings life: 100%

FAMIS tasks 24843 and 24867

 

Images attached to this report
LHO VE (VE)
janos.csizmazia@LIGO.ORG - posted 14:23, Tuesday 12 September 2023 (72840)
Input data for noise examination for pumping the VEP-1W Hepta's vacuum line
The pumping station is located in the Mechanical Room; the vacuum line in question starts there at a temporary pumping station, then penetrates through the wall into the LVEA, approximately between HAM5 and HAM6. It then joins the direction of the main tubes, turns north, then turns east at BSC2, and runs a little bit over BSC7. The purpose of this noise examination is to determine whether we can pump this line during observation. That would be quite advantageous, as pumping is crucial for the attempt to clean this line after the Hepta incident.
On this pumping cart we have a turbo and a backing pump. The backing pump is definitely the noisier of the two; it runs at 1660 rpm = 27.67 Hz. The turbo pump runs at 60000 rpm = 1000 Hz.

During the measurement, the pumping cart was switched on and off, and the changes in the operation of the pumps were noted. Beforehand, a loud noise was generated on the tubing by hitting it with a wrench. The noise measurement, in time:
13:02 - gate valve was closed - some minimal noise: the pumping cart was on at the beginning of the measurement
13:04 - 13:05: hitting the tube with a wrench, approx 2 Hz: quite some noise
13:05 - gate valve reopened - some minimal noise (the reason behind the opening/closing of the gate valve is unrelated to the noise experiment; it is only VAC-related)

13:06:42 - press STOP - this has nothing to do with the noise
13:10:20 - backing pump stops: 27.67 Hz noise-source disappeared. Also, Turbo pump's freq was changed to 850 Hz
13:11:50 - Turbo pump's freq. was changed to 775 Hz
13:13:05 - Turbo pump's freq. was changed to 700 Hz
13:13:20 - Press ON: backing pump starts - 27.67 Hz noise source reappears
13:15:05 - Turbo pump is at full speed: 1000 Hz

13:18:00 - press STOP - this has nothing to do with the noise (1E-4 Torr)
13:21:38 - backing pump stops: 27.67 Hz noise-source disappeared. Also, Turbo pump's freq was changed to 850 Hz
13:23:00 - Turbo pump's freq. was changed to 775 Hz
13:24:15 - Turbo pump's freq. was changed to 700 Hz
13:25:00 - Press ON: backing pump starts - 27.67 Hz noise source reappears
13:26:50 - Turbo pump is at full speed: 1000 Hz

13:29:55 - press STOP - this has nothing to do with the noise (1E-4 Torr)
13:33:30 - backing pump stops: 27.67 Hz noise-source disappeared. Also, Turbo pump's freq was changed to 850 Hz
13:34:55 - Turbo pump's freq. was changed to 775 Hz
13:36:10 - Turbo pump's freq. was changed to 700 Hz

13:36:35 - gate valve was closed (8.5E-2 Torr)
H1 General
thomas.shaffer@LIGO.ORG - posted 13:55, Tuesday 12 September 2023 (72839)
Observing 2054 UTC

Maintenance recovered.

H1 SUS
filiberto.clara@LIGO.ORG - posted 12:59, Tuesday 12 September 2023 (72837)
Triple Acq Driver for BS/SRM/PRM/PR3 Modified

WP 11418
ECR E2300035
FRS 26604

Monitor boards D1900052 were installed in BS and SRM M2/M3 in April (alog 68796).
Today the D1900052 monitor boards were removed and original monitor boards D070480 reinstalled.

Beam Splitter (Rack SUS-C5, U26):
Chassis Serial Number S1100039
Old Monitor Board MON156 - Installed
New Monitor Board S2100109 - Removed

SRM M2 (Rack SUS-C7, U26):
Chassis Serial Number S1100035
Old Monitor Board MON177 - Installed
New Monitor Board S2100483 - Removed

SRM M3 (Rack SUS-C7, U22)
Chassis Serial Number S1000356
Old Monitor Board MON128 - Installed
New Monitor Board S2100108 - Removed

The D1900052 boards that were removed from BS/SRM were installed in PRM M2/M3 and PR3 M3.

PRM M2 (Rack SUS-C4, U19):
Chassis Serial Number S1100015
Old Monitor Board MON222 - Removed
New Monitor Board S2100483 - Installed

PRM M3 (Rack SUS-C4, U18):
Chassis Serial Number S1100025
Old Monitor Board MON233 - Removed
New Monitor Board S2100109 - Installed

PR3 M3 (Rack SUS-C4, U16)
Chassis Serial Number S1100047
Old Monitor Board MON223 - Removed
New Monitor Board S2100108 - Installed

H1 General
austin.jennings@LIGO.ORG - posted 12:58, Tuesday 12 September 2023 (72838)
LVEA Swept

The LVEA has been swept, nothing abnormal to report - all checklist items have been met.

H1 AOS
jason.oberling@LIGO.ORG - posted 12:54, Tuesday 12 September 2023 - last comment - 15:45, Thursday 21 September 2023(72836)
ITMx Optical Lever Armored Fiber and Cooler Installed (FRS 4544 and WP 11422)

J. Oberling, A. Jennings

With a relatively light maintenance window and a chance for some extended Laser Hazard time, we finally completed the installation of an old OpLev ECR, FRS 4544.  Austin and I removed the existing 10m single-mode fiber and installed a new, 3m armored single-mode fiber, and placed the laser in a "thermal isolation enclosure" (i.e. a Coleman cooler).

To start, I confirmed that there was power available for the new setup; the OpLev lasers are powered via a DC power supply in the CER. I read no voltage at the cable near the ITMx OpLev, so with Fil's help we traced the cable to its other end, found it unplugged, and plugged it in. I then confirmed we had the expected voltage, so we moved on with the installation. We had to wait for the ITMx front end computer to come back up (it had tripped as part of other work), so while we waited Austin completed the transition to Laser Hazard. We took a picture (1st attachment) of the ITMx OpLev data (SUM counts, pitch and yaw readings), then powered down the laser. We placed the laser in the cooler and plugged in the new power supply; the laser turned on as expected.

We then installed a Lexan viewport cover and removed the cover from the ITMx OpLev transmitter pylon. The old 10m fiber was removed, and we found 2 areas where the fiber had been crushed due to over-zealous strain relief with cable ties (attachments 2-4; this is why we originally moved to armored fibers); I'm honestly somewhat surprised any light was being passed through this fiber. We installed the armored fiber, being careful not to touch the green camera housing and not to overly bend the fiber or jostle the transmitter assembly, and turned on the laser.

Unfortunately we had very little signal (~1k SUM counts) at the receiver, and the pitch and yaw readings were pretty different. We very briefly removed the Lexan cover (pulled it out just enough to clear the beam) and the SUM counts jumped up to ~7k; we then put the Lexan back in place. We also tried increasing the laser power, but saw no change in SUM counts (the laser was already maxed out). This was an indication that we did manage to change the transmitter alignment during the fiber swap, even though we were careful not to jostle anything (it can happen, and it did), and that the Lexan cover greatly changes the beam alignment. So we loosened the locking screws for the pitch and yaw adjustments and very carefully adjusted the pitch and yaw of the launcher to increase the SUM counts (which also had the effect of centering the beam on the receiver). The most we could get was ~8.3k SUM counts with the Lexan briefly removed, which then dropped to ~7k once we re-installed the transmitter cover and completely removed the Lexan (no viewport exposure with the transmitter cover re-installed). We made sure not to bump anything when re-installing the transmitter cover, yet the SUM counts dropped and the alignment changed (the pitch/yaw readings changed, mostly yaw by ~10 µrad). Maybe this pylon is a little looser than the others? That's a WAG, as the pylon seems pretty secure.

I can't explain why the SUM counts are so much lower; it could be the difference between the new and old fibers, or we could have really changed the alignment so we're now catching a ghost beam (but I doubt this, as we barely moved anything). Honestly I'm a little stumped. Given more time on a future maintenance day we could remove the receiver cover and check the beam at the QPD, but as of now we have what appears to be a good signal that responds to pitch and yaw alignment changes, so we moved on. We re-centered the receiver QPD, and now have the readings shown in the 5th attachment; ITMx did not move, it stayed in its Aligned state the entire time. This is all the result of our work on the OpLev. We'll keep an eye on this OpLev over the coming days, especially watching the SUM counts and pitch/yaw readings (looking for drift and making sure the laser is happy in its new home; it is the oldest installed OpLev laser at the moment). The last few attachments are pictures of the new cooler and the fiber run into the transmitter assembly. This completes LHO WP 11422 and FRS 4544.

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 15:45, Thursday 21 September 2023 (73039)SUS

The ITMX OpLev SUM has been dropping over the past week. It was around 7000 counts last week and has since gone down to around 4000 counts - please see the attached screenshot.

Images attached to this comment
H1 SEI
jim.warner@LIGO.ORG - posted 12:47, Tuesday 12 September 2023 (72835)
HAM1 3dl4c single sensor un-epoxied coherence not as good as epoxied pairs

I looked at simplifying the HAM1 ground feedforward a little by adding a single vertical sensor that wasn't epoxied to the floor. Not surprisingly, this doesn't have quite as good coherence as the 2 epoxied vertical sensors that are currently being used for the FF. The attached plot shows the different coherences. Red is the coherence between B_X (the test sensor, no epoxy) and B_Z (an epoxied sensor). Blue is the coherence between the test sensor and the HEPI Z L4Cs; green is between the epoxied sensor and the HEPI Z L4Cs. Brown is the coherence of the summed epoxied sensors with the HEPI Z L4Cs.

Comparing the green and blue coherences, it seems that epoxying might not be totally necessary. The sum of the epoxied sensors is more coherent with HAM1 HEPI than any of the single sensors, compare brown to blue or green.
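
For reference, these coherence estimates are presumably the standard Welch-averaged kind; a minimal sketch, where the channel names, sample rate, and segment length are my assumptions:

    # Welch-averaged coherence between two sensor time series (sketch).
    # b_x_test and hepi_z_l4c are placeholder arrays of sensor data.
    import numpy as np
    from scipy.signal import coherence

    fs = 256.0                                   # Hz, assumed DAQ rate
    b_x_test = np.random.randn(int(600 * fs))    # placeholder data
    hepi_z_l4c = np.random.randn(int(600 * fs))  # placeholder data
    f, Cxy = coherence(b_x_test, hepi_z_l4c, fs=fs, nperseg=int(64 * fs))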

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 12:35, Tuesday 12 September 2023 (72834)
fmcs-epics-cds computer restarted
Restarted the computer running the FMCS EPICS IOC to check that the changes made yesterday to the subnet mask persist through a reboot. They do.
LHO General
thomas.shaffer@LIGO.ORG - posted 12:27, Tuesday 12 September 2023 (72833)
Ops Day Mid Shift Report

Maintenance has finished. We are moving through initial alignment now.

H1 SEI (CDS, SEI)
erik.vonreis@LIGO.ORG - posted 11:28, Tuesday 12 September 2023 - last comment - 16:59, Thursday 14 September 2023(72831)
Picket Fence updated

The Picket Fence client was updated.  This new version points at a server with lower latency.

It also fixes some bugs, and reports the current time and start time of the service.

Comments related to this report
edgard.bonilla@LIGO.ORG - 16:59, Thursday 14 September 2023 (72892)

I merged this into the main code.

Thank you Erik!

H1 CAL (CDS)
jeffrey.kissel@LIGO.ORG - posted 10:25, Tuesday 12 September 2023 - last comment - 13:07, Wednesday 13 September 2023(72830)
h1calcs Model Rebooted; Gating for CALCS \kappa_U is now informed by KAPPA_UIM Uncertainty (rather than KAPPA_TST)
J. Kissel, D. Barker
WP #11423

Dave has graciously compiled, installed, and restarted the h1calcs model. This brings in the bug fix from LHO:72820, which fixes the issue that the front-end CAL-CS KAPPA_UIM library block was receiving the KAPPA_TST uncertainty, as identified in LHO:72819.

Thus h1calcs is now using rev 26218 of the library part /opt/rtcds/userapps/release/cal/common/models/CAL_CS_MASTER.mdl.

I'll confirm that the UIM uncertainty is the *right* uncertainty during the next nominal low noise stretch later today (2023-09-12 ~20:00 UTC).
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:37, Tuesday 12 September 2023 (72848)
Circa 9:30 - 10:00a PDT (2023-09-12 16:30-17:00 UTC)
Post-compile, but prior-to-install, Dave ran a routine foton -c check on the filter file to confirm that there were no changes in the
    /opt/rtcds/lho/h1/chans/H1CALCS.txt
besides "the usual" flip of the header (see IIET:11481 which has now become cds/software/advLigoRTS:589).

Also relevant, remember every front-end model's filter file is a softlink to the userapps repo,
    $ ls -l /opt/rtcds/lho/h1/chans/H1CALCS.txt 
    lrwxrwxrwx 1 controls controls 58 Sep  8  2015 /opt/rtcds/lho/h1/chans/H1CALCS.txt -> /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

Upon the check, he found that foton -c had actually changed filter coefficients.
Alarmed by this, he ran an svn revert on the userapps "source" file for H1CALCS.txt in
    /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

He walked me through what had happened, and what he did to fix it, *verbally* with me on TeamSpeak, and we agreed -- "yup, that should be fine."

Flash forward to NOMINAL_LOW_NOISE at 14:30 PDT (2023-09-12 20:25:57 UTC): TJ and I find that the GDS-CALIB_STRAIN trace on the wall looks OFF, and there're no impactful SDF DIFFs. I.e. TJ says "Alright Jeff... what'd you do..." seeing the front wall FOM show GDS-CALIB_STRAIN at 2023-09-12 20:28 UTC.

After some panic, having not actually done anything but restart the model, I started opening up CALCS screens trying to figure out "uh oh, how can I diagnose the issue quickly..." I tried two things before I figured it out:
    (1) I get through the inverse sensing function filter (H1:CAL-CS_DARM_ERR) and look at the foton file ... realized -- looks OK, but if I'm really gonna diagnose this, I need to find the number that was installed on 2023-08-31 (LHO:72594)...
    (2) I also open up the actuator screen for the ETMX L3 stage (H1:CAL-CS_DARM_ANALOG_ETMX_L3) ... and upon staring for a second I see FM3 has a "TEST_Npct_O4" in it, and I immediately recognize -- just by the name of the filter -- that this is *not* the "HFPole" that *should* be there after Louis restores it on 2023-08-07 (LHO:72043).

After this, I put two-and-two together, and realized that Dave had "reverted" to some bad filter file. 

As such, I went to the filter archive for the H1CALCS model, and looked for the filter file as it stood on 2023-08-31 -- the last known good time:

/opt/rtcds/lho/h1/chans/filter_archive/h1calcs$ ls -ltr
[...]
-rw-rw-r-- 1 advligorts advligorts 473361 Aug  7 16:42 H1CALCS_1375486959.txt
-rw-rw-r-- 1 advligorts advligorts 473362 Aug 31 11:52 H1CALCS_1377543182.txt             # Here's the last good one
-rw-r--r-- 1 controls   advligorts 473362 Sep 12 09:32 H1CALCS_230912_093238_install.txt  # Dave compiles first time
-rw-r--r-- 1 controls   advligorts 473377 Sep 12 09:36 H1CALCS_230912_093649_install.txt  # Dave compiles the second time
-rw-rw-r-- 1 advligorts advligorts 473016 Sep 12 09:42 H1CALCS_1378572178.txt             # Dave installs his "reverted" file
-rw-rw-r-- 1 advligorts advligorts 473362 Sep 12 13:50 H1CALCS_1378587040.txt             # Jeff copies Aug 31 11:52 H1CALCS_1377543182.txt into current and installs it


Talking with him further in prep for this aLOG, we identify that when Dave said "I reverted it," he meant that he ran an "svn revert" on the userapps copy of the file, which "reverted" the file to the last time it was committed to the repo, i.e. 
    r26011 | david.barker@LIGO.ORG | 2023-08-01 10:15:25 -0700 (Tue, 01 Aug 2023) | 1 line

    FM CAL as of 01aug2023
i.e. before 2023-08-07 (LHO:72043) and before 2023-08-31 (LHO:72594).

Yikes! This is the calibration group's procedural bad -- we should be committing the filter file to the userapps svn repo every time we make a change.
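
For the record, the intended workflow after any CALCS filter change is a sketch like the following (paths as quoted above; the commit message is illustrative, in the style of the r26011 entry):

    $ cd /opt/rtcds/userapps/release/cal/h1/filterfiles
    $ svn status H1CALCS.txt                # confirm it is locally modified
    $ svn diff H1CALCS.txt                  # sanity-check what changed
    $ svn commit -m "FM CAL as of 12sep2023" H1CALCS.txt

That way a later "svn revert" restores the current file rather than a stale one.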

So yeah, in doing normal routine things that all should have worked, Dave fell into a trap we left for him.

I've now committed the H1CALCS.txt filter file to the repo at rev 26254

    r26254 | jeffrey.kissel@LIGO.ORG | 2023-09-12 16:26:11 -0700 (Tue, 12 Sep 2023) | 1 line

    Filter file as it stands on 2023-08-31, after 2023-08-07 LHO:72043 3.2 kHz ESD pole fix and  2023-08-31 LHO:72594 calibration update for several reasons.


By 2023-09-12 20:50:44 UTC I had loaded in H1CALCS_1378587040.txt, which was a simple "cp" copy of H1CALCS_1377543182.txt, the last good filter file that was created during the 2023-08-31 calibration update,...
and the DARM FOM and GDS-CALIB_STRAIN returned to normal. 

All of the panic and fixing happened prior to us going to OBSERVATION_READY at 2023-09-12 21:00:28 UTC, so there was no observation-ready segment that had bad calibration.

I also confirmed that all was restored and well by checking in on both
 -- the live front-end systematic error in DELTAL_EXTERNAL_DQ (using the tools from LHO:69285) and
 -- the low-latency systematic error in GDS-CALIB_STRAIN using the auto-generated plots on https://ldas-jobs.ligo-wa.caltech.edu/~cal/
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 13:07, Wednesday 13 September 2023 (72863)CDS
Just some retro-active proof from the last few days worth of measurements and models of systematic error in the calibration.

First, a trend of the front-end computed values of systematic error, shown in 2023-09-12_H1CALCS_TrendOfSystematicError.png which reviews the time-line of what had happened.

Next, grabs from the GDS measured vs. modeled systematic error archive which show similar information but in hourly snapshots,
    2023-09-12 13:50 - 14:50 UTC 1378561832-1378565432 Pre-maintenance, pre-model-recompile, calibration good, H1CALCS_1377543182.txt 2023-08-31 filter file running.
    2023-09-12 19:50 - 20:50 UTC 1378583429-1378587029 BAD 2023-08-01, last-svn-commit, r26011, filter file in place.
    2023-09-12 20:50 - 21:50 UTC 1378587032-1378590632 H1CALCS_1378587040.txt copy of 2023-08-31 filter installed, calibration goodness restored.

Finally, I show the systematic error in GDS-CALIB_STRAIN trends from the calibration monitor "grafana" page, which shows that because we weren't in ANALYSIS_READY during all this kerfuffle, the systematic error as reported by that system was none the wiser that any of this had happened.

*phew* Good save team!!
Images attached to this comment
H1 SEI
ryan.short@LIGO.ORG - posted 10:23, Tuesday 12 September 2023 (72829)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS 25956, last checked in alog 72708

BSC high freq noise is elevated for these sensors:

ITMX_ST2_CPSINF_H1    
ITMX_ST2_CPSINF_H3

All other spectra look nominal.

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:10, Tuesday 12 September 2023 (72828)
Tue CP1 Fill

Tue Sep 12 10:07:43 2023 INFO: Fill completed in 7min 39secs

 

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 08:04, Tuesday 12 September 2023 (72826)
Ops Day Shift Start

TITLE: 09/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY: Locked for 22 hours. Ground motion has just calmed down after a M6.3 earthquake in Taiwan. PEM injections and SUS charge measurements have finished up, and maintenance has started.

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 06:53, Tuesday 12 September 2023 (72825)
Workstations updated

Workstations were updated and rebooted.  This was an os package update.  Conda packages were not updated.

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:29, Monday 11 September 2023 - last comment - 14:45, Monday 18 September 2023(72812)
Historical Systematic Error Investigations: Why MICH FF Spoiling UIM Calibration Line froze Optical Gain and Cavity Pole GDS TDCFs from 2023-07-20 to 2023-08-07
J. Kissel

I'm in a rabbit hole, and digging my way out by shaving yaks. The take-away if you find this aLOG TL;DR -- this is an expansion of the understanding of one part of the multi-layer problem described in LHO:72622.

I want to pick up where I left off in modeling the detector calibration's response to thermalization except using the response function, (1+G)/C, instead of just the sensing function, C (LHO:70150). 

I need to do this for when 
    (a) we had thermalization lines ON during times of
    (b) PSL input power at 75W (2023-04-14 to 2023-06-21) and
    (c) PSL input power at 60W (2023-06-21 to now).

"Picking up where I left off" means using the response function as my metric of thermalization instead of the sensing function.

However, the measurement of the sensing function w.r.t. its model, C_meas / C_model, is made from the ratio of measured transfer functions (DARM_IN1/PCAL) * (DARMEXC/DARMIN2), where only the calibration of PCAL matters. The measured response function w.r.t. its model, R_meas / R_model, on the other hand, is ''simply'' made by the transfer function of ([best calibrated product])/PCAL, where the [best calibrated product] can be whatever you like, as long as you understand the systematic error and/or extra steps you need to account for before displaying what you really want.

In most cases, the low-latency GDS pipeline product, H1:GDS-CALIB_STRAIN, is the [best calibrated product], with the least amount of systematic error in it. It corrects for the flaws in the front-end (super-Nyquist features, computational delays, etc.) and it corrects for ''known'' time dependence based on calibration-line-informed, time-dependent correction factors or TDCFs (neither of which the real-time front-end product, CAL-DELTAL_EXTERNAL_DQ, does). So I want to start there, using the transfer function H1:GDS-CALIB_STRAIN / H1:CAL-DELTAL_REF_PCAL_DQ for my ([best calibrated product])/PCAL transfer function measurement.
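
Schematically, in the notation above:

    C_meas / C_model  ~  (DARM_IN1 / PCAL) * (DARMEXC / DARMIN2)
    R_meas / R_model  ~  (H1:GDS-CALIB_STRAIN / H1:CAL-DELTAL_REF_PCAL_DQ),   with R = (1+G)/C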

HOWEVER, over the time periods when we had thermalization lines on, H1:GDS-CALIB_STRAIN had two major systematic errors in it itself that were *not* the thermalization. In short, those errors were:
    (1) between 2023-04-26 and 2023-08-07, we neglected to include the model of the ETMX ESD driver's 3.2 kHz pole (see LHO:72043) and
    (2) between 2023-07-20 and 2023-08-03, we installed a buggy bad MICH FF filter (LHO:71790, LHO:71937, and LHO:71946) that created excess noise as a spectral feature which polluted the 15.1 Hz, SUS-driven calibration line that's used to inform \kappa_UIM -- the time dependence of the relative actuation strength for the ETMX UIM stage. The front-end demodulates that frequency with a demod called SUS_LINE1, creating an estimate of the magnitude, phase, coherence, and uncertainty of that SUS line w.r.t. DARM_ERR.
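
(For reference, the kind of single-line demodulation SUS_LINE1 performs looks schematically like the toy sketch below -- a simplification, not the front-end code; the averaging time and the coherence/uncertainty estimate are omitted.)

    # Toy single-frequency demodulation: extract a calibration line's
    # complex amplitude (magnitude and phase) from a time series.
    import numpy as np

    def demod_line(x, fs, f0):
        t = np.arange(len(x)) / fs
        z = 2.0 * np.mean(x * np.exp(-2j * np.pi * f0 * t))  # amplitude at f0
        return np.abs(z), np.angle(z)

    # e.g. the 15.1 Hz SUS-driven line in DARM_ERR:
    # mag, phase = demod_line(darm_err, fs=16384.0, f0=15.1)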

When did we have thermalization lines on for 60W PSL input? Oh, y'know, from 2023-07-25 to 2023-08-09, exactly at the height of both of these errors. #facepalm
So -- I need to understand these systematic errors well in order to accurately remove them prior to my thermalization investigation.

Joe covers both of these flavors of error in LHO:72622.

However, after trying to digest the latter problem, (2), and his aLOG, I didn't understand why a spoiled \kappa_U alone had such an impact -- since we know that the UIM actuation strength is quite unimpactful to the response function.

INDEED, (2) is even worse than "we're not correcting for the change in UIM actuation strength" -- because
    (3) Though the GDS pipeline (that finishes the calibration to form H1:GDS-CALIB_STRAIN) computes its own TDCFs from the calibration lines, GDS gates the value of its TDCFs with the front-end- (CALCS-) computed uncertainty. So, in that way, the GDS TDCFs are still influenced by the front-end, CALCS computation of TDCFs.

So -- let's walk through that for a second.
The CALCS-computed uncertainty for each TDCF is based on the coherence between the calibration lines and DARM_ERR -- but in a crude, lazy way that we thought would be good enough in 2018 -- see G1801594, page 13. I've captured a current screenshot (First Image Attachment) of the present-day simulink model to confirm the algorithm is still the same as it was prior to O3.

In short, the uncertainty for the actuator strengths, \kappa_U, \kappa_P, and \kappa_T, is created by simply taking the larger of the two calibration line transfer functions' uncertainties that go into computing that TDCF -- SUS_LINE[1,2,3] or PCAL_LINE1.

HOWEVER -- because the optical gain and cavity pole (\kappa_C and f_CC) calculation depends on subtracting out the live DARM actuator (see appearance "A(f,t)" in the definition of "S(f,t)" in Eq. 17 from ), their uncertainty is crafted from the largest of the \kappa_U, \kappa_P, and \kappa_T, AND PCAL_LINE2 uncertainties. It's the same uncertainty for both \kappa_C and f_CC, since they're both derived from the magnitude and phase of the same PCAL_LINE2.

That means the large SUS_LINE1 >> \kappa_U uncertainty propagates through this "greatest of" algorithm, and also blows out the \kappa_C and f_CC uncertainty as well -- which triggered the GDS pipeline to gate its 2023-07-20 TDCF values for \kappa_U, \kappa_C, and f_CC from 2023-07-20 to 2023-08-07.
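
In pseudocode, the logic just described amounts to something like this (a sketch, not the simulink implementation; the numbers are made up):

    # "Greater/greatest of" uncertainty assignment (cf. G1801594, p13).
    def actuator_unc(sus_line_unc, pcal_line1_unc):
        # \kappa_U, \kappa_P, \kappa_T: larger of the two line uncertainties
        return max(sus_line_unc, pcal_line1_unc)

    def sensing_unc(unc_kU, unc_kP, unc_kT, pcal_line2_unc):
        # \kappa_C and f_CC share the greatest of all of these
        return max(unc_kU, unc_kP, unc_kT, pcal_line2_unc)

    unc_kU = actuator_unc(sus_line_unc=0.08, pcal_line1_unc=0.002)    # noisy 15.1 Hz line
    unc_kC = sensing_unc(unc_kU, 0.003, 0.004, pcal_line2_unc=0.002)  # -> 0.08, blown out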

THAT means that -- for better or worse -- when \kappa_C and f_CC are influenced by thermalization for the first ~3 hours after power up, GDS did not correct for it. Thus, a third systematic error in GDS, (3).

*sigh*

OK, let's look at some plots.

My Second Image Attachment shows a trend of all the front-end computed uncertainties involved around 2023-07-20 when the bad MICH FF is installed. 
    :: The first row and last row show that the UIM calibration line uncertainty -- and the CAV_POLE uncertainty (again, used for both \kappa_C and f_CC) -- blow up once the bad MICH FF filter is in place.

    :: Remember GDS gates its TDCFs with a threshold of uncertainty = 0.005 (i.e. 0.5%), where the front-end gates with an uncertainty of 0.05 (i.e. 5%).

My First PDF attachment shows in much clearer detail the *values* of both the CALCS and GDS TDCFs during a thermalization time that Joe chose in LHO:72622, 2023-07-26 01:10 UTC.

My Second PDF attachment breaks down Joe's LHO:72622 Second Image attachment in to its components:
    :: ORANGE shows the correction to the "reference time" response function with the frozen, gated, GDS-computed TDCFs, by the ratio of the "nominal" response function (as computed from the 20230621T211522Z report's pydarm_H1.ini) to that same response function, but with the optical gain, cavity pole, and actuator strengths updated with the frozen GDS TDCF values,
        \kappa_C = 0.97828    (frozen at the low, thermalized OM2 HOT value, reflecting the unaccounted-for change just one day prior on 2023-07-19; LHO:71484)
        f_CC = 444.4 Hz       (frozen)
        \kappa_U = 1.05196    (frozen at a large, noisy value, right after the MICH FF filter is installed)
        \kappa_P = 0.99952    (not frozen)
        \kappa_T = 1.03184    (not frozen, large at 3% because of the TST actuation strength drift)

    :: BLUE shows the correction to the "reference time" response function with the not-frozen, non-gated, CALCS-computed TDCFs, by the ratio of the "nominal" 20230621T211522Z response function to that same response function updated with the CALCS values,
        \kappa_C = 0.95820    (even lower than OM2 HOT value because this time is during thermalization)
        f_CC = 448.9 Hz       (higher because IFO mode matching and loss are better before the IFO thermalizes)
        \kappa_U = 0.98392    (arguably more accurate value, closer to the mean of a very noisy value)
        \kappa_P = 0.99763    (the same as GDS, to within noise or uncertainty)
        \kappa_T = 1.03073    (the same as GDS, to within noise or uncertainty)

    :: GREEN is a ratio of BLUE / ORANGE -- and thus a repeat of what Joe shows in his LHO:72622 Second Image attachment.

Joe was trying to motivate why (1), the missing ESD driver 3.2 kHz pole, is a separable problem from (2) and (3), the bad MICH FF filter spoiling the uncertainty in \kappa_U, \kappa_C, and f_CC, so he glossed over this issue. Further, what he plotted in his second attachment, akin to my GREEN curve, is the *ratio* between corrections, not the actual corrections themselves (ORANGE and BLUE), so it kind of hid this difference.
Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:21, Monday 11 September 2023 (72815)
This plot was created by create_no3p2kHz_syserror.py, and the plots posted correspond to the script as it was when the Calibration/ifo project git hash was 53543b80.
jeffrey.kissel@LIGO.ORG - 17:21, Monday 11 September 2023 (72819)
While shaving *this* yak, I found another one -- The front-end CALCS uncertainty for the \kappa_U gating algorithm incorrectly consumes \kappa_T's uncertainty.

The attached image highlights the relevant part of the 
    /opt/rtcds/userapps/release/cal/common/models/
        CAL_CS_MASTER.mdl
library part, at the CS > TDEP level.

The red ovals show to what I refer. The silver KAPPA_UIM, KAPPA_PUM, and KAPPA_TST blocks -- which are each instantiations of the ACTUATOR_KAPPA block within the CAL_LINE_MONITOR_MASTER.mdl library -- each receive the uncertainty output from the above-mentioned crude, lazy algorithm (see the first image in LHO:72812 above) via tag. The KAPPA_UIM block incorrectly receives the KAPPA_TST_UNC tag.

The proof is seen in the first row of the other image attachment in LHO:72812 above -- see that while the raw calibration line uncertainty (H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY) is high, the resulting "greater of the two" uncertainty (H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT) remains low, and matches the third row's uncertainty for \kappa_T (H1:CAL-CS_TDEP_KAPPA_TST_GATE_UNC_INPUT), the greater of H1:CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY and H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY.

You can see that this was the case even back in 2018 on page 14 of G1801594, so this has been wrong since before O3.

*sigh*

This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on ... 
I don't know gstlal-calibration well enough to confirm what channels are used. Clearly, from the 2023-07-26 01:10 UTC trend of GDS TDCFs, they're gated. But is that because H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY is used as input to all of the GDS-computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

As such, I can't make a statement of how impactful this bug has been.

We should fix this, though.
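
In pseudocode, the wiring bug amounts to (placeholder values; this illustrates the simulink tag routing, not real channel access):

    kappa_tst_unc = 0.004   # H1:CAL-CS_TDEP_KAPPA_TST_GATE_UNC_INPUT (placeholder)
    kappa_uim_unc = 0.080   # what KAPPA_UIM *should* consume (placeholder)
    kappa_uim_gate_unc_input = kappa_tst_unc    # buggy behavior, pre-fix: wrong tag
    # kappa_uim_gate_unc_input = kappa_uim_unc  # intended behavior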
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:09, Tuesday 12 September 2023 (72832)
The UIM uncertainty bug has now been fixed and installed at H1 as of 2023-09-12 17:00 UTC. See LHO:72820 and LHO:72830, respectively.
jeffrey.kissel@LIGO.ORG - 14:45, Monday 18 September 2023 (72944)
J. Kissel, M. Wade

Following up on this:
    This makes me wonder which of these uncertainties the GDS pipeline gates \kappa_U, \kappa_C, and f_CC on [... are channels like] H1:CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY used as input all of the GDS computed \kappa_U, \kappa_C, and f_CC, or are they using H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_INPUT?

I confirm from Maddie that 
    - The channels that are used to inform the GDS pipeline's gating algorithm are defined in the gstlal configuration file, which lives in the Calibration namespace of the git.ligo.org repo, under 
    git.ligo.org/Calibration/ifo/H1/gstlal_compute_strain_C00_H1.ini
where this config file was last changed on May 02 2023 with git hash 89d9917d.

    - In that file, the following config variables are defined (starting around Line 220 as of git hash version 89d9917d),
        #######################################
        # Coherence Uncertainty Channel Names #
        #######################################
        CohUncSusLine1Channel: CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        CohUncSusLine2Channel: CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        CohUncSusLine3Channel: CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        CohUncPcalyLine1Channel: CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY
        CohUncPcalyLine2Channel: CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY
        CohUncPcalyLine4Channel: CAL-CS_TDEP_PCAL_LINE4_UNCERTAINTY
        CohUncDARMLine1Channel: CAL-CS_TDEP_DARM_LINE1_UNCERTAINTY
      which are compared against a threshold, also defined in that file on Line 114,
        CoherenceUncThreshold: 0.01

    Note: the threshold is 0.01 i.e. 1% -- NOT 0.005 or 0.5% as described in the body of the main aLOG.

    - Then, inside the gstlal-calibration code proper, 
        git.ligo.org/Calibration/gstlal-calibration/bin/gstlal_compute_strain
    whose last change (as of this aLOG) has git hash 5a4d64ce, there are lines of code buried deep that create gating, around lines
        :: L1366 for \kappa_T,
        :: L1425 for \kappa_P, 
        :: L1473 for \kappa_U
        :: L1544 for \kappa_C
        :: L1573 for f_CC

    - From these lines one can discern what's going on, if you believe that calibration_parts.mkgate is a wrapper around gstlal's pipeparts.filters class, with method "gate" -- which links you to source code "gstlal/gst/lal/gstlal_gate.c" which actually lives under
        git.ligo.org/lscsoft/gstlal/gst/lal/gstlal_gate.c

    - I *don't* believe it (because I don't believe in my skills in following the gstlal rabbit hole), so I asked Maddie. She says: 
    The code uses the uncertainty channels (as pasted below) along with a threshold specified in the config (currently 0.01, so 1% uncertainty) and replaces any computed TDCF value for which the specified uncertainty on the corresponding lines is not met with a "gap". These gaps get filled in by the last non-gap value, so the end result is that the TDCF will remain at the "last good value" until a new "good" value is computable, where "good" is defined as a value computed during a time where the specified uncertainty channels are within the required threshold.
    The code is essentially doing sequential gating [per computation cycle] which will have the same result as the front-end's "larger of the two" method.  The "gaps" that are inserted by the first gate are simply passed along by future gates, so future gates only add new gaps for any times when the uncertainty channel on that gate indicates the threshold is surpassed.  The end result [at the end of computation cycle] is a union of all of the uncertainty channel thresholds.

    - Finally, she confirms that 
        :: \kappa_U uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY
        :: \kappa_P uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY
        :: \kappa_T uses 
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY
        :: and both \kappa_C f_CC use
            . CAL-CS_TDEP_PCAL_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_PCAL_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE2_UNCERTAINTY, 
            . CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY

So, repeating all of this back to you to make sure we all understand: If any one of the channels is above the GDS pipeline's threshold of 1% (not 0.5% as described in the body of the main aLOG), then the TDCF will be gated, and "frozen" at the last time *all* of these channels were below 1%.

This corroborates and confirms the hypothesis that the GDS pipeline, although slightly different algorithmically from the front-end, would gate all three TDCFs -- \kappa_U, \kappa_C, and f_CC -- if only the UIM SUS line, CAL-CS_TDEP_SUS_LINE1_UNCERTAINTY was above threshold -- as it was from 2023-07-20 to 2023-08-07.
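
A minimal sketch of that gate-and-hold behavior (my paraphrase of Maddie's description, not the gstlal implementation):

    # Gate-and-hold: keep a TDCF sample only if *every* associated
    # uncertainty channel is below threshold; otherwise hold the last
    # good value.
    COHERENCE_UNC_THRESHOLD = 0.01   # CoherenceUncThreshold from the config

    def gate_and_hold(tdcf_samples, unc_samples, threshold=COHERENCE_UNC_THRESHOLD):
        # tdcf_samples: list of floats; unc_samples: list of tuples, one
        # uncertainty per associated channel for each sample.
        out, last_good = [], None
        for value, uncs in zip(tdcf_samples, unc_samples):
            if all(u < threshold for u in uncs):
                last_good = value   # a "good" sample passes every gate
            # gaps are filled with the last non-gap value; if no good
            # sample has occurred yet, pass the raw value through (an
            # edge case the description above doesn't specify)
            out.append(last_good if last_good is not None else value)
        return out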