Reports until 00:21, Wednesday 13 September 2023
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:21, Wednesday 13 September 2023 (72854)
OPS Eve Shift Summary

TITLE: 09/13 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:

IFO is LOCKING and is at LASER_NOISE_SUPPRESSION as of submission (07:21 UTC)

Lock Acquisition (not smooth at all)

While attempting lock acquisition, DRMI would not lock, PRMI would not lock, and neither the POP18 nor POP90 traces were anywhere near where they were supposed to be (both just showing noise around 0 counts). Guardian went to check MICH fringes about 5 times before I decided to do an initial alignment. I attempted to manually request/pause at ACQUIRE_PRMI to play the whole BS beam movement game, but I couldn't see a second beam after some movement in both yaw and pitch (and guardian kept beating me to it by going into check MICH fringes). There were some strange interference patterns on the AS AIR camera that I hadn't seen before while I was attempting this.

I was checking the troubleshooting documentation throughout this, but found no relevant details to assist with this problem.

Initial Alignment (6:10 UTC)

I had issues getting guardian to perform an automatic (ish) initial alignment. It kept getting stuck trying to lock ALS when ALS had been locked steadily for a few minutes. I manually offloaded green arms and went into the next stage of initial alignment, where it started behaving and moving automatically. During initial alignment, I saw both beams overlaying in the AS AIR camera for the first time since lockloss so that is a good sign (I think). Initial alignment successfully finished so now onto locking!

Post Initial Alignment Lock (6:38 UTC):

DRMI has finally locked. This was definitely some weird alignment issue that got fixed because this time, we didn't even go through PRMI (or MICH Fringes). I expect guardian to manage locking manually.

What was the issue?

I do not know what the root cause of the lockloss or the difficult lock acquisition was. As noted in alog 72852, the wind had died down and there were no significant earthquakes. That leaves the BLRMS glitch/lockloss coincidence.

LOG:

Start Time System Name Location Laser_Haz Task Time End
23:04 VAC Gerardo EY Mech Room N Coolant Work 23:36
H1 General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:38, Tuesday 12 September 2023 (72852)
Lockloss 05:09 UTC

Random lockloss - wind and seismic activity look okay at time of lockloss.

DRMI unlocked as soon as a glitch (maybe 10th one of the night) showed up on BLRMs screen. This is similar to my last few evening locklosses.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:01, Tuesday 12 September 2023 - last comment - 22:13, Tuesday 12 September 2023(72851)
OPS Eve Midshift Update

IFO is in NLN and OBSERVING as of 21:00 UTC (Sep 12) - 6hr and 35 minute lock.

Nothing else to report.

Comments related to this report
jenne.driggers@LIGO.ORG - 22:13, Tuesday 12 September 2023 (72853)

I'm not sure what is different after maintenance compared to before, but the 120 Hz jitter peak seems much worse this afternoon than what we'd been having.  The wind that has been picking up seems to be exacerbating this more than it did the last few times we had moderate wind.

When we have our commissioning period tomorrow, I'll have a look to see if any cleaning will help.  But, we should also take a look in the morning to see if there are any other alignment changes that happened, or if any of our (quite small list of) maintenance activities could explain this evening's differences.
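As a starting point for that comparison, a minimal gwpy sketch of a before/after spectrum check around 120 Hz (the channel is H1:GDS-CALIB_STRAIN; the GPS spans below are placeholders standing in for a pre-maintenance lock stretch and this evening's lock):

    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:GDS-CALIB_STRAIN'

    # Placeholder GPS spans: pick a quiet pre-maintenance lock stretch and a stretch from this evening.
    before = TimeSeries.get(CHAN, 1378350000, 1378350600)
    after = TimeSeries.get(CHAN, 1378587100, 1378587700)

    asd_before = before.asd(fftlength=8, overlap=4)
    asd_after = after.asd(fftlength=8, overlap=4)

    plot = asd_before.plot(label='before maintenance')
    ax = plot.gca()
    ax.plot(asd_after, label='this evening')
    ax.set_xlim(100, 140)      # zoom in around the 120 Hz jitter peak
    ax.set_yscale('log')
    ax.legend()
    plot.show()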

H1 SUS (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 18:11, Tuesday 12 September 2023 (72850)
In-Lock SUS Charge Measurement - FAMIS 26056 - Weekly

Weekly (missed week 9/4-9/8) In-Lock SUS Charge Measurements (also includes more recent 9/12 measurements).

Closes FAMIS 26056

Images attached to this report
H1 SUS (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 17:36, Tuesday 12 September 2023 (72849)
In-Lock SUS Charge Measurement - FAMIS 26057 - Weekly

Weekly In-Lock SUS Charge Measurements.

Closes FAMIS 26057

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:50, Tuesday 12 September 2023 (72842)
Y-End Mechanical Room Coolant Leak

This leak appears to be an effect of the overpressure event of the cooling lines at Y-End, reported here, and it explains the loss of glycol reported there too.  Richard notified me that someone had reported a leak at Y-End; I headed there and found the leak, which had probably been going for days.  The leak is located on the copper return line, probably a busted solder joint.  To stop the leak, I closed the supply and return valves (see photo); they'll remain closed until the leak is fixed.  Jordan and I let some absorbent pigs loose on the spill; they'll be picked up in the next couple of days.

Unfortunately, we now have to check the condition of the Hepta pump, since the pressure in the cooling lines went well above 100 psi; the nominal operating condition for this pump is a maximum of 88 psi.

Images attached to this report
H1 PEM
david.barker@LIGO.ORG - posted 16:08, Tuesday 12 September 2023 - last comment - 16:15, Tuesday 12 September 2023(72846)
MEDM showing PEM BSC temperatures and settings

I have written a python script which generates an MEDM showing the four BSC temperature sensors and the settings used to convert voltage to DegF (and DegC).

Please reference DCC-T0900287 for details.

Fil measured BSC1's skin temperature to be 69.0 - 69.5 F using the hand held remote temp sensor.

BSC1 initially came back with a low and noisy voltage. Richard reattached the cabling which fixed the issue.

To open this MEDM from a recent SITEMAP: it is the last entry in the FMCS pull-down (the PEM pull-down is totally full).

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 16:15, Tuesday 12 September 2023 (72847)

The gain and offset are set with EPICS input records. They can be changed via channel access, and are being configuration controlled by SDF.
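For reference, a minimal sketch of what the display computes from these records, assuming the linear gain/offset form described above and in T0900287; the record names below are hypothetical stand-ins for the real PEM channels:

    from epics import caget, caput

    # Hypothetical record names for BSC1; substitute the real PEM channels.
    volts = caget('H1:PEM-CS_TEMP_BSC1_VOLTS')
    gain = caget('H1:PEM-CS_TEMP_BSC1_GAIN')      # DegF per volt
    offset = caget('H1:PEM-CS_TEMP_BSC1_OFFSET')  # DegF at 0 V

    deg_f = gain * volts + offset        # assumed linear voltage-to-DegF conversion
    deg_c = (deg_f - 32.0) * 5.0 / 9.0   # standard DegF-to-DegC conversion

    print(f'BSC1 skin temperature: {deg_f:.1f} F ({deg_c:.1f} C)')

    # The gain/offset records can be written over channel access, e.g.
    # caput('H1:PEM-CS_TEMP_BSC1_GAIN', 100.0), and SDF will flag the difference.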

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:05, Tuesday 12 September 2023 (72845)
OPS Eve Shift Start

TITLE: 09/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

IFO is in NLN and Observing as of 21:00 UTC

LHO General
thomas.shaffer@LIGO.ORG - posted 15:59, Tuesday 12 September 2023 (72827)
Ops Day Shift Summary

TITLE: 09/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Maintenance day recovery was fairly easy, considering that h1iopsusb123 went down and tripped the associated SEI platforms. Automated initial alignment and locking followed. We've been observing for about 2.5 hours now.
LOG:

Start Time System Name Location Laser_Haz Task Time End
15:03 VAC Gerardo MX, EX n Turbo pump checks 19:34
15:03 FAC Kim EX n Tech clean 16:08
15:03 FAC Karen EY n Tech clean 16:13
15:09 FAC Richard LVEA(BSC) n Temp sensors 15:42
15:26 VAC Jordan, Janos LVEA - FTIR 18:49
15:27 PEM Dave remote n Restart h1pemcs, h1pemex, h1pemey 19:13
15:28 CAL Jeff and Dave control room n restart h1calcs 18:52
15:30 FMCS Patrick control room n Restart FMCS EPICS IOC 18:52
15:31 DAQ Dave remote n Restart DAQ (incl EDCU) after model changes 19:13
15:31 SEI Erik control room n Install new Picket Fence server and EPICS IOC 18:53
15:44   Chris LVEA n FRS 16:53
15:52 CDS/SUS Fil LVEA - BS, SRM,PRM coil driver board mods 18:25
16:01 - Matt H OSB roof n Pictures on viewing deck 16:20
16:09 FAC Kim & Karen HAM Shack n Tech clean 16:49
16:32 SEI Jim LVEA - Swap ITMY CPS card, HAM1 L4C testing 18:51
16:53 FAC Chris FCES - Scavenger hunt 17:50
16:54 AOS Jason, Austin LVEA YES ITMX Oplev swap 19:32
17:11 ENG Betsy, Matt Biergarten YES Planning for O5 18:09
17:26 FAC Karen, Kim LVEA - Tech clean 18:43
17:32 PEM Richard LVEA YES Temp 17:42
17:50   Chris MY, EY n Scavenger hunt 18:19
18:34 PEM Fil LVEA YES Looking at PEM Temp sensors 18:45
18:56 VAC Janos, Jordan LVEA - Pictures 19:11
19:25 PEM Richard LVEA n Measure temperature sensors 19:32
20:44 PEM Fil CER n Reseat temperature cables 20:47
H1 CDS
david.barker@LIGO.ORG - posted 15:59, Tuesday 12 September 2023 - last comment - 16:04, Tuesday 12 September 2023(72843)
CDS Maintenance Summary: Tuesday 12th September 2023

Recovery of h1susb123

TJ, Fil, Erik, Jonathan, Dave:

During the CER in-rack work a timing glitch stopped all the models on h1susb123. After the rack work was completed, we restarted the models by soft booting h1susb123 (sudo systemctl reboot).

When the computer came back we noted that the first IO Chassis Adnaco backplane (the one with the timing card on it) was not visible from the computer, but the other 3 were.

We did a second restart; this time Erik power cycled the IO Chassis and I powered down h1susb123. This time the system came back up with a full complement of cards.

WP11414 Picket Fence Update

Erik, Dave:

New picket fence server code was started on nuc5, along with a new IOC on cdsioc0. Two new GPS channels were added which record the time the server was started and the last time picket fence obtained and processed its seismic data. A DAQ + EDC restart was required.

The new channels are  H1:SEI-USGS_SERVER_GPS and H1:SEI-USGS_SERVER_START_GPS
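For context, a minimal sketch of the heartbeat pattern these channels implement (the fetch/process functions below are placeholders, not the actual picket fence code):

    import time
    from epics import caput

    GPS_EPOCH_OFFSET = 315964800   # seconds between the Unix and GPS epochs
    LEAP_SECONDS = 18              # GPS-UTC offset as of 2023

    def gps_now():
        """Approximate current GPS time from the system clock."""
        return int(time.time()) - GPS_EPOCH_OFFSET + LEAP_SECONDS

    def fetch_usgs_data():
        """Placeholder for the real USGS query in the picket fence server."""
        return {}

    def update_picket_fence(data):
        """Placeholder for processing/displaying the seismic data."""
        pass

    # Record when the server was (re)started...
    caput('H1:SEI-USGS_SERVER_START_GPS', gps_now())

    while True:
        seismic = fetch_usgs_data()
        update_picket_fence(seismic)
        # ...and stamp each successful fetch/process cycle.
        caput('H1:SEI-USGS_SERVER_GPS', gps_now())
        time.sleep(10)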

WP11423 h1calcs bug fix

Jeff, Dave:

A new h1calcs model was installed, no DAQ restart was needed. Please see Jeff's alog for details.

WP11420 Add calibrated PEM BSC temperature channels to DAQ

Richard, Fil, Robert, Dave:

New h1pemcs, h1pemex and h1pemey models were installed. The BSC temperature raw_adc_counts are now converted to DegF using DCC-T0900287 settings. The raw-counts DAQ channel keeps its name; for each BSC, new DEGF and DEGC fast 256 Hz channels were added to the DAQ, alongside equivalent MON slow channels. A DAQ restart was required.

Reboot of FMCS IOC

Patrick, Jonathan, Dave:

The FMCS IOC computer was rebooted to verify it starts OK following last week's FMCS activity. I recovered the alarm settings.

DAQ Restart

Jonathan, Dave:

The DAQ was restarted for the above PEM and Picket Fence changes. Other than a second restart of gds1 being needed, this was an unremarkable restart.

 

Comments related to this report
david.barker@LIGO.ORG - 16:04, Tuesday 12 September 2023 (72844)

Tue12Sep2023
LOC TIME HOSTNAME     MODEL/REBOOT
09:42:46 h1oaf0       h1calcs     <<< New calcs model


09:48:16 h1susb123    ***REBOOT*** <<< First restart, soft reboot
09:50:23 h1susb123    h1iopsusb123
09:56:23 h1susb123    ***REBOOT***  <<<< Second restart, power cycle IO Chassis + computer
09:58:47 h1susb123    h1iopsusb123
09:59:00 h1susb123    h1susitmy   
09:59:13 h1susb123    h1susbs     
09:59:26 h1susb123    h1susitmx   
09:59:39 h1susb123    h1susitmpi  


11:48:41 h1oaf0       h1pemcs     <<< New PEM models
11:49:13 h1iscex      h1pemex     
11:50:02 h1iscey      h1pemey     


11:53:17 h1daqdc0     [DAQ] <<< DAQ 0-leg
11:53:27 h1daqfw0     [DAQ]
11:53:28 h1daqnds0    [DAQ]
11:53:28 h1daqtw0     [DAQ]
11:53:36 h1daqgds0    [DAQ]
11:54:15 h1susauxb123 h1edc[DAQ] <<< EDC
12:01:48 h1daqdc1     [DAQ] <<< DAQ 1-leg
12:01:59 h1daqfw1     [DAQ]
12:02:00 h1daqnds1    [DAQ]
12:02:00 h1daqtw1     [DAQ]
12:02:08 h1daqgds1    [DAQ]
12:02:50 h1daqgds1    [DAQ] <<< 2nd GDS1 restart
 
 

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 14:39, Tuesday 12 September 2023 (72841)
Famis task: Maint turbo pump functionality test for the X-End and X-Mid.

The turbo pump stations at X-End and X-Mid were taken through their functionality tests; no issues were noted with either system.

EX
Scroll pump hours: 6311.9
Turbo pump hours: 263.4
Crash bearings life: 100%

MX
Scroll pump hours: 204.9
Turbo pump hours: 111.4
Crash bearings life: 100%

FAMIS tasks 24843 and 24867

 

Images attached to this report
LHO VE (VE)
janos.csizmazia@LIGO.ORG - posted 14:23, Tuesday 12 September 2023 (72840)
Input data for noise examination for pumping the VEP-1W Hepta's vacuum line
The pumping station is located in the Mechanical Room. The vacuum line in question starts there at a temporary pumping station, then penetrates through the wall into the LVEA, approximately between HAM5 and HAM6. It then joins the direction of the main tubes, turns north, then turns east at BSC2, and runs a little bit over BSC7. The purpose of this noise examination is to determine whether we can pump this line during observation. That would be quite advantageous, as it is crucial for the cleaning attempt on this line after the Hepta incident.
On this pumping cart we have a turbo pump and a backing pump. The backing pump is definitely the noisier of the two; it runs at 1660 rpm = 27.67 Hz. The turbo pump runs at 60000 rpm = 1000 Hz.

During the measurement, the pumping cart was switched on and off, and the changes in the operation of the pumps were noted. Beforehand, a loud noise was generated on the tubing by hitting it with a wrench. The noise measurement, in time:
13:02 - gate valve was closed - some minimal noise: the pumping cart was on at the beginning of the measurement
13:04 - 13:05: hitting the tube with a wrench, approx 2 Hz: quite some noise
13:05 - gate valve reopened - some minimal noise (the reason behind the opening/closing of the gate valve is non-related to the noise-experiment, it is only VAC-related)

13:06:42 - press STOP - this has nothing to do with the noise
13:10:20 - backing pump stops: 27.67 Hz noise-source disappeared. Also, Turbo pump's freq was changed to 850 Hz
13:11:50 - Turbo pump's freq. was changed to 775 Hz
13:13:05 - Turbo pump's freq. was changed to 700 Hz
13:13:20 - Press ON: backing pump starts - 27.67 Hz noise source reappears
13:15:05 - Turbo pump is at full speed: 1000 Hz

13:18:00 - press STOP - this has nothing to do with the noise (1E-4 Torr)
13:21:38 - backing pump stops: 27.67 Hz noise-source disappeared. Also, Turbo pump's freq was changed to 850 Hz
13:23:00 - Turbo pump's freq. was changed to 775 Hz
13:24:15 - Turbo pump's freq. was changed to 700 Hz
13:25:00 - Press ON: backing pump starts - 27.67 Hz noise source reappears
13:26:50 - Turbo pump is at full speed: 1000 Hz

13:29:55 - press STOP - this has nothing to do with the noise (1E-4 Torr)
13:33:30 - backing pump stops: 27.67 Hz noise-source disappeared. Also, Turbo pump's freq was changed to 850 Hz
13:34:55 - Turbo pump's freq. was changed to 775 Hz
13:36:10 - Turbo pump's freq. was changed to 700 Hz

13:36:35 - gate valve was closed (8.5E-2 Torr)
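A minimal sketch of how the on/off spans above could be compared in the PEM data; the accelerometer channel name and GPS spans are placeholders, to be replaced with the sensor nearest the line and the logged times converted to GPS:

    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:PEM-CS_ACC_HAM6_Z'   # hypothetical channel near the wall penetration

    # Placeholder GPS spans: pumps at full speed vs. backing pump stopped / turbo ramping down.
    pumps_on = TimeSeries.get(CHAN, 1378584920, 1378585090)
    pumps_off = TimeSeries.get(CHAN, 1378586030, 1378586185)

    # Assumes the channel is sampled fast enough to resolve 1 kHz.
    asd_on = pumps_on.asd(fftlength=16, overlap=8)
    asd_off = pumps_off.asd(fftlength=16, overlap=8)

    # The backing pump should show up as a 27.67 Hz line and the turbo at 1000 Hz
    # (or 850/775/700 Hz during rundown), present only in the pumps-on spectrum.
    ratio = asd_on / asd_off
    print('27.7 Hz band ratio :', ratio.crop(27, 28.5).max())
    print('1000 Hz band ratio :', ratio.crop(995, 1005).max())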
H1 General
thomas.shaffer@LIGO.ORG - posted 13:55, Tuesday 12 September 2023 (72839)
Observing 2054 UTC

Maintenance recovered.

H1 SUS
filiberto.clara@LIGO.ORG - posted 12:59, Tuesday 12 September 2023 (72837)
Triple Acq Driver for BS/SRM/PRM/PR3 Modified

WP 11418
ECR E2300035
FRS 26604

Monitor boards D1900052 were installed in BS and SRM M2/M3 in April (alog 68796).
Today the D1900052 monitor boards were removed and original monitor boards D070480 reinstalled.

Beam Splitter (Rack SUS-C5, U26):
Chassis Serial Number S1100039
Old Monitor Board MON156 - Installed
New Monitor Board S2100109 - Removed

SRM M2 (Rack SUS-C7, U26):
Chassis Serial Number S1100035
Old Monitor Board MON177 - Installed
New Monitor Board S2100483 - Removed

SRM M3 (Rack SUS-C7, U22)
Chassis Serial Number S1000356
Old Monitor Board MON128 - Installed
New Monitor Board S2100108 – Removed

The D1900052 boards that were removed from BS/SRM were installed in PRM M2/M3 and PR3 M3.

PRM M2 (Rack SUS-C4, U19):
Chassis Serial Number S1100015
Old Monitor Board MON222 - Removed
New Monitor Board S2100483 - Installed

PRM M3 (Rack SUS-C4, U18):
Chassis Serial Number S1100025
Old Monitor Board MON233 - Removed
New Monitor Board S2100109 - Installed

PR3 M3 (Rack SUS-C4, U16)
Chassis Serial Number S1100047
Old Monitor Board MON223 - Removed
New Monitor Board S2100108 – Installed

H1 General
austin.jennings@LIGO.ORG - posted 12:58, Tuesday 12 September 2023 (72838)
LVEA Swept

The LVEA has been swept, nothing abnormal to report - all checklist items have been met.

H1 AOS
jason.oberling@LIGO.ORG - posted 12:54, Tuesday 12 September 2023 - last comment - 15:45, Thursday 21 September 2023(72836)
ITMx Optical Lever Armored Fiber and Cooler Installed (FRS 4544 and WP 11422)

J. Oberling, A. Jennings

With a relatively light maintenance window and a chance for some extended Laser Hazard time, we finally completed the installation of an old OpLev ECR, FRS 4544.  Austin and I removed the existing 10m single-mode fiber and installed a new, 3m armored single-mode fiber, and placed the laser in a "thermal isolation enclosure" (i.e. a Coleman cooler).

To start, I confirmed that there was power available for the new setup; the OpLev lasers are powered via a DC power supply in the CER. I read no voltage at the cable near the ITMx OpLev, so with Fil's help we traced the cable to its other end, found it unplugged, and plugged it in. I then checked that we had the expected voltage, which we did, so we moved on with the installation. We had to wait for the ITMx front end computer to come back up (it had tripped as part of other work), so while we waited Austin completed the transition to Laser Hazard. We took a picture (1st attachment) of the ITMx OpLev data (SUM counts, pitch and yaw readings), then powered down the laser. We placed the laser in the cooler and plugged in the new power supply; the laser turned on as expected. We then installed a Lexan viewport cover and removed the cover from the ITMx OpLev transmitter pylon. The old 10m fiber was removed, and we found 2 areas where the fiber had been crushed due to over-zealous strain relief with cable ties (attachments 2-4; this is why we originally moved to armored fibers); I'm honestly somewhat surprised any light was being passed through this fiber. We installed the armored fiber, being careful not to touch the green camera housing and not to overly bend the fiber or jostle the transmitter assembly, and turned on the laser. Unfortunately we had very little signal (~1k SUM counts) at the receiver, and the pitch and yaw readings were pretty different. We very briefly removed the Lexan cover (pulled it out just enough to clear the beam) and the SUM counts jumped up to ~7k; we then put the Lexan back in place. We also tried increasing the laser power, but saw no change in SUM counts (the laser was already maxed out). This was an indication that we did manage to change the transmitter alignment during the fiber swap, even though we were careful not to jostle anything (it can happen, and it did), and that the Lexan cover greatly changes the beam alignment. So we loosened the locking screws for the pitch and yaw adjustments and very carefully adjusted the pitch and yaw of the launcher to increase the SUM counts (which also had the effect of centering the beam on the receiver). The most we could get was ~8.3k SUM counts with the Lexan briefly removed, which then dropped to ~7k once we re-installed the transmitter cover and completely removed the Lexan (no viewport exposure with the transmitter cover re-installed). We made sure not to bump anything when re-installing the transmitter cover, yet the SUM counts dropped and the alignment changed (the pitch/yaw readings changed, mostly yaw by ~10 µrad). Maybe this pylon is a little looser than the others? That's a WAG, as the pylon seems pretty secure.

I can't explain why the SUM counts are so much lower; it could be the difference between the new and old fibers, or we could have really changed the alignment so we're now catching a ghost beam (but I doubt this, we barely moved anything). Honestly I'm a little stumped. Given more time on a future maintenance day we could remove the receiver cover and check the beam at the QPD, but as of now we have what appears to be a good signal that responds to pitch and yaw alignment changes, so we moved on. We re-centered the receiver QPD, and now have the readings shown in the 5th attachment; ITMx did not move, it stayed in its Aligned state the entire time. This is all the result of our work on the OpLev. We'll keep an eye on this OpLev over the coming days, especially watching the SUM counts and pitch/yaw readings (looking for drift and making sure the laser is happy in its new home; it is the oldest installed OpLev laser at the moment). The last few attachments are pictures of the new cooler and the fiber run into the transmitter assembly. This completes LHO WP 11422 and FRS 4544.

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 15:45, Thursday 21 September 2023 (73039)SUS

The ITMX OpLev SUM has been dropping over the past week. It was around 7000 counts last week and has since gone down to around 4000 counts - please see the attached screenshot.

Images attached to this comment
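A quick way to quantify the drift with gwpy; the SUM channel name below is a guess and should be checked against the SUS ITMX OpLev MEDM, and minute trends would be lighter than raw data for a week-long span:

    from gwpy.timeseries import TimeSeries

    # Hypothetical OpLev SUM channel name; confirm before running.
    CHAN = 'H1:SUS-ITMX_L3_OPLEV_SUM_OUT16'

    data = TimeSeries.get(CHAN, 'Sep 14 2023', 'Sep 21 2023')

    print(f'start of span: {data.value[:1000].mean():.0f} counts')
    print(f'end of span:   {data.value[-1000:].mean():.0f} counts')

    plot = data.plot()
    plot.gca().set_ylabel('ITMX OpLev SUM [counts]')
    plot.show()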
H1 CAL (CDS)
jeffrey.kissel@LIGO.ORG - posted 10:25, Tuesday 12 September 2023 - last comment - 13:07, Wednesday 13 September 2023(72830)
h1calcs Model Rebooted; Gating for CALCS \kappa_U is now informed by KAPPA_UIM Uncertainty (rather than KAPPA_TST)
J. Kissel, D. Barker
WP #11423

Dave has graciously compiled, installed and restarted the h1calcs model. In doing so, that brings in the bug fix from LHO:72820, which fixes the issue that the front-end, CAL-CS KAPPA_UIM library block was receiving the KAPPA_TST uncertainty identified in LHO:72819.

Thus h1calcs is now using rev 26218 of the library part /opt/rtcds/userapps/release/cal/common/models/CAL_CS_MASTER.mdl.

I'll confirm that the UIM uncertainty is the *right* uncertainty during the next nominal low noise stretch later today (2023-09-12 ~20:00 UTC).
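A hedged sketch of that check: trend the CAL-CS UIM uncertainty against the GDS UIM and TST uncertainties and confirm it now follows the former rather than the latter. The channel names below are approximate and should be verified on the CAL overview MEDM before use:

    from gwpy.timeseries import TimeSeriesDict

    # Channel names are approximate -- verify against the CAL overview MEDM.
    chans = [
        'H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNCERTAINTY',
        'H1:GDS-CALIB_KAPPA_UIM_UNCERTAINTY',
        'H1:GDS-CALIB_KAPPA_TST_UNCERTAINTY',
    ]

    # Placeholder GPS span for the next nominal low noise stretch (~2023-09-12 20:00 UTC onward).
    data = TimeSeriesDict.get(chans, 1378586418, 1378590018)

    plot = data.plot()
    ax = plot.gca()
    ax.set_ylabel('kappa uncertainty')
    ax.legend()
    plot.show()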
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:37, Tuesday 12 September 2023 (72848)
Circa 9:30 - 10:00a PDT (2023-09-12 16:30-17:00 UTC)
Post-compile, but prior-to-install, Dave ran a routine foton -c check on the filter file to confirm that there were no changes in the
    /opt/rtcds/lho/h1/chans/H1CALCS.txt
besides "the usual" flip of the header (see IIET:11481 which has now become cds/software/advLigoRTS:589).

Also relevant, remember every front-end model's filter file is a softlink to the userapps repo,
    $ ls -l /opt/rtcds/lho/h1/chans/H1CALCS.txt 
    lrwxrwxrwx 1 controls controls 58 Sep  8  2015 /opt/rtcds/lho/h1/chans/H1CALCS.txt -> /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

Upon the check, he found that foton -c had actually changed filter coefficients.
Alarmed by this, he ran an svn revert on the userapps "source" file for H1CALCS.txt in
    /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt

He walked me through what had happened, and what he did to fix it, *verbally* with me on TeamSpeak, and we agreed -- "yup, that should be fine."

Flash forward to NOMINAL_LOW_NOISE at 14:30 PDT (2023-09-12 20:25:57 UTC) TJ and I find that the GDS-CALIB_STRAIN trace on the wall looks OFF, and there're no impactful SDF DIFFs. I.e. TJ says "Alright Jeff... what'd you do..." seeing the front wall FOM show GDS-CALIB_STRAIN at 2023-09-12 20:28 UTC.

After some panic having not actually done anything but restart the model, I started opening up CALCS screens trying to figure out "uh oh, how can I diagnose the issue quickly..." I tried two things before I figured it out:
    (1) I get through the inverse sensing function filter (H1:CAL-CS_DARM_ERR) and look at the foton file ... realized -- looks OK, but if I'm really gunna diagnose this, I need to find the number that was installed on 2023-08-31 (LHO:72594)...
    (2) I also open up the actuator screen for the ETMX L3 stage (H1:CAL-CS_DARM_ANALOG_ETMX_L3) ... and upon staring for a second I see FM3 has a "TEST_Npct_O4" in it, and I immediately recognize -- just by the name of the filter -- that this is *not* the "HFPole" that *should* be there after Louis restores it on 2023-08-07 (LHO:72043).

After this, I put two-and-two together, and realized that Dave had "reverted" to some bad filter file. 

As such, I went to the filter archive for the H1CALCS model, and looked for the filter file as it stood on 2023-08-31 -- the last known good time:

/opt/rtcds/lho/h1/chans/filter_archive/h1calcs$ ls -ltr
[...]
-rw-rw-r-- 1 advligorts advligorts 473361 Aug  7 16:42 H1CALCS_1375486959.txt
-rw-rw-r-- 1 advligorts advligorts 473362 Aug 31 11:52 H1CALCS_1377543182.txt             # Here's the last good one
-rw-r--r-- 1 controls   advligorts 473362 Sep 12 09:32 H1CALCS_230912_093238_install.txt  # Dave compiles first time
-rw-r--r-- 1 controls   advligorts 473377 Sep 12 09:36 H1CALCS_230912_093649_install.txt  # Dave compiles the second time
-rw-rw-r-- 1 advligorts advligorts 473016 Sep 12 09:42 H1CALCS_1378572178.txt             # Dave installs his "reverted" file
-rw-rw-r-- 1 advligorts advligorts 473362 Sep 12 13:50 H1CALCS_1378587040.txt             # Jeff copies Aug 31 11:52 H1CALCS_1377543182.txt into current and installs it


Talking with him further in prep for this aLOG, we identify that when Dave said "I reverted it," he meant that he ran an "svn revert" on the userapps copy of the file, which "reverted" the file to the last time it was committed to the repo, i.e. 
    r26011 | david.barker@LIGO.ORG | 2023-08-01 10:15:25 -0700 (Tue, 01 Aug 2023) | 1 line

    FM CAL as of 01aug2023
i.e. before 2023-08-07 (LHO:72043) and before 2023-08-31 (LHO:72594).

Yikes! This is the calibration group's procedural bad -- we should be committing the filter file to the userapps svn repo every time we make a change.

So yeah, in doing normal routine things that all should have worked, Dave fell into a trap we left for him.
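Beyond committing more often, one possible guard (a sketch under the path layout quoted above, not an existing tool) is to check that the live filter file matches the most recently installed archive copy before trusting any svn revert:

    from pathlib import Path

    # Paths follow the layout quoted above.
    live = Path('/opt/rtcds/lho/h1/chans/H1CALCS.txt')
    archive = Path('/opt/rtcds/lho/h1/chans/filter_archive/h1calcs')

    # Most recently installed archive copy, by modification time.
    latest = max(archive.glob('H1CALCS_*.txt'), key=lambda p: p.stat().st_mtime)

    if live.read_text() != latest.read_text():
        print(f'WARNING: {live} differs from last installed archive copy {latest.name};'
              ' check before committing or reverting in the userapps svn.')
    else:
        print('Live filter file matches the last installed archive copy.')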

I've now committed the H1CALCS.txt filter file to the repo at rev 26254

    r26254 | jeffrey.kissel@LIGO.ORG | 2023-09-12 16:26:11 -0700 (Tue, 12 Sep 2023) | 1 line

    Filter file as it stands on 2023-08-31, after 2023-08-07 LHO:72043 3.2 kHz ESD pole fix and  2023-08-31 LHO:72594 calibration update for several reasons.


By 2023-09-12 20:50:44 UTC I had loaded in H1CALCS_1378587040.txt, which was a simple "cp" copy of H1CALCS_1377543182.txt, the last good filter file that was created during the 2023-08-31 calibration update,...
and the DARM FOM and GDS-CALIB_STRAIN returned to normal. 

All of the panic and the fix happened prior to us going to OBSERVATION_READY at 2023-09-12 21:00:28 UTC, so there was no observation-ready segment that had bad calibration.

I also confirmed that all was restored and well by checking in on both
 -- the live front-end systematic error in DELTAL_EXTERNAL_DQ (using the tools from LHO:69285), and
 -- the low-latency systematic error in GDS-CALIB_STRAIN using the auto-generated plots on https://ldas-jobs.ligo-wa.caltech.edu/~cal/
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 13:07, Wednesday 13 September 2023 (72863)CDS
Just some retro-active proof from the last few days worth of measurements and models of systematic error in the calibration.

First, a trend of the front-end computed values of systematic error, shown in 2023-09-12_H1CALCS_TrendOfSystematicError.png which reviews the time-line of what had happened.

Next, grabs from the GDS measured vs. modeled systematic error archive which show similar information but in hourly snapshots,
    2023-09-12 13:50 - 14:50 UTC 1378561832-1378565432 Pre-maintenance, pre-model-recompile, calibration good, H1CALCS_1377543182.txt 2023-08-31 filter file running.
    2023-09-12 19:50 - 20:50 UTC 1378583429-1378587029 BAD 2023-08-01, last-svn-commit, r26011, filter file in place.
    2023-09-12 20:50 - 21:50 UTC 1378587032-1378590632 H1CALCS_1378587040.txt copy of 2023-08-31 filter installed, calibration goodness restored.

Finally, I show the systematic error in GDS-CALIB_STRAIN trends from the calibration monitor "grafana" page, which shows that because we weren't in ANALYSIS_READY during all this kerfuffle, the systematic error as reported by that system was none the wiser that any of this had happened.

*phew* Good save team!!
Images attached to this comment