Ryan, Rahul, Fil, Dave:
All h1sush7 models stopped running at:
PDT: 2023-09-21 03:48:52.000000 PDT
UTC: 2023-09-21 10:48:52.000000 UTC
GPS: 1379328550.000000
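For cross-checking timestamps like these, gwpy's tconvert does the GPS/UTC conversion (a sketch, assuming a standard gwpy install on a CDS workstation):

```python
# Sketch: cross-check the GPS/UTC stamps above with gwpy.
from gwpy.time import tconvert

print(tconvert(1379328550))                   # -> 2023-09-21 10:48:52 (UTC)
print(int(tconvert('2023-09-21 10:48:52')))   # -> 1379328550
```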
From this time onwards H1 was still in lock with a depressed range of ~130 Mpc and was out of OBSERVE. The h1seih7 SWWDs were tripped.
Recovery process was:
Stop models, fence h1sush7 from the Dolphin fabric, reboot h1sush7.
When h1sush7 came back, I verified that the IO Chassis could not be seen. I then fenced and powered the front end down.
Fil went onto the mech room mezzanine and verified that the SUS side of the Kepco dual power supply had tripped (the SEI side was OK). He powered the IO Chassis back on, I powered the h1sush7 computer back up, and everything came back correctly.
I untripped the SWWDs to get h1seih7 driving again, and Ryan and Rahul recovered the SUS models.
FAMIS 26458
HEPI Pump Trends for the last 45 days are attached.
H1:HPI-PUMP_L0_CONTROL_VOUT has seen a change of 8 [units] in the last 3 days. I'm not sure whether a movement of 8 is unreasonable or not. Also, what are the units of these channels?
The increase in drive is probably related to Tyler's change to the MR heater coil settings on the 18th. I think the units on the L0 VOUT channel are just drive counts, but I'm not sure.
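For anyone repeating this FAMIS check, a minimal sketch of trending the pump drive with gwpy; the minute-trend channel suffix and NDS2 access are assumptions about the environment, and the y-axis units are a guess per the comment above:

```python
# Sketch: 45-day minute trend of the HEPI pump drive via NDS2/gwpy.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

end = int(to_gps('now')) // 60 * 60        # align to a minute boundary
start = end - 45 * 86400                   # 45 days back
data = TimeSeries.get('H1:HPI-PUMP_L0_CONTROL_VOUT.mean,m-trend', start, end)
plot = data.plot()
plot.gca().set_ylabel('drive [counts, assumed]')
plot.savefig('hepi_pump_l0_vout_45day.png')
```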
Below is the summary of the LHO DQ shift for the week of 2023-09-11 to 2023-09-17
The full DQ shift report with day by day details is available at: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20230911
TITLE: 09/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 8mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY: As mentioned by RyanC in his alog, IOPSUSH7 is showing several issues, causing the SWWD for HAM7 to be tripped; tagging CDS. This has left H1 without squeezing and out of observing since 10:48 UTC, but it has still been locked at low noise for almost 13 hours.
We have moved FC1, ZM1, ZM2, ZM3 and OPO suspensions into SAFE state until the CDS issues are resolved.
All suspensions in HAM7 are now back online and into ALIGNED state after Fil reset the DC power supply and Dave restarted the computers.
HAM7 ISI watchdog tripped at 10:53 UTC from the ACT; the actuators saw a large glitch multiple times. Each time I reset the watchdog, the ACT goes crazy again. Also, I just noticed that HAM7 isn't on the SUS software watchdog MEDM screen. There's an IOP DACKILL as well. I gave Jim a call and he's going to hop on and check it out.
There's an issue with the IOP for HAM7, the DACKILL, and FC1. After talking with Jim I tried to call for CDS help but wasn't able to get anyone (left a voicemail). FC1's MEDM screen appears frozen, but when you ndscope the OSEM channels they're all oscillating rapidly. We're not able to get into Observing with this current issue; we need a CDS person to look into it.
TITLE: 09/20 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
- Relocking upon arrival, back to NLN @ 23:36, OBSERVE @ 23:47
- 1:18 - incoming 5.4 EQ from Alaska
- Reacquisition was automated, NLN @ 2:21, OBSERVE @ 2:39
LOG:
No log for this shift.
Following a couple locklosses, H1 is back in observing mode. Seismic motion seems to have leveled out from the EQ earlier and all systems appear to be stable. Currently in a STAND DOWN due to a GRB short detection.
Lockloss @ 0:42, cause unknown. Not seeing a whole lot of evidence for an ASC ringup. On the LSC side of things, LSC DARM IN1 is the first to see the instability, followed shortly by the ASC AS A DC PD; scope attached. Looking at the lockloss website, it appears the ETMX L2 UL/UR OSEMs saw elevated oscillations ~30 before the lockloss occurred.
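For reference, the kind of comparison described above can be reproduced with gwpy; the channel names and lockloss time below are illustrative guesses, not taken from the attached scope:

```python
# Sketch: pull DARM and AS_A DC around the lockloss.  Channel names and
# the exact lockloss time are guesses; verify before trusting the plot.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict

t0 = to_gps('2023-09-21 00:42')            # approximate lockloss time (UTC)
channels = [
    'H1:LSC-DARM_IN1_DQ',                  # assumed DARM error-point channel
    'H1:ASC-AS_A_DC_NSUM_OUT_DQ',          # assumed AS A DC sum channel
]
data = TimeSeriesDict.get(channels, t0 - 60, t0 + 5)
plot = data.plot()
plot.savefig('lockloss_0042utc.png')
```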
H1EPICS_HWS.ini has 88 ITMY channels, all are connecting except the following 6:
H1:TCS-ITMY_HWS_BEAM_POS_X
H1:TCS-ITMY_HWS_BEAM_POS_Y
H1:TCS-ITMY_HWS_CO2_POS_X
H1:TCS-ITMY_HWS_CO2_POS_Y
H1:TCS-ITMY_HWS_RH_POS_X
H1:TCS-ITMY_HWS_RH_POS_Y
Explanation of the issue is in 73012. As we are now in observing and these are not important channels, we will fix this tomorrow.
Camilla confirms it is OK to run without these channels overnight, they will be added back tomorrow when H1 is out of observe.
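For the record, connectivity of these six channels can be confirmed from a workstation with pyepics (a sketch; assumes EPICS gateway access):

```python
# Sketch: check that the six missing HWS channels are connecting.
import epics

channels = [
    'H1:TCS-ITMY_HWS_BEAM_POS_X',
    'H1:TCS-ITMY_HWS_BEAM_POS_Y',
    'H1:TCS-ITMY_HWS_CO2_POS_X',
    'H1:TCS-ITMY_HWS_CO2_POS_Y',
    'H1:TCS-ITMY_HWS_RH_POS_X',
    'H1:TCS-ITMY_HWS_RH_POS_Y',
]
for name in channels:
    pv = epics.PV(name)
    ok = pv.wait_for_connection(timeout=2.0)
    print(f"{name}: {'connected' if ok else 'NOT connected'}")
```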
Camilla installed the new HWS ITMY code at 08:22 while H1 was out of observe. EDC is now green.
TITLE: 09/20 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY: H1 was unlocked for most of the morning following an earthquake, then had commissioning time this afternoon. A commissioning-caused lockloss means H1 is currently relocking.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:55 | PEM | Mitchell | EX, EY | - | Dust monitor vacuum pump checks | 16:39 |
| 15:59 | FAC | Bubba | OSB | - | Driving forklift | 16:15 |
| 17:04 | ISC | TJ | CR | - | Testing FIND_IR updates | 17:16 |
| 18:44 | S&K Elec | Ken | FCTE | - | Add exit signs to middle and LVEA side of FC tube enclosure | 21:33 |
| 19:59 | ISC | Keita | LVEA | - | Switch OM2 cabling | 20:06 |
| 20:15 | ISC | Jenne | CR | - | FF testing | 20:53 |
TITLE: 09/20 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 12mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
- H1 is currently relocking, with DRMI just locked
- CDS/SEI ok
This was a good test of our new Picket Fence monitor EPICS channels. The PKT circle on the CDS Overview is red, and the Picket Fence MEDM (attached) shows that the last update is many minutes old (it should never exceed 4 seconds).
I'm taking a look at the server and will restart it.
No need to restart the Picket Fence process; after almost exactly 20 minutes, at 09:53 PDT, the service sprang back to life and is updating normally now with nothing done at our end.
Good to know that the automatic restart is working! Also, very nice detailed interface, I'm a big fan.
It's a bit awkward that it took almost 20 minutes to come back. I'd prefer it to be closer to 5 minutes, so I will check the code to see if there's anything amiss. There were no alerts/logs on the copy we're running at Stanford, so I assume this is not a problem on the USGS side.
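For illustration only, a generic staleness watchdog along the lines of what the monitor is doing; the channel name is a hypothetical stand-in for whatever "last update" record the Picket Fence server publishes, and this is not the actual server code:

```python
# Sketch: warn when Picket Fence data goes stale.  The channel name is a
# hypothetical placeholder, not the real Picket Fence EPICS record.
import time
import epics

PKT_CHANNEL = 'H1:SEI-USGS_PKT_LATEST'   # hypothetical name
MAX_AGE = 4.0                            # seconds, per the MEDM note above

pv = epics.PV(PKT_CHANNEL)
last_change = time.monotonic()

def on_update(value=None, **kwargs):
    """Record the wall-clock time of the most recent channel update."""
    global last_change
    last_change = time.monotonic()

pv.add_callback(on_update)

while True:
    age = time.monotonic() - last_change
    if age > MAX_AGE:
        print(f"Picket Fence data stale for {age:.0f} s")
    time.sleep(1)
```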
Erik, Camilla
This was already installed on ETMY; the code is described by Huy-Tuong Cao in 72229. Today Erik installed conda on h1hwsex and we got the new code running. I had to take a new ETMX reference while the IFO was hot (old py2 pickle files have a different format/encoding than py3), so next Tuesday I should re-take this reference with a cold IFO. This update can be done to the ITMs tomorrow.
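For context on the py2/py3 pickle issue: old py2-era pickles generally need an explicit encoding when loaded under py3, roughly as below (a generic sketch, not the actual HWS code; the filename is illustrative):

```python
# Sketch: load a Python-2-era reference pickle under Python 3.
# Without encoding='latin1', py2 str/NumPy payloads raise UnicodeDecodeError.
import pickle

with open('old_etmx_reference.p', 'rb') as f:   # illustrative filename
    reference = pickle.load(f, encoding='latin1')
```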
To install conda (i.e. add it to the path), Erik ran the command: '/ligo/cds/lho/h1/anaconda/anaconda2/bin/conda init bash'
Then, after closing and reopening a terminal, I could pull the hws-server/-/tree/fix/python3 code:
Stop and restart the code after running 'conda activate hws' to use the correct python paths.
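A quick sanity check before restarting, sketched below (assumes the env is literally named 'hws', as the activate command above suggests):

```python
# Sketch: after 'conda activate hws', confirm the interpreter/env before
# killing and restarting the HWS code.
import os
import sys

print(sys.executable)                        # should point into the hws env
print(os.environ.get('CONDA_DEFAULT_ENV'))   # expect 'hws'
```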
Had a few errors that needed to be fixed before the code ran successfully:
New code now running on all optics. Took new references on ITMs and ETMX.
New ETMX reference taken at 16:21 UTC with ALS shuttered, after the IFO had been down for 90 minutes. Installed on ITMX; new reference taken at 16:18 UTC. Installed on ITMY; new reference taken at 16:15 UTC. I realized that where I had previously edited code in /home/controls/hws/HWS/, I should have just pulled the new fix/python3 HWS code, so I went back and did this for the ITMs and ETMX.
ITMX SLED power has quickly decayed, see attached, so I adjusted the frame rate from 5 Hz to 1 Hz, following the instructions in the TCS wiki page. This SLED was recently replaced (71476) and was a 2021 SLED, so not particularly old. LLO has seen similar issues (66713), but over hours rather than weeks. We need to decide what to do about this; we could try to touch the trimpoint or contact the company. The ITMY SLED is fine.
There were two separate errors on ITMY that stopped the code in the first few minutes: a "segmentation fault" and a "fatal IO error 25". We should watch that this code continues to run without issues.
The ITMY code keeps stopping and has been stopped for the last few hours. I cannot access the computer; maybe it has crashed? There is an orange light on h1hwsmsr.
ITMX spherical power is very noisy. This appears to be because the new ITMX reference has a lot of dead pixels; I've noted them but haven't yet added them to the dead pixels file, as I cannot edit the read-only file - TCS wiki link.
ITMY - TJ and I restarted h1hwsmsr1 in the MSR and I restarted the code. We were getting a regular "out of bounds" error, attached, but the data now seems to be running fine. The fix/python3 code didn't have all the master commits in it, so when we restarted msr1 some of the channels were not restarted, as found by Dave in 73013. I've updated the fix/python3 code, and we should pull this commit and kill/restart the IOC and HWS ITMY code tomorrow.
ITMX - the bad pixels were added and the data is now much cleaner; updated instructions in the TCS wiki.
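For illustration, masking a dead-pixel list out of a frame looks roughly like this (file format and names are illustrative, not the actual HWS code or its dead-pixel file layout):

```python
# Sketch: drop flagged dead pixels from an HWS frame before centroiding.
# Filenames and the (row, col) text format are illustrative only.
import numpy as np

frame = np.load('hws_frame.npy')
dead = np.atleast_2d(np.loadtxt('dead_pixels.txt', dtype=int))  # (row, col) pairs

mask = np.ones(frame.shape, dtype=bool)
mask[dead[:, 0], dead[:, 1]] = False

# Zero the dead pixels so they carry no weight downstream; interpolating
# from neighbours would be another option.
cleaned = np.where(mask, frame, 0.0)
```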
Pulled the new python3 code with all channels to both ITM computers and the ETMX computer (already on ETMY). Killed and restarted the softIoc on h1hwsmsr1 for ITMY following the instructions in 65966. The 73013 channels are now running again.
Took new references on all optics at 20:20 UTC after the IFO had been unlocked and the CO2 lasers off for 5 hours. RH settings nominal: IX 0.4 W, IY 0.0 W, EX 1.0 W, EY 1.0 W.
Opened FRS29160 for this issue.
This is the third time this has happened.
Fil has opened a workpermit to replace this Kepco power supply next Tuesday.
Total H1 observing time lost this morning was 4h 58m, from 10:49 UTC to 15:47 UTC. During this time, there was no squeezing and H1's range averaged around 135 Mpc.
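Quick arithmetic check of the quoted downtime (times from above):

```python
# 10:49 UTC to 15:47 UTC
from datetime import datetime
print(datetime(2023, 9, 21, 15, 47) - datetime(2023, 9, 21, 10, 49))  # 4:58:00
```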