Below is the summary of the LHO DQ shift for the week of 2023-09-11 to 2023-09-17
The full DQ shift report with day by day details is available at: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20230911
TITLE: 09/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 8mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY: As mentioned by RyanC in his alog, IOPSUSH7 is showing several issues, causing the SWWD for HAM7 to be tripped; tagging CDS. This has left H1 without squeezing and out of observing since 10:48 UTC, but it has still been locked at low noise for almost 13 hours.
We have moved FC1, ZM1, ZM2, ZM3 and OPO suspensions into SAFE state until the CDS issues are resolved.
All suspensions in HAM7 are now back online and into ALIGNED state after Fil reset the DC power supply and Dave restarted the computers.
HAM7 ISI watchdog tripped at 10:53 UTC from the ACT; the ACT actuators saw a large glitch multiple times. Each time I reset the watchdog, the ACT goes crazy again. Also, I just noticed that HAM7 isn't on the SUS software watchdog MEDM screen. There's an IOP DACKILL as well. I gave Jim a call and he's going to hop on and check it out.
There's an issue with the IOP for HAM7, the DACKILL, and FC1. After talking with Jim, I tried to call for CDS help but wasn't able to get anyone (left a voicemail). FC1's MEDM screen appears frozen, but when you ndscope the OSEM channels they're all oscillating rapidly. We're not able to get into Observing with this issue; we need a CDS person to look into it.
TITLE: 09/20 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
- Relocking upon arrival, back to NLN @ 23:36, OBSERVE @ 23:47
- 1:18 - inc 5.4 EQ from Alaska
- Reacquisition was automated, NLN @ 2:21, OBSERVE @ 2:39
LOG:
No log for this shift.
Following a couple locklosses, H1 is back in observing mode. Seismic motion seems to have leveled out from the EQ earlier and all systems appear to be stable. Currently in a STAND DOWN due to a GRB short detection.
Lockloss @ 0:42, cause unknown. Not seeing a whole lot of evidence for an ASC ringup. On the LSC side of things, it seems that LSC DARM IN1 is the first to see the instability, followed shortly by the ASC AS A DC PD; scope attached. Looking at the lockloss website, it seems the ETMX L2 UL/UR OSEM stages saw elevated oscillations ~30 before the lockloss occurred.
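As an illustration of that kind of "which channel moved first" check, here is a rough sketch; it is not the method the lockloss tool actually uses, and the channel data are assumed to be supplied separately as numpy arrays ending at the lockloss time:

import numpy as np

def first_to_deviate(channels, fs, nsigma=5, baseline=(-60.0, -30.0)):
    """Find which channel first exceeds nsigma of its quiet-time scatter.

    channels : dict of name -> 1-D numpy array, all sampled at fs (Hz)
               and ending at the lockloss time
    baseline : window in seconds (relative to lockloss) used to estimate
               each channel's quiet mean and standard deviation
    Returns (channel_name, seconds_before_lockloss) or None.
    """
    results = {}
    for name, x in channels.items():
        t = np.arange(len(x)) / fs - len(x) / fs   # time relative to lockloss, <= 0
        quiet = x[(t >= baseline[0]) & (t < baseline[1])]
        mu, sigma = quiet.mean(), quiet.std()
        crossed = (np.abs(x - mu) > nsigma * sigma) & (t >= baseline[1])
        if crossed.any():
            results[name] = t[np.argmax(crossed)]  # time of first crossing
    if not results:
        return None
    name = min(results, key=results.get)           # earliest (most negative) crossing
    return name, -results[name]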
H1EPICS_HWS.ini has 88 ITMY channels, all are connecting except the following 6:
H1:TCS-ITMY_HWS_BEAM_POS_X
H1:TCS-ITMY_HWS_BEAM_POS_Y
H1:TCS-ITMY_HWS_CO2_POS_X
H1:TCS-ITMY_HWS_CO2_POS_Y
H1:TCS-ITMY_HWS_RH_POS_X
H1:TCS-ITMY_HWS_RH_POS_Y
Explanation of the issue is in 73012. As we are now in observing and these are not important channels, we will fix this tomorrow.
Camilla confirms it is OK to run without these channels overnight, they will be added back tomorrow when H1 is out of observe.
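For reference, a quick connectivity check of these six channels could look like the sketch below; this assumes pyepics is available on a workstation with channel access and is not part of any official procedure:

# Sketch: check which of the listed HWS channels connect (assumes pyepics).
import epics

channels = [
    "H1:TCS-ITMY_HWS_BEAM_POS_X",
    "H1:TCS-ITMY_HWS_BEAM_POS_Y",
    "H1:TCS-ITMY_HWS_CO2_POS_X",
    "H1:TCS-ITMY_HWS_CO2_POS_Y",
    "H1:TCS-ITMY_HWS_RH_POS_X",
    "H1:TCS-ITMY_HWS_RH_POS_Y",
]

for name in channels:
    pv = epics.PV(name)
    ok = pv.wait_for_connection(timeout=2.0)
    print(f"{name}: {'connected' if ok else 'NOT connecting'}")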
Camilla installed the new HWS ITMY code at 08:22 while H1 was out of observe. EDC is now green.
TITLE: 09/20 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY: H1 was unlocked for most of the morning following an earthquake, then commissioning time this afternoon. One commissioning-caused lockloss has H1 currently relocking.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:55 | PEM | Mitchell | EX, EY | - | Dust monitor vacuum pump checks | 16:39 |
15:59 | FAC | Bubba | OSB | - | Driving forklift | 16:15 |
17:04 | ISC | TJ | CR | - | Testing FIND_IR updates | 17:16 |
18:44 | S&K Elec | Ken | FCTE | - | Add exit signs to middle and LVEA side of FC tube enclosure | 21:33 |
19:59 | ISC | Keita | LVEA | - | Switch OM2 cabling | 20:06 |
20:15 | ISC | Jenne | CR | - | FF testing | 20:53 |
TITLE: 09/20 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 12mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
- H1 is currently relocking, with DRMI just locked
- CDS/SEI ok
Lockloss @ 20:29 UTC - during commissioning time, Jenne was driving ITMY to get a measurement for MICH FF updates.
In short - I've added a bit to the ALS_DIFF node in the FIND_IR and SAVE_OFFSET states. The goal was to have it handle the case seen the other night where the node goes to an offset and there is the smallest bit of light above the noise, but it's still below threshold and won't fine-tune. Just lowering the threshold won't work since this noise seems to vary a bit, so I added a few other checks.
I didn't fully test this since we've already been down for a few hours from an earthquake and I would have had to restart locking. If this causes any problems, please give me a call or revert ALS_DIFF.py in SVN. I already have a copy saved.
I had a chance to test it out a bit more thoroughly after a lockloss around 1:30 PT. This time I forced it into a situation where it would overshoot resonance and have a very small amount of light in the signal. It then jumped back and got close enough that the fine-tuning threshold was met, moving the node into the next state to fine-tune.
This situation has been the most frequent cause of the node not finding IR resonance this year, so hopefully this helps reduce assistance-required requests. Next steps for this node are to make a "long search" state that will search through the full range for good buildups, and to fix the times when it fine-tunes indefinitely, circling around the correct value. The latter issue should just be a minor tuning change, but it's challenging to test.
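To illustrate the kind of check described above, here is a hypothetical sketch of the decision logic; it is not the actual ALS_DIFF.py code, and all names and numbers are invented:

# Hypothetical sketch of the FIND_IR decision described above;
# not the real ALS_DIFF.py code -- names and thresholds are made up.
def classify_offset(power, noise_floor, fine_tune_threshold=0.3,
                    above_noise_factor=3.0):
    """Decide what to do at the current DIFF offset.

    power       : IR transmission seen at this offset
    noise_floor : recent estimate of the dark/noise level (it varies,
                  so a single fixed absolute threshold is not enough)
    """
    if power >= fine_tune_threshold:
        return "FINE_TUNE"            # strong enough to hand off to fine tuning
    if power > above_noise_factor * noise_floor:
        # A small but real amount of light: we likely overshot resonance,
        # so step back toward it instead of continuing the sweep.
        return "STEP_BACK"
    return "KEEP_SEARCHING"           # nothing above the noise yet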
This was a good test of our new Picket Fence monitor EPICS channels. The PKT circle on the CDS Overview is red, and the Picket Fence MEDM (attached) shows that the last update was many minutes ago (it should never exceed 4 seconds).
I'm taking a look at the server and will restart it.
No need to restart the Picket Fence process; after almost exactly 20 minutes, at 09:53 PDT, the service sprang back to life and is updating normally now with nothing done at our end.
Good to know that the automatic restart is working! Also, very nice detailed interface, I'm a big fan.
It's a bit awkward that it took almost 20 minutes to come back again. I'd prefer it to be closer to 5 minutes and will check the code to see if there's anything amiss. There were no alerts/logs on the copy we're running at Stanford, so I assume this is not a problem on the USGS side.
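A minimal staleness check along the lines of the new monitor channels might look like the sketch below; the channel name is a placeholder (I have not verified the real Picket Fence EPICS names) and pyepics is assumed:

# Sketch: flag a stale Picket Fence update (placeholder channel name, assumes pyepics).
import time
import epics

HEARTBEAT_PV = "H1:SEI-PICKET_FENCE_GPS"   # placeholder, not a verified channel name
MAX_AGE_S = 4.0                            # updates should never be older than ~4 s

pv = epics.PV(HEARTBEAT_PV)
if pv.wait_for_connection(timeout=2.0):
    pv.get()                               # populate the timestamp metadata
    age = time.time() - pv.timestamp       # seconds since the last server update
    print(f"last update {age:.1f} s ago -> {'STALE' if age > MAX_AGE_S else 'OK'}")
else:
    print("could not connect to Picket Fence channel")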
Erik, Camilla
This code, described by Huy-Tuong Cao in 72229, was already installed on ETMY. Today Erik installed conda on h1hwsex and we got the new code running. I had to take a new ETMX reference while the IFO was hot (old py2 pickle files have a different format/encoding than py3), so next Tuesday I should re-take this reference with a cold IFO. This update can be done on the ITMs tomorrow.
To install conda or add it to the path, Erik ran the command: '/ligo/cds/lho/h1/anaconda/anaconda2/bin/conda init bash'
Then, after closing/reopening a terminal, I can pull the hws-server/-/tree/fix/python3 code.
Stop and restart the code after running 'conda activate hws' to use the correct python paths.
Had a few errors that needed to be fixed before the code ran successfully.
New code now running on all optics. Took new references on ITMs and ETMX.
New ETMX reference taken at 16:21 UTC with ALS shuttered, after the IFO had been down for 90 minutes. Installed at ITMX, new reference taken at 16:18 UTC. Installed at ITMY, new reference taken at 16:15 UTC. I realized that where I had previously edited code in /home/controls/hws/HWS/, I should have instead just pulled the new fix/python3 HWS code, so I went back and did this for the ITMs and ETMX.
ITMX SLED power has quickly decayed (see attached), so I adjusted the frame rate from 5Hz to 1Hz; instructions are in the TCS wiki page. This SLED was recently replaced (71476) and was a 2021 SLED, so it is not particularly old. LLO has seen similar issues (66713), but over hours rather than weeks. We need to decide what to do about this; we could try to touch the trimpoint or contact the company. The ITMY SLED is fine.
There were two separate errors on ITMY that stopped the code in the first few minutes, a "segmentation fault" and a "fatal IO error 25". We should watch that this code continues to run without issues.
ITMY code keeps stopping and has been stopped for the last few hours. I cannot access the computer; maybe it has crashed? There is an orange light on h1hwsmsr.
ITMX spherical power is very noisy. This appears to be because the new ITMX reference has a lot of dead pixels. I've noted them but haven't yet added them to the dead-pixel files, as I cannot edit the read-only file - TCS wiki link.
ITMY - TJ and I restarted h1hwsmsr1 in the MSR and I restarted the code. We were getting a regular "out of bounds" error (attached), but the data now seems to be running fine. The fix/python3 code didn't have all the master commits in it, so when we restarted msr1 some of the channels were not restarted, as found by Dave in 73013. I've updated the fix/python3 code; we should pull this commit and kill/restart the IOC and the HWS ITMY code tomorrow.
ITMX - the bad pixels were added and the data is now much cleaner; updated instructions in the TCS wiki.
Pulled the new python3 code with all channels to both ITM and ETMX computers (already on ETMY). Killed and restarted the softIoc on h1hwsmsr1 for ITMY following the instructions in 65966. The 73013 channels are now running again.
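For illustration only, masking noted dead pixels before a frame is used might look like the sketch below; this is not the actual HWS code, and the (row, col) list format and neighbourhood-median fill are assumptions:

# Sketch: replace noted dead pixels in a Hartmann camera frame before it is used.
import numpy as np

def apply_dead_pixel_mask(frame, dead_pixels):
    """Replace listed dead pixels with the median of their 3x3 neighbourhood.

    frame       : 2-D numpy array (camera image)
    dead_pixels : iterable of (row, col) indices noted as bad
    """
    cleaned = frame.astype(float).copy()
    nrows, ncols = cleaned.shape
    for r, c in dead_pixels:
        r0, r1 = max(r - 1, 0), min(r + 2, nrows)
        c0, c1 = max(c - 1, 0), min(c + 2, ncols)
        patch = cleaned[r0:r1, c0:c1].ravel()
        patch = np.delete(patch, (r - r0) * (c1 - c0) + (c - c0))  # drop the bad pixel itself
        cleaned[r, c] = np.median(patch)
    return cleaned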
Took new references on all optics at 20:20 UTC after the IFO had been unlocked and the CO2 lasers off for 5 hours. RH settings nominal: IX 0.4 W, IY 0.0 W, EX 1.0 W, EY 1.0 W.
Detchar, please tell us if the 1.66Hz comb is back.
We changed the OM2 heater driver configuration from what was described in alog 72061.
We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.
Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.
BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.
When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look like the second picture.
Somebody must have snagged the cables hard enough to disconnect them, and didn't even bother to check.
Next time you do it, since reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.
Thermistors are working.
There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)
Having turned the heater back on will likely affect our calibration. It's not a bad thing, but it is something to be aware of.
Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.
It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)
To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.
Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.
I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
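As a rough way to check which spacing a set of peak frequencies fits, one can compare residuals against integer multiples of each candidate spacing; the peak values in this sketch are illustrative, not the measured ones:

# Sketch: check which comb spacing a set of peak frequencies fits best.
import numpy as np

def comb_residual(peak_freqs, spacing):
    """RMS distance (Hz) of each peak from the nearest integer multiple of spacing."""
    peaks = np.asarray(peak_freqs, dtype=float)
    nearest = np.round(peaks / spacing) * spacing
    return np.sqrt(np.mean((peaks - nearest) ** 2))

peaks = [277.15, 278.26, 279.37, 280.48]          # made-up peaks near 280 Hz
for spacing in (1.6611, 1.1086):
    print(f"{spacing} Hz comb: rms residual {comb_residual(peaks, spacing) * 1e3:.1f} mHz")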
Detchar, please see if there's any change in 1.66Hz comb.
At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).
Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.
Electrical grounding of the Beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both (a) the suspected time of power disruption (first attachment) and (b) when the power and function of OM2 was restored (second attachment).
One can see that upon power restoration and resuming the HOT configuration of the TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution. Heading back to 2023-08-29 (time (a) above), we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes. Then, at 40 minutes, IFO recovery from maintenance begins and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes. I take this to be consistent with:
- The OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and
- The OM2 TSAMS heater was restored, turned ON and HOT, on 2023-09-19 at 18:14 UTC (11:14 PDT).
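The "thermal time constant" shape can be quantified by fitting an exponential approach to the pitch trend; below is a sketch using scipy, with synthetic data standing in for the real OSEM time series (the fitted numbers are not measurements):

# Sketch: fit an exponential thermal response to an OM2 pitch OSEM trend.
# Synthetic data are used here in place of the real OSEM time series.
import numpy as np
from scipy.optimize import curve_fit

def thermal_step(t, p0, dp, tau):
    """Pitch vs time for a heater step at t=0: p0 + dp*(1 - exp(-t/tau))."""
    return p0 + dp * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 3600, 600)                       # one hour of samples (s)
rng = np.random.default_rng(0)
data = thermal_step(t, 0.0, -190.0, 900.0) + rng.normal(0, 2.0, t.size)  # urad

popt, _ = curve_fit(thermal_step, t, data, p0=(0.0, -150.0, 600.0))
print(f"fitted step {popt[1]:.0f} urad, time constant {popt[2]:.0f} s")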