We've started our commissioning period at 20:39 UTC after reacquiring NLN at 20:10 UTC.
Fil, Dave:
Normally all the HWWDs show some activity to indicate they are operational, but ETMX has been quiet for just over 3 months. To verify it is still operational before we go into the holiday season, Fil went to EX and unplugged one of the satellite amp monitor cables from the front of the HWWD unit; it immediately started its 20-minute countdown. The cable was reattached after only a few seconds.
The attached 30-day ndscope trend (https://lhocds.ligo-wa.caltech.edu/exports/dave/hwwd_30_day.png) shows the single ETMX data point from yesterday.
At 05:22:13 PST on Tue 21 November 2023, most, but not all, Dolphin IPC receivers on h1sush7 recorded a single receive error.
The attachment shows a time-machine capture of the h1susfc1 and h1sussqzin IPC receive MEDMs.
I think this is the first non-forced, spontaneous IPC receive error we have seen during O4, so it warrants an FRS ticket:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1384712509
We saw some motion in the 1-3 Hz band at the corner station, and ASC_P saw some motion in the CS signals: INP1, MICH, PRC2...
Reacquired NLN at 20:10 UTC
Wed Nov 22 10:04:45 2023 INFO: Fill completed in 4min 42secs
FAMIS 25966, last checked in alog 74193
Script reports elevated high frequency noise for the following sensors:
ITMX_ST2_CPSINF_H3
ITMX_ST2_CPSINF_V1
ETMX_ST2_CPSINF_H2
Elevated noise on ITMX H3 and V1 sensors has been reported in the past several checks, but ETMX ST2 H2 is new and certainly looks elevated.
All other chambers look nominal.
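For reference, here is a minimal sketch of this kind of high-frequency noise check using gwpy. The "_IN1_DQ" channel suffixes, the 5-20 Hz band, and the times below are assumptions for illustration, not the FAMIS script's actual parameters:

    from gwpy.timeseries import TimeSeries

    # Assumed DQ names for the sensors flagged above; band edges and
    # times are placeholders, not the FAMIS script's actual parameters.
    CHANNELS = [
        "H1:ISI-ITMX_ST2_CPSINF_H3_IN1_DQ",
        "H1:ISI-ITMX_ST2_CPSINF_V1_IN1_DQ",
        "H1:ISI-ETMX_ST2_CPSINF_H2_IN1_DQ",
    ]
    START, END = "2023-11-22 08:00", "2023-11-22 08:10"

    for chan in CHANNELS:
        data = TimeSeries.get(chan, START, END)
        asd = data.asd(fftlength=64, overlap=32)
        level = asd.crop(5, 20).mean().value  # 5-20 Hz "high frequency" band is a guess
        print(f"{chan}: mean ASD 5-20 Hz = {level:.3g}")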
We lost lock 12 seconds after getting to NLN at 00:06 UTC. There was a ~520 Hz oscillation in DARM, and the violins were decently rung up. The DCPDs were at pretty much the same level as they were in the following lock, but both of these locks' DCPD signals were 10x what they were when we previously locked on 11/20 at 15:42 UTC.
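A hedged sketch of how one could make that DCPD comparison quantitative with gwpy; the GPS times are placeholders, and H1:OMC-DCPD_SUM_OUT_DQ is used here as the summed DCPD channel:

    from gwpy.timeseries import TimeSeries

    CHAN = "H1:OMC-DCPD_SUM_OUT_DQ"
    LOCKS = {"previous lock (11/20)": 1384529000,   # placeholder GPS times
             "short lock (00:06 UTC)": 1384736000}

    for label, gps in LOCKS.items():
        data = TimeSeries.get(CHAN, gps, gps + 60)
        asd = data.asd(fftlength=8, overlap=4)
        peak = asd.crop(500, 540).max().value       # band around the ~520 Hz feature
        print(f"{label}: peak DCPD ASD 500-540 Hz = {peak:.3g}")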
TITLE: 11/22 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.49 μm/s
QUICK SUMMARY:
PSL 102 (Anteroom) dust counts haven't changed in 23 hours; the 300 nm counts have been stuck at 10 and the 500 nm at 0.
IFO_NOTIFY called for help at 09:32 UTC this morning after H1 reached low noise but couldn't start observing due to an outstanding SDF diff. I REVERTED this diff for "H1:ALS-X_FIBR_LOCK_LOGIC_FORCE" in the EX_ISC SDF table.
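A quick sketch of how one might confirm a frozen channel from its minute trends; the dust channel name below is a hypothetical placeholder, not the real PSL-102 channel:

    from gwpy.timeseries import TimeSeries

    # Hypothetical channel name for the PSL-102 300 nm counts.
    chan = "H1:PEM-CS_DUST_PSL102_300NM_PCF.mean,m-trend"
    trend = TimeSeries.get(chan, "2023-11-21 17:00", "2023-11-22 16:00")
    if trend.value.max() == trend.value.min():
        print(f"{chan} has been frozen at {trend.value[0]:g} for the whole interval")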
H1 is now happily observing as of 09:41 UTC.
TITLE: 11/22 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Fairly decent first two-thirds of the shift; then a M7.0 earthquake (plus high microseism) made the last 3 hours of the shift tough, with ALSy being the problem child as usual. In hindsight, I probably should have waited out the earthquake longer instead of trying to lock after 2 hours.
Ended up staying a little extra to confirm the Initial Alignment completed, then took H1 back to locking after tonight's earthquake activity.
LOG:
We'll be here for a while! BSC1's ISI tripped, and we're currently waiting for the earth to calm down before returning to locking; we're staying in the DOWN state until the Picket Fences calm down (it's been almost an hour and they are still in the orange state).
H1's range has been slowly drifting up over the last 3.5 hrs (currently just under 155 Mpc). We rode through a M6.0 south Pacific earthquake. Microseism continues its trend down (maybe finally at the 90th percentile, after its HUGE levels).
WP 11542
The Kepco power supply at EX failed due to a seized fan. This supply provides the -18V rail to the SEI-C1 rack. The power supply was replaced and a spare unit staged.
Failed Power Supply: S1202026
Replacement Power Supply: S1201925
Spare Power Supply: S1201928
D. Barker, F. Clara, J. Hanks, E. Von Reis, and Jim Warner
H1 made it back to OBSERVING at 0131 UTC.
This was after some adjustments were made to the SQZ phase (following Sheila's cable change earlier today, alog 74334). This was my first time adjusting the SQZ phase while watching the SQZ nuc33 spectra (to get the black trace below the blue trace). The phase started at 170.8 and was taken up to 188.8 (see attached). I did not adjust the OPO temperature.
For SDF, we had several diffs for the SEI, SUS, & ISC subsystems. I ACCEPTED all of them (see attached screenshots for the diffs before they were accepted). Oddly, there were no SDF diffs for the TCSx work (at least for me tonight).
Looks like L1 is also battling microseism and having issues locking DRMI.
WP11542 EX -18VDC Power Supply Replacement
Fil, Erik:
Towards the end of maintenance, coincidentally 2 seconds before the EDC was restarted, the -18V EX SEIS power supply failed. This caused all of the h1seiex ADC channels to read 0.0 counts.
Fil and Erik replaced the power supply with a spare unit.
WP11534 h1digivideo1 causing flashing-blue-screens
Jonathan, Patrick, Dave:
Since the upgrade of sw-msr-h1aux last week, most, if not all, of the cameras on h1digivideo1 have been periodically flashing their gstreamer viewers with a blue screen. Last Friday I restarted MC1's server process, which did not fix that camera's viewers. First thing today I rebooted h1digivideo1, which did not change anything; this was not surprising, since all the server machines had already been rebooted last Tuesday following the switch upgrade.
Looking at the port statistics on sw-msr-h1aux for the h1digivideo[0,1,2] connections, we found that h1digivideo1's VLAN106 port (1/1/16) had 54% utilization and a dropped-packet count of about 9,000. By comparison, the other two machines had utilizations around 20% and only dozens of dropped packets.
Jonathan unplugged and reseated the ethernet cable for VLAN106 at both the h1digivideo1 and sw-msr-h1aux ports; the cable appeared to have been seated correctly.
The utilization dropped from 54% to 42%, and we have not seen any blue flashes on h1digivideo1's cameras since. We will continue to monitor.
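For future checks of this kind, here is a sketch of polling the same per-port counters over SNMP with pysnmp. The community string, SNMP access to sw-msr-h1aux, and the ifIndex values other than h1digivideo1's known port are all assumptions:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Guessed ifIndex values; only 1/1/16 (h1digivideo1) is known from above.
    PORTS = {"h1digivideo0": 14, "h1digivideo1": 16, "h1digivideo2": 18}

    for host, ifindex in PORTS.items():
        err, status, idx, varbinds = next(getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),        # assumed community string
            UdpTransportTarget(("sw-msr-h1aux", 161)),
            ContextData(),
            ObjectType(ObjectIdentity("IF-MIB", "ifInDiscards", ifindex)),
        ))
        if err:
            print(f"{host}: {err}")
        else:
            for vb in varbinds:
                print(f"{host} (ifIndex {ifindex}): {vb.prettyPrint()}")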
WP11541 Add Lock Loss Alert IOC channels to DAQ
Dave:
I created an H1EPICS_ALERT.ini file containing all of the non-string EPICS records in the lock loss alert IOC. An EDC+DAQ restart was required.
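For context, here is a minimal sketch of generating such an ini file. The stanza fields (acquire/datarate/datatype) follow common CDS daqd conventions, and the channel names and values are illustrative placeholders, not the contents of H1EPICS_ALERT.ini:

    # Illustrative only: generate a DAQ/EDC ini from a list of EPICS channels.
    channels = [
        "H1:CDS-LOCKLOSS_ALERT_EXAMPLE_1",  # placeholder channel names
        "H1:CDS-LOCKLOSS_ALERT_EXAMPLE_2",
    ]

    with open("H1EPICS_ALERT.ini", "w") as f:
        f.write("[default]\n")
        f.write("acquire=3\ndatarate=16\ndatatype=4\n\n")  # datatype 4 = float (assumed)
        for chan in channels:
            f.write(f"[{chan}]\n")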
DAQ Restart
Dave, Jonathan:
The DAQ and EDC were restarted. The restart process was interrupted by the loss of h1seiex coincident with the EDC restart, which required some investigation to confirm it was a complete coincidence.
Other than that, and gds1 requiring a second restart, it was an unremarkable restart.
EDC channel count increased by 1082 channels from 56469 to 57551
Tue21Nov2023
LOC TIME HOSTNAME MODEL/REBOOT
11:28:25 h1daqdc0 [DAQ] <<< 0-leg restart
11:28:36 h1daqfw0 [DAQ]
11:28:36 h1daqtw0 [DAQ]
11:28:37 h1daqnds0 [DAQ]
11:28:45 h1daqgds0 [DAQ]
11:29:45 h1susauxb123 h1edc[DAQ] <<< EDC restart (h1seiex failed at this time)
11:44:19 h1daqdc1 [DAQ] <<< delayed 1-leg restart
11:44:30 h1daqfw1 [DAQ]
11:44:30 h1daqtw1 [DAQ]
11:44:32 h1daqnds1 [DAQ]
11:44:41 h1daqgds1 [DAQ]
11:45:36 h1daqgds1 [DAQ] <<< gds1 second restart
11:50:26 h1seiex h1iopseiex <<< first seiex attempt, restart the models
11:50:40 h1seiex h1hpietmx
11:50:54 h1seiex h1isietmx
12:53:15 h1seiex ***REBOOT*** <<< power cycle h1seiex following DC power supply replacement.
12:54:55 h1seiex h1iopseiex
12:55:08 h1seiex h1hpietmx
12:55:21 h1seiex h1isietmx
TITLE: 11/22 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.65 μm/s
QUICK SUMMARY:
Saw H1 make it to NLN and then lose lock seconds later as Ryan was handing off! :( High useism continues (with a downward trend over the last 12 hrs). Currently ALSx is in INCREASE FLASHES.
R. Short, with guidance from K. Kawabe and J. Oberling
I've made some updates to the 'CLOSE_ISS' state in the IMC_LOCK Guardian (the state that handles closing the ISS second loop); these updates should make closing the second loop more consistent.
When the second loop is engaged, the DC working point of the ISS is determined by the output of the second loop's digital AC coupling (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_DRIVE). This output is held when the second loop is closed, but it can occasionally be held far from the average of its oscillations before the second loop is engaged, which in the past caused locklosses and was mitigated earlier this year (see alog 67347). While the second loop isn't causing locklosses anymore (that I can recall), we do still see the digital AC coupling output being held slightly off from the mean, causing the diffracted power to jump. I've expanded upon Georgia's logic by changing the way the output of the AC coupling drive is held: instead of waiting to hold the output until it's near the mean over the past 10 seconds (a calculation that itself can take several seconds), the Guardian now holds the drive through a quicker, more repeatable sequence of steps.
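The step-by-step list isn't reproduced in this entry, but as a purely illustrative sketch of this style of hold logic (the HOLD channel name, the pyepics interface, and the averaging/crossing details below are all assumptions, not the actual CLOSE_ISS code):

    import time
    import numpy as np
    from epics import caget, caput  # pyepics

    DRIVE = "H1:PSL-ISS_SECONDLOOP_AC_COUPLING_DRIVE"   # from the entry above
    HOLD = "H1:PSL-ISS_SECONDLOOP_AC_COUPLING_HOLD"     # hypothetical hold switch

    # Average the free-running drive once, up front (duration assumed).
    samples = []
    for _ in range(48):                  # ~3 s at 16 Hz
        samples.append(caget(DRIVE))
        time.sleep(1 / 16.0)
    target = float(np.mean(samples))

    # Hold the moment the live output next crosses that precomputed mean,
    # rather than re-averaging on every poll.
    previous = caget(DRIVE)
    while True:
        current = caget(DRIVE)
        if (current - target) * (previous - target) <= 0:  # sign change = crossing
            break
        previous = current
        time.sleep(1 / 16.0)
    caput(HOLD, 1)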
I was able to test this logic during the maintenance period today, both with the IMC locked at 2W and 60W, with great success. We'll run with this for a while to see if over several lock acquisitions the second loop is being engaged with a more consistent digital AC coupling drive. The updated IMC_LOCK Guardian code is loaded and committed to SVN.