I restarted the slow-controls SDF system h1cdssdf on h1ecatmon0; it is now running with its full list of channels. It had been running without the lock-loss-alert channels since Thursday's cdslogin reboot.
TITLE: 12/24 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 10mph 5min avg
Primary useism: 0.11 μm/s
Secondary useism: 0.59 μm/s
QUICK SUMMARY:
After a 44-hour lock, the IFO needed some help to keep the green arms locked. The useism has been getting pretty high, so I tried the USEISM state in SEI_CONF and green was immediately able to hold. I changed the default states in SEI_ENV to the Useism and Useism_Earthquake states, but these should be reverted once the useism comes back down.
We have finished an alignment and are up to full power with this change.
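As a rough guide for whoever reverts the defaults above, here is a minimal sketch of checking the secondary-microseism trend first. It assumes gwpy/NDS2 access, and the ground-motion BLRMS channel name and threshold are illustrative placeholders, not verified H1 channels:

```python
# Illustrative check of the 0.1-0.3 Hz (secondary microseism) ground motion
# before reverting the SEI_ENV default states. The channel name and threshold
# are placeholders, not verified against the real H1 channel list.
from gwpy.timeseries import TimeSeries

CHAN = 'H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M'   # hypothetical BLRMS channel
data = TimeSeries.get(CHAN, 'Dec 24 2023 16:00', 'Dec 25 2023 00:00')

# Placeholder threshold: if the band-limited RMS has settled back to quiet
# levels, the Useism/Useism_Earthquake defaults could be reverted.
if data.max().value < 300:
    print('Secondary useism back down; consider reverting SEI_ENV defaults.')
else:
    print('Useism still elevated; leave the USEISM defaults in place.')
```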
TITLE: 12/24 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Very quiet evening with a few earthquakes passing through. H1 was locked and observing the whole time; the current lock stretch is coming up on 44 hours.
Over the course of the day today, Ryan and I have noticed the BNS inspiral range dropping starting around noon PST. Since it has mostly come back at this point, almost 12 hours later, it seems to be correlated with a drop in LVEA temperature, which in turn has raised the large corner station optics (BS and ITMs) according to their vertical top-mass OSEMs (see one- and six-day trends attached). Perhaps the HVAC system lowered the heat into the LVEA in response to the outside temperature rising? Since H1 has remained locked and observing over this time, I imagine the angular control loops have been keeping up with the alignment changes, but possibly at the cost of extra noise in DARM, which has been affecting the range. I don't believe any action is needed, but it is still worth noting how corner station temperature changes can affect our observing performance.
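For reference, a quick sketch of the kind of trend comparison described above, pulling the LVEA temperature, a corner-station top-mass vertical OSEM, and the BNS range over the same stretch. The channel names are illustrative stand-ins, not verified H1 channels, and gwpy/NDS2 access is assumed:

```python
# Sketch of the trend comparison described above. The channel names are
# illustrative examples only; substitute the actual LVEA temperature, SUS
# top-mass vertical OSEM, and range channels when reproducing the plots.
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:PEM-CS_TEMP_LVEA_VERTEX_DEGF',               # hypothetical LVEA temperature
    'H1:SUS-BS_M1_DAMP_V_INMON',                     # hypothetical BS top-mass vertical OSEM
    'H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC',   # hypothetical BNS range channel
]
trends = TimeSeriesDict.get(channels, 'Dec 23 2023 18:00', 'Dec 24 2023 08:00')

# Save one plot per channel; comparing them shows whether the temperature
# drop, optic height change, and range drop line up in time.
for name, ts in trends.items():
    plot = ts.plot()
    plot.savefig(name.replace(':', '_') + '.png')
```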
State of H1: Observing at 151Mpc
H1 has been locked for almost 40 hours. EQ mode activated twice for quakes passing through, but otherwise it's been a quiet evening so far.
TITLE: 12/23 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
H1 has been locked for 35.5 hours. I'll keep an eye on LVEA temps since they seem to have dropped slightly in the past couple hours. BNS range also seems to have dropped a bit.
TITLE: 12/23 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Quiet day. I ran a calibration measurement, and we stayed locked the whole shift (35:40). The range has been dropping slightly over the past hour or so, maybe related to small VEA temperature changes?
16:35 I went into the warehouse by the VPW to restart the cluster thermostats after Jonathon called to let me know it was getting warm in there; I had also done this last Saturday, the 16th.
18:08 I went out to the H2 enclosure, power cycled the two in-use thermostats, and adjusted the fan.
18:00 EQ from Japan, EQ mode at 18:08 UTC, back to CALM 20 minutes later
19:01 I dropped Observing to run the calibration sweep while LLO was relocking following the EQ, and went back into Observing at 19:31 UTC.
20:30 to 21:00 UTC I did a quick corner station building walkthrough (OSB, LSB, mech room, LExC, and staging; I did the H2 enclosure and VPW in the morning) and saw nothing noteworthy.
H1:PEM-Y_EBAY_RACK2_TEMPERATURE high temperature alert, almost 7 degrees above nominal, but it's trending down. Tagging FMCS, PEM.
STATE of H1: Observing at 163Mpc
I started the cal sweep at 19:01 with the broadband measurement, then ran the simulines.
BB:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231223T190155Z.xml
19:08 Simulines started, GPS start: 1387393698.4
File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231223T190802Z.hdf5
2023-12-23 19:30:03,533 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231223T190802Z.hdf5
2023-12-23 19:30:03,544 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231223T190802Z.hdf5
2023-12-23 19:30:03,555 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231223T190802Z.hdf5
2023-12-23 19:30:03,565 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231223T190802Z.hdf5
ICE default IO error handler doing an exit(), pid = 42939, errno = 32
GPS end: 1387395021.653981
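For convenience, a small sketch converting the GPS stamps above to UTC to confirm the sweep timing (assumes gwpy is available; the GPS values are the ones quoted in this entry):

```python
# Convert the GPS stamps logged above to UTC and compute the sweep length.
from gwpy.time import tconvert

gps_start = 1387393698.4        # simulines GPS start (from this entry)
gps_end = 1387395021.653981     # simulines GPS end (from this entry)

print(tconvert(gps_start))                      # ~2023-12-23 19:08 UTC
print(tconvert(gps_end))                        # ~2023-12-23 19:30 UTC
print((gps_end - gps_start) / 60, 'minutes')    # sweep length, ~22 minutes
```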
Sat Dec 23 10:11:09 2023 INFO: Fill completed in 11min 5secs
At 19:58 Fri 22 Dec 2023 the h1hwsex computer went down. This caused two issues:
Camilla has confirmed that the HWS_ETMX camera is not needed over the break.
My HWS camera control code had put this camera into external_trigger mode when H1 locked early Friday morning, and it is still in this mode; log entry:
To make EDC and SDF green again, I have started a simulated HWS_ETMX IOC on cdsioc0, similar to what we were already doing for HWS_ETMY.
The simulation has all of its PVs set with VAL=0.0 except for those channels with non-zero values in the OBSERVE.snap and safe.snap SDF files. This ensures the SDF diff count is zero.
To get this going quickly I am running it on cdsioc0 as user ioc in a tmux session. Later I'll work on getting it under systemd control.
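For the record, a minimal sketch of what such a simulated IOC could look like. The actual implementation isn't detailed here; pcaspy is just one assumed way to serve the PVs, and the prefix, channel names, and values below are illustrative placeholders rather than the real HWS_ETMX channels:

```python
# Minimal sketch of a simulated soft IOC along the lines described above:
# every PV defaults to VAL=0.0, with a handful overridden to the non-zero
# values recorded in the SDF snap files. pcaspy is an assumed implementation
# choice; the prefix, names, and values are illustrative only.
from pcaspy import SimpleServer, Driver

PREFIX = 'H1:TCS-ETMX_HWS_'            # hypothetical channel prefix

# Hypothetical non-zero values taken from OBSERVE.snap / safe.snap
NONZERO = {'FRAME_RATE': 5.0}

# Build the PV database: zero everywhere except the snap-file overrides
pvdb = {name: {'value': NONZERO.get(name, 0.0), 'prec': 4}
        for name in ('FRAME_RATE', 'CODE_STATUS', 'SPOT_COUNT')}

class SimDriver(Driver):
    """Static driver: PVs simply hold their initial values."""
    pass

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(PREFIX, pvdb)
    driver = SimDriver()
    while True:
        server.process(0.1)        # serve channel access requests
```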
TITLE: 12/23 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.42 μm/s
QUICK SUMMARY:
TITLE: 12/23 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
H1 was locked and observing the entire shift; current lock stretch coming up on 20 hours. One event candidate, S231223j, but nothing else of note on this very quiet evening.
State of H1: Observing at 161Mpc
H1 has been locked for almost 16 hours. Very quiet evening so far.
TITLE: 12/22 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 8mph Gusts, 5mph 5min avg
Primary useism: 0.12 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
H1 has been locked and observing for almost 12 hours. All other systems look good.
The first and second attached plots show the BNS range with and without the ADF servo. It takes 3 hours to reach the maximum BNS range without the ADF servo, but only 1 hour with it. During thermalization, the optimal squeezing angle can drift due to drifts of the SRC detuning and SQZ-IFO mode matching. Since the ADF at 1.3 kHz is within the SRC bandwidth of 80 kHz, the ADF can track the squeezing angle drift from SRC detuning and SQZ-IFO mode matching, which the CLF cannot track. The ADF servo uses the ADF squeezing angle (SQZ-ADF_OMC_TRANS_SQZ_ANG) as the error signal and feeds it back to the CLF demod phase (SQZ-CLF_REFL_RF6_PHASE_PHASEDEG) to keep the squeezing angle optimal during thermalization.
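A minimal sketch of the feedback topology described above, reading the ADF squeezing angle and integrating it onto the CLF demod phase. The real servo lives in the SQZ control system; the H1: prefix, setpoint, gain, and update cadence here are assumptions for illustration only:

```python
# Illustrative integrator loop for the ADF servo described above: the ADF
# squeezing angle is the error signal and the correction is accumulated onto
# the CLF demod phase. Setpoint, gain, and cadence are placeholders.
import time
from epics import caget, caput

ERR_CHAN = 'H1:SQZ-ADF_OMC_TRANS_SQZ_ANG'         # error signal (from this entry)
CTRL_CHAN = 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'  # control point (from this entry)

SETPOINT = 0.0   # placeholder optimal ADF squeezing angle
GAIN = -0.1      # placeholder integrator gain (deg of phase per deg of error per step)

while True:
    error = caget(ERR_CHAN) - SETPOINT
    phase = caget(CTRL_CHAN)
    caput(CTRL_CHAN, phase + GAIN * error)   # accumulate correction onto the demod phase
    time.sleep(10)                           # placeholder update cadence
```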
TITLE: 12/22 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
A 5.7 from the Dominican Republic came through around 18:55; EQ mode at 18:56 UTC, lots of SRM saturations.
I noticed that ITMX mode 12 was slowly ringing up, so I turned off the gain and applied settings that had worked previously, alog74430.
There's been some extra noise from the MSR today; Tony went in and found H1FS0 in an error state, which turned out to be a faulty backup power supply that was swapped, alog74998.
The SQZer lost lock and we went into commissioning at 2:28 UTC; it relocked and we went back into Observing at 22:30 UTC.
Microseism continues to rise
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 21:33 | EE | Fil | MidY | N | Drop of parts | 22:14 |
| 22:23 | CDS | Tony | H2 ecl | N | Look for spare power supplies | 22:27 |
(Travis S., Gerardo M.)
Late entry.
Last Tuesday Travis and I troubleshot the lost signal from the HAM2 annulus ion pump. Upon getting to HAM1/HAM2, the controllers for both AIPs looked good (see photos), meaning they both had a couple of LEDs on. We then verified the connection of the signal cable on the back of the HAM2 AIP controller. The voltage read zero when measured at the rack in the mechanical room, but when measured at the controller's front panel we got a sensible voltage, ~1.4 VDC, so based on these results we replaced the controller; however, that did not fix the lost-signal issue.
After a brief break (we went to X-End to install a fan motor on the aftercooler for the purge air system), and with a laptop on hand, I removed the signal cable from the HAM2 AIP controller and noticed that the HAM1 AIP signal went red; reconnecting the HAM2 AIP cable brought the HAM1 AIP signal back. Long story short, the signal cables are swapped and never got switched back. With this new information I reseated the signal cable for the HAM1 AIP, and then the signal for the HAM2 AIP came back, so this was the loose connection. We still don't know why it lost connection; perhaps someone was up on top of HAM1 or in the vicinity shaking things around.
An FRS ticket (30026) is open to switch the signal cables at the rack, since this is the easiest way to fix this.
Locklosses at LOWNOISE_COIL_DRIVERS and TRANSITION_FROM_ETMX
Another LOWNOISE_COIL_DRIVERS lockloss: a DCPD saturation, then lockloss. 500 Hz oscillation in DARM - violin modes.