TITLE: 07/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
H1 has been locked for 1.25hrs with 4 locklosses overnight, each with decent recovery times. Microseism is trending down over the last 24hrs; the forecast is for high winds (red flag warning) in the afternoon.
TITLE: 07/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Relocking and in MOVE_SPOTS. Relocking after the first lockloss of my shift was hands-off, and this relock has been hands-off so far as well, so it's been easy. I did not get a chance to offload the SR3 dither offset onto the sliders (85830), but that can easily be done later, especially since it's already been like that for over 5 years!
We had GRB-Short E581624 come in while we were Observing earlier.
LOG:
23:30UTC Observing and have been Locked for 3 hours
00:14 GRB-Short E581624
02:02 Lockloss
03:34 NOMINAL_LOW_NOISE
03:36 Observing
04:20 Lockloss
Lockloss at 2025-07-18 04:20 UTC after 46 minutes locked
Lockloss at 2025-07-18 02:02 UTC after 5.5 hours locked
03:36 UTC Observing
During relocking I reloaded the h1asc model filters so that Elenna's new filters could be added (diffs).
H1 is in stand-down which gave me the opportunity to test the notification on the CDS Overview MEDM
TITLE: 07/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Locked for 3 hours. We had one lockloss whose cause I'm still not sure of; relocking was straightforward with an initial alignment.
LOG:
TITLE: 07/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 10mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.20 μm/s
QUICK SUMMARY:
Observing at 145Mpc and have been Locked for almost 3 hours. Everything is looking good.
TJ, Elenna, Oli, Sheila
I had noticed a bit ago that SR3 M1 DITHER P has been outputting a constant 32 (overview). It seems like the OFFSET has been on for most of the last 5+ years (ndscope1). Over 5 years ago it stopped moving around (ndscope2) and has stayed at 32 almost all the time since then, probably from when the SR3_CAGE_SERVO stopped being used. I spoke to Sheila, and she says she knew about it, that it is just a remnant of the cage servo that no one bothered to offload into the optic alignment sliders, and that we can offload it to the sliders so that it isn't sitting there anymore. We will probably do this at the next lockloss.
Closes FAMIS 26052. Last checked in alog 85627.
"BSC high freq noise is elevated for these sensor(s)!!!
ITMY_ST1_CPSINF_H3"
By eye, BS_ST2 had elevated counts compared to last week. Otherwise, signals are either comparable or lower.
Locking was straightforward; we needed to run an initial alignment, but it was all hands-off. There were a bunch of SDFs from commissioning today that we knew would be there.
Briefly dropped observing to make a sqz adjustment from 2106-2108UTC
Camilla, TJ, Dave:
On 26jun2025 I started my HWS camera control code on the four HWS machines [h1hwsmsr, h1hwsmsr1, h1hwsex and h1hwsey]. The code disables the HWS cameras when H1 is in observation mode and re-enables them on lock-loss.
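For illustration only, a minimal, generic sketch of the kind of logic described above; the observation-mode query and the camera start/stop hooks are placeholders, not the actual code running on the h1hws machines.

import time

def h1_is_observing():
    # Placeholder: the real code would query an observation-mode flag
    # (e.g. an EPICS channel); returns False here just so the sketch runs.
    return False

def set_hws_camera(enabled):
    # Placeholder for whatever camera start/stop call the real code uses.
    print('HWS camera enabled' if enabled else 'HWS camera disabled')

last_state = None
while True:
    observing = h1_is_observing()
    if observing != last_state:
        set_hws_camera(not observing)   # off while observing, on after lock-loss
        last_state = observing
    time.sleep(10)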
Data since that time has shown no clear correlation between HWS camera status and the 2Hz DARM comb. The comb appears to have persisted until the 4th July, at which point it diminished.
Camilla and TJ decided we should try turning off the camera control code and keep the HWS cameras enabled.
Following the 11:43 lock loss this morning (at which point the cameras were enabled by the code) I stopped the camera control python code on all four machines between 11:54 and 11:57 PDT. Note that Ctrl-C inside the tmux sessions did not work; I had to "kill -9" the python processes.
Attached camera control MEDM from 12:03 shows the cam-ctrl agents have stopped, with their GPS times not updating (denoted by purple boxes).
For now I have removed the CCTL (Cam Control) box from the CDS overview, but I am keeping the cam_ctrl_ioc running and the channels remain in the DAQ.
Looking at the measurement of CSOFT from this alog, and also remembering that CSOFT P tends to be highly coherent with DARM from 10-15 Hz, I decided to make some adjustments to the CSOFT P lowpass filter that would both suppress noise around 10 Hz, and also maybe buy back some phase at 1 Hz. I designed a 7 Hz low pass filter with only 40 dB of attenuation. The current lowpass filter design has 60 dB of attenuation, which seems like overkill. I adjusted the low pass to be elliptic, low Q, which gives us back about 5 degrees of phase, but reduces the gain at 10 Hz and above by a factor of 10. I didn't get a chance to try it in lock, but I changed the guardian to engage this filter on the way up to NLN (line 3341 of ISC_LOCK). There will be an SDF diff. The new filter is in FM7, the old filter is in FM9.
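As a rough illustration (not the actual foton design installed in FM7), here is a minimal scipy sketch of prototyping a 7 Hz elliptic low pass with ~40 dB of stopband attenuation and checking the gain at 10 Hz and the phase at 1 Hz; the filter order, passband ripple, and the 2048 Hz model rate are assumptions.

import numpy as np
from scipy import signal

fs = 2048.0   # assumed ASC model rate [Hz]
fc = 7.0      # corner frequency [Hz]
order, rp, rs = 3, 1.0, 40.0   # assumed order, passband ripple [dB], stopband attenuation [dB]

b, a = signal.ellip(order, rp, rs, fc, btype='low', fs=fs)
freqs = np.logspace(-1, 2, 500)              # 0.1 Hz to 100 Hz
w, h = signal.freqz(b, a, worN=freqs, fs=fs)
mag_db = 20 * np.log10(np.abs(h))
phase_deg = np.angle(h, deg=True)
print("gain at 10 Hz: %.1f dB" % np.interp(10.0, freqs, mag_db))
print("phase at 1 Hz: %.1f deg" % np.interp(1.0, freqs, phase_deg))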
Jennie, Rahul
On Tuesday Rahul and I took the measurements for the horizontal coupling in the ISS array currently on the optical table.
The QPD read 9500e-7 W (0.95 mW).
The X position was 5.26 V, the Y position was -4.98 V.
PD | DC Voltage [mV] pk-pk | AC Voltage [mV] pk-pk |
1 | 600 | 420 |
2 | 600 | 380 |
3 | 600 | 380 |
4 | 600 | 420 |
5 | 800 | 540 |
6 | 800 | 500 |
7 | 600 | 540 |
8 | 800 | 540 |
After thinking about this data I realise we need to retake it, as we should record the mean value for the DC-coupled measurements. This was with a 78 V signal applied from the PZT driver, an input dither signal of 2 Vpp at 100 Hz on the oscilloscope, and (I think) 150 mA pump current on the laser.
Rahul, Jennie W
Yesterday we went back into the lab and retook the DC and AC measurements with the horizontal dither on, this time measuring with the 'mean' setting and without changing the overall input pointing from the above measurement.
PD | DC Voltage [V] mean | AC Voltage [V] mean |
1 | -4.08 | -0.172 |
2 | -3.81 | 0.0289 |
3 | -3.46 | 0.159 |
4 | -3.71 | 0.17 |
5 | -3.57 | -0.0161 |
6 | -3.5 | 0.00453 |
7 | -2.91 | 0.187 |
8 | -3.36 | 0.0912 |
QPD direction | Mean Voltage [V] | Pk-Pk Voltage [V] |
X | 5.28 | 2.20 |
Y | -4.98 | 0.8 |
QPD sum is roughly 5V.
Next time we need to plug in the second axis of the PZT driver so as to take the dither coupling measurement in the vertical direction.
horizontal dither calibration = 10.57 V/mm
dither Vpk-pk on QPD x-direction = 2.2V
dither Vpk-pk on QPD y-direction = 0.8V
dither motion in horizontal direction in V on QPD = sqrt(2.2^2 + 0.8^2)
motion in mm on QPD that corresponds to dither of input mirror = sqrt(2.2^2 + 0.8^2) / 10.57 = 0.222 mm
Code is here for calibration of horizontal beam motion to QPD motion plus calibration of dither measurements.
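As a quick numeric cross-check of the arithmetic above (a minimal sketch, assuming the 10.57 V/mm horizontal calibration applies to the quadrature sum of the QPD X and Y signals):

import math

cal_V_per_mm = 10.57   # horizontal dither calibration [V/mm]
qpd_x_pkpk = 2.2       # dither pk-pk on QPD X [V]
qpd_y_pkpk = 0.8       # dither pk-pk on QPD Y [V]

dither_V = math.hypot(qpd_x_pkpk, qpd_y_pkpk)   # ~2.34 V total on the QPD
beam_motion_mm = dither_V / cal_V_per_mm        # ~0.222 mm
print("beam motion at QPD: %.3f mm" % beam_motion_mm)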
To work out the relative intensity noise:
RIN = change in power/ power
= ( change in current/ current) / responsivity of PD
= (change in voltage/voltage) / (responsivity * load resistance)
Therefore to minimise RIN we want to minimise change in voltage / voltage for each PD.
To get the least coupling to array input alignment we work out
relative RIN coupling = (delta V/ V) / beam motion at QPD
This works because the QPD is designed to be in the same plane as the PD array.
PD | DC Voltage [V] mean | AC Voltage [mV] pk-pk | Beam Motion at QPD [mm] | Relative Coupling [1/m] |
1 | -4.08 | 420 | 0.222 | 465 |
2 | -3.81 | 380 | 0.222 | 450 |
3 | -3.46 | 380 | 0.222 | 496 |
4 | -3.71 | 420 | 0.222 | 511 |
5 | -3.57 | 540 | 0.222 | 683 |
6 | -3.5 | 500 | 0.222 | 645 |
7 | -2.91 | 540 | 0.222 | 838 |
8 | -3.36 | 540 | 0.222 | 726 |
These are all roughly a factor of 50 higher than those measured by Mayank and Shiva; after discussion with Keita, either we need higher-resolution measurements or we need to further optimise the input alignment to the array to minimise the coupling.
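For reference, a minimal sketch reproducing the Relative Coupling column from the tabulated DC means and AC pk-pk values (small differences from the table come from rounding of the beam motion):

beam_motion_m = 0.222e-3   # beam motion at the QPD [m], from above
dc_mean_V = [-4.08, -3.81, -3.46, -3.71, -3.57, -3.50, -2.91, -3.36]
ac_pkpk_V = [0.420, 0.380, 0.380, 0.420, 0.540, 0.500, 0.540, 0.540]

for pd, (dc, ac) in enumerate(zip(dc_mean_V, ac_pkpk_V), start=1):
    coupling = (ac / abs(dc)) / beam_motion_m   # (delta V / V) per metre of beam motion
    print("PD %d: relative coupling = %.0f 1/m" % (pd, coupling))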
Sheila, Camilla
We ran a couple of squeezing angle scans to check the settings of the ADF servo.
One thing that we realized is that the ADF Q demod signal is divided by H1:SQZ-ADF_OMC_TRANS_Q_NORM rather than multiplied, which is what we had thought. We changed the coefficient from 0.18 to 5.8. The first png attachment shows that this transforms the blue ellipse into the orange one. It would be a bit better if we first adjusted the demod phase to maximize the Q signal, so that the ellipse would be aligned along the axis and the rescaled version would be more like a circle. However, you can see in the right-side plot that this gives us a reasonably linear readback of sqz angle as we change the RF6 demod angle (which is actually cabled up to RF3 phase) around 150 degrees, where our best squeezing is.
Camilla turned the servo back on in sqzparams.
For future reference, a slightly better way to do this would be to move the demod phase to maximize Q, do a scan and set H1:SQZ-ADF_OMC_TRANS_Q_NORM to the ratio (max of Q)/ (max of I). Then you can do a smaller scan around the point with the best squeezing, and in sqzparams set sqz_ang_adjust_ang to the readback angle that you think is best.
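A hedged sketch of that normalization step, written as lines one might paste into a guardian shell (where ezca is already defined); i_scan and q_scan are hypothetical arrays of the I and Q demod signals recorded during the scan:

import numpy as np

ratio = np.max(np.abs(q_scan)) / np.max(np.abs(i_scan))   # (max of Q) / (max of I)
ezca['SQZ-ADF_OMC_TRANS_Q_NORM'] = ratio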
This didn't work at the start of today's lock as the ADF frequency had been left near 10kHz. Once I put the ADF back to 322Hz it seemed to work fine.
For operators, this means that if the squeezing looks bad, running SCAN_SQZANG_FDS alone won't change the SQZ angle. You would need to:
If the servo is running away, try the above instructions; if that doesn't work, the servo can be turned off by setting use_sqz_angle_adjust = False in sqz/h1/guardian/sqzparams.py. Please alog and tag SQZ.
Since we've had this servo running, the range has been higher and sqz more stable, see attached.
Lockloss at 2025-07-17 04:11 UTC due to a power issue with ETMX and TMSX. Currently in contact with Dave and Fil is on his way in.
ETMX M0 and R0 watchdogs tripped
ETMX and TMSX OSEMs are in FAULT
ETMX ESD off
ETMX HWWD notified that it would trip soon, so SEI_ETMX was preemptively put into ISI_OFFLINE_HEPI_ON to keep ISI from getting messed up when it trips
H1SUSETMX ADC channels zeroed out at 21:11:39. SWWDs did not trip because there is no RMS on the OSEM signals, but the HWWD completed its 20 minute countdown and powered down the three ISI coil drivers at 21:32. This indicates ETMX's top stage OSEMs have lost power.
I've opened WP12692 to cover Fil going to EX to investigate.
During the recovery the +24VDC power supply for the SUS IO Chassis was glitched, which stopped all the h1susex and h1susauxex models. To recover, I first did a straightforward reboot of h1susauxex (no Dolphin); it came back with no issues.
Rebooting h1susex was more involved; remember that the EX Dolphin switch was damaged by the 06 April 2025 power outage and has no network control. The procedure I used to reboot h1susex was:
When h1susex came back, I verified all the IO Chassis cards were present (they were all there)
I unpaused the SEI and ISC IPC by writing a 0 to their IPC_PAUSE channels.
The HWWD came back in nominal state.
I reset the SUS SWWD DACKILLs and unbypassed the SEI SWWD.
DIAG_RESET to clear all the IPC errors (it did so) and clear DAQ CRCs (they cleared).
Handed systems over to control room (Oli and Ryan S).
From Fil:
-18VDC Power supply had failed and was replaced.
Power supply is in rack VDD-2, location U25-U28, right-hand supply, label [SUS-C1 C2]
old supply (removed) S1202024
new supply (installed) S1300288
Last night's HWWD sequence is shown below. Reminder that at +40mins the SUS part of the HWWD trips, which sets bit2 of the STAT. This opens internal relay switches, but since we don't route the SUS drives through the HWWD unit (too noisy) this has no effect on operations. The delay between 22:52 and 23:20 is because h1iopsusex was down between 23:01 and 23:20.
Fan motor seized on failed power supply.
Wed16Jul2025
LOC TIME HOSTNAME MODEL/REBOOT
23:15:13 h1susauxex h1iopsusauxex
23:15:26 h1susauxex h1susauxex
23:20:21 h1susex h1iopsusex
23:20:34 h1susex h1susetmx
23:20:47 h1susex h1sustmsx
23:21:00 h1susex h1susetmxpi
TITLE: 07/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: We've only been at low noise for 20 min, but magnetic injections are running now. Maintenance day today.
Maintenance was delayed by 1 hour this day due to a Fermi GRB notice (E581020) that we received on site a few minutes before 1500UTC. We were not in Observing at the time, but we were in a low noise state. I brought us back into observing at 1510UTC, where we stayed until one hour after the initial GRB notice.
Jim, Elenna
We had a magnitude 6.6 earthquake begin rolling in from Panama, so Jim and I tried to take the ASC arm control loops to the high-bandwidth state. I also turned off the LSC feedforward, which drives the ETMY PUM.
This obviously creates a lot of noise in DARM, but we are curious to see if it helps us ride out a large earthquake.
This consists of:
Some of these things can be done by hand, but others, like transitioning filters and gains together have to be done with guardian code to ensure they are done at the same time. I copied and pasted lines of code into a guardian shell.
These are the lines of code that will do everything I mentioned above:
# CHARD Y: ramp the gain to 300 over 10 s, then change filter modules
ezca.get_LIGOFilter('ASC-CHARD_Y').ramp_gain(300, ramp_time=10, wait=False)
ezca.switch('ASC-CHARD_Y', 'FM3', 'FM8', 'FM9', 'OFF')
# DHARD Y: change filter modules
ezca.switch('ASC-DHARD_Y', 'FM1', 'FM3', 'FM4', 'FM5', 'FM8', 'OFF')
# CHARD P: change filter modules and set the gain to 80
ezca.switch('ASC-CHARD_P', 'FM9', 'ON')
ezca.switch('ASC-CHARD_P', 'FM3', 'FM8', 'OFF')
ezca['ASC-CHARD_P_GAIN'] = 80
# DSOFT Y/P: ramp the gains to 30 and 10 over 5 s
ezca.get_LIGOFilter('ASC-DSOFT_Y').ramp_gain(30, ramp_time=5, wait=False)
ezca.get_LIGOFilter('ASC-DSOFT_P').ramp_gain(10, ramp_time=5, wait=False)
# DHARD P: change filter modules
ezca.switch('ASC-DHARD_P', 'FM4', 'FM8', 'OFF')
# Turn off the LSC feedforward (PRCL, MICH, SRCL)
ezca['LSC-PRCLFF_GAIN'] = 0
ezca['LSC-MICHFF_GAIN'] = 0
ezca['LSC-SRCLFF1_GAIN'] = 0
I saved this as a script called "lownoise_asc_revert.py" in my home directory. This is a bit of a misnomer since it also reverts the LSC feedforward.
We are still locked so far, but we are waiting to see how this goes (R wave just arrived).
We lost lock when the ground motion got to be about 2.5 micron/s.
This was a "large earthquake" aka within the yellow band on the EQ response zone plot. Since these earthquakes are highly likely to cause lockloss, Jim and I are thinking we could try this high bandwidth control reversion for these earthquakes to see if this can help us survive the earthquake. This would take us out of observing and kill the range (we were at about 70 Mpc before the lockloss), but we could then go back to lownoise once the earthquake passes.
Jim also thinks he can make some adjustments to other seismic controls, but I'll let him explain how that would work since he is the expert.
I refined the script to include some sleeps, and wrote another script to revert the reversion so it will put the ASC and LSC feedforward back in the nominal low noise state.
Both scripts are attached. They should be tested!
I have added buttons to run modified versions of Elenna's scripts to my seismic overview ISI_CONFIG screen, the smaller red and blue buttons that say "ASC Hi Gn" and "ASC Low noise" in the upper left, but I don't think we are ready to suggest anyone use them yet. I added a bit of code to ask if you are sure, so they shouldn't be very easy to launch accidentally. I'm trying to compile some data to estimate how much run time we lose or could gain before investing much more effort in automating this.
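For illustration only, a minimal sketch of that kind of "are you sure?" guard (not the actual code behind the ISI_CONFIG buttons; the script name is taken from the alog above but is otherwise a placeholder):

import subprocess
import sys

def confirm_and_run(script_path):
    # Ask for explicit confirmation before launching the script.
    reply = input("Really run %s? This will change the ASC state [y/N]: " % script_path)
    if reply.strip().lower() == 'y':
        subprocess.run([sys.executable, script_path], check=True)
    else:
        print("Aborted, nothing was changed.")

confirm_and_run('lownoise_asc_revert.py')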
The lockloss that occurred at 2025-07-18 08:00 UTC (not tagged as a glitch) was preceded by what appears to be a large kick in the yaw ASC. The first attachment shows the CSOFT Y and CHARD Y signals a few seconds before the lockloss. This is also apparent in the test mass L2 signals in the second attachment.
Of the 4 locklosses overnight, one was an ETMX glitch lockloss.
The other two locklosses last night seem by eye to have the same behavior (2025-07-18 12:07:24 UTC and 2025-07-18 04:20:15 UTC). Within ~100 ms of the lockloss time, there is something glitchy in the DARM error signal, where the error signal drops sharply. It looks like the glitchy behavior starts in DARM IN1 slightly before ETMX L3 starts behaving weirdly, but that's hard to tell since I'm just looking at ndscopes.