TITLE: 06/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
TITLE: 06/28 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in ACQUIRE_DRMI_1F and LOCKING
Overall a very calm shift; we were locked for the majority of it until the ETM glitch (lockloss alog 85402) killed the 8-hour lock.
Relocking was time consuming since the alignment was terrible and we have the 30 min wait before we can do an initial alignment. After I ran a fully auto initial alignment, locking went quickly until we lost lock at LOWNOISE_ASC (522). This also happened yesterday. Just like yesterday, the wind was hovering around 25mph with the primary microseism on the rise. Just like yesterday, there was an earthquake coming through. While I think we could have ridden out the EQ and the wind conditions weren't too bad, I believe that either this state is a tad more susceptible to such noise or there's a controls issue (or the same thing happened twice in the same way and will only happen twice). I will investigate tomorrow during my shift.
Either way, we lost lock from DRMI but flashes looked good and I think (knock on SSTL) guardian will be able to get into OBSERVING on its own.
On the bright side, we rode through a 6.1 EQ from the Philippines, though I did intervene to turn EQ mode on manually since the bulk of the ground motion was starting but the automation hadn't turned it on yet (judging from how much the ISC had to move to keep the IFO still).
LOG:
None
Lockloss due to ETM Glitch. H1 Lockloss Tool.
As I was writing this, we lost lock at LOWNOISE_ASC, same as yesterday. There also happened to be a 5.5 EQ passing through (same as yesterday, when it was a more local 3.0 one).
Jennie W, Rahul, Keita
This is just a summary of our work over the last two days trying to repeat the alignment coupling measurements for the replacement ISS array unit (D1101059, unit S1202965). The reason we need to repeat these is that we have now upgraded the washer and clamp plate in the QPD assembly. See Keita's previous alog for details.
Thursday
First we changed the input alignment to get roughly 4 V on each PD in the array; this is achieved by inserting the larger iris and using the two steering mirrors (M2 closest to the array, M1 further towards the laser) to change the input alignment of the auxiliary laser into the unit.
As we have tilted the QPD by adding the new components we need to re-align the QPD to centre the beam (which is split off from the main beam entering the unit by the beam splitter on the elevator assembly which sits at one corner of the ISS array unit).
Then we unscrewed the four screws holding the QPD down (see image) and tried to move the QPD to minimise the coupling from yaw motion of the input beam to pitch. We only managed to minimise pitch coupling and couldn't get it centred on the QPD in yaw, as the whole QPD unit moves a lot when not screwed down.
We screwed down the QPD but it was still off in yaw by a lot (see image).
As we were adjusting the input alignment mirror to check the coupling, I managed to lose the input alignment to the array.
Friday
Today Keita brought the input alignment back by using the beam viewer to check the position on the diodes while changing M2. Then we saw about 3.5-4 V on each of the PDs in the array. Next we only undid the two lower screws on the QPD (these hold the QPD unit itself clamped to the platform it sits on; the two upper screws hold in the connector to the back of the QPD and these were only slightly loosened). Keita moved the unit around until the QPD readout showed we were nearly centred, and then we screwed down the unit. The alignment shifts while it is being screwed down, probably because of the angle of the QPD relative to the clamp.
For this alignment we used the QPD amplifier unit that gives a live visual readout of the centering.
We also have the option of using another amplifier that gives the QPD X, Y and SUM channels so we can read them on an oscilloscope, but these had some weird sawtooth noise on them (see image from Thursday). Keita then discovered that we were using the wrong cable (too low a current rating) for this amplifier; we searched for the correct one but could not find it. We will get back to this on Monday.
Summary: We think we now have the QPD in a good place relative to the PD array as yaw and pitch are fairly decoupled, but maybe the angle of the QPD in rotation is still slightly off as the P and Y motion of the beam are still slightly off from the QPD quadrants. We need a new cable for the ON-TRAK amplifier.
M. Todd, S. Dwyer
This morning we made a change to the thermalization guardian (managed by ISC), adding flags that allow the SQZ guardian to turn on SQZ_ASC in the unthermalized state, or let this guardian turn on SQZ_ASC itself after 75 minutes (thermalized-ish). We also turned the monitoring off so that this switch does not knock us out of observing.
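To make the switch logic concrete, here is a minimal Python sketch of the flag-plus-timer idea described above; the constant and function names are made up for illustration, and this is not the actual THERMALIZATION guardian code.

# Sketch only (not the real guardian code); names are illustrative placeholders.
THERMALIZATION_MINUTES = 75          # "thermalized-ish" wait after reaching NLN
ALLOW_SQZ_ASC_UNTHERMALIZED = True   # flag the SQZ guardian can use to engage SQZ_ASC early

def sqz_asc_allowed(minutes_since_nln):
    # SQZ_ASC may be engaged either via the early-engage flag or once the timer expires.
    return ALLOW_SQZ_ASC_UNTHERMALIZED or minutes_since_nln >= THERMALIZATION_MINUTES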
For more info see:
TITLE: 06/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We've been locked for just under 5 hours, calm day.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:21 | FAC | Randy | EndY | N | Move crates around | 18:02 |
17:38 | CAL | Tony, Francisco | PCAL lab | LOCAL | Tx module maintenance | 19:20 |
20:15 | FIT | Tooba | Arm | N | Going for a walk | 21:03 |
21:12 | CAL | Francisco | PCAL lab | LOCAL | AOM alignment | 23:45 |
22:29 | PSL | Keita, Jennie | Optics lab | LOCAL | ISS array work | Ongoing |
TITLE: 06/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
IFO is in NLN (since 18:36 UTC) and OBSERVING since 19:33 UTC (4.5 hr lock)
An hour was spent commissioning to fix SQZ ASC, which now runs deeper into thermalization after NLN.
Otherwise, just planning to continue observing as long as possible!
Closes FAMIS 26399. Last checked in alog 85208.
All fans nominal, comparable to last week and within threshold.
Closes FAMIS 26456. Last checked in alog 84705
Trends look normal. SEI BRS Maintenance work in Mid-May was part of vent work and is the only excursion from the threshold lines.
J. Kissel, F. Clara
ECR E2400330
Fil modified another UK SatAmp (D0900900 / D0901284) per ECR E2400330, which means "rev'ing" up the D0901284 circuit board inside from -v4 to -v5.
However, this one, S1000278, seems to have more variability in components that were not changed between channels -- namely, if I
- fit the measured zero and pole frequency, and
- assume a perfect, as-drawn value for the R181 = R182 resistors (as Fil confirms that these were +/-0.1% resistors),
then the total capacitance seems to have a ~2% variability. Not unreasonable, given that typical off-the-shelf caps of value 10 [uF] have +/-10% variability, so with two of these caps in parallel, you could expect the total capacitance to be between 18 and 22 [uF], or +/-10%.
As such, I propose we set a tolerance for these transfer functions to have the following values at the following "check" frequencies (i.e. logspace(log10(0.5),log10(50),3), which is a requestable frequency vector on an SR785 frequency response sweep if fstart = 50, fstop = 0.5, with N points = 3). These values assume 1% variation in the transimpedance amp feedback resistor and 2.5% variation in the whitening cap.

Freq [Hz] | Mag Upper Bound, [V/V] or dB([V/V]) | Mag Lower Bound, [V/V] or dB([V/V]) | Phase Upper Bound [deg] | Phase Lower Bound [deg] |
---|---|---|---|---|
0.5 | 5.4098 or 14.664 | 5.0554 or 14.075 | -106.23 | -106.5 |
5.0 | 38.415 or 31.69 | 36.671 or 31.286 | -135.09 | -133.72 |
50. | 54.987 or 34.805 | 53.868 or 34.627 | -174.2 | -173.9 |

Freq [Hz] | Mag Tolerance, [V/V] +/- [V/V] | Mag Tolerance, dB([V/V]) +/- dB([V/V]) | Phase Tolerance, [deg] +/- [deg] |
---|---|---|---|
0.5 | 5.2313 +/- 0.17847 | 14.372 +/- 0.29138 | -106.36 +/- 0.12754 |
5.0 | 37.547 +/- 0.86844 | 31.491 +/- 0.19861 | -134.41 +/- 0.67952 |
50. | 54.428 +/- 0.55902 | 34.716 +/- 0.088757 | -174.05 +/- 0.14395 |

That being said, we wouldn't replace the non-ECR components if the SatAmp didn't pass this test; we'll just use those under-performing SatAmps on low-priority stages, i.e. ones not used for damping loop control, i.e. not top-mass stages. If many end up exceeding these thresholds, then we can start to prioritize which suspensions get the best.
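As a rough illustration of how such check-frequency bounds can be generated, here is a minimal numpy sketch; the gain, zero, and pole values are made-up placeholders rather than the measured S1000278 numbers, and only the check-frequency vector and the assumed component variations (1% on the transimpedance feedback resistor, 2.5% on the whitening capacitance) come from the proposal above.

# Sketch only: evaluate a single zero/pole response at the SR785 check frequencies
# and bracket it with the proposed component tolerances.
import numpy as np

f_check = np.logspace(np.log10(0.5), np.log10(50.0), 3)   # 0.5, 5, 50 Hz

def satamp_tf(f, gain, f_zero, f_pole):
    # H(f) = gain * (1 + i*f/f_zero) / (1 + i*f/f_pole)
    return gain * (1 + 1j * f / f_zero) / (1 + 1j * f / f_pole)

gain_nom, f_zero_nom, f_pole_nom = 5.0, 0.4, 40.0          # placeholder values

for dr, dc in [(0.0, 0.0), (+0.01, +0.025), (-0.01, -0.025)]:
    # resistor variation scales the gain; R*C variation shifts the corner frequencies
    h = satamp_tf(f_check,
                  gain_nom * (1 + dr),
                  f_zero_nom / ((1 + dr) * (1 + dc)),
                  f_pole_nom / ((1 + dr) * (1 + dc)))
    print(np.round(np.abs(h), 4), np.round(np.degrees(np.angle(h)), 2))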
Fri Jun 27 10:12:24 2025 INFO: Fill completed in 12min 21secs
The filter cavity was having a hard time locking, and it looks like the FC green trans power has been dropping slowly over the last 3 weeks, which seems to be caused by a drop in the SHG power from around 100-110 mW down to 77 mW.
I adjusted the picos and the SHG temperature, and recovered the SHG power to 86 mW, so there is still a lot of missing power. The filter cavity locked after this.
I've added a template for this adjustment to userapps/sqz/h1/Templates/ndscope/SHG_alignment_temp_adjust.yaml
Jennie W, Ryan C, Sheila
Ryan and Jennie saw that the filter cavity had trouble locking again. We looked at the filter cavity transmission and launched power now compared to 10 days ago. The launched power has dropped to 71% of what it was (after my slight improvement this morning), and the transmitted power is 69% of what it was. This means that the main reason the filter cavity transmission has decreased is the lower injected power (due to the SHG power drop), not a filter cavity alignment problem.
The guardian has a checker that considers the filter cavity locked in green when the transmission is above 60 uW; we lowered this threshold to 50 uW in the GR_LOCKED checker. With the lower transmission the power was sometimes dropping below the 60 uW threshold, so this should help. We will see what happens with the SHG power over the next few days.
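For reference, the threshold change amounts to something like the following minimal sketch (illustrative only, not the real SQZ guardian code):

# Illustrative GR_LOCKED-style transmission check with the lowered threshold.
FC_GREEN_TRANS_THRESHOLD_UW = 50.0   # was 60 uW; lowered because the SHG power has dropped

def fc_green_locked(trans_uw):
    # Treat the filter cavity as locked in green if the transmitted power clears the threshold.
    return trans_uw > FC_GREEN_TRANS_THRESHOLD_UW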
Lockloss at 17:02 UTC in NLN while we were waiting for the FC to lock.
TITLE: 06/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
14:46 UTC DRMI is struggling when ASC starts; I'm going to run a manual_IA.
At the time of the request for help (10:22 UTC), we were in CHECK_AS_SHUTTER, presumably stuck at SHUTTER_FAIL, which I've encountered again at 15:22 UTC. It had been in that state since 09:05 UTC, when we lost lock from 25 W and the SHUTTER GRD reported "No kick... peak GS13 signal = 51.226"
The shutter did not trigger in this last lockloss, it looks like the light heading to the AS port was not high enough to trigger the shutter.
The lockloss_shutter_check guardian checks for a kick in the HAM6 GS13s any time we lose lock with more than 25 kW of circulating power in the arms. In this case we had just reached 100 kW circulating power, so the guardian expected the shutter to trigger. This lockloss looks unusual in that there isn't an increase in the power going to the AS port right before the lockloss.
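As a rough sketch of the check being described (not the actual LOCKLOSS_SHUTTER_CHECK code; the GS13 kick threshold below is a made-up placeholder, and only the 25 kW figure and this lockloss's peak GS13 value come from the entries above):

# Illustrative shutter-kick check.
CIRC_POWER_THRESHOLD_KW = 25.0    # only expect a fast-shutter kick above this arm power
GS13_KICK_THRESHOLD = 1000.0      # hypothetical peak-GS13 level that would count as a kick

def check_shutter_kick(circ_power_kw, peak_gs13):
    # Return None if no kick is expected, otherwise whether the HAM6 GS13s saw one.
    if circ_power_kw <= CIRC_POWER_THRESHOLD_KW:
        return None
    return peak_gs13 >= GS13_KICK_THRESHOLD

# This lockloss: ~100 kW circulating, peak GS13 of 51.226 -> reported as "No kick"
print(check_shutter_kick(100.0, 51.226))   # False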
Ryan ran the shutter test and the shutter is working correctly now.
I think that this is probably the result of an unusual lockloss happening at somewhat lower circulating power than usual. We should probably edit the logic in IFO notify to call for operator assistance whenever the shutter is in the failed state.
This is OK; the AS port went dark and stayed there for about 70 ms or so after the lockloss, and there was no excessive power surge that would have caused the fast shutter to be triggered.
The maximum power of ~1.4W was observed ~160ms after the lockloss, which is well below the threshold for the analog FS trigger (3 to 4W, I don't remember the exact number).
If something similar happened with 60W, though, FS might have been triggered.
Attached is the estimate of the power coming into HAM6 using two different sensors (PEM-CS_ADC_5_19 = HAM6 power sensor in the AS camera can, which monitors power before the fast shutter, and ASC-AS_A_DC_NSUM after). Neither of these has hardware whitening, and neither saturated (AS_A was close to saturation, but the HAM6 power sensor saturation threshold is about 570 W when the beam diverter is open, 5.7 kW if closed).
Note that the calibration of the PEM channel is a factor of 10 smaller than the observation-mode value (0.177 W/ct, see alog 81112 and the git repo for the lockloss tool) because the 90:10 beam diverter was open.
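To make the calibration note concrete, here is a small sketch of the counts-to-watts conversion; this is my reading of the factor-of-10 statement, with the 0.177 W/ct observation-mode value taken from alog 81112 and everything else illustrative.

# Sketch of the counts-to-watts conversion for the HAM6 power sensor.
CAL_OBSERVING_W_PER_CT = 0.177                               # observation mode (alog 81112)
CAL_DIVERTER_OPEN_W_PER_CT = CAL_OBSERVING_W_PER_CT / 10.0   # 90:10 beam diverter open

def ham6_power_watts(counts, diverter_open=True):
    # Convert PEM-CS_ADC_5_19 counts to watts heading into HAM6.
    return counts * (CAL_DIVERTER_OPEN_W_PER_CT if diverter_open else CAL_OBSERVING_W_PER_CT)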
I made a timeline of what ISC_LOCK was doing during this event.
WP12623 h1asc add fast channels to DAQ
Elenna, Dave:
A new h1asc model was rev-locked and installed. Four new fast DQ channels were added to the DAQ (channel, rate):
> H1:ASC-DC6_P_IN1_DQ, 256
> H1:ASC-DC6_P_OUT_DQ, 512
> H1:ASC-DC6_Y_IN1_DQ, 256
> H1:ASC-DC6_Y_OUT_DQ, 512
DAQ restart needed.
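Once in the frames, the new channels can be read back like any other DQ channel; for example with gwpy (the GPS times below are placeholders, not from this work):

# Example only; GPS start/stop are placeholders.
from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:ASC-DC6_P_IN1_DQ', 1435000000, 1435000064)
print(data.sample_rate)   # expect 256 Hz per the list above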
WP12570 Restart Digivideo Cameras with latest pylon
Patrick, Jonathan, Dave:
Jonathan updated pylon on h1digivideo[4,5,6] and restarted all the camera servers on these machines. This should fix the bug of stuck open files accumulating when the camera connection is interrupted.
No DAQ restart needed.
Add PID SMOO channels to vacuum SDF
Dave:
Prior to today's h0vacly restart I added the missing CP PID-control SMOO channels to the vacuum SDF monitor.req and safe.snap files. SDF was restarted at 08:29. No DAQ restart needed.
WP12577, 12608, 12615 Upgrade LY Vacuum Controls
Janos, Gerardo, Patrick, Jonathan, Erik, Dave:
Patrick installed a new h0vacly system this morning. Main items are:
Please see Patrick's alog for details.
An extended DAQ restart was required: renaming Ion Pump raw minute trend files for uninterrupted lookback and constructing new PT100 (HAM1) raw minute trends following the upgrade of h0vaclx last Tuesday (17th June 2025).
DAQ Restart
Jonathan, Erik, Patrick, Dave:
Immediately following the restart of h1asc at 11:52, the DAQ was restarted using the following procedure:
It was at this late point that I remembered that the temporary H1 version of PT100B is no longer needed, and indeed this channel has no data following the removal of the PT100B Volts channel from h0vacly. However, since it is still in the EDC, we need to continue running the temporary IOC until the next DAQ restart. I've removed it from edcumaster.txt as a reminder.
GPS Leap Seconds Updates
Jonathan, Erik, Dave:
Erik's FAMIS task reminded us that the leap-second files' expiration date of 30 June 2025 is rapidly approaching. Although no leap seconds are to be applied, the files need to be updated to reset their expiration dates. Please see Erik and Jonathan's alog for more details.
DNS testing
Erik:
ns1 (the backup DNS server) was used by Erik to see if we could reproduce the error whereby loss of connection to GC caused internal CDS name resolution issues. We could not reproduce it.
Vacuum Ion Pump channel name changes (old-name, new-name)
H0:VAC-FCES_IP23_II123_AIP_IC_VOLTS | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_VOLTS |
H0:VAC-FCES_IP23_II123_AIP_IC_VOLTS_ERROR | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_VOLTS_ERROR |
H0:VAC-FCES_IP23_II123_AIP_IC_MA | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_MA |
H0:VAC-FCES_IP23_II123_AIP_IC_MA_ERROR | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_MA_ERROR |
H0:VAC-FCES_IP23_II123_AIP_IC_LOGMA | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_LOGMA |
H0:VAC-FCES_IP23_II123_AIP_IC_LOGMA_ERROR | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_LOGMA_ERROR |
H0:VAC-FCES_IP23_VI123_AIP_PRESS_TORR | H0:VAC-FCES_IPFCC9_VIC9_AIP_PRESS_TORR |
H0:VAC-FCES_IP23_VI123_AIP_PRESS_TORR_ERROR | H0:VAC-FCES_IPFCC9_VIC9_AIP_PRESS_TORR_ERROR |
H0:VAC-FCES_IP24_II124_AIP_IC_VOLTS | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_VOLTS |
H0:VAC-FCES_IP24_II124_AIP_IC_VOLTS_ERROR | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_VOLTS_ERROR |
H0:VAC-FCES_IP24_II124_AIP_IC_MA | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_MA |
H0:VAC-FCES_IP24_II124_AIP_IC_MA_ERROR | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_MA_ERROR |
H0:VAC-FCES_IP24_II124_AIP_IC_LOGMA | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_LOGMA |
H0:VAC-FCES_IP24_II124_AIP_IC_LOGMA_ERROR | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_LOGMA_ERROR |
H0:VAC-FCES_IP24_VI124_AIP_PRESS_TORR | H0:VAC-FCES_IPFCD1_VID1_AIP_PRESS_TORR |
H0:VAC-FCES_IP24_VI124_AIP_PRESS_TORR_ERROR | H0:VAC-FCES_IPFCD1_VID1_AIP_PRESS_TORR_ERROR |
H0:VAC-FCES_IP25_CS187_STATUS | H0:VAC-FCES_IPFCH8A_CSH8A_STATUS |
H0:VAC-FCES_IP25_II187_IC_VOLTS | H0:VAC-FCES_IPFCH8A_IIH8A_IC_VOLTS |
H0:VAC-FCES_IP25_II187_IC_VOLTS_ERROR | H0:VAC-FCES_IPFCH8A_IIH8A_IC_VOLTS_ERROR |
H0:VAC-FCES_IP25_II187_IC_AMPS | H0:VAC-FCES_IPFCH8A_IIH8A_IC_AMPS |
H0:VAC-FCES_IP25_II187_IC_AMPS_ERROR | H0:VAC-FCES_IPFCH8A_IIH8A_IC_AMPS_ERROR |
H0:VAC-FCES_IP25_VI187_PRESS_TORR | H0:VAC-FCES_IPFCH8A_VIH8A_PRESS_TORR |
H0:VAC-FCES_IP25_VI187_PRESS_TORR_ERROR | H0:VAC-FCES_IPFCH8A_VIH8A_PRESS_TORR_ERROR |
Tue24Jun2025
LOC TIME HOSTNAME MODEL/REBOOT
11:52:26 h1asc0 h1asc <<< Elenna's new asc model
not shown, shutdown of both TW0 and TW1 for file manipulation at this point
12:17:51 h1daqdc0 [DAQ] <<< 0-leg restart
12:18:03 h1daqfw0 [DAQ]
12:18:03 h1daqtw0 [DAQ]
12:18:05 h1daqnds0 [DAQ]
12:18:12 h1daqgds0 [DAQ]
12:18:15 h1susauxb123 h1edc[DAQ] <<< edc restart with new vacuum channel list
12:24:09 h1daqdc1 [DAQ] << 1-leg restart
12:24:22 h1daqfw1 [DAQ]
12:24:22 h1daqtw1 [DAQ]
12:24:23 h1daqnds1 [DAQ]
12:24:31 h1daqgds1 [DAQ]
12:25:06 h1daqgds1 [DAQ] <<< gds1 second restart
15:53 UTC Observing