TITLE: 07/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 13mph Gusts, 9mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
The signal railed at about 6:28 PM local time. I checked trend data for PT120 and no pressure rise was noted inside the main volume. Attached is a 14 hour trend of the pump behavior and PT120 (main volume internal pressure).
System will be evaluated as soon as possible.
TITLE: 07/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Currently relocking and in MOVE_SPOTS. This last relock looked so bad up until ENGAGE_ASC_FOR_FULL_IFO that I am very surprised it made it (40 min build-ups). During the first lock reacquisition of my shift, ALSX was having some issue with its crystal frequency and was losing lock a lot. There were also many times when it looked really bad but would hold on (ndscope). Besides ALSX having trouble during that first round of relocking, there haven't been any issues.
LOG:
21:20 UTC Observing at 137 Mpc and have been Locked for 29 minutes
21:23 Dropped out of Observing to touch up OPO temperature and run scan sqz ang
21:27 Observing
22:34 Lockloss
- Alignment looked so bad during DRMI. I tried adjusting for a while, but eventually started an initial alignment
- Multiple locklosses due to ALSX unlocking
00:52 NOMINAL_LOW_NOISE
00:55 Observing
04:12 Lockloss
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:09 | ISS | Jennie, Rahul | Optics Lab | Local | Working on the PSL ISS system | 22:36 |
00:53 | | Jennie | Optics Lab | n | Hunting down her phone | 00:58 |
A partial continuation of the characterization of the OSEM noise before versus after the satellite amplifier swaps. Today, PR2, MC2, TMSX, and ETMX M0/R0/L1 were swapped out (85770), but here I am only showing the comparison plots for PR2 and MC2. I will have to wait until we have a longer period in DOWN before I can get good comparisons for TMSX and ETMX, because there has not been much time since the swaps where we weren't locking or where the seismic environment wasn't set to maintenance, which results in a ton of extra noise that can't be properly regressed out.
Here are the previous comparisons: 85485, 85699
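Before the full regression results, here is a rough gwpy sketch of the kind of before-vs-after spectrum comparison shown in the attached plots. The channel name is an assumption for illustration; the GPS start times and 1200 s duration are from the PR2 data files listed below. The production analysis is the MATLAB dampRegress code in the SusSVN, which also regresses out the seismic environment; this sketch skips that step.

from gwpy.timeseries import TimeSeries

CHAN = 'H1:SUS-PR2_M1_DAMP_T1_IN1_DQ'  # assumed OSEM readback channel

before = TimeSeries.get(CHAN, 1435154988, 1435154988 + 1200)
after = TimeSeries.get(CHAN, 1436654703, 1436654703 + 1200)

# Compare amplitude spectral densities over the 0.05-10 Hz band the
# whitening modification targets.
asd_before = before.asd(fftlength=100, overlap=50)
asd_after = after.asd(fftlength=100, overlap=50)

plot = asd_before.plot(label='before satamp swap')
ax = plot.gca()
ax.plot(asd_after, label='after satamp swap')
ax.set_xlim(0.05, 10)
ax.legend()
plot.show()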
PR2
Results
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PR2/SAGM1/Results/allDampRegressCompare_H1SUSPR2_M1_NoiseComparison_1435154988vs1436654703-1200.pdf
r12453
Data
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PR2/SAGM1/Data/dampRegress_H1SUSPR2_M1_1435154988_1200.mat
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PR2/SAGM1/Data/dampRegress_H1SUSPR2_M1_1436654703_1200.mat
r12453
MC2
Results
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC2/SAGM1/Results/allDampRegressCompare_H1SUSMC2_M1_NoiseComparison_1436631330vs1436638358-1200.pdf
r12454
Data
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC2/SAGM1/Data/dampRegress_H1SUSMC2_M1_1436631330_1200.mat
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC2/SAGM1/Data/dampRegress_H1SUSMC2_M1_1436638358_1200.mat
r12454
The comparisons for the rest of this swap set (TMSX, ETMX M0/R0/L1) have been posted as 85952
Lockloss at 2025-07-16 04:12 UTC after almost 3.5 hours locked
Closes FAMIS#28414, last checked 85483
The ITMX measurements actually had enough coherence this week!
9:00 - 12:30 Janos
The Kobelco compressor and drying towers were switched on today during the maintenance period for validation purposes (similarly to the EX dry air system earlier - aLog 85469). At the X-manifold, a KF50 venting port was opened up, and the air flowed at full throttle for 3.5 hours. Despite this, the compressor kept stopping, and the dew point went down nicely. The measured data:
- Dew point right before switching off the dryer towers (measured by the built-in dew point meter): -49 deg F (= -45 deg C)
- Dew point at the end of the testing period at the X-manifold venting port (measured by our portable dew point meter): -43.6 deg C
- Particle count at the end of the testing period at the X-manifold venting port (measured by our portable particle counter): 0 for all sizes
- FTIR tests were also taken at the vent port, and the vial is now in storage
This completes the installation and validation process for the Kobelco, so it is now OK to use for venting (similarly to the EX system).
Lockloss at 2025-07-15 22:34 UTC from unknown causes
00:55 UTC Back to Observing
Had to accept SDF diffs for DHARD_[P,Y]_[A,B]_TRAMPs, which Elenna updated earlier (85774). I'll need to check whether they need to be updated in the safe.snap or whether they get adjusted in guardian.
Monthly Dust trends FAMIS 37254
I've attached the Dust mon plots for the past month.
I have moved the DHARD control to be distributed between AS A and B RF45 Q instead of only on AS A RF45 Q to see if we can reduce the noise in DHARD.
On Monday, I ran an injection in DHARD P and Y (see pitch, see yaw) and determined that AS A and B RF45 Q have equal SNR for DHARD. The difference is the phase: AS B RF45 has the opposite sign of AS A RF45. Therefore, I figured we could try a new sensing scheme: instead of using just AS A RF45 Q, we would use 0.5*A and -0.5*B signals.
Today, I first tested this by putting my new input matrix into DC7 pitch and yaw. I watched my new signal compared to the nominal DHARD signal during the engagement of the DHARD WFS. There were no offsets in the signal. I ran a quick passive transfer function to determine that the signals have the same magnitude and phase.
Then, Tony was holding us at 2 W to sweep the LVEA before observing, so I used the opportunity to try the new matrix. I put the new matrix in the DHARD B path, aka the DHARD blend filter path. I then ramped halfway between DHARD A, which is just on AS A, and DHARD B, which is distributed between AS A and B. This worked well, so I ramped the rest of the way. This worked perfectly for both pitch and yaw. I decided I liked this, so I put the new matrix back in DHARD A and ramped back so I wouldn't have to deal with extra SDF diffs.
I continued to monitor the signals as we powered up and saw that everything was performing well. I updated the guardian to use this new matrix: it is set in the DARM_TO_RF state, and then DHARD is first engaged at DHARD_WFS.
Sensibly, this corresponds to about a sqrt(2) reduction in the DHARD noise above 10 Hz, comparing the signals from today to signals during observing last night: the two sensors have equal SNR with uncorrelated noise, so averaging them reduces the sensor noise by sqrt(2).
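As a sanity check of that factor, a toy numpy model (not site code), assuming equal-SNR sensors with uncorrelated noise, where AS B carries the DHARD signal with opposite sign:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
dhard = np.sin(2 * np.pi * 0.5 * np.arange(n) / 1024)  # common DHARD signal

as_a = dhard + rng.normal(0, 1, n)    # AS A RF45 Q: +DHARD plus its own noise
as_b = -dhard + rng.normal(0, 1, n)   # AS B RF45 Q: -DHARD plus its own noise

old = as_a                      # old scheme: AS A only
new = 0.5 * as_a - 0.5 * as_b   # new scheme: distributed sensing

# Signal content is identical; residual sensor noise drops by ~sqrt(2).
print(np.std(old - dhard) / np.std(new - dhard))  # ~1.41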
I accepted the differences in observe SDF as well. Note that I also accepted a CHARD tramp due to some other ASC changes I reported in this alog.
Leo, Jennie, Camilla, WP 12677
Setup steps:
Opened up the back side of SQZT7 and used the Nanoscan beam profiler to profile the SQZ beam at 5 points between the LPM and MR13 (layout D2000242) with the nominal PSAMS settings [ZM4 92V Strain 6.0, ZM5 109V Strain -0.4]. Repeated with PSAMS set to 0V,0V [ZM4 0V Strain 2.25, ZM5 0V Strain -5.4]; results attached, Leo will analyze.
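Presumably that analysis will be a Gaussian-beam width fit along the bench to extract the waist under each PSAMS setting; a sketch with placeholder numbers (the real measurements are in the attachment):

import numpy as np
from scipy.optimize import curve_fit

LAM = 1064e-9  # SQZ beam wavelength [m]

def beam_width(z, w0, z0):
    zr = np.pi * w0**2 / LAM  # Rayleigh range
    return w0 * np.sqrt(1 + ((z - z0) / zr)**2)

# Placeholder positions/widths, NOT the measured values:
z = np.array([0.0, 0.2, 0.4, 0.6, 0.8])          # position along bench [m]
w = np.array([510, 540, 590, 650, 720]) * 1e-6   # Nanoscan beam radius [m]

(w0, z0), _ = curve_fit(beam_width, z, w, p0=[500e-6, -0.5])
print(f'waist {w0*1e6:.0f} um at z = {z0:.2f} m')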
To change the PSAMS to the voltages we wanted:
Before we went back to squeezing, we reverted the PSAMS offsets and put the servos back on (after clearing their histories), reduced the SEED power back down to 0.6 mW, adjusted waveplates for the PUMP rejected power, disabled picos, and turned off the OPO EXC that the dither lock left on.
TITLE: 07/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 132Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 28mph Gusts, 16mph 3min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Currently Observing at 137 Mpc and have been Locked for 27 minutes. The wind is reaching 30mph.
LVEA has been swept.
I found a hiss coming from an open connection on a Kobelco compressor.
Jordan has since shut that off.
WP 12675
WP 12676
ECR E2400330
Drawing D0901284-v5
Modified List T2500232
The following SUS SAT Amps were upgraded per ECR E2400330. The modification improves the whitening stage to reduce ADC noise from 0.05 to 10 Hz. The EX PUM SAT Amp was NOT upgraded.
Suspension | Old | New | OSEM |
---|---|---|---|
ETMX M0 | S1100128 | S1100075 | F1 F2 F3 SD |
ETMX M0/R0 | S1100079 | S1100163 | RT LF / RT LF |
ETMX R0 | S1100149 | S1100132 | F1 F2 F3 SD |
ETMX UIM | S1000297 | S1100140 | UL LL UR LR |
TMSX | S1100098 | S1100150 | F1 F2 F3 LF |
TMSX | S1000292 | S1100058 | RT SD |
MC2 | S1100107 | S1100071 | T1 T2 T3 LF |
MC2/PR2 | S1100087 | S1100147 | RT SD / T1 T2 |
PR2 | S1100172 | S1100121 | T3 LF RT SD |
F. Clara, J. Kissel
As of 2025/07/25 00:00 UTC, the TMSX satamp box for F1/F2/F3/LF has been swapped from S1100150 to S1100122
See 85980 for more info
I wrote this state at the end of 2023 to automate the relocking of the PLL by turning on the Beckhoff autolocker and inputting crystal frequency values to search around. The frequencies it uses to search assume the values will be close to zero, i.e., that there isn't an offset (alog 81425).
We've been going into this fault state more often recently. This state is more of a band-aid for the issue than an actual fix for whatever's happening with the PLLs. ALS-Y sees this issue more frequently than ALS-X, and each arm is locking at a different frequency these days: around 0 Hz for the X arm and around 100 Hz for the Y arm. The Y arm's frequency also seems more dynamic than the X arm's; it has changed more over the past year.
I've updated the code today to use a dictionary with a list for each arm instead of the single list it was using, so hopefully it will make it a little faster.
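The shape of that change is roughly the following (hypothetical names and frequency values for illustration; the real lists live in the guardian code):

# Old: one list of candidate frequencies shared by both arms.
# New: a dictionary with a list per arm, since the arms lock at
# different crystal frequencies these days.
SEARCH_FREQS_HZ = {
    'X': [0.0, 5.0, -5.0, 10.0, -10.0],      # X arm locks near 0 Hz
    'Y': [100.0, 95.0, 105.0, 90.0, 110.0],  # Y arm has drifted to ~100 Hz
}

def next_search_freq(arm, attempt):
    """Pick the next crystal frequency to try for this arm."""
    freqs = SEARCH_FREQS_HZ[arm]
    return freqs[attempt % len(freqs)]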
I'm writing some code with Tony's statecounter to see exactly how many times we have been going into this state for each arm over the past months and years.
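One possible way to do that count from the guardian state trend (the channel name and state number here are assumptions, and Tony's statecounter may work differently):

from gwpy.timeseries import TimeSeries

def count_state_entries(arm, start, end, state_num):
    """Count transitions into a given guardian state number."""
    state = TimeSeries.get(f'H1:GRD-ALS_{arm}ARM_STATE_N', start, end)
    in_state = (state.value == state_num).astype(int)
    # Rising edges = entries into the state.
    return int(((in_state[1:] - in_state[:-1]) == 1).sum())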
I previously noted a glitch about 30 seconds before a lockloss in LOWNOISE_ASC, 85685. However, we had two more locklosses from this state last night and I do not see such a glitch, so that was a random coincidence. One of those locklosses appears to have been caused by an earthquake. However, since 6/11 we have had 9 locklosses in this state that occurred exactly 47 seconds into the state, which seems suspicious; one of those occurred last night, and the lockloss with the glitch was also at the 47 second mark.
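For reference, the 47 seconds can be read straight off the guardian log timestamps below (state entry to the LOCKLOSS jump):

from datetime import datetime

FMT = '%Y-%m-%d_%H:%M:%S.%fZ'
entered = datetime.strptime('2025-07-14_14:26:54.641531Z', FMT)
lockloss = datetime.strptime('2025-07-14_14:27:41.765955Z', FMT)
print((lockloss - entered).total_seconds())  # ~47.1 s into LOWNOISE_ASC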
This seems to be coincident with the engagement of a few DHARD P filters:
2025-07-14_14:26:54.641531Z ISC_LOCK executing state: LOWNOISE_ASC (522)
2025-07-14_14:26:54.642230Z ISC_LOCK [LOWNOISE_ASC.enter]
2025-07-14_14:26:54.655894Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_PIT3_OSC_CLKGAIN => 300
2025-07-14_14:26:54.656325Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_PIT4_OSC_CLKGAIN => 300
2025-07-14_14:26:54.656732Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_PIT5_OSC_CLKGAIN => 300
2025-07-14_14:26:54.657043Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_YAW3_OSC_CLKGAIN => 300
2025-07-14_14:26:54.657438Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_YAW4_OSC_CLKGAIN => 300
2025-07-14_14:26:54.657892Z ISC_LOCK [LOWNOISE_ASC.main] ezca: H1:ASC-ADS_YAW5_OSC_CLKGAIN => 300
2025-07-14_14:26:54.658134Z ISC_LOCK [LOWNOISE_ASC.main] timer['LoopShapeRamp'] = 5
2025-07-14_14:26:54.658367Z ISC_LOCK [LOWNOISE_ASC.main] timer['pwr'] = 0.125
2025-07-14_14:26:54.783581Z ISC_LOCK [LOWNOISE_ASC.run] timer['pwr'] done
2025-07-14_14:26:59.658298Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:26:59.719537Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y_GAIN => 200
2025-07-14_14:26:59.720456Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y_SW1 => 256
2025-07-14_14:26:59.846249Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y_SW2 => 20
2025-07-14_14:26:59.971686Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_Y => ON: FM3, FM8, FM9
2025-07-14_14:26:59.972384Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_Y_SW1 => 5392
2025-07-14_14:27:00.098073Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_Y_SW2 => 4
2025-07-14_14:27:00.223528Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_Y => ON: FM1, FM3, FM4, FM5, FM8
2025-07-14_14:27:00.224135Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CSOFT_P_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.224497Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CSOFT_Y_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.224868Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_P_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.225188Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_Y_SMOOTH_ENABLE => 0
2025-07-14_14:27:00.225433Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:10.225728Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:10.281803Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_TRAMP => 5
2025-07-14_14:27:10.408120Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_SW2 => 16
2025-07-14_14:27:10.533563Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P => OFF: FM9
2025-07-14_14:27:10.534285Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_SW1 => 256
2025-07-14_14:27:10.660088Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_SW2 => 4
2025-07-14_14:27:10.785453Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P => ON: FM3, FM8
2025-07-14_14:27:10.786315Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-CHARD_P_GAIN => 208
2025-07-14_14:27:10.786535Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 5
2025-07-14_14:27:15.786858Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:15.847152Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_Y_TRAMP => 5
2025-07-14_14:27:15.847580Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_Y_GAIN => 5
2025-07-14_14:27:15.848666Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DSOFT_P_GAIN => 5
2025-07-14_14:27:15.848917Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 5
2025-07-14_14:27:20.849050Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:20.906577Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMX_M0_DAMP_Y_TRAMP => 10
2025-07-14_14:27:20.907206Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ETMX_M0_DAMP_Y_TRAMP => 10
2025-07-14_14:27:20.907700Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMY_M0_DAMP_Y_TRAMP => 10
2025-07-14_14:27:20.908422Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMX_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.908830Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ETMX_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.909148Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ITMY_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.909562Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-ETMY_M0_DAMP_Y_GAIN => -0.5
2025-07-14_14:27:20.909789Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:30.910055Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:30.968166Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_P_GAIN => -0.2
2025-07-14_14:27:30.968527Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_Y_GAIN => -0.2
2025-07-14_14:27:30.968806Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_L_GAIN => -0.2
2025-07-14_14:27:30.969073Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_R_GAIN => -0.2
2025-07-14_14:27:30.969343Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_T_GAIN => -0.2
2025-07-14_14:27:30.969606Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:SUS-SR2_M1_DAMP_V_GAIN => -0.2
2025-07-14_14:27:30.969838Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:40.970003Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] done
2025-07-14_14:27:40.972085Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_P_SW1 => 1024
2025-07-14_14:27:41.097962Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_P_SW2 => 4
2025-07-14_14:27:41.223313Z ISC_LOCK [LOWNOISE_ASC.run] ezca: H1:ASC-DHARD_P => ON: FM4, FM8
2025-07-14_14:27:41.223637Z ISC_LOCK [LOWNOISE_ASC.run] timer['LoopShapeRamp'] = 10
2025-07-14_14:27:41.593743Z ISC_LOCK [LOWNOISE_ASC.run] Unstalling IMC_LOCK
2025-07-14_14:27:41.765955Z ISC_LOCK JUMP target: LOCKLOSS
I will take a look and see if there is anything unstable about these filters. Whatever is occurring seems to be too fast to be seen in the ASC signals themselves, and at first glance I don't see anything strange in the suspension channels either.
DHARD FM4 is engaged with a 10 second ramp; this is a change I made on 6/11 (84973) because we had lost lock twice that day in the same spot. Two of the locklosses at 47 seconds occurred before that change. Later that day on 6/11, I re-engaged a boost in DHARD P, which only has a 5 second ramp (84980). Engaging that boost shouldn't be unstable, but maybe something bad occurs when they ramp at different times. I'm lengthening the ramp to 10 seconds.
We had another lockloss from this state at the 00:47 mark last night (1436615757), so I'm not sure this fixed the problem.
However, the lockloss was preceded by a glitch about 30 seconds before, like another lockloss I noticed in this state. This could be coincidence again, but it's looking a little suspicious!
The glitch appears to be occurring due to the CHARD P change. We ramp a boost off over 2 seconds, ramp a new shaping filter and low pass on over 2 seconds, and then change the gain over 5 seconds. Looking at the step response of the shaping and low pass filter, this ramp should probably be 10 seconds, with the gain ramp also 10 seconds to match. I will keep the boost ramping off over 2 seconds, though. I increased the wait timer to 10 seconds to match this ramping. Model and guardian changes saved and loaded.
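To illustrate the step-response reasoning, a stand-in scipy example; the filter here is an assumed 0.1 Hz second-order low pass, not the actual CHARD P foton design:

import numpy as np
from scipy import signal

# Stand-in for the new shaping/low-pass stage (assumed 0.1 Hz corner).
b, a = signal.butter(2, 2 * np.pi * 0.1, btype='low', analog=True)

t = np.linspace(0, 30, 3000)
t, step = signal.step((b, a), T=t)

# Last time the response deviates from its final value by more than 1%.
# If this is ~10 s, a 2 s ramp/wait is clearly too short, which is the
# argument for the 10 s ramp and matching wait timer.
dev = np.abs(step - step[-1]) > 0.01 * np.abs(step[-1])
print(f'~1% settle time: {t[np.where(dev)[0][-1]]:.1f} s')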
I am still not sure what is going on with DHARD P, but as a test I've now separated the low pass and loop shape from the engagement of the boost, since we know those are individually stable to engage. We now engage FM4 with a 10 second ramp and wait time, then engage FM8 with a 5 second ramp and wait time. I edited the ramps and guardian code to do so, saved and loaded. This is kind of annoying, but it might help me debug what's going wrong here.
I watched the signals during lownoise ASC, and this time I saw no glitch in CHARD during its lownoise transition. However, I saw a glitch when the DHARD P FM4 filter was engaged, and no glitch when FM8 was engaged. Maybe the ramp of FM4 should be even longer than 10 seconds. I increased the filter ramp to 15 seconds and increased the guardian wait timer to match. Both changes saved and loaded.
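In guardian-style pseudocode, the separated engagement now looks roughly like this (modeled on the log excerpt above; the real ISC_LOCK code differs in detail):

# Sketch only, not the actual ISC_LOCK state code.
dhard_p = ezca.get_LIGOFilter('ASC-DHARD_P')

# Low pass + loop shape first, with the 15 s filter ramp and a
# matching wait so nothing else moves while it settles.
dhard_p.turn_on('FM4')
self.timer['LoopShapeRamp'] = 15

# ...later in run(), once timer['LoopShapeRamp'] is done:
dhard_p.turn_on('FM8')  # boost engaged separately, 5 s ramp
self.timer['LoopShapeRamp'] = 5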
We haven't had a lockloss in this state since this fix (but we've had plenty of locks), so I am going to declare this problem fixed!
(Jordan V., Gerardo M.)
Today we replaced the MKS gauge at FC-C-1, the first 6-way cross inside the filter cavity tube enclosure. We installed serial number 390F00490 twice, yes, two times. It turns out that the flange has some scratches on the knife edge, and it was not going to seal regardless of the effort we put into it. Once the gauge was removed, the scratches had transferred to the copper gasket. We replaced it with serial number 390F00495, and this one seems to be doing well. The new conflat was leak tested and no leak was detectable above 2.42e-10 torr*l/sec.
The old gauge serial number is 390F00406 with a date code of June 2021.
Additional pictures of the knife edge damage/dirty flange from manufacturer.
More photos of the MKS390 gauge due to newfound features.
We found some features internal to the gauge, see attached photos; maybe when welding the conflat to the gauge body they did not use shielding gas inside the gauge.
For future reference, we did a test on the gauge with an annealed copper gasket; no leaks were detected above 1.0e-10 torr*l/sec. So, if this gauge is deemed good we can use it; we are contacting the vendor with lots of questions. Serial number 390F00495 is the featured gauge in the attached photos.
For clarification the serial number of the "dirty" gauge is 390F00490 and it is getting returned to the vendor.
The gauge installed and working on FC-C-1 is serial number 390F00495.
SQZing has been slowly getting worse over the long lock stretch. The low range coherence check yielded.