Closes FAMIS#37256, last checked 86348
Things to note:
Possible issues:
CS_DUST_LAB1_{300,500}NM both stalled at a value 16 days ago
CS_DUST_DR1_300NM has been at zero for the past four days
CS_DUST_DR1_500NM has been at zero for the past six days
Not an issue:
CS_DUST_LVEA5_300NM has been at 0 for three months - since we turned it off after the vent
CS_DUST_LAB2_{300,500}NM both off as expected (comparing to last month when Ryan C didn't mention it as an issue)
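One way a flatline check like this could be scripted is sketched below; it assumes gwpy/NDS access, the full channel name is a placeholder (only the short names are given here), and a minute trend would be more practical than raw data over a 16-day span.
```python
# Minimal sketch: flag a dust-monitor channel that has not changed in N days.
# Assumes gwpy with NDS access; the full channel name below is a placeholder,
# and a minute trend would be more practical than raw data for this span.
from gwpy.time import tconvert
from gwpy.timeseries import TimeSeries

CHANNEL = "H1:PEM-CS_DUST_LAB1_300NM"   # placeholder full channel name
DAYS = 16

end = tconvert("now")
start = end - DAYS * 86400

data = TimeSeries.get(CHANNEL, start, end)

if data.value.max() == data.value.min():
    print(f"{CHANNEL} stuck at {data.value[0]} for the past {DAYS} days")
else:
    print(f"{CHANNEL} varying between {data.value.min()} and {data.value.max()}")
```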
Thu Sep 04 10:05:52 2025 INFO: Fill completed in 5min 48secs
Gerardo confirmed a good fill curbside.
15:59:33 Wed 03sep2025 PDT power glitch. H1 stayed in lock throughout and there was no impact on range.
H1 lockloss a few minutes after hitting the 42hr mark for a lock (42hrs03min). At the time we were at the beginning of a 6hr commissioning period, roughly 10min into running the DARM Offset script for the OM2 heating set-up.
Not sure that the DARM offset step caused this, as we were almost back at our nominal offset when we lost lock (step 6 out of 7, with H1:OMC-READOUT_X0_OFFSET = 9, which is ~27 mA; nominal is 40 mA).
We checked the lockloss ndscopes and couldn't see any smoking guns when the offset was changing from 8 to 9.
TITLE: 09/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Day 2 of being locked since Maintenance (41.5hrs)! Commissioning starts in 12min. All is quiet seismically (microseism is below the 50th percentile even!) and winds are low.
H1 just had its 3rd superevent in 10hrs, 1hr ago (1350 UTC).
TITLE: 09/04 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Another very quiet shift with H1 locked and observing throughout, plus one candidate event. H1 has now been locked for 32 hours.
LOG:
FAMIS 28421, last checked in alog86614
Once again, coherence was too low for ITMX, so no new data point there. The most recent point is from the August 19th measurement.
FAMIS 28464, last checked in alog86272
Nothing much out of the ordinary here that I can see aside from the fact that the ITMX SLED is getting awfully close to the 1mW lower power threshold and will likely hit it in the next month.
Two issues I'm seeing in these plots: 1) the ITMX HWS SLED is degrading faster than previous SLEDs, and 2) there is some weirdness with the ITMX CO2 flow meter from a week and a half ago.
The degradation rate of these SLEDs is typically on the order of 2mW/yr, which is why we can generally get about a year's worth of use out of each SLED. Comparing the last ITMX SLED and the one that we just installed in May (alog84417), the old SLED decayed at a rate of 2.43mW/yr and the new one is at 4.2mW/yr. This is just from making some lines in ndscope, so it's pretty rough, but we can definitely say that it is degrading about 2x as fast as the last SLED (attachment 1). The ITMY SLEDs are a bit more consistent: previous = 2.7mW/yr and new = 2.2mW/yr.
On the ITMY SLED this last time around, we were still able to see spherical power changes from lock to lock even with the SLED power reporting 0.5mW, though ring heater changes were tough to see (alog84408). Let's consider 0.5mW the limit for taking these measurements, though really the limit is wherever we can no longer see anything; at our current rate this SLED would finish the run at ~0.3mW. Looks like we will need to swap this one before the end of the run.
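As a rough cross-check of that projection, here is a minimal sketch of the linear extrapolation; the current power and remaining run time below are placeholder assumptions, and only the ~4.2mW/yr rate comes from the ndscope fit above.
```python
# Minimal sketch: linearly extrapolate SLED power to the end of the run.
# The starting power and months remaining are illustrative assumptions; only
# the ~4.2 mW/yr rate comes from the rough ndscope fit described above.
current_power_mw = 1.2       # assumed present SLED output (placeholder)
decay_rate_mw_per_yr = 4.2   # rough rate from the ndscope fit
months_left_in_run = 2.5     # assumed remaining run time (placeholder)

projected_mw = current_power_mw - decay_rate_mw_per_yr * (months_left_in_run / 12.0)
print(f"Projected end-of-run power: {projected_mw:.2f} mW")

# Time until the 0.5 mW usability limit is reached, at the same rate
months_to_limit = (current_power_mw - 0.5) / decay_rate_mw_per_yr * 12.0
print(f"~{months_to_limit:.1f} months until the 0.5 mW limit")
```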
In Ryan's alog I noticed that the usual variation in flow, as reported by the paddle wheel flow meter on the floor, became much more stable on Sunday, Aug. 24th. Looking back over two years (attachment 2), it actually looks like the flow has been unusually unstable since returning from the spring vent, and whatever happened on the 24th brought us back to normal. Being a Sunday, there wasn't much going on (alog86540); we were in the middle of a 40+ hour long lock. Zooming into the event doesn't show the laser doing anything during that time, as if it didn't actually see a change in flow. Not sure what's going on here.
TITLE: 09/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Today we had Commissioning time to make up for the time from Monday's holiday. Also had the opportunity to take H1 to the new ASC High Gain mode to ride out a Magnitude 6.0 earthquake! H1 is approaching 26.5hrs of lock.
LOG:
TITLE: 09/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 3mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: H1 has been locked for 26 hours. Sounds like commissioning and riding through earthquakes today all went well.
(Elenna C [in spirit], Ibrahim A, Corey G, Tony S, Jim W)
Summary: H1 survived a magnitude 6.0 EQ in Alaska via the new ASC High Gain button/state. This preserved a 24+hr lock and probably saved us at minimum 2.5hrs of downtime: (1) ~90min for the earthquake seismic noise to die down and (2) 1-2hrs of relocking. L1 went DOWN due to this EQ about 20min after we saw it rolling through LHO!
Had my first experience getting to ENGAGE the new RED "ASC Hi Gn" button (see Elenna's alog) for an earthquake! This tool is still fairly new and "manual" (& cool!) for operators. The main items for engaging at this point (for me, as a newbie) were waiting for SEI_ENV to go to EARTHQUAKE mode and watching for any colors on the Picket Fence. While we were dealing with this earthquake, Jim walked in to observe and answer questions for us; Ibrahim also gave advice since he has engaged this EQ mode twice. All the details of what happened are logged below.
How Everything Went "Down"!
2102 VerbalAlarm: "EQ alert for M6.0 in Aleutians" (it said "Incoming earthquake from Canada")
Using Vibration Sensors To Gauge Health Of HVAC Fans Site Wide FAMIS 26592
H0:VAC-EX_FAN2_570_1 & 2 both seem to have gotten significantly noisier for about 27 hours, from 1800 UTC on Sep 1st to 2100 UTC on Sep 2nd.
Sheila and I repeated her FC injections from yesterday (86683).
Interestingly, the shape of the injection has changed, meaning the FC loop shape is changing; see quiet time today vs yesterday on the FC2_M3 plot. The coupling into DARM is also significantly less today. This isn't a surprise, as we see the FC causing noise in DARM only intermittently (86608), but we don't know the reason why yet. Sheila checked the FC OLG and it looked fine.
Addressed TCS Chillers (Wed [Sept3] 957-1040am local) & CLOSED FAMIS #27823:
Had a little bit of a rigmarole since I hadn't done this for a while. In my quick procedure scan, I didn't see how to read the "floating red ball" in the chillers. I had remembered using the bottom of the ball but wasn't sure, so I consulted Camilla & TJ. I ended up using the bottom for my initial measurements, but later learned we measure the top of the ball! Either way, the levels were a little low, and I overfilled them since I had used the bottom of the ball for readings. It's also not clear what exactly the "Max level" is (the reason for the overfill). Either way, the chillers hadn't had water added since June, so adding water was an OK thing! :) Attached is a photo of what I'm talking about; the red ball is about 4mm in diameter, so I updated the readings I initially entered when filling by adding 4mm to all water level readings.
@ 2:43 am (Local time) H1 called for assistance.
I noticed that it was a SQZ issue, specifically the SHG "PZT was out of range" error on the SQZ_SHG Guardian.
It was bouncing from Locking to Locked, then Scanning, etc.
I tried Init-ing the SQZ_SHG's manager, SQZ_MANAGER.
Then I took all SQZ GRD Nodes to down except SQZ_PMC & SQZ_SHG and tried to troubleshoot the SQZ_SHG directly.
Looking at the SQZ troubleshooting guide: there wasn't a section for SQZ_SHG troubleshooting, so I searched the ALOG, with no hits for "PZT Out of Range".
I tried to adjust the OPO temp to maximize H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT. But changing the OPO temp had no impact.
This is when I started to realize it might take me a while to figure out what was going on; since it was too late to call a SQZ expert, I decided to just take us to NO-SQZ.
So I ran the no-squeezing version of the script: noconda python switch_nom_sqz_states.py without.
Sadly I did accept the SDFs (and these too), because I thought they were causing an SDF issue. But it may have been that SQZ_ANG_ADJUST needed to be at ADJUST_SQZ_AND_ADF instead of DOWN for the SQZr to be considered ready for OBSERVING.
Made it back to OBSERVING at 11:30:44 UTC
Now that we had made it back to OBSERVING, albeit without SQZing, I went over my troubleshooting with a finer-toothed comb.
I went looking for Sitemap > SQZ > SQZT0 > SHG; that last SHG button was a bit difficult to find when your eyes are still crossed.
I then took a screenshot of the SHG screen before any changes.
I then locked the SQZ_PMC and the SQZ_SHG. The SQZ_SHG was cycling between locked and Unlocked. Dropped from Observing at 11:57:53 UTC.
Then while trying to maximize H1:SQZ-SHG_GR_DC_POWERMON I changed H1:SQZ-SHG_TEC_SETTEMP from 35.89 to 35.61.
After this I ran noconda python switch_nom_sqz_states.py with to try and get the SQZ SQuoZe.
SQZ_MAN was having an issue with the SQZ_FC losing lock at Transition_IR_LOCKIN, so I re-touched up the OPO temp.
Success! The SQZr Is SQUOZE!!
Now I needed to accept all the SDFs again (and these too) because I didn't follow the directions when I ran the initial no-squeezing script.
Observing reached again at 12:51:32 UTC.
I edited SQZ_ANG_ADJUST, which had a conditional (reading from sqzparams.use_sqz_angle_adjust) to set the nominal state and which stopped the script from running correctly, so that it now just states the nominal state directly. There is now a note in sqzparams.py to change the nominal state in both SQZ_ANG_ADJUST and sqz/h1/scripts/switch_nom_sqz_states.py if the flag is changed.
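For reference, a minimal sketch of this nominal-state change is below (illustrative, not the actual node code); Guardian nodes declare their nominal state with a module-level nominal variable, and the old version picked it from the sqzparams flag.
```python
# Illustrative sketch of the SQZ_ANG_ADJUST nominal-state change, not the
# actual node code.  Guardian nodes declare their nominal state with a
# module-level 'nominal' variable.

# Old (removed): conditional nominal state read from the sqzparams flag,
# which stopped switch_nom_sqz_states.py from running correctly:
# if sqzparams.use_sqz_angle_adjust:
#     nominal = 'ADJUST_SQZ_AND_ADF'
# else:
#     nominal = 'DOWN'

# New: state the nominal directly.  If sqzparams.use_sqz_angle_adjust is
# changed, this nominal and sqz/h1/scripts/switch_nom_sqz_states.py must be
# updated by hand (per the note added to sqzparams.py).
nominal = 'ADJUST_SQZ_AND_ADF'
```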
Sheila, Camilla
This morning the SHG PZT was still around 5V, so we changed the Offset, Min, and Max scan ranges to force it to lock on the peak closer to 50V; see attached for old vs new values and SDFs accepted. We checked that it scanned over the correct range by setting the SQZ_SHG guardian to DOWN and manually scanning the SHG PZT.
Then I copied what Tony did last night and further optimized the SHG temperature to bring the power up from 96mW to 106mW, see attached. Thanks Tony!
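For anyone repeating this, a minimal sketch of a coarse scripted version of that temperature optimization is below; it assumes pyepics is available, and the scan range, step size, and settle time are placeholders (the actual tuning here was done by hand).
```python
# Minimal sketch of a coarse scripted SHG temperature scan; the tuning in this
# entry was done by hand.  Assumes pyepics; the scan range, step size, and
# settle time are placeholder assumptions.
import time
import numpy as np
from epics import caget, caput

SETPOINT = "H1:SQZ-SHG_TEC_SETTEMP"
READBACK = "H1:SQZ-SHG_GR_DC_POWERMON"

start = caget(SETPOINT)
temps = np.arange(start - 0.3, start + 0.3, 0.02)   # assumed +/-0.3 degC scan
powers = []

for t in temps:
    caput(SETPOINT, float(t))
    time.sleep(10)                  # assumed thermal settling time
    powers.append(caget(READBACK))

best = temps[int(np.argmax(powers))]
caput(SETPOINT, float(best))        # leave the setpoint at the maximum
print(f"Best SHG temperature ~{best:.2f}, POWERMON {max(powers):.1f}")
```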
Tried to measure NLG (following 76542) at different ISS setpoints (by changing the SHG waveplate to adjust how much power is incident on the AOM), with the same OPO TRANS power. This was because in 86363 we were confused that the NLG increased significantly when we realigned the pump fiber, with the same OPO TRANS setpoint.
This was confusing and I'd like to repeat it next week. I initially thought we were getting pump depletion, so I decreased the seed power, but later noticed the ISS was just unable to keep up: the control voltage was dropping to -5 for the low values of ISS setpoint (see attached). It could hold in LOCKED_CLF_DUAL with 80uW OPO_TRANS fine, but not with the SEED beam.
OPO Setpoint (uW) | ISS Setpoint when locked on CLF DUAL (V) | ISS Setpoint when locked on SEED (V) | Amplified Max | Amplified Min | Unamplified | Dark | NLG | Notes
--- | --- | --- | --- | --- | --- | --- | --- | ---
80 | 4.9 | 3.2 | 0.192924 | 0.002389 | 0.0079612 | -2.1e-5 | 24.2 |
80 | 2.9 | 2.9 | 0.045037 | | | -2.2e-5 | 39? (using unamplified from row below) | Pump depletion? Reduced SEED power from 0.7 to 0.3 to keep it locked on SEED (still didn't work the first time).
80 | 6.3 | 6.3 | 0.0430464 | 0.000546 | 0.0011063 | -2.2e-5 | 38? | Unamplified signal decreased from the SEED power change.
80 | 4.8 | 4.7 | 0.0454944 | 0.0005378 | 0.0018533 | -2.7e-5 | 24.2 |
80 | 2.9 | -5? | | | | | | Noticed ISS Controlmon at -5.
80 | 3.5 | 3.5 | 0.0452317 | 0.00054049 | 0.0018636 | -2.4e-5 | 24.0 |
80 | 4.95 | | 0.04523 | | | | 24.0 | Leaving here.
Repeated while Corey was relocking today. We had one strange measurement with the ISS setpoint at 3.3V, where the un-amplified signal was much lower, but when we later repeated at 3.1V we didn't see this.
OPO Setpoint (uW) | ISS Setpoint when locked on CLF DUAL (V) | ISS Setpoint when locked on SEED (V) | Amplified Max | Amplified Min | Unamplified | Dark | NLG | Notes
--- | --- | --- | --- | --- | --- | --- | --- | ---
80 | 5.0 | 5.0 | 0.042961 | 0.00051437 | 0.00176935 | -2.1e-5 | 24.0 |
80 | 2.8 | | | | | | | ISS Setpoint dropped to -5, so OPO TRANS not at 80uW
80 | 3.3 | 3.2 | 0.043273 | 0.0005247 | 0.00106713 | -2.1e-5 | 39.7 |
80 | 6.2 | 6.2 | 0.041031 | 0.00051677 | 0.0017658 | -2.1e-5 | 23.0 |
80 | 6.5 | 6.4 | 0.0409285 | 0.00051425 | 0.00175578 | -2.1e-5 | 23.0 |
80 | 3.6 | 3.5 | 0.0429806 | 0.00051216 | 0.00175932 | -2.7e-5 | 24.1 |
80 | 3.1 | 3.0 | 0.0431685 | 0.000516166 | 0.0017296 | -2.7e-5 | 24.5 |
80 | 2.8 | | | | | | | ISS Setpoint dropped to -5, so OPO TRANS not at 80uW
80 | 4.9 | 4.8 | 0.0429243 | 0.0005134 | 0.0017627 | -2.7e-7 | 24.0 |
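The NLG values in both tables appear consistent with the dark-subtracted ratio of the amplified maximum to the unamplified level; a minimal sketch of that calculation (my reading of the numbers above, following 76542) is below.
```python
# Minimal sketch of the NLG calculation as read from the tables above:
# dark-subtracted amplified maximum over dark-subtracted unamplified level.
def nlg(amplified_max, unamplified, dark):
    """Nonlinear gain from the dark-subtracted amplified/unamplified ratio."""
    return (amplified_max - dark) / (unamplified - dark)

# First row of the first table reproduces the tabulated 24.2:
print(round(nlg(0.192924, 0.0079612, -2.1e-5), 1))   # -> 24.2

# The "strange" 3.3V row from the second table gives ~39.8 (tabulated 39.7):
print(round(nlg(0.043273, 0.00106713, -2.1e-5), 1))  # -> 39.8
```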
Test for Thursday morning at 7.45 am, assuming we are thermalised.
conda activate labutils
python auto_darm_offset_step.py
Wait until program has finished ~20 mins.
Turn OMC ASC back on by putting master gain slider back to 0.020.
Test for Thursday morning at 7.45 am, assuming we are thermalised; rewrote the instructions above and took out the last part.
conda activate labutils
python auto_darm_offset_step.py
Wait until program has finished ~15 mins.
Turn OMC ASC back on by putting master gain slider back to 0.020.
Commissioners will turn off the OMC ASC and close the beam diverter once heating has finished, then do the DARM offset step and other tests before turning the ASC back on and opening the beam diverter before cooling down OM2 again.
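For context, here is a minimal sketch of what the offset-stepping part of this looks like; it is an illustration assuming pyepics, not the actual auto_darm_offset_step.py, and the offset sequence and dwell time are placeholders.
```python
# Minimal sketch of stepping the DARM (OMC readout) offset, illustrative only:
# not the actual auto_darm_offset_step.py.  Assumes pyepics for EPICS access;
# the offset sequence and dwell time below are placeholders.
import time
from epics import caget, caput

OFFSET_CHANNEL = "H1:OMC-READOUT_X0_OFFSET"
steps = [10, 11, 12, 11, 10, 9, 10]   # placeholder offset sequence (7 steps)
dwell_s = 120                         # placeholder dwell time per step

nominal = caget(OFFSET_CHANNEL)       # remember the starting (nominal) offset
print(f"Starting offset: {nominal}")

try:
    for i, value in enumerate(steps, start=1):
        print(f"Step {i}/{len(steps)}: {OFFSET_CHANNEL} -> {value}")
        caput(OFFSET_CHANNEL, value)
        time.sleep(dwell_s)           # let DARM settle / collect data
finally:
    caput(OFFSET_CHANNEL, nominal)    # always restore the nominal offset
    print(f"Restored offset to {nominal}")
```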
Since the HAM1 vent, I have done a few different measurements of the ASC that provide information on how to calibrate WFS signals from counts to microradians. Here is a summary:
CHARD, INP1 and PRC2 results come from this alog
DHARD results come from this alog
SRM results come from this alog (if you are comparing values, I made a power normalization error in the linked alog)
BS results were taken but never alogged (shame on me)
All of these measurements were taken by notching all ASC loops at 8.125 Hz and injecting an 8.125 Hz line in the desired DoF. The OSEM witness channels provide the urad reference.
Unless otherwise specified, the witness channels are the bottom stage osems
DoF | Input Matrix | Calibration | Notes
--- | --- | --- | ---
CHARD P | -1 * REFL A 45 I + 0.6 * REFL B 45 I | 0.0161 urad [ETMY L2] / ct [REFL A 45 I]; 0.0109 urad [ETMY L2] / ct [REFL B 45 I] | measured as ETMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence near 1
CHARD Y | -1 * REFL A 45 I + 0.8 * REFL B 45 I | 0.0113 urad [ETMY L2] / ct [REFL A 45 I]; 0.00965 urad [ETMY L2] / ct [REFL B 45 I] | measured as ETMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence near 1
DHARD P | 0.5 * AS A 45 Q - 0.5 * AS B 45 Q | 0.00312 urad [ETMX L2] / ct [AS A 45 Q]; 0.00312 urad [ETMX L2] / ct [AS B 45 Q] | measured as ETMX L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence = 0.8, 10 averages
DHARD Y | 0.5 * AS A 45 Q - 0.5 * AS B 45 Q | 0.00612 urad [ITMY L2] / ct [AS A 45 Q]; 0.02 urad [ITMY L2] / ct [AS B 45 Q] | measured as ITMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence = 0.5, 10 averages
PRC2 P (PR2) | 1 * POP X RF I | 0.00033 urad / ct | coherence 1
PRC2 Y (PR2) | 1 * POP X RF I | 0.000648 urad / ct | coherence 1
INP1 P (IM4) | 1.5 * REFL A 45 I + 1 * REFL B 45 I | 0.0104 urad / ct [REFL A 45 I]; 0.00988 urad / ct [REFL B 45 I] | coherence 1
INP1 Y (IM4) | 2 * REFL A 45 I + 1 * REFL B 45 I | 0.0141 urad / ct [REFL A 45 I]; 0.00608 urad / ct [REFL B 45 I] | coherence 1
MICH P (BS) | 1 * AS A 36 Q | 0.0161 urad [BS M2] / ct | measured as BS M2 WIT / AS A 36 Q, must transform into M3 urad; coherence near 1
MICH Y (BS) | 1 * AS A 36 Q | data not taken |
SRC1 P (SRM) | 1 * AS A 72 Q | 16.9 urad / ct | coherence near 1
SRC1 Y (SRM) | 1 * AS A 72 Q | 10.6 urad / ct | coherence near 1
Here is data for MICH yaw and SRC2:
DoF | Input Matrix | Calibration | Notes
--- | --- | --- | ---
MICH Y | 1 * AS A 36 Q | 0.00248 urad [BS M2] / ct | measured as BS M2 WIT / AS A 36 Q, must transform into M3 urad
SRC2 P (SRM + SR2) | 1 * AS_C | 33.4 urad [SR2 M3] / ct; 44.7 urad [SRM M3] / ct | SRC2 drive matrix is a combination of SRM and SR2: -7.6 * SRM + 1 * SR2
SRC2 Y (SRM + SR2) | 1 * AS_C | 20.9 urad [SR2 M3] / ct; 48.8 urad [SRM M3] / ct | SRC2 drive matrix is a combination of SRM and SR2: 7.1 * SRM + 1 * SR2
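As a sketch of how a single counts-to-urad number like those above could be extracted, one can demodulate the WFS error signal and the OSEM witness at the 8.125 Hz line and take the ratio of line amplitudes; the example below assumes gwpy, and the channel names and GPS times are placeholders (this is an illustration, not the analysis code actually used).
```python
# Sketch: estimate a counts-to-urad calibration by demodulating the WFS error
# signal and the OSEM witness at the 8.125 Hz injection line and taking the
# ratio of line amplitudes.  Illustration only, not the analysis actually used;
# assumes gwpy, and the channel names and GPS times are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

F_LINE = 8.125                          # injection line frequency [Hz]
START, END = 1441000000, 1441000120     # placeholder GPS times

wfs = TimeSeries.get("H1:ASC-REFL_A_RF45_I_PIT_OUT_DQ", START, END)  # counts
wit = TimeSeries.get("H1:SUS-ETMY_L2_WIT_P_DQ", START, END)          # urad

def line_amplitude(ts, f0):
    """Amplitude of the f0 component of a TimeSeries via single-bin demodulation."""
    t = ts.times.value - ts.times.value[0]
    z = np.mean(ts.value * np.exp(-2j * np.pi * f0 * t))
    return 2 * np.abs(z)

cal = line_amplitude(wit, F_LINE) / line_amplitude(wfs, F_LINE)
print(f"Calibration: {cal:.3g} urad [witness] / ct [WFS]")
```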