I updated Jim's "rasta plot" with a line marking where Jim thinks we should turn on the high ASC to help us ride through earthquakes. This threshold is a guess and will need to be tested against more events, so we will very likely adjust it in the near future. Operators, please transition to the high ASC before the earthquake arrives. You can transition back once the ground motion has sufficiently calmed down, perhaps even waiting until the SEI_ENV node goes out of Earthquake mode.
Interestingly, the M6.0 from Russia this morning would have been in the high ASC realm, but we look to have scraped by and survived.
I'm ready to test the Guardian version of this high ASC switch whenever we get the chance. We had a lockloss just before trying it today, so perhaps Monday or opportunistically over the weekend. In the meantime, operators will have to do the switching manually.
Over the last several weeks, I have collected some HAM ISI to DARM coupling measurements. These are the only HAM ISIs that don't have St0 L4C feedforward, and their overall performance should be somewhat worse than the other HAMs. The attached PDFs show the coupling to DARM for the closest analogs to their associated cavity length, pitch, yaw, and vertical DOFs. It seems that HAM2 generally has the worst coupling, then HAM6, then HAM3. HAM2 is already planned to receive St0 L4Cs in the next vent, as well as lower-noise vertical CPSs. HAM3 is only planned to receive the lower-noise CPSs, for now. For HAM6, I have been trying ground-to-HEPI L4C feedforward, but so far I haven't found a stable filter and don't yet understand why that feedforward is not working at HAM6.
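For anyone curious how these coupling transfer functions can be estimated offline, here is a minimal sketch using scipy; the variable names, sampling rate, and FFT length are placeholders, not the settings actually used for these measurements.

```python
# Minimal sketch: |TF| = |CSD(isi, darm)| / PSD(isi), averaged over FFT segments.
# 'isi' and 'darm' are placeholder arrays for an ISI motion channel and DARM.
import numpy as np
from scipy.signal import csd, welch

def coupling_tf(isi, darm, fs, fftlength=64):
    """Return frequency vector and coupling magnitude from isi to darm."""
    nperseg = int(fftlength * fs)
    f, Pxy = csd(isi, darm, fs=fs, nperseg=nperseg)
    _, Pxx = welch(isi, fs=fs, nperseg=nperseg)
    return f, np.abs(Pxy) / Pxx

# usage: f, mag = coupling_tf(isi_data, darm_data, fs=512)
```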
Comparing the last two calibration reports, we noticed that there has been a significant change in the systematic error, 87295. It's not immediately obvious what the cause is. Two current problems we are aware of: there is test mass charge and kappa TST is up by more than 3%, and we have lost another 1% of optical gain since the power outage.
One possible source is a change in the SRC detuning (I'm not sure how this could happen, but it could have changed).
Today, I tried to correct some of these issues.
Correcting actuation:
This is pretty straightforward: I measured the DARM OLG and adjusted the L3 DRIVEALIGN gain to bring the UGF back to about 70 Hz and kappa TST back to 1. This required about a 3.5% reduction in the drivealign gain, from 88.285 to 85.21. I confirmed that this did the right thing by comparing the DARM OLG to an old reference and watching kappa TST. I updated the guardian with this new gain, SDFed it, and changed the standard calibration pydarm ini file to use this new gain. I also remembered to update this gain in the CAL CS model.
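For the record, the arithmetic behind the new gain is just a rescaling by the measured kappa TST; a quick sanity check, assuming the new gain is simply the old gain divided by a kappa TST of about 1.036 (my numbers, not a pydarm output):

```python
old_gain = 88.285
kappa_tst = 1.036                  # approximate, from the >3% rise noted above
new_gain = old_gain / kappa_tst
print(round(new_gain, 2))                          # ~85.22, close to the 85.21 installed
print(round(100 * (1 - new_gain / old_gain), 1))   # ~3.5% reduction
```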
Correcting sensing:
Next, Camilla took a SQZ data set using FIS at different SRCL offsets, 87387. We did the usual three SRCL offset steps, but we were confused by the results, so we added a fourth at -450 ct. Part of this measurement requires us to guess how much SRCL detuning each measurement has, so we spent a while iterating to make our gwinc fit match the data in this plot. I'm still not sure we did it right, but it could also be something else wrong with the model. The linear fit suggests we need an offset of about -435 ct. We changed the offset following this result.
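To be explicit about how the -435 ct number comes out: I read the linear fit as inferred detuning vs. digital offset, solved for the offset that zeroes the detuning. If that's not exactly what the fit was, treat this as illustrative; the detuning values are not reproduced here.

```python
import numpy as np

def offset_for_zero_detuning(offsets_ct, detunings_deg):
    """Fit detuning vs. digital SRCL offset and return the zero-crossing offset."""
    slope, intercept = np.polyfit(offsets_ct, detunings_deg, 1)
    return -intercept / slope

# usage, with the offsets from the table in 87387 and the detunings inferred
# from the gwinc fits (not reproduced here):
# print(offset_for_zero_detuning([-382, -200, 0, -450], detunings))
```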
Checking the result:
After these changes, Corey ran the usual calibration sweep. The broadband comparison showed some improvement in the calibration. However, the calibration report shows a significant spring in the sensing function. To compare how this looks relative to previous measurements, I plotted the last three sensing function measurements.
The calibration still had 1% uncertainty on 9/27. On 10/4, the calibration uncertainty increased. Today, we changed the SRCL offset following our SQZ measurement. This plot compares those three times and includes the digital SRCL offset engaged at each one. I also took the ratio of each measurement with the measurement from 9/27 to highlight the differences. It seems that the difference between the 9/27 and 10/4 calibrations cannot be attributed much to a change in the sensing. And clearly, this new SRCL offset makes the sensing function have an even larger spring than before.
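The ratio traces were formed by interpolating each sweep onto the 9/27 frequency vector and dividing the complex sensing functions; a rough sketch (variable names are placeholders for the exported measurement data):

```python
import numpy as np

def sensing_ratio(f_ref, C_ref, f_meas, C_meas):
    """Ratio of a sensing-function sweep to the 9/27 reference sweep."""
    C_on_ref = np.interp(f_ref, f_meas, C_meas.real) \
             + 1j * np.interp(f_ref, f_meas, C_meas.imag)
    return C_on_ref / C_ref

# ratio_1004 = sensing_ratio(f_0927, C_0927, f_1004, C_1004)
```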
Therefore, I concluded that this was a bad change to the offset, and I reverted. Unfortunately, it's too late today to get a new measurement. Since we have changed parameters, we would need a calibration measurement before we could generate a new model to push. Hopefully we can get a good measurement this Saturday. Whatever has changed about the calibration, I don't think it's from the SRCL offset. Also, correcting the L3 actuation strength was useful, but it doesn't account for the discrepancy we are seeing.
It turns out that changing the H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN was the WRONG thing to do. The point was to only update the drivealign gain to bring us back to the N/ct actuation strength of the model. We lost lock shortly after I updated the drivealign gains, so I didn't realize the error until just now when I checked the grafana page and saw that the monitoring lines were reporting 10% uncertainty!
Vlad and Louis helped me out. By going out of observing and changing the H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN back to the old value (88.285), I was able to bring the monitoring line uncertainty back down to its normal value (2%). I have undone the SDF in the CAL CS model to correct this.
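For completeness, the revert itself is just writing the old value back to the channel; something like this pyepics sketch (the same thing can be done from MEDM):

```python
from epics import caput, caget

chan = 'H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN'
caput(chan, 88.285)   # restore the old drivealign gain
print(caget(chan))    # confirm the write
```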
The observing time section with this error is from 1444075529 to 1444081417.
Updating this alog after discussion with the cal team to include a detchar-request tag. Please veto the above time!
Request: veto time segment listed above.
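For convenience, the GPS bounds convert to UTC easily with gwpy (a sketch; the times are the ones quoted above):

```python
from gwpy.time import from_gps

start, end = 1444075529, 1444081417
print(from_gps(start), 'to', from_gps(end), 'UTC')
print(round((end - start) / 3600, 2), 'hours of observing time affected')
```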
I also reverted the gain change in the pydarm ini file, but forgot to mention it earlier.
Just had a lockloss while Robert was out on the floor during commissioning time (ends a 20.5+hr lock).
NOTE: For this particular Thurs Calibration, Commissioning work was done first for almost 2-hrs and then this Calibration was run.
Measurement NOTES:
Still having this weird error with the report generation. I regenerated, see the attachment.
I'm unhappy with this result, but that requires more detail to explain, alog incoming. In quick summary, I don't think I will be validating this report.
Thu Oct 09 10:08:21 2025 INFO: Fill completed in 8min 18secs
Gerardo confirmed a good fill curbside.
Elenna, Camilla, Matt
camilla.compton/Documents/sqz/templates/dtt/20251009_SRCLSQZdata.xml and attached.

| Type | Time (UTC) | SRCL Offset (ct) | Angle | DTT Ref |
|---|---|---|---|---|
| NoSQZ | 15:34:00 - 15:42:00 | -382 | N/A | ref 0 | 
| FIS SQZ | 16:01:00 - 16:04:00 | -382 | (-)154.7 | ref 1 | 
| FIS SQZ | 16:06:30 - 16:09:30 | -200 | (-)210.6 | ref 2 | 
| FIS SQZ | 16:13:45 - 16:16:45 | 0 | (+)135.6 | ref 3 | 
| FIS SQZ | 16:49:30 - 16:52:30 | -450 | (-)144.6 | ref 4 | 
Took the above data at an NLG of 21.9, measured earlier in 87385.
We went into no-SQZ from 15:34 UTC to 15:42 UTC. I checked the NLG as in 76542.
| OPO Setpoint | Amplified Max | Amplified Min | UnAmp | Dark | NLG |
|---|---|---|---|---|---|
| 80 | 0.0134871 | 0.00017537 | 0.0005911 | -2.57e-5 | 21.9 |
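The NLG in the table follows from the dark-subtracted amplified and unamplified levels (assuming the usual definition; this reproduces the 21.9):

```python
amp_max = 0.0134871
unamp = 0.0005911
dark = -2.57e-5

nlg = (amp_max - dark) / (unamp - dark)
print(round(nlg, 1))   # 21.9
```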
Elenna noticed that this OPO temperature change made SQZ worse in the yellow BLRMs. I then ran SQZ_OPO_LR's SCAN_OPOTEMP state, which moved the OPO temperature further in the same direction. This unlocked SQZ (it shouldn't have), but it did make the yellow BLRMs better.
TITLE: 10/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.18 μm/s 
QUICK SUMMARY:
H1's been locked 16.5+hrs (even rode through an M6.0 Russia EQ an hr ago!).
OVERNIGHT: SQZ dropped H1 from Observing for 4min due to the PMC. NO Wake up Calls!
ALSO: Thurs Calibration window from 8:30-noon (local time); the calibration will be deferred until later in the morning, with Elenna's and/or Robert's tasks starting things off.
Did the Dust Monitor check, and a new (to me at least) problem came up for DR1 (Diode Room 1).
TITLE: 10/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, 7 hours. Calm evening.
LOG:                                                                                                                    
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End | 
|---|---|---|---|---|---|---|
| 22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 13:52 | 
| 16:48 | psl | ryanS.jason | optics lab | YES | NPRO! | 00:43 | 
| 22:36 | iss | keita.jennyq | optics lab | yes | iss array align | 00:53 | 
| 22:46 | ISS | Rahul | Optics lab | YES | Join iss array work | 00:05 | 
| 00:02 | FAC | Tyler | X1 beamtube | N | Checks | 00:30 | 
Closes FAMIS28426, last checked in alog87272
ETMX's Veff error bars are pretty big, as expected. The values look fairly stable over time.
TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
One ETMx glitch lockloss, and a fire was observed on Rattlesnake.
LOG:
TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s 
QUICK SUMMARY:
At 2047 UTC, H1 went out of Observing due to the SQZ PMC. It was almost back, but then H1 had a lockloss at 2050 UTC due to an ETMx glitch. Recovery was fairly quick, but I needed to tweak ETMx and mostly TMSx. Then DRMI looked bad, so I went through CHECK MICH FRINGES + PRMI. After that, H1 got back to OBSERVING automatically.
Jennie W, Keita, Rahul
Monday:
Keita and I re-optimised the array input pointing using the tilt actuator on the PZT steering mirror and the translation stage.
After taking a vertical and horizontal coupling measurement, he realised that the DC values were very low when we optimised the pointing to improve the vertical coupling. Looking at the QPD, the cursor was in the bottom half, so we cannot use the QPD y-readout channel to work out the 'A' channel for either measurement (where the TF is B/A).
So for the A channel in the TF for the vertical coupling we had to use 
A = QPD_X / cos(QPD_angle*pi/180) / sqrt(2) / Calib_X, where A is the time series for the TF, QPD_angle is the angle between the horizontal dither axis and the QPD x-axis, and Calib_X is the calibration of motion on the QPD in the x-axis in V/mm (LHO alog #85897).
And for the A channel in the TF for the horizontal coupling we had to use 
A = QPD_X / cos(QPD_angle*pi/180) / Calib_X.
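Written out as code, the two A-channel constructions above are (a direct transcription of the formulas, with QPD_angle in degrees and Calib_X in V/mm):

```python
import numpy as np

def a_channel_vertical(qpd_x, qpd_angle_deg, calib_x):
    """A channel for the vertical-coupling TF."""
    return qpd_x / np.cos(np.deg2rad(qpd_angle_deg)) / np.sqrt(2) / calib_x

def a_channel_horizontal(qpd_x, qpd_angle_deg, calib_x):
    """A channel for the horizontal-coupling TF."""
    return qpd_x / np.cos(np.deg2rad(qpd_angle_deg)) / calib_x
```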
The data is plotted here.
Yesterday Keita and I double-checked the beam size calculation I did on 26th June, when we reset the alignment from the laser to the ISS array after it got bumped (we think). The calculated beam radius on PD1 (the one with no transmissions through the mirrors) was 0.23 mm in the x direction and 0.20 mm in the y direction. The calculated beam size on the QPD was 0.20 mm in the x direction and 0.19 mm in the y direction. The waist should be close to this point (behind the plane of the array photodiodes), as the Rayleigh range is 9 cm in the x direction and 10 cm in the y direction.
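As a sanity check on the "waist is close to the PDs" statement, the quoted Rayleigh ranges imply waists only slightly smaller than the measured spot sizes, assuming a 1064 nm beam (the wavelength is not stated above):

```python
import math

lam = 1064e-9                 # assumed wavelength [m]
for z_r in (0.09, 0.10):      # quoted Rayleigh ranges [m]
    w0 = math.sqrt(z_r * lam / math.pi)   # from z_R = pi * w0**2 / lambda
    print(f"z_R = {z_r*100:.0f} cm -> w0 ~ {w0*1e3:.2f} mm")   # ~0.17-0.18 mm
```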
This check is because our beam calibration, as reported in this entry, seems to be at least a factor of 2 off from Mayank and Shiva's measurements reported here (DCC LIGO-T2500077).
Since we already know the QPD was slightly off from our measurements on the 6th October, Keita and Rahul went in and retook the calibration measurements of volts on the QPD to mm on the QPD.
In the process, Keita noticed that the ON-TRAK amplifier for the QPD had the H indicator lit and so was saturating. He turned the input pump current down from 130 mA to 120 mA and the gain setting on the amplifier from G3 (64 kohm) to G2 (16 kohm). The QPD was also recentred on the input pointing position where we had good vertical and horizontal coupling, since we had left it in the position we found on Monday, where it was off in yaw. We had to do several iterations of alignment, switching between vertical and horizontal dither, and still could only find a place where the coupling of PDs 1-4 was optimised; PDs 5-8 have bad coupling at this point. At this position we also took calibration measurements, scanning the beam and noting down the voltages on the QPD X, Y and SUM channels.
Keita notes that for the QPD to be linear the voltages need to be below +/- 7V.
I will attach the final set of measurements in a comment below.
We left the alignment in this state with respect to the bullseye QPD readout.
The coupling measurement from the last measurements we took on Tuesday is here, and the calibration of the motion on the QPD is here.
I had been calibrating the above data using the Calib_X and Calib_Y values individually instead of sqrt(Calib_X^2 + Calib_Y^2).
This is fixed in the attached graph.
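In other words, the calibration factor actually applied is the quadrature sum of the two axis calibrations; a one-liner for clarity:

```python
import math

def combined_calib(calib_x, calib_y):
    """Combined QPD calibration [V/mm]: quadrature sum of the X and Y calibrations."""
    return math.sqrt(calib_x**2 + calib_y**2)
```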
Also, 3 of the PDs are starting to near the edge of their aligned range, which can be seen by looking at the spread of DC values on the array PDs in this graph.