Data saved in camilla.compton/Documents/sqz/templates/dtt/20250908_SQZdata.xml and attached.
| Type | Time (UTC) | Angle | DTT Ref |
| No SQZ | 16:09:30 - 16:19:00 | N/A | ref 0 |
| FIS SQZ (tuned for 1kHz) | 16:22:00 - 16:25:00 | (-)137.5 | ref 1 |
| FIS Mid + SQZ (tuned to no sqz) | 16:26:30 - 16:29:30 | (-)165.0 | ref 2 |
| FIS Mid - SQZ (tuned to no sqz) | 16:32:00 - 16:35:00 | (-)120.4 | ref 3 |
| FIS ASQZ | 16:39:00 - 16:42:00 | (+)252.2 | ref 8 |
| Mean SQZ (ADF off) | 16:42:30 - 16:45:30 | N/A | ref 9 |
| FIS tuned for 100Hz (to look at thermal noise limits) | 16:52:00 - 16:55:00 | (-)119.5 | ref 10 |
| FIS tuned for 100Hz +5deg | 16:55:30 - 16:58:30 | (-)124.2 | ref 11 |
| FIS tuned for 100Hz -5deg | 16:59:00 - 17:02:00 | (-)113.8 | ref 12 |
We tuned FIS around 100 Hz and then changed the SQZ angle by +/-5 deg; plot attached. Sheila thinks this could help us constrain thermal noise in the future.
Plot attached of today's data (SRCL offset at nominal -382) compared to last week's while OM2 was hot (with SRCL offset tuned at -235). Solid lines are cold OM2, dashed lines are hot OM2. FIS is better with cold OM2, as expected.
We noticed that mean SQZ was different below 100 Hz with OM2 hot vs. cold. This persisted when I changed the averages, so it wasn't caused by a glitch, and it also appeared in Sheila's 86736 data. We changed the SRCL offset to see if that was the cause, but it didn't appear to be (plot attached), so the cause is OM2.
| Type | Time (UTC) | SRCL Offset | DTT Ref |
| Mean SQZ (ADF off) | 16:42:30 - 16:45:30 | -382 | ref 9 |
| Mean SQZ (ADF off) | 17:17:00 - 17:20:00 | -235 | ref 15 |
| Mean SQZ (ADF off) | 17:08:30 - 17:10:00 | -100 | ref 13 |
| Mean SQZ (ADF off) | 17:11:00 - 17:14:00 | 0 | ref 14 |
| OPO Setpoint | Amplified Max | Amplified Min | UnAmp | Dark | NLG |
| 80 | 0.0395963 | 0.00050063 | 0.00170524 | -2.611e-5 | 22.9 |
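As a sanity check on the NLG column, the value follows from the scan numbers in the table above, assuming the usual dark-offset-corrected definition (amplified max over unamplified, each with the dark level subtracted). A minimal Python sketch:

```python
# Nonlinear gain (NLG) from the OPO scan values in the table above.
# Assumes the standard definition: NLG = (amplified max - dark) / (unamplified - dark)
amp_max = 0.0395963   # Amplified Max
unamp = 0.00170524    # UnAmp
dark = -2.611e-5      # Dark offset

nlg = (amp_max - dark) / (unamp - dark)
print(f"NLG = {nlg:.1f}")  # -> 22.9, matching the table
```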
Note for Operators, changing the SRCL offset put the THERMALIZATION guardian into error. This is fine.
Once the SRCL offset is back to nominal, just load the THERMALIZATION GRD and it will go back to normal. I'm not sure what would happen if this were done early in the lock while the GRD was still ramping the offset, though.
Mon Sep 08 10:07:09 2025 INFO: Fill completed in 7min 5secs
Gerardo confirmed a good fill curbside. TC-A became good again at 05:56 this morning and looks nominal in today's fill.
Gerardo inspected the TCs and suspects TC-A wiring had been moved by animal activity. Why it corrected itself this morning is a mystery.
FAMIS 31102
The RefCav alignment has been drifting again this week and causing the signal on the TPD to drop, so this could use a touchup. I'll try to find some opportunistic time to do this after a lockloss either today or later this week.
No other major events of note.
TITLE: 09/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
15:20 UTC: Superevent S250908y came through, so we delayed the start of commissioning until 15:43 UTC even though no automated stand-down was generated from the event. According to GraceDB the event was actually at 15:02, so there appears to have been some kind of notification delay; we probably could have gone ahead at 15:30.
The dust monitors in the diode room and the optics lab were reporting stuck counts. I physically restarted the DR dust monitor, then its IOC, since the power cycle alone didn't help. The optics lab monitor was frozen and a power cycle fixed it; I restarted its IOC as well, though I probably didn't need to. I reset the alarm levels after restarting the IOCs. The reference wiki page describes how to restart the dust monitor IOCs.
TITLE: 09/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 14:13 UTC (37 hr lock!)
LOG:
TITLE: 09/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We stayed locked the whole shift, over 31 hours.
LOG: No log.
TITLE: 09/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 14:13 UTC (31 hr lock!)
Sun Sep 07 10:07:43 2025 INFO: Fill completed in 7min 39secs
Good fill. TC-A continues to be all over the shop, occasionally giving believable readings. Similar to yesterday it was TC-A which had a larger Delta-temp and triggered the end-of-fill, closely followed by TC-B.
TITLE: 09/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
I was restarting some FOMs and noticed that PIMON looked frozen; sure enough, the last lockloss data it saved in /ligo/data/pimon/locklosses/ was from Aug 19 2025 06:54:13 UTC. I restarted it at 15:00 UTC.
Oli, Camilla (Oli did the work, I just wrote down what they did!)
Oli had been troubleshooting SQZ and found that when the OPO ISS tried to engage, it went to 10 V and then unlocked. The OPO Trans output was only ~50 uW in LOCKED_CLF_NO_ISS, meaning the ISS was maxing out its 0-10 V range trying to reach the nominal setpoint of 80 uW.
We could see on the SQZT0 MEDM that the SHG output power was 81.2 mW, when nominal is ~100 mW; this reduces the power available to the OPO path. We followed the wiki to increase the SHG power by adjusting the SHG SETTEMP (I should make the instructions on when to do this clearer). Doing this, Oli brought the SHG power up from 81 mW to 92 mW.
Then the OPO Trans output was ~65 uW in LOCKED_CLF_NO_ISS (nominal 80 uW), so Oli moved the waveplate between the +200 MHz AOM and the SHG Rejected PD in the "to OPO" path to increase the power sent into the SHG launch. After that the AOM had enough range and the OPO could lock in LOCKED_CLF_DUAL_NO_ISS.
Then Oli had issues getting the FC to lock in green, so followed the wiki instructions for that too: trending back to the last good alignments with ZM3 (which had moved 20 urad on the FC lockloss from ASC turning off), then manually locking the FC and increasing power with FC2 (and FC1, ZM3) alignments. Oli increased the power in the 0,0 mode from 35 to 65 with alignments. Nominally this is >80, but it had only been at 50 in the last lock, probably because the SHG power was low.
Back to Observing at 7:13am PT.
TITLE: 09/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 20:03 UTC (13 hr lock)
Quiet shift. No locklosses. No Earthquakes.
The "bicycle" oscillations, where the CHARD_Y and DHARD_Y signals oscillate for ~20 s, seem anecdotally more frequent than during yesterday's shift.
LOG:
TITLE: 09/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: GWSTAT is still down, we've been locked for 7.5 hours.
LOG: No log
TITLE: 09/06 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 20:03 UTC (7 hr lock)
Well behaved, quiet and locked.
Calibration monitor before the measurement, measurement report.
Broadband:
Start: 2025-09-06 19:31:00 UTC
Stop: 2025-09-06 19:36:11 UTC
File: 2025-09-06 12:31:00,345 measurement output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250906T193100Z.xml
Simulines:
Start: 2025-09-06 19:37:39 UTC // 1441222677 GPS
Stop: 2025-09-06 20:01:00 UTC // 1441224078.696016 GPS
Files:
2025-09-06 20:01:00,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250906T193739Z.hdf5
2025-09-06 20:01:00,548 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250906T193739Z.hdf5
2025-09-06 20:01:00,552 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250906T193739Z.hdf5
2025-09-06 20:01:00,557 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250906T193739Z.hdf5
2025-09-06 20:01:00,561 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250906T193739Z.hdf5
Sat Sep 06 10:07:21 2025 INFO: Fill completed in 7min 17secs
From TC-B it looks like a good fill. TC-A started misbehaving at 06:47 this morning, it had rejoined TC-B just prior to the fill at 09:45 before diverging again.
For reference sunrise today was 06:25. Weather is overcast.
Note that it was TC-A which actually triggered the end-of-fill, its temp went from -30C to -200C in 5 seconds. For reference during Friday's fill it went from -90C to -200C in 15 seconds.