FAMIS 31096
No major changes this week, but the FSS RefCav trans TPD jumps I first noticed last week are still happening, so I continued my investigation into their cause.
Thinking these jumps were being caused by some frequency feedback from the IMC or the arms, since they don't seem to line up with anything else in the PSL, I trended some IMC signals looking for similar behavior. Sure enough, several IMC electronics signals, including IMC-F, have trends that follow the jumps in power seen on the FSS TPD. See the final screenshot for a before/after comparison of these signals around last Tuesday's maintenance period, when this change began. Also, in looking at more locking examples, I can say that the jumps start occurring towards the end of 'LOCKING_ALS' or during 'FIND_IR' in main locking and persist until the next lockloss. Again, I'm not sure anything really needs to be done about this, as it hasn't caused any problems that we've noticed so far, but it would be interesting to know why this change started and whether it goes away, so I'll continue to keep an eye on it.
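For anyone who wants to repeat this kind of check, below is a minimal gwpy sketch of the trend comparison; the channel names and GPS window are illustrative guesses, not necessarily the exact ones used for the attached plots.

# Minimal sketch: trend the FSS RefCav trans PD against IMC-F around the
# maintenance period to look for coincident jumps. Channel names here are
# assumptions and should be checked against the channel list before use.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    "H1:PSL-FSS_TPD_DC_OUT_DQ",   # FSS RefCav trans PD (assumed name)
    "H1:IMC-F_OUT_DQ",            # IMC frequency control signal (assumed name)
]

# GPS window spanning last Tuesday's maintenance period (placeholder values)
start, end = 1437000000, 1437020000

data = TimeSeriesDict.get(channels, start, end, verbose=True)

# Stack the two channels on separate, time-aligned axes
plot = Plot(*data.values(), separate=True, sharex=True)
plot.savefig("fss_tpd_vs_imcf_trend.png")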
Closes FAMIS 26432, last checked in alog 85850
Laser Status:
NPRO output power is 1.866W
AMP1 output power is 70.15W
AMP2 output power is 140.8W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 13 days, 0 hr 18 minutes
Reflected power = 23.56W
Transmitted power = 105.5W
PowerSum = 129.1W
FSS:
It has been locked for 0 days 2 hr and 27 min
TPD[V] = 0.8073V
ISS:
The diffracted power is around 3.9%
Last saturation event was 0 days 2 hours and 28 minutes ago
Possible Issues:
PMC reflected power is high
No changes from last week.
Mon Jul 28 10:09:24 2025 INFO: Fill completed in 9min 21secs
Looking at the (few) locklosses over the weekend, there was only one that looked like it may have been from the TMSX_Y oscillation we've been seeing recently (alog 85973).
07/26/25 17:26 1437586004 No ASC
07/27/25 01:52 1437616362 No ASC
07/27/25 13:39 1437658811 ASC?
07/28/25 03:13 1437707641 No ASC
On Friday, Fil swapped the satamp chassis (alog) because we thought that might be the issue. However, we still appear to be seeing this lockloss, so perhaps we need to try swapping the coil driver chassis next.
Took a calibration measurement as the first thing for commissioning this morning (since we missed our window over the weekend).
BB Start: 1437750930
BB End: 1437751248
Calibration Monitor Attached
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250728T151520Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250728T151520Z.xml saved
diag> quit
EXIT KERNEL
2025-07-28 08:20:30,384 bb measurement complete.
2025-07-28 08:20:30,384 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250728T151520Z.xml
2025-07-28 08:20:30,384 all measurements complete.
This compares the broadband from last Thursday (blue reference) to today (red live).
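For reference, one way to make this kind of comparison offline is sketched below, using the start/end GPS times quoted for the injections; the channel names, FFT settings, and the reference-day GPS window are assumptions (placeholders), not the values used by the calibration pipeline.

# Minimal sketch: overlay the PCALY-to-DARM transfer function magnitude for two
# broadband injection windows. Channel names, FFT settings, and the reference
# window are illustrative assumptions.
import numpy as np
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

PCAL = "H1:CAL-PCALY_RX_PD_OUT_DQ"    # assumed PCALY readback channel
DARM = "H1:CAL-DELTAL_EXTERNAL_DQ"    # assumed DARM (delta L) channel

def pcal2darm_tf(start, end):
    """Fetch both channels and return the averaged PCAL-to-DARM transfer function."""
    data = TimeSeriesDict.get([PCAL, DARM], start, end)
    return data[PCAL].transfer_function(data[DARM], fftlength=16, overlap=8)

tf_ref = pcal2darm_tf(1437330000, 1437330318)   # last Thursday's window (placeholder GPS)
tf_new = pcal2darm_tf(1437750930, 1437751248)   # today's window, quoted above

plot = Plot(np.abs(tf_ref), np.abs(tf_new))
ax = plot.gca()
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel("|PCALY -> DARM|")
plot.savefig("pcal2darm_bb_comparison.png")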
Broadband Re-run (Part of commissioning)
Start: 1437759350
End: 1437759670
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250728T173540Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250728T173540Z.xml saved
diag> quit
EXIT KERNEL
2025-07-28 10:40:51,396 bb measurement complete.
2025-07-28 10:40:51,396 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250728T173540Z.xml
2025-07-28 10:40:51,397 all measurements complete.
The second measurement Ibrahim has posted here was performed when we were at double ESD bias. We corrected the ETMX drivealign L2L gain so that kappa_TST would be 1. Even with this adjustment, there appears to be some frequency-dependent difference between the measurements.
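For context, the gain correction amounts to rescaling the drivealign gain by the measured kappa_TST; a rough sketch of the arithmetic is below, under the simple assumption that kappa_TST scales linearly with the digital gain (the numbers are placeholders, not the values actually used).

# Rough sketch of the drivealign gain correction. Assumes kappa_TST is simply
# proportional to the product of the digital L2L gain and the ESD actuation
# strength, so dividing the gain by the measured kappa_TST brings it back to ~1.
kappa_tst = 1.9     # example: kappa_TST measured after doubling the ESD bias (placeholder)
old_gain = 100.0    # example: ETMX L3 drivealign L2L gain before correction (placeholder)

new_gain = old_gain / kappa_tst
print(f"new drivealign L2L gain ~ {new_gain:.2f}")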
We had a power glitch yesterday, Sunday 27jul2025 at 12:34:27 PDT. It was detected by the MSR UPS, but not by any other UPS. Attached plot shows the CS Mains Mon at this time. No reports of any issues arising from this glitch; H1 was in lock throughout.
Note that the MSR UPS system time is about 13 minutes fast; it reported this glitch as 12:47:13.
TITLE: 07/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.04 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 04:28 UTC. Planned commissioning will take place from 08:30 PT to 11:30 PT.
We dropped out of OBSERVING for 5 minutes (14:18 to 14:23 UTC) due to the squeezer relocking.
TITLE: 07/28 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Rode through a surprisingly "big" M5 Iceland EQ, which was seen on SUS PRM and the seismometers, only to have a random lockloss 3 hrs later.
LOG:
TITLE: 07/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 25mph Gusts, 16mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Ibrahim is handing over an H1 that has been locked for 8.5 hrs. Environmentally, the only notable item is the current wind, and we just rode through an M5.4 Aleutian Islands/Alaska EQ.
Notes From Ops Shift Check Sheet
1) FOM Scan
USGS webpage needed a reload (it had frozen).
2) Dust Monitor Check Notifications for LVEA5 & LAB2
Ran the "check_dust_monitors_are_working" script and got the usual notifications.
3) Access System "Flashing Doors"
4) LHO Control Room Screenshots & FOMs
TITLE: 07/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 15:07 UTC
We have been locked all shift, having acquired lock at the very beginning.
We're currently riding through an M5.3 EQ from Alaska without issue.
Nothing else of note.
LOG:
Sun Jul 27 10:09:18 2025 INFO: Fill completed in 9min 14secs
TITLE: 07/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
IFO is LOCKING at PREP_DC_READOUT_TRANSITION
Came in and it seemed that the IFO lost lock at 13:39 UTC (about an hour before shift), but it managed to automatically make it back past DRMI.
OBSERVING as of 15:07 UTC
As part of the DQ shift, I was looking at the DARM spectrograms to identify non-stationary features (on the order of hours). Below is a list of features that were relatively obvious from the spectrograms. I used GDS-CALIB_STRAIN_NOLINES channel data for July 21, 2025. In order to see the frequency variations a bit more clearly, I used 100-second spectrograms.
To identify possible sources for the features listed in the above alog, I have produced spectrograms of some of the important auxiliary and environmental channels. I used the same FFT parameters as the GDS plots and also restricted them to the same frequency regions as the GDS plots above. All of these plots are available at the link.
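For reference, a minimal gwpy sketch of the kind of 100-second spectrogram used for these plots is below; the FFT parameters, time span, and plotting ranges are illustrative assumptions, not necessarily the ones actually used.

# Minimal sketch: 100-second spectrograms of the cleaned strain channel for a
# stretch of July 21, 2025. FFT parameters and plotting ranges are assumptions.
from gwpy.timeseries import TimeSeries

channel = "H1:GDS-CALIB_STRAIN_NOLINES"
start, end = "2025-07-21 12:00:00", "2025-07-21 13:00:00"   # one example hour

data = TimeSeries.get(channel, start, end)

# 100-second strides, each averaged from 10-second FFTs with 50% overlap,
# converted from PSD to ASD by taking the square root.
specgram = data.spectrogram(100, fftlength=10, overlap=5) ** (1 / 2.)

plot = specgram.plot(norm="log")
ax = plot.gca()
ax.set_yscale("log")
ax.set_ylim(10, 1000)
ax.set_ylabel("Frequency [Hz]")
plot.savefig("darm_spectrogram_20250721.png")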