Broadband ran before the calibration push:
pydarm measure --run-headless bb
2025-08-28 08:03:41,609 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
.....computer noises.....
Completed successfully!
Calibration was pushed by Elena
!!! ANOTHER WILD BROADBAND Appears !!!
pydarm measure --run-headless bb
2025-08-28 08:37:10,096 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-08-28 08:37:10,113 available measurements:
pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
bb : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)
sens: sensing function (/ligo/groups/cal/H1/ifo/templates/DARMOLG_SS__template_.xml)
act1x: actuation X L1 (UIM) stage response (/ligo/groups/cal/H1/ifo/templates/SUSETMX_L1_SS__template_.xml)
act2x: actuation X L2 (PUM) stage response (/ligo/groups/cal/H1/ifo/templates/SUSETMX_L2_SS__template_.xml)
act3x: actuation X L3 (TST) stage response (/ligo/groups/cal/H1/ifo/templates/SUSETMX_L3_SS__template_.xml)
open
restore /ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml
run -w
save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250828T153710Z.xml
"The new calibration looks good" ~Elenna circa late Aug 2025
Finally some SIMULINES :
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_20250212.ini;gpstime
PDT: 2025-08-28 08:45:54.867509 PDT
UTC: 2025-08-28 15:45:54.867509 UTC
GPS: 1440431172.867509
....Computer noises ....
Oh no!!! :(
LOCKLOSS during Simulines!
2025-08-28 16:13:08,168 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
PDT: 2025-08-28 09:13:08.423541 PDT
UTC: 2025-08-28 16:13:08.423541 UTC
GPS: 1440432806.423541
Answer (from Sheila and Elenna): no.
With Jenne's beamsplitter slow let-go servo, we have been wondering whether, after a lockloss, we should just skip initial alignment while the servo does its job. According to Jenne and Ryan, the beamsplitter takes 17 minutes to cool down. Meanwhile, the time from the start of initial alignment to the start of the beamsplitter initial alignment is 10 minutes, plus an additional 3 minutes from the start of MICH bright align to MICH bright offload (we watched today's initial alignment, so these are snapshot numbers).
We think that if an initial alignment is run immediately after lockloss, the resulting error in the beamsplitter alignment by the time we reach DRMI will be small, since the initial alignment process will overwrite the slow let go servo.
However, if we don't run initial alignment (because we think the alignment is good or whatever), the slow let go should help when we get to DRMI.
So the answer is: we should run initial alignment based on the usual judgment, i.e. does the alignment look good enough to lock? We can ignore the beamsplitter heating and the slow let-go as a reason to run or skip initial alignment.
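The timing argument above can be sketched as simple arithmetic (the numbers come from today's snapshot; the function name is just for illustration, not a site tool):

```python
# Timing sketch: if we start initial alignment immediately after lockloss,
# is the beamsplitter still inside its cooldown window when we align it?
BS_COOLDOWN_MIN = 17           # beamsplitter slow let-go settling time (per Jenne/Ryan)
IA_START_TO_BS_MIN = 10        # start of initial alignment -> start of BS alignment
MICH_ALIGN_TO_OFFLOAD_MIN = 3  # MICH bright align -> MICH bright offload

def bs_still_cooling_at_bs_alignment():
    """True if the BS is still cooling when initial alignment reaches it."""
    time_at_bs_offload = IA_START_TO_BS_MIN + MICH_ALIGN_TO_OFFLOAD_MIN
    return time_at_bs_offload < BS_COOLDOWN_MIN

print(bs_still_cooling_at_bs_alignment())  # True: 13 min < 17 min
```

Since 13 minutes is inside the 17-minute cooldown, the initial alignment overwrites the servo before it finishes, which is why the error is expected to be small.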
We lost lock in move spots (lockloss tool). It looks like the MICH LSC error signal got very large 1 second before the lockloss; not sure why.
The PRC1 pitch ASC is no longer working properly, despite its successful test in this alog. While trying to debug this problem, I monitored the REFL 9, 45 and POP WFS signals and none of them crossed zero at the appropriate time. However, the PRC1 yaw ASC is working just fine.
Tony informed me that we have been through PRMI ASC five times since the PRC1 ASC was reengaged. To my knowledge this is the first time it was bad.
Thu Aug 28 10:09:21 2025 INFO: Fill completed in 9min 18secs
Gerardo confirmed a good fill curbside.
Closes FAMIS 28420, last checked in alog 86471.
IX did not run, with the following errors popping up (only error lines copied):
Coherence for bias_drive_bias_off is 0.015409917227075794, which is below the threshold of 0.1. Skipping this measurement
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1440255043
Coherence for bias_drive_bias_off is 0.0825334918693312, which is below the threshold of 0.1. Skipping this measurement
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1438440647
Previously analyzed ETMY_12_Hz_1439650262 - skipping
Analyzing data in ITMX_13_Hz_1437835843...
Reading time series for bias_drive_bias_on
Reading time series for L_drive_bias_on
Reading time series for bias_drive_bias_off
Reading time series for L_drive_bias_off
Coherence for bias_drive_bias_off is 0.07205627679480447, which is below the threshold of 0.1. Skipping this measurement
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1437835843
Thanks Ibrahim! In the past we've concluded that this is a good sign and means the charge build up on ITMX is low: 81858
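The skip logic in the log output above amounts to a coherence gate: any drive below the 0.1 threshold is dropped, and the alpha/beta/gamma fits are then abandoned. A minimal sketch of that gate (the threshold is from the log; the function and data structure are hypothetical, not the actual charge-measurement script):

```python
# Coherence gate implied by the "below the threshold of 0.1" messages above.
COHERENCE_THRESHOLD = 0.1

def usable_measurements(coherences, threshold=COHERENCE_THRESHOLD):
    """Keep only drives whose coherence clears the threshold; the real
    analysis skips the alpha/beta/gamma fits if any required drive fails."""
    return {name: c for name, c in coherences.items() if c >= threshold}

coh = {
    "bias_drive_bias_on": 0.95,
    "L_drive_bias_on": 0.92,
    "bias_drive_bias_off": 0.072,  # fails, as in the ITMX_13_Hz logs above
    "L_drive_bias_off": 0.88,
}
ok = usable_measurements(coh)
print("bias_drive_bias_off" in ok)  # False -> beta/alpha fits get skipped
```

Low coherence on the bias drive is consistent with low charge: the bias drive simply doesn't couple strongly enough to DARM to measure.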
Closes FAMIS 26551. Last checked in alog 86515
Laser Status:
NPRO output power is 1.873W
AMP1 output power is 70.19W
AMP2 output power is 141.3W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 16 days, 0 hr 26 minutes
Reflected power = 23.93W
Transmitted power = 105.2W
PowerSum = 129.1W
FSS:
It has been locked for 0 days 0 hr and 39 min
TPD[V] = 0.8058V
ISS:
The diffracted power is around 4.2%
Last saturation event was 0 days 0 hours and 39 minutes ago
Possible Issues:
PMC reflected power is high
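A quick consistency check on the PMC numbers above: reflected plus transmitted should match the reported PowerSum, and the reflected fraction is what the "PMC reflected power is high" flag tracks (this is illustrative arithmetic, not the official FAMIS check; any threshold would be an assumption):

```python
# Sanity-check the PMC report: PowerSum should be reflected + transmitted.
reflected = 23.93    # W, PMC reflected power
transmitted = 105.2  # W, PMC transmitted power
power_sum = 129.1    # W, reported PowerSum

assert abs((reflected + transmitted) - power_sum) < 0.1  # consistent to rounding

# Fraction of the input lost to reflection; "reflected power is high"
# means this fraction has crept up relative to historical values.
refl_fraction = reflected / power_sum
print(f"{refl_fraction:.1%}")  # prints 18.5%
```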
Today we pushed calibration report 20250823T183838Z, which updates the calibration model after changes to the actuation (ESD bias change) and sensing (SRC alignment offset change).
These are the steps I took (and two mistakes I made):
pydarm report --regen 20250823T183838Z
with the ESD bias and drivealign L3 L2L gains updated in the ini file. I checked to make sure the results were sensible. There is a change in the sensing function at low frequency, probably from an L2A2L related change.
pydarm report --regen 20250823T183838Z
to get the GDS filters, and checked the GDS filters to ensure they looked normal
pydarm commit 20250823T183838Z --valid --message 'new calibration push, new ESD bias and SRC ASC offsets'
pydarm export --push 20250823T183838Z
pydarm upload 20250823T183838Z
pydarm gds restart
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN had not been updated with the new drivealign gain. Once I updated it, CAL DELTA L looked correct.
The attached plot compares the before and after PCAL broadband to GDS CALIB STRAIN. Tony will post the usual alog about the simulines measurement once it is complete.
Sadly, we lost lock mid-simulines measurement. However, the broadband pcal measurement showed success, so we are happy to keep this calibration and we will (hopefully) get simulines Saturday.
Here is the same plot as above, except with PCAL/GDS instead of GDS/PCAL.
We've had two locks since this push, and it appears the systematic error during thermalization is even lower than it was before. We updated the SRCL offset during thermalization, partially because it reduces the systematic error. It appears we do not need to update the thermalization servo, as the systematic error of the 33 Hz line is 2% or less during thermalization.
J. Kissel

After this Tuesday's maintenance day, when I installed the H1 SUS PR3 pitch and yaw estimators (LHO:86578), I'd thought I'd turned them OFF. I'd accidentally left the pitch estimator ON. Whoops! I've turned them OFF this morning -- if only to get some data with *just* the improved SR3 estimators (i.e. that the SR3 pitch estimator now includes longitudinal sus point to M1 contributions; see LHO:86567 and LHO:86589).

The first nominal low noise and subsequent observation segment with SR3 P & Y (P with improved Sus Point L to M1 P contribution) and PR3 P (also with Sus Point L to M1 P contribution included) was right after maintenance, 2025-08-26 21:16 UTC, but really it had been on from 2025-08-26 17:44 UTC. I turned the PR3 pitch estimator off by 2025-08-28 15:15 UTC.

For reference, assuming everything upstream of the switch is on and functional, you can look at the "use estimator or use OSEM" switch status to check if the estimators are on. The current status is
H1:SUS-PR3_M1_EST_P_SWITCH_NEXT_CHAN 2.0
H1:SUS-PR3_M1_EST_Y_SWITCH_NEXT_CHAN 2.0
H1:SUS-SR3_M1_EST_P_SWITCH_NEXT_CHAN 3.0
H1:SUS-SR3_M1_EST_Y_SWITCH_NEXT_CHAN 3.0
i.e. (as stated above) the PR3 estimators are OFF = NEXT_CHAN = 2, and the SR3 estimators are ON = NEXT_CHAN = 3.

At a superficial glance, i.e. "we've been in nominal low noise observing since they've been on," the IFO doesn't seem to mind AT ALL. And we have a data point of 1.0 that says that we can make it through initial alignment and lock acquisition with it on as well. We'll post some more quantitative metrics in a bit.
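The switch decoding above (2 = OSEM, 3 = estimator) can be sketched in a few lines. In practice the values would come from EPICS (e.g. pyepics caget); here they are hard-coded from the log, and the function is just an illustration:

```python
# Decode the "use estimator or use OSEM" switch values quoted above.
def estimator_state(switch_value):
    """Map the NEXT_CHAN value to a human-readable estimator state."""
    return {2.0: "OFF (using OSEM)", 3.0: "ON (using estimator)"}.get(
        switch_value, "UNKNOWN")

current = {  # values copied from the log entry above
    "H1:SUS-PR3_M1_EST_P_SWITCH_NEXT_CHAN": 2.0,
    "H1:SUS-PR3_M1_EST_Y_SWITCH_NEXT_CHAN": 2.0,
    "H1:SUS-SR3_M1_EST_P_SWITCH_NEXT_CHAN": 3.0,
    "H1:SUS-SR3_M1_EST_Y_SWITCH_NEXT_CHAN": 3.0,
}
for chan, val in current.items():
    print(chan, "->", estimator_state(val))
```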
TITLE: 08/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
H1 has been locked for 41+ continuous hours without any drops from observing.
Expected drops from Observing:
Today the commissioning time is 1500-1900 UTC
Calibration:
Target of opportunity:
TITLE: 08/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Smooth and easy shift with H1 locked for 31.75hrs.
LOG:
TITLE: 08/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
GRB-Short E594495 15:35:48 UTC
Vacuum MidX PT 343B pressure sensor caused a Verbals alarm due to phone interference with the cold cathode gauge. @ 20:44 UTC
Vac Team said this may take a few days to start working properly again, so it may be red for a while.
SuperEvent S250827fo candidate @ 22:50 UTC !!
H1 has been Locked and Observing for 25+ hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:15 | SPI | Jeff | Optics Lab | N | SPI Inventory | 19:00 |
17:59 | ISS | Rahul | Optics lab | N | Looking for parts | 18:06 |
20:14 | VAC | Janos | MidX MidY | N | Lock out tag out equipment. | 22:14 |
20:20 | SPI | Jeff | Optics Lab | N | SPI Inventory | 20:47 |
20:37 | SPI | Marc | Optics lab | N | Helping Jeff | 20:47 |
22:06 | SPI | Rick | PCAL Lab | Yes | Testing SPI setup for AR & HR coatings measurements | 00:06 |
TITLE: 08/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
H1's been locked for almost 26.5hrs. Environmentally, microseism and winds are both low, and there was recently some light rain!
Sheila, Tony, Camilla
Sheila and Tony noticed that we had a ~30 minute, 10 Mpc range drop last night with related extra low-frequency glitches; from the range BLRMS this is most clear in the 20-34 Hz band. This noise is broadband, 15-100 Hz, looking at DARM (plot). Trending the common range-drop channels from last year (plot), we see a slight increase in FC2_M3_NOISEMON (plot), confirmed as the cause by the DTT spectrum. Sheila proposes the cause could be FC backscatter into the IFO. There is no increase in FC1 noise apart from small regions at 80 Hz, 160 Hz, 190 Hz (plot).
There is slightly more noise at 8-20 Hz in HAM8 during the low-range time, and 3 peaks at 7-9 Hz are gone (plot).
We see the same increased noise in H1:SUS-FC2_M3_NOISEMON_LL_OUT16 in the early morning range drop, see attached.
Running the same template as yesterday (yellow traces, attached), there is increased noise in the FC2 and FC LSC control channels, more noise in the HAM8 ISI, and an ISI peak at 3.3 Hz (there was an earthquake 45 minutes before, but the FC noise started before the EQ). However, the noise in DARM is actually less than yesterday, so the coupling isn't constant.
Jim and I had a look at the HAM8 motion, and although it increases at times when the ground motion increases, those times do not seem correlated with the FC excess movement and range drops (attached). Maybe the 3.3 Hz ISI peak is related to the noisier ground-motion times.
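For reference, the band-limited RMS (BLRMS) behind statements like "most clear in the 20-34 Hz band" is just the RMS of a channel restricted to a frequency band, obtained by integrating the PSD over that band. A generic numpy/scipy sketch on synthetic data (not the site BLRMS pipeline):

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for a DARM-like channel: unit-variance white noise.
fs = 256  # Hz, example sample rate
rng = np.random.default_rng(0)
x = rng.normal(size=64 * fs)

# One-sided PSD via Welch's method, then integrate over the 20-34 Hz band.
f, psd = welch(x, fs=fs, nperseg=4 * fs)   # psd in units^2/Hz
band = (f >= 20) & (f <= 34)
df = f[1] - f[0]
blrms = np.sqrt(np.sum(psd[band]) * df)    # RMS in the 20-34 Hz band
print(round(float(blrms), 3))
```

A rise in this quantity for the 20-34 Hz band is exactly what flagged the range drop.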
Similar to alog 86227, the BTRP adapter flange and GV were installed on Tuesday at the MY station. Leak checking was completed today with no signal seen above the ~e-12 torrL/s background of the leak detector.
Pumping on this volume will continue until next Tuesday, so some additional noise may be seen by DetChar. This volume is valved out of the main volume, so the pressure readings from the PT-243 gauges can be ignored until further notice.
Here are the first and the last pictures of the leak detector values. The max was 3.5e-12; 90% of the time it stayed at <1e-12.
As of Tuesday, August 19, the pumps have been shut off and removed from this system, and the gauge tree valved back in to the main volume. Noise/vibration and pressure monitoring at MY should be back to nominal.
The pumping cart was switched off, and the dead volume was valved back in to the main volume. The pressure dropped rapidly to ~5E-9 within a few minutes, and it continues to drop. Also, we (Travis & Janos) added some more parts (an 8" CF to 6" CF tee; CF to ISO adapters, and an ISO valve) to the assembly, and also added a Unistrut support to the tee; see attached photo. Next step is to add the booster pump itself, and anchor it to the ground.
LOTO was applied now both to the handlers of the hand angle valve and the hand Gate Valve.
Yesterday, we installed the BTRP (Beam Tube Roughing Pump) adapter flange on the 13.25" gate valve just to the -X side of GV13. This included installing an 8" GV onto the roughing pump port of the adapter, moving the existing gauge tree onto the new adapter, and installing a 2.75" blank on an unused port. All of the new CF joints were helium leak tested and no signal was seen above the ~9e-11 torrL/s background of the leak detector.
The assembly is currently valved out of the BT vacuum volume via the 13.25" GV, and is being pumped down via a small turbo and aux cart. Therefore, the PT-343 gauge reading is only reporting the BTRP assembly pressure, not the main BT pressure, so it can be ignored until further notice of it being valved back in. This system has been pumping via aux cart or leak detector since ~2pm yesterday, and will continue to be pumped until it is in the pressure range of the BT volume. The aux cart is isolated by foam under the wheels, but some noise may be noticed by DetChar folks, hence the DetChar tag on this report.
A before/after pair of photos. As the conductance is very bad in this complex volume, we're aiming to pump it until next Tuesday. The estimated pressure rise of the main volume after valving in this small volume next Tuesday is less than 1e-12 Torr (after equalizing); in other words, negligible.
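The negligible-pressure-rise estimate follows from ideal-gas mixing: total gas load is conserved when the volumes combine. A back-of-envelope sketch, where all the volumes and pressures are made-up placeholders (not measured site values), just to show the scaling:

```python
# Ideal-gas mixing estimate for valving a small volume into the main one.
def pressure_after_valve_in(p_main, v_main, p_small, v_small):
    """Conserve p*V (gas load) across the combined volume."""
    return (p_main * v_main + p_small * v_small) / (v_main + v_small)

p_new = pressure_after_valve_in(
    p_main=1e-9,   # Torr, main beam tube pressure (placeholder)
    v_main=1e6,    # liters, beam-tube-scale volume (placeholder)
    p_small=5e-9,  # Torr, pumped-down small volume (placeholder)
    v_small=100,   # liters, small adapter volume (placeholder)
)
rise = p_new - 1e-9
print(rise < 1e-12)  # True: the small volume barely moves the main pressure
```

Because the small volume is ~10,000x smaller than the main one, even a pressure several times higher equalizes to a sub-1e-12 Torr rise.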
Some backstage snapshots of the great teamwork of Travis, Janos, and me on installing these: Pic. 1 - "before"; 2,3 - 90% complete.
As of Tuesday, August 12, the pumps have been shut off and removed from this system, and the gauge tree valved back in to the main volume. Noise/vibration and pressure monitoring at MX should be back to nominal.
LOTO was applied now both to the handlers of the hand angle valve and the hand Gate Valve. Also, components have been added to the header, only 1 piece away from the booster pump.