TITLE: 08/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Shift started out with Elenna & Jenne troubleshooting H1 due to locklosses at DHARD WFS post-Maintenance. Elenna was eventually able to get H1 past the rough DHARD/CARM Offset steps of ISC_LOCK, and H1 was then able to get back to Observing (please see Elenna's alog for more info). As she was leaving for the night, Elenna mentioned that this was similar to an issue she saw after the recent HAM1 vent.
Violins were elevated on this lock, but once the High Power Damping settings were started, most modes damped down pretty fast. ETMy Mode1 was slow, so I helped it out a little (nominal gain is -0.1 for ETMy Mode1; I have it at -0.2 and will leave it there for the night since it continues to damp this mode nicely).
LOG:
This morning the In-Lock SUS Charge Measurements ran. Attached are the plots for all four Test Masses. Closing FAMIS 28419.
We had a 2-second square-wave sensor glitch in PT100: its pressure jumped from 1.7e-07 to 3.7e-07 Torr and then jumped back.
VACSTAT sent cell phone texts to team-VAC due to this single gauge event, because HAM1 is separated from the rest of the vertex and is therefore exempt from the 2-gauges-in-alarm filter.
VACSTAT was restarted at 21:26 and I took the opportunity to bring MY's PT124B back into the system.
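For reference, a rough sketch of the notification rule as I read it (not the real VACSTAT code): two gauges in alarm are normally required to page, but a gauge watching an isolated volume like HAM1 bypasses that requirement.
import numpy as np  # not needed here, kept minimal

def should_notify(gauges_in_alarm, exempt_gauges):
    # Page if two or more gauges are in alarm, or if any alarming gauge is
    # exempt from the 2-gauges-in-alarm filter (e.g. it watches HAM1).
    if len(gauges_in_alarm) >= 2:
        return True
    return any(g in exempt_gauges for g in gauges_in_alarm)

print(should_notify({"PT100"}, exempt_gauges={"PT100"}))  # True: single-gauge HAM1 event still pages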
After Elenna left (and made a summary alog), H1 made it to OBSERVING. To get here, I ACCEPTED new ASC SDFs (see attached).
Violins are elevated after today's Maintenance, but once damping came on, most of them started dropping sharply.
Bi-weekly stats for a few locking sequences for the past 14 days.
ALS: max Duration 28 Min
Average 4.79 Min
% above 5 minutes 32.07
Date range: 2025-08-06 02:28:40 to 2025-08-20 02:28:40
DRMI [18-101]: max Duration 57 Min
Average 13.68 Min
% above 5 minutes 54.90
Date range: 2025-08-06 02:29:42 to 2025-08-20 02:29:42
CARM [120-428]: max Duration 29 Min
Average 5.28 Min
% above 5 minutes 18.42
Date range: 2025-08-06 02:30:51 to 2025-08-20 02:30:51
Powerup [429-590]: max Duration 46 Min
Average 21.633 Min
% above 30 minutes 63.33
Date range: 2025-08-06 02:32:18 to 2025-08-20 02:32:18
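For reference, here is a minimal sketch (not the actual script) of how the stats above could be computed from a list of per-lock state durations in minutes; the example numbers are made up:
import numpy as np

durations_min = np.array([2.5, 28.0, 1.2, 6.7, 4.4, 9.9, 0.8])  # made-up ALS durations over 14 days

print(f"max Duration {durations_min.max():.0f} Min")
print(f"Average {durations_min.mean():.2f} Min")
print(f"% above 5 minutes {100 * np.mean(durations_min > 5):.2f}")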
We kept losing lock at DHARD WFS, and Jenne and I spent a lot of time figuring out why. Here are some notes about things that we tried:
How I solved the problem:
Some final notes about minor mistakes I made after:
Apologies for missing this detail in my original alog, but we had run two successful initial alignments. The first was run as usual after the end of the maintenance day. Green alignment converged properly, arm alignment looked good. We ran a second after the early problems with DHARD, thinking that maybe the alignment was poor. Same results, good green convergence, but problems with DHARD.
In general, the alignment of the arms and the corner looked very good through this whole process. When I stepped in DHARD P and Y, I was making very small steps overall to correct the alignment, steps on the order of 0.01 or 0.03. In general, the buildups and the camera showed good alignment. The green arms also stayed well aligned when the DHARD signal engaged properly, which tells me that our initial alignment was doing the correct thing for the ITMs. Is it possible that the CARM offset was too large? I have been wondering this because, if the alignment was decent but the WFS signal was junk since we were too far from resonance, that might explain this behavior, and it lines up with the fact that the better fix was reducing the CARM offset more before moving to REFL. I'm not sure if that makes sense though.
As a side note, DRMI was mostly locking very quickly throughout the night.
TITLE: 08/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1's still down post-Maintenance. Currently, Elenna & Jenne are troubleshooting H1, which has had locklosses at DHARD WFS (there were no obvious culprits from today's Maintenance for why this is suddenly an issue with locking).
TITLE: 08/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Maintenance day today. An early Norco LN2 truck caused some noise before maintenance (alog86437) and a late LN2 truck delayed locking a bit. While we were waiting, Elenna was able to get the PRMI ASC working again, yay! (alog86456). I left BRS X sensor correction on while the LN2 truck was filling at EX and we were still able to lock PRMI/DRMI without issue. The BRS velocities seemed relatively calm during that time.
Relocking has been a bit of a struggle and is still ongoing. DRMI is taking very long to lock, and the few times that we have gone past, we've seen DHARD move us away to a lock loss. Right now Elenna and Jenne are working on the issue.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
18:49 | LASER | LVEA is LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 15:29 |
14:57 | PCAL | Tony | PCAL lab | yes | Prep for Tues meas. | 15:21 |
15:06 | SYS | Randy, Mitchell | LVEA - N bay | no | Craning | 18:08 |
15:17 | CDS | Dave | Remote | n | NDS1 restart | 15:21 |
15:17 | SAF | Oli | LVEA | YES -> No | Transition to laser SAFE | 15:28 |
15:22 | FAC | Tyler | LVEA | yes | Measure a few things | 15:23 |
15:24 | FAC | Tyler, Eric | CS | no | Hydrant testing | 18:13 |
15:29 | PCAL | Tony, Francisco | EX | Yes | PCAL meas. | 18:38 |
15:30 | CDS/PEM | Fil | EX | n | Roof weather station fix | 17:32 |
15:37 | FAC | Kim | LVEA | n | Tech clean | 16:57 |
15:41 | FAC | Chris | LVEA | n | Battery maint. | 16:50 |
15:44 | VAC | Travis, Janos | MY | n | Vac pump work | 19:17 |
15:59 | PCAL | Rick, Dripta | PCAL lab | yes | PCAL work (Dripta out 17:33) | 18:25 |
16:21 | ISC | Daniel | LVEA | n | ASC whitening chassis | 16:21 |
16:37 | PEM | TJ | LVEA | n | Moving dust monitor | 17:36 |
16:48 | | Betsy | LVEA | n | Looking for parts and checking on people | 17:29 |
16:51 | FAC | Chris | MX, EX | n | Filter checks | 17:26 |
16:57 | FAC | Kim | EY | n | Tech clean | 17:43 |
17:01 | | Richard | LVEA | n | Checking on people | 17:29 |
17:16 | PEM | Sam, Gerardo | LVEA | n | Y arm acc. mounting | 17:45 |
17:18 | | Camilla | LVEA | n | Dropping off parts | 17:39 |
17:33 | EE | Fil | LVEA | n | Looking at rack and table positioning | 18:43 |
17:44 | FAC | Kim | EX | yes | Tech clean | 18:42 |
17:44 | SYS | Betsy | LVEA | n | More parts | 17:55 |
17:54 | SQZ | Camilla, Jennie | CR | n | OMC scans, SQZ single bounce, no IMC | 19:35 |
18:11 | FAC | Chris | MY | n | Battery checks | 18:36 |
18:40 | SPI | Tony | PCAL Lab | y(local) | Setting up PS4/5 SPI measurement | 19:08 |
18:54 | VAC | Norco | EX | n | LN2 fill | 20:17 |
19:18 | FAC | Tyler | Vertex | n | Excavating | 20:30 |
19:40 | FAC | Eric | EX,EY | n | Check in mech rooms | 20:19 |
20:07 | OPS | Tony | LVEA | n | LVEA sweep | 20:19 |
21:08 | ISC | Camilla | Opt Lab | n | Parts | 21:26 |
23:00 | PCAL | Rick, Dripta | Opt Lab | yes | PCAL labbing | 00:30 |
WP12755 TW1 offload
Jonathan, Dave:
The offload of the past 6 months of raw minute trend data from h1daqtw1 SSD-RAID to spinning media is complete. Today I restarted daqd on nds1 to reconfigure the latest trend path, and Jonathan made this change permanent in puppet. I deleted the old files from nds1 starting at 13:12 in nice mode. It completed at 15:14 (2hr 2min).
Earlier (86446) I updated the OSEM calibration for PR3, so now that that is looking better, I wanted to take the measurements that are needed to get the estimator fits.
Measurement settings:
- HEALTH_CHECK with the damping loops on
- ISI and HEPI in ISOLATED
- New OSEMINF gains (since those are now permanent)
- New DAMP compensation gains in FM7 (since those are also now permanent)
PR3 Pitch:
- Set DAMP P gain to -0.2
SUSPOINT to M1 with DAMP P gain at -0.2
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/Data/2025-08-19_1745_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12607
Regular TFs with damping on for P with gain at -0.2
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/2025-08-19_1815_H1SUSPR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12605
PR3 Yaw:
- Set DAMP Y gain to -0.2 (DAMP P gain back at nominal of -1)
SUSPOINT to M1 with DAMP Y gain at -0.2
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/Data/2025-08-19_1645_H1ISIHAM2_ST1_WhiteNoise_PR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12606
Regular TFs with damping on for Y with gain at -0.2
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/2025-08-19_1715_H1SUSPR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12604
For the past three years I have been working on a branch of OMC_scan in the ligo/gitcommon/labutils repository in order to fit single bounce scans carried out with no sidebands on. In order to do this I made another function in the OMC scan class called identify_C02().
This uses the TM01 and TM02 peaks as fitting parameters to fit the non-linearity of the PZT. Normally we use the 45 MHz sidebands for this, but when the sidebands are off we can't, so one needs to change the following variables to say which of the modes (arranged in descending height order) are the TM01 and TM02 carrier peaks in the scan.
first_order = np.argsort(self.peak_heights)[-4]
second_order = np.argsort(self.peak_heights)[-3]
The first variable denotes the TM01 peak, and the index specifies that this is the 4th-highest peak in whichever scan we are looking at. The next line denotes the TM02 peak, which was the 3rd-highest peak in the scan we were looking at.
To tell the code to use this C02 function, you need to call it with the -s flag, which tells the code that the sidebands were off when the scan was taken.
It is also good to run with the -o flag set to 2, so that only a polynomial of order 2 is used to fit the data; otherwise it will not get a good estimate of the PZT hysteresis with only three data points (carrier, first and second order) for the fit.
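As a minimal standalone illustration of that peak selection (assuming peak_heights is a numpy array of fitted peak heights; the numbers are made up and the rest of the OMC_scan class is not reproduced):
import numpy as np

peak_heights = np.array([0.90, 0.02, 0.31, 0.12, 0.05])  # made-up peak heights from a scan

order = np.argsort(peak_heights)   # indices sorted from smallest to largest peak
first_order = order[-4]            # TM01 taken as the 4th-highest peak in this scan
second_order = order[-3]           # TM02 taken as the 3rd-highest peak in this scan
print(first_order, second_order)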
As part of this merge I accidentally merged in a bunch of old files from my branch and so I have removed them.
I still have some changes to propagate to fit_peak.py and fit_two_peaks.py although since our current OMC does not have enough astigmatism to see a double peak at TM02 this is less urgent.
Tony S, Francisco L [Miriam R]
[Editors note: Added scan of procedure with notes included (there's no scanner at the ES), and the "What was done differently?" section. Also added links to some of the references.]
By running es_meas.sh, we completed the measurement procedure T1500062. Measurement went ok. Results are attached in LHO_EndX_PD_ReportV5, values are within 0.10% of previous measurements, which is good for an initial review. Beams at Rx side look centered (see attached images of the power sensor, PS_BEFORE and PS_AFTER).
By design, the Pcal team follows procedure T1500062 to run an end station measurement. Recently, Miriam worked on a shell script to acquire and save the GPS times that are requested in the procedure. The purpose is to save the user time and to be less prone to user error when (1) reading and copying ten-digit-long numbers from the machine onto paper, then (2) reading and copying ten-digit-long numbers from paper back into the machine.
A week ago, Joe and Miriam successfully tested the shell script at LLO (L1:78065) when making the most recent Pcal ES measurement. Thus, we proceeded in testing that the program is interferometer independent.
Additionally, I decided to use ndscope instead of StripTool, since I feel more comfortable and familiar with the functions included in ndscope. The one thing we lose from StripTool is monitoring the actual value of the plotted channels -- ndscope cannot enable the cross-hair function during live plotting -- but we solved that by using the caget command in the terminal. The scope I used is saved at:
There were no issues while running the shell program. The results, as mentioned in the summary section, agree with previous measurements. The key takeaway is that the processing of data is easily done at the end station, saving us hours of work. Today's outcome, however, does not mean that a printout is not necessary. The printout can become an option as a fail-safe, in case the ES computers become compromised, with the terminal method used as the default. We will have to go through at least a couple more iterations of this new program before making a final decision.
(pcal_env) francisco.llamas@cdsdell425:/ligo/gitcommon/Calibration/pcal/O4/ES/scripts/pcalEndstationPy$ python generate_measurement_data.py --WS PS4 --date 2025-07-21
/ligo/gitcommon/Calibration/pcal/O4/ES/scripts/pcalEndstationPy/generate_measurement_data.py:52: SyntaxWarning: invalid escape sequence '\R'
log_entry = f"{current_time} {command} \Results found here\: {results_path}\n"
Reading in config file from python file in scripts
../../../Common/O4PSparams.yaml
PS4 rho, kappa, u_rel on 2025-07-21 corrected to ES temperature 299.2 K :
-4.702207423037734 -0.0002694340454223 3.166921849830658e-05
Copying the scripts into tD directory...
Connected to nds.ligo-wa.caltech.edu
martel run
reading data at start_time: 1439656235
reading data at start_time: 1439656787
reading data at start_time: 1439657190
reading data at start_time: 1439657604
reading data at start_time: 1439658060
reading data at start_time: 1439658393
reading data at start_time: 1439658575
reading data at start_time: 1439659259
reading data at start_time: 1439659634
Ratios: -0.4616287378386522 -0.466497509893687
writing nds2 data to files
finishing writing
Background Values:
bg1 = 9.088424; Background of TX when WS is at TX
bg2 = 4.841436; Background of WS when WS is at TX
bg3 = 8.973841; Background of TX when WS is at RX
bg4 = 4.915986; Background of WS when WS is at RX
bg5 = 9.091218; Background of TX
bg6 = 0.432428; Background of RX
The uncertainty reported below are Relative Standard Deviation in percent
Intermediate Ratios
RatioWS_TX_it = -0.461629;
RatioWS_TX_ot = -0.466498;
RatioWS_TX_ir = -0.455998;
RatioWS_TX_or = -0.461669;
RatioWS_TX_it_unc = 0.077366;
RatioWS_TX_ot_unc = 0.075967;
RatioWS_TX_ir_unc = 0.068272;
RatioWS_TX_or_unc = 0.080853;
Optical Efficiency
OE_Inner_beam = 0.987787;
OE_Outer_beam = 0.989670;
Weighted_Optical_Efficiency = 0.988728;
OE_Inner_beam_unc = 0.048451;
OE_Outer_beam_unc = 0.051954;
Weighted_Optical_Efficiency_unc = 0.071040;
Martel Voltage fit:
Gradient = 1636.889155;
Intercept = 0.253680;
Power Imbalance = 0.989563;
Endstation Power sensors to WS ratios::
Ratio_WS_TX = -1.077440;
Ratio_WS_RX = -1.391188;
Ratio_WS_TX_unc = 0.046697;
Ratio_WS_RX_unc = 0.038795;
=============================================================
============= Values for Force Coefficients =================
=============================================================
Key Pcal Values :
GS = -5.135100; Gold Standard Value in (V/W)
WS = -4.702207; Working Standard Value
costheta = 0.988362; Angle of incidence
c = 299792458.000000; Speed of Light
End Station Values :
TXWS = -1.077440; Tx to WS Rel responsivity (V/V)
sigma_TXWS = 0.000503; Uncertainity of Tx to WS Rel responsivity (V/V)
RXWS = -1.391188; Rx to WS Rel responsivity (V/V)
sigma_RXWS = 0.000540; Uncertainity of Rx to WS Rel responsivity (V/V)
e = 0.988728; Optical Efficiency
sigma_e = 0.000702; Uncertainity in Optical Efficiency
Martel Voltage fit :
Martel_gradient = 1636.889155; Martel to output channel (C/V)
Martel_intercept = 0.253680; Intercept of fit of Martel to output (C/V)
Power Loss Apportion :
beta = 0.998895; Ratio between input and output (Beta)
E_T = 0.993799; TX Optical efficiency
sigma_E_T = 0.000353; Uncertainity in TX Optical efficiency
E_R = 0.994898; RX Optical Efficiency
sigma_E_R = 0.000353; Uncertainity in RX Optical efficiency
Force Coefficients :
FC_TxPD = 7.901498e-13; TxPD Force Coefficient
FC_RxPD = 6.189270e-13; RxPD Force Coefficient
sigma_FC_TxPD = 4.661048e-16; TxPD Force Coefficient
sigma_FC_RxPD = 3.277513e-16; RxPD Force Coefficient
data written to ../../measurements/LHO_EndX/tD20250819/
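Side note: the SyntaxWarning near the top of the output comes from the un-escaped backslashes in the script's f-string. A possible fix (sketch only, with made-up values, not the actual script) is to escape them:
current_time = "2025-08-19 18:00:00"                                        # made-up value
command = "python generate_measurement_data.py --WS PS4 --date 2025-07-21"  # made-up value
results_path = "../../measurements/LHO_EndX/tD20250819/"

# Escaped backslashes produce the same literal text without the SyntaxWarning.
log_entry = f"{current_time} {command} \\Results found here\\: {results_path}\n"
print(log_entry)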
Note: The data for this measurement, and this alog [first edition], were all done at EX!
I was able to get some testing in on the SR3 P and Y estimators today with the most up to date estimator and blend filters. There were still some filter issues at first, but thanks to all the hard work from Brian, Edgard, and Ivey, we were able to get everything working!
Each test was done with the other estimator off, so the other damping degrees of freedom were just at their normal -0.5 gain damping.
SR3 Y Estimator
We had tested the SR3 Y estimator twice before (85615, 86319), but this should be the best case of everything for the Y estimator. Each test was 30 minutes.
Estimator filters: Clean_fits_H1SR3_Y_2025-08-05.mat (86366)
Blend filters: Estimator_blend_skinnynotch_SR3yaw_20250723.m (86265)
Tests
DAMP Y -0.1
2025-08-19 18:22:06 - 18:52:06 UTC
DAMP Y -0.1 and OSEM Y -0.4
2025-08-19 17:22:05 - 17:52:17 UTC
DAMP Y -0.1 and EST Y -0.4
2025-08-19 17:52:23 - 18:22:00 UTC
Preliminary results: /ligo/svncommon/SusSVN/sus/trunk/HLTS/Common/FilterDesign/Estimator/Estimator/SR3_Y_EstTest_2025-08-19.xml r12603
The tests gave really nice results!
SR3 P Estimator
We have never tested the SR3 P estimator, but the Stanford peeps were able to get all the filters needed just in time for me to get a short (~7-10 mins each) set of estimator measurements.
Estimator filters: Clean_fits_H1SR3_P-08-05.mat (86430)
Blend filters: blend_SR3_pitchv1.m (86452)
Tests
DAMP P -0.1
2025-08-19 19:17:05 - 19:25:10 UTC
DAMP P -0.1 and OSEM P -0.4
2025-08-19 19:44:45 - 19:54:45 UTC
DAMP P -0.1 and EST P -0.4
2025-08-19 19:33:01 - 19:44:27 UTC
Preliminary results: /ligo/svncommon/SusSVN/sus/trunk/HLTS/Common/FilterDesign/Estimator/Estimator/SR3_P_EstTest_2025-08-19.xml r12603
These tests don't have as fine a bin width or as many averages as the Y estimator tests did, but the results still show a decent decrease in noise in ADD_P_TOTAL, aside from a new peak that popped up at 3.5 Hz.
Next steps:
Both P and Y estimator tests are looking pretty good, so the next steps are to test them both out while we are locked. On Thursday I'll be doing that for at least SR3 Y and P if there is time. To make sure we can transition from the regular OSEM damping to the estimator damping, I have both SR3 P and Y set to their 'DAMP P/Y -0.1 and OSEM P/Y -0.4' settings. This means -0.1 of their DAMP gains are in the DAMP filter bank, and -0.4 of their gains are in the EST_{P,Y}_DAMP_OSEM filter bank, which give us the same amount and type of damping as -0.5 gain of normal DAMP damping would give us.
On Thursday I will be able to test out the estimator by swapping the -0.4 DAMP_OSEM gain to -0.4 DAMP_EST gain.
These damping settings have been accepted in sdf
Woot! This seems really good.
A few things about pitch
- The peak at 3.5 Hz is expected. This is a pitch mode, but I guess it's really hard to see with the OSEMs. It seems to be only barely damped. You can see it in the blend design plots of alog 86452; it's there in many plots, e.g. plot 3. For the "to-do" list, we could consider adding a bit more damping to this peak - if it is causing trouble for ASC.
- I would expect the low freq drive (.1 to 1 Hz) to be lower. It will be good to check the updated data.
- You can see the down-side to this particular blend - the 'bottom' setting of 0.1 means that the OSEM noise gets added at 0.1 for this path at high frequencies, so the total goes from 0.2 in light-damping to 0.2 + (0.8 * 0.1) = 0.28 with the light damping + estimator damping. It's fine, but could be played with more if useful. This is not really inherent to this design approach, just to the first one I built.
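A quick arithmetic sketch of that OSEM-noise bookkeeping (numbers from the last bullet above; reading the 0.8 as the fraction of damping carried through the estimator path is my assumption):
light_damping_coupling = 0.2    # OSEM noise coupling with light damping alone
estimator_path_fraction = 0.8   # assumed portion of the damping carried through the estimator path
blend_bottom = 0.1              # 'bottom' setting of this blend at high frequency

total = light_damping_coupling + estimator_path_fraction * blend_bottom
print(total)  # 0.28, i.e. the 0.2 -> 0.28 increase described above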
FAMIS 26546, last checked in alog86417
The script reports BSC high frequency noise is elevated for the following sensors:
ETMX_ST1_CPSINF_V2
Leo, Jennie, Camilla
Followed setup instructions from 80010; with 75mW injected in the SEED beam, we had 1mW on OPO_IR_PD_DC, slightly lower than last time. We also took SQZ_FC to FC_MISALIGNED and opened the SQZ beam diverter and fast shutter. We have counts of ~60 on ASC-AS_A and B and 7e-4 on ASC-OMC-A and B NSUMs. Jennie took the OMC PZT down to zero to start and then ran the template userapps/../omc/h1/templates/OMC_scan_sqz_beam.xml.
We repeated the mode scans with different ZM PSAMS offsets; the DC centering loops could account for the pitch change from the PSAMS, and for the extreme PSAMS values I increased the servo limiter from 85 to 95.
Jennie and Leo have the data to analyze.
See the ndscope with the channels we used to monitor attached.
This template is saved in /ligo/home/jennifer.wright/Documents/OMC_scan/20250819_SQZ_beam_scan_with_ZM_changes.yaml
Following up on these OMC scans: attached is the table with all computed mode-mismatch values.
Leo, Jennie W., Camilla
Below is a plot of the OMC scans fitted with a surface polynomial. The plot is from the presentation in T2500228, so the labels on the axes will be different. This can be plotted in Matlab simply using the following code block (requires the Curve Fitting Toolbox to use fit()).
OMCX = [5.5, 4, 2.3, 3, 8.1, 2.3, 2.3, 6, 9.5, 6, 6, 9, 3, 8, 4, 9.5, 9.5];
OMCY = [-0.8, 0.34, 0.2, 0.85, -0.4, -0.1, 0.2, -0.4, -0.6, -4.5, -5.3, 2, 2, -2, -2, -5, -0.6];
OMCdata = [1-4.048/100, 1-3.412/100, 1-3.074/100, 1-3.234/100, 1-5.073/100, ...
    1-3.176/100, 1-2.884/100, 1-2.174/100, 1-3.343/100, 1-9.490/100, 1-11.865/100, 1-6.282/100, 1-2.313/100, ...
    1-3.013/100, 1-3.021/100, 1-9.179/100, 1-3.243/100];
omc = fit([transpose(OMCX), transpose(OMCY)], transpose(OMCdata), 'poly22');
p = plot(omc);
Similar to alog 86227, the BTRP adapter flange and GV were installed on Tuesday at the MY station. Leak checking was completed today with no signal seen above the ~e-12 torrL/s background of the leak detector.
Pumping on this volume will continue until next Tuesday, so some additional noise may be seen by DetChar. This volume is valved out of the main volume, so the pressure readings from the PT-243 gauges can be ignored until further notice.
Here are the first and the last pictures of the leak detector values. The max was 3.5E-12 torrL/s; 90% of the time it stayed below 1E-12 torrL/s.
As of Tuesday, August 19, the pumps have been shut off and removed from this system, and the gauge tree valved back in to the main volume. Noise/vibration and pressure monitoring at MY should be back to nominal.
The pumping cart was switched off, and the dead volume was valved back in to the main volume. The pressure dropped rapidly to ~5E-9 within a few minutes, and it continues to drop. Also, we (Travis & Janos) added some more parts (an 8" CF to 6" CF tee; CF to ISO adapters, and an ISO valve) to the assembly, and also added a Unistrut support to the tee; see attached photo. Next step is to add the booster pump itself, and anchor it to the ground.
LOTO has now been applied to both the handles of the hand angle valve and the hand gate valve.
Summary: I didn't see any conclusive difference showing that we could get an improvement from a change in a particular direction. Since the optical gain moves around so much, it's hard to see the effect of small or quick changes. It might be worth doing a longer test where we step these offsets through sets of three different values per dof (3*4 steps) and hold each for five minutes (~1 hour).
I originally was trying to use the results of our last injection (2025-06-16 20:41:15 UTC to 2025-06-16 21:01:57 UTC) to determine how to change the OMC ASC alignment.
Using /ligo/home/jennifer.wright/git/2025/OMC_Alignment/20250116_OMC_Alignment_EXC.xml we injected four low frequency lines into H1:OMC-ASC_{POS,ANG}_{X,Y}_EXC - these channels come after the filter banks (ASC-OMC_A_PIT_OFFSET, then ASC-OMC_B_PIT_OFFSET, then ASC-OMC_A_YAW_OFFSET, then ASC-OMC_B_YAW_OFFSET) where the nominal offsets are set. The injection is at four different frequencies (0.0113, 0.0107, 0.0097 and 0.0103 Hz).
The analysis then involves looking at the 410 Hz line height on the OMC DCPD SUM to interpret the effect on optical gain.
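For context, a rough sketch of estimating a line height like this by demodulating the DCPD SUM time series at the line frequency (made-up data, sample rate, and line frequency; not the actual analysis code):
import numpy as np

fs = 16384.0                              # assumed sample rate in Hz
t = np.arange(0, 60, 1 / fs)              # 60 s of data
dcpd_sum = 1e-3 * np.sin(2 * np.pi * 410.3 * t) + 1e-4 * np.random.randn(t.size)

f_line = 410.3                            # assumed calibration line frequency
demod = dcpd_sum * np.exp(-2j * np.pi * f_line * t)
line_height = 2 * np.abs(demod.mean())    # amplitude of the line in the DCPD SUM
print(line_height)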
However, it is not clear from this plot that the full range of phase space has been looked at, i.e. we have not changed the offsets with a large enough amplitude to check that their current positions are optimal.
Thus today I stepped ASC-OMC_A_PIT_OFFSET, then ASC-OMC_B_PIT_OFFSET, then ASC-OMC_A_YAW_OFFSET, then ASC-OMC_B_YAW_OFFSET - these filter banks are before the ones used above in the model and are the ones where the offset for the QPDs is set. In the
Towards the last three measurements we realised that we need to make larger changes, ~0.04 counts, to see a change in kappa C, and then wait 3-4 minutes. On the long-term trend none of these changes seemed to produce any measurable gain, but it would be good to repeat this measurement with longer times spent at each step and perhaps slightly larger steps.
The ndscope template is in /ligo/home/jennifer.wright/git/2025/OMC_Alignment/20250724_change_offsets_by_hand.yaml
Jennie, Elenna,
The other day Elenna noticed some coherence between DARM and the light reflected from the OMC (OMC-REFL_A_LF_OUT_DQ) and wondered if this implies our mode-matching or OMC alignment is bad.
So I took some times from above when one of the offsets was non-nominal and compared the coherence between these times. Since it shows some small change, maybe there is still some tuning of these offsets we could do to recover some optical gain.
The measurement is saved in /ligo/home/jennifer.wright/git/2025/OMC_Alignment/Jennie_OMC_offsets.xml
Each was around 3 minutes long apart from the starting measurement which we took from a quiet time before the measurements.
The measurement times are:
nominal settings: 16:29:21 UTC
PITCH A: 17:54:11 UTC
PITCH B: 18:39:57 UTC
YAW A: 18:58:10 UTC
YAW B: 19:23:20 UTC
In the past few weeks we have seen rocky performance out of the Calibration pipeline and its IFO-tracking capabilities. Much, but not all, of this is due to [my] user error. Tuesday's bad calibration state is a result of my mishandling of the recent drivealign L2L gain changes for the ETMX TST stage (LHO:78403, LHO:78425, LHO:78555, LHO:79841). The current practice adopted by LHO with respect to these gain changes is the following:
1. Identify that KAPPA_TST has drifted from 1 by some appreciable amount (1.5-3%), presumably due to ESD charging effects.
2. Calculate the necessary DRIVEALIGN gain adjustment to cancel out the change in ESD actuation strength. This is done in the DRIVEALIGN bank so that it's downstream enough to only affect the control signal being sent to the ESD. It's also placed downstream of the calibration TST excitation point.
3. Adjust the DRIVEALIGN gain by the calculated amount (if kappaTST has drifted +1% then this would correspond to a -1% change in the DRIVEALIGN gain).
3a. Do not propagate the new drivealign gain to CAL-CS.
3b. Do not propagate the new drivealign gain to the pyDARM ini model.
After step 3 above it should be as if the IFO is back in the state it was in when the last calibration update took place, i.e. no ESD charging has taken place (since it's being canceled out by the DRIVEALIGN gain adjustments). It's also worth noting that after these adjustments the SUS-ETMX drivealign gain and the CAL-CS ETMX drivealign gain will no longer be copies of each other (see image below).
The reasoning behind 3a and 3b above is that by using these adjustments to counteract IFO changes (in this case ESD drift) from when it was last calibrated, operators and commissioners in the control room could comfortably take care of performing these changes without having to invoke the entire calibration pipeline. The other approach, adopted by LLO, is to propagate the gain changes to both CAL-CS and pyDARM each time it is done and follow up with a fresh calibration push. This approach leaves less to 'be remembered', as CAL-CS, SUS, and pyDARM will always be in sync, but comes at the cost of having to turn a larger crank each time there is a change.
Somewhere along the way I updated the TST drivealign gain parameter in the pyDARM model even though I shouldn't have. At this point, I don't recall if I was confused because the two sites operate differently or if I was just running a test and left this parameter changed in the model template file by accident and subsequently forgot about it. In any case, the drivealign gain parameter change made its way through along with the actuation delay adjustments I made to compensate for both the new ETMX DACs and for residual phase delays that haven't been properly compensated for recently (LHO:80270). This happened in commit 0e8fad of the H1 ifo repo. I should have caught this when inspecting the diff before pushing the commit, but I didn't. I have since reverted this change (H1 ifo commit 41c516).
During the maintenance period on Tuesday, I took advantage of the fact that the IFO was down to update the calibration pipeline to account for all of the residual delays in the actuation path we hadn't been properly compensating for (LHO:80270). This is something that I've done several times before; a combination of the fact that the calibration pipeline has been working so well in O4 and that the phase delay changes I was instituting were minor contributed to my expectation that we would come back online to a better calibrated instrument.
This was wrong. What I'd actually done was install a calibration configuration in which the CAL-CS drivealign gain and the pyDARM model's drivealign gain parameter were different. This is bad because pyDARM generates FIR filters that are used by the downstream GDS pipeline; those filters are embedded with knowledge of what's in CAL-CS by way of the parameters in the model file. In short, CAL-CS was doing one thing and GDS was correcting for another.
-- Where do we stand?
At the next available opportunity, we will be taking another calibration measurement suite and using it to reset the calibration one more time, now that we know what went wrong and how to fix it. I've uploaded a comparison of a few broadband pcal measurements (image link). The blue curve is the current state of the calibration error. The red curve was the calibration state during the high profile event earlier this week. The brown curve is from last week's Thursday calibration measurement suite, taken as part of the regularly scheduled measurements.
-- Moving forward, I and others in the Cal group will need to adhere more strictly to the procedures we already have in place:
1. double check that any changes include only what we intend at each step
2. commit all changes to any report in place immediately and include a useful log message (we also need to fix our internal tools to handle the report git repos properly)
3. only update calibration while there is a thermalized ifo that can be used to confirm that things come back properly, or, if done while the IFO is down, require Cal group sign-off before going to observing
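For concreteness, here is a minimal sketch of the gain adjustment arithmetic in step 2 of the procedure above (the gain value and drift are made up; this is not the script used in the control room):
kappa_tst = 1.015                        # KAPPA_TST has drifted +1.5% from 1
current_drivealign_gain = 184.0          # made-up ETMX TST-stage DRIVEALIGN L2L gain
new_drivealign_gain = current_drivealign_gain / kappa_tst   # scale down to cancel the ESD strength change
print(new_drivealign_gain)               # ~1.5% lower than the current gain, per step 3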
Posting here for historical reference.
The propagation of the correction of incorrect calibration was in an email thread between myself, Joseph Betzwieser, Aaron Zimmerman, Colm Talbot.
I had produced a calibration uncertainty with necessary correction that would account for the effects of this issue attached here as a text file, and as an image showing how it compares against our ideal model (blue dashed) and the readouts of the calibration monitoring lines at the time (red pentagons).
Ultimately the PE team used the inverse of what I post here, since as a result of this incident it was discovered that PE had been ingesting uncertainty in an inverted fashion up to this point.
I am also posting the original correction transfer function (the blue dashed line in Vlad's comment's plot) here from Vlad for completeness. It was created by calculating the modeled response of the interferometer that we intended to use at the time (R_corrected), over the response of the interferometer that was running live at the time (R_original) corrected for online correction (i.e. time dependent correction factors such as Kappa_C, Kappa_TST, etc). So to correct, one would take the calibrated data stream at the time:
bad_h(t) = R_original(t) * DARM_error(t)
and correct it via:
corrected_h(t) = R_original(t) * DARM_error(t) * R_corrected / R_original(t)
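As a short illustration of applying such a correction in the frequency domain (placeholder arrays only; the real R_original and R_corrected come from the attached correction files):
import numpy as np

freqs = np.logspace(1, 3, 5)                          # Hz, placeholder frequency grid
R_original = np.full(freqs.size, 1.00 + 0.02j)        # response running live at the time (placeholder)
R_corrected = np.full(freqs.size, 1.01 + 0.01j)       # response we intended to use (placeholder)

bad_h_f = np.random.randn(freqs.size) + 1j * np.random.randn(freqs.size)  # placeholder strain spectrum
corrected_h_f = bad_h_f * (R_corrected / R_original)  # apply the correction bin by bin
print(np.abs(corrected_h_f / bad_h_f))                # magnitude of the applied correction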
So our understanding of what was wrong with the calibration around September 25th, 2024 00:00 UTC has improved significantly since then. We had 4 issues in total:
1) The above mentioned drivealign gain mismatch issue between the model, h1calcs, the interferometer, and the GDS calibration pipeline.
2) The ETMX L1 stage rolloff change that was not in our model (see LHO alog 82804).
3) LHO was not applying the measured SRC detuning to the front end calibration pipeline - we started pushing it in February 2025 (see LHO alog 83088).
4) The fact that pydarm doesn't automatically hit the load filters button for newly updated filters means sometimes humans forget to push that button (see for example LHO alog 85974). Turns out that night the optical gain filter in the H1:CAL-DARM_ERR filter bank had not been updated. Oddly enough, the cavity pole frequency filter bank had been updated, but I'm guessing the individual load button was pressed.
In the filter archive (/opt/rtcds/lho/h1/chans/filter_archive/h1calcs/), specifically H1CALCS_1411242933.txt has an inverse optical gain filter of 2.9083e-07, which is the same value as the previous file's gain. However, the model optical gains did change (3438377 in the 20240330T211519Z report, and 3554208 in the bad report that was pushed, 20240919T153719Z). The epics for the kappa generation were updated, so we had a mismatch between the kappa_C value that was calculated and the optical gain to which it was applied - similar to the actuation issue we had. It should have changed by a factor of 0.9674 (3438377/3554208). This resulted in the monitoring lines showing ~3.5% error at the 410.3 Hz line during this bad calibration period. It also explains why there's a mismatch between the monitoring lines and the correction TFs we provided that night at high frequency. Normally the ratio between PCAL and GDS is 1.0 at 410.3 Hz, since the PCAL line itself is used to calculate kappa_C at that frequency and thus matches the sensing at that frequency to the line. See the grafana calibration monitoring line page.
I've combined all this information to create an improved TF correction factor and uncertainty plot, as well as more normal calibration uncertainty budgets. So the "calibration_uncertainty_H1_1411261218.png" is a normal uncertainty budget plot, with a correction TF from the above fixes applied. The "calibration_uncertainty_H1_1411261218.txt" is the associated text file with the same data. "H1_uncertainty_systematic_correction.txt" is the TF correction factor that I applied, calculated with the above fixes. Lastly, "H1_uncertainty_systematic_correction_sensing_L1rolloff_drivealign.pdf" is the same style plot Vlad made earlier, again with the above fixes.
I'll note the calibration uncertainty plot and text file were created on the LHO cluster, with the /home/cal/conda/pydarm conda environment, using the command:
IFO=H1 INFLUX_USERNAME=lhocalib INFLUX_PASSWORD=calibrator CAL_ROOT=/home/cal/archive/H1/ CAL_DATA_ROOT=/home/cal/svncommon/aligocalibration/trunk/ python3 -m pydarm uncertainty 1411261218 -o ~/public_html/O4b/GW240925C00/ --scald-config ~cal/monitoring/scald_config.yml -s 1234 -c /home/joseph.betzwieser/H1_uncertainty_systematic_correction.txt
I had to modify the code slightly to expand out the plotting range - it was much larger than the calibration group usually assumes. All these issues were fixed in the C01 version of the regenerated calibration frames.