Starting work on the PR3 estimator, we first need to recalibrate the OSEM gains, so I took some HAM2 ISO to PR3 DAMP measurements.
Settings:
Measurements:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/Data/2025-08-05_1700_H1ISIHAM2_ST1_WhiteNoise_ISO_{X,Y,Z}_0p05to40Hz_calibration.xml
r12518
And to clarify (since it's different from the coupling between HAM5 and SR3), the coupling is:
ISO X -> PR3 L
ISO Y -> PR3 T
ISO Z -> PR3 V
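For bookkeeping, the ISO-to-PR3 coupling above can be captured as a simple lookup (a minimal sketch; the dictionary and function names are just for illustration):

```python
# ISO drive direction -> PR3 top-mass DOF it couples into, as measured
# above. Note the HAM5 -> SR3 coupling differs, so this map is PR3-only.
ISO_TO_PR3_DOF = {
    "X": "L",  # ISO X drives PR3 Length
    "Y": "T",  # ISO Y drives PR3 Transverse
    "Z": "V",  # ISO Z drives PR3 Vertical
}

def pr3_dof_for_iso(iso_dof: str) -> str:
    """Return the PR3 DOF excited by a given HAM2 ISO drive direction."""
    return ISO_TO_PR3_DOF[iso_dof.upper()]
```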
For the SR3 P estimator, I went ahead and took SUSPOINT to M1 and regular TF measurements for the Pitch estimator filters.
Settings:
SUSPOINT measurements:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/Common/Data/2025-08-05_1700_H1ISIHAM5_ST1_WhiteNoise_SR3SusPoint_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12520
Health check TFs:
/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/2025-08-05_1800_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12521
Last week I took some OLGTF measurements for SR3 for the estimator in Y (86075). It turns out I should've been taking normal transfer functions instead, so here they are.
Settings:
Measurements: /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/SR3/SAGM1/Data/2025-08-05_1700_H1SUSSR3_M1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
r12519
The satamps for FC1 and FC2 were swapped out today as part of ECR E2400330, and a new compensation filter was installed in OSEMINF, so it will look like we have had a change in the alignment of the FCs, but this isn't a real change (FC1, FC2).
Tue Aug 05 10:06:03 2025 INFO: Fill completed in 6min 0secs
Gerardo and Erik confirmed a good fill curbside. Looks like knocking the ice from the pipe has fixed TC-B.
J. Kissel, O. Patane
Oli and I were reviewing the ECR E2400330 upgraded satamp channel response inventory vs. what's installed on which suspension. We went back to revisit the TMSX M1 F1F2F3LF OSEMs because the upgraded S1100150 satamp (originally installed on 2025-07-15; LHO:85770) was pulled out and replaced with the upgraded S1100122 (on 2025-07-24; LHO:85980) under suspicion that *it* might have been the cause of lock-losses that ended up being a result of the TMSX M1 F2 OSEM *coil's* DAC channel glitching (conclusive evidence LHO:86079, replacement LHO:86086). Oli had updated the filters on 2025-07-24, LHO:86072. However, upon careful review in order to post a record of the fits' poles and zeros -- done on 2025-07-28 -- I found a bug in the file that Oli used to push new compensation. What I've posted to LHO:86032 is the final answer.
Today I've updated the compensation to match this final-answer best fit for S1100122's channels:
- turn OFF the TMSX M1 damping loops
- run $ python3 satampswap_bestpossible_filterupdate_ECR_E2400330.py -o TMSX from the command line
- hit LOAD_COEFFICIENTS on the GDS_TP screen
- restore damping, watch for any funky ducks
All looks good!
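The "turn OFF the damping loops" step amounts to zeroing the optic's M1 DAMP filter gains; a minimal sketch of building those channel names (the _GAIN suffix follows the usual CDS filter-module convention, assumed here -- actually writing them would use e.g. pyepics):

```python
# Build EPICS channel names for an optic's M1 damping-loop gain settings,
# following the standard H1:SUS-<OPTIC>_M1_DAMP_<DOF>_GAIN convention
# (assumed here from the channel names quoted elsewhere in this log).
DOFS = ("L", "T", "V", "R", "P", "Y")  # canonical SUS DOF order

def m1_damp_gain_channels(optic: str) -> list:
    """Return the six M1 damping gain channels for the given optic."""
    return ["H1:SUS-{}_M1_DAMP_{}_GAIN".format(optic, dof) for dof in DOFS]
```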
Added the fire pumps to current bypass, expires 17:47 today.
Bypass will expire:
Tue Aug 5 05:47:55 PM PDT 2025
For channel(s):
H0:VAC-MX_X1_PT343B_PRESS_TORR
H0:FMC-CS_FIRE_PUMP_2
H0:FMC-CS_FIRE_PUMP_1
C. Compton, J. Kissel
Fil and I are pushing forward with the expanded scope of ECR E2400330 to upgrade the whitening stages of the OSEM PD satamps for the top masses of the filter cavity HSTS. More details on the actual upgrade once installed, but I figure a separate instructional aLOG is warranted for the prep we did, since the SQZ system and these kinds of "once in a blue moon" activities for the filter cavity are still relatively new to folks.
(1) Offloaded SQZ WFS DC alignment request to FC1, FC2, and ZM3 to their respective alignment sliders.
- assume that the overall SQZ_MANAGER is already in DOWN
- open the SQZ_FC sub-manager guardian log
- open an ndscope of the top mass OSEMs to check that the overall alignment of the SUS *doesn't* end up changing: $ ndscope H1:SUS-{FC1,FC2,ZM3}_M1_DAMP_{P,Y}_IN1_DQ
- request SQZ_FC's goto state FC_ASC_OFFLOADED, watch the SQZ_FC guardian log and ndscope trend to confirm expected behavior
- once done, request SQZ_FC to go back to DOWN
We found that this state did exactly as expected -- it pushed the static output of the M1 LOCK banks to the (correctly calibrated) alignment slider OPTICALIGN OFFSET, and while there was a minor blip in the top mass OSEM record of physical alignment during the transition, the SUS did not move.
(2) Brought FC1 and FC2 to SAFE with their SUS_{FC1,FC2} guardians.
(3) Bypassed the use of OSEM readbacks for the IOP software watchdog (SWWD).
- from the sitemap, opened the "SWWD" overview screen from the "WD" dropdown in the far bottom left corner
- opened each of the HAM7 and HAM8 SWWD screens
- made sure the bypass time, H1:IOP-SUS_{FC1,FC2}_DK_BYPASS_TIME, was set to some long amount of time (e.g. 12000 [seconds])
- hit BYPASS, H1:IOP-SUS_{FC1,FC2}_DACKILL_BPSET
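Step (3)'s "make sure the bypass time is long enough" check can be sketched as a small helper (channel names are the ones listed above; the 12000 s threshold is the example value from the text, and actually reading the channels would use e.g. pyepics):

```python
# Sanity-check that each SWWD bypass-time setting is long enough before
# hitting BYPASS, per step (3). Channel names are those quoted above.
BYPASS_TIME_CHANNELS = [
    "H1:IOP-SUS_FC1_DK_BYPASS_TIME",
    "H1:IOP-SUS_FC2_DK_BYPASS_TIME",
]

def bypass_times_ok(readbacks, minimum_s=12000):
    """readbacks: dict mapping channel name -> current value [seconds]."""
    return all(readbacks[ch] >= minimum_s for ch in BYPASS_TIME_CHANNELS)
```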
Seismon was restarted.
J. Oberling, R. Short
At the start of maintenance this morning, Jason and I touched up alignment into the PMC and RefCav remotely using the picomotor-controlled mirrors on-table. With the IMC and ISS off, we started with PMC alignment. Ultimately, we could only get maybe 0.1W of improvement in both transmitted and reflected power; see the first screenshot with T-cursors for before and after power levels.
We then turned the ISS back on and moved on to RefCav alignment. Here we were able to get about 0.025V of improvement on the TPD signal, mostly in pitch; see second screenshot for before and after comparison.
Neither of these adjustments yielded the gains we were hoping for, and the camera spots certainly still show some apparent misalignments. This means we again may need to do some on-table alignment in the future, but for now, this should be passable.
Following the discovery that the testpoint monitor IOC had stopped updating some time ago, I have promoted the TESTPOINT button on the CDS Overview from a blue Related-Display to a full System button.
The button has been moved from the related display grid to the system column, and PICKET FENCE has been moved left to make room (see attachment).
The TESTPOINT button will turn MAGENTA if its GPS time stops updating. If there are no testpoints open, the button is LIGHT-GREEN, otherwise it is GREEN.
As before, clicking on the TESTPOINT button will open the CDS_TPMON_OVERVIEW.adl MEDM.
TITLE: 08/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 has been locked for 17 hours, but it looks like there were three brief drops from observing between 11:33 and 11:40 UTC (I'm assuming SQZ-related, but will look into it). Magnetic injections are running, and in-lock charge measurements will happen right after, before maintenance begins at 15:00 UTC.
Lockloss happened during in-lock charge measurements, specifically during the 12Hz injection to ETMX. The lockloss tool tags IMC for this one, and it certainly looks like the IMC lost lock first, but I can't say for sure why.
The three drops from Observing that Ryan points out were actually from the CO2 lasers losing lock: first CO2Y, and then CO2X lost lock twice, all between 11:33 and 11:40 UTC (~4:30am PT). Both PZTs and laser temperatures started changing ~5 minutes before CO2Y lost lock. Unsure what would make this happen; LVEA temperature and chiller flow rates as recorded in the LVEA were stable, see attached.
Unsure of the reason for this, especially as they both changed at the same time but are for the most part independent systems (apart from shared RF source). We should watch to see if this happens again.
My initial thought was RF, but the two channels we have to monitor both looked okay around that time. About 4 minutes before the PZTs start to move away there is maybe a slight change in the behavior of the H1:ISC-RF_C_AMP10M_OUTPUTMON channel (attachment 1), but I found a few other times it has had similar output while the laser was okay, plus 4 minutes seems like too long for a reaction like this. The PZTs do show some type of glitching behavior 1-2 minutes before they start to drive away that I haven't found at other times (attachment 2). This glitch timing is identical in both lasers' PZTs.
I trended almost every CO2 channel that seemed worthwhile, I looked at magnetometers, LVEA microphones, seismometers, mainsmon, and I didn't find anything suspicious. The few people on site weren't in the OSB. Not sure what else to look for at this point. I'm wondering if maybe this is some type of power supply or grounding issue, but I'd expect to see it other places as well then. Perhaps places I just haven't found yet.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 08/05 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
H1 Stayed Locked and observing the entire shift.
No SQZ issues, PIs, wind, barely even an earthquake! It was a very quiet night without any super events once again.
All systems running as expected.
GRB-Short E586884 at 23:08:34 UTC
LOG:
No Log
Using Vibration Sensors To Gauge Health Of HVAC Fans Site Wide, FAMIS 26413
Ivey, Edgard.
Ivey made even newer fits for the M1 SR3 in Yaw using the measurements that Oli took in [LHO: 86075].
These fits are pretty similar to the last set we got in [LHO: 85446]. However, this time, Ivey used vectfit3 for both of the fits directly. She only manually ensured that the zeros were all in the left-half plane, and that there were no additional zeros in the M1 to M1 transfer function, even if the measurements show a bit of extra phase loss compared to the fit.
These are the fits obtained [see figures for an eye-test of the goodness of fit]:
For the ISI to M1 transfer function:
'zpk([0,0,-0.027+20.489i,-0.064+11.458i,-0.027-20.489i,-0.064-11.458i],[-0.072+6.395i,-0.072-6.395i,-0.096+14.454i,-0.096-14.454i,-0.062+21.267i,-0.062-21.267i],-0.001)'
The M1 to M1 transfer function:
'zpk([-0.002+19.246i,-0.005+8.312i,-0.002-19.246i,-0.005-8.312i],[-0.065+6.392i,-0.065-6.392i,-0.076+14.442i,-0.076-14.442i,-0.055+21.272i,-0.055-21.272i],12.085)'
The fits were added to the Sus SVN and live inside '/ligo/svncommon/SusSVN/sus/trunk/HLTS/Common/FilterDesign/Estimator/fits_H1SR3_2025-07-29.mat'.
The filters can be installed using 'make_SR3_yaw_model.m', which lives in the same folder [for reference, see LHO: 84041, where Oli got the fits running for a test]. This last script, as well as 'make_SR3_yaw_blend.m', was updated to work with the new naming convention for the estimator block.
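For a quick eye-test outside Matlab, the quoted M1-to-M1 zpk can be evaluated numerically with scipy (pole/zero values copied from the fit strings above; the Laplace-variable units are assumed to be rad/s, as is typical for Matlab zpk exports):

```python
# Evaluate the M1-to-M1 yaw fit quoted above on a 0.1-10 Hz grid.
import numpy as np
from scipy import signal

zeros = [-0.002 + 19.246j, -0.005 + 8.312j,
         -0.002 - 19.246j, -0.005 - 8.312j]
poles = [-0.065 + 6.392j, -0.065 - 6.392j,
         -0.076 + 14.442j, -0.076 - 14.442j,
         -0.055 + 21.272j, -0.055 - 21.272j]
gain = 12.085

f_hz = np.logspace(-1, 1, 200)                 # 0.1 Hz to 10 Hz
_, resp = signal.freqresp((zeros, poles, gain), w=2 * np.pi * f_hz)
mag = np.abs(resp)                             # magnitude for plotting
```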
Hopefully we will get to test the estimator again soon!
TITLE: 08/04 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 13mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 has been locked and Observing for just over 2 hours.
I've got an alarm from the CDS screen about H0:VAC-MX_X1_PT343B_PRESS_TORR, but after speaking to Janos, this is in preparation for tomorrow's maintenance Tuesday VAC team activities.
So other than that everything seems to be humming along.
Jennie W, Camilla C
A while ago we heated up then cooled down the SR3 heater (alog #84749).
As part of measurements using this data, I calculated the curvature change, following the approach at LLO by Aidan given in alog #27262. Matlab code is below.
%calculate SR3 spherical lens
Pin = 2;%W
double_pass = 2;
SR3_t = (3*3600) + (11*60); % Time for cooldown in s.
delta_ITMY = -2.67e-5;% decrease in defocus of ITMY according to Hartmann sensor.
D_ITMY = delta_ITMY./double_pass;% defocus change in Dioptres
D_ITMY_error = 5e-6;% error on defocus in Dioptres.
R_SR3 = 36.013;% cold radius of curvature in m
delta_R = (2./((2/R_SR3)+D_ITMY))-R_SR3; % change in curvature during cooldown in m
delta_delta_R = D_ITMY_error.*(2./((2./R_SR3)+D_ITMY)); % error on curvature change.
This means the rate of defocus change is 6.6750 uD per Watt.
The final curvature change is +0.0087 m +/- 0.0002 m, as the mirror becomes less curved due to cooldown.
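As a cross-check, the Matlab snippet above transcribes directly to Python (same numbers; this just verifies the quoted result):

```python
# Python transcription of the Matlab calculation above, to double-check
# the quoted curvature change of +0.0087 +/- 0.0002 m.
double_pass = 2
delta_ITMY = -2.67e-5               # decrease in ITMY defocus per Hartmann sensor [D]
D_ITMY = delta_ITMY / double_pass   # single-pass defocus change [D]
D_ITMY_error = 5e-6                 # error on defocus [D]
R_SR3 = 36.013                      # cold radius of curvature [m]

delta_R = 2 / (2 / R_SR3 + D_ITMY) - R_SR3                  # curvature change [m]
delta_delta_R = D_ITMY_error * (2 / (2 / R_SR3 + D_ITMY))   # error on change [m]
```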
J. Kissel
I'm trying to figure out the best metrics for showing off the improvements to the OSEM PDs' satellite amplifier whitening. Thus far, Oli's been using the input to the damping loops as the metric, using a regression of the corresponding ISI's GS13s to subtract out a fit of how much of that sensor signal is seismic noise, and dividing out the loop suppression -- see LHO:86149 for the most recent examples comparing before vs. after the sat amp upgrade. Without the presence of any other noise or control signals, that should be a fair comparison of the OSEM PDs' sensor noise improvement. However, for a lot of these comparisons, ISC control signals are complicating the picture -- usually at low frequency, where ISC control is typically distributed to the top masses. I use this aLOG as an example of how to better understand this contribution breakdown for a relatively simple suspension -- H1SUSMC1 -- which only has P and Y ISC control from the IMC WFS. (Longitudinal control for IMC L is fed to MC2.) This will also be interesting in the future
:: in the context of how SPI and other sensors may improve the cavity motion,
:: in terms of which DOFs' and loops' worth of control drive at which frequencies -- important for discussions along the lines of "DOF [blah] is dominating the control signal, and the actuator cross-coupling for M1 drive of DOF [blah] to M3 optic DOF [blorp] is large, so let's reduce the DOF [blah] drive," and
:: in terms of whether/where implementing ISI GS13 estimator feedforward will improve things.
To understand how much of the damping loop *error* signal is composed of ISC *control* signal, I compare
- the ISC control signal, and
- the DAMP *control* signal, against
- the MASTER total control request,
all calibrated to the same point in the control system -- where the control output is summed and in the OSEM basis: just downstream of the EUL2OSEM matrix, and just upstream of the COILOUTF filters which compensate for the coil driver frequency response (uninteresting for this study).
Pitch -- the T1T2T3 actuators
(3) Attachment 3: pitch noise comparison excerpt from Oli's LHO:86149. These are times when the IMC was LOCKED, so there should be ISC control. But we see the expected factors of 2x-to-3x improvement in the OSEM noise below ~5 Hz. So, maybe the ISC control is so low in bandwidth that its effect isn't impacting this study. But, we can see that there's clearly some other loop suppression that has not been accounted for, so maybe it *is* high bandwidth? Let's find out.
(1) Attachment 1: comparison of ISC pitch, DAMP pitch, as well as the other DAMP DOFs that use the T1, T2, and T3 actuators -- Vertical and Roll -- control signals. Here, we can clearly see that the damping loops are dominating the T2 (and thus T3) control signal above ~0.5 Hz, or conversely, the IMC WFS DC-coupled control is dominating below 0.5 Hz.
(2) Attachment 2 shows that the T2 and T3 sensors receive an identical request (mostly an out-of-phase combination of Pitch and Roll damping request, as expected from the EUL2OSEM matrix), and T1 drives mostly Roll damping request. The vertical drive request is subdominant at all frequencies.
(4) Attachment 4 shows the open loop gain and loop suppression TF magnitudes for pitch. The loop suppression here looks very much like the inverse of the shape of the ASD left in the pitch regression, making me worried that Oli's automated regime for removing the loop suppression isn't perfect... I'll ask.
Yaw -- the LFRT actuators
(7) Attachment 7: the before vs. after comparison of OSEM noise.
(5) Attachment 5: similar comparison of ISC vs. relevant DAMP control -- showing IMC WFS control dominating only below ~0.2 Hz.
(6) Attachment 6: as expected from the EUL2OSEM matrix, the LF and RT actuators receive the same control.
(8) Attachment 8: the open loop gain and loop suppression TF magnitudes for the Yaw damping loop.
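For reference, the "loop suppression" shown in the attachments is the usual closed-loop factor 1/(1+G), and dividing it out of an in-loop ASD recovers an estimate of the free-running spectrum. A minimal sketch (array and function names are illustrative, not from Oli's actual scripts):

```python
import numpy as np

def remove_loop_suppression(asd, olg):
    """Divide the loop suppression 1/|1+G| out of an in-loop ASD,
    recovering an estimate of the free-running (open-loop) spectrum.
    asd: in-loop amplitude spectral density (real array)
    olg: complex open-loop-gain G on the same frequency grid"""
    suppression = 1.0 / np.abs(1.0 + olg)
    return asd / suppression
```

Where |G| >> 1 the in-loop ASD is suppressed by roughly |G|, so this correction matters most inside the loop bandwidth.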
"[...] Attachment 4 shows the open loop gain and loop suppression TF magnitudes for pitch. The loop suppression here looks very much like the inverse of the shape of the ASD left in the pitch regression, making me worried that Oli's automated regime for removing the loop suppression isn't perfect... I'll ask. Followed up wth Oli on this, and indeed there was a bug in the application of the loop suppression -- a blind python "dir" of the optic's directory for exported loop suppression text files returned the list of files alphabetically (L,P,R,T,V,Y) rather than in the canonical order of (L,T,V,R,P,Y) so that means the P suppression was taken out of the T ASD, etc. They've fixed that now (and added the loop suppression itself to the ASD plot as a visual aide) -- here's a sample of the improved MC1 P and Y, before vs. after plot.
The actual full results for MC1 can be found in LHO:86253.