WP 12836
ECR E2400330
Modified List T2500232
The following SUS SAT Amps were upgraded per ECR E2400330. The modification improves the whitening stage to reduce the ADC noise contribution in the 0.05 to 10 Hz band.
| Suspension | Old Sat Amp S/N | New Sat Amp S/N | OSEMs |
|---|---|---|---|
| ETMX L2 (PUM) | S1100146 | S1100119 | ULLLURLR |
| ETMY L2 (PUM) | S1100137 | S1100127 | ULLLURLR |
| ITMX L2 (PUM) | S1100135 | S1100118 | ULLLURLR |
| ITMY L2 (PUM) | S1000277 | S1100148 | ULLLURLR |
F. Clara, J. Kissel, O. Patane
After maintenance wrapped up this morning, I swept through the LVEA to prep for observing. I unplugged an unused extension cord near the PSL racks and the scissor lift in the vertex/west bay area. Everything else looked okay.
Jeff also mentioned that he moved an SR785 from the PSL racks into the Biergarten for his work this morning, but it's unplugged (in case someone goes looking for it later).
I have created an EPICS IOC which reports any Guardian nodes whose source file(s) have been modified in userapps but have not yet been loaded into the nodes. On the next restart of such nodes (e.g. a guardian reboot) the new code will be loaded automatically, so this system permits us to load new code in a scheduled and orderly way, for example by the code author/node manager during Tuesday maintenance.
This is similar to the front-end filter-module configuration file change (CFC) system, so I've called this GRD-CFC.
The CDS Overview MEDM has a "GRD CFC" button at the bottom showing the GRD-CFC status as GREEN/YELLOW. Pressing this button opens the H1CDS_GUARDIAN_CFC.adl MEDM (attached).
This MEDM has three main areas:
If any nodes are reporting pending changes, you can get pending file details by running the shell command:
guardian_modified_not_loaded
The output of this command includes instructions for listing the line-by-line differences between the pending files and the code currently loaded in the nodes.
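For illustration, here is a minimal sketch of the kind of check this system performs, assuming a hypothetical map of node names to their userapps source files and last load times. None of the paths, node names, or times below reflect the actual IOC implementation:

```python
#!/usr/bin/env python
# Illustrative sketch only of the "modified but not loaded" check.
# The userapps paths, node list, and load times are placeholders.
import os
import time

# Hypothetical map: Guardian node -> (source files in userapps, epoch time of last code load)
nodes = {
    'SEI_ENV': (['/opt/rtcds/userapps/release/isi/h1/guardian/SEI_ENV.py'], 1760400000),
    'SQZ_MANAGER': (['/opt/rtcds/userapps/release/sqz/h1/guardian/SQZ_MANAGER.py'], 1760400000),
}

def modified_not_loaded(nodes):
    """Return {node: [files]} whose userapps source is newer than the last load."""
    pending = {}
    for node, (files, loaded_at) in nodes.items():
        stale = [f for f in files
                 if os.path.exists(f) and os.path.getmtime(f) > loaded_at]
        if stale:
            pending[node] = stale
    return pending

if __name__ == '__main__':
    for node, files in modified_not_loaded(nodes).items():
        print(f"{node}: {len(files)} file(s) modified but not loaded")
        for f in files:
            print(f"    {f}  (mtime {time.ctime(os.path.getmtime(f))})")
```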
Tue Oct 14 10:08:10 2025 INFO: Fill completed in 8min 7secs
Rebooted h1guardian1 at 1547 UTC so that we could start fresh, making it easier to manage any file differences with the help of some of Dave's new screens and scripts.
All nodes started up on their own, and Ryan C helped me recover them all back to a maintenance state.
I went to test the high gain ASC states I made for SEI_ENV this morning while we were still locked. The good news is that there were no errors in the code and it did exactly what I asked of it. The bad news is that we lost lock while transitioning back to a low gain state. This wasn't a great test though, because I had a brain fart and forgot to change the thresholds for the test: the high gains were turned on, then immediately turned back off. So I'm not entirely sure if the lock loss was from the transition back, or just from the fact that we quickly flipped it back and forth.
The major change between what SEI_ENV does and the script the operators use right now is that the sleep timers between engaging/disengaging each loop are almost completely gone. At minimum, for the next test it seems that we need some sleep time for the return back to low gain, or maybe increased tramps.
On the bright side, it looks like the structure of the node and the code works; it just needs a bit of tuning. For now I've returned this node back to its normal operation, without the high gain ASC states.
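For context, here is a rough sketch of the shape such a Guardian state could take, with an explicit tramp and a settle timer. The gain values, the channel, and the ramp time are placeholders, not what SEI_ENV actually uses:

```python
# Illustrative sketch only -- not the SEI_ENV code. The `ezca` object is
# provided by the Guardian runtime; gains, channel, and ramp time are placeholders.
from guardian import GuardState

LOW_GAIN = 1.0    # hypothetical nominal gain
HIGH_GAIN = 2.0   # hypothetical high-wind/EQ gain
TRAMP = 10        # seconds; longer tramps (or a sleep) may be what the return needs

class ASC_HIGH_GAIN(GuardState):
    request = False

    def main(self):
        # set the ramp time first, then write the new gain
        ezca['ASC-CHARD_Y_TRAMP'] = TRAMP
        ezca['ASC-CHARD_Y_GAIN'] = HIGH_GAIN
        # wait for the ramp plus some settling time before declaring the state done
        self.timer['ramp'] = TRAMP + 5

    def run(self):
        return self.timer['ramp']
```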
M. Todd
Today I ran another OMC scan (following last week's instructions, alogs 87316 and 87342) after the morning lockloss to see if I could get a better measurement of the hot state overlaps.
I was still limited to about 12 minutes after the lockloss, so we expect some difference compared to the full hot state.
I've plotted the results on top of last week's scans, in yellow. Analysis of this will follow in a comment.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
The second stage heating contactor had burned up, reducing heating capacity. This is why the zone had difficulty maintaining set point on 10/13. I replaced the ruined contactor and checked functionality.
TITLE: 10/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 10mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY:
There was a small typo in inj_params in fm_dict, which I found, fixed, and reloaded.
SQZ_OPO reports "pump fiber rej power in ham7 high" so I get to try adjusting that today.
TITLE: 10/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
IFO is in NLN and OBSERVING (~10 hr lock)
2:53 - SQZ Dropped
4:10 - SQZ recovered, back to OBS
Mostly quiet shift except for the SQZ dropping due to FC lockloss.
The Squeezer FC lost lock due to FC2 alignment walking away from a lockable GR_LOCKED configuration. As per the wiki, I trended the last time we had a stable lock and reset values for FC1, FC2 and ZM3 (P/Y). Then, I tuned FC2 a bit more to maximize GR_TRANS output. This worked.
Interestingly, I first tried an alignment from a few days ago, but this didn't work. I did this because FC2 P has been increasing quite a bit over the last few days (see screenshots) - probably worth looking into. The values I set the FC suspensions to (using the opticalign sliders) are:
FC2 P: 1558, FC2 Y: 109.6
FC1 P: -899.6, FC1 Y: -74
ZM3 P: -66.25, ZM3 Y: -463
I hope this issue doesn't happen again during the OWL shift, though the trend below from a few days ago shows FC2 P going up over the last few days, seemingly with SQZ compensating in its alignment. Anecdotally, it seems related to temperature, since there's a diurnal breathing day to day (lows at night, highs in the day). Tagging SQZ to look at FC2 and the FC.
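For reference, a minimal sketch of how a trend like this can be pulled offline; the channel name follows the usual SUS opticalign pattern but is an assumption, and this is not a record of the exact commands used:

```python
# Rough sketch of trending the FC2 pitch alignment offset with gwpy.
# Channel name and units are assumptions; adjust to the actual bank.
from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-FC2_M1_OPTICALIGN_P_OFFSET'
data = TimeSeries.get(chan, 'Oct 10 2025 00:00', 'Oct 14 2025 00:00')

plot = data.plot(ylabel='FC2 P opticalign offset [urad]')
plot.gca().set_title(chan)
plot.savefig('fc2_p_trend.png')
```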
LOG:
TITLE: 10/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Locked for 4 hours. We had one lock loss during commissioning time today, though it does not seem to have been caused by any commissioning activities. useism is on its way up, and the wind is gusting above 30mph at times.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 13:52 |
| 14:36 | FAC | Randy | Yarm | n | Y arm bte checks | 18:36 |
| 14:46 | FAC | Kim | Opt Lab | n | Tech clean | 15:10 |
| 14:59 | FAC | Nelly | MY | n | Tech clean | 15:50 |
| 15:11 | FAC | Kim | MX | n | Tech clean | 16:05 |
| 15:33 | ISC | Matt | Prep lab | n | Cheeta table prep | 15:46 |
| 16:38 | FAC | Kim | Prep lab | n | Table clean | 17:09 |
| 16:50 | ISC | Keita | Opt Lab | YES | ISS array work | 20:54 |
| 17:01 | SYS | Mitchell | MY | n | Checking out equipment | 17:49 |
| 17:56 | ISC | Matt | Prep lab | n | More measurements | 18:13 |
| 18:03 | VAC | Janos | EY, EX | n | Parts locate | 18:33 |
| 18:07 | FAC | Tyler | Xarm | n | Checking on X1 bte | 18:37 |
| 20:21 | VAC | Travis | Mids | n | Measurement and prep for tomorrow | 20:50 |
| 20:54 | ISC | Rahul, Jennie | Opt Lab | YES | ISS array work | 23:08 |
In 86964 I ran a side-by-side comparison of the powers at various ports before and after the power outage so we could determine what, if anything, had changed. Although it generally seemed like we had regained or even increased the powers everywhere, kappa C has indicated that the optical gain was reduced. Our current kappa C has been consistently near 0.98 since the outage.
However, I calculated the full optical gain value today from the most recent calibration report (20251012T200500Z) and the currently exported report from August 23 (20250823T183838Z). Each report lists H_c in ct/m. I also chose a time just before each report was measured to calculate the OMC-DCPD_SUM/LSC-DARM_IN1 transfer function, which provides the mA/ct calibration value.
| Date | OMC-DCPD_SUM/LSC-DARM_IN1 [mA/ct] | H_c [ct/m] | Optical gain [mA/pm] |
|---|---|---|---|
| 10/12 | 2505448 | 3.423e6 | 8.576 |
| 8/23 | 2465012 | 3.478e6 | 8.573 |
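(For reference, the way these columns combine, as implied by the units in the headers and the numbers above rather than quoted from the reports, is

$$ \text{optical gain}\,[\mathrm{mA/pm}] \;=\; \frac{\mathrm{OMC\text{-}DCPD\_SUM}}{\mathrm{LSC\text{-}DARM\_IN1}}\,[\mathrm{mA/ct}] \times H_c\,[\mathrm{ct/m}] \times 10^{-12}\,[\mathrm{m/pm}], $$

e.g. for 10/12: 2505448 × 3.423e6 × 1e-12 ≈ 8.576 mA/pm.)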
It seems like our optical gain between these two times has actually stayed the same to much less than a percent. I'm not sure what to make of this with regard to the calibration, given that the calibration report suggests an overall magnitude change in the sensing function (which we are correcting in the TDCFs), when in actuality the magnitude of the sensing function has remained constant.
TITLE: 10/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 14mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING (4 hr lock)
Wind is picking up. Secondary microseism is picking up. Hoping to stay quiet and locked for the shift.
For the SR3 P estimator, we had initially developed and installed blend filters (in H1:SUS-SR3_M1_EST_P_FUSION_MEAS_BP and H1:SUS-SR3_M1_EST_P_FUSION_MODL_BP filter banks), named pit_v1, that blended between the OSEMs on-resonance, and the estimator everywhere else (86452). After a bit, Brian Lantz made a pit_v2 that included OSEM damping at two extra frequencies because we were having issues with extra resonances at 0.65 and 0.75 Hz that weren't being damped (86510)(filter comparisons).
Eventually we realized that the reason those two peaks weren't being damped was that we had forgotten to include the model contribution to P from L, and we installed those needed filters (86567). However, we didn't swap the blend filters back to pit_v1 at the time.
So during relocking today (2025/10/13 19:05:00 UTC), I swapped us back to pit_v1. We will run with this until at least tomorrow morning and then verify that we don't see a difference in the damping of those two peaks.
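For reference, a minimal sketch of how a swap like this can be done with ezca. Which FM slots hold pit_v1 vs pit_v2 is a placeholder assumption here (check the banks in foton/MEDM first), and this is not a record of the exact commands used:

```python
# Illustrative only: toggle between blend filter versions in the two fusion banks.
# FM1/FM2 are placeholder slots for pit_v1/pit_v2, not the actual assignments.
from ezca import Ezca

ezca = Ezca(ifo='H1')

for bank in ('SUS-SR3_M1_EST_P_FUSION_MEAS_BP',
             'SUS-SR3_M1_EST_P_FUSION_MODL_BP'):
    fb = ezca.get_LIGOFilter(bank)
    fb.switch_off('FM2')   # placeholder slot for pit_v2
    fb.switch_on('FM1')    # placeholder slot for pit_v1
```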
Oli - Thanks for doing this test, I'm looking forward to learning what happens. I think the question here is "how well does it work" and "what do we see" rather than "does it work". I'm hoping this will reduce the RMS a bit, although the analysis is complicated by the cross-coupling w/ length.
Another big EQ rolling through. This one is down near Antarctica! Debated when to transition to "ASC Hi Gn", since the R-wave was going to take 45-60 min to arrive. But the S & P waves triggered EQ mode and I saw a yellow Picket Fence, so I clicked the ASC Hi Gn button, but during the transition H1 lost lock. This might have been too big for ASC Hi Gn regardless, but I kinda wish I had clicked it earlier.
Winds were peaking just under 40mph around the lockloss as well.
Will be hanging out here for a few hours.
Now we wait for the R-wave, which is probably due to arrive around 2135 UTC.
We lost lock ~3 seconds after the CHARD_Y filter bits were changed; there was also a big jump in ground motion at the same time, and it was gusting >30mph at the corner station.
E. Capote, J. Kissel, S. Dwyer
The ~1 Hz ring-up that had been marginally stable and transient is now fully unstable during the lock acquisition sequence as we recover from maintenance. The suspicion is that the change in ITM M0/R0 satamps might have changed the top mass damping loop OLGTF enough to alter the damped plant, which is the plant seen by the never-really-well-designed L2DAMP loop that takes the L2 (or PUM) OSEM sensor signals and feeds them back to the reaction chain top mass OSEM actuators. But also, we've never measured these loops in any substantial form, so we wanted to just see what they were doing.
Attached are the results. Here're the templates:
/ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ITMY/SAGR0/Data/
2025-07-08_2240UTC_H1SUSITMY_R0_WhiteNoise_0p02to50Hz_L2DAMP_OLGTF_P.xml
2025-07-08_2240UTC_H1SUSITMY_R0_WhiteNoise_0p02to50Hz_L2DAMP_OLGTF_R.xml
Very little loop gain and only at 3.3 Hz. The loop stability is questionable at that frequency -- for a few averages the suppression (i.e. the gain peaking) looks really sharp around 3.3 Hz.
T'was really tough to get good coherence; the excitation is pretty well tailored, but it's tough to fight the 1/f^6 suppression of the physical suspension and dirt coupling.
I had to turn OFF the R0 alignment offsets to get this data for pitch. (They were ON for the Roll measurement).
So -- perhaps not the source of the ~1 Hz IFO instability, but boy could this loop use some TLC in order to more effectively achieve its goals...
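For anyone reproducing this offline rather than in DTT, here is a minimal sketch of the equivalent spectral estimate with coherence gating. The placeholder arrays stand in for the excitation and response channels that would come from NDS, and the sample rate and threshold are made up:

```python
# Not the DTT templates above -- just a sketch of the equivalent offline estimate:
# form the transfer function from cross/auto spectra and keep only coherent bins.
import numpy as np
from scipy import signal

fs = 256                         # Hz, assumed sample rate of the DAQ'd test points
x = np.random.randn(fs * 600)    # placeholder excitation (10 min of white noise)
y = np.random.randn(fs * 600)    # placeholder response

f, Pxy = signal.csd(x, y, fs=fs, nperseg=fs * 50)        # cross spectrum
_, Pxx = signal.welch(x, fs=fs, nperseg=fs * 50)         # excitation auto spectrum
_, Cxy = signal.coherence(x, y, fs=fs, nperseg=fs * 50)  # coherence

tf = Pxy / Pxx                   # H1 estimator of the transfer function
good = Cxy > 0.9                 # only trust bins with decent coherence
print(f[good], np.abs(tf[good]))
```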
FYI, we also took ITMX data for these loops.
/ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ITMX/SAGR0/Data
2025-07-08_2240UTC_H1SUSITMX_R0_WhiteNoise_0p02to50Hz_L2DAMP_OLGTF_P.xml
2025-07-08_2240UTC_H1SUSITMX_R0_WhiteNoise_0p02to50Hz_L2DAMP_OLGTF_R.xml
Here's the characterization data and fit results for S1100148, assigned to ITMY L2's ULLLURLR OSEMs.
This sat amp is a UK 4CH sat amp, D0900900 / D0901284. The data was taken per methods described in T080062-v3, using the diagrammatic setup shown on PAGE 1 of the Measurement Diagrams from LHO:86807.
The data was processed and fit using ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Scripts/
plotresponse_S1100148_ITMY_L2_ULLLURLR_20250917.m
Explicitly, the fit to the whitening stage zero and pole, the transimpedance feedback resistor, and foton design string are:
The attached plot and machine readable .txt file version of the above table are also found in ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/
2025-09-17_UKSatAmp_S1100148_D0901284-v5_fitresults.txt
Per usual, R_TIA_kOhm is not used in the compensation filter -- but after ruling out an adjustment in the zero frequency (by zeroing the phase residual at the lowest few frequency points), Jeff nudged the transimpedance a bit to get the magnitude scale to within ~0.25%, as shown in the attached results. Any scaling like this will be accounted for instead with the absolute calibration step, i.e. Side Quest 4 from G2501621, a la what was done for PR3 and SR3 top masses in LHO:86222 and LHO:84531 respectively.
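For context only (the fitted numbers for this unit are in the attached plot and .txt file, not reproduced here), the compensation filter is of the inverse-whitening form: a software zero at the measured analog pole frequency and a software pole at the measured analog zero frequency. A rough sketch with placeholder numbers:

```python
# Illustrative sketch only: compensation (inverse-whitening) response for a whitening
# stage with a hypothetical fitted zero at 0.052 Hz and pole at 10.2 Hz. The real
# fitted values live in the Results/*.txt file referenced above, and the overall
# gain normalization is a convention choice.
import numpy as np
from scipy import signal

f_zero_analog, f_pole_analog = 0.052, 10.2   # Hz, placeholder fit values

z = [-2 * np.pi * f_pole_analog]   # software zero (rad/s) at the analog pole
p = [-2 * np.pi * f_zero_analog]   # software pole (rad/s) at the analog zero
k = 1.0                            # unity gain at high frequency with this choice

f = np.logspace(-3, 2, 500)        # Hz
w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
print('low-frequency gain ~', abs(h[0]))   # approaches f_pole_analog/f_zero_analog (~196)
```

In foton design-string form, the same placeholder filter would look something like zpk([10.2],[0.052],1,"n") -- again, placeholder numbers, not the fitted values for this serial number.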
Here's the characterization data and fit results for S1100119, assigned to ETMX L2's ULLLURLR OSEMs.
This sat amp is a UK 4CH sat amp, D0900900 / D0901284. The data was taken per methods described in T080062-v3, using the diagrammatic setup shown on PAGE 1 of the Measurement Diagrams from LHO:86807.
The data was processed and fit using ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Scripts/
plotresponse_S1100119_ETMX_L2_ULLLURLR_20250916.m
Explicitly, the fit to the whitening stage zero and pole, the transimpedance feedback resistor, and foton design string are:
The attached plot and machine readable .txt file version of the above table are also found in ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/
2025-09-16_UKSatAmp_S1100119_D0901284-v5_fitresults.txt
Per usual, R_TIA_kOhm is not used in the compensation filter -- but after ruling out an adjustment in the zero frequency (by zeroing the phase residual at the lowest few frequency points), Jeff nudged the transimpedance a bit to get the magnitude scale to within ~0.25%, as shown in the attached results. Any scaling like this will be accounted for instead with the absolute calibration step, i.e. Side Quest 4 from G2501621, a la what was done for PR3 and SR3 top masses in LHO:86222 and LHO:84531 respectively.
Here's the characterization data and fit results for S1100118, assigned to ITMX L2's ULLLURLR OSEMs.
This sat amp is a UK 4CH sat amp, D0900900 / D0901284. The data was taken per methods described in T080062-v3, using the diagrammatic setup shown on PAGE 1 of the Measurement Diagrams from LHO:86807.
The data was processed and fit using ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Scripts/
plotresponse_S1100118_ITMX_L2_ULLLURLR_20250916.m
Explicitly, the fit to the whitening stage zero and pole, the transimpedance feedback resistor, and foton design string are:
The attached plot and machine readable .txt file version of the above table are also found in ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/
2025-10-14_UKSatAmp_S1100118_D0901284-v5_fitresults.txt
Per usual, R_TIA_kOhm is not used in the compensation filter -- but after ruling out an adjustment in the zero frequency (by zeroing the phase residual at the lowest few frequency points), Jeff nudged the transimpedance a bit to get the magnitude scale to within ~0.25%, as shown in the attached results. Any scaling like this will be accounted for instead with the absolute calibration step, i.e. Side Quest 4 from G2501621, a la what was done for PR3 and SR3 top masses in LHO:86222 and LHO:84531 respectively.
Here's the characterization data and fit results for S1100127, assigned to ETMY L2's ULLLURLR OSEMs.
This sat amp is a UK 4CH sat amp, D0900900 / D0901284. The data was taken per methods described in T080062-v3, using the diagrammatic setup shown on PAGE 1 of the Measurement Diagrams from LHO:86807.
The data was processed and fit using ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Scripts/
plotresponse_S1100127_ETMY_L2_ULLLURLR_20250916.m
Explicitly, the fit to the whitening stage zero and pole, the transimpedance feedback resistor, and foton design string are:
The attached plot and machine readable .txt file version of the above table are also found in ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/
2025-09-16_UKSatAmp_S1100127_D0901284-v5_fitresults.txt
Per usual, R_TIA_kOhm is not used in the compensation filter -- but after ruling out an adjustment in the zero frequency (by zeroing the phase residual at the lowest few frequency points), Jeff nudged the transimpedance a bit to get the magnitude scale to within ~0.25%, as shown in the attached results. Any scaling like this will be accounted for instead with the absolute calibration step, i.e. Side Quest 4 from G2501621, a la what was done for PR3 and SR3 top masses in LHO:86222 and LHO:84531 respectively.
ITMY L2 (PUM) Sat Amp S1100148 installed on 10/14/2025. Replaced on 10/16/2024 with S1100080.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=87515
Note that the satamp for ETMY L2 has now been swapped back to S1100137, which has been modified per ECR E2400330.
alog: 87722