Displaying reports 381-400 of 85685.
Reports until 16:30, Tuesday 28 October 2025
H1 General
oli.patane@LIGO.ORG - posted 16:30, Tuesday 28 October 2025 - last comment - 17:33, Tuesday 28 October 2025(87810)
Ops EVE Shift Start

TITLE: 10/28 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 7mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.28 μm/s 
QUICK SUMMARY:

Attempting to relock again and trying to figure out why we just had two locklosses directly after CLOSE_BEAM_DIVERTERS/start of OMC_WHITENING.

Comments related to this report
oli.patane@LIGO.ORG - 17:33, Tuesday 28 October 2025 (87815)

00:29UTC Back to Observing

H1 SEI (ISC, OpsInfo)
thomas.shaffer@LIGO.ORG - posted 16:07, Tuesday 28 October 2025 - last comment - 13:56, Thursday 30 October 2025(87788)
SEI_ENV node testing with High ASC gain testing Rd 2

Summary

Round 2 of testing the guardianization of turning the high ASC gains on and off (Round 1 - alog87462). SEI_ENV will now automatically move us into the high gain ASC state when a) we are in the earthquake state, or b) there is an incoming or ongoing earthquake that is at or below the dotted line on the "rasta plot". The transition takes 11 seconds to complete, and it will transition back when the ground motion is low enough to bring us out of the earthquake state.

Details from today

I started testing with a few 10 and 5 second waits between steps, just as is done in the script that we currently use. Once those ran successfully a few times, I started to decrease the wait times between steps. Eventually, I had success transitioning all of the ASC at the same time, then the FF 10 seconds after. Since this was the same configuration I had the last time I tried this, I tried to reproduce the lock loss by requesting the High ASC state, then immediately requesting the Low ASC state. This did, again, cause a lock loss. To avoid this, I have a wait timer in the High state so it won't switch quickly from one to the other.

Transitioning back out of the high ASC state has the same thresholds as the earthquake state currently. We didn't want to transition back and then have to do it all over again, or wait in earthquake for another 10 minutes for it to calm down. We might make this a bit shorter or smarter after we've seen it work a few times.
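Guardian nodes are Python, so the decision logic described above can be sketched roughly as follows. The function names, flags, and the 60 s hold time are hypothetical stand-ins, not the actual SEI_ENV code:

```python
# Rough sketch of the SEI_ENV decision logic described above.
# Names and the MIN_HOLD_S value are hypothetical, not the real Guardian code.
# The real transition takes ~11 s to complete.

MIN_HOLD_S = 60  # hypothetical hold time guarding against rapid High->Low flips

def want_high_asc(in_earthquake_state, eq_incoming, eq_peak_vel, rasta_threshold):
    """High-gain ASC wanted if a) we are in the earthquake state, or
    b) an incoming/ongoing EQ is at or below the "rasta plot" dotted line."""
    return in_earthquake_state or (eq_incoming and eq_peak_vel <= rasta_threshold)

def next_state(current, time_in_state_s, in_eq, eq_incoming, eq_peak_vel,
               rasta_threshold):
    high = want_high_asc(in_eq, eq_incoming, eq_peak_vel, rasta_threshold)
    if current == "LOW_ASC" and high:
        return "HIGH_ASC"
    # The hold timer prevents the quick High->Low->High request pattern
    # that caused a lock loss during testing.
    if current == "HIGH_ASC" and not high and time_in_state_s >= MIN_HOLD_S:
        return "LOW_ASC"
    return current
```

This is only meant to show the shape of the logic: the state transitions on the earthquake conditions, and the wait timer forbids an immediate High-to-Low request.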

Time (hhmmss UTC)   Transition to   Notes
150251              High            10/5s timers
150457              Low             10/5s timers
150616              High            Repeat of above
150724              Low             Repeat of above
150754              High            1/5s timers
150930              Low             1/5s timers
151113              High            All ASC engaged at once
151218              Low             All ASC engaged at once
151326              High            All ASC engaged at once
151340              Low             Lock loss

 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 13:56, Thursday 30 October 2025 (87863)OpsInfo

I forgot that this would eventually trigger IFO_NOTIFY if the high gain state were to keep us out of Observing for longer than 10 minutes while IFO_NOTIFY was running. I've changed IFO_NOTIFY to not notify when the SEI_ENV node is in the high ASC or transition states.

H1 SUS (SEI)
jeffrey.kissel@LIGO.ORG - posted 14:48, Tuesday 28 October 2025 - last comment - 15:57, Tuesday 11 November 2025(87801)
H1SUSPRM M3, M2, and M1 Drive to M1 Response TFs to inform Estimator Models
J. Kissel

Gathered H1SUSPRM M3, M2, and M1 Drive to M1 Response TFs to inform the "drive" models for a future H1SUSPRM estimator. I'll post the locations / file names in the comments. Here in the main entry, I discuss the state of the control system for H1 SUS PRM so we understand with how large a grain of salt we should take these measurements.

Executive summary :: there are some side quests we can launch -- especially on the actuation side of this suspension -- if we think that these measurements reveal "way too much cross coupling for an estimator to work." The first things I'd attack would be 
    - the frequency-dependent and scalar gain differences *between* the nominal low noise state of the coil drivers and the state we need to characterize the suspension. 
    - the very old coil balancing, which was done *without* first compensating for any frequency-dependent gain differences in the channels at the frequency used to balance the coils (see LHO:9453 for measurement technique.)

Here's the detailed summary of all the relevant things for these measurements:
    - The suspension was ALIGNED, with alignment offsets ON, with slider values (P,Y) = (-1629.783, -59.868) ["urad"] 
        :: ALIGNED is needed (rather than just DAMPED [where the alignment sliders are OFF] or MISALIGNED where extra large alignment offsets are ON; per discussion of how the alignment impacts the calibration in LHO:87102)
        :: the usual caveats about the slider calibration, which is still using the [DAC ct / "urad"] gains from LHO:4563).

    - The M1 damping loops were converted to Level 2.0 loop shaping in Jan 2023 (LHO:66859), nominally designed to have an EPICS gain of -1.0. However, in Aug 2023 the EPICS gains were lowered to -0.5, and they have been that way for most of O4 and remain that way now. For all of these measurements, I set the L, P, and Y gains to -0.1; the "20% of nominal" gain mantra we've used for the HLTS estimators. I also gathered *almost* all the measurements again with only the Y gain at -0.1, but ran out of time to complete that set for comparison. 

    - Even though it was maintenance day, when we typically turn site-wide sensor correction OFF, I manually turned ON sensor correction for ISI HAM2 to get better coherence below 1 Hz (using instructions in LHO:87790)

    - The M3 L to M3 P filter (and gain) in the M3 DRIVEALIGN frequency-dependent matrix is OFF, per LHO:87523. 

    - There are (M3 P to M3 L) = 1.7 and (M3 Y to M3 L) = 0.52 scalar gains ON in the off-diagonal elements of the M3 DRIVEALIGN matrix, whose purpose is to change the center of P and Y actuation to be around where the IFO's beam spot typically is.

    - There is a set of M1 L to M1 P filters, "M1L_M3P" and "invM1P_M3P," in the M1 DRIVEALIGN matrix, with an EPICS gain of -1. I think these came from LHO:42549. The measurements I took aren't impacted by this, as I drove from the M1 TEST bank, which does not send excitations through the DRIVEALIGN matrix. HOWEVER, we'll definitely need to consider this when we model the ISC drive, which *does* go through the M1 DRIVEALIGN matrix.

    - All M1, M2, and M3 stages of OSEM PDs sat amp whitening filters have been upgraded with ECR E2400330's filter design, and compensated accordingly. 
        :: M1 stage LHO:85463
        :: M2 & M3 stages LHO:87103

    - All M1, M2, and M3 stages of OSEM PDs have been calibrated via the ISI GS13s, and calibrated in the ALIGNED state (LHO:87231)

    - In order to get decent coherence over the band of interest for the M3, M2, and M1 drives, I had to drive the suspension actuators in their highest range state, which is different from the state the IFO usually needs.
        :: M1 = State 1 "LP OFF" (a Triple TOP Driver)
        :: M2 = State 2 "Acq ON, LP OFF" (An ECR E1400369 Triple Acquisition Driver "TACQ" modified for an extra 10x actuation strength. Modified in Sep 2013 LHO:7630)
        :: M3 = State 2 "Acq ON, LP OFF" (An ECR E1400369 Triple Acquisition Driver "TACQ" modified for an extra 10x actuation strength. Modified in Sep 2014 LHO:13956)

        :: The nominal state for the switches are M1 = State 2 "LP ON," M2 = M3 = State 3 "ACQ OFF, LP ON."

    - No actuator channels have had any precise compensation for their coil driver's frequency response in any state.
        :: M1 state 1 channels are all compensated with (z:p) = (0.9 : 30.9996) Hz
        :: M2 state 2 channels are all compensated with (z:p) = (64.9966 : 13) Hz
        :: M3 state 2 channels are all compensated with (z:p) = (64.9966 : 13) Hz

    - There are scalar "coil balancing" non-unity magnitude gains on each of the M2 and M3 stage channels, but they are the same values that have been in play since Jan 2014 (LHO:9419; so, after the M2 TACQ driver mod, but before the M3 TACQ driver mod). There are no coil balancing gains on the M1 stage; they're all either +/- 1.0.
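As a rough illustration of what the (z:p) compensation pairs quoted above look like in frequency response, here is a scipy sketch of a unity-DC-gain zero/pole pair using those values. The filter construction is illustrative only, not the front-end implementation:

```python
# Sketch: magnitude response of the (z:p) coil-driver compensation pairs
# quoted above. The (z, p) values are from this entry; the construction
# is illustrative, NOT the actual front-end filter code.
import numpy as np
from scipy import signal

def zp_compensation(z_hz, p_hz):
    """Unity-DC-gain zero/pole pair: H(s) = (s/wz + 1) / (s/wp + 1)."""
    wz, wp = 2 * np.pi * z_hz, 2 * np.pi * p_hz
    return signal.TransferFunction([1 / wz, 1], [1 / wp, 1])

stages = {"M1 state 1": (0.9, 30.9996),
          "M2 state 2": (64.9966, 13.0),
          "M3 state 2": (64.9966, 13.0)}

for name, (z, p) in stages.items():
    sys = zp_compensation(z, p)
    _, h = signal.freqresp(sys, w=[2 * np.pi * 1e4])  # well above both corners
    # asymptotic high-frequency gain of this unity-DC pair is p/z
    print(f"{name}: |H| at 10 kHz ~ {abs(h[0]):.3g} (p/z = {p / z:.3g})")
```

Note the asymmetry: the M1 pair boosts high frequencies (p/z ~ 34), while the M2/M3 pairs roll them off (p/z ~ 0.2), consistent with compensating drivers whose responses differ between states.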

Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:52, Tuesday 28 October 2025 (87808)SEI
Here's the complete data set with L, P, and Y damping loop gains set to -0.1, with the T, V, and R gains at -0.5.

    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/SAGM1/Data/
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_L_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_P_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_R_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_T_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_V_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_Y_0p02to50Hz.xml

    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/SAGM2/Data/
        2025-10-28_H1SUSPRM_M2toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_L_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M2toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_P_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M2toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_Y_0p02to50Hz.xml

    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/SAGM3/Data/
        2025-10-28_H1SUSPRM_M3toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_L_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M3toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_P_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M3toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_Y_0p02to50Hz.xml
jeffrey.kissel@LIGO.ORG - 14:53, Tuesday 28 October 2025 (87809)
Here's the nearly complete data set for *only* the Y damping loop gain set to -0.1, with L, T, V, R, and P set to -0.5.

    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/SAGM1/Data/
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1YawDampingGain0p1_WhiteNoise_L_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M1toM1_CDState1_M1YawDampingGain0p1_WhiteNoise_T_0p02to50Hz.xml
        [did not get V]
        [did not get R]
        [did not get P]
        [did not get Y]

    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/SAGM2/Data/
        2025-10-28_H1SUSPRM_M2toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_L_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M2toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_P_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M2toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_Y_0p02to50Hz.xml

    /ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/SAGM3/Data/
        2025-10-28_H1SUSPRM_M3toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_L_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M3toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_P_0p02to50Hz.xml
        2025-10-28_H1SUSPRM_M3toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_Y_0p02to50Hz.xml
oli.patane@LIGO.ORG - 13:45, Tuesday 04 November 2025 (87951)

Took some more of the measurements for the PRM estimator here: 87950

Those four M1 to M1 measurements with DAMP Y at 20%, for V, R, P, and Y, are still needed.

oli.patane@LIGO.ORG - 15:57, Tuesday 11 November 2025 (88066)

Here's the list of estimator measurements for PRM: 88063

H1 IOO
sheila.dwyer@LIGO.ORG - posted 14:45, Tuesday 28 October 2025 (87806)
laser power

The laser power guardian change 87603 is now in and loaded, so we will use IM4 trans instead of the IMC input request.

We should expect this to change the gain of the LSC and ASC loops by 9% compared to earlier in the week, and a 3% increase compared to before the power outage. 
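As a back-of-the-envelope check, if optical gain scales linearly with input power, the quoted percentages map onto loop gain as follows (illustrative arithmetic only; the 9% and 3% figures are from this entry):

```python
# Illustrative only: map a fractional input-power change to a loop gain
# scale factor, assuming optical gain scales linearly with power.
import math

for change in (0.09, 0.03):
    ratio = 1.0 + change
    print(f"{change:+.0%} power -> loop gain x{ratio:.2f} "
          f"({20 * math.log10(ratio):+.2f} dB)")
```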

H1 ISC
anthony.sanchez@LIGO.ORG - posted 14:38, Tuesday 28 October 2025 (87805)
Bi-Weekly ISC Histograms of different Sections of ISC Locking States.

Bi-Weekly ISC Locking Histograms. 
I turned this into a 1 click button to make these plots: Sitemap-> OPS-> WEEKLIES-> ISC Histograms. 
Once it runs (it should only take about 15 seconds), the terminal tells you where to find the plots; just upload the plots into your alog.

 

Images attached to this report
H1 ISC
thomas.shaffer@LIGO.ORG - posted 14:05, Tuesday 28 October 2025 (87804)
SDF diffs that Prep_For_Locking reverts

We run the Prep_for_Locking state before SDF_Revert, and Sheila and others have wondered how much of that state gets reverted during SDF_Revert. It's not a huge amount, but we might want to consider changing these to work more cohesively.

Images attached to this report
H1 PSL (ISC, PSL)
keita.kawabe@LIGO.ORG - posted 13:57, Tuesday 28 October 2025 - last comment - 15:59, Wednesday 29 October 2025(87803)
ISS array S1202965 was put in storage (Rahul, Keita)

Related: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=87729

We disconnected everything from the ISS array installation spare unit S1202965 and stored it in the ISS array cabinet in the vac prep area next to the OSB optics lab. See the first 8 pictures.

The incomplete spare ISS array assy originally removed from LLO HAM2 (S1202966) was moved to a shelf under the work table right next to the clean loom in the optics lab (see the 9th picture). Note that one PD was pulled from that unit and transplanted to our installation spare S1202965.

Metadata for both 2965 and 2966 were updated.

ISS second array parts inventory https://dcc.ligo.org/E2500191 is being updated.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 15:59, Wednesday 29 October 2025 (87835)

Rahul and I cleared the optics table so Josh and Jeff can do their SPI work.

Optics mounts and things were put in the blue cabinet. Mirrors, PBS and lenses were put back into labeled containers and in the cabinet in front of the door to the change area.

The butterfly module laser, the LD driver, and the TEC controller were put back in the gray plastic bin. There was no space in the cabinets/shelves, so it's under the optics table closer to the flow bench area.

Single channel PZT drivers were put back in the cabinet on the northwest wall in the optics lab. Two channel PZT driver, oscilloscopes, a function generator and DC supplies went back to the EE shop.

The OnTrack QPD preamp, its dedicated power transformer, LIGO's LCD interface for the QPD, and its power supply were put in a corner of one of the bottom shelves of the cabinet on the southwest wall.

Thorlabs M2 profiler and a special lens kit for that were given to Tony who stored them in the Pcal lab.

aLIGO PSL ISS PD array spare parts inventory E2500191 was updated.

H1 SUS
oli.patane@LIGO.ORG - posted 13:47, Tuesday 28 October 2025 (87802)
Weekly In-Lock SUS Charge Measurement FAMIS

Closes FAMIS#28429, last checked 87622

ITMX didn't have enough coherence again this time :(

ETMX

ETMY

ITMX

ITMY

Images attached to this report
H1 TCS
thomas.shaffer@LIGO.ORG - posted 12:47, Tuesday 28 October 2025 (87799)
TCS Chiller Water Level Top-Off - Biweekly

FAMIS27827

No water was added, but the filters all looked good. There was no water in the Dixie Leak Detector. Updated the T2200289 sheet.

H1 SEI
ryan.crouch@LIGO.ORG - posted 12:37, Tuesday 28 October 2025 (87797)
BRS Drift Trends -- Monthly

Closes FAMIS27404, last checked in alog87178

In general we can see blips at 10/21 and 09/10 from the BRS-Y damping issues and the power outage.

We can see the BRSY issues we had last week on 10/21 (alog87634). The ETMY BRS temperature seems to still be slowly increasing, just like last month.

BRS-X looks to be drifting down for ~all of October.

Images attached to this report
LHO FMCS (PEM)
ryan.crouch@LIGO.ORG - posted 12:23, Tuesday 28 October 2025 (87617)
HVAC Fan Vibrometers Check - Weekly

Last checked in alog87549, closes FAMIS27429.

There are a few noisy outbuilding fans; EY_470_1 and MX_370_1 are the worst offenders at ~0.3. MY_270_{1,2} see sporadic noise increases every day or so.

For the corner station fans, MR_FAN1_170_{1,2} got a bit noisier 2 days ago, and MR_FAN6_170_1 has also gotten a bit noisier over the past day. The winner for noisiest fan in the CS is MR_FAN5_170_1 at around ~0.4.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 12:05, Tuesday 28 October 2025 (87796)
LVEA swept

I swept the LVEA, nothing to report this week.

LHO VE
janos.csizmazia@LIGO.ORG - posted 11:57, Tuesday 28 October 2025 (87795)
Quarterly Functionality Test Performed on EX/MX Turbo Pumps and EX Purge Air Skid
Procedure checklist for both stations completed.  No issues were identified at this time.

MX: Scroll pump hours: 236.9
       Turbo pump hours: 149
       Crash bearings: 100%

EX: Scroll pump hours: 7157.3
       Turbo pump hours: 1109
       Crash bearings: 100%

The EX purge air compressor was run for ~3 hours; the dew point monitor reached <-40 deg F in ~15 minutes and at the time of shutdown was reporting -76 deg F.

The compressor has 74 running hours total, distributed among the three scroll compressors. Drains are operational, and the dryers indicate normal operation. The only caveat is that, compared to the EY compressor, the main drain (after the compressor) discharged a lot more water. This indicates higher humidity in the EX MER.

Images attached to this report
H1 ISC (OpsInfo, SYS)
jeffrey.kissel@LIGO.ORG - posted 10:49, Tuesday 28 October 2025 - last comment - 11:06, Tuesday 28 October 2025(87793)
IMC Input Power at 60W with IM4 and PR2 misaligned, and PRM Aligned for 1.0 hour
J. Kissel, M. Todd, S. Dwyer

Just recording this for posterity: Matt and I wanted to (continue) parallelizing our work on characterizing the ISS Array at full / nominal power (60W into the PSL) and characterizing PRM dynamics for future HSTS estimator modeling, respectively. The estimator team discovered a week or two ago that PRM has a different dynamical response when the SUS is ALIGNED vs. MISALIGNED. So, I misaligned IM4 and PR2 to ensure the 60W didn't go anywhere but a fixed location, and aligned PRM. 

The worry is that IM4 doesn't have a "safe" designated fixed location to dump its reflected beam when misaligned -- there's no "parking dump" like there is for PRM. So -- this is an aLOG to indicate the times of high power with IM4 misaligned and what little info we have about the physical position.

I say "what little information about the position we have" because IM4 is a HAUX suspension -- while it has recently had its OSEM sensor PD sat amp upgraded, we have not measured or installed an absolute calibration for the sensors with an ISI injection. We know from other suspensions that OSEM PDs can have factors of 2x to 3x error between the "generic calibration based on electronics and [likely ancient] open light current measurement" and the modern absolute calibration from the ISI GS13s.

There *is* a calibration of the IM4 alignment sliders -- installed in Apr 2024 (LHO:77211). 
However, that calibration was based on the OSEM sensor PDs. 
So we have to take the fidelity of this calibration with a huge grain of salt a la the above distrust in OSEM PD calibration.

So -- IM4 had the following alignment offsets requested of its sliders: 
              OFFSET       OUT16
             ["urad"]    [EB-DAC ct]
    P        +114.539     +1248.53
    Y        +111.103      +625.387

and its *misalignment* offsets -- which are not calibrated in the front-end, but I've calibrated them using the (P,Y) = (10.9005 , 5.6289) [EB-DAC ct / "urad"] calibration from LHO:77211 here: 
              OFFSET       OUT16
             ["urad"]   [EB-DAC ct]
    P         +50.915      +555.0
    Y         +98.598      +555.0

So, misaligned, that gives a total requested displacement of  
              OFFSET       OUT16
             ["urad"]   [EB-DAC ct]
    P         165.454     1803.532
    Y         209.701     1180.388
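The conversion in the tables above can be reproduced directly; the (P, Y) = (10.9005, 5.6289) [EB-DAC ct / "urad"] slider gains are from LHO:77211, and the offsets are the ones quoted in this entry:

```python
# Reproducing the misalignment-offset calibration above: DAC counts
# divided by the (P, Y) slider gains from LHO:77211. All numbers are
# taken from this entry.
GAIN = {"P": 10.9005, "Y": 5.6289}                 # [EB-DAC ct / "urad"]
ALIGN = {"P": (114.539, 1248.53),                  # (urad, OUT16 ct)
         "Y": (111.103, 625.387)}
MISALIGN_CT = {"P": 555.0, "Y": 555.0}             # misalignment offsets [ct]

for dof in ("P", "Y"):
    mis_urad = MISALIGN_CT[dof] / GAIN[dof]
    tot_urad = ALIGN[dof][0] + mis_urad
    tot_ct = ALIGN[dof][1] + MISALIGN_CT[dof]
    print(f"{dof}: misalign = {mis_urad:.3f} urad, "
          f"total = {tot_urad:.3f} urad / {tot_ct:.3f} ct")
```

This reproduces the (50.915, 98.598) urad misalignments and the (165.454, 209.701) urad totals in the tables.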

IM4 was misaligned, with PRM aligned and PR2 misaligned, and 60W into the IMC from 2025-10-28 16:08 UTC to 2025-10-28 17:06 UTC. 

After 17:06 UTC, the IMC power remained at 60W, but I aligned IM4 and PR2 and misaligned PRM. (The normal "IFO DOWN" configuration).
(So yes, we didn't turn the IMC power down before we went from misaligned to aligned, either.)
Images attached to this report
Comments related to this report
janos.csizmazia@LIGO.ORG - 11:06, Tuesday 28 October 2025 (87794)
There was a relatively small (~2E-9 Torr) pressure rise in HAM1, which is well aligned with these activities. Both its magnitude and its rate of rise are orders of magnitude smaller than a "proper pressure spike event," but it is worth mentioning.
We'll keep an eye out.
Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:41, Tuesday 28 October 2025 (87792)
Tue CP1 Fill

Tue Oct 28 10:06:36 2025 INFO: Fill completed in 6min 32secs

 

Images attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 19:01, Monday 27 October 2025 - last comment - 12:46, Tuesday 28 October 2025(87780)
ETMY L2 saturations last week NOT caused by new satamp

Jeff, Ryan S, Oli

During last week's set of issues, something that we saw happen a few times during our lock reacquisition attempts was lots of EY saturations while going through LOWNOISE_COIL_DRIVERS/TRANSITION_FROM_ETMX. The saturations would stop once the L2 to R0 damping was turned off (87713), so it seemed like the issue was with the ETMY L2 satamp, and we swapped it out with a different one (87722). We didn't see any of these repeating saturations after that, but we were also changing a lot of things at the time trying to figure out the problem, and we hadn't been seeing these saturations during every single relock attempt anyway.

The DAC channels that were showing saturations were from DAC1 channels 1 and 2. Checking the model, those channels line up with R0 F2 and F3, which are the channels that control Length on R0. We plotted R0's MASTER OUTs during times where we saw lots of the EY saturations, and at times where the saturations heard on verbals were normal, including a time before we swapped the ETMY L2 satamp on October 14th.

Here's a breakdown of the different examples we looked at:

Normal amount of saturations during LOWNOISE_COIL_DRIVERS / TRANSITION_FROM_ETMX
Date     SatAmp       verbals         ndscope
Oct 1    unmodified   oct1_verbals    oct1_ndscope
Oct 20   modified     oct20_verbals   oct20_ndscope
Oct 22   modified     oct22_verbals   oct22_ndscope

Lots of EY saturations during LOWNOISE_COIL_DRIVERS / TRANSITION_FROM_ETMX
Date     SatAmp       verbals         ndscope
Oct 23   modified     oct23_verbals   oct23_ndscope, oct23_ndscope_zoomout
Oct 24   modified     oct24_verbals   oct24_ndscope

We saw that for the times where the number of saturations was what we consider 'normal', the LOWNOISE_COIL_DRIVERS/TRANSITION_FROM_ETMX states behave similarly in the R0 MASTER OUT channels, including after swapping the satamp. The OSEMs see some movement, but not too far outside of where they usually sit. However, for the two times that we checked where we had the excessive EY saturations, we saw that right before they started, there was a high frequency glitch in the ETMY L2 Length witness channel. This glitch only moved L2 a small amount, about 0.5 um, but it caused R0 to move a lot in Length, saturating or nearly saturating for a long time.

Plotting the impulse response of the SUS-ETMY_L2_R0DAMP_L filter bank, we see that these filters have an impulse response time of ~16 seconds. Breaking down the impulse response by each filter's contribution, we see that the FM8 (module 7) filter, invPsmoo, has a wild impulse response. Because of this filter module, the impulse response of the entire filter bank is extremely long, and the signal is very large. The frequency response plot for module 7 shows that it approaches 10^15 gain at higher frequencies. Additionally, these new satamps have about double the gain at high frequencies compared to the old satamps, which would exacerbate any issues at higher frequencies.

With all that said, the conclusion seems to be that the EY saturation issues from last week were not caused by a faulty satamp, but instead by something else that caused L2 to glitch, with the long impulse response and high gain causing R0 to take a long time to calm down.

A temporary solution would be to keep the L2 to R0 damping off during locking until after LOWNOISE_LENGTH_CONTROL has finished, to make sure that we are avoiding having it on during all the sudden movements that could upset R0.
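For reference, the kind of impulse-response check described above can be sketched with scipy, using a stand-in filter (NOT the actual invPsmoo design) just to show the method of estimating how long a filter's response takes to decay:

```python
# Sketch of an impulse-response settling-time check. The filter here is a
# generic stand-in, NOT the actual invPsmoo / L2_R0DAMP_L design.
import numpy as np
from scipy import signal

fs = 512                     # Hz, illustrative rate
# stand-in filter with slow low-frequency poles
b, a = signal.iirfilter(2, [0.5, 5.0], btype="band", ftype="butter", fs=fs)

imp = np.zeros(fs * 30)      # 30 s window
imp[0] = 1.0
resp = signal.lfilter(b, a, imp)

# time for the normalized response envelope to fall below 0.1%
env = np.abs(resp) / np.max(np.abs(resp))
last = np.max(np.nonzero(env > 1e-3)[0])
print(f"response settles below 0.1% after ~{last / fs:.1f} s")
```

Applied to the real filter bank coefficients, this is the sort of diagnostic that would show the ~16 s response time quoted above.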

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 12:46, Tuesday 28 October 2025 (87798)

I am confused about the conclusions of this alog. Attached is a screenshot of the last time we had these saturations before the satamp was replaced. I did a test where I turned off the R0 tracking loop (ETMY L2->R0 length) by ramping the gain of the loop to zero on a 15 second ramp. The saturations stopped. I then ramped back on and saw the saturations return. I waited seven minutes and tried ramping on again and got the same saturation warnings. This test was done while we sat in state 560, which is lownoise_length_control. The state had completed and we held there in order to track down the saturations.

I can see the glitch that Jeff and Oli found in this alog, but I don't see any other glitches that caused the subsequent saturations when I was turning the gain on and off. The ramp time should be long enough to avoid any sort of issues with the impulse response, and the on/off test happened many minutes after the noted glitch, so I don't think they can be explained by this impulse response issue.

I don't necessarily think this indicates the satamp is the problem, except that we haven't had these saturations since the replacement, and this loop has been running for a long time without issue (my understanding is since O3b, but I don't know for certain).

I agree that a good way to avoid this issue is to engage the R0 tracking later on in the guardian.

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 12:25, Monday 27 October 2025 - last comment - 15:02, Friday 21 November 2025(87768)
DRMI Inventory log

Some DRMI locking info

MICH, PRCL, SRCL filter banks during the "acquire DRMI 1f" state before the lock is grabbed.

OLGs for MICH, PRCL, SRCL after 1F acquisition, DRMI ASC engaged.

 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:51, Tuesday 28 October 2025 (87807)

MICH, PRCL, SRCL filter banks when DRMI 1F is locked, settings for the measurement time above.

Images attached to this comment
elenna.capote@LIGO.ORG - 15:02, Friday 21 November 2025 (88199)

PRMI OLGs

Images attached to this comment