Search criteria
Section: X1
Task: SEI

Reports until 14:31, Wednesday 11 February 2026
H1 SEI (SEI)
jeffrey.kissel@LIGO.ORG - posted 14:31, Wednesday 11 February 2026 (89123)
SPI Fiber Collimator: Refined Collimation / Lens Position / Beam Waist Diameter Measurement Setup
J. Kissel

After conferring with Sina about the results from LHO:89047, and armed with the plan described at the tail end of LHO:89099, I went into the optics lab to improve the rapid iteration of beam waist diameter measurements by concocting an optical layout that can measure the beam at two z positions at the same time.

See attached diagram and physical setup.
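For reference, the arithmetic the two-position layout leans on is just the Gaussian beam-width relation w(z)^2 = w0^2 * [1 + ((z - z0)/zR)^2] with zR = pi * w0^2 / lambda: two simultaneous radius measurements at known z pin down both w0 and z0. Here's a minimal Python sketch of that solve -- the function names and example numbers are mine and purely illustrative, not values from this setup.

    import numpy as np
    from scipy.optimize import fsolve

    LAMBDA = 1064e-9  # [m]

    def beam_radius(z, w0, z0):
        """Gaussian 1/e^2 beam radius at z, for a waist of radius w0 located at z0."""
        zR = np.pi * w0**2 / LAMBDA
        return w0 * np.sqrt(1.0 + ((z - z0) / zR)**2)

    def solve_waist(z1, w1, z2, w2, guess=(0.7e-3, 1.5)):
        """Solve for (w0, z0) given two simultaneous radius measurements (z1, w1), (z2, w2)."""
        def residuals(p):
            w0, z0 = p
            return [beam_radius(z1, w0, z0) - w1,
                    beam_radius(z2, w0, z0) - w2]
        return fsolve(residuals, guess)

    # Purely illustrative radii at two pick-off distances, not real data:
    w0_fit, z0_fit = solve_waist(z1=0.5, w1=0.86e-3, z2=3.25, w2=1.09e-3)
    print(f"w0 = {w0_fit*1e3:.3f} mm at z0 = {z0_fit:.2f} m")

Note the solution isn't unique for arbitrary pairs of points (the width curve is symmetric about the waist), so the initial guess -- or a third measurement plane -- matters.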

Note -- the optical table has become overpopulated with in-vac EOM crystal characterization (and an additional beam-scanning setup), so Keita and I shared space by having his setup at 3 inch beam height and mine at 4.5 inch beam height. My beam just barely cleared his beam scan, so I intend to further increase my beam height to 5 inches.

I only have preliminary results (without having changed the collimator lens position at all yet), but they show hints of astigmatism, so the suspects are:
    - the polarization of the emitted beam from the fiber collimator
    - the beam splitters
    - clipping on the beam scan from Keita's setup.

Will confer with the team.
Images attached to this report
Non-image files attached to this report
H1 SEI (CSWG, ISC, SEI)
jeffrey.kissel@LIGO.ORG - posted 13:17, Tuesday 10 February 2026 (89099)
Setting Lens Position for SPI In-vac SuK Fiber Collimator Beam Profile: No Systematic Error; Waist Position is just *extremely* Sensitive to Lens position
J. Kissel, S. Koehlenbeck

Executive Summary: after some more modeling of the beam profiles from LHO:89047, we identify that there's nothing fundamentally wrong in the analysis -- the out-going waist position of the beam exiting the SPI fiber collimators is just *extremely* sensitive to lens position. To the tune of: moving the lens by z_lens = 11 [mm] +/- 50 [um] can swing the waist position by z0 = +/- 1.5 [m], which makes it extremely challenging to set the waist at z0 = 0.0 [m] without a better measurement setup / method for adjusting the lens position.

The full story:

Trying to better understand the results from LHO:89047, in this aLOG I:

(0) Verified Beam Profile Fit with JAMMT instead of A La Mode: Imported the beam profile data from each laser into JaMMT, with the z-axis inverted (i.e. z = [-5.41, -4.496, ... , -0.508] [m]), and confirmed that JaMMT agrees with a la mode in terms of waist radius, w0, and waist position, z0 -- except now with the beam "incoming" into the collimator/lens/fiber.
The fit predicts a waist w0 at position z0 of 
                               JaMMT                           A La Mode (Matlab)
                           w0y'     w0x'                        w0x     w0y            (assume w0x = w0y', w0y = w0x')
        OzOptics       = (0.7041, 0.6766) [mm]                (0.7040, 0.6766) [mm]
        AxcelPhotonics = (0.6807, 0.7048) [mm]                (0.6807, 0.7049) [mm]
     at position 
                             z0y'     z0x'                        z0x      z0y          (assume z0x = z0y', z0y = z0x')
        OzOptics       = (-1.4940, -1.5840) [m]               (1.4940, 1.5837) [m] 
        AxcelPhotonics = (-1.5803, -1.4870) [m]               (1.5803, 1.4869) [m]
where -- you're reading that right, it's not a copy and paste error -- the difference between the Matlab fit and the JaMMT fit is at the \delta(w0) = 0.1 [um] and \delta(z0) = 0.1 [mm] level (though for some silly reason the convention of which dimension is x and which is y is flipped between the two fitting programs).

This reconfirms / is consistent with the LHO:89047 fit of the data and its statement that the waist radius is w0 = 0.6915 +/- 0.015 [mm] at z0 = 1.5362 +/- 0.053 [m].
See the GREEN-boxed answers in the attached screenshots of the OzOptics and AxcelPhotonics JaMMT sessions.
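As an additional cross-check independent of either GUI, the same fit can be reproduced with scipy's curve_fit using the standard w(z) model. The z vector below is the measured one quoted in Step (2c); the radii are synthesized from the LHO:89047 fit (w0 = 0.6915 [mm], z0 = 1.5362 [m]) just so the sketch runs stand-alone -- substitute the real beam-scan radii to reproduce the JaMMT / a la mode numbers.

    import numpy as np
    from scipy.optimize import curve_fit

    LAMBDA = 1064e-9  # [m]

    def beam_radius(z, w0, z0):
        """Gaussian 1/e^2 beam radius vs. z for a waist w0 located at z0."""
        zR = np.pi * w0**2 / LAMBDA
        return w0 * np.sqrt(1.0 + ((z - z0) / zR)**2)

    # Measured z positions of the beam scan (same vector as in Step (2c))
    z = np.array([0.508, 0.991, 1.499, 2.007, 3.251, 4.496, 5.41])  # [m]

    # Synthetic radii generated from the quoted LHO:89047 fit, standing in for the real data
    w = beam_radius(z, 0.6915e-3, 1.5362)                           # [m]

    (w0_fit, z0_fit), pcov = curve_fit(beam_radius, z, w, p0=[0.7e-3, 1.5])
    print(f"w0 = {w0_fit*1e3:.4f} mm at z0 = {z0_fit:.4f} m")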

(1) Found Mode Field Diameter inside Fiber: Used each of those JaMMT sessions, with the imported / fit beam profile from both lasers (in the fiber collimator's current lens position), to model the mode field diameter of the fiber (thus validating the MFD = 7.14 [um] calculation from Slide 60 of the SPI conceptual design, G2301177). To do so, we install an f = 11 [mm] focal length lens at z = 0.0 [m] -- i.e. finding the new beam parameters "down stream" (in the +z direction, "in towards the fiber") post-lens, assuming the lens is placed at the ideal position. In JaMMT speak, we add a substrate that's a thin lens at position 0.0 [m], with aperture 5.5 [mm] and focal length 0.011 [m].

As can be seen in RED in the now-augmented OzOptics and AxcelPhotonics JaMMT sessions, inserting the fast lens dramatically increases the beam radius (+z of the lens, the red beam extends essentially vertically off the scale), which is indicative of really high divergence of the free-space beam as it is mode-matched into the fiber. Said in the direction of our experiment -- the beam coming from the fiber is highly divergent (extremely small waist radius in the fiber), and the fast lens brings the outgoing free-space beam "in check," dramatically reducing the beam divergence, i.e. "collimating" it.

The JaMMT fit results remain in terms of radius, w0, but to distinguish this in-fiber waist radius from the free-space waist radius, I'll call these the Mode Field radii, MFw0. Those results are 
                          MFw0y', MFw0x' 

        OzOptics         (3.589 , 3.717) [um] @ MFz0y' = MFz0x' = 0.011 [m]
        AxcelPhotonics   (3.597 , 3.726) [um] @ MFz0y' = MFz0x' = 0.011 [m]
Note the order of magnitude on the units -- microns, not millimeters. Also note that the fit for position of the waist is 0.011 [m], or 11 [mm] for all four waist radius data points (to the precision of the display). This matches the ideal/expectation -- that we *want* the f = 11 [mm] lens to focus the free space beam down to the size of the waist of the beam in the fiber core, and to have that waist 11 [mm] away from the lens.

In terms of diameter, that's
                           MFDy', MFDx'

        OzOptics         (7.1780, 7.4340) [um]
        AxcelPhotonics   (7.1940, 7.4520) [um]

This modeled MFD based on the out-going beam measurement, MFD_mean = 7.3145 +/- 0.1487 [um], is within 2.5% of the expected value of MFD = 7.14 [um], derived from a core radius of a = 2.75 [um] and *fiber* numerical aperture NA = 0.12. (I was wrong to suggest that we might have needed to use the NA from the fiber collimator. Sina was right to use the NA of the optical fiber.)

And just to hit that "highly divergent" point home, turning the mode field radii into a Rayleigh range [with MFzR = pi * (MFw0^2) / \lambda]: that's MFzR_mean = 39.49e-6 [m] as opposed to the collimated free-space beam that has a range of ~3.25 [m].
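For completeness, a minimal sketch of that arithmetic in Python is below. The Marcuse approximation for the fundamental mode of a step-index fiber is my assumption for where the 7.14 [um] expectation comes from (it does reproduce that number from a = 2.75 [um] and NA = 0.12); the free-space Rayleigh range uses the procedure's w0 = 1.05 [mm].

    import numpy as np

    LAMBDA = 1064e-9        # [m]
    a, NA  = 2.75e-6, 0.12  # fiber core radius [m] and *fiber* numerical aperture

    # Marcuse approximation for the fundamental-mode field radius of a step-index fiber
    V       = 2 * np.pi * a * NA / LAMBDA                 # V number, ~1.95 here
    w_fiber = a * (0.65 + 1.619 / V**1.5 + 2.879 / V**6)  # mode field radius [m]
    print(f"expected MFD = {2 * w_fiber * 1e6:.2f} um")   # ~7.1 um

    # Rayleigh ranges: the in-fiber mode vs. the collimated free-space beam
    zR_fiber = np.pi * w_fiber**2 / LAMBDA                # ~4e-5 m -- "highly divergent"
    zR_free  = np.pi * (1.05e-3)**2 / LAMBDA              # ~3.25 m for w0 = 1.05 mm
    print(f"zR(fiber) = {zR_fiber:.2e} m, zR(free space) = {zR_free:.2f} m")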
        
(2) Re-create the real system as a function of lens position: With that modeled mode field diameter (radii) in the fiber, we can restart JaMMT with those initial beam parameters, but with the position of the waist at z0 = 0.0 [m] rather than the +0.011 [m] we found in Step (1). This changes our frame of reference -- we now assume we know the field waist coming out of the fiber, we position that field waist MFz0 = -0.011 [m] behind the lens, and we're trying to *create* a collimated beam with the f = 11 [mm] lens, with a new waist at z0 = 0.0 [m].

In JaMMT speak, we 
    (a) Reset and Clear Plot, then Edit the Initial Beam to have 
                            w0        z0     tangential w0   tangential z0   wavelength
                           (MFw0y')            (MFw0x')
                            [um]      [m]      [um]             [m]           [nm]

        OzOptics           3.589     0.0      3.717            0.0           1064
        AxcelPhotonics     3.597     0.0      3.726            0.0           1064

     
     (b) Add a substrate; a thin lens, at position z = 0.011 [m] (for now), with aperture 5.5 [mm], and focal length 0.011 [m].

     (c) Add beam analyzers at each z position point of the originally measured vector, z = [0.508 0.991 1.499 2.007 3.251 4.496 5.41] [m]
    
The results of (a) and (b) create the screenshots OzOptics and AxcelPhotonics, which show that the lens at *exactly* z = 0.011 [m], or 11 [mm], puts the waist at
     z_lens = 0.011 [m]    
                     ( w0x ,   w0y )    @ (z0x, z0y)
                       [mm]    [mm]       [mm]  [mm]
     OzOptics         1.0830 , 1.0023      22    22
     AxcelPhotonics   1.0357 , 0.9999      22    22
     

This confirms that the original guess written in the assembly procedure -- a waist radius of w0 = 1.05 +/- 0.1 [mm] when the lens position is set such that the waist position is z0 = 0.0 [m], and thus a Rayleigh range of zR = 3.25 [m] -- was not wrong at all.
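A quick sanity check of those numbers outside of JaMMT, using the textbook thin-lens result that a waist placed exactly at the front focal point produces a new waist at the back focal point with radius w0' = f * lambda / (pi * w_fiber). This is a sketch with the mean Step (1) mode field radius plugged in and an ideal thin lens assumed, not a substitute for the full model.

    import numpy as np

    LAMBDA  = 1064e-9   # [m]
    f       = 11e-3     # collimator lens focal length [m]
    w_fiber = 3.65e-6   # in-fiber mode field radius, roughly the mean of the Step (1) fits [m]

    # Input waist at the front focal point -> output waist at the back focal point
    w0_out = f * LAMBDA / (np.pi * w_fiber)   # ~1.0 mm
    zR_out = np.pi * w0_out**2 / LAMBDA       # ~3 m
    print(f"w0' = {w0_out*1e3:.2f} mm, zR' = {zR_out:.2f} m")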

After adding in the analyzers (c), you get displays that look like OzOptics and AxcelPhotonics, from which you can read off the model of what the measured beam profile should ideally be.

(3) Model/Discover just how sensitive beam profile of the out-going beam is to lens position: positioning needs to be accurate within 50 [um] (ridiculous!).
Now, nudge the z position of the lens in each data set until you reproduce the beam you measured / fit in Step (0).

I find that a lens position of z = 0.011045 [m] = 11.045 [mm] = 11 [mm] + 45 [um] reproduces a real focus that matches the beam profiles we measured, and the waist radius and position we fit, consistent with w0 = 0.6915 +/- 0.015 [mm] at z0 = 1.5362 +/- 0.053 [m]. 45 [um]!!

A lens position 45 [um] the other way, z = 11 [mm] - 45 [um] = 0.010955 [m], pushes the waist position ~1.5 [m] behind the lens (z0 = ~ -1.5 [m]), i.e. it creates a virtual focus.
See 
    OzOptics z_lens = nom + 45 [um]
    OzOptics z_lens = nom - 45 [um]
    AxcelPhotonics z_lens = nom + 45 [um]
    AxcelPhotonics z_lens = nom - 45 [um]

I conclude from this that with the existing measurement setup and great lack of precision in adjustability of the lens position, it's no wonder we ended up with a waist position off by 1.5 [m].
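To make that sensitivity reproducible outside of JaMMT, here's a minimal q-parameter (ABCD) sketch that propagates the modeled fiber mode through an ideal f = 11 [mm] thin lens and reports the resulting free-space waist as the lens is nudged by +/- 45 [um]. The fiber mode radius is the mean of the Step (1) fits and the lens is treated as ideal and thin, so the numbers are illustrative rather than a replacement for the JaMMT sessions (they land close to the fitted w0 and z0 above).

    import numpy as np

    LAMBDA  = 1064e-9   # [m]
    f       = 11e-3     # lens focal length [m]
    w_fiber = 3.65e-6   # modeled in-fiber mode field radius (mean of the Step (1) fits) [m]

    def waist_after_lens(d_lens):
        """Waist radius [m] and waist position [m] relative to the lens (+z downstream,
        negative = virtual focus behind the lens), for a waist w_fiber at z = 0 and an
        ideal thin lens of focal length f at z = d_lens."""
        zR = np.pi * w_fiber**2 / LAMBDA
        q  = d_lens + 1j * zR            # free-space propagation of q from the waist to the lens
        q  = 1.0 / (1.0 / q - 1.0 / f)   # thin-lens transformation of the q-parameter
        z0 = -q.real                     # distance from the lens to the new waist
        w0 = np.sqrt(q.imag * LAMBDA / np.pi)
        return w0, z0

    for dz in (-45e-6, 0.0, +45e-6):
        w0, z0 = waist_after_lens(f + dz)
        print(f"lens at 11 mm {dz*1e6:+3.0f} um: w0 = {w0*1e3:.3f} mm, waist at {z0:+.2f} m from the lens")

With dz = 0 this lands the waist at the back focal point with w0 ~ 1 [mm]; with dz = +45 [um] the waist jumps out to ~ +1.5 [m]; and with dz = -45 [um] it becomes a virtual focus ~1.5 [m] behind the lens -- the same swing described above.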

For the record, with the (OzOptics, AxcelPhotonics) lasers' data sets, to get the waist position z0 to actually be at 0.0 +/- 0.005 [m] ***, you need the lens position to be z_lens = (10.9997, 10.99969) [mm] = 11 [mm] - 0.3 [um]. Just ridiculous.
*** Since the beam is a bit astigmatic, you can only model one axis to be exactly z0 = 0.0 at a time, so the 5 [mm] uncertainty covers the z0y position when you set the z0x to 0.0 [m]. 

After corroborating all of this with Sina, she's not surprised. In fact, she was more surprised when I claimed that collimating the beam was easy back in Jun / Aug of 2025.

(4) Sina and I conclude the best hope we have is to 
    (a) Don't worry about having the position of the collimator within the collimator adapter ring be as shown in T2400413. What's critical is that the alignment of the outgoing beam doesn't change in between lens position iterations.
    (b) Instead of entirely backing off the 2x tiny set screws that hold a given lens position, back off only 1x to try to reduce the freedom of the lens position a bit to hopefully increase the precision of the adjustment
    (c) Set up a measurement system that measures and fits the beam at multiple z positions with rapid iteration
    (d) Change the density of measured z position points to get more near the collimator
    (e) If you *must* measure the beam diameter / adjust the lens position at one position -- do it near the collimator, rather than past the Rayleigh Range. But also, see steps (c) and (d).

Images attached to this report
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 08:12, Monday 09 February 2026 (89082)
H1 ISI CPS Noise Spectra Check - Weekly (FAMIS #39337)

FAMIS Link:  39337

The only CPS channels which look elevated at high frequencies (see attached) are the following:

  1. ETMx_ST1 H1
  2. ETMx_ST1 H2
  3. ETMy_ST1 H2
  4. ETMy_ST2 H1

In the bash window we get this note:

"HAM high freq noise is elevated for these sensor(s)!!!:       HAM1_CPSINF_V2  &  V3 "

X1 SEI (INS, SEI)
jeffrey.kissel@LIGO.ORG - posted 13:36, Wednesday 04 February 2026 (89029)
SPI Optomechanics: Optical Fiber Feedthru, Patch Cable, and Spool Assembly
J. Kissel
- D2500175 :: S3228003
   . D2100573
   . E2300345
- D2400281

I'm finally back in the optics lab, taking the next steps towards assembling the SPI. 

Specific to this entry -- I'm assembling the Class A clean components of the fiber-coupled seed laser light. In other words, I'm integrating 
    (1) S3228003*** -- One of the 4.5" (inch) 1064nm V-FT Optical Fiber Feedthrough (Feedthru) Conflat Flange (DIAMOND) (D2500175) -- which includes the "off the shelf" (OTS) feedthru and integrated 3 [m] patch cord (the industry jargon for fiber optic cable), and its MIT-designed strain relief assembly D2100573 -- all delivered to LHO after class-A cleaning and assembly at Caltech per E2400159.
    (2) the ANU-designed Fiber Storage Spool assembly, D2400281, of which we have two.

Sina characterized the power transmission of all three of the feedthrus at Stanford before the clean-n-bake process, and identified that S3228003 had 100% transmission, so I've chosen that one to be assigned to the MEAS path, where we need the most input power (as it's distributed through the largest number of beam splitters). I plan to further integrate the patch cord into the SuK fiber collimator S0272503 for no better reason than to "pair the S[...]03 feedthru with the S[...]03 fiber collimator." 

***The serial numbers for the OTS feedthrus (D2500175) are of the alpha-numeric form 2153228V00n, where n = 1, 2, 3. That full version of the serial number is indicated in their ICS record, but in order to conform to the mold of the DCC S-numbers, I truncated the format to be numeric, S322800n, i.e. removing the identical leading 215 and misleading/unnecessary V character in the middle. 

Pictured here is 
    - The pre-assembly components of the fiber storage spool (First) 
    - The completed assembly of the spool with S3228003's patch cord wound up within it (Second and Third)

The feedthru's patch cord still has a Thorlabs narrow-key mating sleeve (but NOT polarization maintaining), ADAFCB3, that is not intended to be a part of the final assembly; it's just there for fiber storage during shipment. I had yet to detach it in these pictures.

Commentary:
     - Coiling the fiber within the spool was nerve-wracking. It feels like you're trying to coerce dry spaghetti into a curve without it snapping. If you let it go, it "sproings" into a wild, relatively straight mess. In the end, holding it all mid-air with both hands, I used the weight of the mating sleeve to slowly pull the coil tighter as I rotated the coil, nudging the rest of the coil into the newer, smaller circle, until I met the radius of the storage spool. I had the goal of coiling it with one end "on top" and the other "on the bottom" of the stack, but I gave up on that. Once at the desired radius and no smaller, I used the securing cross, resting loosely across only 1/4 of the spool, to hold the bulk of the coil of fiber in place while I tucked the rest of the length into the guiding channel. This is doable with a chair and patience in the open space of the optics lab, but I'm not looking forward to doing this in chamber.
     - Thinking through the install, my current plan is as follows :: WHAM3 D5 is currently a 12 inch blank with no 4.5 inch flange adapters, so it *needs* to be replaced by a 12 inch to 3x 4.5 inch flange adapter. So let's create the full 12 inch flange assembly with the 2x (MEAS and REF) fiber feedthroughs and 1x 4.5 inch blank -- and spool the fibers -- in the optics lab. Then we bring the whole 12 inch assembly to HAM3 and install it as one unit.
Images attached to this report
H1 General (CDS, OpsInfo, SEI, SUS)
anthony.sanchez@LIGO.ORG - posted 17:17, Monday 02 February 2026 (88997)
Monday Ops Day of Mysteries

TITLE: 02/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
The LVEA was bifurcated, with a Laser Hazard area around HAM1 & 2, while the rest of the LVEA is LASER SAFE unless at height. See M2600004 for details.

PM1 was mysteriously dancing & saturating too much for Rahul's transfer function measurement all day long while he was troubleshooting. Update!!! Rahul dragged the EE guys out and found a dead Coil Driver! Mystery solved!

JAC work continued all day starting with mode matching in HAM1, and starting a JM2 swap. JAC table now has all the mounts and Cables but no optics yet. 

HAM7 & ITMY ISI Watchdogs tripped at 23:33 UTC due to mysterious high frequency ground noise that was louder at HAM7 than ITMY.  Jim & Betsy went to go look for fallen wrenches or other watchdog tripping phenomena..... No explanation was ever found. 

OPS info: 
New conda ENV doesn't run watchdog untripping scripts unless you run Conda Kill first.  Tagging CDS. 

LOG:

Start Time System Name Location Laser_Haz Task End Time
22:49 SAF LVEA IS LASER SAFE LVEA NO* LVEA IS LASER SAFE *BIFURCATED HAM1/2 bring ur LASER GOGGLES 16:49
15:35 FAC Nellie Optics Lab n Technical cleaning 15:55
15:37 JAK Betsy LVEA y checking status of LVEA 16:37
15:44 FAC Kim LVEA y technical cleaning 17:55
16:50 SAFETY Travis LVEA yes Setting up barriers for Bifurcated laser hazard  conditions. 17:26
16:56 Cheta Camilla, Matt, Sophie CHETA  lab Yes Updating Camilla on CHETA lab status 18:55
17:06 FAC Nellie LVEA y Technical Cleaning 17:55
17:16 Cheta Ryan CHETA Lab y Getting Optics 17:23
17:21 PSL Jason LVEA yes Energizing the rotation stage to 1 W. 17:30
17:22 Safety Jenny D LVEA y Setting up barriers for Bifurcated Laser zones. 17:26
17:26 SUS Rahul Remote n Taking TF meas. of PM1 18:26
17:28 VAC Travis LVEA y Reducing purge air in HAM1 17:30
17:35 SPI Jeff Optics lab Yes Working on SPI in the Optics Lab. 20:34
17:57 CHETA Ryan S CHETA LAB y Getting optics parts 18:12
18:06 FAC Kim Mid X n technical cleaning 18:54
18:09 SEI Jim LVEA HAM7 n Balancing HAM7 ISI 19:19
18:15 SEI Mitchel LVEA HAM78 N Balancing HAM7 19:19
18:28 VAC Travis LVEA HAM1 y Turning up purge air 18:33
18:30 SUS Rahul LVEA & Optics lab y getting & cleaning parts 18:55
18:31 Laser Trans Oli LVEA Y Laser transitioning to Strange Laser Bifurcated State 18:41
18:34 SQZ Sheila & Karmeng SQZr Racks n Checking on SQZr racks 21:31
18:45 JAC Masayuki & Jason LVEA HAM1 YES JAC Mode matching 20:19
18:48 JAC Ryan S LVEA HAM1 YES Taking pictures of JAC & plugging in the JAC Table. 19:06
18:55 FAC Kim HAM SHAQ N Technical Cleaning 20:25
19:12 TCS Matt OptLab y(local) Putting stuff away 19:18
19:45 VAC Gerardo LVEA yes/no Annulus pump work 20:24
19:47 ISC Mitchel LVEA West bay n Getting parts. 20:07
20:09 JAC Jennie LVEA HAM1 YES Checking on Jason and Masayuki 20:19
20:49 LASER SAFETY Travis LVEA HAM1 YES Adjusting the LASER Curtain. 21:03
20:55 FAC Randy LVEA n heading to the WEST Bay area for parts. 22:20
20:56 JAC Betsy Optics Lab y Checking for parts and progress. 22:56
21:01 SPI Jeff & Jim Optics lab Yes Working on SPI 23:01
21:11 EE Marc LVEA HAM1 yes Working with the HAM1 crew 22:56
21:21 JAC Ryan S LVEA JAC Table N working on the JAC table 01:51
21:27 VAC Travis LVEA HAM1 yes Adjusting the Purge air back down for a SUS measurement 22:29
21:28 SUS Rahul Remote. N JAC PM1  SUS TF Measurement 22:13
21:36 JAC Jennie W LVEA JACt & HAM1 n/Y working with Ryan S on JAC table, & Waiting for HAM1 crew. 00:00
21:37 SUS Rahul LVEA HAM1 Y Checking PM1 SUS  status 22:12
21:58 FAC Mitchel LVEA West bay N Checking inventory & Parts 22:35
22:05 JAC Betsy LVEA N Running parts 23:59
22:23 JAC Masayuki & Jason LVEA HAM1 Yes Working on Mode matching with JAC 01:58
22:31 JAC Keita LVEA HAM1 Yes Helping HAM1 crew 01:56
22:34 VAC Travis LVEA HAM1 Turning up the Purge air in HAM1 22:43
23:00 JAC Betsy LVEA yes Running parts 23:30
23:39 VAC Travis HAM Shaq N Getting parts. 23:45
23:54 CHETA Camilla Optics Lab N Checking supplies. 00:00
00:06 SUS Rahul LVEA yes Power cycling Satellite boxes 00:10
00:06 SEI Jim LVEA N Walking through the LVEA looking for fallen wrenches & unwatched dogs 00:46
00:10 SEI Betsy LVEA yes Walking around looking for Unwatched Dogs 00:30
00:18 SUS Fil & Rahul LVEA HAM1 yea power cycling sat amps to troubleshoot SUS OSC rahul out early 00:57
00:28 EE Marc LVEA HAM1 yes Giving Fill a hand 00:57

 

Images attached to this report
H1 IOO (ISC, OpsInfo, SEI)
jennifer.wright@LIGO.ORG - posted 10:42, Tuesday 27 January 2026 (88916)
Restored sliders on IMC SUS to December 3rd state

Jennie W, Jason O, Sheila D, Masayuki N,

 

Following Jim bringing HAM3 HEPI online (alog #88909) and isolating the HAM3 ISI, Jason and I restored the MC1, 2, and 3 sliders to their positions on December 3rd, during a lock when the mode-cleaner was locked with 2 W input, HAM2 and 3 ISO Gain was , HAM2 and 3 ISI Guardian was set to 'ISOLATED', and HAM2 and 3 HEPI Guardian was set to 'ROBUST_ISOLATED'.

This lock state for the IMC Guardian is 100. 

 

Before any adjustments today, we had 45 dB whitening gain on MC2 trans, and 91% of the power was in the 00 mode. This gave us 46 counts on MC2 trans nsum.

After HEPI, ISI, and suspensions were restored, we have mostly the 10 mode, with 17.9 counts on MC2 trans sum, still at 45 dB whitening gain.

I lowered the whitening gain to 30 dB to restore us to the normal whitening gain.

If we restore the alignment so that 90% of the power is in the 00 mode again, we should have 8.2 counts on MC2 trans nsum (the earlier 46 counts scaled down by the 15 dB reduction in whitening gain, a factor of ~5.6).

Images attached to this report
H1 SEI (SEI)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:40, Monday 26 January 2026 (88898)
HEPI Pump Trends - Monthly FAMIS 38863

Closes FAMIS 38863. Last checked in alog 88726

Trends similar to post-outage plots from last check.

Images attached to this report
H1 SEI (SEI)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:32, Monday 26 January 2026 (88897)
H1 ISI CPS Sensor Noise Spectra Check - Weekly FAMIS 39335

Closes FAMIS 39335, last checked in alog 88829

Comparable to last check. Elevated sensors are the open chambers.

Non-image files attached to this report
H1 IOO (OpsInfo, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 11:47, Monday 26 January 2026 (88895)
IMC REFL DC With HAM1 In Air During JAC Install/Alignment Recovery; ISIs ISOLATED (Aligned) vs DAMPED (Floating) Positions and IMC SUS Alignments
J. Kissel, J. Driggers, J. Wright

While we recover the alignment into the IMC using the amount of DC light on the IMC REFL PD (on IOT2L) (and eventually MC2 TRANS on HAM3) -- e.g. LHO:88869 -- there's been some confusion about the state of the HAM2 and HAM3 ISIs' alignment, and whether that matters. 

Here, I compare 3 times:

    Date Time                       2025-12-19 00:15 UTC                     2026-01-24 01:28 UTC                      2026-01-26 17:26

    Description                     HAM1 in AIR Pre-JAC reference           Post-JAC install, Pre EOM Install        Post-JAC Install, Pre-EOM Install

    IMC State                       LOCKED                                  OFFLINE                                  OFFLINE
    IMC REFL Power [mW]               0.65                                  0.22                                     0.18
    IMC MC2 TRANS Power [mW]        307.0                                   0.4 (confidently dark noise)             0.4 (dark noise)

    MC1 P/Y                         +852.04 / -2229.74                      +874.30 / -2233.84                       +874.30 / -2233.84
    MC2 P/Y                         +582.28 /  -627.99                      +568.40 /  -628.20                       +568.40 /  -628.20
    MC3 P/Y                           +9.23 / -2433.51                        -1.07 / -2430.71                         -1.07 / -2430.71

    HPI HAM2/HAM3 Physical State    locked/locked                           locked/locked                            locked/locked
    HPI HAM2/HAM3 ISO Gain          disabled/enabled(?)                     disabled/enabled                         disabled/disabled
    ISI HAM2/HAM3 ISO State         ISOLATED/ISOLATED                       DAMPED/DAMPED                            ISOLATED/ISOLATED
    ISI Residuals (w.r.t. "they've been that way forever alignment position")
                                       HAM2       HAM3                         HAM2       HAM3                          HAM2       HAM3 
        X [um]                         0.00 /     0.00                         +8.9 /    -56.4                          0.00 /     0.00
        Y [um]                         0.00 /     0.00                         +4.8 /    +36.6                          0.00 /     0.00
        Z [um]                         0.00 /     0.00                         -6.6 /     -6.7                          0.00 /     0.00
        RX (Roll) [urad]               0.00 /     0.00                         +0.4 /     +1.2                          0.00 /     0.00
        RY (Pitch)[urad]               0.00 /     0.00                         +1.4 /      0.0                          0.00 /     0.00
        RZ (Yaw)  [urad]               0.00 /     0.00                         +5.2 /    -28.6                          0.00 /     0.00

In summary -- having the ISI tables DAMPED (floating) vs. ISOLATED with HEPI physically locked does make a tens-of-micron-level shift in the alignment of the tables. 
This is evident from the amount the IMC SUS had to move (from 2025-12-19 to 2026-01-24) in order to start recovering even 0.2 [mW] on the IMC REFL DC PD.
On the scale of 0.7 [mW], and without changing the SUS positions (from 2026-01-24 to 2026-01-26), the ISI state alone (HAM2's is what matters here, of course) makes the difference between 0.22 [mW] and 0.18 [mW], i.e. (0.22-0.18)/0.22 = 20%-ish.
When checking / chasing in-air alignment of the beam projected into HAM2 with HAM1 optics, we should make sure that the ISIs are ISOLATED, if possible.

To bring the ISIs to ISOLATED, with HEPI Locked:
   - Set the HPI-HAM{2,3}_ISO_GAIN to 0.0 (disabling the HPI controls), and 
   - Requesting SEI_HAM{2,3} Guardians to ISOLATED (which brings the ISI to HIGH_ISOLATED)
Images attached to this report
H1 SUS (CDS, SEI)
jeffrey.kissel@LIGO.ORG - posted 08:45, Tuesday 13 January 2026 - last comment - 11:59, Thursday 22 January 2026(88745)
H1SUSB123 and H1SUSH34 SUS and SEI Systems prepped in SAFE for Conversion to SUSB13 and SUSH34
J. Kissel
ECR E2500296, E2400409
WP 12962

D2300401 (for susb13) and D2300383 (for susb2h34)

We begin the major upgrade of the H1SUSB123 and H1SUSH34 SUS and SEI systems today, converting them to SUSB13 and SUSB2H34 a la G2301306. We're focusing on upgrading the DACs in the IO chassis and all of the impacted downstream analog electronics. This will take down the following computers, and they will be resurrected with new names as follows:
    FORMER NAME           FORMER SUS              NEW NAME           NEW SUS
    h1susb123             ITMX, ITMY, BS          h1susb13           ITMX, ITMY
    h1sush34              MC2, PR2, SR2           h1susb2h34         BS, MC2, PR2, SR2, LO1, LO2
    h1susauxb123          ITMX, ITMY, BS          h1susauxb13        ITMX, ITMY
    h1susauxh34           MC2, PR2, SR2           h1susauxb2h34      BS, MC2, PR2, SR2, LO1, LO2

As such, I've 
    - brought the ITMX, ITMY, BS, HAM3, and HAM4 SEI systems to ISI_DAMPED_HEPI_OFFLINE (so we don't risk any "hard" trips of HEPI during all this).
    - brought all impacted SUS guardians to AUTO mode, then to the SAFE state
    - Increased the bypass time on all impacted software watchdogs to a large number (90000000 secs), and hit BYPASS
Comments related to this report
marc.pirello@LIGO.ORG - 16:20, Tuesday 13 January 2026 (88752)

Updated timing cards on newly named SUSB13, SUSB13 AUX, SUSB2H34, SUSB2H34 AUX, formerly known as SUSB123, SUSB123 AUX, SUSH34 and SUSH34 AUX.

Timing FPGA Version 1589

marc.pirello@LIGO.ORG - 11:59, Thursday 22 January 2026 (88849)

While we were modifying the power systems in the SUS racks, the AA chassis S1202818 lost its negative voltage rail.  We pulled the chassis and replaced the power board.  This chassis was placed directly back into service.

M. Pirello, O. Patane, J. Kissel

LHO General (SEI)
ryan.short@LIGO.ORG - posted 07:45, Monday 12 January 2026 (88735)
Ops Day Shift Start

TITLE: 01/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 8mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.59 μm/s 
QUICK SUMMARY: JAC install and HAM7 alignment will continue today after craning finishes in the LVEA and it's transitioned back to laser hazard.

[Tagging SEI] The ETMX CPS glitched over the weekend, tripping both stages of the ISI, and I am unable to reset it without it tripping again immediately. Since ETMX isn't crucial for planned work today, I'll wait until Jim gets in and consult with him.

H1 General (OpsInfo, PSL, SEI, SUS)
anthony.sanchez@LIGO.ORG - posted 11:34, Friday 26 December 2025 (88659)
Boxing day Ops Checkin

TITLE: 12/26 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 25mph Gusts, 16mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.90 μm/s 
QUICK SUMMARY:
Cameras:
Everything looks fine currently.  
Lights were left on in the Woodshop & OSB 163, if someone is heading to site. 


SEI: Looks as I'd expect it to look. HAM7 is red because it's had people in and out of it since before the break. I imagine leaving that watchdog tripped is fine.
SUS: Seems to be looking reasonable as well, considering the OPO and JAC work that was being done before the break.



PSL Status: 

Laser Status:
    NPRO output power is 1.84W
    AMP1 output power is 70.51W
    AMP2 output power is 139.4W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 16 days, 17 hr 48 minutes
    Reflected power = 25.41W
    Transmitted power = 106.2W
    PowerSum = 131.6W

FSS:
    It has been locked for 2 days 23 hr and 15 min
    TPD[V] = 0.5254V

ISS:
    The diffracted power is around 3.7%
    Last saturation event was 2 days 22 hours and 25 minutes ago


Possible Issues:
        PMC reflected power is high
 

 

VEA Temps:

LVEA looks good for the last day.

The HAM Shaq looks good.

Arm VEAs

Images attached to this report
H1 General (IOO, ISC, OpsInfo, SEI, VE)
thomas.shaffer@LIGO.ORG - posted 10:07, Friday 19 December 2025 (88618)
IOT2 table disconnected and moved away from HAM2

This morning Fil disconnected the table, I removed the bellows and added viewport covers and lexan, and then Randy and I moved the table out of the cleanroom and to the side in the +X direction from its original location. The bellows I placed in the HAM3 cleanroom on top of the rack, covered in foil. 

Fil also noticed that the ISCT1 bellows were still open, though hanging down. I took the extra precaution and covered these with foil as well to prevent dust drifting onto the table.

Dust counts during the whole process were 0's or 10's when I looked. This seemed too low so I rubbed my glove above the dust monitor during a sample and counts shot up to the hundreds. I guess that space really is that clean, great!

H1 PEM (DetChar, ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:51, Thursday 11 December 2025 - last comment - 09:06, Thursday 11 December 2025(88473)
It's been ... WINDY.
J. Kissel

Post Dec 4th power outage, we've had an EPIC week of windstorms that have inhibited recovery efforts, which has delayed upgrade progress. The summary pages (on their 24 hour cadence) and the OPS logs / environment summary don't really convey this well, so here's a citable link to show how bad last Friday (12/05), Monday (12/08), and Wednesday (12/10) were in terms of wind. Given the normal work weekend, it means that we really haven't had a conducive environment to recover from even a normal lockloss, let alone a 2-hour site-wide power outage. 

The attached screenshot is of the MAX minute trends (NOT the MEAN, to convey how bad it was) of wind speed at each station in UTC time. 
The 16:00 UTC hour mark is 08:00 PST -- the rough start of the human work day, so the vertical grid is marking the work days.
The arrow (and period where there's red-dashed 0 MPH no data) shows the 12/04 power outage.
The horizontal bar shows the weekend when we humans were trying to recover ourselves and not the IFO.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:06, Thursday 11 December 2025 (88475)
Oh right -- and also on Monday, even though the wind wasn't *that* bad, the Earth was mad from the aftershocks of the 7.0 mag Alaskan EQ, and there were end-station Software Watchdog trips related to it that -- because of an oversight in watchdog calibration -- scared everyone into thinking we should "stand down until we figure out if this was because of the hardware upgrades or the power outage." See LHO:88399 and LHO:88415. So, Monday was a wash for environmental reasons too.

Images attached to this comment
H1 SUS
oli.patane@LIGO.ORG - posted 17:00, Tuesday 09 December 2025 - last comment - 11:10, Wednesday 10 December 2025(88445)
Estimators seemingly caused 0.6 Hz oscillations again

Jeff, Oli

Earlier, while trying to relock, we were seeing locklosses preceded by a 0.6 Hz oscillation seen in the PRG. Back in October we had a time where the estimator filters were installed incorrectly and caused a 0.6 Hz lock-stopping oscillation (87689). Even though we haven't made any changes to the estimators in over a month now, I decided to try turning them all off (PR3 L/P/Y, SR3 L/P/Y). During the next lock attempt, there were no 0.6 Hz oscillations seen. I checked the filters and settings and everything looks normal, so I'm not sure why this was happening.

I took spectra of the H1:SUS-{PR3,SR3}_M1_ADD_{L,P,Y}_TOTAL_MON_DQ channels for each suspension and each DOF during two similar times before and after the power outage. I wanted the After time to be while we were in MICROSEISM, since it seems like maybe the ifo isn't liking the normal WINDY SEI_ENV right now, so I wanted both the Before and After times to be in a SEI_ENV of MICROSEISM and the same ISC_LOCK states. I chose the After time to be 2025/12/09 18:54:30 UTC, when we were in an initial alignment, and then found a Before time of 2025/11/22 23:07:21 UTC.

Here are the spectra for PR3 and SR3 for those times. PR3 looks fine for all DOF, and SR3 P looks to be a bit elevated between 0.6 - 0.75 Hz, but it doesn't look like it should be enough of a difference to cause oscillations.

Then, while talking to Jeff, we discovered that the overall noise in the total damping for L and P changed depending on the seismic state we were in, so I made a comparison between the MICROSEISM and CALM SEI_ENV states (PR3, SR3). The USEISM time was 2025/12/09 12:45:26 UTC and the CALM time was 2025/12/09 08:54:08 UTC, with a BW of 0.02. The only difference in the total drive is seen in L and P, where it's higher below 0.6 Hz when we are in CALM.

So during those 0.6 Hz locklosses earlier today, we were in USEISM. Is it possible that the estimators combined with the USEISM state create an instability?

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 08:51, Wednesday 10 December 2025 (88456)

This is possibly true. The estimator filters are designed/measured using a particular SEI environment, so it is expected that they would underperform when we change the SEI loops/blends.

Additionally, we use the GS13 signal for the ISI-->SUS transfer function. It might be the case that the different amount of in-loop/out-of-loop-ness of the GS13 might do something to the transfer functions. I don't have any mathematical conclusions from it yet, but Brian and I will think about it.

jeffrey.kissel@LIGO.ORG - 11:10, Wednesday 10 December 2025 (88458)SEI, SUS
I'm pretty confident that the estimators aren't the problem, or are at least a red herring.

Just clarifying the language here -- "oscillation" is an overloaded term. And remember, we're in "recovery" mode from last Thursday's power outage -- so literally *everything* is suspect, wild guesses are being thrown around like flour in a bakery, and we only get brief, unrepeatable evidence -- separated by tens of minutes -- that something's wrong. 

The symptom was "we're trying 6 different things at once to get the IFO going. Huh -- in one lock stretch the ndscope time-series of the IFO build-ups looked to grow exponentially to lock-loss, and in another it just got noisier halfway through the stretch. What happened? Looks like something at 0.6 Hz."

We're getting to "that point" in the lock acquisition sequence maybe once every 10 minutes.
There's an entire rack's worth of analog electronics that go dark in the middle of this, as one leg of its DC power failed. (LHO:88446)
The microseism is higher than usual and we're between wind storms, so we're trying different ISI blend configurations (LHO:88444)
We're changing around global alignment because we think suspensions moved again during the "big" HAM2 ISI trip at the power outage (LHO:88450)
There's an IFO-wide CDS crash after a while that requires all front-ends to be rebooted, with the suspicion that our settings configuration file tracking system might have been bad (LHO:88448)...

Everyone in the room thinks "the problem" *could* be the thing they're an expert in, when it's likely a convolution of many things.

Hence, Oli trying to turn OFF the estimators.
And near that time, we switched the configuration of the sensor correction / blend filters of all the ISIs (switching the blends from WINDY to MICROSEISM -- see LHO:88444).

So -- there were 
    - only one, *maybe* two, events where an "oscillation" was seen in the sense of "positive feedback" or "exponential growth of the control signal," and
    - only one "oscillation" in the sense of "excess noise in the frequency region around 0.6 Hz," and the check of whether it actually *was* 0.6 Hz isn't rigorous.

That happens to be the frequency of the lowest L and P modes of the HLTSs, PR3 and SR3.
BUT -- Oli shows in their plots that:
    - Before vs. after the power outage, when looking at times when the ISI platforms are in the same blend state, the PR3 and SR3 control is the same.
    - Comparing the control request when the ISI platforms are in microseism vs. windy blends shows the expected change in control authority from ISI input, as the change in shape of the ASD of PR3 and SR3 between ~0.1 and ~0.5 Hz matches the change in shape of the blends.

Attached is an ndscope of all the relevant signals -- or at least the signals in question, for verbal discussion later.


Images attached to this comment
H1 SUS (CDS, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 16:14, Monday 08 December 2025 (88415)
Weekend ETMY Software Watchdog Trips were Because of L2 to R0 Longitudinal Tracking not being blocked by USER WD
J. Kissel, J. Warner

Trending around this morning to understand the reported ETMY software watchdog (SWWD) trips over the weekend (LHO:88399 and LHO:88403), Jim and I conclude that -- while unfortunate -- nothing in software, electronics or hardware is doing anything wrong or broken; we just had a whopper Alaskan earthquake (see USGS report for EQ us6000rsy1 at 2025-12-06 20:41:49 UTC) and had a few big aftershocks. 

Remember, since the upgrade to the 32-channel, 28-bit DAC last week, both end stations' DAC outputs will "look CRAZY" to all those who are used to looking at the number of counts of a 20-bit DAC. Namely, the maximum number of counts is a factor of 2^8 = 256x larger than previously, saturating at +/- 2^27 = +/- 134217728 [DAC counts] (as opposed to +/- 2^19 = +/- 524288 [DAC counts]).

The real conclusion: Both SWWD thresholds and USER WD Sensor Calibration need updating; they were overlooked in the change of the OSEM Sat Amp whitening filter from 0.4:10 Hz to 0.1:5.3 Hz per ECR:E2400330 / IIET:LHO:31595.
The watchdogs use a 0.1 to 10 Hz band-limited RMS as their trigger signal, and the digital ADC counts they use (calibrated into either raw ADC voltage or microns, [um], of top mass motion) will see anywhere from a 2x to 4x increase in RMS value for the same OSEM sensor PD readout current. In other words, the triggers are "erroneously" a factor of 2x to 4x more sensitive to the same displacement.
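To illustrate the size of that effect, here's a sketch that compares the two whitening responses over the watchdogs' 0.1 - 10 Hz band. It assumes single zero:pole stages normalized to unity gain at DC -- my assumption about the Sat Amp normalization -- so treat the exact factors as indicative of the 2x to 4x scale rather than as a calibration.

    import numpy as np
    from scipy import signal

    def whitening(z_hz, p_hz):
        """Single zero:pole whitening stage, normalized to unity gain at DC (an assumption)."""
        return signal.ZerosPolesGain([-2 * np.pi * z_hz], [-2 * np.pi * p_hz], p_hz / z_hz)

    old = whitening(0.4, 10.0)   # pre-ECR OSEM Sat Amp whitening (0.4:10 Hz)
    new = whitening(0.1, 5.3)    # post-ECR E2400330 whitening (0.1:5.3 Hz)

    f = np.logspace(np.log10(0.1), 1.0, 200)          # 0.1 to 10 Hz, the WD BLRMS band
    _, h_old = signal.freqresp(old, 2 * np.pi * f)
    _, h_new = signal.freqresp(new, 2 * np.pi * f)

    ratio = np.abs(h_new / h_old)                     # how much more signal the WDs now see
    print(f"new/old gain over 0.1-10 Hz: {ratio.min():.1f}x to {ratio.max():.1f}x")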

As these two watchdog trigger systems are currently mis-calibrated, I put all references to their RMS amplitudes in quotes, i.e. ["um"]_RMS for the USER WDs and ["mV"]_RMS for the SWWDs, and quote a *change* in value when possible.
Note -- any quote of OSEM sensors (i.e. the OSEM basis OSEMINF_{OSEM}_OUT_DQ and EULER basis DAMP_{DOF}_IN1_DQ) in [um] are correctly calibrated and the ground motion sensors (and any band-limited derivatives thereof; the BLRMS and PeakMons) are similarly well-calibrated.

Also: The L2 to R0 tracking went into oscillation because the USER WDs didn't trip. AGAIN -- we really need to TURN OFF this loop programmatically until high in the lock acquisition sequence. It's too hidden -- from a user interface standpoint -- for folks to realize that it should never be used, and is always suspect, when the SUS system is barely functional (e.g. when we're vented, or after a power outage, or after a CDS hardware / software change, etc.)

Here's the timeline leading up to the first SUS/SEI software watchdog trip, which helped us understand that there's nothing wrong with the software / electronics / hardware; instead it was the giant EQ that tripped things originally, and then subsequent trips were because of an overlooked watchdog trigger sensor vs. threshold mis-calibration coupled with the R0 tracking loops.
2025-12-04 
    20:25 Sitewide Power Outage.
    22:02 Power back on.

2025-12-05
    02:35 SUS-ETMY watchdog untripped, suspension recovery
    20:38 SEI-ETMY system back to FULLY ISOLATED (large gap in recovery between SUS and SEI due to SEI GRD non-functional because the RTCDS file system had not yet recovered)
    20:48 Locking/Initial alignment start for recovery.

2025-12-06 
    20:41:49 Huge 7.0 Mag EQ in Alaska

    20:46:30 First s&p-waves hit the observatory; corner station peakmon (in Z) is around 15 [um/s]_peak (30-100 mHz band)
             SUS-ETMY sees this larger motion, motion on M0 OSEM sensors in 0.1 to 10 Hz band increases from 0.01 ["um"]_RMS to 1 ["um"]_RMS.
             SUS-SWWD using the same sensors, in the same band but calibrated into ADC volts is 0.6 ["mV"]_RMS to ~5 ["mV"]_RMS

    20:51:39 ISI-ETMY ST1 USER watchdog trips because the T240s have tilted off into saturation, killing ST1 isolation loops
             SUS-ETMY sees the large DC shift in alignment from the "loss" of ST1, and 
             SUS-ETMY sees the very large motion, increasing to ~100 ["um"]_RMS (with USER WD threshold set to 150 ["um"]_RMS) -- USER WD never trips. But -- peak motion is oscillating to the 300 ["um"]_peak range (but not close to saturating the ADC.)
             SUS-SWWD reports an RMS voltage increase to 500 [mV_RMS] (with the SWWD WD threshold set to 110 ["mV"]_RMS) -- starts the alarm count-down of 600 [sec] = 10 [min].

    20:51:40 ISI-ETMY ST2 USER watchdog trips ~0.5 sec later as the GS13s go into saturation, and actuators try hard to keep up with the "missing" ST1 isolation
             SUS-ETMY really starts to shake here. 

    20:52:36 The peak love/rayleigh waves hit the site, with the corner station Z motion peakmon reporting at 140 [um/s], and the 30 - 100 mHz BLRMS reporting 225 [um/s].
             At this point it's clear from the OSEMs that the mechanical system (either the ISI or the QUAD) is clanking against earthquake stops, as the OSEMs show saw-tooth-like waveforms. 

    20:55:39 SWWD trips for suspension, shutting off suspension DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             The SUS is still naturally ringing down, recovering from the still-large EQ motion and the uncontrolled ISI.
    
    20:59:39 SWWD trips for seismic, shutting off all DAC output for HEPI and ISI ETMY
             SUS-ETMY OSEMs don't really notice -- it's still naturally ringing down with a LOT of displacement. There is a noticeable small alignment shift as HEPI sloshes to zero.

    21:06    SUS-ETMY SIDE OSEM stops looking like a saw-tooth, the last one to naturally ring-down. After this all SUS looks wobbly, but normal.
             ISI-ETMY ST2 GS-13 stops saturating
 
    21:08    SUS-ETMY LEFT OSEM stops exceeding the SWWD threshold, the last one to do so.

2025-12-07
    00:05    HPI-ETMY and ISI-ETMY User WDs are untripped, though it was a "tripped again ; reset" messy restart for HPI because we didn't realize that the SWWD needed to be untripped.
             The SEI manager state was trying to get back to DAMPED, which includes turning on the ISO loops for HPI.
             Since no HPI or ISI USER WDs know about the SWWD DAC shut-off, they "can begin" to do so, "not realizing" there is no physical DAC output.
             The ISI's local damping is "stable" without DACs because there's just not a lot that these loops do and they're AC coupled.
             HPI's feedback loops, which are DC coupled, will run away.

    00:11    SUS and SEI SWWD is untripped

    00:11:44 HPI USER WD untripped, 

    00:12    RMS of OSEM motion begins to ramp up again, the L / P OSEMs start to show an oscillation at almost exactly 2 Hz.
             The R0 USER WD never tripped, which allowed the H1 SUS ETMY L2 (PUM) to R0 (TOP) DC coupled longitudinal loop to flow out to the DAC.
             with the Seismic system in DAMPED (HEPI running, but ST1 and ST2 of the ISIs only lightly damped), and
             with the M0 USER WD still tripped and the main chain without any damping or control,
             after HEPI turned on, causing a shift in the alignment of the QUAD, changing the distance / spacing of the L2 stage, and
             the L2 "witness" OSEMs started feeding back the undamped main chain L2 to the reaction chain M0 stage, and slowly begain oscillating in positive feedback. see R0 turn ON vs. SWWD annotated screenshot.
             Looking at the recently measured open loop gain of this longitudinal loop -- taken with the SUS in it's nominally DAMPED condition and the ISI ISOLATED, there's a damped mode at 2 Hz.
             It seems very reasonably that this mode is a main chain mode, and when undamped would destroy the gain margin at 2 Hz and go unstable. See R0Tracking_OpenLoopGain annoted screenshot from LHO:87529.
             And as this loop pushes on the main chain, with an only-damped ISI, it's entirely plausible that the R0 oscillation coupled back into the main chain, causing a positive feedback loop.
             
    
    00:22    The main chain OSEM RMS exceeds the SWWD threshold again, as the positive feedback gets out of control peaking around ~300 ["mV"]_RMS, and the USER WD says ~100 ["um"]_RMS. Worst for the pitch / longitudinal sensors, F1, F2, F3.
             But again, this does NOT trip the R0 USER WD, because the F1, F2, F3 R0 OSEM motion is "only" 80 ["um"]_RMS still below the 150 ["um"]_RMS limit.

    00:27    SWWD trips for suspensions AGAIN as a result, shutting off all DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             THIS kills the L2 to R0 tracking drive, and the oscillation along with it.
    
    00:31    SWWD trips for seismic AGAIN, shutting off all DAC output for HEPI and ISI ETMY

    15:59    SWWDs are untripped, and because the SUS USER WD is still tripped, the same L2 to R0 instability happens again.
             This is where the impression that "the watchdogs keep tripping; something's broken" enters in.
             
    16:16    SWWD for sus trips again
    
    16:20    SWWD for SEI trips again 

2025-12-08
    15:34    SUS-ETMY USER WD is untripped, main chain damping starts again, and recovery goes smoothly.
    
    16:49    SUS-ETMY brought back to ALIGNED
    
Images attached to this report
Non-image files attached to this report
H1 ISC (ISC, SEI, SUS)
marc.pirello@LIGO.ORG - posted 13:22, Tuesday 02 December 2025 (88315)
PCIe Timing Card Firmware Updated

Per WP12909 we updated firmware on the PCIe Timing Cards at EX, EY, MX, and MY, sticker applied to each updated chassis.

EX:
H1-SUSAUX-EX-V5 (S1103885) with timing card (S2101138) firmware updated
H1-SUS-EX-V5 (S1900326) with timing card (S2101152) firmware updated
H1-SEI-EX-V5 (S1900235) with timing card (S2101093) firmware updated
H1-ISC-EX-V5 (S1900329) with timing card (S2101158) firmware updated

EY:
H1-SUSAUX-EY-V5 (S1103621) with timing card (S2101168) firmware updated
H1-SUS-EY-V5 (S1900238) ** Timing Card Replaced Monday ** applied updated sticker to chassis.
H1-SEI-EY-V5 (S1900231) with timing card (S2101173) firmware updated
H1-ISC-EY-V5 (S1900237) with timing card (S2101109) firmware updated

MX:
H1-MX (S1105017) with timing card (S2101151) firmware updated

MY:
H1-MY (S1103622) with timing card (S2101085) firmware updated

D. Barker, F. Clara, J. Hanks, R. McCarthy, M. Pirello, D. Sigg

H1 General (PEM, SEI)
anthony.sanchez@LIGO.ORG - posted 03:48, Friday 14 November 2025 (88099)
Having a hoot on the Owl shift.

TITLE: 11/14 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.86 μm/s 
QUICK SUMMARY:
Eve shift had left the IFO in Idle to wait out the higher useism.
I woke up at 2am to see if the useism was low enough to lock since it was falling. 
H1 went through an Initial Alignment before trying to lock, which took longer than expected due to locking the green arms. 
Once that was done I allowed DRMI to try to lock a handful of times. I think when it fails this time I will leave it in idle and try again in a few more hours. 

 

H1 SUS (SEI)
oli.patane@LIGO.ORG - posted 15:56, Tuesday 11 November 2025 (88063)
More measurements for PRM and SRM estimators

I was able to get some more measurements done for the PRM and SRM estimators, so I'll note them and then summarize the current measurement filenames, because there are a lot (previous measurements taken in 87801 and 87950).

PRM
DAMP Y gain @ 20%
I was able to finish taking the final few DAMP Y @ 20% gain measurements for H1SUSPRM M1 to M1. 

Settings
- PRM aligned
- DAMP Y gain to 20% (L and P gains nominal)
- CD state to 1
- M1 TEST bank gains all at 1 (nominally P and Y have a gain other than 1)
Measurements
2025-11-11_1700_H1SUSPRM_M1toM1_CDState1_M1YawDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml r12789
    - I took the measurements for V, R, P, and Y today since Jeff had taken L and T a couple weeks ago for this configuration
    - To try and lessen confusion, I changed date/time of all M1 to M1 PRM DAMP Y gain @ 20% measurements to 2025-11-11_1700 so they're all together. This means the M1 to M1 L and T measurements from a couple weeks ago now have this as the date/time


SRM
DAMP {L,P,Y} gain at 20%
I took HAM5 SUSPOINT to SRM M1 measurements

Settings
- SRM aligned
- DAMP {L,P,Y} gain to 20%
- CD state to 1
Measurements
2025-11-11_1630_H1ISIHAM5_ST1_SRMSusPoint_M1LPYDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml r12790
    - P and Y don't have very good coherence, but I ran out of time to try larger gains

Big measurement list:
PRM measurements:
DAMP gain LPY @ 20% (-0.1)
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/
    SUSPOINT to M1
    Common/Data/2025-11-04_1800_H1ISIHAM2_ST1_PRMSusPoint_M1LPYDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
    M1 to M1
    SAGM1/Data/2025-10-28_H1SUSPRM_M1toM1_CDState1_M1LPYDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
    M2 to M1
    SAGM2/Data/2025-10-28_H1SUSPRM_M2toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_{L,P,Y}_0p02to50Hz.xml
    M3 to M1
    SAGM3/Data/2025-10-28_H1SUSPRM_M3toM1_CDState2_M1LPYDampingGain0p1_WhiteNoise_{L,P,Y}_0p02to50Hz.xml

DAMP gain Y @ 20% (-0.1)
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/
    SUSPOINT to M1
    Common/Data/2025-11-04_1930_H1ISIHAM2_ST1_PRMSusPoint_M1YawDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
    M1 to M1
    SAGM1/Data/2025-11-11_1700_H1SUSPRM_M1toM1_CDState1_M1YawDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml
    M2 to M1
    SAGM2/Data/2025-10-28_H1SUSPRM_M2toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_{L,P,Y}_0p02to50Hz.xml
    M3 to M1
    SAGM3/Data/2025-10-28_H1SUSPRM_M3toM1_CDState2_M1YawDampingGain0p1_WhiteNoise_{L,P,Y}_0p02to50Hz.xml

SRM (so far)
DAMP gain LPY @ 20% (-0.1)
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/SRM/
    SUSPOINT to M1
    Common/Data/2025-11-11_1630_H1ISIHAM5_ST1_SRMSusPoint_M1LPYDampingGain0p1_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz.xml

H1 SEI (SEI)
ibrahim.abouelfettouh@LIGO.ORG - posted 13:22, Sunday 09 November 2025 (88025)
SEI seismometer mass check - Monthly FAMIS

Closes FAMIS 27509, last checked in alog 87296

Similar results to the last check, with 12 T240 masses out of range compared to last month's 13. Same results for the STS.

Averaging Mass Centering channels for 10 [sec] ...
2025-11-09 13:18:07.263464


There are 12 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -1.566 [V]
ETMX T240 2 DOF Y/V = -1.576 [V]
ETMX T240 2 DOF Z/W = -0.951 [V]
ITMX T240 1 DOF X/U = -2.22 [V]
ITMX T240 1 DOF Z/W = 0.466 [V]
ITMX T240 3 DOF X/U = -2.34 [V]
ITMY T240 3 DOF X/U = -1.07 [V]
ITMY T240 3 DOF Z/W = -2.835 [V]
BS T240 1 DOF Y/V = -0.353 [V]
BS T240 3 DOF Z/W = -0.431 [V]
HAM8 1 DOF Y/V = -0.453 [V]
HAM8 1 DOF Z/W = -0.761 [V]


All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.034 [V]
ETMX T240 1 DOF Y/V = -0.086 [V]
ETMX T240 1 DOF Z/W = -0.1 [V]
ETMX T240 3 DOF X/U = -0.076 [V]
ETMX T240 3 DOF Y/V = -0.11 [V]
ETMX T240 3 DOF Z/W = -0.091 [V]
ETMY T240 1 DOF X/U = -0.014 [V]
ETMY T240 1 DOF Y/V = 0.16 [V]
ETMY T240 1 DOF Z/W = 0.21 [V]
ETMY T240 2 DOF X/U = -0.092 [V]
ETMY T240 2 DOF Y/V = 0.195 [V]
ETMY T240 2 DOF Z/W = 0.002 [V]
ETMY T240 3 DOF X/U = 0.224 [V]
ETMY T240 3 DOF Y/V = 0.012 [V]
ETMY T240 3 DOF Z/W = 0.116 [V]
ITMX T240 1 DOF Y/V = 0.26 [V]
ITMX T240 2 DOF X/U = 0.16 [V]
ITMX T240 2 DOF Y/V = 0.256 [V]
ITMX T240 2 DOF Z/W = 0.217 [V]
ITMX T240 3 DOF Y/V = 0.098 [V]
ITMX T240 3 DOF Z/W = 0.103 [V]
ITMY T240 1 DOF X/U = 0.051 [V]
ITMY T240 1 DOF Y/V = 0.108 [V]
ITMY T240 1 DOF Z/W = -0.026 [V]
ITMY T240 2 DOF X/U = 0.018 [V]
ITMY T240 2 DOF Y/V = 0.216 [V]
ITMY T240 2 DOF Z/W = 0.118 [V]
ITMY T240 3 DOF Y/V = 0.068 [V]
BS T240 1 DOF X/U = -0.106 [V]
BS T240 1 DOF Z/W = 0.149 [V]
BS T240 2 DOF X/U = 0.063 [V]
BS T240 2 DOF Y/V = 0.147 [V]
BS T240 2 DOF Z/W = 0.037 [V]
BS T240 3 DOF X/U = -0.176 [V]
BS T240 3 DOF Y/V = -0.29 [V]
HAM8 1 DOF X/U = -0.23 [V]


Assessment complete.
Averaging Mass Centering channels for 10 [sec] ...


2025-11-09 13:18:19.262567
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.593 [V]
STS EY DOF Z/W = 2.266 [V]


All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.436 [V]
STS A DOF Y/V = -0.917 [V]
STS A DOF Z/W = -0.49 [V]
STS B DOF X/U = 0.167 [V]
STS B DOF Y/V = 0.938 [V]
STS B DOF Z/W = -0.364 [V]
STS C DOF X/U = -0.708 [V]
STS C DOF Y/V = 0.764 [V]
STS C DOF Z/W = 0.522 [V]
STS EX DOF X/U = -0.201 [V]
STS EX DOF Y/V = -0.136 [V]
STS EX DOF Z/W = 0.122 [V]
STS EY DOF Y/V = 1.246 [V]
STS FC DOF X/U = 0.188 [V]
STS FC DOF Y/V = -1.118 [V]
STS FC DOF Z/W = 0.629 [V]


Assessment complete.