H1 ISC (ISC, PSL, SYS)
keita.kawabe@LIGO.ORG - posted 16:20, Monday 23 June 2025 - last comment - 16:34, Monday 23 June 2025(85211)
QPD tilt upgrade for ISS array (JennieW, Rahul, Keita)

We're assembling the first unit that incorporates all upgrades, including the QPD tilt, and here are some minor problems we've stumbled upon. (As far as I can see, no ISS array unit with the upgrade to tilt the QPD (E1400231) has been assembled before, and nobody seems to have updated all the drawings.)

The first picture shows an example of the QPD before the upgrade. The QPD assembly (D1400139) and the cable connector assembly (D1300222) are mounted on the QPD platform by the QPD clamp plate (D1300963-v1, an older version) and a pair of split QPD connector clamps (D1300220). Two kapton insulation sheets protect the QPD assy from being short-circuited to the platform.

After the upgrade, the QPD assy sits on top of a tilt washer (D1400146, called a beveled C-bore washer) that tilts the QPD by 1.41 deg in a plane that bisects the YAW and PIT planes at 45 degrees (2nd picture). The bottom kapton goes between the washer and the QPD platform plate.
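As a side note (my own small-angle decomposition, not something from the drawings): a 1.41 deg tilt in a plane 45 degrees between PIT and YAW corresponds to roughly 1 deg about each axis:

```python
import math

# Total QPD tilt from the beveled C-bore washer (D1400146), per the text above.
total_tilt_deg = 1.41

# The tilt plane sits 45 deg between PIT and YAW, so in the small-angle
# approximation each axis sees total_tilt / sqrt(2).
pit_deg = total_tilt_deg * math.cos(math.radians(45.0))
yaw_deg = total_tilt_deg * math.sin(math.radians(45.0))

print(f"PIT ~ {pit_deg:.2f} deg, YAW ~ {yaw_deg:.2f} deg")  # PIT ~ 1.00 deg, YAW ~ 1.00 deg
```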

Problem 1: Insulation between the QPD clamp and the QPD pins is a bit sketchy.

A tilted QPD means that the bottom of the QPD assy is shifted significantly in YAW and PIT. A new asymmetric QPD clamp plate with tilted seating for the screws (D1300963-v2) has been manufactured to accommodate that. But we have no record of updated kapton insulators, so the center of the clamp bore doesn't agree with the kapton (3rd picture; note that the QPD rotation is incorrect in this picture and had to be fixed when connecting the cable). Since the tilt washer is not captured by anything (it's just sandwiched between the clamp and the platform plate), it is possible to shift the QPD assy such that some of the QPD pins get grounded to the clamp, and thus to the QPD platform plate.

You must check that there's no electrical connection between the QPD assy and the platform each time you adjust the QPD position in the lab.

Problem 2: New QPD connector clamp posts are too long, old ones are too short.

Old posts for the QPD connector are 13/16" long, which is too short for the upgrade because of the tilt washer; see the 4th picture, where things are in a precarious balance. It seems to work OK, but if you wiggle the post a bit so that it slides laterally relative to the clamp and/or the platform, it settles at a different angle and suddenly things become loose. To avoid that, you have to tighten the screws so hard that they start bending (which may already be happening in this picture).

Also, because the clamp positions are 45 degrees away from the direction of tilt, one clamp goes higher than the other.

To address this, somebody procured 1" and 15/16" posts years ago, but they're too tall, to the point where the clamps are loose. What we need is probably something like 27/32" and 7/8" (maybe 7/8" works for both).

We ended up using the older 13/16" posts with added washers: two thin washers for the shorter clamp, and two thin plus one thick for the taller one (5th picture). This works OK. The shorter screw is the original; the longer screw was too long but works.

Problem 3: It's easy to set the rotation of the QPD wrong.

When retrofitting the tilt washer and the newer QPD clamp plate, you must do the following.

  1. Completely loosen the connector clamps and the QPD clamp to remove the QPD/connector assy as one unit.
  2. Put the tilt washer on top of the kapton insulation sheet at the bottom. The notch mark on the washer must point at the 45 deg edge of the platform plate (D1300719-v2); see the 6th picture.
  3. Separate the cable connector from the QPD to free the now-obsolete QPD clamp (-v1) that is captured between the QPD and the connector. I inserted a razor blade between the connector and the QPD assy and pried.
  4. Put the asymmetric QPD clamp (-v2) on the top kapton insulation, paying attention to the direction of the new clamp, using the 3rd picture as a reference.
  5. Make sure that the rotation of the QPD assy is the same as before because the pins of the QPD are not symmetric. There's no physical mark on the QPD assy itself.
    1. If you're unsure, rotate the QPD assy such that the QPD pins go to the right sockets on the connector, using picture 7. Remember that the QPD surface faces down toward the platform, while the drawing of the connector is viewed from the top.
    2. After checking it several times, tighten the QPD clamp.
    3. Put the connector on. Again use picture 7 to put it on at the correct angle.
    4. After checking it several times, tighten the connector clamps.

I screwed up and put the QPD on the connector at the wrong angle. It's easy to catch the error because no quadrant responds to the laser, but it's better not to make the mistake in the first place. It would help if the QPD assy barrel were marked at the cathode-anode1 corner.

It seems that D1300222 and D1101059 must be updated. Systems people please have a look.

D1300222: A tilt washer (D1400146), a new QPD clamp (D1300963-v2), and two sheets of kapton insulation are missing. The spacers need to be longer than the 13/16" shown.

D1101059: Explicitly state that part #28 (D1300963, QPD clamp) must be D1300963-v2.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 16:34, Monday 23 June 2025 (85261)

I installed the beam dumps (which are two plates of filter glass, probably from Schott?) for the array after cleaning them according to E2100057.

There are marks that look like water spots and/or some fog that couldn't be removed by repeated drag wiping with methanol (see picture).

After installation, I found that these plates are very loosely captured between two metal plates (see the video); this seems to be by design. I don't like it, but the same design has been working in chamber for years.

Images attached to this comment
Non-image files attached to this comment
H1 SEI (SEI, SYS)
rahul.kumar@LIGO.ORG - posted 10:11, Friday 06 June 2025 (84832)
B&K measurement of the periscope in staging building (for HAM1 70Hz investigation) - peak at 48Hz

To investigate the 70 Hz feature in the HAM1 chamber, which Jim reported in his alog (LHO 84638), I started looking into the structural resonances of the periscope installed in HAM1 (see the two pictures TJ sent me, taken by Corey: view01, view02).

Betsy handed me a periscope (similar but not exactly the same) for investigation purposes, which is now set up in the staging building. I attached an accelerometer to the top of the periscope and connected it to the front end of the B&K setup for hammer impact measurements; see the picture of the experimental setup here.

At first I used two dog clamps to secure the periscope. The results of the two-dog-clamp B&K measurement are shown in this plot (data from 0.125 to 200 Hz); one can see a 39 Hz feature in the Z hit direction. See the zoomed-in (30-100 Hz) figure here.

Next, I attached a third dog clamp, just like in the HAM1 chamber, and took a second round of measurements (especially for the Z-direction impact).

This plot compares the two- vs. three-dog-clamp scenarios on the periscope; one can see that the resonance mode has been pushed up from 39 Hz to 48 Hz.

Images attached to this report
H1 ISC (ISC, SYS)
keita.kawabe@LIGO.ORG - posted 15:47, Wednesday 07 May 2025 - last comment - 16:01, Wednesday 07 May 2025(84308)
HAM1 Wednesday

Morning (JennieW, Rahul, Keita)

We used the PRX flashes to align the POP path.

POP periscope location is good but the drawing is not.

The POP periscope position, which was set yesterday by Camilla and others, was right. That means the drawing D1000313-v19 is wrong: the periscope is actually about an inch in the -Y direction relative to D1000313-v19. See the first picture, which was shot with a cellphone inserted under the top periscope mirror, looking straight down at the bottom mirror. This means the dichroic (M12) needed to be shifted by the same amount too.

Since the distance between the IFO and the lens for the POP WFS doesn't matter that much, everything downstream (i.e. the 90:10, the PM1 tip-tilt, a lens, the 50:50, and POP LSC as well as POP WFS) will be installed per the drawing.

We mainly rotated the periscope mirror clamps around the post for rough alignment, but we might have changed the mirror height by a millimeter or two in the process. 

PM1, which is called that because it's the 1st (and last) suspended Mirror for POP, is somehow called RM3 in D1000313. Systems, please fix it.

Set the IR beam spot height/position on the periscope as well as the dichroic

The IR beam is supposed to be about 6 mm or 1/4" lower than the center line of both periscope mirrors as well as the dichroic, because the green ALS beams are supposed to be ~13 mm higher than the IR. See the “CPy-X, CPx-Y” case in Table 1 of L1200282.

Top Periscope Mirror

It was almost impossible to see how much lower the beam is than the center of the top and bottom periscope mirrors. Using the IR viewer card, Jennie and I agreed that the beam is lower than the center, but we could not say quantitatively by how much, especially on the top. We'll leave it as is; if the green beam from the end station is too high, we will have to use the picos because the periscope is already as high as possible.

Bottom peri mirror

If everything is as intended, the bottom periscope mirror center is 4" above the ISI surface and the POP beam is 1/4" lower than that; therefore the POP beam is (1 - 0.25*sqrt(2)) = 0.646" = 16.4 mm away from the bottom edge of the mirror.

Using a ruler in chamber (and measuring the dimensions of a spare Siskiyou mount with a caliper), the height of the bottom periscope mirror center was calculated to be ~4.07" from the ISI surface, i.e. 0.07" too high. This means that, when the beam height measured from the ISI is as designed (i.e. 4" - 1/4"), the POP beam is (1 - (0.25+0.07)*sqrt(2)) = 0.547" = 13.9 mm away from the bottom edge of the mirror.

If you have difficulty understanding this, see the cartoon.

POP beam radius is ~2mm, so 13.9mm (or even 13mm for that matter) looks like a safe distance to me. I don't see the need to readjust the height of the bottom periscope mirror.
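The two clearance numbers above can be reproduced with a small sketch (assuming, as the "1 - ..." formula implies, a 2"-diameter mirror, i.e. a 1" radius; the function name is mine):

```python
import math

def edge_clearance_inch(mirror_center_in, beam_height_in, mirror_radius_in=1.0):
    """Distance (inches) from the beam spot to the bottom edge of a
    45-degree periscope mirror: a vertical drop below the mirror center
    maps to sqrt(2) times that distance along the tilted mirror face."""
    drop = mirror_center_in - beam_height_in
    return mirror_radius_in - drop * math.sqrt(2)

# As designed: mirror center 4" above the ISI, beam 1/4" below center.
print(edge_clearance_inch(4.00, 3.75))  # ~0.646" = 16.4 mm

# As measured: mirror center ~4.07", beam at the design height of 3.75".
print(edge_clearance_inch(4.07, 3.75))  # ~0.547" = 13.9 mm
```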

I adjusted the top periscope mirror to set the beam height right after the bottom peri mirror to be ~3.75" using the IR viewer card and a ruler.

Dichroic

I placed the dichroic about 1" into -Y direction relative to the drawing (because I had to), and used the bottom periscope mirror to set the beam height close to the dichroic to be ~3.75".

Then I used the dichroic to steer the beam into the direction of the location for PM1 without placing 90:10.

For the beam profile measurement, the downstream alignment is done without 90:10. Later we will install 90:10 back in place and do the final alignment.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 16:01, Wednesday 07 May 2025 (84310)

And here's a memo of how we "measured" the height of the center of the bottom periscope mirror.

Images attached to this comment
H1 AOS
daniel.sigg@LIGO.ORG - posted 13:07, Monday 21 April 2025 - last comment - 15:52, Tuesday 29 April 2025(84027)
OSEM readbacks for RMs

Fil Marc Daniel

We noticed that the photodiode pins of the in-air cable that connects to RM1 and RM2 were flipped. This will flip cathode and anode which will only matter if there is a bias applied.

Going thru the schematics, it seems that with 1:1 wiring the PD anode and cathode are flipped compared to what is shown in Sat Amp D080276. I assume somebody noticed this and flipped the corresponding pins of the in-air cable to correct for it back in 2013. The polarity of the PD doesn't really matter if there is no bias. We checked the RM sat amps and they have no bias.

If the Sat Amp PDs are connected as shown on D080276, the OSEM values will be negative. Indeed, the RMs had negative OSEM values as expected. However, all ZMs have positive values, since nobody bothered to flip the pins. We propose to no longer flip the PD pins for the RMs and to work with positive OSEM values in the future.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:52, Tuesday 29 April 2025 (84176)CDS, ISC, SUS, SYS
J. Kissel

First, supporting Daniel's findings that the RM1 and RM2 HTTS OSEM sensor readbacks have been incorrectly negative for ages, I attach a trend of the OSEM ADC input values (and OSEMINF OFFSETS and GAINs) back to 2017.

Not said explicitly in the above aLOG -- Daniel / Fil / Marc installed fresh new DB25 Sat Amp to Vac Feedthru cables (D2100464) just after this aLOG as a part of cabling up the new signal chain for PM1. These cables are one-to-one pin-for-pin cables so it *should* have played no role in the sign of the sensors or how they're connected. 

However, it's now obvious that RM1 and RM2 have had the ADC input voltages as negative for a *very* long time. 

I reviewed the signal chains for the OSEM PDs; see G2500980.

The RMs are using a US 8CH satamps D1002818 / D080276.

I attach a screen-cap of page 4, which highlights that 
    - UK 4CH satamps D0900900 / D0901284 use 
        . a negative reverse bias configuration, with
        . anode connected to bias and cathode connected to the negative input of the TIA, and
        . an inverting differential amplifier stage
    - US 8CH satamps D1002818 / D080276 use 
        . a positive reverse bias configuration, with 
        . cathode connected to the bias and anode connected to the negative input of the TIA, and [hence the apparent "pin-flip" from the other two satamp types]
        . a non-inverting differential amplifier stage
    - US 4CH satamps D1900089 / D1900217 use 
        . a negative reverse bias configuration, with
        . cathode connected to the bias and anode connected to the negative input of the TIA, and
        . a non-inverting differential amplifier stage [hence the overall "sign flip" from the other two]

I disagree with Daniel that "the polarity of the PD" (i.e. how the anode and cathode are connected to the transimpedance amplifier) doesn't matter.
All versions of the PD pinout + SatAmp configure the system in a reverse bias, be it negative or positive. 
In these configurations, even with zero bias, photocurrent always flows from cathode to anode as light impinges on the PD. 
So if the anode is hooked up to the negative input of the transimpedance amp (as in D1002818), that signal will have a different sign than if the cathode is hooked up to the negative input (as in D0900900 and D1900089).

The RMs should have their *positive* bias from the D080276 circuit applied.
In 2011, we'd agreed in G1100856 to jumper all OSEM satamps to the "L" position, a.k.a. the LIGO OSEM a.k.a. AOSEM position, which uses a bias voltage and is thus in photoconductive (PC) mode. This is mostly because we wanted not to have to keep track of which satamps are jumpered and which are not, but also because the BOSEM PD didn't mind having a bias, even though it was designed to have none.

Given the "thought of 2nd to last, last, and never" history of the RMs (traversing subsystems from ISC to SUS circa O1, moving from the ASC front end to a SUS front end circa O2, then never really appearing in wiring diagrams until O3, etc.), it wouldn't surprise me if the RMs didn't get the memo to put the bias jumper in the "L" position, jumpering pins 2 and 3.

I disagree with Daniel: only the US 4CH satamp D1900089 / D1900217 should read negative with light on it.

So, my guess is that someone "in 2013" (i.e. during aLIGO install, prior to O1) didn't understand these subtleties between the UK 4CH and US 8CH satamps, saw the "different from UK 4CH satamp" behavior, and tried fixing it at the pins of the in-air DB25 from satamp to vacuum flange.

Regardless, the new cable has cleared up the issue, and the ADC voltage from RM1 and RM2 is now positive.
Images attached to this comment
H1 ISC (ISC, PSL, SEI, SQZ, SYS)
jeffrey.kissel@LIGO.ORG - posted 14:12, Friday 18 April 2025 (83996)
Power In ALS / SQZ / SPI Paths Post SPI Pick-off Install
J. Kissel scribing for S. Koehlenbeck, J. Freed, and R. Short
ECR E2400083
IIET 30642
WP 12453

Just writing a separate explicit aLOG for this for ease of reference in the future.

Before, during and after the SPI install we measured power along respective paths,
(1) The p-pol in-coming power into the whole ALS / SQZ / SPI pick-off path,
(2) The "ALS COMM" path: p-pol power in transmission of the ALS-PBS01, with the newly modified ALS-HWP2 position to get more total power in paths (3) and (4),
(3) The "SPI pick-off" path: The ~s-pol power in reflection of SPI-BS1,
(4) The "ALS/SQZ Fiber Distribution" path: ~s-pol power in transmission of SPI-BS1,
(see discussion in LHO:83978 as to why we're not so confident the reflection from ALS-PBS01 is entirely s-pol, which is why I say "~s-pol" here.)

The BEFORE vs. AFTER power in these paths is as follows:
  Path   Measured Between        Raw Power [mW]  PMC TRANS [W]     Date/Time Measured      Scaled Power [mW]  Fractional Power [%]
  (1)    ALS-L1 and ALS-HWP2         2060          103.2           2025-04-15 21:39 UTC       2095.9               ---

      BEFORE
  (2)    ALS-M2 and ALS-L2           1970          102.8           2025-04-15 22:17 UTC       2012.2               96%
  (4)    ALS-M9 and ALS-FC2          48.7          103.0           2025-04-15 21:48 UTC         49.646              2%

      AFTER
  (2)    ALS-M2 and ALS-L2           1790          103.3           2025-04-17 21:13 UTC       1817.7               87%
  (3)    SPI-L1 and SPI-L2            186          103.5           2025-04-16 22:43 UTC        188.7                9%
  (4)    ALS-M9 and ALS-FC2            50.5        103.5           2025-04-16 22:43 UTC         51.232              2%
where I've scaled all the raw power measurements by the rough nominal PMC TRANS power during recent observing times, 105 [W], i.e. 
(Scaled Power [mW]) = (Raw power) * (105 [W] / PMC TRANS [W])
and
(Fractional Power [%]) = 100 * {Scaled Power [mW]; paths (2)-(4)} / {Scaled Power [mW]; path 1}
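The scaling above can be sketched as follows (helper names are mine; the formulas are exactly the two quoted):

```python
PMC_NOMINAL_W = 105.0  # rough nominal PMC TRANS power during recent observing times

def scaled_power_mw(raw_mw, pmc_trans_w):
    """Scale a raw power measurement to the nominal PMC TRANS power."""
    return raw_mw * PMC_NOMINAL_W / pmc_trans_w

def fractional_power_pct(scaled_mw, input_scaled_mw):
    """Fraction of the input-path (path 1) power, in percent."""
    return 100.0 * scaled_mw / input_scaled_mw

p_in = scaled_power_mw(2060, 103.2)      # path (1): ~2095.9 mW
p_before = scaled_power_mw(1970, 102.8)  # path (2) BEFORE: ~2012.2 mW
print(f"{fractional_power_pct(p_before, p_in):.0f}%")  # 96%
```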


As Ryan mentions in LHO:83989, for the purposes of safe long-term storage, after all was said and done, we rotated SPI-HWP1 such that only 20 [mW] would go toward the SPI-FC1 fiber collimator, and it's dumped upstream just after SPI-L1. These AFTER power levels are what we anticipate running with for the rest of O4, with the SPI pick-off path safely out of commission.

2W Measurements
Head:    Ophir    10A-V2-SH SN122042 
Readout: Ophir    20C-SH    SN171175
Accuracy: +/-3%

200 mW Measurements:
Head:    Thorlabs S401C
Readout: Thorlabs PM100-D
Accuracy: +/- 7%

The attached picture summarizes all this info. The picture was taken midday 2025-04-17, so you see the Ophir power meter in the middle of the ALS COMM path.
Images attached to this report
H1 PSL (ISC, PSL)
sina.koehlenbeck@LIGO.ORG - posted 15:36, Thursday 17 April 2025 - last comment - 11:44, Monday 21 April 2025(83983)
Install SPI pick-off path: Laser mode to fiber collimator

S. Koehlenbeck, J. Freed, R. Short, J. Kissel

The mode matching of the PSL pick-off beam to the SPI fiber collimator has been implemented using two lenses. The target beam has a mode radius of 550 µm at a position 63.5 cm downstream from the SPI beamsplitter (SPI-BS).

The lens configuration that produced the closest match to the target mode used:

Attached is a beam profile fit performed using JaMMT on data acquired with a WinCamD of the beam after SPI-L2. The measured beam radii at various distances from the SPI-BS are as follows:

Distance (cm)   Horizontal Radius (µm)   Vertical Radius (µm)
   70.734               476                    542
   91.054               470                    543.5
  116.454               558.5                  616.5

Both lenses are oriented such that their planar sides face the small beam waist between the two lenses. The arrows on the lens mounts point toward the convex surfaces.

The power transmission through the fiber has been measured to be 83 %.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:35, Friday 18 April 2025 (83995)ISC, SEI, SQZ, SYS
ECR E2400083
IIET 30642
WP 12453


Some "for the record" additional comments here:
- Sina refers to the "SPI-BS" above, which is the same as what we've now officially dubbed as "SPI-BS1."

- Lenses were identified as needed after the initial measurement of the beam profile emanating from SPI-BS1. That initial beam profile measurement is cited in LHO:83956, and the lens solution was also developed in JaMMT using the lenses available from the optics lab / PSL inventory.

- If anyone's trying to recreate the model of the beam profile from the two measurements (LHO:83956 with no lenses, and the above LHO:83983), note that the "zero" position differs in the quoted raw data: in LHO:83956 it is the front of the rail, at Column 159 of the table, while in LHO:83983 it is the SPI-BS1 reflective surface, at Column 149 of the table, i.e. a 10 inch = 25.4 cm difference.

- The real SPI-L1 installed to create this mode-shape / beam profile is labeled by its radius of curvature, which is R = 51.5 mm, and thus its focal length is more precisely f = R*2 = 103 mm. (We did find a lens that does have f = 60 mm for SPI-L2, and it's labeled by its focal length.)

- "the fiber" is that which is intended for permanent use, depicted as SPI_PSL_001 in the SPI optical fiber routing diagram D2400110, a Narrow Key PM-980 Optical Fiber "patch cord" from Diamond, whose length is 30 [m]. This fiber will run all the way out to SUS-R2, eventually, to be connected as the input to the SPI Laser Prep Chassis (D2400156).

- Per design, light going into this fiber is entirely p-pol, due to polarization via SPI-HWP1 and clean-up by SPI-PBS01 just upstream. We did not measure the polarization state of the light exiting the fiber.

- The raw data that informs the statement that "the power transmission thru the fiber has been measured to be 83%":
     : We measured the input to the fiber coupler, SPI-FC1, via the S140C low-power power meter we'd been using throughout the install. The output power was measured via a fiber-coupled power meter Sina had brought with her from Stanford (dunno the make of that one).

     : We measured the power input to the fiber twice, several hours apart (with the change in fiber input power controlled via the SPI-HWP1 / SPI-PBS01 combo).
         (1) 19.9 [mW] with PMC TRANS power at 104.1 [W] at 2025-04-17 16:35 UTC (while the PMC power was in flux from the environmental controls turning on)
         (2) 180 [mW] with PMC TRANS power at 103.5 [W] at 2025-04-17 14:00 UTC (while the PMC power was quite stable)

     : We measured the output power
         (1') 16.6 [mW] with PMC TRANS power at 103.7 [W] at 2025-04-17 17:35 UTC (an hour later than (1))
         (2') 150 [mW] with PMC TRANS power at 103.5 [W] at 2025-04-17 14:00 UTC (simultaneous to (2))

     : Thus we derive the transmission to be 
         (1'') (16.6 / 19.9) * (104.1/103.7) = 0.837 = 83.7% and 
         (2'') (150 / 180) * (103.5/103.5) = 0.833 = 83.3%
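The arithmetic in (1'') and (2'') can be written as a small helper (the function name is mine):

```python
def fiber_transmission(p_in_mw, pmc_in_w, p_out_mw, pmc_out_w):
    """Fiber power transmission, normalizing each measurement by the PMC
    TRANS power at the time it was taken (input and output were measured
    an hour apart in case (1)/(1'))."""
    return (p_out_mw / p_in_mw) * (pmc_in_w / pmc_out_w)

print(f"{fiber_transmission(19.9, 104.1, 16.6, 103.7):.3f}")  # 0.837
print(f"{fiber_transmission(180, 103.5, 150, 103.5):.3f}")    # 0.833
```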
sina.koehlenbeck@LIGO.ORG - 11:44, Monday 21 April 2025 (84025)

In the attachment you will find the JaMMT model for the measured beam profile of the PSL pick-off, with the origin at SPI-BS1, as well as the lenses used to adjust the mode of the beam for the fiber collimator FC60-SF-4-A6.2-03.

Images attached to this comment
H1 PSL
jeffrey.kissel@LIGO.ORG - posted 16:03, Wednesday 16 April 2025 - last comment - 11:35, Monday 21 April 2025(83961)
SPI Pick-off Path Install Day Two
J. Kissel scribing for S. Koehlenbeck, J. Oberling, R. Short, J. Freed
ECR E2400083
IIET 30642
WP 12453

Another quick summary aLOG at the end of the day, with more details to come:
- With the power in the ALS/SQZ pick-off path set to 10 [mW] for beam profiling,
- Installed a two lens system to handle the unexpectedly different beam profile of the ALS/SQZ pick-off path
- Remeasured the resulting mode after the two lens system, and we're happy enough. We're gunna call them SPI-L1 and SPI-L2.
- Installed steering mirrors SPI-M1 and SPI-M2.
- Rotated ALS-HWP2 to increase the s-pol light in the ALS/SQZ/SPI path to return the power transmitted through SPI-BS1 going to the ALS/SQZ fiber collimator back to 50.5 [mW]. This set the SPI path to 186 [mW] with the PMC TRANS measured at 103.5 [W]. The ALS_EXTERNAL PD in transmission of ALS-M9 measured 31 [mW] ***. 
- Installed SPI-HWP1 and SPI-PBS01
- Measured the power at each port of SPI-PBS01, with the intent to optimize the SPI-HWP1 position to yield maximum p-pol transmission through SPI-PBS01.

*** We expect this is lower than the goal of ~45 [mW] (from LHO:83927) because we've not yet re-aligned the ALS/SQZ fiber collimator path after the install of the SPI-BS1, which translates the beam a bit due to the thickness of the beam splitter. We intend to get back to this once we're happy with the SPI path.
Comments related to this report
ryan.short@LIGO.ORG - 17:54, Wednesday 16 April 2025 (83965)

Small correction to the above: after installing SPI-HWP1 and SPI-PBS01, we adjusted HWP1 to have 20mW in transmission of PBS1 (not maximum quite yet) to start alignment into the fiber. Using the two steering mirrors downstream of PBS1 and the collimating lens in front of the fiber, Sina maximized the transmission as measured with the output of the fiber on a spare PD. We then took power measurements of the input and output of the fiber:

  • Input: 19.4mW
  • Output: 13.5mW
  • Transmission ratio: 72.1%

This is a good start, but with a target ratio of >80%, there's still more work to be done here improving the beam into the fiber collimator. Our current mode-matching solution claims we should have 95% mode overlap into the fiber, so hopefully the issue is alignment, but it's entirely possible we'll revisit the mode-matching to see if improvements can be made there too.

The attached photo represents the optical layout as it stands as of where we stopped today, with the new SPI fiber in blue on the left (north) side of the table.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:10, Thursday 17 April 2025 (83976)ISC, SQZ, SYS
Re-post of Ryan's picture at the end of day 2, labeled with the almost entirely complete SPI pick-off path.

Critically here, this shows the PSL row/column grid, confirming that this whole ECR E1900246 ALS pick-off path is 2 rows "higher" in +Y than is indicated on the current version of the as-built PSL drawing D1300348-v8.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 11:35, Monday 21 April 2025 (84024)
Ryan grabbed another picture, which I attach here. This shows the ALS pick-off path on this day, in order to support the identification that the beam line from ALS-M1, through the Faraday ALS-FI1 and ALS-L1, etc., stopping at ALS-M2 (not pictured), is on row 25 of the PSL table, *not* row 23 as drawn in D1300348-v8. I attach both the raw picture and my labeled version. So, ya, ALS-M1 should have its HR surface centered on Row 25, Col 117.

Note, the grid in the picture labels bolt holes. Because the optical elements are all ~4 inches above the table and the photo was taken at a bit of an angle from vertical, the beams appear offset from where they actually travel along the grid. May the future updater of D1300348 bear this in mind.

Images attached to this comment
H1 PSL (ISC, SQZ, SYS)
jeffrey.kissel@LIGO.ORG - posted 14:15, Wednesday 16 April 2025 - last comment - 11:36, Monday 21 April 2025(83956)
Beam Profiling the ALS / SQZ Fiber Distribution Pick-off Path in Prep for SPI Pick-off
J. Kissel scribing for S. Koehlenbeck, R. Short, J. Oberling, and J. Freed
ECR E2400083
IIET 30642
WP 12453

During yesterday's initial work installing the SPI pick-off path (LHO:83933), the first optic placed was SPI-BS1, the 80R/20T power beam-splitter that reflects most of the s-pol light towards the new SPI path. The pick-off is to eventually be sent into a SuK fiber collimator (60FC-SF-4-A6.2S-03), so we wanted to validate the beam profile / mode shape of this reflected beam.


Then, without changing any power in the ALS/SQZ/SPI pick-off path, the power reflected from the newly installed SPI-BS1 measured ~40 [mW] (see LHO:83946). This is too much for the WinCam beam profiler, so they used ALS-HWP2 to rotate the polarization going into ALS-PBS01 and thus reduced the reflected s-pol light in this ALS/SQZ/SPI pick-off path to ~10 [mW]. That necessarily means a little more of the ~2 [W] p-pol light is transmitted toward the HAM1 light pipe, so they placed a temporary beam dump after ALS-M2 so as to not have to think about it.

Then they set up a WinCam head on a rail and gathered the beam profile. With the WinCam analysis software on a computer stuck in the PSL, they simply read off the profile information, which I report here: 
# Distance[cm]	Radius[um]   Radius[um]
                   X             Y
    0.0           680.5         717
    17.78         465           504
    25.4          389           428.5
    30.48         346.5         368
    38.1          281.5         300.5

where "X" is parallel to the table, and "Y" is orthogonal to the table. The "0.0" position in this measurement is the "front" of the rail (the right-most position as pictured in the attachment), which is Column 159 of the PSL grid. SPI-BS1 has the center of its reflective surface set, in +/- X, at Column 149 (within the existing ALS-PBS01 to ALS-M9 beam line). Its +/- Y position is set to create a reflected beam line along Row 30 of the grid, and the WinCam head and rail are centered in +/- Y on that Row to capture the beam.

This profile measurement turns out to be quite different from what was expected when this path was installed circa 2019 (see e.g. LHO:52381, LHO:52292, LHO:51610). Jason shared his mode matching solution from LHO:52292 with us prior to this week, and I've posted it as a comment to that aLOG, see LHO:83957.

We think we can trace the issue down to an error in the as-built drawing for the PSL:
- the whole beam path running in the +/-X direction from ALS-M1 to ALS-M2 is diagrammed to be on row 23; however, we find that in reality the path lies on row 25. That's 2 inches more between the (unlabeled) pick-off beam splitter just prior to ALS-M1 and ALS-M1 itself, easily enough to distort a mode matching simulation.
- Jason confirms that he used the *drawing* to design the lens telescope for this ALS/SQZ fiber distribution pick-off path.

More on this as we work through a lens solution for the SPI path.

As of this entry, we elect NOT to create a new solution for the whole ALS/SQZ fiber distribution pick-off, i.e. we *won't* adjust ALS-L1 or ALS-L5 to fix the true problem. But we report what we found in case it helps make mode matching and alignment into the ALS/SQZ fiber distribution pick-off easier in the future; we have verbal confirmation that it was quite a pain.
For the record the fiber collimator used in the ALS/SQZ distribution pick-off is a Thor Labs F220 APC-1064.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:32, Wednesday 16 April 2025 (83960)
Just a quick trend of the SM1PD1A EXTERNAL PD in transmission of ALS-M9 after they throttled the s-pol power in the ALS/SQZ/SPI path to ~10 [mW]. 
In that trend, you can see the difference between "lights on" and "lights off", highlighted with the magenta vertical lines.

Note, as you can see in the picture, the reflection of ALS-M9 is dumped so as to not have to think about how much power is or is not going into the ALS/SQZ fiber distribution collimator (ALS-FC2), so the INTERNAL monitor PD that's in the distribution chassis itself is "correctly" reading nothing, and I don't show it.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 08:23, Friday 18 April 2025 (83993)
Correction to the last sentence of the main entry -- the ALS/SQZ fiber collimator is *not* an F220 APC-1064, but instead a Thorlabs Fiber Port PAF2-5A, pictured well in FinalInstall_ALSfiber.jpg from LHO:83989.

I had incorrectly assumed that this collimator would be a copy of ALS-FC1, which *is* listed in E1300483 as an F220 APC-1064.
sina.koehlenbeck@LIGO.ORG - 11:36, Monday 21 April 2025 (84023)

In the attachment you will find the fit with JAMMT to the measured beam profile data with offset correction:

  • Offset of measurement point to SPI-BS1: 10.2 cm
  • Offset of measurement point to beam profiler surface: 7.3 cm
    Distance (cm)    Radius horiz. (um)    Radius vert. (um)
    17.46            680.5                 717
    35.24            465                   504
    42.86            389                   428.5
    47.94            346.5                 368
    55.56            281.5                 300.5
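As a cross-check of the JAMMT result, the tabulated data can be fit directly to the Gaussian beam radius formula w(z) = w0*sqrt(1 + ((z - z0)/zR)^2) with zR = pi*w0^2/lambda. A minimal sketch (not JAMMT itself), assuming lambda = 1064 nm, M^2 = 1, and using the distances as tabulated, i.e. ignoring the offset corrections listed above:

```python
import numpy as np
from scipy.optimize import curve_fit

LAM = 1064e-9  # PSL wavelength [m]

def beam_radius(z, w0, z0):
    """Gaussian beam radius w(z) for a waist w0 [m] located at z0 [m]."""
    zR = np.pi * w0**2 / LAM  # Rayleigh range
    return w0 * np.sqrt(1.0 + ((z - z0) / zR) ** 2)

# Data from the table above (distances and radii converted to meters)
z = np.array([17.46, 35.24, 42.86, 47.94, 55.56]) * 1e-2
w_h = np.array([680.5, 465.0, 389.0, 346.5, 281.5]) * 1e-6  # horizontal
w_v = np.array([717.0, 504.0, 428.5, 368.0, 300.5]) * 1e-6  # vertical

for label, w in (("horiz", w_h), ("vert", w_v)):
    # initial guess: ~200 um waist somewhere downstream of the last point
    (w0, z0), _ = curve_fit(beam_radius, z, w, p0=[200e-6, 0.8])
    print(f"{label}: w0 = {abs(w0) * 1e6:.0f} um at z = {z0 * 100:.0f} cm")
```

The beam is converging over the whole measured span, so the fit places the waist downstream of the last measurement point.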
Images attached to this comment
H1 PSL (ISC, SQZ)
jeffrey.kissel@LIGO.ORG - posted 16:03, Tuesday 15 April 2025 - last comment - 12:18, Wednesday 16 April 2025(83933)
SPI Pick-off Path Installation Begun
J. Kissel, S. Koehlenbeck, J. Oberling, R. Short, J. Freed
ECR E2400083
IIET 30642
WP 12453

After this morning's kerfuffle with the belated power-outage recovery of the PSL HVAC system was resolved, Sina, Ryan, and Josh began walking through the procedure outlined in Section 1 of T2500024. We're keeping running notes on the fly at the bottom of the google-doc for now.

In summary here, with more details to come, we got as far as 
- Clearing out some old IO equipment that was unused and in the way of the SPI pick-off path
- Measuring the power around ALS-PBS01
- Installing the new ALS/SPI 80R/20T beam splitter
- Measuring the beam profile along the future SPI path, in reflection of this 80R/20T beam splitter.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:48, Wednesday 16 April 2025 (83946)CSWG, SEI, SYS
Among the first things we did was measure the power at various places along the ALS beam path to get a good starting point. 

For the ~2 W beams we used an Ophir 20C-SH (datasheet says accuracy of +/- 3%), and for ~100 mW beams we used an S401C (datasheet says accuracy of +/- 7%). 

For laser safety, we would unlock the PMC and shutter the laser while we were placing these power meters, so I also kept track of the PMC TRANS to scale our measurements by input power appropriately.
We did NOT turn on the PMC's intensity stabilization servo (ISS), for no particular reason other than we forgot to turn it on for the first measurement, and then wanted to stay consistent. 
This meant that the PMC TRANS itself was slowly wandering at the ±0.5 [W] level, so my reported values below are "eyeball averages."

So, not exactly a NIST-level precision/accuracy setup, but good enough for sanity checks. As such, I'm not going to bother reporting uncertainty in the numbers below.

Here're the results (again, this is prior to doing anything to the path).
Times of measurements are all for 2025-04-15, and in UTC, such that trends of other PDs may be captured if need be.
    Location                               Power Meter [mW]         PMC Trans [W]       Time [UTC]
    (1) Going in to ALS-HWP2                   2060                    103.2              21:39     # expected: 2000 [mW]; good!
       (between ALS-L1 and ALS-HWP2)      

    (2) p-pol in trans of ALS-PBS01            1970                    102.8              22:17     # expected: 1950 [mW]; good!
        (between ALS-M2 and ALS-L2)          
    
    (3) s-pol in refl of ALS-PBS01               49.4                  103.2              21:44     # expected: 50 [mW]; good!
        (between ALS-L1 and ALS-M9)            
    
    (4) s-pol in refl of ALS-M9                  47.7                  103.0              21:48     # 44.9 [mW] reported by ALS-C_FIBR_EXTERNAL_DC_POWERMON, which is in trans of ALS-M9 at this time; good!
       (between ALS-M9 and ALS-FC2) 

All of these powers match expectation quite exquisitely. My guess for the inconsistency of (4) with the EXTERNAL monitor PD is that the beam splitter ratio of ALS-M9 programmed into the Beckhoff calibration of the PD's channel is a bit off, but this can be cross-checked later.

We then installed SPI-BS1 (the 80R/20T BS), and cross-checked the reflectivity reported in LHO:83863.
    Location                               Power Meter [mW]         PMC Trans [W]       Time [UTC]
    (5) s-pol in refl of SPI-BS1                 37.7                  102.4              22:22

The PMC power is lower at (5) than it was at (3), so the input to SPI-BS1 is slightly different and we need to scale the measurement a bit,
    Input Power to SPI-BS1 = 49.4 [mW] * (102.4 / 103.2) = 49.0 [mW]
    REFL power from SPI-BS1 = 37.7 [mW]
    
    Fractional reflection = 37.7 [mW] /  49.0 [mW] =  0.769 = 77%
    (from LHO:83863) = 77%.
Thus, our results today are consistent with what Josh and Keita measured in the optics lab.
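This PMC-TRANS rescaling is just a linear correction; a minimal sketch of the bookkeeping (the function name is ours). Note the ratio direction: the scaling multiplies by 102.4/103.2, since the PMC power was slightly lower during the reflection measurement:

```python
def rescale_power(power_mw, pmc_trans_then_w, pmc_trans_now_w):
    """Rescale a power-meter reading taken at one PMC TRANS level to the
    equivalent reading at another level, assuming the measured power is
    linear in PMC TRANS."""
    return power_mw * (pmc_trans_now_w / pmc_trans_then_w)

# Input to SPI-BS1: measured at 103.2 W of PMC TRANS, rescaled to the
# 102.4 W present during the reflection measurement (5):
p_in = rescale_power(49.4, 103.2, 102.4)
p_refl = 37.7  # mW, measured directly at (5)
print(f"input = {p_in:.2f} mW, R = {p_refl / p_in:.3f}")
# input = 49.02 mW, R = 0.769
```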
jeffrey.kissel@LIGO.ORG - 12:18, Wednesday 16 April 2025 (83948)
Pictures from the work on 2025-04-15.

The first three attachments are without labels, just in case the pics are needed for something else in the future.

The diagram we were working with (from the SPI ECR) is also attached here for convenience.

The second three attachments *are* labeled, so I'll describe what happened using those.
20250415_some_optics_removed_labeled.jpg
- This is (mostly) how the team started the day: with the area where the SPI pick-off path is intended to go full of un-diagrammed spare/unused stuff. I highlight with red circles everything that was removed in this first attachment. Additionally, before the picture was taken, ALS-M8 and ALS-FC1 were removed and the temporary large vertical beam block was installed.

20250415_all_optics_removed_labeled.jpg
- This is the "after" picture with all components cleared, and the table layout during the power measurements. As you can imagine, because of the lens tube on the SM1PD1A, there was no room between the PD and ALS-M9 to insert a power meter to measure the light transmitted thru ALS-M9. As such, we can't validate the beam-splitting ratio of that optic. Ah well.

20250415_end_day_1_labeled.jpg
- This is how we left things yesterday: SPI-BS1 installed in its permanent location. Downstream, we sent the reflected beam into a WinCam head so that we could profile the beam incoming to the SPI path -- and assess whether we need lenses in order to adjust the beam size to match our fiber collimator. While we definitely saw the expected change in power and alignment at ALS-FC2, we elected to restore the power and alignment later.
Images attached to this comment
H1 SUS (SEI, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 15:49, Monday 14 April 2025 - last comment - 15:49, Monday 14 April 2025(83906)
OSEM Estimator Infrastructure Installed on HAM2/PR3 and HAM5/SR3
J. Kissel, B. Lantz, E. Bonilla, D. Barker
ECR E2400405 
IIET 32526
WP 12459

Using a linear combination of LHO:83724, the google slides in G2402303, and talking to Brian/Edgard who arrived for their week long visit today -- we installed the OSEM estimator infrastructure today on PR3 *and* SR3. 

We've only filled in enough infrastructure to ensure that its function is entirely OFF.
Hopefully we'll get further this week.

The new channels have been initialized in the safe.snap for PR3 and SR3. 
    /opt/rtcds/userapps/release/sus/h1/burtfiles/
        h1suspr3_down.snap
        h1sussr3_safe.snap

We made edits to Brian / Edgard's work on common library parts,
    /opt/rtcds/userapps/release/sus/common/models
        HLTS_MASTER_W_EST.mdl
        SIXOSEM_T_STAGE_MASTER_W_EST.mdl
medm screens
    /opt/rtcds/userapps/release/sus/common/medm/hxts
        SUS_CUST_HLTS_OVERVIEW_W_EST.adl
    /opt/rtcds/userapps/release/sus/common/medm/estim/
        ESTIMATOR_OVERVIEW.adl
        CONTROL_6.adl
(see attached notes for details).

And then I made edits to
    /opt/rtcds/userapps/release/isi/h1/models/
        h1isiham2.mdl
        h1isiham5.mdl
to add PCIE Dolphin IPC senders to send out the calibrated ISI HAM ST1 GS13s that are pre-projected into the PR3 and SR3 suspension point euler basis, and then modified
    /opt/rtcds/userapps/release/sus/h1/models/
        h1suspr3.mdl
        h1sussr3.mdl
top level models to receive those IPCs, and pipe them into the top of the new main library part, the HLTS_MASTER_W_EST.mdl

Finally, after Dave installed the model changes and restarted the ISIs and HLTSs, we edited the sitemap
    /opt/rtcds/userapps/release/cds/h1/medm/
        SITEMAP.adl
 to use the HLTS_OVERVIEW_W_EST.adl.

Every file I mention above has been committed to its respective location in the svn.
Screenshots of stuff will come in the comments.
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:39, Monday 14 April 2025 (83907)SEI
h1isiham2 top level BEFORE vs. AFTER, focusing on the change to the output CART to PR3 EUL Suspension Point projections of IPC senders.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:40, Monday 14 April 2025 (83908)
h1isiham5 top level BEFORE vs. AFTER, focusing on the change to the output CART to SR3 EUL Suspension Point projections of IPC senders.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:41, Monday 14 April 2025 (83909)
Top level of h1suspr3 BEFORE vs. two AFTERs, one focusing on the IPC and the other on the main library block.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:42, Monday 14 April 2025 (83910)
HLTS_MASTER_W_EST.mdl screenshot focusing on where the suspension point signals come in.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:45, Monday 14 April 2025 (83911)
Some detail on the M1 block inside the HLTS_MASTER_W_EST, because I've cleaned up how the ISIFF is added into the ISC signals. This is the SIXOSEM_T_STAGE_MASTER_W_EST block. Part of the cleanup is to create a new subsystem block ADDFF, so we now have EPICS monitors and test points before and after the ADD. These will be useful for commissioning.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:49, Monday 14 April 2025 (83912)
Here's the HLTS MEDM screen overview now, with the new parts highlighted. The second attachment here is a preview of what the infrastructure will look like.
Images attached to this comment
H1 AOS (SEI, SYS)
jason.oberling@LIGO.ORG - posted 18:16, Tuesday 08 April 2025 - last comment - 17:03, Wednesday 09 April 2025(83823)
WHAM1 Passive Stack Optical Table Pre-Deinstall Measurements (WP 12442)

J. Oberling, R. Crouch

Today we took pre-deinstall measurements of the position of the optical table surface of the WHAM1 passive stack.  The plan was to use the FARO to measure the coordinates of several bolt holes, using a threaded nest that locates the Spherically Mounted Retroreflector (SMR) precisely over the bolt hole, on both the +Y and -Y sides of the chamber.  This, unfortunately, did not happen in full due to the untimely death of the FARO's climate sensor (or of the FARO's ability to read the climate sensor; we're hoping for the former).  The FARO cannot function without this sensor, as it relies on accurate measurements of the air temperature, relative humidity, and air pressure to feed into a model of the refractive index of air, which it needs to accurately calculate the SMR distance from the FARO.  We did manage to get a few points measured before the sensor died.  I've reached out to FARO tech support about getting a new climate sensor and should hear back from them tomorrow (they usually reply in 1 business day).

Summary

We were able to get measurements of 3 bolt holes, all in the furthest -Y line of bolt holes, and an old IAS monument from aLIGO install before the FARO's climate sensor died.  The results are listed below under the Results heading.  The most interesting thing here is there appears to be an error in WHAM1 placement in the x-axis, as the bolt holes we measured are all ~37.25 mm too far in the -X direction from nominal.  We also set a scale on the wall across from the -Y door of the WHAM1 chamber that is registered to the current elevation of the optical table; placing an autolevel so it sights 150.0 mm on this scale (sighting the side of the scale with the 0.5 mm tick marks) places that autolevel 150.0 mm above the surface of the passive stack's optical table.

Details

We started on the -Y side of the WHAM1 chamber.  The FARO was set with a good view of its alignment monuments and the passive stack's optical table.  We ran through the startup checks and calibrations without much issue (we did see a return of the odd 'ADM Checks Failing' error, which had been absent for about 1 month, but it immediately went away and didn't come back when we performed a Go Home operation).  FARO monuments F-CS026 through F-CS035, inclusive, were used to align the FARO to the LHO LVEA local coordinate system; the 2 standard deviation device position uncertainty after this alignment was 0.016 mm (PolyWorks does 100 Monte Carlo simulations of the device position).  This complete, we started measuring.

First, as a quick test of the alignment we took a look at old IAS monument LV24.  This monument was used to align the WHAM2 ISI during aLIGO install, and its nominal X,Y coordinates are [-20122.0, -3050.7] mm (there is no z-axis coordinate as we were not setting these in Z back then; a separate set of wall marks was used for z-axis alignment).  The results are shown in the 1st attached picture; again, ignore the z-axis results as I had to enter something for the nominal or PolyWorks wouldn't accept the entry, so I rounded to the closest whole number (this isn't even the surface of the monument, it's the point 2" above it where the SMR was, due to use of the Hubbs Center Punch Nest, which has a 2" vertical offset when using a 1.5" SMR).  Knowing how we had to set these older monuments, since I'm one of the people that set them, I'm not entirely surprised by the X and Y deviations.  The monuments we set for aLIGO install (the LV monuments) were placed w.r.t. a set of monuments used to align iLIGO, which themselves were placed w.r.t. the monuments used to install the vacuum equipment during facility construction (the PSI monuments), which themselves were placed w.r.t. the BTVE monuments that define the interface between the arm beam tubes and the LVEA vacuum equipment -- and we found errors in the BTVE coordinates during our alignment of the FARO in the O4a/b commissioning break in 2024.  Not at all surprising that errors could have stacked up without notice over all of those monuments set off of monuments set off of monuments set off of...  Also, take note of the x-axis coordinate of this monument; this will be important later.

We then set about taking measurements of the passive stack optical table.  To map the bolt holes we measured, we used an XY cartesian basis, assuming the bolt hole in the -X/-Y corner was the origin.  We then incremented the indices by bolt hole count (not distance), following the same XY axis layout used for the IFO.  Using this scheme the bolt holes for the table corners were marked as:

We were able to get measurements for bolt holes (0,0), (14,0), and (25,0).  We were in the process of measuring bolt hole (36,0) (the +X/-Y corner bolt hole) when the FARO's climate sensor died.

To get the coordinates for the bolt holes I used the .EASM file for WHAM1 with the passive stack configuration located at D0901821-v4.  From the assembly, using eDrawings, I was able to get coordinates w.r.t. the chamber origin for the bolt holes we measured.  Those were then added to the coordinates for the WHAM1 chamber, in the LVEA local coordinate system, to get nominal coordinates for the bolt holes.  I also had to add 25.4 mm to the z-axis coordinates to account for the 1" offset of the nest we were using for the SMR; the center of the SMR sits 1" above the point being measured, so I needed to manually add that offset to the nominal z-axis coordinate of the bolt hole.  For reference, according to D0901821 the global coordinates for WHAM1 are [-22692.0, 0.0, 0.0] mm; when converted to the LVEA local coordinate system (removing the 619.5 µrad downward tilt of the X-arm) this becomes [-22692.0, 0.0, +14.1].  The measurement results are shown in the 2nd attached picture.  Notice those x-axis deviations?  Remember the measurement we made of LV24?  Clearly the FARO alignment is not 37 mm off, as the measurement of LV24 showed, so something is definitely up with the x-axis coordinate of the WHAM1 chamber (error in chamber placement?  aLIGO WHAM1 is the iLIGO WHAM2 chamber, moved from its old location next to WHAM3).
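The global-to-local conversion described above (removing the 619.5 µrad downward X-arm tilt) and the SMR nest offset can be sanity-checked numerically; a minimal sketch with our own helper names, not any PolyWorks or IAS tooling:

```python
XARM_TILT = 619.5e-6    # downward tilt of the X arm [rad]
SMR_NEST_OFFSET = 25.4  # 1" nest: the SMR center sits 1" above the bolt hole [mm]

def global_to_local_z(x_global_mm, z_global_mm):
    """Remove the X-arm tilt to go from global to LVEA-local z
    (small-angle approximation: z_local ~= z_global - x * tilt)."""
    return z_global_mm - x_global_mm * XARM_TILT

# WHAM1 chamber origin, global coordinates [-22692.0, 0.0, 0.0] mm:
z_local = global_to_local_z(-22692.0, 0.0)
print(f"z_local = {z_local:+.1f} mm")  # z_local = +14.1 mm

# Nominal z for an SMR sitting on a bolt hole at local height z_hole:
#   z_smr = z_hole + SMR_NEST_OFFSET
```

The +14.1 mm result reproduces the local z coordinate quoted above for the WHAM1 chamber origin.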

Results

We can do some analysis of the numbers we have, although limited since we only have 3 points in a line.  This really only applies to the furthest -Y line of bolt holes on the table, since we weren't able to get measurements of the +Y side to get a more full picture of where the table is sitting, but it's something.  Position tolerances at install in 2012 were +/-3.0 mm in all axes.

I do want to note that D0901821-v4 claims the table surface should be -187.8 mm in LVEA local coordinates (-201.9 mm in global), but this is not the number we used when installing the passive stack in 2012.  In 2012 we used -185.9 mm local (-200.0 mm global), as can be seen in D0901821-v2.  To compare our measurements to the install numbers I changed the nominal z-axis coordinate to match that of our install target (-185.9 + 25.4 mm SMR offset = -160.5 mm) and the results are shown in the final attached picture.

Wall Scale Registered to Current Table Surface Elevation

To finish, we set a scale on the -Y wall directly Crane East of the WHAM1 chamber and registered it to the current elevation of the passive stack's optical table.  To do this we used a scale provided by Jim (the scale was in inches, with 0.01" tick marks) and an autolevel.  We set the autolevel at a fixed elevation on the -Y side of the chamber.  The scale was then placed at each corner of the optical table, starting with the -X/+Y corner, and the autolevel was used to sight the scale; only the scale was moved, the autolevel was fixed (rotated only to follow the scale, but not moved otherwise).  We then averaged the 4 scale readings to get the table elevation, set the autolevel to this reading with the scale back at our starting point (we actually didn't have to move it, thankfully), and then set a scale on the wall using the autolevel.  The 4 scale readings:

The average of the 4 readings is 5.9", and since the autolevel was already sighting 5.9" on our starting point at the -X/+Y corner we left it there.  This may seem high, but we had to have the autolevel high enough that we could see over the various components mounted to the table surface.  We then turned the autolevel and set a scale on the wall.  This scale was in mm (since that's what we had), but this worked out OK.  5.9" is ~149.9 mm (149.86 mm to be exact), so we set the wall scale so it read ~149.9 mm when sighted through the autolevel.  So a 150.0 mm reading on this scale (sighting the side with the 0.5 mm tick marks) is ~150.0 mm above the current position of the passive stack's optical table.
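The inch-to-millimeter bookkeeping above is trivial but worth pinning down; a one-function sketch (names are ours):

```python
INCH_TO_MM = 25.4

def to_mm(inches):
    """Convert an imperial scale reading to millimeters."""
    return inches * INCH_TO_MM

avg_reading = 5.9  # inches: average of the four table-corner sightings
print(f"{to_mm(avg_reading):.2f} mm")  # 149.86 mm, i.e. ~149.9 mm
```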

This closes LHO WP 12442.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 17:03, Wednesday 09 April 2025 (83842)

TJ O'Hanlon informed me via email that there indeed was an error in the x-axis coordinate at both LHO and LLO, due to the thickness of the septum between HAM1 and HAM2 not being taken into account, which had not been propagated to all of the SYS mechanical layout drawings (and some of the CAD files as well).  I had completely forgotten about this, and it explains why we had moved the WHAM1 passive stack monument LV25 further in the -X direction some time back in 2012; the first attached picture shows this (the clear cut-out next to the existing monument was the old position of LV25 before we moved it).  I went spelunking through my old 2012 emails to find some communication about this, but all I could find was an email chain re: LLO setting the LHAM6 support tubes and not being able to get them in the proper y-axis position.  Dennis replied that this was due to the septum thickness and would apply to HAM1 and HAM6 at both sites, and that he would update E1200625 with the correct coordinates for all involved chambers.  From E1200625 the x-axis coordinate of WHAM1 should be -22726.7 mm, so I have updated the PolyWorks project with this new, correct coordinate; this is shown in the 2nd attached picture.

From this I can now say that the -Y row of holes on the WHAM1 passive stack's optical table are ~2.56 mm too far in the -X direction.  If we were to use the FARO to survey monument LV25 my guess is that would explain the 2.5 mm error, seeing as how nearby LV24 was also ~2.0 mm too far in -X direction.  As stated in the main alog this difference doesn't exactly surprise me given the "monuments placed off of monuments placed off of monuments" situation we have here.  The FARO was aligned to our X and Y axes using monuments PSI-1, PSI-2, PSI-6, and BTVE-1, so any error between these 1st and 2nd generation monuments and the 4th generation LV monuments will be measured by the FARO.

While I was at it I went ahead and applied the required transform for local to global coordinates.  This is done by creating a new coordinate system and applying the requisite tilt of both the X and Y axes.  The tilt must be entered in degrees and for the opposite axis.  This is because our, for example, y-axis tilt angle w.r.t. local gravity is a rotation of the x-axis.  Since PolyWorks works off of axis rotation, we enter the y-axis angle as an x-axis tilt (same for the x-axis angle).  To get PolyWorks to correctly calculate the transform matrix both values should be entered as positive numbers (I'm not entirely sure why).  The values to enter:

  • X-axis rotation: 0.0000125 rad (12.5 µrad) -> 0.0007162°
  • Y-axis rotation: 0.0006195 rad (619.5 µrad) -> 0.0354947°

The calculated transform matrix is shown in the 4th attached picture, which properly matches Table 10 in T980044 (note, the numbers in the transform matrix are in radians, even though I had to enter the rotations in degrees).  To confirm this was correct I manually calculated the correct global z-axis coordinate using the formula in Section 2.3 of T0900340 for each bolt hole; the results were the same between my calculation and PolyWorks'.  The final picture shows the bolt hole survey in the LHO global coordinate frame.
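The PolyWorks transform can be reproduced independently; a minimal numpy sketch building the two axis rotations (the composition order and the intrinsic-rotation convention are our assumptions -- compare against Table 10 of T980044 for signs):

```python
import numpy as np

def rot_x(a):
    """Right-handed rotation about the x axis by angle a [rad]."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    """Right-handed rotation about the y axis by angle b [rad]."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Tilts as entered into PolyWorks (in radians here; PolyWorks wants degrees):
ax = 0.0000125  # entered as the "X-axis rotation" (the y-axis tilt angle)
ay = 0.0006195  # entered as the "Y-axis rotation" (the x-axis tilt angle)

R = rot_y(ay) @ rot_x(ax)  # composition order is an assumption
print(f"{np.degrees(ax):.7f} deg, {np.degrees(ay):.7f} deg")
# 0.0007162 deg, 0.0354947 deg -- matching the converted values above
```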

Images attached to this comment
H1 SUS
oli.patane@LIGO.ORG - posted 14:23, Tuesday 08 April 2025 - last comment - 15:32, Tuesday 08 April 2025(83818)
SR3 M1 SUS comparison between all DOFs

Jeff asked me to plot a comparison for SR3 M1 between all degrees of freedom, comparing in-vacuum versus in-air. I've plotted the last two measurements taken for SR3 from last August at the end of the OFI vent. One measurement was taken in air, and the other was taken in vacuum. The pressure for the in-vacuum measurement wasn't all the way down to our nominal, but as Jeff said in his alog at the time when we were running these measurements: "most of the molecules are out of the chamber that would contribute to buoyancy, so the SUS are at the position they will be in observing-level vacuum" (79513).

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:32, Tuesday 08 April 2025 (83819)CSWG, SEI, SYS
Calling out the "interesting" off-diagonal elements:
                 D R I V E   D O F
           L     T     V     R     P     Y

      L    --    nc    nc    meh   eand  YI
R
E     T    nc    --    YI    eand  nc    meh
S
P     V    meh   YI    --    meh   nc    YI

D     R    VI    esVI  VI    --    YI    VI
O
F     P    esVI  VI    YI    meh   --    YI

      Y    YI    nc    nc    nc    nc    --

Here's the legend to the matrix, in order of "interesting":
  VI = Very Interesting (and unmodeled); very different between vac and air.
esVI = Modeled, but Still Very Interesting; very different between vac and air
  YI = Yes, Interesting. DC response magnitude is a bit different between vac and air, but not by much and all the resonances show up at roughly the same magnitude.
 meh = The resonant structure is different in magnitude, but probably just a difference in measurement coherence
eand = The cross coupling is expected, and not different between air and vac.
  nc = Not Changed (and unmodeled). The cross-coupling is there, but it doesn't change from air to vac.
I've bolded everything above "meh" to help guide the eye.

Recapping in a different way, because the plots are merged in a really funky order,
  VI = L to R (pg 14), 
       T to P (pg 22),
       Y to R (pg 33)

esVI = T to R (pg 16)
       L to P (pg 20)

  YI = L to Y (pg 28), Y to L (pg 27),
       T to V (pg 12), V to T (pg 11),  
       V to P (pg 24),
       P to R (pg 25), 
       Y to V (pg 31),
       Y to P (pg 35)


What a mess! 
- The matrix of interesting changes is NOT symmetric across the diagonal.
- The matrix has unmodeled cross-coupling that *changes* between air and vac.
- For the elements that are supposed to be there (like L to P / P to L and T to R / R to T), the cross coupling is different between air and vacuum.
- For some elements, the cross-coupling is *dramatically worse* at *vac* than it is at air.

Why is there yaw to roll coupling, and why is it changing between air and vacuum??

There's clearly more going on here than just OSEM sensor miscalibration that the Stanford team found with PR3 data in LHO:83605. These measurements are a mere 8 days apart!

The plan *was* to use SR3 as a proxy during the vent to test out the OSEM estimator algorithm they were using to improve yaw, but ... with this much different between air and vac, I'm not so sure the in-air SR3 measurements can inform an estimator to be used at vacuum.
H1 General (EPO)
corey.gray@LIGO.ORG - posted 13:04, Tuesday 08 April 2025 - last comment - 13:21, Tuesday 08 April 2025(83790)
HAM1 BEFORE SEI De-install / ISI Install Photos + Contamination Control Tasks

HAM1 Before Photos:  (HAM1 chamber open just under 90min for this activity)

This morning before the deinstall activities began, I took the opportunity to photo-document the HAM1 optical layout.  Keita requested I take photos to record the layout w.r.t. the iLIGO bolt pattern, because rough alignment of optical components on the new SEI ISI for HAM1 will be done utilizing the bolt patterns of the optics tables; so I took a few more photos than normal (top view and angled, with a focus on the REFL path).  Took large photos with the Canon 60D DSLR camera as well as my camera phone.  

The photos are being populated in this Google Drive folder:  https://drive.google.com/drive/folders/1yDKp7aByA_TYJ12c8j8BnZM_pd1Q2DBZ?usp=sharing

Each photo is named referencing an updated layout Camilla Compton made which labels all beam dumps, but I also had to use an older layout to preserve naming since the layout on HAM1 currently looks like D1000313-v16 (which is also referenced for naming the photos).

The above folder has the Canon photos, and I'll be adding the camera phone images next.

Contamination Control Notes:

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:21, Tuesday 08 April 2025 (83816)ISC, SEI, SUS, SYS
Tagging ISC, SUS, SYS, and SEI. Rest in power HAM1 Stack!
H1 SUS (CDS, SYS)
jeffrey.kissel@LIGO.ORG - posted 15:03, Monday 07 April 2025 - last comment - 13:20, Tuesday 08 April 2025(83787)
Recovery from 2025-04-06 Power Outage: +18V DC Power Supply to SUS-C5 ITMY/ITMX/BS Rack Trips, ITMY PUM OSEM SatAmp Fails; Replaced Both +/-18 V Power Supplies and Replaced ITMY PUM OSEM SatAmp
J. Kissel, R. McCarthy, M. Pirello, O. Patane, D. Barker, B. Weaver
2025-04-06 Power outage: LHO:83753

Among the things that did not recover nicely from the 2025-04-06 power outage was the +18V DC power supply to the SUS ITMY / ITMX / BS rack, SUS-C5. The power supply lives in VDC-C1 U23-U21 (left-hand side if staring at the rack from the front); see D2300167. More details to come, but we replaced both +/-18V power supplies, and the SUS ITMY PUM OSEM satamp did not survive the power-up, so we replaced that too.

Took out 
    +18V Power Supply S1300278
    -18V Power Supply S1300295
    ITMY PUM SatAmp S1100122

Replaced with
    +18V Power Supply S1201919
    -18V Power Supply S1201915
    ITMY PUM SatAmp S1000227
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:20, Tuesday 08 April 2025 (83810)CDS, SUS
And now... the rest of the story.

Upon recovery of the suspensions yesterday, we noticed that all the top-mass OSEM sensor values for ITMX, ITMY, and BS were low, *all* scattered from +2000 to +6000 [cts]. They should typically be sitting at ~half the ADC range, or ~15000 [cts]; see the ~5 day trend of the top mass (main chain, M0) OSEMs for H1SUSBS M1, H1SUSITMX M0, and H1SUSITMY M0. The trends are labeled with all that has happened in the past 5 days. The corner was vented on Apr 4 / Friday, so that changes the physical position of the suspensions and the OSEMs see it. At the power outage on Apr 6, you can see a much different, much more drastic change. 

Investigations are rapid-fire during these power outages, with ideas and guesses for what's wrong flying everywhere. The one that ended up bearing fruit was Dave's: he mentioned that it looked like "they've lost a [+/-18V differential voltage] rail or something" -- thinking of the old 2011 problem LLO:1857 where 
   - There's a SCSI cable that connects the SCSI ports of a given AA chassis to the SCSI port of the corresponding ADC adapter card on the back of any IO chassis
   - The ADC adapter card's port has very small male pins that can be easily bent if one's not careful during the connection of the cable.
   - Sometimes, these male pins get bent in such a way that the (rather sharp) pin stabs into the plastic of the connector, rather than into the conductive socket of the cable. Thus, (typically) one leg of one differential channel is floating, and this manifests digitally as an *exact* -4300 ct (negative 4300 ct) offset that is stable and not noisy. 
   - (As a side note, this issue was insidious: once one male pin on the ADC adapter card was bent and mashed into the SCSI cable, that *SCSI* cable was now molded to the *bent* pin, and plugging it into *other* adapter cards would bend previously unbent pins, *propagating* the problem.) 

Obviously this wasn't happening to *all* the OSEMs on three suspensions without anyone touching any cables, but it gave us enough clue to go out to the racks.
Another major clue -- the signal processing electronics for ITMX, ITMY and BS are all in the same rack -- SUS-C5 in the CER.
Upon visiting the racks, we found, indeed, that all the chassis in SUS-C5 -- the coil drivers, TOP (D1001782), UIM (D0902668) and PUM (D0902668) -- had their "-15 V" power supply indicator light OFF; see FRONT and BACK pictures of SUS-C5.

Remember several quirks of the system that help us realize what's happened (and looking at the last page of ITM/BS wiring diagram, D1100022 as your visual aide):
(1) For aLIGO "UK" suspensions -- the OSEM *sensors'* PD satellite amplifiers (sat amps, which live out in the LVEA field racks within the biergarten) are powered by the coil drivers to which their OSEM *coil actuators* are connected.
So, when the SUS-C5 coil drivers lost a differential power rail, that made both the coils and the sensors of the OSEMs behave strangely (as typical with LIGO differential electronics: not "completely off," just "what the heck is that?"). 
(2) Just as an extra fun gotcha, all of the UK coil drivers back panels are *labeled incorrectly* so that the +15V supply voltage indicator LED is labeled "-15" and the -15V supply is labeled "+15".
So, this is why the obviously positive 18V coming from the rack's power rail is off, but the "+15" indicator light is on and happy.  #facepalm
(3) The AA Chassis and Binary IO for these SUS live in the adjacent SUS-C6 rack; it's + and - 18V DC power supply (separate and different from the supplies for the SUS-C5 rack) came up fine without any over-current trip. Similarly the IO chassis, which *do* live in SUS-C5, are powered by a separate single-leg +24V from another DC power supply, also coming up fine without over-current trip.
So, we had a totally normal digital readback of the odd electronics behavior.
(4) Also note, at this point, we had not yet untripped the Independent Software Watch Dog, and the QUAD's Hardware Watchdog had completely tripped. 
So, if you "turn on the damping loops" it looks like nothing's wrong. At first glance, it might *look* like there's drive going out to the suspensions, because you see live and moving MASTER_OUT channels and USER MODEL DAC output, missing that there's no IOP MODEL DAC output. And it might *look* like the suspensions are moving as a result, because there are some non-zero signals coming in on the OSEMINF banks and they're moving around; so the damping loops do what they do and blindly take this sensor signal, filter it per normal, and send a control signal out.
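The gotcha described above, that the USER-model DAC output can look live while nothing actually reaches the hardware, can be sketched as a simple check. This is a hypothetical helper for illustration, not real CDS code:

```python
def drive_reaches_hardware(user_model_dac_out, iop_model_dac_out, tol=1e-6):
    """Sketch (hypothetical helper, not CDS code) of the sanity check above:
    a live USER-model DAC output with a dead IOP-model DAC output means the
    drive is being computed but never reaches the DAC, e.g. because a
    hardware or software watchdog is still tripped."""
    user_live = max(abs(x) for x in user_model_dac_out) > tol
    iop_live = max(abs(x) for x in iop_model_dac_out) > tol
    if user_live and not iop_live:
        return False  # looks like drive, but nothing reaches the suspension
    return iop_live

# MASTER_OUT moving, IOP DAC output flatlined at zero -> watchdog likely tripped:
print(drive_reaches_hardware([120.0, -85.0], [0.0, 0.0]))  # False
```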

Oi.

So, anyways, back to the racks -- while *I* got distracted inventorying *all* the racks to see what else failed, and mapping all the blinking lights in *all* the DC power supplies (which, I learned, are a red herring) -- Richard flipped on the +18V power supply in VDC-C1 U23, identifying quickly that it had over-current-tripped when the site regained power.
See the "before" picture of VDC-C1 U23 for what it looks like tripped -- the "left" (in this "front of the rack" view) power supply's power switch on the lower left is in the OFF position, and voltage and current read zero.

Turning the +18V power supply on *briefly* restored *all* OSEM readbacks, for a few minutes.
And then the same supply, VDC-C1 U23, over-current tripped again. 
So Richard and I turned off all the coil drivers in SUS-R5 via their rocker switches, turned on the VDC-C1 U23 left +18V power supply again, then one-by-one powered on the coil drivers in SUS-C5 with Richard watching the current draw on the VDC-C1 U23 power supply.

Interesting for later: when we turned on the ITMY PUM driver, he shouted down "whup! Saw that one!"
With this slow turn on, the power supply did not trip and power to the SUS-R5 held, so we left it ...for a while.
Richard and I identified that this rack's +18V and -18V power supplies had *not* yet had their fans upgraded per IIET:33728.
Given that it was functioning again, and having other fish to fry, we elected not to replace the power supplies *yet*.

Then ~10-15 minutes later, the same supply, VDC-C1 U23, over-current tripped again, again.
So, Marc and I went forward with replacing the power supplies.
Before replacement, with the power to all the SUS-C5 rack's coil drivers off again, we measured the output voltage of both supplies via DVM: +19.35 and -18.7 [V_DC].
Then we turned off both former power supplies and swapped in the replacements (see serial numbers quoted in the main aLOG); see "after" picture.

Not knowing better, we set the supplies to output a symmetric +/-18.71 [V_DC] as measured by DVM. 
Upon initial power turn-on with no SUS-R5 coil drivers on, we measured the voltage from an unused 3W3 power spigot of the SUS-R5 +/-18 V power rail and found a balanced +/-18.6 [V_DC].
As Richard and I had done earlier, I individually turned on each coil driver at SUS-C5 while Marc watched the current draw at the VDC-C1 rack.
Again, once we got to the ITMY PUM driver we saw a large jump in current draw. (This is one of the "important later" items.)
I remeasured the SUS-R5 power rail, and the voltage on positive leg had dropped to +18.06 [V_DC].
So, we slowly increased the requested voltage from the power supply to achieve +18.5 [V_DC] again at the SUS-R5 power rail. 
This required 19.34 [V_DC] at the power supply.
Welp -- I guess whoever had set the +18V power supply to +19.35 [V_DC] some time in the past had come across this issue before.
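A back-of-the-envelope for why the supply had to sit ~0.8 V above the rail target: the difference is an ohmic drop across the cabling/distribution under load. The aLOG doesn't quote the current draw, so the 10 A below is purely an assumed, illustrative number:

```python
# Back-of-the-envelope for the supply-vs-rack voltage difference above.
v_supply = 19.34   # [V] set at the VDC-C1 U23 power supply
v_rack   = 18.50   # [V] target measured at the SUS-R5 power rail
i_load   = 10.0    # [A] ASSUMED load current, for illustration only

v_drop = v_supply - v_rack        # ohmic drop across cabling/distribution
r_cable = v_drop / i_load         # implied series resistance, R = dV / I
print(f"drop = {v_drop:.2f} V, implied series resistance = {r_cable:.3f} ohm")
```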

Finishing up at the supplies, we restored power to / turned on all the remaining coil drivers and watched it for another bit. 
No more over-current trips. 
GOOD! 

... but we're not done!

... upon returning to the ITMY MEDM overview screen on a CDS laptop still standing by the rack, we saw the "ROCKER SWITCH DEATH" or "COIL DRIVER DEATH" warning lights randomly and quickly flashing around *both* the L1 UIM and the L2 PUM COILOUTFs. Oli reported the same thing from the control room. However, both of those coil drivers' power rail lights looked fine and the rocker switches had not tripped. I reminded myself that these indicator lights actually watch the OSEM sensor readbacks: if the sensors are within some small threshold around zero, then the warning light flashes. This was meant as a crude remote indicator of whether the coil driver itself had over-current tripped because, again, the sensors are powered by the coil driver, so if the sensors read zero there's a good chance the coil driver is off.
But in this case we're staring at the coil driver and it reports good health and no rocker switch over-current trip.
However, we saw the L2 PUM OSEMs rapidly glitching between a "normal signal" of ~15000 [cts] and a "noisy zero" around 0 [ct] -- hence the red, erratic (and red herring) warning lights.
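The warning-light logic described above can be sketched in a couple of lines. The threshold value here is an assumption for illustration, not the real MEDM screen's number:

```python
def coil_driver_warning(osem_readbacks, zero_band=200.0):
    """Sketch of the MEDM warning-light logic as described above (the
    zero_band threshold is an assumed value, not the real screen's):
    flash the warning if any OSEM sensor readback sits near zero, since
    dead sensors usually mean a dead (unpowered) coil driver."""
    return any(abs(x) < zero_band for x in osem_readbacks)

print(coil_driver_warning([15000, 14800, 15200, 15100]))  # False: healthy
print(coil_driver_warning([15000, 30, 15200, 15100]))     # True: one "noisy zero"
```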

Richard's instincts were "maybe the sat amp has gone into oscillations," a la 2015's problem solved by an ECR (see IIET:4628), and he suggested power cycling the sat amp. 
Of course, these UK satamps are another design without a power switch, so a "power cycle" means disconnecting and reconnecting, at the satamp, the cabling to/from the coil driver that powers it. 
So, Marc and I headed out to SUS-R5 in the biergarten, and found that only the ITMY PUM satamp had all 4 channels' fault lights on and red. See FAULT picture.
Powering off / powering on (unplugging, replugging) the sat amp did not resolve the fault lights nor the signal glitching.
We replaced the sat amp with an in-hand spare; the fault lights did NOT light up and the signals looked excellent. No noise, and the DC values were restored to their pre-power-outage values. See OK picture.

So, we're not sure what the failure mode for this satamp *really* was, but (a) we suspect it was a victim of the current surges and unequal power rails over the course of re-powering the SUS-C5 rack, which contains the ITMY PUM coil driver that drew a lot of current upon power-up, and which powers this sat amp (this is the other of the "important later" items); and (b) we had a spare and it works, so we've moved on, with a post-mortem to come later. 

So -- for all that -- the short answer summary is as the main aLOG says:
- The VDC-C1 U23 "left" +18V DC power supply for the SUS-R5 rack (and for specifically the ITMX, ITMY, and BS coil drivers) over-current tripped several times over the course of power restoration, leading us to
- Replace both +18V and -18V power supplies that were already stressed and planned to be swapped in the fullness of time, and 
- We swapped a sat-amp that did not survive the current surges and unequal power rail turn-ons of the power outage recovery and subsequent investigations.

Oi!
Images attached to this comment
H1 CDS (CDS, ISC, SYS)
jeffrey.kissel@LIGO.ORG - posted 13:46, Monday 07 April 2025 (83777)
Recovery from 2025-04-06 Power Outage: SQZ Timing Comparator Lost Uplink -- Power Cycle Fixes It
J. Kissel, M. Pirello
2025-04-06 Power outage: LHO:83753

Among the things that did not recover nicely from the 2025-04-06 power outage was the Timing Comparator D1001370 that lives in ISC-C2 U40 (see component C261 on pg 3 of D1900511-v9). The symptom was that its time-synchronizing FPGA was caught in a bad state, and the timing fanout in the CER Beckhoff status for the comparator was reporting that H1:SYS-TIMING_C_FO_A_PORT_13_NODE_UPLINKUP was in error (a channel value of zero instead of one). 

We didn't know any of this at the start of the investigation.
At the time of investigation start, we only knew of an error by following through the automatically generated "SYS" screens (see attached guide),
   SITEMAP > SYS > Timing > Corner A Button [which had a red status light] 
   > TIMING C_FO_A screen Port 13, dynamically marked as a "C" for comparator [whose number was red, and the status light was red] 
   > Hitting the "C" opens the subscreen for TIMING C_FO_A NODE 13, which shows that red "Uplink Down" message in the middle right
The screenshot shows the NODE 13 screen both in its "now" fixed, green state and in a time-machined "broken" version.

Going out to the CER, we found that the status light for Digital Port 13 == Analog Port 14 on the timing fanout (D080534; ISC-C3 U11) was blinking. 
Marc tried moving the comms cable to analog port 16, because "sometimes these things have bad ports." That didn't work, so we moved it back to analog port 14.

That port's comms fiber cable was not labeled, so we followed it physically to find its connection to the SQZ timing comparator (again in ISC-C2 U40, thankfully "right next door"), finding its "up" status light also blinking.
Marc suggested that the comparators may lose sync, so we power cycled it. This chassis doesn't have a power switch, so we simply disconnected and reconnected its +/-18 V power cable.
After waiting ~2 minutes, all status lights turned green.

#FIXEDIT
Images attached to this report
H1 SEI (ISC, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 14:32, Thursday 20 March 2025 - last comment - 11:28, Monday 24 March 2025(83470)
Current Performance of H1 ISI BS Projected to the SusPoint of H1 SUS BS
J. Kissel

Oli and I are beginning the process for designing damping loops for the A+ O5 BBSS. We're running through the same process that I've been running through for over a decade designing suspension damping loops, in which I build up a noise budget for the optic displacement in all DOFs using input noises for seismic noise, DAC noise, and OSEM sensor noise filtered through said damping loops, all propagated thru the matlab dynamical model of the suspension.
 
The first step along that journey is revisiting all the input noise sources, and making sure we have good models for those. 
OSEM noise and DAC noise models have recently been validated and updated when I revisited the HLTS damping loop design (see LHO:65687).
However, I haven't worked on damping loops for suspensions suspended from a BSC-ISI since 2013, see G1300537 for the QUAD, G1300561 for the BSFM, and G1300621 for the TMTS.
In those, I used the 2015 update to the 2005 requirement curve from T1500122 as the input motion.
Now, after a decade worth of commissioning and improvements, I figure it's time to show that work here and use it in modeling future SUS damping loops where the SUS is mounted from a BSC-ISI.

One of the biggest things we've learned over the decades is that the seismic noise input to the suspension at its "Suspension Point" motion for a given suspension can be the (quadrature) sum of many of the ISI's cartesian degrees of freedom, and depends on where and in what orientation it is on the optical table (see T1100617). As such, we installed front-end infrastructure to calibrate the lowest-stage sensors -- the GS13 inertial sensors -- into both the Cartesian and Euler bases (see E1600028). In this aLOG, I do as I did for the HAM-ISI (LHO:65639): I show the Cartesian contributions to each of the Beam Splitter's SUS point motion, by multiplying the Cartesian channels by the coefficients in the CART2EUL matrix for the beam splitter.

The data set I used for this performance of the H1 ISI BS was a 0.01 Hz binwidth (128 sec FFT), 10-average, 50% overlap set starting at 2025-03-19 14:00 UTC.
    - This was a late night local set, with no wind and 0.1 [um_BLRMS] level microseism (between 0.1-0.3 Hz)
    - GND to ST1 Sensor correction is ON, including the DIFF and COMM inputs.
        - Here at H1, the corner station does NOT have beam rotation sensors to improve the GND T240 sensor correction signal. But, both end stations have a BRS.
        - The wind was low at this measurement time, but it's worth saying that each end station's wind fences are in disrepair at the moment, to be fixed soon.
    - ST1 Z drive to ST1 RZ T240 decoupling is ON with a "pele_rz" filter
    - Off diagonal ST1 dispalign matrices are in play, 
         X to RX & RY = -1e-4 & 1e-4, 
         Y to RX = -7e-4, 
         Z to RX & RY = 3.5e-3 & 2.5e-3 
    - ST1 Blend Filters:
        - X & Y = nol4cQuite_250
        - Z = 45mHz_cps
        - RX & RY = Quite_250_cps
        - RZ = nol4cQuite_250.
    - As far as I can tell, there's NO ST1-to-ST2 sensor correction on the ST2 CPS, nor is there any ST1-to-ST2 FF to the ST2 actuators.
    - ST2 Blend Filters:
        - X & Y = 250mhz
        - Z = 250mhz
        - RX & RY = tilt_800b
        - RZ = 250mhz
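As a quick sanity check of the spectral settings quoted at the top of this configuration list, here is the arithmetic relating FFT length, bin width, and total data span (the sample rate is an assumed, illustrative value; note a 128 s FFT strictly gives a 1/128 Hz ~ 0.0078 Hz bin, quoted loosely as 0.01 Hz):

```python
# Arithmetic behind the quoted spectral settings: 128 s FFT segments,
# 10 averages, 50% overlap.
fs = 512.0                 # [Hz] ASSUMED sample rate, for illustration only
t_fft = 128.0              # [s] FFT segment length
n_avg = 10
overlap = 0.5

binwidth = 1.0 / t_fft                               # [Hz] frequency resolution
nperseg = int(fs * t_fft)                            # samples per FFT segment
span = t_fft * (1 + (n_avg - 1) * (1 - overlap))     # [s] total data needed
print(f"bin width = {binwidth:.4f} Hz, segment = {nperseg} samples, "
      f"span = {span:.0f} s")
```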

These will be used to make updates to 
    /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/
        seisBSC.m or 
        seisBSC2.m
which are toy models of the BSC-ISI performance, used so you don't have to carry around some giant .mat file of performance, and so you can pre-interpolate onto an arbitrary frequency vector, much like I did for seisHAM.m in CSWG:11236.

I've committed the .xmls and .pngs in the following SeiSVN directory:
/ligo/svncommon/SeiSVN/seismic/BSC-ISI/H1/BS/Data/Spectra/Isolated/ASD_20250319/

Images attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 17:01, Thursday 20 March 2025 (83473)

Dear Oli,

It may be useful to remember that when Jeff says that the "input to the suspension at its "Suspension Point" motion for a given suspension can be the (quadrature) sum of many of the ISI's cartesian degrees of freedom" - what he means is that, if you want to make a statistical model (which you do), and if the DOFs are independent (which maybe they are, and maybe they are not), then using the quadrature sum of the ASDs is a reasonable thing to do. In fact, the SUSpoint in reality, and the calculation of the SUSpoint, are done with a linear combination, NOT a quadrature sum. This means that if you grab some data from the cart basis sensors, take the ASDs (where you lose the phase), and add them in quadrature, you will NOT get the ASD of the measured suspoint. I think this difference is not going to impact any of your calculations, but maybe it will help you avoid aggravation if you try to do some double checking.

-Brian

jeffrey.kissel@LIGO.ORG - 10:03, Monday 24 March 2025 (83521)
The Cartesian performance ASDs of the ISI BS to be used in the statistical model (in the way that Brian cautions in LHO:83473 above) have been exported to 
    /ligo/svncommon/SeiSVN/seismic/BSC-ISI/H1/BS/Data/Spectra/Isolated/ASD_20250319/
        2025-03-19_1400UTC_H1SUSBS_CART_XYZRXRYRZ_ASD.txt
(in the DOF order mentioned in the filename.)

In the same directory, I also export the ASD of live, projected, coherent linear sum computed by the front-end
        2025-03-19_1400UTC_H1SUSBS_EUL_LTVRPY_ASD.txt
(in the DOF order mentioned in the filename.)

If someone wants to race me, they can use this data and the CART2EUL matrix from the screenshot in LHO:83470; or, if you want it programmatically, use 
    /opt/rtcds/userapps/release/isc/common/projections/
        ISI2SUS_projection_file.mat

and run the following in the matlab command line,
    >> load /opt/rtcds/userapps/release/isc/common/projections/ISI2SUS_projection_file.mat
    >> ISI2SUSprojections.h1.bs.CART2EUL
    ans =
      -0.7071       0.7071      -0.2738            0       0.1572       0.1572
      -0.7071      -0.7071      -0.0173            0      -0.1572       0.1572
            0            0            0            1      -0.2058       0.1814
            0            0            0            0      -0.7071       0.7071
            0            0            0            0      -0.7071      -0.7071
            0            0            1            0            0            0

... but if I win the race, this plot will be a good by-product of the updates to seisBSC.m, which I'll likely post to the CSWG aLOG, like I did for seisHAM.m in CSWG:11236.
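For anyone taking up the race: the quadrature-sum projection that Brian cautions about in LHO:83473 (the statistically reasonable thing to do with ASDs, which have lost their phase, as opposed to the coherent linear combination the front end computes) can be sketched in a few lines of python using the CART2EUL values above. This is an illustrative sketch, not the matlab tooling itself:

```python
import numpy as np

# CART2EUL values for the H1 beam splitter, from LHO:83521 above.
CART2EUL = np.array([
    [-0.7071,  0.7071, -0.2738, 0.0,  0.1572,  0.1572],
    [-0.7071, -0.7071, -0.0173, 0.0, -0.1572,  0.1572],
    [ 0.0,     0.0,     0.0,    1.0, -0.2058,  0.1814],
    [ 0.0,     0.0,     0.0,    0.0, -0.7071,  0.7071],
    [ 0.0,     0.0,     0.0,    0.0, -0.7071, -0.7071],
    [ 0.0,     0.0,     1.0,    0.0,  0.0,     0.0   ],
])

def cart_asd_to_eul_asd(cart_asd):
    """cart_asd: (6, nfreq) array of X,Y,Z,RX,RY,RZ ASDs.
    Returns (6, nfreq) SUS-point Euler-basis ASDs via quadrature sum
    of the squared-coefficient-weighted squared ASDs."""
    return np.sqrt(CART2EUL**2 @ np.asarray(cart_asd)**2)

# e.g. a flat 1e-9 ASD in every Cartesian DOF at one frequency bin:
print(cart_asd_to_eul_asd(np.full((6, 1), 1e-9)).ravel())
```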
jeffrey.kissel@LIGO.ORG - 11:28, Monday 24 March 2025 (83530)
Jim reminds me of the following:
- This BSC-ISI, ISIB2 has been performing poorly since ~2020. For some yet-to-be-identified reason, after years of physical, electronic, and data analysis investigations by Jim -- see IIET:15234 -- his best guess is some sort of mechanical "rubbing," i.e. mechanical interference / shorting of the seismic isolation, typically by cables.
- He points his finger at the H2 corner (use T1000388 to remind yourself of where that is on BSC2).

- You can use the "Network" summary pages (https://ldas-jobs.ligo.caltech.edu/~detchar/summary/) and navigate to "Today" > "SEI" tab > "Summary [X]" or "Summary [Y]" or "Summary [Z]" pages, and look at the bottom row of plots to see how the ISIBS compares against other ISIs at LHO (left plot) and LLO (right plot). Here's a direct link to the plots including 2025-03-19 at 14:00 UTC, with the "SEI Quiet" time restriction mode ON.

- Also, remember that the MICH lock-acquisition drive from the M2 OSEMs on the SUSBS causes back-reaction on the cage, which messes with the ISI controls. Because of that, the ISIBS's isolation state guardian is regularly in the FULLY_ISOLATED_SO_ST2_BOOST state, which leaves the FM8 "Boost_3" off until after the ISC_LOCK guardian requests SEI_BS to FULLY_ISOLATED. Because I took data during nominal low noise, the ISI was fully isolated. However, the summary pages above -- even in SEI Quiet mode -- don't filter on whether the ISI is in FULLY_ISOLATED, so you'll see the ISIBS consistently appearing to perform worse. *This* is not a fair comparison or demonstration that the ISIBS performs worse than the other BSC-ISIs, so take the plots with a big grain of salt.

Also, another configuration note:
- This ISI, like all ISIs at LHO, has its CPS synchronized to the timing system.
H1 PSL (CSWG, ISC, Lockloss, SEI, SQZ, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:37, Friday 07 March 2025 - last comment - 12:06, Saturday 08 March 2025(83230)
PMC Duty Cycle from July 1 2022 to July 1 2024
J. Kissel

I've been looking through the data captured about the PMC in the context of the two years of observatory use between July 2022 and July 2024 where we spanned a few "construction, commission, observe" cycles -- see LHO:83020. Remember the end goal is to answer the following question as quantitatively as possible: "does the PMC have a high enough duty cycle in the construction and commissioning phases that the SPI does *not* need to buy an independent laser?"

Conclusions: 
 - Since the marked change in duty cycle after the PMC lock-loss event on 2023-05-16, the duty-cycle of the PMC has been exceptionally high, either 91% during install/commissioning times or 99% during observing times. 
 - Most of the down time is from infrequent planned maintenance. 
 - Recovery time is *very* quick, unless the site loses power or hardware fails. 
 - The PMC does NOT lose lock when the IFO loses lock. 
 - The PMC does NOT lose lock just because we're vented and/or the IMC is unlocked. 
 - To-date, there are no plans to make any major changes to the PSL during the first one or two O4 to O5 breaks.
So, we shouldn't expect to lose the SPI seed light frequently, or even really at all, during the SPI install or during commissioning. And especially not during observing. 

This argues that we should NOT need an independent laser from a "will there even be light?" / "won't IFO construction / commissioning mean that we'll be losing light all the time?" duty-cycle standpoint.
Only the pathfinder itself, when fully functional with the IFO, will tell us whether we need the independent laser from a "consistent noise performance" standpoint.

Data and Historical Review
To refresh your memory, the major milestones that happened between 2022 and 2024 (derived from a two year look through all aLOGs with the H1 PSL task):
- By Mar 2022, the PSL team had completed the complete table revamp to install the 2x NeoLase high-power amps, and addressed all the down-stream adaptations.

- 2022-07-01 (Fri): The data set study starts.
- 2022-09-06 (Tue): IO/ISC EOM mount updated, LHO:64882
- 2022-11-08 (Tue): Full IFO Commissioning Resumes after Sep 2022 to Dec 2022 vent to make FDS Filter Cavity Functional (see E2000005, "A+ FC By Week" tab)
- 2023-03-02 (Thu): NPRO fails, LHO:67721
- 2023-03-06 (Mon): NPRO and PSL function recovered LHO:67790
- 2023-04-11 (Tue): PSL Beckhoff Updates LHO:68586
- 2023-05-02 (Tue): ISS AOM realignment LHO:69259
- 2023-05-04 (Thu): ISS Second Loop Guardian fix LHO:69334
- 2023-05-09 (Tue): "Something weird happened to the PMC, then it fixed itself" LHO:69447
- 2023-05-16 (Tue): Marked change in PSL PMC duty-cycle; nothing specific the PSL team did with the PMC, but the DC power supplies for the RF & ISC racks were replaced, 69631, while Jason tuned up the FSS path LHO:69637 
- 2023-05-24 : O4, and what we'll later call O4A, starts, we run with 75W requested power from the PSL.
- 2023-06-02 (Fri): PSL ISS AA chassis was replaced, but PMC stays locked through it LHO:70089
- 2023-06-12 (Sun): PMC PDH Locking PD needs threshold adjustment, LHO:70352, for "never found out why" reason FRS:28260
- 2023-06-19 (Mon): PMC PDH Locking PD needs another threshold adjustment, LHO:70586, added to FRS:28260, but again reasons never found.
- 2023-06-21 (Wed): Decision made to reduce requested power into the IFO to 60W LHO:70648
- 2023-07-12 (Wed): Laser Interlock System maintenance kills PSL LHO:71273
- 2023-07-18 (Tue): Routine PMC / FSS tuneup, with quick PMC recovery LHO:71474
- 2023-08-06 (Sun): Site-wide power glitch takes down PSL LHO:72000
- 2023-09-22 (Fri): Site-wide power glitch takes down PSL LHO:73045
- 2023-10-17 (Tue): Routine PMC / FSS tuneup, with quick PMC recovery LHO:73513
- 2023-10-31 (Tue): Jeff does a mode scan sweeping the PMC FSR LHO:73905
- 2023-11-21 (Tue): Routine PMC / FSS tuneup, with quick PMC recovery LHO:74346
- 2024-01-16 : O4A stops, 3 months, focused on HAM567, no PSL work (see E2000005, "Mid-O4 Break 1" tab)
O4A to O4B break lock losses: 7
       2024-01-17 (Wed): Mid-vent, no IFO, no reported cause.
       2024-01-20 (Sat): Mid-vent, no IFO, no reported cause.
       2024-02-02 (Fri): Mid-vent, no IFO, no reported cause.
       2024-02-08 (Thu): Mid-vent, no IFO, no reported cause. During HAM6 close out, may be related to alarm system
       2024-02-27 (Tue): PSL FSS and PMC On-table Alignment LHO:76002.
       2024-02-29 (Thu): PSL Rotation Stage Calibration LHO:76046.
       2024-04-02 (Tue): PSL Beckhoff Upgrade LHO:76879.
- 2024-04-10 : O4 resumes as O4B start
O4B to 2024-07-01 lock losses: 1
       2024-05-28 (Tue): PSL PMC REFL tune-up LHO:78093.
- 2024-07-01 (Mon): The data set study ends.

- 2024-07-02 (Tue): The PMC was swapped just *after* this data set, LHO:78813, LHO:78814

By the numbers

Duty Cycle (uptime in days / total time in days)
     start_to_O4Astart: 0.8053
    O4Astart_to_O4Aend: 0.9450
    O4Aend_to_O4Bstart: 0.9181
       O4Bstart_to_end: 0.9954
(Uptime in days is the sum of the values of H1:PSL-PMC_RELOCK_DAY just before lock losses [boxed in red] in the attached trend for the given time period)

Lock Losses (number of times "days" goes to zero)
     start_to_O4Astart: 80
    O4Astart_to_O4Aend: 22
    O4Aend_to_O4Bstart: 7
       O4Bstart_to_end: 1
(The number of lock losses is merely the count of red boxes for the given time period)

Lock Losses per calendar days
     start_to_O4Astart: 0.2442
    O4Astart_to_O4Aend: 0.0928
    O4Aend_to_O4Bstart: 0.0824
       O4Bstart_to_end: 0.0123
(In an effort to normalize the lock losses over the duration of the time period, to give a fairer assessment.)
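For reference, a sketch of the "lock losses per calendar day" arithmetic. The period boundary dates come from the timeline in this aLOG; small differences from the quoted values come down to exactly where the period boundaries are drawn:

```python
from datetime import date

# Two example periods; boundary dates from the timeline above, lock-loss
# counts from the "Lock Losses" table above.
periods = {
    "start_to_O4Astart": (date(2022, 7, 1),  date(2023, 5, 24), 80),
    "O4Bstart_to_end":   (date(2024, 4, 10), date(2024, 7, 1),   1),
}

for name, (t0, t1, n_ll) in periods.items():
    days = (t1 - t0).days
    print(f"{name}: {days} calendar days, "
          f"{n_ll / days:.4f} lock losses per day")
```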

I also attach a histogram of lock durations for each period, as another way to look at how the duty cycle dramatically changed around the start of O4A.
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:06, Saturday 08 March 2025 (83243)CDS, CSWG, SEI, SUS, SYS
The data used in the above aLOG was gathered by ndscope using the following template,
    /ligo/svncommon/SeiSVN/seismic/Common/SPI/Results/
        alssqzpowr_July2022toJul2024_trend.yaml


and then exported (by ndscope) to the following .mat file,
    /ligo/svncommon/SeiSVN/seismic/Common/SPI/Results/
        alssqzpowr_July2022toJul2024_trend.mat


and then processed with the following script to produce these plots
    /ligo/svncommon/SeiSVN/seismic/Common/SPI/Scripts/
        plotpmcuptime_20250224.m    rev 9866


ndscope has become quite an epically powerful data gathering tool!

H1 General (CDS, SYS, VE)
oli.patane@LIGO.ORG - posted 17:00, Friday 27 December 2024 - last comment - 18:19, Friday 27 December 2024(82016)
Ops Eve Shift Start

TITLE: 12/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 8mph Gusts, 5mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.78 μm/s
QUICK SUMMARY:

Currently relocking; we just lost lock from ENGAGE_ASC_FOR_FULL_IFO. Fil just finished the work reconnecting the ITM ESDs and connecting the HAM6 high voltage relays to the BSC3 interlocks (something like that - see attached). After he did this we tested out the port protection, but it failed due to the fast shutter not being open. It seems like the fast shutter usually reopens when we are going through PRMI/MICH states, so that might have been why it was still closed from our last lockloss, but it doesn't completely make sense because earlier, before Corey ran the test, the shutter had opened on its own while we sat in IDLE. I opened the shutter medm, manually opened the shutter, and reran the test. It failed again and said it was due to the fast shutter not being able to reopen, even though it had correctly closed and reopened it. I toggled the shutter closed and back open, and this time the test ran and came back with OK, so it seems like we are good to go to high power. Jenne called and had me check that we had ESD outputs on ITMX, which we did, so it looks like everything is good to go. When at CHECK_AS_SHUTTERS we did have the LOCKLOSS_SHUTTTER_CHECK alert that we needed to manually check the shutter, so I ran the test script on the AS port protection screen again just to make sure, and then INIT'ed that guardian.

As of Fil completing this work, THE ENTIRE CORNER STATION HIGH VOLTAGE INTERLOCK SYSTEM (except SQZ) IS CONNECTED TO BSC3. Tagging VAC, SYS

Also, I am unable to view the guardian logs - when I open them they say 'Leap second data is expired' and then close. Tagging CDS

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 18:03, Friday 27 December 2024 (82017)

Back to Observing as of 12/28 02:00 UTC!!

michael.landry@LIGO.ORG - 18:19, Friday 27 December 2024 (82018)

Great recovery, congratulations all!!

LHO VE (CDS, ISC, OpsInfo, SYS, VE)
corey.gray@LIGO.ORG - posted 10:40, Friday 27 December 2024 - last comment - 10:04, Wednesday 02 April 2025(82011)
HAM6 Interlock Status After HAM6 Vacuum Gauge Failure Last Night

(Dave Barker, Fil Clara, Jenne Driggers, Corey Gray, Gerardo Moreno, plus those who signed work permit and whom I phoned last night)

Last Night Recap:
Last night at 9:59pm PT, there was an audible alarm for the HAM6 vacuum gauge (PT110).  Gerardo went out to check to see if it was dead.  He phoned Fil and they did a power cycle of the gauge to see if this would improve the situation, but it did not.  After this, H1 was taken to IDLE for the night (OWL shift cancelled).  (We should have taken Observatory Mode to CORRECTIVE MAINTENANCE at this point, but I only remembered to do so this morning.)

Work Permit 12260 generated.  Fil arrived on-site around 9amPT (17utc).

This Morning's Summary of Fil's Work:

The HAM6 Interlock (for operation of the Fast Shutter + PZT) is now using the BSC3 Interlock (i.e., the VAC gauge from BSC3, since it was decided to BYPASS the HAM6 VAC gauge which failed last night [alog 82005]).  This means that the BSC3 Interlock is no longer being used for the ITM ESDs (which sounds like it's fine, because Jenne/Fil said we don't use the ESDs for the ITMs).

Fil noted that he confirmed that the Fast Shutter Chassis in the LVEA is OPERATIONAL.  Fil then left the site after giving me this summary.

As far as a test of this new HAM6 Interlock configuration, I intentionally ran a test of the AS Port Protection system (sitemap / SYS / AS Port Protection....this was my first time doing this).  After clicking the RUN (test) button, after a few seconds I received an "OK" for the (1) Test & (2) Fault.  After this test and the signed-off Work Permit, we are OK to GO for High Power locking.

Currently running an Initial Alignment.  (Note: ISC_LOCK initially went into ERROR [red box] when I selected Initial Alignment; screenshot of the error message attached.  LOAD-ing ISC_LOCK, as suggested in the Log, cleared this up.)

ALSO: 

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 12:09, Friday 27 December 2024 (82013)

Sheila reminded me that we do use ITMs for acquisition, so she's making modifications to the guardian so that we'll use ETMY instead of the ITMs for acquisition.

corey.gray@LIGO.ORG - 10:40, Saturday 28 December 2024 (82031)FRS

FRS:  https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=32955

filiberto.clara@LIGO.ORG - 10:04, Wednesday 02 April 2025 (83700)

HAM6 interlock has been restored. HAM6 gauge is scheduled to be replaced during upcoming vent. See alog 83695.