Search criteria
Section: X1
Task: SEI

Reports until 16:41, Tuesday 01 July 2025
H1 ISC (SEI)
elenna.capote@LIGO.ORG - posted 16:41, Tuesday 01 July 2025 - last comment - 18:24, Tuesday 01 July 2025(85486)
HAM2 taken offline, seems to change IM alignment

Jeff, Sheila, Elenna

Executive summary: Our input alignment has changed. We know this because the alignment onto IM4 trans has changed. This corresponds to a change in the IM1-3 OSEM readbacks, which coincides with HAM2 being taken offline to replace a power supply.

Today, an H1SEI23 power supply was replaced; preceding this activity, HAM2 and HAM3 were taken offline (85475).

When HAM2 came back online, the OSEM readbacks on IMs 1-3 showed a change in both pitch and yaw. The table below summarizes how much each suspension's alignment has changed.

IM    Pitch change (urad)    Yaw change (urad)
IM1   2                      0
IM2   66.8                   2.7
IM3   14.8                   4.3

This shift in alignment is also apparent on IM4 trans, which shows the pitch offset has increased by 0.3 and yaw offset by 0.03 (uncalibrated).
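
For reference, a step like this can be confirmed by trending the IM OSEM damp signals across the HAM2 offline window. Below is a minimal sketch using gwpy; the channel names are my guesses at representative IM witness channels (check them against the DAQ channel list) and the times are placeholders for the maintenance window.

    # Sketch: trend IM2 pitch/yaw witness signals across the HAM2 offline
    # window. Channel names and times are illustrative, not verified.
    from gwpy.timeseries import TimeSeriesDict

    chans = ["H1:SUS-IM2_M1_DAMP_P_IN1_DQ",   # guessed pitch witness
             "H1:SUS-IM2_M1_DAMP_Y_IN1_DQ"]   # guessed yaw witness
    data = TimeSeriesDict.get(chans, "2025-07-01 15:00", "2025-07-01 19:00")
    for name, ts in data.items():
        # difference of the means before/after gives the alignment step
        print(name, ts.value[-1000:].mean() - ts.value[:1000].mean())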

We first noticed something had changed when we ran input alignment: I had to move PR2 by 5 microradians in both pitch and yaw just to get the lock to catch, and then the alignment offload further changed the PR2 sliders.

This change is further backed up by the fact that the POP A offsets have changed, and the PRG at 2 W seems to have decreased slightly.

We're not sure how bad of a problem this is. We have been able to lock and power up with no issues. We had a random unrelated lockloss, so I cannot yet see if the buildups at full power once thermalized have changed significantly.

The locking process is now much slower again, since the alignment offsets for the PRM have changed, and the convergence of the PRM ADS tends to set the rate at which we can engage ASC in full lock.

The first two attachments are screenshots showing the shift in the IM alignment and the IM4 trans alignment. I included the HAM2 ISI guardian state to show that this change corresponds with HAM2 going offline.

The third attachment shows how the POP A offset has changed at full IFO 2 W. We can use this scope to adjust the POP A QPD offset and speed up locking, if we want to keep this input alignment.

The fourth attachment shows the ITMX green camera offset screen in full IFO at 2 W. I was watching this screen to confirm the new ITMX green camera offsets I set earlier today were ok. I think the set value differs from the live value here because of the new input alignment.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 18:15, Tuesday 01 July 2025 (85489)

We sat at full power for 1 hour and had thermalized to a PRG of 49. Trending back to our last lock before maintenance it looks like we achieve about the same PRG in this amount of time, so there is minimal impact on the buildups. There might be a small reduction in the jitter coupling. We should check again when fully thermalized to be sure.

elenna.capote@LIGO.ORG - 18:24, Tuesday 01 July 2025 (85490)

Since we are going to stay this way for at least tonight, I updated the POP A offsets to help shorten ENGAGE ASC FOR FULL IFO. SDF attached.

Images attached to this comment
H1 SUS (SEI, SUS)
edgard.bonilla@LIGO.ORG - posted 17:42, Monday 30 June 2025 - last comment - 14:41, Tuesday 01 July 2025(85446)
Created new OSEM estimator fits for SR3 Yaw

Ivey and Edgard,

We just finished a fit of the Yaw-to-Yaw transfer functions for the OSEM estimator, using the measurements that Oli took for SR3 last Tuesday [see LHO: 85288].

The fits were added to the SUS SVN and live in '~/SusSVN/sus/trunk/HLTS/Common/FilterDesign/Estimator/fits_H1SR3_2025-06-30.mat'. They are already calibrated to work in the filter banks for the estimator and can be installed using 'make_SR3_yaw_model.m', which lives in the same folder [for reference, see LHO: 84041, where Oli got the fits running for a test].

Attached below are two pictures of the fits we made for the estimator.

The first attachment shows the Suspoint Y to M1 DAMP Y fit. We made sure to fit the asymptotic behavior as well as we could, which ends up being 0.95x10^{-3} um/nm (5% lower than expected from the OSEM calibration). The zpk for this fit is

    'zpk([-0.024+20.407i,-0.024-20.407i,-0.044+11.493i,-0.044-11.493i,0,0],[-0.067+21.278i,-0.067-21.278i,-0.095+14.443i,-0.095-14.443i,-0.07+6.405i,-0.07-6.405i],-0.001)'

The second attachment shows the M1 drive Y to M1 DAMP Y fit. We kept the same poles as the other fit, but manually fit the zeros and gain to get a good match. The zpk for this fit is

'zpk([-0.051+8.326i,-0.051-8.326i,-0.011+19.259i,-0.011-19.259i],[-0.067+21.278i,-0.067-21.278i,-0.095+14.443i,-0.095-14.443i,-0.07+6.405i,-0.07-6.405i],12.096)'
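
As a sanity check on numbers like the asymptotic gain quoted above, the zpk strings can be rebuilt outside of MATLAB. Here is a minimal sketch using scipy, assuming the roots are in rad/s as in the MATLAB zpk() convention; note the quoted gain of -0.001 is rounded (the text gives 0.95x10^{-3} from the unrounded fit).

    # Rebuild the Suspoint Y to M1 DAMP Y fit and check its high-frequency
    # asymptote. Not the SUS SVN script; just a plotting/checking sketch.
    import numpy as np
    from scipy import signal

    zeros = [-0.024 + 20.407j, -0.024 - 20.407j,
             -0.044 + 11.493j, -0.044 - 11.493j, 0.0, 0.0]
    poles = [-0.067 + 21.278j, -0.067 - 21.278j,
             -0.095 + 14.443j, -0.095 - 14.443j,
             -0.070 + 6.405j,  -0.070 - 6.405j]
    k = -0.001
    sys = signal.ZerosPolesGain(zeros, poles, k)

    f = np.logspace(-1, 2, 1000)             # 0.1 Hz to 100 Hz
    _, H = signal.freqresp(sys, w=2 * np.pi * f)
    # equal pole/zero count -> response flattens to |k| at high frequency
    print(f"asymptote ~ {abs(H[-1]):.2e}, |k| = {abs(k):.1e}")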

Hopefully Oli and co. will have time to test this soon!

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 14:41, Tuesday 01 July 2025 (85477)

The new filters have been loaded in. Here are the matlab plots for the fits for SUSPOINT_Y_2GAP and for EST_MODL_DRV_Y_2GAP.

Images attached to this comment
H1 SEI (SEI)
ibrahim.abouelfettouh@LIGO.ORG - posted 15:48, Friday 27 June 2025 (85397)
BRS Trends - FAMIS 26456

Closes FAMIS 26456. Last checked in alog 84705

Trends look normal. The SEI BRS maintenance work in mid-May was part of the vent work and is the only excursion past the threshold lines.

Images attached to this report
H1 General (Lockloss, SEI)
ryan.short@LIGO.ORG - posted 20:18, Monday 23 June 2025 (85265)
Lockloss @ 03:03 UTC

Lockloss @ 03:03 UTC after almost 6 hours locked - link to lockloss tool

Several quakes rolling through around this time; hard to say which was the real cause but likely a M5.7 in the Caribbean.

H1 General (ISC, OpsInfo, SEI, SUS)
elenna.capote@LIGO.ORG - posted 15:40, Monday 16 June 2025 - last comment - 17:03, Monday 16 June 2025(85092)
Safe SDF reconciliation

In preparation for the RCG upgrade, we are using the relocking time to reconcile SDF differences in the SAFE file.

Here are some of mine:

I have also determined that the unmonitored channel diffs in the LSC, ASC, and OMC models are guardian controlled values and do not need to be saved.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:29, Monday 16 June 2025 (85094)

Not accepting or reverting the h1sqz, h1ascsqzfc, or slow controls cs_sqz SDFs (attached), as these have the same observe and safe files.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:35, Monday 16 June 2025 (85095)

Accepted H1:TCS-ETMX_RH_SET{LOWER,UPPER}DRIVECURRENT, as ndscope trending shows they are normally at this value.

Images attached to this comment
elenna.capote@LIGO.ORG - 17:00, Monday 16 June 2025 (85103)

Some of these SDFs may have then led to diffs in the OBSERVE state. I have reverted the roll mode tRamp, and accepted the OSC gains in the CAL CS model.

Images attached to this comment
oli.patane@LIGO.ORG - 17:03, Monday 16 June 2025 (85104)

I updated the OPTICALIGN OFFSETs for each suspension where we use those sliders. I tried using my update_sus_safesnap.py script at first, but even though it has worked once in the past, it was not working whenever I tried to run it on more than one suspension at a time (it seems to have only updated one suspension out of each group). I was eventually able to get them all updated anyway. I'm attaching all their SDFs and will be working on fixing the script. Note that a couple of the ETM/TMS values might not match the setpoint exactly, because the screenshots were taken during relocking, after the optics had moved a bit with the WFS.

Images attached to this comment
X1 SEI (ISC, SEI)
jeffrey.kissel@LIGO.ORG - posted 10:15, Monday 16 June 2025 (84825)
Optical Setup and Results of first tuning of SPI In-vac SuK Fiber Collimator Lens Position
J. Kissel

Executive Summary
I've tuned the lens position and measured the projected beam profile of one of three trial fiber collimators to be used for the SPI. The intent is to do this measurement before vs. after a vacuum bake, as was done for previous collimators (Section 3.3 of E1500384), to see whether the bake causes any negative impact on the performance of the collimator. It also helped clear out a few "rookie mistake" bugs in my data analysis, though there's still a bit of inconsequential confusion in post-processing the fit of the beam profile.

Full Context
The SPI intends to use supposedly vacuum compatible Schaefter + Kirchhoff (SuK) fiber collimators (D2500094, 60FC-0-A11-03-Ti) in order to launch its two measurement (MEAS) and reference (REF) beams from their fiber optical patchcords attached to feedthroughs (D2500175) in to free-space and throughout the ISIK transceiver / interferometer (D2400107). However, unlike their (more expensive) stainless steel counterparts from MicroSense / LightPath used by the SQZ team, SuK doesn't have the facilities to specify a temperature at which they believe the titanium + D-ZK3 glass lens + CuBe & Ti lens retaining ring assembly will fail (see E2300454). As such, we're going to run one SuK collimator through the same baking procedure as the MicroSense / LightPath (see Section 3.3 of E1500384) -- slow 6 [hr] ramp up to 85 [deg C], hold for 24 hours, similar slow ramp down -- and see if it survives.

A catastrophic "if it survived" result would be that the lens cracks under the stress of differential heating between the titanium, CuBe, and glass. 
As we don't expect this type of failure, in order to characterize "if it still functions," we want a quantitative "before" vs "after" metric. 
Since I have to learn how to "collimate" this type of collimator anyway (where "collimate" = mechanically tune the lens position such that the emanating beam's waist is positioned as close to the lens as possible), I figured we'd use a 2D beam profile as our quantitative metric. We expect the symptom of a "failed" fiber collimator to be that "before bake it projects a lovely, symmetric, tunable, Gaussian beam, and after bake the profile looks asymmetric / astigmatic or the lens position is no longer consistently/freely adjustable."

The SPI design requires a beam whose waist (1/e^2) radius is w0 = 1.050 [mm] +/- 0.1 [mm]. That puts the Rayleigh range at zR = pi * (w0^2) / lambda = 3.25 [m], so in order to get a good fit on where the waist lies, we need at least one or two data points beyond the Rayleigh range. That means projecting the beam over a large distance, say at least ~5-6 [m].
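
As a worked check of those numbers, under the ideal Gaussian-beam formulas used throughout this entry:

    # Gaussian-beam numbers for the SPI design waist (a check, not new data)
    import numpy as np

    lam = 1064e-9                    # wavelength [m]
    w0 = 1.050e-3                    # design waist radius [m]
    zR = np.pi * w0**2 / lam         # Rayleigh range: ~3.25 [m]

    def w(z):                        # beam radius a distance z from the waist
        return w0 * np.sqrt(1 + (z / zR)**2)

    print(zR)                        # 3.255 [m]
    print(w(5.41))                   # ~2.036e-3 [m] -> d ~ 4.07 [mm] at 5.41 [m]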

Measurement Details
2025-06-03_SPIFC_OpticalSetup.pdf shows the optical setup. 

Pretty simple; just consuming a lot of optical table space (which we thankfully have at the moment). The "back" optical table (in IFO coordinates, the -X table, with the long direction oriented in the Y direction) already has a pre-existing fiber-coupled, 1064 nm laser capable of delivering variable power of at least 100 [mW], so I started with that. Its output has an FC/APC "angled physical contact" connector, whereas the SuK fiber collimators have an FC/PC (flat) "physical contact" connector. I thus used a ThorLabs P5-980PM-FC-2 APC-to-PC fiber patch cord to couple to the SuK fiber collimator, assembled with a (12 mm)-to-(1 inch) optic mount adapter (AD12NT) and mounted in a standard 1" mirror mount, set at 5 inch beam height. Ensuring I positioned the free-space face of the collimator at the 0 [in] position, I projected the beam down the width of the optical table and placed steering mirrors at 9 [ft] 9 [in] (the maximum +Y grid holes), separated by 6 [in], in order to return the beam down the table. Along this beam, I marked out grid-hole positions to measure the beam profile with a NanoScan head at
    z = [0.508 0.991 1.499 2.007 3.251 4.496 5.41] [m] 
These are roughly even half-meter points that one gets from "finding 1 inch hole position that gets you close to 0.5 [m] increments", i.e. 
    z = [1'8", 3'3", 4'11", 6'7", 10'8", 14'9", and 17'9"].

Using the SuK proprietary tooling, I then loosened the set screws that had secured the lens in position with the 1.2 mm flat-head (9D-12), and adjusted the position of the lens with their short eccentric key (60EX-4), per their user manual (T2500062 == Adjustment_60FC.pdf), with the NanoScan positioned at the 5.41 [m] location.
At the 5.41 [m] position, we expect the beam radius to be w(z=5.41 m) = w0 * sqrt(1 + (z/zR)^2) = 2.036 [mm], or a beam diameter of d(z=5.41 m) = 4.072 [mm].

Regretfully, I made the rookie mistake of interpreting the NanoScan's beam widths (diameters) as radii on the fly, and "could not get a beam 'radius' lower than 3.0 [mm]" at the z = 5.41 [m] position, as adjustments to the eccentric key / lens position approached a "just breathe near it and it'll get larger" level at that beam size. This will be totally fine for "the real collimation" process, as beam widths (diameters) of 4.0 [mm] required a much less delicate touch and were quite repeatable (I found, while struggling to achieve the 3.0 [mm] width).

Regardless, once I declared the lens position "good enough," I re-secured the set screws holding the lens in place. That produced a d = 3.4 [mm] beam diameter, or 1.7 [mm] radius, at z = 5.41 [m]. I then moved the NanoScan head to the remaining locations (realigning the beam into the head at each position, as needed, with either the steering mirrors or the fiber collimator itself) to measure the rest of the beam profile as a function of distance.

2025-06-03_SPIFC_BeamScans_vs_Position.pdf shows the NanoScan head position and raw data at each z position after I tuned the lens position at z = 5.41 [m].
Having understood during the data analysis that the NanoScan software returns either 1/e^2 or D4sigma beam *diameters*, I followed modern convention and used the mean D4sigma values, converting to radii with a factor of 2, w(z) = d(z) / 2. One can see from the raw data that the beam profile at each point is quite near "excellently Gaussian," which is what we hope will remain true *after* the bake.

Results

2025-06-03_spifc_S0272502_prebake_beamprofile_fit.pdf shows the output of the attached script spifc_beamprofile_S0272502_prebake_20250603.m, which uses Sheila's copy of a la mode to fit the profile of the beam.

Discussion
The data show a pretty darn symmetric beam in X (parallel to the table) and Y (perpendicular to the table), reflecting what had already been seen in the raw profiles.
The fit predicts a waist w0 of 
        (w0x, w0y) = (0.89576, 0.90912) [mm] 
at position 
        (z0x, z0y) = (1.4017, 1.3469) [m] 
away, downstream from the lens. It makes total sense that the z position of the waist is *not* at the lens position, given that I tried to get the beam to as small a *diameter* as possible at the 5.41 [m] position, rather than doing what I should have done, which is tune the lens position so the beam diameter there is the desired 4.072 [mm].

What doesn't make sense to me is that -- in trying to validate the fit and/or show that the beam behaves as an ideal Gaussian beam would -- I also plot the predicted beam radius at the Rayleigh range,
    w(zR) [model] = w0 * sqrt(1 + (zR/zR)^2) = w0 * sqrt(2),
or 
         (wzRx, wzRy) [model] = (1.2668,1.2857) [mm]
which is much larger than the fit predicts,
         (wzRx, wzRy) [fit] = (0.9677,0.9958) [mm]
at a Rayleigh range,
    zR = pi * w0^2 / lambda
of 
         (zRx, zRy) [from fit w0] = (2.3691, 2.4404) [m]
(A likely resolution: evaluating the fitted beam at the absolute position z = zR from the lens, rather than at z = z0 + zR downstream of the fitted waist, reproduces exactly these smaller "fit" values for an ideal Gaussian with the fitted parameters.)

Similarly confusing: if I plot a line from the waist position (z0x, z0y) to the end of the position vector (6 [m]), whose angle from the horizontal (z) axis is the divergence angle
    theta_R = lambda / (pi * w0)
i.e. predicting a beam radius at zEnd = 6 [m] of
    w(zEnd) = (zEnd - z0) * tan(theta_R)
this results in a beam radius at zEnd much smaller than the fit. Most demo plots, e.g. from wikipedia:Rayleigh_length or wikipedia:Beam_divergence, show that the slope of this line should start to match the beam profile just after the Rayleigh range. To be fair, these demos never have quantitative axes, but still: at zEnd = 6 [m] the beam is only ~1.9 zR past the fitted waist, where an ideal Gaussian is still ~13% wider than its asymptote, so some offset is expected. At least the slope seems to match, even though the line is quite offset from the measured / fit profile near the end of the measured range.
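
Here is a quick consistency check (my own sketch, not the analysis script) that reproduces the model numbers above from the fitted waists, and shows that evaluating at the absolute position z = zR, rather than z = z0 + zR, yields the smaller "fit" values:

    # Ideal-Gaussian expectations from the fitted waist parameters
    import numpy as np

    lam = 1064e-9
    for ax, w0, z0 in [("x", 0.89576e-3, 1.4017),
                       ("y", 0.90912e-3, 1.3469)]:
        zR = np.pi * w0**2 / lam                      # 2.369 / 2.440 [m]
        w = lambda z: w0 * np.sqrt(1 + ((z - z0) / zR)**2)
        print(ax, zR, w(z0 + zR))    # w0*sqrt(2): 1.267 / 1.286 [mm]
        print(ax, w(zR))             # at absolute z = zR: 0.968 / 0.996 [mm],
                                     # matching the smaller "fit" values above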

Conclusion
Though I did not adjust the lens position to place the beam waist at the desired position, I now have a good measurement setup and data-processing regime to do so for the "production" pair of fiber collimators. For this collimator, since we're just looking at "before" vs. "after" bake data, this data set is good enough.

Let's bake!
Non-image files attached to this report
H1 General (OpsInfo)
corey.gray@LIGO.ORG - posted 01:28, Saturday 14 June 2025 - last comment - 09:11, Saturday 14 June 2025(85039)
H1 Owl Shift Wake Up Call: Due To SDFs

(RyanC, CoreyG)

Got a wake-up call at 12:11am PDT, but I was sort of already awake. RyanC was up after his shift and we were both watching H1 remotely. He had been battling H1 most of his shift, made headway toward the end (the winds started dying down around 9pm-ish), and then handed off info on how things were going. A few things he did for H1:

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 09:11, Saturday 14 June 2025 (85043)SEI

Now that I'm sort of awake, I'm wondering why the Earth was playing with us with that earthquake, which literally caught us by surprise seconds after we made it to Observing last night.

Although there were some Alaska quakes around the time of the lockloss, they were under Mag2.6.

So I'm assuming it was the Mag5.0 off the coast of Chile, roughly 30 min before the lockloss. I guess that's fine, but why were there no notifications on Verbal of an earthquake? Why didn't SEI_ENV transition from CALM to EARTHQUAKE?

Looking at the seismic BLRMS, the last time SEI_ENV transitioned to EARTHQUAKE was about 12 hrs ago, at 0352 UTC (see attached screenshot) during RyanC's shift (but he was just getting done dealing with winds at that time, so H1 was down anyway). After that EQ there were a few more earthquakes, which were smaller than the 0352 one, but not by much, and certainly big enough to knock H1 out at 0725 via the Chilean coast earthquake. Perhaps it was a unique EQ because it was off the Pacific coast, albeit the South American coast.

Just seems like H1 should have been able to handle this pesky measly Mag5.0 EQ that the Earth taunted us with after a rough night---literally seconds after we had hit the OBSERVING button!  :-/

H1 General (SEI)
anthony.sanchez@LIGO.ORG - posted 16:30, Wednesday 11 June 2025 (84981)
Shift Ops Day shift report.

TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc

CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 24mph Gusts, 18mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.14 μm/s
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

Recovered from the earlier lockloss!
An Initial_Alignment was ran, and we got back up to Nominal_Low_Noise at 20:46 UTC 
I did have to accept some SDF diffs for the SEI system, and reached out to Jim about it. He mentioned that some of those channels should not be monitored.
When the SEI_ENV went back to CALM and dropped us from Observing a few moments later, we unmonitored those channels.
We got back to Observing at 21:04 UTC

21:10 UTC we fell out of Observing because of a SQZ issue, returning to Observing at 21:24 UTC


SUS ETMY Roll Mode is growing!
H1:SUS-ETMY_M0_DARM_DAMP_R_GAIN was changed to account for an interesting roll mode on ETMX, and accepted in SDF. YES, ya read that correctly, and I didn't make a mistake here: changes to ETMY damped a roll mode on ETMX.
The Ops Eve Shifter should revert this change, before handing off the IFO to the Night Owl Op.

H1 has been locked for 2 hours and 40 minutes and is currently OBSERVING.

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD ദ്ദി(⎚_⎚) | 22:54
15:46 | FAC | Mitchel | Mech Mezz | N | Checking the dust pump | 16:56
15:49 | FAC | Tyler | VAC prep | N | Staging tools & sand paper | 16:04
15:57 | FAC | Tyler | EY | N | Checking on bee box | 16:17
17:44 | EE | Fil & Eric | WoodShop | N | Working on cabling chiller yards mon channels | 19:44
17:50 | FAC | Kim | H2 | N | Technical cleaning | 18:30
18:39 | VAC | Gerardo, Jordan | LVEA | - | Bolt tightening and pump evaluation | 19:19
18:44 | PSL | RyanS | CR | N | RefCav alignment | 18:51
19:06 | PEM | Robert | LVEA | - | Damping vacuum pumps | 19:22
20:04 | PEM | Camilla | LVEA | Yes | Turning off an SR785 that was left on | 20:09
Images attached to this report
H1 General (Lockloss, SEI)
ryan.short@LIGO.ORG - posted 11:42, Wednesday 11 June 2025 - last comment - 12:22, Wednesday 11 June 2025(84975)
Lockloss @ 18:33 UTC

Lockloss @ 18:33 UTC after 12.5 hr lock stretch - link to lockloss tool

Looks to be from some very local sudden ground motion. USGS has nothing to report yet, but site seismometers and Picket Fence certainly saw activity.

Comments related to this report
ryan.short@LIGO.ORG - 12:22, Wednesday 11 June 2025 (84978)

Now looks to be due to a M4.4 EQ from Fort St. John, Canada.

Images attached to this comment
H1 General (CDS, SEI)
anthony.sanchez@LIGO.ORG - posted 08:28, Wednesday 11 June 2025 - last comment - 08:52, Wednesday 11 June 2025(84964)
HPIHAM1 Channels not found.

SDF Overview looks great except this HPIHAM1 channel not found.

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 08:52, Wednesday 11 June 2025 (84967)

Channels not found list attached as a picture.

Images attached to this comment
H1 General (CAL, SEI)
anthony.sanchez@LIGO.ORG - posted 08:17, Wednesday 11 June 2025 - last comment - 09:56, Wednesday 11 June 2025(84963)
Wednesday Day Ops Morning Shift & Observing!

TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

H1 is still locked 8 Hours and 30 minutes later!
All systems seem to be functioning.

There was talk of doing a calibration measurement, which I started right after making sure there wasn't anyone still working inside the LVEA.

I ran a PCAL BroadBand with this command:  
pydarm measure --run-headless bb
2025-06-11 07:44:58,555 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-06-11 07:44:58,571 available measurements:
  pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
  bb  : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)

The broadband finished, but I did not run the simulines. The calibration gurus believed we don't need them before observing, because our calibration monitoring lines look good:
"monitoring lines show a pretty good uncertainty for LHO this morning: https://gstlal.ligo.caltech.edu/grafana/d/StZk6BPVz/calibration-monitoring?orgId=1&var-DashDatasource=lho_calibration_monitoring_v3&var-coh_threshold=%22coh_threshold%22%20%3D%20%27cohok%27%20AND&var-detector_state=&from=1749629890225&to=1749652518797 Roughly +/-2% wiggle"
~Joe B

Clicked the button for Observing, and we went right into Observing without any SDF issues!
Went into Observing at 14:57 UTC

There are messages, though, mostly from the SEI system, all of which are setpoint changes; see the SPM DIFFS for HAMs 2, 3, 4, and 5.
But these have not stopped us from getting into Observing.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 09:56, Wednesday 11 June 2025 (84968)

I have attached a screenshot of the broadband measurement from this morning. It shows that the calibration uncertainty is within +-2%, which means that our new calibration is excellent!

For those who want to plot the latest PCAL broadband, you can use a template that I have saved in /opt/rtcds/userapps/release/cal/h1/dtt_templates/PCAL_BB_template.xml (aka [userapps] cal/h1/dtt_templates/)

In order to use this template, you must find the GPS time of the start of the broadband measurement, which I found today by converting the timestamp in Tony's post above into GPS time. This template pulls data from NDS2 because it uses GDS channels, so you will also need to go to the "Input" tab and put your current GPS time in the "Epoch stop" entry within the "NDS2 selection" box. The current time will hopefully be after the start time of the broadband measurement, ensuring that the full span of data you need is requested from NDS2. If you don't do this, the template will give you an error when you try to run it.
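
For the timestamp-to-GPS conversion, any of the usual tools work (the tconvert command line, gwpy.time.to_gps, etc.). A minimal sketch with astropy, using an illustrative UTC timestamp (convert local timestamps like the one in the pydarm log to UTC first):

    # Convert a UTC timestamp to GPS seconds for the DTT template
    from astropy.time import Time

    t = Time("2025-06-11 14:44:58", format="iso", scale="utc")
    print(int(t.gps))   # GPS seconds to enter in the template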

Images attached to this comment
H1 CDS (CDS, SEI)
patrick.thomas@LIGO.ORG - posted 12:13, Tuesday 10 June 2025 - last comment - 11:59, Tuesday 01 July 2025(84928)
odd cabling on h1brsey
I went down to end Y to retrieve the USB stick that I remotely copied the c:\slowcontrols directory on h1brsey to, and also to try to connect h1brsey to the KVM switch in the rack. I eventually realized that what I thought was a VGA port on the back of h1brsey was probably not one; instead I found some odd-seeming wiring connected from what I'm guessing is an HDMI or DVI port on the back of h1brsey, to some kind of converter device, and then to a USB port on a network switch. I'm not sure what this is about, so I am attaching pictures.
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 09:41, Thursday 12 June 2025 (84994)
A work permit has been filed to remove this cabling and put h1brsey on the kvm switch in the rack.
patrick.thomas@LIGO.ORG - 11:59, Tuesday 01 July 2025 (85466)
This has been completed.
H1 CDS (CDS, SEI)
patrick.thomas@LIGO.ORG - posted 16:03, Monday 09 June 2025 (84905)
work on h1brsex and h1brsey
I copied the contents of C:\SlowControls on h1brsex and h1brsey onto a USB stick. I committed the changes local to h1brsex into SVN. I ran svn update on h1brsey; I was expecting just the changes I committed from h1brsex to show up, but a whole lot more did -- I guess no one had run svn update in a long time. It appeared to complete without conflicts and reported that it was now at revision 6189. I started committing files to SVN on h1brsey that I thought did not conflict between the two machines, but I think I accidentally committed one that does: I committed local changes on h1brsey to '/trunk/BRS2 C#/BRSReadout/configs/LHOEY.config', but afterward found that there are also local changes to the same file on h1brsex.
H1 ISC (SEI)
elenna.capote@LIGO.ORG - posted 11:00, Friday 06 June 2025 (84863)
ASC performance with HAM1 ISI

This is a very quick first look at the ASC performance with the HAM1 ISI. Jim is still working on bringing the ISI up to full performance, but the first attached plot compares the INP1 P and CHARD P error spectra from March 31 just before the vent and yesterday, when we locked to full power and low noise with HAM1 in the isolated state.

Before the vent, we were running feedforward of the HAM1 L4Cs to CHARD, INP1, and PRC2. PRC2 is now on the POP sensor, and CHARD and INP1 both use a combination of the REFL WFS 45 MHz signals. The large peak around 20 Hz from before the vent appears to have been reduced in INP1 P and shifted down in frequency in CHARD P. The second attachment shows that the large peak in CHARD P is coherent with the HAM1 GS13s from about 10 to 20 Hz, especially with RX, RY, RZ, and Z.
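
For anyone repeating this kind of check offline, a sketch of the coherence computation using gwpy is below; the channel names are my guesses at the relevant CHARD and HAM1 GS13 channels and should be verified, and the times are placeholders.

    # Coherence between CHARD P and a HAM1 GS13 DOF (illustrative channels)
    from gwpy.timeseries import TimeSeries

    start, end = "2025-06-05 12:00", "2025-06-05 13:00"  # placeholder lock
    chard = TimeSeries.get("H1:ASC-CHARD_P_OUT_DQ", start, end)
    gs13 = TimeSeries.get("H1:ISI-HAM1_BLND_GS13RX_IN1_DQ", start, end)

    coh = chard.coherence(gs13, fftlength=64, overlap=32)
    print(coh.crop(5, 30))   # the 10-20 Hz band discussed above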

There is also a peak at 6 Hz, which we know is a vertical resonance of the RMs, 84712.

Images attached to this report
H1 SEI (SEI, SYS)
rahul.kumar@LIGO.ORG - posted 10:11, Friday 06 June 2025 (84832)
B&K measurement of the periscope in staging building (for HAM1 70Hz investigation) - peak at 48Hz

To investigate the 70 Hz feature in the HAM1 chamber, which Jim reported in his alog (LHO 84638), I started looking into the structural resonances of the periscope installed in HAM1 (see the two pictures TJ sent me, taken by Corey: view01, view02).

Betsy handed me a periscope (similar, but not exactly the same) for investigation purposes, which is now set up in the staging building. I attached an accelerometer to the top of the periscope and connected it to the front end of the B&K setup for hammer impact measurements -- see the picture of the experimental setup here.

At first I used two dog clamps to secure the periscope. The results of the two-dog-clamp B&K measurement are shown in this plot (data from 0.125 to 200 Hz) -- one can see a 39 Hz feature in the Z hit direction. See the zoomed-in (30-100 Hz) figure here.

Next, I attached a third dog clamp, just like in the HAM1 chamber, and took a second round of measurements (especially for the Z direction impact).

This plot compares the two vs. three dog clamp scenarios on the periscope; one can see that the resonance mode has been pushed up from 39 Hz to 48 Hz.

Images attached to this report
H1 General (CDS, OpsInfo, SEI, SQZ)
thomas.shaffer@LIGO.ORG - posted 11:45, Tuesday 03 June 2025 (84748)
LVEA Post Vent Sweep

Betsy, Ryan S, TJ

We did our first walk-through after the vent, but next week there will definitely be more to sweep that we either missed or said we would hold off a week on. We focused on the most egregious items.

Images attached to this report
H1 SUS
matthewrichard.todd@LIGO.ORG - posted 09:43, Monday 02 June 2025 - last comment - 11:00, Monday 02 June 2025(84712)
RMs PSDs differences

M. Todd, C. Cahillane


Yesterday we were interested in some noise we were seeing around 6 Hz in various ASC signals (mostly CHARD pitch). This motivated us to look at the RMs, so I took a PSD of each DOF for both RMs. We are not sure why there are such differences in all the DOFs, or whether there are unintended side effects of this.

I'm specifically wondering why RM2 has larger peaks around 6.25 Hz in all dofs...

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:00, Monday 02 June 2025 (84718)SEI
This feature is the primary vertical resonance of the HTTS. 

The ISI HAM1's control system has still not yet been commissioned, so the decision to *unmute* the HTTS "because the ISI isolation performance should be much better than the Stack" will plague us until that's true. 

See "muting the HTTS blade springs" ECR E2200181, which confirms the 6.25 Hz resonance. 
See Integration Issue 33777 for historical notes, specifically Comment 2's record of IIET call discussion and agreement to *unmute*.

I suspect the low-frequency performance will also improve once the ISI's controls are commissioned.

Let's give Jim some time to figure out how to dance around the ~70 Hz feature that he's dealing with; see LHO:84638 (and last week's lock-acquisition problems due to ignoring it, LHO:84640).
H1 SEI (SEI)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:56, Sunday 01 June 2025 (84705)
BRS Trends - FAMIS 26455

Closes FAMIS 26455. Last checked in alog 84145

Trends are within the red error lines. It seems there were 3 spikes during the week of 05-11 to 05-18, which can probably be attributed to vent work, but I am unsure.

Images attached to this report
H1 General (SEI)
mitchell.robinson@LIGO.ORG - posted 14:22, Thursday 29 May 2025 (84657)
EX windfence work complete

Work at the EX wind fence has been completed. While laying out the last panel (furthest north), it was discovered that the new panel had been stitched incorrectly. An old panel that was in good shape was reused.

Images attached to this report
LHO General (EPO, PEM, SEI)
corey.gray@LIGO.ORG - posted 15:27, Thursday 22 May 2025 (84550)
EX Windy Wind Fence Update For Today

(CoreyG, MitchR, RandyT)

Wind Fence News

Today the fabric panel at location #4 (of 6) was started; it was attached above the middle (so 3 of 5 horizontal cables were secured to this panel). The panel was then left secured at this stage, since the afternoon winds were beginning to pick up.

At this point we moved to location #5 (of 6) and continued attaching the big/thick horizontal cables (the bottom one was installed last week), so the remaining 4 were tensioned up / installed.

Bee News

After this work was done, several NEW bee swarms were observed along the X-arm as we drove back to the Corner!

(Earlier this week an X-arm bee swarm was observed and a bee person installed a beehive box; this box seems to be getting populated, but we did see some bees continuing to go into the Beam Tube at this location.)

But today, we saw (3) NEW swarms at the base of the Beam Tube Enclosure---their swarms appeared within 2 hrs! The bees are entering at the joints between the cement enclosures. Many of the joints have holes in the caulking toward the ground, and this is where the swarms were centered; as we drove back to the corner, we saw several more of these holes with smaller groups of bees investigating them. Mitch went to notify Richard about the situation when we got back to the Corner Station.

Images attached to this report