Displaying reports 261-280 of 85498.
Reports until 17:02, Saturday 25 October 2025
LHO General
corey.gray@LIGO.ORG - posted 17:02, Saturday 25 October 2025 (87743)
Sat EVE Ops Transition

TITLE: 10/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 12mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.55 μm/s 
QUICK SUMMARY:

H1 has been locked at NLN for over 2hrs (yay!) after a rough week of no observing since Mon night. Ryan gave me a great summary of the issues/saga that went on all the way up till this morning, when RyanS, Sheila, & Elenna fixed H1!

OPS ASSUMPTION:  With H1 appearing back to normal, will assume there's an OWL shift for Tony (unless I hear otherwise)

OPS Handoff from RyanS:

We just got a warning of an M6 EQ in the South Pacific, AND the EQ Response graph has this EQ squarely on the "HIGH ASC" line. Because of this (and the high microseism and the locking issues of the week), I will proactively take H1 out of Observing to transition with the ASC Hi Gn button a few minutes before the R-wave arrives (timer set!)

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:57, Saturday 25 October 2025 (87742)
Ops Day Shift Summary

TITLE: 10/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Finally was able to get H1 back to observing today; see my earlier alogs for details on the efforts there. Since getting recovered, we had one lockloss from an unknown source but were able to relock easily. H1 has now been locked for just over 2 hours.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 14:03, Saturday 25 October 2025 - last comment - 14:57, Saturday 25 October 2025(87739)
Lockloss @ 20:20 UTC

Lockloss @ 20:20 UTC after 1:15 locked - link to lockloss tool

Sadly a lockloss shortly after a triumphant return to observing. Nothing ringing up that I can see, environment is generally calm except for the consistently elevated microseism, and the lockloss felt quick, so no obvious cause here.

Comments related to this report
ryan.short@LIGO.ORG - 14:57, Saturday 25 October 2025 (87741)

Back to observing at 21:49 UTC. Ran an initial alignment then lock acquisition went fully automatically. No sign of bounce modes, roll modes, or ASC ringups at any point.

H1 General (ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 13:56, Saturday 25 October 2025 - last comment - 14:27, Saturday 25 October 2025(87738)
H1 Returns to Observing

Executive summary: H1 returned to observing as of 19:35 UTC after a challenging week.

Following my lock acquisition and troubleshooting attempts this morning, H1 was able to relock fairly easily again all the way up to LOWNOISE_LENGTH_CONTROL. I requested LOWNOISE_ASC and began watching carefully for any rise in the ~9.8 Hz bounce mode, which I saw quickly start to come up in the early seconds of LOWNOISE_ASC. Once the state finished, my first reaction to try and stop this increase was to transition back to high-gain ASC, which I did using the script on the ISI config screen. Looking back in the ISC_LOCK Guardian log, I was reminded that right before LOWNOISE_ASC we had gone through DAMP_BOUNCE_ROLL, which I assumed was not really doing anything since the roll mode damping gain was commented out. However, I saw that in this state, an entry in the LSC output matrix was being set to -1, which I traced down to being the entry for sending DARM control to ETMY, meaning we likely have been unintentionally exciting the ETMY bounce mode. After confirming that this was very much incorrect, I set the matrix value back to 0 and almost immediately saw the 9.8 Hz peak and the bounce mode monitors start to drop. After a few minutes of watching things calm down, I transitioned back to lownoise ASC and saw no sign of the mysterious hump around 60 Hz that had been seen yesterday. With things looking good so far, I requested NOMINAL_LOW_NOISE, which H1 made it to without issue.

It appears the line in the DAMP_BOUNCE_ROLL state code to send DARM control to ETMY has been there for some time, and a comment there says it's to allow for roll damping on ETMY, which would use DARM as the error signal, so this makes some sense. Before yesterday, this state was being run right before we power up from 2W, and a later state would correct the LSC matrix settings so that no erroneous actuation was being sent to ETMY. To remedy this, I have commented out the line that sets the LSC matrix from DAMP_BOUNCE_ROLL. The roll damping we have been using is still commented out as well, so this state essentially does nothing right now and remains between LOWNOISE_LENGTH_CONTROL and LOWNOISE_ASC.

Soon after reaching NLN, I noticed the familiar 1 Hz ASC oscillation was starting to ring up, seen mostly in CSOFT_P and INP1_P.  At Elenna's direction, I increased the CSOFT_P gain from 20 to 25, and the oscillation started to turn around and subside after a few minutes. Elenna has updated ISC_LOCK to set the final gain of CSOFT_P to 25, and commented out the 30 minute reduction of the gain in the THERMALIZATION node.

Elenna and I then tackled the outstanding SDF diffs, which all ended up being accepted, and are documented in the attached screenshot.

Since squeezing looked like it could have been improved, I ran SQZ_MANAGER through 'SCAN_SQZANG_FDS', which noticeably improved DARM at high frequency.

With things looking about as wrapped up as they could be, I set H1 to start observing for the first time in a few days at 19:35 UTC.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:27, Saturday 25 October 2025 (87740)

At Sheila's suggestion, I've entirely removed DAMP_BOUNCE_ROLL from the main locking path since the state right now does nothing. LOWNOISE_LENGTH_CONTROL will now go straight into LOWNOISE_ASC. If we decide later we want to be damping roll modes again, it would be simple to uncomment the lines for the edges around this state.

H1 General
ryan.short@LIGO.ORG - posted 11:12, Saturday 25 October 2025 (87737)
Morning Locking Progress

Started the day by running an initial alignment and relocking up to LOWNOISE_LENGTH_CONTROL with no issues along the way. Eventually spent 1hr 20min in this state and took a PUM/ESD crossover measurement using the template userapps/lsc/h1/templates/PUM_crossover_2024.xml, see attachment. Sheila confirms even with the lower coherence around the crossover frequency of 20 Hz, this is a good measurement and lines up well enough with the 2024 reference.

I then requested 'INCREASE_DARM_OFFSET', since that's where Ryan C. notes he started to see the ~9.8 Hz bounce modes start to ring up, and waited. I soon started to see the ETM bounce modes (at least according to the monitors) increasing. I tried applying a damping gain of 1 on ETMX with the filters already set and saw a small response, but the mode was still increasing. Sheila suggested lowering the DARM offset from 10.75 to 7 to possibly give more time, so I did. Changing the ETMX damping gain around some more did affect the mode (especially turning it off), so it may be possible to damp these, but it's also very possible these filters are very outdated. I also transitioned to high-gain ASC between my damping gain steps of 1 and 2 and didn't see an appreciable change in the mode. I didn't get a chance to try many different settings on either ETM before we lost lock, and since the OMC DCPDs weren't close to saturating, lowering the DARM offset may not have helped after all. See the ndscope screenshot for a summary of these attempts. The roll mode damping is still commented out in ISC_LOCK, so it never came on during all of this, but I never saw the roll mode increase at all.

On the next lock acquisition, I'll go slower after LOWNOISE_LENGTH_CONTROL to see if I can get a better idea of where this ringup starts.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:40, Saturday 25 October 2025 (87736)
Sat CP1 Fill

Sat Oct 25 10:08:58 2025 INFO: Fill completed in 8min 55secs

 

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:47, Saturday 25 October 2025 (87735)
Ops Day Shift Start

TITLE: 10/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.44 μm/s 
QUICK SUMMARY: H1 was down overnight due to high winds and microseism coupled with ongoing locking troubles. Will start with a fresh initial alignment then see how far in the locking sequence H1 can get this morning.

H1 General (OpsInfo)
ryan.crouch@LIGO.ORG - posted 22:00, Friday 24 October 2025 (87734)
OPS Friday eve shift summary

TITLE: 10/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance // Environment
INCOMING OPERATOR: Tony
SHIFT SUMMARY: The high winds and microseism made locking pretty much impossible after the 03:01 UTC lockloss.

LOG: No log.

 

Images attached to this report
H1 PSL (ISC, PSL)
keita.kawabe@LIGO.ORG - posted 18:48, Friday 24 October 2025 - last comment - 16:47, Monday 27 October 2025(87729)
ISS array another design issue (Rahul, Keita)

Summary:

We aligned everything such that none of the 8 PDs was excellent but all were OK (we were also able to set things up such that 4 PDs were excellent but a few were terrible, but decided not to take that). We were preparing to put the array in storage until installation, only to find that something is wrong with the design of the asymmetric QPD clamp D1300963-V2. It's unusable as is.

The QPD clamp doesn't constrain the position of the QPD laterally, and there's a gross mismatch between the position of a properly aligned QPD and that of the center hole of the QPD clamp. Because of that, when the QPD is properly positioned, one of the QPD pins will touch the QPD clamp and be grounded unless the QPD connector is fixed in such a way as to pull the QPD pins sideways. Fortunately but sadly, the old non-tilt QPD clamp D1300963-V1 works better, so we'll use that.

Another minor issue is that there seems to be some confusion about the direction of the QPD tilt in terms of the words "pitch" and "yaw". The way the QPD is tilted in D1101059-v5 (this is how things are set up in the lab as of now) doesn't seem to follow the design intent of ECR E1400231, though it follows the letter of it. After confirming with systems that this is the case, we'll change the QPD tilt direction (or not). This means that we're not ready to put everything in storage quite yet.

None of these affect the PD array alignment we've done, this is just a problem of the QPD.

Pin grounding issue due to the QPD clamp design.

I loosened the screws for the QPD connector clamps (circled in blue in the first attachment) and the output of the QPD preamp went crazy with super-large 60Hz noise and a large DC SUM even though there was no laser light.

I disconnected the QPD connector, removed the connector clamps too, and found that one pin of the QPD was short-circuited to ground via the QPD clamp (not to be confused with the QPD connector clamps, see 2nd attachment).

It turns out the offending pin had been isolated during our adjustments all along because the QPD connector clamps were applying enough lateral (as well as downward) pressure that the pins were slightly bent away from the offending side. I was able to reattach the connector, push it laterally while tightening the clamp screws, and confirm that the QPD functioned fine. But this is not really where we wanted to be.

I rotated the QPD clamp 180 degrees (which turns out to make more sense judging from the drawings in the first attachment), which moved the QPD. Since the beam radius is about 0.2mm, if the QPD moves by 0.2mm it's not useful as a reference of the in-lab beam position. I turned the laser on, repositioned the QPD back to where it should be, but the pin on the opposite side started touching. (Sorry no picture.)

I put on the old non-tilt version of the clamp and it was much, much better (attachment 3). It's annoying because the screw holes don't have an angled recess: the screw head is tilted relative to the mating surface on the clamp, contacting at a single point, and tightening/loosening the screws tends to move the QPD. But it's possible to carefully tighten one screw a bit, then the other one a bit, and repeat that a dozen times or so until nothing moves even when pushed firmly by finger. After that, you can still move the QPD by tiny amounts by tapping the QPD assy with a bigger Allen key. Then tighten again.

What's going on here?

In the 4th attachment, you can see that the "center" hole of the QPD clamp is offset by 0.055" (1.4mm) in the direction orthogonal to A-A, and about 0.07" (even though this number is not specified anywhere in the drawing) or 1.8mm in the A-A direction. So the total lateral offset is sqrt(1.4^2+1.8^2)~2.3mm. OTOH, the QPD assy is only 0.5" thick, so the lateral shift arising from the 1.41deg tilt at the back of the QPD assy is just 1.41/180*pi*0.5=0.0123" or 0.3mm.
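For convenience, the arithmetic above can be checked numerically (a quick sanity check using only the drawing values as read in this entry):

```python
from math import hypot, pi

# Clamp center-hole offset: 1.4 mm orthogonal to A-A, ~1.8 mm along A-A
total_offset_mm = hypot(1.4, 1.8)

# Lateral shift from tilting the 0.5"-thick QPD assy by 1.41 deg
tilt_shift_in = 1.41 / 180 * pi * 0.5
tilt_shift_mm = tilt_shift_in * 25.4

print(f"clamp-hole offset: {total_offset_mm:.1f} mm")                    # ~2.3 mm
print(f"tilt shift: {tilt_shift_in:.4f} in = {tilt_shift_mm:.2f} mm")    # 0.0123 in = 0.31 mm
```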

Given that the beam position relative to the array structure is determined by the array itself and not by how the QPD is mounted, 2.3mm lateral shift is impossibly large, something must be wrong in the design. The 5th attachment is a visual aid for you.

Anyway, we'll use the old clamp, it's not worth designing and manufacturing new ones at this point.

QPD tilt direction.

If you go back to the first attachment, the QPD is tilted in a direction indicated by a red "tilt" arrow in the lab as we just followed the drawing.

The ECR E1400231 says "We have to tilt the QPD 1 deg in tip (pitch) and 1 deg in tilt (yaw)" and it sounds as if it corroborates the drawing.

However, I suspect that "pitch" and "yaw" in the above sentence might have been misused. In the right figure of the 6th attachment (screenshot of the ECR, unedited), it seems that the QPD reflection hits the elevator (the red 45 degree thing in the figure) at around the 6 o'clock position of the elliptic exit hole, which means that the QPD is tilted in its optical PIT. If it were really tilted 1 degree in optical PIT and 1 degree in optical YAW, the reflection would hit something like the 7:30 position instead of 6:00.

That makes sense, as the design intent of the ECR is to make sure that the QPD reflection will not go back into the exit hole. The 7th attachment is a side view I made: red lines represent the IR beams, yellow lines the internal hole(s) in the elevator, and green lines the apertures of the two elliptical exit holes. Nothing is to scale, but hopefully you agree that, in order to steer the QPD reflection outside of the exit hole aperture, PIT UP requires the largest tilt and PIT DOWN requires the least. We have a fixed tilt of the QPD, so it's best to go PIT DOWN; that's what I read from the ECR. If you don't know which angle is bigger or smaller, see attachment 8.

Anyway, I'll ask Callum if my interpretation is correct, and will act accordingly.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 16:09, Monday 27 October 2025 (87774)

A followup summary:

Callum and Betsy say that I'm in the best position to judge, so I decided to tilt the QPD in its optical PIT.

Turns out that the QPD was already tilted in QPD's optical PIT so everything is fine(-ish). We'll put the unit in storage tomorrow.

Seems like we were tricked by the part drawing of the tilt washer D1400146, not the assembly drawing D1101059.

Details:

Before rotating anything, I wanted to see if the reflection from the QPD could be seen on the aluminum part using the IR viewer, and indeed we could see something. The first attachment shows some kind of diffraction pattern hitting the barrel of the 1" 99:1 sampler in PIT. The second attachment shows that the bright spots are gone when Rahul blocked the beam going to the QPD, so they're clearly due to the reflection off the QPD. The pattern might come from the gaps at the QPD center. It wasn't clear whether the reflection was directly hitting the barrel through the AR side, or hit the 99% coating and was reflected towards the barrel.

(There was also some IR visible in the input aperture but the beam is much smaller than this aperture, I believe we're seeing the scattered light coming back to this aperture from inside the array structure.)

We pulled the spare tilt washer D1400146-V1 (drawing with my red lines in the 3rd attachment) and measured the depth of the recess at the 12 o'clock position (red A in the drawing), 3:00 (B), 6:00 (C), and 9:00 (D) using a caliper. It's a rough measurement, but we repeated it twice and got the following:

Depth (mm)  A (12:00)  B (3:00, registration mark)  C (6:00)  D (9:00)
Meas 1      1.45       1.21                         1.45      1.70
Meas 2      1.41       1.21                         1.49      1.71
Average     1.43       1.21                         1.47      1.705

Clearly B at the registration mark is the shallowest position and the opposite position D is the deepest. The recess diameter was measured to be 23.0mm (specified as between .906 and .911", or 23.01 to 23.14mm), so the tilt of the recess as measured is (1.705-1.21)/23 ~ 21.5mrad or 1.2 deg, which reasonably agrees with the 1.41deg specification and, more importantly, these measurements cannot be explained if the part was manufactured as specified in the drawing.
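The tilt arithmetic above can be reproduced directly from the averaged depths (a quick check using only numbers from this entry):

```python
from math import degrees

# Averaged recess depths (mm) at the four clock positions measured above
depth_mm = {"A": 1.43, "B": 1.21, "C": 1.47, "D": 1.705}
recess_diameter_mm = 23.0

# Tilt across the recess: deepest (D) minus shallowest (B, registration mark)
tilt_rad = (depth_mm["D"] - depth_mm["B"]) / recess_diameter_mm
print(f"recess tilt ~ {1000 * tilt_rad:.1f} mrad = {degrees(tilt_rad):.2f} deg")  # ~21.5 mrad = 1.23 deg
```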

It seems that the drawing of the tilt washer D1400146 is incorrect or at least doesn't agree with reality, and the assembly drawing D1101059 was correct in that following that will give us the QPD tilt along optical PIT.

Seeing how the QPD reflection hits the barrel of the 99:1 sampler, the ghost beam dumping doesn't look well thought out but that's what it is.

4th picture shows the registration mark of the tilt ring as was set in the lab for future reference.

We've done the last QPD scan (turns out that I happened to set the PIT-YAW angle really well). Data will be posted later. Now we're ready to pack things up.

Images attached to this comment
keita.kawabe@LIGO.ORG - 16:47, Monday 27 October 2025 (87779)

We "measured" the dimension of the new (non-functional) QPD clamp D1300963-V2 by taking a picture with a ruler.

The offset of the center bore along the line connecting the two screw holes was measured to be about 1.9mm, which agrees pretty well with the above alog where I wrote "(even though this number is not specified anywhere in the drawing) or 1.8mm".

Images attached to this comment
H1 ISC
anthony.sanchez@LIGO.ORG - posted 17:25, Friday 24 October 2025 (87724)
DRMI Locking investigations

Using this Summary website I was able to grab all times that the ISC_LOCK Guardian was in Acquire_DRMI_1F [101] since the beginning of the O4 run, along with which ISC_LOCK state each visit came from and which state it went to.
Peculiar issue with the website data: the Local and UTC times are the same for some reason, which I thought was strange, but the GPS times are fine.

I was able to use the data to make a spreadsheet which I color coded for your TechnoColor viewing pleasure! 
Green is a successful LOCK of DRMI that took us to the next locking state [102].
Red..... is a lockloss, boo!
Orange indicates that ISC_LOCK went from trying to DRMI to trying to PRMI.
 
I then took the duration ISC_LOCK spent in Acquire_DRMI_1F and pasted it into a different cell depending on whether the lock was successful, went to PRMI, lost lock, or went to another state.

That allowed me to make more Histograms of DRMI data!  
AcqDRMI1f-DRMILocked_Hist.png is a histogram of all the durations that DRMI has successfully locked since the beginning of O4.
DRMI1F-PRMI.png depicts the durations of all the times DRMI failed to lock and got kicked down to PRMI.
DRMI-Lockloss.png shows us all the times we locklossed from Acquire_DRMI_1F.


I also used Statecounter to break the O4 run into ~month-long sections of 30 days, starting on the first day of the run.
Month with the lowest average time spent locking DRMI: October 20th - Nov 19th 2023 @ 1.439 minutes
Month with the highest average time spent locking DRMI: July 2024 @ 13.750 minutes (context: we only locked twice in this time frame)
Month with the 2nd highest average time spent locking DRMI: October 2025 @ 8.351 minutes.

If the metric we use to determine the worst DRMI locking era is the percentage of DRMI locking attempts lasting longer than 5 minutes, that award goes to August 10th - Sep 9th 2025, with a stunning 61% of attempts to lock DRMI above 5 minutes! By contrast, at the beginning of the run we spent 4 months in a row never exceeding 5 minutes in DRMI. It does seem like it got progressively worse.
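As an illustration of the metrics above, here is a minimal sketch of computing the average DRMI duration and the fraction of attempts over 5 minutes. The records list is made-up example data, not the real Statecounter output:

```python
from statistics import mean

# Hypothetical records of time spent in Acquire_DRMI_1F: (outcome, minutes).
# The real numbers came from the summary pages / Statecounter, not this list.
records = [
    ("locked", 2.1), ("locked", 7.4), ("prmi", 12.0),
    ("lockloss", 3.2), ("locked", 1.0), ("prmi", 6.5),
]

durations = [minutes for _, minutes in records]
avg_minutes = mean(durations)

# Metric used above: fraction of attempts lasting longer than 5 minutes
frac_over_5 = sum(d > 5 for d in durations) / len(durations)

print(f"average time in DRMI: {avg_minutes:.3f} min")        # 5.367 min
print(f"{100 * frac_over_5:.0f}% of attempts exceeded 5 minutes")  # 50%
```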
 

Images attached to this report
Non-image files attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:53, Friday 24 October 2025 (87731)
Ops Day Shift Summary

TITLE: 10/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Busy day of continuous troubleshooting of H1. Things have been learned and progress has been made, but currently still waiting to relock after discovering an incorrect whitening setting and swapping a sat amp to see if those help with some of our instabilities. We've had some locklosses at DRMI_3F, but waiting for DRMI ASC to fully converge on this most recent try seemed to work. Currently relocking up to MOVE_SPOTS.

LOG:

Start  System  Name          Location  Lazer_Haz  Task                    End
14:32  FAC     Randy         Y-arm     N          Caulking BTE            17:17
17:03  VAC     Travis        MX, MY    N          Checking pumps          17:34
17:20  FAC     Randy         CS, EX    N          Moving boom lift to EX  17:48
17:44  ISC     Keita, Rahul  Opt Lab   Local      ISS array work          18:43
20:27  VAC     Travis        MY        N          Pump measurement        20:43
H1 CDS
david.barker@LIGO.ORG - posted 16:10, Friday 24 October 2025 - last comment - 16:20, Friday 24 October 2025(87732)
New SDF settings differences lists

My first SDF post today, which was based on getting archive files from subversion commits, doesn't appear to be complete and in addition may have some false positives.

I have generated new lists, this time getting the archived OBSERVE.snap files from the CDS backups of userapps by the file server.

In the directory /ligo/home/david.barker/tuesdaymaintenance/24oct2025/sdf/observe there are:

mon20oct2025/ Directory of backups as of 00:39 early monday morning

wed22oct2025/ Directory of backups as of 00:39 early wednesday morning

thu23oct2025/ Directory of snaps as of 00:39 early thursday morning

mon-wed_diffs.txt Text file of settings changed between the monday and wednesday backups

mon-thu_diffs.txt Text file of settings changed between the monday and thursday backups

snap_diffs.bsh Shell script to generate the diffs files

snapfiles Text file of the userapps "OBSERVE.snap" files as symbolically linked from the target directories
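For reference, a sketch of what a snap-file diff like the one described above might do. The directory layout, file naming, and line format here are assumptions for illustration, not the contents of the actual snap_diffs.bsh script:

```python
import glob
import os

def snap_diffs(old_dir, new_dir):
    """Collect per-channel differences between archived OBSERVE.snap files.

    Returns {snap_filename: [(old_line, new_line), ...]} for settings whose
    line changed between the two backup directories.
    """
    def index(path):
        # Map channel name (first whitespace-separated token) -> full line
        table = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    table[line.split()[0]] = line
        return table

    diffs = {}
    for old_path in sorted(glob.glob(os.path.join(old_dir, "*OBSERVE.snap"))):
        name = os.path.basename(old_path)
        new_path = os.path.join(new_dir, name)
        if not os.path.isfile(new_path):
            continue
        old, new = index(old_path), index(new_path)
        changed = [(old[ch], new[ch])
                   for ch in old if ch in new and old[ch] != new[ch]]
        if changed:
            diffs[name] = changed
    return diffs
```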

 

Comments related to this report
jennifer.wright@LIGO.ORG - 16:20, Friday 24 October 2025 (87733)

Didn't find any other smoking guns, just the LSC-POP_A filter selection being wrong, making the anti-whitening gain wrong (see alog #87728), which we already knew about from the commit comparison Dave did previously (see alog #87718):

-H1:LSC-POP_A_RF45_I_SW1S 1 4.12400000000000000000e+03 0xffffffff
+H1:LSC-POP_A_RF45_I_SW1S 1 1.05200000000000000000e+03 0xffffffff

-H1:LSC-POP_A_RF45_Q_SW1S 1 4.12400000000000000000e+03 0xffffffff
+H1:LSC-POP_A_RF45_Q_SW1S 1 1.05200000000000000000e+03 0xffffffff
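The two SW1S values above differ only in which switch bits are set. A small sketch to see exactly which bit positions changed (the mapping of bit positions to specific filter-module buttons is not reproduced here):

```python
def changed_bits(a, b):
    """Bit positions where two filter-bank switch words differ."""
    x = a ^ b
    return [i for i in range(x.bit_length()) if (x >> i) & 1]

# SW1S values from the diff above: 4124 (expected) vs 1052 (as found)
print(changed_bits(4124, 1052))  # [10, 12]
```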

 

H1 General
ryan.crouch@LIGO.ORG - posted 15:50, Friday 24 October 2025 (87730)
OPS Friday EVE shift start

TITLE: 10/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.53 μm/s 
QUICK SUMMARY:

H1 ISC
elenna.capote@LIGO.ORG - posted 14:11, Friday 24 October 2025 (87728)
POP A RF45 whitening set wrong

We (Sheila, Jennie, Ryan, Oli) have found that the POP A RF45 antiwhitening gain was set incorrectly. We tracked this problem down to the SDF being saved incorrectly Monday night along with various dark offsets (87598). The antiwhitening gain should be -15 dB but was set to -21 dB. Additionally, the new dark offsets were doubled. We think this problem explains why the MICH gain needed to be doubled this week (87645).

I reset the safe SDF to have the correct LSC POP antiwhitening filters, and reverted the dark offsets as well.

Extra kudos to Jennie for scrolling through Dave's SDF difference file (400 settings!) to find this mismatch, 87718.

Sheila edited the guardian to remove the MICH gain doubling.

Images attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 11:39, Friday 24 October 2025 - last comment - 12:48, Friday 24 October 2025(87722)
ETMY L2 satamp reinstated (reswapped) to try and solve locking issues

Marc, Oli

In 87713 it's noted that some of the strange behavior we've been seeing with the locks seems to be related to the ETMY L2 stage. The ETMY L2 satamp was swapped out on Oct 14th from S1100137 to S1100127 as part of the ECR E2400330 upgrade (87469). So, even though we were seemingly doing fine with this ETMY L2 satamp, we decided to swap it out this morning to see if that was the cause of our issues. Coincidentally, the spare satamp we have available that was modified for ECR E2400330 and that Jeff had characterized was the satamp that had come from ETMY L2, S1100137. So we decided to put that one back in; it was fine before, although it has obviously now been modified for the satamp upgrade. So it's not been swapped back to exactly what it was before, but this is the closest we can get!

Marc and I took S1100137 down to EY and swapped out S1100127 for S1100137. I edited the fitresults file for S1100137 to work for ETMY L2 and replaced the previous OSEMINF 5.3:0.1 compensation filters with the best possible compensation filters for this new satamp.

Updating compensation filters
$ py satampswap_bestpossible_filterupdate_ECR_E2400330.py -o ETMY_L2
All updated filters grabbed for ETMY
ETMY L2 UL compensation filter updated to zpk([5.25],[0.0959],1,"n")
ETMY L2 LL compensation filter updated to zpk([5.21],[0.0953],1,"n")
ETMY L2 UR compensation filter updated to zpk([5.18],[0.0949],1,"n")
ETMY L2 LR compensation filter updated to zpk([5.21],[0.0955],1,"n")
write /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSETMY.txt
Done writing updated filters for ETMY

Swap timeline

Date                  Serial Number  Whitening z:p (Hz)
Before Oct 14th 2025  S1100137       10:0.4
Oct 14th 2025         S1100127       5.3:0.1
Oct 24th 2025         S1100137       5.3:0.1

Satamp characterization data

Here's the characterization data and fit results for S1100137, assigned to ETMY L2's ULLLURLR OSEMs.

This sat amp is a UK 4CH sat amp, D0900900 / D0901284. The data was taken per methods described in T080062-v3, using the diagrammatic setup shown on PAGE 1 of the Measurement Diagrams from LHO:86807.

The data was processed and fit using ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Scripts/
plotresponse_S1100137_ETMY_L2_20251020.m

Explicitly, the fit to the whitening stage zero and pole, the transimpedance feedback resistor, and foton design string are:

Optic  Stage  Serial_Number  Channel_Number  OSEM_Name  Zero_Pole_Hz  R_TIA_kOhm  Foton_Design
ETMY   L2     S1100137       CH1             UL         0.0959:5.25   120.875     zpk([5.25],[0.0959],1,"n")
                             CH2             LL         0.0953:5.21   121.625     zpk([5.21],[0.0953],1,"n")
                             CH3             UR         0.0949:5.18   121.625     zpk([5.18],[0.0949],1,"n")
                             CH4             LR         0.0955:5.21   121.5       zpk([5.21],[0.0955],1,"n")
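The foton design strings in the table follow a simple convention: the compensation filter inverts the measured whitening zero:pole pair. A small sketch of that convention (an illustration, not the actual fitting script):

```python
def compensation_design(satamp_zero_hz, satamp_pole_hz):
    """Invert a measured satamp whitening zero:pole pair into a foton
    compensation design string, matching the table convention above."""
    return f'zpk([{satamp_pole_hz:g}],[{satamp_zero_hz:g}],1,"n")'

# CH1/UL fit from the table: zero:pole = 0.0959:5.25
print(compensation_design(0.0959, 5.25))  # zpk([5.25],[0.0959],1,"n")
```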

The attached plot and machine readable .txt file version of the above table are also found in ${SusSVN}/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/
2025-10-20_UKSatAmp_S1100137_D0901284-v5_fitresults.txt

Per usual, R_TIA_kOhm is not used in the compensation filter -- but after ruling out an adjustment in the zero frequency (by zeroing the phase residual at the lowest few frequency points), Jeff nudged the transimpedance a bit to get the magnitude scale within ~0.25%, shown in the attached results. Any scaling like this will instead be accounted for with the absolute calibration step, i.e. Side Quest 4 from G2501621, a la what was done for the PR3 and SR3 top masses in LHO:86222 and LHO:84531 respectively.

Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 12:48, Friday 24 October 2025 (87726)

This swap seems to have fixed the saturation problems, great work!

H1 SUS
elenna.capote@LIGO.ORG - posted 09:04, Friday 24 October 2025 - last comment - 12:47, Friday 24 October 2025(87713)
ETMY reaction chain length tracking causing saturations

We have been getting saturation warnings on EY after going through the lownoise coil drivers state. We finally tracked it down to the R0 F2 and F3 drivers. The reaction chain length drive (aka reaction chain tracking) is causing these saturations. This servo is run using the L2 OSEMs as a witness to then drive the R0 chain to follow the main chain. I turned off the length drive and the saturations stopped. When I turn it back on it continues to saturate.

The only recent change I can think of is the L2 satamp swap on all the test masses.

Comments related to this report
sheila.dwyer@LIGO.ORG - 09:25, Friday 24 October 2025 (87714)

Adding a comment: we've only had these saturations intermittently the last few days; in this lock it is now continuous if the R0 tracking is on. The sat amp swap happened Oct 14th, so perhaps this is some delayed consequence of the swap.

elenna.capote@LIGO.ORG - 09:51, Friday 24 October 2025 (87715)

I was also reminded that, as of Tuesday this week, the EY tiltmeter is possibly not performing as well as it could either.

elenna.capote@LIGO.ORG - 12:47, Friday 24 October 2025 (87725)

After the ETMY L2 satamp was swapped back, we no longer get saturations on EY with R0 tracking on!

LHO General
ryan.short@LIGO.ORG - posted 07:34, Friday 24 October 2025 - last comment - 13:28, Friday 24 October 2025(87710)
Ops Day Shift Start

TITLE: 10/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 8mph Gusts, 6mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.62 μm/s 
QUICK SUMMARY: H1 was down overnight due to ongoing locking issues. I'll start by running through an initial alignment while I get caught up on events of the past couple days.

Comments related to this report
ryan.crouch@LIGO.ORG - 13:28, Friday 24 October 2025 (87727)SUS

After confirming it was damping well over multiple locks, I added the new settings for ITMX13 (-30 phase with a gain of -1.0) to lscparams and reloaded the GRD.
