LHO General
ryan.short@LIGO.ORG - posted 16:33, Sunday 26 October 2025 (87756)
Ops Day Shift Summary

TITLE: 10/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Two lock acquisitions today, both of which had locklosses during TRANSITION_FROM_ETMX on the first try. This made the relocking times a bit longer, but generally the process was automatic. H1 has been locked for 2.5 hours.

H1 General (ISC, OpsInfo, SUS)
ryan.short@LIGO.ORG - posted 16:27, Sunday 26 October 2025 (87755)
Summary of H1 Locking Troubles and Changes for the Week of Oct. 20th-25th

Now that we're back to observing with regularity after a week full of intense troubleshooting, I decided it would be nice to have, all in one place, a summary of the events that contributed to IFO locking issues in the past week and what was done along the way to fix them, so that's what I'll attempt to do in this alog. Anyone is encouraged to add comments with things I missed or additional commentary. I've bolded the things I believe to be the most significant changes, either contributing to or fixing IFO locking issues.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 12:40, Sunday 26 October 2025 - last comment - 14:40, Sunday 26 October 2025(87753)
Lockloss @ 18:54 UTC

Lockloss @ 18:54 UTC after 1:41 locked - link to lockloss tool

No obvious cause, but I did see a couple glitches in the PRG a few seconds before the lockloss.

Comments related to this report
ryan.short@LIGO.ORG - 14:40, Sunday 26 October 2025 (87754)

Back to observing at 20:59 UTC.

Had another lockloss during TRANSITION_FROM_ETMX so relocking took a bit longer than usual. Fully automatic otherwise, except for some touchup of PRM I did during DRMI acquisition.

H1 PSL
ryan.short@LIGO.ORG - posted 11:44, Sunday 26 October 2025 (87752)
PSL Status Report - Weekly

FAMIS 27400

Laser Status:
    NPRO output power is 1.858W
    AMP1 output power is 70.71W
    AMP2 output power is 140.2W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 33 days, 0 hr 29 minutes
    Reflected power = 25.05W
    Transmitted power = 106.7W
    PowerSum = 131.8W

FSS:
    It has been locked for 0 days 2 hr and 11 min
    TPD[V] = 0.5278V

ISS:
    The diffracted power is around 3.4%
    Last saturation event was 0 days 3 hours and 41 minutes ago


Possible Issues:
    PMC reflected power is high
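
As a quick sanity check of the PMC numbers above (back-of-envelope arithmetic only, not part of the FAMIS procedure; the warning threshold is a placeholder, not an official limit):

    pmc_refl = 25.05    # W, PMC reflected power
    pmc_trans = 106.7   # W, PMC transmitted power
    power_sum = 131.8   # W, reported PowerSum

    # Reflected + transmitted accounts for the reported sum: 131.75 W ~ 131.8 W
    assert abs((pmc_refl + pmc_trans) - power_sum) < 0.1

    # Flag the "PMC reflected power is high" condition noted above
    REFL_WARN_W = 20.0  # placeholder threshold for illustration only
    if pmc_refl > REFL_WARN_W:
        print(f"PMC reflected power is high: {pmc_refl} W")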

LHO VE
david.barker@LIGO.ORG - posted 10:27, Sunday 26 October 2025 (87750)
Sun CP1 Fill

Sun Oct 26 10:09:38 2025 INFO: Fill completed in 9min 34secs

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:53, Sunday 26 October 2025 - last comment - 10:32, Sunday 26 October 2025(87749)
Ops Day Shift Start

TITLE: 10/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 31mph Gusts, 23mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.52 μm/s 
QUICK SUMMARY: H1 has been down since 07:09 UTC and struggled to lock overnight due to high winds. Gusts are still high this morning, but I'll try to run H1 through an initial alignment and locking to see how far it gets.

Comments related to this report
ryan.short@LIGO.ORG - 10:32, Sunday 26 October 2025 (87751)

H1 back to observing at 17:23 UTC

Ran an initial alignment then had H1 relock on its own. One lockloss during TRANSITION_FROM_ETMX, which I'm blaming on a combination of wind and microseism, but H1 made it to low noise on the second try fully automatically. I soon saw the 1 Hz ASC ringup after reaching NLN, so I tried raising the CSOFT_P gain to 30, but didn't see the ringup stop. Out of an abundance of caution I transitioned to high-gain ASC, which did stop the ringup, so a few minutes later I set the CSOFT_P gain back to 25.

I also had to run the switch_nom_sqz_states script for observing with squeezing to fix the nominal states of the SQZ Guardians since it looks like Tony must have run it at some point overnight.
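
For context, a rough sketch of what a script like switch_nom_sqz_states presumably does: point the NOMINAL state of the SQZ Guardian node(s) at the configuration we intend to observe in. The channel pattern and state names below are illustrative assumptions, not a copy of the real script:

    from epics import caput  # pyepics

    # Assumed mapping of SQZ Guardian nodes to their nominal states when
    # observing with squeezing (an observing-without-squeezing configuration
    # would point SQZ_MANAGER at a state like 'NO_SQUEEZING' instead).
    NOMINAL_WITH_SQZ = {
        'SQZ_MANAGER': 'FREQ_DEP_SQZ',
    }

    for node, nominal in NOMINAL_WITH_SQZ.items():
        caput(f'H1:GRD-{node}_NOMINAL', nominal)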

H1 General (Lockloss, SQZ)
anthony.sanchez@LIGO.ORG - posted 00:57, Sunday 26 October 2025 - last comment - 02:29, Sunday 26 October 2025(87747)
Lockloss probably from wind.

H1 called me on my owl shift.
When I logged on, I was surprised to learn that we were only having trouble with the SQZ system, despite the wind howling and gusting to over 40mph.

Since then, the SQZ_MAN had been trying to get the SQZr back to FDS but kept getting hung up on FC_WAIT_FDS and would drop back down.
I was going to go to Observing without squeezing to troubleshoot, but we had a lockloss that I'm going to blame on the wind.

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1445499787

Running an initial alignment now.

Comments related to this report
anthony.sanchez@LIGO.ORG - 02:29, Sunday 26 October 2025 (87748)

Holding H1 in DOWN until the wind stops breaking the lock.

LHO General
corey.gray@LIGO.ORG - posted 21:59, Saturday 25 October 2025 (87744)
Sat EVE Ops Summary

TITLE: 10/26 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

H1 finally back in business.  Had one lockloss during the shift, but relock was mostly automatic.  Winds picked up in the last hour & microseism is still high.  Owl Shift is on for tonight.  H1 range is hovering around 153Mpc.
LOG:

H1 General
corey.gray@LIGO.ORG - posted 18:18, Saturday 25 October 2025 - last comment - 19:57, Saturday 25 October 2025(87745)
Lockloss H1 After Almost 3.5hrs

H1 just had a lockloss; this was after observing for 30 minutes with H1 ASC Hi Gn engaged to ride out an EQ.

Comments related to this report
corey.gray@LIGO.ORG - 19:57, Saturday 25 October 2025 (87746)

This relock was mostly automatic (I touched up ETMy/TMSy a tiny amount).  After that, H1 went through PRMI and CHECK_MICH_FRINGES and then returned to Observing, all automatically.

LHO General
corey.gray@LIGO.ORG - posted 17:02, Saturday 25 October 2025 (87743)
Sat EVE Ops Transition

TITLE: 10/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 12mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.55 μm/s 
QUICK SUMMARY:

H1 has been locked at NLN over 2hrs (yay!) after a rough week of no observing since Mon night.  Ryan gave me a great summary of the issues/saga that went on all the way up till this morning when RyanS, Sheila, & Elenna fixed H1!

OPS ASSUMPTION:  With H1 appearing back to normal, will assume there's an OWL shift for Tony (unless I hear otherwise)

OPS Handoff from RyanS:

We did just get a warning of an EQ, an M6 in the South Pacific, AND the EQ Response graph has this EQ squarely on the "HIGH ASC" line.  Because of this (and the high microseism and the locking issues of the week), I will proactively take H1 out of Observing to transition to ASC Hi Gn a few minutes before the R-wave arrives (timer set!)

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:57, Saturday 25 October 2025 (87742)
Ops Day Shift Summary

TITLE: 10/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Finally was able to get H1 back to observing today; see my earlier alogs for details on the efforts there. Since getting recovered, we had one lockloss from an unknown source but were able to relock easily. H1 has now been locked for just over 2 hours.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 14:03, Saturday 25 October 2025 - last comment - 14:57, Saturday 25 October 2025(87739)
Lockloss @ 20:20 UTC

Lockloss @ 20:20 UTC after 1:15 locked - link to lockloss tool

Sadly a lockloss shortly after a triumphant return to observing. Nothing ringing up that I can see, environment is generally calm except for the consistently elevated microseism, and the lockloss felt quick, so no obvious cause here.

Comments related to this report
ryan.short@LIGO.ORG - 14:57, Saturday 25 October 2025 (87741)

Back to observing at 21:49 UTC. Ran an initial alignment then lock acquisition went fully automatically. No sign of bounce modes, roll modes, or ASC ringups at any point.

H1 General (ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 13:56, Saturday 25 October 2025 - last comment - 14:27, Saturday 25 October 2025(87738)
H1 Returns to Observing

Executive summary: H1 returned to observing as of 19:35 UTC after a challenging week.

Following my lock acquisition and troubleshooting attempts this morning, H1 was able to relock fairly easily again all the way up to LOWNOISE_LENGTH_CONTROL. I requested LOWNOISE_ASC and began watching carefully for any rise in the ~9.8 Hz bounce mode, which I saw quickly start to come up in the early seconds of LOWNOISE_ASC. Once the state finished, my first reaction to try to stop this increase was to transition back to high-gain ASC, which I did using the script on the ISI config screen.

Looking back in the ISC_LOCK Guardian log, I was reminded that right before LOWNOISE_ASC we had gone through DAMP_BOUNCE_ROLL, which I had assumed was not really doing anything since the roll mode damping gain was commented out. However, I saw that this state was setting an entry in the LSC output matrix to -1, which I traced to the entry that sends DARM control to ETMY, meaning we likely have been unintentionally exciting the ETMY bounce mode. After confirming that this was very much incorrect, I set the matrix value back to 0 and almost immediately saw the 9.8 Hz peak and the bounce mode monitors start to drop. After a few minutes of watching things calm down, I transitioned back to lownoise ASC and saw no sign of the mysterious hump around 60 Hz that had been seen yesterday. With things looking good so far, I requested NOMINAL_LOW_NOISE, which H1 reached without issue.

It appears the line in the DAMP_BOUNCE_ROLL state code to send DARM control to ETMY has been there for some time, and a comment there says it's to allow for roll damping on ETMY, which would use DARM as the error signal, so this makes some sense. Before yesterday, this state was being run right before we power up from 2W, and a later state would correct the LSC matrix settings so that no erroneous actuation was being sent to ETMY. To remedy this, I have commented out the line that sets the LSC matrix from DAMP_BOUNCE_ROLL. The roll damping we have been using is still commented out as well, so this state essentially does nothing right now and remains between LOWNOISE_LENGTH_CONTROL and LOWNOISE_ASC.
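
For reference, a minimal Guardian-style sketch of what this change amounts to, assuming the standard GuardState class and the framework-provided ezca object; the matrix element and channel name in the comments are placeholders, not the actual H1 settings:

    from guardian import GuardState

    class DAMP_BOUNCE_ROLL(GuardState):
        def main(self):
            # This is the kind of line that was routing DARM drive to ETMY
            # (so roll damping could use DARM as its error signal) and that
            # has now been commented out to stop unintentionally exciting
            # the ETMY bounce mode; the channel name is a placeholder:
            # ezca['LSC-OUTPUT_MTRX_<ETMY row>_<DARM col>'] = -1
            #
            # The roll-mode damping itself also remains commented out, so
            # the state currently does nothing and just completes.
            return True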

Soon after reaching NLN, I noticed the familiar 1 Hz ASC oscillation was starting to ring up, seen mostly in CSOFT_P and INP1_P.  At Elenna's direction, I increased the CSOFT_P gain from 20 to 25, and the oscillation started to turn around and subside after a few minutes. Elenna has updated ISC_LOCK to set the final gain of CSOFT_P to 25, and commented out the 30 minute reduction of the gain in the THERMALIZATION node.
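
A minimal sketch of that ISC_LOCK change, written in the Guardian style where the ezca EPICS interface is provided by the framework; the ASC-CSOFT_P filter-module name is my assumption of how the CSOFT pitch loop is addressed:

    # Final CSOFT_P gain now set to 25 instead of 20 at the end of lock
    # acquisition; with the THERMALIZATION node's 30-minute gain reduction
    # commented out, this value is left alone after reaching NLN.
    ezca['ASC-CSOFT_P_GAIN'] = 25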

Elenna and I then tackled the outstanding SDF diffs, which all ended up being accepted, and are documented in the attached screenshot.

Since squeezing looked like it could be improved, I ran SQZ_MANAGER through 'SCAN_SQZANG_FDS', which noticeably improved DARM at high frequency.

With things looking about as wrapped up as they could be, I set H1 to start observing for the first time in a few days at 19:35 UTC.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:27, Saturday 25 October 2025 (87740)

At Sheila's suggestion, I've entirely removed DAMP_BOUNCE_ROLL from the main locking path since the state right now does nothing. LOWNOISE_LENGTH_CONTROL will now go straight into LOWNOISE_ASC. If we decide later we want to be damping roll modes again, it would be simple to uncomment the lines for the edges around this state.
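
In Guardian terms the change is just an edit to the module-level edges list in ISC_LOCK, roughly as sketched below (abbreviated; only the edges named above are shown):

    edges = [
        # ...
        # Old path, now commented out along with the state's contents:
        # ('LOWNOISE_LENGTH_CONTROL', 'DAMP_BOUNCE_ROLL'),
        # ('DAMP_BOUNCE_ROLL', 'LOWNOISE_ASC'),
        # New direct edge:
        ('LOWNOISE_LENGTH_CONTROL', 'LOWNOISE_ASC'),
        # ...
    ]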

H1 General
ryan.short@LIGO.ORG - posted 11:12, Saturday 25 October 2025 (87737)
Morning Locking Progress

Started the day by running an initial alignment and relocking up to LOWNOISE_LENGTH_CONTROL with no issues along the way. Eventually spent 1hr 20min in this state and took a PUM/ESD crossover measurement using the template userapps/lsc/h1/templates/PUM_crossover_2024.xml, see attachment. Sheila confirms even with the lower coherence around the crossover frequency of 20 Hz, this is a good measurement and lines up well enough with the 2024 reference.

I then requested 'INCREASE_DARM_OFFSET', since that's where Ryan C. notes he started to see the ~9.8 Hz bounce modes start to ring up, and waited. I soon started to see the ETM bounce modes (at least according to the monitors) increasing. I tried applying a damping gain of 1 on ETMX with the filters already set and saw a small response, but the mode was still increasing. Sheila suggested lowering the DARM offset from 10.75 to 7 to possibly buy more time, so I did. Changing the ETMX damping gain around some more did affect the mode (especially turning it off), so it may be possible to damp these, but it's also very possible these filters are very outdated. I also transitioned to high-gain ASC between my damping gain steps of 1 and 2 and didn't see an appreciable change in the mode. I didn't get a chance to try many different settings on either ETM before we lost lock, and since the OMC DCPDs weren't close to saturating, lowering the DARM offset may not have helped after all. See the ndscope screenshot for a summary of these attempts. The roll mode damping is still commented out in ISC_LOCK, so it never came on during any of this, but I never saw that mode increase at all.

On the next lock acquisition, I'll go slower after LOWNOISE_LENGTH_CONTROL to see if I can get a better idea of where this ringup starts.

Images attached to this report