Reports until 10:21, Tuesday 12 November 2024
H1 PSL
daniel.sigg@LIGO.ORG - posted 10:21, Tuesday 12 November 2024 (81223)
Power down unused PSL equipment

Image 1: PSL top eurocrate
From left to right: ISS inner loop, old ISS second loop (depowered), reference cavity heater (depowered), and TTFSS field box.

Image 2: PSL bottom eurocrate
From left to right: injection locking field box (depowered), injection locking servo (depowered), monitor field box, PMC locking field box, and PMC locking servo.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:10, Tuesday 12 November 2024 (81222)
Tue CP1 Fill

Tue Nov 12 10:07:17 2024 INFO: Fill completed in 7min 13secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 07:45, Tuesday 12 November 2024 - last comment - 09:12, Tuesday 12 November 2024(81219)
Tuesday Ops Day Shift Start

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 18mph Gusts, 14mph 3min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.78 μm/s
QUICK SUMMARY:

Good Maintenance Tuesday Morning Everyone.
H1 was in move spots when I walked in and shortly thereafter caught a lockloss.
So In_Lock_SUS_Charge measurements will not be run this week.
List of expected maintenance items:

Comments related to this report
david.barker@LIGO.ORG - 09:12, Tuesday 12 November 2024 (81221)

After discussion with Tony, we decided not to install his new PCAL Guardian nodes today, which means no DAQ restart today (my VACSTAT ini change is target-of-opportunity if a DAQ restart were to happen for other reasons).

H1 General
ryan.crouch@LIGO.ORG - posted 01:32, Tuesday 12 November 2024 - last comment - 04:04, Tuesday 12 November 2024(81217)
H1 OPS OWL assistance

H1 called for help when the initial alignment timer expired at 09:00 UTC. We were locking green arms with the Y arm struggling to lock; winds were semi-high and microseism was above the 90th percentile. I adjusted ETMY in pitch and it was finally able to lock. I finished the IA at 09:32 UTC.

Comments related to this report
ryan.crouch@LIGO.ORG - 04:04, Tuesday 12 November 2024 (81218)

Struggling to hold the arms due to FSS oscillations and the wind knocking them out. Per windy.com, the wind isn't predicted to calm down until 15:00 UTC, and then only for an hour before it rises even higher than it is currently. The secondary microseism isn't helping either.

As of 10:30 UTC I can hold the arms, but the FSS keeps oscillating; holding in DOWN until it passes (it only took a few minutes).

10:47 UTC ENGAGE_ASC_FOR_FULL_IFO lockloss

11:07 UTC RESONANCE lockloss from DRMI losing it

11:23 UTC RESONANCE lockloss again from DRMI

11:28 UTC FSS keeps oscillating

11:46 UTC PREP_ASC lockloss from the IMC

12:00 UTC ENGAGE_ASC lockloss

The 3-minute average for the wind is just over 20 mph with gusts over 25; combined with the secondary microseism fully above the 90th percentile, relocking this morning is unlikely.

12:35 UTC LOWNOISE_COIL_DRIVERS lockloss

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Monday 11 November 2024 (81216)
OPS Eve Shift Summary

TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is LOCKING at ACQUIRE_PRMI

For the first third of shift, commissioners were troubleshooting and investigating the PSL glitch after a new finding emerged: the glitch is seen in the PMC mixer with the IMC offline and the ISS and FSS off (alog 81207 and its many comments). IFO was in MAINTENANCE, wind gusts were over 30mph, and microseism was very high (troughs higher than the 90% line and peaks higher than the 100% line for secondary).

For the second third of shift, I was attempting to lock since the winds had calmed down, but got caught up during initial alignment since the SRM M3 watchdog would trip incessantly during SRC align. I assume this is due to the microseism being so high, since this doesn't normally happen and it's the main unusual condition. After initial alignment finished, IFO reached NLN fully automatically. After an SDF diff clearance (attached), we were observing for a whole 27 minutes!

For the third third of shift, I was attempting to relock following a lockloss (alog 81215) caused by a 50mph wind gust that even my less-than-LIGO-sensitive ears could detect. I also confirmed it wasn't the PSL. I waited in DOWN until the gusts got back below 30mph and attempted to relock. The PSL glitch happened a few times in between (since the IMC was faulting but winds were low), but we got all the way to FIND_IR until another 38mph gust came through. Now, we're stably past ALS, but PRMI locks and unlocks every few minutes due to the environment.

LOG:

None

Images attached to this report
H1 PEM (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 20:54, Monday 11 November 2024 (81215)
Lockloss 04:44 UTC

Lockloss due to a 50mph wind gust. I heard the wind shake the building and immediately thereafter, a lockloss.

I also checked specifically whether it was an IMC glitch, and it wasn't (IMC and ASC channels lost lock at different times, ~250ms apart, and the IMC locked right after the LL).

Short 37 min lock.

 

H1 PSL (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:46, Monday 11 November 2024 - last comment - 19:25, Monday 11 November 2024(81207)
PMC Mixer Glitches with FSS and ISS off (and IMC Offline)

Ibrahim, Tony, Vicky, Ryan S, Jason

Two screenshots below answer the question of whether the PMC mixer glitches on its own with the FSS and ISS out. It does.

PSL team is having a think about what this implies.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 18:27, Monday 11 November 2024 (81209)SQZ

Jason, Ryan, Ibrahim, Elenna, Daniel, Vicky - This test had only PSL + PMC for ~2 hours, from 2024/11/11 23:47:45 UTC to 02:06:14 UTC. No FSS, ISS, or IMC.

Mixer glitch #1 @ 1415406517 - 1st PSL PMC mixer REFL glitches. Again, only PSL + PMC. No FSS, ISS, IMC.

The squeezer was running TTFSS, which locks the squeezer laser frequency to the PSL laser frequency + 160 MHz offset. With SQZ TTFSS running, for glitch #1, the squeezer witnessed glitches in SQZ TTFSS FIBR MIXER, and PMC and SHG demod error signals.

This seems to suggest the squeezer is following real, free-running PSL laser frequency glitches, since there is no PSL FSS servo actuating on the PSL laser frequency.
  -  also suggests the 35 MHz PSL (+SQZ) PMC LO VCO is not the main issue, since SQZ witnesses the PSL glitches in the SQZ FIBR MIXER.
  -  also suggests PMC PZT HV is not the issue. Without PSL FSS, any PMC PZT HV glitches should not become PSL laser frequency glitches. Caveat the cabling was not disconnected, just done from control room, so analog glitches could still propagate.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 17:57, Monday 11 November 2024 (81211)

Mixer glitch #2 @ 1415408786. Trends here.

Somehow looks pretty different than glitch #1. PSL-PMC_MIXER glitches are not clearly correlated with NPRO power changes. SQZ-FIBR_MIXER sees the glitches, and SQZ-PMC_REFL_RF35 also sees glitches. But notably the SHG_RF24 does NOT see the glitches, unlike before in glitch #1.

For the crazy glitches at ~12 minutes (end of scope) - the SQZ TTFSS MIXER + PMC + SHG all see the big glitch, and there seem to be some (weak) NPRO power glitches too.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 19:25, Monday 11 November 2024 (81212)

Mixer glitches #3 @ 1415409592 - Trends. Ryan and I are wondering if these are the types of glitches that bring the IMC down. But checking the earlier IMC lockloss (Tony, 81199), we can't tell from the trends.

Here - huge PSL freq glitches that don't obviously correlate with PSL NPRO power changes (though maybe a slight step after the first round of glitches?). But these PSL glitches are clearly observed across the squeezer TTFSS + PMC + SHG signals (literally everywhere).

The scope I'm using is at /ligo/home/victoriaa.xu/ndscope/PSL/psl_sqz_glitches.yaml (sorry it runs very slow).

Images attached to this comment
victoriaa.xu@LIGO.ORG - 19:11, Monday 11 November 2024 (81214)

Thinking about the overall set of tests done today, see the annotated trends.

  • PSL + PMC + FSS + IMC  (no ISS) ...  bad, IMC lockloss after 1:15 hours even with ISS OFF, 81198. IMC lockloss at 2024/11/11 20:37:44 UTC (1415392682).
     
  • PSL + PMC  (no FSS, no ISS, no IMC), Test #1...  good? No glitches for 30 minutes. Sheila 81200.
     
  • PSL + PMC + FSS  (no ISS, no IMC) ...  bad, Ref cav did not stay locked. Many glitches (visible in PMC mixer). Sheila 81200.
     
  • PSL + PMC  (no FSS, no ISS, no IMC), Test #2 ...  bad this time. Tried again as Sheila suggested. After ~40 min, saw glitch #1 in the PMC mixer, in turn witnessed by SQZ TTFSS. Bigger glitch #3 seen after ~1.5 hours (this thread).
    • Some glitches have NPRO power glitches, some don't.
    • The fact that squeezer TTFSS sees glitches could suggest these glitches are real and related to free-running PSL laser frequency glitches?
    • In particular - Glitch #3 (above) has similar peak-to-peak on PSL-PMC_MIXER to what unlocked the IMC in the ISS_OFF test earlier today. Glitch #3 also goes on for longer than #1,2.
Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 16:27, Monday 11 November 2024 (81206)
Monday Ops Day Shift End

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

H1 Holding ISC_LOCK in IDLE with the IMC locked, BUT with the ISS and the FSS turned OFF.

Note: the FSS has to be ON to lock the IMC; once locked, turn it off.
Using this command to open the ndscope: ~/../sheila.dwyer/ndscope/PSL/PSL_fast_channels.yaml
We are watching H1:PSL-PMC_MIXER_OUT_DQ for glitches.
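
For reference, a minimal sketch of an offline check on this channel, assuming gwpy with NDS access from a CDS workstation; the 5-sigma threshold is purely illustrative:

    # Sketch: fetch the PMC mixer channel over NDS and flag large excursions.
    # Assumes gwpy + NDS access; the 5-sigma threshold is illustrative only.
    from gwpy.timeseries import TimeSeries

    data = TimeSeries.get('H1:PSL-PMC_MIXER_OUT_DQ',
                          'Nov 11 2024 23:47:45', 'Nov 12 2024 02:06:14')
    resid = data.value - data.value.mean()
    sigma = data.value.std()
    glitchy = data.times[abs(resid) > 5 * sigma]
    print(f"{len(glitchy)} samples more than 5 sigma from the mean")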


LOG:

Start Time System Name Location Laser_Haz Task End Time
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
16:20 FAC Karen Optics & Vac Prep N Technical Cleaning 16:47
16:43 PSL-ISS Sheila Ctrl Rm N ISS investigations. 19:43
16:47 FAC Karen MY N Technical cleaning 17:57
17:14 PSL-ISS Sheila & Elenna LVEA YES Plugging in a cable for ISS injection 17:27
18:39 FAC Kim MX N Technical Cleaning 19:27

 

H1 PSL
anthony.sanchez@LIGO.ORG - posted 16:17, Monday 11 November 2024 (81205)
Increase in noise on PSL Bullseye Sensor

I was trying to trace some of the noise back to when it started, and noticed an increase in the magnitude of the peak noise on the PMC and the Bullseye sensor.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 12:30, Monday 11 November 2024 - last comment - 10:58, Tuesday 12 November 2024(81195)
Suspicious ETMX transition locklosses

We seem to be having lots of locklosses during the transition from ETMX/lownoise ESD ETMX guardian states. With Camilla's help, I looked through the lockloss tool to see if these are related to the IMC locklosses or not. "TRANSITION_FROM_ETMX" and "LOWNOISE_ESD_ETMX" are states 557 and 558 respectively.

For reference, transition from ETMX is the state where DARM control is shifted to IX and EY, and the ETMX bias voltage is ramped to our desired value. Then control is handed back over to EX. Then, in lownoise ESD, some low pass is engaged and the feedback to IX and EY is disengaged.

In total since the start of O4b (April 10), there have been 26 locklosses from "transition from ETMX"

From the start of O4b (April 10) to now, there have been 22 locklosses from "lownoise ESD ETMX"

Trending the microseismic channels, the increase in microseism seems to correspond with the worsening of this lockloss rate for lownoise ESD ETMX. I think, after correcting for the IMC locklosses, the transition from ETMX lockloss rate is about the same. However, the lownoise ESD ETMX lockloss rate has increased significantly.

Comments related to this report
elenna.capote@LIGO.ORG - 15:39, Monday 11 November 2024 (81202)

Here is a look at what these locklosses actually look like.

I trended the various ETMX, ETMY and ITMX suspension channels during these transitions. The first two attachments here show a side-by-side set of scopes, with the left showing a successful transition from Nov 10, and the right showing a failed transition from Nov 9. It appears that in both cases, the ITMX L3 drive has a ~1 Hz oscillation that grows in magnitude until the transition. Both the good and bad times show a separate oscillation right at the transition. This occurs right at the end of state 557, so the locklosses within 1 or 2 seconds of state 558 likely result from whatever the steps at the end of state 557 do. The second screenshot zooms in to highlight that the successful transition has a different oscillation that rings down, whereas the unsuccessful transition fails right where this second ring-up occurs.

I grabbed a quick trend of the microseism, and it looks like the ground motion approximately doubled around Sept 20 (third screenshot). I grabbed a couple of recent successful and unsuccessful transitions since Sept 20, and they all show similar behavior. A successful transition from Sept 19 (fourth attachment) does not show the first 1 Hz ring up, just the second fast ring down after the transition.

I tried to look back at a time before the DAC change at ETMX, but I am having trouble getting any data that isn't minute trend. I will keep trying for earlier times, but this already indicates an instability in this transition that our long ramp times are not avoiding.

Images attached to this comment
elenna.capote@LIGO.ORG - 16:51, Monday 11 November 2024 (81208)

Thanks to some help from Erik and Jonathan, I was able to trend raw data from Nov 2023 (hint: use nds2). Around Nov 8 2023, the microseism was elevated, ~300 counts average on the H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M channel, compared to ~400 counts average this weekend. The attached scope compares the transition on Nov 8 2023 (left) versus this weekend, Nov 10 (right). One major difference here is that we now have a new DARM offloading scheme. A year ago, it appears that this transition also involved some instability causing some oscillation in ETMX L3, but the instability now creates a larger disturbance that rings down in both the ITMX L3 and ETMX L3 channels.
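
As a concrete example of the nds2 hint above (a sketch only; the host, port, and GPS span are illustrative and would need adjusting to the segment of interest):

    # Sketch: pull raw data from Nov 2023 through the nds2 client.
    # Host/port and GPS times are illustrative, not a record of what was run.
    import nds2

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    start, stop = 1383436818, 1383437418   # ~10 min around Nov 8 2023 00:00 UTC
    bufs = conn.fetch(start, stop, ['H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M'])
    print(bufs[0].data.mean(), 'counts average over the span')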

Images attached to this comment
elenna.capote@LIGO.ORG - 18:11, Monday 11 November 2024 (81213)

Final thoughts for today:

  • the lockloss occurs within 1-2 seconds of when the ramp down of IX L3 and the ramp up of EX L3 occurs, so it seems like the issue here is within the L3 control
  • we have had instability issues with this transition back in March when we were commissioning it, which did include a 1 Hz instability
  • The growing oscillation before the transition is about 1 Hz, whereas the larger oscillation that can coincide with the lockloss is around 3 Hz.
  • Although the microseism is higher, the overall RMS on each suspension stage hasn't changed appreciably since April (see attachment). However, this plot compares NLN times; the switchover itself seems to be unstable, which may be too fast an event to compare this way.
  • The switchover is between IX/EY control, which appears to be on the "old" configuration, and the EX "new" configuration, so even if each individual configuration is stable, maybe the combination of them is unstable.
Images attached to this comment
elenna.capote@LIGO.ORG - 10:58, Tuesday 12 November 2024 (81224)OpsInfo

Sheila and I took a look at these plots again. It appears that the 3 Hz oscillation has the potential to saturate L3, which might be causing the lockloss. The ring-up repeatedly occurs within the first two seconds of the gain ramp, which is set to 10 seconds. We decided to shorten this ramp to 2 seconds. I created a new variable on line 5377 in the ISC guardian called "etmx_ramp" and set it to 2. The ramps from the EY/IX to EX L3 control are now set to this value, as well as the timer. If this is bad, this ramp variable can be changed back.
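
For context, a hypothetical sketch of the kind of change described above; this is not the actual ISC_LOCK code, and the state and channel names are placeholders. It shows one shared ramp variable driving both the L3 gain ramps and the wait timer:

    # Hypothetical illustration only -- not the real ISC_LOCK guardian code.
    # One module-level variable sets both the ramp times and the state timer,
    # so shortening the ramp automatically shortens the wait as well.
    from guardian import GuardState

    etmx_ramp = 2  # seconds; shortened from the previous 10 s ramp

    class TRANSITION_FROM_ETMX(GuardState):
        def main(self):
            # ezca is provided by the guardian environment;
            # channel names here are illustrative placeholders
            ezca['SUS-ETMX_L3_LOCK_L_TRAMP'] = etmx_ramp  # EX L3 ramps up
            ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = etmx_ramp  # EY L3 ramps down
            ezca['SUS-ITMX_L3_LOCK_L_TRAMP'] = etmx_ramp  # IX L3 ramps down
            self.timer['ramp'] = etmx_ramp                # wait out the same ramp

        def run(self):
            return self.timer['ramp']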

H1 TCS
oli.patane@LIGO.ORG - posted 14:01, Wednesday 06 November 2024 - last comment - 08:38, Tuesday 12 November 2024(81106)
TCS Monthly Trends FAMIS

Closes FAMIS#28454, last checked 80607

CO2 trends looking good (ndscope1)

HWS trends looking good (ndscope2)

You can see in the trends when the ITMY laser was swapped about 15-16 days ago.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:39, Thursday 07 November 2024 (81122)

Trend shows that the ITMY HWS code stopped running. I restarted it.

camilla.compton@LIGO.ORG - 08:38, Tuesday 12 November 2024 (81220)CDS

Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (Erik said it is related to a display); screenshot attached.

We checked that memory on h1hwsmsr1 is fine. Erik traced this back to matplotlib trying to make a plot and failing because there was no display to draw it on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it thinks there's a display but then can't plot to it. This error isn't happening on h1hwsmsr (ITMX). I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the stream image to show), but Erik found this was making the session think there was a display when there wasn't.

Fix: quit the tmux session, log in without those options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
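
A related guard against the same failure mode (hypothetical, not the fix that was applied here) is to pin a non-interactive matplotlib backend before pyplot is imported, so the contour call renders to memory regardless of whether the ssh session advertises a display:

    # Hypothetical mitigation sketch -- not what was deployed above.
    # Force a non-interactive backend so plotting never needs an X display.
    import matplotlib
    matplotlib.use('Agg')            # must happen before importing pyplot
    import matplotlib.pyplot as plt
    import numpy as np

    # stand-in for the contour made inside get_extrema_from_gradients()
    x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
    fig, ax = plt.subplots()
    ax.contour(x, y, x**2 + y**2)
    fig.savefig('/tmp/hws_contour_test.png')
    plt.close(fig)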

Images attached to this comment
H1 PSL (PSL)
marc.pirello@LIGO.ORG - posted 12:10, Friday 04 October 2024 - last comment - 17:59, Monday 11 November 2024(80467)
PSL FSS PZT Locked Comparison H1 and L1

In the process of investigating the locklosses due to FSS glitching and working on spare chassis for the FSS in the PSL, we compared the power spectrum of the PZT monitor between H1 and L1. We found some differences in the power spectrum; plot attached.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 17:59, Monday 11 November 2024 (81210)PSL

I discovered today that LLO is indeed doing some additional digital filtering on their FSS_FASTMON channel, which would very likely explain the difference in spectra Marc shows above. Just looking at the MEDM screen for the filter bank, it shows three filters in use (called "cts2V", "NPRO", and "toMHz") while LHO is using none; parameters of these are attached in a screenshot. I'm not entirely sure what these are for, but from what I can tell there is an additional pole at 10 Hz, which would explain the 1/f-looking drop in noise towards higher frequencies.
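
A quick way to sanity-check that reading (illustrative only; a single real pole, not LLO's actual filter definitions): above 10 Hz a one-pole response falls off roughly as 1/f.

    # Illustrative check: a single pole at 10 Hz gives a ~1/f roll-off above 10 Hz.
    import numpy as np
    from scipy import signal

    f = np.logspace(0, 4, 500)            # 1 Hz to 10 kHz
    p = 2 * np.pi * 10                    # 10 Hz pole, in rad/s
    _, h = signal.freqs([p], [1, p], worN=2 * np.pi * f)   # H(s) = p / (s + p)
    print('|H| at 100 Hz ~', round(float(np.interp(100, f, abs(h))), 3))  # ~0.1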

Images attached to this comment