LHO General
corey.gray@LIGO.ORG - posted 16:49, Tuesday 12 November 2024 (81232)
Tues EVE Ops Transition

TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 26mph Gusts, 19mph 3min avg
    Primary useism: 0.67 μm/s
    Secondary useism: 0.77 μm/s
QUICK SUMMARY:

Long Maintenance Day, high microseism, high winds, M5.0 Mexico earthquake.....but the IMC was just locked in the last few minutes!

H1 General
anthony.sanchez@LIGO.ORG - posted 16:45, Tuesday 12 November 2024 (81231)
Tuesday Maintenance Ops Shift End

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
H1 was held in IDLE all day while we had a long maintenance day with a focus on the PSL's TTFSS repair.

The PSL Crew is still in the PSL room, re-installing the TTFSS.
Secondary microseism is still elevated, and wind is still elevated as well.

Beckhoff updates were done today at 4PM as well.


LOG:

Start Time System Name Location Laser_Haz Task Time End
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
15:32 FAC Karen Optics & VAC Prep Labs N Technical cleaning 15:42
15:42 FAC Karen EY n Technical Cleaning. 17:01
15:55 FAC Nelli HAM Shaq N Technical Cleaning 16:41
15:56 FAC Kim EX N Technical Cleaning 17:21
16:30 SEI Neil, Jim, Fil LVEA Biergarten Yes Installing geophone seismometer 19:23
16:46 3IFO Tyler LVEA Yes 3 IFO Checks 17:22
16:57 PSL Jason & Ryan S PSL Room Yes Working on PSL, PMC,  TTFSS 18:41
17:11 SUS Rahul EY & EX N SUS Measurements 18:39
17:13 CDS Richard CER N Maximizing the window for Maintenance 17:16
17:15 N2 NorCo Y arm N N2 fill, another Nor Cal Truck also went down the Y arm 19:15
17:22 FAC Tyler Mid Stations N 3 IFO checks 19:22
17:39 PSL Daniel PSL racks Yes Removing power from old PSL rack equipment 19:39
17:45 VAC Travis & Janos EY & MX N Checking vac pump status in receiving areas 19:39
17:51 FAC Kim & Karen LVEA YES Technical Cleaning Karen out early 19:04
18:30 SEI Neil D. LVEA Yes Taping a GeoPhone to the Floor. 18:50
18:57 SEI Neil CER N Turning on a switch in the CER 19:14
19:05 FAC Kim High bay n Tech clean 20:03
20:08 VAC Janos EY, MX N Vac Hardware check. 22:08
20:58 PSL Daniel LVEA PSL Racks Yes Checking on the FSS 21:51
21:14 FAC Erik, Tyler Mid Y N Handling the air, Tyler back early 22:36
21:41 ISC Christina EY, MX n Inventory 22:16
21:47 PSL Ryan S & Jason PSL Yes Replacing the TTFSS 01:48
22:01 VAC Travis MX N Vac systems check 22:20
22:07 EE Fil CER N Turning on High Voltage 22:17
23:22 VAC Janos Mid X N Measuring Vac system equipment. 23:34
23:47 SEI Jim CER N checking cables 23:52
23:54 CDS Erik CER N Getting a laptop 00:00

 

LHO FMCS
anthony.sanchez@LIGO.ORG - posted 14:32, Tuesday 12 November 2024 (81229)
Weekly HVAC Fan Vibrometer Check

Famis 26336

Just over 6 days ago there was an increase in H0:VAC-EX_FAN1_570_2_ACC_INCHSEC.
Both H0:VAC-MR_FAN1_170_1 & 2 ACC_INCHSEC saw an increase in noise in the last 3 days.
 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 13:22, Tuesday 12 November 2024 - last comment - 14:09, Tuesday 12 November 2024(81227)
Tuesday Maintenance Ops Shift Update

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 33mph Gusts, 26mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.85 μm/s
QUICK SUMMARY:

17:11 UTC Test T524646 EXTTRIG: SNEWS alert active

Maintenance agenda:



 

Comments related to this report
victoriaa.xu@LIGO.ORG - 14:09, Tuesday 12 November 2024 (81228)

Saw 1 PSL glitch ~50 minutes after Jason/RyanS removed the TTFSS box and calibrated the H1:PSL-PWR_HPL_DC_OUT_DQ channel. Otherwise it was quiet for ~3 hours.

Trends here. Glitch visible on the mixer with peak-to-peak ±0.1, similar to that seen yesterday with the FSS autolocker off. These are not the biggest mixer glitches we have seen, but they are among the larger ones (like glitch 3, or the end of glitch 2 yesterday). Jason notes the temperature feedback cable is still plugged in (though the FSS box has been removed).

We expect the mixer output to be fuzzier (bigger RMS) when the FSS is off, because the FSS is not running to control the laser frequency noise w.r.t. the refcav. So we should be seeing the free-running NPRO laser linewidth with the FSS off, which is expected to be larger than with the FSS on (i.e., no need to be thrown off by the mixer output fuzziness with the FSS off).
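One way to frame this expectation (a sketch, assuming the error point stays within the linear range of the PDH error signal, with discriminant slope D in V/Hz):

    V_\mathrm{mix}(t) \approx D\,\delta\nu(t) \;\Rightarrow\; V_\mathrm{mix,RMS} \approx D\,\delta\nu_\mathrm{RMS}

so the larger free-running frequency noise with the FSS off maps directly into a larger mixer RMS, without implying any additional glitching.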

Images attached to this comment
LHO FMCS
eric.otterman@LIGO.ORG - posted 11:56, Tuesday 12 November 2024 (81225)
VEA temperature changes
The VEAs in Mid X and Mid Y have four temperature sensors located inside the exterior wall. These four sensors are averaged by the automation system for the purpose of temperature control. The location in the exterior wall means all four of these sensors are influenced by outdoor ambient air and consistently read higher or lower than the interior of the VEA, depending on the season. At both mid stations, one of these sensors will be relocated to the interior of the room. Then, two of the sensors in the exterior wall will be taken out of the averaging so that one interior sensor and one wall sensor will be used to figure the average space temperature. This will have an impact on temperature trends for the mid stations, and although these are not critical like the temperatures at the corner and end stations, anyone keeping an eye on the trends will notice a significant change. 
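As a rough illustration of the change (not the actual FMCS automation logic; sensor names and readings below are made up), the controlled average goes from four wall sensors to one interior plus one wall sensor:

# Toy example only: illustrates the averaging change described above.
wall = {"wall_1": 68.2, "wall_2": 66.9, "wall_3": 67.4, "wall_4": 66.5}  # deg F, made up

# Old scheme: all four exterior-wall sensors are averaged.
old_avg = sum(wall.values()) / len(wall)

# New scheme: one sensor is relocated to the room interior, two wall sensors are
# dropped from the average, leaving one interior + one wall sensor.
interior = 69.8  # the relocated sensor now reads interior air (made up)
new_avg = (interior + wall["wall_1"]) / 2

print(f"old average: {old_avg:.1f} F, new average: {new_avg:.1f} F")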
H1 PSL
daniel.sigg@LIGO.ORG - posted 10:21, Tuesday 12 November 2024 (81223)
Power down unused PSL equipment

Image 1: PSL top eurocrate
From left to right: ISS inner loop, old ISS second loop (depowered), reference cavity heater (depowered), and TTFSS field box.

Image 2: PSL bottom eurocrate
From left to right: injection locking field box (depowered), injection locking servo (depowered), monitor field box, PMC locking field box, and PMC locking servo.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:10, Tuesday 12 November 2024 (81222)
Tue CP1 Fill

Tue Nov 12 10:07:17 2024 INFO: Fill completed in 7min 13secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 07:45, Tuesday 12 November 2024 - last comment - 09:12, Tuesday 12 November 2024(81219)
Tuesday Ops Day Shift Start

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 18mph Gusts, 14mph 3min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.78 μm/s
QUICK SUMMARY:

Good Maintenance Tuesday Morning Everyone.
H1 was in MOVE_SPOTS when I walked in and shortly thereafter caught a lockloss.
So the In_Lock_SUS_Charge measurements will not be run this week.
List of expected maintenance items:

Comments related to this report
david.barker@LIGO.ORG - 09:12, Tuesday 12 November 2024 (81221)

After discussion with Tony, we decided not to install his new PCAL Guardian nodes today, which means no DAQ restart today (my VACSTAT ini change is target-of-opportunity if a DAQ restart were to happen for other reasons).

H1 General
ryan.crouch@LIGO.ORG - posted 01:32, Tuesday 12 November 2024 - last comment - 04:04, Tuesday 12 November 2024(81217)
H1 OPS OWL assistance

H1 called for help when the initial alignment timer expired at 09:00 UTC. We were locking green arms with the Y-arm struggling to lock; winds were semi-high and microseism was above the 90th percentile. I adjusted ETMY in pitch and it was finally able to lock. I finished the IA at 09:32 UTC.

Comments related to this report
ryan.crouch@LIGO.ORG - 04:04, Tuesday 12 November 2024 (81218)

Struggling to hold the arms; FSS oscillations and the wind keep knocking them out. Per windy.com, the wind isn't predicted to calm down until 15:00 UTC, and then only for an hour before it rises even higher than it is currently. The secondary microseism isn't helping either.

As of 10:30 UTC I can hold the arms, but the FSS keeps oscillating; holding it in DOWN until it passes (it only took a few minutes).

10:47 ENGAGE_ASC_FOR_FULL_IFO lockloss

11:07 UTC RESONANCE lockloss from DRMI losing it

11:23 UTC RESONANCE lockloss again from DRMI

11:28 UTC FSS keeps oscillating

11:46 PREP_ASC lockloss from the IMC

12:00 UTC ENGAGE_ASC lockloss

The 3-minute average for the wind is just over 20 mph with gusts over 25; with that combined with the secondary microseism fully above the 90th percentile, relocking this morning is unlikely.

12:35 UTC LOWNOISE_COIL_DRIVERS lockloss

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Monday 11 November 2024 (81216)
OPS Eve Shift Summary

TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is LOCKING at ACQUIRE_PRMI

For the first third of shift, commissioners were troubleshooting and investigating the PSL glitch after a new finding emerged: the glitch is seen in the PMC mixer with the IMC offline and the ISS and FSS off (alog 81207 and its many comments). IFO was in MAINTENANCE, wind gusts were over 30mph, and microseism was very high (troughs higher than the 90% line and peaks higher than the 100% line for secondary).

For the second third of shift, I was attempting to lock since the winds had calmed down, but I got caught up during initial alignment since the SRM M3 WD would trip incessantly during SRC align, which I assume is due to the microseism being so high, since this doesn't normally happen and it's the main unusual condition. After initial alignment finished, IFO reached NLN fully automatically. After an SDF diff clearance (attached), we were observing for a whole 27 minutes!

For the third third of shift, I was attempting to relock following a lockloss (alog 81215) caused by a 50mph wind gust that even my less-than-LIGO-sensitive ears could detect. I also confirmed it wasn't the PSL. I waited in DOWN until the gusts got back below 30mph and attempted to relock. The PSL glitch happened a few times in between (since the IMC was faulting but winds were low), but we got all the way to FIND_IR until another 38mph gust came through. Now we're stably past ALS, but PRMI locks and unlocks every few minutes due to the environment.

LOG:

None

Images attached to this report
H1 PEM (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 20:54, Monday 11 November 2024 (81215)
Lockloss 04:44 UTC

Lockloss due to a 50mph wind gust. I heard the wind shake the building and immediately thereafter, a lockloss.

I also checked whether it was an IMC glitch, and it wasn't (the IMC and ASC channels lost lock at different times, ~250 ms apart, and the IMC locked right after the lockloss).

Short 37 min lock.

 

H1 PSL (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:46, Monday 11 November 2024 - last comment - 19:25, Monday 11 November 2024(81207)
PMC Mixer Glitches with FSS and ISS off (and IMC Offline)

Ibrahim, Tony, Vicky, Ryan S, Jason

Two screenshots below answering the question of whether the PMC Mixer glitches on its own with FSS and ISS out. It does.

PSL team is having a think about what this implies.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 18:27, Monday 11 November 2024 (81209)SQZ

Jason, Ryan, Ibrahim, Elenna, Daniel, Vicky - This test had only PSL + PMC for ~2 hours, from 2024/11/11 23:47:45 UTC to 02:06:14 UTC. No FSS, ISS, or IMC.

Mixer glitch#1 @ 1415406517 - 1st PSL PMC mixer REFL glitches. Again, only PSL + PMC. No FSS, ISS, IMC.

The squeezer was running TTFSS, which locks the squeezer laser frequency to the PSL laser frequency + 160 MHz offset. With SQZ TTFSS running, for glitch #1, the squeezer witnessed glitches in SQZ TTFSS FIBR MIXER, and PMC and SHG demod error signals.

This seems to suggest the squeezer is following real PSL free-running laser frequency glitches, since there is no PSL FSS servo actuating on the PSL laser frequency.
  -  also suggests the 35 MHz PSL (+SQZ) PMC LO VCO is not the main issue, since SQZ witnesses the PSL glitches in the SQZ FIBR MIXER.
  -  also suggests PMC PZT HV is not the issue. Without PSL FSS, any PMC PZT HV glitches should not become PSL laser frequency glitches. Caveat the cabling was not disconnected, just done from control room, so analog glitches could still propagate.
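Stated as a one-liner (a sketch; the 160 MHz offset is as quoted above, and this assumes the SQZ TTFSS loop gain is high within its bandwidth):

    \nu_\mathrm{SQZ}(t) = \nu_\mathrm{PSL}(t) + 160\,\mathrm{MHz} \;\Rightarrow\; \delta\nu_\mathrm{SQZ}(t) \approx \delta\nu_\mathrm{PSL}(t)

so a glitch witnessed by the SQZ FIBR MIXER while the TTFSS is locked points to a real excursion of the PSL laser frequency, seen through a sensing chain independent of the (unused) PSL FSS.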

Images attached to this comment
victoriaa.xu@LIGO.ORG - 17:57, Monday 11 November 2024 (81211)

Mixer glitch #2 @ 1415408786. Trends here.

Somehow this looks pretty different from glitch #1. The PSL-PMC_MIXER glitches are not clearly correlated with NPRO power changes. SQZ-FIBR_MIXER sees the glitches, and SQZ-PMC_REFL_RF35 also sees glitches. But notably SHG_RF24 does NOT see the glitches, unlike in glitch #1.

For the crazy glitches at ~12 minutes (end of scope) - the SQZ TTFSS MIXER + PMC + SHG all see the big glitch, and there seem to be some (weak) NPRO power glitches too.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 19:25, Monday 11 November 2024 (81212)

Mixer glitches #3 @ 1415409592 - Trends. Ryan and I are wondering if these are the types of glitches that bring the IMC down? But, checking the earlier IMC lockloss (tony 81199), can't tell from trends.

Here - huge PSL freq glitches that don't obviously correlate with PSL NPRO power changes (though maybe a slight step after the first round of glitches?). But these PSL glitches are clearly observed across the squeezer TTFSS + PMC + SHG signals (literally everywhere).

The scope I'm using is at /ligo/home/victoriaa.xu/ndscope/PSL/psl_sqz_glitches.yaml (sorry it runs very slow).

Images attached to this comment
victoriaa.xu@LIGO.ORG - 19:11, Monday 11 November 2024 (81214)

Thinking about the overall set of tests done today, see annotated trends.

  • PSL + PMC + FSS + IMC  (no ISS) ...  bad, IMC lockloss after 1:15 hours even with ISS OFF, 81198. IMC lockloss at 2024/11/11 20:37:44 UTC (1415392682).
     
  • PSL + PMC  (no FSS, no ISS, no IMC), Test #1...  good? No glitches for 30 minutes. Sheila 81200.
     
  • PSL + PMC + FSS  (no ISS, no IMC) ...  bad, Ref cav did not stay locked. Many glitches (visible in PMC mixer). Sheila 81200.
     
  • PSL + PMC  (no FSS, no ISS, no IMC), Test #2 ...  bad this time, Tried again as Sheila suggested. After ~40 min, saw glitch #1 in PMC mixer, in turn witnessed by SQZ TTFSS. Bigger glitch #3 seen after ~1.5 hours (this thread).
    • Some glitches have NPRO power glitches, some don't.
    • The fact that squeezer TTFSS sees glitches could suggest these glitches are real and related to free-running PSL laser frequency glitches?
    • In particular - Glitch #3 (above) has similar peak-to-peak on PSL-PMC_MIXER to what unlocked the IMC in the ISS_OFF test earlier today. Glitch #3 also goes on for longer than #1,2.
Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 12:30, Monday 11 November 2024 - last comment - 10:58, Tuesday 12 November 2024(81195)
Suspicious ETMX transition locklosses

We seem to be having lots of locklosses during the transition from ETMX/lownoise ESD ETMX guardian states. With Camilla's help, I looked through the lockloss tool to see if these are related to the IMC locklosses or not. "TRANSITION_FROM_ETMX" and "LOWNOISE_ESD_ETMX" are states 557 and 558 respectively.

For reference, transition from ETMX is the state where DARM control is shifted to IX and EY, and the ETMX bias voltage is ramped to our desired value. Then control is handed back over to EX. Then, in lownoise ESD, a low pass is engaged and the feedback to IX and EY is disengaged.
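A minimal sketch of that sequence (not the real ISC_LOCK code; channel names, gains, and the bias value are placeholders standing in for the guardian's ezca writes):

ezca = {}  # stand-in for the guardian ezca channel interface

def write(chan, value):
    # In the real guardian this is an ezca[...] channel write with a ramp time.
    ezca[chan] = value
    print(f"{chan} <- {value}")

def transition_from_etmx():  # roughly state 557
    # Shift DARM control onto ITMX and ETMY while EX is ramped off.
    write("SUS-ITMX_L3_LOCK_L_GAIN", 1.0)
    write("SUS-ETMY_L3_LOCK_L_GAIN", 1.0)
    write("SUS-ETMX_L3_LOCK_L_GAIN", 0.0)
    # Ramp the ETMX ESD bias voltage to the desired low-noise value (placeholder).
    write("SUS-ETMX_L3_LOCK_BIAS_OFFSET", -4.6e5)
    # Hand DARM control back to ETMX.
    write("SUS-ETMX_L3_LOCK_L_GAIN", 1.0)

def lownoise_esd_etmx():  # roughly state 558
    # Engage the low pass on the ETMX ESD drive, then drop the IX/EY feedback.
    write("SUS-ETMX_L3_LOCK_L_LOWPASS_SW", 1)
    write("SUS-ITMX_L3_LOCK_L_GAIN", 0.0)
    write("SUS-ETMY_L3_LOCK_L_GAIN", 0.0)

transition_from_etmx()
lownoise_esd_etmx()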

In total since the start of O4b (April 10), there have been 26 locklosses from "transition from ETMX"

From the start of O4b (April 10) to now, there have been 22 locklosses from "lownoise ESD ETMX"

Trending the microseismic channels, the increase in microseism seems to correspond with the worsening of this lockloss rate for lownoise ESD ETMX. I think, when correcting for the IMC locklosses, the transition from ETMX lockloss rate is about the same. However, the lownoise ESD ETMX lockloss rate has increased significantly.

Comments related to this report
elenna.capote@LIGO.ORG - 15:39, Monday 11 November 2024 (81202)

Here is a look at what these locklosses actually look like.

I trended the various ETMX, ETMY, and ITMX suspension channels during these transitions. The first two attachments here show a side-by-side set of scopes, with the left showing a successful transition from Nov 10 and the right showing a failed transition from Nov 9. It appears that in both cases, the ITMX L3 drive has a ~1 Hz oscillation that grows in magnitude until the transition. Both the good and bad times show a separate oscillation right at the transition. This occurs right at the end of state 557, so the locklosses within 1 or 2 seconds of state 558 likely result from whatever the last of the steps in state 557 do. The second screenshot zooms in to highlight that the successful transition has a different oscillation that rings down, whereas the unsuccessful transition fails right where this second ring-up occurs.

I grabbed a quick trend of the microseism, and it looks like the ground motion approximately doubled around Sept 20 (third screenshot). I grabbed a couple of recent successful and unsuccessful transitions since Sept 20, and they all show similar behavior. A successful transition from Sept 19 (fourth attachment) does not show the first 1 Hz ring up, just the second fast ring down after the transition.

I tried to look back at a time before the DAC change at ETMX, but I am having trouble getting any data that isn't minute trend. I will keep trying for earlier times, but this already indicates an instability in this transition that our long ramp times are not avoiding.

Images attached to this comment
elenna.capote@LIGO.ORG - 16:51, Monday 11 November 2024 (81208)

Thanks to some help from Erik and Jonathan, I was able to trend raw data from Nov 2023 (hint: use nds2). Around Nov 8 2023, the microseism was elevated, ~300 counts average on the H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M channel, compared to ~400 counts average this weekend. The attached scope compares the transition on Nov 8 2023 (left) versus this weekend, Nov 10 (right). One major difference here is that we now have a new DARM offloading scheme. A year ago, it appears that this transition also involved some instability causing some oscillation in ETMX L3, but the instability now creates a larger disturbance that rings down in both the ITMX L3 and ETMX L3 channels.
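For anyone else hitting the minute-trend wall, a minimal sketch of pulling raw data with the nds2 client (server, port, GPS span, and channel below are examples, not the exact ones used here):

import nds2

# Example GPS span around 2023-11-08; adjust to the times of interest.
start, stop = 1383436818, 1383437418
conn = nds2.connection("nds.ligo-wa.caltech.edu", 31200)
bufs = conn.fetch(start, stop, ["H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M"])
data = bufs[0].data  # full-rate samples, not minute trends
print(len(data), data.mean())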

Images attached to this comment
elenna.capote@LIGO.ORG - 18:11, Monday 11 November 2024 (81213)

Final thoughts for today:

  • the lockloss occurs within 1-2 seconds of when the ramp down of IX L3 and the ramp up of EX L3 occurs, so it seems like the issue here is within the L3 control
  • we have had instability issues with this transition back in March when we were commissioning it, which did include a 1 Hz instability
  • The growing oscillation before the transition is about 1 Hz, whereas the larger oscillation that can coincide with the lockloss is around 3 Hz.
  • Although the microseism is higher, the overall RMS on each suspension stage hasn't changed appreciably since April (see attachment). However, this plot compares NLN times. It seems like the switchover itself is unstable, which is maybe too fast to compare this way.
  • The switchover is between IX/EY control which appears to be on the "old" configuration and the EX "new" configuration, so even if each individual configuration is stable, maybe a combo of them is unstable.
Images attached to this comment
elenna.capote@LIGO.ORG - 10:58, Tuesday 12 November 2024 (81224)OpsInfo

Sheila and I took a look at these plots again. It appears that the 3 Hz oscillation has the potential to saturate L3, which might be causing the lockloss. The ringup repeatedly occurs within the first two seconds of the gain ramp, which is set to a 10 second ramp. We decided to shorten this ramp to 2 seconds. I created a new variable on line 5377 in the ISC guardian called "etmx_ramp" and set it to 2. The ramps from the EY/IX to EX L3 control are now set to this value, as well as the timer. If this is bad, this ramp variable can be changed back.
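A sketch of what the shortened ramp amounts to (hypothetical, simplified guardian-style code; the real change lives in the ISC guardian, and the channel names here are placeholders):

etmx_ramp = 2  # seconds; shortened from the previous 10 s ramp

def hand_off_l3_to_etmx(ezca, timer):
    # Use the same ramp time for the EY/IX ramp-down and the EX ramp-up...
    for optic in ("ETMX", "ETMY", "ITMX"):
        ezca[f"SUS-{optic}_L3_LOCK_L_TRAMP"] = etmx_ramp
    ezca["SUS-ETMX_L3_LOCK_L_GAIN"] = 1.0
    ezca["SUS-ETMY_L3_LOCK_L_GAIN"] = 0.0
    ezca["SUS-ITMX_L3_LOCK_L_GAIN"] = 0.0
    # ...and wait the same length of time before the state moves on.
    timer["ramp"] = etmx_ramp

hand_off_l3_to_etmx(ezca={}, timer={})  # dicts stand in for the guardian ezca/timer objects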

H1 ISC
elenna.capote@LIGO.ORG - posted 10:27, Monday 11 November 2024 - last comment - 16:28, Tuesday 12 November 2024(81194)
One CARM Sensor Injections

Sheila and I went out to the floor to plug in the cable to run frequency noise injections (AO2).

I switched us to REFL B only (79037) to run the injections. I first ran the frequency noise injection template. Then I disengaged the 10 Hz pole and engaged the digital gain to run the ISS injections (low, medium, and high frequency). I was fiddling with the IMC WFS pitch injection template to get it to higher frequency for a jitter injection when we lost lock.

Next step is to at least use the intensity and frequency measurements to determine how the noises are coupling to DARM (would be nice to have jitter too, but we'll see).

Comments related to this report
victoriaa.xu@LIGO.ORG - 13:15, Tuesday 12 November 2024 (81226)

Commit 464b5256 in aligoNB git repo.

elenna.capote@LIGO.ORG - 16:28, Tuesday 12 November 2024 (81230)

These results show that above 3 kHz, some of the intensity noise couples through frequency noise. There is no measurable intensity noise coupling through frequency noise at mid or low frequency.

The frequency noise projection and coupling in this plot are divided by √2 to account for the fact that frequency noise is measured on one sensor here, but is usually controlled on two REFL sensors. This may not be correct at high frequency (roughly above 4 kHz), where CARM is gain-limited.
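One way to see the factor of √2 (a sketch, assuming equal and uncorrelated sensing noise of density S_n on each REFL sensor, which averages down in the usual two-sensor combination):

    e = \tfrac{1}{2}(A + B) \;\Rightarrow\; S_e = \tfrac{1}{4}(S_n + S_n) = \tfrac{1}{2} S_n \;\Rightarrow\; \sqrt{S_e} = \sqrt{S_n}/\sqrt{2}

so a projection measured on a single sensor overestimates the usual two-sensor configuration by √2, hence the division.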

code lives in /ligo/gitcommon/NoiseBudget/simplepyNB/Laser_noise_projections.ipynb

Images attached to this comment
H1 TCS
oli.patane@LIGO.ORG - posted 14:01, Wednesday 06 November 2024 - last comment - 08:38, Tuesday 12 November 2024(81106)
TCS Monthly Trends FAMIS

Closes FAMIS#28454, last checked 80607

CO2 trends looking good (ndscope1)

HWS trends looking good (ndscope2)

You can see in the trends when the ITMY laser was swapped about 15-16 days ago.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:39, Thursday 07 November 2024 (81122)

Trend shows that the ITMY HWS code stopped running. I restarted it.

camilla.compton@LIGO.ORG - 08:38, Tuesday 12 November 2024 (81220)CDS

Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (which Erik said is related to a display); error attached.

We checked that the memory on h1hwsmsr1 is fine. Erik traced this back to matplotlib trying to make a plot and failing because there was no display to make the plot on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it's trying to make this plot and thinks there's a display, but then can't plot it. This error isn't happening on h1hwsmsr (ITMX). I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the stream image to show), but Erik found this was making the session think there was a display when there wasn't.

Fix: quit the tmux session, log in without options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
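For illustration, a hedged sketch of how the same failure could be avoided in the code itself by forcing a non-interactive matplotlib backend (the function name mirrors the one mentioned above, but the body here is a placeholder, not the real hws_gradtools routine; the actual fix was simply logging in without X forwarding):

import matplotlib
matplotlib.use("Agg")  # render off-screen; never needs an X display
import matplotlib.pyplot as plt
import numpy as np

def get_extrema_from_gradients(gradients):
    # Placeholder: the real routine builds a contour plot while locating extrema.
    # With the Agg backend the plot is drawn in memory, so a vanished X display
    # can no longer kill the process with "Fatal IO error 25".
    fig, ax = plt.subplots()
    cs = ax.contour(gradients)
    plt.close(fig)  # discard the figure; only the contour levels are kept here
    return cs.levels

print(get_extrema_from_gradients(np.random.rand(32, 32)))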

Images attached to this comment