H1 SUS
oli.patane@LIGO.ORG - posted 13:45, Monday 03 March 2025 (83142)
PI31 sitting higher after thermalization since EY ring heater change on Feb 21

Since the EY ring heater power was raised (82962) to move away from PI8/PI24, whose noise seemed to be coupling into DARM (82961), PI31 has become elevated as the detector thermalizes (ndscope1). PI31 RMS, which usually sits around or below 2, starts ringing up about two hours into the lock, and by four hours in it reaches the new value it holds for the rest of the lock, around 4 (ndscope2). Once at this new value, every so often it will quickly ring up before being damped within a minute. For the first couple of locks after the ring heater change, this ringup was happening every 1 - 1.5 hours, but it has since shifted to ringing up every 30 - 45 minutes.

The channels where we see the RMS amplitude increases and the quick ringups are the same channels (the PI31 versions) where we were seeing the glitches in PI24 and PI8 that were affecting the range (PI8/PI24 versus PI31). So changing the EY ring heater power shifted us away from PI8 (10430 Hz) and PI24 (10431 Hz), but towards PI31 (10428 Hz). Luckily, neither these ringups nor the higher RMS that PI31 sits at after we thermalize appears to have an effect on the range (comparing range-drop times and glitchgram times to PI31 ringup times and the downconverted signal from the DCPDs). They also don't seem to be related to any of the locklosses that we've had since the ring heater change.
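For anyone wanting to reproduce these trends, here is a minimal sketch of pulling the PI31 RMS over a lock stretch; the channel name is an assumption based on the usual PI RMS-monitor naming pattern (it should be checked), and the times and threshold are placeholders:

    # Sketch: fetch the PI31 RMS monitor over a lock stretch and flag the
    # quick ringups described above. The channel name is an assumed guess at
    # the usual PI RMSMON naming; times and threshold are placeholders.
    from gwpy.timeseries import TimeSeries

    rms = TimeSeries.get('H1:SUS-PI_PROC_COMPUTE_MODE31_RMSMON',
                         'Mar 1 2025 08:00', 'Mar 1 2025 16:00')

    # Thermalized level is ~4 per this entry; flag excursions well above it.
    ringup_times = rms.times[rms.value > 8]
    print(len(ringup_times), "samples above threshold")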

Images attached to this report
H1 TCS
ryan.crouch@LIGO.ORG - posted 12:15, Monday 03 March 2025 (83140)
TCS Chiller Water Level Top Off - Biweekly

Closes FAMIS 27810, last checked in alog82945

For CO2X the level was at 29.4; I added 160 ml to get it to 29.9.

For CO2Y the level was at 10.0; I added 150 ml to get it to 10.5.

The Dixie cup was empty, with no signs of current water drippage.

LHO VE
david.barker@LIGO.ORG - posted 10:37, Monday 03 March 2025 (83139)
Mon CP1 Fill

Mon Mar 03 10:12:33 2025 INFO: Fill completed in 12min 30secs

Gerardo confirmed a good fill curbside.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:42, Monday 03 March 2025 - last comment - 08:07, Monday 03 March 2025(83134)
Mon DAY Ops Transition

TITLE: 03/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 8mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.27 μm/s
QUICK SUMMARY:

When the alarm went off and I checked H1, I saw that it had sadly been down for about an hour, so I figured I'd need to come in to run an alignment to bring H1 back. But automation was already on it with the alignment, and I walked in to find H1 already in Observing for about 15 min!  (H1 was knocked out by a M4.5 EQ on the SE tip of Orcas Island.)

Secondary (& primary) µseism continue their trends downward; secondary is now squarely below the 95th percentile line (between the 50th & 95th).

Monday Commissioning is slated in about 45 min (at 1630utc).

Comments related to this report
corey.gray@LIGO.ORG - 08:07, Monday 03 March 2025 (83137)

Reacquisition after the EQ lockloss did not have an SRC noise ring-up during OFFLOAD_DRMI_ASC.

Images attached to this comment
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 22:10, Sunday 02 March 2025 (83133)
Sunday Eve Shift End

TITLE: 03/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

H1 has been locked for coming up on 5 hours without much issue. Been a nice and quiet night.
I do see that the DCPD signals are slowly diverging, but I don't know the cause of that.
Stand Down query Failure is still present on the OPS_Overview screen.
The Unifi APs are still Disconnected. Tagging CDS

H1 CDS (CDS, DetChar)
anthony.sanchez@LIGO.ORG - posted 17:26, Sunday 02 March 2025 - last comment - 17:48, Sunday 02 March 2025(83131)
Unifi APs reporting INV instead of OFF in LVEA.

Trying to get into Observing tonight, and Verbal is telling me that the WAP is on in the LVEA.
lvea-unifi-ap has been down since Tuesday, which may be because the WAP was turned off for Maintenance.

The same thing is happening to the CER.


Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 17:48, Sunday 02 March 2025 (83132)CDS

Logging into the Unifi WAP control, I see that the LVEA-UNIFI-AP and CER-UNIFI-AP Wireless Access Points (WAPs) are NOT on and were last seen 5 days ago.
These WAPs are both listed as Disconnected | disabled, whereas all of the other turned-off access points say they are Connected | disabled.
Neither of these WAPs responds to the MEDM click to turn on or off.
Both of these WAPs are unpingable, whereas the connected-but-disabled WAPs are still pingable even while they are disabled.
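For the record, the pingability check can be reproduced with a quick script. The hostnames below come from this entry (the lowercase CER form is assumed); the rest is an illustrative sketch using Linux ping syntax:

    # Illustrative sketch of the pingability check described above;
    # hostnames are from this entry, everything else is assumed.
    import subprocess

    def is_pingable(host, timeout_s=2):
        # One ICMP echo request; returncode 0 means a reply came back.
        result = subprocess.run(
            ['ping', '-c', '1', '-W', str(timeout_s), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    for ap in ['lvea-unifi-ap', 'cer-unifi-ap']:
        print(ap, 'pingable' if is_pingable(ap) else 'unpingable')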

I guess this is "good" for observing, as the WiFi cannot be turned on in the LVEA, but on Tuesday we are likely going to want those APs to work again.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 17:13, Sunday 02 March 2025 (83129)
Sunday Eve Shift Start

TITLE: 03/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.41 μm/s
QUICK SUMMARY:
H1 was locked for 16 hours and 17 minutes before a lockloss just before my shift.
Corey and I agreed that an Initial_Alignment was in order to get H1 relocked quickly.

H1 reached Nominal_Low_Noise @ 1:12 UTC

Stand Down query Failure is still visible on the OPS_Overview screen on the top of NUC20.


LHO General
corey.gray@LIGO.ORG - posted 16:33, Sunday 02 March 2025 (83127)
Sun DAY Ops Summary

TITLE: 03/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Quiet shift on this Oscar Night 2025, with a lockloss in the last 40 min of the shift.  And with the lower primary+secondary microseism compared to 24 hrs ago, it was much harder to see 0.12-0.15 Hz oscillations in LSC/SUS signals!  What a difference a day makes.
LOG:

H1 PSL
anthony.sanchez@LIGO.ORG - posted 16:20, Sunday 02 March 2025 (83130)
PSL

PSL Weekly Status Report FAMIS 26358
Laser Status:
    NPRO output power is 1.857W
    AMP1 output power is 70.47W
    AMP2 output power is 139.8W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 26 days, 3 hr 51 minutes
    Reflected power = 22.27W
    Transmitted power = 106.0W
    PowerSum = 128.3W

FSS:
    It has been locked for 0 days 0 hr and 18 min
    TPD[V] = 0.7976V

ISS:
    The diffracted power is around 4.0%
    Last saturation event was 0 days 0 hours and 18 minutes ago


Possible Issues: None reported

LHO VE
david.barker@LIGO.ORG - posted 10:20, Sunday 02 March 2025 (83128)
Sun CP1 Fill

Sun Mar 02 10:14:17 2025 Fill completed in 14min 14secs

Trying out a new colour scheme with the delta channels to distinguish between 30S, 60S and 120S chans.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:42, Sunday 02 March 2025 (83126)
Sun DAY Ops Transition

TITLE: 03/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 0mph Gusts, 0mph 3min avg
    Primary useism: 0.04 μm/s   (yesterday 0.11 μm/s)
    Secondary useism: 0.47 μm/s (yesterday 0.81 μm/s)
QUICK SUMMARY:

H1's had a great night with a current 8+hr lock.  Secondary microseism AND Primary microseism have both been trending down over the last 18-24hrs (see attached). 

LVEA WiFi continues to come up as "ON" in Verbal (but it's been in the INVALID state since 10:08am PST on Maintenance Day, see attached).

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 22:05, Saturday 01 March 2025 (83125)
Saturday Eve Shift summary

TITLE: 03/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 11mph Gusts, 8mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.59 μm/s
QUICK SUMMARY:

Locking has been very difficult this evening. Getting past ALS and DRMI has been challenging, possibly due to the microseism.
I was able to get ISC_LOCK up to Low_Noise_ETMX before a lockloss.

H1 current status:
I have taken ISC_LOCK to Initial_Alignment again to see if H1 will get past DRMI and actually get locked.
It does seem like the microseism is falling, so hopefully we can get locked on the OWL shift via H1 Manager.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:55, Saturday 01 March 2025 (83123)
Saturday Eve Shift Start

TITLE: 03/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 12mph Gusts, 8mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.67 μm/s
QUICK SUMMARY:
H1 is currently Locking DRMI_1F after receiving an Initial Alignment.

H1 has apparently been difficult to lock all day.
I'm not entirely sure why that is the case.
The secondary useism isn't that high, though yesterday there was a noticeable increase in primary microseism without the aid of earthquakes.


Also, I have a Stand Down query failure on my OPS_Overview screen, which can be seen in the screenshots.


LHO General
corey.gray@LIGO.ORG - posted 16:29, Saturday 01 March 2025 (83119)
Sat DAY Ops Summary

TITLE: 03/01 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

It's been a rough day on the locking front.

Microseism is high (for the 2 hrs when H1 was OBSERVING, you could definitely see a lot of 0.12 - 0.15 Hz oscillations in many LSC, ASC, & SUS signal ndscopes).

DRMI has not looked great (even after fresh alignments). Not seeing many triggers for it, and the flashes are as high as they have been. Hoping this is also microseism-related.

Also had a return of the SRC noise during OFFLOAD DRMI ASC earlier in the day, but then it went away.
LOG:

LHO General
corey.gray@LIGO.ORG - posted 12:42, Saturday 01 March 2025 (83122)
Mid-Shift Status: Rough Waters Continue

Managed to get 1.75 hrs of OBSERVING this morning, but then had a lockloss.  Acquisition is not easy either. DRMI & PRMI have not looked great after Initial Alignments (like they usually had been).  BUT, there is high secondary AND primary microseism for H1.  Will keep battling.  Did not get to run the Saturday Calibration, because H1 was not thermalized this morning (in the last 13 hrs we have not had a thermalized H1).

Have had some PRMI offloading take as long as 4-5 min (don't recall it taking that long; with that said, this current lock offloaded PRMI in under 2 min).

H1 ISC (OpsInfo)
corey.gray@LIGO.ORG - posted 10:56, Saturday 01 March 2025 (83121)
H1 Status: Struggles Continue And SRC Noise During OFFLOAD_DRMI_ASC Returns

H1 struggled last night/this morning; H1 was down close to 6 hrs (NOTE:  µseism is elevated).

Since an alignment hadn't been run in the few hours of relocking, I thought one would do the trick.  Unfortunately, DRMI & PRMI didn't look great post-IA.  CHECK MICH FRINGES helped.  Locking then continued, with various locklosses.  During locking, I happened to notice a RETURN of the SRC noise during OFFLOAD_DRMI_ASC (noted: alog83080, alog82997).  This is after the SRC2_Pitch gain reduction of a couple days ago (alog83092, alog83078).

This noise issue definitely looks ugly and continues to produce the SRM/SR2 saturations heard via Verbal; for the current lock, it eventually calmed down on its own after a few minutes (around 1732utc).  While waiting for H1 to get to OBSERVING, I looked for other instances of this noise returning (at least over the last 1.5 days).  Here are the times when it occurred with the lower SRC2_P gain of 40 for ISC_LOCK state -111 (w/ notes for each one; also see attachments):

Perhaps it's getting worse?  Also, there was only the one lockloss among these instances over the last 1.5 days.
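A minimal sketch of how one might machine-search for these stretches via the guardian state channel instead of eyeballing scopes; the state index (111, read off the "-111" above) and the channel name (assumed from the usual GRD naming convention) are both assumptions to be checked:

    # Sketch: find times ISC_LOCK was in OFFLOAD_DRMI_ASC by querying the
    # guardian state-number channel. State index and channel name are
    # assumptions based on this entry and the usual GRD naming.
    from gwpy.timeseries import TimeSeries

    state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N',
                           'Feb 28 2025 00:00', 'Mar 1 2025 18:00')
    offload_times = state.times[state.value == 111]
    print(offload_times[:5])  # first few GPS times in the state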

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:22, Saturday 01 March 2025 (83120)
Sat CP1 Fill

Sat Mar 01 10:12:29 2025 INFO: Fill completed in 12min 26secs

Delta time channels now continue post-fill, showing the max-delta values attained, which for today was about 85C with the trip set to 50C. The delta values are absolute, so the rise in TC temps appears as a second hump in the 30S lookback channels.
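A minimal sketch of the delta logic as described here, assuming the channel reports the absolute TC temperature change over a lookback window (this is an illustration, not the actual fill-monitor code):

    # Assumed sketch of a 30S delta channel: absolute TC temperature change
    # over a 30-sample lookback, whose max is compared to the 50C trip level.
    import numpy as np

    def max_abs_delta(tc_temps, lookback=30):
        # Absolute change over a rolling lookback window (in samples).
        deltas = np.abs(tc_temps[lookback:] - tc_temps[:-lookback])
        return deltas.max()

    # A fill swinging the TC down ~85C and back reports a max-delta of ~85C,
    # above the 50C trip; the absolute value makes the post-fill recovery
    # appear as a second hump, as described above.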

Images attached to this report
H1 ISC
jim.warner@LIGO.ORG - posted 11:19, Monday 03 February 2025 - last comment - 10:32, Monday 03 March 2025(82608)
ESD glitch limit added to ISC_LOCK

During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.

In LOWNOISE_ESD_ETMX I added:

            log('turning on esd limits to reduce ETMX glitches')
            for limits in ['UL','UR','LL','LR']:
                ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')

So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
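For reference, the turn-off in PREP_FOR_LOCKING is presumably just the mirror image of the block above; a sketch assuming the same ezca/LIGOFilter interface, not copied from ISC_LOCK.py:

            # assumed mirror of the turn-on above; the actual
            # PREP_FOR_LOCKING code may differ
            for limits in ['UL','UR','LL','LR']:
                ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_off('LIMIT')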

Diffs have been accepted in sdf.

I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that have caused locklosses in the past.

Comments related to this report
camilla.compton@LIGO.ORG - 11:48, Monday 03 February 2025 (82609)

Using the lockloss tool, we've had 115 Observe locklosses since Dec 01; 23 of those were also tagged ETM glitch, which is around 20%.

camilla.compton@LIGO.ORG - 12:15, Monday 10 February 2025 (82723)SEI

Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06

sheila.dwyer@LIGO.ORG - 11:30, Tuesday 11 February 2025 (82743)

Jim, Sheila, Oli, TJ

We are thinking about how to evaluate this change.  In the meantime, we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 Observe locklosses, of which 3 had the tag.

We will leave the change in for at least another week to get more data on what its impact is.
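One simple way to put numbers on this comparison (7 of 13 tagged after the change vs. 3 of 19 before) would be a Fisher exact test on the 2x2 table; a minimal sketch using the counts above, not part of the original analysis:

    # Sketch: compare the ETM-glitch tag rate before/after the limit change.
    # Counts come from the comment above (after: 7 of 13; before: 3 of 19).
    from scipy.stats import fisher_exact

    table = [[7, 13 - 7],   # after the change: tagged, not tagged
             [3, 19 - 3]]   # before the change: tagged, not tagged
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")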

jim.warner@LIGO.ORG - 10:32, Monday 03 March 2025 (83138)

I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST, so locks since that date have had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.

Camilla asked about turning this back on; I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):

            #log('turning on esd limits to reduce ETMX glitches')
            #for limits in ['UL','UR','LL','LR']:
            #    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')

The turn-off of the limit is still in one of the very early states of ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
