Since the ring heater power for EY was raised (82962) to move away from PI8/PI24, whose noise seemed to be coupling into DARM (82961), PI31 has become elevated as the detector thermalizes (ndscope1). The PI31 RMS, which usually sits at or below 2, starts ringing up about two hours into the lock, and by four hours in it settles at a new value of around 4, where it stays for the rest of the lock (ndscope2). Once at this new value, it occasionally rings up quickly before being damped within a minute. For the first couple of locks after the ring heater change these ringups happened every 1-1.5 hours, but they have since shifted to every 30-45 minutes.
The channels where we see the RMS amplitude increase and the quick ringups are the PI31 versions of the same channels where we were seeing the range-affecting glitches in PI24 and PI8 (PI8/PI24 versus PI31). So changing the EY ring heater power shifted us away from PI8 (10430Hz) and PI24 (10431Hz), but towards PI31 (10428Hz). Luckily, neither these ringups nor the higher RMS that PI31 settles at after thermalization appears to affect the range (comparing range-drop and glitchgram times to PI31 ringup times and the downconverted signal from the DCPDs). They also don't seem to be related to any of the locklosses that we've had since the ring heater change.
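For anyone wanting to look at this offline, here is a minimal sketch (not the actual PI monitoring code) of pulling a PI31 RMS record and flagging ringups with gwpy; the channel name, GPS span, and threshold below are placeholders to be checked against the real channels:

from gwpy.timeseries import TimeSeries

CHAN = 'H1:SUS-PI_PROC_COMPUTE_MODE31_RMSMON'  # assumed PI31 RMS channel name
start, end = 1424900000, 1424920000            # example GPS span, not from this entry

rms = TimeSeries.get(CHAN, start, end)

# Thermalized RMS settles around 4; call anything well above that a ringup.
THRESHOLD = 8
ringup_times = rms.times.value[rms.value > THRESHOLD]
if len(ringup_times):
    print('first ringup above %g at GPS %.0f' % (THRESHOLD, ringup_times[0]))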
Mon Mar 03 10:12:33 2025 INFO: Fill completed in 12min 30secs
Gerardo confirmed a good fill curbside.
TITLE: 03/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
When the alarm went off and I checked H1, I saw that it had sadly been down for about an hour, so I figured I'd need to come in and run an alignment to bring it back. But automation was already on it with the alignment, and I walked in to find H1 already in Observing for about 15 min! (H1 was knocked out by a M4.5 EQ on the SE tip of Orcas Island.)
Secondary (&Primary) µseism continue their trends downward; secondary is now squarely below the 95th percentile line (between 50th & 95th).
Monday Commissioning is slated to start in about 45 min (at 1630 UTC).
TITLE: 03/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 has been locked for coming up on 5 hours without much issue. Been a nice and quiet night.
I do see that the DCPD signals are slowly diverging, but I don't know the cause of that.
Stand Down query Failure is still present on the OPS_Overview screen.
The Unifi AP's are still Disconnected. Tagging CDS
Trying to get into Observing tonight, and Verbals is telling me that the WAP is on in the LVEA.
lvea-unifi-ap has been down since Tuesday, which may be because the WAP was turned off for maintenance.
The same thing is happening to the CER.
Logging into the Unifi WAP control, I see that the LVEA-UNIFI-AP and CER-UNIFI-AP Wireless Access Points (WAPs) are NOT on and were last seen 5 days ago.
These two WAPs are both listed as Disconnected | disabled, whereas all of the other turned-off access points say Connected | disabled.
Neither of these WAPs responds to MEDM clicks to turn them on or off.
Both of these WAPs are unpingable, whereas the connected-but-disabled WAPs are still pingable even while they are disabled.
I guess this is "good" for observing, since the WiFi cannot be turned on in the LVEA, but on Tuesday we are likely going to want those APs to work again.
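For reference, a quick sketch of the pingability check described above (assuming the hostnames resolve on the CDS network; cer-unifi-ap is a guess at the CER AP's hostname):

import subprocess

for host in ('lvea-unifi-ap', 'cer-unifi-ap'):
    # -c 1: single ping, -W 2: two-second timeout (Linux ping flags)
    result = subprocess.run(['ping', '-c', '1', '-W', '2', host],
                            stdout=subprocess.DEVNULL)
    print('%s: %s' % (host, 'pingable' if result.returncode == 0 else 'unpingable'))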
TITLE: 03/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
H1 was Locked for 16 hours and 17 minutes before a lockloss just before my shift.
Corey and I agreed that an Initial_Alignment was in order to get H1 relocked quickly.
H1 reached Nominal_Low_Noise @ 1:12 UTC
Stand Down query Failure is still visible on the OPS_Overview screen on the top of NUC20.
TITLE: 03/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Quiet shift on this Oscar Night 2025 with a lockloss in the last 40min of the shift. And with the lower primary+secondary microseism compared to 24hrs ago, it was much harder to see 0.12-0.15Hz oscillations in LSC/SUS signals! What a difference a day makes.
LOG:
PSL Weekly Status Report FAMIS 26358
Laser Status:
NPRO output power is 1.857W
AMP1 output power is 70.47W
AMP2 output power is 139.8W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 26 days, 3 hr 51 minutes
Reflected power = 22.27W
Transmitted power = 106.0W
PowerSum = 128.3W
FSS:
It has been locked for 0 days 0 hr and 18 min
TPD[V] = 0.7976V
ISS:
The diffracted power is around 4.0%
Last saturation event was 0 days 0 hours and 18 minutes ago
Possible Issues: None reported
Sun Mar 02 10:14:17 2025 Fill completed in 14min 14secs
Trying out a new colour scheme with the delta channels to distinguish between 30S, 60S and 120S chans.
TITLE: 03/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 0mph Gusts, 0mph 3min avg
Primary useism: 0.04 μm/s (yesterday 0.11 μm/s)
Secondary useism: 0.47 μm/s (yesterday 0.81 μm/s)
QUICK SUMMARY:
H1's had a great night with a current 8+hr lock. Secondary microseism AND Primary microseism have both been trending down over the last 18-24hrs (see attached).
LVEA WiFi continues to come up as "ON" in Verbal (but it's been in the INVALID state since 10:08am PST on Maintenance Day, see attached).
TITLE: 03/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 11mph Gusts, 8mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.59 μm/s
QUICK SUMMARY:
Locking has been very difficult this evening. Getting past ALS and DRMI has been challenging, possibly due to the microseism.
I was able to get ISC_LOCK up to Low_Noise_ETMX before a lockloss.
H1 current status:
I have taken ISC_LOCK to Initial_Alignment again to see if H1 will get past DRMI and actually get locked.
It does seem like the microseism is falling so hopefully we can get locked on the OWL shift via H1 Manager.
TITLE: 03/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 12mph Gusts, 8mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.67 μm/s
QUICK SUMMARY:
H1 is currently Locking DRMI_1F after receiving an Initial Alignment.
H1 has apparently been difficult to lock all day.
I'm not entirely sure why that is the case.
The secondary useism isn't that high, though yesterday there was a noticeable increase in primary microseism without the aid of earthquakes.
Also, I have a Stand Down query failure on my OPS_Overview screen, which can be seen in the attached screenshots.
TITLE: 03/01 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
It's been a rough day on the locking front.
Microseism is high (during the 2 hrs when H1 was OBSERVING, you could definitely see a lot of 0.12-0.15 Hz oscillations in many LSC, ASC, & SUS signals on ndscopes).
DRMI has not looked great (even after fresh alignments). Not seeing many triggers for it, and the flashes aren't as high as they have been. Hoping this is also microseism-related.
Also had a return of the SRC noise during OFFLOAD DRMI ASC earlier in the day, but then it went away.
LOG:
Managed to get 1.75 hrs of OBSERVING this morning, but then had a lockloss. Acquisition has not been easy either. DRMI & PRMI have not looked great after Initial Alignments (which they usually do). BUT there is high secondary AND primary microseism for H1. Will keep battling. Did not get to run the Saturday calibration, because H1 was not thermalized this morning (we have not had a thermalized H1 in the last 13 hrs).
Have had some PRMI offloading which has taken as long as 4-5min (don't recall it taking that long...with that said, this current lock offloaded PRMI in under 2min).
H1 struggled last night/this morning--H1 was down close to 6hrs (NOTE: µseism is elevated).
Since an alignment hadn't been run a few hours into relocking, thought one would do the trick. Unfortunately, DRMI & PRMI didn't look great post-IA; CHECK_MICH_FRINGES helped. Then continued locking, with various locklosses. During locking I happened to notice a RETURN of the SRC noise during OFFLOAD_DRMI_ASC (noted in alog83080, alog82997). This is after the SRC2_Pitch gain reduction from a couple days ago (alog83092, alog83078).
This noise issue definitely looks ugly and continues to produce SRM/SR2 saturations heard via Verbal; for this current lock it eventually calmed down on its own after a few minutes (around 1732 UTC). While waiting for H1 to get to OBSERVING, I looked to see if there were other instances of this noise returning (at least over the last 1.5 days). Here are the times when it occurred with the lower SRC2_P gain of 40 for ISC_LOCK state -111 (with notes for each one; also see attachments):
Perhaps it's getting worse? Also, looking at these instances over the last 1.5 days, we only had the one lockloss.
Sat Mar 01 10:12:29 2025 INFO: Fill completed in 12min 26secs
Delta time channels now continue post-fill, showing the max-delta values attained, which for today was about 85C with the trip set to 50C. The Delta values are absolute, so the rise in TC temps appears as a second hump in the 30S lookback channels.
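As a toy illustration of that second hump (all numbers here are made up, not from the real TC channels):

import numpy as np

temp = np.zeros(600)                      # toy TC temperature trace, 1 sample/s
temp[100:110] = np.linspace(0, -85, 10)   # TCs drop sharply during the fill
temp[110:250] = -85                       # hold cold
temp[250:260] = np.linspace(-85, 0, 10)   # then warm back up

delta = np.abs(temp[30:] - temp[:-30])    # absolute 30S lookback delta
# delta shows one hump during the drop and a second during the recovery
print('max delta: %.0f C' % delta.max())  # -> 85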
During commissioning this morning we added the final part of the ESD glitch limiting by adding the actual limit part to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.
In LOWNOISE_ESD_ETMX I added:
log('turning on esd limits to reduce ETMX glitches')
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
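For completeness, the corresponding turn-off in PREP_FOR_LOCKING presumably looks something like the sketch below (reconstructed from the description above, not copied from ISC_LOCK.py):

# assumed counterpart to the turn-on above; check ISC_LOCK.py for the real lines
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_off('LIMIT')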
Diffs have been accepted in sdf.
I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that would have caused locklosses in the past.
Using the lockloss tool: we've had 115 Observe locklosses since Dec 01, and 23 of those were also tagged ETM glitch, which is around 20%.
Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.
Jim, Sheila, Oli, TJ
We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 Observe locklosses of which 3 had the tag.
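As a quick sanity check of those rates (plain arithmetic, using only the numbers quoted above and in the earlier entry):

since_dec = 23 / 115    # Observe locklosses tagged ETM glitch since Dec 01 -> ~20%
week_after = 7 / 13     # 7 days after the change -> ~54% tagged
week_before = 3 / 19    # 7 days before the change -> ~16% tagged
print('since Dec: %.0f%%, after: %.0f%%, before: %.0f%%'
      % (100 * since_dec, 100 * week_after, 100 * week_before))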
We will leave the change in for at least another week to get more data on its impact.
I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST. Locks since that date have still had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.
Camilla asked about turning this back on; I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):
#log('turning on esd limits to reduce ETMX glitches')
#for limits in ['UL','UR','LL','LR']:
# ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
The turn-off of the limit is still in one of the very early states in ISC_LOCK, so nothing beyond accepting new sdfs should be needed.