LHO VE
david.barker@LIGO.ORG - posted 10:58, Thursday 13 November 2025 (88091)
Thu CP1 Fill

Thu Nov 13 10:13:43 2025 INFO: Fill completed in 13min 39secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 DetChar (DetChar)
joan-rene.merou@LIGO.ORG - posted 10:26, Thursday 13 November 2025 - last comment - 09:11, Friday 19 December 2025(88089)
Grounding of the ITMX ESD driver with voltage set to 0
[Joan-Rene Merou, Alicia Calafat, Sheila Dwyer, Jenne Driggers]

We entered the LVEA and went to the Beer garden. There we turned off the low-voltage ITM ESD driver D1600092, first the 15V switch and then the medium-voltage switch; to turn it on again, the switches should be re-enabled in the opposite order. With the voltage request set to 0 and the chassis powered off, we unplugged the SHV cables going to the chamber and plugged them into Robert's ground boxes, which ground them to the rack, which is in turn grounded to the chamber. This was done at both drivers (see attached photos).

Afterwards, we changed the code at /opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py so that LOWNOISE_COIL_DRIVERS goes to LOWNOISE_ESD_ETMY instead of TRANSITION_FROM_ETMX. This was done by editing lines 6670 and 6674, moving the ", 15" step from line 6670 to line 6674. Finally, we communicated the change to the operator and loaded the guardian.
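
For context, the sketch below shows the kind of edge-weight move this describes (the state names are from this log, but the surrounding code and the use of the optional third tuple element as a guardian edge weight are illustrative assumptions, not the actual ISC_LOCK.py contents):

    # Guardian routes requests along the lowest-cost path through the
    # state graph, so moving the extra edge weight from one outgoing
    # edge to the other flips which successor state the lock
    # acquisition path takes.

    # Before: TRANSITION_FROM_ETMX was on the preferred path.
    edges = [
        ('LOWNOISE_COIL_DRIVERS', 'TRANSITION_FROM_ETMX'),
        ('LOWNOISE_COIL_DRIVERS', 'LOWNOISE_ESD_ETMY', 15),
    ]

    # After: the ", 15" is moved, de-preferring TRANSITION_FROM_ETMX,
    # so the path now goes through LOWNOISE_ESD_ETMY.
    edges = [
        ('LOWNOISE_COIL_DRIVERS', 'TRANSITION_FROM_ETMX', 15),
        ('LOWNOISE_COIL_DRIVERS', 'LOWNOISE_ESD_ETMY'),
    ]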
Images attached to this report
Comments related to this report
joan-rene.merou@LIGO.ORG - 13:28, Wednesday 26 November 2025 (88257)
It appears that the grounding did not decrease the amplitude of the combs. As seen in the attached figure, the relative amplitude of the first harmonics of the combs remains mostly the same before and after the change on November 13th.
Non-image files attached to this comment
corey.gray@LIGO.ORG - 09:11, Friday 19 December 2025 (88616)EPO

EPO-Tagging for photo of ESD work

H1 SQZ
matthewrichard.todd@LIGO.ORG - posted 09:34, Thursday 13 November 2025 (88088)
OMC scan with SQZ beam and cold OM2, changing ZM4/5 -- 20251113

M. Todd, S. Dwyer


We wanted to take a few measurements of the OMC mismatch with the SQZ beam, changing only one of the ZMs at a time away from its nominal setting. OM2 was cold for all of these measurements.

Measurement                       GPS Time     OMC Mismatch [%]
ZM4 = 6.2, ZM5 = -0.4 (nominal)   1447088389   2.8
ZM4 = 4,   ZM5 = -0.4             1447088919   2.4
ZM4 = 6.2, ZM5 = -4.5             1447089464   10.5

It seems that ZM4 is not a very strong actuator for changing the mode at the OMC.
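
For readers unfamiliar with the measurement, a common way to turn an OMC mode scan into a mismatch number is to compare the power in the second-order transverse mode peaks to the TEM00 carrier peak; a minimal sketch (illustrative only, not necessarily the exact analysis used here):

    # Estimate mode mismatch from OMC mode-scan peak heights.
    def mismatch_percent(p00, p2_peaks):
        """Mismatch ~ power in 2nd-order modes / (TEM00 + 2nd-order)."""
        p2 = sum(p2_peaks)
        return 100.0 * p2 / (p00 + p2)

    # Made-up peak heights for illustration:
    print(mismatch_percent(1.0, [0.020, 0.009]))  # ~2.8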

Images attached to this report
H1 PSL
oli.patane@LIGO.ORG - posted 09:29, Thursday 13 November 2025 - last comment - 09:14, Wednesday 19 November 2025(88087)
IMC_LOCK stuck in FAULT due to FSS oscillation

During PRC Align, the IMC unlocked and couldn't relock because the FSS was oscillating a lot - PZT MON showed it moving all over the place - and I couldn't even take the IMC to OFFLINE or DOWN because the PSL ready check was failing. To fix the oscillation, I turned off autolock in the Loop Automation section of the FSS screen, re-enabled it after a few seconds, and then we were able to go to DOWN fine and I was able to relock the IMC.

TJ said this has happened to him and to a couple other operators recently.

 

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:07, Thursday 13 November 2025 (88090)OpsInfo

Took a look at this; see attached trends.  What happened here is that the FSS autolocker got stuck between states 2 and 3 due to the oscillation.  The autolocker is programmed so that, if it detects an oscillation, it jumps immediately back to State 2 to lower the common gain and then ramps it back up to hopefully clear the oscillation.  It does this via a scalar multiplier of the FSS common gain that ranges from 0 to 1, ramping the gain from 0 dB back to its previous value (15 dB in this case); it does not touch the gain slider, as this is all done in a block of C code called by the front-end model.  The problem here is that 0 dB is generally not low enough to clear the oscillation, so the autolocker gets stuck in this State 2/State 3 loop and has a very hard time getting out of it.  This is seen in the lower-left plot of H1:PSL-FSS_AUTOLOCK_STATE: it never gets to State 4 but continuously bounces between States 2 and 3, and the autolocker does not lower the common gain slider, as seen in the center-left plot.  If this happens, turning the autolocker off then on again is most definitely the correct course of action.
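
A minimal sketch of that ramp behavior (illustrative Python, not the actual front-end C code):

    # The autolocker's 0-to-1 scalar multiplies the common gain in dB,
    # so the lowest the ramp can reach is 0 dB -- no matter how much
    # further down the slider itself could go.
    def ramped_common_gain_db(slider_db, ramp):
        """ramp runs from 0.0 to 1.0 during the autolocker gain ramp."""
        return ramp * slider_db   # slider at 15 dB: ramps 0 dB -> 15 dB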

We have an FSS guardian node that also raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10 dB, which is low enough to clear the majority of oscillations.  So why not use this during lock acquisition?  Because when an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker fight each other.  This conflict makes lock acquisition take much longer, several tens of minutes, so the guardian node is not engaged during RefCav lock acquisition.

Talking with TJ this morning, he asked whether the FSS guardian node could handle the autolocker off/on if/when it gets stuck in this State 2/State 3 loop.  On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing this.  For OPS: in the interim, if this happens again please do not wait for the oscillation to clear on its own.  If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom.  If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.
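
For what it's worth, the guardian-side toggle could look something like the sketch below (hypothetical, not deployed code: the enable-switch channel name and the 30-second threshold are assumptions; only PSL-FSS_AUTOLOCK_STATE is taken from the trends above, and ezca is provided by the guardian runtime):

    import time
    from guardian import GuardState

    class AUTOLOCKER_WATCH(GuardState):
        def main(self):
            self.timer['stuck'] = 30          # grace period before toggling

        def run(self):
            if ezca['PSL-FSS_AUTOLOCK_STATE'] >= 4:
                return True                   # autolocker made it through
            if self.timer['stuck']:
                # Cycling below State 4 for too long: toggle the
                # autolocker to force a fresh gain ramp.
                # 'PSL-FSS_AUTOLOCK_ON' is an assumed channel name.
                ezca['PSL-FSS_AUTOLOCK_ON'] = 0
                time.sleep(2)
                ezca['PSL-FSS_AUTOLOCK_ON'] = 1
                self.timer['stuck'] = 30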

Images attached to this comment
jason.oberling@LIGO.ORG - 09:14, Wednesday 19 November 2025 (88170)

Slight correction to the above: the autolocker did not get stuck between states 2 and 3, as there is no path from state 3 back to state 2 in the code.  What's happening is that the autolocker goes into state 4, detects an oscillation, then immediately jumps back to state 2; so this is a loop through states 2 -> 3 -> 4 -> 2, due to the oscillation and the inability of the autolocker gain ramp to effectively clear it.  This happens at the clock speed of the FSS front-end computer, while the channel that monitors the autolocker state is only a 16 Hz channel, so the monitor channel is nowhere near fast enough to see all of the state changes the code goes through during an oscillation.

H1 CDS
david.barker@LIGO.ORG - posted 08:10, Thursday 13 November 2025 (88085)
EY vacuum glitch at beam tube ion pump 03:00:16 Thu 13 Nov 2025 PST

VACSTAT detected a vacuum glitch at 03:00:24 this morning, originating at PT427 (the EY beam tube ion pump station, about 1000 feet from EY). The pressure rapidly increased from 1.4e-09 to 1.4e-07 Torr, then rapidly pumped back down to nominal in 8 seconds. The glitch was detected soon afterwards by all of the gauges at EY; they only increased from around 1.0e-09 to 2.0e-09 Torr and took around 25 minutes to pump back down.

The glitch was seen at MY at much reduced amplitude about 6 minutes after the event.

H1 was not locked at the time.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:37, Thursday 13 November 2025 - last comment - 08:22, Thursday 13 November 2025(88084)
Ops Day Shift Start

TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 3mph Gusts, 0mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 1.22 μm/s 
QUICK SUMMARY:

Currently in DOWN due to excessively high secondary microseism. We're supposed to have calibration measurements and commissioning today. I'll try for a bit to get us back up, but I doubt we'll get past DRMI since the microseism is worse now than it was last night for Ryan or TJ.

Comments related to this report
david.barker@LIGO.ORG - 08:22, Thursday 13 November 2025 (88086)

I restarted VACSTAT at 08:18 to clear its alarm. Tyler resolved the RO alarm at 06:17 and now the CDS ALARM is GREEN again.

H1 General
thomas.shaffer@LIGO.ORG - posted 02:54, Thursday 13 November 2025 (88083)
Ops Owl Update

The useism continues to grow. I'll keep the IFO in down and see where things are at again in a few hours.

LHO General
ryan.short@LIGO.ORG - posted 22:01, Wednesday 12 November 2025 (88082)
Ops Eve Shift Summary

TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ
SHIFT SUMMARY: H1 was happily observing until mid-shift when the microseism just got too high and caused a lockloss. Haven't been able to make it past DRMI since then due to the ground motion. I'm leaving H1 in 'DOWN' since it's not having any success, but TJ says he'll check on things overnight.
LOG:

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 19:19, Wednesday 12 November 2025 (88081)
Lockloss @ 02:44 UTC

Lockloss @ 02:44 UTC - link to lockloss tool

Possibly caused by ground motion, as everything looked to be moving quite a lot at the time, and the secondary microseism band has been rising this evening. Environment plots on the lockloss tool show a quick increase in motion at the time of the lockloss also.

H1 General
oli.patane@LIGO.ORG - posted 16:36, Wednesday 12 November 2025 - last comment - 17:08, Wednesday 12 November 2025(88079)
Ops Day Shift End

TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Observing at 140 Mpc and have been Locked for over 16.5 hours. One short drop out of Observing due to the squeezer unlocking, but it relocked automatically and we've been Observing ever since. GRB-Short E617667 came in today at 21:29 UTC.
LOG:

15:30UTC Observing and Locked for over 7.5 hours
    15:52 Out of Observing due to SQZ unlock
    15:56 Back into Observing
    21:29 GRB-Short E617667

Start  System  Name              Location    Laser Haz  Task                         End
16:00  FAC     Randy             YTube       n          Caulking up those joints     22:57
17:04  FAC     Kim               MX          n          Tech clean                   18:34
18:21          Sheila, Kar Meng  Optics Lab  y(local)   OPO prep (Sheila out 18:57)  19:10
18:51  VAC     Gerardo           MX          n          Looking for case             20:07
20:09          Corey             Optics Lab  n          Cleaning optics              23:33
21:26          Kar Meng          Optics Lab  y(local)   OPO prep                     23:28
22:01          RyanS             Optics Lab  n          Cleaning optics              23:33
22:10          TJ                Optics Lab  n          Spying on Corey and RyanS    22:22
22:59          Matt              Optics Lab  n          Grabbing wipes               23:16
23:34          Matt              Prep Lab    n          Putting wipes away           23:35
Comments related to this report
david.barker@LIGO.ORG - 17:08, Wednesday 12 November 2025 (88080)

Tyler has the reverse osmosis water conditioning system offline overnight. The CDS alarm system has an active cell-phone bypass for this channel, which expires tomorrow afternoon. This should be the only channel in CDS ALARM.

Bypass will expire:
Thu Nov 13 05:05:22 PM PST 2025
For channel(s):
    H0:FMC-CS_WS_RO_ALARM
 

LHO General
ryan.short@LIGO.ORG - posted 16:02, Wednesday 12 November 2025 (88078)
Ops Eve Shift Start

TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.56 μm/s 
QUICK SUMMARY: H1 has been locked for 16 hours and observing the whole day.

LHO VE
david.barker@LIGO.ORG - posted 11:52, Wednesday 12 November 2025 (88077)
Wed CP1 Fill

Wed Nov 12 10:08:17 2025 INFO: Fill completed in 8min 13secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 AOS
oli.patane@LIGO.ORG - posted 09:18, Wednesday 12 November 2025 (88075)
ISI CPS Noise Spectra Check Weekly FAMIS

Closes FAMIS#27534, last checked 87787

I was supposed to do this last week but was out, so I'm doing it now. It was last done on 10/28, so this comparison is against measurements from two weeks ago.

Nothing of note, everything looks very similar to how it looked a couple weeks ago.

Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:38, Wednesday 12 November 2025 - last comment - 07:45, Wednesday 12 November 2025(88073)
Ops Day Shift Start

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.37 μm/s 
QUICK SUMMARY:

Observing at 148 Mpc and have been Locked for over 7.5 hours. Currently in a standdown due to Superevent S251112cm, which came in at 11/12 15:19 UTC.

Comments related to this report
oli.patane@LIGO.ORG - 07:45, Wednesday 12 November 2025 (88074)

Looks like we had a few events come in last night besides the superevent:

11/12 06:29 UTC GRB-Short E617425

11/12 13:35 UTC GRB-Short E617519

LHO General
ryan.short@LIGO.ORG - posted 22:03, Tuesday 11 November 2025 (88072)
Ops Eve Shift Summary

TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A completely uneventful shift with H1 observing throughout. Now been locked for almost 7 hours.

H1 CDS
jonathan.hanks@LIGO.ORG - posted 17:21, Tuesday 11 November 2025 (88071)
WP 12876 Update the run number server to help test the new frame writer

As per WP 12876 I updated the run number server with a new version to allow more testing of the new frame writer (h1daqfw2), without allowing it to modify external state in CDS.

The run number server tracks channel-list configuration changes in the frames.  The basic idea is that the frame writers create a checksum/hash of the channel list, send it to the run number server, and get back a run number to include in the frame.  This is then used by the NDS1 server to optimize multi-frame queries: if it knows the configuration hasn't changed, some of the structures can be re-used between frames, which has been measured at about a 30% speed-up.

This update added a new port/interface that the server listens on.  It behaves a little differently: it will only return the current run number (or 0 if the hash doesn't match) and will not increment the global state, so it is safe for a test system to use.

Now the new frame writer can automatically query the run number server to get the correct value to put in the frame (previously we had been setting it via an EPICS variable).  One step closer to the new frame writer being in production.
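
As a rough sketch of the idea (the message format, port handling, and function names below are illustrative assumptions, not the real server's protocol):

    import hashlib
    import socket

    def channel_list_hash(channels):
        """Stable hash of the frame's channel-list configuration."""
        return hashlib.sha256('\n'.join(sorted(channels)).encode()).hexdigest()

    def query_run_number(host, port, cfg_hash):
        """Ask the run number server for the run number matching cfg_hash.

        On the new read-only interface the server returns the current run
        number, or 0 if the hash doesn't match, and never increments the
        global state -- safe for a test frame writer like h1daqfw2.
        """
        with socket.create_connection((host, port)) as sock:
            sock.sendall(cfg_hash.encode() + b'\n')
            return int(sock.recv(64).strip())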

 

H1 GRD (CDS)
thomas.shaffer@LIGO.ORG - posted 14:53, Tuesday 14 January 2025 - last comment - 10:17, Wednesday 12 November 2025(82273)
h1guardian1 machine reboot and point back at nds0

WP12274

FAMIS28946

We rebooted the h1guardian1 machine today for 3 things:

  1. Point the machine back at nds0 as the primary nds server
    • I noticed the other day that guardian was still defining its NDS server list with nds1 as primary and nds0 as secondary. I'm not entirely sure when this was changed, but maybe 2 years ago (alog66834).
    • This was done by changing the NDSSERVER definition in the /etc/guardian/local-env file (see the sketch after this list).
  2. Clear any stale processes that might be latching the GPS leap-second data.
  3. Quarterly machine reboot FAMIS task
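
For reference, NDSSERVER is a comma-separated, priority-ordered host list, so the change was presumably a one-line edit along these lines (hostnames and ports here are assumed for illustration, not copied from the actual file):

    # /etc/guardian/local-env (illustrative entry)
    # Primary NDS server first, fallback second.
    NDSSERVER=h1nds0:8088,h1nds1:8088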

All 168 nodes came back up and Erik confirmed that nds0 was seeing the traffic after the machine reboot.

Comments related to this report
erik.vonreis@LIGO.ORG - 10:17, Wednesday 12 November 2025 (88076)

Server order is set in the guardian::lho_guardian profile in puppet.
