H1 General (SUS)
thomas.shaffer@LIGO.ORG - posted 08:14, Tuesday 21 May 2024 (77960)
Lock loss just before maintenance start

The IFO lost lock just before maintenance started while the SUS_CHARGE guardian was in the state SWAP_BACK_ETMX. The lock loss happened at 14:58:35 UTC, between the last two lines below from the Guardian log.

2024-05-21_14:58:22.768728Z SUS_CHARGE [SWAP_BACK_ETMX.enter]
2024-05-21_14:58:22.768728Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2024-05-21_14:58:22.769665Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 184.65
2024-05-21_14:58:42.790151Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ITMX_L3_ISCINF_L_SW1S => 4

ETMX seems to move too much after the L2L gain is applied.
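
For reference, the ezca writes in the log correspond to Guardian state code along the lines of the minimal sketch below. The channel names and gain value are taken from the log above; the commented ramped-write alternative is only an assumption for illustration, not what SUS_CHARGE currently does.

# Minimal sketch of the ezca writes shown in the log above; `ezca` and
# GuardState are provided by Guardian.  Channel names and the gain value
# come from the log; the ramped alternative is an assumption for
# illustration, not the current SUS_CHARGE code.
from guardian import GuardState

class SWAP_BACK_ETMX(GuardState):
    def main(self):
        # hand the length drive back from ITMX to ETMX (values from the log)
        ezca['SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN'] = 0
        ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'] = 184.65
        # a gentler option would be a ramped gain change, e.g.:
        # ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L').ramp_gain(
        #     184.65, ramp_time=10, wait=True)
        return True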

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:32, Tuesday 21 May 2024 (77959)
Ops Day Shift Start

TITLE: 05/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 18mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: Locked for 7.5 hours; the range hasn't been looking too good. A glance at the SQZ FOM when I walked in showed the live trace above the reference at higher frequencies. Magnetic injections just started, and maintenance day will start soon.

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:47, Tuesday 21 May 2024 (77958)
Workstations updated

Workstations updated and rebooted.  This was an OS package update.  Conda packages were not updated.

H1 General
anthony.sanchez@LIGO.ORG - posted 01:24, Tuesday 21 May 2024 (77957)
Monday Ops Eve Shift End

TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Due to 2 back-to-back locklosses that both happened at LOWNOISE_LENGTH_CONTROL, about 2 sec into that state, I decided to stop in the state directly before and walk through the state's code line by line to see if there was a particular line causing the lockloss. Each line was run and no lockloss happened. We made it back to OBSERVING at 7:09 UTC.
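
For reference, a hedged sketch of this kind of line-by-line walkthrough, assuming an interactive session where the Guardian-style ezca object is available (e.g. a guardian shell); the channel/value pairs are placeholders, not the real LOWNOISE_LENGTH_CONTROL code.

# Hedged sketch only: manually executing a state's writes one at a time,
# pausing between each to watch for a lockloss.  Assumes `ezca` is
# available as in a Guardian session; channels/values below are
# placeholders, not the actual LOWNOISE_LENGTH_CONTROL code.
import time

steps = [
    ('LSC-EXAMPLE_GAIN', 1.0),     # placeholder
    ('LSC-EXAMPLE_TRAMP', 5.0),    # placeholder
]

for channel, value in steps:
    print('setting', channel, '->', value)
    ezca[channel] = value
    time.sleep(10)                 # wait and watch the lock status before the next line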
 
The wind was elevated at the time of both locklosses, so perhaps it was the wind, or the incoming 5.7M earthquake (plots attached). Perhaps a coincidence? IDK.

ISC_LOCK.py is still the previous version; the latest version is this one: ISC_LOCK.py_20may2024
I do believe I made a change in the current copy of ISC_LOCK.py, but that change should be discarded in favor of ISC_LOCK.py_20may2024.
LOG:
                                                                                                                                                                                                                                     

Start Time System Name Location Lazer_Haz Task Time End
22:19 CAL Francisco PCAL lab LOCAL PCAL work 22:49
23:58 PCAL Francisco PCAL Lab Yes PCAL Lab tests 00:19
00:47 PCAL Francisco PCAL Lab Yes PCAL LAB measurements 00:52

 

Images attached to this report
H1 CDS (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:44, Monday 20 May 2024 - last comment - 22:57, Monday 20 May 2024(77954)
Monday Mid Shift Update

TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 13mph 5min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Took H1 out of Observing to do a SQZr scan.

Incoming 5.4M Earthquake.  - Survived it.

Lockloss from Observing at 02:31 UTC.
Screenshots attached.

H1 made it all the way up to LOWNOISE_LENGTH_CONTROL and was knocked out of lock by what I presumed was wind, but looking at the scopes I don't see any extreme gusts.

On the second locking attempt H1 also made it to LOWNOISE_LENGTH_CONTROL, but both times it only lasted 2 seconds and 124 ms in that state before losing lock, which looks suspicious.

I started to try to open ISC_LOCK to stop it from getting to that state, and was greeted with the following:

 

Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/bin/guardmedm", line 11, in
    sys.exit(main())
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/medm/__main__.py", line 177, in main
    system.load()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 400, in load
    module = self._load_module()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 287, in _load_module
    self._module = self._import(self._modname)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "", line 1129, in __import__
  File "", line 1050, in _gcd_import
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py", line 8, in
    import ISC_GEN_STATES
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "", line 1129, in __import__
  File "", line 1050, in _gcd_import
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_GEN_STATES.py", line 5, in
    import ISC_library
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "", line 1129, in __import__
  File "", line 1050, in _gcd_import
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py", line 1189, in
    class DUMP_SDF_DIFFS():
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py", line 1199, in DUMP_SDF_DIFFS
    dcuid = str(models.fec)
AttributeError: 'generator' object has no attribute 'fec'
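
For context on the final error: models here is a generator object, and a generator does not expose attributes of the items it would yield, so models.fec raises. A minimal standalone reproduction of the same error class (the names are illustrative only, not the actual ISC_library.py code):

# Standalone reproduction of the error class above.  Names are
# illustrative only; this is not the actual ISC_library.py code.
class Model:
    fec = 104  # stand-in DCUID

def find_models():
    # A generator function returns a generator object, not Model instances.
    yield Model()

models = find_models()
try:
    dcuid = str(models.fec)       # same failure mode as in the traceback
except AttributeError as e:
    print(e)                      # 'generator' object has no attribute 'fec'

dcuid = str(next(models).fec)     # iterating (or next()) yields the object itself
print(dcuid)                      # '104'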

 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 22:14, Monday 20 May 2024 (77955)

Tony, Dave:

We noticed that two scripts in isc/h1/guardian had been changed this afternoon around 16:22, relating to the errors being seen. I made backup copies of ISC_library.py and ISC_LOCK.py and removed the recent changes using the subversion revert command. The immediate problem of ISC_LOCK's non-functioning ALL button was resolved.

-rw-rw-r--  1 david.barker     controls  45K May 20 22:05  ISC_library.py_20may2024
-rw-rw-r--  1 david.barker     controls  39K May 20 22:05  ISC_library.py
-rw-rw-r--  1 david.barker     controls 302K May 20 22:08  ISC_LOCK.py_20may2024
-rw-rw-r--  1 david.barker     controls 300K May 20 22:08  ISC_LOCK.py
 

anthony.sanchez@LIGO.ORG - 22:57, Monday 20 May 2024 (77956)

OK, looking more into these locklosses in the LOWNOISE_LENGTH_CONTROL logs: they both stop executing lines at the same point.

Using Meld I was able to find the difference between the 2 versions of ISC_LOCK.py within the LOWNOISE_LENGTH_CONTROL Guardian state.

It looks like line 5559 was commented out.
#ezca['LSC-PRCL1_OFFSET'] = -62 # alog 76814

Update:
Jenne wants that line to stay commented out, because the offset needs to stay at 0.

I will walk through it line by line using the guardian command.
 

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 16:37, Monday 20 May 2024 (77952)
Monday Ops Eve Shift Start

TITLE: 05/20 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 has now been locked for 2+ hours and is currently Observing.
Everything looks great.

H1 General
ryan.crouch@LIGO.ORG - posted 16:29, Monday 20 May 2024 (77941)
OPS Monday day shift summary

TITLE: 05/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Lost lock during commissioning; relocking was straightforward and unassisted after finishing the manual IA following the PR3 moves. We've been locked for 2:15 as of 23:30 UTC.

16:00 UTC  We started commissioning

18:45 UTC lockloss from commissioning activities

18:46 UTC Manual IA

19:22 UTC Another PR2_SPOT_MOVE

19:41 UTC 2nd Manual IA

While relocking we paused in PRMI to run OLG measurements of MICH and PRCL

21:19 UTC back in Observing after accepting/reverting some SQZ diffs

From the DARM_coherence_checks.xml template it looks like MICHFF needs to be re-tuned

21:36 UTC Superevent S240520cv

LOG:

Start Time System Name Location Lazer_Haz Task Time End
14:46 FAC Karen Optics and VPW N Tech clean 15:15
15:46 FAC Kim MidX N Tech clean 16:38
15:47 FAC Karen MidY N Tech clean 17:26
18:46 CAL Francisco PCAL lab LOCAL PCAL work 19:29
22:19 CAL Francisco PCAL lab LOCAL PCAL work 22:49
Images attached to this report
H1 ISC
ryan.crouch@LIGO.ORG - posted 13:39, Monday 20 May 2024 (77950)
PRMI MICH & PRCL OLG measured

We paused in PRMI_ASC to run the OLGs for MICH and PRCL; they both seem fine.

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 13:28, Monday 20 May 2024 (77945)
Ran A2L with new script

I ran the new A2L script (from alog 77704), after incorporating suggestions from Vlad and Sheila.  In particular, the script now enables the 30 Hz bandstops in the quads, so that we prevent the dither oscillation from going around the loops.
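
For illustration, enabling a quad bandstop around a dither measurement might look like the sketch below; the filter bank name and FM slot are assumptions for illustration, not the actual A2L script from alog 77704.

# Hedged sketch: switching a bandstop filter module on around a 30 Hz
# dither, assuming the Guardian/ezca LIGOFilter interface (`ezca` as
# provided in a Guardian/ezca session).  The bank name and FM slot are
# placeholders; the real A2L script defines its own.
bandstop = ezca.get_LIGOFilter('SUS-ETMX_L2_DRIVEALIGN_P2L')  # assumed bank
bandstop.switch_on('FM10')        # assumed slot holding the 30 Hz bandstop
try:
    pass                          # run the 30 Hz dither and demodulation here
finally:
    bandstop.switch_off('FM10')   # restore the bank even if the measurement fails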

It seems to have run okay.  As a future enhancement, I might move the dither oscillation (and the associated bandpasses / bandstops) to a slightly different frequency, since there may be some persistent line in DARM very close to the 30.0 Hz A2L frequency we are using right now.

Here is what the script printed out:

Overall this seemed good, and Sheila noted that the coherence with the HARD ASC loops went down.  I'll check again after we re-lock.  This took about 30 mins (plus about 10 mins before that, for some last-minute debugging of variable name spelling errors).  I need to check on the fits and the stdevs to make sure they make sense, since the stdev now seems much higher than Vlad's notes say it should be.

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 13:27, Monday 20 May 2024 - last comment - 17:33, Monday 20 May 2024(77949)
Re-moved PR3 today, to move beam spot on PR2

Now that the wind has (mostly) calmed down, and we're pretty confident that the increased low frequency noise late last week was due to (not yet understood, intermittent) squeezer-related noise, Sheila and I moved PR3 to get closer to center on PR2.  Mostly we were moving in yaw, but we occasionally moved PR3 pitch to counteract the pitch shift that was happening due to cross-coupling. We were making these PR3 pitch moves primarily according to the PR3 top mass OSEMs.

While we were moving in yaw using the ISC_LOCK guardian state PR2_SPOT_MOVE, Sheila noted that the scaling of the slider moves for PR2, PRM, and IM4 in response to a PR3 move wasn't quite right, since the ASC had to respond a bit.  So she re-calculated and tuned the scaling factors in that guardian state, and now the ASC responds much less (indicating that all the sliders are moving close to the correct values in response to a PR3 move).
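
To make the mechanism concrete, a minimal sketch of this kind of coupled slider move is below; the ratios and channel naming are placeholders, not the tuned values in the PR2_SPOT_MOVE state.

# Hedged sketch of coupled slider moves: when PR3 yaw is stepped, move the
# other optics' sliders by fixed ratios so the ASC barely has to respond.
# `ezca` is the Guardian-provided EPICS interface; the ratios and channel
# naming below are placeholders, not the tuned PR2_SPOT_MOVE values.
PR3_YAW_STEP = -1.0                                  # urad per iteration (placeholder)
RATIOS = {'PR2': 2.0, 'PRM': -0.5, 'IM4': 0.3}       # placeholder scaling factors

ezca['SUS-PR3_M1_OPTICALIGN_Y_OFFSET'] += PR3_YAW_STEP
for optic, ratio in RATIOS.items():
    ezca['SUS-{}_M1_OPTICALIGN_Y_OFFSET'.format(optic)] += ratio * PR3_YAW_STEP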

Overall, we moved PR3 yaw slider from +152.4 to -74.9. We did about the first half of the move in full lock, gaining about 1% increase in POP_A_LF and also about 1% increase in arm cavity buildups.  We had gone maybe 1/4 of our total for today, and we stopped gaining in overall buildups, which makes sense if we got to a point where we're no longer really clipping (and so moving farther isn't un-clipping us more).   I think I started going too fast (and didn't pause to use the pico motor), and we lost lock, so then we did the second half with just the green arms locked, and pico-ing to keep the COMM beatnote high. 

We redid initial alignment, and are on our way back to lock.  We'll post more details on this morning's work, but this is just a quick alog so I don't forget, while I go off to another meeting.

Before we did any work today, we had two quiet times.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 14:44, Monday 20 May 2024 (77951)

Attached is an annotated screenshot showing some trends as we moved PR3 in lock.

We are now back in NLN, and it looks like we need to adjust the MICH FF.  We will wait a few hours for the IFO to thermalize before doing this.  The squeezing is also not good right now, but it is changing rapidly during thermalization.

The second screenshot shows the same trend as the first, but with the POP QPD added.  The in-lock move of -118 urad in PR3 yaw moved the beam from -0.6 to +0.6 in yaw, while the second, out-of-lock move of -112 urad seems not to have moved the spot on POP.  I don't understand why that would be, but it probably makes sense to pico POP before we move another 250 urad (if we think we need to).

Images attached to this comment
jenne.driggers@LIGO.ORG - 17:33, Monday 20 May 2024 (77953)

Adding some more data to the confusion over why the POP QPD didn't see a move after our out-of-lock PR3 move: I also see that in this new lock the NSUM on POP_A_QPD is thermalizing to a higher value than we had earlier, but the yaw value of the QPD seems to still be in about the same place as it was when we lost lock halfway through today's PR3 move.

Images attached to this comment
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 12:36, Monday 20 May 2024 (77948)
H1 ISI CPS Noise Spectra Check - Weekly (Famis 25992)

FAMIS Link:  25992

The only CPSs which look higher at high frequencies (see attached) are:

  1. ITMx St1 V1 (with ITMx St1 V2 being just a little higher than the ADE 1mm noise).
  2. ITMy St1 V2
Non-image files attached to this report
H1 SQZ
andrei.danilin@LIGO.ORG - posted 10:42, Monday 20 May 2024 (77890)
Squeezer parameter estimation with the QuantumRelGamma model

Sheila, Camilla, Andrei

Time traces from aLOGs 77133, 77268, and 77710 were used to deduce the system's parameters. Three types of measurements were acquired: without the squeezer beam, without the Filter Cavity, and with both. The data was normalized; therefore, we analyzed the quantum noise reduction in decibels relative to no squeezing, rather than the PSD of the signal.
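
As a reminder of that normalization, the fitted quantity is the noise relative to the unsqueezed reference in dB rather than the raw PSD; a minimal sketch of the conversion (the array names are placeholders):

# Minimal sketch of the normalization described above: each measured trace
# is expressed relative to the no-squeezing reference in dB.  The input
# arrays are placeholders for DARM PSDs on a common frequency vector.
import numpy as np

def relative_noise_db(psd_meas, psd_nosqz):
    """Quantum noise change relative to no squeezing, in dB (negative means squeezing)."""
    return 10.0 * np.log10(np.asarray(psd_meas) / np.asarray(psd_nosqz))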

The QuantumRelGamma budget module was used for the fitting.

To fit the measurements without the Filter Cavity, we assumed the Filter Cavity Mismatch parameter to be 1 (equivalent to ifo.Squeezer.Type = 'Freq Independent').

After inferring the quantum noise by adjusting the arm power and optical gain parameters, we then adjusted all other parameters, mainly focusing on phase, Filter Cavity detuning, injected squeezing, SEC detuning, and IFO-OMC mismatch.

The code was primarily based on Dhruva's interactive-sqz-main.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:21, Monday 20 May 2024 (77944)
Mon CP1 Fill

Mon May 20 10:08:12 2024 INFO: Fill completed in 8min 8secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 09:03, Monday 20 May 2024 - last comment - 11:47, Monday 20 May 2024(77940)
OPS Monday day shift update

We started commissioning at 16:00 UTC and will continue until 19:00 UTC, starting with some SQZ optimization.

Comments related to this report
ryan.crouch@LIGO.ORG - 11:47, Monday 20 May 2024 (77947)Lockloss

Lockloss at 18:45 UTC during commissioning, likely from commissioning work; we were in PR2_SPOT_MOVE.

H1 General (SEI)
ryan.crouch@LIGO.ORG - posted 16:35, Saturday 18 May 2024 - last comment - 10:09, Monday 20 May 2024(77906)
OPS Saturday day shift update

TITLE: 05/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Lost lock at the end of the shift. There were DRMI alignment issues, PRMI was plagued by a ~20 Hz buzz that kept killing it after 2 seconds, and the mode cleaner was slow to lock on each attempt today. After the 3rd IA, DRMI was finally in the mood to lock, I didn't have to go through PRMI, and I returned to Observing at 21:15 UTC. I was not able to take the calibration measurement this afternoon as planned.

15:25 UTC HAM3 ISI CPS trip while doing green arms in IA. I could not finish SRC align, as SRM was continually saturating. There were also a couple of IY saturations, and there was an earthquake shaking everything at the time.

16:02 UTC HAM3 ISI CPS trip during FIND_IR? I couldn't get PRMI to stay locked for more than 2 seconds.

17:00 UTC I restored the alignments to 05/18/24 03:00 UTC, which was right before we locked last night for the 11-hour lock. After locking ALS I could immediately see the beatnotes were better, but DRMI only had small flashes. After a few rounds of CHECK_MICH there were still bad flashes, so at 13:31 I started a 2nd IA skipping GREEN_ARMS; there were no saturations during this IA and I was able to finish SRC_ALIGN.

I was still not able to get PRMI to stay locked for more than 2 seconds; there's a 20-ish Hz oscillation that kills it when the MICH1 filter is ramped on. I tried lowering the PRMI-MICH gain from 3.2 to 2.8 in lscparams, as Sheila suggested the other day. It lasted longer but was still ultimately killed by the 20 Hz buzzing; I lowered it further to 2.6 and it lasted even longer, but was killed when PRCL1 turned on. AS AIR was still clearly misaligned, so I went through CHECK_MICH, which made things worse: I now had worse PRMI flashes and a still clearly misaligned AS AIR. During this I was also trying to help adjust PRM/BS while in PRMI to fix AS AIR.

19:37 UTC After some more seemingly fruitless adjusting of PRM and BS, I started a 3rd IA... it looks centered on AS-AS_C during SRC and PRC alignment.

Same behavior as before: bad DRMI, then PRMI gets killed after a few seconds by the 20 Hz oscillation; it's also always right after the BS_M1_L filter is turned on. Lockloss after a few rounds of PRMI.

For the next attempt I tried turning down the PRMI-PRCL gain from 6 to 4, but then DRMI locked all of a sudden and we didn't even go through PRMI... so I'll revert these changes. I have no idea what's different about this attempt versus the last, or why it decided to work now, but I'll take it. The alignment was just finally good enough, even though it wasn't on the first attempt after the IA finished.

21:15 UTC back to Observing

23:21 UTC lockloss from a HAM3 ISI trip, same as before, CPS sensor tagging SEI

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 17:34, Saturday 18 May 2024 (77914)SEI

Looking at trends, there were four trips today from HAM3 CPS glitches, none of which can be attributed to earthquakes, as far as I can tell. HAMs are usually not the first to get tripped, and the glitches Ryan posted don't look physical; they look like typical CPS glitches. Tony went out and power cycled the CPS parts of the HAM ISI interface chassis for HAM3; we'll just have to wait and see if that fixes it or if more invasive work is needed.

The first attached trends are the high-frequency BLRMS we have for monitoring for glitches. They look like they would have provided some warning for this; there were glitches an hour or so before the first trip. I think we have some tests for this in DIAG_MAIN? We should think about how to get this more attention.

 

Images attached to this comment
ryan.crouch@LIGO.ORG - 10:09, Monday 20 May 2024 (77943)SEI

There is indeed a test in DIAG_MAIN for noisy CPS sensors, called "SEI_CPS_NOISEMON", and it was going off last night during the glitches and the 2 trips (marked by the T cursors at 02:37 and 02:47 UTC). Running "guardctrl log -n1000 -a "1400207700" -b "1400210000" DIAG_MAIN" over a time when the CPS was glitching last night yielded the log excerpt below. Maybe there should be a verbal_alarms check for this as well? A sketch of such a check is included after the log excerpt.

2024-05-20_02:34:59.601882Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'V2')]
2024-05-20_02:36:49.102804Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'V2')]
2024-05-20_02:37:01.099752Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'V2')]
2024-05-20_02:37:01.353280Z DIAG_MAIN [RUN_TESTS.run] USERMSG 1: SEI_STATE: ['HAM3'] is not nominal
2024-05-20_02:37:05.341046Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: FAST_SHUTTER_HV: Fast Shutter HV not Ready
2024-05-20_02:37:05.353980Z DIAG_MAIN [RUN_TESTS.run] USERMSG 3: SEISMON_EQ: LSC-REFL_SERVO_SPLITMON > 8V
2024-05-20_02:37:05.354500Z DIAG_MAIN [RUN_TESTS.run] USERMSG 4: SHUTTERS: AS beam shutter open (nominal: closed)
2024-05-20_02:37:05.478436Z DIAG_MAIN [RUN_TESTS.run] USERMSG 3: SEISMON_EQ: IMC-REFL_SERVO_SPLITMON > 8V
2024-05-20_02:37:05.478660Z DIAG_MAIN [RUN_TESTS.run] USERMSG 4: SHUTTERS: Shutter A closed (nominal: open)
2024-05-20_02:37:05.605105Z DIAG_MAIN [RUN_TESTS.run] USERMSG 6: SQUEEZING: SQZ_MANAGER is not in the nominal state. Check we are Squeezing.
2024-05-20_02:37:05.907983Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: ESD_DRIVER: ESD X driver OFF
2024-05-20_02:37:08.857117Z DIAG_MAIN [RUN_TESTS.run] USERMSG 2: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'V2')]
2024-05-20_02:37:09.471870Z DIAG_MAIN [RUN_TESTS.run] USERMSG 2: PSL_FSS: PZT MON is high, may be oscillating
2024-05-20_02:38:30.391003Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:40:33.387179Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:40:41.389505Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:40:54.382921Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:41:06.394464Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:41:11.386640Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:41:34.390778Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:43:51.602046Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'V2')]
2024-05-20_02:44:18.387630Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:47:24.388026Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:47:37.597677Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'V2')]
2024-05-20_02:47:37.854048Z DIAG_MAIN [RUN_TESTS.run] USERMSG 1: SEI_STATE: ['HAM3'] is not nominal
2024-05-20_02:47:42.394967Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:48:21.393532Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:48:33.387284Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:50:27.386254Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:51:25.216172Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_FSS: PZT MON is high, may be oscillating
2024-05-20_02:52:30.383231Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:53:48.383839Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:54:03.858072Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'V2')]
2024-05-20_02:54:04.477258Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H1'), ('HAM3', 'H2'), ('HAM3', 'H3'), ('HAM3', 'V1'), ('HAM3', 'V2'), ('HAM3', 'V3')]
2024-05-20_02:54:05.232099Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H1'), ('HAM3', 'H2'), ('HAM3', 'H3'), ('HAM3', 'V1'), ('HAM3', 'V3')]
2024-05-20_02:54:05.475537Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H1'), ('HAM3', 'H3'), ('HAM3', 'V1'), ('HAM3', 'V3')]
2024-05-20_02:54:10.853656Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H1')]
2024-05-20_02:54:27.392176Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:54:47.394045Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:54:57.724598Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H1'), ('HAM3', 'H3'), ('HAM3', 'V1'), ('HAM3', 'V3')]
2024-05-20_02:55:00.292951Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H1'), ('HAM3', 'H2'), ('HAM3', 'H3'), ('HAM3', 'V1'), ('HAM3', 'V2'), ('HAM3', 'V3')]
2024-05-20_02:55:06.224234Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'H3'), ('HAM3', 'V1'), ('HAM3', 'V2'), ('HAM3', 'V3')]
2024-05-20_02:55:06.356844Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'V1'), ('HAM3', 'V2')]
2024-05-20_02:55:06.482500Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2'), ('HAM3', 'V2')]
2024-05-20_02:55:08.976174Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: SEI_CPS_NOISEMON: Noisy HAM CPS(s): [('HAM3', 'H2')]
2024-05-20_02:56:27.394588Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low
2024-05-20_02:56:56.390876Z DIAG_MAIN [RUN_TESTS.run] USERMSG 0: PSL_ISS: Diffracted power is low

Related: alog75134
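
For reference, a hedged sketch of what such a check could look like, assuming the usual DIAG_MAIN convention that tests are generator functions yielding message strings; the BLRMS channel pattern and threshold below are placeholders, not the real SEI_CPS_NOISEMON test. A verbal_alarms check could poll the same BLRMS channels and announce when they exceed the threshold.

# Hedged sketch of a DIAG_MAIN-style noisy-CPS check.  Assumes the usual
# convention that tests are generators yielding message strings, and that
# `ezca` is the Guardian-provided EPICS interface.  The BLRMS channel
# pattern and threshold are placeholders, not the real SEI_CPS_NOISEMON.
CPS_BLRMS_THRESHOLD = 100  # placeholder threshold

def SEI_CPS_NOISEMON_SKETCH():
    noisy = []
    for dof in ['H1', 'H2', 'H3', 'V1', 'V2', 'V3']:
        blrms = ezca['ISI-HAM3_CPS_{}_BLRMS_EXAMPLE'.format(dof)]  # placeholder channel
        if blrms > CPS_BLRMS_THRESHOLD:
            noisy.append(('HAM3', dof))
    if noisy:
        yield 'SEI_CPS_NOISEMON: Noisy HAM CPS(s): {}'.format(noisy)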

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 16:31, Friday 17 May 2024 - last comment - 11:12, Monday 20 May 2024(77899)
fmscan, a program to show filter module activity over long time periods

fmscan is a program which, for a given filter module and time period, reports when filters were switched on/off.

It is designed to be run from MEDM as an MEDM_EXEC_LIST program. To set it up, add the following to your MEDM_EXEC_LIST environment variable:

    :fmscan;/ligo/gitcommon/fmscan/fmscan.py &P &

The first attachment shows the fmscan option being selected in the Execute pull-down menu, opened from MEDM with the right mouse button. You can select any EPICS PV channel associated with the filter module (e.g. GAIN, OFFSET, SWI, etc.).

The second attachment shows the time-selector GUI with the default time range of the past 24 hours. Any gpstime format can be used (e.g. GPS seconds, "1 month ago", etc.).

The third attachment shows the filter activity for H1:LSC-SRCL1 for the past day. Filter changes are shown in bold. The first and last rows show the start/stop times, respectively.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 11:12, Monday 20 May 2024 (77946)

fmscan has been added to the default MEDM_EXEC_LIST environment variable via puppet. If you don't redefine MEDM_EXEC_LIST in your .bashrc file, fmscan will now automatically show up in your exec listing.
