LHO General
ryan.short@LIGO.ORG - posted 16:05, Tuesday 21 May 2024 (77966)
Ops Eve Shift Start

TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 20mph Gusts, 16mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: After some alignment changes (and reversions) during maintenance today, H1 has just finished initial alignment and is starting lock acquisition.

H1 GRD (CDS, OpsInfo)
thomas.shaffer@LIGO.ORG - posted 15:52, Tuesday 21 May 2024 (77964)
A small Guardian change, a few failed tests, and some new Verbal tests

During maintenance and relocking today we tested out a few new features in both Guardian and Verbal. Here's a summary and status of each:

Guardian

Verbal

H1 General
thomas.shaffer@LIGO.ORG - posted 12:46, Tuesday 21 May 2024 (77963)
Ops Day Mid-Shift Report

Maintenance finished up around 1145 PT, but after some input mirror movements by commissioners we are still trying to get the beam back. We will resume locking as soon as we can.

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 11:15, Tuesday 21 May 2024 (77962)
Mid station Wifi is down for maintenance

CDS Wifi access points at both mid stations are down for maintenance, waiting on parts. If you need to use a CDS laptop at either mid station, ask Jonathan, Erik, or Dave for help.

LHO VE
david.barker@LIGO.ORG - posted 10:36, Tuesday 21 May 2024 (77961)
Tue CP1 Fill

Tue May 21 10:09:01 2024 INFO: Fill completed in 8min 57secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 General (SUS)
thomas.shaffer@LIGO.ORG - posted 08:14, Tuesday 21 May 2024 (77960)
Lock loss just before maintenance start

The IFO lost lock just before maintenance started, while the SUS_CHARGE guardian was in the SWAP_BACK_ETMX state. The lock loss happened at 14:58:35 UTC, between the last two lines of the Guardian log excerpt below.

2024-05-21_14:58:22.768728Z SUS_CHARGE [SWAP_BACK_ETMX.enter]
2024-05-21_14:58:22.768728Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2024-05-21_14:58:22.769665Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 184.65
2024-05-21_14:58:42.790151Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ITMX_L3_ISCINF_L_SW1S => 4

ETMX seems to move too much after the L2L gain is applied.
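For context, a Guardian state doing this kind of swap boils down to a handful of ezca channel writes like those in the log excerpt; a minimal illustrative sketch (not the actual SUS_CHARGE code; in a real Guardian module the ezca object is injected by the Guardian runtime):

from guardian import GuardState

class SWAP_BACK_ETMX_SKETCH(GuardState):
    """Illustrative only: hand the L3 L2L drive from ITMX back to ETMX."""
    request = False

    def main(self):
        # Gain steps as seen in the log above
        ezca['SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN'] = 0
        ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'] = 184.65
        self.timer['settle'] = 20    # the log shows ~20 s before the next write

    def run(self):
        if not self.timer['settle']:
            return False
        ezca['SUS-ITMX_L3_ISCINF_L_SW1S'] = 4   # input switch write from the log
        return True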

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:32, Tuesday 21 May 2024 (77959)
Ops Day Shift Start

TITLE: 05/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 18mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: Locked for 7.5 hours, but the range hasn't been looking too good. A glance at the SQZ FOM when I walked in showed the live trace above the reference at higher frequencies. Magnetic injections just started; maintenance day will start soon.

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:47, Tuesday 21 May 2024 (77958)
Workstations updated

Workstations updated and rebooted.  This was an OS package update.  Conda packages were not updated.

H1 General
anthony.sanchez@LIGO.ORG - posted 01:24, Tuesday 21 May 2024 (77957)
Monday Ops Eve Shift End

TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Due to two back-to-back locklosses that both happened at LOWNOISE_LENGTH_CONTROL after 2 seconds in that state, I decided to stop in the state directly before and walk through the code line by line to see if a particular line was causing a lockloss. Each line was run and no lockloss happened. We made it back to OBSERVING at 7:09 UTC.
 
The wind was elevated at the time of both locklosses, so perhaps it was the wind or the incoming 5.7M earthquake; plots attached. Perhaps a coincidence? I don't know.

ISC_LOCK.py is still the previous version; the latest version is this one: ISC_LOCK.py_20may2024.
I believe I made a change in the current copy of ISC_LOCK.py, but that change should be discarded in favor of ISC_LOCK.py_20may2024.
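For reference, walking a state through by hand essentially means issuing its ezca writes one at a time from an interactive Python session and watching for a lockloss between steps; a minimal sketch, assuming the ezca library's Ezca(ifo='H1') constructor and with purely illustrative channel/value pairs (not the real LOWNOISE_LENGTH_CONTROL lines):

from ezca import Ezca

ezca = Ezca(ifo='H1')

# Placeholder steps; the real state's writes would be copied in here.
steps = [
    ('LSC-EXAMPLE_GAIN', 1.0),
    ('LSC-EXAMPLE_TRAMP', 5.0),
]

for channel, value in steps:
    input(f"Press Enter to write {channel} = {value} ...")
    ezca[channel] = value
    # Check lock indicators / the Guardian state here before continuing.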
LOG:
                                                                                                                                                                                                                                     

Start Time | System | Name | Location | Laser Haz | Task | End Time
22:19 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 22:49
23:58 | PCAL | Francisco | PCAL Lab | Yes | PCAL Lab tests | 00:19
00:47 | PCAL | Francisco | PCAL Lab | Yes | PCAL LAB measurements | 00:52

 

Images attached to this report
H1 CDS (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:44, Monday 20 May 2024 - last comment - 22:57, Monday 20 May 2024(77954)
Monday Mid Shift Update

TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 13mph 5min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Took H1 Out of Observing to do a SQZr Scan.

Incoming 5.4M Earthquake.  - Survived it.

Lockloss from Observing at 02:31 UTC.
Screenshots attached.

H1 made it all the way up to LOW_NOISE_LENGTH_CONTROL and was knocked out of lock by what I presumed was wind, but looking at the scopes I don't see any extreme gusts.

On the second locking attempt H1 also made it to LOW_NOISE_LENGTH_CONTROL, but both times that state lasted 2 seconds and 124 ms before losing lock, which looks suspicious.

I started to try to open ISC_LOCK to stop it from reaching that state, and was greeted with the following.

 

Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/bin/guardmedm", line 11, in <module>
    sys.exit(main())
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/medm/__main__.py", line 177, in main
    system.load()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 400, in load
    module = self._load_module()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 287, in _load_module
    self._module = self._import(self._modname)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "<frozen importlib._bootstrap>", line 1129, in __import__
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py", line 8, in <module>
    import ISC_GEN_STATES
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "<frozen importlib._bootstrap>", line 1129, in __import__
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_GEN_STATES.py", line 5, in <module>
    import ISC_library
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "<frozen importlib._bootstrap>", line 1129, in __import__
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py", line 1189, in <module>
    class DUMP_SDF_DIFFS():
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py", line 1199, in DUMP_SDF_DIFFS
    dcuid = str(models.fec)
AttributeError: 'generator' object has no attribute 'fec'
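The final AttributeError is the generic failure mode when a generator object is used where an object with attributes is expected (here, models was a generator when ISC_library.py line 1199 tried models.fec). A minimal, self-contained reproduction of the error type only (not the actual ISC_library code):

def models():
    """Stand-in generator yielding objects that have a 'fec' attribute."""
    yield type("Model", (), {"fec": 123})()

gen = models()                  # a generator object, not a model instance
try:
    dcuid = str(gen.fec)        # same failure mode as ISC_library.py line 1199
except AttributeError as err:
    print(err)                  # 'generator' object has no attribute 'fec'

# Iterating (or calling next()) reaches an object that does have .fec:
dcuid = str(next(models()).fec)
print(dcuid)                    # '123'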

 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 22:14, Monday 20 May 2024 (77955)

Tony, Dave:

We noticed that two scripts in isc/h1/guardian had been changed this afternoon around 16:22, relating to the errors being seen. I made a backup copy of ISC_library.py and ISC_LOCK.py and removed the recent changes using the subversion revert command. The immediate problem of ISC_LOCK's non-functioning ALL button was resolved.

-rw-rw-r--  1 david.barker     controls  45K May 20 22:05  ISC_library.py_20may2024
-rw-rw-r--  1 david.barker     controls  39K May 20 22:05  ISC_library.py
-rw-rw-r--  1 david.barker     controls 302K May 20 22:08  ISC_LOCK.py_20may2024
-rw-rw-r--  1 david.barker     controls 300K May 20 22:08  ISC_LOCK.py
 

anthony.sanchez@LIGO.ORG - 22:57, Monday 20 May 2024 (77956)

OK, looking more into these locklosses in the LOW_NOISE_LENGTH_CONTROL logs: they both stop executing lines at the same time.

Using Meld I was able to find the difference between the two versions of ISC_LOCK.py within the LOW_NOISE_LENGTH_CONTROL Guardian state.

It looks like line 5559 was commented out.
#ezca['LSC-PRCL1_OFFSET'] = -62 # alog 76814
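For reference, the same comparison can be made without Meld using Python's difflib; a sketch, using the backup filenames from Dave's comment above (assuming the script is run from the isc/h1/guardian directory):

import difflib

with open('ISC_LOCK.py') as f_old, open('ISC_LOCK.py_20may2024') as f_new:
    old_lines = f_old.readlines()
    new_lines = f_new.readlines()

# Print a unified diff of the reverted file against the backed-up copy.
for line in difflib.unified_diff(old_lines, new_lines,
                                 fromfile='ISC_LOCK.py',
                                 tofile='ISC_LOCK.py_20may2024'):
    print(line, end='')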

Update:
Jenne wants that to stay put, because it needs to stay at 0.

I will walk it through line by line using the guardian command.
 

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 16:37, Monday 20 May 2024 (77952)
Monday Ops Eve Shift Start

TITLE: 05/20 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 has now been locked for 2+ hours and is currently Observing.
Everything currently looks great.

H1 General
ryan.crouch@LIGO.ORG - posted 16:29, Monday 20 May 2024 (77941)
OPS Monday day shift summary

TITLE: 05/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Lost lock during commissioning; relocking was straightforward and unassisted after finishing the manual IA following the PR3 moves. We've been locked for 2:15 as of 23:30 UTC.

16:00 UTC  We started commissioning

18:45 UTC Lockloss from commissioning activities

18:46 UTC Manual IA

19:22 UTC Another PR2_SPOT_MOVE

19:41 UTC 2nd Manual IA

While relocking we paused in PRMI to run OLG measurements of MICH and PRCL

21:19 UTC back in Observing after accepting/reverting some SQZ diffs

From the DARM_coherence_checks.xml template it looks like the MICH FF needs to be re-tuned (see the coherence sketch after this list)

21:36 UTC Superevent S240520cv
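For reference, a rough sketch of the kind of MICH/DARM coherence check behind that conclusion, written with gwpy rather than the actual DTT template; the channel names and GPS span are illustrative assumptions:

from gwpy.timeseries import TimeSeries

start, end = 1400300000, 1400300600          # placeholder GPS times
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
mich = TimeSeries.get('H1:LSC-MICH_OUT_DQ', start, end)

# Elevated coherence around tens of Hz would point to the MICH FF needing a re-tune.
coh = darm.coherence(mich, fftlength=8, overlap=4)
print(coh.max())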

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
14:46 | FAC | Karen | Optics and VPW | N | Tech clean | 15:15
15:46 | FAC | Kim | MidX | N | Tech clean | 16:38
15:47 | FAC | Karen | MidY | N | Tech clean | 17:26
18:46 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 19:29
22:19 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 22:49
Images attached to this report
H1 ISC
ryan.crouch@LIGO.ORG - posted 13:39, Monday 20 May 2024 (77950)
PRMI MICH & PRCL OLG measured

We paused in PRMI_ASC to run the OLGs for MICH and PRCL; they both seem fine.

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 13:28, Monday 20 May 2024 (77945)
Ran A2L with new script

I ran the new A2L script (from alog 77704), after incorporating suggestions from Vlad and Sheila.  In particular, now the script enables the 30 Hz bandstops in the quads, so that we are preventing the oscillation from going around the loops.

It seems to have run okay. As a future enhancement, I might move the dither oscillation (and the associated bandpasses / bandstops) to a slightly different frequency, since there may be some persistent line in DARM very close to the A2L frequency we are using right now of 30.0 Hz.

Here is what the script printed out:

Overall this seemed good, and Sheila noted that the coherence with the HARD ASC loops went down. I'll check again after we re-lock. This took about 30 mins (plus about 10 mins before that, for some last-minute debugging of variable name spelling errors). I need to check on the fits and the stdevs to make sure they make sense, since the stdev now seems much higher than Vlad's notes say it should be.
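For context, the heart of an A2L measurement like this is demodulating DARM at the dither frequency while the A2L gain is stepped; a minimal sketch of the demodulation step only (placeholder data and variable names, not the actual script):

import numpy as np

fs = 16384.0                              # assumed sample rate
f_dither = 30.0                           # dither frequency used today
darm = np.random.randn(int(60 * fs))      # placeholder for 60 s of DARM data

t = np.arange(len(darm)) / fs
# Demodulate: multiply by the dither quadratures and average.
i_quad = 2 * np.mean(darm * np.cos(2 * np.pi * f_dither * t))
q_quad = 2 * np.mean(darm * np.sin(2 * np.pi * f_dither * t))
amplitude = np.hypot(i_quad, q_quad)

# Fitting amplitude vs. the stepped A2L gain gives the gain that nulls
# the 30 Hz line in DARM.
print(amplitude)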

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 13:27, Monday 20 May 2024 - last comment - 17:33, Monday 20 May 2024(77949)
Re-moved PR3 today, to move beam spot on PR2

Now that the wind has (mostly) calmed down, and we're pretty confident that the increased low frequency noise late last week was due to (not yet understood, intermittent) squeezer-related noise, Sheila and I moved PR3 to get closer to center on PR2. Mostly we were moving in yaw, but occasionally we moved PR3 pitch to counteract the pitch shift that was happening due to cross-coupling. We made these PR3 pitch moves primarily according to the PR3 top mass OSEMs.

While we were moving in yaw using the ISC_LOCK guardian state PR2_spot_move, Sheila noted that the scaling of the slider moves for PR2, PRM, IM4 in response to a PR3 move weren't quite right, since the ASC had to respond a bit.  So, she re-calculated and tuned the scaling factors in that guardian state, and now the ASC responds much less (indicating that all the sliders are moving close to the correct values in response to a PR3 move). 
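For context, the slider scaling amounts to accompanying each PR3 step with proportional offsets on the downstream optics so the ASC barely has to respond; a sketch with purely hypothetical scale factors and assumed slider channel names (not the tuned values in the PR2_spot_move state):

from ezca import Ezca

ezca = Ezca(ifo='H1')

pr3_step = -5.0                                   # urad of PR3 yaw per iteration (illustrative)
scales = {'PR2': 2.0, 'PRM': -0.3, 'IM4': 0.1}    # hypothetical factors

ezca['SUS-PR3_M1_OPTICALIGN_Y_OFFSET'] += pr3_step
for optic, factor in scales.items():
    ezca[f'SUS-{optic}_M1_OPTICALIGN_Y_OFFSET'] += factor * pr3_step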

Overall, we moved the PR3 yaw slider from +152.4 to -74.9. We did about the first half of the move in full lock, gaining about a 1% increase in POP_A_LF and also about a 1% increase in the arm cavity buildups. After maybe 1/4 of our total move for today, we stopped gaining in overall buildups, which makes sense if we reached a point where we're no longer really clipping (so moving farther isn't un-clipping us more). I think I started going too fast (and didn't pause to use the pico motor), and we lost lock, so we did the second half with just the green arms locked, pico-ing to keep the COMM beatnote high.

We redid initial alignment, and are on our way back to lock.  We'll post more details on this morning's work, but this is just a quick alog so I don't forget, while I go off to another meeting.

Before we did any work today, we had two quiet times.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 14:44, Monday 20 May 2024 (77951)

Attached is an annotated screenshot showing some trends as we moved PR3 in lock.

We are now back in NLN, and it looks like we need to adjust the MICH FF. We will wait a few hours for the IFO to thermalize before doing this. The squeezing is also not good right now, but it is changing rapidly during thermalization.

The second screenshot shows the same trend as the first, but with the POP QPD added. The in-lock move of -118 urad in PR3 yaw moved the beam from -0.6 to +0.6 in yaw, while the second, out-of-lock move of -112 urad seems not to have moved the spot on POP. I don't understand why that would be, but it probably makes sense to pico POP before we move another 250 urad (if we think we need to).

Images attached to this comment
jenne.driggers@LIGO.ORG - 17:33, Monday 20 May 2024 (77953)

Adding some more data to the confusion over why the POP QPD didn't see a move after our out-of-lock PR3 move: I also see that this new lock has the NSUM on POP_A_QPD thermalizing to a higher value than we had earlier, but the yaw value of the QPD seems to still be in about the same place as it was when we lost lock halfway through today's PR3 move.

Images attached to this comment
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 12:36, Monday 20 May 2024 (77948)
H1 ISI CPS Noise Spectra Check - Weekly (Famis 25992)

FAMIS Link:  25992

The only CPS channels that look higher at high frequencies (see attached) are:

  1. ITMx St1 V1 (with ITMx St1 V2 also being just a little higher than the ADE 1mm noise)
  2. ITMy St1 V2
Non-image files attached to this report