TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 16mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: After some alignment changes (and reversions) during maintenance day today, H1 has just finished initial alignment and is starting lock acquisition.
During maintenance and during relocking today we tested out a few new features in both guardian land and for Verbal. Here's a summary and status of each:
Guardian
Verbal
Maintenance finished up around 1145 PT, but after some input mirror moves by commissioners we are still trying to get the beam back. We will resume locking as soon as we can.
CDS WiFi access points at both mid stations are down for maintenance, waiting on parts. If you need to use a CDS laptop at either mid station, ask Jonathan, Erik, or Dave for help.
Tue May 21 10:09:01 2024 INFO: Fill completed in 8min 57secs
Travis confirmed a good fill curbside.
The IFO lost lock just before maintenance started while the SUS_CHARGE guardian was in the state SWAP_BACK_ETMX. The lock loss happened at 14:58:35 UTC, between the last two lines below from the Guardian log.
2024-05-21_14:58:22.768728Z SUS_CHARGE [SWAP_BACK_ETMX.enter]
2024-05-21_14:58:22.768728Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN => 0
2024-05-21_14:58:22.769665Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 184.65
2024-05-21_14:58:42.790151Z SUS_CHARGE [SWAP_BACK_ETMX.main] ezca: H1:SUS-ITMX_L3_ISCINF_L_SW1S => 4
ETMX seems to move too much after the L2L gain is applied.
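For illustration only (not the actual SUS_CHARGE code), here is a minimal sketch of how the same gain swap could be done with a ramp instead of a step, assuming a guardian session where ezca is defined; ramp_gain() is the ezca LIGOFilter helper and the 5 s ramp time is an arbitrary choice here:

ezca['SUS-ITMX_L3_DRIVEALIGN_L2L_GAIN'] = 0
# Ramp the ETMX L2L gain up over 5 s rather than stepping it in one write,
# which would soften the kick that seems to knock ETMX around.
ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L').ramp_gain(184.65, ramp_time=5, wait=True)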
TITLE: 05/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 18mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY: Locked for 7.5 hours, range hasn't been looking too good. A glance at the SQZ FOM when I walked in had the live trace above the reference in higher frequencies. Magnetic injections just started, maintenance day will start soon.
Workstations updated and rebooted. This was an OS package update. Conda packages were not updated.
TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Due to 2 back-to-back locklosses that both happened at LOWNOISE_LENGTH_CONTROL after 2 sec in that state, I decided to stop in the state directly before it and walk through the code line by line to see if a particular line was causing the lockloss. Each line was run and no lockloss happened. We made it back to OBSERVING at 7:09 UTC.
The wind was elevated at the time of both locklosses, so perhaps it was the wind or the incoming 5.7M earthquake; plots attached. Perhaps a coincidence? IDK.
ISC_LOCK.py is still the previous version; the latest version is this one: ISC_LOCK.py_20may2024.
I do believe I made a change in the current copy of ISC_LOCK.py, but that change should be discarded in favor of ISC_LOCK.py_20may2024.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:19 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 22:49 |
23:58 | PCAL | Francisco | PCAL Lab | Yes | PCAL Lab tests | 00:19 |
00:47 | PCAL | Francisco | PCAL Lab | Yes | PCAL LAB measurements | 00:52 |
TITLE: 05/21 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 13mph 5min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Took H1 Out of Observing to do a SQZr Scan.
Incoming 5.4M Earthquake. - Survived it.
Lockloss from Observing at 02:31 UTC.
Screenshots attached.
H1 made it all the way up to LOWNOISE_LENGTH_CONTROL and was knocked out of lock by what I presumed was wind, but looking at the scopes I don't see any extreme gusts.
On the second locking attempt H1 also made it to LOWNOISE_LENGTH_CONTROL, but both times it lasted only 2 seconds and 124 ms in that state before losing lock, which looks suspicious.
I started to try to open ISC_LOCK to stop it from getting to that state, and was greeted with the following:
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/bin/guardmedm", line 11, in <module>
    sys.exit(main())
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/medm/__main__.py", line 177, in main
    system.load()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 400, in load
    module = self._load_module()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 287, in _load_module
    self._module = self._import(self._modname)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "<frozen importlib._bootstrap>", line 1129, in __import__
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py", line 8, in <module>
    import ISC_GEN_STATES
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "<frozen importlib._bootstrap>", line 1129, in __import__
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_GEN_STATES.py", line 5, in <module>
    import ISC_library
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/guardian/system.py", line 159, in _import
    module = _builtin__import__(name, *args, **kwargs)
  File "<frozen importlib._bootstrap>", line 1129, in __import__
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py", line 1189, in <module>
    class DUMP_SDF_DIFFS():
  File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py", line 1199, in DUMP_SDF_DIFFS
    dcuid = str(models.fec)
AttributeError: 'generator' object has no attribute 'fec'
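For context, a minimal self-contained sketch (hypothetical names, not the actual ISC_library code) of why attribute access on a generator raises exactly this error, and one plausible way such code gets fixed by taking an item from the generator:

class Model:
    def __init__(self, fec):
        self.fec = fec

def find_models():
    # stands in for a model-lookup helper that yields matches lazily
    for dcuid in (10, 11, 12):
        yield Model(dcuid)

models = find_models()
try:
    dcuid = str(models.fec)      # generators have no 'fec' attribute
except AttributeError as err:
    print(err)                   # 'generator' object has no attribute 'fec'

first = next(find_models())      # plausible fix: grab an actual model object
dcuid = str(first.fec)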
Tony, Dave:
We noticed that two scripts in isc/h1/guardian had been changed this afternoon around 16:22, relating to the errors being seen. I made a backup copy of ISC_library.py and ISC_LOCK.py and removed the recent changes using the subversion revert command. The immediate problem of ISC_LOCK's non-functioning ALL button was resolved.
-rw-rw-r-- 1 david.barker controls 45K May 20 22:05 ISC_library.py_20may2024
-rw-rw-r-- 1 david.barker controls 39K May 20 22:05 ISC_library.py
-rw-rw-r-- 1 david.barker controls 302K May 20 22:08 ISC_LOCK.py_20may2024
-rw-rw-r-- 1 david.barker controls 300K May 20 22:08 ISC_LOCK.py
OK, looking more into these locklosses in the LOWNOISE_LENGTH_CONTROL logs: they both stop executing lines at the same time.
Using Meld I was able to find the difference between the 2 versions of ISC_LOCK.py within the LOWNOISE_LENGTH_CONTROL guardian state.
It looks like line 5559 was commented out.
#ezca['LSC-PRCL1_OFFSET'] = -62 # alog 76814
Update:
Jenne wants that to stay put. Cause it needs to stay at 0.
I will walk it through line by line using the guardian command.
TITLE: 05/20 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 has now been locked for 2+ hours and is currently Observing.
Everything currently looks great.
TITLE: 05/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Lost lock during commissioning; relocking was straightforward and unassisted after finishing the manual IA following the PR3 moves. We've been locked for 2:15 as of 23:30 UTC.
16:00 UTC We started commissioning
18:45 UTC lockloss from commissioning activities
18:46 UTC Manual IA
19:22 UTC Another PR2_SPOT_MOVE
19:41 UTC 2nd Manual IA
While relocking we paused in PRMI to run OLG measurements of MICH and PRCL
21:19 UTC back in Observing after accepting/reverting some SQZ diffs
From the DARM_coherence_checks.xml template it looks like MICHFF needs to be re-tuned
21:36 UTC Superevent S240520cv
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:46 | FAC | Karen | Optics and VPW | N | Tech clean | 15:15 |
15:46 | FAC | Kim | MidX | N | Tech clean | 16:38 |
15:47 | FAC | Karen | MidY | N | Tech clean | 17:26 |
18:46 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 19:29 |
22:19 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 22:49 |
I ran the new A2L script (from alog 77704), after incorporating suggestions from Vlad and Sheila. In particular, the script now enables the 30 Hz bandstops in the quads, so that the dither oscillation is prevented from going around the loops.
It seems to have run okay. As a future enhancement, I might move the dither oscillation (and the associated bandpasses / bandstops) to a slightly different frequency, since there may be some persistent line in DARM very close to the A2L frequency we are using right now of 30.0 Hz.
Here is what the script printed out:
Overall this seemed good, and Sheila noted that the coherence with the HARD ASC loops went down. I'll check again after we re-lock. This took about 30 mins (plus about 10 mins before that, for some last min debugging of variable name spelling errors). I need to check on the fits and the stdevs, to make sure they make sense, since the stdev now seems much higher than Vlad's notes say it should be.
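As a side note, here is a minimal numpy sketch (my own illustration, not the alog 77704 script) of how a single-frequency dither like the 30.0 Hz A2L line can be demodulated out of a witness channel to estimate its complex amplitude; the sample rate and data below are placeholders:

import numpy as np

fs = 16384.0                     # placeholder sample rate
f_dither = 30.0                  # A2L dither frequency discussed above
t = np.arange(0, 60, 1 / fs)

# placeholder witness data: a 30 Hz line plus noise standing in for DARM
data = 1e-3 * np.sin(2 * np.pi * f_dither * t + 0.3) + 1e-4 * np.random.randn(t.size)

# digital demodulation at the dither frequency; the complex amplitude gives
# the line's magnitude and phase, which is what an A2L-style fit works from
lo = np.exp(-2j * np.pi * f_dither * t)
amp = 2 * np.mean(data * lo)
print(abs(amp), np.angle(amp))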
Now that the wind has (mostly) calmed down, and we're pretty confident that the increased low frequency noise late last week was due to (not yet understood, intermittent) squeezer-related noise, Sheila and I moved PR3 to get closer to center on PR2. Mostly we were moving in yaw, but we occasionally moved PR3 pitch to counteract the pitch shift that was happening due to cross-coupling. We were making these PR3 pitch moves primarily according to the PR3 top mass OSEMs.
While we were moving in yaw using the ISC_LOCK guardian state PR2_spot_move, Sheila noted that the scaling of the slider moves for PR2, PRM, IM4 in response to a PR3 move weren't quite right, since the ASC had to respond a bit. So, she re-calculated and tuned the scaling factors in that guardian state, and now the ASC responds much less (indicating that all the sliders are moving close to the correct values in response to a PR3 move).
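As a rough illustration of that idea (hypothetical channel names and made-up scale factors, not the values Sheila tuned in PR2_SPOT_MOVE), the other optics' alignment sliders are offset by fixed ratios of the commanded PR3 yaw step so the ASC sees as little residual as possible; this assumes a guardian/ezca session:

# made-up ratios for illustration; the real ones live in the PR2_SPOT_MOVE state
scale = {'PR2': -1.0, 'PRM': 0.2, 'IM4': 0.05}
dpr3_yaw = -10.0                 # urad step on the PR3 yaw slider (illustrative)

ezca['SUS-PR3_M1_OPTICALIGN_Y_OFFSET'] += dpr3_yaw
for optic, k in scale.items():
    # move each downstream slider by its ratio of the PR3 step
    ezca['SUS-%s_M1_OPTICALIGN_Y_OFFSET' % optic] += k * dpr3_yaw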
Overall, we moved PR3 yaw slider from +152.4 to -74.9. We did about the first half of the move in full lock, gaining about 1% increase in POP_A_LF and also about 1% increase in arm cavity buildups. We had gone maybe 1/4 of our total for today, and we stopped gaining in overall buildups, which makes sense if we got to a point where we're no longer really clipping (and so moving farther isn't un-clipping us more). I think I started going too fast (and didn't pause to use the pico motor), and we lost lock, so then we did the second half with just the green arms locked, and pico-ing to keep the COMM beatnote high.
We redid initial alignment, and are on our way back to lock. We'll post more details on this morning's work, but this is just a quick alog so I don't forget, while I go off to another meeting.
Before we did any work today, we had two quiet times.
Attached is an annotated screenshot showing some trends as we moved PR3 in lock.
We are now back in NLN, and it looks like we need to adjust the MICH FF. We will wait for the IFO to thermalize a few hours before doing this. The squeezing is also not good right now, but changing rapidly during thermalization.
The second screenshot shows the same trend as the first, but with the POP QPD added. The in-lock move of -118 urad in PR3 yaw moved the beam from -0.6 to +0.6 in yaw, while the second, out-of-lock move of -112 urad seems to have not moved the spot on POP. I don't understand why that would be, but it probably makes sense to pico POP before we move another 250 urad (if we think we need to).
Adding some more data to the confusion over why the POP QPD didn't see a move after our out-of-lock PR3 move: I also see that this new lock has the NSUM on POP_A_QPD thermalizing to a higher value than we had earlier, but the yaw value of the QPD seems to still be in about the same place as it was when we lost lock halfway through today's PR3 move.
FAMIS Link: 25992
The only CPS which looks higher at high frequencies (see attached) would be: