TITLE: 01/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 9mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.65 μm/s
QUICK SUMMARY:
Troubleshooting is continuing for the locklosses at the TRANSITION_FROM_ETMX state. Wind and secondary microseism are still very high.
Closes FAMIS 26225
Laser Status:
NPRO output power is 1.819W (nominal ~2W)
AMP1 output power is 67.9W (nominal ~70W)
AMP2 output power is 137.5W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 30 days, 23 hr 15 minutes
Reflected power = 16.9W
Transmitted power = 109.4W
PowerSum = 126.3W
FSS:
It has been locked for 0 days 0 hr and 30 min
TPD[V] = 0.9213V
ISS:
The diffracted power is around 2.2%
Last saturation event was 0 days 1 hours and 43 minutes ago
Possible Issues: None
Closes FAMIS 26485
T240 Centering
2024-01-06 13:24:12.266253
There are 11 T240 proof masses out of range ( > 0.3 [V] )!
ITMX T240 1 DOF X/U = -0.891 [V]
ITMX T240 1 DOF Y/V = 0.448 [V]
ITMX T240 1 DOF Z/W = 0.545 [V]
ITMX T240 2 DOF Y/V = 0.354 [V]
ITMX T240 2 DOF Z/W = 0.387 [V]
ITMX T240 3 DOF X/U = -0.913 [V]
ITMY T240 3 DOF X/U = -0.394 [V]
ITMY T240 3 DOF Z/W = -1.336 [V]
BS T240 3 DOF Z/W = -0.353 [V]
HAM8 1 DOF X/U = -0.34 [V]
HAM8 1 DOF Z/W = -0.526 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.04 [V]
ETMX T240 1 DOF Y/V = 0.032 [V]
ETMX T240 1 DOF Z/W = 0.018 [V]
ETMX T240 2 DOF X/U = 0.014 [V]
ETMX T240 2 DOF Y/V = -0.009 [V]
ETMX T240 2 DOF Z/W = 0.07 [V]
ETMX T240 3 DOF X/U = 0.043 [V]
ETMX T240 3 DOF Y/V = -0.036 [V]
ETMX T240 3 DOF Z/W = 0.049 [V]
ETMY T240 1 DOF X/U = 0.147 [V]
ETMY T240 1 DOF Y/V = 0.191 [V]
ETMY T240 1 DOF Z/W = 0.259 [V]
ETMY T240 2 DOF X/U = -0.007 [V]
ETMY T240 2 DOF Y/V = 0.231 [V]
ETMY T240 2 DOF Z/W = 0.116 [V]
ETMY T240 3 DOF X/U = 0.281 [V]
ETMY T240 3 DOF Y/V = 0.182 [V]
ETMY T240 3 DOF Z/W = 0.204 [V]
ITMX T240 2 DOF X/U = 0.256 [V]
ITMX T240 3 DOF Y/V = 0.27 [V]
ITMX T240 3 DOF Z/W = 0.208 [V]
ITMY T240 1 DOF X/U = 0.188 [V]
ITMY T240 1 DOF Y/V = 0.169 [V]
ITMY T240 1 DOF Z/W = 0.058 [V]
ITMY T240 2 DOF X/U = 0.111 [V]
ITMY T240 2 DOF Y/V = 0.252 [V]
ITMY T240 2 DOF Z/W = 0.199 [V]
ITMY T240 3 DOF Y/V = 0.151 [V]
BS T240 1 DOF X/U = -0.097 [V]
BS T240 1 DOF Y/V = -0.281 [V]
BS T240 1 DOF Z/W = 0.19 [V]
BS T240 2 DOF X/U = 0.018 [V]
BS T240 2 DOF Y/V = 0.103 [V]
BS T240 2 DOF Z/W = -0.039 [V]
BS T240 3 DOF X/U = -0.09 [V]
BS T240 3 DOF Y/V = -0.262 [V]
HAM8 1 DOF Y/V = -0.291 [V]
Assessment complete.
STS Centering
2024-01-06 13:28:25.261111
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.117 [V]
STS EY DOF Z/W = 2.8 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.551 [V]
STS A DOF Y/V = -0.899 [V]
STS A DOF Z/W = -0.466 [V]
STS B DOF X/U = 0.462 [V]
STS B DOF Y/V = 0.913 [V]
STS B DOF Z/W = -0.433 [V]
STS C DOF X/U = -0.613 [V]
STS C DOF Y/V = 0.798 [V]
STS C DOF Z/W = 0.393 [V]
STS EX DOF X/U = -0.151 [V]
STS EX DOF Y/V = 0.032 [V]
STS EX DOF Z/W = 0.079 [V]
STS EY DOF Y/V = 0.19 [V]
STS FC DOF X/U = 0.269 [V]
STS FC DOF Y/V = -0.931 [V]
STS FC DOF Z/W = 0.725 [V]
Assessment complete.
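For reference, a minimal sketch of the kind of threshold check these centering assessments perform (assuming a pyepics-style caget; the channel names and report format below are illustrative, not the actual FAMIS script):

```python
# Minimal sketch of the out-of-range check the T240/STS centering scripts
# report. Channel names are hypothetical; the real script may read and
# format these values differently. The STS check is identical in spirit
# but uses a 2.0 [V] limit instead of 0.3 [V].
from epics import caget

T240_LIMIT = 0.3  # [V] threshold for T240 proof-mass position outputs

# (chamber, sensor, DOF) -> monitor channel; names here are made up
t240_channels = {
    ('ITMX', 1, 'X/U'): 'H1:ISI-ITMX_ST1_T240_1_X_MON',
    ('ITMX', 1, 'Y/V'): 'H1:ISI-ITMX_ST1_T240_1_Y_MON',
    # ... remaining proof-mass channels ...
}

out_of_range = []
for key, chan in t240_channels.items():
    value = caget(chan)
    if value is not None and abs(value) > T240_LIMIT:
        out_of_range.append((key, value))

print(f"There are {len(out_of_range)} T240 proof masses out of range "
      f"( > {T240_LIMIT} [V] )!")
for (chamber, num, dof), value in out_of_range:
    print(f"{chamber} T240 {num} DOF {dof} = {round(value, 3)} [V]")
```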
IFO is LOCKING but losing lock at TRANSITION_FROM_ETMX (probably due to high wind/microseism)
MX VEA Temperature Investigation:
GraceDB Query Failures
Transition from EX Failures - the lockloss-causing state (alog 75215)
Minor:
Notes to self on transition from etmx locklosses we've been having recently (6 Jan 2024), and other lockloss investigations.
POP18 looks quite wobbly. Do we have clipping again? Haven't really checked anything w.r.t. this, so perhaps something to come back to.
A few weeks ago, we'd been having trouble with ALSX noticing that it was locked, so we renormalized its PD around noon Pacific on 13 Dec 2023. By the next lock, it already looked like the renorm was unnecessary; ALSX trans was about 1.15. It stayed at that level over many locks for many days.
Dec 20th 02:15 UTC, first time I see ALSX trans getting to 1.2. After that, pretty consistently at that level. So, probably not related to this week's troubles?
The first lock attempt this morning that guardian ran automatically lost lock during the state Transition_from_etmx, but the lockloss was about 21.5 sec after the swap away from ETMX. So I thought we should be able to do the swap and sit there, if the locklosses were due to something else like the ETMX bias ramping or something (not that I know why that should cause a lockloss).
Ibrahim was able to stop and wait for 10+ mins at LOWNOISE_ASC. We ran LOWNOISE_COIL_DRIVERS with guardian, then in the guardian shell did all the prep steps in Transition_from_etmx, then ran lines 5086-5089 as a block to actually do the swap. We lost lock during the 5 sec ramp swapping away from ETMX, so this didn't work.
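For context, a rough sketch of the kind of ramped gain swap we were doing by hand from the guardian shell (the ezca object is the one predefined in the guardian shell; channel names, gain values, and the filter-module layout are illustrative only and are not the actual ISC_LOCK lines 5086-5089):

```python
# Illustrative ramped actuator swap, NOT the real ISC_LOCK code.
# `ezca` is the EPICS wrapper already available in the guardian shell.
import time

RAMP = 5.0  # seconds, matching the ramp during which we lost lock

# Set ramp times first so the subsequent gain writes are smoothed
ezca['SUS-ETMX_L3_LOCK_L_TRAMP'] = RAMP
ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = RAMP
ezca['SUS-ITMX_L2_LOCK_L_TRAMP'] = RAMP

# Ramp DARM drive off of ETMX and onto ETMY/ITMX as one block
ezca['SUS-ETMX_L3_LOCK_L_GAIN'] = 0.0
ezca['SUS-ETMY_L3_LOCK_L_GAIN'] = 1.0
ezca['SUS-ITMX_L2_LOCK_L_GAIN'] = 1.0

time.sleep(RAMP)  # the locklosses happened during/just after this ramp
```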
Checking some ETMY filters (since Sheila et al. checked ETMX yesterday). It looks like the h1susetmy foton file was last loaded in April 2023 (according to the date of the last _install.txt file in the filter archive), so before O4 began, so really there should be no possibility of changes there, and indeed I don't find any changes.
* ETMY L2 Lock only filter used is FM7 "Qprime", has not changed shape in a long time (just picked a few older foton files out of the filter_archive).
* ETMY L1 Lock only filter used is FM10 "UIMcomp", also not changed in a long time.
* ETMY L3 lock filters FM5, FM8, FM9, FM10. But these are all flat to pretty high freq, before they are notches / stopbands, so no effect really on our low freq stability issues. Also, no changes.
h1susitmx foton also last loaded April 2023, before O4, so shouldn't be any changes there.
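For anyone repeating the _install.txt date check mentioned above, a small sketch of one way to do it (the filter archive path here is an assumption and may differ on the real workstations):

```python
# Sketch of the "when was this foton file last loaded?" check, using the
# modification date of the newest *_install.txt file in the filter archive.
import glob
import os
from datetime import datetime

archive = '/opt/rtcds/lho/h1/chans/filter_archive/h1susetmy/'  # assumed path
installs = sorted(glob.glob(os.path.join(archive, '*_install.txt')),
                  key=os.path.getmtime)

if installs:
    newest = installs[-1]
    mtime = datetime.fromtimestamp(os.path.getmtime(newest))
    print(f"Last filter load appears to be {newest} ({mtime:%Y-%m-%d %H:%M})")
else:
    print("No *_install.txt files found in", archive)
```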
Lockloss around 1388535878 (6 Jan 00:24:20 UTC) was about 2 mins after the swap to ETMY and ITMX.
Lockloss around 1388533427 (5 Jan 23:43:29 UTC) was only about 10 sec after the swap.
Seems like we're just quite delicate and susceptible to wind + microseism after this swap?
In case we were running into a lack of control authority due to being in the lownoise coil driver state, we tried (without much hope, but trying anyway) to skip lownoise_coil_drivers and go straight from lownoise_asc to transition_from_etmx. We lasted a little more than 5 sec past the swap, but not much more than that, so this isn't an avenue toward a different diagnosis.
The wind is supposed to start getting better after 4pm today, so I wonder if the best path forward is to just wait out the environment? I wonder if (as Ibrahim has said) this level of microseism plus wind is just untenable for us to reacquire lock with.
Sat Jan 06 10:08:28 2024 INFO: Fill completed in 8min 22secs
Lockloss at TRANSITION_FROM_ETMX. Despite microseism and wind, the lock acquisition was smooth until this point.
The last 8 Locklosses have been from this state.
Lockloss 19:18 UTC
Lost lock at the same state, but this time I posted my notes so far in alog 75215.
Lockloss 20:13 UTC
Lost lock at the same state while attempting a workaround to the issue (alog 75215)
TITLE: 01/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: None (Cancelled OWL)
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 25mph Gusts, 18mph 5min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.77 μm/s
QUICK SUMMARY:
IFO is down due to high microseism - will attempt to lock (but will go back to down if there's no chance)
MX Temperature is showing a red alert on the alarm handler - will investigate
FOM nuc24's image capture code, which posts the display images to the CDS web page, had incorrectly switched over to 4K monitor mode a few days ago. I rebooted nuc24 at 10:10 PST and all is looking good now.
TITLE: 01/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Microseism/Wind
SHIFT SUMMARY: Challenging evening fighting excess ground motion from wind/microseism and consistent locklosses after reaching full power during lock acquisition.
H1 is currently DOWN until environmental conditions improve and/or there is further investigation into the instability seen after powering up.
Gabriele, Rahul, Jeff, Oli, Betsy
Find below the initial (no damping loop) transfer functions for the built X1 (Staging Building Test Stand) M1 (Top Mass) BBSS.
These were taken generally following Jeff and Oli's alog 74142, where we used their provided DTT template and tuned the excitation amplitude for each DoF to avoid overflows.
Files are saved on the X1 work station system under ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data/
They are dated 01/05/24.
Attached is a pdf of annotated frequency transfer functions for the above measurement.
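As a rough illustration of what these measurements produce (not the actual DTT template), below is a sketch of estimating one drive-to-response transfer function from time series data; the file names, sample rate, and segment length are placeholders:

```python
# Schematic single drive->response transfer function estimate, similar in
# spirit to what the DTT template gives. Data files, sample rate, and
# channel pairing are placeholders, not the real measurement setup.
import numpy as np
from scipy.signal import csd, welch, find_peaks

fs = 256.0                                  # sample rate [Hz], placeholder
drive = np.load('m1_x_drive.npy')           # hypothetical excitation series
response = np.load('m1_x_response.npy')     # hypothetical response series

nperseg = int(fs * 64)                      # ~64 s segments, ~0.016 Hz resolution
f, Pxy = csd(drive, response, fs=fs, nperseg=nperseg)
_, Pxx = welch(drive, fs=fs, nperseg=nperseg)
tf = Pxy / Pxx                              # H1 transfer-function estimator

# Pick out resonance candidates from the magnitude of the TF
peaks, _ = find_peaks(np.abs(tf), prominence=np.abs(tf).max() * 0.05)
print("Candidate resonant peaks [Hz]:", np.round(f[peaks], 5))
```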
Find below the transfer function resonant peak frequencies in table form. The second frequency column shows cross-coupled frequencies, i.e. the same frequencies seen/repeated on different degrees of freedom.
Resonant Peak # | Frequency (Hz) | Cross-Coupled Frequency (Hz) |
1 | 0.40625 | 0.40625 |
2 | 0.414062 | 0.414062 |
3 | 0.421875 | |
4 | 0.445312 | |
5 | 0.453125 | |
6 | 0.96875 | |
7 | 1.03125 | |
8 | 1.15625 | |
9 | 1.16406 | 1.16406 |
10 | 1.32031 | |
11 | 1.4375 | |
12 | 1.54906 | |
13 | 1.875 | |
14 | 2.59375 | |
15 | 2.60156 | |
16 | 2.69531 | |
17 | 2.9919 | 2.99219 |
18 | 3.84375 |
The "UNSURE" table contains some peaks that were somewhat noisy (and not included in the annotation). According to the model, there are supposed to be 2 peaks over 10 Hz (~19Hz and ~31Hz), but due to the noise, I couldn't determine if these were measured - there are definitely peaks in this region at the listed frequencies (none in the "right" spots though).
UNSURE |
21.9375 |
26.75781 |
37.4375 |
37.39062 |
For ease of comparison, below is another table with the modeled peak frequencies (a small comparison sketch follows the table). Note that while there are 18 peaks, they do not correspond to the same "18 modes" - this is a pure coincidence.
Mode Number | Mode Frequency (Hz) |
1 | 0.412468 |
2 | 0.416083 |
3 | 0.425915 |
4 | 0.473335 |
5 | 1.03738 |
6 | 1.1633 |
7 | 1.16605 |
8 | 1.18585 |
9 | 1.45428 |
10 | 1.46622 |
11 | 1.53717 |
12 | 1.90606 |
13 | 2.64974 |
14 | 2.71041 |
15 | 3.03671 |
16 | 3.85694 |
17 | 19.6021 |
18 | 31.9402 |
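A quick sketch for the measured-vs-modeled comparison, matching each measured peak to its nearest modeled mode frequency (values copied from the tables above; the 10% tolerance is an arbitrary choice for illustration):

```python
# Nearest-neighbour comparison of measured peaks against modeled mode
# frequencies. Values are taken from the two tables above.
import numpy as np

measured = np.array([0.40625, 0.414062, 0.421875, 0.445312, 0.453125,
                     0.96875, 1.03125, 1.15625, 1.16406, 1.32031, 1.4375,
                     1.54906, 1.875, 2.59375, 2.60156, 2.69531, 2.9919,
                     3.84375])
modeled = np.array([0.412468, 0.416083, 0.425915, 0.473335, 1.03738,
                    1.1633, 1.16605, 1.18585, 1.45428, 1.46622, 1.53717,
                    1.90606, 2.64974, 2.71041, 3.03671, 3.85694,
                    19.6021, 31.9402])

for fm in measured:
    nearest = modeled[np.argmin(np.abs(modeled - fm))]
    flag = '' if abs(nearest - fm) / nearest < 0.10 else '  <-- no close model peak'
    print(f"measured {fm:8.5f} Hz  ~  modeled {nearest:8.5f} Hz{flag}")
```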
This is still a WIP.
TITLE: 01/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is LOCKING
Commissioning has gone long due to two primary issues:
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:53 | FAC | Kim | H2 | N | Technical Cleaning | 18:53 |
17:55 | VAC | Travis | EX Mech Room | N | Vacuum Stuff | 19:55 |
18:53 | EE | Marc | MY | N | Part Pickup | 19:23 |
18:53 | SUS | Randy | EX | N | Part Pickup | 19:52 |
19:24 | EE | Marc | H2 | N | Electronic Checks | 20:24 |
19:25 | RUN | Camilla | MX | N | Maintain and/or improve health | 20:12 |
20:12 | VAC | Richard + Gerardo | LVEA | N | Temp Check | 21:12 |
TITLE: 01/05 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 16mph 5min avg
Primary useism: 0.14 μm/s
Secondary useism: 0.71 μm/s
QUICK SUMMARY:
H1 is in the process of relocking; currently up to ENGAGE_ASC_FOR_FULL_IFO. We'll pause at TRANSITION_FROM_ETMX for some commissioning investigations. Secondary microseism is elevated and wind has picked up this afternoon.
Lockloss again during TRANSITION_FROM_ETMX; the same place as before.
Since the wind is forecasted to die down this evening, I'll plan on holding H1 at 2W in the CHECK_VIOLINS_BEFORE_POWERUP state and engage 2W damping until the wind calms, then try again to get through TRANSITION_FROM_ETMX in case the excess ground motion is what's causing trouble.
Camilla, Erik, Dave:
h1hwsmsr (HWS ITMX and /data RAID) computer froze at 22:14 Thu 04 Jan 2024 PST. The EDC disconnect count went to 88 at this time.
Erik and Camilla have just viewed h1hwsmsr's console, which indicated an HWS driver issue at the time. They rebooted the computer to get the /data RAID NFS shared to h1hwsex and h1hwsmsr1. Currently the ITMX HWS code is not running; we will start it during this afternoon's commissioning break.
One theory of the recent instabilities is the camera_control code I started just before the break to ensure the HWS cameras are inactive (in external trigger mode) when H1 is locked. Every minute the camera_control code gets the status of the camera, which, along with the status of H1, lets it decide if the camera needs to be turned ON or OFF. Perhaps with the main HWS code getting frames from the camera and the control code getting the camera status, there is a possible collision risk.
To test this, we will turn the camera_control code off at noon. I will rework the code to minimize the number of camera operations to the bare minimum.
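As a sketch of the "bare minimum" idea (not the actual camera_control code): cache the last commanded state and only touch the camera when the desired state actually changes. The helper functions here are hypothetical stand-ins for the real HWS/EPICS calls:

```python
# Hypothetical minimal-traffic control loop: query/command the camera only
# when the decision changes. get_ifo_locked() and camera_enable() are
# placeholders, not the real code.
import time

def get_ifo_locked():
    """Placeholder: return True when H1 is locked (e.g. from an EPICS read)."""
    raise NotImplementedError

def camera_enable(on):
    """Placeholder: switch the HWS camera between free-run and external trigger."""
    raise NotImplementedError

last_command = None  # cache so we don't re-query or re-command every minute

while True:
    want_camera_on = not get_ifo_locked()   # camera should be off while locked
    if want_camera_on != last_command:      # act only on a change of state
        camera_enable(want_camera_on)
        last_command = want_camera_on
    time.sleep(60)                          # once-per-minute cadence as before
```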
At ~20:00 UTC we left the HWS code running (restarted ITMX) but stopped Dave's camera control code (alog 74951) on ITMX, ITMY, and ETMY, leaving the cameras off. They'll be left off over the weekend until Tuesday. ETMX is still down from yesterday (alog 75176).
If the computers remain up over the weekend we'll look at incorporating the camera control into the hws code to avoid crashes.
Erik swapped h1hwsex to a new v1 machine. We restarted the HWS code and turned the camera to external trigger mode so it too should remain off over the weekend.
I've commented out the HWS test entirely (only ITMY was being checked) from DIAG_MAIN since no HWS cameras are capturing data. Tagging OpsInfo.
Trace from h1hwsmsr crash attached.
All 4 computers remained up and running over the weekend with the camera on/off code paused. We'll look into either making Dave's code smarter or incorporating the camera on/off switching into the hws-server code, so that we don't send multiple calls to the camera at the same time, which is our leading theory as to why these HWS computers have been crashing.
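A sketch of the "incorporate it into the hws code" option: route all camera traffic through a single lock so frame grabs and on/off commands can never hit the camera at the same time. The camera driver interface here is a hypothetical placeholder, not the real HWS camera code:

```python
# Serialize all camera access behind one lock so status/on-off commands and
# frame grabs cannot collide. The underlying camera object is hypothetical.
import threading

class SerializedCamera:
    def __init__(self, camera):
        self._camera = camera          # underlying (hypothetical) camera driver
        self._lock = threading.Lock()  # one lock guards all camera traffic

    def get_frame(self):
        with self._lock:
            return self._camera.get_frame()

    def set_external_trigger(self, enabled):
        with self._lock:
            self._camera.set_external_trigger(enabled)
```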