I changed the supply air temperature high limit of the End X air handler from 60 to 65 degrees to give the program broader control of the chilled water valve. This will allow the program to throttle the chilled water rather than rely on the heater coils for temperature control. This may cause a small increase in the supply air temperature trend while the PI loop adjusts.
Right after Commissioning, H1 had a lockloss and several systems showed odd behavior (EY ISI WD trips, CPS St1 showing huge signals, EY Ring Heaters showing a bad/dead state...).
Fil is now at EY and mentioned that the TCS and ISC Power Supplies were down, so he needed to restart them.
FRS33539 ticket started.
Sheila, Mayank, Camilla
The SQZ ang servo has been off since Feb 18th (82891). Today it was turned back on in sqzparams.py, and with it the SQZ_ANG_ADJUST nominal state is now ADJUST_SQZ_ANG_ADF. We expect this to work better than in 82891, as we adjusted H1:SQZ-ADF_VCXO_PLL_PHASE to 25deg so that the ADF H1:SQZ-ADF_OMC_TRANS_SQZ_ANG error signal is in a linear range around where our best SQZ is. This should improve the squeezing and keep it more stable.
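For reference, a minimal sketch of the phase change using the standard ezca Python interface; the channel names are from this entry, while the sqzparams.py flag name is hypothetical and the actual edit may differ:

# Minimal sketch (not the literal change): set the ADF VCXO PLL phase so that the
# ADF error signal H1:SQZ-ADF_OMC_TRANS_SQZ_ANG sits in its linear range around
# our best squeezing angle. Inside a guardian node, `ezca` is already provided.
from ezca import Ezca

ezca = Ezca(ifo='H1')                  # standalone use outside guardian
ezca['SQZ-ADF_VCXO_PLL_PHASE'] = 25    # degrees, per this entry

# In sqzparams.py the servo was re-enabled; the flag name here is hypothetical:
# use_sqz_ang_servo = True   # -> SQZ_ANG_ADJUST nominal state becomes ADJUST_SQZ_ANG_ADF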
Range was better last night with the SQZ angle servo keeping the squeezing tuned around 350Hz (yellow BLRMS), plot attached.
Sheila, Matt, Mayank, Camilla
Today we changed the nominal FC beamspot control:
Sheila put the OSC amplitudes back to 10 and stepped each A2L gain to check it's in I only (needed to change PIT2 by 90deg), then adjusted the A2Ls to get I close to zero. YAW1 has more noise than the others.
Turned off the INJ_ANG ASC (feeds back to ZM3 as "FC beamspot control") and then stepped the ZM2/3 P/Y offsets. After we worked out the matrix elements, we tried to close each loop: turn off the input, clear the history, turn the input back on, turn on the gain (sketch below). Closed each one and then did the next. Closed INJ_ANG for ZM3 in PIT and YAW.
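A minimal sketch of that loop-closing sequence, assuming the standard ezca filter-module interface; the bank names follow SQZ-FC_ASC_INJ_ANG_P/Y used elsewhere in this entry, and the gain value and settle time are assumptions:

# Sketch of the per-loop closing sequence: input off, clear history, input on, gain on.
import time
from ezca import Ezca

ezca = Ezca(ifo='H1')                    # inside guardian, ezca is already provided

for dof in ['P', 'Y']:
    fm = 'SQZ-FC_ASC_INJ_ANG_' + dof
    ezca.switch(fm, 'INPUT', 'OFF')      # turn off the input
    ezca[fm + '_RSET'] = 2               # clear the filter history
    ezca.switch(fm, 'INPUT', 'ON')       # turn the input back on
    ezca[fm + '_GAIN'] = 1               # turn on the gain (value assumed)
    time.sleep(10)                       # let this loop settle before closing the next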
ZM2 had a large response in PIT2 and no noticeable response in PIT1. If we move ZM1, we see it in both PIT1 and PIT2, but the INJ_ANG loop zeros PIT2 while the PIT1 error signal remains. There are two options to avoid this: either drive a combination of ZM1/3 to avoid the cross coupling, or (the easier option) keep the INJ_POS loop slower (lower gain) than INJ_ANG. The change was very small (or no change) for YAW1, so we decided to only use PIT1/2 to INJ_ANG to ZM3 P/Y as the feedback for this week.
Optic | ALIGN step | INJ_ANG response | ASC_ADS response | Matrix element needed |
---|---|---|---|---|
ZM3 P | +3 on ZM3 P | -0.00480 in P | +5.978 on PIT2 | -0.0008 (much too small, increased to -0.08) |
ZM3 Y | -7 on ZM3 | +0.0095 in Y | -4.6 on YAW2 | -0.002 (wrong sign, too small, changed to +0.06) |
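The "Matrix element needed" column appears to be the ratio of the INJ_ANG response to the ASC_ADS response; a quick check of the table values (assuming that is how they were derived):

# Sanity check of the last column: matrix element ~ INJ_ANG response / ASC_ADS response
# (assumed derivation; values copied from the table above).
responses = {
    'ZM3 P': (-0.00480, +5.978),   # (INJ_ANG change, ASC_ADS change)
    'ZM3 Y': (+0.0095,  -4.6),
}
for dof, (inj_ang, ads) in responses.items():
    print(f"{dof}: {inj_ang / ads:+.4f}")   # ~ -0.0008 for ZM3 P, -0.0021 for ZM3 Y, consistent with the table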
Edited the SQZ_FC guardian OFFLOAD_FC_ASC state to also offload INJ_POS if we decide to use these filters. As the integrators are still in the FC ASC filters, this is a homemade ASC offload rather than the standardized ISC one. A future task is to move the integrators into the ZM suspensions as we did for ZM4/5/6.
Accepted SDFs, and checked that the FC guardian just switches on the SQZ-FC_ASC_INJ_ANG_P/Y inputs and doesn't touch the matrix elements. INIT did reset the output matrix, but this should be controlled only by SDF, so I deleted that block and reloaded.
We split up the filters for the filter cavity ASC, adding integrators with a zero at 0.01 Hz to the FC1, FC2, and ZM3 filter banks (M1_LOCK_P and Y), and modifying the true integrators in the ASC filters to have poles at 0.01 Hz. Then we modified the guardian so that it uses the standard generic offloading guardian state (gen_OFFLOAD_ALIGNMENT_MANY). We also removed the state called clear_FC_ASC from the graph, because it shouldn't be needed if we are using the standard offloading guardian.
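The intent of the split is that the modified ASC filter (integrator pole moved to 0.01 Hz) in series with the new M1_LOCK integrator (zero at 0.01 Hz) reproduces the original true integrator, so the overall loop shape is unchanged; a minimal check of that algebra (the frequency grid and scipy usage are mine, not from the FC filter files):

# Check that splitting the integrator preserves the cascade response:
#   original ASC true integrator : 1/s
#   modified ASC filter          : 1/(s + a),  a = 2*pi*0.01 Hz
#   new M1_LOCK offload filter   : (s + a)/s
# The product of the last two equals the first.
import numpy as np
import scipy.signal as sig

f = np.logspace(-4, 1, 500)        # Hz
w = 2 * np.pi * f                  # rad/s
a = 2 * np.pi * 0.01               # 0.01 Hz corner

_, h_orig = sig.freqs_zpk([], [0], 1, worN=w)       # 1/s
_, h_asc  = sig.freqs_zpk([], [-a], 1, worN=w)      # 1/(s + a)
_, h_lock = sig.freqs_zpk([-a], [0], 1, worN=w)     # (s + a)/s

assert np.allclose(h_asc * h_lock, h_orig)          # cascade unchanged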
Mon Mar 10 10:08:35 2025 INFO: Fill completed in 8min 32secs
Jordan confirmed a good fill curbside.
FAMIS 31076
Nothing really to report this week; trends are looking steady and things are behaving as expected.
TITLE: 03/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
H1 lost lock (7hr lock with "refined" tag) about an hr ago and just finished an initial alignment. Commissioning time is scheduled in 40+min.
Secondary microseism looks to be below the 95th percentile (probably following the activity in the N. Atlantic's Greenland/Labrador region). Winds are breezy, just below 20mph, and have been fairly constant for the last 22hrs. The big EQ last night was on the small island of Jan Mayen, part of Norway (our biggest EQ of the weekend).
Link to the report
TITLE: 03/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is LOCKING, with SEI_ENV in an earthquake state due to the 6.6 mag EQ from Iceland that we're still recovering from - Lockloss alog 83259.
The primary microseism is still elevated and will likely take another 30 mins to 1hr to return to pre-EQ levels (which were high to begin with). I began attempting locking ~20 minutes ago, but this was not successful. I'll set H1 for initial alignment before my shift ends to give extra time for the microseism to die down.
Other than that, we had a lockloss that seems to have been caused by high ground motion at EY, roughly correlated with an EQ that happened while secondary microseism and high winds were present - the LL tool marked it with the WINDY tag. Lockloss alog 83257. The reacquisition was fully automatic and only took 1hr 17min including initial alignment.
On the bright side, the wind has died down significantly.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:33 | OPS | Ibrahim | X-arm | N | Stroll to MX | 22:19 |
22:20 | OPS | Corey | Optics Lab | N | Photo of parts | 22:46 |
A 6.6 mag Iceland EQ caused a lockloss 4 minutes after we reached NLN again. Ground motion is too high to attempt locking, so staying in IDLE until it passes.
Lockloss from what looks to be seismic motion caused by a 4.9 EQ combo'd with high microseism. The timing doesn't exactly match up though, so I'll investigate further.
Back to OBSERVING 02:41 UTC.
Buuuuut there's a 6.6 EQ on its way from Iceland so we lost lock again 4 mins later.
TITLE: 03/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Today (and last night) was ETMx-related, with (2) ETMx locklosses (one overnight & one at the very start of the shift). Then we had almost 5hrs of downtime with 3 locklosses in a row at LOWNOISE_ESD_ETMX (resulting in making some calls for help, but H1 came back on its own...after pausing for about an hr before this troublesome state; see alogs). Keita also suggested some measurements we should run next time we are locking. Secondary microseism has mostly been flat (but maybe if you squint at it right, you can see it possibly drifting down). Winds have been around 20mph for much of the last 6hrs.
H1 is finally thermalized, so a calibration could be run (but microseism is still high, so I'd be wary...especially with the grief of getting through LOWNOISE ESD ETMx this morning).
LOG:
TITLE: 03/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 18mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 19:24 UTC (3.5 hr lock)
IFO is stable but was having issues with the LOWNOISE_ESD_ETMX state, assumed to be due to high microseism, which is coming down. There were also 2 locklosses. Both the locklosses and the locking issues are detailed in Corey's alog 83249.
Over roughly 7hrs, we had 2 consecutive ETMx glitch locklosses taking H1 down from OBSERVING.
After that last lockloss, we have had 2 consecutive LOWNOISE_ESD_ETMX locklosses while trying to get H1 back. Similar LOWNOISE_ESD_ETMX lockloss issues found via an alog search were noted in alog 82912. Since we are in a similar situation (high microseism and frequent locklosses at this state), I made changes to LOWNOISE_ESD_ETMX in ISC_LOCK.py by increasing (1) the ETMx ramp time & (2) a timer after this step (see attached; a rough sketch of the change is below):
Change saved & loaded for ISC_LOCK, and we are currently on the way up to see if we can get through LOWNOISE_ESD_ETMX (or have a 3rd consecutive lockloss here).
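For context, a rough sketch of the kind of change, using the parameter names from this entry and the guardian log below; the surrounding state code is assumed, not the literal ISC_LOCK.py diff:

# Rough sketch only; parameter names (etmx_ramp, self.timer['ETMswap']) are from this
# entry, the rest is assumed. Inside ISC_LOCK.py, ezca is provided by guardian at runtime.
from guardian import GuardState

class LOWNOISE_ESD_ETMX(GuardState):
    index = 558
    def main(self):
        etmx_ramp = 60                                   # (1) ETMx ramp time, increased from 20 s
        ezca['SUS-ETMX_L3_LOCK_L_TRAMP'] = etmx_ramp     # ramp for the ESD swap back to ETMX
        self.timer['ETMswap'] = 60                       # (2) wait after this step, increased from 30 s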
Well, that didn't work. H1 had a 3rd-consecutive lockloss within a couple seconds of going through LOWNOISE_ESD_ETMX. (Posted situation on mattermost & consulting the Call List for assistance.)
So, I am restoring (1) etmx_ramp back to 20 (from my 60) & (2) self.timer['ETMswap'] back to 30 (from 60).
Attached are the restored values noted above.
Below you can see what happened: the lockloss happened FAST, within 4 seconds of starting LOWNOISE_ESD_ETMx. Jim was first on the call list, and he asked whether his ETMx ESD limit steps happened; they did not even get to try, since that step is the LAST step of LOWNOISE_ESD_ETMx.
Here is the ISC_LOCK guardian log for this lockloss:
2025-03-09_17:35:40.042953Z ISC_LOCK [TRANSITION_FROM_ETMX.run] timer['ETMswap'] done
2025-03-09_17:35:40.092335Z ISC_LOCK [TRANSITION_FROM_ETMX.run] ADS convergence check
2025-03-09_17:35:40.195243Z ISC_LOCK EDGE: TRANSITION_FROM_ETMX->LOWNOISE_ESD_ETMX
2025-03-09_17:35:40.195907Z ISC_LOCK calculating path: LOWNOISE_ESD_ETMX->NOMINAL_LOW_NOISE
2025-03-09_17:35:40.196027Z ISC_LOCK new target: LOWNOISE_LENGTH_CONTROL
2025-03-09_17:35:40.201276Z ISC_LOCK executing state: LOWNOISE_ESD_ETMX (558)
2025-03-09_17:35:40.206296Z ISC_LOCK [LOWNOISE_ESD_ETMX.enter]
2025-03-09_17:35:40.216682Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] switching ESD back to ETMX
2025-03-09_17:35:40.217217Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] ezca: H1:SUS-ITMX_L3_LOCK_L_TRAMP => 60
2025-03-09_17:35:40.217751Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] ezca: H1:SUS-ITMX_L3_LOCK_L_GAIN => 0
2025-03-09_17:35:40.218187Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] ezca: H1:SUS-ETMX_L3_LOCK_L_TRAMP => 60
2025-03-09_17:35:40.218678Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] ezca: H1:SUS-ETMX_L3_LOCK_L_GAIN => 1
2025-03-09_17:35:40.219270Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] ezca: H1:SUS-ETMY_L3_LOCK_L_TRAMP => 60
2025-03-09_17:35:40.219711Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 0
2025-03-09_17:35:40.219923Z ISC_LOCK [LOWNOISE_ESD_ETMX.main] timer['ETMswap'] = 60
2025-03-09_17:35:44.717933Z ISC_LOCK [LOWNOISE_ESD_ETMX.run] Unstalling IMC_LOCK
2025-03-09_17:35:44.853115Z ISC_LOCK [LOWNOISE_ESD_ETMX.run] USERMSG 0: IMC_LOCK: has notification
2025-03-09_17:35:45.067039Z ISC_LOCK JUMP target: LOCKLOSS
2025-03-09_17:35:45.070405Z ISC_LOCK [LOWNOISE_ESD_ETMX.exit]
2025-03-09_17:35:45.140996Z ISC_LOCK JUMP: LOWNOISE_ESD_ETMX->LOCKLOSS
2025-03-09_17:35:45.141251Z ISC_LOCK calculating path: LOCKLOSS->NOMINAL_LOW_NOISE
2025-03-09_17:35:45.142460Z ISC_LOCK new target: DOWN
2025-03-09_17:35:45.150074Z ISC_LOCK executing state: LOCKLOSS (2)
2025-03-09_17:35:45.150466Z ISC_LOCK [LOCKLOSS.enter]
2025-03-09_17:35:45.154072Z ISC_LOCK [LOCKLOSS.main] ezca: H1:GRD-PSL_FSS_REQUEST => READY_FOR_MC_LOCK
2025-03-09_17:35:45.267242Z ISC_LOCK JUMP target: DOWN
2025-03-09_17:35:45.271745Z ISC_LOCK [LOCKLOSS.exit]
2025-03-09_17:35:45.331052Z ISC_LOCK JUMP: LOCKLOSS->DOWN
2025-03-09_17:35:45.331185Z ISC_LOCK calculating path: DOWN->NOMINAL_LOW_NOISE
None of the first 3 LOWNOISE_ESD_ETMx (558) locklosses have an IMC tag. (3rd one had a WINDY tag & we did transition back to SEI_ENV = USEISM at 1721utc (about an hr ago).)
All 3 have the REFINED tag.
A look at microseism, with a comparison to Elenna's plot from Nov when the microseism season was starting; here we are a few more months into it. We've certainly had worse days, but we are in the middle of a noisy microseism period, at about 600 counts since last night.
STATUS: Have been holding at MAX_POWER [520]
Here is a look at the last LOWNOISE_ESD_ETMx (using an Elenna template).
For the 4th locking attempt, the time settings were restored (from the change I made). On this one I held at MAX_POWER [520] for about 50min while reading through alogs and looking at the previous lockloss to post in the alog, then eventually just continued on, and this time H1 made it through LOWNOISE_ESD_ETMx. Attached you can see how things looked going from TRANSITION_FROM_ETMX [557] to LOWNOISE_ESD_ETMX [558]; in contrast to my lockloss alogged earlier, this one looks sort of like the plot on the left from Elenna's scopes posted here.
J. Freed, S. Dwyer
Yesterday we did damping loop injections on all 6 BOSEMs on PR3 M1. PR3 shows quite a lot of coupling in the 10-25Hz range. This is a continuation of the work done previously for ITMX, ITMY, and PR2.
As some signals were quite strong, instead of a gain of 750, data were collected at gains of 300 and 600 (300 is labeled as low_noise). Also, this time the injections were performed in diaggui instead of awggui.
The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/20241021/scrpts, while the diaggui files are at /ligo/home/joshua.freed/20241021/data. This time, the 600-gain data was also saved as a reference in the diaggui files (see below), saved in 20241021_H1SUSPR3_M1_OSEMNoise_T3.xml.
pr3.png shows all results for PR3, with the top half at 300 gain and the bottom half at 600 gain. All sensors showed strong coupling in the 10-25Hz range at 600 gain. [LF, RT, T2, T3] showed strong coupling in the 10-25Hz range at 300 gain. [SD, T1] instead showed some coupling in the 46-48Hz range at 300 gain. I am unsure if this is significant or another noise source present while the test was performed.
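Not the actual scripts in the directory above, but a hedged sketch of the flagging idea: compare the witness spectrum with the injection on against a quiet reference and flag bins in the band where the injected spectrum rises above the reference (the threshold, band, and data handling are assumptions):

# Sketch: flag frequency bins where the injection-on PSD exceeds the quiet PSD.
import numpy as np
from scipy.signal import welch

def flag_coupled_bins(quiet, injected, fs, band=(10, 25), threshold=2.0):
    """Return frequencies (Hz) in `band` where the injected PSD exceeds
    `threshold` times the quiet-reference PSD (threshold is an assumption)."""
    f, p_quiet = welch(quiet, fs=fs, nperseg=int(16 * fs))
    _, p_inj = welch(injected, fs=fs, nperseg=int(16 * fs))
    in_band = (f >= band[0]) & (f <= band[1])
    flagged = in_band & (p_inj > threshold * p_quiet)
    return f[flagged]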
WP 12380
Found one of the power supplies for the ±18V ISC/TCS tripped off. The unit was warm to the touch, likely a failed fan. Both power supplies were replaced. The negative power supply had its original fan. The ±18V feeds the RF Distribution Amp (CPS Timing) and Ring Heaters, causing multiple subsystems to report errors.
F. Clara, S. Dwyer, J. Figueroa, C. Gray, O. Patane, and M. Pirello
Removed the following supplies, neither had improved fans:
S1300289, S1300295
Replaced them with the following supplies with improved fans installed:
S1201923, S1201926