We lost lock just as I was beginning HVAC shutdowns to take advantage of the nearly 160 Mpc range. When we regained lock, the range was only about what it was for my last shutdown (77477), so I will defer. Here are the times for what I did do:
Start of shutdown (UTC) | Start of end of shutdown (UTC) | Equipment shut down
---|---|---
16:25 | 16:37 | Office area HVAC
16:50 | 16:57 (lock loss 16:55:49) | Chiller, all turbines, office area HVAC, split minis in CER
Lockloss @ 23:57 UTC - link to lockloss tool
Ends a 6-hour lock. No obvious cause, and I don't see the ETMX motion prior to this lockloss that we've seen in the past.
TITLE: 06/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with a straightforward relock, then a delayed calibration measurement. We've been locked for 5.5 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
16:23 | PEM/FAC | Robert | Site | n | HVAC shutdowns | 19:02 |
19:03 | SQZ | Terry | Opt Lab | local | SHG work | 00:57 |
TITLE: 06/29 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked for 5 hours.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed because we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure whether this pops up frequently and this is just the first time I've caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is now running on h1susex, as Dave pointed out. That said, this failure has been seen before and rarely reappears when re-running simulines, so maybe this is not that big of an issue...unless it happens again.
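Since the failure has historically cleared on a retry, something like the minimal sketch below could be considered if it keeps recurring. This is only an illustration, not simuLines' actual code: make_drive() is a hypothetical placeholder for however the script constructs its excitation object; only awg.AWGError and drive.start(ramptime=...) are taken from the traceback above.

# Sketch only: retry an excitation start that occasionally raises AWGError.
# make_drive() is a hypothetical placeholder, not a real simuLines function.
import time
import awg

def start_with_retry(channel, frequency, amplitude, ramp_up, retries=3):
    for attempt in range(retries):
        drive = make_drive(channel, frequency, amplitude)  # placeholder construction
        try:
            drive.start(ramptime=ramp_up)  # same call that raised in the traceback
            return drive
        except awg.AWGError as err:
            print(f"attempt {attempt + 1} failed: {err}")
            time.sleep(1)  # give awgtpman a moment before retrying
    raise RuntimeError(f"could not start excitation on {channel}")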
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
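A quick way to confirm which interpreter and which awg bindings a given run is actually picking up is a check like this (just a sketch, run from the same environment the measurement script uses; the __version__ attribute may or may not be defined):

# Print which interpreter and awg bindings are in use in this environment.
import sys
import awg

print(sys.executable)                          # e.g. /var/opt/conda/base/envs/cds/bin/python
print(awg.__file__)                            # which awg.py gets imported
print(getattr(awg, '__version__', 'unknown'))  # version attribute, if the module defines one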
The CDS environment was stable between June 22 (last good run) and Jun 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
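For reference, a quick check of whether the EX excitation actually ran during a measurement window could look something like the sketch below. It uses the June 29 simulines window listed above; the ETMY channel name is assumed by symmetry with the ETMX one.

# Sketch: compare the EX and EY L2 calibration excitation monitors over the sweep window.
from gwpy.timeseries import TimeSeries

start, end = 1403730723, 1403732006   # June 29 simulines start/end GPS (from above)
ex = TimeSeries.get('H1:SUS-ETMX_L2_CAL_EXCMON', start, end)
ey = TimeSeries.get('H1:SUS-ETMY_L2_CAL_EXCMON', start, end)  # assumed channel name

# A near-zero RMS at EX while EY shows signal would reproduce the June 27 symptom.
print(ex.rms().max(), ey.rms().max())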
Andrei, Naoki, Sheila
For the upcoming measurement of the FC backscattering, we need to calibrate the length change of the FC. To do this, we calculated the transfer function from the GS_SL FC control signal [Hz] to FC2 displacement [μm]. We followed the steps in Diagram.png to get the result. The plot bode_all_datasets.png contains all of the datasets used.
The resulting transfer function is presented in Tranfer_func.png (where the "Result" curve is the transfer function). The result was exported as a frequency/magnitude/phase dataset and can be found in result_data.txt. The remaining .txt files contain all the datasets used for this calculation.
Assuming that an FC resonance frequency shift Δf equal to c/2L corresponds to an FC length change ΔL equal to λ/2 (λ = 532 nm, L = 300 m), then Δf/ΔL = c/(L·λ) = 1.88×10^12 Hz/m = 1.88×10^6 Hz/μm. Multiplying the transfer function by this coefficient gives the open-loop gain, which has a unity-gain frequency of 39.4 Hz. The open-loop gain plot (after multiplication) can be found in the following figure: openloop_gain.png.
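A minimal worked version of that conversion factor, using the numbers quoted above:

# Hz-to-meters conversion for the FC length calibration (values from this entry).
c   = 299_792_458.0   # speed of light [m/s]
L   = 300.0           # filter cavity length [m]
lam = 532e-9          # green wavelength [m]

# A resonance shift of one FSR (c/2L) corresponds to a length change of lam/2,
# so the conversion factor is c/(2L) divided by lam/2, i.e. c/(L*lam).
df_dL = (c / (2 * L)) / (lam / 2)
print(df_dL)          # ~1.88e12 Hz/m
print(df_dL * 1e-6)   # ~1.88e6 Hz/um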
For the FC2 suspension plant, we used sus_um in the H1:CAL-CS_SUM_PRCL_PRM filter bank. sus_um is the PRM suspension plant in units of μm/count. Although FC2 and PRM are both HSTS suspensions, the FC2/PRM M2 and M3 actuation strengths differ by a factor of 0.326/2.83 according to the suspensions control design summary table on the door of the control room, as shown in the attachment (TACQ for FC2, TACQ*** for PRM). So we added this factor to the FC2 M3 path.
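A tiny sketch of that scaling, assuming the ratio is applied directly as a gain on the M3 path:

# Relative M3 actuation strength of FC2 vs PRM (numbers from the summary table above).
fc2_tacq = 0.326   # FC2 TACQ driver strength
prm_tacq = 2.83    # PRM TACQ*** driver strength
scale = fc2_tacq / prm_tacq
print(scale)       # ~0.115, the factor added to the FC2 M3 path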
The long ramp time was confusing me while I was trying to adjust the TMS to speed up locking, so I looked into what needed such a long ramp. I thought it was the TMS servo, but it seems to only use the TEST bank. I couldn't find another place in ISC_LOCK or the ALS guardians that specifically referenced this bank, so I changed it to 2 sec to match TMSY and then accepted it in SDF safe and observe. We made it through this acquisition at 2 sec, so maybe we're OK.
Sat Jun 29 10:09:48 2024 INFO: Fill completed in 9min 45secs
Lockloss 1403715367
There was ground motion from an earthquake just after the lock loss, but the lock loss itself seemed quite sudden. ETMX shows the usual wiggles that we often see.
Robert had just started running HVAC shutdown tests, but this seems very unlikely to be the cause.
Back to Observing at 1800UTC
I touched up TMSX P, PRM Y, and a bit of BS P to speed up acquisition and avoid an initial alignment.
Our ALSX PLL beatnote seems to have degraded badly ~10 hours ago, dropping below -30 dBm. It recovered during this acquisition, but I worry we will have more issues with it.
TITLE: 06/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Locked for 8 hours, calm environment after the earthquake settled down.
TITLE: 06/29 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Two locklosses this shift and recovery was straightforward in both cases. Earthquakes at the end of the shift shook things up, but ultimately H1 rode through. H1 has now been locked and observing for almost 2 hours.
LOG: No log for this shift.
Lockloss @ 05:35 UTC - link to lockloss tool
End of short lock, no obvious cause. I don't see the same ETMX motion as in the previous lockloss.
H1 back to observing at 06:29 UTC, fully automatic relock.
Lockloss @ 03:06 UTC - link to lockloss tool
ETMX saw a small hit about a half second before the lockloss.
H1 back to observing at 04:14 UTC. Automatic relock except BS and PRM needed adjusting to lock DRMI.
I made slow (ramp time = 120 s) changes to the ITMY ESD bias today during observation mode in order to find a minimum in electronics ground noise coupling, using coherence between DARM and a current clamp on a grounding cable as the figure of merit (a rough sketch of that coherence check follows the table below). This project will continue, but I am done for the day and wanted to get today's times into the alog.
Start of change (GPS) | End of change (GPS) | ITMX bias at start (V) | ITMX bias at end (V) | ITMY bias at start (V) | ITMY bias at end (V)
---|---|---|---|---|---
1403651100 | 1403651220 | 0 | 0 | 0 | -39
1403653448 | 1403653569 | 0 | 0 | -39 | 170
1403655679 | 1403655799 | 0 | 0 | 170 | -222
1403657640 | 1403657760 | 0 | 0 | -222 | 0
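Rough sketch of the coherence figure of merit (not the exact analysis): the current-clamp channel name below is a placeholder, not the real channel, and the GPS window is the first bias-change interval from the table.

# Sketch: coherence between DARM and a grounding-cable current clamp over one interval.
from gwpy.timeseries import TimeSeries

start, end = 1403651100, 1403651220   # first bias-change interval above
darm  = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
clamp = TimeSeries.get('H1:PEM-CURRENT_CLAMP_PLACEHOLDER', start, end)  # placeholder name

# Bring the two channels to a common sample rate before computing coherence.
if darm.sample_rate != clamp.sample_rate:
    darm = darm.resample(clamp.sample_rate)

coh = darm.coherence(clamp, fftlength=10, overlap=5)
print(coh.max())   # peak coherence in this interval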
FAMIS 26444, last checked in alog78167
Both BRSs look good. The slight motion of BRS-Y looks to line up with small temperature changes.
FAMIS 26313, last checked in alog78597
CS fan 5 had a slight jump up in noise about a week ago, still well within range.
All other fans largely unchanged from last check and within range.
TITLE: 06/28 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Two lock reacquisitions, both automated. Robert has been changing some ITM L3 biases while in observing, as approved.
LOG:
H1 back to observing at 03:41 UTC.
ALS X PLL unlocking caused frequent interruptions during this relock. Eventually made it to DRMI, where PRM needed adjustments after going through MICH_FRINGES.