This is a continuation of the work discussed in 78734.
| Start of change (GPS) | End of change (GPS) | ITMX bias at start (V) | ITMX bias at end (V) | ITMY bias at start (V) | ITMY bias at end (V) |
|---|---|---|---|---|---|
| 1403804070 | 1403804192 | 0 | 0 | 0 | 77 |
| 1403804365 | 1403804489 | 0 | 0 | 77 | -80 |
| 1403807629 | 1403807751 | 0 | 0 | -80 | -20 |
| 1403809555 | 1403809816 | 0 | -40 | -20 | -40 |
| 1403812204 | 1403812324 | -40 | 36 | -40 | -40 |
| 1403816907 | 1403817127 | 36 | 36 | -40 | 0 |
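For reference, here is a minimal sketch of how the bias values over one of these windows could be pulled back out to check the table; it assumes NDS access via gwpy, and the bias readback channel names below are placeholders rather than confirmed channels.

    # Sketch: pull ITM bias readbacks over the first GPS window in the table above.
    # The channel names are assumptions/placeholders, not confirmed in this entry.
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        "H1:SUS-ITMX_L3_LOCK_BIAS_OUT16",  # hypothetical ITMX bias readback
        "H1:SUS-ITMY_L3_LOCK_BIAS_OUT16",  # hypothetical ITMY bias readback
    ]
    start, end = 1403804070, 1403804192  # first row of the table

    data = TimeSeriesDict.get(channels, start, end)
    for name, ts in data.items():
        print(f"{name}: start = {ts.value[0]:.1f} V, end = {ts.value[-1]:.1f} V")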
After H1 lost lock this afternoon, I took the opportunity to do a quick RefCav alignment tweak since the TPD showed the transmission was low, at around 680mV. Using the two picomotor-controlled mirrors in the FSS path, mostly adjusting in pitch, I was able to improve the signal from about 680mV to 870mV. That is more of an increase than I was expecting, but it should be good to last until the PMC is swapped out on Tuesday (when this alignment may need to be done again). I suspect I could have spent more time to improve it further, but I stopped here so as not to delay IFO locking any longer.
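As a rough illustration of the procedure (watch the TPD readback while walking one picomotor axis at a time), here is a sketch that assumes a cdsutils-enabled workstation; the TPD channel name is a guess and not confirmed in this entry.

    # Sketch: print an averaged RefCav TPD readback every few seconds while
    # adjusting the FSS picomotors; stop an axis once the value peaks.
    # The channel name below is an assumption.
    import time
    import cdsutils

    TPD_CHANNEL = "H1:PSL-FSS_TPD_DC_OUT_DQ"  # hypothetical channel name

    for _ in range(60):  # roughly three minutes of monitoring
        tpd = cdsutils.avg(1, TPD_CHANNEL)  # 1-second average, in volts
        print(f"RefCav TPD: {tpd * 1000:.0f} mV")
        time.sleep(2)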
Lockloss @ 23:39 UTC - link to lockloss tool
Locked for 26 minutes. No obvious cause, but I see the ETMX motion about 100ms before the lockloss this time.
I'm going to take this opportunity to do a quick FSS RefCav alignment adjustment so that it can hopefully make it until Tuesday when we swap out the PMC.
H1 back to observing at 00:57 UTC. Pretty much a fully automated relock; I just slightly adjusted PRM to make buildups during PRMI better, but it might've caught on its own.
TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Just got back to low noise after we had a lock loss that ended a 15 hour lock. Relocking was fully auto and it didn't even run an initial alignment.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
| 18:23 | SQZ | Terry | Opt Lab | local | SHG work | 21:47 |
| 21:05 | SQZ | Kar Meng | Opt Lab | local | SHG work | 01:05 |
| 22:40 | PEM | Robert | LVEA | y | Looking for noise source | 23:00 |
TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 lost lock about an hour ago but is relocking well; so far up to TRANSITION_FROM_ETMX.
The RefCav TPD is reading down to 0.69V, showing a warning on DIAG_MAIN, and has been falling over the past couple of weeks. Since this is likely due to the increased PMC loss lowering the output of the cavity, I don't expect a RefCav alignment tweak would recover much, but I can try an alignment this evening if the IFO loses lock.
Lockloss 1403820932
Ending a 15-hour lock. This lock had the ETMX wiggles as well, but they happened almost a second prior to the lock loss and were larger than I have seen before.
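For anyone who wants to look at the wiggle, a sketch along these lines (using gwpy and an assumed ETMX drive channel; the exact channel used for these lockloss checks isn't stated here) pulls a few seconds of data around the lockloss GPS time:

    # Sketch: plot ETMX drive around the lockloss to look for the pre-lockloss
    # wiggle. The channel name is an assumption, not taken from this entry.
    from gwpy.timeseries import TimeSeries

    lockloss_gps = 1403820932
    chan = "H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ"  # assumed fast drive channel

    data = TimeSeries.get(chan, lockloss_gps - 5, lockloss_gps + 1)
    plot = data.plot()
    plot.gca().set_title("ETMX drive around lockloss")
    plot.savefig("etmx_lockloss_wiggle.png")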
FAMIS 26260
Laser Status:
NPRO output power is 1.821W (nominal ~2W)
AMP1 output power is 67.15W (nominal ~70W)
AMP2 output power is 137.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 32 days, 23 hr 34 minutes
Reflected power = 22.75W
Transmitted power = 105.2W
PowerSum = 127.9W
FSS:
It has been locked for 0 days 11 hr and 42 min
TPD[V] = 0.6936V
ISS:
The diffracted power is around 2.0%
Last saturation event was 0 days 11 hours and 43 minutes ago
Possible Issues:
PMC reflected power is high
FSS TPD is low
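A quick arithmetic cross-check of the report above (values copied from the report; the warning thresholds in this sketch are illustrative assumptions, not the actual FAMIS limits):

    # Cross-check of the reported PSL numbers; thresholds are assumptions.
    pmc_refl = 22.75   # W, PMC reflected power
    pmc_trans = 105.2  # W, PMC transmitted power
    fss_tpd = 0.6936   # V, RefCav transmission PD

    power_sum = pmc_refl + pmc_trans
    print(f"PMC PowerSum = {power_sum:.2f} W")  # report lists 127.9 W

    if pmc_refl > 20:    # assumed warning threshold
        print("PMC reflected power is high")
    if fss_tpd < 0.8:    # assumed warning threshold
        print("FSS TPD is low")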
Sun Jun 30 10:09:55 2024 INFO: Fill completed in 9min 52secs
TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 7.5 hours, noise and range look okay.
TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Three locklosses this shift, all with unknown causes, but only two showed the ETMX motion beforehand that we've been seeing. Recovery from each was fairly straightforward; however, the ALS-X PLL continued to be problematic and kept unlocking while recovering from the first lockloss.
H1 has been observing for 1 hour.
LOG: No log for this shift.
Lockloss @ 06:00 UTC - link to lockloss tool
Locked for 49 minutes. No obvious cause; larger ETMX motion right before the lockloss this time.
H1 back to observing at 07:01 UTC. Automatic relock except for manually adjusting ETMX to lock ALS X faster.
Lockloss @ 04:15 UTC - link to lockloss tool
Locked for 38 minutes. No obvious cause, but I see the familiar small ETMX hit almost a half second before the lockloss.
H1 back to observing at 05:13 UTC
We lost lock just as I was beginning HVAC shutdowns to take advantage of the nearly 160 Mpc range. When we regained lock, the range was only about what it was for my last shutdown (77477), so I will defer. Here are the times for what I did do:
| Start of shutdown (UTC) | Start of end of shutdown (UTC) | Equipment shut down |
|---|---|---|
| 16:25 | 16:37 | Office area HVAC |
| 16:50 | 16:57 (lock loss at 16:55:49) | Chiller, all turbines, office area HVAC, split minis in CER |
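A sketch of the before/during range comparison behind this decision, assuming a BNS range channel name and taking 2024-06-29 as the date from the surrounding shift log (both are assumptions):

    # Sketch: compare mean BNS range just before and during the first shutdown
    # window in the table above. Channel name and date are assumptions.
    from gwpy.time import to_gps
    from gwpy.timeseries import TimeSeries

    RANGE_CHAN = "H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC"  # assumed name

    windows = {
        "before": (to_gps("2024-06-29 16:10"), to_gps("2024-06-29 16:25")),
        "during": (to_gps("2024-06-29 16:25"), to_gps("2024-06-29 16:37")),
    }
    for label, (t0, t1) in windows.items():
        rng = TimeSeries.get(RANGE_CHAN, t0, t1)
        print(f"mean range {label} shutdown: {rng.mean().value:.1f} Mpc")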
Lockloss @ 23:57 UTC - link to lockloss tool
Ends a 6-hour lock. No obvious cause, and I don't see the ETMX motion prior to this lockloss that we've seen in the past.
H1 back to observing at 03:41 UTC.
ALS X PLL unlocks caused frequent interruptions during this relock. Eventually made it to DRMI, where PRM needed adjustments after going through MICH_FRINGES.
TITLE: 06/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with a straightforward relock, and then a delayed calibration measurement. We've been locked for 5.5 hours.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
| 16:23 | PEM/FAC | Robert | Site | n | HVAC shutdowns | 19:02 |
| 19:03 | SQZ | Terry | Opt Lab | local | SHG work | 00:57 |
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. It was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
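The UTC/GPS stamps above can be cross-checked with gwpy's time utilities; a minimal example (naive times treated as UTC):

    # Cross-check of the simulines start/end stamps.
    from gwpy.time import to_gps, from_gps

    print(int(to_gps("2024-06-29 21:11:45")))  # 1403730723
    print(int(to_gps("2024-06-29 21:33:08")))  # 1403732006
    print(from_gps(1403730723))                # 2024-06-29 21:11:45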
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure whether this pops up frequently and this is just the first time I've caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base CDS environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure whether that's the cause of last week's issues.
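As a minimal sanity check the next time this is run, something like the sketch below (run from the same session that launches simulines) records which Python environment and awg module are actually in use, for comparison against the awgtpman running on h1susex:

    # Record which environment and awg module the simulines session is using.
    import sys
    import awg

    print("python executable:", sys.executable)
    print("environment prefix:", sys.prefix)
    print("awg module path:", awg.__file__)
    print("awg version:", getattr(awg, "__version__", "unknown"))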
The CDS environment was stable between June 22 (last good run) and Jun 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
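A sketch of this kind of check, using the EXCMON channel names mentioned above and the June 29 simulines window from this log (it assumes these monitor channels are reachable through NDS; swap in the June 27 window to compare):

    # Sketch: check whether the ETMX calibration excitations actually ran
    # during a measurement window by looking at the EXCMON channels.
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        "H1:SUS-ETMX_L1_CAL_EXCMON",
        "H1:SUS-ETMX_L2_CAL_EXCMON",
    ]
    start, end = 1403730723, 1403732006  # June 29 simulines window

    data = TimeSeriesDict.get(channels, start, end)
    for name, ts in data.items():
        peak = abs(ts.value).max()
        status = "excitation present" if peak > 0 else "no excitation"
        print(f"{name}: max |value| = {peak:.3g} ({status})")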