18:15-18:35 UTC: took MICH, PRCL, SRCL LSC noise budget injections, following the instructions in 74681 and 74788. Last taken with cold OM2 in 78554.
Committed in ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings and on the aligoNB git. For the plot, used gpstime 1403879360 # Hot OM2, 2024/07/01 14:29 UTC, Observing Time, O4b, IFO locked 3h30, 158 Mpc range
In addition to usual references, I added H1:CAL-CS_{MICH,PRCL,SRCL}_DQ and saved as ref13.
Comparing today's lsc noise budget to cold OM2 lsc noise budget in 78554:
IFO is in LOCKING after an unknown LOCKLOSS at 18:55 UTC
Scheduled commissioning from 8:30 - 11:30 PT (15:30 - 18:30 UTC) went well and finished at 11:36 PT.
Other:
After Sheila/Jennie's 78776 SRM move, the SRCL FF got worse. Compare green before SRM move to brown after move in attached plot. We tuned the SRCL FF gain from 1.18 to 1.14 to improve from brown to blue trace. If we see SRCL coherence, we could remeasure and fit the FF. Accepted in sdf and lscparams.
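If SRCL coherence does show up again, the offline check is a standard coherence measurement; a minimal sketch on synthetic data (the scipy-based approach, sample rate, and signals here are illustrative stand-ins, not the site tooling or real H1:LSC channels):

```python
# Illustrative sketch: coherence between a "SRCL" witness and "DARM"
# on synthetic data.  At the site this would use real channel data.
import numpy as np
from scipy.signal import coherence

fs = 256  # Hz, illustrative sample rate
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(0)

# Shared 10 Hz component represents residual SRCL coupling into DARM
srcl = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
darm = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

f, cxy = coherence(srcl, darm, fs=fs, nperseg=256)
print(f"peak coherence {cxy.max():.2f} at {f[np.argmax(cxy)]:.0f} Hz")
```

High coherence in a band would justify remeasuring and refitting the FF rather than just retuning the gain.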
Mon Jul 01 10:12:00 2024 INFO: Fill completed in 11min 56secs
Gerardo confirmed a good fill curbside.
Sheila, Jennie W
We lost some range with the hot OM2, so we wanted to see if we could move SRM and raise the coupled cavity pole to regain this range, as we did before in this entry.
Took 5 minutes of no-squeeze time.
Opened POP beam diverter to monitor POPAIR
Following the procedure in this entry, we opened the SRC1 ASC loops by turning off the input and offsets simultaneously after changing the ramp to 0.1 s.
Set the ramp to 10 s, turned the gain to 0, cleared the history, then set the gain back to 4.
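The open/reset sequence above can be captured as data before anything is sent; a hedged sketch (the exact H1:ASC-SRC1 channel suffixes and switch labels here are guesses for illustration, not verified against the site database):

```python
# Hypothetical sketch of the SRC1 loop sequence as pure functions that
# return (channel, value) steps for review, rather than issuing them.
def src1_open_steps(dof="P", ramp=0.1):
    base = f"H1:ASC-SRC1_{dof}"
    return [
        (f"{base}_TRAMP", ramp),       # short ramp so the switch-off is fast
        (f"{base}_SW1", "INPUT OFF"),  # input and offset off together
        (f"{base}_SW1", "OFFSET OFF"),
    ]

def src1_reset_steps(dof="P", gain=4.0):
    base = f"H1:ASC-SRC1_{dof}"
    return [
        (f"{base}_TRAMP", 10),  # long ramp before touching the gain
        (f"{base}_GAIN", 0),    # gain to zero
        (f"{base}_RSET", 2),    # clear filter history
        (f"{base}_GAIN", gain), # gain back to nominal
    ]

# In practice each tuple would be sent with ezca or epics.caput.
print(src1_reset_steps("Y"))
```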
Then we started to move SRM alignment sliders.
The starting alignment sliders are here.
Started stepping SRM down in yaw; this was the wrong way, as the f_cc average went down, so we changed to stepping up in yaw. We then noticed POPAIR_RF18 was going down, so we switched to stepping down in pitch. We found a point at which further steps no longer changed f_cc and went back to the maximum found for f_cc. The trends can be seen here, where the first vertical cursor is when we opened the beam diverter and the second is when we thought we had the highest coupled cavity pole (middle bottom trace). So we put the SRM sliders to their values for this time after finishing.
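The stepping logic above — step, reverse once if the figure of merit drops, stop when steps no longer change it — is just a 1-D hill climb; a toy sketch with a synthetic f_cc readback (the function names and quadratic stand-in are illustrative only):

```python
# Toy version of the manual slider scan: maximize f(x) by stepping,
# reversing direction if the first step makes things worse, and stopping
# once a further step no longer improves f.
def hill_climb(f, x0, step, max_steps=100):
    x, direction = x0, -1.0  # start by stepping "down", as in the log
    if f(x + direction * step) < f(x):
        direction = -direction  # wrong way; reverse
    for _ in range(max_steps):
        if f(x + direction * step) <= f(x):
            break  # steps no longer change (or worsen) the readback
        x += direction * step
    return x

# Synthetic f_cc readback peaking at slider value 3
best = hill_climb(lambda x: -(x - 3) ** 2, 0.0, 1.0)
print(best)  # 3.0
```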
I then changed offsets in SRC1 ASC to -0.0417 for ASC-SRC1_P_OFFSET and 0.0977 for ASC-SRC1_Y_OFFSET but didn't turn them on while we grabbed 5 minutes no squeeze time.
17:05:30 UTC closed loops and switched on new offsets.
Sheila used the second period of no-squeeze time to plot how the range compares to June 24th, when we had a cold OM2. This plot shows the DARM comparison on the top plot (yellow is cold OM2, blue is hot OM2) and the range difference on the bottom. This plot shows the cumulative range comparison on the top (yellow is cold OM2, blue is hot OM2) with the same difference in range plotted on the bottom. From the bottom plot it looks like with hot OM2 we gain range between 25 and 40 Hz, see no change between 40 and 90 Hz, and lose range above that point.
So summary is, we didn't completely recover the range by raising the coupled cavity pole. Hot OM2 is good for low frequency range but not for high frequency. We still need to do some PSAMS tuning today so this might regain us range at higher frequencies.
I changed the values (ASC_AS72_P and ASC_AS72_Y) in line 551 and 552 of lscparams.py to match the new offsets and reloaded ISC_LOCK guardian.
Then I accepted these offset values in sdf after changing the ramps back to their setpoint which was 5s.
The no sqz time for reference:
With OM2 hot, and SRM alignment adjusted as Jennie described above, we took no sqz time from 16:58- 17:05 UTC on July 1st.
For the CO2 lasers, ITMY and ITMX have dropped in power but only by a small amount (by ~1 and <1 [W]? I'm not sure on the units).
ITMY and ITMX HWS_SLEDPOWERMON dropped to zero last Tuesday when they were turned off due to alignment issues (alog78654), and the SPHERICAL_POWER is still flatlined from the SR3 moves.
We tried the new hot OM2 MICH FF filter loaded in FM5 from 78688. The original FM6 FF was better (see attached), so we left everything as nominal. Is MICH changing over time, such that last week's injections are no longer good today?
TITLE: 07/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING (3 hr 35 min lock)
Optics Lab (Lab 1) dust counts have settled since an excursion about an hour and 10 minutes ago, which saw elevated levels for 11 minutes starting around 13:34 UTC (6:34 PT). They triggered the yellow alert and nearly reached the red alert threshold, but did not quite make it. Counts came back below threshold around 13:45 UTC. Trend screenshot attached.
TITLE: 07/01 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: One lockloss early in the shift, but recovery was simple and have been observing since. H1 has now been locked for 7 hours.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | Ongoing |
18:23 | SQZ | Terry | Opt Lab | local | SHG work | 21:47 |
21:05 | SQZ | Kar Meng | Opt Lab | local | SHG work | 00:18 |
22:40 | PEM | Robert | LVEA | y | Looking for noise source | 23:00 |
01:01 | - | Kar Meng | EY | n | Driving to EY | 01:25 |
This is a continuation of the work discussed in 78734.
Start of change (GPS) | End of change (GPS) | ITMX bias at start (V) | ITMX bias at end (V) | ITMY bias at start (V) | ITMY bias at end (V) |
---|---|---|---|---|---|
1403804070 | 1403804192 | 0 | 0 | 0 | 77 |
1403804365 | 1403804489 | 0 | 0 | 77 | -80 |
1403807629 | 1403807751 | 0 | 0 | -80 | -20 |
1403809555 | 1403809816 | 0 | -40 | -20 | -40 |
1403812204 | 1403812324 | -40 | 36 | -40 | -40 |
1403816907 | 1403817127 | 36 | 36 | -40 | 0 |
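Transcribed for scripting, the bias-change windows above can be sanity-checked for duration and for continuity of the bias values between rows; a minimal sketch (pure transcription, no site access):

```python
# Bias-change windows: (GPS start, GPS end, ITMX start/end V, ITMY start/end V)
steps = [
    (1403804070, 1403804192,   0,   0,   0,  77),
    (1403804365, 1403804489,   0,   0,  77, -80),
    (1403807629, 1403807751,   0,   0, -80, -20),
    (1403809555, 1403809816,   0, -40, -20, -40),
    (1403812204, 1403812324, -40,  36, -40, -40),
    (1403816907, 1403817127,  36,  36, -40,   0),
]

durations = [end - start for start, end, *_ in steps]
print(durations)  # seconds per change window

# Each window's starting biases should match the previous window's end
for prev, cur in zip(steps, steps[1:]):
    assert cur[2] == prev[3] and cur[4] == prev[5]
```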
After H1 lost lock this afternoon, I took the opportunity to do a quick RefCav alignment tweak since the TPD was showing the transmission was low at around 680mV. Using the two picomotor-controlled mirrors in the FSS path, mostly adjusting in pitch, I was able to improve the signal from about 680mV to 870mV. More of an increase than I was expecting, but this should be good to last until the PMC is swapped out on Tuesday (where this alignment may need to be done again). I suspect I could've spent more time to improve this further, but I stopped here so as not to delay IFO locking any longer.
Lockloss @ 23:39 UTC - link to lockloss tool
Locked for 26 minutes. No obvious cause, but I see the ETMX motion about 100ms before the lockloss this time.
I'm going to take this opportunity to do a quick FSS RefCav alignment adjustment so that it can hopefully make it until Tuesday when we swap out the PMC.
H1 back to observing at 00:57 UTC. Pretty much a fully automated relock; I just slightly adjusted PRM to make buildups during PRMI better, but it might've caught on its own.
TITLE: 06/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Just got back to low noise after we had a lock loss that ended a 15 hour lock. Relocking was fully auto and it didn't even run an initial alignment.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
18:23 | SQZ | Terry | Opt Lab | local | SHG work | 21:47 |
21:05 | SQZ | Kar Meng | Opt Lab | local | SHG work | 01:05 |
22:40 | PEM | Robert | LVEA | y | Looking for noise source | 23:00 |
TITLE: 06/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 lost lock about an hour ago but is relocking well; so far up to TRANSITION_FROM_ETMX.
The RefCav TPD is reading down to 0.69V, showing a warning on DIAG_MAIN, and has been falling over the past couple weeks. Since this is likely due to the increased PMC loss lowering the output of the cavity, I don't suspect fixing the RefCav alignment will get too much out of it, but I can try an alignment this evening if the IFO loses lock.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
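As a cross-check of the stamps above: GPS time is seconds since 1980-01-06 00:00:00 UTC plus the GPS-UTC leap-second offset (18 s at present); a minimal sketch, ignoring any future leap seconds:

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset, valid since 2017; update if it changes

def utc_to_gps(stamp: str) -> int:
    """Convert a 'YYYY-MM-DD HH:MM:SS' UTC string to integer GPS seconds."""
    dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int((dt - GPS_EPOCH).total_seconds()) + LEAP_SECONDS

print(utc_to_gps("2024-06-29 21:11:45"))  # 1403730723, matching the start above
print(utc_to_gps("2024-06-29 21:33:08"))  # 1403732006, matching the end above
```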
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this frequently pops up and this is the first time I caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
The CDS environment was stable between June 22 (last good run) and Jun 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
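A quick offline test for whether an excitation actually ran is to compare the EXCMON RMS inside the test window against a quiet reference window; a sketch on synthetic data (fetching the real H1:SUS-ETMX_L2_CAL_EXCMON data, e.g. via NDS, is assumed and not shown):

```python
import numpy as np

def excitation_present(monitor, quiet, factor=5.0):
    """Flag an excitation if monitor RMS exceeds `factor` x the quiet RMS."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return rms(monitor) > factor * rms(quiet)

# Synthetic stand-ins for an EXCMON channel with and without a drive
rng = np.random.default_rng(1)
quiet = 1e-3 * rng.standard_normal(16384)                   # no drive
driven = quiet + 0.1 * np.sin(np.linspace(0, 200, 16384))   # drive on

print(excitation_present(driven, quiet))  # True: excitation ran
print(excitation_present(quiet, quiet))   # False
```

Applied to the Jun 27 data, this kind of check would distinguish the EX excitation that failed to start from the EY one that ran.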