FAMIS 20704
The NPRO has had a couple of sudden power jumps in the past week, also seen in the output of AMP1 and, to a lesser extent, AMP2. The AMP LD powers didn't move at the same time as these jumps.
PMC transmitted power continues to fall while reflected power increases; when we swap the PMC tomorrow morning this should finally be put to rest.
Jason's brief incursion last Tuesday shows up clearly in environment trends.
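As a reference for repeating this check, here is a minimal sketch of pulling these trends with GWpy; the PSL channel names are illustrative placeholders I have not verified against the H1 channel list.

```python
# Hedged sketch: trend NPRO and amplifier output powers around the jumps.
# The channel names below are illustrative placeholders, not verified.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:PSL-OSC_NPRO_POWER",  # assumed NPRO output power channel
    "H1:PSL-AMP1_POWER",      # assumed AMP1 output power channel
    "H1:PSL-AMP2_POWER",      # assumed AMP2 output power channel
]

# One week of data ending at the time of this check.
data = TimeSeriesDict.get(channels, "2024-06-24", "2024-07-01")

plot = data.plot()
plot.gca().set_ylabel("Power [W]")
plot.savefig("psl_power_trends.png")
```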
TITLE: 07/01 Eve Shift: 2300-0500 UTC (1600-2200 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 16mph Gusts, 11mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for just over 3 hours. ITM bias change tests while observing have resumed for the afternoon.
TITLE: 07/01 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 20:17 UTC (3 hr xx min lock)
Very calm rest-of-shift after a speedy <1 hr 30 min lock acquisition.
Other:
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
| 14:39 | FAC | Karen | Optics, Vac Prep | N | Technical Cleaning | 15:06 |
| 15:51 | PEM | Robert | LVEA | YES | Viewport Compensation Plate Experiment | 18:48 |
| 16:04 | FAC | Karen | MY | N | Technical Cleaning | 17:17 |
| 16:04 | FAC | Kim | MX | N | Technical Cleaning | 16:44 |
| 16:59 | FAC | Eric | EX Chiller | N | Glycol static and residual pressure check | 17:41 |
| 17:02 | FAC | Tyler | MX, MY, EX, EY | N | Glycol Check | 17:41 |
| 17:31 | FAC | Chris | EY | N | HVAC Work | 18:31 |
| 20:00 | FAC | Karen | Optics Lab | Local | Technical Cleaning | 20:00 |
| 20:01 | PCAL | Francisco | Optics Lab | Local | Testing relays | 21:52 |
| 21:47 | SQZ | Terry, Kar Meng, Camilla | Optics Lab | Local | SHG Work | 22:16 |
FAMIS 21042
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Naoki, Camilla
We continued the hot-OM2 ZM4 PSAMS scan from 78636. We changed the ZM4 PSAMS strain voltage from 7.5V to 5.5V and saw the squeezing and range improvement shown in the first attachment. The second attachment shows the squeezing improvement; the squeezing level is 5.2dB around 2 kHz.
We also tried 4.5V for the ZM4 PSAMS, but it was worse than 5.5V, so we set the ZM4 PSAMS at 5.5V. The ZM5 PSAMS strain voltage is -0.78V.
Every time we changed the ZM4 PSAMS, we compensated the ZM4 pitch alignment with the slider and ran SCAN_ALIGNMENT_FDS and SCAN_SQZANG.
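A minimal sketch of one such scan step using ezca, under assumptions: the PSAMS voltage channel, the pitch-slider compensation value, and the SQZ_MANAGER node name are illustrative, while the scan state names are as given above.

```python
# Hedged sketch of one ZM4 PSAMS scan step: set the strain voltage, compensate
# pitch on the slider, then request the scan states named in this entry.
import time
import ezca

ez = ezca.Ezca(ifo="H1")

ez.write("SQZ-ZM4_PSAMS_VOLTAGE", 5.5)              # assumed voltage channel name
ez.write("SUS-ZM4_M1_OPTICALIGN_P_OFFSET", -120.0)  # placeholder pitch compensation

time.sleep(10)  # let the alignment settle before rescanning

# Guardian requests are written to the node's REQUEST channel; the
# SQZ_MANAGER node name is an assumption here.
ez.write("GRD-SQZ_MANAGER_REQUEST", "SCAN_ALIGNMENT_FDS")
# ... wait for that scan to finish, then:
ez.write("GRD-SQZ_MANAGER_REQUEST", "SCAN_SQZANG")
```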
18:15-18:35 UTC: took MICH, PRCL, SRCL LSC noise budget injections, following the instructions in 74681 and 74788. These were last taken with cold OM2 in 78554.
Committed in ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings and pushed to the aligoNB git. For the plot, used gpstime 1403879360 # Hot OM2, 2024/07/01 14:29 UTC, Observing, O4b, IFO locked 3.5 hours, 158 Mpc range
In addition to the usual references, I added H1:CAL-CS_{MICH,PRCL,SRCL}_DQ and saved them as ref13.
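For anyone repeating the plot, a minimal sketch of fetching that stretch with GWpy, assuming the brace notation above expands to three real channels and taking 600 s as an arbitrary stretch length:

```python
# Hedged sketch: fetch the reference stretch used for the noise budget plot.
# GPS time is from this entry; the channel expansion and the 600 s duration
# are assumptions.
from gwpy.timeseries import TimeSeriesDict

gps_start = 1403879360  # Hot OM2, 2024/07/01 14:29 UTC
duration = 600          # assumed stretch length

channels = [f"H1:CAL-CS_{dof}_DQ" for dof in ("MICH", "PRCL", "SRCL")]
data = TimeSeriesDict.get(channels, gps_start, gps_start + duration)

for name, ts in data.items():
    asd = ts.asd(fftlength=8)  # 8 s FFTs, a typical choice (assumption)
    print(name, asd)
```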
Comparing today's LSC noise budget to the cold-OM2 LSC noise budget in 78554:
IFO is in LOCKING after an unknown LOCKLOSS at 18:55 UTC
Commissioning was scheduled for 8:30 - 11:30 PT (15:30 - 18:30 UTC); it went well and finished at 11:36 PT.
Other:
After Sheila and Jennie's SRM move in 78776, the SRCL FF got worse; compare the green trace (before the SRM move) to the brown (after) in the attached plot. We tuned the SRCL FF gain from 1.18 to 1.14 to improve from the brown to the blue trace. If we see SRCL coherence, we could remeasure and refit the FF. Accepted in SDF and lscparams.
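For reference, a gain retune like this is a short ezca operation; a hedged sketch, assuming the feedforward lives in a filter module named LSC-SRCLFF1 (unverified):

```python
# Hedged sketch of the SRCL FF gain retune described above. The LSC-SRCLFF1
# module name is an assumption; confirm against the live model first.
import ezca

ez = ezca.Ezca(ifo="H1")
ez.write("LSC-SRCLFF1_TRAMP", 5)    # ramp the change over 5 s (assumed ramp time)
ez.write("LSC-SRCLFF1_GAIN", 1.14)  # was 1.18; values from this entry
```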
Mon Jul 01 10:12:00 2024 INFO: Fill completed in 11min 56secs
Gerardo confirmed a good fill curbside.
Sheila, Jennie W
We lost some range with the hot OM2, so we wanted to see if we could move SRM and raise the coupled cavity pole to regain that range, as we did before in this entry.
Took 5 minutes of no-squeeze time.
Opened the POP beam diverter to monitor POPAIR.
Following the procedure in this entry, we opened the SRC1 ASC loops by changing the ramp to 0.1s and then turning off the input and offsets simultaneously.
We then set the ramp to 10s, turned the gain to 0, cleared the history, and set the gain back to 4.
Then we started to move SRM alignment sliders.
The starting alignment sliders are here.
Started with yaw steps down on SRM; this was the wrong way, as the f_cc average went down, so we changed to stepping up in yaw. We then noticed POPAIR_RF18 was going down, so we switched to pitch steps down. We found a point at which stepping no longer changed f_cc and went back to the maximum found for f_cc. The trends can be seen here, where the first vertical cursor is when we opened the beam diverter and the second is when we thought we had the highest coupled cavity pole (middle bottom trace). After finishing, we put the SRM sliders back to their values from that time.
I then changed the offsets in SRC1 ASC to -0.0417 for ASC-SRC1_P_OFFSET and 0.0977 for ASC-SRC1_Y_OFFSET, but didn't turn them on while we grabbed 5 minutes of no-squeeze time.
17:05:30 UTC closed loops and switched on new offsets.
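The loop-open and offset sequence above, as a hedged ezca sketch; the channel and button names follow the usual ASC conventions but are assumptions here.

```python
# Hedged sketch of the SRC1 ASC steps described above: fast ramp, drop input
# and offset together, move SRM, then restore with the new offsets.
import ezca

ez = ezca.Ezca(ifo="H1")

# Open the loops: fast ramp, then input and offset off simultaneously.
for dof in ("P", "Y"):
    ez.write(f"ASC-SRC1_{dof}_TRAMP", 0.1)
    ez.switch(f"ASC-SRC1_{dof}", "INPUT", "OFFSET", "OFF")

# ... step the SRM sliders, find the best f_cc, restore the sliders ...

# New offsets from this entry, then close the loops again.
ez.write("ASC-SRC1_P_OFFSET", -0.0417)
ez.write("ASC-SRC1_Y_OFFSET", 0.0977)
for dof in ("P", "Y"):
    ez.write(f"ASC-SRC1_{dof}_TRAMP", 10)
    ez.switch(f"ASC-SRC1_{dof}", "INPUT", "OFFSET", "ON")
```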
Sheila used the second period of no-squeeze time to plot how the range compares to June 24th, when we had a cold OM2. This plot shows the DARM comparison on top (yellow is cold OM2, blue is hot OM2) and the range difference on the bottom. This plot shows the cumulative range comparison on top (yellow is cold OM2, blue is hot OM2), with the same range difference on the bottom. From the bottom plot, it looks like with hot OM2 we gain range between 25 and 40 Hz, see no change between 40 and 90 Hz, and lose range above that.
In summary, we didn't completely recover the range by raising the coupled cavity pole: hot OM2 is good for low-frequency range but not for high frequency. We still need to do some PSAMS tuning today, which might regain some range at higher frequencies.
I changed the values (ASC_AS72_P and ASC_AS72_Y) on lines 551 and 552 of lscparams.py to match the new offsets and reloaded the ISC_LOCK guardian.
Then I accepted these offset values in SDF after changing the ramps back to their setpoint of 5s.
The no-squeeze time, for reference:
With OM2 hot and the SRM alignment adjusted as Jennie described above, we took no-squeeze time from 16:58 to 17:05 UTC on July 1st.
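A hedged sketch of the spectrum comparison behind these plots, using the hot-OM2 no-squeeze stretch quoted above; the cold-OM2 time is a placeholder to be replaced with the actual June 24 stretch.

```python
# Hedged sketch: compare hot- vs cold-OM2 DARM sensitivity, as in the plots
# described above. The June 24 cold-OM2 time below is a placeholder.
from gwpy.timeseries import TimeSeries

hot = TimeSeries.get("H1:GDS-CALIB_STRAIN", "2024-07-01 16:58", "2024-07-01 17:05")
cold = TimeSeries.get("H1:GDS-CALIB_STRAIN", "2024-06-24 12:00", "2024-06-24 12:07")  # placeholder

asd_hot = hot.asd(fftlength=16, overlap=8)
asd_cold = cold.asd(fftlength=16, overlap=8)

plot = asd_hot.plot(label="Hot OM2", color="tab:blue")
ax = plot.gca()
ax.plot(asd_cold, label="Cold OM2", color="gold")
ax.set_xlim(10, 5000)
ax.legend()
plot.savefig("om2_darm_comparison.png")
```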
For the CO2 lasers, ITMY and ITMX have dropped in power, but only by a small amount (~1 and <1 [W]? I'm not sure of the units).
ITMY and ITMX HWS_SLEDPOWERMON dropped to zero last Tuesday when they were turned off due to alignment issues (alog 78654), and the SPHERICAL_POWER is still flatlined from the SR3 moves.
We tried the new hot-OM2 MICH FF filter loaded in FM5 from 78688. The original FM6 FF was better (see attached), so we left everything as nominal. Is MICH changing over time, such that last week's injections are no longer good today?
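The FM5/FM6 comparison is just a pair of button toggles; a hedged sketch, assuming the MICH feedforward module is named LSC-MICHFF:

```python
# Hedged sketch of the FM5/FM6 comparison above. The LSC-MICHFF module name
# is an assumption.
import ezca

ez = ezca.Ezca(ifo="H1")

# Try the new hot-OM2 filter in FM5.
ez.switch("LSC-MICHFF", "FM6", "OFF")
ez.switch("LSC-MICHFF", "FM5", "ON")

# ... measure the MICH coupling; it was worse, so revert to nominal:
ez.switch("LSC-MICHFF", "FM5", "OFF")
ez.switch("LSC-MICHFF", "FM6", "ON")
```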
TITLE: 07/01 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING (3 hr 35 min lock)
Optics Lab (Lab 1) dust counts have settled since an excursion about an hour and 10 minutes ago: levels were elevated for 11 minutes starting around 13:34 UTC (6:34 PT), triggering the yellow alert and nearly reaching the red alert threshold. Counts came back below threshold around 13:45 UTC. Trend screenshot attached.
TITLE: 07/01 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: One lockloss early in the shift, but recovery was simple and have been observing since. H1 has now been locked for 7 hours.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | Ongoing |
| 18:23 | SQZ | Terry | Opt Lab | local | SHG work | 21:47 |
| 21:05 | SQZ | Kar Meng | Opt Lab | local | SHG work | 00:18 |
| 22:40 | PEM | Robert | LVEA | y | Looking for noise source | 23:00 |
| 01:01 | - | Kar Meng | EY | n | Driving to EY | 01:25 |
Lockloss @ 23:39 UTC - link to lockloss tool
Locked for 26 minutes. No obvious cause, but this time I see ETMX motion about 100ms before the lockloss.
I'm going to take this opportunity to do a quick FSS RefCav alignment adjustment so that it can hopefully make it until Tuesday when we swap out the PMC.
H1 back to observing at 00:57 UTC. Pretty much a fully automated relock; I just slightly adjusted PRM to improve the buildups during PRMI, though it might have caught on its own.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this pops up frequently and this is just the first time I've caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment...which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
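A quick, hedged way to confirm which interpreter and which awg module a simulines session actually resolves (standard-library introspection only):

```python
# Hedged diagnostic: run inside the same environment that launches simulines
# to confirm which Python and which awg module it picks up.
import sys
import awg

print("python:", sys.executable)
print("prefix:", sys.prefix)
print("awg module:", awg.__file__)
print("awg version:", getattr(awg, "__version__", "unknown"))
```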
The CDS environment was stable between June 22 (last good run) and June 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On June 27 and 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
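A hedged sketch of the check behind these graphs: pull the L2 CAL EXCMON channels at both end stations over the sweep window and compare. The ETMY channel name is assumed by symmetry with the ETMX one above, and the window times are placeholders.

```python
# Hedged sketch: compare EX vs EY excitation monitors over a calibration sweep
# window. The window below is a placeholder; the ETMY channel name is assumed
# by symmetry with the ETMX channel quoted above.
from gwpy.timeseries import TimeSeriesDict

channels = ["H1:SUS-ETMX_L2_CAL_EXCMON", "H1:SUS-ETMY_L2_CAL_EXCMON"]
data = TimeSeriesDict.get(channels, "2024-06-27 20:00", "2024-06-27 21:00")

for name, ts in data.items():
    # A flat EXCMON means no excitation reached that stage during the window.
    print(f"{name}: max |value| = {abs(ts.value).max():.3g}")
```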