TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
The IFO has been unlocked all day due to a HAM2 DAC failure and for PSL and ALS adjustments left over from yesterday's maintenance.
The beat note at End X was touched up by Jenne, Camilla, and Sheila; this started in the morning and ended shortly before 4.
We started an Initial Alignment, but got stuck in Find X arm IR.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | End Time |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
15:10 | FAC | Karen | Optics Lab | N | Technical cleaning | 15:46 |
15:33 | PSL | Jason & Jenne | PSL Enclosure | YES | Touching up alignment of ALS FC2 | 17:16 |
17:19 | PEM | Robert, Carlos, Milly | Along arm Y | No | Testing a seismometer in the dirt | 20:19 |
17:21 | ISC | Richard | LVEA HAM6 | Yes | Checking out some racks near HAM6 | 17:31 |
17:24 | ALS | Jenne, Sheila, Camilla, Ollie | X Arm | Yes | Transitioning to LASER HAZARD to adjust ALS | 21:01 |
18:07 | PCAL | Karen & Francisco | PCAL Lab | Yes | Technical cleaning & escort. | 18:22 |
21:03 | ALS | Camilla | LVEA | Yes | Getting power meter | 21:22 |
21:14 | VAC | Gerardo | FTCE | N | Getting parts and checking Vacuum | 21:41 |
21:23 | ALS | Camilla, Jenne, Sheila | EX | Yes | Realigning ALS X fiber | 23:30 |
21:29 | CDS | Jonathan, Fil | CER | N | Unplugging Cat5 cables to restart cameras | 22:32 |
23:10 | PEM | Robert | LVEA | Yes | Setting up shaker | 23:30 |
TITLE: 07/03 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 10mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
After Erik replaced the failed HAM2 DAC, there was a timing error on another chassis, h1sush2b, which required a restart; see Erik's alog.
I then put all of the optics in SAFE to restart that chassis and reset the ISI and HPI watchdogs.
There was also a DAC kill that I had to reset before the ISI and HPI watchdogs could be reset.
We are now trying to get an initial alignment in, but the IMC is not locking. This may be because the IMC is in the HAM that tripped this morning.
Taking ISC_LOCK to IDLE
Trending the IMC alignment sliders back to their values from before the earthquake (see the sketch after this summary for how such a trend lookup could be scripted).
Camilla and I moved MC1, MC3, PRM, IM2, and IM4 back to their slider values from before the chassis trip and recovery; for IM3, see Camilla's alog.
Jason and Jenne went into the PSL room to adjust ALS FC2. When they came out, Jenne said that an ALS adjustment at End X was next.
Jenne, Sheila, Camilla & Ollie are at EX working on ALS fiber alignment.
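As referenced above, here is a minimal sketch of how the pre-trip slider values could be looked up (assuming gwpy/NDS access; the channel name and time window are placeholders, not the ones actually used):

# Hedged sketch: look up an alignment slider's value from before the HAM2 trip
# so it can be restored by hand. Channel name and time window are placeholders.
from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-MC1_M1_OPTICALIGN_P_OFFSET'            # example slider channel (assumed name)
start, end = '2024-07-03 11:00', '2024-07-03 11:50'   # window before the ~11:56 UTC trip

data = TimeSeries.get(chan, start, end)               # fetch from NDS
print(f"{chan} pre-trip value: {data.value[-1]:.1f} urad")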
[Vlad, Louis] We were taking a look at the GDS FIR Filter plots (pg. 21+) in the calibration reports at LHO (H1_calibration_report_20240601T183705Z.pdf) and LLO (L1_calibration_report_20240518T183037Z.pdf) side by side. We noticed several features in the GDS FIR filter comparison plots that we don't understand.
1. res_corr_comparison.png: the LHO res corr comparison (I think this stands for "residual correction comparison") starts to run away at low frequencies (<8 Hz), while it's flat at LLO.
2. ratio_res_corr_comparison.png: LHO's "Ratio of Res Corr comparison" plot has a low-frequency ripple that is not present in LLO's reports.
3. ratio_res_corr_no_cc_pole_comparison.png: same as above for the "Ratio of Res Corr No CC Pole Comparison" plots.
4. ratio_tst_corrections_comparison.png: there are resonances present in LHO's "Ratio of TST corrections comparison" plots that (a) don't appear in LLO's reports and (b) don't match up with the violin modes at 500 Hz and 1 kHz. The same is true for the PUM (ratio_pum_corrections_comparison.png) and UIM (ratio_uim_corrections_comparison.png) stages.
The biggest concern is whether these discrepancies are outside of nominal for the GDS FIR pipeline, which would mean that we are introducing additional errors in the GDS pipeline. Could it be an issue in our model of DARM somewhere along the way? Or a mismatch between CAL-CS and the model?
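For context on what those "ratio" plots show, below is a generic sketch of comparing two FIR filters' magnitude responses and taking their ratio; it is not the GDS pipeline code, and the filters here are stand-ins rather than the ones from the reports.

# Generic sketch (not the GDS pipeline): overlay two FIR filters' magnitude
# responses and their ratio to look for low-frequency run-away or ripple.
# The filter taps are placeholders; the real comparison uses the report filters.
import numpy as np
from scipy.signal import firwin, freqz

fs = 16384                                           # sample rate, Hz (assumed)
fir_a = firwin(1025, 8.0, fs=fs, pass_zero=False)    # stand-in "LHO" filter
fir_b = firwin(2049, 8.0, fs=fs, pass_zero=False)    # stand-in "LLO" filter

freqs = np.logspace(-1, np.log10(fs / 2), 1000)      # 0.1 Hz to Nyquist
_, h_a = freqz(fir_a, worN=freqs, fs=fs)
_, h_b = freqz(fir_b, worN=freqs, fs=fs)

ratio = np.abs(h_a) / np.abs(h_b)                    # flat ratio => filters agree
for f, r in zip(freqs[::200], ratio[::200]):
    print(f"{f:9.2f} Hz   ratio {r:.3f}")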
J. Oberling, J. Driggers
As a result of yesterday's PMC work several beams downstream of the PMC were misaligned; one of these was the beam into the fiber pickoff for ALS and SQZ. I went in this morning to tweak this alignment so there was sufficient light available for both the ALS PLL loops and SQZ. In the past this has been a very quick tweak to the steering mirror that directs the pickoff beam into the fiber, but not today. Adjusting the steering mirror only brought the fiber transmission signal from ~0.02 to ~0.035 (our past max has been around 0.09), and trending the external PD that monitors the light available to the fiber showed we had more light than before the PMC work. Regardless, I used a power meter to check the amount of power incident on the fiber coupler and found it at 40 mW; when this path was first installed in 2019, and upon recovery after the PSL laser upgrade in 2021, we had 50 mW in this pickoff path, so I adjusted ALS-HWP2 to bring the power back to 50 mW (interesting that the external PD thought we had more light when we had less?).
Ultimately, I had to start adjusting the fiber coupler alignment as well as the steering mirror. The coupler has 5 degrees of freedom to adjust: horizontal, vertical, longitudinal, pitch, and yaw; these adjustments move the fiber coupler's internal coupling lens (longitudinal adjustment is performed by moving the 3 screws that control pitch and yaw all in the same direction by the same amount). Horizontal and vertical adjustment did nothing but quickly drive the beam off of the fiber; I accidentally did this when checking the horizontal alignment and had a very hard time finding it again, calling Jenne in for assistance (Thank You!!).
What ultimately worked was adjusting the lens away from the fiber (so towards the steering mirror) by a little bit (roughly 1/8 turn of the allen key; this always resulted in a loss of pickoff signal), tweaking the steering mirror to peak the signal (rarely getting back to the starting value), then carefully tweaking each of the three pitch/yaw adjustments on the coupler to peak the signal again (it's at this point the signal would increase past its previous max). At first the going was very slow, fighting for every 0.001 to 0.002 increase, but the increases became larger around the middle of the 0.06 - 0.07 range. In the end we stopped when the fiber transmission signal was reading ~0.101, which is the highest it's been in over a year.
Why did we have to move the coupling lens? The immediate answer is, "Because the beam size changed." But why did the beam size change? A couple of WAGs: The beam size out of this new PMC is different than the old, or the slight alignment change we saw post-PMC has the beam passing through the 2 lenses between the ALS/ISCT1 pickoff and the fiber pickoff (lenses ALS-L1 and ALS-L3; the fiber pickoff is a pickoff from the ALS/ISCT1 path on the IO side of the PSL table) in such a way that the beam size changed slightly (yeah, this one is a pretty big WAG...). Regardless, from what I saw this morning it's possible that the beam size output from the new PMC is slightly different than the old.
Wed Jul 03 10:13:04 2024 INFO: Fill completed in 13min 1secs
Gerardo confirmed a good fill curbside.
Andrei, Naoki, Sheila
Following aLOG 78726, we calculated the optical gain [counts/μm] from the FC_IR_OLTF following the steps in the attached Diagram. This transfer function will be used to calibrate the FC mirror movement from its error signal for later use in RIN estimation and backscattering measurements. The inferred optical gain plot can be found here: inferred_OG_with_fit.png. The cavity pole frequency is estimated to be 34 Hz.
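As a rough illustration of how the inferred optical gain and cavity pole would be applied to calibrate the error signal into mirror motion, here is a minimal sketch; the DC optical gain value below is a placeholder, and only the 34 Hz pole comes from this measurement.

# Minimal sketch (not the analysis code): calibrate the FC error signal [counts]
# into mirror displacement [um] using the inferred optical gain and a single
# cavity pole. The DC optical gain is a placeholder; 34 Hz is from the fit above.
import numpy as np

f_pole_hz = 34.0      # estimated cavity pole frequency
og_dc = 1.0e6         # DC optical gain, counts/um (placeholder value)

def error_asd_to_displacement(freqs_hz, err_asd_counts):
    """Convert an error-signal ASD [counts/rtHz] into displacement [um/rtHz]."""
    cavity_pole_mag = 1.0 / np.sqrt(1.0 + (freqs_hz / f_pole_hz) ** 2)  # |1/(1 + i f/fp)|
    return err_asd_counts / (og_dc * cavity_pole_mag)

freqs = np.logspace(0, 3, 200)                              # 1 Hz - 1 kHz
disp_asd = error_asd_to_displacement(freqs, np.ones_like(freqs))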
A reminder that the current EX timing system error is expected due to a mismatch in the firmware version of h1susex's timing card. To verify that this is the only timing error at EX, I have added a check to the CDS STAT system. It will report an error if an additional timing error occurs.
I have annotated the timing system top-level MEDM with this information and included a related-display button to open the CDS STAT MEDM (see attached).
When the timing system is updated to expect the new firmware version and remove the EX error, I will remove these temporary measures.
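As I understand it, the intent of the check amounts to ignoring the known h1susex firmware mismatch while flagging any other timing error at EX; a rough sketch of that logic is below. The channel names are made up for illustration (this is not the actual CDS STAT code).

# Rough sketch of the intent of the CDS STAT check, not the actual implementation.
# The channel names below are hypothetical placeholders.
from epics import caget

EX_TIMING_ERROR_FLAGS = [
    'H1:SYS-TIMING_X_FO_A_PORT_0_ERROR',    # hypothetical: raised by the known h1susex mismatch
    'H1:SYS-TIMING_X_FO_A_PORT_1_ERROR',    # hypothetical: any other EX timing error
]
KNOWN_ERROR = EX_TIMING_ERROR_FLAGS[0]       # expected until the firmware version is updated

def additional_ex_timing_error():
    """Return True if any EX timing error other than the known mismatch is present."""
    flagged = [ch for ch in EX_TIMING_ERROR_FLAGS if caget(ch)]
    return any(ch != KNOWN_ERROR for ch in flagged)

if additional_ex_timing_error():
    print('CDS STAT: additional EX timing error detected')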
The exhaust fans that serve the LEXC bathrooms were not commanding on. We searched for any sign of power local to the fans and none could be found. All circuits that serve the fans were not tripped. At this time we have manually forced them on at the FMCS. We suspect the failed AHU in LEXC zone 2 sets the occupancy schedule for these fans. The fans will continue to be operated manually until PHP-02 (the dead Daikin) is repaired. B. Gateley, E. Otterman, T. Guidry
MX, EX, and MY received approximately 130 gallons of glycol between the three stations. Line pressures were set to their desired levels. T. Guidry, E. Otterman, C. Soike
Sheila, Tony, Jenne, Camilla. Details in 78828.
Ryan, Jenne, Camilla
We tried aligning ASC-POP_A with IM4 and then ASC-X_TR_A with PR2, but couldn't get the X arm locked.
We then reverted IM3, IM4, and PR2 so that the X arm could lock.
Ryan slowly moved IM3 back to this position and the servos followed along!
At 14:44 UTC h1sush2b suffered a timing error and had to be restarted.
The timing error happened when work was being done in the same rack in the CER, see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78823.
The timing error was a "long cycle" of 50 milliseconds.
Restart log for this morning's work:
Wed03Jul2024
LOC TIME HOSTNAME MODEL/REBOOT
05:30:57 h1sush2a ***REBOOT***
05:33:09 h1sush2a h1iopsush2a
06:45:01 h1sush2a ***REBOOT***
06:47:12 h1sush2a h1iopsush2a
07:18:01 h1sush2a ***REBOOT***
07:20:13 h1sush2a h1iopsush2a
07:35:51 h1sush2a ***REBOOT***
07:38:04 h1sush2a h1iopsush2a
07:38:17 h1sush2a h1susmc1
07:38:30 h1sush2a h1susmc3
07:38:43 h1sush2a h1susprm
07:38:56 h1sush2a h1suspr3
07:59:25 h1sush2b ***REBOOT***
08:01:07 h1sush2b h1iopsush2b
08:01:20 h1sush2b h1susim
08:01:33 h1sush2b h1sushtts
Started well pump to top off fire water tank. Pump will run for 4 hrs and auto off.
TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
DIAG_MAIN says: CONNECTION ERRORS, see SPM DIFFS for dead channels.
CDS_OVERVIEW shows:
H1IOPSUSH2A is clearly having a hard time, as described in Oli's alog 78823. Erik says this is a bad 18-bit DAC.
Erik, Oli
Got a call from IFO_NOTIFY - there was ground motion that caused the HAM2 ISI (11:56:26 UTC), SEI SW (11:56:28 UTC), and HEPI (11:56:31 UTC) watchdogs to trip. Along with/causing this (or by coincidence?) is an error with H1IOPSUSH2A, and the GDS screen was showing 'LTDS' in red. The front end needs to be either power-cycled or possibly have a card replaced, so Erik will be heading to the site in a bit to see about fixing the errors. I manualed ISC_LOCK into IDLE, and either the DAY operator or I will get the IFO back up once the CDS issue is resolved.
[Erik, Richard]
The third 18-bit DAC (DAC 2) on h1sush2a failed. It no longer appeared on the PCI Bus.
Richard replaced the DAC and we restarted the front end.
I originally misdiagnosed the problem as a bad timing card, since the DAC failure caused cascading timing errors on other cards, so we also replaced the timing card with serial number S2101122.
Serial number of the failed DAC: 101208-06
The ground motion plotted here is very unlikely to have caused a seismic watchdog trip. The HAM2 trip was almost certainly caused by the h1sush2a failure.
[Jenne, Sheila, Tony]
The PMC swap work earlier today (alog 78813) resulted in a slight shift of the alignment of the beams coming out of the PMC.
After Ryan and Jason finished getting the ref cav aligned and locked, the IMC alignment looked poor. I was able to mostly get it back by hand-moving the sliders for the PZT at the PSL periscope. I touched MC2 a tiny bit, and then the IMC was able to lock. I lowered the IMC WFS thresholds (eg H1:IMC-IMC_TRIGGER_THRESH_ON and _OFF) so that it would take over alignment. Once the alignment was pretty nearly finished, I used the IMC_LOCK guardian to offload the IMC WFS. I saved in the safe.snap file for IMCASC the PZT offsets that had been offloaded.
In parallel, Sheila worked on the ALS system. The initial goal was to check and adjust the PSL pickoff portion of the COMM beatnote; however, Sheila noted that the ALS Xarm couldn't lock, since the end station PLLs weren't getting enough light from the PSL. Sheila lowered the threshold of H1:ALS-C_FIBR_INTERNAL_DC_LOW from 0.05 mW to 0.02 mW, and also lowered H1:ALS-X_FIBR_LOCK_FIBER_LAUNCHLIM from 0.3 to 0.2. After this, the ALS Xarm was able to lock, and Sheila went out to ISCT1 and got the COMM beatnote up to about -1 dBm. We then convinced the ALS Yarm to lock after lowering H1:ALS-Y_FIBR_LOCK_FIBER_LAUNCHLIM from 0.3 to 0.2. Sheila also changed H1:SQZ-FIBR_TRANS_DC_LOW from 0.2 to 0.05 mW and H1:SQZ-FIBR_LOCK_FIBER_LAUNCHLIM from 0.3 to 0.15, after which she was able to get the squeezer all locked up, so it seems like the squeezer should work fine when we get to that point this evening.
Attached is the SDF screenshot that Sheila took for the squeezer thresholds. The other ALS thresholds were also accepted in their safe.snap files. All of these will need to be accepted in the Observe.snap SDFs in order for us to get to Observing.
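For reference, the full set of lowered thresholds described above could be collected in a small script like the following (a sketch using pyepics; the values are those quoted above, kept in one place so they are easy to revert once the PSL fiber pickoff alignment is fixed).

# Sketch only: the ALS/SQZ threshold changes described above, gathered in one
# place so they can be reverted after the fiber pickoff alignment is restored.
from epics import caput

lowered_thresholds = {
    'H1:ALS-C_FIBR_INTERNAL_DC_LOW':      0.02,   # was 0.05 mW
    'H1:ALS-X_FIBR_LOCK_FIBER_LAUNCHLIM': 0.2,    # was 0.3
    'H1:ALS-Y_FIBR_LOCK_FIBER_LAUNCHLIM': 0.2,    # was 0.3
    'H1:SQZ-FIBR_TRANS_DC_LOW':           0.05,   # was 0.2 mW
    'H1:SQZ-FIBR_LOCK_FIBER_LAUNCHLIM':   0.15,   # was 0.3
}

for chan, value in lowered_thresholds.items():
    caput(chan, value)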
I helped the initial alignment a bit, by moving IM4 and PR2 a bit for the Input_align, and then moving PRM for PRC_align. After that, initial alignment seemed to finish on its own.
Particularly since we have been having ALS beatnote issues lately (see alog 78806 for some notes), we will almost certainly want to go into the PSL tomorrow to tweak the alignment of the optic that directs the beam to the fiber for the fiber distribution box, so that we can undo all these threshold changes. Jason makes the good point that the longest part of doing that tweak will be garbing up for going into the PSL, so it should be quite easy.
[RyanC, Jenne]
We lost DRMI lock 3 times in a row at the same place (without losing ALS lock), so it seemed like we had some DRMI alignment issues. It turns out that we are barely on POP A QPD, which is not at all the same place that the offsets are set to. For now, it seems to work well to take PRC1 out of the loop. We'll need to take a look at it in the morning though.
Our 2W power recycling gain, before full IFO alignment is complete, is already above 60. Usually in this state it is as high as 58, so this is unusually good.
We are very near the edge of the IM4 trans QPD, so we are dividing by a too-small number, which makes our PRG artificially high. That makes more sense, particularly since the arm transmissions aren't higher than usual.
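As a rough numerical illustration of that effect (assuming the PRG is computed as recycling-cavity power normalized by the IM4 trans reading; the numbers are illustrative):

# Rough illustration only: how an under-reading IM4 trans inflates the reported PRG.
true_input = 1.91      # what IM4 trans would read with the beam centered (arb. units)
measured   = 1.69      # edge-of-QPD reading after the alignment shift (arb. units)

prc_power = 58 * true_input                 # cavity power consistent with a "real" PRG of 58

prg_true     = prc_power / true_input       # 58.0
prg_reported = prc_power / measured         # ~65.6 -- the "unusually good" PRG is an artifact
print(f"true PRG {prg_true:.1f}, reported PRG {prg_reported:.1f}")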
After Tony and I brought the alignment sliders back from the HAM2 WD trip 78823, the IMC locked and we moved IM3 (a lot) to center the beam back on IM4 trans. Plot attached; IM3 sliders moved 450 urad in yaw, 40 urad in pitch. Jason is currently in the PSL, so temperature changes may mean we need to readjust again later. We may need to pico once we're at full lock with good range.
Usually with the IFO down and IMC locked, we have 1.91 on the IM4 trans nsum and centered beam (0.04 P, 0 in Y), plot. After the PSL alignment change yesterday we had 1.69 on the IM4 trans nsum and the beam mis-centered, mainly in yaw (0.3 P, 1 in Y) plot. See overall change here.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
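For anyone cross-checking the UTC and GPS stamps above, the conversion can be reproduced with gwpy (a quick sketch, not part of the measurement):

# Quick consistency check of the UTC <-> GPS stamps above using gwpy.
from gwpy.time import to_gps, from_gps

print(to_gps('2024-06-29 21:11:45.566107'))   # expect 1403730723.566107
print(from_gps(1403732006.154689))            # expect 2024-06-29 21:33:08.154689 UTC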
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this pops up frequently and this is just the first time I've caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex is running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment...which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
The CDS environment was stable between June 22 (last good run) and Jun 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
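A quick way to make this kind of check (whether the EX excitations actually ran during a sweep) is to trend the EXCMON channels over the measurement window, for example (a sketch using gwpy; the L1 channel name and the time window are assumptions):

# Sketch: check whether the EX calibration excitations ran during a sweep by
# trending the EXCMON channels over the measurement window. The L1 channel name
# and the time window below are assumptions, not taken from the actual checks.
from gwpy.timeseries import TimeSeriesDict

channels = ['H1:SUS-ETMX_L1_CAL_EXCMON', 'H1:SUS-ETMX_L2_CAL_EXCMON']
start, end = '2024-06-29 21:11', '2024-06-29 21:34'

data = TimeSeriesDict.get(channels, start, end)
for name, ts in data.items():
    ran = ts.abs().max().value > 0
    print(f"{name}: {'excitation seen' if ran else 'no excitation'}")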
I had a lockloss at LASER_NOISE_SUPPRESSION, but now the XARM PDH keeps locking and unlocking and looks very fuzzy, and the DIFF beatnote is bad, at -14.