Naoki and I took some no-squeezing time from 20:34 to 20:42 UTC. We see 4.2 dB at 1 kHz. Plot attached.
We tried to tune the OPO temperature with the aim of reducing the yellow 300 Hz BLRMS: H1:SQZ-OPO_TEC_SETTEMP was changed from 31.49 to 31.485 deg. The SQZ_ANG_ADJUST guardian adjusted the SQZ angle as we adjusted the OPO temperature, which is nice.
I took a look at a few times that the PSL dust monitors have alarmed over the past two weeks, looking to see if there was any correlation between the wind and the CS dust monitors, particularly the PSL ones. I found a few examples where the winds picked up and then the dust monitors saw their counts increase (ref1, ref2, ref3, ref5), but only in ref1, and maybe ref5, do we see the dust propagate out to the other dust monitors. In ref1 the wind also further increases after the dust counts increase, without another jump in dust counts from it. I also saw a few times where the dust counts increased without any real wind changes (ref4, ref6). Ref5 seems to show some dust moving around the LVEA with the wind?
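As a rough illustration of the kind of comparison described above, here is a minimal gwpy sketch that overlays a corner-station wind-speed channel with a PSL dust-monitor channel around one of these times. The channel names and time window are placeholders/assumptions, not necessarily the exact channels used for the referenced plots.

```python
# Sketch only: overlay wind speed and PSL dust counts around one alarm time
# to look for the kind of correlation described above. The channel names and
# time window below are assumptions; substitute the actual PEM channels.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

start, end = '2023-11-28 00:00', '2023-11-28 06:00'   # hypothetical window
channels = [
    'H1:PEM-CS_WIND_ROOF_WEATHER_MPH',    # assumed wind-speed channel name
    'H1:PEM-CS_DUST_PSL101_300NM_PCF',    # assumed PSL dust-monitor channel name
]

data = TimeSeriesDict.get(channels, start, end, verbose=True)

# stacked panels sharing a time axis make wind-then-dust ordering easy to see
plot = Plot(data[channels[0]], data[channels[1]], separate=True, sharex=True)
plot.axes[0].set_ylabel('Wind speed [mph]')
plot.axes[1].set_ylabel('Dust counts [particles/ft^3]')
plot.savefig('wind_vs_dust.png')
```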
Start:
PST: 2023-12-06 11:28:18.787126 PST
UTC: 2023-12-06 19:28:18.787126 UTC
GPS: 1385926116.787126
End:
PST: 2023-12-06 11:50:26.331166 PST
UTC: 2023-12-06 19:50:26.331166 UTC
GPS: 1385927444.331166
Files:
2023-12-06 19:50:26,172 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,191 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,204 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,218 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,230 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231206T192820Z.hdf5
ICE default IO error handler doing an exit(), pid = 354122, errno = 32
Both LLO and H1 have dropped out of Observing for planned calibration and commissioning time, lasting until 1500 PT (2300 UTC).
Back to Observing at 2304 UTC
Wed Dec 06 10:10:05 2023 INFO: Fill completed in 10min 1secs
Gerardo confirmed a good fill curbside.
I took un-damped spectra (amplitude spectral densities) of all the suspensions today to check for rubbing or mechanical/electrical issues. The latest results (orange trace) are compared with the ones taken in 2020 (blue trace, LHO alog 56019, which were judged healthy), and the dotted line is the BOSEM sensor noise. Given below are the results for some of them (I am still processing the others):
For now I have just glanced through the plots and will look into the details later.
Quads - ETMX, ETMY, ITMX, ITMY : They look healthy to me, although I need to cross check the tall peaks of ETMX (for F1, F2, F3 sensors).
HSTS - MC1, MC2, MC3, PR2, PRM, SR2, SRM :
MC1's RT BOSEM (also the L and Y dofs) looks to be a bit worse than previous measurements - see page 5 for MC1. Later, I requested that Oli take transfer function measurements on MC1, and they look healthy. Oli will post the plots later. Perhaps the RT BOSEM on MC1 is degrading with time.
On MC2, sensor T3 (MC2 page 3) shows a broad peak at approximately 15 Hz (this also needs to be cross-checked).
HLTS - PR3, SR3 : the results look better than the ones taken in 2020; however, I still need to cross-check the resonant peaks.
The results for the other remaining suspensions will be posted after processing.
Having looked through all the un-damped SUS spectra, the item of most concern is the MC1 M1 stage RT channel BOSEM. I would suspect a fault in the read-back electronics chain (e.g. the ADC, AA chassis, or satellite box) rather than a fault in-chamber with the BOSEM. If this elevated noise is not observed in subsequent measurements, it could be intermittent and therefore tricky to track down, so I would recommend monitoring the MC1 M1 stage RT channel for elevated noise.
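For reference, here is a minimal sketch (not the analysis code used for the attached plots) of how one of these comparisons can be reproduced with gwpy: fetch an M1 OSEM readback for the recent undamped span and for a 2020 reference span, compute the ASDs, and overlay them. The channel name and time spans below are illustrative assumptions.

```python
# Sketch only: compare an undamped MC1 M1 RT OSEM spectrum against a 2020
# reference. The channel name and time spans are illustrative assumptions,
# not those used for the attached plots.
from gwpy.timeseries import TimeSeries
from gwpy.plot import Plot

CHAN = 'H1:SUS-MC1_M1_OSEMINF_RT_OUT_DQ'   # assumed RT OSEM readback channel

new = TimeSeries.get(CHAN, '2023-12-05 18:00', '2023-12-05 18:20')   # undamped span
ref = TimeSeries.get(CHAN, '2020-09-01 18:00', '2020-09-01 18:20')   # 2020 reference

# amplitude spectral densities with 100 s FFTs and 50% overlap
asd_new = new.asd(fftlength=100, overlap=50)
asd_ref = ref.asd(fftlength=100, overlap=50)

plot = Plot(asd_ref, asd_new)
ax = plot.gca()
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlim(0.05, 50)
ax.set_ylabel('ASD [counts/rtHz]')
ax.legend(['2020 reference', '2023 undamped'])
plot.savefig('mc1_m1_rt_asd_comparison.png')
```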
TITLE: 12/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.84 μm/s
QUICK SUMMARY: Range has been slowly dropping over the last hour, unsure why so far. Violins are still slowly decreasing.
The range seems to have recovered, but has been more glitchy. The dip in range seems to correlate with noise seen in our DARM BLRMS between 10-60 Hz, and maybe a little into the 60-100 Hz band as well. This makes me think that it's the same noise that we've been seeing. I don't see anything in the 30-100 mHz Z SEI BLRMS though.
TITLE: 12/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
All the action was in the last 20min! H1 had a lockloss. Microseism leveled off over the shift (but it's still OVER the 95th percentile).
LOG:
Smooth running first half of the shift with H1 locked for 5+hrs. Microseism continues to increase (above the 95th percentile).
I have created a script that will grab the current OPTICALIGN_{P,Y}_OFFSET values for our suspensions and use those to replace the OPTICALIGN_{P,Y}_OFFSET in their safe.snap files. This is so we have a quick and easy way to update these values to more recent offset configurations.
We tested the script during Tuesday maintenance for the RMs and OMs and it worked well (txt showing input and output). After conversation with Jenne and TJ, we've decided to have it run at the end of every initial alignment, so we will be adding a new state in INIT_ALIGN that runs this script for every suspension (except the SQZ optics for now).
For manual usage, the script is called update_sus_safesnap.py and can be found at $(USERAPPS)/isc/h1/scripts/update_sus_safesnap.py. It can be run without arguments to update the safe offset values for all of the optics in the default groups (IMs, RMs, MCs, PRs, BSs, SRs, OMs, TMSs, ETMs, ITMs), or the exact optic groups or individual optics can be specified using -o [OPTICS].
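For readers curious what such a script boils down to, here is a minimal, hypothetical sketch of the core idea (read the live OPTICALIGN_{P,Y}_OFFSET values over EPICS and substitute them into the matching records in each suspension's safe.snap), with an -o option similar to the one described. The paths, snap-file parsing, and default optic list below are simplified assumptions, not the production code.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the idea behind update_sus_safesnap.py: read the
# current OPTICALIGN_{P,Y}_OFFSET values over EPICS and substitute them into
# the corresponding lines of each suspension's safe.snap. Paths, the snap
# parsing, and the default optic list are simplified assumptions.
import argparse
import re
import epics  # pyepics

DEFAULT_OPTICS = ['RM1', 'RM2', 'OM1', 'OM2', 'OM3']  # subset for illustration
SNAP_TEMPLATE = '/opt/rtcds/userapps/release/sus/h1/burtfiles/h1sus{optic}_safe.snap'

def update_optic(optic):
    """Rewrite the OPTICALIGN P/Y offset entries in this optic's safe.snap."""
    snap_path = SNAP_TEMPLATE.format(optic=optic.lower())
    with open(snap_path) as f:
        lines = f.readlines()
    for dof in ('P', 'Y'):
        chan = f'H1:SUS-{optic}_M1_OPTICALIGN_{dof}_OFFSET'  # assumed channel form
        value = epics.caget(chan)
        if value is None:
            raise RuntimeError(f'could not read {chan}')
        # replace the numeric field on the snap line naming this channel
        pattern = re.compile(rf'^({re.escape(chan)}\s+\S+\s+)\S+')
        lines = [pattern.sub(rf'\g<1>{value:.6g}', line) for line in lines]
        print(f'{chan} -> {value:.6g}')
    with open(snap_path, 'w') as f:
        f.writelines(lines)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Update safe.snap OPTICALIGN offsets')
    parser.add_argument('-o', '--optics', nargs='+', default=DEFAULT_OPTICS)
    args = parser.parse_args()
    for optic in args.optics:
        update_optic(optic)
```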
TITLE: 12/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.12 μm/s
Secondary useism: 0.78 μm/s
QUICK SUMMARY:
H1 has been Observing for 1+ hrs post-maintenance, with microseism slowly continuing to climb above the 95th percentile; breezes have increased slightly. Range is currently beginning to touch 150 Mpc (and DARM looks better than it did over the rough weekend, when there was a DARM bump from 10-55 Hz).
TITLE: 12/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Maintenance recovery was straight forward and fully automated. Only one pair of SDF diffs from the maintenance period.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:56 | FAC | Randy | OSB | n | Move Genie lift | 16:56 |
| 15:57 | FAC | Tyler | EX | n | Look at chiller 1 | 17:44 |
| 16:05 | SAF | TJ | LVEA | N | Laser hazard transition | 16:20 |
| 16:07 | VAC | Jordan | Endy | N | Turbopump | 17:08 |
| 16:11 | CAL | Tony | PCAL lab | local | Prep for measurement | 16:18 |
| 16:12 | FAC | Karen | Endy | N | Tech clean | 17:27 |
| 16:12 | FAC | Chris | Ends | N | FAMIS checks | 19:51 |
| 16:14 | FAC | Cindi | FCES | N | Tech clean | 17:47 |
| 16:17 | EE | Ken | EndX | N | EE work, lights | 19:56 |
| 16:20 | TCS | Camilla, TJ | LVEA | Y | Inv TSCY chiller lines | 16:33 |
| 16:21 | PSL | RyanS | PSL anteroom | LOCAL | Check dustmon | 16:47 |
| 16:30 | SUS | Jason | LVEA | Y | OPLEV BS work | 17:26 |
| 16:31 | VAC | Janos, Travis | Site | N | Turbo bellow hoses | 18:02 |
| 16:38 | SUS | Dave | remote | n | Restart SUS EX | 20:16 |
| 16:41 | FAC | Chris + Pest | Site | N | Pest contractor walkthroughs | 17:47 |
| 16:46 | TCS | Camilla, TJ | Mech room | N | TCSY chiller | 17:07 |
| 16:47 | PEM | Robert | LVEA | Y | Viewport work, MC3 camera offline | 20:57 |
| 16:49 | PEM | Fil | EndX then Y | N | PEM cabling | 19:56 |
| 16:53 | CAL | RyanS | CR | N | PSL work, IMC offline | 17:32 |
| 17:04 | SUS | Rahul, Oli | CR | N | Undamped SUS spectra | 20:19 |
| 17:12 | VAC | Gerardo | LVEA | - | Turbo pump testing | 19:37 |
| 17:23 | TCS | Camilla | LVEA | - | Turn on CO2Y | 17:29 |
| 17:26 | CDS | Fernando, Marc | CER | n | Power supply inventory | 18:49 |
| 17:28 | PCAL | Tony, Dana, Rick | EX | YES | PCAL measurement | 19:55 |
| 17:37 | SQZ | Daniel | LVEA-SQZT | YES | Look on table at cabling situation | 18:55 |
| 17:43 | VAC | Jordan | EY | n | Turning off turbo, scroll pump test | 18:13 |
| 17:45 | FAC | Tyler | LVEA | - | 3IFO check | 17:52 |
| 17:52 | FAC | Tyler | EX | n | Mech room | 19:25 |
| 18:19 | FAC | Karen, Kim | LVEA | - | Tech clean | 19:43 |
| 18:41 | SEI | Jim | CR | n | ITM BSC meas. | 19:36 |
| 18:51 | CC | Ryan C | LVEA, EX, EY | - | Dust mon checks | 20:14 |
| 19:37 | VAC | Gerardo | FCES | n | Purge air measurement | 20:14 |
| 20:48 | OPS | Camilla | LVEA | Y | Sweep | 20:59 |
| 21:19 | PCAL | Tony | Pcal lab | local | PCAL meas | 00:49 |
| 21:36 | ISC | TJ, Ryan S | LVEA | Y | Touch up COMM beatnote | 22:17 |
| 22:28 | CC | RyanC | FCES | n | Dust mon check | 22:45 |
WP11562 Reduce SUS EX SWWD countdown period
Dave, Erik, Jonathan, TJ:
As a follow-on from the EX HWWD trip early Saturday morning, we reduced the time for the SUS SWWD to issue its DACKILL from 20 minutes to 15 minutes.
This will mean that the SUS SWWD will trip 5 minutes before the HWWD.
The SWWD timers are hard coded (by design) in the IOP models. I created a new h1iopsusex with the second timer changed from 900 seconds (15 mins) to 600 seconds (10 mins). Adding this to the first timer of 5 mins gives us the required 15 mins.
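To make the timing explicit, here is a small sanity-check sketch of the two-stage arrangement described above (values taken from the text; the variable names are just for illustration):

```python
# Two-stage SUS software watchdog (SWWD) timing as described above.
FIRST_STAGE_S  = 5 * 60    # first timer: 5 minutes (unchanged)
SECOND_STAGE_S = 600       # second timer: reduced from 900 s (15 min) to 600 s (10 min)
HWWD_S         = 20 * 60   # hardware watchdog trips at 20 minutes

swwd_total = FIRST_STAGE_S + SECOND_STAGE_S   # 900 s = 15 minutes
margin = HWWD_S - swwd_total                  # 300 s = 5 minutes
assert swwd_total == 15 * 60 and margin == 5 * 60
print(f'SWWD DACKILL at {swwd_total/60:.0f} min, {margin/60:.0f} min before the HWWD')
```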
In theory this just needed a restart of the models on h1susex, but it did not go well.
The models were stopped with 'rtcds stop --all'.
The h1iopsusex model was started, and I verified the timer change was installed (it was).
I restarted the models h1susetmx, h1sustmsx, h1susetmxpi. So far so good.
I did a list check with 'rtcds status' and was just about to logout when h1susex completely locked up.
h1iopsusex started at 11:58:46, lockup was at 12:00:50 (2min 4sec later)
There was no recourse except to remotely power cycle h1susex via its IPMI port. It was fenced from the dolphin switch before power was cycled.
Tue05Dec2023
LOC TIME HOSTNAME MODEL/REBOOT
11:58:31 h1susex h1iopsusex
11:58:54 h1susex h1susetmx
11:59:13 h1susex h1sustmsx
11:59:35 h1susex h1susetmxpi
12:08:27 h1susex ***REBOOT*** <<< power cycle following computer lock up
12:10:39 h1susex h1iopsusex
12:10:52 h1susex h1susetmx
12:11:05 h1susex h1sustmsx
12:11:18 h1susex h1susetmxpi
Functionality test was done on the corner station turbo pumps, see notes below:
Output mode cleaner tube turbo station;
Scroll pump hours: 5564.5
Turbo pump hours: 5625
Crash bearing life is at 100%
X beam manifold turbo station;
Scroll pump hours: 786.2
Turbo pump hours: 788
Crash bearing life is at 100%
Y beam manifold turbo station;
Scroll pump hours: 1880.9
Turbo pump hours: 605
Crash bearing life is at 100%
Back from maintenance at 2307 UTC. Only one SDF to note, for EX DACKILL times related to WP11562
Relocking was automated, we just had to stop to increase the COMM beatnote as planned (alog 74618). Violins are still higher but damping so far.
Ansel, Camilla
This morning we found that the 0.98 Hz DARM comb was related to the ITMX HWS camera 1 Hz sync frequency, see alog 74614 where I adjusted ITMX from 1 Hz to 5 Hz at 20:10 UTC. This was a past issue with an expected fix in place, see 44847 and FRS4559.
Ansel then suggested that the 4.98Hz DARM comb 68261 could also be from HWS as we have nominally used a 5Hz sync frequency on the ITM HWSs during O4. At 22:25UTC I adjusted both ITMX and ITMY camera sync frequencies from 5Hz to 7Hz (instructions).
In May we had the ETM HWS and camera off for 1 week (69648), but I didn't repeat the test with the ITMs. ETMY is currently off, and ETMX has a 15 Hz camera frequency.
Yep, as hypothesized:
Plots attached.
More precise comb spacings for future reference / searchability: 0.996785 Hz, 4.98425 Hz, 6.97794 Hz.
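As a quick cross-check of the relation to the camera sync rates, here is a small sketch that compares these measured spacings to the nominal 1/5/7 Hz settings and lists which harmonics of the ~1 Hz comb fall in the 20-100 Hz region discussed below (spacings from this entry; everything else is illustrative):

```python
# Measured comb spacings (from this entry) vs nominal HWS camera sync rates.
import numpy as np

measured = {1.0: 0.996785, 5.0: 4.98425, 7.0: 6.97794}   # nominal Hz -> measured Hz
for nominal, spacing in measured.items():
    offset_ppm = (spacing / nominal - 1) * 1e6
    print(f'{nominal:.0f} Hz sync: measured spacing {spacing} Hz ({offset_ppm:+.0f} ppm)')

# Harmonics of the ~1 Hz comb that fall in the 20-100 Hz region noted below
spacing = measured[1.0]
harmonics = np.arange(1, 200) * spacing
in_band = harmonics[(harmonics >= 20) & (harmonics <= 100)]
print(f'{in_band.size} harmonics between 20 and 100 Hz, e.g. {in_band[:3].round(4)} ...')
```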
Another nice find! This is good news to clean up more of the H1 spectrum. I hope this one can also be turned off in observing. Any others? Potentially there might be one at 9.5 Hz since we're also seeing a problematic 9.48 Hz comb.
This change does also affect the comb in DARM, as expected. Yesterday's (Dec 6) daily spectrum shows that the near-5Hz comb has been replaced by a near-7Hz comb. (Figure attached, see points marked with blue squares.)
Two combs being related to the HWS of course raises suspicion that there could be more, so I'll also mention a couple of things I checked that are probably *not* due to HWS.
Recent monthly Fscan spectra show a comb at multiples of 0.996785 Hz, in approximately the region 20-100 Hz. Subsequent investigation shows that it probably appeared between Sept 21 and Sept 24, although the exact date is difficult to tell.
Further details:
Note that there was a comb in O3 with spacing 0.996806 Hz (which, on inspection of the O3 spectrum, seems to have a double-peak structure). Although they are very close, the new comb does not precisely align with the O3 comb, nor with its second peak.
Ansel pointed out that on 20th September (72993) I adjusted the HWS ITMX camera frame rate from 5 Hz to 1 Hz as the HWS SLED had decayed. I would expect the pixel brightness to be larger for ITMX compared to the amount of SLED power, but it's been lower than ITMY even with the slower camera sync frequency (ITMY was 5 Hz, ITMX 1 Hz); plot attached.
Today (20:10 UTC) I adjusted the HWS ITMX frame rate back from 1 Hz to 5 Hz. We've previously seen coupling from the HWS camera, see 44847, but expected we'd fixed the issue by using external power supplies for the cameras (FRS4559). We could discuss turning all HWS off during observing if this is the cause of the comb.
Update: H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ sees this comb very clearly, and shows that it appeared part way through the day on Sept 20th. Will try to identify start time more clearly using this magnetometer channel.
Looks like it's gone in the magnetometer channel! Pre/post spectra attached.
Thank you Camilla for helping to mitigate this comb. I wonder if there are other combs that are being caused by the HWS system / power supplies. Can we turn off all HWS if they are not used during observing? We may find this would solve other problems in addition to this one. Thanks!