X End Station Measurement:
During the Tuesday maintenance, the PCAL team (Rick Savage, Dana Jones, and Tony Sanchez) went to End X with the Working Standard Hanford, aka WSH (PS4), and took an end station measurement.
The End X station measurements were carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and were completed by 11:45 am.
Note:
After the normal measurement, we did a few auxiliary measurements.
LIGO-T1500062-v15 Measurement Log
The first thing we did was take a picture of the beam spot, before anything was touched. I then put the target aperture cap on the RX sphere to see how far off center the beam is.
Martel:
The Martel voltage source injects a voltage into the PCAL chassis's Input 1 channel. We recorded the GPS times at which -4.000 V, -2.000 V, and 0.000 V were applied to the channel; this can be seen in Martel_Voltage_Test.png. We also measured the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the above document.
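For reference, the ADC conversion factor can be estimated as the slope of a linear fit of the recorded ADC counts against the known applied Martel voltages. A minimal sketch of that arithmetic (the count values here are placeholders, not the actual analysis code):

import numpy as np

# Known Martel output voltages applied to PCAL chassis Input 1
applied_volts = np.array([-4.000, -2.000, 0.000])

# Placeholder: mean ADC counts read back over each GPS stretch
# (these numbers are made up for illustration).
adc_counts = np.array([-26214.0, -13107.0, 0.0])

# ADC conversion factor = slope of counts vs. volts from a least-squares
# linear fit; the intercept gives any residual offset.
slope, offset = np.polyfit(applied_volts, adc_counts, 1)
print(f"ADC conversion factor: {slope:.3f} counts/V, offset: {offset:.3f} counts")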
Plots taken while the Working Standard (PS4) is in the Transmitter Module, with the inner beam blocked, then the outer beam blocked, followed by the background measurement: WS_at_TX.png.
The inner, outer, and background measurements while the WS is in the Receiver Module: WS_at_RX.png.
The inner, outer, and background measurements while the RX sphere is in the RX enclosure, which is our nominal setup without the WS in the beam path at all: TX_RX.png.
The last picture is of the beam spot after we had finished the measurement. Note that this beam spot is only ONE beam, but the beam positions were not adjusted during the measurement.
All of this data is then used to generate LHO_EndX_PD_ReportV2.pdf, which is attached and is a work in progress in the form of a living document. This document was created with PCALPARAMS['WHG'] = 0.916985 # PS4_PS5 as of 2023/04/18,
and not the latest number [PCALPARAMS['WHG'] = 0.91536 # 12/05/2023].
But I did run the report with the latest number just for my own curiosity and marked it down as a "Monday End Station measurement" performed on 2023-12-04, so it sits right behind the real measurement on the 5th; it doesn't vary by much: LHO_ENDX_PD_TEST_REPORT.pdf
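For context, the two WHG values above differ by only about 0.18%. A trivial comparison, assuming the PCALPARAMS dictionary structure implied by the comment lines quoted above:

# Assumed structure, mirroring the comment syntax quoted above.
PCALPARAMS = {}
PCALPARAMS['WHG'] = 0.916985   # PS4_PS5 as of 2023/04/18 (used in the attached report)
latest_whg = 0.91536           # PS4_PS5 as of 12/05/2023 (used in the test report)

# Fractional difference between the two values (~0.18%)
frac_diff = (PCALPARAMS['WHG'] - latest_whg) / PCALPARAMS['WHG']
print(f"Fractional change in WHG: {frac_diff:.4%}")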
All of this data and analysis has been committed to the SVN:
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/
This is where the measurement normally ends, but instead we took a few more interesting measurements.
Auxiliary Measurements:
The RX sphere is in its nominal location and the WS was not in the beam path at all.
Opened the shutter at GPS time 1385839530.
RxPD = 0.2234 before moving the beam block.
Moved the beam block, allowing both beams to hit the RX sphere.
Start time 1385839590
End time 1385839890
Shuttered both beams for a background measurement.
Start time 1385839945
RxPD = 4.30966e-5
End time 1385840005
The OFSPD_OFFSET was set to 6.0. This should be the maximum power we reach while operating normally.
Start time 1385840225
RxPD = 0.805388
TxPD = 0.813297
End time 1385840525
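If someone wants to pull the RX/TX photodiode data for the stretches logged above, a gwpy sketch along these lines should work; the channel names are my guesses for illustration and are not verified against the DAQ:

from gwpy.timeseries import TimeSeries

# GPS windows from the log above (both beams on the RX sphere, and the
# OFSPD_OFFSET = 6.0 stretch).
segments = {
    "both_beams": (1385839590, 1385839890),
    "ofs_offset_6": (1385840225, 1385840525),
}

# Channel names are assumptions for illustration only.
rx_chan = "H1:CAL-PCALX_RX_PD_OUT_DQ"
tx_chan = "H1:CAL-PCALX_TX_PD_OUT_DQ"

for label, (start, end) in segments.items():
    rx = TimeSeries.get(rx_chan, start, end)
    tx = TimeSeries.get(tx_chan, start, end)
    print(f"{label}: mean RxPD = {rx.mean().value:.6f}, mean TxPD = {tx.mean().value:.6f}")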
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) front-back responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems (a rough sketch of the ratio computation follows the file list below):
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf
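A rough sketch of how the ratio quantities behind these plots could be formed; this is only schematic (background-subtracted ratio of the two power standards' readings), not the actual analysis code in the SVN:

import numpy as np

def responsivity_ratio(ws_volts, gs_volts, ws_bg, gs_bg):
    """Schematic responsivity ratio: background-subtracted WS/GS voltage ratio.

    Not the real analysis code; just illustrates the kind of quantity
    plotted in the raw_ratios / avg_ratios PDFs.
    """
    ws = np.asarray(ws_volts) - np.mean(ws_bg)
    gs = np.asarray(gs_volts) - np.mean(gs_bg)
    raw_ratios = ws / gs
    return raw_ratios, raw_ratios.mean(), raw_ratios.std()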
Obligatory Back-Front PS4/PS5 Responsivity Ratio:
A WSH/GSHL (PS4/PS5) back-front responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems:
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf
This adventure has been brought to you by Rick Savage, Dana Jones & Tony Sanchez.
Two new measurements were added to the SVN, one on the 3rd of Dec and another on the 4th.
The one on the 4th, xtD20231204, actually happened on Dec 5th and uses much of the data from Dec 5th, except that start times 7 and 8 in the config.py file were changed to reflect the time that we had both beams on the RX sensor instead of only the inner or outer beam.
This resulted in an increase in the magnitude of the Rx calibration. More information can be found on the SVN.
The one on the 3rd, xtD20231203, also actually happened on Dec 5th and uses much of the data from Dec 5th, except that start times 7 and 8 in the config.py file were changed to reflect the time that we had both beams on the RX sensor AND the OFS offset was set to 6.0.
This also resulted in an increase in the magnitude of the Rx calibration. More information can be found on the SVN.
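A hypothetical sketch of the kind of config.py edit described above; the real file layout in the SVN may differ, and the dictionary structure and commented-out values here are assumptions:

# Original (Dec 5 measurement): entries 7 and 8 were the single-beam stretches.
# starttimes[7] = <inner-beam-only GPS time>
# starttimes[8] = <outer-beam-only GPS time>

# xtD20231204 / xtD20231203 variants: point entries 7 and 8 at the
# both-beams-on-RX-sphere stretch (and, for xtD20231203, the OFS offset = 6.0 stretch).
starttimes = {}
starttimes[7] = 1385839590   # both beams on RX sphere (from the log above)
starttimes[8] = 1385840225   # OFS offset = 6.0 stretch (from the log above)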
We installed the new MICHFF FM8 filter that Gabriele fit (74602, 74595). This looks to do a great job suppressing the MICH excitation; see the pink trace attached. There is a little less suppression from 150-200 Hz, so we'll want to run some coherences with NLN data to see if we care about MICH in this region. Saved in the safe and observe SDF files and in ISC_LOCK.
Left the MICHFF off for 5 minutes, 22:05:45 to 22:10:45 UTC.
Tagging CAL, as this may have an effect on the 17.6 Hz line (74259).
Naoki and I took some no-squeezing time from 20:34 to 20:42 UTC. We see 4.2 dB at 1 kHz. Plot attached.
We tried to tune the OPO temperature with the aim of reducing the yellow 300 Hz BLRMS; H1:SQZ-OPO_TEC_SETTEMP was changed from 31.49 to 31.485 deg. The SQZ_ANG_ADJUST guardian adjusted the SQZ angle as we adjusted the OPO temperature, which is nice.
I took a look at a few times that the PSL dust monitors have alarmed over the past two weeks, looking to see if there was any correlation between the wind and the corner station dust monitors, particularly the PSL ones. I found a few examples where the winds picked up and then the dust monitors saw their counts increase (ref1, ref2, ref3, ref5), but only in ref1, and maybe ref5, do we see the dust propagate out to the other dust monitors. In ref1 the wind also further increases after the dust counts increase, and there is not another dust count jump from it. I also saw a few times where the dust counts increased without any real wind changes (ref4, ref6). Ref5 seems to show some dust moving around the LVEA with the wind?
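For anyone wanting to repeat this kind of check, here is a sketch of pulling a wind channel and a dust-monitor channel over the same window and plotting them together; the channel names and time window are placeholders, not the ones used for the attached refs:

from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

# Channel names are guesses for illustration; substitute the actual
# PEM wind and dust-monitor channels when running this.
wind_chan = "H1:PEM-CS_WIND_ROOF_WEATHER_MPH"
dust_chan = "H1:PEM-CS_DUST_LAB1_300NM_PCF"

start, end = "2023-11-25 00:00", "2023-11-25 12:00"  # example window
wind = TimeSeries.get(wind_chan, start, end)
dust = TimeSeries.get(dust_chan, start, end)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(wind.times.value, wind.value)
ax1.set_ylabel("wind [mph]")
ax2.plot(dust.times.value, dust.value)
ax2.set_ylabel("dust counts")
ax2.set_xlabel("GPS time [s]")
fig.savefig("wind_vs_dust.png")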
Start:
PST: 2023-12-06 11:28:18.787126 PST
UTC: 2023-12-06 19:28:18.787126 UTC
GPS: 1385926116.787126
End:
PST: 2023-12-06 11:50:26.331166 PST
UTC: 2023-12-06 19:50:26.331166 UTC
GPS: 1385927444.331166
Files:
2023-12-06 19:50:26,172 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,191 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,204 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,218 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231206T192820Z.hdf5
2023-12-06 19:50:26,230 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231206T192820Z.hdf5
ICE default IO error handler doing an exit(), pid = 354122, errno = 32
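To take a quick look inside one of these measurement files, something like the h5py sketch below works; it just walks the file and lists whatever groups and datasets are present, without assuming anything about the internal layout:

import h5py

# Path taken from the log lines above.
path = ("/ligo/groups/cal/H1/measurements/DARMOLG_SS/"
        "DARMOLG_SS_20231206T192820Z.hdf5")

# Walk the file and print every group/dataset name and shape.
with h5py.File(path, "r") as f:
    def show(name, obj):
        shape = getattr(obj, "shape", "")
        print(name, shape)
    f.visititems(show)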
LLO and H1 have dropped out of Observing for planned calibration and commissioning time, set to last until 1500 PT (2300 UTC).
Back to Observing at 2304 UTC
Wed Dec 06 10:10:05 2023 INFO: Fill completed in 10min 1secs
Gerardo confirmed a good fill curbside.
I took undamped spectra (amplitude spectral densities) of all the suspensions today to check for rubbing or mechanical/electrical issues. The latest results (orange trace) have been compared with the ones taken in 2020 (blue trace, LHO alog 56019, which were judged healthy), and the dotted line is the BOSEM sensor noise. Given below are the results for some of them, while I am still processing the others (a rough sketch of this kind of comparison is included after the list):
For now I have just glanced through the plots and will look into the details later.
Quads - ETMX, ETMY, ITMX, ITMY: They look healthy to me, although I need to cross-check the tall peaks of ETMX (for the F1, F2, F3 sensors).
HSTS - MC1, MC2, MC3, PR2, PRM, SR2, SRM:
The MC1 RT BOSEM (also the L and Y DOFs) looks to be a bit worse than previous measurements; see page 5 of the MC1 plots. Later, I asked Oli to take transfer function measurements on MC1 and they look healthy; Oli will post the plots later. Perhaps the RT BOSEM on MC1 is degrading with time.
On MC2, sensor T3 (MC2 page 3) shows a broad peak at approximately 15 Hz (this also needs to be cross-checked).
HLTS - PR3, SR3: The results look better than the ones taken in 2020; however, I still need to cross-check the resonant peaks.
The results for the other remaining suspensions will be posted after they are processed.
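A rough sketch of the kind of undamped-spectrum comparison described above, using gwpy; the channel name and GPS stretches are placeholders for illustration, not the ones used for the attached plots:

from gwpy.timeseries import TimeSeries

# Placeholder channel and times; swap in the actual OSEM channel and
# the undamped measurement stretches when running this.
chan = "H1:SUS-MC1_M1_OSEMINF_RT_OUT_DQ"
new_start, new_end = 1385900000, 1385900600   # placeholder 2023 undamped stretch
ref_start, ref_end = 1264000000, 1264000600   # placeholder 2020 reference stretch

new = TimeSeries.get(chan, new_start, new_end)
ref = TimeSeries.get(chan, ref_start, ref_end)

# Amplitude spectral densities with matching resolution, then overlay.
asd_new = new.asd(fftlength=64, overlap=32)
asd_ref = ref.asd(fftlength=64, overlap=32)

plot = asd_new.plot(label="2023 (undamped)", color="orange")
ax = plot.gca()
ax.plot(asd_ref, label="2020 reference", color="blue")
ax.set_xlim(0.1, 50)
ax.set_xscale("log")
ax.legend()
plot.savefig("mc1_m1_rt_asd_comparison.png")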
Having looked through all the undamped SUS spectra, of most concern is the MC1 M1 stage RT channel BOSEM. I would suspect a fault in the read-back electronics chain (e.g. the ADC, AA chassis, or satellite box) rather than a fault in-chamber with the BOSEM. If this elevated noise has not been observed in subsequent measurements, then it could be intermittent and therefore tricky to track down, so I would recommend monitoring the MC1 M1 stage RT channel for elevated noise.
TITLE: 12/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.84 μm/s
QUICK SUMMARY: Range has been slowly dropping over the last hour; unsure why so far. Violins are still slowly decreasing.
The range seems to have recovered, but has been more glitchy. The dip in range seems to correlate with noise seen in our DARM BLRMS between 10-60 Hz, and maybe a little into the 60-100 Hz band as well. This makes me think that it's the same noise that we've been seeing. I don't see anything in the 30-100 mHz Z SEI BLRMS, though.
TITLE: 12/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
All the action was in the last 20min! H1 had a lockloss. Microseism leveled off over the shift (but it's still OVER the 95th percentile).
LOG:
Smooth running first half of the shift with H1 locked for 5+hrs. Microseism continues to increase (above the 95th percentile).
I have created a script that will grab the current OPTICALIGN_{P,Y}_OFFSET values for our suspensions and use them to replace the OPTICALIGN_{P,Y}_OFFSET values in their safe.snap files. This gives us a quick and easy way to update these values to more recent offset configurations.
We tested the script during Tuesday maintenance for the RMs and OMs and it worked well (txt showing input and output), and after conversation with Jenne and TJ, we've decided to have it run at the end of every initial alignment, so we will be adding a new state to INIT_ALIGN that runs this script for every suspension (except the SQZ optics for now).
For manual usage, the script is called update_sus_safesnap.py and can be found at $(USERAPPS)/isc/h1/scripts/update_sus_safesnap.py. It can be run without arguments to update the safe offset values for all of the optics in the default groups (IMs, RMs, MCs, PRs, BSs, SRs, OMs, TMSs, ETMs, ITMs), or the exact optic groups or individual optics can be specified using -o [OPTICS].
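For reference, here is a rough sketch of the idea behind the script; this is not the actual update_sus_safesnap.py, and the safe.snap line format and EPICS access shown here are assumptions:

from epics import caget  # pyepics

def update_safesnap(optic, snap_path, ifo="H1"):
    """Rough sketch (not the real update_sus_safesnap.py): copy the live
    OPTICALIGN_{P,Y}_OFFSET values into the optic's safe.snap file."""
    # safe.snap (BURT) lines are assumed to look roughly like:
    #   H1:SUS-PRM_M1_OPTICALIGN_P_OFFSET 1 -123.456
    new_vals = {}
    for dof in ("P", "Y"):
        pv = f"{ifo}:SUS-{optic}_M1_OPTICALIGN_{dof}_OFFSET"
        new_vals[pv] = caget(pv)

    with open(snap_path) as f:
        lines = f.readlines()

    out = []
    for line in lines:
        fields = line.split()
        # Replace the stored value on any line whose record name matches.
        if fields and fields[0] in new_vals:
            fields[-1] = str(new_vals[fields[0]])
            line = " ".join(fields) + "\n"
        out.append(line)

    with open(snap_path, "w") as f:
        f.writelines(out)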
TITLE: 12/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.12 μm/s
Secondary useism: 0.78 μm/s
QUICK SUMMARY:
H1 has been Observing for 1+ hrs post-maintenance, and this is with microseism slowly continuing its climb above the 95th percentile; there's also a slight increase in breezes. Range is currently beginning to touch 150 Mpc (and DARM looks better than it did over the rough weekend, when there was a DARM bump from 10-55 Hz).
TITLE: 12/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Maintenance recovery was straightforward and fully automated. Only one pair of SDF diffs from the maintenance period.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:56 | FAC | Randy | OSB | n | Move Genie lift | 16:56 |
15:57 | FAC | Tyler | EX | n | Look at chiller 1 | 17:44 |
16:05 | SAF | TJ | LVEA | N | Laser hazard transition | 16:20 |
16:07 | VAC | Jordan | Endy | N | Turbopump | 17:08 |
16:11 | CAL | Tony | PCAL lab | local | Prep for measurement | 16:18 |
16:12 | FAC | Karen | Endy | N | Tech clean | 17:27 |
16:12 | FAC | Chris | Ends | N | FAMIS checks | 19:51 |
16:14 | FAC | Cindi | FCES | N | Tech clean | 17:47 |
16:17 | EE | Ken | EndX | N | EE work, lights | 19:56 |
16:20 | TCS | Camilla, TJ | LVEA | Y | Inv TCSY chiller lines | 16:33
16:21 | PSL | RyanS | PSL anteroom | LOCAL | Check dustmon | 16:47 |
16:30 | SUS | Jason | LVEA | Y | OPLEV BS work | 17:26 |
16:31 | VAC | Janos, Travis | Site | N | Turbo bellow hoses | 18:02 |
16:38 | SUS | Dave | remote | n | Restart SUS EX | 20:16 |
16:41 | FAC | Chris + Pest | Site | N | Pest contractor walkthroughs | 17:47 |
16:46 | TCS | Camilla, TJ | Mech room | N | TCSY chiller | 17:07 |
16:47 | PEM | Robert | LVEA | Y | Viewport work, MC3 camera offline | 20:57 |
16:49 | PEM | Fil | EndX then Y | N | PEM cabling | 19:56 |
16:53 | CAL | RyanS | CR | N | PSL work, IMC offline | 17:32 |
17:04 | SUS | Rahul, Oli | CR | N | Undamped SUS spectra | 20:19 |
17:12 | VAC | Gerardo | LVEA | - | Turbo pump testing | 19:37 |
17:23 | TCS | Camilla | LVEA | - | Turn on CO2Y | 17:29 |
17:26 | CDS | Fernando, Marc | CER | n | Power supply inventory | 18:49 |
17:28 | PCAL | Tony, Dana, Rick | EX | YES | PCAL measurement | 19:55 |
17:37 | SQZ | Daniel | LVEA-SQZT | YES | Look on table at cabling situation | 18:55 |
17:43 | VAC | Jordan | EY | n | Turning off turbo, scroll pump test | 18:13 |
17:45 | FAC | Tyler | LVEA | - | 3IFO check | 17:52 |
17:52 | FAC | Tyler | EX | n | Mech room | 19:25 |
18:19 | FAC | Karen, Kim | LVEA | - | Tech clean | 19:43 |
18:41 | SEI | Jim | CR | n | ITM BSC meas. | 19:36 |
18:51 | CC | Ryan C | LVEA, EX, EY | - | Dust mon checks | 20:14 |
19:37 | VAC | Gerardo | FCES | n | Purge air measurement | 20:14 |
20:48 | OPS | Camilla | LVEA | Y | Sweep | 20:59 |
21:19 | PCAL | Tony | Pcal lab | local | PCAL meas | 00:49 |
21:36 | ISC | TJ, Ryan S | LVEA | Y | Touch up COMM beatnote | 22:17 |
22:28 | CC | RyanC | FCES | n | Dust mon check | 22:45 |
WP11562 Reduce SUS EX SWWD countdown period
Dave, Erik, Jonathan, TJ:
As a follow-on from the EX HWWD trip early Saturday morning, we reduced the time for the SUS SWWD to issue its DACKILL from 20 minutes to 15 minutes.
This will mean that the SUS SWWD will trip 5 minutes before the HWWD.
The SWWD timers are hard coded (by design) in the IOP models. I created a new h1iopsusex with the second timer changed from 900 seconds (15 mins) to 600 seconds (10 mins). Adding this to the first timer of 5 mins gives us the required 15 mins.
In theory this just needed a restart of the models on h1susex, but it did not go well.
The models were stopped with 'rtcds stop --all'.
The h1iopsusex model was started, and I verified the timer change was installed (it was).
I restarted the models h1susetmx, h1sustmsx, and h1susetmxpi. So far so good.
I did a check with 'rtcds status' and was just about to log out when h1susex completely locked up.
h1iopsusex started at 11:58:46, lockup was at 12:00:50 (2min 4sec later)
There was no recourse except to remotely power cycle h1susex via its IPMI port. It was fenced from the dolphin switch before power was cycled.
Tue05Dec2023
LOC TIME HOSTNAME MODEL/REBOOT
11:58:31 h1susex h1iopsusex
11:58:54 h1susex h1susetmx
11:59:13 h1susex h1sustmsx
11:59:35 h1susex h1susetmxpi
12:08:27 h1susex ***REBOOT*** <<< power cycle following computer lock up
12:10:39 h1susex h1iopsusex
12:10:52 h1susex h1susetmx
12:11:05 h1susex h1sustmsx
12:11:18 h1susex h1susetmxpi
Ansel, Camilla
This morning we found that the 0.98 Hz DARM comb was related to the ITMX HWS camera 1 Hz sync frequency; see alog 74614, where I adjusted ITMX from 1 Hz to 5 Hz at 20:10 UTC. This was a past issue with an expected fix in place (44847, FRS4559).
Ansel then suggested that the 4.98 Hz DARM comb (68261) could also be from the HWS, as we have nominally used a 5 Hz sync frequency on the ITM HWSs during O4. At 22:25 UTC I adjusted both the ITMX and ITMY camera sync frequencies from 5 Hz to 7 Hz (instructions).
In May we had the ETM HWS and camera off for 1 week (69648), but I didn't repeat the test with the ITMs; ETMY is currently off, and ETMX has a 15 Hz camera frequency.
Yep, as hypothesized:
Plots attached.
More precise comb spacings for future reference / searchability: 0.996785 Hz, 4.98425 Hz, 6.97794 Hz.
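As a quick arithmetic check on those spacings (my own calculation, not from the attached plots), each measured spacing divided by its nominal camera sync frequency gives nearly the same scale factor, as you would expect if the combs share a common clock:

# Measured comb spacing over nominal HWS camera sync frequency.
measured = {1: 0.996785, 5: 4.98425, 7: 6.97794}
for nominal, spacing in measured.items():
    print(f"{nominal} Hz sync: spacing/nominal = {spacing / nominal:.6f}")
# prints ~0.99679, ~0.99685, ~0.99685, i.e. consistent scale factors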
Another nice find! This is good news for cleaning up more of the H1 spectrum. I hope this one can also be turned off in observing. Any others? Potentially there might be one at 9.5 Hz, since we're also seeing a problematic 9.48 Hz comb.
This change does also affect the comb in DARM, as expected. Yesterday's (Dec 6) daily spectrum shows that the near-5Hz comb has been replaced by a near-7Hz comb. (Figure attached, see points marked with blue squares.)
Two combs being related to the HWS of course raises suspicion that there could be more, so I'll also mention a couple of things I checked that are probably *not* due to HWS.