With the Monday holiday and no calibration run last Saturday, I ran a calibration this morning to start off the Wednesday commissioning session.
Measurement NOTES:
For some reason the report generation is still pulling a very old report from March 2024 as the reference; I suspect this is an environment error. I regenerated the report and this time it pulled the last valid report, which makes the comparisons much more sensible.
TITLE: 09/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1 had issues last night with the SQZ (as noted by Tony in his OWL alogs), but H1 has now been locked/thermalized for just over 17 hrs 37 min. I mention thermalization because the plan is for a calibration suite of measurements to be run at 8 am, with commissioning right after the measurement today (0830-1130 PDT).
On nuc23, the DARM 0.5-2 kHz bands have been recovering from the SQZ issue over the past 2 hours and have been back at normal levels for the last hour or so (see attached).
At 2:43 AM (local time), H1 called for assistance.
I noticed that it was a SQZ issue, specifically the SHG "PZT was out of range" error on the SQZ_SHG Guardian.
It was bouncing from Locking to Locked, then Scanning, etc.
I tried INIT-ing the SQZ_SHG node, then SQZ_MANAGER.
Then I took all SQZ Guardian nodes to DOWN except SQZ_PMC and SQZ_SHG and tried to troubleshoot the SQZ_SHG directly.
Looking at the SQZ troubleshooting guide, there wasn't a section for SQZ_SHG troubleshooting, so I searched the alog for "PZT Out of Range" with no hits.
I tried to adjust the OPO temp to maximize H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT. But changing the OPO temp had no impact.
This is when I started to realize it might take me a while to figure out what was going on; since it was too late to call a SQZ expert, I decided to just take us to No Squeezing.
So I ran the no-squeezing script: noconda python switch_nom_sqz_states.py without.
Sadly, I did accept the SDFs (and these too) because I thought they were causing an SDF issue, but it may have been that SQZ_ANG_ADJUST needed to be at ADJUST_SQZ_AND_ADF instead of DOWN for the SQZr to be considered ready for OBSERVING.
Made it back to OBSERVING at 11:30:44 UTC
Now that we have made it back to OBSERVING, albeit without SQZing, I'm going over my troubleshooting with a finer-toothed comb.
I went looking for Sitemap > SQZ > SQZT0 > SHG; that last SHG button was a bit difficult to find when your eyes are still crossed.
I then took a screenshot of the SHG screen before any changes.
I then locked the SQZ_PMC and the SQZ_SHG. The SQZ_SHG was cycling between Locked and Unlocked. Dropped from Observing at 11:57:53 UTC.
Then while trying to maximize H1:SQZ-SHG_GR_DC_POWERMON I changed H1:SQZ-SHG_TEC_SETTEMP from 35.89 to 35.61.
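For illustration only, a rough sketch of this kind of setpoint-vs-power optimization using pyepics; the channel names are the ones above, but the scan range, step size, and settle time are made up, and in practice the adjustment was done by hand:

```python
# Hypothetical sketch: step the SHG TEC setpoint and watch the green power monitor.
import time
import numpy as np
from epics import caget, caput

best_temp, best_power = None, -np.inf
for temp in np.arange(35.90, 35.50, -0.02):      # illustrative scan range/step
    caput('H1:SQZ-SHG_TEC_SETTEMP', temp)
    time.sleep(30)                                # illustrative settling time
    power = caget('H1:SQZ-SHG_GR_DC_POWERMON')
    if power > best_power:
        best_temp, best_power = temp, power

caput('H1:SQZ-SHG_TEC_SETTEMP', best_temp)        # leave the TEC at the best value
print(f'Best setpoint {best_temp:.2f} C -> {best_power:.4f}')
```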
After this I ran the squeezing script, noconda python switch_nom_sqz_states.py with, to try to get the SQZr squeezing again.
SQZ_MAN was having an issue with the SQZ_FC losing lock at Transition_IR_LOCKIN, so I re-touched up the OPO temp.
Success! The SQZr Is SQUOZE!!
Now I needed to accept all the SDFs again (and these too) because I didn't follow the directions when I ran the initial no-squeezing script.
Observing reached again at 12:51:32 UTC.
I edited SQZ_ANG_ADJUST, which had a conditional that read sqzparams.use_sqz_angle_adjust to set the nominal state (this was stopping the script from running correctly), so that it now just states the nominal state directly. There is now a note in sqzparams.py to change the nominal state in SQZ_ANG_ADJUST and in sqz/h1/scripts/switch_nom_sqz_states.py if the flag is changed.
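As a rough illustration of the change (the real Guardian code will differ; this just mirrors the state names mentioned above):

```python
# Before (sketch): nominal state chosen from a flag in sqzparams.py, which
# confused switch_nom_sqz_states.py when it tried to swap nominal states.
#   import sqzparams
#   nominal = 'ADJUST_SQZ_AND_ADF' if sqzparams.use_sqz_angle_adjust else 'DOWN'

# After (sketch): the nominal state is stated directly.  If
# sqzparams.use_sqz_angle_adjust is ever changed, this line and
# sqz/h1/scripts/switch_nom_sqz_states.py need to be updated by hand.
nominal = 'ADJUST_SQZ_AND_ADF'
```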
Sheila, Camilla
This morning the SHG PZT was still around 5 V, so we changed the Offset, Min, and Max scan ranges to force it to lock at the peak closer to 50 V; see attached for old vs. new values and the SDFs accepted. We checked that it scanned over the correct range by setting the SQZ_SHG guardian to DOWN and manually scanning the SHG PZT.
Then I copied what Tony did last night and further optimized the SHG temperature to bring the power up from 96mW to 106mW, see attached. Thanks Tony!
TITLE: 09/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Very quiet shift with H1 locked and observing throughout; lock stretch is currently up to 8 hours. A M5.3 EQ out of Japan rolled through about an hour ago but didn't even put us into EQ mode.
Bi-weekly histograms of locking state times.
Histogram of time spent at NLN or higher:
Length of data: 22
Max duration: 2613 min
Average: 770.5 min
% below 5 minutes: 0.0
Between 2025-08-20 00:27:34 and 2025-09-03 00:27:34
PowerUp histogram:
Length of data: 30
Max duration: 39 min
Average: 25.0 min
% above 30 minutes: 73.3
Between 2025-08-20 00:45:04 and 2025-09-03 00:45:04
DRMI histogram:
Length of data: 51
Max duration: 49 min
Average: 11.6 min
% above 5 minutes: 54.9
Between 2025-08-20 00:46:03 and 2025-09-03 00:46:03
ALS histogram:
Length of data: 53
Max duration: 30 min
Average: 5.5 min
% above 5 minutes: 32.1
Between 2025-08-20 00:46:40 and 2025-09-03 00:46:40
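For reference, the quantities quoted in each histogram block reduce to a few lines of statistics over the list of segment durations; a small sketch (the durations below are placeholders, not real data):

```python
# Sketch: summary statistics for a list of lock-stretch durations (minutes).
import numpy as np

def summarize(durations_min, threshold_min, above=True):
    d = np.asarray(durations_min, dtype=float)
    frac = np.mean(d > threshold_min) if above else np.mean(d < threshold_min)
    print(f'Length of data: {len(d)}')
    print(f'Max duration: {d.max():.0f} min')
    print(f'Average: {d.mean():.1f} min')
    print(f"% {'above' if above else 'below'} {threshold_min} minutes: {100 * frac:.1f}")

# Example with made-up numbers:
summarize([12, 45, 300, 2613, 90], threshold_min=5, above=False)
```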
TITLE: 09/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: A fair amount of work done even though it was looking to be a slower maintenance day. We managed to stay locked a few hours into it while staying in light maintenance mode. We relocked automatically with an initial alignment, then lost lock 39 seconds later from an ETMX glitch. On the way up the second time, Elenna opportunistically tuned up PRMI ASC since it had shown some signs of non-convergence (alog86694). We have now been locked for 2.5 hours, aside from a small dropout when the SQZr unlocked.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | End Time |
|---|---|---|---|---|---|---|
| 15:30 | CDS | Fil, Erik | EX | n | HWWD cable | 16:10 |
| 15:31 | FAC | Erik | Ends | n | Chiller alarms | 16:03 |
| 15:34 | FAC | Kim, Nelly | mids, ends | n | Mats | 15:35 |
| 15:34 | FAC | Randy | CS | n | Forklift from VPW to LSB | 16:28 |
| 15:35 | FAC | Tyler | vertex | n | Fire hydrant | 15:36 |
| 15:35 | VAC | Norco | CP7 (EY) | n | LN2 delivery | 16:54 |
| 15:36 | VAC | Travis | MX | n | Pump work | 20:03 |
| 15:37 | FAC | Tyler | LVEA | n | Yellow tape | 15:38 |
| 15:37 | FAC | Someone | Staging | n | Rock moving | 19:35 |
| 15:38 | FAC | - | site | n | Fire pumps running | 19:33 |
| 15:43 | SQZ | Sheila | CR | n | SQZ meas. | 16:34 |
| 15:55 | VAC | Janos | MX | n | Joining Travis for pump work | 20:03 |
| 15:56 | SEI | Jim | EX, EY | n | Wind fence inspection | 17:53 |
| 15:58 | FAC | Kim, Nelly | LVEA | n | Tech clean | 17:04 |
| 16:12 | CDS | Fil | LVEA | n | SUS56 racks and near HAM3 racks, cable and chassis prep | 17:16 |
| 16:15 | FAC | Randy, Tyler | OSB rec., LVEA | n | Forklift into OSB receiving and then moving into LVEA (craning?) | 16:33 |
| 16:27 | VAC | Gerardo, Jordan, Anna | LVEA | n | Taking measurements near HAM6 (Jordan, Anna out 1659) | 17:17 |
| 16:34 | FAC | Chris | LVEA, Y arm, X arm | n | Pest control escort | 17:52 |
| 16:37 | PCAL | Rick, Tony | PCAL lab | local | PCAL lab meas. | 20:23 |
| 16:42 | ISC | Daniel | LVEA | n | AS_C whitening cable | 18:32 |
| 16:53 | PSL | Jason | CR | n | Ref cav remote alignment | 17:11 |
| 17:01 | - | Richard | LVEA | n | LVEA check | 17:26 |
| 17:05 | FAC | Kim, Nelly | EY, EX | n | Tech clean | 18:24 |
| 17:21 | PEM | Robert | LVEA | n | Shaker setup | 19:35 |
| 17:32 | CDS | Marc | MY, EY | n | Checking on power supplies | 18:05 |
| 17:51 | SYS | Betsy | LVEA | n | Grabbing dog clamps | 18:08 |
| 17:53 | FAC | Chris | LVEA | n | FAMIS checks | 18:26 |
| 18:08 | SYS | Betsy | Opt Lab | - | Parts | 18:38 |
| 18:24 | FAC | Kim, Nelly | FCES | n | Tech clean | 18:50 |
| 18:27 | FAC | Chris | Ends, Mids | n | FAMIS checks | 18:54 |
| 18:35 | SQZ | Camilla, Fil | LVEA - SQZT7 | LOCAL | Noise hunt | 19:05 |
| 18:42 | SYS | Mitchell | LVEA - W | n | Parts hunt | 18:53 |
| 18:47 | - | Matt | LVEA | n | Sweep | 19:05 |
| 18:56 | VAC | Gerardo, Jordan | MY | n | Parts hunt | 19:35 |
| 19:46 | PEM | Robert | LVEA | n | Wrapping up | 20:03 |
| 21:38 | PCAL | Francisco | PCAL lab | - | Packing up to ship out | 22:33 |
| 22:12 | SYS | Betsy | Opt Lab | n | Parts | 22:40 |
| 22:51 | ISC | Camilla | Opt Lab | n | Parts hunt | 23:17 |
TITLE: 09/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: H1 has been locked for 2 hours.
FAMIS 31101
The fire alarm yesterday briefly shut off air handlers, so we see some temperature changes in several places that appear to have all settled back to nominal. Also, Jason's PMC, RefCav, and ISS adjustments this morning are seen clearly. No other major events of note in the past week.
FAMIS 26684
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
TonyS and RickS
Today, we measured the transmissivity of one of the SPI HR mirrors and the AR surface reflectivity of one of the SPI beamsplitters, as requested by Jeff Kissel. The reflectivities of the beamsplitters were measured earlier, see this aLog entry.
HR mirror transmissivity: We used a set of Gentec power meters procured for the Pcal effort to measure an SPI HR mirror (see attached photo for details). Aligned at 45 deg AOI with p-polarized light, the power transmissivity (P_trans/P_inc) was measured to be 23e-6 W / 567e-3 W = 4.1e-5 (0.004%).
BS AR surface reflectivity: The reflectivity of the AR surface of a 50/50 beamsplitter was measured in two ways, both after mounting the BS with the AR surface closest to the input beam.
1. Using the PS5 Pcal power sensor and blocking the beam reflected from the 50% reflectivity surface near the beamsplitter, the output voltage (after zeroing the no-laser-light level) was -0.387 mV. With both the main reflected beam (~50% of the incident light) and the beam reflected from the AR surface, the voltage was -1.3699 V. Doubling that voltage to estimate what the full incident power would have given, the estimated AR reflectivity is 0.387e-3 / (2 x 1.3699) = 1.4e-4 or 0.014%.
2. Using power meters to measure the power in the beam reflected from the AR surface and the power incident on the BS, we measure 52.6e-6 W / 564e-3 W = 9e-5, or about 0.01%.
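The quoted values reduce to simple power (or voltage) ratios; as a quick check of the arithmetic, with the numbers copied from the text above:

```python
# HR mirror power transmissivity: transmitted power / incident power
T_hr = 23e-6 / 567e-3             # ~4.1e-5, i.e. ~0.004 %

# BS AR reflectivity, method 1: PS5 voltages.  The reference voltage is doubled
# because only ~50 % of the incident light reached the sensor.
R_ar_v = 0.387e-3 / (2 * 1.3699)  # ~1.4e-4, i.e. ~0.014 %

# BS AR reflectivity, method 2: direct power ratio
R_ar_p = 52.6e-6 / 564e-3         # ~9e-5, i.e. ~0.01 %

print(f'{T_hr:.1e}  {R_ar_v:.1e}  {R_ar_p:.1e}')
```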
Tried to measure the NLG (following 76542) at different ISS setpoints (by changing the SHG waveplate to adjust how much power is incident on the AOM), keeping the same OPO TRANS power. This was motivated by 86363, where we were confused that the NLG increased significantly when we realigned the pump fiber with the same OPO TRANS setpoint.
This was confusing and I'd like to repeat it next week. I initially thought we were getting pump depletion, so I decreased the seed power, but later noticed that the ISS was simply unable to keep up: the control voltage was dropping to -5 for the low ISS setpoint values (see attached). It could hold LOCKED_CLF_DUAL with 80 uW OPO TRANS fine, but not with the SEED beam.
| OPO Setpoint (uW) | ISS Setpoint (CLF DUAL) | ISS Setpoint (SEED) | Amplified Max | Amplified Min | Unamplified | Dark | NLG | Notes |
|---|---|---|---|---|---|---|---|---|
| 80 | 4.9 | 3.2 | 0.192924 | 0.002389 | 0.0079612 | -2.1e-5 | 24.2 | |
| 80 | 2.9 | 2.9 | 0.045037 | | | -2.2e-5 | 39? (using unamplified value from row below) | Pump depletion? Reduced SEED power from 0.7 to 0.3 to keep it locked on SEED (still didn't work the first time). |
| 80 | 6.3 | 6.3 | 0.0430464 | 0.000546 | 0.0011063 | -2.2e-5 | 38? | Unamplified signal decreased from SEED power change. |
| 80 | 4.8 | 4.7 | 0.0454944 | 0.0005378 | 0.0018533 | -2.7e-5 | 24.2 | |
| 80 | 2.9 | -5? | | | | | | Noticed ISS CONTROLMON at -5. |
| 80 | 3.5 | 3.5 | 0.0452317 | 0.00054049 | 0.0018636 | -2.4e-5 | 24.0 | |
| 80 | 4.95 | | 0.04523 | | | | 24.0 | Leaving here. |
Repeated while Corey was relocking today. We had one strange measurement with the ISS setpoint at 3.3 V, where the unamplified signal was much lower, but when we later repeated at 3.1 V we didn't see this.
| OPO Setpoint (uW) | ISS Setpoint (CLF DUAL) | ISS Setpoint (SEED) | Amplified Max | Amplified Min | Unamplified | Dark | NLG | Notes |
|---|---|---|---|---|---|---|---|---|
| 80 | 5.0 | 5.0 | 0.042961 | 0.00051437 | 0.00176935 | -2.1e-5 | 24.0 | |
| 80 | 2.8 | | | | | | | ISS setpoint dropped to -5, so OPO TRANS not at 80 uW |
| 80 | 3.3 | 3.2 | 0.043273 | 0.0005247 | 0.00106713 | -2.1e-5 | 39.7 | |
| 80 | 6.2 | 6.2 | 0.041031 | 0.00051677 | 0.0017658 | -2.1e-5 | 23.0 | |
| 80 | 6.5 | 6.4 | 0.0409285 | 0.00051425 | 0.00175578 | -2.1e-5 | 23.0 | |
| 80 | 3.6 | 3.5 | 0.0429806 | 0.00051216 | 0.00175932 | -2.7e-5 | 24.1 | |
| 80 | 3.1 | 3.0 | 0.0431685 | 0.000516166 | 0.0017296 | -2.7e-5 | 24.5 | |
| 80 | 2.8 | | | | | | | ISS setpoint dropped to -5, so OPO TRANS not at 80 uW |
| 80 | 4.9 | 4.8 | 0.0429243 | 0.0005134 | 0.0017627 | -2.7e-7 | 24.0 | |
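For clarity, the NLG column can be reproduced from the amplified and unamplified columns; a small sketch, assuming NLG is taken as the ratio of the amplified maximum to the unamplified level with the dark offset subtracted from both (the exact convention used for these tables may differ slightly):

```python
# Sketch: nonlinear gain from amplified / unamplified seed measurements.
def nlg(amp_max, unamp, dark=0.0):
    """NLG ~ (amplified max - dark) / (unamplified - dark)."""
    return (amp_max - dark) / (unamp - dark)

# First row of the first table above:
print(nlg(0.192924, 0.0079612, dark=-2.1e-5))   # ~24.2
```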
Fil, Camilla, Sheila. WP#12784
In 86323 we noticed a high pitched noise in SQZT7. Today we found the SQZT7 picos were enabled. When Sheila disabled them, the noise disappeared. It did not come back when they were re-enabled.
All picos should be off in Observing, so I monitored the SDF. However, even when enabled, the picos should not make this noise. Fil suggests we watch this pico controller closely, as when it is used it might intermittently get stuck trying to drive and start making this noise again.
Tagging DetChar: this pico has been on in Observing from 24th June until 2nd September.
TJ and I then checked that all other picos are monitored in sdf.
We made it up to NLN at 1941 UTC, but this only lasted 39 seconds before an ETMX glitch took us out. On the way back up we noticed that it would need to go through PRMI, so Elenna took the time to fix it, since we ran into some issues with it last week (alog86618). We wrapped up a few SDFs that were left over from maintenance day activities, and then went to observing at 2109 UTC.
After problems with PRC1 ASC, reported here, I checked the PRC1 P error signal and saw that the REFL signal seems to have a small offset. However, it appears that the POP X I signal has no offset, so I updated the error signal and tested the sign. The ISC DRMI code is updated with this new matrix.
Sheila, Matt, Camilla
As the interferometer stayed locked during the early part of maintenance day, we did some injections into the filter cavity length to try to measure the noise caused by backscatter. (Previous measurements: 78579 and 78262; recent range drops that we think are filter cavity related: 86596.)
The attached screenshot from Camilla shows the excitations and their appearance in DARM. (Compare to Naoki's measurement here, and to the time with no injection but elevated noise here.) I made a simple script to make the projection based on this injection, which is available here. It shows about a factor of 3 higher noise level in DARM than Naoki's projection 78579; this is large enough that we should add it to our noise budget, but far too small to explain our low-range times last week.
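The projection script itself isn't reproduced here, but the usual approach is to scale the excess DARM noise seen during the injection by the ratio of ambient to injected witness motion; a minimal sketch under that assumption (data loading and channel handling omitted, function name illustrative):

```python
# Sketch: scattered-light noise projection from a filter cavity length injection.
import numpy as np

def backscatter_projection(darm_inj_asd, darm_quiet_asd,
                           witness_inj_asd, witness_ambient_asd):
    """Project injected excess DARM noise down to ambient witness levels.

    All inputs are ASDs evaluated on a common frequency vector.
    """
    # Excess DARM noise attributable to the injection (quadrature subtraction)
    excess = np.sqrt(np.clip(darm_inj_asd**2 - darm_quiet_asd**2, 0, None))
    # Scale by how much larger the injected witness motion was than ambient
    return excess * (witness_ambient_asd / witness_inj_asd)
```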
M. Todd
I did a sweep of the LVEA, everything looked fine.
I unplugged an extension cord from outside the PSL racks, and noted that a forklift by the 3IFO racks was left plugged in; I left it as it was.
Robert was still in there while I was doing my sweep, and he was notified that he was the last one.
During the commissioning window this morning, I worked on the St0-to-St1 feedforward for the HAM1 ISI. This time I borrowed some code from Huyen to try the RIFF frequency-domain fitting package from Nikhil. This required using Matlab 2023b, which seems to have added a lot of computationally heavy features like code suggestions, so it was kind of clunky to use; I'm also not sure what all of the different fitting options do, so each DOF took multiple rounds of fitting to get working. I also had to add an AC-coupling high-pass to the filters after the fact because they all went to 1 at 0 Hz. Still, the results I got for HAM1 seem to work pretty well. The attached spectra are the on/off data I collected for the X, Y, and Z DOFs: references are the FF-off spectra, live traces are FF-on. The top of each image shows the ASDs with the FF on and off; the bottom is the magnitude of the St0 L4C to St1 GS13 transfer function. The improvement is broad, roughly 10x less motion from 5 Hz up to ~50 Hz. I'm still looking at the rotational DOFs, but there is less coherence there, so not as much to win.
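As an illustration of the ac-coupling fix mentioned above, here is a sketch in scipy of cascading a fitted feedforward filter with a first-order high-pass so that its response is forced to zero at 0 Hz; the fitted filter and the 0.1 Hz corner below are placeholders, not the actual HAM1 filters:

```python
# Sketch: force a fitted feedforward filter to zero at DC with a high-pass.
import numpy as np
from scipy import signal

# Placeholder for a fitted feedforward filter (zeros, poles, gain from the fit)
ff_fit = signal.ZerosPolesGain([-2 * np.pi * 5],
                               [-2 * np.pi * 0.5, -2 * np.pi * 60], 1.0)

# First-order high-pass with its corner well below the band of interest
wc = 2 * np.pi * 0.1
hp = signal.ZerosPolesGain([0.0], [-wc], 1.0)

# Cascade the two by multiplying numerator and denominator polynomials
ff = signal.TransferFunction(
    np.polymul(ff_fit.to_tf().num, hp.to_tf().num),
    np.polymul(ff_fit.to_tf().den, hp.to_tf().den),
)

# Check that the response rolls off toward 0 Hz and is unchanged in band
w, mag, _ = signal.bode(ff, w=2 * np.pi * np.logspace(-2, 2, 200))
```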
Elenna has said this seems to have improved CHARD ASC; maybe she has some plots to add.
There is about an order of magnitude improvement in the CHARD P error signal between 10-20 Hz as a result of these improvements, comparing the NLN spectra from three days ago versus today. Fewer noisy peaks are also present in INP1 P. I included the CHARD P coherence with GS13s, focusing on the three DoFs with the most coherence: RX, RZ, and Z. The improvements Jim made greatly reduced that coherence. To achieve the CHARD P shot noise floor at 10 Hz and above, there is still some coherence of CHARD P with GS13 Z that is likely contributing noise. However, for the IFO, this is sufficient noise reduction to ensure that CHARD P is not directly limiting DARM above 10 Hz. I also compare the CHARD P coherence with OMC DCPD sum from a few days ago to today, see plot.
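For reference, the coherence comparisons described here are standard Welch-style estimates; a sketch with scipy (the time series below are placeholders, and the channel pairing just mirrors the CHARD P / GS13 example above):

```python
# Sketch: coherence between the CHARD P error signal and a HAM1 GS13 witness.
import numpy as np
from scipy.signal import coherence

fs = 512                               # illustrative sample rate, Hz
chard_p = np.random.randn(fs * 600)    # placeholder for CHARD P error signal
gs13_z = np.random.randn(fs * 600)     # placeholder for St1 GS13 Z signal

f, coh = coherence(chard_p, gs13_z, fs=fs, nperseg=fs * 10)  # ~0.1 Hz resolution
```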
In terms of how this compares with our passive stack + L4C feedforward performance, I found some old templates where I compared upgrades to our HAM1 feedforward. I compare our ISI performance now with the passive stack with no L4C feedforward to ASC, and with the passive stack plus the best-performance feedforward we achieved: the results. It's actually a pretty impressive difference! (Not related to the ISI: there seems to be a change in the shot noise floor; it looks like the power on the REFL WFS may have changed from the vent.)
The coupling of CHARD P to DARM appears to be largely unchanged, so this generally means we are injecting about 10x less noise from CHARD into DARM from 10-30 Hz.
Using the calibration factors I report in this alog, here is the CHARD plot roughly calibrated into radians.