Yesterday I made focused shutdowns of just one or a few fans at a time. The range and spectra were not strongly affected, and I did not find a particularly bad fan.
Nov. 9 UTC
CER ACs
Off: 16:30
On: 16:40
Off: 16:50
On: 17:00
Turbine shutdowns
SF4 off: 17:10
SF4 on: 17:21
SF3 and 4 off: 17:30
SF3 and 4 back on: 17:40
SF3 off: 17:50
SF3 back on: 18:00
SF1 and 4 off: 18:30
SF1 and 4 on: 18:35
SF1 and 4 off: 19:00
SF1 and 4 on: 19:18
SF1 and 3 off: 19:30
SF1 and 3 back on: 19:40
SF1 off: 19:50
SF1 back on: 20:00
SF1 and 4 off: 20:10
SF1 and 4 back on: 20:20
SF3 off: 22:50
SF3 on: 23:00
SF3 off: 3:41
SF3 on: 3:51
Nov 10 UTC
SF5 and 6 off: 0:00
SF5 and 6 back on: 0:10
TITLE: 11/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.52 μm/s
QUICK SUMMARY:
H1 Is locked in Nominal_Low_Noise and Observing.
The ITMY Mode 5 & 6 violins are elevated, at 10E-15 on DARM, but they are trending downward.
TITLE: 11/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- Lockloss @ 0:40 UTC, cause unknown
- Took the opportunity to reload the HAM2 ISI GDS TP while we were down
- Alignment looked horrible, so will be running an initial alignment
- H1 lost lock at 25 W, DCPD saturation
- Back to NLN/OBSERVE @ 3:55, attached are some ALS EY SDFs which I accepted
- Superevent S231110g
- 4:06 - incoming 5.9 EQ from Indonesia
- Lockloss @ 5:27 - cause unknown
- Relocking was automated, reached NLN/OBSERVE @ 6:46 UTC
LOG:
No log for this shift.
Looking at the DCPD signals during the 25 W state before this evening's lockloss (LL), during the LL, and during the next 25 W relock: they looked typical (~8-12k) during the 25 W state before the LL, but there was a weird glitch about halfway through the state's duration, and they were diverging when the final glitch and LL occurred. The relock after the 25 W LL had higher DCPD signals, ~40k higher than usual, and the previous lock also had higher-than-usual DCPDs at this state, which were damped down over the long lock. So something caused them (particularly ITMY modes 5/6, and 8... the problem children) to ring up during the lock acquisition following the 25 W LL; they didn't have enough time to damp down during this lock, so after the 5:27 LL they were still high during acquisition.
Lockloss @ 5:27 UTC, cause unknown, no saturations on verbal. Looks like ASC AS A saw motion first again.
Just got H1 back into observing as of 3:55 UTC. Reacquisition took a bit longer due to alignment being poor and needing an initial alignment, followed by a lockloss at 25 W (which I believe was caused by rung-up violins). During the second reacquisition, I noticed that the violins were extremely high, so I held H1 at OMC WHITENING to allow the violins to damp before going into observing.
Lockloss @ 0:40 UTC, DCPD saturation right before. Looking at the scope, ASC-AS_A_DC saw the motion first.
TITLE: 11/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We've been Locked for 31 hours and everything is good.
LOG:
16:00UTC Detector Observing and Locked
18:02 SQZ ISS lost lock and took us out of Observing; needed a few tries but it got itself back up and locked
18:05 Back to Observing
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:06 | PEM | Robert | CR, CER | n | HVAC Tests | 20:06 |
| 17:20 | FAC | Karen | OptLab, VacPrep | n | Tech clean | 17:44 |
| 19:31 | FAC | Cindi | WoodShop | n | Laundry | 20:39 |
| 20:07 | | Camilla, Jeff | MX | n | Running | 20:39 |
| 22:37 | SAF | Travis, Danny | OptLab | n | Safety checks | 00:04 |
TITLE: 11/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 163Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 8mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.49 μm/s
QUICK SUMMARY:
- H1 has been locked for 30 hours
- CDS/SEI DMs ok
Twilio's texting service remains unavailable while we await approval for our usage, but Twilio's phone-calling service does still work.
I have made a hybrid version of locklossalert which uses the cell phone providers for SMS texting, and Twilio for phone calls.
Text and Phone-call alerts are now available again.
Note to operators: please check your alert settings; I had to revert to an earlier configuration after restarting the service.
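For reference, a minimal sketch of the hybrid idea (my own illustration, not the actual locklossalert code): texts go out through a carrier's email-to-SMS gateway over plain SMTP, while voice calls go through the Twilio REST client. The sender address, gateway domain, phone numbers, credentials, and TwiML URL below are all placeholders.

```python
# Illustrative sketch only -- not the locklossalert service itself.
# Sender address, gateway domain, numbers, credentials, and TwiML URL are placeholders.
import smtplib
from email.message import EmailMessage

from twilio.rest import Client


def send_sms_via_carrier(number, carrier_gateway, body):
    """Send a text through a carrier's email-to-SMS gateway (e.g. 'vtext.com')."""
    msg = EmailMessage()
    msg.set_content(body)
    msg["From"] = "locklossalert@example.org"      # placeholder sender
    msg["To"] = f"{number}@{carrier_gateway}"
    with smtplib.SMTP("localhost") as smtp:        # assumes a local mail relay
        smtp.send_message(msg)


def place_call_via_twilio(number, account_sid, auth_token):
    """Place a voice call through Twilio; the URL serves TwiML with the spoken message."""
    client = Client(account_sid, auth_token)
    client.calls.create(
        to=number,
        from_="+15095550100",                      # placeholder Twilio number
        url="https://example.org/lockloss.twiml",  # placeholder TwiML endpoint
    )


if __name__ == "__main__":
    send_sms_via_carrier("5095550123", "vtext.com", "H1 lockloss at 05:27 UTC")
    place_call_via_twilio("+15095550123", "ACxxxxxxxx", "auth_token")
```

The real service presumably also handles per-operator preferences and retries; this only sketches the two transport paths.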
Thu Nov 09 10:08:41 2023 INFO: Fill completed in 8min 37secs
JoeB, M. Wade, A. Viets, L. Dartez

Maddie pointed out an issue with the computation of kappa_TST starting on 10/13. After looking into it a bit further, Aaron found a noisy peak at 17.7 Hz just above the kappa_TST line (which is very close, at 17.6 Hz). It turns out that the peak has been there for a while, but it got louder on 10/13 and has been interfering with the kappa_TST calculation since then. For a quick reference, take a look at the top left plot on the calibration summary pages for October 12 and October 13.

Taking a look at the DARM spectra for those days, JoeB noticed a glitch near 17.7 Hz on 10/12 at about 23:30 UTC (darm_spect_oct12_2023.png). Interestingly, he noted that the 17.7 Hz line, which was present before the glitch, got louder after the glitch (darm_spect_oct13_2023.png). I've attached an ndscope screenshot of the moment the glitch happens (darm_17p7hz_glitch_kappaTST_uncertainty.png). Indeed, there is a glitch at around 23:30 on 10/12 and it is seen by the kappa_TST line and the line's uncertainty. Interestingly, after the glitch the TST line uncertainty stayed high by about 0.6% compared to its value before the glitch occurred. This 0.6% increase pushed the mean kappa_TST line uncertainty above 1%, which is also the threshold applied by the GDS pipeline to determine when to begin gating that metric (see comment LHO:72944 for more info on the threshold itself).

It's not clear to us what caused the glitch or why the uncertainty stayed higher afterwards. I noticed that the glitch at 23:30 was preceded by a smaller glitch a few hours earlier. Oddly, the mean kappa_TST uncertainty also increased (and stayed that way) then too. There are three distinct "steps" in the kappa_TST uncertainty shown in the ndscope I attached. I'll note that I initially looked for changes to the 17.7 Hz line before and after the TCS changes on 10/13 (LHO:73445) but did not find any evidence that the two are related.

==

Until we identify what is causing the 17.7 Hz line and fix it, we'll need to do something to help the kappa_TST estimation. I'd like to see if I can slightly increase the kappa_TST line height in DARM to compensate for the presence of this noisy peak and improve the coherence of the L3 sus line TF to DARM.
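To illustrate the mechanism with a toy example (my own sketch with made-up sample rate, amplitudes, and durations; this is not the GDS/CAL pipeline code): the uncertainty assigned to a calibration line's TF scales with the coherence between the injected line and DARM, and a loud, not-phase-locked peak one frequency bin away leaks into the line's bin and pulls that coherence down.

```python
# Toy illustration of how a nearby noisy peak degrades the coherence (and hence
# raises the estimated uncertainty) of a calibration line. All numbers invented.
import numpy as np
from scipy import signal

fs = 512            # Hz, toy sample rate
T = 600             # s of data
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)

f_line = 17.6       # toy "kappa_TST" line frequency
injection = np.sin(2 * np.pi * f_line * t)


def darm(peak_amp):
    """DARM-like signal: injected line + broadband noise + a phase-wandering 17.7 Hz peak."""
    noise = 0.5 * rng.standard_normal(t.size)
    wander = np.cumsum(rng.standard_normal(t.size)) / fs   # slow random phase walk
    peak = peak_amp * np.sin(2 * np.pi * 17.7 * t + wander)
    return injection + noise + peak


for amp, label in [(0.0, "no 17.7 Hz peak"), (2.0, "loud 17.7 Hz peak")]:
    f, coh = signal.coherence(injection, darm(amp), fs=fs, nperseg=fs * 10)
    c = coh[np.argmin(np.abs(f - f_line))]
    nav = T // 10                                  # rough number of averages (ignores overlap)
    rel_unc = np.sqrt((1 - c) / (2 * nav * c))     # approximate coherence-based TF uncertainty
    print(f"{label}: coherence at 17.6 Hz = {c:.3f}, relative uncertainty ~ {rel_unc:.4f}")
```

In the toy, raising the injected line amplitude relative to the contaminating peak brings the coherence (and the uncertainty) back, which is the same logic as the proposed increase of the kappa_TST line height.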
The TCS Ring Heater changes were reverted on 16 October (74116).
On Oct 12th we retuned the MICH and SRCL LSC feedforwards and moved the actuation from ETMX to ETMY PUM (73420).
The MICH FF always has a pole/zero pair at about 17.7 Hz. In the latest filter, the peak is a few dB higher than in previous iterations (illustrated in the toy sketch below).
The change in H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY coincides with when the MICH and SRCL feedforward filters were changed. See attached.
This is due to a 17.7 Hz resonance in the new MICH FF. See LHO:74136.
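To make the pole/zero point concrete, here is a toy sketch (my own illustration with invented Q values, not the installed MICH FF filter) showing how the peak height of such a pair scales with the pole-to-zero Q ratio:

```python
# Toy illustration: a complex zero/pole pair near 17.7 Hz makes a peak whose height
# is set by the ratio of the pole and zero Qs. Q values are invented for illustration.
import numpy as np
from scipy import signal

f0 = 17.7                # Hz
w0 = 2 * np.pi * f0


def pair_response_db(q_pole, q_zero):
    """|H| in dB at f0 for H(s) = (s^2 + (w0/q_zero)s + w0^2) / (s^2 + (w0/q_pole)s + w0^2)."""
    zeros = np.roots([1, w0 / q_zero, w0 ** 2])
    poles = np.roots([1, w0 / q_pole, w0 ** 2])
    _, h = signal.freqs_zpk(zeros, poles, 1.0, worN=[w0])
    return 20 * np.log10(abs(h[0]))


print("lower-Q pole pair :", round(pair_response_db(q_pole=30, q_zero=10), 1), "dB at 17.7 Hz")
print("higher-Q pole pair:", round(pair_response_db(q_pole=60, q_zero=10), 1), "dB at 17.7 Hz")
```

The peak gain at the resonance is simply q_pole/q_zero, so a modest change in the pole damping moves the peak by a few dB.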
Between 1383589977 and 1383590038 I adjusted the OMC 3 MHz demod phase from 132 degrees to 147 degrees. The OPO lost lock (PZT at 110 V) at around 18:00:39 UTC; the squeezer relocked itself but took a few tries to get the FC handoff to IR. After that we went back to observing, but the squeezing was poor, so I adjusted the SQZ angle in lock.
I don't understand why the sqz angle should be different after that relock.
Closes FAMIS 20000
The only thing of note that I noticed is that the FSS and PMC relock counts and the ISS saturation count were reset on 11/04.
TITLE: 11/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY:
Observing and have been Locked for 17.5 hours. Secondary useism has been going up over the past 12 hours.
Jenne notes that this changing high-frequency noise during the first hours of a lock, seen since Wed 25th Oct (purple trace in 73798), may be caused by the new higher CARM gain (73738), changed on that day.
In 73798 I noted the 4.8 kHz noise that suddenly changes ~1h40 into NLN; Jenne suggests this could be aliased-down CARM gain peaking (see the quick aliasing check below). Looking at the 64 kHz channel (plot attached), noise disappears from 18.4 to 18.7 kHz (very large peak) and appears at 16.6 kHz to 16.8 kHz and at 21.1 kHz.
We have had a weekend of shorter (~5 hour) locks and two locklosses from the LASER_NOISE_SUPPRESSION state #575 (73787, 73831), the state where this CARM gain is changed. Maybe this gain change has made us less stable; we'll discuss reverting it today.
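A quick back-of-the-envelope check of the aliasing idea (my own arithmetic, assuming the ~4.8 kHz feature is being viewed in a 16384 Hz channel): a tone above the Nyquist frequency folds down to |f - k*fs| for whichever multiple k lands it below fs/2, so the 21.1 kHz peak would appear near 4.7 kHz.

```python
# Back-of-the-envelope aliasing check (assumption: the ~4.8 kHz feature is in a
# 16384 Hz channel; the input frequencies are the ones quoted from the 64 kHz data).
def aliased(f_hz, fs_hz=16384.0):
    """Frequency at which a tone at f_hz appears after sampling at fs_hz."""
    f = f_hz % fs_hz
    return min(f, fs_hz - f)


for f in (16600.0, 16800.0, 18400.0, 18700.0, 21100.0):
    print(f"{f / 1e3:5.1f} kHz folds to {aliased(f) / 1e3:5.2f} kHz")
# 21.1 kHz folds to ~4.72 kHz, consistent with the ~4.8 kHz feature;
# 16.6-16.8 kHz folds to ~0.2-0.4 kHz; 18.4-18.7 kHz folds to ~2.0-2.3 kHz.
```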
CARM sliders reverted back to 6dB in ISC_LOCK (svn) and loaded.
On Monday, Naoki and Sheila (73855) saw that even with the CARM gain back at 12 dB, the high-frequency squeezing was still bad and the optimal H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG sqz angle had to be adjusted a lot.
Maybe the CARM gain increase was affecting stability, but we don't think it was causing the high-frequency noise, which was present in FDS and FIS but not in no-SQZ; plot attached. With adjustment to the SQZ angle the SQZ greatly improved. It wasn't clear to us why the SQZ angle changed.
Although the overall high-frequency noise was still bad once the CARM gain was reduced from 12 back down to 6, the peak around 4600 Hz did disappear once the CARM gain was reverted. See the attached SQZ BLRM 6 plot: purple trace with CARM at 12 versus CARM at the nominal 6.
See the attached high-frequency plot showing peaks at 16.4 kHz and 18.7 kHz (purple) and, once thermalized, around 16.6 kHz (red); the peaks disappeared once the CARM gain was reduced (green to blue traces). Maybe this confirms that the peaks are CARM gain peaking, as Jenne suggested.
Both ETM RHs were turned up from 1.0 W to 1.1 W/segment at 19:03 UTC; we plan to increase by another 0.1 W later this afternoon. This follows on from 73437. We will stay in observing during this test.
Made another step up of +0.1W to 1.2W/Segment on ETMX and ETMY at 21:07UTC.
Plots attached of HOMs, DARM, and ndscopes. Jenne pointed out we should use GDS-CALIB for DARM as it isn't affected by the calibration changing with the RH changes. On the ndscopes, note that at -4 hours there is a step in SQZ that affects the range (73446).
High-frequency noise is reduced and DARM is maybe better in the bucket, but circulating power is down 7 kW and kappa_C is down 0.8%.
This doesn't seem to be doing anything bad to the range, so we can leave the ETM Ring Heaters at this 1.2 W setting for the weekend. If operators have any trouble, they can reduce H1:TCS-{ETMX,ETMY}_RH_SET{UPPER,LOWER}POWER from 1.2 W to 1.0 W.
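For reference, one way to step those setpoints back down from a Python session (a minimal sketch assuming pyepics is available; the channel pattern and values are taken from the note above, and this is not a prescribed procedure):

```python
# Minimal sketch, assuming pyepics; channel names and values come from the note above.
from epics import caget, caput

for optic in ("ETMX", "ETMY"):
    for segment in ("UPPER", "LOWER"):
        ch = f"H1:TCS-{optic}_RH_SET{segment}POWER"
        print(ch, "currently", caget(ch), "W")
        caput(ch, 1.0)   # back from 1.2 W to the nominal 1.0 W per segment
```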
Adding plots from 11:00 UTC, 13h30 after the ETM RH change. DARM around 1000 Hz looks like it thermalized worse than it was 2 hours after the RH change. DARM at 6 kHz continued to reduce, but it started that lock particularly high. Circulating power settled at 367 kW, 7 kW less than nominal. KAPPA_C dropped nearly 1%.
These RH changes were reverted on 16 October 2023 (73503).
The latest settings are still working fine; both IY05 and IY06 are going down, as shown in the attached plot (which shows both the narrow and broad filters along with the drive output). However, it will take some time before they get down to their nominal levels. A rough illustration of the narrow-vs-broad filter tradeoff is sketched below.
ITMY08 is also damping down nicely.
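For context on the narrow vs. broad filters mentioned above, here is an illustrative sketch (my own, with a placeholder mode frequency and sample rate, not the actual SUS filter banks) of how a narrow bandpass isolates a single violin mode at the cost of a slower response, while a broader one settles faster but can admit neighboring lines:

```python
# Illustrative sketch only -- not the installed SUS damping filters.
# Center frequency and sample rate are placeholders.
import numpy as np
from scipy import signal

fs = 16384.0     # Hz, placeholder sample rate
f0 = 1008.5      # Hz, placeholder violin-mode frequency


def bandpass(half_width_hz, order=2):
    """Butterworth bandpass of +/- half_width_hz around f0, returned as second-order sections."""
    lo, hi = f0 - half_width_hz, f0 + half_width_hz
    return signal.butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")


narrow = bandpass(0.2)   # isolates a single mode, but rings/settles slowly
broad = bandpass(5.0)    # settles faster, but passes more of the neighborhood

for name, sos in [("narrow", narrow), ("broad", broad)]:
    w, h = signal.sosfreqz(sos, worN=2 ** 18, fs=fs)

    def gain_at(f):
        return abs(h[np.argmin(np.abs(w - f))])

    print(f"{name}: gain {gain_at(f0):.2f} at {f0} Hz, "
          f"gain {gain_at(f0 + 10):.4f} at {f0 + 10} Hz (a 'neighboring' line)")
```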