TITLE: 08/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
- H1 is still locked - going on 16 hours
- All systems stable, DMs read out ok, SEI motion low
TITLE: 08/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: H1 has been locked for almost 8 hours. Quiet shift after H1 locked, except for one brief drop out of observing.
LOG:
No log for this shift.
State of H1: Observing at 153Mpc
H1 has been locked and observing for almost 4 hours. Range has increased to above 150Mpc in the past hour, possibly from the wind dying down?
Genevieve, Lance, Robert
To further understand the roughly 10 Mpc lost to the HVAC (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308), we made several focused shutdowns today. These manipulations were made during observing (with times recorded) because such HVAC changes happen automatically during observing, and because we were reducing noise rather than increasing it. The times of these manipulations are given below.
One early outcome is that the peak at 52 Hz in DARM is produced by the chilled water pump at EX (see figure). We went out and looked to see if the vibration isolation was shorted; it was not, though there are design flaws (the water pipes aren't isolated). We switched from CHWP-2 to CHWP-1 to see if that particular pump was extra noisy, but CHWP-1 produced a similar peak in DARM at its own frequency. The peak in the accelerometers is also similar in amplitude to the one from the water pump at EY. One possibility is that the coupling at EX is greater because of the undamped cryobaffle at EX.
Friday HVAC shutdowns; all times Aug. 18 UTC
15:26 CS SF1, 2, 3, 4 off
15:30:30 CS SF5 and 6 off
15:36 CS SF5 and 6 on
15:40 CS SF1, 2, 3, 4 back on
16:02 EY AH2 (only fan on) shut down
16:10 EY AH2 on
16:20 EY AH2 off
16:28 EY AH2 on
16:45 EY AH2 and chiller off
16:56:30 EY AH2 and chiller on
17:19:30 EX chiller only off, pump stays on
17:27 EX water pump CHWP-2 goes off
17:32 EX CHWP-2 back on; chiller back on right after
19:34:38 EX chiller off, CHWP-2 pump stays on for a while
19:45 EX chiller back on
20:20 EX started switch from chiller 2 to chiller 1 - slow going
21:00 EX Finally switched
21:03 EX Switched back to original, chiller 1 to chiller 2
Turning Robert's reference to LHO:72308 into a hyperlink for ease of navigation. Check out LHO:72297 for a bigger-picture representation of how the 52 Hz peak sits in the broader DARM sensitivity. From the time stamps in Elenna's plots, they were taken at 15:27 UTC, just after the corner station (CS) "SFs 1, 2, 3, 4" were turned off. SF stands for "Supply Fan", i.e. those air handler unit (AHU) fans that push the cool air into the LVEA. Recall, there are two fans per air handler unit, for the two air handler units (AHU1 and AHU2) that feed the LVEA in the corner station. The channels that you can use to track the corner station's LVEA HVAC system are outlined more in LHO:70284, but in short, you can check the status of the supply fans via the channels:
H0:FMC-CS_LVA_AH_AIRFLOW_1 Supply Fan (SF) 1
H0:FMC-CS_LVA_AH_AIRFLOW_2 Supply Fan (SF) 2
H0:FMC-CS_LVA_AH_AIRFLOW_3 Supply Fan (SF) 3
H0:FMC-CS_LVA_AH_AIRFLOW_4 Supply Fan (SF) 4
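For convenience, here is a minimal sketch (assuming gwpy and NDS2 access from a LIGO workstation) of trending these airflow channels around the Aug 18 SF1-4 off/on cycle; this is only an illustration, not the tool used for the attached plots.

# Sketch: trend the CS LVEA supply-fan airflow channels around the
# 2023-08-18 SF1-4 off/on cycle (15:26-15:40 UTC).
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H0:FMC-CS_LVA_AH_AIRFLOW_1',  # Supply Fan (SF) 1
    'H0:FMC-CS_LVA_AH_AIRFLOW_2',  # Supply Fan (SF) 2
    'H0:FMC-CS_LVA_AH_AIRFLOW_3',  # Supply Fan (SF) 3
    'H0:FMC-CS_LVA_AH_AIRFLOW_4',  # Supply Fan (SF) 4
]

data = TimeSeriesDict.get(channels, '2023-08-18 15:00:00', '2023-08-18 16:00:00')
plot = data.plot(ylabel='Airflow')
plot.savefig('cs_lvea_supply_fans.png')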
My bad -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, while Robert's list of times above is for 2023-Aug-18. Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station, as I'd errantly claimed above).
For these 2023-Aug-18 times mentioned in this LHO aLOG 72331, check out the subsequent analysis of impact in LHO:72778.
FAMIS 25595, last checked in alog 72166
The levels for both EX FAN1 vibrometers increased slightly just over a day ago, but still well within range.
All other fans look very similar to last check and within acceptable ranges.
TITLE: 08/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 13mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: H1 is relocking, currently at MAX_POWER
TITLE: 08/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- EX saturation @ 16:47
- Lockloss @ 17:36 - cause unknown
- Locking was automated, acquired NLN @ 18:41, OBSERVE @ 18:57
- Lockloss @ 18:57 - cause unknown, though microseism is a bit elevated - scope attached
- H1 MANAGER began an IA which went smoothly, but we are now having some issue with the IMC randomly unlocking at different states of ISC LOCK
- Leaving H1 to Ryan S.; relocking at MOVE SPOTS
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:11 | ISC | Robert | CR | N | HVAC tests | 17:41 |
17:03 | PEM | Robert | EY | N | Check on pump | 17:22 |
18:14 | FAC | Cindy | Mech room | N | Tech clean | 19:05 |
18:29 | SEI | Jim | LVEA | N | Checks | 19:00 |
19:47 | PEM | Robert | EX | N | Work on water pump | 21:33 |
21:53 | VAC | Gerardo | LVEA | N | Grab parts | 22:18 |
I added another L4C under HAM1 for the feedforward test I've been running there. We were locking, so I was trying to minimize the time under the chamber and didn't get a picture in situ, but the attached picture shows the cradle that Tyler 3d-printed for me. This is tucked next to the -x/-y, southeast pier of HAM1. I will probably try to put a cover over this next week, but for now it's just sitting under the chamber, aligned roughly along the IFO X arm. I haven't had a chance to add calibration filters or add alignment values to the HAM1 model, so the only channel for the data so far is the input 3DL4CINF_B_X_IN1_DQ channel.
Second image is some data from DTT looking at ASDs on top, with calibrated HEPI L4C X and HAM2 STS X. There is no calibration on the 3DL4C; the fact that the microseism isn't visible makes me think the 3D-printed cradle isn't stable enough. LLO says they may have some nicer purpose-built metal clamps. Coherence is on the bottom; it's kind of low above 10 Hz, and it would probably help to have another X sensor.
The L4C serial number is L41501.
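For what it's worth, here is a rough sketch (assuming gwpy) of the ASD/coherence comparison described above. The full channel names below are illustrative guesses only -- check the HAM1 model for the actual prefix of the new 3DL4C channel and the site channel list for the HAM2 ground STS name.

# Rough sketch of the ASD / coherence comparison (assumes gwpy).
# Channel names are illustrative guesses, not confirmed.
from gwpy.timeseries import TimeSeries

start, end = '2023-08-18 08:00:00', '2023-08-18 09:00:00'  # any quiet stretch
l4c = TimeSeries.get('H1:ISI-HAM1_3DL4CINF_B_X_IN1_DQ', start, end)  # new sensor, uncalibrated counts
sts = TimeSeries.get('H1:ISI-GND_STS_HAM2_X_DQ', start, end)         # HAM2 ground STS X (assumed name)

# Put both on a common rate before comparing
rate = 256
l4c = l4c.resample(rate)
sts = sts.resample(rate)

asd_l4c = l4c.asd(fftlength=64, overlap=32)   # counts/rtHz (no calibration yet)
asd_sts = sts.asd(fftlength=64, overlap=32)   # calibrated ground motion
coh = l4c.coherence(sts, fftlength=64, overlap=32)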
I tracked how various combs in the strain channel have changed in frequency over time using the autolines-complete.txt files for the daily FSCAN spectra (TSFT 1800 s). I found that the 9.474, 9.475, and 9.480 Hz combs have been changing in frequency over the course of O4, and have identified some key dates that warrant further investigation. I tracked the combs' frequency-roaming behavior by selecting a tooth which appeared often and was at high frequency: n=65 for the 9.474 Hz comb, n=70 for the 9.475 Hz comb, and n=64 for the 9.480 Hz comb. The graphs of frequency over time I generated are available below.

Observations:
- The 9.474 Hz comb's n=65 harmonic initially increases in frequency from 05-24 to 06-05, and then sharply drops from 06-22 to 06-24.
- The 9.475 Hz comb's n=70 harmonic initially increases sharply in frequency from 05-26 to 06-05, and then sharply drops from 06-21 to 06-23.
- The 9.480 Hz comb's n=64 harmonic initially increases sharply in frequency from 05-30 to 06-05, and then sharply drops from 06-22 to 06-23.

So it seems that all three combs' frequencies begin increasing in very early O4 until 06-05 and then abruptly decrease from 06-21 to 06-24. It's likely that the combs were affected by changes made to the interferometer during these periods -- I believe that the week of 06-21 (Sunday) was when we set the input power back down to 60 W. I checked several other combs as well (n=11 of 4.98423 Hz, n=13 of 29.96518 Hz, n=161 of 1.6611 Hz, and n=3 of 11.9044 Hz) and none of those have observable changes in frequency (they may still be roaming inside their frequency bins), so this feature seems fairly unique to the 9.4 Hz family.
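A minimal sketch of the tooth-tracking procedure, assuming the autolines-complete.txt files are whitespace-delimited with the line frequency in the first column (the directory pattern below is hypothetical):

# Track one comb tooth across daily FSCAN autolines-complete.txt files.
# Assumes frequency is the first whitespace-separated column; adjust usecols
# and the glob pattern if the actual layout differs.
import glob
import numpy as np

f0, n = 9.474, 65        # comb spacing (Hz) and harmonic to follow
target = n * f0          # nominal tooth frequency, ~615.8 Hz
window = 0.05            # Hz search window around the nominal tooth

for path in sorted(glob.glob('fscan_daily/*/autolines-complete.txt')):  # hypothetical path
    freqs = np.loadtxt(path, usecols=0, ndmin=1)
    near = freqs[np.abs(freqs - target) < window]
    if near.size:
        # report the candidate closest to the nominal tooth frequency
        print(path, near[np.argmin(np.abs(near - target))])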
Fri Aug 18 10:06:53 2023 INFO: Fill completed in 6min 49secs
Gerardo confirmed a good fill curbside
Since we have to relock, I've switched the order of items in lownoise length control, so that we will change the main LSC gains first, then turn on the FF. Hopefully that will address the issue that Daniel hypothesises is our problem, and we won't ring up the 102 Hz line. If this does not work, do something like an
svn up ISC_LOCK.py
in the userapps/isc/h1/guardian/ folder to revert to the previous file (which I had checked in just before I made this mod). I have not checked in ISC_LOCK.py with this modification yet.
This *may* have helped, in that the resulting 102 Hz line was smaller during this relock.
This is a quick comparison plot of DARM around 102 Hz from the relock we just had (with reordered guardian steps), and the previous relock with the normal order of guardian steps. Both times I chose are the times when the guardian state number changed to 600 (entering nominal low noise).
Since this seems to have worked (at least it didn't hurt / we were able to lock), I have now committed this to the svn.
Lockloss @ 17:36, no obvious cause.
Closes FAMIS 25492, last done August 4th.
Laser Status:
NPRO output power is 1.829W (nominal ~2W)
AMP1 output power is 67.15W (nominal ~70W)
AMP2 output power is 135.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 12 days, 1 hr 20 minutes
Reflected power = 16.16W
Transmitted power = 108.9W
PowerSum = 125.1W
FSS:
It has been locked for 1 day, 12 hr and 5 min
TPD[V] = 0.8903V
ISS:
The diffracted power is around 2.6%
Last saturation event was 1 day, 14 hours and 9 minutes ago
Possible Issues: None
Attached are trends for 45 days of HEPI pressure signals. There are no notable shifts.
TITLE: 08/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
1:27 UTC PSL Dust monitor alerts going off during high winds again.
3:29 UTC random unknown drop out of OBSERVING. I saw no SDF DIFFS within a few seconds of the event and took the IFO back to OBSERVING.
6:42 UTC another random unknown drop out of OBSERVING. I saw no SDF DIFFS within a few seconds of the event and took the IFO back to OBSERVING.
Other than that it's been a quiet night.
Current LOCK 25 hours!
LOG: Coyotes chewing on sprinkler nozzles out back.
The drops out of observing at 3:29 UTC and 6:42 UTC seem to be due to an SDF diff of syscssqz, according to the attached DIAG_SDF log. The SQZ TTFSS COMGAIN and FASTGAIN changed at those times, but they are not monitored now (alog72915), so it should be another SDF diff related to the TTFSS change.
Robert did an HVAC off test. Here is a comparison of GDS CALIB STRAIN NOLINES from earlier on in this lock and during the test. I picked both times off the range plot from a time with no glitches.
Improvement from removal of 120 Hz jitter peak, apparent reduction of 52 Hz peak, and broadband noise reduction at low frequency (scatter noise?).
I have attached a second plot showing the low frequency (1-10 Hz) spectrum of OMC DCPD SUM, showing no appreciable change in the low frequency portion of DARM from this test.
Reminders from the summary pages as to why we got so much BNS range improvement from removing the 52 Hz and 120 Hz features shown in Elenna's ASD comparison. Pulled from https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230817/lock/range/. The range integrand shows ~15 and ~5 Mpc/rtHz reduction at the 52 and 120 Hz features. The BNS range time series shows a brief ~15 Mpc improvement at 15:30 UTC during Robert's HVAC OFF tests.
Here is a spectrum of the MICH, PRCL, and SRCL error signals at the time of this test. The most visible change is the reduction of the 120 Hz jitter peak also seen in DARM. There might be some reduction in noisy peaks around 10-40 Hz in the signals, but the effect is small enough it would be useful to repeat this test to see if we can trust that improvement.
Note: the spectra have strange shapes, I think due to some whitening or calibration effect that I haven't accounted for in making these plots. I know we have properly calibrated versions of the LSC spectra somewhere, but I am not sure where. For now these serve as a relative comparison.
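For reference, a sketch (assuming gwpy) of the kind of before/after ASD comparison shown here; the stretch times below are placeholders, not the exact glitch-free times picked off the range plot.

# Sketch of an HVAC-on vs HVAC-off spectrum comparison (assumes gwpy).
# Times are placeholders; pick glitch-free stretches off the range plot.
from gwpy.timeseries import TimeSeries

chan = 'H1:GDS-CALIB_STRAIN_NOLINES'
hvac_on = TimeSeries.get(chan, '2023-08-17 14:30:00', '2023-08-17 14:40:00')
hvac_off = TimeSeries.get(chan, '2023-08-17 15:27:00', '2023-08-17 15:37:00')

asd_on = hvac_on.asd(fftlength=8, overlap=4)
asd_off = hvac_off.asd(fftlength=8, overlap=4)

plot = asd_on.plot(label='HVAC on', color='C0')
ax = plot.gca()
ax.plot(asd_off, label='HVAC off', color='C1')
ax.set_xlim(10, 1000)
ax.legend()
plot.savefig('darm_hvac_comparison.png')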
According to Robert's follow-up / debrief aLOG (LHO:72331) and the time stamps in the bottom left corner of Elenna's DTT plots, she is using the time 2023-08-17 15:27 UTC, which corresponds to when Robert had turned off all four of the supply fans (SF1, SF2, SF3, and SF4) in the corner station (CS) air handler units (AHU) 1 and 2 that supply the LVEA, around 2023-08-17 15:26 UTC.
Closes WP11368.
We've been seeing the CO2X laser regularly unlocking (alog71594), which takes us out of observing. Today we swapped the CO2X and CO2Y chillers to see if this issue follows the chiller. Previously, swapping CO2Y with the spare stopped CO2Y from unlocking (alog54980).
The old CO2X chiller (S/N ...822) seems to be reporting an unsteady flow at the LVEA flow meter, see attached, suggesting the S/N ...822 chiller isn't working too well. This is the chiller TJ and I rebuilt in February (67265).
Swap: following some of the procedure listed in alog 61325, we turned off both lasers via MEDM, turned off and unplugged (electrical and water connections) both chillers, swapped the chillers, plugged them back in, turned the chillers back on (one needed to be turned on via MEDM), checked the water level (nothing added), and turned on the CO2 lasers via MEDM and chassis. Post-it notes have been added to the chillers. Both lasers relocked with ~45W power.
Jason, TJ, Camilla
The worse chiller (S/N 822) flow rate dropped low enough for the CO2Y laser to trip off, so we swapped CO2Y back to its original chiller (S/N 617) and installed the spare chiller (S/N 813) for CO2X. We flushed the spare (instructions in 60792) as it hadn't been used since February (67265). Both lasers are now running again and flow rates so far look good.
The first batch of water we ran through the spare (S/N 813) chiller had small brass or metal pieces in it (caught in the filter), see attached. Once we drained this and added clean water there was no evidence of metal, so we connected it to the main CO2X circuit.
Looking at the removed CO2X chiller (rebuilt in February, 67265), it had some black gunk in it, see attached. This is worrying as it has been running through the CO2X lines since February and was running in the CO2Y system for ~5 hours. I should have checked the reservoir water before swapping the chillers.
Overnight they seem stable as well, but the new TCSX chiller (617) looks very slightly noisier and perhaps has a slight downward trend to its flow. We'll keep watching this and see if it continues.
I spoke too soon. Looks like TCSX relocked at 08:27 UTC last night.
On Tuesday evening, the removed chiller (S/N 822) drained slowly. No water came out of the drain valve, only the outlet, which was strange. Today I took the cover off the chiller but couldn't see any issues with the drainage. I left the chiller with all valves and the reservoir open so the last of the water can dry out of it.
Naoki and I unmonitored H1:SQZ-FIBR_SERVO_COMGAIN and H1:SQZ-FIBR_SERVO_FASTGAIN from the syscssqz observe.snap. They have been regularly taking us out of observing (72171) by changing when the TTFSS isn't really unlocking, see 71652. If the TTFSS really unlocks there will be other SDF diffs and the SQZ guardians will unlock.
We still plan to investigate this further tomorrow. We can monitor if it keeps happening using the channels.
Daniel, Sheila
We looked at one of these incidents, to see what information we could get from the Beckhoff error checking. The attached screenshot shows that when this happened on August 12th at 12:35 UTC, the Beckhoff error code for the TTFSS was 2^20; counting down on the automated error screen (second attachment), the 20th error is "Beatnote out of range of frequency comparator". We looked at the beatnote error EPICS channel, which does seem to be well within the tolerances. Daniel thinks that the error is happening faster than it can be recorded by EPICS. He proposes that we go into the Beckhoff code and add a condition that the error condition has to be met for 0.1 s before throwing the error.
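As a small illustration of reading the error word (2^20 = 1048576, i.e. only bit 20 set), a minimal sketch:

# Minimal sketch: decode which bit(s) are set in a Beckhoff error word.
# An error code of 2**20 (= 1048576) has only bit 20 set, which the
# automated error screen maps to "Beatnote out of range of frequency comparator".
def set_bits(error_word):
    """Return the bit positions that are set in an integer error word."""
    return [bit for bit in range(32) if error_word & (1 << bit)]

print(set_bits(2**20))  # -> [20]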
In the last 5 days these channels would have taken us out of observing 13 times if they were still monitored, plot attached. Worryingly, 9 of those were in the last 14 hours, see attached (a rough counting sketch is included below).
Maybe something has changed in SQZ to make the TTFSS more sensitive. The IFO has been locked for 35 hours, and during long locks we sometimes get close to the edges of our PZT ranges due to temperature drifts.
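A quick sketch (assuming gwpy and that these slow channels are frame-archived) of how one could count the value changes that would have been SDF diffs; the 5-day window below is an illustrative placeholder, not the exact window used for the attached plot.

# Count how often the unmonitored TTFSS gain channels changed value over a
# window; each change would have been an SDF diff while monitored.
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = '2023-08-26 00:00:00', '2023-08-31 00:00:00'  # placeholder window
for chan in ('H1:SQZ-FIBR_SERVO_COMGAIN', 'H1:SQZ-FIBR_SERVO_FASTGAIN'):
    data = TimeSeries.get(chan, start, end)
    changes = np.count_nonzero(np.diff(data.value))
    print(chan, int(changes), 'value changes')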
I wonder if the TTFSS 1611 PD is saturated as power from the PSL fiber has drifted. Trending RFMON and DC volts from the TTFSS PD, it looks like in the past 2-3 months, the green beatnote's demod RF MON has increased (its RF max is 7), while the bottom gray DC volts signal from the PD has flattened out around -2.3V. Also looks like the RF MON got noisier as the PD DC volts saturated.
This PD should see the 160 MHz beatnote between the PSL (via fiber) and SQZ laser (free space). From LHO:44546, it looks like this PD "normally" would have ~360uW on it, with 180uW from each arm. If we trust the PD calibrations, then current PD values report ~600uW total DC power on the 1611 PD (red), with 40uW transmitted from the PSL fiber (green trend). Pick-offs for the remaining sqz laser free-space path (i.e. the sqz laser seed/LO PDs) don't see power changes, so it's unlikely the saturations are coming from upstream sqz laser alignment. Not sure if there's some PD calibration issue going on here. In any case, all fiber PDs seem to be off from their nominal values, consistent with their drifts in the past few months.
I adjusted the TTFSS waveplates on the PSL fiber path to bring the FIBR PDs closer to their nominal values, and at least so we're not saturating the 1611. TTFSS and squeezer locks seem to have come back fine. We can see if this helps the SDF issues at all.
These were re-monitored in 72679 after Daniel adjusted the SQZ Laser Diode Nominal Current, stopping this issue.