VACSTAT went into SINGLE (increased-sensitivity) mode at 09:25 due to one of our regular BSC3 sensor glitches. I restarted the service at 09:30 to reset it back to MONITORING mode.
TITLE: 11/28 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY: H1 lost lock at 14:58 (link to lockloss tool) with no obvious cause after a 21+ hour lock stretch. Currently running an initial alignment.
H1 back to observing at 17:03 UTC.
TITLE: 11/28 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We've been locked the whole shift! ~12.5 hours as of 06:00 UTC. Site conditions are freezing fog and what looks like light snow flurries on the cameras, and the winds are pretty low.
LOG: No log
TITLE: 11/28 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: One lockloss this morning, but once relocked we spent a few hours commissioning until noon. Since then, H1 has been happily observing, so far for almost 7 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:18 | FAC | Tyler, contractor | H2 Bldg. | n | Inspecting HVAC units | 20:17 |
22:34 | VAC | Janos | MX | n | Vacuum measurements | 23:15 |
TITLE: 11/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
H1 is back to observing at 157Mpc as of 20:01 UTC following commissioning time. Activities included DetChar safety injections (alog81518), SQZ script testing and tuning (alog81516), and I ran a magnetic injection suite starting at GPS 1416771605 since that hasn't been done in some time. A calibration sweep was not done as H1 was not thermalized during this commissioning window.
Vicky, Camilla
Vicky had copied the LLO PSAMS scan script over into sqz/h1/scripts/SCAN_PSAMS.py. It turns up the gain on the SQZ ASC and turns off SQZ_ANG_ADJUST before stepping the ZM4/5 PSAMS (using the new 80685 servo) to the values we gave it.
We had to pause SQZ_MANAGER as it was not letting us bring SQZ_ANG_ADJUST to DOWN. This also worked by taking SQZ_MANAGER to SCAN_SQZ_ANG.
Future improvements: repeat now that we have the ASC on. Can we do this in anti-squeezing to be faster (hard to use ASC)? Do we need to have the servo turn off SQZ_ANG_ADJUST and scan the SQZ angle at each step, or can we just leave the servo running?
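For reference, a minimal sketch of that scan structure, assuming the guardian ezca interface; the channel names, gain, setpoints, and settle time below are placeholders, and the script in sqz/h1/scripts/SCAN_PSAMS.py remains the authoritative version.

```python
# Minimal sketch of the PSAMS scan structure described above; this is NOT the actual
# SCAN_PSAMS.py. Channel names, gains, setpoints, and settle times are placeholders.
import time
from ezca import Ezca  # LIGO EPICS wrapper used by guardian/commissioning scripts

ezca = Ezca()  # picks up the H1 prefix from the environment

PSAMS_TARGETS = [(120, 250), (130, 260), (140, 270)]  # hypothetical (ZM4, ZM5) setpoints
SETTLE_S = 60                                          # hypothetical settle time per step

# Turn up the SQZ ASC gain so alignment can follow the PSAMS steps (placeholder channel/value).
ezca.write('SQZ-ASC_WFS_GAIN', 10)

# Turn off the squeeze-angle adjust servo before stepping (placeholder request channel).
ezca.write('GRD-SQZ_ANG_ADJUST_REQUEST', 'DOWN')

for zm4, zm5 in PSAMS_TARGETS:
    # Step the ZM4/5 PSAMS to the requested values (the real script uses the new 80685 servo).
    ezca.write('SQZ-ZM4_PSAMS_SET', zm4)
    ezca.write('SQZ-ZM5_PSAMS_SET', zm5)
    time.sleep(SETTLE_S)
    # Re-optimize the squeeze angle and record the squeezing level at this step.
    print(f'PSAMS step ZM4={zm4}, ZM5={zm5} done')
```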
We also turned the FC beam-spot control on to test it again while we were thermalizing (it worked fine) and turned it off before finishing commissioning: reverted SDFs so H1:SQZ-FC_ASC_INJ_ANG_{P,Y}_OUTPUT are frozen, edited sqzparams, and reloaded.
We tried adjusting the FC detuning, H1:IOP-LSC0_RLF_FREQ_OFS, which had a starting value of 30 deg. We tried 28, 32, and 34, and left the FC detuning at 32. SDF here.
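A sketch of this kind of step-and-look scan, assuming ezca access; only the channel name H1:IOP-LSC0_RLF_FREQ_OFS comes from the log above, everything else is an assumption.

```python
# Sketch of the FC detuning scan; settle time and workflow are assumptions.
import time
from ezca import Ezca

ezca = Ezca()  # H1 prefix from the environment

DETUNINGS_DEG = [28, 30, 32, 34]  # values tried; 30 deg was the starting point
SETTLE_S = 120                    # assumed settle/observation time per value

for det in DETUNINGS_DEG:
    ezca.write('IOP-LSC0_RLF_FREQ_OFS', det)
    time.sleep(SETTLE_S)
    print(f'FC detuning at {det} deg; check squeezing on the wall spectra')
```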
Ryan S, Sidd
DetChar safety injections done. Injection start GPS time: 1416765666.65.
Gracedb Upload for the first waveform is https://gracedb.ligo.org/events/H528692/view/
Wed Nov 27 10:09:53 2024 INFO: Fill completed in 9min 50secs
Jordan confirmed a good fill curbside. The TCs only just cleared the -80C trips (A=-87C, B=-85C). We might increase the trip thresholds to -70C if this trend continues. The TCs started positive today; outside temp was -1C.
Lockloss @ 16:01 UTC - link to lockloss tool
No obvious cause, but doesn't look like an IMC lockloss as the IMC stays locked for several hundred milliseconds after AS_A starts moving.
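A rough sketch of how that ordering can be checked offline with gwpy; the channel names and GPS time here are assumptions, and the lockloss tool itself does this more carefully.

```python
# Sketch: compare IMC and AS_A timing around a lockloss, assuming gwpy/NDS access.
# Channel names and the GPS time are assumptions; substitute the ones the lockloss tool uses.
from gwpy.timeseries import TimeSeriesDict

LOCKLOSS_GPS = 1416758478  # placeholder GPS time for the 16:01 UTC lockloss
channels = [
    'H1:IMC-TRANS_OUT_DQ',         # IMC transmission (assumed channel name)
    'H1:ASC-AS_A_DC_NSUM_OUT_DQ',  # AS_A DC sum (assumed channel name)
]

data = TimeSeriesDict.get(channels, LOCKLOSS_GPS - 2, LOCKLOSS_GPS + 1)

# If the IMC transmission holds for a few hundred milliseconds after AS_A starts
# moving, the IMC is unlikely to be the cause of the lockloss.
plot = data.plot(figsize=(8, 4))
plot.gca().axvline(LOCKLOSS_GPS, color='k', linestyle='--', label='lockloss')
plot.savefig('lockloss_timing.png')
```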
Back to NLN at 17:39 UTC and jumping right into commissioning.
TITLE: 11/27 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY: One lockloss overnight from an unknown cause, but H1 has been observing for just over 6 hours. Calibration/commissioning time today is slated for 16:30 to 20:00 UTC (08:30am to noon local).
TITLE: 11/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: One lockloss this shift (IMC-tagged); DRMI struggled at first, but after an initial alignment relocking was smooth. There were FSS oscillations during the initial alignment. The range has been hovering just under 160Mpc.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | CAL | Tony | PCal Lab | Local | Measurement | 00:04 |
01:18 | CAL | Tony | PCAL lab | Local | Check shutter | 02:10 |
This is not the FSS oscillating; it's the FSS having trouble acquiring lock. The FSS Trans PD signal (TPD) is zero (cavity unlocked) except for when the cavity is near resonance (these are the peaks seen in the TPD during this relocking period). The autolocker fails to lock, so the TPD drops back to zero and the temperature search continues.
The pattern seen in the PMC_HV signal is the PMC following the NPRO frequency change as the FSS autolocker slowly ramps the NPRO crystal temperature to try to lock the RefCav (this is why the NPRO temperature channel has the same shape as PMC_HV).
It's not clear why the RefCav doesn't want to lock as quickly with this NPRO as it did with the previous ones, but we did see this behavior upon FSS recovery last Friday (I had to lock the RefCav manually so we could move on with PSL recovery, as the autolocker was taking too long). I recall seeing this behavior in the past (pre-COVID), and at the time we couldn't figure out the cause. Will look into this more after the Thanksgiving holiday.
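As a toy illustration of that temperature search, here is a sketch assuming the ezca interface; all channel names, thresholds, and step sizes are placeholders, and the real autolocker is not implemented in this form.

```python
# Toy sketch of the FSS temperature-search behavior described above; NOT the real autolocker.
# Channel names, thresholds, and step sizes are all placeholders.
import time
from ezca import Ezca

ezca = Ezca()  # H1 prefix from the environment

TPD_LOCK_THRESHOLD = 0.5  # assumed RefCav transmission level indicating lock
TEMP_STEP = 0.001         # assumed NPRO crystal temperature step per iteration

while True:
    tpd = ezca.read('PSL-FSS_TPD_DC_OUT')  # RefCav transmission PD (assumed name)
    if tpd > TPD_LOCK_THRESHOLD:
        print('RefCav locked; stopping the temperature search')
        break
    # Cavity unlocked: slowly ramp the NPRO crystal temperature so the laser frequency
    # sweeps through a RefCav resonance. The PMC follows this frequency change, which is
    # why PMC_HV has the same shape as the NPRO temperature channel.
    temp = ezca.read('PSL-FSS_NPRO_TEMP_OFFSET')  # assumed name
    ezca.write('PSL-FSS_NPRO_TEMP_OFFSET', temp + TEMP_STEP)
    time.sleep(1)
```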
I went down to start an IA at 02:48 UTC, as DRMI wasn't locking (it tried for half an hour), the FSS and IMC unlocked after input align, and the FSS had been oscillating for the past 10 minutes or so. As I posted this log, it finally locked.
Some of the other PSL signals during the FSS oscillations
01:34 UTC lockloss
IMC-tagged; the IMC and ASC lost it at the same time, and there was a small FSS oscillation (-2 to +4) 8 ms before. The FSS NPRO TEMP stepped up 30 ms before the lockloss, and the ISS also lost lock.
04:02 UTC reacquired NLN, 04:05 UTC Observing
Ansel reported that a peak in DARM that interfered with our sensitivity to the Crab pulsar followed a similar time-frequency path to a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don't usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
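As an illustration of this kind of check, a short gwpy sketch that computes a mic/DARM coherence and a microphone spectrogram to trace the time-frequency path; the channel names, times, and band shown are assumptions.

```python
# Sketch: trace the time-frequency path of a microphone peak and compare to DARM,
# assuming gwpy/NDS access. Channel names, times, and the band are assumptions.
from gwpy.timeseries import TimeSeries

START, END = 'Nov 23 2024 18:00', 'Nov 23 2024 19:00'  # placeholder span during the down time
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', START, END)
mic = TimeSeries.get('H1:PEM-CS_MIC_LVEA_BS_DQ', START, END)  # BS microphone (assumed name)

# Coherence with DARM rarely shows acoustic coupling directly (the coherence length of
# the sound is short), so the spectrogram's time-frequency path is the useful comparison.
coh = mic.coherence(darm, fftlength=16, overlap=8)
coh.plot().savefig('mic_darm_coherence.png')

mic_spec = mic.spectrogram(stride=30, fftlength=8, overlap=4) ** (1 / 2.)
plot = mic_spec.plot(norm='log')
plot.gca().set_ylim(50, 70)  # assumed band around the offending peak
plot.savefig('mic_spectrogram.png')
```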
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct, so we might try this first. However, the safest solution is either to try to change the load to move the peaks to a different frequency, or to put the chillers (on vibration isolation) in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect things much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller lines, and the issue was seen before that date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam from under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller has been at 3.7 gpm already, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could be easily missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any further, to reduce spectral leakage into the Crab band? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab pulsar band.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction), and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near the Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain, but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
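For anyone wanting to reproduce these zoom-ins, a hedged sketch of the comparison using gwpy; the channel name, dates, time spans, and the exact Crab band are assumptions (take the band from the current pulsar ephemeris).

```python
# Sketch: overlay ASDs near the Crab band for a few dates, assuming gwpy/NDS access.
# Channel name, dates, time spans, and the exact band below are assumptions.
from gwpy.timeseries import TimeSeries

DATES = ['Dec 07 2024', 'Dec 16 2024', 'Dec 18 2024']
CRAB_BAND = (59.0, 60.0)  # approximate; take the exact band from the pulsar ephemeris

plot, ax = None, None
for date in DATES:
    strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', f'{date} 18:00', f'{date} 19:00')
    asd = strain.asd(fftlength=120, overlap=60)
    if plot is None:
        plot = asd.plot(label=date)
        ax = plot.gca()
    else:
        ax.plot(asd, label=date)

ax.set_xlim(*CRAB_BAND)
ax.set_yscale('log')
ax.legend()
plot.savefig('crab_band_comparison.png')
```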