TITLE: 11/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
Walked in to a locking H1, which has since moved on to INITIAL_ALIGNMENT. Ibrahim shared that H1 has had a better time today, so I am cautiously optimistic for the night, with the goal of OBSERVING!
Environmentally we are becoming quieter with µseism dropping below the 95th percentile and winds not too breezy.
Ibrahim, Ryan S
Seems like an IMC lockloss. Interestingly, there was no obviously glitchy behavior (as previously observed) before or after the lockloss. Attached screenshot of ASC and IMC losing lock.
Adding more channels for the sake of investigation. As Ibrahim mentions, no glitchy behavior was seen leading up to this IMC lockloss, so this was either a large, fast glitch out of nowhere or something else giving out. Seemingly all at once (or at least very close to each other), FSS_FASTMON, RefCav trans, NPRO power, PMC high voltage, PMC mixer, and all IMC signals quickly change, so it's hard to tell which truly happens first, as there is lots of feedback between these signals.
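Since these signals are so cross-coupled, one quick way to hunt for the first mover is to overplot them, normalized, around the lockloss time. Below is a minimal gwpy sketch of that kind of check; the channel names and GPS time are illustrative placeholders, not the exact channels/time investigated here.

# Sketch only: channel names and the GPS time below are illustrative
# placeholders, not the exact channels/time from this lockloss.
from gwpy.timeseries import TimeSeriesDict
from matplotlib import pyplot as plt

T_LOCKLOSS = 1415716992  # hypothetical lockloss GPS time
CHANNELS = [
    "H1:PSL-FSS_FAST_MON_OUT_DQ",   # FSS fast monitor (assumed name)
    "H1:PSL-PMC_HV_MON_OUT_DQ",     # PMC high voltage (assumed name)
    "H1:IMC-MC2_TRANS_SUM_OUT_DQ",  # IMC transmission (assumed name)
]

# fetch 2 seconds of data centered on the lockloss
data = TimeSeriesDict.get(CHANNELS, T_LOCKLOSS - 1, T_LOCKLOSS + 1)

fig, ax = plt.subplots()
for name, ts in data.items():
    # demean and normalize each trace so fast departures are easy to compare
    dv = ts.value - ts.value.mean()
    peak = abs(dv).max() or 1.0
    ax.plot(ts.times.value - T_LOCKLOSS, dv / peak, label=name)
ax.set_xlabel("time from lockloss [s]")
ax.legend(fontsize="small")
fig.savefig("lockloss_first_mover.png")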
Sheila, Vicky.
We lowered the low-pass filtering on the ADF SQZ angle servo, going from a 1 Hz to a 0.1 Hz low pass on the ADF I/Q demod filter banks. I think this reduced the sqz angle readback noise a bit (trends before/after). It didn't make a huge impact on the control signal that goes to the CLF_RF6 demod phase (bottom purple trace), and it might be worth considering low-pass filtering that control signal too. I accepted the SDFs for this filter bank LPF change.
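For intuition on the size of the effect, here is a minimal offline sketch (a generic white-noise model, not the actual foton filters in the ADF I/Q demod banks, and the sample rate is an assumption) comparing the residual RMS after a 1 Hz versus a 0.1 Hz first-order low pass.

# Sketch only: illustrates the generic effect of the LPF corner on readback
# noise, not the actual foton filters installed in the ADF I/Q demod banks.
import numpy as np
from scipy import signal

fs = 256.0                                   # assumed demod sample rate [Hz]
rng = np.random.default_rng(0)
noise = rng.standard_normal(int(600 * fs))   # 10 min of white demod noise

for corner in (1.0, 0.1):
    b, a = signal.butter(1, corner, btype="low", fs=fs)
    filt = signal.lfilter(b, a, noise)
    print(f"{corner:>4} Hz low pass -> residual RMS {filt.std():.3f}")

# For white noise the residual RMS scales roughly with sqrt(corner frequency),
# so going from 1 Hz to 0.1 Hz cuts the readback noise by about a factor of 3.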
Ibrahim, Sheila, Ryan S
Ibrahim noticed that the guardian was stuck in a loop overnight, locking PRMI something like 20 times.
What happened was that PRMI was poorly aligned (POP18 and POP90 I ERR both below 20 counts); PRMI was able to grab lock for a couple of seconds, but would not hold it as the guardian engaged the top-mass offloading and adjusted gains and filters. Eventually, after just over an hour, this hit the timer in H1 manager to do an initial alignment.
There is also a timer in ISC_LOCK that checks whether PRMI has locked within 10 minutes; if not, it should move on to MICH fringes. This didn't happen in this case because of the very short locks. Ryan and I read through the ISC_LOCK ACQUIRE_PRMI state and realized that it had a check (line 1371) for ISC_DRMI having arrived at the state PRMI_LOCKED, but not for that state being done, so the check would return true in ISC_LOCK before ISC_DRMI finished the final steps of PRMI_LOCKED. In the second, zoomed screenshot, you can see that as soon as the DRMI guardian enters state 35 (PRMI_LOCKED), ISC_LOCK immediately moves on from state 50 (ACQUIRE_PRMI) to 51 and 52 (PRMI ASC), which resets the timer so that we never went to MICH fringes.
We added a check for the ISC_DRMI state to be both arrived and done in PRMI_LOCKED, so that PRMI will have to survive the offloading and boost engagement for the timer to be reset; a paraphrased sketch of the change is below. Ibrahim has now reloaded this. We think that if this situation came up again, we would now only spend 10 minutes relocking PRMI before going to MICH fringes. If H1 is under manager control, it would run an initial alignment if running MICH fringes didn't help.
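To make the change concrete, here is a small, standalone paraphrase of the logic; this is not the actual ISC_LOCK code, and the guardian node manager's arrived/done flags are modeled with a simple dataclass.

# Paraphrased, standalone sketch of the logic change; not the actual ISC_LOCK
# code. The real check uses the guardian node manager's arrived/done flags.
from dataclasses import dataclass

@dataclass
class NodeStatus:
    arrived: bool   # node has reached the requested state (e.g. PRMI_LOCKED)
    done: bool      # that state's main() has finished (offloading, boosts, ...)

def prmi_timer_should_reset(drmi: NodeStatus) -> bool:
    # Old behaviour: reset on 'arrived' alone, so a 2-second PRMI lock that
    # died during offloading still reset the 10-minute MICH-fringes timer.
    # New behaviour: require 'arrived' AND 'done', so PRMI must survive the
    # offloading and boost engagement before the timer is reset.
    return drmi.arrived and drmi.done

# Example: PRMI grabbed lock but offloading hasn't finished yet
print(prmi_timer_should_reset(NodeStatus(arrived=True, done=False)))  # False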
I ran the range_compare script using the 10-12 minute low-range stretch from the most recent lock and a stretch earlier in the same lock where the range was better. The construction crew by the staging building was also seen on the cameras moving large equipment around this time.
To produce this figure I ran "python3 /ligo/gitcommon/ops_tools/rangeComparison/range_compare.py --span 600 1415725641 1415728983", so I only used 600 seconds of data. The range drop looks to be from low-frequency noise, mostly below ~300 Hz; the most obvious peak differences for the low-range stretch are at 46/47 Hz (mostly on sensmon), 36 Hz (mostly on sensmon), and 60 Hz (both sensmon and DARM). The lines are hard to see, but if you open the PDF in LibreOffice (the Linux default PDF viewer) and right-click -> Arrange -> Behind Object, you can hover your cursor over the curves and it will be much easier to see the differences.
The SQZ blrms didn't really change during this time either.
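For reference, a minimal gwpy sketch of the same kind of two-stretch comparison; the strain channel name, FFT settings, and which GPS time corresponds to the better-range stretch are assumptions, and this is not the range_compare.py implementation.

# Sketch only: generic two-stretch ASD comparison, not range_compare.py itself.
from gwpy.timeseries import TimeSeries

CHANNEL = "H1:GDS-CALIB_STRAIN"  # assumed strain channel
SPAN = 600                       # seconds, matching --span above

# the two GPS times passed to range_compare.py above; which one is the
# low-range stretch is assumed here
stretches = {"better range": 1415725641, "low range": 1415728983}

asds = {}
for label, start in stretches.items():
    data = TimeSeries.get(CHANNEL, start, start + SPAN)
    asds[label] = data.asd(fftlength=8, overlap=4)

plot = asds["better range"].plot(label="better range")
ax = plot.gca()
ax.plot(asds["low range"], label="low range")
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlim(10, 300)   # the drop was mostly below ~300 Hz
ax.legend()
plot.savefig("range_compare_sketch.png")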
Ryan S, Ibrahim
Unexpected lockloss being investigated, but confirmed to be neither the IMC nor the environment. Ryan S is seeing signs that it looks like the ETM glitch.
Does not look like an IMC/PSL lockloss; trends attached. The NPRO temp has been glitching this morning, not drastically, which has been causing the EOM drive to be higher in general, but these glitches were not happening at the time of this lockloss. The shape of the signal on AS_A is indicative of an arms lockloss, and it happens before the IMC loses lock according to MC2_TRANS. Additionally, the lockloss tool shows glitches on ETMX starting up to a second before the lockloss, although the ETM_GLITCH tag was not applied in this case as the glitches did not meet the required threshold.
(The ndscope template I'm using lives in my home directory ~/templates/psl_glitch_hunting.yaml)
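For the timing comparison itself, here is a minimal sketch of how the relative drop times can be estimated; the channel names and GPS time are assumptions, and the ndscope template above is the authoritative channel list for this investigation.

# Sketch only: channel names and the GPS time are assumptions.
from gwpy.timeseries import TimeSeries

T0 = 1415716992  # hypothetical lockloss GPS time
CHANNELS = {
    "AS_A":      "H1:ASC-AS_A_DC_NSUM_OUT_DQ",   # assumed name
    "MC2_TRANS": "H1:IMC-MC2_TRANS_SUM_OUT_DQ",  # assumed name
}

def drop_time(channel, frac=0.5):
    """GPS time when the channel first falls below frac of its pre-lockloss
    mean (assumes the signal drops toward zero when lock is lost)."""
    ts = TimeSeries.get(channel, T0 - 2, T0 + 1)
    baseline = ts.crop(T0 - 2, T0 - 0.5).value.mean()
    below = ts.times.value[ts.value < frac * baseline]
    return below[0] if len(below) else None

times = {label: drop_time(chan) for label, chan in CHANNELS.items()}
if None not in times.values():
    dt_ms = (times["MC2_TRANS"] - times["AS_A"]) * 1e3
    print(f"IMC (MC2_TRANS) dropped {dt_ms:+.0f} ms relative to AS_A")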
FAMIS 31059
Late entry; these are trends taken on Monday as usual but I neglected to post them until now.
Since troubleshooting of laser glitches is still ongoing, several things in the PSL have been changing more than usual, including enclosure incursions, temperature changes, and ISS diffracted power increases. The only unexpected thing of note is that PMC reflected power seems to have been rising slowly over the past few days, but I think it's too soon to tell if this is a similar increase to what we were seeing here pre-NPRO swap or if it's related to laser troubleshooting. Will certainly be keeping an eye on this.
Fri Nov 15 10:12:59 2024 INFO: Fill completed in 12min 55secs
TITLE: 11/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
IFO is LOCKING at PRMI_ASC
When I arrived, guardian had recently finished an initial alignment and was at CHECK_IR. Microseism is much lower than the last few days.
Locklosses overnight:
1415716992: not a PSL/IMC problem; the IMC loses lock 260 ms after the IFO. From observing; not sure what it was.
1415703738: also not a PSL/IMC problem; the IMC loses lock 240 ms after the IFO. From observing; not sure what caused it.
1415697803: not a PSL/IMC problem; there was a small earthquake while we were in the ESD transitions. This lockloss is listed as being from state 558 (the one in which Elenna increased a ramp time to avoid locklosses, 81260), however the lockloss actually happened in the state before, when DARM was still controlled by the ITMX ESD. This is just a less robust state against large ground motion.
Ibrahim looked at the long time between the earthquake and relocking: about an hour of that was waiting in READY for the ground motion to come down (which the guardian does on its own now), then Ibrahim saw that there were 20-something PRMI locklosses in a row, all about 2 seconds after PRMI locked. (We will look into this more.)
We also looked back at Corey's shift and think that the lockloss during the ETM transitions at 2:05 was not a PSL/IMC glitch (noted as LOCK #2 in Corey's alog). So we think we have had 17 hours or so without a lockloss due to the PSL. We will wait and see how today and the weekend go.
As a reminder, we saw FSS glitches yesterday morning, then did several things before this 17 hour stretch without glitch locklosses.
TITLE: 11/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Started the shift by beginning locking for H1. Had a couple of locklosses (as posted earlier), but made it to NLN on the 3rd attempt. Had a lockloss a few minutes ago due to an M6.7 EQ from the South Pacific. Have been touching base with TJ (who will be on OWL); he wanted to receive OWL alerts, and if locking is rough overnight, he'll switch to the PMC/FSS tests.
LOG:
After tonight's Initial Alignment, H1's been fairly good with getting through the bulk of ISC_LOCK, but have had 2 consecutive locklosses a couple of states after MAX POWER:
Locking Notes:
0003-0043 INITIAL ALIGNMENT (w/ ALSy needing touch up by hand)
0044 LOCK#1
LOCK #2: DRMI looked its usual ugly self. Needed to run CHECK_MICH_FRINGES; the BS was definitely the culprit for the nasty alignment and was fixed. PRMI & DRMI then both locked immediately.
Will continue locking for the next 2.5-3hrs, and then take H1 to IDLE and leave FSS & PMC -ON- for the night.
But if H1 makes it to NLN, will contact Louis or Joe B for a ~15min calibration check.
Vicky, Ryan, Sheila
Zooming in on this 2:05 UTC lockloss, MC2 trans dropped about 150ms after the IFO lost lock, so we think this was not due to the usual PSL/IMC issue, even though there was a glitch in the FSS right before the lockloss.
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much; we had the chillers off for a long period on the 25th of October (80882) when we flushed the chiller line, and the issue was seen before this date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it had flattened and was no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller was already at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction), and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near the Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
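For reference, a minimal gwpy sketch of the kind of zoomed h(t) comparison in these attachments; the channel name, FFT settings, and the exact Crab band frequency are assumptions.

# Sketch only: overlay of hour-long ASDs zoomed near the Crab band.
# Channel name, FFT settings, and the band center are assumptions.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

CHANNEL = "H1:GDS-CALIB_STRAIN"   # assumed strain channel
F_CRAB = 59.3                     # approximate Crab GW frequency [Hz]; the
                                  # exact band used by the CW group may differ
DATES = ["2024-12-07", "2024-12-16", "2024-12-18"]   # before/after flow changes

plot = None
for date in DATES:
    start = to_gps(f"{date} 18:00")          # same start hour as the accel samples
    data = TimeSeries.get(CHANNEL, start, start + 3600)
    asd = data.asd(fftlength=120, overlap=60)   # ~8 mHz resolution
    if plot is None:
        plot = asd.plot(label=date)
    else:
        plot.gca().plot(asd, label=date)

ax = plot.gca()
ax.set_xlim(F_CRAB - 0.5, F_CRAB + 0.5)      # zoom around the Crab band
ax.axvline(F_CRAB, linestyle="--")
ax.legend()
plot.savefig("crab_band_zoom.png")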
TITLE: 11/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 15mph Gusts, 10mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.68 μm/s
QUICK SUMMARY:
H1 was down for troubleshooting all day shift and at the end of TJ's shift he started an Initial Alignment which we are running now.
The goal for today is similar to last night's shift: will try locking from 4:30 to 9pm.
Operator NOTE from TJ: When restoring for locking--order is PMC -> FSS -> ISS (as they are listed on the Ops Overview)
Environmental Notes: µseism is worse than last night (it's clearly higher than the 95th percentile and touching "1 count" on the FOM). It had been windy most of the day, but has become calmer in the last hour.
Initial Alignment Note (since it has just completed while I've been trying to write this alog for the last 45min!): ALSy wasn't great after INCREASE_FLASHES; Elenna touched up ETMy by hand and this immediately helped, EVEN WITH HIGH MICROSEISM!
We restarted the calibration pipeline with a new configuration ini such that it no longer subtracts the 60 Hz line (and harmonics up to 300 Hz). The configuration change is recorded in this commit: https://git.ligo.org/Calibration/ifo/H1/-/commit/53f2e892a38cfb18815912c33b1f1b8385cfff62. I restarted the pipeline at around 9:35 am PST, around the same time this was done at LLO (LLO:74051). The IFO was down at the time, so I left a request with the H1 operators to contact both me and Joe B. when H1 is back at NLN but before going to Observing mode, so that whoever is first can confirm that the GDS restart is behaving as expected. Initial checks at LLO indicate that things are working properly, which is promising.
Joe B., Louis D., Corey G.
Corey called as soon as H1 reached NLN. The gstlal-calibration pipeline restart with 60 Hz subtraction turned off looks like it's behaving as expected, so we gave Corey the green light from Cal to go into Observing. The 60 Hz line and its harmonics up to 300 Hz look good (i.e., NOLINES looks identical to STRAIN since subtraction for those lines was turned off).
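A minimal sketch of the kind of cross-check this involved; the channel names and GPS time are assumptions.

# Sketch only: check that NOLINES matches STRAIN around 60 Hz and harmonics
# now that the subtraction is off. Channel names and GPS time are assumed.
from gwpy.timeseries import TimeSeriesDict

START, DUR = 1415716992, 600     # hypothetical post-restart NLN time
CHANS = ["H1:GDS-CALIB_STRAIN", "H1:GDS-CALIB_STRAIN_NOLINES"]

data = TimeSeriesDict.get(CHANS, START, START + DUR)
asds = {name: ts.asd(fftlength=16, overlap=8) for name, ts in data.items()}

ratio = asds[CHANS[1]] / asds[CHANS[0]]
band = ratio.crop(55, 305)       # covers 60 Hz and harmonics up to 300 Hz
print("max |NOLINES/STRAIN - 1| in 55-305 Hz:",
      float(abs(band.value - 1).max()))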
Ryan, Jason, Patrick, Filiberto
As part of troubleshooting the PSL, we hardware power-cycled the PSL Beckhoff computer in the diode room this morning, along with all of the associated diode power supplies and a chassis in the LVEA. I had guessed that everything would autostart, but I was wrong, so I took the opportunity to set it up to do so. This required putting a shortcut to the EPICS IOC startup script in the C:\TwinCAT\3.1\Target\StartUp directory (see attached screenshots), and selecting an option in the TwinCAT Visual Studio project to autostart the TwinCAT runtime.
We software-restarted the computer again to test this, and after logging in, the Beckhoff runtime and PLC code started, along with the EPICS IOC, but the visualization did not. I found documentation pointing to the location of the executable that starts the visualization and added a shortcut to that to the startup directory as well. We didn't have time to restart the computer again to see if that would autostart correctly.
For some reason there seemed to be issues with processes reconnecting to the EPICS IOC channels. I tested running caget on the Beckhoff computer itself and got a message about connecting to two different instances of the channel, plus a couple of pop-up windows related, I think, to allowing network access, which I allowed. caget worked, although it gave a blank space for the value, so I tried it again with an invalid channel name, which it correctly gave an error for. On the Linux workstation we were using, the MEDM screens were not reconnecting, even after closing and reopening them, but again caget worked. We had to restart the entire medm process for it to reconnect. The EDCU and SDF also had issues reconnecting, and they had to be restarted too.
As Patrick mentioned, channel access clients which had been connected to the IOC on h1pslctrl0 would not reconnect after its restart.
The EDC stayed in its disconnected state for almost an hour, even though cagets on h1susauxb123 itself were connecting, albeit with "duplicate list entry" warnings:
(diskless)controls@h1susauxb123:~$ caget H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR
Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR 18
The restart of the DAQ EDC did not go smoothly: I had added a missing channel to H1EPICS_CDSRFM.ini (WP12195) in preparation for next Tuesday's maintenance, and so the EDC came back with a channel list different from that of the rest of the DAQ. I reverted this file change and a second EDC restart was successful.
11:38:35 h1susauxb123 h1edc[DAQ]
11:46:17 h1susauxb123 h1edc[DAQ]
The slow controls h1pslopcsdf system was also unable to reconnect to the 4 PSL WD channels it monitors. This was restarted at 12:08 14nov2024 PST.
Erik found that MEDM on some workstations would continue to show white screens for h1pslctrl0 channels, and a full restart of MEDM was needed to resolve this.
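For future IOC restarts, a small pyepics sketch of the kind of reconnection check that would flag these stuck clients quickly; only the first channel name comes from the caget example above, and the rest of the list is illustrative.

# Sketch only: poll a few PSL/Beckhoff channels after an IOC restart and
# report which ones reconnect. Only the first channel name is taken from the
# caget example in this entry; extend the list as needed.
from epics import PV

channels = [
    "H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR",
    # ... add the other PSL/Beckhoff channels of interest here
]

for name in channels:
    pv = PV(name)
    if pv.wait_for_connection(timeout=5.0):
        print(f"{name:50s} connected, value = {pv.get()}")
    else:
        print(f"{name:50s} NOT CONNECTED")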