Follow up from Tuesday maintenance: Daniel upgraded the Beckhoff slow controls software to no longer error on a version mismatch for the new timing cards installed in h1susex and h1omc0 (for new LIGO DAC and variable Duo-Tone frequency respectively).
I have changed the CDS HW stat code to no longer check that the EX timing error was solely due to the version mismatch. I also reverted the SYS_TC3_TIMING.adl file, removing the temporary blurb I had put in there explaining the expected EX error.
I took the opportunity to upgrade the timing entry on the CDS Overview to a system block. This will turn RED if there are any timing errors, or MAGENTA if the timing system freezes. Clicking on it opens the timing overview, please see attached.
Fri Dec 13 10:10:07 2024 INFO: Fill completed in 10min 3secs
Jordan confirmed a good fill curbside.
The mystery glitching this team has been thoroughly investigating was present again for ~30 minutes starting at approximately 6 UTC on Dec 13, as seen in this range time series. HVeto runs over this time period highlight H1:SQZ-FC_LSC_DOF2_OUT_DQ as the most correlated channel (same as in 81587), this time with a significance of 200 (a significance of > 10 is of interest). After checking omega scans of example glitching in the strain data, there is clear glitching present again in this squeezer channel. Similar to previous occurrences of this noise, it promptly appeared and then disappeared.
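For reference, here is a minimal sketch (assuming gwpy and NDS access; the GPS time is only a placeholder near 06:00 UTC) of how one could pull the flagged squeezer channel and make an omega-scan-style spectrogram around one of the glitches:

# Sketch: omega-scan-style look at the flagged SQZ channel (gwpy assumed available).
# The GPS time is a placeholder near 06:00 UTC on Dec 13, not an exact glitch time.
from gwpy.timeseries import TimeSeries

t_glitch = 1418104818                      # placeholder GPS time
chan = 'H1:SQZ-FC_LSC_DOF2_OUT_DQ'
data = TimeSeries.get(chan, t_glitch - 16, t_glitch + 16, host='nds.ligo-wa.caltech.edu')
qspec = data.q_transform(outseg=(t_glitch - 2, t_glitch + 2))
plot = qspec.plot()
plot.savefig('sqz_fc_lsc_dof2_qscan.png')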
Laser Status:
NPRO output power is 1.832W
AMP1 output power is 70.23W
AMP2 output power is 137.1W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 2 days, 16 hr 5 minutes
Reflected power = 23.33W
Transmitted power = 106.0W
PowerSum = 129.4W
FSS:
It has been locked for 0 days 5 hr and 21 min
TPD[V] = 0.7857V
ISS:
The diffracted power is around 2.8%
Last saturation event was 0 days 5 hours and 21 minutes ago
Possible Issues:
PMC reflected power is high
TITLE: 12/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 35Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.73 μm/s
QUICK SUMMARY:
Wet drive in with H1 locked for over 3.75 hrs after Oli had a wake-up call for SDF diffs overnight. H1 range is hovering around 150Mpc. Over the last 24 hrs, µseism has crept up from ~95th percentile to above it. There are currently INVALID dust alarms for the Vacuum Prep Lab. There was a YELLOW SDF Diff for the CDS DIFF box on the CDS Overview, but it greened up as I was typing about it.
NOTE: next time we are out of OBSERVING, hit LOAD for ISC_LOCK (for RyanS/TJ change)
CDS SDF Diffs:
Reminder that there is a cronjob which automatically accepts any CDS SDF differences related to lock-loss-alert (LLA) settings. Any other diffs, e.g. WIFI Is Activate, are not touched.
The cronjob runs every 10 minutes. If a CDS SDF LLA diff persists longer than this, please contact me.
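For illustration only, a minimal sketch of the name-based filtering such a cronjob implies; the LLA naming pattern here is an assumption, and the actual accept step (which is SDF-specific) is not shown:

import re

# Hypothetical pattern for lock-loss-alert related channels; the real cronjob's
# matching rule and accept mechanism may differ.
LLA_PATTERN = re.compile(r'LOCKLOSS_ALERT|_LLA_')

def split_diffs(diff_channels):
    """Split SDF diffs into auto-acceptable LLA diffs and everything else."""
    lla = [c for c in diff_channels if LLA_PATTERN.search(c)]
    other = [c for c in diff_channels if not LLA_PATTERN.search(c)]
    return lla, other

# Example with made-up channel names: only the LLA-related diff would be
# handed to the (site-specific) accept step.
lla, other = split_diffs(['H1:CDS-LOCKLOSS_ALERT_ENABLE', 'H1:CDS-WIFI_STATUS'])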
Was called by H1 Manager; we were at NLN but there were some SDF diffs that needed to be accepted to go into Observing. They're related to the SRCL changes done yesterday, so I accepted them and we went into Observing.
TITLE: 12/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Very quiet shift with H1 locked and observing for the entirety; current lock stretch is up to 24.5 hours. Secondary microseism is on the rise.
TITLE: 12/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: We stayed locked the whole shift, 19 hours. The PRM camera is blue. Secondary microseism seems to have flatlined; it's likely from storms both off the coast and near Greenland, as the arms see similar phase differences from the CS. Low frequency noise seems intermittently elevated based on DARM and SQZ DTTs.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:53 | OPS | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 21:13 |
15:41 | FAC | Kim | Optics lab | N | Tech clean | 16:06 |
15:50 | OPS | RyanC | Vac prep | N | Restart dust monitor | 15:54 |
17:01 | PEM | Robert | LVEA | Y | Setup TCS shaker test | 17:06 |
17:01 | ISC | Camilla | CR | N | SRCL improvement | 18:24 |
17:25 | PEM | Robert | LVEA | Y | Viewport work | 19:09 |
17:39 | FAC | Kim | OSB receiving | N | Lift door for cardboard | 18:04 |
18:12 | FAC | Kim | H2 encl | N | Tech clean | 18:36 |
19:32 | FAC | Christina | OSB receiving | N | Put away cardboard | 19:44 |
19:41 | TCS | TJ | Mech room | N | Turn down TCSY flow | 19:44 |
19:45 | TCS | TJ | LVEA | Y | Restart CO2Y laser | 19:52 |
19:58 | PEM | Robert | LVEA | Y | Turn off PEM stuff | 20:02 |
23:32 | CAL | Rick, Francisco | PCAL lab | LOCAL | PCAL work | 00:32 |
TITLE: 12/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 8mph Gusts, 5mph 3min avg
Primary µseism: 0.04 μm/s
Secondary µseism: 0.48 μm/s
QUICK SUMMARY: H1 has been locked for 18.5 hours.
[Robert, Jenne]
We've updated the jitter cleaning coefficients. Robert had noticed that over the last few weeks we've had some extra jitter peaks in the ~100-200 Hz range (in addition to our normal jitter peaks), so we looked at retraining the jitter cleaning.
In the first attachment, the bottom panel is the subtraction before the update and the top panel is after the update. You can see that the red trace in the upper panel does a better job of subtracting than in the lower panel for most of the range. There may be a tiny bit of extra noise at a low level below ~70 Hz that most searches will not notice, but the long-duration searches (e.g. stochastic) may see it. However, Robert and I agree that the major benefit makes this an overall win, so I have accepted the new jitter coefficients in both the safe.snap and observe.snap files.
The second screenshot is of the range, where the time cursor shows when the new jitter cleaning started. Since we were squeezing this in at the end of commissioning, it's a little hard to see the range improvement here, but I include it anyway to make it clearer exactly when the 'reset' button in the bottom row was pushed to reset the cleaning coefficients.
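For anyone wanting to reproduce this kind of before/after comparison offline, a minimal sketch assuming gwpy and that the cleaned and uncleaned strain are available as H1:GDS-CALIB_STRAIN_CLEAN and H1:GDS-CALIB_STRAIN (channel names and GPS times are my assumptions, not taken from our scripts):

# Sketch: compare jitter-cleaned vs raw calibrated strain ASDs over a quiet stretch.
from gwpy.timeseries import TimeSeriesDict

# Placeholder 10-minute stretch; pick times before and after the coefficient update.
start, end = 1418100000, 1418100600
chans = ['H1:GDS-CALIB_STRAIN', 'H1:GDS-CALIB_STRAIN_CLEAN']

data = TimeSeriesDict.get(chans, start, end)
asds = {c: data[c].asd(fftlength=8, overlap=4) for c in chans}

plot = asds['H1:GDS-CALIB_STRAIN'].plot(label='before cleaning')
ax = plot.gca()
ax.plot(asds['H1:GDS-CALIB_STRAIN_CLEAN'], label='after cleaning')
ax.set_xlim(10, 1000)
ax.legend()
plot.savefig('jitter_cleaning_asd_comparison.png')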
Sheila, Camilla. Repeat of 78776, continuing 81575.
During the emergency July/August OFI vent, we accidentally left the SRCL ASC offsets and the OM2 heater off. Today we tried to find the best SRC1 ASC offsets with the OM2 heater off. We could improve the CAL FCC (though kappaC decreased) with an SRC1 ASC YAW offset (-0.62) and an adjusted SRCL LSC offset (-140) found with SQZ FIS data. This decreased the range, so we reverted the settings back to nominal. We may continue next week and look at FC de-tuning, OMC ASC offsets, and LSC FFs, as these may need to be updated too.
We repeated the setup in 81575: turned off the SRCL LSC offset, opened the POP beam diverter, turned off the SRC1 ASC loops, and then moved SRM in pitch and yaw (0.1 urad steps in groups of 5 urad). We saw no change in pitch, but an offset in yaw improved the FCC (while decreasing kappaC). We could see the ASC-OMC_NSUM signals changing, so we may need to update the OMC offsets too, and we could see RF18 increase. The FCC (coupled cavity pole) increased by 4.5 Hz. Plots here and here.
Added a -0.62 offset into SRC1 (this still needs to be put into ISC_LOCK / SDF / lscparms). Then turned SRC1 ASC back on (ramp time 0.1 s; turned on the offset and the ON button at the same time).
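The SRM moves above were done by hand from the control room; purely as an illustration of the stepping pattern, here is a sketch using pyepics (the alignment-slider channel name is an assumption, and this should obviously not be run blindly on a live interferometer):

# Sketch: step SRM yaw alignment in 0.1 urad increments (pyepics assumed available).
import time
from epics import caget, caput

chan = 'H1:SUS-SRM_M1_OPTICALIGN_Y_OFFSET'   # assumed slider channel; verify before use
step = 0.1        # urad per step
n_steps = 50      # 50 x 0.1 urad = one 5 urad group
dwell = 5.0       # seconds to wait between steps

start_val = caget(chan)
for i in range(1, n_steps + 1):
    caput(chan, start_val + i * step)
    time.sleep(dwell)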
Repeated the SQZ FIS vs SRCL de-tuning dataset from 80318; this gave us the SRCL offset of -140 to use:
Outputs from Sheila's 80318 code are attached: DARM spectra, the SRCL de-tuning fitted to the model, and the offset fit that gave us -140.
Once we reverted the changes, you can see here that the range increased by 5 Mpc, the FCC decreased by 4.4 Hz, kappaC increased by 0.8%, RF18 decreased, and RF90 increased.
While we still had the SRCL ASC and LSC offsets in place, I tried some different FC de-tuning values from -29 to -38 (nominal is -32), using 0.1 Hz bandwidth, 50% overlap, and 35 averages over 3 minutes of data each. -35 is maybe the best FC de-tuning, though there isn't much of a difference. The most interesting thing is that the no-SQZ time is considerably worse below 40 Hz than the no-SQZ data on 12/9 (IFO locked 2 hrs) with the original SRCL offsets. Plot attached.
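As a sanity check of the averaging arithmetic: a 10 s FFT gives 0.1 Hz bandwidth, and with 50% overlap (5 s) over 180 s of data there are (180 - 10)/5 + 1 = 35 averages. A minimal gwpy sketch of one such spectrum (the channel name and GPS time are placeholders):

# Sketch: spectrum with the stated parameters (0.1 Hz BW, 50% overlap, 35 averages).
from gwpy.timeseries import TimeSeries

start = 1418100000                                   # placeholder GPS start
data = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + 180)
asd = data.asd(fftlength=10, overlap=5, method='median')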
I took a quick look at the LSC coherence (MICH/PRCL/SRCL) during this change to see whether it changed in a way that would explain the worse noise. There is no change in the coherence. I also tried running a nonsens subtraction of the LSC channels on CALIB STRAIN and there was nothing worthwhile to subtract. It is safe to say that these changes today in SRCL detuning do not worsen the LSC coupling.
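A minimal sketch of this kind of coherence check, assuming gwpy; the LSC channel names are my guess at the usual ones and the times are placeholders:

# Sketch: coherence of the LSC control signals with calibrated strain.
from gwpy.timeseries import TimeSeries

start, end = 1418100000, 1418100600                  # placeholder times
strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
for dof in ('MICH', 'PRCL', 'SRCL'):
    lsc = TimeSeries.get(f'H1:LSC-{dof}_OUT_DQ', start, end)
    # coherence() assumes matching sample rates; resample first if they differ
    coh = strain.coherence(lsc, fftlength=10, overlap=5)
    print(dof, coh.crop(20, 100).mean())             # crude 20-100 Hz summary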
Commissioning wrapped up and we resumed observing at 20:05 UTC; we've been locked for 14 hours 40 minutes.
A comprehensive method to periodically check for suspension anomalies in the various OSEMs.
spectrums.py is a collection of functions that can be used to study fluctuations of the various suspension OSEMs over time. Git link: https://git.ligo.org/hanford_osem/hanford_OSEM/-/blob/main/spectrums.py?ref_type=heads
How to use it: git pull from the link above. In any LIGO cluster terminal or Jupyter session, use the igwn-py kernel, import just the following dependency, and define a time segment object:
from spectrums import *
time_seg = time_segment()
At the prompt this will ask for a ref_time (reference time), a GPS time when the OSEMs were known to have no anomalies. After pressing enter, it will ask for a check_time, another GPS time which is being checked against ref_time. Finally it will ask for a duration. A long duration might take a long time to execute; duration = 180 (seconds) is a reasonable choice. This creates the time segment object as a dictionary.
Now we can check the OSEM fluctuations directly by running:
host = 'nds.ligo-wa.caltech.edu'
osem_fluctuations(time_segments=time_seg, host=host)
This will show the available OSEMs, and we can enter one or a list of OSEMs that we wish to investigate. It may take a few minutes to get all the plots.
Checking individual OSEMs can be time consuming; a quick alternative is to check their BLRMS values first:
check_blrms(time_segments=time_seg, host=host, fs=256, lowcut=15, highcut=20, threshold=None)
Running this function allows us to enter one or a list of OSEMs as before and have their BLRMS values printed. We can apply a threshold of choice to see which OSEM(s) look problematic, and then only check the plots of those OSEMs with the osem_fluctuations() function.
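Putting the steps above together into a single script (the function calls are used exactly as described above; the GPS times entered at the prompts are whatever reference/check times you choose):

from spectrums import *

# Build the time-segment dictionary; this prompts for ref_time, check_time and
# duration (duration = 180 s is a reasonable choice).
time_seg = time_segment()

host = 'nds.ligo-wa.caltech.edu'

# Quick screen first: print band-limited RMS values for the OSEMs you enter,
# optionally flagging those above a threshold of your choice.
check_blrms(time_segments=time_seg, host=host, fs=256,
            lowcut=15, highcut=20, threshold=None)

# Then make full spectrum plots only for the suspicious OSEMs (can take minutes).
osem_fluctuations(time_segments=time_seg, host=host)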
Today we ran a correlated noise measurement for 1 hour starting at 1416594379, meaning that the detector was locked, thermalized, and had no squeezing engaged for this period of time. We left calibration lines on during the measurement. The measurement was also preceded by a calibration measurement; see alog 81461.
During the measurement I ran a script that I adapted from Sander Vermeulen, which collects the full 524 kHz OMC DCPD channels (H1:OMC-DCPD_{A,B}0_IN1); see Jeff Kissel's comment in alog 69398, which gives a good description of how these channels are created. The data from these two channels was saved in HDF5 files as 1-second frames. Those files can be found in /ligo/home/elenna.capote/OMC_DCPD/242511-102557/, and on the DCC at this file card: T2400394.
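This is not Sander's script, but a minimal sketch of the same kind of fetch-and-save loop, assuming gwpy/NDS access to the 524 kHz DCPD test points (output paths are illustrative):

from gwpy.timeseries import TimeSeriesDict

chans = ['H1:OMC-DCPD_A0_IN1', 'H1:OMC-DCPD_B0_IN1']
start = 1416594379              # GPS start of the 1-hour correlated-noise measurement
duration = 3600

# Save the two 524 kHz channels one second at a time, one HDF5 file per frame.
for t in range(start, start + duration):
    data = TimeSeriesDict.get(chans, t, t + 1)
    data.write(f'omc_dcpd_{t}.h5', format='hdf5')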
Sheila noted one large glitch at UTC 18:35, so this data may need to be gated around that time.
Edit: the resulting zip file is 9GB so the DCC cannot host it. I am thinking of other ways to share this data with others....
Editedit: added box link to DCC for data access.
This data has been posted to the DCC here: T2400394.
This morning there was a range drop on H1 (163 Mpc down to about 151 Mpc, see attachment #1). I was working on figuring out how to run the Range Check measurements, but while chatting with Vicky on TeamSpeak, she reminded me that the daily CP1 fill can affect the range (see attachment #2, which is a plot from Dave's alog), and the effect certainly lines up! (Also see Oli's alog from Sept here.) The time in question is 1802-1812 UTC (1002-1012am PT). I will not share the Low-Range plots I took for 1810 UTC since the CP1 fill is most likely the culprit.
However, a note about the range is that it has not really returned to 163 Mpc; it has hovered at 157 Mpc post-CP1-fill for the last 4+ hrs.
So I ran another Low Range DTT about an hour ago (2117 UTC / 1317 PT).
Attached plots show the 30 minutes around the CP1 overfill for Sunday and Saturday. The H1 range shows a correlation with the CP1 discharge line pressure. An increase in line pressure indicates the presence of cold LN2 vapor, and later liquid, in the pipe. The Y manifold accelerometer signal shows correlated motion.
The accelerometer correlation can also be seen on the previous Sunday. This is not seen clearly during the week because the ACC was noisier, presumably due to LVEA activity around 10am each day.
The attachment shows the ACC signal on Sun 8th Sep 2024 correlated with the discharge pressure. Back then we were filling at 8am. It doesn't appear that the beam manifold motion has gotten any worse over the past two months during CP1 fills.
The attached plots show the BNS range around CP1 fill times for the last six CP1 fills (10 AM PDT) when the IFO was also in the locked state. In four of these six cases we can see the BNS range drop during the CP1 fill. In the remaining two it is not clear whether a CP1 fill happened or not: we see a spike in H0:VAC-LY_TERM_M17_CHAN2_IN_MA.mean, but we don't see an extended increase in that channel as we do in the other four cases.
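A minimal sketch of the range-vs-line-pressure overlay used for these plots, assuming gwpy; the BNS range channel name is my assumption, the vacuum channel is the one quoted above (without the trend suffix), and the GPS times are placeholders:

# Sketch: overlay BNS range and CP1 discharge line pressure around a fill.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

start, end = 1417716018, 1417717818      # placeholder ~30 minutes around a fill
chans = ['H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC',   # assumed range channel
         'H0:VAC-LY_TERM_M17_CHAN2_IN_MA']                # line-pressure channel above

data = TimeSeriesDict.get(chans, start, end)
plot = Plot(data[chans[0]], data[chans[1]], separate=True, sharex=True)
plot.savefig('cp1_fill_vs_range.png')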
The attached plot shows the BNS range variations during the CP1 fill times during the first ~10 days of December. We are plotting only those days when the IFO was in observing (H1:GRD-IFO_OK == 1). For these days, the drop in the BNS range during the fill times seems smaller than what we saw during November (plot in the above comment). We also see that the fill times are in general shorter in these ten days compared to November. Maybe the longer the fill time, the larger the drop in the BNS range!? Also, looking at these plots and the plots from November, it seems the range might be coming back to a lower value after the fill than its value before the fill.
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y but this did make the chiller wobbly so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before that date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it had flattened and was no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller has been at 3.7 gpm already, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction), and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18