Daniel, Vicky. We offloaded the TTFSS crystal frequency offsets of ~1.5 GHz to the temperature knobs on the laser controllers again, following the final successful swap to the O1 laser last week lho81426.
The initial and final laser crystal temperatures we set on the knobs:
Recap of process:
From start (Sept 2024) to end (Nov 2024) of laser swapping saga:
Recent auxiliary laser adjustments:
Bypass will expire:
Tue Nov 26 03:29:36 PM PST 2024
For channel(s):
H0:FMC-CS_FIRE_PUMP_1
H0:FMC-CS_FIRE_PUMP_2
FAMIS 31061
This week's trends capture how things compare pre- and post-NPRO swap towards the end of last week. Generally things are looking good and have been stable since the swap with probably the most notable change being the drop in PMC transmitted power by ~2W and rise in reflected power by ~5W. We suspect this to be due to slightly worse mode-matching into the PMC, which we plan to check in the coming weeks.
TITLE: 11/26 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.12 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY: H1 lost lock this morning at 14:12 UTC due to a M6.1 quake out of Japan and has been down since (meaning magnetic and charge measurements were not run). Moving ISC_LOCK to 'IDLE' and SEI_ENV to 'MAINTENANCE' for maintenance day.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 11/26 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 04:24 UTC
Overall a pretty calm shift with one Lock Acquisition that required an initial alignment - all fully auto.
Of note:
LOG:
None
Unknown cause Lockloss (confirmed not PSL/IMC). While microseism is high, there weren't any EQs or other PEM events to cause an LL.
The Lockloss analysis tool is giving an OMC DCPD tag
TITLE: 11/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
A bit of a busy morning with convoys of concrete trucks arriving onsite (for a pour for the new storage warehouse), discovering a ringing up violin mode, and then getting ready for Monday Morning Commissioning!
Rahul has new violin settings for ETMy Mode1 and ITMy Mode5 (aka 05 & 06)--see alog81465 . Operators will need to enter these settings by hand until Rahul accepts them.
Also: If H1 drops out of Observing, be sure to LOAD the guardians she mentions in alog81474.
H1 is currently at 13.5 hrs for this lock (and that's even after a low fly-by from a Chinook helicopter!).
LOG:
The helicopter can be clearly heard in the LVEA on H1:PEM-CS_MIC_LVEA_BS_DQ
TITLE: 11/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.55 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 19:35 UTC (13 hr lock)
Everything seems quiet, especially violins, which have been a recent issue.
Ibrahim, TJ
Beginning after the first NPRO Swap recovery on 10/30, there have been 7 or so instances of SRM M3 tripping during PREP_FOR_SRY in initial alignment, predominantly due to Y saturation. TJ and I are investigating.
So far, we know that SRM sees elevated noise while in PREP_FOR_SRY, particularly after SRM re-alignment. We've also found that AS_C SUM is too low to be considered locked at this state, but the ALIGN_IFO guardian continues with SR initial alignment, leading to the WD trip. The current hypothesis is that SRM is catching the wrong mode, which tricks the automation into continuing alignment.
The first screenshot shows the incidence of the WD tripping during initial alignment, happening frequently after 10/30. There is one instance on 10/16 but nothing in the recent past before that.
The second shows one recent instance of this behavior, leading to the WD trip.
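As a possible mitigation while we investigate, here is a minimal sketch of the kind of AS_C SUM pre-check ALIGN_IFO could make before continuing SR alignment. This is guardian-style Python, but the channel name and threshold are placeholders I made up, not the values the guardian actually uses.

# Hypothetical guard against continuing SR alignment while AS_C sees no light.
# Channel name and threshold are placeholders for illustration only.
AS_C_SUM_CHANNEL = 'ASC-AS_C_SUM_OUT16'   # assumed AS_C quadrant-sum readback
AS_C_SUM_LOCKED_MIN = 0.1                 # placeholder "SRY locked" threshold

def srm_alignment_ok(ezca):
    """Return True only if AS_C sees enough light to trust SR alignment."""
    return ezca[AS_C_SUM_CHANNEL] > AS_C_SUM_LOCKED_MIN

# In a guardian state's run() method this could gate the hand-off, e.g.:
#     if not srm_alignment_ok(ezca):
#         notify('AS_C SUM too low -- holding instead of continuing SR align')
#         return False   # stay here rather than driving SRM into a WD trip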
I edited the sys/h1/guardian/injparams.py and sus/h1/guardian/SUS_CHARGE.py code to move the magnetic injections and in-lock charge measurements 20 minutes earlier tomorrow. The usual start times are 7:20am and 7:45am; they have been moved to 7:00am and 7:25am, and should be over by 7:40am. If we drop out of observing, can an operator please reload the PEM_MAG_INJ and SUS_CHARGE guardians.
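For reference, the edit just shifts the scheduled start times; below is a minimal sketch of the idea, with hypothetical variable names since I'm not quoting injparams.py or SUS_CHARGE.py verbatim.

# Hypothetical sketch only -- not the actual contents of injparams.py/SUS_CHARGE.py.
# The real edit simply moved the scheduled local start times 20 minutes earlier.
from datetime import time

# old schedule (Tuesday maintenance morning, local time):
#   MAG_INJ_START = time(7, 20)   # magnetic injections
#   CHARGE_START  = time(7, 45)   # in-lock charge measurements

# new schedule for tomorrow only:
MAG_INJ_START = time(7, 0)
CHARGE_START = time(7, 25)        # expected to be finished by ~07:40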
Aim to turn the CO2 laser off at 7:40am tomorrow, before the lockloss, to see if we see SQZ alignment changes related to the CO2 lasers similar to what LLO does: 72244.
Adrian, Robert S.
We examined hours-long stretches of data from the H1:PEM-CS_ACC_HAM2_PRM_Y_DQ accelerometer because it featured a few short-duration, broadband transients every hour (figure 1). These transients are not seen in other chamber accelerometers or GS13s (figure 2), so we suspect the cable or signal conditioning box for the accelerometer is failing. It sounds like Lance has some time to check this tomorrow during the maintenance period.
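For anyone who wants to reproduce the look we took, here is a minimal gwpy sketch; the start/end times below are placeholders, so substitute any multi-hour stretch.

# Sketch: look for the hourly broadband transients in the HAM2 accelerometer.
# Times are placeholders; pick any multi-hour stretch of data.
from gwpy.timeseries import TimeSeries

chan = 'H1:PEM-CS_ACC_HAM2_PRM_Y_DQ'
data = TimeSeries.get(chan, 'Nov 24 2024 08:00 UTC', 'Nov 24 2024 14:00 UTC')

# An amplitude spectrogram makes short broadband bursts stand out against
# the stationary background.
specgram = data.spectrogram2(fftlength=1, overlap=0.5) ** (1/2.)
plot = specgram.plot(norm='log')
ax = plot.gca()
ax.set_ylim(4, 500)
ax.set_yscale('log')
plot.savefig('ham2_prm_y_transients.png')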
On 11/12/24 Rahul ran the OPLEV charge measurements, I processed them this afternoon...
ETMY's charge seems to be slightly trending up in UL and LL but looks stagnant in UR. The charge is >= 50 V in LL_{P,Y} and UL_Y.
ETMX's charge looks to have a small upwards trend and is >= 50 V in all DOFs and quadrants. The charge looks to have risen ~5-10 V in a little over a month across the DOFs and quadrants. The error in the measurement for EX is also larger than we usually see; the secondary microseism was elevated during these measurements.
Over the weekend, we've had greatly improved stability because of the PSL swap. The IFO range has been mostly below 160Mpc, with short times near 165Mpc in the first hour of each lock.
The first attachment shows a comparison of the cleaned GDS strain sensitivity at the 165 Mpc time to later in the lock, along with coherences from the lower range time. The sensitivity earlier in the lock was broadly better from 18-55 Hz. There are coherences with both ASC and LSC contributing to the noise from 18-27 Hz, so we spent some time trying to address those. The worse sensitivity from 27-55 Hz is not explained by these coherences, and the usual squeezing adjustments (81463 81458) don't seem to be able to help. We might think about checking both the SRC detuning and the filter cavity detuning to see if they make an impact in this region. We do seem to have slightly less high frequency squeezing than in the past.
To address the ASC coherences, I ran the A2L script that TJ put in userapps/isc/h1/scripts/a2l/a2l_min_multi.py with the command python a2l_min_multi.py --all. The second attachment shows this script running, using an ndscope template that is in sheila.dwyer/ndscope/ASC/A2L_script_ADS.yaml. The first time through, the pitch a2l gains for ETMY, ITMY and ITMX were all set to the maximum that the script checks, so we ran it again. The third attachment shows the coherences (with no squeezing) after this A2L adjustment and Camilla's adjustment of the PRCL feedforward. In the past we've seen that after running this a2l script we can do slightly better by doing a manual adjustment based on injections into CHARD or DHARD (79921 80250); we haven't taken the time to check that today, but the script does seem to have helped in the frequency region where it's measuring. It has made the CHARD Y coherence worse below 15 Hz, which we've seen consistently when the decoupling is good around 20-30 Hz.
The last attachment shows a comparison of the DARM sensitivity in the high range early part of the lock, the lower sensitivity later in the lock, and the current sensitivity after these retunings but without squeezing. This suggests that our adjustments did a good job below 27 Hz, while in the 27-55 Hz band, where the squeezing does have an impact, we haven't yet recovered the sensitivity.
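For reference, here is a minimal gwpy sketch of the sort of coherence check behind the ASC statements above; the CHARD Y channel name and times are my assumptions and may need adjusting.

# Sketch: coherence of DARM with CHARD Y over a locked stretch.
# Channel names and times are assumptions for illustration.
from gwpy.timeseries import TimeSeries

start, end = 'Nov 24 2024 11:30 UTC', 'Nov 24 2024 12:00 UTC'
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
chard = TimeSeries.get('H1:ASC-CHARD_Y_OUT_DQ', start, end)

# Downsample DARM to the (slower) ASC rate before computing coherence.
darm = darm.resample(chard.sample_rate)
coh = darm.coherence(chard, fftlength=16, overlap=8)

plot = coh.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_xlim(10, 100)
ax.set_ylabel('Coherence')
plot.savefig('darm_chardy_coherence.png')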
Old A2L settings:
'FINAL':{
    'P2L':{'ITMX':-1.0,
           'ITMY':-0.39,
           'ETMX':2.98,
           'ETMY':4.72},
    'Y2L':{'ITMX':3.05,
           'ITMY':-2.43, #+1.0,
           'ETMX':4.9,
           'ETMY':1.42}},
New A2L settings:
'FINAL':{
    'P2L':{'ITMX':-0.66,
           'ITMY':-0.05,
           'ETMX':3.12,
           'ETMY':5.03},
    'Y2L':{'ITMX':2.980,
           'ITMY':-2.52, #+1.0,
           'ETMX':4.99,
           'ETMY':1.34}},
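For quick comparison, here is a small plain-Python sketch that prints the change in each gain between the old and new dictionaries above (nothing site-specific):

# Print the per-optic change in A2L gains, old vs. new (values copied from above).
old = {'P2L': {'ITMX': -1.0,  'ITMY': -0.39, 'ETMX': 2.98, 'ETMY': 4.72},
       'Y2L': {'ITMX': 3.05,  'ITMY': -2.43, 'ETMX': 4.9,  'ETMY': 1.42}}
new = {'P2L': {'ITMX': -0.66, 'ITMY': -0.05, 'ETMX': 3.12, 'ETMY': 5.03},
       'Y2L': {'ITMX': 2.98,  'ITMY': -2.52, 'ETMX': 4.99, 'ETMY': 1.34}}

for dof in old:
    for optic in old[dof]:
        delta = new[dof][optic] - old[dof][optic]
        print(f"{optic} {dof}: {old[dof][optic]:+6.2f} -> {new[dof][optic]:+6.2f}"
              f" (delta {delta:+.2f})")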
Attached is an updated plot. It seems that these commissioning changes made the range better from 10-30 Hz, and maybe also 50-90 Hz. No SQZ is slightly better right at 20 Hz.
Also attached is the range BLRMS, showing the main change is in the 20-29 Hz region.
Ran the range comparison script comparing the time of the best range early this morning (11:30 UTC) to a time a few hours later (16:15 UTC) and currently (20:30 UTC), using 15 minutes of data. It looks like SQZer differences and violins, since it's seen broadband at high and low frequency, and I can see the ~500 Hz line rising from ETMY 1 and ITMY 5/6 in the first time span. That line is reduced in the following check, after the new damping settings were put in. A line at 300 Hz also looks to have grown slightly.
The blue traces are the reference good range time from the beginning of the lock.
I have found new damping settings for ETMY mode01 and ITMY mode05/06, which are given below,
ETMY01
Nominal - FM1+FM6+FM10, Gain = +0.1 (+30deg phase)
New - FM1+FM8+FM10, Gain = +0.1 (+60deg phase)
ITMY 05/06
Nominal - FM5+FM6+FM7+FM10, Gain = +0.02 (-30deg phase)
New - FM5+FM6+FM8+FM10, Gain = +0.01 (+30deg phase)
I have not made any changes in the lscparam file; I will check these new settings for a few lock stretches (and also let the microseism settle down) before making them final.
Tagging OpsInfo
So once Rahul is happy with the new Violin Mode settings for ETMy 01 & ITMy 05/06, Operators will need to enter his NEW settings by hand (this can be done any time after the DAMP_VIOLINS_FULL_POWER [566] state). For clarity, also attaching screenshots with the filter banks in question circled in light blue:
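If it's useful, here is a hedged sketch of making those changes from a guardian/ezca shell instead of clicking through MEDM. The filter-bank names are my guess at where the violin damping filters live, and I'm treating ITMY 05/06 as two banks with the same settings; please confirm everything against the circled screenshots before using anything like this.

# Sketch only: apply Rahul's candidate violin damping settings by hand.
# Filter-bank names below are assumptions -- confirm against the MEDM screens.
# Assumes an ezca connection with the 'H1:' prefix (as in a guardian shell).

def set_violin(ezca, bank, turn_off, turn_on, gain):
    """Swap filter modules and set the gain on one violin damping bank."""
    ezca.switch(bank, *turn_off, 'OFF')
    ezca.switch(bank, *turn_on, 'ON')
    ezca[bank + '_GAIN'] = gain

# ETMY mode 1: FM1+FM6+FM10 -> FM1+FM8+FM10, gain stays at +0.1
set_violin(ezca, 'SUS-ETMY_L2_DAMP_MODE1', ['FM6'], ['FM8'], 0.1)

# ITMY modes 5/6: FM5+FM6+FM7+FM10 -> FM5+FM6+FM8+FM10, gain +0.02 -> +0.01
set_violin(ezca, 'SUS-ITMY_L2_DAMP_MODE5', ['FM7'], ['FM8'], 0.01)
set_violin(ezca, 'SUS-ITMY_L2_DAMP_MODE6', ['FM7'], ['FM8'], 0.01)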
This morning there was a range drop on H1 (163 Mpc down to about 151 Mpc, see attachment #1). I was working on trying to figure out how to run the Range Check measurements, but while chatting with Vicky on Teamspeak, she reminded me that the daily CP1 Fill can affect the range (see attachment #2, which is a plot from Dave's alog)....and the effect certainly lines up! (Also see Oli's alog from Sept here.) The time in question is 1802-1812 UTC (1002-1012am PT). I will not share the Low-Range plots I took for 1810 UTC since the CP1 Fill is most likely the culprit.
However, a note about the range is that it has not really returned to 163Mpc---it's hovered at 157Mpc post-CP1-Fill for the last 4+hrs.
So I ran another Low Range DTT for a time about an hour ago (2117 UTC / 1317 PT).
Attached plots show the 30 minutes around the CP1 overfill for Sunday and Saturday. The H1 range shows a correlation with the CP1 discharge line pressure. An increase in line pressure indicates the presence of cold LN2 vapor, and later liquid, in the pipe. The Y manifold accelerometer signal shows correlated motion.
The accelerometer correlation can also be seen on the previous Sunday. This is not seen clearly during the week because the ACC was noisier, presumably due to LVEA activity around 10am each day.
The attachment shows the ACC signal on Sun 8th Sep 2024 correlated with the discharge pressure. Back then we were filling at 8am. It doesn't appear that the beam manifold motion has gotten any worse over the past two months during CP1 fills.
The attached plots show the BNS range around CP1 fill times for the last six CP1 fills (10 AM PDT) when the IFO was also in the locked state. In four of these six cases, we can see the BNS range drop during the CP1 fill. In the remaining two it is not clear whether a CP1 fill happened or not: we see a spike in H0:VAC-LY_TERM_M17_CHAN2_IN_MA.mean, but we don't see an extended increase in that channel as we do in the other four cases.
The attached plot shows the BNS range variations during the CP1 fill times during the first ~10 days of December. We are plotting only those days when the IFO was in observing (H1:GRD-IFO_OK == 1). For these days, the drop in the BNS range during the fill times seems smaller than what we saw during November (plot in the above comment). We also see that the fill times are in general shorter in these ten days compared to November. Maybe the longer the fill time, the larger the drop in the BNS range!? Also, looking at these plots and the plots from November, it seems the range might be coming back to a lower value after the fill than its value before the fill.
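For anyone repeating this comparison, here is a minimal gwpy sketch; the BNS range channel name and the fill window below are my assumptions, while the vacuum channel is the one quoted above.

# Sketch: overlay BNS range and the CP1 discharge line pressure around a fill.
# The range channel name and the time window are assumptions.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    'H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC',  # assumed BNS range channel
    'H0:VAC-LY_TERM_M17_CHAN2_IN_MA',               # CP1 discharge line pressure
]
data = TimeSeriesDict.get(channels, 'Nov 24 2024 17:45 UTC', 'Nov 24 2024 18:30 UTC')

# Stack the two traces with a shared time axis.
plot = Plot(data[channels[0]], data[channels[1]], separate=True, sharex=True)
plot.savefig('cp1_fill_vs_range.png')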
After the very successful PSL work this week, we're electing not to offload the temp changes for the ALS or SQZ lasers to their physical knobs tonight. This means that they all need Crystal Freqs of something near 1.6 GHz (1600 in the channel H1:ALS-X_LASER_HEAD_CRYSTALFREQUENCY). The new-ish ALS state CHECK_CRYSTAL_FREQ is written assuming that all the lasers have the changes offloaded to their knobs, so that the value in that channel is close to zero. After we went through SDF revert, we lost those values (and the search values, which Daniel has updated in SDF for the weekend), so we lost the PLL lock of the aux lasers. The CHECK_CRYSTAL_FREQ state was 'fighting' us by putting in candidate values closer to zero. I've updated its candidate values to be closer to 1600. Once we offload their temps to the knobs on their front panels (early next week), we'll want to undo this change.
Current crystal frequencies: ALSX +1595 MHz, ALSY +1518 MHz, and SQZ +1534 MHz.
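Purely as an illustration of the change described above (this is not the actual ALS guardian code or its variable names), the edit amounts to re-centering the list of candidate offsets the state tries when the PLL won't lock:

# Illustration only -- not the real CHECK_CRYSTAL_FREQ code or variable names.
# With the temperature changes not yet offloaded to the laser front panels,
# the candidate crystal-frequency offsets need to sit near +1600 MHz rather
# than near zero.
# CANDIDATE_OFFSETS_MHZ = [0, 50, -50, 100, -100]       # assumes offloaded temps
CANDIDATE_OFFSETS_MHZ = [1600, 1650, 1550, 1700, 1500]  # until temps are offloaded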
I have now reverted the change to CHECK_CRYSTAL_FREQS, so that when Daniel and Vicky are done offloading the temps to the front panels of the lasers, this state will still work.
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much: we had the chillers off for a long period on 25th October 80882 when we flushed the chiller line, and the issue was seen before this date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 73704.
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and not providing any damping now) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0gpm, so our only change was a lower flow rate. The X chiller has been at 3.7gpm already, and the manual states that these chillers shouldn't be run below 3.8gpm, though this was a small note in the manual and could be easily missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ touched the CO2 flow on Dec 12th around 19:45 UTC 81791, so the flow rate was further reduced to 3.55 GPM. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3gpm. This should push the chiller peak lower in frequency and further away from the crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction) and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
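For completeness, here is a minimal gwpy sketch of the accelerometer ASD comparison across the three dates; the channel name is a placeholder since the log doesn't name the specific accelerometer, and the zoom band is only approximate.

# Sketch: compare accelerometer ASDs before/after the flow-rate changes.
# 'H1:PEM-CS_ACC_EXAMPLE_DQ' is a placeholder channel name.
from gwpy.timeseries import TimeSeries
from gwpy.plot import Plot

chan = 'H1:PEM-CS_ACC_EXAMPLE_DQ'
stretches = [
    ('Dec 7 2024 18:00 UTC',  'Dec 7 2024 19:00 UTC'),
    ('Dec 16 2024 18:00 UTC', 'Dec 16 2024 19:00 UTC'),
    ('Dec 18 2024 18:00 UTC', 'Dec 18 2024 19:00 UTC'),
]

asds = []
for start, end in stretches:
    data = TimeSeries.get(chan, start, end)
    asd = data.asd(fftlength=64, overlap=32)
    asd.name = start            # label each trace by its start time
    asds.append(asd)

plot = Plot(*asds)
ax = plot.gca()
ax.set_xlim(55, 62)             # zoom around the Crab band (~59.3 Hz); adjust as needed
ax.set_yscale('log')
ax.legend([a.name for a in asds])
plot.savefig('tcs_chiller_acc_comparison.png')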