Laser Status:
NPRO output power is 1.842W
AMP1 output power is 70.15W
AMP2 output power is 138.6W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 0 days, 2 hr 46 minutes
Reflected power = 23.78W
Transmitted power = 105.4W
PowerSum = 129.2W
FSS:
It has been locked for 0 days 0 hr and 33 min
TPD[V] = 0.8228V
ISS:
The diffracted power is around 3.2%
Last saturation event was 0 days 0 hours and 33 minutes ago
Possible Issues:
PMC reflected power is high; it's a little higher after the FSS/PMC work today
Sheila D, TJ
To help understand why we've been having inconsistent locking of SRY lately, Sheila and I took an open-loop gain (OLG) measurement of SRCL with SRY locked this morning. I stopped ALIGN_IFO at WFS_CENTERING_SRY so it was locked but not fully aligned. Compared to the last reference (July 2018) on the template, things look a bit worse. We plan to have someone look at this and try to tune the loop a bit better.
The LVEA has been swept; the FARO has been left plugged in in the East bay.
We've been seeing the FSS RefCav TPD signal dropping over recent weeks (not surprising for this time of year), so I went into the PSL enclosure this morning to tune up the FSS path alignment in advance of the holiday break.
To start, I attempted to touch up alignment into the PMC remotely using the picomotors right before it with the ISS off, but I wasn't able to get much of any improvement, so I turned the ISS back on and made my way out to the enclosure. Once there, I started with a power budget of the FSS path using the 3W-capable Ophir stick head:
The largest (and least surprising) area of power loss I noticed was in the AOM diffraction efficiencies, so I started there. I adjusted the AOM stage itself, mostly in pitch, to improve the single-pass, and M21 to improve the double-pass. I also checked that the beam was passing nicely through the FSS EOM (it was, no adjustment needed here). The final power budget:
Having made good improvements, I proceeded to adjust M23 and M47, the picomotor-controlled mirrors before the RefCav, to align the beam back onto the alignment iris at the input of the RefCav. That done, I then instructed the FSS autolocker to lock the RefCav. As seen before and noted most explicitly in alog81780, the autolocker could briefly grab the TEM00 mode but then lose it. I lowered the autolocker's State 2 delay (which determines how long to wait before turning on the temperature control loop after finding resonance) from 1.0 seconds to 0.5, and the autolocker was immediately successful. I've accepted this shorter delay time in SDF; screenshot attached. The TPD was reporting a signal of 0.515 V with the RefCav locked, so I used the picomotors to improve alignment, finishing with a TPD signal of 0.830 V.
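For context, a minimal sketch of what the State 2 delay controls (illustrative only; the real autolocker runs in the Beckhoff/EPICS system and the function names below are made up):

import time

STATE2_DELAY = 0.5  # seconds; reduced from 1.0 so the temperature loop engages
                    # before the briefly-caught TEM00 mode is lost again

def autolocker_state2(resonance_found, engage_temperature_loop):
    # Once the TEM00 resonance is found, wait STATE2_DELAY before handing
    # control to the temperature loop (hypothetical callables for illustration)
    if resonance_found():
        time.sleep(STATE2_DELAY)
        engage_temperature_loop()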
Seeing that the beam spot on the RefCav REFL camera was now more than half out of view, I rotated the camera in its mount slightly to center the image. I then unlocked the FSS and used M25 to tweak up the alignment onto the RFPD, improving the voltage measured with a multimeter from 0.370 V to 1.139 V, then locked the FSS again to get an RFPD voltage of 0.213 V. This gives a RefCav visibility of 81.3%. I wrapped up in the enclosure, turned the environmental controls back to science mode, and returned to the control room. After about an hour, while maintenance activities were finishing, I turned the ISS back on; it is currently diffracting around 3.3%.
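For the record, the 81.3% visibility follows directly from the unlocked and locked RFPD voltages quoted above; a quick check:

# RefCav visibility from the RFPD DC voltages measured above
v_unlocked = 1.139  # V, FSS unlocked (all light reflected)
v_locked = 0.213    # V, FSS locked on TEM00

visibility = 1 - v_locked / v_unlocked
print(f"RefCav visibility: {visibility:.1%}")  # ~81.3%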
This closes WP 12250.
While Oli was doing initial alignment, I saw that the Y arm guardian was in the state ENABLE_WFS although the arm wasn't locked and wasn't well enough aligned to lock. Looking at the code, there was no check that the arm was locked before the locking state returned True; it only checked for errors. I've changed it so that the state returns True only if H1:ALS-Y_REFL_LOCK_STATUS = 1.
After doing this, I caused an issue by trying to cycle through these states while the INIT_ALIGN guardian was still managing the arm guardians. The X arm had already run the WFS and completed, but the initial alignment guardian saw that it wasn't locked and requested it to scan_alignment. Perhaps this check could be made to only happen if the arm hasn't already offloaded, or if the arm has been in the locking state for a certain amount of time.
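For reference, a sketch of the kind of check added (not the actual ALS guardian code; 'ezca' stands in for guardian's usual EPICS channel access, and the error check is a placeholder):

def yarm_locking_state_done(ezca, errors_present):
    # Previously the state returned True as soon as no errors were present;
    # now also require that the arm actually reports locked.
    # (Illustrative only; 'errors_present' is a hypothetical stand-in for the
    # existing error checks.)
    if errors_present:
        return False
    return ezca['ALS-Y_REFL_LOCK_STATUS'] == 1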
Fil, Fernando, Patrick, Dave:
We are investigating a Beckhoff device error on the CS-AUX PLC, DEV1 on the CDS Overview. Device went into error at 11:21 PST.
Robert has powered down the chamber illuminator control chassis in the LVEA. This chassis permits remote control of the chamber illuminators via ethernet, which is not needed during O4. There is a worry that these chassis, even when in stand-by mode, could be a source of RF noise.
On the CDS overview I will change DEV1's display logic to be GREEN if the device count is 21 (of 23) and RED for any other value.
New CDS Overview has GREEN DEV1 when device count = 21, RED otherwise.
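The display logic itself is just a simple comparison; a sketch of the intent (not the actual overview code):

def dev1_color(device_count):
    # 21 of 23 devices is now the expected count with the illuminator
    # control chassis powered down; anything else flags RED
    return "GREEN" if device_count == 21 else "RED"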
Looked at the spectrum of the 50W CO2 lasers on the fast VIGO PVM 10.6 ISS detectors when the CO2 is locked (using the laser's PZT) and unlocked/free running; time series attached. Small differences below 6 Hz, see spectrum attached.
Gabriele, Camilla.
We are not sure if this measurement makes sense.
Attached is the same spectrum with the CO2X laser turned off to see the dark noise. It appears that the measurement is limited by the dark noise of the diode above 40 Hz. The ITMX_CO2_ISS_IN_AC channel dark noise is actually above the level with the laser on, which doesn't make sense to me.
Gabriele and I checked that the de-whiten filter in H1:TCS-ITM{X,Y}_CO2_ISS_{IN/OUT}_AC, zpk([20], [0.01], 0.011322, "n"), is as expected from the PD electronics D1201111, undoing the gain of 105 dB with the turning point around 20 Hz; foton bode plot attached.
This means that both the AC and DC outputs should be the voltage out of the photodetector before the electronics, where the PD shunt resistance was measured to be 66 Ohms.
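As a quick numerical cross-check of the 105 dB figure, the filter response can be evaluated directly (a sketch, assuming foton's "n" convention where zeros/poles are in Hz and the gain is the DC gain):

import numpy as np

f = np.logspace(-3, 3, 1000)  # Hz
z, p, k = 20.0, 0.01, 0.011322
H = k * (1 + 1j * f / z) / (1 + 1j * f / p)
mag_db = 20 * np.log10(np.abs(H))

print(f"gain at {f[0]:.0e} Hz: {mag_db[0]:.1f} dB")    # ~ -39 dB near DC
print(f"gain at {f[-1]:.0e} Hz: {mag_db[-1]:.1f} dB")  # ~ -105 dB, undoing the +105 dB electronics gain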
Tue Dec 17 10:02:58 2024 INFO: Fill completed in 2min 56secs
Quick fill, coincident with dewar filling.
Ryan S noticed that the range drop before the lockloss at 11:34 UTC could be related to the glitches we've been seeing, as Omicron sees similar glitches (screenshot). DARM looks worse from 30-100 Hz (plot).
Before the lockloss we see a 26-28 Hz wobble in DARM (not in the other LSC loops); plot attached.
TITLE: 12/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
IFO re-locked itself twice overnight and is currently running magnetic injections before we start maintenance activities.
Slightly strange that the IFO mode was "relocking" (H1:ODC-OBSERVATORY_MODE = 21) when I arrived.
TITLE: 12/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
LOG:
IFO is in INITIAL_ALIGNMENT and ENVIRONMENT
Still recovering from the 7.4 EQ today in Vanuatu. After this, there were many 5+ Mag aftershocks and our primary microseism is still above the 0.1 line. We briefly left EQ mode for 5 or so minutes before another earthquake (Atlantic Ocean this time) caused us to reactivate.
I began locking at 05:22 UTC and managed to get to PRMI but had no flashes. A few minutes later, another EQ hit and we lost lock. At this point, I started initial alignment. I believe IFO will be able to lock very soon since seismic conditions are coming down quickly.
Other:
As the Pacific plate shook, Robert took the time to conclude his viewport work, meaning we are capable of transitioning to LASER SAFE once again. I have also informed the morning operator independently.
Interesting lockloss that happened after the detection of the 7.4 mag EQ but before any picket fence or STS detection. So I don't think it was caused by the EQ, but it seems suspicious. Either way, we're riding out a huge EQ right now and will attempt locking afterwards.
TITLE: 12/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Today H1 has been locked the entire shift (over 18 hrs!), with the main activity being H1 out of Observing for the Monday commissioning work (which went 30 min over, for the period 1630-2006 UTC).
Microseism has mostly been flat just under the 95th percentile.
LOG:
Vacuum Prep's Dust Monitor (H1:PEM-CS_DUST_LAB2) continues to alarm on the Alarm Handler (white/invalid alarm) & comes up as NOT OK/Error on the Dust_Monitor script check.
Closes FAMIS#26020, last checked 81715
Things to note:
TITLE: 12/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 6:34 UTC (17hr 30 min lock!)
Environment looks good.
Camilla, Sheila
We pico'd to center the beam on the POP QPDs, which we had been falling off of. POP A QPD is used during DRMI ASC, where we had been using a large offset. Now in full lock POPA is centered, and in the SDF safe and observe files I've accepted turning off the offset and setting it to 0.
We also noticed a bug with the picomotor code: if someone changes the current motor while the H1:SYS-MOTION_C_PICO_A_INUSE flag (yellow box on the MEDM) is showing, the current motor channel changes and the MEDM makes it look like the current motor has changed, but clicking will move the wrong motor. The second screenshot shows this happening: at the time of the first time cursor, the current motor was 6 but motor 5 was moving.
WP12249
In preparation for the TW0 raw minute trend offload of data accumulated since 18th June 2024 (6 months), TW0 was configured to freeze the old data and write to a new location, and NDS0 was configured to serve the past 6 months of trends from their temporary location.
NDS0 DAQD was restarted at 11:41 PDT. This is not the default NDS, so it should have had no impact on control room or FOM.
At the time of writing, 32 of 256 dirs have been copied, which gives an ETA of around 13:00 tomorrow, Tue 17th Dec.
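The ETA is just a linear extrapolation from the directories copied so far; a sketch (the start and "now" times below are hypothetical placeholders, only the 32/256 figure comes from above):

from datetime import datetime, timedelta

copied, total = 32, 256
start = datetime(2024, 12, 16, 12, 0)  # placeholder: when the copy started
now = datetime(2024, 12, 16, 15, 0)    # placeholder: time of writing

rate = copied / (now - start).total_seconds()           # dirs per second
eta = now + timedelta(seconds=(total - copied) / rate)  # linear extrapolation
print("ETA:", eta)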
Closes FAMIS#28383, last checked 81688
Once again, the coherence for the ITMX bias drive with bias off is below the coherence threshold; this time the coherence is 0.01, much lower than the threshold of 0.1, so there are once again no new analyzed measurements for ITMX.
Checked that the measurements for ITMX bias are running and are of the same magnitude for bias on and bias off. They are at a lower magnitude than the quadrant injection on the ESDAMON/LVEASDAMON, so we could think about increasing the magnitude if we are not happy with the conclusion from 81688 that there is no charge build-up on the test mass. We also still have a pending "to do" from Vlad in 79597: explicitly cast DARM data to np.float64.
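(For reference, the pending cast is a one-liner along these lines; the variable and function names here are hypothetical:)

import numpy as np

def to_float64(darm_data):
    # explicitly cast the fetched DARM data to float64 before analysis
    return np.asarray(darm_data, dtype=np.float64)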
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but since this made the chiller wobbly, we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before this date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller was already at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 GPM. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction) and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18