Search criteria
Section: H2
Task: TCS

Reports until 10:54, Wednesday 25 June 2025
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 10:54, Wednesday 25 June 2025 - last comment - 13:47, Wednesday 25 June 2025(85327)
Lockloss

Lockloss during commissioning at 2025-06-25 17:52 UTC after over 5.5 hours locked. Cause not known, but probably not commissioning-related.

Comments related to this report
oli.patane@LIGO.ORG - 12:56, Wednesday 25 June 2025 (85334)

18:48 Back to NOMINAL_LOW_NOISE

camilla.compton@LIGO.ORG - 13:47, Wednesday 25 June 2025 (85335)TCS

Within ~15 seconds of the lockloss, we turned the CO2 powers down from 1.7W each to 0.9W each, in the hope of repeating the thermalization tests we tried last week 85238.

We checked today's lockloss, and the LSC channels from last week at the time we turned the CO2s down, and see no glitches corresponding with the (out-of-vac) CO2 waveplate change, so we think the lockloss was unrelated.
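A minimal sketch of that kind of check (channel name, time, and whitening parameters below are placeholders, not what was actually used):

import numpy as np
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

# Hypothetical sketch: pull an LSC channel around the time the CO2 powers were
# stepped last week and whiten it to look for glitches near that time.
t0 = to_gps("2025-06-18 18:00:00")      # placeholder for last week's CO2 power step
data = TimeSeries.get("H1:LSC-DARM_IN1_DQ", t0 - 30, t0 + 30)

# whiten and plot; a glitch coincident with the CO2 change would stand out
white = data.whiten(fftlength=4, overlap=2)
plot = white.plot(epoch=t0, ylabel="whitened amplitude")
plot.savefig("lsc_around_co2_step.png")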

H1 ISC (SQZ, TCS)
camilla.compton@LIGO.ORG - posted 08:51, Monday 09 June 2025 (84894)
ISCT1 Bellows Back on, all LVEA lasers back on.

TJ, Camilla. WP 12605, WP 12593

As HAM1 is now low enough in pressure, we removed the HAM1 +Y yellow VP covers, re-attached bellows and removed guillotines. ISCT1 was moved back into place on Friday 84850.

After TJ transitioned to Laser Hazard, we opened the ALS and PSL light pipes and turned the SQZ, CO2, and HWS lasers back on.

After an okay from VAC, the IMC locked without issue. PRC and MICH alignments look good enough on the AS AIR camera, and we will begin realigning ISCT1 soon.

H1 CDS (CDS, TCS)
erik.vonreis@LIGO.ORG - posted 12:39, Friday 06 June 2025 (84865)
HWS data files moved

HWS servers now point to /ligo/data/hws as the data directory.

The old data directory, h1hwsmsr:/data, has been moved to h1hwsmsr:/data_old.

The contents of the old directory were copied into the new directory, except H1/ITMX, H1/ITMY, H1/ETMX, H1/ETMY, under the assumption that these only contain outputs from the running processes.
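A rough sketch of the copy described above (purely illustrative; the paths are the ones named in this entry, and the exclusion handling is an assumption about how the per-optic output directories were skipped):

import shutil

# Illustrative only: copy the old HWS data into the new directory while
# skipping the per-optic directories that hold live process outputs.
SRC = "/data_old"           # old directory on h1hwsmsr (after the rename)
DST = "/ligo/data/hws"      # new HWS data directory

shutil.copytree(
    SRC,
    DST,
    ignore=shutil.ignore_patterns("ITMX", "ITMY", "ETMX", "ETMY"),
    dirs_exist_ok=True,     # allow copying into an existing destination
)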

HWS processes on h1hwsmsr, h1hwsmsr1, h1hwsex were stopped and restarted and are writing to the new directory.

h1hwsey had crashed previously and wasn't running. It was restarted and is also writing to the new directory.

H1 TCS (TCS)
corey.gray@LIGO.ORG - posted 13:55, Monday 02 June 2025 (84721)
TCS Chiller Water Level Top-Off (Bi-Weekly, FAMIS #27816)

Addressed TCS Chillers (Mon [Jun 2] 1:00-1:25am local) & CLOSED FAMIS #27816:

H1 ISC (TCS)
camilla.compton@LIGO.ORG - posted 13:47, Monday 14 April 2025 (83901)
Checking Return EY ALS beam with QPD offsets on and off.

TJ, Sheila, Matt, Andrew (job shadow today), Camilla. WP12433. Table layout D1400241.

Following on from other beam profile work at the end stations (81358), Madi Simmonds' models (G2500646), and the ideas we had for better understanding ETM mode matching (googledoc), today we took some beam profiles of the EY ALS beam reflected off ETM HR in the HWS path.

We added a 532nm quarter waveplate downstream of the beamsplitter (ALS-M11) and tuned it to maximize the light reflected into the HWS path. This was removed when we were finished, and we left with the NanoScan beam profiler.

We took some beam profile measurements first with the QPD offsets at their nominal values (photos 1-4), and then with the offsets zeroed (photos 5-9). Unless the beam has been pico-ed (which it may have been), the zeroed offsets should have the beam more centered on the TransMon parabolic mirrors. The beam changed when the offsets were turned off, but did not look better or worse, just different. Compare photo 4 (QPD offsets on) and photo 5 (offsets off).

Photo # | Distance downstream of HWS-M3 (between M3 and M4) | A1 horizontal 13.5% | A2 vertical 13.5% | A1 horizontal 50% | A2 vertical 50%

Started with QPD offsets on at their usual values:
1 | 50.5 cm | 1485 | 1685 |  723 |  871
2 | 42.5 cm | 1783 | 1985 |  877 |  970
3 | 36.5 cm | 1872 | 2146 |  924 | 1135
4 | 13.0 cm | 2540 | 2991 | 1247 | 1516

Then we turned off the QPD offsets and retook the data:
9 | 51.0 cm | 1370 | 1570 |  676 |  853
8 | 33.5 cm | 1939 | 2184 |  959 | 1086
7 | 48.5 cm | 1440 | 1655 |  706 |  876
6 | 36.0 cm | 1830 | 2130 |  913 | 1098
5 | 13.0 cm | 2186 | 2861 | 1045 | 1077
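As a possible cross-check, the widths above could be fit to the Gaussian beam propagation formula to estimate a waist size and location. Below is a minimal sketch, assuming the 13.5% widths are 1/e^2 diameters in microns and treating the distances as metres from HWS-M3 (neither unit nor clip-level convention is stated above), fitting only the QPD-offsets-on horizontal data:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch: fit 1/e^2 beam radii vs. distance to a Gaussian beam
# profile to estimate the waist. Units and clip-level interpretation are
# assumptions, not taken from this log entry.
LAM = 532e-9  # ALS wavelength [m]

# Distances downstream of HWS-M3 [m] and A1 horizontal 13.5% widths,
# transcribed from the "QPD offsets on" rows above (widths assumed to be
# 1/e^2 diameters in microns, converted to radii in metres).
z = np.array([0.505, 0.425, 0.365, 0.130])
w = np.array([1485.0, 1783.0, 1872.0, 2540.0]) * 1e-6 / 2.0

def spot_size(z, w0, z0):
    """1/e^2 radius of a Gaussian beam at position z, with waist w0 at z0."""
    zR = np.pi * w0**2 / LAM              # Rayleigh range
    return w0 * np.sqrt(1.0 + ((z - z0) / zR) ** 2)

(w0, z0), _ = curve_fit(spot_size, z, w, p0=[200e-6, 1.0])
print(f"waist w0 ~ {w0 * 1e6:.0f} um, located ~{z0:.2f} m downstream of HWS-M3")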
Images attached to this report
H1 TCS (TCS)
ibrahim.abouelfettouh@LIGO.ORG - posted 12:36, Thursday 03 April 2025 (83726)
TCS Chiller Water Level Top Off - FAMIS 27812

Closes FAMIS 27812. Last checked in alog 83486

CO2Y:

Before: 10.4ml

After: 10.4ml

Water Added: 0ml

CO2X:

Before: 30.1ml

After: 30.1ml

Water Added: 0ml

 

No water leaking in our leak detection system (aka dixie cup)

H1 ISC (CAL, ISC, TCS)
jeffrey.kissel@LIGO.ORG - posted 12:31, Thursday 20 March 2025 (83468)
9.75 kHz to 10.75 kHz ASD of OMC DCPD A during Nominal Low Noise, Calibrated in OMC DCPD TEST DAC Drive
J. Kissel

I've been tasked with using the analog voltage input to the OMC DCPD transimpedance amplifiers (LHO:83466) to drive a sine wave around 10 kHz, in order to try to replicate a recent PI ring up which apparently caused broad-band low frequency noise LHO:83335.

There is a remote excitation channel that typically drives this analog input, but that excitation path runs at 16 kHz: the H1:OMC-TEST_DCPD_EXC channel's filter module lives in the h1lsc.mdl model, in a top_names OMC block at the top level of the model, and the output of that filter bank is connected to the lsc0 IO chassis DAC_0's 11th/12th digital/analog channel. That DAC's AI channel goes through quite an adventure before arriving at the OMC whitening chassis input -- following D1900511:
    - AI chassis for lsc0 IO chassis DAC_0 lives in ISC-C1 U7, port OUT8-11 D9M spigot (page 2, component C15), connected to cable ISC_406
    - The long ISC_406 DB9 cable connects to a LEMO Patch Panel (D1201450) on the other end in ISC-R5 U12 (page 22, component C210).
    - Funky LEMO to D9M cable D2200106 ISC_444 connects from the patch panel to the whitening chassis,
    - The D9F end of ISC_444 lands at the J4 D9M port labeled "From AI/DAC / Test Inputs" of the OMC DCPD whitening chassis in U24 of the ISC-R5 rack.

... but this won't work: this channel runs at a 16 kHz sampling frequency, so technically we can't drive anything above 8192 Hz, and realistically nothing above ~5-6 kHz.
I digress.

I'm going with SR785 excitation via DB9 breakout.

Anyways -- I attach here an ASD of the OMC DCPDs sampled at 524 kHz during nominal low noise this morning (taking advantage of the live-only 524 kHz channels), low-passed at 1 Hz, without any digital AA filters; these are the H1:OMC-DCPD_524K_A1 filter bank channels from LHO:82686. The OMC DCPD z:p = 1:10 Hz whitening is ON during nominal low noise.

The top panels show zoomed-in and zoomed-out versions of the DCPD A ASD calibrated into photocurrent on the PDs (H1:OMC-DCPD_524K_A1_OUT channel, * 0.001 [A/mA]; all the rest of the calibration is built into the front-end filter bank, see OUT DTT Calibration).
The bottom panels show zoomed-in and zoomed-out versions of the DCPD ASD calibrated into ADC input voltage, with whatever whitening was ON divided out, and the 2x ~11 kHz poles of the TIA divided out (H1:OMC-DCPD_524K_A1_IN channel, * 1/4 [four-channel copy sum-to-average conversion] * 0.0001526 [V/ct for a 40 Vpp range over an 18-bit ADC] * z:p = (10):(1) inverse whitening filter * z:p = (11k,10k):([]) inverse HF TIA response).
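For concreteness, here is a rough sketch of that calibration chain applied in the frequency domain (data access for the live-only 524 kHz channels is not shown, and the pole/zero frequencies are simply those quoted above):

import numpy as np
import scipy.signal as sig

# Minimal sketch, not the DTT calibration itself: scale raw ADC counts to volts
# and divide out the 1:10 Hz whitening and the two ~10/11 kHz TIA poles.
fs = 2**19                              # ~524 kHz
counts = np.random.randn(fs * 16)       # placeholder for H1:OMC-DCPD_524K_A1_IN data

f, psd = sig.welch(counts, fs=fs, nperseg=fs)   # 1 Hz resolution
asd_counts = np.sqrt(psd)

adc_v_per_ct = 40.0 / 2**18             # 40 Vpp over an 18-bit ADC [V/ct]
chan_avg = 1.0 / 4.0                    # four-channel copy: sum -> average

s = 2j * np.pi * f
whitening = np.abs((1 + s / (2 * np.pi * 1.0)) / (1 + s / (2 * np.pi * 10.0)))
tia = np.abs(1.0 / ((1 + s / (2 * np.pi * 10e3)) * (1 + s / (2 * np.pi * 11e3))))

# equivalent ADC-input voltage with whitening and TIA poles divided out
asd_volts = asd_counts * chan_avg * adc_v_per_ct / whitening / tia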

In the zoomed-in version of the plots, you can clearly see the mess of acoustic modes at 10.4 kHz.
One can also see a relatively noise-free region at 10.3 kHz that looks clean enough for me to target with my analog excitation.

I plot the de-whitened ADC voltage because the transfer function between the TEST DAC input and the TIA's output voltage is designed to be 1.0 in the GW band, from ~50 Hz to 5 kHz, given that in that band the transimpedance is 100e3 [V/A] and the series resistor that turns the voltage injection at the TEST input into current is 100e3 [V/A] (see ). So "nominally" un-whitened ADC counts should be a one-to-one map to the DAC voltage input. At 10 kHz, however, the 2x ~11 kHz poles of the TIA drop the TEST input voltage by a factor of ~5x (see, e.g., plots from LHO:78372).

So, that's why the lower panels of the above-mentioned plot show OMC DCPD A's ADC voltage calibrated in the way described above: this should be a one-to-one map to the equivalent DAC voltage to drive.

It looks like the non-rung-up 10.4 kHz PI modes today were of order 1e-2 V = 10 mV, and the surrounding noise floor is around 1e-5 V = 10 uV = 0.01 mV of equivalent DAC drive.

The lower limit of the SR785 SRC amplitude is 0.1 mVp = 0.2 mVpp (single ended).
Also, when using the SR785 in FFT measurement mode, where you can drive the source at a single frequency, there is no amplitude ramp: the source is just ON or OFF.
(I should also say that the SRS DS340 function generator, which I also considered using, doesn't have a ramp on its output either.)

So -- hopefully, suddenly turning on a noisy 10.3 kHz line at 0.1 mVp into the IFO with a noise floor of 0.01 mV/rtHz at 10kHz won't cause problems, but ... I'm nervous.


Images attached to this report
H1 TCS (TCS)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:29, Friday 07 March 2025 (83234)
TCS Monthly Trends - FAMIS 28458

Closes FAMIS 28458. Last checked in alog 82659.

Images attached to this report
H1 TCS (TCS)
corey.gray@LIGO.ORG - posted 17:51, Wednesday 05 February 2025 - last comment - 09:41, Monday 10 February 2025(82659)
TCS Monthly Trends (FAMIS #28457)

Attached are monthly TCS trends for CO2 and HWS lasers.  (FAMIS link)

Comments related to this report
corey.gray@LIGO.ORG - 09:41, Monday 10 February 2025 (82717)TCS

Whoops! Forgot to attach the actual trends!! (thanks for catching this, Camilla!) Attachments are now attached!

Images attached to this comment
H1 TCS (TCS)
anthony.sanchez@LIGO.ORG - posted 16:19, Tuesday 31 December 2024 (82066)
TCS Chiller Water Level Top-Off_Weekly

FAMIS: 27805

CO2X was found at 30.0; I added 150 ml to get it right to the MAX line at 30.5.

CO2Y was found at 10.3, I added 150ml to get it to the MAX line at 10.7.

Side note:
I was up there checking for leaks and I saw this old-looking goopy mess toward the corner of the mezzanine. It looks like it is from an old HEPI fluid leak. Tagging SEI.

Images attached to this report
H1 ISC (OpsInfo, TCS)
thomas.shaffer@LIGO.ORG - posted 09:11, Wednesday 18 December 2024 (81891)
EY ring heater bumped from 1.1W to 1.2W

To hopefully help combat our recent bouts of PI24 ring-ups (alog 81890), we've bumped the ETMY ring heater power from 1.1W per segment to 1.2W. The safe.snap and observe.snap are updated.

We also have a plan to update the PI guardian to temporarily increase the ETM bias to hopefully give us more range to damp the PI. We will need to be out of Observing for this, but we could hopefully save the lock. TBD.

Images attached to this report
H1 General (DetChar, ISC, SUS, TCS)
derek.davis@LIGO.ORG - posted 11:27, Wednesday 11 December 2024 (81764)
LASSO investigations into sharp turn-off of Dec 11 glitching

The broadband glitching that was present in the early hours of Dec 11 (UTC) appears to have suddenly and entirely stopped at 10:36:30 UTC - this sharp feature can be seen in the daily range plot. I completed a series of LASSO investigations around this time in the hope that such a sharp feature would make it easier for LASSO to identify correlations. I found a number of trend channels, related to TCS-ITMY_CO2, ALS-Y_WFS, and SUS-MC3_M1, that have drastic changes at the same time as this turn-off point.

The runs I completed are linked here: 

  1. LASSO run of the first half of Dec 11 with the sensemon range as the primary channel 
  2. LASSO run of times near the turn-off point with TCS-ITMY_CO2 as the primary channel 
  3. LASSO run of times near the turn-off point with the sensemon range as the primary channel

Run #1 was a generic run of LASSO in the hopes of identifying a correlation. While no channel was highlighted as strongly correlated to the entire time period, this run does identify  H1:TCS-ITMY_CO2_QPD_B_SEG2_OUTPUT (rank 11) and H1:TCS-ITMY_CO2_QPD_B_SEG2_INMON (rank 15) as having a drastic feature at the turn-off point (example figure). Based on this information, I launched targeted runs #2 and #3. 

Run #2 is a run of LASSO using H1:TCS-ITMY_CO2_QPD_B_SEG2_OUTPUT as the primary channel to correlate against. This was designed to identify any additional channels that may show a drastic change in behavior at the same time. Channels of interest from this run include H1:ALS-Y_WFS_B_DC_SEG3_OUT16 (example figure) and H1:ALS-Y_WFS_B_DC_MTRX_Y_OUTMON (example figure). SEISMON channels were also found to be correlated, but this is likely a coincidence. 

Run #3 targets the same turn-off point, but with the standard sensemon range as the primary channel. This run revealed an additional channel with a change in behavior at the time of interest, H1:SUS-MC3_M1_DAMP_P_INMON (example figure). 

Based on these runs, the TCS-ITMY_CO2 and ALS-Y_WFS channels are the best leads for additional investigations into the source of this glitching. 
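For illustration only (this is not the DetChar lasso pipeline used for the runs above), the basic idea is an L1-penalized regression of the primary channel against many auxiliary trend channels, ranking channels by coefficient magnitude; channel names and data here are placeholders:

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Stand-in data: a "primary" channel (e.g. the sensemon range) and a set of
# auxiliary minute-trend channels, all randomly generated for this sketch.
rng = np.random.default_rng(0)
n = 720                                   # e.g. 12 h of minute trends
primary = rng.standard_normal(n)
aux = rng.standard_normal((n, 50))
names = [f"H1:AUX_CHANNEL_{i}" for i in range(aux.shape[1])]

X = StandardScaler().fit_transform(aux)
y = StandardScaler().fit_transform(primary.reshape(-1, 1)).ravel()

model = Lasso(alpha=0.1).fit(X, y)        # L1 penalty drives most coefficients to zero
ranked = sorted(zip(names, model.coef_), key=lambda p: -abs(p[1]))
for name, coef in ranked[:5]:
    print(f"{name:30s}  coef = {coef:+.3f}")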

Images attached to this report
H1 CDS (CDS, ISC, SYS, TCS)
fernando.mera@LIGO.ORG - posted 16:00, Tuesday 10 December 2024 (81746)
Picomotor controller inspections for LVEA

Per the WP12246:

Visual inspections were performed in the LVEA to track the wires and verify the picomotor controllers' existence, connections, and spares (physically only). The information gathered was added to document E1200072. Important findings were made, and the document is now close to matching the actual installation. Electrical investigations will follow to determine the spares. The rack names as well as the physical locations of the controllers were verified using the O5 ISC wiring diagram D1900511.

Marc, Fernando

LHO General
thomas.shaffer@LIGO.ORG - posted 22:00, Sunday 01 December 2024 - last comment - 14:26, Monday 02 December 2024(81569)
Ops Eve Shift End

TITLE: 12/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A long relock today from DRMI locking struggles and lock losses at Transition_From_ETMX. Once back up to observing the range hasn't been very stable, hovering between 140-150Mpc. I was waiting for a drop in triple coincidence before trying to tune squeezing, but that didn't happen before the end of my shift. The 15-50Hz area in particular looks especially poor.
LOG:

Comments related to this report
camilla.compton@LIGO.ORG - 09:54, Monday 02 December 2024 (81577)

Plot of range and glitches attached.

It didn't look like squeezing was explicitly the issue last night when the range was low, as SQZ was stable (sqz plot). The range seemed to correct itself on its own as we stayed in Observing. If it happens again, we could go to NoSQZ or FIS to check it's not backscatter from SQZ or the FC.

Oli and I ran a Bruco during the low-range time (bruco website), but Sheila noted that the noise looks non-stationary, like scatter, so a Bruco isn't the best way of finding the cause.

Images attached to this comment
ryan.crouch@LIGO.ORG - 13:13, Monday 02 December 2024 (81584)ISC

I ran a range comparison using 50 minutes of data from a time in the middle of the bad range and a time after it stopped with good range. Excess noise looks to be mostly below 100 Hz for sensemon; for DARM the inflection point looks to be at 60 Hz, and there is broadband noise, but low frequency again seems larger.

I also checked these same times in my "Misaligned" GUI, which compares SUS top-mass OSEMs, witness sensors, and oplev average motion to compare alignments for relocking and to look for drifts. It doesn't look all that useful here; the whole IFO is moving together throughout the lock. I ran it for separate times within the good-range block as well and it looked pretty much the same.

Images attached to this comment
Non-image files attached to this comment
camilla.compton@LIGO.ORG - 13:32, Monday 02 December 2024 (81586)OpsInfo

As discussed in today's commissioning meeting, if this low range with glitches on Omicron at low frequency happens again, can the operator take SQZ_MANAGER to NO_SQUEEZING for 10 minutes so that we can check this isn't caused by backscatter from something in HAM7/8. Tagging OpsInfo.

derek.davis@LIGO.ORG - 13:41, Monday 02 December 2024 (81587)DetChar, SQZ

Runs of HVeto on this data stretch indicate extremely high correlations between strain glitches and glitches in SQZ FC channels. The strongest correlation was found with H1:SQZ-FC_LSC_DOF2_OUT_DQ.  

The full HVeto results can be seen here: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20241202/1417132818-1417190418/
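As a toy illustration of the idea behind HVeto (the real analysis is the hveto pipeline linked above, not this sketch): count coincidences between h(t) glitch times and aux-channel glitch times within a window and compare against the Poisson expectation from chance; all numbers here are made up.

import numpy as np
from scipy.stats import poisson

# Toy example only: random "glitch" times standing in for Omicron triggers in
# h(t) and in an aux channel (e.g. SQZ-FC_LSC_DOF2).
rng = np.random.default_rng(1)
T = 3600.0                                        # analysis span [s]
darm_glitches = np.sort(rng.uniform(0, T, 200))
aux_glitches = np.sort(rng.uniform(0, T, 150))
window = 0.1                                      # coincidence window [s]

# a DARM glitch is "vetoed" if any aux glitch lies within +/- window of it
idx = np.searchsorted(aux_glitches, darm_glitches)
left = np.abs(aux_glitches[np.clip(idx - 1, 0, len(aux_glitches) - 1)] - darm_glitches)
right = np.abs(aux_glitches[np.clip(idx, 0, len(aux_glitches) - 1)] - darm_glitches)
n_coinc = np.sum(np.minimum(left, right) < window)

expected = len(darm_glitches) * len(aux_glitches) * 2 * window / T
significance = -poisson.logsf(n_coinc - 1, expected)   # ~ -ln P(>= n_coinc | chance)
print(f"{n_coinc} coincidences, {expected:.1f} expected, significance {significance:.1f}")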

An example of H1 strain data and the channel highlighted by HVeto can be seen in the following attached plots: 

Images attached to this comment
oli.patane@LIGO.ORG - 14:26, Monday 02 December 2024 (81588)DetChar, TCS

Derek also kindly ran lasso for that time period (link to lasso run), and the top correlated channel is H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ. Back in May we were seeing correlations between drops in range, FC alignment, and the values in this same TCS channel (78089). Here's a screenshot of the range vs. that channel - the TCS channel matches how it was looking back in May. As stated in that May alog thread, the cable for this channel was, and still is, unplugged :(

Images attached to this comment
H1 OpsInfo (OpsInfo, PEM, SUS, TCS)
camilla.compton@LIGO.ORG - posted 14:38, Monday 25 November 2024 (81474)
PEM and SUS In-lock charge Injections to start early at 7am and 7:25am tomorrow

I edited the sys/h1/guardian/injparams.py and sus/h1/guardian/SUS_CHARGE.py code to move the magnetic injections and in-lock charge measurements 20 minutes earlier tomorrow; the usual start times of 7:20am and 7:45am are moved to 7:00am and 7:25am, and they should be over by 7:40am. If we drop out of observing, can an operator please reload the PEM_MAG_INJ and SUS_CHARGE guardians.

We aim to turn the CO2 lasers off at 7:40am tomorrow, before the lockloss, to see if we see SQZ alignment changes related to the CO2 lasers similar to what LLO sees: 72244.

H1 PEM (DetChar, PEM, TCS)
robert.schofield@LIGO.ORG - posted 18:06, Thursday 14 November 2024 - last comment - 10:19, Thursday 19 December 2024(81246)
TCS-Y chiller is likely hurting Crab sensitivity

Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use  a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
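A rough sketch of that kind of time-frequency comparison (channel names, GPS times, and spectrogram parameters below are placeholders, not the ones used here):

from gwpy.timeseries import TimeSeries

# Illustrative only: compare normalized ASD spectrograms of DARM and a
# microphone to see whether a narrow peak follows the same time-frequency path.
start, end = 1415000000, 1415000600          # placeholder 10-minute span

channels = {
    "darm": "H1:GDS-CALIB_STRAIN",
    "bs_mic": "H1:PEM-CS_MIC_LVEA_BS_DQ",    # assumed BS microphone channel name
}

for label, chan in channels.items():
    data = TimeSeries.get(chan, start, end)
    specgram = data.spectrogram(stride=10, fftlength=4, overlap=2) ** (1 / 2.)
    plot = specgram.ratio("median").plot(norm="log")   # normalize out the static spectrum
    ax = plot.gca()
    ax.set_yscale("log")
    ax.set_ylim(20, 500)
    plot.savefig(f"{label}_specgram.png")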

Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).

I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air. 

Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound. 

Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.

For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.

Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:12, Monday 25 November 2024 (81472)DetChar, TCS

This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y but this did make the chiller wobbly so we placed thinner foam under CO2X.

Images attached to this comment
keith.riles@LIGO.ORG - 08:10, Thursday 28 November 2024 (81525)DetChar
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion.

Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
Images attached to this comment
camilla.compton@LIGO.ORG - 15:02, Tuesday 03 December 2024 (81598)DetChar, TCS

This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before that date.

Opened FRS 32812.

There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).

camilla.compton@LIGO.ORG - 11:27, Thursday 05 December 2024 (81634)TCS

Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached. 

Images attached to this comment
keith.riles@LIGO.ORG - 06:04, Saturday 07 December 2024 (81663)
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
Images attached to this comment
thomas.shaffer@LIGO.ORG - 15:53, Tuesday 10 December 2024 (81745)TCS

I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.

These ThermoFlex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller was already at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.

Two questions came from this:

  1. Why are we running so close to the 3.8gpm minimum?
  2. Why is the flow rate for the X chiller so low?

The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.

Images attached to this comment
keith.riles@LIGO.ORG - 07:52, Friday 13 December 2024 (81806)
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? 

Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
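For reference, a sketch of how such a zoomed comparison could be made with gwpy (the channel name, time spans, FFT length, and the exact band around the ~59.3 Hz Crab GW frequency are assumptions, not what was used for the attachments):

from gwpy.timeseries import TimeSeries

# Illustrative only: overlay ASDs from before and after the flow-rate change,
# zoomed to the Crab band.
spans = {
    "Dec 07 (before)": ("2024-12-07 08:00", "2024-12-07 12:00"),
    "Dec 12 (after)": ("2024-12-12 08:00", "2024-12-12 12:00"),
}

plot, ax = None, None
for label, (t0, t1) in spans.items():
    hoft = TimeSeries.get("H1:GDS-CALIB_STRAIN", t0, t1)
    asd = hoft.asd(fftlength=600, overlap=300, method="median")
    if plot is None:
        plot = asd.plot(label=label)
        ax = plot.gca()
    else:
        ax.plot(asd, label=label)

ax.set_xlim(59.0, 59.7)     # zoom to the Crab band
ax.set_yscale("log")
ax.legend()
plot.savefig("crab_band_zoom.png")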



Images attached to this comment
camilla.compton@LIGO.ORG - 11:34, Tuesday 17 December 2024 (81866)TCS

TJ touched the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 14:16, Tuesday 17 December 2024 (81875)

The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab band.

keith.riles@LIGO.ORG - 10:19, Thursday 19 December 2024 (81902)
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.

Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18 
Images attached to this comment