H1 SQZ
victoriaa.xu@LIGO.ORG - posted 17:42, Monday 28 August 2023 (72496)
Quick asqz/sqz measurement at 25W, now seems consistent with recent sqz levels at 60W?

Naoki, Vicky

At 25W, holding at REDUCE_RF45_MODULATION_DEPTH before MAX_POWER, we did a quick anti-sqz/sqz measurement to see if we could loosely infer sqz losses with the IFO relatively cold. We did not clearly see more squeezing here at lower power / colder IFO, as we have in the past, e.g. 66877. We weren't able to easily engage ASC here (it was before move_spots), but slider values were similar to when the IFO was last locked, and a little bit of walking the alignment didn't make a big difference, so we left ASC off.

Looking at DARM (based on GDS), and the SQZ BLRMS, we can read the sqz levels:

which are surprisingly consistent with what we have at full power, e.g. the recent sqz dataset in 71902 with the IFO at 60W and thermalized in full lock.
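
For reference, reading sqz levels this way boils down to a dB ratio of the squeezed and unsqueezed shot-noise-limited DARM ASDs (a minimal sketch with made-up numbers, not the exact BLRMS pipeline):

    import numpy as np

    def sqz_level_db(asd_sqz, asd_nosqz):
        # Negative = squeezing (noise reduction); positive = anti-squeezing.
        return 20 * np.log10(asd_sqz / asd_nosqz)

    # e.g. a shot-noise-limited DARM ASD near a few kHz dropping from
    # 4.0e-24 to 2.8e-24 /rtHz corresponds to ~ -3.1 dB of measured squeezing.
    print(sqz_level_db(2.8e-24, 4.0e-24))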

Thoughts:
  -- I'm not sure what it means that now we have similar amounts of squeezing with the IFO at low and high power.
  -- For this test, we left PSAMS at 200V/200V and didn't try walking PSAMS, though in 66877 we saw better squeezing with a cold IFO and lower PSAMS.
  -- The slope of DARM at high ~kHz frequencies seems to be slightly different between 25W and 60W? From the dtt, compare red dashed = "60W no sqz" & cyan line = "25W FDS". Here, the lock sequence was at REDUCE_RF45_MODULATION_DEPTH, so before lownoise length control and laser noise suppression.

We then measured NLG ~ 11.05. In the sqz dataset 71902 we did not separately measure NLG, but given that the NLG has remained consistent with the Feb 2023 NLG calibration, we can re-affirm that trusting the calibration is probably okay, assuming the OPO temperature is well-tuned. For comparison, the NLG calibration suggests NLG = 11 (gen sqz ~ 14.75 dB) with opo_trans = 80uW.

Images attached to this report
H1 ISC (ISC, Lockloss, OpsInfo, SEI, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:59, Monday 28 August 2023 - last comment - 16:15, Tuesday 29 August 2023(72497)
2023-08-25 Large EY Temperature Impulse Impact on IFO: Yaw Dominated by SEI Z to RZ from Blade Springs, Pitch Dominated by SUS Sag From Blade Springs
J. Kissel, J. Warner

Given the lovely "natural experiment" of the Aug 25th 2023 ~3.2 [deg F] / 1.8 [deg C]  temperature increase "impulse" over 4 hours at EY (see LHO:72444 and LHO:72428), I wanted to understand both 
    (a) How the IFO handles it / what caused the lock loss
    (b) Document the levels and time-scales of alignment change that resulted in bad alignment for *several* lock stretches -- indeed *days* after the impulse.

We often wave our hands saying things like "well, the vacuum system acts like a low pass filter, with a time constant of [[insert hand-waver's favorite time-scale on the order of hours]]." I wanted to see if we could quantify that, and if not, add a bit more clarity to how complicated the situation is.

To do so, I looked at the Y End Stations' signals in Z, RZ, PIT, and YAW that are either 
    (1) out of loop, or 
    (2) when in-loop -- the loop's feedback control output using the "classic trick" of approximating G/(1+G) ~ 1, where G >> 1, such that (plant) * CTRL = out-of-loop signal as though the loop wasn't there.

Those sensors include 
    - HPI ST1 ISO OUT (which are the IPS, under DC-coupled feedback control) -- calibrated into nano- meters or radians
    - ISI ST1 ISO OUT (which are the CPS, under DC-coupled feedback control) -- calibrated into nano-
    - ISI ST2 ISO OUT (which are the CPS, under DC-coupled feedback control) -- calibrated into nano- 
    - SUS ETMY M0 LOCK OUT (i.e. the WFS, under DC-coupled global ASC control) -- calibrated into micro- meters or radians
    - SUS ETMY M0 DAMP IN (which are "out-of-loop" because the local damping loops are AC-coupled) -- calibrated into micro-
    - SUS TMSY M1 DAMP IN (equally "out-of-loop")  -- calibrated into micro-
    - ETMY L3 Optical Levers -- calibrated into micro-
(I'm pleasantly surprised at how well they all agree, to the ~0.25 micro- kind of level that I have for these trends.)
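
As a loose illustration of that "classic trick" (numbers made up; the real calibration factors are in the attached .txt file):

    # With plant P, controller C, open loop gain G = P*C, and disturbance d:
    #     residual error = d / (1 + G),   CTRL = -C * d / (1 + G)
    #     =>  P * CTRL = -[G/(1+G)] * d  ~  -d   for G >> 1,
    # so a DC-coupled CTRL output, scaled through the plant, recovers the
    # motion the loop is suppressing.
    def out_of_loop_estimate(ctrl_counts, plant_gain_nm_per_count):
        # Approximate suppressed (out-of-loop) motion [nm] from an in-loop CTRL trend.
        return ctrl_counts * plant_gain_nm_per_count

    print(out_of_loop_estimate(1000, 0.5))   # 1000 counts * 0.5 nm/count -> 500.0 nm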

I conclude:
    (I) The IFO Yaw is most impacted by the SEI system's RZ motion, due to the Z to RZ cross-coupling of the radially symmetric system of triangular ISI blade springs, as the blades sag from temperature increase
        (i) The total SEI system's yaw swung ~2 [urad] during the excursion dominated by ISI ST1, 
        (ii) The SUS ETMY and SUS TMSY follow this input in common, and 
        (iii) ISI ST1 takes the longest time to recover alignment --  then trending over *days* slowly back to pre-impulse equilibrium -- and still not yet there as of Aug 28
    (II) The IFO Pitch is most impacted by the ETMY and TMSY SUS system's expected Pitch and Vertical sag from temperature increase.
        (i) The IFO's global alignment drives the pitch of ETMY, which drifted *down* in pitch over ~10 [urad] before losing lock, presumably from running out of range
        (ii) The ASC signals seem to slowly drive the ETMY off into the weeds trying to recover the original, pre-impulse alignment, causing *eventual* subsequent lock losses as it pushes the optic *past* the pre-impulse alignment position
        (iii) The TMS, which does not have global control, pitches a similar amount, ~14 [urad] in the *opposite direction, up* in pitch, and also taking *days* to get back the original value (somewhat alleviated )
        (iv) The fact that the Sus-point blades are the *same* for the QUAD and TMS, that they're the biggest blades in either SUS, and that the order of the pitching is about the same, implies to me that the pitching is dominated by the upper, Sus-point blades.

I attached the trends that drive me towards these conclusions. 
Give yourself time -- I've stared at these all afternoon to come to these conclusions. And honestly, I *still* don't think I've looked at enough plots (e.g. I don't show the ETMY alignment sliders that are the operators and/or initial alignment drives trying to make up for the SEI blades yaw and SUS blades pitch).

I also attach a .txt file that goes into more detail about how I calibrated the various CTRL signals, including where I got the transfer function values that are scaling the trends that you see.
Images attached to this report
Non-image files attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 16:15, Tuesday 29 August 2023 (72528)SEI, SUS

Very Interesting!

I've added a few more plots which show the in-loop motion as measured by the CPS sensors on ETMY. These show that the platform didn't move down or twist during the temperature excursion. This is what you would expect, given Jeff's plots from above - the springs sag, and the servos compensate. That's all just peachy, so long as there is no yaw seen at the optic - but the oplev does see yaw.
Either - there is some yaw in the ISI which is not seen by the CPS sensors (e.g. the sensor itself is temperature sensitive - but this should be pretty small), or
- the yaw is just from SUS, or
- the oplev is affected by temperature, or
- the yaw is coming from somewhere else (HEPI, piers, SUS, the devil, etc.)
I've not thought about this very hard yet - but I attach a 20 hour time stretch from the temp sensor (calibration is crazy, but the shape matches. not sure what's up) and 4 CPS cart-basis sensor signals (calibration should be nanometers or nanoradians). The in-loop change on the CPS is less than 1 nanorad.

Images attached to this comment
H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 16:24, Monday 28 August 2023 (72498)
Lockloss @ 23:20

Lockloss @ 23:20, DCPD saturation right before, cause unknown.

LHO General
corey.gray@LIGO.ORG - posted 16:23, Monday 28 August 2023 - last comment - 09:16, Tuesday 29 August 2023(72479)
Mon DAY Ops Summary

TITLE: 08/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Quiet shift until the noisy earth rolled a Mag7.1 Earthquake to H1.  Made it back to Low Noise (but then it just lost lock).

(Jim discovered LVEA lights were on...probably had been for several days!)

LOG:

Comments related to this report
corey.gray@LIGO.ORG - 09:16, Tuesday 29 August 2023 (72508)

Looks like I forgot to mention some items from my log (due to me not saving my aLog draft, getting logged out, and losing my morning logged items). Apologies for not having the times, but I just wanted to note:

  • TCS Chillers maintenance check (RyanC)
    • I'm guessing this was between 9-10am local time.
LHO General
austin.jennings@LIGO.ORG - posted 16:02, Monday 28 August 2023 (72494)
Ops Eve Shift Start

TITLE: 08/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 14mph Gusts, 10mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

- H1 is holding off at REDUCE RF45 for SQZ work

- SEI looks to have mostly recovered from the EQ earlier

- CDS/DMs ok

H1 General
corey.gray@LIGO.ORG - posted 14:23, Monday 28 August 2023 (72493)
Looking At Changes During Our 60hr Lock Since Range & Locking Has Been Rougher

After H1's longest lock of O4 thus far (60hr29min lock from 8/23 (Wed) 10:20am PT -- 8/25 (Fri) 11pm PT), H1 has been finicky with locking and has also had a degrading inspiral range.

The primary issue prior to this lockloss was the EY temperature increase of 3+degF due to FAILURE of Chiller Pump #1 at EY.  From about 10am-5pm the temperatures were NON-nominal (but H1 remained locked this whole time(!) as ASC maintained the pointing for EY during this HUGE temperature swing).

With that said about the temperature drift, and at TJ's request, I also wanted to list Commissioning changes which occurred during the 2.5 days of the lock.

(Times in PT)

Aug23 (Wed)

Aug24 (Thurs):  Nothing in alog

Aug25 (Fri)

Aug26 (Sat)

H1 CAL
anthony.sanchez@LIGO.ORG - posted 13:57, Monday 28 August 2023 (72489)
PCAL inLock Blocked Beams test

Last Tuesday (Aug 21, 2023) Rick and I went down to the EX station while the IFO was still locked. 
We asked the operator to turn off BRS sensor correction (Sitemap -> ISI_CONFIG -> SEI_CONF -> WINDY_NO_BRSX). This was to allow us to gently walk to the other side of the beam tube to access the PCAL Transmitter module.

Once there Rick and I opened the Tx enclosure and blocked the Outer ( Lower ) PCAL beam at GPStime: 1376752360
We saw the following motion of the PCAL channel: H1:CAL-CS_TDEP_PCAL_X_OVER_Y_REL_MAG.
We did expect this to go up and then settle down. We were surprised by how long it took to settle down.
At GPS time 1376753100 the inner (Upper) beam was blocked.
That configuration seems to have lasted uninterrupted until GPS time 1376754170.


At 1376754420, ISC_LOCK went from NOMINAL_LOW_NOISE to DOWN, signaling the start of the end station measurements made that day.
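
(For convenience, a minimal sketch -- assuming gwpy is available on a CDS workstation -- to convert the GPS times above to UTC:)

    from gwpy.time import tconvert

    # Print the UTC datetime corresponding to each GPS time quoted above.
    for gps in (1376752360, 1376753100, 1376754170, 1376754420):
        print(gps, "->", tconvert(gps))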

More analysis coming soon.

Images attached to this report
H1 SQZ
daniel.sigg@LIGO.ORG - posted 13:33, Monday 28 August 2023 - last comment - 10:59, Tuesday 05 September 2023(72490)
Squeezer Laser Diode Nominal Current Changed

The nominal value of the squeezer laser diode current was changed to 1.863 from 1.95. The tolerance is unchanged at 0.1. Looking at trends, we sometimes read a value just below 1.85, leading to a failed-laser condition which in turn triggers a relocking of the squeezer laser. However, since we are already locked, all we see is the fast and common gains ramping down and up.
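
(A minimal sketch of the check as described above -- the actual guardian/EPICS implementation may differ, and no channel names are claimed here:)

    NOMINAL = 1.863    # [A], new nominal diode current
    TOLERANCE = 0.1    # [A], unchanged

    def diode_current_ok(reading):
        # True if the diode current reading is within the allowed band.
        return abs(reading - NOMINAL) <= TOLERANCE

    # With the old nominal of 1.95 the allowed band was 1.85-2.05, so a reading
    # just below 1.85 tripped the failed-laser condition; with the new nominal
    # the same reading passes.
    print(diode_current_ok(1.849))   # True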

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 13:50, Monday 28 August 2023 (72491)

Looking at this diode current trend over the past 500 days, we see it fairly stable but trending down very slowly. It may have lost 10 mA over the past year. Resetting the nominal value should keep us in the good band for a while if this trend continues.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 17:58, Wednesday 30 August 2023 (72584)

So far this seems to have fixed the TTFSS gain changing issue! Haven't seen gain changes while locked in the past couple days, since Daniel changed the laser diode nominal current (bottom purple trend).

Images attached to this comment
daniel.sigg@LIGO.ORG - 10:59, Tuesday 05 September 2023 (72679)

In the past week there wasn't a single TTFSS gain ramping incident during lock. The fast and common gains are again monitored in SDF.

Images attached to this comment
H1 General
corey.gray@LIGO.ORG - posted 13:31, Monday 28 August 2023 (72488)
7.1 Magnitude Indonesian Earthquake Breaks Lock

At 20:16 UTC, H1 lost lock due to a 7.1 South Pacific earthquake.

We rode through the P & S waves....the R wave is still about 30+min away and it will be "spicy" ~Jim W.

I have taken H1 to IDLE....while we wait for the R wave to pass & for the planet to calm down after that.

Attached are some screenshots Tony snapped.

Images attached to this report
H1 TCS
ryan.crouch@LIGO.ORG - posted 11:23, Monday 28 August 2023 (72483)
TCS CO2 Water Chillers FAMIS 21128

Closes FAMIS 21128

The cup in the back corner of the mezzanine was empty.

LHO VE
david.barker@LIGO.ORG - posted 10:15, Monday 28 August 2023 (72481)
Mon CP1 Fill

Mon Aug 28 10:10:25 2023 INFO: Fill completed in 10min 20secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 09:34, Monday 28 August 2023 (72480)
PSL 10-Day Trends

FAMIS 19991

Jason was in the anteroom last Tuesday for inventory, which can be seen in the environmental trends. Since then, the differential pressure between the anteroom and the laser room has been elevated, but not alarmingly so.

Images attached to this report
LHO FMCS
bubba.gateley@LIGO.ORG - posted 08:36, Monday 28 August 2023 (72478)
Well Pump Running
Well pump is running to replenish the fire water tank. The pump will run for 4 hours and automatically shut down. 
LHO General
tyler.guidry@LIGO.ORG - posted 17:29, Friday 25 August 2023 - last comment - 12:29, Monday 28 August 2023(72444)
EY Chilled Water System Failure
At 9:45am local time Chiller 1 at End Y went into alarm for "Evaporator Water Flow Lost". When I arrived at the EY chiller yard I observed that neither chiller was running but chilled water pump 1 was continuing to run. I noted the alarm and headed for the mezzanine above the AHUs to assess what the supply and return pressures were nearest the evaporator coil.

Immediately I read 0 (or what I thought was 0) at the return line. This would generally indicate that there has been enough glycol loss within the system that makeup is necessary via the local tank (though I've never seen it get to 0). That was until I read that both supply lines were at an alarming 140 psi (normal operating pressures for all 4 supply and return lines float around 30). I immediately phoned Richard to have him command chilled water pump 1 off to stop the oversupply of chilled water. For reasons not clear to me, the disable command via FMCS Compass was not taken at the pump.

I went back to the chiller yard and observed that 1: the pump had not been disabled and 2: pressures at the pump were at around 100psi (normal operating for the current frequency is about 50).

Following that, I manually threw the pump off at the VFD to prevent further runaway of the system. Between the time of noting 140psi and manually throwing the pump off, the system pressure increased to 160psi.

After a thorough walk-down of the system, I elected not to utilize our designed redundancy in chiller 2 and chilled water pump 2 as I was still unaware what was causing the massive overpressure at all supply and return lines. It was also found that the return line was not actually at 0, but instead had made a full rotation and was pegged on the backside of the needle (all of these need replacement now).

Macdonald-Miller was called on site to help assess what the issue might be. Given that there were recent incursions to flow via R. Schofield, the strainer was the primary point of concern. We flushed the strainer briefly at the valve and noted a large amount of debris. After a second flush, much less/next to none was noted. This alleviated the system pressure substantially.

The exact cause of the fault and huge increase of pressure is still not clear. There are a number of flow switches at the chiller. Bryan with Mac-Miller suspects part of the issue may live there, and we are going to pursue this further during our next maintenance window. Work was also performed at the strainer within the chiller where rubber/latex-esque debris was found. Work on Chiller 1 will continue, but for now the system and end station are happy on Chiller 2/CHWP-2. Looking at the FMCS screen shows temps have normalized as of the writing of this log.

T. Guidry B. Haithcox. R. Thompson C. Soike R. McCarthy
Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:29, Monday 28 August 2023 (72485)CDS, DetChar, FMP, Laser Safety, PEM
GREAT SAVE TEAM!

Cross reference LHO:72428 and LHO:72440 for IFO impact.

I'm guessing Tyler's "manually threw the pump off at the VFD to prevent further runaway of the system" was the timing of H0:FMC-EY_CY_H20_PUMPSTAT channel going to zero at 17:35 UTC (10:35 PDT) that I called out in my aLOG.
H1 FMP (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 13:41, Friday 25 August 2023 - last comment - 12:01, Monday 28 August 2023(72428)
Chilled Water Pump Has Failed for EY HVAC Air Handlers
J. Kissel, for T. Guidry, R. McCarthy

Just wanted to get a clear separate aLOG in regarding what Corey mentioned in passing in his mid-shift status LHO:72423:

The EY HVAC Air Handler's chilled water pump 1 of 2 failed this morning 2023-08-25 at 9:45a PDT, and thus the EY HVAC system has been shut down for repair at 17:35 UTC (10:35 PDT). The YVEA temperature is therefore rising as it equilibrates with the outdoor temperature; thus far from 64 deg F to 67 deg F.

Tyler, Richard, and an HVAC contractor are on it, actively repairing the system, and I'm sure we'll get a full debrief later.

Note -- we did not stop our OBSERVATION INTENT until 2h 40m later, 2023-08-25 20:18 UTC (13:18 PDT), when we went out to do some commissioning.
Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 13:53, Friday 25 August 2023 (72429)

The work that they've been doing so far today to diagnose this issue has been in the 'mechanical room'.  Their work should not add any additional significant noise over what already occurs in that room at all times, so I do not expect that there should be any data quality issues as a result of this work.  But, we shall see (as Jeff points out) if there are any issues from the temperature itself changing. 

corey.gray@LIGO.ORG - 15:51, Friday 25 August 2023 (72437)FMP

They are done for the weekend and temperatures are returning to normal values. 

Chiller Pump #2 is the chiller we are now running.

Chiller Pump #1 will need to be looked at some more (Tyler mentioned the contractor will return on Tues).

Attached is a look at the last 4+yrs and both EY chillers (1 = ON & 0 = OFF).

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:01, Monday 28 August 2023 (72484)DetChar, ISC, SUS
See Tyler's LHO:72444 for a more accurate and precise description of what happened to the HVAC system.
LHO General
ryan.short@LIGO.ORG - posted 08:23, Friday 25 August 2023 - last comment - 10:39, Monday 28 August 2023(72413)
Ops Day Shift Start

TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 11mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for 46 hours.

Comments related to this report
corey.gray@LIGO.ORG - 10:39, Monday 28 August 2023 (72482)

Just a note:  I was the Day Operator, but arrived late due to Route10 being CLOSED. :(

H1 SQZ
victoriaa.xu@LIGO.ORG - posted 10:38, Tuesday 22 August 2023 - last comment - 13:57, Monday 28 August 2023(72371)
Aligned opo pump fiber polarization

Naoki noticed the pump fiber rejected PD (in ham7, rejects pump fiber light that comes out in the wrong polarization) was saturated, so today I re-aligned the pump fiber polarization using sqzt0 pico's (described recently also in 71761).

I'm not sure why pump fiber needs to have its input polarization readjusted so often; I checked the CLF fiber and FCGS fiber, and they both seemed relatively well-aligned despite having not adjusted those in a while. Especially this time, it seems the fiber polarization got misaligned more quickly than before.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 13:57, Monday 28 August 2023 (72492)

Austin had to reset the sqz pump ISS again on Sunday (72474), lowering the generated sqz level by lowering OPO trans from 80uW (recent nominal) to 65uW. Sqz level correspondingly went down. Naoki and I have re-aligned the pump fiber polarization, and brought the squeezer back to the nominal 80uW generated sqz level.

It's strange that we had to re-align the pump fiber polarization, again. The pump fiber polarization seems to be misaligning more quickly recently, see trends. This time, the fiber polarization misaligned to saturation in 1-2 days (last time was several days, before that we never saw it misaligned to saturation). It also needed both the L/2 and L/4 waveplates to re-align it. We should definitely monitor this situation and see if we can understand/fix why it's happening.

Images attached to this comment
H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 19:22, Friday 18 August 2023 - last comment - 11:13, Monday 11 September 2023(72331)
DARM 52 Hz peak from chilled water pump at EX: HVAC shutdown times

Genevieve, Lance, Robert

To further understand the roughly 10Mpc lost to the HVAC (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308), we made several focussed shutdowns today. These manipulations were made during observing (with times recorded) because such HVAC changes happen automatically during observing and also, we were reducing noise rather than increasing it. The times of these manipulations are given below.

One early outcome is that the peak at 52 Hz in DARM is produced by the chilled water pump at EX (see figure). We went out and looked to see if the vibration isolation was shorted; it was not, though there are design flaws (the water pipes aren't isolated). We switched from CHWP-2 to CHWP-1 to see if the particular pump was extra noisy. CHWP-1 produced a similar peak in DARM at its own frequency. The peak in accelerometers is also similar in amplitude to the one from the water pump at EY. One possibility is that the coupling at EX is greater because of the undamped cryobaffle at EX.

 

Friday HVAC shutdowns; all times Aug. 18 UTC

15:26 CS SF1, 2, 3, 4 off

15:30:30 CS SF5 and 6 off

15:36 CS SF5 and 6 on

15:40 CS SF1, 2, 3, 4 back on

 

16:02 EY AH2 (only fan on) shut down

16:10 EY AH2 on

16:20 EY AH2 off

16:28 EY AH2 on

16:45 EY AH2 and chiller off

16:56:30 EY AH2 and chiller on

 

17:19:30 EX chiller only off, pump stays on

17:27 EX water pump CHWP-2 goes off

17:32 EX CHWP-2 back on, chiller back on right after

 

19:34:38 EX chiller off, CHWP-2 pump stays on for a while

19:45 EX chiller back on

 

20:20 EX started switch from chiller 2 to chiller 1 - slow going

21:00 EX Finally switched

21:03 EX Switched back to original, chiller 1 to chiller 2

 

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:29, Monday 21 August 2023 (72350)DetChar, FMP, ISC, OpsInfo
Turning Roberts reference to LHO:72308 into a hyperlink for ease of navigation.

Check out LHO:72297 for a bigger-picture representation of how the 52 Hz peak sits in the broader DARM sensitivity; from the time stamps in Elenna's plots, they were taken at 15:27 UTC, just after the corner station (CS) "SFs 1, 2, 3, 4" were turned off.

SF stands for "Supply Fans" i.e. those air handler unit (AHU) fans that push the cool air in to the LVEA. Recall, there are two fans per air handler unit, for the two air handler units (AHU1 and AHU2) that feed the LVEA in the corner station.

The channels that you can use to track the corner station's LVEA HVAC system are outlined more in LHO:70284, but in short, you can check the status of the supply fans via the channels
    H0:FMC-CS_LVA_AH_AIRFLOW_1   Supply Fan (SF) 1
    H0:FMC-CS_LVA_AH_AIRFLOW_2   Supply Fan (SF) 2
    H0:FMC-CS_LVA_AH_AIRFLOW_3   Supply Fan (SF) 3
    H0:FMC-CS_LVA_AH_AIRFLOW_4   Supply Fan (SF) 4
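
(A minimal trending sketch, assuming gwpy and NDS access from a site workstation, using times around Robert's 2023-08-18 15:26 UTC supply fan shutdown logged above:)

    from gwpy.timeseries import TimeSeriesDict

    channels = [f"H0:FMC-CS_LVA_AH_AIRFLOW_{n}" for n in (1, 2, 3, 4)]
    data = TimeSeriesDict.get(channels, "2023-08-18 15:00", "2023-08-18 16:00")
    plot = data.plot()
    plot.gca().set_ylabel("Airflow")
    plot.savefig("cs_supply_fans.png")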
jeffrey.kissel@LIGO.ORG - 13:13, Monday 28 August 2023 (72486)DetChar, ISC, SYS
My Bad -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, while the times Robert logged above are for 2023-Aug-18.

Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station, as I'd errantly claimed above).
jeffrey.kissel@LIGO.ORG - 11:13, Monday 11 September 2023 (72805)FMP, ISC, OpsInfo
For these 2023-Aug-18 times mentioned in this LHO aLOG 72331, check out the subsequent analysis of impact in LHO:72778.
H1 ISC (PEM)
elenna.capote@LIGO.ORG - posted 11:38, Thursday 17 August 2023 - last comment - 13:15, Monday 28 August 2023(72297)
DARM with and without HVAC

Robert did an HVAC off test. Here is a comparison of GDS CALIB STRAIN NOLINES from earlier on in this lock and during the test. I picked both times off the range plot from a time with no glitches.

Improvement from removal of 120 Hz jitter peak, apparent reduction of 52 Hz peak, and broadband noise reduction at low frequency (scatter noise?).

I have attached a second plot showing the low frequency (1-10 Hz) spectrum of OMC DCPD SUM, showing no appreciable change in the low frequency portion of DARM from this test.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:57, Thursday 17 August 2023 (72302)DetChar, FMP, OpsInfo, PEM
Reminders from the summary pages as to why we got so much BNS range improvement from removing the 52 Hz and 120 Hz features shown in Elenna's ASD comparison.
Pulled from https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230817/lock/range/.

Range integrand shows ~15 and ~5 Mpc/rtHz reduction at the 52 and 120 Hz features.

BNS range time series shows a brief ~15 Mpc improvement at 15:30 UTC during Robert's HVAC OFF tests.
Images attached to this comment
elenna.capote@LIGO.ORG - 11:50, Friday 18 August 2023 (72321)

Here is a spectrum of the MICH, PRCL, and SRCL error signals at the time of this test. The most visible change is the reduction of the 120 Hz jitter peak also seen in DARM. There might be some reduction in noisy peaks around 10-40 Hz in the signals, but the effect is small enough it would be useful to repeat this test to see if we can trust that improvement.

Note: the spectra have strange shapes, I think related to some whitening or calibration effect that I haven't bothered to think about to make these plots. I know we have properly calibrated versions of the LSC spectra somewhere, but I am not sure where. For now these serve as a relative comparison.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:46, Monday 21 August 2023 (72352)DetChar, FMP, PEM
According to Robert's follow-up / debrief aLOG (LHO:72331) and the time stamps in the bottom left corner of Elenna's DTT plots, she is using the time 2023-08-17 15:27 UTC, which corresponds to the time when Robert had turned off all four of the supply fans (SF1, SF2, SF3, and SF4) in the corner station (CS) air handler units (AHU) 1 and 2 that supply the LVEA, around 2023-08-17 15:26 UTC.
jeffrey.kissel@LIGO.ORG - 13:15, Monday 28 August 2023 (72487)DetChar, PEM, SYS
My Bad -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, while the times Robert logged in LHO:72331 are for 2023-Aug-18.

Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station, as I'd errantly claimed above).