Reports until 16:51, Thursday 08 June 2023
H1 SQZ (ISC, SQZ)
victoriaa.xu@LIGO.ORG - posted 16:51, Thursday 08 June 2023 - last comment - 15:06, Monday 12 June 2023(70289)
Fast SQZ thermalization seen in DARM at different IFO powers, comparing sqz vs. no-sqz

I looked again at DARM at 60W vs. 75W, comparing sqz and no-sqz for both. Each DTT trace uses 8 seconds, 50% overlap, 100 averages.
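For reference, the DTT settings above (8-second FFTs, 50% overlap, 100 averages) can be reproduced offline with a Welch estimate. This is a sketch on synthetic data at a toy sample rate, not a fetch of the actual DARM channel:

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for a DARM time series; a real analysis would fetch
# e.g. H1:CAL-DELTAL_EXTERNAL_DQ over the same span. Toy sample rate.
fs = 4096             # Hz
duration = 404        # s; exactly 100 averages of 8 s segments at 50% overlap
rng = np.random.default_rng(0)
data = rng.standard_normal(int(fs * duration))

# DTT-style settings: 8 s FFTs (0.125 Hz resolution), 50% overlap, Hann window.
nperseg = 8 * fs
f, psd = signal.welch(data, fs=fs, window='hann',
                      nperseg=nperseg, noverlap=nperseg // 2)
asd = np.sqrt(psd)    # amplitude spectral density, 1/sqrt(Hz)
```

With 50% overlap, 100 averages need (100+1)/2 × 8 s = 404 s of data, which is why the duration above is chosen as it is.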

Lighter colors. 60W times used from 68251, w/15dB generated sqz.

Darker colors. 76W times from 6/4, ~3 hours into lock (semi-thermalized, past fast initial thermalization).

See the summary page with this time window -- comparing red vs. black in this screenshot, I don't see obvious excess noise with the squeezer on below 50 Hz, when we are past fast initial thermalization.

Re: "fast initial SQZ thermalization" -- For reference, see the V-shape in blue that DARM takes with the fast 0.5-1 hour SQZ thermalization from a cold IFO (even one unlocked for just 1-2 hours). Notably, in the first 0.5-1 hour, especially if the IFO was cold, the fast-thermalizing-SQZ trace (blue) exceeds the no-sqz noise (red) both below 50 Hz and above 1 kHz. I think this is related to the rapidly thermalizing SRCL detuning, which rotates squeezing across the band for the first 0.5-1 hour. To fix this sqz misrotation from SRCL detuning, I think we'd need to thermalize the SRCL offset; it wouldn't be sufficient to thermalize the sqz angle. If we have to pick an angle, we should pick the sqz angle that optimizes the bucket noise ~100 Hz; this is the most stationary as SRCL thermalizes. For calibration, I think Jeff sees this too, as wildness in the rapidly thermalizing sensing function at the start of locks. The effect seems more prominent from a cold IFO; the V-shape is more muted when the IFO relocks quickly (not much down/cold time).

Note the excess noise at 20-50 Hz in the 60W vs. 75W comparison. Brina is now looking into the calibration status and LSC coherences for these sqz and no-sqz times.

Images attached to this report
Comments related to this report
brina.martinez@LIGO.ORG - 13:28, Friday 09 June 2023 (70291)SQZ

Here are a few plots comparing the LSC coherences from MICH, PRCL, and SRCL between these times with/without the SQZ. The SQZ/no-SQZ plots for the same days look similar in some regions, but there is more coherence in the 76 W data.

(Just updated the image to match the range from the DARM plots, and updated the H1 live trace to a recent time today (06/09/23).)
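The coherence comparison described above can be sketched with scipy; the data here is synthetic, and the channel names mentioned in the comments (e.g. an SRCL error/output channel) are assumptions about what one would fetch in practice:

```python
import numpy as np
from scipy import signal

# Sketch of an LSC-to-DARM coherence estimate on synthetic data. In a real
# analysis one would fetch e.g. H1:LSC-SRCL_OUT_DQ (assumed name) and a DARM
# channel over the same GPS span, at matched sample rates.
fs = 2048
rng = np.random.default_rng(1)
common = rng.standard_normal(fs * 128)            # shared (coherent) noise
darm = common + 0.5 * rng.standard_normal(fs * 128)
srcl = common + 0.5 * rng.standard_normal(fs * 128)

# Magnitude-squared coherence with 8 s segments, Hann window.
f, coh = signal.coherence(darm, srcl, fs=fs, nperseg=8 * fs)
# coh lies between 0 and 1; high values flag frequencies where SRCL noise
# may be coupling into DARM.
```

With the 2:1 common-to-independent noise split above, the expected coherence is around 0.64 across the band.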

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:29, Thursday 08 June 2023 (70292)

Additional screenshot including DARM 76W at 20mA here, no-sqz = dark green ; fds = purple. Looks like these DARM times were taken during a contrast defect scan, seen by squeezer too.

Images attached to this comment
brina.martinez@LIGO.ORG - 15:06, Monday 12 June 2023 (70380)

Couldn't edit my previous comment to update the image so here are the updated coherence plots.

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 16:09, Thursday 08 June 2023 (70288)
OPS Thursday Eve shift start

TITLE: 06/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 11mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

LHO General
thomas.shaffer@LIGO.ORG - posted 16:03, Thursday 08 June 2023 (70281)
Ops Day Shift Summary

TITLE: 06/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
SHIFT SUMMARY: Arrived with ALS Y having issues relocking (more details in <a href="https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=70278">alog70278</a>). I'm still not entirely sure what I did to get it to lock, albeit with a poor power, but it was enough to get the IFO to come back up. After a lock loss from an earthquake, this ALS Y issue seemed to be gone. This makes me think that it was very much alignment related, but I'm not sure exactly where. We are now back at NLN and Observing.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
10:50 TCS Camilla CR N Turned off ITMX and ITMY HWS SLED 11:00
16:31 FAC Karen Opt Lab n Tech clean 16:32
16:31 FAC Kim High bay n Tech clean 17:01
18:14 VAC Janos Garb room n Grabbing dust monitor 18:16
20:38 VAC Travis Garb room n Looking for dust mon parts 20:42
H1 General (Lockloss)
thomas.shaffer@LIGO.ORG - posted 14:06, Thursday 08 June 2023 (70287)
Lock loss 2058 UTC

Lock loss 1370293139

5.6 earthquake from the Aleutian Islands. We transitioned to earthquake mode, but we didn't make it through.
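The GPS timestamp in the entry above can be converted to UTC by hand; a minimal sketch, using the fact that GPS time starts at 1980-01-06 00:00:00 UTC and has been ahead of UTC by 18 leap seconds since 2017 (site tools like gpstime/lalapps do this properly):

```python
from datetime import datetime, timedelta, timezone

def gps_to_utc(gps_seconds: int) -> datetime:
    """Convert a GPS timestamp to UTC (valid for the 18 s leap-second
    offset in effect during 2023)."""
    gps_epoch = datetime(1980, 1, 6, tzinfo=timezone.utc)
    return gps_epoch + timedelta(seconds=gps_seconds - 18)

print(gps_to_utc(1370293139))  # → 2023-06-08 20:58:41+00:00
```

This matches the "Lock loss 2058 UTC" title of the entry.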

H1 OpsInfo (GRD, SEI)
thomas.shaffer@LIGO.ORG - posted 14:02, Thursday 08 June 2023 (70286)
Added SEI_CS to IFO node exclude list

We had a transition to earthquake mode that brought us out of observing. This was from the SEI_CS node not being in the IFO node's exclude list.

Why haven't we seen this before? Jim is finding that recently we haven't even been able to transition to earthquake mode before we lose lock, possibly since we went up in power. Look for an alog from him on this in the near future.

H1 DAQ
david.barker@LIGO.ORG - posted 11:59, Thursday 08 June 2023 (70283)
offload of raw minute trend files from h1daqtw0 to h1daqframes-0 complete

WP11245

Jonathan, TJ, Dave:

The offload of the past 6 months of raw minute trend files from h1daqtw0 SSD-RAID to h1daqframes-0 HDD-RAID is complete.

The copy took from Tue 11:08 - Wed 12:28 = 25hrs 20min

This morning I reconfigured nds0 to serve these data from their permanent archive location and restarted the nds.

The deletion of the old files from tw0 SSD was from 09:06 to 11:15 this morning, it took 2hrs 9 min

I'm keeping WP11245 open since the reconfiguration of h1daqnds0's daqdrc has so far only been done via manual edits; next Tuesday Jonathan will add this to the DAQ puppet.

Thu08Jun2023
LOC TIME HOSTNAME     MODEL/REBOOT
08:41:24 h1daqnds0    [DAQ]
 

H1 General (ISC)
thomas.shaffer@LIGO.ORG - posted 10:37, Thursday 08 June 2023 (70278)
Relocking notes
LHO VE
david.barker@LIGO.ORG - posted 10:13, Thursday 08 June 2023 (70280)
Thu CP1 Fill

Thu Jun 08 10:08:41 2023 INFO: Fill completed in 8min 41secs

 

Images attached to this report
H1 General (SEI)
camilla.compton@LIGO.ORG - posted 08:06, Thursday 08 June 2023 - last comment - 13:07, Thursday 08 June 2023(70269)
OPS Owl Shift Summary

TITLE: 06/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY: Not much observing time this shift with lots of time spent at OMC_WHITENING with high violins and 3 locklosses. Currently stuck locking Y-arm and passing off to TJ.
LOG:

H1 range channels and SenseMon are down 70271. Attaching an ISC_LOCK state plot as a replacement.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 13:07, Thursday 08 June 2023 (70285)

Nothing has changed on the SEI side of things that would affect our earthquake robustness. We should have been able to ride out this eq; SEI_ENV didn't even transition until 5 secs after the IFO lost lock, but we have survived eqs this size without it. I'm starting to look to see if we are falling off a PD somewhere or something.

In the attached screenshot, the left column shows some IFO channels and the right column some SEI channels. I don't have much to say at this point: none of the SEI channels look crazy; the IFO is just starting to see the eq and we lose lock.

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 08:00, Thursday 08 June 2023 (70276)
Ops Day Shift Start

TITLE: 06/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 8mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: IFO unlocked about an hour ago and Camilla has been struggling with ALS since. Troubleshooting.

H1 TCS
camilla.compton@LIGO.ORG - posted 04:58, Thursday 08 June 2023 (70272)
Scatter on ITMY HWS CCD camera images

After noticing a "double-spot" on one of the ITMY HWS points on May 26th, Dan suggested turning off the HWS SLED to understand where this scattered light is from. The scatter seems to have been there since 08 April ~19:30UTC.

Tonight I turned off the ITMY SLED 7:46 to 8:02UTC and 10:54 to 11:01UTC, and the ITMX SLED 10:50 to 10:57UTC, after ALS had been shuttered, so both HWS SLEDs were off together.

Unsure what this light is. We could try turning down the HWS camera exposure to minimize some of it.

Images attached to this report
H1 General
camilla.compton@LIGO.ORG - posted 04:07, Thursday 08 June 2023 (70273)
OPS Owl Mid-shift Summary

STATE of H1: Lock Acquisition

H1 range channels and SenseMon are down 70271.

H1 CDS
camilla.compton@LIGO.ORG - posted 02:25, Thursday 08 June 2023 - last comment - 05:09, Thursday 08 June 2023(70271)
H1 Range not reporting since 08:21UTC

Control room channels H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC and H1:CDS-SENSMON_CLEAN_SNSC_EFFECTIVE_RANGE_MPC have not been reporting correct values since 08:21UTC, see attached.

The SenseMon Range FOM is also frozen, at time 08:19UTC, see nuc27 screenshot attached. Omicron glitches are not updating either.

Images attached to this report
Comments related to this report
john.zweizig@LIGO.ORG - 05:09, Thursday 08 June 2023 (70274)

The SenseMon and Omega problems seem to be related to an error in the /gds file system (it seems to be unmounted). I have notified Dan Moraru via Mattermost and email, but I'm not sure when he will see those messages.

H1 General (ISC)
camilla.compton@LIGO.ORG - posted 00:05, Thursday 08 June 2023 - last comment - 06:00, Thursday 08 June 2023(70267)
OPS Owl Shift Start

TITLE: 06/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: Just lost lock at 06:53UTC 1370242452 after 2h49 in NLN. Ryan and I watched CSOFT P ASC control signal ring up before the lockloss, plot attached. Tagging ISC.

VAC, SUS, SEI, CDS, dust monitors all Okay.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 06:00, Thursday 08 June 2023 (70275)ISC

Looked over the last locklosses. In general the CSOFT_P control signal does get larger throughout the lock. In particular, the last three "unknown" locklosses show CSOFT_P_OUT16 getting up to 4000 towards the end: this one, along with 20230608 0245UTC and 20230608 0653UTC.

20230607 1635UTC shows some CSOFT P noise but not as bad. In the other locklosses in the last 3 days, CSOFT_P_OUT16 stayed below ~2000.

Plot of last 3 days attached. Did something change at the "1 day ago" 06/07 09:37UTC mark? This was the second lock after maintenance, and I moved PR3 to get locked 70217.

Images attached to this comment
H1 ISC (DetChar, FMP, OpsInfo, SEI, SYS)
jeffrey.kissel@LIGO.ORG - posted 17:17, Wednesday 07 June 2023 - last comment - 07:27, Friday 09 June 2023(70258)
Debating Between 60W and 75/76W: Lack of Duty Cycle is NOT related to Input Power Choice
J. Kissel

There's an on-going debate whether we prefer 76 W (now 75W) PSL input power vs. 60 W PSL power, because we think that we had better noise performance at 60 W. It's a great debate, and I'm all for it. One thing I want to make super clear -- the lack of duty cycle during the engineering run and through the start of O4 has *NOT* been because of the increase in power from 60W to 76W. We've had one heck of a couple of months in terms of 
    (1) An insidiously slow drift in yaw on all of our HAM ISIs due to an innocent oversight of a foton-induced copy-and-paste error in ISI RZ blend filters (now fixed)
    (2) Literally every Tuesday for the past 4 Tuesdays, we've had a 1 deg C (~1-2 deg F) rapid temperature excursion in the LVEA, and some not even on Tuesdays
    (3) A particularly earthquake-full couple of months
    (4) We've had 3 or 4 inadvertent, system-wide, inter-computer communication "dolphin" crashes, sometimes causing a day of confusion from settings lost
    (5) Several electronics chassis failures, all inadvertent

Further, (1) and (2) caused all sorts of apparent trouble that we interpreted as PR3 alignment / ISCT1 alignment troubles, and thus there may be some residual noise knock-on effects as a result.

Indeed, though I don't yet have the quantitative evidence to prove it, I think our issues with (1) and (2) drove us to move PR3 into a different position -- which in turn pushed the arm cavity spots to a different position, onto a different point absorber / acoustic mode situation -- which in turn caused our problems with "a new PI" at the start of the run -- and drove the choice to decrease the ETMX Ring Heater power -- which then drove us to decrease the power from 76W to 75W.

All of these issues cropped up right around the increase in power, and have continued through the start of the run, so I think some -- including myself -- had gathered an incorrect impression that H1's low duty cycle at the start of the run has been because of the power increase. With the trends in this aLOG, I argue it has not.

Check out the attached past 3 months worth of trends, in both relative time axis and absolute UTC time axis. 

Remember, the observing run started 15 days ago, on May 24 2023 at 15:00 UTC.

1st panel: IMC input power, in Watts, showing the transition from 60W to 76W

2nd panel: residual HAM-ISI position in RZ, in nanoradians, showing the 10-20 urad drift of the tables due to (1)

3rd and 4th panel: temperature zones in the LVEA, in deg C and deg F, respectively, showing the last 4 Tuesdays' worth of temperature excursions

5th panel: all test mass ring heater power level settings, in Watts showing the early explorations of ring heater settings after power up, and the ETMX reduction during the HAM ISI alignment excursion (the upper half is shown, but both upper and lower halves are set equally each time)
 
6th panel: 0.03 - 0.1 Hz BLRMS of the ground motion at each of the three buildings, showing the "earthquake and wind" band, highlighting (3)

7th panel: PR3's yaw alignment slider, indicating that we've been steering PR3 around all over the place only in the past 4 weeks, likely a result of the ISI yaw drifts (1) and temperature excursions (2).
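The "earthquake and wind" band trace in the 6th panel is a band-limited RMS (BLRMS). The real BLRMS channels are computed online in the SEI front ends; a minimal offline sketch of the idea, on synthetic ground-motion data:

```python
import numpy as np
from scipy import signal

# Band-limited RMS in the 0.03-0.1 Hz "earthquake and wind" band.
# Synthetic data; a real analysis would use a site ground-motion channel.
fs = 8                                    # Hz, plenty for a 0.1 Hz upper edge
rng = np.random.default_rng(2)
ground = rng.standard_normal(fs * 3600)   # one hour of fake velocity data

# 4th-order Butterworth bandpass, applied forward-backward (zero phase).
sos = signal.butter(4, [0.03, 0.1], btype='bandpass', fs=fs, output='sos')
banded = signal.sosfiltfilt(sos, ground)
blrms = np.sqrt(np.mean(banded ** 2))     # one number per time stretch
```

Tracking `blrms` over successive stretches gives a trend like the one in the panel.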
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:35, Thursday 08 June 2023 (70277)
Adding another dimension to the problem...

I was thinking overnight about this, and realized "well, maybe there's *one* part of the power increase that has impacted duty cycle: the 11 Hz ring-ups from PRCL going unstable through too much gain."

But, I've added the PRCL gain adjustments to the series of trends -- see new attachments in relative time and UTC time -- and I think the new-ish PRCL instabilities can also be explained by drifts and changes in alignment.
- We started using the THERMALIZATION guardian around Apr 24. This slowly ramps the PRCL2 gain through the thermalization period

- We made a change, *reducing* the PRCL2 end-point setting on May 21 -- but this was reactive to the time when the ISI YAW drifts were at maximum. 

- We then restored that end-point setting on May 26

- Then, on May 31, after starting to have trouble with 11 Hz ring-ups of PRCL, we adjusted the "base" PRCL1 gain from 1.0 to 1.5 -- but this was reactive, coming after several days of the LVEA temperature drifting around between May 25 and May 31, and an especially bad Tuesday on May 29th.

- On Jun 4th, the LVEA temperature controls settled on a "new normal," and we "found" we needed to increase the "base" PRCL set point again, to 1.7, on June 6

- Then on Jun 7th, we reset the "base" PRCL1 gain to 1.0, but instead increase the thermalization set point higher.

My vote is the following: we take the hit in time that this will mean:
- Restore (or change to re-create) the LVEA temperature to the values we had consistently for months up until May 10th. Since the LVEA is en masse cooler than before, we can use the individual zone heater settings to bring each zone back *up* to its "prior to May 10th" value.
- Once that's settled, we re-align PR3 YAW to the slider values we had up until May 10th of 151.6 "urad," and run an initial alignment.
- Once that's settled, we go out to ISCT1 and re-set the alignment of the table (though it's not reproducible, hopefully doing so will get us back to the ISCT1 alignment we've had for many moons prior to all this mess).
- Once that's settled, restore ETMX ring heater to its value of 1.3 W.
- Once that's settled, we go back to the May10th era PRCL gains and THERMALIZATION guardian set points of PRCL1 = 1.0 and PRCL2 = 23.0.

If all that works, then we re-calibrate PR3's sliders, optical levers, and OSEMs.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:19, Thursday 08 June 2023 (70279)
Since it's perhaps quite tough to see all these traces stacked vertically on top of each other (a regrettable "must" because the timing of all these changes is important to the story), I've captured this epic ndscope session as a .yaml template.

/ligo/home/jeffrey.kissel/2023-06-07/
    2023-06-07_3motrend_IMCPWR_ISIYAW_LVEATEMP_GNDBLRMS_PR3YAW_PRCGAINs.yaml

I can't attach a file with the ".yaml" or ".yml" extension to the aLOG, but it's Linux, so I've just changed the file extension to .txt (because, in the end, it *is* just a text file). If you'd like to download it from here and look around, download it and then change the extension back to ".yaml".
Non-image files attached to this comment
richard.mccarthy@LIGO.ORG - 07:27, Friday 09 June 2023 (70297)

With the temperatures more stable than they had been, we are waiting to get the cooling coil strainers cleaned before attempting other changes. The major fluctuations we were seeing were caused primarily by a cooling coil not getting to temperature, creating an issue for multiple days. Once the strainers are clean, temperature control can be changed to try and recreate previous zone temperatures. Though it was nice to have all zones grouped together for once.

H1 DetChar (DetChar, ISC)
gabriele.vajente@LIGO.ORG - posted 07:06, Monday 05 June 2023 - last comment - 08:21, Thursday 08 June 2023(70134)
Evolution of low frequency sensitivity

The low frequency (<50 Hz) strain sensitivity is worse now than it was a few months ago. One of the questions we had was whether this was due to the increase of input power from 60 to 75 W. The quick answer is:

the 20-50 Hz noise got worse both when the power was increased from 60 to 75 W and when the DARM offset was increased from 20 to 40 mA

To try and answer this question, I:

  1. selected times during the past couple of months when the range was higher than 130 Mpc for at least 10 minutes
  2. for each of those times, computed the DARM spectrum using CAL-DELTAL_EXTERNAL_DQ
  3. computed the average noise level in the 20 to 50 Hz region (by averaging the log of the PSD, so as to be less sensitive to lines)
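Step 3 can be sketched as follows, on synthetic data; averaging the log of the PSD (i.e. taking a geometric mean) down-weights narrow lines relative to a linear mean:

```python
import numpy as np
from scipy import signal

# Average noise level in the 20-50 Hz band via the geometric mean of the
# PSD. Data is synthetic; real inputs would be CAL-DELTAL_EXTERNAL_DQ
# segments from the selected high-range times.
fs = 4096
rng = np.random.default_rng(3)
data = rng.standard_normal(fs * 600)      # a 10-minute stretch

f, psd = signal.welch(data, fs=fs, nperseg=8 * fs)
band = (f >= 20) & (f <= 50)
band_level = np.exp(np.mean(np.log(psd[band])))   # geometric mean of PSD
```

Tracking `band_level` across the selected times, normalized to the March value, reproduces the kind of trend shown in the top panel.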

The plot attached below shows in the top panel how the noise in the 20-50 Hz region got worse over time: it shows the average noise in the band, normalized to the beginning of the analysis, in March. There are two clear discrete steps when the noise got significantly worse. The second panel shows (on twin y-axes) the input power in black and the DCPD power in red. There are two clear times when the noise got worse: when the power was increased from 60 to 75 W, and when the DARM offset was increased from 20 to 40 mA.

The colored X's in the second panel correspond to the DARM spectra shown in the bottom panel, same color.

The results shown here are obtained looking at CAL-DELTAL_EXTERNAL_DQ, but very similar results can be obtained with OMC-DCPD_SUM_OUT_DQ, compensating for the calibration line amplitudes to account for changes in the optical gain.
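The calibration-line compensation mentioned above can be sketched as a simple rescaling; the amplitude values here are made up, and the direction of the correction (divide the raw DCPD spectrum by the relative optical gain inferred from the line) is my reading of the method, not a quote of the actual pipeline:

```python
import numpy as np

# Toy sketch: make OMC-DCPD_SUM spectra from different epochs comparable by
# scaling with the ratio of calibration-line amplitudes. All numbers are
# illustrative placeholders, not measured values.
line_amp_ref = 1.00        # cal line amplitude in a reference spectrum
line_amp_now = 0.85        # same line in the spectrum being compared
asd_now = np.array([2.0e-7, 1.5e-7, 1.1e-7])   # toy DCPD ASD values

# A smaller line amplitude implies lower optical gain, so the raw DCPD
# spectrum understates the strain-equivalent noise; divide by the relative
# gain (multiply by ref/now) to compensate.
asd_compensated = asd_now * (line_amp_ref / line_amp_now)
```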

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 08:21, Thursday 08 June 2023 (70268)

Gabriele also made plots comparing the noise with relevant sqz metrics; I've annotated some trends I see from the various frequency bands here. He compared the noise reduction between 20-50 Hz, 100-200 Hz, and 1-2 kHz with the filter cavity detuning, the generated sqz level, and the squeezer BLRMS. These are parameters I think could meaningfully change the noise in clear ways; there are more parameters we could compare (e.g. PSAMS, sqz angle, SRCL detuning, etc.), but it's a start. Note he has plotted all SQZ BLRMS for all bands, but the sqz blrms are band-selective, so it's a bit confusing comparing them all.

Trends that we noticed when compared with squeezing, in line with his earlier comments, are:
20 - 50 Hz, probably technical noise. Sees higher noise w/ higher IFO power and higher DCPD mA. When I looked at this before (e.g. summary here), this is consistent with my impression that this extra low-freq noise is added technical noise. (normally blrms 1)
100 - 200 Hz, unclear if this is quantum or classical. No consistent/clear trends with generated sqz level, fc detuning (though maybe?), or IFO power. Probably some extra technical noise here from the jump to 40 mA, as seen in both elevated sqz blrms and worse noise. (normally blrms 2-3)
1-2 kHz, more squeezing and lower power seemed better for high-frequency kHz noise. At 60W power, the improvement is probably related to lower IFO technical noise; it's also consistent that injecting more squeezing reduces the noise more. (normally blrms 4-5)

Images attached to this comment
gabriele.vajente@LIGO.ORG - 07:30, Monday 05 June 2023 (70136)

At the same time, there was some improvement in the noise between 100 and 200 Hz, but it seems very loosely correlated with the increased power: I would argue that the noise between 100 and 200 Hz improved gradually before the power increase.

There is almost no difference in the noise between 1 and 2 kHz

Images attached to this comment
LHO FMCS
bubba.gateley@LIGO.ORG - posted 08:09, Wednesday 31 May 2023 - last comment - 17:14, Thursday 08 June 2023(70037)
HVAC in LVEA
This will be a catch-up Alog as it dates back to last week. It started with trying to adjust the airflows (per Robert S.) and the rising temps in the LVEA, so I started investigating and found a large amount of condensate on the floor of AHU-1. We (Tyler, Randy, Chris and I) cleaned up all of the water with wet/dry vacuums and then adjusted mechanical linkages on Fan 1 & 2 to achieve the desired airflows. 
At that time, it was discovered that cooling coil 4 was much warmer than the other coils. I was not sure what other adjustments had been made by others so I continued to monitor. I took Friday off but was checking coil 4 and talking with Richard throughout the day. Towards the end of the day coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped. 

Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature. 

Because of the rising temps last week, many of the zone temperatures were lowered to try and help with the warmer areas. These setpoints have all been returned to the original desired temps. 

A work permit will be put in soon to inspect and clean all remaining strainers on the next Tuesday maintenance period.
Comments related to this report
camilla.compton@LIGO.ORG - 17:16, Wednesday 31 May 2023 (70056)

Adding plots of LVEA temperature over the last 10, 16 and 30 days. With cross-hairs showing dates temperatures changed. This plot is available with command: ndscope /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc32/VEA_temperatures.yaml

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:14, Thursday 08 June 2023 (70284)CDS, DetChar, FMP, ISC, OpsInfo, SYS
Adding some quantitative time and information to this aLOG series.

First, I cover the HVAC system and the channels that I'm using for metrics of the LVEA HVAC system.
The LVEA has 2 giant cooling actuators, or "air handlers" that live in the Mechanical Room, +X -Y of the beam splitter.
These air handlers add together to cool the LVEA as a common actuator.
There are separate, local *heaters* in each zone of the LVEA that serve as "differential" actuators. 

The system is depicted pretty well in the MEDM overview screen found under sitemap > "FMCS OVERVIEW" > "Air Handler 1,2"

The channels that are useful metrics of the system are listed below with their function and units
External, outside the building temperature
    H1:PEM-CS_TEMP_ROOF_WEATHER_DEGF       Temperature on the LVEA building roof, in deg F (look for corresponding "DEGC" channels for deg C)

LVEA temperature sensors << our primary "canary in the coal mine" indicator that the suspensions will likely be misaligned
    H0:FMC-CS_LVEA_ZONE1A_DEGF    Temperature, in deg F, around BSC2 (think H1 Beamsplitter and ITMs)    
    H0:FMC-CS_LVEA_ZONE1B_DEGF    ", ", around 3-IFO Area (think old H2 beamsplitter)
    H0:FMC-CS_LVEA_ZONE4_DEGF     ", ", Output arm (think SR3)
    H0:FMC-CS_LVEA_ZONE5_DEGF     ", ", Input arm (think PR3)

Local, corresponding heater units
    H0:FMC-CS_LVEA_HEATER_ZONE1A_PC    percentage of heating power being applied to the zone
    H0:FMC-CS_LVEA_HEATER_ZONE1B_PC
    H0:FMC-CS_LVEA_HEATER_ZONE4_PC
    H0:FMC-CS_LVEA_HEATER_ZONE5_PC

Common HVAC Air Handler 1 -- often referred to as just "AHU1"
    These channels are human-controllable via the HVAC control system
    H1:FMC-CS_LVEA_AH_DAMPER_1_PC            "Percentage of open" Intake air reduction "damper" valve (0% full closed, 100% full open)
    H0:FMC-CS_LVEA_AH_COOLTEMP_1_DEGF        Temperature of the "Cooling Coil"
    H0:FMC-CS_LVEA_AH_COOLTEMP_2_DEGF
    H0:FMC-CS_LVEA_AH_AIRFLOW_1              Output Air flow into the LVEA in cubic feet per minute (cfm, or CFM)
    H0:FMC-CS_LVEA_AH_AIRFLOW_2

Common HVAC Air Handler 2 -- often referred to as just "AHU2"
    These channels are human-controllable via the HVAC control system
    H0:FMC-CS_LVEA_AH_DAMPER_2_PC
    H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF
    H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF
    H0:FMC-CS_LVEA_AH_AIRFLOW_3
    H0:FMC-CS_LVEA_AH_AIRFLOW_4

The story starts earlier than the adjustment to the air-handler units to reduce the overall cooling airflow into the LVEA. Given that IFO problems with temperature started in late April 2023, here's the timeline of every event.

Apr 26 2023 10:56 PDT (Wednesday, mid-morning)
    Air handler 2's damper has a step-change in behavior, going from diurnal fluctuations between 50% and 80% open, to between 60% and fully 100% open.
    This only causes a minor "glitch" change in the LVEA temperatures, mostly in the diurnal fluctuations getting "reset," changing the diurnal pattern but no overall average temperature change.

May 02 2023 07:57 PDT (Tuesday, first-thing, minutes before the official start of maintenance day)
    Air handler 2's damper, and corresponding cooling temperature 3&4 glitches for 30 minutes. For the most part, the damper returns to the normal 50% and 80% open behavior
    This only causes a minor change in the LVEA temperatures, again in the diurnal fluctuations changing their pattern but no overall average temperature change.

May 10 2023 13:46 PDT (Wednesday afternoon)
aLOG: LHO:69516
Air handler unit 2, AHU2 fails and turns off.
Channels showing this:
    H1:FMC-CS_LVEA_AH_DAMPER_2_PC         Goes fully closed at 0%
    H0:FMC-CS_LVEA_AH_AIRFLOW_3           Goes to zero, as airflow ceases
    H0:FMC-CS_LVEA_AH_AIRFLOW_4           Goes to zero, "
    H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF     Goes up from stable at 50-60 deg F to really high at ~65 deg F
    H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF

    This causes a major mean-value excursion in LVEA temperature, but after the air handler function is restored, all zones come "back to normal" by May 11 2023 12:38 PDT, ~24 hours later.

May 11 2023 12:38 PDT
With Air Handler 1 restored, the LVEA temperatures return to normal but the airflow is now much larger, with Fan 1 producing ~12500 cfm worth of flow, where it used to only put out 6000 cfm.
Channels that show this:
    H0:FMC-CS_LVEA_AH_AIRFLOW_1
    
    This doesn't change the LVEA temperature, but it means that there's a lot more air flowing into the LVEA. Maybe this is what caused Robert to pay attention?
    
    
May 17 2023 09:42 PDT (Wednesday morning)
The damper for Air Handler Unit 2 starts to open up, constantly at 100%, and has stayed like this since, only occasionally coming off the rails to 80% in the first few days.
Channels showing this
    H1:FMC-CS_LVEA_AH_DAMPER_2_PC

    This causes a marked change in Zone 4, a 0.5 deg F decrease in temperature. That's not a lot, but worth calling out. The diurnal pattern of Zone 4 changes as well. Further, the cooling coils for AHU2 also start to run warmer, increasing from (diurnal fluctuations around) 51 deg F to (diurnal fluctuations around) 53 deg F.

Then we get to where Bubba Says - "It started with trying to adjust the airflows (per Robert S.)"

May 24 2023 08:48 PDT (Tuesday maintenance)
In reference to LHO:69894, where Bubba says:

"Fans 1 & 2 [of Air Handler Unit 1] supplying the LVEA were readjusted to [reduced] airflows for the Observation Run. 
 The **total** CFM for AHU 1 was reduced from ~ 26,000 to ~ 11,200 CFM"

With a bit more detail,
 - Fan 1 (H0:FMC-CS_LVEA_AH_AIRFLOW_1) is reduced from ~12500 cfm to its level prior to the May 11 2023 fix, now back to ~6000 cfm.
 - Fan 2 (H0:FMC-CS_LVEA_AH_AIRFLOW_2) is reduced dramatically from the value it had been at for a long time, ~11000 cfm, down to 4500 cfm.
Thus Bubba's statement about the total going from (12500+11000 =) 23500 cfm to (6000+4500 =) ~10500 cfm.

As a result of this we land on Bubba's statement "At that time, it was discovered that cooling coil 4 was much warmer than the other coils" (from the May 17th 2023 change in damper behavior) because you see both coil 3 and coil 4 from Air Handler 2 (H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF and H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF) at the higher, ~53 deg F level.
   
   This dramatic airflow change causes the LVEA temperatures to merge high for a bit, until May 24 18:38 PDT -- when Zone 4 (think SR3), the output arm, gets warmer to meet the other zones.

May 24 2023 22:30 PDT to May 25 2023 06:52 PDT (Late Night Tuesday to Early Morning Wednesday)
Air Handler 1's cooling coils slowly step up in temperature (by the control system? by a human?) from 46 deg F to as much as 54 deg F.
   
   Somehow, this slowly starts to bring temperatures *down* in all LVEA zones together until the next morning, May 25 2023 06:45 PDT.

May 25 2023 06:52 PDT (Early morning Wednesday)
Air Handler 1's cooling coils 1 and 2 get restored back to 46 deg F, and the heater in Zone 1A kicks on for 2.5 hours (by the control system? by a human?).

   This brings all temperatures *up* in the LVEA to a relatively high value of 69 deg F until another change on May 26 2023 15:56 PDT.

May 26 2023 15:56 PDT (Friday Afternoon)
Air Handler 2 Fan 4's air flow drops to zero, and cooling coil 3 and 4 temperature drops from 54 deg F to 49 deg F.

These are Bubba's actions when he says "Towards the end of the day [Friday Afternoon, May 26th] coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped."

   This brings the temperatures collectively *down* in the LVEA from ~69 deg F to 67 deg F, closer to the "normal" values for the LVEA, though Zone 5 (think PR3) and Zone 1B (think 3IFO) are lower than their "normal" values.

May 30 2023 09:50 PDT - 11:51 PDT (Tuesday, Maintenance Day)
All hell breaks loose with the HVAC as the Fire Alarm system overrides the HVAC controls -- only really mentioned in the operator log during maintenance that day, see LHO:70003 -- and drives TJ to investigate drifts in temperature (LHO:70000), thinking "it's happening again!!"

Also, in the meanwhile, Bubba takes the action he mentions:
" Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature. "

    However, upon restoration of the HVAC settings, Air Handler 2's "Fan 4 is turned off" setting from May 26 2023 (Friday afternoon) gets lost, and Fan 4 comes back on at full blast at 12000 cfm. Further, Air Handler 1's cooling coils come back on much hotter than they've ever come on before, changing from 45 deg to 54 to 65 deg.

    Once AHU2 Fan 4 comes back on, and air flow to the LVEA re-increases, the temperatures in the LVEA begin to drop again, tanking down as low as 66 deg F.

May 31 2023 06:23 PDT (Early Wednesday Morning)
    Zone 4 and Zone 5 heaters come into play (by human? or by the control system?), turning on for the first time with Zone 4 at 40 percent and Zone 5 left to oscillate diurnally between 20% and 40%.

    This brings all LVEA temperatures back up to a really tight cluster around 67 deg F, for the first time in months. This is great; we should hold here if we can.

June 04 2023 18:34 PDT (5 days later, Sunday Evening)
For reasons I can't determine (no other visible change in the metrics I've been using), Zone 1B (think 3IFO) jumps up in temperature 0.5 deg F from 67.2 deg F to 67.7 deg F.

    This is a pretty inconsequential change in temperature that shouldn't affect suspensions.

June 06 2023 09:53 PDT (2 days later, Tuesday Maintenance Day)
    The fire alarm maintenance team is back at it, and this causes the intake damper of Air Handler 2 (H0:FMC-CS_LVEA_AH_DAMPER_2_PC) to drop to zero.

    This causes the entire LVEA temperature to rise again from its stable 67 deg F period (which started on May 31 2023 06:23 PDT), with zones going as high as 69 deg F.

June 06 2023 11:52 PDT (same day)
    The operations team realizes the fire alarm closed the AHU2 damper again, calls in facilities, and re-opens it; the temperatures come back down, but overshoot because the Zone 1B and Zone 4 configuration got lost.

June 06 2023 17:02 PDT (same day, but later in the evening)
    Someone, or something, turns the Zone 1B and Zone 4 heater configuration back on.

    Temperatures return to the "good" May 31 2023 06:23 PDT configuration.
Images attached to this comment