Reports until 08:08, Friday 09 June 2023
H1 General
anthony.sanchez@LIGO.ORG - posted 08:08, Friday 09 June 2023 (70298)
Friday OPS Day Shift start

TITLE: 06/09 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 138Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 10mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:
IFO Current Status: Locked and Observing for the past 16 hours.

LHO General (SEI, SQZ)
ryan.short@LIGO.ORG - posted 08:00, Friday 09 June 2023 (70296)
Ops Owl Shift Summary

TITLE: 06/09 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
SHIFT SUMMARY: H1 has now been locked for 16 hours and throughout the shift. Very quiet night with range hovering around 140Mpc.

Handing off to Tony for the day.


LOG:

No log for this shift.

LHO General
ryan.short@LIGO.ORG - posted 04:00, Friday 09 June 2023 (70295)
Ops Owl Mid Shift Update

State of H1: Observing at 142Mpc

H1 has been locked and observing for 12 hours now. Very quiet night on site; some rain earlier but wind and ground motion are low.

LHO General
ryan.short@LIGO.ORG - posted 00:03, Friday 09 June 2023 (70294)
Ops Owl Shift Start

TITLE: 06/09 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 6mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: Taking over from Ryan C. H1 has been locked and Observing for 8 hours.

H1 General
ryan.crouch@LIGO.ORG - posted 23:58, Thursday 08 June 2023 (70290)
OPS Thursday Eve shift summary

TITLE: 06/09 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
SHIFT SUMMARY:

I forgot to post a mid-shift update for this shift.

Lock#1

NLN at 22:52UTC

We had 2 small drops out of observing from mistakes/mis-clicks.

In observing at 23:02UTC, out at 23:12:20UTC,

In observing at 23:12:29UTC, out at 01:05:12UTC

In observing at 01:05:40UTC, out at 05:50UTC due to the SQZer losing lock, it relocked itself in 3 minutes and I went back into observing at 05:53UTC

Superevent candidate 06:50UTC, Stand Down

LOG:

No log for this shift

H1 SEI
jim.warner@LIGO.ORG - posted 20:48, Thursday 08 June 2023 - last comment - 16:12, Monday 12 June 2023(70293)
IFO is less robust against earthquakes now than before power up

This has been kind of hard to pick out, but it definitely seems like the IFO is less robust against earthquakes now than it was before increasing power over 60W in the IMC. Basically, since then we haven't been able to stay in NLN through any earthquake over ~0.5um/s, as measured by the peakmon eq witness channel.

Attached plot shows peakmon vs lock state trends since Feb 10 this year. The blue trace is the peakmon ground velocity minute trend; each green point indicates where the IFO was still locked 2 minutes after a local maximum ground velocity (found using the matlab peakfinder routine with some prominence and time separation requirements). The two marked points indicate where IMC power was increased above 60W (X=79730 minutes after Feb 10th) and the start of the run (X=148255 minutes after Feb 10th). Before power up we had multiple locks survive multiple 500nm/s eqs, a few around 1micron/s and one at almost 3micron/s. After the power up, the IFO doesn't ride out any eqs over 500nm/s, and we have basically not survived any notable eqs since the start of the run.
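Jim's peak-survival analysis (find ground-velocity maxima with prominence and separation requirements, then check the lock state 2 minutes later) can be roughed out in Python with scipy's find_peaks, a rough analogue of the MATLAB peakfinder routine. The data and thresholds below are synthetic stand-ins, not the actual peakmon trends:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for the peakmon minute-trend (nm/s); in practice this
# would be fetched from the site NDS archive.
t = np.arange(0, 600)                                   # minutes
ground_vel = 50 + 10 * np.random.default_rng(0).normal(size=t.size)
ground_vel[100] = 900                                   # injected "earthquake" peaks
ground_vel[400] = 600

# Mirror the peakfinder settings: require prominence and a minimum
# separation so one earthquake isn't counted several times.
peaks, props = find_peaks(ground_vel, height=500, prominence=200, distance=30)

# A lock "survives" a peak if the IFO is still locked 2 minutes later.
locked = np.ones(t.size, dtype=bool)
locked[402:] = False                                    # pretend the second eq broke the lock
survived = [p for p in peaks if locked[min(p + 2, t.size - 1)]]
print(list(peaks), survived)                            # [100, 400] [100]
```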

This is further reinforced by TJ's alog from earlier today, when SEI_CS transitioning knocked us out of OBSERVE. Unless SEI_CS got dropped off the exclude list recently, I suspect we never noticed because we were losing lock before the seismic system switched.

There have been no changes to the SEI eq controls. I looked at the small eq Camilla noted this morning and didn't see anything suspicious in the seismic systems, but I'll keep digging.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 09:46, Friday 09 June 2023 (70301)

I will also add that these ~0.5 micron/s eq-band velocities are a level that the primary microseism can hit for a week or two at a time during the winter -- conditions that were already challenging for us in prior runs.

evan.hall@LIGO.ORG - 12:34, Monday 12 June 2023 (70369)ISC

Another possible culprit from around the time of April 7: removing 12 dB of low-frequency (< 1 Hz) gain from the Michelson loop to try to reduce the amount of sensor noise reinjection in the GW band (LHO:68432).

It would be a relatively simple test to (1) increase the EPICS gain of the loop by 3 dB to place the UGF back around 10 Hz, and then (2) re-engage the 4:1 Hz boost (LSC-MICH2 FM3). If the noise increase in DARM is acceptably low, perhaps you could run like this for a week to see if the duty cycle improves.
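For scale, the dB figures in Evan's proposal convert to linear gain factors as below (plain 20*log10 amplitude convention; just an arithmetic check, not site code):

```python
import math

def db_to_linear(db):
    """Convert an amplitude gain in dB to a linear factor (20*log10 convention)."""
    return 10 ** (db / 20.0)

# The proposed +3 dB EPICS gain bump, and the 12 dB of low-frequency
# gain that was removed from the Michelson loop in April.
print(round(db_to_linear(3), 3))     # 1.413
print(round(db_to_linear(12), 3))    # 3.981
```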

brian.lantz@LIGO.ORG - 13:28, Monday 12 June 2023 (70373)ISC

If this extra gain helps during EQs, it could also be made part of the EQ guardian response.

brina.martinez@LIGO.ORG - 16:12, Monday 12 June 2023 (70383)

Brina, Sheila,

We ran some excitations while locked, need to review plots (will come back to alog to update info)

Images attached to this comment
H1 SQZ (ISC, SQZ)
victoriaa.xu@LIGO.ORG - posted 16:51, Thursday 08 June 2023 - last comment - 15:06, Monday 12 June 2023(70289)
Fast SQZ thermalization in DARM at different IFO powers, comparing sqz vs. no-sqz

I looked again at DARM at 60W vs. 75W, comparing sqz and no-sqz for both. Each DTT trace uses 8 seconds, 50% overlap, 100 averages.

Lighter colors: 60W times from 68251, w/ 15 dB generated sqz.

Darker colors: 76W times from 6/4, ~3 hours into lock (semi-thermalized, past the fast initial thermalization).

See the summary page with this time window -- comparing red vs. black in this screenshot, I don't see obvious excess noise with the squeezer on below 50Hz, when we are past the fast initial thermalization.

Re: "fast initial SQZ thermalization" -- For reference, see the V-shape in blue that DARM takes with the fast 0.5-1 hour SQZ thermalization from a cold IFO (even one unlocked for just 1-2 hours). Notably, in the first 0.5-1 hour, especially if the IFO was cold, the fast-thermalizing-SQZ (blue) exceeds the no-sqz noise (red) both below 50 Hz and above 1 kHz. I think this is related to the rapidly thermalizing SRCL detuning, which rotates squeezing across the band for the first 0.5-1 hour. To fix this SRCL detuning problem for sqz misrotation, I think we'd need to thermalize the SRCL offset; it wouldn't be sufficient to thermalize the sqz angle. If we have to pick an angle, we should pick the sqz angle that optimizes the bucket noise ~100 Hz; this is the most stationary as SRCL thermalizes. I think for calibration, Jeff sees this too, as wildness in the rapidly thermalizing sensing function at the start of locks. This effect seems more prominent from a cold IFO; the V-shape is more muted when the IFO relocks quickly (not much down/cold time).
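For intuition on why a thermalizing SRCL detuning is so damaging, the idealized lossless single-mode squeezing formula shows how quickly a quadrature misrotation trades squeezing for anti-squeezing. The 15 dB level and the angles below are illustrative only (real detected squeezing is lower due to losses):

```python
import math

def sqz_noise_rel_shot(sqz_db, theta_rad):
    """Quantum noise power relative to shot noise for a squeezed state with
    sqz_db of squeezing at theta=0, misrotated by theta (lossless model):
    N = e^{-2r} cos^2(theta) + e^{+2r} sin^2(theta)."""
    s = 10 ** (-sqz_db / 10.0)          # squeezed-quadrature power
    a = 1.0 / s                         # anti-squeezed quadrature power
    return s * math.cos(theta_rad) ** 2 + a * math.sin(theta_rad) ** 2

# Perfectly rotated: the full 15 dB of squeezing.
print(round(-10 * math.log10(sqz_noise_rel_shot(15, 0.0)), 1))              # 15.0

# A 15 degree misrotation flips the band to ~3.3 dB *above* shot noise.
print(round(10 * math.log10(sqz_noise_rel_shot(15, math.radians(15))), 1))  # 3.3
```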

Note the excess noise 20-50 Hz in the 60W vs. 75W configuration. Brina is now looking into understanding the calibration status and LSC coherences for these sqz and no-sqz times.

Images attached to this report
Comments related to this report
brina.martinez@LIGO.ORG - 13:28, Friday 09 June 2023 (70291)SQZ

Here are a few plots comparing the LSC coherences from MICH, PRCL, and SRCL between these times with/without SQZ. The SQZ/no-SQZ plots for the same days look similar in some regions, but there is more coherence in the 76 W dates.

(just updated the image to match the range from the DARM plots and updated the H1 live to a recent time today (06/09/23)).

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:29, Thursday 08 June 2023 (70292)

Additional screenshot including DARM 76W at 20mA here, no-sqz = dark green ; fds = purple. Looks like these DARM times were taken during a contrast defect scan, seen by squeezer too.

Images attached to this comment
brina.martinez@LIGO.ORG - 15:06, Monday 12 June 2023 (70380)

Couldn't edit my previous comment to update the image so here are the updated coherence plots.

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 16:09, Thursday 08 June 2023 (70288)
OPS Thursday Eve shift start

TITLE: 06/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 11mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

LHO General
thomas.shaffer@LIGO.ORG - posted 16:03, Thursday 08 June 2023 (70281)
Ops Day Shift Summary

TITLE: 06/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
SHIFT SUMMARY: Arrived with ALS Y having issues relocking (more details in alog 70278: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=70278). I'm still not entirely sure what I did to get it to lock, albeit with a poor power, but it was enough to get the IFO to come back up. After a lock loss from an earthquake, this ALS Y issue seemed to be gone. This makes me think that it was very much alignment related, but I'm not sure exactly where. We are now back at NLN and Observing.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
10:50 TCS Camilla CR N Turned off ITMX and ITMY HWS SLED 11:00
16:31 FAC Karen Opt Lab n Tech clean 16:32
16:31 FAC Kim High bay n Tech clean 17:01
18:14 VAC Janos Garb room n Grabbing dust monitor 18:16
20:38 VAC Travis Garb room n Looking for dust mon parts 20:42
H1 General (Lockloss)
thomas.shaffer@LIGO.ORG - posted 14:06, Thursday 08 June 2023 (70287)
Lock loss 2058 UTC

Lock loss 1370293139

5.6 earthquake from Aleutian islands. We transitioned to earthquake mode, but we didn't make it through.
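As a cross-check, the GPS stamp converts back to the UTC time in the title with a minimal GPS-to-UTC conversion (hard-coding the 18 leap seconds in effect in 2023; real tooling such as tconvert handles leap seconds properly):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset as of 2023

def gps_to_utc(gps_seconds):
    """Convert a GPS timestamp to UTC (valid while GPS-UTC = 18 s)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

print(gps_to_utc(1370293139))  # 2023-06-08 20:58:41+00:00, matching "Lock loss 2058 UTC"
```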

H1 OpsInfo (GRD, SEI)
thomas.shaffer@LIGO.ORG - posted 14:02, Thursday 08 June 2023 (70286)
Added SEI_CS to IFO node exclude list

We had a transition to earthquake mode that brought us out of observing. This was from the SEI_CS node not being excluded in the IFO node.

Why haven't we seen this before? Jim is finding that we haven't been able to even transition to earthquake mode before we lose lock recently, possibly since we went up in power. I'd look for an alog from him in the near future though.

H1 DAQ
david.barker@LIGO.ORG - posted 11:59, Thursday 08 June 2023 (70283)
offload of raw minute trend files from h1daqtw0 to h1daqframes-0 complete

WP11245

Jonathan, TJ, Dave:

The offload of the past 6 months of raw minute trend files from h1daqtw0 SSD-RAID to h1daqframes-0 HDD-RAID is complete.

The copy took from Tue 11:08 - Wed 12:28 = 25hrs 20min

This morning I reconfigured nds0 to serve these data from their permanent archive location and restarted the nds.

The deletion of the old files from the tw0 SSD ran from 09:06 to 11:15 this morning; it took 2 hrs 9 min.
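The quoted durations check out; a quick datetime sanity check (dates inferred from the posting day, Thu Jun 8):

```python
from datetime import datetime

# Copy: Tue 11:08 to Wed 12:28 local time, spanning one day
copy = datetime(2023, 6, 7, 12, 28) - datetime(2023, 6, 6, 11, 8)
print(copy)    # 1 day, 1:20:00 -> 25 hrs 20 min

# Deletion: 09:06 to 11:15 the same morning
delete = datetime(2023, 6, 8, 11, 15) - datetime(2023, 6, 8, 9, 6)
print(delete)  # 2:09:00
```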

I'm keeping WP11245 open since the reconfiguration of h1daqnds0's daqdrc has only been done so far via manual edits, next Tuesday Jonathan will add this to the DAQ puppet.

Thu08Jun2023
LOC TIME HOSTNAME     MODEL/REBOOT
08:41:24 h1daqnds0    [DAQ]
 

H1 General (ISC)
thomas.shaffer@LIGO.ORG - posted 10:37, Thursday 08 June 2023 (70278)
Relocking notes
LHO VE
david.barker@LIGO.ORG - posted 10:13, Thursday 08 June 2023 (70280)
Thu CP1 Fill

Thu Jun 08 10:08:41 2023 INFO: Fill completed in 8min 41secs

 

Images attached to this report
H1 General (SEI)
camilla.compton@LIGO.ORG - posted 08:06, Thursday 08 June 2023 - last comment - 13:07, Thursday 08 June 2023(70269)
OPS Owl Shift Summary

TITLE: 06/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY: Not much observing time this shift with lots of time spent at OMC_WHITENING with high violins and 3 locklosses. Currently stuck locking Y-arm and passing off to TJ.
LOG:

H1 range channels and SenseMon are down (70271). Attaching an ISC_LOCK state plot as a replacement.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 13:07, Thursday 08 June 2023 (70285)

Nothing has changed on the SEI side of things that would affect our earthquake robustness. We should have been able to ride out this eq; SEI_ENV didn't even transition until 5 secs after the IFO lost lock, but we have survived eqs this size without the transition before. I'm starting to look to see if we are falling off a PD somewhere or something.

On the attached screenshot, the left column is some IFO channels, the right column some SEI channels. I don't have much to say at this point, but none of the SEI channels look crazy; the IFO is just starting to see the eq when we lose lock.

Images attached to this comment
H1 ISC (DetChar, FMP, OpsInfo, SEI, SYS)
jeffrey.kissel@LIGO.ORG - posted 17:17, Wednesday 07 June 2023 - last comment - 07:27, Friday 09 June 2023(70258)
Debating Between 60W and 75/76W: Lack of Duty Cycle is NOT related to Input Power Choice
J. Kissel

There's an on-going debate whether we prefer 76 W (now 75W) PSL input power vs. 60 W PSL power, because we think that we had better noise performance at 60 W. It's a great debate, and I'm all for it. One thing I want to make super clear -- the lack of duty cycle during the engineering run and through the start of O4 has *NOT* been because of the increase in power from 60W to 76W. We've had one heck of a couple of months in terms of 
    (1) An insidiously slow drift in yaw on all of our HAM ISIs due to an innocent oversight of a foton-induced copy-and-paste error in ISI RZ blend filters (now fixed)
    (2) Literally every Tuesday for the past 4 Tuesdays, we've had a 1 deg C (~1-2 deg F) rapid temperature excursion in the LVEA, and some not even on Tuesdays
    (3) A particularly earthquake full couple of months
    (4) We've had 3 or 4 inadvertent, system-wide, inter-computer communication "dolphin" crashes, sometimes causing a day of confusion from settings lost
    (5) Several electronics chassis failures, all unexpected

Further, (1) and (2) caused all sorts of apparent trouble that we interpreted as PR3 alignment / ISCT1 alignment troubles, and thus there may be some residual noise knock-on effects as a result.

Indeed, though I don't yet have the quantitative evidence to prove it, I think our issues with (1) and (2) drove us to move PR3 into a different position -- which in turn pushed the arm cavity spots to a different position onto a different point absorber/ acoustic mode situation -- which in turn caused our problems with "a new PI" at the start of the run -- and drove the choice to decrease the ETMX Ring Heater power -- which then drove us to decrease the power from 76W to 75W.

All of these issues cropped up right around the increase in power, and have continued through the start of the run, so I think some -- including myself -- had gathered an incorrect impression that H1's low duty cycle at the start of the run has been because of the power increase. With the trends in this aLOG, I argue it has not.

Check out the attached past 3 months worth of trends, in both relative time axis and absolute UTC time axis. 

Remember, the observing run starts -15 days ago, or on May 24 2023 at 15:00 UTC.

1st panel: IMC input power, in Watts, showing the transition from 60W to 76W

2nd panel: residual HAM-ISI position in RZ, in nanoradians, showing the 10-20 urad drift of the tables due to (1)

3rd and 4th panel: temperature zones in the LVEA, in deg C and deg F, respectively, showing the last 4 Tuesdays' worth of temperature excursions

5th panel: all test mass ring heater power level settings, in Watts showing the early explorations of ring heater settings after power up, and the ETMX reduction during the HAM ISI alignment excursion (the upper half is shown, but both upper and lower halves are set equally each time)
 
6th panel: 0.03 - 0.1 Hz BLRMS of the ground motion at each of the three buildings, showing the "earthquake and wind" band, highlighting (3)

7th panel: PR3's yaw alignment slider, indicating that we've been steering PR3 around all over the place only in the past 4 weeks, likely a result of the ISI yaw drifts (1) and temperature excursions (2).
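The 0.03-0.1 Hz BLRMS in the 6th panel is, conceptually, a bandpass filter followed by an RMS. A minimal sketch on synthetic data (filter order and sample rate here are illustrative; the front-end BLRMS blocks use their own filter designs):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def blrms(x, fs, f_lo, f_hi, order=4):
    """Band-limited RMS: bandpass the signal, then take the RMS."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    y = filtfilt(b, a, x)
    return np.sqrt(np.mean(y ** 2))

# Synthetic ground signal: a 0.06 Hz "earthquake band" line plus a 0.25 Hz
# microseism-band line, sampled at 1 Hz for an hour.
fs = 1.0
t = np.arange(0, 3600, 1 / fs)
x = 100 * np.sin(2 * np.pi * 0.06 * t) + 300 * np.sin(2 * np.pi * 0.25 * t)

# The 0.03-0.1 Hz BLRMS picks out only the in-band line (~100/sqrt(2) ~ 71)
print(blrms(x, fs, 0.03, 0.1))
```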
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:35, Thursday 08 June 2023 (70277)
Adding another dimension to the problem...

I was thinking over night about this, and realized "well, maybe there's *one* part of the power increase that has impacted duty cycle; the 11 Hz ring ups from PRCL going unstable thru too much gain."

But, I've added the PRCL gain adjustments to the series of trends -- see new attachments in relative time and UTC time -- and I think the new-ish PRCL instabilities can also be explained by drifts and changes in alignment.
- We started using the THERMALIZATION guardian around Apr 24. This slowly ramps the PRCL2 gain through the thermalization period

- We made a change, *reducing* the PRCL2 end-point setting on May 21 -- but this was reactive to the time when the ISI YAW drifts were at maximum. 

- We then restored that end-point setting on May 26

- Then, on May 31, after starting to have trouble with 11 Hz ring ups of PRCL gain, we adjusted the "base" PRCL1 gain from 1.0 to 1.5 -- but this was reactive, coming after several days of the LVEA temperature drifting around between May 25 and May 31, and an especially bad Tuesday on May 29th.

- On Jun 4th, the LVEA temperature controls settle on a "new normal" and we "find" we need to increase the "base" PRCL set point again to 1.7 on June 6

- Then on Jun 7th, we reset the "base" PRCL1 gain to 1.0, but instead increase the thermalization set point higher.

My vote is the following: we take the hit in time that this will mean:
- Restore (or change to re-create) the LVEA temperature to the values we had consistently for months up until May 10th. Since the LVEA is en masse cooler than before, we can use the individual zone heater settings to bring each zone back *up* to its "prior to May 10th" value.
- Once that's settled, we re-align PR3 YAW to the slider values we had up until May 10th of 151.6 "urad," and run an initial alignment.
- Once that's settled, we go out to ISCT1 and re-reset the alignment of the table (though it's not reproducible, hopefully, doing so will get us back to the ISCT1 alignment we've had for many moons prior to all this mess).
- Once that's settled, restore ETMX ring heater to its value of 1.3 W.
- Once that's settled, we go back to the May10th era PRCL gains and THERMALIZATION guardian set points of PRCL1 = 1.0 and PRCL2 = 23.0.

If all that works, then we re-calibrate PR3's sliders, optical levers, and OSEMs.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:19, Thursday 08 June 2023 (70279)
Since it's perhaps quite tough to see all these traces stacked vertically on top of each other (a regrettable "must," because the timing of all these changes is important to the story), I've captured this epic ndscope session as a .yaml template. 

/ligo/home/jeffrey.kissel/2023-06-07/
    2023-06-07_3motrend_IMCPWR_ISIYAW_LVEATEMP_GNDBLRMS_PR3YAW_PRCGAINs.yaml

I can't attach a file with the ".yaml" or ".yml" extension to the aLOG, but it's linux, so I've just changed the file extension to .txt (because, in the end, it *is* just a text file). So if you'd like to download it from here and look around, download and then change the extension back to ".yaml".
Non-image files attached to this comment
richard.mccarthy@LIGO.ORG - 07:27, Friday 09 June 2023 (70297)

With the temperatures more stable than they had been, we are waiting to get the cooling coil strainers cleaned before attempting other changes. The major fluctuations we were seeing were caused primarily by a cooling coil not getting to temperature, creating an issue for multiple days. Once the strainers are clean, temperature control can be changed to try and recreate previous zone temperatures. Though it was nice to have all zones grouped together for once.

H1 General
ezekiel.dohmen@LIGO.ORG - posted 09:11, Tuesday 06 June 2023 - last comment - 08:53, Friday 09 June 2023(70179)
Filter Stage Enable Mismatch
There are a number of models with filter-module stages that are engaged but have no filter defined for that stage. This causes some confusion when scanning for unresponsive filter stages, as these will clutter the results. 
Attached is an example of what this looks like on the medm screen. FM2 is enabled, but nothing is loaded in that stage so the 2nd box never turns green. 

My guess is that these stages used to have a filter defined for them, but it was removed. The solution is to finish the removal of these stages, by disabling the stage and saving the new state with SDF. 

Full listing of filters/stages that are enabled but don't have the enabled stage defined in the filter file. 

h1susetmy : [('SUS-ETMY_L2_DAMP_MODE19', 'FM1'), ('SUS-ETMY_L2_DAMP_MODE9', 'FM4')],

h1sussqzin : [('SUS-ZM1_M1_WD_OSEMAC_RMSLP_LL', 'FM6'), ('SUS-ZM2_M1_LOCK_L', 'FM6')]

h1hpietmx : [('HPI-ETMX_3DL4CINF_C_Z', 'FM1'), ('HPI-ETMX_3DL4CINF_C_Y', 'FM1'), ('HPI-ETMX_3DL4CINF_C_X', 'FM1'), ('HPI-ETMX_3DL4CINF_B_Z', 'FM1'), ('HPI-ETMX_3DL4CINF_B_Y', 'FM1'), ('HPI-ETMX_3DL4CINF_B_X', 'FM1'), ('HPI-ETMX_3DL4CINF_A_Z', 'FM1'), ('HPI-ETMX_3DL4CINF_A_Y', 'FM1'), ('HPI-ETMX_3DL4CINF_A_X', 'FM1')]

h1lsc : [('LSC-EXTRA_AI_2', 'FM2')]

h1lscaux : [('LSC-LOCKIN_1_DEMOD_9_I', 'FM1'), ('LSC-LOCKIN_1_DEMOD_9_Q', 'FM1')]

h1oaf : [('OAF-CAL_SUM_DARM_L1', 'FM3'), ('OAF-CAL_SUM_DARM_L1', 'FM6'), ('OAF-CAL_SUM_DARM_L3', 'FM3')]

h1calcs : [('CAL-CS_DARM_ANALOG_ETMY_L1', 'FM4')]

h1susproc : [('SUS-ETMY_L2_DAMP_MODE19_BL', 'FM1'), ('SUS-ITMY_L2_DAMP_MODE18_BL', 'FM1'), ('SUS-ITMY_L2_DAMP_MODE19_BL', 'FM1')]

Edited to remove any filter stages under local control. 
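The scan producing the listing above amounts to comparing engaged stages (from the filter-bank switch state) against the stages actually defined in the foton filter file. A toy version of that comparison, with hypothetical bank contents standing in for the real SWSTAT readbacks and filter-file parse:

```python
# Engaged filter-module stages, per bank (e.g. read from SWSTAT EPICS records)
engaged = {
    "LSC-EXTRA_AI_2": {"FM1", "FM2"},
    "SUS-ETMY_L2_DAMP_MODE19": {"FM1"},
}

# Stages actually defined in the foton filter file for each bank
defined = {
    "LSC-EXTRA_AI_2": {"FM1"},
    "SUS-ETMY_L2_DAMP_MODE19": set(),
}

def empty_but_enabled(engaged, defined):
    """Return (bank, stage) pairs that are switched on but hold no filter."""
    return sorted(
        (bank, fm)
        for bank, fms in engaged.items()
        for fm in sorted(fms)
        if fm not in defined.get(bank, set())
    )

print(empty_but_enabled(engaged, defined))
# [('LSC-EXTRA_AI_2', 'FM2'), ('SUS-ETMY_L2_DAMP_MODE19', 'FM1')]
```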
Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 10:16, Tuesday 06 June 2023 (70184)

Some filters have front-end control over which filter stages are on. Typically, more than one filter stage is reserved for, say, an automatic boost, but only a subset is actually loaded. So, a filter may be on, but empty. This is the expected behaviour. Changing which filter stages are participating would be a front-end model change.

A better solution may be to turn the 2nd box on for empty filters, when they are on.

jeffrey.kissel@LIGO.ORG - 12:16, Wednesday 07 June 2023 (70239)OpsInfo, SUS
After reviewing all the "SUS" filter banks mentioned above, these empty filter banks are either on because they were turned on by mistake (the ZM1 and ZM2 filters) and blindly accepted into SDF, or the bank had never been in active use for control or monitoring (the violin MODE control and monitoring filters), so someone was probably playing around with a filter design and cleared it out but forgot to turn off the filter. 

In all SUS cases, the filter module should be turned off and accepted as such in SDF.

We'll make a point to clean this up and clear out the confusion next Tuesday, or during a next convenient lock loss.
jeffrey.kissel@LIGO.ORG - 15:14, Wednesday 07 June 2023 (70247)CDS, SQZ
h1sussqzin ZM1 and ZM2 filters in question have been turned off -- see LHO:70245.
jeffrey.kissel@LIGO.ORG - 15:48, Wednesday 07 June 2023 (70252)CDS, SEI
h1hpietmx filters in question have been turned off -- see LHO:70251.
jeffrey.kissel@LIGO.ORG - 16:04, Wednesday 07 June 2023 (70256)CAL, CDS
The h1oaf and h1calcs filters in question have been turned OFF -- see LHO:70255.
jeffrey.kissel@LIGO.ORG - 08:26, Friday 09 June 2023 (70299)CDS, SUS
The h1susetmy and h1susproc filter modules in question have been addressed -- see LHO:70264.
brian.lantz@LIGO.ORG - 08:53, Friday 09 June 2023 (70300)

To Daniel's point - another choice is to populate the filter with a gain=1 stage. Then it turns on, but doesn't do anything. SEI does this with some of the calibrations, e.g. FM1 is the manufacturer's calibration, and FM2 is to tweak the calibration based on measurements. If the sensor is very close to spec, FM2 can just be a gain=1. Then all the automation works more smoothly.

LHO FMCS
bubba.gateley@LIGO.ORG - posted 08:09, Wednesday 31 May 2023 - last comment - 17:14, Thursday 08 June 2023(70037)
HVAC in LVEA
This will be a catch-up Alog as it dates back to last week. It started with trying to adjust the airflows (per Robert S.) and the rising temps in the LVEA, so I started investigating and found a large amount of condensate on the floor of AHU-1. We (Tyler, Randy, Chris and I) cleaned up all of the water with wet/dry vacuums and then adjusted mechanical linkages on Fan 1 & 2 to achieve the desired airflows. 
At that time, it was discovered that cooling coil 4 was much warmer than the other coils. I was not sure what other adjustments had been made by others so I continued to monitor. I took Friday off but was checking coil 4 and talking with Richard throughout the day. Towards the end of the day coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped. 

Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature. 

Because of the rising temps last week, many of the zone temperatures were lowered to try and help with the warmer areas. These setpoints have all been returned to the original desired temps. 

A work permit will be put in soon to inspect and clean all remaining strainers on the next Tuesday maintenance period.
Comments related to this report
camilla.compton@LIGO.ORG - 17:16, Wednesday 31 May 2023 (70056)

Adding plots of LVEA temperature over the last 10, 16 and 30 days. With cross-hairs showing dates temperatures changed. This plot is available with command: ndscope /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc32/VEA_temperatures.yaml

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:14, Thursday 08 June 2023 (70284)CDS, DetChar, FMP, ISC, OpsInfo, SYS
Adding some quantitative time and information to this aLOG series.

First, I cover the HVAC system and the channels that I'm using for metrics of the LVEA HVAC system.
The LVEA has 2 giant cooling actuators, or "air handlers" that live in the Mechanical Room, +X -Y of the beam splitter.
These air handlers add together to cool the LVEA as a common actuator.
There are separate, local *heaters* in each zone of the LVEA that serve as "differential" actuators. 

The system is depicted pretty well in the MEDM overview screen found under sitemap > "FMCS OVERVIEW" > "Air Handler 1,2"

The channels that are useful metrics of the system are listed below with their function and units
External, outside the building temperature
    H1:PEM-CS_TEMP_ROOF_WEATHER_DEGF       Temperature on the LVEA building roof, in deg F (look for corresponding "DEGC" channels for deg C)

LVEA temperature sensors << our primary "canary in the coal mine" indicator that the suspensions will likely be misaligned
    H0:FMC-CS_LVEA_ZONE1A_DEGF    Temperature, in deg F, around BSC2 (think H1 Beamsplitter and ITMs)    
    H0:FMC-CS_LVEA_ZONE1B_DEGF    ", ", around 3-IFO Area (think old H2 beamsplitter)
    H0:FMC-CS_LVEA_ZONE4_DEGF     ", ", Output arm (think SR3)
    H0:FMC-CS_LVEA_ZONE5_DEGF     ", ", Input arm (think PR3)

Local, corresponding heater units
    H0:FMC-CS_LVEA_HEATER_ZONE1A_PC    percentage of heating power being applied to the zone
    H0:FMC-CS_LVEA_HEATER_ZONE1B_PC
    H0:FMC-CS_LVEA_HEATER_ZONE4_PC
    H0:FMC-CS_LVEA_HEATER_ZONE5_PC

Common HVAC Air Handler 1 -- often referred to as just "AHU1"
    These channels are human-controllable via the HVAC control system
    H1:FMC-CS_LVEA_AH_DAMPER_1_PC            "Percentage of open" Intake air reduction "damper" valve (0% full closed, 100% full open)
    H0:FMC-CS_LVEA_AH_COOLTEMP_1_DEGF        Temperature of the "Cooling Coil"
    H0:FMC-CS_LVEA_AH_COOLTEMP_2_DEGF
    H0:FMC-CS_LVEA_AH_AIRFLOW_1              Output Air flow into the LVEA in cubic feet per minute (cfm, or CFM)
    H0:FMC-CS_LVEA_AH_AIRFLOW_2

Common HVAC Air Handler 2 -- often referred to as just "AHU2"
    These channels are human-controllable via the HVAC control system
    H0:FMC-CS_LVEA_AH_DAMPER_2_PC
    H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF
    H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF
    H0:FMC-CS_LVEA_AH_AIRFLOW_3
    H0:FMC-CS_LVEA_AH_AIRFLOW_4

The story starts earlier than the adjustment to the air-handler units to reduce the overall cooling airflow into the LVEA. Given that IFO problems with temperature started in late April 2023, here's the timeline of every event.

Apr 26 2023 10:56 PDT (Wednesday, mid-morning)
    Air handler 2's damper has a step-change in behavior, going from diurnal fluctuations between 50% and 80% open, to between 60% and fully 100% open.
    This only causes a minor "glitch" change in the LVEA temperatures, mostly in the diurnal fluctuations getting "reset," changing the diurnal pattern but no overall average temperature change.

May 02 2023 07:57 PDT (Tuesday, first-thing, minutes before the official start of maintenance day)
    Air handler 2's damper, and corresponding cooling temperature 3&4 glitches for 30 minutes. For the most part, the damper returns to the normal 50% and 80% open behavior
    This only causes a minor change in the LVEA temperatures, again in the diurnal fluctuations changing their pattern but no overall average temperature change.

May 10 2023 13:46 PDT (Wednesday afternoon)
aLOG: LHO:69516
Air handler unit 2, AHU2 fails and turns off.
Channels showing this:
    H1:FMC-CS_LVEA_AH_DAMPER_2_PC         Goes fully closed, to 0%
    H0:FMC-CS_LVEA_AH_AIRFLOW_3           Goes to zero, as airflow ceases
    H0:FMC-CS_LVEA_AH_AIRFLOW_4           Goes to zero, "
    H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF     Goes up from stable at 50-60 deg F to really high at ~65 deg F
    H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF

    This causes a major mean-value excursion in LVEA temperature, but after the air handler function is restored, all zones come "back to normal" by May 11 2023 12:38 PDT, ~24 hours later.

May 11 2023 12:38 PDT
With Air Handler 1 restored, the LVEA temperatures return to normal, but the airflow is now much larger, with Fan 1 producing ~12500 cfm worth of flow, where it used to put out only 6000 cfm.
Channels that show this:
    H0:FMC-CS_LVEA_AH_AIRFLOW_1
    
    This doesn't change the LVEA temperature, but means that there's a lot more air flowing into the LVEA. Maybe this is what caused Robert to pay attention?
    
    
May 17 2023 09:42 PDT (Saturday morning)
The damper for Air Handler Unit 2 starts to open up, pinned at 100%, and has stayed like this since, only occasionally coming off the rails to 80% in the first few days.
Channels showing this
    H1:FMC-CS_LVEA_AH_DAMPER_2_PC

    This causes a marked change in Zone 4, a 0.5 deg F decrease in temperature. That's not a lot, but worth calling out. The diurnal pattern of Zone 4 changes as well. Further, the "cooling coils" for AHU2 also start to run warmer, increasing from (diurnal fluctuations around) 51 deg F to (diurnal fluctuations around) 53 deg F.

Then we get to where Bubba Says - "It started with trying to adjust the airflows (per Robert S.)"

May 24 2023 08:48 PDT (Tuesday maintenance)
In reference to LHO:69894, where Bubba says:

"Fans 1 & 2 [of Air Handler Unit 1] supplying the LVEA were readjusted to [reduced] airflows for the Observation Run. 
 The **total** CFM for AHU 1 was reduced from ~ 26,000 to ~ 11,200 CFM"

With a bit more detail,
 - Fan 1 (H0:FMC-CS_LVEA_AH_AIRFLOW_1) is reduced from ~12,500 cfm back to its level prior to the May 11 2023 air handler fix, ~6,000 cfm.
 - Fan 2 (H0:FMC-CS_LVEA_AH_AIRFLOW_2) is reduced dramatically from ~11,000 cfm, the value it had held for a long time, down to ~4,500 cfm.
Thus Bubba's statement about the total: from (12500 + 11000 =) ~23,500 cfm down to (6000 + 4500 =) ~10,500 cfm.
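The total-flow bookkeeping here can be double-checked with a few lines of arithmetic (a minimal sketch; these are the approximate trend values quoted in this entry, not Bubba's exact readbacks):

```python
# Sanity check of the fan-flow totals quoted above (values in cfm).
# Approximate trend values from this entry, not exact readbacks.
before = {"fan1": 12500, "fan2": 11000}   # flows before the May 24 readjustment
after = {"fan1": 6000, "fan2": 4500}      # flows after the readjustment

total_before = sum(before.values())
total_after = sum(after.values())
reduction_pct = 100 * (total_before - total_after) / total_before

print(total_before, total_after, round(reduction_pct))  # 23500 10500 55
```

So the readjustment roughly halved the AHU1 flow into the LVEA, consistent with Bubba's 26,000 to 11,200 cfm figures within the precision of these trends.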

As a result, we land on Bubba's statement "At that time, it was discovered that cooling coil 4 was much warmer than the other coils" (a consequence of the May 17 2023 change in damper behavior), because both coil 3 and coil 4 of Air Handler 2 (H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF and H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF) are at the higher ~53 deg F level.
   
   This dramatic airflow change causes the LVEA temperatures to merge high for a bit, until May 24 18:38 PDT, when Zone 4 (think SR3), the output arm, gets warmer to meet the other zones.

May 24 2023 22:30 PDT to May 25 2023 06:52 PDT (Late Night Tuesday to Early Morning Wednesday)
Air Handler 1's cooling coils are slowly stepping up in temperature (by the control system? by a human?) from 46 deg F to as much as 54 deg F.
   
   Somehow, this slowly starts to bring all temperatures *down* in the LVEA in all zones together until the next morning at May 25 2023 06:45 PDT.

May 25 2023 06:52 PDT (Early morning Wednesday)
Air Handler 1's cooling coils 1 and 2 get restored back to 46 deg F, and the heater in Zone 1A kicks on for 2.5 hours. (By the control system? By a human?)

   This brings all temperatures *up* in the LVEA to a relatively high value of 69 deg F until another change on May 26 2023 15:56 PDT.

May 26 2023 15:56 PDT (Friday Afternoon)
Air Handler 2 Fan 4's air flow drops to zero, and cooling coil 3 and 4 temperature drops from 54 deg F to 49 deg F.

These are Bubba's actions when he says "Towards the end of the day [Friday Afternoon, May 26th] coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped."

   This brings the temperatures collectively *down* in the LVEA from ~69 deg F to 67 deg F, closer to the "normal" values for the LVEA, though Zone 5 (think PR3) and Zone 1B (think 3IFO) are lower than their "normal" values.

May 30 2023 09:50 PDT - 11:51 PDT (Tuesday, Maintenance Day)
All hell breaks loose with the HVAC as the Fire Alarm system overrides the HVAC controls -- only really mentioned in the operator log during maintenance that day (see LHO:70003) -- and drives TJ to investigate drifts in temperature (LHO:70000), thinking "it's happening again!!"

Also, meanwhile, Bubba mentions his action:
" Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature. "

    However, upon restoration of the HVAC settings, Air Handler 2's "Fan 4 is turned off" setting from May 26 2023 (Friday Afternoon) gets lost, and Fan 4 comes back on at full blast at 12,000 cfm. Further, Air Handler 1's cooling coils come back on much hotter than they've ever come on before, changing from 45 deg F to 54 to 65 deg F.

    Once AHU2 Fan 4 comes back on, and air flow to the LVEA re-increases, the temperatures in the LVEA begin to drop again, tanking down as low as 66 deg F.

May 31 2023 06:23 PDT (Early Wednesday Morning)
    Zone 4 and Zone 5 heaters come into play (by a human? or by the control system?), turning on for the first time, with Zone 4 at 40% and Zone 5 left to oscillate diurnally between 20% and 40%.

    This brings all LVEA temperatures back up to a really tight cluster, for the first time in months around 67 deg F. This is great, we should hold here if we can.

June 04 2023 18:34 PDT (5 days later, Sunday Evening)
For reasons I can't determine (no other visible change in the metrics I've been using), Zone 1B (think 3IFO) jumps up in temperature 0.5 deg F from 67.2 deg F to 67.7 deg F.

    This is a pretty inconsequential change in temperature that shouldn't affect suspensions.
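As an aside, a step change like this can be localized in a trend by scanning for the split point that maximizes the before/after mean difference. The sketch below is purely illustrative, uses synthetic data, and assumes nothing about site tooling; real trends would come from the H0:FMC-CS_LVEA_* channels.

```python
# Illustrative (not site code): locate a step change in a temperature
# trend by finding the split that maximizes |mean(after) - mean(before)|.
def step_index(samples):
    """Return (index, gap) for the split maximizing the mean difference."""
    best_i, best_gap = None, 0.0
    for i in range(1, len(samples)):
        before = sum(samples[:i]) / i
        after = sum(samples[i:]) / (len(samples) - i)
        gap = abs(after - before)
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i, best_gap

# Synthetic Zone 1B trend: flat at 67.2 deg F, then a 0.5 deg F step up.
trend = [67.2] * 10 + [67.7] * 10
i, gap = step_index(trend)
print(i, round(gap, 2))  # 10 0.5
```

On real minute-trend data one would average over longer windows to suppress the diurnal cycle before looking for the step.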

June 06 2023 09:53 PDT (2 days later, Tuesday Maintenance Day)
    The fire alarm maintenance team is back at it, and this causes the intake damper of Air Handler 2 (H0:FMC-CS_LVEA_AH_DAMPER_2_PC) to drop to zero.

    This causes the entire LVEA temperature to rise again from the stable 67 deg F period that started on May 31 2023 06:23 PDT, with zones going as high as 69 deg F.

June 06 2023 11:52 PDT (same day)
    The operations team realizes the fire alarm closed the AHU2 damper again and calls in facilities to re-open it; the temperatures come back down, but overshoot because the Zone 1B and Zone 4 heater configuration got lost.

June 06 2023 17:02 PDT (same day, but later in the evening)
    Someone, or something, turns the Zone 1B and Zone 4 heater configuration back on.

    Temperatures return to the "good" configuration established by the May 31 2023 06:23 PDT change.
Images attached to this comment