TITLE: 08/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
- EX saturation @ 11:10/1:48/2:13 UTC
- Lockloss @ 5:56 - new record for an O4 lock at LHO - 60:29!
- Issues getting light on the ALS Y PD - might be related to the chiller issue we had earlier today?
LOG:
No log for this shift.
ALSX-REFL_A_DC
Has no voltage and low power.
Running the Dither Align script on TMSX and ITMX in hopes that it would fix the issue with ALSX ReflPD A
It did not.
I still have an issue with ReflPD A and PDH.
To get the X arm ReflPD A issue resolved:
I had initially thought that it was something that wasn't getting power or voltage, but since I was able to close the shutter and see a change in H1:ALS-X_REFL_A_DC_POWER, I realized that everything is indeed working correctly; it's just very poorly aligned.
I tried rolling the alignment back to where it had been before the oplevs moved, about 4 hours prior.
I tried baffle dithering twice, following both this alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=62024
and the Dithering instructions found here: https://cdswiki.ligo-wa.caltech.edu/wiki/Troubleshooting%20the%20IFO
This made the flashes I was seeing brighter, but the signal was still too low for the REFLPD A error 8 to be resolved.
I drove to site so I could get my eyes on how everything was behaving, and so I could have more screen real estate to look at more trends.
Trending ISC_LOCK back to the last Initial_Alignment gave me a GPS time of 1376843785 or so. Once I had that, I put all of ETMX, ITMX & TMSX back to that location.
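For reference, this kind of trending can also be done from Python. A minimal sketch only (the actual trending here was done with the usual control-room trend tools), assuming NDS access and that H1:GRD-ISC_LOCK_STATE_N is the ISC_LOCK guardian state channel:

# Hedged sketch: pull the ISC_LOCK guardian state around the quoted GPS time
# with gwpy and list the state transitions, to bracket when the last initial
# alignment actually ran. The channel name is an assumption.
import numpy
from gwpy.timeseries import TimeSeries

gps_guess = 1376843785            # GPS time quoted above
span = 4 * 3600                   # look a few hours on either side

state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N',
                       gps_guess - span, gps_guess + span)

# print every time the guardian state number changed
changes = numpy.flatnonzero(numpy.diff(state.value)) + 1
for idx in changes:
    print(state.times[idx], state.value[idx])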
I then started an initial alignment, stopped it at SRC align, and walked through the rest of initial alignment.
Locking Started at 10:46 UTC
Had to accept this SDF to get to observing.
Observing Reached !!!
Lockloss @ 5:56 UTC, DCPD saturation right before. Unknown cause. New record for an H1 lock during O4 - 60:29!
H1 is still chugging along, and has officially set a new O4 lock record of 57:30 (and counting)! Barring a few EX saturations, systems appear stable, and ground motion/wind is low.
At 9:45am local time Chiller 1 at End Y went into alarm for "Evaporator Water Flow Lost". When I arrived at the EY chiller yard I observed that neither chiller was running but chilled water pump 1 was continuing to run. I noted the alarm and headed for the mezzanine above the AHUs to assess the supply and return pressures nearest the evaporator coil. Immediately I read 0 (or what I thought was 0) at the return line. This would generally indicate that there has been enough glycol loss within the system that makeup is necessary via the local tank (though I've never seen it get to 0) - until I read that both supply lines were at an alarming 140 psi (normal operating pressures for all 4 supply and return lines float around 30).

I immediately phoned Richard to have him command chilled water pump 1 off to stop the oversupply of chilled water. For reasons not clear to me, the disable command via FMCS Compass was not taken at the pump. I went back to the chiller yard and observed that 1) the pump had not been disabled and 2) pressures at the pump were at around 100 psi (normal operating for the current frequency is about 50). Following that, I manually threw the pump off at the VFD to prevent further runaway of the system. Between the time of noting 140 psi and manually throwing the pump off, the system pressure increased to 160 psi.

After a thorough walk-down of the system, I elected not to utilize our designed redundancy in chiller 2 and chilled water pump 2, as I was still unaware what was causing the massive overpressure at all supply and return lines. It was also found that the return line was not actually at 0, but instead had made a full rotation and was pegged on the backside of the needle (all of these gauges need replacement now).

Macdonald-Miller was called on site to help assess what the issue might be. Given that there were recent incursions to flow via R. Schofield, the strainer was the primary point of concern. We flushed the strainer briefly at the valve and noted a large amount of debris. After a second flush, much less/next to none was noted. This alleviated the system pressure substantially. The exact cause of the fault and huge increase of pressure is still not clear. There are a number of flow switches at the chiller; Bryan with Mac-Miller suspects part of the issue may live there, and we are going to pursue this further during our next maintenance window. Work was also performed at the strainer within the chiller, where rubber/latex-esque debris was found.

Work on Chiller 1 is to continue, but for now the system and end station are happy on Chiller 2/CHWP2. Looking at the FMCS screen shows temps have normalized as of the writing of this log.

T. Guidry, B. Haithcox, R. Thompson, C. Soike, R. McCarthy
Closes 26247, last completed in alog 72330
VAC_FAN5_170_2 has had a few spikes over the course of the last week, but the rest of the fans appear stable.
Inspired by Valera's alog demonstrating the improvement in the LLO sensitivity from O3b to O4a (LLO:66948), I have made a similar plot.
I chose a time in February 2020 for the O3b reference. I'm not aware of a good time without calibration lines, so I used the CALIB_STRAIN traces from both times. Our range today just got a new bump up, either due to the commissioning work today or the EY chiller work (72414). Note: I am quoting the GDS calib strain range!
I am adding a second plot showing O3a to O4a as well, using a reference time from May 2019.
There is a significant amount of work that gave us this improvement. I will try to list what I can recall:
TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
H1's been locked 53.5hrs (1hr from our O4 longest lock *knock on wood*).
Main news of today was EY Chiller Pump #1 going down. Tyler, as well as a contractor, spent most of the last 6hrs dealing with this, and we are currently set and running with Chiller Pump #2.
There were 2hrs of commissioning by Gabriele & Elenna as well.
We appear to have a ~5Mpc increase in range....waiting to hear if this is due to the commissioning or to running on the different chiller pump!
LOG:
Recently we've been receiving groups of SubGRBs or long GRBs from our igwn-alert subscription system that are often old or repeated events. Since we don't normally react to these and will ignore them, we don't even need to have them update in EPICS. I've added more filtering for these types of events in igwn_response.py and also in tests.py for VerbalAlarms. The new code is running for both of these systems.
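For illustration, the added check is roughly of the following form; this is a minimal sketch, and the names and payload keys here are assumptions rather than the actual igwn_response.py code:

# Hedged sketch of alert filtering: drop notice types we never react to, and
# drop repeats of events already handled, before anything is pushed to EPICS
# or VerbalAlarms. Keys and type labels below are assumed, not verified.
IGNORED_TYPES = {'SubGRB', 'SubGRBTargeted'}   # alert types we routinely ignore
_seen_ids = set()                              # events already processed this session

def should_update_epics(alert):
    """Return True only for alerts that should be surfaced to operators."""
    event_type = alert.get('alert_type') or alert.get('group')
    if event_type in IGNORED_TYPES:
        return False                           # ignored notice type: drop it
    event_id = alert.get('superevent_id') or alert.get('graceid')
    if event_id in _seen_ids:
        return False                           # repeated/old event: drop it
    _seen_ids.add(event_id)
    return True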
Another related issue I'm looking into is why we haven't received any superevents from our igwn-alert listener in maybe a month. I don't think the issue is on our end, but I have a few more checks to run. We still get phone calls in the control room, so operators are still informed.
TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
- H1 has been locked for 53.5 hours, all systems appear stable
Jim installed and updated the HAM1 HEPI feedforward, which he describes in alog 72393. The HAM1 to ASC feedforward has not been updated in some time, and Jim's results showed some difference in CHARD P from his HEPI FF update. Jenne and Jim turned off the HAM1 ASC FF a few days ago (72395) to gather data for retraining.
I used data from that time to retrain the HAM1 to CHARD P, PRC2 P, and INP1 P feedforward. All new filters are labeled with today's date, "0825". I turned on these new filters and saw a small improvement in the CHARD P and INP1 P error signals. There was no evident improvement in DARM. We are not limited by CHARD P in DARM at this time, so I am not that surprised (see third attachment in 72245).
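The underlying idea of the retraining, sketched very roughly below: with the feedforward off, estimate the transfer function from the HAM1 motion witness to each ASC error signal, then fit a filter that cancels it. This is an illustration only; the actual retraining uses the site's own fitting tools and channels, and the arrays and sample rate here are placeholders.

# Illustrative sketch (not the actual retraining code): Wiener-style transfer
# function estimate from a HAM1 witness to an ASC target, taken from FF-off data.
import numpy as np
from scipy import signal

fs = 512.0                       # assumed sample rate of the downsampled data
# witness, target = ...          # load HAM1 witness and e.g. CHARD P error from frames

def estimate_ff_tf(witness, target, fs, nperseg=4096):
    """Transfer function estimate witness -> target, to be cancelled by the FF."""
    f, Pww = signal.welch(witness, fs, nperseg=nperseg)
    _, Pwt = signal.csd(witness, target, fs, nperseg=nperseg)
    return f, Pwt / Pww           # H(f) = CSD / PSD

# The feedforward filter is then (minus) a rational fit to H(f) over the band
# where the coherence is good, installed in the SEIPROC filter bank.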
The attached plot shows all ASC REFL loop error signals. The blue trace is the previous HAM1 feedforward in use, and the red live trace shows the new feedforward on.
The filters are not guardian controlled. I updated the SDF observe file in the SEIPROC model. I then asked Corey to take us to the "SDF to SAFE" guardian state so I could also update the SEIPROC safe file. However, that guardian state does not change the SEIPROC model to safe, so I had to change to the safe table by hand. Here are the steps:
"SDF restore screen">"! select request file">choose "safe.snap">open>"load table". Once the diffs are accepted and confirmed, follow those steps to load the "observe.snap" file. I took a screenshot of the safe SDFs I accepted before hitting "confirm".
[Elenna, Dan, Gabriele]
We tested a filter (FM8 in DARM2) that increased the DARM gain below 3-4 Hz, where most of the RMS is accumulated.
The DARM RMS is reduced by a factor of 3. There is no immediately evident effect on the DARM noise above 10 Hz. However, it would be useful to test this new filter for a longer time in the future.
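One way to see where the RMS accumulates and to quantify the factor-of-~3 reduction is a cumulative RMS of the control signal; a minimal sketch, with data loading and channel choice left as placeholders:

# Hedged sketch: cumulative RMS (integrated from high frequency downward) of a
# DARM control/error time series, before and after engaging the FM8 filter.
import numpy as np
from scipy import signal

def cumulative_rms(x, fs, nperseg=16384):
    """RMS accumulated from the top of the band down to each frequency bin."""
    f, psd = signal.welch(x, fs, nperseg=nperseg)
    df = f[1] - f[0]
    cum = np.sqrt(np.cumsum(psd[::-1])[::-1] * df)
    return f, cum

# f, rms_before = cumulative_rms(darm_before, fs)
# f, rms_after  = cumulative_rms(darm_after, fs)
# Most of the RMS appears below 3-4 Hz; the lowest-frequency value of the
# "after" curve should sit roughly a factor of 3 below the "before" curve.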
Tagging the Cal group, since I think we'd like them to weigh in on when they might have availability to recalibrate using this new filter.
We have implemented this filter today and a calibration sweep was run with the filter on to determine the changes to the calibration.
The new filter is in FM8 of DARM2, and will be engaged in lownoise length control along with another DARM res g that is engaged.
I accepted the SDF in observe and loaded the guardian.
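For context, a minimal sketch of what engaging such a filter from a guardian state amounts to, assuming the standard ezca filter-switch interface; the actual lownoise length control code and ordering may differ:

# Hedged sketch, not the actual ISC_LOCK guardian code. 'LSC-DARM2' and 'FM8'
# come from the entry above; ezca is provided by the guardian environment.
ezca.switch('LSC-DARM2', 'FM8', 'ON')    # engage the new low-frequency DARM boost
# ...then continue with the rest of the lownoise length control state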
Weekly FAMIS Task
1. Alerts from running the script:
ITMX_ST2_CPSINF_H1 high freq noise is high!
2. All other "floors" from the spectra look normal.
[Elenna, Gabriele]
We tried increasing the SRCL gain by a factor of 2. As expected, SRCL_IN got better while SRCL_OUT did not change. No effect on DARM RMS.
Gain is back to nominal
As the title says, we retuned the MICH feedforward, and the new filter performs better at all relevant frequencies.
Guardian has been updated to engage FM9 instead of FM8.
Quoting Elenna: "It's been 0 days since we retuned the LSC FF"
I have accepted the SDF diff in both OBSERVE and SAFE. Forgot to screenshot both times, sorry.
Process for accepting in SAFE:
Select "SDF_TO_SAFE" guardian state in ISC_LOCK
Wait for SDF table to switch to safe
Search for my SDF diff in the LSC table by sorting on substring
Accept diff
Confirm
Select "Nominal Low Noise" in ISC_LOCK guardian
J. Kissel, for T. Guidry, R. McCarthy

Just wanted to get a clear, separate aLOG in regarding what Corey mentioned in passing in his mid-shift status LHO:72423: The EY HVAC Air Handler's chilled water pump 1 of 2 failed this morning, 2023-08-25 at 9:45a PDT, and thus the EY HVAC system was shut down for repair at 17:35 UTC (10:35 PDT). The YVEA temperature is therefore rising as it equilibrates with the outdoor temperature; thus far from 64 deg F to 67 deg F. Tyler, Richard, and an HVAC contractor are on it, actively repairing the system, and I'm sure we'll get a full debrief later. Note -- we did not stop our OBSERVATION INTENT until 2h 40m later, at 2023-08-25 20:18 UTC (13:18 PDT), when we went out to do some commissioning.
The work that they've been doing so far today to diagnose this issue has been in the 'mechanical room'. Their work should not add any significant noise beyond what already occurs in that room at all times, so I do not expect any data quality issues as a result of this work. But, we shall see (as Jeff points out) if there are any issues from the temperature itself changing.
They are done for the weekend and temperatures are returning to normal values.
Chiller Pump #2 is the one we are now running.
Chiller Pump #1 will need to be looked at some more (Tyler mentioned the contractor will return on Tues).
Attached is a look at the last 4+yrs and both EY chillers (1 = ON & 0 = OFF).
See Tyler's LHO:72444 for a more accurate and precise description of what happened to the HVAC system.