TITLE: 08/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 14mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
- H1 has just lost lock as I am typing this...cause unknown
- SEI motion is a bit elevated in the EQ band, hovering around 0.1 μm/s
Happened to look up and see a "Stand Down" state. Did not receive any alerts for this. The GraceDB page for this event lists the "GRB-SHORT" as NOTGRB. Here's more info on this particular event (E432741): https://gracedb.ligo.org/events/E432741/view/
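For future reference, here is a minimal sketch (assuming the ligo-gracedb Python client and standard credentials; not something run during the shift) of pulling this event's details and labels straight from GraceDB:

from ligo.gracedb.rest import GraceDb

client = GraceDb()  # defaults to the production server at gracedb.ligo.org

# Full event record for the external trigger that caused the stand-down.
event = client.event("E432741").json()
print(event.get("pipeline"), event.get("search"), event.get("extra_attributes"))

# Labels (e.g. the NOTGRB designation) live on a separate endpoint.
labels = client.labels("E432741").json()
print(labels)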
PSL FAMIS report:
Laser Status:
NPRO output power is 1.829W (nominal ~2W)
AMP1 output power is 67.15W (nominal ~70W)
AMP2 output power is 135.5W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 20 days, 6 hr, 1 min
Reflected power = 16.33W
Transmitted power = 109.2W
PowerSum = 125.5W
FSS:
It has been locked for 0 days 10 hr and 48 min
TPD[V] = 0.9056V
ISS:
The diffracted power is around 2.4%
Last saturation event was 0 days 10 hours and 48 minutes ago
Possible Issues: None
H1's been locked for 8.5 hrs. At around 16:30 UTC (9:30am PT) H1 took a step down in range. There was also a brief drop out of Observing due to the Squeezer (noted earlier).
At 17:41:44, H1 was dropped out of OBSERVING...after quickly taking OBSERVATORY_MODE to "COMMISSIONING", H1 was ready to go back to OBSERVING at 17:42:21.
Using guardctrl, I could see there was a brief SDF diff due to syscssqz, but I am not sure how to find which channel dropped us out of Observing. I ran guardctrl on the squeezer nodes and a few other Guardian nodes, but could not find the culprit (see attached).
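For next time, here is a minimal sketch of one way to narrow this down by trending Guardian node OK flags around the 17:41:44 UTC drop. The channel names assume the usual H1:GRD-<NODE>_OK convention and the node list is a guess, so treat them as placeholders:

from gwpy.timeseries import TimeSeriesDict

nodes = ["ISC_LOCK", "SQZ_MANAGER", "SQZ_FC", "DIAG_SDF", "DIAG_MAIN"]
channels = [f"H1:GRD-{node}_OK" for node in nodes]

start = "2023-08-26 17:40:00"
end = "2023-08-26 17:44:00"

data = TimeSeriesDict.get(channels, start, end)
for name, ts in data.items():
    # A node whose OK flag drops in this window is a candidate for the culprit.
    if ts.value.min() < 1:
        print(f"{name} dropped OK in this window")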
Sat Aug 26 10:10:48 2023 INFO: Fill completed in 10min 44secs
TITLE: 08/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 11mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
H1's been locked 4 hrs. H1 range continues at the increased value of ~150 Mpc, as it was at the end of the last lock. There was a Labs Dust alarm needing to be acknowledged (it had been OK for the last 3 hrs). Winds are slowly beginning to pick up.
Hanford site road work continues on a Saturday on Route 10 (crack sealing), and I was turned around and had to take the long way around to get to work (this work is normally M-Th, 7am-3pm, but they've been working Fri & Sat this week).
h1digivideo3's memory usage is getting high; we will keep an eye on it and perhaps restart a process during a target of opportunity.
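As a rough idea of what "keeping an eye on it" could look like, here is a minimal sketch (assuming psutil is available on h1digivideo3; the threshold and cadence are placeholders) for periodically logging memory usage:

import time
import psutil

THRESHOLD = 90.0  # percent used; placeholder value

while True:
    mem = psutil.virtual_memory()
    print(f"{time.ctime()}  used={mem.percent:.1f}%  available={mem.available/1e9:.2f} GB")
    if mem.percent > THRESHOLD:
        print("Memory above threshold -- consider restarting the camera server process.")
    time.sleep(600)  # check every 10 minutes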
TITLE: 08/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
- EX saturation @ 11:10/1:48/2:13 UTC
- Lockloss @ 5:56 - new record for an O4 lock at LHO - 60:29!
- Issues getting light on the ALS Y PD - might be related to the chiller issue we had earlier today?
LOG:
No log for this shift.
ALS-X_REFL_A_DC has no voltage and low power.
Ran the Dither Align script on TMSX and ITMX in hopes that it would fix the issue with ALS X REFL PD A.
It did not.
I still have an issue with REFL PD A and PDH.
To get the X arm REFL PD A issue resolved:
I had initially thought that it was something that wasn't getting power or voltage, but since I was able to close the shutter and see a change in H1:ALS-X_REFL_A_DC_POWER, I realized that everything is indeed working correctly; it's just very poorly aligned.
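For the record, this shutter test can also be confirmed offline. Below is a minimal sketch (assuming gwpy/NDS2 access; the time window is a placeholder) of trending the PD channel across the shutter close:

from gwpy.timeseries import TimeSeries

# Placeholder window: a few minutes spanning the shutter close/reopen.
start = "2023-08-25 09:00:00"
end = "2023-08-25 09:10:00"

refl = TimeSeries.get("H1:ALS-X_REFL_A_DC_POWER", start, end)

# If the PD chain is alive, the power should step when the shutter closes and
# reopens, even if the absolute level is low because of the bad alignment.
plot = refl.plot(ylabel="ALS X REFL A DC power")
plot.savefig("alsx_refl_shutter_test.png")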
I tried rolling the alignment back to a few hours earlier, before the oplevs had moved (about 4 hours prior).
I tried baffle dithering twice, following both this alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=62024
and the Dithering instructions found here: https://cdswiki.ligo-wa.caltech.edu/wiki/Troubleshooting%20the%20IFO
This made the flashes I was seeing brighter, but the light level was still too low for the REFL PD A error to be resolved.
I drove to site so I could get my eyes on how everything was behaving, and so I could have more screen real estate to look at more trends.
Trending ISC_LOCK back to the last INITIAL_ALIGNMENT gave me a GPS time of about 1376843785. Once I had that, I put all of the ETMX, ITMX & TMSX sliders back to their values at that time.
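For reference, here is a minimal sketch of that restore step (assuming gwpy/NDS2 access and pyepics; the slider channel names follow the usual SUS OPTICALIGN convention but should be double-checked before writing anything):

from gwpy.timeseries import TimeSeries
from epics import caput  # pyepics; writing requires EPICS access from the workstation

IA_GPS = 1376843785  # GPS time of the last initial alignment (from trending ISC_LOCK)

sliders = [
    "H1:SUS-ETMX_M0_OPTICALIGN_P_OFFSET",
    "H1:SUS-ETMX_M0_OPTICALIGN_Y_OFFSET",
    "H1:SUS-ITMX_M0_OPTICALIGN_P_OFFSET",
    "H1:SUS-ITMX_M0_OPTICALIGN_Y_OFFSET",
    "H1:SUS-TMSX_M1_OPTICALIGN_P_OFFSET",
    "H1:SUS-TMSX_M1_OPTICALIGN_Y_OFFSET",
]

for chan in sliders:
    # Grab a few seconds of data around the reference time and use the mean value.
    data = TimeSeries.get(chan, IA_GPS, IA_GPS + 4)
    value = float(data.mean().value)
    print(f"{chan}: restoring to {value:.1f}")
    caput(chan, value)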
I then started an initial alignment, stopped it at SRC align, and walked through the rest of initial alignment by hand.
Locking Started at 10:46 UTC
Had to accept this SDF to get to observing.
Observing Reached !!!
Lockloss @ 5:56 UTC, DCPD saturation right before. Unknown cause. New record for an H1 lock during O4 - 60:29!
H1 is still chugging along and has officially set a new O4 lock record of 57:30 (and counting)! Barring a few EX saturations, systems appear stable, and ground motion/wind is low.
At 9:45am local time, Chiller 1 at End Y went into alarm for "Evaporator Water Flow Lost". When I arrived at the EY chiller yard I observed that neither chiller was running but chilled water pump 1 was continuing to run. I noted the alarm and headed for the mezzanine above the AHUs to assess the supply and return pressures nearest the evaporator coil. Immediately I read 0 (or what I thought was 0) at the return line. This would generally indicate that there has been enough glycol loss within the system that makeup is necessary via the local tank (though I've never seen it get to 0). Then I read that both supply lines were at an alarming 140 psi (normal operating pressures for all 4 supply and return lines float around 30).

I immediately phoned Richard to have him command chilled water pump 1 off to stop the oversupply of chilled water. For reasons not clear to me, the disable command via FMCS Compass was not taken at the pump. I went back to the chiller yard and observed that (1) the pump had not been disabled and (2) pressures at the pump were around 100 psi (normal operating pressure for the current frequency is about 50). Following that, I manually threw the pump off at the VFD to prevent further runaway of the system. Between the time of noting 140 psi and manually throwing the pump off, the system pressure increased to 160 psi.

After a thorough walk-down of the system, I elected not to utilize our designed redundancy in chiller 2 and chilled water pump 2, as I was still unaware what was causing the massive overpressure at all supply and return lines. It was also found that the return line was not actually at 0, but instead had made a full rotation and was pegged on the backside of the needle (all of these gauges need replacement now).

Macdonald-Miller was called on site to help assess what the issue might be. Given that there were recent incursions to flow via R. Schofield, the strainer was the primary point of concern. We flushed the strainer briefly at the valve and noted a large amount of debris; after a second flush, much less/next to none was noted. This alleviated the system pressure substantially. The exact cause of the fault and huge increase in pressure is still not clear. There are a number of flow switches at the chiller; Bryan with Mac-Miller suspects part of the issue may live there, and we are going to pursue this further during our next maintenance window. Work was also performed at the strainer within the chiller, where rubber/latex-esque debris was found.

Work on Chiller 1 is to continue, but for now the system and end station are happy on Chiller 2/CHWP2. The FMCS screen shows temps have normalized as of the writing of this log.

T. Guidry, B. Haithcox, R. Thompson, C. Soike, R. McCarthy
Closes 26247, last completed in alog 72330
VAC_FAN5_170_2 has had a few spikes over the course of the last week, but the rest of the fans appear stable.
Inspired by Valera's alog demonstrating the improvement in the LLO sensitivity from O3b to O4a (LLO:66948), I have made a similar plot.
I chose a time in February 2020 for the O3b reference. I'm not aware of a good time without calibration lines, so I used the CALIB_STRAIN traces from both times. Our range today just got a new bump up, either due to the commissioning work today or the EY chiller work (72414). Note: I am quoting the GDS calib strain range!
I am adding a second plot showing O3a to O4a as well, using a reference time from May 2019.
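For anyone who wants to reproduce the comparison, here is a minimal sketch (assuming gwpy/NDS2 access to H1:GDS-CALIB_STRAIN; the reference times below are placeholders, not the ones used in the attached plots):

from gwpy.timeseries import TimeSeries
from gwpy.plot import Plot

# Placeholder 10-minute stretches from each era.
o3b = TimeSeries.get("H1:GDS-CALIB_STRAIN", "2020-02-15 12:00", "2020-02-15 12:10")
o4a = TimeSeries.get("H1:GDS-CALIB_STRAIN", "2023-08-25 12:00", "2023-08-25 12:10")

asd_o3b = o3b.asd(fftlength=8, overlap=4)
asd_o4a = o4a.asd(fftlength=8, overlap=4)
asd_o3b.name = "O3b (Feb 2020)"
asd_o4a.name = "O4a (Aug 2023)"

plot = Plot(asd_o3b, asd_o4a, xscale="log", yscale="log")
ax = plot.gca()
ax.set_xlim(10, 5000)
ax.set_ylabel(r"ASD [1/$\sqrt{\mathrm{Hz}}$]")
ax.legend()
plot.savefig("h1_o3b_vs_o4a_asd.png")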
There is a significant amount of work that gave us this improvement. I will try to list what I can recall:
TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
H1's been locked 53.5 hrs (1 hr from our longest O4 lock, *knock on wood*).
Main news of today was EY Chiller 1 / chilled water pump 1 going down. Tyler as well as a contractor spent most of the last 6 hrs dealing with this, and we are currently set up and running with Chiller 2 / pump 2.
There was 2hrs of commissioning by Gabriele & Elenna as well.
We appear to have a ~5 Mpc increase in range... waiting to hear if this is due to the commissioning or to running on the different chiller pump!
LOG:
Recently we've been receiving groups of SubGRB or long GRB events from our igwn-alert subscription system that are often old or repeated events. Since we don't normally react to these and will ignore them, we don't even need to have them update in EPICS. I've added more filtering for these types of events in igwn_response.py and also in the tests.py for VerbalAlarms. The new code is running for both of these systems.
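As an illustration of the kind of filter that was added (this is a sketch, not the actual igwn_response.py code; the search names and payload fields are assumptions based on the GraceDB external-event schema):

IGNORED_SEARCHES = {"SubGRB", "SubGRBTargeted", "GRB_Long"}  # placeholder names

def should_ignore(payload: dict) -> bool:
    """Return True for external-event alerts we do not react to."""
    event = payload.get("object", {})
    search = event.get("search", "")
    # Skip sub-threshold and long GRB notices before touching the EPICS records.
    return search in IGNORED_SEARCHES

# Example: a SubGRB notice would be filtered out.
print(should_ignore({"object": {"search": "SubGRB"}}))  # True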
Another related issue I'm looking into is why we haven't received any superevents from our igwn-alert listener in maybe a month. I don't think the issue is on our end, but I have a few more checks to do. We still get phone calls in the control room, so operators are still informed.
Upon the lockloss, we had the same issue with ALS Y not catching any light. I used the same solution as Tony did this morning, restoring the ETMY/ITMY/TMSY sliders to their values at the last time an initial alignment was completed, ~1376843785. After playing with the sliders for a bit once this was restored, I was able to get light on the PD. Since this is the second time we have had to restore sliders, perhaps this is evidence of some slow drift (possibly related to the chiller failure) going on at EY? Will investigate this further.
Update:
Was able to catch ALS Y and lock it, but had a few subsequent locklosses following LOCKING ALS. I will try running another initial alignment to see if this helps. ALS Y still had issues catching on IA.
I tried moving ITMY just a hair; flashes would get right below, if not at, 1 on the LOCKING green arms ndscope template, but when the WFS would turn on it would immediately lose lock. After a few trial-and-error attempts, I was able to resolve this by turning OFF the WFS DOF 1/2 Y (found under the EY overview) immediately when they get turned on in ENABLE_WFS. Then, after 5 seconds or so, I would turn ON WFS DOF 2 Y first, then WFS DOF 1 Y, which allowed the WFS to converge without breaking the lock.
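To capture the timing of that sequence, here is a minimal sketch with pyepics. The WFS switch channel names are placeholders, not the real EPICS records, so treat this as an illustration of the order and delays only:

import time
from epics import caput

WFS_DOF1_Y = "H1:ALS-Y_WFS_DOF_1_Y_SW"  # placeholder channel name
WFS_DOF2_Y = "H1:ALS-Y_WFS_DOF_2_Y_SW"  # placeholder channel name

# Immediately after ENABLE_WFS turns the loops on, switch both off ...
caput(WFS_DOF1_Y, 0)
caput(WFS_DOF2_Y, 0)

# ... wait ~5 seconds for the flashes to settle ...
time.sleep(5)

# ... then bring them back in the order that worked: DOF 2 first, then DOF 1.
caput(WFS_DOF2_Y, 1)
time.sleep(1)
caput(WFS_DOF1_Y, 1)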
Had to touch SRM a bit in P for SRC to catch, but otherwise the rest of the initial alignment went unaided.
Attached is a 24-hour trend of the witness sensors and slider values for ETMY/TMSY and the oplev trends for ETMY. Looking at the scope, I'm not seeing a whole lot of drift over the course of the day, so I'm a bit perplexed as to why we ran into the same issue again this afternoon, since obviously there's some issue with the Y arm alignment.
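For completeness, here is a minimal sketch of how that 24-hour trend can be regenerated (assuming gwpy/NDS2 access; the witness/oplev channel names follow the usual SUS conventions but may need adjusting, and the window is a placeholder):

from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    "H1:SUS-ETMY_M0_DAMP_P_IN1_DQ",        # ETMY top-mass pitch witness
    "H1:SUS-ETMY_M0_OPTICALIGN_P_OFFSET",  # ETMY pitch slider
    "H1:SUS-TMSY_M1_DAMP_P_IN1_DQ",        # TMSY top-mass pitch witness
    "H1:SUS-ETMY_L3_OPLEV_PIT_OUT_DQ",     # ETMY optical lever pitch
]

# 24 hours ending near the afternoon lockloss.
data = TimeSeriesDict.get(channels, "2023-08-25 22:00", "2023-08-26 22:00")

# One panel per channel so the slow drifts can be compared by eye.
plot = Plot(*data.values(), separate=True, sharex=True)
plot.savefig("etmy_tmsy_24h_drift.png")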