Recovery from the power outage is still ongoing. All planned maintenance day activities have been postponed until recovery is complete. Currently Keita and Peter are tweaking temps in the PMC.
All running now.
Corner--Pushed the Level trip reset button and the Green button on the controllers and the restart was normal, except PS #1 (south unit) was outputting less pressure than the other three. We've seen this before after a restart, and sure enough the pump was noisy, with a gurgling sound. I changed the ACC & DEC parameters on the VFD and spun the pump down slowly. I set these parameters to 600 seconds, but that was still a little fast for the super low UGF of the controller, so I doubled the PID gain to help it out. There was a little glitch as the pump stopped and started spinning backwards before we got the valve closed. After hitting the VFD reset it started back up, with another pressure glitch as the pump started and the output valve was reopened. Pretty sure no one cared at that point. VFD and PID parameters were returned to nominal.
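As a rough illustration of why the 600-second ramp still stressed the loop and why doubling the gain helped (the UGF and ramp numbers below are assumptions for illustration only, not measurements of this controller): for a type-1 loop, the steady-state error while tracking a setpoint ramp is roughly the ramp slope divided by the velocity constant Kv ~ 2*pi*f_UGF, so a slower ramp or a higher gain each shrink the error proportionally.

import math

# Back-of-envelope ramp-tracking estimate; all numbers are assumed for
# illustration and are not measurements of the actual pump station loop.
f_ugf = 0.02          # Hz, assumed (very low) unity-gain frequency of the pressure loop
ramp_time = 600.0     # s, the ACC/DEC setting used for the slow spin-down
delta_setpoint = 1.0  # normalized setpoint change (full speed -> stopped)

slope = delta_setpoint / ramp_time   # setpoint ramp rate, 1/s
kv = 2 * math.pi * f_ugf             # approximate velocity constant of a ~1/f loop
error = slope / kv                   # steady-state tracking error, normalized

print(f"tracking error ~ {error:.1%} of full scale")
print(f"with the PID gain doubled: ~ {error / 2:.1%}")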
Ends--The VFDs were registering a Low Voltage error (not sure why this was not the case at the corner station--maybe because those units have no power to them until the level trip is reset). Clearing this requires an EE-approved panel intrusion to hit the reset, and the normal restart follows.
At End-X the PV did not seem to be holding as it should after initially looking like it was doing fine, so I expected some kind of issue. I made a trip to End-X but it seemed fine at that point. I spun it down anyway and restarted it, and it has seemed fine since.
Travis, the operator, used the wiki instructions and managed okay with them.
This morning after Betsy and Jenne recovered the CS SUS, I tried turning the seismic platforms back on. The HAMs were unusually difficult; the chamber manager was stalling and insisting that the ISI and HPI nodes were stalled. A few things I tried, mostly on HAM2, a few on the others:
1. Init-ing every node.
2. Engaging master switches. Occasionally, the chamber manager would go around behind me and turn them back off.
3. Stopping and restarting guardian nodes. On HAM2 I tried just the chamber manager, then all HAM2 nodes.
4. Stopping and Execing all nodes.
5. Reloading all the code on all 3 nodes.
Finally, the only way I could recover the HAM chambers was to go and engage the master switch on ISI and HPI, put ISI in Damped and HPI in Isolated, then, when HPI was up, put ISI in High_isolated. When that was done, and after hitting INIT on the manager, the chamber manager was happy. I checked HAM4 and HAM6 afterwards and those chamber managers work. The BSC guardians have also been behaving all morning, from what I've seen.
Currently all seismic platforms are up and running.
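For reference, a minimal sketch of that manual recovery sequence driven through the Guardian request channels with pyepics. The node and channel names follow the usual H1:GRD-* pattern but are assumptions here, not verified against the running system, so check them on the MEDM screens before reusing anything like this.

import time
from epics import caget, caput

CHAMBER = "HAM2"  # example chamber

# Assumed Guardian channel naming (H1:GRD-<node>_REQUEST / _STATE_S);
# verify the real node names before use.
isi = f"H1:GRD-ISI_{CHAMBER}"
hpi = f"H1:GRD-HPI_{CHAMBER}"

def request_and_wait(node, state, timeout=300):
    """Request a Guardian state and poll until the node reports it."""
    caput(f"{node}_REQUEST", state)
    t0 = time.time()
    while caget(f"{node}_STATE_S", as_string=True) != state:
        if time.time() - t0 > timeout:
            raise RuntimeError(f"{node} did not reach {state}")
        time.sleep(5)

# The sequence that worked (after engaging the master switches by hand):
request_and_wait(isi, "DAMPED")
request_and_wait(hpi, "ISOLATED")
request_and_wait(isi, "HIGH_ISOLATED")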
Nutsinee, Peter, Jason, Jeff B. Checked the TCS-Y chiller system for leaks. Found no active leak in (1) the chiller unit, (2) the external plumbing, or (3) on the TCS-Y table. There were signs of older leaks on both ends of the RF amp. These were dry stains and I could find no current drips around these fittings. I checked in the table (very limited view) and found no evidence of water pooling inside the table. A check of the plumbing under the beam tube, last week, (from the mechanical building to the vertex) was also negative. We had a quick look at the TCS-X table, and found no apparent problems. Both chillers are up and running. I topped off both chillers so they are at the same levels. Will continue to monitor the system.
RF Oscillator is set at 40.68 MHz.
DC power supplies set at 29.99V and 28A. Output reads 30.00V and 0.75A for TCSX, 30.00V and 0.78A for TCSY.
The rotation stage has been reset for both TCSX and Y (searched for home, then go-to power). TCSX output is now 0.229W.
TCSY laser is currently turned off. It's been off for over a month. I checked with Evan and he said it should be left off for now since no one will play with it anytime soon. To turn it on, just hit the "ON" button on the CO2Y MEDM screen. The controller box is ready to go.
The trigger reference level for the pre-modecleaner was changed from 1.35 to 1.10. This change was accepted in SDF. The reason for the change is that with the previous value, the sequencer for the pre-modecleaner thought it was locked when clearly it wasn't. The lower value was more consistent with observation.
Trigger reference value set back to 1.35 because the pre-modecleaner locked on a non-TEM00 mode after it was changed to 1.10.
Having trouble restarting end Y.
Evan power cycled the Comtrol and the weather station at End-Y. It is now working.
If I remember correctly: h1ecatc1 was running but had some terminals in the INIT ERR state, h1ecatx1 was white on the CDS overview, and h1ecaty1 was still running with all terminals in the OP state. For h1ecatc1, h1ecatx1, and h1ecaty1: opened the system manager from the target directory, activated the configuration, and restarted in run mode; opened each PLC from the target directory, logged in, and ran it (if it was not already running). I believe I also restarted the EPICS IOC on h1ecatx1. Burtrestored all to 5:10 this morning.
'OpsInfo' and 'Lockloss' tags have been added to the LHO and LLO aLOGs. CDS Bugzilla 941 for reference.
... and added OpsInfo task to H1 and LHO sections.
Activity Log: All Times in UTC (PT)
07:00 (00:00) Take over from TJ
07:46 (00:46) ETM-Y saturation
08:24 (01:24) ETM-Y saturation
09:22 (02:22) ETM-Y saturation
09:33 (02:33) Reset timing error on ETM-X
09:53 (02:53) ETM-Y saturation
10:42 (03:42) ETM-Y saturation
12:19 (05:19) ETM-Y saturation
12:39 (05:39) ETM-Y saturation
13:05 (06:05) LLO – Going down for maintenance
13:33 (06:33) Power Hit – Short duration power loss
14:15 (07:15) Richard – Going to End-Y to check end station and reset ESD high voltage
14:26 (07:26) Fire Systems Maintenance on site to test hydrants
14:30 (07:30) S&K Electrical on site
14:48 (07:48) Richard – Back from End-Y
14:50 (07:50) Richard – Going to End-X to check end station and reset ESD high voltage
14:55 (07:55) Ken – Going to End-Y to pull cables from End Station to Ion Pump
14:57 (07:57) Karen & Christina – Cleaning in the LVEA
15:00 (08:00) Turn over to Travis

End of Shift Summary:
Title: 10/20/2015, Owl Shift 07:00 – 15:00 (00:00 – 08:00) All times in UTC (PT)
Support: Richard, Peter
Incoming Operator: Travis
Shift Summary: 13:33 (06:33) – Lost power to the site for a couple of seconds, but long enough to shut down most systems. Starting site recovery. Peter working PSL recovery. Richard working on site recovery.
The power went out this morning. I went to restart the laser. Pretty much all the alarms on the status screen were set. TwinSafe reported that it was okay. The interlock box status LED did not light up in either the on or off position. I reset all the serial ports on the Beckhoff computer and things came back to normal. The regular startup was invoked and the laser came up. I engaged the output of the high voltage power supply. None of the servos were locked by the time I got back to the Control Room, as this requires a front end model restart.
At 0634 this morning the Bonneville Power Administration had a power dip. Not sure yet what happened, but this caused a site-wide 1-second outage at LHO. We are in the process of restoring systems.
At 13:33 (06:33) we lost power to the site for a couple of seconds. Working on recovering the site.
Other than a few ETM-Y saturations, the first half of the shift has been quiet. The IFO has been locked in Observing mode for the past 13 hours. Range is 75 Mpc. Environmental conditions have deteriorated somewhat over the past 4 hours. The wind is still a light (3-7 mph) to gentle (7-12 mph) breeze, with some gusts up near 20 mph. Microseism is elevated but flat over the past 12 hours. There was a 5.4 mag EQ in Chile (R-wave arrival ~10:53 (03:53)).
Reset timing error on H1SUSETMX at 09:33 (02:33).
Title: 10/20/2015, Owl Shift 07:00 – 15:00 (00:00 – 08:00) All times in UTC (PT)
State of H1: At 00:00 (00:00) Locked at NOMINAL_LOW_NOISE, 22.5W, 78Mpc.
Outgoing Operator: TJ
Quick Summary: Wind is a light to gentle breeze; microseism is slightly elevated. IFO in Observing mode. All appears to be normal.
Kyle, Gerardo
Today we applied Ultratape against the spool and both sides of the stiffener around the entire periphery, in effect bagging the inaccessible space between the stiffener ring and the spool. Next, we made a small incision and inserted a 1/4" poly tube to allow us to pump the inside of this "tape bag". The overall seal wasn't very effective due to the lack of access in the area of interest - between the spool and the support gusset, which is also the area of the leak - but we were able to achieve < 300 torr inside the tape bag. As expected, the Y-mid pressure responded accordingly. So, at this point we have the leak location narrowed down to an approx. 1" x 3" area at the overlap of the spool seam weld and the stiffener stitch weld (poor fabrication technique!). We have some ideas for how to better access this area, which we will need to do before applying epoxy or equivalent to seal the leak.
Photos of test setup.
Outstanding detective work! Well done guys. And thank heaven it weren't a gate valve... PS I think we should at least consider running a fusion bead from the inside instead of goop. Section can be vented, right?
Very good that the leak was found. Suggest that the test for ferritic steel be applied to the region around the leak. The test rig for ferritic steel is currently at LLO. It was last used by Tomeka Lewis and Harry Overmier. Useful to know if we are in trouble all around or if this location was an anomaly.
Darkhan, Sudarshan, GregM, RickS
The plots in the first attached multi-page .pdf file use SLMtool data (60-sec. long FFTs) taken during the month of Oct. so far.
The first page shows the time-varying calibration factors.
The next eight pages have two plots for each of the four Pcal calibration lines (36.7 Hz, 331.9 Hz, 1083.7 Hz, and 3001.3 Hz).
The first of each set shows the calibrated magnitudes and phases of the strain at each frequency (meters_peak/meter).
The second plot in each set shows ratios (mag and phase) of the three methods (Cal/Pcal, GDS/Pcal, and Cal/GDS). The center panels (GDS/Pcal) are most relevant because we expect discrepancies arising from the CAL_DeltaL_External calculations not including all the necessary corrections.
The plots in the second multi-page .pdf file show the GDS/Pcal ratios at the four Pcal line frequencies over the time period from Oct. 7 to Oct. 11, with histograms and calculated means and standard errors (estimated as standard deviation of the data divided by the square root of the number of points).
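For clarity, the quoted statistic is just the sample mean with the standard error of the mean; a minimal sketch of that calculation for one line's ratio time series is below (the random numbers stand in for the actual 60-sec SLMtool ratios and are not real data).

import numpy as np

# Stand-in data: in practice this would be the GDS/Pcal magnitude (or phase)
# ratios from the 60-sec FFTs over Oct. 7-11 for one calibration line.
ratio = np.random.normal(loc=1.0, scale=0.02, size=500)

mean = ratio.mean()
std_err = ratio.std(ddof=1) / np.sqrt(ratio.size)  # std. dev. / sqrt(N)

print(f"mean = {mean:.4f} +/- {std_err:.4f}")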
Note that these time-varying factors (kappa_tst and kappa_C) have NOT been applied to the GDS calibrations yet, so we expect the GDS/Pcal comparisons to improve once they are applied.
The difference of ~9% in the 3 kHz line (mean value) probably comes from the foton IIR filtering, which is ~10% at 3 kHz, i.e., the front-end DARM is 10% higher than the actual value. Since online GDS (C00) is derived from the output of the front-end model, it would also show a similar difference. However, the offline GDS (C01 or C02) corrects for this and hence is expected not to show this difference.
I've now checked all the HAMs by requesting offline, waiting for the chamber to go down, then requesting isolated, and they all seem to be working now.
(This was supposed to be attached to my previous log about HAM ISI guardian problems. w/e)
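A minimal sketch of that check loop, again using pyepics against assumed H1:GRD-* chamber-manager channel names (the node names, state strings, and settle time here are guesses for illustration, not the verified configuration).

import time
from epics import caget, caput

# Assumed chamber-manager nodes for the HAM ISIs; verify the real names before use.
CHAMBERS = ["HAM2", "HAM3", "HAM4", "HAM5", "HAM6"]

def cycle_chamber(chamber, settle=120):
    """Request OFFLINE, wait for the chamber to come down, then re-request ISOLATED."""
    node = f"H1:GRD-SEI_{chamber}"
    caput(f"{node}_REQUEST", "OFFLINE")
    time.sleep(settle)                 # crude wait for the platform to come down
    caput(f"{node}_REQUEST", "ISOLATED")
    return caget(f"{node}_STATE_S", as_string=True)

for chamber in CHAMBERS:
    print(chamber, cycle_chamber(chamber))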