J. Kissel, J. Warner, H. Radkins

SEI recovery: After Hugh finished his 6-month maintenance of the HEPI accumulators (stay tuned for a more detailed aLOG), Jim followed by turning on the HEPI pump stations. Once the pump stations were started and the front-ends and guardian machine were up and running, there was little-to-no trouble bringing up the SEI systems. All platforms are isolated as normal now.

SUS recovery: Once the front-ends and guardian machine were up and running, I brought most suspensions up via guardian. We had a little trouble with the Dolphin network in the corner station while resetting the CDS IOP "software" watchdogs, but thankfully it was as simple as turning *on* the Dolphin switches in the corner (they'd just been missed in the infrastructure turn-on phase). It was a little unnerving that the start-up guardian state for the SUS is ALIGNED, which means that as *soon* as all watchdogs are clear, things light up and start sending drive out to the DACs. If everything works well -- and this time it did -- this is fine, but I'd kinda prefer that the start-up state be SAFE, so that the user can control the speed at which things come up (see the sketch below). Something to consider. Also, many thanks to Cheryl and Jenne, who helped us get the IMC locked and happy. All SUS are damping as normal now (Rana and Evan made sure that the new damping filter settings on the QUADs have been restored correctly), and we're in the process of getting the rest of the IFO alignment to a happy state.
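For what it's worth, holding the SUS nodes in SAFE by hand before clearing watchdogs is easy to script. Below is a minimal, hypothetical sketch (not the site procedure), assuming the usual H1:GRD-&lt;node&gt;_REQUEST channel naming and pyepics; the node list is an illustrative subset only, and the string write to the request channel should be confirmed against the real guardian interface before relying on it.

```python
# Minimal sketch (hypothetical): request SAFE on a few SUS guardian nodes
# before clearing watchdogs, so nothing drives the DACs until asked.
# Channel naming H1:GRD-<node>_REQUEST and the node list are assumptions.
from epics import caput

SUS_NODES = ['SUS_ITMX', 'SUS_ITMY', 'SUS_ETMX', 'SUS_ETMY']  # example subset

for node in SUS_NODES:
    chan = 'H1:GRD-{}_REQUEST'.format(node)
    caput(chan, 'SAFE')                 # hold the node in SAFE
    print('Requested SAFE on', chan)
```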
There has been an instrument air alarm periodically for MX since the start of my shift. It seems to have come back after the power outage at a lower level than before (~60 psi instead of ~80 psi). I talked to Gerardo about it, and he didn't seem too worried. I have emailed Bubba to look at it tomorrow. OPS takeaway: it is OK to ignore this alarm (which is a MINOR alarm) for now, unless it becomes MAJOR.
David.M, Filiberto.C, Jenne.D
All of the L4C interfaces (D1002739) were tested using the L4C Interface test plan (T1600185). These were the chassis with serial numbers s1600252, s1600253, s1600254, s1600255, and s1600256. All of the channels were working fine except for the L4C 3 & 4 channels on chassis s1600253, which had an incorrect frequency response. On closer inspection we noticed that this board was missing the capacitor at C8 (see the schematic) for both channels. These capacitors have now been added. The chassis was retested and all channels are now functioning properly. I've attached the test reports for each chassis.
Attached are the temperature and RH plots from the data logger inside the LVEA 3IFO Desiccant Cabinet during the outage. Since the logger sits inside a closed cabinet with a continual nitrogen boil-off, this is not a true measurement of the LVEA temperature, but it should give an idea of how the temperature changed over the weekend.
[from an email by Jeff Bartlett]
Per Kyle's request I have temporarily forced the PT100 Cold Cathode gauge on in the Beckhoff controls software (see attached).
Thanks!
(All time in UTC)
Quick Summary:
14:00 Chris S. to both end stations to retrieve generators
15:19 Fil + Alfredo to End stations to turn on facility related stuff
15:28 Hugh begins working on WP#5910
16:00 John et al. to LVEA
TJ + Richard to End stations to power up front end computers
Locking gate valves open (John, Chandra)
16:05 Switching from generators to site power
16:15 PSL, AUX online
Fil back
16:19 Chandra + John out
Corner station Beckhoff running
16:30 Chandra to MX, EX to plug in VAC system
16:34 Richard + TJ done with EX, going to MX, EY
16:41 Patrick restoring EPICS IOC h0epics2
16:49 EPICS gateway is up -- nds1 is working (Dave) -- it doesn't work, by the way
17:00 Richard + TJ back
Richard + Jason turning on high voltage (CER upstairs)
17:09 CDS front end is ready to go
Guardian medm starts up
17:10 Jason + Ed recovering PSL
17:16 Nutsinee to mezzanine and LVEA to recover TCS CO2
All corner station high voltage is on
17:20 PCal EX recovery (Darkhan)
17:43 PCal EY recovery (Darkhan)
18:01 Corner station Dolphins are alive.
Hugh done at EX, heading to EY
18:10 EX ESD is good to go
18:15 Corner station HEPI pump is good (Jim)
DAC restart (Dave)
18:17 Power cycle Beckhoff chassis and PD amplifier (Fil)
18:22 TCS CO2 is ready and fired (NK)
18:26 Fil out of EY, heading to EX
18:42 Fil restarted EX End station 2 Beckhoff chassis
18:49 Hugh back from EY. Jim recovering HEPI pumps.
18:55 Richard to LVEA to reset PSL emergency stop.
Hugh and Jim to EY to restart BRS.
SEI is good to go
19:02 All SUS M0 DAMP P,Y are messed up. Rana recovered the setting. Diff accepted in safe.snap (FM1, FM2, FM5, FM6, FM8, FM10 gain = -1)
19:08 Rich + Terra to beer garden (ITMX ESD driver work)
19:22 BRS are up (Hugh+Jim)
19:34 Rich+Terra out
19:43 Terra to beer garden
19:49 Betsy to EX to power cycle EX PUM BIO I/O chassis
Terra out
20:00 Jeff B to EX > EY > LVEA
20:02 Fil rebooted Beckhoff chassis
20:13 Darkhan back. PCal Y not working
20:16 Betsy back
20:23 Rich + Daniel putting chassis back (beer garden)
20:29 Chandra to Mid stations and End stations to install some plugs.
20:32 Jeff B out of EX, heading to EY
20:38 Fil and Patrick to EY (Beckhoff problem)
20:45 Hugh+Krishna+Michael to EY (BRS)
20:56 Jason+Ed out
21:02 JeffB out
21:05 Rich+Daniel out
21:16 Begin IMs alignment. Referencing values from alog26916.
21:35 Hugh+Krishna+Michael to EX (BRS)
21:36 Rich to beer garden (ESD electronics)
21:50 Jeff B to LVEA to obtain weekend temperature data
21:53 Chandra to BSC8 to take pictures of RGA.
22:03 Jeff B out.
22:06 Fil+Patrick back. EY Beckhoff is up and running.
22:20 All IMs are restored to their good values.
Jenne working on aligning MC optics.
22:32 Kyle to retrieve a piece of electronics
22:39 Kyle back
21:55 Disable ESD to do PI measurement (Terra+Carl -- both end stations)
23:02 IFO_ALIGN slide bars put back in place. Referencing values from alog27557.
Summary
On Friday, Pcals were turned off for the June 3-5 power outage (LHO alog 27558).
Details
After finding out that the PcalY PDs were not measuring any power, I went to End-Y and checked the following:
From this limited information, and under the assumption that nothing is broken in the Pcal system, one possible source of the issue is that control signals are not reaching the Pcal Interface Module through the EtherCAT connector. The laser on/off and power level (5.0 V) signals should come through this interface (the Pcal Interface Chassis back-side board circuit is given in D1400149).
The issue is currently under investigation.
Filed FRS 5651
Filiberto, Darkhan,
Summary
Details
As reported in the original entry above, there was an issue with turning on the PcalY laser and operating the shutter. The issue was discovered yesterday around 11am. After that I came back to get breaker circuits to proceed with the investigation, but it turned out they were not needed.
At End Y there were issues with the Beckhoff system that were discovered independently of the Pcal work. The Beckhoff system feeds control signals to Pcal through an EtherCAT cable into the Pcal Interface Module. In the afternoon Filiberto went to End Y and replaced a couple of interface boards in the Beckhoff system (see LHO alog 27583). After he let me know about the replacement work, I went back to End Y to turn on PcalY (I needed to switch the shutter control back to "remote" and double-check the power switches on the Pcal modules). The issue is now resolved and PcalY is operating in its nominal configuration.
Due to issues that arose during the power outage I have removed the following environment variables from each Beckhoff vacuum controls computer:

Variable name: EPICS_CAS_AUTO_BEACON_ADDR_LIST
Variable value: NO

Variable name: EPICS_CAS_BEACON_ADDR_LIST
Variable value: 10.1.255.255

These were removed from h0vacex and h0vacey on Friday evening. I removed them from the remaining computers today.
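As a quick sanity check after this kind of removal, something like the sketch below could confirm that the variables no longer appear in a process environment on each machine. This is only a minimal, hypothetical check of the current process environment, not of the Windows system settings themselves.

```python
# Minimal sketch: confirm the CA server beacon variables are no longer set
# in this machine's environment (checks the current process environment only).
import os
import socket

VARS = ['EPICS_CAS_AUTO_BEACON_ADDR_LIST', 'EPICS_CAS_BEACON_ADDR_LIST']

host = socket.gethostname()
for name in VARS:
    value = os.environ.get(name)
    if value is None:
        print('{}: {} is not set (expected after removal)'.format(host, name))
    else:
        print('{}: {} is still set to {!r}'.format(host, name, value))
```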
Filiberto, Patrick The corner 5 chassis had to be power cycled. The AOS Baffle PD amplifiers had to be turned on in the beer garden. The end X end station 2 chassis had to be power cycled. The first EK1101 terminal in the end Y end station 2 chassis had to be replaced.
2:15 pm local 1/2 turn open on LLCV bypass - took 3:05 min. to overfill CP3.
J. Oberling, E. Merilh, J. Bartlett
PSL recovery from the power outage is complete. We had 1 issue when bringing the laser up: the power meter that reads the power output from the PSL appears to have failed.
After turning everything on (power supplies, control boxes, etc.), we began by bringing up the chillers; no issues here. When bringing up the PSL for the first time we noticed all the power appeared to be in the backwards direction, with nothing in the forward direction. We killed the laser using the kill button on the table, then ran into the problem of the interlock not resetting. After much messing around with the interlock button and the control box, we found that you REALLY have to pull the button out to reset the interlock (when it feels like the button is about to break, pull harder). That taken care of, we turned the laser on and this time opened the external shutter; there was power being measured on the PDs and power meters external to the HPO box (PMC REFL, HPO external PD), so everything with the laser was functioning properly. We tried power cycling the questionable meter and saw no change. In the interest of getting the PSL up and running, and as this power meter is not used during normal operation of the PSL (it is only used when the HPO external shutter is closed), I decided not to swap the power meter. I did confirm that we have a spare, but after a quick survey I could not find water fittings for it (it is a water-cooled power meter). We will swap this out at the next opportunity.
All other PSL systems came online without issue. System is currently on and running, and has been warming up for ~1 hour. Will continue to monitor.
Filed FRS 5642 for the failed power meter.
It occurred to me that the hysteresis of the PZT might be something I could overcome by dithering the PZT around the values that were restored when we brought the system up.
I started the dither at +/-5000, then dithered at +/-1000, then +/-100, then +/-10, then +/-1, with 20 second sleeps between each change.
I dithered both pitch and yaw.
When the beam from the PSL was restored, the IMC REFL beam was completely on the camera, and the IMC locked (though at low value, WFS are off).
I've uploaded the executable file that will automatically dither the pzt to the OPS wiki, under the page name "pzt dither to recover alignment," and copied the executable into the userapps/release/isi/h1/scripts directory, file name 20160606_pzt_dither.txt.
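For reference, the dither loop amounts to something like the sketch below. This is a hypothetical reconstruction, not the uploaded script: the PZT offset channel names are placeholders and would need to be replaced with the actual PZT pitch/yaw channels.

```python
# Hypothetical sketch of the PZT dither: step the pitch and yaw offsets around
# their restored values with decreasing amplitude to work off hysteresis.
# Channel names are placeholders, not the real PZT channels.
import time
from epics import caget, caput

PZT_CHANNELS = ['H1:EXAMPLE-PZT_PIT_OFFSET', 'H1:EXAMPLE-PZT_YAW_OFFSET']
AMPLITUDES = [5000, 1000, 100, 10, 1]   # dither amplitudes, largest first
WAIT = 20                               # seconds between each change

for chan in PZT_CHANNELS:
    center = caget(chan)                # value restored after the outage
    for amp in AMPLITUDES:
        for offset in (+amp, -amp):
            caput(chan, center + offset)
            time.sleep(WAIT)
    caput(chan, center)                 # finish back at the restored value
```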
This will be a running log of activities as they occur for bringing back CDS after the planned outage over the weekend. Refer to Richard's alog about what happened over the weekend for prep.
We started the shift with lights, front gate operation, wifi, etc.
Step 6.3: LVEA Network Switches
This box appeared to be running (green lights & no errors).
Step 6.4 Work Station Computers
Work Stations powered on in the Control Room -----> But BEFORE these were powered up, the NFS File Server should have been checked in the MSR. This needs to be added to the procedure.
We have found several issues due to computers being started before the NFS File Server was addressed. These items had to be brought up again:
-----> And the items above now allow us to run Dataviewer and also bring up the Vacuum Overview
Step 6.5 Wiki & alog & CDS Overview MEDM
They are back. Overview on the wall (thanks, Kissel).
Update: The wiki was actually NOT back since it was started before the NFS File Server was started. So the wiki was restarted.
Sec 8: EX (Richard & TJ)
They have run through the document & are moving on to other outbuildings. On the overview for EX, we see Beckhoff & SUSAUX are back.
Sec 10: MX (Richard & TJ)
Haven't heard from them, but we see that PEMMX is back on the Overview.
Sec 9: EY (Richard & TJ)
This is back online. So now we can start front ends in the Corner Station!
They are now heading to MY.....
Sec 11: MY (Richard & TJ)
...but it looks like MY is already back according to the CDS Overview.
STATUS at 10AM (after an hour of going through procedure):
Most of the CDS Overview is GREEN, -except- the LSC. Dave said there were issues with bringing the LSC front end back up and he will need to investigate in the CER.
End Stations: (TJ, Richard)
7.5 Front Ends (Updated from Kissel)
Everything is looking good on the CDS Overview
Sec 7.8: Guardian (updated from Kissel)
Working on bringing this back. Some of these nodes need data from LDAS (namely, ISC); so some of these may take a while.
BUT, basic nodes such as SUS & SEI may be ready fairly soon.
7.1 CORNER STATION DC POWER SUPPLIES
These have all been powered ON (Richard).
(Still holding off on End Stations for VAC team to let us know it's ok.)
EY (Richard)
High Power voltage & ESD are back (this step is not in recovery document).
EX (Richard)
High Power voltage & ESD are back (this step is not in recovery document).
There appear to be some order issues here. (Jeff, Nutsinee, and others are working on fixing the order in the document.)
1) We held off on addressing DC high power because of wanting to wait for Vacuum System Team at the LVEA (for vacuum gauge) and at the End Stations (for vacuum gauge & the ESD).
2) We held off on some Corner Station items because they are on the Dolphin network. So, to address the End Stations FIRST, Richard & TJ were assigned to head out, start the End Station sections of the document, and get their Dolphin network items online. Once they were done, Dave started on the corner station Front Ends on the Dolphin network.
Extraneous items:
Sec 7.6 LVEA HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
Sec 8.5 EX HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
Sec 9.5 EY HEPI Pump Station Controller
After Hugh's HEPI Maintenance, Jim brought this back.
UPDATE:
At this point (11:42am), we appear to be mostly restored with regard to the CDS side. Most of the operations subsystems are back (we are mostly green on the Ops Overview). The VAC group's Annulus Ion Pumps are back to using site power.
Lingering CDS/Operations items:
Filiberto, Ed, Rich

Installation is complete for the ITM ESD Driver for ITMX and ITMY. A short was found on the bias connection to ITMX (see attached sketch to clear up the pin numbering of the legacy connector). The shield was cut and insulated on the 2 ft section of cable up near the vacuum feedthrough for this connection. All nominally 24V connections were HIPOT tested to 500V, and the high voltage bias connections were tested to 700V. An ADC (ADC07) was added to SUS-C5-14 to permit the ITM ESD readback channels to be implemented per the latest wiring diagram (D1500464). At time of writing, no aspects of the installed system have been verified in situ. This is the next item on the checkout.

Some useful system constants (total for both drivers):
+/- 18VDC Nominal Quiescent Current -> +460mA, -420mA
+/- 24VDC Nominal Quiescent Current -> +40mA, -40mA
+/- 430VDC Nominal Quiescent Current -> +/- 5.6mA

Serial Numbers:
ITMY ESD Driver (D1600092) (installed in SUS-R6) -> S1600266
ITMX ESD Driver (D1600092) (installed in SUS-R5) -> S1600267
ITM PI AI Chassis (D1600077) (installed in SUS-C6 U23) -> S1600245
Evan, Rana, Sheila
This afternoon there was a problem with the ITMX M0 BIO. The TEST/COIL enable indicator was red, and setting H1:SUS-ITMX_BIO_M0_CTENABLE to 0 and back to 1 could not make it green. Evan tried power cycling the coil driver, which did not work. We were able to reset this and damp the suspension again by setting the BIO state to 1; it seems that anything other than state 2 works.
This might be a hardware problem that needs to be fixed, but for now we can use the suspension like this.
Opened FRS Ticket 5616.
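For operators who run into this again, the workaround boils down to writing the two channels directly; a minimal, hypothetical sketch is below. The CTENABLE channel name comes from the entry above, but the state-request channel name here is a placeholder and should be taken from the SUS BIO MEDM screen before use.

```python
# Hypothetical sketch of the workaround: toggle the coil-test enable and set the
# BIO state request to 1 instead of 2. The state-request channel name below is a
# placeholder; take the real name from the SUS BIO MEDM screen.
from epics import caput

CTENABLE = 'H1:SUS-ITMX_BIO_M0_CTENABLE'   # from the log entry above
STATE_REQ = 'H1:SUS-ITMX_BIO_M0_STATEREQ'  # placeholder channel name

caput(CTENABLE, 0)     # toggle the TEST/COIL enable off...
caput(CTENABLE, 1)     # ...and back on
caput(STATE_REQ, 1)    # request state 1 (anything other than 2 seemed to work)
```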
After the weekend power outage, we see that the ITMX M0 BIO settings are back to their nominal combo of:
STATE REQUEST 2.000
COIL TEST ENABLE 1.000
- And the BIT statuses all show green.
Toggling them to the alternate values (1.000 and 2.000, respectively) and back to nominal turns them from red back to green. Nothing seems to be stuck now, and the ill combo that Sheila reported the other day above doesn't seem to be a problem anymore.
Note, although Jenne is pursuing realigning the IFO, we immediately see that we are suffering from vertically drifting suspensions, which we already know means drifting pitch/yaw (see attached). This is expected due to the warming of the VEAs over the hot, un-air-conditioned weekend, so the alignment will change as their temperatures re-equilibrate. The realignment being performed now is just to be able to check other systems with some arm light, even though we may be chasing pointing.