Talked to John about a CP7 Pump Level alarm we were getting. He advised me to turn the 'Manual Setting' to 72%.
Note: According to Gerardo, who called back after John, he did not receive the alarm notification phone call about this that he should have. Someone needs to check the machine responsible for making these calls.
GV7 has sagged since John and I reinstated the normal operational state this morning by re-applying 50-60 psi of upward pressure on the piston and removing the locking pin. Not sure how far down it has sagged. I suggest waiting to run the laser until it is boosted back up Tuesday morning by someone from the vacuum group.
I'll fix it tonight and come in an hour later in the morning.
The 3-hour trend of CS instrument air ranges between 62 psi and 82 psi. As found: 55 psi < GV5, GV6, GV8 < 60 psi; GV7 = 38 psi -> adjusted all to > 60 psi. Note: the LVEA was quiet and I could easily hear the displaced air exiting the pneumatic manifold vent port. I would guess that the gate had sagged several inches (a foot?) based upon the duration of audible venting and the convincing "THUNK" of it hitting the hard stops.
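For reference, pulling such a trend programmatically is straightforward with the nds2 client. A minimal sketch (the channel name below is a placeholder, not the real H0 vacuum channel, and the GPS times are examples):

# Sketch: fetch a 3-hour minute-trend of CS instrument air pressure.
# Assumes the nds2-client Python bindings; the channel name is hypothetical.
import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
gps_stop = 1149300000              # example GPS time
gps_start = gps_stop - 3 * 3600    # 3 hours earlier
chans = ['H0:VAC-CS_INSTRUMENT_AIR_PSI.mean,m-trend']  # placeholder name
for buf in conn.fetch(gps_start, gps_stop, chans):
    print(buf.channel.name, buf.data.min(), buf.data.max())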
Thank you, Kyle!
I had to restart the TwinCAT code for h0vacex and h0vacey to add back the Inficon gauges that were disabled during the power outage. This reset the PID controls for CP8 and CP7. CP8 is back on PID but CP7 has been oscillating (see attached). At this point I am going to leave it on manual at 85% open before I leave. Someone from the vacuum group might want to monitor it.
The CP7 setpoint was raised to 94% because it overfilled last Friday during the transition to generator power. Today's reboots reset it to the nominal 92%, which causes the PID loop to close the valve, warming up the transfer line; it then has a hard time catching up.
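For anyone not familiar with these loops: the CP pumps hold LN2 level by having a PID loop drive the fill valve toward the level setpoint, and "manual" mode just holds a fixed valve opening. A toy sketch of the idea (this is not the actual TwinCAT code; gains, limits, and names are made up):

# Toy sketch of the CP level-control idea. A PID loop drives the fill
# valve (% open) toward the level setpoint; gains here are illustrative.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, level_pct, dt):
        err = self.setpoint - level_pct
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(100.0, out))   # clamp to valve range

loop = PID(kp=5.0, ki=0.1, kd=0.0, setpoint=92.0)
# Each cycle: valve_pct = loop.step(measured_level_pct, dt=1.0).
# "Manual" mode bypasses loop.step() and holds a fixed valve_pct (e.g. 85.0).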
J. Driggers, J. Kissel, J. Warner
While beginning recovery of the IFO, Jenne noticed that the Y arm fiber polarization was high (showing ~24% of input light rejected, where < 5% is where we want to be). We followed instructions similar to what can be found in the OPS Wiki. Since the percent-of-rejection channels are buried deep in the heart of the ALS MEDM screen jungle, I pulled the channels H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT and H1:ALS-Y_FIBR_LOCK_FIBER_POLARIZATIONPERCENT into a StripTool, which made it easier to hand-tune the value by eye on the tiny CDS laptop we were using. Also, there's no rhyme or reason to which of the three knobs to use; we just slowly turned all the knobs in both directions until we saw the rejected light value drop below 5%. While there, we also brought the X arm below 5%. The fiber polarization tuning box was turned off once we were done.
There are also instructions on how to perform this task in the Ops Wiki; they can be found in the Troubleshooting section.
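If digging through the MEDM screens is a pain, something like this (a rough pyepics sketch, untested) would watch the two rejection channels from a terminal instead of StripTool:

# Rough sketch: poll the fiber polarization rejection channels and flag
# values above the 5% target. Uses pyepics; the 10 s poll is arbitrary.
import time
import epics

CHANS = ['H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT',
         'H1:ALS-Y_FIBR_LOCK_FIBER_POLARIZATIONPERCENT']

while True:
    for ch in CHANS:
        val = epics.caget(ch)
        note = '' if val is not None and val < 5.0 else '  <-- above 5% target'
        print('%s = %s%s' % (ch, val, note))
    time.sleep(10)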
J. Kissel, J. Warner, H. Radkins

SEI recovery: After Hugh finished his 6-month maintenance of the HEPI accumulators (stay tuned for a more detailed aLOG), Jim followed by turning on the HEPI pump stations. Once the pump stations were started and the front-ends and the guardian machine were up and running, there was little-to-no trouble bringing up the SEI systems. All platforms are isolated as normal now.

SUS recovery: Once the front-ends and the guardian machine were up and running, I brought most suspensions up via guardian. We had a little trouble with the dolphin network in the corner station while resetting the CDS IOP "software" watchdogs, but thankfully it was as simple as turning *on* the dolphin switches in the corner (they'd just been missed in the infrastructure turn-on phase). It was a little unnerving that the start-up guardian state for the SUS is ALIGNED, which means that as *soon* as all watchdogs are clear, things light up and start sending stuff out to the DAC. If everything works well -- and this time it did -- this is fine, but I'd kinda prefer that the start-up state be SAFE, so that the user can control the speed at which things come up. Something to consider (a rough sketch of an interim workaround follows this entry). Also, many thanks to Cheryl and Jenne, who helped us get the IMC locked and happy. All SUS are damping as normal now (Rana and Evan made sure that the new damping filter settings on the QUADs have been restored correctly), and we're in the process of getting the rest of the IFO alignment to a happy state.
Note: although Jenne is pursuing realignment of the IFO, we immediately see that we are suffering from vertically drifting suspensions, which we already know means drifting pitch/yaw (see attached). This is expected due to the warming of the VEAs over the hot, un-air-conditioned weekend, so the alignment will change as their temperatures re-equilibrate. The realignment being performed now is just to be able to check other systems with some arm light, even though we may be chasing pointing.
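On the SAFE-vs-ALIGNED point above, one interim option is to explicitly request SAFE on the SUS Guardian nodes before clearing the watchdogs. A hedged sketch (I'm assuming the request channels follow the usual H1:GRD-<node>_REQUEST pattern and accept the state name as a string; the node list is illustrative, not complete):

# Sketch: request SAFE on some SUS Guardian nodes before clearing
# watchdogs, so nothing starts driving the DACs immediately.
import epics

SUS_NODES = ['SUS_ITMX', 'SUS_ITMY', 'SUS_ETMX', 'SUS_ETMY']  # illustrative

for node in SUS_NODES:
    epics.caput('H1:GRD-%s_REQUEST' % node, 'SAFE')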
There has been a periodic instrument air alarm for MX since the start of my shift. The pressure seems to have come back after the power outage at a lower level than before (~60 psi instead of ~80 psi). I talked to Gerardo about it, and he didn't seem too worried. I have emailed Bubba to look at it tomorrow. OPS takeaway: it is OK to ignore this alarm (which is a MINOR alarm) for now, unless it becomes MAJOR.
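For what it's worth, the MINOR/MAJOR distinction is just the EPICS alarm severity on the pressure channel, so the takeaway above could be automated. A sketch (the channel name is a placeholder, not the real one):

# Sketch: ignore MINOR alarms on the MX instrument air channel, complain
# on MAJOR. pyepics severity codes: 0=NO_ALARM, 1=MINOR, 2=MAJOR, 3=INVALID.
import epics

pv = epics.PV('H0:FMC-MX_INSTRUMENT_AIR_PSI')  # placeholder channel name
pv.get()  # connect and populate value/metadata
if pv.severity is not None and pv.severity >= 2:
    print('MAJOR alarm on %s (%s psi) -- do not ignore' % (pv.pvname, pv.value))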
David.M, Filiberto.C, Jenne.D
All of the L4C interfaces (D1002739) were tested using the L4C Interface test plan (T1600185). These were the chassis with serial numbers s1600252, s1600253, s1600254, s1600255, and s1600256. All of the channels were working fine except for the L4C 3 & 4 channels on chassis s1600253, which had an incorrect frequency response. On closer inspection we noticed that this board was missing the capacitor at location C8 (see the schematic) for both channels. These capacitors have now been added. The chassis was retested and all channels are now functioning properly. I've attached the test reports for each chassis.
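For context, the frequency-response part of such a test boils down to comparing each channel's measured response against a reference and flagging deviations. A generic sketch of that comparison (the 1 dB tolerance is illustrative, not the T1600185 spec):

# Generic sketch: flag a channel whose magnitude response deviates from
# a golden reference by more than a tolerance. A missing C8 moves a
# pole/zero, so the deviation blows past tolerance at some frequencies.
import numpy as np

def check_response(measured_mag, reference_mag, tol_db=1.0):
    # Return (passed, worst_deviation_db) for one channel.
    dev_db = 20.0 * np.log10(np.abs(measured_mag) / np.abs(reference_mag))
    worst = float(np.max(np.abs(dev_db)))
    return worst <= tol_db, worst

# passed, worst = check_response(tf_chan3_mag, tf_reference_mag)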
Attached are the temperature and RH plots from the data logger inside the LVEA 3IFO Desiccant Cabinet during the outage. Being inside a closed cabinet with a continual nitrogen boil-off is not a true measurement of the LVEA temperature, but it should give an idea as to how the temperature changed over the weekend.
[from an email by Jeff Bartlett]
Per Kyle's request I have temporarily forced the PT100 Cold Cathode gauge on in the Beckhoff controls software (see attached).
Thanks!
(All times are in UTC)
Quick Summary:
14:00 Chris S. to both end stations to retrieve generators
15:19 Fil + Alfredo to End stations to turn on facility related stuff
15:28 Hugh begins working on WP#5910
16:00 John et al. to LVEA
TJ + Richard to End stations to power up front end computers
Locking gate valves open (John, Chandra)
16:05 Switching from generators to site power
16:15 PSL, AUX online
Fil back
16:19 Chandra + John out
Corner station Beckhoff running
16:30 Chandra to MX, EX to plug in VAC system
16:34 Richard + TJ done with EX, going to MX, EY
16:41 Patrick restoring EPICS IOC h0epics2
16:49 EPICS gateway is up -- nds1 is working (Dave) -- it doesn't actually work, by the way
17:00 Richard + TJ back
Richard + Jason turning on high voltage (CER upstairs)
17:09 CDS front end is ready to go
Guardian medm starts up
17:10 Jason + Ed recovering PSL
17:16 Nutsinee to mezzanine and LVEA to recover TCS CO2
All corner station high voltage is on
17:20 PCal EX recovery (Darkhan)
17:43 PCal EY recovery (Darkhan)
18:01 Corner station Dolphins are alive.
Hugh done at EX, heading to EY
18:10 EX ESD is good to go
18:15 Corner station HEPI pump is good (Jim)
DAC restart (Dave)
18:17 Power cycle Beckhoff chassis and PD amplifier (Fil)
18:22 TCS CO2 is ready and fired (NK)
18:26 Fil out of EY, heading to EX
18:42 Fil restarted the EX End station 2 Beckhoff chassis
18:49 Hugh back from EY. Jim recovering HEPI pumps.
18:55 Richard to LVEA to reset PSL emergency stop.
Hugh and Jim to EY to restart BRS.
SEI is good to go
19:02 All SUS M0 DAMP P,Y are messed up. Rana recovered the setting. Diff accepted in safe.snap (FM1, FM2, FM5, FM6, FM8, FM10 gain = -1)
19:08 Rich + Terra to beer garden (ITMX ESD driver work)
19:22 BRS are up (Hugh+Jim)
19:34 Rich+Terra out
19:43 Terra to beer garden
19:49 Betsy to EX to power cycle the EX PUM BIO I/O chassis
Terra out
20:00 Jeff B to EX > EY > LVEA
20:02 Fil rebooted Beckhoff chassis
20:13 Darkhan back. PCal Y not working
20:16 Betsy back
20:23 Rich + Daniel putting chassis back (beer garden)
20:29 Chandra to Mid stations and End stations to install some plugs.
20:32 Jeff B out of EX, heading to EY
20:38 Fil and Patrick to EY (Beckhoff problem)
20:45 Hugh+Krishna+Michael to EY (BRS)
20:56 Jason+Ed out
21:02 JeffB out
21:05 Rich+Daniel out
21:16 Begin IMs alignment. Referencing values from alog26916.
21:35 Hugh+Krishna+Michael to EX (BRS)
21:36 Rich to beer garden (ESD electronics)
21:50 Jeff B to LVEA to obtain weekend temperature data
21:53 Chandra to BSC8 to take pictures of RGA.
22:03 Jeff B out.
22:06 Fil+Patrick back. EY Beckhoff is up and running.
22:20 All IMs are restored to their good values.
Jenne working on aligning MC optics.
22:32 Kyle to retrieve a piece of electronics
22:39 Kyle back
21:55 Disable ESD to do PI measurement (Terra+Carl -- both end stations)
23:02 IFO_ALIGN slide bars put back in place. Referencing values from alog27557.
Summary
On Friday, Pcals were turned off for the June 3-5 power outage (LHO alog 27558).
Details
After finding out that the PcalY PDs were not measuring any power, I went to End Y and checked the following:
From this limited information, and under the assumption that nothing is broken in the Pcal system, one possible source of the issue is that the control signals are not reaching the Pcal interface module through the EtherCAT connector. The laser on/off and power level (5.0 V) signals should come through this interface (the Pcal Interface Chassis back-side board circuit is given in D1400149).
The issue is currently under investigation.
Filed FRS 5651
Filiberto, Darkhan,
Summary
Details
As reported in the original entry above, there was an issue with turning on the PcalY laser and operating the shutter. The issue was discovered yesterday around 11am. After that I came back to get the breaker circuits to continue the investigation, but it turned out they were not needed.
At End Y there were issues with the Beckhoff system that were discovered independently of the Pcal work. The Beckhoff system feeds control signals to Pcal through an EtherCAT cable into the Pcal Interface Module. In the afternoon Filiberto went to End Y and replaced a couple of interface boards in the Beckhoff system (see LHO alog 27583). After he let me know about the replacement work, I went back to End Y to turn PcalY back on (I needed to switch the shutter control back to "remote" and double-check the power switches on the Pcal modules). The issue is now resolved and PcalY is operating in its nominal configuration.
Due to issues that arose during the power outage I have removed the following environment variables from each Beckhoff vacuum controls computer:
EPICS_CAS_AUTO_BEACON_ADDR_LIST (value: NO)
EPICS_CAS_BEACON_ADDR_LIST (value: 10.1.255.255)
These were removed from h0vacex and h0vacey on Friday evening. I removed them from the remaining computers today.
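For future reference, a sketch of how a machine-level environment variable can be removed on a Windows host like these Beckhoff computers (assuming the variables were set at system scope; this needs admin rights, and services must be restarted to pick up the change):

# Sketch: delete machine-scope environment variables on Windows.
# Assumes system (HKLM) scope; run with admin rights.
import winreg

ENV_KEY = r'SYSTEM\CurrentControlSet\Control\Session Manager\Environment'
for name in ('EPICS_CAS_AUTO_BEACON_ADDR_LIST', 'EPICS_CAS_BEACON_ADDR_LIST'):
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ENV_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        try:
            winreg.DeleteValue(key, name)
        except FileNotFoundError:
            pass  # variable already removed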
Filiberto, Patrick
The corner 5 chassis had to be power cycled. The AOS Baffle PD amplifiers had to be turned on in the beer garden. The end X end station 2 chassis had to be power cycled. The first EK1101 terminal in the end Y end station 2 chassis had to be replaced.
2:15 pm local: 1/2 turn open on LLCV bypass - took 3:05 (min:sec) to overfill CP3.
Evan, Rana, Sheila
This afternoon there was a problem with the ITMX M0 BIO. The TEST/COIL enable indicator was red, and setting H1:SUS-ITMX_BIO_M0_CTENABLE to 0 and back to 1 could not make it green. Evan tried power cycling the coil driver, which did not work. We were able to reset it and damp the suspension again by setting the BIO state to 1; it seems that anything other than state 2 works.
This might be a hardware problem that needs to be fixed, but for now we can use the suspension like this.
Opened FRS Ticket 5616.
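For the record, the workaround amounts to toggling the enable bit and requesting a BIO state other than 2; roughly (the STATEREQ channel name is my guess at the naming pattern, not verified):

# Sketch of the workaround: toggle coil-test enable, then request a BIO
# state other than 2. The STATEREQ channel name is an assumption.
import time
import epics

epics.caput('H1:SUS-ITMX_BIO_M0_CTENABLE', 0)
time.sleep(1)
epics.caput('H1:SUS-ITMX_BIO_M0_CTENABLE', 1)
epics.caput('H1:SUS-ITMX_BIO_M0_STATEREQ', 1)  # anything but state 2 worked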
After the weekend power outage, we see that the ITMX M0 BIO settings are back to their nominal combination of:
STATE REQUEST 2.000
COIL TEST ENABLE 1.000
- And the BIT statuses all show green.
Toggling them to the alternate values (1.000 and 2.000, respectively) and back to nominal turns them from red back to green. Nothing seems to be stuck now, and the ill combination that Sheila reported the other day above doesn't seem to be a problem anymore.