We've eliminated all ground loops that can be easily eliminated in chamber at this point. There are two remaining grounding-related issues.
The first problem will NOT be addressed until I hear back from Rich.
The second issue can be dealt with in air (if the solution is to cut the pin 13 wire).
We "fixed" another tiptilt with a more annoying problem than yesterday's (see below).
We "fixed" a bunch of other things by changing four in-vac cables (also see below).
Apart from the issues noted above, we checked picos, OMC REFL QPD (sled), AS_C QPD, WFS(DC) and OMC SUS, and these are all good.
Everything is connected back except the DCPD and OMC QPD cables at the field rack.
We didn't like the way a big coil of cables was dangling from the ISI table top to the side; no idea why this was done (it is just extra weight without real merit), so we moved it downstairs. Due to the reduced weight on the ISI, Hugh might want to rebalance the ISI.
Tomorrow I'll be out, but Corey and Dan will have to do the following:
After this I don't have a problem closing HAM6 for now.
WP4672. Compiling out of the RCG branch-2.8, I installed new IOP models for h1iopsush56 and h1iopseih16. An IPC connection was established between the SUS OMC and the SEI HAM6. Basic testing using the panic feature showed the communication was working.
Build pointers were restored to the RCG Tag 2.8.3. I'm working on generic MEDMs for the new SWWDs.
Opened GV11, leaving GV18 closed until a need to open it arises.
911 - Fil and Aaron to EY to work on AI/AA chassis
913 - Corey and Keita to HAM6 on ground loop hunt
924 - Jeff B and Jodi out to LVEA to inspect 3IFO
929 - Jim B working to recover various systems from power glitch - may require various restarts. See his aLOG.
935 - Gary, Margot and Betsy beginning to clean and prep for install at ITMY
1005 - Dave B restarting all SUS/SEI front ends in the corner station to remedy issue stemming from power glitch
1055 - Kyle opening GV11
1056 - Gerardo working in west bay
1310 - Instrument Air alarm at EY...sensor cable unplugged. Kyle to plug back in
with Betsy: This morning, while the ITMy structure was out, Betsy and I pulled a couple of floorboards out of BSC1 to take FTIR tests of parts of the chamber that had previously failed FTIR (one on the chamber floor nearly dead center of the chamber, the other just inside the beamtube towards BSC8). Before taking the FTIR tests, we took a FBI sample of the floor, then wiped down the whole floor with IPA-soaked Vectra Alpha wipes. After the floor dried, we took an FTIR in the center and replaced the floor panels. Took the other FTIR (on the beamtube floor) and re-wiped the floor panels before proceeding with the rest of the ITMy install.
M. Phelps, G. Traylor, B. Weaver, myself
Today we successfully installed the ITMy lower structure into BSC1. Recabling of the reaction chain has begun. As a health check, we hung the test mass as a single pendulum to verify the integrity of the monolithic. All is well. We relocked the suspension since it is second in line for alignment work a couple of weeks out. Tomorrow, Apollo will remove the BSC install arm.
The old lower structure was put back in the can and flown to the test stand cleanroom vicinity for upcoming 3IFO rework.
With the addition of the planned Table Top payload per D1201388-v3, I pulled 119 kg from the ISI sidewalls. Based on previous occurrences like this, I don't believe the COG sci/eng will be bothered by this CG vertical shift. Will run TFs ASAP once ICS is done with their cable work.
The evidence I've been able to gather suggests the VFD (Variable Frequency Drive) saw an overvoltage while running at constant speed. OU3 is in fact an error code. This is something that has happened before, maybe a couple of times a year. Don't really know what it means; will talk to EE. Since the troubleshooters (Rich & Jeff) shut down the servo (even though it was running) before they pressed the reset button on the VFD, they were dead in the water until the servo was restarted (instructions are in the OPS WIKI).
In this situation, with the servo running and outputting max (2048 counts = 10 volts) but producing no pressure, one could assume the pumps are not running due to a low fluid level trip or a fault at the VFD. A trip to the pump station is warranted.
If the Red indicator on the DD Box is not illuminated, a fluid level trip or a power fault should be suspected. In this case, reduce the output to the VFD by putting the servo in manual and stepping the output to zero, or change the set point to zero. Hit the green button on the DD Box; if it illuminates, ramp up the output of the servo manually, or in auto mode by first reducing the set point to 5 psi and then increasing it to 80 psi. Do so slowly to minimize overshoot and the risk of a fluid level trip and the over-pressure valve opening.
If the Red Indicator on the DD Box is illuminated, it is likely the VFD is registering a fault. Open the VFD box (you must be EE qualified to do this!) and note what the VFD display shows. Pressing reset from a VFD fault situation will restart everything, so before doing so, as above, reduce the servo output to zero. Once the servo output is zero, press reset and then, as above, ramp the output up to 80 psi. Call me; I might be able to walk you through it remotely.
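The slow manual ramp described above can be sketched as a simple pacing loop. This is a hypothetical illustration, not the real pump-servo interface: the `write` callback stands in for whatever actually sets the differential pressure set point (e.g. an EPICS write), so the pacing logic can be read on its own.

```python
import time

def ramp_setpoint(write, start_psi, target_psi, step_psi=5.0,
                  slow_dwell_s=7.5, fast_dwell_s=2.0, fast_above_psi=50.0):
    """Step a pressure set point up gradually to avoid turn-on transients.

    Mirrors the manual procedure: ~5 psi tweaks every 5-10 s, stepping a
    little faster once the pressure is above ~50 psi.  `write` is a
    stand-in (assumed, not the real interface) for setting the channel.
    """
    setpoint = float(start_psi)
    while setpoint < target_psi:
        setpoint = min(setpoint + step_psi, float(target_psi))
        write(setpoint)  # hypothetical set-point write, e.g. an EPICS caput
        # dwell between steps so the servo never sees a large transient
        time.sleep(fast_dwell_s if setpoint > fast_above_psi else slow_dwell_s)
    return setpoint
```

With the defaults, ramping 0 to 80 psi takes 16 steps and roughly a minute and a half, about what the click-by-click procedure above works out to.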
Notes from the conversation I had with Hugh and Rich this morning.

To restart the controller (some of what's in the OPS Wiki, as Hugh suggests):
ssh controls@h1hpipumpctrl[ex,ey,0] (0 for corner station; non-standard controls password -- check your secrets page)
cd target-ex/
sudo ./run

The rest is details of "After the controller is running and there are valid pressure signals, ramp the motor drive up to get the pressure near 80 psi and engage the servo. There will be some lag, so slow down the ramping as you near 80 to be sure not to overshoot too far." Hugh mentioned that there are several ways to ramp up the servo, but here's how we did it this morning:
- Go to the controller MEDM screen (resize if you see white -- it may not be dead)
- Flip "Servo Control" to Auto (button in middle, far right)
- In the bottom left corner, select a "Common Tweak Size" of 5
- Slowly (one click every 5-10 [s]) click "+" above "Set Point Tweak" to increase the differential pressure set point to 80 [PSI] (a trace will begin to appear on the long black window once you get it within range)
- Once you get above ~50 [PSI] you can click a little faster; we're just looking to prevent large turn-on transients in the servo, which will move the platform. Hugh was explaining things to me in between clicks, so that's about the amount of time you need.

Rich also mentions that MIT has ways of automating this procedure such that "all you have to do is hit reset, then run" on the VFD control panel "and it just goes." I like this idea. This thing should also have a guardian wrapped around it, managed by the HEPI guardian, which in turn reports to the SEI chamber guardian.

Hugh also mentions there's an open Integration Issue (see II 242) to get LHO's pressure sensed at the chamber like LLO's. The last update says "within weeks" of 2013-Dec-19, but this has been delayed because of cable-pulling person-power.
Also, there is some flaw internal to the HEPI Pump Servo board that means the "Pressure OK" and "Level Alarm" lights report misinformation. They should not be trusted. Because this system is eventually to be converted to Beckhoff monitoring, attempts to make these LEDs functional have been abandoned. But so have attempts to convert to Beckhoff, due to similar person-power limitations. We should get some person-power.
Jim, Dave.
The reason h1lsc0 did not shut down last night is that its IO Chassis was not powered up, presumably due to the glitch. Jim powered it back on via the front panel switch. Unfortunately h1lsc0 was in a strange state, and it did not cleanly remove itself from the Dolphin fabric even when all procedures were followed. Most of the Dolphin'ed front ends glitched when h1lsc0 was rebooted, and those models were restarted. This completes the recovery from the glitch.
J. Kissel

Dither paths, which had been added to the front end models a long time ago (see LHO aLOG 11627), have now been added to the QUAD overview screens. Not really much more to say than that; it took a long time because squeezing the new banks in took a lot of rearranging and making things smaller. It was really exciting. I can't wait to do every other suspension. Stay on the edge of your seat.

For Stuart -- because we're still prototyping the more complete ESD linearization routine, I've reverted the screen to what's in the repository, added dither paths to *that*, committed, and then re-added the prototype linearization stuff.

I updated:
/opt/rtcds/userapps/release/sus/common/medm/quad/SUS_CUST_QUAD_OVERVIEW.adl
and there are some new screens (in the same folder):
SUS_CUST_QUAD_DITHER2EUL.adl
SUS_CUST_QUAD_DITHERINF.adl
SUS_CUST_QUAD_L1_DITHER.adl
SUS_CUST_QUAD_L2_DITHER.adl
SUS_CUST_QUAD_L3_DITHER.adl
SUS_CUST_QUAD_M0_DITHER.adl
Restarted the IOP and x1isiitmx models on the seismic test stand in the staging building, which were not running after yesterday's power glitch.
The LDAS gateway computer was frozen after the power glitch of yesterday. Restarted the LDAS gateway, then mounted the frames directory on the frame writer and NDS computers, and restarted the daqd processes on them.
PEM and IOP models at Mid X and Mid Y were restarted to get the GPS time set correctly for the models; timing was off by 1 second from the restart after yesterday's power glitch. Usually this involves killing the PEM and IOP models, then starting the IOP model and pressing the BURT button as soon as the IOC server starts. The idea is to keep the delay in starting the IOP as short as possible, given that the GPS time is derived from the computer clock instead of an IRIG-B.
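Because these IOPs derive GPS time from the computer clock rather than an IRIG-B, the conversion is just an epoch shift plus the leap-second count, and any delay in starting the model shows up directly as a whole-second GPS error. A minimal sketch (the 16 s GPS-UTC offset is correct for June 2014, but it changes whenever a leap second is added):

```python
GPS_EPOCH_UNIX = 315964800  # 1980-01-06 00:00:00 UTC (the GPS epoch) as a Unix timestamp

def unix_to_gps(unix_time, leap_seconds=16):
    """Convert a Unix (UTC) timestamp to GPS seconds.

    GPS time does not insert leap seconds, so it runs ahead of UTC; the
    offset was 16 s as of June 2014.  If the IOP latches the system clock
    a second late, every derived GPS timestamp is off by that second,
    hence the hurry to press the BURT button right after startup.
    """
    return unix_time - GPS_EPOCH_UNIX + leap_seconds
```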
ITMY install planned for today
BSC1: two central pieces of BSC flooring will be removed to ease cleaning of two "hotspots" identified in prior vent cycle. Spots will be cleaned with isopropanol and Alpha 10 wipes. Flooring will be returned and ITMY install will proceed.
Bubba - Spool piece is out
Bubba - moving 3IFO ISI's via crane in LVEA
Peter K et al working in both H1 and H2 PSL enclosures
Aaron working on AA/AI chassis at EY, will require shutting down some systems
model restarts logged for Wed 11/Jun/2014
2014_06_11 18:11 h1ioppemmx
2014_06_11 18:11 h1ioppemmy
2014_06_11 18:11 h1pemmx
2014_06_11 18:11 h1pemmy
2014_06_11 18:42 h1iopasc0
2014_06_11 18:43 h1iopasc0
2014_06_11 18:50 h1asc
2014_06_11 18:50 h1susauxasc0
2014_06_11 18:51 h1ascimc
2014_06_11 18:51 h1iopasc0
2014_06_11 18:51 h1ioppemmy
2014_06_11 18:51 h1sushtts
2014_06_11 18:57 h1ioplsc0
2014_06_11 18:57 h1lscaux
2014_06_11 18:57 h1lsc
2014_06_11 18:57 h1omc
2014_06_11 19:10 h1iopsusey
2014_06_11 19:12 h1susetmy
2014_06_11 19:12 h1sustmsy
Red are unexpected restarts at the time of the power glitch. Purple are restarts of frozen models. Green are SWWD fixes.
Rich, Jeff, Arnaud, Dave
During the power glitch recovery Rich found that the DACs were not driving on h1hpietmy. I discovered my new SWWD on h1susey had tripped from the TMSY suspension. I had noticed earlier that two of the TMSY OSEM signals looked bogus, and Jeff found that the incorrect ADC channels were being read for two channels. To correct the problem quickly to get Rich back on track, I fixed the simulink model and compiled against RCG2.8.3 (the version which was running was compiled against branch2.8). This means the trip level is back to 15,000 counts and the resets are cached. I'll rebuild against branch2.8 tomorrow as part of the HAM4,5,6 install. Also the MEDM still has the incorrect raw signals for these two channels.
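The trip mechanism Dave describes can be caricatured with a toy sketch. This is my illustration, not the RCG code: the real SWWD conditions each OSEM signal in the front end before comparing against the threshold, but the essential point is the same — a bogus signal from a mis-mapped ADC channel can push an OSEM RMS over the 15,000-count level and trip the watchdog spuriously.

```python
import math

TRIP_LEVEL_COUNTS = 15000  # the trip level mentioned above

def osem_rms(samples):
    """RMS of a buffer of OSEM ADC samples, in counts."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def swwd_tripped(osem_buffers, trip_level=TRIP_LEVEL_COUNTS):
    """Trip if any OSEM's RMS exceeds the trip level.

    Toy stand-in for the SWWD: one buffer of recent samples per OSEM.
    A wrongly-mapped ADC channel feeding garbage into one buffer is
    enough to trip the whole suspension's watchdog.
    """
    return any(osem_rms(buf) > trip_level for buf in osem_buffers)
```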
We experienced a power glitch at Jun 12 2014 01:08:29 UTC (18:08 PDT). The lights in the control room flickered for several seconds, and the MSR UPS reported a switch to battery and back. Vacuum controls were not affected. Here is what I have found:
All models on h1lsc0 and h1asc0 stopped running
Both mid station PEM systems have DAQ timing errors
Right hand monitors for workstations opsws0,1,2 went blank
I'm going to restart the affected models. The workstations were power cycled.
h1asc0 booted and came back with no problems.
I was unable to reboot h1lsc0 cleanly; I'll leave this until tomorrow. Jeff said it is not needed this evening.
Both mid-station PEM systems actually rebooted rather than freezing like asc/lsc. I tried restarting h1ioppemmy and could not clear the timing error. I'll leave both mid-station PEM systems for tomorrow.
Travis, Betsy, Gary, Margot, Apollo
After Apollo finished mounting the arm, elevator and 5-axis lift table on the BSC 1 flange first thing this morning, the ITMy was deinstalled from the chamber.
I forgot to mention that we stuffed the ACB box into BSC 8 (via BSC1) in order to stow it out of the way for ITMy SUS work.
Annoying TipTilt, part 2
When "fixing" OM3, one of the BOSEMs was really persistent. Both the side and the bottom were touching (first picture); the bottom gap was non-existent no matter what, and the maximum side gap I was able to achieve was less than the thickness of aluminum foil.
I used a folded piece of aluminum foil as a shim to raise the micro DB connector (second attachment) to get enough bottom gap, and cocked the connector as much as I could to maximize the side gap (third attachment).
FYI, the shell of the female connector in the second picture is not isolated, by design. But the male connector shell on the cable in the first and third pictures should be isolated.
Annoying cable and connector problem.
We replaced a total of four in-vac cables.
Three of them were due to an issue with yet another design feature that relies on a razor-thin gap.
In the attached picture, the front shell is PEEK, but the screws that fix the front shell to the metal back shell are metal. The screw heads should be somewhat recessed, but if they stick out for any reason, they will contact the metal part of the feedthrough, and thus the metal back shell is connected to the chamber.
Some of the screws are recessed less than others, and some of them are sticking out because the screw head slot was somewhat deformed by screwdrivers or whatever. There can be three modes of failure:
We identified three cables that were either 1. or 3., and replaced them with new ones, which worked OK. There's no reason these screws should be really tight (I think we need to discourage people from doing that), but if a cable fails because the screws are a bit tight, that's not a good cable either.
We identified another cable that was 3., but this was a picomotor cable with thicker-gauge wires, and we don't have replacements. Corey made it such that the connector doesn't fully seat (or maybe he just didn't fully tighten the connector; I don't remember). This cable tested good yesterday and today, but it turned bad when we were doing some other unrelated work at the feedthrough and its vicinity.
We also found one cable that makes or breaks the ground loop depending on where/how we route it and how we breathe, and replaced that with a new one:
The serial number marked as ?????? is not unknown; it was recorded before by Corey, it's just that I don't remember it.
Closing the documentation loop on these guys (also updating ICS Assembly Load #ASSY-D1300122-LHO). I'll mark the damaged cables as such in ICS as well.
According to ICS, this cable (the elusive D1000223) is most likely one of these: S1202641 (most likely) or S1202643. And as Keita mentions, we have no spares of these at LHO (and NONE for 3IFO!). :(