Still have a glitch in the Main Chain right channel in satellite box 3200715. We have eliminated the cables and the OSEM as the problem source. Swapped SB 3200715 and SB 320078 and the problem stayed on M0 RT. Moved the cables back to their original positions. We then swapped cables on the AA channel and the AI channel; the glitch moved from M0 RT to M0 F1, so the problem is possibly in the AA or AI chassis. Filiberto is going to do some more testing this afternoon to determine which chassis is at fault.
I've started 3 clean rooms in the LVEA to counteract the dust from cleaning and to get the area ready for the vents next week. The HAM3, HAM8, and HAM6 clean rooms are ON. The clean room over HAM1 needs to have the large switch at the top of the clean room moved to the ON position before it can be started.
Clean room over HAM1 is now ON.
Turned off the robofom and roboscimon cron jobs on script1 and control100.
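(For reference, disabling those jobs amounts to commenting out their crontab entries on each machine. A minimal sketch; only the job names come from this entry, the schedules and paths shown are hypothetical:)

    # on script1 and control100, as the account that owns the jobs:
    crontab -l        # list the current entries
    crontab -e        # comment out the robofom / roboscimon lines, e.g.:
    # */10 * * * * /opt/scripts/robofom      (hypothetical path and schedule)
    # */10 * * * * /opt/scripts/roboscimon   (hypothetical path and schedule)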
Fixed a problem with the suspension test stand workstation (suswork1) where it was unable to access files for burt restore. Found that burt's .snap files are stored in bscteststand:/data/autoburt/, which was not accessible on suswork1. Did the following:
- Modified the bscteststand computer to export /data for NFS mounting on suswork1
- Modified suswork1 to mount bscteststand:/data on /data
- Verified that burtgooey on suswork1 can access the .snap files on /data
Same access problem on the seismic test stand; modified seiteststand and its workstation the same way.
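For reference, the change on both test stands amounts to an NFS export on the server and a matching mount on the workstation. A minimal sketch, assuming default NFS options (only the hostnames and the /data path come from the entries above):

    # on bscteststand (likewise seiteststand), add to /etc/exports and re-export:
    /data   suswork1(rw,sync)
    exportfs -ra

    # on suswork1 (likewise the seismic workstation), mount the share:
    mkdir -p /data
    mount -t nfs bscteststand:/data /data
    # (add a matching /etc/fstab entry if the mount should persist across reboots)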
Removed H2 electronics from Mid Y (Racks 1X18 and 1X19). List of items removed:
1. ASC Quad Photodiode Whitening Board D990399 SN10
2. Anti-Image D000186 Rev D SN155
3. Universal Dewhitening Board D000183 SN113 Rev B
4. Universal Dewhitening Board D000183 SN164 Rev C
5. Opt Lever PD Interface D010033 Rev B SN143
6. Anti-Image Filter Board D000186 Rev C SN125
7. Universal Dewhitening Board D000183 SN02 Rev A
8. Tidal and Microseismic Summing Module D020094 SN107
9. SUS PD White/Interface Board D000210 Rev A SN31
10. LOS Bias Module D000341 SN109
11. VMIVME 7851 - h2iscmy
12. Pentek Model 6102
13. Pentek Model 6102
14. D980369 SN120
15. ICS-110B-32 (no dhrtr)
16. Sync Timing Gen. D050093 SN LHO19
17. DOUT XVME-220
18. VM8DAC-8 (Frequency Devices Inc.)
19. Variable Time Delay Module D010285 Rev A SN129
20. h2iscauxmy (MVME 162-262)
21. VMIVME 3113A
22. DOUT XVME-220
23. VMIVME 4116
24. VMIVME 4116
25. D990147 SN001
26. Satellite Amp Adapter D010069
27. LOS Coil Driver D000325 SN113
28. h2gpsmy (MVME 162-333)
29. Brandywine VME Card
30. UPS Fortress 1425
31. Isotron Power Supply Model 2793
Cheryl and I have removed everything (optics, lasers, AOMs, etc.) from H2:TCSX. On TCSY we've removed everything except the AOM and the laser, which still need to be disconnected from the plumbing. We were going to drain all the plumbing of the ethylene glycol, but it turned out that this had already been done at some point in the distant past. We have an inventory of all the parts that have been pulled.
Corey Gray, Eric Allwine, and Hugh Radkins worked on marking and drilling holes for the HAM 12 pier anchor bolts.
Gerardo Moreno started removing the H2 optical lever power supplies from the LVEA.
Paul from Control Solutions NW came to fix the FMCS system in the control room.
Patrick Thomas and David Barker removed the control2 Sun computer and replaced it with the control10 Linux computer; they also removed the Windows control room Spiricon PC.
Richard McCarthy and Filiberto Clara removed the H2 electronics in mid Y.
Cheryl Vorvick and Aidan Brooks worked on decommissioning H2 TCS.
GV5 and GV7 were soft closed, the Genie lift was craned over the beam tube, and cleaning and crane lift tests continued in the LVEA.
Cyrus Reed went to look at the phone system in mid Y.
Reboldt Mallon worked on repairs, concrete grading, and the roof at mid Y.
After a planned power outage due to a 24 V transformer failure on Friday 10/15, the FMCS system failed to communicate with the DMS 3500 units. The vendor, Control Solutions Northwest, arrived Thursday 10/21 AM to look at the problem. The issue turned out to be a mismatch in the serial port configuration at the 3500 unit; the default setting is apparently incorrect when used to communicate with the UNC 500 that we have. The correct serial port config at the 3500 is 38.4,1,7,E,M,N (in the non-functioning state it was found to be 38.4,1,7,E,P,N). Note that this only applies to the port which is connected to the UNC.
The GRB alerts to LHO have been terminated until further notice. The same goes for the weekly SNEWS test alerts (as well as any real SNEWS supernova alerts...). The GRB alerts are still being sent to LLO, but those will also be terminated soon. GEO: We are set up to send GRB and SNEWS alerts to GEO, and according to H. Grote they will soon be set up to receive them. Virgo: Virgo has its own independent system.
(Eric A., Corey G.) Laid out the hole markings on the floor this morning, & then set up the drill and drilled hole #1 (of 16). Will continue drilling holes first thing in the morning and then work on removing flooring (and glue).
Still ailing from a hashy M0 Right channel (which is not due to the OSEM or the cable plugged into the BOSEM), we are still working through the test plan on Q2 (E1000494). Signals went stale on us while working just now, so we restarted the bscteststand computer via ./startqts. We are currently running through the MEDM gains to see whether we have set the magnets at the correct orientation.
Removed and stored the optical lever power supplies for RM, MMT3, BS, ITMX, ITMY, FMY, and FMX. The above-mentioned supplies are stored on the racks in the receiving/storage area of the LSB.
The cosmic ray detector was decommissioned. The H1 PSL, H1 TCS, and PCal lasers were turned off. All VEAs were transitioned to laser safe. Cheryl and Aidan began decommissioning the H2 TCS lasers. Crane lift testing and inspection was done at end X and mid Y. Contractors from Apollo started cleaning the crane equipment in the LVEA. Left with Cheryl in the LVEA ~16:11.
LHO VEAs have been transitioned to laser safe; PSL, TCS, and PCal lasers are deactivated and locked/tagged out.
The LHO cosmic ray detector (CRD), which was on the floor beneath the ITMY chamber, was removed. A cable tray support was temporarily unbolted from the floor and displaced to allow the CRD to easily slide out. The CRD is currently sitting on the floor of the clean storage room, pending a choice of storage location. (Note that the CRD consists of two pieces with the upper resting on top of the lower piece. So the two pieces should be carried separately.) - Ray, Filiberto, Ryan
To support aLIGO install, two new alog tasks were added under the "IFO and SubSystems" section:
- H2 INSTALL
- LHO Facilities
The following machines related to science mode were powered down: projector[0,1,3], h0grbsn, h1binj, h1injection.
Found the problem which was preventing the ntp client from running on the sei front end: there was a stale entry in the /etc/ntp/step-tickers file (a Caltech address), so I added the GC NTP server's address to this file. It seems that if this file is empty, ntp also works. I had kept the ntpdate crontab running on the workstation (every minute), and that machine was time-shifting a lot to keep up with the front end (shifts of 0.1 second every minute were common). Not sure what's up with that; I have now turned off the crontab, and ntpd is running on the workstation, also syncing with the GC server.
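A sketch of the relevant pieces on the front end and workstation; the GC server name shown is a placeholder, not the real address:

    # /etc/ntp/step-tickers on the sei front end -- one NTP server per line.
    # The stale Caltech address was removed and the GC NTP server added:
    ntp.gc.example        # placeholder for the GC NTP server

    # workstation crontab entry that had been forcing a time step every minute (now disabled):
    # * * * * * /usr/sbin/ntpdate ntp.gc.example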
This morning (Oct. 15) the time on the workstation differed by greater than 10 seconds from the seiteststand. Looked at /var/log/messages, found that no ntpd synchronization had occurred since Oct. 11. Stopped the ntpd on workstation, edited the ntp.conf file to add a 'restrict' line for the ntp server, sync'd the time with ntpdate, and restarted the ntpd. Both workstation and seiteststand are now using the same ntp server, workstation is no longer using seiteststand as a server. Looking at /var/log/messages on seiteststand, there are a number of 'time reset' lines with large values. This needs to be monitored and investigated.
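The workstation change described above, sketched with the same placeholder server name (the exact restrict options are an assumption):

    # /etc/ntp.conf on the workstation
    server   ntp.gc.example
    restrict ntp.gc.example nomodify notrap noquery

    # then stop the daemon, step the clock once, and restart:
    service ntpd stop
    ntpdate ntp.gc.example
    service ntpd start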
After monitoring the clock on a different workstation (of the same type as seiteststand), it appears the clocks are not very accurate. A Supermicro 1U computer on our DAQ DTS test stand has gained 24, 28, and 19 seconds per hour over the last three hours.
After returning the sat boxes to the rack, M0 F1 was glitchy. Filiberto reseated the cable at the sat box, which appeared to fix it. Now, hours later, we are aligning the BOSEMs and have found that this signal pegs at ~6k a lot. We have been able to fix it by reseating the cable at the sat box, but something is still not right...
M0 F1 is still a problem this morning. Filiberto thinks it looks like an op amp in the sat box blew again - he's pulling the box again for a trip to the ER, er I mean EE.
Filiberto returned the sat box after another Op Amp swap, but a few hours later we now see bad BOSEM signals in this module...
Day 5: Still working on diagnosing the crazy signals from Sat box 1 (M0 F1, F2, F3, L). With Filiberto, performed another round of reboots of the AA chassis and the bscteststand computer. No changes so far. Likely another blown op amp on Sat box 1. Since we had changed both satellite boxes and a set of cables (putting the new Jay cables in), we may have had more than one problem to diagnose. We decided to put the old cables back in on a different sat box, and things looked pretty good. Filiberto is going to fix the blown op amp on Sat box 1.
After rearranging sat boxes, Richard and Filiberto found 2 that seem to work. They hauled away those that have issues. With the working sets, I finished aligning all 6 M0 Top BOSEMs. I just have to finish setting them to 50% OLV and then we can start FFTs.
Betsy, here is a summary of the troubleshooting of the Sat units.

Oct 8, 2010: Uninstalled two satellite units, SN320073 and SN320078, from the test stand. SN320078 had two bad channels: CH1 had its gain off by ~6 dB, so IC103 (OP2177AR) was replaced with an OP284 (we did not have an OP2177AR replacement part, but one has been placed on order, expected arrival 10-14-2010); CH3 had its output railing high, so IC303 was replaced with an OP284. SN320073 had CH4 railing high; replaced IC403 and IC404 with OP284s. Ran tests on the units per Test Plan T080062 (UK Satellite Amplifier Pre-Production Test Plan) for transfer functions and voltage readings.

Oct 11, 2010: Reinstalled SN320073 and SN320078 in the test stand. Took the opportunity to test unit SN3200715 per Test Plan T080062; all test results were within specs. Reinstalled the unit back in the test stand.

Oct 12, 2010: While connecting OSEMs, Betsy and company found the M0 F1 signal to be glitchy. This corresponds to unit SN320078 CH1. While troubleshooting, cables were switched with another sat unit. When the cables were reconnected to the original sat unit, CH1 was not responding, with an output of ~2.3 V. Uninstalled sat unit SN320078 and placed a working unit in its place. Replaced the "new" cable and installed an older cable on the vacuum side. Connected all four OSEMs and saw a response from all four channels. Disconnected the sat unit and left all four OSEMs connected. Replaced IC103 with an OP284; this IC was one that had previously been replaced. Ran transfer functions on CH1 per Test Procedure T080062; the unit tested good. Used a spare cable and OSEM SN0011 from Betsy to test connecting and disconnecting OSEMs while the sat unit was powered on, trying to see if the act of plugging and unplugging an OSEM could be causing op amps to go bad.

Oct 13, 2010: Reconnected unit SN320078; knowing that all four OSEMs were connected, we expected to see all four channels come up. Upon powering the unit up, all four channels seemed glitchy until CH3 died; all other channels then settled to a good state. Unit SN320073 also had CH4 fail again.

Oct 14, 2010: Replaced IC103 (CH1) and IC303 (CH3) on unit SN320078 with OP2177s. Replaced IC402 and IC404 on CH4 with OP2177s. All of the op amps that were continually going bad were OP284s, which were being used as substitutes until the OP2177s arrived.

Oct 15, 2010: Took voltage readings of the OSEMs:

    SN #     Our Reading   Stuart's Reading   Symptoms
    SN688    55.44         57.6               Working unit
    SN695    61.08         62.67              Glitchy - case causing short
    SN674    48.99         49.51              Low count
    SN697    54            55.37              Working unit - 29k count

Our data corresponds with Stuart's data. Even OSEM SN674 shows a lower voltage reading that is consistent with the low count reading from MEDM. Looked at the bad OSEM with the short; the short is coming from the photodiode cathode side. Also, we connected and disconnected various OSEMs and do not seem to be having the issue of blowing up the op amps. Still not sure how the first set of op amps went bad.