Mark B. Taking TFs on MC2 from 12:45 pm.
I let Michael R know that the NPRO is off. H1:PSL_AMP_ENABLEPWRDOG was alarming this morning.
The power watchdog on the frontend tripped last night, shutting down the laser. This appears to be due to a change in humidity affecting the pickoff power, as we have seen before. The laser power recorded by AMP_PWR3 read 91% when the watchdog was enabled, and had to decrease by 15% (to 77%) to trip the watchdog. The power recorded by OSC_PD_AMP_DC_OUTPUT decreased by about 250 mW over this time, or 0.8%.
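The trip arithmetic above can be sketched as follows. This is a minimal illustration, not the actual frontend code; the 15% relative-drop threshold and the function name are assumptions for the example, while the readings come from the log entry.

```python
# Hedged sketch of the power-watchdog trip condition described above.
# Assumption: the watchdog trips when the monitored power falls 15%
# (relative) below its value at the time the watchdog was enabled.

def watchdog_tripped(enable_reading_pct, current_reading_pct, drop_fraction=0.15):
    """Return True if the power has dropped by drop_fraction (relative)
    from its reading at watchdog-enable time."""
    return current_reading_pct <= enable_reading_pct * (1.0 - drop_fraction)

# AMP_PWR3 read 91% at enable; a 15% relative drop is 91 * 0.85 = 77.35%,
# consistent with the observed 77% reading at the trip.
print(watchdog_tripped(91.0, 77.0))  # True  (tripped)
print(watchdog_tripped(91.0, 80.0))  # False (still above threshold)
```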
The laser is still off as we try to move over to the final RF configuration.
This is just a record for those who care about the slow machine h1ecatc1.
I was playing with h1ecatc1 this evening around 18:10. When I tried logging in to PLC2, I got an error message: 'Page Fault. PLC is stopped.' PLC2 obviously wasn't running, so I recovered it by clicking the 'reset' button in the 'online' tab and then clicking the 'run' button. It seems to be running OK right now.
I ran a quick test today on IM3, which now has the largest offsets of the 4 HAM AUXs. In the attached plot, red shows the pitch offset and black the yaw offset; blue shows very different behavior when each offset is removed. The damping of the oscillation when the pitch offset is removed suggests the optic is touching an EQ stop. I left all HAM AUXs (IM1, IM2, IM3, and IM4) undamped overnight.
Please keep in mind that the Eddy Current Dampers' coupling with pitch is _much_ stronger than that with yaw, so I wouldn't consider the different decay rates as evidence of rubbing.
If rubbing is indeed suspected, I would suggest checking by either running a full set of TFs and looking for anomalies, or trying a (small) DC actuation in pitch and yaw in both directions and looking for asymmetries in the displacement measured by the OSEMs.
Thomas and Cheryl

Morning status:

Alarms:
- Dust, LVEA 2, 13, and EY 1 invalid - I reset the ignore on all 3, since no dust monitors are connected
- IOP SUS watchdog, ITMY - OSEM4 yellow
- PSL, laser OSC_XchilAlarm
- ENV, TEMP_Humidity - cleared up

Activities:
- Table optics install at input arm
- BSC2 transfer functions
- In-chamber cleaning at EX
- Tumbleweed cleanup
- Praxair delivery, 10:04, CP-4
- Vent of BSC1/2/3 and HAM4/5/6
- Optics tables moved into squeezer area in LVEA
- Test stand computer issues
Wired up the WHAM2 HEPI IPS (Inductive Position Sensors) and the horizontal L4-Cs (seismometers). The L4-Cs appear functional, as do the IPS. I zeroed the IPS and removed the dial indicators; the numbers on the DIs would 'indicate' that they have been disturbed beyond usefulness. While some of the IPS read several thousand counts, none were off by enough to require position adjustment. I zeroed the IPS to less than 50 counts at the INMON (e.g. H1:HPI-HAM2_IPSINF_H4_INMON). The calibration after the ADC is 655 cts/0.001", so these are effectively zero, especially with respect to initial alignment. HEPI remains locked at HAM2.
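For reference, the quoted calibration can be turned into a quick counts-to-displacement conversion. This is a hypothetical helper for illustration only; the 655 counts per 0.001" factor is the one stated above.

```python
# Minimal sketch: convert IPS INMON counts to inches using the
# calibration quoted in the log (655 counts per 0.001 inch after the ADC).
# The function name is an assumption, not an existing tool.

CTS_PER_MIL = 655.0  # counts per 0.001 inch

def ips_counts_to_inches(counts):
    """Displacement in inches corresponding to an INMON count reading."""
    return counts / CTS_PER_MIL * 0.001

# A 50-count residual after zeroing is well under a ten-thousandth of an inch:
print(ips_counts_to_inches(50))  # ~7.6e-05 inch
```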
Kyle, Gerardo

Extra slow at the request of others -> hope to minimize moving particulate
1600 hrs. local 2.1 x 10^-6 torr @ turbo inlet -> Foreline @ 3.7 x 10^-3 torr
NOTE: temporary pump system used for this pump-down is ~1/4 the pump speed of the nominal MTP
*Venting adjacent volume today (BSC2....etc) did not communicate to gauge on HAM3; HAM3 annulus is vented*
It appears the I/O Chassis for the h1susbstst computer (in the LVEA) has died. I cannot get it to power up; after disconnecting and reconnecting the power cord, turning on the power switch results in a momentary flash of LEDs, then everything remains dark.
Timing went bad on IOP model running on h1sush2b computer about 10:48 PST this morning. Restarted h1susim and h1iopsush2b models to reset timing. Checked with the usual suspects, but no one claims credit for this event.
The baling crew worked on clearing a second lane from the mid-station to the end. The road to the end is passable and both mid and end nitrogen storage tanks are accessible, with turn-around space for trucks. Bubba and I checked on the status of preparations for the vinyl installer, who is due on-site tomorrow. The floor in front of BSC6 is clear and ready.
1200 hrs. local 2.4 x 10^-6 torr @ turbo inlet -> Foreline @ 3.9 x 10^-3 torr
Y-mid is now accessible -> Mechanical gauge @ tank 8514372 indicates 68" WC of LN2
Tumbleweed crew working toward Y-end.
1225 hrs. local -> Leaving site now
1040 hrs. local 2.7 x 10^-6 torr @ turbo inlet -> Foreline @ 4.1 x 10^-3 torr
Domestic water storage tank @ 43", raw water storage tank > 60"
BT roads still not passable -> tumbleweed crew working towards Y-mid
1100 hrs. local -> Leaving site now
We have completed the removal of the two infrastructure racks in the H2 Building, with their contents now relocated to the MSR. All items went into the two right-side MSR racks with the exception of:
the Qlogic switch which was moved to the DTS DAQ rack (previously the H2 DAQ rack)
the GC switch which was relocated into the DTS computer rack
the conlog machine, which was relocated to the H1 DMT rack; this will become an H1 FE computer
The cdswiki machine was finally made to boot via a manual grub startup. This machine looks like it lost one of its LVM2 drives. Since it will be replaced with the new Debian box next week, we will hope it holds together until then.
The backup file server died again, we suspect a bad boot drive. It is scheduled for a reinstall on the second boot drive soon.
Work continues on the rack relocation from the H2 Building to the MSR. Today we are fixing the cdswiki problems, moving the conlog machine into the H1 DMT rack, getting all services restarted and gutting the old H2 racks. Access to the DTS is down while we reconfigure the GC network in the H2 Building.
Attached is a picture of the two new MSR racks. They are the computer racks next to the two UPS racks.
WP 3615, Dave B and Jim B
We shut down all the servers in the H2 Building, starting at 9am today. All machines were re-racked in the MSR. The original network switches were moved along with the computers. All were connected to the Raritan Paragon KVM system.
We powered the machines on in order of importance: file and boot servers, admin servers, ifo servers, web and login servers.
All came back except for cdswiki, which appears to have a disk error. We will continue to work on this tomorrow, but for tonight there are no CDS web services: medm snapshots, web pages, work permit, cds wiki. If we cannot get cdswiki working tomorrow, we will put a temporary web server in place to serve the medm screen shots.
Staff with login accounts should be able to use them now.
We checked that the DAQ is acquiring the Vacuum controls and FMCS signals. All front ends were unaffected by this move.
All Ubuntu workstations in the control room needed a power cycle to become operational; we suspect DHCP issues.
I'll post a more detailed alog later.
1300 hrs. local 3.3 x 10^-6 torr @ turbo inlet -> foreline @ 4.4 x 10^-3 torr
Tumbleweed crew are working their way to Y-mid
The h1boot computer had a kernel panic Dec. 25 in the morning. Rebooted at 11:19 PST Dec. 26.
Logs suggest that h1boot became unresponsive at 01:55 am Christmas morning.