Noticed that the turbo at the X-end station had also tripped off (electrical?), as had its QDP80 -> Restarted and valved-in -> resuming pumping at X-end
Jim, Cyrus, Dave
The large wind event was preceded by several power glitches which impacted the DAQ and killed the front-end computers.
After waiting to ensure that the power was stable again, we remotely (via management port) power cycled the FE computers at the end and mid stations and reset the MSR computers. Generally, computers not on the Dolphin networks restarted themselves; some needed a power cycle. Once all the computers were booted, they all started their models. At that point we discovered the Dolphin IPC in the MSR was non-operational. We suspect the glitchy nature of the outage put the Dolphin switches in a bad state. We stopped all models running on MSR computers attached to the Dolphin network (all but the PSL and SUSAUX), power cycled the Dolphin switches, and rebooted the FE computers via the front-panel RESET button. Some models did not autostart and needed their "BURT_RESTORE" button pressed, which we did.
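For reference, a minimal sketch of the remote power-cycle step, assuming the management ports speak IPMI and ipmitool is installed; the hostname and credentials here are placeholders, not the actual FE inventory:

#!/usr/bin/env python
# Sketch only: power cycle front-end computers via their management
# (IPMI) ports. Hostnames and credentials below are placeholders.
import subprocess

FE_MGMT_HOSTS = ['h1fe-example-mgmt']  # placeholder management-port hostnames

def power_cycle(host, user='admin', password='changeme'):
    """Issue an IPMI chassis power-cycle command to one management port."""
    subprocess.run(['ipmitool', '-H', host, '-U', user, '-P', password,
                    'chassis', 'power', 'cycle'], check=True)

for host in FE_MGMT_HOSTS:
    power_cycle(host)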
The DAQ was showing bogus data for slow channels (e.g. wind speed below 10 mph when it was 50 mph outside), so a clean restart of the DAQ was done. The NDS machines took many minutes before they got started; not sure why at the moment.
Two systems started with an IRIG-B timing signal around 400 (it should be 15), which then drifted down to nominal over 20-30 minutes. These were h1sush34 and h1iscey. We allowed these to become good rather than power cycle their IO chassis.
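Rather than power cycling, one can simply watch the readback settle. A minimal sketch, assuming pyepics and a hypothetical channel name (the real IRIG-B readbacks for h1sush34 and h1iscey will differ):

# Sketch only: poll an IRIG-B timing readback until it settles near
# the nominal value of ~15. The channel name is a placeholder.
import time
from epics import caget  # pyepics

CHANNEL = 'H1:FEC-0_IRIGB_OFFSET'  # hypothetical channel name
NOMINAL, TOL = 15.0, 2.0

while True:
    value = caget(CHANNEL)
    print('IRIG-B readback:', value)
    if value is not None and abs(value - NOMINAL) < TOL:
        print('Settled to nominal; no IO chassis power cycle needed.')
        break
    time.sleep(60)  # the drift took 20-30 minutes, so poll slowly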
Once the models were running, stable, and communicating, I enabled the SWWD systems to drive the DACs.
Sheila is handling the recovery of the Beckhoff and PSL.
I opened GV2 and disconnected the leak detector -> noticed the YBM and XBM turbos had tripped -> I assumed this was due to the gas bump from the gate annulus volume being too high for the relatively low safety-valve set points of 5x10^-2 Torr -> Increased set point and restarted YBM turbo -> valved-in YBM turbo -> Increased set point and tried to restart XBM turbo, but its QDP80 was also off(?) -> Restarted QDP80, but now the turbo trips on vibration at about 75% rpm -> attempted to spin it up a few times but no luck -> This symptom has happened before and has been bypassed by introducing a gas load at the turbo inlet via cracking open the "up-to-air" needle valve -> I didn't try this now; will revisit tomorrow. Pumping YBM, Vertex and XBM with the YBM turbo tonight.
Gerardo, Kyle
Following John and Bubba's final torquing of the under-torqued viewports on HAM4 (S. door, bottom middle) and HAM5 (S. door, bottom left) -> Kyle and Gerardo sprayed audible bursts of helium (from up to 3 feet away, or closer) at viewports, feedthroughs and all accessible flanges -> Helium baseline drifted slowly up from 9x10^-9 mbar*L/sec to 3x10^-8 mbar*L/sec during the test period (lots of helium sprayed). This was not an exhaustive test: some flanges were not accessible (light pipes sealed off with aluminized tape, etc.), and in most cases the flange leak-test ports were not pressurized, though big leaks should have responded.
I restarted the PSL after the power glitch.
To do this I also had to reset the settings on the KEPCO power supply, and I reset the long range actuator.
When everything locked, the ISS diffracted power was around 21%. I tried toggling the noise eater (nothing else on the ODC indicated a problem with the noise eater); this didn't help, and I ended up adjusting H1:PSL-ISS_REFSIGNAL to bring the diffracted power down to 9%.
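The adjustment could be scripted along these lines; a sketch assuming pyepics, where the diffracted-power readback channel name and the step size are assumptions (only H1:PSL-ISS_REFSIGNAL is from this entry):

# Sketch only: step the ISS reference signal until the diffracted
# power approaches the 9% target. Readback channel name is assumed.
import time
from epics import caget, caput  # pyepics

REFSIGNAL = 'H1:PSL-ISS_REFSIGNAL'
DIFFRACTED = 'H1:PSL-ISS_DIFFRACTED_PCT'  # hypothetical readback channel
TARGET, STEP = 9.0, 0.01                  # percent target; assumed step size

while True:
    diffracted = caget(DIFFRACTED)
    if diffracted is None or diffracted <= TARGET:
        break
    caput(REFSIGNAL, caget(REFSIGNAL) + STEP)  # step sign depends on the servo
    time.sleep(1)  # let the loop settle before re-reading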
With wind gusts in the 70 mph range, we are seeing spikes in the dust counts, most notably at EX with counts in the 100,000s at 0.3um and the 20,000s at 0.5um. Counts are also elevated in the LVEA around HAM3, HAM4, the beer garden, and the Y-arm spool, but only into the 2,000s at 0.3um and the high 100s at 0.5um.
Replaced the batteries in the main cabinet of the Mass Storage Room UPS. Also replaced the backup control module, which had gone bad a while ago, so we had been running without some redundancy.
This dust storm was fast. Here are some photos of the approach. Two power glitches preceded the storm (around 4:00pm).
EX: Switched filter box for ESD (Filiberto)
EX ISI TF running overnight (Jim)----> No banging on chamber!
Leak checking: y-beam manifold (Gerardo) & everywhere else (Kyle)
EY checking accelerometer locations for B&K tests (Arnaud/Tim)
EX calibration work (Paul, Jordan)
13:30 Greg turning on TCS CO2 laser in squeezer bay.
Justin refilled crystal chiller (Corey and Travis present as trainees)
PR3 oplev work beginning (Doug & Jason)
3IFO Quad work in the West Bay (Betsy)
13:54 Old conlog briefly being taken down (Cyrus)
14:23 Pablo to EY for ALS VCO characterization work
Dust monitor #15 (beer garden) alarm (13000 @ 0.3um but 0 @ 0.5um, check functionality??)
14:39 Tim and JeffK to EY to setup B&K measurements
15:03 Doug & Jason out of LVEA
15:42 Paul & Jordan to EY installing voltage monitors
15:44 Hugh to EX restarting HEPI pump station
16:00 Kyle opening GV2
16:25 Pablo done for the day
Two power glitches (~4:00pm local) from a giant dust storm that just hit us. The front ends died after the first glitch. Winds are gusting up to 70+ mph. We're going to wait until the storm passes before beginning to restore everything. The pictures are from just before the storm hit.
Old alog 13299. Checked the two RF amps in ISC-R2 and found surprisingly high attenuators of 6 dB and 4 dB. Replaced the first with a 1 dB attenuator and the second with a 2 dB attenuator.
New readbacks:
=== Rack ISC-R2 U38 ===
H1:ISC-RF_C_REFLAMP45M_OUTPUTMON = 22.6
=== Rack ISC-R2 U37 ===
H1:ISC-RF_C_REFLAMP9M1_OUTPUTMON = 22.4
The amplifier in the RF amp is nominally +12 dB; a 10 dBm input will give close to 13 dBm on each output at the tested 30 MHz. The slow readbacks should be around 22 dBm (the 10 dBm input plus the nominal +12 dB gain), typically no more than 1 dB off.
Readbacks for the other 9 MHz and 45 MHz units in R2 went up accordingly. Some are now too high and need to be readjusted. The WFS should be OK, since they use an RF splitter to distribute the LO to the 4 channels.
=== Rack ISC-R2 U18 (WFS REFL_A) ===
H1:ASC-REFL_A_RF9_DEMOD_LOMONCHANNEL_1 = 19.1
H1:ASC-REFL_A_RF9_DEMOD_LOMONCHANNEL_2 = 19.0
H1:ASC-REFL_A_RF9_DEMOD_LOMONCHANNEL_3 = 18.9
H1:ASC-REFL_A_RF9_DEMOD_LOMONCHANNEL_4 = 19.3
=== Rack ISC-R2 U16 (WFS REFL_A) ===
H1:ASC-REFL_A_RF45_DEMOD_LOMONCHANNEL_1 = 18.3
H1:ASC-REFL_A_RF45_DEMOD_LOMONCHANNEL_2 = 18.4
H1:ASC-REFL_A_RF45_DEMOD_LOMONCHANNEL_3 = -75.0432
H1:ASC-REFL_A_RF45_DEMOD_LOMONCHANNEL_4 = -74.9568
=== Rack ISC-R2 U10 (WFS REFL_B) ===
H1:ASC-REFL_B_RF9_DEMOD_LOMONCHANNEL_1 = 19.3
H1:ASC-REFL_B_RF9_DEMOD_LOMONCHANNEL_2 = 19.2
H1:ASC-REFL_B_RF9_DEMOD_LOMONCHANNEL_3 = 19.2
H1:ASC-REFL_B_RF9_DEMOD_LOMONCHANNEL_4 = 19.2
=== Rack ISC-R2 U08 (WFS REFL_B) ===
H1:ASC-REFL_B_RF45_DEMOD_LOMONCHANNEL_1 = 18.5
H1:ASC-REFL_B_RF45_DEMOD_LOMONCHANNEL_2 = 18.2
H1:ASC-REFL_B_RF45_DEMOD_LOMONCHANNEL_3 = 18.3
H1:ASC-REFL_B_RF45_DEMOD_LOMONCHANNEL_4 = 18.4
The 2 broken channels (together with the RF readbacks) are from Beckhoff chassis corner 4, terminal M4. All channels on this module read a value close to zero. It could be a cable problem between the demod chassis and the ASC demod concentrator.
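Since each LO monitor should sit within about 1 dB of its nominal value, a quick scripted check can flag failures like the two dead channels above. A sketch assuming pyepics (nominal values taken from this entry; channel list abbreviated):

# Sketch only: flag LO monitor readbacks far from nominal, e.g. the
# two broken channels reading about -75 dBm.
from epics import caget  # pyepics

NOMINALS = {  # channel -> expected readback in dBm
    'H1:ASC-REFL_A_RF45_DEMOD_LOMONCHANNEL_3': 18.3,
    'H1:ASC-REFL_A_RF45_DEMOD_LOMONCHANNEL_4': 18.3,
    # ... remaining LOMON channels
}

for channel, nominal in NOMINALS.items():
    value = caget(channel)
    if value is None or abs(value - nominal) > 1.0:  # typical spread is <1 dB
        print('BAD: %s = %s (expected ~%.1f dBm)' % (channel, value, nominal))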
I'm moving the old conlog machine to a new rack, so it (and by extension the historical data) will be unavailable for 15-20 min. The new production conlog system will be unaffected.
h1conlog-old should be available again. It took slightly longer as I needed to correct an IPMI configuration issue.
Jeff gave links in his alog to previous occurrences of this error. This table summarizes the dates of the front ends which exhibited this error over the past year:
Date       | Front end
-----------|----------
8/11/2014  | h1susb123
8/9/2014   | h1seih23
4/21/2014  | h1sush2a
3/18/2014  | h1sush2a
2/27/2014  | h1seih23
12/16/2013 | h1seih23
11/7/2013  | h1sush2a
11/7/2013  | h1sush34
8/8/2013   | h1seih23
That makes 9 events in one year; h1sush2a and h1seih23 show this error more than once (3 times and 4 times, respectively).
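The tally can be reproduced directly from the table, e.g.:

# Count the timing-error events per front end from the table above.
from collections import Counter

events = ['h1susb123', 'h1seih23', 'h1sush2a', 'h1sush2a', 'h1seih23',
          'h1seih23', 'h1sush2a', 'h1sush34', 'h1seih23']

counts = Counter(events)
print(len(events), 'events:', counts.most_common())
# 9 events: h1seih23 x4, h1sush2a x3, h1susb123 x1, h1sush34 x1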
I have compiled all models using RCG2.8.5 in preparation for Thursday's upgrade. The procedure was:
If anyone needs to compile models today or tomorrow, please email me, as I may have to rebuild against the new IPC file tomorrow.
Running the full measurement; it should be done tomorrow morning.
08:00 LVEA is LASER HAZARD
08:00 Visitor - Tim McDonald arrived
08:50 "low/major" alarm in HO:FMC-EX_CY_H2O_SUO-DEGF - John Worden notified
09:01 M Landry - shuttering LASER temporarily for viewport inspection by vacuum team. Vac team will be allowed to remove glasses for this operation. Un-shutter immediately after. Remain LASER hazard.
09:15 J Batch - bringing framewriter 1 down to move files. Should only take a few minutes.
09:30 J. Bergman - re-opening PSL shutter
09:30 Kiwamu and Sheila to PSL to install some PDs and an AOM to take IMC cavity pole measurements in prep for running at higher power - out at 12:21. They will go back in some time this evening.
09:32 Nathan to optics lab
09:34 Jordan and Paul to End-Y to swap out AI chassis for PEM - out at 10:17
09:44 D Barker - "Code Freeze" UNTIL NOON
09:48 Doug and Jason into LVEA to wander and look for pieces/parts for OpLev - out at 10:30
09:55 Kyle and Gerardo going to postpone crawling around on chambers in LVEA until after lunch
10:05 Travis out to HAM3 to check on Mobility experiment
10:20 Betsy out to hunt LVEA for ISC HAM6 parts
10:39 Praxair out (didn't see them arrive)
10:40 Betsy working in West Bay on Quad
10:42 Kyle out to End-Y VEA to turn something off - out at 11:00
12:13 Pablo called to report he was at End-Y to do characterization on ALS and VCO
12:30 Karen to End-Y
12:34 Filiberto to End-Y to change the bias filter box for the ESD... to be continued by Corey
As part of the upgrade to RCG2.8.5 I am taking the opportunity to rebuild the H1.ipc file from scratch because it contains a lot of orphaned channels and jumps in channel numbering.
I wrote a script to parse the H1.ipc INI file, compressing each channel onto a single line so we can do diff/awk/sed-type operations on it (the script is parseIpcIniFile).
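For illustration, a minimal sketch of the flattening idea (the real script is parseIpcIniFile; this assumes the standard INI layout of a [CHANNEL] header followed by key=value lines):

#!/usr/bin/env python
# Sketch only: flatten each [CHANNEL] stanza of an IPC INI file onto a
# single line so the result can be fed to diff/awk/sed.
import sys

def flatten_ipc(path):
    """Yield one 'CHANNEL key=val key=val ...' line per INI stanza."""
    channel, fields = None, []
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith(('#', ';')):
                continue                      # skip blanks and comments
            if line.startswith('['):          # a new stanza begins
                if channel is not None:
                    yield ' '.join([channel] + fields)
                channel, fields = line.strip('[]'), []
            elif channel is not None:
                fields.append(line)           # accumulate key=value pairs
    if channel is not None:
        yield ' '.join([channel] + fields)    # flush the last stanza

if __name__ == '__main__':
    for flat in flatten_ipc(sys.argv[1]):
        print(flat)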
Here are the orphaned channels which are being removed (Qty 40):
20 days of the end station pressure, including the two injection events.
Because of maintenance, the default NDS server has been changed to h1nds0. The raw minute data files for the last several months are being moved and will be unavailable from h1nds1.
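For client code this just means pointing at the new default; e.g. with the nds2-client Python bindings (the port and the GPS span/channel shown are only illustrative):

# Sketch only: fetch data from h1nds0 while h1nds1 is unavailable.
import nds2

conn = nds2.connection('h1nds0', 8088)  # port assumed for the site NDS server
buffers = conn.fetch(1092800000, 1092800016,          # example GPS span
                     ['H1:PSL-ISS_REFSIGNAL'])        # example channel
print(buffers[0].data)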