Bubba and Vern on site for NOVA Film Crew Shoot with Janna Levin.
Working in peace and quiet. Vertex pressure is looking good @ 2.7e-7 Torr.
FRS 8166: Filament #1 on the Pfeiffer RGA seems to be burned out; it turns off a second after being turned on. I switched to filament #2, which would not stay on either. The default emission current was set to 2 mA. I lowered it to 0.1 mA and the filament (#2) stayed on at that value. I then raised it to 0.3 mA (note that you have to click in a different window to activate the new setting), then to 1 mA, where the filament automatically turned off. I went back to 0.3 mA and that's where it's sitting for now. Pfeiffer suggested we let it warm up and slowly ramp up to 2 mA. These steps are sometimes needed when the filament hasn't been used for a long time, but that is not the case here, so something else is going on.

The error message we were getting when the filament turned off was "emission error" (E002). Pfeiffer said a burned-out filament will usually yield a different type of error, but sometimes a filament fails such that it still registers a current: when the broken filament cools, it reconnects. Here is a manual describing the emission error: ftp://ftp.aerodyne.com/ACSM/ACSM_Pack/Quadara2/Manuals/English/QUADERA/quadera.pdf

We verified the RGA turbo is ON and pumping out the RGA volume. One theory is that during the bake of the RGA (during the oven load bake) we sprung an air leak, and now the turbo can't keep up with the leak rate, causing a head pressure too high for a filament to tolerate. Note that one of the two turbos on VBOC is at 33 C, which is a little high (two fans are blowing on it); the other is at 27 C. Also, the foreline pressure is reading 0.018 Torr, versus the normal 0.016 or 0.017 Torr. We should add HV pressure gauges to the system for times like this.

Also, one of the electronics units has a recessed pin socket. The other two spare units yielded the same error (filament turning off almost immediately). Maybe instead of replacing the SRS with the Pfeiffer we should replace the Pfeiffer with the SRS, given these pin issues due to the frequency of removing and reinstalling the electronics between bakes.
J. Kissel, NOVA Film Crew

I've modified the main displays on the front wall at the request of the NOVA film crew. The lower screen is displaying the GW150914 waveform, and the background of the DTT session showing the Live Sensitivity on the upper screen has been changed to black. There is zero intention of this being permanent, but I figure there will be no harm in leaving it as such over the weekend, since the content normally displayed on these screens only has meaning when we have light in the arms, no one is intending to commission the corner station over the weekend, and there will be no operations specialists on shift over the weekend. So it should just be the film crew, Dr. Levin, and their site liaison, and they all want it this way! I'll restore the displays to normal Monday.
NUC2 and NUC3 configurations have been restored to O2 Observation Ready.
Gerardo, Jonathan, Patrick

We used dataviewer to plot the raw, second, and minute trends for the following channels from May 18 2017 16:38:59 UTC to May 18 2017 20:38:59 UTC (attached):
H0:VAC-EY_Y3_PT410A_PRESS_TORR
H0:VAC-EY_Y3_410_PIRANI_INTLK
H0:VAC-EY_Y3_410_PWR_REQ

H0:VAC-EY_Y3_410_PIRANI_INTLK should only be either 0 or 1, but in the second trends it takes a value of ~2.2. Also in the second trends, H0:VAC-EY_Y3_PT410A_PRESS_TORR jumps from 0 to 1, which is not realistic for a pressure reading.
Using h1nds1 port 8088.
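For anyone who wants to repeat the trend query without dataviewer, here is a minimal sketch using the Python nds2-client against h1nds1:8088. The GPS times are my approximate conversion of the UTC window quoted above, so double-check them before relying on the output.

import nds2

# second-trend means of the three vacuum channels discussed above
channels = [
    'H0:VAC-EY_Y3_PT410A_PRESS_TORR.mean,s-trend',
    'H0:VAC-EY_Y3_410_PIRANI_INTLK.mean,s-trend',
    'H0:VAC-EY_Y3_410_PWR_REQ.mean,s-trend',
]

# May 18 2017 16:38:59 UTC to 20:38:59 UTC, roughly GPS 1179160757 - 1179175157
start, stop = 1179160757, 1179175157

conn = nds2.connection('h1nds1', 8088)
for buf in conn.fetch(start, stop, channels):
    print(buf.channel.name, 'min =', buf.data.min(), 'max =', buf.data.max())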
More second-trend plots of the anomaly; the window is 14 hours long and the event lasts about 8.5 hours. Unrealistic data was noted at the CS, EY, and EX, but not at the mids (though only a small sample of channels was checked there).
Other subsystems were affected as well, but we only sampled a few channels from FMCS and ASC.
Activities: all times in UTC
As of 23:00, end of day shift coverage:
Work Permit | Date | Description | alog/status |
6641 | 5/19/2017 11:34 | Sunday evening, May 21st, give personal tour to one or two friends * Walk on Mezzanine in LVEA | |
6640 | 5/18/2017 10:33 | Opening GV 1,2 isolating vertex from x & y beam manifolds. The manifold turbos are ON and open to beam tubes. IP 5,6 are valved out. Pumped out gate annuli on both valves. This will allow access to oplevs/pcal camera to help with ITMx alignment and/or imaging. It could also speed up pump down time. | |
6639 | 5/17/2017 10:33 | Update and restart the Beckhoff PLC code on h0vacmr to match the IP6 controller change from a dualvac to a gamma. Due to the difference in the signals for the controllers, some EPICS channels will be lost and others added. Minute trend files may be renamed. Requires a DAQ restart. Will update autoBurt.req file and MEDM screens. Will coordinate with the vacuum group. | 36259, 36266 |
6638 | 5/17/2017 9:45 | Troubleshoot a script that may have crashed the nds1 server. Run an instrumented copy of the operator alog summary script w/ a new nds2 client to see what caused problems. This should be done when users are not in the middle of large measurements, in case we impact h1nds1 again. | |
Past WPs | |||
6622 | 5/8/2017 7:15 | Install Air Trap/Bleed (ECR #E1700096) in PSL Crystal Chiller return line. Will need the PSL to be down during the installation. | 36223 |
I forgot to log this action yesterday:
I noticed that the Diode chiller alarm 'light' was flashing red intermittently. I went into the chiller room and saw that the Xtal chiller water level was also a little low.
Long story short: I added 200ml to both.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 274 seconds. TC B did not register fill. LLCV set back to 21.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill not completed after 3600 seconds. LLCV set back to 33.0% open.
Manually filled CP4 from control room at 100% open. Took 11 minutes. Lowered to 36% open (from 33%).
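For reference, the manual fill amounts to opening the LLCV, waiting for an exhaust thermocouple to read cold (i.e. LN2 overflowing), and then stepping back to the nominal opening. Below is a rough pyepics sketch of that sequence; the PV names and the thermocouple threshold are placeholders I made up for illustration, not the real H0:VAC channels -- check the vacuum MEDM screens for the actual ones.

import time
from epics import caget, caput

LLCV_POS = 'H0:VAC-MY_CP4_LLCV_POS_REQ'   # hypothetical PV name for the LLCV position request
TC_A     = 'H0:VAC-MY_CP4_TC_A_DEGC'      # hypothetical PV name for an exhaust thermocouple

caput(LLCV_POS, 100)                 # open fully for the manual fill
t0 = time.time()
while time.time() - t0 < 3600:       # give up after an hour, as the fill log above does
    if caget(TC_A) < -20:            # assumed threshold: TC goes cold when LN2 reaches it
        print('fill complete after %d s' % (time.time() - t0))
        break
    time.sleep(10)
caput(LLCV_POS, 36)                  # step back down to the new nominal opening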
H1NDS1 locked up on Monday at around 10:51am local time. Here is what happened, to the best of my understanding.
TJ was testing his operator alog summary generation script with a pre-release version of the nds2-client at my request. The build of the nds2-client was updated to fix some serious performance regressions on the streaming data interface. When TJ ran his script it ran for a while, then timed out. At this point H1NDS1 was unresponsive and required a restart of the nds/daqd services.
The logs showed that the same query was being repeated over and over again, followed eventually by a large number of socket-closed events.
Reviewing the code, I found it ran into an endless loop: if the query failed it would retry and retry and retry ... This is what locked up the nds1 server. The reason it worked with an older nds2-client (0.12.2) but failed with the newer client is that there was a gap in the data. The 0.12.2 client just skipped over the gap, whereas the 0.14.x client raised an error saying there was a gap. This error triggered a retry, which raised an error, which triggered a retry, which ...
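A minimal sketch of the obvious fix, assuming the script keeps using the nds2-client: bound the retries and treat a gap error as something to log and move past, rather than loop on forever. The function below is illustrative only, not the actual code in TJ's script.

import time
import nds2

def fetch_with_retries(conn, start, stop, channels, max_retries=3):
    """Fetch a span of data, giving up after max_retries failures."""
    for attempt in range(1, max_retries + 1):
        try:
            return conn.fetch(start, stop, channels)
        except Exception as err:
            # the 0.14.x client raises here when the span contains a gap;
            # 0.12.2 silently skipped the gap, which is why the old client "worked"
            print('fetch failed (%s), attempt %d of %d' % (err, attempt, max_retries))
            time.sleep(5)
    return None   # give up instead of hammering h1nds1 in a tight loop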
The gap was not 'missing' data, but data temporarily not accessible via H1NDS1. For background, there are two sources for the nds1 server to pull from: its in-memory buffer (which is fed by a continuous stream of data from the data concentrator) and the files on disk. It takes about 70-120s (depending on where the data falls in the frame, I/O load on the disk system, ...) from the time the data leaves the data concentrator until it is written to disk. Each raw frame is 64s long. It takes roughly 40s to write the frame, so you need about 100s from the time the frame writer gets the first second's worth of data for a frame until the frame can be read from disk. The nds1 servers keep a buffer of live data in memory and read from it where they can. H1NDS1 is configured to keep 50s of data in memory, which leaves you with a gap of about 55s when you transition from data on disk to data in the in-memory buffer.
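The arithmetic in the paragraph above, spelled out (all numbers approximate and taken from the text itself):

frame_length  = 64   # s of data per raw frame
write_time    = 40   # s for the frame writer to write the frame to disk
memory_buffer = 50   # s of live data h1nds1 keeps in memory

# data from the first second of a frame is only readable from disk after the
# whole frame has been collected and written:
disk_latency = frame_length + write_time   # ~104 s
gap = disk_latency - memory_buffer         # ~54 s, i.e. the "gap of about 55s"
print(disk_latency, gap)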
As a side note, h1nds0 (which guardian uses) has a 100s buffer and does not see a gap unless there is a glitch in writing the frame. I believe this is how the nds1 servers at LLO are configured.
I ran some tests on the production system this morning from roughly 7:00-7:30am local time. I was able to reproduce the endless error loop. I did not see H1NDS1 lock up, which is probably because I saw the connections close right after the request for data (which is what I would expect). So I am not sure what kept the connections open until the termination of TJ's script on Monday.
I recommend the following
measured gain, P2L or Y2L | alpha | beam from center, mm | |
(alpha * 42.2 mm/alpha) * (-1, for pitch only) |||
mc1 p | -1.14 | -0.0544 | 2.30 |
mc1 y | 1.882 | 0.0898 | 3.79 |
mc2 p | -3.81 | -0.1818 | 7.67 |
mc2 y | -0.4 | -0.0191 | -0.81 |
mc3 p | -1.073 | -0.0512 | 2.16 |
mc3 y | -2.83 | -0.1351 | -5.70 |
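A quick check of the "beam from center" column, under my reading of the formula row (spot position = alpha * 42.2 mm, sign flipped for pitch); the printed values reproduce the table above to within ~0.01 mm:

alphas = {
    'mc1 p': -0.0544, 'mc1 y':  0.0898,
    'mc2 p': -0.1818, 'mc2 y': -0.0191,
    'mc3 p': -0.0512, 'mc3 y': -0.1351,
}
for dof, alpha in alphas.items():
    sign = -1 if dof.endswith('p') else 1   # -1 for pitch only
    print('%s  %+6.2f mm' % (dof, sign * alpha * 42.2))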
21-March-2017 | 17-May-2017 | diff, March to May | |
mm | mm | mm | |
mc1 p | 2.42 | 2.30 | -0.12 |
mc1 y | 3.74 | 3.79 | 0.05 |
mc2 p | 7.71 | 7.67 | -0.04 |
mc2 y | -0.85 | -0.81 | 0.04 |
mc3 p | 2.16 | 2.16 | 0 |
mc3 y | -5.7 | -5.70 | 0 |
Virtually no change in the IMC mirror beam spots from before to after the vent.
The #3 Mitsubishi cooling unit in the MSR has been repaired (FRS #8105). Both control boards and a resistor have been replaced and it is pumping out cold air.
Activities: all times UTC
Robert and I went to the end-Y station yesterday and today to do some work related to line-hunting. Yesterday, we shut down the MAD CITY LABS nano-drive located on top of the laser table in the VEA, and we also shut down the CNS Clock II located in the computer room (both systems were off for about two hours). Today, at ~10:58 local time, we shut down the ESD Pressure Interlock for about an hour. Pictures of the three systems are attached to this log.
We have been working to stabilize the temperatures, mostly at End Y because that is the station with the completed FMCS controls; End X should be completed by the end of this week. During this process several changes have been made to various components such as the face/by-pass damper, cooling valve, chillers, etc. Currently the changes have only been made at the End Y station, where we were trying to maintain 68 degrees F. After several conversations with Jeff K. and some calculations made by Jeff, he has determined that a more suitable temperature for End Y would be ~64 degrees F. I have changed the set point in the End Y VEA to 64 F. I will monitor the temperature closely tonight, and hopefully this will have the desired effect on the suspensions. I also had an alarm on the Mid X supply fan this morning, and upon further investigation I found that someone had turned the fan off at the electrical disconnect. I restored the HOA switch to the auto position and started the fan.
J. Kissel, B. Gateley

More on why I became convinced that the new FMCS temperature sensor array is reporting the wrong temperature (i.e. the set point creates a higher physical temperature than the FMCS error point reports).

The message: I'm convinced that the FMCS sensor array is incorrectly calibrated, based on evidence from all other temperature sensors (direct or indirect) in the EY building (see below). Most convincing is the PCAL Receiver Module's temperature sensor, which is in the same units as the FMCS sensor and was reporting the same value prior to the upgrade (and was live throughout the upgrade). Using the difference between the FMCS sensor array and the PCAL receiver module, I recommended that Bubba adjust the FMCS set point ~(22.4 - 20.2) [deg C] = (72.32 - 68.36) [deg F] = 4 [deg F] lower than it was -- hence he changed the set point from 68 to 64 [deg F].

%%%%%%%%%%%%% DETAILS %%%%%%%%%%%%%

I've looked at as many direct and indirect temperature sensors as I know:

Type | Sub-System | Channel | Units | Notes
Temp Sensor | FMCS | H0:FMC-EY_VEA_AVTEMP_DEG[C/F] | [Celsius / Fahrenheit] | Average of the FMCS sensors below; error point for the FMCS temp control servo
 | | H0:FMC-EY_VEA_202A_DEG_[C/F] | [Celsius / Fahrenheit] |
 | | H0:FMC-EY_VEA_202B_DEG_[C/F] | [Celsius / Fahrenheit] |
 | | H0:FMC-EY_VEA_202C_DEG_[C/F] | [Celsius / Fahrenheit] |
 | | H0:FMC-EY_VEA_202D_DEG_[C/F] | [Celsius / Fahrenheit] |
Temp Sensor | PCAL | H1:CAL-PCALY_TRANSMITTERMODULETEMPERATURE | [Celsius] | Reports a few [deg C] higher temp because of the 2W laser inside
 | | H1:CAL-PCALY_RECEIVERMODULETEMPERATURE | [Celsius] |
Temp Sensor | CCG | H1:PEM-EY_TEMP_VEA1_DUSTMON_DEGF | [Fahrenheit] |
Temp Sensor | PEM | H1:PEM-Y_EBAY_RACK1_TEMPERATURE | [Celsius] | Heavily influenced by rack temperature, but shows the gross trend
Temp Sensor | PEM | H1:PEM-EY_TEMPERATURE_BSC10_ETMY_MON | Uncalibrated!! | Attached to BSC10 (ETMY's chamber)
Temp Sensor | SEI | H1:ISI-GND_BRS_ETMY_TEMPR | Uncalibrated!! | Inside the heavily insulated BRS enclosure
Disp. Sensor | SUS | H1:SUS-ETMY_M0_DAMP_V_IN1_DQ | [um] | -106 [um/(deg C)], LHO aLOG 15995
 | | H1:SUS-ETMY_R0_DAMP_V_IN1_DQ | [um] | [um/(deg C)] is probably roughly the same as M0
 | | H1:SUS-TMSY_M1_DAMP_V_IN1_DQ | [um] | -88 [um/(deg C)], LHO aLOG 15995
Disp. Sensor | SUS | H1:SUS-ETMY_M0_DAMP_P_IN1_DQ | [urad] | -270 [urad/(deg C)], LHO aLOG 15888
 | | H1:SUS-ETMY_R0_DAMP_P_IN1_DQ | [urad] | [urad/(deg C)] is probably roughly the same
 | | H1:SUS-ETMY_L1_WIT_P_DQ | [urad] | [urad/(deg C)] is probably less
 | | H1:SUS-ETMY_L2_WIT_P_DQ | [urad] | [urad/(deg C)] is probably even less (because it's -96 urad/(deg C) at the test mass)
 | | H1:SUS-ETMY_L3_OPLEV_PIT_IN1_DQ | [urad] | Shows pitch of the ETM, but also sensitive to temperature itself
 | | H1:SUS-TMSY_M1_DAMP_P_IN1_DQ | [urad] | [urad/(deg C)] hasn't been measured

I attach 3 sets of 10-day trends.
- The first is the most convincing, showing the FMCS temperature against the PCAL temperature sensors. This, again, is what I used to calibrate how much Bubba should change the FMCS cooling system's set point.
- The second shows how the vertical displacement of the suspensions evolved in the same manner as the PCAL sensors throughout. Unfortunately, there seems to be about a factor of two difference between the VEA temperature change and the retro-predicted change based on the [um/(deg C)] of both the QUAD and the TMTS. Not worth chasing the discrepancy.
- The third attachment shows some of the rest of the temperature sensors and temperature sensitive instrumentation, and although they're all differently calibrated and/or uncalibrated, they still show the same consistent trend of a ~2 [deg C] increase all over the VEA and even in the electronics rack. Finally, I attach a rough layout of all of the above sensors.
I've created a dataviewer template with the above channels pre-selected for future ease of trending. It now lives under version control here: /opt/rtcds/userapps/release/pem/h1/dvtemplates/EYVEA_Temperature_Trends.xml I attach the initial version, in case you don't want to remember where it lives.