Robert called from EY to inform me that he was going into the VEA near the BRS.
WP 6292
Installed Safety System Controls chassis in the MSR and the LVEA chiller closet. Fiber is patched through the MSR and LVEA patch panel (next to HAM2). A new fiber was pulled from HAM2 to the LVEA chiller closet. The EtherCAT cable connecting to the EP1908 units (TwinSAFE digital inputs) was terminated. As of now, we are in a monitor-only state.
(Patrick, Gerardo)
Degassed 2 nude ion gauges in the corner station, PT170 and PT180. We had to degas twice: the first attempt was terminated too early for both gauges because we sent a second command before the 3 minutes were up, and it turns out that sending the command again stops the degassing. As usual, the pressure went up to ~10^-7 Torr for both gauges while degassing, then trended down afterward.
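For future reference, here is a minimal sketch of the "one command, then hands off" sequence; the channel name is a placeholder, not a verified H1 vacuum PV:

```python
# Sketch: start a gauge degas and wait out the full cycle.
# The PV name below is a PLACEHOLDER, not a verified channel.
import time
from epics import caput  # pyepics

DEGAS_DURATION_S = 3 * 60  # the gauge degas cycle runs for 3 minutes

def run_degas(command_pv):
    """Start a degas cycle and block until it should be finished.

    Sending the command a second time while the cycle is running
    aborts the degas (the failure mode we hit on the first attempt).
    """
    caput(command_pv, 1)               # one command to start
    time.sleep(DEGAS_DURATION_S + 10)  # then wait; do not resend

run_degas("H1:VAC-CS_PT170_DEGAS")  # hypothetical PV name
```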
Trends for Inficon gauges PT170 and PT180 (BSC7 and BSC8) over 60 days, compared to the signal from PT120B (BSC2), an MKS HPS 903 inverted magnetron cold cathode gauge.
Note that the PT120B trend has been shifted up to compare the slopes of the signals; the other trends are unchanged.
Did not receive expected Grace test alert. Did receive SNEWS test alert at 16:03 UTC. TCSY laser tripped off around 15:52 UTC, cause unknown. An attempt to add an ADC card to h1oaf brought down the Dolphin network and the TCS chillers. Bad Beckhoff terminals in corner chassis 3 were found and replaced. Currently waiting for PSL work to complete.

Peter K in PSL enclosure.
14:35 UTC Bubba to end X and end Y with Apollo to look at sensors. I set ISI config to VERY_WINDY_NOBRSXY.
14:37 UTC Hanford phone call notification of siren testing in 100K, 200E, 200W and 400 areas.
15:01 UTC Set IMC guardian to DOWN to disable MC locking per Peter's request.
15:06 UTC Karen to end Y to clean.
15:13 UTC Chris to escort pest control company to LVEA.
15:23 UTC Mark P. to CER (WP 6246).
15:31 UTC Notification of remote login (Carlos).
15:32 UTC Jeff B. to LVEA West bay to look for 3IFO part and then to HAM3 clean room curtains.
15:39 UTC Kyle to end X to run RGA scan.
15:40 UTC Jim W. to end Y to recenter BRS.
15:43 UTC Hanford phone call notification that siren testing is complete.
15:43 UTC Joe to LVEA (eye wash stations, batteries).
15:46 UTC Jason to PSL enclosure.
16:03 UTC SNEWS alert (test).
16:05 UTC Mat cleaning company through gate.
16:09 UTC Gerardo and I finished degas of PT170 and PT180, but turned off the degas too soon.
16:22 UTC Gerardo to end X (WP 6290). I restarted frozen video0.
16:32 UTC Karen done at end Y.
16:35 UTC Filiberto running fiber from HAM2 to chiller closet.
16:35 UTC Betsy to LVEA West bay to store item on shelf and then to optics lab.
16:45 UTC Kiwamu to LVEA to check TCSY front panel to diagnose laser trip. Dave to take down h1oaf frontend to add ADC card to IO chassis (WP 6287).
17:01 UTC LN2 delivery through gate.
17:06 UTC Pratt electronics delivery for Richard through gate.
17:07 UTC Betsy done.
17:10 UTC Karen to LVEA to clean.
17:12 UTC Joe done.
17:14 UTC Hugh to end stations for weekly HEPI maintenance.
17:20 UTC h1oaf frontend down. Dave to CER to add ADC card.
17:22 UTC Jeff B. done.
17:25 UTC Pest control company done.
17:45 UTC Jim W. back.
17:48 UTC Richard to CER to check on Mark and to LVEA to check on Filiberto.
17:58 UTC Dave and Jim done in CER. h1oaf took Dolphin network down.
18:07 UTC Bubba taking Apollo to LVEA to look at sensors.
18:09 UTC Joe back to LVEA.
18:10 UTC Dave and Jim removing ADC card that was added to h1oaf IO chassis.
18:29 UTC Mark and Filiberto pulling Beckhoff corner chassis 3 from rack.
18:32 UTC Ryan patching alog.
18:33 UTC Dave rebooting guardian machine.
18:39 UTC Hugh back.
18:39 UTC Joe done.
18:41 UTC Jeff restarted TCS chillers.
18:43 UTC Hugh to HAM2 to plug STS2 back in.
18:49 UTC Dave to end X.
18:53 UTC Hugh back.
18:54 UTC Chris opening rollup door for cardboard.
19:11 UTC Gerardo done.
19:12 UTC Dave back from end X.
19:38 UTC Cheryl opening/closing ALS shutters from new medm screen.
19:51 UTC Jim W. to end Y.
20:32 UTC Set ISI back to VERY_WINDY_NOBRSXY again from INIT.
20:32 UTC Hugh taking HAM2 down.
20:58 UTC Jim W. back.
21:02 UTC Mark P. to mid Y to get parts.
21:05 UTC Started degas of PT170.
21:09 UTC Kyle back.
21:10 UTC Stopped degas of PT170.
21:11 UTC Started degas of PT180.
21:15 UTC Stopped degas of PT180.
21:47 UTC Mark done fixing Beckhoff chassis 3. TCS chiller flows are back.
21:51 UTC Gerardo to LVEA to turn on TCS lasers.
21:58 UTC Filiberto to LVEA to continue WP 6292.
Compressors #1 and #2 were greased and pressure tested.
Compression test results: #1 at 135 psi, and #2 at 125 psi.
All compressors and electrical motors were greased.
Replaced the pressure relief valve for compressor #2; the valve for compressor #1 was OK and passed its test.
All compressor assemblies were run tested after service was performed.
Work performed under WP#6290.
After some confusion in understanding who did what, here is the actual history of what transpired this morning in TCS land:
~9am TCSY mystery laser-off issue (alog 31064) - chillers all reporting ok, easy restart of laser at panel by Kiwamu. I add water to TCSY and log for the day.
~Shortly after 9am CDS performs OAF model work, which does something bad to the dolphin network, sending both TCSX and TCSY chillers into FAULT.
~Dave looks at chiller front panels and finds fault, sends Jeff B. to clear fault. Jeff does so and adds another 250mL of water to chiller (didn't know I had already been there).
~12:30-1:30pm discover TCSX and TCSY lasers at 0. Chillers look ok; determine it is a Beckhoff issue, since Beckhoff is reporting errors.
~2:45pm EE fixed issue with some Beckhoff board (alog 31081 below).
Lasers back.
M. Pirello, E. Castrellon, F. Clara, R. McCarthy
Per FRS Ticket 6059 and Work Permit 6284 we investigated and repaired the bad channel and restored the unit to operation.
We initially suspected the 5V linear voltage regulator in Slow Controls Concentrator #2 (S1103451) and pulled the unit for inspection, but found that the regulator's outputs were working correctly. We replaced the suspect regulator anyway and added a heat sink to help dissipate its heat.
Upon reinstallation, most of the bad channels were working; only one set of related channels was not. We traced this failure to a bent pin on EtherCAT Corner Station Chassis #3 (S1107447). This pin corresponds to channel DIV40M, which Daniel says had not worked since its installation. The Beckhoff module associated with these signals was behaving poorly, acting as if it had an internal short. We pulled Chassis #3 and replaced Beckhoff module #11 on panel #9 with a brand new E1124 digital IO module.
After reinstallation, other problems with this chassis surfaced, so we replaced one of the computer modules; after much testing we were satisfied that the chassis would work when installed. I have attached an image showing all blocks green, as seen from the CER.
We also need to move (1) ea. EDP200 BT roughing pump from the CS Mechanical Room to Y-mid (part of a mid-term - next 6 months - effort to establish an emergency beam tube rough pumping option).
WP#6255 and WP#6293 completed. 0930 hrs. local -> Valved-in RGA turbo to RGA volume and energized filament. 1130 hrs. local -> Took scans of the RGA volume with and without cal-gases -> isolated RGA turbo from RGA volume -> combined RGA volume with X-end volume and took scans of the X-end with and without calibration gases (inadvertently dumped ~5 x 10^-4 Torr*L of Krypton, i.e. 2 hrs of accumulation @ 5 x 10^-8 Torr*L/sec, into the site) -> vented RGA turbo and removed it from the RGA hardware -> installed a 1 1/2" UHV valve in its place -> pumped the volume between the two 1 1/2" valves into the 10^-4 Torr range before decoupling and de-energizing all pumps, controllers and noise sources, with the exception of the RGA electronics, which was left energized with its fan running 24/7. Leaving the RGA exposed to the X-end, filament off and cal-gases isolated. Will post scan data as a comment to this entry within the next 24 hrs.
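For scale, the quoted Krypton dose follows from the stated rate and accumulation time; a quick check (my arithmetic, not from the log):

```python
# Accumulated gas load = accumulation rate * accumulation time
rate = 5e-8      # cal-gas accumulation rate, Torr*L/s
t = 2 * 3600     # 2 hours in seconds
print(rate * t)  # 3.6e-4 Torr*L, same order as the ~5e-4 Torr*L quoted
```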
Here are the scans from yesterday. Note the presence of amu 7, evidently "sourced" from the N2 cal-gas bottle. I will need to revisit the earlier observation that amu 7 appears when the cal-gas isolation valve used with Vacuum Bake Oven C is closed, and its baffling disappearance when the isolation valve is opened.
I have updated the SEI_CONF configuration table to more accurately reflect our experience with higher microseism. The extremes of this table (i.e. any version of very high wind and/or microseism) are still being explored as we roll into winter, but so far the nominal "WINDY" state has been sufficient up to 1+ micron/s RMS microseism and 40 mph winds. I have also made a few of the states in SEI_CONF not "requestable", mostly states that had "microseism" in the name. These states are all versions of our high microseism configuration from O1, which only worked in low winds. These states are still available, but you will have to hit the "all" button on SEI_CONF, they no longer show up on the top level drop-down.
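For context, here is a minimal sketch of how a Guardian state is typically kept off the request drop-down. The state names and indices are illustrative, not the actual SEI_CONF code, and I am assuming Guardian's `request` attribute is the mechanism used:

```python
from guardian import GuardState

class WINDY(GuardState):
    """Nominal high-wind configuration; requestable (the default)."""
    index = 10
    def main(self):
        return True

class MICROSEISM_LOW_WIND(GuardState):
    """O1-era high-microseism configuration; only worked in low wind."""
    index = 20
    request = False  # still reachable via the 'all' button, but hidden
                     # from the top-level request drop-down
    def main(self):
        return True
```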
We also might be close to getting some EPICS earthquake notifications, so that information might get included on this screen in the future.
Operators should reference Jeff's alog yesterday (31029), and my alog 30848 when trying to make decisions about seismic configuration.
Temporarily installed a 2 in. diameter, 45 degree thin film polariser in the output of the pre-modecleaner. Measured 0.342 W reflected from the polariser with the 10A-V2-SH power meter. Measured 92.5 W transmitted through the pre-modecleaner, with the polariser removed, using the L300W-LP power meter. The power stabilisation was on for both measurements. The output polarisation is calculated to be (1 - 0.342/92.5)*100 = 99.6% linearly polarised.

Using the same thin film polariser in the input beam to the pre-modecleaner, 15.0 mW was measured in reflection and 1.51 W in transmission. The power stabilisation was off for this measurement. The input polarisation is calculated to be (1 - 0.015/1.51)*100 = 99%. It is not obvious why it would be this low, other than perhaps the angle of incidence being off a little (a degree of freedom we do not have), since thin film polarisers are typically somewhat sensitive to input angle.

Another high power attenuator was installed in the beam between the two Picomotor-equipped mounts. We confirmed that we could re-lock the pre-modecleaner prior to turning off the high power oscillator to allow for modifications to the field box(es) by Daniel and Keita.

Jason/Peter
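As a quick check of the polarisation arithmetic above (a minimal sketch; the function name is mine, not part of the measurement):

```python
def linear_polarisation_pct(p_reflected, p_transmitted):
    """Percent of power in the dominant linear polarisation,
    from the power rejected by the thin film polariser."""
    return (1.0 - p_reflected / p_transmitted) * 100.0

print(linear_polarisation_pct(0.342, 92.5))  # output beam: ~99.6%
print(linear_polarisation_pct(0.015, 1.51))  # input beam:  ~99.0%
```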
I applied a bug fix suggested by John Zweizig to the demodulation routine in the GDS pipeline that reduces error due to finite machine precision. After this, it appears that the kappas as computed by GDS, especially the cavity pole, are significantly less noisy, but still not in agreement with the SLM tool (see this aLOG for reference: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30888).

Below is a table of mean and standard deviation values for the data taken from GDS, SLM, and the ratio GDS/SLM:

                 SLM mean   SLM std    GDS mean   GDS std     ratio mean  ratio std
Re(kappa_tst)      0.8920    0.0068      0.8916   0.0056         0.9995     0.0043
Im(kappa_tst)     -0.0158    0.0039     -0.0145   0.0008882      1.0013     0.0041
Re(kappa_pu)       0.8961    0.0080      0.8958   0.0057         0.9997     0.0065
Im(kappa_pu)      -0.0050    0.0056     -0.0035   0.0013         1.0015     0.0059
kappa_c            1.1115    0.0094      1.1154   0.0072         1.0035     0.0060
f_c              354.2338    2.9305    345.6435   0.7686         0.9758     0.0084

Here are the 2x2 covariance and correlation coefficient matrices between SLM and GDS:

                 Covariance                       Correlation
Re(kappa_tst)    1e-4 * [ 0.4615  0.3157 ]       [ 1.0000  0.8238 ]
                        [ 0.3157  0.3181 ]       [ 0.8238  1.0000 ]
Im(kappa_tst)    1e-4 * [ 0.1506  0.0007 ]       [ 1.0000 -0.0216 ]
                        [ 0.0007  0.0079 ]       [-0.0216  1.0000 ]
Re(kappa_pu)     1e-4 * [ 0.6387  0.3113 ]       [ 1.0000  0.6866 ]
                        [ 0.3113  0.3219 ]       [ 0.6866  1.0000 ]
Im(kappa_pu)     1e-4 * [ 0.3139 -0.0036 ]       [ 1.0000 -0.0490 ]
                        [-0.0036  0.0174 ]       [-0.0490  1.0000 ]
kappa_c          1e-4 * [ 0.8895  0.4815 ]       [ 1.0000  0.7118 ]
                        [ 0.4815  0.5144 ]       [ 0.7118  1.0000 ]
f_c                     [ 8.5876 -0.0023 ]       [ 1.0000 -0.0010 ]
                        [-0.0023  0.5908 ]       [-0.0010  1.0000 ]

Plots and histograms are attached.
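The table and matrices above can be reproduced from the two time series; a sketch assuming `slm` and `gds` are time-aligned 1-D numpy arrays of the same parameter (e.g. kappa_c):

```python
import numpy as np

def compare_series(slm, gds):
    """Mean/std of each series and of their ratio, plus the 2x2
    covariance and correlation matrices tabulated above."""
    ratio = gds / slm
    stats = {name: (x.mean(), x.std())
             for name, x in [("SLM", slm), ("GDS", gds), ("GDS/SLM", ratio)]}
    return stats, np.cov(slm, gds), np.corrcoef(slm, gds)
```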
During the windstorm yesterday, the PCal team attempted to complete end station calibrations at both ends. The calibration of EY went off without a hitch (results to come in a separate aLog). However, while setting up for the EX calibration, I dropped the working standard from the top of the PCal pylon onto the floor of the VEA. The working standard assembly ended up in 3 pieces: the integrating sphere, one spacer piece, and the PD with the second spacer piece. Minor damage was noted, mostly to the flanges of the integrating sphere and spacer pieces, where the force of the fall had pulled the set screws through the thin mating flanges. I cleaned up and reassembled the working standard assembly and completed the end station calibration. Worried that some internal damage had occurred to the PD or integrating sphere, I immediately did a ratios measurement in the PCal lab. The results showed that the calibration of the working standard had changed by ~2%, which is at the edge of our acceptable error. As a result of this accident, we are currently putting together a new working standard assembly from PCal spares. Unfortunately this means we will lose the calibration history of this working standard and will start fresh with a new standard. We plan to do frequent (~daily) ratios measurements of the new working standard in the PCal lab in order to establish a new calibration trend before the beginning of O2.
Opened FRS Ticket 6576 - PCal working standard damaged.
WP6289 Dave
During the model shutdown due to the h1oaf0 ADC problem, h1guardian0 was rebooted (it had been up for 28 days). Yesterday the INJ_TRANS node needed a restart to permit hardware injections, so we felt a reboot was in order.
All nodes came back automatically and correctly.
WP6287. Richard, Jim, Dave:
This morning between 10:18 and 11:20 PDT we installed a seventh ADC card into the h1oaf0 IO chassis for PEM expansion. Unfortunately this ADC (a PMC card on a PMC-to-PCIe adapter) has a fault which prevents the front end computer from booting. In fact, with the One-Stop fiber optic cable attached, h1oaf0 output nothing on its VGA graphics port, not even the BIOS. When the fiber was disconnected, the computer booted.
When h1oaf0 was powered up, it glitched the Dolphin network even though it had been shut down correctly. This glitched all the Dolphined front ends in the corner station, which now include the PSL. While we were removing the new ADC card from the IO chassis, I restarted the PSL models. On the second h1oaf0 restart the PSL was again taken down. At that point we restarted all the corner station models (starting with the PSL).
We did make two changes to the h1oaf0 IO Chassis which we left in:
WP6288 Dave, Jim:
We cleanly power cycled h1iscex and its IO chassis this morning between 11:45 and 12:06 PDT. We were not able to reproduce the slight offset on an ADC channel (see attached). Note this channel is actually only seeing ADC bit-noise, so the offset is at the micro-volt level.
Sheila, Jeff, Ed
This was an attempt to fix the drop in green arm power that happened last Sunday (alog 30884).
Since it didn't work, operators will continue to see that green power from the X arm is low.
If this can't be fixed, we can just rescale the normalization.
BRSY has been continuing its slow drift off of the photodiode and was about 200-300 counts from the edge, so this morning I went to EY to try to recenter it. I think I was successful, but it will take a couple of hours to tell. Right now it is still rung up pretty badly, so we will need to wait for it to damp down on its own a bit before trying to re-engage it. For now, operators should use one of the seismic configurations that doesn't use BRSY.
Looks like BRSY is closer to center now (at ~ -3000 counts) than before, but given the current drift of ~1500 cts/week, I didn't get as much margin before the next adjustment as I'd prefer. Will probably have to do this again in ~2 months.
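A rough timescale check on that estimate (the distance to the photodiode edge below is my assumption, not a measured number):

```python
drift = 1500           # observed drift rate, counts/week
margin = 12000         # ASSUMED usable counts before the PD edge
print(margin / drift)  # = 8 weeks, i.e. roughly the ~2 months quoted above
```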
Remember, it will probably drift up because of the slow thermal equilibration over the next 1-2 days, probably ending up above 3k counts. I think that is very good. Good job, you have mastered the BRS!
J. Kissel

Admiring the work of the SEI and ASC teams: we've just lost lock after a really impressive lock stretch in which we had ~40 mph winds, ~70th percentile microseism, and a 5.4 mag earthquake in the horn of Africa, and survived. It would be most excellent if DetChar could compare amplitudes of ISC control signals, check out the beam rotation sensor tilt levels and the ISI platform sensor amplitudes, take a look at optical lever pitch and yaw compared with ASC signals, etc.

Start: Oct 31 2016 16:15:05 UTC
End: 17:37-ish UTC
Winds and some ground BLRMS (showing microseism and the earthquake arrival) for this lock stretch. We survived at least one gust over 50mph before losing lock. No one changed seismic configuration during this time.
For the record, the units of the above attached trends (arranged in the same 4-panel format as the plot) are:
([nm/s] RMS in band) [none]
([nm/s] RMS in band) [mph]
Thus,
- the earthquake band trend (H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M) shows the 5.3 [mag] EQ peaked at 0.1 [um/s] RMS (in Z, in the corner station, between 30-100 [mHz]),
- the microseism (again in Z, in the corner station, H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M) is averaging 0.25 [um/s] RMS between 100-300 [mHz] (which is roughly average, or 50th percentile -- see LHO aLOG 22995), and
- the wind speed (in the corner station) is beyond the 95th percentile (again, see LHO aLOG 22995) toward the end of this lock stretch, at 40-50 [mph].
Aside from Jordan Palamos' work in LHO aLOG 22995, also recall David McManus' work in LHO aLOG 27688, which -- instead of a side-by-side bar graph -- shows a surface map. According to the cumulative surface map, with 50th percentile winds and 95th percentile winds the duty cycle was ~30% in O1. So this lock stretch is not yet *inconsistent* with O1's duty cycle, but it sure as heck-fy looks promising.
I got a notification that not all nodes arrived. Sheila had a look at the configuration, and it seems that the BRS is turned off and things look ok.
I switched to VERY_WINDY_NOBRSXY. There are two people out at end stations and both end station lights are on. No answer on the phone.