h1seih23 filled its internal connection tracking table yesterday and had to be rebooted. Here is the current list of conntrack counts for all front ends as of 12:14 PDT (the table fills at 65,536):
h1psl0 5,382
h1seih16 4,310
h1seih23 4,329
h1seih45 5,332
h1seib1 3,293
h1seib2 3,296
h1seib3 3,296
h1sush2a 5,400
h1sush2b 2,275
h1sush34 4,355
h1sush56 4,350
h1susb123 4,372
h1susauxh2 2,266
h1susauxh34 2,263
h1susauxh56 2,261
h1susauxb123 2,266
h1oaf0 6,400
h1lsc0 4,345
h1asc0 5,378
h1pemmx 2,231
h1pemmy 2,241
h1susauxey 2,267
h1susey 3,326
h1seiey 3,292
h1iscey 5,368
h1susauxex 2,266
h1susex 3,323
h1seiex 3,296
h1iscex 5,366
Since Fil has just put the Xend ALS fiber in a [something], I tweaked the polarization such that H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT is at 9% (it started at 22%). Also, yesterday Daniel and I tweaked up the Yend ALS fiber polarization.
Rebooted the h1seih23 computer because network connectivity had failed due to the nf_conntrack table full error.
Vac
Comm
SEI
SUS
CDS
Fac
No safety meeting today.
Summary of CDS maintenance model and DAQ restarts. Upgraded DAQ to RCG2.9.4. New SEI models. Upgraded Mid Station PEM to RCG2.9.4. DAQ restart to support upgrades.
* = unexpected restart
model restarts logged for Tue 07/Jul/2015
2015_07_07 00:30 h1fw1*
2015_07_07 09:49 h1nds0
2015_07_07 10:37 h1hpiham1
2015_07_07 10:42 h1hpiham2
2015_07_07 10:42 h1isiham2
2015_07_07 10:48 h1hpiham3
2015_07_07 10:48 h1isiham3
2015_07_07 10:56 h1hpiham4
2015_07_07 10:57 h1isiham4
2015_07_07 11:08 h1hpiham5
2015_07_07 11:08 h1isiham5
2015_07_07 11:13 h1isiham6
2015_07_07 11:15 h1hpiham6
2015_07_07 11:25 h1hpiitmy
2015_07_07 11:27 h1hpibs
2015_07_07 11:27 h1isiitmy
2015_07_07 11:29 h1isibs
2015_07_07 11:33 h1hpiitmx
2015_07_07 11:33 h1isiitmx
2015_07_07 11:38 h1hpietmx
2015_07_07 11:39 h1isietmx
2015_07_07 11:43 h1hpietmy
2015_07_07 11:43 h1isietmy
2015_07_07 11:52 h1broadcast0
2015_07_07 11:52 h1dc0
2015_07_07 11:52 h1fw0
2015_07_07 11:52 h1fw1
2015_07_07 11:52 h1nds0
2015_07_07 11:52 h1nds1
2015_07_07 12:11 h1fw0*
2015_07_07 12:31 h1fw0*
2015_07_07 13:06 h1ioppemmy
2015_07_07 13:06 h1pemmy
2015_07_07 13:07 h1ioppemmy
2015_07_07 13:08 h1ioppemmy
2015_07_07 13:09 h1ioppemmy
2015_07_07 13:10 h1ioppemmy
2015_07_07 13:11 h1ioppemmy
2015_07_07 13:12 h1pemmy
2015_07_07 15:54 h1ioppemmx
2015_07_07 15:54 h1pemmx
[Stefan, Jenne] Since the Yarm gate valves were opened this afternoon (but the Xarm is still closed), we aligned the Yarm to the green beam, and then aligned the input IR pointing to the Yarm. (We assumed that the BS pointing was fine from yesterday.)
* We used the ITMY baffle PDs to set the TMS pointing for the green input, then used the green WFS to align the Yarm. We touched PR3 a bit to keep the green transmission at the corner on its camera.
* We used PR2 and IM4 to align the IR beam to the arm.
* We then aligned PRX, followed by SRY. After that, we started on the SRY work that I've already reported on.
[Stefan, Jenne] We were informed that the SRY lock acquisition step of initial alignment was flaky, and could get into a state of ringing up the SR optics. The solution was two-fold:
(1) We increase the input power to 10 W to improve the SNR on ASAIR_A_RF45_Q. (After the ASC offload step, the power is returned to the original 2 W.)
(2) We utilize the front-end fast triggering, for both the output and the FM4 integrator.
For item (1), we did a little bit of finagling in the guardian script to ensure that the value of H1:PSL-POWER_SCALE_OFFSET matches H1:IMC-PWR_IN_OUTMON. This is required so that we don't lose lock (due to de-triggering) when we are changing the power levels. Since we're using H1:LSC-ASAIR_LF_NORM_OUT for triggering in item (2), if the value in H1:PSL-POWER_SCALE_OFFSET (which does the "norm" in H1:LSC-ASAIR_LF_NORM) does not match reality (as approximated by H1:IMC-PWR_IN_OUTMON), the loop will often de-trigger even when it shouldn't, which is why those values need to match.
For item (2), we changed the SET_SRY guardian to leave the SRCL input always on, and to use H1:LSC-ASAIR_LF_NORM for triggering. We found that 50 cts up and 30 cts down were good thresholds. Since only the outputs of the LSC filters are triggered, we cannot leave the FM4 integrator on unless the loop is engaged. We therefore use the filter module triggering (which was already triggering FM1, a +6 dB gain) to also trigger the FM4 integrator. Since the integrator's oomph is needed to acquire the lock, we use a trigger delay of 0 seconds in the filter module (it used to be 0.2 seconds).
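For reference, the separate up/down thresholds amount to a hysteresis (Schmitt-trigger) comparator on the trigger signal. A minimal Python sketch of that logic (this is ours, not the actual front-end or guardian code; only the 50/30 ct thresholds come from the entry above):

```python
# Hedged sketch of the up/down trigger-threshold logic described above
# (not the actual front-end code). Two thresholds give hysteresis, so a
# signal hovering near one level cannot chatter the trigger on and off.

def trigger_state(value, was_on, up=50.0, down=30.0):
    """Return the new trigger state for one sample of the trigger signal
    (here, H1:LSC-ASAIR_LF_NORM)."""
    if was_on:
        return value > down   # stay on until the signal falls below 'down'
    return value > up         # turn on only once the signal exceeds 'up'
```

With these thresholds, a signal sitting at 40 cts simply keeps whatever state the trigger already had.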
Nic, Evan
The frequency noise in CARM is limited in part by the dark noise of REFLAIR9I between 30 Hz and 1 kHz.
First, an explanation of the plot traces:
If the CARM loop is sensing-noise limited, the dark noise (and the in-lock shot noise) of REFLAIR9I should appear on REFL9I. The CARM UGF is about 15 to 20 kHz, and the loop gain is >30 dB below 1 kHz. Additionally, the white(ish) noise in REFL9I from 30 Hz to 1 kHz strongly suggests that we are seeing some kind of sensing noise. So we believe that the sensing noise from REFLAIR9I is indeed being impressed onto the CARM loop, and is the dominant contributor to the OOL noise between 30 Hz and 1 kHz.
We would like to additionally measure the shot noise as seen in REFLAIR9I (with no lock), to see how large it is relative to the REFLAIR9I dark noise. Either way, it is probably advantageous to switch control of CARM to REFL9I.
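As a back-of-envelope check of the numbers above, here is a Python sketch assuming (our assumption, not stated in the entry) a simple 1/f open-loop shape from the quoted UGF; the function name is ours:

```python
import math

# Hedged estimate: if the CARM open-loop gain fell off as 1/f from the
# quoted 15-20 kHz UGF, the in-loop suppression at lower frequencies would
# be 20*log10(ugf/f). Real loops roll up faster below the UGF, so this is
# only a conservative floor, consistent with the >30 dB quoted above.

def suppression_db(f_hz, ugf_hz):
    """Loop-gain suppression in dB for an assumed 1/f loop shape."""
    return 20.0 * math.log10(ugf_hz / f_hz)

# e.g. suppression_db(1e3, 15e3) gives about 23.5 dB at 1 kHz
```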
Around 15:37 PDT I noticed my CDS overview MEDM was not connecting to the models running on h1seih23. The control room TV MEDM screen was not showing this problem, but new MEDM connections were not being established. I was also unable to ping or ssh into h1seih23.
h1seih23's console is reporting many errors (several per second) of the type:
nf_conntrack: table full, dropping packet.
and sometimes
net_ratelimit: 66 callbacks suppressed
A quick Google search shows that the internal IP connection tracking table has reached its limit.
Over time new MEDM connections are actually being established. Guardian is not reporting any issues with its connections.
So we are thinking that new channel access connections are flaky, but established ones (like Guardian's) will continue to work. The IFO team would like to delay a restart of h1seih23 until tomorrow unless it becomes a major issue for IFO locking.
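For the record, a hedged sketch (ours, not part of the log) of how a front end's conntrack head-room can be checked: the /proc paths below are the standard Linux netfilter locations, and the fallback numbers are today's h1seih23 figures from the table above. Raising the limit would be `sysctl -w net.netfilter.nf_conntrack_max=<N>` (persisted in /etc/sysctl.conf), though the right fix may be finding who is opening all those connections.

```python
from pathlib import Path

# Hedged sketch: read the kernel's conntrack count and limit from the
# standard netfilter sysctl files; fall back to sample values (h1seih23's
# count from the 12:14 PDT table, and the 65,536 limit quoted above) when
# run on a machine without netfilter's /proc entries.
def conntrack_fill(count_path="/proc/sys/net/netfilter/nf_conntrack_count",
                   max_path="/proc/sys/net/netfilter/nf_conntrack_max"):
    """Return (count, limit, percent_full)."""
    try:
        count = int(Path(count_path).read_text())
        limit = int(Path(max_path).read_text())
    except (OSError, ValueError):
        count, limit = 4329, 65536   # sample values from this entry
    return count, limit, 100.0 * count / limit
```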
J. Kissel, L. Prokhorov, T. Shaffer
Another episode in the saga of the High Voltage ESD Drivers. Though we've not been able to identify why, the high voltage for both end stations' ESD High Voltage Drivers was found OFF. We've queried the usual suspects, and those who were around, and no one has any idea why they might have been *turned* off, so we suspect that they tripped off from all of the vacuum and electronics work at the end stations today (the best aLOGs for which are the sparse ops log, LHO aLOG 19462, and VE logs 19472 and 19471). The ETMY driver remains railed negative. We've tried all of the possible button mashing and cable plugging and unplugging we've either done before or heard has been tried before, with no success. We request the assistance of the CDS team in the morning.
---------
As you know, we're still in the odd state that the ETMY high-voltage driver is controlled by the new(ish) low-voltage driver, and the ETMX high-voltage driver is controlled via a temporary Beckhoff setup. Therefore, the start-up procedure for each is different.
ETMX
- Flip ON the high-voltage supplies' power switch in the racks near the entrance of the building (you'll hear a few-second loud continuous beep; wait for the initialization procedure to complete).
- Hit the black "V SET" button, then enter 430 V (i.e. hit 4, 3, 0, then enter).
- Hit the black "I SET" button, then enter 80 mA (i.e. hit 8, 0, then enter).
- Hit the red "OUTPUT ON/OFF" button. You should see the volts go to ~430 V, and the current go to ~3 mA.
Now, head out to the driver in the XVEA. You should see both the 430 V and the low-voltage (I don't remember what it is, 12 or 15 or something) lights on the front panel showing bright green. You can either turn on the driver manually by hitting the red "start" button, or use the temporary remote system. You'll know the driver is ON when the light above that button goes from OFF to bright RED.
To use the temporary remote system, via a terminal on a workstation, toggle one of the remote switches:
caput H1:ISC-EXTRA_X_BO_4 0
caput H1:ISC-EXTRA_X_BO_4 1
then turn on the second:
caput H1:ISC-EXTRA_X_BO_3 1
Once you see all of the lights on on the front panel of the high-voltage driver, make sure to confirm that you see a change in the readbacks when turning requested signals ON and OFF (i.e. turn the BIAS "DC" ON and OFF, and turn an OFFSET in each of the quadrants UL, LL, UR, and LR ON and OFF).
ETMY
(This is the same as ETMX)
- Flip ON the high-voltage supplies' power switch in the racks near the entrance of the building (you'll hear a few-second loud continuous beep; wait for the initialization procedure to complete).
- Hit the black "V SET" button, then enter 430 V (i.e. hit 4, 3, 0, then enter).
- Hit the black "I SET" button, then enter 80 mA (i.e. hit 8, 0, then enter).
- Hit the red "OUTPUT ON/OFF" button. You should see the volts go to ~430 V, and the current go to ~3 mA.
(This is different from ETMX)
- Head out to the YVEA (so you can see the front panel of the ESD driver, because these first few steps rarely work), and open the BIO screen of the ETMY SUS.
- Hit the big beige RESET button in the bottom right corner. This *may* toggle the High Voltage Driver on and off, and you *may* regain functionality, but likely not.
- Several times now, the driver has been found to be "railed negative," in which the primary symptom is that the readbacks for every channel show about -15700 [ct], and the channels are unresponsive to changes in the requested drive signal.
Here's what has been tried in the past (ranging from "that's probably it" to "there's no reason that would have worked anyway, but we're grasping at straws"), and so what I've tried this evening:
- Manually turning the driver ON and OFF with the START button on the front, with the REMOTE RESET cable and INPUT cable still plugged in.
- Unplugging the REMOTE RESET cable and manually turning the driver ON and OFF with the START button (INPUT cable still plugged in).
- Plugging the REMOTE RESET cable back in, and trying both to remotely and manually turn the driver ON and OFF.
- Unplugging the REMOTE RESET and INPUT cables, and manually turning the driver ON and OFF.
- Toggling all of the low-voltage driver binary I/O switches (i.e. the HI/LO VOLTAGE and HI VOLT DISCONNECT switches).
None of these were successful this time around.
Opened FRS Ticket #3284 (https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=3284)
This seems to be a problem when the HV is removed and the system is not powered down or reset. Not sure why. At EY I powered the unit off, removed the DAC cable, powered the unit on, re-attached the DAC cable, and all seems to be working.
Attached is an image of what Richard means by "the DAC cable," which on the front panel is marked as "PREAMP" and "INPUT." As mentioned above in my debugging procedure, I had tried unplugging this cable, hitting the start/stop button, then replugging the cable, but this did *not* work for me. But I *must* have done something different than Richard, because he says that this method is the most reliable for fixing this problem. He did mention that he *did not* unplug the REMOTE RESET cable, whereas I had already had the REMOTE RESET cable unplugged (from other attempts) when I performed the power cycle. Maybe the REMOTE RESET cable needs to be plugged in when unplugging the PREAMP INPUT cable and hitting the start/stop button.
Elli, Nutsinee
Today we moved the ITMY HWS periscope to fix the return-beam clipping issue (it was clipped at the lens between the two periscope mirrors). I lost the green beam during the alignment (ITMY misaligned), so I left it there for now. The return beam seems to be clipping at the edge of the first iris (the green came right through the center, though). *Should* be a quick fix next time (don't I always hope that?).
Scott L., Ed P., Rodney H.
7/6/15
Beginning this week we increased our crew size by one man. Rodney Haux previously worked for LIGO and has graciously agreed to return and assist with the beam tube cleaning. Monday was spent relocating all equipment to Y-End, where we will begin cleaning and moving toward the corner station. The lights were suspended and fans located for air movement. The crew started vacuuming support tubes and capping them as they were cleaned. The 1-ton truck was running extremely rough, and it was discovered that the air box was 95% blocked with debris from mice. We were able to remove enough of the material to get the truck down there; however, we may need a new air box, as this one is badly damaged with rodent excrement. I plan on picking up a new air filter today.
7/7/15
Cleaned 59 meters of tube, including the bellows in those sections, ending 10 meters east of HSW-2-096. I recently purchased some cooling vests for the crew, which seem to be working relatively well. The ice packs only last 3-4 hours at most and then need to be replaced with another set. The triple-digit temperatures are taking a toll.
Chris S. (Joe D. 50%) 500 meters of the X-Arm beam tube enclosure now have metal strips installed on the upper portions of each joint. We started at the bridge and are working northward. This has been a slow process, in part due to the triple-digit heat we have been experiencing.
While at EX with Jeff K. and Leonid for an ESD issue (alog to come), I restarted the code for the BRS since it crashed during ER7. As expected it was very rung up, so I turned the damper off to let it ring down naturally over the week.
Valved out Turbo pump from HAM6 at 11:45 am; ion pump is maintaining pressure, currently at 4.5 x 10^-6 Torr.
Turbo pump + cart still running.
The 18-bit DAC card has been removed from the h1pemmx I/O chassis to update its firmware. The h1ioppemmx and h1pemmx models were modified to not include the 18-bit DAC.
All Times in UTC:
14:45 Jeff B out to LVEA to move cleanrooms
15:15 Andres out to LVEA to join Jeff
15:26 Karen and Christina to EY
15:36 Joe into LVEA to check on forklift batteries, fire extinguishers, eye-wash stations, etc.
15:45 Hugh into the Bier Garden
15:47 Kyle to EY
15:52 Bubba going to MX and EX
15:56 Karen and Christina leaving EY
16:07 Hugh out of LVEA and heading to EX to replace T240 with STS2. Back at 17:59.
16:16 Fil out to Ends to do P-Cal power (both) and magnetometer power for Jordan at EY
16:26 Betsy and Travis out to the LVEA to check for post-vent fodder.
16:39 Andres out
16:46 Jeff B out
17:02 Christina & Karen leaving EX
17:25 Restarting all SEI Models (Kissel)
17:30 Praxair called in to say a truck will be on-site in 15min (for "379" which is MX's CP6) & he mentioned that another truck will be heading to "380" (which is EX's CP8).
17:40 Noticed Beckhoff at EX went down, need to figure out how to remote login and restart.
17:53 pemmx rcg upgrade & pulling a card as well (Dave)
18:05 Continuing Vent clean up in the Cleaning Room (Betsy)
18:06 Heading into LVEA for hose (Joe)
18:10 TCS Chassis Install at EX & then to EY (Sudarshan, Vinny, & Jordan)
18:20 Gerardo said to keep an eye on EY pressure---any alarm for pressure increase
18:22 Gerardo on top of HAM6 (for how long?)
18:27 Kyle opening GV5 & GV7
18:38 Sudarshan turning on P-Cal LASER at EX
18:48 Fil reported back from Ends
18:49 Joe into LVEA
18:50 DAQ restart
19:13 Sheila heading to EX to restart slow controls computer
19:28 Kyle out of LVEA. GV 5,7 are now opened.
19:35 Sheila back from EX
19:48 Fil called from EX. Continuing work for PEM
19:51 Sudarshan back from EX
20:03 Dave re-booting something at EY
20:30 Kyle, John & Gerardo to EY to valve in NEG and valve out Turbo
21:15 John called from EY. Arm is open and gauges are reading inconsistently (2 orders of magnitude).
21:19 Jordan and Vinny to EY WP 5336
21:20 Katie to EY, magnetometer calibration.
21:21 Fil called from EY. While doing cabling, accidentally disconnected 12V oscillator source.
22:11 Nutsinee out to HAM4 to adjust alignment on HWS table
22:44 Dave and Jim at MX. Took PEM down to remove 18bit DAC.
22:47 Elli out to HWS table to assist Nutsinee
22:48 Jeff K, TJ and Leo to EX to debug ESD issue.