Richard, Filiberto, Jim, Dave:
This week we installed the modified 18-bit DAC cards in most H1 suspension I/O chassis. All suspension systems have been upgraded except for h1sush56 (we have exhausted the first batch of cards).
Full list of systems using 18-bit DACs that are not yet upgraded:
We have four modified cards that are showing errors and need further investigation:
All of the I/O chassis except the mid-station PEM chassis have been upgraded to the new 24 V DC power supplies. Caltech sent up kits that fit in the chassis seamlessly. The only problem to report at this time is that one on/off switch did not function; it was easily replaced. Again, thank you to Steve Vass and Bob Taylor for the help.
From Monday through Wednesday we took the time to replace all of the AA/AI chassis with chassis in which the TI chip had been replaced with an AD8622 or AD8672. We had hoped to end up with AD8622 boards throughout, but a number have the AD8672 instead. The only technical difference we care about is that the AD8672 draws more current than the AD8622, so some chassis will run a little warmer than others. The entire site has now been changed out. This would not have been possible in this time frame without the help of Steve Vass and Bob Taylor, who came up to assist. Thank you.
Stuart, Greg, Dan, Jim, Dave:
Since the LDAS tape library was moved from the OSB Computer Users Room (CUR) to the Warehouse (WH) on Tuesday 12th May, h1fw0 became unstable. This came as a surprise, and we have been working over the past week and a half to understand the problem and implement a solution.
When the tape library was in the CUR, it was connected to LDAS in the LSB by fiber optics via the multimode patch panel in the CUR. In the WH, the tape library was added to the H1 DAQ Q-Logic Fibre Channel switches (FCS), so the tape traffic, along with h1fw0's traffic, is now sent via single-mode fiber to the LSB. We think this additional data traffic on these switches is the root cause of h1fw0's instability problems.
Upon further investigation, data errors were seen on one of the FCS in the LSB, which correlate with frame errors on the LDAS SATABOY, which in turn correlate with h1fw0 restarts.
For the pair of FCS (one in the MSR, one in the LSB) showing the errors, we tried a different single-mode link between the buildings and a different patch cable in the LSB. The error rate was unchanged.
When LDAS stops all access to the tape library, h1fw0 becomes stable again, so we know it is the additional traffic that is exposing the core problem.
So where do we stand now? Dan has several single-mode SFPs on order, which will be here for next Tuesday's maintenance. We plan to replace the SFPs in the FCS pair to get back to where we wanted to be on 5/12. To get through this weekend, Dan has done the following:
For the FCS pair with the glitchy link, the single-mode SFPs have been turned off. To permit packets from this FCS to reach the LSB, a multimode fiber optic link was made between the MSR FCS.
This change was made at 15:40 PDT on Wednesday 5/20, and the tape library was ramped up to full operation. h1fw0 has been stable since then (20 hours), so it looks like a good short-term solution for this upcoming long weekend.
The h1sush2a computer was shut down to allow a DAC card (D0) to be replaced in the I/O chassis, due to reports of a stuck voltage; see alog 18543. It was not possible to log in to this computer remotely, as it was not accepting new connections due to a "nf_conntrack: table full" error visible on the console. I was able to shut the computer down in an orderly manner from the console. We have not encountered the nf_conntrack error in many months; it appears when too many network connections are being made. The stuck voltage on D0 disappeared after the card was replaced and the computer restarted.
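For reference, a minimal sketch (in Python, assuming a stock Linux kernel with the nf_conntrack module loaded; the /proc paths below are the usual ones but may differ between kernels) of how the connection-tracking table usage could be checked before it fills and starts refusing new connections:

    # Minimal sketch: warn when the kernel connection-tracking table is nearly full.
    # Assumes a Linux host with nf_conntrack loaded; /proc paths may vary by kernel.
    COUNT_PATH = "/proc/sys/net/netfilter/nf_conntrack_count"
    MAX_PATH = "/proc/sys/net/netfilter/nf_conntrack_max"

    def read_int(path):
        with open(path) as f:
            return int(f.read().strip())

    def check_conntrack(threshold=0.9):
        count = read_int(COUNT_PATH)
        limit = read_int(MAX_PATH)
        usage = count / limit
        if usage > threshold:
            print(f"WARNING: conntrack table {usage:.0%} full ({count}/{limit})")
        return count, limit

    if __name__ == "__main__":
        check_conntrack()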
John, Bubba
The 3 IFO storage containers are under a constant nitrogen gas purge taken from the CP2 LN2 storage dewar boiloff.
The dew point sensors are showing some erratic behavior, so Bubba and I investigated. We determined that the pressure regulators supplied with each storage container do not operate correctly at such low inlet pressures (< a few psi): they alternately shut the flow off and then, as the pressure builds, allow flow again. The glitches shown do not reflect the air quality in the storage can, only at the sensor itself, which is directly exposed to LVEA ambient conditions on its exhaust port.
We will replace these regulators with simple rotameters in the next week or two.
Scott L., Ed P., Chris S., Cris M. (1/2 day)
5/18/15: Removed and re-hung lights at the next section north of HNW-4-052. Vacuumed support tubes and sprayed diluted bleach/water solution. Cleaned 9 meters of tube, ending 7.6 meters north of HNW-4-052.
5/19/15: Cleaned 57 meters of tube, ending 11.5 meters north of HNW-4-055. The crew also attended the dedication ceremony.
5/20/15: Scott L., Ed P., Cris M. (1/2 day). Refilled the water tank and cleaned the vacuum machines. Trip to town for propane, gasoline, and diesel. Cleaned 46 meters of tube, ending 9 meters north of HNW-4-057.
Just in case the laser trips, would the person bringing the laser back up please make a note of the status of the laser and whether or not the chillers are running. On the PSL Beckhoff computer, hit the "Stat>" button located towards the bottom right of the main screen (the one with the schematic of the laser drawn). The next screen is the System Status screen. Make a note of whether only the "Interlock OK" field, located near the top left-hand corner, is red, or whether all of the fields on the left-hand side are red. Also please note whether or not the chillers are running.
Without going into the full soap opera, the PSL chiller was replaced with the "original" chiller. This chiller was upgraded with the new vortex flow sensors. A new calibration factor for the flow sensor was entered into the chiller's memory bank and the bypass valve was manipulated to give the right flow rate through the HPO laser heads and front end laser. A log about the interlock will be submitted later once I've sorted out the messy details. JeffB, Peter, Jason
Peter, Kiwamu, Evan
In the process of trying to realign the IMC, we found that the LF OSEM on the top stage of MC3 is seemingly nonresponsive. Additionally, the left-hand OSEMs on the M2 and M3 stages have very few counts coming from their shadow sensors.
For M1 LF, the input readback is railed low, and does not respond even when the MC3 alignment slider is "moved" many hundreds of microradians in yaw. RT, in contrast, shows a clear response. The voltage and current readbacks for LF are similarly nonresponsive. It seems that they acquired their present values yesterday around 11:20 AM local time.
Peter and I looked at the HAM2 driver chassis, but did not see anything immediately wrong. Then we went to the satellite boxes. Peter found that one of the cables for the MC3 top stage was not screwed in and was hanging slightly. Fixing this did not change the behavior of LF, however.
It seems that this problem can be traced back to the ADC/DAC chassis which drives the LF OSEM. It is putting out -25 V all the time.
By physically disconnecting the DAC drive to T3, LF, RT, and SD on MC3 (thereby removing the -25 V being sent to LF), Kiwamu and I were able to see that the OSEM shadow sensors all gave reasonable numbers of counts (several thousand at least), including LF and the middle/bottom stage OSEMs.
We restarted the MC3 model, but this did not fix the issue.
As a background task over the past few weeks I've been working on a Matlab script that will provide the capability to test all suspension actuator channels. The aim is to have it available to run on a weekly or bi-weekly basis during a maintenance window, by Detector Engineers or Operators, so that we can be more proactive in our search for OSEM channels that may have become unresponsive. The test itself is very straightforward (just applying a ramping positive/negative bias to the OSEM and monitoring the sensor's response), but the logistics of running it on so many channels and maintaining a measurement list are somewhat more of a challenge. This script is currently being tested at LLO, and after completing my survey of actuators at LLO I aim to release it for LHO ASAP.
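To make the ramp-and-check idea concrete, here is a minimal sketch in Python (the actual script is written in Matlab; the read/write helpers below are simulated stand-ins for the real channel access, and the channel names are purely illustrative, not the ones the script uses):

    import time
    import numpy as np

    # Hypothetical stand-ins for the real EPICS reads/writes; here they just
    # simulate a healthy OSEM so the sketch runs end to end.
    _sim_drive = {"value": 0.0}

    def write_channel(name, value):
        _sim_drive["value"] = value

    def read_channel(name):
        # Simulated shadow sensor: nominal counts plus a response to the drive.
        return 15000.0 + 0.5 * _sim_drive["value"] + np.random.normal(scale=5.0)

    def test_actuator(drive_chan, sensor_chan, amplitude=1000.0,
                      steps=20, dwell=0.01, min_swing=100.0):
        """Apply a ramping positive/negative bias to one OSEM drive channel and
        check that the shadow-sensor readback responds by at least min_swing counts."""
        ramp = np.concatenate([np.linspace(0, amplitude, steps),
                               np.linspace(amplitude, -amplitude, 2 * steps),
                               np.linspace(-amplitude, 0, steps)])
        readings = []
        for offset in ramp:
            write_channel(drive_chan, offset)
            time.sleep(dwell)
            readings.append(read_channel(sensor_chan))
        write_channel(drive_chan, 0.0)      # always return the bias to zero
        swing = float(np.ptp(readings))     # peak-to-peak sensor response
        return swing >= min_swing, swing

    if __name__ == "__main__":
        ok, swing = test_actuator("H1:SUS-MC3_M1_TEST_LF_OFFSET",
                                  "H1:SUS-MC3_M1_OSEMINF_LF_OUT16")
        print("responsive" if ok else "UNRESPONSIVE", f"(swing = {swing:.0f} counts)")

The logistics mentioned above (looping this over every stage of every suspension and keeping the channel list current) are where most of the remaining effort lies.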
The Y end station had a Guralp seismometer set up with some clearly temporary electronics: cables strung on the floor and in front of racks, and a power supply and breakout boxes on the floor. With the work going on in the Y-end VEA to install the new ES-Driver, this setup was getting in the way and posing a tripping hazard. As the work permit run-through in the morning meeting revealed no permit for this setup, I took the liberty of disconnecting the cables and cleaning things up. The Guralp itself was not moved, but signals are no longer being acquired (if indeed they were before).
(R. Weiss, B. Abbott, C. Torrie, J. Worden, B. Gateley, G. Moreno)
We assembled one TMDS*, and it is currently being tested in the corner station mechanical room.
We are using dry air from the corner station purge air system to test the unit.
The corner station purge air will remain on until testing is complete.
*TMDS = Test Mass Discharging System.
you're gonna need a bigger c-clamp
Not possible to use a bigger clamp - there is no crane in that room.
700 Cris - LVEA
800 Cris - Out
807 Ken - LVEA working on tables
900 Jeff K. - To EY
901 Jim B. - To CER
902 Bubba - LVEA to move a 3IFO pallet
915 Richard - To CER
916 Hugh - To LVEA to clean up presentation from yesterday
919 Bob, Steve - H2 room
919 Filiberto, Andres - To EY cable pulling for low noise ESD
951 Karen, Cris - To MY and MX
1005 Sudarshan, Darkhan, Rick - To EX for PCal work
1035 Richard - out of CER
1043 Richard - LVEA to talk to Ken near HAM 6
1051 Karen - Leaving MY
1105 Nutsinee - To EY, tagging cable
1112 Dave B. - To EX
1119 Richard - Out of LVEA
1129 Jim B. - Out of CER
1148 Dave B. - Back from EX
1209 Jeff K. - Done with activities at EY
1255 Filiberto - To LVEA to help Ken
1256 Sudarshan, Darkhan, Rick - Leave EX for lunch
1307 Richard - To LVEA
1320 Marco - To EY
1352 Richard - To LVEA
1448 Jodi - To LVEA placing barcodes
1503 Jodi - out
1504 Kiwamu - To EX and EY transitioning to laser hazard
1517 Elli, Nutsinee - To LVEA to restart CO2 laser
1528 Elli, Nutsinee - out
1546 Richard - out
1548 Rick, Sudarshan, Darkhan - To EX
Elli, Nutsinee
Following LHO alog 17941 (PSL accelerometer detected 1 Hz glitches) and LLO alog 17844 (1 Hz peak seen in a magnetometer), Elli left the HWSX camera on from the night of May 16th until the morning of May 18th. I have compared and attached the spectra from two lock stretches, one where the HWSX camera was turned on (May 17th 12:00:00 UTC) and one where it was off (May 15th 12:00:00 UTC). There is no evidence of the HWS camera at the corner station coupling into DARM at ~45 Mpc.
The concern, I believe, is that this 1 Hz comb may be the cause of the 1 Hz signal that was seen on the AOMs of the CO2 laser systems (see LLO alog 16873). My understanding is that this is not the most pressing issue at this point in time, because the AOMs are not currently used for intensity stabilization of the CO2 lasers, but that capability is wanted in the near future.
The HWS in the corner station do not cause a 1 Hz feature in the spectrum; they cause glitches once per second. This is not easy to recognize in a spectrum, but it is very clear in a spectrogram with 1/4-second FFTs and an overlap of 0.9; see Josh's alog. Attached are two spectrograms of the ITMX GS13, the first from a time with the HWS off and the second with it on. The glitches are clear. The third attachment is DARM, where there is no evidence of the glitches.
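For anyone wanting to reproduce this kind of plot, here is a minimal Python sketch of a spectrogram with those parameters (1/4-second FFTs, 90% overlap) on simulated data; the sample rate, channel, and injected glitch amplitude are illustrative, not the actual GS13 settings, and the real plots were made from the recorded channel data:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import signal

    # Illustrative stand-in for a GS13 time series with once-per-second glitches.
    fs = 256                                   # sample rate (Hz), assumed
    t = np.arange(0, 120, 1 / fs)
    data = np.random.normal(size=t.size)       # background noise
    data[(t % 1.0) < (1 / fs)] += 50.0         # inject a glitch once per second

    nperseg = int(0.25 * fs)                   # 1/4-second FFT segments
    noverlap = int(0.9 * nperseg)              # 90% overlap
    f, tt, Sxx = signal.spectrogram(data, fs=fs, nperseg=nperseg, noverlap=noverlap)

    plt.pcolormesh(tt, f, 10 * np.log10(Sxx), shading="auto")
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.title("Simulated spectrogram: once-per-second glitches")
    plt.colorbar(label="Power (dB)")
    plt.show()

The short FFT segments are what make the once-per-second glitches stand out as vertical stripes; a long-average spectrum smears them into a barely visible comb.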