Thomas Vo, Mitchell Robinson, Tyler Guidry, Lisa Austin
Arm Cavity Baffle for ETM-X (BSC9): assembly complete and ready for balancing. For balancing, the Extender Shelf distance, the Sliding Weights, and the magnet damping spacing must all be set.
SR2 Scraper Baffle ready for installation in H1-HAM4; needs to be packaged for transport from the cleanroom. [pic3]
SR2 Scraper Baffle ready for packaging and storage for the 3rd IFO (HAM4). [pic2]
H1-HAM5 SR3 AR, SR3 HR, and SRM baffles ready for installation; need to be packaged for transport from the cleanroom. [pic1]
H1-HAM4 SR2 AR and Hartmann baffles ready for installation; need to be packaged for transport from the cleanroom. [pic4]
The MSR UPS unit did not log any power problems yesterday, but h1susb123 froze up completely and all Dolphin-connected front end computer models crashed. After previous power glitches we have seen models that appeared to be running but were not actually working correctly until their computer was power cycled, so this morning we power cycled all H1 front ends to be safe.
h1seih23 showed an initial IRIG-B/DuoTone error which went away after about 10 minutes; this phenomenon has been noted at LLO as well.
At Christina's request, the PSL models were burt restored to 19:00 on 29th August rather than using the safe.snap.
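For reference, a burt restore just writes the recorded channel values back over EPICS. A minimal Python sketch of the idea (the real restore of course uses the BURT tools themselves; pyepics, the snapshot filename, and the exact header markers here are my assumptions):

    from epics import caput

    SNAP_FILE = 'h1psl_burt_130829_1900.snap'  # hypothetical filename

    with open(SNAP_FILE) as f:
        in_header = False
        for line in f:
            line = line.strip()
            # Skip the BURT header block and blank lines.
            if line.startswith('--- Start BURT header'):
                in_header = True
                continue
            if line.startswith('--- End BURT header'):
                in_header = False
                continue
            if in_header or not line:
                continue
            # Each data line is: <PV name> <element count> <value>
            parts = line.split(None, 2)
            if len(parts) != 3:
                continue
            pv, _count, value = parts
            try:
                value = float(value)
            except ValueError:
                pass  # leave string-valued PVs as strings
            caput(pv, value)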
Vincent had made an INI change to h1hpiitmy, so I restarted the DAQ to capture the new configuration.
The IOP model for the X1 triple test stand has been restarted following the power glitch Thursday evening. The DAQ was restarted as well. The other staging building test stands are also OK.
As a result of the Thursday evening power glitch, all computers on X1 DTS have been restarted. The system is now functional.
h0digivideo0 and h0digivideo1 both restarted last night, as they are not on UPS power. Before restarting the camera IOCs and the camera code (h1cam12 on h0digivideo1 only, as it is the only camera currently installed), I took the opportunity to run the Ubuntu OS updates for both servers and reboot them.
Since it seems like we have done all we can (although Richard is game to look at a few more things), we have inserted a -1 gain in the Output Filter bank of HAM6 V1. This is not standard and none will rest peacefully with this pea under our mattress.
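For the record, flipping the sign amounts to writing -1 to the filter module gain over EPICS. A minimal pyepics sketch (the channel name below is my guess at the usual HPI naming convention and may not be the actual channel):

    from epics import caput, caget

    # Hypothetical channel name for the HAM6 V1 output filter gain.
    GAIN_CHANNEL = 'H1:HPI-HAM6_OUTF_V1_GAIN'

    caput(GAIN_CHANNEL, -1.0)  # invert the drive sign
    print(GAIN_CHANNEL, '=', caget(GAIN_CHANNEL))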
Found the problem--Installation error!
See Cyrus's entry below (7652).
The LVEA ethernet switch sw-lvea-aux0 was inadvertently powered off in the LVEA electronics room when the AC power cord supplying the switch was knocked out of the wall socket at about 08:40 PDT. This also affects sw-lvea-aux1 which is PoE powered and supplies network connectivity to (mostly) the PSL. I corrected the cord issue, and both switches were back up and operational by 09:40 PDT, downtime therefore was about an hour. Patrick has been alerted to restart any dust monitors that are reliant on the network connections through this switch. Long term this switch will be DC powered once the required 48V source supplies are in place, and this weakness due to the 'temporary' AC cord will go away.
(On behalf of Gerardo.)
The power glitch last night during the evening storm caused the air bake oven controller to do...something unknown. It appears that the controller went into STBY mode after the power cycled and thus likely quit applying heat to the ITM06-PUM. So, the ITM06-PUM prism cure profile will now look like this:
Prism cured at room temp for ~24-48 hours (depending on which prism).
Mass loaded into oven and 2 hour heat ramp to 34 degC started at 5pm THUR the 5th.
Power glitched at ~6pm, so 34 degC was likely not reached.
(So, more room temp curing - another 16 hours)
Restarting the controller at ~10am FRI the 6th. Will observe the controller today and hope for no more storms.
Found that all computers were running except for h1susb123, which was frozen and unresponsive in every way. Power cycled h1susb123; almost all models started when it returned. Had to start h1susprm and h1sussrm manually. Could not clear the bad DAQ status on h1sush2b or h1seib1; finally power cycled both computers, and they came back up in a good state. ALL SYSTEMS NEED TO BE BURT RESTORED! Vincent will burt restore itmy and bs, as he has a specific time which he considers good.
After getting my home internet connection back I was able to log in remotely and take a look at the CDS systems.
The DAQ survived the storm; it is on UPS power.
Of all the front ends, those on the Dolphin networks are down. I tried to get h1seib1 back, but was only partially successful (models started but their DAQ data is bad). I have stopped the HPI and ISI ITMY models.
The Vacuum Controls system looks good, it is on UPS power.
Web, alog, wiki, login, auth services are all good.
ITMX main chain tests, following its alignment today, won't start before tomorrow, when the front ends will be up and running again.
The power glitched visibly in the control room for half a second. We lost the frame builder immediately, and then all the front ends slowly but surely crashed in its wake. And so ends a night of work at LHO. We won't begin to resurrect things until the storms forecast for this evening pass. We really should have the front ends and related systems on some sort of UPS, at least to withstand 10 seconds without power.
You need enough power to keep up all the DC power supplies and the lasers. We at LLO CDS are all for such backup power; the issues are the cost and the required support. Of course, we are already very expert at resets here on the stormy south coast.
Following on Filiberto's upgrade of the M2 driver boxes on PRM and SRM per alog 7630, I updated the h1susprm and h1sussrm models to match, piggybacking on his work permit 4117:
* I changed the PRM and SRM blocks at the top level to use the MC_MASTER part (as for MC2 which was modified earlier) rather than HSTS_MASTER.
* Per Stuart's suggestion, I terminated the three new outputs MC_X_M*_OUT on the new part (rather than feeding them into an IMC block as for MC2).
* I copied the matching M2_COILOUTF filter definitions from the latest copy of L1SUSMC2.txt in the SVN after checking that they corresponded to LLO alog 4495. (Ideally I would have gotten them from H1SUSMC2.txt, but it appears we've never updated them there. I held off doing that at this time because it will need a separate WP and because there's a risk of breaking the IMC locking if something is counting on the existing poor compensation, but we should get to it soon, after we confirm that it's working for PRM and SRM.)
* I rebuilt and restarted both models. h1sussrm came up with no problems except for a few bad entries in the safe.snap, which I corrected. (There were lines like "H1:SUS-SRM_M1_DAMP_L_STATE_GOOD 1 H1:SUS-SRM_M1_DAMP_L_STATE_GOOD 100676356" which were hangovers from a faulty script used at one time. I deleted the first half of each line. The PRM safe.snap had the same issue, but BURT didn't complain (see next item). I fixed it in the same fashion anyway.)
* Initially the PRM model showed a red DC bit and many of the EPICS values were not properly initialized. I realized after posting the initial version of this alog that I hadn't saved after the last edit, so the new block instance was still called MC_MASTER, and all the channel names were wrong. I saved the model, rebuilt and restarted it, and this time the DC bit was green. I'll follow up with Dave Barker tomorrow to see if there's any corrective action needed re the MC_MASTER channels that were being written temporarily. The hourly backup for 16:00 is corrupt but the 15:00 and 17:00 ones are good.
* I committed the new models, filters and safe.snap files.
My fix of yesterday for the dud lines in the PRM and SRM safe.snap files wasn't quite right - I had deleted the erroneous second copy of the channel names, but I should have left the "1":
H1:SUS-PRM_M1_DAMP_L_STATE_GOOD 1 100676356
H1:SUS-PRM_M1_DAMP_P_STATE_GOOD 1 113258548
H1:SUS-PRM_M1_DAMP_R_STATE_GOOD 1 113258548
H1:SUS-PRM_M1_DAMP_T_STATE_GOOD 1 113258548
H1:SUS-PRM_M1_DAMP_V_STATE_GOOD 1 113258548
H1:SUS-PRM_M1_DAMP_Y_STATE_GOOD 1 113258548
H1:SUS-SRM_M1_DAMP_L_STATE_GOOD 1 100676356
H1:SUS-SRM_M1_DAMP_P_STATE_GOOD 1 113258548
H1:SUS-SRM_M1_DAMP_R_STATE_GOOD 1 113258548
H1:SUS-SRM_M1_DAMP_T_STATE_GOOD 1 113258548
H1:SUS-SRM_M1_DAMP_V_STATE_GOOD 1 113258548
H1:SUS-SRM_M1_DAMP_Y_STATE_GOOD 1 113258548
I committed the new fix.
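In case anyone else hits the same dud lines, here is a minimal Python sketch of the cleanup (the filenames are hypothetical; the pattern matched is the repeated channel name shown above):

    def fix_dud_lines(path):
        """Rewrite dud 'PV 1 PV value' lines as 'PV 1 value' in a safe.snap file."""
        fixed = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                # A dud line repeats the PV name: "PV 1 PV value".
                if len(parts) == 4 and parts[0] == parts[2]:
                    line = '%s %s %s\n' % (parts[0], parts[1], parts[3])
                fixed.append(line)
        with open(path, 'w') as f:
            f.writelines(fixed)

    fix_dud_lines('h1susprm_safe.snap')  # hypothetical filenames
    fix_dud_lines('h1sussrm_safe.snap')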
- 8:40, Corey to squeezer area to retrieve items for Sheila, out by 8:50 am.
- 8:50, Hugh to HAM6, sensor check up, out by 9:20 am.
- 8:50, Apollo to start moving clean room above BSC3, installation aborted, see Bubba's entry.
- 9:10, Filiberto to H1 electronics room, R&R chassis.
- 9:15, Thomas to test stand area, assembly of SLC baffles.
- 9:45, Richard and Ken to outer stations (mids and ends), removal of weather stations from the roofs.
- 11:20, Betsy and Jason to ITMX test stand, to do stuff.
- 11:32, Dale to test stand to take photos; mission aborted ("not as cool" due to metal PUM).
- 14:58, Dale and ??? to "take a look out on the floor", out within 20 min.
Alarms and Reboots:
- End X instrument air alarmed twice, low value of 58.94.
- CDS alarms for h1oaf, h1lsc and h1asc should be disabled for now, as long as these models are not being used.
- SRM and PRM started up by Mark B @ 15:45.
We made an attempt at placing a clean room over BSC 3, only to realize that the work platform configuration was incorrect. This required relocation of the E-module from inside the beer garden to the east side of BSC 3 and some slight modification of the work platform. It also required relocating the gowning room from the west side to the east side of the small clean room currently being occupied by SUS. Tyler assisted Lisa A. with assembly for most of the day.
I have powered off the old VME boot servers in the MSR, as they are no longer needed. The main consequence of this is that the continuous beeping from the failed RAID array has now stopped, so any new beeping or other alert sounds coming from the MSR should be investigated.
As Keita wrote here, the TMSX initial alignment is done.
I need to catch up on posting my TMSX pictures, so here are my ResourceSpace collections (date in title):
LHO TMSX 4Sept2013 - IAS alignment, upper mass cabling, and pitch balancing
LHO TMSX 30Aug2013 - reattaching ISC cables to ISC table, optics covered with Alpha wipes, and offse
LHO TMSX 30Aug2013 - how to turn the telescope 180 degrees, and move to the test stand from the lab
LHO TMSX 26Aug2013 - upper structure move and first attempt at cable routing
LHO TMSX 21Aug2013 optics contamination and OSEM testing
LHO TMSX 16Aug2013 on test stand - cables and safety hardware
Hugo & Jim reported a response error on HAM6 V1: the IPS (Inductive Position Sensor) responded with the opposite sign but good amplitude. We visually checked the IPS and verified its sign was correct. EE was called in for assistance, and the outputs of the Pier Pod were verified to be consistent and correct. Cables were swapped, and we even drove V1 with the V4 pod. All suspects were eliminated, leaving the Actuator itself. The Parker valve has been swapped and the Actuator is now in bleed mode. Old valve SN L170 replaced with L221.
The Actuator was put back into run mode at 1405 PDT, giving us 165 minutes of bleed time, more than enough for this vertical actuator given the reported experiences at MIT and LLO. We ran another test and the sign anomaly still exists...? Don't know what to say about that...