PeterK, MattH, ChrisM, RickS
This morning, Peter found that the Crystal Chiller would not restart (apparently the laser has been down since last Friday). He went to the Laser Area Enclosure and found a puddle on the floor near the High Power Oscillator. Removing the HPO covers revealed that a small (~3/16" ID) hose had separated and was leaking. The hose was replaced, the water mopped up, the chiller restarted, and the water level topped off. More details and photos to follow.
model restarts logged for Mon 30/Jun/2014
2014_06_30 10:45 h1iopsush2a
2014_06_30 10:46 h1suspr3
2014_06_30 10:46 h1susprm
2014_06_30 10:48 h1susmc1
2014_06_30 10:48 h1susmc3
2014_06_30 10:49 h1iopsush34
2014_06_30 10:50 h1suspr2
2014_06_30 10:50 h1sussr2
2014_06_30 10:52 h1iopsush56
2014_06_30 10:52 h1susmc2
2014_06_30 10:54 h1susomc
2014_06_30 10:54 h1sussr3
2014_06_30 10:54 h1sussrm
2014_06_30 10:56 h1hpiham1
2014_06_30 10:56 h1hpiham6
2014_06_30 10:56 h1iopseih16
2014_06_30 10:56 h1isiham6
2014_06_30 10:58 h1iopseih23
2014_06_30 10:59 h1isiham2
2014_06_30 10:59 h1isiham3
2014_06_30 11:01 h1hpiham2
2014_06_30 11:01 h1hpiham3
2014_06_30 11:02 h1iopseih45
2014_06_30 11:04 h1hpiham5
2014_06_30 11:04 h1isiham4
2014_06_30 11:04 h1isiham5
2014_06_30 11:06 h1hpiham4
No unexpected restarts.
Install of SWWD on HAMs
Jeff, Patrick, Dave.
I installed the latest IOP SWWD models on the following systems: h1iopsusb123, h1iopseib1, h1iopseib2, h1iopseib3. All SUS and SEI models on these front ends were stopped (using guardian to first put them into a safe state) and then restarted.
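For reference, the safe-state requests could be scripted rather than made by hand; the following is only a minimal sketch, and the guardian node names and REQUEST channel pattern are my assumptions, not a record of the exact commands used here:

from epics import caput

# Hypothetical node names; the real list covers every SUS and SEI node hosted
# on h1iopsusb123, h1iopseib1, h1iopseib2 and h1iopseib3.
for node in ['SUS_BS', 'SUS_ITMX', 'SUS_ITMY']:
    caput('H1:GRD-%s_REQUEST' % node, 'SAFE')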
This completes the SWWD FE install, I am now cleaning up the MEDM screens.
In the corner of the "cleaning area" adjacent to the door to the LVEA there is now a fully stocked Contamination Control (CC) Area and 3x fully stocked Contamination Control Portable Kits.
These are available for all to use. In fact, if you have an open chamber there is no reason you shouldn't have a Contamination Control Portable Kit. How do I know what a Contamination Control Portable Kit looks like? Don't worry, they are the BIG PINK Really Useful boxes, and as such pretty noticeable. See the attached PDF for pictures of the portable kits and the main CC area.
Further details can be found here: LIGO-T1400310, "Contamination Control Kits - Parts list, poster and kiting plan and use". At this DCC link you will find slides describing the area, a poster of all of the items included in the kits, a corresponding parts list, labels, and a chamber entrance guide.
Please see Jeff Bartlett if you have any questions and liaise with Jeff for re-stocking of the portable kits and areas.
Calum Torrie
Last week at LHO we (Travis, Betsy, Kate and Calum) tested the TM Green Lantern, LIGO-D1400060-v4, on the ITM in the WBSC3 chamber. The Test Mass (TM) Green Lantern provides a system which allows us to:
i) Illuminate the optic (in chamber) in a consistent manner at a grazing angle to the face of the optic. (The TM Green Lantern can be installed while first contact (FC) is in place. In addition, the FC can be removed while the TM Green Lantern is in place.)
ii) Highlight dust on the surface of the optic, compared with previous methods of illumination (flashlight) which couldn't operate at a grazing angle. As such, the previous methods illuminated substrate defects instead of dust on the surface.
iii) Image the dust on the optic (as a result of the effective illumination).
iv) See (and image) the effectiveness of the FC on a dusty optic. (This has not yet been tested.)
The TM Green Lantern is now on site at LHO (class B) under Betsy's ownership. See attached image below.
Calum Torrie (and team listed above).
After this morning's boots (just happened; see more from Dave shortly, I suspect), I see that the guardians for all of the SUSes come back in EXEC mode. They were previously set to PAUSE or another state. Why do they reset to EXEC after boots?
Guardian nodes come up in EXEC after restart. Their state is not preserved across restarts. I understand why this would be an issue, though, so maybe I should work on a way to preserve state across restarts.
Why did the nodes restart, though? Did the guardian machine get rebooted?
Presumably the problem here was that the nodes came up in EXEC, and then went to their default ALIGNED state, which disturbed the optic alignment happening in chamber. One way around this would have been to temporarily modify the SUS nodes to come up with the initial request as SAFE. This would have prevented the damping loops from being engaged and the alignment offsets from being enabled.
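As a rough sketch of that workaround (the module-level attribute names here are hypothetical on my part, not confirmed guardian syntax), the temporary change in a SUS node's system module would look something like:

# In the SUS node's guardian system module (hypothetical snippet, e.g. SUS_ITMX.py)
nominal = 'ALIGNED'   # nominal operating state, unchanged
request = 'SAFE'      # temporary: come up requesting SAFE so damping loops and
                      # alignment offsets stay off until someone requests otherwise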
Since we got the "All Clear" from SUS this morning on ITMx suspension health, the following are the final alignment numbers for the ITMx and CPx. For ACB final alignment numbers see alog 12542.
CPx
Reset ITMX and ETMY; BS and ETMX IPS are close to trip levels, did not reset.
ITMX suspension is healthy, ready for handoff.
TCS installing computer at end X, connecting to camera, running ring heater to test the Hartmann sensor.
Richard: cabling at end stations.
This brought the level to just about the max.
In cleanroom start of 4pm session:
All counts zero
In chamber start of 4pm session:
0.5um...20 counts
remainder....zero counts
In chamber end of session:
0.5um...30 counts
remainder...zero counts
This is a late alog from last week:
OM1, OM2 and OM3 needed some MEDM work:
1. Base change matrices for damping: they have been populated using the usual Matlab script "make_sushtts_projection.m" in /HTTS/Common/MatlabTools.
2. Calibration filters: copied from the RMs.
3. Coil output signs: set to - + + - in the order UL LL UR LR (see the sketch after this list). From an email from Suresh, the magnet polarity is opposite to HAUX (E1200215), therefore the sign convention is also opposite.
4. Damping filters: copied from the RMs; they seem to work with a gain of -20 for L and -0.04 for Pitch and Yaw.
5. safe.snap files were updated and committed into the svn.
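For item 3, the sign flips would typically land in the coil output gains; this is only a sketch, and the COILOUTF channel names are my assumption for the HTTS suspensions rather than something verified against the actual screens:

from epics import caput

# Assumed channel naming H1:SUS-<optic>_M1_COILOUTF_<coil>_GAIN -- verify before use.
signs = {'UL': -1, 'LL': +1, 'UR': +1, 'LR': -1}
for optic in ['OM1', 'OM2', 'OM3']:
    for coil, sign in signs.items():
        caput('H1:SUS-%s_M1_COILOUTF_%s_GAIN' % (optic, coil), sign)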
It would be good to take some TFs/spectra at some point
I restarted the following IOP models: sush2a, sush34, sush56, seih16, seih23, seih45. All user models on these front ends were stopped and restarted as part of this work. Guardian was used to put the associated SUS and SEI systems into a safe state.
The only systems left to be upgraded are the SUS and SEI for BSC1,2,3. This is scheduled for tomorrow.
No DAQ restart was required for this work as it had already been restarted last week.
J. Kissel, A. Pele, T. Sadecki, B. Weaver
After another day's worth of fumbling and hand-waving, we've finally traced the discrepant transfer function magnitude to a loose cable connection at the air side of the feedthrough. In the course of attempted fixes, we measured a few more things, moved a few more things, and replaced the R0 F2 and F3 OSEMs, but the only thing that fixed it was the cable connection. This flaw in the electronics chain should have been found earlier with our cable swap on Friday, with remeasuring the open light current, and with Arnaud's DC actuation test below, but frustratingly it was not. But we found it. I'm in the process of getting formal transfer function measurements now, but I'm confident from what I'm seeing thus far that this SUS is ready for close-out. Will post comparison TFs once they're finished. We have exORSIIIIIIIZED the demons. This SUS is clee-uh.
Details (in rough chronological order)
-------
- Arnaud compared the ETMY and ITMX R0 F2-to-F2 and F3-to-F3 drives at DC using the alignment offsets (the only QUADs available, so take the ERM and TCP chain comparison with a grain of salt), and concluded that the chains drive with comparable strength, and that within a chain F2 and F3 are of comparable strength, at least for such a quick study.
- Betsy retook an L2L transfer function (just for sanity's sake), saw the same discrepant factor of two -- and some more Yaw resonant cross-coupling. F2 still shows the majority of the badness.
- Decided just to swap the OSEMs and hopefully move on. Below are the new serial numbers, open light currents, and compensation gains and offsets. These *have* been installed. We'll take a new safe.snap this evening after all the commotion has died down.
- Betsy & Travis poked at this, looked at that, and iterated with measurements several times on the floor.
- At last, Betsy went along the signal chain, made sure all connections were secure, and finally found this bad connection at the in-air side of the feedthrough.
- Simultaneously, Travis loosened up the cable loop from the optical table to the top mass. This may have alleviated the extra Yaw peaks that had shown up today.
Open light current values and serial numbers for the new OSEMs:

        Raw ADC   New Gain   New Offset   S/N
R0F2    25646     1.170      -12823       077
R0F3    26554     1.130      -13277       147

We'll also update the OSEM chart and grab a new safe.snap once transfer functions are finished.
TFs look great. For some reason the comparison script is failing on my laptop; will post comparisons tomorrow. But, for all intents and purposes, H1 SUS ITMX is ready for close-out. A victory for team SUS!
Comparison plots. All looks well, as expected. To demonstrate the problem, I also include the prior *bad* measurements of the reaction chain (2014-06-27_2117).
More transfer function measurements were taken on the lower stages of ITMX last Tuesday.
Interesting things to notice:
1. The measurements with no top mass damping (1st and 3rd attachments) appear to show resonances which are not predicted by the model. Those are most certainly resonances of the reaction chain moving because of the reaction force (e.g. 3rd page of the 1st attachment: the resonance at 0.66 Hz is not predicted by the model, but matches the frequency of the first yaw mode measured on the reaction chain top mass, cf. this measurement from alog 12555).
2. There is a factor of 2ish discrepancy between the model and the measurement for both UIM and PUM.
Today the SUS overview MEDM screens were updated (again) to account for the new-new IOP DACKILL state channel name. They haven't been committed into the svn, and won't be until LLO has the change.
The linearization modification Jeff made for the ETMY has been moved to SUS_CUST_QUAD_OVERVIEW_with_linearization_2014-06-30.adl
Locally modified files are :
M bsfm/SUS_CUST_BSFM_OVERVIEW.adl
M quad/SUS_CUST_QUAD_OVERVIEW.adl
M quad/SUS_CUST_QUAD_R0.adl
M omcs/SUS_CUST_OMCS_OVERVIEW.adl
M hxts/SUS_CUST_HLTS_OVERVIEW.adl
M hxts/SUS_CUST_HSTS_OVERVIEW.adl
M tmts/SUS_CUST_TMTS_OVERVIEW.adl
M SUS_CUST_IOP_DACKILL.adl
B. Weaver, T. Sadecki, A. Pele, J. Kissel
In hopes of finding a smoking gun in a dead OSEM LED or photodiode, we backed off the H1 SUS ITMX R0 F2 and F3 OSEMs and gathered a new measurement of their open light current. Sadly, the difference is at most 5-10%, and does not explain the factor of two seen last Friday (LHO aLOG 12522). Thus far we haven't changed the OSEMINF compensation values to match the new reading, because we're not sure if we're going to swap out the OSEMs yet. Our next step is to drive alignment offsets and compare against other suspensions to see if the coil / actuation side of the OSEM is the culprit. We'll also measure coil resistance. (Remember, the electronics chain has been exonerated with the cable swap test we did on Friday.)
For reference:

        Raw ADC   New Gain   New Offset   Former Gain   Former Offset
R0F2    28600     1.049      -14300       1.003         -14958
R0F3    26320     1.140      -13160       1.063         -14114

Obtained using the following Matlab script in /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/:
[resultstring gains offsets] = prettyOSEMgains('H1','ITMX');
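As a sanity check, the new gains and offsets in the table appear to follow directly from the raw open light counts: the gain normalizes the open light reading to roughly 30000 counts, and the offset centers the signal at minus half the open light value. That relationship is inferred from the numbers above, not read out of the prettyOSEMgains script itself; a minimal Python sketch of the arithmetic:

def osem_compensation(raw_open_light_counts, full_range=30000.0):
    # Gain/offset implied by the table above (the 30000-count target is an inference).
    gain = full_range / raw_open_light_counts   # e.g. 30000 / 28600 = 1.049
    offset = -raw_open_light_counts / 2.0       # e.g. -28600 / 2 = -14300
    return gain, offset

print(osem_compensation(28600))   # approx (1.049, -14300.0)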
Note that these numbers were never installed, and now obsolete since we replaced these two OSEMs. See LHO aLOG 12544 for new OSEMs and open light current values.
[Mark Arnaud]
As reported on Thursday, sustools.py has been updated in order to account for the new IOP WD state channel names.
By typing on the command line:
/opt/rtcds/userapps/release/sus/common/scripts/./sustools.py -o ETMX wdNames
The output returns all wd state channel names associated with this suspension, including the new IOP channel name:
['H1:SUS-ETMX_R0_WDMON', 'H1:SUS-ETMX_M0_WDMON', 'H1:SUS-ETMX_L1_WDMON', 'H1:SUS-ETMX_L2_WDMON', 'H1:IOP-SUS_EX_ETMX_DACKILL', 'H1:SUS-ETMX_DACKILL']
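For reference, these channels can be read directly over EPICS; a minimal sketch, assuming pyepics is available on the workstations (which I have not checked):

from epics import caget

wd_channels = ['H1:SUS-ETMX_R0_WDMON', 'H1:SUS-ETMX_M0_WDMON',
               'H1:SUS-ETMX_L1_WDMON', 'H1:SUS-ETMX_L2_WDMON',
               'H1:IOP-SUS_EX_ETMX_DACKILL', 'H1:SUS-ETMX_DACKILL']
for chan in wd_channels:
    print('%s: %s' % (chan, caget(chan)))   # print each watchdog state value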
All sus guardians will be restarted and tested tomorrow
With this change (adding "ifo-rooted" IOP channels), the guardian machine ran into a memory issue, similar to the one we hit a few months ago with the ISI guardians.
Without going into details, we basically reverted the upgrade, meaning that guardian won't look at the IOP WDs until further notice.
All sus guardians were restarted.