Last Friday and today I worked on installing the remaining TMS cables and documenting each one. The table below lists the cables we installed, their positions on the Cable Brackets, and the components they connect to. For in-air cabling, only the SUS cables were run to the Test Stand; this is why I don't have names for the ISC TMS cables.
| In-Air Cable | Chamber Feed-Thru | In-Vac Cable | Cable Bracket | In-Vac Cable | Cable Bracket on TMS | In-Vac Component |
|---|---|---|---|---|---|---|
| H1:SUS_BSC9_TMONX-1 ("SUS1") | ..........\|\|......... | D1000225 s/n S1106816 | CB3, 1st floor | D1000234 s/n V2-96-903 | --- | OSEMs: Face1, Face2, Face3, Left |
| H1:SUS_BSC9_TMONX-4 ("SUS2") | ..........\|\|......... | D1000225 s/n S1106771 | CB3, 2nd floor | D1000234 s/n V2-88-934 | --- | OSEMs: Right, Side, ---, --- |
| (not sure of name; cable not run) | ..........\|\|......... | D1000924 s/n S1104104 | CB6, 1st floor | D1000568 s/n S1104110 | CB-primary, 1st floor | Green QPD (D1000231 s/n S1202413) |
| (not sure of name; cable not run) | ..........\|\|......... | D1000924 s/n S1203963 | CB6, 2nd floor | D1000568 s/n S1202739 | CB-primary, 2nd floor | Red QPD (D1000231 s/n S1202411) |
| (not sure of name; cable not run) | ..........\|\|......... | D1000223 s/n S1202653 | CB5, 1st floor | D1000921 s/n S1104112* | CB-entry, 2nd floor | Picomotors (D1000238 s/n S1105218) |
| (not sure of name; cable not run) | ..........\|\|......... | D1000223 s/n S1202656 | CB5, 2nd floor | D1000921 s/n S1104113 | CB-entry, 1st floor | Beam Diverter (D1000237 s/n S1202724) |
These cables were run mainly according to D1300007, although we made some changes to improve workability (see photo). One unfortunate thing: D1300007 calls out a cable for which "metal ears" are noted as optional, so we went without them here. But with the ears NOT on the cable, it is impossible to disconnect a cable from a Cable Bracket (see photo) without also removing the Cable Bracket from the table. This is because the cables are connected via a set screw, and the set screw access hole is blocked when connected cables are attached to the Cable Bracket. LAME.
I also set up equipment for testing cables...mainly bringing the various items needed for the Picomotor cables.
* For D1000921 s/n S1104112 above: this cable was originally recorded as S1104111 based on its bag, but the bag appears to have been mis-labeled. S1104112 is what is actually installed.
Cheryl and Keita backed out the TMSX OSEMs, and I logged the open-light (OL) values and calculated gains and offsets using the Matlab script /ligo/svncommon/SusSvn/sus/trunk/Common/MatlabTools/prettyOSEMgains.m:
>> prettyOSEMgains('H1','TMSX')
M1F1 25441 1.179 -12720
M1F2 22328 1.344 -11164
M1F3 28300 1.060 -14150
M1LF 23159 1.295 -11580
M1RT 26172 1.146 -13086
M1SD 28743 1.044 -14372
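The numbers above are consistent with normalizing each OSEM so that full open light reads 30000 counts and the half-light operating point reads zero, i.e. gain = 30000/OL and offset = -OL/2. A minimal Python sketch of that arithmetic (the function name is mine, not part of prettyOSEMgains.m):

```python
def osem_gain_offset(open_light_counts):
    """Compute an OSEM normalization gain and offset, assuming the
    convention that open light is scaled to 30000 counts and the
    half-light point is zeroed."""
    gain = round(30000.0 / open_light_counts, 3)
    offset = -round(open_light_counts / 2.0)
    return gain, offset

# e.g. M1F1 above: osem_gain_offset(25441) -> (1.179, -12720)
```

All six rows of the table above reproduce under this rule, which suggests the script is doing exactly this.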
I entered the new gains and offsets and updated and committed the safe.snap.
After the above was done, the OSEMs were centered such that the H1:SUS-TMSX_OSEMINF_??_OUT values are within +-20 um.
SUS team was notified that TMSX is ready for SUS testing.
RAID controller 0 has failed in the SATABoy that is connected to FW1, as evidenced by the audible alarm and serial console log messages. Looking at the uptime trend in DataViewer, it looks like FW1 crashed at ~02:23 UTC on 8 Sep and did not start (reliably) running again until 06:11 UTC the same day. That range probably corresponds to the time when the controller failed, or started to fail. Controller 1 appears to be active and working at present. For now I've silenced the audible alarm until Dan can hopefully come up and have a look at it.
Filiberto investigated the TMSX issue (7675, 7676) and found a wrong cable had been used.
Now things are better, and there's a valid DC signal appearing, but there's still HF noise. Zooming out shows peaks at 1850 Hz and multiples (3700 Hz, 5550 Hz, signs of something at 7400 Hz), so something is presumably oscillating.
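The harmonic structure can be sanity-checked numerically; here is a small sketch (the peak list is taken from the plot, the function is mine) that tests whether a set of peaks forms a harmonic series of the lowest one:

```python
def is_harmonic_series(peaks, tolerance_hz=5.0):
    """Return True if every peak is an integer multiple of the lowest
    peak, within a tolerance that absorbs frequency-resolution error."""
    fundamental = min(peaks)
    for f in peaks:
        n = round(f / fundamental)
        if n < 1 or abs(f - n * fundamental) > tolerance_hz:
            return False
    return True

# Peaks seen in the spectrum: 1850 Hz and its multiples
print(is_harmonic_series([1850.0, 3700.0, 5550.0, 7400.0]))  # True
```

A clean harmonic series like this usually points at a single oscillator (ringing electronics) rather than several independent noise sources.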
Filiberto tried various combinations of powering down the sat amps and swapping them and eventually got the 1850 Hz ringing to go away.
Found a missing cable at the vacuum mock feed-thru panel. This cable is not a standard one-to-one pin cable; it is flipped on one end to simulate the in-vacuum feed-thru. After this, Mark reported high-frequency noise. We tried different variations of powering off the coil driver and satellite unit. Signals now look stable.
sw-msr-h1daq, being a Layer 3 switch, supports running a number of routing protocols; and apparently RIP and OSPF are enabled by default. Since we only use this as a Layer 2 device these protocols were never configured, and because I wouldn't want to use them on this device anyway, I have disabled them in the config.
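The switch vendor isn't stated here, so purely as an illustration: on a Cisco-style IOS CLI, disabling the default routing processes would look something like the following (the OSPF process ID and the assumption that both protocols were enabled as `router` stanzas are mine):

```shell
# Enter configuration mode and remove the routing processes,
# leaving the switch to operate purely at Layer 2.
configure terminal
no router rip
no router ospf 1
end
write memory   # persist the change to the startup config
```

Removing the stanzas outright (rather than just leaving them unconfigured) also keeps the protocols from advertising on any interface that gets added later.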
Moved the NW Vertical back to operation mode this morning after bleeding for the weekend. Will continue commissioning.
Please see entry at LLO.
Since we use the same library part, this is the case for LHO as well.
The last two transfer functions, taken on the lower stage of the beamsplitter under vacuum, showed a mismatch with the model and with Livingston's measurements (cf. first plot attached).
Before suspecting any mechanical issue (earthquake stops touching, or misaligned flags), I wanted to check that the actuation and sensing chains were acting as expected.
After the issue we had with TMSX a few weeks ago (cables being swapped), I wrote a script to test MEDM signs, cabling, and magnet polarity all in one, and ran it today on the beamsplitter. It looks like there is something wrong with the actuation chain of the M2 UR OSEM (cf. the red curve of the second attachment). The test consists of sending a positive ramp to the OSEMs, one after the other, expecting to see a response that follows the drive. UR is the only OSEM doing the contrary (not obvious on the plot because of the scaling).
Even though it no longer matches the convention, I swapped the sign of the UR COILOUTF gain in the MEDM (+1 instead of -1) and took a new transfer function in M2 yaw with DTT (cf. last attachment). The TF looks much better now, with really good coherence.
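The sign check described above can be sketched as follows (synthetic data; the function name, channel names, and slope heuristic are mine, not the actual script):

```python
def check_osem_signs(ramp_drive, responses):
    """Given a positive ramp drive and per-OSEM response traces,
    flag any OSEM whose response moves opposite to the drive.
    Uses the end-to-end slope sign as a cheap fit."""
    bad = []
    drive_sign = 1 if ramp_drive[-1] > ramp_drive[0] else -1
    for name, trace in responses.items():
        response_sign = 1 if trace[-1] > trace[0] else -1
        if response_sign != drive_sign:
            bad.append(name)
    return bad

# Synthetic example: UR wired/signed backwards
ramp = [0.0, 1.0, 2.0, 3.0]
resp = {
    "UL": [0.0, 0.9, 2.1, 3.0],
    "UR": [0.0, -1.1, -1.9, -3.0],  # follows the drive backwards
    "LL": [0.0, 1.0, 2.0, 3.1],
    "LR": [0.0, 1.1, 1.9, 2.9],
}
print(check_osem_signs(ramp, resp))  # ['UR']
```

Driving the OSEMs one at a time, as the entry describes, keeps cross-coupling from confusing the per-channel sign estimate.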
TFs will be running tonight for the longitudinal and pitch DOFs.
Corey installed all in-vac cables for TMS.
OSEM cables were connected to the out-of-vac cables via a dummy feedthrough (see Mark's alog about the problem we have).
ISC cables (QPD, beam diverter, and picomotor) were routed next to the dummy feedthrough but were not connected to the feedthrough.
On Monday we'll test all ISC cables/connections.
Keita got the TMSX OSEMs connected up and asked me to see if the readouts were reasonable.
Unfortunately there is a large amount of HF noise at assorted frequencies up to about 600 Hz. DTT and DataViewer views are attached.
The first quadrapus (F1, F2, F3, LF) is several orders of magnitude noisier than the second. When the quadrapuses were swapped, the noise went with the quadrapus.
There doesn't appear to be a valid DC signal underlying the noise, except possibly for F2.
This week, I worked on HAM-HEPI commissioning with HughR and JimW.
I performed what could be called "assembly validation" tests on HAM2 and HAM3 HEPI. Those tests check the proper mechanical behaviour of the platforms, as well as the proper functioning of the sensors and actuators. Tests performed include, but are not limited to: Local Static Offset tests (a small drive in one corner, reading the IPS sensor responses in every corner), sensor spectra, and linearity tests. Tests were concluded with transfer function measurements, which confirmed that the whole chain (actuation, platform, sensors) is functional on both platforms. Transfer functions are attached.
All results are committed to the SVN, and testing reports are in progress.
I also started designing isolation loops (actually position loops, using the IPS sensors only). I could not get those loops to be stable and I don't know why. I tried different designs, lowering the ISO block gain, and decreasing the ramp time on the boost filters, with no success. I am hoping Sebastian, who is coming next week, can give the problem a fresh look.
In parallel, I showed HughR and JimW how to use the SEI commissioning tools on HAM6-HEPI. They noticed that one of the actuators (V1) was acting in the reverse direction. Their thorough troubleshooting revealed a misconnection at the actuator's hoses, which they fixed.
Kyle, Gerardo
Jason, Betsy
So after fighting the ITMx reaction chain into its appropriate yaw and x-direction position behind the main chain all day, it was totally out in height (darn coupled mechanics). After then adjusting it to the appropriate height, it was totally out in yaw and x-position. We'll start again on Monday, I guess.
J. Kissel, M. Barton, H. Paris [D. Feldbaum via phone]

After getting the PSL back up, it took very little time to get the mode cleaner re-locked.

Hugo - restored the HAM2 and HAM3 isolation systems to proper alignment
Mark - restored the mode cleaner suspensions to yesterday's alignment
JSK / Mark - burt restored h1ascimc model to regain the input periscope's PZT alignment

Mark had left Stefan's MClockwatch running on opsws3, as I had done Wednesday, which happily picked up the mode cleaner flashes and brought us to a pretty nice* lock. Win! Boy are we close to having a fully automated input optics chain...

*The alignment looks a little funky, but Mark's seeing plenty enough SNR to get what he needs done up at 500+ [Hz], so we're not gunna bother tweaking it.
Corey to LVEA East area to get items
Crane work by BSC3 (LVEA) – Apollo
Taking measurements on ITMx (LVEA) – Doug
Cable work at End X (TMS) – Corey
H2 PSL enclosure work – Sheila/Jeff K.
Hanford Fire Department on site
Work on LVEA (switch fixing) – Cyrus
Post mount on LVEA roof – John/Bubba
Computers power cycling – James/David
Doing some work at ITMx (LVEA Test Stand) – Betsy
LVEA transitioned to Laser Hazard – Justin
Concrete has been poured!
Baffle work in East Bay (LVEA) – Thomas/Lisa
Work on dust monitor at End X – Patrick
Measurements on Beam Splitter – Arnaud
LVEA transitioned to Laser Safe – Justin
S. Dwyer, J. Kissel, D. Barker [with help from R. Savage, P. King, D. Feldbaum, and K. Thorne by phone]

Since Sheila was the only cognizant PSL operator on site, and I wanted to learn, I followed her around as she made attempts to bring the PSL back up from yesterday's power outage. As of ~2:00p PT (21:00 UTC), the PSL is now running in the following state:
- Low Power (32.1 W NPRO "front end" output power)
- FSS on, ISS off (see details below)
- PSL Enclosure is in "Science Mode" with air conditioning, fans and lights turned way down or off.

The full story: We began with the latest start-up procedure, T1200259. However, after getting through all the steps to turn on the PSL to a low-power state (as advised by Rick -- though we couldn't find an aLOG indicating that it had been left in low power), we found two error messages left on the STAT sub-screen of the PSL Beckhoff Overview Screen (see T0900641_Figure8p8_PSLStatusScreen.png from pg 54 of T0900641): "EPICS ALARM" and "VB PROGRAM ONLINE" were red. In addition, the DIODE CHILLER (DC) [water] flow rate was a little bit below the nominal threshold (> 22.5 [liters per minute]) at 21.8 [lpm] on the CHIL screen (see T0900641_Figure8p7_ChillerScreen.png from pg 53 of T0900641). In this condition, we also found that the High Power Laser's [HPL's] external shutter would not open from the Beckhoff Overview Screen -- needed regardless of whether you want to run in high- or low-power mode, because this shutter prevents light from continuing on to the Pre-Mode Cleaner (see T0900641_Figure7p1_HPLShutters.png from pg 27 of T0900641). After a couple of hours of chasing our tails flipping switches, redoing the procedure, and calling the available experts, David clued us in to a "one line" solution that Keith had put together, which Keith then pointed us to: LLO aLOG 2322, indicating that the H1:PSL-EPICSALARM is the culprit.
Regrettably, Keith's network layout and computer configurations are arranged just differently enough that we couldn't use his exact solution. Instead, Dave and I looked for this channel, and found it buried in ${userapps}/opt/rtcds/userapps/trunk/psl/h1/models/h1pslpmc.mdl (which was, of course, tough to find because the model is two years old and still uses the now-defunct pink EzcaRead block, with the full channel name hard-coded at the top level). As Dave mentions, we traced the logic back to a latched flow-rate alarm in the MIS sub-block (the inside of which is shown in h1pslpmc_MIS.png).

Another interesting point: though the four front-end channels which define H1:PSL-EPICSALARM --
H1:PSL-MIS_FLOW_OK
H1:PSL-MIS_CLOSE_SHUTTER_IO
H1:PSL-MIS_CLOSE_SHUTTER_ISC
H1:PSL-MIS_CLOSE_SHUTTER_CTRL_ROOM
-- are displayed on the MEDM / EPICS version of the PSL Overview Screen (as seen from the control room, see PSL_OVERVIEW_SCREEN.png), H1:PSL-EPICSALARM itself is not shown, and therefore the link between these EPICS variables and the Beckhoff alarm is unclear. It turns out, as Dave mentioned, only H1:PSL-MIS_FLOW_OK was still red after Sheila and I gave up poking around, so Dave and I queried the threshold and trigger variables "manually" via caget (because they're also not shown on the EPICS or Beckhoff screens) and found the trigger variable comfortably in range, so we reset it. For future reference:

controls@opsws8:models 1$ caget H1:PSL-MIS_FLOW_INMON
H1:PSL-MIS_FLOW_INMON -5373.77
controls@opsws8:models 0$ caget H1:PSL-MIS_FLOW_UPPER
H1:PSL-MIS_FLOW_UPPER -5000
controls@opsws8:models 0$ caget H1:PSL-MIS_FLOW_LOWER
H1:PSL-MIS_FLOW_LOWER -6000

(Note: these threshold and trigger variables are uncalibrated, so it's unclear how these counts relate to the [lpm] shown on the Beckhoff screen.) As soon as we reset this variable, the STATUS screen started blinking wildly, and we appeared to be back in business as all of the automated loops took over.
Nice! However, we noticed after a few minutes that the reference cavity would not remain locked for more than 10 - 30 seconds. I had assumed this was because the laser still needed to warm up, and there was some sort of mode mismatch and/or thermal issues with the ISS/FSS. After a few hours (telecons) of letting the PSL warm up, we found the reference cavity still blinking in and out of lock. Sheila and I sat down to diagnose, but found ourselves too ignorant of the FSS and ISS to really figure out what the problem was. We simply tried various combinations of switching the autolockers for both ON and OFF, and found that the refcav stayed locked with only the FSS running. Since we just want to get the mode cleaner locked enough for Mark to continue violin mode hunting and for Jamie to play around with Guardian, we figured leaving the ISS off -- until those with further expertise return -- would be just fine.
The reset button on the Laser.adl was implemented and accepted by the PSL team in July. As announced on the PSL mailing list, this should prevent the model from opening and closing the external laser shutter without any user interaction each time the flow sensor triggers. We observed in Hannover that this can happen when the flow is close to the threshold. We should definitely add that point to the startup procedure. The values for the flow watch cannot be calibrated into lpm, because they just refer to the flow relative to the manual slide-bar position on the flow sensor.