Should get the remaining Actuators attached tomorrow morning.
Today's activities:
- Jim W, to LVEA, lock HEPI HAM02.
- Sheila, RefCav locking lesson to operator.
- Richard M, to LVEA, cable work under HAM02.
- Apollo crew, to LVEA, move IOT2L.
- Apollo crew, door prep work, HAM02.
- Betsy, LVEA, SR2 work.
- Jim W, to LVEA, HEPI lock HAM03.
- Apollo crew, to LVEA, door prep work, HAM03.
- Filiberto, LVEA and X-end, ESD measurements.
- Mitchel & Thomas, MCB assembly work, West bay area.
- Hugh and Greg, X-end, HEPI work.
- Jim B and Dave B, to Y-End, troubleshooting.
Vendors:
- Porta potty service.
(Sheila, Gerardo)
Sheila showed me how to lock the reference cavity.
One change was needed to get the system to behave: Sheila lowered the resonant threshold from 0.9 V to 0.5 V.
The reference cavity was able to lock manually, but now it appears misaligned when locked.
Jim, Cyrus, Dave
Rolf added a new feature to RCG 2.8 to permit a front end to run without an IRIG-B card (GPS time is obtained via EPICS Channel Access from a remote IOC). We are in the process of testing this on h1pemmx.
To prepare for the test, I added the line "remoteGPS=1" to the CDS block on h1ioppemmx. I added a cdsEzCaRead part, reading the GPS time from the DAQ data concentrator on channel H1:DAQ-DC0_GPS. I svn updated the trunk area, and compiled h1ioppemmx and h1pemmx against the latest trunk.
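For anyone repeating this, the model-side change is one line in the CDS parameter block; a sketch of the relevant block contents, where the site/rate/dcuid values are illustrative placeholders rather than the real h1ioppemmx settings:
site=h1
rate=64K
dcuid=NN
remoteGPS=1
A quick way to confirm the remote GPS source is being served before restarting the model is the standard EPICS client tool:
]$ caget H1:DAQ-DC0_GPS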
Test 1: keep the IRIG-B card in the computer and restart the IOP model several times. We noticed that the offset between the GPS time reported by IOPPEMMX and its reference (DC0) changes from restart to restart, but stays synchronized to within a second.
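To watch the two clocks side by side across restarts, a simple caget loop is enough; a sketch, where H1:FEC-NN_GPS is a hypothetical name for the IOP's GPS readback (substitute whatever channel the GDS_TP screen actually serves):
]$ while true; do caget -t H1:DAQ-DC0_GPS H1:FEC-NN_GPS; sleep 1; done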
We are in the process of test 2, removing the IRIG-B card from h1pemmx. At the same time, Cyrus is reconfiguring the X-arm switching systems for the front end and DAQ switches, which will permit replacement of two Netgear switches at MX with media converters. The use of full switches to support a single front end computer is obviously wasteful.
On completion of today's IRIG-B tests, we will re-install the IRIG-B card and reload the 2.7.2 version of the IOP and PEM code. While this test is in progress, the DAQ status from MX is 0x2000 and its data is bad.
Jim, Dave.
Eagle-eyed Kyle noticed that the MEDM screen snapshots stopped working at 6am this morning. script0 was pingable, but we could not ssh onto it. Its console was frozen, and it had to be rebooted. We restarted the MEDM screen snapshot program.
Jim and Dave
We restarted the user and IOP models on h1susey several times investigating the DAC status bits (follow-on from yesterday's ITMX/Y issue). We did not find any problems at EY; the status bits are consistent with the AI units being powered down. We wanted to try powering them up, but they are missing the +15V supply.
Jason and I cut away the Ameristat from around the legs of the tripods and realigned the instruments to have them ready in the AM. I added new scribe lines to the ACB targets to represent the new horizontal centerlines.
This morning at ~9:30 I locked HAM2 HEPI. At ~1:30pm, HAM3 HEPI got a similar treatment. Offsets from the floating position for both were about 100 cts (≈0.0003 in.; conversion spelled out below), which is what Hugh reported he shot for when locking.
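Spelled out, using the 7.87 V/mm and 1638 cts/V calibrations: 100 cts ÷ (7.87 V/mm × 1638 cts/V) = 100 ÷ 12,890 cts/mm ≈ 0.0078 mm ≈ 0.0003 in.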
Started around 9:00 AM on October 29. Plot attached.
R. McCarthy, P. King
The in-air cables (D1300464) used for the outer loop power stabilisation photodetector array were installed (see attached pictures). Looking at the flange from left to right: on the left-hand subflange, cables S1301012 and S1301013 were installed; on the right-hand subflange, cables S1301014 and S1301015. These were attached to the black coloured mating pieces and are face-to-face flush, as shown in the second attached picture.
Results for the main and reaction chains of ITMX (in chamber, in air, suspension undamped, ISI locked) show a very good match with the model and with the phase 2b (test stand) measurements, so the suspension is free of rubbing for this alignment; cf. the first attached PDF for the main chain and the second for the reaction chain. The last two PDFs show the transfer functions against the model (third PDF for the main chain, fourth for the reaction chain). As usual, the reaction chain's pitch is off due to the stiffness added by the cables.
I had trouble getting ITMX data with Matlab from last Friday's measurement, because of a typo I made in the code when updating the new channel names. However, I was able to recover the main chain data with DTT using the right channel names, and it looks healthy; cf. the attached document.
I ran a new set of measurements overnight on Monday for the reaction chain, but this time it failed because of a drive issue. The details have been logged by Jeff; cf. aLOG 8279.
So I started again tonight and it seems to be working fine for now.
D. Barker, J. Batch, J. Kissel, A. Pele
Arnaud has been having trouble this week taking transfer functions on ITMX. After a lot of chasing our tails, and finding a few bugs in the infrastructure work that I've been doing (see LHO aLOG 8247, and [sorry for the lack of aLOGging, slaps own wrist, was on a deadline for Stuart]), we finally discovered that the analog "keep-alive" signal that is sent from the I/O chassis to the AI chassis for all the SUS on h1susb123 (H1SUSITMX, H1SUSITMY, H1SUSBS) was failing. It had apparently failed on Sunday at 7pm PT, when I was here fixing an unrelated bug in the *library parts* for the QUAD (which means, if it *was* me, it would have affected all QUAD models, and Arnaud has successfully driven / damped H1SUSETMX since Sunday). It's now fixed, with a hard power cycle of the front end and IO chassis. (Some rather upsetting) Details below.
---------
The symptoms we identified:
- The IOP model output on the SUS OVERVIEW screen was showing zeros when we expected to have output signal.
- The IOP's GDS_TP screen showed the 5th bit was red -- explained on pg 18 of T1100625 to be "Anti-imaging (AI) chassis watchdog (18bit DAC modules only / [on the] IOP [screen] only): For 18 bit DAC modules, the IOP [front end model] sends a 1 [Hz] heartbeat to the connected AI chassis via a [16 bit] binary output [card] on these modules. The AI chassis, in turn, returns a bit via the DAC binary input register to indicate receipt." (Don't worry, even with my [edits], it still doesn't even make sense, even to *me*.)
- The LED near the input of the AI chassis was OFF (not red, just off).
- The switch to flip in the DAC duotone signal on the 31st channel of ADC0 in the IO chassis, which is controlled by the same 16 bit BIO card, was malfunctioning: when watching the 31st channel of ADC0 in dataviewer, we saw the signal flip from noise to *zero* instead of noise to the typical pretty duotone sine waves.
Welp. Looks like we need to add yet ANOTHER watchdog layer to the overview screen. Dave's conjecture is that somehow this 16 bit BIO card got into a bad state on Sunday. It's unclear how, though, since I was just stopping and restarting the user models, and was not playing with the IOP, nor was I turning on and off the front end or the IO chassis.
Anyways. How do we solve any problem with computers? Power down and power back up. *sigh*. There's now a reasonably successful procedure for gracefully bringing a front end / IO chassis up and down, without affecting other front ends. Here's what I got from picking Dave's brain, and watching over his shoulder:
(1) Kill the user model processes running on that front end.
]$ ssh h1susb123
]$ killh1susitmy
]$ killh1susitmx
]$ killh1susbs
(2) Kill the IOP process running on the front end.
]$ killh1iopsusb123
(3) Remove the front end from the IPC / Dolphin network, so you don't crash every other front end. Note, you should only do this step if you're powering down the front end and IO chassis; it's not necessary when just stopping and starting front end processes.
]$ sudo -s
]$ /opt/DIS/sbin/dxtool prepare-shutdown 0
(4) Turn off the front end gracefully* (still as super user).
]$ poweroff
* This didn't work for us. The front end powered down, and then immediately began rebooting itself and bringing all the models back up. So, we had to:
- wait for it to finish rebooting and bringing up the models,
- kill the front end processes again,
- go into the Mass Storage Room (MSR) and hold the power button until it powered down.
(5) Turn off the IO chassis by going to the CDS high bay, and flicking the rocker switch on the front of the chassis.**
** This doesn't work FOR ANY IO CHASSIS. Jim informs me that the rocker switch is wired to the wrong pins on the motherboard. For every IO chassis. Yeah. So, one has to disconnect the DC power in the back of the rack by unscrewing properly secured cables, risking powering down the chassis unevenly. Similarly on power up. TOTAL BADNESS. Apparently at LLO, they've installed lamp-style rocker switches right on the cable to work around this problem and badness.
(a) Why don't we have this already at LHO?
(b) Was this an accepted, global, CDS fix?
(c) Why can't we just re-wire the IO chassis?
(6) Turn on the IO chassis via the same rocker switch in the front (assuming you've reconnected the DC power and, like I did, flipped the rocker to the off position expecting it to work beforehand).
(7) Use monit (the remote controller of front ends' power that I still know too little about) to gracefully turn on the front end. Upon power up, the front end is gracefully inserted back into the Dolphin network, the IOP front end process is restarted, and then the user front end processes***.
*** Because I've been making a bunch of changes to the EPICS variables in these models, and haven't yet had the chance to update the safe.snaps, the start-up process takes much longer to restore the snap (trying to reconcile the differences, I presume), which means the $(IFO):FEC-$(DCUID)_BURT_RESTORE flag doesn't get set before the process looks for its timing synchronization signal, and it just hangs there red and dead, claiming no sync. You have to then hit the button (when the EPICS gateway catches up, some time later) and restart the front-end process (which captures that this bit is now set); then it happily picks up the IOP timing sync and springs back to life.
That's the process. Don't you feel better?
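For reference, the "button" presumably just sets that BURT restore flag; an equivalent sketch by hand with the standard EPICS command-line tools (the DCUID value 91 is a made-up placeholder, read the real one off the GDS_TP screen):
]$ caget H1:FEC-91_BURT_RESTORE      # confirm the flag never got set (reads 0)
]$ caput H1:FEC-91_BURT_RESTORE 1    # set it by hand, then restart the front-end process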
If the front end is not locked up, you can simply shut it down with:
sudo shutdown -hP now
(This command will shut down the Dolphin client as well.)
If you want to shut down all models on a front end:
sudo /etc/kill_models.sh
We have a lot of power outages at LLO, hence the invention of in-line power kill switches, as it is a long way to the DC power room. David K. may have them already fabbed - we will check.
After moving the jigs to the west bay clean room for more space, we were able to finish the mods requiring the use of a punch that previously got bound up. Next step is to bend the blades and move the baffle onto the balancing fixture.
This morning, I worked on balancing and floating the ITMX ISI. I also added some "temporary" masses (3 kg) to the optical table to approximate the missing weight of the SUS fiber guards. As of now, the balancing is done, but I still need to go back in and fine-tune the lockers, then re-gap the CPSs. The only issue I had was that the LVEA seems seismically noisier than the end station, which makes reading the floating position of the ISI on the CPSs difficult.
At about 9:03 PDT, there was a timing error detected by the DACs on h1sush2b, which resulted in all outputs being set to 0. The only remedy is to restart the IOP model, so I killed h1susim, restarted h1iopsush2b, then started h1susim again. This has cleared the DAC error. I burt restored to 8:00 PDT.
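For the record, a restart like this follows the same kill/start script pattern as the h1susb123 procedure above; a sketch, assuming the RCG-generated start scripts are named by model the same way the kill scripts are:
]$ ssh h1sush2b
]$ killh1susim          # stop the user model first
]$ killh1iopsush2b      # then the IOP
]$ starth1iopsush2b     # restarting the IOP clears the DAC error
]$ starth1susim         # bring the user model back, then burt restore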
A simple 'sudo /etc/startWorld.sh' will do all this
Phase 3b spectra have been measured on the beamsplitter after Jeff Kissel implemented the new filters.
The attached PDFs show a comparison between spectra taken on March 26th in phase 3a (blue) and on May 17th in phase 3b (green), with the suspension damped (first PDF) and undamped (second). The ISI was damped.
Data, measurement lists, and figures have been committed to the SVN.
Attached are spectra from the beamsplitter under vacuum (phase 3b) comparing damped and undamped spectra for M1 and M2. Damping looks to be working fine for all DOFs except yaw, where it increases the motion (page 12 of the first PDF and page 7 of the second). These spectra were taken with a gain of -2 in L and P (as used during HIFO-Y) and -1 for the other DOFs.
Attached are performance spectra of the beamsplitter, plotted with OSEM sensor noise. The plot compares the H1 BS in chamber, in air, with no ISI isolation and the suspension damped, against the L1 BS in vacuum with the ISI and suspension damped.
Test is completed; the pemmx front end has been reverted to its original state (IRIG-B card installed, 2.7.2 code running).
The test was a SUCCESS: the IOP ran without an IRIG-B card. This is indicated by a ZERO time in the IRIG-B diagnostics on the GDS_TP MEDM screen (see attached).
One problem found was with the replacement of the DAQ network switch with a media converter. This caused the DAQ data from all the other front ends which share the second 10 GigE card on h1dc0 to go bad. We tried to restart the mx_streamer on h1pemmx, but that only made matters worse and all the FEs' data went bad for a few seconds. I'll leave it to Cyrus to add more details. We reinstalled the Netgear switch for the DAQ, but kept the media converter for the FE network, as this showed no problems.
The media converters I tried are bridging media converters, which means they act like a small one-port switch with one uplink. I went with these because when the computer is powered off, the embedded IPMI interface negotiates at 100 Mbps, not 1 Gbps, and a standard media converter will not negotiate this rate (it is fixed to the fiber rate). A bridging converter therefore maintains access to the IPMI management interface on the front end computer at all times, not just when the machine is booted and connected to the switch at 1 Gbps. However, the switching logic in these media converters does not support jumbo frames, which, when used on the DAQ network, corrupts the Open-MX data. I've confirmed this by looking at the documentation again and comparing to a non-bridging version. So, I'll need to obtain some additional non-bridging media converters for use on the DAQ network; these should work better for this application as they are strictly Layer 1 devices with no Layer 2 functionality.
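As a sanity check when swapping converters in the future, the jumbo frame setting on a front end's DAQ interface can be confirmed from the shell; a sketch, with eth1 standing in for whatever interface name the DAQ network actually uses on that machine:
]$ ip link show eth1                     # DAQ NICs should report mtu 9000 when jumbo frames are enabled
]$ sudo ip link set dev eth1 mtu 9000    # (re)enable jumbo frames if the MTU reverted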