As part of the transition to RCG-2.9, dataviewer has been updated to 2.9.1. The new dataviewer reads the revised nds1 protocol and includes the fix for Bugzilla 728, allowing proper display of unsigned integer trend data.
The DAQ software has been updated to 2.9 on h1dc0, h1fw0, h1fw1, h1nds0, h1nds1, and h1broadcast0. The DAQ was restarted to get the new software running on the named computers.
Stopped the chillers I started yesterday and started the 2nd chillers at each mid station. They will run most of the day for periodic running and maintenance checks.
Dan, Koji
We succeeded in locking the OMC using AS45 and the OMC REFL beam, and took a number of noise measurements and transfer functions of the cavity length. Data and plots are forthcoming.
On ISCT6, we took a pickoff of OMC REFL and steered this beam onto the ASAIR_A RFPD. A lens was added to focus the spot onto the diode. (Note that in order to recover the OMC REFL beam, we undid Keita's precautionary misalignment of the OMC REFL picomotors. We'll need to steer this beam away from the OMC REFL QPDs again before we start locking.) We used the OMC QPD alignment servo to keep the beam aligned to the cavity. The dither alignment loops are still unstable and need to be fixed.
We used a pair of SR560s to take the ASAIR_A_45 I-phase signal and feed this back into the PZT driver. We found that the aux input path for the PZT driver was working (yay!), and even better, found that very little loop shaping was necessary to get a stable lock. We were quite surprised that the flat-response PZT was giving us a nice 1/f loop; it looks like the capacitance of the PZT and the output impedance of the PZT driver are just right to provide a pole at around 2Hz, and the dewhitening filter on the board acts as a low-frequency boost.
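As a sanity check of that pole estimate, here is a minimal sketch; the driver output impedance and PZT capacitance below are illustrative placeholders (they were not measured as part of this work), chosen only to show how a ~2 Hz corner could arise:

import math

def rc_pole_hz(r_ohm, c_farad):
    # Single-pole corner frequency of an RC low-pass: f = 1 / (2*pi*R*C)
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

r_out = 80e3   # [ohm] assumed PZT driver output impedance (placeholder)
c_pzt = 1e-6   # [F]   assumed PZT capacitance (placeholder)
print("estimated pole: %.1f Hz" % rc_pole_hz(r_out, c_pzt))   # ~2 Hz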
We measured the cavity length noise from ~10mHz to 100kHz. The resonances of the PZT mirror tombstone at ~8kHz and the PZT itself at ~80kHz were clearly visible. There was a forest of lines around 1kHz that was *very* easy to excite by tapping on the HEPI crossbars. The sensitivity of the OMC length to light taps on a structure outside the vacuum was rather alarming and needs to be investigated more closely -- does the noise couple in through the ISI, the OMC suspension, the tip-tilts? One good piece of news is that the transfer function from PZT voltage to cavity length is smooth around 20kHz, so there is a good place for an analog dither signal.
Evan, Elli
The auxiliary laser can now lock to the carrier (IM4 transmitted beam) on IOT2R using a NewFocus LB1005 servo controller. LB1005 settings are gain=5, PI corner=3kHz, LF gain limit=60dB.
A 1.8GHz network analyzer is used as the local oscillator to set the frequency offset. We can sweep the frequency offset over 3.5MHz.
We measured the open loop transfer function (see attached plot). The UGF is at 15kHz with 70 degrees of phase margin.
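(For checking numbers like these from an exported sweep, a minimal sketch is below. The file name, column order -- frequency [Hz], magnitude [dB], phase [deg] -- and the convention that the phase column is the loop phase in negative degrees are assumptions about the export format, not a description of the actual analyzer files.)

import numpy as np

# Assumed export: three columns of frequency [Hz], magnitude [dB], phase [deg]
f, mag_db, phase_deg = np.loadtxt("oltf_export.txt", unpack=True)

# Unity gain frequency: linearly interpolate to the 0 dB crossing
above = np.where(mag_db > 0)[0][-1]          # last point still above 0 dB
f1, f2 = f[above], f[above + 1]
m1, m2 = mag_db[above], mag_db[above + 1]
ugf = f1 + (0.0 - m1) * (f2 - f1) / (m2 - m1)

# Phase margin: 180 deg plus the loop phase at the UGF
pm = 180.0 + np.interp(ugf, f, phase_deg)
print("UGF ~ %.1f kHz, phase margin ~ %.0f deg" % (ugf / 1e3, pm))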
Jeff, Sebastien
It seems that we had an IPC error on ITMX today (around 18:30:00 UTC). The ISI-ITMX watchdogs reported a 'payload trip', but the SUS watchdogs were fine: the WDMON_RFM_EPICS_ERROR channel shows an error at that time. See plot attached for more details.
J. Kissel, S. Biscans, D. Barker, J. Batch

*Sigh* there's always a fire. Today's fire was that, just before Dave and Jim went out to EX to swap ADC cards (see LHO aLOG 16023), they'd asked Seb to turn off SEI ETMX. Once they looked, they found that the IOP Software Watchdog had tripped on the SEI system.

In summary, there is a nasty but slow instability between the ST2 Y isolation loop and the QUAD T, RX, and RZ motion. We've never seen it before because we rarely, if ever, run the QUAD without damping and the ISI fully isolated for more than an hour or so to take transfer functions. However, the independent software watchdog did exactly what it was supposed to do, and prevented extended shaking of the suspension caused by ISI instabilities. As Vern suggests, perhaps I awoke a daemon by jokingly invoking quotes from the Poltergeist last night, but it looks like we're well protected against daemon attacks.

Here's the story.

05:25 UTC - Jeff requests the SEI_ETMX Manager guardian to go to Fully Isolated. (Remember this configuration includes *no ST1 RZ isolation* and *no ST2 Z, RX, RY, or RZ*; see SEI aLOG 658.)

05:28 - 05:35 UTC - Jeff takes transfer functions of SUS ETMX (see LHO aLOG 16012), and *leaves* the M0 damping loops OFF.

Over the next two-plus hours, the maximum ISI ST2 Y, RX, RZ displacement begins to ring up, most prominently at exactly 0.46 [Hz], the first transverse mode of the QUAD's main chain. (Note that the first L mode at 0.43 [Hz] and the first R mode at 0.92 [Hz] are also rung up, but not at the same amplitude.) There is some very slow, parasitic, positive-feedback cross coupling between the free ST1 ISI RZ, or ST2 Z, RX, RY, and/or RZ, and the QUAD's first T / R mode, which leaks into the ST2 ISI's Y DOF loop that *is* closed, and slowly but surely rings up the ST1/ST2 Y/RX/RZ motion, eventually driving the whole system unstable.

The QUAD's watchdogs DO NOT trip, because the M0 and L2 watchdog thresholds have been spuriously set to 80 000 [ct]. Even if they did trip, all actuation on the SUS is already OFF, so it would have made no difference. Only if ALL FOUR SUS USER watchdogs tripped (M0, R0, L1, and L2) would it have triggered the SUS USER DACKILL and sent a "PAYLOAD BAD" flag to the ISI, tripping its USER watchdog. However, the SUS Independent Software Watchdog (IOP watchdog or SWWD), which also watches the RMS of the main chain, begins to see this increase in RMS.

07:49 UTC - The ETMX SWWD begins to register that the QUAD's "OSEM4," i.e. the main chain LF (a vertical sensor), is surpassing the 110 [mV] RMS threshold originally defined by Jeff during the Hardware Watchdog testing (see LHO aLOG 12496). As stated there, this is roughly 10 +/- 5 [um] or [urad] peak-to-peak throughout the ISI / QUAD system. Note that the LF sensor is seeing MUCH more RMS than any other sensor, but RT and SD are the next biggest contenders, implying some sort of Roll / Transverse / RX / Y-ey kind of motion.

08:00 UTC - The SWWD's ETMX / QUAD watchdog is now constantly above threshold (State 3).

08:05 UTC - The SWWD ETMX / QUAD watchdog sends a warning to the SEI ETMX watchdog indicating things are getting serious (SUS WD goes to State 4, SEI WD goes to State 3), and shuts down the SUS DACs, tripping all the SUS IOP's DACKILLs (for TMSX, ETMX M0, and ETMX R0).

08:08 UTC - The ISI's ST2 USER watchdog trips on the Actuators, but goes only to State 3. This turns off ST2 X & Y, and the SWWD OSEM RMS begins to decline. << -- This points the finger at an ST2 X & Y instability (but ONLY when M0 is free).

08:09 UTC - The SWWD's SEI watchdog gets the final 1-minute warning that the SEI DACs are about to be shut down (SEI WD goes to State 4).

08:10 UTC - The SWWD shuts down the SEI DACs, tripping the SEI IOP DACKILL (State 0). This sends both stages of the ISI to their USER WD State 4: full isolation shutdown.

I attach a whole bunch of plots I needed to back up this story. However, in addition to data mining, I *did* re-create the chamber configuration and take a standard SUS transverse transfer function, but gathered all of the ST1 T240s and ST2 GS13s as response channels, and plotted the transfer functions and coherence between M0 T and ISI Y, RX, and RZ -- 2015-01-12_2355_H1ETMX_WhiteNoise_tf.pdf. There's a lot of interesting information in there, but most importantly, there's a very coherent transfer function at 0.46 [Hz] for all stages and all DOFs, with magnitudes of:

        Y/T [m/m]   RX/T      RZ/T
ST1     2.3e-4      1.3e-4    0.014
ST2     0.16        0.018     0.033
The SEI Team: Action items from this event:
- Hook the SUS IOP watchdog's warning flag (i.e. when the SUS IOP watchdog switches to State 4) up to the ISI USER watchdog, to smoothly ramp down the isolation loops *before* the SEI DACs are shut down. This will also enhance / increase the warning signs / notifications / alarms by default. (highest priority)
- Reduce the M0 SUS USER WD threshold (this would not have affected the outcome of this event, but it should be reduced just the same).
- Find out exactly what the instability was (this will be time consuming for both the IFO and man-power, so we will likely *not* do this).
Karen & Cris in LVEA
08:04 Jim W. running excitations on HAM3
08:36 Corey boxing 3IFO ISC table components in squeezer bay and mechanical room
08:43 Jim B. taking down nds0 to compile daqd modules for RCG 2.9
09:11 Elli to IOT2R
09:18 Cris to end X
09:24 Bubba to end Y fan room to put heater unit in place
09:32 Jeremy and Mitchel to LVEA West Bay to work on elliptical baffle
09:35 Sudarshan and Rick to end Y for PCAL work
09:42 Hugh and Brian to LVEA
10:11 Hugh operating crane
10:16 Dave and Jim B. to end X
10:16 Karen to end Y to clean
10:26 Dave and Jim B. powering down h1seiex to replace ADC cards
10:57 Travis to LVEA West Bay to pick up and store equipment
10:57 Hugh operating crane
11:28 Hugh operating crane
11:45 Cris back from end X
11:47 Brian and Hugh out of LVEA
11:53 Corey out of LVEA
11:53 Mitchel and Jeremy done
11:57 Karen back from end Y
11:58 Bubba back from end Y
12:00 Betsy running transfer functions on ETMX
13:41 Corey back to squeezer bay
13:52 Cyrus to end X to reboot workstation
14:18 Cyrus back from end X
14:28 Delivery for Richard
14:52 Elli back to IOT2R
15:15 Aaron looking for chassis near BSC3
15:52 Jeff K. running transfer functions on ETMX
Sudarshan, Dan
The ISS second loop did not close using the automated python script. We also tried to close the loop manually with the steps described in alog 14291, but without any success. The loop rails each time we try to close it. From some preliminary investigation, it looks like there is some problem with the electronics readout of the PDs. Attached is a plot showing all 8 PD signals of the ISS photodiode array.
I have started one of two chillers at each mid station for periodic running and maintenance checks. They will run overnight and then I will switch to the 2nd chiller tomorrow.
Hugo, Arnaud, Dave:
Hugo and Arnaud committed the latest versions of isi/common/models/[isi2stagemaster.mdl/isihammaster.mdl] to the repository. I upgraded H1; svn revision numbers are given below:
file | old | new |
isi2stagemaster | r8417 | r9545 |
isihammaster | r8122 | r9545 |
I tested the new common models by compiling an ISI HAM and ISI BSC model. I then compiled all the models against RCG2.9 with no failures.
The H1.ipc file was regenerated by performing two run-throughs of "make -i World". The new H1.ipc is the same as the old, with the exception of three IPC channels which have become obsolete:
I have replaced the original IPC file for the remainder of today in case any models need to be restarted.
Output power is ~ 32.7 W (should be ~ 30 W)
Watchdog is active
No warning in SYSSTAT other than "VB program online"

PMC:
Locked for 7 days
Reflected power is 8.77% of transmitted power (should be 10% or less)

FSS:
Reference cavity has been locked for 5 hours
Trans PD threshold is 0.4 V (should be at least 0.9 V)

ISS:
Diffracted power is ~ 7.4%
Last saturation event was 3 days and 18 hours ago
With BrianS's help, opened up two containers, inventoried cables etc and installed the shorting plugs on the GS-13 cables.
So all the BSC ISIs (in Storage) and three HAM ISIs are done. Two HAMs remain.
Saturday, no restart reported
model restarts logged for Sun 11/Jan/2015
2015_01_11 22:26 h1fw1
one unexpected restart.
Just because we are a tad paranoid now, I took V and P TFs of the ETMx main chain and a P TF of the reaction chain around noon today. All is well with this suspension still. BSC9 pressure is ~2e-6 Torr.
Jim and Dave:
WP5003. We replaced three ADC cards in the h1seiex IO Chassis, which were PMC cards in PCIe carriers. Three regular PCIe ADC cards were removed from the DTS x1seiham for this swap-out. This swap is needed for the RCG2.9 upgrade planned for tomorrow.
There are four ADC cards in this system; all but the first ADC were replaced.
I performed some DAQ trending of raw ADC channels in the swapped cards prior to and after the swap, to verify that the data looks contiguous.
The procedure was: remove h1seiex from Dolphin network, stop all models, power down h1seiex, powerdown IO Chassis, replace ADC cards, power up IO Chassis, power up h1seiex, let all models autostart.
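(For the pre/post-swap trend check mentioned above, something along these lines with the nds2 client can pull the data; the server, port, GPS times, and channel names below are placeholders, not the exact test points I trended.)

import nds2

conn = nds2.connection('h1nds1', 31200)                 # NDS2 server/port -- site-specific
chans = ['H1:IOP-SEI_EX_MADC1_TP_CH0,s-trend.mean',     # hypothetical raw ADC test point
         'H1:IOP-SEI_EX_MADC2_TP_CH0,s-trend.mean']     # hypothetical raw ADC test point
gps_start, gps_stop = 1105000000, 1105003600            # placeholder GPS span bracketing the swap

for buf in conn.fetch(gps_start, gps_stop, chans):
    print(buf.channel.name, buf.data.mean(), buf.data.std())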
The NDS server on h1nds0 was shut down for a few minutes to allow building of RCG-2.9 daqd executables for the frame writer, nds, and broadcaster. This should not have affected anyone since the default NDS server is h1nds1.
Tweaked the AOM diffracted power down to ~ 7.4% from ~8.5%.
J. Kissel

Something went wrong with the IMC after my safe.snap captures. It seems like the MC WFS DC signal indicates that they've lost their spots, so the ASCIMC servos are continually steering the IMC slowly off resonance. I attach two trends. The first is the past 6 hours, showing that when I started taking down suspensions, the MC WFSs lost their spots. The second is the last hour, showing how the IMC is in a bad locking loop, where it acquires and then slowly tanks.

I've burtrestored all relevant EPICS IOCs I could think of to 02:10p PDT (before I came in), i.e.
burtrestore h1susmc1epics 14:10
burtrestore h1susmc2epics 14:10
burtrestore h1susmc3epics 14:10
burtrestore h1ascepics 14:10
burtrestore h1ascimcepics 14:10
burtrestore h1sushttsepics 14:10
But things are still stuck in the loop. I'm going to continue to try to debug, but I solicit any remote help, if anyone's out there reading... Will post if I find a/the solution.
J. Kissel, w/ a remote A. Staley & E. Hall

Though we're still not sure why the spots have gotten mis-centered on the IMC WFSs, I was able to offload the previously functional IMC WFS requested values to the OPTICALIGN alignment offsets in MC1, MC2, and MC3, and this extended the cycle by a few more minutes. This still did not get light on MC WFS B, so the ASCIMC loops still eventually pull the alignment off into la-la land. Thinking of what happened this past Friday, Alexa suggests just leaving the IMC WFS OFF, and I have done so. The IMC has been locked rock solid since.

A couple of debugging notes, but still no answer:

Alexa informed me that the 50% power drops were because the DRMI guardian had not been set to the DOWN state, so once the IMC locked, it would downgrade the input power from 10 to 5 [W]. This was a red herring, which I thought was more MC WFS mysteriousness that I couldn't figure out at first.

Here's a comparison between the previously functional alignment offsets (with MC WFSs engaged, so modified from the static H1:IMC-${OPTIC}_${DOF}_OFFSET), and those that have had the previously functional MC WFS DC values (as read by the output M1 LOCK banks) offloaded to them (see the sketch at the end of this entry). Columns are H1:IMC-${OPTIC}_${DOF}_OFFSET, H1:SUS-${OPTIC}_M1_LOCK_${DOF}_OUTMON, and H1:SUS-${OPTIC}_M1_OPTICALIGN_${DOF}_OFFSET, P then Y for each:

                        OFFSET P   OFFSET Y   LOCK OUT P   LOCK OUT Y   OPTICALIGN P   OPTICALIGN Y
MC1 this morning           +0.7      +0.2       -18.1        +16.8        +1024.2        +2100.0
MC1 with WFS offload       +0.7      +0.2        +0.7         +0.2        +1006.1        -2083.2
MC2 this morning          -21.5     +27.6       +18.4         +1.8          515.8         -556.7
MC2 with WFS offload      -21.5     +27.6       -21.5        +27.6          534.2         -554.9
MC3 this morning           +5.8     +18.0       -13.0         +1.5         -527.3        -2100.0
MC3 with WFS offload       +5.8     +18.0        +5.8        +18.0         -540.3        -2198.5

I did NOT save these new alignment values to the SUS' ALIGNED file; it was just a stab in the dark.

I've taken a whole bunch more trends, which indicate that the alignment offsets were identical to before I got started, until I changed them as indicated above. However -- zooming in on when, and in what order, I turned ON and OFF the suspensions vs. relocking the IMC, I find there may be a pattern / problem in the order and timing in which I took down the three MCs. After staring at all five plots at once (I know, it's hard to do without a lot of screen real estate, but maybe open them in 5 different tabs and flip between), I recall that I turned OFF MC1 first, *without* pausing the IMC guardian or requesting it to be DOWN. Before I moved on to the other suspensions, I restored MC1 to confirm that the IMC came back. As such, in the interim, I think the IMC WFS began to steer the IMC in a bad direction with the misaligned (i.e. OFF) MC1, pulling the integrators to mis-center the spots -- especially on MC WFS B. Further, I didn't have nearly as many IMC screens open as I did later while trying to diagnose the problem, so I didn't see that the MC WFS were being pulled off course.

I thought that, if this is true, it should be as simple as restoring the Sunday "morning" offsets and clearing the history on the MC WFS integrators, and the spots should immediately become well centered on the MC WFS again. BUT, that didn't work either. With the "morning" offsets on the MCs, and history cleared on the ASC IMC loops, the spots reappear on the MEDM cross-hairs, but just barely, which means the TRANS power began to tank again with full-gain MC WFS on. Now I think we should use the picomotors to recenter the MC WFS.
But, I have zero experience with the pico-motor game, and I have learned to harbor great fear of them, especially after Suresh informs me that they're somehow wired such that a YAW request moves the beam in PITCH. Do we not have a DC centering servo on the MC WFS? For now, I leave the MC WFS OFF (via the gain slider in the bottom left corner), and I've cleared the history of the ASC IMC DOF loop filter banks.
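(For reference, the offload described above amounts to folding the M1 LOCK output into the OPTICALIGN offset for each MC optic and DOF. A minimal pyepics sketch is below; note that the numbers in the table above do not all follow one simple sign/gain convention, so the sign factors here are assumptions to be verified before writing anything back -- the caput is left commented out.)

from epics import caget, caput

OPTICS = ['MC1', 'MC2', 'MC3']
DOFS = ['P', 'Y']
SIGN = {'P': +1.0, 'Y': +1.0}   # assumed per-DOF convention -- verify against the table above

for optic in OPTICS:
    for dof in DOFS:
        lock_out = caget('H1:SUS-%s_M1_LOCK_%s_OUTMON' % (optic, dof))
        offset = caget('H1:SUS-%s_M1_OPTICALIGN_%s_OFFSET' % (optic, dof))
        new_offset = offset + SIGN[dof] * lock_out
        print('%s %s: %+.1f -> %+.1f' % (optic, dof, offset, new_offset))
        # caput('H1:SUS-%s_M1_OPTICALIGN_%s_OFFSET' % (optic, dof), new_offset)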
J. Kissel, with Suresh (remotely) this time

For starters -- thanks for all your help, remote commissioners! So I think Suresh might have nailed down the problem:

- Last week, Thursday afternoon, Jan 8th, Suresh commissions the MC WFS DC Centering servos, a.k.a. DOF 4 servos, a.k.a. the MC1 / MC3 differential pitch and common yaw servos (see LHO aLOG 15865), first with a gain of -0.1, and slightly later with a gain of -1.0. In doing so, he changes the alignment sliders of the IMC suspensions, but does not *save* the new alignments to the aligned.snap text files.
- The next morning, when the IMC guardian was changed from LOCKED to DOWN, the old, now invalid, alignment values returned (see LHO aLOG 15968). When the new centering servos started pulling the spots off the MC WFS from the bad alignment, the DOF 4 gains were changed to zero. "Turn it off! Turn it off!"
- When the alignment values were restored, the DOF 4 centering loops were left off, because they didn't instantly work as designed, and there were bigger fish to fry. We now know it's because the MC WFS are so miscentered that, even starting from a good IMC alignment, they pull things off the quadrants.
- These new alignment values remain stored, and have now even been captured in a safe.snap.
- That the IMC can lock by itself, and stay locked robustly, without guardian or WFS, implies that this is indeed a good alignment on the rest of the REFL path (i.e. on the trigger PD and the IMC REFL length diode).
- So, we should re-center the spots on the MC WFS using the picomotors by hand, as I mention above in a previous comment.

I haven't yet acted on this solution, because I wanted to do a few other things tonight (and because of my previously mentioned superstitious fear of picomotors), but I write it down for those who can attack it whenever they can.
Evan and I recentered the IMC WFS; they are working now.
In order to calculate the sensor correction gain, I put HAM2 in this configuration for the night:
All DOFs:
- 750mHz blend
- Sensor correction OFF
I'll put it back to its nominal configuration Monday morning
The matching gains for X, Y, and Z are:
X: 0.9750
Y: 0.8431
Z: 0.8167
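(For context, a minimal sketch of one way a matching gain like these can be estimated from this configuration: the ratio of platform-sensor to ground-seismometer amplitude spectra averaged over a band around the microseism, with sensor correction off and the blends at 750 mHz. The file names, calibration, sample rate, band edges, and even the direction of the ratio are illustrative assumptions, not the actual procedure behind the numbers above.)

import numpy as np
from scipy.signal import welch

fs = 256.0              # [Hz] assumed sample rate of the loaded time series
f_lo, f_hi = 0.1, 0.3   # [Hz] assumed matching band around the secondary microseism

# Placeholder files holding calibrated ground-seismometer and HAM2 platform-sensor
# time series (same units) over the same quiet stretch with sensor correction OFF.
gnd = np.load('ground.npy')
pfm = np.load('platform.npy')

f, p_gnd = welch(gnd, fs=fs, nperseg=int(512 * fs))
_, p_pfm = welch(pfm, fs=fs, nperseg=int(512 * fs))

band = (f >= f_lo) & (f <= f_hi)
gain = np.sqrt(np.mean(p_pfm[band]) / np.mean(p_gnd[band]))
print("matching gain ~ %.4f" % gain)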