[Evan, Jenne, JimW]
We turned the ITM pitch oplev damping back on, set some QPD offsets for POP and transmons, and were able to go to 20W without trouble. We have finally made it to nominal low noise for the first time in weeks! Hooray! Sadly, a 2.84Hz oscillation killed us after about 10 seconds. Work continues, but I was excited enough to write an alog.
1555-1635 hrs. local -> To and from Y-mid
Recent conditions -> 60F-75F days, Dewar ~40% full, vapor pressure 14 psi, LLCV 18% open
[Executed nominal procedure as practiced for 48-hour over-fill intervals]
LN2 at exhaust after 32 min 30 sec with LLCV bypass 1/2 turn open and exhaust check valve bypass open
Next scheduled over-fill: Monday, April 4th before 4pm local.
NOTE: Need to revisit the agreement between the Dewar's local mechanical level gauge and the value indicated by CDS. Also, recommend adopting a new default of X turns open on the LLCV bypass for the new 72-hour over-fill interval (something more open than the previous default of 1/2 turn open).
After seeing that it took 30 minutes to get liquid out, and given the warm weather, I called the control room and had Jim raise the liquid level control valve (LLCV) to 20% from the existing 18%. The next web snapshot showed that the exhaust pressure had risen to 1 psi (from 0.7 psi).
We're looking at replacing the HWS code which is compiled in MATLAB (and quite incompatible with newer versions on Ubuntu) with a more streamlined, Python-based version. The test bed for this is H1HWSEY.
So far, we have tested the following:
- serial_cmd: successfully connects to the internal menu system of the camera
- take command
- Still need to install EPICS/pyepics and CDS tools
State of H1: unlocked for 3.5 hours due to an earthquake
Activities since mid-day update:
Current Activity: Jenne is looking at the alignment
See the attached for an incorrect syntax and the correct way to direct the front end to save the channels in the Science Frames.
An asterisk placed after the desired science-frame rate causes the rate to be interpreted incorrectly. Since the front end did not know what to do with 1024*, it stored the channel in the commissioning frames at 4096 and not at all in the science frames. An asterisk on the channel name is the correct way to go. The directions are all there; I just failed to look closely. I should have learned the necessity of doing so after all these years.
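For concreteness, a sketch of the two forms based on the description above (the channel name here is a placeholder, not the actual channel from the attachment):

```
# Incorrect: asterisk after the rate -- the rate cannot be parsed, so the
# channel falls back to the commissioning frames at 4096
EXAMPLE_CHANNEL_DQ 1024*

# Correct: asterisk on the channel name marks it for the science frames
EXAMPLE_CHANNEL_DQ* 1024
```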
The modification was to a library part so the edits at least were not extensive. The change is committed:
hugh.radkins@opsws1:models 0$ svn commit -m "Corrected syntax error in frame storage"
Sending models/ISI_to_SUS_library.mdl
Transmitting file data .
Committed revision 13015.
hugh.radkins@opsws1:models 0$ pwd
/opt/rtcds/userapps/release/isi/common/models
After Jeff and Hugh made their fixes to all the ISI models and restarted them, I did a DAQ restart.
The restart was normal except that h1fw1 did not come back. It had tried to restart its daqd process and then kernel panicked, similar to Wednesday evening's crash (VFS count 0). I reset the machine by pressing the front-panel RESET button, and it all came back normally after that.
We had two more kernel panics after this restart. The sequence is: the daqd process dies, monit starts a new daqd, the process gets as far as opening its log file, then a kernel panic occurs in the put_cred_rcu function.
After the second panic, I performed a full power-down of h1fw1 and h1ldasgw1. Recovery sequence: power up h1ldasgw1, wait for it to boot, then manually mount /cds-h1-frames and export it via NFS. Then power up h1fw1.
Remember that since the h1susex and h1susey computers were upgraded to faster models (but still running the 2.6 kernel and 2.9.6 RCG), the SUS model timing error rate was reduced but not zeroed.
I have trended 30 days of the FEC-88 (SUS ETMX) and FEC-98 (SUS ETMY) STATE_WORD channels, starting 31 days ago, to determine the rate of TIM (second bit) and ADC (third bit) errors.
h1susetmx had 36 TIM errors and 1 ADC error. 37 errors in 30 days = average of 1 error per 20 hours.
h1susetmy had 125 TIM errors and 1 ADC error. 126 errors in 30 days = average of 1 error per 5.8 hours.
The process to gather the data is to use the command-line NDS1 client to fetch the MAX minute trend of the channel. A Python script then masks out the upper bits of the STATE_WORD, so the overflow and CFC bits are discarded.
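The masking step can be sketched in plain Python. This is my reconstruction, not the actual script: the bit positions follow the entry above (TIM is the second bit, ADC the third), and the trend values below are made up for illustration.

```python
# Sketch of the STATE_WORD masking described above.
# Assumed bit assignments, per the text: bit 1 (value 2) = TIM, bit 2 (value 4) = ADC.
TIM_BIT = 1 << 1   # timing error
ADC_BIT = 1 << 2   # ADC error
MASK = TIM_BIT | ADC_BIT   # everything above these bits (overflows, CFC) is discarded

def count_errors(minute_trend_max):
    """Count minutes with TIM or ADC errors in a STATE_WORD MAX minute trend."""
    tim = adc = 0
    for v in minute_trend_max:
        masked = int(v) & MASK   # drop overflow/CFC and other upper bits
        if masked & TIM_BIT:
            tim += 1
        if masked & ADC_BIT:
            adc += 1
    return tim, adc

# Hypothetical trend: two TIM minutes, one ADC minute
trend = [0, 0, TIM_BIT, 0, ADC_BIT, TIM_BIT, 0]
tim, adc = count_errors(trend)

# Rate arithmetic as in the entry above: 37 errors in 30 days
hours_per_error = 30 * 24 / 37   # ~19.5 hours, i.e. roughly one error per 20 hours
```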
[Keita, Jenne]
We have recorded the label of each cable connected to ISCT6. Anyone may feel free to disconnect all of the cables at any time on Monday morning, to facilitate the moving of the table.
Attached is the set of cable labels, and where they are.
Current (2:43pm local) pressures (Torr): PT-243: 1.07e-9, PT-244: 1.40e-9, PT-210: 1.42e-9
~1230 hrs. local -> Started pump cart
~1255 hrs. local -> Began pumping BSC4 annulus volume -> No change in PT140B resulted
1 April 2016
Site Activities:
15:45 JeffB - LVEA - 3IFO
15:53 JeffB - out of LVEA
15:20 Krishna, Michael - EY BRS-2
17:20 Chandra and Kyle - hook up pump cart to BSC4
17:56 Kyle, Chandra - craning pump cart to prime location (West crane, will restore crane parking)
17:58 Water delivery done at both End Stations
17:59 Fil, Ed - LVEA to evaluate upgrade to PSL electronics
19:30 all but Kyle out of the LVEA
20:03 Kyle out of the LVEA
H1 Activities:
15:20 Aidan, DaveB - EY Hartmann testing code
17:25 COMM PLL red - not sure why, resolved after QPD offsets set
17:25 EX QPD offsets - Evan measuring Dark Offsets, since new hardware was installed
17:30 Hartmann EY EPICS not working - Dave and Aidan working on it
19:30 H1 attempting to lock DRMI, no known issues to prevent full lock
20:00 earthquake, 6.1, Papua New Guinea, guardian in Down
The PI AA/BP chassis was reinstalled at EX this morning. No issues were found with the chassis; see alog 26351. The unit was left powered on so as not to pull down the TMS QPD signals.
Unit installed in Rack SUS-C2, slot U21.
Using the power budget for IOT2L at an IMC input power of 1W (alog 9331), which measured 12mW on the REFL PD, and the modification to the IMC REFL path HWP (alog 9521) for an IMC input power of 10W, which maintained the 12mW on the REFL PD, I've calculated the power the REFL PD saw in O1 at 22W IMC input and will potentially see in O2 at 50W. Summary below; chart attached.
O1 numbers, 22W:
Potential O2 numbers, 50W:
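The projection is simple arithmetic if the REFL PD power scales linearly with IMC input power from the 10W/12mW reference point after the HWP modification; that linearity is my assumption, so this sketch only illustrates the scaling, not the numbers in the attached chart.

```python
# REFL PD power projection, assuming linear scaling from the 10 W / 12 mW
# reference point established after the IMC REFL HWP modification (alog 9521).
P_REF_IN = 10.0      # W, IMC input power at the reference measurement
P_REFL_REF = 12.0    # mW on the REFL PD at that input power

def refl_pd_mw(p_in_watts):
    """Projected REFL PD power (mW) at a given IMC input power (W)."""
    return P_REFL_REF * p_in_watts / P_REF_IN

o1 = refl_pd_mw(22.0)   # O1 operating point, 22 W
o2 = refl_pd_mw(50.0)   # potential O2 operating point, 50 W
```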
Pressure profiles over 1 day and 60 days attached. Current pressures (Torr): PT-243: 1.05e-9, PT-244: 1.35e-9, PT-210: 1.35e-9
The HAM ISI model updates on Tuesday (alog 26321) allow the suspension motion calculation to be done by the ISI. The filters and matrices to convert from the ISI GS13 Cartesian basis to the optic Euler basis are now filled.
The BSC ISIs were updated for this earlier in the month. Alogs: original model updates - 25913, ISI vs SUS calibration filter - 25993, mods for multi-SUS platforms - 26117, library part cleanup/corrections - 26128, BSC filters & matrices populated - 26132, svn commits - 26143.
DetChar can now start pointing to these ISI computations. Look at channels: H1:ISI-{isi platform}_SUSPOINT_{suspension}_EUL_{euler dof}MON.
euler dof is L, T, V, R, P, or Y
isi platform is HAM2, HAM3, ..., HAM6, ETMY, ETMX, ..., BS
suspensions available are ETMY, TMSY, ETMX, TMSX, ITMY, ITMX, BS, MC1, MC2, MC3, PRM, PR2, PR3, IM1, IM2, IM3, IM4, SRM, SR2, SR3, OMC. Of course, you have to pair each suspension with the correct ISI platform.
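A small helper makes the naming template concrete. This is just an illustration of the convention quoted above; whether a given platform/suspension pairing actually exists must be checked against the lists.

```python
# Builds SUSPOINT channel names per the template described above:
#   H1:ISI-{isi platform}_SUSPOINT_{suspension}_EUL_{euler dof}MON
EULER_DOFS = ("L", "T", "V", "R", "P", "Y")

def suspoint_channel(platform, suspension, dof, daq=False):
    """EPICS MON channel by default; daq=True drops the MON suffix,
    giving the DAQ-channel form noted in the follow-up comment."""
    assert dof in EULER_DOFS
    base = "H1:ISI-{}_SUSPOINT_{}_EUL_{}".format(platform, suspension, dof)
    return base if daq else base + "MON"

mon_name = suspoint_channel("ETMY", "TMSY", "L")
daq_name = suspoint_channel("ETMY", "TMSY", "L", daq=True)
```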
Attached is a snapshot of the MEDM screens: the GS13 filter bank and available suspension calcs are accessed from the ISI chamber overview; the transformation matrices are accessed from the calc screen.
SDF files, safe and OBSERVE, have been updated and committed to the svn:
hugh.radkins@opsws1:burtfiles 0$ svn commit -m "updated snaps for the SUSPOINT motion calculations"
Sending burtfiles/h1isiham2_OBSERVE.snap
Sending burtfiles/h1isiham2_safe.snap
Sending burtfiles/h1isiham3_OBSERVE.snap
Sending burtfiles/h1isiham3_safe.snap
Sending burtfiles/h1isiham4_OBSERVE.snap
Sending burtfiles/h1isiham4_safe.snap
Sending burtfiles/h1isiham5_OBSERVE.snap
Sending burtfiles/h1isiham5_safe.snap
Sending burtfiles/h1isiham6_OBSERVE.snap
Sending burtfiles/h1isiham6_safe.snap
Transmitting file data ..........
Committed revision 13006.
hugh.radkins@opsws1:burtfiles 0$ pwd
/opt/rtcds/userapps/release/isi/h1/burtfiles
Above I noted the ISI channels at which DetChar should look, but I listed the EPICS channels:
DetChar can now start pointing to these ISI computations. Look at channels: H1:ISI-{isi platform}_SUSPOINT_{suspension}_EUL_{euler dof}MON.
Of course, the channels of interest would be the DAQ channels, e.g. H1:ISI-ETMY_SUSPOINT_TMSY_EUL_L stored at 1024; these are also going into the Science Frames. Welllll, not just yet.
Whoops, I made a syntax error in my frame-collection designation--see new alog.
These are now being captured correctly in the Science Frames--see 26400.
We've had much more success tonight with the non-broken Xarm Trans QPD. We once again re-centered the spots on the ETMs, although they didn't need much moving. We are able to sit at 10W and 12W just fine now. Now, we're running into regular ol' loop oscillations, so we've been measuring loops at different powers, and trying to re-tune them.
CHARD Y seemed the most egregious, so we created a new control and boost filter combo, which live in FMs 4 and 5. Unfortunately, these filters are totally unusable at 2W, although they improve our stability at 10W, so right now the guardian still only engages the old loop-shape filters. We'll have to re-think the 2W filter situation to make sure we can transition between these filters. For now, we were turning off the CHARD Y loop by hand, changing the filters, then re-engaging the loop. Attached is an open loop gain measurement for the new loop.
We've decided PRC2 pitch is acceptable if we use a factor of 2 less gain.
Now, we're seeing oscillations that also show up in AS 90, so we suspect either the MICH or SRC angular loops. Unfortunately, there's something going on with NDS/the lockloss plotter/something, such that I can't get data from the last ~5 locklosses. The ones before that, I can still get and plot, but it can't find data for the last several even if it's been an hour since that lockloss.
So, next up: Measure the MICH and SRC loops at 10W to see if they're close to unstable. Measure again at 15W, and then think about going from there.
I feel I should know this already, but what is known about the QPD failure (circumstances at failure, failure mode, etc.)?
It's not totally clear to me yet what the exact problem was. R. McCarthy is looking into why (apparently) installing the PI chassis spoiled the signal. Removing the new PI chassis seems to have fixed our problems. See alog 26328 and comments for the symptoms and Rich's comment.
The PI AA was off and its OpAmp inputs were probably 'shorted' to ground due to the input protection diodes.
I am designing the input circuitry for the ITM PI Driver using the same input chip as that used on the ETM PI AA, so I will hedge our bets by including some input protection circuitry (current limit and clamp) to avoid this if that turns out to be the case.
In the lab, Fil reproduced the situation at EX by connecting a function generator to a coil driver test box (D1000931), using only the single-ended-to-differential converter of the board inside (D1000879) to drive the input of the unpowered PI bandpass, which was daisy-chained to the powered AA board.
Things looked OK until the PI input reached about +-1V differential (that is, +500mV on one leg and -500mV on the other); anything larger than that and the voltage started to be pulled down. It looked like a diode and a small resistor in series to me. As soon as the PI bandpass was powered on, everything returned to normal.
Daniel is correct. The chips used on the input to the PI filters have internal input protection diodes that will (up to the limit of their current handling capacity, which is not much over 10mA or so) clamp the voltage from the QPD amplifier to something around a volt. This is not a problem if the PI BPF is powered, which is the normal state of the system. This event prompted a redesign of the differential input to the ITM ESD Driver to avoid this in the future. Another case of incremental learning.
Darkhan, Kiwamu,
We found that the optical follower servo (aka OFS) for Pcal Y had stopped running. Pressing various buttons on the MEDM screen, we could not bring it back up, so we suspect some kind of hardware issue at the moment. According to trend data, it stopped running at around 18:30 UTC on Mar 18th for some reason. We will take a look at the hardware next Tuesday during the maintenance period.
J. Kissel, E. Hall, [R. Savage via remote communication at LLO]
This was rediscovered tonight. Opened FRS ticket #5241 so we don't forget. Email sent to E. Goetz and T. Sadecki suggesting they address it on Monday 4/4.
Further information from our rediscovery: it looks like Kiwamu had left the calibration line amplitude in a low/off state. We restored the line amplitudes to nominal, and this caused excess noise in the DARM spectrum (lots of lines, a slightly elevated noise floor). We don't see any sine-wave-like excitation in the OFS, TX, or RX PDs with a single calibration line on at reasonable amplitude (which is contradictory to the elevated noise in DARM).
Rick suggests:
- Check the AOM.
- Check the shutter.
- Check that the laser hasn't tripped.
We lost lock several times very suddenly within 20 minutes of powering up. Also, POP90 seemed to be escaping upward. Looking at the 36 MHz wavefront sensors, one can see several quadratures of AS A are escaping. (AS A has not been used in any alignment loops for the past six weeks or so). In particular, AS A 36Q seems to correspond to the motion of SRM yaw as seen by its OSEM (see attachment).
We switched control of the SRM yaw loop from AS B 36I to AS A 36Q immediately after powering up to 20 W. This stops POP90 from escaping and seems to hold SRM yaw in place. So far we have been at >20 W for more than 90 minutes. The new setting (−0.5 ct/ct for AS A 36Q to SRC1 yaw) is not in the guardian.
Overall, the carrier power in the PRC seems less stable than before. It seems to have ~0.2 Hz fluctuations that sometimes approach 5% RIN, and correspond to fluctuations in cHard pitch, PRM pitch, and beamsplitter pitch. This frequency seems too low to correspond to an optomechanical resonance, so we suspect some alignment control problem. We tried turning down the PRM pitch loop gain by a factor of 2, but this didn't seem to change anything.
As Sheila anticipated, angular noise in DARM is quite bad right now. Some of it can probably be improved with a careful A2L retuning. The rest may require loop reshaping.
We also chose new QPD offsets for the soft loops and the PRM loops, in order to maximize recycling gain at 2 W.