Continuing our look at the test mass drumhead modes, I've set up damping filters for
MODE5 ITMX Drumhead 8162 Hz, MODE12 ITMY Drumhead 8161 Hz
I was able to predict which drumhead frequency corresponded to which test mass based on the frequency shift during the previous night's 4 hour lock: the known ~15 kHz modes of ITMX shifted 0.09 Hz over the four hours, while those of ITMY shifted 0.13 Hz. Accordingly, the 8162 Hz mode shifted 0.07 Hz and I was able to ring it up on ITMX, while the 8161 Hz mode shifted 0.11 Hz and I was able to ring it up on ITMY. Torsional modal frequency is proportional to sqrt(G/ρ) and longitudinal modal frequency to sqrt(E/ρ), where G is the shear modulus, E is Young's modulus, and ρ is the density (see P080069).
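A minimal sketch of the matching logic, using only the numbers quoted above and assuming that the test mass whose known ~15 kHz modes drifted more also shows the larger drumhead drift:

# Pair each drumhead mode with a test mass by matching the rank order of the
# frequency drifts over the 4 hour lock (larger drumhead drift goes with the
# test mass whose known ~15 kHz modes drifted more). Numbers are from above.
shift_15k  = {"ITMX": 0.09, "ITMY": 0.13}   # Hz drift of the known ~15 kHz modes
shift_drum = {8162: 0.07, 8161: 0.11}       # Hz drift of the drumhead modes

pairs = zip(sorted(shift_drum, key=shift_drum.get),
            sorted(shift_15k,  key=shift_15k.get))
for mode, tm in pairs:
    print(f"{mode} Hz drumhead -> {tm}")
# prints: 8162 Hz drumhead -> ITMX
#         8161 Hz drumhead -> ITMY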
We had minimal PI issues through the night. Most PI-related locklosses over the past week could have been prevented with small, early phase changes, and tonight reinforced that. Jim and Nutsinee were both successful at damping; I fixed Jim's earlier problematic MODE17 with a filter change, and the rest of the 4.5 hour lock went smoothly, requiring only one phase tweak a few hours in. We lost lock from the power glitch.
We just had what appears to be a site-wide power glitch at 13:08 UTC. All front ends are down. Richard is here and starting to power cycle.
- Tried switching the guardian back to using REFL WFS instead of POPX WFS for PRC2. This is simply a boolean variable change in lscparams.py: use_popx_wfs = 0. We also updated the beam diverter closing state to use the same boolean variable (see the sketch after this list).
- However, we need to retune the REFL WFS at the new modulation index - we saw the loop run away in Nominal Low Noise.
- We also tried to repeat the jitter coupling reduction with INP1 offsets. While the transfer function was again reduced, the ASC loops were having problems with the large offset - we saw CSOFT ring up at 1 Hz. There was also a ~3 Hz signal visible in CHARD and CSOFT. So much for reproducibility...
- Next we also tried a CHARD YAW offset. Indeed, -3500 ct in CHARD YAW started bringing down the IM4 YAW jitter coupling. As suspected, this probably means we are simply cancelling an unknown coupling with a deliberate misalignment-induced coupling.
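A minimal sketch of how such a flag might gate the sensor choice and the diverter behavior in guardian code; the helper function and channel names below are hypothetical, not the actual ISC_LOCK implementation:

# Hypothetical illustration only: how the boolean in lscparams.py might switch
# the PRC2 WFS sensor and the beam diverter behavior. The channel names and
# this helper are made up for the sketch; the real guardian code may differ.
import lscparams  # site parameter file containing use_popx_wfs = 0 or 1

def select_prc2_wfs(ezca):
    """Pick the PRC2 WFS sensor based on lscparams.use_popx_wfs."""
    if lscparams.use_popx_wfs:
        sensor = 'POPX'
        close_pop_diverter = False   # leave the POP path open for the WFS
    else:
        sensor = 'REFL'
        close_pop_diverter = True
    # hypothetical channel names, written through the guardian's ezca object
    ezca['ASC-PRC2_WFS_SELECT'] = sensor
    ezca['SYS-MOTION_C_BDIV_POP_CLOSE'] = int(close_pop_diverter)
    return sensor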
I've updated the PI Damping Operator Wiki with step-by-step instructions for damping PIs. These instructions cover basic damping for modes we already know about and repeat exactly what I've worked through with almost all of the operators by now. They should be enough to walk you through damping most modes that might ring up during a longer lock. We've also added a verbal alarm to alert you if a PI has started ringing up.
The wiki can be opened from the LHO Ops Wiki ('PI DAMPING') and from the PI Overview MEDM screen:

Many thanks to Nutsinee for the awesome edits.
The PSL trip took down the IFO, or vice versa. No commissioners are on site currently, so I'm calling it a night (with Corey's approval). Stefan and other commissioners made noise about coming back in to try to recover, but I don't have the procedure to restart the PSL.
TITLE: 09/29 Day Shift: 23:00-7:00 UTC
STATE of H1: PSL down
SHIFT SUMMARY: Lock was generally going well until the PSL went down. PIs were a learning experience; vigilance is required.
LOG:
23:00 IFO almost locked at end of Corey's shift
1:15 DRMI flashes non-existent, did initial alignment through MICH_DARK
2:00 NLN, PI Mode 26 immediately started ringing up, but tweaking phase brought it back down.
3:30 4 PIs rang up (17, 25, 27 & 2), breaking lock before I could make any improvements on any of them; 17 rang up first
3:50 NLN, PI 17 rang up around power up, but managed to bring it down again
4:47 Lockloss, PSL is down
I have restarted the PSL. Looking at the PSL status screen, I determined that the situation was exactly the same as yesterday's (30063). However, because it was already late in the night, I decided not to call the experts and instead carried out the same recovery procedure as yesterday myself. I added 200 ml of water to the chiller.
Thanks for handling PIs, Jim. You saved a few locks.
Got in right as Jim was leaving. Looking back at the spectrum, only MODE17 actually rang up during the 3:30 event; all the other modes just showed increasing RMS due to bleed-over from the extremely heightened mode (note that MODE27 did not ring up, it was MODE10 - I've changed the color to make this more distinguishable). MODE17 had not been problematic since I implemented stepping filters and a smarter guardian to handle it (several days now).
At the 3:50 power-up, MODE17 was still significantly rung up (since Jim relocked so quickly), hence we saw it right away. If lock is lost due to a PI and you immediately relock, pause at DC_READOUT before powering up to ensure the mode is damped first. I'll go through this with the operators.
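A minimal sketch of the kind of check this implies, with a hypothetical RMS monitor channel pattern and an arbitrary threshold:

# Sketch only: before requesting power-up after a PI-related lockloss, verify
# the previously rung-up mode has damped back down. The channel pattern and
# threshold are placeholders, not the actual operator procedure or guardian code.
import time

def wait_until_damped(ezca, mode=17, threshold=1.0, poll=10):
    """Hold (e.g. at DC_READOUT) until the given PI mode's RMS is below threshold."""
    chan = f'SUS-PI_PROC_COMPUTE_MODE{mode}_RMSMON'  # hypothetical channel name
    while ezca[chan] > threshold:
        time.sleep(poll)   # wait while the mode rings down before powering up
    return True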
Carlos, Dave:
h1fs1 kernel panicked, apparently during its 08:39 PDT backup of h1fs0. We rebooted it via a power cycle and manually got the ZFS file system back into sync so the hourly cronjobs could continue to run.
TITLE: 09/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Jim W.
SHIFT SUMMARY:
OK shift. Had issues locking in the middle of the day (completely dead DRMI), but Sheila tracked this down to a misaligned PRM (Stefan alogged this). After that, took H1 up to a state for the Commissioners, and that is where they are right now.
LOG:
Updated Nuc6 so the PI Strip Tool has Terra's new mode.
(Betsy, Gerardo)
This morning we added 500 ml of water and walked/crawled the length of the pipe looking for water drops; none were found. The inside of the TCS-Y table was not inspected.
model restarts logged for Wed 28/Sep/2016
2016_09_28 11:38 h1nds0
2016_09_28 13:29 h1nds0
Two unexpected h1nds0 restarts; hopefully the memory upgrade will fix this.
WP 6199: The memory for h1nds0 was increased from 24 GB to 48 GB, and daqd was reconfigured to make use of the additional RAM. This was done in an attempt to solve the nds0 restarts that have been occurring. h1nds0 now has the same amount of memory as h1nds1. The daqd for h1nds1 was also reconfigured to use more of the available RAM, but was not restarted; the change will take effect with the next restart of the DAQ. Note that nds0 and nds1 are now configured the same.
Daniel, Stefan
We measured the ISS out-of-loop RIN using a straight-shot beam to the OMC at 25 W (35.9 mA DCPD_SUM). After choosing some better misaligned positions, we got about 1e-8/rtHz between 150 Hz and 800 Hz (see attached plot).
The first attempt was limited by scatter noise from misaligned optics. The following new misalignment TEST offsets cut down a lot on that scatter noise:
PRM_P 520 (unchanged)
PRM_Y -3200 (was -6200, +3000) - this one was not updated yet, because nominally the PRM should dump the beam onto a beam dump on top of HAM1.
SRM_P 707
SRM_Y -1910 (was 1090, -3000)
ITMY_P 40.3
ITMY_Y 53.1 (was -46.9, +100)
None of those new offsets are in SDF yet.
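For illustration, a sketch of how offsets like these could be written from a Python session; the channel pattern (top-stage TEST bank) and the ezca constructor usage are assumptions and should be checked against the SUS screens before use:

# Sketch only: writing the TEST misalignment offsets listed above. The channel
# pattern is an assumption (top-stage TEST bank: M1 for the HSTS triples, M0
# for the ITMY quad); confirm the actual bank on the SUS MEDM screens first.
from ezca import Ezca

ezca = Ezca(ifo='H1')   # assuming the usual ezca constructor

offsets = {
    ('PRM',  'P'):   520.0,
    ('PRM',  'Y'): -3200.0,   # not applied yet, per the note above
    ('SRM',  'P'):   707.0,
    ('SRM',  'Y'): -1910.0,
    ('ITMY', 'P'):    40.3,
    ('ITMY', 'Y'):    53.1,
}

for (optic, dof), value in offsets.items():
    stage = 'M0' if optic.startswith('ITM') else 'M1'
    ezca[f'SUS-{optic}_{stage}_TEST_{dof}_OFFSET'] = value   # hypothetical pattern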
Here is a plot of some of the noise projections made from broadband injections last night.
The frequency noise coupling is from Evan's measurement (alog 29893) and the spectrum of the (almost) in-loop sensor REFL 9 I, so the sensor noise imposed by the loop still needs to be added. It is interesting that some of our jitter peaks show up in frequency noise. Frequency noise is above the DARM noise floor for the peak at 950 Hz. As Jenne wrote (29817), this peak has sometimes gone away when the 9 MHz modulation depth is reduced, although not always.
In the bucket, MICH, SRCL, frequency and intensity noise are all too small to explain our broadband noise.
Edit: Attaching the plot this time
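For reference, a projection of this kind is just the witness channel's spectrum scaled by the magnitude of the measured coupling transfer function; a minimal sketch with placeholder arrays:

# Minimal sketch of a broadband-injection noise projection: multiply the
# witness channel's ASD by the magnitude of the measured coupling transfer
# function (witness -> DARM). Inputs here are placeholders, not real data.
import numpy as np

def project_noise(asd_witness, tf_witness_to_darm):
    """Projected contribution to the DARM ASD from one witness channel.

    asd_witness        -- ASD of the witness channel (e.g. REFL 9 I)
    tf_witness_to_darm -- complex coupling TF measured from the broadband injection
    """
    return np.abs(np.asarray(tf_witness_to_darm)) * np.asarray(asd_witness)

# For an (almost) in-loop witness like REFL 9 I, the sensor noise imposed by
# the loop still has to be added on top of this, as noted above.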
Stefan will be conducting OMC measurements during this meeting.
TITLE: 09/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1 dropped out at about 3:30 am PDT, hovering just under 80 Mpc.
AS AIR video looked obviously misaligned. Just opted to start an Initial Alignment.
Jason (on phone), Jim, Sheila, Stefan, Terra, Kiwamu,
The PSL tripped twice tonight, once at around 2:06 UTC and again at 3:36 UTC. For the first trip, Jason remotely walked us through the PSL recovery procedure on the phone. For the second trip, we could not reach Jason on the phone, but since the fault status was exactly the same, we repeated the same recovery procedure by ourselves. Both times, the laser came back up without issues. The attachment shows the PSL status screen before we cleared the errors for the first time. Each time, we had to add a few hundred ml of water to the chiller to clear the warning. Also, when we went to the chiller the second time, there was water not only on the floor but also on the door and the chiller itself; it looked as if water had been sprayed. Not sure if this is the same as what Sheila reported a few days ago (29964).
Filed FRS #6324.
After an alignment, I wanted to run the INIT state for ISC_LOCK but noticed that it was nowhere to be found. Mentioned this to TJ, and he ended up running INIT by hand via the Guardian command line.
This was due to a bug in the guardian MEDM states screen generation. Fixed in version 1.0.1 which was pushed yesterday.
I opened the DBB shutter for the HPO to make some measurements. After 30 seconds or so, the DBB went to "interlock" mode. Resetting the interlock by going to "stand by" and then to "manual" immediately brings the system back to "interlock" mode.
This is not triggered by the oscillation monitor (the so-called "software interlock" in T0900576), as the oscillation monitor output (H1:PSL-DBB_INTERLOCK_LP_OUT) never goes above the threshold (currently set to 1000). It should be the so-called "hardware interlock". However, the PSL documentation is convoluted enough that I have a hard time finding what closes the shutter when there is a "hardware" interlock trigger: is it the DBB front end, or is there a parallel hardware circuit that closes the shutter regardless of the DBB front end?
Anyway, according to Jason this has been a problem for a while, and then it went away on the day of the power outage.
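For reference, the check that ruled out the software interlock amounts to confirming that the oscillation monitor output stayed below its threshold around the trip; a minimal sketch using the nds2 client, with placeholder GPS times:

# Sketch of the check described above: fetch H1:PSL-DBB_INTERLOCK_LP_OUT
# around the trip and confirm it never crossed the software-interlock
# threshold (currently 1000). The GPS span below is a placeholder.
import nds2

THRESHOLD = 1000
start, stop = 1159500000, 1159500120   # placeholder GPS span around the trip

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
buf = conn.fetch(start, stop, ['H1:PSL-DBB_INTERLOCK_LP_OUT'])[0]
peak = buf.data.max()
print('peak over span:', peak)
print('software interlock' if peak > THRESHOLD else 'not the software interlock')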
Filed FRS #6330. I asked Fil to take a look at the AA and AI chassis associated with the DBB during the next maintenance window (10/4/2016). If those check out, the problem is likely in the DBB control box that lives in the PSL rack in the LVEA.