From looking at a few of the loudest Omega scans in the recent lock, i.e. scan 1 and scan 2, I noticed that ASAIR_A_RF45_Q is glitching in a similar way to DARM. There's about a one-minute period of excess glitching in DARM (first plot). A coherence spectrogram of ASAIR_A_RF45_Q with DARM shows high coherence in the right frequency band, 20 to 500 Hz, only during this time (second plot). The AS45Q channel itself shows excess noise only during this time (third plot). There's no excess coherence of MICH, PRCL, or SRCL with DARM during this time (fourth through sixth plots). I'm not quite sure what this indicates - maybe an excess of junk light, or intensity fluctuations onto the OMC? There's no indication that the coupling to DARM is changing, since the AS45Q channel itself starts glitching at the same time DARM does. We got some loud CBC injections at the start of the lock, so we can hopefully use those to check whether this channel is safe.
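For anyone who wants to reproduce this, here is a minimal gwpy sketch of the coherence spectrogram; the DQ channel names, GPS span, and FFT parameters are my assumptions, not necessarily what was used for the plots above.

# Minimal sketch of the AS45Q/DARM coherence spectrogram with gwpy.
# Channel names, times, and FFT parameters are placeholders.
from gwpy.timeseries import TimeSeries

start, end = 1114617216, 1114617716   # assumed ~500 s span inside the lock
darm  = TimeSeries.fetch('H1:OAF-CAL_DARM_DQ', start, end)
as45q = TimeSeries.fetch('H1:LSC-ASAIR_A_RF45_Q_ERR_DQ', start, end)

# 10 s strides, 1 s FFTs averaged within each stride
coh = as45q.coherence_spectrogram(darm, stride=10, fftlength=1, overlap=0.5)

plot = coh.plot(vmin=0, vmax=1)
ax = plot.gca()
ax.set_ylim(20, 500)                  # band of interest from the Omega scans
ax.set_ylabel('Frequency [Hz]')
plot.savefig('as45q_darm_coherence.png')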
J. Kissel, J. Warner, D. Barker
The HAM6 software / IOP watchdog, which is triggered on the TOP OSEMs of the OMC SUS, tripped at May 02 2015 18:15:50 UTC (GPS 1114625766), shutting down the entire HAM6 SEI system, both HEPI and ISI. As soon as the SEI system was shut off, the BLRMS of the OSEMs that were upset went down. The offending OMC SUS OSEMs were T3 and LF, which sense unrelated DOFs. @DetChar what happened here?
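For whoever picks this up, a rough gwpy sketch for pulling the suspect OSEM signals around the trip is below; the OSEMINF channel names are my guess at the naming convention, so check the DAQ channel list before using them.

# Rough sketch: fetch and plot the OMC SUS T3/LF OSEM signals around the trip.
# The channel names below are assumed, not verified.
from gwpy.timeseries import TimeSeriesDict

trip = 1114625766                     # GPS of the watchdog trip
chans = ['H1:SUS-OMC_M1_OSEMINF_T3_OUT_DQ',
         'H1:SUS-OMC_M1_OSEMINF_LF_OUT_DQ']
data = TimeSeriesDict.fetch(chans, trip - 120, trip + 30)

plot = data.plot()
plot.gca().axvline(trip, color='r', linestyle='--')
plot.savefig('omc_osem_trip.png')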
J. Kissel, J. Warner
We're trying to recover the IFO after the epic 10.5 [hr] lock stretch, but we're having trouble getting past the DC READOUT transition. I attach the last four lock losses:
2015-05-02 16:36:53.342000 ISC_LOCK LOWNOISE_ESD_ETMY -> LOCKLOSS
2015-05-02 16:56:35.161000 ISC_LOCK DC_READOUT -> LOCKLOSS
2015-05-02 17:10:12.335000 ISC_LOCK DC_READOUT_TRANSITION -> LOCKLOSS
2015-05-02 17:36:44.730000 ISC_LOCK DC_READOUT_TRANSITION -> LOCKLOSS
All but the first (which is the end of the 10.5 [hr] stretch) show a ring-up of some 8 [Hz] oscillation. In the DARM ASD this appears as a sharp non-stationarity at 6-8 [Hz], with harmonics at 12 and 18 [Hz], that shows up as soon as the OMC starts to look for the carrier. Is this backscatter? Is this the DARM offset being incorrect? I don't know... we'll keep lookin'; any help is appreciated.
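A quick-look sketch for watching the 6-8 [Hz] ring-up in DARM ahead of one of these lock losses is below; the channel name and GPS time are placeholders I've guessed from the UTC stamps above.

# Sketch: spectrogram of DARM in the few minutes before a lock loss,
# zoomed on the 4-30 Hz band where the 6-8 Hz feature and its harmonics live.
from gwpy.timeseries import TimeSeries

lockloss = 1114621828                 # approx GPS of the 17:10:12 UTC lock loss
darm = TimeSeries.fetch('H1:OAF-CAL_DARM_DQ', lockloss - 300, lockloss)

spec = darm.spectrogram(stride=8, fftlength=4, overlap=2) ** 0.5
plot = spec.plot(norm='log')
ax = plot.gca()
ax.set_ylim(4, 30)
ax.set_ylabel('Frequency [Hz]')
plot.savefig('darm_8hz_ringup.png')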
J. Kissel, J. Warner
Another example. This time we were able to hold in DC_READOUT_TRANSITION, but after ~10 minutes the 6-8 [Hz] non-stationarity would keep popping up, and eventually we lost it. I tried the following:
- Reducing the DARM gain from 600 to 500 (perhaps because the IFO optical gain is different/better/worse, the loop is on the edge of stability) -- no effect
- Reducing the OMC's input ASC QPD servo gain from 0.2 down in 0.025 increments (the DCPD camera shows angular fluctuations, the ASAIR camera looks pretty solid) -- no effect
Maybe this is something to do with some new back-scattering? The non-stationarity only begins to appear once I start to engage the OMC locking, but I don't really understand why locking the OMC would have an effect, given that it should all be back-reflection from the OMC's input coupling mirror...
Not Nutsinee, this is Jim. Didn't see I was still logged in as her.
We are still having difficulty acquiring lock, so I've switched the intent bit; I'll revert it when we re-acquire.
Lock back up at 12:35 local. Jeff spent the last couple of hours changing DARM gains and trying other fixes, but we've reverted everything now and the lock came right back. No idea. Range is even a little better: it was 8-ish [Mpc] when I came in, and it's now a solid 10.
I should add that we did change the end station beam-direction ISI St1 blends from 90 mHz to 45 mHz.
Another lock loss at 15:05 local, so about 2.5 hours for that lock.
J. Kissel, J. Warner
Hooray for Nutsinee! Jim noticed that the same 504.8 Hz violin mode that was giving us trouble is on the high side again, so I've turned on the H1:SUS-ITMY_L2_DAMP_MODE2 damping loop with a little bit of gain to ensure it doesn't ring up any further. I may get another DARM open loop gain measurement and try Ed's Stochastic injection later in the day, but for now (other than violin mode damping) I'm going to leave it undisturbed. Good luck H1!
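For reference, a conceptual sketch of stepping up a mode-damper gain from the command line is below; this is not the procedure used here, the _GAIN suffix is my assumption about the filter-bank naming, and the step values are made up.

# Conceptual sketch only: slowly ramp an assumed MODE2 damper gain with pyepics,
# pausing between steps so the 504.8 Hz line has time to respond.
import time
from epics import caget, caput

gain_chan = 'H1:SUS-ITMY_L2_DAMP_MODE2_GAIN'   # assumed gain channel name

for gain in [0.0, 0.5, 1.0, 2.0]:              # small, arbitrary steps
    caput(gain_chan, gain)
    print('set %s = %.1f' % (gain_chan, caget(gain_chan)))
    time.sleep(60)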
Mini-run Evening Shift Summary
16:50 Wind picking up speed. Blend filter switched to 90 mHz
16:52 PSL tripped
17:00 Kiwamu restarted PSL Front End
18:28 CONNECTION ERRORS (dead channels) on ISC_DRMI (alog 18162)
19:05 CONNECTION ERRORS cleared
20:54 ETMX Watchdog tripped
22:32 Wind speed decreased. I was able to bring the ifo to DC readout for a short time.
23:24 Locking at LOWNOISE_ESD_ETMY
"ESD X driver is tripped" message shows up often, but not causing any trouble.
00:11 Off shift. Leaving ifo locked on LOWNOISE_ESD_ETMY. Observation Intent switched to Undisturbed.
Starting 23:24. Currently on-going.
J. Kissel, N. Kijbunchoo
We've tried several more times to reacquire lock after damping the rung-up violin modes (LHO aLOG 18153), a slow but steady increase in wind up to the current 30-35 [mph] (see 1st attachment), the PSL tripping (LHO aLOG 18159), a mysterious guardian failure (LHO aLOG 18162), and now a vigorous trip of the ETMX seismic isolation because of what I think is my user error. We've run out of steam, and I don't think there's a point in continuing to battle the IFO under these terribly windy conditions. We'll start again tomorrow morning.

Some details from the few lock acquisition attempts after the Guardian failure:
Attempt 1: Just let the guardian try and do its thing. Lost lock at the start of TRANSITION_TO_QPDS (but didn't realize it).
Attempt 2: Noticed the recycling gain (as measured by ASAIR) was low, lost lock again. Because I'd heard Kiwamu talking about low recycling gain vs. high recycling gain causing a sign flip in the ASC loops, I suspected the low recycling gain was the problem. So I had Nutsinee play with the alignment of PR3 and the ITMs to try and get the ASAIR signal back up into the 600s (it was in the 500s). She found that PR3 YAW was most effective (and the recycling gain has stayed in the 600s since she touched it up) -- we still lost lock on the TRANSITION_TO_QPDS.
Attempt 3: After relocking, the recycling gain came up awesomely without having to touch anything. Still lost lock at the same point.
Attempt 4: This time, requested DRMI_LOCKED, such that we could go manually through each of the steps leading up to SWITCH_TO_QPDS. We got as far as the step right before -- REDUCE_CARM_OFFSET -- which completed. I was *just* about to hit go on the TRANSITION_TO_QPDS when we lost lock. Then Nutsinee noticed that the ETMX SEI isolation system had tripped.

After chasing down a bunch of WD trip and lock loss tools, we found the lock loss in the following order:
HEPI 03:54:12 UTC (Actuators)
IFO 03:54:13 UTC
ISI 03:54:19 UTC (ST1 Actuators)
TMSX 03:54:24 UTC

Now, because I was messing around with the ISC_LOCK guardian in manual, I have a feeling it was me somehow sending a huge tidal impulse to HEPI that took down the chamber, but I can't be sure. Looking at the plot of the lock loss, it's certainly a huge impulsive spike that kicks the chamber, not some slow shove from the wind or the tidal signal being huge (again because of wind) and suddenly hitting the edge of its range. Anyways, I don't think there's something systematically wrong with HEPI that we need to freak out about; this was one lock loss of MANY over the past day, and my gut feeling tells me that the problem right now is that TRANSITION_TO_QPDS fails while it's servoing the ALS-C_DIFF_PLL_CTRL_OFFSET, and even during REDUCE_CARM_OFFSET, because there's too much uncontrolled arm angular motion (from wind) for the CARM reduction to happen. According to the guardian logs, this servo seems to stress and eventually break the IMC lock, but I'm not sure if it's a cause or an effect. We're gonna try to lock one more time, because getting to DRMI_LOCKED is incredibly robust, even in these high winds. BUT, we're not gonna log the result if negative and just go home.
Oh -- one more thing: we found something suspicious in the ISC_LOCK guardian, exactly in the TRANSITION_TO_QPDS state definition (line 714):
ezca['LSC-TR_CARM_OFFSET'] = -3.3 #reduced from -3.3 when recylcing gain went from 29 to 40
Seems strange that this OFFSET value would be the same as what the comment says it was reduced *from*. BUT -- this Guardian code hadn't changed since the ~3 hour lock stretch this morning, so I'm not sure if it's a problem. The lock losses happen during the only other thing of substance in this state, the self.servo of ALS-C_DIFF_PLL_CTRL_OFFSET using LSC-ASAIR_A_RF45_Q_NORM_MON as the readback channel.
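To make that last point concrete, here is a caricature of what that offset servo presumably does, written as it might sit inside a guardian state method where ezca is already provided; the setpoint, gain, sign, and cadence are all invented, and the real code uses the self.servo helper rather than an explicit loop.

# Caricature of the ALS-C_DIFF_PLL_CTRL_OFFSET reduction servo (not the real code).
# All numbers below are made up for illustration.
import time

TARGET = 0.0     # assumed setpoint for the normalized ASAIR 45 Q readback
GAIN = -0.01     # assumed small integrator gain; the sign depends on the sensing

for _ in range(100):
    err = ezca['LSC-ASAIR_A_RF45_Q_NORM_MON'] - TARGET
    offset = ezca['ALS-C_DIFF_PLL_CTRL_OFFSET']
    ezca['ALS-C_DIFF_PLL_CTRL_OFFSET'] = offset + GAIN * err
    time.sleep(0.1)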
Jeff switching from Manual to Exec (03:54:19) does not explain the HEPI trip (03:54:12). However, it does correspond to the ISI trip (03:54:19), which probably caused the TMSX trip (03:54:24). Thus the kick on HEPI that caused a lock loss right before REDUCE_CARM_OFFSET is still a mystery...
J. Kissel, N. Kijbunchoo, K. Izumi
While I was peacefully explaining tilt-horizontal coupling to Nutsinee as we waited for the DRMI to lock, it acquired, but the ISC_DRMI guardian node got stuck in the DRMI_1F_LOCKED_ASC state, complaining in the SPF DIFFs that the channel H1:ASC-INMATRIX_P_1_9 (the REFL A RF9 I to INP1_P element) is dead. Kiwamu pointed us to Jamie's solution from the last time this had occurred (see LHO aLOG 17545 for the problem, and LHO aLOG 17548 for the fix), but this time we're 100% confident that no one has made any change to guardian code. I've tried reloading the guardian code, but that's all I'm willing to do. We've been working so hard to get DRMI up since we lost lock from the violin mode problems. I note that Dave has been reporting that the guardian machine is grossly overloaded today (LHO aLOG 18152), but at this point I can only claim these two things are connected anecdotally. I've left a message at the Guardian Help Desk.
D. Barker, J. Kissel, and then J. Rollins
Before the call from Jamie: Dave and I tried chasing a solution to the above problem a little further by not just reloading the guardian code, but restarting the node as Sheila had done in LHO aLOG 17545. Unlike that previous situation, though, the problem cleared with the restart. Regrettably, the node comes up in "INIT", and when I tried requesting the same state it had frozen in, "DRMI_1F_LOCKED_ASC", it dropped the DRMI lock. It didn't kill the ALS COMM or DIFF locks though, which is nice.

Then with Jamie on the phone: By then DRMI had recovered to LOCK_DRMI_1F, but remained stationary, because when you restart a subordinate node, its manager -- in this case ISC_LOCK -- loses management possession and doesn't know what to do. The trouble is that the only way for ISC_LOCK to regain possession of all of its subordinates is to go to the INIT state, which I didn't want to do because we already had ALS COMM, ALS DIFF, and DRMI locked up. Jamie recommended that I try to force regaining possession as follows:
- Put ISC_LOCK in MANUAL mode (via the "all" states subscreen)
- Jump to INIT (this *worked* and repossessed the ISC_DRMI node)
- Jump back to the state it *was* in before it had been put into manual, and switch back to EXEC.
However, upon switching back to EXEC, the IFO lost all locks. Jamie thinks it's because I should have requested the state *after* the one where it was stuck; I think (now while writing this log) that I just went to the wrong state, period (i.e. not the one it was in before I went to manual). *sigh* Oh well. We're on our way back up...
I have created a 10 minute injection file simulating a stochastic source at omega_GW = 1 at 100 Hz, and placed it on the CDS system in:
/ligo/home/edward.daw/research/hardware_injections/2015_05_01/inj10mins.txt
The file was created as follows: on the h1hwinj1 machine,
cd /ligo/home/edward.daw/research/hardware_injections/dependencies/sources/virgo/NAPNEW/SCRIPTS/IsotropicSbGenerator
python IsotropicSbGenerator.py --init IsotropicSbGenerator2.ini
This code generates a single 600 second frame, which I subsequently moved to /ligo/home/edward.daw/research/hardware_injections/2015_05_01/SB_HI_L1-1114555770-600.gwf. To convert the frame to an ascii file, I tried running a local MATLAB, but I couldn't get a license. I therefore shipped the frame to my laptop and used MATLAB interactively:
>> [data,tsamp]=frgetvect('SB_HI_L1-1114555770-600.gwf','H1:strain',1114555770,600);
>> outfile=fopen('inj10mins.txt','w');
>> fprintf(outfile,'%g\n',data);
>> fclose(outfile);
...and finally I used gsisftp to move the resulting text file back to the CDS machine at the above location. The above MATLAB code could easily be used to scale the data by a factor, as it seems you have done with previous injections, if the existing scale proves inappropriate for the injection. Please inject this 10 minute duration signal once the machine is stable and you are ready for more injection tests. Thanks. Ed
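In case the MATLAB license issue comes up again, here is an alternative sketch of the frame-to-ascii step in Python, assuming gwpy with a frame-reading backend is available on the workstations; paths are the same as above, and a scale factor could be applied to the array before writing.

# Alternative frame -> ascii conversion without MATLAB (assumes gwpy + a frame backend).
import numpy
from gwpy.timeseries import TimeSeries

data = TimeSeries.read('SB_HI_L1-1114555770-600.gwf', 'H1:strain')

# One sample per line; multiply data.value by a scale factor here if needed.
numpy.savetxt('inj10mins.txt', data.value, fmt='%g')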
Nice work Ed, Jeff, and Giancarlo getting this ready in time for the mini run. I have a similar question to the one posed by Jeff. Is the output of the file in units of strain, or is it in units of Initial LIGO counts? The reason I ask is that, during Initial LIGO, we used this code, or code like it, to create injection files with a frequency-dependent transfer function applied. For aLIGO, we don't want to apply this transfer function. Would it be possible to make a plot of the amplitude spectral density of the injection file? It should have a power-law shape with index -3/2.
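A possible way to make that plot and eyeball the slope is sketched below; it assumes the text file is one strain sample per line at 16384 Hz, both of which should be checked against the actual file.

# Sketch: ASD of the injection file with an f^(-3/2) reference line.
# Assumes one sample per line at fs = 16384 Hz.
import numpy
import matplotlib.pyplot as plt
from scipy.signal import welch

fs = 16384.0
data = numpy.loadtxt('inj10mins.txt')

f, psd = welch(data, fs=fs, nperseg=int(16 * fs))   # 16 s segments
asd = numpy.sqrt(psd)

plt.loglog(f[1:], asd[1:], label='injection file')
ref = asd[numpy.argmin(abs(f - 100))] * (f[1:] / 100.0) ** -1.5   # anchored at 100 Hz
plt.loglog(f[1:], ref, '--', label='f^(-3/2) reference')
plt.xlim(10, 2000)
plt.xlabel('Frequency [Hz]')
plt.ylabel('ASD [1/rtHz]')
plt.legend()
plt.savefig('inj10mins_asd.png')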
J. Kissel, N. Kijbunchoo, K. Izumi
After deciding that we've damped the 504.8 [Hz] violin mode enough to advance to DC readout, we got a few minutes of DC readout on ETMX, and then we noticed that the wind had begun picking up speed a few hours ago. We lost lock, and during recovery DRMI was taking particularly long to lock back up. From Izumi-san we learned that:
- if it takes particularly long for DRMI to lock up, e.g. greater than ~15 minutes,
- if one hasn't run initial alignment for quite some time, and
- the AS port shows flashes that are not strictly LG00 modes, but LG10/01 or any higher-order modes,
it's likely that one needs to redo initial alignment. We began embarking on initial alignment, and found the arms difficult to lock on green. I suggested we move the blend frequency up on the beam-line directions of ETMX and ETMY, as is common practice when winds are > 20 [mph], as they are now. Just after we increased the blend frequency, the PSL laser tripped at 16:50 PDT / 23:50 UTC. Quick (and at this point only superficial) investigations do not reveal a reason. Regrettably, the global PSL team is rather busy with LLO's laser at the moment (see LLO aLOG 17959 and subsequent entries). Kiwamu and Nutsinee are in the PSL diode room now recovering it (and have done so as I finish this log). The usual perfect storm on the first day of an attempt to leave the IFO alone and collect data!
We see some bursts of noise broadly around 200 Hz (first plot - you may need to squint a bit). I guessed that this was due to non-stationary coupling with SRCL, a la Gabriele. The second plot is a coherence spectrogram for the same five minutes. The coherence is indeed changing in bursts. The third plot is the coherence with MICH, PRC, and SRC over the same time. The coherence with MICH especially is extremely high; I think I saw that the feedforward is disabled?
Yes, we have not yet finished commissioning the MICH and SRCL feed-forward with the better recycling gain. For the sake of robustness, we've decided not to advance beyond "LOWNOISE_ESD_ETMY" to the "LSC_FF" state for the mini run.