(The log is messing with me: my entry from last night got deleted this morning when I tried to add a plot. A new failure mode for me.)

Commissioning Team

Sensitivity status

We achieved 36 Mpc on Friday, improving the sensitivity over the previous day by:
- Adding cut-offs in the ASC DHARD dofs
- Switching off the ETMX ESD driver completely after switching to the low noise ESD on ETMY
- Adding MICH and SRCL feed-forward subtraction (still not optimal, but somewhat effective; see Evan's entry)

We had one single lock lasting about 2 h while making measurements/tests, so the data are not clean (see range summary). We collected only a few minutes of clean data (March 27, 23:50-23:55 UTC, if I remember correctly; Evan, please check this). The second plot shows the progression of sensitivity this week, starting from 16 Mpc (not shown in this plot) to the red curve from last night. We noticed that the high-frequency part of the sensitivity is slightly worse than the previous day, with no apparent reason besides a different initial alignment.

We also learned a few things about data quality from the analysis of the long lock the night before. Sheila and Kiwamu tracked down the ISS glitching problem, which is now fixed (not permanently; we need some automatic way of keeping the ISS happy). Also, Josh et al. pointed out that the long lock the other night showed ETMY L3 DAC glitches, and this needs to be fixed. This adds to the already reported whistles, which also need attention.

Other news
- Daniel and Evan attempted to make the ETMX ESD driver remotely switchable in the afternoon, but this first attempt failed (not a big deal yesterday, as we had only one long lock).
- On the positive side, adding a digital limiter to the ESD correction signal seems to have solved the problem of the EY ESD driver switching off after each unlock (see this entry and comments).

Problems in the evening

The last thing we wanted to get done on Friday to conclude this great week was to close the beam diverters on the transmon. At the same time, some of the low noise steps done outside the Guardian (like the BS M2 coil driver switch to low noise) were added to the Guardian code. These two (apparently innocent) goals created a couple of unexpected problems (see entry 17547 and entry 17548), but I believe at this point Sheila and Jamie have fixed both. Also, the wind started blowing hard right after dinner (see Dan and Evan's entry). For the record, the prophetic words of wisdom that Daniel told us right after the 36 Mpc lock were: "Go home, it can only go downhill from here."
On my drive in I saw a kiddie pool flying across Jadwin Ave, so I knew it would be a good day to work on ALS. The gusts today are up to 45-50 mph; I can hear the building shake when one hits. While the changes I've made so far aren't enough to lock the IFO under these conditions, there is a lot of progress.
One more thing: this morning I opened the X end beam diverter and redid the initial alignment. Now both beam diverters are open. The initial alignment has its own problems when the wind is this high.
In hopes of graphically capturing what Sheila did in this log, namely switching the Fiber PLL's AOM input from a fixed oscillator at 160 [MHz] to the as-originally-planned IMC VCO input, I've updated the diagram I originally put together in G1400519 (and which Alexa and co. cleaned up, made more accurate, and published in P1400105) to form a stand-alone diagram: the CARM / ALS Electro-Optical Controls Diagram, G1500456. I attach a copy of -v1 to this log for easy access.

Focus on the bottom left corner of the diagram, at the input of the red VCO, whose frequency is tuned by the fast output of the IMC common mode board. Its output, as currently shown, is how it's configured now: it's split to head both to the double-passed FSS AOM and to the ALS Fiber PLL AOM. Previously* the path to the ALS Fiber PLL AOM had been replaced with a fixed 80 [MHz] oscillator.

*Recall the history / evolution of this connection: it had always been planned to be this way, and LLO, which commissioned its IFO in a more "natural," or serial, fashion, always had this connection. However, because LHO had commissioned its ALS system and DRMI in parallel during the HIFO Integration Phase, having the IMC control hooked up to the Fiber PLL would have coupled them in a distracting, detrimental fashion. Thus, the team "temporarily" replaced the IMC VCO output with an independent, fixed oscillator; see E1300659. As with many things in LIGO, "temporary" can mean "years," so it's only now that Sheila and Daniel decided that the full IFO was commissioned enough that we could restore the original plan.
The problem experienced yesterday in the ISC_LOCK guardian was caused by an errant instantiation of an ezca object in the main importable part of the bs_m2_switch.py module. This extra ezca instance created a separate EPICS interface that interfered with the one provided by guardian. Avoid doing this, since, as was shown, it causes problems.
It's understandable why one would think this is necessary, though, because of the somewhat obscure way that guardian provides the ezca interface to the usercode. Guardian puts a special ezca instance into the main "built-in" namespace, which makes it universally available to all usercode. When writing usercode modules, though, it's unclear how to emulate this for the purposes of testing, or to make a module usable in other contexts.
I've added the following code to the bottom of the bs_m2_switch.py module. This makes the module directly executable from the command line, outside of the context of guardian. The "__name__ == '__main__'" conditional defines code that will only be executed when the code is directly executed, rather than imported by something else. It then instantiates an ezca object into the "__builtin__" namespace in a way that mimics what guardian does. The primary function defined in the module (m2_switch()) is then callable in the same way it would be in guardian:
if __name__ == '__main__':
    # Mimic what guardian does: install an Ezca instance into the
    # built-in namespace so the module's functions can find `ezca`.
    import __builtin__
    from ezca import Ezca
    __builtin__.ezca = Ezca()
    m2_switch()
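With this in place the module can be run directly from a shell (a hypothetical invocation; it assumes the ezca package is importable):

python bs_m2_switch.py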
This should be considered a prototype for all other similar modules that people want to be executable outside of guardian.
In general, though, I would encourage programmers to make states dedicated to the execution of this kind of logic, which can then be executed on their own via e.g.:
guardian ISC_LOCK BS_M2_SWITCH
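For reference, a minimal sketch of what such a dedicated state might look like (the state name and layout here are illustrative, not the actual ISC_LOCK code):

from guardian import GuardState
from bs_m2_switch import m2_switch

class BS_M2_SWITCH(GuardState):
    def main(self):
        # guardian supplies the builtin ezca instance this relies on
        m2_switch()
        return True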
model restarts logged for Fri 27/Mar/2015
2015_03_27 13:26 h1fw1
one unexpected restart.
Commissioning Team
We will write a more meaningful entry later, but here is a list of things we did/tried this afternoon, so we don't forget:
We collected a ~ 2h lock with range around 36 Mpc (not clean data).
We now have a problem we haven't encountered before. The ISC_LOCK guardian has SPM DIFF errors because of dead channels. There are ten channels related to the ALS_COMM guardian listed as the SPM diffs; I don't know if these are all of them or just the first ten. The attached screenshot shows the list, and also that we can caget these channels. Kiwamu and I restarted the guardians, which made no difference; then we tried destroying them as well, with the same result.
This morning Elli and I retuned the dark offsets for the transmon QPDs; after doing this we were able to do the entire CARM offset reduction without using the in-air transmission PDs. After this succeeded we closed the beam diverters, redid the initial alignment, and attempted to lock. We failed at the TR_CARM step three times in a row, so we opened the beam diverters and redid the initial alignment. After that we were able to lock without using the in-air PDs, which is what we have been doing all day.
The current situation is that the End X beam diverter is closed, and End Y is open. Kiwamu redid the initial alignment in this situation, and when we have ALS locked we can see that the spots are well centered on the X end transmission QPDs. We were hoping to try locking like this, but ran into difficulties with the MICH lock acquisition attempts breaking the lock of ALS DIFF. We saw that the beat note power has slowly drifted down over the last few weeks (from 0.5 dBm to -0.5 dBm), so we thought it could be that a small alignment kick to the BS is enough to lose the beat note completely. It's hard to verify this theory since we don't trust the timing between the Beckhoff and the RCG (alog 17455). I went to the table and touched up the beat note alignment; now it is about 0 dBm.
I tracked down the guardian communication problem to the following code in the /opt/rtcds/userapps/release/isc/h1/guardian/bs_m2_switch.py module:
from ezca import Ezca
ezca = Ezca()
This is NOT OK to put in any module being imported by guardian code. This creates a second, separate instance of the EPICS interaction object, that interferes with the one created and supplied by guardian itself. It effectively breaks all EPICS interaction in the guardian node, which is what happened here.
If you think you need to do something like this please contact me and we'll figure out a better way.
Presumably this module was newly added to the ISC_LOCK node before the problem arose, and the reported breakage happened after it was reloaded. After removing those lines from bs_m2_switch.py everything now appears to be working normally.
As an aside, please always provide FULL DISCLOSURE when reporting problems like this. Usually things don't break spontaneously on their own. Whatever was being done right before you saw the problem is probably what caused it. (Kiwamu: if you didn't know about this change when we spoke on the phone then you're off the hook).
Evan, Dan
The Guardian issue was fixed when we returned from dinner, but then the winds came. We switched the end station ISIs to the 90 mHz blends and increased the tidal gains; the common UGFs are now 0.1 Hz (were 0.05 Hz) and the X-arm tidal UGF is 0.1 Hz (was 0.06 Hz).
Initially we observed the same problem that Sheila reported, the ALS would lose lock as soon as the DRMI started to drive the mirrors. But after mucking with the DRMI alignment for a little while (not at all sure that this helped), I have been able to acquire lock with DRMI a couple of times. A test of applying a large DC offset into the BS longitudinal drive does not break the ALS_DIFF lock, so maybe that problem is no longer a problem.
There is now a new problem: the DRMI Guardian isn't happy with the state DRMI_LOCKED_1F_ASC (which is the requested state from the ISC_LOCK Guardian). Upon finishing this state, it recalculates the path and jumps back to LOCK_DRMI_1F; see the attached log and the attached graph. This has happened three times now.
Also attached is a plot of our progress since Monday night. I am leaving the IFO set to CHECK_IR to acquire ALS data during the windy conditions.
For MICH FF, the filters used are FM6 and FM7 (which are stopbands for the violin modes and harmonics), and FM9 (which inverts the BS M2→M3 plant). The gain is 0.040 ct/ct.
For SRCL FF, the filters used are FM6 and FM7 (as above). The gain is −2.5 ct/ct.
Both feed back only to ITMY L2.
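Schematically (the notation here is mine, not from the entry): if the MICH or SRCL control signal m(t) reaches DARM through a coupling C, and the ITMY L2 drive reaches DARM through an actuation plant P, the feedforward path adds F·m(t) to the ITMY L2 drive and leaves a residual in DARM of

    d_resid = (C + F·P) · m(t),

which vanishes for the ideal filter F = -C/P. The filter banks and gains above are the current approximation to that F.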
Also attached is a text file and noise budget of a good spectrum from today. I would like to remark that
Thank you so much Jamie.
I think bs_m2_switch.py is code that TJ, Sheila, and I have been preparing today. I was aware of the active coding effort TJ has been putting in today, but I simply did not know that bs_m2_switch had already been loaded into the ISC_LOCK guardian. My apologies for not collecting and providing all the information about the coding activities on ISC_LOCK at the time.
Dan: the ISC_DRMI assert_drmi_locked GuardStateDecorator (defined in ISC_library.py) is wrapping the methods in DRMI_LOCKED_1F_ASC, and it returns 'LOCK_DRMI_1F' if DRMI_locked() returns False. The obvious explanation for the behavior you saw is just that DRMI_locked() was False. Here's what's currently defined in that function (at least from what's currently committed to the USERAPPS):
def DRMI_locked():
    #log('checking DRMI lock')
    return ezca['LSC-MICH_TRIG_MON'] and ezca['LSC-PRCL_TRIG_MON'] and ezca['LSC-SRCL_TRIG_MON']
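For context, a minimal sketch of how a decorator like assert_drmi_locked can redirect a state (this follows the guardian GuardStateDecorator pattern; the actual body in ISC_library.py may differ):

from guardian import GuardStateDecorator

class assert_drmi_locked(GuardStateDecorator):
    def pre_exec(self):
        # runs before each decorated state method; returning a state
        # name here makes the node jump there instead
        if not DRMI_locked():
            return 'LOCK_DRMI_1F'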
The EPICS problem was my bad. I imported bs_m2_switch but didn't think of it when I called Jamie; since we weren't calling it, I assumed it couldn't be doing much. Apologies.
Using the latest BRUCO results, I've annotated a hi-res spectrum from last night's lock. Peaks, bumps, and so on are labeled with the largest coherent channels.
There are a few peaks, at 46, 166, 420, and 689Hz, that are coherent with PRCL/SRCL, and they have changed since the blue reference (Mar 19th at 18:00). This might be due to the POP --> POPAIR switch.
The source of the 300Hz lines has been identified, it's the BS violin modes. The frequency is given in Mark Barton's table of predicted violin resonances (thanks Jeff!).
We turned off all the ring heater power supplies yesterday; this got rid of a line at ~74Hz but didn't change the 55Hz line and its harmonics. These lines are certainly due to some electronics in the EX racks.
Features coherent with the IMC WFS and the PSL periscope are probably the same thing (input beam jitter), hopefully from the same source (the PSL PZT mount).
As I think is known, there is a filter solution for the ring heater driver to eliminate the fan noise. It is the same basic design as was used in the Seismic Coil Driver fans. We just need to make these available for installation soon. Added this to my to-do list.
The 160 and 420 Hz peaks are moving in frequency on a timescale of some minutes. They move in a coherent way, in the sense that their frequencies seem to pretty much maintain a ratio of 2.5.
See my previous log entry for an analysis of the line wandering.
It turns out the 55Hz line is actually a 57Hz line, and it's the Hartmann camera. This was explored in detail at LLO a long time ago, I should have remembered.
I have turned off the HWS box in the EX electronics rack. This eliminated the 57Hz line and harmonics from the rack magnetometer, but for some reason the overall noise floor has increased, see attached. Probably turning off the camera is a better solution, as Aidan recommends in the LLO log. We'll see if this affects the noise.
J. Kissel, J. Warner, J. Romie, H. Radkins, N. Kijbunchoo, K. Izumi, G. Moreno

SUMMARY: We just finished a ~7 HOUR lock stretch at our best sensitivity ever, between 32 and 34 [Mpc]! We've all been pleasantly surprised this morning to see that the lock stretch that Sheila, Evan, Dan, and Lisa started last night lasted the entire night.

Unfortunately, as of ~10 minutes ago (~8:40a PDT, ~15:40 UTC), there are GIANT glitches and non-stationarity that keep popping up, spoiling the sensitivity. There's no one in the LVEA, so we're not at all sure what caused the sudden change in behaviour. Wind seems fine at ~5 [mph], ground motion is still pretty low, and the 1-3 [Hz] band hasn't even come up yet. Anxious to explore things, Jim installed some low-pass filters in the HAM5 and HAM6 ISIs at around 8:45 to try to reduce scattering / acoustic coupling to the ISI, and Kiwamu began exploring MICH coupling to DARM. But, as I write this log, we lost lock.

However, for DetChar purposes, one can assume the detector was undisturbed from March 27 9:00 UTC to March 27 ~15:00 UTC.
K. Izumi, J. Kissel For the record: Kiwamu drove down to the EY end station to check: *this* lock loss did *not* cause the EY ESD Driver to trip. Huh.
Looking at spectrograms of DARM during the first hour of lock (first plot) and the last hour (second plot), it seems to me that the noise is more or less stationary, but there are huge glitches. They are so big that you can even see them easily in time domain (third plot). A zoom in is visible in the fourth plot. They look like bursts of oscillations at about 5.5 kHz.
We now believe that the glitches we had in the lock stretch this morning were due to the ISS, which repeatedly unlocked. This is typical behavior for the ISS when the diffracted power is too low. Indeed, the diffracted power had been 4% on average during the time the interferometer was locked, and there was a clear correlation between the arm cavities' power and the ISS diffracted power. See the attached trend of some relevant channels. Elli adjusted the diffracted power this afternoon, so it is now at 8% with only the inner loop closed.
Looking at glitch-to-glitch coupling between auxiliary channels and DARM shows a large number of glitches in the > 1kHz range that are coincident with glitches in CARM, REFL 9 PIT/YAW, and REFL 45 PIT. Interestingly, CARM is highly correlated with high frequency glitches until about 12:00:00 UTC, at which point REFL 45 PIT becomes the stronger veto.
It looks like REFL 45 Q PIT was offset by about 2500 counts during the lock, is it possible that intensity fluctuations on an uncentered WFS are showing up as alignment glitches? I've attached a time series covering 2 hours of the lock from 11 UTC to 13 UTC.
We're currently running code to see if the lower frequency (50-200 Hz) glitches are caused by zero-crossings in the 18-bit DACs.
I've attached an Omicron glitchgram for the whole day, it seems as if the higher frequency glitches and the glitches populating the 50-200 Hz region are the two dominant populations right now. There are also a few high SNR glitches scattered around in the 100-400 Hz region that we'll follow up individually.
-2^16 Crossings in ETMY L3 ESD causing many of the glitches in this lock:
In addition to the arches reported in 17452 and 17506 we found DAC glitches in this lock when ETMY L3 ESD DAC outputs were crossing -2^16 counts. Attached is a PDF with a few examples that were lined up by hand. We will follow up more closely to see if other suspensions and penultimate stages also add glitches. Note: At Livingston, SUS MC2 M3 DACs were also a problem.
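For reference, a minimal sketch of how such crossings can be located in a DAC drive time series (illustrative only, not the actual follow-up code):

import numpy as np

def dac_crossings(dac_counts, level=-2**16):
    # indices where consecutive samples straddle the -2^16 boundary
    above = dac_counts > level
    return np.where(above[1:] != above[:-1])[0]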
If you'd like to see the primary culprits from this long lock, here is a tar file of omega scans (thanks to Joe Areeda) of the loudest 100 glitches between 30 and 200Hz. The vertical lines that repeat are DAC glitches, the crazy wandering features are the arches described in the pages linked above, those two mechanisms account for most of the glitches we see.
The whistles in DARM are still quite prominent in this lock. As briefly summarized in this alog, they happen whenever the PSL VCO frequency crosses 79.2 MHz. This points to the likely cause. This is a reliable enough indicator that we can get some statistics. In the first hour of the current lock (Mar 27 9 UTC to 10 UTC), the whistles happen at a rate of 4 per minute. Below are four spectrograms. The first two show some whistles identified this way. A new feature is that there seems to be a second oscillator very close by, or maybe some harmonic of the beat note. The next two spectrograms show a minute with several whistles, and the next minute where none occurred. Even when none go through zero frequency, you can still see the beat note hovering up near Nyquist.
The attached script will find all the crossings which indicate whistle / RF beat note glitches due to the 79.2 MHz crossing. Currently the PSL VCO readback only appears to be changing once a second, so the time accuracy of glitch finding is only one second. For vetoing purposes, this could be improved by doing a linear fit of IMC-F to the PSL VCO readback, then using the much faster sampled IMC-F to measure the VCO frequency.
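A sketch of that improvement (channel handling and data fetching are omitted; the names here are illustrative, and the 79.2 MHz value is from the entries above):

import numpy as np

def whistle_times(t_fast, imc_f, t_slow, vco_slow, f_cross=79.2e6):
    # Calibrate the fast IMC-F channel into VCO frequency (Hz) with a
    # linear fit against the slow, once-per-second VCO readback.
    imc_at_slow = np.interp(t_slow, t_fast, imc_f)
    a, b = np.polyfit(imc_at_slow, vco_slow, 1)
    vco_fast = a * imc_f + b
    # Whistle candidates: samples where the fitted VCO frequency
    # crosses 79.2 MHz.
    above = vco_fast > f_cross
    idx = np.where(above[1:] != above[:-1])[0]
    return t_fast[idx]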
Here is a plot of coherences with some PEM accelerometers. We have been thinking that the noise around 200-250 Hz was due to the PSL periscope PZT, based on coherences like the one in the lower left plot. However, there is suspiciously similar coherence between the ISCT6 accelerometer and DARM. Indeed, the coherence between these two accelerometers is high in this frequency range, suggesting that some cross talk could be the dominant signal in these accelerometers.
The right two panels show that the accelerometers which have coherence with DARM around 13 Hz and 16 Hz are also coherent with each other, but not nearly as much.
There is some cabling work that needs to be completed on this particular channel (ISCT6_ACC). We hope to bring it online on Monday.
Dan, Evan, Kiwamu, Lisa, Sheila (and earlier advice from Keita)
Today we turned on the DARM boost filter described here, to increase the suppression in the 1-6 Hz band. The boost itself was no problem, but we realized that changing the phase of the open-loop gain around 10 Hz changes the sign of the ETMY and ITMX bounce modes in the DARM error signal, which means the signs of the damping loops need to change as well. This led to a few hours of bounce-mode excitement, but the new signs and phases are now in the Guardian.
The first plot is a comparison of the OMC TRANS intensity noise before and after the boost filter was engaged. The RMS before the boost was 2e-2 /rt[Hz], now it is 3e-3 /rt[Hz]. The OMC TRANS camera is much calmer.
The boost filter also reduced the coherence between the OMC length error signal and DARM. Yesterday, Keita pointed out that the OMC length loop was far, far too coherent with DARM between 1-6 Hz, and this noise was being upconverted around the dither line at 3.3kHz. The upconversion mechanism is not clear; if the OMC length is properly servoed the dither should not generate any RIN (or, very little) and should not upconvert any low-frequency RIN noise to sidebands around the dither frequency. But, that's what's happening. The new boost filter reduces the coherence at low frequency (second plot) and reduces the sidebands at 3.3kHz (third plot), but the coherence at low frequency and at 3.3kHz is not zero (second, fourth plots).
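One back-of-the-envelope way the residual length noise could do this (a sketch, not a measured mechanism): near the top of the fringe the transmitted power goes as

    P(t) ≈ P0 · [1 - k·(x(t) + a·sin(w_d·t))^2],

with x(t) the residual length offset and a the dither amplitude at frequency w_d. The cross term, -2·P0·k·a·x(t)·sin(w_d·t), is a pair of sidebands around the dither line whose amplitude is proportional to the low-frequency residual x(t), which is exactly what the extra low-frequency suppression from the boost would reduce.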
As a precaution against OMC LSC --> DARM coupling, we have reduced the gain of the OMC length loop even further: the OMC LSC boost is now off, and there is a 30Hz rolloff filter (FM3) in the OMC length loop. The UGF is a little more than 10Hz, as described by the green trace from the first figure in this entry.
Keita also recommended changing the dither frequency and changing the dither amplitude and OMC LSC loop gain. We haven't had a chance to do these things tonight. We need to investigate why this noise is being upconverted - note that the amplitude of the 3.3kHz dither line in the OMC RIN increased after the OMC LSC boost was turned off (third plot).
The good ETMY and ITMX bounce mode damping settings with the DARM boost ON are (in M0_DARM_DAMP_V):
The good settings with the boost OFF are:
This switching is handled by the Guardian.
By the way, an ongoing mystery is the source of a 300Hz line in DARM that is very large at the beginning of a lock but steadily decays to the level of the noise floor over a timescale of ~10-20min. The rate of decay is somewhat matched to the BS butterfly mode and the earlier twin peaks noise, but the line is present when we're locked on RF readout, so it doesn't seem to be coming from the OMC. We've checked the IOP inputs that are sampled at 65k and we don't see a line at 16384+/-300Hz that could be aliased, but we should check again at the very start of a lock when the 300Hz line is very prominent.
why use the error signal for bounce mode damping, when the DARM control signal is available?
the phase of G/(1+G) changes very little as the loop gain changes in the case that G>>1 (which should be the case with the Bounce/Roll RG filters on)
Rana,
This is a good point. We are planning to change the pick-off point so that the damping takes the DARM signal from the loop's control output, as you suggested.
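To spell out the point (assuming a simple scalar loop gain G): for |G| >> 1,

    G/(1+G) = 1/(1 + 1/G) ≈ 1 - 1/G,

so the transfer function seen at the control-signal pickoff stays near unity, with magnitude and phase varying only at order 1/|G| as the gain changes; the error-signal pickoff instead sees 1/(1+G) ≈ 1/G, whose phase follows the loop gain directly.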
Dan, Sheila
The attached screenshot shows a one-second trend of the Beckhoff channel that is the fast shutter trigger readback, along with two RCG channels: AS_C SUM (this PD should be the source for the fast shutter trigger) and ASAIR_LF, which should see similar power fluctuations.
The Beckhoff channel seems to be 0.2 seconds ahead of the RCG channels. This timing difference sometimes makes it appear as though the fast shutter shut first, causing the lock loss, as in the second attached screenshot.
We would like to prove definitively that the shutter is only shutting when it should, but we don't have a readback that we trust.
Dave and I looked into this somewhat, and we don't think the timestamps of the EtherCAT systems are used at all by the frame builder. As a matter of fact, the internal clock of these machines was about 3 seconds off. Since it is hard to believe that the EtherCAT systems violate causality, the most likely candidates are the frame builder, the data concentrator, or maybe the EPICS gateway. This would also mean that all data transported through EPICS to the frame builder could be affected. More investigations are pending.