Late post from Tuesday evening, when PNNL disrupted offsite network access for maintenance. The GRB alert system running on h1fescript0 reported that it was unable to contact the GraceDB resource, then seamlessly reconnected when network access was restored. The outage lasted just over a minute, so the CAL_INJ_CONTROL MEDM screen would have been RED for about a minute.
[ext-alert 1117332997] CRITICAL: Error querying gracedb: [Errno 113] No route to host
[ext-alert 1117333060] CRITICAL: Error querying gracedb: [Errno 110] Connection timed out
[ext-alert 1117333063] CRITICAL: Error querying gracedb: [Errno 113] No route to host
[ext-alert 1117333064] CRITICAL: Error querying gracedb: [Errno 113] No route to host
There have been a few uncommitted commissioning changes to filter files and guardian code, and since the IFO is running reasonably well for ER7, Dave and Sheila confirmed it was time to commit them all to the SVN. Dave and I are working to clean house in all subsystems. I have committed:
All SUS.txt filter files
All 2 ASC filter files - H1ASC.txt, H1ASCIMC.txt
All 2 ALS filter files - H1ALSEX.txt, H1ALSEY.txt
All LSC.py guardian scripts
Note, all SUS guardians are up-to-date in the SVN.
When we went to commit H1SUSMC2 it failed with an error stating the file is "out of date". So we copied the file to a backup and used a workaround:
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ cp H1SUSMC2.txt H1SUSMC2.txtbak
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ svn up H1SUSMC2.txt
Conflict discovered in 'H1SUSMC2.txt'.
Select: (p) postpone, (df) diff-full, (e) edit,
(mc) mine-conflict, (tc) theirs-conflict,
(s) show all options: ^Csvn: Caught signal
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ svn revert H1SUSMC2.txt
Reverted 'H1SUSMC2.txt'
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ svn up H1SUSMC2.txt
U H1SUSMC2.txt
Updated to revision 10745.
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ cp H1SUSMC2.txtbak H1SUSMC2.txt
This, however, caused the FE DAQ status to show the error that the filter file had changed. Indeed the file changed name and then back, but we confirmed that the contents of the file are the same, so we will have to hit the LOAD COEFF button on MC2 to clear the FE alarm.
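(For the record, a minimal Python sketch of an equivalent content check; we did the comparison by hand, and the file names below are the ones from the transcript.)

import filecmp
# compare the svn-updated file against the saved backup, byte for byte
# (this is the check to run before the final cp restores the backup)
assert filecmp.cmp("H1SUSMC2.txt", "H1SUSMC2.txtbak", shallow=False)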
Just before setting the intent bit for the current lock stretch, Patrick, Kiwamu and I looked at SDF. A few red alarms needed attention, and we set the SDF values accordingly.
11:32 Richard turned off lights in the LVEA; reports that the crane lights are still on. Lights are stuck on at end Y.
11:49 Karen getting paper towels out of the mechanical room.
12:24 Kiwamu and Betsy have gone through the SDF differences. Went to Science mode and notified the LLO control room.
11:28 Dave reloaded filter module coefficients for h1sus frontend. (WP 5245)
Dave reloaded filter module coefficients for h1lsc frontend. (WP 5245)
Locked on LSC_FF. Bubba is going to move a HEPI to mid X, so we are not putting H1 in science mode. Range is glitching.
10:51 Bubba moving HEPI and forklift on trailer in through the gate coming from the LSB. HEPI is in the back of a pickup, forklift is on a trailer. Heading to mid X.
10:55 Bubba is a little more than half way to mid X.
10:59 Bubba made the turn into mid X.
10:59:29 Lock loss.
11:13 Bubba at mid X, starting to open the door and move the HEPI in.
11:49 Bubba driving back.
12:12 Bubba done.
We set up the new RF source and a spare in the EE shop with the custom 9.100230 MHz OCXOs. These OCXOs have a very small tuning coefficient of 0.2 ppm/V, about 25 times lower than the others. This affects the gain of the PLL, and it takes about an hour to lock (see Timing_XOLocking_lowgain.pdf).
A new revision of the 1PPS locking FPGA code (E1200033-v3, version 3, svn 103) has been released which adds gain selection through the DIP switches. The attached figures show the step response for gains of 4, 16 and 64, respectively (Timing_XOLocking_gain4_step.pdf, Timing_XOLocking_gain16_step.pdf and Timing_XOLocking_gain64_step.pdf). The PLL is still under-damped with a gain of 4, but over-damped with a gain of 64. With a gain of 16 the RF source locks in about 5 minutes.
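For intuition, here is a minimal second-order PLL model in Python (a sketch only, not the E1200033 loop design; the PI coefficients kp, ki and the base loop gain k0 are assumptions picked to reproduce the observed trend of damping increasing with gain):

import numpy as np
from scipy import signal

kp, ki = 1.0, 0.05   # assumed PI controller coefficients (not from the FPGA code)
k0 = 0.005           # assumed base loop gain

for gain in (4, 16, 64):
    k = gain * k0
    # open loop k*(kp + ki/s)*(1/s) closed around unity feedback
    sys = signal.TransferFunction([k * kp, k * ki], [1.0, k * kp, k * ki])
    t, y = signal.step(sys, T=np.linspace(0, 1200, 4000))  # step response vs time [s]
    zeta = k * kp / (2 * np.sqrt(k * ki))                  # damping ratio
    print(f"gain {gain}: damping ratio {zeta:.2f}")        # ~0.3, ~0.6, ~1.3

In this toy model the damping ratio grows as sqrt(gain), consistent with gain 4 being under-damped and gain 64 over-damped.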
Added H1:GRD-ISC_LOCK_STATE_N to the include list and regenerated the channel list. 82 channels got added:
+ H1:CAL-INJ_ODC_BIT13
+ H1:GRD-ISC_LOCK_STATE_N
+ the 80 channels H1:OAF-{BOUNCE,ROLL}_{ALL,ETMX,ETMY,ITMX,ITMY}_{GAIN,LIMIT,OFFSET,RSET,SW1S,SW2S,SWSTAT,TRAMP}
inserted 82 pv names, deleted 0 pv names
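To make the pattern explicit, a sketch that regenerates the 80 OAF names:

from itertools import product

oaf_names = [f"H1:OAF-{mode}_{optic}_{suffix}"
             for mode, optic, suffix in product(
                 ("BOUNCE", "ROLL"),
                 ("ALL", "ETMX", "ETMY", "ITMX", "ITMY"),
                 ("GAIN", "LIMIT", "OFFSET", "RSET",
                  "SW1S", "SW2S", "SWSTAT", "TRAMP"))]
assert len(oaf_names) == 80  # plus the 2 channels above = 82 new pv names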
Winds are around 10 mph. Seismic noise in the 1 - 3 Hz and 3 - 10 Hz bands has come up.
08:09 Beam tube cleaning is starting.
Corey reports trouble maintaining lock in the transition to REFL_TRANS. Dan says it is losing lock when setting the TR_REFL9 and TR_CARM elements of the LSC input matrix.
model restarts logged for Wed 03/Jun/2015
2015_06_03 00:30 h1fw1*
2015_06_03 13:43 h1fw0*
* = two unexpected restarts
Environment: Fairly quiet shift, with only moderate wind for the first third of the shift (it died down after that).
At 12:47 UTC (5:47 am PDT) an 8+ hour lock ended.
Lockloss possibilities: PR3 oscillations a few seconds prior. A seismic event was seen in the 0.03 - 0.1 Hz band, but no earthquakes were reported online. No wind at all.
Acquisition afterward hasn't been trivial. The Y-arm was a bit off, so we tweaked ETMy. Noticed a bit of a yaw misalignment on the AS_AIR video while locked on Michelson, so we moved the BS -0.3 clicks in yaw, which made the flashes much better when attempting DRMI acquisition. We have taken H1 up to various points in the Guardian sequence prior to the Engage_ASC step. Alignment looks good, so we fear there is another issue with H1.
Dan does not see anything obvious via oplevs.
Some small changes were made to the Guardian code (mainly ramp-time changes, to investigate why we keep dropping out of lock; basically we want to fix H1!). Technically this is a Guardian code change and should have had a work permit.
Shift Summary:
Decent shift with H1 locked for over half of the shift. Frustrating lock acquisition after the lockloss at the end of the shift.
Dan, Corey
Tonight we followed up the measurement of ITMX's bounce mode Q with a measurement of the roll mode. Same procedure: flipped the sign on the damping gain, excited the mode, zeroed the damping gain, allowed the mode to decay.
This time we lost lock after two hours, but this was enough for a clean measurement. Also, as mentioned in a comment to the bounce mode entry, I did a better job of anti-aliasing before decimating the data for demodulation, and the fit to the RMS channel and the fit to the demodulated data are in good agreement.
Q from the RMS channel: 456,700
Q from demodulated DARM_ERR: 458,000
The mode frequency is 13.978Hz, plots attached. Btw, I chose ITMX for these measurements because it receives the least amount of actuation -- ETMY gets DARM_CTRL, both ETMs are controlled by DHARD and CHARD, and ITMY is used for the SRCL FF. We feed DSOFT and CSOFT back to the ITMs, and they both have oplev damping in pitch, but these loops should have very little gain at the bounce/roll frequencies. (Also there are bounce/roll bandstops in the L2 and OL filter banks.) So, there's not much chance that the ITMX modes are being damped/excited by control loops.
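For reference, a minimal sketch of the demodulate/decimate fit described above, assuming x is the DARM_ERR ringdown at sample rate fs; data fetching and the exact anti-aliasing filter are placeholders, not the code actually used:

import numpy as np
from scipy import signal
from scipy.optimize import curve_fit

def fit_mode_q(x, fs, f0, decim=1024):
    t = np.arange(len(x)) / fs
    # demodulate at the mode frequency; the envelope appears near DC
    z = x * np.exp(-2j * np.pi * f0 * t)
    # anti-alias BEFORE decimating: lowpass well below the decimated Nyquist
    sos = signal.butter(8, 0.5 * (fs / decim) / (fs / 2), output="sos")
    env = np.abs(signal.sosfiltfilt(sos, z))[::decim]
    td = t[::decim]
    # amplitude ringdown A(t) = A0*exp(-t/tau); for amplitude decay Q = pi*f0*tau
    (a0, tau), _ = curve_fit(lambda t, a0, tau: a0 * np.exp(-t / tau),
                             td, env, p0=(env[0], 1e4))
    return np.pi * f0 * tau

# e.g. fit_mode_q(darm_err, 16384.0, 13.978) for the roll mode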
Noticed L1 went down around 10:30 UTC. I talked with Danny at LLO to confirm they were down (he estimated the quickest they could be back up would be 40 minutes), so Dan & I took H1 out of Science Mode at ~10:33 UTC (3:33 am PDT) to perform a list of items we had been waiting to do.
#1 Ring up Roll mode for ITMx
The roll mode was rung up by changing H1:SUS-ITMX_M0_DARM_DAMP_R_GAIN from its initial value of +20 to a negative value (we stepped from -5 to -200), observing the roll mode increase, then damping it a little (gain to +2.0), and then setting the gain to 0.0 (where it was left). Dan will take some measurements of the roll mode.
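(For the record, a minimal pyepics sketch of the sequence we stepped through by hand; the intermediate gain steps and dwell times are illustrative, not a script we actually ran.)

import time
from epics import caput

CH = "H1:SUS-ITMX_M0_DARM_DAMP_R_GAIN"   # nominal gain is +20

for g in (-5, -50, -200):   # flipped sign anti-damps the roll mode
    caput(CH, g)
    time.sleep(60)          # watch the roll mode peak grow
caput(CH, 2.0)              # damp a little
time.sleep(60)
caput(CH, 0.0)              # leave undamped for the free-decay measurement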
#2 OMC Transitioned From "QPD Alignment" to "Dither Alignment"
Ramped H1:OMC-ASC_MASTERGAIN from 0.1 down to 0.0, switched from QPD to Dither, and then took the master gain to 0.05. After this, we watched the OMC alignment lines (a group of lines between 1660-1760 Hz) decrease as the dither system improved the alignment. We kept the OMC in this state.
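(Again a minimal pyepics sketch of the hand-off, assuming a simple linear ramp; the QPD/Dither switch is an MEDM button whose channel name isn't recorded here, so it is left as a comment.)

import time
import numpy as np
from epics import caput

def ramp(ch, start, stop, duration, steps=50):
    # step the EPICS value linearly from start to stop over `duration` seconds
    for v in np.linspace(start, stop, steps):
        caput(ch, float(v))
        time.sleep(duration / steps)

ramp("H1:OMC-ASC_MASTERGAIN", 0.1, 0.0, 30)
# ... flip the QPD -> Dither switch here (MEDM; channel not recorded) ...
ramp("H1:OMC-ASC_MASTERGAIN", 0.0, 0.05, 30)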
#3 ETMy Violin 1st Harmonic Suppression
Referencing Dan's alog #17365, we switched the MODE3 filters for ETMY (on the SUS_CUST_L2_DAMP_MODE_FILTERS.adl MEDM screen) and played with the gain. We were surprised to end up with a gain of -200 (we originally tried +100 and even higher).
H1 back to Science Mode at 11:03 UTC (4:03 am PDT).
NOTE: Our range appeared to take a little step up to 54 Mpc, but we've also seen it nosedive down to 43 Mpc... hopefully it will stabilize. We do not see any environmental effect that would explain this sudden drop in the last 30 minutes. We currently have a +7.5 hour lock.
[Time for lunch.]
Winds have all died down.
H1 has stabilized, but we had a 9 Mpc step down starting at ~11:30 UTC and have continued to stay at this range.
Chris Pankow, Jeff Kissel, Adam M, Eric Thrane

We restarted tinj at LHO (GPS=1117411494) to resume transient injections. We scheduled a burst injection for GPS=1117411713. The burst waveform is in the svn. It is, I understand, a white noise burst, and is, for the time being, our standard burst injection waveform until others are added. The injection completed successfully. Following this test, we updated the schedule to inject the same waveform every two hours over the next two days. The next injection is scheduled for GPS=1116241935. However, this schedule may change soon as Chris adds new waveforms to the svn repository. We are not carrying out transient injection tests at LLO because the filter bank needs to be updated and we cannot be sure that the filters are even close to correct. Adam thinks they will be updated by ~tomorrow.
First, some details: the injection is actually a sine-Gaussian with parameters:

SineGaussian
t0 = 1116964989.435322284
q = 28.4183770634
f0 = 1134.57994534

The t0 can be safely ignored since this injection would have no counterpart in L1, but it should be noted that this injection *does* have all relevant antenna patterns and polarization factors applied to it (i.e. it is made to look like a "real" GW signal). I attach the time and frequency domain plots of the waveform; however, they are relative to an O1-type spectrum and so may not be indicative of the actual performance of the instrument at this time. Given the most recent spectra and the frequency content of the injection, it could be weaker by up to a factor of ~2-3. The characteristic SNRs I calculated using the O1-type spectrum:

Waveform SineGaussian at 1116964989.435 has SNR in H1 of 7.673043
Waveform SineGaussian at 1116964989.435 has SNR in L1 of 20.470634
Network SNR for SineGaussian at 1116964989.435 is 21.861438

So it's possible that this injection had an SNR as low as ~2, not accounting for variance from the noise. The excitation channel (H1:CAL-INJ_TRANSIENT_EXCMON, trended) does show a non-zero value, and the "count" value is consistent with the amplitude of the strain; another monitor (H1:CAL-INJ_HARDWARE_OUT_DQ, at a higher sample rate) also shows the full injection, though it is not calibrated. So the injection was successfully scheduled and looks to have been made. I also did an omega scan of the latter channel, and the signal is at the correct frequency (but has, notably, a very long duration).

I did a little poking around to see if this showed up in h(t) (using H1:GDS-CALIB_STRAIN). Unfortunately, it is not visible in the spectrogram of H1:GDS-CALIB_STRAIN (attached). It may be some peculiarity of the scheduling, but it's interesting to note that the non-zero excitation occurs about a second after the GPS time that Eric quotes. More interestingly, this does not seem to have fired off the proper bits in the state vector. H1:GDS-CALIB_STATE_VECTOR reports the value 456 for this period, which corresponds to the data being okay and gamma being okay, but no injection taking place. It also appears to mean that no calibration was taking place (bits 3 and 4 are off). I'm guessing I'm just misinterpreting the meaning of this.

I'd recommend, for future testing, a scale factor of 3 or 4, to make the injection *clearly* visible and give us a point of reference. We should also close the loop with the ODC / calibration folks to see if something was missed.
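(For completeness, the characteristic SNR quoted above is the usual matched-filter norm. A minimal sketch, assuming h is the time-domain injection waveform sampled at fs and psd is the one-sided PSD evaluated on the rfft frequency grid; not the script actually used.)

import numpy as np

def characteristic_snr(h, fs, psd):
    # SNR^2 = 4 * integral of |h(f)|^2 / S(f) df, one-sided
    hf = np.fft.rfft(h) / fs          # approximate continuous Fourier transform
    df = fs / len(h)
    return np.sqrt(4.0 * np.sum(np.abs(hf) ** 2 / psd) * df)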
I can see that burst injections have been occurring on schedule, generally. The schedule file (which you currently have to log into h1hwinj1 to view) reads, in part:

...
1117411713 1 1 burst_test_
1117419952 1 1 burst_test_
1117427152 1 1 burst_test_
...

Compare that to the bit transitions in CAL-INJ_ODC:

pshawhan@> ./FrBitmaskTransitions -c H1:CAL-INJ_ODC_CHANNEL_OUT_DQ /archive/frames/ER7/raw/H1/H-H1_R-11174/*.gwf -m fffffff
1117400000.000000 0x00003f9e Data starts
1117400027.625000 0x00003fde 6 on
1117400028.621093 0x00003fff 0 on, 5 on
1117406895.000000 0x00003e7f 7 off, 8 off
1117411714.394531 0x0000347f 9 off, 11 off
1117411714.480468 0x00003e7f 9 on, 11 on
1117419953.394531 0x0000347f 9 off, 11 off
1117419953.480468 0x00003e7f 9 on, 11 on
1117427153.394531 0x0000347f 9 off, 11 off
1117427153.480468 0x00003e7f 9 on, 11 on
...

Bits 9 and 11 go ON when a burst injection begins, and off when it ends. The offset of the start time is because the waveform file initially contains zeroes. To be specific, the first 22507 samples in the waveform file (= 1.37372 sec) are all exactly zero; then the next several samples are vanishingly small, e.g. 1e-74. At t = 1.39453 sec = 22848 samples into the waveform file, the strain amplitude of the waveform is about 1e-42. At the end time of the injection (according to the bitmask transition), the strain amplitude has dropped to about 1e-53, but I believe the filtering extends the waveform some. To give an idea of how much: the end time is about 130 samples, or ~8 msec, after the strain amplitude drops below 1e-42.

Earlier in the record, bit 6 went on to indicate that the transient filter gain was OK, bit 5 went on to indicate that the transient filter state was OK, and bit 0 went on at the same time to indicate a good summary. Somewhat later, bit 8 went off when CW injections began, and bit 7 went off at the same time to indicate the presence of any hardware injection signal. Note that tinj seems to have been stopped (or died) at about GPS 1117456410, according to the tinj.log file, and that's consistent with a lack of bit transitions in CAL-INJ_ODC after that time.
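(A minimal Python sketch of what FrBitmaskTransitions reports: scan an integer channel for samples where any masked bit changes. Reading the raw frames is out of scope here; data, t0 and fs are assumed inputs, not the actual tool.)

import numpy as np

def bit_transitions(data, t0, fs, mask=0xffffffff):
    d = data.astype(np.int64) & mask
    for i in np.flatnonzero(np.diff(d) != 0) + 1:
        changed = int(d[i] ^ d[i - 1])
        now = int(d[i])
        on  = [b for b in range(changed.bit_length()) if changed >> b & 1 and now >> b & 1]
        off = [b for b in range(changed.bit_length()) if changed >> b & 1 and not now >> b & 1]
        print(f"{t0 + i / fs:.6f} 0x{now:08x} {on} on, {off} off")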
Checking the other bitmask channels, there's a curious pattern in how the hardware injection bits from CAL-INJ_ODC are getting summarized in ODC-MASTER:

pshawhan@> ./FrBitmaskTransitions -c H1:ODC-MASTER_CHANNEL_OUT_DQ -m 7000000 /archive/frames/ER7/raw/H1/H-H1_R-11174/H-H1_R-11174[12345]*.gwf
1117410048.000000 0x07000000 Data starts
1117411714.391540 0x05000000 25 off
1117411714.391723 0x07000000 25 on
1117411714.391845 0x05000000 25 off
1117411714.475463 0x07000000 25 on
1117419953.391540 0x05000000 25 off
1117419953.391723 0x07000000 25 on
1117419953.391845 0x05000000 25 off
1117419953.475463 0x07000000 25 on
1117427153.391540 0x05000000 25 off
1117427153.391723 0x07000000 25 on
1117427153.391845 0x05000000 25 off
1117427153.475463 0x07000000 25 on
...

The same pattern continues for all 7 of the injections. We can see that there's a brief interval (0.000183 s = 3 samples at 16384 Hz) which is marked as a burst injection, then "no injection" for 2 samples, then back on for 0.083618 s = 1370 samples. Knowing how the sine-Gaussian ramps up from vanishingly small amplitude, I think this is real, in the sense that the first whisper of a nonzero cycle returns to effectively zero for a couple of samples before it grows enough to be consistently "on". It is also interesting to see that the interval ends in ODC-MASTER slightly (5.0 ms) earlier than it does in CAL-INJ_ODC. I suspect that this is OK: the model really runs at 16384 Hz, but CAL-INJ_ODC is a down-sampled record of the real bit activity.

I also confirmed that the injection bit got carried over to GDS-CALIB_STATE_VECTOR:

pshawhan@> ./FrBitmaskTransitions -c H1:GDS-CALIB_STATE_VECTOR -m 1ff /archive/frames/ER7/hoft/H1/H-H1_HOFT_C00-11174/H-H1_HOFT_C00-11174[012]*.gwf
1117401088.000000 0x000001c8 Data starts
1117408113.375000 0x000001cc 2 on
1117408119.375000 0x000001dd 0 on, 4 on
1117408143.750000 0x000001df 1 on
1117408494.812500 0x000001dd 1 off
1117408494.937500 0x000001c8 0 off, 2 off, 4 off
1117411714.375000 0x00000148 7 off
1117411714.500000 0x000001c8 7 on
1117419953.375000 0x00000148 7 off
1117419953.500000 0x000001c8 7 on
1117420916.625000 0x000001cc 2 on
1117420922.625000 0x000001dd 0 on, 4 on
1117421174.000000 0x000001df 1 on
1117424707.062500 0x000001dd 1 off
1117424954.437500 0x000001c8 0 off, 2 off, 4 off
1117426693.000000 0x000001cc 2 on
1117426699.000000 0x000001dd 0 on, 4 on
1117426833.625000 0x000001df 1 on
1117427153.375000 0x0000015f 7 off
1117427153.500000 0x000001df 7 on
...

Bit 7 indicates burst injections. It goes off for 2 16-Hz samples at the appropriate times.
As of 7:00 am PDT on Friday, June 5, there have been 13 burst hardware injections at LHO over the last two days. All of these are represented by segments (each 1 second long) in the DQ segment database, and can be retrieved using a command like:

pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_BURST --gps-start-time 1117300000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '

These time intervals also agree with the bits in the H1:GDS-CALIB_STATE_VECTOR channel in the H1_HOFT_C00 frame data on the Caltech cluster, except that there is a gap in the h(t) frame data (due to filters being updated and the h(t) process being restarted, as noted in an email from Maddie). Similar DB queries show no H1 CBC injection segments yet, but H1 CW injections are ongoing:

pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_CW --gps-start-time 1117400000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
1117406895 1117552800

By the way, by repeating that query I observed that the CW segment span was extended by 304 seconds about 130 seconds after the segment ended. Therefore, the latency of obtaining this information from the segment database ranges from 130 to 434 seconds, depending on when you query it (at least under current conditions). I also did similar checks at LLO, which revealed a bug in tinj -- see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18517 .
Dan, Travis
Around 06:50 UTC we started to observe frequent glitching in the PRCL and SRCL loops that generated a lot of nonstationary noise in DARM between 20 and 100 Hz. The glitches occur several times a minute; it's been two hours of more or less the same behavior, and counting. Our range has dropped by a couple of megaparsecs. The first plot has a spectrogram of DARM compared to SRCL that shows a burst of excess noise at 08:08:20 UTC.
The noise shows up in POP_A_RF45_I and POP_A_RF9_I, but not so much in the Q phases; see the second plot. (MICH is POP_A_RF45_Q; PRCL and SRCL come from the I-phases.) A quick look at the PRM and SRM coil outputs doesn't reveal a consistent DAC value at the time of the glitches, so maybe DAC glitching isn't the problem; see the third plot. The three optical levers that we're using in the corner station (ITMX, ITMY, SR3, all in pitch) don't look any different now than they did before 07:00 UTC.
I'm pretty sure these come from PRM, but one stage higher than you looked: M2. Attached is a lineup of PRM M2 UR and the glitches in CAL DELTAL. Looks like 2^16 = 65536 count DAC glitches to me. I'll check some other times and channels, but wanted to report it now.
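(To go with this, a minimal sketch of how one might flag the +/-2^16 crossings in a coil-drive channel for lining up against glitch times; x and fs are the channel data and sample rate, and this is an illustration, not the tool detchar used.)

import numpy as np

def crossing_times(x, fs, level=2**16):
    # times (s from start of stretch) where |x| crosses the DAC level
    above = np.abs(np.asarray(x)) >= level
    idx = np.flatnonzero(np.diff(above.astype(np.int8)) != 0) + 1
    return idx / fs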
After a lot of followup by many detchar people (TJ, Laura, Duncan, and Josh from a plane over the Atlantic), we haven't really been able to make DAC glitches work as an explanation for these glitches. A number of channels (especially PRM M2 and M3) start crossing +/- 2^16 around 7 UTC, when the glitches begin. Some of these glitches line up with the crossings, but there are plenty of glitches that go unexplained, and plenty of crossings that don't correspond to a glitch. It's possible that DAC glitches are part but not all of the explanation. We'll be following this up further since one of these glitches corresponds to an outlier in the burst search. Duncan does report that no software saturations (channels hitting their software limit, as with the tidal drive yesterday) were found during this time, so we can rule those out.
Andy, Laura, TJ, Duncan, Josh: To add to what Andy said, here are a few more plots on the subject:
1) The worst of the glitching does coincide with SRM and PRM drive channels being close to 2^16 (only positive values are plotted here; negative values are similar). Of course this is pretty weak evidence, as lots of things drift.
2) A histogram of the number of glitches versus the DAC value of the PRM M2 UR channel has a small line at 2^16, though it is barely significant. Statistically, with the hveto algorithm, we find only a weak correlation with +/-2^16 crossings in the PRM M2 suspension. Again, all very weak.
3) As you all reported, the glitches are really strong in SRCL and PRCL, ASC REFL channels, etc. hveto would veto ~60% of yesterday's glitches using SRCL and PRCL, as shown in this time-frequency plot. But the rest of the glitches would still get through and hurt the searches.
So we haven't found the right suspension, or there is something more complicated going on. Sorry we don't have a smoking gun - we'll keep looking.
The IFO was down, so we hit LOAD COEFF on MC2 during its walk back up to LSC_FF lock. The IFO didn't waver, so all seems fine.