TJ, Miriam
Miriam found a population of glitches while following up loud background events from PyCBC that seem to be due to ETMY oplev laser glitches.
It seems like ETMY optical lever laser glitches are coupling into h(t) through L2 via the oplev damping loops, similar to how the ETMX oplev laser glitches were found to be coupling into h(t) in alog 31810 from November 2016. These are thought to be laser glitches since they show up strongly in the OPLEV SUM readout.
The attachment shows data from Feb 25th, but I've seen similar behavior from earlier today.
The first page of the attachment shows the BLRMS of the ETMY L3 OPLEV SUM aligned with Omicron triggers in h(t).
The second page shows the ETMY L3 OPLEV SUM BLRMS aligned with the oplev damping loop error point, which seems to be where the coupling into h(t) is coming from.
The third page shows the ETMY L3 OPLEV SUM BLRMS aligned with the L2 noisemon, which shows the same coincident glitches.
Verbal alarms reported a timing error just before 10pm (local time) Saturday night. This was a transient alarm, which cleared within seconds.
I have just completed the analysis of the error. The alarm was raised by the 1PPS comparator in the MSR: the fourth input signal went OOR (Out-of-Range).
This channel is the independent Symmetricom GPS receiver; its nominal range is -200 to 0, and at 21:58 PST on 4 Mar 2017 it briefly went to -201.
Trending signal_3 for a day around this time shows that the signal wandered for several hours before settling down. I verified that the other three signals being compared did not make any excursions at this time, indicating the error was with the Symmetricom signal itself (trend attached).
Using the MEDM time-machine, I captured the detailed comparator error screen at this time, which verifies the error was "PPS 3 OOR" (image attached).
FAMIS #4718. For some reason HAM2 YAW did not plot. ETMX and ETMY may need to be recentered in pitch. HAM2 and ITMX have been close to -10 in pitch for the last 7 days.
TITLE: 03/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
No issues to report.
TITLE: 03/09 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Locked and in Observe most of the night (except for one little mishap, see below). PI mode 27 has two BP filters on right now. Patrol visited the site last night. Foton doesn't seem to work when launched from an MEDM screen (the usual right click >> foton >> click on filter).
LOG:
10:00 Hanford Patrol on site. Called the control room from the gate, but the thing didn't work; I only heard echoes of my voice when trying to talk. They came through anyway.
10:10 Patrol left via the exit gate, headed to the LSB. Off site a couple of minutes later.
13:18 Noticed two BP filters were on for PI mode 27. Ed told me about this before he left, but I didn't realize they were both being used. Accidentally went out of Observe trying to revert the configuration. I left the BP filters as they are for now.
14:08 Bubba heading to Y arm to check on tumbleweed.
14:26 Bubba back.
The instafoton problem has been fixed; some diagnostics code had been added which inadvertently introduced a dependence on a temporary file's ownership.
5.9M West of Macquarie Island
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 5.9, 5.9, na
Location: 60.146°S 150.296°E
Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~3:34 local time
Lock status? H1 stayed locked, LLO is down
EQ reported by Terramon BEFORE it actually arrived? Yes
Miriam, Hunter, Andy
A subset of blip glitches appear to be due to a glitch in the ETMY L2 coil driver chain. We measured the transfer function from the ETMY L2 MASTER channel to the NOISEMON channel (specifically, for the LR quadrant). We used this to subtract the drive signal out of the noisemon, so what remains would be any glitches in the coil drive chain itself (and not just feedback from DARM). The subtraction works very well, as seen in plot 1, with the noise floor a factor of 100 below the signal from about 4 to 800 Hz.
We identified some blip glitches from Feb 11 and 12 as well as Mar 6 and 7. Some of the Omega scans of the raw noisemon signals look suspicious, so we performed the subtraction. The noisemons seem to have an analog saturation limit at +/- 22,000 counts, so we looked for cases where the noisemon signal is clearly below this. In some cases, there was nothing seen in the noisemon after subtraction, or what remained was small and seemed like it might be due to a soft saturation or nonlinearity in the noisemon. However, we have identified at least three times where there is a strong residual. These are the second through fourth plots.
We now plan to automate this process to look at many more blip glitches and check all test mass L2 coils in all quadrants.
In case someone wants to know, the times we report here are:
1170833873.5
1170934017
1170975288.38
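For anyone curious about the mechanics, here is a minimal Matlab sketch of the subtraction step, using synthetic stand-ins for the real 16 kHz channels and a made-up FIR in place of the measured MASTER-to-NOISEMON transfer function (this is illustrative only, not the code we actually ran):
fs = 16384;                                % sample rate of the L2 channels [Hz]
drive = randn(4*fs,1);                     % stand-in for the ETMY L2 MASTER (LR) drive
tf_fir = fir1(256, 0.3);                   % stand-in for the measured MASTER->NOISEMON TF
glitch = zeros(4*fs,1);  glitch(2*fs) = 0.05;   % a glitch born in the coil-driver chain (not in the drive)
noisemon = filter(tf_fir, 1, drive) + glitch + 1e-3*randn(4*fs,1);   % witness channel
predicted = filter(tf_fir, 1, drive);      % noisemon content expected from the drive alone
residual  = noisemon - predicted;          % coil-driver glitches survive; DARM feedback does not
plot((0:4*fs-1)'/fs, residual); xlabel('Time [s]'); ylabel('Residual [ct]');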
I have noticed glitches with the same apparent cause on 10th March, in particular for the highest-SNR Omicron glitch of the day.
Looking at the Omega scan of this glitch in h(t), the highest-SNR coincident channels are all the quadrants of H1:SUS-ETMY_L2_NOISEMON.
Hi Borja,
Could you point us to the link to those omega scans? I would like to see the time series plots to check whether the noisemon channels are saturating (we have seen that the spectrograms can look like that when they saturate).
I am also going to look into the blip glitches I got for March 10 to see if I find more of these (although I won't have glitches with as high an SNR as the one you posted).
Thanks!
Hi Miriam,
The above Omega scan can be found here.
Also, I noticed that yesterday the highest New SNR glitch for the whole day reported by PyCBC Live 'Short' is of this type as well. The Omega scan for this one can be found here.
Hope it helps!
Hi Miriam, Borja,
While following up on a GraceDB trigger, I looked at several glitches from March 1 which seem to match those that Borja posted. The omegascans are here, in case these are also of interest to you.
Hi,
Borja, in the first omega scan you sent, the noisemon channels are indeed saturated. In that case it is difficult to tell whether that is why the spectrogram looks like that, or whether it really is a glitch in the coil drive. Once Andy has a more final version of his code, we can check on that. In the second omega scan, the noisemon channels look just like the blip glitch looks in the calib_strain channel, which means the blip was probably already in the DARM loop and the noisemon channels are just hearing it. Notice also that, besides PyCBC_Live 'Short', we have a version of PyCBC_Live that is dedicated specifically to finding blip glitches (see aLog 34257), so at some point we will be looking into times coming from there (I will keep in mind to look into the March 10 list).
Paul, those omega scans do not quite look like what we are looking for. We did look into some blip glitches where the noisemon channels looked like what you sent and we did not find any evidence for glitches in the coil drive. But thanks for your omega scans, I will be checking those times when Andy has a final version of the subtraction code.
Attached are temperature trends for the past two days, the past year, and the past fortnight. The year trend is attached because Gerardo mentioned to me that this behaviour had in fact been going on for some time. Indeed, there appear to be dips in the temperature except for the period from 21st December 2016 to 8th February 2017, or arguably to 1st March 2017.
One thing that is apparent is that there is a ~0.3 degC temperature shift from 1st March. That in turn seems to coincide with the recent spate of excursions in the Laser Room temperature.
I didn't find anything in the alog concerning work going on in the enclosure around that time. There was mention of a laser trip, but that was reset from the outside. No mention of any other work around 1st March.
Looks like an earthquake is on the way. SEISMON doesn't seem to be up to date. Terramon predicted an R-wave velocity of 2.4 micron/s from the 5.9 earthquake.
TITLE: 03/09 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 61Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: Spotted a bunch of EY CDS overview indicators go red. The status reverted before I could identify which column was complaining.
I attached a plot of the H1IOPSUSEY status over the past 30 mins. I saw two columns go red on this one. The numbers read 6 and 512.
Note to operators: remember that h1susex and h1susey are running the newer, faster computers to keep processing time within the 60 µs limit. This system has a known quirk: occasionally the IOP runs long for a single cycle, resulting in a latched TIM, ADC error on the IOP, and sometimes an IPC error which propagates to the SUS, SEI, and ISC models at the end station. A cron job running every minute at the zero-second mark clears these latched errors.
As an example of the frequency of these glitches, over the past 7 days, EX has glitched 61 times and EY 38 times (once every 2 and 4 hours respectively).
We should only report problems if the error bits are not being cleared after a minute has elapsed.
TITLE: 03/09 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
LOG:
I was successful in damping it by first adjusting the set frequency and then changing to 90° phase and a gain of 1000.
I wasn't able to verify the effect of my actions on mode 23 due to the lockloss and a different set of PI conditions in the new lock; however, I did return that mode's settings to what I had set them to previously.
I also had to switch on a BP filter to gain control of mode 27 which seemed to be impervious to my efforts at first. I finally got a lock going that looks like it will hold for a while.
All Times in UTC
00:29 PSL Temp Excursion.
00:45 Balers back from Y arm
00:47 Initiated locking sequence
16:54 Gerardo and Fil into LVEA to pull the AC breaker for the PSL AC to stop temp excursions
1:06 Gerardo and Fil out
1:19 Just realized that the OMC wasn't locking. Re-requested READY_FOR_HANDOFF.
1:31 finally moving THROUGH DC Readout
7:31 restarted VerbalAlarms
8:00 Handing off to Nutsinee.
Lockloss at 00:33
1:45 Accepted some SDF diffs for bounce mode damping (left over from my last shift?). Modes ringing down nicely.
1:47 Intention Bit "Undisturbed" 61.2Mpc
TITLE: 03/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Tumbleweed baling throughout the day; currently on the Y arm. PSL enclosure temperature has continued slowly returning to nominal. DMT inspiral range for LLO is back; the GW status webpage is not, and Dan is contacting Caltech to sort it out. I may have the details wrong, but it sounds like the detchar observing and calibration bits had not been set since Tuesday maintenance; this appears to have been resolved. John said that Apollo has been working at mid X on HVAC controls.
LOG:
Chris at mid Y, WP 6518
16:01 UTC Christina to mid X, Karen to mid Y
17:04 UTC Karen leaving mid Y
With a nudge from peterF and mevans, I checked to see how hard it might be to do some time-domain subtraction of the jitter in H1 DARM. This is similar to what Sheila (alog 34223) and Keita (alog 33650) have done, but now it's in the time domain so that we could actually clean up DARM before sending it to our analysis pipelines.
The punchline: It's pretty easy. I got pretty good feedforward subtraction (close to matching what Sheila and Keita got with freq-domain subtraction) without too much effort.
Next steps: See if the filters are good for times other than the training time, or if they must be re-calculated often (tomorrow). Implement in GDS before the data goes to the analysis pipelines (farther future?).
I was finding it difficult to calculate effective Wiener filters with so many lines in the data, since the Wiener filter calculation is just minimizing the RMS of the residual between a desired channel (eg. DARM) and a witness (eg. IMC WFS for jitter). So, I first removed the calibration lines and most of the 60Hz line. See the first attached figure for the difference between the original DARM spectrum and my line-subtracted DARM spectrum. This is "raw" CAL-DELTAL_EXTERNAL, so the y-axis is not in true meters.
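For reference, the RMS-minimization described above amounts to a least-squares problem; the single-witness FIR Wiener filter sketch below uses synthetic data, a made-up coupling, and an arbitrary filter length (it is not the code used for this work):
N = 128;                                     % FIR Wiener filter length [taps]
w = randn(2^16,1);                           % witness, e.g. a calibration-line channel (synthetic here)
d = filter(fir1(64,0.2), 1, w) + 0.1*randn(2^16,1);   % "DARM": coupled witness plus unrelated noise
X = zeros(length(w)-N+1, N);                 % matrix of lagged witness samples
for k = 1:N
    X(:,k) = w(N-k+1:end-k+1);
end
h = X \ d(N:end);                            % least-squares solution minimizes the RMS of the residual
d_clean = d - filter(h, 1, w);               % subtract the witness's estimated contribution from DARM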
I did not need to use any emphasis filters to reshape DARM or the witnesses for the line removal portion of this work. The lines are so clear in these witnesses that they don't need any help. I calculated the Wiener filters for each of the following channels separately, and calculated their estimated contribution to DARM individually, then subtracted all of them at once. H1:CAL-PCALY_EXC_SUM_DQ has information about the 7Hz line, the middle line in the 36Hz group, the 332Hz line and the 1080Hz line. H1:LSC-CAL_LINE_SUM_DQ has information about the highest frequency line in the 36Hz group. Both of those are saved at 16kHz, so required no extra signal processing. I used H1:SUS-ETMY_L3_CAL_LINE_OUT_DQ for the lowest frequency of the 36Hz group, and H1:PEM-CS_MAINSMON_EBAY_1_DQ for the 60Hz power lines. Both of these channels are saved slower (ETMY cal at 512Hz and MainsMon at 1kHz), but since they are very clean signals, I felt comfortable interpolating them up to 16kHz. So, these channels were interpolated using Matlab's spline function before calculating their Wiener filters. Robert or Anamaria may have thoughts on this, but I only used one power line monitor, and only at the corner station for the 60Hz line witness. I need to re-look at Anamaria's eLIGO 60Hz paper to see what the magical combination of witnesses was back then.
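A sketch of the spline upsampling mentioned above (synthetic stand-in data; nominal channel rates assumed):
fs_slow = 1024;  fs_fast = 16384;  T = 4;                  % MainsMon rate, DARM rate, duration [s]
t_slow = (0:1/fs_slow:T-1/fs_slow)';
t_fast = (0:1/fs_fast:T-1/fs_fast)';
mains  = sin(2*pi*60*t_slow) + 0.01*randn(size(t_slow));    % stand-in for H1:PEM-CS_MAINSMON_EBAY_1_DQ
mains_16k = spline(t_slow, mains, t_fast);                  % spline-interpolate up to 16 kHz before the Wiener calculation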
Once I removed the calibration lines, I roughly whitened the DARM spectrum, and calculated filters for IMC WFS A and B, pit and yaw, as well as all 3 bullseye degrees of freedom. Unfortunately, these are only saved at 2kHz, so I first had to downsample DARM. If we really want to use offline data to do this kind of subtraction, we may need to save these channels at higher data rates. See the second attached figure for the difference between the line-cleaned DARM and the line-and-jitter-cleaned DARM spectrum. You can see that I'm injecting a teeny bit of noise in, below 9Hz. I haven't tried adjusting my emphasis filter (so far just roughly whitening DARM) to minimize this, so it's possible that this can be avoided. It's interesting to note that the IMC WFS get much of the jitter noise removed around these broad peaks, but it requires the inclusion of the bullseye detector channels to really get the whole jitter floor down.
Just because it's even more striking when it's all put together, see the third attachment for the difference between the original DARM spectrum and the line-and-jitter-cleaned DARM spectrum.
It might be worth pushing the cleaned data through the offline PyCBC search and seeing what difference it makes. How hard would it be to make a week of cleaned data? We could repeat e.g. https://sugwg-jobs.phy.syr.edu/~derek.davis/cbc/O2/analysis-6/o2-analysis-6-c00-run5/ using the cleaned h(t) and see what the effect on range and glitches are. The data could be made offline, so as long as you can put h(t) in a frame (which we can help with) there's no need to get it in GDS to do this test.
Do you think it would be possible to post the spectra as ascii files? It would be interesting to get a very rough estimate of the inspiral range difference.
In fact, I'm working on a visualization of this for a comparison between C00 and C01 calibration versions. See an example summary page here:
https://ldas-jobs.ligo.caltech.edu/~alexander.urban/O2/calibration/C00_vs_C01/L1/day/20161130/range/
I agree with Other Alex and I'd like to add your jitter-free spectrum to these plots. If possible, we should all get together at the LVC meeting next week and discuss.
J. Kissel, D. Tuyenbayev
Analyzing the high-frequency data for the UIM that we took last night (LHO aLOG 31601), we find -- as previously suspected -- that there are lots of dynamical resonant features in the UIM / L1 actuation stage; it definitely does NOT fall as f^6 to infinity as one might naively suspect. There are even more features than the (now anticipated; LHO aLOG 31432) broad impacts of the violin modes of the Sus Point-to-TOP wires (~311 Hz) and UIM-to-PUM wires (~420 Hz). We had seen hints of these features previously (LHO aLOG 24917), but here they are fully characterized out to 500 Hz with a combination of swept-sine (SS) and broad-band (BB) transfer function ratios (the calibration standard measurements of PCAL2DARM = C / (1+G) and iEXC2DARM = C A_i / (1+G)).
The measurements yield the actuation strength of the UIM stage in terms of [m] of test mass displacement per [ct] of drive from the L1_TEST_L bank, which is the Euler-basis equivalent of DAC [ct]. Scaling to [m/N] is a mere scale factor, measured to be 20/2^18 [V/ct] * 0.62e-3 [A/V] * 1.7082 [N/A] = 8.08e-8 [N/ct] (see LHO aLOG 31344).
Via private communication in January this year, Norna suspects that the 111 Hz feature is the first internal mode of the UIM blades, backed by a bench test of the blades at CIT which revealed a resonance at 109 Hz. No ideas on the 167 Hz mode though.
These high-frequency dynamics continue to plague the estimate of the UIM actuation strength at DC using the traditional frequency-dependent sweep method, because they begin to affect the actuation strength at as low a frequency as ~30 Hz (LHO aLOG 31427), and any model-fitting code gets totally distracted by these features.
A challenge to the CSWG team: fit this transfer function above 20 Hz and create a set of zeros and poles that can be used as a "correction" filter to a model that falls off as f^6. This filter need not perfectly resolve the details of all of the high-Q features, but it must track the overall frequency dependence over the 20 - 500 Hz region well.
I attach all of the measurements compressed onto one (discontinuous) frequency vector as an ascii file in the standard DTT form of [freq re(TF) im(TF)]. To use:
>> foo = load('2016-11-17_H1SUSETMY_L1_Actuation_HighFreqChar_asciidump.txt')
>> figure; loglog(foo(:,1), abs( foo(:,2)+1i*foo(:,3) ))
This data is also committed to the CalSVN repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Results/Actuation/2016-11-17_H1SUSETMY_L1_Actuation_HighFreqChar_asciidump.txt
Kiwamu has already tried to create such a filter from the previous data (LHO aLOG 28206), but was limited by that measurement's high-frequency bound falling between the 111, 137, and 167 Hz features.
Details:
Analysis code:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/FullIFOActuatorTFs/process_H1SUSETMY_L1_HFDynamicsTest_20161117.m
Config files:
IFOindepPars = '../../../Common/params/IFOindepParams.conf';
IFOdepPars = {'../../params/H1params.conf'};
IFOmeasPars = {'../../params/2016-11-12/H1params_2016-11-12.conf'};
PCALPars = {'../../params/2016-11-12/measurements_2016-11-12_ETMY_L1_actuator.conf'};
Model:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/DARMmodel/src/computeDARM.m
Will post the data for the fitting challenge later this afternoon.
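For anyone taking up the challenge, here is one possible (untested) Matlab starting point: fit a rational function to the measured TF divided by an f^-6 roll-off using invfreqs. The fit orders and the 20 Hz normalization below are guesses, and this is not the method used for the official model:
foo  = load('2016-11-17_H1SUSETMY_L1_Actuation_HighFreqChar_asciidump.txt');   % attached data
f    = foo(:,1);                            % [Hz]
A    = foo(:,2) + 1i*foo(:,3);              % measured L1 actuation TF [m/ct]
keep = f >= 20;  f = f(keep);  A = A(keep); % fit only above 20 Hz, per the challenge
nominal = A(1) * (f(1)./f).^6;              % f^-6 extrapolation pinned at the lowest kept frequency
corr = A ./ nominal;                        % frequency dependence the f^6 model misses
[num, den] = invfreqs(corr, 2*pi*f, 8, 8, ones(size(f)), 100);   % iterative rational fit; orders are a guess
fitresp = freqs(num, den, 2*pi*f);
loglog(f, abs(corr), f, abs(fitresp));  xlabel('Frequency [Hz]');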
I made an update to the quad matlab model to account for these mystery features. See CSWG log 11197.
I describe my use of the Frequency Domain System Identification toolbox (FDIDENT) to fit this transfer function in CSWG elog #11205. FDIDENT is a third party Matlab toolbox which provides tools for identifying linear dynamic single-input/single-output (SISO) systems from time response or frequency response measurements. The toolbox is free for non-profit use.
https://www.mathworks.com/products/connections/product_detail/product_35570.html
http://home.mit.bme.hu/~kollar/fdident/
A stable, but non-minimum-phase, model without delay, compatible with a Linear Time Invariant (LTI) representation, results in the best fit: a 22nd-order numerator and 28th-order denominator model, m2228. The model is compared to the measurement data in the attached Bode plot.
I attach several new parts of this high frequency characterization in order to facilitate incorporating the uncertainty in any future transfer function fitting.
I attach three new text files:
"..._tf.txt" -- a copy of the originally attached text file, columns are
[freq re(A) im(A)]
"..._coh.txt" -- an export of the (prefiltered) coherence, columns are
[freq iEXCCoh PCALCoh]
"..._relunc.txt" -- an export of the combined relative uncertainty on the transfer function, columns are
[freq sigma_A]
Computing the uncertainty on this actuation strength was a bit of a challenge.
Remember, the above measure of the actuation strength of the UIM stage, A, is a combination of two transfer functions, as described in P1500248, Section V. In this aLOG they're referred to as "PCAL2DARM" where we use the photon calibrator as a reference actuator, and "iEXC2DARM" where the suspension stage under test is used as the actuator. Typically, the iEXC2DARM transfer function has the lowest coherence.
Even worse, I've combined many data sets of both transfer functions covering different frequency regions each with a different number of averages.
Thus, to form the uncertainty, I've taken each frequency region's data set, and:
- Filtered both iEXC and PCAL transfer functions for data points in which the iEXC TF has coherence greater than 0.95,
- Created a relative uncertainty vector for each iEXC and PCAL transfer functions using the standard B&P equation,
sigma_TF(f) / TF = sqrt( (1-C(f)) / (2 N C(f)) )
where C(f) is the coherence, and N is the number of averages (N was 10 for swept sine TFs, 25 for broad band TFs)
- Concatenated the data sets to form the overall transfer function, A,
- Combined the two uncertainty vectors in the standard way,
sigma_A / A = sqrt((sigma_iEXC / iEXC)^2 + (sigma_PCAL / PCAL)^2)
- Sorted the collection of
[frequency complextf iexccoh pcalcoh sigma_A]
by frequency.
- Exported the uncertainty.
Note that one only needs a single column of uncertainty, since the absolute uncertainty in magnitude is just
|sigma_A| = abs(A) * (sigma_A / A)
and the absolute uncertainty in phase (in degrees) is
sigma_phase = 180/pi * (sigma_A / A)
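For concreteness, the combination above looks like the following in Matlab, with dummy vectors standing in for the measured transfer function and coherences (only the number of averages comes from the text):
freq   = logspace(log10(20), log10(500), 100)';        % [Hz]
A      = 1e-12*(20./freq).^6 .* exp(-1i*pi*freq/500);  % stand-in actuation TF [m/ct]
C_iexc = 0.96 + 0.03*rand(size(freq));                 % stand-in iEXC2DARM coherence (post 0.95 cut)
C_pcal = 0.99*ones(size(freq));                        % stand-in PCAL2DARM coherence
N = 10;                                                % number of averages (10 for SS, 25 for BB segments)
rel_iexc = sqrt((1 - C_iexc)./(2*N*C_iexc));           % B&P relative uncertainty, iEXC2DARM
rel_pcal = sqrt((1 - C_pcal)./(2*N*C_pcal));           % B&P relative uncertainty, PCAL2DARM
rel_A    = sqrt(rel_iexc.^2 + rel_pcal.^2);            % combined relative uncertainty on A
sigma_mag   = abs(A).*rel_A;                           % absolute magnitude uncertainty
sigma_phase = 180/pi*rel_A;                            % phase uncertainty [deg]
loglog(freq, abs(A), freq, 10*sigma_mag);              % x10 on the uncertainty for visibility, as in the attached plot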
I attach a plot of the magnitude and its uncertainty for demonstrative purposes, so that when the files are used, you can compare your plots of this against mine to be sure you're using the data right. Note that I've multiplied the uncertainty by a factor of 10 for plotting only so that it's visible.
I've updated and committed the function that's used to process this data, and it can be found here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/FullIFOActuatorTFs/
process_H1SUSETMY_L1_HFDynamicsTest_20161117.m