Distribution of hours at which scratchy glitches occurred, according to the ML output from GravitySpy. In addition, a histogram of the amount of O1 time spent in analysis-ready mode is provided. I have uploaded omega scans and FFT spectrograms of what Scratchy glitches looked like in O1.
For those of us who haven't been on DetChar calls to hear this latest DetChar nickname... "Scratchy glitches?"
Hi Jeff,
Scotty's comment above refers to Andy's comment to the range drop alog 30797 (see attachment here and compare to Andy's spectrogram). We're trying to help figure out its cause. It's a good lead that they seem to be related to RM1 and RM2 motion.
"Scratchy" is the name used in GravitySpy for these glitches. They are called that because they sound like scratches in audio https://wiki.ligo.org/DetChar/InstrumentSounds . In FFT they look like mountains, or if you look closer, like series of wavy lines. They were one of the most numerous types of H1 glitches in O1. In DetChar we also once called them "Blue mountains." Confusing, I know. But there is a DCC entry disambiguating (in this case equating) scratchy and blue mountain https://dcc.ligo.org/LIGO-G1601301 and a further entry listing all of the major glitch types https://dcc.ligo.org/G1500642 and the notes on the GravitySpy page.
Ed, Sheila
Are ezca connection errors becoming more frequent? Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).
The first one was from ISC_LOCK, the screenshot is attached.
Happened again but for a different channel H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON ( Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep trying to dig and see what I find.
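For reference, here is a minimal sketch of the kind of out-of-band check described above (I'm assuming plain pyepics here; a command-line caget works just as well). The channel name and 2 s timeout are the ones from the error message:

# Try to connect to the channel directly while the guardian node reports
# an EZCA connection error, using the same 2 s timeout.
from epics import PV

chan = 'H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON'
pv = PV(chan, connection_timeout=2.0)

if pv.wait_for_connection(timeout=2.0):
    print('%s connected, value = %s' % (chan, pv.get()))
else:
    print('%s did NOT connect within 2 s' % chan)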
Relevant ISC_LOCK log:
2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
It happened again just now.
Opened an FRS ticket on this and marked it as a high-priority fault.
[Jenne, Daniel, Stefan]
There seems to be an offset somewhere in the ISS second loop. When the 2nd loop comes on, even though it is supposed to be AC coupled, the diffracted power decreases significantly. This is very repeatable with on/off/on/off tests. One bad thing about this (other than having electronics with unknown behavior) is that the diffracted power is very low, and can hit the bottom rail, causing lockloss - this happened just after we started trending the diffracted power to see why it was so low.
Daniel made it so the second loop doesn't change the DC level of diffracted power by changing the input offset for the AC coupling servo (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_SERVO_OFFSET from 0.0 to -0.5), the output bias of the AC coupling servo (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_INT_BIAS from 210 to 200), and the input offset of the 2nd loop (H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET from 24.0 to 23.5 - this is just summed in to the error point of the 2nd loop servo). What we haven't checked yet is if we can increase the laser power with these settings.
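For the record, here are the three changes expressed as a pyepics sketch (this is just a restatement of the values quoted above; the actual changes may have been made from the MEDM screens):

# Keep the 2nd loop from shifting the DC level of the diffracted power:
from epics import caput

caput('H1:PSL-ISS_SECONDLOOP_AC_COUPLING_SERVO_OFFSET', -0.5)  # was 0.0
caput('H1:PSL-ISS_SECONDLOOP_AC_COUPLING_INT_BIAS', 200)       # was 210
caput('H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET', 23.5)              # was 24.0, summed into the 2nd loop error point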
Why is there some offset in the ISS 2nd loop that changes the diffracted power?? When did this start happening?
We were able to increase power to 25W okay, but turning off the AC coupling made things go crazy and we lost lock. The diffracted power went up, and we lost lock around the time it hit 10%.
The 2nd loop output offset observed by the 1st loop was about 30 mV (attached, CH8). With the 2nd ISS gain slider set at 13 dB and a fixed gain stage of 30, this corresponds to a 0.2 mV offset at the AC coupling point. This offset is relatively small.
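A quick check of that arithmetic, assuming the 13 dB slider is a straight voltage gain:

# 30 mV referred back through the fixed gain of 30 and the 13 dB slider gain:
slider_gain = 10 ** (13.0 / 20.0)            # ~4.5
fixed_gain = 30.0
offset = 30e-3 / (slider_gain * fixed_gain)  # ~2.2e-4 V, i.e. ~0.2 mV
print(offset)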
One thing that has happened in the past two weeks or so is that the power received by the 1st loop sensor (PDA) was cut by about half (second attachment). This was caused by moving the PD from its old position to its new one.
Since the sensing gain of the 1st loop was reduced by a factor of two, the 1st loop is, as seen from the 2nd loop, twice as efficient an actuator. Apparently the 2nd loop gain slider was not changed (it is still at 13 dB), so even if the same offset was present before, its effect was a factor of two smaller.
Another possibility, although it is kind of far-fetched: I switched off the DBB crate completely, and we know that opening/closing the DBB and frontend/200W shutters has caused offset changes in the 2nd loop board.
Since the noise of the detector has improved around the 331.9 Hz Pcal injection frequency, we can reduce the amplitude of the injection (current setting: 9000 cts for both sine and cosine). I have reverted the changes that increased the amplitude of this line (see LHO aLOG 30476). The new amplitude setting is 2900 (for both sine and cosine amplitudes), which is the same as it was before the injection amplitude was increased. This also brings the total injection to Pcal Y below the threshold (see LHO aLOG 30802). The threshold is 44,000 counts; the current total injection is now 38650.0 counts. Screenshot of excitation settings attached.
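A quick sanity check of the new total, using the other Pcal Y line amplitudes listed in LHO aLOG 30802:

# Pcal Y line amplitudes (cts) with the 331.9 Hz line reverted to 2900:
amps = {7.9: 20000.0, 36.7: 750.0, 331.9: 2900.0, 1083.7: 15000.0}
total = sum(amps.values())
print(total)  # 38650.0, below the 44,000 count threshold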
Note to self: check the front-end calculations of the uncertainty and coherence of these lines before and after this change, after the IFO reverted back to 25 [W] input power. Example checks:
- Do the calculations show the expected decrease in coherence / increase in uncertainty?
- How much was the uncertainty / coherence when the SNR was so high?
- Do we like that level of uncertainty? Did it reveal more real optical parameter changes instead of noise?
Delayed update, these changes were accepted in the SDF today (Oct. 26, 2016, ~10:20 PDT).
The bake exercise had been scheduled to end today but will now be extended, as it doesn't seem to be limiting others' work. "Hotter, longer" is the game here.
Whilst prepping for installation of the Picomotor-equipped mirror mounts, we took the opportunity to measure the powers in/out of the pre-modecleaner. As with the previous measurement, both the input and output windows are on. The ISS was off with the offset slider at 20. The 300 W water-cooled power meter was used for the measurements, using a 10 s average.
Pincident = 141.5 W
Ptrans = 105.8 W
Preflected = 31.0 W
transmission = 105.8 / 141.5 * 100
= (74.8 +/- 0.1) %
visibility = (1 - 31.0 / 141.5) * 100
= (78.1 +/- 0.2) %
This would imply losses of (3.3 +/- 0.3) %, including the input and output windows.
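A quick reproduction of these numbers (taking the quoted loss as visibility minus transmission, which is how the 3.3 % appears to be derived):

# Pre-modecleaner throughput numbers from above (powers in W):
P_inc, P_trans, P_refl = 141.5, 105.8, 31.0
transmission = P_trans / P_inc * 100      # 74.8 %
visibility = (1 - P_refl / P_inc) * 100   # 78.1 %
losses = visibility - transmission        # 3.3 %, including both windows
print(transmission, visibility, losses)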
Also attached are two thermal images; one of the pre-modecleaner PZT and the other of the output window. The PZT did not appear to be any hotter than the body of the pre-modecleaner. Ambient room temperature was nominally 23 degC.
Jason/Peter
I ran BruCo on two times around the excess noise, as Andy suggested.
Oct 24, 12:40:00 UTC : https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161348017-600/
Oct 24, 15:45:00 UTC : https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161359117-600/
The first time is right after the range drop which Stefan mentioned (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30790).
For comparison, the BruCo scan for the nominal state around 70 Mpc is here: https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161342497-600/
It's not easy for me to find a specific channel to look at, so any comments/suggestions on what I should look at and/or any further analysis are welcome.
Thanks-
This shows that we have ASC noise below 30 Hz, and that perhaps the A2L for ITMY was not tuned well at the end of the lock: ITMY_L3_ISCINF_P
SRCL noise is high from 15-50 Hz; we will attempt to make a better feedforward filter for this soon. (This is also the conclusion of some quick noise injections this morning.)
PRCL coherence is high both where the SRCL coherence is high and at the jitter peaks, which could be coupling through SRCL or frequency noise lock point errors.
PSL channels that have good whitening show coherence around our high frequency lump. (OSC PD INT ISS PDB)
There are a few channels that I think could be added to the excluded list:
SUS-ETMY_L2_FASTIMON_LL_OUT_DQ
OMC-PI_DCPD_64KHZ_AHF_DQ
OMC-DCPD_NULL_OUT_DQ
Looking at the frequency range of interest (mainly 50 - 200 Hz), there isn't any significant coherence with any channel. This is not unexpected if the noise is due to scattering, since that would be a highly non-linear coupling and thus not visible in a coherence analysis.
Removed 2 picomotor controlled mirror mounts from ISCT1. They were intended for WFS_REFL_AIR, but never commissioned.
Here is a result of the cross-correlation analysis using the data from last night (3 hours, starting at 6:40:00 UTC, BW = 0.1 Hz) (30780).
Also, for comparison, I overlaid the DCPD spectra from Sep. 26th (30115), when we were operating at 50 W, before all the TCS and ASC reverts. The optical gains for the two data sets in [amps/meters], according to the Pcal line height at 331.9 Hz, are almost the same, with the latest lock lower by 14 %. Therefore any difference in the DCPD current can be almost directly interpreted as a change in the DARM displacement. I did not check the cavity pole locations.
Overall, the latest lock appeared to have a lower noise floor in the jitter band (200 Hz - 1 kHz) by a factor of 2-5. Also, the latest noise exposed a number of peaks that were not easy to see in the September data. There were two small frequency regions where the noise level became worse, namely 365 Hz and 1084 Hz; these two peaks were not present in the September data. Additionally, the laser noise contribution above 3 kHz improved. The fig file is attached too.
At 5:20am local time we saw a significant range drop (from about 70 Mpc to 60 Mpc) that seems to be due to a significant increase of the line structure in the bucket that always lingers around the noise floor.
Attached are two spectra - 1h12min apart (from 11:18:00 and 12:20:00 UTC on 2016/10/24), showing that structure clearly.
Plot two shows the seismic BLRMS from the three LEAs - the corner shows the clearest increase. We are now chasing any particularly bad correlations around 12:20 this morning in the hope that it will give us a hint where this scatter is from.
Per your request, I ran BruCo on those times. The results are as follows:
bad time (12:20 UTC) : https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161346817-600/
good reference (11:08 UTC): https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161342497-600/
These could give you a hint about the range drop.
Here is a plot of the auxiliary loops, again comparing good vs bad.
Note the two broad noise humps around 11.8Hz and 23.6Hz. They both increased at the bad time compared to the good time.
Interestingly, the peaks showing up in the DARM spectrum are the 4th, 5th, and so on up to the 12th harmonics of that 11.8-ish Hz.
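For reference, the harmonic frequencies this implies (simple arithmetic, 4th through 12th of ~11.8 Hz):

# Harmonics of the ~11.8 Hz hump, 4th through 12th:
f0 = 11.8
print([round(n * f0, 1) for n in range(4, 13)])
# [47.2, 59.0, 70.8, 82.6, 94.4, 106.2, 118.0, 129.8, 141.6]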
It all smells to me like some form of scatter in the input chain.
The IMs do not change their motion between Stefan's good time (11:08 UTC) and bad time (12:20 UTC). But the RMs, particularly RM2, see elevated motion - almost a factor of 2 more motion between 8 Hz and 15 Hz.
First screenshot is IMs, second is RMs. In both, the references are the good 11:08 time, and the currents are the bad 12:20 time.
Stefan and TeamSEI are looking at the HEPI and ISI motion in the input HAMs right now.
EDIT: As one would expect, the REFL diodes see an increase in jitter at these same frequencies, predominantly in pitch. See 3rd attachment.
I quickly grabbed a time during O1 when this type of noise was happening, and it also corresponds to elevated motion around 6 Hz in RM1 and RM2. Attached are a spectrogram of DARM, and the pitch and yaw of RM2 at the time compared to a reference. There is a vertical mode of the RMs at 6.1 Hz (that's the LLO value, couldn't find it for LHO). Maybe those are bouncing more, and twice that is what's showing up in PRCL?
There should not be any ~6 Hz mode from the RM suspensions (HSTS or HLTS), so I am puzzled what this is. For a list of expected resonant frequencies for HSTS and HLTS see links from this page https://awiki.ligo-wa.caltech.edu/aLIGO/Resonances
@Norna: the RMs, or "REFL Mirrors", are HAM Tip-Tilt Suspensions, or HTTS (see, e.g., G1200071). These, indeed, have been modeled to have their highest (and only) vertical mode at 6.1 Hz (see the HTTS Model on the aWiki). I can confirm there is no data committed to the SUS repo on the measured vertical mode frequencies of these not-officially-SUS-group suspensions at H1. Apologies! Remember, these suspensions don't have transverse / vertical / roll sensors or actuators, so one has to rely on dirt coupling showing up in the ASDs of the longitudinal / pitch / yaw sensors. We'll grab some free-swinging ASDs during tomorrow's maintenance period.
Stefan has had Hugh and me looking at SEI coupling to PRCL over this period, and so far I haven't found anything, but HAM1 HEPI is coherent with the RM damp channels, and RM2 shows some coherence to CAL_DELTAL around 10 Hz. The attached plot shows coherence from RM2_P to HEPI Z L4Cs (blue), RM2_P to CAL_PRCL (brown), and RM2_P to CAL_DELTAL (pink). The HAM1_Z to PRCL coherence is similar to the RM2_P to CAL_PRCL one, so I didn't include it. HAM1 X and RY showed less coherence, and X was at lower frequency. There are some things we can do to improve the HAM1 motion if it's deemed necessary, like increasing the gain on the Z isolation loops, but there's not a lot of extra margin there.
Here are ASDs of the HAM3 HEPI L4Cs (~in-line dofs: RY RZ & X) and the CAL-CS_PRCL_DQ. The HAM2 and HAM1 HEPI channels would be assessed the same way: The increase in motion seen on the HAM HEPIs is much broader than that seen on the PRC signal. Also, none of these inertial sensor channels see any broadband coherence with the PRC, example also attached.
Free-swing PSDs of the RMs and OM are in alog 30852.
Rick, Evan G., Travis
As part of the bi-annual PCal maintenance, today we optimized the drive range of the PCal OFS for both end stations. The procedure for this was:
1) Turn off PCal lines and inject 10 Hz sine wave.
2) Break the OFS lock.
3) Note that the AOM drive is large (~1.5V) with loop open.
4) Adjust the offset in 1V steps to find the maximum OFS PD output.
5) Record max OFS PD output.
6) Close shutter and record minimum OFS PD output.
7) Set offset to half of 95% of max OFS PD output (see the sketch after this list).
8) Find the amplitude of the injected sine wave that gives us the maximum p-p OFS PD voltage.
9) Record magnitudes of carrier and sideband frequencies of the OFS and TX PDs.
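A minimal sketch of the offset arithmetic in step 7, using placeholder numbers (the actual max/min PD values recorded at the end stations are not reproduced here):

# Placeholder values for illustration only, not the measured end-station numbers:
v_max = 8.0                   # max OFS PD output (V), loop open, shutter open (step 5)
v_min = 0.05                  # min OFS PD output (V), shutter closed (step 6)
offset = 0.5 * 0.95 * v_max   # "half of 95% of max" (step 7)
print(offset)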
Results:
PCal X:
PCal Y:
As the final part of the maintenance for PCal Y, we took a transfer function of the OFS. See attached traces.
The limit on the excitation amplitude is the total number of counts that should be allowed to be sent to the Pcal. This is a frequency-independent value. So for Pcal Y, the maximum in H1:CAL-PCALY_EXE_SUM should be no larger than 44,000 counts. For Pcal X, the total of H1:CAL-PCALX_EXE_SUM and H1:CAL-PINJX_HARDWARE_OUT should be less than 57,000 counts.
Right now, Pcal Y has the following injections set:
Freq. (Hz)   Amp. (cts)
-------------------------
    7.9      20000.0
   36.7        750.0
  331.9       9000.0
 1083.7      15000.0
-------------------------
Total = 44750.0
This is just above the threshold. It might be worth returning the 331.9 Hz line to its O1 level (see LHO aLOG 30476 for the increased amplitude lines) since the detector noise in this region has recently improved.
Pcal X has the following injections set:
Freq. (Hz)   Amp. (cts)
-------------------------
 1501.3      39322.0
-------------------------
In addition, CW injections on Pcal X total ~1585 counts, giving Total = 40907.
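A quick check of the quoted totals against their thresholds:

# Pcal Y lines (cts) at 7.9, 36.7, 331.9, 1083.7 Hz:
pcaly_total = 20000.0 + 750.0 + 9000.0 + 15000.0
print(pcaly_total)              # 44750.0, just above the 44,000 count threshold
# Pcal X: 1501.3 Hz hardware line plus ~1585 cts of CW injections:
pcalx_total = 39322.0 + 1585.0
print(pcalx_total)              # 40907.0, below the 57,000 count threshold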
J. Kissel, D. Tuyenbayev,
We turned on two calibration lines using the ETMY coil drivers on stages L1 and L2. The SNRs of these lines are roughly 1/3 of those of the regular calibration lines (the regular cal. lines have an SNR of ~100 with 10 s FFTs).
                           _FREQ (Hz)   _CLKGAIN (ct)
H1:SUS-ETMY_L1_CAL_LINE       33.7          60.0       O2-scheme synched oscillator for kappa_U
H1:SUS-ETMY_L2_CAL_LINE       34.7          27.0       O2-scheme synched oscillator for kappa_P
These lines will be used to better quantify calibration of the PUM and UIM actuators.
A related report: LHO alog 29291.
κU, κP and κT using these additional lines at ~6am on Wed., Oct. 12 are:
κU = 0.9828 - 0.0637i
κP = 0.9499 - 0.0380i
κT = 0.9784 - 0.0440i
These are preliminary values calculated manually for transfer functions taken at a selected time (no trends or averaging except for 10avg. in DTT), and the DARM model for ER9 (H1params_2016-07-01.conf). We will use SLM tool data to look at the parameter trends.
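For convenience, the same values in magnitude/phase form (just a restatement of the numbers quoted above):

# Convert the quoted complex kappas into magnitude and phase (degrees):
import cmath

kappas = {'kappa_U': 0.9828 - 0.0637j,
          'kappa_P': 0.9499 - 0.0380j,
          'kappa_T': 0.9784 - 0.0440j}

for name in sorted(kappas):
    mag, phase = cmath.polar(kappas[name])
    print('%s: |k| = %.4f, phase = %.2f deg' % (name, mag, phase * 180.0 / cmath.pi))
# kappa_U: ~0.985 at -3.7 deg; kappa_P: ~0.951 at -2.3 deg; kappa_T: ~0.979 at -2.6 deg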
Greg M, Darkhan T,
We calculated kappa values using SLM data (10s FFTs) generated over 10 days between Oct 10 and Oct 21 with ER9 parameter file
'${CalSVN}/Runs/PreER9/H1/params/H1params_2016-07-01.conf'
The plots show raw (unaveraged) values. SNRs of the L1 and L2 lines (used for calculation of κU and κP respectively) were set to give approximately 1/3 SNR compared to the L3 line (with the ER9 noise floor).
The data was taken from an additional SLM tool instance which was set up by Greg to calculate FFTs of the 33.7 Hz, 34.7 Hz, 35.9 Hz and 36.7 Hz lines.