Have remained in observing. No changes to report.
I have turned off the violin mode damping filters for IX and IY by zeroing their gains. Patrick accepted these into SDF so that we can go to observing.
This is meant to be temporary, i.e., for this one lock only.
The next time the interferometer comes into lock, the Guardian will turn on the normal violin mode damping settings. These settings will appear as SDF diffs (two on IX, and six on IY). These should be accepted into SDF.
We do not expect these modes to ring up during the course of the lock. However, if the mode height on the control room DARM spectrum around 500 Hz rises above 10^-16 m/rtHz, the damping should be turned back on:
A screenshot of the nominal IY damping settings is attached (I didn't take one for IX).
ITMX:
ITMY:
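For anyone who wants to check the mode height offline rather than by eye on the control room spectrum, here is a minimal sketch. It assumes gwpy is available and that H1:CAL-DELTAL_EXTERNAL_DQ is a suitable calibrated DARM channel (both are assumptions, not part of the nominal procedure, and the GPS times are placeholders):

from gwpy.timeseries import TimeSeries

# Grab a few minutes of calibrated DARM and look at the ASD near the violin modes.
data = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', 1130551216, 1130551516)  # GPS times are placeholders
asd = data.asd(fftlength=8, overlap=4)          # roughly m/rtHz
peak = asd.crop(495, 510).max().value           # highest peak in the 495-510 Hz band

if peak > 1e-16:
    print('violin mode peak %.2e m/rtHz exceeds 1e-16 -- turn the damping back on' % peak)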
Thanks to Jeff, Patrick, and Jim for babysitting this.
The second lock (after the earthquake) lasted about 10 hours. Strangely, not all of the ITM first harmonics (which range from 500 to 505 Hz) seem to ring down.
An analysis of this data to measure Q of the fundamental modes for ITMX and ITMY violin modes is reported in 23331. The few modes that show an actual ringdown have a Q of about 0.3e9.
An analysis of this data to measure Q of the 2nd harmonics for ITMX and ITMY violin modes is reported in 23383.
There was some suspicion that the whitening gain for some of the ACB PDs had gone bad, but I have confirmed on the floor that the gain setting on the MEDM screen is reflected correctly in the diode amplifier box (D1301017) over the entire gain range for all baffle PDs for IX, IY, EX and EY.
I've followed the test procedure E1400003.
Violin mode damping for ITMX and ITMY is off. The damping gains of 0 were accepted in SDF (see attached screenshots). Evan wants to see how long they take to ring down without damping. ISI blends are at Quite_90.
During today's maintenance period I surveyed the switch configuration of the Output Configuration Switch (OCS) for each SUS oplev (WP #5588). The OCS is the small board with 4 banks of 8 switches each that attaches to the front of each oplev whitening chassis; it allows us to control the amount of whitening gain and filtering we apply to the oplev QPD signal. The results of this survey have been gathered and posted to the DCC here: T1500556. This document needs to be updated anytime the switch configuration of these OCSs changes; if anyone makes any changes to an OCS please let me know, via email or alog (preferably both), what you did so I can keep us up to date with the current OCS configuration.
TITLE: 11/03 [EVE Shift]: 00:00-08:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Lock acquisition
OUTGOING OPERATOR: Jeff
QUICK SUMMARY: Lights appear off in the LVEA, PSL enclosure, end X, end Y and mid X. I cannot tell from the camera if they are off at mid Y. Winds are less than 10 mph but appear to be increasing. ISI blends are at Quite_90. Earthquake seismic band is between 0.01 and 0.02 um/s. Microseism is between 0.1 and 0.3 um/s. Jeff has been having trouble with ITMY bounce and roll modes. He has been stopping at REDUCE_CARM_OFFSET_MORE to allow Jenne and Evan to investigate.
The roll modes have been conquered. When they were too high, the DHARD loops became unstable and we lost lock. We stopped at REDUCE_CARM_OFFSET_MORE, which was the highest state that we could sit at without losing the lock.
ITMY's roll mode was the worst offender according to the monitor screen, so we only worked on damping that mode. We used the usual filter settings (which are always left in the correct state) and the usual negative sign for the gain. The usual gain that guardian leaves for the ITMY roll mode damping is -40, but I was using larger gains to get the mode to go down faster. Once the roll modes were all around the "okay" level on the monitor screen, we let guardian continue the locking sequence.
We were concerned at one point that guardian had frozen on us, but it turns out that we were just being fooled. It looked like guardian was not finishing the DC_READOUT.main state, and not even starting the DC_READOUT.run state. However, it was all fine. I think I have been fooled by this before, but as a reminder (to myself especially), if an EPICS value is not going to change, guardian will not log that it tried to write the value. So, the last few steps of the main sequence were to write some gain values that were not going to change, which is why it looked like it was just stuck. Also, there was nothing in the run part of the state that was going to cause any log messages, so it didn't look like anything had happened, even though we were already getting the "return true" at the end of the state. In the end, we just requested a higher state, and guardian moved on as usual.
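To illustrate the point for myself, here is a toy sketch (conceptual only, not the actual Guardian/ezca internals, and the channel name is made up) of why a state whose last steps write unchanged values can look stuck even though it has completed:

# Toy illustration: a writer that only logs when a value actually changes,
# mimicking the behavior described above.
class QuietWriter(object):
    def __init__(self, initial):
        self.values = dict(initial)

    def write(self, channel, value):
        if self.values.get(channel) != value:
            print('%s => %s' % (channel, value))
            self.values[channel] = value
        # if the value is unchanged, nothing is logged at all

ezca = QuietWriter({'H1:LSC-FAKE_GAIN': 1.0})   # hypothetical channel

# Last steps of a main() sequence: the gain is already 1.0, so nothing prints...
ezca.write('H1:LSC-FAKE_GAIN', 1.0)

# ...and a run() with nothing to report completes silently, returning True.
def run():
    return True

print('state complete: %s' % run())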
Following Keith's LLO install, I have configured monit on h1fescript0 to automatically start the GraceDB notification system on system reboot, and to restart it if it stops running. Here are the details:
Created a new exttrig account local to h1fescript0. Currently this has the standard controls password.
Installed the run_ext_alert.sh script in the exttrig home directory.
Installed a start/stop script in /etc/init.d/ext_alert.
Installed monit and mailx on the machine.
Configured monit via /etc/monit/monitrc and /etc/monit/conf.d/monit_ext_alert.
Changed the ownership of the cert files from user controls to user exttrig.
Tested by killing the ext_alert process and checking that monit restarted it.
Removed the obsolete startup instructions from the [[ExternalAlertNotification]] wiki page.
This closes FRS3415.
Activity Log: All Times in UTC (PT)
15:51 (07:51) Norco - LN2 delivery to Mid-Y
16:00 (08:00) Take over from Jim
16:00 (08:00) Bubba – Greasing fans in the LVEA
16:00 (08:00) Ken – Working on solar panel on X-Arm
16:00 (08:00) Joe – Removing grouting forms from HAM1
16:15 (08:15) Christina & Karen – Cleaning at End-Y
16:15 (08:15) Jodi & Mitchel – Going to Mid-X to unload 1-Ton
16:18 (08:18) Filiberto & Leslie – Working on Temp sensors at BSC1 & BSC3
16:24 (08:24) Richard – Going into LVEA to check Beckhoff cabling
16:30 (08:30) Bubba – Finished in the LVEA, going to End-X then Mid-X to grease fans
16:46 (08:46) Richard – Out of LVEA – Going to End-Y
16:50 (08:50) Restart GraceDB
16:52 (08:52) Norco – LN2 delivery to Mid-X
17:00 (09:00) Christina & Karen – Finished at End-Y – Going to End-X
17:04 (09:04) Jason – Finished with OpLev survey in the LVEA – Going to End-X
17:06 (09:06) Kyle & Gerardo – Going to Mid-Y, Mid-X, and End-X moving vacuum equipment
17:09 (09:09) Nutsinee – Going into LVEA to work on CO2Y table
17:14 (09:14) Jodi & Mitchel – Done with storage moves at Mid-X
17:15 (09:15) Richard – Back from End-Y
17:16 (09:16) Hugh & Richard – Going into CER to work on ITM-Y Coil Driver
17:20 (09:20) Bubba – Finished with fans on Y arm – Going to X-Arm
17:21 (09:21) Jason – Finished with survey at End-X – Going to End-Y
17:22 (09:22) Keita – Going into LVEA to check transimpedance gain for ACB diode amps
17:45 (09:45) Restart GraceDB
17:48 (09:48) Christina & Karen – Finished at End-X – Going to the LVEA
17:49 (09:49) Richard – Going into CER to test ITM-Y Coil Driver with Hugh
17:50 (09:50) Jason – Back from End-Y – Finished with OpLev survey
18:10 (10:10) US Linen on site
18:25 (10:25) Nutsinee – Out of LVEA – Going to End-Y to restart PCal camera controller
18:30 (10:30) Richard & Ken – Going to work on solar panel on Y-Arm
18:33 (10:33) Peter – Going into the LVEA
18:35 (10:35) Keita – Finished at CS – Going to End-X and then End-Y
18:40 (10:40) Peter – Out of the LVEA
18:45 (10:45) Hugh – Finished with Coil Driver work
18:45 (10:45) Hugh – Going to end stations to check HEPI fluid levels
19:05 (11:05) Nutsinee – Back from end stations
19:11 (11:11) KingSoft – Going to RO to check water system
19:14 (11:14) Richard & Ken – Going to solar panel on Y-Arm
19:16 (11:16) Hugh – Back from End Stations HEPI check – Checking the CS HEPIs
19:19 (11:19) Bubba – Finished greasing fans
19:20 (11:20) Nutsinee – Going back to End-Y to work on camera controller
19:22 (11:22) Keita – Finished at End-X – Going to End-Y
19:32 (11:32) Kyle & Gerardo – Finished
19:35 (11:35) Filiberto & Leslie – Finished in LVEA
19:36 (11:36) Richard & Ken – Finished at Y-Arm – Going to X-Arm
19:45 (11:45) Nutsinee – Back from End-Y
19:55 (11:55) Ed & Jeff B – Sweep the LVEA
20:00 (12:00) Keita – Finished at End-Y – Turned off lights and WAP
20:02 (12:02) Jeff & Jason – Running charge measurements at End-Y and End-X
21:05 (13:05) Start relocking
22:16 (14:16) Kyle & Gerardo – Going to X28
23:42 (15:42) Keita & Kiwamu – Going into LVEA to check on ISS Second Loop
23:48 (15:48) Keita & Kiwamu – Out of the LVEA
00:00 (16:00) Turn over to Patrick

End of Shift Summary:
Title: 11/03/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jeff K, Ed, Jenne
Incoming Operator: Patrick
Shift Summary: Completed maintenance and measurement tasks at ~13:00. Ran initial alignment. Working on relocking.
J. Kissel, J. Oberling

Following the recently updated instructions, I've taught Jason how to measure charge today during the tail end of maintenance. We've gathered data today in hopes that, with the bonus data we'd gotten on Friday (LHO aLOG 22991) and last week's regular measurement (LHO aLOG 22903), we can make a more concrete conclusion about the current rate of charge accumulation on ETMY. This will help us predict whether we need to flip the ETMY bias voltage sign again before the (potential) end of the run on Jan 14.

In short -- based on today's data I think we will need to flip ETMY's ESD bias voltage again before the run is over, unless we change how often we keep ETMY's bias ON, or reduce it by half as LLO has done.

I attach the usual trend plots for both test masses, showing the effective bias voltage as well as the trend in actuation strength (as measured by the optical lever; I hope to show the data with the last two weeks of kappa_TST as well, demonstrating the longitudinal actuation strength -- stay tuned for a future aLOG). Further, as the 5th attachment, I plot the same results for the actuation strength in ETMY, but with the X-axis extended out to Jan 14th. If we take the last 4 dates' worth of data points (and really the last three, since it appears the data just after the flip was anomalous), we can see by eye that we'll need a flip sometime around mid-to-late December 2015 if the trend continues as is. I'll call this Option (1). Alternatively, I can think of two options forward:

(2) Reduce the bias voltage by 1/2 now-ish (again, as LLO has done), and take the same 8-hour shift duty-cycle hit to characterize the actuation strength at 1/2 the bias. Essentially taking the hit now rather than later.
or
(3) Edit the lock acquisition sequence in such a way that we turn OFF the H1 ETMY bias when we're not in low noise.

If we do (2), we're likely to slow down the charging rate continuously, and potentially not have to flip the sign at all. It's essentially no different from a sign flip in characterization of its effects, but we'd *definitely* have to change the CAL-CS calibration where we did not before.

In support of (3), the trends from ETMX confirm that turning OFF the ESD bias voltage for long periods of time (e.g. for 24+ hour-long observation segments [yeah!]) will reduce the charge accumulation rate. It's unclear (to me, right now) how *much* ETMY ESD OFF time we would get (though it is an answerable question by looking at a sort-of "anti"-duty-cycle pie chart), but I have a feeling that any little bit helps. This would require continued vigilance on charge measurements.

Why bother changing anything, you ask -- what's a well-known 10-20% systematic if you're going to correct for it at the end of the run? The calibration group's stance on this is that we have many systematics of which we need to keep track, some of which we have no control over (unlike this one) and some of which we haven't yet identified. We believe we're within our required uncertainty budget, but unknown systematics are always killer. As such, we want to at least keep the controllable, known systematics as low as reasonable. If we can recover/reduce a slowly increasing 10% systematic in the instrument without having to digitally correct for it, or chunk up the analysis segments into many "epochs" where you have to do this or that different thing over the course of the run (each with its own uncertainties), then we do so.

It's essentially the "if you can, fix the fundamental problem" argument, especially considering the other known and unknown systematics over which we don't have control. As such, and because the accumulation is still sufficiently slow (~8 [V/week] or ~2 [% actuation strength change/week]), we can decide when to make the change (and which of the options to pursue) based on man-power availability over the next holiday-filled two months.

On a side note: today's data set contains only 3 points per quadrant, where we normally get 4 to 5. This should not have a huge impact on accuracy, or on our ability to make a statement about the trend, especially since the sqrt(weighted variance) uncertainty bars are so small. It might also mean that, if pinched for maintenance day or bonus time, it may be OK to continue to get only 3 data points instead of 5.
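For the record, the by-eye extrapolation in Option (1) amounts to something like the following sketch. The (day, effective bias) pairs below are placeholders, NOT the measured data, and the 60 V threshold is purely an assumption for illustration:

# A minimal sketch of the linear extrapolation described above.
import numpy as np

days = np.array([0.0, 7.0, 11.0, 14.0])        # days since the bias sign flip (hypothetical)
eff_bias = np.array([5.0, 13.0, 18.0, 21.0])    # effective bias voltage [V] (hypothetical, ~8 V/week)

slope, intercept = np.polyfit(days, eff_bias, 1)    # [V/day], [V]
threshold = 60.0                                    # [V] at which we'd consider flipping (assumed)

days_to_flip = (threshold - intercept) / slope
print('accumulation rate: %.1f V/week' % (slope * 7.0))
print('threshold of %.0f V reached ~%.0f days after the flip' % (threshold, days_to_flip))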
I have spent about an hour today checking some basic behavior of the ISS 2nd loop servo circuits. I did the following two measurements.
Nothing crazy was found in today's test. Our plan is that whenever we have issues with the ISS 2nd loop in the future, we will repeat the same type of measurements with the aim of identifying what has changed.
Offset measurement
I checked how the reference signal propagates to the servo signal output by changing the reference signal. The measurement was done with the IMC intentionally unlocked, so there was no light incident on the ISS PD array in HAM2. The result is shown in the first attachment. An important quantity to note here is the signal output value when zero reference voltage was requested: it was 2.76 V. Also note that zero output signal can be obtained when the reference signal is requested to be between 5 and 10 mV. This does not sound crazy to me. According to the resultant plot, the linearity also seems fine. Additionally I checked the reference monitor as well, which also showed good linearity. Later, I did a coarse version of the same test with the full power incident on the IMC (~22 W). Even though the actual RIN was too high to make a precise measurement, the linear coefficient between the reference and output signals seemed consistent with what I had measured with no light. I also attach the raw data in txt format.
Operating point measurement
Then I locked the IMC and changed the input power step by step with a step size of about 2 [W]. In each step, I manually moved the reference signal such that the output signal stayed around zero [V]. This tells us how the operating point evolves as a function of the input power. I did this test from 2 [W] to 22 [W]. The result is shown in the second attachment. As you can see in the plot, the operating point evolves linearly with the input power, as expected. Additionally, I recorded IM4_TRANS_SUM as well in order to check the linearity of the output power of the IMC; IM4_TRANS also seems to be linear. The data in txt format is attached as well.
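As a sanity check, the linearity of the attached data can be quantified with a quick fit. This is only a sketch; the file name and the two-column layout (input power [W], reference operating point [V]) are assumptions about the attached txt file:

import numpy as np

power_W, refsig_V = np.loadtxt('iss_operating_point.txt', unpack=True)

slope, offset = np.polyfit(power_W, refsig_V, 1)
residuals = refsig_V - (slope * power_W + offset)

print('slope  = %.4f V/W' % slope)
print('offset = %.4f V' % offset)
print('max deviation from linear fit = %.4f V' % np.max(np.abs(residuals)))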
Note that all the tests were done with PD1-4 as the intensity sensor. PD5-8 were not checked during the tests.
There are two things that don't make sense.
1. DC gain mismatch between the drawing/traveller and the measurement.
When Second Loop boost, integrator and additional gain are all off and the variable gain slider is set to 0dB, the gain from REF signal monitor to Second Loop Output should be -220, taking into account the modification for the second loop electronics on the floor (D1300439, https://dcc.ligo.org/S1400214): -1 for the REF summation, -10dB (or 1/3.2) instead of 0dB for the variable gain amplifier, -100 for the last stage of "whitening" path (U34), and -7 for the daughter board.
D1300439 DC gain = (-1) * (1/3.2) * (-100) * (-7) ≈ -220.
From Kiwamu's plots, the measured DC gain is about -12.5 V / 0.04 V ≈ -310.
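As a quick arithmetic check of the two numbers above (stage gains as listed in the previous paragraph):

# Quick check of the DC gain numbers quoted above (stage gains as listed:
# REF summation, variable gain amp at -10 dB, last whitening stage, daughter board).
expected = (-1) * (1 / 3.2) * (-100) * (-7)      # about -219, i.e. ~ -220
measured = -12.5 / 0.04                          # about -312, i.e. ~ -310

print('expected DC gain: %.0f' % expected)
print('measured DC gain: %.0f' % measured)
print('ratio measured/expected: %.2f' % (measured / expected))   # about 1.4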
2. Mystery factor of two in REFSIG DAC output.
"REFSIG" slider has a mystery factor of 2 in the filter for the DAC output. As a result, when the slider is set to -0.577V, the output of the filter reads -1889 counts instead of -0.577V/40Vpp * 2^16ctspp = -945counts.
However, REF signal monitor, which I think is a read back of the offset voltage coming out of REFSIG DAC channel, reads -940 counts and 0.574V.
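A quick check of the counts arithmetic, assuming the 40 Vpp over 2^16 counts conversion quoted above is the right one for this DAC channel:

requested_V = -0.577
expected_counts = requested_V / 40.0 * 2**16      # -> about -945 counts
observed_counts = -1889                            # what the REFSIG filter output reads

print('expected: %.0f counts' % expected_counts)
print('observed: %d counts (ratio %.2f)' % (observed_counts,
                                            observed_counts / expected_counts))
# The readback (REF signal monitor) agrees with the expected -945 counts / 0.574 V,
# so the extra factor of ~2 appears only in the REFSIG DAC output path.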
Temperature sensors for BSC1 and BSC3 were installed this morning. Sensors were installed on the southeast corner of each chamber. Power cables were run from the SUS-R6 field rack (next to BSC2). The units seem to have been modified and have a different gain compared to the units installed at the end stations. Richard secured the temperature sensor at EY.
Ed & Jeff B. Sweep LVEA. Unplugged WAP, Removed stair from South side between HAM5 & HAM6, and turned off the lights.
Kyle, Gerardo CP5 LLCV typically 85% open under normal conditions -> Increased vapor pressure should reduce this and broaden the range of control
Kyle, Gerardo X-end @ 1000-1005 hrs. local. LEA @ ~1120-1125 hrs. local
Dave, Nutsinee
First I went to restart the UT1 unit at EY. Came back to find that the camera is still not connecting to the Camera Control Pro software. Dave was able to ping the camera so I restarted the h1pcaly minimac and went back to restart the UT1 unit. Still not working.
ECR E1500428
II 1149
WP 5587
BrianL RichardM DaveB JeffK Hugh
With email approval of the ECR from PeterF, and code testing completed at Stanford by BTL & Hugo (T1500555), we implemented this change at LHO today.
See T1500555 for details of the change with before & after model views and testing output.
First we power cycled the coil drivers, BIO chassis, and the FE computer for ITMY. This is an attempt to reset/clear who knows what might be causing the occasional glitching of that platform that is noted in Integration Issue 1149.
Next, after retrieving the new isi2stagemaster.mdl from the SVN and making/installing the new model, ITMY was restarted. We did further testing by unplugging the BIO cable from the coil driver and confirmed the watchdog would not trip unless the signal stayed low for 10 seconds.
Then all BSC ISIs were made safe and all restarted after recompiling the model.
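For reference, the behavior we confirmed (trip only if the BIO signal stays low for a full 10 seconds) is equivalent to something like this sketch. This is Python pseudologic for illustration only, not the actual RCG/Simulink implementation in isi2stagemaster.mdl:

HOLDOFF_S = 10.0

def update_watchdog(signal_ok, now, state):
    """state holds 'low_since': the time the signal first went low, or None.
    Returns True if the watchdog should trip."""
    if signal_ok:
        state['low_since'] = None            # signal recovered: reset the timer
        return False
    if state['low_since'] is None:
        state['low_since'] = now             # signal just went low: start timing
    return (now - state['low_since']) >= HOLDOFF_S

# e.g. a brief unplug never trips; leaving the cable out trips after 10 s.
state = {'low_since': None}
print(update_watchdog(False, 0.0, state))    # False
print(update_watchdog(False, 5.0, state))    # False
print(update_watchdog(False, 10.0, state))   # True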
Per JimW's request (alog 23032), I edited the ISC_LOCK guardian such that it throws notification messages when the ISS 2nd loop fails in the engagement. I have reloaded ISC_LOCK and checked the code into the SVN.
The notification messages will be displayed (1) if the IMC_LOCK guardian falls back to the LOCKED state (which is the programmed failure sequence) and/or (2) if the engagement process takes more than 4 minutes. Below is the new version of the ENGAGE_ISS_2ND_LOOP state. The red lines are the ones I newly added.
* * * * * * in ENGAGE_ISS_2ND_LOOP * * * * * *
def main(self):
    nodes['IMC_LOCK'] = 'ISS_ON'
    self.wait_iss_minutes = 4.0  # in [min]
    self.timer['ISSwait'] = self.wait_iss_minutes*60  # in [sec]

@get_watchdog_IMC_check_decorator(nodes)
@nodes.checker()
def run(self):
    # notify when the ISS fails. 2015-Nov-3, KI
    if nodes['IMC_LOCK'] == 'LOCKED' or nodes['IMC_LOCK'] == 'OPEN_ISS':
        notify('!! 2nd loop engagement failed !!')
    # notify when the ISS is taking too many minutes. 2015-Nov-3, KI
    if self.timer['ISSwait']:
        notify('ISS spent more than %d minutes. Check ISS' % (self.wait_iss_minutes))
    # if IMC arrives at the requested state, move on.
    return nodes['IMC_LOCK'].arrived
Later today, we confirmed that the ISC_LOCK guardian behaved as intended -- it displayed the messages when the ISS failed in the engagement. Also there were a few minor typos which are now fixed. The guardian code is checked into the SVN.
Summary: We had single-IFO time so I tested the new inverse actuation filter for PCALX. WP5530. Sudarshan and I believe we tracked down the factor of 2 and sign error from the initial PCALX test, see aLog 22160. We wanted to do this test to confirm that.

CBC injections:
The waveform file is: https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/H1/coherenttest1from15hz_1126257408.out
The XML parameter file is: https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/h1l1coherenttest1from15hz_1126257408.xml.gz
I did three CBC injections. The start times of the injections were: 1128303091.000000000, 1128303224.000000000, and 1128303391.000000000.
The command lines to do the injections were:
ezcawrite H1:CAL-INJ_TINJ_TYPE 1
awgstream H1:CAL-PCALX_SWEPT_SINE_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d >> 20151006_log_pcal.out
awgstream H1:CAL-PCALX_SWEPT_SINE_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d >> 20151006_log_pcal.out
awgstream H1:CAL-PCALX_SWEPT_SINE_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d >> 20151006_log_pcal.out
I have attached the log. I had to change the file extension in order to post it to the aLog.

DetChar injection:
I injected Jordan's waveform file: https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/detchar/detchar_03Oct2015_PCAL.txt
The start time of the injection is: 1128303531.000000000
The command line to do the injection is:
awgstream H1:CAL-PCALX_SWEPT_SINE_EXC 16384 detchar_03Oct2015_PCAL.txt 1.0 -d -d >> 20151006_log_pcal_detchar.out
I have attached the log. I had to change the file extension in order to post it to the aLog.
Chris Buchanan and Thomas Abbott,
Quick follow-up with omega scans. It looks like most of the power is seen in GDS-CALIB_STRAIN about eight seconds after each listed injection time, consistently for each of these three injections. Doesn't look like there are omicron triggers for these times yet, but omega scans for GDS-CALIB_STRAIN are attached.
Full omega scans generated here:
https://ldas-jobs.ligo.caltech.edu/~christopher.buchanan/Omega/Oct07_PCALX_Inj1/
https://ldas-jobs.ligo.caltech.edu/~christopher.buchanan/Omega/Oct07_PCALX_Inj2/
https://ldas-jobs.ligo.caltech.edu/~christopher.buchanan/Omega/Oct07_PCALX_Inj3/
For complete documentation of the detchar safety injections:
The injections are 12 sine-Gaussians, evenly (logarithmically) spaced from 30 Hz to 430 Hz, 3 seconds apart, with a Q of 6. There are three sets with increasing intended SNRs of 25, 50, and 100. However, the SNR is limited by the PCAL actuation range at higher frequencies.
To generate the waveforms I used the script written by Peter Shawhan / Andy located here: https://daqsvn.ligo-la.caltech.edu/websvn/filedetails.php?repname=injection&path=%2Fhwinj%2FDetails%2Fdetchar%2FGenerateSGSequencePCAL.m
I tuned the injections to stay within the PCAL actuation limits referenced in Peter Fritschel's document https://dcc.ligo.org/LIGO-
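For context, each waveform in the sequence is something like the following sketch. This is not the GenerateSGSequencePCAL.m script itself; the Q convention tau = Q / (sqrt(2) * pi * f0), the sample rate, and the duration are assumptions:

import numpy as np

def sine_gaussian(f0, amp, q=6.0, rate=16384, duration=1.0):
    """Return a sine-Gaussian of peak strain amplitude `amp` at frequency f0 [Hz]."""
    t = np.arange(-duration / 2, duration / 2, 1.0 / rate)
    tau = q / (np.sqrt(2.0) * np.pi * f0)
    return amp * np.exp(-t**2 / tau**2) * np.sin(2 * np.pi * f0 * t)

# e.g. the first injection in the table below: 30 Hz, intended SNR 25, amplitude 5.14e-21
h = sine_gaussian(30.0, 5.14e-21)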
The intended time (seconds from the start time of the injections), frequency, SNR, and amplitude (in units of strain) for all injections are pasted below:
__time (s)__ __freq (Hz)__ __SNR__ __amp (strain)__
0.50 30.0 25.0 5.14e-21
3.50 38.2 25.0 4.96e-21
6.50 48.7 25.0 2.15e-21
9.50 62.0 25.0 2.07e-21
12.50 79.0 25.0 1.75e-21
15.50 100.6 25.0 1.78e-21
18.50 128.2 25.0 1.92e-21
21.50 163.3 25.0 2.06e-21
24.50 208.0 25.0 2.39e-21
27.50 265.0 10.0 1.11e-21
30.50 337.6 5.0 8.39e-22
33.50 430.0 5.0 8.51e-22
36.50 30.0 50.0 1.03e-20
39.50 38.2 50.0 9.92e-21
42.50 48.7 50.0 4.31e-21
45.50 62.0 50.0 4.14e-21
48.50 79.0 50.0 3.51e-21
51.50 100.6 50.0 3.55e-21
54.50 128.2 50.0 3.85e-21
57.50 163.3 50.0 4.12e-21
60.50 208.0 50.0 4.77e-21
63.50 265.0 20.0 2.21e-21
66.50 337.6 10.0 1.68e-21
69.50 430.0 10.0 1.7e-21
72.50 30.0 100.0 2.06e-20
75.50 38.2 100.0 1.98e-20
78.50 48.7 100.0 8.62e-21
81.50 62.0 100.0 8.27e-21
84.50 79.0 100.0 7.01e-21
87.50 100.6 100.0 7.1e-21
90.50 128.2 100.0 7.69e-21
93.50 163.3 100.0 8.24e-21
96.50 208.0 100.0 9.54e-21
99.50 265.0 40.0 4.43e-21
102.50 337.6 20.0 3.36e-21
105.50 430.0 20.0 3.4e-21
Here are the SNRs of the CBC injections using the daily BBH matched filtering settings:

end time          SNR     chi-squared   newSNR
1128303098.986    20.35   32.86         19.86
1128303231.985    22.62   32.73         22.10
1128303398.985    23.25   21.05         23.25

The expected SNR is 18.4, though a recovered SNR of 20 (about a 10% difference from 18.4) is comparable to some of the SNR measurements when doing injections with CALCS in aLog 21890. Note this is the same waveform injected here, except in aLog 21890 it starts from 30 Hz; in both cases the matched filtering starts at 30 Hz. The last two have a bit higher SNR though.
I edited Peter S.'s matlab script to check the sign of these PCAL CBC injections. Looks like they have the correct sign. See attached plots. To run the code on the LHO cluster:
eval '/ligotools/bin/use_ligotools'
matlab -nosplash -nodisplay -r "checksign; exit"
Also, in hindsight I should have done a couple of CALCS CBC injections just to compare the SNR at the time with the PCAL injections.
gwdetchar-overflow -i H1 -f H1_R -O segments -o overflow --deep 1128303500 1128303651 124
It returns an empty table, so no overflows.
A time-domain check of the recovered strain waveforms is here: https://wiki.ligo.org/Main/HWInjO1CheckSGs. I found that the sign is correct, the amplitude matches within a few percent at most frequencies, and the phases are generally consistent with having a frequency-independent time delay of 3 or 4 samples (about 0.2 ms). Details are on that wiki page.
Thomas Abbot, Chris Buchanan, Chris Biwer

I've taken Thomas/Chris' table of recovered omicron triggers for the PCAL detchar injection and calculated the ratio of recovered/expected SNR and added some comments:

Recovered time     time since        frequency   recovered   expected   recovered/     comments
                   1128303531 (s)    (Hz)        SNR         SNR        expected SNR
1128303531.5156    0.515599966       42.56       34.07       25         1.3628
1128303534.5078    3.5078001022      61.90       39.41       25         1.5764
1128303537.5039    6.5039000511      64.60       28.29       25         1.1316
1128303540.5039    9.5039000511      79.79       23.89       25         0.9556
1128303543.5039    12.5039000511     1978.42     21.38       25         0.8552         suspicious, the frequency is very high
1128303546.502     15.5020000935     144.05      26.24       25         1.0496
1128303549.502     18.5020000935     185.68      26.38       25         1.0552
1128303552.502     21.5020000935     229.34      26.29       25         1.0516
1128303555.501     24.5009999275     918.23      27.34       25         1.0936
1128303558.501     27.5009999275     315.97      11.05       10         1.105
1128303564.5005    33.5004999638     451.89      6.76        5          1.352
1128303567.5156    36.515599966      50.12       68.53       50         1.3706
1128303570.5078    39.5078001022     61.90       78.23       50         1.5646
1128303573.5039    42.5039000511     76.45       52.04       50         1.0408
1128303576.5039    45.5039000511     91.09       48.42       50         0.9684
1128303579.5039    48.5039000511     116.63      47.73       50         0.9546
1128303582.502     51.5020000935     144.05      52.59       50         1.0518
1128303585.502     54.5020000935     177.91      52.3        50         1.046
1128303588.502     57.5020000935     261.81      54.8        50         1.096
1128303591.501     60.5009999275     323.36      55.64       50         1.1128
1128303594.501     63.5009999275     414.01      19.67       20         0.9835
1128303597.501     66.5009999275     390.25      9.55        10         0.955
1128303600.5005    69.5004999638     481.99      9.34        10         0.934
1128303603.5156    72.515599966      48.35       136.81      100        1.3681
1128303606.5078    75.5078001022     71.56       156.91      100        1.5691
1128303609.5039    78.5039000511     76.45       102.72      100        1.0272
1128303612.5039    81.5039000511     138.03      102.85      100        1.0285
1128303615.5039    84.5039000511     134.83      95.52       100        0.9552
1128303618.502     87.5020000935     1283.14     104.17      100        1.0417         frequency seems a bit high
1128303621.502     90.5020000935     211.97      107.18      100        1.0718
1128303624.502     93.5020000935     261.81      104.53      100        1.0453
1128303627.501     96.5009999275     323.36      109.66      100        1.0966
1128303630.501     99.5009999275     414.01      42.15       40         1.05375
1128303633.5005    102.5004999638    959.39      19.11       20         0.9555         this last injection had some kind of glitch on it

In most cases it looks like the ratio is within 0.1 of 1. On a quick glance I see 10 injections that were not within this range.