TITLE: Sept 23 Day shift, 15:00-23:00UTC, 08:00-16:00PT
STATE Of H1: Commissioning, Range = 78Mpc, lock is 30+ hours long!
SUPPORT: MikeL, Sheila, Chris Biwer, JeffK
SHIFT SUMMARY: In Observe most of shift, currently in Commissioning. Commissioning work includes injections, a filter change, and other tasks in progress.
INCOMING OPERATOR: Jim
ACTIVITY LOG:
15:58:43UTC - DMT glitch, Range = -1, no effect on IFO
16:00:43UTC - DMT glitch, Range = -1, no effect on IFO
20:16:27UTC - Commissioning, injections
20:35UTC - Chris Biwer injections end
20:45:20UTC - cleared the ETMX timing error bit that was stuck
20:45:40UTC - GRB notice
20:47:16UTC - put IFO back into Observe
22:21:53UTC - Commissioning
22:22:06UTC - engaged Sheila's DHARD_Y filter
22:22:16UTC - put IFO back into Observe
22:22:18UTC - with the new filter, SDF kicked the IFO out of Observe
Currently JeffK has the IFO.
J. Kissel, C. Biwer As the title says. Nothing exciting, just updating a status checking bit in the ONC. SAFE and OBSERVE.snaps have been updated and committed to the userapps repo.
J. Kissel Similar to what was done for the HARDWARE injection filter (LHO aLOG 21703), I've added a minus sign to the identical BLIND injection filter bank. This facilitates testing this bank, which we hope to do soon. I've also turned on FM5, where this minus sign lives, accepted the new configuration in the SDF system (in both the OBSERVE and SAFE.snaps), and committed the new filter bank to the repo.
Test of Sheila's filter. GRB stand-down time is complete. Returning to Commissioning while LLO is down due to a temperature transient in their LVEA.
The FOM for H1 Range was modified at some point in the last 24 hours, and at Vern and Mike's request I backed out the 20Mpc horizontal line, and saved the template with a y-axis range of 0-101Mpc.
The FOM for H1 Range is posted on the website and so is considered to be under version control, though not in SVN; changes need to be approved before being implemented.
L1 went out of lock. At H1 we turned off the intent bit and injected some hardware injections. The hardware injections used the same waveform that was injected on September 21. For more information about those injections see aLog entry 21759; for information about the waveform see aLog entry 21774. tinj was not used to do the injections. The commands to do the injections were:

awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
ezcawrite H1:CAL-INJ_TINJ_TYPE 1
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt

To my chagrin, the first two injections were labeled as burst injections. Taken from the awgstream log, the corresponding approximate injection times are:

1127074640.002463000
1127074773.002417000
1127075235.002141000
1127075742.002100000

The expected SNR of the injection is ~18 without any scaling factor. I've attached omegascans of the injections. There is no sign of the "pre-glitch" that was seen on September 21.
Attached stdout of command line.
Neat! looks good.
Hi Chris, It looks like there is a 1s offset between the times you report and the rough coalescence time of the signal. Do you know if it is exactly 1s difference?
Yes, as John said, all of the end times of the waveforms are just about 1 second later than what's in the original post. I ran a version of my simple bandpass-filtered overlay script for these waveforms. Filtering both the model (strain waveform injected into the system) and the data from 70-260 Hz, it overlays them, and also does a crude (non-optimal) matched filter to estimate the relative amplitude and time offset. The four plots attached are for the four injected signals; note that the first one was injected with a scale factor of 0.5 and is not "reconstructed" by my code very accurately. The others actually look rather good, with reasonably consistent amplitudes and time delays. Note that the sign of the signal came out correctly!
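The overlay script itself isn't attached, but the crude matched-filter idea described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual script: the synthetic chirp, sample rate, scale factor, and function names below are all my own assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=70.0, hi=260.0, order=4):
    # Zero-phase Butterworth bandpass over the 70-260 Hz band used above.
    b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
    return filtfilt(b, a, x)

def crude_match(template, data, fs):
    # Cross-correlate the bandpassed template against the bandpassed data
    # and return (relative amplitude, time offset in seconds).
    t_f = bandpass(template, fs)
    d_f = bandpass(data, fs)
    corr = np.correlate(d_f, t_f, mode="full")
    k = int(np.argmax(np.abs(corr)))
    delay = (k - (len(t_f) - 1)) / float(fs)
    amp = corr[k] / np.dot(t_f, t_f)  # least-squares scale at the best lag
    return amp, delay

# Demo: bury a scaled, delayed copy of a synthetic chirp in white noise.
fs = 4096
t = np.arange(0, 1.0, 1.0 / fs)
template = np.sin(2 * np.pi * (100 + 80 * t) * t) * np.exp(-2 * t)
rng = np.random.default_rng(0)
shift = 1000  # samples of delay, ~0.244 s
data = 0.01 * rng.standard_normal(2 * len(t))
data[shift:shift + len(t)] += 0.5 * template
amp, delay = crude_match(template, data, fs)
```

Because both the template and the data pass through the same linear filter, the recovered scale factor comes out close to the injected 0.5, and the correlation peak lands at the injected delay; the real script presumably does something similar with the injected strain waveform and GDS-CALIB_STRAIN.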
I ran the daily BBH search with the injected template on the last two injections (1127075235 and 1127075742). For 1127075235, the recovered end time was 1127075235.986, the SNR was 20.42, the chi-squared was 29.17, and the newSNR was 19.19. For 1127075742, the recovered end time was 1127075742.986, the SNR was 20.04, the chi-squared was 35.07, and the newSNR was 19.19.
KW sees all the injections with the +1 sec delay, some of them in multiple frequency bands. From:

/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074624-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074752-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075200-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075712-64.trg

tcent              fcent  significance  channel
1127074640.979948  146     26.34  H1_GDS-CALIB_STRAIN_32_2048
1127074774.015977  119     41.17  H1_GDS-CALIB_STRAIN_8_128
1127074773.978134  165    104.42  H1_GDS-CALIB_STRAIN_32_2048
1127075235.980545  199    136.82  H1_GDS-CALIB_STRAIN_32_2048
1127075743.018279  102     74.87  H1_GDS-CALIB_STRAIN_8_128
1127075742.982020  162    113.65  H1_GDS-CALIB_STRAIN_32_2048

Omicron also sees them with the same delay. From:

/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074621-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074771-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075221-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075731-30.xml

peak time             fcent      snr
1127074640.977539062   88.77163   6.3716
1127074773.983397960  648.78342  11.41002  <- surprisingly high fcent, could be due to clustering
1127075235.981445074  181.39816  13.09279
1127075742.983397960  181.39816  12.39437

LIB single-IFO jobs also found all the events.
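The bookkeeping of matching trigger tcent values against the expected injection times (+1 s) can be sketched as follows. This is illustrative only, not KW or Omicron code: the helper name and the 0.1 s tolerance are my own choices, and the rows are copied from the KW table above.

```python
# Expected injection times (integer GPS of the four injections above).
INJ_GPS = [1127074640.0, 1127074773.0, 1127075235.0, 1127075742.0]

def match_triggers(rows, inj_times=INJ_GPS, delay=1.0, tol=0.1):
    # Return {injection_time: [trigger rows]} for triggers whose tcent
    # (first whitespace-separated column) falls within tol seconds of
    # injection_time + delay.
    out = {t: [] for t in inj_times}
    for line in rows:
        tcent = float(line.split()[0])
        for t in inj_times:
            if abs(tcent - (t + delay)) <= tol:
                out[t].append(line)
    return out

# KW trigger rows from the table above (tcent fcent significance channel).
rows = [
    "1127074640.979948 146 26.34 H1_GDS-CALIB_STRAIN_32_2048",
    "1127074774.015977 119 41.17 H1_GDS-CALIB_STRAIN_8_128",
    "1127074773.978134 165 104.42 H1_GDS-CALIB_STRAIN_32_2048",
    "1127075235.980545 199 136.82 H1_GDS-CALIB_STRAIN_32_2048",
    "1127075743.018279 102 74.87 H1_GDS-CALIB_STRAIN_8_128",
    "1127075742.982020 162 113.65 H1_GDS-CALIB_STRAIN_32_2048",
]
matched = match_triggers(rows)
```

With these rows, every injection picks up at least one trigger, and the second and fourth injections each pick up two (one per KW frequency band), consistent with the "+1 sec delay" pattern noted above.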
Post-proc pages can be found here:
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074640.98-0/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074773.98-1/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075235.98-2/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075742.98-3/H1L1/H1/posplots.html
All runs appear to have reasonable posteriors.
Here is how Omicron detects these injections:
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074641/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074774/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075236/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075743/
Here are the parameters measured by Omicron (loudest tile):
1127074640: t=1127074640.981, f=119.9 Hz, SNR=6.7
1127074773: t=1127074773.981, f=135.3 Hz, SNR=11.8
1127075235: t=1127075235.981, f=114.9 Hz, SNR=12.8
1127075742: t=1127075742.981, f=135.3 Hz, SNR=12.4
The BayesWave single-IFO (glitch-only) analysis recovers these injections with the following SNRs:
4640: 8.65535
4773: 19.2185
5235: 20.5258
5742: 20.1666
The results are posted here: https://ldas-jobs.ligo.caltech.edu/~meg.millhouse/O1/CBC_hwinj/
Going to Observe without filter test.
General Question: Does this knock us out of Observation Mode? Could I have reset this this morning?
IFO was in Commissioning.
No, the DIAG_MAIN guardian node is NOT under the OBSERVATION READY check. It can be changed/reset/etc. without affecting OBSERVATION MODE.
I think Cheryl was talking about the diag reset button on the GDS overview screen for the front end, not the DIAG_MAIN guardian.
Finished hardware injection tests. More details in upcoming aLog. Last injection went in at approximately 20:35 UTC.
LLO is down - H1 in commissioning for injections and a filter test, starting at 20:16:27UTC.
At times 1127059140 GPS (15:58:43 UTC) and 1127059260 GPS (16:00:43 UTC), the H1 Range DMT monitor did not get data, so it filled in the default value, which is -1.
There was no glitch or change to the H1 IFO.
Will notify when done.
Intent mode was turned off before tests.
I've uploaded new and approved coherent waveforms for hardware injection testing. SVN is at revision number 5097. There is an H1L1 coherent version of the September 21 test injection that was done at LHO. It can be found here: * H1 waveform * L1 waveform * XML parameter file There is also an H1L1 coherent version of the September 21 test injection whose waveform begins at 15 Hz; it should be tested after the previous waveform has been tested. It can be found here: * H1 waveform * L1 waveform * XML parameter file
I've attached time series of the four waveforms. Y-axis is h(t) in strain. EDIT: Re-uploaded image files with title and proper y and x labels.
Chris, I think the links to the XML parameter files are broken, could you please add corrected ones? Error message: The requested URL /svn/injection/hwinj/Details/Inspiral/coherenttest1_1126257410.xml.gz was not found on this server. Cheers, Bruce
Hi, sorry, I forgot the h1l1 at the beginning:
https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/h1l1coherenttest1_1126257410.xml.gz
and
https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/h1l1coherenttest1from15hz_1126257410.xml.gz
VerbalAlarms reports that the DIAG_MAIN guardian node is in error.
Appears to be a standard NDS2 burp:
2015-09-21T22:57:59.11305 File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/worker.py", line 459, in run
2015-09-21T22:57:59.11306 retval = statefunc()
2015-09-21T22:57:59.11306 File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 178, in run
2015-09-21T22:57:59.11307 return SYSDIAG.run_all()
2015-09-21T22:57:59.11307 File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 151, in run_all
2015-09-21T22:57:59.11308 ret &= self.run(name)
2015-09-21T22:57:59.11308 File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 136, in run
2015-09-21T22:57:59.11310 for msg in self[name](**kwargs):
2015-09-21T22:57:59.11311 File "/opt/rtcds/userapps/release/sys/h1/guardian/DIAG_MAIN.py", line 66, in PSL_ISS
2015-09-21T22:57:59.11311 diff_pwr = avg(-10, 'PSL-ISS_DIFFRACTION_AVG')
2015-09-21T22:57:59.11312 File "/ligo/apps/linux-x86_64/cdsutils-497/lib/python2.7/site-packages/cdsutils/avg.py", line 67, in avg
2015-09-21T22:57:59.11312 for buf in conn.iterate(*args):
2015-09-21T22:57:59.11313 RuntimeError: Requested data were not found.
Reloaded.
Having the guardian go into error because of an NDS2 hiccough is kind of irritating.
Based on this StackExchange answer, I added the following handler function to the DIAG_MAIN guardian:
def try_avg(*args):
    while True:
        try:
            q = avg(*args)
        except RuntimeError:
            log('Encountered runtime error while trying to average {}'.format(args[1]))
            continue
        break
    return q
where avg is the cdsutils.avg function.
This is now used for the ISS diffraction and the ESD railing diag tests. If we like it, we should consider propagating it to the rest of the guardian.
This is a fine hack solution for this one case, but please don't propagate this around to all guardian NDS calls. Let me come up with a way to better handle it within the guardian infrastructure, so we don't end up with a lot of cruft in the guardian user code.