Reports until 18:19, Wednesday 23 September 2015
H1 CDS
jim.warner@LIGO.ORG - posted 18:19, Wednesday 23 September 2015 (21870)
ext_alert.py code updated at LHO

The code that brings up GraceDB alerts on our overviews was somewhat outdated, as it used a FAR threshold of 1e-6 Hz (~once every 12 days) instead of 3.8e-7 Hz (~once a month). Dave wrote up instructions for me to do the update when one of the IFOs dropped out of lock. Just before he left, the opportunity arose, so I updated the code. It's running now.
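As a sanity check on the new threshold, the rate conversion works out like this (a quick sketch, not the actual ext_alert.py logic):

```python
# Quick sketch (not the actual ext_alert.py code): convert a false-alarm-rate
# threshold in Hz to a mean time between alerts in days.
def far_to_period_days(far_hz):
    """Mean time between false alarms, in days, for a FAR given in Hz."""
    seconds_per_day = 86400.0
    return 1.0 / (far_hz * seconds_per_day)

print(far_to_period_days(1e-6))    # old threshold: ~11.6 days
print(far_to_period_days(3.8e-7))  # new threshold: ~30.5 days
```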

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 17:03, Wednesday 23 September 2015 - last comment - 19:18, Thursday 24 September 2015(21868)
DARMOLGTF and PCALY to DARM Sweeps taken for Calibration Validation
J. Kissel

I've taken new DARM open loop gain and PCALY to DARM transfer functions to validate the current calibration. For the PCALY to DARM measurement, I take the transfer function between PCALY's RX PD (calibrated into [m] of ETMY motion) and the CAL-CS front-end's DELTAL_EXTERNAL (calibrated into DARM [m], which -- since we're driving ETMY -- is identical to [m] of ETMY motion). These two methods agree to within 4% and 3 [deg] over the 15 [Hz] to 1.2 [kHz] band. The calibration discrepancy expands to a whopping 9% and 4 [deg] if we look at frequencies between 5 and 15 [Hz] ;-).
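For reference, the quoted agreement numbers are just the magnitude and phase of the ratio of the two calibrated transfer functions at each frequency; a minimal sketch (illustrative only, not the CalSVN processing scripts):

```python
import cmath

# Illustrative only (not the CalSVN processing scripts): the quoted agreement
# is the magnitude of the ratio of the two transfer functions (percent from
# unity) and the phase of that ratio (degrees), evaluated at each frequency.
def tf_discrepancy(tf_pcal, tf_deltal):
    """Return (magnitude discrepancy in %, phase discrepancy in degrees)."""
    ratio = tf_pcal / tf_deltal
    mag_pct = 100.0 * abs(abs(ratio) - 1.0)
    phase_deg = cmath.phase(ratio) * 180.0 / cmath.pi
    return mag_pct, phase_deg

# e.g. a 4% magnitude mismatch with no phase mismatch:
print(tf_discrepancy(1.04 + 0.0j, 1.0 + 0.0j))
```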

I think we're in great shape, boys and girls.

Details
--------------
- CAL-CS does not correct for any slow time dependence (optical gain, test mass actuation strength, etc), so any agreement you see with the current interferometer is agreement with the reference model taken on Sep 10th 2015 (LHO aLOG 21385).

- In the previous measurement, Kiwamu had to fudge the phase by ~90 [us] to get the two to agree. Now that we've updated the cycle delay between sensing and actuation to 7 [16 kHz clock cycles] to better approximate the high-frequency response of the AA, AI, and OMC DCPD signal chain, we no longer have to fudge the phase -- AND the phase between the two metrics agrees. NICE.
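A quick back-of-envelope for why the cycle delay matters (my arithmetic, not from the model files): a pure N-cycle delay at the 16384 Hz front-end rate contributes a phase that grows linearly with frequency.

```python
# Back-of-envelope (my arithmetic, not the model files): phase a pure
# n-cycle delay at the 16384 Hz front-end rate adds at frequency f.
def delay_phase_deg(f_hz, n_cycles=7, fs_hz=16384.0):
    """Phase in degrees accumulated by an (n_cycles / fs_hz) time delay."""
    return 360.0 * f_hz * n_cycles / fs_hz

print(delay_phase_deg(100.0))   # ~15.4 deg at 100 Hz
print(delay_phase_deg(1000.0))  # ~153.8 deg at 1 kHz
```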

- I've made sure to turn OFF calibration lines during both of these measurements, but there should be ample data just before and just after with calibration lines ON, such that we can compare our results against theirs to help refine our estimates of systematic error.

- The measurements live in
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/2015-09-23_H1_DARM_OLGTF_7to1200Hz.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL$/2015-09-23_PCALY2DARMTF_7to1200Hz.xml
and have been committed to the CalSVN. We'll process these results shortly and perform a similar analysis to the one Darkhan did in yesterday's aLOG 21827.
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 14:41, Thursday 24 September 2015 (21898)

The parameter file for this measurement was committed to calibration SVN:

CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1127083151.m

Attached plots show components of DARM loop TF and their residuals vs. DARM model for O1.

Non-image files attached to this comment
kiwamu.izumi@LIGO.ORG - 19:18, Thursday 24 September 2015 (21926)CAL

It looks better. Very nice.

By the way, I wanted to measure the open loop gain without the MICH or SRCL feedforward, because I wanted to demonstrate that the unknown shape in the magnitude residual is not due to these feedforward corrections. Though this may be a crazy thought. Anyway, it would be great if you could run an open-loop measurement without the feedforwards at some point, just once.

H1 General
cheryl.vorvick@LIGO.ORG - posted 16:10, Wednesday 23 September 2015 (21865)
OPS Day Summary:

TITLE:  Sept 23 Day shift, 15:00-23:00UTC, 08:00-16:00PT

STATE Of H1:  Commissioning, Range = 78Mpc, lock is 30+ hours long!

SUPPORT: MikeL, Sheila, Chris Biwer, JeffK

SHIFT SUMMARY:  In Observe most of shift, currently in Commissioning.  Commissioning includes injections, a filter change, and other work in progress.

INCOMING OPERATOR: Jim

ACTIVITY LOG

15:58:43UTC - DMT glitch, Range = -1, no effect on IFO

16:00:43UTC - DMT glitch, Range = -1, no effect on IFO

20:16:27UTC - Commissioning, injections 

20:35UTC - Chris Biwer injections end

20:45:20UTC - cleared the ETMX timing error bit that was stuck

20:45:40UTC - GRB notice

20:47:16UTC - put IFO back into Observe

22:21:53UTC - Commissioning

22:22:06UTC - engaged Sheila's DHARD_Y filter

22:22:16UTC - put IFO back into Observe

22:22:18UTC - with the new filter, SDF kicked the IFO out of Observe

 

Currently JeffK has the IFO.

 
H1 INJ
jeffrey.kissel@LIGO.ORG - posted 16:08, Wednesday 23 September 2015 (21864)
INJ ODC BitMask Updated To Reflect Minus Sign Installation on INV Actuation Filter in HARDWARE Bank
J. Kissel, C. Biwer

As the title says. Nothing exciting, just updating a status-checking bit in the ODC. SAFE and OBSERVE.snaps have been updated and committed to the userapps repo.
H1 INJ (CAL)
jeffrey.kissel@LIGO.ORG - posted 15:54, Wednesday 23 September 2015 (21861)
Sign Flip added to BLIND INJ Inverse Actuation Filter
J. Kissel

Similar to what was done for the HARDWARE injection filter (LHO aLOG 21703), I've added a minus sign to the identical BLIND injection filter bank. This facilitates testing this bank, which we hope to do soon.

I've also turned on FM5 where this minus sign lives, accepted the new configuration in the SDF system (both in OBSERVE and SAFE.snaps), and committed the new filter bank to the repo.
Images attached to this report
H1 ISC (ISC)
cheryl.vorvick@LIGO.ORG - posted 15:26, Wednesday 23 September 2015 (21859)
DHARD_Y FM2 Boost filter engaged at 22:22:06UTC

Test of Sheila's filter.  GRB stand-down time is complete.  Returning to Commissioning while LLO is down due to a temperature transient in their LVEA.

H1 General (DetChar)
cheryl.vorvick@LIGO.ORG - posted 14:01, Wednesday 23 September 2015 (21856)
H1 Range FOM updated

The FOM for H1 Range was modified at some point in the last 24 hours, and at Vern and Mike's request I backed out the 20Mpc horizontal line, and saved the template with a y-axis range of 0-101Mpc.

 

The FOM for H1 Range is posted on the website and so is considered to be under version control, though not in SVN, and changes need to be approved before being implemented.

H1 INJ (DetChar, INJ)
christopher.biwer@LIGO.ORG - posted 13:53, Wednesday 23 September 2015 - last comment - 12:53, Friday 25 September 2015(21852)
single-IFO hardware injection tests at H1
L1 went out of lock. At H1 we turned off the intent bit and injected some hardware injections.

The hardware injections used the same waveform that was injected on September 21. For more information about those injections, see aLog entry 21759.

For information about the waveform see aLog entry 21774.

tinj was not used to do the injections. The commands to do the injections were:
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
ezcawrite H1:CAL-INJ_TINJ_TYPE 1
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt

To my chagrin the first two injections were labeled as burst injections.

Taken from the awgstream log, the following are approximate injection times:

1127074640.002463000
1127074773.002417000
1127075235.002141000
1127075742.002100000

The expected SNR of the injection is ~18 without any scaling factor.
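Since matched-filter SNR is linear in the injected amplitude (a standard assumption, not stated explicitly above), the four injections should come out near these values:

```python
# Assumption (standard matched-filter scaling, not stated explicitly in the
# log): injected SNR is linear in the awgstream scale factor.
nominal_snr = 18.0
scales = [0.5, 1.0, 1.0, 1.0]  # scale factors from the awgstream commands
expected_snrs = [nominal_snr * s for s in scales]
print(expected_snrs)  # [9.0, 18.0, 18.0, 18.0]
```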

I've attached omegascans of the injections. There is no sign of the "pre-glitch" that was seen on September 21.
Images attached to this report
Comments related to this report
christopher.biwer@LIGO.ORG - 13:57, Wednesday 23 September 2015 (21855)DetChar, INJ
Attached stdout of command line.
Non-image files attached to this comment
david.shoemaker@LIGO.ORG - 14:04, Wednesday 23 September 2015 (21857)
Neat! looks good.
john.veitch@LIGO.ORG - 01:37, Thursday 24 September 2015 (21878)
Hi Chris,
It looks like there is a 1s offset between the times you report and the rough coalescence time of the signal. Do you know if it is exactly 1s difference?
peter.shawhan@LIGO.ORG - 09:19, Thursday 24 September 2015 (21887)INJ
Yes, as John said, all of the end times of the waveforms are just about 1 second later than what's in the original post.

I ran a version of my simple bandpass-filtered overlay script for these waveforms.  Filtering both the model (strain waveform injected into the system) and the data from 70-260 Hz, it overlays them, and also does a crude (non-optimal) matched filter to estimate the relative amplitude and time offset.  The four plots attached are for the four injected signals; note that the first one was injected with a scale factor of 0.5 and is not "reconstructed" by my code very accurately.  The others actually look rather good, with reasonably consistent amplitudes and time delays.  Note that the sign of the signal came out correctly!
Images attached to this comment
Non-image files attached to this comment
christopher.biwer@LIGO.ORG - 09:47, Thursday 24 September 2015 (21890)
I ran the daily BBH search with the injected template on the last two injections (1127075235 and 1127075742).

For 1127075235; the recovered end time was 1127075235.986, the SNR was 20.42, the chi-squared was 29.17, and the newSNR was 19.19.
For 1127075742; the recovered end time was 1127075742.986, the SNR was 20.04, the chi-squared was 35.07, and the newSNR was 19.19.
reed.essick@LIGO.ORG - 14:19, Thursday 24 September 2015 (21896)
KW sees all the injections with the +1 sec delay, some of them in multiple frequency bands.
From 
  /gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074624-64.trg
  /gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074752-64.trg
  /gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075200-64.trg
  /gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075712-64.trg

    tcent         fcent significance channel
1127074640.979948   146    26.34      H1_GDS-CALIB_STRAIN_32_2048
1127074774.015977   119    41.17      H1_GDS-CALIB_STRAIN_8_128
1127074773.978134   165   104.42      H1_GDS-CALIB_STRAIN_32_2048
1127075235.980545   199   136.82      H1_GDS-CALIB_STRAIN_32_2048
1127075743.018279   102    74.87      H1_GDS-CALIB_STRAIN_8_128
1127075742.982020   162   113.65      H1_GDS-CALIB_STRAIN_32_2048

Omicron also sees them with the same delay
From :
  /home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074621-30.xml
  /home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074771-30.xml
  /home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075221-30.xml
  /home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075731-30.xml

    peak time          fcent      snr
1127074640.977539062  88.77163  6.3716
1127074773.983397960 648.78342 11.41002  <- surprisingly high fcent, could be due to clustering
1127075235.981445074 181.39816 13.09279
1127075742.983397960 181.39816 12.39437
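A quick consistency check of the ~1 s offset John flagged, using the awgstream start times from the parent entry and the Omicron peak times above:

```python
# Quick consistency check (numbers copied from the parent entry and the
# Omicron table above): gap between each awgstream start time and the
# corresponding Omicron peak time, confirming the ~1 s delay.
awg_starts = [1127074640.002463, 1127074773.002417,
              1127075235.002141, 1127075742.002100]
omicron_peaks = [1127074640.977539062, 1127074773.983397960,
                 1127075235.981445074, 1127075742.983397960]
offsets = [tp - t0 for t0, tp in zip(awg_starts, omicron_peaks)]
print(['%.3f' % d for d in offsets])  # each ~0.98 s
```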

LIB single-IFO jobs also found all the events. Post-proc pages can be found here:

 https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074640.98-0/H1L1/H1/posplots.html
 https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074773.98-1/H1L1/H1/posplots.html
 https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075235.98-2/H1L1/H1/posplots.html
 https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075742.98-3/H1L1/H1/posplots.html

All runs appear to have reasonable posteriors.
florent.robinet@LIGO.ORG - 23:17, Thursday 24 September 2015 (21935)DetChar
Here is how Omicron detects these injections:

https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074641/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074774/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075236/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075743/

Here are the parameters measured by Omicron (loudest tile):
1127074640: t=1127074640.981, f=119.9 Hz, SNR=6.7
1127074773: t=1127074773.981, f=135.3 Hz, SNR=11.8
1127075235: t=1127075235.981, f=114.9 Hz, SNR=12.8
1127075742: t=1127075742.981, f=135.3 Hz, SNR=12.4
joey.key@LIGO.ORG - 12:53, Friday 25 September 2015 (21947)
The BayesWave single IFO (glitch only) analysis recovers these injections with the following SNRs:
4640: 8.65535
4773: 19.2185
5235: 20.5258
5742: 20.1666
The results are posted here:
https://ldas-jobs.ligo.caltech.edu/~meg.millhouse/O1/CBC_hwinj/
Images attached to this comment
H1 DetChar (DetChar)
cheryl.vorvick@LIGO.ORG - posted 13:47, Wednesday 23 September 2015 (21854)
GRB notification at 20:45:40UTC

Going to Observe without filter test.

H1 CDS (CDS)
cheryl.vorvick@LIGO.ORG - posted 13:46, Wednesday 23 September 2015 - last comment - 19:18, Wednesday 23 September 2015(21853)
ETMX Diag Reset at 20:45:20UTC
Comments related to this report
corey.gray@LIGO.ORG - 15:53, Wednesday 23 September 2015 (21862)

General Question:  Does this knock us out of Observation Mode?  Could I have reset this this morning?

cheryl.vorvick@LIGO.ORG - 16:05, Wednesday 23 September 2015 (21863)

IFO was in Commissioning.

jameson.rollins@LIGO.ORG - 16:42, Wednesday 23 September 2015 (21867)

No, the DIAG_MAIN guardian node is NOT under the OBSERVATION READY check.  It can be changed/reset/etc. without affecting OBSERVATION MODE.

sheila.dwyer@LIGO.ORG - 19:18, Wednesday 23 September 2015 (21873)

I think Cheryl was talking about the diag reset button on the GDS overview screen for the front end, not the DIAG_MAIN guardian.

H1 INJ (DetChar, INJ)
christopher.biwer@LIGO.ORG - posted 09:01, Wednesday 23 September 2015 - last comment - 21:35, Thursday 24 September 2015(21838)
new approved coherent CBC waveform
I've uploaded new and approved coherent waveforms for hardware injection testing. SVN is at revision number 5097.

There is a H1L1 coherent version of the September 21 test injection that was done at LHO. It can be found here:
  * H1 waveform
  * L1 waveform
  * XML parameter file

There is a H1L1 coherent version of the September 21 test injection that was done at LHO and the waveform begins at 15Hz. This waveform should be tested after the previous waveform has been tested. It can be found here:
  * H1 waveform
  * L1 waveform
  * XML parameter file
Comments related to this report
christopher.biwer@LIGO.ORG - 16:26, Wednesday 23 September 2015 (21845)DetChar, INJ
I've attached time series of the four waveforms. Y-axis is h(t) in strain.

EDIT: Re-uploaded image files with title and proper y and x labels.
Images attached to this comment
bruce.allen@LIGO.ORG - 20:51, Thursday 24 September 2015 (21933)
Chris, I think the links to the XML parameter files are broken, could you please add corrected ones?   Error message:

The requested URL /svn/injection/hwinj/Details/Inspiral/coherenttest1_1126257410.xml.gz was not found on this server.

Cheers, Bruce
christopher.biwer@LIGO.ORG - 21:35, Thursday 24 September 2015 (21934)
Hi, sorry, I forgot the h1l1 at the beginning. https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/h1l1coherenttest1_1126257410.xml.gz

and https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/h1l1coherenttest1from15hz_1126257410.xml.gz
H1 General
peter.shawhan@LIGO.ORG - posted 20:42, Tuesday 22 September 2015 - last comment - 16:27, Wednesday 23 September 2015(21824)
GWIstat fixed
Cheryl and Jeff brought to my attention that GWIstat was reporting incorrect information today.  It turns out that the ~gstlalcbc home directory at Caltech was moved to a new filesystem today; GWIstat gets its information from a process running under that account, and apparently got into a funny state.  I have now restarted it.  For the rest of the current observing segment it will report the duration only from the time I restarted it, about 3:32 UTC.  I apologize for the problem!
Comments related to this report
peter.shawhan@LIGO.ORG - 06:15, Wednesday 23 September 2015 (21836)
I see this morning that GWIstat is not showing the correct duration for the current observing segment.  The log file on ldas-grid.ligo.caltech.edu, where it is now running, shows that it was restarted twice during the night for no obvious reason, and it is reporting the duration only since it was restarted.  I'll ask the Caltech computing folks to look into this.  New hardware for ldas-grid was put into use yesterday, and maybe they were still shaking it down last night.
peter.shawhan@LIGO.ORG - 16:27, Wednesday 23 September 2015 (21866)
Stuart Anderson told me this is a known problem that seems to have arisen from a condor configuration change.  They know how to fix it but will need to restart condor.  Until they do that, gwistat should indicate status correctly (except for momentary outages) but may sometimes display the wrong duration for the current state.
H1 CAL (AOS, CAL)
sudarshan.karki@LIGO.ORG - posted 18:19, Tuesday 22 September 2015 - last comment - 18:35, Wednesday 23 September 2015(21817)
Time Varying Calibration Parameters- Updates

SudarshanK, DarkhanT

We were using a 137 degree correction factor on kappa_tst in our time-varying parameter calculation (aLOG 21594). Darkhan found a negative sign placed at the wrong position in the DARM model, which gave us back 180 degrees of phase. Additionally, Shivaraj found that we were not accounting for the DAQ downsampling filter used on the ESD calibration line. These two factors gave us back almost all the phase we were missing. There was also an analog anti-aliasing filter missing from the actuation TF in the new model. After these corrections, Darkhan created the new updated EPICS variables. These EPICS variables are committed at:

CalSVN/Runs/O1/Scripts/CAL_EPICS

Using these new EPICS variables, the kappas were recalculated for LHO. For LLO these EPICS variables do not exist yet. The new plot is attached below. The imaginary parts of all the kappas are now close to their nominal value of 0, and the real parts are a few percent (2-3%) from their nominal value of 1, which is within the uncertainty of the model. The cavity pole is still off from its nominal value of 341 Hz but has stayed constant over time.
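The few-percent criterion above can be sketched as a simple check (function name and tolerances are illustrative, not part of the CAL_PARAM scripts):

```python
# Illustrative check (not part of the CAL_PARAM scripts): a recomputed
# complex kappa should sit within a few percent of its nominal 1 + 0j.
def kappa_ok(kappa, re_tol=0.03, im_tol=0.03):
    """True if a complex kappa is within tolerance of its nominal 1 + 0j."""
    return abs(kappa.real - 1.0) <= re_tol and abs(kappa.imag) <= im_tol

print(kappa_ok(0.98 + 0.01j))  # True: within the quoted 2-3% band
print(kappa_ok(0.90 + 0.00j))  # False: outside the model uncertainty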

The script to calculate these time varying factors is committed to SVN:

LHO: CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CAL_PARAM/

LLO: CalSVN/aligocalibration/trunk/Runs/ER8/L1/Scripts/CAL_PARAM/

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:44, Tuesday 22 September 2015 (21821)DetChar, ISC
Recall that Stefan made changes to the OMC Power Scaling on Sunday 13 September 2015 (in the late evening PDT, which means Sept 14th UTC). One can see the difference in character (i.e. the subsequent consistency) of kappa_C after this change on Sudarshan's attached plot. 

One can also see that, for a given lock stretch, the change in optical gain is now no more than ~2-3%. That means the ~5 [Mpc] trends we see in our 75 [Mpc] inspiral range, which we've seen evolve over long, 6+ hour lock stretches, cannot be entirely attributed to optical gain fluctuations, as we've been flippantly sure of and claiming.

However, now that we've started calculating these values in the GDS pipeline (LHO aLOGs 21795 and 21812), it will be straight-forward to make comparative plots between the calculated time dependent parameters and every other IFO metric we have. And we will! You can too! Stay tuned!
evan.hall@LIGO.ORG - 18:35, Wednesday 23 September 2015 (21871)

Just to drive the point home, I took 15 hours' worth of range and optical gain data from our ongoing 41+ hour lock. The optical gain fluctuates by a few percent, but the range fluctuates by more like 10%.

Non-image files attached to this comment
H1 GRD
travis.sadecki@LIGO.ORG - posted 15:59, Monday 21 September 2015 - last comment - 15:28, Wednesday 23 September 2015(21751)
DIAG_MAIN node in error

VerbalAlarms reports that the DIAG_MAIN guardian node is in error.

Comments related to this report
evan.hall@LIGO.ORG - 16:20, Monday 21 September 2015 (21754)

Appears to be a standard NDS2 burp:

2015-09-21T22:57:59.11305   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/worker.py", line 459, in run
2015-09-21T22:57:59.11306     retval = statefunc()
2015-09-21T22:57:59.11306   File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 178, in run
2015-09-21T22:57:59.11307     return SYSDIAG.run_all()
2015-09-21T22:57:59.11307   File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 151, in run_all
2015-09-21T22:57:59.11308     ret &= self.run(name)
2015-09-21T22:57:59.11308   File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 136, in run
2015-09-21T22:57:59.11310     for msg in self[name](**kwargs):
2015-09-21T22:57:59.11311   File "/opt/rtcds/userapps/release/sys/h1/guardian/DIAG_MAIN.py", line 66, in PSL_ISS
2015-09-21T22:57:59.11311     diff_pwr = avg(-10, 'PSL-ISS_DIFFRACTION_AVG')
2015-09-21T22:57:59.11312   File "/ligo/apps/linux-x86_64/cdsutils-497/lib/python2.7/site-packages/cdsutils/avg.py", line 67, in avg
2015-09-21T22:57:59.11312     for buf in conn.iterate(*args):
2015-09-21T22:57:59.11313 RuntimeError: Requested data were not found.

Reloaded.

evan.hall@LIGO.ORG - 15:25, Wednesday 23 September 2015 (21858)

Having the guardian go into error because of an NDS2 hiccough is kind of irritating.

Based on this StackExchange answer, I added the following handler function to the DIAG MAIN guardian:

def try_avg(*args):
    while True:
        try:
            q = avg(*args)
        except RuntimeError:
            log('Encountered runtime error while trying to average {}'.format(args[1]))
            continue
        break
    return q

where avg is the cdsutils.avg function.

This is now used for the ISS diffraction and the ESD railing diag tests. If we like it, we should consider propagating it to the rest of the guardian.
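One possible refinement, if we do keep it: bound the retries so a persistent NDS outage surfaces as an error rather than spinning forever. A sketch only, not tested against the live guardian; avg and log are passed in here to keep it self-contained:

```python
# Hedged variant of try_avg (sketch only, not tested on the live guardian):
# retry a bounded number of times, then re-raise so a persistent NDS failure
# is visible instead of the node spinning forever.
def try_avg_bounded(avg, log, args, max_tries=3):
    """Retry avg(*args) up to max_tries times before giving up."""
    for attempt in range(1, max_tries + 1):
        try:
            return avg(*args)
        except RuntimeError:
            log('avg{} failed (attempt {}/{})'.format(args, attempt, max_tries))
    raise RuntimeError('avg{} failed after {} attempts'.format(args, max_tries))
```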

jameson.rollins@LIGO.ORG - 15:28, Wednesday 23 September 2015 (21860)

This is a fine hack solution for this one case, but please don't propagate it to all guardian NDS calls.  Let me come up with a way to better handle it within the guardian infrastructure, so we don't end up with a lot of cruft in the guardian user code.
