H1 PSL
jason.oberling@LIGO.ORG - posted 11:27, Monday 06 March 2017 - last comment - 11:29, Monday 06 March 2017(34617)
Forensics for 2017-3-6 PSL Trips

The PSL tripped 4 times last night, 3 of them in quick succession.  I've done some preliminary forensics, and all 4 were trips of the "Head 1-4 Flow" watchdog, with the Head 3 flow sensor being the culprit in every trip.

Trip #1: 5:07 UTC (21:07 PST 2017-3-5)

PSL tripped due to a trip of the "Head 1-4 Flow" interlock, caused by the flow reading from the Head 3 flow sensor dropping below the trip point.  While restarting the laser after this trip, it took a couple of attempts to get the crystal chiller to start and stay running.  After the first attempt, the reported flow from the Head 3 flow sensor did not get above the trip point, so the chiller shut off.  The chiller restarted without issue on the 2nd attempt and the PSL restart went smoothly from there.  Once the system was up and running, Jim had to reset the noise eater.

The first attachment shows the flow readings from the active laser head flow sensors along with the "Head 1-4 Flow" interlock signal.  The second attachment shows a slightly zoomed in view of the reading from the Head 3 flow sensor at the time of the trip.  As can be seen, the flow signal from Head 3 became ragged and dropped below the trip point, shutting down the system.

Trip #2: 5:47 UTC (21:47 PST 2017-3-5)

The PSL tripped again, for the same reason as trip #1: the flow reading from the Head 3 flow sensor dropped below the trip point.  The third and fourth attachments show info for this trip, showing that again Head 3 was the cause.  The restart went smoothly, and Jim had gone to the LVEA to reset the noise eater when...

Trip #3: 6:02 UTC (22:02 PST 2017-3-5)

The third PSL trip occurred while Jim was in the LVEA resetting the noise eater.  This trip was identical to the last 2.  The fifth and sixth attachments show that once again the flow reading from the Head 3 flow sensor was the cause of the trip; I also included the power output from the HPO in the fifth attachment.  As can be seen in the fifth attachment (PSL_Trip3_Head1-3_Flow_2017-3-6_06:02:06.png) there was another issue with restarting the crystal chiller; just over 30 seconds after restarting the chiller it shut down due to the flow reading from the Head 3 flow sensor.  The second chiller restart attempt was successful.  There were then zero issues in restarting the PSL; Jim had to reset the NPRO noise eater again once the rest of the system was up and running.

Trip #4: 7:59 UTC (23:59 PST 2017-3-5)

The fourth and final trip occurred just as TJ was taking over for the OWL shift.  Once again this was a trip of the "Head 1-4 Flow" interlock caused by the flow reading from the Head 3 flow sensor dropping below the trip point; this is shown in the final 2 attachments.  There were no issues restarting the PSL after this trip.  Once everything was up and running, TJ had to reset the noise eater twice and was then on to locking the IFO.
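
As an aside, for anyone repeating this kind of check on the trend data, below is a minimal Python sketch (plain numpy, independent of any site tool) of scanning a trended flow signal for drops below the interlock trip point.  The trip level and the synthetic data are made-up placeholders, not the actual Head 3 settings.

    # Minimal sketch: find where a trended flow reading crosses below a trip
    # point.  The trip level and data here are placeholders, not PSL settings.
    import numpy as np

    def find_trips(times, flow, trip_point):
        """Return times at which `flow` transitions from above to below `trip_point`."""
        below = flow < trip_point
        crossings = np.flatnonzero(~below[:-1] & below[1:]) + 1
        return times[crossings]

    # Synthetic stand-in for a Head 3 flow trend: 10 minutes at 1 sample/s.
    t = np.arange(0.0, 600.0, 1.0)
    flow = 0.55 + 0.01 * np.random.randn(t.size)   # nominal flow with noise
    flow[300:305] = 0.35                           # ragged drop below trip point
    print(find_trips(t, flow, trip_point=0.4))     # -> [ 300.]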

 

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:29, Monday 06 March 2017 (34618)

Filed FRS 7559 for these trips.

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 11:12, Monday 06 March 2017 (34616)
2017-03-06 New Calibration Sensing Function Measurement Suite
J. Kissel, E. Goetz

I've gathered our "bi-weekly" calibration suite of measurements to track the sensing function, to confirm the calibration remains within reasonable uncertainty, and to gather corroborating evidence for a time-dependent detuning spring frequency & Q. Evan will post analysis results later.

The data have been saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs
    2017-03-06_H1DARM_OLGTF_4to1200Hz_25min.xml
    2017-03-06_H1_PCAL2DARMTF_4to1200Hz_8min.xml

    2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz.xml

This suite, again for time-tracking, took ~35 minutes from 2017-03-06, 18:02 - 18:35 UTC.
Images attached to this report
H1 General
travis.sadecki@LIGO.ORG - posted 10:51, Monday 06 March 2017 (34614)
Intentional lockloss 18:43 UTC

Lock was broken intentionally to begin Jenne's arm cavity scans.

H1 General
travis.sadecki@LIGO.ORG - posted 10:06, Monday 06 March 2017 (34613)
H1 out of Observing

We are still locked but we have transitioned to CALIBRATION for JeffK's calibration sweeps.  We have started the commissioning window as of 18:00 UTC with Keita's permission.  I assume this means the window will shift to 18:00-22:00 UTC.

H1 General
travis.sadecki@LIGO.ORG - posted 08:17, Monday 06 March 2017 (34609)
Ops Day Shift Transition

TITLE: 03/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    Wind: 26mph Gusts, 22mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY: Locked for 7 hours.  Other than the PSL trips overnight, no issues were handed off.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:17, Monday 06 March 2017 (34610)
CDS O2 restart report, Wednesday 1st - Sunday 5th March 2017

model restarts logged for Sun 05/Mar/2017
2017_03_05 07:14 h1broadcast0

Unexpected crash of h1broadcast0; the machine was rebooted to recover.

model restarts logged for Sat 04/Mar/2017 - Wed 01/Mar/2017: no restarts reported

LHO General
thomas.shaffer@LIGO.ORG - posted 08:00, Monday 06 March 2017 (34604)
Ops Owl Shift Summary

TITLE: 03/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY: PSL tripped as Jim handed the IFO off to me, but I got Jason on the phone and he got it right back up. PI mode 23 has popped up a few times but gone down before I have had to do anything. I think I remember someone saying this is associated with wind gusts somehow. I just ran a2l to fix pitch while LLO is down. It is still windy outside.

LOG:

H1 PSL
thomas.shaffer@LIGO.ORG - posted 07:42, Monday 06 March 2017 (34608)
PSL Weekly Report


Laser Status:
SysStat is good
Front End Power is 33.98W (should be around 30 W)
HPO Output Power is 169.3W
Front End Watch is GREEN
HPO Watch is GREEN

PMC:
It has been locked 0 days, 7 hr 13 minutes (should be days/weeks)
Reflected power = 14.49Watts
Transmitted power = 62.65Watts
PowerSum = 77.15Watts.

FSS:
It has been locked for 0 days 7 hr and 0 min (should be days/weeks)
TPD[V] = 1.868V (min 0.9V)

ISS:
The diffracted power is around 4.5% (should be 3-5%)
Last saturation event was 0 days 7 hours and 13 minutes ago (should be days/weeks)

Possible Issues:

None

H1 General
thomas.shaffer@LIGO.ORG - posted 05:28, Monday 06 March 2017 (34605)
Ops Mid shift report

Observing at ~58Mpc. Winds are gusting into the 30s and the Hanford commute is underway. Locked for 4 hours so far.

LHO General
thomas.shaffer@LIGO.ORG - posted 01:10, Monday 06 March 2017 (34603)
Ops Owl Shift Transition

TITLE: 03/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 15mph Gusts, 9mph 5min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.30 μm/s
QUICK SUMMARY: PSL tripped as my shift started. I called Jason and he got things going again. We are now back to low noise and ~60Mpc.

H1 General
jim.warner@LIGO.ORG - posted 00:13, Monday 06 March 2017 (34602)
Shift Summary

TITLE: 03/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Bad news PSL
LOG:

Jeff was recovering from an EQ when I arrived. I managed to get the IFO back up for a few hours, but then the PSL went on the fritz. I called Jason, and after a few rounds of filling the chiller and resetting the noise eater I was able to get locked again. As soon as TJ arrived, though, the PSL went down again. He's working with Jason (again) on getting it back up. Re-locking has been easy, though.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:05, Sunday 05 March 2017 (34600)
Ops Day Shift Summary
Ops Shift Log: 03/05/2017, Day Shift 16:00 – 00:00 (08:00 - 16:00) Time - UTC (PT)
State of H1: Unlocked and IFO in DOWN due to EQ  
Intent Bit: Commissioning/Earthquake.
Support: Jason
Incoming Operator: Jim

Shift Summary: Ran A2L script – Pitch is close to 0.6 and Yaw is at 0.5. PSL AC Temp alerts. Per Jason, went into LVEA to check Makeup Air fan and AC controls. See aLOG #34586.

Lost lock due to a Mag 6.5 earthquake near Kandrian, Papua New Guinea. Primary microseism shot up to 2.0 um/s. Put IFO into DOWN until tremors subside.          

Activity Log: Time - UTC (PT)
16:00 (08:00) Take over from Cheryl
16:40 (08:40) Ran A2L check script
21:29 (13:29) PSL AC alert
21:59 (13:59) PSL AC alert
22:00 (14:00) Jeff – Into the LVEA - Drop to Commissioning to check PSL AC – LLO is down
22:11 (14:11) Jeff – Out of LVEA – Back to Observing
23:33 (15:33) Lost lock – Due to Mag 6.5 EQ near Kandrian, Papua New Guinea
23:33 (15:33) Put IFO into DOWN until shaking calms down
00:00 (16:00) Turn over to Jim

 

 

 

H1 SEI
jeffrey.bartlett@LIGO.ORG - posted 15:49, Sunday 05 March 2017 (34599)
Lockloss Mag 6.5 EQ - Papua New Guinea
Mag 6.5 Earthquake near Kandrian, Papua New Guinea

    Seen on USGS, Seismon, and Terramon
    USGS - EQ at 22:47 UTC
    Terramon - EQ at 22:47 UTC
    Rise in the BLRMS at around 23:00 UTC, to 3.0um/s
    H1 lost lock, LLO was already down
H1 General
jeffrey.bartlett@LIGO.ORG - posted 12:14, Sunday 05 March 2017 (34596)
Ops Day Mid-Shift Summary
  In Observing for 7.5 hours. IFO seems to be running well at this time. Wind is a steady Moderate Breeze (up to 18 mph) with gusts up in the low 30s. Primary microseism in X and Y is a bit elevated, secondary microseism is flat at about 0.35um/s. A2L Pitch is up to 0.7, Yaw is below the reference. Everything else is OK.   
H1 CDS
david.barker@LIGO.ORG - posted 06:49, Sunday 05 March 2017 - last comment - 14:59, Sunday 05 March 2017(34587)
cds issues

I'm working with Cheryl on current CDS issues. Looks like h1broadcast0 machine locked up, Cheryl is rebooting this machine.

The timing system has no errors currently, looks like this was a transient problem. 

Comments related to this report
david.barker@LIGO.ORG - 07:38, Sunday 05 March 2017 (34590)

DMT monitors are back after Cheryl rebooted h1broadcast0. I verified that no full frames were lost from h1fw0 or h1fw1 overnight. We did lose HofT frames when the DMT broadcaster was down. Keith is testing redundant broadcasters at LLO to prevent a problem like this.

keith.thorne@LIGO.ORG - 14:59, Sunday 05 March 2017 (34598)
Redundant GDS broadcasting is now working at LLO to one DMT machine.  Still need to install network switch(es) and additional Ethernet cards for redundancy on the redundant DMT machines.
H1 PSL (DetChar, PSL)
cheryl.vorvick@LIGO.ORG - posted 06:41, Sunday 05 March 2017 - last comment - 10:03, Monday 06 March 2017(34586)
PSL makeup air found at 100%, now turned down to 20%
Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 08:48, Sunday 05 March 2017 (34593)

That's interesting.  When I checked the make-up air on Wednesday (after Travis first witnessed this temp drop) it was definitely running at 20%.  Did you check the make-up air fan in the acoustic closet (East of the PSL enclosure) to confirm it was running at 100% (it's very loud and drowns out the power supply in the closet when at 100%)?  There have been times when the PSL environmental controls indicate the fan is at 100%, but when the make-up air fan itself is checked it is running at 20%.  There seems to be some odd link in the environmental controls software where it shows 100% on all fans if one set of fans (i.e. the anteroom or laser room HEPA fans) was set to 100%.  This also works the other way, where the HEPA fans will sometimes show 20% (they are either 100% or off) after the make-up air has been set to 20%.  Essentially, the environmental controls appear to hold the last user-set fan speed, which then carries over from fan screen to fan screen but does not appear to be applied until the user explicitly sets it on that screen.  I don't know if this is how the system is supposed to behave, but I have seen this behavior on more than one occasion.

jeffrey.bartlett@LIGO.ORG - 14:45, Sunday 05 March 2017 (34597)
  Received alerts on PSL AC/Temp at 21:29 & 21:59 UTC. Spoke with Jason. Dropped out of Observing (LLO down) to go into the LVEA and check Makeup Air fans and AC controls. 
   (1). Makeup Air fan is running at LOW SPEED. It is NOT running at 100%.
   (2). The AC Controls - Makeup Air fan speed at 100%. I reset it to 20%. Verified that the Makeup Air fan was still at low speed. 

  Trended the five temperature sensors located in the PSL Laser enclosure for the past 30 days. Results are posted below. 

  There were 8 low temperature events starting on 03/01 at around 20:02 UTC. These events are consistent across the five sensors. The Dust Monitor looks a bit odd, but this is due to the 1 minute sample taken every 10 minutes. There is some time lag but reasonably good coherence nonetheless. There is less coherence between the temperatures in the PSL and in the LVEA.
Images attached to this comment
gerardo.moreno@LIGO.ORG - 10:03, Monday 06 March 2017 (34612)

(John W, Gerardo M)

There appears to be something that is turning on, then off, maybe something triggered by a thermostat.  Noise can be observed on the accelerometers on the PSL table.  See attachments; all plots are 5 days long.  On the first plot the accelerometer signal (blue) was adjusted to match the temperature signal; on the other plot no modifications were made.

Images attached to this comment
H1 General
cheryl.vorvick@LIGO.ORG - posted 03:27, Sunday 05 March 2017 - last comment - 19:18, Sunday 05 March 2017(34584)
lockloss, temperature in PSL drops 2degF
Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 03:45, Sunday 05 March 2017 (34585)
  • WFS A DC PIT - good, then temperature drops, seems to be recovering, lockloss
  • PSL temperature drops 2degF in 14 minutes
  • EOM power drops, Periscope power drops, PZTs move by 100 in pitch and 40 in yaw
Images attached to this comment
peter.king@LIGO.ORG - 19:18, Sunday 05 March 2017 (34601)
The picture is more likely to be the pre-modecleaner reflection, not the reference cavity transmission.
If it really is the reference cavity transmission, then we have a big alignment and mode matching
problem.

    The picture seems to indicate that the horizontal alignment has changed.
H1 CAL
jeffrey.kissel@LIGO.ORG - posted 18:08, Tuesday 14 February 2017 - last comment - 09:27, Monday 06 March 2017(34153)
All-of-O2 Trend (thus far) of PCALY RXPD vs. TXPD: How Early Jan. Clipping Affects Time-Dependent Correction Factors and h(t)
J. Kissel, S. Kandhasamy

As we begin to produce the systematic error budget for H1's response function, we're looking to figure out how to address the clipping of light going into H1 PCAL Y's RX PD that had been revealed in early January (see LHO aLOG 33108 and 33187).

Because we got extremely lucky and took the 2017-01-03 reference measurements at a time when the clipping -- which had been found to vary slowly as a function of time -- had briefly returned to nominal, this problem has no impact on the static, frequency-dependent part of the calibration pipeline. However, because the small, time-dependent correction factors are calculated using H1 PCALY RXPD, these correction factors are systematically biased. 

Working through the math of T1500377, specifically Eqs. 9, 12, 15, and 16, one can see that while the cavity pole frequency estimate would not be impacted, the scalar correction factors are:
    If 
        x_pcal' = eps x_pcal
    where eps is a real number (the number to convert the apparent displacement, x_pcal, into the real displacement, x_pcal'), then
        kappa_TST' = eps kappa_TST
    and
         kappa_PU' = (1 / A_0^PU) [A_total' - kappa_TST' A_0^TST]
                        (A_total' = eps A_total, from Eq. 11)
                    = (eps / A_0^PU) [A_total - kappa_TST A_0^TST]
                    = eps kappa_PU
    and finally,
                 S' = (1 / C_res) [ x_pcal' / d_err - D_0 (kappa_TST' A_0^TST + kappa_PU' A_0^PU)]^-1
                 S' = (1 / eps) S
    such that
          kappa_C' = |S'|^2 / Re[S'] = (1 / eps) kappa_C
which means that we can simply scale the entire response function with this time-dependent systematic error:
                 R' = [1 / (kappa_C' C_0) ] + [kappa_PU' A_0^PU + kappa_TST' A_0^TST]
                    = [eps / (kappa_C C_0)] + eps [kappa_PU A_0^PU + kappa_TST A_0^TST]
                 R' = eps R
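
(As a numerical sanity check of the scaling argument above, here is a quick Python sketch with toy complex numbers standing in for the reference models; none of these values are the real O2 parameters, and the R expression mirrors the schematic form above.)

    # Toy check that scaling x_pcal and the kappas by eps rescales S and
    # kappa_C by 1/eps and the response R by eps.  All values are made up.
    import numpy as np

    eps = 1.07                                   # assumed apparent->real factor
    C_res, C_0 = 0.9 + 0.1j, 3.2e6
    D_0 = 1.5e-3
    A_TST0, A_PU0 = 2.0e-7 - 5e-8j, 8.0e-7 + 1e-7j
    x_pcal, d_err = 1.3e-18 + 2e-19j, 4.0e-13
    k_TST, k_PU = 1.02, 0.98

    def tdcf(x_pc, k_tst, k_pu):
        S = (1 / C_res) / (x_pc / d_err - D_0 * (k_tst * A_TST0 + k_pu * A_PU0))
        kappa_C = abs(S)**2 / S.real
        R = 1 / (kappa_C * C_0) + (k_pu * A_PU0 + k_tst * A_TST0)
        return S, kappa_C, R

    S, kC, R = tdcf(x_pcal, k_TST, k_PU)
    Sp, kCp, Rp = tdcf(eps * x_pcal, eps * k_TST, eps * k_PU)
    print(np.isclose(Sp, S / eps), np.isclose(kCp, kC / eps), np.isclose(Rp, eps * R))
    # -> True True True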

I attach a 77 day minute trend of the ratio between RXPD and TXPD that's been obtained from data viewer***. I'm not yet advocating that this time-series be used as the representative systematic error, but I post it to be demonstrative. Stay tuned on how this data is incorporated into the uncertainty / error budget.

*** Because it's data viewer, it spits out some dumb Julian calendar time vector. Just subtract off the first value, and you get a time vector in days since the start of the trend, which is Nov 30 2016 00:54:44 UTC.

The data also lives in the CAL repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/PCAL/
    2017-02-14_H1PCALY_RXPD_Trend.txt
    2017-02-14_H1PCALY_TXPD_Trend.txt

The script used to process this data and plot the figure is here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/
    plot_h1pcaly_RXvsTXPD_trend_20170214.m
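
For those without MATLAB, a rough Python equivalent of that processing (this assumes the trend files are two-column ASCII, time then PD value, as exported by dataviewer; check the actual column layout before relying on it):

    # Rough Python sketch of the trend processing (column layout is assumed).
    import numpy as np
    import matplotlib.pyplot as plt

    rx_t, rx = np.loadtxt("2017-02-14_H1PCALY_RXPD_Trend.txt", unpack=True)
    tx_t, tx = np.loadtxt("2017-02-14_H1PCALY_TXPD_Trend.txt", unpack=True)

    # Dataviewer exports a Julian-style time vector; subtracting the first
    # value gives days since the start of the trend (Nov 30 2016 00:54:44 UTC).
    days = rx_t - rx_t[0]

    plt.plot(days, rx / tx)
    plt.xlabel("Days since Nov 30 2016 00:54:44 UTC")
    plt.ylabel("H1 PCALY RXPD / TXPD minute-trend ratio")
    plt.show()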
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:27, Monday 06 March 2017 (34611)
J. Kissel, on behalf of S. Karki and C. Cahillane.

The data provided by Sudarshan via SLM tool that represents the PCAL clipping, and actually used by Craig in his systematic error budget for CBC analysis chunks 2 & 3 during O2 -- i.e. from Nov 30 2016 17:09:39 UTC to Jan 22 2017 04:36:04 (1164560996 to 1169094982) -- has been committed to the svn here:
    ${CalSVN}/aligocalibration/trunk/Runs/O2/H1/Results/PCAL/
        O2_RxPD_TxPD_factor_dcsready.txt

I attach a .png of the data also committed to the same location.
Images attached to this comment
Non-image files attached to this comment
H1 AOS (DetChar, PSL)
miriam.cabero@LIGO.ORG - posted 09:19, Friday 20 January 2017 - last comment - 07:23, Monday 06 March 2017(33446)
Sub-set of blip glitches might originate from PSL

Tom Dent, Miriam Cabero

We have identified a sub-set of blip glitches that might originate from PSL glitches. A glitch with the same morphology as a blip glitch shows up in the PSL-ISS_PDA_REL_OUT_DQ channel at the same time as a blip glitch is seen in the GDS-CALIB_STRAIN channel.

We have started identifying times of these glitches using omicron triggers from the PSL-ISS_PDA_REL_OUT_DQ channel with 30 < SNR < 150 and central frequencies between ~90 Hz and a few hundred Hz. A preliminary list of these times (ongoing; only the period Nov 30 - Dec 6 so far) can be found in the file

https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt

or, with omega scans of both channels (and with a few quieter glitches), in the wiki page

https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips

Only two of those times have full omega scans for now:

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20161204/1164844817-1164931217/scans/1164876856.97/

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20161204/1164844817-1164931217/scans/1164882018.54/

 

The whitened time-series of the PSL channel looks like a typical loud blip glitch, which could help identify times of this sub-set of blip glitches with methods more efficient than the omicron triggers:

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20161204/1164844817-1164931217/scans/1164876856.97/1164876856.97_H1:PSL-ISS_PDA_REL_OUT_DQ_1.00_timeseries_whitened.png

Comments related to this report
thomas.dent@LIGO.ORG - 09:05, Friday 20 January 2017 (33450)
marco.cavaglia@LIGO.ORG - 14:48, Sunday 22 January 2017 (33513)DetChar
I ran PCAT on H1:GDS-CALIB_STRAIN and H1:PSL-ISS_PDA_REL_OUT_DQ from November 30, 2016 to December 31, 2016 with a relatively high threshold (results here: https://ldas-jobs.ligo-wa.caltech.edu/~cavaglia/pcat-multi/PSL_2016-11-30_2016-12-31.html). Then I looked at the coincidence between the two channels. The list of coincident triggers is:

-----------------------------------------------------
List of triggers common to PSL Type 1 and GDS Type 1:
#1:	1164908667.377000

List of triggers common to PSL Type 1 and GDS Type 10:
#1:	1164895965.198000
#2:	1164908666.479000

List of triggers common to PSL Type 1 and GDS Type 2:
#1:	1164882018.545000

List of triggers common to PSL Type 1 and GDS Type 4:
#1:	1164895924.827000
#2:	1164895925.031000
#3:	1164895925.133000
#4:	1164895931.640000
#5:	1164895931.718000
#6:	1164895958.491000
#7:	1164895958.593000
#8:	1164895965.097000
#9:	1164908667.193000
#10:	1164908667.295000
#11:	1164908673.289000
#12:	1164908721.587000
#13:	1164908722.198000
#14:	1164908722.300000
#15:	1164908722.435000

List of triggers common to PSL Type 1 and GDS Type 7:
#1:	1166374569.625000
#2:	1166374569.993000

List of triggers common to PSL Type 1 and GDS Type 8:
#1:	1166483271.312000

-----------------------------------------------------

I followed up with omega scans and, among the triggers above, only 1164882018.545000 is a blip glitch. The others are ~1 sec broadband glitches with frequencies between 512 and 1024 Hz. A few scans are attached to the report.
Images attached to this comment
thomas.dent@LIGO.ORG - 07:08, Monday 23 January 2017 (33531)

Hi Marco,

your 'List of triggers common to PSL Type 1 and GDS Type 4' (15 times in two groups) are all during the known times of telephone audio disturbance on Dec 4 - see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32503 and https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLGlitches

I think these don't require looking into any further, the other classes may tell us more.

miriam.cabero@LIGO.ORG - 05:22, Tuesday 24 January 2017 (33566)

The GDS glitches that look like blips in the time series seem to be types 2, 7, and 8. You did indeed find that the glitch common to PSL type 1 and GDS type 2 is a blip glitch. However, the PSL glitches in the groups with GDS types 7 and 8 do not look like blips in the omega scan. The subset we identified clearly shows blip glitch morphology in the omega scan for the PSL channel, so it is not surprising that those two groups turned out not to be blips in GDS.

It is though surprising that you only found one time with a coincident blip in both channels, when we identified several more times in just one week of data from the omicron triggers. What was the "relatively high threshold" you used?

marco.cavaglia@LIGO.ORG - 15:29, Friday 10 February 2017 (34050)DetChar
Hi. Sorry for taking so long with this. I re-ran PCAT on the PSL and GDS channels between 2016-11-30 and 2016-12-31 with a lower threshold for glitch identification (glitches with amplitude > 4 sigma above the noise floor) and with a larger coincidence window (coincident glitches within 0.1 seconds). The list of coincident glitches found is attached to the report. Four glitches in Miriam's list [https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt] show up in the list: 1164532915.0 (type 1 PSL/type 3 GDS), 1164741925.6 (type 1 PSL/type 1 GDS), 1164876857.0 (type 8 PSL/type 1 GDS), 1164882018.5 (type 1 PSL/type 8 GDS). I looked at other glitches in these types and found only one additional blip, at 1166374567.1 (type 1 PSL/type 1 GDS), out of 9 additional coincident glitches. The typical waveforms of the GDS glitches show that the blip type(s) in GDS are type 1 and/or type 8. There are 1998 (type 1) and 830 (type 8) glitches in these classes. I looked at a few examples in cat 8 and indeed found several blip glitches which are not coincident with any glitch in the PSL channel. I would conclude that PCAT does not produce much evidence for a strong correlation of blip glitches in GDS and PSL. If there is one, PSL-coincident glitches must be a small subset of blip glitches in h(t). However, some blips *are* coincident with glitches in the PSL, so looking more into this may be a good idea.
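
(For illustration, a generic numpy sketch of the kind of coincidence test described here, matching trigger times from the two channels within +/- 0.1 s; this is not the actual PCAT code, and the example GPS values are just two of the times quoted above.)

    # Generic coincidence check between two trigger-time lists (not PCAT itself).
    import numpy as np

    def coincident(times_a, times_b, window=0.1):
        """Return (t_a, t_b) pairs with |t_a - t_b| <= window seconds."""
        times_b = np.sort(np.asarray(times_b, dtype=float))
        pairs = []
        for t in np.asarray(times_a, dtype=float):
            i = np.searchsorted(times_b, t)
            for j in (i - 1, i):
                if 0 <= j < times_b.size and abs(t - times_b[j]) <= window:
                    pairs.append((t, times_b[j]))
        return pairs

    psl = [1164882018.545, 1164895965.198]       # example PSL trigger times
    gds = [1164882018.540, 1164908667.377]       # example GDS trigger times
    print(coincident(psl, gds))                  # -> [(1164882018.545, 1164882018.54)]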
Non-image files attached to this comment
miriam.cabero@LIGO.ORG - 02:13, Wednesday 15 February 2017 (34164)

Hi,

thanks Marco for looking into this. We already expected that it was a small sub-set of blip glitches, because we only found very few of them and we knew the total number of blip glitches was much higher. However, I believe that not all blip glitches have the same origin and that it is important to identify sub-sets, even if small, to possibly fix whatever could be fixed.

I have extended the wiki page https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips and the list of times https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt up to yesterday. It is interesting that I did not identify any PSL blips between, e.g., Jan 20 and Jan 30, but that they come back more often after Feb 9. Unfortunately, it is not easy to automatically identify the PSL blips: the criteria I used for the omicron triggers (SNR > 30, central frequency ~few hundred Hz) do not always yield blips, but also things like https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=156436, which also affects CALIB_STRAIN but not in the form of blip glitches.

None of the times I added up to December appear in your list of coincident glitches, but that could be because their SNR in PSL is not very high and they only leave a very small imprint in CALIB_STRAIN compared with the ones from November. In January and February there are several louder ones with bigger effect on CALIB_STRAIN though.

thomas.dent@LIGO.ORG - 11:59, Monday 20 February 2017 (34266)

The most recent iteration of PSL-ISS flag generation showed three relatively loud glitch times:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170732596.35/

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170745979.41/

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170212/latest/scans/1170950466.83/

The first 2 are both on Feb 10; in fact a PSL-ISS channel was picked by Hveto on that day (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/#hveto-round-8), though not with very high significance.
PSL not yet glitch-free?

miriam.cabero@LIGO.ORG - 03:28, Tuesday 21 February 2017 (34276)

Indeed PSL is not yet glitch free, as I already pointed out in my comment from last week.

florent.robinet@LIGO.ORG - 06:21, Tuesday 21 February 2017 (34281)

Imene Belahcene, Florent Robinet

At LHO, a simple command line works well at printing PSL blip glitches:

source ~detchar/opt/virgosoft/environment.sh
omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ gps-start=1164500000 gps-end=1167500000 snr-min=30 freq-max=500 print-q=1 print-duration=1 print-bandwidth=1 | awk '$5==5.08&&$2<2{print}' 

GPS times must be adjusted to your needs.

This command line returns a few GPS times not contained in Miriam's blip list; one must check that they are actual blips.
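
For reference, a rough Python equivalent of the awk post-filter above, applied to omicron-print output saved to a file; the column meanings (field 2 = duration, field 5 = Q) are inferred from the awk expression and the print-q/print-duration/print-bandwidth options, so verify the actual column order before trusting it:

    # Rough Python version of: awk '$5==5.08&&$2<2{print}' on saved output.
    # Column meanings are inferred from the awk command -- verify them first.
    with open("omicron_triggers.txt") as f:      # hypothetical saved output file
        for line in f:
            if line.startswith("#"):
                continue
            cols = line.split()
            if len(cols) < 5:
                continue
            gps, field2, field5 = float(cols[0]), float(cols[1]), float(cols[4])
            if field5 == 5.08 and field2 < 2:    # lowest-Q plane, short duration
                print(gps)                       # candidate PSL blip time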

miriam.cabero@LIGO.ORG - 06:07, Wednesday 22 February 2017 (34312)

The PSL has different types of glitches that match those requirements. When I look at the Omicron triggers, I do indeed check that they are blip glitches before adding the times to my list. Therefore it is perfectly consistent that you find GPS times with those characteristics that are not in my list. However, feel free to check again if you want/have time. Of course I am not error-free :)

florent.robinet@LIGO.ORG - 00:42, Thursday 23 February 2017 (34339)

I believe the command I posted above is an almost-perfect way to retrieve a pure sample of PSL blip glitches. The key is to only print low-Q Omicron triggers.

For example, GPS=1165434378.2129 is a PSL blip glitch and it is not in Miriam's list.

There is nothing special about what you call a blip glitch: any broadband and short-duration (hence low-Q) glitch will produce the rain-drop shape in a time-frequency map. This is due to the intrinsic tiling structure of Omicron/Omega.

miriam.cabero@LIGO.ORG - 07:23, Monday 06 March 2017 (34606)

Next time I update the list (probably some time this week) I will check the GPS times given by the command line you suggest (it would be nice if it does indeed find only these glitches; then we'd have an automated PSL blip finder!)
