Reports until 08:35, Tuesday 24 January 2017
H1 General
edmond.merilh@LIGO.ORG - posted 08:35, Tuesday 24 January 2017 (33568)
Shift Summary - Day Transition
TITLE: 01/24 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.43 μm/s 
QUICK SUMMARY:
LHO General
patrick.thomas@LIGO.ORG - posted 08:32, Tuesday 24 January 2017 (33567)
Ops Owl Shift Summary
TITLE: 01/24 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Still at NLN. Out of observing for Tues. maintenance. Set ISI config to SC_OFF_NOBRSXY.
LOG:

10:21 UTC Lock loss. HAM6 ISI tripped. Possibly earthquake related.
10:58 UTC Observing.
11:11 UTC Lock loss. PI modes 27 and 28.
11:59 UTC Observing.
12:14 UTC Realized I forgot to set the observatory mode to observing and did so.
12:35 UTC Bubba to water room to set a heater on the ice ball on the fire pump pressure relief discharge valve.
12:41 UTC GRB. LLO is down.
15:50 UTC Jim B. restarting control room wall tv computers
16:05 UTC Dick G. starting WP 6453
16:10 UTC Bubba to endY to start fan lubrication
16:16 UTC Travis and Evan to end Y for PCAL calibration. Set ISI config to SC_OFF_NOBRSXY.
16:21 UTC Karen and Christina to end stations
16:25 UTC Joe D. to LVEA (eye wash, fire ext., etc.)
16:27 UTC Carlos restarting nuc for control station camera control
16:30 UTC Gerardo to end X
LHO General
patrick.thomas@LIGO.ORG - posted 03:50, Tuesday 24 January 2017 (33565)
Observing
Back to observing at 11:50 UTC.
LHO General
patrick.thomas@LIGO.ORG - posted 03:14, Tuesday 24 January 2017 (33564)
Lock loss
Lost lock at 11:11 UTC, likely due to PI modes I had difficulty damping (see attached).
Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 02:58, Tuesday 24 January 2017 (33563)
Observing
Back to observing at 10:58 UTC.
H1 SEI
patrick.thomas@LIGO.ORG - posted 02:45, Tuesday 24 January 2017 (33562)
Earthquake Report
Inarajan Village, Guam

Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No

Magnitude (according to Terramon, USGS, SEISMON): 5.5, 5.5, NA

Location: 212km SE of Inarajan Village, Guam

Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~9:50 UTC

Lock status?  Lock loss (it may just have been a coincidence)

EQ reported by Terramon BEFORE it actually arrived? No, I noticed it on the BLRMS before the USGS even reported it.
Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 01:18, Tuesday 24 January 2017 (33560)
Ops Owl Shift Transition
TITLE: 01/24 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
    Wind: 11mph Gusts, 8mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.52 μm/s
QUICK SUMMARY:

No issues to report. RO is in alarm.
H1 DetChar (DetChar, PEM, SEI)
jess.mciver@LIGO.ORG - posted 00:50, Tuesday 24 January 2017 (33561)
Suggested times for additional snow plow DQ vetoes

Detchar noticed glitches in h(t) due to snow plowing on Jan 13. LHO alog entries noting ongoing plowing were extremely helpful for figuring out what these glitches were. See the first attachment for spectrograms of an example snow plow glitch, showing that near a VEA the initial impact of the plow on the ground and the subsequent scraping of the ground couple into h(t) up to ~100 Hz (witnessed by the ground STS2s, HEPI L4Cs and ISI GS13s). The last spectrogram shows the clear cadence of the plowing near the corner station on an 8 minute timescale.

Building on a DQ flag Laura added to identify when snow plowing was happening near a VEA on Jan 09-10 (alog 33116), I looked at the BLRMS (10-30, 3-10, and 1-3 Hz) for a couple weeks after that day to suggest other times to flag. 

The second attachment shows BNS range and the 10-30 Hz BLRMS for days of interest. I used times noted in the alogs as when snow plowing was happening (marked in blue and orange) as a baseline to identify other potential snow plow times not mentioned in the alogs (marked in red).

Seismic glitches identified on Jan 19 and 20 (marked in purple) are likely unrelated to plowing and should be caught with an automated 10-30 Hz or 3-10 Hz BLRMS threshold DQ flag. 
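
For illustration of the kind of automated BLRMS threshold flag meant here, a minimal sketch is below. The sampling rate, threshold, and padding are assumptions for illustration only, not the actual flag definition.

import numpy as np

def threshold_segments(times, blrms, threshold, pad=8.0):
    """Return (start, end) GPS segments where the BLRMS exceeds the threshold,
    padded by `pad` seconds on each side."""
    above = blrms > threshold
    segments, start = [], None
    for t, flag in zip(times, above):
        if flag and start is None:
            start = t - pad
        elif not flag and start is not None:
            segments.append((start, t + pad))
            start = None
    if start is not None:
        segments.append((start, times[-1] + pad))
    return segments

# Toy example: 1 Hz sampled 10-30 Hz BLRMS with a made-up threshold
t = np.arange(1168000000.0, 1168000600.0, 1.0)
blrms = np.random.lognormal(mean=-1.0, sigma=0.3, size=t.size)
print(threshold_segments(t, blrms, threshold=1.0))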

I recommend all identified times (blue, orange, and red) from Jan 09-20 be flagged with H1:DCH-SNOW_PLOW (v2). 

Non-image files attached to this report
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 00:00, Tuesday 24 January 2017 (33559)
Ops EVE shift summary

TITLE: 01/24 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC

STATE of H1: Observing at 67Mpc

INCOMING OPERATOR: Patrick

SHIFT SUMMARY: One lockloss, cause unknown. Otherwise a quiet shift. H0:FMC-CS_WS_RO_ALARM was blipping; John is at least aware of it. See alog 33552 for the rest of the activities.

LOG:

00:52 Kyle to MY

01:19 Kyle back

H1 TCS (TCS)
nutsinee.kijbunchoo@LIGO.ORG - posted 23:21, Monday 23 January 2017 - last comment - 10:45, Tuesday 24 January 2017(33557)
TCSY current jumped

Looks like the CO2Y laser lock point may have been lost due to small jumps in the current that kicked the PZT out of place. This is the first time I've seen such behavior (for myself anyway). This TCS guardian node is currently tied to the intent bit and can potentially kick us out of observe.

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 10:07, Tuesday 24 January 2017 (33572)

Plotted again with the full story of the transitioning data points, along with the IFO range (to see when the lock transitions happen) and the TCSY flow rate. The laser power transitions appropriately at lock loss and again at lock acquisition, with a small spike in some TCSY channels. It's the big transit of some of the channels at the beginning of the 2nd lock that is weird.

Images attached to this comment
betsy.weaver@LIGO.ORG - 10:22, Tuesday 24 January 2017 (33574)

And here's a plot of the next 2 lock loss/acquisition events wrt TCSY. This time all signals did what was expected: no TCSY laser mode transitions with subsequent servo corrections.

Images attached to this comment
aidan.brooks@LIGO.ORG - 10:45, Tuesday 24 January 2017 (33576)

I think the CO2 laser lock loss is due to the servo being set to an absolute DC value of power. When locking, the CO2 laser PZT scans its full range and we monitor the output power. We then choose a locking point based on an output power that is roughly half way between the minimum and maximum powers in a mode transition. When the loop is locked, the difference between the CO2 laser output power and the set point becomes the error signal for the PZT (with some gain, low-pass filtering and integration).

This keeps the laser output power very stable, but suffers from a problem: should the long-term efficiency of the laser change (get better or worse over periods of days to weeks), the relationship between the mode transitions and the output power may drift up or down. A mode hop will then approach the set-point power. At some point it gets too close, the PZT voltage to output power relationship becomes nonlinear, and we lose lock.

Betsy and I think this is probably what happened here. This is backed up by the new locking point (laser power set point) being set a bit lower than the previous one.

The attached plot shows the laser output power dropping in advance of the PZT voltage. In fact, you can see the laser go through a mode-hop before the PZT reaches zero volts.

This is a relatively uncommon but currently unavoidable event. My recommendation is that we next characterize how frequently this does occur.
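
For reference only, here is a minimal toy sketch of the lock-point selection and error-signal scheme described at the top of this comment (set point half way between the minimum and maximum power across a mode transition, error = output power minus set point, with gain and integration). The power curve, numbers, and function names are made up; this is not the actual TCS servo code.

import numpy as np

def choose_lock_point(output_power):
    """Set point roughly half way between the min and max power seen
    while scanning the PZT across a mode transition."""
    return 0.5 * (output_power.min() + output_power.max())

def servo_output(power_now, set_point, integrator, gain=1.0, ki=0.1):
    """One step of a simple gain + integrator servo acting on the
    error signal (output power minus set point)."""
    error = power_now - set_point
    integrator += ki * error
    return gain * error + integrator, integrator

# Fake full-range PZT scan through a single mode transition
pzt_volts = np.linspace(0.0, 100.0, 500)
power = 50.0 + 5.0 * np.tanh((pzt_volts - 60.0) / 5.0)   # toy power curve [W]

set_point = choose_lock_point(power)
drive, integ = servo_output(power[-1], set_point, integrator=0.0)
print(f"set point = {set_point:.2f} W, first servo output = {drive:.2f}")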

Images attached to this comment
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 21:31, Monday 23 January 2017 - last comment - 23:33, Monday 23 January 2017(33552)
Eve midshift summary

Been locked and observing for 10 hrs 44 mins. H1IOPASC0 timing is red on the CDS overview. Not sure if hitting Diag Reset will drop us out of Observe, so I'm leaving it alone for now. Update: hit the diag reset button and cleared the error without dropping the intent bit.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 22:10, Monday 23 January 2017 (33553)

Lockloss 06:02:15 UTC. I hit the H1IOPASC0 diag reset button at 06:01:10 UTC. Earlier I noticed a bunch of ETMY IPC errors go red and stay red for less than a minute (1169272562-1169272582); that's why I decided to hit the diag reset button, to make sure one wasn't causing the other. Otherwise nothing obvious on the FOM that could be the cause of the lockloss.

nutsinee.kijbunchoo@LIGO.ORG - 22:51, Monday 23 January 2017 (33555)

Back to NLN but the intent bit is waiting on TCS_ITMY_CO2 guardian. I'm investigating why it loses its lock point.

nutsinee.kijbunchoo@LIGO.ORG - 23:13, Monday 23 January 2017 (33556)

Back to Observe 06:52 UTC

nutsinee.kijbunchoo@LIGO.ORG - 23:33, Monday 23 January 2017 (33558)

The input power dropped by about half a watt compared to the previous lock.

Images attached to this comment
H1 CAL (CAL)
craig.cahillane@LIGO.ORG - posted 19:18, Monday 23 January 2017 (33549)
Total LHO Uncertainty Budget
C. Cahillane

The first LHO total uncertainty budget is reported here for frequency range = 5-5000 Hz and GPSTime = 1167559936.

This uncertainty budget was produced in five stages:
1) Take measurements of sensing plant C(f) and actuation stages A_UIM(f), A_PUM(f), A_TST(f).
2) Run MCMC to fit calibration model parameters to each measurement.
3) Run Gaussian Process (GP) to fit residuals and find unmodeled systematic errors.
4) Produce histograms of the time-dependent parameters two minutes prior to the GPSTime requested.
5) Sample MCMC, GP posteriors, and time dependent histograms to produce a numerical uncertainty budget.
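
As a rough illustration of stage 5 only, here is a hedged sketch of how samples from the MCMC and GP posteriors and the time-dependent-parameter histograms could be combined into a numerical uncertainty. The array names, shapes, and the simple multiplicative combination are assumptions for illustration, not the actual pipeline.

import numpy as np

rng = np.random.default_rng(0)
n_draws, n_freq = 1000, 500
freqs = np.logspace(np.log10(5.0), np.log10(5000.0), n_freq)

# Stand-ins for the real posteriors (relative response, magnitude only):
mcmc_response = 1.0 + 0.010 * rng.standard_normal((n_draws, n_freq))  # modeled-parameter draws
gp_residual   = 1.0 + 0.005 * rng.standard_normal((n_draws, n_freq))  # unmodeled systematic draws
kappa         = rng.normal(1.0, 0.003, size=(n_draws, 1))             # time-dependent factor draws

# Each draw of the total relative response is the product of the pieces
total = mcmc_response * gp_residual * kappa

lo, med, hi = np.percentile(total, [16, 50, 84], axis=0)
i100 = np.searchsorted(freqs, 100.0)
print(f"relative 1-sigma magnitude uncertainty near 100 Hz: {0.5 * (hi - lo)[i100]:.4f}")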

The first plot is the total uncertainty budget for all frequencies.  Also reported here are the extreme uncertainties for 10-2000 Hz and 2000-5000 Hz.  This plot is probably why you're here.


Other plots are:
 2,  3,  4,  5: The MCMC MAP fit (green) to the Jan 3rd, 2017 reference measurements (red dots), plus 1000 fit instances (grey). (2=Sensing, 3=UIM, 4=PUM, 5=TST)
 6,  7,  8,  9: The GP mean posterior (dark blue line) and 1 sigma uncertainty (light blue) and measurement residuals. (6=Sensing, 7=UIM, 8=PUM, 9=TST)
PDFs 1,  2,  3,  4: The MCMC parameter corner plots for each measurement. (1=Sensing, 2=UIM, 3=PUM, 4=TST)
PDF 5: The time-dependent parameter values and linear fit for two minutes of data, plus histograms. 
Sorry for the overwhelming number of plots.  This was requested by the calibration group.  Most people can safely ignore these.

Potential source of uncertainty not yet included:
1) Spring Frequency and Inverse Q systematic error + uncertainty
If these sources matter there will be a comment to this aLOG.

For the LLO total uncertainty budget, please see LLO aLOG 31111.
Images attached to this report
Non-image files attached to this report
LHO VE
kyle.ryan@LIGO.ORG - posted 17:55, Monday 23 January 2017 (33548)
CP4 level indication(s)
Variations in the CDS indicated level for CP4 today before 5:15 pm local time were the result of experimental tinkering and should be ignored.  As of 5:15 pm local time, I am pumping on the clogged line with only a diaphragm pump and an in-line 1/4-turn ball valve, which has been throttled to permit a maximum flow rate of 0.4 LPM.  This flow rate matches what was observed when the diaphragm pump + throttled valve was supplied with 10 psi from a UHP N2 cylinder fitted with an appropriately sized rotameter.  Having vacuum on this line tonight should nominally result in a "0.0" percent full value displayed on the MEDM screens.  Until further notice, any value shown other than 0.0 will indicate that something pertaining to the "clog" has changed and would be of interest.
H1 ISC (CAL, SUS)
jeffrey.kissel@LIGO.ORG - posted 16:39, Monday 23 January 2017 (33546)
Proposed Improved DARM Filter for Combatting 4.7 kHz, 10th order Violin Mode Harmonics
J. Kissel, S. Dwyer, K. Kawabe
WP # 6448

Since we've been having trouble with the intermittent ring-up of the H1 SUS ETMY's 4735 Hz violin mode, Sheila and Keita suggest that we permanently turn on the 4735 Hz notch that's already in place in the LSC-DARM2 filter bank. Contrary to traditional belief, however, a change to the DARM filter banks does, in general, now impact the calibration, because the time-dependent correction factors -- now applied to the h(t) data stream -- are computed using ratios of calibration lines, and the frequency dependence of the full DARM loop shape (including changes to the DARM filter bank) between line frequencies must be compensated (see T1500377 and P1600063).

In this particular case, one might expect a well-designed notch at 4.7 kHz to have negligible impact at the calibration line frequencies used for tracking time dependence, given that the highest frequency line used is 331 Hz. However, the existing notch was not well designed; it was just something slapped in quickly in a panic to prevent lock loss.

Thus, Sheila and I propose a new design. See attached. 
Called out explicitly here:
                  Current Design: notch(4735.25,5,80)
   Increased Q of Current Design: notch(4735.25,100,80)
             Proposed New Design: ellip("BandStop",4,1,80,4720,4750)gain(1.12202)

Recall that foton's notch and ellip functions take inputs of the form
   notch(center frequency, Q, dB isolation in stop-band)  
   ellip("filter type",order,dB ripple,db isolation, start frequency, stop frequency)

The current design's Q is unnecessarily low, which means that at 331 Hz, there is a 1.15 [deg] phase impact. Too big, if we're correcting for 1% level changes in the optical gain. Both new designs (the higher Q notch and the elliptic band stop) have much less phase impact at the highest calibration line frequency (the elliptic has 0.09 [deg] phase loss, the high-Q notch has only 0.06 [deg] phase loss). However, the elliptic bandstop gets much greater isolation in a broader, 30 Hz band, than the high-Q notch.
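
As a rough cross-check of those phase numbers, here is a scipy sketch. This is illustration only: the generic analog notch below is an assumed stand-in for foton's notch() realization, and foton's ellip order convention is assumed to match scipy's, so the exact values may differ from the quoted figures.

import numpy as np
from scipy import signal

f_cal = 331.0                      # highest calibration line frequency [Hz]
w_cal = 2.0 * np.pi * f_cal

def notch_ba(f0, Q, depth_db):
    """Generic analog notch: numerator and denominator resonant at w0, with
    the zero Q raised so that the attenuation at f0 is depth_db."""
    w0 = 2.0 * np.pi * f0
    Qz = Q * 10.0 ** (depth_db / 20.0)
    return [1.0, w0 / Qz, w0 ** 2], [1.0, w0 / Q, w0 ** 2]

def phase_deg(b, a, w):
    _, h = signal.freqs(b, a, worN=[w])
    return np.degrees(np.angle(h[0]))

for Q in (5, 100):                 # current design and higher-Q variant
    b, a = notch_ba(4735.25, Q, 80)
    print(f"notch Q={Q:3d}: phase at 331 Hz = {phase_deg(b, a, w_cal):+.3f} deg")

# Elliptic band stop, 4720-4750 Hz, 1 dB ripple, 80 dB isolation
b, a = signal.ellip(4, 1, 80, [2.0 * np.pi * 4720.0, 2.0 * np.pi * 4750.0],
                    btype='bandstop', analog=True)
print(f"elliptic band stop: phase at 331 Hz = {phase_deg(b, a, w_cal):+.3f} deg")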

Thus, we propose the elliptic band stop.

We will upgrade the filter bank, make a new DARM loop model, and update the front-end EPICS representation of that model (necessary for correct computation of the time-dependent correction factors) tomorrow morning during maintenance.
Images attached to this report
H1 DetChar
thomas.shaffer@LIGO.ORG - posted 14:57, Friday 20 January 2017 - last comment - 22:36, Monday 23 January 2017(33464)
CS Card Reader Found ON, Now Off

Kyle went to grab a helium tank just past the card reader at 22:51 UTC and found the card reader already ON. This is on our LVEA sweep checklist but must have been missed. It is now OFF.

Comments related to this report
corey.gray@LIGO.ORG - 15:02, Friday 20 January 2017 (33465)DetChar

It would be good to know whether we should keep these Readers OFF or ON.  Originally we had been turning them OFF after LVEA sweeps, but the sweep checklist had this crossed out and marked with "ON" (so the most recent version of the checklist had the Card Reader line removed).

Maybe these Card Readers are not an issue? I may have heard Robert say the coupling from these Card Readers was negligible.

michael.landry@LIGO.ORG - 22:36, Monday 23 January 2017 (33554)

Robert Schofield's investigations showed no coupling from the card reader in O1. These should be left ON, as they are used by the RRT for site status reconstruction in trigger evaluation.

H1 AOS (DetChar, PSL)
miriam.cabero@LIGO.ORG - posted 09:19, Friday 20 January 2017 - last comment - 07:23, Monday 06 March 2017(33446)
Sub-set of blip glitches might originate from PSL

Tom Dent, Miriam Cabero

We have identified a sub-set of blip glitches that might originate from PSL glitches. A glitch with the same morphology as a blip glitch shows up in the PSL-ISS_PDA_REL_OUT_DQ channel at the same time as a blip glitch is seen in the GDS-CALIB_STRAIN channel.

We have started identifying times of these glitches using omicron triggers from the PSL-ISS_PDA_REL_OUT_DQ channel with 30 < SNR < 150 and central frequencies between ~90 Hz and a few hundred Hz. A preliminary list of these times (ongoing, only the period Nov 30 - Dec 6 so far) can be found in the file

https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt

or, with omega scans of both channels (and with a few quieter glitches), in the wiki page

https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips

Only two of those times have full omega scans for now:

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20161204/1164844817-1164931217/scans/1164876856.97/

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20161204/1164844817-1164931217/scans/1164882018.54/

 

The whitened time series of the PSL channel looks like a typical loud blip glitch, which could help identify times of this sub-set of blip glitches by methods more efficient than the omicron triggers:

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20161204/1164844817-1164931217/scans/1164876856.97/1164876856.97_H1:PSL-ISS_PDA_REL_OUT_DQ_1.00_timeseries_whitened.png

Comments related to this report
thomas.dent@LIGO.ORG - 09:05, Friday 20 January 2017 (33450)
marco.cavaglia@LIGO.ORG - 14:48, Sunday 22 January 2017 (33513)DetChar
I ran PCAT on H1:GDS-CALIB_STRAIN and H1:PSL-ISS_PDA_REL_OUT_DQ from November 30, 2016 to December 31, 2016 with a relatively high threshold (results here: https://ldas-jobs.ligo-wa.caltech.edu/~cavaglia/pcat-multi/PSL_2016-11-30_2016-12-31.html). Then I looked at the coincidence between the two channels. The list of coincident triggers is:

-----------------------------------------------------
List of triggers common to PSL Type 1 and GDS Type 1:
#1:	1164908667.377000

List of triggers common to PSL Type 1 and GDS Type 10:
#1:	1164895965.198000
#2:	1164908666.479000

List of triggers common to PSL Type 1 and GDS Type 2:
#1:	1164882018.545000

List of triggers common to PSL Type 1 and GDS Type 4:
#1:	1164895924.827000
#2:	1164895925.031000
#3:	1164895925.133000
#4:	1164895931.640000
#5:	1164895931.718000
#6:	1164895958.491000
#7:	1164895958.593000
#8:	1164895965.097000
#9:	1164908667.193000
#10:	1164908667.295000
#11:	1164908673.289000
#12:	1164908721.587000
#13:	1164908722.198000
#14:	1164908722.300000
#15:	1164908722.435000

List of triggers common to PSL Type 1 and GDS Type 7:
#1:	1166374569.625000
#2:	1166374569.993000

List of triggers common to PSL Type 1 and GDS Type 8:
#1:	1166483271.312000

-----------------------------------------------------

I followed up with omega scans, and among the triggers above, only 1164882018.545000 is a blip glitch. The others are ~1 sec broadband glitches with frequency between 512 and 1024 Hz. A few scans are attached to the report.
Images attached to this comment
thomas.dent@LIGO.ORG - 07:08, Monday 23 January 2017 (33531)

Hi Marco,

your 'List of triggers common to PSL Type 1 and GDS Type 4' (15 times in two groups) are all during the known times of telephone audio disturbance on Dec 4 - see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32503 and https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLGlitches

I think these don't require looking into any further; the other classes may tell us more.

miriam.cabero@LIGO.ORG - 05:22, Tuesday 24 January 2017 (33566)

The GDS glitches that look like blips in the time series seem to be type 2, 7, and 8. You did indeed find that the group of common glitches PSL - GDS type 2 is a blip glitch. However, the PSL glitches in the groups with GDS type 7 and 8 do not look like blips in the omega scan. The subset we identified clearly shows blip glitch morphology in the omega scan for the PSL channel, so it is not surprising that those two groups turned out not to be blips in GDS.

It is surprising, though, that you only found one time with a coincident blip in both channels, when we identified several more times in just one week of data from the omicron triggers. What was the "relatively high threshold" you used?

marco.cavaglia@LIGO.ORG - 15:29, Friday 10 February 2017 (34050)DetChar
Hi. Sorry for taking so long with this. I reran PCAT on the PSL and GDS channels between 2016-11-30 and 2016-12-31 with a lower threshold for glitch identification (glitches with amplitude > 4 sigma above the noise floor) and with a larger coincidence window (coincident glitches within 0.1 seconds). The list of found coincident glitches is attached to the report.

Four glitches in Miriam's list [https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt] show up in the list: 1164532915.0 (type 1 PSL/type 3 GDS), 1164741925.6 (type 1 PSL/type 1 GDS), 1164876857.0 (type 8 PSL/type 1 GDS), 1164882018.5 (type 1 PSL/type 8 GDS). I looked at other glitches in these types and found only one additional blip, at 1166374567.1 (type 1 PSL/type 1 GDS), out of 9 additional coincident glitches. The typical waveforms of the GDS glitches show that the blip type(s) in GDS are type 1 and/or type 8. There are 1998 (type 1) and 830 (type 8) glitches in these classes. I looked at a few examples in cat 8 and indeed found several blip glitches which are not coincident with any glitch in the PSL channel.

I would conclude that PCAT does not produce much evidence for a strong correlation of blip glitches in GDS and PSL. If there is one, PSL-coincident glitches must be a small subset of blip glitches in h(t). However, some blips *are* coincident with glitches in the PSL, so looking more into this may be a good idea.
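
For reference, a minimal sketch of the coincidence step described above: matching two lists of glitch GPS times within a 0.1 s window. The example times and the function are illustrative only, not PCAT's actual implementation.

import numpy as np

def coincident(times_a, times_b, window=0.1):
    """Return pairs (t_a, t_b) with |t_a - t_b| <= window seconds."""
    times_b = np.sort(np.asarray(times_b, dtype=float))
    pairs = []
    for t in np.asarray(times_a, dtype=float):
        i = np.searchsorted(times_b, t)
        for j in (i - 1, i):
            if 0 <= j < times_b.size and abs(times_b[j] - t) <= window:
                pairs.append((t, times_b[j]))
    return pairs

# Toy example with two GPS-time lists
psl_times = [1164882018.545, 1164895924.827]
gds_times = [1164882018.50, 1164908667.19]
print(coincident(psl_times, gds_times))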
Non-image files attached to this comment
miriam.cabero@LIGO.ORG - 02:13, Wednesday 15 February 2017 (34164)

Hi,

thanks Marco for looking into this. We already expected that it was a small sub-set of blip glitches, because we only found very few of them and we knew the total number of blip glitches was much higher. However, I believe that not all blip glitches have the same origin and that it is important to identify sub-sets, even if small, to possibly fix whatever could be fixed.

I have extended the wiki page https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/PyCBC/O2SearchSchedule/O2Analysis2LoudTriggers/PSLblips and the list of times https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/O2_PSLblips.txt up to yesterday. It is interesting to see that I did not identify any PSL blips in, e.g., Jan 20 to Jan 30, but that they come back more often after Feb 9. Unfortunately, it is not easy to automatically identify the PSL blips: the criteria I used for the omicron triggers (SNR > 30, central frequency ~ few hundred Hz) do not always yield blips, but also things like https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=156436, which also affect CALIB_STRAIN but not in the form of blip glitches.

None of the times I added up to December appear in your list of coincident glitches, but that could be because their SNR in the PSL is not very high and they only leave a very small imprint in CALIB_STRAIN compared with the ones from November. In January and February there are several louder ones with a bigger effect on CALIB_STRAIN, though.

thomas.dent@LIGO.ORG - 11:59, Monday 20 February 2017 (34266)

The most recent iteration of PSL-ISS flag generation showed three relatively loud glitch times:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170732596.35/

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/scans/1170745979.41/

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170212/latest/scans/1170950466.83/

The first two are both on Feb 10; in fact, a PSL-ISS channel was picked by Hveto on that day (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20170210/latest/#hveto-round-8), though not with very high significance.
PSL not yet glitch-free?

miriam.cabero@LIGO.ORG - 03:28, Tuesday 21 February 2017 (34276)

Indeed the PSL is not yet glitch-free, as I already pointed out in my comment last week.

florent.robinet@LIGO.ORG - 06:21, Tuesday 21 February 2017 (34281)

Imene Belahcene, Florent Robinet

At LHO, a simple command line works well at printing PSL blip glitches:

source ~detchar/opt/virgosoft/environment.sh
omicron-print channel=H1:PSL-ISS_PDA_REL_OUT_DQ gps-start=1164500000 gps-end=1167500000 snr-min=30 freq-max=500 print-q=1 print-duration=1 print-bandwidth=1 | awk '$5==5.08&&$2<2{print}' 

GPS times must be adjusted to your needs.

This command line returns a few GPS times not contained in Miriam's blip list: one must check that they are actual blips.

miriam.cabero@LIGO.ORG - 06:07, Wednesday 22 February 2017 (34312)

The PSL has different types of glitches that match those requirements. When I look at the Omicron triggers, I do indeed check that they are blip glitches before adding the times to my list. Therefore it is perfectly consistent that you find GPS times with those characteristics that are not in my list. However, feel free to check again if you want/have time. Of course I am not error-free :)

florent.robinet@LIGO.ORG - 00:42, Thursday 23 February 2017 (34339)

I believe the command I posted above is an almost-perfect way to retrieve a pure sample of PSL blip glitches. The key is to only print low-Q Omicron triggers.

For example, GPS=1165434378.2129 is a PSL blip glitch and it is not in Miriam's list.

There is nothing special about what you call a blip glitch: any broadband and short-duration (hence low-Q) glitch will produce the rain-drop shape in a time-frequency map. This is due to the intrinsic tiling structure of Omicron/Omega.

miriam.cabero@LIGO.ORG - 07:23, Monday 06 March 2017 (34606)

Next time I update the list (probably some time this week) I will check the GPS times given by the command line you suggested (it would be nice if it does indeed work well at finding only these glitches; then we'd have an automated PSL blip finder!)
