LHO General
patrick.thomas@LIGO.ORG - posted 08:01, Sunday 26 March 2017 (35091)
Ops Owl Shift Summary
TITLE: 03/26 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Only out of observing to run a2l when LLO lost lock.
LOG:

11:38 UTC LLO lost lock. Out of observing to run a2l.
11:44 UTC a2l done. Back to observing.
11:45 UTC GRB
LHO General
patrick.thomas@LIGO.ORG - posted 04:00, Sunday 26 March 2017 (35090)
Ops Owl Mid Shift Summary
Have remained in observing. No issues to report.
LHO General
patrick.thomas@LIGO.ORG - posted 23:58, Saturday 25 March 2017 (35089)
Ops Owl Shift Transition
TITLE: 03/26 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 5mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s 
QUICK SUMMARY:

No issues to report.
H1 General
cheryl.vorvick@LIGO.ORG - posted 23:49, Saturday 25 March 2017 (35088)
Ops Eve Summary:

TITLE: 03/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: 

H1 General
cheryl.vorvick@LIGO.ORG - posted 20:54, Saturday 25 March 2017 (35087)
Lockloss plot from 01:05 UTC

OMC DC SUM and DHARD Y seem to glitch at the same time

Images attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 19:51, Saturday 25 March 2017 - last comment - 09:29, Monday 27 March 2017(35085)
H1 glitch; Verbal Alarms shows an OMC DCPD saturation, but the date and time are incorrect, so the claimed source of the glitch may be too
Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 09:29, Monday 27 March 2017 (35107)OpsInfo

This is a classic case of "Middle-Click Syndrome". Someone highlighted that OMC DCPD line in the Verbal Terminal and then accidentally middle-clicked later. I verified this by first checking the Verbal logs to make sure the message wasn't in there, and then going to the alarms workstation and middle-clicking in the Verbal Terminal. Sure enough, the same March 10 OMC DCPD saturation message showed up.

H1 General
cheryl.vorvick@LIGO.ORG - posted 19:39, Saturday 25 March 2017 - last comment - 13:13, Monday 27 March 2017(35086)
Control Room computers powered off

About an hour after coming on shift I noticed something in the CR air and put on my mask, but after a while it was clear that didn't help, so I started looking for other causes and realized it smelled a bit like burnt plastic. Called Corey, Richard, Dave, and a couple of other people. Turned off most of the CR computers (about 4 warm ones), checked the MSR and the Computer Users Room; the smell was only in the CR. Tried to contact Robert, waited, then turned off his computer as well, and am now waiting to see whether that computer was overheating and whether turning it off clears up the air.

Currently watching H1 from the Computer Users Room, and have Verbal Alarms running in here.

Comments related to this report
david.barker@LIGO.ORG - 13:13, Monday 27 March 2017 (35112)

we were not able to pin down where the odor was coming from, and there are no odors today.

H1 General
cheryl.vorvick@LIGO.ORG - posted 19:20, Saturday 25 March 2017 (35084)
Ops Eve Transition:

TITLE: 03/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 8mph Gusts, 6mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.25 μm/s
QUICK SUMMARY:

H1 General
jim.warner@LIGO.ORG - posted 16:08, Saturday 25 March 2017 (35083)
Shift Summary

TITLE: 03/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:
Quiet shift. Nothing to report. Amber was on site about 21:00 with a tour. Robert has been on site waiting patiently for LLO to go down.

LHO General
patrick.thomas@LIGO.ORG - posted 07:53, Saturday 25 March 2017 (35082)
Ops Owl Shift Summary
TITLE: 03/25 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Observing entire shift. No issues to report.
LOG:

07:57 UTC GRB
13:42 UTC Changed sign of gain to damp PI mode 27
LHO General
patrick.thomas@LIGO.ORG - posted 04:01, Saturday 25 March 2017 (35081)
Ops Owl Mid Shift Summary
Have remained in observing. No issues to report.
LHO General
patrick.thomas@LIGO.ORG - posted 00:06, Saturday 25 March 2017 (35080)
Ops Owl Shift Transition
TITLE: 03/25 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 61Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.30 μm/s 
QUICK SUMMARY:

No issues to report.
H1 General
cheryl.vorvick@LIGO.ORG - posted 23:54, Friday 24 March 2017 (35079)
Ops Eve Summary:

TITLE: 03/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: one lock loss, easy relocking
LOG:

Images attached to this report
H1 SUS (ISC, PEM, SUS)
suresh.doravari@LIGO.ORG - posted 23:49, Friday 24 March 2017 (35077)
Oplev laser work : PCal room temperature matched to Y-End VEA

[Suresh D, Bubba G, Jason O, Rick S., Sudarshan G]

We have seen that a non-glitchy oplev laser can be pushed into glitchy behaviour if the ambient temperature changes. Since we presently tune the lasers for single-mode (non-glitchy) operation in the PCal lab and then transport them to their respective oplev locations, we have to retune their power in situ to recover glitch-free behaviour, which is often a time-consuming process. We therefore want to be able to set the PCal room to the temperature of the LVEA (or the X or Y VEA, as needed), to improve the reliability of this process and reduce the time needed to tune the lasers in situ.

At this time the two end-station VEAs at LHO have glitchy lasers, which we wish to replace with tuned lasers at the earliest opportunity, so we measured the temperature of these VEAs.

End station temperatures

Location  Date       Time     Temperature  Remarks
Y VEA     23Mar2017  12:10PM  66.1 degF    15 min wait for T to settle [SD, BG]
X VEA     23Mar2017  03:18PM  66.5 degF    15 min wait; T oscillates between 66.3 and 66.8 degF over ~5 min [SD, RS, SG]

Bubba reset the air-conditioning control settings in the LSB labs so that we can set the PCal lab temperature to the required value.

We have prepared three lasers for glitch-free operation [SD, JO]. One of them is presently ready for installation at Y-end (tuned at a 66.1 degF room temperature). See attached pics for further details.

Two other lasers are under preparation for installation at X-end and possibly the BS.

Note: The same thermocouple sensor (Fluke hand-held device) was used at all locations to avoid calibration differences between the various wall-mounted sensors at these locations.

Images attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 23:13, Friday 24 March 2017 (35078)
Ops Eve Update:

TITLE: 03/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    Wind: 19mph Gusts, 17mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.32 μm/s
QUICK SUMMARY: locked in Observe

H1 SEI
hugh.radkins@LIGO.ORG - posted 16:22, Friday 24 March 2017 (35075)
LHO Ground STSs--Mainly Corner Station Look

Why? Robert is going to guide me on helping him with the chamber pinning investigation. That is, does proximity to the chamber reduce wind-induced tilt of the floor?

SEI is using just one of our three LVEA STS2s, ITMY (STS2B), for HEPI and ISI sensor correction. So we'll move an under-utilized STS around the floor to see how distance from the chamber legs affects the wind-induced tilt of the floor.

Right now, the HAM5 machine is actually in the BierGarten, a couple of meters -Y of the ITMY machine; I'll call the machine in this position Roam1. It is not quite close enough for a huddle test (especially if it were windy), so I'll move it closer next week.

Meanwhile, we can compare things now to assess the seismometers' health.

The wind was fairly quiet last night, so attached are low-wind comparisons.

Plot 1 shows the X axes. Roam1 and ITMY compare pretty well until we get down to about 20 mHz. HAM2 X may have a problem, or it may just be the location. With these light winds (really less than 5 mph during most of the 5500 s spectra), it is troubling to see the ETMY X channel: the wind is light, it is 4:30 am, and there is no reason for that noise compared to the others.

The Y axes in Plot 2 show that the good agreement between ITMY and Roam1 now extends down to below 10 mHz. HAM2, we assume, is again suffering either from its location or from its rough LIGO life over the years. The end machines also compare quite well down to below 10 mHz.

The third plot shows great Z-axis agreement between the site instruments to well below 10 mHz (except maybe HAM2 again, around 5-6 mHz). Too bad this isn't the channel we need for tilt studies.
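
For reference, a minimal sketch (Python with gwpy, assuming NDS access) of the kind of low-frequency ASD comparison shown in the plots. The channel names follow the usual H1 ground STS naming convention but are illustrative, and the times are placeholders:

import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeriesDict

# Illustrative channel names; the actual H1 ground STS channels may differ.
channels = [
    'H1:ISI-GND_STS_ITMY_X_DQ',
    'H1:ISI-GND_STS_HAM2_X_DQ',
    'H1:ISI-GND_STS_ETMY_X_DQ',
]

# ~5500 s of quiet-wind data, as in the spectra above (placeholder times)
data = TimeSeriesDict.get(channels, 'Mar 24 2017 10:00', 'Mar 24 2017 11:32')

fig, ax = plt.subplots()
for name, ts in data.items():
    # Long FFTs (1000 s, 50% overlap) to resolve the mHz band
    asd = ts.asd(fftlength=1000, overlap=500)
    ax.loglog(asd.frequencies.value, asd.value, label=name)
ax.set_xlim(1e-3, 1)
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('Ground motion ASD')
ax.legend()
fig.savefig('gnd_sts_x_comparison.png')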

Images attached to this report
LHO OpsInfo
thomas.shaffer@LIGO.ORG - posted 16:06, Friday 24 March 2017 (35064)
Ops Day Shift Summary

TITLE: 03/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: 11.5 hour lock, observing for most of the time, other than for 24 min while Robert turned on the fire pump while LLO was down. DARM seems to be a bit noisier since then and our range has been dipping a bit. I'm not sure where the balers are, but they are not where Bubba said they were supposed to be. We are still seeing some large spikes in the seismic 3-10 Hz BLRMS, so maybe this is the cause, but that is just a wild guess.

LOG:

H1 DetChar (DetChar, SUS)
gabriele.vajente@LIGO.ORG - posted 14:58, Friday 24 March 2017 - last comment - 16:12, Wednesday 29 March 2017(35073)
Correlation of blip glitches with test mass suspension signals

In brief, I found some significant correlation between (a class of) blip glitches and the values of some suspension signals coming from the OSEM and local sensor readout.

I found that some of the blip glitches tend to cluster around particular values of the signals; the table in the Results section below lists the most significant ones.

Read below for more details on the methodology and the results.

Method

At this point I have 5548 times identified as blip glitches, together with the value of each suspension channel at those times, and 5548 more times of clean data, again with the value of each suspension channel.
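
As an illustration of how such a dataset could be assembled, here is a minimal sketch in Python using gwpy; the blip time list, the short averaging window, and the channel subset are my assumptions for illustration, not the actual pipeline:

import numpy as np
from gwpy.timeseries import TimeSeries

# Illustrative subset of the suspension channels considered
channels = [
    'H1:SUS-ETMX_M0_DAMP_Y_INMON',
    'H1:SUS-ITMY_L1_WIT_PMON',
]

def values_at_times(channel, gps_times, window=2):
    """Mean value of `channel` in a +/- `window` second span around each time."""
    vals = []
    for t in gps_times:
        ts = TimeSeries.get(channel, t - window, t + window)
        vals.append(float(ts.value.mean()))
    return np.array(vals)

# blip_times: 5548 GPS times from the blip-glitch list (assumed given)
# clean_times: 5548 random times from the same period, away from any blip
# blip_vals = values_at_times(channels[0], blip_times)
# clean_vals = values_at_times(channels[0], clean_times)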

Results

Here's an example plot of the result. I picked one of the most interesting channels (SUS-ETMX_M0_DAMP_Y_INMON). The PDF files attached below contain the same kind of histograms for all channels, divided by test mass.

The first row shows results for the SUS-ETMX_M0_DAMP_Y_INMON signal. The first panel compares the histogram of the values of this signal for blip times (red) and clean times (blue). This histogram is normalized such that the value of the curve is the empirical probability distribution of having a particular value of the signal when a blip happens (or doesn't happen). The second panel in the first row is the cumulative probability distribution (the integral of the histogram): the value at an abscissa x gives the probability of having a value of the signal lower than x. It is a standard way to smooth out the histogram, and it is often used as a test for the equality of two empirical distributions (the Kolmogorov-Smirnov test). The third panel is the ratio of the histogram of glitchy times over the histogram of clean times: if the two distributions are equal, it should be one. The shaded region is the 95% confidence interval, computed assuming that the number of counts in each bin of the histogram follows a Poisson distribution. This is probably not a good assumption, but it is the best I could come up with.
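
As a concrete illustration of the three panels just described, here is a minimal sketch, assuming blip_vals and clean_vals are the arrays of channel values at blip and clean times; the 1.96-sigma band uses simple Poisson error propagation on the bin counts:

import numpy as np
import matplotlib.pyplot as plt

def compare_distributions(blip_vals, clean_vals, nbins=50):
    lo = min(blip_vals.min(), clean_vals.min())
    hi = max(blip_vals.max(), clean_vals.max())
    bins = np.linspace(lo, hi, nbins + 1)
    nb, _ = np.histogram(blip_vals, bins)
    nc, _ = np.histogram(clean_vals, bins)
    centers = 0.5 * (bins[1:] + bins[:-1])

    fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.5))

    # Panel 1: normalized histograms (empirical probability distributions)
    ax1.plot(centers, nb / nb.sum(), 'r', drawstyle='steps-mid', label='blips')
    ax1.plot(centers, nc / nc.sum(), 'b', drawstyle='steps-mid', label='clean')
    ax1.legend()

    # Panel 2: cumulative distributions (as in a Kolmogorov-Smirnov test)
    ax2.plot(centers, np.cumsum(nb) / nb.sum(), 'r')
    ax2.plot(centers, np.cumsum(nc) / nc.sum(), 'b')

    # Panel 3: bin-by-bin ratio with a naive Poisson 95% band,
    # sigma(ratio)/ratio ~ sqrt(1/nb + 1/nc)
    safe_nc = np.maximum(nc, 1)
    ratio = nb / safe_nc
    sigma = ratio * np.sqrt(1.0 / np.maximum(nb, 1) + 1.0 / safe_nc)
    ax3.plot(centers, ratio, 'k')
    ax3.fill_between(centers, ratio - 1.96 * sigma, ratio + 1.96 * sigma, alpha=0.3)
    ax3.axhline(1, linestyle='--')
    return fig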

The second row is the same, but this time I'm considering the derivative of the signal. The third row is for the second derivative. I don't see much of interest in the derivative plots of any signal. This is probably due to the high-frequency content of those signals; I should try to apply a low-pass filter. It's on my to-do list.

However, looking at the histogram comparison in the very first panel, it's clear that the two distributions are different: a subset of blip glitches happen when the signal SUS-ETMX_M0_DAMP_Y_INMON has a value of about 15.5. There are almost no counts of clean data at this value. I think this is a quite strong correlation.

You can look at all the signals in the PDF files. Here's a summary of the most relevant findings. I'm listing all the signals that show a significant peak of blip glitches, as above, and the corresponding value:

Signal              ETMX         ETMY   ITMX                ITMY
L1_WIT_LMON                             -21.5               -54.5
L1_WIT_PMON         495, 498     609                        581
L1_WIT_YMON         753          -455   643, 647            500.5
L2_WIT_LMON         437, 437.5          -24                 -34.3
L2_WIT_PMON                             1495                -993, -987
L2_WIT_YMON                             -170, -168, -167.5  69
L3_OPLEV_PIT_OUT16                      -13, -3             7
L3_OPLEV_YAW_OUT16  -4.5         7.5    -10, -2             3
M0_DAMP_L_INMON     36.6                14.7                26.5, 26.9
M0_DAMP_P_INMON                  -120   363                 965, 975
M0_DAMP_R_INMON     32.8         -4.5   190                 -90
M0_DAMP_T_INMON     18.3         10.2   20.6                27.3
M0_DAMP_V_INMON     -56.5        65     -25                 -60
M0_DAMP_Y_INMON     15.5         -71.5  -19, -17            85

(Blank cells: no significant peak for that suspension.)

Some of the peaks listed above are quite narrow, some are wider. It looks like ITMY is the suspension with the most peaks, and probably the most significant correlations. But that's not very conclusive.

To do next

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
andrew.lundgren@LIGO.ORG - 15:24, Friday 24 March 2017 (35074)
There were some days with extremely high rates of glitches in January. It's possible that clumping of glitches in time could be throwing off the results. Maybe you could try considering December, January, and February as separate sets and see if the results only hold at certain times.

Also, it would be nice to see how precisely you can predict the glitches in time. Could you take the glitch times, randomly offset each one according to some distribution, and use those as the 'clean' times? That way they would have roughly the same time distribution as the glitches, so you shouldn't be much affected by changes in IFO alignment. You could then see whether the blips can be predicted to within a few seconds or within a minute, as sketched below.
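
A minimal sketch of this suggestion, assuming blip_times is an array of GPS times; the 30-300 s offset range is an arbitrary choice for illustration:

import numpy as np

rng = np.random.default_rng(0)

def offset_clean_times(blip_times, min_off=30, max_off=300):
    """Build 'clean' times by randomly offsetting each blip time, so the
    clean sample shares the blips' distribution in time."""
    blip_times = np.asarray(blip_times, dtype=float)
    signs = rng.choice([-1.0, 1.0], size=blip_times.size)
    offsets = rng.uniform(min_off, max_off, size=blip_times.size)
    return blip_times + signs * offsets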
gabriele.vajente@LIGO.ORG - 17:25, Friday 24 March 2017 (35076)

In my analysis I assumed that the list of blip glitches was not affected by the vetoes. I was wrong.

Looking at the histogram in my entry, there is a set of glitches that happen when H1:SUS-ETMX_M0_DAMP_Y_INMON > 15.5. So I plotted the value of this channel as a function of time for all blip glitches. In the plot below, the blue circles are all the blips, the orange dots are all the blips that pass ANALYSIS_READY, and the yellow crosses are all the blips that pass the vetoes. Clearly, the period of time when the signal was > 15.5 is completely vetoed.
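
A sketch of this check, assuming blip_times and blip_vals are numpy arrays and analysis_ready and veto_free are segment lists (these names are mine, for illustration):

import numpy as np
import matplotlib.pyplot as plt

def in_segments(t, segments):
    # True if GPS time t falls inside any segment in the list
    return any(t in seg for seg in segments)

ready = np.array([in_segments(t, analysis_ready) for t in blip_times])
passed = np.array([in_segments(t, veto_free) for t in blip_times])

fig, ax = plt.subplots()
ax.plot(blip_times, blip_vals, 'o', mfc='none', label='all blips')
ax.plot(blip_times[ready], blip_vals[ready], '.', label='ANALYSIS_READY')
ax.plot(blip_times[passed], blip_vals[passed], 'x', label='pass vetoes')
ax.axhline(15.5, linestyle='--', color='gray')  # threshold discussed above
ax.set_xlabel('GPS time')
ax.set_ylabel('H1:SUS-ETMX_M0_DAMP_Y_INMON')
ax.legend()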

So that's why I got different distributions: my sampling of clean times included the vetoed periods, while the blip list did not. I ran the analysis again including only non-vetoed blips, and that family of blips disappeared. There are still some differences in the histograms that might be interesting to investigate. See the attached PDF files for the new results.

One more thing to check is the distribution over time of the blip glitches and of the clean times. There is an excess of blips between days 39 and 40, while the distribution of clean data is more uniform. This is another possible source of issues in my analysis. To be checked.

Images attached to this comment
Non-image files attached to this comment
miriam.cabero@LIGO.ORG - 16:12, Wednesday 29 March 2017 (35200)

@Gabriele

The excess of blips around day 40 corresponds to a known period of high glitchiness; I believe it was around Dec 18-20. I also got that peak when I made a histogram of the lists of blip glitches coming from the online blip glitch hunter (the histogram is at https://ldas-jobs.ligo.caltech.edu/~miriam.cabero/tst.png; it is not as pretty as yours, since I just made it last week very quickly to get a preliminary view of variations in the rate of blips during O2 so far).
