H1 General
corey.gray@LIGO.ORG - posted 21:59, Wednesday 19 July 2017 (37641)
Back To OBSERVING, Missed GRB, REFL Servo Gain Diff

4:11  Had a random lockloss.  It's windy out, but not too bad. 

As we were seconds from getting to NLN, we had a GRB (Swift) & this was confirmed with Will at LLO.

Also had another H1SYSECATC1PLC2 SDF diff similar to yesterday's.  It is related to the REFL servo gain, and this time we were at 6dB (vs 7dB, which I ACCEPTED yesterday).  This matches what Thomas wanted per his alog yesterday, so perhaps Guardian now accounts for this & we should be good for future locks.

4:51 OBSERVING.

LHO General
corey.gray@LIGO.ORG - posted 20:11, Wednesday 19 July 2017 (37639)
Mid Shift Status

Have some nice triple coincidence going with GEO (& Virgo is locked, but not in OBSERVING).  

Rode through a 5.8 EQ in Japan.  Sustained winds at about 10mph for the last 10hrs.

LHO General
corey.gray@LIGO.ORG - posted 16:22, Wednesday 19 July 2017 (37636)
Transition To EVE

TITLE: 07/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 53Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
    Wind: 14mph Gusts, 11mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.06 μm/s

QUICK SUMMARY:

Low useism & slight winds.

Ed just finished up an A2L and took us back to OBSERVING while he was handing off to me.

Jenne just finished walking a tour through the Control Room a few minutes ago.

H1 General
edmond.merilh@LIGO.ORG - posted 16:00, Wednesday 19 July 2017 (37634)
Running a2l

22:51 New lock is 1 hr old and LF DARM is looking "off". Livingston was informed.

22:58 Intention Bit back to Undisturbed

 

H1 SUS (CAL, DetChar, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 15:28, Wednesday 19 July 2017 (37628)
Charge Measurement Update; Some H1SUSETMY Quadrants Now Have ~50-100 [V] Effective Bias Voltage
J. Kissel

I've measured the effective bias voltages for each quadrant of the ETM ESD systems, as per the usual optical lever angular response method. 
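
For illustration, here is a minimal sketch of the fitting step in that method (made-up numbers and a simplified model; the real measurement scripts live elsewhere).  The signed angular response to an ESD drive scales roughly linearly with the applied bias voltage, so the zero crossing of a linear fit gives the effective bias voltage:

    # Sketch: estimate effective bias voltage from optical lever angular
    # responses measured at several applied ESD bias voltages.
    # Assumes the signed response R scales as R = k * (V_bias + V_eff).
    import numpy as np

    V_bias   = np.array([-380.0, -190.0, 0.0, 190.0, 380.0])  # applied bias [V], made-up
    response = np.array([-0.90, -0.35, 0.20, 0.75, 1.30])     # signed OL response [arb.], made-up

    k, offset = np.polyfit(V_bias, response, 1)  # linear fit: R = k*V_bias + offset
    V_eff = offset / k                           # R = 0 at V_bias = -V_eff
    print("Effective bias voltage: %+.1f V" % V_eff)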

Unfortunately, it looks like several of the quadrants of the ETMY ESD have accumulated an effective bias voltage of (negative) ~50-100 [V] -- a change of about 30-40 [V] since the last measurement on Jun 27 2017 -- namely UL in pitch, LL in yaw, UR in pitch, and LR in yaw. This corresponds to a 15-20% decrease in angular actuation strength (relative to no effective bias voltage; a 10% change since last measured on Jun 27th). 
See attached 1-year trends of the effective bias voltage and angular actuation strength.

This seems to have not drastically affected the longitudinal actuation strength, as measured by the calibration lines in DARM, which reports only a ~5% increase in actuation strength with respect to when we last updated the actuation strength in the calibration model in January. 
Looking back through the summary pages, one can see a ~1-2% decrease in actuation strength (which corresponds to an increase of the necessary correction factor, kappa_{tst}) between July 6th (when we were hit by the 5.8 Mag Montana earthquake) and July 9th, once we were able to recover the IFO enough to measure with calibration lines. It usually takes a month or two for charge to accumulate a ~1-2% change "naturally." 
I attach four reports from the summary pages, 
     2017-06-28 -- Just after the last charge measurement
     2017-07-06 -- Just before the 5.8 Mag Montana earthquake
     2017-07-09 -- just after the 5.8 Mag Montana earthquake
     2017-07-19 -- today.
The overall h(t) calibration is OK, since we correct for this changing actuation strength.
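As a schematic of why (made-up numbers; kappa_tst here is the TST-stage scale factor tracked by the calibration lines):

    # Sketch: h(t) stays calibrated despite actuation drift, as long as
    # kappa_tst tracks the drift. All numbers are made up for illustration.
    A_model = 1.00                # TST actuation strength in the January model [norm.]
    A_true  = 1.05                # actual strength today, ~5% higher (per cal lines)
    kappa_tst = A_true / A_model  # measured continuously via calibration lines

    error_uncorrected = A_true / A_model - 1.0                # if we ignored the drift
    error_corrected   = A_true / (kappa_tst * A_model) - 1.0  # with kappa applied
    print("Uncorrected scale error: %+.1f%%" % (100 * error_uncorrected))
    print("Corrected scale error:   %+.1f%%" % (100 * error_corrected))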
Note that on Jun 27th, the calibration team switched the GDS pipeline's actuation strength reference from H1 PCALY's RX PD to its TX PD, due to known problems with the RX PD clipping. This was *not* done in the front-end calculation. So the red trace should be trusted, and the gray should be ignored.

So, if you asked me "do you think this drastic effective bias voltage change is from the Montana Earthquake?" 
I would say "The data isn't as dense as we'd like for a definitive statement, but signs point to yes."

This leaves a few questions open:
(1) Can we ascertain the geometric location of a charge source that would impact the actuation strengths as above?
    - Yes, but it'll need a little thinking, and I want to get this data up. Stay tuned.

(2) Is it plausible that such a geometric arrangement would cause so little change in longitudinal actuation strength?
    - Yes, but contingent upon the answer to (1).

(3) Even with glass tips on all EQ stops, is it still possible that charge can accumulate from a violent earthquake?
    - I don't know enough about the mechanism to say... 

(4) Is there a potential for this effective bias voltage configuration / amount to cause the recent mystery noise (see LHO aLOGs 37616, 37599 and 37590)?
    - Plausible; one might imagine if there's enough effective bias voltage, that coupling to ambient electric fields is larger...
    Remember there are three potential ways that the ESD system is vulnerable to stray charge changing its canonical actuation strength (alpha),
        (a) Charge accumulated in between the test mass and reaction mass, (beta)
        (b) Charge accumulated on areas outside of the gap, on the test mass (beta_2)
        (c) Charge on the surrounding cage (gamma)
    and these angular actuation measurements cannot distinguish between these, as 
        V_eff = (beta - beta_2) / [2 (alpha - gamma)]
    See discussion in T1500467, and the toy numerical sketch just after this list.

(5) Is it worth spinning up the TMDS band-wagon?
    - My vote is no, but this decision should be made with input from a much broader audience.
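
To make the degeneracy in (4) concrete, here is a toy numerical sketch (all charge parameters are made up, and the ~400 V nominal bias is an assumed round number): two very different charge configurations produce the same measured V_eff, and that V_eff maps onto the quoted angular-strength loss.

    # Toy sketch: different charge configurations give the same measured
    # effective bias voltage, so this measurement cannot separate them.
    def v_eff(alpha, beta, beta2, gamma):
        # V_eff = (beta - beta_2) / [2 (alpha - gamma)], per T1500467
        return (beta - beta2) / (2.0 * (alpha - gamma))

    alpha = 1.0  # canonical actuation strength [normalized]
    print(v_eff(alpha, beta=-150.0, beta2=10.0,  gamma=0.0))  # gap-charge dominated: -80 V
    print(v_eff(alpha, beta=-20.0,  beta2=140.0, gamma=0.0))  # outside-gap dominated: -80 V

    # Rough angular-strength impact, assuming the response scales as
    # (V_bias + V_eff) with an assumed nominal bias of ~400 V:
    V_bias, V_eff_meas = 400.0, -80.0
    print("Strength vs. zero charge: %.0f%%" % (100 * (V_bias + V_eff_meas) / V_bias))  # 80%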

For the record, ETMX looks OK (all quadrants' effective bias voltage is below +/-40 [V]), but the data is too sparse to make conclusive statements about whether there was (a) a similar ~30-40 [V] jump, or (b) a sudden change coincident with the Montana quake. The majority of quadrants look to be continuing on the same trend as before; only UL in yaw and LR in pitch are suspicious.
Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 15:21, Wednesday 19 July 2017 - last comment - 15:58, Wednesday 19 July 2017(37627)
SVN Committals du jour

The following were committed by me to the SVN:

 

Comments related to this report
edmond.merilh@LIGO.ORG - 15:30, Wednesday 19 July 2017 (37632)

Some more for the list:

  • /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSITMX.txt      hugh.radkins Jul 10 14:19
  • /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSETMY.txt      jim.warner Jul 10 14:17
  • /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSETMX.txt      jim.warner Jul 10 14:06
jeffrey.kissel@LIGO.ORG - 15:58, Wednesday 19 July 2017 (37635)CDS, SUS
J. Kissel, J. Warner, H. Radkins, Operators

All of the suspension filter changes are from the past few weeks worth of work on violin mode damping. Thanks for cleaning up after us!
H1 General
edmond.merilh@LIGO.ORG - posted 15:15, Wednesday 19 July 2017 - last comment - 16:54, Wednesday 19 July 2017(37626)
Lockloss and Recovery - H1 back to Observing

19:39 H1 lost lock with no obvious cause. Jeff K seized the opportunity to take the charge measurements he wasn't able to take yesterday. The LLO operator was informed of this activity. They, too, will do some opportunistic work in the interim.

20:57 I needed to re-align the green arms.

21:11 Begin main locking sequence.

22:00 Chasing down PI modes 28 and 26

22:06 Intention bit - Undisturbed 52Mpc

 

Comments related to this report
hugh.radkins@LIGO.ORG - 16:54, Wednesday 19 July 2017 (37638)Lockloss, PEM, SEI

Interestingly, the lockloss is very close in time to the End Y chiller fault, but I can't say they are related.

I trended numerous PEM and ISI channels (some attached): the CY_STATUS dropped from 1 to 0 at 19:38:37 UTC, the IFO dropped lock at 19:39:29, and PUMPSTAT alarmed at 19:39:40.  There is a large MAINSMON glitch when the CY_STATUS changed, but no ground seismometer response at the time of lockloss.  I'm not sure how the CY_STATUS relates to the PUMPSTAT.  I stepped through the spectra of the floor STS but see nothing, at a broad level, worth digging into further.

Images attached to this comment
LHO FMCS
bubba.gateley@LIGO.ORG - posted 15:01, Wednesday 19 July 2017 - last comment - 15:33, Wednesday 19 July 2017(37629)
E Y Chilled Water Pump Trip
At about 12:45 PM local time the End Y chilled water pump tripped, reason unknown. It was reset at ~2:30 PM. The temperature in the VEA rose ~1.75 degrees F during that time.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:33, Wednesday 19 July 2017 (37633)CAL, CDS, DetChar, ISC
Tagging relevant parties.
See attached 5-day temperature report to show the change relative to how well we've been doing.
Images attached to this comment
H1 SEI
edmond.merilh@LIGO.ORG - posted 12:40, Wednesday 19 July 2017 (37617)
EQ Report

Terramon is reporting 0.586 um/s from the Tonga region and, oddly enough, BLRMS are showing exactly that. Seismon hasn't updated since an Alaskan-area quake that happened yesterday.

H1 rode through like a champ.

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 12:23, Wednesday 19 July 2017 (37609)
Mid-Shift Summary

15:00 Bubba to MX to survey upcoming Apollo welding work.

15:31 big LF glitch

15:33 Marc to MY

15:34 another LF glitch

15:51 Marc and Bubba back

16:44 Chris will move the 1 ton from behind the water tower to the LSB.

16:57 big LF glitch

18:32 Cheryl into optics lab

18:45 Chandra to MY

18:59 Karen to MY to clean

19:07 Found INJ_Trans node in "INIT" state so I returned it to the INJECT_SUCCESS state.

19:17 another LF glitch

19:19 Coordinating with those who have file mods to commit them to SVN

H1 ISC
jenne.driggers@LIGO.ORG - posted 11:06, Wednesday 19 July 2017 - last comment - 13:19, Wednesday 19 July 2017(37616)
Mainsmon coherent enough to subtract low freq noise?!

Looking at the subtraction for today, I noticed that the PEM mainsmon (PEM-EY_MAINSMON_EBAY_1_DQ) was coherent enough to subtract noise at frequencies other than the power lines.  This is pretty unusual, so we're trying to think up things to check, like whether the ETM is much more charged than it used to be.  TVo and Sheila are looking at BruCo to see if anything else looks suspicious.

I'll take a quick look to see if other mainsmon channels are also highly coherent, but this may be an interesting avenue to look at for why we have this extra noise now. 

Of note is that I didn't see this mainsmon subtraction for data from July 15th (alog 37590).
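
For anyone repeating the spot check, here is a minimal sketch of the kind of coherence scan involved (GWpy is an assumed tool here, not necessarily what was used; the channel list is just the one named above):

    # Sketch: scan mainsmon channels for broadband coherence with DARM.
    from gwpy.timeseries import TimeSeries

    start, end = 1184486418, 1184487442  # example GPS span
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)

    for chan in ['H1:PEM-EY_MAINSMON_EBAY_1_DQ']:  # extend with other mainsmons
        aux = TimeSeries.get(chan, start, end)
        if aux.sample_rate != darm.sample_rate:
            aux = aux.resample(darm.sample_rate)
        coh = darm.coherence(aux, fftlength=8, overlap=4)
        # look away from the 60 Hz lines for genuinely broadband coherence
        print(chan, float(coh.crop(20, 50).max()))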

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 11:39, Wednesday 19 July 2017 (37618)DetChar, PEM

The mainsmon channels also seem noisier than in the past.  Is this something that we've seen before?

The attached spectra show that the mainsmons are noisier now than they were a few days ago.  Spot checking, the noise was there on July 18th, but not July 17th.  It's not clear from the time series trends that there is anything different going on.

EDIT: Spot checking more spectra, it looks like the noise may have started on 17 July 2017, between 20:00 and 21:00 UTC.

Images attached to this comment
thomas.massinger@LIGO.ORG - 12:08, Wednesday 19 July 2017 (37619)DetChar

Josh, TJ

Can you explain how you're finding the broadband coherence between the MAINSMON and h(t)?

The bruco run from today doesn't show anything other than 60 Hz + harmonics: 

https://ldas-jobs.ligo.caltech.edu/~thomas.massinger/bruco_H1_July19_10UTC/PEM-EY_MAINSMON_EBAY_1_DQ.png

We did some spot checking and don't see significant broadband coherence throughout the day; it looks consistent with other days from the last few weeks: 

https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=172856


Blue: 2017-07-19 03:30:00
Yellow: 2017-07-19 01:30:00
Green: 2017-06-30 14:30:00
Red: 2017-07-04 14:30:00

jeffrey.kissel@LIGO.ORG - 12:12, Wednesday 19 July 2017 (37620)
For the record, the analog CDS team reverted the ESD power supplies on July 11th -- see LHO aLOG 37455.
jenne.driggers@LIGO.ORG - 12:52, Wednesday 19 July 2017 (37621)

Hmmm.  Looking at coherence on DTT, I'm also not seeing much.  I was inferring that there would be coherence based on the subtractability of the noise.  As Kiwamu pointed out, perhaps it's a series of glitches or something, where the coupling is constant but the noise isn't, so when you look at coherence averaged over long times, it doesn't show up?

EDIT: It looks like Kiwamu was right, that there was a glitch, probably in the power lines.  I re-did the subtraction in sections of 256 seconds rather than a full 1024, and the first sets were fine and normal (no broad subtraction with mainsmon), and the last set is pretty significant.  So, maybe this is just a regular thing that happens, and I just caught it by accident.  The attached is a spectrum during the time of the glitch.  I assume that the glitch must be on the power lines, since I get such good subtraction using them as my witness.
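
A sketch of that segmenting check (placeholder data; real data would come from frames).  The point is that coherence averaged over the full 1024 s can hide a transient coupling that dominates a single block; note also that the achievable subtraction at each frequency is limited to a residual of roughly sqrt(1 - coherence) times the original spectrum:

    # Sketch: per-256s-block coherence to localize a transient coupling.
    import numpy as np
    from scipy.signal import coherence

    fs = 1024.0                              # assumed sample rate [Hz]
    darm  = np.random.randn(int(1024 * fs))  # placeholder for DARM data
    mains = np.random.randn(int(1024 * fs))  # placeholder for mainsmon data

    block = int(256 * fs)
    for i in range(0, len(darm), block):
        f, C = coherence(darm[i:i+block], mains[i:i+block], fs=fs, nperseg=int(8 * fs))
        band = (f > 20) & (f < 100)
        print("block %d: max coherence 20-100 Hz = %.2f" % (i // block, C[band].max()))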

Images attached to this comment
thomas.vo@LIGO.ORG - 13:13, Wednesday 19 July 2017 (37623)

Sheila and I ran BruCo:

https://ldas-jobs.ligo-wa.caltech.edu/~thomas.vo/bruco_July19/

At this GPS time (1184486418), in roughly the 25-35 Hz range, there is a lot of coherence between H1:ASC-OMC_A_PIT(YAW) and DARM.  But spot checking the few hours after with DTT, this seems to go away, so maybe there's some transient stuff going on during this time.

Non-image files attached to this comment
thomas.massinger@LIGO.ORG - 13:19, Wednesday 19 July 2017 (37625)DetChar

Looking at a spectrogram of the MAINSMON channel, there are two broadband glitches near the end of the 1024 second stretch from your original plot:

https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=172923

 

H1 CDS
david.barker@LIGO.ORG - posted 10:23, Wednesday 19 July 2017 - last comment - 12:39, Wednesday 19 July 2017(37611)
Virgo alerts are not currently working

Starting at 04:25 PDT this morning (11:25 UTC), the Virgo alert system stopped working.

The log file reports a bad request error:

Traceback (most recent call last):
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 498, in <module>
    far=args.far, test=args.test))
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 136, in query_gracedb
    'label: %sOPS %d .. %d' % (ifo, start, end))
  File "/opt/rtcds/userapps/release/cal/common/scripts/vir_alert.py", line 162, in log_query
    return list(connection.events(query))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 618, in events
    response = self.get(uri).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 374, in get
    return self.request("GET", url, headers=headers)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 488, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 356, in request
    return self.adjustResponse(response)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 369, in adjustResponse
    raise HTTPError(response.status, response.reason, response_content)
ligo.gracedb.rest.HTTPError: (400, 'BAD REQUEST / {"error":"Invalid query"}')

 

Comments related to this report
david.barker@LIGO.ORG - 12:39, Wednesday 19 July 2017 (37622)

After discussion with Keita, we will stop monit from trying to run vir_alert on h1fescript0 for now. I believe the plan is that these alerts will be added to the standard gracedb database prior to the next V1 engineering run. I have sent emails bringing sysadmins' attention to a potential issue with gracedb-test in case other users are being impacted.
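
For the record, a minimal defensive pattern for the query call (a sketch, not the actual vir_alert fix; the query string is the form seen in the traceback above, and its validity is exactly what's in question):

    # Sketch: catch GraceDB query failures so monit doesn't loop forever
    # on an unrecoverable "Invalid query" error.
    from ligo.gracedb.rest import GraceDb, HTTPError

    connection = GraceDb()
    query = 'label: H1OPS 1184486418 .. 1184490000'  # form from the traceback; possibly invalid

    try:
        events = list(connection.events(query))
    except HTTPError as e:
        if e.status == 400:
            # Invalid query: retrying won't help; log it and move on.
            print("GraceDB rejected the query (%s); check the query syntax." % e.reason)
            events = []
        else:
            raise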

H1 General
corey.gray@LIGO.ORG - posted 18:47, Tuesday 18 July 2017 - last comment - 21:55, Wednesday 19 July 2017(37602)
Back to OBSERVING

Took H1 back to OBSERVING at 1:22 UTC (6:22pm).  Had an SDF diff, but it was related to the REFL servo gain noted by Thomas & Pep.

Running with a range of ~54Mpc.

Comments related to this report
corey.gray@LIGO.ORG - 21:55, Wednesday 19 July 2017 (37640)

Attached is what the SDF diff was when I ACCEPTED it (but this doesn't match what Thomas notes in his alog).

Here the gain was at 7dB (usually it was 6dB), but Thomas says he changed it the other way around.

Images attached to this comment