H1 DetChar (DetChar, ISC, SUS)
andrew.lundgren@LIGO.ORG - posted 12:16, Saturday 14 January 2017 (33267)
Weak evidence that ETMY L2 is associated with range drop
There's some evidence that something strange is happening in the ETMY drive electronics and it's associated with the range drops. The ETMY L2 coil monitors (the noisemons) get quieter by about a factor of two around the same time as the range drops, but there's also excess noise that's not accounted for by the drive signal.

The strange behavior of the noisemon is not perfectly correlated with the drops in range, and the excess noise in the noisemon does not have the same shape as the excess in DARM and is not coherent with it. However, it does point to further problems with the drive electronics at EY. So would it be possible to switch control over to EX as Sheila suggested before? The range loss due to this problem is dropping the event rate by at least a factor of 2 (a 20% range loss, cubed), so we should do anything with a chance of solving it. Power-cycling the EY electronics might also help, or checking any fast readbacks of the driver (the versions in the frames only go up to 1 kHz, so if there's high-frequency junk we can't see it).
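A quick sanity check on the factor-of-2 figure (a minimal sketch only; the nominal range value is an assumed placeholder). The detectable event rate scales with the sensitive volume, i.e. with the cube of the range:

    # rate scales as range**3, so a 20% range loss roughly halves the event rate
    nominal_range = 70.0                  # Mpc, assumed nominal BNS range for illustration
    degraded_range = 0.8 * nominal_range  # ~20% range loss
    rate_ratio = (degraded_range / nominal_range) ** 3
    print(rate_ratio)                     # 0.512, i.e. about half the events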

Attached are three plots. The first is a comparison of the noisemon during the reference (Jan 13 1 UTC) and the bad time (Jan 13 14:45 UTC). The second is a comparison of the noisemons with the drive signals (L2 MASTER UL) subtracted. The transfer function is measured from the data, since it is clearly changing. The transfer functions have the same shape and phase, but are different by a scale factor, about 0.42. There's excess noise during the bad time that is clearly above the noise floor of the coil monitor. The final plot is the RMS of the monitor channel during Jan 13 (ending at the last lock), showing that it starts to drop around the same time as the range. On Jan 8, the RMS is steady for the whole day, while on Jan 12 it also shows a drop during a time when the DARM range drops.
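For anyone wanting to reproduce the second plot, here is a minimal sketch of the drive-subtraction step. It assumes drive and noisemon are numpy arrays holding the L2 MASTER UL drive and the coil monitor for the same stretch, resampled to a common rate fs; the channel access, sample rate, and FFT length are placeholders, not the script actually used here:

    import numpy as np
    from scipy import signal

    fs = 2048          # assumed common sample rate after resampling
    nperseg = 16 * fs  # ~16 s averages

    # Empirical transfer function from drive to noisemon, H = Pxy / Pxx,
    # measured from the data since the electronics gain is clearly changing.
    f, Pxx = signal.welch(drive, fs, nperseg=nperseg)
    _, Pyy = signal.welch(noisemon, fs, nperseg=nperseg)
    _, Pxy = signal.csd(drive, noisemon, fs, nperseg=nperseg)
    H = Pxy / Pxx

    # Residual noisemon spectrum after removing everything coherent with
    # the drive; excess above the monitor's electronics noise floor is the
    # unexplained noise discussed above.
    _, Cxy = signal.coherence(drive, noisemon, fs, nperseg=nperseg)
    residual_asd = np.sqrt(Pyy * (1.0 - Cxy))

    # Comparing |H| between the reference and the bad time shows the same
    # shape and phase but an overall gain lower by roughly 0.42.

The coherence subtraction removes whatever fraction of the monitor signal is linearly predictable from the drive, without having to fit a parametric filter.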
Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 12:15, Saturday 14 January 2017 - last comment - 13:30, Saturday 14 January 2017(33268)
Ops Day Mid Shift Summary
Lost lock in POWER_UP, same as Ed reported. Something seems wrong with the ISS (see previous alog). I have stopped at DC_READOUT while I investigate, but I'm not making much progress. I have left a message with Keita and Peter.
Comments related to this report
patrick.thomas@LIGO.ORG - 12:17, Saturday 14 January 2017 (33269)
It seems similar to alog 30335.
andrew.lundgren@LIGO.ORG - 12:26, Saturday 14 January 2017 (33270)DetChar, PSL
Could it be a problem with the ISS AOM power supply? The summary page shows that the voltage has dropped (plot).
patrick.thomas@LIGO.ORG - 12:47, Saturday 14 January 2017 (33271)
I'm not certain, but my guess is that that signal is actuated on to set the diffracted power.
patrick.thomas@LIGO.ORG - 12:54, Saturday 14 January 2017 (33272)
I think it might be a control loop somewhere, because locking and unlocking it can sometimes make the noise go away.
patrick.thomas@LIGO.ORG - 13:00, Saturday 14 January 2017 (33273)
I watched the diffracted power signal and it got noisy as soon as it got to the DIFF portion of FIND_IR.
patrick.thomas@LIGO.ORG - 13:30, Saturday 14 January 2017 (33274)
Peter is on his way to the site.
H1 PSL
patrick.thomas@LIGO.ORG - posted 11:43, Saturday 14 January 2017 (33266)
ISS diffracted power noisy since last lock loss?
Could this be a sign of a problem related to the lock losses on POWER_UP? See attached.
Non-image files attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 08:12, Saturday 14 January 2017 (33264)
Ops Day Shift Start
TITLE: 01/14 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s 
QUICK SUMMARY:

IFO is down. Going to start an initial alignment.
H1 General
edmond.merilh@LIGO.ORG - posted 08:04, Saturday 14 January 2017 (33262)
Shift Summary - Owl
TITLE: 01/14 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
H1 not locking past increase power.
 
LOG:

17:50 lockloss during re-locking. HAM6 ISI Watchdog trip.

08:15 Not holding on to what looks like a good DRMI lock. Touching up cavity alignments.

12:15 lockloss @ Increase Power/PI mode 27 ringing up sharply/instantly

12:41 lockloss @ increase power. Going to try manually increasing power.

13:22 I manually moved to ADJUST_POWER. My first attempt was to move to 10W. OMC DCPD saturation, then lockloss. I shall try smaller moves on the next lock.

16:00 Finally got to the point where I could try the manual increase. 1W increase broke the lock.

H1 CDS
edmond.merilh@LIGO.ORG - posted 07:01, Saturday 14 January 2017 - last comment - 10:00, Saturday 14 January 2017(33259)
Ops Wiki pages no longer available at this time

Unfortunate turn of events considering the trouble I'm having.

Images attached to this report
Comments related to this report
ryan.blair@LIGO.ORG - 08:20, Saturday 14 January 2017 (33263)

Appears to be a general problem with the authentication system across the Lab/LSC this morning, not just the Ops wiki server.

LIGO-SAML-DS, which finds authentication servers, is unhappy with some very recent metadata changes (example from an affected GC server):

2017-01-14 07:51:56,396 ERROR Metadata signature cannot be verified: xmlsec1 returned non-zero code 1

If you (or CDS admins) need a workaround in the meantime while the central system is worked on, give me a call.

david.barker@LIGO.ORG - 10:00, Saturday 14 January 2017 (33265)

Here are the recent ssl_error logs from cdswiki when I try to open the web service:

 

[Sat Jan 14 09:59:00.588103 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] mod_wsgi (pid=23984): Exception occurred processing WSGI script '/usr/lib/cgi-bin/wsgi/ligo-saml-discovery-service/LIGO_SAML_DiscoveryService.wsgi'.

[Sat Jan 14 09:59:00.588145 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] Traceback (most recent call last):

[Sat Jan 14 09:59:00.588170 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281]   File "/usr/lib/cgi-bin/wsgi/ligo-saml-discovery-service/LIGO_SAML_DiscoveryService.wsgi", line 212, in application

[Sat Jan 14 09:59:00.588206 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281]     if returnURL.split('?')[0] in spDict[entityIDclaimed]:

[Sat Jan 14 09:59:00.588234 2017] [wsgi:error] [pid 23984:tid 140688044332800] [remote 71.84.186.186:38281] KeyError: 'https://lhocds.ligo-wa.caltech.edu/shibboleth-sp'
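The KeyError is consistent with the metadata failure described above: if the signed metadata cannot be verified, the cdswiki service provider presumably never makes it into spDict, so the bare dictionary lookup at line 212 raises instead of returning a clean error. A hypothetical illustration (not the actual discovery-service code):

    # Illustration only; spDict stands in for the dictionary the discovery
    # service builds from the federation metadata, and returnURL is made up.
    spDict = {}   # metadata failed to load/verify, so the SP entry is missing
    entityIDclaimed = 'https://lhocds.ligo-wa.caltech.edu/shibboleth-sp'
    returnURL = 'https://lhocds.ligo-wa.caltech.edu/wiki/'

    allowed = spDict.get(entityIDclaimed)
    if allowed is None:
        # unknown or unloaded SP: report it rather than raising KeyError
        print('Unknown SP entityID: %s' % entityIDclaimed)
    elif returnURL.split('?')[0] in allowed:
        print('returnURL accepted for %s' % entityIDclaimed)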

LHO FMCS
john.worden@LIGO.ORG - posted 06:41, Saturday 14 January 2017 - last comment - 07:35, Saturday 14 January 2017(33258)
LVEA Heater cycling

Bubba turned on heater 3A yesterday to help deal with the cold conditions. You can see in the plot that the heater is cycling, probably on a protective temperature switch.

Nutsinee reported some misbehavior in the LVEA 10-30 Hz seismic activity.

I'm not sure there is any correlation between these two events, but someone might look at this. The heater is likely ~25 kW, so there is some significant EMI from the contactor opening and closing.

Images attached to this report
Comments related to this report
edmond.merilh@LIGO.ORG - 07:35, Saturday 14 January 2017 (33260)

I'm seeing a bit of that too.

Images attached to this comment
H1 General
edmond.merilh@LIGO.ORG - posted 04:30, Saturday 14 January 2017 (33257)
Mid-Shift Summary - Owl

Re-locking has been a daunting task to say the least. PRC gave me a really hard time trying to get it locked. Upon getting it locked and the control signals converged, DRMI alignment still looked pretty terrible. BS alignment was checked repeatedly but to no avail. I then decided to do a complete initial alignment. The IMC became unstable while trying to re-align the arms and I had to clear the WFS and re-align the optics. After letting the loops have the alignment for tuning during a brief break, I continued with the initial alignment. Subsequent locking attempts have been more successful, but not completely so. So far, the farthest I've gotten is DC Readout.

H1 General
edmond.merilh@LIGO.ORG - posted 00:18, Saturday 14 January 2017 (33256)
Shift Summary - Owl Transition
TITLE: 01/14 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.24 μm/s 
QUICK SUMMARY:
H1 was re-locking (unsuccessfully) as I walked in. I put the Observatory Mode to EQ as the lockloss occurred during the seismic event, and I will keep it that way until we relock. EQ bands are back down to 0.03 um/s. Going to get back to it!
H1 General (SEI)
nutsinee.kijbunchoo@LIGO.ORG - posted 00:12, Saturday 14 January 2017 (33255)
EQ Report

4.8M Indonesia

Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No

Magnitude (according to Terramon, USGS, SEISMON): 4.8, 4.8, NA

Location: 56km ENE of Taniwel, Indonesia

Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~06:30 UTC

Lock status? Caused lockloss.

EQ reported by Terramon BEFORE it actually arrived? Yes

 

Note: There were multiple earthquakes around the same time, so it's not necessarily this one in particular that caused the lockloss. But the timing reported by Terramon matches.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 00:06, Saturday 14 January 2017 (33254)
Ops EVE shift summary

TITLE: 01/14 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC

STATE of H1: Earthquake

INCOMING OPERATOR: Ed

SHIFT SUMMARY: The range was awfully low but the IFO stayed locked until an earthquake knocked us out. Otherwise nothing exciting.

LOG:

07:19 Lockloss. There were several earthquakes right around the same time, but the arrival time of the 4.8M Indonesia earthquake is closest to the peak time observed in the control room. The peak reached 0.4 um/s on the 0.03-0.1Hz FOM.

H1 General (DetChar)
nutsinee.kijbunchoo@LIGO.ORG - posted 22:08, Friday 13 January 2017 - last comment - 23:15, Friday 13 January 2017(33250)
Fast range deterioration coincides with funny 10-30Hz seismic activity

About 2 hours into this lock stretch the range started to drop very fast. Around the same time, the 10-30 Hz seismic activity (LVEA vertex) started behaving funny. I don't believe that it's real (no one is out there). I don't see anything funny in the magnetometers. Still looking for more clues. Electronics related?

Images attached to this report
Comments related to this report
krishna.venkateswara@LIGO.ORG - 22:33, Friday 13 January 2017 (33251)

The temperature is dipping a couple of degrees in the LVEA again. This morning, Bubba had to turn on some extra heat in the LVEA, which was close to the time of the drop in range. Are some new fans getting turned on somewhere, producing extra seismic noise?

nutsinee.kijbunchoo@LIGO.ORG - 23:03, Friday 13 January 2017 (33252)

If that's the case shouldn't we see this behavior more often in the past 24 hours?

nutsinee.kijbunchoo@LIGO.ORG - 23:15, Friday 13 January 2017 (33253)

The weird glitches have stopped but the range hasn't recovered. They seem to have about 15 minute spacing.

Images attached to this comment
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 20:55, Friday 13 January 2017 - last comment - 07:49, Saturday 14 January 2017(33249)
Injection Guardian

It's been set to KILL since Jan 4th. Are we leaving it at KILL on purpose?

Comments related to this report
keith.thorne@LIGO.ORG - 07:49, Saturday 14 January 2017 (33261)CDS
This applies to the transient injections (not continuous pulsars).  It is likely that we are trying to accumulate a good stretch of data without such injections since we came back up in January to better measure sensitivity backgrounds.