LHO General
corey.gray@LIGO.ORG - posted 08:25, Saturday 29 October 2016 - last comment - 10:37, Saturday 29 October 2016(30990)
Morning Status

TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 58.0319Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 11mph Gusts, 7mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.46 μm/s
QUICK SUMMARY:

Currently have a lock approaching 2 hrs (started at around 73 Mpc, but it was slowly trending down, and in the last few minutes took a step down to 55 Mpc).

Robert is here and has headed out to the end stations for injection work (the BRS was switched to the VERY_WINDY_NOBRSXY state).   Switching Observatory Mode to:  UNKNOWN (since I don't see an "injection" option).

Cheryl suggested keeping TCSX at 0.4 and moving TCSY from 0.0 to 0.2 for the next change.

Comments related to this report
corey.gray@LIGO.ORG - 10:22, Saturday 29 October 2016 (30995)

Holding off on any changes to TCS since Robert is doing work.  Current lock is approaching ~4 hrs.  The range has been hovering between 50 and 60 Mpc.

corey.gray@LIGO.ORG - 10:37, Saturday 29 October 2016 (30996)

10:28:  Turning BRSX back ON (Robert came by to check in, saying he'll head back out to EY but won't be at EX for a few hours).  Thought about making a change to TCSY, but decided to hold off so we don't risk a lockloss; want to give Robert his time.

He plans to transition EY to do possible black glass measurements at viewports (per WP 6274).

H1 General (AOS, Lockloss, OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 08:14, Saturday 29 October 2016 (30989)
Ops Owl Summary

State of H1: Locked in NLN, TCSX at 0.4W, TCSY at 0W

Incoming Operator: Corey

On Site: Robert

Possible issues that might affect locking / locking stability:

The BS optical lever is glitching in sum and yaw, badly enough that it might have already killed a lock.

H1 AOS (AOS, OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 08:08, Saturday 29 October 2016 - last comment - 12:44, Saturday 29 October 2016(30988)
BS Optical lever - strange behavior - glitchy in SUM and YAW but PITCH doesn't look too bad

Might have broken the lock 2.5 hours ago, and could be a problem today.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 12:44, Saturday 29 October 2016 (30997)

While the BS oplev can certainly cause trouble during lock acquisition, once we get to Engage_DRMI_ASC that oplev's feedback is turned off, so it is unlikely that it caused a lockloss from any state after DRMI is locked.

H1 General (DetChar, OpsInfo, TCS)
cheryl.vorvick@LIGO.ORG - posted 06:47, Saturday 29 October 2016 (30987)
H1 in semi-Observe as of 13:42 UTC

State of H1: locked in NLN

Activities / Changes

H1 AOS
cheryl.vorvick@LIGO.ORG - posted 06:08, Saturday 29 October 2016 - last comment - 06:13, Saturday 29 October 2016(30984)
Looks like TCSY 0.4W isn't a good thing

It looks like the range started to drop about 20 minutes after the increase in TCSY from 0 to 0.4 W, and continued dropping until lockloss. Plot attached.

Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 06:13, Saturday 29 October 2016 (30985)

I've turned the pre-heating off on TCSY. If anything, the optic is overheated and may need to cool before relocking is stable.

H1 SUS
cheryl.vorvick@LIGO.ORG - posted 05:47, Saturday 29 October 2016 (30983)
TMS alignment slider ramping times

Set to 2 seconds and saved in SDF.

H1 AOS (AOS)
cheryl.vorvick@LIGO.ORG - posted 04:26, Saturday 29 October 2016 (30982)
TCS changes

11:23 UTC: changed TCSY from 0 W to 0.4 W. TCSX was at 0.2 W and remains at 0.2 W. Next change in about 90 minutes.

H1 General (DetChar, OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 03:25, Saturday 29 October 2016 (30981)
H1 in semi-Observe, 10:21UTC

State of H1: Nominal Low Noise

Activities that will continue:

H1 ISC
sheila.dwyer@LIGO.ORG - posted 02:51, Saturday 29 October 2016 (30979)
some work on low frequencies tonight

I did a few more things tonight

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 22:35, Friday 28 October 2016 (30978)
DBB coherences

Now that we have the DBB plugged in again, we can look again at the coherences of the QPDs.  The screenshot shows coherences with DARM and SRCL for the 200 W beam (opening the 35 W shutter unlocks the IFO).

We have small coherences with DARM just below 1kHz and around 4.2kHz, but not otherwise.  SRCL does have more coherence with one of the QPDs below 100Hz.

Images attached to this report
H1 GRD (INJ)
evan.goetz@LIGO.ORG - posted 20:06, Friday 28 October 2016 - last comment - 09:00, Saturday 29 October 2016(30975)
Gain of PINJX_TRANSIENT filter bank
I think that the nominal setting of the CAL-PINJX_TRANSIENT filter bank gain is supposed to be zero. When a transient injection is imminent, then the gain is ramped to 1.0, the injection happens, and then the gain is ramped to zero. However, the SDF safe set point is 1.0. Therefore, I am setting the gain to 0 and accepting the change in the SDF safe file.
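
For reference, the intended sequence is roughly the following. This is only a sketch, not the actual INJ_TRANS code; the use of ezca's LIGOFilter interface, the 1 s ramp time, and the injection helper name are my assumptions.

    # Sketch of the intended gain sequence around a transient injection.
    # 'ezca' is the guardian-provided channel access object; ramp time is illustrative.
    inj = ezca.get_LIGOFilter('CAL-PINJX_TRANSIENT')
    inj.ramp_gain(1.0, ramp_time=1, wait=True)   # open the injection path
    perform_transient_injection()                # hypothetical placeholder for the actual injection call
    inj.ramp_gain(0.0, ramp_time=1, wait=True)   # return to the nominal (SDF-safe) gain of 0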
Comments related to this report
keith.thorne@LIGO.ORG - 06:26, Saturday 29 October 2016 (30986)CAL, GRD
The existing code in the INJ_TRANS Guardian script does indeed do this.  
david.barker@LIGO.ORG - 09:00, Saturday 29 October 2016 (30992)

If guardian is controlling the gain, perhaps SDF shouldn't be monitoring it.

H1 ISC (CDS, GRD, ISC)
jenne.driggers@LIGO.ORG - posted 20:32, Monday 24 October 2016 - last comment - 09:10, Saturday 29 October 2016(30831)
cdsutils avg giving weird results in guardian??

cdsutils.avg() in guardian sometimes gives us very weird values.

We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM.  At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down).  Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.
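
For reference, the call is along these lines. This is a sketch, not the actual Prep_TR_CARM code; the 5 s averaging time and the INMON channel names are assumptions on my part (only the OFFSET channels are quoted in this thread), and the sign convention follows the "-1*INMON" rule noted further down.

    import cdsutils

    # Average the dark (no-IR) QPD sums for a few seconds, then write offsets
    # that cancel them. Durations and exact structure are illustrative.
    dark_x = cdsutils.avg(5, 'H1:LSC-TR_X_QPD_B_SUM_INMON')
    dark_y = cdsutils.avg(5, 'H1:LSC-TR_Y_QPD_B_SUM_INMON')
    ezca['LSC-TR_X_QPD_B_SUM_OFFSET'] = -dark_x
    ezca['LSC-TR_Y_QPD_B_SUM_OFFSET'] = -dark_y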

On the next lock, it correctly got a value for TransX (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write exactly the same value that was already there (EPICS writes aren't logged if they don't change the value).

So, why did 3 different cdsutils averages all return a value of 751.242126465?

This isn't the first time that this has happened.  Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week. 

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 21:01, Monday 24 October 2016 (30832)

This is definitely a very strange behavior.  I have no idea why that would happen.

As with most things guardian, it's good to try to get independent verification of the effect.  If you make the same cdsutils avg calls from the command line do you get similarly strange results?  Could the NDS server be getting into a weird state?

jenne.driggers@LIGO.ORG - 21:11, Monday 24 October 2016 (30833)

On the one hand, it works just fine right now in a guardian shell.  On the other hand, it also worked fine for the latest acquisition.  So, no conclusion at this time.

jenne.driggers@LIGO.ORG - 01:03, Tuesday 25 October 2016 (30838)OpsInfo

This happened again, but this time the numbers were not identical.  I have added a check to the Prep_TR_CARM state: if the absolute value of either offset is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), notify and don't move on.
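
In sketch form the check looks something like this. It is not the real state code; I am assuming it lives in the state's run() method, and 'ezca' and 'notify' are the objects guardian provides to node code. The OFFSET channel names are the ones quoted below.

    from guardian import GuardState

    class PREP_TR_CARM(GuardState):
        def run(self):
            # hedged sketch of the added sanity check, not the actual Prep_TR_CARM code
            for arm in ['X', 'Y']:
                if abs(ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm]) > 5:
                    notify('Check Trans QPD offsets!')
                    return False   # don't advance until the offset looks sane
            return True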

Operators:  If you see the notification Check Trans QPD offsets! then look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET.  If you do an ezca read on those and the numbers are huge, you can "cheat" and try +0.3 for X and +0.2 for Y, then go back to trying to find IR.
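
From a guardian shell, that check and "cheat" look roughly like the lines below (a sketch only; the threshold of 5 matches the new check, and the fallback values are the ones quoted above).

    # if the offsets came out huge, put in the typical by-hand values
    if abs(ezca['LSC-TR_X_QPD_B_SUM_OFFSET']) > 5:
        ezca['LSC-TR_X_QPD_B_SUM_OFFSET'] = 0.3
    if abs(ezca['LSC-TR_Y_QPD_B_SUM_OFFSET']) > 5:
        ezca['LSC-TR_Y_QPD_B_SUM_OFFSET'] = 0.2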

sheila.dwyer@LIGO.ORG - 21:10, Friday 28 October 2016 (30976)OpsInfo

This happened again today, to Jim and Cheryl, and caused multiple locklosses.

I've commented out the averaging of the offsets in the guardian. 

We used to not do this averaging and just rely on the dark offsets not changing.  Maybe we could go back to that.

 

For operators, until this is fixed you might need to set these by hand:

If you are having trouble with FIND IR, this is something to check.  From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen.  Then click on the R INput button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets.  (If there is no IR in the arms, the offset should be about -1*INMON.)
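
The same check can also be done numerically rather than from the screen. A sketch only: the INMON channel names and the tolerance of 1 count are my assumptions, based on the -1*INMON rule above.

    # with no IR in the arms, each OFFSET should roughly cancel its INMON
    for arm in ['X', 'Y']:
        inmon = ezca['LSC-TR_%s_QPD_B_SUM_INMON' % arm]
        offset = ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm]
        flag = 'ok' if abs(offset + inmon) < 1 else 'suspect'
        print('%s arm: INMON=%.3f OFFSET=%.3f %s' % (arm, inmon, offset, flag))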

Images attached to this comment
david.barker@LIGO.ORG - 09:10, Saturday 29 October 2016 (30994)

Opened as high priority fault in FRS:

ticket 6559

H1 GRD
sheila.dwyer@LIGO.ORG - posted 15:15, Monday 24 October 2016 - last comment - 09:06, Saturday 29 October 2016(30815)
ezca connection error

Ed, Sheila

Are ezca connection errors becoming more frequent?  Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).

The first one was from ISC_LOCK, the screenshot is attached. 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 18:15, Monday 24 October 2016 (30828)

Happened again, but for a different channel: H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON (Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection-error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep digging and see what I find.

Relevant ISC_LOCK log:

2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z   File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z   File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z     if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.

sheila.dwyer@LIGO.ORG - 21:12, Friday 28 October 2016 (30977)

It happened again just now. 

Images attached to this comment
david.barker@LIGO.ORG - 09:06, Saturday 29 October 2016 (30993)

Opened FRS on this, marked a high priority fault.

ticket 6558
