Reports until 11:40, Sunday 30 October 2016
LHO General
corey.gray@LIGO.ORG - posted 11:40, Sunday 30 October 2016 - last comment - 17:53, Sunday 30 October 2016(31011)
Mid Shift Status

Whoops---just dropped out of lock at 18:40 UTC (11:40 am PST).

Comments related to this report
corey.gray@LIGO.ORG - 11:49, Sunday 30 October 2016 (31012)

Taking IMC to MISALIGNING state so Robert can do some non-locked measurements.

To prevent splashing of light,

  • CLOSED ISCTEX green beam & ISCTEY fiber beam shutters.  
  • ISCTEY green beam said it was already CLOSED [I toggled it Open & Closed for good measure & to observe changes].
  • Didn't touch the ISCTEX fiber beam.
daniel.sigg@LIGO.ORG - 17:53, Sunday 30 October 2016 (31015)

No fiber shutters; EY swapped, see alog 30624.

LHO General
corey.gray@LIGO.ORG - posted 08:56, Sunday 30 October 2016 (31009)
Ops Morning Transition

TITLE: 10/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s
QUICK SUMMARY:

H1 General
cheryl.vorvick@LIGO.ORG - posted 07:57, Sunday 30 October 2016 - last comment - 12:19, Sunday 30 October 2016(31008)
Ops Owl Summary:

State of H1: relocking

Shift Details / Activities:

Notifications:

Comments related to this report
terra.hardwick@LIGO.ORG - 12:19, Sunday 30 October 2016 (31013)
I would advise against making the PI vocal notification repeat; this would probably increase the stress level while attempting to damp a mode, and in general we want to keep notification noise to a minimum. Remember that these notifications are logged right on the screen, so missed notifications (and the PI StripTool screen) can be checked quickly for status updates.
H1 AOS (DetChar, OpsInfo, TCS)
cheryl.vorvick@LIGO.ORG - posted 06:18, Sunday 30 October 2016 (31007)
H1 in NLN, TCSY set to 0.4W, TCSX remains at 0.2W

State of H1: relocked and in Nominal Low Noise, Range is 68 Mpc

Activities:

H1 General (OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 03:43, Sunday 30 October 2016 - last comment - 04:09, Sunday 30 October 2016(31005)
Ops Update: 10:40UTC
Comments related to this report
cheryl.vorvick@LIGO.ORG - 04:09, Sunday 30 October 2016 (31006)Lockloss, OpsInfo

11:08 UTC - bad click, broke lock; I was making slow progress with the bounce and roll modes.

H1 GRD (GRD, OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 02:20, Sunday 30 October 2016 - last comment - 13:28, Sunday 30 October 2016(31004)
ISC_LOCK guardian doesn't set ALIGN_IFO to SET_SUS_FOR_ALS?
Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 13:28, Sunday 30 October 2016 (31014)

We chose to set the ALIGN_IFO node ready for locking the arms in LOCKING_ARMS_GREEN. Since the DOWN state is mostly stuff to stop actuation when we lose lock, we don't touch the ALIGN_IFO node in that state. Once you get to LOCKING_ARMS_GREEN, immediately after READY, the ISC_LOCK guardian will set the ALIGN_IFO node appropriately, as sketched below.
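
A minimal sketch of that hand-off, assuming guardian's GuardState/NodeManager interface (state and node names are taken from this log; this is not the production ISC_LOCK code):

    from guardian import GuardState, NodeManager

    # ALIGN_IFO is a node managed by ISC_LOCK.
    nodes = NodeManager(['ALIGN_IFO'])

    class LOCKING_ARMS_GREEN(GuardState):
        def main(self):
            # Assigning to a managed node issues a state request to it.
            nodes['ALIGN_IFO'] = 'SET_SUS_FOR_ALS'

        def run(self):
            # Hold in this state until ALIGN_IFO reaches the requested state.
            return nodes['ALIGN_IFO'].arrived

    class DOWN(GuardState):
        def main(self):
            # DOWN only stops actuation; it deliberately leaves ALIGN_IFO alone.
            pass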

H1 General (OpsInfo, SEI)
cheryl.vorvick@LIGO.ORG - posted 00:54, Sunday 30 October 2016 - last comment - 01:39, Sunday 30 October 2016(31002)
Owl shift begins - big EQs in Italy, no locking

State of H1: SEI in LARGE_EQ_NOBRSXY, ground motion increased to 10 μm/s

Help: JimW changed SEI settings to EARTH_QUAKE_V2, and explained the use of LARGE_EQ if ground motion got worse

Details:

Comments related to this report
cheryl.vorvick@LIGO.ORG - 01:39, Sunday 30 October 2016 (31003)OpsInfo, SEI
  • 8:37UTC - back to EARTH_QUAKE_V2
  • running through Initial Alignment
H1 General
jim.warner@LIGO.ORG - posted 00:03, Sunday 30 October 2016 (31000)
Shift Summary

TITLE: 10/29 Day Shift: 23:00-7:00 UTC , all times posted in UTC
STATE of H1: Relocking Earthquake
SHIFT SUMMARY:
Robert and Jenne both left partway through the shift. PI mode 2 rang up (easily handled by adjusting the damping phase) and 2 violin modes on ETMY rang up (508.585 & 508.661 Hz), but it took me 20+ minutes to notice, because there is no alarm system and the monitor screens are kind of unclear. Seems like we could take some pages from the PI book to help with that. I also changed the TCSY power from 0 to 0.2 W per Kiwamu's request.

LHO General
corey.gray@LIGO.ORG - posted 16:04, Saturday 29 October 2016 (30991)
Ops Day Shift Summary

TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
H1 was locked the entire shift. Robert was at the end stations (mostly EY) performing measurements. Jenne worked on a filter for the SRCL feedforward.

Noticed the HAM2 HEPI has an excitation noted in the CDS Overview---Jim says this might be from Robert.

DIAG_MAIN is noting IM3 P&Y are out of nominal range (can talk to Cheryl about this).

LOG:

H1 TCS
corey.gray@LIGO.ORG - posted 13:56, Saturday 29 October 2016 (30998)
TCSy Chiller Topped Off at ~20:40 UTC (1:40pm PST)

Last time I was on shift, we were filling the TCSy Chiller fairly frequently. Went out to check on both chillers.

LHO General
corey.gray@LIGO.ORG - posted 08:25, Saturday 29 October 2016 - last comment - 10:37, Saturday 29 October 2016(30990)
Morning Status

TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 58.0319Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 11mph Gusts, 7mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.46 μm/s
QUICK SUMMARY:

Currently have a lock approaching 2 hrs (started at around 73 Mpc, but was slowly trending down, and in the last few minutes took a step down to 55 Mpc).

Robert is here and has headed out to the end stations for injection work (the BRS was switched to the VERY_WINDY_NOBRSXY state).   Switching Observatory Mode to:  UNKNOWN (since I don't see an "injection" option).

Cheryl gave me a suggestion of keeping TCSx at 0.4 & moving TCSy from 0.0 to 0.2 for the next change.

Comments related to this report
corey.gray@LIGO.ORG - 10:22, Saturday 29 October 2016 (30995)

Holding off on any changes to TCS since Robert is doing work. Current lock approaching ~4 hrs. Have been hovering between 50-60 Mpc.

corey.gray@LIGO.ORG - 10:37, Saturday 29 October 2016 (30996)

10:28: Turning BRSx back ON (Robert came to check in, saying he'll head back out to EY but won't be at EX for a few hours). Thought of making a change on TCSy, but decided to hold off so we don't possibly cause a lockloss---want to give Robert his time.

He plans to transition EY to do possible black glass measurements at viewports (per WP 6274).

H1 AOS (AOS, OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 08:08, Saturday 29 October 2016 - last comment - 12:44, Saturday 29 October 2016(30988)
BS Optical lever - strange behavior - glitchy in SUM and YAW but PITCH doesn't look too bad

Might have broken the lock 2.5 hours ago, and could be a problem today.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 12:44, Saturday 29 October 2016 (30997)

While the BS oplev can certainly cause trouble during lock acquisition, once we get to Engage_DRMI_ASC that oplev's feedback is turned off, so it is unlikely that it caused a lockloss from any state after DRMI is locked.

H1 GRD (INJ)
evan.goetz@LIGO.ORG - posted 20:06, Friday 28 October 2016 - last comment - 09:00, Saturday 29 October 2016(30975)
Gain of PINJX_TRANSIENT filter bank
I think that the nominal setting of the CAL-PINJX_TRANSIENT filter bank gain is supposed to be zero. When a transient injection is imminent, then the gain is ramped to 1.0, the injection happens, and then the gain is ramped to zero. However, the SDF safe set point is 1.0. Therefore, I am setting the gain to 0 and accepting the change in the SDF safe file.
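
For context, a minimal sketch of that ramp sequence, assuming ezca's LIGOFilter interface (this is not the actual INJ_TRANS guardian code):

    import time

    def do_transient_injection(ezca, duration=10.0, ramp=1.0):
        """Ramp the PINJX transient gain up for an injection, then back to 0."""
        fm = ezca.get_LIGOFilter('CAL-PINJX_TRANSIENT')
        fm.ramp_gain(1.0, ramp_time=ramp, wait=True)  # open the injection path
        time.sleep(duration)                          # let the injection play out
        fm.ramp_gain(0.0, ramp_time=ramp, wait=True)  # restore the nominal gain of 0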
Comments related to this report
keith.thorne@LIGO.ORG - 06:26, Saturday 29 October 2016 (30986)CAL, GRD
The existing code in the INJ_TRANS Guardian script does indeed do this.  
david.barker@LIGO.ORG - 09:00, Saturday 29 October 2016 (30992)

If guardian is controlling the gain, perhaps SDF shouldn't be monitoring it.

H1 TCS
jeffrey.bartlett@LIGO.ORG - posted 06:49, Thursday 27 October 2016 - last comment - 23:24, Saturday 29 October 2016(30920)
TCS Laser Noise

Kiwamu asked the ops to run some TCS laser noise measurements.

SETUP:

Started the run: 

    TCS X - Initial Power = 0.2 W                    TCS Y - Initial Power = 0.0 W

Time   TCS X Power   TCS Y Power
03:00  0.3 W         0.1 W
04:30  0.4 W         0.2 W

At 05:08 we lost lock due to a Mag 5.8 EQ in Alaska.

Comments related to this report
travis.sadecki@LIGO.ORG - 15:58, Thursday 27 October 2016 (30943)

I only managed to get one more data point for both arms:

15:30 UTC: TCSx at 0.5 W for 90 minutes; TCSy at 0.3 W for 90 minutes.

cheryl.vorvick@LIGO.ORG - 03:04, Friday 28 October 2016 (30952)OpsInfo

Oct 28, 10:03 UTC: TCSX power set to 0.6, TCSY power set to 0.4

cheryl.vorvick@LIGO.ORG - 04:32, Friday 28 October 2016 (30953)OpsInfo

Oct 28, 11:32 UTC: changed TCSX from 0.6 to 0.7, changed TCSY from 0.3 to 0.4

cheryl.vorvick@LIGO.ORG - 06:28, Friday 28 October 2016 (30954)

Oct 28, 13:27 UTC: TCSX raised to 0.8, TCSY raised to 0.5

cheryl.vorvick@LIGO.ORG - 07:00, Friday 28 October 2016 (30955)

As the range dropped and the arm signals got noisier, I feared H1 was about to lose lock, so I touched up the TMSX and TMSY alignment. Hopefully this didn't invalidate the data for the TCS analysis.

Tweaks by TMS:

  • tmsx pitch 12:16UTC, 12:27UTC
  • tmsx yaw 9:56UTC
  • tmsy pitch 12:29UTC, 12:47UTC, 13:39UTC
  • tmsy yaw 12:48UTC,13:30 to 13:38UTC

Tweaks and TCS changes by timeline:

  • 9:56UTC
  • 10:03UTC - changed TCS
  • 11:32UTC - changed TCS
  • 12:16UTC
  • 12:27UTC
  • 12:29UTC
  • 12:47UTC
  • 12:48UTC
  • 13:27UTC - changed TCS
  • 13:30 to 13:38UTC

travis.sadecki@LIGO.ORG - 15:26, Friday 28 October 2016 (30967)

15:05 UTC: TCSx at 0.9 W for 45 min; TCSy at 0.6 W for 45 min.

jim.warner@LIGO.ORG - 23:24, Saturday 29 October 2016 (30999)

At 4:43 UTC today (10/30 UTC, still 10/29 PST), after Robert left, I changed TCSY to 0.2 W per Cheryl's suggestion left with Corey. TCSX is still at 0.4 W.

H1 ISC (CDS, GRD, ISC)
jenne.driggers@LIGO.ORG - posted 20:32, Monday 24 October 2016 - last comment - 09:10, Saturday 29 October 2016(30831)
cdsutils avg giving weird results in guardian??

cdsutils.avg() in guardian sometimes gives us very weird results.

We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM.  At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down).  Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.

On the next lock, it correctly got a value for TransX (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write the exact same value that was already there (EPICS writes aren't logged if they don't change the value).

So, why did 3 different cdsutils averages all return a value of 751.242126465?

This isn't the first time that this has happened.  Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week. 

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 21:01, Monday 24 October 2016 (30832)

This is definitely a very strange behavior.  I have no idea why that would happen.

As with most things guardian, it's good to try to get independent verification of the effect.  If you make the same cdsutils avg calls from the command line do you get similarly strange results?  Could the NDS server be getting into a weird state?
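
For example, one could cross-check from a guardian shell or python prompt with something like this (channel names are illustrative, and avg()'s exact signature should be checked against the cdsutils documentation):

    import cdsutils

    # Average each trans QPD sum over ~10 s of data.
    x = cdsutils.avg(10, 'H1:LSC-TR_X_QPD_B_SUM_OUTPUT')
    y = cdsutils.avg(10, 'H1:LSC-TR_Y_QPD_B_SUM_OUTPUT')

    # Two independent channels returning bit-identical averages would
    # reproduce the bug described above.
    print(x, y)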

jenne.driggers@LIGO.ORG - 21:11, Monday 24 October 2016 (30833)

On the one hand, it works just fine right now in a guardian shell.  On the other hand, it also worked fine for the latest acquisition.  So, no conclusion at this time.

jenne.driggers@LIGO.ORG - 01:03, Tuesday 25 October 2016 (30838)OpsInfo

This happened again, but this time the numbers were not identical. I have added a check to the Prep_TR_CARM state: if the absolute value of either offset is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), then notify and don't move on.

Operators: If you see the notification "Check Trans QPD offsets!", look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET. If you do an ezca read on that number and it's giant, you can "cheat" and try +0.3 for X and +0.2 for Y, then go back to trying to find IR.
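
A minimal sketch of the kind of check described above (not the actual Prep_TR_CARM code; ezca and notify are provided by the guardian runtime, and the threshold and channel names come from this log entry):

    from guardian import GuardState

    class PREP_TR_CARM(GuardState):
        def run(self):
            # Guardian only advances past this state once run() returns True.
            for chan in ('LSC-TR_X_QPD_B_SUM_OFFSET',
                         'LSC-TR_Y_QPD_B_SUM_OFFSET'):
                # Normal offsets are ~0.2-0.3; the bad averages were in the
                # hundreds.
                if abs(ezca[chan]) > 5:
                    notify('Check Trans QPD offsets!')
                    return False  # hold here until the offset is fixed
            return True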

sheila.dwyer@LIGO.ORG - 21:10, Friday 28 October 2016 (30976)OpsInfo

This happened again to Jim and Cheryl today, and caused multiple locklosses.

I've commented out the averaging of the offsets in the guardian. 

We used to not do this averaging, and just rely on the dark offsets not changing. Maybe we could go back to that.

For operators, until this is fixed you might need to set these by hand:

If you are having trouble with FIND IR, this is something to check. From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen. Then click on the INput button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets. (If there is no IR in the arms, the offset should be about -1*INMON; see the sketch below.)
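
Equivalently, from a guardian shell, something like the following (channel names are assumed from the filter-module naming convention; only do this when there is no IR in the arms):

    # Set each QPD SUM offset to cancel its current input (-1 * INMON).
    for arm in ('X', 'Y'):
        inmon = ezca['LSC-TR_%s_QPD_B_SUM_INMON' % arm]
        ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm] = -1 * inmon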

Images attached to this comment
david.barker@LIGO.ORG - 09:10, Saturday 29 October 2016 (30994)

Opened as high priority fault in FRS:

ticket 6559

H1 GRD
sheila.dwyer@LIGO.ORG - posted 15:15, Monday 24 October 2016 - last comment - 09:06, Saturday 29 October 2016(30815)
ezca connection error

Ed, Sheila

Are ezca connection errors becoming more frequent?  Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).

The first one was from ISC_LOCK, the screenshot is attached. 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 18:15, Monday 24 October 2016 (30828)

Happened again, but for a different channel: H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON (Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection-error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep digging to see what I find.

Relevant ISC_LOCK log:

2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z   File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z   File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z     if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.

sheila.dwyer@LIGO.ORG - 21:12, Friday 28 October 2016 (30977)

It happened again just now. 

Images attached to this comment
david.barker@LIGO.ORG - 09:06, Saturday 29 October 2016 (30993)

Opened FRS on this, marked a high priority fault.

ticket 6558
