Whoops---just dropped out of lock at 18:40 (11:40am).
TITLE: 10/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY:
State of H1: relocking
Shift Details / Activities:
Notifications:
State of H1: relocked and in Nominal Low Noise, Range is 68 Mpc
Activities:
11:08 UTC - bad click, broke lock; was making slow progress with bounce and roll
We chose to have the ALIGN_IFO node set up for locking the arms in LockingArmsGreen. Since the DOWN state is mostly about stopping actuation when we lose lock, we don't touch the ALIGN_IFO node there. Once you get to LockingArmsGreen, immediately after READY, the ISC_LOCK guardian will set the ALIGN_IFO node appropriately.
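Roughly, that pattern looks like the sketch below. This is not the actual ISC_LOCK code; the NodeManager import, the item-assignment request, and the requested state name are all assumptions used only to illustrate the idea.

# Sketch only -- DOWN deliberately leaves ALIGN_IFO alone (it only stops
# actuation); the request is made in LockingArmsGreen instead, right after READY.
from guardian import GuardState, NodeManager   # import path assumed

nodes = NodeManager(['ALIGN_IFO'])

class LOCKING_ARMS_GREEN(GuardState):
    def main(self):
        nodes['ALIGN_IFO'] = 'SET_SUS_FOR_ALS'   # hypothetical requested state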
State of H1: SEI in LARGE_EQ_NOBRSXY, ground motion increased to 10 μm/s
Help: JimW changed SEI settings to EARTH_QUAKE_V2, and explained the use of LARGE_EQ if ground motion got worse
Details:
TITLE: 10/29 Day Shift: 23:00-7:00 UTC , all times posted in UTC
STATE of H1: Relocking Earthquake
SHIFT SUMMARY:
Robert and Jenne both left partway through the shift. PI mode 2 rang up (which was easily handled by adjusting damping phase) and 2 violin modes on ETMY rang up (508.585 Hz & 508.661 Hz), but it took me 20+ minutes to realize, because there is no alarm system and the monitor screens are kind of unclear. Seems like we could take some pages from the PI book to help with that. I also changed the TCSY power from 0 W to 0.2 W per Kiwamu's request.
TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
H1 was locked the entire shift. Robert was at the end stations (mostly EY) performing measurements. Jenne worked on a filter for SRCL feedforward.
Noticed the HAM2 HEPI has an excitation noted in the CDS Overview---Jim says this might be from Robert.
DIAG_MAIN is noting IM3 P&Y are out of nominal range (can talk to Cheryl about this).
LOG:
Last time I was on shift, we were filling the TCSy chiller fairly frequently. Went out to check on both chillers.
TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 58.0319 Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 7mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.46 μm/s
QUICK SUMMARY:
Currently have a lock approaching 2 hrs (started at around 73 Mpc, but was slowly trending down, and in the last few minutes took a step down to 55 Mpc).
Robert is here and has headed out to the end stations for injection work (the BRS was switched to the VERY_WINDY_NOBRSXY state). Switching Observatory Mode to: UNKNOWN (since I don't see an "injection" option).
Cheryl gave me a suggestion of keeping TCSx at 0.4 W & moving TCSy from 0.0 W to 0.2 W for the next change.
Holding off on any changes to TCS since Robert is doing work. Current lock approaching ~4 hrs. Have been hovering between 50-60 Mpc.
10:28: Turning BRSx back ON (Robert came to check in, saying he'll head back out to EY, but won't be at EX for a few hours). Thought of making a change on TCSy, but decided to hold off so we don't possibly cause a lockloss---want to give Robert his time.
He plans to transition EY to do possible black glass measurements at viewports (per WP 6274).
Might have broken the lock 2.5 hours ago, and could be a problem today.
While the BS oplev can certainly cause trouble during lock acquisition, once we get to Engage_DRMI_ASC that oplev's feedback is turned off, so it is unlikely that it caused a lockloss from any state after DRMI is locked.
I think that the nominal setting of the CAL-PINJX_TRANSIENT filter bank gain is supposed to be zero. When a transient injection is imminent, the gain is ramped to 1.0, the injection happens, and then the gain is ramped back to zero. However, the SDF safe set point is 1.0. Therefore, I am setting the gain to 0 and accepting the change in the SDF safe file.
The existing code in the INJ_TRANS Guardian script does indeed do this.
If guardian is controlling the gain, perhaps SDF shouldn't be monitoring it.
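For illustration, the ramp pattern described above might look roughly like this in guardian code. This is a sketch only, not the actual INJ_TRANS script; ezca here is the instance guardian provides, the ramp times are placeholders, and the injection call itself is elided.

# sketch of the nominal-zero / ramp-up / inject / ramp-down pattern
transient = ezca.get_LIGOFilter('CAL-PINJX_TRANSIENT')

transient.ramp_gain(1.0, ramp_time=5, wait=True)   # injection imminent

# ... the transient injection itself happens here ...

transient.ramp_gain(0.0, ramp_time=5, wait=True)   # back to the nominal zero gain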
Kiwamu asked the ops to run some TCS laser noise measurements.
SETUP:
Started the run:
TCS X - Initial Power = 0.2 W; TCS Y - Initial Power = 0.0 W
Time (UTC) | TCSX Power | Time (UTC) | TCSY Power
03:00 | 0.3 W | 03:00 | 0.1 W
04:30 | 0.4 W | 04:30 | 0.2 W
At 05:08 lost lock due to a Mag 5.8 EQ in Alaska.
I only managed to get one more data point for both arms:
15:30 UTC: TCSx at 0.5 W for 90 minutes.
TCSy at 0.3 W for 90 minutes.
Oct 28, 10:03 UTC: TCSX power set to 0.6 W, TCSY power set to 0.4 W
Oct 28, 11:32 UTC: changed X from 0.6 W to 0.7 W, changed Y from 0.3 W to 0.4 W
Oct 28, 13:27 UTC: TCSX raised to 0.8 W, TCSY raised to 0.5 W
As range dropped and arm signals got noisier, I feared H1 was about to lose lock, and touched up TMSX and TMSY alignment. Hopefully this didn't invalidate the data for TCS analysis.
Tweaks by TMS:
Tweaks and TCS changes by timeline:
15:05 UTC: TCSx at 0.9 W for 45 min.
TCSy at 0.6 W for 45 min.
At 4:43 UTC today (10/30 UTC, still 10/29 PST), after Robert left, I changed TCSY to 0.2 W, per Cheryl's suggestion left with Corey. TCSX is still at 0.4 W.
cdsutils.avg() in guardian is sometimes giving us very weird values.
We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM. At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down). Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.
On the next lock, it correctly got a value for the TransX QPD (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write the exact same value that was already there (epics writes aren't logged if they don't change the value).
So, why did 3 different cdsutils averages all return a value of 751.242126465?
This isn't the first time that this has happened. Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week.
This is definitely a very strange behavior. I have no idea why that would happen.
As with most things guardian, it's good to try to get independent verification of the effect. If you make the same cdsutils avg calls from the command line, do you get similarly strange results? Could the NDS server be getting into a weird state?
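For example, an independent check from a regular python session outside guardian could look like the sketch below, assuming the cdsutils.avg(duration, channel) interface; the channel names are only illustrative, use whichever channels Prep_TR_CARM actually averages.

import cdsutils

# average ~10 seconds of data on each trans QPD sum and compare the results;
# identical numbers on unrelated channels would point at the NDS/avg side
for chan in ['H1:LSC-TR_X_QPD_B_SUM_INMON', 'H1:LSC-TR_Y_QPD_B_SUM_INMON']:
    val = cdsutils.avg(10, chan)
    print('%s  %.9f' % (chan, val))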
On the one hand, it works just fine right now in a guardian shell. On the other hand, it also worked fine for the latest acquisition. So, no conclusion at this time.
This happened again, but this time the numbers were not identical. I have added a check to the Prep_TR_CARM state: if the absolute value of either offset is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), then notify and don't move on.
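Very roughly, the added check amounts to something like the sketch below. It is not the literal Prep_TR_CARM code; ezca and notify() are provided by guardian at runtime, and the channel names and structure of the real state differ.

from guardian import GuardState

class PREP_TR_CARM(GuardState):
    def main(self):
        # ... averaging of the trans QPD dark offsets would happen here ...
        pass

    def run(self):
        # ezca and notify are injected by guardian at runtime
        for chan in ['LSC-TR_X_QPD_B_SUM_OFFSET', 'LSC-TR_Y_QPD_B_SUM_OFFSET']:
            # sane offsets are roughly 0.2-0.3; the bad values have been several hundred
            if abs(ezca[chan]) > 5:
                notify('Check Trans QPD offsets!')
                return False    # hold here instead of moving on
        return True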
Operators: If you see the notification "Check Trans QPD offsets!", then look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET. If you do an ezca read on that number and it's giant, you can "cheat" and try +0.3 for X and +0.2 for Y, then go back to trying to find IR.
This happened again to Jim and Cheryl today and caused multiple locklosses.
I've commented out the averaging of the offsets in the guardian.
We used to not do this averaging, and just rely on the dark offsets not changing. Maybe we could go back to that.
For operators, until this is fixed you might need to set these by hand:
If you are having trouble with FIND IR, this is something to check. From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen. Then click on the R INput button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets. (If there is no IR in the arms, the offset should be about -1*INMON.)
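The same check can be done from an ezca/python session instead of the MEDM screen. The sketch below just applies the -1*INMON rule above (only valid with no IR in the arms) when the current offset is clearly bogus; the Ezca() setup and channel names follow the ones quoted above but are not guaranteed verbatim.

from ezca import Ezca
ezca = Ezca()   # assumes the H1 prefix is picked up from the environment

for arm in ['X', 'Y']:
    inmon  = ezca['LSC-TR_%s_QPD_B_SUM_INMON' % arm]
    offset = ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm]
    print('TR%s: INMON = %.3f  OFFSET = %.3f' % (arm, inmon, offset))
    # with no IR in the arms a reasonable offset is about -1*INMON;
    # only overwrite if the current value is clearly bad (hundreds)
    if abs(offset) > 5:
        ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm] = -1.0 * inmon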
Opened as high priority fault in FRS:
Ed, Sheila
Are ezca connection errors becoming more frequent? Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).
The first one was from ISC_LOCK, the screenshot is attached.
Happened again, but for a different channel: H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON (Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep trying to dig and see what I find.
Relevant ISC_LOCK log:
2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
It happened again just now.
Opened FRS on this, marked a high priority fault.
Taking IMC to MISALIGNING state so Robert can do some non-locked measurements.
To prevent splashing of light: no fiber shutters, EY swapped, see alog 30624.