09:53 UTC Seeing that Terramon predicted 6.7 um/s, I hurried and switched the SEI config to EARTH_QUAKE_V2 as the earthquake arrived on the FOM. However, that was before I realized that what I saw was already the peak, at ~1 um/s, and it was already turning back down. We could have ridden through this earthquake. Livingston is also down.
04:46 EY saturation
07:45 EY saturation
01:06UTC I reverted back to the original sei configuration for BSCs and locking attempts continue.
01:34UTC multiple locklosses at FIND_IR. Reverting back to new configs just to see if I can get past this stage.
01:50UTC New sei configs got me into DRMI land. MICH alignment looks pretty terrible. Trending witness sensors not revealing much. Going to try some alignment with INITIAL_ALIGNMENT guardian. Not going to do the arms if I can help it.
02:00UTC INITIAL_ALIGNMENT/INPUT_ALIGN not working. After trying to adjust the gain higher to get a lock, I decided to trend (TimeMachine) IM4. I found it to be about 100 urad out in pitch; restoring it seems to have corrected the issue. Perhaps this less-than-optimal pointing has been the sore spot in my locking efforts? Also had to put SR2 back to center before aligning SRC.
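(For future reference, a similar trend check can be scripted with the nds2 client instead of TimeMachine. This is just a minimal sketch; the IM4 witness channel name, server, and GPS times below are placeholders, not the actual ones used here.)

    import nds2

    # Placeholder channel and GPS times -- swap in the real witness channel
    # and a known-good reference time before running.
    chan = 'H1:SUS-IM4_M1_DAMP_P_INMON'
    good_time, now_time = 1164000000, 1164700000

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    ref = conn.fetch(good_time, good_time + 60, [chan])[0].data.mean()
    cur = conn.fetch(now_time, now_time + 60, [chan])[0].data.mean()
    print('IM4 pitch shift from reference: %g (channel units)' % (cur - ref))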
04:00UTC H1 Locked, Observing Range is ~72Mpc. Lock stretch so far: 01:08, coincident with L1.
02:51UTC - NLN 75.3Mpc
Keita is working on some ALS diffs that were knocking the intent bit loose. I set some channels for the BS gain settings to unmonitored with the assistance of Jim W (by telephone) so that I could set the intent bit.
TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 73.9893Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
Locked for a good chunk of the shift, but useism & wind grew pretty noisy in the afternoon, to the point where locking was going to be tough.
LOG:
At 23:13 we received a "Gamma Ray Burst" Alarm.
Following L1500117:
At 23:21 we were bumped out of OBSERVING!! (Sheila noticed it; we didn't hear it for some reason during all the hubbub of figuring out the deluge of GraceDB reports.) It wasn't obvious why we were bumped out of OBSERVING, but since we tend to get bumped out due to SDF diffs/changes, JimW looked at the DIAG_SDF node's log. There he noticed we were getting DIFFS for the SDF node: sysecatx1plc2
Unfortunately, when the diffs appeared they only lasted a few seconds (long enough to knock us out of OBSERVING) and then went away, so it was hard to see which channels were in question. It happened several times, and we managed to catch a glance at the channels involved. Words we saw in the channel names were: ALS, Fiber, polarization. Since they deal with the ALS, we can probably safely say we can NOT_MONITOR these channels (but we should get a blessing from Keita). If you are in a fix in the middle of the night and are able to figure out which these elusive channels are, be sure to note the channels, NOT_MONITOR them for the night, and then alog what you did.
Epilogue:
We ended up losing lock at 23:56--it's really noisy seismically with useism & wind.
Regarding strange sdf error of h1sysecatx1plc2:
I used a brute-force method: I copied the OBSERVE.snap file, stripped unnecessary information (including but not limited to non-ALS channels), divided it into 20-line chunks, and fed them to the lockloss tool for a [-10,+10] second window centered on the first time the IFO was kicked out (GPS 1164756098).
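(For anyone repeating this, the chunking step is only a few lines of Python. This is a sketch of the idea, not the exact commands used; the file names are placeholders.)

    # Sketch: pull the ALS channel names out of a copy of OBSERVE.snap and
    # split them into 20-line chunks to feed to the lockloss tool one at a time.
    with open('OBSERVE_copy.snap') as f:
        chans = [line.split()[0] for line in f if line.startswith('H1:ALS-')]

    for i in range(0, len(chans), 20):
        with open('chunk_%02d.txt' % (i // 20), 'w') as out:
            out.write('\n'.join(chans[i:i + 20]) + '\n')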
There are three channels that changed 2 seconds prior to the event (first attachment):
H1:ALS-X_FIBR_SERVO_IN1GAIN
H1:ALS-X_FIBR_LOCK_TEMPERATURECONTROLS_ERRORSIGNAL
H1:ALS-X_FIBR_LOCK_TEMPERATURECONTROLS_POLARITY
Based on this, I looked at the fiber PLL lock status and, sure enough, there was a large glitch in the PLL (the smaller peak on ch3 of the second attachment). After 3 seconds or so the autolocker started to "relock" by lowering the gain and such; there was a huge swing in the beat note, and it relocked. Nothing suspicious in polarization, RF, or DC levels during this.
It's not clear why this happened, but this is just the end-station PLL, which is not used for anything during OBSERVE, and there's no sign of an RF disaster going on at the end station.
I went ahead and unmonitored these three channels in OBSERVE in sdf.
I didn't unmonitor ALL end station ALS channels.
If this happens next time for other X end ALS channels:
4:20pm local
Took 22 min. to overfill CP3 by raising LLCV to 50% open from control room. I raised nominal value from 16% to 17% for weekend.
We had been in a GRB stand-down period from 23:13UTC before H1 lost lock.
O2 days 1,2:
model restarts logged for Thu 01/Dec/2016 No restarts reported
h1guardian was restarted at 08:50PDT, its RAM was increased from 12GB to 48GB.
model restarts logged for Wed 30/Nov/2016 No restarts reported
ER10:
model restarts logged for Tue 29/Nov/2016 No restarts reported
Lock# 8
Times are in: GPS (UTC)
Start time: 1164672438.0 (Dec 2 00:07:01 UTC)
Itbit Engage: [0, 1164672969.0, 1164673578.0, 1164674812.0, 1164700372.0, 1164745711.0]
Itbit Disengage: [0, 1164673508.0, 1164674325.0, 1164700288.0, 1164745010.0, 1164747085.0]
End time: 1164747085.0 (Dec 2 20:51:02 UTC)
Total length: 74647.0 , 20hr 44min 7sec
Total Science: 72774.0 , 20hr 12min 54sec
Lock# 9
Times are in: GPS (UTC)
Start time: 1164750245.0 (Dec 2 21:43:48 UTC)
Itbit Engage: [0, 1164750308.0, 1164756280.0, 1164756461.0, 1164757097.0]
Itbit Disengage: [0, 1164756100.0, 1164756401.0, 1164756773.0, 1164758206.0]
End time: 1164758206.0 (Dec 2 23:56:26 UTC)
Total length: 7961.0 , 2hr 12min 41sec
Total Science: 7334.0 , 2hr 2min 14sec
End of Day Summary
Current Status: Unlocked
Total Day Locked: 22hr 56min 48sec [95.6%] (82608/86400)
Total Day Science: 22hr 15min 8sec [92.7%] (80108/86400)
This is the first trial of a daily post by the Day shift operator summarizing the locks for the UTC day. This information is created by (userapps)/sys/h1/scripts/VerbalAlarms/Lock_Logging.py, which runs in the background of the VerbalAlarms program. The Lock# count was started at the beginning of O2 and is used only as a reference in the control room. The idea behind these daily posts is that they will give the CR something to reference. The lock clock that is displayed in the CR will soon show the current Lock# to match these.
Any questions, please let me know. Hopefully it's helpful!
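(For reference, the totals are just sums over the engage/disengage pairs. A minimal sketch of the arithmetic, not the actual Lock_Logging.py code:)

    def lock_totals(start, end, engage, disengage):
        # Leading 0 entries in the lists are placeholders and are skipped.
        total = end - start
        science = sum(d - e for e, d in zip(engage, disengage) if e and d)
        return total, science

    # Lock #8 from the summary above:
    engage    = [0, 1164672969.0, 1164673578.0, 1164674812.0, 1164700372.0, 1164745711.0]
    disengage = [0, 1164673508.0, 1164674325.0, 1164700288.0, 1164745010.0, 1164747085.0]
    print(lock_totals(1164672438.0, 1164747085.0, engage, disengage))
    # -> (74647.0, 72774.0), i.e. 20hr 44min 7sec total and 20hr 12min 54sec science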
I forgot to tag OpsInfo
Jenne, Jim W, Dave:
In order to run the a2l_min_LHO.py script from an MEDM shell launcher, I had to explicitly define full paths for all files used. Jenne created a testWrite.py which I used to test this out.
The MEDM command block in the A2L_screen.adl file (callable from SITEMAP via the SUS pulldown) is
command[0] {
	label="A2L Script"
	name="xterm"
	args="-hold -e 'cd /opt/rtcds/userapps/trunk/isc/common/scripts/decoup; /usr/bin/python /opt/rtcds/userapps/trunk/isc/common/scripts/decoup/a2l_min_LHO.py'"
}
This is used on the A2L screen that Jim made, accessible from the SUS tab on the sitemap. You can now run the A2L script by just clicking the button on the screen.
The first 3 times that this was used/tested, the A2L ran successfully and wrote the appropriate EPICS values, but the text files that summarize the results (so we don't have to trend for the data) didn't get written when the script was run from MEDM. Dave's work was to fix this.
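(The likely issue is that the summary files were written with paths relative to the current working directory, which is not the script directory when launched from MEDM. A sketch of the kind of change that avoids this; this is an assumption about the fix, not necessarily what Dave did:)

    import os

    # Anchor output to the script's own directory instead of whatever working
    # directory the MEDM-launched xterm happens to start in.
    script_dir = os.path.dirname(os.path.abspath(__file__))
    summary_path = os.path.join(script_dir, 'a2l_summary.txt')  # file name is a placeholder

    with open(summary_path, 'a') as f:
        f.write('example summary line\n')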
I plan to trend the data and hand-create the summary files for the three times they didn't get written, so that it's easier to run the beam position calculation script without losing these data points.
After the lockloss, the ISI ST2 configs went back to SC_B. I'm switching back to SC_A.
I also got an EZCA connection error that comes and goes but didn't seem to affect the locking. Still struggling to lock as of now. Kept losing lock after Turn on BS ST2.
Got past DRMI, but then lost lock at CARM_15PM. Somewhere in between, the fast shutter tripped the HAM6 ISI.
Still losing lock at DRMI_LOCKED. I don't know why SR2 would be moving so much. Because PRMI and DRMI were able to lock with okay counts, I don't really believe it needs an initial alignment. I'm going to try one real quick anyway.
If that doesn't help I'm going to try turning off the SRC2 loop during the next DRMI_LOCKED. I briefly went through Jenne's troubleshooting document. I'll be watching out for the ASC error signals next time. I zeroed all the SRC error signals before engaging DRMI ASC, following Jenne's troubleshooting document. Finally was able to move on. Sorry... should have thought of it much sooner.
Back to Observe again at 14:52 UTC. I accepted the SDF ETMX bounce mode damping gain. Bounce mode is still high, but it's slowly damping. I didn't increase the gains.