Highlights from my data quality shift last weekend (12-15 January 2017) at Hanford:
Full notes may be found here: https://wiki.ligo.org/DetChar/DataQuality/DQShiftH120170111
Lowered LLCV settings on both CP3 & CP4:
CP3 from 17% to 15%
CP4 from 34% to 33%
Exhaust temps were lower than ambient and exhaust pressures were above zero; a quick readback-check sketch follows below.
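For a quick command-line check of those readbacks, a minimal pyepics sketch along these lines could work; the PV names and ambient reference are assumptions for illustration, not the actual H0 vacuum channels:

# Spot-check cryopump exhaust readbacks after an LLCV change.
# PV names are hypothetical placeholders, not real H0 vacuum channels.
from epics import caget

AMBIENT_TEMP_C = 20.0  # assumed ambient reference for the comparison

for pump in ('CP3', 'CP4'):
    temp = caget('H0:VAC-%s_EXHAUST_TEMP' % pump)          # hypothetical PV
    pressure = caget('H0:VAC-%s_EXHAUST_PRESSURE' % pump)  # hypothetical PV
    print('%s: exhaust temp %.1f C, exhaust pressure %.3f' % (pump, temp, pressure))
    # Expected after the change: temp below ambient, pressure above zero
    assert temp < AMBIENT_TEMP_C and pressure > 0, '%s readback out of range' % pump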
TITLE: 01/17 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.57 μm/s
QUICK SUMMARY: Freezing rain on the way in. Maintenance has already begun.
TITLE: 01/17 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY: A bit of a rough night. Started with an EQ. Proceeded with one short lock with dwindling range. Some IMC locking issues after lockloss that turned out to be user error. Back to Observing at the end of the shift.
LOG: See previous logs.
15:20 Bubba and contractors unloading a van at OSB receiving.
15:36 A/C work starting in control room.
The IMC hasn't locked since the lockloss. While following the troubleshooting Wiki instructions, I attempted to use Dataviewer to look at trends of various IMC-related channels, to no avail (see screenshot of the error), no matter which data rate setting I chose. With no data to point me to what is wrong, I cleared the IMC WFS as a Hail Mary. This didn't help. I'm not sure what to do next here.
Using TimeMachine as a last resort, I also set the IMC PZTs back to values from a previous lock stretch 12 hours ago. This also did not help.
Turns out this was due to some MC2 OSEMs being railed because I apparently forgot to take ISC_LOCK to DOWN before going back to INITIAL_ALIGNMENT. IMC is locked again now.
Dataviewer has been fixed. The operating system type wasn't being determined properly, so the paths to programs were incorrect. Operators should log out of the workstation and log back in.
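As a fallback when Dataviewer misbehaves, the same trends can be pulled directly over NDS2 with GWpy. A minimal sketch, with an illustrative (unverified) IMC channel name and time window:

# Fetch and plot a trend over NDS2 with GWpy when Dataviewer is down.
# The channel name is an illustrative assumption, not a verified channel.
from gwpy.timeseries import TimeSeries

data = TimeSeries.get(
    'H1:IMC-MC2_TRANS_SUM_OUT_DQ',  # hypothetical IMC channel
    '2017-01-17 06:00:00',          # start of the window of interest
    '2017-01-17 07:00:00',          # end of the window
)
plot = data.plot()                  # quick look, analogous to a Dataviewer trend
plot.show()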
Didn't notice anything out of the ordinary on the FOMs, other than the range dropping since the beginning of the lock stretch.
Changing the gain of PI mode 9 kicked us out of Observe.
Back to Observing at 12:52.
I've told Travis we can unmonitor all PI damping gains and phases in SDF; some modes that we weren't damping before must have been missed, so feel free to unmonitor them.
With microseism above the 90th percentile, I had to wait for the EQ to ring down entirely before making much progress locking (~2 hours).
5.4M Guisa, Cuba
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 5.4, 5.4, NA
Location: 43km S of Guisa, Cuba; LAT: 19.9, LON: -76.6
Starting time of event (i.e. when BLRMS started to increase on the DMT on the wall): 9:15 UTC
Lock status? Locking at the time; became unable to proceed with locking.
EQ reported by Terramon BEFORE it actually arrived? No, I noticed it on the BLRMS before the USGS even reported it. (A sketch for cross-checking against the USGS catalog follows below.)
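For that cross-check after the fact, the USGS public FDSN event service can be queried directly; the time window and magnitude cut below are illustrative:

# Query the USGS FDSN event service for events around the arrival time.
# Window and magnitude threshold are illustrative choices.
import requests

resp = requests.get(
    'https://earthquake.usgs.gov/fdsnws/event/1/query',
    params={
        'format': 'geojson',
        'starttime': '2017-01-17T09:00:00',
        'endtime': '2017-01-17T10:00:00',
        'minmagnitude': 5.0,
    },
)
resp.raise_for_status()
for event in resp.json()['features']:
    props = event['properties']
    print(props['mag'], props['place'], props['time'])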
TITLE: 01/17 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Initial Alignment
OUTGOING OPERATOR: Jeff B
QUICK SUMMARY: After the PSL crash at the end of Jeff's shift, the alignment doesn't look so good, so I'm running through IA.
Shift Summary: Lost lock due to a PSL Head Flow sensor error. Jason recovered the PSL. Reset the Noise Eater and the X-Arm Fiber Polarization. Tried relocking but could not get either arm to lock on green. Ran the Green Arm Initial Alignment and restarted locking. Locking appeared to be working and was at DRMI-1F when Travis came in.
Activity Log: Time - UTC (PT)
PSL tripped at 06:15 (22:15) with a Head 1-4 Flow Sensor error. Jason recovered PSL. I reset the Noise Eater.
Jeff filed FRS 7115 for this trip.
IFO is locked at NOMINAL_LOW_NOISE. The power is 28.6 W and the range is up to 59 Mpc. The intent bit was set to Observing during the first half of the shift, except for a couple of short periods at the beginning when I dropped it out while working on the PI modes.
The PI modes have settled down. Mode-9 is a little higher than I would like; however, it is not responding to tweaks and seems to be happy around 1.0. I am going to leave it where it is for now. The violin modes are behaving reasonably well after Patrick suppressed them at the end of the Day shift.
The environmental conditions are good and improving. The wind is a Light Breeze (1-3 mph). Primary and secondary microseism are elevated but no longer rising.
After Patrick finished damping the ETMY Mode-10 violin mode, I accepted the SDF difference (screenshot attached) so we could go into Observing.
ISC_LOCK at DOWN and observatory mode in corrective maintenance upon arrival. The HAM6 ISI is tripped. I will attempt to lock, but given the alogs from last night it sounds like I may need help from a commissioner. Running 'ops_auto_alog.py -t Day' reported an error:

patrick.thomas@operator0:~$ ops_auto_alog.py -t Day
Traceback (most recent call last):
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 300, in <module>
    alog.main(Transition,shift)
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 218, in main
    operator = self.get_oper_w_date('{day}-{month_name}'.format(day=lday, month_name=self.all_months[lmonth]), 'Owl')
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 130, in get_oper_w_date
    date_ln = self.get_date_linum(date) - 1
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
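From the traceback, get_date_linum() apparently returns None when the requested date is not found, and the unguarded subtraction then raises the TypeError. A hypothetical reconstruction of that step with a guard (function names from the traceback; bodies are assumptions, not the actual ops_auto_alog.py source):

# Hypothetical reconstruction of the failing step; not the real script.
def get_date_linum(schedule_lines, date):
    """Return the 1-based line number containing `date`, or None."""
    for num, line in enumerate(schedule_lines, start=1):
        if date in line:
            return num
    return None  # a missing date propagates None to the caller

def get_oper_w_date(schedule_lines, date):
    linum = get_date_linum(schedule_lines, date)
    if linum is None:
        # Guard against the TypeError above ('NoneType' - 'int')
        raise ValueError('date %r not found in shift schedule' % (date,))
    return schedule_lines[linum - 1]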
17:59 UTC Sheila, Jenne, Keita, Evan G. and Heather (new fellow) in control room.
The ops_auto_alog.py error seems to be an issue with only the ops account; I can get it to work under other accounts on both the operator0 machine and opsws12. Jim Batch mentioned some issues with the ops account this morning, so they may be related.