TITLE: 09/28 [Owl Shift]: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Observing at ~70 Mpc
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: The CDS Overview is reporting an H1SUSETMX timing error; I'll reset it the next time LLO or we drop out of observing. The environment is calm, wind speeds are below 10 mph, and we've been locked for almost 50 hours!
TITLE: 09/27 [EVE Shift]: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing, ~72 Mpc
SHIFT SUMMARY: Rode through four earthquakes in the Northern Mid-Atlantic Ridge. 45 MHz RF noise came and went a couple of times. A timing error has appeared on H1SUSETMX. Had a number of SUS ETMY saturations and a couple of OMC DCPD saturations. Wind is below 5 mph.
INCOMING OPERATOR: TJ
SUS ETMY saturating (Sun Sep 28 01:16:40 UTC)
SUS ETMY saturating (Sun Sep 28 01:16:42 UTC)
OMC DCPD Saturation (Sun Sep 28 01:16:42 UTC)
SUS ETMY saturating (Sun Sep 28 01:16:48 UTC)
SUS ETMY saturating (Sun Sep 28 02:30:11 UTC)
SUS ETMY saturating (Sun Sep 28 03:22:20 UTC)
SUS ETMY saturating (Sun Sep 28 03:23:31 UTC)
SUS ETMY saturating (Sun Sep 28 03:56:10 UTC)
SUS ETMY saturating (Sun Sep 28 03:56:12 UTC)
OMC DCPD Saturation (Sun Sep 28 03:58:30 UTC)
SUS ETMY saturating (Sun Sep 28 03:58:33 UTC)
SUS ETMY saturating (Sun Sep 28 05:35:33 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:35 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:41 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:45 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:47 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:49 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:51 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:54 UTC)
SUS ETMY saturating (Sun Sep 28 06:09:56 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:02 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:26 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:28 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:30 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:32 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:48 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:53 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:55 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:57 UTC)
SUS ETMY saturating (Sun Sep 28 06:10:59 UTC)
Throughout ER8 and up to the present we have seen many examples of the 45 MHz EOM driver acting up. I studied this noise between 11th Sept and 23rd Sept using a BLRMS of H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ, and from this study found a threshold on the BLRMS with which to create a data quality flag (details of exactly what I did can be found on this page). This is the flag the analyses use to completely remove this bad data from their searches (CAT1 veto).
From 12th Sept 00:00 UTC to 26th Sept 03:00 UTC the flag marked 23789 seconds (6.61 hours) of LHO data (2.93% of H1:DMT-ANALYSIS_READY). This corresponds to 17611 seconds (4.89 hours) of coincident data (3.14% of {H1,L1}:DMT-ANALYSIS_READY).
From 26th Sept 03:00 UTC to 28th Sept 00:00 UTC the flag marked 25992 seconds (7.22 hours) of LHO data (17.5% of H1:DMT-ANALYSIS_READY). This corresponds to 10970 seconds (3.05 hours) of coincident data (12.6% of {H1,L1}:DMT-ANALYSIS_READY).
The EOM driver seems to have been acting up much more recently: the flag has marked more time as bad in the last two days than in the previous two weeks of data taking!
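As a rough illustration of the BLRMS-thresholding approach described above, a windowed-RMS flag can be sketched as follows. The sample rate, window length, threshold, and test signal here are all placeholder values for illustration, not the actual parameters of the study:

```python
import numpy as np

def blrms_flag(samples, fs, window_s, threshold):
    """Compute a windowed RMS of a (band-limited) time series and
    return one boolean per window marking RMS above threshold.

    samples: 1-D array of the band-passed channel; fs: sample rate
    in Hz; window_s: window length in seconds; threshold: RMS value
    above which the window is flagged as bad data.
    """
    n = int(fs * window_s)
    nwin = len(samples) // n
    chunks = samples[: nwin * n].reshape(nwin, n)
    rms = np.sqrt(np.mean(chunks ** 2, axis=1))
    return rms > threshold

# Example: ten seconds of quiet data with one noisy second.
fs = 256
t = np.arange(0, 10, 1.0 / fs)
x = 0.01 * np.sin(2 * np.pi * 45 * t)  # small nominal signal
x[3 * fs:4 * fs] += 1.0                # one second of glitching
flags = blrms_flag(x, fs, 1.0, 0.1)
print(flags)  # only the fourth one-second window is flagged
```

The flagged windows would then be converted to time segments and uploaded as the CAT1 veto flag; that bookkeeping step is omitted here.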
00:03 - 00:08 UTC Stepped out of control room
00:51 UTC Peter K. here eclipse hunting at LSB parking lot
~02:00 - 02:10 UTC Stepped out of control room
02:20 UTC Heard buzzing from speaker in control room; turned it off
~02:20 UTC RF noise starting
02:39 - 02:43 UTC Stepped out of control room
RF noise stopped
SUS ETMX timing error on CDS overview
Peter K. done at 9:25 pm local time (04:25 UTC).
The FAR threshold used by Approval Processor to select GW event candidates for potential EM follow-up has been adjusted this evening. It is now 1.9e-7 Hz for both the gstlal LowMass pipeline and the MBTAOnline pipeline, so we expect the union of the two to yield CBC triggers at around 3.8e-7 Hz (about one per month). For Burst triggers we currently use only one pipeline, cWB, so its threshold is 3.8e-7 Hz. One consequence of this is that a CBC trigger with FAR between 1.9e-7 and 3.8e-7 Hz will sound the audible alarm in the control room (because the script that does that only requires FAR < 3.8e-7 Hz), but Approval Processor will not select the trigger, and the operator will not be presented with an Operator Sign-off box on the GraceDB page for it. So don't be surprised if that happens.
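The interaction of the two thresholds can be summarized in a small sketch. The threshold values are taken from this entry, but the decision functions are illustrative only, not the actual alarm-script or Approval Processor code:

```python
# FAR thresholds as described in this entry. Each CBC pipeline
# (gstlal LowMass, MBTAOnline) is cut at 1.9e-7 Hz; the alarm
# script checks only against the union rate.
PER_PIPELINE_FAR = 1.9e-7          # Hz, per CBC pipeline
UNION_FAR = 2 * PER_PIPELINE_FAR   # 3.8e-7 Hz, ~one per month

def sounds_alarm(far_hz):
    # The audible-alarm script just requires FAR < 3.8e-7 Hz.
    return far_hz < UNION_FAR

def cbc_selected_for_signoff(far_hz):
    # Approval Processor applies the tighter per-pipeline cut.
    return far_hz < PER_PIPELINE_FAR

# A CBC trigger in the gap sounds the alarm but produces no
# Operator Sign-off box on its GraceDB page:
far = 2.5e-7
print(sounds_alarm(far), cbc_selected_for_signoff(far))  # True False
```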
TITLE: 09/27 [EVE Shift]: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing at ~70 Mpc
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: Lights appear off in the LVEA, PSL enclosure, end X, end Y, and mid X. I cannot tell from the camera whether they are off at mid Y. Seismic in the 0.03 - 0.1 Hz band is ~0.02 um/s. Seismic in the 0.1 - 0.3 Hz band is ~0.06 um/s. Wind speeds are less than 20 mph.
TITLE: 09/27 [DAY Shift]: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Observing at ~73 Mpc. The IFO has been locked for 42 hours now.
SUPPORT: Mike
QUICK SUMMARY: The RF45 noise hasn't been acting up much (Laura suggested it was probably acting up between ~20:00-21:00 UTC, but the glitches had probably settled since they didn't set off the Verbal alarm). Hardly any ETMY glitch alarms today. Wind speed ~10 mph. Seismic activity in the 0.03-0.1 Hz band is coming back to nominal.
INCOMING OPERATOR: Patrick
ACTIVITY LOG:
NO ACTIVITY. It's been a boring shift (good for science!).
A magnitude 4.9 earthquake in Chile is coming through. Wind speed ~10 mph. Increasing seismic activity in the earthquake band. Only one ETMY glitch caused by the 45 MHz noise so far.
Parameters:
GPS Start Time = 1127333892 # Beginning of time span, in GPS seconds, to search for injections
GPS End Time = 1127420292 # Ending of time span, in GPS seconds, to search for injections
Check Hanford IFO = True # Check for injections in the Hanford IFO frame files.
Check Livingston IFO = True # Check for injections in the Livingston IFO frame files.
IFO Coinc Time = 0.01 # Time window, in seconds, for coincidence between IFO injection events.
Check ODC_HOFT = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in HOFT frames.
Check ODC_RAW = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RAW frames.
Check ODC_RDS = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RDS frames.
Check GDS_HOFT = True # Check GDS-CALIB_STATE_VECTOR channel in HOFT frames.
Report Normal = True # Include normal (IFO-coincident, consistent, and scheduled for network injections and consistent and scheduled for IFO injections) injections in report
Report Anomalous = True # Include anomalous (non-IFO-coincident, inconsistent, or unscheduled) injections in report
No injections found. No scheduled injections within the time span checked.
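For illustration, the "IFO Coinc Time" parameter above amounts to pairing injection events seen in the two IFOs when their times agree to within 0.01 s. A minimal sketch follows; the real checker's matching logic is not shown in the log, and the event times below are made up:

```python
def coincident_pairs(h1_times, l1_times, window=0.01):
    """Pair up event times (GPS seconds) from the Hanford and
    Livingston frame files when they agree to within `window`
    seconds -- the 'IFO Coinc Time' parameter. Illustrative only.
    """
    pairs = []
    for th in h1_times:
        for tl in l1_times:
            if abs(th - tl) <= window:
                pairs.append((th, tl))
    return pairs

# Hypothetical event times within the searched GPS span:
h1 = [1127334000.000, 1127340000.123]
l1 = [1127334000.004, 1127350000.500]
print(coincident_pairs(h1, l1))  # one coincident pair, 4 ms apart
```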
TITLE: 09/27 [DAY Shift]: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Observing at ~70 Mpc. The IFO has been locked for 36 HOURS!
OUTGOING OPERATOR: Ed M.
QUICK SUMMARY: The 45 MHz EOM driver hasn't been glitching since the shift started. Mike called to give advice on how to fix the GraceDB query issue. I'm reading 0 mph wind speed everywhere on the PEM weather MEDM screen. There's a little seismic activity in the 3-10 Hz band, which seems to correspond to the slight drop in BNS range, but otherwise everything looks quiet. The IFO is locked and going strong.
ER8 day 33 (last day). O1 days 1 - 9
Fri 18 - Mon 21: No Restarts Reported
model restarts logged for Tue 22/Sep/2015
2015_09_22 10:57 h1nds0
2015_09_22 10:57 h1nds1
2015_09_22 11:22 h1broadcast0
Maintenance Tuesday. The NDS restarts were related to offloading raw minute trends from the SSD. The Detchar O1 DMT channel list was applied to the broadcaster.
Wed 23 - Sat 26: No Restarts Reported
I simply logged in to the alog using my credentials on the Ops station. I opened up the GraceDB webpage and noticed that GraceDB querying is working again.
Please disregard that alog. Mike just told me that GraceDB coming back was coincident with Dave restarting the Python script. It wasn't me...
The ext_alert.py script, which periodically queries GraceDB, had failed. I have just restarted it; instructions for restarting are at https://lhocds.ligo-wa.caltech.edu/wiki/ExternalAlertNotification
Getting this process to autostart is now on our high priority list (FRS3415).
Here is the error message displayed before I did the restart.
  File "ext_alert.py", line 150, in query_gracedb
    return query_gracedb(start, end, connection=connection, test=test)
  File "ext_alert.py", line 150, in query_gracedb
    return query_gracedb(start, end, connection=connection, test=test)
  File "ext_alert.py", line 135, in query_gracedb
    external = log_query(connection, 'External %d .. %d' % (start, end))
  File "ext_alert.py", line 163, in log_query
    return list(connection.events(query))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 441, in events
    uri = self.links['events']
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 284, in links
    return self.service_info.get('links')
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 279, in service_info
    self._service_info = self.request("GET", self.service_url).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 325, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 201, in request
    response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1038, in getresponse
    response.begin()
  File "/usr/lib/python2.7/httplib.py", line 415, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 476, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 241, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 160, in read
    return self._sslobj.read(len)
ssl.SSLError: The read operation timed out
I have patched the ext_alert.py script to catch SSLError exceptions and retry the query [r11793]. The script will retry up to 5 times before crashing completely, behavior we may want to rethink if it proves insufficient.
I have requested that both sites svn up and restart the ext_alert.py process at the next convenient opportunity (i.e., the next time it crashes).
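In the spirit of that patch, a retry wrapper for an SSL read timeout might look like the following sketch. This is not the actual r11793 diff; it also uses an iterative loop rather than the recursive retry visible in the traceback, which avoids deepening the call stack on repeated failures:

```python
import ssl
import time

def query_with_retry(query_fn, *args, max_tries=5, wait_s=5, **kwargs):
    """Retry a query when the underlying SSL read times out:
    catch SSLError, retry up to max_tries times, then give up
    and re-raise. A sketch, not the actual ext_alert.py code.
    """
    for attempt in range(1, max_tries + 1):
        try:
            return query_fn(*args, **kwargs)
        except ssl.SSLError:
            if attempt == max_tries:
                raise  # crash completely after the final attempt
            time.sleep(wait_s)

# Example with a stand-in query that fails twice, then succeeds:
calls = {'n': 0}
def flaky_query():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ssl.SSLError('The read operation timed out')
    return ['E123456']

print(query_with_retry(flaky_query, wait_s=0))  # ['E123456']
```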
TITLE: Sep 26 OWL Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing
LOCK DURATION: Entire Shift
SUPPORT: N/A
INCOMING OPERATOR: Nutsinee
Activity log:
09:13 SUS ETMY saturating (Sun Sep 27 09:13:21 UTC)
09:13 SUS ETMY saturating (Sun Sep 27 09:13:23 UTC) 0.6 Mpc
10:19 SUS ETMY saturating (Sun Sep 27 10:19:05 UTC) 3 Mpc
11:45 SUS ETMY saturating (Sun Sep 27 11:45:40 UTC) odd... no loss of range
12:21 Noticed the GraceDB query failure.
14:15 Mike checked in by telephone. He suggested that I e-mail Duncan, Leo and Peter S about the GraceDB query failure that has existed for the duration of my shift.
Shift Summary: Smooth sailing with the IFO locked at 74 Mpc. Wind speeds <10 mph. 4 ETMY glitches. No obtrusive seismic or weather activity. I e-mailed Duncan, Leo, and Peter S about the GraceDB query failure that has existed for the duration of my shift. Handing off to Nutsinee.
Mid-Shift Summary: Smooth sailing so far with the IFO locked at 74 Mpc. Wind speeds have decreased to <5 mph. 3 ETMY glitches so far.
H1SUSETMX timing error cleared by hitting Diag Reset at 07:53:00 UTC.