16:05-16:31: H1 Out Of OBSERVING To Correct ISC_LOCK.py code (Range was 0Mpc during this time!)
Yesterday during my shift, I noticed that the ISC_LOCK log was continually repeating the same line of code (a few times a second). I sent an email to the commissioners, & JeffK pointed me to the appropriate alog related to this change (Heather/Sheila alog33437). It sounds like this Reset was being continuously looped, & this is the channel:
ISC_LOCK [NOMINAL_LOW_NOISE.run] ezca: H1:OAF-RANGE_RLP_4_RSET => 2
So, yesterday Heather & TJ took a look at these lines of code to fix this (we just wanted this Reset to happen once after a lock), and the operator was then to wait for an appropriate time to hit LOAD on ISC_LOCK. This morning I saw that L1 was down, so I took H1 out of OBSERVING & hit LOAD, but I received an ERROR for the ISC_LOCK guardian node.
Instead of delving much into the code, I immediately made a call to the Guardian Help Desk (i.e. Jamie). I texted him photos of the error message & then he found the issue (it was a missing Closing Parenthesis). Once this was corrected, I saved ISC_LOCK.py, hit LOAD on the ISC_LOCK node, & the RED ERROR went away. I then went back to OBSERVING.
NOTE: During this time, H1 Range went to 0Mpc, BUT we were still at NLN the entire time. So our current lock is approaching 24hrs.
Here is the Error which came up on the ISC_LOCK log, when I initially pressed LOAD:
2017-02-03T16:27:08.58122 File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py", line 3994
2017-02-03T16:27:08.58123 for blrms in range(1,11):
2017-02-03T16:27:08.58123 ^
2017-02-03T16:27:08.58124 SyntaxError: invalid syntax
2017-02-03T16:27:08.58134 ISC_LOCK LOAD ERROR: see log for more info (LOAD to reset)
Here are the lines in question from the ISC_LOCK.py code (the line where the closing parenthesis was missing is marked with an arrow below):
3992 subp.call(['/opt/rtcds/userapps/trunk/isc/h1/guardian/./All_SDF_observe.sh'])   <--------- !
3993 #clear history of blrms
3994 for blrms in range(1,11):
3995     ezca['OAF-RANGE_RLP_{}_RSET'.format(blrms)]=2
3996     #ezca['OAF-RANGE_RBP_{}_RSET'.format(blrms)]=2
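As a side note on why the repetition happened: in Guardian, a state's main() executes once on entering the state, while run() is re-executed continuously (a few times a second), so a one-shot action like this reset belongs in main(). Below is a minimal sketch only, NOT the actual ISC_LOCK code; the state/method layout and the use of the Guardian-provided ezca object are assumptions for illustration.

# Minimal sketch, not the real ISC_LOCK.py; assumes the standard Guardian
# GuardState layout, with ezca supplied by the Guardian environment.
from guardian import GuardState

class NOMINAL_LOW_NOISE(GuardState):
    def main(self):
        # main() runs once when the state is entered, so the BLRMS history
        # reset happens a single time after reaching lock
        for blrms in range(1, 11):
            ezca['OAF-RANGE_RLP_{}_RSET'.format(blrms)] = 2

    def run(self):
        # run() is polled continuously; one-shot actions do not belong here
        return True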
TITLE: 02/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 8mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY:
We are humming along with a lock approaching 23hrs.
NOTE: H1 Range was 0Mpc for about 26min (16:05-16:31), but we were STILL running at NLN. With L1 dropping out this morning, I took H1 out of OBSERVING to LOAD an update to ISC_LOCK.py.......but there was a Guardian Error & Jamie helped me out. When that was fixed, I immediately went back to OBSERVING. Will make a separate log about this. And L1 is also back with us! (Terra's good. She happened to notice our range down and called to check to make sure the PI Modes were going to be watched.) :)
Had a light dusting of snow this morning, but the snow has paused for the moment.
Accumulated hours are as of 8 am local time on February 3rd, 2017.
Original front end diode box, S/N OBS2-FE-DB (this diode box has since been refurbished): 18082 hours
Replacement front end diode box, S/N DB-1207 (this box was only swapped out due to a power supply failure; it still has diode current head room despite the large number of operating hours): 18116 hours
Current front end diode box, S/N SPARE3-FE-DB: 5104 hours
The high power oscillator diode boxes, OBS2-DB1, OBS2-DB2, OBS2-DB3 and OBS2-DB4, have 15795, 15795, 15794, and 15794 hours respectively.
TITLE: 02/03 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Quiet shift. Been locked for 22 hrs. Leaving a bit early. It's snowing outside.
LOG:
~15:30 Peter to Diode room to take some pictures.
After falling quite low the previous week, the reference cavity transmission increased without intervention. With the decrease in room temperature the reference cavity transmission has decreased again. Attached is a plot showing the trend.
Been locked for 18.5 hours. Not much happened since shift started. Cleared timing error on IOPASC0 when I got here.
TITLE: 02/03 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: 14hr lock, so it was quiet.
LOG:
Corey had the IFO locked when I got here. I'm leaving it locked for Nutsinee. Not much happened in between.
Summary of the DQ shift from Monday 30th Jan to Wednesday 1st Feb (inclusive), click here for full report:
Regarding the lockloss on Jan 30th, I just want to point out that light wind gusts do not typically increase ground motion in the 10-30 Hz band above background. Ground motion in this band is largely determined by building fans, motors and traffic. You can see similar features on other days clearly not associated with wind. There was an unexplained lockloss on Jan 27th close to that time, but it was probably unrelated.
TITLE: 02/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
I looked at a few times pointed out to me by Aidan and Betsy and didn't find any evidence that the flow rate sensor glitches are coupling into h(t). I've attached one example where there are two fairly large glitches in the flow rate readout with no visible effects in the h(t) spectrogram from the same times.
WP6465 Re-route timing for EE-lab to DTS
Dave, Jim, Fil:
Instead of running the EE-Lab timing-fanout directly from the MSR timing-master, we re-routed this fiber line to the timing-fanout in the DTS (H2 building). This means the EE-Lab timing-fanout can be turned on and off without impacting the Beckhoff SDF settings.
We repurposed the fiber pair connected to ports [13,14] of the Multimode patch between the MSR and the H2 Building. Last year we had used this line for temporarily extending the CDS network into the DTS for environmental monitoring (we left the fiber-ethernet converters in place in the DTS and MSR).
The EE-Lab timing-fanout was turned on and locked its signal correctly.
Putting the concrete floor tiles back in the EE-Lab and MSR made some noise, which was noted by the operator.
The CDS group was running cable (WP6465), & we heard loud thumps from HEAVY floor tiles (50-70lbs) being dropped. This happened multiple times, and some of the drops were seen on the H1 DARM spectra.
Here are times where we saw the bumps:
22:02:58 & 22:06:15
General Keita Notes:
Tues Maintenance Day (1/31) Hold Over Items (possible for next Tues?)
Next Tues New Items
PyCBC analysts, Thomas Dent, Andrew Lundgren
Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel. One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure. (Hence the possible name SPINOSAURUS, for which try Googling.)
The cause is a really strong transient excitation at around 30Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels. We made some sound files from a selection of these channels:
PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.
Note that the OPLEV accelerometer shows ringing at ~60-something Hz.
Working hypothesis is that the thud is exciting some resonance/relative motion of the input optics which is causing light to be reflected off places where it shouldn't be ..
The frequency of the arches (~34 per second) would indicate that whatever is causing scattering has a motion frequency of about 17Hz, since each cycle of the scatterer's motion produces two arches (see eg https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
Maybe someone at the site could recognize what this is from listening to the .wav files?
A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/
Wow that is pretty loud, seems like it is even seen (though just barely) on seismometers clear out at EY with about the right propagation delay for air or ground propagation in this band (about 300 m/s). Like a small quake near the corner station or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours, and are they also seen at EY or EX?
It's hard to spot any pattern in the GPS times. As far as I have checked the disturbances are always much stronger in CS/LVEA than in end station (if seen at all in EX/EY ..).
More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/
Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local) https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/ (00:10 local)
Here's a plot which may be helpful as to the times of the disturbances in the CS; it shows the great majority of occurrences on the 23rd, 26th-27th and early on the 28th Jan (all times UTC). This ought to be correlated with local happenings.
The ISI-GND HAM2 channel also has loud triggers at times where there are no strain triggers because the ifo was not observing. The main times I see are approximately (UTC):
Jan 22 : hours 13, 18, 21-22
Jan 23 : hours 0-1, 20
Jan 24 : hours 0, 1, 3-6, 10, 18-23
Jan 25 : hours 21-22
Jan 26 : hours 17-19, 21-22
Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23
Jan 28 : hours 9-10
Jan 29 : hours 19-20
Jan 30 : hours 17, 19-20
Hmm. Maybe this shows a predominance of times around hour 19-20-21 UTC i.e. 11-12-13 PST. Lunchtime?? And what was special about the 24th and 27th ..
Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and started staying near freezing by the 24th. Fil reported seeing a chunk he thought could be ~200 lbs fall.
Ice Cracking On Roofs?
In addition to ice/snow falls mentioned by Jim, thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog33199)....Beverly Berger emailed me suggesting this could be ice cracking on the roof. We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.
MSR Door Slams?
After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning. Because of this, thought I'd note some times this morning when I heard a noise similar to Thomas' audio: it was the door slamming when people were entering the MSR (the Mass Storage Room adjacent to the Control Room; there was a pile of boxes which the door would hit when opened...I have since slid them out of the way). I realize this isn't as big of a force as what Robert mentions or the snow falls, but just thought I'd note some times when people were in/out of the room this morning:
I took a brief look at the times in Corey's previous 'bumps in the night' report; I think I managed to deduce correctly that it refers to UTC times on Jan 13. Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14. There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).
The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels. The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.
The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus! So, a loud enough spinosaurus will also appear in the PSL.
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).
The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel, see the 'HAM2' trace in this plot for the 13th - i.e. if you want to know 'was it a spinosaurus?', check for a spike in HAM2.
Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC :
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav
Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??
Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/
The time mentioned in the last entry turns out to have been a scheduled Tuesday maintenance where people were indeed in the LVEA doing work (and the ifo was not observing, though locked).
Oh, forgot to mention another change. When Jamie was looking for issues with line 3994, we did make a change to:
3994 for blrms in range(1,11):
"range" used to be something else, but I didn't capture that before we changed it. (it was something like "gnumpy.range(1,11):" before). Will this change anything from what was initially intended?
The error condition was caused by a SyntaxError exception in the code. A simple load of the code will catch these exceptions, so you can easily avoid these load-time errors by parsing the code with e.g. guardutil first:
$ guardutil print ISC_LOCK
If there are any syntax errors in the code, that call to guardutil will catch and print them for you before you hand the code over to the operators.
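As a more generic pre-load check (a sketch only; the file path here is taken from the error message above, and plain py_compile knows nothing about Guardian, unlike guardutil), the same SyntaxError can be caught before anyone hits LOAD:

import py_compile

# Compile the module to catch SyntaxErrors before LOADing the node
try:
    py_compile.compile('/opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py',
                       doraise=True)
    print('ISC_LOCK.py parsed cleanly')
except py_compile.PyCompileError as err:
    print(err)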
Also, we've been trying to avoid calling out to shell scripts with subprocess calls. Is there some reason this 'All_SDF_observe.sh' script can't be properly integrated?
I also note that the return code of the script is not being checked, so if the script fails, nothing will catch it. That's not good, and it's one of the main reasons we avoid subprocess calls.
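If the shell-out does have to stay for now, here is a minimal sketch (the error handling shown is just one option, not a prescription) of checking the return code so a failure at least surfaces:

import subprocess

SCRIPT = '/opt/rtcds/userapps/trunk/isc/h1/guardian/All_SDF_observe.sh'

# subprocess.call returns the script's exit code; non-zero means it failed
ret = subprocess.call([SCRIPT])
if ret != 0:
    # raise (or notify the operator) rather than silently continuing
    raise RuntimeError('All_SDF_observe.sh exited with code {}'.format(ret))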
"range" is similar to "numpy.arange", except it returns a simple python list instead of a numpy.array. There's no reason to use numpy for this operations.
This was definitely my fault. I'm not sure how, but I somehow deleted that parenthesis when I cut and pasted the blrms code from run to main.
I'm also a bit confused by that shell script call. There is a pass directly before it, almost as if someone didn't want the script to actually be run, but a pass doesn't work for that purpose and the script has been run every time.