Started labeling the power supplies in the CER mezzanine. Will try to complete the task during the next maintenance window.
WP 6460
Set the addresses to the Beckhoff modules used for the safety system at EX.
In the VEA, the table enclosure interlocks were connected to the EP1908 units (digital input). The connection is made through a patch panel already installed on the tables. All internal cabling inside the tables was already complete.
F. Clara, R. McCarthy
Now monitoring 266,552 channels. Channel list attached.
IFO is down for the maintenance window. Expect to be starting recovery in an hour or two.
I reset both PSL power watchdogs at 19:27 UTC (11:27 PST). This closes FAMIS task 3636.
I have updated the python environment setup scripts to include paths for Ubuntu 14, SL6, SL7, and Debian 8, in addition to Ubuntu 12, which was already supported. The setup is correct; however, since the paths are now more specific to the operating system, there may be packages that were installed for Ubuntu 12 which can't be found on Ubuntu 14, SL6, SL7, or Debian 8. If your python script no longer runs on one of those systems but does work with Ubuntu 12, let one of the CDS admins know and we'll fix it. This was triggered by a report that instafoton.py didn't work with Debian 8. I've tested instafoton.py with Ubuntu 12, Ubuntu 14, and Debian 8, and it works.
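For illustration only, here is a minimal sketch of how a per-OS package path could be selected; the directory layout, base path and function names below are assumptions, not the actual CDS setup scripts:

# Minimal sketch: pick a per-OS package directory based on /etc/os-release.
# The /ligo/apps/python base path is hypothetical.
import os

def os_tag():
    """Return a short tag such as 'ubuntu12', 'ubuntu14' or 'debian8'."""
    info = {}
    with open("/etc/os-release") as f:
        for line in f:
            if "=" in line:
                key, val = line.rstrip().split("=", 1)
                info[key] = val.strip('"')
    return info.get("ID", "unknown") + info.get("VERSION_ID", "").split(".")[0]

def package_path(base="/ligo/apps/python"):
    """Hypothetical per-OS directory, e.g. /ligo/apps/python/ubuntu14."""
    return os.path.join(base, os_tag())

print("Using package path:", package_path())

In a layout like this, a script that runs on Ubuntu 12 but fails on Debian 8 would simply be missing its package under the Debian 8 directory.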
Ops Shift Transition: 02/07/2017, Day Shift 16:00 – 00:00 (08:00 - 16:00) - UTC (PT)
TITLE: 02/07 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: One lockloss that I couldn't find the cause of, but it came right back up. Maintenance starting a few minutes early since LLO has been down.
LOG:
Length: 5 hours 42 min
Cause: Unknown so far
Back to observing at 12:51.
TITLE: 02/07 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY: Wind has died down and the PIs seem to have calmed down.
TITLE: 02/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: A bit of a rough go with PI modes after the lockloss early in the shift. After working through that, we are back to Observing for ~1 hour.
LOG: See previous aLogs.
Ran A2L while LLO is still down.
We are back to Observing, but PI mode 26 continues to be unresponsive. It is currently elevated but not rising. I'll continue to monitor it, but I still can't damp it.
Was finally able to damp this remotely. Travis has edited, saved, and loaded the gain that worked (+4000) into the SUS_PI guardian. The phase that worked is -30 deg. Thanks Travis.
This mode is suddenly finicky (it hasn't been problematic during this run before). To damp it, it required a low gain (~1000) immediately at power-up; I was then able to damp it by finding the phase that most leveled out the mode and stepping the gain up 1000 at a time to around 5000. This is the usual procedure and exactly what Travis did correctly; it just seems we needed a few tries to find the setup that worked, and that it required more finely tuned gain and phase than usual. As Travis mentioned above, once PI modes get very rung up (a bit over 100 on the StripTool for this one, for example) they become unresponsive. They also take a while to damp; Travis's second lockloss was likely due to relocking quickly while the mode was still too rung up to damp.
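As an aside, here is a rough pyepics sketch of that gain-stepping procedure; the PV names are guesses for illustration, not the real PI damping channels:

# Sketch of the gain-stepping procedure described above.
# GAIN_PV and PHASE_PV are hypothetical names, not the real damping PVs.
import time
from epics import caget, caput  # pyepics

GAIN_PV  = "H1:SUS-PI_MODE26_DAMP_GAIN"
PHASE_PV = "H1:SUS-PI_MODE26_DAMP_PHASE"

def step_up_gain(phase_deg, start=1000, stop=5000, step=1000, dwell=60):
    """Set the damping phase, then raise the gain in 1000-count steps,
    pausing at each step to watch the mode respond before going higher."""
    caput(PHASE_PV, phase_deg)
    gain = start
    while gain <= stop:
        caput(GAIN_PV, gain)
        print("gain =", caget(GAIN_PV))
        time.sleep(dwell)
        gain += step

# step_up_gain(-30)  # the phase that worked in this instance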
This mode experienced about a 1 Hz shift a few days ago, likely due to the temperature changes that have been messing up other PI modes too. However, Travis pointed out that it was damped successfully for several days since then using the old settings. I'll take a look at its frequency shift behavior over the last few days and see if anything different happened in the lock before all this started.
Note: while I was writing the alog above, I watched Mode26 start to rise again (about 45 min to an hour into the lock). I stepped the phase along from -30 --> -90 and it turned the mode back down easily.
Went out of Observe for less than 1 minute at 7:01 UTC to reload the Guardian code for this change.
PyCBC analysts, Thomas Dent, Andrew Lundgren
Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel. One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure. (Hence the possible name SPINOSAURUS, for which try Googling.)
The cause is a really strong transient excitation at around 30 Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels. We made some sound files from a selection of these channels:
PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.
Note that the OPLEV accelerometer shows ringing at ~60-something Hz.
Working hypothesis is that the thud is exciting some resonance/relative motion of the input optics which is causing light to be reflected off places where it shouldn't be ..
The frequency of the arches (~34 per second) would indicate that whatever is causing scattering has a motion frequency of about 17Hz (see eg https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
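(For the arithmetic behind that factor of two: the fringe frequency tracks the absolute velocity of the scatterer, which peaks twice per motion cycle, so the arch rate is twice the motion frequency.)

# Quick check of the arch-rate -> motion-frequency relation used above.
arch_rate_hz = 34.0                  # arches per second read off the spectrogram
motion_freq_hz = arch_rate_hz / 2.0  # |velocity| peaks twice per motion cycle
print(motion_freq_hz)                # -> 17.0 Hz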
Maybe someone at the site could recognize what this is from listening to the .wav files?
A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/
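For reference, a minimal gwpy sketch of that kind of thresholding; the exact channel name, band and threshold here are assumptions for illustration:

# Sketch: pick out loud 'thud' times by thresholding a ground-motion channel.
# Channel name, bandpass and threshold are illustrative assumptions.
from gwpy.timeseries import TimeSeries

data = TimeSeries.get("H1:ISI-GND_STS_HAM2_Y_DQ", "2017-01-26 00:00", "2017-01-27 00:00")
rms = data.bandpass(10, 50).rms(1.0)              # band-limit and take 1 s RMS
loud_times = rms.times[rms.value > 5 * rms.value.mean()]
print(loud_times)                                  # candidate event times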
Wow, that is pretty loud; it seems like it is even seen (though just barely) on seismometers clear out at EY, with about the right propagation delay for air or ground propagation in this band (about 300 m/s). Like a small quake near the corner station, or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours, and are they also seen at EY or EX?
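For scale, assuming the ~300 m/s speed quoted above and the ~4 km corner-to-end-station separation:

# Rough propagation delay from the corner station to an end station.
distance_m = 4000.0   # approximate corner-to-end-station separation
speed_mps = 300.0     # air/ground propagation speed quoted above
print(distance_m / speed_mps)   # ~13 s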
It's hard to spot any pattern in the GPS times. As far as I have checked the disturbances are always much stronger in CS/LVEA than in end station (if seen at all in EX/EY ..).
More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/
Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local) https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/ (00:10 local)
Here's a plot which may be helpful as to the times of disturbances in the CS, showing that the great majority of occurrences fall on the 23rd, 26th-27th, and early on the 28th of Jan (all times UTC). This ought to be correlated with local happenings.
The ISI-GND HAM2 channel also has loud triggers at times when there are no strain triggers because the ifo was not observing. The main times I see are approximately (UTC time):
Jan 22 : hours 13, 18, 21-22
Jan 23 : hours 0-1, 20
Jan 24 : hours 0, 1, 3-6, 10, 18-23
Jan 25 : hours 21-22
Jan 26 : hours 17-19, 21-22
Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23
Jan 28 : hours 9-10
Jan 29 : hours 19-20
Jan 30 : hours 17, 19-20
Hmm. Maybe this shows a predominance of times around hour 19-20-21 UTC i.e. 11-12-13 PST. Lunchtime?? And what was special about the 24th and 27th ..
Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and started staying near freezing by the 24th. Fil reported seeing a chunk he thought could be ~200 lbs fall.
Ice Cracking On Roofs?
In addition to the ice/snow falls mentioned by Jim, thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog33199). Beverly Berger emailed me suggesting this could be ice cracking on the roof. We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.
MSR Door Slams?
After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning. Because of this, thought I'd note some times when I heard a noise similar to Thomas' audio; in this case the noise was the door slamming when people were entering the MSR (Mass Storage Room, adjacent to the Control Room; there was a pile of boxes which the door would hit when opened...I have since slid them out of the way). Realize this isn't as big of a force as what Robert mentions or the snow falls, but just thought I'd note some times when people were in/out of the room this morning:
I took a brief look at the times in Corey's previous 'bumps in the night' report; I think I managed to deduce correctly that it refers to UTC times on Jan 13. Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14. There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).
The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels. The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.
The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus! So, a loud enough spinosaurus will also appear in the PSL.
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).
The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel; see the 'HAM2' trace in this plot for the 13th - i.e., if you want to know 'was it a spinosaurus?', check for a spike in HAM2.
Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC:
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav
Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??
Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/
The time mentioned in the last entry turns out to have been during scheduled Tuesday maintenance, when people were indeed in the LVEA doing work (and the ifo was not observing, though locked).
Please see LLO alog 30874.
Robert, Valera, Anamaria
A few further points, using the same methods as in the link above:
1) If H1 had the same jitter noise but the L1 coupling, things would be kind of alright for the current noise/power - see first plot.
2) Comparing to O1 couplings there are some interesting things... We did two tests, one in the beginning and one at the end, after Robert's work. The couplings and, as such, the DARM contributions, are vastly different, see second plot comparing the two injections. Something reduced the coupling between these two points, perhaps Robert's adjustment of IMC WFS offsets. Then it got big again in O2... If we could get to the coupling at the end of O1, life would be grand in psljitterland.
The third plot compares all three "states" - beginning of O1, end of O1, and O2 - in terms of ambient jitter as seen by IMC WFS A DC (in lock) and the coupling functions. Notice that there was something strange going on in the beginning of O1 with these signals: they show ridiculous 60 Hz harmonics - perhaps some grounding/wiring problem that was later fixed.
The fourth plot does the same comparison, but with the periscope motion (for completeness).
As far as L1 is concerned, we have the same coupling and approximately the same noise as in O1, so I'm not adding those plots here.
I suppose it's possible that H1 has always had jitter issues but they were not obvious (I definitely didn't appreciate the scale) ... until the noise was exacerbated by the HPO and the detector noise at higher power was low enough to see it.
One difference between O1 and O2 for H1 is the laser power. We are currently running with 30 W input, whereas it was 22 W for O1. One observation we made during the last commissioning period was that the coupling seemed strongly dependent on the initial alignment.