TITLE: 12/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 71.8Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
All shifts should be like this... weekend! Cheryl handed off a locked H1, and it is approaching 16 hrs of lock. The range looks very good on this stretch: flat, with no glitch drops. useism is just under the 90th percentile and winds are low.
The 1st harmonics of the violin modes (~1 kHz) are high (~1e-15). Perhaps we should address damping these guys down at some point when L1 is down?
Tonight Amber Henry was my copilot at the operator helm. Walked her through some of the basics.
Plot attached:
Thanks. FYI, we no longer need to log PI damping changes (unless, of course, something goes wrong or looks abnormal).
TITLE: 12/11 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 70.4106Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
(Gray, Kawabe, Vorvick)
Wanted to close the loop here (mainly wanted to confirm there were no other channels we had to worry about).
Today Cheryl took care of UNMONITORING the channel (H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON) we observed knocking us out of OBSERVING the last two days.
I went ahead & followed Keita's instructions for checking ALL the channels for this frontend (h1sysecaty1plc2), which Cheryl posted (here), to make sure there were no other channels in h1sysecaty1plc2 we might be missing. The [15] channel list files are at /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2. I ran a lockloss script on all of these files for both OBSERVING drops which happened during Cheryl's shifts (GPS times 1165432802 & 1165531183), and for both of these times the channel noted above is the only one which makes an obvious change (see attachments #1 & #2 for both times).
So we should now be good (unless a new channel makes a change on us).
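For reference, this is roughly the kind of check the lockloss script performs. Below is a minimal sketch (not Keita's actual script), assuming gwpy/NDS access and that each channel-list file contains one EPICS channel name per line:

# Hypothetical sketch: flag channels that change value around the OBSERVING drops.
# Assumes gwpy + NDS access and one channel name per line in each list file.
import glob
from gwpy.timeseries import TimeSeries

GPS_DROPS = [1165432802, 1165531183]   # OBSERVING drops during Cheryl's shifts
PAD = 60                               # seconds to inspect on either side

for listfile in sorted(glob.glob('/ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2/*')):
    with open(listfile) as f:
        channels = [line.strip() for line in f if line.strip()]
    for chan in channels:
        for gps in GPS_DROPS:
            data = TimeSeries.fetch(chan, gps - PAD, gps + PAD)
            if data.value.max() != data.value.min():   # the channel changed value
                print('%s changed around GPS %d (listed in %s)' % (chan, gps, listfile))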
Terra called in to check on H1. I hadn't looked outside, so I wasn't aware that we'd had sleet this morning after I got here; she told me that road conditions in town are not good. We talked about H1, which is running well, so she did not come to the site. She emailed Keita, so he's aware.
The local times shown translate to 15:00 UTC yesterday through 21:00 UTC today.
After reading Keita's alog 32025, I used the template to passively look at coherence over this lock. Pitch is at 0.9 and has been for a while; interestingly, this was not the case in the previous lock, though yaw was worse at the end of the last lock.
Going out of Observing now to reduce the coherence made sense in light of the long lock we just had.
Passive a2l checks:
Here's what's odd: pitch and yaw coherences are better an hour after running a2l, but the entire 10 Hz to 40 Hz region is elevated, and the range has gone down.
Plot start time is 20:32:06 UTC.
New plot (start time 20:43:21 UTC): coherence is higher but 10-40 Hz is not elevated.
Smaller coherence with larger DARM noise at 20 Hz means that the excess noise is not due to a2l coupling. When something other than a2l coupling dominates the noise at 20 Hz, running or not running a2l is irrelevant, so please don't run it.
Again I'll say this: it's a perfect project for a fellow to write a script to plot the coherence and the DARM BRMS for 15-25 Hz or so vs. time (a rough sketch of such a script follows after these notes). With such a script it would be easy for operators to determine (and show to others) whether running a2l would be beneficial or not.
Anyway, going out of OBSERVE to run the a2l script is OK when all of the below are satisfied.
Do it sparingly; single-IFO data is still valuable for CW.
(Added later at 13:30 local time: If the coherence is super bad like Cheryl's "before" plot for PIT, there's not much point in waiting hours to run a2l, but make sure that the alignment itself is not suspect.)
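Along the lines of the script suggested above, here is a minimal sketch of a band-averaged coherence and DARM BRMS trend. The DARM and angular-control channel names, the 15-25 Hz band edges, the chunk length, and the FFT settings are all assumptions for illustration, not the eventual tool:

# Sketch: band-averaged DARM/ASC coherence and DARM BRMS in 15-25 Hz, trended vs time.
from gwpy.timeseries import TimeSeries

# e.g. trend the four hours leading up to one of the OBSERVING drops noted above
start, end = 1165432802 - 4 * 3600, 1165432802
chunk = 600                       # 10-minute chunks
band = (15.0, 25.0)               # Hz

darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end).resample(256)
asc = TimeSeries.get('H1:ASC-DHARD_P_OUT_DQ', start, end).resample(256)  # hypothetical pitch channel

for t0 in range(int(start), int(end), chunk):
    d = darm.crop(t0, t0 + chunk)
    a = asc.crop(t0, t0 + chunk)
    coh = d.coherence(a, fftlength=10, overlap=5)
    freqs = coh.frequencies.value
    sel = (freqs >= band[0]) & (freqs <= band[1])
    band_coh = coh.value[sel].mean()                     # band-averaged coherence
    band_rms = d.bandpass(*band).rms(chunk).value[0]     # DARM BRMS over the chunk
    print('GPS %d  coherence %.2f  BRMS %.3g' % (t0, band_coh, band_rms))

Trending both quantities together would show at a glance whether elevated 15-25 Hz noise comes with rising angular coherence (a2l would help) or not (leave it alone).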
We are running out of control range in the LVEA, so I have incremented the heat:
Zone 1A: increased to 10 mA from 9 mA.
Zone 5: increased to 10 mA from 9 mA.
Time: Sun Dec 11 17:26:10 UTC 2016
Location: 66km SW of Kirakira, Solomon Islands; LAT: -11.0, LON: 161.6
Magnitude: 5.3
R-Wave Velocity (micro m/s): 0.597189
TITLE: 12/11 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 71.6801Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 4 mph gusts, 2 mph 5-min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
TITLE: 12/11 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: We were locked until an earthquake in the Pacific took us down
LOG:
15:00 Lockloss from earthquake; peak EQ-band RMS was about 2 microns. I was pretty much able to start trying to lock again immediately.
15:20 ALS is hopeless, so I started initial alignment; we are now (16:00) almost back to DC readout.
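For reference, one way to pull up the EQ-band ground motion around a lockloss like this (a sketch only; the BLRMS channel name is an assumption, and any corner- or end-station ground seismometer channel would do):

# Sketch: trend an EQ-band (30-100 mHz) ground BLRMS channel around the 15:00 UTC lockloss.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

start, end = 'Dec 11 2016 14:00', 'Dec 11 2016 17:00'
blrms = TimeSeries.get('H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M', start, end)  # assumed channel name

print('peak EQ-band ground motion: %.2f (channel units)' % blrms.value.max())
plot = blrms.plot()
ax = plot.gca()
ax.set_ylabel('EQ-band ground BLRMS (30-100 mHz)')
ax.axvline(float(to_gps('Dec 11 2016 15:00')), color='r', linestyle='--')  # lockloss time
plot.savefig('eq_band_lockloss.png')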
Cheryl observed an instance of H1 being dropped out of OBSERVING due to SDF changes tracked down to the Computer/SDF Node: sysecaty1plc2
(This was for the Yarm. We noticed this same issue a week ago for the analogous Computer/Node for the Xarm: sysecatx1plc2.)
I continued the work of figuring out which pesky channel is dropping us out of OBSERVING. The first thing I did was look at the (3) channels Keita found for X last week and check whether the Y-arm counterparts changed today; I found nothing in dataviewer. I then ran the scripts Cheryl ran and came up with the same result of seeing a change in the channel H1:FEC-1031_SDF_DIFF_CNT. But this is just a name for a channel SDF uses.
I then just went to where sysecaty1plc2 is on medm. This is related to Beckhoff, so maybe the channel can be tracked down by snooping around medm land. To get to a baseline/starting point, I went to:
SITE MAP / SYS / EtherCAT overview / H1 Y1 PLC2 /
From here you have several different subsystems (Als, Asc, Isc, Lsc, Sys). So I went through all of these subsystems and the screens nested within them. The first thing I did was to find the "*_ERROR_FLAG" status box for each subsystem (it's green for all, and I reckon if there were a change to the system, it would go red). I grabbed this channel for all the subsystems mentioned above, and the only one which changed when we dropped from OBSERVING was the Als one. I then played the same game: go into the nested windows and trend the "*_ERROR_FLAG" channels for each component within Als. Ultimately, I ended up finding a single channel which had activity around the time in question. It was found here:
SITE MAP / SYS / EtherCAT overview / H1 Y1 PLC2 / Als / Y / Fibr / Lock / Temperaturecontrols (i.e. H1ALS_Y1PLC2_Y_FIBR_LOCK_TEMPERATURECONTROLS.adl)
And on this medm, the channel in question is: H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON
I'm not saying this is the ONLY channel which could be the culprit for the OBSERVING drop, but this is the one I saw drop out at that time (see attachment #1). BUT there is a caveat: if I look at 20 min before the drop, the ALS channel in question had some similar drop-outs (see attachment #2). For the earlier one, the drops only lasted about 10 sec (attachment #3). For the drops which took us out of OBSERVING (attachment #1), we dropped out of OBSERVING after 15 sec of drops (and overall the ALS ON switch went off/on for about 40 sec). So maybe the SDF changes have to persist for a certain amount of time before latching us out of OBSERVING?
As another check, I looked at the last 12 hrs of this lock, and the only time H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON had these fits of turning OFF for a handful of seconds was in that 20-minute period when we dropped out.
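For anyone repeating that 12-hour check, here is a minimal sketch (assuming the Beckhoff channel is reachable over NDS; the end time is taken as one of the GPS drop times noted above):

# Sketch: find stretches where the TEMPERATURECONTROLS_ON switch read OFF (0)
# over the last ~12 hours of the lock.
import numpy as np
from gwpy.timeseries import TimeSeries

chan = 'H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON'
end = 1165531183            # GPS of the second OBSERVING drop (from above)
start = end - 12 * 3600     # last 12 hours of the lock

data = TimeSeries.fetch(chan, start, end)
off = data.value < 0.5                          # treat anything below 0.5 as OFF

# walk the contiguous OFF stretches and print their start times and durations
edges = np.flatnonzero(np.diff(off.astype(int)))
bounds = np.concatenate(([0], edges + 1, [off.size]))
for i0, i1 in zip(bounds[:-1], bounds[1:]):
    if off[i0]:
        t0 = data.times.value[i0]
        print('OFF from GPS %.0f for %.1f s' % (t0, (i1 - i0) * data.dt.value))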
Question: Is this enough to warrant NOT MONITORING H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON? Or should we keep searching?
Here's a plot of the a2l coherence at the beginning of my shift.
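For reference, a stripped-down version of the kind of passive coherence check used for plots like this (not the actual template from alog 32025; the angular-control channel name, sample rate, FFT settings, and time stretch are assumptions):

# Sketch: coherence spectrum between DARM and a hypothetical angular control channel.
from gwpy.timeseries import TimeSeries

start, end = 'Dec 12 2016 00:00', 'Dec 12 2016 00:10'      # placeholder ~10 min at shift start
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end).resample(256)
asc = TimeSeries.get('H1:ASC-DHARD_P_OUT_DQ', start, end).resample(256)  # hypothetical pitch channel

coh = darm.coherence(asc, fftlength=10, overlap=5)
plot = coh.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_xlim(10, 100)
ax.set_ylim(0, 1)
ax.set_ylabel('Coherence with DARM')
plot.savefig('a2l_pit_coherence.png')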