Ops Shift Log: 12/06/2016, Day Shift 16:00 – 00:00 UTC (08:00 – 16:00 PT)
State of H1: IFO locked at NOMINAL_LOW_NOISE, 27.9W, 62.1Mpc
Intent Bit: Observing
Wind: Ranging from a Light to Moderate Breeze (7-14 mph), with light snow
0.03 – 0.1Hz: Currently between 0.03 – 0.09 um/s
0.1 – 0.3Hz: Currently at 0.8 um/s
Outgoing Operator: TJ
TITLE: 12/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 59.5354Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Aside from one lockloss that I can't explain, we have been locked the whole shift. The range fell starting around 12:30 and has come up a bit, but not fully. I have run DARM_a2l_passive.xml every once in a while, and it came out with different results almost every time, so I wasn't sure whether to run it at the beginning of the shift. I did run it after LLO dropped out and the range wasn't recovering. Not sure if it helped.
TCSY chiller flow has been an issue throughout the night, but seems to get stable for a few hours at a time.
LOG:
*All Observing times are with ion pump near BSC8 running.
Observing with a 3.5hr lock. The range has taken a dip that seems to be coincident with the Hanford traffic again, but is on its way back up. LLO is also Observing after a long night of close attempts.
For those driving in, be careful: the roads are icy.
Not sure of the cause yet; everything seemed normal. Running lockloss plots now, and I'll update if I find anything.
The TCSY chiller flow has been struggling for the last 40 min or so. This doesn't seem to be the normal air-bubble issue, because it is not recovering as it normally would. I saw a WP for the replacement of this flow meter, but I didn't see an alog stating that it happened.
Attached is an hour trend of the flow rate.
Seems to have stabilized itself now, back at 3.0 gpm.
Eh, this actually is kinda the usual "thing" going on with this sensor of late... It drops low for a while (~30-60 min) and then eventually recovers. The work permit we have open is to fix it next Tuesday.
I was wondering if this was coming from the CO2 controller chassis, D1200745. I had a look to see if there were accompanying glitches on the CO2Y laser temperature channel which might indicate a common cause. Whilst there are some glitches that show up simultaneously on both channels, there are many that are not simultaneous. It is far from obvious that there is a common cause for the flow rate variations.

% check to see if the CO2Y laser flow meter is broken or if there is a
% problem with electronics instead. Compare to CO2Y laser temperature
t0 = 1165166030 - 72*3600;
t1 = t0 + 2*24*3600;
chan = {'H1:TCS-ITMY_CO2_LASERTEMPERATURE.mean,m-trend', ...
'H1:TCS-ITMY_CO2_FLOWRATE.mean,m-trend'};
[data, t, info] = get_data(t0,t1,chan,[]);
subplot(2,1,1)
plot((1:size(data,1))/60, data(:, 1)); grid on
axis([0 48 23.72 23.82])
xlabel('Time (hr)')
ylabel('CO2Y laser temperature (C)')
title('CO2Y laser channel glitches - correlated or not?')
subplot(2,1,2)
plot((1:size(data,1))/60, data(:, 2)); grid on
axis([0 48 2 3.5])
xlabel('Time (hr)')
ylabel('CO2Y flow rate (gpm)')
orient tall
print('-dpdf', 'CO2Y_flow_rate_errors.pdf')
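As a rough way to quantify the "correlated or not?" question, one could also count how often both trend channels jump within the same minute. This is only a sketch that reuses the data array returned by get_data in the script above; the 8x-median threshold and the one-minute coincidence window are my own choices, not part of the analysis above, and would need tuning.
% Sketch: flag minutes where each channel's minute-to-minute change is
% large compared to its typical change, then count overlapping minutes.
% The 8x-median threshold is an arbitrary starting point.
temp = data(:, 1);                    % CO2Y laser temperature (m-trend)
flow = data(:, 2);                    % CO2Y flow rate (m-trend)
dT = abs(diff(temp));
dF = abs(diff(flow));
glitchT = find(dT > 8*median(dT));    % candidate temperature jumps
glitchF = find(dF > 8*median(dF));    % candidate flow-rate jumps
coinc = intersect(glitchT, glitchF);  % jumps in the same minute
fprintf('%d temperature jumps, %d flow jumps, %d coincident\n', ...
    numel(glitchT), numel(glitchF), numel(coinc));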
TITLE: 12/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65.1367Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY: Travis handed me a locked IFO, let's hope we can keep it that way.
For those headed to the site: There is a large amount of slush on the ground and the temperature is just now getting below freezing, so I imagine it will turn to thick ice by the time people show up. Be careful!
TITLE: 12/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 72.9403Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Other than the lockloss due to PSL trip, not a bad evening for locking.
LOG:
See previous aLogs for the play-by-play tonight.
For DaveB, there was a /ligo server connection interrupt at 1:32 UTC.
For DetChar, Keita wanted to acknowledge/remind that Observing lock stretches for today have an ion pump running at BSC8.
PSL dust monitor is still alarming periodically.
7:57 GRB alert.
Straightforward relocking after the PSL was reset. No initial alignment (IA) required, just a typical bit of tweaking of ALS and PRMI.
Lockloss due to PSL tripping off. Currently on the phone with Jason working to remotely restart if possible. Will update as I know more.
Jason noticed that the chiller was complaining about low water level as he was bringing it back up. This is apparently due to the fact that when the chiller trips off, it burps a bunch of water onto the floor. I topped the Xtal chiller off with 300 mL H2O.
Filed FRS ticket 6853 for this trip.
Also, back to Locking now.
Also, Rana and photographer are on site. I let them onto the Observation Deck to take pics while we are relocking.
Sorting through the myriad of signals leads me to think that the laser trip was due to the NPRO passing out, although it is possible that the flow rate in head 3 dipped below the 0.4 lpm limit. Head3FlowNPRO.png suggests that the NPRO tripped out before the flow rate in head 3 reached its limit.
When restarting the laser last night, the status screen on the PSL Beckhoff PC indicated a trip of the "Head 1-4 Flow" interlock; although looking at the graphs Peter posted above it appears that the laser lost power before any of the flow sensors dropped below the trip threshold.
Further forensics: Attached are trends of the laser head temperatures around the time of last night's PSL trip. To my eye nothing looks out of the ordinary.
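For anyone who wants to redo this timing comparison, below is a minimal sketch in the same style as the flow-meter script above: pull second trends of the head 3 flow and the laser power around the trip and plot them on a common time axis. The channel names and the GPS time below are placeholders that I have not verified; substitute the actual PSL channel names and the trip time before using it.
% Sketch: compare when the head 3 flow dipped vs. when the laser power
% dropped around the PSL trip.
% CHANNEL NAMES AND GPS TIME BELOW ARE PLACEHOLDERS, not verified.
t_trip = 1165166030;                  % PLACEHOLDER - substitute the trip GPS time
t0 = t_trip - 600;                    % 10 minutes before the trip
t1 = t_trip + 600;                    % 10 minutes after the trip
chan = {'H1:PSL-OSC_HEAD3_FLOW.mean,s-trend', ...   % placeholder channel name
        'H1:PSL-OSC_NPRO_POWER.mean,s-trend'};      % placeholder channel name
[data, t, info] = get_data(t0,t1,chan,[]);
tt = (1:size(data,1)) - 600;          % seconds relative to the assumed trip time
subplot(2,1,1)
plot(tt, data(:, 1)); grid on
ylabel('Head 3 flow (lpm)')
title('PSL trip: head 3 flow vs. laser power timing')
subplot(2,1,2)
plot(tt, data(:, 2)); grid on
xlabel('Time relative to trip (s)')
ylabel('Laser power (arb.)')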
We have been locked in Observing for almost 2 hours. No issues to report.
I used the lockloss2 script, which automatically checks for sus saturations and plots them using the lockloss tool, and saw that one of the three locklosses (2016-12-05 16:02:42 UTC) in the last day or so was probably caused by a glitch on SR3. The attached screenshot shows the timeline; there is clearly a glitch on the top mass of SR3 about 0.2 seconds before the lockloss.
The dither outputs (which we use for the cage servo) don't show anything unusual until after the lockloss, which means that this is not a cage servo problem. Looking at the top mass OSEMINF channels, LF and RT are the two that seem to glitch first, at about the same time.
I've added a lockloss template /ligo/home/ops/Templates/Locklosses/channels_to_look_at_SR3.txt for any operators who have an unexplained lockloss and want to check whether it is similar to this one.
Sheila and I looked again at this particular lockloss (2016-12-06 10:05:39 UTC) and agree that the glitches that likely caused the lockloss are actually on the T1 and LF top stage OSEMs. These are indeed on the same set of cabling, satellite amp, and driver run. See attached for an updated lockloss plot, this time with the OSEMINF channels. We'll keep watching locklosses to see if this happens more.
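For operators who want to make this check by hand rather than with the template, here is a sketch in the style of the scripts above that plots the six SR3 top-mass (M1) OSEMINF signals for a few seconds around a lockloss. The GPS time is a placeholder, and the channel names follow the usual SUS naming convention but should be checked against channels_to_look_at_SR3.txt before trusting them.
% Sketch: plot the six SR3 M1 OSEMINF signals just before a lockloss and
% look for a step/glitch a fraction of a second before the lock is lost.
% GPS TIME IS A PLACEHOLDER; channel names are assumed, not verified.
t_ll = 1165166030;                    % PLACEHOLDER - substitute the lockloss GPS time
t0 = t_ll - 5;                        % 5 seconds before the lockloss
t1 = t_ll + 2;                        % 2 seconds after
osems = {'T1', 'T2', 'T3', 'LF', 'RT', 'SD'};
chan = strcat('H1:SUS-SR3_M1_OSEMINF_', osems, '_OUT_DQ');
[data, t, info] = get_data(t0,t1,chan,[]);
for k = 1:numel(osems)
    subplot(6,1,k)
    plot(data(:, k)); grid on
    ylabel(osems{k})
end
xlabel('Samples (zoom to fractions of a second before the lockloss)')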
PT170 and PT180 were installed in March 2016 when the X/Y beam manifold volumes were vented. PT140 was installed in Sept. 2016. Attached is a trend from the time of installation. It looks like PT170 & PT180 have been drifting up since the closing and re-opening of isolation GVs 5/7 for the HAM6 vent. We burped in accumulated gas from the gate annuli, but I'm not sure what the second hump from April 9th is. No log entry found.
But don't worry, we are back to Observing at 10:36 UTC.
I haven't seen anything of note for the lockloss. I checked the usual templates, with some screenshots of them attached.
This seems like another example of the SR3 problem. (alog 32220 FRS 6852)
If you want to check for this kind of lockloss, zoom the time axis right around the lockloss time to see if the SR3 sensors change fractions of a second before the lockloss.
See my note in alog 32220, namely that Sheila and I looked again and we see that the glitch is on the T1 and LF coils, which share a line of electronics. The second lockloss TJ started with in this log (12/06) is somewhat inconclusively linked to SR3: no "glitches" like the first one (12/05), but instead all 6 top mass SR3 OSEMs show motion before the lockloss.
Sheila, Betsy
Attached is a ~5 day trend of the SR3 top stage OSEMs. T1 and LF do have an overall step in the min/max of their signals at the time of the lockloss that showed the SR3 glitch (12/05 16:02 UTC)...
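For reference, here is a sketch of how that kind of min/max trend can be pulled, in the same style as the scripts above. The end time is a placeholder and the _OUT16 channel suffix is an assumption; check the names against the DAQ channel list before running.
% Sketch: 5-day minute-trend min/max of the six SR3 M1 OSEMs, to look for
% the step in T1 and LF around the 12/05 16:02 UTC lockloss.
% END TIME IS A PLACEHOLDER; the _OUT16 channel suffix is an assumption.
t1 = 1165166030;                      % PLACEHOLDER end time (GPS)
t0 = t1 - 5*24*3600;                  % 5 days earlier
osems = {'T1', 'T2', 'T3', 'LF', 'RT', 'SD'};
for k = 1:numel(osems)
    chan = {['H1:SUS-SR3_M1_OSEMINF_' osems{k} '_OUT16.min,m-trend'], ...
            ['H1:SUS-SR3_M1_OSEMINF_' osems{k} '_OUT16.max,m-trend']};
    [data, t, info] = get_data(t0,t1,chan,[]);
    subplot(6,1,k)
    plot((1:size(data,1))/60, data); grid on   % min and max overlaid
    ylabel(osems{k})
end
xlabel('Time (hr)')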