Maintenance activities are complete. Reset Fiber Polarization on X-Arm. Starting relocking.
After detecting failures on /Ligo, we powered off CDSFS0, reviewed all cable connections, and verified that the controller card was seated correctly on the board. After powering it back on, we manually remounted and re-exported /Ligo. We also verified that all workstations were reconnected.
Richard, Patrick, Evan
We refocused the SRM digital camera (cam17) while watching IR flashes on the SRM. The camera aperture was also stopped down a bit.
I've unmonitored all elements of the ASC DC5 input matrix for OBSERVE, in the hope that this will be somewhat useful for a passive ASC sensing measurement.
Unmonitored channels are H1:ASC-INMATRIX_P_16_XX and H1:ASC-INMATRIX_Y_16_XX, where XX is 1, 2, 3, ... 33.
DC5 is not used at LHO; the outputs of the DC5 filters are still monitored (attached), and as soon as the output is turned on we're kicked out of observing, so this has zero impact on IFO performance.
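As a quick reference, here is a minimal MATLAB sketch that just enumerates the 66 channel names implied above (it assumes the element index simply runs 1 through 33 for both the P and Y matrices, as stated):

% build the list of unmonitored ASC DC5 input matrix channels
% (assumes the index runs 1..33 for both pitch and yaw, as stated above)
chans = {};
for dof = {'P', 'Y'}
    for ii = 1:33
        chans{end+1} = sprintf('H1:ASC-INMATRIX_%s_16_%d', dof{1}, ii); %#ok<SAGROW>
    end
end
fprintf('%s\n', chans{:})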
WP 6383
Carlos, Jim
The wireless access points at EX and EY have been unplugged and the switch ports turned on. To reactivate a WAP for CDS access, the red ethernet cable will need to be plugged back into the switch, as outlined in the VEAsWirelessAccess page in the CDS Wiki. Please remember to disconnect it when done!
I've unplugged both mid stations as well.
Carlos reports the parking lot at End-Y is very icy. Please be careful.
17:31 Lockloss due to maintenance activities
Ops Shift Log: 12/06/2016, Day Shift 16:00 – 00:00 UTC (08:00 – 16:00 PT)
State of H1: IFO locked at NOMINAL_LOW_NOISE, 27.9 W, 62.1 Mpc
Intent Bit: Observing
Wind: Ranging from a Light to Moderate Breeze (7-14 mph), with light snow
0.03 – 0.1 Hz: Currently between 0.03 – 0.09 um/s
0.1 – 0.3 Hz: Currently at 0.8 um/s
Outgoing Operator: TJ
TITLE: 12/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 59.5354Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Aside from one lockloss that I can't explain, we have been locked the whole shift. The range fell starting around 12:30 and has come back up a bit, but not fully. I have run DARM_a2l_passive.xml every once in a while and it came out with different results almost every time, so I wasn't sure if I should run it at the beginning of the shift; I did run it after LLO dropped out and the range wasn't recovering. Not sure if it helped.
TCSY chiller flow has been an issue throughout the night, but seems to get stable for a few hours at a time.
LOG:
*All Observing times are with ion pump near BSC8 running.
Observing with a 3.5hr lock. Range has taken a dip that seems to be coincident with the Hanford traffic again, but is on its way back up. LLO is also Observing after a long night of close tries.
Be careful driving in; the roads are icy.
Not sure of the cause yet; everything seemed good and normal. Running lockloss plots now, though, and I'll update if I find anything.
But don't worry, we are back to Observing at 10:36 UTC.
I haven't seen anything of note for the lockloss. I checked the usual templates, with some screenshots of them attached.
This seems like another example of the SR3 problem. (alog 32220 FRS 6852)
If you want to check for this kind of lockloss, zoom the time axis right around the lockloss time to see if the SR3 sensors change fractions of a second before the lockloss.
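For anyone who wants to do this programmatically, here is a rough MATLAB sketch using the same get_data helper as in the CO2Y script further down in this log. The lockloss GPS time is a placeholder, and the top-mass OSEM channel names are my guess at the usual naming, so check them against the SR3 MEDM screen before trusting the result:

% zoom in on the SR3 top-mass OSEM signals around a lockloss
t_ll = 1165000000;             % placeholder: put the actual lockloss GPS time here
t0 = t_ll - 5;                 % 5 s before the lockloss
t1 = t_ll + 1;                 % 1 s after
osems = {'T1', 'T2', 'T3', 'LF', 'RT', 'SD'};
chan = cellfun(@(s) sprintf('H1:SUS-SR3_M1_DAMP_%s_IN1_DQ', s), osems, ...
    'UniformOutput', false);
[data, t, info] = get_data(t0, t1, chan, []);
% approximate time axis relative to the lockloss, assuming uniform sampling
tt = (0:size(data,1)-1)' / size(data,1) * (t1 - t0) + (t0 - t_ll);
figure
plot(tt, data); grid on
xlabel('Time relative to lockloss (s)')
ylabel('SR3 top-mass OSEM signals (counts)')
legend(osems)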
J. Kissel, B. Weaver, T. Sadecki
Just for reference, I include a lockloss that was definitely *not* caused by the SR3 glitching, for future comparison and to help distinguish whether this SR3 glitch has happened or not.
Also, remember: although the OPS wiki's instructions suggest that one must and can only use lockloss2, not everyone has the alias yet for this more advanced version. You can make the plots and do everything you need with the more basic version:
lockloss -c /ligo/home/ops/Templates/Locklosses/channels_to_look_at_SR3.txt select
It would also be great to get the support of @DetChar on this one. The fear is that these glitches begin infrequently, but get successively more frequent. Once they do, we should consider replacing electronics. The fishy thing, though, is that LF and RT are on separate electronics chains, given the cable layout of HAM5 (see D1101917). Maybe these glitches are physical motion? With statistics of only two events, it's unclear whether LF and RT just *appear* to be the culprits, or whether a random set of OSEMs is glitching.
See my note in alog 32220, namely that Sheila and I looked again and we see that the glitch is on the T1 and LF coils, which share a line of electronics. The second lockloss TJ started this log with (12/06) is only inconclusively linked to SR3 - no "glitches" like the first one (12/05), but instead all 6 top-mass SR3 OSEMs show motion before the lockloss.
Sheila, Betsy
Attached is a ~5 day trend of the SR3 top-stage OSEMs. T1 and LF do have an overall step in the min/max of their signals, which happened at the time of the lockloss that showed the SR3 glitch (12/05 16:02 UTC)...
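If it is useful to regenerate this kind of plot, here is a rough sketch along the lines of the CO2Y script below (assuming the same get_data helper also accepts the .min/.max minute-trend suffixes, and the usual top-mass OSEM channel naming):

% ~5 day min/max minute trend of the SR3 T1 and LF top-mass OSEMs
t1 = 1165250000;               % placeholder end GPS time; use the time of interest
t0 = t1 - 5*24*3600;           % 5 days earlier
chan = {'H1:SUS-SR3_M1_DAMP_T1_IN1_DQ.min,m-trend', ...
    'H1:SUS-SR3_M1_DAMP_T1_IN1_DQ.max,m-trend', ...
    'H1:SUS-SR3_M1_DAMP_LF_IN1_DQ.min,m-trend', ...
    'H1:SUS-SR3_M1_DAMP_LF_IN1_DQ.max,m-trend'};
[data, t, info] = get_data(t0, t1, chan, []);
plot((1:size(data,1))/60, data); grid on
xlabel('Time (hr)')
ylabel('SR3 T1/LF OSEM min/max (counts)')
legend('T1 min', 'T1 max', 'LF min', 'LF max')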
The TCSY chiller flow seems to have been struggling for the last 40 min or so. This doesn't seem to be the normal air-bubble issue, because it is not recovering as it normally would. I saw a WP for the replacement of this flow meter, but I didn't see an alog stating that it happened.
Attached is an hour trend of the flow rate.
Seems to have stabilized itself now; back at 3.0 gpm.
Eh, this actually is kinda the usual "thing" going on with this sensor of late... It drops low for a while (~30-60 min) and then eventually recovers. The work permit we have open is to fix it next Tuesday.
I was wondering if this was coming from the CO2 controller chassis, D1200745. I had a look to see if there were accompanying glitches on the CO2Y laser temperature channel which might indicate a common cause. Whilst there are some glitches that show up simultaneously on both channels, there are many that are not simultaneous. It is far from obvious that there is a common cause for the flow rate variations.
% check to see if the CO2Y laser flow meter is broken or if there is a
% problem with the electronics instead. Compare to the CO2Y laser temperature.
t0 = 1165166030 - 72*3600;    % GPS start time: 72 hours before the reference time
t1 = t0 + 2*24*3600;          % look at a 48-hour stretch
chan = {'H1:TCS-ITMY_CO2_LASERTEMPERATURE.mean,m-trend', ...
    'H1:TCS-ITMY_CO2_FLOWRATE.mean,m-trend'};
[data, t, info] = get_data(t0, t1, chan, []);   % fetch minute-trend data
subplot(2,1,1)
plot((1:size(data,1))/60, data(:, 1)); grid on
axis([0 48 23.72 23.82])
xlabel('Time (hr)')
ylabel('CO2Y laser temperature (C)')
title('CO2Y laser channel glitches - correlated or not?')
subplot(2,1,2)
plot((1:size(data,1))/60, data(:, 2)); grid on
axis([0 48 2 3.5])
xlabel('Time (hr)')
ylabel('CO2Y flow rate (gpm)')
orient tall
print('-dpdf', 'CO2Y_flow_rate_errors.pdf')
TITLE: 12/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65.1367Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.36 μm/s
QUICK SUMMARY: Travis handed me a locked IFO, let's hope we can keep it that way.
For those headed to the site: There is a large amount of slush on the ground and the temperature is just now getting below freezing, so I imagine it will turn to thick ice by the time people show up. Be careful!
Lockloss due to PSL tripping off. Currently on the phone with Jason working to remotely restart if possible. Will update as I know more.
Jason noticed that the chiller was complaining about low water level as he was bringing it back up. This is apparently because when the chiller trips off, it burps a bunch of water onto the floor. I topped the Xtal chiller off with 300 mL of H2O.
Filed FRS ticket 6853 for this trip.
Also, back to Locking now.
Also, Rana and a photographer are on site. I let them onto the Observation Deck to take pics while we are relocking.
Sorting through the myriad of signals leads me to think that the laser trip was due to the NPRO passing out, although it is possible that the flow rate in head 3 dipped below the 0.4 lpm limit. Head3FlowNPRO.png suggests that the NPRO tripped out before the flow rate in head 3 reached its limit.
When restarting the laser last night, the status screen on the PSL Beckhoff PC indicated a trip of the "Head 1-4 Flow" interlock; although looking at the graphs Peter posted above it appears that the laser lost power before any of the flow sensors dropped below the trip threshold.
Further forensics: Attached are trends of the laser head temperatures around the time of last night's PSL trip. To my eye nothing looks out of the ordinary.
PT170 and PT180 were installed in March 2016 when the X/Y beam manifold volumes were vented. PT140 was installed in Sept. 2016. Attached is a trend from the time of installation. It looks like PT170 & PT180 have been drifting up since the closing and re-opening of isolation GVs 5/7 for the HAM6 vent. We burped in accumulated gas from the gate annuli, but I'm not sure what the second hump from April 9th is. No log entry found.