H1 TCS
thomas.shaffer@LIGO.ORG - posted 01:05, Tuesday 06 December 2016 - last comment - 09:30, Wednesday 07 December 2016(32230)
TCSY Chiller flow issue

The TCSY chiller flow seems to have been struggling for the last 40 minutes or so. This doesn't seem to be the usual air-bubble issue, because it is not recovering as it normally would. I saw a WP for the replacement of this flow meter, but I didn't see an alog stating that it happened.

Attached is an hour trend of the flow rate.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 02:01, Tuesday 06 December 2016 (32231)

Seems to have stabilized itself now; back at 3.0 gpm.

betsy.weaver@LIGO.ORG - 21:07, Tuesday 06 December 2016 (32288)

Eh, this actually is kind of the usual "thing" going on with this sensor of late... It drops low for a while, ~30-60 minutes, and then eventually recovers. The work permit we have open is to fix it next Tuesday.

aidan.brooks@LIGO.ORG - 09:30, Wednesday 07 December 2016 (32304)

I was wondering if this was coming from the CO2 controller chassis, D1200745. I had a look to see if there were accompanying glitches on the CO2Y laser temperature channel which might indicate a common cause. Whilst there are some glitches that show up simultaneously on both channels, there are many that are not simultaneous. It is far from obvious that there is a common cause for the flow rate variations.

 

% Check to see if the CO2Y laser flow meter is broken or if there is a
% problem with the electronics instead.  Compare to the CO2Y laser temperature.

t0 = 1165166030 - 72*3600;    % GPS start time, 72 hours before the reference time
t1 = t0 + 2*24*3600;          % two days of data

% minute trends, so one sample per minute
chan = {'H1:TCS-ITMY_CO2_LASERTEMPERATURE.mean,m-trend', ...
        'H1:TCS-ITMY_CO2_FLOWRATE.mean,m-trend'};

[data, t, info] = get_data(t0, t1, chan, []);

subplot(2,1,1)
plot((1:size(data,1))/60, data(:,1)); grid on    % minutes -> hours on the x axis
axis([0 48 23.72 23.82])
xlabel('Time (hr)')
ylabel('CO2Y laser temperature (C)')
title('CO2Y laser channel glitches - correlated or not?')

subplot(2,1,2)
plot((1:size(data,1))/60, data(:,2)); grid on
axis([0 48 2 3.5])
xlabel('Time (hr)')
ylabel('CO2Y flow rate (gpm)')

orient tall
print('-dpdf', 'CO2Y_flow_rate_errors.pdf')

Images attached to this comment
Non-image files attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 00:10, Tuesday 06 December 2016 (32229)
Ops Owl Shift Transition

TITLE: 12/06 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 65.1367Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.36 μm/s
QUICK SUMMARY: Travis handed me a locked IFO; let's hope we can keep it that way.

For those headed to the site: There is a large amount of slush on the ground and the temperature is just now getting below freezing, so I imagine it will have turned to thick ice by the time people show up. Be careful!

H1 General
travis.sadecki@LIGO.ORG - posted 00:02, Tuesday 06 December 2016 (32228)
OPS Eve Shift Summary

TITLE: 12/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 72.9403Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:  Other than the lockloss due to PSL trip, not a bad evening for locking. 
LOG:

See previous aLogs for the play-by-play tonight.

For DaveB, there was a /ligo server connection interrupt at 1:32 UTC.

For DetChar, Keita wanted to acknowledge/remind that Observing lock stretches for today have an ion pump running at BSC8.

PSL dust monitor is still alarming periodically.

7:57 GRB alert.

H1 General
travis.sadecki@LIGO.ORG - posted 23:27, Monday 05 December 2016 (32227)
Observing at 7:26 UTC

Straightforward relocking after PSL was reset.  No IA required, just a typical bit of tweaking of ALS and PRMI. 

H1 General
travis.sadecki@LIGO.ORG - posted 22:23, Monday 05 December 2016 - last comment - 10:30, Tuesday 06 December 2016(32222)
Lockloss 6:09 UTC

Lockloss due to PSL tripping off.  Currently on the phone with Jason working to remotely restart if possible.  Will update as I know more.

Comments related to this report
travis.sadecki@LIGO.ORG - 22:36, Monday 05 December 2016 (32223)

Jason noticed that the chiller was complaining about low water level as he was bringing it back up. Apparently when the chiller trips off, it burps a bunch of water onto the floor. I topped the Xtal chiller off with 300 mL of H2O.

travis.sadecki@LIGO.ORG - 22:56, Monday 05 December 2016 (32224)

Filed FRS ticket 6853 for this trip.

travis.sadecki@LIGO.ORG - 22:57, Monday 05 December 2016 (32225)

Also, back to Locking now.

travis.sadecki@LIGO.ORG - 23:09, Monday 05 December 2016 (32226)

Also, Rana and photographer are on site.  I let them onto the Observation Deck to take pics while we are relocking.

peter.king@LIGO.ORG - 08:18, Tuesday 06 December 2016 (32237)
Sorting through the myriad of signals leads me to think that the laser trip was due to the NPRO passing out, although it is possible that the flow rate in head 3 dipped below the 0.4 lpm limit. Head3FlowNPRO.png suggests that the NPRO tripped out before the flow rate in head 3 reached its limit.
Images attached to this comment
jason.oberling@LIGO.ORG - 09:28, Tuesday 06 December 2016 (32239)PSL

When restarting the laser last night, the status screen on the PSL Beckhoff PC indicated a trip of the "Head 1-4 Flow" interlock, although looking at the graphs Peter posted above it appears that the laser lost power before any of the flow sensors dropped below the trip threshold.

jason.oberling@LIGO.ORG - 10:30, Tuesday 06 December 2016 (32243)

Further forensics:  Attached are trends of the laser head temperatures around the time of last night's PSL trip.  To my eye nothing looks out of the ordinary.

Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 20:13, Monday 05 December 2016 (32221)
Ops Eve Mid-shift Summary

We have been locked in Observing for almost 2 hours.  No issues to report.

H1 SUS (Lockloss, OpsInfo)
sheila.dwyer@LIGO.ORG - posted 19:11, Monday 05 December 2016 - last comment - 18:24, Tuesday 06 December 2016(32220)
lockloss caused by SR3 glitch

I used the lockloss2 script, which automatically checks for sus saturations and plots them using the lockloss tool, and saw that one of the three locklosses (2016-12-05 16:02:42 UTC) in the last day or so was probably caused by a glitch on SR3. The attached screenshot shows the timeline; there is clearly a glitch on the top mass of SR3 about 0.2 seconds before the lockloss.

The dither outputs (which we use for the cage servo) don't show anything unusual until after the lockloss, which means that this is not a cage servo problem. Looking at the top mass OSEMINF signals, LF and RT are the two that seem to glitch first, at about the same time.

I've added a lockloss template, /ligo/home/ops/Templates/Locklosses/channels_to_look_at_SR3.txt, for any operators who have an unexplained lockloss and want to check if it is similar to this one.
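
For illustration, a minimal sketch of the kind of check this template supports, reusing the get_data call shown in the TCS entry above. The GPS time is the quoted lockloss time, and the SR3 channel names are placeholder assumptions, not the actual contents of the template:

% Pull SR3 top-mass OSEM signals around the lockloss and look for a glitch
% shortly before the lockloss time.  Channel names are placeholders; the
% real list lives in channels_to_look_at_SR3.txt.
t_lockloss = 1164988979;        % approx. GPS time of 2016-12-05 16:02:42 UTC
t0 = t_lockloss - 10;
t1 = t_lockloss + 2;
chan = {'H1:SUS-SR3_M1_OSEMINF_LF_OUT_DQ', ...
        'H1:SUS-SR3_M1_OSEMINF_RT_OUT_DQ'};
[data, t, info] = get_data(t0, t1, chan, []);   % one column per channel
plot(data); grid on
xlabel('Sample number from t0')
ylabel('OSEMINF output (counts)')
leg = legend(chan); set(leg, 'Interpreter', 'none')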

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 18:24, Tuesday 06 December 2016 (32278)

Sheila and I looked again at this particular lockloss (2016-12-06 10:05:39 UTC) and agree that the glitches that likely caused the lockloss are actually on the T1 and LF top stage OSEMs. These are indeed on the same cabling, satellite amp, and driver run. See the attached updated lockloss plot, this time with the OSEMINF channels. We'll keep watching locklosses to see if this happens more.

Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 18:28, Monday 05 December 2016 (32219)
Observing at 2:27 UTC

Back to Observing at 2:27 UTC.  No issues with relocking.  A slight tweak of PRMI was all that was needed.

H1 General (CDS)
travis.sadecki@LIGO.ORG - posted 18:05, Monday 05 December 2016 (32218)
Lockloss 1:58 UTC, OPS workstation issues

Lockloss at 1:58 UTC. No obvious cause. It was coincident with me rebooting the OPS workstation. The OPS workstation's 2nd monitor shut off and the first monitor was showing symptoms of needing a reboot (slow, jumpy mouse movements, etc.), so I verified that the cable was still plugged in and hard-booted it. It seems to have come back fine with both monitors, but I have never seen this issue before, so it might need some further investigation.

H1 General
travis.sadecki@LIGO.ORG - posted 17:51, Monday 05 December 2016 (32217)
Back to Observing

We are now back in Observing mode. No issues were encountered during the lock sequence; it went all the way to NLN on the first try after Sheila finished her camera work. I ran Keita's a2l check script, which showed that running a2l would be prudent, and then ran a2l. Set to Observing at 1:50 UTC.

H1 ISC
daniel.sigg@LIGO.ORG - posted 17:49, Monday 05 December 2016 (32215)
ISS Signals

I repeated the measurement from alog 31399.

The attached plot looks very similar for all traces except PSL-ISS_PDB_REL, indicating that the changes described in alog 32206 significantly reduced the extra noise in the first loop ISS path.

We also see that the new REFL_A_RIN channel (see alog 32191) is working fine.

Between 10 and 30 Hz there is coherence between REFL_A_RIN and the DCPDs. There is a clear shelf in both spectra pointing towards a parasitic interferometer path.
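
As an aside, a minimal sketch of how such a coherence check might be reproduced offline; the full-rate channel names, the 16384 Hz sample rate, and the GPS window below are placeholder assumptions rather than the channels used for the attached plot:

% Coherence between the new REFL_A_RIN channel and the DCPD sum, looking
% for the 10-30 Hz band noted above.  Channel names, sample rate, and the
% GPS times are placeholders.
fs = 16384;                           % assumed sample rate [Hz]
t0 = 1165100000;                      % placeholder GPS start time
t1 = t0 + 600;                        % ten minutes of data
chan = {'H1:LSC-REFL_A_RIN_OUT_DQ', 'H1:OMC-DCPD_SUM_OUT_DQ'};
[data, t, info] = get_data(t0, t1, chan, []);
[cxy, f] = mscohere(data(:,1), data(:,2), hann(16*fs), 8*fs, 16*fs, fs);
semilogx(f, cxy); grid on
xlim([5 100])
xlabel('Frequency (Hz)')
ylabel('Coherence')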

Non-image files attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 17:38, Monday 05 December 2016 - last comment - 05:33, Tuesday 06 December 2016(32214)
corner pressure

PT170 and PT180 were installed in March 2016 when the X/Y beam manifold volumes were vented. PT140 was installed in Sept. 2016. Attached is a trend from the time of installation. It looks like PT170 & PT180 have been drifting up since closing and re-opening isolation GVs 5/7 for the HAM6 vent. We burped in accumulated gas from the gate annuli, but I'm not sure what the second hump from April 9th is. No log entry found.

Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 05:33, Tuesday 06 December 2016 (32235)
It could be that water accumulation on the gauge's ion grid is causing the drift.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:28, Monday 05 December 2016 (32212)
CDS Monday Maintenance Summary, 5th December 2016

6373 increase rate of h1ascimc model

Daniel, Jim, Dave:

The h1ascimc model processing rate was increased from 2048 Hz to 16384 Hz.

6374 add 16kHz LSC channel to DAQ

Daniel:

6378 Add channels to DAQ broadcaster

TJ(Detchar), Dave:

Slow channels were added to the DAQ DMT broadcaster

6371 Relocate digital camera17

Richard, Jim:

Digital camera #17 was relocated to PRM

DAQ restart

Dave:

DAQ was restarted once at 11:49 PST to support model work.

LHO VE
chandra.romel@LIGO.ORG - posted 15:54, Monday 05 December 2016 - last comment - 17:50, Monday 05 December 2016(32204)
CP3 overfill

3:08 pm local

Took 39 min. to fill CP3 at 50% open on the LLCV. Raised the nominal level from 17% to 19%.

Comments related to this report
chandra.romel@LIGO.ORG - 17:50, Monday 05 December 2016 (32216)

Lowered to 18% open after a couple of hours, since the exhaust pressure is higher than usual; 19% was too much flow.

H1 DetChar (DetChar, SUS)
keita.kawabe@LIGO.ORG - posted 12:03, Monday 05 December 2016 - last comment - 17:29, Monday 05 December 2016(32115)
OPLEV transimpedance VS glitch

Summary:

OPLEV EX laser glitches were reported by DetChar to be causing DARM glitches (e.g. alog 31810) even though EY is no worse and maybe somewhat glitchier (1st attachment; this is a week's worth of trend before Jason made an adjustment). It seems like this is due to a hidden design feature: there's a fixed 0.1 uF cap across the transimpedance resistor, which gives a different pole for each resistor value.

As a quick fix I'll insert a digital zp(160,16) filter for all EX segments, as it seems to me that we cannot change the transimpedance without accessing the receiver module on the oplev pylon.

Details:

T1600085 tells us that the effective transimpedance and the whitening gain of ETMX and ETMY are [2*10k, 18 dB] and [2*100k, 0 dB] (the factor of 2 comes from the differential drive; the physical transimpedance is either 10k or 100k). They both use two stages of whitening filters too.

D1100290 shows that there's a 0.1 uF cap across the transimpedance resistor. The pole formed by the 0.1 uF cap and the [10k, 100k] resistor is at [160 Hz, 16 Hz].

The analog signal of each segment, including the whitening gain, is

ETMX ~ 8 * 2*10k * zp([],160Hz) * photocurrent (the factor of 8 is due to the 18 dB whitening gain)

ETMY ~ 2*100k * zp([],16Hz) * photocurrent.

The DC level is about the same, but ETMX has an effective zp(16,160) whitening relative to Y. Look at the RIN (second attachment) and the PIT and YAW signals (third).

A residual power glitch that isn't canceled out by power scaling goes into the oplev PIT, and the effect is worse for ETMX due to this difference because X and Y have the same damping filter with similar gain. For f > 300 Hz the PIT signal is limited by the electronics noise, so a power glitch at these frequencies goes directly into the oplev damping without any cancellation.

Inserting zp(160,16) for all EX segments will, for the moment, make EY and EX about the same.
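
For reference, a minimal sketch (assuming only the 0.1 uF cap and the 10k/100k transimpedance values quoted above) confirming the single-pole corner frequencies f = 1/(2*pi*R*C):

% Corner frequency of the pole formed by the 0.1 uF cap across the
% transimpedance resistor, f = 1/(2*pi*R*C); sanity check of the
% 160 Hz (ETMX, 10k) and 16 Hz (ETMY, 100k) numbers quoted above.
C = 0.1e-6;                 % feedback capacitance [F]
R = [10e3, 100e3];          % ETMX, ETMY transimpedance resistors [Ohm]
f_pole = 1 ./ (2*pi*R*C)    % ~ [159.2, 15.9] Hz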

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 14:15, Monday 05 December 2016 (32199)AOS, DetChar, SUS

WP6380.

Added FM10 (named "alog32115") to H1:SUS-ETMX_L3_OPLEV_SEG1, SEG2, SEG3 and SEG4.

Some old ISIINF channels were still defined in the foton file but not in the frontend; JeffK confirmed that all of them were safe to remove from the file. These channels were automatically purged from the foton file when we saved it.

Filter coefficients of the SUS-ETMX model were reloaded, I enabled the new filters, and they were accepted in SDF safe.

This introduced a new 16 Hz pole (and 160 Hz zero) only to the ETMX oplev damping, which should have made the EX and EY oplev plants identical (or at least more similar to each other), which is a good thing.

But there might be some impact on ASC. If this makes HARD (or SOFT though less likely) unstable, turn FM10 off for the moment.

Note that I didn't make similar changes to the ITMs yet, though they both use 10k transimpedance resistors, as these are not of immediate concern. The longer lever arm of the ITM oplevs means that the equivalent angle noise of the oplev sensing electronics for the ITMs is roughly a factor of 6 smaller than for the ETMs.
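
As a quick cross-check of the compensation (a sketch assuming the 16 Hz / 160 Hz corners above and MATLAB's Control System Toolbox, not the foton implementation itself): cascading the effective EX-relative whitening with the new zp(160,16) filter should give a flat response.

% Effective EX/EY whitening difference: zero at 16 Hz, pole at 160 Hz
% (normalized to unity at DC), and the compensating zp(160,16) filter.
whitening_rel = zpk(-2*pi*16,  -2*pi*160, 160/16);
compensation  = zpk(-2*pi*160, -2*pi*16,  16/160);
bodemag(whitening_rel * compensation, 2*pi*logspace(0, 3, 200)); grid on
title('EX relative whitening x zp(160,16) compensation (should be flat)')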

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 15:30, Monday 05 December 2016 (32202)AOS, DetChar, ISC, SUS
J. Kissel

I've checked the open loop gain transfer functions of the ETMX optical lever pitch loops after this change. As expected from the frequency content of the new filter, there is no significant difference in the OLG TF with these new filters ON vs OFF. 

I attach the OLGTFs:
- RED: With Keita's change OFF
- BLUE: With Keita's change ON
- BLACK: Same transfer function for ETMY (which has had no change)

The template lives here:
/ligo/home/jeffrey.kissel/Templates/
ETMX_L2_OLDAMP_P_OLGTF.xml
ETMY_L2_OLDAMP_P_OLGTF.xml
Images attached to this comment
jenne.driggers@LIGO.ORG - 17:29, Monday 05 December 2016 (32213)

[Travis, Jenne]

We have also accepted these FM10s in the Observe SDF file.
