Reports until 14:14, Tuesday 17 January 2017
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 14:14, Tuesday 17 January 2017 - last comment - 14:28, Tuesday 17 January 2017(33385)
CDS Tuesday Maintenance Summary, Tuesday 17th January 2017

WP6438 Remove h1tw0 from DAQ EDCU

Dave:

Due to its extended downtime, h1tw0 was removed from the DAQ EDCU for now to GREEN-up the EDCU.

MR Vacuum Beckhoff Change

WP6431 Patrick, Dave:

h0vacmr Beckhoff EPICS was changed for IP5. H0EDCU_VAC.ini was modified. Also h0/target/h0vacmr/autoBurt.req.

EX change delayed to next week.

WP6415 Install new FMCS computer in MSR

Richard, Carlos:

The new FMCS controller rack-mount computer (fmcs-compass) is being installed in the MSR.

DAQ Restart

Dave:

After running for 43 days, the DAQ was restarted at 12:29 PST. This was a clean restart.

TJ reported that the Guardian DIAG_MAIN node processing time improved significantly after h1nds0 was restarted. Trends show this happened on the last h1nds0 restart as well, with the processing time gradually increasing thereafter.

Comments related to this report
david.barker@LIGO.ORG - 14:28, Tuesday 17 January 2017 (33386)

The EDCU is now GREEN; operators should investigate if it turns RED. I've also blanked out the TW0 slot so we don't have any INV white boxes.

Images attached to this comment
H1 CDS (GRD)
thomas.shaffer@LIGO.ORG - posted 14:11, Tuesday 17 January 2017 (33383)
DAQ Restarts Decrease the Loop Time of DIAG_MAIN

The loop time (exec time) for DIAG_MAIN has been higher than usual the past few weeks. The normal is around 2-3 s, depending on what conditions are met. Today, I noticed that the exec time on DIAG_MAIN dropped back to 2-3 s after the DAQ restart. I checked with Dave and nothing new should have been introduced on NDS0, so on his suggestion I looked back a few weeks. The second attached shot clearly shows where the restart happened on the 6th of this month. The first shot is from today's restart.

The length of the loop time is mostly determined by how long the handful of nds calls take. So presumably, it is these calls that are changing after a DAQ restart, but why?
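One way to narrow this down would be to time each nds call individually inside the loop and trend the per-call results. A minimal sketch of such a timing wrapper (hypothetical illustration only, not the actual DIAG_MAIN code):

```python
import time

def timed(fn, *args, **kwargs):
    """Return (result, elapsed seconds) for a single call of fn.

    Wrapping each suspect nds call with this and logging the elapsed
    times would show which call's latency grows between DAQ restarts.
    """
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# Example with a stand-in function instead of a real nds call:
result, dt = timed(sum, range(1000))
```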

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 14:09, Tuesday 17 January 2017 (33384)
LVEA Sweep

Jason O, Ed Merilh

We did a final sweep of the LVEA:

H1 CDS (CDS, VE)
patrick.thomas@LIGO.ORG - posted 12:32, Tuesday 17 January 2017 (33382)
h0vacmr updated, h0vacex delayed
WP 6431

I have updated the PLC code on h0vacmr to change the IP5 controller from a multivac to a gamma controller. The CP dewar at end X was being filled today, so I am holding off on updating h0vacex until next week.
H1 AOS
robert.schofield@LIGO.ORG - posted 12:13, Tuesday 17 January 2017 (33381)
New spots on PRM baffle

There seem to be some new spots on the PRM baffle: I posted the first picture a couple of months ago; the second is from this morning's lock. This too suggests possible compensation plate changes and potential increases in scattering noise in the input arm.

Images attached to this report
H1 SUS (DetChar, SUS)
keita.kawabe@LIGO.ORG - posted 12:00, Tuesday 17 January 2017 (33380)
CPX chain is likely rubbing/touching (Jenne, Betsy, JeffK, Keita)

I undamped ITMs and CPs, and measured the free swinging spectra (brown CPX, pink CPY).

Apparently all of the peaks are heavily damped for CPX, and this is probably a sign of rubbing/touching.

I moved CPX up by giving it a V offset of 200000 counts and a P offset of -5000 counts, and things got better (green). The seismic level was different between green and brown, but it's clear that some of the peaks were recovered. It's not quite like CPY, though, and it's not clear if CPX is better or worse.

Anyway, if it's been rubbing, it's possible that it was stuck to some odd angle after a large EQ or something and stayed there since then.

Jeff is taking TF measurements to confirm if things look good both for IX and IY.

(Note: The reason ITMY V and R are elevated at high frequency is a noisy RT BOSEM. Betsy confirmed that this has been a problem for a long time.)
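As an illustration of why rubbing suppresses the free-swinging peaks: an undamped mode holds its amplitude over the whole measurement, while a rubbing (damped) mode decays away, so its contribution at the resonant frequency is much smaller. A toy sketch of this (standard quadrature demodulation with made-up numbers, not the site's measurement tools):

```python
import math

def tone_amplitude(x, fs, f):
    """Estimate the amplitude of time series x at frequency f (Hz)
    by correlating against quadrature sin/cos references."""
    n = len(x)
    s = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    c = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    return 2.0 * math.hypot(s, c) / n

fs = 64.0  # assumed sample rate for the toy example
t = [i / fs for i in range(int(64 * fs))]
# Free mode rings for the full 64 s; "rubbing" mode decays (tau = 8 s).
free = [math.sin(2 * math.pi * 0.55 * ti) for ti in t]
rubbing = [math.sin(2 * math.pi * 0.55 * ti) * math.exp(-ti / 8.0) for ti in t]
```

With these inputs the free mode measures close to its full amplitude while the decaying one measures several times smaller, mirroring the suppressed CPX peaks in the spectra.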

Images attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 11:46, Tuesday 17 January 2017 (33379)
HAM6 pressure

Noting that HAM6 pressure is plateauing above 1e-7 Torr.

Images attached to this report
H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 11:24, Tuesday 17 January 2017 (33378)
PSL Weekly Report

Note:  There was a PSL trip last night & also ISS electronics work this morning for Maintenance.

Laser Status:
SysStat is good
Front End Power is 34.11W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0.0 days, 0.0 hr 10.0 minutes (should be days/weeks)
Reflected power is 17.05Watts and PowerSum = 76.71Watts.

FSS:
It has been locked for 0.0 days 0.0 hr and 10.0 min (should be days/weeks)
TPD[V] = 1.157V (min 0.9V)

ISS:
The diffracted power is around 4.2% (should be 3-5%)
Last saturation event was 0.0 days 0.0 hours and 9.0 minutes ago (should be days/weeks)

Possible Issues:
PMC reflected power is "noted" as being high in our PSL Report script. Attached is a 1-month trend of PMC Refl.

This closes FAMIS 7421.
 

Images attached to this report
H1 PSL
peter.king@LIGO.ORG - posted 10:54, Tuesday 17 January 2017 (33377)
Work permit 6436 completed
Work covered by permit 6436 is complete.

This was to replace the chassis power regulator board in the ISS AA chassis.

Nothing "wrong" appeared with the board. The positive voltage regulator (LM2941CT) tab had some discolouration but it was not the one that failed. The negative voltage regulator (LM2991CT) tab looked just fine. There was a small amount of lint stuck to the negative voltage regulator tab but no signs of shorting or other tell-tale signs.
Images attached to this report
H1 SEI
thomas.shaffer@LIGO.ORG - posted 10:15, Tuesday 17 January 2017 - last comment - 10:16, Tuesday 17 January 2017(33374)
HEPI Pump Pressure 45 Day Trends

EY Pressure looks like it dropped a tiny bit on 1-11-2017, but I don't see any other issues.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 10:16, Tuesday 17 January 2017 (33375)

This closes FAMIS4528.

LHO FMCS
bubba.gateley@LIGO.ORG - posted 10:05, Tuesday 17 January 2017 (33373)
Turned off heat in Zone 3A
I have turned off the heat in Zone 3A in the LVEA.
LHO FMCS
john.worden@LIGO.ORG - posted 09:59, Tuesday 17 January 2017 (33372)
LVEA HVAC

We have increased fan flows in the corner in an attempt to stabilize temperatures. The 4 fans are now set to ~11000 cfm.

H1 DetChar (DetChar)
alexander.urban@LIGO.ORG - posted 09:57, Tuesday 17 January 2017 (33371)
Data quality shift report, 12-15 Jan 2017

Highlights from my data quality shift last weekend (12-15 January 2017) at Hanford:

Full notes may be found here: https://wiki.ligo.org/DetChar/DataQuality/DQShiftH120170111

LHO VE
chandra.romel@LIGO.ORG - posted 09:15, Tuesday 17 January 2017 (33370)
CP3,4 LLCV adjustments

Lowered LLCV settings on both CP3 & CP4

CP3 from 17% to 15%

CP4 from 34% to 33%

Exhaust temps were lower than ambient and exhaust pressures above zero.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:12, Tuesday 17 January 2017 (33367)
Ops Day Shift Transition

TITLE: 01/17 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 4mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.57 μm/s
QUICK SUMMARY: Freezing rain on the way in. Maintenance has already begun.
 

H1 General
travis.sadecki@LIGO.ORG - posted 06:25, Tuesday 17 January 2017 - last comment - 08:06, Tuesday 17 January 2017(33361)
Unable to get IMC to lock and Dataviewer issues

The IMC hasn't locked since the lockloss.  While following the troubleshooting Wiki instructions, I attempted to use Dataviewer to look at trends of various IMC-related channels, to no avail (see screenshot of the error), no matter which data rate setting I chose.  With no data to point me to what is wrong, I cleared the IMC WFS as a Hail Mary.  This didn't help.  I'm not sure what to do next here.

Images attached to this report
Comments related to this report
travis.sadecki@LIGO.ORG - 06:45, Tuesday 17 January 2017 (33362)

Using TimeMachine as a last resort, I also set the IMC PZTs back to values from a previous lock stretch 12 hours ago.  This also did not help.

travis.sadecki@LIGO.ORG - 07:07, Tuesday 17 January 2017 (33363)

Turns out this was due to some MC2 OSEMs being railed because I apparently forgot to take ISC_LOCK to DOWN before going back to INITIAL_ALIGNMENT.  IMC is locked again now.

james.batch@LIGO.ORG - 08:06, Tuesday 17 January 2017 (33366)

Dataviewer has been fixed. The operating system type wasn't being determined properly, so paths to programs were incorrect.  Operators should log out of the workstation and log back in.

H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 23:54, Monday 16 January 2017 - last comment - 10:16, Tuesday 17 January 2017(33353)
PSL Trip

  PSL tripped at 06:15 (22:15) with a Head 1-4 Flow Sensor error. Jason recovered PSL. I reset the Noise Eater.

Comments related to this report
jason.oberling@LIGO.ORG - 10:16, Tuesday 17 January 2017 (33376)

Jeff filed FRS 7115 for this trip.

LHO General
patrick.thomas@LIGO.ORG - posted 08:47, Monday 16 January 2017 - last comment - 08:38, Tuesday 17 January 2017(33336)
Ops Day Shift Transition
ISC_LOCK at DOWN and observatory mode in corrective maintenance upon arrival. The HAM6 ISI is tripped. I will attempt to lock but given the alogs from last night it sounds like I may need help from a commissioner.

Running 'ops_auto_alog -t Day' reported an error:

patrick.thomas@operator0:~$ ops_auto_alog.py -t Day
Traceback (most recent call last):
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 300, in <module>
    alog.main(Transition,shift)
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 218, in main
    operator = self.get_oper_w_date('{day}-{month_name}'.format(day=lday, month_name=self.all_months[lmonth]), 'Owl')
  File "/opt/rtcds/userapps/release/cds/h1/scripts/ops_auto_alog.py", line 130, in get_oper_w_date
    date_ln = self.get_date_linum(date) - 1
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
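For reference, the TypeError means get_date_linum() returned None (the date string wasn't found in the alog lines) and the caller subtracted 1 from it unconditionally. A minimal sketch of the failing pattern and an explicit guard, written against hypothetical helpers rather than the real script:

```python
def get_date_linum(lines, date):
    """Return the 1-based line number whose text contains date, or None.

    Mimics the lookup in ops_auto_alog.py that returned None here.
    """
    for i, line in enumerate(lines, start=1):
        if date in line:
            return i
    return None

def date_index(lines, date):
    """Guarded version of the failing `get_date_linum(date) - 1` call."""
    linum = get_date_linum(lines, date)
    if linum is None:
        # Fail with a clear message instead of a TypeError on None - 1
        raise ValueError("date %r not found in alog lines" % (date,))
    return linum - 1
```

This would turn the opaque traceback into an immediate "date not found" message, which fits the account-specific behavior noted in the comment below.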
Comments related to this report
patrick.thomas@LIGO.ORG - 10:00, Monday 16 January 2017 (33337)
17:59 UTC Sheila, Jenne, Keita, Evan G. and Heather (new fellow) in control room.
thomas.shaffer@LIGO.ORG - 08:38, Tuesday 17 January 2017 (33369)

The ops_auto_alog.py error seems to be an issue with only the ops account; I can get it to work from other accounts on both the operator0 machine and opsws12. Jim Batch mentioned some issues with the ops account this morning, so they may be related.
