H1 General
jim.warner@LIGO.ORG - posted 15:08, Tuesday 03 January 2017 (32930)
LVEA Swept

While waiting for the ground to stop shaking, I ran through Betsy's annotated LVEA sweep. I didn't find anything out of place. I did run through the science-mode process for the PSL (unclear if that was necessary, but I got the impression from the checksheet on the PSL that it was). Everything else seemed okay.  I don't believe the ends have been done, but access is dicey today.

H1 OpsInfo (SEI)
jim.warner@LIGO.ORG - posted 14:28, Tuesday 03 January 2017 - last comment - 15:05, Tuesday 03 January 2017(32927)
Sensor correction turned off for earthquake, needs to be reverted

We just lost the first NLN lock of the year to a 7.2 earthquake in Fiji. Terramon predicts 18 micron displacements, so we wanted to turn off the low-frequency feedforward (sensor correction) on the ISIs. Unfortunately, CDS had restarted all of the guardians right after the lockloss, so we couldn't use ISI_CONFIG to do this. TJ had written a script to do this a while ago (before, and independent of, the seismic configuration guardians), and I had an alias set up to run it. This script conflicts with the seismic guardians, however, so just re-requesting "WINDY" on ISI_CONFIG may not recover the correct configuration. So, to ensure that we get everybody back to where they belong, we should run the following command in a terminal:

python /opt/rtcds/userapps/release/isi/h1/scripts/Toggle_SEI_Sensor_Correction.py -c 1 && python /opt/rtcds/userapps/release/isi/h1/scripts/Toggle_SEI_Sensor_Correction.py -e 1

This should just be copy/paste; any issues will show up in SDF, so that's a good check. The only ISI that should be red in the down state is the BS, with 21 diffs. All ISI and HEPI should be green in OBSERVE.
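For reference, here's a minimal sketch of the kind of thing a toggle script like this does (turning the sensor-correction path on or off by writing EPICS channels). The chamber list and channel name below are illustrative assumptions only - the real logic is in Toggle_SEI_Sensor_Correction.py:

    # Hedged sketch only; the chamber list and channel name are placeholders,
    # not the actual ones used by Toggle_SEI_Sensor_Correction.py.
    from epics import caput  # pyepics

    CORNER = ['HAM2', 'HAM3', 'HAM4', 'HAM5', 'HAM6', 'BS', 'ITMX', 'ITMY']  # -c 1
    ENDS   = ['ETMX', 'ETMY']                                                # -e 1

    def set_sensor_correction(chambers, on):
        """Write an (assumed) sensor-correction gain channel for each ISI."""
        for ch in chambers:
            caput('H1:ISI-%s_ST1_SENSCOR_GAIN' % ch, 1.0 if on else 0.0)

    set_sensor_correction(CORNER + ENDS, True)   # restore sensor correction everywhere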

This is not a long-term issue, just a kludge I had to use because of poor timing with the CDS work.

Comments related to this report
jim.warner@LIGO.ORG - 15:05, Tuesday 03 January 2017 (32929)

Corey has reverted this. Unless CDS takes guardian down again at another perfectly disoptimal time, no further action is needed.

H1 SEI
corey.gray@LIGO.ORG - posted 14:27, Tuesday 03 January 2017 (32926)
Earthquake Report: 7.2, Fiji, H1 Lockloss
H1 General
corey.gray@LIGO.ORG - posted 14:21, Tuesday 03 January 2017 (32925)
Up & Then Down....

After alignment and moderate troubleshooting, we had H1 aimed toward NOMINAL LOW NOISE at 2pm PST, but as we were approaching, a big EQ (7.2, Fiji) came in.  Managed one data point of around 65 Mpc before dropping out.  Jenne was in the middle of ACCEPTING many OFFSET changes from her Dark Offset Measurements (not sure she accepted all of them).  

Taking this opportunity of down-time to address a few items:

H1 SEI
jim.warner@LIGO.ORG - posted 14:06, Tuesday 03 January 2017 (32924)
BRS-Y centering still looks good, probably OK til March

One of the things I checked this morning was the health of the 3 different BRSs on site. Everybody looks OK: the corner station BRS guardian was still running smoothly, EX doesn't look like it did anything crazy, and EY continues a slow (and slowing) drift toward one limit. Attached time series are the drift mons for both end BRSs; blue is EX, green is EY. EY has drifted about 10k counts in 9 weeks, and it has about 8k counts of margin left, i.e. about 7.5 weeks.
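(Back-of-the-envelope check of that extrapolation:)

    # Quick check of the EY drift extrapolation quoted above.
    rate = 10e3 / 9.0          # ~counts per week over the last 9 weeks
    weeks_left = 8e3 / rate    # ~7.2 weeks at the current rate; longer if the drift keeps slowing
    print(round(weeks_left, 1))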

Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 13:50, Tuesday 03 January 2017 - last comment - 19:03, Wednesday 04 January 2017(32923)
Dark offsets set

I reset the dark offsets for all LSC and ASC PDs.  The ASC diodes already had scripts in ..../userapps/asc/common/scripts/dark_offset/, so I copied the style of those into an LSC script that now lives in ..../userapps/isc/common/scripts/dark_offset/. 

I also created a bash script that will call each of those scripts in succession - it lives natively in the isc folder but is linked in the asc folder:  all_offsets_LSCandASC.
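As a rough illustration of what the per-PD offset scripts do (average the dark PD signal, then write the negated mean as the input offset), here is a minimal sketch. The PD list, averaging time, and channel suffixes are placeholders, not the actual ones in the scripts:

    # Hedged sketch of a dark-offset script; names below are illustrative only.
    import cdsutils
    import ezca

    ez = ezca.Ezca()                          # IFO prefix taken from the environment
    PDS = ['LSC-POP_A_LF', 'LSC-REFL_A_LF']   # placeholder PD list
    AVG_TIME = 10                             # seconds of dark data to average

    for pd in PDS:
        dark = cdsutils.avg(AVG_TIME, 'H1:' + pd + '_OUT_DQ')  # mean dark-PD level
        # Sign convention depends on where the offset is applied; assumed here
        # that the offset cancels the measured dark level.
        ez[pd + '_OFFSET'] = -round(dark, 4)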

Comments related to this report
jenne.driggers@LIGO.ORG - 19:03, Wednesday 04 January 2017 (32975)

For the last lock, and the one that Ed just got, we've had SDF diffs due to a lack of rounding in one of the pre-existing dark offset scripts (the one that does the end station QPDs).  The diffs were of order 1e-16, so they were just accepted so that we could Observe.  I have modified the script (although not run it, since we just locked) so that it rounds; this should prevent the problem in the future.  Next time we lose lock and I'm around, I'll try to remember to hand-round those values so that we don't keep getting non-useful SDF diffs.
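A tiny self-contained illustration of the rounding fix described above (values and precision are made up):

    # Rounding the value before writing it to the offset channel means the live
    # value matches what gets saved, so SDF is less likely to flag ~1e-16 diffs.
    raw_offset = -3.14159265358979e-05   # example measured dark level
    offset = round(raw_offset, 6)        # fixed number of decimals before writing
    print(offset)                        # -3.1e-05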

Also, Ed might write about this in his shift summary, but it seems like the ASAIR_LF offset was set a little wrong.  Sheila and Ed took the IMC to offline and hand-set that offset, and we were able to get through the PRX_Locked state of initial alignment.  (The wrong dark offset was causing the output to fall short of the guardian's threshold for deciding whether the cavity was locked, even though it was locked.)

H1 SEI
jim.warner@LIGO.ORG - posted 12:41, Tuesday 03 January 2017 (32922)
Seismometer Center Check - FAMIS Task 6082

Looks like a number of seismometers are out of shape, especially the corner-station ground STS; 6 volts seems pretty bad relative to the 2 volt spec.

 

Averaging Mass Centering channels for 10 [sec] ...


2017-01-03 12:04:38.352808
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS B DOF Y/V = -6.441 [V]
STS B DOF Z/W = 4.384 [V]


All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.486 [V]
STS A DOF Y/V = 0.035 [V]
STS A DOF Z/W = -0.568 [V]
STS B DOF X/U = 0.986 [V]
STS C DOF X/U = -0.0 [V]
STS C DOF Y/V = -0.0 [V]
STS C DOF Z/W = -0.0 [V]
STS EX DOF X/U = -0.089 [V]
STS EX DOF Y/V = 0.557 [V]
STS EX DOF Z/W = 0.126 [V]
STS EY DOF X/U = 0.168 [V]
STS EY DOF Y/V = 0.084 [V]
STS EY DOF Z/W = 0.464 [V]


Assessment complete.

jim.warner@opsws0:~ 0$ t240_center  
Averaging Mass Centering channels for 10 [sec] ...
2017-01-03 12:16:41.356637


There are 12 T240 proof masses out of range ( > 0.3 [V] )!
ETMY T240 3 DOF Z/W = 0.405 [V]
ITMX T240 1 DOF X/U = -0.589 [V]
ITMX T240 1 DOF Y/V = 0.355 [V]
ITMX T240 1 DOF Z/W = 0.31 [V]
ITMX T240 2 DOF X/U = 0.346 [V]
ITMX T240 2 DOF Y/V = 0.35 [V]
ITMX T240 2 DOF Z/W = 0.358 [V]
ITMX T240 3 DOF X/U = -0.573 [V]
ITMY T240 2 DOF Z/W = 0.306 [V]
ITMY T240 3 DOF Z/W = -1.025 [V]
BS T240 1 DOF Z/W = 0.423 [V]
BS T240 2 DOF Y/V = 0.393 [V]


All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.136 [V]
ETMX T240 1 DOF Y/V = 0.099 [V]
ETMX T240 1 DOF Z/W = 0.157 [V]
ETMX T240 2 DOF X/U = -0.137 [V]
ETMX T240 2 DOF Y/V = -0.252 [V]
ETMX T240 2 DOF Z/W = 0.039 [V]
ETMX T240 3 DOF X/U = 0.091 [V]
ETMX T240 3 DOF Y/V = 0.075 [V]
ETMX T240 3 DOF Z/W = 0.085 [V]
ETMY T240 1 DOF X/U = 0.046 [V]
ETMY T240 1 DOF Y/V = -0.065 [V]
ETMY T240 1 DOF Z/W = -0.099 [V]
ETMY T240 2 DOF X/U = 0.256 [V]
ETMY T240 2 DOF Y/V = -0.135 [V]
ETMY T240 2 DOF Z/W = 0.038 [V]
ETMY T240 3 DOF X/U = -0.108 [V]
ETMY T240 3 DOF Y/V = 0.049 [V]
ITMX T240 3 DOF Y/V = 0.281 [V]
ITMX T240 3 DOF Z/W = 0.262 [V]
ITMY T240 1 DOF X/U = 0.228 [V]
ITMY T240 1 DOF Y/V = 0.153 [V]
ITMY T240 1 DOF Z/W = 0.19 [V]
ITMY T240 2 DOF X/U = 0.092 [V]
ITMY T240 2 DOF Y/V = 0.272 [V]
ITMY T240 3 DOF X/U = -0.111 [V]
ITMY T240 3 DOF Y/V = 0.265 [V]
BS T240 1 DOF X/U = 0.031 [V]
BS T240 1 DOF Y/V = 0.13 [V]
BS T240 2 DOF X/U = 0.231 [V]
BS T240 2 DOF Z/W = 0.218 [V]
BS T240 3 DOF X/U = 0.253 [V]
BS T240 3 DOF Y/V = 0.073 [V]
BS T240 3 DOF Z/W = 0.092 [V]


Assessment complete.
 

H1 CAL (CAL)
aaron.viets@LIGO.ORG - posted 11:46, Tuesday 03 January 2017 (32920)
New GDS filters for LHO with corrected analog AA gain
[Evan G, Alex U, Aaron V]

I have produced new filters for Hanford that incorporate the correction Evan made to the Pcal to DARM transfer function. (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907)
The filters can be found in the calibration SVN at this location:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1GDS_1167485844.npz

The filters were produced using this Matlab script in SVN revision 4029:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1167485844.m

The parameters files used (all in revision 4029) were:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Common/params/IFOindepParams.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/H1params.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/2016-11-12/H1params_2016-11-12.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1_TDparams_1167485844.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/D20161122_H1_CAL_EPICS_VALUES.m

The first two plots (png files) are spectrum comparisons between CALCS and GDS. The same options currently used in the online calibration were applied, including the time-dependent corrections. The last pdf file shows kappas computed by SLM, GDS, and CALCS. It appears that Evan's correction has resolved the discrepancy we were seeing in kappa_tst and kappa_pu between GDS and CALCS. Compare with the data from the summary pages for the same time period: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161207/cal/time_varying_factors/
However, SLM tool data does not agree at this time.
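For anyone wanting to peek at the filter file itself, a minimal sketch (the names of the arrays stored inside the .npz are not guaranteed - list the keys first):

    # Hedged sketch: inspect the GDS filter file produced above.
    import numpy as np

    filt = np.load('/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1GDS_1167485844.npz')
    print(sorted(filt.files))            # which filter arrays are actually stored
    for name in filt.files:
        print(name, filt[name].shape)    # dimensions of each stored array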
Images attached to this report
Non-image files attached to this report
H1 CDS
james.batch@LIGO.ORG - posted 11:28, Tuesday 03 January 2017 (32919)
Updated leap seconds database for gpstime python package
The gpstime package's leap seconds database file was out of date, and was user/group owned by a non-controls account.  Updated the file and changed permissions on the gpstime directory to allow future updates.
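A quick sanity check after an update like this (function name per the LIGO gpstime package, as I recall it - verify locally):

    # Hedged check that gpstime picks up the refreshed leap-second table.
    import gpstime
    print(gpstime.gpsnow())   # current GPS time in seconds; a stale leap-second
                              # table could make this off by a second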
H1 PSL
jason.oberling@LIGO.ORG - posted 11:14, Tuesday 03 January 2017 - last comment - 14:44, Tuesday 03 January 2017(32918)
H1 PSL After the Holiday Break

J. Oberling, P. King (from Pasadena)

Checked on the PSL this morning; all was fine, except we lost ~10 W of power over the break (before the break we were at ~160 W, this morning we were at ~150 W) and the NPRO noise eater needed to be reset.  Everything looked as expected, so the power loss is most likely due to natural decay of the HPO pump diodes.  I increased the HPO diode currents to recover the lost power; the currents were changed from 50.5 A to 51.0 A for each HPO diode box.  I then tweaked the pump diode temperatures; see the table below for a summary of the changes (remember that each HPO diode box has 7 individual laser diodes).  The PSL is now outputting ~166.5 W from the HPO box itself (the internal power reading is ~212 W).

 
Operating Temperature for HPO Pump Diodes

 
        Diode Box 1     Diode Box 2     Diode Box 3     Diode Box 4
        Old     New     Old     New     Old     New     Old     New
D1      26.0    25.5    20.5    20.0    22.5    22.0    25.0    24.5
D2      26.5    26.0    20.0    19.5    26.5    26.0    22.5    22.0
D3      28.5    28.0    21.0    20.5    26.5    26.0    24.0    23.5
D4      25.0    24.5    19.0    18.5    23.5    23.0    22.5    22.0
D5      27.0    26.5    19.0    18.5    27.5    27.0    24.5    24.0
D6      26.5    26.0    19.5    19.0    22.0    21.5    24.5    24.0
D7      24.0    23.5    20.0    19.5    23.0    22.5    24.5    24.0
(all temperatures in °C)
 

With the ISS off this gives 80 W incident on the PMC, which is the desired incident power for the PMC; the PMC was outputting ~65 W.  I then did a quick tweak of the beam alignment into the PMC, only tweaking the horizontal alignment (vertical looked fine), which improved the PMC transmitted power to ~67 W (ISS still off).  With the ISS turned back on the PMC is outputting 64.7 W, which is where it was set at the start of O2.  Also with the ISS on, the FSS RefCav TPD is reading ~3.6 V, so no adjustment is necessary there.  The H1 PSL is ready for the resumption of O2.

Comments related to this report
jason.oberling@LIGO.ORG - 14:44, Tuesday 03 January 2017 (32928)

Attached is a 14-day trend of the NPRO, FE, and HPO laser powers.  As can be seen, the only power drop is in the HPO, supporting the conclusion that the power loss is due to natural decay of the pump diodes.

Images attached to this comment
H1 PSL
corey.gray@LIGO.ORG - posted 10:41, Tuesday 03 January 2017 (32917)
PSL Weekly Report


Laser Status:
SysStat is good
Front End Power is 34.18 W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0.0 days, 20.0 hr, 28.0 min (should be days/weeks)
Reflected power is 13.29 W and PowerSum = 80.42 W.

---->There is a note that this PMC Reflected power is high, but that was for O1 (for O2 we are fine).

FSS:
It has been locked for 0.0 days 1.0 hr and 5.0 min (should be days/weeks)
TPD[V] = 3.673 V (min 0.9 V)

ISS:
The diffracted power is around 2.6% (should be 3-5%)
Last saturation event was 0.0 days 0.0 hours and 0.0 minutes ago (should be days/weeks)

This closes FAMIS 7419.

H1 AOS (AOS, SEI, SUS)
corey.gray@LIGO.ORG - posted 10:29, Tuesday 03 January 2017 (32916)
Optical Lever 7 Day Trends

HAM2 Yaw is now off-scale around -30 (but I don't think Jason wants to monitor this anymore...so perhaps it should be removed from procedures/templates/scripts?).

Everything else looked fine.

This closes FAMIS 4708.

Images attached to this report
H1 CAL (DetChar, INJ, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 10:23, Tuesday 03 January 2017 (32915)
PCALY Calibration Lines and PCALX CW Hardware Injections turned ON
J. Kissel

I've restored PCALX and PCALY's excitations (including calibration lines on Y and CW hardware injections on X) after the shutdown. All systems are go; the OFS looks stable.
H1 ISC
jeffrey.kissel@LIGO.ORG - posted 10:07, Tuesday 03 January 2017 (32914)
ALS Fiber Polarization Check: < 10%, OK to GO
J. Kissel

ALS Fiber PLL's rejected/wrong polarization looks nice and low. The wrong polarization has swung up and down over the past 6 days, but never above ~20%. This is expected since the long-term drift is not controlled and the site's temperatures have been dynamic. Currently hovering at a very acceptable ~7% / 4% for X / Y, respectively. Trend attached.
Images attached to this report
H1 SEI (OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 09:28, Tuesday 03 January 2017 - last comment - 09:44, Tuesday 03 January 2017(32911)
SEI Configuration Restored to Nominal (BRS X Damping still OFF)
J. Kissel, C. Gray

We've used the chamber managers to bring all SEI systems back to nominal -- HAMs to ISOLATED, ITMs and ETMs to FULLY_ISOLATED, and BS to ISOLATED_DAMPED. 

Before turning on sensor correction, I checked that the end-station BRS signals looked sane, and they do. BRS X damping is still OFF, but the amplitude of the raw tilt signal looks within +/- 100 [ct], which is acceptable. 

Once isolated, I used the ISI_CONFIG manager to bring all platforms to their respective WINDY configuration, as per nominal.

All chambers are now running smoothly as expected, with their BLRMS blinky-light matrices almost entirely green, with a little yellow sprinkled here and there (again, as expected).
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:44, Tuesday 03 January 2017 (32913)
J. Kissel, J. Warner

BRS X Damping restored. This is doable remotely, by requesting 
     caput H1:ISI-GND_BRS_ETMX_USER 1

This changes the display of this channel on the BRS overview screen from "DISABLED" to "ENABLED." However, if the rotational velocity of the BRS X signal (H1:ISI-GND_BRS_ETMX_VEL) is within +/- 800 [ct] (as is the case currently), the damping will not turn on, and the status bit (H1:ISI-GND_BRS_ETMX_DAMPBIT) will continue to report that damping is OFF.
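For completeness, the same check in Python (pyepics), using the channel names quoted above:

    # Request BRS X damping and read back the relevant channels.
    from epics import caget, caput

    caput('H1:ISI-GND_BRS_ETMX_USER', 1)          # request damping ON
    vel  = caget('H1:ISI-GND_BRS_ETMX_VEL')       # rotational velocity [ct]
    damp = caget('H1:ISI-GND_BRS_ETMX_DAMPBIT')   # 1 only once damping actually engages
    print(vel, damp)  # with |vel| within +/-800 ct, expect damp to still read 0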
H1 TCS
betsy.weaver@LIGO.ORG - posted 12:20, Tuesday 20 December 2016 - last comment - 15:25, Tuesday 03 January 2017(32776)
TCSY In-line Flow Sensor replaced

This morning, Jason, Mark and I swapped the assumed-to-be-failing TCSY flow sensor, which has been showing epochs of glitching and low readout (while other indicators show normal flow; alogs 32712 and 32230).  The process was as follows:

 

1) Key laser off at control box in rack, LVEA

2) Turn RF off at mezzanine rack, Mech room

3) Turn chiller off on mezzanine, Mech room

4) Turn power off on back of controller box in rack, LVEA (we also pulled the power cable to the sensor off the front of the controller, but it was probably overkill)

5) Close in-line valves under BSC chamber near yellow sensor to-be-swapped, LVEA

6) Quick-disconnect water tubes at manifold near table, LVEA

7) Pulled the yellow top off of the yellow sensor housing under the BSC at the piping, LVEA

8) Pulled the blue and black wires to the power receptacles inside the housing (see pic attached).  Pulled the full grey cable out of the housing.

9) While carefully supporting the blue piping*, unscrewed the large white nut holding the housing/sensor to the piping (this was tough, in fact so tough that we later removed all of the teflon tape, which was unneeded in this joint)

10) Pull* straight up on the housing (hard) and it comes out of the piping.

11) Reverse all of the above steps to install the new housing/sensor and wires, and turn everything back on.  Watch for rolled o-rings on the housing and for proper alignment of the notch feature when installing the new sensor.  Verify the mechanical flow sensors in the piping line show a ~3-4 G/m readout when flow/chiller functionality is restored.

12) Set up the new flow sensor head with settings: go to the other in-use sensor, pull off the top, and scroll through the menu items (red and white buttons on the unit, shown in pic).  Set the new head to these values.

13) Verify the new settings on the head are showing a ~3 G/m readout on the MEDM screen.  If not, there is possibly a setting on the sensor that needs to be revisited.

14) Monitor TCS to see that the laser comes back up and stabilizes.

* The blue piping can crack, so be careful to always support it and avoid torquing it.

 

Note - with the sensor removed, we could see a lot of green murk in the blue piping where the paddle wheel sits.  Still suffering green sludge in this system...

Images attached to this report
Comments related to this report
peter.king@LIGO.ORG - 12:57, Tuesday 20 December 2016 (32777)
A few pictures to add to those already posted.

The O-ring closest to the paddle wheel had a cut in it.  It's not near the electronics, and there's the other O-ring, so it doesn't look like water was getting into where the electronics are housed.

Some kind of stuff is stuck to each blade (paddle?) of the paddle wheel.  Not a good sign if the cooling water for the laser is meant to be clean.
Images attached to this comment
marc.pirello@LIGO.ORG - 13:20, Tuesday 20 December 2016 (32778)

Settings were as follows:

FLO Unit (Flow Unit) = G/m (default was L/m)

FActor (K-Factor) = 135.00 (default was 20)

AVErage (Average) = 0

SEnSit (Sensitivity) = 0

4 Set (4mA Set Point) = 0 G/m

20 Set (20mA Set Point) = 10 G/m (default was 160)

ContrAST (Contrast) = 3

betsy.weaver@LIGO.ORG - 14:05, Tuesday 20 December 2016 (32782)

Here are both TCS systems' laser power and flow for the past day.  The dropout in the ITMY data is our few-hour sensor replacement work.  So far no glitching or low droops, although there weren't any for the last 24 hours on the old sensor either.

Images attached to this comment
jason.oberling@LIGO.ORG - 15:17, Tuesday 03 January 2017 (32931)

Attached is a 14-day minute trend of the TCSy chiller flow rate and CO2 laser power since our swap of the TCSy flow sensor.  There have been 7 glitches below 2 GPM, with 3 of those glitches being below 1 GPM; all 7 glitches occurred in the last week.  Unless the spare flow sensor is also faulty (not beyond belief, but still a hard one to swallow), the root cause of our TCSy flow glitches lies elsewhere.

Images attached to this comment
alastair.heptonstall@LIGO.ORG - 15:25, Tuesday 03 January 2017 (32932)

It might be a good idea to try swapping the laser controller chassis next.  The electronics path for this flow meter is very simple - just the controller and then into the EtherCAT chassis where it's read by an ADC.

LHO General (OpsInfo)
corey.gray@LIGO.ORG - posted 00:02, Thursday 17 November 2016 - last comment - 12:32, Tuesday 03 January 2017(31555)
OPS Eve Shift Summary

TITLE: 11/17 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Travis
SHIFT SUMMARY:

Shift started with Alignment work, then ASC work.
LOG:

To Do:

Run Sheila's ASC measurement (30 min) after we are locked for at least a couple of hours.

Initial Alignment Notes:

INPUT_ALIGN:  Xarm IR flashes were up to 0.6, but it wouldn't lock.  Tweaked IM4 & PR2 a bit while looking at the Xarm & ASAIR video, but didn't get anywhere.  Also doubled & quadrupled the XARM gain to no avail.  Sheila did some work on filters and quadrupled the gain, and that seemed to do the trick.

MICH_DARK_LOCKED:  The spot did not look dark (it was saturated) and looked really misaligned.  Took it DOWN a few times.  Sheila noticed that the exposure was high.  After lowering it, I had a spot to work with...but it still looked different from previous alignments.  Moved on after centering.

SRC_ALIGN:  Looked misaligned here.  Took it DOWN a few times.  Sheila showed me how to fix the alignment by misaligning SRM & using SR2 to center the beam on the AS_C photodetector.  This did the trick, and then we went back to locking.

Comments related to this report
corey.gray@LIGO.ORG - 12:32, Tuesday 03 January 2017 (32921)OpsInfo

For the SRC_ALIGN Locking note above, just wanted to mention that ALIGN_IFO was set to DOWN & then SRM MISALIGNED to center SR3.
