H1 General
edmond.merilh@LIGO.ORG - posted 20:50, Tuesday 03 January 2017 - last comment - 21:18, Tuesday 03 January 2017(32947)
Mid-Shift Summary

00:33 Running a2l. DTT plot shows high incoherence in YAW. Python problems? Not running from the MEDM screen.

00:52 Jeff K running calibration measurements

02:44 Checking a2l, post-calibration. PIT is showing a fair amount of misalignment. Going to run the script one more time before setting the intention bit.

03:02 H1 in Observing


Comments related to this report
edmond.merilh@LIGO.ORG - 21:18, Tuesday 03 January 2017 (32949)PSL

Also, PSL AOM diffracted power is running at ≈7%, which by my recollection is kind of high(ish).

H1 General (DetChar)
edmond.merilh@LIGO.ORG - posted 19:49, Tuesday 03 January 2017 - last comment - 17:07, Thursday 05 January 2017(32944)
Just spoke to Joe Hanson at LLO

He informed me that they were about to go to Observing. I told him we had been there for a few hours already, but he brought to my attention that GWIstat is reporting us as NOT OK. Anyone?

Comments related to this report
edmond.merilh@LIGO.ORG - 20:04, Tuesday 03 January 2017 (32945)

Apologies. We've been at NLN for about that long. In Observation for only about 1 hour.

keita.kawabe@LIGO.ORG - 20:27, Tuesday 03 January 2017 (32946)CAL, DetChar

Seems like H1:DMT-CALIBRATED is 0 (zero) not 1. Is this related to the calibration task performed today?

Is this why GWIstat thinks that H1 is not OK?

Images attached to this comment
keita.kawabe@LIGO.ORG - 20:44, Tuesday 03 January 2017 (32948)

Sent a message to Jeff Kissel, Aaron Viets and Alex Urban.

john.zweizig@LIGO.ORG - 23:42, Tuesday 03 January 2017 (32950)DetChar
I tried a few things to see if I could figure out why the calibration flag wasn't set. 

1) Restarted the redundant calibration pipeline. This probably caused some of the backup frames to be lost, but the primary and low-latency frames would not be affected. The Science_RSegs_H1 process

 https://marble.ligo-wa.caltech.edu/dmt/monitor_reports/Science_RSegs_H1/Segment_List.html

is generating segments from the output of the (restarted) redundant pipeline, but it is getting the same results.

2) Checked for dataValid errors in the channels in the broadcaster frames. dataValid errors would probably cause the pipeline to flush the h(t) data. No such errors were found.

3) Checked for subnormal/NaN data in the broadcaster frames. Another potential problem that might cause the pipeline to flush the data. No problems of this type were found either.

4) Checked the pipeline log file - nothing unusual.

5) Checked for frame errors or broadcaster restarts flagged by the broadcast receiver. Last restart was Dec 5!

So, I can see no reason for the h(t) pipeline to not be running smoothly.
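
For reference, a minimal sketch of the kind of data-sanity scan described in items (2)-(3), assuming the frame data has already been read into a NumPy array (e.g. with gwpy's TimeSeries.read); the channel handling and frame I/O are left out:

    import numpy as np

    def scan_block(data):
        """Count NaN and subnormal samples in one block of h(t)/broadcaster data.

        Either kind of bad sample is the sort of thing that could make the
        calibration pipeline flush its data.
        """
        data = np.asarray(data, dtype=np.float64)
        n_nan = np.count_nonzero(np.isnan(data))
        tiny = np.finfo(data.dtype).tiny          # smallest normal double
        n_subnormal = np.count_nonzero((np.abs(data) < tiny) & (data != 0.0))
        return n_nan, n_subnormal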
alexander.urban@LIGO.ORG - 00:07, Wednesday 04 January 2017 (32953)

Alex U. on behalf of the GDS h(t) pipeline team

I've looked into why the H1:DMT-CALIBRATED flag is not being set, and TL;DR: it's because of the kappa_TST and kappa_PU factors.

Some detail: the H1:DMT-CALIBRATED flag can only be active if we are OBSERVATION_READY, h(t) is being produced, the filters have settled in, and, since we're tracking time-dependent corrections at LHO, the kappa factors (except f_CC) must each be within range -- if any of them strays more than 10% from its nominal value, the DMT-CALIBRATED flag will fail to be set. (See the documentation for this on our wiki page: https://wiki.ligo.org/viewauth/Calibration/TDCalibReviewO1#CALIB_STATE_VECTOR_definitions_during_ER10_47O2)
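
For illustration, a minimal sketch of that condition (function and argument names are mine, not the actual DMT code; the 10% tolerance is taken from the text above):

    def dmt_calibrated_ok(observation_ready, hoft_produced, filters_settled,
                          kappa_tst, kappa_pu, kappa_c,
                          nominal=1.0, tolerance=0.10):
        """Sketch of the DMT-CALIBRATED condition described above.

        The kappas are complex in practice; here we simply require the complex
        deviation from the nominal value (1 + 0j) to stay within 10%.
        f_CC is intentionally not checked.
        """
        kappas_in_range = all(abs(k - nominal) <= tolerance * abs(nominal)
                              for k in (kappa_tst, kappa_pu, kappa_c))
        return observation_ready and hoft_produced and filters_settled and kappas_in_range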

I attach below a timeseries plot of the real and imaginary parts of each kappa factor. (What's actually plotted is 1 + the imaginary part, to make them fit on the same axes.) As you can see, around half an hour or so in, the kappa_TST and kappa_PU factors go off the rails, straying 20-30% outside their nominal values. (kappa_C, which is a time-dependent gain on the sensing function, and f_CC both stay within range during this time period.)
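
A minimal sketch of how such a plot could be reproduced with gwpy (the exact kappa channel names and the time span are my assumptions, not taken from the attachment):

    from gwpy.timeseries import TimeSeriesDict
    import matplotlib.pyplot as plt

    start, end = 1167465618, 1167480018   # assumed ~4 h span on 3 Jan 2017
    channels = ['H1:GDS-CALIB_KAPPA_TST_REAL', 'H1:GDS-CALIB_KAPPA_TST_IMAGINARY',
                'H1:GDS-CALIB_KAPPA_PU_REAL',  'H1:GDS-CALIB_KAPPA_PU_IMAGINARY']
    data = TimeSeriesDict.get(channels, start, end)

    fig, ax = plt.subplots()
    for name, ts in data.items():
        offset = 1 if name.endswith('IMAGINARY') else 0   # plot 1 + Im(kappa)
        ax.plot(ts.times.value, ts.value + offset, label=name)
    ax.axhspan(0.9, 1.1, color='grey', alpha=0.2)          # 10% band around nominal
    ax.set_xlabel('GPS time [s]')
    ax.set_ylabel('kappa (Re, or 1 + Im)')
    ax.legend(fontsize='small')
    fig.savefig('kappas.png')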

Earlier today, Jeff reported on some work done with the L2/L3 actuation stages (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32933) which may in principle affect kappa_TST and kappa_PU. It's possible we will need a new set of time domain filters to absorb these changes into the GDS pipeline. (I also tried a test job from the DMT machine, but the problems with kappas were still present, meaning a simple restart won't solve the problem.)

Images attached to this comment
peter.shawhan@LIGO.ORG - 05:06, Wednesday 04 January 2017 (32958)
GWIstat (and the similar display gwsnap) was reporting that H1 was down because of the h(t) production problem; it did not distinguish between that and a genuinely down state. I have now modified GWIstat (and gwsnap) to indicate when no good h(t) is being produced but the detector is otherwise running.
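A minimal sketch of the distinction the display now makes (the names are hypothetical; this is not the actual GWIstat code):

    def reported_state(observation_ready, hoft_ok):
        """Hypothetical illustration of the new GWIstat reporting logic."""
        if observation_ready and hoft_ok:
            return "OK"
        if observation_ready:
            return "Up, but no good h(t)"   # previously reported simply as down
        return "Down"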
aaron.viets@LIGO.ORG - 06:36, Wednesday 04 January 2017 (32959)
The attached pdf shows that CALCS and GDS agree on the calculation of kappa_tst. I suspect we may need to calculate new EPICS. Jeff (or perhaps Evan or Darkhan) will need to confirm this based on the recent L2/L3 crossover changes that Alex pointed out.
Images attached to this comment
Non-image files attached to this comment
aaron.viets@LIGO.ORG - 17:07, Thursday 05 January 2017 (32998)
Here is a comparison between h(t) computed in C00 frames (with kappas applied) and the "correct"-ish calibration, with no kappas applied. The first plot shows the spectra of the two from GPS time 1167559872 to 1167559936. The red line is C00, and the blue line has no kappas applied. The second plot is an ASD ratio (C00 / no-kappas-applied) during the same time period. 
The cache file that has the no-kappas-applied frames can be found in two locations:
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_GDS_frames.cache
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/calibration/H1/gstreamer10_test/H1_hoft_GDS_frames.cache

Also, the file
ldas-pcdev1.ligo-wa.caltech.edu:/home/aaron.viets/H1_hoft_test_1167559680-320.txt
is a text file that has only h(t) from GPS time 1167559680 to 1167560000.
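
A minimal sketch of how such a comparison could be reproduced with gwpy (the C00 frame type and the channel name are my assumptions; the no-kappas stream is read from the cache file listed above):

    from gwpy.timeseries import TimeSeries

    start, end = 1167559872, 1167559936

    # C00 h(t), with kappas applied (assumed frame type / channel name)
    c00 = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end, frametype='H1_HOFT_C00')

    # Test frames with no kappas applied, from the cache file listed above
    nokappa = TimeSeries.read('H1_hoft_GDS_frames.cache', 'H1:GDS-CALIB_STRAIN',
                              start=start, end=end)

    # ASD ratio (C00 / no-kappas), as in the second attached plot
    ratio = c00.asd(fftlength=16, overlap=8) / nokappa.asd(fftlength=16, overlap=8)
    plot = ratio.plot()
    plot.gca().set_xscale('log')
    plot.savefig('asd_ratio.png')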
Images attached to this comment
H1 CAL
jeffrey.kissel@LIGO.ORG - posted 18:54, Tuesday 03 January 2017 (32942)
Post-Break Calibration Reference Measurements Complete
J. Kissel

I've taken the calibration measurement suite that shall be representative of post-winter break. Analysis to come, data files listed below. 

Sensing Function Measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/
   Swept Sine:
       2017-01-03_H1DARM_OLGTF_4to1200Hz_25min.xml
       2017-01-03_H1_PCAL2DARMTF_4to1200Hz_8min.xml
   Broadband:
       2017-01-03_H1_PCAL2DARMTF_BB_5to1000Hz.xml

Actuation Function Measurements:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/FullIFOActuatorTFs/2017-01-03/
   UIM:
       2017-01-03_H1SUSETMY_L1_iEXC2DARM_25min.xml
       2017-01-03_H1SUSETMY_L1_PCAL2DARM_8min.xml
   PUM:
       2017-01-03_H1SUSETMY_L2_iEXC2DARM_17min.xml
       2017-01-03_H1SUSETMY_L2_PCAL2DARM_8min.xml
   TST: 
       2017-01-03_H1SUSETMY_L3_iEXC2DARM_8min.xml
       2017-01-03_H1SUSETMY_L3_PCAL2DARM_8min.xml

Note that this includes the new/better L2/L3 crossover design re-installed earlier today (see LHO aLOG 32933), both in ETMY itself and in the CAL-CS replica that forms DELTAL_EXTERNAL_DQ. The mean data points for the ratio of PCAL to DELTAL_EXTERNAL (which should be unity if we've calibrated the data correctly) show a ~10%, frequency-dependent deviation, worst at ~200 Hz. We'll have to wait until the time-dependent parameters are corrected for before deciding whether anything is really "wrong" or incorrect.

We know that we will have to adjust the actuation strength and sensing gain by a scalar ~1% because of mistakenly over-counting the gain of the analog anti-imaging and anti-aliasing filters (see LHO aLOG 32907), but this won't be the majority of the discrepancy.
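
As an illustration of the kind of PCAL-to-DELTAL_EXTERNAL ratio check described above, here is a minimal sketch using gwpy's csd/psd to form the transfer function (the channel names, the time span, and the assumption that the PCAL channel is already calibrated into displacement are mine):

    from gwpy.timeseries import TimeSeriesDict

    start, end = 1167512000, 1167512512   # assumed 512 s stretch of lock

    chans = ['H1:CAL-DELTAL_EXTERNAL_DQ', 'H1:CAL-PCALY_RX_PD_OUT_DQ']
    data = TimeSeriesDict.get(chans, start, end)
    deltal, pcal = data[chans[0]], data[chans[1]]

    # Transfer function from PCAL to DELTAL_EXTERNAL: csd(pcal, deltal) / psd(pcal)
    tf = pcal.csd(deltal, fftlength=16, overlap=8) / pcal.psd(fftlength=16, overlap=8)

    # With both channels in the same displacement units, |tf| should be ~1;
    # a frequency-dependent deviation (like the ~10% seen here) flags a problem.
    print(abs(tf))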
Images attached to this report
H1 TCS
edmond.merilh@LIGO.ORG - posted 18:15, Tuesday 03 January 2017 (32941)
TCSY chiller "flow is low" alarms

Two alarms so far, 5 minutes apart. Trends don't really show anything obvious.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:05, Tuesday 03 January 2017 (32940)
CDS O2 restart report: Thursday 22nd December 2016 - Monday 2nd January 2017

Thu 22nd Dec - Sat 24th Dec No restarts reported

Sun 25th Dec Many unexpected restarts of h1tw0 (05:35 - 13:10). System turned off to prevent further restarts.

Mon 26th Dec - Fri 30th Dec No restarts reported

Sat 31st Dec

2016_12_31 15:57 h1iopsusauxh34
2016_12_31 15:57 h1susauxh34

 

The h1susauxh34 computer died and was power cycled.

Sun 1st Jan - Mon 2nd Jan No restarts reported

H1 SUS (CAL, DetChar, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 16:58, Tuesday 03 January 2017 (32939)
Charge Measurement Update; All is well after Holiday Break
J. Kissel

I've grabbed traditional "charge" (effective bias voltage due to charge) measurements from H1 SUS ETMX and ETMY this afternoon during an earthquake. Measurements show that the effective bias voltage is still holding around/under +/-10 [V] in all quadrants. Nice!

Still on the to-do list: compare this against longitudinal actuation strength measurements via calibration lines, à la LHO aLOG 24547. Perhaps our New Year's resolution can be to start this regular comparison up again.
Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:57, Tuesday 03 January 2017 (32938)
CDS Maintenance summary, Tuesday 3rd January 2017

awgtpman issues

Jenne, Dave, Jim:

We experienced some TP issues this morning. The command-line "diag -l" was slow to start and did not support testpoints. First we restarted the models on h1susauxh34, since this computer had shown errors and CRC errors over the break; this did not fix the TPs. Next we restarted the awgtpman process on h1asc, and this did fix the problems. Remember that h1asc has a special awgtpman process to permit more testpoints to be opened. The reason for today's problem is unknown.

Guardian reboot

Dave, Jim:

To ensure the python leap-second updates were installed on all nodes, we rebooted h1guardian0 (it had been running for 33 days). All nodes came back with no problems. We recovered about 4GB of memory.

python gpstime leapseconds

Jim

gpstime package updated, see alog 32919 for details.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:48, Tuesday 03 January 2017 (32937)
Checking DAQ for leap second

Jeff K, Jonathan, Jim, Dave:

For due diligence we performed some sanity tests on the DAQ to confirm the leap-seconds did not cause any problems.

Event at Known Time:

Jeff K dropped a ball onto the control room floor at a recorded time. We looked at a seismometer signal (e.g. H1:ISI-GND_STS_HAM2_X_DQ) using both NDS1 (dataviewer and command-line) and NDS2. The signal showed up in the latter part of the recorded second as expected.
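
A minimal sketch of the NDS2 part of that check, using the nds2-client Python bindings (the event GPS second here is a placeholder, not the actual recorded time):

    import nds2

    event_gps = 1167523000   # placeholder: GPS second containing the ball drop

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    buffers = conn.fetch(event_gps, event_gps + 1, ['H1:ISI-GND_STS_HAM2_X_DQ'])
    seis = buffers[0].data   # numpy array for the one-second stretch

    # The impulse from the ball drop should appear in the latter part of the second.
    print(abs(seis).argmax(), 'of', len(seis), 'samples')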

Decode IRIG-B analog signal:

The digitized IRIG-B signal H1:CAL-PCALX_IRIGB_OUT_DQ was read by hand for an arbitrary GPS time. The time chosen is GPS = 1167523720, which corresponds to a UTC time of Jan 04 2017 00:08:22.

The decoded IRIG-B time is 00:08:40, which is UTC + 18. There have indeed been 18 leap seconds applied to UTC since the GPS epoch of Jan 1980, so this is correct.

For anyone interested in decoding IRIG-B by hand, the attached image shows the seconds, minutes, and hours portion of the analog signal along with the decoding table.
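
As a quick cross-check of the arithmetic above, a minimal sketch in Python (ignoring the detail that leap seconds make some UTC days unequal in length):

    from datetime import datetime, timedelta

    GPS_EPOCH = datetime(1980, 1, 6)   # GPS epoch, 6 Jan 1980 00:00:00 UTC
    GPS_MINUS_UTC = 18                 # leap seconds accumulated as of Jan 2017

    def gps_to_utc(gps_seconds):
        """Convert a GPS time in seconds to a (naive) UTC datetime."""
        return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

    utc = gps_to_utc(1167523720)
    print(utc)                                                # 2017-01-04 00:08:22
    # IRIG-B here encodes GPS-synchronous time of day, i.e. UTC + 18 s:
    print((utc + timedelta(seconds=GPS_MINUS_UTC)).time())    # 00:08:40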

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 16:16, Tuesday 03 January 2017 (32935)
Shift Summary - Eve Transition
TITLE: 01/04 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 14mph Gusts, 10mph 5min avg
    Primary useism: 0.18 μm/s
    Secondary useism: 0.20 μm/s 
QUICK SUMMARY:
H1 down after EQ in the Fiji area. Locking in progress and so far it is going well.
LHO General
corey.gray@LIGO.ORG - posted 16:01, Tuesday 03 January 2017 (32909)
Ops Day Summary

TITLE: 01/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC

STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ed
SHIFT SUMMARY:
Had snowy roads/parking lot & the Hanford site had a DELAY of 60-90 min.
Spent the first few hours recovering from Holiday Mode; around lunch time started an INITIAL ALIGNMENT; minor troubleshooting; made it to NOMINAL_LOW_NOISE for 1 min and then a 7.2 EQ took us down & will keep us down for a while.
 
LOG:

Restoring From Holiday Log Notes:

LHO VE
kyle.ryan@LIGO.ORG - posted 15:56, Tuesday 03 January 2017 (32934)
Swept off snow from X2-8 solar panels (Y2-8 had been blown clean since yesterday)
John, Kyle, Alfredo, Gerardo 

X2-8 battery charge a little low but still had lots of life left.
H1 ISC (CAL, DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 15:52, Tuesday 03 January 2017 (32933)
PUM/TST or L2/L3 Crossover Reverted to Better, Jul 2016 Design
J. Kissel

Before the holiday break, I'd discovered that we had somehow lost the settings that implemented the improved design of the L2/L3 (or PUM/TST) crossover -- see LHO aLOG 32540 for the bad-news discovery, and LHO aLOG 28746 for the original design.

I've now fixed the problem, and we have the new improved crossover again.

This required several steps:
(1) Turned on / switched over to the appropriate filters in the L2 and L3 DRIVEALIGN_L2L filter banks:
                  Good               Bad
     L2 L2L    (FM6, FM7)       (FM2, FM7, FM8)
     L3 L2L    (FM4, FM5)       (FM3, FM4)

(2) Turned on / switched over to the appropriate filters in the corresponding replicas of those filter banks in the CAL-CS model, so that the calibration will be unaffected.

(3) Changed the LOWNOISE_ESD_ETMY state of the ISC_LOCK guardian, such that it now forces the new configuration instead of the old. Committed to the userapps repository.

(4) Accepted the changes in the H1SUSETMY and H1CALCS SDF systems.

Hopefully we won't lose these settings again!
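
For illustration, a minimal sketch of the kind of filter-module switching the guardian state now enforces in step (3), using ezca as in ISC_LOCK (the exact lines in LOWNOISE_ESD_ETMY will differ, and the CAL-CS bank name is my assumption; this is not a copy of the real code):

    # Inside a guardian state's main()/run(), with `ezca` available:
    # engage the good L2/L3 crossover filters and disengage the old ones.
    ezca.switch('SUS-ETMY_L2_DRIVEALIGN_L2L', 'FM6', 'FM7', 'ON')
    ezca.switch('SUS-ETMY_L2_DRIVEALIGN_L2L', 'FM2', 'FM8', 'OFF')
    ezca.switch('SUS-ETMY_L3_DRIVEALIGN_L2L', 'FM4', 'FM5', 'ON')
    ezca.switch('SUS-ETMY_L3_DRIVEALIGN_L2L', 'FM3', 'OFF')

    # The CAL-CS replicas need the same configuration so the calibration
    # is unaffected (filter bank name below is assumed, not verified):
    ezca.switch('CAL-CS_DARM_ANALOG_ETMY_L2_DRIVEALIGN_L2L', 'FM6', 'FM7', 'ON')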
Images attached to this report
H1 General
jim.warner@LIGO.ORG - posted 15:08, Tuesday 03 January 2017 (32930)
LVEA Swept

While waiting for the ground to stop shaking, I ran through Betsy's annotated LVEA sweep. I didn't find anything out of place. I did run through the science-mode process for the PSL (unclear if that was necessary; I got the impression from the checksheet on the PSL that it was). Everything else seemed okay. I don't believe the ends have been done, but access is dicey today.

H1 TCS
betsy.weaver@LIGO.ORG - posted 12:20, Tuesday 20 December 2016 - last comment - 15:25, Tuesday 03 January 2017(32776)
TCSY In-line Flow Sensor replaced

This morning, Jason, Mark and I swapped the assumed-to-be-failing TCSY flow sensor, which has been showing epochs of glitching and low readout (while other indicators show normal flow, alogs 32712 and 32230). The process was as follows:

 

1) Key laser off at control box in rack, LVEA

2) Turn RF off at mezzanine rack, Mech room

3) Turn chiller off on mezzanine, Mech room

4) Turn power off on back of controller box in rack, LVEA (we also pulled the power cable to the sensor off the front of the controller, but it was probably overkill)

5) Close in-line valves under BSC chamber near yellow sensor to-be-swapped, LVEA

6) Quick-disconnect water tubes at manifold near table, LVEA

7) Pulled the yellow top off of the yellow sensor housing under the BSC at the piping, LVEA

8) Pulled the blue and black wires to the power receptacles inside the housing (see pic attached). Pulled the full grey cable out of the housing.

9) While carefully supporting the blue piping*, unscrewed the large white nut holding the housing/sensor to the piping (it was tough; in fact so tough that we later removed all of the teflon tape, which was unneeded in this joint)

10) Pull* straight up on the housing (hard) and it comes out of the piping.

11) Reverse all of the above steps to insert the new housing/sensor and wires, and turn everything back on. Watch for rolled o-rings on the housing and proper alignment of the notch feature when installing the new sensor. Verify the mechanical flow sensors in the piping line show a ~3-4 G/m readout when flow/chiller is restored to functionality.

12) Set up the new flow sensor head with settings: go to the other in-use sensor, pull off the top, and scroll through the menu items (red and white buttons on the unit, shown in pic). Set the new head to these values.

13) Verify the new settings on the head are showing a ~3 G/m readout on the MEDM screen. If not, possibly there is a setting on the sensor that needs to be revisited.

14) Monitor TCS to see that laser comes back up and stabilizes.

* Blue piping can crack, so be careful to always support it and avoid applying torque.

 

Note - with the sensor removed, we could see a lot of green murk in the blue piping where the paddle wheel sits. Still suffering green sludge in this system...

Images attached to this report
Comments related to this report
peter.king@LIGO.ORG - 12:57, Tuesday 20 December 2016 (32777)
A few pictures to add to those already posted.

The O-ring closest to the paddle wheel had a cut in it. It's not near the electronics, and there's the other O-ring, so it doesn't look like water was getting into where the electronics is housed.

Some kind of stuff stuck to each blade (paddle?) of the paddle wheel. Not a good sign if the cooling water for the laser is meant to be clean.
Images attached to this comment
marc.pirello@LIGO.ORG - 13:20, Tuesday 20 December 2016 (32778)

Settings were as follows:

FLO Unit (Flow Unit) = G/m (default was L/m)

FActor (K-Factor) = 135.00 (default was 20)

AVErage (Average) = 0

SEnSit (Sensitivity) = 0

4 Set (4mA Set Point) = 0 G/m

20 Set (20mA Set Point) = 10 G/m (default was 160)

ContrAST (Contrast) = 3

betsy.weaver@LIGO.ORG - 14:05, Tuesday 20 December 2016 (32782)

Here are both TCS systems' laser power and flow for the past day. The drop-out in the ITMY data is our few-hour sensor replacement work. So far no glitching or low droops. Although, there weren't any in the last 24 hours on the old sensor either.

Images attached to this comment
jason.oberling@LIGO.ORG - 15:17, Tuesday 03 January 2017 (32931)

Attached is a 14-day minute trend of the TCSy chiller flow rate and CO2 laser power since our swap of the TCSy flow sensor. There have been 7 glitches below 2 GPM, with 3 of those glitches being below 1 GPM; all 7 glitches occurred in the last week. Unless the spare flow sensor is also faulty (not beyond belief, but still a hard one to swallow), the root cause of our TCSy flow glitches lies elsewhere.

Images attached to this comment
alastair.heptonstall@LIGO.ORG - 15:25, Tuesday 03 January 2017 (32932)

It might be a good idea to try swapping the laser controller chassis next.  The electronics path for this flow meter is very simple - just the controller and then into the EtherCAT chassis where it's read by an ADC.
