H1 AOS
cheryl.vorvick@LIGO.ORG - posted 06:08, Saturday 29 October 2016 - last comment - 06:13, Saturday 29 October 2016(30984)
Looks like TCSY 0.4W isn't a good thing

It looks like the range started to drop about 20 minutes after the increase in TCSY from 0W to 0.4W, and continued to drop until lock loss. Plot attached.

Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 06:13, Saturday 29 October 2016 (30985)

I've turned the pre-heating off on TCSY. If anything, the optic is overheated and may need to cool before relocking is stable.

H1 SUS
cheryl.vorvick@LIGO.ORG - posted 05:47, Saturday 29 October 2016 (30983)
TMS alignment slider ramping times

Set to 2 seconds and saved in SDF.
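For reference, a minimal sketch of how such a ramp time could be set from a guardian shell, assuming the standard SUS OPTICALIGN TRAMP channel convention (the channel names here are illustrative, not copied from the actual change):

```python
# Hypothetical sketch: set a 2 s ramp on the TMS alignment sliders.
# Channel names are assumed from the usual SUS OPTICALIGN convention.
for optic in ('TMSX', 'TMSY'):
    for dof in ('P', 'Y'):
        ezca['SUS-%s_M1_OPTICALIGN_%s_TRAMP' % (optic, dof)] = 2
```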

H1 AOS (AOS)
cheryl.vorvick@LIGO.ORG - posted 04:26, Saturday 29 October 2016 (30982)
TCS changes

11:23 UTC: changed TCSY from 0W to 0.4W; TCSX was at 0.2W and remains at 0.2W. Next change in about 90 minutes.

H1 General (DetChar, OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 03:25, Saturday 29 October 2016 (30981)
H1 in semi-Observe, 10:21UTC

State of H1: Nominal Low Noise

Activities that will continue:

H1 ISC
sheila.dwyer@LIGO.ORG - posted 02:51, Saturday 29 October 2016 (30979)
some work on low frequencies tonight

I did a few more things tonight

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 22:35, Friday 28 October 2016 (30978)
DBB coherences

Now that we have the DBB plugged in again, we can relook at the coherences of the QPDs.  The screenshot shows coherences of the QPDs with DARM and SRCL for the 200W beam (opening the 35W shutter unlocks the IFO).

We have small coherences with DARM just below 1kHz and around 4.2kHz, but not otherwise.  SRCL does have more coherence with one of the QPDs below 100Hz.

Images attached to this report
H1 GRD (INJ)
evan.goetz@LIGO.ORG - posted 20:06, Friday 28 October 2016 - last comment - 09:00, Saturday 29 October 2016(30975)
Gain of PINJX_TRANSIENT filter bank
I think that the nominal setting of the CAL-PINJX_TRANSIENT filter bank gain is supposed to be zero. When a transient injection is imminent, then the gain is ramped to 1.0, the injection happens, and then the gain is ramped to zero. However, the SDF safe set point is 1.0. Therefore, I am setting the gain to 0 and accepting the change in the SDF safe file.
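For illustration, the intended sequence could be expressed with the ezca filter-module interface roughly as below. This is a sketch, not the actual INJ_TRANS guardian code, and the 5 s ramp time is an assumption:

```python
# Sketch of the intended gain sequence for transient injections
# (ramp time is an assumption, not the configured value).
pinjx = ezca.get_LIGOFilter('CAL-PINJX_TRANSIENT')

pinjx.ramp_gain(1.0, ramp_time=5, wait=True)   # open the injection path
# ... transient injection runs here ...
pinjx.ramp_gain(0.0, ramp_time=5, wait=True)   # back to the nominal gain of 0
```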
Comments related to this report
keith.thorne@LIGO.ORG - 06:26, Saturday 29 October 2016 (30986)CAL, GRD
The existing code in the INJ_TRANS Guardian script does indeed do this.  
david.barker@LIGO.ORG - 09:00, Saturday 29 October 2016 (30992)

If guardian is controlling the gain, perhaps SDF shouldn't be monitoring it.

H1 ISC (ISC, OpsInfo, TCS)
kiwamu.izumi@LIGO.ORG - posted 19:22, Friday 28 October 2016 (30974)
TCS long term measurement; no improvement so far, we should do differential next

Jeff B, Cheryl, Travis and Kiwamu,

In the past two days, Jeff, Cheryl and Travis performed a random walk on the CO2 settings for me (30920 and comments therein). I don't see a significant change so far. In fact, it might have slightly deteriorated the jitter peaks.

I now ask the operators to perform differential scans instead (e.g. raising only one CO2 at a time).


The motivation was to see if we can exert any kind of effect on the jitter peaks in 200-1000 Hz by changing the CO2 settings. Because TCS measurements usually take a long time, I have asked the operators to do a random walk on the TCS settings when possible. So far we have done a common scan (i.e. raising both CO2 powers simultaneously), and I don't see a big change in the jitter peaks in 200-1000 Hz, in particular the ones at 285, 365 and 620 Hz. The attached plot shows DARM spectra with different CO2 settings.

These curves correspond to the following times.

As you can see, the ambient noise (which most of the time appears to be shot noise) varies from time to time because some of the measurements overlapped with Robert's active injection tests. But this is not what I am looking for. Among the 6 noise curves, the best jitter noise was obtained at 27/10/2016 9:40:00 UTC, which is actually before the series of CO2 tests started. So it is possible that the common CO2 scan may have slightly deteriorated the jitter peaks. We should do a differential scan next.

Images attached to this report
H1 SUS (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 17:43, Friday 28 October 2016 (30972)
Charge Measurement Update; Still ready for regular bias sign flipping...
J. Kissel, B. Weaver

Betsy grabbed charge measurements yesterday. I've processed them. The charge is still right around 0 [V] effective bias -- we're ready for regular bias flipping. 
Images attached to this report
H1 ISC
kiwamu.izumi@LIGO.ORG - posted 17:20, Friday 28 October 2016 (30969)
SRCL calibration improved -- there was a sign confusion

Summary - The sensing sign in the online calibration for SRCL has been wrong.

This has been causing overestimated noise in 10 - 100 Hz in the past years(!). My bad. This is now fixed.


Details - A week or two ago, Daniel and Stefan told me that changing the shape of the digital filter in SRCL affected the calibrated SRCL displacement spectrum. This made me suspect that something was wrong in the online calibration, a.k.a. CALCS. Today, I re-measured the SRCL open loop gain in nominal low noise with an input power of 25 W. A plot of the open loop is attached to this entry. The absolute value of the SRCL sensing was found to be the same. However, the measurement indicated that the sensing gain should be a negative number (dP/dL < 0, i.e. smaller counts as the SRC length expands). This contradicts what we had in CALCS, where the sensing was set to positive. This is very similar to what we had in the online DARM calibration (29860), but this time SRCL has been wrong for years.

Flipping the sensing gain in CALCS (to match the sign of the measurement) decreased the noise level in the online monitor in 10 - 100 Hz, by a factor of 2 at 60 Hz. You can see the difference below.

The cyan curve is before the sign flip in CALCS, and the green is after. To double check the validity, I produced a calibrated spectrum using only SRCL_IN1 (blue), which agreed with the online calibration. There are small discrepancies of a few % between SRCL_IN1 and CAL-CS which, I believe, are due to the fact that we don't take the time delay of the system into account in CAL-CS. The sign flip is implemented by adding a minus sign in FM1 of CAL-CS_SUM_SRCL_ERR (which is now a scalar value of -9.55e-6). I did not change the absolute value.

Additionally, I looked at some calibration code that I wrote some time ago (18742) and confirmed that I mistakenly canceled the minus sign in the sensing gain of the model. Also, according to the ISC_LOCK guardian code and trend data, neither the sign of the SRCL servo gain in LSC nor the relevant filters in SUS SRM changed in at least the past year. I am fairly sure this calibration has been wrong for quite some time.
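As a toy illustration of why a mis-signed sensing gain distorts the reconstructed spectrum, consider a single-frequency loop model. All numbers below are placeholders (not H1 parameters), and the real CAL-CS chain is frequency dependent; this only shows the sign mechanics:

```python
# Toy single-frequency loop: err = C*(d + A*ctrl), ctrl = -D*err.
# Displacement is reconstructed as d_rec = err/C - A*ctrl.
import numpy as np

C = 1 / -9.55e-6                        # sensing, counts/m (negative: dP/dL < 0)
A = 1e-12                               # actuation, m/count (placeholder)
G = 1.0 * np.exp(1j * np.deg2rad(120))  # open loop gain near UGF (placeholder)
D = G / (C * A)                         # servo gain implied by G

d = 1e-15                               # true SRCL displacement (placeholder)
err = C * d / (1 + G)                   # in-loop error signal
ctrl = -D * err                         # control signal

d_good = err / C - A * ctrl             # reconstruction, correct sensing sign
d_bad = err / (-C) - A * ctrl           # reconstruction, flipped sensing sign

print(abs(d_good) / d)                  # 1.0: correct sign recovers d exactly
print(abs(d_bad) / d)                   # ~1.7 here: wrong sign overestimates
```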

The relevant items can be found at the following locations.

Open loop measurement: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Measurements/LscDrmi/SRCL_oltf_25W_20161028.xml

Open loop analysis code: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Scripts/LscDrmi/H1SRCL_OLTFmodel_20161028.m

Plots for open loop: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Results/LscDrmi/2016-10-28_SRCL_openloop.pdf

                /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Results/LscDrmi/2016-10-28_SRCL_openloop.png

Images attached to this report
Non-image files attached to this report
H1 SUS (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 17:13, Friday 28 October 2016 - last comment - 10:34, Monday 31 October 2016(30970)
All V3 & R3 Modes Notched in All HSTS Suspensions -- Success at 27.5 Hz and 41 Hz.
J. Kissel

With hints of improvement last night from Sheila (see LHO aLOG 30947) on reducing the most egregious ~41 Hz peak in DARM (which is known to be the HSTS roll modes, a.k.a R3), I launched a campaign of adding similar ~1 Hz wide notches to all HSTS suspensions' local damping loops (M1_DAMP) at both the highest roll mode and highest vertical modes (a.k.a V3 at ~27.5 Hz).

The filter designs are
V3 notch: FM8 "SB27.5" ellip("BandStop",4,1,60,27.05,28.05)gain(1.12202)   [Input -- Always On; Output -- Ramp; Ramp Time -- 2 sec]
R3 notch: FM9 "SB40.9" ellip("BandStop",4,1,60,40.4,41.3)gain(1.12202)     [Input -- Always On; Output -- Ramp; Ramp Time -- 2 sec]
where I've made sure that the lowest and highest frequency V3 modes (coincidentally both IMC mirrors: MC1 @ 27.38 Hz, MC2 @ 27.74 Hz) fall within the stop band of the notch (and I've confirmed that this is true of Sheila's R3 notch design as well). Further, the V3 notch causes a phase loss of only 2.2 [deg] at 10 Hz, so it will not impact any of the damping loops' phase margins. To confirm, I've spot checked SR2's local damping open loop gain TFs just to be sure. Indeed all DOFs are still quite stable (and quite poorly tuned, as expected).
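For anyone wanting to check the notch shape offline, here is a sketch of an equivalent design in scipy. Assumptions: a 16384 Hz model rate, and that foton's order-4 band-stop maps to scipy's N=2 prototype (scipy doubles the order for band-stop designs); the gain(1.12202) factor (+1 dB) compensates the elliptic passband ripple.

```python
import numpy as np
from scipy import signal

fs = 16384  # Hz; assumed model rate
# 1 dB passband ripple, 60 dB stopband attenuation, 27.05-28.05 Hz stop band
sos = signal.ellip(2, 1, 60, [27.05, 28.05], btype='bandstop',
                   output='sos', fs=fs)

# Check the attenuation at the MC1/MC2 V3 modes and the phase loss at 10 Hz
f = np.array([10.0, 27.38, 27.5, 27.74])
_, h = signal.sosfreqz(sos, worN=f, fs=fs)
print(20 * np.log10(np.abs(h)))    # magnitude at each frequency, dB
print(np.degrees(np.angle(h[0])))  # phase at 10 Hz, deg
```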

During the campaign I found that PRM, MC1, and MC3's "ellip50" standard low-pass filters were not engaged, so I engaged them.

I've greened up all ODC status lights, and then accepted all of the changes in the SDF system -- V3 filters ON, R3 filters ON, ellip50 filters ON, and Correct Damp State.

While we haven't had an uber-long lock stretch since I turned on all of these notches, I was at least able to grab 5 averages of a 5 [mHz] BW ASD and compare against the long data set from last night. The notches appear to have done their job -- a large fraction of the modes have been squashed and don't show up in DARM anymore. Success! 

The SR2 damping loop open loop gain TF templates have been committed here:
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/SR2/SAGM1/Data/
2016-10-28_2223_H1SUSSR2_M1_openloop_L_WhiteNoise.xml
2016-10-28_2223_H1SUSSR2_M1_openloop_P_WhiteNoise.xml
2016-10-28_2223_H1SUSSR2_M1_openloop_R_WhiteNoise.xml
2016-10-28_2223_H1SUSSR2_M1_openloop_T_WhiteNoise.xml
2016-10-28_2223_H1SUSSR2_M1_openloop_V_WhiteNoise.xml
2016-10-28_2223_H1SUSSR2_M1_openloop_Y_WhiteNoise.xml
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:34, Monday 31 October 2016 (31027)DetChar, IOO, ISC
Following up with another extremely long lock stretch from over the weekend (new data starts at 2016-10-31 07:00 UTC), it looks like I've definitely killed several of the resonances, such that they don't substantially appear in the DARM noise. 

However, the MC2 V3 mode @ 27.7642 Hz and what remains of the PR2 R3 mode @ 40.935 Hz are still as bad as they were before. This likely means these modes are getting excited via non-local control. ISC control is a likely culprit for MC2, because its V3 mode is abnormally high and may be out of range of a generic notch. Further, the IMC crossover is around 15 Hz, so notching these frequencies is more difficult (i.e. there are currently no notches in the M3 stage length control of the IMC). 

We'll continue to poke around looking for poorly notched loops.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:07, Friday 28 October 2016 (30971)
CDS model and DAQ restart report, Tuesday 25th - Thursday 27th October 2016

model restarts logged for Thu 27/Oct/2016
2016_10_27 10:30 h1psliss
2016_10_27 10:32 h1broadcast0
2016_10_27 10:32 h1dc0
2016_10_27 10:32 h1fw0
2016_10_27 10:32 h1fw1
2016_10_27 10:32 h1fw2
2016_10_27 10:32 h1nds0
2016_10_27 10:32 h1nds1
2016_10_27 10:32 h1tw0
2016_10_27 10:32 h1tw1

Daniel's ISS model change with associated DAQ restart.

model restarts logged for Wed 26/Oct/2016
2016_10_26 00:26 h1fw2
2016_10_26 06:36 h1fw2
2016_10_26 08:21 h1fw2
2016_10_26 12:14 h1fw2
2016_10_26 12:32 h1fw2
2016_10_26 12:38 h1fw2
2016_10_26 13:44 h1fw2
2016_10_26 16:45 h1susetmx

Jonathan's daqd work on fw2. Jeff restarted susetmx as part of BIO investigation.

model restarts logged for Tue 25/Oct/2016
2016_10_25 08:23 h1sushtts
2016_10_25 08:38 h1susmc1
2016_10_25 08:38 h1susmc3
2016_10_25 08:38 h1susprm
2016_10_25 08:39 h1suspr3
2016_10_25 08:40 h1susim
2016_10_25 08:48 h1susmc2
2016_10_25 08:48 h1suspr2
2016_10_25 08:48 h1sussr2
2016_10_25 08:57 h1sussrm
2016_10_25 08:59 h1susomc
2016_10_25 08:59 h1sussr3
2016_10_25 09:10 h1susbs
2016_10_25 09:10 h1susitmx
2016_10_25 09:10 h1susitmy
2016_10_25 09:18 h1susauxasc0
2016_10_25 09:20 h1susauxh2
2016_10_25 09:20 h1susauxh34
2016_10_25 09:20 h1susauxh56
2016_10_25 09:22 h1susauxb123
2016_10_25 09:33 h1susetmx
2016_10_25 09:35 h1sustmsx
2016_10_25 09:37 h1susetmy
2016_10_25 09:37 h1sustmsy
2016_10_25 09:40 h1susauxex
2016_10_25 09:41 h1susauxey

2016_10_25 10:23 h1alsey
2016_10_25 10:23 h1caley
2016_10_25 10:23 h1iopiscey
2016_10_25 10:23 h1iscey
2016_10_25 10:23 h1pemey

2016_10_25 10:28 h1broadcast0
2016_10_25 10:28 h1dc0
2016_10_25 10:28 h1fw0
2016_10_25 10:28 h1fw1
2016_10_25 10:28 h1fw2
2016_10_25 10:28 h1nds0
2016_10_25 10:28 h1nds1
2016_10_25 10:28 h1tw1

2016_10_25 11:58 h1iopiscex
2016_10_25 11:58 h1pemex
2016_10_25 12:00 h1alsex
2016_10_25 12:00 h1calex
2016_10_25 12:00 h1iscex
2016_10_25 12:02 h1fw2
2016_10_25 12:10 h1fw2
2016_10_25 12:27 h1fw2
2016_10_25 12:28 h1fw2
2016_10_25 12:32 h1fw2
2016_10_25 13:52 h1fw2
2016_10_25 13:56 h1fw2

Tuesday maintenance. All SUS DAQ configurations changed. Unexpected restarts of h1iscey and h1iscex, related to powering of test equipment. Jonathan's daqd testing on fw2.

H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Friday 28 October 2016 (30964)
Ops Day Shift Summary

TITLE: 10/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Jim
SHIFT SUMMARY:  Walked in to a locked IFO.  It broke lock shortly thereafter, possibly due to PEM injections.  After a bit of a prolonged IA, we have been back at NLN for the last half of the day.
LOG:

15:13 Robert to LVEA

15:15 Chris to MX

15:57 Chandra to EX

16:15 Chandra back

16:19 Kiwamu to LVEA for ISS OLTF

16:20 Betsy, Bubba to LVEA

16:36 Betsy, Bubba out

16:37 Kiwamu done

17:30 Adjusted fiber polarization for both arms

18:48 Richard down the Xarm BTE

21:26 Chandra to MY
 

LHO VE
chandra.romel@LIGO.ORG - posted 15:26, Friday 28 October 2016 (30968)
CP3 overfill
2:30 pm local

Took 24 sec. to overfill CP3 by doubling LLCV to 36% open. TC plot attached.
Images attached to this report
H1 TCS
jeffrey.bartlett@LIGO.ORG - posted 06:49, Thursday 27 October 2016 - last comment - 23:24, Saturday 29 October 2016(30920)
TCS Laser Noise

Kiwamu asked the operators to run some TCS laser noise measurements.

SETUP:

Started the run: 

    TCS X - Initial Power = 0.2W          TCS Y - Initial Power = 0.0W

    Time    Power                         Time    Power
    03:00   0.3W                          03:00   0.1W
    04:30   0.4W                          04:30   0.2W

At 05:08, lost lock due to a Mag 5.8 EQ in Alaska.

Comments related to this report
travis.sadecki@LIGO.ORG - 15:58, Thursday 27 October 2016 (30943)

I only managed to get one more data point for both arms:

15:30 UTC: TCSx at 0.5W for 90 minutes; TCSy at 0.3W for 90 minutes.

cheryl.vorvick@LIGO.ORG - 03:04, Friday 28 October 2016 (30952)OpsInfo

Oct 28, 10:03 UTC: TCSX power set to 0.6W, TCSY power set to 0.4W.

cheryl.vorvick@LIGO.ORG - 04:32, Friday 28 October 2016 (30953)OpsInfo

Oct 28, 11:32 UTC: changed X from 0.6W to 0.7W, changed Y from 0.3W to 0.4W.

cheryl.vorvick@LIGO.ORG - 06:28, Friday 28 October 2016 (30954)

Oct 28, 13:27 UTC: TCSX raised to 0.8W, TCSY raised to 0.5W.

cheryl.vorvick@LIGO.ORG - 07:00, Friday 28 October 2016 (30955)

As the range dropped and the arm signals got noisier, I feared H1 was about to lose lock and touched up the TMSX and TMSY alignment.  Hopefully this didn't invalidate the data for the TCS analysis.

Tweaks by TMS:

  • tmsx pitch 12:16UTC, 12:27UTC
  • tmsx yaw 9:56UTC
  • tmsy pitch 12:29UTC, 12:47UTC, 13:39UTC
  • tmsy yaw 12:48UTC,13:30 to 13:38UTC

Tweaks and TCS changes by timeline:

  • 9:56UTC
  • 10:03UTC - changed TCS
  • 11:32UTC - changed TCS
  • 12:16UTC
  • 12:27UTC
  • 12:29UTC
  • 12:47UTC
  • 12:48UTC
  • 13:27UTC - changed TCS
  • 13:30 to 13:38UTC

travis.sadecki@LIGO.ORG - 15:26, Friday 28 October 2016 (30967)

15:05 UTC: TCSx at 0.9W for 45 min; TCSy at 0.6W for 45 min.

jim.warner@LIGO.ORG - 23:24, Saturday 29 October 2016 (30999)

At 4:43 UTC today (10/30 UTC, still 10/29 PST), after Robert left, I changed TCSY to 0.2W, per Cheryl's suggestion left with Corey. TCSX is still at 0.4W.

H1 ISC (CDS, GRD, ISC)
jenne.driggers@LIGO.ORG - posted 20:32, Monday 24 October 2016 - last comment - 09:10, Saturday 29 October 2016(30831)
cdsutils avg giving weird results in guardian??

cdsutils.avg() in guardian sometimes returns very weird values. 

We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM.  At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down).  Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.

The next lock, it correctly got a value for the TransX (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write the exact same value that was already there (epics writes aren't logged if they don't change the value). 

So, why did 3 different cdsutils averages all return a value of 751.242126465?

This isn't the first time that this has happened.  Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week. 

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 21:01, Monday 24 October 2016 (30832)

This is definitely a very strange behavior.  I have no idea why that would happen.

As with most things guardian, it's good to try to get independent verification of the effect.  If you make the same cdsutils avg calls from the command line do you get similarly strange results?  Could the NDS server be getting into a weird state?
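For example, one could run the same average from a python prompt, independent of the guardian process. A sketch; the channel names here are guesses at what Prep_TR_CARM averages, so substitute the real ones:

```python
# Independent check of cdsutils.avg outside guardian.
import cdsutils

vals = cdsutils.avg(10, ['H1:LSC-TR_X_QPD_B_SUM_INMON',
                         'H1:LSC-TR_Y_QPD_B_SUM_INMON'])
# Two independent channels returning identical averages would be suspicious.
print(vals)
```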

jenne.driggers@LIGO.ORG - 21:11, Monday 24 October 2016 (30833)

On the one hand, it works just fine right now in a guardian shell.  On the other hand, it also worked fine for the latest acquisition.  So, no conclusion at this time.

jenne.driggers@LIGO.ORG - 01:03, Tuesday 25 October 2016 (30838)OpsInfo

This happened again, but this time the numbers were not identical.  I have added a check to the Prep_TR_CARM state: if the absolute value of an offset is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), notify and don't move on. 

Operators:  If you see the notification Check Trans QPD offsets! then look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET.  If you do an ezca read on that number and it's giant, you can "cheat" and try +0.3 for X, and +0.2 for Y, then go back to trying to find IR.
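A sketch of the kind of guard described above (not the literal ISC_LOCK state code; channel names assumed from this entry):

```python
# Reject absurd averages before writing them as QPD dark offsets.
offset = -cdsutils.avg(5, 'H1:LSC-TR_X_QPD_B_SUM_INMON')  # offset ~ -1*INMON
if abs(offset) > 5:
    notify('Check Trans QPD offsets!')  # guardian user message; don't move on
else:
    ezca['LSC-TR_X_QPD_B_SUM_OFFSET'] = offset
```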

sheila.dwyer@LIGO.ORG - 21:10, Friday 28 October 2016 (30976)OpsInfo

This happened again today, to Jim and Cheryl, and caused multiple locklosses.

I've commented out the averaging of the offsets in the guardian. 

We used to not do this averaging, and just rely on the dark offsets not to change.  Maybe we could go back to that.  
For operators, until this is fixed you might need to set these by hand:

If you are having trouble with FIND IR, this is something to check.  From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen.  Then click on the INput button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets.  (If there is no IR in the arms, the offset should be about -1*INMON.)
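The same check and the manual "cheat" can be done from a guardian shell, e.g. (channel names and fallback values taken from the comments above):

```python
# Read the current offsets; a giant value means a bad average was written.
print(ezca['LSC-TR_X_QPD_B_SUM_OFFSET'], ezca['LSC-TR_Y_QPD_B_SUM_OFFSET'])
# Fall back to the typical values quoted above, then retry FIND IR.
ezca['LSC-TR_X_QPD_B_SUM_OFFSET'] = 0.3
ezca['LSC-TR_Y_QPD_B_SUM_OFFSET'] = 0.2
```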

Images attached to this comment
david.barker@LIGO.ORG - 09:10, Saturday 29 October 2016 (30994)

Opened as high priority fault in FRS:

ticket 6559

H1 GRD
sheila.dwyer@LIGO.ORG - posted 15:15, Monday 24 October 2016 - last comment - 09:06, Saturday 29 October 2016(30815)
ezca connection error

Ed, Sheila

Are ezca connection errors becoming more frequent?  Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).

The first one was from ISC_LOCK, the screenshot is attached. 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 18:15, Monday 24 October 2016 (30828)

Happened again, but for a different channel, H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON (Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep trying to dig and see what I find.

Relevant ISC_LOCK log:

2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z   File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z   File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z     if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
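The TypeError above comes from the pyepics connection callback comparing channel IDs when args.chid is None. A defensive guard along these lines would avoid the crash (illustrative only; not the actual upstream pyepics fix):

```python
# In pyepics ca.py, _onConnectionEvent -- sketch of a guard, not the real patch:
if args.chid is not None and int(ichid) == int(args.chid):
    # ... existing connection-state bookkeeping ...
    pass
```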

sheila.dwyer@LIGO.ORG - 21:12, Friday 28 October 2016 (30977)

It happened again just now. 

Images attached to this comment
david.barker@LIGO.ORG - 09:06, Saturday 29 October 2016 (30993)

Opened FRS on this, marked a high priority fault.

ticket 6558

H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 00:36, Thursday 01 October 2015 - last comment - 18:06, Friday 28 October 2016(22140)
Official, Representative Calibrated ASD for the Start of O1 -- Now With Time Dependent Corrections Displayed
J. Kissel, for the Calibration Team

I've updated the results from LHO aLOG 21825 and G1501223 with an ASD from the current lock stretch, such that I could display the computed time dependent correction factors, which have recently been cleared of systematics (LHO aLOG 22056), sign errors (LHO aLOG 21601), and bugs yesterday (22090). 

I'm happy to say that not only does the ASD *without* time dependent corrections still fall happily within the required 10%, but if one eye-balls the time-dependent corrections and how they would be applied at each of the respective calibration line frequencies, they make sense.

For all relevant plots (probably only interesting to calibrators and their reviewers), see the first pdf attachment. The second and third .pdfs are the money plots, and the text files are raw ascii dumps of the respective curves so you can plot them however or wherever you like. All of these files are identical to what is in G1501223.

This analysis and plots have been made by
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/produceofficialstrainasds_O1.m
which has been committed to the svn.
Non-image files attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 18:06, Friday 28 October 2016 (30973)

Apparently, this script has been moved to a slightly different location. The script can be found at

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMASDs/produceofficialstrainasds_O1.m
