Reports until 16:00, Saturday 01 August 2015
H1 ISC
lisa.barsotti@LIGO.ORG - posted 16:00, Saturday 01 August 2015 - last comment - 18:34, Saturday 01 August 2015(20131)
High frequency excess noise is ~0.6 times shot + dark noise
Evan, Lisa

This entry clarifies that the impact of this excess high-frequency noise is actually bigger than the coherence with the ASC channels suggests, as can be clearly seen by comparing OMC NULL and SUM.

For example, around 2 kHz the discrepancy in the noise floor between OMC SUM (total noise) and OMC NULL (shot + dark noise) is about 15%, corresponding to an excess noise of about 0.6 times shot + dark.
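The step from a 15% discrepancy to "0.6 times shot + dark" follows from adding uncorrelated noises in quadrature. A minimal sketch of that arithmetic (the 1.15 ratio is the number quoted above):

```python
import math

# If total noise (OMC SUM) is the quadrature sum of shot + dark (OMC NULL)
# and an uncorrelated excess, a 15% discrepancy in the noise floor implies:
ratio = 1.15                      # SUM / NULL amplitude ratio around 2 kHz
excess = math.sqrt(ratio**2 - 1)  # excess amplitude relative to shot + dark

print(f"excess = {excess:.2f} x (shot + dark)")  # ~0.57, i.e. about 0.6
```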

The attachment shows OMC SUM/NULL in H1 at low noise (left) compared to L1 (right). 

So, the message is that we are looking for something quite big here...

Images attached to this report
Comments related to this report
lisa.barsotti@LIGO.ORG - 18:34, Saturday 01 August 2015 (20140)
Maybe not surprisingly, this noise is not stationary from lock to lock. Last night the noise was lower than the night before (first plot: compare the OMC SUM green trace with the red trace; NULL was the same in both locks).
Images attached to this comment
H1 AOS
robert.schofield@LIGO.ORG - posted 12:37, Saturday 01 August 2015 - last comment - 18:01, Saturday 01 August 2015(20130)
PEM injections after 17:30 UTC

After 17:30 UTC the interferometer was not undisturbed: I was making PEM injections.

Comments related to this report
lisa.barsotti@LIGO.ORG - 16:20, Saturday 01 August 2015 (20133)DetChar, ISC
The interferometer had been locked undisturbed in low noise for several hours before Robert started his injections.

The range degraded slowly over time, and it was polluted by some huge glitches, similar to what has been observed in the past.
Images attached to this comment
lisa.barsotti@LIGO.ORG - 18:01, Saturday 01 August 2015 (20138)PSL
It turns out that the range was degraded by a changing ISS coupling during the lock. 
Evan and Matt had left the ISS second loop open, as they were having problems with it.

You would see a plot with a DARM spectrum at the beginning and at the end of this lock, showing large peaks appearing in DARM (a factor of a few above the noise floor), if DTT hadn't crashed on me twice while trying to save the plot as a PDF...
Non-image files attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 12:22, Saturday 01 August 2015 (20129)
1155 -1210 hrs. local -> In and out of X-end VEA
 Measured temps of heated areas of RGA -> 95C < temps < 120C -> Made slight changes to variac settings -> Aux. cart @ 2.5 x 10^-5 Torr (seems high for this configuration)
H1 DAQ (CDS)
david.barker@LIGO.ORG - posted 10:14, Saturday 01 August 2015 - last comment - 10:42, Saturday 01 August 2015(20127)
DAQ still stable, one week on

It's been a week since the DAQ reconfiguration which reduced the NFS/QFS disk loading, and both framewriters continue to be 100% stable. The attached plot shows the restarts of h1fw0 (red circles), h1fw1 (green circles) and the DAQ system as a whole (blue squares) for the month of July. The magenta lines show when h1fw0 and h1fw1 were modified. In the past 7 days, the only restarts of the framewriters are associated with complete DAQ restarts.

Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 10:42, Saturday 01 August 2015 (20128)DAQ
This indicates the existing aLIGO DAQ frame writers meet/exceed the original design requirement (~10 MB/sec frames to disk). They do not meet the current needs of ~30-40 MB/sec, of course.
H1 ISC
evan.hall@LIGO.ORG - posted 03:36, Saturday 01 August 2015 - last comment - 18:01, Saturday 01 August 2015(20126)
Sum and null of OMC DCPDs, noch einmal

Matt, Lisa, Evan

Tonight we looked at the coherences between the OMC DCPD channels and ASC AS C, this time at several different interferometer powers. In the attached plots, green is at 11 W, violet is at 17 W, and apricot is at 24 W.

Evidently, the appearance of excess high-frequency noise in OMC DCPD sum (and the coherence of OMC DCPD sum with ASC AS C) grows as the power is increased. We believe that this behavior rules out the possibility that this excess noise is caused by RIN in the AS port carrier, assuming that any such RIN is independent of the DARM offset and of the PSL power. Since the DARM offset is adjusted during power-up to maintain a constant dc current on the DCPDs, RIN in the AS carrier should result in an optical power fluctuation whose ASD (in W/rtHz) does not vary during the power-up. This is the behavior that we see in the null stream, where the constant DCPD dc currents ensure that the shot-noise-induced power fluctuation is independent of the PSL power.
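The scaling argument above can be sketched numerically. In this illustrative snippet the DCPD dc power value is an assumption chosen only to show the scaling, not a measured number; the point is that both the shot-noise ASD and any AS-carrier-RIN-induced power fluctuation depend only on the (held-constant) dc power, not on the PSL power:

```python
import math

H = 6.626e-34    # Planck constant, J*s
NU = 2.818e14    # optical frequency for 1064 nm light, Hz
P_DC = 0.020     # DCPD dc power, W (illustrative value, held fixed by the DARM offset)

def shot_noise_asd(p_dc):
    """Shot-noise power-fluctuation ASD in W/rtHz for dc power p_dc."""
    return math.sqrt(2 * H * NU * p_dc)

def rin_noise_asd(rin, p_dc):
    """Power-fluctuation ASD from carrier RIN (1/rtHz) at dc power p_dc."""
    return rin * p_dc

# Power-up changes the PSL power, but the DARM offset keeps p_dc constant,
# so neither term grows -- unlike the observed excess, which grows with PSL power.
for psl_power in (11, 17, 24):   # W, as in the measurement
    print(psl_power, shot_noise_asd(P_DC), rin_noise_asd(1e-8, P_DC))
```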

Images attached to this report
Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 18:01, Saturday 01 August 2015 (20139)

On a semi-related note, the slope in the OMC DCPDs at high frequencies is mostly explained by the uncompensated preamp poles and the uncompensated AA filter.

Non-image files attached to this comment
H1 ISC
jameson.rollins@LIGO.ORG - posted 01:44, Saturday 01 August 2015 - last comment - 17:16, Saturday 01 August 2015(20125)
ISC_LOCK::DOWN state is back to being a 'goto'

I modified the ISC_LOCK guardian to revert the DOWN state back to being a 'goto'.  This allows you to select the state directly, without having to go to MANUAL.

The reason it had been removed as a 'goto' was that occasionally someone would accidentally request a lower state while the IFO was locked, which would cause the IFO to go back through DOWN to get to the errantly requested state.  To avoid this I implemented some graph shenanigans:  I disconnected DOWN from the rest of the graph, but told it to jump to a new READY state at the bottom of the main connected part of the graph once it's done:

This allows DOWN to be a goto, so it's always directly requestable, but prevents guardian from seeing a path through it to the rest of the graph.  Once DOWN is done, though, it jumps into the main part of the graph at which point guardian will pick up with the last request and move on up as expected.
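The graph trick can be illustrated with a toy reachability check (the state names and edges here are illustrative, not the actual ISC_LOCK graph or guardian's API): DOWN has no edges in the searchable graph, so path-finding can never route through it, while its jump to READY happens outside the graph.

```python
# Toy state graph. DOWN is disconnected, so a guardian-style path search
# can't traverse it; its post-completion jump to READY is handled separately.
EDGES = {
    "READY": ["LOCKING"],
    "LOCKING": ["NOMINAL_LOW_NOISE"],
    "NOMINAL_LOW_NOISE": [],
    "DOWN": [],   # no edges in or out: invisible to the path search
}

def reachable(start, goal, edges):
    """Breadth-first search for a path from start to goal."""
    queue, seen = [start], {start}
    while queue:
        node = queue.pop(0)
        if node == goal:
            return True
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# DOWN is never an intermediate step of any path, but it remains directly
# requestable (goto); once it finishes it jumps to READY and the search
# resumes from there.
print(reachable("READY", "NOMINAL_LOW_NOISE", EDGES))  # True
print(reachable("READY", "DOWN", EDGES))               # False
```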

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 17:16, Saturday 01 August 2015 (20136)

Well, that didn't work.  See alog 20134.  Separating DOWN from the rest of the graph caused some unanticipated bad effects.  This is actually not inherent in disconnecting DOWN from the rest of the graph, but it needed to be considered a bit more carefully.  See the other post for more info.

H1 SUS
jeffrey.kissel@LIGO.ORG - posted 22:09, Friday 31 July 2015 (20124)
All SUS Models Clear of Redundant IPC Error EPICs Channels
J. Kissel
WP 5395
ECR E1500230
II 1054

I've removed all redundant IPC Error EPICS channels from the top-level models of all SUS this evening. This is in accordance with ECR E1500230. The models compile and have been committed to the SVN. They will be installed this coming Tuesday.

Once installed, this closes out the ECR and Integration Issue for LHO.
H1 SUS (CDS, SUS)
sheila.dwyer@LIGO.ORG - posted 22:00, Friday 31 July 2015 (20123)
UIM rocks

Jeff, Sheila

We have had three locklosses in the last 2 days that were caused by the ETMX UIM coil driver rocker switch tripping.  The only solution is to drive to the end station, and flip the rocker switch back on.  Something is wrong with this coil driver (Jeff thinks it should just be replaced).

The only way to notice this is by looking at the OSEM centering MEDM screen.  It would be great to add it to SYS DIAG (it's not caught by the current noisemon check), the ops overview screen, and the quad overview in a more obvious way.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 18:10, Friday 31 July 2015 (20119)
h1susey still IOP glitching with new BIOS settings

Over the past two days h1susey has IOP glitched 10 times, compared with only two times in the previous two days. Here is a log of the recent glitches:

Jul 30 2015 00:26:43 PDT ADC
Jul 30 2015 12:40:43 PDT ADC
Jul 30 2015 21:23:43 PDT TIM+ADC
Jul 30 2015 23:47:43 PDT TIM+ADC
Jul 31 2015 02:27:43 PDT TIM+ADC
Jul 31 2015 03:42:43 PDT TIM+ADC
Jul 31 2015 05:32:43 PDT TIM+ADC
Jul 31 2015 05:49:43 PDT TIM+ADC
Jul 31 2015 08:19:43 PDT TIM+ADC
Jul 31 2015 08:50:43 PDT ADC
 
As seen before, sometimes the IOP stateword shows both Timing and ADC errors; other times, only ADC errors.
H1 ISC (IOO)
evan.hall@LIGO.ORG - posted 17:41, Friday 31 July 2015 (20117)
IMC REFL power

With the IMC unlocked, these are some numbers for power on the IMC REFL PD (S1203775):

H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 16:40, Friday 31 July 2015 (20114)
Power drop on 9MHz oscillator
Evan, Kiwamu, Stefan

Evan reported that he observed a sudden drop in several AS power signals, including 
H1:ASC-AS_C_SUM_OUT_DQ
H1:LSC-ASAIR_B_RF90_I_ERR_DQ
around Jul 31 2015 11:23:32 UTC

We found a corresponding drop of the 9MHz main oscillator feed, monitored by:
H1:ISC-RF_C_REFLAMP9M1_OUTPUTMON
H1:ISC-RF_C_AMP9M1_OUTPUTMON
Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 16:35, Friday 31 July 2015 - last comment - 18:23, Friday 31 July 2015(20113)
channels which differ between the two sites' science frames

attached is a file listing the channel differences between the L1 and H1 science frames.

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:23, Friday 31 July 2015 (20121)DAQ, ISC, SEI, SUS, SYS
It looks like almost all of the non-PEM differences can be explained by differences in hardware, control scheme/choices, and channels that have not yet been deprecated due to little-to-no maintenance. 

LHO has a beam rotation sensor, and LLO does not.
< H1:ISI-GND_BRS_ETMX_REF_OUT_DQ 256
< H1:ISI-GND_BRS_ETMX_RY_OUT_DQ 256

LHO uses a different tidal scheme (T1400733).
< H1:LSC-Y_ARM_OUT_DQ 256
< H1:LSC-Y_TIDAL_OUT_DQ 256

LHO has not yet updated the CAL-CS calibration for IMC-F, so it remains in OAF.
< H1:OAF-CAL_IMC_F_DQ 16384

LLO has PI damping and LHO does not?
> L1:LSC-X_EXTRA_AI_1_OUT_DQ 2048
> L1:LSC-X_EXTRA_AI_2_OUT_DQ 2048
> L1:LSC-X_EXTRA_AI_3_OUT_DQ 2048
> L1:LSC-Y_EXTRA_AI_1_OUT_DQ 2048
> L1:LSC-Y_EXTRA_AI_2_OUT_DQ 2048
> L1:LSC-Y_EXTRA_AI_3_OUT_DQ 2048

Regardless of what was decided via the formal process, Daniel hasn't visited LLO recently to force-reduce the ODC data rate there.
< H1:ODC-X_CHANNEL_OUT_DQ 16384
< H1:ODC-Y_CHANNEL_OUT_DQ 16384
< H1:PSL-ODC_CHANNEL_OUT_DQ 16384
---
> L1:ODC-X_CHANNEL_OUT_DQ 32768
> L1:ODC-Y_CHANNEL_OUT_DQ 32768
> L1:PSL-ODC_CHANNEL_OUT_DQ 32768

LLO has not completely deprecated OAF for all of its LSC DOF calibrations.
> L1:OAF-CAL_CARM_X_DQ 16384
> L1:OAF-CAL_DARM_DQ 16384
> L1:OAF-CAL_MICH_DQ 16384
> L1:OAF-CAL_PRCL_DQ 16384
> L1:OAF-CAL_SRCL_DQ 16384
> L1:OAF-CAL_XARM_DQ 16384
> L1:OAF-CAL_YARM_DQ 16384

LLO uses a different ISS second loop scheme (or hasn't deprecated one of its attempts that is no longer used)?
> L1:PSL-ISS_SECONDLOOP_PD_14_SUM_OUT_DQ 16384
> L1:PSL-ISS_SECONDLOOP_PD_58_SUM_OUT_DQ 16384

LLO has a HV ESD driver on its ITMX, LHO does not.
> L1:SUS-ITMX_L3_ESDAMON_DC_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_LL_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_LR_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_UL_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_UR_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_CAS_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_HVN_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_HVP_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_LVN_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_LVP_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_MCU_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM1_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM2_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM3_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM4_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM5_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM6_DQ 256
H1 CDS (SUS)
david.barker@LIGO.ORG - posted 16:32, Friday 31 July 2015 - last comment - 17:12, Friday 31 July 2015(20111)
SUS ITMX software watchdog trip during period of bad IFO state

During the period of bad IFO state which started at approx 12:12 PDT, the ITMX SWWD tripped the SEI-B3 system. In the plot below, Ch1 shows the ITMX SUS top stage F1 OSEM signal, which rapidly exceeded its 95mV trip limit. This started the SUS SWWD counter. Five minutes later, Ch2 shows the signal going to SEI-B3 dropping to zero (the BAD state). This started the SEI SWWD counter, shown in Ch3 as going from one (GOOD) to three (1ST COUNTER). Four minutes later the SEI SWWD transitioned to four (2ND COUNTER), and one minute later it tripped the SWWD, which zeroed all SEI-B3 DAC outputs. This shows up on the RMS plot as a slightly elevated signal, but the SUS continued to be rung up. At 12:27 the operator intervened and manually panicked the SUS SWWD, at which point the RMS started decreasing.
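The staged countdown described above can be sketched as a toy state function. The stage durations mirror the log (5 min of SUS counting, then 4 min and 1 min of SEI counting before the trip), but the state names are illustrative, not the real SWWD EPICS state codes:

```python
# Toy sketch of the staged software-watchdog (SWWD) countdown; the durations
# follow the narrative above, the state names are purely illustrative.
def swwd_state(minutes_over_threshold):
    """Return the watchdog stage after a sustained over-threshold OSEM signal."""
    t = minutes_over_threshold
    if t < 5:
        return "SUS_COUNTING"      # SUS SWWD counter running
    if t < 9:
        return "SEI_1ST_COUNTER"   # SEI sees the BAD input, first counter
    if t < 10:
        return "SEI_2ND_COUNTER"
    return "SEI_TRIPPED"           # SEI-B3 DAC outputs zeroed

for t in (0, 5, 9, 10):
    print(t, swwd_state(t))
```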

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 17:12, Friday 31 July 2015 (20116)

Here is a timeline of the CDS issues we had today (all times local):

10:37 h1oaf0 models stop running

11:11 h1oaf0 models restarted

11:15 h1calex and h1caley models restarted with IRIGB channels added

11:55 DAQ restarted due to cal model changes

12:12 SUS ITMX rings up

12:22 SEI-B3 SWWD trip

12:27 SUSB123 manual trip of DAC

LHO VE
kyle.ryan@LIGO.ORG - posted 16:22, Friday 31 July 2015 - last comment - 18:12, Friday 31 July 2015(20112)
X-end -> ~100C bake of RGA at BSC5 over the weekend
Kyle, Gerardo

In and out of X-end VEA 

~1030 - 1230 hrs. local

Added 1.5" O-ring valve in-series with existing 1.5" metal angle valve -> Wrapped RGA with heater tapes and foil -> Elevated pump cart off of floor (resting on foam) -> NW40 inlet 50 L/s turbo (no vent valve) backed by aux. cart (no vent valve) -> Begin 100C bake 

In and out of X-end VEA between 

1405 - 1425 hrs. local, 

1450 - 1455 hrs. local and 

1600 - 1605 hrs. local. 

NOTE:  Will need to enter X-end VEA to make adjustments Saturday morning
Comments related to this report
kyle.ryan@LIGO.ORG - 18:12, Friday 31 July 2015 (20120)
~1710 -1800 hrs. local

I realized that I had a CFF inlet 50 L/s turbo on the shelf as well as a UHV 1.5" valve -> Swapped out 1.5" O-ring valve and NW40 inlet turbo for their drier cousins -> resumed bakeout
H1 General (DAQ, GRD, IOO, PSL, SEI, SUS)
cheryl.vorvick@LIGO.ORG - posted 13:43, Friday 31 July 2015 - last comment - 22:50, Friday 31 July 2015(20103)
something happened coincident with a lock loss, and we had a number of minutes of a bad IFO state

Something happened at lock loss and the IFO did not reach the defined DOWN state.

Symptoms:

Dave, Jaime, Sheila, operators, and others are investigating.

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 22:50, Friday 31 July 2015 (20105)

Let me try to give a slightly more detailed narrative as we were able to reconstruct it:

  • At around 11:55, Dave initiated a DAQ restart.  At this point the IFO was locked in NOMINAL_LOW_NOISE.
  • The DAQ restart came with a restart of the NDS server being used by Guardian.
  • The IMC_LOCK guardian node was in ISS_ON, which utilizes cdsutils.avg(), which is an NDS call.  When the NDS server went away, the cdsutils.avg() threw a CdsutilError which caused the IMC_LOCK node to go into ERROR where it waits for operator reset (bug number 1).
  • The ISC_LOCK guardian node, which manages the IMC_LOCK node, noticed that the IMC_LOCK node had gone into ERROR and itself threw a notification.
  • No one seemed to notice that a) the IMC_LOCK node was in error and b) that the ISC_LOCK node was complaining about it (bug number 2)
  • At about 12:12 the IFO lost lock.
  • The ISC_LOCK guardian node relies on the IMC_LOCK guardian node to report the lock losses.  But since the IMC_LOCK node was in error it wasn't doing anything, which of course includes not checking for lock losses.  Consequently the ISC_LOCK node didn't know the IFO had lost lock; it didn't respond and didn't reset to DOWN, and all the control outputs were left on.  This caused the ISS to go into oscillation, and it drove multiple suspensions to trip.

So what's the takeaway:

bug number 1: guardian should have caught the NDS connection error during the NDS restart and gone into a "connection error" (CERROR) state.  In that case, it would have continually checked the NDS connection until it was re-established, at which point it would have continued normal operation.  This is in contrast to the ERROR state where it waits for operator intervention.  I will work on fixing this for the next release.
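The proposed fix amounts to a retry-until-reconnected pattern instead of raising straight into an operator-attended ERROR state. A minimal sketch of that pattern (the exception and function names here are hypothetical stand-ins, not guardian's or cdsutils' actual API):

```python
import time

class NDSConnectionError(Exception):
    """Stand-in for the NDS connection failure (hypothetical name)."""

def avg_with_retry(fetch, retries=5, wait=0.0):
    """Keep retrying a flaky NDS-style call until the server comes back,
    rather than failing into a state that needs operator intervention."""
    for _ in range(retries):
        try:
            return fetch()
        except NDSConnectionError:
            time.sleep(wait)   # keep checking until the server is back
    raise RuntimeError("NDS server still unavailable")

# Simulated server that is down for the first two calls, then recovers.
calls = {"n": 0}
def fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise NDSConnectionError("no NDS server available")
    return 42.0

print(avg_with_retry(fetch))  # 42.0, after two silent retries
```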

bug number 2: The operators didn't know about, or didn't respond to, the fact that the IMC_LOCK guardian had gone into ERROR.  This is not good, since we need to respond quickly to these things to keep the IFO operating robustly.  I propose we set up an alarm in case any guardian node goes into ERROR.  I'll work with Dave et al. to get this set up.

As an aside, I'm going to be working over the next week to clean up the guardian and SDF/SPM situation to eliminate all the spurious warnings.  We've got too many yellow lights on the guardian screen, which means that we're now in the habit of just ignoring them.  They're supposed to be there to inform us of problems that require human intervention.  If we just leave them yellow all the time they end up having zero effect, and we're left with a noisy alarm situation that everyone just ignores.

thomas.shaffer@LIGO.ORG - 16:58, Friday 31 July 2015 (20115)

A series of events led to the ISC_LOCK Guardian not understanding that there was a lockloss:

  1. DAQ restart by Dave at 11:55 PDT
  2. IMC_LOCK went into Error with a "No NDS server available" from the DAQ restart
  3. This was not seen by the operator, or was dismissed as a result of the restart.
  4. Lockloss at 12:12 PDT
  5. ISC_LOCK did not catch this lockloss because IMC_LOCK was still in Error.
  6. Since ISC_LOCK thought it was still in full lock, it was still actuating on many suspensions and tripped some watchdogs (like Dave's alog 20111)
  7. ISC_LOCK was brought to DOWN after realizing the confusion.

To prevent this from happening in the future, Jamie will have Guardian continue to wait for the NDS server to reconnect, rather than stopping and waiting for user intervention before becoming active again.  I also added a verbal alarm for Guardian nodes in Error to alert Operators/Users that action is required.

(If I missed something here, please let me know)

H1 INJ
evan.hall@LIGO.ORG - posted 23:22, Thursday 30 July 2015 - last comment - 18:06, Friday 31 July 2015(20078)
1821 Hz TMSX sensor spikes

Matt, Evan

Why do the TMSX RT and SD OSEMs have such huge spikes at 1821 Hz and harmonics? These spikes are about 4000 ct pp in the time series. In comparison, the other OSEMs on TMSX are 100 ct pp or less (F1 and LF shown for comparison).

Also attached are the spectra and time series of the corresponding IOP channels.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 02:44, Friday 31 July 2015 (20083)

On a possibly related note: in full lock, the TMSX QPDs see more than 100 times more noise at 10 Hz than the TMSY QPDs do.

From Gabriele's bruco report, the X QPDs have some coherence with DARM around 78 Hz and 80 Hz. A coherence plot is attached.

Images attached to this comment
arnaud.pele@LIGO.ORG - 12:10, Friday 31 July 2015 (20100)

It seems similar to the problem from log 12465. Power-cycling the AA chassis fixed the issue at the time.

keita.kawabe@LIGO.ORG - 18:06, Friday 31 July 2015 (20118)

Quenched the oscillation for now (Vern, Keita)

We were able to clearly hear some kHz-ish sound from the satellite amplifier of TMSX that is connected to SD and RT. Power cycling (i.e. removing the cable powering the BOSEM and connecting it again) didn't fix it despite many trials.

We moved to the driver, power cycled the driver chassis, and it didn't help either.

The tone of the audible oscillation changed when we wiggled the cable on the satellite amp, but that didn't fix it.

Vern gave the DB25 connector on the satellite amp a hard whack in a direction to push the connector further into the box, and that fixed the problem for now.

Images attached to this comment
H1 CAL (CAL)
darkhan.tuyenbayev@LIGO.ORG - posted 20:36, Thursday 30 July 2015 - last comment - 20:16, Friday 31 July 2015(20073)
Actuation and sensing functions' correction factors and CC pole frequency trend over the last weekend locks

Summary

To decrease uncertainty in the calculation of the actuation function correction factor, kappa_A, the sensing function correction factor, kappa_C, and the CC pole frequency, f_c, we've recently increased the calibration line amplitudes to give an SNR of 100 with a 10 s FFT (see LHO alog #19792). Earlier Kiwamu posted his investigation of the CC pole frequency over the last weekend in LHO alog comment #19988. In this alog we show kappa_A, kappa_C and f_c calculated according to the method described in T1500377-v3 for the same time interval (2015-07-25 00:00 UTC to 2015-07-27 UTC, when GRD-ISC_LOCK_STATE_N >= 501, 1 min FFTs).

Statistical uncertainties of kappa_A, kappa_C and f_c within the 1.5-hour time interval highlighted with green are:

Xctrl(34.7) and PcalX(33.1), std(kappa_A) = +/- 0.45 % (1 sigma)
PcalX(325.1), std(kappa_C) = +/- 1.12 % (1 sigma); std(f_c) = +/- 5.20 Hz (1 sigma)
PcalY(331.9), std(kappa_C) = +/- 1.43 % (1 sigma); std(f_c) = +/- 5.55 Hz (1 sigma)
PcalX(534.7), std(kappa_C) = +/- 0.70 % (1 sigma); std(f_c) = +/- 2.08 Hz (1 sigma)
PcalY(540.7), std(kappa_C) = +/- 0.78 % (1 sigma); std(f_c) = +/- 2.68 Hz (1 sigma)

Notice that kappa_C and f_c on the left subplots were calculated from low SNR 325.1 Hz and 331.9 Hz Pcal lines set by Evan (see LHO alog comment #19823). Calculation of these parameters using higher SNR 534.7 Hz and 540.7 Hz Pcal lines (right subplots) gave less noisy results.

Details

C_0, D_0 and A_0 were taken from LHO ER7 DARM model.

To make the kappa_C calculations consistent between results from the 4 Pcal lines, a manual correction to the phases of the Pcal lines, corresponding to a 130 us time advance, was applied. On the plot we report only changes in f_c, by subtracting the mean value of about 300 Hz. In order to obtain an absolute value of f_c using this method, we must take into account the exact time delay/advance of the PCAL RXPD channel w.r.t. DARM_ERR; possibly a frequency-independent phase shift (however, we do not know any reason for that); and the DARM model TFs at the reference time, C_0, D_0 and A_0.
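A pure time advance tau corresponds to a frequency-proportional phase shift of 360*f*tau degrees, so the 130 us correction is different for each Pcal line. A quick sketch of the per-line phase correction (line frequencies taken from the list above):

```python
TAU = 130e-6   # time advance applied to the Pcal line phases, s

def phase_correction_deg(f_hz, tau=TAU):
    """Phase in degrees corresponding to a pure time advance tau at frequency f."""
    return 360.0 * f_hz * tau

# e.g. the 325.1 Hz line gets ~15.2 deg, the 540.7 Hz line ~25.3 deg
for f in (325.1, 331.9, 534.7, 540.7):   # Pcal line frequencies, Hz
    print(f, phase_correction_deg(f))
```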

Plots of 1 min FFT dewhitened calibration line amplitudes and phases are given below.

Calibration line uncertainties in the DARM_ERR readout in a 1.5-hour interval (highlighted in green) are as follows:

Xctrl( 34.7) = 2.2000e-01 (+/- 0.00 %); Derr( 34.7) = 2.9738e-10 (+/- 0.15 %)
PcalX( 33.1) = 2.4587e-02 (+/- 0.00 %); Derr( 33.1) = 3.0817e-10 (+/- 0.26 %)
PcalX(325.1) = 1.0724e-01 (+/- 0.00 %); Derr(325.1) = 2.0593e-10 (+/- 1.48 %)
PcalY(331.9) = 9.3791e-02 (+/- 0.01 %); Derr(331.9) = 2.0150e-10 (+/- 1.55 %)
PcalX(534.7) = 7.1100e-01 (+/- 0.00 %); Derr(534.7) = 5.8223e-10 (+/- 0.52 %)
PcalY(540.7) = 6.3845e-01 (+/- 0.01 %); Derr(540.7) = 5.8948e-10 (+/- 0.45 %)

P.S.

After today's calibration telecon we've changed calibration lines that will be used for estimation of kappa_A, kappa_C and f_c to (see LHO alog #20063):

We are also planning to add an ESD line close to low frequency PCALY line and another high frequency low SNR PCALX line at 3001.3 Hz after completing power budget investigations of PCALX module.

Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 20:16, Friday 31 July 2015 (20122)CAL

A plot of kappa_A, kappa_C and f_c calculated from the new calibration lines (see LHO alogs #20063 and #20052) over last night's undisturbed lock stretches is given below.

As reported in LHO alog #20089, undisturbed data was collected for ~25 minutes in the interval [Jul 31 2015 09:21:13 UTC, Jul 31 2015 09:46:13 UTC]; this interval is highlighted with green data points.

Details

Statistical uncertainties of 1 min FFT calibration line amplitudes in d_err in undisturbed interval (highlighted with green markers on the attached plot) are:

PCALY line in d_err(36.7 Hz)   = 3.6189e-10 (+/- 0.14 %)
 DARM line in d_err(37.3 Hz)   = 4.4939e-10 (+/- 0.08 %)
PCALY line in d_err(331.9 Hz)  = 3.0103e-10 (+/- 0.41 %)
PCALY line in d_err(1083.7 Hz) = 3.5701e-10 (+/- 1.30 %)

Statistical uncertainties of calculated kappa_A, kappa_C and f_c in undisturbed interval are:

from Xctrl(37.3) and PcalY(36.7):
    std(kappa_A) = +/- 0.92 % (1 sigma)
from PcalY(331.9):
    std(kappa_C) = +/- 0.73 % (1 sigma)
    std(f_c)     = +/- 3.40 Hz (1 sigma)

Statistical uncertainties of calculated kappa_A, kappa_C and f_c are:

Images attached to this comment