LHO General
thomas.shaffer@LIGO.ORG - posted 16:08, Thursday 28 April 2016 (26860)
Ops Eve Shift Transition

TITLE: 04/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 17mph Gusts, 7mph 5min avg
    Primary useism: 0.39 μm/s
    Secondary useism: 0.25 μm/s
QUICK SUMMARY: An earthquake hit a little over 3 hours ago and we have been ringing down since. Some rotation stage work is going on in the meantime.

H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Thursday 28 April 2016 (26858)
Ops Day Shift Summary

TITLE: 04/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Relocked the IFO to DC Readout by 15:45 UTC.  At ~20:00 UTC, we got nailed by a 7.0 EQ in Vanuatu and have been down since.  Jim and Rich have been using the downtime to work on EY ISI, and various commissioners have been doing offline tasks as well.
LOG:

16:00 Jeff B in and out of cleaning bay all day

16:50 Fil to MY

16:30 Jim and Rich starting EY ISI work

17:36 Fil back

18:00 Chandra to GV7

18:00 Jeff B to both ends mech. rooms

18:33 Jeff B back

19:59 EQ hits, trips all ISI platforms and a few SUSes
 

H1 PSL (PSL)
travis.sadecki@LIGO.ORG - posted 10:58, Thursday 28 April 2016 (26853)
Weekly PSL Chiller Reservoir Top-Off

I added 150 mL of H2O to the Crystal Chiller.

H1 CAL (CAL)
craig.cahillane@LIGO.ORG - posted 10:38, Thursday 28 April 2016 - last comment - 12:57, Thursday 28 April 2016(26847)
All of O1 LHO C02 Spectrograms and GPSTime = 1135136350 Calibration Uncertainty
C. Cahillane

I have attached remade systematic error and uncertainty spectrograms for all of O1, as well as a specific calibration at GPSTime = 1135136350.

These plots include uncertainty from detrended time-dependent kappas.

I have also included the uncertainty components plots for ease of viewing what contributes to uncertainty.

For LLO, see LLO aLOG 25914
Non-image files attached to this report
Comments related to this report
craig.cahillane@LIGO.ORG - 12:57, Thursday 28 April 2016 (26857)CAL
C. Cahillane

I have also included the .txt files so anyone may make the LHO C02/C03 response function plus uncertainty plots at GPSTime = 1135136350.

Non-image files attached to this comment
H1 SEI
hugh.radkins@LIGO.ORG - posted 10:37, Thursday 28 April 2016 (26852)
BRS Y --Continues to Drift and will need recentering before long

Here are trends of the BRS raw balance position: 12 days (right) and 30 days (left).  The driftmon has a range of ±16000, so at -13000 and still trending we don't have much room.  At the current rate, we'll reach the limit in about three days.

Images attached to this report
H1 CDS
james.batch@LIGO.ORG - posted 09:40, Thursday 28 April 2016 (26851)
Test version of medm currently installed
A test version of MEDM is currently installed in the control room to address CDS Bugzilla 789.  Although this wasn't really intended, the simple act of compiling the MEDM software installs it.  If it causes no harm, I'll leave it in place; if there are problems, the original can be reinstalled in a few minutes.

This is to support testing of the historical MEDM screen software.
H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 08:59, Thursday 28 April 2016 - last comment - 12:48, Thursday 28 April 2016(26848)
Restarted HWS-X, excluded high variance data points from measurement

I restarted the HWSX sensor with a new template file that excludes the very high variance centroids from the measurement. This should result in a much less noisy measurement of the wavefront error.

The new Python version of the HWS code does this automatically by weighting each data point in the HWS image by the inverse of its variance when calculating beam properties.
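For illustration, here is a minimal numpy sketch of inverse-variance weighting; this illustrates the technique only and is not the actual HWS code:

import numpy as np

def weighted_mean(values, variances):
    # Weight each measurement by the inverse of its variance, so noisy
    # (high-variance) centroids contribute less to the combined estimate.
    w = 1.0 / np.asarray(variances, dtype=float)
    return np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)

# Example: the low-variance point dominates the estimate.
print(weighted_mean([1.0, 5.0], [0.1, 10.0]))   # ~1.04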

Comments related to this report
aidan.brooks@LIGO.ORG - 12:02, Thursday 28 April 2016 (26855)

The HWS is working nicely now. We see the ITMX thermal lens forming again.

Images attached to this comment
aidan.brooks@LIGO.ORG - 12:48, Thursday 28 April 2016 (26856)

Did the same for HWS-Y. Also reset the magnification to 7.5x instead of the default 17.5x.

H1 CDS
james.batch@LIGO.ORG - posted 08:11, Thursday 28 April 2016 (26846)
Restarted web MEDM screen capture
Restarted web screen capture for h0 to pick up revised FMCS overview and fan sensor detail screens.  These now show all values since the new vacuum controls were installed.
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 00:03, Thursday 28 April 2016 - last comment - 02:49, Thursday 28 April 2016(26843)
ASDC power fluctuation (angular instability)

One of the lockloss classes from tonight has been some kind of angular instability that we see very strongly on the AS camera right when we start to power up.  Not sure what it is, but it's the kind of thing that could be a BS angular instability, since the Michelson fringe separates, and the 2 bright spots orbit around each other for a second or so before we lose lock.  It looks primarily like pitch according to the oplevs, but there are certainly some yaw components in there.

Attached is a plot showing the lockloss, including the power at the AS port and the oplev (or wit) signals from each of the IFO optics.  If we call the frequency of the AS DC power fluctuations 2f, then it looks like perhaps BS and the ITMs are moving in pitch at 1f.

We tried once turning off the CHARD, MICH and SRC1 loops just before powering up.  We were able to make it to 3W and sit for a few moments, but then lost lock on the way to 5W, although this was a fast lockloss, probably just because we didn't have enough loops on.  We'd like to try powering up with all the loops on to 3W, to confirm that we can't even do that (we had always been going straight for 5W when we saw the instability). We'd also like to try with only 1 loop off at a time - MICH seems the most suspicious loop, but CHARD does seem to get noisy when the rotation stage is moving. 

As a side note, I've tried to add a convergence checker to the Engage_SRC_ASC state, so that you no longer have to wait by hand before going to Part1.  You should be able to safely select Part1, and the guardian log will tell you which loop(s) aren't converged yet. The wait doesn't seem to be working consistently yet, though, so unfortunately we can't depend on it. A sketch of the kind of check intended is below.
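Here is a sketch in guardian style (ezca and log are provided by the guardian environment; the channel names and threshold are made up for illustration, not the actual loops checked by Engage_SRC_ASC):

def src_asc_converged(threshold=10.0):
    # Hypothetical error-signal channels; the real state checks its own loops.
    channels = ['ASC-SRC1_P_OUTMON', 'ASC-SRC1_Y_OUTMON']
    not_converged = [ch for ch in channels if abs(ezca[ch]) > threshold]
    for ch in not_converged:
        log('%s not converged yet' % ch)
    return len(not_converged) == 0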

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 02:49, Thursday 28 April 2016 (26845)

I seemed to have better luck by slowing down the rotation stage velocity. At 1/10 the nominal velocity, I went to 3 W with no problem with all the loops on, and then onward in several stages up to 25 W.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 23:23, Wednesday 27 April 2016 - last comment - 10:51, Thursday 28 April 2016(26841)
some more motivation to try 90 MHz centering

Today we have seen at least three examples that might make us want to return to trying 90 MHz centering.  The first two screenshots attached show locklosses where we lost sideband buildups, which is usually a sign of misalignment of the BS or SRC.  (We are currently using a combination of AS36I A+B for SRM control 26785 and AS36BQ for BS control.)   You can see that the BS yaw control signal drifts along with the sideband powers, and that there is clearly also a signal in the 90 MHz yaw signals, which might indicate we could use these sensors to prevent this kind of runaway and lockloss.

The third one shows the lock that Chris and TJ left overnight last night; you can also see sideband powers drifting a bit over the first couple of hours of the lock, which is also tracked by some signal in 90 MHz yaw.

Images attached to this report
Comments related to this report
christopher.wipf@LIGO.ORG - 10:51, Thursday 28 April 2016 (26844)

Here's the cavity pole fluctuation during the 4/26 overnight lock, monitored using Kiwamu's line tracking technique. After the initial transient it was stable within a few Hz. Looks like a favorable verdict on the new SRC1 WFS combinations.

Images attached to this comment
H1 GRD
sheila.dwyer@LIGO.ORG - posted 22:55, Wednesday 27 April 2016 - last comment - 18:28, Thursday 28 April 2016(26840)
mystery DRMI guardian lockloss

Jenne, Sheila, Chris, TJ, Evan

It seems that tonight we have been sabotaged by some code that we have been using for a long time (this has only happened once that we caught it, although we have a lot of unexplained locklosses tonight).

In the attached screenshot you can see that the DRMI guardian was sitting at DRMI_3F_LOCKED (130) when it decided to go to LOCK_DRMI_1F (30).  There is a decorator in DRMI_3F_LOCKED that apparently returned LOCK_DRMI_1F because it thought DRMI was unlocked (it was fine, as you can see from the power build-ups in the top row).

The code that checks for DRMI lock is: 

def DRMI_locked():
    #log('checking DRMI lock')
    return ezca['LSC-MICH_TRIG_MON'] and ezca['LSC-PRCL_TRIG_MON'] and ezca['LSC-SRCL_TRIG_MON']
 
 
However, as you can see from the plot, all of these trig mons were 1 the whole time.  Returning LOCK_DRMI_1F resets settings for DRMI to reacquire, so we would expect that to break the lock.
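For context, guardian lets a state's run method (or a decorator wrapping it) redirect by returning another state's name. A minimal illustrative sketch of such a checker, with names assumed rather than taken from the actual ISC_DRMI code:

def DRMI_locked_checker(func):
    # If the lock check fails, return the name of the reacquisition state;
    # guardian interprets a returned state name as a jump request.
    def wrapper(self):
        if not DRMI_locked():
            return 'LOCK_DRMI_1F'
        return func(self)
    return wrapper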
 
Jenne used the new guardlog to grab the DRMI log from that time; it is attached.
Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 23:56, Wednesday 27 April 2016 (26842)

It happened again at 6:53:07

Images attached to this comment
sheila.dwyer@LIGO.ORG - 18:28, Thursday 28 April 2016 (26863)

Sheila, Jenne, Jamie, Chris, Evan, Dave

We still don't understand why this would have happened, although we should be able to debug it a little bit better if it happens again. 

Jenne and Jamie edited the DRMI_locked function so that there will be more information in the guardian log in the future:

def DRMI_locked():

    MichMon = ezca['LSC-MICH_TRIG_MON']
    PrclMon = ezca['LSC-PRCL_TRIG_MON']
    SrclMon = ezca['LSC-SRCL_TRIG_MON']
    if (MichMon > 0.5) and (PrclMon > 0.5) and (SrclMon > 0.5):
        # We're still locked and triggered, so return True
        return True
    else: 
        # Eeep!  Not locked.  Log some stuff
        log('DRMI TRIGGERED NOT LOCKED:')
        log('LSC-MICH_TRIG_MON = %s' % MichMon)
        log('LSC-PRCL_TRIG_MON = %s' % PrclMon)
        log('LSC-SRCL_TRIG_MON = %s' % SrclMon)
        return False
 
This also avoids the question of what might happen if the ezca calls don't return a bool.
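To illustrate the concern: Python's `and` chains return one of their operands, not a bool, so the old one-liner could hand back a raw EPICS value:

# `and` returns the last evaluated operand, not True/False:
print(1.0 and 1.0 and 0.0)   # -> 0.0 (falsy, but not False)
print(1.0 and 1.0 and 0.3)   # -> 0.3 (truthy, but not True)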
 
Dave tells us that the data recorded in the DAQ is not necessarily synchronous with the EPICS data, so looking at H1:LSC-MICH_TRIG_MON using nds2 doesn't necessarily give us the same data that the guardian gets (this would explain why nothing showed up in the lockloss plots even though the guardian apparently sees one of the TRIG_MONs changing). Dave is going to add the TRIG_MON channels to conlog.
 
We looked at POP18_I_ERR during the time that this was happening, and it should have been above the threshold the entire time, so there seems to be no reason the trigger should have gone off.  One new suspicious thing is that this happens at the same time that the triggering is swapped over to POPDC by the ISC_LOCK guardian.  However, you can see in the attached plots that the thresholds are lowered (to -100) when the DRMI guardian thinks that the lock is lost.  Chris and I looked through the triggering in the model, and it doesn't seem like lowering the threshold should turn off the trigger in any case, although based on the timing it looks like lowering the thresholds caused the problem. I added a sleep after the trigger matrix is reset to POPDC, before the thresholds are reset, although I don't think this was the problem since DRMI seems to think it is unlocked before the thresholds are reset.
 
 
Images attached to this comment
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 17:25, Wednesday 27 April 2016 - last comment - 18:07, Friday 29 April 2016(26832)
Laser power noise
Made measurements of the oscillator power noise with a few photodiodes.

PowerNoise.png shows the free-running oscillator relative power noise
measured before the acousto-optic modulator.  This is more than 10 times
noisier than when the laser was installed in the H1 enclosure.  The
other trace in the plot is the out of loop of the relative power noise.
It is also about a factor of 10 higher than it should be.

    Whilst the power stabilisation was locked, I looked at the AC coupled
output of the photodiode and did not observe any oscillations.  The maximum
peak-to-peak variations were ~40 mVpp.
Images attached to this report
Comments related to this report
matthew.evans@LIGO.ORG - 09:06, Thursday 28 April 2016 (26850)

If I am reading this plot correctly, the ~37 kHz rep-rate seen last week is probably represented in this spectrum by the peak that the ISS adds at that frequency.  (The 500 kHz oscillation is too high to see here.)  It might be very informative to see what is going on above 100 kHz, since the ISS seems to be adding a lot of noise at 100 kHz (about a factor of 10 above its input).

evan.hall@LIGO.ORG - 12:34, Friday 29 April 2016 (26881)

Is this plot really calibrated into RIN?

The digital RIN readback for the OOL inner-loop sensor appears to be 20 dB lower than the trace shown here (26773). Same for the digital readback for the HPO transmission.

keita.kawabe@LIGO.ORG - 18:07, Friday 29 April 2016 (26894)

Look at alog 26893.

Peter's "relative power noise" agrees well with the raw voltage spectrum of Rick and mine. In other words, Peter's plot seems to overestimate RIN by 15 dB or so for the HPL monitor (DC level is about -6V), and about 20dB for 1st loop sensor (DC level -9 to -10 Volt).

LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 17:07, Wednesday 27 April 2016 - last comment - 09:05, Thursday 28 April 2016(26831)
Forced X1 PT140 Cold Cathode on in Beckhoff
The X1 PT140 Pirani gauge is reading above the software interlock threshold to turn on the Cold Cathode gauge. Per Chandra's request I have bypassed the software interlock by forcing the variables in Beckhoff on h0velx (see attached screenshot).
Images attached to this report
Comments related to this report
kyle.ryan@LIGO.ORG - 09:05, Thursday 28 April 2016 (26849)
We may use the intermittent "bad" behavior of the pirani/cable/connection or whatever as an excuse to install the aLIGO wide range gauge sooner rather than later.  
H1 INJ (INJ)
christopher.biwer@LIGO.ORG - posted 00:27, Wednesday 27 April 2016 - last comment - 15:28, Thursday 28 April 2016(26792)
set up hardware injection guardian node
Chris B., Jamie R.

The hardware injection guardian node has been set up at LHO.  The node should be ready to perform injections for the engineering run. Many thanks to Jamie.

The node is called INJ_TRANS. I have paused it.

Code is in: /opt/rtcds/userapps/release/cal/common/guardian

States that can be requested

A graph of the guardian states is attached. There are two states that can be requested:
  * INJECT_SUCCESS: Request this when you want to do injections
  * INJECT_KILL: Request this to cancel an injection

You should request INJECT_SUCCESS to perform an injection. The node will move to WAIT_FOR_NEXT_INJECT, which continuously checks for an injection scheduled to happen in the next five minutes (so if there are no injections for a long time, the node will spend a long time in this state). Once an injection is imminent, the node uploads an event to GraceDB, reads the waveform data, and waits to inject. Eventually it will move into the injection state and inject the waveform. It will then move back to WAIT_FOR_NEXT_INJECT and begin waiting for the next injection. A sketch of the imminence check is below.
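A minimal sketch of the wait-and-check logic described above, assuming the schedule has already been parsed into tuples whose first element is the GPS start time (the real logic lives in INJ_TRANS.py):

IMMINENT_SECONDS = 300  # the five-minute look-ahead window described above

def next_imminent_injection(schedule, gps_now):
    # Return the first scheduled injection starting within the window,
    # or None if nothing is imminent yet.
    for inj in sorted(schedule, key=lambda inj: inj[0]):
        if gps_now <= inj[0] <= gps_now + IMMINENT_SECONDS:
            return inj
    return None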

While the node is preparing to do an injection (e.g. the GraceDB upload), there will be a USERMSG letting the operator know an injection is about to occur. See the MEDM screen below.

How to schedule an injection

This is just some short hand notes for how to schedule an injection with the guardian node until a document is in the DCC.

There are three steps:
  (1) Update the schedule file and validate it
  (2) Reload the guardian node
  (3) Request INJECT_SUCCESS if it is not already requested

The current schedule file at the time of writing is located here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/schedule/schedule_1148558052.txt

The location of the schedule file is defined in https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/INJ_TRANS.py, search for the variable schedule_path.

An example line is:
1145685602 INJECT_DETCHAR_ACTIVE 0 1.0 /ligo/home/christopher.biwer/projects/guardian_hwinj/test_waveforms/box_test.txt None

Where (see the parsing sketch after this list):
  * First column is GPS start time of the injection.
  * Second column is the name of the guardian state that will perform the injection. Choices are INJECT_CBC_ACTIVE, INJECTION_BURST_ACTIVE, INJECT_DETCHAR_ACTIVE, and INJECT_STOCHASTIC_ACTIVE.
  * Third column says whether you want to do the injection in observing mode. If this is 1, then do the injection only if the IFO is in observing mode. Otherwise set this to 0.
  * The fourth column is the scale factor. This is a float that is multiplied with the time series. For example, 2.0 makes the waveform's amplitude twice as large and 0.5 makes it half as large.
  * The fifth column is the path to the waveform file. Please use full paths.
  * The sixth column is the path to the meta-data file. Please use full paths. If there is no meta-data file, then type None.
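As referenced above, a sketch of parsing one schedule line in this format (the field names here are mine, not necessarily those used in INJ_TRANS.py):

def parse_schedule_line(line):
    gps, state, obs_only, scale, waveform, meta = line.split()
    return {'gps_start': int(gps),
            'state': state,                         # e.g. INJECT_CBC_ACTIVE
            'observing_only': bool(int(obs_only)),  # 1 = only inject in observing mode
            'scale_factor': float(scale),           # multiplies the time series
            'waveform_path': waveform,              # full path to waveform file
            'metadata_path': None if meta == 'None' else meta}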

Do not schedule injections closer than 300 seconds apart. If you want to schedule injections closer than 300 seconds, then you will want to tune imminent_seconds in INJ_TRANS.py.

You should validate the schedule file. To run the script on an LHO workstation, do:
PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian/:${PYTHONPATH}
python /opt/rtcds/userapps/release/cal/common/scripts/guardian_inj_schedule_validation.py --schedule /opt/rtcds/userapps/release/cal/common/guardian/schedule/schedule_1148558052.txt --min-cadence 300

Note that you need the glue and gracedb python packages to run this script - there is currently an FRS open to get these installed.

Failure states

There are a number of failure states, e.g. the waveform file cannot be read. If you validate the schedule, the node shouldn't run into any failures. If a failure state is entered, the node will not leave it on its own. To leave a failure state, identify the problem, resolve it, request INJECT_SUCCESS, and reload the node. Places where a failure could occur will print a traceback in the guardian log.

GraceDB authentication

I write this for anyone not familiar with the process.

Running this guardian node will require a robot certificate because the node will upload events to GraceDB automatically. To get a robot certificate follow the instructions at https://wiki.ligo.org/viewauth/AuthProject/LIGOCARobotCertificate.

We created a robot certificate for the controls account at LHO for the h1guardian0 machine.

We had to ask the GraceDB admins (Alex P.) to add the subject line from the cert to the grid-map file.

In the hardware injection guardian node environment we set X509_USER_CERT to the file path of the cert and X509_USER_KEY to the file path of the private key.

Tested gracedb API with: gracedb ping.

Successful injections on GraceDB

Injections on GraceDB are given the INJ label if they are successful. A success message is also printed on the GraceDB event page, with the line from the schedule file. For example, H236068.

Test injections

At the end of the night I did a 2-hour series of CBC injections separated by 400 seconds. I've attached plots of those injections as sanity checks that everything looks alright.
Images attached to this report
Comments related to this report
christopher.biwer@LIGO.ORG - 15:28, Thursday 28 April 2016 (26859)INJ
Command line to bring up MEDM screen: guardmedm INJ_TRANS
H1 PSL (IOO, ISC, PSL)
evan.hall@LIGO.ORG - posted 21:07, Tuesday 26 April 2016 - last comment - 12:01, Thursday 28 April 2016(26808)
RIN from different PMC ports

Chris, Keita, Evan

Today we were able to lock the outer ISS loop with the modecleaner at 20 W (and no interferometer). We looked at several PSL/IOO PD signals (the FSS transmission PD, the ISS inner-loop PDs, the IM4 transmission PD, and the ISS outer-loop PDs) and tried to understand their behavior in different ISS configurations.

Naively one would expect all these signals (except the in-loop ISS PDs) to agree with each other, since they should all be out-of-loop sensors for the RIN leaving the PMC. Together, these signals monitor three of the four PMC ports: the FSS transmission sees the RIN of one port, the out-of-loop inner-loop ISS PD sees the RIN of another port, and IM4 trans and the out-of-loop outer-loop ISS PD see the RIN of yet another port.

These are the behaviors we observed (see attached pdf):

We think that a possible explanation for these effects is that both ISS PDs are seeing some correlated noise that is not seen by either the FSS PD or the post-IMC PDs. In this scenario, the inner-loop ISS would suppress the HPO noise but impress this correlated noise on the light entering the PMC.

Briefly we entertained the idea that the light circulating in the PMC could be multimoded (either from the NPRO or the HPO), but judging from the RIN before and after the IMC, this seems to not be the case (png attachment).

One other idea is that some of the 808 nm light is getting through the PMC and onto the ISS.

Images attached to this report
Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 22:23, Tuesday 26 April 2016 (26811)

Is this really incompatible with jitter? There are a lot of variations visible on the PMC reflected camera. The finesse of the PMC isn't that great (~100), and neither is the jitter suppression. If there is a static misalignment into the PMC, there would also be a linear term for the jitter-to-intensity conversion. The two inner-loop detectors see rather different signals at 10 Hz if the inner loop is engaged but not the outer one.

evan.hall@LIGO.ORG - 13:06, Wednesday 27 April 2016 (26824)

Certainly the jitter seen on the IMC WFS is worse than before the HPO turn-on.

Before the turn-on, the jitter below 100 Hz was 1 nrad/√Hz or so (LHO#21212). Now it is 10 nrad/√Hz at 10 Hz, with a 1/f slope.

The attachment shows IMC signals with the inner ISS loop off (dashed) and on (solid).

Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 17:24, Wednesday 27 April 2016 (26833)

Update: BS alert. Read the next entry.

Jitter is much larger than before, but the jitter alone doesn't seem to explain all of our observations at the same time when the 1st loop is closed but the 2nd loop open.

PDA = P + a*J + Sa,   PDB = P + b*J + Sb,   IM4 = P + x*c*J + Sim4

P is the intensity noise leaving the AOM. When the loop is open it's just the free running noise P0.

J is the beam jitter (01 amplitude relative to 00) coming out of PMC.

a, b and c are the jitter to intensity coupling at PDA, PDB and IM4 trans due to clipping or diode inhomogeneity or whatever.

x is the attenuation of 01 mode amplitude by IMC, which is about 0.3%.

Sa, Sb and Sim4 are the sensing noise.

When 1st loop is closed, J is imprinted on P:

P=P0/(1+G) - G/(1+G) *(b*J + Sb) ~ P0/G - b*J - Sb,

PDA ~ P0/G + (a-b)*J +Sa-Sb,

IM4 ~ P0/G + (x*c-b)*J + Sim4-Sb ~ P0/G -b*J +Sim4-Sb. (note x=3E-3.)

where G is the OLTF.
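For completeness, the single-loop feedback algebra behind these expressions, written out in LaTeX (this is just the standard loop calculation, using the definitions above): the servo actuates on P to null the in-loop sensor PDB,

P = P_0 - G\,(P + b J + S_b)
\quad\Longrightarrow\quad
P = \frac{P_0}{1+G} - \frac{G}{1+G}\,(b J + S_b)
\;\approx\; \frac{P_0}{G} - b J - S_b \qquad (|G| \gg 1),

which is the closed-loop expression quoted above.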

Allowing some conspiracies but not extreme ones, the lack of coherence between PDA and IM4 is explained by either of the following:

  • b~0, PDA~a*J, IM4~P0/G+sensing.
  • a-b~0 (e.g. common clipping like a particulate on the AR side of the PMC mirror), PDA~P0/G+sensing, IM4~b*J.

The first case is false because swapping PDA and PDB makes no difference in IM4.

In the second case, the PDA spectrum should look like pure sensing noise, but this "sensing" noise is in reality large at 10 Hz.

So, even if the clipping effect is common to PDA and PDB (making PDA and IM4 incoherent), we still need another noise that looks like large sensing noise: about the same amplitude on PDA and PDB, incoherent between the two, and absent from the downstream sensors.

keita.kawabe@LIGO.ORG - 12:01, Thursday 28 April 2016 (26854)

I take my words back about PDA-downstream coherence.

I was looking at the coherences from this morning, and it seems like when only the first loop is on, the first-loop out-of-loop sensor is coherent with the downstream sensors before and after the IMC (attached, bottom red and blue). The plot is calibrated in RIN.

Note that we switched the control photodiode from PDB to PDA last night, so in this plot the out of loop sensor is PDB. I switched them back again at 17:49:10 UTC.

Anyway, the out-of-loop sensor is more coherent with the downstream sensors than the HPL monitor is at f < 10 Hz (bottom red|blue vs. brown|pink), but HPL is more coherent from 10 to 200 Hz. The difference between bottom brown and bottom pink probably doesn't mean much; it's just the noise-floor difference between IMC-PWR and MC2_TRANS.

Some thinking is necessary, but at the moment I cannot say that jitter cannot explain everything.

Images attached to this comment