H1 General
travis.sadecki@LIGO.ORG - posted 10:07, Friday 29 April 2016 (26875)
Morning meeting notes
H1 SEI
hugh.radkins@LIGO.ORG - posted 09:55, Friday 29 April 2016 - last comment - 10:20, Friday 29 April 2016(26874)
BRSY Trends--Not clearly related to wind speed

The attached 1-day trend plot shows the BRSY RX (tilt), wind speed, BRS beam velocity, and the Y STS seismometer motion.  Yesterday at the 1800 hours mark, the rapid decrease in tilt and velocity is from when we forced the damping on.  Otherwise, for the remainder of the plots, the BRS is running on its own.  The shape of the trends doesn't scale with the wind velocity as one might expect...

Images attached to this report
Comments related to this report
krishna.venkateswara@LIGO.ORG - 10:20, Friday 29 April 2016 (26876)

Remember that BRS_IN is the raw angle of the beam balance, which is an undamped ~7.6 mHz oscillator with a Q of ~2700. The real-time signal will be completely dominated by the resonance - it is like looking at the raw DARM channel, seeing only the violin mode, and trying to tell whether there is a GW signal based on the unfiltered amplitude. To see the real-time ground tilt, you have to look at a (10-100 mHz) BLRMS of BRS_OUT or some such filtered signal.
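
For anyone wanting to reproduce that kind of check offline, here is a minimal sketch (not site code) of a 10-100 mHz band-limited RMS; the sample rate and averaging window are illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms(data, fs, f_lo=0.01, f_hi=0.1, window_s=600.0):
    """Band-pass 'data' (sampled at fs Hz) to [f_lo, f_hi] Hz and return a running RMS."""
    sos = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    banded = sosfiltfilt(sos, data)
    n = int(window_s * fs)
    kernel = np.ones(n) / n
    # moving average of the squared signal, then square root
    return np.sqrt(np.convolve(banded**2, kernel, mode='same'))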

LHO General
thomas.shaffer@LIGO.ORG - posted 23:16, Thursday 28 April 2016 (26829)
Ops Eve Shift Summary

TITLE: 04/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: Plagued by earthquakes, a bit of wind, and then laser trouble. Wasn't a great night tonight.

SEI note: I struggled to get ISI ITMX to stay in fully_isolated, even after the 0.03-0.1 Hz band seemed to be back to normal. I'm leaving it fully_isolated with the T240s in low gain now, but it has fooled me before. Also, is there a way for us to tell if one of the sensors is already in low gain? Should a check be added to the DIAG to make sure all are in high gain?
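
As a hedged illustration of the DIAG check suggested above, something like the sketch below would do it, assuming the T240 gain state can be read back from EPICS; the channel names and the convention that 1 means high gain are placeholders, not verified H1 channels.

# Hypothetical readback channels -- placeholders, not verified H1 channel names
T240_GAIN_CHANNELS = [
    'ISI-ITMX_ST1_T240_GAIN_STATE',
    'ISI-ITMY_ST1_T240_GAIN_STATE',
    'ISI-ETMX_ST1_T240_GAIN_STATE',
    'ISI-ETMY_ST1_T240_GAIN_STATE',
]

def t240s_not_in_high_gain(ezca):
    """Return the subset of the (hypothetical) gain channels not reading high gain (assumed value 1)."""
    return [ch for ch in T240_GAIN_CHANNELS if ezca[ch] != 1]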
LOG:

H1 PSL (PSL)
richard.savage@LIGO.ORG - posted 23:11, Thursday 28 April 2016 (26871)
PSL down

Keita, PeterK, Rick

Keita and I went out to the laser room to make some RIN measurements.

Short summary:

     Front end RIN seems normal both with the high power oscillator (HPO) running and with it off

     High Power Oscillator RIN much higher (> factor of 10).  Glitches (a few cycles of 500 kHz) at about 37 kHz on the high power oscillator light, but not the front end light

     More tomorrow regarding this.

After shutting the laser (front end and HPO) down to reset the Long Range Actuator (via the reset button on the back of the control box), the HPO only came back up with about 1/4 of the expected power.  Troubleshooting and brainstorming on the phone with Peter, we tried increasing the diode currents by 1 A (left the laser in this condition).  This was due to the low power and because the head 4 output dropped from about 96% to about 91%, i.e. it came back about 5% lower after switching the laser off to reset the LRA and turning it back on.

HPO power (with internal shutter open, front end off) still only came up to about 50 W (expecting close to 200).

Peter suspects that some internal optic has been damaged.  We decided to shut the system completely down - lasers and chillers - and try a cold start in the morning.

I plan to talk with MattH and Olli Puncken at LLO first thing in the morning to see if they have any ideas or suggestions.

It's looking pretty likely at this point that we aren't going to have a high-power laser until we can go into the HPO to inspect the optics and assess the state of the HPO.  This would start next week on Tuesday at the earliest, when PeterK returns from vacation.

Assuming that the cold start in the morning is not successful, we will have to decide if we want to try to get the 15 watts reflected from the HPO aligned and mode-matched to the PMC, so we will at least have a low-power beam to work with until next week.

H1 AOS
richard.mittleman@LIGO.ORG - posted 18:31, Thursday 28 April 2016 (26867)
ETMY-ISI rX Sensor Correction

I'm leaving the rX-BRS sensor correction at ETMY on overnight. As far as I can tell, at the moment it is doing very little, but if someone gets suspicious about ETMY motion, feel free to turn it off in ISI-ETMY_ST1_SENSCOR_GND_RX_Match

H1 CDS
patrick.thomas@LIGO.ORG - posted 18:30, Thursday 28 April 2016 - last comment - 10:35, Friday 29 April 2016(26866)
Corner Station Beckhoff down
Patrick, Matt, Kiwamu, Vern

We tried testing some code changes for the rotation stage. They didn't seem to work, so I reverted the change to the Laser Power library and went through the GUI to start afresh (copy new code from the target directory, compile, run, etc.). Now every time I do this I get a divide-by-zero error (see attached).

Did someone change the code in PLC1 in a way that introduced a divide-by-zero error, and not run it until we tried to now? Or did I somehow do this?
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 19:10, Thursday 28 April 2016 (26869)
Seems to be fixed by recompiling PLC1. I've burtrestored PLC1, PLC2 and PLC3 to 6:10 this morning (local time).
sheila.dwyer@LIGO.ORG - 19:37, Thursday 28 April 2016 (26870)

Here are two examples of locklosses when powering up from the last 24 hours.  The first one shows the rotation stage moving in a jerky way; this was an example of a time when the velocity was changed before the request was made, but it is worse than the normal "moving in the wrong direction" problem.  You can see that the accelerometers on the PSL all have glitches when the rotation stage angle encoder records a change in angle. 

In the second example the rotation stage velocity moves smoothly, and the power changes smoothly, but we have a lockloss which could be ASC related.  

Images attached to this comment
daniel.sigg@LIGO.ORG - 08:01, Friday 29 April 2016 (26873)

One thing to remember is that TwinCAT will try to reuse its previously stored values of variables when you log in with slightly modified code. Generally, this is a good thing, but it can fail with an internal variable restored to a value which leads to a divide-by-zero error. You need to log in and use Reset (clear all variables except the persistent ones) or Reset All (clear all variables). In the latter case, you definitely need an SDF/burt restore.

patrick.thomas@LIGO.ORG - 10:35, Friday 29 April 2016 (26878)
I had tried logging in to the PLC and resetting the variables.
LHO VE
kyle.ryan@LIGO.ORG - posted 18:26, Thursday 28 April 2016 (26865)
LHO Bake Ovens offline for repairs and upgrades
Kyle, Joe D.

Our two aLIGO workhorse Vacuum Bake Ovens are getting repaired and upgraded.  We expect VBOC to re-enter service next week followed by VBOD sometime thereafter.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:51, Thursday 28 April 2016 - last comment - 18:33, Thursday 28 April 2016(26862)
Chris Wipf's MEDM time machine installed

Chris, Carlos, Jonathan, Jim, Dave:

Today we installed Chris' medm_time_machine software on the CDS workstations. This permits the user to see how an MEDM screen looked in the past, provided the channel data is available from the selected NDS server.

To open the feature, right-click inside an MEDM window and select Execute (last item) and then TimeMachine (last item on the pull-out menu). See figure 1.

As an example, H1 was locked at noon today and is currently unlocked due to an earthquake. Figure 2 is requesting the OMC DC-PDA filter module screen at noon (5 hours and 30 minutes ago at the time of asking). Figure 3 shows the currently running MEDM on top, and the timewarped screen from 12:02 PDT today below it. The time strings in the upper right corner show the times. Channels which are not in the DAQ show up as white rectangles (for example strings, momentary buttons, redundant outputs). When the IFO was locked, the input was many thousands of counts.

One feature we added to the launcher today was to allow the user to set the precision of the playback data. In some screens a precision of 3 is good, for others lower precision makes the screen more readable.

Jonathan showed that by changing the NDSSERVER environment setting and opening a kerberos ticket, this system is able to get data from the LDAS NDS2 server.
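
As a hedged sketch of the same kind of lookup the time machine performs behind the scenes, the nds2 python client can pull archived data directly once NDSSERVER and kerberos are set up; the hostname, port, GPS times, and channel below are only examples.

import nds2

# Connect to an NDS2 server (hostname/port are examples; LDAS access needs a kerberos ticket)
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)

# Fetch one minute of an archived channel by GPS time
bufs = conn.fetch(1145900000, 1145900060, ['H1:OMC-DCPD_SUM_OUT_DQ'])
print(bufs[0].data[:10])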

Things to note:

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 18:08, Thursday 28 April 2016 (26864)

For this to run on the control room workstations, we installed pcaspy (easy_install pcaspy) locally at:

/usr/local/lib/python2.7/dist-packages/pcaspy-0.5.1-py2.7-linux-x86_64.egg/pcaspy

keith.thorne@LIGO.ORG - 18:33, Thursday 28 April 2016 (26868)
pcaspy should already be installed for LLO CDS, if we wish to install the patch
LHO VE
chandra.romel@LIGO.ORG - posted 16:32, Thursday 28 April 2016 (26861)
emergency GV hood
Successfully pumped down the emergency GV hood prototype! After vulcanizing two 1/4" diam. o-rings and sealing via one band clamp, the volume pumped down to fractions of a Torr. Using the aux cart, pressure at the turbo read 9.1e-3 Torr with a hefty foreline pressure of 3.2 Torr. I will try to improve the hole plugging. I want to test the 6" gap calculation incrementally to see if the polyurethane withstands the atmospheric pressure differential as predicted.
Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:08, Thursday 28 April 2016 (26860)
Ops Eve Shift Transition

TITLE: 04/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 17mph Gusts, 7mph 5min avg
    Primary useism: 0.39 μm/s
    Secondary useism: 0.25 μm/s
QUICK SUMMARY: An earthquake hit a little over 3 hours ago and we have been ringing down since. Some rotation stage work is going on in the meantime.

H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Thursday 28 April 2016 (26858)
Ops Day Shift Summary

TITLE: 04/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Relocked the IFO to DC Readout by 15:45 UTC.  At ~20:00 UTC, we got nailed by a 7.0 EQ in Vanuatu and have been down since.  Jim and Rich have been using the downtime to work on EY ISI, and various commissioners have been doing offline tasks as well.
LOG:

16:00 Jeff B in and out of cleaning bay all day

16:50 Fil to MY

16:30 Jim and Rich starting EY ISI work

17:36 Fil back

18:00 Chandra to GV7

18:00 Jeff B to both ends mech. rooms

18:33 Jeff B back

19:59 EQ hits, trips all ISI platforms and a few SUSes
 

H1 CAL (CAL)
craig.cahillane@LIGO.ORG - posted 10:38, Thursday 28 April 2016 - last comment - 12:57, Thursday 28 April 2016(26847)
All of O1 LHO C02 Spectrograms and GPSTime = 1135136350 Calibration Uncertainty
C. Cahillane

I have attached remade systematic error and uncertainty spectrograms for all of O1, as well as a specific calibration at GPSTime = 1135136350.

These plots include uncertainty from detrended time-dependent kappas.

I have also included the uncertainty components plots for ease of viewing what contributes to uncertainty.

For LLO, see LLO aLOG 25914
Non-image files attached to this report
Comments related to this report
craig.cahillane@LIGO.ORG - 12:57, Thursday 28 April 2016 (26857)CAL
C. Cahillane

I have also included the .txt files so anyone may make the LHO C02/C03 response function plus uncertainty plots at GPSTime = 1135136350.

Non-image files attached to this comment
H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 08:59, Thursday 28 April 2016 - last comment - 12:48, Thursday 28 April 2016(26848)
Restarted HWS-X, excluded high variance data points from measurement

I restarted the HWSX sensor with a new template file that excludes the very high variance centroids from the measurement. This should result in a much less noisy measurement of the wavefront error.

The new Python version of the HWS code does this automatically by weighting each data point in the HWS image by the inverse of its variance when calculating beam properties.
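
As an illustration of that weighting scheme (a minimal sketch, not the HWS code itself), an inverse-variance weighted average looks like this:

import numpy as np

def inverse_variance_mean(values, variances):
    """Weighted mean where each point is weighted by 1/variance; also returns the variance of the mean."""
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(weights * values) / np.sum(weights)
    return mean, 1.0 / np.sum(weights)

# High-variance centroids contribute almost nothing to the result:
print(inverse_variance_mean([1.0, 1.1, 5.0], [0.01, 0.01, 100.0]))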

Comments related to this report
aidan.brooks@LIGO.ORG - 12:02, Thursday 28 April 2016 (26855)

The HWS is working nicely now. We see the ITMX thermal lens forming again.

Images attached to this comment
aidan.brooks@LIGO.ORG - 12:48, Thursday 28 April 2016 (26856)

Did the same for HWS-Y. Also reset the magnification to 7.5x instead of the default 17.5x.

H1 GRD
sheila.dwyer@LIGO.ORG - posted 22:55, Wednesday 27 April 2016 - last comment - 18:28, Thursday 28 April 2016(26840)
mystery DRMI guardian lockloss

Jenne, Sheila, Chris, TJ, Evan

It seems that tonight we have been sabotaged by some code that we have been using for a long time (this has only happened once that we caught it, although we have a lot of unexplained locklosses tonight).

In the attached screenshot you can see that the DRMI guardian was sitting at DRMI_3F_LOCKED (130) when it decided to go to LOCK_DRMI_1F (30).  There is a decorator in DRMI_3F_LOCKED that apparently returned LOCK_DRMI_1F, because it thought DRMI was unlocked (it was fine, as you can see from the power build-ups in the top row). 

The code that checks for DRMI lock is: 

def DRMI_locked():
    #log('checking DRMI lock')
    return ezca['LSC-MICH_TRIG_MON'] and ezca['LSC-PRCL_TRIG_MON'] and ezca['LSC-SRCL_TRIG_MON']
 
 
However, as you can see from the plot, all of these trig mons were 1 for the whole time.  Returning LOCK_DRMI_1F resets settings for DRMI to reacquire, so we would expect that to break the lock.
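For readers unfamiliar with the guardian decorators, the control flow in question is roughly the following (a plain-Python sketch of the idea, not the actual guardian decorator machinery):

def assert_drmi_locked(run_method):
    """Wrap a state's run method; if the checker says DRMI is unlocked, jump to LOCK_DRMI_1F."""
    def wrapper(self):
        if not DRMI_locked():
            # guardian treats a returned state name as a redirect to that state
            return 'LOCK_DRMI_1F'
        return run_method(self)
    return wrapper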
 
Jenne used the new guardlog to grab the DRMI log from that time; it is attached.  
Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 23:56, Wednesday 27 April 2016 (26842)

It happened again at 6:53:07

Images attached to this comment
sheila.dwyer@LIGO.ORG - 18:28, Thursday 28 April 2016 (26863)

Sheila, Jenne, Jamie, Chris, Evan, Dave

We still don't understand why this would have happened, although we should be able to debug it a little bit better if it happens again. 

Jenne and Jamie edited the DRMI_locked function so that there will be more information in the guardian log in the future:

def DRMI_locked():

    MichMon = ezca['LSC-MICH_TRIG_MON']
    PrclMon = ezca['LSC-PRCL_TRIG_MON']
    SrclMon = ezca['LSC-SRCL_TRIG_MON']
    if (MichMon > 0.5) and (PrclMon > 0.5) and (SrclMon > 0.5):
        # We're still locked and triggered, so return True
        return True
    else: 
        # Eeep!  Not locked.  Log some stuff
        log('DRMI TRIGGERED NOT LOCKED:')
        log('LSC-MICH_TRIG_MON = %s' % MichMon)
        log('LSC-PRCL_TRIG_MON = %s' % PrclMon)
        log('LSC-SRCL_TRIG_MON = %s' % SrclMon)
        return False
 
This also avoids the question of what might happen if the ezca calls don't return a bool.
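The point about bools can be seen directly in the interpreter: Python's 'and' returns one of its operands rather than True/False, so the original one-liner handed guardian whatever EPICS value happened to come last (or the first falsy one).

mich, prcl, srcl = 1.0, 1.0, 0.0
print(mich and prcl and srcl)        # prints 0.0, the first falsy operand, not False
mich, prcl, srcl = 1.0, 1.0, 0.9999
print(mich and prcl and srcl)        # prints 0.9999, truthy but not the bool True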
 
Dave tells us that the data recorded in the DAQ is not necessarily synchronous with the EPICS data, so looking at H1:LSC-MICH_TRIG_MON using nds2 doesn't necessarily give us the same data that the guardian gets (this would explain why nothing showed up in the lockloss plots even though the guardian apparently sees one of the TRIG_MONs changing). Dave is going to add the TRIG_MON channels to conlog.  
 
We looked at POP18_I_ERR during the time that this was happening, and it should have been above the threshold the entire time, so there seems to be no reason the trigger should have gone off.  One new suspicious thing is that this happens at the same time that the triggering is swapped over to POPDC by the ISC_LOCK guardian.  However, you can see in the attached plots that the thresholds are lowered (to -100) when the DRMI guardian thinks that the lock is lost.  Chris and I looked through the triggering in the model, and it doesn't seem like lowering the threshold should turn off the trigger in any case, although based on the timing it looks like the lowering of the thresholds caused the problem.  I added a sleep after the trigger matrix is reset to POPDC, before the thresholds are reset, although I don't think this was the problem since DRMI seems to think it is unlocked before the thresholds are reset.   
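For reference, the ordering change described above amounts to something like the sketch below, assuming the guardian-provided ezca object is in scope; the matrix-element and threshold channel names are illustrative placeholders, not the verified H1 channels, and the numerical values are made up.

import time

DOFS = ('MICH', 'PRCL', 'SRCL')

# 1) route POPDC into the DRMI triggers (hypothetical matrix-element naming)
for dof in DOFS:
    ezca['LSC-TRIG_MTRX_%s_POPDC' % dof] = 1
    ezca['LSC-TRIG_MTRX_%s_POP18' % dof] = 0

# 2) give the TRIG_MON readbacks a moment to settle on the new signal
time.sleep(2)

# 3) only then move the ON/OFF thresholds to values appropriate for POPDC (placeholder values)
for dof in DOFS:
    ezca['LSC-%s_TRIG_THRESH_ON' % dof] = 50
    ezca['LSC-%s_TRIG_THRESH_OFF' % dof] = 25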
 
 
Images attached to this comment
H1 INJ (INJ)
christopher.biwer@LIGO.ORG - posted 00:27, Wednesday 27 April 2016 - last comment - 15:28, Thursday 28 April 2016(26792)
set up hardware injection guardian node
Chris B., Jamie R.

The hardware injection guardian node has been set up at LHO.  The node should be ready to perform injections for the engineering run. Many thanks to Jamie.

The node is called INJ_TRANS. I have paused it.

Code is in: /opt/rtcds/userapps/release/cal/common/guardian

States that can be requested

A graph of the guardian states is attached. There are two states that can be requested:
  * INJECT_SUCCESS: Request this when you want to do injections
  * INJECT_KILL: Request this to cancel an injection

You should request INJECT_SUCCESS to perform an injection. The node will move to WAIT_FOR_NEXT_INJECT, which continuously checks for an injection that is going to happen in the next five minutes (so if there are no injections for a long time, the node will spend a long time in this state). Once an injection is imminent, it uploads an event to gracedb, reads the waveform data, and waits to inject. Eventually it will move into the injection state and inject the waveform. It will then move back to the WAIT_FOR_NEXT_INJECT state and begin waiting for the next injection.
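
A minimal sketch of that waiting logic (not the INJ_TRANS code itself), assuming the schedule has already been parsed into rows whose first field is the GPS start time:

imminent_seconds = 300   # the five-minute look-ahead window described above

def next_imminent_injection(schedule_rows, gps_now, window=imminent_seconds):
    """Return the earliest scheduled row starting within 'window' seconds, or None."""
    upcoming = [row for row in schedule_rows if 0 <= row[0] - gps_now <= window]
    return min(upcoming, key=lambda row: row[0]) if upcoming else None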

While the node is preparing to do an injection, e.g. gracedb upload, etc., there will be a USERMSG letting the operator know an injection is about to occur. See the MEDM screen below.

How to schedule an injection

These are just some shorthand notes on how to schedule an injection with the guardian node until a document is in the DCC.

There are three steps:
  (1) Update the schedule file and validate it
  (2) Reload the guardian node
  (3) Request INJECT_SUCCESS if it is not already requested

The current schedule file at the time of writing is located here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/schedule/schedule_1148558052.txt

The location of the schedule file is defined in https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/INJ_TRANS.py, search for the variable schedule_path.

An example line is:
1145685602 INJECT_DETCHAR_ACTIVE 0 1.0 /ligo/home/christopher.biwer/projects/guardian_hwinj/test_waveforms/box_test.txt None

Where:
  * First column is GPS start time of the injection.
  * Second column is the name of the guardian state that will perform the injection. Choices are INJECT_CBC_ACTIVE, INJECTION_BURST_ACTIVE, INJECT_DETCHAR_ACTIVE, and INJECT_STOCHASTIC_ACTIVE.
  * Third column says whether you want to do the injection in observing mode. If this is 1, the injection is done only if the IFO is in observing mode. Otherwise set this to 0.
  * The fourth column is the scale factor. This is a float that is multiplied with the timeseries. For example, 2.0 makes the waveform's amplitude twice as large and 0.5 makes it half as large.
  * The fifth column is the path to the waveform file. Please use full paths.
  * The sixth column is the path to the meta-data file. Please use full paths. If there is no meta-data file, then type None.
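
Putting the example line and the column descriptions together, a minimal parsing sketch (the field names are mine, chosen to match the descriptions above) would be:

def parse_schedule_line(line):
    """Split one schedule line into the six documented fields."""
    gps, state, observing, scale, waveform, meta = line.split()
    return {
        'gps_start': int(gps),
        'state': state,                            # e.g. INJECT_CBC_ACTIVE
        'observing_only': bool(int(observing)),    # 1 = only inject in observing mode
        'scale_factor': float(scale),
        'waveform_path': waveform,
        'metadata_path': None if meta == 'None' else meta,
    }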

Do not schedule injections closer than 300 seconds apart. If you want to schedule injections closer than 300 seconds, then you will want to tune imminent_seconds in INJ_TRANS.py.

You should validate the schedule file. To run the script on an LHO workstation, do:
PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian/:${PYTHONPATH}
python /opt/rtcds/userapps/release/cal/common/scripts/guardian_inj_schedule_validation.py --schedule /opt/rtcds/userapps/release/cal/common/guardian/schedule/schedule_1148558052.txt --min-cadence 300

Note that you need the glue and gracedb python packages to run this script - there is currently an FRS to get these installed.

Failure states

There are a number of failure states, e.g. the waveform file cannot be read, etc. If you validate the schedule, the node shouldn't run into any failures. If a failure state is entered, the node will not leave it on its own. To leave a failure state, identify the problem, resolve it, request INJECT_SUCCESS, and reload the node. Places where a failure could occur will print a traceback in the guardian log.

GraceDB authentication

I write this for anyone not familiar with the process.

Running this guardian node will require a robot certificate because the node will upload events to GraceDB automatically. To get a robot certificate follow the instructions at https://wiki.ligo.org/viewauth/AuthProject/LIGOCARobotCertificate.

We created a robot certificate for the controls account at LHO for the h1guardian0 machine.

We had to ask the GraceDB admins (Alex P.) to add the subject line from the cert to the grid-map file.

In the hardware injection guardian node env we set X509_USER_CERT to the file path of the cert and X509_USER_KEY to the file path of the corresponding key.

Tested gracedb API with: gracedb ping.

Successful injections on GraceDB

Injections on GraceDB are given the INJ label if they are successful. There is also a success message printed on the GraceDB event page, with the line from the schedule file. For example, H236068.

Test injections

At the end of the night I did a 2-hour series of CBC injections separated by 400 seconds. I've attached plots of those injections as sanity checks that everything looks alright.
Images attached to this report
Comments related to this report
christopher.biwer@LIGO.ORG - 15:28, Thursday 28 April 2016 (26859)INJ
Command line to bring up MEDM screen: guardmedm INJ_TRANS