H1 DetChar (DetChar)
evan.goetz@LIGO.ORG - posted 11:17, Wednesday 30 November 2016 - last comment - 13:43, Wednesday 30 November 2016(32026)
Glitch rate elevated compared with previous lock stretches, more glitches at 2 kHz and 3 kHz?
Looking at the current glitch rate, it is elevated compared to previous locks over the last few days.

Figure 1: current glitch rates (Nov 30)
Figure 2: Nov 28 glitch rates

Note that the current glitch rates are all elevated. It's easiest to compare if you open these in windows you can swap back and forth between, or view them side by side.

Looking at the SNR distribution, there is a large population of SNR<30 glitches (note the large hump instead of a linear decay on the log-log plot).
Figure 3: current SNR distribution (Nov 30)
Figure 4: Nov 28 SNR distribution

Now looking at the histogram of glitch SNR versus frequency, it's clear there are more high-SNR glitches at low frequencies, but the elevated rate of low-SNR glitches seems to be coming mostly from the (new?) 2 kHz and 3 kHz glitches.
Figure 5: current SNR versus frequency (Nov 30)
Figure 6: Nov 28 SNR versus frequency
Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 13:43, Wednesday 30 November 2016 (32036)OpsInfo

Our violin mode second harmonics are rung up, which is most likely the problem here. We had a rough lockloss late Monday night in which things got rung up. For now, Ed, Jeff B, and Jeff K are working on damping some of the rung-up first harmonics, which is why we are not in observing mode right now.

The guardian automatically damps the first-harmonic violin modes, so they are normally small after we have had some long lock stretches, but the second harmonics will only get damped if operators actively work on them. It would be a good idea for operators to watch these and damp them as well as we can. Allowing operators to damp these and change settings while we are in observing mode would help get these modes damped.

We have been having ISI trips on locklosses recently, which is probably how these are getting rung up. We are hoping that the tidal triggering change described in alog 31980 will prevent the trips, so that the harmonics will not get rung up as often.

H1 SUS (CDS)
jeffrey.kissel@LIGO.ORG - posted 11:16, Wednesday 30 November 2016 - last comment - 09:17, Wednesday 29 March 2017(32021)
SUS PR2 Frame Rate Differences -- Understood; Let's Leave It Be
J. Kissel, S. Aston, P. Fritschel

Peter was browsing through the list of frame channels and noticed that there are some differences between H1 and L1 on PR2 (an HSTS), even after we've both gone through and made the effort to revamp our channel list -- see Integration Issue 6463, ECR E1600316, LHO aLOG 30844, and LLO aLOG 29091.

The difference he found is the result of the LHO-only ECR E1400369 to increase the drive strength of the lower stages of *some* of the HSTSs. This requires the two sites to have different front-end model library parts for the same suspension type, because the BIO control of each stage differs depending on which drivers have been modified.
At LHO the configuration is
    Library Part        Driver Configuration            Optics
    HSTS_MASTER.mdl     No modified TACQ Drivers        MC1, MC3
    MC_MASTER.mdl       M2 modified, M3 not modified    MC2
    RC_MASTER.mdl       M2 and M3 modified              PRM, PR2, SRM, SR2

At LLO, the configuration is
    Library Part        Driver Configuration            Optics
    HSTS_MASTER.mdl     No modified TACQ Drivers        MC1, MC3, PR2
    MC_MASTER.mdl       M2 modified, M3 not modified    MC2, PRM, SR2, SRM
    RC_MASTER.mdl       M2 and M3 modified              none

The DAQ channel list for the MC and RC masters is the same. The HSTS master's is different, and slower, because these SUS are used for angular control only: 
                           HSTS (Hz)        MC or RC (Hz)
    M3_ISCINF_L_IN1        2048               16384

    M3_MASTER_OUT_UL       2048               16384
    M3_MASTER_OUT_LL       2048               16384
    M3_MASTER_OUT_UR       2048               16384
    M3_MASTER_OUT_LR       2048               16384

    M3_DRIVEALIGN_L_OUT    2048               4096

Since LLO's PR2 does not have any modifications to its TACQ drivers, it uses the HSTS_MASTER model, which means that PR2 alone shows up as a difference in the channel list between the sites -- the difference that seemed odd to Peter, namely that L1 has 6 more 2048 Hz channels than H1. Sadly, it *is* used for longitudinal control, so LLO suffers from the lower stored frame rate.

In order to "fix" this difference, we'd have to create a new library part for LLO's PR2 alone that has the DAC channel list of an MC or RC master, but have the BIO control logic of an HSTS master (i.e. to operate M2 and M3 stages with an unmodified TACQ driver). That seems excessive given that we already have 3 different models due to differing site preferences (and maybe range needs), so I propose we leave things as is, unless there's dire need to compare the high frequency drive signals to the M3 stage of PR2 at LLO.

I attach a screenshot that compares the DAQ channel lists for the three library parts, and the two types of control needs as defined by T1600432.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:59, Thursday 01 December 2016 (32066)
Just to trace out the history of the HSTS TACQ drivers at both sites:

Prototype L1200226 driver installed to increase the MC2 M2 stage drive strength at LLO:
LLO aLOG 4356
      >> L1 MC2 becomes MC_MASTER.

ECR to implement the L1200226 on MC2, PRM, and SRM M2 stages for both sites: E1200931
      >> L1 PRM, SRM become MC_MASTERs
      >> H1 MC2, PRM, SRM become MC_MASTERs

LLO temporarily swapped both PR2 and SR2 M2 drivers for L1200226 drivers: LLO aLOG 16945
And then reverted two days later: LLO aLOG 16985
     
ECR to increase the drive strength of the SR2 M2 stage only at LLO: E1500421
      >> L1 SR2 becomes MC_MASTER

ECR to increase the drive strength of SR2 and PR2 M2 and PRM, PR2, SRM, SR2 M3 at LHO only: E1400369
      >> H1 PRM, PR2, SRM, SR2 become RC_MASTERs.
stuart.aston@LIGO.ORG - 09:17, Wednesday 29 March 2017 (35180)
LLO have since had an ECR to increase the drive strength for PR2 M2 stage: E1700108
      >> L1 PR2 from HSTS_MASTER becomes MC_MASTER

This has now been implemented (and has stuck this time) at LLO: LLO aLOG's 32597 and 32623.
H1 CDS
david.barker@LIGO.ORG - posted 11:10, Wednesday 30 November 2016 (32024)
CDS Wifi Access Points powered down in mid and end stations

We remotely powered down the CDS WAPs in both the mid and end stations by disabling the POE switch ports around 10:45 PST. When the opportunity arises, we'll go to these locations, disconnect the ethernet cables, and restart the switch ports.

H1 ISC (OpsInfo)
jenne.driggers@LIGO.ORG - posted 11:04, Wednesday 30 November 2016 - last comment - 11:30, Friday 02 December 2016(32022)
Beam spots not moving too much since last alignment work

I have looked at all the A2L data that we have since the last time the alignment was significantly changed, which was Monday afternoon after the PSL PZT work (alog 31951).  This is the first attached plot.

The first data point is a bit different than the rest, although I'm not totally sure why.  Other than that, we're mostly holding our spot positions quite constant.  The 3rd-to-last point, taken in the middle of the overnight lock stretch (alog 32004) shows a bit of a spot difference on ETMX, particularly in yaw, but other than that we're pretty solid.

For the next ~week, I'd like operators to run the test mass a2l script (a2l_min_lho.py) about once per day, so that we can track the spot positions a bit.  After that, we'll move to our observing run standard of running a2l once a week as part of Tuesday maintenance.

The second attached plot is just the last 2 points from the current lock.  The first point was taken immediately upon lock; the second was taken about 30 min into the lock.  The maximum spot movement in the figure appears to be about 0.2 mm, but I think that is within the error of the A2L measurement.  I can't find it right now, but once upon a time I ran A2L 5 or 7 times in a row to see how consistent the answer is, and I think I remember the stdev was about 0.3 mm.

The point of the second plot is that at 30W, it doesn't seem to make a big difference if we run a2l immediately or a little later, so we can run it for our once-a-days as soon as we lock, or when we're otherwise out of Observe, and don't have to hold off on going to Observe just for A2L.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 11:13, Wednesday 30 November 2016 (32025)

In case you don't have it memorized, here's the location of the A2L script:

  • cd /opt/rtcds/userapps/release/isc/common/scripts/decoup
  • ./a2l_min_LHO.py
keita.kawabe@LIGO.ORG - 11:30, Friday 02 December 2016 (32106)

A2L: How to know if it's good or bad at the moment.

Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml

It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.

All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).

"Now" is actually about 15:00 UTC, 7AM PT, and you can see that the coherence at around 20Hz (where the ASC feedback to TM starts to be dominated by the sensing noise) significantly worse, and DARM itself was also worse, so  you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.

Thing is, this might slowly drift around and get better or worse. You can run this template for many points in time (for example, each hour), and if the coherence seems to be consistently worse than right after a2l, you know that we need a2l. (A better approach is to write a script to plot the coherence as a time series, which is a good project for fellows -- see the sketch below.)

If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of lock and becomes larger an hour or two into the lock, that's the sign that we need to run a2l an hour or two after the lock.
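A minimal sketch of what such a "coherence versus time" script could look like, assuming gwpy and NDS2 data access from a control-room workstation; the ASC drive channel, GPS start time, and band edges below are placeholders, not the values actually used here:

    import numpy as np
    import matplotlib.pyplot as plt
    from gwpy.timeseries import TimeSeries

    DARM = 'H1:CAL-DELTAL_EXTERNAL_DQ'
    DRIVE = 'H1:ASC-CHARD_P_OUT_DQ'                 # placeholder ASC drive witness
    START, STRIDE, NPOINTS = 1164550000, 404, 12    # GPS start, seconds per estimate, points
    BAND = (15.0, 25.0)                             # Hz, where sensing noise starts to dominate

    times, band_coh = [], []
    for k in range(NPOINTS):
        t0 = START + k * STRIDE
        darm = TimeSeries.get(DARM, t0, t0 + STRIDE)
        drive = TimeSeries.get(DRIVE, t0, t0 + STRIDE)
        if darm.sample_rate != drive.sample_rate:
            darm = darm.resample(drive.sample_rate)     # coherence needs matching rates
        coh = darm.coherence(drive, fftlength=8, overlap=4)
        f = coh.frequencies.value
        sel = (f >= BAND[0]) & (f <= BAND[1])
        times.append(t0)
        band_coh.append(coh.value[sel].mean())

    plt.plot(times, band_coh, 'o-')
    plt.xlabel('GPS time [s]')
    plt.ylabel('mean DARM/ASC coherence, %g-%g Hz' % BAND)
    plt.savefig('a2l_coherence_trend.png')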

Images attached to this comment
H1 SUS (IOO, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 10:14, Wednesday 30 November 2016 (32020)
Macros Updated for SUS IM MEDM Screens to fix DAC Output Confusion
J. Kissel, J. Driggers

Jenne identified that the IM overview screens had an incorrect order of channels in their DAC output, where the IOP model outputs had mistakenly come before the USER model outputs in left-to-right fashion. I found, while trying to commit the fix to the macro files, that Stuart had already found, fixed, and committed the changes back in March of this year -- see LHO aLOG 25320.

So, I've reverted my changes, svn up'd to the repo version, and all is well. Thanks Stuart!
LHO General
corey.gray@LIGO.ORG - posted 10:05, Wednesday 30 November 2016 (32012)
Day Transition: O2 Has Officially Started!! (at 8am PST)
TITLE: 11/30 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 6mph Gusts, 5mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.53 μm/s 
QUICK SUMMARY:
Just like clockwork, H1 kept us on our toes and dropped out of lock near the end of Cheryl's OWL shift.  But when I arrived (to a crowded Control Room with the O2 Start Livestream), Cheryl was taking H1 up (with attention to Bounce, Roll, and Violin modes).  
 
After H1 made it to NOMINAL LOW NOISE (NLN), we had a few loose items we wanted to address before going to OBSERVING:

So we went up & down from OBSERVING a few times for the items above.  Now we are back in OBSERVING & should have most loose ends done.

18:02 (10:02am PST):  Chatted with William Parker at LLO.  He mentioned they are battling seismic noise.  They have had high useism due to a storm in the Gulf, and they also have winds of 10-15mph which make locking problematic.

Morning Meeting Minutes

I must admit only having one ear to our 8:30am meeting (busy busy with prepping for OBSERVING), but it seemed short.  Basically announced we are in a new operational state for LHO with O2.  

Images attached to this report
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 09:46, Wednesday 30 November 2016 (32019)
HWS cameras might be cross talking at corner station

Kiwamu, Nutsinee

We had another camera glitch this morning, and restarting the computer didn't solve the problem. Kiwamu tried turning off all the camera and frame grabber switches while running the image streaming code, but it seems to stream something glitchy even without any real inputs (this is also true with the SLED off). We also tried streaming images from one camera at a time: X appeared to run fine but started to glitch as soon as we streamed images from the Y camera. This is also true with the HWS code. No matter which order we talk to the cameras in, HWSX will always be the one that glitches if we are talking to the Y camera at the same time. We used to be able to stream images from both cameras at the same time, and clearly we were able to run both the HWSX and HWSY scripts simultaneously without any issues.

 

We will keep HWSX code running for now. HWSY code is not running.

Images attached to this report
H1 CAL (DetChar, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 09:32, Wednesday 30 November 2016 (32017)
PCALX Roaming Calibration Line Frequency Changed from 1001.3 to 1501.3 Hz
J. Kissel

Continuing the schedule for this roaming line with a move from 1001.3 to 1501.3 Hz. We (as in operators and I instead of just I) will make an effort to pay closer attention to this, so we can be done with the schedule sooner and turn off this line for the duration of the run.


Frequency    Planned Amplitude        Planned Duration      Actual Amplitude    Start Time                 Stop Time                    Achieved Duration
(Hz)         (ct)                     (hh:mm)                   (ct)               (UTC)                    (UTC)                         (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------------------
1001.3       35k                      02:00                   39322.0           Nov 28 2016 17:20:44 UTC  Nov 30 2016 17:16:00 UTC         days    @ 30 W  
1501.3       35k                      02:00                   39322.0           Nov 30 2016 17:27:00 UTC
2001.3       35k                      02:00                   39322.0           
2501.3       35k                      05:00                   39322.0           
3001.3       35k                      05:00                   39322.0           
3501.3       35k                      05:00                   39322.0           
4001.3       40k                      10:00                   39322.0           
4301.3       40k                      10:00                   39322.0                
4501.3       40k                      10:00                   39322.0           
4801.3       40k                      10:00                   39222.0           
5001.3       40k                      10:00                   39222.0           


Frequency    Planned Amplitude        Planned Duration      Actual Amplitude    Start Time                 Stop Time                    Achieved Duration
(Hz)         (ct)                     (hh:mm)                   (ct)               (UTC)                    (UTC)                         (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------------------
1001.3       35k                      02:00                   39322.0           Nov 11 2016 21:37:50 UTC    Nov 12 2016 03:28:21 UTC      ~several hours @ 25 W
1501.3       35k                      02:00                   39322.0           Oct 24 2016 15:26:57 UTC    Oct 31 2016 15:44:29 UTC      ~week @ 25 W
2001.3       35k                      02:00                   39322.0           Oct 17 2016 21:22:03 UTC    Oct 24 2016 15:26:57 UTC      several days (at both 50W and 25 W)
2501.3       35k                      05:00                   39322.0           Oct 12 2016 03:20:41 UTC    Oct 17 2016 21:22:03 UTC      days     @ 50 W
3001.3       35k                      05:00                   39322.0           Oct 06 2016 18:39:26 UTC    Oct 12 2016 03:20:41 UTC      days     @ 50 W
3501.3       35k                      05:00                   39322.0           Jul 06 2016 18:56:13 UTC    Oct 06 2016 18:39:26 UTC      months   @ 50 W
4001.3       40k                      10:00                   39322.0           Nov 12 2016 03:28:21 UTC    Nov 16 2016 22:17:29 UTC      days     @ 30 W (see LHO aLOG 31546 for caveats)
4301.3       40k                      10:00                   39322.0           Nov 16 2016 22:17:29 UTC    Nov 18 2016 17:08:49 UTC      days     @ 30 W          
4501.3       40k                      10:00                   39322.0           Nov 18 2016 17:08:49 UTC    Nov 20 2016 16:54:32 UTC      days     @ 30 W (see LHO aLOG 31610 for caveats)   
4801.3       40k                      10:00                   39222.0           Nov 20 2016 16:54:32 UTC    Nov 22 2016 23:56:06 UTC      days     @ 30 W
5001.3       40k                      10:00                   39222.0           Nov 22 2016 23:56:06 UTC    Nov 28 2016 17:20:44 UTC      days     @ 30 W (line was OFF and ON for Hardware INJ)
H1 DCS (DCS)
gregory.mendell@LIGO.ORG - posted 09:31, Wednesday 30 November 2016 (32018)
DCS switch to O2

DCS (LDAS) successfully switched from using the ER10 locations to archive data to the O2 locations starting from:

1164554240 == Nov 30 2016 07:17:03 PST == Nov 30 2016 09:17:03 CST == Nov 30 2016 15:17:03 UTC.

This change should be transparent to users requesting data.
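For reference, a GPS <-> UTC conversion like the one above can be double-checked with gwpy (a small sketch assuming gwpy is installed; the lalapps tconvert command-line tool does the same job):

    from gwpy.time import tconvert

    print(tconvert(1164554240))             # -> 2016-11-30 15:17:03 (UTC)
    print(tconvert('2016-11-30 15:17:03'))  # -> 1164554240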

H1 DetChar (CAL, DetChar)
evan.goetz@LIGO.ORG - posted 09:29, Wednesday 30 November 2016 (32016)
Pcal X "roaming" line turned off and laser shuttered as test
Jeff K., Evan G.

At 17:16:30 Nov 30 2016 UTC, the 1001.3 Hz Pcal X line was turned off as a test for the DetChar group.

At 17:21:30 Nov 30 2016 UTC, we shuttered the Pcal X laser.

At 17:26:30 Nov 30 2016 UTC, we un-shuttered the Pcal X laser.

The line frequency will be moved to 1501.3 Hz shortly.
H1 PSL
edmond.merilh@LIGO.ORG - posted 09:02, Wednesday 30 November 2016 (32014)
PSL Weekly 20 Day Trends - FAMIS #6124

Trends are for the last 20 days due to trends not being taken last week.

WeeklyXtal - Nothing unusual. Amp powers following humidity. Normal power degradation in Osc diode power. Blown readback on Osc DB3 current.

WeeklyLaser - normal. Incursions Monday.

WeeklyEnv - normal

WeeklyChiller - normal except for a trip on Tuesday morning.

Images attached to this report
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 08:34, Wednesday 30 November 2016 - last comment - 08:41, Wednesday 30 November 2016(32008)
Restart h1hwsmsr computer

Around 8:30 AM local. Cameras glitched again.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 08:41, Wednesday 30 November 2016 (32009)

8:40 AM restarted h1hwsmsr computer again. This time with cameras and frame grabbers turned off.

~9:00 AM We restarted h1hwsmsr again.

H1 General
cheryl.vorvick@LIGO.ORG - posted 08:15, Wednesday 30 November 2016 (32007)
Ops Owl Summary:

State of H1: relocking, at ENGAGE_SOFT_LOOPS

Activities:

Images attached to this report
H1 ISC (DetChar, ISC)
andrew.lundgren@LIGO.ORG - posted 02:47, Wednesday 30 November 2016 - last comment - 11:23, Friday 02 December 2016(32002)
Check 1080 Hz band coherence with jitter witnesses
Could someone on site check the coherence of DARM around 1080 Hz with the usual jitter witnesses? We're not able to do it offsite because the best witness channels are stored with a Nyquist of 1024 Hz. What we need is the coherence from 1000 to 1200 Hz with things like the IMC WFS (especially the sum, I think). The DBB would be nice if available, but I think it's usually shuttered.

There's indirect evidence from hVeto that this is jitter, so if there is a good witness channel we'll want to increase the sampling rate in case we get an SN or BNS that has power in this band.
Comments related to this report
cheryl.vorvick@LIGO.ORG - 03:23, Wednesday 30 November 2016 (32003)
  • IMC WFS channels are ALL collected at 2048Hz
  • I can't search for an IMC WFS coherence with DARM at 1080Hz
  • I put in an FRS, #6800
evan.goetz@LIGO.ORG - 07:52, Wednesday 30 November 2016 (32006)
@Andy I'll have a look at IOP channels.
evan.goetz@LIGO.ORG - 08:46, Wednesday 30 November 2016 (32010)DetChar, ISC
Evan G., Keita K.

Upon request, I'm attaching several coherence plots for the 1000-1200 Hz band between H1:CAL-DELTAL_EXTERNAL_DQ and many IMC WFS IOP channels (IOP-ASC0_MADC0_TP_CH[0-12]), ISS intensity noise witness channels (PSL-ISS_PD[A,B]_REL_OUT_DQ), PSL QPD channels (PSL-ISS_QPD_D[X,Y]_OUT_DQ), ILS and PMC HV mon channels, and ISS second loop QPD channels.

Unfortunately, there is low coherence between all of these channels and DELTAL_EXTERNAL, so we don't have any good leads here.
Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 11:23, Friday 02 December 2016 (32105)

A2L: How to know if it's good or bad at the moment.

Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml

It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.

All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).

"Now" is actually about 15:00 UTC, 7AM PT, and you can see that the coherence at around 20Hz (where the ASC feedback to TM starts to be dominated by the sensing noise) significantly worse, and DARM itself was also worse, so  you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.

Thing is, this might slowly drift around and get better or worse. You can run this template for many points in time (for example, each hour), and if the coherence seems to be consistently worse than right after a2l, you know that we need a2l. (A better approach is to write a script to plot the coherence as a time series, which is a good project for fellows.)

If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of lock and becomes larger an hour or two into the lock, that's the sign that we need to run a2l an hour or two after the lock.

[EDIT] Sorry wrong alog.

Images attached to this comment
H1 ISC (CDS, GRD, OpsInfo)
sheila.dwyer@LIGO.ORG - posted 01:11, Wednesday 30 November 2016 - last comment - 13:03, Wednesday 30 November 2016(31996)
a few measurements tonight, more SDF/guardian stuff

I made a few measurements tonight, and we did a little bit more work to be able to go to observe. 

Measurements:

First, I tried to look at why our yaw ASC loops move at 1.88 Hz. I tried to modify the MICH Y loop a few times, which broke the lock, but Jim relocked right away.  

Then I did a repeat of noise injections for jitter with the new PZT mount, and did repeats of MICH/PRCL/SRCL/ASC injections.  Since MICH Y was about 10 times larger in DARM than pit (it was at about the level of CHARD in DARM), I adjusted MICH Y2L by hand using a 21 Hz line.  By changing the gain from 2.54 to 1, the coupling of the line to DARM was reduced by a bit more than a factor of 10, and the MICH yaw noise is now a factor of 10 below DARM at 20 Hz.  
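As an aside, here is a hedged sketch (not the procedure actually used above) of how the coupling of a single dither line such as the 21 Hz line could be quantified in DARM: demodulate DELTAL at the line frequency before and after the gain change. The channel name is real, but the GPS times are placeholders:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    def line_amplitude(channel, t0, duration, f_line):
        """Amplitude of a sinusoid at f_line in `channel` over `duration` seconds."""
        data = TimeSeries.get(channel, t0, t0 + duration)
        t = data.times.value - data.times.value[0]
        iq = np.mean(data.value * np.exp(-2j * np.pi * f_line * t))  # single-bin demodulation
        return 2.0 * np.abs(iq)

    before = line_amplitude('H1:CAL-DELTAL_EXTERNAL_DQ', 1164540000, 60, 21.0)
    after = line_amplitude('H1:CAL-DELTAL_EXTERNAL_DQ', 1164541000, 60, 21.0)
    print('21 Hz line in DELTAL: %.3g -> %.3g (reduced by x%.2f)' % (before, after, before / after))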

Lastly, I quickly checked if I could change the noise by adjusting the bias on ETMX.  A few weeks ago I had changed the bias to -400V, which reduced the 60Hz line by a factor of 2, but the line has gotten larger over the last few weeks.  However, it is still true that the best bias is -400V.  We still see no difference in the broad level of noise when changing this bias. 

Going to observe:

I've added round(,3) to the SOFT input matrix elements that needed it, and to MCL_GAIN in ANALOG_CARM

DIAG main complained about IM2 Y being out of the nominal range; this is because of the move we made after the IMC PZT work (31951).  I changed the nominal value for DAMP Y IN1 from -209 to -325.

A few minutes after Cheryl went to observe, we were kicked out of observe again because of fiber polarization, both from an SDF difference because of the PLL autolocker and from a warning in DIAG main.  This is something that shouldn't kick us out of observation mode because it doesn't matter at all.  We should change DIAG_MAIN to only make this test when we are acquiring lock, and perhaps not monitor some of these channels in the SDF Observe table. We decided the easiest solution for tonight was to fix the fiber polarization, so Cheryl did that. 

Lastly, Cheryl suggested that we organize the guardian states for ISC_LOCK so that states which are not normally used are above NOMINAL_LOW_NOISE. I've renumbered the states but not yet loaded the guardian, because I think that would knock us out of observation mode and we want to let the hardware injections happen. 

REDUCE_RF9 modulation depth guardian problem:

It seems like the REDUCE_RF9_MODULATION_DEPTH state somehow skips resetting some gains (the screenshot shows the problem; noted before in alog 31558).  This could be serious, and could be why we have occasionally lost lock in this state.  I've attached the log; this is disconcerting because the guardian log reports that it set the gains, but it seems not to have happened.  For the two PDs which did not get set, it also looks like the rounding step was skipped. 

2016-11-30_06:34:34.450020Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFL_A_RF9_I_GAIN => 3.99052462994
2016-11-30_06:34:34.461120Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFL_A_RF9_Q_GAIN => 3.99052462994
2016-11-30_06:34:34.461760Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFL_A_RF9_Q_GAIN => 3.991
2016-11-30_06:34:34.462600Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-POPAIR_A_RF9_I_GAIN => 1.99526231497
2016-11-30_06:34:34.463200Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-POPAIR_A_RF9_I_GAIN => 1.995
2016-11-30_06:34:34.464820Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-POPAIR_A_RF9_Q_GAIN => 1.99526231497
2016-11-30_06:34:34.466310Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFLAIR_A_RF9_I_GAIN => 0.498815578742
 
I reported this in bugzilla 1062 and committed the guardian code as revision 14719

We accepted the wrong values in SDF (neither of these PDs is in use in lock) so that Adam could make a hardware injection. The next time the IFO locks, the operator should accept the correct values.

Images attached to this report
Non-image files attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 11:36, Wednesday 30 November 2016 (32027)

Responded to bug report: https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=1062

jenne.driggers@LIGO.ORG - 12:18, Wednesday 30 November 2016 (32031)

Similar thing happened for ASC-REFL_B_RF45_Q_PIT during the last acquisition.  I have added some notes to the bug so that Jamie can follow up.

jenne.driggers@LIGO.ORG - 13:03, Wednesday 30 November 2016 (32035)

We think that, as Jamie's comment suggests, writing to the same channel too fast is probably the problem.  Sheila is currently circulating the work permit to fix the bug.
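For illustration only, a minimal hedged sketch (not the actual guardian fix being circulated) of one way to guard a gain write against this failure mode, using the standard ezca dictionary-style interface available in guardian code: write, then read back and retry until the value sticks.

    import time

    def write_and_verify(ezca, channel, value, tol=1e-4, tries=5, pause=0.1):
        """Write `value` to `channel` and confirm by read-back before returning."""
        for attempt in range(tries):
            ezca[channel] = value
            time.sleep(pause)                       # let the EPICS write settle
            if abs(ezca[channel] - value) < tol:
                return True
        raise RuntimeError('%s did not settle to %g after %d tries' % (channel, value, tries))

    # e.g. inside REDUCE_RF9_MODULATION_DEPTH.main (new_gain is hypothetical):
    #   write_and_verify(ezca, 'LSC-REFL_A_RF9_I_GAIN', round(new_gain, 3))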

H1 INJ (INJ)
adam.mullavey@LIGO.ORG - posted 01:10, Wednesday 30 November 2016 - last comment - 11:01, Wednesday 30 November 2016(31998)
Coherent CBC Injections

I've scheduled a CBC injection to begin at 9:20 UTC (1:20 PT).

Here is the change to the schedule file:

1164532817 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt

I'll be scheduling more shortly.

Comments related to this report
adam.mullavey@LIGO.ORG - 01:47, Wednesday 30 November 2016 (32000)

I've scheduled another two injections. The next one is a NSBH inspiral scheduled at 10:30 UTC (2:30 PT) and the following one is another BBH scheduled for 11:40 UTC (3:40 PT).

Here is the update to the schedule file:

1164537017 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/nsbh_hwinj_snr24_1163501314_{ifo}_filtered.txt

1164541217 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt

The xml files can be found in the injection svn in the Inspiral directory.
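A hedged sketch of parsing schedule entries of the form shown above; the field meanings (GPS start, detectors, injection state, enable flag, scale factor, waveform file) are assumed from context rather than taken from the schedule-file specification:

    from collections import namedtuple

    Entry = namedtuple('Entry', 'gps detectors state flag scale waveform')

    def parse_schedule_line(line):
        gps, detectors, state, flag, scale, waveform = line.split()
        return Entry(int(gps), detectors, state, int(flag), float(scale), waveform)

    entry = parse_schedule_line(
        '1164532817 H1L1 INJECT_CBC_ACTIVE 1 1.0 '
        'Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt')
    print(entry.gps, entry.waveform.format(ifo='H1'))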

adam.mullavey@LIGO.ORG - 11:01, Wednesday 30 November 2016 (32023)INJ

All three of these scheduled injections were successfully injected at LHO. The first two were coincident with LLO, the third wasn't injected at LLO as the L1 IFO was down at the time. The relevant section of the INJ_TRANS guardian log is attached.

Non-image files attached to this comment
H1 SYS
jenne.driggers@LIGO.ORG - posted 19:25, Tuesday 29 November 2016 - last comment - 11:39, Wednesday 30 November 2016(31992)
Observatory intent bit un-monitored

[Jenne, JimW, JeffK, Sheila, EvanG, Jamie]

We were ready to try hitting the Intent bit, since SDF looked clear, but kept failing.  We were auto-popped out of Observation.  With Jamie on the phone, we realized that the ODCMASTER SDF file was looking at the Observatory intent bit.  When the Observe.snap file was captured, the intent bit was not set, so when we set the intent bit, SDF saw a difference, and popped us out of Observe.  Eeek! 

We have set the observatory intent bit to not-monitored in SDF.  After doing this, we were able to actually set the bit and stay in Observe. 

Talking with Jamie, it's perhaps not clear that the ODCMASTER model should be under SDF control, but at least we have something that works for now.

Comments related to this report
jameson.rollins@LIGO.ORG - 11:39, Wednesday 30 November 2016 (32029)

I think unmonitoring the intent bit channel is the best thing to do.  I can see why we would like to monitor the other settings in the ODC models.  So I think this is the "right" solution, and no further action is required.

H1 General
betsy.weaver@LIGO.ORG - posted 12:52, Tuesday 29 November 2016 - last comment - 09:42, Friday 02 December 2016(31975)
O2 prep - LVEA walk thru for electronics cleanup

Betsy, Keita, Daniel

As part of the LVEA sweep prior to the start of O2, this morning we spent over an hour cleaning up miscellaneous cables and test equipment in the LVEA and electronics room.  There were quite a few cables dangling from various racks; here's the full list of what we cleaned up and where:

Location            Rack                        Slot            Description
Electronics Room    ISC C2                      --              Found unused servo controller/cables/mixer on top of rack.  Only power was connected, but lots of dangling cables.  Removed entire unit and cables.
Electronics Room    ISC C3                      19              D1000124 - Port #7 had a dangling cable - removed and terminated.
Electronics Room    ISC C4                      Top             Found dangling cable from "ALS COM VCO" Port 2 of 6.  Removed and terminated.
Electronics Room    Rack next to PSL rack       --              Dangling fiber cable.  Left it...
LVEA near PSL       ISC R4                      18              ADC card port stickered "AO IN 2" - dangling BNC removed.
LVEA near PSL       ISC R4                      18 to PSL P1    BNC-Lemo with resistor blue box connecting "AO2" on R4 to "TF IN" on the P1 PMC locking servo card - removed.
LVEA near PSL       ISC R4                      20              T'd dangling BNC on back of chassis - removed T and unused BNC.
LVEA near PSL       --                          --              Disconnected unused O-scope, analyzer, and extension cords near these racks.
LVEA                Under HAM1 south            --              Disconnected extension cord running to powered-off Beckhoff rotation stage termination box.  Richard said the unit is to be removed someday altogether.
LVEA                Under HAM4 NE cable tray    --              Turned off (via power cord) the TV monitor that was on.
LVEA                HAM6 NE corner              --              Kiwamu powered off and removed power cables from OSA equipment near the HAM6 ISCT table.
LVEA                --                          --              Unplugged/removed other various unused power strips and extension cords.

I also threw the main breaker to the OFF position on both of the free standing unused transformer units in the LVEA - one I completely unplugged because I thought I could still hear it humming.

No monitors or computers appear to be on except the 2 VE Beckhoff ones that must remain on (in their stand-alone racks on the floor).

We'll ask the early morning crew to sweep for Phones, Access readers, lights, and WIFI first thing in the morning.

Comments related to this report
filiberto.clara@LIGO.ORG - 08:48, Wednesday 30 November 2016 (32011)

Final walk thru of LVEA was done this morning. The following items were unplugged or powered off:

Phones
1. Next to PSL Rack
2. Next to HAM 6
3. In CER

Card Readers
1. High Bay entry
2. Main entry

Wifi
1. Unplugged network cable from patch panel in FAC Rack

corey.gray@LIGO.ORG - 09:42, Friday 02 December 2016 (32100)OpsInfo

Added this to Ops Sticky Notes page.
