Reports until 06:07, Wednesday 30 November 2016
H1 General
cheryl.vorvick@LIGO.ORG - posted 06:07, Wednesday 30 November 2016 (32004)
lock degrading - out of Observe
H1 ISC (DetChar, ISC)
andrew.lundgren@LIGO.ORG - posted 02:47, Wednesday 30 November 2016 - last comment - 11:23, Friday 02 December 2016(32002)
Check 1080 Hz band coherence with jitter witnesses
Could someone on site check the coherence of DARM around 1080 Hz with the usual jitter witnesses? We're not able to do it offsite because the best witness channels are stored with a Nyquist of 1024 Hz. What we need is the coherence from 1000 to 1200 Hz with things like IMC WFS (especially the sum, I think). The DBB would be nice if available, but I think it's usually shuttered.

There's indirect evidence from hVeto that this is jitter, so if there is a good witness channel we'll want to increase the sampling rate in case we get an SN or BNS that has power in this band.
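The requested check is just a band-limited coherence estimate. Below is a minimal sketch of that computation using synthetic stand-in data (in practice the DARM and witness channels would be fetched with NDS2/gwpy at a sample rate of at least 4096 Hz; the channel data here is simulated, not real):

```python
import numpy as np
from scipy.signal import coherence

np.random.seed(0)
fs = 4096                       # sample rate [Hz]; enough headroom above 1200 Hz
t = np.arange(0, 64, 1.0 / fs)  # 64 s stretch

# Stand-in signals: a shared 1080 Hz jitter component plus independent noise.
jitter = np.sin(2 * np.pi * 1080 * t)
darm = jitter + 0.5 * np.random.randn(t.size)
witness = jitter + 0.5 * np.random.randn(t.size)

# Coherence with 4 s FFT segments (0.25 Hz resolution).
f, coh = coherence(darm, witness, fs=fs, nperseg=4 * fs)

# Inspect the 1000-1200 Hz band of interest.
band = (f >= 1000) & (f <= 1200)
peak_bin = np.argmax(coh * band)
print("peak coherence in band: %.2f at %.2f Hz" % (coh[peak_bin], f[peak_bin]))
```

With a genuine common jitter component, the coherence peaks sharply at the shared frequency while staying near the 1/N-averages floor elsewhere in the band.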
Comments related to this report
cheryl.vorvick@LIGO.ORG - 03:23, Wednesday 30 November 2016 (32003)
  • IMC WFS channels are ALL collected at 2048Hz
  • I can't search for an IMC WFS coherence with DARM at 1080Hz
  • I put in an FRS, #6800
evan.goetz@LIGO.ORG - 07:52, Wednesday 30 November 2016 (32006)
@Andy I'll have a look at IOP channels.
evan.goetz@LIGO.ORG - 08:46, Wednesday 30 November 2016 (32010)DetChar, ISC
Evan G., Keita K.

Upon request, I'm attaching several coherence plots for the 1000-1200 Hz band between H1:CAL-DELTAL_EXTERNAL_DQ and many IMC WFS IOP channels (IOP-ASC0_MADC0_TP_CH[0-12]), ISS intensity noise witness channels (PSL-ISS_PD[A,B]_REL_OUT_DQ), PSL QPD channels (PSL-ISS_QPD_D[X,Y]_OUT_DQ), ILS and PMC HV mon channels, and ISS second loop QPD channels.

Unfortunately, there is low coherence between all of these channels and DELTAL_EXTERNAL, so we don't have any good leads here.
Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 11:23, Friday 02 December 2016 (32105)

A2L: How to know if it's good or bad at the moment.

Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml

It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.

All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).

"Now" is actually about 15:00 UTC, 7AM PT, and you can see that the coherence at around 20Hz (where the ASC feedback to TM starts to be dominated by the sensing noise) significantly worse, and DARM itself was also worse, so  you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.

The thing is, this might slowly drift around and get better or worse. You can run this template for many points in time (for example, each hour), and if the coherence is consistently worse than it was right after a2l, you know that we need to run a2l. (A better approach is to write a script that plots the coherence as a time series, which is a good project for fellows.)

If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of the lock and becomes larger an hour or two in, that's the sign that we need to run a2l an hour or two into each lock.
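The coherence-versus-time project suggested above could be sketched as follows. This is illustrative only: the band edges, durations, and the toy "coupling" model are assumptions, and real data would come from the DTT template or NDS rather than the simulated stretches used here:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(darm, asc_drive, fs, band=(15.0, 25.0), fft_sec=4.0):
    """Mean DARM / ASC-drive coherence in a band around 20 Hz."""
    f, coh = coherence(darm, asc_drive, fs=fs, nperseg=int(fft_sec * fs))
    sel = (f >= band[0]) & (f <= band[1])
    return coh[sel].mean()

# Toy trend: hourly 404 s stretches (matching the template duration) where
# the ASC drive couples progressively more strongly into DARM, mimicking
# a2l quality degrading over a lock.
np.random.seed(1)
fs = 256
t = np.arange(0, 404, 1.0 / fs)
trend = []
for coupling in [0.1, 0.5, 1.0]:
    drive = np.random.randn(t.size)
    darm = coupling * drive + np.random.randn(t.size)
    trend.append(band_coherence(darm, drive, fs))
print(trend)  # coherence grows as the coupling grows
```

Plotting such a band-averaged coherence every hour of a lock stretch would show whether the a2l quality is drifting.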

[EDIT] Sorry wrong alog.

Images attached to this comment
H1 GRD
cheryl.vorvick@LIGO.ORG - posted 01:30, Wednesday 30 November 2016 - last comment - 01:52, Wednesday 30 November 2016(31999)
H1:CAL-PINJX_TRANSIENT_GAIN kicks H1 out of Observe, now it's not monitored
Images attached to this report
Comments related to this report
adam.mullavey@LIGO.ORG - 01:52, Wednesday 30 November 2016 (32001)

Thanks for catching this Cheryl! Yes, please leave this channel unmonitored.

H1 ISC (CDS, GRD, OpsInfo)
sheila.dwyer@LIGO.ORG - posted 01:11, Wednesday 30 November 2016 - last comment - 13:03, Wednesday 30 November 2016(31996)
a few measurements tonight, more SDF/guardian stuff

I made a few measurements tonight, and we did a little bit more work to be able to go to observe. 

Measurements:

First, I tried to look at why our yaw ASC loops move at 1.88 Hz. I tried to modify the MICH Y loop a few times, which broke the lock, but Jim relocked right away.  

Then I did a repeat of noise injections for jitter with the new PZT mount, and repeated the MICH/PRCL/SRCL/ASC injections.  Since MICH Y was about 10 times larger in DARM than pit (it was at about the level of CHARD in DARM), I adjusted MICH Y2L by hand using a 21 Hz line.  By changing the gain from 2.54 to 1, the coupling of the line to DARM was reduced by a bit more than a factor of 10, and the MICH yaw noise is now a factor of 10 below DARM at 20 Hz.  

Lastly, I quickly checked if I could change the noise by adjusting the bias on ETMX.  A few weeks ago I had changed the bias to -400V, which reduced the 60Hz line by a factor of 2, but the line has gotten larger over the last few weeks.  However, it is still true that the best bias is -400V.  We still see no difference in the broad level of noise when changing this bias. 

Going to observe:

I've added round(,3) to the SOFT input matrix elements that needed it, and to MCL_GAIN in ANALOG_CARM

DIAG main complained about IM2 Y being out of the nominal range; this is because of the move we made after the IMC PZT work (31951).  I changed the nominal value from -209 to -325 for DAMP Y IN1.

A few minutes after Cheryl went to observe, we were kicked out of observe again because of fiber polarization: both an SDF difference caused by the PLL autolocker and a warning in DIAG main.  This shouldn't kick us out of observation mode because it doesn't matter at all.  We should change DIAG_MAIN to only run this test while we are acquiring lock, and perhaps stop monitoring some of these channels in the SDF Observe snap. We decided the easiest solution for tonight was to fix the fiber polarization, so Cheryl did that. 

Lastly, Cheryl suggested that we organize the guardian states for ISC_LOCK so that states which are not normally used are above NOMINAL_LOW_NOISE. I've renumbered the states but not yet loaded the guardian, because I think that would knock us out of observation mode and we want to let the hardware injections happen. 

REDUCE_RF9 modulation depth guardian problem:

It seems like the REDUCE_RF9_MODULATION_DEPTH state somehow skips resetting some gains (screenshot shows the problem; noted before in alog 31558).  This could be serious, and could be why we have occasionally lost lock in this state.  I've attached the log. This is disconcerting because the guardian log reports that it set the gains, but it seems not to have happened.  For the two PDs which did not get set, it also looks like the rounding step is skipped. 

2016-11-30_06:34:34.450020Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFL_A_RF9_I_GAIN => 3.99052462994
2016-11-30_06:34:34.461120Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFL_A_RF9_Q_GAIN => 3.99052462994
2016-11-30_06:34:34.461760Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFL_A_RF9_Q_GAIN => 3.991
2016-11-30_06:34:34.462600Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-POPAIR_A_RF9_I_GAIN => 1.99526231497
2016-11-30_06:34:34.463200Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-POPAIR_A_RF9_I_GAIN => 1.995
2016-11-30_06:34:34.464820Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-POPAIR_A_RF9_Q_GAIN => 1.99526231497
2016-11-30_06:34:34.466310Z ISC_LOCK [REDUCE_RF9_MODULATION_DEPTH.main] ezca: H1:LSC-REFLAIR_A_RF9_I_GAIN => 0.498815578742
 
I reported this in bugzilla 1062 and committed the guardian code as revision 14719

We accepted the wrong values in SDF (neither of these PDs is in use in lock) so that Adam could make a hardware injection, but they are the wrong values and will differ next time we lock. The next time the IFO locks, the operator should accept the correct values.
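One defensive pattern against silently dropped channel writes is to verify the readback and retry. The sketch below is not the actual guardian fix (which went through the bug report); `flaky_write`, `read`, and the retry wrapper are illustrative stand-ins for EPICS put/get as exposed by ezca:

```python
import math

def set_and_verify(write, read, channel, value, tries=3, rel_tol=1e-6):
    """Write a channel and confirm the readback, retrying on mismatch.

    `write` and `read` stand in for EPICS channel access (e.g. ezca);
    this only shows the verify-after-write pattern, not real guardian code.
    """
    for _ in range(tries):
        write(channel, value)
        if math.isclose(read(channel), value, rel_tol=rel_tol):
            return True
    return False

# Toy backend that silently drops the first write, like the suspected
# too-fast-caput failure described above.
store, dropped = {}, [True]
def flaky_write(ch, val):
    if dropped[0]:
        dropped[0] = False   # first write silently lost
        return
    store[ch] = val
def read(ch):
    return store.get(ch, 0.0)

ok = set_and_verify(flaky_write, read, "H1:LSC-REFL_A_RF9_I_GAIN", 3.991)
print(ok)
```

With the verification step, the dropped first write is caught and retried instead of leaving a wrong gain in place.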

Images attached to this report
Non-image files attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 11:36, Wednesday 30 November 2016 (32027)

Responded to bug report: https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=1062

jenne.driggers@LIGO.ORG - 12:18, Wednesday 30 November 2016 (32031)

Similar thing happened for ASC-REFL_B_RF45_Q_PIT during the last acquisition.  I have added some notes to the bug so that Jamie can follow up.

jenne.driggers@LIGO.ORG - 13:03, Wednesday 30 November 2016 (32035)

We think that Jamie's comment that we're writing to the same channel too fast is probably the problem.  Sheila is currently circulating the work permit to fix the bug.

H1 INJ (INJ)
adam.mullavey@LIGO.ORG - posted 01:10, Wednesday 30 November 2016 - last comment - 11:01, Wednesday 30 November 2016(31998)
Coherent CBC Injections

I've scheduled a CBC injection to begin at 9:20 UTC (1:20 PT).

Here is the change to the schedule file:

1164532817 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt

I'll be scheduling more shortly.

Comments related to this report
adam.mullavey@LIGO.ORG - 01:47, Wednesday 30 November 2016 (32000)

I've scheduled another two injections. The next one is a NSBH inspiral scheduled at 10:30 UTC (2:30 PT) and the following one is another BBH scheduled for 11:40 UTC (3:40 PT).

Here is the update to the schedule file:

1164537017 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/nsbh_hwinj_snr24_1163501314_{ifo}_filtered.txt

1164541217 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt

The xml files can be found in the injection svn in the Inspiral directory.
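For reference, the schedule lines above have a simple whitespace-separated layout. The parser below is a sketch; the field meanings are inferred from the entries shown (GPS start time, detector list, injection type, then two numeric fields taken here to be a flag and a scale factor, which is an assumption, and finally the waveform file template):

```python
def parse_schedule_line(line):
    """Parse one hardware-injection schedule entry (field layout assumed)."""
    gps, ifos, inj_type, flag, scale, path = line.split()
    return {
        "gps": int(gps),
        "ifos": [ifos[i:i + 2] for i in range(0, len(ifos), 2)],
        "type": inj_type,
        "flag": int(flag),
        "scale": float(scale),
        "waveform": path,   # contains an {ifo} placeholder
    }

entry = parse_schedule_line(
    "1164537017 H1L1 INJECT_CBC_ACTIVE 1 1.0 "
    "Inspiral/{ifo}/nsbh_hwinj_snr24_1163501314_{ifo}_filtered.txt"
)
print(entry["gps"], entry["ifos"], entry["waveform"].format(ifo="H1"))
```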

adam.mullavey@LIGO.ORG - 11:01, Wednesday 30 November 2016 (32023)INJ

All three of these scheduled injections were successfully injected at LHO. The first two were coincident with LLO, the third wasn't injected at LLO as the L1 IFO was down at the time. The relevant section of the INJ_TRANS guardian log is attached.

Non-image files attached to this comment
H1 General
cheryl.vorvick@LIGO.ORG - posted 01:03, Wednesday 30 November 2016 (31997)
Ops Owl Transition

TITLE: 11/30 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 72.6285Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 8mph Gusts, 5mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.50 μm/s
SUMMARY:

H1 General
jim.warner@LIGO.ORG - posted 00:06, Wednesday 30 November 2016 (31995)
Eve Shift Summary

TITLE: 11/30 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: NLN, Sheila was making some measurements
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Locking was pretty easy tonight
LOG:
Corey had it locked when I came in.

Kiwamu did a measurement at DC Readout.

Jenne and crew cleaned up some SDF hiccups; the ODCMASTER OBSERVATION bit was preventing us from going to Observe.

Sheila had some measurements in NLN that broke lock a couple times.

H1 CAL (CAL, DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 21:37, Tuesday 29 November 2016 (31994)
PCAL2DARM Broad Band Injection to test GDS Pipeline Success; One new and one newly published set of Sensing Function Sweeps
J. Kissel

I've injected a broad-band PCAL EY excitation into the IFO and measured its response in CAL-DELTAL EXTERNAL to confirm that systematic errors are small. I've followed it immediately with a set of reference-measurement sensing function swept sines of PCAL2DARM and DARMOLG TFs that have been tuned to improve the low-frequency (< 30 Hz) information. 

Attached are the results. I also attach the results for a never published measurement that was taken on Nov 21 2016 20:13:36 UTC.

One can see that
- The systematic error between 10 and 200 Hz (where the coherence is good enough to make a statement) is frequency dependent, but less than 5% and 2 [deg]. 
- The swept-sine version of the same measurement agrees with the frequency dependence, but confirms that the discrepancy flattens out in magnitude, sticking to around -5%.
- Here are the fit results for physical parameters of the sensing function:
                                 [Units]  value(95% c.i.)      
Meas Date                 2016            Nov 21              Nov 30
IFO Input Power                  [W]      29.5                29.9
SRC1 Loop Status                          ON                  ON

Optical Gain              x 1e6  [ct/m]   1.150 (0.003)       1.124 (0.003)
DARM/RSE Cav. Pole Freq.         [Hz]     348.7 (6.0)         347.5 (5.8)
Detuned SRC Optical Spring Freq. [Hz]     7.09 (0.2)          7.40  (0.2)
Optical Spring Q-Factor (1/Q)    []       0.0413 (0.02)       0.0581 (0.02)
Residual Time Delay              [us]     1.87  (4.9)         0.84 (4.7)
 
aLOG                                      LHO 31994


- There is statistically significant evidence for what I'll call "DC flattening" in the sensing function, that we see for the first time with these two measurements because we've tuned the transfer function to have high precision there. This is interesting because our current simplified model of the sensing function (described in detail in LHO aLOG 31665, for example) does *not* include the influence of the test mass's finite stiffness at low frequency. The parametrization we currently use is based on the 2001 Buonanno and Chen Paper (via Evan Hall's thesis work and Kiwamu & Craig's similar derivation), but none of these incorporate a test mass with real dynamics and finite stiffness. The only study I've seen that does do this -- and shows evidence for the response flattening below a certain frequency -- was some work done by Adam Mullavey very early on, in which he used an Optickle simulation with (I believe) the full QUAD dynamical model included -- see pg 5 of G1400064. There are two impacts of this effect on the physical parameter estimation:
    (1) It distorts the lower-frequency fit of the optical spring frequency and Q slightly, which means that there is systematic error (albeit small for this level of detuning) in the sensing function as high as 30 Hz.
    (2) There is a few-tens-of-percent discrepancy in the 5-10 Hz region. However, Adam's study hints that, although the detuned spring frequency increases with power (as we have seen), the restoration-to-flatness frequency does not. I think this makes sense physically: the finite stiffness of the suspension (at least in the longitudinal direction) should not change with power (of course the angular plant does; see e.g. LHO aLOG 25368).
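For context, a commonly used simplified parametrization of the detuned sensing function, consistent with the fitted parameters in the table above, is sketched below. This is a generic form, not necessarily the exact convention used in the fit script; sign and normalization conventions vary between derivations:

```latex
C(f) \;\approx\; H_C\,
  \frac{f^{2}}{\,f^{2} + f_s^{2} - i\, f f_s / Q\,}\;
  \frac{1}{1 + i f / f_{cc}}\;
  e^{-2\pi i f \tau}
```

with $H_C$ the optical gain, $f_{cc}$ the DARM/RSE cavity pole frequency, $f_s$ and $Q$ the detuned SRC optical-spring frequency and quality factor, and $\tau$ the residual time delay. Note that this form still rolls off as $f^{2}$ below the spring frequency, i.e., it contains no mechanism for the "DC flattening" described above, which is the point being made here.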

Other Notes:
- I only re-discovered this today (thanks Shivaraj!): these templates come pre-calibrated. 
The swept-sine calibration is an imported text file that calibrates the DELTAL EXT / PCALY RX transfer function (i.e. not the individual channels) from preER10,
     /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Scripts/ControlRoomCalib/pcal2darm_calib.txt
written by Kiwamu, with the script 
     /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER10/H1/Scripts/ControlRoomCalib/H1_pcal2darm_correction.m
and originally mentioned in . 
The broad-band calibration is an imported text file that calibrates DELTAL EXT from the PreER9 model,
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/ControlRoomCalib/DARM_FOM_calibration_new_20160512.dat
written again by Kiwamu, with the script 
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/ControlRoomCalib/H1DARM_FOM_correction.m

These scripts, regrettably, have a few FIXMEs and are not using the latest DARM model. We should update these scripts, regenerate the ASCII dump of the calibration, and recalibrate this data to confirm that the systematic errors are still small (and in fact they may get smaller, who knows). 

- All data exported from these templates always uses the raw channels' values, so any matlab analysis done in the past is unaffected by the above imported DTT calibrations.

- The time of the broadband injection was about 40 [sec], from Nov 30 2016 02:53:00 to 02:53:40 UTC (enough to get 25 avgs at 0.1 Hz BW, i.e. 10 sec FFTs with 75% overlap). This will be used to make the same comparison of PCAL to GDS-CALIB_STRAIN, especially now that it is correcting for time-dependent systematic errors (see LHO aLOG 31926).

Measurement templates: 
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Measurements/
    PCAL/2016-11-30_H1_PCAL2DARMTF_BB_5to1000Hz.xml
    PCAL/2016-11-30_H1_PCAL2DARMTF_4to1200Hz_fasttemplate.xml
    DARMOLGTFs/2016-11-30_H1_DARM_OLGTF_4to1200Hz_fasttemplate.xml


DARM model / Sensing Function Fit constructed from:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/
H1/Scripts/PCAL/fitDataToC_20161116.m (rev3908)

Common/params/IFOindepParams.conf (rev3829)
H1/params/H1params.conf (rev3855)
H1/params/2016-11-12/H1params_2016-11-12.conf (rev3855)
Images attached to this report
Non-image files attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 19:27, Tuesday 29 November 2016 (31993)
PRCL1 filter not set again - problem fixed in guardian

[Sheila, Jenne, JeffK]

While clearing SDF differences, we caught the same filter-not-on situation that Ansel and Young-Min helped find in alog 31524.  We looked around in the guardian, and Sheila found why they didn't get set, and has fixed the problem.  SDF saves the day!

The problem is that the filters were only being set in the DRMI_Lock_Wait state, so if we gave up on catching the DRMI and went for PRMI, then had a successful transition from PRMI to DRMI without a vertex lockloss, we would never go back to that state, so the filters would never get set.  Sheila has moved the setting of those filters later in the guardian to the offload state that we go through no matter which way we acquire the lock, so this shouldn't be a problem again.

H1 SYS
jenne.driggers@LIGO.ORG - posted 19:25, Tuesday 29 November 2016 - last comment - 11:39, Wednesday 30 November 2016(31992)
Observatory intent bit un-monitored

[Jenne, JimW, JeffK, Sheila, EvanG, Jamie]

We were ready to try hitting the Intent bit, since SDF looked clear, but kept failing.  We were auto-popped out of Observation.  With Jamie on the phone, we realized that the ODCMASTER SDF file was looking at the Observatory intent bit.  When the Observe.snap file was captured, the intent bit was not set, so when we set the intent bit, SDF saw a difference, and popped us out of Observe.  Eeek! 

We have not-monitored the observatory intent bit.  After doing this, we were able to actually set the bit, and stick in Observe. 

Talking with Jamie, it's perhaps not clear that the ODCMASTER model should be under SDF control, but at least we have something that works for now.

Comments related to this report
jameson.rollins@LIGO.ORG - 11:39, Wednesday 30 November 2016 (32029)

I think unmonitoring the intent bit channel is the best thing to do.  I can see why we would like to monitor the other settings in the ODC models.  So I think this is the "right" solution, and no further action is required.

H1 DetChar (CAL, DetChar)
evan.goetz@LIGO.ORG - posted 19:14, Tuesday 29 November 2016 (31991)
Pcal Y laser shuttered - no change in 1080 Hz glitch rate
Evan G., Jeff K.

We shuttered the Pcal Y laser for a short test starting at 02:43:19 Nov 30 2016 UTC (un-shuttered at 02:51:45). This is the final nail in the coffin: the Pcal injection at 1083.7 Hz is **NOT** [edited] causing the 1080 Hz glitching. The glitch rate remains unchanged.

This follows another test from Jeff, turning off the 1083.7 Hz line, but leaving the Pcal laser un-shuttered (see LHO aLOG 31610). This previous test showed that the glitch rate was unchanged when the calibration line injection was turned off.

In the attached image, from -10 to -5 minutes, the Pcal laser is shuttered, but the glitching at 1080 Hz remains.
Images attached to this report
H1 ISC
kiwamu.izumi@LIGO.ORG - posted 18:26, Tuesday 29 November 2016 (31990)
Updated green references for smoother locking

I have updated the green beam references. Next time when the interferometer drops the lock, we should run initial alignment from scratch.

 

[Overview]

I have updated the green beam references (ALS-X/Y_QPD_PIT/YAW_OFFSET and ALS-X/Y_CAM_ITM_PIT/YAW_OFS) this afternoon in response to alog 31873. We haven't tried locking the interferometer with this new set of references yet, but theoretically it will give us a smoother locking sequence (e.g., less fluctuation in the power recycling gain). The adjustment was done with the interferometer fully locked on DC readout with an input light power of 2 W.

[Automation scripts]

This time Sheila suggested making an automation script so that we don't have to repeat this process by hand in the future. So I made such scripts, which are attached to this entry. These scripts servo the QPD offset points by looking at the green WFS error signals. The servos are based on the cdsutils.servo module and are activated only when the green transmission signals are sufficiently high. The input matrices I chose are very naive (they are diagonal) but were good enough for the individual loops to converge slowly. I didn't really try to tune the gains yet, but one can probably bump up the servo gains if things are too slow. One trick that I re-discovered (and Sheila told me) is that the Y arm WFSs have a local minimum where the WFS error signals can go close to zero while the green transmission is not high. I am not sure the script can handle the servos if the misalignment is too large. Today, in fact, I initially fell into this local minimum. I manually brought the QPD offsets to a point where the WFSs are not far from zero and the transmission is high-ish, then adjusted the input matrix and gain in the script, which ran OK afterwards. I didn't have any trouble with the X arm.
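The structure of such a script might look like the following sketch. A plain integrator stands in for cdsutils.servo, the reader/writer callables stand in for EPICS channel access, and the gain and threshold values are illustrative, not the ones used in the attached scripts:

```python
def qpd_offset_servo(read_wfs, read_trans, read_offset, write_offset,
                     gain=-0.1, trans_threshold=0.8, steps=200):
    """Slowly servo a QPD offset to null a green WFS error signal.

    The loop only acts when the green transmission is sufficiently
    high, as described above; otherwise it holds the current offset.
    """
    for _ in range(steps):
        if read_trans() < trans_threshold:
            continue                      # arm not well locked; hold
        err = read_wfs()
        write_offset(read_offset() + gain * err)

# Toy plant: WFS error proportional to (offset - ideal offset), so the
# integrator should converge the offset toward the ideal value.
state = {"offset": 0.5}
ideal = -0.2
qpd_offset_servo(
    read_wfs=lambda: state["offset"] - ideal,
    read_trans=lambda: 1.0,
    read_offset=lambda: state["offset"],
    write_offset=lambda v: state.__setitem__("offset", v),
)
print(round(state["offset"], 3))
```

The local-minimum caveat in the log corresponds to a plant where the WFS error crosses zero away from the ideal offset, which a simple integrator like this cannot escape on its own.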

Non-image files attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:36, Tuesday 29 November 2016 (31988)
CDS maintenance summary, Tuesday 29th November 2016

WP6352 Update CDS matlab license server

Carlos:

Matlab licenses were updated on the server.

WP6357 Add ERROR channels to cell phone alarm texter

Chandra, Patrick, Dave:

For each Beckhoff VAC and FMCS channel monitored by the cell phone alarm texter program, their equivalent ERROR channel was added to the monitor list.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 17:30, Tuesday 29 November 2016 (31987)
CDS ER10 Summary, Monday 21st - Monday 28th November 2016

model restarts logged for Mon 28/Nov/2016
2016_11_28 14:40 h1alsex
2016_11_28 14:40 h1alsey
2016_11_28 14:42 h1lsc
2016_11_28 14:42 h1susprocpi
2016_11_28 14:44 h1broadcast0

2016_11_28 14:44 h1dc0
2016_11_28 14:44 h1fw0
2016_11_28 14:44 h1fw2
2016_11_28 14:44 h1nds0
2016_11_28 14:44 h1nds1
2016_11_28 14:44 h1tw1
2016_11_28 14:45 h1fw1

Maintenance work. Removed DAQ channels for als(ex,ey), lsc and susprocpi. Added gds chans to broadcaster. Restart DAQ.

model restarts logged for Thu 24/Nov/2016 - model restarts logged for Sun 27/Nov/2016 No restarts reported

model restarts logged for Wed 23/Nov/2016
2016_11_23 09:10 h1tw0
2016_11_23 09:40 h1tw0
2016_11_23 09:41 h1tw0
2016_11_23 12:58 h1tw0

completed offloading raw min trends from tw0.

model restarts logged for Tue 22/Nov/2016
2016_11_22 00:03 h1tw0
   ...
2016_11_22 10:23 h1tw0

2016_11_22 11:55 h1calex
2016_11_22 11:55 h1caley
2016_11_22 11:56 h1calcs
2016_11_22 11:57 h1susetmxpi
2016_11_22 11:57 h1susitmpi

2016_11_22 11:59 h1dc0
2016_11_22 11:59 h1susetmypi
2016_11_22 12:01 h1broadcast0
2016_11_22 12:01 h1fw0
2016_11_22 12:01 h1fw1
2016_11_22 12:01 h1fw2
2016_11_22 12:01 h1nds0
2016_11_22 12:01 h1nds1
2016_11_22 12:01 h1tw1
2016_11_22 12:03 h1fw1

2016_11_22 12:03 h1omcpi
2016_11_22 12:04 h1fw1
2016_11_22 12:05 h1fw1
2016_11_22 12:11 h1fw1

many h1tw0 restarts (not shown) as raw minute trends are offloaded. Maintenance Tuesday. CAL and PI model changes with DAQ restart.

model restarts logged for Mon 21/Nov/2016
2016_11_21 22:29 h1tw0
...
2016_11_21 23:58 h1tw0

many unexpected restarts (not shown) of h1tw0 as raw minute trends are offloaded.

H1 CDS
david.barker@LIGO.ORG - posted 17:15, Tuesday 29 November 2016 (31986)
displaying daily lock log

TJ's VerbalAlarms system maintains an H1 lock log (with lock segment numbers). I'm not sure if there is an easy way to display this, so I wrote a simple script called show_lock_log

show_lock_log
-------------------------------------------------------------------------
 
Displaying file: /ligo/logs/VerbalAlarms/Lock_times/2016/11/lock_log_11_29_2016.txt
 
Lock# 74
Times are in: GPS (UTC)
Start time: 1164415514.0  (Nov 29 00:44:57 UTC)
  Itbit Engage: [0]
  Itbit Disengage: [0]
End time: 1164415525.0  (Nov 29 00:45:05 UTC)
Total length: 11.0 , 0hr 0min 11sec
Total Science: 0 , 0hr 0min 0sec


Lock# 75
Times are in: GPS (UTC)
Start time: 1164421049.0  (Nov 29 02:17:12 UTC)
  Itbit Engage: [0]
  Itbit Disengage: [0]
End time: 1164421057.0  (Nov 29 02:17:19 UTC)
Total length: 8.0 , 0hr 0min 8sec
Total Science: 0 , 0hr 0min 0sec


Lock# 76
Times are in: GPS (UTC)
Start time: 1164463135.0  (Nov 29 13:58:38 UTC)
  Itbit Engage: [0, 1164465119.0]
  Itbit Disengage: [0, 1164470781.0]
End time: 1164474658.0  (Nov 29 17:10:38 UTC)
Total length: 11523.0 , 3hr 12min 3sec
Total Science: 5662.0 , 1hr 34min 22sec



End of Day Summary
Current Status: Commission, Locked since Nov 29 22:05:01 UTC (1164492318.0)
Total Day Locked: 5hr 7min 24sec [21.3%] (18444/86400)
Total Day Science: 1hr 34min 22sec [6.6%] (5662/86400)
 
done
 

Note that unlike iLIGO, segment numbers are not limited to locks exceeding 5 minutes in length. End of day is defined in UTC, which means 16:00 PST (or 17:00 PDT).
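The summary numbers are simple ratios over an 86400 s UTC day. A sketch of the arithmetic the script performs (the helper names are mine, not from show_lock_log):

```python
def duty_cycle(seconds_in_state, seconds_in_day=86400):
    """Percent of the day spent in a given state, as the summary reports."""
    return 100.0 * seconds_in_state / seconds_in_day

def hms(seconds):
    """Render seconds in the log's 'Xhr Ymin Zsec' format."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return "%dhr %dmin %dsec" % (h, m, s)

# Values from the End of Day Summary above.
print(round(duty_cycle(18444), 1), hms(18444))   # -> 21.3 5hr 7min 24sec
print(round(duty_cycle(5662), 1), hms(5662))     # -> 6.6 1hr 34min 22sec
```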

H1 SEI
hugh.radkins@LIGO.ORG - posted 15:26, Tuesday 29 November 2016 - last comment - 17:41, Tuesday 29 November 2016(31979)
BRSX Status Bits for Trouble shooting

I see now that Jim had already plotted most of these yesterday, but since I had already written this log, I thought I should post it anyway. My explanations and plots are better anyway.

 

The two attached plots show 2 months and 7 days of a number of the status channels from the BRS for diagnosis.  On the 7 day plot, the large steps up on the DRIFTMON (upper left) are when Jim opened the BRS box on Thursday and when I opened it yesterday--this is a thermal step and recovery.

The CBIT indicates if the C# code is running. This code reads the camera light image.

The AMPBIT is an indication of the BRS beam swing amplitude. Looking at the 60 day image indicates the PEM crew really got the beam disturbed, as it has not been in the past 60 days until last Wednesday. Jim plots 90 days and shows this happened ~21 Sept, but things did not get as far out of whack as they did last Wednesday.

The drop outs (to zero) of the CBIT and CAMERA channels are indications of stopping the code as we attempt to get it running again.

The BOXBIT drops from 1 when the Beckhoff communications to the BRS box are disrupted.  Not sure why it occurred for Jim and on 8 Nov (no log spotted) but not for me yesterday.

The DAMPBIT just indicates the set point enabled damping has kicked on.

The LIGHTSRC and MODBIT are not included in these plots as they were unity for the duration, indicating the light source has been fine and the ISI is communicating without problem.  I expected to see a DRIFT_BIT channel but it remained elusive.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 17:41, Tuesday 29 November 2016 (31989)OpsInfo

Tagging (hopefully) Ops, so operators see this. An FRS ticket (6799) has been submitted.

krishna.venkateswara@LIGO.ORG - 17:12, Tuesday 29 November 2016 (31984)

Thanks Hugh, Jim, this is very helpful.

The CBIT dropping for short periods is normal. It indicates a slowdown in C# data processing due to extra cpu usage - usually because of a remote login.

The AMPBIT being low is simply showing the problem that the amplitude is large. This is an 'effect' and not the 'cause'. Other bits are similarly not showing anything unexpected.

The main problem is the BOXBIT, which drops on Nov. 8th, where surprisingly it did no harm. On Nov. 25th it dropped, and I suspect this is what led to the encoder values being corrupted. After yesterday's fix, Hugh told me that the values were corrupted again; I checked the BOXBIT and see another drop yesterday evening (see attached pic). I suspect the problem is either an intermittent power supply to the BRS enclosure or a loose ethernet cable to the box. Or the Beckhoff motor controller may be failing and need to be replaced.

If the encoder values are corrupt, the BRS-X damper may not work and people should avoid going close to BRS-X while it is being used tonight.

Images attached to this comment