Reports until 00:15, Sunday 11 December 2016
LHO General
corey.gray@LIGO.ORG - posted 00:15, Sunday 11 December 2016 (32431)
EVE Operator Summary

TITLE: 12/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 72.7965Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:

Lock has been going 31+ hrs, with the range drifting slightly down from 75 to 70 Mpc over the last 8-10 hrs.  The useism has sort of flattened out just under the 90th percentile.  

LOG:

H1 GRD
corey.gray@LIGO.ORG - posted 22:26, Saturday 10 December 2016 - last comment - 07:02, Sunday 11 December 2016(32435)
Continuing Investigation Into "sysecaty1plc2" SDF Issue Knocking Us Out Of OBSERVING

Cheryl observed an instance of H1 being dropped out of OBSERVING due to SDF changes tracked down to the Computer/SDF Node:  sysecaty1plc2

(This was for the Yarm.  We noticed this same issue a week ago for the analogous Computer/Node for the Xarm:  sysecatx1plc2.)

I continued the work of figuring out which pesky channel is dropping us out of OBSERVING.  The first thing I did was look at the (3) channels Keita found for X last week to see whether the Y-arm counterparts changed today---found nothing in dataviewer.  I then ran the scripts Cheryl ran and came up with the same result: a change in the channel H1:FEC-1031_SDF_DIFF_CNT.  But this is just a bookkeeping channel SDF uses, not the culprit itself.  

I then just went to where sysecaty1plc2 is on medm.  This is related to Beckhoff, so maybe the channel can be tracked down by snooping around medm land.  To get to a baseline/starting point, I went to:

SITE MAP / SYS / EtherCAT overview / H1 Y1 PLC2 /

From here you have several different subsystems (Als, Asc, Isc, Lsc, Sys).  So, I went through all of these subsystems and the screens nested within them.  The first thing I did was to find the "*_ERROR_FLAG" status box for each subsystem (it's green for all, and I reckon if there was a change to the system, it would go red).  So I grabbed this channel for all the subsystems mentioned above, and the only one which changed when we dropped from OBSERVING was the Als one.  I then played the same game--go into the nested windows within and trend "*_ERROR_FLAG" channels for each component within Als.  Ultimately, I ended up finding a single channel which had activity around the time in question.  It was found here:
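The subsystem sweep above could also be scripted rather than trended by hand. A minimal sketch, with the caveat that the "*_ERROR_FLAG" channel-name pattern below is a hypothetical stand-in for illustration (the real names come from the MEDM screens):

```shell
# Hedged sketch: loop over the H1 Y1 PLC2 subsystems and read each one's
# error-flag channel.  The channel-name pattern here is an assumption;
# take the actual names from the MEDM screens.
for sub in ALS ASC ISC LSC SYS; do
  ch="H1:${sub}-Y1PLC2_ERROR_FLAG"   # hypothetical naming
  echo "${ch}"
  # caget "${ch}"   # uncomment on a CDS workstation with EPICS tools
done
```

With `caget` uncommented, any subsystem whose flag reads non-green/non-zero would stand out immediately instead of requiring a trend per screen.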

SITE MAP / SYS / EtherCAT overview / H1 Y1 PLC2 / Als / Y / Fibr / Lock / Temperaturecontrols  (i.e. H1ALS_Y1PLC2_Y_FIBR_LOCK_TEMPERATURECONTROLS.adl)

And on this medm, the channel in question is:  H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON

I'm not saying this is the ONLY channel which could be the culprit for the OBSERVING drop, but this is one I saw drop out at that time (see attachment #1), BUT there is a caveat.  If I look at 20 min before the drop, the ALS channel in question had some similar drop-outs (see attachment #2).  For the earlier one, the drops only lasted about 10 sec (attachment #3).  For the drops which took us out of OBSERVING (attachment #1), we dropped out of OBSERVING after 15 sec of drops (& overall the ALS ON switch went off/on for about 40 sec).  So maybe the SDF changes have to persist for a certain amount of time before latching us out of OBSERVING?

As another check, I looked at the last 12hrs of this lock and the only time H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON had these fits of turning OFF for a handful of seconds were in that 20min time period when we dropped out. 

Question:  Is this enough to warrant NOT MONITORING H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON?  Or should we keep searching?

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 07:02, Sunday 11 December 2016 (32437)
This is enough to unmonitor that channel. To find out all activities that might be relevant, read the very last section of Cheryl's alog and run the lockloss script for EY EtherCAT PLC2.
H1 CDS
patrick.thomas@LIGO.ORG - posted 20:51, Saturday 10 December 2016 (32434)
Installed Conlog, but leaving stopped
WP 6385

I installed Conlog on the conlog-master and conlog-replica machines. I am going to leave it stopped and not acquiring data until I finalize the channel list with Dave. The work permit should be left open.
H1 ISC (ISC, OpsInfo)
corey.gray@LIGO.ORG - posted 18:49, Saturday 10 December 2016 (32433)
A2L Quality Over 25+Hour Lock: Looking at Pitch

Keita made a template to passively measure the Quality of A2L for H1 & he said it would be good to look at this over a long period (alog#32106).  We are currently on a 25+ Hour Lock.  During this lock:

I ran Keita's template once an hour during the lock.  About 16 hrs after the 1st A2L, I saw the DARM & ASC coherence for PITCH begin to increase again, and then I ran the 2nd A2L.  (I mainly focused on PITCH because it was the DOF which showed the most change.)

After both A2Ls, I didn't really notice an improvement in range.  I did notice a better-looking DARM spectrum (on nuc3).  

Question:  If the DARM & ASC Coherence increases like this, do we want to run the A2L?  Is this the right thing to do?  Or do we just let the Coherence increase?

Attached is a look at the PITCH coherence for every hour during this lock (with A2L moments marked).

Images attached to this report
H1 General
corey.gray@LIGO.ORG - posted 17:12, Saturday 10 December 2016 (32432)
1:00-1:04 Out Of Observing: A2L & SDF Table Select.

Timeline for A2L on current 24+hr lock:

SDF Channel File Changed

Also wanted to take this time to look at the channels involved with SDF node which dropped Cheryl out of Observing during her shift.  (Do we really want to be dropped out of OBSERVING for looking at some SDF channel files??)

OBSERVATORY_MODE

Took to "CALIBRATION" for this 4mins of downtime from OBSERVING

LHO General
corey.gray@LIGO.ORG - posted 16:45, Saturday 10 December 2016 (32429)
Transition To EVE

TITLE: 12/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 74.5957Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.47 μm/s
QUICK SUMMARY:

Can see a slight increase in the "secondary useism" over the last 12hrs. 

Looking at Violin Mode peaks on DARM, we have:

H1 PSL (PSL)
richard.savage@LIGO.ORG - posted 16:25, Saturday 10 December 2016 (32428)
Pre-modecleaner repair and tests of all-bolted PMC prototypes

LiyuanZ, PeterK, BetsyW, AlenaA, RickS (with support from EddieS, CalumT, StephenA, DennisC, GarilynnB, MalikR, et al.)

Some time ago, LLO removed PMC SN08 from operation due to a glitchy PZT.  

We recently removed and replaced the PZT and the curved mirror it actuates, using an original-style PZT ordered by Pking and a spare mirror from the original PMC mirrors (supposedly) provided by BennoW.

We characterized the losses in the cavity using a setup in the LHO "Triples Lab" (upper floor of Staging Building) that utilizes an NPRO and three Pcal-style integrating spheres and associated photodetectors (see LIGO-T1600204-v3).

We made some improvements to our measurement setup that preclude direct comparisons of the estimated losses before and after replacing the M4 mirror, but we estimate that the total round-trip losses were reduced by about a factor of two by replacing the one mirror (investigations of other highly contaminated PMCs indicate that the PZT is the source of the contaminants and that the mirror bonded to the PZT is the most contaminated).

Our current best estimate of the average losses per mirror for this cavity is about 60 ppm (see attached table).

We were surprised to find that the transmitted light level through M4 is about 40 times smaller than through M3.  The spec for the M3 and M4 transmission is 60 ppm, and we calculate the M3 transmission to be about 65 ppm, but the M4 transmission of only 1.6 ppm is a mystery.  It seems it was not from the same coating run, as we had expected.  However, discussions with DanielS indicated that this might be acceptable.  The M4 transmitted light is used for the ISS path and is typically attenuated by about a factor of 100 on the PSL table.

We also tested a new concept for fabricating the PMCs that relies on machining tolerances for setting the orientation of the four cavity mirrors and eliminates all gluing from the assembly.  Two original PMC bodies were re-machined at a local machine shop in a single setup with the hope of achieving relative accuracy between the points on which the cavity mirrors register on the level of 5 micrometers.

We assembled both "all-bolted" prototypes under a clean bench by mounting the mirrors against three balls that register at the bottoms of counterbores in the aluminum bodies and holding the mirrors (and the sandwiched PZT) in place using off-the-shelf SS flexures (see attached photos).  We used mirrors recently procured by PeterK from ATF.

We discovered that there was an error in the coating of the new PMC flat mirrors; the transmission is only 2,400 ppm when it was supposed to be 24,000 ppm.  Thus the cavity finesse is 10x higher than desired.  While this won't work for the PSL, it aids in measuring the mirror losses.  The results of two measurements are tabulated in the attached table for the S/N10 body.  The average losses per mirror are estimated to be about 11 ppm.  We have not measured the losses for the other "all-bolted" cavity yet.
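As a rough check of the factor-of-10 statement (a sketch, assuming the finesse is set almost entirely by the two flat couplers of transmission T and that all other losses are negligible):

```latex
% Finesse of a cavity whose round-trip loss L_rt is dominated by two
% couplers of power transmission T:
\[
  \mathcal{F} \;\approx\; \frac{2\pi}{L_\mathrm{rt}}
  \;\approx\; \frac{2\pi}{2T} \;=\; \frac{\pi}{T}
\]
\[
  T = 24{,}000\ \mathrm{ppm} \;\Rightarrow\; \mathcal{F} \approx 130,
  \qquad
  T = 2{,}400\ \mathrm{ppm} \;\Rightarrow\; \mathcal{F} \approx 1300,
\]
% i.e. roughly 10x the intended finesse.
```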

These measurements confirm the ability to machine the bodies to the required tolerances.  We will test the PZT performance as best we can in our lab setup when time allows.

Images attached to this report
Non-image files attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 15:50, Saturday 10 December 2016 (32426)
Ops Day Summary:

TITLE: 12/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 73.9225Mpc
INCOMING OPERATOR: Corey
Assistance:

SHIFT SUMMARY:

H1 General (DetChar)
cheryl.vorvick@LIGO.ORG - posted 11:29, Saturday 10 December 2016 - last comment - 11:59, Monday 12 December 2016(32420)
H1 kicked out of Observe, back in Observe
Comments related to this report
cheryl.vorvick@LIGO.ORG - 11:50, Saturday 10 December 2016 (32421)

Corey suggested looking at DIAG_SDF log, and there is activity that coincides with H1 going out of Observe:

19:19:45UTC - H1 out of Observe, and DIAG_SDF shows:

  • 2016-12-10T19:19:44.81946 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecaty1plc2: 1
  • 2016-12-10T19:19:47.78952 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecaty1plc2: 1
  • 2016-12-10T19:19:52.26944 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecaty1plc2: 1
     

Now, how do I know what "USERMSG 0: DIFFS: sysecaty1plc2: 1" is?

cheryl.vorvick@LIGO.ORG - 11:56, Saturday 10 December 2016 (32422)

Keita's alog 32134 - instructions on how to look for channels that changed

cheryl.vorvick@LIGO.ORG - 12:12, Saturday 10 December 2016 (32423)

My bad - while investigating I looked at SDF, and that kicked H1 out of Observe:

  • 20:08:47UTC - H1 out of Observe
  • 20:09:21UTC - H1 back in Observe
  • no change to H1 config

DIAG_SDF log:

  • 2016-12-10T20:08:44.09793 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecatc1plc2: 4
cheryl.vorvick@LIGO.ORG - 15:12, Saturday 10 December 2016 (32425)
  • I ran the scripts in Keita's alog 32134 and did not find the channel that kicked H1 out of Observe
  • emailed Keita
  • He wrote some new files to hunt for the channel that kicked H1 out of Observe
  • those files are in his directory, in LockLoss/SDFERRORS
  • command is
    • > for ii in SDFLIST*.txt; do lockloss -c ${ii} plot -w '[-10, 10]' gpstime; done
    • For this event I used gpstime = 1165432802
  • channel that kicked H1 out of Observe is
    • H1:FEC-1031_SDF_DIFF_CNT
    • it toggled 3 times and that agrees with what I found in the DIAG_SDF log
  • the next step is to identify the Front End, by middle-mousing on the SDF diff count
    • the Front End responsible is EY ECAT PLC2
    • is it possible that sysecaty1plc2 is sys-ecat-y1-plc2?
Images attached to this comment
cheryl.vorvick@LIGO.ORG - 16:06, Saturday 10 December 2016 (32427)
  • Email from Keita about searching for the exact channel(s) that took H1 out of Observe.
  • I ran them once and didn't see a clear plot of which channel.
  • Corey's going to run them again and see if he comes up with something different.

From Keita:

I took
/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap

and stripped unnecessary information, split into 20 line chunks and
put them here:
/ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2

Could you again run the lockloss tool by
for ii in ecaty1plc2*; do lockloss -c ${ii} plot -w '[-10,10]' gpstime; done
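The preparation step Keita describes (strip the snap file down to channel names, split it into 20-line chunks, then run the lockloss tool over each chunk) can be sketched as below. A mock snap file stands in here for the real /opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap, and the assumed format (channel name in the first column) is an illustration:

```shell
# Sketch of the chunking procedure.  Generate a mock snap file whose first
# column is the channel name (stand-in for the real OBSERVE.snap).
printf 'H1:ALS-TEST_CH_%03d 1.0 0x0\n' $(seq 1 45) > OBSERVE.snap

# Keep only the channel-name column and split into 20-line chunks
# (45 channels -> ecaty1plc2_aa, _ab, _ac).
awk '{print $1}' OBSERVE.snap | split -l 20 - ecaty1plc2_

ls ecaty1plc2_*

# Then run the lockloss tool chunk by chunk, as in the email:
# for ii in ecaty1plc2_*; do lockloss -c ${ii} plot -w '[-10,10]' 1165432802; done
```

Plotting 20 channels per page keeps each lockloss plot legible while still covering every monitored channel in the snap file.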
 

keita.kawabe@LIGO.ORG - 11:59, Monday 12 December 2016 (32479)

This morning (Monday Dec 12) I ran the lockloss script and I can see that H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON was flipping (see attached, second column from the left). Other things like LASER_HEAD_CRYSTALFREQUENCY, CRYSTALTEMPERATURE and VCO_TUNEOFS were also changing, but these were not monitored.

Anyway, it's strange that this was not found when Cheryl and Corey ran lockloss tool. Maybe NDS2 was misbehaving?

Just to make sure, what I did is:

cd /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2

for ii in ecaty1plc2_a*; do lockloss -c ${ii} plot -w '[-10, 10]' 1165432802; done

Images attached to this comment
H1 AOS (TCS)
cheryl.vorvick@LIGO.ORG - posted 10:15, Saturday 10 December 2016 (32419)
TCSY chiller flow glitches
Images attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 09:21, Saturday 10 December 2016 (32418)
Earthquake Report: near Solomon Islands, Papua New Guinea
H1 IOO (IOO, SUS)
cheryl.vorvick@LIGO.ORG - posted 09:12, Saturday 10 December 2016 (32417)
current lock 14:25UTC to 16:25UTC - some interesting optic behavior

H1 has been locked 15+ hours, and I ran some dataviewer trends looking at 2 hours, and found some optic alignment changes that I think are interesting, and some that might show up in DARM.

Images attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 08:18, Saturday 10 December 2016 (32416)
Ops Day Shift Transition

TITLE: 12/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 70.6121Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 19mph Gusts, 14mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.38 μm/s
QUICK SUMMARY:

LHO General
corey.gray@LIGO.ORG - posted 00:38, Saturday 10 December 2016 - last comment - 16:38, Saturday 10 December 2016(32404)
EVE Operator Summary

TITLE: 12/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Jim
SHIFT SUMMARY:

Once I sat down at H1, I was able to get to NLN with no issues.  We have had a handful of EY saturations during this lock.  Seismic trends are steady & winds are getting quiet.

LOG:

Comments related to this report
richard.oram@LIGO.ORG - 05:40, Saturday 10 December 2016 (32411)
Corey, please consult with Gary regarding what he has trained the LLO operators to select during A2L, so that both observatories are consistent in its use.
corey.gray@LIGO.ORG - 16:38, Saturday 10 December 2016 (32430)SUS

Forgot to note that while handing off to Jim last night, he did have to make some quick adjustments to MODE28 (in the 8-9 UTC hour).

H1 ISC
daniel.sigg@LIGO.ORG - posted 10:07, Wednesday 07 December 2016 - last comment - 15:30, Monday 12 December 2016(32306)
Jitter Coherence

With the ASC IMC model now running at 16384 Hz, we look at the coherence of jitter as measured by the IMC WFS and other channels up to 7.4 kHz. Not sure we can conclude anything except that pointing errors contaminate everything.

We can compare this with an older 900-Hz bandwidth measurement from alog 31631 which was taken before the piezo peak fix (alog 31974).

Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 13:45, Wednesday 07 December 2016 (32316)

Note that the 1084 Hz thing doesn't have coherence with the IMC WFS.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 04:42, Sunday 11 December 2016 (32436)
Can you check the DC sum channels for the IMC WFS as well? They are the ones that hVeto keeps finding as related to the 1080 Hz noise, and they see a modulation in the noise rather than a steady spectrum.
keita.kawabe@LIGO.ORG - 15:30, Monday 12 December 2016 (32486)

Done; again nothing for the bump in question, though there are coherence bumps for f > 1100 Hz and f < 800 Hz.

Images attached to this comment