LHO General
corey.gray@LIGO.ORG - posted 01:51, Tuesday 13 December 2016 (32496)
Transition To OWL

TITLE: 12/13 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 72.9978Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 10mph Gusts, 8mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY:

8:11-9:30UTC (12:11am-01:30am PST):  No Operator Coverage (due to the winter weather this week, I'll be taking the OWL shift for the next 3 nights).

Arrived to find H1 in OBSERVING, and a quick scan suggests we have optimal conditions (range is at 70Mpc). Glancing at the range over the last 12hrs, it has an ~8hr period of rising to 75Mpc and falling back to 70Mpc (now), and I don't see anything seismic that matches it (useism has trended steadily down over the last 24+hrs).  There are also a few more glitches (i.e. drops in range) in this lock.

Low Useism:  Seismically, we appear to be a tad under the 50th percentile for useism (a personal comparison note: at the 90+ percentile over the weekend you could definitely see useism waves/oscillations on the Tidal striptool, whereas now it is virtually flat).

Violin Modes:  The fundamental (~500Hz) looks to be just above 1e-19 on DARM.  The second harmonic (~1kHz) is just above 1e-15; this is the only notable feature on the "H1Glitches (DMT Omega)" tool on nuc0 (perhaps this is something which could be damped at an opportune time [Maintenance Day?]).

Talked with Fyffe @ LLO to let them know they are no longer flying solo.

Weather Conditions:  No precipitation & cloudy.

Road Conditions:  (driving in via Twin Bridges) Other than my driveway at home & the driveway on-site, the roads were dry and clear (able to easily drive the speed limit); made it to the site in the standard 20-25min.  I did come across a herd of deer right near the bridges.

LHO General
patrick.thomas@LIGO.ORG - posted 00:09, Tuesday 13 December 2016 - last comment - 00:11, Tuesday 13 December 2016(32494)
Ops Evening Shift Summary
TITLE: 12/13 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 70.922Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY: Locked for over 10 hours. Only out of observing to run a2l. Two GRB alerts.
There will not be an owl shift operator due to inclement weather. I will leave the IFO in observing unattended. I have allowed remote access for the users that Keita requested.
LOG:
21:44 UTC At NLN. Had to set LOCKLOSS_SHUTTER_CHECK node request to HIGH_ARM_POWER in order to go to observing. Have 'TCS_HWS: HWSY Code Stopped Running' diag message.
21:55 UTC Observing
22:13 UTC Out of observing to run a2l. '/opt/rtcds/userapps/release/isc/common/scripts/decoup/a2l_min_LHO.py'
22:19 UTC Observing
23:00 UTC Changed phase for PI mode 26
23:27 UTC Requested INJECT_KILL for the INJ_TRANS guardian
00:35 UTC Requested INJECT_SUCCESS for the INJ_TRANS guardian
01:50 UTC Changed sign of gain for PI mode 28
07:05 UTC GRB alert. LLO not locked.
Comments related to this report
patrick.thomas@LIGO.ORG - 00:11, Tuesday 13 December 2016 (32495)

Nutsinee has requested that someone reload the DIAG_MAIN guardian if the IFO loses lock or we go out of observing (see whiteboard).

LHO General
patrick.thomas@LIGO.ORG - posted 20:17, Monday 12 December 2016 - last comment - 11:40, Tuesday 13 December 2016(32491)
Ops Evening Mid Shift Summary
Have remained locked at NLN since 21:44 UTC.
Stood down for a GRB at 23:20 UTC. It appeared that the injections were not automatically blocked for the GRB, so I set the INJ_TRANS guardian node to INJECT_KILL manually. Requested INJECT_SUCCESS at 00:35 UTC.
Damped PI modes 26 and 28.
Ran a2l near beginning of lock.
Comments related to this report
keita.kawabe@LIGO.ORG - 21:10, Monday 12 December 2016 (32492)

About the GRB, it turns out that the guardian was doing the right thing.

The alert was received at 2016-12-12 23:20:22 UTC, but the actual GRB event was at 2016-12-12 15:38:59, hours before the alert.

Since the alert came after our standard window of one hour, the guardian didn't block the injections.
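For reference, a minimal sketch of that one-hour window logic in Python (hypothetical names and structure; not the actual INJ_TRANS guardian code):

# Hypothetical sketch of the one-hour external-trigger window described above.
GRB_WINDOW = 3600  # seconds

def should_react_to_alert(event_gps, alert_gps):
    """Pause injections only if the alert arrives within the window after the event."""
    latency = alert_gps - event_gps
    return 0 <= latency <= GRB_WINDOW

# 2016-12-12 example: event at 15:38:59 UTC, alert at 23:20:22 UTC -> latency ~7.7 hr,
# well outside the window, so no reaction was expected.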

patrick.thomas@LIGO.ORG - 23:13, Monday 12 December 2016 (32493)
We just got another GRB alert at 07:05 UTC. This time the event time is around the same time as the created time. The INJ_TRANS guardian again did not change state. Perhaps this is because LLO is not locked?
keita.kawabe@LIGO.ORG - 11:40, Tuesday 13 December 2016 (32516)

Patrick and I looked at the trend of H1:GRD-INJ_TRANS_STATE_N, and the GRB actually did make the guardian transition to state 30 (EXTTRIG_ALERT_ACTIVE) for an hour (attached).

The GRB event is this one: https://gracedb.ligo.org/events/view/E265603
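For anyone reproducing this kind of check, a rough gwpy sketch (assuming NDS access from a control-room workstation; not the exact commands used here):

from gwpy.timeseries import TimeSeries

# Guardian state number around the 2016-12-13 07:05 UTC alert.
state = TimeSeries.get('H1:GRD-INJ_TRANS_STATE_N',
                       '2016-12-13 06:30', '2016-12-13 08:30')

plot = state.plot()
plot.gca().set_ylabel('INJ_TRANS state number')
plot.savefig('inj_trans_state_trend.png')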

Images attached to this comment
H1 General
keita.kawabe@LIGO.ORG - posted 15:50, Monday 12 December 2016 (32488)
OBSERVE.snap needs to be used in OBSERVE

Some of the SDF systems use safe.snap instead of OBSERVE.snap (attached) during observation, which doesn't make sense. I don't remember which FECs should be on safe and which should be on OBSERVE.

For the sake of consistency, if the settings don't change for a specific FEC, just copy safe.snap to OBSERVE.snap and use that all the time.
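A hedged sketch of what that copy might look like, assuming the snap files sit next to each other under each model's target ...epics directory (as in the path quoted elsewhere on this page); the model list is a placeholder:

import shutil
from pathlib import Path

TARGET = Path('/opt/rtcds/lho/h1/target')
# Placeholder list: only models whose settings really don't change between
# the safe and observing configurations should be handled this way.
MODELS = ['h1sysecaty1plc2sdf']

for model in MODELS:
    snapdir = TARGET / model / (model + 'epics')
    src = snapdir / 'safe.snap'
    dst = snapdir / 'OBSERVE.snap'
    if src.exists():
        shutil.copyfile(src, dst)  # back up the existing OBSERVE.snap first
        print('copied %s -> %s' % (src, dst))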

Images attached to this report
H1 TCS (TCS)
nutsinee.kijbunchoo@LIGO.ORG - posted 14:48, Monday 12 December 2016 (32483)
HWS glitched last Friday - Restarted without issue

Dave suggested that if HWSY induces glitches in HWSX again, maybe we can do a cable-swap test to see if the effect is reversed. However, both cameras ran fine simultaneously after a computer restart, so the test didn't happen.

 

I did a little follow-up to see if there's any pattern to these glitches. The HWSX and HWSY codes used to be able to run together for >10 days prior to Ubuntu 14. These glitches happened on random days (not every maintenance Tuesday, for instance) and not necessarily in any particular order (sometimes HWSX stopped running first, sometimes they both stopped at the same time). This time I left only HWSX running to see if the code would run for a much longer time. If there's still an issue, this might suggest that the problem is not with the PCIe card.

 

The attached plot shows the 10-minute trend of HWSX and HWSY spherical power over a hundred-some days, including the time prior to Ubuntu 14 (installed Nov 4, 2016). A flat line indicates that the HWS code stopped writing data (due to a camera glitch). Sorry for the messed-up time axes; Dataviewer doesn't always plot nicely when I ask for too much data. The hand-written times on the data after Ubuntu 14 were obtained from another second-trend plot (not shown).
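As a side note, a hypothetical way to flag such flat-line stretches programmatically (the channel name is a placeholder for the real HWS spherical-power record, and minute-trend access over NDS is assumed):

from gwpy.timeseries import TimeSeries

# Placeholder channel name; substitute the actual HWS spherical-power channel.
CHAN = 'H1:TCS-ITMY_HWS_PROBE_SPHERICAL_POWER.mean,m-trend'

data = TimeSeries.get(CHAN, '2016-11-04', '2016-12-12')

# When the HWS code stops writing, the channel just holds its last value,
# so look for an hour-long run of exactly repeated minute-trend samples.
vals = data.value
run = 0
for i in range(1, len(vals)):
    run = run + 1 if vals[i] == vals[i - 1] else 0
    if run == 60:
        print('flat line starting near GPS %d' % int(data.times.value[i - run]))
        break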

Images attached to this report
H1 PEM (OpsInfo)
jeffrey.bartlett@LIGO.ORG - posted 13:55, Monday 12 December 2016 - last comment - 15:01, Monday 12 December 2016(32482)
Dust Monitor Alarm Levels
After consulting with Peter K., I have adjusted the PSL dust monitor alarm levels to those listed in the US-209E Cleanroom Standards chart. This has relaxed the 0.3um particle alarm levels a bit. Will continue to monitor the dust counts in the PSL. We are continuing to look into the causes of the elevated counts in the enclosure and what impact dust contamination could have on the laser system.

There is a chance the dust monitor is out of calibration. I will swap out the PSL-101 unit as soon as I can get access to the PSL enclosure.
Non-image files attached to this report
Comments related to this report
kyle.ryan@LIGO.ORG - 15:01, Monday 12 December 2016 (32484)
I recall back in the "early days" that we had dust monitors that seemed to be sensitive to dry air.  If I remember correctly, static electricity was blamed?
LHO General
patrick.thomas@LIGO.ORG - posted 13:40, Monday 12 December 2016 (32481)
Ops Evening Shift Start
TITLE: 12/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s 
QUICK SUMMARY: Starting early to cover end of Cheryl's shift. On the way to NLN.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 11:39, Monday 12 December 2016 (32477)
O2 CDS report, Friday 2nd December - Sunday 11th December 2016

No restarts on any of these days except for Mon 05/Dec/2016

model restarts logged for Mon 05/Dec/2016
2016_12_05 11:17 h1ascimc
2016_12_05 11:20 h1lsc
2016_12_05 11:20 h1odcmaster
2016_12_05 11:20 h1susmc1
2016_12_05 11:20 h1susmc2
2016_12_05 11:20 h1susmc3

2016_12_05 11:51 h1broadcast0
2016_12_05 11:51 h1dc0
2016_12_05 11:51 h1fw0
2016_12_05 11:51 h1fw1
2016_12_05 11:51 h1fw2
2016_12_05 11:51 h1nds0
2016_12_05 11:51 h1nds1
2016_12_05 11:51 h1tw0
2016_12_05 11:51 h1tw1

Maintenance Monday: sped up the h1ascimc model to 16kHz with associated model restarts, added channels to the broadcaster with an associated DAQ restart.

The /ligo file system is showing occasional freeze-ups; investigation continues into cdsfs0's RAID controller.

H1 General
cheryl.vorvick@LIGO.ORG - posted 10:25, Monday 12 December 2016 - last comment - 15:17, Monday 12 December 2016(32474)
Ops Day Transition

TITLE: 12/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 0.0Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Comments related to this report
cheryl.vorvick@LIGO.ORG - 11:01, Monday 12 December 2016 (32476)

19:00UTC (11AM) Update:

  • locked DRMI but alignment was not good enough to survive ASC
  • started an Initial Alignment
  • Fast Shutter is in error; I can't close or open it, and I have eliminated all guardian issues
  • Fil is going out to the floor to check on the power supply
terra.hardwick@LIGO.ORG - 11:52, Monday 12 December 2016 (32478)

Mode26 rang up: this is one that is known to require some phase tweaking over long locks. Due to road conditions, there wasn't an operator here at the time so the usual phase changes didn't happen. 

cheryl.vorvick@LIGO.ORG - 13:19, Monday 12 December 2016 (32480)

My day shift summary - have to hand off at 21:15UTC (1:15PM PT)

  • first Initial Alignment got to DRMI but did not survive ASC
  • set some optics back to alignments of 12/10 00:20-00:30UTC, before the 34 hour lock
  • did an Initial Alignment
  • this alignment is currently at ENGAGE REFL WFS
  • handing off to Patrick

Activities:

  • 19:30UTC - Christine, roll up door - complete
  • 19:50UTC - Kyle to MY - not sure if he's still at MY
  • 20:58UTC - Rick, to LVEA to get a spectrum analyzer - complete

 

corey.gray@LIGO.ORG - 15:17, Monday 12 December 2016 (32485)ISC, OpsInfo

Note about the Fast Shutter issue above: it was in Error, and Cheryl resolved it.  Just wanted to make a note about this since it will happen again.  Sometimes the HAM6 Fast Shutter trips.  When it does, we have a command in our ISC_LOCK scripts which tests the Fast Shutter, so until we get to that Fast Shutter Test, the shutter will stay in this Error state.  Fearing the Shutter was down, Fil went out to check on the Power Supply for the Fast Shutter--it was on and operational.

Once Cheryl took H1 to a state where the test was run, the Error went away.  This was marked for about 10min of downtime as CORRECTIVE MAINTENANCE, and an FRS (#6917) was filed.

H1 General (DetChar)
cheryl.vorvick@LIGO.ORG - posted 11:29, Saturday 10 December 2016 - last comment - 11:59, Monday 12 December 2016(32420)
H1 kicked out of Observe, back in Observe
Comments related to this report
cheryl.vorvick@LIGO.ORG - 11:50, Saturday 10 December 2016 (32421)

Corey suggested looking at the DIAG_SDF log, and there is activity that coincides with H1 going out of Observe:

19:19:45UTC - H1 out of Observe, and DIAG_SDF shows:

  • 2016-12-10T19:19:44.81946 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecaty1plc2: 1
  • 2016-12-10T19:19:47.78952 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecaty1plc2: 1
  • 2016-12-10T19:19:52.26944 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecaty1plc2: 1
     

Now, how do I know what "USERMSG 0: DIFFS: sysecaty1plc2: 1" is?

cheryl.vorvick@LIGO.ORG - 11:56, Saturday 10 December 2016 (32422)

Keita's alog 32134 - instructions on how to look for channels that changed

cheryl.vorvick@LIGO.ORG - 12:12, Saturday 10 December 2016 (32423)

My bad - while investigating, I looked at the SDF and kicked H1 out of Observe:

  • 20:08:47UTC - H1 out of Observe
  • 20:09:21UTC - H1 back in Observe
  • no change to H1 config

DIAG_SDF log:

  • 2016-12-10T20:08:44.09793 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: sysecatc1plc2: 4
cheryl.vorvick@LIGO.ORG - 15:12, Saturday 10 December 2016 (32425)
  • I ran the scripts in Keita's alog 32134 and did not find the channel that kicked H1 out of Observe
  • emailed Keita
  • He wrote some new files to hunt for the channel that kicked H1 out of Observe
  • those files are in his directory, in LockLoss/SDFERRORS
  • command is
    • > for ii in SDFLIST*.txt; do lockloss -c ${ii} plot -w '[-10, 10]' gpstime; done
    • for this event I used gpstime = 1165432802
  • channel that kicked H1 out of Observe is
    • H1:FEC-1031_SDF_DIFF_CNT
    • it toggled 3 times and that agrees with what I found in the DIAG_SDF log
  • the next step is to identify the Front End by middle-mousing on the SDF diff count
    • the Front End responsible is EY ECAT PLC2
    • is it possible that sysecaty1plc2 is sys-ecat-y1-plc2?
Images attached to this comment
cheryl.vorvick@LIGO.ORG - 16:06, Saturday 10 December 2016 (32427)
  • Email from Keita about searching for the exact channel(s) that took H1 out of Observe.
  • I ran them once and didn't get a clear plot showing which channel it was.
  • Corey's going to run them again and see if he comes up with something different.

From Keita:

I took
/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap

and stripped unnecessary information, split into 20 line chunks and
put them here:
/ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2

Could you again run the lockloss tool by
for ii in ecaty1plc2*; do lockloss -c ${ii} plot -w '[-10,10]' gpstime; done
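For the record, one plausible way to produce chunked channel lists like this (hypothetical filtering and filenames; not necessarily how these particular chunks were made):

# Hypothetical sketch: strip a snap file down to channel names and split it
# into 20-line chunk files that the lockloss tool can take with -c.
src = '/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap'

channels = []
with open(src) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith('#'):
            channels.append(line.split()[0])  # keep only the channel name

for n in range(0, len(channels), 20):
    with open('ecaty1plc2_%02d.txt' % (n // 20), 'w') as out:
        out.write('\n'.join(channels[n:n + 20]) + '\n')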
 

keita.kawabe@LIGO.ORG - 11:59, Monday 12 December 2016 (32479)

This morning (Monday Dec 12) I ran the lockloss script, and I can see that H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON was flipping (see attached, second column from the left). Other things like LASER_HEAD_CRYSTALFREQUENCY, CRYSTALTEMPERATURE, and VCO_TUNEOFS were also changing, but these were not monitored.

Anyway, it's strange that this was not found when Cheryl and Corey ran the lockloss tool. Maybe NDS2 was misbehaving?

Just to make sure, what I did is:

cd /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2

for ii in ecaty1plc2_a*; do lockloss -c ${ii} plot -w '[-10, 10]' 1165432802; done

Images attached to this comment
H1 ISC
daniel.sigg@LIGO.ORG - posted 10:07, Wednesday 07 December 2016 - last comment - 15:30, Monday 12 December 2016(32306)
Jitter Coherence

With the ASC IMC model now running at 16384 Hz, we look at the coherence between jitter as measured by the IMC WFS and other channels, up to 7.4 kHz. Not sure we can conclude anything except that pointing errors contaminate everything.

We can compare this with an older 900-Hz bandwidth measurement from alog 31631 which was taken before the piezo peak fix (alog 31974).

Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 13:45, Wednesday 07 December 2016 (32316)

Note that the 1084Hz thing doesn't have coherence with the IMC WFS.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 04:42, Sunday 11 December 2016 (32436)
Can you check the DC sum channels for the IMC WFS as well? They are the ones that hVeto keeps finding as related to the 1080 Hz noise, and they see a modulation in the noise rather than a steady spectrum.
keita.kawabe@LIGO.ORG - 15:30, Monday 12 December 2016 (32486)

Done; again nothing for the bump in question, though there are coherence bumps for f > 1100Hz and f < 800Hz.

Images attached to this comment
H1 General (DetChar, OpsInfo)
edmond.merilh@LIGO.ORG - posted 18:43, Saturday 03 December 2016 - last comment - 15:47, Monday 12 December 2016(32152)
H1 Intention Bit Temporarily out of Observe

02:26UTC Verbal Alarm said the Intention Bit was set to Commissioning. Checking the SDF, I found that the EX ECAT PLC2 was blinking. Below is a screen cap of the channel that was blinking. DIAG_MAIN was also showing a message in its log, seen in the other screenshot. The channel listed in the SDF and the message in DIAG_MAIN correlate in terms of time of occurrence and subsystem (ALS). Some troublesome ALS channels were unmonitored last night by Keita; looking back at his aLog, this one appears to be a straggler. I was able to un-monitor this channel and reset the Intention Bit.

02:27UTC Intention bit set to Observe

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 15:47, Monday 12 December 2016 (32487)GRD, ISC

Good Catch, Ed.  

I am going to paste the channel name in here so it can come up during a search (because we had the same issue with the Y-arm a week later).  The channel in question here (which is in Ed's snapshot) is:

H1:ALS-X_FIBR_LOCK_TEMPERATURECONTROLS_ON

H1 CAL (CAL)
alexander.urban@LIGO.ORG - posted 15:23, Friday 02 December 2016 - last comment - 16:54, Monday 12 December 2016(32117)
Evolution of PCAL-to-DARM residuals with and without time-dependent corrections to h(t)

As requested by Jeff and the calibration review committee, I've done a number of checks related to tracking the behavior of PCAL lines in the online-calibrated strain. (Most of these checks accord with the "official" strain curve plots contained in https://dcc.ligo.org/DocDB/0121/G1501223/003/2015-10-01_H1_O1_Sensitivity.pdf) I report on these review checks below.

I started by choosing a recent lock stretch at LHO that includes segments in which the H1:DMT-CALIBRATED flag is both active and inactive (so that we can visualize the effect of both gated and ungated kappas on strain, with the expected behavior that gstlal_compute_strain defaults each kappa factor to its last computed median if ${IFO}:DMT-CALIBRATED is inactive). There is a 4-hour period from 8:00 to 12:00 UTC on 30 November 2016 that fits the bill (see https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161130/). I re-calibrated this stretch of data in --partial-calibration mode without kappas applied, and stored the output to

LHO: /home/aurban/O2/calibration/data/H1/

All data were computed with 32 second FFT length and 120 second stride. The following plots are attached:

The script used to generate these plots, and a LAL-formatted cache pointing to re-calibrated data from the same time period but without any kappa factors applied, are checked into the calibration SVN at https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Runs/PreER10/H1/Scripts/TDkappas/. A similar analysis on a stretch of Livingston data is forthcoming.
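To give a feel for the quantity being tracked, here is a rough, uncalibrated sketch of a single-segment PCAL-to-DARM ratio at the 36.5 Hz line (channel names and times are assumptions, and the calibration factors applied in the real analysis are omitted; this is not the SVN script above):

import numpy as np
from gwpy.timeseries import TimeSeries

DARM = 'H1:CAL-DELTAL_EXTERNAL_DQ'   # assumed DARM readback channel
PCAL = 'H1:CAL-PCALY_RX_PD_OUT_DQ'   # assumed PCAL receiver channel
LINE = 36.5                          # Hz, lowest PCAL line

def line_value(ts, freq):
    """Complex FFT value at the bin nearest `freq`."""
    fs = ts.fft()
    idx = np.argmin(np.abs(fs.frequencies.value - freq))
    return fs.value[idx]

# One 32 s segment from the 2016-11-30 stretch discussed above.
darm = TimeSeries.get(DARM, '2016-11-30 08:00:00', '2016-11-30 08:00:32')
pcal = TimeSeries.get(PCAL, '2016-11-30 08:00:00', '2016-11-30 08:00:32')

ratio = line_value(darm, LINE) / line_value(pcal, LINE)
print('uncalibrated PCAL-to-DARM ratio at %.1f Hz: %.3g' % (LINE, abs(ratio)))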

Images attached to this report
Comments related to this report
alexander.urban@LIGO.ORG - 16:54, Monday 12 December 2016 (32490)

I have re-run the same analysis over 24 hours of Hanford data spanning the full UTC day on December 4th (https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161204/), during which time LHO was continuously locked. This time the lowest-frequency PCAL line has a PCAL-to-DARM ratio that improves when kappas are applied, which is the expected behavior. This suggests that whatever was going on in the November 30 data, where the 36.5 Hz line briefly strayed to having worse agreement with kappas applied, was transient -- but the issue may still be worth looking into.

Images attached to this comment