Reports until 02:00, Saturday 03 December 2016
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 02:00, Saturday 03 December 2016 - last comment - 06:54, Saturday 03 December 2016(32137)
6.0M earthquake in Alaska

09:53 UTC Seeing that Terramon predicted 6.7 um/s, I hurried and switched the SEI config to EARTH_QUAKE_V2 as the earthquake arrived on the FOM. However, that was before I realized that what I saw was already the peak at ~1 um/s and it was already turning back down. We could have ridden through this earthquake. Livingston is also down.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 02:06, Saturday 03 December 2016 (32138)

After the lockloss, the ISI ST2 configs went back to SC_B. I'm switching them back to SC_A.

nutsinee.kijbunchoo@LIGO.ORG - 03:50, Saturday 03 December 2016 (32139)

And I got this EZCA connection error that comes and goes but didn't seem to affect the locking. Still struggling to lock as of now. Kept losing lock after the "Turn on BS ST2" step.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 04:03, Saturday 03 December 2016 (32140)

Got past DRMI, but then lost lock at CARM_15PM. Somewhere in between, the fast shutter tripped the HAM6 ISI.

nutsinee.kijbunchoo@LIGO.ORG - 05:01, Saturday 03 December 2016 (32141)

Still losing lock at DRMI_LOCKED. I don't know why SR2 would be moving so much. Because PRMI and DRMI were able to lock with okay counts, I don't really believe it needs an initial alignment. I'm going to try one real quick anyway. If that doesn't help, I'm going to try turning off the SRC2 loop during the next DRMI_LOCKED. I briefly went through Jenne's troubleshooting document. I'll be watching out for the ASC error signals next time.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 06:33, Saturday 03 December 2016 (32142)

I zeroed all the SRC error signals before engaging DRMI ASC, following Jenne's troubleshooting document. I was finally able to move on. Sorry... I should have thought of it much sooner.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 06:54, Saturday 03 December 2016 (32143)

Back to Observe again at 14:52 UTC. I accepted the SDF diff for the ETMX bounce mode damping gain. The bounce mode is still high, but it's slowly damping. I didn't increase the gains.

Images attached to this comment
H1 General
edmond.merilh@LIGO.ORG - posted 00:05, Saturday 03 December 2016 (32136)
Shift Activity
TITLE: 12/03 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 73.642Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
  • H1 has been locked for ≈ 5hr10min
  • There were only occasional small glitches.
  • Spoke with Mike Fyffe at LLO when I noticed that GWIstat showed them not OK. He was damping some violin mode(s).
  • LLO dropped out of lock at ≈ 06:43 UTC and was down for 1 hour.
  • µseism appears to be trending slightly downward.
  • The LOCKING_ALS and CHECK_IR states had to be manually advanced during locking.
LOG:
04:04 IY, IX, EY, EX saturations
04:46 EY saturation
07:45 EY saturation

H1 General (SEI)
edmond.merilh@LIGO.ORG - posted 20:00, Friday 02 December 2016 (32130)
Mid-Shift Summary - Eve


01:06UTC I reverted to the original SEI configuration for the BSCs, and locking attempts continue.

01:34UTC Multiple locklosses at FIND_IR. Switching back to the new configs just to see if I can get past this stage.

01:50UTC The new SEI configs got me into DRMI land. MICH alignment looks pretty terrible. Trending the witness sensors is not revealing much. Going to try some alignment with the INITIAL_ALIGNMENT guardian. Not going to do the arms if I can help it.

02:00UTC INITIAL_ALIGNMENT/INPUT_ALIGN not working. After trying to raise the gain to get a lock, I decided to trend IM4 (via TimeMachine). I found it to be about 100 urad out in pitch; correcting that seems to have fixed the issue. Perhaps this less-than-optimal pointing has been the sore spot in my locking efforts? Also had to put SR2 back to center before aligning the SRC.

04:00UTC H1 locked; observing range is ~72 Mpc. Lock stretch so far: 01:08, coincident with L1.

Images attached to this report
H1 General (DetChar)
edmond.merilh@LIGO.ORG - posted 19:10, Friday 02 December 2016 - last comment - 19:23, Friday 02 December 2016(32133)
Intent bit set to OBSERVE @ 03:08UTC
Comments related to this report
edmond.merilh@LIGO.ORG - 19:23, Friday 02 December 2016 (32135)

Damping the bounce mode on ETMX caused an SDF diff that brought the intent bit to COMMISSIONING (03:13 UTC). I accepted the diff and returned H1 to Observing (03:15).

Images attached to this comment
H1 General
edmond.merilh@LIGO.ORG - posted 19:10, Friday 02 December 2016 (32132)
H1 back to Nominal Low Noise

02:51UTC - NLN 75.3Mpc

Keita is working on some ALS diffs that were knocking the intent bit loose. With the assistance of Jim W (by telephone), I set some BS gain channels to be unmonitored so that I could set the intent bit.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 17:38, Friday 02 December 2016 (32097)
Ops Day Shift Summary

TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 73.9893Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY:

Locked for a good chunk of the shift, but useism & wind grew pretty noisy in the afternoon, to the point where locking was going to be tough.

LOG:

H1 GRD
corey.gray@LIGO.ORG - posted 17:38, Friday 02 December 2016 - last comment - 19:25, Friday 02 December 2016(32131)
GRB Verbal Alarm 23:13utc

At 23:13 we received a "Gamma Ray Burst" Alarm.

Following L1500117:

THEN:

At 23:21 we were bumped out of OBSERVING!  (Sheila noticed it; we didn't hear it for some reason during all the hubbub of figuring out the deluge of GraceDB reports.)  It wasn't obvious why we were bumped out of OBSERVING. But since we tend to get bumped out due to SDF diffs/changes, Jim W looked at the DIAG_SDF node's log. There he noticed we were getting DIFFs for the SDF node sysecatx1plc2.

Unfortunately, when the diffs appeared they only lasted a few seconds (knocking us out of OBSERVING) and then went away, so it was hard to see which channels were in question. It happened several times, and we managed to catch a glance at the channels. Words we saw in the channel names were: ALS, Fiber, polarization. Since they deal with the ALS, we can probably safely set these channels to NOT_MONITOR (but we should get a blessing from Keita). Or, if you are in a fix in the middle of the night and are able to figure out which channels these are, be sure to note the channels, NOT_MONITOR them for the night, and then alog what you did.

Epilogue:

We ended up losing lock at 23:56--it's really noisy seismically with useism & wind.

Comments related to this report
keita.kawabe@LIGO.ORG - 19:25, Friday 02 December 2016 (32134)

Regarding the strange SDF error of h1sysecatx1plc2:

I used a brute-force method: I copied the OBSERVE.snap file, stripped unnecessary information (including but not limited to non-ALS channels), divided it into 20-line chunks, and fed them to the lockloss tool for a [-10,+10] second window centered at the first time the IFO was kicked out (GPS 1164756098).

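The chunking step above can be sketched in Python. This is a minimal illustration, not the actual script Keita used; the file and directory names are hypothetical, chosen to match the test*.txt pattern used by the lockloss loop in the follow-up instructions.

```python
# Sketch (hypothetical names) of splitting a stripped channel list into
# 20-line files test00.txt, test01.txt, ..., one per lockloss-tool run.
from pathlib import Path

def chunk_channel_list(src, out_dir, chunk_size=20, prefix="test"):
    """Split the file at src into chunk_size-line files named <prefix>NN.txt."""
    lines = Path(src).read_text().splitlines(keepends=True)
    out_files = []
    for i in range(0, len(lines), chunk_size):
        p = Path(out_dir) / f"{prefix}{i // chunk_size:02d}.txt"
        p.write_text("".join(lines[i:i + chunk_size]))
        out_files.append(p)
    return out_files
```

Each resulting file can then be passed to the lockloss tool with its -c option, as in the shell loop given in the instructions below.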
There are three channels that changed 2 seconds prior to the event (first attachment):

H1:ALS-X_FIBR_SERVO_IN1GAIN

H1:ALS-X_FIBR_LOCK_TEMPERATURECONTROLS_ERRORSIGNAL

H1:ALS-X_FIBR_LOCK_TEMPERATURECONTROLS_POLARITY

Based on this, I looked at the fiber PLL lock status and, sure enough, there was a large glitch in the PLL (the smaller peak on ch3 of the second attachment). After 3 seconds or so the autolocker started to "relock" by lowering the gain and such; there was a huge swing in the beat note, and it relocked. Nothing suspicious in the polarization, RF, or DC levels during this.

It's not clear why this happened, but this is just the end-station PLL, which is not used for anything during OBSERVE, and there's no sign of an RF disaster going on at the end station.

I went ahead and unmonitored these three channels in OBSERVE in sdf.

I didn't unmonitor ALL end station ALS channels.

If this happens next time for other X end ALS channels:

  • Go to /ligo/home/keita.kawabe/Lockloss
  • for ii in test*.txt; do lockloss -c ${ii} plot -w '[-10,10]' gpstime; done
  • (gpstime is the time the IFO was kicked out of observation.)
  • A plot window opens; note which channel changed, close the plot, and the next window opens. You'll go through 14 plot windows.
  • Go to sdf screen of H1SYSECATX1PLC2SDF, and look at the unmonitored channel list.
  • The channel that changed but is not in the unmonitored list is the new offender. Change the SDF view to 'all', find the offending channel, click 'MON', and accept.
  • Back to OBSERVE.
  • Make alog.
Images attached to this comment
LHO VE
chandra.romel@LIGO.ORG - posted 16:43, Friday 02 December 2016 (32129)
CP3 overfill

4:20pm local

Took 22 min. to overfill CP3 by raising the LLCV to 50% open from the control room. I raised the nominal value from 16% to 17% for the weekend.

H1 General
edmond.merilh@LIGO.ORG - posted 16:20, Friday 02 December 2016 - last comment - 16:21, Friday 02 December 2016(32123)
Shift Summary - Eve Transition
TITLE: 12/03 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    Wind: 25mph Gusts, 19mph 5min avg
    Primary useism: 0.11 μm/s
    Secondary useism: 0.82 μm/s 
QUICK SUMMARY:
H1 lost lock just after I got here. I spoke with Jim briefly in the parking lot about his work permit to try some new seismic trickery to help mitigate the very high uSeism. Trying to relock is presenting some green-arm power issues that may prove to be a bit unmanageable given the current environmental conditions.
Comments related to this report
edmond.merilh@LIGO.ORG - 16:21, Friday 02 December 2016 (32127)

We had been in a GRB stand-down period from 23:13 UTC before H1 lost lock.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:15, Friday 02 December 2016 (32125)
CDS O2 and ER10 report, Tuesday 29th November - Thursday 1st December 2016

O2 days 1,2:

model restarts logged for Thu 01/Dec/2016 No restarts reported

h1guardian was restarted at 08:50PDT, its RAM was increased from 12GB to 48GB.

model restarts logged for Wed 30/Nov/2016 No restarts reported

ER10:

model restarts logged for Tue 29/Nov/2016 No restarts reported

H1 General
thomas.shaffer@LIGO.ORG - posted 16:14, Friday 02 December 2016 - last comment - 16:18, Friday 02 December 2016(32124)
Operator Lock Summary

Lock# 8
Times are in: GPS (UTC)
Start time: 1164672438.0  (Dec 2 00:07:01 UTC)
  Itbit Engage: [0, 1164672969.0, 1164673578.0, 1164674812.0, 1164700372.0, 1164745711.0]
  Itbit Disengage: [0, 1164673508.0, 1164674325.0, 1164700288.0, 1164745010.0, 1164747085.0]
End time: 1164747085.0  (Dec 2 20:51:02 UTC)
Total length: 74647.0 , 20hr 44min 7sec
Total Science: 72774.0 , 20hr 12min 54sec


Lock# 9
Times are in: GPS (UTC)
Start time: 1164750245.0  (Dec 2 21:43:48 UTC)
  Itbit Engage: [0, 1164750308.0, 1164756280.0, 1164756461.0, 1164757097.0]
  Itbit Disengage: [0, 1164756100.0, 1164756401.0, 1164756773.0, 1164758206.0]
End time: 1164758206.0  (Dec 2 23:56:26 UTC)
Total length: 7961.0 , 2hr 12min 41sec
Total Science: 7334.0 , 2hr 2min 14sec



End of Day Summary
Current Status: Unlocked
Total Day Locked: 22hr 56min 48sec [95.6%] (82608/86400)
Total Day Science: 22hr 15min 8sec [92.7%] (80108/86400)


 

This is the first trial of a daily post by the day-shift operator summarizing the locks for the UTC day. This information is created by (userapps)/sys/h1/scripts/VerbalAlarms/Lock_Logging.py, which runs in the background of the VerbalAlarms program. The lock numbering was started at the beginning of O2 and is used only as a reference in the control room. The idea behind these daily posts is to give the control room something to reference. The lock clock displayed in the control room will soon show the current lock number to match these posts.
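The tallies above can be reproduced from the intent-bit engage/disengage GPS lists. The sketch below shows the assumed arithmetic (it is not the actual Lock_Logging.py code); the leading 0 entries in the lists are treated as placeholders and skipped.

```python
# Minimal sketch (assumed logic, not Lock_Logging.py itself) of deriving
# total lock time and total science time from intent-bit GPS lists.
def lock_totals(start, end, engage, disengage):
    """Return (total_seconds, science_seconds) for one lock stretch."""
    # Science time is the sum of engage->disengage intervals, skipping
    # the placeholder 0 entries.
    science = sum(d - e for e, d in zip(engage, disengage) if e and d)
    return end - start, science

def hms(seconds):
    """Format a duration as 'Xhr Ymin Zsec'."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}hr {m}min {s}sec"

# Lock #9 from the summary above:
total, science = lock_totals(
    1164750245.0, 1164758206.0,
    [0, 1164750308.0, 1164756280.0, 1164756461.0, 1164757097.0],
    [0, 1164756100.0, 1164756401.0, 1164756773.0, 1164758206.0],
)
# total   == 7961.0  -> "2hr 12min 41sec"
# science == 7334.0  -> "2hr 2min 14sec"
```

These values match the Lock #9 entry above, which is a useful sanity check on the logged engage/disengage lists.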

Any questions, please let me know. Hopefully it's helpful!

Comments related to this report
thomas.shaffer@LIGO.ORG - 16:18, Friday 02 December 2016 (32126)OpsInfo

I forgot to tag OpsInfo

H1 CDS
david.barker@LIGO.ORG - posted 16:03, Friday 02 December 2016 - last comment - 16:34, Friday 02 December 2016(32122)
running A2L script from MEDM

Jenne, Jim W, Dave:

In order to run the a2l_min_LHO.py script from an MEDM shell launcher, I had to explicitly define full paths for all files used. Jenne created a testWrite.py which I used to test this out.

The MEDM command block in the A2L_screen.adl file (callable from SITEMAP via the SUS pulldown) is

        command[0] {
                label="A2L Script"
                name="xterm"
                args="-hold -e 'cd /opt/rtcds/userapps/trunk/isc/common/scripts/decoup; /usr/bin/python /opt/rtcds/userapps/trunk/isc/common/scripts/decoup/a2l_min_LHO.py'"
        }
 

Comments related to this report
jenne.driggers@LIGO.ORG - 16:34, Friday 02 December 2016 (32128)

This is used on the A2L screen that Jim made, accessible from the SUS tab on the sitemap.  You can now run the A2L script by just clicking the button on the screen. 

The first 3 times that this was used / tested, the A2L ran successfully and wrote the appropriate EPICS values, but the text files that summarize the results (so we don't have to trend for the data) didn't get written when the script was run from medm.  Dave's work was to fix this.

I plan to trend the data and hand-create the summary files for the three times they didn't get written, so that it's easier to run the beam position calculation script without losing these data points:

  • around 30 Nov 2016, 05:15:00 UTC
  • around 30 Nov 2016, 06:47:00 UTC
  • around 2 Dec 2016, 20:26:00 UTC