H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 01:35, Saturday 04 February 2017 - last comment - 01:54, Saturday 04 February 2017(33885)
Lockloss ~9:17 UTC

Nothing obvious except for the excessive ground motion in the 3-10 Hz band. A chunk of snow fell off? I didn't hear anything in the control room.

 

Verbal alarm also crashed.

NameError: global name 'month_num' is not defined
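
For reference, this kind of NameError appears when a function refers to a module-level name that was never assigned. Below is a minimal sketch of the failure pattern and the obvious fix; the names here (speak_date, MONTH_NAMES) are only illustrative and are not the actual verbal alarms code.

MONTH_NAMES = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
               'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

def speak_date(day, month_name):
    # The original error means month_num was referenced before any assignment;
    # computing it here from the month name avoids the NameError.
    month_num = MONTH_NAMES.index(month_name) + 1
    return '{:02d}/{:02d}'.format(month_num, day)

print(speak_date(4, 'Feb'))   # -> '02/04'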

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 01:54, Saturday 04 February 2017 (33886)

9:54 Back to Observe

LHO General
patrick.thomas@LIGO.ORG - posted 00:07, Saturday 04 February 2017 (33883)
Ops Eve Shift Summary
TITLE: 02/04 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: The TCS_ITMY_CO2 guardian node transitioned back and forth between LASER_UP and FIND_LOCK_POINT a few times, knocking us out of observing. It seems stable now. One lock loss, with no real issues reacquiring other than a small tweak to the end Y TMS alignment to improve the green arm power. I restarted verbal alarms after the lock loss as TJ had requested.
LOG:

06:39 UTC restarted video2
06:55 UTC Changed phase to damp PI mode 28
07:00 UTC Changed phase and sign of gain to damp PI mode 27
08:01 UTC Changed sign of gain to damp PI mode 28
H1 General
patrick.thomas@LIGO.ORG - posted 22:49, Friday 03 February 2017 (33882)
Observing
06:49 UTC
H1 General
patrick.thomas@LIGO.ORG - posted 22:06, Friday 03 February 2017 - last comment - 22:07, Friday 03 February 2017(33880)
Lock loss
06:03 UTC Cause unknown
Comments related to this report
patrick.thomas@LIGO.ORG - 22:07, Friday 03 February 2017 (33881)
Restarted verbal alarms as requested by TJ.
LHO General
patrick.thomas@LIGO.ORG - posted 20:20, Friday 03 February 2017 (33879)
Ops Eve Mid Shift Report
The TCS_ITMY_CO2 guardian node seems to have settled. No other issues seen.
H1 General
patrick.thomas@LIGO.ORG - posted 18:35, Friday 03 February 2017 - last comment - 08:18, Saturday 04 February 2017(33875)
TCSCS SDF took us out of observing
02:27 UTC The TCS_ITMY_CO2 guardian node has transitioned to FIND_LOCK_POINT. The number of SDF differences is varying. One instance of them is attached.
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 18:41, Friday 03 February 2017 (33876)
The guardian node has returned to LASER_UP. However there are a number of SDF differences remaining. I have accepted them (screenshot attached). Set back to observing at 02:40 UTC. Just got kicked back to commissioning again. Same issue...
Images attached to this comment
patrick.thomas@LIGO.ORG - 18:51, Friday 03 February 2017 (33877)
Trying again. Accepted SDF differences attached.
Images attached to this comment
patrick.thomas@LIGO.ORG - 19:11, Friday 03 February 2017 (33878)
Got kicked out of observing again while I was out of the room. Setting back to observing. SDF differences attached.
Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 01:14, Saturday 04 February 2017 (33884)TCS

I've attached some plots of the event to compare with what happened last time.

Images attached to this comment
alastair.heptonstall@LIGO.ORG - 08:18, Saturday 04 February 2017 (33887)

This looks like it started with the spike in the lsrpwr_hd_pd channel.  That is the measurement channel for the laser output power that is used to stabilize the laser.

There is then a corresponding correction to the PZT position, and a change in current to the laser associated with this move.  After that the slower temperature change happens to bring the PZT voltage back to the middle of its range.  This all happens in the first 1/3 of the plots shown here.  By the middle of the plot, the laser is unlocked and trying to relock.

Firstly, I suspect that spike in laser power that triggered this may not be real.  We should take a closer look at it, but it may be related to the other spikes and jumps you're seeing on the Y-arm laser.

Secondly I think we should revisit the intentions for this laser locking system.  It is meant to keep the power of the laser relatively stable, not to kick us out of observation mode.
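
For context, here is a toy sketch of the two-stage loop described above: a fast PZT path absorbs the error, and a slow temperature path steers the PZT voltage back toward mid-range. All gains, limits and names are made up for illustration; this is not the actual TCS CO2 laser servo code.

PZT_MID, PZT_RANGE = 50.0, 100.0      # illustrative volts

def servo_step(error, pzt_volts, temp_setpoint, k_pzt=5.0, k_temp=0.01):
    # Fast path: the PZT corrects the lock point immediately.
    pzt_volts -= k_pzt * error
    # Slow path: temperature takes over the DC part of the correction,
    # pulling the PZT voltage back toward the middle of its range.
    offload = k_temp * (pzt_volts - PZT_MID)
    temp_setpoint += offload
    pzt_volts -= offload
    # If a large spike drives the PZT to a rail anyway, the laser drops
    # lock and a FIND_LOCK_POINT-style search has to start over.
    unlocked = not (0.0 <= pzt_volts <= PZT_RANGE)
    return pzt_volts, temp_setpoint, unlocked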

LHO General
corey.gray@LIGO.ORG - posted 18:02, Friday 03 February 2017 (33863)
DAY Operator Summary

TITLE: 02/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:

Locked for over 31 hrs.  Winter weather rolled in during the afternoon & we should last until morning.

LOG:

LHO General
patrick.thomas@LIGO.ORG - posted 17:00, Friday 03 February 2017 (33874)
Ops Eve Shift Transition
TITLE: 02/04 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    Wind: 10mph Gusts, 9mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.31 μm/s 
QUICK SUMMARY:

Locked for over 31 hours.
H1 SUS (SUS)
corey.gray@LIGO.ORG - posted 15:15, Friday 03 February 2017 (33873)
PI Mode Homework

After the PI Mode activity on Wednesday night, and after Keita asked me to measure the modes, I went through an exercise of looking at some of the PI Modes, mainly as my own "homework", since I've luckily not had to change filters to damp these out (changing the phase has worked for me, and on rarer occasions the gain; perhaps I've been lucky).

While going over the PI Damping wiki (& also having Jim & Nutsinee show me), I went about exploring the frequencies of the PI Modes which either (1) have the PLL on and/or (2) have a phase value.  For these PI Modes, I looked at the basic BP filter frequency, the peak in the Bode plot, the BP notch value, and a measurement taken today.

Here's what I recorded:

(All frequencies in Hz.)

Mode     Basic BP Filter   Peak in Bode Plot   BP Notch Value   Meas Today
MODE1    14985             186.6 (14986.6)     185              14985.6
MODE2    15520             717.8 (15517.8)     720              15521.6
MODE3    15606             807.2 (15607.2)     806.5            15607.4
MODE9    14980             177.8 (14977.8)     180              14980.6
MODE10   15518             717.8 (15517.8)     718              15516.9
MODE17   15542             743.0 (15543.0)     742.2            15542.9 or 15543.5
MODE18   15008             207.0 (15007.0)     207.9            15008.5
MODE25   15541             741.2 (15541.2)     741.2            15542.9
MODE26   15010             209.9 (15009)       209.5            15010
MODE27   18043             237.7 (18037)       237.5            18037.5 & 18040.2
MODE28   18059             254.7 (18055)       255              18055.2
 
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 13:11, Friday 03 February 2017 (33870)
Earthquake Report: 5.6, Martinique
LHO VE
logbook/robot/script0.cds.ligo-wa.caltech.edu@LIGO.ORG - posted 12:10, Friday 03 February 2017 - last comment - 13:45, Friday 03 February 2017(33867)
CP3, CP4 Autofill 2017_02_03
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 326 seconds. LLCV set back to 12.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 457 seconds. LLCV set back to 37.0% open.
Images attached to this report
Comments related to this report
kyle.ryan@LIGO.ORG - 12:16, Friday 03 February 2017 (33868)
Increased CP3's LLCV to 15% open from 12% and CP4's LLCV to 39% from 37%.  Will likely make another adjustment this afternoon.
kyle.ryan@LIGO.ORG - 13:45, Friday 03 February 2017 (33871)
~2140 hrs UTC -> Reduced CP3's LLCV to 14% from 15% and CP4's to 38% from 39%.
H1 ISC (GRD, ISC, OpsInfo)
corey.gray@LIGO.ORG - posted 10:48, Friday 03 February 2017 - last comment - 12:40, Friday 03 February 2017(33864)
ISC_LOCK.py -LOAD- (& Code Fix & LOAD Again)

16:05-16:31:  H1 Out Of OBSERVING To Correct ISC_LOCK.py code  (Range was 0Mpc during this time!)

Yesterday during my shift, I noticed that the ISC_LOCK log was continually running the same line of code (a few times a second).  I sent an email to the commissioners & JeffK pointed me to the appropriate alog related to this change (Heather/Sheila alog33437).  It sounds like this Reset was being hit continuously in a loop; this is the channel being written:

ISC_LOCK [NOMINAL_LOW_NOISE.run] ezca: H1:OAF-RANGE_RLP_4_RSET => 2

So, yesterday Heather & TJ took a look at these lines of code to fix this (we just wanted this Reset to happen once after a lock), and then the operator was to wait for an appropriate time to hit LOAD on ISC_LOCK.  This morning I saw that L1 was down, so I took H1 out of OBSERVING & hit LOAD, but I received an ERROR for the ISC_LOCK guardian node. 

Instead of delving much into the code, I immediately made a call to the Guardian Help Desk (i.e. Jamie).  I texted him photos of the error message & then he found the issue (it was a missing closing parenthesis).  Once this was corrected, I saved ISC_LOCK.py, hit LOAD on the ISC_LOCK node, & the RED ERROR went away.  I then went back to OBSERVING.

NOTE:  During this time, H1 Range went to 0Mpc, BUT we were still at NLN the entire time.  So our current lock is approaching 24hrs.

Here is the Error which came up on the ISC_LOCK log, when I initially pressed LOAD:

2017-02-03T16:27:08.58122   File "/opt/rtcds/userapps/release/isc/h1/guardian/ISC_LOCK.py", line 3994
2017-02-03T16:27:08.58123     for blrms in range(1,11):
2017-02-03T16:27:08.58123                                               ^
2017-02-03T16:27:08.58124 SyntaxError: invalid syntax
2017-02-03T16:27:08.58134 ISC_LOCK LOAD ERROR: see log for more info (LOAD to reset)

Here are the lines in question from the ISC_LOCK.py code (the line where the closing parenthesis was missing is marked below):

3992        subp.call(['/opt/rtcds/userapps/trunk/isc/h1/guardian/./All_SDF_observe.sh'])<---------  !
3993        #clear history of blrms
3994        for blrms in range(1,11):
3995            ezca['OAF-RANGE_RLP_{}_RSET'.format(blrms)]=2
3996            #ezca['OAF-RANGE_RBP_{}_RSET'.format(blrms)]=2

Comments related to this report
corey.gray@LIGO.ORG - 09:59, Friday 03 February 2017 (33865)GRD

Oh, forgot to mention another change.  When Jamie was looking for issues with line 3994, we did make a change to:

3994        for blrms in range(1,11):

"range" used to be something else, but I didn't capture that before we changed it.  (it was something like "gnumpy.range(1,11):" before).  Will this change anything from what was initially intended?

jameson.rollins@LIGO.ORG - 11:19, Friday 03 February 2017 (33866)

The error condition was caused by a SyntaxError exception in the code.  A simple load of the code will catch these exceptions, so you can easily avoid these load-time errors by parsing the code with e.g. guardutil first:

$ guardutil print ISC_LOCK

If there are any syntax errors in the code, that call to guardutil will catch and print them for you before you hand the code over to the operators.
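
As a generic alternative outside of the guardian tools, Python's own byte-compile step flags the same class of SyntaxError before any LOAD; this is only a hedged sketch, and guardutil remains the better check for guardian modules since it exercises the actual module load rather than just the syntax.

import py_compile

try:
    py_compile.compile('ISC_LOCK.py', doraise=True)
except py_compile.PyCompileError as e:
    print('Syntax problem, fix before LOADing:')
    print(e)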

 

Also, we've been trying to avoid calling out to shell scripts with subprocess calls.  Is there some reason this 'All_SDF_observe.sh' script can't be properly integrated?

I also note that the return code of the script is not being checked, so if the script fails nothing will catch it.  That's not good.  That's one of the main reasons we avoid the subprocess calls.
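
One sketch of what that check could look like, using the same script path that appears in the code excerpt above (the exact error handling a guardian state should use is left open):

import subprocess

ret = subprocess.call(
    ['/opt/rtcds/userapps/trunk/isc/h1/guardian/All_SDF_observe.sh'])
if ret != 0:
    # Surface the failure instead of silently continuing; in guardian code
    # this could instead set a notification or return to a safe state.
    raise RuntimeError('All_SDF_observe.sh failed with exit code %d' % ret)

Equivalently, subprocess.check_call(...) raises CalledProcessError on any non-zero exit, which removes the manual check.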

"range" is similar to "numpy.arange", except it returns a simple python list instead of a numpy.array.  There's no reason to use numpy for this operations.

thomas.shaffer@LIGO.ORG - 12:40, Friday 03 February 2017 (33869)

This was definitely my fault. I'm not sure how, but I somehow deleted that parenthesis when I cut and pasted the blrms code from run to main.

I was also a bit confused when I saw that shell script call. There is a pass directly before it, almost as if someone didn't want the script to actually be run, but a pass doesn't work for that purpose and the script has been run every time.
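
To illustrate why the placement matters: in guardian, a state's main() runs once on entry while run() is polled repeatedly until it returns True, so a channel write left in run() fires a few times a second, which is what flooded the ISC_LOCK log. The sketch below is only a schematic of that structure, not the actual NOMINAL_LOW_NOISE state; ezca is provided by the guardian environment.

from guardian import GuardState

class NOMINAL_LOW_NOISE(GuardState):
    def main(self):
        # One-shot actions belong here: clear the BLRMS range history once
        # after reaching low noise.
        for blrms in range(1, 11):
            ezca['OAF-RANGE_RLP_{}_RSET'.format(blrms)] = 2

    def run(self):
        # Anything placed here repeats every cycle. Note that a bare 'pass'
        # above a statement does not disable it; the statement still executes.
        return True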

H1 SEI (DetChar, SEI)
krishna.venkateswara@LIGO.ORG - posted 16:33, Wednesday 25 January 2017 - last comment - 14:55, Friday 03 February 2017(33648)
Spare STS test at End-Y

Krishna

I took a quick look at the data from the PEM STS at EndY, which is mounted on the BRSY platform. The channels are mentioned in 33533.

The first plot shows the GND STS (used by SEI) and the PEM STS converted to angle units (by multiplying by ω²/g) in comparison to BRSY rX. The wind speed during this time was less than 2-3 m/s. The GND STS sees less signal than the BRS below ~50 mHz, but the PEM STS sees a lot more. The second plot shows the coherence between some channels and the third plot shows the X-direction signal. The Z channel is not recorded, so I can't access it through ligodv-web.

The source of the extra noise in the PEM STS could be (a) insufficient mass centering, or (b) extra temperature noise on either the STS case or the table it sits on; more insulation on the table and the STS case might help...
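
For reference, here is a sketch of the ω²/g scaling mentioned above applied to a displacement ASD, assuming the STS spectrum is calibrated in displacement; the arrays are placeholders, not real data.

import numpy as np

g = 9.81                                    # m/s^2
freq = np.logspace(-3, 0, 200)              # Hz
sts_disp_asd = 1e-9 * np.ones_like(freq)    # m/rtHz, placeholder

omega = 2.0 * np.pi * freq
tilt_equiv_asd = sts_disp_asd * omega**2 / g   # rad/rtHz, comparable to BRS rX

Overlaying tilt_equiv_asd with the BRS rX spectrum, as in the first plot, shows how much of the low-frequency seismometer signal could be tilt or sensor noise rather than real translation.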

Non-image files attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 09:25, Thursday 26 January 2017 (33659)

When allowed (Tuesday?) we could go check the centering and center it up as needed--Hugh

hugh.radkins@LIGO.ORG - 12:04, Thursday 02 February 2017 (33845)

On Tuesday went to EndY to Check on and Center the PEM STS Masses

Upon arrival I checked the mass measurements right away: U, V & W were -3.7, -1.8 & -12 V.  The X axis is generated from U, V & W, but the Y signal comes just from V & W.  See the manual for the axis mapping; a representative mapping is sketched below.

While the W mass is clearly out to lunch, U is also high, >2 volts.  However, since W contributes to both the X & Y signals (but with different weightings), you'd think both X & Y signals would be noisy.  One might even argue that Y (North in the STS manual) would be even worse than X.
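
One representative form of the UVW-to-XYZ mapping for this style of sensor is sketched below (the exact signs and normalization depend on the manual's convention, so treat it as illustrative only). It shows the point above: Y is built only from V and W, while X and Z use all three masses, and in this convention Y weights W more heavily than X does.

import numpy as np

# Representative Galperin-style mapping; check the STS manual for the
# authoritative matrix and sign convention for this unit.
UVW_TO_XYZ = np.array([
    [ 2.0/np.sqrt(6.0), -1.0/np.sqrt(6.0), -1.0/np.sqrt(6.0)],  # X: all three
    [ 0.0,               1.0/np.sqrt(2.0), -1.0/np.sqrt(2.0)],  # Y: V & W only
    [ 1.0/np.sqrt(3.0),  1.0/np.sqrt(3.0),  1.0/np.sqrt(3.0)],  # Z: all three
])

u, v, w = -3.7, -1.8, -12.0     # mass position voltages noted above
x, y, z = np.dot(UVW_TO_XYZ, [u, v, w])
# With W hard against a rail, both X and Y pick up its offset.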

This morning, Thursday ~0930 PST, with an IFO lockloss, we went to the VEA again to check and found the U, V & W masses exactly where I recorded them almost two days ago after nearly two hours of centering attempts: 9.9, 12.9 & -13.5 V.  After recording the voltages this morning, we hit the centering button and left the VEA.

Now see the attached 60 hours of minute trends.  Before the Tuesday mass centering activity, the X & Y time series values suggest fairly zeroed signals; I did zoom in and they are.  During the interim ~2 days before this morning's look, the Y signal was pinned to a rail.  Noting my U, V & W voltages: before Tuesday just W was on its rail, but after Tuesday essentially all 3 masses were at or near a rail.  This further suggests you cannot just look at the X, Y & Z signals to assess the mass centering.

So, clearly, waiting for the masses to come off their rails did not yield results.  In addition, it appears there is higher-frequency noise showing up on the X channel after Tuesday, and now on Y after this morning's centering.  We will go measure the mass voltages when allowed.  I did leave some leads hanging off the monitor port for voltage measuring, but they are mostly insulated and I don't think they are causing noise.  However, we'll remove them next time just in case.

Meanwhile, the positive glitches seen on the time series (the average is still under the min trace but the max [black] is way higher than the average) are not continuous.  I guess these show up as the 1/f noise starting around 40 Hz on the spectra, second attachment.  The reference traces are from 1200 UTC 31 Jan (before the Tuesday activities).  Note the higher noise on the X (pink) trace compared to the Y signal (cyan).  I checked the wind; it did not seem to be an issue at that time.

Images attached to this comment
hugh.radkins@LIGO.ORG - 14:55, Friday 03 February 2017 (33872)

I have to just throw this in and run.

Here is a comparison of the SEI ground STS and the PEM unit on the BRS.  These have some gain difference I don't understand yet, but I made them the same by multiplying by the ratio at the useism.  At this BW, the PEM is not happy.
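
A minimal sketch of that normalization, with placeholder arrays and a nominal 0.2 Hz secondary-microseism reference point (assumptions, not the values actually used):

import numpy as np

def match_at_useism(freq, asd_gnd, asd_pem, f_ref=0.2):
    # Rescale asd_pem so the two ASDs agree at the reference frequency.
    i = np.argmin(np.abs(freq - f_ref))
    scale = asd_gnd[i] / asd_pem[i]
    return asd_pem * scale, scale

Any residual difference away from f_ref is then a real shape difference between the two sensors rather than an overall gain mismatch.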

Images attached to this comment