Reports until 17:56, Wednesday 23 December 2015
LHO General
thomas.shaffer@LIGO.ORG - posted 17:56, Wednesday 23 December 2015 (24425)
Ops Report: Locking not getting anywhere

I have managed to lock PRMI two times, but could not advance from there. Since my two locks, it has been harder to keep ALS locked. Not going so well.

useism still ~1um/s, wind <30mph.

H1 General
bubba.gateley@LIGO.ORG - posted 17:20, Wednesday 23 December 2015 (24424)
LVEA DUCT HEATERS
Today Ken (S&K Electric) and I replaced 5 heating elements in Zone 2A and 2 in Zone 1B.
There were several elements burned out in Zone 3A so we relocated the remaining elements in order to bring these up sequentially as they were designed to do. These elements are a different style and we did not have replacements. I will order more after the first of the year. 

With the exception of 1A, which has some issues that will be addressed after the break, we are now able to turn on heat in all of the zones. 
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 17:02, Wednesday 23 December 2015 (24423)
Manually overfilled CP3


Manually filled CP3 at 22:43 UTC; opened the fill valve by one full turn. Time to see LN2 out of the exhaust: 4:00 minutes.

H1 ISC (DetChar, OpsInfo)
evan.hall@LIGO.ORG - posted 16:45, Wednesday 23 December 2015 - last comment - 16:45, Wednesday 23 December 2015(24418)
Violin mode damping turned on again in full lock

At Nutsinee's request, I have re-enabled violin mode damping in full lock.

I turned off the extra 500 Hz stopbands in the PUM pitch/yaw filter modules, and I have commented out the code in the COIL_DRIVERS state which disables the damping.

We had 36 hours of undisturbed data from roughly 2015-12-19 07:00:00 Z to 2015-12-20 19:00:00 Z. This should be enough to analyze the ringdowns. At a glance, all but one of the first harmonics in the spectrum seem to ring down.

Non-image files attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 16:11, Wednesday 23 December 2015 (24421)

Evan was worried that, because the violin mode damping has been turned off for two weeks, re-enabling it could ring up a mode if something about the mode has changed (this shouldn't happen). If we ever go back to Observing again, pull up the violin mode monitor, drag all the rms values into StripTool, and monitor them to make sure no rms is rising. There will be a bunch of SDF differences related to {QUADOPTIC}_L2_DAMP; accept them. If any violin mode rings up and no commissioners are around, feel free to give me a call anytime. If a mode rings up so quickly that it breaks the lock, go to ISC_LOCK.py and uncomment the seven lines of for-loop in COIL_DRIVERS to turn off all the violin mode damping when you relock the next time.
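For reference, a minimal sketch of the kind of for-loop being described, in Guardian-style Python; the quad list, mode count, channel naming, and use of pyepics here are assumptions, not the actual ISC_LOCK.py code:

from epics import caput

QUADS = ['ITMX', 'ITMY', 'ETMX', 'ETMY']  # the four test-mass suspensions
MODES = range(1, 11)                      # hypothetical set of damped violin modes

def disable_violin_damping(ifo='H1'):
    """Zero the gain of every violin-mode damping filter on the L2 (PUM) stage."""
    for quad in QUADS:
        for mode in MODES:
            # e.g. H1:SUS-ETMY_L2_DAMP_MODE1_GAIN -- assumed channel naming
            caput('{}:SUS-{}_L2_DAMP_MODE{}_GAIN'.format(ifo, quad, mode), 0)

if __name__ == '__main__':
    disable_violin_damping()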

Images attached to this comment
H1 General (OpsInfo, PEM, SEI, VE)
cheryl.vorvick@LIGO.ORG - posted 16:05, Wednesday 23 December 2015 (24422)
Ops Day Shift Summary: 23 Dec 2015

Ops Day Summary: 16:00-23:59 UTC, 08:00-16:00 PT

State of H1: down since 18:15 UTC, high useism, winds increasing, not able to get to DRMI

Outgoing Operator: Jeff

Incoming Operator: TJ

Shift Details:

Site:

LHO General
thomas.shaffer@LIGO.ORG - posted 16:00, Wednesday 23 December 2015 (24420)
Ops Eve Transition
H1 SEI
hugh.radkins@LIGO.ORG - posted 15:31, Wednesday 23 December 2015 (24419)
Alternate Sensor Correction Filter tried -- reverted

Jim had an alternative sensor correction (SC) filter that gains a bit at the microseism at the expense of more noise at lower frequencies. With the high useism and the lock loss, it seemed an opportunity to see whether it would improve relocking. Alas, no independent data were captured, with the IFO unable to grab and hold. Another time.

Attached are the two filters; the green trace is the new filter. It filters the ground seismometer signal before it is added to the CPS, giving the CPS an inertial flavor before it goes into the super sensor (blend).
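Roughly, that signal path is as in the toy Python sketch below; the filter shape, sample rate, and data are generic placeholders, not the foton filter that was actually tried:

import numpy as np
from scipy import signal

fs = 256.0                                  # sample rate (Hz), placeholder
t = np.arange(0, 60, 1 / fs)
ground = np.random.randn(t.size)            # stand-in for the ground seismometer signal
cps = np.random.randn(t.size)               # stand-in for the CPS displacement signal

# Generic sensor-correction filter emphasizing the microseism band (~0.1-0.5 Hz)
sos = signal.butter(4, [0.08, 0.5], btype='bandpass', fs=fs, output='sos')
sc = signal.sosfilt(sos, ground)

corrected_cps = cps + sc   # CPS with an "inertial flavor"
# corrected_cps would then be blended with the inertial sensors to form the super sensor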

Jim has saved the foton files in the SVN, and we've reverted the ISI back to the nominal configuration; no change in SDF.

Images attached to this report
LHO General (DetChar)
nathaniel.strauss@LIGO.ORG - posted 14:02, Wednesday 23 December 2015 - last comment - 10:18, Thursday 24 December 2015(24417)
H1-L1 coherence at 74 Hz

We've continued to investigate activity around 74 Hz in order to explain the coherence line between H1 and L1 generated by stochmon.

Attached is a comprehensive set of results from the coherence tool (the full set of results can be found here). The coherence tool plots each channel's coherence with the local (H1 or L1) h(t) channel, averaged over a week with 1 mHz resolution. The attached slides show the channels exhibiting coherence structure around 74 Hz over the course of O1. Relevant plots are available via a download link in the attached slides.
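For reference, a 1 mHz-resolution coherence estimate between two channels can be sketched as below; the data, sample rate, and averaging details are placeholders, not the coherence tool's actual implementation:

import numpy as np
from scipy import signal

fs = 1024.0                            # sample rate (Hz), placeholder
nperseg = int(fs / 1e-3)               # 1 mHz resolution requires 1000 s segments
x = np.random.randn(10 * nperseg)      # stand-in for h(t)
y = np.random.randn(10 * nperseg)      # stand-in for an auxiliary channel

f, coh = signal.coherence(x, y, fs=fs, nperseg=nperseg)
band = (f > 73.0) & (f < 75.0)
print('peak coherence near 74 Hz:', coh[band].max())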

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:18, Thursday 24 December 2015 (24440)AOS, ISC, SEI, SUS, SYS
Has any more digging gone into investigating my suggested mechanism for coherence at this frequency (see LHO aLOG 24305)? Does this channel list support or refute the claim?

Tagging the SEI, SUS, AOS, ISC, and SYS groups again.
H1 CDS
david.barker@LIGO.ORG - posted 11:45, Wednesday 23 December 2015 (24416)
update on EPICS freeze up problems on front end computers

As reported in earlier alogs, the EPICS freeze-ups are caused by the rsync backups of h1boot by both its clone machine (h1boot-backup) and the tape archive machine (cdsfs1). The file system on h1boot has degraded such that a full rsync, which used to complete within an hour, is now taking 10 hours. This caused many of the backup jobs to pile up, which in turn caused the front-end freeze-ups.

I have been re-writing the rsync scripts to spread the rsync of h1boot by h1boot-backup over many smaller jobs. I am also not rsyncing directories which contain many thousands of files, as this always causes a freeze-up. Only h1boot-backup is rsyncing h1boot, at deterministic times; I will then rsync h1boot-backup to a third machine to provide a disk-to-disk-to-disk backup.
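As an illustration only, splitting a monolithic backup into several bounded jobs with excludes might look like the sketch below; the paths, excludes, and destination are hypothetical, not the real scripts:

import subprocess

TOP_DIRS = ['/opt/rtcds', '/etc', '/home']               # hypothetical subtrees of h1boot
EXCLUDES = ['--exclude=*/burt/*', '--exclude=*/logs/*']  # directories with many thousands of files

def backup_dir(src_dir, dest='h1boot-backup:/backup'):
    """Run one bounded rsync job instead of a single monolithic pass."""
    cmd = ['rsync', '-a', '--delete'] + EXCLUDES + [src_dir, dest]
    subprocess.run(cmd, check=True)

if __name__ == '__main__':
    for d in TOP_DIRS:
        backup_dir(d)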

H1 CDS
david.barker@LIGO.ORG - posted 11:38, Wednesday 23 December 2015 (24415)
problems with LHO CDS backup systems

For reference, here is an email I sent today to lho-all reporting recent CDS backup systems problems, short term and long term solutions:

Over the past few weeks we have developed problems with the CDS backup systems, namely:

  • /ligo file system has filled
  • backing up the front end boot server (h1boot) causes EPICS freeze-ups on the front ends and has on one occasion caused lock loss
  • tape backup hardware failed yesterday
These are all problems related to aging hardware (most are over 4 years old) which have appeared at an unfortunate time (i.e. during an observation run and just before a major holiday).
 
We have new file servers, disks, and a tape robot on order and plan on replacing all this aging hardware in January. The new hardware will have much larger resources in terms of file-system and tape-backup size and speed. In the meantime we will get by with disk-to-disk-to-disk backups (each file on three disk systems, all protected with UPS power).
 
The main task we can all do to help out in the meantime is to ensure that all critical hand-edited files are under SVN version control and that the repository is updated promptly when a file is modified (the latter maintains a good history of changes made to the file and permits restoring previous versions).
 
We should also refrain from writing large (many GB) files to the /ligo disk system.
 
I have a script called check_h1_files_svn_status which reports any outstanding local mods on critical IFO configuration and control files.
 
many thanks,
Dave
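A minimal sketch of the kind of check a script like check_h1_files_svn_status might perform; the watched paths and the parsing of svn status output here are assumptions, not the actual script:

import subprocess

CRITICAL_PATHS = [
    '/opt/rtcds/userapps',         # hypothetical working-copy locations
    '/opt/rtcds/lho/h1/guardian',
]

def report_local_mods(paths):
    """Print any files with uncommitted local modifications."""
    for path in paths:
        out = subprocess.run(['svn', 'status', path],
                             capture_output=True, text=True).stdout
        mods = [line for line in out.splitlines() if line.startswith(('M', 'A', 'D', '?'))]
        if mods:
            print('Outstanding local mods in ' + path + ':')
            for line in mods:
                print('  ' + line)

if __name__ == '__main__':
    report_local_mods(CRITICAL_PATHS)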
H1 SEI
hugh.radkins@LIGO.ORG - posted 09:37, Wednesday 23 December 2015 - last comment - 10:56, Wednesday 23 December 2015(24410)
Before & After Centering the STS2 mass

Yesterday, the masses on the STS2s were centered, but I should have monitored the mass positions more in real time, as the centering doesn't always take the first or second time around. While we did perhaps improve things, there are still problems: six of the nine masses (3 masses on each of 3 STS2s) were out of spec before, and after centering 3 of the 9 are still out of spec.

At EndX, the STS2 had two masses out of spec by about 50% (7500 cts vs 5000); after centering, one of those came into spec but the other overshot and is now out by a factor of 5! Attached are ASDs from before and after. The refs are from 2am Tuesday and the currents are from 2am this morning. The useism is likely a bit higher early this morning (not as high as it is right now, though.) There is not anything glaringly wrong to me here. In general the current traces are higher than the references, but again, that could just be the actual ground motion. I thought such a large mass position problem would be obvious, but it is not to me. I'll do a comparison of LVEA instruments at the same time.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:10, Wednesday 23 December 2015 (24411)

Here are two-day trends of the three corner station STS2 mass positions. The bottom row shows the U masses, the middle the V masses, and the top the W masses. The first column, B2, is STS2-B (ITMY); the middle column, B1, is STS2-A, located near HAM2; and the third column, B3, is STS2-C, near HAM5.

Currently, STS2-B is the unit used for all the corner station sensor corrections. Yesterday's centering step is clearly visible; please note that the U mass shifted a lot but is still equally out of position. B1, the HAM2 unit, is not in service and was not centered yesterday. I do not understand why there is a glitch to zero at that time. We did connect a DVM to the Pos spigots... maybe we toggled the UVW/XYZ switch and that is the glitch. The third column shows the HAM5 B3 unit, which we did center, although it looks like we really did not need to do that. Please note the various noise levels on these channels, and especially how that changed on the B3 V signal; what's up with that?

Images attached to this comment
hugh.radkins@LIGO.ORG - 10:33, Wednesday 23 December 2015 (24413)

Here is the spectra comparison at ETMY, where the zeroing seems to have improved all the mass positions, although one is still out of spec by maybe 50%. Similar story to the EndX assessment above: nothing obvious given the potentially different ground motion. However, if you compare the EndX and EndY spectra, it sure would appear that EndY sees much more tilt than EndX.

Images attached to this comment
hugh.radkins@LIGO.ORG - 10:56, Wednesday 23 December 2015 (24414)

Here is the promised same-time comparison after centering for the STS-B and C units. Again, the centering on the C unit was probably not needed (we were testing the process), and the centering on the B unit just sent it to the other side of zero with no real improvement in position.

A couple of things from this. See how the traces are almost exactly the same from 0.1 to 6 Hz on the Y dof? That suggests these two instruments are doing well in that channel. Below about 60 mHz, tilt starts to get in the way; above 6 Hz, wavelength effects bring about differences. On the X and Z axes, the difference between the spectra in this region suggests a problem in those dofs. We suspect STS2-B, the ITMY unit. This could be because of the poor centering, or maybe the difficulty in centering is a common symptom of the underlying instrument problem.

Images attached to this comment
H1 SEI (PEM, SEI)
cheryl.vorvick@LIGO.ORG - posted 09:28, Wednesday 23 December 2015 (24409)
Comparing changes in ETMX ISI and ETMY ISI as the useism increased

Plot shows ETMX and ETMY Stage 1 X location and Y location over the last 24 hours.

During the last 24 hours, useism has increased from 0.4um/s to 1um/s.

ETMY ISI (left, top and bottom) shows a steady increase in the amplitude of the oscillations in X Location and Y Location.

ETMX ISI (right, top and bottom) shows an increase in the amplitude of the oscillations in X location and Y location; however, the increase in amplitude, and the signals themselves, are anything but "smooth."

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 08:17, Wednesday 23 December 2015 (24407)
Ops Owl Shift Summary
Activity Log: All Times in UTC (PT)

08:00 (00:00) Take over from TJ
12:00 (04:00) Dozens of ETM-Y saturation alarms concurrent with RF45 noise.
12:45 (04:45) Temperature alarm on CS Zone 4 duct. 
12:58 (04:58) Temperature alarm on CS Zone 1A duct.
13:27 (05:27) The ETM-Y/RF45 saturations have stopped.  
15:19 (07:19) Three ETM-Y/RF45 saturations in 8 seconds. 
15:44 (07:44) Bubba – Into the Cleaning area to get a headset 
15:46 (07:46) Bubba – out of the cleaning area
16:00 (08:00) Turn over to Cheryl 



End of Shift Summary:

Title: 12/22/2015, Owl Shift 08:00 – 16:00 (00:00 – 08:00) All times in UTC (PT)

Support:  
 
Incoming Operator: Cheryl

Shift Detail Summary: Starting at 12:00 (04:00) dozens of ETM-Y saturations with simultaneous RF45 saturations. These continued until 13:27 (05:27). There were a few additional saturations after 13:27.

Other than the RF45 saturations, it was a good night for data collection. Seismic noise and the microseism continue to increase, as does the wind.
  
H1 General
jeffrey.bartlett@LIGO.ORG - posted 05:00, Wednesday 23 December 2015 - last comment - 06:43, Wednesday 23 December 2015(24405)
RF45 Acting Up
Starting at 12:00 (04:00) we have been getting dozens of RF45 and concurrent ETM-Y saturations. By 12:55 (04:55) the rate of saturations had slowed, but they continue.
Comments related to this report
jeffrey.bartlett@LIGO.ORG - 06:43, Wednesday 23 December 2015 (24406)
The last saturation was at 13:27 (05:27). 
H1 General (OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 14:35, Tuesday 22 December 2015 - last comment - 08:19, Wednesday 23 December 2015(24384)
The sound settings on the alarm handler computer

The alarm volume was so low this morning that I couldn't hear the alarms, while the verbal alarm volume was reasonable; of course, when I tried to raise the alarm volume, the verbal alarms became too loud.

JimB discovered that the sound settings on the computer are separated into "alarms" and "computer" volume sliders, and the alarm volume was set very low.

I found that to keep the alarm volume about the same as the verbal alarm volume, the alarm slider needs to be at maximum and the verbal alarms need to be at about 1/2 of the total possible volume.

Picture attached.

Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 08:19, Wednesday 23 December 2015 (24408)

I added System Preferences to the toolbar so it's easy to find.

The attached snapshot shows the toolbar and System Preferences when open.

In the System Preferences window, open Sound (second line down, furthest to the right).

Images attached to this comment
H1 AOS (ISC, SUS)
jason.oberling@LIGO.ORG - posted 09:14, Tuesday 22 December 2015 - last comment - 10:16, Wednesday 23 December 2015(24382)
BS/SR3 OpLev Laser Power Increased (WP5667)

I increased the power on the BS and SR3 oplev lasers this morning by ~5%.  I used the voltage read out from the "Current Monitor" port on the back of the laser (this port monitors the laser diode current and outputs a voltage).  The old and new values are listed below, as well as the new SUM counts.  Since I had to open the coolers the lasers are housed in, they will need ~4-6 hours to return to thermal equilibrium; at that point I can assess whether or not the glitching has improved.

Comments related to this report
jason.oberling@LIGO.ORG - 10:16, Wednesday 23 December 2015 (24412)ISC, SUS

Links to today's DetChar pages (look at the SUM spectrograms, both standard and normalized, at the top of the page) for these oplevs, showing data after the power increase and subsequent return to thermal equilibrium (data starts at 0:00 UTC 12/23/2015, or 16:00 PST 12/22/2015):  BS, SR3.  As can be seen, SR3 quieted down significantly (go back a day or two and compare); one more small power adjustment should take care of it, and I will do that at the next opportunity (probably the next maintenance day, 1/5/2016).  The BS, on the other hand, has not seen much improvement.  I'm starting to think maybe I've overshot the stable range...  I will perform another adjustment (probably also on the next maintenance day), this time targeting a region between my last 2 adjustments.  I will keep WP5667 open as this work is still ongoing.

On a brighter note, I have been testing the laser removed from the BS oplev in the LSB optics lab and can find nothing wrong with it.  I'm currently adjusting the TEC temperature setpoint to bring the stable region to a more useful power level, and the laser will then be ready for re-installation (back into the BS if the current laser proves too finicky, or into another oplev, likely ITMy).
