H1 CDS
thomas.shaffer@LIGO.ORG - posted 22:52, Wednesday 23 December 2015 (24429)
Fixed bug in VerbalAlarms

VerbalAlarms crashed today due to a ValueError, but luckily we didn't need Verbs because of the environment.

The problem came from code I added on December 9 that was meant to stop the temperature alarms from repeating too often. It would look through a date string for the 9th-11th elements, which should be the minutes, and then check whether five minutes had passed since the last alarm. This worked fine until the day of the month changed from a single digit to two digits, which shifted the position of the minutes within the string. That was something I should have seen; my fault. The reason it did not show up until now is that it sat under a few conditionals that have not been met since the 9th.

The fix came in two parts. First I wrote a regular expression to search the date string and find the minutes. This worked well, but it increased the loop time a bit. So I then went for the much simpler solution and made a global variable 'minute' at the point where the date string is built, which added only one line of code. This worked even better.
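As a rough sketch of the two approaches (the date-string format, variable names, and timing logic below are assumptions for illustration, not the actual VerbalAlarms code):

```python
import re
import time

minute = None             # global set whenever the date string is built
last_alarm_minute = None  # global updated whenever a temperature alarm is issued

def build_date_string():
    """Build the date string for announcements and record the minute globally."""
    global minute
    now = time.localtime()
    minute = now.tm_min   # the one extra line: save the minute directly
    return time.strftime("%b %d %H:%M:%S", now)

def minutes_from_regex(date_string):
    """Alternative approach: pull the minutes back out of the string with a regex.

    Works no matter how many digits the day has, but costs a regex search on
    every pass through the alarm loop.
    """
    match = re.search(r":(\d{2}):", date_string)
    return int(match.group(1)) if match else None

def temperature_alarm_too_soon():
    """True if fewer than five minutes have passed since the last temperature alarm."""
    if last_alarm_minute is None or minute is None:
        return False
    return (minute - last_alarm_minute) % 60 < 5
```

The fixed-position slicing failed because the minutes no longer sat at the 9th-11th characters once the day of the month grew to two digits; either approach above avoids depending on where the minutes land in the string.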

This configuration has been tested and I have reloaded Verbs with the new code.

LHO General
thomas.shaffer@LIGO.ORG - posted 19:49, Wednesday 23 December 2015 (24428)
Ops Eve Mid Shift Report

Locking seems to be impossible; I can't keep ALS locked. I've tried many different blend configurations, but there is always a ton of movement. It seems to be well aligned, but then it will just drift back to 0 power or it will immediately lose lock. I think the environment has got us for now.

useism ~1.0um/s. Wind <35mph.

X1 DTS
david.barker@LIGO.ORG - posted 18:00, Wednesday 23 December 2015 (24427)
building external boot NFS server

Ryan, Jim, Dave

We built a new NFS server called x1fs0 today. It has a ZFS file system installed on two 1 TB HDDs. Ryan introduced us to the wonders of ZFS on Linux. This machine is a new Ubuntu 14.04 LTS build. We are currently rsyncing /opt from x1boot to x1fs0 (should complete this evening). Tomorrow we will test whether we can move the NFS function off of x1boot over to x1fs0, in preparation for doing the same on H1 post-O1.

H1 CDS
david.barker@LIGO.ORG - posted 17:57, Wednesday 23 December 2015 (24426)
cdsbackup machine performing disk-to-disk backups

Ryan, Jim, Carlos, Dave:

Following the failure of the Tandberg tape backup, as a short-term fix we built a 2 TB file system on cdsbackup so we could perform a disk-to-disk-to-disk backup of /opt/rtcds and /ligo (replacing the current disk-to-disk-to-tape). With Ryan's help we built a 2 TB ZFS file system on cdsbackup (a ZFS mirror on two 2 TB disks). I am currently rsyncing /opt/rtcds from h1boot-backup to cdsbackup over the private backup network. This will take many hours to complete. I have verified (with Observium) that this traffic is going over the private backup network and not over the 10.101 or 10.20 VLANs, so it should not impact the front ends or the control room.

LHO General
thomas.shaffer@LIGO.ORG - posted 17:56, Wednesday 23 December 2015 (24425)
Ops Report: Locking not getting anywhere

I have managed to lock PRMI two times, but could not advance from there. Since my two locks, it has been harder to keep ALS locked. Not going so well.

useism still ~1um/s, wind <30mph.

H1 General
bubba.gateley@LIGO.ORG - posted 17:20, Wednesday 23 December 2015 (24424)
LVEA DUCT HEATERS
Today Ken (S&K Electric) and I replaced 5 heating elements in Zone 2A and 2 in Zone 1B.
There were several elements burned out in Zone 3A so we relocated the remaining elements in order to bring these up sequentially as they were designed to do. These elements are a different style and we did not have replacements. I will order more after the first of the year. 

With the exception of 1A, which has some issues that will be addressed after the break, we are now able to turn on heat in all of the zones. 
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 17:02, Wednesday 23 December 2015 (24423)
Manually overfilled CP3


Manually filled CP3 at 22:43 UTC; opened the fill valve by one full turn; time to see LN2 out of the exhaust was 4:00 minutes.

H1 ISC (DetChar, OpsInfo)
evan.hall@LIGO.ORG - posted 16:45, Wednesday 23 December 2015 - last comment - 16:45, Wednesday 23 December 2015(24418)
Violin mode damping turned on again in full lock

At Nutsinee's request, I have re-enabled violin mode damping in full lock.

I turned off the extra 500 Hz stopbands in the PUM pitch/yaw filter modules, and I have commented out the code in the COIL_DRIVERS state which disables the damping.

We had 36 hours of undisturbed data from roughly 2015-12-19 07:00:00 Z to 2015-12-20 19:00:00 Z. This should be enough to analyze the ringdowns. At a glance, all but one of the first harmonics in the spectrum seem to ring down.
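As a rough illustration of the kind of ringdown analysis such a stretch allows (the channel preparation, fit model, and numbers below are assumptions for illustration, not a description of the actual analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, a0, tau):
    """Free ringdown of a single violin mode: amplitude decays as exp(-t/tau)."""
    return a0 * np.exp(-t / tau)

def fit_ringdown(t, rms, f0):
    """Fit an exponential decay to a band-limited RMS time series around one mode.

    t   : time in seconds since damping was turned off
    rms : band-limited RMS of a monitor channel around the mode
    f0  : mode frequency in Hz, used to convert the decay time to a Q estimate
    """
    (a0, tau), _ = curve_fit(decay_model, t, rms, p0=[rms[0], 3600.0])
    q = np.pi * f0 * tau   # Q = pi * f0 * tau for an exponential amplitude decay
    return tau, q

# Example with synthetic data: a ~500 Hz mode ringing down over a 36 h stretch
t = np.linspace(0, 36 * 3600, 500)
rms = decay_model(t, 1.0, 8 * 3600) + 0.01 * np.random.randn(t.size)
tau, q = fit_ringdown(t, rms, 500.0)
print(f"tau = {tau / 3600:.1f} h, Q ~ {q:.2e}")
```

Fitting an exponential to each mode's band-limited RMS over the undamped stretch gives the decay time, and hence a Q estimate, per harmonic.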

Non-image files attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 16:11, Wednesday 23 December 2015 (24421)

Evan was worried that, because the violin mode damping has been turned off for two weeks, turning it back on could ring up a mode if something about the mode has changed (this shouldn't happen). If we ever go back to Observing again, pull up the violin mode monitor and drag all the rms values into StripTool to make sure no rms is rising. There will be a bunch of SDF differences related to {QUADOPTIC}_L2_DAMP; accept them. If any violin mode rings up and no commissioners are around, feel free to give me a call anytime. If a mode rings up so quickly that it breaks the lock, go to ISC_LOCK.py and uncomment the seven lines of for-loop in COIL_DRIVER to turn off all the violin mode damping when you relock the next time.
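For orientation, here is a rough, hypothetical sketch of the shape of such a loop; the suspension list, mode count, and channel names are assumptions rather than the actual ISC_LOCK.py contents, and a plain dict stands in for the Guardian ezca interface so the sketch runs on its own:

```python
# Hypothetical sketch of a loop that zeros violin mode damping gains.
# In a real Guardian node, ezca is provided by the environment; a dict
# stands in for it here.
ezca = {}

QUADS = ['ITMX', 'ITMY', 'ETMX', 'ETMY']   # assumed list of test-mass suspensions
MODES = range(1, 11)                       # assumed number of damped violin modes

for optic in QUADS:
    for mode in MODES:
        # Zero the gain of each L2-stage violin damping filter module
        ezca['SUS-{}_L2_DAMP_MODE{}_GAIN'.format(optic, mode)] = 0
```

The actual code path and channel names may differ; this is only to illustrate the kind of change the operator is being asked to re-enable.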

Images attached to this comment
H1 General (OpsInfo, PEM, SEI, VE)
cheryl.vorvick@LIGO.ORG - posted 16:05, Wednesday 23 December 2015 (24422)
Ops Day Shift Summary: 23 Dec 2015

Ops Day Summary: 16:00-23:59UTC, 08:00-16:00PT

State of H1: down since 18:15 UTC; high useism, winds increasing, not able to get to DRMI

Outgoing Operator: Jeff

Incoming Operator: TJ

Shift Details:

Site:

LHO General
thomas.shaffer@LIGO.ORG - posted 16:00, Wednesday 23 December 2015 (24420)
Ops Eve Transition
H1 SEI
hugh.radkins@LIGO.ORG - posted 15:31, Wednesday 23 December 2015 (24419)
Alternate Sensor Correction Filter tried -- reverted

Jim had an alternative SC filter that gained a bit at the useism at the expense of lower-frequency noise. With the high useism and lock losses it seemed an opportunity to see if it would improve relocking. Alas, no independent data was captured, with the IFO unable to grab and hold. Another time.

Attached are the two filters; the green trace is the new filter. The sensor correction filters the ground seismometer signal before it is added to the CPS, giving the CPS an inertial flavor before it goes into the super sensor (blend).
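Schematically (the notation below is mine, not from the entry), the sensor-corrected super sensor can be written as:

```latex
% Assumed notation:
%   x_CPS  - relative (platform-to-ground) capacitive position sensor signal
%   x_gnd  - ground motion from the STS2 seismometer
%   x_in   - inertial sensor signal on the platform
%   F_SC   - sensor correction filter (the filter compared in the attachment)
%   L, H   - complementary low-/high-pass blend filters, L + H = 1
\[
  x_{\mathrm{super}} = L\,\bigl(x_{\mathrm{CPS}} + F_{\mathrm{SC}}\,x_{\mathrm{gnd}}\bigr) + H\,x_{\mathrm{in}}
\]
```

In this picture the filter compared in the attachment is F_SC, and gaining at the microseism at the expense of lower-frequency noise changes how much ground-seismometer noise is re-injected below the sensor-correction band.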

Jim has saved the foton files in the SVN, and we've reverted the ISI back to the nominal configuration--no change in SDF.

Images attached to this report
LHO General (DetChar)
nathaniel.strauss@LIGO.ORG - posted 14:02, Wednesday 23 December 2015 - last comment - 10:18, Thursday 24 December 2015(24417)
H1-L1 coherence at 74 Hz

We've continued to investigate activity around 74 Hz in order to explain the coherence line between H1 and L1 generated by stochmon.

Attached is a comprehensive set of results from the coherence tool (the full set of results can be found here). The coherence tool plots coherence with the local (H1 or L1) h(t) channel, averaged over a week with 1 mHz resolution. The attached slides show the channels with coherence structure around 74 Hz over the course of O1. You can find the relevant plots via a download link in the attached slides.
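For context, a minimal sketch of the kind of coherence estimate such a tool computes (the channel data, sample rate, and averaging choices here are placeholders, not the tool's actual settings):

```python
import numpy as np
from scipy.signal import coherence

fs = 1024.0                      # placeholder sample rate in Hz
t = np.arange(0, 600, 1 / fs)    # placeholder stretch of data

# Stand-ins for h(t) and an auxiliary channel sharing a narrow 74 Hz feature
common = np.sin(2 * np.pi * 74.0 * t)
hoft = common + np.random.randn(t.size)
aux = 0.5 * common + np.random.randn(t.size)

# Long FFT segments give fine frequency resolution; averaging over many
# segments (in the tool, over a week of data) beats down the noise floor.
f, coh = coherence(hoft, aux, fs=fs, nperseg=int(64 * fs))

peak = f[np.argmax(coh)]
print(f"coherence peaks at {peak:.1f} Hz with value {coh.max():.2f}")
```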

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:18, Thursday 24 December 2015 (24440)AOS, ISC, SEI, SUS, SYS
Has any more digging gone into investigating my suggested mechanism for coherence at this frequency (see LHO aLOG 24305)? Does this channel list support or refute the claim?

Tagging the SEI, SUS, AOS, ISC, and SYS groups again.
H1 CDS
david.barker@LIGO.ORG - posted 11:45, Wednesday 23 December 2015 (24416)
update on EPICS freeze up problems on front end computers

As reported in earlier alogs, the EPICS freeze-ups are caused by the rsync backups of h1boot by both its clone machine (h1boot-backup) and the tape archive machine (cdsfs1). The file system on h1boot has degraded such that a full rsync, which used to complete within an hour, is now taking 10 hours. This caused many of the backup jobs to pile up, which in turn caused the front end freeze-ups.

I have been re-writing the rsync scripts to spread the rsync of h1boot by h1boot-backup over many smaller jobs. I am also not rsyncing directories which contain many thousands of files, as this always causes a freeze-up. Only h1boot-backup rsyncs h1boot, at deterministic times; I will then rsync h1boot-backup to a third machine to provide a disk-to-disk-to-disk backup.
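A minimal sketch of the spread-out approach (the directory list, exclusions, and paths here are illustrative assumptions, not the actual scripts):

```python
import subprocess
import time

SOURCE = "h1boot:/opt/rtcds/"        # assumed source; the real scripts' paths may differ
DEST = "/backup/h1boot/opt/rtcds/"   # assumed destination on h1boot-backup

# Break the single large rsync into one job per top-level directory, and skip
# directories known to hold many thousands of small files.
SUBDIRS = ["userapps", "rtscripts", "chans"]                 # illustrative list
EXCLUDES = ["--exclude", "archive/", "--exclude", "logs/"]   # illustrative exclusions

for subdir in SUBDIRS:
    cmd = ["rsync", "-a", "--delete"] + EXCLUDES + [SOURCE + subdir + "/", DEST + subdir + "/"]
    subprocess.run(cmd, check=True)
    time.sleep(60)   # pause between jobs so the NFS server is never hammered continuously
```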

H1 CDS
david.barker@LIGO.ORG - posted 11:38, Wednesday 23 December 2015 (24415)
problems with LHO CDS backup systems

For reference, here is an email I sent today to lho-all reporting recent CDS backup systems problems, short term and long term solutions:

Over the past few weeks we have developed problems with the CDS backup systems, namely:

  • /ligo file system has filled
  • backing up the front end boot server (h1boot) causes EPICS freeze-ups on the front ends and has on one occasion caused lock loss
  • tape backup hardware failed yesterday
These are all problems related to aging hardware (most of it over 4 years old), and they have appeared at an unfortunate time (i.e. during an observation run and just before a major holiday).
 
We have new file servers, disks and a tape robot on order and plan on replacing all this aging hardware in January. The new hardware will have much larger resources in terms of file-system and tape-backup size and speed. In the meantime we will get by with disk-to-disk-to-disk backups (each file on three disk systems, all protected with UPS power).
 
The main task we can all do to help out in the meantime is to ensure that all critical hand-edited files are under SVN version control and that the repository is updated promptly when a file is modified (the latter maintains a good history of changes made to the file and permits restoring previous versions).
 
We should also refrain from writing large (many GB) files to the /ligo disk system.
 
I have a script called check_h1_files_svn_status which reports any outstanding local mods on critical IFO configuration and control files.
 
many thanks,
Dave
H1 SEI
hugh.radkins@LIGO.ORG - posted 09:37, Wednesday 23 December 2015 - last comment - 10:56, Wednesday 23 December 2015(24410)
Before & After Centering the STS2 mass

Yesterday the masses on the STS2s were centered, but I should have monitored the mass positions more closely in real time, as the centering doesn't always take the first or second time around. While we did improve things, maybe, there are still problems: six of the nine masses (3 masses on each of 3 STS2s) were out of spec before centering, and 3 of the 9 are still out of spec afterwards.

At EndX, the STS2 had two masses out of spec by about 50% (7500 cts vs 5000); after centering, one of those came into spec but the other overshot and is now out by a factor of 5! Attached are ASDs from before and after. The refs are from 2am Tuesday and the currents are from 2am this morning. The useism is likely a bit higher early this morning (not as high as it is right now, though). Nothing looks glaringly wrong to me here. In general the current traces are higher than the references, but again that could just be the actual ground motion. I thought such a large mass position problem would be obvious, but it is not to me. I'll do a comparison of LVEA instruments at the same time.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:10, Wednesday 23 December 2015 (24411)

Here are two-day trends of the three corner station STS2 mass positions. The bottom row shows the U masses, the middle row the V masses, and the top row the W masses. The first column (B2) is STS2-B (ITMY), the middle column (B1) is STS2-A, located near HAM2, and the third column (B3) is STS2-C, near HAM5.

Currently, STS2-B is the unit used for all the corner station sensor corrections. Yesterday's centering step is clearly visible; please note that the U mass shifted a lot but is still equally out of position. B1, the HAM2 unit, is not in service and was not centered yesterday. I do not understand why there is a glitch to zero at that time. We did connect a DVM to the Pos spigots... maybe we toggled the UVW/XYZ switch and that is the glitch. The third column shows the HAM5 B3 unit, which we did center, although it looks like we really did not need to do that. Please note the various noise levels on these channels, and especially how that changed on the B3 V signal; what's up with that?

Images attached to this comment
hugh.radkins@LIGO.ORG - 10:33, Wednesday 23 December 2015 (24413)

Here is the spectra comparison at ETMY, where the zeroing seems to have improved all the mass positions, although one is still out of spec by maybe 50%. Similar story to the EndX assessment above: nothing obvious given the potentially different ground motion. However, if you compare the EndX and EndY spectra, it sure would appear that EndY sees much more tilt than EndX.

Images attached to this comment
hugh.radkins@LIGO.ORG - 10:56, Wednesday 23 December 2015 (24414)

Here is the promised same time comparison after centering for the STS-B and C units.  Again, the centering on the C unit was probably not needed (we were testing the process) and the centering on the B unit just sent it to the other side of zero with no real improvement in position.

A couple of things from this. See how the traces are almost exactly the same from 0.1 to 6 Hz on the Y dof? That suggests these two instruments are doing well in that channel. Below about 60 mHz, tilt starts to get in the way here; above 6 Hz, wavelengths bring about differences. On the X and Z axes, the difference between the spectra in this region suggests a problem in those dofs. We suspect STS2-B, the ITMY unit. Now, this could be because of the poor centering, or maybe the difficulty in centering is a common symptom of the underlying instrument problem.
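A minimal sketch of the kind of same-time comparison described here, using Welch ASDs of two co-located seismometer channels (the data, sample rate, and averaging settings are placeholders, not the actual channels or parameters):

```python
import numpy as np
from scipy.signal import welch

fs = 128.0                        # placeholder sample rate in Hz
t = np.arange(0, 3600, 1 / fs)    # one placeholder hour of data

# Stand-ins for the Y-dof outputs of two co-located STS2s seeing the same ground motion
ground = np.cumsum(np.random.randn(t.size)) * 1e-3
sts_b = ground + 1e-3 * np.random.randn(t.size)
sts_c = ground + 1e-3 * np.random.randn(t.size)

# ASDs over the same stretch of time; where both instruments are healthy the
# traces should lie on top of each other between the tilt band and a few Hz.
f, psd_b = welch(sts_b, fs=fs, nperseg=int(256 * fs))
f, psd_c = welch(sts_c, fs=fs, nperseg=int(256 * fs))
asd_b, asd_c = np.sqrt(psd_b), np.sqrt(psd_c)

band = (f > 0.1) & (f < 6.0)
print(f"median ASD ratio in 0.1-6 Hz band: {np.median(asd_b[band] / asd_c[band]):.2f}")
```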

Images attached to this comment
H1 AOS (ISC, SUS)
jason.oberling@LIGO.ORG - posted 09:14, Tuesday 22 December 2015 - last comment - 10:16, Wednesday 23 December 2015(24382)
BS/SR3 OpLev Laser Power Increased (WP5667)

I increased the power on the BS and SR3 oplev lasers this morning by ~5%. I used the voltage read out of the "Current Monitor" port on the back of the laser (this port monitors the laser diode current and outputs a voltage). The old and new values are listed below, as well as the new SUM counts. Since I had to open the coolers the lasers are housed in, they will need ~4-6 hours to return to thermal equilibrium; at that point I can assess whether or not the glitching has improved.

Comments related to this report
jason.oberling@LIGO.ORG - 10:16, Wednesday 23 December 2015 (24412)ISC, SUS

Links to today's DetChar pages (look at the SUM spectrograms, both standard and normalized, at the top of the page) for these oplevs, showing data after the power increase and subsequent return to thermal equilibrium (data starts at 0:00 UTC 12/23/2015, or 16:00 PST 12/22/2015): BS, SR3. As can be seen, SR3 quieted down significantly (go back a day or two and compare); one more small power adjustment should take care of it. I will do that at the next opportunity (probably the next maintenance day, 1/5/2016). The BS, on the other hand, has not seen much improvement. I'm starting to think maybe I've overshot the stable range... I will perform another adjustment (probably also on the next maintenance day), this time targeting a region between my last 2 adjustments. I will keep WP5667 open as this work is still ongoing.

On a brighter note, I have been testing the laser removed from the BS oplev in the LSB optics lab and can find nothing wrong with it. I'm currently adjusting the TEC temperature setpoint to bring the stable region to a more useful power level, and the laser will then be ready for re-installation (back into the BS if the current laser proves too finicky, or into another oplev, likely ITMy).
