Changed the diskless node mount points for /opt and /opt/rtcds to the x1fs0 computer, rebooted x1psl0. All models restarted and are running as well as before. I'm moving other test stand computers to use x1fs0 for /opt/rtcds.
O1 days 93-97
No restarts reported for these five days. No CDS maintenance on Tues 22nd.
All DAQ components have been running for 22.8 days and counting (except the broadcaster, which was restarted for detchar channel inclusion).
Twas the night before Christmas, when all through the lab
The wind it was howling and tilting the slab;
Seismic was ringing and microseism was too high
To lock the thing up was just not going to fly;
The staff were all tucked safe in their beds
While visions of more data danced in their heads;
The wind it was blowing at just a mere gale
When the side of the building sounded like it was pelted with hail;
I sprang from my chair to see what was the matter
Away to the door to discover what was the clatter;
As I opened the door, it was ripped from my hand with such speed,
There was just enough time to duck flying tumbleweed;
So back to the control room I quickly retreated
And sunk into the chair, forced to admit I was totally defeated;
This Owl shift was done before it was over
With little to do but hope tomorrow would find LHO again in Observing clover;
The doors are all locked, and the Guardian’s state is in Down
All that is left is to head back into town;
But the operator was heard to exclaim as he drove out off the site
HAPPY CHRISTMAS TO ALL, AND TO ALL A GOOD NIGHT!
Winds are gusting into the high 30s to low 40s mph. Seismic and microseism are still high and showing an upward trend. Per a conversation with the run coordinator earlier this evening, I have decided to abandon the rest of this shift, knowing tomorrow will be a better day. To all the LIGO staff and families - Happy Holidays and a very bright 2016. Apologies to Clement C. Moore for taking such liberties with his 1823 poem "A Visit from St. Nicholas".
Jeff Kissel likes this!
Title: 12/24/2015, Owl Shift 08:00 – 16:00 (00:00 – 08:00) All times in UTC (PT) State of H1: 08:00 (00:00), The IFO is down due to environmental conditions. Wind, seismic, and microseism are all still elevated. Will attempt relocking if conditions improve; if not, per Vern, I will cancel the rest of the Owl shift and leave it for the Day shift.
TITLE: 12/23 Eve Shift: 00:00-08:00UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Down due to environment
SUPPORT: Evan G
SHIFT SUMMARY: I managed to lock PRMI a few times at the beginning of my shift, but since then the winds have gotten worse and the useism hasn't gone down. Whenever I was able to get ALS locked, it has been impossible to keep it for longer than 30 sec.
INCOMING OPERATOR: Jeff B.
ACTIVITY LOG: none
Today VerbalAlarms crashed due to a ValueError, but luckily we didn't need Verbs today because of the environment.
The problem came from code I added on December 9 that would stop the temperature alarms from repeating too often. It would look through a date string for the 9th-11th elements, which should be the minutes, and then check whether five minutes had passed since the last alarm. This was fine until the day in the date string changed from a single digit to two digits, shifting those indices. That was something I should have seen; my fault. The reason it did not show up until now is that it was under a few conditionals that have not been met since the 9th.
The fix went through two iterations. First I wrote a regular expression to find the minutes in the date string. This worked well, but it increased the loop time a bit. So I then went for the much simpler solution and set a global variable 'minute' at the point where I build the date string, adding only one line of code. This worked even better.
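As a rough sketch of the failure and of the one-line fix (the date-string format and slice indices below are hypothetical, not the actual VerbalAlarms code):

    from datetime import datetime

    # Hypothetical illustration of the bug: a fixed-index slice that only holds
    # while the day of the month is a single digit.
    def minute_from_slice(date_str):
        # "12/9/2015 14:37" -> "37", but "12/10/2015 14:37" -> ":3",
        # and int(":3") raises a ValueError like the one seen today.
        return int(date_str[13:15])

    # The simpler fix described above: record the minute when the date string
    # is built, instead of re-parsing the string later.
    now = datetime.now()
    minute = now.minute   # one extra line, always correct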
This configuration has been tested and I have reloaded Verbs with the new code.
Locking seems to be impossible; I can't keep ALS locked. I've tried many different blend configurations, but there is always a ton of movement. It seems to be well aligned, but then it will just drift back to 0 power or immediately lose lock. I think the environment has got us for now.
useism ~1.0um/s. Wind <35mph.
Ryan, Jim, Dave
We built a new NFS server called x1fs0 today. It has a ZFS file system installed on two 1TB HDDs. Ryan introduced us to the wonders of ZFS on Linux. This machine is a new build of Ubuntu 14.04 LTS. We are currently rsyncing /opt from x1boot to x1fs0 (should complete this evening). Tomorrow we will test whether we can move the NFS function off of x1boot over to x1fs0, in preparation for doing the same on H1 post-O1.
Ryan, Jim, Carlos, Dave:
Following the failure of the Tandberg tape backup, as a short-term fix we built a 2TB file system on cdsbackup so we can perform a disk-to-disk-to-disk backup of /opt/rtcds and /ligo (replacing the current disk-to-disk-to-tape). With Ryan's help this was built as a 2TB ZFS file system (a ZFS mirror on two 2TB disks). I am currently rsyncing /opt/rtcds from h1boot-backup to cdsbackup over the private backup network. This will take many hours to complete. I have verified (with Observium) that this traffic is going over the private backup network and not over the 10.101 or 10.20 VLANs, so it should not impact the front ends or control room.
I have managed to lock PRMI two times, but could not advance from there. Since my two locks, it has been harder to keep ALS locked. Not going so well.
useism still ~1um/s, wind <30mph.
Today Ken (S&K Electric) and I replaced 5 heating elements in Zone 2A and 2 in Zone 1B. There were several elements burned out in Zone 3A so we relocated the remaining elements in order to bring these up sequentially as they were designed to do. These elements are a different style and we did not have replacements. I will order more after the first of the year. With the exception of 1A, which has some issues that will be addressed after the break, we are now able to turn on heat in all of the zones.
Manually filled CP3 at 22:43 UTC; opened the fill valve by one full turn; time to see LN2 out of the exhaust was 4:00 minutes.
At Nutsinee's request, I have re-enabled violin mode damping in full lock.
I turned off the extra 500 Hz stopbands in the PUM pitch/yaw filter modules, and I have commented out the code in the COIL_DRIVERS state which disables the damping.
We had 36 hours of undisturbed data from roughly 2015-12-19 07:00:00 Z to 2015-12-20 19:00:00 Z. This should be enough to analyze the ringdowns. At a glance, all but one of the first harmonics in the spectrum seem to ring down.
Evan was worried that, because the violin mode damping has been turned off for two weeks, the damping could ring up a mode if something about the mode has changed (this shouldn't happen). If we ever go back to Observing again, pull up the violin mode monitor, drag all the rms values into StripTool, and monitor them to make sure no rms is rising. There will be a bunch of SDF differences related to {QUADOPTIC}_L2_DAMP; accept them. If any violin mode rings up and no commissioners are around, feel free to give me a call anytime. If a mode rings up so quickly that it breaks the lock, go to ISC_LOCK.py and uncomment the seven lines of for loop in COIL_DRIVER to turn off all the violin mode damping when you relock the next time.
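For reference, that block is just a short for loop over the damping gains; the sketch below only illustrates the idea, assuming the ezca channel interface available inside guardian states and illustrative channel/mode names, and is not the actual seven lines in ISC_LOCK.py:

    # Illustrative sketch only, not the actual ISC_LOCK.py lines.
    # Zeroing the damping gains disables violin mode damping on each quad;
    # 'ezca' is the channel-access object guardian provides to state code.
    for optic in ['ITMX', 'ITMY', 'ETMX', 'ETMY']:
        for mode in [1, 2, 3, 4, 5, 6, 7]:
            ezca['SUS-%s_L2_DAMP_MODE%d_GAIN' % (optic, mode)] = 0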
Ops Day Summary: 16:00-23:59UTC, 08:00-16:00PT
State of H1: down since 18:15UTC, high useism, winds increasing, not able to get to DRMI
Outgoing Operator: Jeff
Incoming Operator: TJ
Shift Details:
Site:
TITLE: 12/23 Eve Shift: 00:00-08:00UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Down, environment
OUTGOING OPERATOR: Cheryl
QUICK SUMMARY: The IFO is down due to highish winds (<25mph) and high useism (1.0um/s). I will continue to try and hope for the best.
Jim had an alternative sensor correction (SC) filter that gained a bit at the useism at the expense of lower-frequency noise. With the high useism and lock losses, it seemed an opportunity to see if it would improve relocking. Alas, no independent data were captured, with the IFO unable to grab and hold. Another time.
Attached are the two filters; the green is the new filter. This filters the ground seismometer signal before it is added to the CPS, giving it an inertial flavor before going into the super sensor (blend).
Jim has saved the foton files in the SVN and we've reverted the ISI back to the nominal configuration; no change in SDF.
We've continued to investigate activity around 74 Hz in order to explain the coherence line between H1 and L1 generated by stochmon.
Attached is a comprehensive set of results from the coherence tool (the full set of results can be found here). The coherence tool plots coherence with the local (H1 or L1) h(t) channel, averaged over a week with 1 mHz resolution. The attached slides show the channels with coherence structure around 74 Hz over the course of O1. You can find the relevant plots via a download link in the attached slides.
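The underlying computation is just Welch-averaged coherence between h(t) and each auxiliary channel; the sketch below is only illustrative (random stand-in data, placeholder sample rate, and a shorter stretch than the week actually used), with the segment length chosen to give the 1 mHz resolution quoted above:

    import numpy as np
    from scipy.signal import coherence

    # Illustrative sketch: Welch-averaged coherence at 1 mHz resolution
    # between h(t) and one auxiliary channel (random stand-in data here).
    fs = 256.0                 # sample rate in Hz (placeholder)
    df = 1e-3                  # desired frequency resolution in Hz
    nperseg = int(fs / df)     # 1000 s segments -> 1 mHz bins
    duration = 10000           # seconds of stand-in data
    hoft = np.random.randn(int(duration * fs))
    aux = np.random.randn(int(duration * fs))
    f, Cxy = coherence(hoft, aux, fs=fs, nperseg=nperseg)

    # Inspect the band around the 74 Hz feature
    band = (f > 73.5) & (f < 74.5)
    print(f[band], Cxy[band])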
Has any more digging gone into investigating my suggested mechanism for coherence at this frequency (see LHO aLOG 24305)? Does this channel list support or refute the claim? Tagging the SEI, SUS, AOS, ISC, and SYS groups again.
As reported in earlier alogs, the EPICS freeze-ups are caused by the rsync backups of h1boot by both its clone machine (h1boot-backup) and the tape archive machine (cdsfs1). The file system on h1boot has degraded such that a full rsync, which used to complete within an hour, now takes 10 hours. This caused many of the backup jobs to pile up, which in turn caused the front end freeze-ups.
I have been re-writing the rsync scripts to spread the rsync of h1boot by h1boot-backup over many smaller jobs. I am also not rsyncing directories which contain many thousands of files, as this always causes a freeze-up. Only h1boot-backup is rsyncing h1boot, at deterministic times; I will then rsync h1boot-backup to a third machine to provide a disk-to-disk-to-disk backup.
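A minimal sketch of the "many smaller jobs" approach (host names, directories, and the exclude list below are illustrative only, not the actual backup scripts):

    import subprocess

    # Illustrative sketch: split one big rsync of h1boot into per-directory
    # jobs, skipping trees known to contain many thousands of files.
    SOURCE = "h1boot"
    DEST_ROOT = "/backup/h1boot"
    TOP_DIRS = ["etc", "home", "opt/rtcds"]      # placeholder directory list
    EXCLUDES = ["--exclude=filter_archive"]      # placeholder exclude

    for d in TOP_DIRS:
        subprocess.run(
            ["rsync", "-a", "--delete"] + EXCLUDES +
            ["%s:/%s/" % (SOURCE, d), "%s/%s/" % (DEST_ROOT, d)],
            check=True,
        )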