I have managed to lock PRMI two times, but could not advance from there. Since my two locks, it has been harder to keep ALS locked. Not going so well.
useism still ~1um/s, wind <30mph.
Today Ken (S&K Electric) and I replaced 5 heating elements in Zone 2A and 2 in Zone 1B. There were several elements burned out in Zone 3A so we relocated the remaining elements in order to bring these up sequentially as they were designed to do. These elements are a different style and we did not have replacements. I will order more after the first of the year. With the exception of 1A, which has some issues that will be addressed after the break, we are now able to turn on heat in all of the zones.
Manually filled CP3 at 22:43 UTC; opened the fill valve by one full turn; time to see LN2 out of the exhaust was 4:00 minutes.
At Nutsinee's request, I have re-enabled violin mode damping in full lock.
I turned off the extra 500 Hz stopbands in the PUM pitch/yaw filter modules, and I have commented out the code in the COIL_DRIVERS state which disables the damping.
We had 36 hours of undisturbed data from roughly 2015-12-19 07:00:00 Z to 2015-12-20 19:00:00 Z. This should be enough to analyze the ringdowns. At a glance, all but one of the first harmonics in the spectrum seem to ring down.
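For a rough sense of what that analysis looks like, here is a minimal sketch of a ringdown-time estimate from the band-limited RMS of a single mode; the synthetic data, sampling cadence, and ~10-hour decay constant are placeholders for illustration, not results from the actual ringdown study.

```python
# Minimal sketch of a single-mode ringdown estimate -- not the actual analysis.
# The band-limited RMS time series here is synthetic (placeholder decay time);
# in practice it would come from the 36 hours of undisturbed data.
import numpy as np

minutes = np.arange(36 * 60)                         # 36 hours, one sample per minute
rms = 1e-3 * np.exp(-minutes / (10 * 60.0)) \
      * (1 + 0.02 * np.random.randn(minutes.size))   # assumed ~10 h decay, 2% noise

# Fit log(RMS) = log(A) - t/tau, i.e. a pure exponential ringdown
slope, _ = np.polyfit(minutes, np.log(rms), 1)
tau_hours = -1.0 / slope / 60.0
print("fitted decay time ~ {:.1f} hours".format(tau_hours))
```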
Evan was worried that, because the violin mode damping has been turned off for two weeks, re-enabling it could ring up a mode if something about the mode has changed (this shouldn't happen). If we ever go back to Observing again, pull up the violin mode monitor, drag all the rms values into StripTool, and monitor them to make sure no rms is rising. There will be a bunch of SDF differences related to {QUADOPTIC}_L2_DAMP; accept them. If any violin mode rings up and no commissioners are around, feel free to give me a call anytime. If a mode rings up so quickly that it breaks the lock, go to ISC_LOCK.py and uncomment the seven lines of for-loop in COIL_DRIVERS to turn off all the violin mode damping when you relock the next time.
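For reference, here is a minimal sketch of what that kind of loop looks like in Guardian code. This is not the actual ISC_LOCK.py excerpt: the optic list, number of modes, and channel names are assumptions for illustration only.

```python
# Hypothetical sketch only -- not the seven lines actually in ISC_LOCK.py.
# Guardian code writes EPICS records through the `ezca` object provided by
# the Guardian environment; the channel names and mode count are assumed.
QUADS = ['ITMX', 'ITMY', 'ETMX', 'ETMY']

for optic in QUADS:
    for mode in range(1, 11):           # assumed number of damped violin modes
        ezca['SUS-{0}_L2_DAMP_MODE{1}_GAIN'.format(optic, mode)] = 0
```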
Ops Day Summary: 16:00-23:59 UTC, 08:00-16:00 PT
State of H1: down since 18:15 UTC, high useism, winds increasing, not able to get to DRMI
Outgoing Operator: Jeff
Incoming Operator: TJ
Shift Details:
Site:
TITLE: 12/23 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Down, environment
OUTGOING OPERATOR: Cheryl
QUICK SUMMARY: The IFO is down due to highish winds (<25mph) and high useism (1.0um/s). I will continue to try and hope for the best.
Jim had an alternative sensor correction (SC) filter that gains a bit at the microseism at the expense of more low-frequency noise. With the high useism and the lock loss, it seemed a good opportunity to see if it would improve relocking. Alas, no independent data was captured, with the IFO unable to grab and hold. Another time.
Attached are the two filters; the green is the new filter. It filters the ground seismometer signal before it is added to the CPS, giving it an inertial flavor before going into the super sensor (blend).
Jim has saved the foton files in the SVN, and we've reverted the ISI back to the nominal configuration; no change in SDF.
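As a toy illustration of that signal path: the filter below is a generic band-pass stand-in, not Jim's foton design, and the sample rate and data are placeholders.

```python
# Toy illustration of sensor correction: filter the ground seismometer
# signal and add it to the CPS before the blend / super sensor.
# The filter and sample rate are stand-ins, not the actual ISI configuration.
import numpy as np
from scipy import signal

fs = 64.0                                     # assumed sample rate
t = np.arange(int(600 * fs)) / fs
ground_sts = np.random.randn(t.size)          # placeholder ground STS signal
cps = np.random.randn(t.size)                 # placeholder CPS (relative) signal

# Stand-in sensor-correction filter emphasizing the microseism band
sos = signal.butter(4, [0.05, 0.5], btype='bandpass', fs=fs, output='sos')
corrected_cps = cps + signal.sosfilt(sos, ground_sts)
```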
We've continued to investigate activity around 74 Hz in order to explain the coherence line between H1 and L1 generated by stochmon.
Attached is a comprehensive set of results from the coherence tool (the full set of results can be found here). The coherence tool plots coherence with the local (H1 or L1) h(t) channel, averaged over a week with 1 mHz resolution. The attached slides show the channels that exhibit coherence structure around 74 Hz over the course of O1. You can find relevant plots via a download link in the attached slides.
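For context, 1 mHz frequency resolution corresponds to 1000 s FFT segments. Below is a minimal sketch of such a coherence estimate using scipy; this is not the actual coherence tool, and the sample rate, data length, and injected 74 Hz line are placeholders.

```python
# Minimal sketch of a segment-averaged coherence estimate at 1 mHz resolution.
# Placeholder data: a few hours of noise with a weak common 74 Hz line, standing
# in for a week of h(t) and an auxiliary channel.
import numpy as np
from scipy.signal import coherence

fs = 256.0                       # assumed sample rate after decimation
nperseg = int(1000 * fs)         # 1000 s segments -> 1 mHz frequency resolution

t = np.arange(int(4 * 3600 * fs)) / fs
line = np.sin(2 * np.pi * 74.0 * t)
hoft = np.random.randn(t.size) + 0.1 * line
aux = np.random.randn(t.size) + 0.1 * line

f, coh = coherence(hoft, aux, fs=fs, nperseg=nperseg)
band = (f > 73.9) & (f < 74.1)
print("peak coherence near 74 Hz:", coh[band].max())
```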
Has any more digging gone into investigating my suggested mechanism for coherence at this frequency (see LHO aLOG 24305)? Does this channel list support or refute the claim? Tagging the SEI, SUS, AOS, ISC, and SYS groups again.
As reported in earlier alogs, the EPICS freeze-ups are caused by the rsync backups of h1boot by both its clone machine (h1boot-backup) and the tape archive machine (cdsfs1). The file system on h1boot has degraded such that a full rsync, which used to complete within an hour, now takes 10 hours. This caused many of the backup jobs to pile up, which in turn caused the front-end freeze-ups.
I have been rewriting the rsync scripts to spread out the rsync of h1boot by h1boot-backup over many jobs. I am also not rsyncing directories which contain many thousands of files, as this always causes a freeze-up. Only h1boot-backup rsyncs h1boot, at deterministic times; I will then rsync h1boot-backup to a third machine to provide a disk-to-disk-to-disk backup.
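A minimal sketch of the split-into-many-jobs idea is below. This is not the production script; the host name, directory list, and exclude patterns are purely illustrative, and in practice the jobs would be scheduled at staggered times rather than run back to back.

```python
#!/usr/bin/env python
# Sketch of spreading one monolithic rsync over many smaller jobs,
# skipping directories with many thousands of small files.
# Host names, paths, and exclusions are illustrative only.
import subprocess

SOURCE_HOST = 'h1boot'                     # assumed rsync source
DEST_ROOT = '/backup/h1boot'               # assumed local destination root

# One job per top-level directory instead of one monolithic rsync,
# so each job stays short.
TOP_DIRS = ['etc', 'opt/rtcds', 'home/controls']          # illustrative list

# Directories with huge file counts are excluded entirely.
EXCLUDES = ['--exclude=*/logs/*', '--exclude=*/burt/*']   # illustrative

for d in TOP_DIRS:
    cmd = ['rsync', '-a', '--delete'] + EXCLUDES + [
        '{0}:/{1}/'.format(SOURCE_HOST, d),
        '{0}/{1}/'.format(DEST_ROOT, d),
    ]
    subprocess.call(cmd)
```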
For reference, here is an email I sent today to lho-all reporting the recent CDS backup system problems and the short-term and long-term solutions:
Over the past few weeks we have developed problems with the CDS backup systems, namely:
Yesterday, the masses on the STS2s were centered, but I should have monitored the mass positions more in real time, as the centering doesn't always take the first or second time around. While we maybe did improve things, there are still problems: six of the nine masses (3 masses on each of 3 STS2s) were out of spec before centering, and 3 of the 9 are still out of spec after.
At EndX, the STS2 had two masses out of spec by about 50% (7500 cts vs 5000); after centering, one of those came into spec but the other overshot and is now out by a factor of 5! Attached are ASDs of before and after. The refs are from 2am Tuesday and the currents are from 2am this morning. The useism is likely a bit higher early this morning (though not as high as it is right now). Nothing looks glaringly wrong to me here. In general the current traces are higher than the references, but again that could just be the actual ground motion. I thought such a large mass position problem would be obvious, but it is not to me. I'll do a comparison of LVEA instruments at the same time.
Here are two-day trends of the three corner station STS2 mass positions. The bottom row shows the U masses, the middle the V masses, and the top the W masses. The first column, B2, is STS2-B (ITMY); the middle column, B1, is STS2-A, located near HAM2; and the third column, B3, is STS2-C, near HAM5.
Currently, STS2-B is the unit used for all of the corner station sensor correction. Yesterday's centering step is clearly visible; please note that the U mass shifted a lot but is still equally out of position. B1, the HAM2 unit, is not in service and was not centered yesterday. I do not understand why there is a glitch to zero at that time. We did connect a DVM to the Pos spigots... maybe we toggled the UVW/XYZ switch and that is the glitch. The third column shows the HAM5 B3 unit, which we did center, although it looks like we really did not need to. Please note the various noise levels on these channels, and especially how that changed on the B3 V signal; what's up with that?
Here is the spectra comparison at ETMY, where the zeroing seems to have improved all the mass positions, although one is still out of spec by maybe 50%. Similar story to the EndX assessment above: nothing obvious given the potentially different ground motion. However, if you compare the EndX and EndY spectra, it sure would appear that EndY sees much more tilt than EndX.
Here is the promised same-time comparison after centering for the STS-B and C units. Again, the centering on the C unit was probably not needed (we were testing the process), and the centering on the B unit just sent it to the other side of zero with no real improvement in position.
A couple of things from this. See how the traces are almost exactly the same from 0.1 to 6 Hz on the Y DOF? That suggests these two instruments are doing well in that channel. Below about 60 mHz, tilt starts to get in the way; above 6 Hz, wavelengths bring about differences. On the X and Z axes, the difference between the spectra in this region suggests a problem in those DOFs. We suspect STS2-B, the ITMY unit. Now this could be because of the poor centering, or maybe the difficulty in centering is a common symptom of the underlying instrument problem.
Plot shows ETMX and ETMY Stage 1 X location and Y location over the last 24 hours.
During the last 24 hours, useism has increased from 0.4um/s to 1um/s.
ETMY ISI (left, top and bottom) shows a steady increase in the amplitude of the oscillations in X Location and Y Location.
ETMX ISI (right, top and bottom) shows an increase in the amplitude of the oscillations in X location and Y location; however, the increase in amplitude, and the signals themselves, are anything but smooth.
Activity Log: All Times in UTC (PT)
08:00 (00:00) Take over from TJ
12:00 (04:00) Dozens of ETM-Y saturation alarms concurrent with RF45 noise.
12:45 (04:45) Temperature alarm on CS Zone 4 duct.
12:58 (04:58) Temperature alarm on CS Zone 1A duct.
13:27 (05:27) The ETM-Y/RF45 saturations have stopped.
15:19 (07:19) Three ETM-Y/RF45 saturations in 8 seconds.
15:44 (07:44) Bubba – into the cleaning area to get a headset
15:46 (07:46) Bubba – out of the cleaning area
16:00 (08:00) Turn over to Cheryl
End of Shift Summary:
Title: 12/22/2015, Owl Shift 08:00 – 16:00 (00:00 – 08:00) All times in UTC (PT)
Support:
Incoming Operator: Cheryl
Shift Detail Summary: Starting at 12:00 (04:00), dozens of ETM-Y saturations with simultaneous RF45 saturations. These continued until 13:27 (05:27), with a few additional saturations afterward. Other than the RF45 saturations, it was a good night for data collection. Seismic motion and microseism continue to increase, as does the wind.
Starting at 12:00 (04:00), we were getting dozens of RF45 and concurrent ETM-Y saturations. By 12:55 (04:55) the rate of saturations had slowed, but they continued.
The last saturation was at 13:27 (05:27).
The alarm volume was so low this morning that I couldn't hear the alarms, while the verbal alarm volume was reasonable; of course, when I tried to raise the alarm volume, the verbal alarms became too loud.
JimB discovered that the sound settings on the computer are separated into "alarms" and "computer" volume sliders, and the alarm volume was set very low.
I discovered that to have the alarm volume about the same as the verbal alarm volume, the alarm slider needs to be at maximum and the verbal alarms need to be at about 1/2 of the total possible volume.
Picture attached.
I increased the power on the BS and SR3 oplev lasers this morning by ~5%. I used the voltage readout of the "Current Monitor" port on the back of the laser (this port monitors the laser diode current and outputs a voltage). The old and new values are listed below, as well as the new SUM counts. Since I had to open the coolers the lasers are housed in, they will need ~4-6 hours to return to thermal equilibrium; at that point I can assess whether or not the glitching has improved.
Links to today's DetChar pages (look at the SUM spectrograms, both standard and normalized, at the top of the page) for these oplevs, showing data after the power increase and subsequent return to thermal equilibrium (data starts at 0:00 UTC 12/23/2015, or 16:00 PST 12/22/2015): BS, SR3. As can be seen, SR3 quieted down significantly (go back a day or two and compare); one more small power adjustment should take care of it, which I will do at the next opportunity (probably the next maintenance day, 1/5/2016). The BS, on the other hand, has not seen much improvement. I'm starting to think maybe I've overshot the stable range... Will perform another adjustment (probably also on the next maintenance day), this time targeting a region between my last 2 adjustments. I will keep WP5667 open as this work is still ongoing.
On a brighter note, I have been testing the laser removed from the BS oplev in the LSB optics lab and can find nothing wrong with it. I'm currently adjusting the TEC temperature setpoint to bring the stable region to a more useful power level, and the laser will then be ready for re-installation (back into the BS if the current laser proves too finicky, or into another oplev, likely ITMy).