Activity Log: All Times in UTC (PT)
15:00 (08:00) Take over from TJ
15:15 (08:15) Marissa – On site
15:39 (08:39) Reset ITM-Y ISI ST2 watchdog trip
15:49 (08:49) Cleared HAM3 saturations
15:55 (08:55) PSL ISS diffracted power low (6.8%, -1.90 V); adjusted to -1.96 V for 8.1% diffracted power
16:05 (09:05) Reset ITM-Y ISI ST1 watchdog trip; called Hugh R.
16:10 (09:10) Put IFO into DOWN state – only made it to DRMI-1F a couple of times. Most breaks come during ALS or FIND_IR. Wind and seismic still high.
19:50 (12:50) Tried relocking as conditions appeared to have improved
20:10 (13:10) Put IFO back into DOWN state. Wind back up – gusts to over 50 mph at End-X
20:15 (13:15) Hugh – On site
20:40 (13:40) Put HAM1 into READY state so Hugh could run TFs on HAM1
21:09 (14:09) Hugh – Going into CER to check cables on ITM-Y coil driver
21:19 (14:19) Hugh – Out of the CER – no apparent problem found
21:55 (14:55) Hugh – Leaving site
23:00 (16:00) Turn over to Travis
End of Shift Summary:
Title: 10/31/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Hugh
Incoming Operator: Travis
Shift Summary: Had two ITM-Y ISI watchdog trips this morning. The first tripped Stage 1; the second tripped Stage 2. In both cases the "Coil Driver Chassis Bio" had tripped. Was able to reset both. Called Hugh to discuss; reference aLOGs #22947 & #22977. Tried relocking as the winds had dropped to below the mid-teens. No luck. Wind picked back up, with constant 30s down the X-arm and several gusts of over 50 mph at End-X. Only made it past LOCKING_ALS one time. Put the IFO back into DOWN state until conditions improve. Put HAM1 HEPI into READY state while Hugh was running TFs on HAM1. Wind remained high through the end of the shift, with several gusts over 50 mph. Did not attempt to relock.
Final 2 bands for the full suite should be finished around 6pm. Just put the Guardian back to Robust Isolated.
With Richard's sanction, I wiggled the binary cables from the ITMY coil driver but no glitches resulted...
These are all within a four hour period with the last one ~1855utc.
There are no others since I reported on them yesterday. Actually, as I dig deeper and look at the full data (the minute trend bands looked too wide), I see that there are 8 drops of the V3 status bit, most lasting several seconds. I'm tempted to go wiggle a few cables since the IFO is not locking with the high winds.
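For reference, a minimal sketch (not the actual procedure used) of how one could count such status-bit drops in the full-rate data rather than the minute trends, assuming gwpy is available; the channel name and the nominal value of 1 are placeholders:

```python
# Minimal sketch: count drops of a status bit in full-rate data.
# Channel name and nominal value below are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = "H1:ISI-ITMY_BIO_STATUS"   # placeholder, not the real channel name
data = TimeSeries.get(CHAN, "2015-10-31 15:00", "2015-10-31 19:00")

low = (data.value < 1).astype(int)        # assume nominal value is 1
starts = np.flatnonzero(np.diff(np.concatenate(([0], low))) == 1)
ends = np.flatnonzero(np.diff(np.concatenate((low, [0]))) == -1) + 1
durations = (ends - starts) / data.sample_rate.value

print(f"{len(starts)} drops, durations (s): {np.round(durations, 2)}")
```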
Oh yes, Jeffery reminded me that even the rogue excitation alarm triggered on more than half of these. That occurs when we still have DAC drives while the WD is tripped, or something like that. I'm not worried about that aspect, but it does underscore how irritating this problem is.
Wind appeared to be dropping (down into the mid-teens) and seismic activity was also coming down. At 19:50 (12:50) tried relocking. The wind came back up at almost the same time. Only made it to LOCKING_ALS one time. At 20:10 (13:10) put the IFO back into a down state until conditions improve. 20:30 (13:30) recording constant winds into the 30s & 40s with gusts at End-X well over 50 mph. (Just had all 5 weather stations reporting sustained over 30 mph).
Environmental conditions remain difficult, but may be improving. Wind has dropped to be consistently below 20mph. Seismic activity is also dropping a bit, but is still high. Having trouble with the ITM-Y "Coil Driver Chassis Bio" and ST1 and ST2 watchdog trips (see #22947). Spoke with Hugh earlier in the shift and have a call into him as these events are becoming more frequent. If conditions continue to improve and ITM-Y gets sorted out, will attempt to relock the IFO.
After 1.5 hours of attempting to relock, only made it to DRMI-1F twice before lock loss. Most lock breaks occurred during ALS or FIND_IR. Environmental conditions are not favorable for relocking. Will try to relock when conditions improve.
Carlos, Dave:
The CDS /ligo file system got filled to 100% yesterday. We temporarily made space by removing a large backup and are working on extending/replacing this file system. In the meantime, if you can compress or remove any large files (they are all on tape backup), that would be appreciated.
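If it helps, here is a small, generic sketch (not an official CDS tool) for listing the largest files under a directory tree to decide what to compress or remove; the path is illustrative:

```python
# Generic sketch: list the largest files under a directory tree.
import os

def largest_files(root, n=20):
    sizes = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # skip unreadable files / broken symlinks
    return sorted(sizes, reverse=True)[:n]

for size, path in largest_files("/ligo/home"):   # illustrative path
    print(f"{size / 1e9:7.2f} GB  {path}")
```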
Title: 10/31/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: 15:00 (08:00) IFO is down due to environmental conditions.
Outgoing Operator: TJ
Quick Summary: Wind is a fresh to strong breeze (17 - 27 mph) with gusts in the mid-30s. Microseism is elevated due to winds and coastal storm conditions. Have been unsuccessful at relocking. Will continue to attempt to relock; if not successful, will put the IFO into a down state until conditions improve.
Title: 10/31 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Unlocked
Shift Summary: Locked for about 6 hours of my shift, then lost it at 12:41. I brought it all the way up only to lose it while running the A2L script. Winds are picking up into the low 30s now, and the forecast calls for gusts up to 50 mph today (forecasts normally run much lower than actual speeds). useism is at 0.4 um/s.
Side note: ran into a few errors in different places reporting "disk space full." Patrick sent Dave an email about this last night, but this is preventing me from posting screenshots, so sorry for the lack of lockloss info posted.
Incoming Operator: Jeff B
Activity Log:
Since I can't post screenshots, I will write it out. Like Sheila said in alog22982, AS90 was on the low side at about 250 counts, POP18 was around 130, and POP90 was at 85.
Lost lock at Nominal Low Noise while I was running the A2L script (13:54 UTC).
Let's try this again.
Lockloss at 12:41 UTC
Wind ~20mph, useism 0.4um/s. I will post plots momentarily.
Observing @ 72Mpc
Environment:
For the Operator Thursday Maintenance item that was missed this week, I topped off the chiller with 150 mL.
Title: 10/30 Eve Shift 23:00-7:00 UTC (16:00-24:00 PDT). All times in UTC.
State of H1: Observing
Shift Summary: After the wind calmed to a reasonable level a few hours into my shift, I began a slow progression of locking. I got to do a few things that were new to me, such as being involved in restarting a (really) frozen guardian node, and engaging the ISS 2nd loop manually. Now in Observing after engaging the 45 mHz blends. Microseism coming up slowly. Winds calm, at last.
Incoming operator: TJ
Activity log:
0:07 Jeff K starting charge measurements
1:28 Jeff K done
1:52 HAM 1 HEPI restored after Hugh's measurements finished
1:53 Start locking
2:30 Adjusted ISS ref. signal after diffracted power low notification
3:01 Jeff K attempts restart of PRM guardian node
4:45 Start initial alignment after many unsuccessful locking attempts
5:15 PRMI to DRMI transition unsuccessful
5:51 GRB alert
6:00 45 mHz blends turned on
6:26 Manually engaged ISS 2nd loop after it got stuck
6:34 Observing
I made some plots to get a sense of how wind and microseism are affecting our duty cycle so far in O1. The first plot is a histogram of duty cycle vs percentiles of wind/microseism. The second plot is a timeseries that shows the microseism and wind plotted for all of O1 (until Oct 31 00:00 UTC) with the percentiles of each superimposed and locked segments along the bottom. All data are minute trend maximums. I used the flag H1:DMT-DC_READOUT_LOCKED for the locked state.
Comparing with long term wind / microseism statistics:
8-year wind study from Margarita: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=12996
Long term study of the seismic environment: P040015
Below I have included a table of actual values for each percentile. The 8-year study of wind has higher values (I inferred the 8-year percentiles from the first plot in Margarita's aLog). Note that the 8-year study used hourly max trend data as opposed to minute trends. Also, keep in mind that Margarita's study has already shown that historically September/October are less windy than the spring/summer months.
The long term microseism numbers (taken from fig. 5 of the paper) agree more closely with O1 results so far.
| Percentile | O1 Wind [MPH] | Eight Year Wind [MPH] | O1 Microseism [nm/s] | Long Term Microseism [nm/s] |
| 50 | 6 | 11 | 255 | 200 |
| 70 | 10 | 15 | 359 | 300 |
| 80 | 13 | 20 | 442 | 400 |
| 90 | 16 | 25 | 555 | 550 |
| 95 | 21 | 30 | 650 | 700 |
The code used to generate these plots is located here: https://ldas-jobs.ligo-wa.caltech.edu/~jordan.palamos/duty_cycle/
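As a simplified illustration of the calculation (a rough sketch, not the linked code), the snippet below computes the locked duty cycle in each decile of the wind distribution; the wind channel name and the flag version suffix are assumptions, while the flag itself is the one named above:

```python
# Rough sketch: locked duty cycle per decile of the wind distribution.
# Wind channel name and flag version suffix are assumptions.
import numpy as np
from gwpy.timeseries import TimeSeries
from gwpy.segments import DataQualityFlag

start, end = "2015-09-18", "2015-10-31"
wind = TimeSeries.get("H1:PEM-CS_WIND_ROOF_WEATHER_MPH.max,m-trend",
                      start, end)
locked = DataQualityFlag.query("H1:DMT-DC_READOUT_LOCKED:1", start, end)

# Mark each minute-trend sample as locked or not.
times = wind.times.value
is_locked = np.array([t in locked.active for t in times])

# Duty cycle within each decile of the wind-speed distribution.
edges = np.percentile(wind.value, np.arange(0, 101, 10))
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (wind.value >= lo) & (wind.value < hi)
    if sel.any():
        print(f"{lo:5.1f}-{hi:5.1f} mph: duty cycle {is_locked[sel].mean():.0%}")
```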
This is a followup on Ed's alog22918, where he reported seeing MICH glitch around the same time the DHARD magnitude became large. Miquel found that the MICH "breathing" was actually a result of bad FFT windowing, but it could have hidden the 8.6 Hz line that became apparent in DARM the morning of October 28th (and disappeared the next day). Attached plots 1, 2 and 4, 5 show spectra of DHARD pitch, MICH, and ground motion near HAM2 during the DHARD oscillation times reported in alog 22918 and alog 22875, compared to their nominal spectra. Plots 3 and 6 are the timeseries of DHARD pitch and yaw during those times. The DHARD oscillation frequency is ~21 mHz in the 10/28 plot and ~39 mHz in the 10/27 plot. To show that this oscillation has nothing to do with the MICH 8.6 Hz glitch, I've attached a set of spectra and timeseries from the time the 8.6 Hz line began to make its appearance in DARM (plots 7, 8, and 9). The low-frequency oscillation isn't there, and the 8.6 Hz peak shows up clearly in every channel.
I will continue to look for any environmental factor that might have caused the DHARD instability, although Jenne did mention that a wrong gain could have caused the loop to become unstable. I was told that the gain hadn't changed for months until somebody lowered it recently to help with the oscillation (Kiwamu?).
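As an aside on the windowing point above, here is a small generic illustration (all signal parameters are made up, nothing site-specific) of how a rectangular FFT window lets leakage from a strong off-bin low-frequency component bury a weak 8.6 Hz line that a Hann window leaves clearly visible:

```python
# Illustration of spectral leakage: weak 8.6 Hz line next to a strong
# off-bin low-frequency component, rectangular vs Hann window.
import numpy as np
from scipy.signal import welch

fs = 256.0
t = np.arange(0, 64, 1 / fs)
x = 100 * np.sin(2 * np.pi * 0.37 * t) + 0.01 * np.sin(2 * np.pi * 8.6 * t)

for win in ("boxcar", "hann"):
    f, pxx = welch(x, fs=fs, window=win, nperseg=4096)
    band = (f > 7) & (f < 10)
    line = pxx[np.argmin(abs(f - 8.6))]
    floor = np.median(pxx[band])
    print(f"{win:>6}: line-to-floor ratio ~ {line / floor:.3g}")
```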
While locking DRMI, we noticed that the POP_90 signal looked strangely much larger than it normally does. Evan noticed that PRM was still aligned during this phase when it (and SRM) should normally be misaligned. The PRM guardian was showing that the request was 'MISALIGNED' while the state readback still showed 'ALIGNED'. Looking at the guardian log for PRM, I see that it stopped logging ~3 hours prior to starting to attempt locking. See attached screenshot for error messages.
J. Kissel, N. Kijbunchoo, T. Sadecki
We'd tried several things to resurrect / fix the problem:
- Switching the operation mode from EXEC to PAUSE and back, and from EXEC to STOP and back,
- Restarting the node from the command line,
- Stopping the node from the command line,
- Destroying the node from the command line
all to no avail.
This doesn't seem to be a show-stopping problem, so we're just going to continue as is and email people.
The procedure given in LHO aLOG entry 16880 has been 100% successful in restoring hung Guardian nodes at LLO. We have found that DAQ restarts are usually responsible for causing nodes to hang, hence we reboot the Guardian/script machine following Tuesday maintenance as a preventative measure. n.b. Jamie has also provided a script to help expedite identifying the hung nodes, see LLO aLOG entry 20615.
J. Kissel, T. Sadecki
Stuart! You rock! We followed the "procedure" from LHO aLOG 16880, and now SUS_PRM is no longer a member of the walking dead. The PRM node is now responsive to requests and has been remanaged by the ALIGN_IFO manager. Very good!
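For completeness, a very rough sketch (not the LLO script referenced above) of one way hung Guardian nodes could be flagged from EPICS: read a channel that should change while the node executes, wait, and read it again. The _EXECTIME channel suffix and the node list here are assumptions on my part, not verified names:

```python
# Rough sketch: flag Guardian nodes whose EXECTIME channel stops updating.
# The _EXECTIME suffix and the node list are assumptions.
import time
from epics import caget

NODES = ["SUS_PRM", "SUS_SRM", "ALIGN_IFO"]   # example nodes only

first = {n: caget(f"H1:GRD-{n}_EXECTIME") for n in NODES}
time.sleep(10)
for n in NODES:
    if caget(f"H1:GRD-{n}_EXECTIME") == first[n]:
        print(f"{n}: no update in 10 s -- possibly hung")
```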