Level was at 5.4 cm. Took 250 mL to get it up to 8.9 cm. (Nutsinee last filled at ~3am with a similar amount.)
(Corey, Jason on phone)
Last night, Nutsinee came in early for her OWL shift & just before midnight the PSL tripped. We opted to leave it tripped for the night and wait for Jason to walk us through the recovery procedure. At least for now, we always want to run through this procedure with one of our PSL people (Jason or Peter K). Here are my rough notes:
Now on to "Aligning" on Observatory Mode!
PSL Status NOTE:
TITLE: 10/02 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Unknown
OUTGOING OPERATOR: ...No full OWL shift
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 6mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
PSL tripped again last night. Talked with Ed & Nutsinee about it during their shifts. Have sent an email/text to Jason about bringing back the PSL & will address soon (Ah, Jason just texted back. He'll walk me through the procedure over the phone momentarily.)
Then we'll see how H1 is after an Initial Alignment....hopefully ITMy Bounce is less bouncy.
What Else: Robert is here.
Got a TCSY low flow rate alarm, so I went out to check on the chiller. Water level was at the 5.6 cm line. Added 250 mL. Now at 9 cm.
I've tried calling several people who I believe know how to untrip the PSL and might still be awake, but none picked up my call (except for Ed, but he had only done it with Jason walking him through, so he couldn't help me). I was also told that the PSL card in the key box cannot access the chiller diode room.
I will hang around the control room for a little bit to see if anyone calls back.
04:03 At the last lockloss, upon returning from filling the TCS chiller, I found the PSL tripped off again. Jason talked me down and everything is back on; the subsystems are all locked and happy.
Didn't bring a pen with me so the paper log isn't filled in.
After about a 2.5 hour stand-down time I started Initial Alignment at ≈02:00 UTC. I started locking the IFO at ≈02:48 and got to ENGAGE_REFL_POP_WFS. The bounce mode is still "bouncing", so I decided to play with the damping filters a bit. Nothing I did really made a difference in the DARM peak until, of course, I clicked once too many times and broke the lock that I had. Stefan popped in for a brief while with Alexa (yes, that's right....Alexa), but they seem to be gone now. I'll inform Nutsinee that I probably won't be staying much longer, as I won't be able to do much with this situation and no other commissioners are due here, that I recall.
TITLE: 10/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1:
INCOMING OPERATOR: Ed M.
SHIFT SUMMARY:
Guardian was back, but so was a troublesome ITMy Bounce Mode. Currently leaving H1 DOWN in an attempt to let it ring down.
LOG:
The bounce mode was rung up very high today (>1e-13 m RMS in DARM). Worse yet, anything I tried to do just seemed to make it worse.
I tried both previous settings (see Keita's and Jenne's alogs below, and the illustrative sketch after those links), but I was just ringing it up more.
The measured frequency today was 9.8320 ± 0.005 Hz.
In the end it was so bad that ITMY kept saturating, so I decided to leave the interferometer down for a few hours.
Keita's elog on ITMY damping: 27680
Jenne's elog on ITMY damping: 29888
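For context on what those settings changes look like in practice: the knobs are the gain (magnitude and, importantly, sign) of the ITMy bounce-mode damping filter bank and which filter modules are engaged, usually poked via ezca. The sketch below is only illustrative; the filter-bank name, module names, and values are my assumptions, not the actual settings, which are documented in the two alogs above.

# Illustrative sketch only -- the filter-bank name and values here are
# assumptions, not the documented ITMY bounce-mode settings (see 27680/29888).
from ezca import Ezca

ezca = Ezca(ifo='H1')              # EPICS access, as used in Guardian code
DAMP = 'SUS-ITMY_M0_DARM_DAMP_V'   # hypothetical top-mass vertical (bounce) damping bank

# A sign flip at reduced gain is the usual first try when a damping loop
# appears to be anti-damping (ringing up) the mode:
old_gain = ezca[DAMP + '_GAIN']
ezca[DAMP + '_GAIN'] = -0.1 * old_gain

# Changing which phase-compensation filter module is engaged is the other
# common change (the FM names here are placeholders):
ezca.switch(DAMP, 'FM1', 'OFF')
ezca.switch(DAMP, 'FM2', 'ON')

If the mode keeps growing after changes like these, the safe move is exactly what was done here: zero the gain and let the mode ring down on its own.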
As we tried to go up to LOWNOISE_ESD_ETMY earlier, we could clearly see several lines in the DARM spectra, and these were traced to a BOUNCE mode (ITMy, to be exact). Stefan has been trying various gain changes & filter changes, but everything seemed to just excite the mode more.
Right now we're turning off Locking to let the system quiet down. Then we'll get back at it.
TITLE: 10/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Unknown (it was in "Commissioning", but should have been taken to Unknown at about 10pm PDT last night due to Guardian issues)
OUTGOING OPERATOR: None (Owl shift canceled due to Guardian issue)
CURRENT ENVIRONMENT:
Wind: 19mph Gusts, 13mph 5min avg. Looks like there has been wind at each end station over the night.
Primary useism: 0.03 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
P.S. Since it's the weekend, I went ahead and turned off:
01:26 Switched ISI blends to EQ v2. EQ-band Z-axis ground motion reached up to ≈1 µm/s. ISC_LOCK Guardian set to DOWN, waiting for ringdown.
03:43 Switched ISI config back to nominal state - WINDY
04:52 Guardian had an error party: connection errors, first on ALS_YARM, then ALS_DIFF, and then ISC_LOCK
ISC_LOCK, ALS_DIFF, and IFO were all showing connection errors because they lost connection to the ALS_YARM guardian, which had unceremoniously died:
2016-10-01T04:47:53.36265 ALS_YARM [INITIAL_ALIGNMENT.enter]
2016-10-01T04:47:53.36943 ALS_YARM [INITIAL_ALIGNMENT.main] ezca: H1:ALS-C_LOCK_REQUESTY => End Locked
2016-10-01T04:47:53.39004 ALS_YARM [INITIAL_ALIGNMENT.main] timer['pause'] = 10
2016-10-01T04:48:03.38190 ALS_YARM [INITIAL_ALIGNMENT.run] timer['pause'] done
2016-10-01T04:50:05.98295 ALS_YARM REQUEST: GREEN_WFS_OFFLOADED
2016-10-01T04:50:05.98317 ALS_YARM calculating path: INITIAL_ALIGNMENT->GREEN_WFS_OFFLOADED
2016-10-01T04:50:05.98360 ALS_YARM new target: OFFLOAD_GREEN_WFS
2016-10-01T04:50:06.04919 ALS_YARM EDGE: INITIAL_ALIGNMENT->OFFLOAD_GREEN_WFS
2016-10-01T04:50:06.04958 ALS_YARM calculating path: OFFLOAD_GREEN_WFS->GREEN_WFS_OFFLOADED
2016-10-01T04:50:06.04995 ALS_YARM new target: GREEN_WFS_OFFLOADED
2016-10-01T04:50:06.05074 ALS_YARM executing state: OFFLOAD_GREEN_WFS (-21)
2016-10-01T04:50:06.05096 ALS_YARM [OFFLOAD_GREEN_WFS.enter]
2016-10-01T04:50:06.05214 ALS_YARM [OFFLOAD_GREEN_WFS.main] starting smooth offload
2016-10-01T04:50:06.05215 ALS_YARM [OFFLOAD_GREEN_WFS.main] ['ITMY', 'ETMY', 'TMSY']
2016-10-01T04:50:06.55046 ALS_YARM stopping daemon...
2016-10-01T04:50:06.62930 ALS_YARM daemon stopped.
2016-10-01T04:50:07.48941 Traceback (most recent call last):
2016-10-01T04:50:07.48946   File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
2016-10-01T04:50:07.48954     "__main__", fname, loader, pkg_name)
2016-10-01T04:50:07.48959   File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
2016-10-01T04:50:07.48963     exec code in run_globals
2016-10-01T04:50:07.48968   File "/ligo/apps/linux-x86_64/guardian-1.0.0/lib/python2.7/site-packages/guardian/__main__.py", line 262, in <module>
2016-10-01T04:50:07.49240     guard.run()
2016-10-01T04:50:07.49263   File "/ligo/apps/linux-x86_64/guardian-1.0.0/lib/python2.7/site-packages/guardian/daemon.py", line 452, in run
2016-10-01T04:50:07.49308     raise GuardDaemonError("worker exited unexpectedly, exit code: %d" % self.worker.exitcode)
2016-10-01T04:50:07.49380 guardian.daemon.GuardDaemonError: worker exited unexpectedly, exit code: -11
2016-10-01T04:50:07.61754 guardian process stopped: 255 0
As the error indicates, the worker process apparently died without explanation (the -11 exit code corresponds to being killed by a segmentation fault), which is not at all nice.
I restarted the ALS_YARM node with guardctrl and it came back up fine. The rest of the nodes recovered their connections soon after. As of right now all nodes appear to be functioning normally.
This "worker exited unexpectedly" error isn't one I've seen much at all, so I'm very curious what could have caused it.
Logged an FRS (6338) for this. Jamie was able to get everything back before ~2am, but since we canceled the OWL shift (in this pre-ER10 epoch, operators are informed to NOT wake up help in the middle of the night), instead of being down 4hrs it was more like 10hrs.
Since this issue was fixed, the above FRS can now be closed.
We repeatedly had CHARD run away when switching to LOWNOISE_ASC.
The investigation was not helped by the lockloss tool, which suddenly started crashing. Guardian also started having issues (connection errors, white EPICS channels).
For a partial explanation of the Guardian issue, see the comment on the next log.
I'm guessing that the issue with the lockloss tool might have been an overly long delay in finding the latest lockloss times, due to excessively verbose logging from the ISC_LOCK node while it's in connection error. This mostly exposes the weakness of the lockloss tool relying on parsing the ISC_LOCK node logs to determine lockloss times, but it secondarily points to the logs being perhaps unnecessarily verbose under these particular connection-error conditions. If the lockloss tool problem was *not* due to a long wait time for returning the list of lockloss times, please let me know what the error was so that I can investigate.
I have an improved version of the lockloss tool that finds locklosses much faster via NDS. I'll push it out after I push a minor guardian update on Tuesday. It should make the lockloss tool much faster and more robust.
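For anyone curious what "via NDS" means here, the gist (my own rough sketch below, not the actual tool) is to fetch the ISC_LOCK state-number channel from an NDS server and look for downward transitions out of the locked states, instead of grepping the node logs. The channel name, NDS host/port, and the "locked" threshold are assumptions for illustration.

# Rough sketch of finding locklosses from the Guardian state channel over NDS.
# Not the actual lockloss tool; channel name, NDS host/port, and the state
# threshold are assumptions.
import nds2

CHANNEL = 'H1:GRD-ISC_LOCK_STATE_N'
LOCK_THRESHOLD = 600   # e.g. NOMINAL_LOW_NOISE; treat >= this as "locked"

def find_locklosses(gps_start, gps_stop,
                    host='nds.ligo-wa.caltech.edu', port=31200):
    conn = nds2.connection(host, port)
    buf = conn.fetch(gps_start, gps_stop, [CHANNEL])[0]
    data = buf.data
    rate = float(len(data)) / (gps_stop - gps_start)   # samples per second
    times = []
    for i in range(1, len(data)):
        # a drop from a locked state to below the threshold marks a lockloss
        if data[i-1] >= LOCK_THRESHOLD and data[i] < LOCK_THRESHOLD:
            times.append(gps_start + i / rate)
    return times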
I also realize there's an issue with the log display part of the lockloss tool. This is a completely orthogonal issue to the plotting, and it will also be fixed with the next guardian minor release.
Very sorry about the trouble.
The Xtal chiller has tripped 3 times in the last week:
then 4 days later:
now 3 days later:
The last two events imply an overpressure condition. The problem seems more significant than simply replacing fluid lost through a slow leak.
Was the cap which "blew off" in today's event the "bleeding cap" (section 4.2.2), the "filter sleeve cap" (section 6.2) or the "filler pipe cap" (section 6.3 of T1100374-v1, "200 W Laser Crystal Chiller Manual")?
BTW, LRA = Long Range Actuator; see section 7.2 of T0900641-v5, the 200 W laser manual.
From my quick look through the alog, I think the PSL has tripped more than this (for LHO to concur). From what I can see it has gone off 11 times in the last week (that's as far back as I looked).
I'm not sure if these are all the same problem, but the alogs reporting the PSL laser off are LHO alogs:
10/2/2016
LHO alog 30160
LHO alog 30154
10/1/2016
LHO alog 30146
LHO alog 30143
9/30/2016
LHO alog 30118
LHO alog 30086 (due to power glitch)
9/29/2016
LHO alog 30076
LHO alog 30063 (this alog reports two different instances of the laser going off)
9/27/2016
LHO alog 30016
9/25/2016
LHO alog 29964
The filler pipe cap popping out is a known thing (it happens all the time at LLO) when the chiller turns OFF. At LLO at least, this has not been shown to be due to any problem (it's just a consequence, for whatever reason, of the chiller being turned off). It's why we try not to turn these chillers OFF if we can help it, as they "burp" water over the floor and pop these fill caps even when we're just trying to restart them.
One of the main agenda items for this Wednesday's PSL meeting is to discuss this problem and see if we can work out what's going on. Some statistics would help: how many trips there have been in, say, the last month or two, what each PSL trip was attributed to, and how many happened before versus after the chiller swap (to see if the rate is accelerating or holding steady). I'm not sure if FRS fault reports have been made for each laser trip; that would make this search easy for those of us remote from the site, and would let us work out how much observatory time has been lost to this issue.
Yes, Dennis, as Matt says it's the Filler Pipe Cap that pops off. We have had (3) trips since Friday evening (so Matt is probably right about there being more trips over the whole week). OH, and I should correct myself here, because we just had a 4th weekend PSL trip (this was just after I had H1 at NLN for 15 min). This time the cap was blown off and there was a puddle on the floor.
Able to get back in 30min this time (vs 60min this morning).
OK, back to locking.
Another Note:
Something I wanted to add about the chiller: when filling it, I noticed quite a bit of turbulence in the fill pipe, and you could see an air bubble vortex/tornado in there. That's something we probably don't want if air bubbles are postulated as a trigger for flow-sensor trips.
We see these bubbles in the chiller fill pipe at LLO as well, with no chiller trips due to them (hopefully I haven't just jinxed myself). Perhaps post a movie of it so we can see if it looks the same as here.
Jason & crew will be investigating tomorrow. We should ask them to record a video of it.
Movie of the water turbulence of LLO's crystal chiller fill tube posted at LLO alog 28397
I took a video, but I can't post it; my phone only takes video in .mp4 format, which is apparently not a valid file type for upload to the alog. Huh.
I attached a still from the video to give you some idea of what we've been seeing here for the last few weeks. It appears to fluctuate, though; the video was taken on Monday, 10/3/2016, but this morning our fill port looks very similar to what's seen in Matt's video of the LLO crystal chiller fill port.