TITLE: 12/19 OWL Shift: 08:00-16:00UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at ~80 Mpc.
Incoming Operator: TJ
Support:
Quick Summary:
The night was quiet. Aside from the high useism and a slow computer that was an issue for a while, I have nothing to complain about.
Shift Activities:
15:32 Chris Biwer called control room regarding a hardware injection.
This morning's striptool flatline and computer slow-down reports further strengthen the evidence that this problem is related to h1boot's NFS disk activity. Yesterday I stopped all running rsync backups of /opt/rtcds and started a single backup at 16:30 PST Friday. It took about 10 hours to complete, finishing around 02:00 this morning; this is around the time the EPICS data froze and Nutsinee reported workstation slow-downs. I am monitoring EPICS freezes by looking at the Dolphin manager logs on h1boot: they show freeze events around 16:30 yesterday and 02:00 this morning, and none in between.
This rsync used to take only 20 minutes; I'll look into why it is now taking so much longer (a major file cleanup could be in order).
No Linux workstation is reporting a loss of NFS connection to this server at these times; it looks like a general slowdown that impacts the diskless front-end computers more.
Investigation continues.
Long term fix is to install a new NFS server for /opt/rtcds post O1.
Here are the Dolphin logs for this period (reporting when nodes come back):
Dec 18 2015 16:34:15 Fabric 0 status: All nodes are ok!
Dec 18 2015 16:34:28 Fabric 0 status: All nodes are ok!
Dec 18 2015 16:34:29 Fabric 0 status: All nodes are ok!
Dec 19 2015 01:51:09 Fabric 0 status: All nodes are ok!
Dec 19 2015 01:51:45 Fabric 0 status: All nodes are ok!
Dec 19 2015 01:51:46 Fabric 0 status: All nodes are ok!
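A small sketch of how these recovery timestamps can be pulled out of a Dolphin manager log and collapsed into distinct freeze/recovery episodes (the log filename here is hypothetical; the line format matches the entries above, which the sketch uses as sample data):

```shell
#!/bin/sh
# Sketch: extract "All nodes are ok!" recovery events and collapse bursts
# of messages into one line per minute. /tmp/dolphin_manager.log is a
# hypothetical filename seeded with the sample lines from this entry.
cat > /tmp/dolphin_manager.log <<'EOF'
Dec 18 2015 16:34:15 Fabric 0 status: All nodes are ok!
Dec 18 2015 16:34:28 Fabric 0 status: All nodes are ok!
Dec 18 2015 16:34:29 Fabric 0 status: All nodes are ok!
Dec 19 2015 01:51:09 Fabric 0 status: All nodes are ok!
Dec 19 2015 01:51:45 Fabric 0 status: All nodes are ok!
Dec 19 2015 01:51:46 Fabric 0 status: All nodes are ok!
EOF

# Truncate the timestamp to the minute and deduplicate, so each burst of
# recovery messages shows up as a single episode.
grep 'All nodes are ok!' /tmp/dolphin_manager.log \
  | awk '{print $1, $2, $3, substr($4, 1, 5)}' \
  | sort -u
# → Dec 18 2015 16:34
# → Dec 19 2015 01:51
```

The two deduplicated lines line up with the 16:30 rsync start and the ~02:00 completion, which is the correlation described above.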
TITLE: 12/19 Day Shift: 16:00-00:00UTC (08:00-16:00 PST), all times posted in UTC
STATE Of H1: Observing at 78Mpc for 9hrs
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: useism is on the rise, at 0.7 um/s right now; wind is minimal; PSL power is at 21.9 W; blends are on the 45's everywhere, so I will keep an eye on the EX ISI. A possible earthquake is about to hit us.
Max, Chris: We are preparing to do a burst hardware injection using tinj. I will update this aLog when we add the injection to the schedule.
Still Observing. Very stable range, zero wind. Useism is trending slightly upward. The computers are no longer frozen. Violin modes are high but none seem to be ringing up. It seems like some of the modes get kicked the wrong way every time the IFO loses lock.
TITLE: Dec 19 OWL Shift 08:00-16:00UTC (00:00-08:00 PST), all times posted in UTC
Outgoing Ops: Ed
Quick Summary: Useism is hanging at the 90th percentile. Violin modes are relatively high (damping still turned off). Wind is below 10 mph. Using 45 mHz blends everywhere. I have a StripTool running to monitor the ETM CPS sensors as Jim suggested (information which Ed kindly passed on). The computer opposite the Ops station is very slow...
I saw the signal flatten out twice in a row. The lock wasn't broken. The same thing also happened twice during my shift last night. Is this problem getting worse?
Also, the Ops computer and two other computers I've tried to log on to are equally VERY VERY SLOW. They're pretty much frozen. There was even a big gap while the verbal alarm was telling me the time ("CURRENT TIME ------------- SOMETHING UTC"). It took forever just to take a screenshot and attach it to this aLog. Let's hope we don't lose lock, because I don't know how to bring it back up with a frozen computer...
10:18 UTC I stepped out for a bit. Came back and saw a missed call. I don't know how to review the missed call number (sorry, these wired phones are ancient technology to me; I will learn next time). I'm back in the control room now.
The computers are better now.
TITLE: Dec 18 EVE Shift 00:00-08:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing
SUPPORT: Jason
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
After repeated locklosses waiting at various stages of the locking sequence after I had changed the blends to QUITE_90 as recommended, I decided to switch back to the 45 mHz blends at the end stations. I let the IFO “cook” on ENGAGE_ASC_PART3 for a good long while before advancing. I can’t say conclusively that this change in the ISI blends did the trick, but the IFO went all the way to NLN/78 Mpc and has been there ever since. EQ bands are nominal. The mean uSeism is slightly above the 90th percentile. Wind is calm. Handing off a locked IFO to Nutsinee.
ACTIVITY LOG:
04:34 Lockloss while at ENGAGE_ASC_PART3
05:00 Switched end station blends to Quite_90 (off axis) to see if locking would improve.
05:33 Still no satisfaction locking; the lock breaks at DC_READOUT_TRANSITION repeatedly. I put a call in to Sheila.
05:32 OMC SUS watchdog tripped.
05:53 It appears that when DC_READOUT_TRANSITION occurs, OMC_LOCK reports the OMC not ready, as it may still be in TUNE_OFFSETS.
06:05 Waited at ENGAGE_ASC_PART3 for the OMC to move to READY_FOR_HANDOFF. Lockloss occurred, this time at the very next step (REFL_IN_VACUO).
06:21 Lockloss occurred sitting on ENGAGE_ASC_PART3. Grasping at straws at this point. Haven’t heard back from Sheila.
06:51 FINALLY! Locked at NLN. Switched back to 45 mHz blends at the ends. Had to manually start the ISS second loop.
MID-SHIFT SUMMARY:
After the lockloss that occurred at 02:54 UTC, caused by an earthquake in Vanuatu, and about a 1 hr stand-down to permit ring-down, re-locking is being delayed by locklosses at various stages in the sequence.
EQ bands have come back down below 0.1 um/s. The mean uSeism is right about the 90th percentile. Wind is calm. Re-locking marches on.
ACTIVITY LOG:
00:32 Timing error occurred on H1SUSETMX.
DRMI unlocked (Dec 19 02:54:05 UTC). Earthquake: M6.2, 128 km N of Isangel, Vanuatu, 2015-12-19 02:10:53 UTC, depth 10.0 km.
02:55 reset timing error
03:55 DRMI failed to grab a solid lock. Did a PRMI alignment.
04:00 I had to reboot the ops station as dataviewer kept crashing
04:12 lost lock at DC_READOUT_TRANSITION
04:22 Lockloss at REDUCE_CARM_OFFSET
TITLE: Dec 18 EVE Shift 00:00-08:00UTC (16:00-00:00 PST), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Jim
QUICK SUMMARY: IFO recently re-locked. µSeism @ 90thpercentile. EQ bands nominal. Winds ≤ 10mph.
TITLE: 12/17 Day Shift: 16:00-24:00UTC
STATE of H1: Observing
Support: Typical control room crowd
Quick Summary: Mostly quiet. Evan did some work on DARM, I worked on EX HEPI, lost lock late but got it back in time for the party
Shift Activities:
C. Cahillane
I have updated the LHO uncertainty budget, fixed some bugs, and generated fewer, more relevant plots. All calibration versions are included in every budget. I will review the details of each version below:
C00: No kappas, no static systematics applied
C01: No kappas, static systematics applied
C02: kappa_tst and kappa_C applied, kappa_pu and cavity pole not applied, static systematics applied
C03: "Perfect", so all kappas and cavity pole applied, static systematics applied
Whenever I do ratio comparisons between model response functions, the "perfect" C03 model is in the denominator.
***** PDFs 1, 2, and 3 *****
These contain the C01, C02, and C03 uncertainty budgets at GPS Time = 1126259461. (Kappas provided by Darkhan in aLOG 11499. Thanks Darkhan!)
Plot 1 is the calibration version response function over C03's response function.
Plot 2 is four plots: the mag and phase squared uncertainty components plots, and the total mag and phase strain uncertainty by itself.
Plots 3-6 are just enlarged versions of Plot 2.
***** PDFs 4 and 5 *****
PDF 4 is the sensing fits and statistics.
PDF 5 is the actuation stage fits and statistics.
***** PDF 6 *****
This is a 4-way subplot of all the calibration versions together on one plot. I plot the mag and phase response functions and their ratios.
For the C01 calibration version, we have under 7% and 4 degrees of uncertainty (see PDF 1, Plot 2).
For the C02 calibration version, we have under 6% and 3.5 degrees of uncertainty (see PDF 2, Plot 2).
C. Cahillane
To reproduce these plots, go to the Calibration SVN and open .../aligocalibration/trunk/Runs/O1/Common/MatlabTools/strainUncertainty_Final_O1.m. At the top of the file (lines 11-15) you can change the IFO you want uncertainty for (L1 or H1) and the GPS time. Run the file after you change it; the code takes < 5 minutes to run. It should post the resulting plots in .../aligocalibration/trunk/Runs/O1/[IFO]/Results/Uncertainty/, where [IFO] is replaced by H1 or L1.
tinj crashed. Max and I looked through the code and see:

% update on 9 June, 2015: burst injections use txt now
% injfile = [burstpath '/' filefuture{1} ifo '.dat'];
injfile = [burstpath '/' filefuture{1} ifo '.txt'];

THIS NEEDS TO BE UPDATED IN THE DOCUMENTATION.
tinj was restarted at H1 at 11:54 EST. tinj was restarted at L1 at 11:58 EST.