TITLE: 06/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Nothing much to report other than what was mentioned in 37103.
LOG:
Locked for 34 hours at 67Mpc.
DIAG_MAIN is showing "PCAL: Y RX PD is greater than 1% off". Trending the channel it looks at (H1:CAL-PCALY_RX_PD_VOLTS_OUTPUT), it has dropped very slightly. This same message showed up about 12 hours ago, so I got suspicious of temperature. Sure enough, you can clearly see it dip with the temperature. See the 2-day trend below.
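The DIAG_MAIN message above amounts to a simple fractional-deviation test against a reference value; a minimal sketch of that 1%-off check (the function names and reference value here are hypothetical, not actual DIAG_MAIN code):

```python
def pcal_pd_deviation(value, reference):
    """Fractional deviation of a PCAL RX PD reading from its reference."""
    return abs(value - reference) / abs(reference)

def diag_check(value, reference, threshold=0.01):
    """Mimic the DIAG_MAIN-style test: flag when the PD is >1% off."""
    return pcal_pd_deviation(value, reference) > threshold

# A reading 1.5% below its reference trips the message; 0.5% off does not.
print(diag_check(0.985, 1.0), diag_check(0.995, 1.0))  # True False
```

A slow thermally driven drift in the PD voltage crossing this fixed threshold would produce exactly the intermittent, temperature-correlated message described above.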
TITLE: 06/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: 30 hr lock, not much else to report
TITLE: 06/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Locked in Observing for the entire shift and ~27 hours total. No issues to report.
LOG: None
We have remained in Observing for the duration of the shift thus far. 26 hours locked and 23.5 hours in Observing.
At 05:27 UTC 6/23/2017 (22:27 PDT Thursday night) we received a short timing GPS-EY error which cleared quickly. DAQ trends for this system suggest this is another bogus GPS error. The attached dataviewer trend (second trend, 5 minutes duration, 20:25 - 20:30 PDT Thu 6/22) shows that the reported number of tracked satellites drops to zero, but the not-tracking-satellites flag does not report this. Also, the number of visible satellites stays at 10 during the glitch.
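The consistency argument above (tracked count drops to zero while the not-tracking flag stays clear and the visible count holds) can be written as a simple heuristic; this is an illustrative helper, not any actual DAQ or timing-system code:

```python
def looks_bogus(tracked, visible, not_tracking_flag):
    """Heuristic from the log entry: in a real outage the not-tracking
    flag should assert and the visible count should drop too.  Tracked
    falling to zero while the flag stays clear and satellites remain
    visible points to a readback glitch rather than a real GPS problem."""
    return tracked == 0 and not not_tracking_flag and visible > 0

# The glitch seen in the dataviewer trend:
print(looks_bogus(tracked=0, visible=10, not_tracking_flag=False))  # True
# A genuine outage would look like:
print(looks_bogus(tracked=0, visible=0, not_tracking_flag=True))    # False
```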
model restarts logged for Thu 22/Jun/2017 - Wed 14/Jun/2017 No restarts reported
model restarts logged for Tue 13/Jun/2017
2017_06_13 09:24 h1isiham6
2017_06_13 09:29 h1broadcast0
2017_06_13 09:29 h1dc0
2017_06_13 09:29 h1fw0
2017_06_13 09:29 h1fw1
2017_06_13 09:29 h1fw2
2017_06_13 09:29 h1nds0
2017_06_13 09:29 h1nds1
2017_06_13 09:29 h1tw1
2017_06_13 10:19 h1isiham6
2017_06_13 11:23 h1iopoaf0
2017_06_13 11:23 h1oaf
2017_06_13 11:23 h1pemcs
2017_06_13 11:23 h1tcscs
2017_06_13 11:24 h1calcs
2017_06_13 11:24 h1iopseih45
2017_06_13 11:24 h1ngn
2017_06_13 11:24 h1odcmaster
2017_06_13 11:24 h1susprocpi
2017_06_13 11:26 h1hpiham4
2017_06_13 11:26 h1hpiham5
2017_06_13 11:26 h1iopoaf0
2017_06_13 11:26 h1isiham4
2017_06_13 11:26 h1isiham5
2017_06_13 11:27 h1calcs
2017_06_13 11:27 h1oaf
2017_06_13 11:27 h1odcmaster
2017_06_13 11:27 h1pemcs
2017_06_13 11:27 h1tcscs
2017_06_13 11:28 h1ngn
2017_06_13 11:28 h1susprocpi
Maintenance Tuesday. New isiham6 code with associated DAQ restarts. Unexpected problems with h1seih45 and h1oaf0 which required model restarts.
model restarts logged for Mon 12/Jun/2017 - Wed 07/Jun/2017 No restarts reported
model restarts logged for Tue 06/Jun/2017
2017_06_06 08:54 h1susomc
Maintenance Tuesday, minor model change to h1susomc.
model restarts logged for Mon 05/Jun/2017 - Sat 03/Jun/2017 No restarts reported
model restarts logged for Fri 02/Jun/2017
2017_06_02 10:17 h1caley
2017_06_02 10:17 h1iopiscey
2017_06_02 10:17 h1iscey
2017_06_02 10:17 h1pemey
2017_06_02 10:18 h1caley
2017_06_02 10:18 h1iopiscey
2017_06_02 10:18 h1iscey
2017_06_02 10:18 h1pemey
2017_06_02 10:19 h1alsey
h1iscey power cycle as part of ALS-Y problem investigation.
model restarts logged for Thu 01/Jun/2017
2017_06_01 11:59 h1nds1
2017_06_01 12:01 h1nds0
Completion of minute trend offloading required restart of NDS daqd.
model restarts logged for Wed 31/May/2017 No restarts reported
TITLE: 06/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 66Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: No issues handed off. Lock is ~22 hours old.
TITLE: 06/23 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY: Another quiet shift
LOG:
Nothing to report. Environment is quiet, IFO was locked all shift.
In Observing for about 7 hours. All green and clear at this time.
Pitch: ETMX, ITMX, and SR3 were at or above 10 urad. They have been reset to zero. See Figure 1. Yaw: ETMX and ITMX were at the 10 urad line. The BS is close to the 10 urad line, but the slope is flat. See Figure 2. Closed FAMIS Task #4733.
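The 10 urad line used in this FAMIS check is just a threshold scan over the optics' accumulated drift; a sketch with made-up values (the numbers below are illustrative, not the measured drifts):

```python
# Hypothetical drift snapshot in urad; flag any optic at or beyond
# the 10 urad line for an offset reset.
THRESHOLD_URAD = 10.0

pitch_drift = {"ETMX": 10.4, "ITMX": 11.2, "SR3": 10.1, "BS": 3.2}

needs_reset = sorted(o for o, v in pitch_drift.items()
                     if abs(v) >= THRESHOLD_URAD)
print(needs_reset)  # ['ETMX', 'ITMX', 'SR3']
```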
TITLE: 06/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Commissioning for the first half of the day, Observing for the second. H1 locked for ~6 hours, Observing for ~3 hours. No issues to report.
LOG: See previous aLogs. A couple of locklosses during the Commissioning period that I didn't bother to log because they were CCLs (Commissioner-Caused Locklosses) ;).
It seems that there is high coherence between ASC CSOFT/CHARD and LSC PRCL/MCL. Please see the attached plots (data from Jun 20, 02:00:00 UTC, 1024 s duration).
Especially with PRCL, the ~0.4 Hz peak seems to be related to the dP/dtheta instability (37046 and the alogs referenced therein). Does this indicate a large cross-coupling from LSC to ASC (i.e., a large dtheta/dF term in the dP/dtheta loop)? Would L2A feedforward (see, e.g., LLO 31756) help?
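A coherence estimate of the kind behind those plots can be sketched as a plain-numpy, segment-averaged (Welch-style) calculation; the two signals here are synthetic stand-ins sharing a 0.4 Hz line, not IFO channels:

```python
import numpy as np

def welch_coherence(x, y, nperseg=1024):
    """Magnitude-squared coherence from segment-averaged FFTs (no overlap)."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    pxx = pyy = pxy = 0.0
    for k in range(nseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        X = np.fft.rfft(win * x[seg])
        Y = np.fft.rfft(win * y[seg])
        pxx = pxx + np.abs(X) ** 2
        pyy = pyy + np.abs(Y) ** 2
        pxy = pxy + X * np.conj(Y)
    return np.abs(pxy) ** 2 / (pxx * pyy)

rng = np.random.default_rng(0)
fs, n = 256.0, 16384
t = np.arange(n) / fs
line = np.sin(2 * np.pi * 0.4 * t)          # shared ~0.4 Hz component
a = line + 0.1 * rng.standard_normal(n)     # stand-in for, e.g., PRCL
b = line + 0.1 * rng.standard_normal(n)     # stand-in for, e.g., CHARD
coh = welch_coherence(a, b)
freqs = np.fft.rfftfreq(1024, 1 / fs)
bin04 = np.argmin(np.abs(freqs - 0.4))
print(coh[bin04] > 0.9)  # strong coherence at the shared line
```

Averaging over segments is what keeps the estimate below 1 away from the shared line; a single-segment "coherence" is identically 1 at every frequency.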
Short summary posted below. See here for full results.
Sheila logged that the two locklosses on Tuesday occurred both when her excitation measurement on the PSL ISS THIRDLOOP SERVO was aborted and when it completed correctly. It appears that the filter used by the excitation has a slow ring-down and was still outputting a non-zero value when awgtpman killed the excitation. This caused a sharp transient step, which in this case drove the output of the THIRDLOOP_SERVO filter very high.
This problem was initially reported by Jenne in this alog; my analysis was reported in this alog. Daniel, Jim, and I came up with a work-around described in this alog and in this wiki.
To verify that the filter in this case has a long ring-down, I ran it on a test-stand front end; it does indeed have a long ring-down. Image attached.
I had opened FRS8368 for this issue, which can now be closed.
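The mechanism can be illustrated with a toy lightly damped 2nd-order IIR section (this is not the actual ISS third-loop filter; the sample rate, resonance, and Q below are all assumed for illustration): abruptly zeroing the drive, as awgtpman does when it kills an excitation, leaves the filter output ringing at a substantial amplitude long afterward.

```python
import numpy as np

fs = 1024.0           # sample rate, Hz (assumed)
f0, q = 10.0, 50.0    # resonance and quality factor (assumed)
w0 = 2 * np.pi * f0 / fs
r = 1 - w0 / (2 * q)  # pole radius just inside the unit circle -> slow decay
b0 = 1 - r

# Resonator: y[n] = b0*x[n] + 2r*cos(w0)*y[n-1] - r^2*y[n-2]
x = np.sin(w0 * np.arange(2048))  # sinusoidal excitation at resonance
x[1024:] = 0.0                    # hard cut-off when the excitation is killed

y = np.zeros_like(x)
for n in range(2, len(x)):
    y[n] = b0 * x[n] + 2 * r * np.cos(w0) * y[n - 1] - r * r * y[n - 2]

# Hundreds of samples after the cut-off the output is still ringing strongly
print(np.max(np.abs(y[1400:1600])) > 1.0)  # True
```

The ring-down time constant scales with Q, so a narrow filter can keep injecting a decaying sinusoid into the servo well after the excitation path has been zeroed, consistent with the transient described above.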
The flow rate in the power-meter flow circuit has become extremely noisy in the past 3 days. Since no debris was observed in the filters, one wonders whether it is the flow sensor itself or perhaps the Beckhoff terminal input that processes its output.
Locked at NLN for 2.5 hours for commissioning work. Set to Observe at 19:46 UTC after Sheila's measurement wrapped up.
Added 225 mL H2O to Xtal chiller. Diode chiller was good. Filters still appear clean. This closes FAMIS task 6528.