TITLE: 08/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 53Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Covering the last 1.5 hours of Cheryl's day shift, so not a lot to report. Lock is 32+ hours old. No issues.
LOG:
22:22 JeffK to Optics Lab to help TJ with VOPO assembly
Activities: all times in UTC
Currently:
Added to Maintenance:
J. Kissel, S. Karki
I discovered this morning that H1 PCAL X was no longer churning out any of Sudarshan's thesis-generating lines. Though I knew we'd intentionally turned off the 333.9 Hz and 1083.3 Hz lines this past Friday evening (see LHO aLOG 38148), we had intended to continue running the >2 kHz long-duration sweep lines. They had stopped running at 2017-08-13 01:33 UTC --> 2017-08-12 (Saturday) at 6:33 pm local.
What happened? When the EX SUS rack lost one leg of its power (see LHO aLOG 38162), the EX:ISC power supply was unnecessarily cycled as well, which killed the timing signal to the ISC I/O chassis, the master of the EX Dolphin network. With the master dead, all EX front-ends went belly up -- including PCALX. Upon restart of the front-ends (see LHO aLOG 38163), the guardian code managing this high-frequency line, /opt/rtcds/userapps/release/cal/h1/guardian/HIGH_FREQ_LINES.py, did not register that we'd lost lock, the safe.snap had some old settings, and we don't monitor the frequency or gain of the H1:CAL-PCALX_PCALOSC1 oscillator because it is regularly changed during observation -- so no one noticed that the line was off.
I've edited the guardian code to start at the next data point, 4001.3 Hz, loaded it, and ran the INIT state, which turned on the line at 2017-08-14 20:31:30 UTC (strictly, the ramp started at 20:31:30 UTC and the line was fully ON and stable shortly thereafter). Again, because the frequency and gain of this oscillator are not monitored, this guardian reload and settings change did NOT take us out of observation mode.
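For reference, the change amounts to retuning the PCALX roaming oscillator and ramping its gain back on. A minimal sketch of that kind of EPICS write follows (Python with pyepics); the channel-name suffixes, gain, and ramp time are assumptions for illustration, not the actual contents of HIGH_FREQ_LINES.py.

# Hypothetical sketch, not the real guardian code: move the roaming PCALX
# line to the next scheduled frequency by writing the oscillator settings.
from epics import caput

OSC = 'H1:CAL-PCALX_PCALOSC1'       # oscillator record (suffixes below are assumed)
NEXT_FREQ_HZ = 4001.3               # next data point in the >2 kHz schedule
LINE_GAIN = 1000.0                  # placeholder amplitude; real value is frequency-dependent
RAMP_SEC = 30.0                     # ramp the gain on smoothly

caput(OSC + '_TRAMP', RAMP_SEC)     # set the ramp time first
caput(OSC + '_FREQ', NEXT_FREQ_HZ)  # retune the oscillator
caput(OSC + '_GAIN', LINE_GAIN)     # ramp the line back on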
We got a long enough patch of wind over the weekend, around 20 mph and from the south or SW direction, see the first attachment. The previous post about this had the wind from the NNW. Go to this aLOG to walk back through all the positions.
Looking at the second attachment, the Z DOF here looks pretty different from the first look at Roam8; otherwise the X & Y DOFs look similar. During the quiet period (1000 hrs 10 Aug), the noise floor of the HAM5 STS2 is higher than the ITMY unit at that location at frequencies below 20 to 40 mHz. During the high wind (1640 hrs 13 Aug), the Roam8 measurement is noisier by a factor of a few below 20 to 70 mHz. At the lowest frequencies, below about 10 mHz, one might argue the HAM5 machine got less noisy during the windy time, but these calibrated machines are giving actual motion, not relative motion, again arguing that the HAM5 machine is not as good as ITMY.
Bottom line for Roam8: not as good as ITMY.
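For anyone redoing this comparison, the plots are essentially ASD estimates of the two STS2 time series over the quiet and windy stretches. A rough sketch in Python (scipy.signal.welch on synthetic stand-in data; the sample rate, duration, and channel handling are placeholders):

# Sketch of the ASD comparison described above, on synthetic data.
# A real analysis would fetch the HAM5 (Roam8) and ITMY STS2 channels
# for the quiet and windy periods instead of using random noise.
import numpy as np
from scipy.signal import welch

fs = 256.0                              # sample rate (placeholder)
t = np.arange(0, 3600, 1 / fs)          # one hour of data
rng = np.random.default_rng(0)
roam8 = rng.normal(size=t.size)         # stand-in for the HAM5/Roam8 STS2
itmy = rng.normal(size=t.size)          # stand-in for the ITMY STS2

nperseg = int(1024 * fs)                # long segments to resolve 10-70 mHz
f, p_roam8 = welch(roam8, fs=fs, nperseg=nperseg)
_, p_itmy = welch(itmy, fs=fs, nperseg=nperseg)

band = (f > 0.01) & (f < 0.07)          # the 10-70 mHz region discussed above
ratio = np.sqrt(p_roam8[band]).mean() / np.sqrt(p_itmy[band]).mean()
print('Roam8/ITMY ASD ratio, 10-70 mHz: %.2f' % ratio)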
Maintenance for 15 Aug 2017:
TITLE: 08/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 51Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: locked 24+ hours
TITLE: 08/14 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1:
INCOMING OPERATOR: Travis
SHIFT SUMMARY:
"Seismon system not updating" DIAG_MAIN alarm, we aren't getting updates of 4.0+ EQs (last one was at 3:05utc, but we've had others since).
H1 has been locked for over 24hrs with a steady 52Mpc.
LOG:
Laser Status:
SysStat is good
Front End Power is 33.88W (should be around 30 W)
HPO Output Power is 154.7W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 5 days, 18 hr 52 minutes (should be days/weeks)
Reflected power = 17.86Watts
Transmitted power = 57.96Watts
PowerSum = 75.82Watts.
FSS:
It has been locked for 1 day 0 hr and 15 min (should be days/weeks)
TPD[V] = 1.072V (min 0.9V)
ISS:
The diffracted power is around 2.3% (should be 3-5%)
Last saturation event was 1 day 0 hours and 54 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
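The nominal ranges quoted in this checksheet lend themselves to a simple threshold check. A sketch using the values reported above (hard-coded here for illustration; in practice they would be read from EPICS, and the 25% PMC-reflection threshold is an assumption, not an official limit):

# Sketch of checking the PSL checksheet values against the nominal ranges
# quoted above.  Values are the ones reported in this entry.
status = {
    'front_end_power_W':  33.88,   # should be around 30 W
    'pmc_reflected_W':    17.86,
    'pmc_transmitted_W':  57.96,
    'fss_tpd_V':          1.072,   # min 0.9 V
    'iss_diffracted_pct': 2.3,     # should be 3-5 %
}

issues = []
if status['fss_tpd_V'] < 0.9:
    issues.append('FSS TPD below 0.9 V')
if not 3.0 <= status['iss_diffracted_pct'] <= 5.0:
    issues.append('ISS diffracted power outside 3-5 %')
if status['pmc_reflected_W'] > 0.25 * status['pmc_transmitted_W']:
    issues.append('PMC reflected power is high')

print('Possible issues:', issues if issues else 'none')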
TITLE: 08/14 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 51Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 10mph Gusts, 8mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
Small step up in useism (factor of 2 over the last 24 hrs). Winds up & down over the last 24 hrs (currently quiet).
QUICK SUMMARY:
While going through the checksheet, noticed a dead video4 computer (rebooting). Other than that, no notable changes.
TITLE: 08/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Not much happened, locked entire shift
LOG:
Around 4:00 UTC, I saw that our seismon code was dead. Not sure what happened, but there was a prolonged period without any 4.0+ earthquakes reported. Maybe some of the cleanup processes broke it (i.e. all event folders were wiped and seismon couldn't decide where to put new ones)?
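A check like the DIAG_MAIN "Seismon system not updating" test just needs to notice that no new event has appeared recently. A minimal sketch (the event directory path and the 6-hour threshold are made up for illustration):

# Minimal staleness check: flag seismon if its newest event file is old.
import glob
import os
import time

EVENT_DIR = '/path/to/seismon/events'   # hypothetical location
MAX_AGE_S = 6 * 3600                    # flag if nothing new in 6 hours

event_files = glob.glob(os.path.join(EVENT_DIR, '*'))
if not event_files:
    print('Seismon not updating: no event files found')
else:
    newest = max(os.path.getmtime(f) for f in event_files)
    age_s = time.time() - newest
    if age_s > MAX_AGE_S:
        print('Seismon not updating: newest event is %.1f hours old' % (age_s / 3600.0))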
TITLE: 08/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Locked the entire shift, 8.5 hours total now. No issues to report.
LOG: Restarted Video5.
Lock is 5 hours old. We rode through one/two EQs that, interestingly enough, were ONLY reported by SEISMON. See screenshots.
This EQ finally showed up on USGS around 21:30 UTC.
TITLE: 08/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 50Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: No issues handed off. Lock is 30 minutes old.
TITLE: 08/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 51Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY:
H1 locked most of the shift. "Details" of lockloss in previous aLog. Fire reported in transition log seems to be out. There was some rain. Handing off locked/Observing H1 to Travis.
LOG:
12:58 Lockloss. Doesn't seem to be environmental and so far can't see anything else responsible.
13:39 Initial Alignment
14:38 H1 back to Observing.
A few failed attempts at re-locking prompted an initial alignment, which went well. The locking sequence was stopped at DC readout to damp ETMX violin modes 2 & 4.
TITLE: 08/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 50Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 22mph Gusts, 18mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
I was briefed on what would happen if the +/-18 VDC supply at EX trips. Watching the fire burning: it looks to me as if it's way off in the distance to the NE, across the river. Hard to tell out here in the dark, but a slight scent of smoke is in the air outside. The wind is out of the SW.
PaulM, TJ, SudarshanK(remotely)
The low-frequency Pcal lines running at 333.9 Hz and 1083.3 Hz were switched off during the PEM injection commissioning break, and the guardian node (HIGH_FREQ_LINES) that schedules the single-line injections, beginning at 4501.3 Hz and ending at 1501.3 Hz in 500 Hz steps, was initiated.
These changes were accepted into the SDF system -- see LHO aLOG 38144.
Could not reach Pcal folks to confirm SDF differences. Accepted the differences (see file below) to get back to Observing.
These changes are standard.
In other words: the temporary PCALX calibration lines that had been running for a few days (LHO aLOG 37952) were switched OFF the other day (see LHO aLOG 38148). There are many ways to do so, but that day the choice was to zero out the "oscillator use" matrix, which sums all PCALX oscillators as desired. Element 1_1 has traditionally been reserved for the high-frequency roaming PCAL line used for Sudarshan's thesis; it was turned back "ON" by putting a 1.0 in the matrix, but there's no gain on the oscillator, so nothing is coming out. Elements 1_2 and 1_3 were for the 333.9 Hz and 1083.3 Hz lines, which have now been zeroed. The reason these showed up as an SDF difference in the OBSERVE snap is that these lines had been running during observation-ready data for the past few days. So accepting the values above is just accepting that they are turned OFF.
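As a picture of that bookkeeping: think of the "use" elements as a row vector multiplying the individual oscillator outputs. A toy sketch (numbers and names are illustrative, not the live settings) showing why a 1.0 in element 1_1 with zero oscillator gain still produces no line:

# Toy model of the PCALX oscillator-use summing described above.
import numpy as np

# Per-oscillator output amplitudes (gain of each oscillator):
# OSC1 = roaming high-frequency line, OSC2 = 333.9 Hz, OSC3 = 1083.3 Hz.
osc_gain = np.array([0.0, 0.0, 0.0])    # roaming osc has no gain set yet
use_row = np.array([1.0, 0.0, 0.0])     # matrix elements 1_1, 1_2, 1_3

excitation = float(np.dot(use_row, osc_gain))     # what reaches PCALX
print('PCALX excitation amplitude:', excitation)  # 0.0 -> no lines running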