Dave, TJ:
DIAG_MAIN was running slowly when asking for averages from h1nds0. I could not see any issues with h1nds0 with regard to CPU, memory, or syslogs, but we rebooted it anyway and DIAG_MAIN sped back up. h1nds0 had been running since 1 May 2017.
J. Kissel

Your weekly charge update: same as last week -- ETMY is slowly getting worse (our high duty cycle means we are too often holding on one sign of bias voltage), and ETMX is fine. The change in longitudinal actuation strength relative to Jan 2017 for ETMY is upwards of 8-9%. The usual plots are attached.
J. Kissel, D. Barker

Posting some data to support a future ECR to add more violin mode banks to the QUADs: here's the CPU cycle turn-around time during O2 for the front-end models that run the QUADs. The ITMs have been running around 42-45 [usec], and the ETMs have been running around 32-33 [usec] during O2 (see first attachment).

Remember, two things have happened over the past few years that have *reduced* the CPU turn-around time (see second attachment):
- The end-station front-end computers were upgraded to better CPUs back in Feb 2016 (see LHO aLOG 25474); prior to that they typically ran at around ~55 [usec].
- All QUAD models' data storage lists were pruned in Oct 2016 (see LHO aLOG 30821). This knocked about 10-15 [usec] off of the ITM time, and a further ~2-3 [usec] off of the ETMs.

In short: there should be plenty of CPU time to add oodles of violin mode damping filters.
And of course we could install the faster computer for h1susb123.
D. Barker, J. Kissel

I (and Norna, and Dennis) asked Dave if we could better quantify "there should be plenty of CPU time to add oodles of violin mode damping filters." As such, he says (via email):

On Sep 6, 2017, at 4:39 PM, David Barker wrote:
[...]
For my first set of tests I took the ITMX_L2_DAMP_MODE10 [an example, fully loaded, in-use violin] filter module and duplicated it 32 times on two test models: x1susdactest and x1susfiltertest [piping junk data into the input of the bank to make sure the filters were computing on something]. x1susdactest was already running on the fast computer x1susex (doing my 18bit-DAC testing). I then made a copy of this model for x1susfiltertest, which ran on the slow computer x1sush34.

x1susex  = Intel Xeon E5-2690 v2 @ 3.00GHz (fast 10-core computer)
x1sush34 = Intel Xeon X5680 @ 3.33GHz (slow 6-core computer)

cpu     no filters   32 filters
slow    3 uS         7 uS
fast    4 uS         7 uS

Not much in it for 32 filter modules -- about a 1 uS increase for every 10 FMs loaded.
[...]

So, extrapolating Dave's data, if we want to increase from 10 to 40 violin mode filters, that would add ~3 [us] to the clock-cycle turn-around time. As Dave mentions, we'll likely switch h1susb123 over to the faster computer type, which means the ITMs (which had high turn-around at 42-45 [us]) will run more like the ETMs did during the tail end of O2, at ~32-33 [us]. That means for ETMs and ITMs, with the inclusion of these extra filters, the turn-around time would likely only increase to 36-37 [us], indeed still with plenty of head room against the limit of 61 [us].
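As a quick sanity check of that extrapolation, here is a minimal sketch of the arithmetic. The per-filter cost (~1 [us] per 10 FMs) and the 61 [us] cycle limit are taken from the entry above; the function name and the grouping of cases are purely illustrative.

```python
# Back-of-the-envelope check of the turn-around-time headroom quoted above.
# Per-filter cost (~1 us per 10 FMs) and the 61 us cycle limit come from the
# entry; everything else here is illustrative.

COST_PER_FM_US = 1.0 / 10.0   # ~1 us of extra CPU time per 10 filter modules
CYCLE_LIMIT_US = 61.0         # cycle-time limit quoted in the entry above

def projected_turnaround(current_us, added_filters):
    """Projected CPU turn-around time after adding `added_filters` filter modules."""
    return current_us + added_filters * COST_PER_FM_US

added = 40 - 10   # going from 10 to 40 violin mode filters per QUAD
cases = {
    "ETM-like (ITMs too, after the h1susb123 CPU swap)": 33.0,
    "ITMs on the current computer": 45.0,
}
for label, current in cases.items():
    new = projected_turnaround(current, added)
    print(f"{label}: {current:.0f} us -> {new:.0f} us "
          f"({CYCLE_LIMIT_US - new:.0f} us of headroom against {CYCLE_LIMIT_US:.0f} us)")
```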
I tweaked the beam alignment into the PMC; the ISS was OFF for this. I was able to get the reflected power down to ~16.5 W from ~17.5 W. I think we are being limited by mode matching here, as the mode out of the HPO has likely changed (especially since the diode box swap). With the ISS back ON, the PMC is transmitting 57.2 W and reflecting 16.5 W. This completes LHO WP 7108.
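For a rough sense of the mode-matching limit mentioned above, here is a minimal sketch of the reflected-power fraction implied by those numbers. It assumes the input power is approximately the transmitted plus reflected power (i.e., it ignores PMC internal losses and pick-offs), so treat it only as an order-of-magnitude estimate.

```python
# Rough reflected-power fraction at the PMC from the numbers quoted above.
# Assumes P_in ~= P_trans + P_refl (ignores internal losses and pick-offs),
# so this is only an order-of-magnitude mode-matching estimate.
p_trans = 57.2   # [W] transmitted through the PMC
p_refl  = 16.5   # [W] reflected from the PMC

p_in_est = p_trans + p_refl
print(f"Estimated input power: {p_in_est:.1f} W")
print(f"Reflected fraction: {p_refl / p_in_est:.1%}")   # roughly 22%
```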
This morning I completed the weekly PSL FAMIS tasks.
HPO Pump Diode Current Adjust (FAMIS 8435)
With the ISS OFF, I increased the operating current of the HPO pump diode boxes; DB1 increased by 0.2 A, and DB2, DB3, and DB4 each increased by 0.1 A. The changes are summarized in the table below, and a screenshot of the PSL Beckhoff main screen is attached for future reference.
        Operating Current (A)
        Old     New
DB1     49.4    49.6
DB2     52.4    52.5
DB3     52.4    52.5
DB4     52.4    52.5
I did not adjust the DB operating temperatures. The HPO is now outputting ~155.1 W; the ISS is still OFF (it will remain OFF until I finish the PMC alignment tweak). This completes FAMIS 8435.
PSL Power Watchdog Reset (FAMIS 3663)
I reset both PSL power watchdogs at 16:00 UTC (9:00 PDT). This completes FAMIS 3663.
TITLE: 08/15 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 53Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
RED "RMS WD" on the Ops Overview, sounds like it is more of a bug on the Ops Overview.
L1 & V1 went down at around 14:00 & now H1 is set for Maintenance.
LOG:
Maintenance started at 15:00UTC
H1 locked for 44.5hrs w/ steady 52Mpc range. Nice triple coincidence for the last 2hrs.
Noticed a RED "RMS WD" on the Ops Overview (see attached with RED box & also part of the ETMy SUS screen).
Did NOT receive any RMS WD alarms, and I don't see a RED fault box for any OSEM on ETMy's L2 output monitors. (In my previous experience, one of the OSEMs would be RED and we would need to enter a "1" and then a "0" to clear it; lately there has been a script to take care of this -- a rough sketch of that 1-then-0 reset follows at the end of this entry.)
Looking at some related channels, I do see (trend attached) a bit of noisiness in these channels at the beginning of my shift. I assume this is due to the Samoa EQ, when many things were oscillating; perhaps this caused this unique "trip".
Since we are going on a 43hr lock and have been in triple coincidence, I will not address this until/if we break lock during the shift.
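For reference, a rough sketch of the manual "1 then 0" clear mentioned above, using pyepics. The channel name is a hypothetical placeholder (not the actual ETMy RMS WD reset channel), and this is not the script referred to in the entry.

```python
# Hedged sketch of the manual RMS WD clear described above: write a 1, then a 0,
# to the watchdog reset channel. The channel name below is a HYPOTHETICAL
# placeholder -- substitute the actual ETMy L2 RMS WD reset channel.
import time
from epics import caput   # pyepics

RESET_CHANNEL = "H1:SUS-ETMY_L2_WD_RMS_RESET"   # hypothetical example name

def clear_rms_wd(channel=RESET_CHANNEL, settle=0.5):
    """Toggle the watchdog reset: 1 to clear, then back to 0."""
    caput(channel, 1, wait=True)
    time.sleep(settle)
    caput(channel, 0, wait=True)

if __name__ == "__main__":
    clear_rms_wd()
```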
TITLE: 08/15 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 52Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Walked in to see H1 rolling through a Samoan earthquake (as seen on tidal signals, ASC control signals, and the seismometer signals). But after doing an OSB walkthrough for doors/lights, everything is quiet and optimal. H1 has been locked almost 41hrs and the range has been at a steady 52Mpc.
Continued triple coincident Observing for first half of the shift. The A2L YAW is elevated. Will run the repair script at the first opportunity. All else is green and clear at this time.
TITLE: 08/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 53Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Covering the last 1.5 hours of Cheryl's day shift, so not a lot to report. Lock is 32+ hours old. No issues.
LOG:
22:22 JeffK to Optics Lab to help TJ with VOPO assembly
Activities: all times in UTC
Currently:
Added to Maintenance:
J. Kissel, S. Karki

I discovered this morning that H1 PCAL X was no longer churning out any of Sudarshan's thesis-generating lines. Though I knew we'd intentionally turned off the 333.9 Hz and 1083.3 Hz lines this past Friday evening (see LHO aLOG 38148), we had intended to continue running the >2 kHz long-duration sweep lines. They had stopped running at 2017-08-13 01:33 UTC --> 2017-08-12 (Saturday) at 6:33 pm local.

What happened? When the EX SUS rack lost one leg of its power (see LHO aLOG 38162), the EX:ISC power supply was unnecessarily cycled as well, which killed the timing signal to the ISC I/O chassis, the master of the EX Dolphin network. With the master dead, all EX front-ends went belly up -- including PCALX. Upon restart of the front-ends (see LHO aLOG 38163), the guardian code managing this high-frequency line,
/opt/rtcds/userapps/release/cal/h1/guardian/HIGH_FREQ_LINES.py
did not register that we'd lost lock, the safe.snap had some old setting, and we don't monitor the frequency or gain of the H1:CAL-PCALX_PCALOSC1 oscillator because it is regularly changed during observation -- so no one noticed that the line was off.

I've edited the guardian code to start at the next data point, 4001.3 Hz, loaded it, and ran the INIT state, which turned on the line at 2017-08-14 20:31:30 UTC (if you want to be picky, it was ramping up from 20:31:30 UTC, and fully ON and stable by 20:31:30 UTC). Again, because the frequency and gain of this oscillator are not monitored, this guardian reload and settings change did NOT take us out of observation mode.
We got a long enough patch of wind over the weekend, around 20 mph and from the south or SW direction; see the first attachment. The previous post about this had the wind from the NNW. Go to that aLOG to walk back through all the positions.
Looking at the second attachment, the Z DOF here looks pretty different from the first look at Roam8, while the X & Y DOFs look similar: during the quiet period (1000 hrs 10 Aug) the noise floor of the HAM5 STS2 is noisier than the ITMY unit at that location at frequencies below 20 to 40 mHz. During the high wind (1640 hrs 13 Aug), the Roam8 measurement is noisier by a factor of a few below 20 to 70 mHz. At the lowest frequencies, below 10 mHz or less, one might argue the HAM5 machine got less noisy during the windy time, but these calibrated machines are giving actual motion, not relative motion, again arguing that the HAM5 machine is not as good as ITMY.
Bottom line--for Roam8: not as good as ITMY.
Maintenance for 15 Aug 2017:
PaulM, TJ, SudarshanK (remotely)

The low-frequency Pcal lines running at 333.9 Hz and 1083.3 Hz were switched off during the PEM injection commissioning break, and the guardian node (HIGH_FREQ_LINES) that schedules the single-line injection, beginning at 4501.3 Hz and ending at 1501.3 Hz in 500 Hz steps, was initiated.
These changes were accepted into the SDF system -- see LHO aLOG 38144.
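For clarity, a minimal sketch of that roaming-line schedule (4501.3 Hz down to 1501.3 Hz in 500 Hz steps). This is only an illustration of the schedule as described above; it is not the actual HIGH_FREQ_LINES.py guardian code, and the function name is made up.

```python
# Sketch of the single-line schedule described above (descending from
# 4501.3 Hz to 1501.3 Hz in 500 Hz steps). Illustration only -- not the
# actual HIGH_FREQ_LINES.py guardian code.

START_HZ = 4501.3
STOP_HZ  = 1501.3
STEP_HZ  = 500.0

def line_schedule(start=START_HZ, stop=STOP_HZ, step=STEP_HZ):
    """Return the list of roaming-line frequencies, highest first."""
    freqs = []
    f = start
    while f >= stop - 1e-9:       # small tolerance for floating-point round-off
        freqs.append(round(f, 1))
        f -= step
    return freqs

print(line_schedule())
# [4501.3, 4001.3, 3501.3, 3001.3, 2501.3, 2001.3, 1501.3]
```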
Could not reach Pcal folks to confirm SDF differences. Accepted the differences (see file below) to get back to Observing.
These changes are standard.
In other words: the temporary PCALX calibration lines that had been running for a few days (LHO aLOG 37952) were switched OFF the other day (see LHO aLOG 38148). There are many ways to do so, but that day they chose to zero out the "oscillator use" matrix, which sums all PCALX oscillators as desired. Element 1_1 has traditionally been reserved for the high-frequency roaming PCAL line used for Sudarshan's thesis. It was turned back "ON" by putting a 1.0 in the matrix; however, there's no gain on the oscillator, so nothing is coming out. Elements 1_2 and 1_3 were for the 333.9 Hz and 1083.3 Hz lines, which have now been zeroed. The reason these showed up as an SDF difference in the OBSERVE snap is that these lines had been running during observation-ready data for the past few days. So accepting these values is just accepting that the lines are turned OFF.
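To illustrate that bookkeeping, here is a minimal sketch of how an "oscillator use" row sums the oscillator outputs. The element assignments (1_1 for the roaming line, 1_2 and 1_3 for the 333.9 Hz and 1083.3 Hz lines) follow the entry above; the oscillator output values are purely illustrative.

```python
import numpy as np

# Sketch of the "oscillator use" matrix bookkeeping described above.
# Row 1 feeds the PCALX excitation; columns select oscillators:
#   1_1 -> high-frequency roaming line (Sudarshan's thesis line)
#   1_2 -> 333.9 Hz line, 1_3 -> 1083.3 Hz line
use_row = np.array([1.0, 0.0, 0.0])    # 1_1 re-enabled; 1_2 and 1_3 zeroed

# Oscillator outputs at some instant (illustrative values only). The roaming
# line's oscillator currently has zero gain, so its output is 0.0 regardless
# of its use-matrix element.
osc_outputs = np.array([0.0, 0.3, -0.1])

excitation = use_row @ osc_outputs     # summed drive sent toward PCALX
print(excitation)                      # 0.0 -- nothing comes out, as noted above
```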
This sped up the exec time from ~7 sec to ~0.2 sec. I have rarely seen it this fast, and I had to slightly change two of the tests to handle this. I also cleaned up a few of the tests, including some old ones, while I was in there.
Edit: Forgot to attach the 5day trend of the exec time for future reference.