| optic | absolute distance from IO_AB_L4 | focal length and beam diam. |
| IO_AB_L4 | 0 | fl = 1145.6mm |
| IO_lens3 | 43.4cm | fl = -618.2mm |
| IO_lens4 | 123.3cm | fl = 720.2mm |
| IOGigE1 | 163.1cm | beam diameter = 1.9mm |
| IOGigE2 | 245.5cm | beam diameter = 1.9mm |
Beam spot measurements: according to these measurements, the beam spots moved very little from before to after the 5.8M EQ; all changes are 0.14mm or less.
| optic | alpha | beam dist from center (mm), 17 July 2017 | diff from 17 May 2017 (mm) |
| mc1 p | -0.057 | 2.40 | 0.10 |
| mc1 y | 0.092 | 3.87 | 0.08 |
| mc2 p | -0.183 | 7.67 | 0.04 |
| mc2 y | -0.020 | -0.85 | -0.04 |
| mc3 p | -0.054 | 2.26 | 0.10 |
| mc3 y | -0.138 | -5.84 | -0.14 |
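As a quick sanity check on the table above, this short snippet (values copied from the last column, taken to be the change in spot position in mm between 17 May and 17 July 2017) confirms the "0.14mm or less" claim:

```python
# Spot-position changes (mm) from the last column of the table above.
diffs = {
    "mc1 p": 0.10, "mc1 y": 0.08,
    "mc2 p": 0.04, "mc2 y": -0.04,
    "mc3 p": 0.10, "mc3 y": -0.14,
}
max_shift = max(abs(v) for v in diffs.values())
print(f"largest spot shift: {max_shift:.2f} mm")  # → largest spot shift: 0.14 mm
```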
Dave and TJ:
we rebooted h1guardian0 because its load average has regularly been in the 50s. TJ recovered the GRD nodes following the reboot.
Looking over the trends since the interface chassis was restarted and the power cable secured, I see no large glitches, in particular none appearing on corner3 only, that aren't seen on all CPSs or that aren't related to ground motion. I can't access FRS to give a ticket number, but I'll keep an eye on this for a few more days.
I've updated FRS Ticket 8517.
This morning I performed the weekly PSL FAMIS tasks.
HPO Pump Diode Current Adjust (FAMIS 8431)
I adjusted the HPO pump diode currents, changes summarized in the table below. All adjustments were done with the ISS OFF. I have attached a screenshot of the PSL Beckhoff main screen for future reference.
| HPO Diode Currents (A) | | |
| Diode | Old | New |
| DB1 | 48.9 | 49.0 |
| DB2 | 51.8 | 52.0 |
| DB3 | 51.8 | 52.0 |
| DB4 | 51.8 | 52.0 |
I then tweaked the DB operating temperatures. The only change was to DB1, where all diodes changed from 29.0 °C to 28.5 °C; no changes were made to the operating temps of DBs 2, 3, or 4. The HPO is now outputting 154.9 W and the ISS has been turned back ON. This completes FAMIS 8431.
PSL Power Watchdog Reset (FAMIS 3659)
I reset both PSL power watchdogs at 16:26 UTC (9:26 PDT). This completes FAMIS 3659.
Tried two different browsers; both returned:
This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.
This is due to a big Grouper update - see LLO aLOG 34921. It should not affect local logins to CDS workstations or the main aLOGs, but it does affect all remote logins and web site access. This was not intended, and they are working hard to fix it.
IFO in Observing for past 4 hours. Environmental conditions are good. A2Ls are both below reference. Range is holding around 48Mpc.
TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:
Main issue of the evening was a 7.7 earthquake. There was some initial confusion about which ISI_CONFIG state to go to after the EQ (it knocked us out without much heads up). We originally went to LARGE_EARTHQUAKE_NOBRSXY, then switched to EARTHQUAKE_V2, and ultimately returned to the correct LARGE_EARTHQUAKE_NOBRSXY. Then it was a waiting game for most of the shift.
H1 has elevated noise from 25 Hz and above, giving us a range of 45Mpc.
We worried about violin modes, but they seem OK (under 1e-15).
LOG:
ALSx is fine.
ALSy has not been able to stay locked on a TEM00 mode. I've tried tweaking the ETMy alignment to no avail. I am currently in LOCKED_NO_SLOW_NO_WFS and want to reduce the WFS & CAM signals for the Y-arm, but it simply doesn't stay locked long enough for me to relieve the large output values.
Problem Resolved: I was looking elsewhere and Thomas V suggested a Clear History for ALSy & this did the trick.
All SEI & SUS systems are back.
Going to check on IMs and then return to locking.
P.S. EQ was upgraded to 7.7
J. Kissel
As we're getting slammed by another earthquake, I wonder if there are a few things team SEI can change in order to get the SEI platforms back to DAMPED (HEPI robustly isolated on L4Cs, HAMs damping with GS13s, BSC ISIs damping with L4Cs and GS13s) and optics back to DAMPED as soon as possible. Here are the flaws I see at the moment:
(1) The SEI_CONFIG manager doesn't turn off BSC-ISI Z sensor correction (at least in EARTHQUAKE_v2), so giant EQs like the current one still push HEPI in Z enough to make the T240s angry and trip the ISI's watchdog, even though the T240s aren't yet involved in any active control. The SEI manager should have a state where it turns off ALL sensor correction, not just horizontal sensor correction.
EDIT: Never mind => I had not read Jim's updates to the SEI_CONFIG Guide. We should be in LARGE_EQ_NOBRSXY, and that does indeed turn off ALL sensor correction.
(2) This may be hard, but it would be best if the T240s weren't in the watchdog system until they're used.
Just planting the seed for discussion, no need to pull out ECR pens just yet.
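For discussion purposes, the "turn off ALL sensor correction" idea in (1) can be sketched in Guardian-style Python. This is an illustrative sketch only: the channel names and helper below are hypothetical, not the production SEI_CONFIG code.

```python
# Hypothetical sketch of an all-sensor-correction-off step for a SEI_CONFIG
# state. Channel names are illustrative stand-ins, not real EPICS channels.
SENSOR_CORRECTION_GAINS = [
    "ISI-GND_STS_X_GAIN",  # horizontal sensor correction
    "ISI-GND_STS_Y_GAIN",  # horizontal sensor correction
    "ISI-GND_STS_Z_GAIN",  # vertical path that EARTHQUAKE_v2 leaves on
]

def turn_off_all_sensor_correction(write_channel):
    """Zero every sensor-correction gain, horizontal AND vertical."""
    for chan in SENSOR_CORRECTION_GAINS:
        write_channel(chan, 0.0)

# Usage with a stand-in for the EPICS writer (a dict instead of caput):
settings = {}
turn_off_all_sensor_correction(lambda c, v: settings.__setitem__(c, v))
```

The point of the sketch is simply that the vertical (Z) gain is zeroed along with X and Y, which is the behavior the proposed state would need.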
Re #2: My recollection was that the T240s are ignored by the watchdog; otherwise, while they were riled up, they would trip it, which they do not. Regardless, I reviewed the model, and indeed I recalled correctly: the T240s are ignored in the WD as long as all the isolation gains are zero. I'm not sure what Corey was experiencing during the EQ ring-down that suggested the T240s were the problem.
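The gating behavior described in the model (T240s ignored by the watchdog until any isolation gain is nonzero) can be sketched as follows; the function name and gain list are illustrative, not the actual front-end model logic:

```python
# Sketch of the watchdog gating described above: T240 trips are ignored
# as long as every isolation gain is zero (names are illustrative).
def t240_can_trip_watchdog(isolation_gains):
    """T240s only participate in the WD once any isolation gain is nonzero."""
    return any(g != 0.0 for g in isolation_gains)

assert not t240_can_trip_watchdog([0.0, 0.0, 0.0])  # damped-only: T240s ignored
assert t240_can_trip_watchdog([1.0, 0.0, 0.0])      # isolating: T240s monitored
```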
In the SEI World:
Waiting on BSC ISIs to come back. Jeff is looking at them.
In the SUS World:
Jeff K said these most likely tripped due to the SEIs tripping, so these were damped as a first order & they all came back fine.
Will look at IM pointing next, lock up IMC & wait for BSC ISIs.
TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
H1's in OBSERVING w/ the DARM spectrum elevated from 25 Hz and above, giving us a range hovering at ~46Mpc.
TITLE: 07/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Most of the day was spent on the Fast Shutter; a PSL trip slowed things down earlier.
LOG:
15:00 I came in and several chambers were tripped from the 6.5 in Russia, I recover and start IA
15:00 Richard to LVEA to cycle fast shutter chassis
15:30 PSL Trip, Jason is called, recover takes ~hour
18:30 After 2 rounds of IA, I make it to DC Readout. Richard and I test the fast shutter, which breaks lock; the next several hours are spent trying to convince ourselves (commissioners, et al.) that the fast shutter is working normally. Locking is easy at this point.
21:40 NLN, waiting for PI and A2L to run
22:30 Observing, with awful range
J. Oberling, E. Merilh, J. Warner
The PSL tripped this morning, likely due to low flow in laser head 3. As can be seen in this alog, the flow in head 3 (and only head 3) dropped from 0.52 lpm to ~0.47 lpm sometime on the morning of Saturday, 7/15/2017. It is likely this was the cause of the laser trip. I will do more forensics when I am onsite tomorrow. When restarting the crystal chiller, the flow in laser head 3 was back up around 0.54 lpm. Perhaps something worked its way through the system, causing the laser trip?
When restarting the laser, the Beckhoff software appeared to lose communication with the hardware of the PSL, requiring a reboot of the PSL Beckhoff PC. Once this was done, everything worked fine and the laser came back up. I had difficulty injection locking the 35W FE to the HPO; I believe this is due to the lower power out of the HPO. I engaged the injection locking by turning the RAMP OFF, and monitoring the power circulating in both directions of the HPO. When the circulating power favored the forward direction, I manually engaged the injection locking. This worked the first time and the laser is now back up and running. Ed re-engaged the PMC, FSS, and ISS, and Jim reset the NPRO noise eater. By this time ~30 minutes had elapsed, so I engaged the LRA and the power watchdogs. The laser is now up and running.
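The manual injection-locking step described above (watch the circulating power in both directions of the HPO, engage when forward dominates) can be sketched as a polling loop. The function names and timeout are assumptions for illustration, not the actual Beckhoff interface:

```python
# Sketch of the manual injection-locking step described above.
# read_power() and engage_locking() are stand-ins for the real Beckhoff
# controls; a real loop would sleep for dt between polls.
def wait_and_engage(read_power, engage_locking, timeout_s=60.0, dt=0.1):
    """Engage injection locking once forward circulating power dominates."""
    t = 0.0
    while t < timeout_s:
        forward, backward = read_power()
        if forward > backward:  # circulating power favors the forward direction
            engage_locking()
            return True
        t += dt
    return False
```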
J. Kissel filed FRS 8539 for this trip.
Looking at some trends, it appears that it was Laser Head 4 that tripped the PSL, not Laser Head 3 as previously surmised. The first attachment shows the 4 laser head flows around the time of the trip. The second shows the flow through Head 4 and the Head 1-4 Flow Interlock, while the third attachment shows the flow through Head 3 and the interlock. It is clear from this that the flow through Head 4 was at the trip point of 0.4 lpm at the time the interlock tripped, while the flow through Head 3 remained above the trip point. It is unclear why the flow through Laser Head 4 fell so fast; possibly something moving through the system caused a glitch with the flow sensor?