Richard, Fil, Dave, Carlos, Cheryl
The two digital GIGE cameras Cheryl installed on the PSL table are now producing images.
The cameras are called h1iogige1 (10.106.0.52) and h1iogige2 (10.106.0.53).
I have reconfigured h1digivideo2's VID-CAM28 and VID-CAM29 to be IO GIGE 1 and 2. For some reason monit had VID-CAM28 commented out, so I reactivated this camera. I changed the MEDM screen to show the correct names (image attached).
The GigEs were not connected to the MEDM screens until after the installation was complete, so the cameras were aligned without being able to see the images; further alignment and more attenuation are needed.
Keita pointed out that we have only been using OMC DCPD B for at least the last lock. I trended the relative gains of DCPD A and DCPD B, and it looks like we've been in this situation since 16 July 2017 at around 05:30 UTC.
Attached is a 5 day trend of the gains. The 2 earlier spikes are when we were either adding or removing a layer of whitening because the violin modes were too high. On the 16th, it looks like the guardian started to switch the whitening but then maybe got stopped halfway through. This explains why it has looked like the shot noise was too high for the last few days.
Clearly we need to write into the READY_FOR_HANDOFF OMC lock state to ensure that the 2 gains are both at their nominal values. Also, it looks like someone accepted the wrong values into the Observe SDF file, so we've been able to go to Observing with the wrong values :( No good. The safe SDF file was correct. I'll fix the Observe file.
EDIT: Looking through the guardian log file, it looks like the code gets a bit confused when you try to use these states before we're using the DCPDs for DARM. So, now if we're at DC_Readout_Transition or before (and don't need to do any gain ramping), we skip the gain changes and just change the whitening. If we're on DC_Readout or after, the change will happen as before. Either way, at the end of the state is now a check to see if the DCPD gains are equal (they should both be 1000). The new code is in the svn and has been reloaded, although not actually used.
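For illustration, here is a minimal sketch of that branching in guardian-style Python. It is not the actual OMC_LOCK code (which lives in the svn as noted); the channel names and getter/setter callables are placeholders.

# Hedged sketch, NOT the real OMC_LOCK guardian code: just the branching logic
# described above, with hypothetical getter/setter callables standing in for ezca.
NOMINAL_DCPD_GAIN = 1000.0

def switch_whitening(get, put, before_dc_readout):
    """Toggle a DCPD whitening stage; only touch the DCPD gains once the
    DCPDs are actually in use for DARM (i.e. at DC_READOUT or later)."""
    # Change the whitening in either case (placeholder channel name).
    put('OMC-DCPD_WHITEN_STAGE', 1 - get('OMC-DCPD_WHITEN_STAGE'))

    if not before_dc_readout:
        # At DC_READOUT or later: ramp the DCPD gains as before.
        for dcpd in ('A', 'B'):
            put('OMC-DCPD_%s_GAIN' % dcpd, NOMINAL_DCPD_GAIN)

    # Either way, end the state with a check that both gains are nominal.
    gains = [get('OMC-DCPD_%s_GAIN' % dcpd) for dcpd in ('A', 'B')]
    assert gains[0] == gains[1] == NOMINAL_DCPD_GAIN, \
        'DCPD gains not nominal: A=%g B=%g' % tuple(gains)

# Quick exercise of the logic with an in-memory stand-in for the channels:
chans = {'OMC-DCPD_WHITEN_STAGE': 0,
         'OMC-DCPD_A_GAIN': NOMINAL_DCPD_GAIN,
         'OMC-DCPD_B_GAIN': NOMINAL_DCPD_GAIN}
switch_whitening(chans.get, chans.__setitem__, before_dc_readout=True)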
Tagging CAL and DetChar.
The exact time of departure from this configuration is Jul 16 2017 05:24:00 UTC (1184217858) and the return to normal is Jul 18 2017 18:58:00 UTC (1184439498). We (the calibration group) have recommended that any observation time within this period be marked with a CAT1 veto, citing that the detector was in a non-O2-standard configuration and the calibration is suspect. Yes, it is plausible to reconstruct the calibration during these times, but given our limited person-power we have elected to abandon the effort at this time.
No problem, I made a flag and inserted it into the segment database. H1:DCH-CAL_NON_O2_STANDARD_CONFIG:1 captures this time.
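For reference, a minimal sketch of how that flag span can be represented with gwpy (the actual insertion into the segment database via DQSEGDB is not shown); the GPS times are those quoted above.

# Sketch only: build the flag locally with gwpy.
from gwpy.segments import DataQualityFlag, Segment, SegmentList

start, end = 1184217858, 1184439498  # GPS span from the calibration group note
span = SegmentList([Segment(start, end)])
flag = DataQualityFlag('H1:DCH-CAL_NON_O2_STANDARD_CONFIG:1',
                       known=span, active=span)
print(flag.name, flag.active)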
optic | absolute distance from IO_AB_L4 | focal length and beam diam. |
IO_AB_L4 | 0 | fl = 1145.6mm |
IO_lens3 | 43.4cm | fl = -618.2mm |
IO_lens4 | 123.3cm | fl = 720.2mm |
IOGigE1 | 163.1cm | beam diameter = 1.9mm |
IOGigE2 | 245.5cm | beam diameter = 1.9mm |
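As a cross-check of the numbers in this table, one could propagate a Gaussian beam through the lens train with ABCD matrices. The sketch below shows the method only; the input beam waist size and location are assumptions for illustration, since they are not given here.

# Hedged sketch: ABCD (ray-transfer matrix) propagation of a Gaussian beam
# through the lens train above, to estimate the spot size at the GigE cameras.
import numpy as np

LAM = 1064e-9  # m, assuming the 1064 nm PSL beam

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def propagate(q, *elements):
    """Apply ABCD elements (in beam order) to a complex q-parameter."""
    M = np.eye(2)
    for E in elements:
        M = E @ M
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def beam_radius(q):
    return np.sqrt(-LAM / (np.pi * np.imag(1.0 / q)))

# Assumed input beam: 1 mm waist radius located at IO_AB_L4 (illustrative only).
w0 = 1.0e-3
q0 = 1j * np.pi * w0**2 / LAM

# Distances and focal lengths from the table, converted to metres.
q_gige1 = propagate(q0,
                    thin_lens(1.1456),          # IO_AB_L4
                    free_space(0.434),          # to IO_lens3
                    thin_lens(-0.6182),         # IO_lens3
                    free_space(1.233 - 0.434),  # to IO_lens4
                    thin_lens(0.7202),          # IO_lens4
                    free_space(1.631 - 1.233))  # to IOGigE1
print('estimated beam diameter at IOGigE1: %.2f mm' % (2e3 * beam_radius(q_gige1)))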
Beam spot measurements: according to these measurements, the beam spots moved very little from before to after the 5.8M EQ; all changes are 0.14mm or less (a quick check follows the table below).
optic | alpha | beam dist from center (mm), 17 July 2017 | diff from 17 May 2017 (mm)
mc1 p | -0.057 | 2.40 | 0.10 |
mc1 y | 0.092 | 3.87 | 0.08 |
mc2 p | -0.183 | 7.67 | 0.04 |
mc2 y | -0.020 | -0.85 | -0.04 |
mc3 p | -0.054 | 2.26 | 0.10 |
mc3 y | -0.138 | -5.84 | -0.14 |
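A quick numerical check of the 0.14 mm claim, using the diff column of the table:

# Largest absolute beam-spot change (diff column, mm) between 17 May and 17 July.
diffs_mm = [0.10, 0.08, 0.04, -0.04, 0.10, -0.14]
print(max(abs(d) for d in diffs_mm))  # -> 0.14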
Dave and TJ:
we rebooted h1guardian0 because its load average has regularly been in the 50s. TJ recovered the GRD nodes following the reboot.
Looking over the trends since the interface chassis was restarted and the power cable secured, I see no large glitches unique to corner 3, i.e., none that aren't seen on all CPSs and that aren't related to ground motion. I can't access FRS to give a ticket number, but I'll keep an eye on this for a few more days.
I've updated FRS Ticket 8517.
This morning I performed the weekly PSL FAMIS tasks.
HPO Pump Diode Current Adjust (FAMIS 8431)
I adjusted the HPO pump diode currents; the changes are summarized in the table below. All adjustments were done with the ISS OFF. I have attached a screenshot of the PSL Beckhoff main screen for future reference.
HPO Diode Currents (A)
DB | Old | New
DB1 | 48.9 | 49.0 |
DB2 | 51.8 | 52.0 |
DB3 | 51.8 | 52.0 |
DB4 | 51.8 | 52.0 |
I then tweaked the DB operating temperatures. The only change was to DB1, whose diodes were all changed from 29.0 °C to 28.5 °C; no changes were made to the operating temperatures of DBs 2, 3, or 4. The HPO is now outputting 154.9 W and the ISS has been turned back ON. This completes FAMIS 8431.
PSL Power Watchdog Reset (FAMIS 3659)
I reset both PSL power watchdogs at 16:26 UTC (9:26 PDT). This completes FAMIS 3659.
Tried two different browsers; both give the same error:
This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.
This is due to a big Grouper update - see LLO aLOG 34921. It should not affect local logins to CDS workstations or the main aLOGs, but it does affect all remote logins and web site access. This was not intended, and they are working hard to fix it.
IFO in Observing for the past 4 hours. Environmental conditions are good. A2Ls are both below reference. Range is holding around 48 Mpc.
TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:
Main issue of the evening was a 7.7 earthquake. There was some initial confusion about which ISI_CONFIG state to go to after the EQ (it knocked us out without much heads up). Originally went to LARGE_EARTHQUAKE_NOBRSXY, then switched to EARTHQUAKE_V2, and ultimately went back to the correct LARGE_EARTHQUAKE_NOBRSXY. Then it was a waiting game for most of the shift.
H1 has excess noise from about 25 Hz up, giving us a range of 45 Mpc.
We worried about violin modes, but they seem OK (under 1e-15).
LOG:
ALSx is fine.
ALSy has not been able to stay locked on a 00 mode. I've tried tweaking ETMy to no avail. I am currently in LOCKED_NO_SLOW_NO_WFS and want to reduce the WFS & CAM signals for the Y-arm, but it simply doesn't stay locked long enough for me to relieve the large output values.
Problem Resolved: I was looking elsewhere and Thomas V suggested a Clear History for ALSy & this did the trick.
All SEI & SUS systems are back.
Going to check on IMs and then return to locking.
P.S. EQ was upgraded to 7.7
J. Kissel

As we're getting slammed by another earthquake, I wonder if there are a few things team SEI can change in order to get the SEI platforms back to DAMPED (HEPI robust isolated on L4Cs, HAMs damping with GS13s, BSC ISIs damping with L4Cs and GS13s) and optics back to DAMPED as soon as possible. Here are the flaws I see at the moment:

(1) The SEI_CONFIG manager doesn't turn off BSC-ISI Z sensor correction (at least in EARTHQUAKE_V2), so giant EQs like the current one still push HEPI in Z enough that it makes the T240s angry and trips the ISI's watchdog, even though the T240s aren't yet involved in any active control. The SEI manager should have a state where it turns off ALL sensor correction, not just horizontal sensor correction. EDIT: Nevermind => I had not read Jim's updates to the SEI_CONFIG Guide. We should be in LARGE_EQ_NOBRSXY, and that does indeed turn off ALL sensor correction.

(2) This may be hard, but it would be best if the T240s weren't in the watchdog system until they're used. Just planting the seed for discussion, no need to pull out ECR pens just yet.
Re #2: my recollection was that the T240s are ignored in the watchdog; otherwise, while they were riled up, the WD would trip, which it does not. Regardless, I reviewed the model and indeed recalled correctly that the T240s are ignored in the WD as long as all the isolation gains are zero (a minimal sketch of that logic follows). I'm not sure what situation Corey was experiencing during the EQ ring-down that suggested the T240s were the problem.
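To illustrate the behavior just described (this is not the actual front-end/RCG watchdog code), a minimal sketch of a check that ignores T240 saturations whenever all the isolation gains are zero:

# Sketch only: the real watchdog lives in the front-end model, not in Python.
def t240_counts_toward_watchdog(t240_saturated, isolation_gains):
    """T240 saturations only count once any isolation gain is nonzero,
    i.e. once the T240s are actually used in active control."""
    return t240_saturated and any(g != 0 for g in isolation_gains)

# With all isolation gains at zero (damping only), a riled-up T240 is ignored:
assert t240_counts_toward_watchdog(True, [0.0, 0.0, 0.0]) is False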
J. Oberling, E. Merilh, J. Warner
The PSL tripped this morning, likely due to the flow in laser head 3. As can be seen in this alog, the flow in head 3 (and only head 3) dropped from 0.52 lpm to ~0.47 lpm sometime on the morning of Saturday, 7/15/2017. It is likely this was the cause of the laser trip; I will do more forensics when I am onsite tomorrow. When restarting the crystal chiller, the flow in laser head 3 was back up around 0.54 lpm. Maybe something worked its way through the system, causing the laser trip?
When restarting the laser, the Beckhoff software appeared to lose communication with the PSL hardware, requiring a reboot of the PSL Beckhoff PC. Once this was done, everything worked fine and the laser came back up. I had difficulty injection locking the 35W FE to the HPO; I believe this is due to the lower power out of the HPO. I engaged the injection locking by turning the RAMP OFF and monitoring the power circulating in both directions of the HPO; when the circulating power favored the forward direction, I manually engaged the injection locking. This worked on the first try. Ed re-engaged the PMC, FSS, and ISS, and Jim reset the NPRO noise eater. By this time ~30 minutes had elapsed, so I engaged the LRA and the power watchdogs. The laser is now up and running.
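A sketch of the manual injection-locking procedure described above; the reader/actuator callables are hypothetical stand-ins, not real PSL channels or the Beckhoff interface.

import time

def engage_injection_locking(read_forward_w, read_backward_w, set_ramp, engage_lock,
                             poll_s=0.1, timeout_s=60.0):
    """Turn the RAMP off, watch the power circulating in both directions of the
    HPO, and engage the injection locking once the forward direction is favored.
    All four callables are hypothetical stand-ins for the real PSL controls."""
    set_ramp(False)  # RAMP OFF
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if read_forward_w() > read_backward_w():
            engage_lock()  # circulating power favors the forward direction
            return True
        time.sleep(poll_s)
    return False  # forward direction never favored within the timeout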
J. Kissel filed FRS 8539 for this trip.
Looking at some trends, it appears that it was Laser Head 4 that tripped the PSL, not Laser Head 3 as previously surmised. The first attachment shows the 4 laser head flows around the time of the trip. The second shows the flow through Head 4 and the Head 1-4 Flow Interlock, while the third shows the flow through Head 3 and the interlock. It is clear from this that the flow through Head 4 was at the trip point of 0.4 lpm at the time the interlock tripped, while the flow through Head 3 remained above the trip point. It is unclear why the flow through Laser Head 4 fell so fast; possibly something moving through the system caused a glitch with the flow sensor?
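For reference, the interlock condition being described is just a low-flow threshold; a minimal sketch (the real interlock is implemented in the PSL Beckhoff system):

# Sketch of the 0.4 lpm laser-head flow trip point described above.
TRIP_POINT_LPM = 0.4

def flow_interlock_ok(flow_lpm):
    """True while a laser head's coolant flow is above the trip point."""
    return flow_lpm > TRIP_POINT_LPM

print(flow_interlock_ok(0.47))  # ~Head 3 during the event -> True (stays up)
print(flow_interlock_ok(0.40))  # Head 4 at the trip point -> False (interlock trips)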