J. Kissel
As we're getting slammed by another earthquake, I wonder if there are a few things that team SEI can change in order to get the SEI platforms back to DAMPED (HEPI robust isolated on L4Cs, HAMs damping with GS13s, BSC ISIs damping with L4Cs and GS13s) and optics back to DAMPED as soon as possible. Here are the flaws I see at the moment:
(1) The SEI_CONFIG manager doesn't turn off BSC-ISI Z sensor correction (at least in EARTHQUAKE_v2), so giant EQs like the current one still push HEPI in Z enough that it makes the T240s angry and trips the ISI's watchdog, even though the T240s aren't yet involved in any active control. The SEI manager should have a state where it turns off ALL sensor correction, not just horizontal sensor correction. EDIT: Never mind, I had not read Jim's updates to the SEI_CONFIG Guide. We should be in LARGE_EQ_NOBRSXY, and that does indeed turn off ALL sensor correction.
(2) This may be hard, but it would be best if the T240s weren't in the watchdog system until they're used. Just planting the seed for discussion, no need to pull out ECR pens just yet.
Re #2: My recollection was that the T240s are ignored; otherwise they would trip while they're rung up, which they do not. Regardless, I reviewed the model, and indeed I recalled correctly that the T240s are ignored in the WD as long as all the isolation gains are zero. I'm not sure what situation Corey was experiencing during the EQ ring-down that suggested the T240s were the problem.
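For anyone curious how that bypass behaves, here is a minimal Python sketch of the logic described above. The real implementation lives in the BSC-ISI front-end (RCG/Simulink) model, so the function name, gain list, and saturation threshold below are illustrative placeholders only, not the actual model code.

# Illustrative sketch only -- the real logic is in the BSC-ISI front-end model.
# Names and thresholds are placeholders.

def t240_can_trip_watchdog(t240_counts, isolation_gains, saturation_limit=30000):
    """Return True only if the T240s should be allowed to trip the watchdog.

    Per the comment above, T240 saturations are ignored as long as every
    isolation gain is zero, i.e. while the sensors are not yet part of any
    active control loop.
    """
    # If no isolation loop is closed, the T240s cannot trip the watchdog,
    # no matter how rung up they are.
    if all(gain == 0 for gain in isolation_gains):
        return False
    # Otherwise a saturated T240 counts toward the watchdog trip.
    return any(abs(c) > saturation_limit for c in t240_counts)

# Example: rung-up T240s during an EQ ring-down, isolation loops still open.
print(t240_can_trip_watchdog([31000, -2900, 500], [0, 0, 0]))  # False
print(t240_can_trip_watchdog([31000, -2900, 500], [1, 1, 1]))  # True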
In the SEI World:
Waiting on BSC ISIs to come back. Jeff is looking at them.
In the SUS World:
Jeff K said these most likely tripped due to the SEIs tripping, so these were damped as a first step & they all came back fine.
Will look at IM pointing next, lock up IMC & wait for BSC ISIs.
TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 11mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
H1's in OBSERVING w/ the DARM spectrum elevated from 25 Hz and above, thus giving us a range hovering at ~46 Mpc.
TITLE: 07/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Most of the day spent with Fast Shutter, PSL trip slowed things down earlier
LOG:
15:00 I came in and several chambers were tripped from the 6.5 in Russia, I recover and start IA
15:00 Richard to LVEA to cycle fast shutter chassis
15:30 PSL Trip, Jason is called, recovery takes ~1 hour
18:30 After 2 rounds of IA, I make it to DC Readout, Richard and I test the fast shutter which breaks lock, next several hours are spent trying to convince ourselves (commissioners, et al.) that the fast shutter is working normally, but locking is easy at this point
21:40 NLN, waiting for PI and A2L to run
22:30 Observing, with awful range
There was a PSL trip this morning due to an HPO Osc Head-3 flow error. Jason's aLog has the details. Everything else up to that point seems normal.
J. Oberling, E. Merilh, J. Warner
The PSL tripped this morning, likely due to the flow in laser head 3. As can be seen in this alog, the flow in head 3 (and only head 3) dropped from 0.52 lpm to ~0.47 lpm sometime on the morning of Saturday, 7/15/2017. It is likely this was the cause of the laser trip. I will do more forensics when I am onsite tomorrow. When restarting the crystal chiller, the flow in laser head 3 was back up around 0.54 lpm. Maybe something worked its way through the system, causing the laser trip?
When restarting the laser, the Beckhoff software appeared to lose communication with the PSL hardware, requiring a reboot of the PSL Beckhoff PC. Once this was done, everything worked fine and the laser came back up. I had difficulty injection locking the 35W FE to the HPO; I believe this is due to the lower power out of the HPO. I engaged the injection locking by turning the RAMP OFF and monitoring the power circulating in both directions of the HPO. When the circulating power favored the forward direction, I manually engaged the injection locking. This worked the first time. Ed re-engaged the PMC, FSS, and ISS, and Jim reset the NPRO noise eater. By this time ~30 minutes had elapsed, so I engaged the LRA and the power watchdogs. The laser is now up and running.
J. Kissel filed FRS 8539 for this trip.
Looking at some trends, it appears that it was Laser Head 4 that tripped the PSL, not Laser Head 3 as previously surmised. The first attachment shows the 4 laser head flows around the time of the trip. The second shows the flow through Head 4 and the Head 1-4 Flow Interlock, while the third attachment shows the flow through Head 3 and the interlock. It is clear from this that the flow through Head 4 was at the trip point of 0.4 lpm at the time the interlock tripped, while the flow through Head 3 remained above the trip point. It is unclear why the flow through Laser Head 4 fell so fast; perhaps something moving through the system caused a glitch with the flow sensor?
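For reference, one way to double-check which head reached the interlock trip point is to pull the flow trends around the trip time and compare their minima against the 0.4 lpm threshold. Below is a rough gwpy sketch of that check; the channel names and the time window are guesses made up for illustration, not the actual channels behind the attached trends, and it assumes an NDS2 connection is available.

# Rough sketch: compare PSL head-flow trends to the 0.4 lpm interlock trip point.
# Channel names and time window are placeholders.
from gwpy.timeseries import TimeSeriesDict

TRIP_LPM = 0.4                                                   # Head 1-4 flow interlock trip point
channels = [f'H1:PSL-OSC_HEAD{n}_FLOW' for n in (1, 2, 3, 4)]    # hypothetical channel names
start, end = '2017-07-17 15:00', '2017-07-17 16:00'              # UTC, around the trip (placeholder)

flows = TimeSeriesDict.get(channels, start, end)

for name, flow in flows.items():
    lowest = flow.value.min()
    flag = '  <-- reached trip point' if lowest <= TRIP_LPM else ''
    print(f'{name}: min = {lowest:.3f} lpm{flag}')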
model restarts logged for Sun 16/Jul/2017 - Wed 12/Jul/2017 No restarts reported
model restarts logged for Tue 11/Jul/2017
2017_07_11 10:25 h1asc
2017_07_11 10:27 h1dc0
2017_07_11 10:29 h1broadcast0
2017_07_11 10:29 h1fw0
2017_07_11 10:29 h1fw1
2017_07_11 10:29 h1fw2
2017_07_11 10:29 h1nds0
2017_07_11 10:29 h1nds1
2017_07_11 10:29 h1tw1
Maintenance day, new ASC code with associated DAQ restart
model restarts logged for Mon 10/Jul/2017 - Sat 08/Jul/2017 No restarts reported
Here are the past 3 days of trends of the HPO head flow rates, at Jason's request, due to the laser trip.
I added 150ml to the xtal chiller after it was restarted.
After a failed startup attempt of the laser, I had to add another 235ml.
TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 0Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: recovery from 5.8M EQ continues, with some interesting results
LOG:
Cheryl got in contact with Jeff B. regarding a canceled Owl shift tonight. Sheila contacted Keita & me regarding this.
Jim and I have been adjusting the interferometer alignment for the last few hours.
IMC alignment
We noticed, as has been noted by a few people, that the IMC alignment is different from before the EQ according to the witness sensors and the spots on the QPDs (IMC WFS DC, IM4, and ISS QPD). We tried restoring the optics and input PZT using witness sensors, as I am sure people have tried in the last week, and saw that this made it impossible to lock the mode cleaner. Next we went back to the alignment from overnight last night. I moved the uncontrolled DOF (DOF4) to bring the IMC WFS DC spots to where they were before the EQ (this was the idea behind Suresh's effort to control DOF4 several years ago). Jim restored IM1, 2, 3 to their pre-EQ alignments using the osems (the biggest move was IM1 pit). This restored the spot positions on IM4 trans and the ISS QPD to their pre-EQ positions. (See the attached plots of QPDs and range so you can identify the times of the EQ and the other alignment changes made while trying to recover from it.)
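As a purely illustrative aside, the kind of spot restoration described above (stepping an offset on the uncontrolled DOF until a WFS DC spot readback returns to its pre-EQ value) could in principle be scripted along these lines. Every channel name, step size, reference value, and tolerance below is an invented placeholder, not the actual H1 channels or the procedure we ran.

# Illustrative only: a crude step-and-check loop for walking one alignment
# degree of freedom until a QPD/WFS spot readback matches a pre-EQ reference.
# All channel names and numbers are placeholders.
import time
from epics import caget, caput

OFFSET_CH   = 'H1:IMC-DOF_4_P_OFFSET'        # hypothetical uncontrolled-DOF offset
READBACK_CH = 'H1:IMC-WFS_A_DC_PIT_OUT16'    # hypothetical WFS DC pitch readback
REFERENCE   = 0.12                           # pre-EQ spot position (placeholder)
STEP        = 1e-3
TOLERANCE   = 5e-3

for _ in range(200):
    error = caget(READBACK_CH) - REFERENCE
    if abs(error) < TOLERANCE:
        break
    # Step the offset against the sign of the error, then let the spot settle.
    caput(OFFSET_CH, caget(OFFSET_CH) - STEP * (1 if error > 0 else -1))
    time.sleep(2)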
One thing we noticed, which Corey has also logged, is that there is some kind of mistake in the IMC WFS offload script which misaligns the PZT. We could replace this with our generic WFS offload guardian state anyway.
PR3+SR3 osem jumps
This is also something that other people have noted, but there is a large jump in the osem values compared to the oplevs for PR3 + SR3 after the EQ. We spent some time worrying about both of these optics because of a problem which turned out to be the fast shutter. PR3 moved after the EQ, but the osem and oplev do not agree about the size of the move, and it seems like it is some kind of permanent shift in the value of the osem. Cheryl has a nice plot that shows this. It seems like people already knew this and had realigned PR3 correctly using the oplev and reset the SR3 cage servo soon after the EQ.
Fast shutter problem
Jim and I did an initial alignment after the IMC + IM move, but ran into trouble at SRC align. Cheryl and I spent several hours with this problem, which at first seemed very mysterious. We could not move the beam in single bounce to center it on AS_C; if we tried to do this using either SR2 or SR3, the image on the AS camera would turn into a huge interference/clipping blob, and the power on the AS_C QPD would drop nearly to zero. We tried moving many things with no improvement, but in the end closing the AS shutter fixed the problem. Our best guess is that the shutter was somehow stuck in a half open state from about 22 UTC July 16th (or earlier) to about 4:30 UTC July 17th. This could have happened as early as this morning's unexplained lock loss. This happened a few more times while we were trying to test the shutter in DRMI. It looks (from the strip tool) like the shutter never actually closes but gets stuck half shut while it is opening.
We are not going to lock for the rest of the night because we don't think the shutter is correctly protecting the OMC. We are trying to contact Jeff B to cancel the OWL shift.
Final note: I reverted the POP A offsets to what they were before the EQ, in hopes that the IFO alignment is now more similar to that. This worked in DRMI.
Opened FRS ticket 8540 for fast shutter issues.
With the instrument down and the earthquake ringing, I went out on the floor to investigate the fast shutter. The first thing I noticed was that the front panel display was not showing the normal capacitor charging 250V, but instead a garbled message (see pictures). I rebooted the controller and everything appears to be normal. I ran a test on the shutter and the HAM ISI tripped. Before I could look at light levels the PSL tripped, so I will have to look once we restore.
Regarding the LCD display on the front of the fast shutter, we have seen this type of symptom before. Once in a while (not often by any means) the large electromagnetic pulse that is radiated out of the shutter driver chassis circuitry exceeds some critical threshold and upsets the communication to the LCD display. Shielding was added to the LCD interface cabling, and it was believed that the addition of this shielding fixed the problem. It appears as though there is still some finite chance that a pulse can upset the display. The good news (if you can call it that) is that the display operation is completely separate from the operation of the shutter, so the garbled LCD is not an indicator of a malfunction in the actual shutter operation inside HAM6. The bad news is that if the shutter has started to behave differently, the likelihood of it being a good thing is essentially zero. Any chance that the beam is hitting the thin wiring leading to the moveable portion of the shutter should be taken seriously. If anything (lock-loss transient etc.) causes a blemish to be formed in the Teflon insulation, the normal flexing behavior will likely change and prefer flexing at the damaged spot leading to fatigue failure of the wire. I would advise great caution and scrutiny be applied to the shutter for a while. The tendency to get stuck in a partly blocking position didn't exist at installation time, so change may be afoot and it is not likely for the better.
TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 0Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 1.20 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
TITLE: 07/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Initial Alignment
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Lockloss early, spent most of the day aligning, ongoing
LOG:
16:00 Lockloss, Sheila and I start revisiting alignment, ongoing
TITLE: 07/16 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 46Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Observing for 4 hours. No issues to report.
LOG: None.