MID-SHIFT SUMMARY:
IFO locked at NLN, 78 Mpc, at 02:57 UTC.
Commissioners working; range is 54 Mpc.
SDF is lit up red. Sheila mentioned that it probably hasn't been cleared since the power outage.
µSei Z-axis is riding above the 90th percentile. EQ bands are ≈0.08 µm/s. Winds are ≤15 mph.
STATE OF H1: Relocking
SHIFT SUMMARY: Moved RM1 and RM2. Ran initial alignment. Obtained first lock at NLN since the power outage. Lost lock shortly afterwards to a 6.6 magnitude earthquake in Mexico (see attached). The earthquake took about 5 hours to ring down. Manually damped ITMX and ITMY roll modes.
SUPPORT: Jenne, Evan, Sheila
INCOMING OPERATOR: Ed
ACTIVITY LOG:
17:02 UTC Bubba and Chris into end X mechanical room to lubricate fans (WP 5685)
17:45 UTC Bubba back from end X
18:47 UTC Bubba and Chris to end Y mechanical room to lubricate fans (WP 5685)
19:18 UTC Rick and Liyuan to LVEA and optics lab to look for optics
20:11 UTC Rick and Liyuan done
20:30 UTC Filiberto to Y2-8 to check on solar powered battery charge
21:05 UTC Kyle to end X air handling room to take measurements for building a stand for the ion pump
22:08 UTC Filiberto done at Y2-8 and going to X2-8
22:19 UTC Kyle back
22:21 UTC Dick G. to LVEA to look at RF racks
22:53 UTC Jeff B. to optics lab to work on dust monitors
22:59 UTC Filiberto done at X2-8
23:04 UTC Nutsinee and Jim B. to end Y VEA to turn on HWS
23:19 UTC The tidal common length integrator limits (H1:LSC-X_COMM_CTRL_LIMIT, H1:LSC-Y_COMM_CTRL_LIMIT) were being reached. Evan had me change them from 10 to 20.
23:27 UTC Nutsinee and Jim B. back
23:30 UTC Dick G. back
23:33 UTC Jeff B. back. He turned off the dust monitors in the optics labs.
00:34 UTC Kyle to mid Y to overfill CP3
1635 - 1650 hrs. local -> Back and forth to Y-mid. Next scheduled overfill: Saturday, Jan. 23rd.
Jim B, Nutsinee
Yesterday I managed to restart the ETMX HWS software but wasn't able to connect to the EY HWS camera via telnet (this camera is plugged into an external power supply). Today Jim and I went out, removed the relay box for troubleshooting, and plugged the EY HWS camera directly into the power supply (no way of powering it on and off remotely at the moment). I trended the cylindrical power of both the EX and EY HWS to see if the software had been running before yesterday's power outage. Neither had been running for the past 10 days. The ETMY cylindrical power trend shows that the software in fact hasn't been running since the last power outage in October (I couldn't get ETMX data). Since nobody noticed this and no one ever told me that it wasn't running, I hope it wasn't needed.
TITLE: Jan 19 EVE Shift 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE OF H1: Locking
DAY OPERATOR: Patrick
BRIEF SUMMARY: Recovering from a very large earthquake off the coast of western Mexico. The IFO bounce and roll modes are currently being monitored and manually damped; the rung-up modes appear to be the reason for lock losses around the RF_DARM stage.
The attached plot shows the TMSX and TMSY damping signals before and after the power outage; the TMSX signals are significantly noisier after the power outage.
This is a good indication that the noise really did start during the outage, rather than before. In my spectrum post (alog 25062) I mention that I couldn't promise 100% that this noise wasn't there before the outage, since we don't save the data at a high enough rate to look at old spectra.
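For reference, this sort of before/after comparison is straightforward to reproduce with gwpy for any channel that is stored fast enough. Below is a minimal sketch; the channel name and times are placeholders for illustration, not necessarily what was used for the attached plot.

```python
# Hypothetical sketch: compare the ASD of a TMS damping signal before and
# after the outage. Channel name and times are illustrative placeholders.
from gwpy.timeseries import TimeSeries

CHAN = 'H1:SUS-TMSX_M1_DAMP_P_IN1_DQ'   # assumed channel name, for illustration only

# One-hour stretches before and after the outage (times interpreted as UTC)
before = TimeSeries.get(CHAN, 'Jan 20 2016 06:00', 'Jan 20 2016 07:00')
after = TimeSeries.get(CHAN, 'Jan 20 2016 18:00', 'Jan 20 2016 19:00')

# Welch-averaged ASDs over each stretch
asd_before = before.asd(fftlength=64, overlap=32)
asd_after = after.asd(fftlength=64, overlap=32)

plot = asd_before.plot(label='before outage')
ax = plot.gca()
ax.plot(asd_after, label='after outage')
ax.set_ylabel('ASD [counts/rtHz]')
ax.legend()
plot.savefig('tmsx_damp_before_after.png')
```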
model restarts logged for Wed 20/Jan/2016
Power outage at 07:45 PST. Many systems were restarted, full log file attached.
model restarts logged for Tue 19/Jan/2016
No restarts reported
model restarts logged for Mon 18/Jan/2016
No restarts reported
The LHO IM (IM1, IM2, IM3, IM4) alignments change, for the same alignment offset (drive), after a HAM2 ISI trip, or like yesterday, a power outage.
IM2's alignment changes the most, so I have only included it in this alog.
The first attachment is a chart that shows how the IM2 alignment changed, and how much IM2 and the HAM2 ISI shook during this event, summary here:
| Signal | Effect | Magnitude |
|---|---|---|
| IM2 pitch | changed | -33.7 urad |
| IM2 yaw | changed | -5.5 urad |
| IM2 length | shook | 0.4 mm |
| IM2 pitch | shook | 9.9 mrad |
| IM2 yaw | shook | 8.8 mrad |
| HAM2 ISI X | shook | 8.8 mm |
The second attachment is the full data trends for these channels (not in the same order) during the shaking.
At LHO, it is a requirement to check IM alignments after a HAM2 ISI trip, because these alignment shifts regularly occur after the optics experience the shaking.
Other alignment change / shaking events to be posted.
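For anyone repeating this check after a future trip, here is a minimal sketch of pulling the relevant minute trends with gwpy. The channel names below are illustrative guesses only, not necessarily the channels used for the attached charts.

```python
# Hypothetical sketch: trend IM2 alignment witness channels around an ISI
# trip / power outage. Channel names and times are placeholders only.
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:SUS-IM2_M1_DAMP_P_INMON.mean,m-trend',   # IM2 pitch witness (assumed name)
    'H1:SUS-IM2_M1_DAMP_Y_INMON.mean,m-trend',   # IM2 yaw witness (assumed name)
]
start, end = 'Jan 20 2016 15:00', 'Jan 20 2016 18:00'   # UTC, spanning the outage

trends = TimeSeriesDict.get(channels, start, end)
for name, series in trends.items():
    step = series.value[-1] - series.value[0]
    print('{}: net change {:+.2f}'.format(name, step))
```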
It would be helpful to have SDF files for multiple interferometer states, to help us diagnose configuration changes when we aren't locked. Since we just lost lock due to an EQ, we know that at least the ISC systems and the non-tripped suspensions should be in their normal down states.
I've saved down.snaps for LSC, LSCAUX, ASC, ASCIMC (has some diffs due to a script that sets values with too much precision), OMC, ALSEY, ALSEX, ISCEX, ISCEY, SUSETMX, SUSETMY.
Now we can start using these as a reference, or try to implement Stuart's script to have the guardian load them.
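Until that guardian integration exists, here is a minimal sketch of pushing one of these down.snap files back by hand. It assumes the files are in the standard autoBurt format (one "channel count value" record per line); the file path in the usage comment is illustrative only, and normally you would go through the SDF REVERT machinery instead.

```python
# Hypothetical sketch: write the settings from a saved down.snap back to EPICS.
# Assumes autoBurt format: a header bracketed by '--- Start/End BURT header'
# lines, then one record per line: <channel> <element count> <value>.
from epics import caput

def restore_snap(path):
    in_header = False
    for line in open(path):
        line = line.strip()
        if line.startswith('--- Start BURT header'):
            in_header = True
            continue
        if line.startswith('--- End BURT header'):
            in_header = False
            continue
        if in_header or not line or line.startswith('RO '):
            continue                          # skip header and read-only records
        channel, _, value = line.split(None, 2)
        try:
            caput(channel, float(value))      # numeric settings
        except ValueError:
            caput(channel, value.strip('"'))  # string settings

# Example usage (path is illustrative only):
# restore_snap('down.snap')
```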
Added 100 mL of H2O to the H1 PSL crystal chiller.
After moving RM1 and RM2, I performed an initial alignment and took the IFO to NLN. The lock lasted a couple of minutes before being taken out by a large earthquake.
DRMI 1F locked, then dipped in power, and recovered (see attached screenshot).
I trended back the positions of RM1 and RM2 for the last 3 days (see attached). The differences appeared significant, so I used the alignment sliders to bring them back.
RM1: Pitch: 380 -> 416, Yaw: 632 -> 331
RM2: Pitch: -1149 -> -856, Yaw: 2664 -> 1306
Xarm green alignment
Ed was struggling with the Xarm green alignment. We set the guardian to locked no slow, no wfs. I then turned on the loops one at a time. Usually the camera centering loops are the problem, but they were the easy ones this time. Eventually it was DOF2 that was causing trouble, so I had DOFs 1 and 3 closed and touched TMS by hand to get the error signals for DOF2 closer to zero. I was able to close all the loops, and let the alignment run like normal after that.
Xarm IR alignment
Not really sure what the problem is here, but it's getting late and I'm getting frustrated, so I'm going to see if I can move on with just hand aligning the input.
I suspect that the IMs need some more attention, so if Cheryl (or someone else following Cheryl's procedures) could check on those again in the morning, that would be great.
Also, I'm not sure if the RMs got any attention today, but the DC centering servos are struggling. I've increased the limits on DC servos 1 and 2, both pitch and yaw (they used to all be 555, now they're all 2000). I also increased H1:SUS-RM2_M1_LOCK_Y_LIMIT from 1000 to 2000.
Allowing the INP ASC loops to come on is consistently causing the arm power to decay from 0.85ish to lockloss. I didn't touch the ITM or ETM, but I'm not getting any more power by adjusting PR2 or IM4.
MICH Dark not locking. Discovered that BS's M2 EUL2OSEM matrix wasn't loaded, so no signal being sent out. Hit load, MICH locked, moved on.
Bounce needed hand damping (guardian will prompt you in DARM_WFS state), roll will probably need it too. This isn't surprising, since it happens whenever the ISI for a quad optic trips. Recall that you can find the final gains (and the signs) for all of these in the ISC_LOCK guardian. I like to start with something small-ish, and increase the gain as the mode damps away. No filters need to be changed, just the gains. My starting values are in the table below, and I usually go up by factors of 2 or 3 at a time.
| (starting gain values) | Bounce | Roll |
|---|---|---|
| ETMY | +0.001 | -1 |
| ETMX | +0.001 | +1 |
| ITMX | -0.001 | +1 |
| ITMY | -0.001 | -1 |
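In case it helps the next operator, here is a rough sketch of the ramp-up-by-factors procedure in script form. The channel name below is an assumed example of the form I mean, not a verified name; take the real channels, signs, and final gains from the ISC_LOCK guardian as noted above, and watch the mode height on the DARM spectrum between steps (back the gain off or flip its sign if the mode grows).

```python
# Hypothetical sketch of stepping a bounce/roll damping gain up by factors of 3,
# pausing between steps so the operator can watch the mode height.
import time
from epics import caget, caput

CHAN = 'H1:SUS-ETMY_M0_DARM_DAMP_V_GAIN'   # assumed bounce-damping gain channel
START_GAIN = 0.001                          # ETMY bounce starting value from the table
FACTOR = 3.0
STEPS = 4

caput(CHAN, START_GAIN)
for n in range(STEPS):
    time.sleep(120)                         # let the mode respond before stepping again
    new_gain = caget(CHAN) * FACTOR
    print('stepping %s to %+0.4f' % (CHAN, new_gain))
    caput(CHAN, new_gain)
```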
I lost lock while damping the bounce mode (in DARM_WFS state), and the DRMI alignment is coming back much worse than the first few DRMI locks I had.
I don't actually have a lot of faith in my input beam alignment, so I probably wouldn't be happy with any ASC loop measurements I take tonight even if I got the IFO locked. Since we have an 8am meeting, I'm going to call it a night, and ask the morning operator to check my alignment and fix anything that sleepy-Jenne messed up.
STATE OF H1: Still recovering from power outage.
ACTIVITY LOG (some missing):
15:44 UTC Unexpected power outage
16:38 UTC Fire department through gate to check on RFAR boxes
16:56 UTC Richard, Jeff B. and Jim W. to end stations to turn on high voltage and HEPI pumps
17:14 UTC Richard turning on h1ecaty1
17:14 UTC Jason and Peter to LVEA to look for an optic
17:39 UTC Richard and company done at end X and going to corner station to turn on TCS chillers and HEPI pumps
17:49 UTC Filiberto to end X to reset fire panel
17:56 UTC Vacuum group touring LVEA (6 or 7 people)
17:57 UTC HEPI pump stations and TCS chillers started in corner station
18:18 UTC Filiberto back from end X
18:24 UTC Jeff B. and Jason to LVEA to start TCS X laser
18:49 UTC Hugh bringing up and isolating HAM ISIs
19:16 UTC Richard and Jim W. to end stations to turn on ALS lasers
19:30 UTC Sheila to CER to power cycle all of the Beckhoff chassis
22:55 UTC Jason to LVEA to look at optical levers
23:02 UTC Jason back
23:34 UTC Dave restarting the EPICS gateway between the slow controls network and the frontend network in an attempt to fix an issue with the Beckhoff SDF
23:47 UTC Dave restarting the Beckhoff SDF code
Other notes:
End stations: end Y IRIG B chassis power cycled; end X, end Y high voltage turned on; end X, end Y HEPI pump station computers started; end X, end Y ISI coil driver chassis reset; end Y Beckhoff computer turned on.
Corner station: TCS chillers turned on; HEPI pump controllers turned on (Jeff B. had to push the power button on the distribution box); Sheila power cycled all Beckhoff chassis in the CER; Sheila turned on the AOS baffle PD chassis in the LVEA.
I started the h0video IOC on h0epics2.
Conlog was running upon arrival. However, it crashed during the recovery. It still needs to be brought back.
Joe D. and Chris worked on beam tube enclosure sealing.
There was some trouble starting the corner station HEPI pump controller computer.
There was some trouble finding the StripTool template for the wind speeds. I'm not sure if it was found or if TJ created a new one.
Things to add to the 'short power outage recovery' document:
- Turn on high voltage supplies
- Push reset on ISI coil drivers
- HEPI pump controller and pumps
- TCS chillers
- TCS lasers
- ALS lasers
- Turn on video server
Jeff K. turned on and left on the WiFi at the end stations since we are no longer running in science mode.
From Jeff K.:
- The PSL periscope PZTs had to be aligned using the IM4 and MC2 QPDs. Setting them back to their previous offsets did not work.
- There was trouble with the end Y ESD beyond the fact that the high voltage got tripped off with the Beckhoff vacuum gauges.
01:43 UTC TMSX guardian set to 'SAFE'. Ed and Jenne reset controller at end X. TMSX guardian set to 'ALIGNED'.
J. Kissel, T. Shaffer, E. Merilh, E. Hall
A little more detail on the problems with the EY driver:
- We knew / expected the high voltage driver of the ESD system to need resetting because of the power outage, and because of several end-station Beckhoff computer reboots (which trip the vacuum interlock gauge and kill the high voltage power supplies to the HV driver).
- However, after one trip to the end station to perform the "normal" restart (turn on the high voltage power supplies via the rocker switches, set the output voltage and current limit to 430 [V] and 80 [mA], turn on the output, go into the VEA and hit the red button on the front of the high voltage driver chassis), we found that the *high voltage* driver was railed in the "usual" fashion, in which the high voltage monitors for all 5 channels (DC, UL, LL, UR, LR) show a fixed -16k, regardless of the requested output.
- We tried the "usual" cure for an EY railed high voltage driver (see LHO aLOG 19480), where we turn off the driver from the red button, unplug the "preamp input" (see LHO aLOG 19491), turn on the driver from the red button, and plug the cable back in. That didn't work.
- Desperate, after three trips to EY, I tried various combinations of unplugging all of the cables and turning the driver on and off both from the remote switch on the MEDM screen and the front-panel red button. Only when I'd unplugged *every* cable from the front and power cycled the chassis from the front panel did the stuck output clear up. Sheesh!
We have begun recovery from an unexpected power outage that occurred at 7:44 PST and lasted for 3 minutes (as noted from the UPS).
FRS ticket 4245 opened.
Radar plots of the sensing matrix for PRCL, MICH, and SRCL are attached (where MICH is the usual beamsplitter drive). This was taken in the nominal O1 configuration.
Something is strange with the response in POP9; driving the beamsplitter seems to result in the same response as driving PRM. Certainly we expect some POP9I response from driving the beamsplitter, since this drives PRCL, but it seems strange that they are almost exactly equal.
For POP45, the beamsplitter appears in both I and Q; again, this is not surprising. However, its contribution in I seems to dominate over the SRM contribution. This could potentially be bad, because we only do PRCL → SRCL subtraction (not MICH → SRCL subtraction).
Sensing matrix is as follows:
| | PRM | BS | SRM |
|---|---|---|---|
| POP9I (W/µm) | 3.1 | 3.1 | 0.01 |
| POP45Q (W/µm) | 0.21 | 1.2 | 0.0 |
| POP45I (W/µm) | −0.65 | −0.68 | 0.18 |
There is some ambiguity in the signs here that still needs to be resolved.
The magnitudes of the PRM ⇝ POP9I and SRM ⇝ POP45I elements agree fairly well with what is necessary to explain the open-loop transfer functions for PRCL and SRCL (3.6 W/µm and 0.13 W/µm, respectively). For MICH, there is some frequency-dependent residual between the model and the measurement (which has been observed before). The gain needed to make them match around the UGF is 0.6 W/µm.
I added a 132.1 Hz notch to the PRCL, MICH, and SRCL SFMs. Then one by one I drove PRM, SRM, and BS with a 132.1 Hz line for a few minutes, using the digital oscillator just after the LSC SFMs.
I took the time series for POP9I/Q and POP45I/Q during this time, and demodulated them with 0.1 Hz low-passing.
All times are 2016-01-13 Z. The actuator calibrations are based on digital filters, the suspension models, and the suspension electronics.
As for sensors, the calibrations are 2.2×10⁸ ct/W for POP9 and 2.3×10⁸ ct/W for POP45, based on LHO#24959 plus an additional 30 dB of whitening gain.
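For reference, the demodulation step itself is simple. Below is a minimal numpy/scipy sketch of the approach (single line drive, mix-down, ~0.1 Hz zero-phase low-pass); it uses synthetic placeholder data and is not the exact analysis code used here.

```python
# Hypothetical sketch: demodulate a PD time series at the 132.1 Hz drive line
# with ~0.1 Hz low-passing, to recover the line amplitude and phase.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def demod_line(data, fs, f_line=132.1, f_lp=0.1):
    """Return the complex demodulated amplitude of `data` at `f_line` (Hz)."""
    t = np.arange(len(data)) / fs
    lo = np.exp(-2j * np.pi * f_line * t)        # complex local oscillator
    mixed = data * lo                            # mix the line down to DC
    sos = butter(4, f_lp / (fs / 2.0), output='sos')
    baseband = (sosfiltfilt(sos, mixed.real)     # zero-phase low-pass at ~0.1 Hz
                + 1j * sosfiltfilt(sos, mixed.imag))
    return 2 * baseband                          # factor of 2 restores line amplitude

# Self-test with synthetic data: a 132.1 Hz line of amplitude 5 in white noise
fs = 2048.0
t = np.arange(0, 180, 1 / fs)
fake = 5 * np.cos(2 * np.pi * 132.1 * t + 0.3) + np.random.randn(t.size)
z = demod_line(fake, fs)
print(abs(z[z.size // 2]), np.angle(z[z.size // 2]))   # expect ~5 and ~0.3 rad
```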
In the above log, I was using a beamsplitter compliance that did not include the violin modes. Because we drive the beamsplitter from its middle stage, the violin modes noticeably increase the magnitude of the compliance even around 100 Hz, even though the first violin modes occur around 300 Hz.
The attached plot shows the corrected model, along with a new set of OLTFs taken on the 14th. Both PRCL and SRCL show some discrepancy below 10 Hz.
I started a cleanup of SDF by reverting some TRAMP values (some of which I changed in SUS while trying to re-align yesterday). The changes were on: HAM 2, 4, 5, and 6 ISIs; and SUS BS, IM, PR2, SR2, and SR3. Also, the Gain/State_Good on the BS ISI was reverted. If 'your' sub-system(s) are showing changes in SDF due to power outage recovery, or from just trying to make things mo' betta, please have a look and accept or revert.
ALL LHO SEI Platform SDFs were green'd Wednesday morning. Any SDF reds Thursday evening were not related to the power outage.
These changes relate to Jim & me making some guardian changes; we should have more quickly updated the SDFs.