This morning Richard and I tested the UPS used for the PSL Mephisto power supply. It is now set to log what happens. A run calibration was also done. The UPS claims that it can run for ~5-1/2 hours on the battery. We initiated a self-test, for which the UPS switched over to the battery. After a minute or so I checked the laser to find that the Mephisto power supply and frontend laser control box displayed an error condition. Interestingly the MEDM display indicated a flow sensor problem. The Beckhoff PC indicated an interlock problem and an EPICS alarm but nothing about a flow sensor problem. Both the diode and crystal chiller were still running when I entered the diode room to reset the system. Clearly the Mephisto tripped because of the UPS switching over to its batteries. Richard, Peter
Timing is pretty good, the ISIs tripped within a few seconds of each other and the HEPI tripped 16 seconds later. No issues on recovery.
Sheila, Evan
Evan struggled with the PSL tripping earlier tonight. After it stopped tripping we were able to lock long enough to get to low noise and make one DARM OLG measurement, which is attached here. Then we were hit by a large earthquake.
I made an attempt at making a filter for SRCL FF, using alpha to go to the DARM actuator. For the SRCL OUT to DARM IN TF we used the noise injection from last night; a screenshot of the TF is attached.
To get alpha, I took the SRCL OUT -> DARM IN TF * (DARM OLG / (DARM closed loop * DARM plant)), which is (DARM IN1/SRCL OUT)*(DARM IN1/DARM IN2)/((DARM IN1/DARM EXC)*(DARM IN1/DARM OUT)). For the DARM measurements I used a cut of 0.6 on the product of all three coherences; for SRCL to DARM I used a cut of 0.8 on the coherence. I fit the product using vecfit; the attached screenshot shows the result when using 8 zeros and 8 poles. Anyone can adjust the number of zero/pole pairs by editing line 58 in /sheila.dwyer/Noise/SRCLMICHFF/SRCL_DARM_TF.m. I did not load any filters.
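As a rough illustration of this recipe (a minimal sketch, not the SRCL_DARM_TF.m script; the function and array names are assumptions), the combination and coherence cuts could look like:

import numpy as np

def alpha_estimate(f,
                   srcl_to_darm, coh_srcl,   # DARM IN1/SRCL OUT and its coherence
                   darm_olg,     coh_olg,    # DARM IN1/DARM IN2
                   darm_cl,      coh_cl,     # DARM IN1/DARM EXC
                   darm_plant,   coh_plant): # DARM IN1/DARM OUT
    # alpha = (SRCL OUT -> DARM IN) * DARM OLG / (DARM closed loop * DARM plant)
    alpha = srcl_to_darm * darm_olg / (darm_cl * darm_plant)
    # keep points with coherence > 0.8 on the SRCL->DARM TF and with the
    # product of the three DARM-measurement coherences > 0.6
    good = (coh_srcl > 0.8) & (coh_olg * coh_cl * coh_plant > 0.6)
    return f[good], alpha[good]

The surviving points would then be passed to the vector-fitting routine to produce the 8-zero/8-pole filter described above.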
Tripped around 2015-05-07 02:31:30 UTC. I reset it. Aside from the usual diode flow bit flipping issue, the diode chiller again appeared to have some unphysically fast jump in temperature for about 30 s.
PSL tripped again, around 2015-05-07 02:47:50 UTC. This time, the computer in the diode room showed a momentary dip in the diode chiller flow rate (see attached photo). I talked to Rick on the phone, and we agreed to restart it again.
PSL tripped a third time, around 04:23:00 UTC. I reset it again.
These trips are really an impediment to progress, not only because of the time it takes to restart the PSL but also because of the time it takes to relock the IFO and get back to low noise. If replacing the bad sensor soon is an option, that would be good, although I remember from when Michael Rodruck was struggling with the same issue that it was not a simple fix.
These recent PSL trips, as well as the power watchdog trips, are an example of how we have made the system less robust by being overzealous in trying to protect it.
Both the recent trips and many trips a few years ago when Michael Rodruck was still here were due to bad sensors. These sensors broke and indicated that the flow was low, when the flow was not actually low and there was no real danger to the laser.
Do we have any redundant sensors, so that we could check that multiple sensors agree that the flow is low before we turn off the laser?
In these recent spurious laser trips we see H1:PSL-IL_DCHILFLOW flipping for a while and then it stops flipping after 30 seconds. Does this bit go to 0 and stay there if the flow is actually low? If so we could wait to trip until this bit is low for at least half a second, or until a rolling average of the bit drops below one half.
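As an illustration of the kind of debounce being suggested (a minimal sketch under assumptions about the sample rate and how the interlock code would read the bit; this is not the actual Beckhoff logic):

from collections import deque

class FlowDebounce:
    """Trip only if the flow-OK bit stays low for the full hold time,
    or if its rolling average drops below one half."""
    def __init__(self, sample_rate_hz=16, hold_time_s=0.5):
        self.n = max(1, int(sample_rate_hz * hold_time_s))
        self.buf = deque(maxlen=self.n)

    def update(self, flow_ok_bit):
        # flow_ok_bit: 1 = flow OK, 0 = flow reads low
        self.buf.append(int(flow_ok_bit))
        if len(self.buf) < self.n:
            return False          # not enough history yet; don't trip
        return max(self.buf) == 0 or sum(self.buf) / self.n < 0.5

A single spurious low sample from a flaky sensor would then be ignored, while a sustained real loss of flow would still trip within the hold time.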
The power watchdog seems like another example of overzealous protection of the hardware. (The power watchdog has tripped less often recently; I assume that is because the PSL maintenance is now being done.) This is a watchdog that goes off when the PSL power becomes low. It seems to me that a drop in the power is not a dangerous situation. I think it would be best to downgrade the power watchdog to an alarm or warning, or anything else that does not bring a multi-million-dollar facility to a standstill.
Sheila, Evan, et al.,
These laser trips are, of course, a big concern to the PSL "team." We are actively pursuing solutions.
For the past month or so we have been operating without engaging the watchdog on the Front End laser.
We are trying to understand the functionality of the Beckhoff interlock, which appears to be responsible for initiating the shutdowns. This week we received some information from Maik Frede at LZH; he wrote the control software. It appears that the system is not functioning as designed and described in LIGO-T1000005.
Yesterday, Peter King and Jeff Bartlett replaced the flow sensors in the spare chillers with units that don't have moving parts. We have some indication that the flow sensors originally installed and currently operating (the ones that Michael Rodrick was replacing) are the cause of the triggers. As soon as possible, we plan to either swap in the spare chillers (with the new sensors) or shut down for an hour or so to replace the sensors in the operating chillers.
Jeff, Peter, Jason, Ed Merilh and I are meeting at 9:00 this morning to discuss progress on several fronts in trying to understand and address these shutdowns.
J. Kissel

I've processed the DARM OLG TF measurement Evan and Sheila took last night (LHO aLOG 18269), and was dismayed to find that the DARM coupled cavity pole (DCCP) frequency has decayed back down to 270 [Hz]. This is obvious from the attached residuals, where I show two different versions of model parameters for last night's measurement compared against the two previous measurements taken during the mini-run, where the DCCP frequency was 355 [Hz], up near the expected value. I've again used a 0.99 coherence threshold. I trust this assessment of the DCCP frequency to within 2%, especially since that's the only thing I have to change in the model (besides the overall scale factor) to arrive at this conclusion (for the skeptics of my precision, see LHO aLOG 18213). What's going on here?

--- Total Blind Speculation ---
As Sheila mentions in the main entry (LHO aLOG 18264), the recycling gain and initial alignment had been restored to their values during the mini-run. There has been quite a lot of work on SRC alignment: maybe those SRC loops which are now higher bandwidth -- though good for stable SRCL to DARM coupling (LHO aLOG 18273) -- are not so good for the DCCP frequency. Perhaps because they have some bad alignment offset? Perhaps we should try very slightly altering the DC alignment of these loops to see if it has an effect on the DCCP frequency, and then optimize for it. Eventually, one might imagine using the amplitude of either the PCal or DARM calibration lines as feedback to keep the pole frequency stationary and near the design value. The good news is that the overall scale factor (what we're assuming is either the optical gain or ESD driver strength variation) changed by less than 0.5%.
---------------------
The measurement template has been copied over and committed to the CalSVN here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs/2015-05-06_H1_DARM_OLGTF_LHOaLOG18269.xml
The new model parameter set can be found committed here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/H1DARMparams_1114947467.m
and as usual, the model is here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/H1DARMmodel_preER7.m
and all residual comparisons are done here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Scripts/CompareDARMOLGTFs.m
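For anyone who wants to reproduce this kind of pole estimate without the full CalSVN machinery, here is a minimal sketch (an assumption-laden stand-in for CompareDARMOLGTFs.m: it treats the sensing function as a single pole and fits only a scale factor and the pole frequency from the measured/model OLG ratio):

import numpy as np
from scipy.optimize import least_squares

def fit_dccp(f, meas_over_model, coh, f_pole_model, coh_thresh=0.99):
    # keep only high-coherence points, as with the 0.99 threshold used above
    ok = coh > coh_thresh
    f, ratio = f[ok], meas_over_model[ok]

    def residual(x):
        scale, f_pole = x
        # single-pole correction: move the model pole from f_pole_model to f_pole
        trial = scale * (1 + 1j * f / f_pole_model) / (1 + 1j * f / f_pole)
        err = ratio / trial - 1
        return np.concatenate([err.real, err.imag])

    sol = least_squares(residual, x0=[1.0, f_pole_model])
    return sol.x   # [overall scale factor, fitted DCCP frequency in Hz]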
I guess another possibility is some effect from the lower SRCL ugf. When the guardian goes into LSC_FF, the SRCL gain is reduced by 30% to reduce the amount of control noise appearing in DARM (the SRCL ugf goes from 50 Hz to 25 Hz). I'm assuming this reduction was not done during the minirun, when the good DARM pole was measured.
I suppose we should just run two DARM OLTFs in quick succession, one with the low ugf and one without.
Evan's idea seems unlikely: how could the SRCL loop, with a bandwidth of a few tens of Hz, affect what happens at much higher frequencies? This would imply huge couplings of both DARM->SRCL and SRCL->DARM. Such large couplings should be easily visible when measuring the SRCL open loop gain.
The most likely hypothesis is that the pole frequency depends critically on the SRC alignment. One interesting test would be to inject two DARM lines, one at low frequency (50 Hz) and one at high frequency (1000 Hz) and track their variations while aligning the ITMs. We expect the low frequency to be constant (tracking the optical gain) and the high frequency to change (tracking the pole frequency).
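To spell out why that works (a single-pole approximation of the sensing function; an assumption, not the full DARM model): the sensed response is roughly C(f) = G / (1 + i f/f_p). A line at f_lo << f_p measures |C(f_lo)| ~ G, i.e. the optical gain alone, while a line at f_hi >> f_p measures |C(f_hi)| ~ G f_p / f_hi, so the ratio of the high-frequency to the low-frequency line amplitude tracks f_p independently of G.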
We already inject calibration lines at 34.7 and 538.1 [Hz], which have been on since April 1 2015 (17621). They've been at a fixed amplitude that had been tuned for a 30 [Mpc] lock stretch, so the 34.7 [Hz] line may not be visible at all times, but still -- it's on the to-do list to make this comparison. Any off-site help would be much appreciated!
7:58 JimW taking ETMx ISI down to load new filters
8:55 Gerardo to valve out BSC1 aux cart
9:04 Gerardo done
9:17 Karen to MY, Cris to MX
9:49 Karen leaving MY
10:33 Cris back from MX
14:27 Richard to Mid ? with Simplex
14:57 Gerardo to LVEA to shutdown aux cart
There were a few days in the last week where we could not get the interferometer fully locked for some alignment-related reason. I studied what was going on in this period and came up with a hypothesis.
According to my hypothesis, the following must have happened:
These resulted in the dark days where we were unable to lock the interferometer.
[The unlockable days]
As shown in the above cartoon, we were unable to fully lock the interferometer between some time on Tuesday the 28th and Thursday the 30th of April. I also noted alignment-related activities in the same viewgraph to see what kind of activity we did. There was one activity right around the time when we started noticing difficulty in locking, namely removal of the software aperture mask on the ITMY green camera (alog 18108). Nonetheless, I don't think this triggered the unlockable days. As will be discussed in the later part of this alog, I believe that the cause is a large lateral shift of the spot on ITMX.
[Optics' angle in the unlockable days]
I attach two trend plots which show the angles of various optics. They are 10-day trends. I drew two vertical lines in each plot to bracket the dark days. Looking at the two plots in the upper right corner of the first plot, one can clearly see that neither TRX nor TRY reached a high value, meaning we could not lock. Also, it is very clear that we had a different alignment in this particular period.
Even though some optics exhibited different alignment in pitch, most of the optics indicate that the alignment in yaw changed significantly. Therefore I neglect any misalignment in pitch hereafter and concentrate on misalignment in yaw.
[X arm alignment]
Since the Y arm behaves as a slave of the X arm from the point of view of the interferometer alignment, I assume that the Y arm alignment had been continuously adjusted so that the Michelson stays dark, the beam spot on ETMY stays at its center, and so forth. In this way I limit the discussion to the X arm and PRC hereafter.
Here is a summary table of how much the relevant optics moved in yaw before and after Tuesday the 28th. The values were extracted from the two trend plots shown above.
optic | before Tuesday [urad] | after Tuesday [urad] | difference [urad] | sensor
ITMX | -8 | 0 | +8 | oplev
ETMX | 0 | -3 * 2.74 = -8.22 (oplev calibration, alog 18237) | -8.22 | oplev
TMSX | -229 | -235 | -6 | witness sensor
PR3 | 271 | 275 | +4 | alignment slider
PR2 | 3373 | 3398 | +25 | witness sensor
PRM | 420 | 540 | +120 | witness sensor
IM4 | -645 | -569 | +76 | witness sensor
Using the ITMX and ETMX angle differences, I estimated the difference in the spot position on ITMX and ETMX. I used the following matrix to compute the spot position:
The variables are defined as shown in the cartoon below.
Substituting the realistic values L = 3994.5 m, ROC_itm = 1934 m, ROC_etm = 2245 m, Psi_i = 8 urad and Psi_e = -8.22 urad, I obtain displacements of x_i = -4.7 cm and x_e = -1.8 cm. The beam axis should look like this:
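For reference, here is a minimal sketch of that computation (using the standard tilt-to-displacement relation for a two-mirror cavity; the overall sign convention depends on how the angles in the cartoon are defined, so only the magnitudes should be compared):

# geometric spot shifts on the test masses for the given ITM/ETM yaw changes
L     = 3994.5        # arm length [m]
R_itm = 1934.0        # ITM radius of curvature [m]
R_etm = 2245.0        # ETM radius of curvature [m]
g_i   = 1 - L / R_itm
g_e   = 1 - L / R_etm

psi_i =  8.0e-6       # ITMX yaw change [rad]
psi_e = -8.22e-6      # ETMX yaw change [rad]

scale = L / (1 - g_i * g_e)
x_i   = scale * (g_e * psi_i - psi_e)   # spot shift on ITMX: ~4.7 cm in magnitude
x_e   = scale * (psi_i - g_i * psi_e)   # spot shift on ETMX: ~1.8 cm in magnitude
print(x_i, x_e)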
Discussion of the DRMI part is held in the next section. Also, please note that in this analysis we can not estimate the absolute spot positions.
A displacement of 47 mm on ITMX is quite large. For example, one can compare it with the beam radius of the red light on ITMX, which is about 53 mm. So it moved by almost half of the beam diameter.
Since we had nonideal behavior in the PRC2 ASC loop, I am speculating that the large displacement on ITMX resulted in clipping or something similar in the PRC or Michelson part, and hence a bad ASC signal.
According to error propagation, the spot positions on the test masses are found to be very sensitive to the precision of the measured angles. For example, x_e would get an error of +/-1.2 cm if an error of +/-0.5 urad is added to the ETMX angle. Nonetheless, I think the displacement of 47 mm on ITMX is believable as it is a relatively large number. On the other hand, the displacement of 18 mm on ETMX may be fishy because of measurement error. Additionally, the green WFS servos must have steered the test masses such that the spot position on ETMX stays at the same position. Anyway, at this point it is unclear whether ETMX actually moved by 18 mm.
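For concreteness (using the magnitudes from the same geometric relation as in the sketch above): |dx_e/dPsi_e| = |g_i| L / (1 - g_i g_e) ≈ 2.5 cm per urad, so a 0.5 urad uncertainty on the ETMX angle indeed maps to roughly 1.2 cm on x_e.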
One thing which makes me a bit suspicious about this calculation is the angle of TMSX. As listed in the table, it moved by 6 urad. On the other hand, according to the analysis, the arm eigen-axis rotated by about 16 urad, which does not match the TMSX angle change; they differ by a factor of almost 3. I am not sure what this means.
[PRC alignment]
Doing a similar matrix approach in PRC, one can build a displacement-misalignment matrix as
where h_j is displacement and phi_j is misalignment. Subscript p stands for PRM, 2 for PR2, 3 for PR3 and i for ITMX. Substituting the differences in the measured angles as listed in the table, one can get
[hp h2 h3 hi] = [1.8 mm, -2.5 mm, 1.9 cm, -1.9cm]
which unfortunately contradicts what I said about the ITMX spot position. However, in principle, our initial alignment scheme should bring the PRC beam axis so as to match that of the arm cavity. Now, instead of simply substituting the measured values, I do a small tweak on PR3 angle. I add a magic number of 0.4 urad on top of the measured PR3 alignment. This results in displacements of:
[hp h2 h3 hi] = [-1.3 mm, 5.3 mm, -4.7 cm, 4.7 cm]
which agrees with the previous arm cavity discussion. I don't have any data to support this magical 0.4 urad on PR3, but I believe that it is not crazy to say that the calibration of the PR3 alignment slider has been off by 10%. Anyway, if we believe in this result, this introduces a translation of 4.7 cm in the PR3-ITMX beam line as expected. Also, the spot position on PRM did not move much, as expected, because it should be passively determined by the IMC pointing, which I think is stable on a time scale of many days. Even though the spot on PR2 moved by 5.3 mm, I don't think this is big enough to let the beam hit the PR2 baffle and cause clipping.
[Why the yaw alignment changed ?]
It is unclear why the yaw alignment changed so drastically during the period.
One might think that this was due to a wrong reference position on the ITMX green camera. However, according to the past alogs, there were no obvious activities associated with the ITMX green camera around the time when we started having difficulty. Keita suggested that I check whether a limiter was on in some of the ALS WFS loops, because that has been an issue which hindered the ALS WFS performance. Indeed both ALS_X_DOF2_P and _Y had a limiter on (see the attached conlog and trend), but the _Y_LIMIT value was set to 1e7 counts while _P_LIMIT was set to 10 counts, meaning Y_LIMIT was essentially not effective. This could cause some funny behavior in pitch, but it is still hard to believe that it introduced such a big offset in yaw.
This morning I valved out the aux pump cart, 8:50 am, the cart continued to pump on the valve.
At 3:20 pm the aux cart was turned off, cable, hose and turbo decoupled from the valve. Ion pump is on its own now, current is trending down, currently at 4.02 mA.
No change was noticed on the pressure of the BSC1 annulus system as the cart was removed.
The valve was blanked off.
SEI - working on ETMx ISI filters
CDS - working on chassis repairs
Facilities - exit gate repair ongoing
PSL - interlock investigation ongoing
mouse traps for EY being increased to mitigate excretion issues
OSB painting continues
BSC1 ion pump aux cart to be valved out
Andy Lundgren sent out an inquiry via email about lines at 1180 Hz and 1150 Hz. I looked for these lines in the coherence tool, and I didn't see any evidence of the 1180 Hz line, but I did find a very sharp line at 1150 Hz on the following channels:
H1:LSC-MCL_OUT_DQ
H1:LSC-MICH_OUT_DQ
H1:LSC-PRCL_OUT_DQ
H1:CAL-CS_CARM_DELTAF_DQ
H1:CAL-CS_IMC_DELTAF_DQ
H1:CAL-CS_MICH_DQ
H1:CAL-CS_PRCL_DQ
H1:CAL-CS_SRCL_DQ
H1:LSC-DARM_OUT_DQ
H1:LSC-REFL_SERVO_CTRL_OUT_DQ
H1:OAF-CAL_CARM_AO_DQ
H1:OAF-CAL_CARM_X_DQ
H1:OAF-CAL_MICH_DQ
H1:OAF-CAL_PRCL_DQ
H1:OAF-CAL_SRCL_DQ
If you want to look for yourself, you can find the full coherence tool output for the May 2 locks here: https://ldas-jobs.ligo-wa.caltech.edu/~nathaniel.strauss/HIFOX/LineSearch/H1_COH_1114579968_1114784433_1000_webpage/
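For reference, a minimal sketch of how one could check a single auxiliary channel for such a feature (this is not the coherence-tool code; the sample rate, resolution, and the placeholder time series are assumptions):

import numpy as np
from scipy.signal import coherence

def line_coherence(darm, aux, fs=16384, f0=1150.0, half_width=10.0):
    """Return the peak coherence (frequency, value) within f0 +/- half_width Hz."""
    # nperseg = 16*fs gives roughly 1/16 Hz frequency resolution
    f, cxy = coherence(darm, aux, fs=fs, nperseg=16 * fs)
    band = (f > f0 - half_width) & (f < f0 + half_width)
    k = np.argmax(cxy[band])
    return f[band][k], cxy[band][k]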
SDF Cleanup
If channels are removed from your model/code, then unless you take your system down to the safe state and run makeSafeBackup to generate a new safe.snap from scratch (and most of us know the problems doing that causes), the SDF system will go yellow for CHANS NOT FOUND. Okay, maybe no big deal, but green is better than yellow when scanning our configuration control tools, and we should green these up at the earliest opportunity.
There is a way to clear these out without having to take the system down to a 'safe' state for the new snap.
You can just go to the SDF TABLE and while there you might as well look at the CHANS NOT FOUND list and make sure they make sense (yes, we did remove those channels from the model, e.g.)
Then from that medm, open the SDF SAVE SCREEN
Ensure the SAVE TABLE selection is TABLE TO FILE and the FILE OPTIONS selection is OVERWRITE
Press SAVE FILE
This step overwrites the safe.snap file, removing the not-found channel records. A little weird though: on the SDF_RESTORE screen (available from the TABLE screen as well) it does not show a 'Modified file detected' message, but it has updated the time, as if it had 'Restored' to that time. You will also notice that your list of Not Found Channels has not cleared. Maybe this is intentional, or maybe it's a trivial bug.
Now on the SDF_RESTORE screen, press LOAD TABLE, and, all the NOT FOUND channels should clear.
However, there may be remaining NOT FOUNDs that will crop back up:
If the NOT FOUND CHANNELs list contains fields as opposed to just records, that is H1:HPI-ETMX_GUARD_CADENCE.HIGH (field) versus H1:HPI-ETMX_GUARD_CADENCE (record), this process will not remove the field lines from the safe.snap.
We still need to test this (Barker is on it), but maybe the next time the front end starts, these fields will again become CHANS NOT FOUND and our green light will revert to yellow.
This happens because the FE code reads the safe.snap, strips out the field lines, and saves them in a safe_alarms.snap file in the target area. Then when it saves the safe.snap, this field list is just tacked onto the end of the safe.snap record list. So your safe.snap will always contain these nonexistent channel field lines.
The fix to this problem is the following: before doing the steps above (you could do it after, but you'd be saving & loading twice), go to the target area and remove the appropriate field lines from the safe_alarms.snap file (currently writable only by controls). Once these are gone, proceed as above in the SDF SAVE SCREEN.
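If it helps, here is one way the field lines could be stripped out in bulk (a hedged sketch: it assumes one channel record per line with the channel name as the first token, drops all field-style entries, which is only appropriate if, as in the GUARD_CADENCE case, none of them belong to channels that still exist, and the path is a placeholder for the real target area; it writes a copy for review rather than editing in place):

import shutil

snap = '/path/to/target/safe_alarms.snap'    # placeholder path
shutil.copy(snap, snap + '.bak')             # keep a backup first

with open(snap) as f:
    lines = f.readlines()

def is_field_record(line):
    # e.g. 'H1:HPI-ETMX_GUARD_CADENCE.HIGH ...': the channel part after the
    # colon contains a '.', which marks a field rather than a plain record
    tokens = line.split()
    return bool(tokens) and '.' in tokens[0].split(':')[-1]

with open(snap + '.new', 'w') as f:          # review, then swap in by hand
    f.writelines(l for l in lines if not is_field_record(l))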
For what it's worth, all of those old "...GUARD_CADENCE" channels should be completely purged from all models. That is old deprecated guardian infrastructure. I think it has all been purged at LLO, so maybe LHO just needs to update the affected models.
@Jamie -- indeed -- we've updated hepitemplate.mdl, which has these old GUARD channels removed, and compiled/installed the BSC HEPI models this past Tuesday (LHO aLOG 18223). So, Hugh is just documenting how to get rid of them in their last known holdout -- the safe.snaps. The integration issues that track the removal of these vestigial organs are here: SEI II 922, SUS II 921. They've been marked closed for some reason -- I think it's because again and again we've thought we'd gotten rid of them all, but continue to find them lurking about.
During maintenance this morning we pushed a fairly minor guardian upgrade. We are now running:
Primary bugfixes or new features:
The last two are to help debug some of the intermittent issues we've been seeing with node processes hanging for unknown reasons.
The node status indicators on the main GUARD_OVERVIEW screen also now have a couple of extra indicator lights, for the STALLED state of a node, and for whether there are any SPM diffs:
Another new feature I forgot to mention is that guardian nodes can now dump their internal setpoint table to a file.
This is triggered by writing a 1 ('True') to the SPM_SNAP channel for the node:
jameson.rollins@operator1:~ 0$ caput H1:GRD-SUS_ITMY_SPM_SNAP 1
Old : H1:GRD-SUS_ITMY_SPM_SNAP False
New : H1:GRD-SUS_ITMY_SPM_SNAP True
jameson.rollins@operator1:~ 0$ cat /ligo/cds/lho/h1/guardian/archive/SUS_ITMY/SUS_ITMY.spm
H1:SUS-ITMY_M0_TEST_P_SWSTAT IN,OT,DC
H1:SUS-ITMY_M0_TEST_Y_SWSTAT IN,OT,DC
H1:SUS-ITMY_R0_OPTICALIGN_Y_TRAMP 2.0
H1:SUS-ITMY_M0_OPTICALIGN_P_TRAMP 2.0
H1:SUS-ITMY_M0_TEST_P_TRAMP 2.0
H1:SUS-ITMY_M0_OPTICALIGN_Y_TRAMP 2.0
H1:SUS-ITMY_M0_TEST_Y_TRAMP 2.0
H1:SUS-ITMY_M0_OPTICALIGN_P_SWSTAT IN,OF,OT,DC
H1:SUS-ITMY_M0_OPTICALIGN_Y_SWSTAT IN,OF,OT,DC
H1:SUS-ITMY_R0_OPTICALIGN_P_TRAMP 2.0
jameson.rollins@operator1:~ 0$
A '.spm' file is written into the node archive directory (/ligo/cds/<site>/<ifo>/guardian/archive/<node>). The .spm file is a space-separated list of channel/setpoint pairs.
NOTE: the setpoints accumulate over the running of the node. If the node has just been restarted (or the worker restarted after a STOP command has been issued), the setpoint table will be empty until the node passes through states where it actually writes to a channel. The full set of setpoints will only be fully known once the node runs through all system state code paths. Guardian has the ability to initialize the full set of setpoints for the node from a pre-defined set of channels, but we're not using that facility yet.
The setpoint table is also written to the log when an SPM_SNAP is triggered, as is a full list of all PV subscriptions currently active in the node:
2015-05-06T22:16:20.66686 SUS_ITMY W: PVs:
2015-05-06T22:16:20.66698 SUS_ITMY W: H1:SUS-ITMY_DACKILL_STATE = 1.0
2015-05-06T22:16:20.66705 SUS_ITMY W: H1:SUS-ITMY_L1_TEST_L_SW1R = 4.0
2015-05-06T22:16:20.66711 SUS_ITMY W: H1:SUS-ITMY_L1_TEST_P_SW1R = 4.0
2015-05-06T22:16:20.66717 SUS_ITMY W: H1:SUS-ITMY_L1_TEST_Y_SW1R = 4.0
2015-05-06T22:16:20.66723 SUS_ITMY W: H1:SUS-ITMY_L1_WDMON_STATE = 1.0
2015-05-06T22:16:20.66728 SUS_ITMY W: H1:SUS-ITMY_L2_TEST_L_SW1R = 4.0
2015-05-06T22:16:20.66734 SUS_ITMY W: H1:SUS-ITMY_L2_TEST_P_SW1R = 4.0
2015-05-06T22:16:20.66740 SUS_ITMY W: H1:SUS-ITMY_L2_TEST_Y_SW1R = 4.0
2015-05-06T22:16:20.66746 SUS_ITMY W: H1:SUS-ITMY_L2_WDMON_STATE = 1.0
2015-05-06T22:16:20.66751 SUS_ITMY W: H1:SUS-ITMY_L3_TEST_BIAS_SW1R = 4.0
2015-05-06T22:16:20.66757 SUS_ITMY W: H1:SUS-ITMY_L3_TEST_L_SW1R = 4.0
2015-05-06T22:16:20.66763 SUS_ITMY W: H1:SUS-ITMY_L3_TEST_P_SW1R = 4.0
2015-05-06T22:16:20.66768 SUS_ITMY W: H1:SUS-ITMY_L3_TEST_Y_SW1R = 4.0
2015-05-06T22:16:20.66775 SUS_ITMY W: H1:SUS-ITMY_M0_DAMP_L_SW2R = 1728.0
2015-05-06T22:16:20.66781 SUS_ITMY W: H1:SUS-ITMY_M0_DAMP_P_SW2R = 1728.0
2015-05-06T22:16:20.66787 SUS_ITMY W: H1:SUS-ITMY_M0_DAMP_R_SW2R = 1728.0
2015-05-06T22:16:20.66792 SUS_ITMY W: H1:SUS-ITMY_M0_DAMP_T_SW2R = 1728.0
2015-05-06T22:16:20.66798 SUS_ITMY W: H1:SUS-ITMY_M0_DAMP_V_SW2R = 1728.0
2015-05-06T22:16:20.66804 SUS_ITMY W: H1:SUS-ITMY_M0_DAMP_Y_SW2R = 1728.0
2015-05-06T22:16:20.66810 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_SW1 = 0.0
2015-05-06T22:16:20.66816 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_SW1R = 12.0
2015-05-06T22:16:20.66821 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_SW2 = 0.0
2015-05-06T22:16:20.66827 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_SW2R = 1536.0
2015-05-06T22:16:20.66833 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_SWSTAT = 302080.0
2015-05-06T22:16:20.66839 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_TRAMP = 2.0
2015-05-06T22:16:20.66845 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_SW1 = 0.0
2015-05-06T22:16:20.66850 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_SW1R = 12.0
2015-05-06T22:16:20.66858 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_SW2 = 0.0
2015-05-06T22:16:20.66870 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_SW2R = 1536.0
2015-05-06T22:16:20.66880 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_SWSTAT = 302080.0
2015-05-06T22:16:20.66891 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_TRAMP = 2.0
2015-05-06T22:16:20.66903 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_L_SW1R = 4.0
2015-05-06T22:16:20.66913 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_SW1 = 0.0
2015-05-06T22:16:20.66923 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_SW1R = 4.0
2015-05-06T22:16:20.66933 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_SW2 = 0.0
2015-05-06T22:16:20.66940 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_SW2R = 1536.0
2015-05-06T22:16:20.66948 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_SWSTAT = 300032.0
2015-05-06T22:16:20.66955 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_TRAMP = 2.0
2015-05-06T22:16:20.66962 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_R_SW1R = 4.0
2015-05-06T22:16:20.66969 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_T_SW1R = 4.0
2015-05-06T22:16:20.66976 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_V_SW1R = 4.0
2015-05-06T22:16:20.66983 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_SW1 = 0.0
2015-05-06T22:16:20.66995 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_SW1R = 4.0
2015-05-06T22:16:20.66998 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_SW2 = 0.0
2015-05-06T22:16:20.67005 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_SW2R = 1536.0
2015-05-06T22:16:20.67013 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_SWSTAT = 300032.0
2015-05-06T22:16:20.67023 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_TRAMP = 2.0
2015-05-06T22:16:20.67027 SUS_ITMY W: H1:SUS-ITMY_M0_WDMON_STATE = 1.0
2015-05-06T22:16:20.67037 SUS_ITMY W: H1:SUS-ITMY_MASTERSWITCH = 1
2015-05-06T22:16:20.67044 SUS_ITMY W: H1:SUS-ITMY_R0_DAMP_L_SW2R = 1728.0
2015-05-06T22:16:20.67048 SUS_ITMY W: H1:SUS-ITMY_R0_DAMP_P_SW2R = 1728.0
2015-05-06T22:16:20.67055 SUS_ITMY W: H1:SUS-ITMY_R0_DAMP_R_SW2R = 1728.0
2015-05-06T22:16:20.67062 SUS_ITMY W: H1:SUS-ITMY_R0_DAMP_T_SW2R = 1728.0
2015-05-06T22:16:20.67070 SUS_ITMY W: H1:SUS-ITMY_R0_DAMP_V_SW2R = 1728.0
2015-05-06T22:16:20.67077 SUS_ITMY W: H1:SUS-ITMY_R0_DAMP_Y_SW2R = 1728.0
2015-05-06T22:16:20.67088 SUS_ITMY W: H1:SUS-ITMY_R0_OPTICALIGN_P_SW1R = 12.0
2015-05-06T22:16:20.67172 SUS_ITMY W: H1:SUS-ITMY_R0_OPTICALIGN_P_TRAMP = 2.0
2015-05-06T22:16:20.67187 SUS_ITMY W: H1:SUS-ITMY_R0_OPTICALIGN_Y_SW1R = 12.0
2015-05-06T22:16:20.67198 SUS_ITMY W: H1:SUS-ITMY_R0_OPTICALIGN_Y_TRAMP = 2.0
2015-05-06T22:16:20.67210 SUS_ITMY W: H1:SUS-ITMY_R0_TEST_L_SW1R = 4.0
2015-05-06T22:16:20.67226 SUS_ITMY W: H1:SUS-ITMY_R0_TEST_P_SW1R = 4.0
2015-05-06T22:16:20.67229 SUS_ITMY W: H1:SUS-ITMY_R0_TEST_R_SW1R = 4.0
2015-05-06T22:16:20.67239 SUS_ITMY W: H1:SUS-ITMY_R0_TEST_T_SW1R = 4.0
2015-05-06T22:16:20.67250 SUS_ITMY W: H1:SUS-ITMY_R0_TEST_V_SW1R = 4.0
2015-05-06T22:16:20.67261 SUS_ITMY W: H1:SUS-ITMY_R0_TEST_Y_SW1R = 4.0
2015-05-06T22:16:20.67275 SUS_ITMY W: H1:SUS-ITMY_R0_WDMON_STATE = 1.0
2015-05-06T22:16:20.67287 SUS_ITMY W: SPMs:
2015-05-06T22:16:20.67316 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_SWSTAT = IN,OF,OT,DC
2015-05-06T22:16:20.67326 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_P_TRAMP = 2.0
2015-05-06T22:16:20.67353 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_SWSTAT = IN,OF,OT,DC
2015-05-06T22:16:20.67363 SUS_ITMY W: H1:SUS-ITMY_M0_OPTICALIGN_Y_TRAMP = 2.0
2015-05-06T22:16:20.67393 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_SWSTAT = IN,OT,DC
2015-05-06T22:16:20.67397 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_P_TRAMP = 2.0
2015-05-06T22:16:20.67424 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_SWSTAT = IN,OT,DC
2015-05-06T22:16:20.67432 SUS_ITMY W: H1:SUS-ITMY_M0_TEST_Y_TRAMP = 2.0
2015-05-06T22:16:20.67436 SUS_ITMY W: H1:SUS-ITMY_R0_OPTICALIGN_P_TRAMP = 2.0
2015-05-06T22:16:20.67446 SUS_ITMY W: H1:SUS-ITMY_R0_OPTICALIGN_Y_TRAMP = 2.0
2015-05-06T22:16:20.67455 SUS_ITMY W: 66 PVs, 10 SPMs
2015-05-06T22:16:20.67828 SUS_ITMY W: SPM snapshot: /ligo/cds/lho/h1/guardian/archive/SUS_ITMY/SUS_ITMY.spm
So why didn't other platforms trip? Here are time series of ITMY's sensors. The BS & ETMX tripped on actuators. For ITMY, most of the signals show a characteristic earthquake arrival sequence of P, S & surface waves. The ITMY actuators (first graph) maxed at maybe 14000 (trip at 30000) and the T240 signals did approach 20000 cts (second graph). Other sensors see the EQ signal but they do not exceed 1/3 of max.