Jenne, Sheila
We had an unusual lockloss a few minutes ago, related to 28255
It happened around 8:11 UTC on August 23rd; the DRMI guardian seemed to think that the lock was lost although it was not.
All Times Pacific Standard Time (PST):
[Jenne, Sheila, Terra, Corey]
We've been having trouble with MICH ASC lately. Sheila suggested that I double-check the TCS power settings, and in fact, the ITMX CO2 guardian is setting the laser to the wrong power.
At 50W, we want TCS-CO2_X to be at 0.2W and TCS-CO2_Y to be at 0.0W. However, the guardian was taking both to 0.0W. This is because the PSL_power_checker function inside the TCS guardian has bad gain and offset values (I think). This function calculates the desired TCS power based on the current PSL power, and the gain and offset values are defined separately for each CO2 laser. Right now the function wants to set the TCS-CO2_X power to 0.0W and the TCS-CO2_Y power to -0.2W. It has a check so that if it's trying to go to a negative power it just goes to 0W, which is why both are being set to 0W. At least for 50W, it seems that if we increased the offset values by 0.2W for each laser, we would get the power that we want. However, I don't know that this would give us the correct CO2 power at other PSL powers, so I am not changing it. Someone from the TCS group should look into the calculation of CO2 power versus PSL power in the guardian, please.
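For reference, here is a minimal sketch (not the actual guardian code) of the kind of calculation the PSL_power_checker function seems to be doing; the gain and offset values below are placeholders, not the ones in the TCS guardian:

import numpy as np  # not required, just keeping with guardian conventions

def psl_power_checker(psl_power, gain, offset):
    # Requested CO2 power is a linear function of the current PSL power.
    requested = gain * psl_power + offset
    # Negative requests are clamped to 0 W, which is why both lasers are
    # currently being sent to 0 W.
    return max(requested, 0.0)

With the current values the calculation comes out to 0.0W for TCS-CO2_X and -0.2W (clamped to 0.0W) for TCS-CO2_Y, so adding 0.2W to each offset would give the desired 0.2W / 0.0W at 50W of PSL power, but not necessarily at other powers.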
After looking into things, I'm actually not sure why we were getting 0.2W for TCS-CO2_X. It looks like the TCS guardian hasn't changed since its original checkin in mid-May, and that it has always been calculating (using the PSL_power_checker function) that it should set TCS-CO2_X to 0.0W in the NOMINAL_LOCKED_CO2_LEVEL state, and that we've been requesting that state in the..... As I was typing, I realized why we weren't getting the wrong TCS powers. Over the last few months, we've spent a lot of our commissioning time in the Increase_Power state of ISC_LOCK, where we had forgotten to comment out an explicit TCS power request from before we had the guardians. The TCS guardians weren't requested to do anything until Coil_Drivers, the state after Increase_Power. So, we were getting the TCS powers we wanted from the explicit request that should have been deleted, and weren't going to the wrong powers because we weren't advancing the ISC guardian.
When Sheila shuffled a few guardian states over the last few days to make things more clear after we arrive at 50W, she put the TCS request in a state that we are now always going to, so that's why we've recently run into this. For now, I've commented out the request for the TCS guardians to go to their "nominal" powers, and am leaving in the explicit request in Increase_Power. Once Team TCS confirms the calculation of CO2 power versus PSL power, we can go back to using the TCS guardians.
Terra and I used the PI damping infrastructure to excite the butterfly and drumhead modes on EY, and then ring them down.
We excited the butterfly mode (6053.9 Hz) during a 50 W lock. The observed ringdown time was 23.5 minutes (= 1410 s), giving a Q of 27 × 10^6.
We excited the drumhead mode (8158.0 Hz) during a 2 W lock. The observed ringdown time was 13.5 minutes (= 810 s), giving a Q of 20 × 10^6.
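As a quick cross-check (assuming the ringdown times quoted above are 1/e amplitude decay times, so that Q = pi * f * tau):

import math
# butterfly: 6053.9 Hz, 1410 s; drumhead: 8158.0 Hz, 810 s
for f, tau in [(6053.9, 1410.0), (8158.0, 810.0)]:
    print('%.1f Hz -> Q = %.1e' % (f, math.pi * f * tau))
# prints roughly 2.7e7 and 2.1e7, in line with the values above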
The templates containing the spectrum data for these ringdowns live in my directory under Public/Templates/SUS/BodyModes.
In PI model:
MODE 29 = ETMX Drumhead
MODE 30 = ETMX Butterfly
MODE 31 = ETMY Butterfly
MODE 32 = ETMY Drumhead
The attached plot shows the expected ratio of the surface strain energy (in J/m) on the test mass face to the total strain energy (in J) in the test mass for the body modes between 5 kHz and 11 kHz. This is a simple Comsol model with a perfect silica cylinder.
Evidently, the drumhead and butterfly modes have similar energy ratios, so we should not expect their Q factors to be too different. It might be good to try the 9.2 kHz modes, since their energy ratio is rather different from the drumhead and butterfly modes, and they produce test mass strain in the beamline direction (the modes at 8.25 kHz and 9.4 kHz do not).
TITLE: 08/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Lots of commissioning work ongoing.
LOG:
3:20pm local 34 sec. to overfill CP3 with 1/2 turn open on LLCV bypass valve
Overview
A synchronized oscillator was added to the QUAD_MASTER model test mass stage (L3). After re-compiling the SUS-ETMY model there will be two synchronized oscillators in the L3 stage that will be used for driving calibration lines: *_L3_CAL_LINE and *_L3_CAL2_LINE.
Removed channel LKIN_P_LO from the list of DQ channels and added L3_CAL2_LINE_OUT into the list.
The h1susetmy model must be recompiled in order for the changes to take effect.
Details
For one of the two calibration lines that we needed to run during ER9 we used a pitch dither oscillator, SUS-ETMY_LKIN_P (see LHO alog 28164). After analyzing the ER9 data we found two problems with this line (see LHO alog 29108):
The CAL-CS model that calculates the time-dependent parameters relies on an oscillator that is synchronized with the ones in the SUS-ETMY and CAL-PCALY models. Since SUS-ETMY_LKIN_P is not a fixed-phase (synchronized) oscillator, the phases have to be adjusted by hand every time the oscillator gets restarted. With synchronized oscillators this will not be necessary.
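As a toy illustration (not guardian or front-end code) of why the synchronized oscillators help: a free-running oscillator's phase at a given GPS time depends on when it was last restarted, while a fixed-phase oscillator's phase is a function of GPS time only, so lines in CAL-CS, SUS-ETMY and CAL-PCALY stay phased to each other across restarts. The line frequency here is a placeholder.

import math

f_line = 35.9  # placeholder calibration-line frequency, Hz

def free_running(t_gps, t_restart):
    # Phase depends on the (arbitrary) restart time, so it has to be
    # re-measured and compensated by hand after every restart.
    return math.sin(2 * math.pi * f_line * (t_gps - t_restart))

def synchronized(t_gps):
    # Phase is a fixed function of GPS time, identical in every model.
    return math.sin(2 * math.pi * f_line * t_gps)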
The second synchronized oscillator was added at L3_CAL2_LINE_OUT and the list of DQ channels was modified accordingly. The L3_CAL2_LINE_OUT was added with sampling rate 512 Hz. LKIN_P_LO was removed from the list of DQ channels.
The changes were committed to the USERAPPS repository, rev. 14081.
Dave, TJ, Jeff K, Darkhan,
H1:SUS-ETM(X|Y) were recompiled and restarted, DAQ was restarted (see LHO alog 29245, WP 6117).
The QUAD MEDM screen was updated to show the new oscillator settings.
The MEDM screen updates were committed to userapps repository (rev. 14088):
common/medm/quad/SUS_CUST_QUAD_ISTAGE_CAL2_LINE.adl
common/medm/quad/SUS_CUST_QUAD_OVERVIEW.adl
After making corrections to the FSS front end model to reflect the hardware connections for
signals involving temperature stabilisation of the reference cavity, the filter gain(s) and
offset(s) for the ambient temperature and the average reference cavity temperature have been
changed. These are for:
H1:PSL-FSS_DINCO_REFCAV_TEMP
offset = 344582.9468
gain = 9.011805496E-4
H1:PSL-FSS_DINCO_REFCAV_AMBTEMP
offset = 328418.8499
gain = 9.650437381E-4
With these settings, these channels now read out the temperature in Kelvin.
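A hedged sketch of the resulting conversion, assuming the standard filter-module ordering in which the offset is added at the input and the gain is applied at the output (the numbers are just the values listed above):

def counts_to_kelvin(raw_counts, gain, offset):
    # Temperature readout in Kelvin = gain * (raw input + offset)
    return gain * (raw_counts + offset)

# For H1:PSL-FSS_DINCO_REFCAV_TEMP, a raw input of 0 counts would read
# 9.011805496e-4 * 344582.9468 ~= 310.5 K.
print(counts_to_kelvin(0.0, 9.011805496e-4, 344582.9468))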
Weekly Xtal - tiny decreases in amp diode powers. No surprise there. All other powers seem nominally steady. Still have a bad current reading at OSC DB3.
Weekly LASER - Osc box humidity down considerably after the internal water event. It appears to be even lower than it was in the days prior to the event. PMC temp output seems to be lower by a couple of degrees; I'm going to call this a good thing. All other power outputs seem nominally stable.
Weekly Environment - I see a decrease in all relative humidity counts. Also there seems to be a marginal drop in temps except for the LVEA and the PSL anteroom.
Weekly Chiller - Trends are zoomed in for high resolution. All flow and pressure trends show downward tendencies except for OSC head2 which trended slightly higher in flow.
Summary - all around, everything seems to be in good shape. There doesn't seem to be any immediate cause for alarm at this time.
Concur with Ed's analysis, everything looks to be running OK.
FAMIS # 6428 - Checked the chillers and filters. Added 125ml to Crystal chiller. Added 250ml to diode as a preventative measure. Both filters are clean. No debris; no discoloration. Trends of chiller flows, pressures, and temperatures are all OK.
SEI - All good. Progress was made with the shutter last week: a different filter, and gain settings that were too high (see alog 29149).
SUS - All good.
CDS - Running.
Pulling chassis at the ends for Beckhoff tomorrow.
PSL - All good.
Vac - HAM6 still coming down slowly.
Tomorrow terminate the cables for the ion pumps. Software needs an update. RGA baking.
Facilities - HVAC system contractors here.
Tomorrow moving items around in LVEA.
Interview and PNNL tour tomorrow in LVEA
Please finish maintenance by NOON tomorrow.
Sheila Terra Evan
This afternoon we made some progress on things that were making locking difficult, and only a little progress on getting to low noise.
Summary: The interventions in late July and early August to disable blinking LEDs and isolate timing system power supplies have made some difference in the periodicities that emerge when folding magnetometer data from the end stations, with the largest difference seen with the initial firmware updates done in late July.

Details: Weigang Liu has been cumulatively applying his folding algorithm to magnetometer data from January through early August, including periods before, during and after recent attempts to mitigate leakage of periodic (1 Hz and 0.5 Hz) transients seen in magnetometer channels into DARM. [Recent clogging of the Caltech cluster nodes with sufficient memory has delayed the automatic production of these plots, so Weigang did a bunch of jobs manually on head nodes for this report.]

Summary of recent interventions:
| Channel / link to 2016 web pages | Figure attachments | July 16 to July 21 changes | July 21 to August 6 changes | August 6 to August 18 changes |
| H1:PEM-EX_MAG_EBAY_SUSRACK_X | 1-4 | Improved | Higher peaks | Similar |
| H1:PEM-EX_MAG_EBAY_SUSRACK_Y | 5-8 | Improved | Higher peaks | 1-Hz structure different (not better) |
| H1:PEM-EX_MAG_EBAY_SUSRACK_Z | 9-12 | 1-Hz structure worse | Similar | 1-Hz structure reduced |
| H1:PEM-EY_MAG_EBAY_SUSRACK_X | 13-16 | Improved | Similar | Similar |
| H1:PEM-EY_MAG_EBAY_SUSRACK_Y | 17-20 | Worse | Even worse | Similar |
| H1:PEM-EY_MAG_EBAY_SUSRACK_Z | 21-24 | 2-Hz structure smaller, 1-Hz structure worse | Similar | Improved |
PI damping working, though all gains required a sign flip. Successfully damped ETMX mode while ETMX ESD was in HV mode thanks to recent mod.
Modes 17 (ETMX), 25, 26, 27 (all ETMY) rang up. All four have been in guardian and were damped tonight with a sign flip of the gains. I was able to check some phase optimizing but locks were too short for much investigation. I've saved these changes to the guardian and they auto damped the next lock successfully.
P. King, J. Oberling
Short Version: The PSL is now up and running following the HPO water leak (first reported here, repairs reported here).
Long Version: This morning, after giving the HPO ~48 hours to completely dry, we inspected the HPO optical surfaces. The only thing found was some water spots on the head 1 4f lens (this was drag wiped clean); all other optical surfaces look good. We then slowly brought up each head individually to ensure no contamination was causing the optical surfaces to glow; all good here as well. The HPO was then successfully powered up and allowed to warm up for several minutes. The front end came on without issue and the injection locking locked immediately. After allowing the system to warm up for ~1 hour, I attempted to lock the PSL subsystems (PMC, FSS, ISS). The PMC did not want to lock; according to Peter this was likely due to a slight horizontal misalignment (this is seen in a trend of the QPD that lives in the ISS box; I unfortunately don't have a copy of it). I returned to the enclosure and tweaked the beam alignment into the PMC and it locked without issue. I then tweaked the PMC alignment further to maximize the power throughput. The PMC now has a visibility of ~80% with ~122W transmitted (with the ISS on). The FSS and ISS both locked without issue. The PSL is now operational and fully recovered from the water leak.
Information about the mis-alignment was obtained from the reflected spot CCD image, not the ISS QPD.
Terra, Sheila
Tonight we had trouble engaging the ASC again.
Losing optical gain in POP X
We rang up what we think is a PR3 bounce mode when engaging the ASC the same way as last night. We found that we could avoid ringing this mode up by keeping the PRC2 gain low (digital gains of -500). Right before the OMC damage/vent, the POP X path was reworked and the optical gain seemed suspiciously low.
Tonight we found that the optical gain has decreased even more. Terra changed the demod phase by dithering PR3 pit (500 counts to M3) and rotated the phase by +65 degrees (Q1, Q2, Q3, Q4 from 55, 53, 54, 51 to 120, 118, 119, 116) to maximize the signal in I (minimize the Q signal). The two attached figures show Terra's before and after OLG measurements (excitation gain of 50), both with Jenne's gain of -5000, showing a 10 dB increase in optical gain, which is about what we expected based on the dither amplitude change.
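For reference, a minimal sketch of what the demod-phase rotation does (the +65 degrees is the rotation quoted above; everything else is illustrative, not the actual front-end code): rotating the (I, Q) pair moves the PR3 dither-line signal out of Q and into I.

import math

def rotate_iq(i, q, phi_deg):
    # Rotate the demodulated (I, Q) pair by phi_deg.
    phi = math.radians(phi_deg)
    i_new = i * math.cos(phi) + q * math.sin(phi)
    q_new = -i * math.sin(phi) + q * math.cos(phi)
    return i_new, q_new

# Placeholder quadrature values whose angle is ~65 degrees: after the
# rotation almost all of the signal ends up in I.
print(rotate_iq(0.42, 0.91, 65.0))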
After optimizing the phase, we did not see the 28 Hz mode get rung up, but this seems to come and go because we also didn't see it yesterday. We quickly tried moving L2 on the POP X path, while watching the amplitude of the PR3 dither line in the POP X signal. We moved the lens about 4 inches closer to POP X and about 3 inches further away, and didn't find any location that had more signal for PR3 so we replaced it as we found it.
We are going to leave the IFO locked in DC readout at 2 Watts with the request set to DOWN so that it will not try to relock. The noise is bad, as expected.
POPX whitening gain is 0dB but should be odd, see alog 26307. FRS 6057 filed.
The whitening gain on POP X was changed from a gain step of 7 (21 dB) to 0 (0 dB) on August 12th. This whitening chassis has a problem and we must use odd gain settings, or else it will return an error and not set the gains equally on all quadrants, as Keita and Hang noted in 26307.
The change in gain probably happened during a beckhoff restart for the shutter code, but we could have been saved from this problem by SDF. I cannot find a record for these whitening chassis in any SDF table.
Also, this does not explain the drop in gain that Jenne saw, which happened before the whitening settings changed.
The stuck whitening gain bit is the LSB of the Q3 channel. In the past this was typically an indication of a cable problem (short).
Sheila Daniel Terra
Connected the AM laser to the POP X head, and saw that we have very similar response in the electronics to what Evan measured in 27069
We had 3.3 mW out of the AM laser with a whitening gain of 21 dB, and used -40 dBm of RF drive at 45.501150 MHz. We saw about 600 counts on each quadrant (except quadrant 3, which had 350 counts and also the least amount of DC light because of the way the laser was mis-centered on the diode).
We saw that there are rather large offsets when we changed the whitening gain, so Daniel reset the offsets. The large offsets might have contributed to problems last night, along with confusion about the whitening gain.
Also, we remembered that a factor of 6.7 of the mystery gain loss was due to adding a beamsplitter and forgetting to compensate for it on July 11.
(Edit: Actually, Haocun and I did remember to correct for this gain change; we just compensated for it in the digital loop gain.)
So to summarize:
Loops were initially commissioned with a whitening gain of 21, a digital gain of -21, a 1 Hz ugf, and electronics gain similar to what we have now (late May).
Edit: loops were originally commissioned with a filter gain of -200 for pit, -0.1 in the input matrix, an analog gain of 21 dB, and the WFS head electronics performing in a way similar to what we have now. This is when the reference that I think Jenne used was saved, and within a few days the pit input matrix was reduced by a factor of 2.
Edit: Around June 16th, we had difficulty staying locked when these loops were engaged, which was noted in the alog. Terra and I just looked at trends of the filter gains, and it seems like we also reduced the digital gain from -220 to -3.4 although this was not noted in the alog. This, together with the input matrix change explains most of the missing gain that Jenne found.
On July 11th I forgot to compensate for the beamsplitter causing a gain reduction of 6.7 that no one noticed.
On July 26th, Evan and Keita relocated POP X and Jenne noticed that the digital gain had to be increased by a factor of 250 (or 500 for yaw) to keep the ugf the same.
August 12th the whitening gain was reduced to 0 dB from 21 dB by mistake in a beckhoff reboot.
August 16th Terra and I noticed this further reduction in gain, which is explained by the whitening gain. We also changed the demod phase which increased the gain by about 10 dB. We checked that small movements of the L2 don't change the optical gain much, and moving it by a few inches can decrease the signal.
So, we are missing about a factor of 40 gain, which we cannot explain with electronics.
In the end only a factor of 2 of Jenne's gain change is unexplained. It seems that we have had stable high power locks with both the high gain and low gain settings for PRC2, so we can decide which we want to use. We also should have a factor of 3 increase in gain because of the phasing Terra and I did.
More complicated than that.
| Date | Whitening (dB) | POPX digital gain before rotation | Input matrix | PRC2_P_GAIN | BS | Overall gain relative to original | alog |
| Originally | 33 | 1 | -1 | -220 | none | NA | |
| May 24 ~1:02 | 33 | 1 | -0.05 | -220 | none | 0.5 | |
| Jun. 17 | 33 | 1 | -0.05 | -3 | none | 6.8E-3 | |
| Jun. 22 ~noon | 21 | 2.8 | -0.05 | -3 | none | 4.8E-3 | 27901 |
| Jul. 11-12 | 21 | 2.8 | -0.05 | -21 | inserted | 5.0E-3 | 28324 |
| Jul. 27 ~4:20 | 21 | 2.8 | -0.05 | -5000 | inserted | 1.2 | 28666 |
No mystery optical/electronic gain reduction any more. Maybe a factor of 1.2 came from the rework on the table.
It's not clear to me why the PRC2 filter gain was reduced by a huge amount on Jun. 17 but I haven't searched through alog.
Typo in the above table, originally the input matrix was -0.1, not -1.
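For what it's worth, a small bookkeeping sketch of how the "overall gain relative to original" column above appears to be computed: the product of the whitening gain (converted from dB), the POPX digital gain, the input-matrix element, PRC2_P_GAIN, and a 1/6.7 beamsplitter factor once the BS was inserted, all normalized to the original settings with the corrected -0.1 input matrix.

def overall(whitening_db, digital, matrix, prc2, bs_inserted):
    # Linear amplitude factor from the whitening gain in dB.
    whitening = 10 ** (whitening_db / 20.0)
    bs = 1.0 / 6.7 if bs_inserted else 1.0
    return whitening * digital * abs(matrix) * abs(prc2) * bs

ref = overall(33, 1, -0.1, -220, False)              # original settings
print(overall(21, 2.8, -0.05, -5000, True) / ref)    # ~1.2, last row of the table
print(overall(21, 2.8, -0.05, -21, True) / ref)      # ~5.0e-3, Jul. 11-12 row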
HEPI BS Tripped few minutes before ITMX ISI. This is the only HEPI that tripped in the neighborhood of the large quake.
ITMY ISI tripped--the timing (H1:ISI-ITMY_ST2_WD_MON_GPS_TIME) indicates Stage2 tripped on ACTuators 1 second before Stage1 tripped on its T240s, but looking at the plots, the actuators have only registered a few counts, nothing near saturation/trip level. The T240s, on the other hand, hit their rail almost instantly. It seems the Stage2 Last Trip (H1:ISI-ITMY_ST2_WD_MON_FIRSTTRIG_LATCH) should be indicating ST1WD rather than Actuator. On ETMY, the trip time is the same for the two stages and Stage2 notes it is an actuator trip, but again, there are only a few counts on the MASTER DRIVE; it seems this too should have been a ST1WD trip indication on Stage2--I'll look into the logic.
On the BS ISI, the Stage1 and Stage2 trip times are the same, and the Last Trip for Stage2 indicates ST1WD. The Stage2 sensors are very rung up after the trip time but not before, unlike the T240s, which are ramping up to the rail a few seconds before the trip. ETMX shows this same logical pattern in the trip sequence indicators.
On the ITMX ISI, Stage1 Tripped 20 minutes before the last Stage2 trip. This indicates the Stage1 did not trip at the last Stage2 trip.
No HAM ISI Tripped on this EQ.
Bottom line: the logical output of the WDs are not consistent from this common model code--needs investigating. Maybe I should open an FRS...
Attachment 1) Trip plots showing Stage2 trip time 1 second before the stage1 trip where the stage2 actuators do not go anywhere near saturation levels.
Attachment 2) Dataviewer plot showing the EQ on the CS ground STS and the platform trip times indicated.
It seems this is not a problem with the watchdog but a problem with the plotting script. For ST2 actuators, it appears to miss a multiplier on the Y axis. It works correctly for ST1 actuators and all the sensors; it likewise fails for ST2 ACT on the other chambers. FRS 6072.
Actually, the plotting script is working fine. When the spike is so large that the plotting decides to switch to exponential notation, the exponent is hidden by the title until you blow up the plot to a very large size.

I removed the 300 Hz and 600 Hz stopband filters in DARM, along with the 950 Hz low-pass filter.
I increased the gain from 840 ct/ct to 1400 ct/ct, giving a UGF of 55 Hz. This seems to have improved the gain peaking situation around 10 Hz (see attachment).
The new settings have been added to the guardian (in the EY transition state), but have not been tested. The calibration has not been updated.
Tagging CAL Group. Evan Goetz has also been working on a better PUM roll-off. He'll be installing those improvements soon as well, and a full loop design comparison.
Since we spend a nontrivial amount of time commissioning at high powers (>20 W) with DARM controlled by EX, I moved the DARM gain increase so that it comes on once the PSL power reaches 20 W.
There are two locklosses around that time, so I'll play detective for both.
1.) 8:09:33 UTC (1155974990)
Looking at the Guardian log:
2016-08-23_08:09:30.786330Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P_SW1 => 16
2016-08-23_08:09:31.037960Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P => OFF: FM1
2016-08-23_08:09:31.042700Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y_SW1 => 16
2016-08-23_08:09:31.290770Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y => OFF: FM1
2016-08-23_08:09:33.911750Z ISC_DRMI new request: DRMI_WFS_CENTERING
2016-08-23_08:09:33.911930Z ISC_DRMI calculating path: ENGAGE_DRMI_ASC->DRMI_WFS_CENTERING
2016-08-23_08:09:33.912540Z ISC_DRMI new target: DOWN
2016-08-23_08:09:33.912620Z ISC_DRMI GOTO REDIRECT
2016-08-23_08:09:33.912900Z ISC_DRMI REDIRECT requested, timeout in 1.000 seconds
Seems as though there was a request for a state that is behind its current position, so it had to go through DOWN to get there. This request came from ISC_LOCK:
2016-08-23_08:09:33.546800Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:09:33.546920Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:09:33.547020Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:09:33.547110Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:09:33.547210Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI lost lock
2016-08-23_08:09:33.602500Z ISC_LOCK state returned jump target: LOCKLOSS_DRMI
2016-08-23_08:09:33.602710Z ISC_LOCK [LOCK_DRMI_3F.exit]
2016-08-23_08:09:33.666340Z ISC_LOCK JUMP: LOCK_DRMI_3F->LOCKLOSS_DRMI
2016-08-23_08:09:33.667220Z ISC_LOCK calculating path: LOCKLOSS_DRMI->LOCK_DRMI_3F
2016-08-23_08:09:33.667760Z ISC_LOCK new target: LOCK_DRMI_1F
2016-08-23_08:09:33.668520Z ISC_LOCK executing state: LOCKLOSS_DRMI (3)
2016-08-23_08:09:33.668750Z ISC_LOCK [LOCKLOSS_DRMI.enter]
2016-08-23_08:09:33.854350Z ISC_LOCK EDGE: LOCKLOSS_DRMI->LOCK_DRMI_1F
2016-08-23_08:09:33.855110Z ISC_LOCK calculating path: LOCK_DRMI_1F->LOCK_DRMI_3F
2016-08-23_08:09:33.855670Z ISC_LOCK new target: ENGAGE_DRMI_ASC
2016-08-23_08:09:33.856260Z ISC_LOCK executing state: LOCK_DRMI_1F (101)
2016-08-23_08:09:33.856410Z ISC_LOCK [LOCK_DRMI_1F.enter]
2016-08-23_08:09:33.868100Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 0: node TCS_ITMY_CO2_PWR: NOTIFICATION
2016-08-23_08:09:33.868130Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 1: node SEI_BS: NOTIFICATION
2016-08-23_08:09:33.893890Z ISC_LOCK [LOCK_DRMI_1F.main] ezca: H1:GRD-ISC_DRMI_REQUEST => DRMI_WFS_CENTERING
and
2.) 08:13:12 UTC (1155975209)
Doesn't seem to be any funny business here. The DRMI_locked() function looks at the channels in the log below and then returns to LOCK_DRMI_1F, and at this point it seems like the MC lost lock (see plots).
2016-08-23_08:13:17.613090Z ISC_DRMI [DRMI_WFS_CENTERING.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:13:17.613160Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:13:17.613230Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:13:17.613300Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:13:17.613500Z ISC_DRMI [DRMI_WFS_CENTERING.run] la la
2016-08-23_08:13:17.670880Z ISC_DRMI state returned jump target: LOCK_DRMI_1F
2016-08-23_08:13:17.671070Z ISC_DRMI [DRMI_WFS_CENTERING.exit]
2016-08-23_08:13:17.671520Z ISC_DRMI STALLED
2016-08-23_08:13:17.734330Z ISC_DRMI JUMP: DRMI_WFS_CENTERING->LOCK_DRMI_1F
2016-08-23_08:13:17.741520Z ISC_DRMI calculating path: LOCK_DRMI_1F->DRMI_WFS_CENTERING
2016-08-23_08:13:17.742080Z ISC_DRMI new target: DRMI_LOCK_WAIT
2016-08-23_08:13:17.742750Z ISC_DRMI executing state: LOCK_DRMI_1F (30)
2016-08-23_08:13:17.742920Z ISC_DRMI [LOCK_DRMI_1F.enter]
2016-08-23_08:13:17.744030Z ISC_DRMI [LOCK_DRMI_1F.main] MC not Locked
2016-08-23_08:13:17.795150Z ISC_DRMI state returned jump target: DOWN
2016-08-23_08:13:17.795290Z ISC_DRMI [LOCK_DRMI_1F.exit]
Here are the functions that are used as decorators in DRMI_WFS_CENTERING
def MC_locked():
    trans_pd_lock_threshold = 50
    return ezca['IMC-MC2_TRANS_SUM_OUTPUT']/ezca['IMC-PWR_IN_OUTPUT'] >= trans_pd_lock_threshold

def DRMI_locked():
    MichMon = ezca['LSC-MICH_TRIG_MON']
    PrclMon = ezca['LSC-PRCL_TRIG_MON']
    SrclMon = ezca['LSC-SRCL_TRIG_MON']
    if (MichMon > 0.5) and (PrclMon > 0.5) and (SrclMon > 0.5):
        # We're still locked and triggered, so return True
        return True
    else:
        # Eeep! Not locked. Log some stuff
        log('DRMI TRIGGERED NOT LOCKED:')
        log('LSC-MICH_TRIG_MON = %s' % MichMon)
        log('LSC-PRCL_TRIG_MON = %s' % PrclMon)
        log('LSC-SRCL_TRIG_MON = %s' % SrclMon)
        return False
Something I also should have mentioned is that ISC_LOCK was brought into Manual and then requested to LOCK_DRMI_3F right before the logs seen above. Seems as though it wasn't quite ready to be there yet, so it jumped back down to LOCK_DRMI_1F and re-ran the state, where it requested DRMI_WFS_CENTERING from the ISC_DRMI guardian.