H1 ISC (GRD, OpsInfo)
jim.warner@LIGO.ORG - posted 17:55, Wednesday 01 February 2017 - last comment - 09:37, Thursday 02 February 2017 (33825)
ALS Fiber polarization fault can prevent ISC_LOCK from progressing, other problems

While Jason and Fil were looking at the TCSY laser, Corey had left ISC_LOCK sitting at ENGAGE_SRC_ASC. We noticed that DIAG_MAIN and the ALSX guardian were both complaining about the X-arm fiber polarization. We thought we could ignore this because the arms were no longer "on ALS", but when ISC_LOCK reached the SHUTTER_ALS state, it couldn't proceed because the ALSX guardian was in fault.

To move forward, I had to take the ALSX guardian to Manual and put it in the SHUTTERED state. But now ALSX wasn't monitored by ISC_LOCK. When I got to NLN, TCSCS was in safe (from earlier work?) and had a bunch of SDF differences; the TCS_ITMY_CO2_PWR guardian was also complaining (where is this screen? I had to use "guardctrl medm TCS_ITMY_CO2_PWR" to launch it; it recovered after INITing); and ALSX was managed by USER. This last one I fixed with a caput: "caput H1:GRD-ALS_XARM_MANAGER ISC_LOCK". Normally that would be fixed by INITing the parent node, but for ISC_LOCK that means going through DOWN and breaking the lock.
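
For reference, a minimal command-line sketch of that recovery (the caget check is just illustrative; the channel names are the ones above):

    # Check which node currently manages ALS_XARM ("USER" means no guardian node is managing it)
    caget H1:GRD-ALS_XARM_MANAGER

    # Hand ALS_XARM back to ISC_LOCK without INITing the parent
    caput H1:GRD-ALS_XARM_MANAGER ISC_LOCK

    # Launch a guardian node's MEDM screen from the terminal
    guardctrl medm TCS_ITMY_CO2_PWR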

Of course, after fixing all of that and surviving an earthquake, I lost lock due to PIs (parametric instabilities) that seem to have shifted because of the TCS outage.

Comments related to this report
jim.warner@LIGO.ORG - 18:28, Wednesday 01 February 2017 (33826)TCS

There are more SDF diffs in TCSCS. It looks like these should probably be unmonitored.

Images attached to this comment
jim.warner@LIGO.ORG - 20:42, Wednesday 01 February 2017 (33829)

More TCS diffs.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 09:37, Thursday 02 February 2017 (33841)OpsInfo

To clarify a few Guardian operations here: be careful if you try to put a node back to managed by clicking the "AUTO" button, because this will make USER the manager, NOT the normal, correct manager node. The way for the manager to regain control of its subordinates is to go to INIT, as Jim states. It is true that if you select INIT from ISC_LOCK while not in Manual mode, it will go to DOWN after completing the INIT state. But if you keep ISC_LOCK in Manual, you can wait for the INIT state to complete and then click the next state that ISC_LOCK should execute. That last part is the tricky part, though: if you reselect the state that you stopped at before going to INIT, you run the risk of losing lock, because it will rerun that state. Not every state will break lock when rerun, but some will.

Jim used the other way to regain control of a node: caput'ing the manager node's name into H1:GRD-{subordinate_node}_MANAGER. This also works, but is kind of a "back door" approach (although it may be a bit clearer, depending on circumstances).
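
As a rough sketch of that INIT-in-Manual sequence (assuming the standard GRD-{node}_REQUEST and GRD-{node}_STATE channel names; in practice you would click through this on the node's MEDM screen):

    # With ISC_LOCK already in Manual mode (set from its MEDM screen),
    # request INIT:
    caput H1:GRD-ISC_LOCK_REQUEST INIT

    # Wait for the INIT state to finish:
    caget H1:GRD-ISC_LOCK_STATE

    # Then request the NEXT state ISC_LOCK should execute, not the state
    # you stopped at, so you don't rerun a state that could break the lock.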

As for the TCS_ITMY_CO2_PWR node, all nodes are on the Guardian Overview MEDM screen. All of the TCS nodes are under the TCS group in the top right, near the BRS nodes. Perhaps we should make sure that these are also accessible from the TCS screens.