Chris B., Jamie R.

This aLog documents the work done so far to get the guardian hardware injection node running at LHO. Documenting the work over the next few days should ease the installation at LLO once everything is sorted out at LHO. It also includes some tidbits about the development that has been done, since several members of the hardware injection subgroup wanted to be kept in the loop.

Installations

There are only a few things we need on the guardian script machines that were not already there. Things we have done:

(1) Updated the userapps SVN at /opt/rtcds/userapps/release/cal/common/guardian
(2) Checked that we can instantiate a guardian node with: guardctrl create INJ
(3) Installed awg on the guardian script machine

Things we have yet to install on the guardian machine:

* glue
* ligo-gracedb
* grid credentials

Codebase Development

This afternoon was mostly spent implementing several new things in the codebase. I have attached a new graph of the node to this aLog since there are a number of new states, e.g. new failure states, renamed active injection states (formerly called CBC, BURST, etc.), a renamed IDLE state, and a renamed GraceDB state. As always, the code lives in the SVN here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/

Some changes of note:

* Changed the jump transitions in the successful injection pathway to edges. This changes how the node should be run: the requested state should now be INJECT_SUCCESS while the node is running.
* The modules (e.g. inj_det, inj_types, etc.) have been moved to a new injtools subpackage.
* Added success/failure messages for GraceDB events after an injection is complete.
* Added guardian decorators for a few tasks that are often repeated, e.g. checking for external alerts (a minimal sketch of the pattern is included at the end of this entry).
* Added process management for the awg call.

These changes have made the schedule validation script (https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/scripts/guardian_inj_schedule_validation.py) out of date, and it will need to be updated. The GraceDB portions of the code have also been commented out for now, since we are holding off on that until we have the grid credentials and how we will run the guardian process sorted out.

Tests

While developing I ran a couple of tests with the guardian node today. I did three hardware injections of constant amplitude (1e-26) into H1:CAL-PINJX_TRANSIENT_EXC, each with a duration of 1 second. The GPS start times are:

* 1145497100
* 1145497850
* 1145500600

These tests were mostly to check that the call to awg was working properly. The PINJX_HARDWARE filterbank has already been turned off (aLog 26748), so the signal will only appear in the PINJX_TRANSIENT and PINJX_HARDWARE channels. The attached plot below shows the injection at 1145500600.
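For reference, here is a minimal sketch of how such a test injection could be put together. The 16384 Hz sample rate, the file name, and the use of the awgstream command-line tool are assumptions for illustration only; the guardian node itself drives the injection through the awg python bindings.

import subprocess
import numpy as np

# Build a 1 s constant-amplitude (1e-26) waveform like the test injections above.
# The 16384 Hz sample rate is an assumption; the aLog only gives amplitude and duration.
rate = 16384
duration = 1.0
data = 1e-26 * np.ones(int(duration * rate))

# Write one sample per line, the plain ASCII format awgstream reads.
np.savetxt('constant_1e-26.txt', data)

# Illustration only: an equivalent standalone injection via the awgstream
# command-line tool (channel, sample rate, waveform file, GPS start time).
gps_start = 1145497100  # first of the three test injections above
subprocess.check_call([
    'awgstream',
    'H1:CAL-PINJX_TRANSIENT_EXC',
    str(rate),
    'constant_1e-26.txt',
    str(gps_start),
])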
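Relating to the guardian decorators item under Codebase Development above: the snippet below is only a sketch of the pattern, not the code in the SVN. The ext_alert_active() helper, the EXT_ALERT_ACTIVE state name, and the INJECT_CBC_ACTIVE class are placeholders.

from functools import wraps
from guardian import GuardState

def ext_alert_active():
    # Placeholder: the real check would query whatever channel/flag records
    # an external alert (e.g. a GRB alert) being active.
    return False

def check_ext_alert(func):
    """Jump to a failure state if an external alert is active.

    Sketch of the repeated-task decorator idea; it wraps a guardian state
    method so the check does not have to be copied into every state.
    """
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if ext_alert_active():
            # Returning a state name from a guardian state method requests
            # a jump to that state.
            return 'EXT_ALERT_ACTIVE'
        return func(self, *args, **kwargs)
    return wrapper

class INJECT_CBC_ACTIVE(GuardState):
    @check_ext_alert
    def run(self):
        # ... monitor the running injection here ...
        return True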
I have installed the following packages on the h1guardian0 machine:
These were installed from the production LSCSoft Debian wheezy archive, which should be fully compatible with this version of Ubuntu (12.04):
controls@h1guardian0:~$ cat /etc/apt/sources.list.d/lscsoft.list
deb http://software.ligo.org/lscsoft/debian wheezy contrib
deb-src http://software.ligo.org/lscsoft/debian wheezy contrib
controls@h1guardian0:~$
We'll be testing these installations today.
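Once the grid credentials are in place, a quick check like the one below (run from the guardian environment) should confirm that the new client libraries import and that a GraceDB client can be constructed. This is only a sketch: it assumes the packages are the glue and ligo-gracedb clients listed as missing in the main entry, and the graceid and message in the commented line are placeholders for the success/failure annotation the node will eventually write.

# Verify the newly installed client libraries import and that a GraceDB
# client can be constructed (actual queries need valid grid credentials).
import glue
from ligo.gracedb.rest import GraceDb

client = GraceDb()
# client.writeLog('G000000', 'Hardware injection completed successfully.')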
As for the awg installation, this was not actually a new install. Instead, the existing GDS installation in /ligo/apps was just made available in the guardian environment:
controls@h1guardian0:~$ grep gds /etc/guardian/local-env
. /ligo/apps/linux-x86_64/gds/etc/gds-user-env.sh
controls@h1guardian0:~$
That was sufficient to make the awg python bindings available to the guardian nodes.
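A quick way to confirm that from a guardian node's environment is an import check like the one below; printing the module path just shows which installation was picked up.

# Sanity check that the awg bindings resolve from the GDS install sourced
# via /etc/guardian/local-env. This only tests the import; no excitation
# is started.
import awg
print(awg.__file__)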