H1 CDS
david.barker@LIGO.ORG - posted 16:03, Wednesday 30 October 2013 - last comment - 18:04, Wednesday 30 October 2013(8318)
h1pemmx, testing 2.8 code.

Jim, Cyrus, Dave

Rolf added a new feature to RCG 2.8 to permit a front end to run without an IRIG-B card (GPS time is obtained via EPICS Channel Access from a remote IOC). We are in the process of testing this on h1pemmx.

To prepare for the test, I added the line "remoteGPS=1" to the CDS block on h1ioppemmx. I added a cdsEzCaRead part, reading the GPS time from the DAQ data concentrator on channel H1:DAQ-DC0_GPS. I svn updated the trunk area, and compiled h1ioppemmx and h1pemmx against the latest trunk.
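
As a quick sanity check that the remote GPS channel is reachable over Channel Access, it can be read from any CDS workstation. A minimal sketch, assuming pyepics is installed and CA access to the H1 EPICS network is available:

    from epics import caget

    # Read the data concentrator's GPS time -- the same channel the
    # cdsEzCaRead part in h1ioppemmx reads.
    gps = caget('H1:DAQ-DC0_GPS')
    print('DC0 GPS time: %s' % gps)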

Test 1: keep the IRIG-B card in the computer and restart the IOP model several times. We noticed that the offset between the GPS time reported by IOPPEMMX and its reference DC0 changes from restart to restart, but the two stay synchronized to within a second.
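
The restart-to-restart comparison can be scripted along the same lines. A hedged sketch, again using pyepics; note the IOP-side GPS channel name in the usage comment is a placeholder, not the actual EPICS record name:

    from epics import caget

    def gps_offset(iop_channel, dc0_channel='H1:DAQ-DC0_GPS'):
        # Return the absolute difference (seconds) between the GPS time
        # reported by the IOP and the DC0 reference.
        iop = caget(iop_channel)
        dc0 = caget(dc0_channel)
        return abs(float(iop) - float(dc0))

    # Usage (substitute the real IOP GPS channel for the placeholder):
    # print(gps_offset('H1:FEC-<iop_dcuid>_GPS'))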

We are in the process of test 2: removing the IRIG-B card from h1pemmx. At the same time, Cyrus is reconfiguring the X-arm switching systems for the front end and DAQ networks, which will permit replacement of two Netgear switches at MX with media converters. The use of full switches to support a single front end computer is obviously wasteful.

On completion of today's IRIG-B tests, we will re-install the IRIG-B card and reload the 2.7.2 version of the IOP and PEM code. While this test is in progress, the DAQ status from MX is 0x2000 and its data is bad.
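
For reference, the 0x2000 condition can also be checked from EPICS. A sketch that assumes the per-front-end DAQ status follows the usual H1:DAQ-DC0_<MODEL>_STATUS naming (treat the exact channel name as an assumption):

    from epics import caget

    status = int(caget('H1:DAQ-DC0_H1IOPPEMMX_STATUS'))
    print('DAQ status: 0x%04x' % status)
    if status & 0x2000:
        print('0x2000 set: data from this front end is currently flagged bad')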

Comments related to this report
david.barker@LIGO.ORG - 17:26, Wednesday 30 October 2013 (8322)

The test is complete; the pemmx front end has been reverted to its original state (IRIG-B card installed, 2.7.2 code running).

The test was a SUCCESS: the IOP ran without an IRIG-B card. This is indicated by a ZERO time in the IRIG-B diagnostics on the GDS_TP MEDM screen (see attached).

One problem was found with the replacement of the DAQ network switch with a media converter: it caused the DAQ data from all the other front ends which share the second 10 GigE card on h1dc0 to go bad. We tried to restart the mx_streamer on h1pemmx, but that only made matters worse and all the FEs' data went bad for a few seconds. I'll leave it to Cyrus to add more details. We reinstalled the Netgear switch for the DAQ network, but kept the media converter for the FE network, as it showed no problems.

Images attached to this comment
cyrus.reed@LIGO.ORG - 18:04, Wednesday 30 October 2013 (8323)

The media converters I tried are bridging media converters, which means they act like a small one-port switch with one uplink. I went with these because when the computer is powered off, the embedded IPMI interface negotiates at 100 Mbps, not 1 Gbps, and a standard media converter will not negotiate this rate (it is fixed to the fiber rate). A bridging converter therefore maintains access to the IPMI management interface on the front end computer at all times, not just when it is booted and connected to the switch at 1 Gbps. However, the switching logic in these media converters does not support jumbo frames, which, when used on the DAQ network, corrupts the Open-MX data. I've confirmed this by looking at the documentation again and comparing it to a non-bridging version. So I'll need to obtain some additional non-bridging media converters for use on the DAQ network; these should work better for this application since they are strictly Layer 1 devices with no Layer 2 functionality.
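
(For anyone repeating this, the jumbo-frame configuration on the DAQ-side NIC is easy to confirm from the front end itself. A minimal Python sketch; the interface name below is an assumption and should be replaced with the actual DAQ/Open-MX interface on h1pemmx:)

    def mtu(interface):
        # Linux exposes the configured MTU for each NIC in sysfs.
        with open('/sys/class/net/%s/mtu' % interface) as f:
            return int(f.read())

    iface = 'eth1'  # placeholder -- use the DAQ/Open-MX interface
    m = mtu(iface)
    print('%s MTU=%d (%s frames)' % (iface, m, 'jumbo' if m > 1500 else 'standard'))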
