H1 CDS
patrick.thomas@LIGO.ORG - posted 09:19, Thursday 31 October 2013 (8331)
Restarted Conlog to change status channel names
Updated the Conlog medm and sitemap link.

Restarted Conlog and set the prefix to H1:CDS-CONLOG_. The sitemap CDS->Conlog link should now work.
H1 SUS
mark.barton@LIGO.ORG - posted 09:04, Thursday 31 October 2013 - last comment - 15:17, Thursday 31 October 2013(8330)
IM TFs

Cheryl asked if we could get TFs for IM2, IM3 and IM4 to complete the IM1 set from 8287 before the HAM door is removed, so I set them going in parallel in three Matlab sessions on opsws7. They should be done by about 11:30 am.

Comments related to this report
mark.barton@LIGO.ORG - 11:53, Thursday 31 October 2013 (8334)IOO

All data taken OK. Individual plots attached.

There's still a bug in the script that does comparison plots so stay tuned for that.

Note that the model now includes eddy-current damping, but the values are from by-eye fitting to IM1 data, not actual estimates from magnet and copper block properties.

Chamber is released for door work.

Non-image files attached to this comment
mark.barton@LIGO.ORG - 15:17, Thursday 31 October 2013 (8344)IOO

Here's the comparison plot, including one old trace for IM1 taken by Szymon with DTT on 1/23, and the four new traces for IM1-4 taken with Matlab recently.

The consistency from measurement to measurement is very good, even between DTT and Matlab.

There appears to be a different gain error for each DOF, which is probably an issue with the model:

L: model is about 0.6 of measurement

P: model is about 1.4 of measurement

Y: model is about 2.4 of measurement

Arnaud spotted that the script error mentioned above was due to plothaux_matlabtfs.m and plotHAUX_dtttfs.m writing intermediate data files in slightly different formats. We tweaked plotHAUX_dtttfs.m and regenerated ^/trunk/HAUX/H1/IM1/SAGM1/Results/2013-01-23_1600_H1SUSIM1_M1.mat.

Non-image files attached to this comment
H1 CDS (SEI, SUS)
james.batch@LIGO.ORG - posted 08:36, Thursday 31 October 2013 (8329)
Restarted streamers on front-ends, but don't get too excited yet
Data is flowing to the data concentrator again, but h1sush56 is locked up. Restarting it will almost certainly take down the Dolphin IPC network, which will disrupt most of the computers in the corner station. So don't get too excited about taking data this morning until the system is stable.
Logbook Admin Bug (CDS)
jeffrey.kissel@LIGO.ORG - posted 23:06, Wednesday 30 October 2013 - last comment - 23:08, Wednesday 30 October 2013(8326)
tar.gz files fail to upload
I was looking to upload the source code tar ball for the record in LHO aLOG 8325, but after selecting the file in the browser, I hit the "upload file" button and the file just disappeared from the browse field without anything being uploaded.

We've requested that *any* file type should be uploadable, as long as it's under the 10 Mb limit (see LHO aLOG 3798 and subsequent comments), but I guess the infrastructure doesn't allow for it?

P.S. Looking to see if we've requested this before ('cause I thought we had), I rediscovered a report of a bug that's still present:
Comments related to this report
jeffrey.kissel@LIGO.ORG - 23:08, Wednesday 30 October 2013 (8327)
For the record, the exact tarball (if it doesn't change before tomorrow when we fix the problem) lives in
/opt/rtcds/lho/h1/target/h1sussr3/src/sources.tar.gz
in case it has something to do with this specific tar ball, but I doubt it does.
H1 SUS (CDS, DAQ)
jeffrey.kissel@LIGO.ORG - posted 22:52, Wednesday 30 October 2013 - last comment - 05:21, Thursday 31 October 2013(8325)
ECR E1300578 and E1300261 Progress -- HLTS Models -- And Crashed the Data Concentrator / Framebuilder
J. Kissel

I've now updated the HLTS front-end Simulink models as per ECR E1300578, similar to the QUADs, TMTS, and BSFM, as described in G1301192. After successful compilation of both H1SUSPR3 and H1SUSSR3, Fabrice informed me that Arnaud was gathering some data looking for long-term drift on PR3, so I only installed and restarted SR3.

Of course, upon successful compilation and install, I went to restart the front end with the new process and it hung halfway through, completely taking down the data concentrator / frame builder / DAQ, and taking down the entire h1sush56 front end. I attach a screenshot of the CDS overview screen. *sigh*. The reigning king of finding crazy obscure bugs in CDS and exercising them wins again.

The only debugging I've done so far is to try rebooting the data concentrator once, by doing the following:

controls@opsws3:models 0$ telnet h1dc0 8087
Trying 10.101.0.20...
Connected to h1dc0.cds.ligo-wa.caltech.edu.
Escape character is '^]'.
daqd> shutdown
OK
Connection closed by foreign host.
controls@opsws3:models 1$

This brought *some* of the front ends back up to green status (the SEI and SUS computers at the end stations), but the corner is cooked.

Sorry Arnaud, and anyone else who was gathering data overnight.

Giving up for the night and will continue fighting the good fight tomorrow morning.

I tried uploading the source from the target area, but the aLOG doesn't like tar.gz's at all.

------
Here's the status of the sus corner of the SVN repo that's a result of my work:

MM      common/models/HAUX_MASTER.mdl                  # haven't started on the HAUX yet
MM      common/models/HLTS_MASTER.mdl                  # changes complete, but don't want to commit until I can successfully start the front end process
M       common/models/SIXOSEM_T_STAGE_MASTER.mdl       # same as above
MM      common/models/MC_MASTER.mdl                    # haven't started on the HSTS yet
M       common/models/OMCS_MASTER.mdl                  # haven't started on the OMCS yet
M       common/models/SIXOSEM_T_WD_AC_MASTER.mdl       # changes complete, but don't want to commit until I can successfully start the front end process
M       common/models/SIXOSEM_T_WD_DC_MASTER.mdl       # changes complete, but don't want to commit until I can successfully start the front end process
MM      common/models/HSTS_MASTER.mdl                  # haven't started on the HSTS yet

M       h1/filterfiles/H1SUSTMSX.txt                   # Haven't committed since new code has been installed, these still need a hand clean up of now-vestigial filter banks
M       h1/filterfiles/H1SUSTMSY.txt                   #     | 
M       h1/filterfiles/H1SUSBS.txt                     #     | 
M       h1/filterfiles/H1SUSSR3.txt                    #     | 
M       h1/filterfiles/H1SUSETMX.txt                   #     | 
M       h1/filterfiles/H1SUSETMY.txt                   #     | 
M       h1/filterfiles/H1SUSITMX.txt                   #     | 
M       h1/filterfiles/H1SUSITMY.txt                   #     V
 
M       h1/models/h1susprm.mdl                         # haven't started on the HSTS yet
M       h1/models/h1sussrm.mdl                         #     |
M       h1/models/h1suspr2.mdl                         #     V
M       h1/models/h1suspr3.mdl                         # changes complete, but don't want to commit until I can successfully start the front end process
M       h1/models/h1sussr2.mdl                         # haven't started on the HSTS yet
M       h1/models/h1sussr3.mdl                         # changes complete, but don't want to commit until I can successfully start the front end process
M       h1/models/h1susomc.mdl                         # haven't started on the OMCS yet
M       h1/models/h1susmc1.mdl                         # haven't started on the HSTS yet
M       h1/models/h1susmc2.mdl                         #     |
M       h1/models/h1susmc3.mdl                         #     V


Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 05:21, Thursday 31 October 2013 (8328)
The front-end models are running; however, data shipping to the data concentrator is not working (or is only partially working).
What is needed is to restart the mx_stream processes on each front-end.
* There should be a script 'restart_all_mxstreams.sh' in /opt/rtcds/lho/h1/target/h1dc0. If you log into the boot server as 'controls' you should be able to run this script smoothly.
* All this script (should) do is ssh onto each front-end, then do /etc/init.d/mx_stream stop, /etc/init.d/mx_stream start.
* You can do this manually on each front-end to see if it fixes the problem; a sketch of the manual loop is below.
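For illustration only, here is a minimal sketch of the manual restart loop described above. The front-end hostnames are placeholders, not the actual site list, and whether extra privileges are needed depends on the local init-script setup:

#!/bin/bash
# Sketch: restart mx_stream on each front-end by hand.
# FRONTENDS is a placeholder list -- substitute the real front-end hostnames.
FRONTENDS="h1sush56 h1seib3 h1susex"

for fe in $FRONTENDS; do
    echo "Restarting mx_stream on $fe"
    ssh controls@$fe '/etc/init.d/mx_stream stop; /etc/init.d/mx_stream start'
done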

[And yes, we need more complete info and helpful docs that are consistent at both sites.]
H1 CDS
patrick.thomas@LIGO.ORG - posted 19:16, Wednesday 30 October 2013 (8324)
Restarted Conlog after setting paths for caRepeater
I noticed that Conlog was not reconnecting to some PEM channels after their IOCs were restarted.

One hypothesis is that it may need to have the caRepeater running, which it has not been.

I stopped Conlog around 18:12. I set the environment variable $PATH to include the path to the caRepeater binary. I restarted Conlog. I got an error not finding a library needed by caRepeater. I stopped Conlog and set the environment variable $LD_LIBRARY_PATH to include the path to the library. I restarted Conlog and it ran without any further errors. These environment variables are set in /home/controls/bashrc_import which is sourced by /home/controls/.bashrc on h1conlog.
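For reference, a minimal sketch of the kind of lines now sourced from /home/controls/bashrc_import; the EPICS install paths shown are assumptions for illustration, not the actual locations on h1conlog:

# Sketch only -- paths are illustrative, not the actual h1conlog values.
# Make the caRepeater binary findable:
export PATH=${PATH}:/opt/epics/base/bin/linux-x86_64
# Make the shared library that caRepeater needs findable:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/epics/base/lib/linux-x86_64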

It remains to be seen if this fixes the problem.
H1 SEI
hugh.radkins@LIGO.ORG - posted 16:49, Wednesday 30 October 2013 (8321)
WBSC9 ETMx SEI HEPI Actuators--5 of 8 connected

Should get the remaining Actuators attached tomorrow morning.

LHO General
gerardo.moreno@LIGO.ORG - posted 16:35, Wednesday 30 October 2013 (8320)
Operator Summary

Today's activities:
- Jim W, to LVEA, lock HEPI HAM02.
- Sheila, RefCav locking lesson to operator.
- Richard M, to LVEA, cable work under HAM02.
- Apollo crew, to LVEA, move IOT2L.
- Apollo crew, door prep work, HAM02.
- Betsy, LVEA, SR2 work.
- Jim W, to LVEA, HEPI lock HAM03.
- Apollo crew, to LVEA, door prep work, HAM03.
- Filiberto, LVEA and X-end, ESD measurements.
- Mitchel & Thomas, MCB assembly work, West bay area.
- Hugh and Greg, X-end, HEPI work.
- Jim B and Dave B, to Y-End, troubleshooting.

Vendors:
- Porta potty service.

H1 PSL (PSL)
gerardo.moreno@LIGO.ORG - posted 16:35, Wednesday 30 October 2013 (8319)
H1 PSL Changes

(Sheila, Gerardo)

Sheila showed me how to lock the reference cavity.
One change was made to get the system to behave: Sheila lowered the resonant threshold from 0.9 V to 0.5 V.
The reference cavity was able to lock manually, but now it appears misaligned when locked.
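For context, a threshold change like this is just an EPICS channel-access write; the channel name below is a placeholder for illustration, not the actual PSL record:

# Sketch only -- hypothetical channel name, not the real PSL threshold record.
caput H1:PSL-REFCAV_LOCK_THRESHOLD 0.5    # lowered from 0.9 V to 0.5 V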

H1 CDS
david.barker@LIGO.ORG - posted 16:03, Wednesday 30 October 2013 - last comment - 18:04, Wednesday 30 October 2013(8318)
h1pemmx, testing 2.8 code.

Jim, Cyrus, Dave

Rolf added a new feature to RCG2.8 to permit a front end to run without an IRIG-B card (GPS time is obtained via EPICS Channel Access from a remote IOC). We are in the process of testing this on h1pemmx.

To prepare for the test, I added the line "remoteGPS=1" to the CDS block on h1ioppemmx. I added a cdsEzCaRead part, reading the GPS time from the DAQ data concentrator on channel H1:DAQ-DC0_GPS. I svn updated the trunk area, and compiled h1ioppemmx and h1pemmx against the latest trunk.

Test 1: keep the IRIG-B card in the computer and restart the IOP model several times. We noticed that the offset between the GPS time from IOPPEMMX and its reference DC0 changes from restart to restart, but the two stay synchronized to within a second.
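As an illustration of that kind of check (not necessarily how it was done), the two GPS readings can be compared from the command line with channel-access tools; only H1:DAQ-DC0_GPS is named above, so the IOP-side channel below is a placeholder:

# Sketch only: compare the data concentrator GPS time with the IOP's remote-GPS reading.
caget -t H1:DAQ-DC0_GPS
caget -t H1:FEC-IOPPEMMX_GPS    # placeholder name for the IOP GPS channel -- substitute the real one
# Repeating this across IOP restarts should show agreement to within a second.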

We are in the process of Test 2, removing the IRIG-B card from h1pemmx. At the same time, Cyrus is reconfiguring the X-ARM switching systems for the FrontEnd and DAQ switches, which will permit replacement of two netgear switches at MX with media converters. The use of full switches to support a single front-end computer is obviously wasteful.

On completion of today's IRIG-B tests, we will re-install the IRIG-B card and reload the 2.7.2 version of the IOP and PEM code. While this test is in progress, the DAQ status from MX is 0x2000 and its data is bad.

Comments related to this report
david.barker@LIGO.ORG - 17:26, Wednesday 30 October 2013 (8322)

The test is completed; the pemmx front end has been reverted to its original state (IRIG-B card installed, 2.7.2 code running).

The test was a SUCCESS: the IOP ran without an IRIG-B card. This is indicated by a ZERO time in the IRIG-B diagnostics on the GDS_TP MEDM screen (see attached).

One problem was found with the replacement of the DAQ network switch with a media converter: it caused the DAQ data from all the other front ends which share the second 10 GigE card on h1dc0 to go bad. We tried to restart the mx_streamer on h1pemmx, but that only made matters worse and all the FEs' data went bad for a few seconds. I'll leave it to Cyrus to add more details. We reinstalled the netgear switch for the DAQ, but kept the media converter for the FE network as this showed no problems.

Images attached to this comment
cyrus.reed@LIGO.ORG - 18:04, Wednesday 30 October 2013 (8323)

The media converters I tried are bridging media converters, which means they act like a small 1-port switch with 1 uplink.  I went with these because when the computer is powered off, the embedded IPMI interface negotiates at 100 Mbps, not 1 Gbps, and a standard media converter will not negotiate this rate (it is fixed to the fiber rate).  A bridging converter therefore maintains access to the IPMI management interface on the front-end computer at all times, not just when the computer is booted and connected to the switch at 1 Gbps.  However, the switching logic in these media converters does not support Jumbo Frames, which, when used on the DAQ network, corrupts the Open-MX data.  I've confirmed this by looking at the documentation again and comparing it to a non-bridging version.  So, I'll need to obtain some additional non-bridging media converters for use on the DAQ network; these should work better for this application as they are strictly Layer 1 devices with no Layer 2 functionality.
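As a side note, a quick way to check whether jumbo frames are in play on a given DAQ interface is to look at its MTU; the interface name below is a placeholder:

# Sketch only -- "eth1" is a placeholder for the front end's DAQ-network interface.
ip link show eth1 | grep -o 'mtu [0-9]*'
# An MTU around 9000 means jumbo frames are configured; a converter that cannot pass
# frames that large will corrupt the Open-MX traffic.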

H1 CDS
david.barker@LIGO.ORG - posted 15:54, Wednesday 30 October 2013 (8317)
script0 froze this morning at 6am, needed a reboot

Jim, Dave.

Eagle-eyed Kyle noticed that the MEDM screen snapshots stopped working at 6 am this morning. script0 was pingable, but we could not ssh onto it. Its console was frozen, and it had to be rebooted. We restarted the MEDM screen snapshot program.

H1 CDS
david.barker@LIGO.ORG - posted 15:52, Wednesday 30 October 2013 (8316)
h1susey restarts

Jim and Dave

We restarted the user and IOP models on h1susey several times while investigating the DAC status bits (a follow-on from yesterday's ITMX,Y issue). We did not find any problems at EY; the status bits are consistent with the AI units being powered down. We wanted to try powering them up, but they are missing the +15V supply.

H1 AOS
douglas.cook@LIGO.ORG - posted 15:15, Wednesday 30 October 2013 (8315)
ETMx alignment
Jason and I cut away the Ameristat from around the legs of the tripods and realigned the instruments to have them ready in the AM.

I added new scribe lines to the ACB targets to represent the new horizontal centerlines. 
H1 INS (INS, SEI)
jim.warner@LIGO.ORG - posted 14:42, Wednesday 30 October 2013 (8314)
HAM's 2 & 3 HEPI locked

This morning at ~9:30 I locked HAM2 HEPI. At ~1:30 pm, HAM3 HEPI got a similar treatment. Offsets from the floating position for both were about 100 cts (100 cts / [(7.87 V/mm) * (1638 cts/V)] ≈ 0.0003"), which is what Hugh reported he shot for when locking.
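As a quick check of the quoted conversion, using the scale factors given above (7.87 V/mm, 1638 cts/V, 25.4 mm/in):

# 100 cts -> mm -> inches
echo "100 / (7.87 * 1638) / 25.4" | bc -l    # ~0.0003 in (about 0.008 mm)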

LHO General (PEM)
patrick.thomas@LIGO.ORG - posted 13:44, Wednesday 30 October 2013 (8313)
Increased dust in beer garden
Started around 9:00 AM on October 29. Plot attached.
Non-image files attached to this report
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 12:04, Wednesday 30 October 2013 (8312)
Second Loop In Air Cables Installed On HAM2
R. McCarthy, P. King

The in-air cables (D1300464) used for the outer loop power stabilisation photodetector array were installed (see
attached pictures).  Looking at the flange from left to right: on the left-hand-side subflange the cables S1301012
and S1301013 were installed, and on the right-hand-side subflange the cables S1301014 and S1301015 were installed.  These
were attached to the black coloured mating pieces and are face-to-face flush, as shown in the second attached picture.
Images attached to this report