H1 General (PSL)
edmond.merilh@LIGO.ORG - posted 14:26, Friday 06 May 2016 (27050)
DBB Scans w/ISS RPN scan

Included is the RPN image from the injection-locked HPO for comparison. The 10 Hz comb feature that was present in the frequency scan from Apr 29th is gone. The pointing seems to be slightly out of spec, and I believe the mode scan looks relatively OK.

Images attached to this report
Non-image files attached to this report
H1 TCS (ISC)
nutsinee.kijbunchoo@LIGO.ORG - posted 14:09, Friday 06 May 2016 (27047)
All rotation stage calculator calibrations as of May 6th

PSL might need another tweak after the HPO work is done. I noticed the maximum power changes on a day-to-day basis.

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 12:42, Friday 06 May 2016 (27048)
CDS model and DAQ restart reports, Wednesday 27th April - Thursday 5th May 2016

model restarts logged for Thu 05/May/2016 ISC and SUS-PI model work, h1tw0 rebuild, some unexpected fw restarts
2016_05_05 11:48 h1iscex
2016_05_05 11:50 h1iscey
2016_05_05 12:00 h1omc
2016_05_05 12:00 h1omcpi
2016_05_05 12:02 h1pemex
2016_05_05 12:02 h1susetmxpi

2016_05_05 12:06 h1broadcast0
2016_05_05 12:06 h1dc0
2016_05_05 12:06 h1fw0
2016_05_05 12:06 h1fw1
2016_05_05 12:06 h1nds0
2016_05_05 12:06 h1nds1
2016_05_05 12:06 h1tw1

2016_05_05 12:56 h1pemex
2016_05_05 12:57 h1broadcast0
2016_05_05 12:57 h1dc0
2016_05_05 12:57 h1fw0
2016_05_05 12:57 h1fw1
2016_05_05 12:57 h1nds0
2016_05_05 12:57 h1nds1
2016_05_05 12:57 h1tw1

2016_05_05 14:52 h1broadcast0
2016_05_05 14:52 h1dc0
2016_05_05 14:52 h1fw0
2016_05_05 14:52 h1fw1
2016_05_05 14:52 h1nds0
2016_05_05 14:52 h1nds1

2016_05_05 14:52 h1susetmxpi
2016_05_05 14:52 h1tw1
2016_05_05 15:06 h1fw0
2016_05_05 15:13 h1fw0
2016_05_05 15:23 h1fw0

2016_05_05 16:35 h1tw0
2016_05_05 17:06 h1tw0

model restarts logged for Wed 04/May/2016 No restarts reported

model restarts logged for Tue 03/May/2016 ALL SYSTEMS RESTARTED. RCG upgrade to 3.0.2. Front end and DAQ upgrade.

model restarts logged for Mon 02/May/2016 No restarts reported

model restarts logged for Sun 01/May/2016 No restarts reported

model restarts logged for Sat 30/Apr/2016 fw1 instability
2016_04_30 04:33 h1fw1
2016_04_30 06:24 h1fw1
2016_04_30 09:03 h1fw1
2016_04_30 11:26 h1fw1
2016_04_30 19:14 h1fw1
2016_04_30 19:44 h1fw1
2016_04_30 21:03 h1fw1
2016_04_30 21:53 h1fw1

model restarts logged for Fri 29/Apr/2016 No restarts reported

model restarts logged for Thu 28/Apr/2016 fw1 and nds1 instability
2016_04_28 00:34 h1fw1
2016_04_28 04:34 h1fw1
2016_04_28 04:54 h1fw1
2016_04_28 05:12 h1fw1
2016_04_28 06:05 h1fw1
2016_04_28 07:13 h1fw1
2016_04_28 07:43 h1fw1
2016_04_28 07:56 h1fw1
2016_04_28 08:02 h1fw1
2016_04_28 08:34 h1fw1
2016_04_28 08:54 h1fw1
2016_04_28 16:28 h1nds1
2016_04_28 16:29 h1nds1
2016_04_28 16:30 h1nds1

model restarts logged for Wed 27/Apr/2016 fw0+1 unstable. OMC and SUS PI IPC model work
2016_04_27 08:33 h1fw1
2016_04_27 09:53 h1fw1
2016_04_27 11:13 h1fw1
2016_04_27 11:30 h1fw1

2016_04_27 11:59 h1omcpi
2016_04_27 12:01 h1dc0
2016_04_27 12:01 h1susitmpi
2016_04_27 12:03 h1broadcast0
2016_04_27 12:03 h1fw0
2016_04_27 12:03 h1fw1
2016_04_27 12:03 h1nds0
2016_04_27 12:03 h1nds1
2016_04_27 12:03 h1tw1

2016_04_27 13:06 h1fw0
2016_04_27 14:33 h1fw0
2016_04_27 17:54 h1fw1
2016_04_27 20:33 h1fw1

H1 GRD
jameson.rollins@LIGO.ORG - posted 11:51, Friday 06 May 2016 - last comment - 12:01, Wednesday 11 May 2016 (27046)
minor guardian update to fix minor log viewing issue

I just pushed a minor upgrade to guardian to fix a small issue with the guardlog client when following node logs.  The new version is r1542.

The client was over-buffering stream data from the server, so log data was not being output in a timely manner. This should be fixed in the version I just pushed.
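For illustration, the fix amounts to flushing each line to the terminal as it arrives rather than letting the runtime buffer the output. A minimal sketch of the pattern (hypothetical names, not the actual guardlog internals):

import sys

def follow_stream(stream, out=sys.stdout):
    """Relay log lines from the server to the terminal as they arrive."""
    # Reading line-by-line and flushing after every write avoids the
    # over-buffering stall described above.
    for line in iter(stream.readline, ''):
        out.write(line)
        out.flush()  # push each line out immediately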

As always, make sure you have a fresh session to get the new version:

jameson.rollins@operator1:~ 0$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
jameson.rollins@operator1:~ 0$ 

Comments related to this report
sheila.dwyer@LIGO.ORG - 17:02, Tuesday 10 May 2016 (27099)
sheila.dwyer@operator1:~/StripTools$ lockloss select
usage: guardlog [-h] [-a TIME] [-b TIME] [-c TIME] [-d S] [-o H] [-t] [-n]
                [-u | -l | -g | -r | -p] [-x]
                NODE [NODE ...]
guardlog: error: unrecognized arguments: --dump
Traceback (most recent call last):
  File "/ligo/cds/userscripts/lockloss", line 408, in <module>
    args.func(args)
  File "/ligo/cds/userscripts/lockloss", line 259, in cmd_select
    selected = select_lockloss_time(index=args.index, tz=args.tz)
  File "/ligo/cds/userscripts/lockloss", line 140, in select_lockloss_time
    times = list(get_guard_lockloss_events())[::-1]
  File "/ligo/cds/userscripts/lockloss", line 107, in get_guard_lockloss_events
    output = subprocess.check_output(cmd, shell=True)
  File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'guardlog --dump --today ISC_LOCK' returned non-zero exit status 2
sheila.dwyer@operator1:~/StripTools$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
sheila.dwyer@operator1:~/StripTools$ 
It seems like there is a problem accessing the guardian logs again; I wonder whether it's related to this update. I don't know if anyone has tried to look at locklosses since the update.
 
jameson.rollins@LIGO.ORG - 07:25, Wednesday 11 May 2016 (27110)

Sorry, Sheila.  This issue has been fixed now.  Just needed to tweak the lockloss script to account for some updated guardlog arguments.
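For the record, the failure was just an argument mismatch: lockloss shells out to guardlog, and the old --dump flag is no longer recognized by the new version. A minimal sketch of the pattern (illustrative only; the exact flags must track the installed guardlog version):

import subprocess

def get_guard_log(node):
    """Fetch today's log for a guardian node via the guardlog CLI."""
    # Passing a retired flag (like the old --dump) makes argparse exit
    # with status 2, which surfaces as the CalledProcessError seen in
    # the traceback above.
    cmd = ['guardlog', '--today', node]  # a list avoids shell=True
    try:
        return subprocess.check_output(cmd)
    except subprocess.CalledProcessError as e:
        raise RuntimeError('guardlog exited with status %d; flags out of date?' % e.returncode)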

sheila.dwyer@LIGO.ORG - 12:01, Wednesday 11 May 2016 (27117)

Yes, it's working, thank you Jamie.

H1 CDS
thomas.shaffer@LIGO.ORG - posted 11:10, Friday 06 May 2016 (27045)
h1guardian0 rebooted

Jim rebooted h1guardian0 at 10:53 local. I was hoping this would help with the logging issues that we have had here (alog 26965), but no luck.

LHO VE
kyle.ryan@LIGO.ORG - posted 10:13, Friday 06 May 2016 - last comment - 10:18, Friday 06 May 2016 (27043)
0930 hrs. local -> Heating portion of Vertex RGA bakeout complete -> Pumps running until further notice
Notes to my future self -

Soaked @ 120C - 150C for 60 hours.  

Static power to maintain zones at temperature = 405 watts 

Zone specifics 

Turbo + 1.5" valve -> VAR = 38%, 0.18 amp and 200 ohm 
1.5" valve to reducing Tee -> VAR = 44%, 0.48 amp and 136 ohm
2.5" to 1.5" reducer -> VAR = 44%, 0.9 amp and 70 ohm 
2.5" to 1.5" reducing Tee -> VAR = 44%, 1.27 amp and 48 ohm 
Two 1.5" Tee assembly -> VAR = 38%, 0.39 amp and 140 ohm
RGA #1 -> VAR = 44%, 0.88 amp and 70.5 ohm 
RGA #2 + elbow -> VAR = 44%, 0.9 amp and 70 ohm 
2.5" valve -> VAR = 44%, 0.92 amp and 70 ohm 
N2 leak valve -> VAR = 38%, 0.40 amp and 141 ohm 
Kr leak valve -> VAR = 38%, 0.40 amp and 137 ohm 
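As a sanity check, summing I^2*R over the zones reproduces the static power figure quoted above. A quick sketch (values copied from the list):

# (current A, resistance ohm) per zone, copied from the list above
zones = [
    (0.18, 200), (0.48, 136), (0.90, 70), (1.27, 48), (0.39, 140),
    (0.88, 70.5), (0.90, 70), (0.92, 70), (0.40, 141), (0.40, 137),
]
total = sum(i**2 * r for i, r in zones)
print('%.0f W' % total)  # ~408 W, consistent with the ~405 W quoted above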
Comments related to this report
kyle.ryan@LIGO.ORG - 10:18, Friday 06 May 2016 (27044)
Here "VAR" is used as an abbreviation for VARIAC, not "VOLT-AMP REACTANCE".
H1 General (PSL)
edmond.merilh@LIGO.ORG - posted 10:10, Friday 06 May 2016 (27042)
PSL Weekly Report - Past 10 day trends

The plots reflect ongoing work being performed in terms of power recycling and environmental incursions.

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 09:15, Friday 06 May 2016 (27041)
unused RFM channel retired, new RFM channel used for SUS-PI

Tega, Ross, Jim, Dave:

Late entry for Thursday work. The unused RFM channels H1:ASC-[x,y]_TR_A_SUM_RFM were removed from h1isce[x,y]. This closes WP 5868.

The h1omcpi model was changed to export the 4 demodulated pairs of I,Q signals over shared memory on h1lsc0. The h1omc model was modified to mux these 8 channels into one and send it out on both the X-Arm and Y-Arm RFM networks. The h1pemex model was changed to receive the X-Arm RFM channel, demux it back to 8 signals, then send these out over the EX Dolphin fabric. The h1susetmxpi model was changed to receive and process these 8 signals.

Due to top-naming issues with pemex and susetmxpi, several rounds of model and DAQ restarts were needed.

We tested the mux-demux by applying large offsets to each channel one at a time and seeing the corresponding channel change at the end station. Note that the DEMUX OFFSET input for the 8-chan C code is set to 7.
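For anyone repeating the checkout, a rough sketch of that test (the channel names below are hypothetical placeholders, not the real H1 channels; pyepics assumed):

import time
import epics  # pyepics

OFFSET = 1000.0  # large step, easy to spot at the end station

for i in range(8):
    # drive one muxed input at a time...
    epics.caput('H1:OMC-PI_MUX_%d_OFFSET' % i, OFFSET)
    time.sleep(1.0)  # let the step propagate over the RFM/Dolphin path
    # ...and confirm only the matching demuxed output moves
    print('demux %d reads %s' % (i, epics.caget('H1:SUS-ETMX_PI_DEMUX_%d_OUT' % i)))
    epics.caput('H1:OMC-PI_MUX_%d_OFFSET' % i, 0.0)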

H1 ISC
keita.kawabe@LIGO.ORG - posted 18:01, Thursday 05 May 2016 (27038)
POPX work today

Filiberto fixed the working 36/45MHz WFS, S/N 1300511. He extracted pins from the 5-coax connector, rotated the connector shell by 180 degrees, and inserted the pins again such that the connector looks like the mirror image of its old self. Since it's the mirror of the mirror, now it's correct.

I rerouted the cables inside the ISCT1 so the connections from outside of the table are still intact.

Connected the differential-to-single-ended converter outputs 3 and 4 to the MCL PZT driver.

ISC R1, position 2, channel (3, 4) = H1:ISC-(434, 435) cable = MCL controller (X, Y) channel on ISCT1.

DAC works.

H1 PSL
nutsinee.kijbunchoo@LIGO.ORG - posted 16:52, Thursday 05 May 2016 (27036)
Weekly PSL Chiller Reservoir Top-Off

+175 ml (accidentally; should have been < 150 ml). Jeff B. filled 200 ml three days ago.

H1 PSL (PSL)
peter.king@LIGO.ORG - posted 16:51, Thursday 05 May 2016 (27034)
PSL status
Started the high power oscillator this morning.  The power sum came up to 144 W with the corona aperture in and the external shutter open.  With the external shutter closed, the output power is ~152 W (there are some intervening optics and polarisers in the way, which accounts for the difference).  With the corona aperture removed, the output power was 148 W.

The pump current was decreased from 50.6 A to 50.0 A to put the operating point in the middle of the second stability range.

SeededFreeRunning1.png shows the beam profile of the seeded but not injection-locked high power oscillator.

rpn2.jpg is the measured relative power noise of the injection-locked high power oscillator.  This should be compared with the diagnostic breadboard measurement to be performed tomorrow by Ed.  Of note is the peak at ~5 kHz that is in the dark noise spectrum of the photodiode; I am not sure what this is due to.  There are also a number of peaks in the power noise spectrum around 13 kHz.  These peaks were not affected by changing the injection locking servo gain.

    All the PSL servos require tweaking at this stage.

 If the laser trips out for any reason, please do not attempt to resuscitate it yourself! 



Jason, Peter
Images attached to this report
H1 DAQ (CDS)
james.batch@LIGO.ORG - posted 16:45, Thursday 05 May 2016 (27035)
The h1tw0 trend writer is running.
The trend writer on h1tw0 has been started after a lengthy absence due to equipment repair.  
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 16:23, Thursday 05 May 2016 (27033)
Ops Day Shift Summary

All time in UTC

13:47 Chris S. working on beam tube enclosure sealing, approximately 1/4 to 1/2 of the way toward Mid-X. Morning work.

15:19 Christina to LVEA

15:51 Jason heading to PSL

16:10 Joe to LVEA

16:15 Cheryl making measurements on IMs.

17:07 Hugh and Jim to EX.

17:21 Ed to LVEA

17:37 Ed back

17:40 Travis to LVEA to the racks near HAM2,3.

17:53 Manny to End stations looking for extension cords.

17:56 Joe out

18:13 Dave restarted the LSC and ISC models

18:17 Hugh to LVEA retrieving equipment near HAM2.

18:24 Jenne and David start cutting floors in the LVEA

18:31 Hugh out

18:51 Hugh to EY making some measurements

18:57 Jason and Peter out for lunch

19:04 DAQ restart (EX)

19:54 DAQ restart (PEM EX)

20:23 Manny out

20:46 Manny to LVEA

21:06 Fil removing electronics box from ISCT1.

           Richard and Gerardo to LX and the beer garden to look at VAC channels. The BSC7 and BSC8 high pressure alarms were their doing.

21:25 Chris to EX to unload a pile of lumber.

21:47 DAQ and SUS ETMX PI model restart

22:30 Fil out

22:40 Peter and a group of operators to LVEA for Transition training.

23:00 Peter et al out.

Keita to ISCT1 sometime between 22:00-23:00

 

NOTE TO ALL: YOU MUST LET THE OPERATOR KNOW IF YOU ARE GOING TO THE END STATIONS. You are potentially causing the BRS to ring up just by being there. If you can't find the operator on shift, let Jim know.

H1 GRD (SEI)
thomas.shaffer@LIGO.ORG - posted 11:47, Thursday 05 May 2016 (27031)
Added new ISI_OFFLINE state to SEI managers

Added a new ISI_OFFLINE state to the SEI manager guardians that has the ISIs in Ready and HEPI isolated. This state is mainly for use while the SEI team is testing, not for normal operation. To get to ISI_OFFLINE I added a couple of transition states from DAMPED that will turn off the ISI damping loops, and then turn them back on before entering DAMPED again (HAM and BSC graphs attached; a sketch of the state pattern follows below).

Tested yesterday on ITMX and HAM5. Loaded the new code into all of the managers today.
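A minimal sketch of the state pattern, assuming the usual guardian GuardState conventions (the damping helper is a hypothetical placeholder, not the actual SEI manager code):

from guardian import GuardState

def turn_isi_damping(on):
    """Placeholder for the real damping-loop switch (hypothetical)."""
    pass

class TURN_OFF_DAMPING(GuardState):
    request = False  # transition state, not directly requestable

    def main(self):
        turn_isi_damping(False)
        return True  # done; the graph proceeds to ISI_OFFLINE

class ISI_OFFLINE(GuardState):
    """ISIs in Ready, HEPI isolated; for SEI team testing only."""
    def run(self):
        return True  # nothing to do; hold here until requested away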

Images attached to this report
H1 CAL (CAL)
travis.sadecki@LIGO.ORG - posted 11:45, Thursday 05 May 2016 (27029)
LHO PCal End X and End Y Calibration

Darkhan T., TJ S., Travis S., Evan G.

Yesterday, we took PCal end station measurements for both end stations.  See attached pics if you are interested in the raw data and timestamps.  Otherwise, see T1500129 and T1500131 for the summary reports for each end station.

A quick glance at the optical efficiency of X end seems to indicate that the known clipping issue is getting worse. 

Images attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 10:31, Thursday 05 May 2016 - last comment - 19:59, Thursday 05 May 2016 (27027)
unexplained vacuum gauge signal stepping
After Tuesday's activities, noticed a step change in pirani gauge signals. Attached is a plot of PT100 (on HAM 1) and PT140 (on BSC4). Note: the PT100 CC gauge keeps turning on and off due to the pirani set point and the stepping.

PT120 also shows a step in signal. Yesterday Richard & Gerardo pulled the leads on PT120 while measuring the voltage. The voltage read a higher value when lifted from the rack.

The PT140 pirani trend over 15 days shows a major step from 4/26 (when the Beckhoff was installed) to 5/3 (when power was switched and the system rebooted).
Images attached to this report
Comments related to this report
michael.zucker@LIGO.ORG - 14:24, Thursday 05 May 2016 (27032)

Sounds like the input impedance of the Beckhoff ADC is lower than that of the original VME card ADC. These form a voltage divider with the gage head's output impedance. I/O impedances should be listed in the respective instrument datasheets. There may be a jumper/selection available.

Beyond recalibrating the EPICS records, it would be good to confirm that the gage heads are happy driving whatever impedance they see at full scale.
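The size of the effect is easy to estimate. A quick sketch with placeholder impedances (read the real values off the datasheets):

def divider_reading(v_gauge, z_out, z_in):
    """Voltage the ADC sees when the gauge output impedance z_out
    drives the ADC input impedance z_in (simple resistive divider)."""
    return v_gauge * z_in / (z_in + z_out)

# Placeholder numbers: a 10 kohm gauge output reads ~9% low into a
# 100 kohm ADC input, but only ~1% low into a 1 Mohm input.
for z_in in (100e3, 1e6):
    print(divider_reading(1.0, 10e3, z_in))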

 

kyle.ryan@LIGO.ORG - 19:59, Thursday 05 May 2016 (27040)
Good idea Chandra.  I hadn't even considered looking at the piranis!
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 11:39, Wednesday 04 May 2016 - last comment - 18:07, Thursday 05 May 2016 (26997)
RCG upgrade from 2.9.6 to 3.0.2

WP5857 Keith, Rolf, Jim, Dave:

Tuesday 3rd May we upgraded the CDS front end systems and the DAQ to RCG3.0.2

The order of build-install was:

  1. Clean out H1.ipc file
  2. Compile all models (105 models)
  3. Install all models into target area
  4. Take down DAQ and recompile all DAQ code, restart DAQ
  5. Reboot one SUSAUX FEC to test
  6. Reboot all non-Dolphin FECs (SUSAUX, PSL, PEM-MIDS)
  7. Put Guardian/IFO into safe state
  8. Stop all Dolphin FEC models, reboot each computer. h1psl0 and h1seiex lost comms with their I/O chassis and needed a power cycle.

During the make installWorld, h1fs0 developed a major NFS problem and stopped serving the /opt/rtcds file system. We rebooted h1fs0, but the NFS service did not start correctly and clients could neither unmount nor mount the file system. We restarted the kernel-nfs-server daemon and the NFS clients remounted correctly.

I need to change the SUS ITMY and ETMY models now that the HWWD part is synced to the hardware regarding signal levels (I removed the temporary NOT inverters on the binary signals).

16:00 h1iscex developed the connTrack Table Full error; new network connections were not permitted. We had to reboot this computer and disrupt the EX Dolphin fabric, requiring all EX models to be restarted (except susaux).

Vacuum controls Beckhoff gauges on BSC7,8 were moved from their temporary slow controls Beckhoff connection to the LX vacuum controls system. This required a name change (H1->H0); minute trends were changed on the DAQ.

The upgrade built the ISI-HAMs with the latest isihammaster.mdl file, but team SEI needed to back out this latest change, so the common file was reverse-merged to the previous version and all ISI-HAM models were rebuilt and restarted.

CAL system was modified to remove the Blind Injection data path (h1calcs and h1calex).

Big DAQ restart at the end of the afternoon: new VE EDCU list, new slow controls EDCU list, new dust monitor EDCU list, new susitmy/etmy HWWD code, new ISI-HAM models, new CAL models.

Comments related to this report
keita.kawabe@LIGO.ORG - 17:52, Thursday 05 May 2016 (27037)

As of now the CDS overview shows some IPC errors, all related to h1iscex: in H1HPIETMX (H1:LSC-X_TIDAL_HEPIETMX-IPC), H1ASC (a bunch of ALS ASC signals), and H1OAF (H1:SEI-EX_2_OAF_MUX_RFM).

Seems like the errors started at about 18:59, went away after 3 minutes, then came back at about 19:53 and never went away (all times UTC).

Images attached to this comment
keita.kawabe@LIGO.ORG - 18:07, Thursday 05 May 2016 (27039)

Hmm, now they're gone.

Displaying reports 56721-56740 of 83045.Go to page Start 2833 2834 2835 2836 2837 2838 2839 2840 2841 End