Displaying reports 56621-56640 of 83007.Go to page Start 2828 2829 2830 2831 2832 2833 2834 2835 2836 End
Reports until 09:55, Wednesday 11 May 2016
H1 PSL
keita.kawabe@LIGO.ORG - posted 09:55, Wednesday 11 May 2016 (27112)
PSL monitor diode digital gains

The digital gains for the following PDs, which could be useful (unless nothing is connected), have been zero for at least 30 days. Their outputs seem to be recorded in the frames, so I set the gains to 1.

H1:PSL-OSC_PD_ISO_DC

H1:PSL-OSC_PD_INT_DC

H1:PSL-OSC_PD_BP_DC

H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:22, Wednesday 11 May 2016 (27111)
08:30 Meeting Minutes
SEI – Working on BRS and filter development
	 Developing Guardian configuration nodes for the end stations (WP #5877)

SUS – Running charge measurements on both ETMs

VAC – Continuing baking RGA in vertex
	   Several issues with vacuum gauges after yesterday's maintenance
	   Working on a grounding/power problem in LVEA

FMC – Beam tube sealing
	       
Comm: Waiting for the PSL ISS to be repaired

No safety meeting today.	
H1 General
cheryl.vorvick@LIGO.ORG - posted 21:59, Tuesday 10 May 2016 (27108)
Eve Ops Summary: locking, not making it to ENGAGE ASC

State of H1: reliably locking DRMI, losing lock around REDUCE CARM OFFSET MORE

 

Commissioners: Sheila

 

Activities:

FSS, when it needs help relocking, has shown a pretty consistent pattern of:

H1 ISC
sheila.dwyer@LIGO.ORG - posted 21:40, Tuesday 10 May 2016 (27107)
some locking attempts tonight

Cheryl, Sheila

Note: the high voltage for the EY ESD is probably still off, so we will need to go out and switch it on.  

Images attached to this report
LHO VE
patrick.thomas@LIGO.ORG - posted 19:44, Tuesday 10 May 2016 - last comment - 02:08, Wednesday 11 May 2016(27105)
Someone might want to watch CP5 and CP6
I would like to leave them on PID to see if they recover. I'm heading out.
Comments related to this report
kyle.ryan@LIGO.ORG - 21:00, Tuesday 10 May 2016 (27106)
I'll monitor a few times during the night
kyle.ryan@LIGO.ORG - 02:08, Wednesday 11 May 2016 (27109)
2109 hrs. local 
CP5 -> 27% open, 97% full @ ? psig exhaust 
CP6 -> 71% open, 89% full @ ? psig exhaust 

2200 hrs. local 
CP5 -> 100% open, 88% full @ 1.3 psig exhaust 
CP6 -> 24% open, 92% full @ 0.2 psig exhaust 

2240 hrs. local 
CP5 -> 44% open, 94% full @ 0.8 psig exhaust 
CP6 -> 42% open, 90% full @ 0.2 psig exhaust 

5/11/2016 
0200 hrs. local 
CP5 -> 6% open, 94% full @ 0.4 psig exhaust 
CP6 -> 27% open, 91% full @ 0.1 psig exhaust 
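The PID mode mentioned in this thread presumably adjusts the LLCV % open to hold the pump level near a setpoint. A minimal discrete PID sketch, for illustration only (the gains, setpoint, and clamping are assumptions, not the actual Beckhoff implementation):

```python
class PID:
    """Minimal discrete PID controller. Illustrative only; gains and
    structure are assumptions, not the real CP valve control code."""

    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # e.g. target % full
        self.dt = dt                      # update interval, seconds
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        """One control step: error -> P + I + D, clamped to 0-100% open."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(100.0, out))  # valve position limits
```

For example, with only a proportional gain, a level 2% below the setpoint commands the valve open in proportion to that error, which matches the qualitative behavior in the readings above (larger % open when the level is lower).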
LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 18:23, Tuesday 10 May 2016 - last comment - 18:57, Tuesday 10 May 2016(27103)
Beckhoff vacuum controls updates
I updated the software on all of the Beckhoff vacuum controls computers as per work permit 5875. Carlos and I took the opportunity to change the computer names at the same time. They are now h0vaclx, h0vacly, h0vacmr, h0vacmx, h0vacmy, h0vacex and h0vacey. I have attached the channel name changes that were made in the DAQ. I set the smoothing on all of the CP pump levels to 0.99. A lot of the cold cathode gauges have still not come back on. CP4, CP5 and CP6 are still in the process of recovering.
Non-image files attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 18:57, Tuesday 10 May 2016 (27104)
The following channels set the amount of smoothing on the reading of each CP level in milliamps (which is then converted to % full). They can range between 0 (no smoothing) to 1 (never changes). I set them all to 0.99.

H0:VAC-LY_CP1_LT100_PUMP_LEVEL_MA_SMOO
H0:VAC-LX_CP2_LT150_PUMP_LEVEL_MA_SMOO
H0:VAC-MY_CP3_LT200_PUMP_LEVEL_MA_SMOO
H0:VAC-MY_CP4_LT250_PUMP_LEVEL_MA_SMOO
H0:VAC-MX_CP5_LT300_PUMP_LEVEL_MA_SMOO
H0:VAC-MX_CP6_LT350_PUMP_LEVEL_MA_SMOO
H0:VAC-EY_CP7_LT400_PUMP_LEVEL_MA_SMOO
H0:VAC-EX_CP8_LT500_PUMP_LEVEL_MA_SMOO
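A smoothing factor between 0 (none) and 1 (never changes) suggests a one-pole exponential average. A minimal sketch of the assumed update rule (the exact Beckhoff implementation may differ):

```python
def smooth(prev, raw, factor):
    """One step of an assumed exponential smoothing of a raw mA reading.

    factor = 0 -> output tracks the raw reading instantly (no smoothing)
    factor = 1 -> output never changes
    """
    return factor * prev + (1.0 - factor) * raw
```

At 0.99 each new raw sample moves the smoothed value only 1% of the way toward the reading, which heavily suppresses noise at the cost of a slow response.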
LHO VE
kyle.ryan@LIGO.ORG - posted 17:17, Tuesday 10 May 2016 (27101)
Progress on VBOD
Kyle, Joe D. 

Today we re-installed the SRS RGA100 analyzer after having replaced its fried filaments and CDEM last Friday.  We leak checked the new joints but won't be able to test the basic functionality until tomorrow.  Assuming that everything works, it will still be "weeks" before VBOD can process bake loads, as the SRS RGAs are impossible to bake out effectively owing to near-zero conductance resulting from poor focus plate design - if anybody has an extra $20K in their budget, please send it our way.  The newer aLIGO-era RGAs pay for themselves in production uptime and the minimal man hours needed to futz with them.
LHO VE
kyle.ryan@LIGO.ORG - posted 17:10, Tuesday 10 May 2016 (27100)
RGA cal-gas issue
(see also https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27078) 

Today I inspected the "hang tags" on the N2 and Kr calibration gases at each of the currently installed aLIGO RGAs (Pfeiffer Prisma Plus, Y-end, X-end and Vertex).  It turns out that these are NOS (new old stock) calibration gases that were intended for but never used during the Beam Tube bake back in 1998.  Though I don't specifically remember, I must have decided to use these since they were the correct gases and leak rates needed for the RGAs and were unused, albeit old, and would save having to spend another $8,000.  This looks to have been a bad idea, as the factory fill pressures for the N2 bottles were sub-atmospheric, ~500 torr.  This is typical for "crimped capillary" leaks at the mid 10^-8 torr*L/sec leak rates and isn't an issue as long as their isolation valves never get opened at pressures > 500 torr, say at atmosphere.  The problem here is that these sat in a drawer for 15 years along with various other cal-gas bottles and were accessible to anybody wanting to turn a knob.  

It looks very much like the N2 cal-gas used at the Vertex had been opened to atmosphere at some point during its long storage.  I am currently re-baking the Vertex RGA after having exposed it to the N2 bottle last Friday, but while both cal-gas isolation valves are open.  It may be that the N2 bottle needs to be isolated and tagged-out, and we may be stuck with only a Kr leak at the Vertex.  I haven't reviewed the data but this might also explain the poor clean-up of the RGAs at the end stations.  
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 16:44, Tuesday 10 May 2016 - last comment - 10:36, Wednesday 11 May 2016(27098)
laser trip
Jenne told me this afternoon that the laser had tripped.  The crystal chiller indicated an error with flow sensor 1.  All the flow sensors in the chiller are the vortex style, have no moving parts, and are not likely to fail in the same manner as the old turbine ones.  My guess is that after ~31k hours of service, it's time to start scheduling a time when we switch the chiller out for the spare and send this unit back to Termotek for servicing, which they recommend be done annually.

    This is the second time in about 4 days that the laser has tripped because of this problem.
Comments related to this report
corey.gray@LIGO.ORG - 10:36, Wednesday 11 May 2016 (27114)

Tagged this with FRS#5469.

H1 CDS (SUS)
david.barker@LIGO.ORG - posted 16:28, Tuesday 10 May 2016 - last comment - 17:45, Tuesday 10 May 2016(27097)
CDS Tuesday Maintenance Summary, 10th May 2016

h1pemey

Dave:

I modified h1pemey to receive the muxed OMC DCPD demodulated channels from h1omc, and demux them into 8 dolphin channels. These will be received by the h1susetmypi model in a later mod. The h1pemey model now has the same mods as were applied to h1pemex last week.

Vacuum Controls

Carlos, Patrick, Dave:

Patrick made new vacuum controls changes today. Some LLCV channels changed names, and new INI files were generated. I copied the running minute trend files from old-name to new-name on both h1tw0 and h1tw1. I created the new edcu file, which was renamed from H0EDCU_VE.ini to H0EDCU_VAC.ini

Target files, autoburts and snap files are being renamed from h0veXX to h0vacXX.

Beckhoff computers were renamed from h0veXX to h0vacXX. The NTP settings were verified, but it looks like the clocks remain unsynced.

Slow Controls Beckhoff Computers

Carlos, Jim, Dave:

The NTP settings on h1ecat[c1,ex,ey] were changed from name based to IP address. This caused a resync of the time on these machines (they were 4 seconds out).

DAQ Restart

Dave:

DAQ restart to incorporate the h1pemey new model and the VAC new ini.

Comments related to this report
sheila.dwyer@LIGO.ORG - 17:45, Tuesday 10 May 2016 (27102)

For some reason the high voltage supply for the EX ESD was off this afternoon.  Tega and I turned it back on.  It looks like the same might be true for EY, but we haven't gone out there yet. 

H1 CDS
david.barker@LIGO.ORG - posted 16:15, Tuesday 10 May 2016 (27096)
/opt/rtcds file system no longer almost full

Ryan, Carlos, Jim, Dave:

The new ZFS file system on h1fs0 which is NFS exported as /opt/rtcds became 97% full recently. This is a fast (SSD) but relatively small (1TB) file system.  The reason for being fast is to prevent the front-end EPICS freeze-ups which we saw last year when the disk was busy syncing to its backup machine.

With this being a snapshotted ZFS file system, freeing up space is not as easy as removing and/or compressing files. First all the snapshots must be destroyed to free up the file blocks and allow file modifications.

Process on both h1fs0 (primary) and h1fs1 (backup) was:

 1 destroy all snapshots

 2 remove all old adl files in the rcg generated dirs /opt/rtcds/lho/h1/medm/model-name/

 3 tar and compress old build areas in /opt/rtcds/lho/h1/rtbuild*

 4 delete all non-rcg generated target_archive dirs (mostly from 2011-2012)

 5 delete all rcg generated target_archive dirs with the exception of the most recent four dirs

Steps 2 & 3 freed up only a modest amount of disk space; they were mainly to clean up clogged directories.

Steps 4 & 5: prior to doing this we verified these old target archives were backed up to tape. Step 5 freed the lion's share of disk space, about 600GB.

The end result is that free space went from 53GB to 685GB, and disk usage went down from 97% to 28%.

At the end of the procedure, a full ZFS-snapshot sync of h1fs0 to h1fs1 was done manually, and the hourly sync was turned back on.
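Step 1 above (destroying all snapshots) can be sketched as follows. This is illustrative only: the dataset name is an assumption, and the actual cleanup on h1fs0 was presumably done with the `zfs` CLI directly rather than a Python wrapper.

```python
import subprocess


def parse_snapshot_list(output):
    """Parse `zfs list -H -o name` output into a list of snapshot names."""
    return [line.strip() for line in output.splitlines() if line.strip()]


def destroy_all_snapshots(dataset, dry_run=True):
    """Destroy every snapshot under a ZFS dataset.

    With dry_run=True the commands are only printed, which is a sensible
    default given that snapshot destruction is irreversible.
    """
    output = subprocess.check_output(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", dataset],
        text=True,
    )
    for snap in parse_snapshot_list(output):
        cmd = ["zfs", "destroy", snap]
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)
```

Only once the snapshots are gone do the file blocks from steps 2-5 actually become free, since ZFS snapshots pin every block they reference.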

H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:14, Tuesday 10 May 2016 (27095)
Ops Day Shift Summary
Transition Summary:

Title:  05/10/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Work on PSL is ongoing. Tuesday Maintenance day  
Commissioning: 
Outgoing Operator: N/A
 
Activity Log: All Times in UTC (PT)

14:00 (07:00) Peter – In the H1 PSL enclosure
14:30 (07:30) Jeff – Replace network cable to PSL dust monitor #102
15:00 (08:00) Start of Tuesday maintenance day
15:00 (08:00) Christina & Karen – Cleaning in the LVEA
15:14 (08:14) Sprague – on site to remove pest control boxes
15:30 (08:30) Hugh – Restarting HAM models (WP #5874)
15:48 (08:48) Filiberto, Leslie, & Manny – Going to pull cables for the access system
15:49 (08:49) Hugh – Finished with model restarts
15:54 (08:54) N2 delivery at both end stations
16:00 (09:00) Patrick – Update Beckhoff code for vacuum controls computer (WP #5875)
16:04 (09:04) Kyle – Going to both end station VEAs. 
16:10 (09:10) Jim B – Restarting H0EPICS to apply security patch.
16:10 (09:10) Bubba – Replacing faulty check valve air comp at Mid-X (WP #5876)
16:15 (09:15) Peter – Out of the H1 PSL enclosure
16:17 (09:17) Filiberto – Power off Vac rack LX, & LY to check grounds
16:35 (09:35) Christina & Karen – Cleaning at End-X
16:35 (09:35) Chandra – Going to Mid-X to work on CP4
16:36 (09:36) Kiwamu – Going to HAM6 area to check S/Ns on electronics
16:41 (09:41) Kiwamu – Out of LVEA 
16:43 (09:43) Jim B. – Finished with H0EPICS restart
16:45 (09:45) Cintas – On site to service rugs and garb
16:50 (09:50) Ed – Going to End-Y to terminate cables in the VEA
16:59 (09:59) Bubba – Back from Mid-X
17:05 (10:05) Christina & Karen – Finished at End-X and are going to End-Y
17:06 (10:06) Kyle – Back from the End Stations
17:10 (10:10) Filiberto – Out of the LVEA
17:18 (10:18) Hugh – Going to both End Stations to check HEPI pumps
17:45 (10:45) Hugh – Finished with HEPI checks at End Stations 
17:50 (10:50) Hugh – Checking CS HEPI pumps
17:58 (10:58) Hugh – Finished with CS HEPI pump checks
18:04 (11:04) Nutsinee – Going to LVEA to take pictures near HAM4
18:13 (11:13) Nutsinee – Out of the LVEA
18:44 (11:44) Vendor on site to deliver drinks
18:54 (11:54) Gerardo – Going to End-Y VEA to measure for EtherCAT cables
19:29 (12:29) Gerardo – Back from End Stations
19:49 (12:49) Manny & Leslie – Going into the LVEA to install brackets for access sys on IO cabinets
20:27 (13:27) Bubba – Going into LVEA to look at N2 LTS system
20:28 (13:28) Gerardo – Going to Mid-Y to shim CP3 and CP4
20:58 (13:58) Bubba – Out of the LVEA
21:04 (14:04) Manny & Leslie – Finished in the LVEA
21:11 (14:11) Sheila & Jeff – Adjusted the ALS wrong polarization from 43% to 7%
22:10 (15:10) Sheila – Going to End-X
22:20 (15:20) Manny & Leslie – Going to both End Stations to install access controls bracket
22:39 (15:39) Chandra – Going into the LVEA to the walkover bridge
22:45 (15:45) Chandra – Out of the LVEA
22:48 (15:48) Chandra – Going to both end stations to make measurements
22:48 (15:48) Gerardo – Going into the LVEA – climbing around HAM6

End of Shift Summary:

Title: 05/10/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: 
Incoming Operator: Cheryl 

Shift Detail Summary: Maintenance Tuesday. Work continues on recovery of the PSL.   

H1 PSL (PSL)
peter.king@LIGO.ORG - posted 11:07, Tuesday 10 May 2016 - last comment - 13:59, Tuesday 10 May 2016(27090)
ISS photodiode box removed
I have removed the power stabilisation photodiode box from the table for
testing/debugging.  Please do not attempt to engage the power stabilisation.
Comments related to this report
jenne.driggers@LIGO.ORG - 13:59, Tuesday 10 May 2016 (27094)

Here is a time trace of the power into the IMC without an ISS.

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 18:20, Monday 09 May 2016 - last comment - 13:56, Tuesday 10 May 2016(27082)
another brief look at XARM IR

Sheila, Jenne, Cheryl

Xarm IR locking was difficult again today, and we made two observations. 

We have a more aggressive low pass filter in MC2 M3 than we really need, CLP300.  We locked the arm twice successfully with this filter off and CLP500 on instead. Screenshots are attached with the digital gain set to 0.15.  We didn't add this to the guardian yet, but I think it's a good idea (the third screenshot attached shows the trial configuration).

We are in no danger of saturating M3, but M2 saturates when we are trying to acquire.  This might be the root of some of our troubles acquiring the xarm.   

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 13:56, Tuesday 10 May 2016 (27093)

I've tested this new arrangement about 10 times in the last 15 minutes, and it does seem much better than what we had before, so I've put it in the guardian and committed the changes.  

H1 GRD
jameson.rollins@LIGO.ORG - posted 11:51, Friday 06 May 2016 - last comment - 12:01, Wednesday 11 May 2016(27046)
minor guardian update to fix minor log viewing issue

I just pushed a minor upgrade to guardian to fix a small issue with the guardlog client when following node logs.  The new version is r1542.

The client was overly buffering stream data from the server, which was causing data from the stream to not be output in a timely manner.  This issue should be fixed in the version I just pushed.

As always make sure you have a fresh session to get the new version:

jameson.rollins@operator1:~ 0$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
jameson.rollins@operator1:~ 0$ 

Comments related to this report
sheila.dwyer@LIGO.ORG - 17:02, Tuesday 10 May 2016 (27099)
sheila.dwyer@operator1:~/StripTools$ lockloss select
usage: guardlog [-h] [-a TIME] [-b TIME] [-c TIME] [-d S] [-o H] [-t] [-n]
                [-u | -l | -g | -r | -p] [-x]
                NODE [NODE ...]
guardlog: error: unrecognized arguments: --dump
Traceback (most recent call last):
  File "/ligo/cds/userscripts/lockloss", line 408, in
    args.func(args)
  File "/ligo/cds/userscripts/lockloss", line 259, in cmd_select
    selected = select_lockloss_time(index=args.index, tz=args.tz)
  File "/ligo/cds/userscripts/lockloss", line 140, in select_lockloss_time
    times = list(get_guard_lockloss_events())[::-1]
  File "/ligo/cds/userscripts/lockloss", line 107, in get_guard_lockloss_events
    output = subprocess.check_output(cmd, shell=True)
  File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'guardlog --dump --today ISC_LOCK' returned non-zero exit status 2
sheila.dwyer@operator1:~/StripTools$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
sheila.dwyer@operator1:~/StripTools$ 
It seems like there is a problem accessing the guardian logs again; I wonder whether it's related to this update or not.  I don't know if anyone had tried to look at locklosses since the update. 
 
jameson.rollins@LIGO.ORG - 07:25, Wednesday 11 May 2016 (27110)

Sorry, Sheila.  This issue has been fixed now.  Just needed to tweak the lockloss script to account for some updated guardlog arguments.
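The failure mode in Sheila's traceback (subprocess.check_output raising CalledProcessError when guardlog rejects a stale argument and exits non-zero) can be guarded against generically. A sketch, with an illustrative wrapper name (the actual lockloss fix was to update the guardlog arguments, not to add a wrapper like this):

```python
import subprocess


def run_or_none(cmd):
    """Run a shell command and return its output, or None on non-zero exit.

    check_output raises CalledProcessError whenever the command exits
    non-zero, e.g. when an argument like --dump is no longer recognized.
    """
    try:
        return subprocess.check_output(cmd, shell=True, text=True)
    except subprocess.CalledProcessError as exc:
        # exc.returncode and exc.output carry the exit status and any
        # captured stdout, useful for logging before giving up.
        return None
```

Catching the exception lets a caller like the lockloss script degrade gracefully instead of dying with a traceback.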

sheila.dwyer@LIGO.ORG - 12:01, Wednesday 11 May 2016 (27117)

Yes, it's working, thank you Jamie.  
