LHO VE (CDS, VE)
patrick.thomas@LIGO.ORG - posted 18:23, Tuesday 10 May 2016 - last comment - 18:57, Tuesday 10 May 2016(27103)
Beckhoff vacuum controls updates
I updated the software on all of the Beckhoff vacuum controls computers as per work permit 5875. Carlos and I took the opportunity to change the computer names at the same time. They are now h0vaclx, h0vacly, h0vacmr, h0vacmx, h0vacmy, h0vacex and h0vacey. I have attached the channel name changes that were made in the DAQ. I set the smoothing on all of the CP pump levels to 0.99. A lot of the cold cathode gauges have still not come back on. CP4, CP5 and CP6 are still in the process of recovering.
Non-image files attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 18:57, Tuesday 10 May 2016 (27104)
The following channels set the amount of smoothing on the reading of each CP level in milliamps (which is then converted to % full). They can range from 0 (no smoothing) to 1 (never changes). I set them all to 0.99. A sketch of the smoothing recursion follows the channel list.

H0:VAC-LY_CP1_LT100_PUMP_LEVEL_MA_SMOO
H0:VAC-LX_CP2_LT150_PUMP_LEVEL_MA_SMOO
H0:VAC-MY_CP3_LT200_PUMP_LEVEL_MA_SMOO
H0:VAC-MY_CP4_LT250_PUMP_LEVEL_MA_SMOO
H0:VAC-MX_CP5_LT300_PUMP_LEVEL_MA_SMOO
H0:VAC-MX_CP6_LT350_PUMP_LEVEL_MA_SMOO
H0:VAC-EY_CP7_LT400_PUMP_LEVEL_MA_SMOO
H0:VAC-EX_CP8_LT500_PUMP_LEVEL_MA_SMOO
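
For reference, a minimal sketch of how this style of smoothing behaves, assuming it is a simple single-pole recursion of the form y[n] = s*y[n-1] + (1-s)*x[n] (an assumption on my part; the actual Beckhoff implementation may differ):

# Illustrative exponential smoothing; s stands in for the SMOO channel value.
def smooth(raw_samples, s=0.99):
    """s=0 passes the raw reading straight through; s=1 never changes."""
    y = raw_samples[0]
    out = []
    for x in raw_samples:
        y = s * y + (1.0 - s) * x
        out.append(y)
    return out

# At s = 0.99 the effective averaging length is roughly 1/(1 - s) = 100 samples,
# so step changes in the pump level current appear heavily low-passed.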
LHO VE
kyle.ryan@LIGO.ORG - posted 17:17, Tuesday 10 May 2016 (27101)
Progress on VBOD
Kyle, Joe D. 

Today we re-installed the SRS RGA100 analyzer after having replaced its fried filaments and CDEM last Friday.  We leak checked the new joints but won't be able to test the basic functionality until tomorrow.  Assuming that everything works, it will still be "weeks" before VBOD can process bake loads, as the SRS RGAs are impossible to bake out effectively because of the near-zero conductance resulting from their poor focus plate design - if anybody has an extra $20K in their budget, please send it our way.  The newer aLIGO-era RGAs pay for themselves in production uptime and the minimal man hours needed to futz with them.
LHO VE
kyle.ryan@LIGO.ORG - posted 17:10, Tuesday 10 May 2016 (27100)
RGA cal-gas issue
(see also https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27078) 

Today I inspected the "hang tags" on the N2 and Kr calibration gases at each of the currently installed aLIGO RGAs (Pfeiffer Prisma Plus, Y-end, X-end and Vertex).  It turns out that these are NOS (new old stock) calibration gases that were intended for but never used during the Beam Tube bake back in 1998.  Though I don't specifically remember, I must have decided to use these since they were the correct gases and leak rates needed for the RGAs and were unused, albeit old, and would save having to spend another $8,000.  This looks to have been a bad idea as the factory fill pressures for the N2 bottles were sub-atmospheric, ~500 torr.  This is typical for "crimped capillary" leaks for the mid 10-8 torr*L/sec leak rates and isn't an issue as long as their isolation valves never get opened at pressures > 500 torr, say like at atmosphere.  The problem here is that these sat in a drawer for 15 years along with various other cal-gas bottles and were accessible to anybody wanting to turn a knob.  

It looks very much like the N2 cal-gas used at the Vertex had been opened to atmosphere at some point during its long storage.  I am currently re-baking the Vertex RGA, after having exposed it to the N2 bottle last Friday, this time with both cal-gas isolation valves open.  The N2 bottle may need to be isolated and tagged-out, and we may be stuck with only a Kr leak at the Vertex.  I haven't reviewed the data, but this might also explain the poor clean-up of the RGAs at the end stations.
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 16:44, Tuesday 10 May 2016 - last comment - 10:36, Wednesday 11 May 2016(27098)
laser trip
Jenne told me this afternoon that the laser had tripped.  The crystal chiller indicated an error with
flow sensor 1.  All the flow sensors in the chiller are the vortex style and have no moving parts
and are not likely to fail in the same manner as the old turbine ones.  My guess is that after ~31k
hours of service, it's time to start scheduling a time when we switch the chiller out for the spare
and send this unit back to Termotek for servicing - they recommend that the servicing be done annually.

    This is the second time in about 4 days that the laser has tripped because of this problem.
Comments related to this report
corey.gray@LIGO.ORG - 10:36, Wednesday 11 May 2016 (27114)

Tagged this with FRS#5469.

H1 CDS (SUS)
david.barker@LIGO.ORG - posted 16:28, Tuesday 10 May 2016 - last comment - 17:45, Tuesday 10 May 2016(27097)
CDS Tuesday Maintenance Summary, 10th May 2016

h1pemey

Dave:

I modified h1pemey to receive the muxed OMC DCPD demodulated channels from h1omc, and demux them into 8 dolphin channels. These will be received by the h1susetmypi model in a later mod. The h1pemey model now has the same mods as were applied to h1pemex last week.
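
As a toy illustration of the demux step (the real demultiplexing happens inside the RCG front-end model, not in Python, and I am assuming simple sample-by-sample interleaving):

import numpy as np

# Made-up stand-in for a block of muxed data: 8 channels interleaved.
muxed = np.arange(32)
channels = muxed.reshape(-1, 8).T   # row i is demuxed channel i
print(channels.shape)               # (8, 4)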

Vacuum Controls

Carlos, Patrick, Dave:

Patrick made further vacuum controls changes today. Some LLCV channels changed names, and new INI files were generated. I copied the running minute trend files from old-name to new-name on both h1tw0 and h1tw1. I created the new EDCU file, which was renamed from H0EDCU_VE.ini to H0EDCU_VAC.ini.

Target files, autoburts and snap files are being renamed from h0veXX to h0vacXX.

Beckhoff computers were renamed from h0veXX to h0vacXX. The NTP settings were verified, but it looks like the clocks remain unsynced.

Slow Controls Beckhoff Computers

Carlos, Jim, Dave:

The NTP settings on h1ecat[c1,ex,ey] were changed from name based to IP address. This caused a resync of the time on these machines (they were 4 seconds out).
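
For illustration, the kind of configuration change involved, assuming a standard ntpd setup (which is an assumption on my part; the name and address below are placeholders, not the actual LHO hosts):

# /etc/ntp.conf sketch
# Before: name based, which depends on DNS when the daemon starts
#server ntp.example.org iburst
# After: IP based
server 10.0.0.1 iburst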

DAQ Restart

Dave:

DAQ restart to incorporate the new h1pemey model and the new VAC INI file.

Comments related to this report
sheila.dwyer@LIGO.ORG - 17:45, Tuesday 10 May 2016 (27102)

For some reason the high voltage supply for the EX ESD was off this afternoon.  Tega and I turned it back on.  It looks like the same might be true for EY, but we haven't gone out there yet. 

H1 CDS
david.barker@LIGO.ORG - posted 16:15, Tuesday 10 May 2016 (27096)
/opt/rtcds file system no longer almost full

Ryan, Carlos, Jim, Dave:

The new ZFS file system on h1fs0, which is NFS exported as /opt/rtcds, recently became 97% full. This is a fast (SSD) but relatively small (1TB) file system.  The reason for the speed is to prevent the front-end EPICS freeze-ups we saw last year when the disk was busy syncing to its backup machine.

With this being a snapshotted ZFS file system, freeing up space is not as easy as removing and/or compressing files. First all the snapshots must be destroyed to free up the file blocks and allow file modifications.

The process on both h1fs0 (primary) and h1fs1 (backup) was:

 1 destroy all snapshots

 2 remove all old adl files in the rcg generated dirs /opt/rtcds/lho/h1/medm/model-name/

 3 tar and compress old build areas in /opt/rtcds/lho/h1/rtbuild*

 4 delete all non-rcg generated target_archive dirs (mostly from 2011-2012)

 5 delete all rcg generated target_archive dirs with the exception of the most recent four dirs

Steps 2 & 3 freed up only a modest amount of disk space; they were mainly to clean up clogged directories.

Steps 4 & 5: prior to doing these we verified that the old target archives were backed up to tape. Step 5 freed the lion's share of the disk space, about 600GB.
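
For anyone repeating step 1, a sketch of scripting the snapshot destruction from Python (the filesystem name is a placeholder, not the actual h1fs0 layout; verify tape backups before running anything like this):

#!/usr/bin/env python
# Destroy all snapshots under a ZFS filesystem (sketch only).
import subprocess

FS = "tank/rtcds"  # hypothetical pool/filesystem name

# -H drops headers so the output is one snapshot name per line.
names = subprocess.check_output(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", FS])
for snap in names.decode().splitlines():
    print("destroying", snap)
    subprocess.check_call(["zfs", "destroy", snap])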

The end result: free space went from 53GB to 685GB, and disk usage dropped from 97% to 28%.

At the end of the procedure, a full ZFS-snapshot sync of h1fs0 to h1fs1 was done manually, and the hourly sync was turned back on.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:14, Tuesday 10 May 2016 (27095)
Ops Day Shift Summary
Transition Summary:

Title:  05/10/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Work on PSL is ongoing. Tuesday Maintenance day  
Commissioning: 
Outgoing Operator: N/A
 
Activity Log: All Times in UTC (PT)

14:00 (06:00) Peter – In the H1 PSL enclosure
14:30 (07:30) Jeff – Replace network cable to PSL dust monitor #102
15:00 (08:00) Start of Tuesday maintenance day
15:00 (08:00) Christina & Karen – Cleaning in the LVEA
15:14 (08:14) Sprague – on site to remove pest control boxes
15:30 (08:30) Hugh – Restarting HAM models (WP #5874)
15:48 (08:48) Filiberto, Leslie, & Manny – Going to pull cables for the access system
15:49 (08:49) Hugh – Finished with model restarts
15:54 (08:54) N2 delivery at both end stations
16:00 (09:00) Patrick – Update Beckhoff code for vacuum controls computer (WP #5875)
16:04 (09:04) Kyle – Going to both end stations VEAs. 
16:10 (09:10) Jim B – Restarting H0EPICS to apply security patch.
16:10 (09:10) Bubba – Replacing faulty check valve air comp at Mid-X (WP #5876)
16:15 (09:15) Peter – Out of the H1 PSL enclosure
16:17 (09:17) Filiberto – Power off Vac rack LX, & LY to check grounds
16:35 (09:35) Christina & Karen – Cleaning at End-X
16:35 (09:35) Chandra – Going to Mid-X to work on CP4
16:36 (09:36) Kiwamu – Going to HAM6 area to check S/Ns on electronics
16:41 (09:41) Kiwamu – Out of LVEA 
16:43 (09:43) Jim B. – Finished with H0EPICS restart
16:45 (09:45) Cintas – On site to service rugs and garb
16:50 (09:50) Ed – Going to End-Y to terminate cables in the VEA
16:59 (09:59) Bubba – Back from Mid-X
17:05 (10:05) Christina & Karen – Finished at End-X and are going to End-Y
17:06 (10:06) Kyle – Back from the End Stations
17:10 (10:10) Filiberto – Out of the LVEA
17:18 (10:18) Hugh – Going to both End Stations to check HEPI pumps
17:45 (10:45) Hugh – Finished with HEPI checks at End Stations 
17:50 (10:50) Hugh – Checking CS HEPI pumps
17:58 (10:58) Hugh – Finished with CS HEPI pump checks
18:04 (11:04) Nutsinee – Going to LVEA to take pictures near HAM4
18:13 (11:13) Nutsinee – Out of the LVEA
18:44 (11:44) Vendor on site to deliver drinks
18:54 (11:54) Gerardo – Going to End-Y VEA to measure for EtherCAT cables
19:29 (12:29) Gerardo – Back from End Stations
19:49 (12:49) Manny & Leslie – Going into the LVEA to install brackets for the access system on IO cabinets
20:27 (13:27) Bubba – Going into LVEA to look at N2 LTS system
20:28 (13:28) Gerardo – Going to Mid-Y to shim CP3 and CP4
20:58 (13:58) Bubba – Out of the LVEA
21:04 (14:04) Manny & Leslie – Finished in the LVEA
21:11 (14:11) Sheila & Jeff – Adjusted the ALS wrong polarization from 43% to 7%
22:10 (15:10) Sheila – Going to End-X
22:20 (15:20) Manny & Leslie – Going to both End Stations to install access controls bracket
22:39 (15:39) Chandra – Going into the LVEA to the walkover bridge
22:45 (15:45) Chandra – Out of the LVEA
22:48 (15:48) Chandra – Going to both end stations to make measurements
22:48 (15:48) Gerardo – Going into the LVEA – climbing around HAM6

End of Shift Summary:

Title: 05/10/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: 
Incoming Operator: Cheryl 

Shift Detail Summary: Maintenance Tuesday. Work continues on recovery of the PSL.   

H1 PSL (PSL)
peter.king@LIGO.ORG - posted 12:03, Tuesday 10 May 2016 (27092)
FSS
Attached are measurements of the frequency stabilisation taken this morning.
Noise spectra and the transfer functions are plotted for common gains of 20 dB and 22 dB,
and fast gains of 22 dB and 8 dB respectively.

    At lower frequencies the noise is lower with the common gain at 20 dB.  However this
results in a slight decrease in the unity gain frequency.  With these settings the UGF
is ~450 kHz.  With the common gain at 22 dB, the UGF is ~650 kHz.  Anecdotally it seems
that the frequency servo is more glitchy with the common gain at 22 dB.

    Adjusting the fast gain to flatten the cross-over spectrum does improve its
flatness, but then increasing the common gain does not push the noise floor as low
as in the two cases illustrated.
Images attached to this report
H1 CAL (CAL, ISC)
kiwamu.izumi@LIGO.ORG - posted 11:47, Tuesday 10 May 2016 - last comment - 15:57, Thursday 12 May 2016(27091)
more accurate OMC DCPD anti whitening filters

Related log(s): 21131

Summary. - Today I updated the first anti-whitening filters of the OMC DCPDs in the OMC digital front end model, in order to make them more accurately match the actual analog circuits. The attached screenshots show the foton filters before and after the update.

Impact. - The new OMC DCPD responses differ from the previous ones by 1% in magnitude at 10 Hz, with almost no change at 100 Hz and above. The O2 DARM model must take this update into account. Note that even though the qualitative behavior of the mismatched anti-whitenings is similar to the large bias we have been seeing in the DARM response (for example 24569), they do not explain the bias (which is about 10% at 10 Hz in magnitude).

DCPD balancing. - According to my calculation, the balancing should be good at the 0.1% level without introducing an artificial imbalance in the OMC front end. So I removed the existing artificial imbalance (-0.3%) and updated the OBSERVE and DOWN SDFs. However, the imbalance should be experimentally double-checked and possibly re-adjusted.

Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 15:57, Thursday 12 May 2016 (27148)

Today, while Jenne and Sheila were re-tuning the OMC dither loops, I looked at the balance of the two OMC DCPDs. The attachment shows the null and sum spectra, and the ratio between DCPD A and B.

The isolation between null and sum is as good as 66 dB, according to an injected line at 12 Hz (OM3 pitch, by Jenne and Sheila). The two PDs match each other at the 0.1% level in magnitude and the 0.1 deg level in phase. This seems good enough for the moment, though we should check the responses above 100 Hz at some point.
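
As a worked check of how the 66 dB of isolation ties to the quoted 0.1% matching (a sketch: the line amplitudes are hypothetical, only their ratio matters, and I am assuming null = A - B and sum = A + B):

import math

sum_amp  = 1.0     # height of the 12 Hz line in the DCPD sum (arbitrary units)
null_amp = 5.0e-4  # height of the same line in the null stream

isolation_db = 20 * math.log10(sum_amp / null_amp)
print("isolation = %.0f dB" % isolation_db)   # 66 dB

# A fractional gain mismatch eps between the PDs leaks eps/2 of the sum
# into the null, so 66 dB of isolation implies:
eps = 2 * 10 ** (-isolation_db / 20)
print("implied mismatch = %.1e" % eps)        # ~1e-3, i.e. 0.1%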

Images attached to this comment
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 11:07, Tuesday 10 May 2016 - last comment - 13:59, Tuesday 10 May 2016(27090)
ISS photodiode box removed
I have removed the power stabilisation photodiode box from the table for
testing/debugging.  Please do not attempt to engage the power stabilisation.
Comments related to this report
jenne.driggers@LIGO.ORG - 13:59, Tuesday 10 May 2016 (27094)

Here is a time trace of the power into the IMC without an ISS.

Images attached to this comment
H1 PEM (CDS)
james.batch@LIGO.ORG - posted 09:39, Tuesday 10 May 2016 (27089)
Dust, Dewpoint IOCs restarted
The h0epics computer was patched and restarted.  Dust IOCs for the diode room, lab, LVEA, PSL, EX, and EY were restarted.  The dewpoint monitor IOC was restarted.  The protocol file for the PSL IOC was updated to handle three-digit ID numbers more reliably.
H1 SEI
hugh.radkins@LIGO.ORG - posted 09:19, Tuesday 10 May 2016 (27088)
WHAM ISI Models Updated & SEI HPI ISI Guardians

WP 5874

SEI update summarized in T1600090. This includes:

o GS13 signal storage rate changed -> ECR E1600077

o STS signals at the senscor channel point no longer recorded -> ECR E1600077

o Blend glitch -> ECR E1500456

o 10 sec delay on coil driver status before triggering watchdog -> ECR E1600076

o Remove user dackill -> ECR E1600042

o Cumulative saturation count -> ECR E1500325

Thanks to Arnaud for this round up.

This required updating isihammaster.mdl, the guardian ISI manager.py, and the HAM ISI Overview MEDM screen.

See attached for full log.

Non-image files attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 22:49, Monday 09 May 2016 - last comment - 05:50, Tuesday 10 May 2016(27086)
Ops Eve Summary: DRMI locked! (and then DRMI lost lock, FSS had trouble, no more locking tonight)

State of H1: DRMI locked!  Then DRMI lost lock, FSS went into oscillation

H1 Configuration:

Initial Alignment / Locking: 

Activities:

Other H1 activities / configuration:

Comments related to this report
peter.king@LIGO.ORG - 05:50, Tuesday 10 May 2016 (27087)
The fast gain of the FSS was only set to -8 dB in order for Jason and me to proceed with
other laser related work without having to worry about it re-acquiring and oscillating.
It is not the default value.  I will re-measure the cross-over today and update the log.
LHO VE
chandra.romel@LIGO.ORG - posted 20:26, Monday 09 May 2016 (27085)
CP4 & CP2
CP4 is back on scale and in PID mode. 

Experimented with PID loops using CP2. 
H1 ISC
sheila.dwyer@LIGO.ORG - posted 18:20, Monday 09 May 2016 - last comment - 13:56, Tuesday 10 May 2016(27082)
another brief look at XARM IR

Sheila, Jenne, Cheryl

Xarm IR locking was difficult again today, and we made two observations. 

We have a more aggressive low pass filter in MC2 M3 than we really need, CLP300.  We locked the arm twice successfully with this filter off and CLP500 on instead. Screenshots are attached with the digital gain set to 0.15.  We didn't add this to the guardian yet, but I think it's a good idea (the third screenshot attached shows the trial configuration).

We are in no danger of saturating M3, but M2 saturates when we are trying to acquire.  This might be the root of some of our troubles acquiring the xarm.
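
As a rough illustration of the trade-off (a sketch only: I am modeling CLP300/CLP500 as 4th-order Butterworth low passes at 300 Hz and 500 Hz, which is a guess at the actual foton designs, probed at an assumed 20 Hz control-band frequency):

import numpy as np
from scipy import signal

f_probe = 20.0                 # assumed control-band frequency, Hz
w_probe = [2 * np.pi * f_probe]

for corner in (300.0, 500.0):
    b, a = signal.butter(4, 2 * np.pi * corner, btype="low", analog=True)
    _, h = signal.freqs(b, a, worN=w_probe)
    print("%3.0f Hz corner: %.1f deg of phase at %g Hz"
          % (corner, np.degrees(np.angle(h[0])), f_probe))

# The lower corner costs noticeably more phase in the control band, which is
# why the softer low pass is friendlier while the loop is still acquiring.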

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 13:56, Tuesday 10 May 2016 (27093)

I've tested this new arrangement about 10 times in the last 15 minutes, and it does seem much better than what we had before, so I've put it in the guardian and committed the changes.

H1 GRD
jameson.rollins@LIGO.ORG - posted 11:51, Friday 06 May 2016 - last comment - 12:01, Wednesday 11 May 2016(27046)
minor guardian update to fix minor log viewing issue

I just pushed a minor upgrade to guardian to fix a small issue with the guardlog client when following node logs.  The new version is r1542.

The client was overly buffering stream data from the server, which caused stream data not to be output in a timely manner.  This issue should be fixed in the version I just pushed.

As always make sure you have a fresh session to get the new version:

jameson.rollins@operator1:~ 0$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
jameson.rollins@operator1:~ 0$ 
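
For context, a minimal sketch of the kind of buffering fix described (not the actual guardlog code): read the stream line by line and flush each line as it arrives, rather than letting the client accumulate a large buffer.  The tail command below is just a stand-in for the log stream.

import subprocess
import sys

proc = subprocess.Popen(["tail", "-f", "/var/log/messages"],
                        stdout=subprocess.PIPE)

# iter() with a sentinel yields each line as soon as readline returns it.
for line in iter(proc.stdout.readline, b""):
    sys.stdout.write(line.decode())
    sys.stdout.flush()   # emit immediately instead of waiting on a full buffer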

Comments related to this report
sheila.dwyer@LIGO.ORG - 17:02, Tuesday 10 May 2016 (27099)
sheila.dwyer@operator1:~/StripTools$ lockloss select
usage: guardlog [-h] [-a TIME] [-b TIME] [-c TIME] [-d S] [-o H] [-t] [-n]
                [-u | -l | -g | -r | -p] [-x]
                NODE [NODE ...]
guardlog: error: unrecognized arguments: --dump
Traceback (most recent call last):
  File "/ligo/cds/userscripts/lockloss", line 408, in <module>
    args.func(args)
  File "/ligo/cds/userscripts/lockloss", line 259, in cmd_select
    selected = select_lockloss_time(index=args.index, tz=args.tz)
  File "/ligo/cds/userscripts/lockloss", line 140, in select_lockloss_time
    times = list(get_guard_lockloss_events())[::-1]
  File "/ligo/cds/userscripts/lockloss", line 107, in get_guard_lockloss_events
    output = subprocess.check_output(cmd, shell=True)
  File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'guardlog --dump --today ISC_LOCK' returned non-zero exit status 2
sheila.dwyer@operator1:~/StripTools$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
sheila.dwyer@operator1:~/StripTools$ 
It seems like there is a problem accessing the guardian logs again; I wonder if it's related to this update or not.  I don't know if anyone has tried to look at locklosses since the update. 
 
jameson.rollins@LIGO.ORG - 07:25, Wednesday 11 May 2016 (27110)

Sorry, Sheila.  This issue has been fixed now.  Just needed to tweak the lockloss script to account for some updated guardlog arguments.

sheila.dwyer@LIGO.ORG - 12:01, Wednesday 11 May 2016 (27117)

Yes, it's working, thank you Jamie.  
