(Corey, Jim, Mitch)

Vertical Sensors Installed on Unit #2
On Unit #1, it was discovered that the mirrored surface for one of the Capacitive Displacement Sensors (CPSs) was scratched. We grabbed spare hardware intended for Unit #2 to get us through Unit #1. Unfortunately, the Sensor Target Body is the one part we didn't have a spare of. We had hoped to have another one in hand late last week, but as of Tuesday we still hadn't received it, so we opted to use the damaged (yet serviceable) Target Body so we could get Unit #2 available for testing. All Vertical Sensors were installed. The bad Target Body is on V1; we'll want to swap it out when we get another Target Body part.

Pair of CPS Mini-Racks Powered ON (& Connected Together), and Gaps Set
The Mini-Racks were connected to each other with the cable/connections and were both powered on. Since all the CPS Targets were pulled back (huge gap), we saw that V1, H1, V2, & H2 were railed at +32k. V3 & H3 were at zero (we saw this on Unit #1 as well); they went to "real" values once the Target was close enough to the Sensor Head. With the table locked, the Sensor gaps were set to within +/- 100 counts. Note: these values can change by up to a few hundred counts due to the ~2" slop in the locked Lockers, so don't spend too much time getting the gaps right at zero. Not posting gap values here, because of what was discovered later....

Access Walls Installed, and Unit #2 Balanced & Level-Checked
After the Access Walls were installed, the Dial Indicators were set to zero (with the table locked). The table was then unlocked and balanced. The table had already been set fairly close to level weeks ago, so only small moves were needed. The level of the table, as told by the Dial Indicators, was:
A: 0.000"
B: +0.0005" (this means it's low)
C: 0.000"
D: 0.000"
Pretty darn good!!

Checking Sensor Gaps with Teflon Shims
From the first gap check, it was clear our gaps were big.
Here's what we had (all +/- 0.003"):
V1: 0.096"  H1: 0.094"
V2: 0.096"  H2: 0.094"
V3: 0.100"  H3: 0.090"

QUESTION: (1) Switch back to the "real" feedthroughs, or (2) set the gaps to 0.080" and set new zeroes on the Mini-Racks?

Flange Feedthroughs Used & Gaps Set Again
I had thought we had installed some new & better BNC feedthroughs on our Interface Plate, but maybe something happened to them after they were cleaned. Since I found the "real" flange feedthroughs, I opted to employ them again. [I wired them up such that, when looking at the "dirty side", the top connection is V1, incrementing clockwise; same for the H's.] Sure enough, with these feedthroughs, all the counts went up to around 10,000 (meaning the gaps needed to get smaller). Here are the new values:

Gaps (counts) set with a locked table (Offset / Std Dev):
V1: 46 / 1.5    H1: 21 / 0.7
V2: -55 / 1.5   H2: -36 / 0.8
V3: 43 / 1.5    H3: -76 / 0.7

Gaps measured with teflon shims (all +/- 0.003"):
V1: 0.082"  H1: 0.083"
V2: 0.082"  H2: 0.083"
V3: 0.077"  H3: 0.083"

How does this sound? Are we sure it was a feedthrough issue? Are we happy with where the zeroes are set on the Mini-Racks?

V3 Target Contacted Sensor Head
While finishing up setting the V3 gap, I accidentally loosened the Collar too much and the Target dropped down onto the Sensor Head. It was a straight drop: there was no rotation, and I immediately pulled it all the way up and inspected the Target as best I could. I could see a few minor scratches, but they mainly looked like they were on the outer edge. Some particles were also observed. The biggest feature was a "dried liquid stain" near the middle. Since there wasn't anything huge in the middle, I opted to continue with setting the gap. I took some pictures, so please check them out. Any suggestions on whether it's ok to live with this situation?
Otherwise, I say we should proceed and just remember that this happened (in case this Sensor gives questionable performance in the future). Speaking of things to remember, we also have to remember to swap out the V1 Sensor Target at some point for a good Sensor Target Body (we should get one from LLO on Wednesday afternoon). Attached are some close-up photos of the V3 Sensor Target. In some of them you will notice the "dried liquid spot", which is fingerprint-sized and near one of the bolt heads.
More on the BNC Feedthrough Situation
Just got off the phone with Hugh, and he mentioned the BNC feedthroughs on the Interface Plate were never swapped out, so we don't have the good ones in there. We'll probably stick with the Flange feedthroughs for now, but we might test out the new BNC feedthroughs at the end of Unit #2 testing (or just wait until we're on Unit #3).

More on the Vertical Sensors
Talked to Hugh this morning about what happened with V3; it sounds like Hugh will give this Target an inspection and determine whether it needs to be swapped out (and if they get the new Sensor Target Body for V1, they might address it, too).
Reconfigured the QTS DAQ: added all of the test stand's EPICS channels to the DAQ. Created the /cvs/cds/llo/chans/daq/L1EDCU.ini file, added this path to the /cvs/cds/llo/target/fb/master file, and restarted the DAQ. The frame size has increased from 3.2MB to 6.8MB (per 16-second frame). This will not cause any disk-filling errors: /frames is currently only 16% utilized, so it is estimated to top out at approximately 33%.
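The headroom estimate above is just proportional scaling of the current utilization by the frame-size ratio. A minimal sketch of the arithmetic, using the numbers from this entry (the quoted 16% is presumably rounded, which is why the entry lands on ~33% rather than exactly this result):

```shell
#!/bin/sh
# Scale current /frames utilization by the frame-size ratio.
# All three input values are taken from this entry.
old_mb=3.2        # old frame size, MB per 16 s frame
new_mb=6.8        # new frame size after adding the EPICS channels
current_pct=16    # current /frames utilization, percent (rounded)

# new utilization ~= current * (new frame size / old frame size)
new_pct=$(awk -v o="$old_mb" -v n="$new_mb" -v c="$current_pct" \
    'BEGIN { printf "%.0f", c * n / o }')
echo "estimated /frames utilization: ${new_pct}%"
```

With the rounded inputs this gives 34%, consistent with the "approx 33%" quoted above.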
Installed an autoburt on bscteststand. It runs as a controls cronjob at one minute past each hour, uses the l1qtsepics/autoBurt.req file, and writes to the /data/autoburt area. This is the same code that runs on the seiteststand (slightly simpler, since there are no X-FILES).
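For reference, an hourly autoburt job of this kind amounts to something like the following. Only the schedule, request-file name, and output area come from this entry; the wrapper path, snapshot naming, and the use of burtrb (the standard EPICS BURT readback tool) are assumptions about how the script is put together:

```shell
#!/bin/sh
# Hedged sketch of the hourly autoburt cronjob described above.
#
# Hypothetical crontab entry for user controls (schedule from this entry,
# wrapper path is an assumption):
#   1 * * * * /path/to/autoburt_wrapper.sh
#
# The wrapper plausibly takes a BURT snapshot of the EPICS settings named
# in the request file, into a timestamped file in the autoburt area.
REQ=l1qtsepics/autoBurt.req                   # request file (from this entry)
OUT=/data/autoburt/$(date +%y%m%d_%H%M).snap  # naming scheme is a guess
burtrb -f "$REQ" -o "$OUT"                    # EPICS BURT readback/snapshot
```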
This test shows that the config changes do work in allowing larger images. This is an image that Corey tried to post yesterday, but had to resize and post a smaller version.
I will be restarting the alog webserver between 10:00am and 10:15am Pacific on Tuesday 2 Aug 2010. Please save your entries before that. This is to change a setting to allow larger images (2MB-10MB) to be uploaded as attachments.
The maintenance is complete.
(Eric A., Jodi F., Corey G., Hugh R., Mitch R., Jim W.)
Last Friday (8/30/10), the seismic team boxed up the first Advanced LIGO HAM-ISI (Unit #1). This endeavor went smoothly. The Unit #1 box remains in the Staging Bldg, in the open area outside of the cleanrooms (it will be moved out when we receive our new, more powerful forklift, which will be able to haul it up the "hill"). It will then be stored in the Vacuum Prep Warehouse. Action photos are attached.
I have added and committed plots, in .fig and .pdf formats, of the Transfer Functions in both the co-located (L2L) and IFO-coordinate (M2M) bases. I edited some of the original scripts used to plot these (traditionally titled 'TFanalyze') and added a few tweaks to save plots in both formats. They also concatenate the data so that various parameters in the 'TFcollect' scripts can be tracked easily and the data plotted without too much editing of the analysis scripts.

The plots are located in the repository in '~seismic/HAM-ISI/X1/Data/unit_1/Figures/'. They are of the L2L TFs from July 04 and July 13, with bandwidths of 0.025-800Hz and 0.05-50Hz, respectively; the M2M plots are from the same dates and bandwidths.

The scripts are located in the repository at '~/seismic/HAM-ISI/X1/Scripts/DataAnalysis/unit_1/'. The new scripts:
TFanalyze_100704_L2L_0p025to800hz.m
TFanalyze_100713_L2L.m
TFanalyze_100704_M2M_0p025to800hz.m
TFanalyze_100713_M2M.m
I plan on moving these over as templates to the other unit_x folders.

Also, I cleaned up the structure of the repository a bit. Now, under the '~/Scripts/' directory, we have separate 'DataCollection' and 'DataAnalysis' folders for the assemblies (units). So unit_1's DataCollection folder is now at '/seismic/HAM-ISI/X1/Scripts/DataCollection/unit_1/', and likewise for '~DataAnalysis/'. The '~/seismic/HAM-ISI/X1/Data/' folder also has this structure.
This problem is still a mystery. Usually dtt "test timed out" problems mean that the system clock and the DAQ clock are off by several seconds or more. In this case dtt and the daq are running on the same machine, and the system clock was spot on! Previously, when I had to correct the bscteststand clock, I had to restart daqd to pacify dtt. So today I just restarted daqd and it did the trick, even though the clock was good all along. I'm monitoring the clock corrections (done by cronjob) in the file /var/tmp/timecorrections/log.txt. Perhaps the local quartz clock is having aging problems (battery failure)?
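The monitoring described above boils down to periodically extracting the measured clock offset and appending it to the log file. A hedged sketch of that extraction step, assuming the correction tool reports an "offset N" field the way `ntpdate -q` does (the sample report line, the parsing, and the function name are all illustrative, not the actual cronjob code):

```shell
#!/bin/sh
# Hypothetical sketch of the clock-correction monitor: pull the offset
# out of an NTP query report line and emit a timestamped log record.
parse_offset() {
    # e.g. "server 10.0.0.1, stratum 2, offset 0.004512, delay 0.02565"
    echo "$1" | sed -n 's/.*offset \([-0-9.]*\).*/\1/p'
}

sample="server 10.0.0.1, stratum 2, offset 0.004512, delay 0.02565"
offset=$(parse_offset "$sample")
echo "$(date -u '+%Y-%m-%d %H:%M:%SZ') offset=${offset}s"
# The real cronjob would append this line to
# /var/tmp/timecorrections/log.txt instead of printing it.
```

A steadily growing offset between corrections in that log would support the quartz-clock-aging theory.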
With Michael Landry's help, we loaded Brett's filter file into /cvs/cds/llo/chans/L1QTS.txt (Brett's file is saved as M1SUS.txt.eg). "Load Coefficients" appears to have loaded filters with appropriate names corresponding to the file. (Note that the file structure needs a little work!) After a DAQ reboot by Dave to clear "Test Timed Out" errors on DTT, we checked that we were able to inject excitations through awggui and run transfer functions. We can now do some filter checks and run a gamut of TFs.
I've created a program called system_check, which checks that all the front-end processes are running correctly. It also checks for duplication.
[controls@seiteststand scripts]$ system_check
Checking setup_shmem.rtl g1x01 g1isiham ... pid = 5732
Checking g1x01epics ... pid = 4994
Checking g1x01fe.rtl ... pid = 5022
Checking awgtpman -s g1x01 ... pid = 5025
Checking g1isihamepics ... pid = 5249
Checking g1isihamfe.rtl ... pid = 5283
Checking awgtpman -s g1isiham ... pid = 5288
Checking daqd ... pid = 5418
Checking nds ... pid = 19919
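The core of a check like this is counting running instances of each process: zero means it's dead, more than one means it's duplicated. A minimal sketch of that logic, assuming pgrep-style full-command-line matching (the function name and reporting format are illustrative; the actual system_check implementation isn't shown here):

```shell
#!/bin/sh
# Hypothetical sketch of a system_check-style test: for each front-end
# process, require exactly one running instance.
check_proc() {
    name="$1"
    pids=$(pgrep -f "$name")                # match against full command line
    count=$(echo "$pids" | grep -c '[0-9]') # number of matching pids
    if [ "$count" -eq 0 ]; then
        echo "Checking $name ... NOT RUNNING"
    elif [ "$count" -gt 1 ]; then
        echo "Checking $name ... DUPLICATED (pids: $pids)"
    else
        echo "Checking $name ... pid = $pids"
    fi
}

# Process names taken from the transcript above.
for p in g1x01epics g1x01fe.rtl g1isihamepics g1isihamfe.rtl daqd nds; do
    check_proc "$p"
done
```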
During one of Fabrice's final measurements on our Unit #1 Assy, it was noticed that the V2 GS13 appeared to have a gain a factor of 2 lower than everyone else. We wanted to check this out. Today, the Seismic Team finished moving all the GS13s from Unit #1 to Unit #2, and I ran power spectra looking at the GS13s in various states [see attached plots]:

Meas UL: Unit #2 locked. Here V2 looks to be ok; if anything, it seems bigger than everyone else (in contrast to Meas LL below).
Meas LL: An old measurement from a locked Unit #1. Here the possible factor-of-2 lower gain on the V2 GS13 may be visible.
Meas UR: Unit #2 UNLOCKED. Everything looks normal here: all the H's look similar, and the V's look similar.
Meas LR: The H2/V2 cable was swapped with the H3/V3 cable (Unit #2 locked). Nothing looks glaringly different on this plot.

The DTT data taken for this measurement is available for your perusal here:
/opt/svncommon/seisvn/seismic/HAM-ISI/X1/Data/unit_2/dtt/20100729_geo_V2check.xml
DB and CG
The seiteststand was not running all of its required processes; all of the g1x01 processes were missing. We ran killg1x01 and killg1isiham to cleanly remove all processes, and then started g1x01 and g1isiham, in that order.
Now that we've completed testing on HAM-ISI Assy #1, we wanted to update our transformation matrices to the more accepted canon (Jeff K. made a new matlab script to generate these matrices, incorporating Celine's work, which includes the L4Cs among other things). Jeff G. loaded these briefly a couple of weeks ago, but we reverted to the older scripts since we were still in the midst of Unit #1 testing; see his elog for the specifics on the files used and an overall explanation. Attached are the BEFORE/AFTER values for these matrices. Note that the new script yields matrix values to 12 significant figures, but EPICS rounds them to 5 significant figures when they are loaded into our actual/medm matrices.

Noticeably Different CONT2ACT Matrix
The DISP2CEN & GEO2CEN matrices look essentially identical before and after this update. The noticeable change is in the CONT2ACT matrix: it has values at different locations and also sign changes. During Assy #2 testing, we'll need to confirm that our sign convention is obeyed, given this matrix change.
CONT2ACT Matrix Looks FINE!
This looks like a case of operator error: the matrix I loaded into CONT2ACT was actually the L4C2CEN matrix! I've rerun the script and acquired/loaded the correct CONT2ACT matrix, and now CONT2ACT looks closer to what it was before.

This is what I mistakenly ran previously:
>> [GEO2CEN,DISP2CEN,CONT2ACT] = make_HAMX1_projections_100709
It should be:
>> [GEO2CEN,DISP2CEN,L4C2CEN,CONT2ACT] = make_HAMX1_projections_100709

I'm attaching an updated/corrected BEFORE/AFTER document.
After a Rolf model change a while ago, we ended up making new Front End scripts (i.e., our start & kill scripts). Our scripts are located at /opt/rtcds/geo/g1/scripts. I've made an archive folder in there and put our old start & kill scripts in it, so now only the "real" ones remain in the main folder:
* killg1isiham
* killg1x01
* startg1isiham
* startg1x01
It seems that I still cannot get playback data. I have restarted dv, but the request never opens a plot, for either the _IN1 or _DAQ channels. Also, the realtime data for both kinds of channels looks like it is repeating. And how do I launch Foton?
From an xterm terminal in the cleanroom:
ssh -X controls@bscteststand
Then type foton at the prompt.
With the completion of Assembly #1 testing, the ISI is locked. The Actuator and GS13 cabling is now disconnected.