H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:22, Wednesday 07 January 2015 (15911)
CDS model and DAQ restart report, Tuesday 6th January 2015

model restarts logged for Tue 06/Jan/2015
2015_01_06 11:05 h1iopsusauxey
2015_01_06 11:05 h1iopsusey
2015_01_06 11:05 h1susauxey
2015_01_06 11:05 h1susetmy
2015_01_06 11:05 h1sustmsy
2015_01_06 11:58 h1sustmsy
2015_01_06 12:01 h1sustmsx
2015_01_06 12:04 h1susim
2015_01_06 12:08 h1sushtts
2015_01_06 12:10 h1susmc1
2015_01_06 12:12 h1susmc3
2015_01_06 12:14 h1susprm
2015_01_06 12:19 h1susmc2
2015_01_06 12:19 h1suspr2
2015_01_06 12:22 h1sussr2
2015_01_06 12:26 h1sussrm
2015_01_06 12:27 h1susomc

2015_01_06 12:36 h1broadcast0
2015_01_06 12:36 h1dc0
2015_01_06 12:36 h1fw0
2015_01_06 12:36 h1fw1
2015_01_06 12:36 h1nds0
2015_01_06 12:36 h1nds1

2015_01_06 16:11 h1nds1
2015_01_06 16:12 h1nds1
2015_01_06 16:13 h1nds1

X1PLC1 10:01 1/6 2015

X1PLC2 10:01 1/6 2015

X1PLC3 10:01 1/6 2015

Unexpected restarts of h1nds1, possibly related to data requests.

Maintenance Day. FE, DAQ and Beckhoff restarts shown. h1susey and h1susauxey ADC work. New models for non-quad SUS with associated DAQ restart. h1ecatx1 did its usual Tuesday freeze up.

H1 ISC (SUS)
jeffrey.kissel@LIGO.ORG - posted 22:11, Tuesday 06 January 2015 (15910)
BS and PR2 Misaligned
J. Kissel

For the record, I've moved H1 SUS BS and H1 SUS PR2 to MISALIGNED via the guardian, for no other reason than that everyone has gone home and the giant projection of the AS port was giving me a seizure from interference flashes, thanks to the 6.6 [mag] earthquake in Panama. Please restore at your leisure in the morning.
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 21:49, Tuesday 06 January 2015 (15907)
More ETMX Story -- Total Confusion, Let's just Vent
J. Kissel, G. Moreno, K. Ryan, T. Sadecki, B. Shapiro, D. Sigg, B. Weaver, J. Worden

In summary
What happened today made little to no sense. If we continue to play this "mess with the environment, see what happens" game, we feel like we'll 
(a) Be at the game for a week or three, and
(b) Continue to fight the environment after the three weeks of fine tuning until we next vent for other reasons.
We (at least Betsy, Travis, Daniel, Brett, and I) vote for a "quick" in-and-out vent to fix the problem in hardware. 
We'll discuss it at tomorrow's morning meeting.


----------------
What's brought us to this conclusion 
See the first attachment, 2015-01-06_ETMXAdventure_Annotated.pdf. It's a 24-hour trend of
(UL) The VEA temperature
(LL) The QUAD's vertical displacement
(UR) The fine vacuum pressure (good only when pressure is sub-[Torr])
(LR) The coarse vacuum pressure (good only when pressure is > ~0.1 [Torr])

Here's the timeline:
- (Beginning of the plot) We see the morning, where we've left the suspension. The VEA is at 19.5 [deg C], and the pressure is still slowly falling, at roughly 3.2e-7 [Torr].
- (Solid BLACK line) John decreases the temperature set point of the VEA. Don't see much change (maybe a *little* rise in the SUS).
- (Dot-dashed MAGENTA line) John makes a second adjustment, to account for potential overshoot, or something. We begin to see a steady increase in displacement, lifting the SUS *up* as we'd hoped. Yay! But... we see only ~2 [um] in 2 [hrs], and we know we have to go over 100 [um].
- We all had a hallway conversation and decided "let's burp in some dry air, to decrease the time constant by increasing the thermal conductivity in the chamber."
- (Double-dot-dashed RED line) Kyle burps in 0.5 [Torr] of dry air (see LHO aLOG 15893) -- in the region where (in the hallway) we convince ourselves there's enough air to become sufficiently conductive, but below where buoyancy begins to have an effect.
The SUS drops 15 [um] suddenly. 
     What? 
     Why? 
     More on this in a second.
Then ... over the next several hours the SUS continues to drop, at a thermally slow rate.
     What?? 
     Shouldn't the air have made the temperature equilibrate to the lower set point faster, bringing up the SUS?
     If not faster, at least at the same rate?

Comparison against the other SUS in the chamber that we know are free
See the second attachment, 2015-01-06_H1BSC9SUS_DispComparison.pdf, which is a data viewer export of the vertical displacement for all three SUS in the chamber, except that I've removed the DC bias on all of them so that their change is more obvious. Remember that these SUS chains (the ETMX Main, the ETMX Reaction, and the TMS) all have roughly the same overall suspended mass, because the Suspension Point blade springs and the top masses are all roughly copy-and-pastes of each other.
We indeed see:
- The same upturn and increase of displacement in all three suspensions when John turns down the VEA temperature. Notice that the Reaction Chain and TMS rise at *the same rate* as each other, and differently from the Main chain.
- All three suspensions see the ~15 [um] drop. We think we have an explanation for this, but again will say more later.
- The Reaction Chain and TMS start to slowly turn back up and rise in height again. Still -- at a slower rate than before (??), but at least they rise. And certainly look nothing like the Main chain.

Why the 15 [um] drop?
Daniel, smarter than us all, conjectures it's the OSEMs' LED sensitivity to temperature. We look at all 6 of the OSEMs on the M0 chain, and we see they all have a large jump just when the air is burped into the chamber. His theory: the ideal gas law, pV = nRT, tells us that increasing the pressure in the chamber increases the temperature. Though I don't think anyone has measured the OSEMs' light output as a function of temperature, we can guess from a few internet sources (e.g. here, here, and here) that the sensitivity to temperature is roughly a 1-3% change in output per degree C, where lower temperature means an increase in output power. To the OSEM's PD, an increase in LED output power (from increased temperature, from increased pressure) looks the same as the flag being pushed out of the OSEM. For the vertical degree of freedom, which is sensed by the LF and RT OSEMs on top of the TOP mass, a flag pulling out of the OSEM reads as the SUS sinking down. The OSEMs have a linear range of 700 [um]. 1-3% of 700 [um]? You guessed it: 7 - 21 [um].
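
A back-of-the-envelope check of that last step, as a minimal Python sketch (the 1-3% per deg C sensitivity and the 700 [um] linear range are the numbers quoted above; the ~1 [deg C] excursion is an assumed round number for illustration):

osem_range_um = 700.0              # OSEM linear range [um], quoted above
led_sensitivity = (0.01, 0.03)     # fractional LED output change per deg C (1-3%)
delta_T_C = 1.0                    # assumed temperature excursion [deg C]

# Apparent (not real) vertical displacement read by the vertical OSEMs:
apparent_um = [s * delta_T_C * osem_range_um for s in led_sensitivity]
print("apparent shift: %.0f - %.0f um" % tuple(apparent_um))   # -> 7 - 21 um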


Re-conclusion
So look -- if 
- we're chasing down and getting fooled by a 1% temperature dependence of the OSEMs' LEDs showing up as 15 [um], 
- changing the environment by 1 [deg C] only gets a change of ~3 [um/hr] when we need more than 100 [um], and 
- the test mass (and the other chains, for that matter) isn't doing what we'd naively expect, 
then we're barking up the wrong, very slow, very time-consuming tree, which may be a dead end without enough juice to get what we need anyway. 

Let's just go in and fix the problem by backing off the earthquake stops to 2 [mm], or 2000 [um]. Still plenty of protection from earthquakes, and we stop playing this horrible game with these horribly temperature sensitive suspensions.
Images attached to this report
Non-image files attached to this report
H1 ISC
evan.hall@LIGO.ORG - posted 21:21, Tuesday 06 January 2015 - last comment - 10:43, Wednesday 07 January 2015(15908)
ISC maintenance

Kiwamu, Alexa, Koji, Evan

MICH dark locking adjustments

For the past few days, it has been difficult to lock MICH on a dark fringe; the velocity of the BS quickly becomes too high. Kiwamu found that turning down the gain from −500 ct/ct to −200 ct/ct during lock acquisition helps this situation; also, once locked, engaging FM2 and FM3 (rather than FM3 and FM6) seems to have a higher probability of success for keeping the Michelson locked.

DRMI ASC degradation

Trying to transition to DRMI_1F_LOCKED_ASC in the ISC_DRMI guardian will blow the lock. This appears to come down to the INP1 and PRC1 loops; the others (PRC2, MICH, SRC1, SRC2) can be engaged by hand just fine. INP1 controls PRM and IM4, and PRC1 controls just PRM. I tried locking with lower gain, with opposite sign, with the PRM M1 bleed-off disengaged, etc., but could not close either loop stably. We'll need to take a deeper look at why these loops no longer work.

BBPD spectrum

With DRMI locked on 1f, Koji and I took an RF spectrum of REFLAIR_B, using the −12 dB coupler on the diplexer (as in LHO#14796). This is the first measurement of the RF spectrum of REFLAIR_B after its modification (LHO#14925).

Comments related to this report
alexan.staley@LIGO.ORG - 08:22, Wednesday 07 January 2015 (15912)

Initial Alignment

We also had some problems running PRM_ALIGN and SRM_ALIGN via the ISC_DOF guardian. PRM_ALIGN takes a while to bleed off from the M3 to the M1 stage, and thus to offload. We tried going back to the old version of PRM_ALIGN in which we only feed the WFS back to M1; however, we did not succeed -- a lot of settings had been changed and we could not remember the old configuration; we didn't spend too much time trying to debug. We also ended up aligning SRM by hand, which turned out to be faster. We should take some time and look at these scripts since a lot has changed.

koji.arai@LIGO.ORG - 10:43, Wednesday 07 January 2015 (15917)

BBPD spectrum taken. -12dB coupling has already been compensated. Note that the removal of one of the BBPD preamps reduced the PD gain by ~20dB.

Condition:

H1:LSC-REFLAIR_B_LF_INMON    17000+/-2000
H1:LSC-REFLAIR_B_LF_OUTPUT    17.8+/-0.2 mW
H1:LSC-POPAIR_B_RF18_I_MON    350+/-5
H1:PSL-PERISCOPE_A_DC_POWERMON    10620+/-20
 

Non-image files attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 21:19, Tuesday 06 January 2015 (15909)
Y arm green wfs working again

Daniel, Alexa, Evan, Sheila

Yesterday we noticed that the Y arm green WFS weren't working; today, after we locked the arm on green for the PCal camera photos, we investigated. Several settings were wrong: a limit was off in the centering servos, the feedback to the ETM (and the TMS) was off, and the gains in the TMS paths had been reset to 1, presumably during a boot. We have captured and committed new safe.snaps for TMSY and ETMY.

The WFS are working fine now, with 6 loops closed.

H1 SEI
brett.shapiro@LIGO.ORG - posted 20:05, Tuesday 06 January 2015 (15906)
Took some measurements on ETMX ISI and ETMX HEPI

I took some quick measurements of the ETMX ISI in DTT tonight using excitations both at stage 1 and in HEPI. I turned them both off using the guardian for these measurements. I reset them back to isolated before leaving.

I plan to finish these measurements tomorrow or later in the week, so not much to say about results yet. The idea is to check Zach Patrick's BSC-ISI N4SID matlab model, in ...SeiSVN/seismic/BSC-ISI/Common/BSC_ISI_Model/N4SID_Model. The model does not include ground (HEPI) displacement inputs. It is possible that the responses from the HEPI inputs have the same shape as those from the stage 1 actuator inputs, just scaled differently, so the measurements are to check whether this is a worthy assumption. Thus, I am measuring stage 1 to stage 1, and then HEPI to stage 1. If the shapes match, then awesome. If not, we'll need to do more work to complete a BSC-ISI model.
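
A minimal Python sketch of one way to do that shape comparison (not the actual analysis; the reference frequency and tolerance are placeholder assumptions):

import numpy as np

def same_shape(freq, tf_stage1, tf_hepi, f_ref=1.0, tol_db=3.0):
    """Check whether two complex frequency responses differ only by a
    constant scale factor: normalize each at f_ref and see whether the
    magnitude of their ratio stays within tol_db of 0 dB."""
    i_ref = np.argmin(np.abs(freq - f_ref))
    ratio = (tf_stage1 / tf_stage1[i_ref]) / (tf_hepi / tf_hepi[i_ref])
    dev_db = 20 * np.log10(np.abs(ratio))
    return bool(np.all(np.abs(dev_db) < tol_db)), dev_db

# Synthetic example: two responses that differ only by a factor of 5.
f = np.logspace(-1, 1, 200)
print(same_shape(f, 1.0 / (1 + 1j * f), 5.0 / (1 + 1j * f))[0])   # -> True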

H1 SUS
betsy.weaver@LIGO.ORG - posted 16:38, Tuesday 06 January 2015 (15901)
ETMx story

We've been looking into why the ETMx SUS seems to be rubbing now that we are pumped down.  Initial thoughts are that it is temperature related, coupled with the buoyancy shift.  John turned the temp up at Ey in order to restore it to the temp from before the Dec 18-19 vent.  Attached is a plot showing the vent, during which we observed a few deg temp change.  We think we set the EQ stops during a more "normal" temp, however.  During the vent period John increased the overall building temp 1 degC since it was colder than the other VEAs and he wanted to standardize it.  Today, he dropped the temp back to the pre-vent value and we are waiting for the QUAD to reach equilibrium, in hopes of seeing a free suspension.  In addition, it was decided to increase the pressure in the chamber in an attempt to increase the temperature coupling to the suspension.  We are plotting as we go to understand the vertical shifts.  To be continued...

Non-image files attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:17, Tuesday 06 January 2015 (15904)
CDS Maintenance Summary

h1susey ADC replace, Jim and Dave:

WP4988. We removed two PCI-e ADC cards from the DTS x1seiham IO Chassis. We swapped both PMC-in-carrier ADCs in h1susey with these. The PMC ADCs were installed in x1seiham. We verified the card layout and ribbon routing in h1susey's IO Chassis and updated sheet 13 of D1301004 as-built drawing. This closes WP4988.

We forgot to warn the seismic team that the IOP SWWD would trip h1seiey when the IOP model was turned off; apologies for that.

h1susauxey ADC offset, Richard, Jim, Dave:

We replaced the ribbon cable for the 3rd ADC in h1susauxey's IO Chassis (ADC2) to see if it would fix a channel offset problem; it did not.

DAQ, Dave:

DAQ restart to support model changes.

PCAL camera remote control, Sudarshan, Dave:

see  alog entries for details.

H1 SUS
travis.sadecki@LIGO.ORG - posted 16:06, Tuesday 06 January 2015 - last comment - 14:29, Wednesday 07 January 2015(15900)
New photos of ETMy post-cleaning using PCal camera

Using the newly set up PCal capabilities from the Control Room (thanks Dave!), I took some new photos of the ETMy optic post FC cleaning.  Although the IR-only image is out of focus (I will take a new one tomorrow after refocusing the camera), the improvement is fairly evident.  The first photo is with green only locked, the second photo is with IR only locked.  I used the same camera settings as were used pre-cleaning (F8, ISO 200, 30 sec. exposure, WB-cloudy).

Images attached to this report
Comments related to this report
travis.sadecki@LIGO.ORG - 16:10, Tuesday 06 January 2015 (15903)

For easy comparison, see attached composite photo pre-cleaning.  ETMy is the right hand set of photos.

Images attached to this comment
travis.sadecki@LIGO.ORG - 14:29, Wednesday 07 January 2015 (15922)

As promised, a more in-focus post-cleaning IR-locked ETMy image.

Images attached to this comment
H1 General
jim.warner@LIGO.ORG - posted 16:04, Tuesday 06 January 2015 (15902)
Ops Shift Summary
8:45 Bubba, et al craning BSC container over beam tube
10:30 DaveB to EY to shut down EY Sus
11:30 JeffB shut down LVEA dust monitor 6
11:30 Sudarshan to EY
13:45 Travis to EY
14:30 Sudarshan to EY for PCAL
 
H1 SUS (ISC, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:01, Tuesday 06 January 2015 (15899)
Note -- ETMX Guardian Paused, Some non-Guardian Controlled Settings Changed
J. Kissel B. Weaver

We've paused the ETMX guardian, changed the M0 coil driver state to acquire (i.e. H1:SUS-ETMX_BIO_M0_STATEREQ = 1), and changed the M0 P and Y TEST filter banks to have a gain of +1.0, instead of the copies of the optic align gains they normally are. These are the things we need for transfer functions that we don't initially remember to change after having been away from it for a while. We pause the guardian so that the commissioning vanguard can still use the LSC CONFIGS without constantly messing around with ETMX's alignment offsets (which also screw up measurements).

We'll restore the configuration to its nominal state once we're confident the SUS is free.
H1 SEI
jim.warner@LIGO.ORG - posted 16:00, Tuesday 06 January 2015 (15898)
Sensor correction turn on script modified.

I've modified my sensor correction script; it now turns on the HEPI Z sensor correction on HAM3 instead of the ISI. This is hopefully temporary (re: .6hz weirdness on this chamber), and I haven't had a chance to test it, so it may be broken; best of luck if it's needed. See my log 15591 for usage instructions.

H1 CDS (CAL)
david.barker@LIGO.ORG - posted 15:59, Tuesday 06 January 2015 (15897)
both PCAL cameras remotely controlled by their respective Sofortbild computers

Rick, Sudarshan, Dave:

We were successful in getting both PCAL cameras remotely controlled from the MSR. Each camera has its own mac-mini, on which the Sofortbild software is running. Just before the holidays Rick and I paired the Mac-Minis (MM) to their cameras via a local USB connection.

At the beginning of today, all systems were powered up but neither MM was seeing its camera.

Sudarshan went to EY, and via the camera menu reconfigured the UT-1 network adapter. At this point camera h1pcalcamy established a connection to MM h1pcaly and Sofortbild took a picture (see earlier alog).

Sudarshan went to EX and reconfigured h1pcalcamx's UT-1 via the camera menu. At this point h1pcalcamx established a connection with h1pcalx and Sofortbild took a picture.

At this point we are able to take ETMX and ETMY pictures simultaneously.

For a test, we then powered down the MM h1pcalx. This did not affect ETMY picture taking.

When MM h1pcalx was powered back up and rebooted, its Sofortbild could not establish a connection with h1pcalcamx (ETMY picture taking was still good).

Sudarshan power cycled h1pcalcamx's UT-1. At this point h1pcalx re-established connection to h1pcalcamx but h1pcaly lost its connection to h1pcalcamy.

Sudarshan went to EY and power cycled h1pcalcamy's UT-1, now h1pcaly re-established connection with h1pcalcamy and ETMX picture taking is still good.

So if one or both Mac-Minis are power cycled, it appears we may have to power cycle both cameras' UT-1 network adapters. This is not too bad, as these units are located outside of the camera enclosure, but it does require driving to both end stations.

I was testing using the MSR iMac to remote desktop to the MM. I have now started the remote desktop connection in the control room on the iMac to the left of the sci-mon station. We should leave that running for Betsy and Travis.

H1 SEI (DetChar)
jim.warner@LIGO.ORG - posted 15:55, Tuesday 06 January 2015 (15886)
More HAM3 Sensor correction

I've taken higher resolution measurements of the .6hz feature. I took 2 measurements over Sunday night, one with a BW of 3mhz and one with a BW of .7mhz. Jeff walked me through calculating the magnitude of the peak, and he says that the feature is non-stationary. The calculated magnitude for the .7mhz measurement is 1.17 (not sure of the units, but probably nm/s?); for the 3mhz measurement it's 1.66.  The attached image shows a close-up of the peak; the x-axis window is from .55 to .65 hz. The blue trace is the .7mhz measurement, red is the 3mhz measurement. If Keith Riles, or someone in DetChar, could look at this more closely (specifically a high resolution spectrogram of this), it would be appreciated.
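
A minimal Python sketch of one way to get such a peak magnitude from an ASD (an assumed method, not necessarily the one Jeff used; for a stationary line the band-integrated RMS should not depend on the measurement BW):

import numpy as np

def line_rms(freq, asd, f_lo=0.55, f_hi=0.65):
    """Estimate the RMS amplitude of a narrow line by integrating the
    PSD (ASD squared) over a band around the peak.  A result that
    changes a lot between the 0.7 mHz and 3 mHz measurements is a hint
    of non-stationarity."""
    band = (freq >= f_lo) & (freq <= f_hi)
    df = freq[1] - freq[0]            # assumes a uniform frequency grid
    return np.sqrt(np.sum(asd[band] ** 2) * df)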

Images attached to this report
H1 SUS (CDS)
jeffrey.kissel@LIGO.ORG - posted 14:54, Tuesday 06 January 2015 (15889)
Old Guardian Parts removed from many SUS
J. Kissel, J. Warner, D. Barker

In order to close out ECR E1400295, WP 4987, and Integration Issue 921, I've removed the remaining old guardian infrastructure from the TMTS, HSTS, HTTS, OMCS and HAUX. This required updating the userapps/sus/common/models/ directory (which removes the guardian block from the common library parts for the OMCS and TMTS), and removing the guardian blocks myself from the HSTS_MASTER.mdl, MC_MASTER.mdl, RC_MASTER.mdl, h1sushtts.mdl, and h1susim.mdl. Once removed, I recompiled, reinstalled, restarted, restored all affected models (basically, every suspension but the QUADs, BS, and P/SR3), and had Jim and Dave help me restore the SEI system and perform a DAQ restart, respectively. All common and site-specific model changes have been committed to the userapps repo.

All old guardian infrastructure has already been removed from the MEDM overview screens EXCEPT for the HSSS, i.e. the RMs, OMs, and IMs. I'll leave that for another day, but that should be the last thing LHO needs to do for this ECR, and it should require no more front-end code updates and/or code restarts.

Note that all new safe.snaps were captured for the affected SUS *before* the computer restarts. I'd confirmed that all *corner station* cavities (via camera views) are well aligned. I did *not* reaffirm the Y arm cavity, and of course this bit me. Sheila had changed the gain for the H1 SUS TMSY OPTICALIGN Pitch to be negative (see LHO aLOG 14161), but never captured a safe.snap for it. For some reason, bringing the TMS to SAFE via guardian reverted the gain to positive, flipping the sign of the alignment offset, which distorted the green / red beam pointing into / out of the cavity. We've since fixed the sign, then captured and committed a new safe.snap. Subsequently, *all* new safe.snaps have been committed.
 
Non-image files attached to this report
H1 SUS
betsy.weaver@LIGO.ORG - posted 13:43, Tuesday 06 January 2015 (15895)
ETMy health check

I ran a quick P and V TF on the ETMy main chain, since we didn't get an under-vacuum set after the Dec pump down.  The P and V TFs line up well with the model, so, so far so good.

 

Note, we've set the guardian state to PAUSE and the M0 CD STATE request to 1.0 from 2.0 (this switches the analog dewhitening in order to run TFs).

H1 SEI (SEI)
fabrice.matichard@LIGO.ORG - posted 13:13, Tuesday 06 January 2015 (15894)
HAM3 Sensor Correction

Now that HAM2 and HAM3 are both using the same ground instruments for sensor correction, I looked again at the coherence between their sensor correction channels, looking for a possible noise line at 0.6Hz.

I am using the inputs of the FIR filter banks:

H1:ISI-HAM2_SENSCOR_GND_STS_X_FIR_IN1_DQ

H1:ISI-HAM3_SENSCOR_GND_STS_X_FIR_IN1_DQ

 

The figure on the left shows the coherence between HAM2 and HAM3 channels.

The figure in the middle shows the transfer functions.

The figure on the right shows the ASDs of HAM3, and the incoherent part between the HAM2 and HAM3 signals. It is well above the theoretical ADC noise, but I don't see any sharp feature at 0.6Hz.
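
For reference, a minimal Python sketch of this kind of coherence / incoherent-residual calculation (the sample rate, FFT length, and synthetic stand-in data are assumptions; the real inputs would be the two FIR_IN1_DQ channels listed above):

import numpy as np
from scipy import signal

fs = 256.0              # assumed sample rate [Hz]
nfft = int(1024 * fs)   # ~1 mHz resolution, enough to see features near 0.6 Hz

# Stand-in data: two channels sharing a common "ground" signal plus
# independent sensor noise (the real data would come from NDS).
rng = np.random.default_rng(0)
common = rng.standard_normal(int(3600 * fs))
x2 = common + 0.1 * rng.standard_normal(common.size)   # HAM2 channel
x3 = common + 0.1 * rng.standard_normal(common.size)   # HAM3 channel

f, coh = signal.coherence(x2, x3, fs=fs, nperseg=nfft)
_, p3 = signal.welch(x3, fs=fs, nperseg=nfft)
asd3 = np.sqrt(p3)                      # ASD of HAM3
asd3_incoh = asd3 * np.sqrt(1 - coh)    # part of HAM3 not coherent with HAM2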

Images attached to this report
H1 CDS
james.batch@LIGO.ORG - posted 11:53, Tuesday 06 January 2015 - last comment - 14:45, Tuesday 06 January 2015(15890)
Update control room software for Ubuntu workstations
WP #4966

The following software has been updated for Ubuntu control room workstations:

awgstream (no change, recompile against new libraries)
gds (foton, diaggui, diag, chndump, awggui, dmtviewer, and others), fixing bugzilla 288, 754, 755, 760, 761.
nds2-client  (nds2_channel_source, nds2-tunnel, nds-client-config, nds_query)
root (CERN root libraries and root)

OS X versions of the software will be updated next week.

Comments related to this report
james.batch@LIGO.ORG - 14:45, Tuesday 06 January 2015 (15896)
After installation of the nds2-client-0.10.5 update, it was discovered that the user environment setup script omitted setting the PYTHONPATH environment variable, so python scripts that attempted to "import nds2" failed.  This has been repaired as of 2:30 PM.
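
A minimal Python check of the symptom (just a diagnostic sketch; the actual environment setup script and install paths are site-specific):

import sys
try:
    import nds2
    print("nds2 bindings found at", nds2.__file__)
except ImportError:
    print("import nds2 failed; sys.path currently contains:")
    for p in sys.path:
        print("  ", p)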
H1 SUS (ISC)
jeffrey.kissel@LIGO.ORG - posted 18:49, Tuesday 23 December 2014 - last comment - 12:30, Friday 09 January 2015(15809)
H1 SUS ETMY ESD Turned ON, Linearization Force Coefficient ... Explained?
J. Kissel, R. McCarthy

At my request, after seeing that the EY BSC 10 vacuum pressure has dropped below 1e-5 [Torr] (see attached trend), Richard has turned on the H1 SUS ETMY ESD at ~2pm PST. I'm continuing to commission the chain, and will post functionality results shortly. 

Also -- 

I've found the ESD linearization force coefficient (H1:SUS-ETMX_L3_ESDOUTF_LIN_FORCE_COEFF) to be -180000 [ct]. I don't understand where this number came from, and I couldn't find any aLOGs explaining it. I've logged into LLO; their coefficient is -512000 [ct]. There's no aLOG describing their number either, but I know from conversations with Joe Betz in early December 2014 that he installed this number when the LLO linearization was switched from before the EUL2ESD matrix to after. When it was before the EUL2ESD matrix, the coefficient was -128000 = -512000/4, so he was accounting for the factor of 0.25 in the EUL2ESD matrix. I suspect that -128000 [ct] came from the following simple model of the longitudinal force, F_{tot}, on the optic as a result of the quadrant's signal voltage, V_{S}, and the bias voltage, V_{B} (which we know is incomplete now -- see LLO aLOG 14853):
     F_{tot} = a ( V_{s} - V_{B} )^2
     F_{tot} = a ( V_{s}^2 - 2 V_{s} V_{B} + V_{B}^2)
     F_{lin} = 2 a V_{s} V_{B}
where F_{lin} is the linear term in the force model, and a is the force coefficient that turns whatever units V_{S} and V_{B} are in ([ct^2] or [V_{DAC}^2] or [V_{ESD}^2]) into longitudinal force on the test mass in [N]. I *think* the quantity (2 a V_{B}) was mistakenly treated as simply (V_{B}), which has always been held at 128000 [ct] (or the equivalent of 390 [V] on the ESD bias pattern), and the scale factor (2 a) was ignored. Or something. But I don't know.
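
A quick symbolic check of that expansion, as a sympy sketch (note the cross term carries a minus sign; the F_{lin} line above quotes its magnitude):

import sympy as sp

a, Vs, Vb = sp.symbols('a V_s V_B', positive=True)
F_tot = sp.expand(a * (Vs - Vb)**2)    # -> a*V_B**2 - 2*a*V_B*V_s + a*V_s**2
F_lin = F_tot.coeff(Vs, 1) * Vs        # term linear in V_s: -2*a*V_B*V_s
print(F_tot, F_lin)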

So I try to make sense of these numbers below.

Looking at what was intended (see T1400321) and what was eventually analytically shown (see T1400490), we want the quantity 
       F_{ctrl}
       -------
      2 k V_{B}^2
to be dimensionless, where F_{ctrl} is the force on the optic caused by the ESD. Note that comparing John / Matt / Den's notation against Brett / Joe / my notation, k = a. As written in T1400321, F_{ctrl} was assumed to have units of [N], and V_{B} to have units of [V_{esd}], such that k has units of [ N / (V_{esd}^2) ], and it's the number we all know from John's thesis, k = a = 4.2e-10 [N/V^2]. We now know the number is smaller than that because of the effects of (we think) charge (see, e.g. LHO aLOG 12220, and again LLO aLOG 14853).

In the way that the "force coefficient" has been implemented in the front end code -- as an EPICS variable that comes into the linearization block as "Gain_Constant_In" (see attached) -- I think the number magically works out to be ... close. As implemented, the linearized quadrant's signal voltage is as shown in Eq. 13 of T1400490, except that the EPICS record, which we'll call G, is actually multiplied in
     V_{S} = V_{C} + V_{B} ( 1 - sqrt{ 2 [ (F_{ctrl} / V_{B}^2) * G + 1 + (V_{C}/V_{B}) + (V_{C}/V_{B})^{2} * 1/4 ] } )
Note that we currently have all of the effective charge voltages set to 0 [ct], so the equation just boils down to the expected
     V_{S} = V_{B} ( 1 - sqrt{ 2 [ (F_{ctrl} / V_{B}^2) * G + 1 ] } )
which means that 
     G == 1 / (2 k) or k = 1 / (2 G)
and has fundamental dimensions of [V_{esd}^2 / N]. So let's take this "force coefficient," G = -512000 [ct], and turn it into fundamental units:
     G = 512000 [ct]             {{LLO}}
         * (20 / 2^18)     [V_{dac} / ct] 
         * 40              [V_{esd} / V_{dac}] 
         * 1 / (V_{B} * a) [(1 / V_{esd}) . (V_{esd}^{2} / N)]
     G = 9.5391e9 [V_{ESD}^2 / N]
   ==>
     k = 5.37e-11 [N/V_{ESD}^2]  {{LLO}}
where I've used V_{B} = 400 [V_{esd}] and the canonical a = 4.2e-10 [N/V_{esd}^2], originally from pg 7 of G0900956. That makes LLO's coefficient assume the actuation strength is a factor of 8 lower than the canonical number. For the LHO number, 
     G = 180000 [ct]             {{LHO}}
         * (20 / 2^18)     [V_{dac} / ct] 
         * 40              [V_{esd} / V_{dac}] 
         * 1 / (V_{B} * a) [(1 / V_{esd}) . (V_{esd}^{2} / N)]
     G = 3.2697e9 [V_{ESD}^2 / N]
   ==>
     k = 1.53e-10 [N/V_{ESD}^2]  {{LHO}}
Which is within a factor of 3 lower, and if the ESD is as weak as we've measured it to be, it may be dead on. So maybe whoever stuck in 180000 is much smarter than I.

For now I leave in 180000 [ct], which corresponds to a force coefficient of a = 1.53e-10 [N/V_{ESD}^2].
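
For illustration, a minimal numerical sketch of the linearization as I read it (charge terms zeroed, G = 1/(2 k) as above, the canonical k, and V_{B} = 400 [V]; this is my reading of the algorithm, not a copy of the front-end code):

import numpy as np

k = 4.2e-10          # canonical force coefficient [N/V^2]
V_b = 400.0          # bias voltage [V]
G = 1.0 / (2 * k)    # linearization gain, G = 1/(2k), here in [V^2/N]

def linearized_signal(F_ctrl):
    """Signal voltage that makes the force above the static bias force
    k*V_b**2 equal to the requested F_ctrl."""
    return V_b * (1.0 - np.sqrt(2.0 * (F_ctrl / V_b**2) * G + 1.0))

F_req = 1e-8                              # requested force [N]
V_s = linearized_signal(F_req)
F_out = k * (V_s - V_b)**2 - k * V_b**2   # delivered force above the bias force
print(V_s, F_out)                         # F_out reproduces F_req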
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:33, Monday 05 January 2015 (15873)
B. Shapiro, J. Kissel

As usual, two heads are better than one when it comes to these nasty dealings with factors of two (go figure). Brett has caught a subtlety in the front-end implementation that further makes it different from the analytical approach used in T1400321 and T1400490. In summary, we now agree that the LLO and LHO EPICs force coefficients that have been installed are closer to the measured values by a factor of 4, i.e.
G = 512000 [ct] ==> k = 2.0966e-10 [N/V^2]  {{LLO}}
and
G = 180000 [ct] ==> k = 6.1168e-10 [N/V^2]  {{LHO}}
which means, though they still differ from the canonical value (from pg 7 of G0900956)
k = 4.2e-10 [N/V^2]  {{Canonical Model}}
and what we've measured (including charge) (see LHO aLOG 12220, and LLO aLOGs 14853 and 15657)
k = 2e-10 +/- 1.5e-10** [N/V^2] {{Measured}}
they're much closer. 

**I've quickly guesstimated the uncertainty based on the above-mentioned measurement aLOGs. IMHO, we still don't have a systematic estimate of the uncertainty because we've measured it so few times, in so many different ways, infrequently, and with the ion pumps still valved in, and each test mass has a different charge mean, charge location, and charge variance.

Here's how the aLOG 15809 math should be corrected: The F_{ctrl} and k = a in the analytic equations are assumed to be for the full longitudinal force. However, as implemented in the front end, the longitudinal force F_{ctrl} has already been passed through the EUL2ESD matrix, which transforms it into the quadrant-basis forces F_{ii}, dividing F_{ctrl} by 4. The EPICs force coefficient, G, should therefore *also* be divided by 4, to preserve the ratio
       F_{ctrl}            F_{ii}
       -------      =   ------------
      2 k V_{B}^2     2 k_{ii} V_{B}^2
inside the analytical linearization algorithm. In other words, as we've physically implemented the ESD, on a quadrant-by-quadrant basis,
       F_{ctrl} = F_{UL} + F_{LL} + F_{UR} + F_{LR}
where
       F_{ii} = k_{ii} (V_{ii} - V_{B})^2
and 
       k_{ii} = k / 4 = a / 4.
As such, the implemented front-end equation
        V_{ii} = V_{B} ( 1 - sqrt{ 2 [ (F_{ii} / V_{B}^2) * G + 1 ] } )
means that
      G == 1 / (2 k_{ii}) = 2 / k = 2 / a
and still has the fundamental units of [V_{esd}^2 / N]. So nothing changes about the above conversion from G in [ct] to G in [V_{esd}^2 / N]; it's simply that the conversion from G to the more well-known analytical quantity k was off by a factor of 4,
     G = 512000 [ct]             {{LLO}}
         * (20 / 2^18)     [V_{dac} / ct] 
         * 40              [V_{esd} / V_{dac}] 
         * 1 / (V_{B} * a) [(1 / V_{esd}) . (V_{esd}^{2} / N)]
     G = 9.5391e9 [V_{ESD}^2 / N]
   ==>
     k = 2.0966e-10 [N/V_{ESD}^2]  {{LLO}}
where I've used V_{B} = 400 [V_{esd}] and the canonical a = 4.2e-10 [N/V_{esd}^2], originally from pg 7 of G0900956. That makes LLO's coefficient assume the actuation strength is a factor of 2 lower than the canonical number, pretty darn close to the measured value and definitely within the uncertainty. For the LHO number, 
     G = 180000 [ct]             {{LHO}}
         * (20 / 2^18)     [V_{dac} / ct] 
         * 40              [V_{esd} / V_{dac}] 
         * 1 / (V_{B} * a) [(1 / V_{esd}) . (V_{esd}^{2} / N)]
     G = 3.2697e9 [V_{ESD}^2 / N]
   ==>
     k = 6.1168e-10 [N/V_{ESD}^2]  {{LHO}}
both of which are closer to the measured value as described above.
jeffrey.kissel@LIGO.ORG - 22:02, Tuesday 06 January 2015 (15905)
N. Smith, (transcribed by J. Kissel)

Nic called and fessed up to being the one who installed the -180000 [ct] force coefficient at LHO. Note -- this coefficient is only installed in ETMX; the ETMY coefficient is still the original dummy coefficient of 1.0 [ct].

He informs me that this number was determined *empirically* -- he drove a line at some frequency, and made sure that the requested input amplitude (driven before the linearization algorithm) was the same as the requested output amplitude (the MASTER_OUT channels) at that frequency, with the linearization both ON and BYPASSED. He recalls measuring this with a DTT session, not just looking at the MEDM screen (good!). 

Why does this work out to be roughly the right number? Take a look at the front-end equation again:
      V_{ii} = V_{B} ( 1 - sqrt{ 2 [ (F_{ii} / V_{B}^2) * G + 1 ] } )
and let's assume Nic was driving V_{ii} at a strength equal and opposite sign to the bias voltage V_{B}. With the linearization OFF / BYPASSED,
      V_{ii} = - V_{B}
Duh. With the linearization in place,
      V_{ii} = - V_{B} = - V_{B} ( 1 - sqrt{ 2 [ (F_{ii} / V_{B}^2) * G + 1 ] } )
so we want the quantity 
      ( 1 - sqrt{ 2 [ (F_{ii} / V_{B}^2) * G + 1 ] } ) = 1
which only happens if 
      (F_{ii} / V_{B}^2) * G = 1.
If Nic wants to create a force close to the maximum, it needs to be close to the maximum of 
      F_{ii,max} = 2 k_{ii} V_{B}^2, 
which makes
      2 k_{ii} * G = 1
or
      G = 1 / (2 k_{ii}) = 2 / k
which is the same result as in LHO aLOG 15873. Granted, it's late and I've waved my hands a bit, but this is me trying to justify why it feels like it makes sense, at least within the "factor of two-ish" discrepancy between the canonical value and the accepted measurements of the right number. 
jeffrey.kissel@LIGO.ORG - 12:30, Friday 09 January 2015 (15966)
I've summarized this exploration of Linearization Science in G1500036.