On Wednesday night, because we couldn't get much past DRMI due to wind, I tried changing the St1 blend on the BS ISI to match the "windy" blend used on the ITMs. This is a change that the commissioning crew and I have agreed for a while should happen, but we didn't want to disturb the IFO during data taking. Previously the BS was running the 45 mHz blends in X & Y, while the ITMs were running the 90 mHz blends in the beam direction. Looking at CPS spectra, this meant that the BS was moving a lot more at low frequency than the ITMs (see first attached spectra; BS is red and blue, ITM beam directions are in purple and black). Now that I've switched the BS, it is moving the same as the ITMs. Sheila suggested I look at the ASC signals during the subsequent lock (I left the blends in place because it was still "windy") to see if this affected the angular control of the IFO. I think at low frequency this made an improvement in MICH (second and third plots, which are ASC yaw and pitch for MICH; green is before, brown after on both). I checked the ground motion at both times to make sure the input motion was the same; there are differences, but not huge (last plot: green and brown are before, red and blue are after, solid is Y, dashed is X). I don't show Z, but it was even more similar.
ER7 will end at 08:00 on Sunday the 14th. The vent prep work will start Sunday the 14th.
Betsy will be the vent coordinator for the End-X and End-Y activities; Hugh will be the vent coordinator for the HAM-6 activities. For the vent plan, see aLOG 19085.
Beam tube cleaning continues on the X-Arm.
Bubba - working on fan repairs in the Mechanical building.
Kyle - Prep work for the vent. Goal is to have the doors ready to come off at 12:00 Monday the 15th.
Peter - Will transition the end stations to Laser Safe by locking the laser enclosures and verifying all view ports are in place. The laser will not be shut down until Monday when the doors are ready to come off.
Since my last post the wind is beginning to pick up again, nothing crazy but gusts above 20 mph. I have gotten a few tries very close; at 607 I made it all the way to LOWNOISE_ESD_ETMY, but something always seems to break the lock.
On my last attempt, ETMY and ITMX bounce and roll modes were badly rung up, I managed to damp them below 10^-11 and then as soon as I went to move on it broke lock.
Cleaning crew will be heading into the end stations in a few minutes.
Looks like I failed for tonight. Handing it off to Jeff here in a min.
model restarts logged for Thu 11/Jun/2015
no restarts reported
The winds died down about 2.5 hours ago, but locking has still been unsuccessful; it breaks lock consistently at REFL_TRANS. I have tried a handful of initial alignments, but with the same results. I eventually got past REFL_TRANS by touching PRM a little, but at DRMI_ON_POP I couldn't recover RF90 and RF18; it seemed like touching PR2 made things worse, not better, as it normally would.
I have a few more ideas I will try before I wake someone up for help.
Walked on to a very windy site (40+ mph winds) and no locking for about 10 hours. Since then, the winds have died down a little, but locking is still not happening. I can't get it past DRMI; most often it will get stuck on LOCK_DRMI_1F for a very long time.
Here's to hoping the wind will die down some more.
Sheila added a "Brief Version" of the initial alignment procedure to the OPS wiki page. It is at the top of the IA wiki, above the more verbose version used by the operators up to now. The idea is that as we become more familiar with locking the IFO, we'll need more of an outline of what to do than a step-by-step procedure with all the possibilities spelled out. She is hoping this will speed up the lock acquisition time and increase the duty cycle.
Struggled with wind all night. Never got past DRMI 1f. I did initial alignment several times (for practice and to make sure I wasn't missing something). Good luck TJ.
Times UTC
11:00 Leo and Betsy still taking charge measurements
11:03 Robert back from Y arm beam tube enclosure
1:32 Leo done with charge measurements
1:42 returned ETMx and ETMy ESD to nominal states
2:00 started initial alignment
Winds have been mid-20 to low-30 MPH since the start of the shift. Locking attempts have been unsuccessful, stalling at DRMI 1f so far. I talked to Gary in the CR at LLO a couple of times now to give him a heads up that we are in for a tough evening.
from Corey, Janeen, Gary, and Vern
Today, in consultation with the Joint Run Planning Committee of the LVC, the following timetable for concluding ER7 was settled upon.
The end date of focused data taking was 2015 June 11 8:00 am PDT.
The end date of ER7 is 2015 June 14 8:00 am PDT.
Between these dates H1 and L1 will be run in a best-effort mode to have coincident locks, with intent bits set (getting us closer to and maybe surpassing the 48hr CBC target); however, they will share the time with the following invasive measurements that have been deemed as critical before the end of ER7:
PEM Injections (H1 [Schofield] & L1)
Stochastic Burst Injection (H1 [Kissel] & L1)
Calibrations (H1 [Kissel] & L1)
Guardian (L1) and Opportunistic Commissioning (L1 and H1)
All but the Stochastic Burst injections can be done without coincident locks, and their proposers have been encouraged to wait for a loss of lock by one of the IFOs before starting their activities. Once begun, they should be carried to completion to the degree reasonably possible. We expect quiet coincident times to be available in the late evening and early morning hours, and opportunities for invasive measurements to be more likely during the daytime hours. The Stochastic injections will have to be done when the proposers are ready and the IFOs locked. The intent bits should be unset during the injections.
SCHEDULED WORK:
Nothing is totally set in stone, BUT if one of these items is scheduled to happen, it will happen (even if there is double-coincidence!). For example, if one of the above is scheduled to happen and that person comes to you to say they are ready:
Ask for an estimate of how long they'll be
Call the LLO Control Room (225.686.3131) & let the Operator know how long H1 will be down
Change Intent to Commissioning
End lock of H1, if asked to. (Is there a preferred way to take H1 down?)
Make a note of this activity in the ALOG
OPPORTUNISTIC WORK:
Others may be more flexible with when they can start their work and may choose to do so at an opportunistic time (i.e. H1 [or L1] is DOWN). In this case, do the same steps as noted above.
LLO Calls To Say L1 Will Be Down:
If L1 calls in to say they will be down to work on one of the activities above, it's up to local staff and you (the on-shift operator) to determine how to proceed. Since we won't have double-coincidence, we should jump on the opportunity to complete one of the above tasks. So make an announcement to those who might be interested (i.e. Robert or Jeff) if they're in the Control Room. If they're not in the Control Room, send an email to them (they'll likely be keeping an eye on the Summary Pages, too). If there are no takers for H1, you can leave the Intent as Undisturbed.
COORDINATION BETWEEN CONTROL ROOMS
This is a new state for running in ER7, and coordination between sites is paramount. So please don't hesitate to talk to the LLO Operator!
I've produced a 10 minute stochastic injection stream for simultaneous application at LHO and LLO when both machines are locked simultaneously. Hopefully there will still be a window of opportunity to run this test. The injection file is at h1hwinj1:/ligo/home/edward.daw/injections/SB_H1_ER7_V3.txt The livingston counterpart file is at l1hwinj1:/home/edward.daw/injections/SB_L1_ER7_V3.txt
After the LHO IFO dropped lock around 1pm, Leo and I jumped on making more charge measurements of ETMx and ETMy via the oplev/ESDs. However, we immediately reproduced the earlier observation that there seemed to be no coherence in the ETMy measurements. We fumbled around for a while looking at no coherence with signals here and there, and then invoked Kissel. Sure'nuf, the ESD drive on ETMy wasn't driving in "LO" or "HI" voltage mode. Richard headed to End Y and re-discovered that turning things off and reterminating the DAC cable fixed the problem - see alog 13335, "Lock up of the DAC channels".
The ETMy measurements are now in progress. Again.
Meanwhile, we attempted to look at why the ETMx LL ESD drive wasn't working and confirmed what Keita saw last night - it doesn't work. We see the requested drive in the monitor channels which means that the signal goes bad somewhere beyond the chassis (toward the chamber). As usual, "no one has been down there" but we're not sure how we can use this ESD to lock the IFO with a dead channel. Richard reports that he will go there tomorrow to investigate.
In case anyone ISN'T tired of hearing that the ETMx OL isn't healthy, here's a snapshot of ugly glitching in the YAW readback. (Jason has stated numerous times that he plans to swap this laser when we give him time to do it.) Just recording again here since we have to stare at it for ESD charge measurements. Ick.
Observation Bit: Undisturbed
08:00 - Take over from Nutsinee - IFO locked at LSC_FF
08:07 Robert - Working at End-Y, but not in the VEA
08:29 - Adjust ISS diffracted power from 11.1% (-2.06 V) to 8.3% (-2.09 V)
09:32 - Beam tube cleaning crew start work on X-Arm
09:55 Christina & Karen - Open/close rollup door to OSB receiving
10:00 Christina & Karen - Staging garb in the OSB cleaning area
11:52 - Beam tube cleaning crew stopping for lunch
13:05 - Lockloss
13:10 - IFO in down state for ETM charge testing
13:30 Richard - Going to End-X
13:31 - Beam tube cleaning crew start work on X-Arm
13:50 Gerardo - Going to Mid-Y to get a cable
14:12 Gerardo - Back from Mid-Y
14:25 Richard - Back from End-X
14:36 Richard - Going to End-Y to check cabling for ETM charge testing
15:25 Robert - Going to Beam tube near Mid-Y
15:46 - Beam tube cleaning crew finished for the day
16:00 - Hand over to Travis
Andy, Jess

Since the 79.2 MHz fixed frequency source was powered off (alog), we have not seen any RF beatnote/whistles in DARM at Hanford. We do see them in DARM at Livingston, however, but the mechanism there is much more complicated than at Hanford: it is not the PSL VCO beating against a fixed frequency. Since we still see whistles at Hanford in auxiliary channels, we thought we'd revisit them to see if that gives us clues for L1. We looked at the lock of Jun 11 starting at 6 UTC. We see whistles in PRCL, LSC-MCL, and sometimes in SRCL. Choosing two times, we find that the whistles correspond exactly to a beatnote of the PSL VCO frequency with a fixed frequency of 78.5 MHz (or something within a few hundred Hz of that). So it's the same simple mechanism as before, just against a different frequency. Attached are plots of two times in PRCL where we predict the exact shape of the whistle, using IMC-F as a proxy for the PSL VCO frequency. SRCL and MCL are similar. We'll go back and check other locks to see if there's any evidence for other frequencies or shifts in the frequency.
First, a question. Is there something at 34.7 MHz in the center station? I see this frequency on channel SYS-TIMING_C_FO_A_PORT_11_SLAVE_CFC_FREQUENCY_4 - the PSL VCO is number 5 on this fanout. The numerology just about works with 2*34.7 + 9.1 = 78.5, i.e. that frequency gets doubled and is seen in the 9 MHz demod of the POP and REFL PDs.

Jeff wanted me to also post an expanded version of the whistles story that I had sent by email, so here it is:

To be clear, H1 *did* have whistles in DARM. Once we got the secret decoder ring that told us how to figure out the PSL VCO frequency, we realized that the whistles in DARM were precisely a beatnote of that frequency with 79.2 MHz. As a result of that investigation, that fixed frequency was turned off, and the whistles in DARM went away. Huge success!

We also see whistles in SRCL, PRCL, and MCL. We haven't been worrying about them, since they're not in DARM. But just yesterday we decided to see if this is also a simple mechanism. As you can see from the alog, it is - at least at the times we've checked, the whistles are a beatnote against something at 78.5 MHz. I realized just a little while ago that these channels all come from 9 MHz demods, so maybe the frequency we're looking for is actually 69.5 or 87.5 MHz. We'll check whether these signals show up on POP or REFL at either LF or 45 MHz.

We know that LLO is a very different mechanism. Not only do they not have this particular fixed oscillator, but these whistles:
1. Come from multiple very different VCO frequencies.
2. The beat frequencies don't seem stable even within a lock.
3. The whistles do not follow the PSL VCO frequency. They are more like 4 to 7 times the VCO frequency. The multiplier doesn't seem stable, and sometimes the whistles seem to decouple a bit from the VCO frequency.
4. The whistles show up at the LF, 9 MHz, and 45 MHz PDs, on REFL and POP. Different crossings show up in different photodiodes and with different strengths.
So you can see why we want to tackle Hanford first. I was hoping it would be more complicated but tractable, and that would give us a clue to what's going on in L1. In case you're wondering whether this is academic, the CBC search loves triggering on the whistles at LLO, and it's hard to automatically reject these because they look like linear or quadratic broadband chirps. I think these give the burst search trouble as well. We'll probably spend another day nailing down the case at Hanford, then look over all ER7 to figure out what was going on at L1.
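The simple H1 mechanism above can be sketched numerically. This is only an illustration: the 78.5 MHz fixed frequency and the use of IMC-F as a VCO proxy follow the text, but the function and the example array here are made up for the sketch.

```python
import numpy as np

def predicted_whistle_track(f_vco, f_fixed=78.5e6):
    """Predicted whistle frequency: beatnote of the (time-varying)
    PSL VCO frequency against a fixed oscillator."""
    return np.abs(np.asarray(f_vco) - f_fixed)

# IMC-F, calibrated to Hz around the nominal VCO frequency, serves as a
# proxy for the PSL VCO; here a made-up excursion for illustration.
f_vco = 78.5e6 + np.linspace(-2000.0, 2000.0, 5)
print(predicted_whistle_track(f_vco))  # sweeps down to 0 Hz and back up
```

When the VCO frequency crosses the fixed frequency, the predicted track sweeps down to DC and back up, which is the characteristic whistle shape seen in the spectrograms.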
I've taken a look at guardian state information from the last week, with the goal of getting an idea of what we can do to improve our duty cycle. The main message is that we spent 63% of our time in the nominal low noise state, 13% in the down state (mostly because DOWN was the requested state), and 8.7% of the week trying to lock DRMI.
Details
I have not taken into account whether the intent bit was set during this time; I'm only considering the guardian state. These are based on 7 days of data, starting at 19:24:48 UTC on June 3rd. The first two pie charts show the percentage of time during the week the guardian was in a particular state; the second chart shows only the unlocked time. For legibility, states that took up less than 1% of the week are unlabeled, and some of the labels are slightly in the wrong position, but you can figure out where they should be if you care.
DOWN as the requested state
We were requesting DOWN for 12.13% of the week, or 20.4 hours. DOWN could be the requested state because operators were doing initial alignment, we were in the middle of maintenance (4 hours), or it was too windy for locking. Although I haven't done any careful study, I would guess that most of this time was spent on initial alignment.
There are probably three ways to reduce the time spent on initial alignment:
Bounce and roll mode damping
We spent 5.3% of the week waiting in states between lock DRMI and LSC FF, when the state was already the requested state. Most of this was after RF DARM, and is probably because people were trying to damp bounce and roll modes or waiting for them to damp. A more careful study of how well we can tolerate these modes being rung up will tell us whether it is really necessary to wait, and better automation using the monitors can probably help us damp them more efficiently.
Locking DRMI
We spent 8.7% of the week locking DRMI, or 14.6 hours. During this time we made 109 attempts to lock it (10 of these ended in ALS locklosses), and the median time per lock attempt was 5.4 minutes. From the histogram of DRMI locking attempt times (3rd attachment), you can see that the mean locking time is increased by 6 attempts that took more than half an hour, presumably either because DRMI was not well aligned or because the wind was high. It is probably worth checking whether these were really due to wind or something else. This histogram includes unsuccessful as well as successful attempts.
Probably the most effective way to reduce the time we spend locking DRMI would be to prevent locklosses later in the lock acquisition sequence, which we have had many of this week.
Locklosses
A more careful study of locklosses during ER7 needs to be done. The last plot attached here shows from which guardian state we lost lock; they are fairly well distributed throughout the lock acquisition process. The locklosses from states after DRMI has locked are more costly to us, while locklosses from the state "locking arms green" don't cost us much time and are expected as the optics swing after a lockloss.
I used the channel H1:GRD-ISC_LOCK_STATE_N to identify locklosses to make the pie chart of locklosses here; specifically, I looked for times when the state was LOCKLOSS or LOCKLOSS_DRMI. However, this is a 16 Hz channel and we can move through the lockloss state faster than 1/16th of a second, so doing this I missed some of the locklosses. I've added 0.2 second pauses to the lockloss states to make sure they will be recorded by this 16 Hz channel in the future. This could be a bad thing, since we should move to DOWN quickly to avoid ringing up suspension modes, but we can try it for now.
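The sampling problem can be illustrated with a toy calculation (this is not Guardian code, just arithmetic on the 16 Hz record rate):

```python
import numpy as np

# Toy illustration: a state that lasts less than one 16 Hz sample period
# can fall entirely between samples and never appear in the recorded
# STATE_N channel.
FS = 16.0  # Hz, EPICS record rate

def state_recorded(t_enter, duration, fs=FS):
    """True if any sample instant k/fs lands inside [t_enter, t_enter + duration)."""
    k_first = np.ceil(t_enter * fs)
    return k_first / fs < t_enter + duration

print(state_recorded(0.01, 0.30))   # 0.3 s in the lockloss state: recorded
print(state_recorded(0.01, 0.04))   # 40 ms pass-through: missed here
```

Any dwell time longer than one sample period (62.5 ms) is guaranteed to be caught, which is why the added 0.2 s pause fixes the bookkeeping.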
A version of the lockloss pie chart that spans the end of ER7 is attached.
I'm bothered that you found instances of the LOCKLOSS state not being recorded. Guardian should never pass through a state without registering it, so I'm considering this a bug.
Another way you should be able to get around this in the LOCKLOSS state is by just removing the "return True" from LOCKLOSS.main(). If main returns True, the state will complete immediately, after only the first cycle, which apparently can happen in less than one CAS cycle. If main does not return True, then LOCKLOSS.run() will be executed, which defaults to returning True if not specified. That will give the state one extra cycle, which will bump its total execution time to just above one 16th of a second, thereby ensuring that the STATE channels will be set at least once.
reported as Guardian issue 881
Note that the corrected pie chart includes times that I interpreted as locklosses but that were in fact times when the operators made requests that sent the IFO to DOWN. So the message is that the true picture of locklosses is somewhere between the first and the second pie charts.
I realized this new mistake because Dave asked me for an example of a gps time when a lockloss was not recorded by the channel I grabbed from nds2, H1:GRD-ISC_LOCK_STATE_N. An example is
1117959175
I got rid of the return True from main and added run methods that just return True, so hopefully next time around the saved channel will record all locklosses.
00:00 The ifo locked right before I came in. Wind speed is <20 mph. The 90 mHz blend filter is in use for BSC2.
I noticed the BS oplev sum is saturated (> 80000 counts). Is this alright? It's been around this value for 10+ days.
01:55 There's a big bump at ~30 Hz that caused a big dip in the BNS range. SUS oplev plots didn't show anything suspicious. The bump at this frequency happened throughout the night, just not as big.
02:00 A 4.7 MAG earthquake in Ecuador shook PR3 a little and BNS range dropped slightly (from 61 Mpc to 60 Mpc), but that's all it did. No WD tripped.
08:00 We've been locked for 8+ hours and still going strong at 61 Mpc! We had 5+ hours of coincidence with Livingston tonight. Handing the ifo off to Jeff B.
Judging from the normalized spectrograms on the summary pages, the 30 Hz noise looks like occasional scattering noise, likely from the alignment drives sent to the OMC suspension. Currently the Guardian sets the OMC alignment gain at 0.2 (for a UGF of around 0.1-0.5 Hz in the QPD alignment loops). This is probably too high from a scattering-noise perspective; it can be reduced by a factor of two without ill effects.
To follow up on this noise, here is a plot of one of the noise bursts around 20-30 Hz, alongside the OMC alignment control signals. The noise has the classic scattering-arch shape, and it is correlated with the ANG_Y loop, which sends a large signal to the OMC SUS. We've seen this kind of thing before. The start time for the plot is 09:27:10 UTC, June 11 (the time axes of the two plots are a little off, because apparently indexing for mlab PSDs is the hardest thing I've had to do in grad school).
The second plot attached compares the OMC-DCPD_SUM and NULL channels at the time of the noise bursts in the first plot, to a quiet time one minute prior. The scattering noise is largely coherent between the two DCPDs.
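For reference, the peak frequency of a scattering arch is set by fringe wrapping: f = 2v/λ for a scatterer moving with velocity v. A quick sketch of the numbers for ~30 Hz arches (assuming the 1064 nm PSL wavelength; the function name is mine):

```python
LAMBDA = 1.064e-6  # m, Nd:YAG wavelength

def fringe_freq(velocity):
    """Instantaneous fringe (scattering-arch peak) frequency, f = 2*v/lambda."""
    return 2.0 * abs(velocity) / LAMBDA

# Scatterer velocity needed to push arches up to ~30 Hz:
v = 30.0 * LAMBDA / 2.0
print(v)               # ~16 microns/s
print(fringe_freq(v))  # ~30 Hz
```

A scatterer velocity of only ~16 µm/s is enough to reach 30 Hz, which is consistent with large alignment drives to the OMC SUS producing noise in this band.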
Sudarshan, Kiwamu, Darkhan,
Abstract
According to the PCALY line at 540.7 Hz, the DARM cavity pole frequency dropped by roughly 7 Hz going from the 17 W configuration to 23 W (alog 18923). The frequency remained constant after the power increase to 23 W. This certainly impacts the GDS and CAL-CS calibration, by 2% or so above 350 Hz.
Method
Today we've extracted CAL-DELTAL data from ER7 (June 3 - June 8) to track the cavity pole frequency shift in this period. The only usable portions of data are those when DARM was in a stable lock, so for our calculation we used filtered data, keeping only samples at GPS times when the guardian state was > 501.
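The state cut can be sketched in a couple of lines (the arrays here are toy stand-ins, not the real channels):

```python
import numpy as np

# Hypothetical arrays: guardian state and the measured Pcal-line phase,
# sampled on a common time base.
state = np.array([500, 502, 600, 300, 510])
phase = np.array([0.10, 0.12, 0.11, 0.50, 0.13])

mask = state > 501    # keep only stable-lock samples
print(phase[mask])    # phases at states 502, 600, 510 survive the cut
```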
From an FFT at a single frequency it is possible to obtain the DARM gain and the cavity pole frequency from the phase of the DARM line at a particular frequency at which the drive phase is known or not changing. Since the phase of the resultant FFT does not depend on the optical gain but only on the cavity pole, looking at the phase essentially gives us information about the cavity pole (see for example alog 18436). However, we do not know the phase offset due to the time delay and perhaps some uncompensated filter. We've therefore decided to focus on cavity pole frequency fluctuations (Delta f_p) rather than trying to find the actual cavity pole. In our calculations we have assumed that the changes in phase come entirely from cavity pole frequency fluctuations.
The phase of the DARM optical plant can be written as
phi = arctan(- f / f_p),
where f is the Pcal line frequency;
f_p - the cavity pole frequency.
Since this equation does not include any dependence on the optical gain, the measured value of phi does not, to our knowledge, get disturbed by changes in the optical gain. Introducing a first order perturbation in f_p, one can linearize the above equation to the following:

Delta f_p = ((f_p^2 + f^2) / f) * (Delta phi)
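The linearized coefficient follows from differentiating the phase with respect to f_p:

```latex
\phi = \arctan\!\left(-\frac{f}{f_p}\right)
\quad\Longrightarrow\quad
\frac{\partial \phi}{\partial f_p}
  = \frac{f/f_p^2}{1 + f^2/f_p^2}
  = \frac{f}{f_p^2 + f^2}
```

so, to first order, Delta phi = f / (f_p^2 + f^2) * Delta f_p, and inverting gives Delta f_p = (f_p^2 + f^2) / f * Delta phi, as above.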
An advantage of using this linearized form is that we don't have to do an absolute calibration of the cavity pole frequency, since it focuses on fluctuations rather than absolute values.
Results
Using f_p = 355 Hz, the frequency of the cavity pole measured at the particular time (see alog 18420), and f = 540.7 Hz (Pcal EY line freq.), we can write Delta f_p as
Delta f_p = 773.78 * (Delta phi)
Delta f_p trend based on ER7 data is given in the attached plot: "Delta phi" (in degrees) in the upper subplot and "Delta f_p" (in Hz) in the lower subplot.
Judging by the overall trend in Delta f_p, we can say that the cavity pole frequency dropped by about 7 Hz after June 6, 3:00 UTC; this corresponds to the time when the PSL power was changed from 17 W to 23 W (see lho alog 18923, [WP] 5252).
Delta phi also shows fast fluctuations of about +/-3 degrees, and right now we do not know what causes this "fuzziness" of the measured phase.
Filtered channel data was saved into:
aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS/H1-calib_1117324816-1117670416_501above.txt (@ r737)
Scripts and results were saved into:
aligocalibration/trunk/Runs/ER7/H1/Scripts/PCAL_TRENDS (@ r736)
Clarifications
Notice that this method does not give an absolute value of the cavity pole frequency. The equation
Delta f_p = 773.78 * (Delta phi)
gives a first order approximation of change in cav. pole frequency with respect to change in phase of Pcal EY line in CAL-DELTAL at 540.7 Hz (with the assumptions given in the original message).
Notice that (Delta phi) in this equation is in "radians", i.e. (Delta f_p) [Hz] = 773.78 [Hz/rad] (Delta phi) [rad].
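For concreteness, the coefficient and the degrees-to-Hz conversion can be checked numerically (the values are the ones quoted above; the function name is mine):

```python
import math

f_p = 355.0   # Hz, cavity pole from alog 18420
f   = 540.7   # Hz, Pcal EY line frequency

coeff = (f_p**2 + f**2) / f   # Hz per radian
print(coeff)                  # ~773.78 Hz/rad

def delta_fp(delta_phi_deg):
    """Cavity-pole shift for a Pcal-line phase shift given in degrees."""
    return coeff * math.radians(delta_phi_deg)

# A phase shift of about -0.52 deg corresponds to roughly a -7 Hz pole shift:
print(delta_fp(-0.52))
```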
Darkhan, did you also look at the low frequency (~30 Hz) line, both amplitude and phase? If these variations come from just the cavity pole, then there shouldn't be any changes in either amplitude or phase at low frequencies (below the cavity pole). If there is a change only in gain, then it is optical gain. Any change in the phase would indicate a more complex change in the response of the detector.
Using two hours of undisturbed data from last night's 66 Mpc lock, I repeated Den's sum/null stream analysis in order to see if we have a similar 1/f^(1/2) excess in our residual.
I took the OMC sum/null data (calibrated into milliamps), undid the effect of the DARM OLTF in order to get an estimate for the freerunning OMC current, and then scaled by the DARM optical gain (3.5 mA/pm, with a pole at 355 Hz) to get the equivalent freerunning DARM displacement. The residual is then the quadrature difference between the sum and null ASDs.
The attachment shows the sum, null, and residual ASDs, along with the anticipated coating Brownian noise from GWINC. [Just to be clear: the "sum" trace on this plot corresponds to our usual freerunning DARM estimate, although in this case it comes purely from the error signal rather than a combination of the error and control signals.]
If there is some kind of excess 1/f^(1/2) noise here, it is not yet large enough to dominate the residual. Right now it looks like the residual is at least a factor of 2.2 higher than the expected coating noise at all frequencies. We already know some of this is intensity noise.
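The residual described above is just the quadrature difference of the sum and null ASDs. A minimal numpy sketch (function name and toy values are mine, not from the analysis code):

```python
import numpy as np

def residual_asd(asd_sum, asd_null):
    """Quadrature difference of the sum and null ASDs: the excess in the
    sum stream over the sensing noise measured by the null stream."""
    asd_sum, asd_null = np.asarray(asd_sum), np.asarray(asd_null)
    # Clip protects against small negative values where null > sum.
    return np.sqrt(np.clip(asd_sum**2 - asd_null**2, 0.0, None))

# Toy bins: where sum = 5 and null = 4 (arbitrary units), residual = 3;
# where the streams are equal, the residual is zero.
print(residual_asd([5.0, 2.0], [4.0, 2.0]))
```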
The other thing to note here is that we are evidently not completely dominated by shot noise above 1 kHz.
I repeated this on a lock stretch from 2015-06-07 00:00:00Z to 02:00:00Z, but the result is pretty much the same. The best constraint we can put on coating noise right now from the residual is about a factor of 2.2 higher than the GWINC prediction. I also think the residual is not yet clean enough in this frequency band to make an inference about its spectral shape.
I tried increasing the CARM gain by 3 dB to see if the residual would decrease, but it does not (except maybe around 6 kHz; see the attached dtt pdf). So this broadband excess in the sum may not be frequency noise.
There is an error in the above plots.
Only the DCPD sum should be corrected by the DARM OLTF to get the equivalent freerunning noise; the DCPD null should not be corrected. To refer the noise to DARM displacement, however, all these quantities must be corrected by the DARM cavity pole.
This time I've included the DCPD dark noise (sum of A and B), also not corrected by the loop gain.
A few more corrections and additions:
Dan, Travis
Tonight during our long lock we measured the decay time constant of the ITMX bounce mode. At 10:10 UTC we set the intent bit to "I solemnly swear I am up to no good" and flipped the sign on the ITMX_M0_DARM_DAMP_V filter bank and let the bounce mode ring up until it was about 3e-14 m/rt[Hz] in the DARM spectrum. Then, we zeroed the damping gain and let the mode slowly decay over the next few hours.
We measured the mode's Q by fitting the decay curve in two different datasets. The first dataset is the 16Hz-sampled output of Sheila's new RMS monitors; the ITMX bandpass filter is a 4th-order butterworth with corner frequencies of 9.83 and 9.87Hz (the mode frequency is 9.848Hz, +/- 0.001 Hz). This data was lowpassed at 1Hz and fit with an exponential curve.
For the second dataset I followed Koji's demodulation recipe from the OMC 'beacon' measurement. I collected 20 seconds of DELTAL_EXTERNAL_DQ data every 200 seconds; bandpassed between 9 and 12 Hz, demodulated at 9.848 Hz, and lowpassed at 2 Hz; and collected the median value of the sum of the squares of the demod products. Some data were discarded at the edges of each 20-sec segment to avoid filter transients. These every-200-sec datapoints were fit with an exponential curve.
Results attached; the two methods give different results for Q:
RMS channel: 594,000
Demodulated DARM_ERR: 402,000
I fiddled with the data collection parameters and filtering parameters for both fits, but the results were robust. When varying parameters for each method, the results for Q were repeatable within +/- 2,000, which gives some sense of the lower limit on the uncertainty of the measurement. (The discrepancy between the two methods gives a sense of the upper limit...) Given a choice between the two, I think I trust the RMS channel more; the demod path has more moving parts and there could be a subtlety in the filtering that I am overlooking. The code is attached.
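As a rough sketch of the envelope-fit method (this is not the attached code): the ringdown envelope is A(t) = A0 * exp(-pi * f0 * t / Q), so a linear fit to log(A) gives Q directly. The synthetic numbers below are only for checking that the fit recovers a known Q.

```python
import numpy as np

def fit_q(t, envelope, f0):
    """Fit A(t) = A0*exp(-pi*f0*t/Q) by least squares on log(A); return Q."""
    slope, _ = np.polyfit(t, np.log(envelope), 1)  # slope = -pi*f0/Q
    return -np.pi * f0 / slope

# Synthetic check: a 9.848 Hz bounce mode with Q = 594,000
f0, q_true = 9.848, 5.94e5
t = np.linspace(0.0, 10000.0, 200)   # a few hours of decay, as in the measurement
env = 3e-14 * np.exp(-np.pi * f0 * t / q_true)
print(fit_q(t, env, f0))             # recovers ~594000
```

In practice the envelope would come from the lowpassed RMS monitor or the demodulated magnitude, and noise on the envelope sets the statistical error of the fit.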
I figured out what was going wrong with the demod measurement - not enough low-passing before the decimation step, the violin modes at ~510Hz were beating against the 256Hz sample rate. With another layer of anti-aliasing the demod results are in very good agreement with the RMS channel:
RMS channel: 594,400
Demodulated DARM_ERR: 593,800
To see what we might expect, I took the current GWINC model of suspension thermal noise and did the following. 1) Removed the horizontal thermal noise so I was only plotting vertical. 2) Updated the maraging steel phi to reflect recent measurement (LLO alog 16740) of Q of UIM blade internal mode of 4 x 10^4. (It is phi of 10^-4, Q 10^4 in the current GWINC). I did this to give better estimate of the vertical noise from higher up the chain. 3) Plotted only around the thermal noise peak and used 1 million points to be sure I resolved it. Resulting curve is attached. Q looks approx 100K, which is less than what was reported in this log. That is encouraging to me. I know the GWINC model is not quite right - it doesn't reflect tapered shape and FEA results. However to see a Q in excess of what we predicted in that model is definitely in the right direction.
Here we take the Mathematica model with the parameter set 20150211TMproduction and look at varying some of the loss parameters to see how the model compares with these measurements. The thermal noise amplitude in the vertical for the vertical bounce mode is tabulated around the resonance, and we take the full width at 1/√2 height to calculate the Q (equivalent to ½ height for the power spectrum). With the recently measured mechanical loss value for maraging steel blade springs of 2.4e-5, the Mathematica model predicts a Q of 430,000. This is a little lower than the Q measured here, but at this level the loss of the wires and the silica is starting to have an effect, and so small differences between the model and reality could show up. Turning off the loss in the blade springs altogether only takes the Q to 550,000, so other losses are sharing equally in this regime. The attached Matlab figures show the mechanical loss factor of maraging steel versus predicted bounce mode Q and versus total loss, plus the resonance as a function of loss. Angus, Giles, Ken & Borja
Since there has been some modeling afoot, I wanted to post the statistical error from the fits above, to give a sense of the [statistical] precision on these measurements. The best-fit Q value and the 67% confidence interval on the two measurements for the bounce mode are:
RMS channel: 594,410 +/- 26
Demodulated DARM_ERR: 594,375 +/- 1590
The data for the measurements are attached. Note that this is just the statistical error of the fit -- I am not sure what systematics are present that could bias the measurement in one direction or another. For example, we did not disable the top-stage local damping on ITMX during this measurement, only the DARM_CTRL --> M0 damping that is bandpassed around the bounce mode. There is also optical lever feedback to L2 in pitch, and ASC feedback to L2 in pitch and yaw from the TRX QPDs (although this is very low bandwidth). In principle this feedback could act to increase or decrease the observed Q of the mode, although the drive at the bounce mode frequency is probably very small.
After a measurement of charge on each ETM yesterday, I took a few more on each today. Attached show the results trended with the measurements taken in April and Jan of this year. There appears to be more charge on the ETMs than in previous measurements, although there is quite a spread in the measurements. The ion pumps at the end stations are valved in.
Note, the measurement was saturating on ETMy, so Kiwamu pointed me to switch the ETMy HI/LOW voltage mode and BIO state. This made the measurement run without saturation. Attached is a snapshot of the settings I used for the ETMy charge measurement.
1. I think that the results of the charge measurements of ETMY on May 28 are probably mistaken. I haven't seen any correlation in the dependence of pitch and yaw on the DC bias. 2. It seems like there was a very small response in the ETMX LL quadrant in these charge measurements. The other ETMX quadrants are ok. This correlates with the results of June 10: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19049