Dan, Nutsinee
Since knowledge of the violin mode damping is scattered all over the alog, here's the H1 Violin Mode wiki page. The table includes frequencies, test masses, damping settings, and the filters that are being used to damp those modes. All the violin fundamentals are in. Harmonics are coming.
Enjoy!
Increased the damping gain of ITMY MODE5 to 400. This now makes the 501.606 Hz mode fall at a rate of just under a decade per hour.
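(For reference, assuming a simple exponential ring-down, one decade of amplitude per hour corresponds to a time constant of 3600 s / ln(10) ≈ 1.6e3 s, i.e. an effective damped Q of roughly pi*f*tau ≈ 2.5e6 at 501.6 Hz.)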
Prior to this change, the photodiode (Tx and Rx PD) calibration coefficients were reported in metres/Volt * (1/f^2). Now, with the suspension model in place, we have calibrated the photodiodes in terms of a force coefficient (N/V). The filter banks, as shown in the attachment above, now reflect these new N/V calibration factors. Appropriate changes have been made to the DCC document (T1500252) as well.
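(As a rough sanity check in the free-mass limit, which is not the full suspension model used for the actual factors: above the suspension resonances F = M*(2*pi*f)^2*x, so an old coefficient C_x in metres/Volt * (1/f^2) corresponds to a force coefficient of about C_F ≈ M*(2*pi)^2*C_x in N/V, with M the test mass.)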
This is an additional note about the quack function: when I was using quack, I had difficulty converting a state-space suspension model into a foton filter. See the details below.
In MATLAB, I had been using something like:
quad.d = minreal( zpk( c2d(quad.ss, samplingTime, 'tustin') ), tolerance );
quad.ss is a state-space representation of the quad suspension response, and samplingTime in this case is 1/16384 sec. The reason I used the minreal function is that otherwise the result came with too many poles and zeros, exceeding the number that foton can handle. I tried adjusting the tolerance of minreal to reduce the number of poles and zeros, but it was extremely difficult: it ended up with either too many or too few. What worked was applying minreal before c2d:
quad.d = c2d( minreal( zpk(plant.ss), tolerance ), samplingTime, 'tustin');
This allowed me to get a reasonable number of poles and zeros.
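For reference, here is a minimal sketch of the comparison (assuming plant.ss holds the continuous-time quad suspension state-space model; the variable names and tolerance value are only illustrative):

samplingTime = 1/16384;   % front-end sample time [s]
tolerance    = 1e-6;      % minreal cancellation tolerance (tune as needed)
% discretize first, then reduce (the order that gave trouble):
d_first = minreal( zpk( c2d(plant.ss, samplingTime, 'tustin') ), tolerance );
% reduce the continuous-time model first, then discretize (the order that worked):
r_first = c2d( minreal( zpk(plant.ss), tolerance ), samplingTime, 'tustin' );
% foton only accepts a limited number of poles/zeros, so check the order of each:
fprintf('c2d then minreal: %d poles\n', length(pole(d_first)));
fprintf('minreal then c2d: %d poles\n', length(pole(r_first)));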
RickS, Darkhan
We adjusted 35.9 Hz TST (L3 stage only) line drive level from 0.08 ct to 0.11 ct.
Now the amplitude of the TST calibration line in the DARM_ERR readout is close to the amplitudes of the 36.7 Hz Pcal and 37.3 Hz x_ctrl calibration lines. The target SNR for these lines and for the 331.9 Hz Pcal line is 100 with 10 s FFTs (see T1500377).
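As a rough illustration of how such a line SNR can be checked offline (a sketch only, with an assumed sample rate, stand-in data, and a simple median background estimate; this is not the procedure defined in T1500377):

fs    = 16384;                  % assumed sample rate [Hz]
Tfft  = 10;                     % FFT length [s] -> 0.1 Hz bins
fline = 35.9;                   % TST calibration line [Hz]
t = (0:1/fs:60-1/fs)';
x = 1e-3*sin(2*pi*fline*t) + randn(size(t));          % stand-in for the DARM_ERR time series
[pxx, f] = pwelch(x, hann(Tfft*fs), [], Tfft*fs, fs); % one-sided PSD
[~, i]   = min(abs(f - fline));                       % bin containing the line
noise    = median(pxx(max(i-50,1):i+50));             % local background estimate
snr      = sqrt(pxx(i)/noise);                        % rough amplitude SNR per 10 s FFT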
Evan's script to automatically take frequency and intensity transfer functions is now running. As a reminder, the summing node B path was repurposed for this run; the interferometer won't relock unless we reconnect them. At 4:20 UTC we started changing the TCS X CO2 power from 0.23 W to 0.4 W. The rotation stage took us on some full circles, but by 04:23 we reached 0.4 W. At 4:45 I increased the frequency noise drive by a factor of 5 to gain back coherence. At 6:04 I decreased the power to 0.35 W, trying to find the minimal frequency noise point (the coupling sign had changed). At 6:34 I reduced it to 0.3 W.
It wasn't really clear from this run where the minimum in frequency coupling was (maybe because of the 5 W blast at the start), so I went back to 0.23 W of heating and let the frequency coupling reach a steady state (by driving the same line as before, this time with 100 ct amplitude). Around 2015-08-08 10:18:30 I kicked the TCS power to 0.53 W and started the datataking again.
Once the frequency coupling reached a steady state again, I made a guess for what TCS power we need to minimize the coupling. At 12:43:30 I changed the TCS power to 0.43 W.
I have reverted the ALS/CARM cabling to its nominal configuration.
Preliminary analysis from this second run suggests that, of the TCS powers that we probed, our current TCS power is the best in terms of intensity noise coupling.
The attached plot shows the transfer function which takes ISS outer loop PD #1 (in counts) to DCPD A (in milliamps). The coloring of the traces just follows the sequence in which they were taken. Black was taken at 0.23 W of TCS power, and the lightest gray was taken at 0.53 W of TCS power.
The measurements and the plotting script are in evan.hall/Public/Templates/LSC/CARM/FrequencyCouplingAuto/Run2.
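For reference, a minimal sketch of how a transfer function like the one in the attached plot could be estimated offline (stand-in data and an assumed sample rate; this is not Evan's actual script):

fs   = 16384;                       % assumed sample rate [Hz]
nfft = 10*fs;                       % 10 s FFTs, 50% overlap
a = randn(60*fs,1);                 % stand-in for ISS outer loop PD #1 [ct]
b = 0.5*a + 0.1*randn(size(a));     % stand-in for DCPD A [mA]
[txy, f] = tfestimate(a, b, hann(nfft), nfft/2, nfft, fs);  % ct -> mA transfer function
cxy      = mscohere(a, b, hann(nfft), nfft/2, nfft, fs);    % coherence, to judge the drive level
loglog(f, abs(txy)); grid on;
xlabel('Frequency [Hz]'); ylabel('|DCPD A / ISS PD1| [mA/ct]');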
Shivaraj, Darkhan
Summary
We replaced the whitening filter on the Delta L_{res} output channel with a 2-zero, 2-pole zpk, and the one on Delta L_{ctrl} with a 3-zero, 3-pole zpk.
Details
Madeline uses FIR filters to dewhiten Delta L_{res} and Delta L_{ctrl} in the GDS scripts. To make it easier to generate shorter dewhitening FIR filters for these channels, she requested that we replace the existing 5-zero, 5-pole zpk filters in both of these channels with simpler zpk's with fewer poles and zeros.
Shivaraj designed new FIR filters for both of these channels, and we implemented them today around 1pm.
The power spectrum of H1:CAL-CS_DARM_DELTAL_CTRL_WHITEN_OUT before applying the new whitening filter is plotted in the attached "H1-CAL-CS_DARM_DELTAL_CTRL_WHITEN_OUT_z5x1_p5x100_MAG.pdf". After applying the new whitening filter, zpk([1; 1; 1], [500; 500; 500], 1), the interferometer unfortunately was not stable enough to take a spectrum measurement, so I could not evaluate the "whiteness" of the output in this channel.
The power spectrum of H1:CAL-CS_DARM_RESIDUAL_WHITEN_OUT before applying the new whitening filter is plotted in "H1-CAL-CS_DARM_RESIDUAL_WHITEN_OUT_z5x1_p5x100_MAG.pdf". After applying zpk([1; 1], [500; 500], 1), the spectrum of the same channel is given in "H1-CAL-CS_DARM_RESIDUAL_WHITEN_OUT_z2x1_p2x500_MAG.pdf".
We also plan to implement similar whitening filters on the new (not yet existing) channels for Delta L_{TST} and Delta L_{PUM/UIM}.
Below are plots showing the effect of different whitening filters, including the one that was being used until now (5 x 1 Hz zeros and 5 x 100 Hz poles). These plots are the basis for the new filters for DeltaL residual and DeltaL control. In those plots the blue curve corresponds to the actual data and the other colors represent different whitening filters applied to the data.
With the new filters installed, we looked at DeltaL residual and DeltaL control during a recent lock. Below we have attached the spectrum of both, along with DeltaL external as a reference (which still uses the old 5 x 1 Hz zeros and 5 x 100 Hz poles filter). In the plot the channels are dewhitened with the corresponding filters.
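For a quick comparison of the old and new whitening shapes, here is a minimal sketch (zpk arguments in Hz converted to rad/s, gains left at 1, so this is not the exact foton normalization):

f = logspace(0, 3, 500); w = 2*pi*f;
old_w  = zpk(-2*pi*ones(1,5), -2*pi*100*ones(1,5), 1);  % old: 5 x 1 Hz zeros, 5 x 100 Hz poles
ctrl_w = zpk(-2*pi*ones(1,3), -2*pi*500*ones(1,3), 1);  % new DeltaL_ctrl: 3 x 1 Hz zeros, 3 x 500 Hz poles
res_w  = zpk(-2*pi*ones(1,2), -2*pi*500*ones(1,2), 1);  % new DeltaL_res: 2 x 1 Hz zeros, 2 x 500 Hz poles
loglog(f, abs(squeeze(freqresp(old_w, w))), ...
       f, abs(squeeze(freqresp(ctrl_w, w))), ...
       f, abs(squeeze(freqresp(res_w, w)))); grid on;
xlabel('Frequency [Hz]'); ylabel('Magnitude');
legend('old 5z/5p', 'new ctrl 3z/3p', 'new res 2z/2p');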
Back to locking robustly, all the way to Nominal Low Noise. Hooray!
It was a bit of a trying day, locking-wise. Jim was a champ, trying to get it back up after the many, many locklosses throughout the day. We have made several changes to the guardian scripts that seem to be helping immensely. Here are the changes that were made:
As an aside, I happened to notice some transients in the ETMX L3 ESD AMON channels pass by on Dataviewer, and looked into them a bit. These transients happened while we were turning off the ETMX ESD after transitioning to the ETMY. Thankfully, we didn't lose lock, but I post the time series of the transients anyway. You'll notice in the attached plot that we basically see no effect in DARM, although I should put these channels through the new lockloss tool from Hang, et al. to see if there are any high-frequency things in DARM that are being swamped by the low-f stuff.
Matt E., Sheila D., Jamie R., Terra H., Hang Y.
We updated our lockloss tool. A mistake in the use of the filter was fixed, and all filtering is now done using second-order sections, which are more numerically stable than using transfer function ('ba') coefficients directly. The handling of invalid channel names when fetching data from the NDS server (which was the time-limiting part of our previous version) was also updated, so that the lock loss analysis on all valid DQ channels can now be finished in a few minutes. Besides the previous code finding all sudden changes before lock loss, Terra also finished a script that picks out saturated channels. The code is available at
/opt/rtcds/userapps/trunk/isc/common/scripts/lockloss
with both a Python version and a MATLAB version.
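As an illustration of the second-order-section filtering mentioned above (a sketch with an assumed sample rate, band, and stand-in data; see the path above for the actual code):

fs   = 16384;                                   % assumed sample rate of a DQ channel [Hz]
band = [50 500];                                % example analysis band [Hz]
x    = randn(4*fs, 1);                          % stand-in for 4 s of channel data around the lock loss
[z, p, k] = butter(4, band/(fs/2), 'bandpass'); % design in zpk form
[sos, g]  = zp2sos(z, p, k);                    % convert to second-order sections (numerically stable)
y   = filtfilt(sos, g, x);                      % zero-phase band-pass filter
env = abs(y);                                   % absolute value of the filtered data, as plotted in red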
==========================================================================================================================
We applied the code to a lock loss that happened at GPS time 1122945715.0, or UTC time 08/07/2015 01:21:38. The two plots show the first 32 channels that glitched before this lock loss. In the plots, blue lines are raw time-ordered data streams of each channel, and red lines are the absolute values of the data filtered to a specific frequency band. Thin black lines are pop data, and thick vertical black lines indicate the starting point of that channel going wrong. In the two txt files, the one starting with 'lockloss' shows all DQ channels that glitched for this lock loss, ordered by the time prior to the lock loss, and the one starting with 'satLockloss' gives all channels that saturated.
Kyle, Gerardo
Today we continued looking for the leak(s) on the Y-mid turbo, and found it/one -> there is a 2 x 10^-5 torr*L/sec (for helium) leak at the factory O-ring seal for the motor wiring feedthrough -> This is thought to be a very thin cross-sectioned O-ring between the square-flanged feedthrough and the turbo motor body -> This value of leak (if the same for air) would account for the observed pressure rise for the time periods this pump has remained idle, but, not knowing the actual dimensions of the O-ring, it could also be accounted for by permeation etc. -> To test, we moved the LD and helium to the X-mid turbo and observed no indication of a leak or permeation for the same application of helium -> It is not clear how serviceable this O-ring is -> it may be that it would have to be stretched over the square flange in lieu of desoldering the motor wiring - ouch!
ALL TIMES IN UTC
15:00 IFO not locked. Appears we had a couple of earthquakes in the Tonga and Ecuador areas (~5 mag).
15:10 Guardian trying to re-lock. Touched ETMY to correct alignment. Locking sequence commencing.
15:20 IFO broke lock at "Noise_Tunings"
15:35 ESD X railed
15:39 Jim Batch to restart broadcast system
15:48 ISC_LOCK set to down (in Jim W's absence) due to the ESD at EX still being railed.
16:00 Ellie into the optics lab
16:00 ESD EX no longer railed
16:04 talked to Jim and Fil at EX to confirm reset of ESD driver.
16:05 ISC_LOCK re-initiated
16:35 Fil into CER to install Cosmic Ray chassis
17:05 Fil out of CER.
17:05 Robert into LVEA to do HF acoustic injections
17:10 Kyle and Gerardo to Y-Mid to get/leave equipment
17:20 Robert and Katie out. Lockloss
17:23 Keita and Jenne to LVEA to investigate/turn off ISS noise injection that may have been left on.
17:30 Keita and Jenne back
17:40 After the most recent lockloss, it seems the EX ESD driver is having trouble turning back on....AND it's railed
17:58 Found a bug in the code. Was able to reset remotely!
18:02 Begin locking sequence
18:02 Nutsinee popping into LVEA to take pictures at TCS X table.
18:08 Nutsinee out
18:31 Richard out on the roof
19:03 Richard back from the roof
19:07 Cheryl going to have a quick look at the exterior of IOT2R cabling.
19:11 Gerardo and Kyle back from the Mid station
20:52 Kyle and Gerardo to MY then MX then Y2-8.
22:55 Kyle and Gerardo back
Chris S., Joe D. (80%) 8/4/15 - 8/7/15: The crew installed metal strips on 120 meters of tube enclosure. That was the last of the strips that were cut to length. We had one more roll, which was cut into 10' lengths today.
Scott L., Ed P., Rodney H.
8/5/15: 79.6 meters of tube cleaned today, ending at HSW-2-026.
8/6/15: 71.3 meters of tube cleaned, ending 10.7 meters east of HSW-2-023.
8/7/15: 82 meters of tube cleaned, ending at HSW-2-018.
As per Jim, this will be addressed on Tuesday Maintenance Day.
Let's add some statistical information on the loud glitches we've been investigating lately (see 20176, 20276, 20304).
All those glitches have a very distinctive shape: a sharp peak rising in about 1ms and decaying in about 1ms, followed by what we believe is the DARM loop response: a second slower peak of opposite sign.
I looked into all the glitches I previously identified, and ran a MATLAB script to fit them with the above shape. What I'm fitting is a 100 Hz high-passed DARM_IN1_DQ signal, with additional notches at 60, 502, 992, 1000, 1462 and 2450 Hz. The parameters are the rise and decay times of the first peak, its amplitude, the rise and decay times of the second peak, and its amplitude. The waveform is the sum of two triangles. See for example the first two plots for a large and a small glitch.
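For concreteness, here is a minimal sketch of the sum-of-two-triangles waveform and a least-squares fit of its six parameters (the sample rate, segment length, starting guesses, offset of the second peak, and stand-in data are all assumptions; this is not the actual fitting script):

fs = 16384;                                  % DARM_IN1_DQ sample rate [Hz]
t  = (0:1/fs:0.05)';                         % 50 ms segment around the glitch
% triangle rising linearly to A over tr, then decaying back to zero over td
tri   = @(t, A, tr, td) A .* ( (t >= 0 & t < tr) .* (t./tr) + ...
                               (t >= tr & t < tr+td) .* (1 - (t - tr)./td) );
% p = [A1, tr1, td1, A2, tr2, td2]; second peak assumed to start where the first ends
model = @(p, t) tri(t, p(1), p(2), p(3)) + tri(t - p(2) - p(3), p(4), p(5), p(6));
p0 = [-1, 1e-3, 1.5e-3, 0.3, 2.3e-3, 5e-3];  % guess: ~1 ms rise, ~1.5 ms decay, opposite-sign second peak
x  = model(p0, t) + 0.01*randn(size(t));     % stand-in for the high-passed, notched DARM segment
pfit = lsqcurvefit(model, p0, t, x);         % requires the Optimization Toolbox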
After fitting all 151 glitches, I convinced myself that 13 of them were probably of another kind, since their shape was clearly different. On two occasions (GPS 1117616943 and 1117631610) there were double glitches (see 3rd and 4th plots).
Armed with the results of the fit, I can now look at the distribution of the amplitudes (with sign) and of the durations. The 5th plot shows a histogram of the amplitudes. Result: there are both positive- and negative-amplitude glitches, but large-amplitude glitches are only negative (in DARM_IN1_DQ).
The 6th plot shows the distribution of the rising and decay times of the first peak. The distribution is quite well peaked around 1 ms for the rising time and 1.5 ms for the decay time. For the second peak, rising times are peaked around 2.3 ms and decay times around 5 ms.
The last attached plot shows a distribution of the absolute value of the glitch amplitudes, in log scale. The low amplitude cut-off is likely due to my inability to detect the smallest glitches (and the threshold of 5 Mpc drop in the range). The amplitudes span a couple of orders of magnitude, with a quite smooth distribution. I see no reason why we shouldn't believe that there are many more low amplitude glitches with the same shape (and maybe origin) that we simply can't detect in this way.
The attached text file contains all the results of the analysis:
Column 1: GPS time
Column 2: amplitude of the first peak
Column 3: rising time of the first peak [ms]
Column 4: decay time of the first peak [ms]
Column 5: amplitude of the second peak
Column 6: rising time of the second peak [ms]
Column 7: decay time of the second peak [ms]
Note that positive spike in DC current = negative spike in DARM_IN1.
So it seems like DC current always spikes up for large glitches.
Elli, Nutsinee
We currently don't have the return SLED beam on the ITMX HWS. Not sure when this happened, but it looks like the beam splitter is to blame.
Sudarshan, Evan, Stefan, Shivaraj
We completed a set of Pcal sweep measurements against DARM for both end stations. The measurement files and templates are stored at the following SVN location:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER8/H1/Measurements/PCAL/
Analysis will follow.
This morning after a lockloss, the ETMX ESD railed, so Fil and I drove to the end station to reset it. To do this, Fil first powered off the chassis by the chamber (3rd chassis from the bottom on the right rack, power is the red button on the left), then unplugged the rightmost cable on the chassis. He then powered the chassis back on and plugged the ESD cable back in. This seemed to fix the ESD, until it tripped just a little while ago.
And here is a longer trend of locklosses showing that the railing does not occur at all lockloss events - it's inconsistent.
Betsy and Jeff tell me the trip was maybe unnecessary. If you click the HV on/off button at the bottom of the SUS ETMX overview a few times, it may reset itself.
The ETMX ESD railing happened numerous times after Jim's reset at EX this morning. To clear it, I hit the HV ON/OFF button at the bottom of the SUS screen from the control room: I hit the button once and verified the ESD signals went to 0 (actually a few hundred instead of the usual thousands), then hit the button again and confirmed the signals came back to "normal" (roughly -32k on DC and not -16k on the 4 quadrant channels). Note, I was slow to push the buttons, which may have helped this be successful.
GUARDIAN UPDATED: Sheila, Jeff, Jaimi and I then added a check to the DOWN portion of the ISC_LOCK guardian code to detect this railing and clear it before making any decisions on whether the ESD should be on or off. This has yet to be tested as the IFO is still working its way back up.
Attached is a trend showing the numerous railing events of the morning - some of which we think happened because the timing in the guardian code was not quite right.