J. Kissel, J. Betzwieser, D. Barker
WP: 5418, ECR: E1500332, II: 1091

I'll put in a more detailed entry describing all the changes once I've finished with the MEDM screen updates, but I've installed an updated CAL-CS front-end model as per all of the documentation mentioned above. This required a DAQ restart, which thankfully was much more successful than it has been in the recent past. I've confirmed that all of the old CAL model's bare-essentials functionality is up and running:
- The whitening filter recently installed for DELTAL_CTRL has been moved to DELTAL_CTRL_SUM, because the filter bank has changed name.
- Turned on the newly named DARM_LINE oscillator at 37.3 [Hz] with an amplitude of 0.08 [ct] (I don't think this amplitude is right; I'll confirm the amplitude with Darkhan shortly, since it hasn't been aLOGged).
- Filled in the new TM_OUTPUT_MTRX to pass ETMY through into DELTAL_CTRL.
I can see that this functionality has returned, as Evan is able to get the IFO back to some power level, and the DELTAL ASD lies roughly on the reference. Stay tuned for more detail!
This is not just lost communications, as in 20333. This is an SEI system. If this system has only lost communications, then sure, do nothing. But that state should be confirmed. If it isn't actually running, then more action is called for.
At the end station, the controller computer's front-panel heartbeat light is not flashing, and the VFD frequency output to the motor is not changing. These two indicators convince me that the control is in fact not running. The analog pressure gauge at the pump station reads about 78 psi; this indicates that we got lucky and the fixed output to the VFD is pretty close to what we need to maintain the ideal pressure (about 80 psi out of the pump station is nominal). Looking at the attached trends of the pressures and motor drive, it appears they latched below the average of the normal variation. So while the pressures are trending down (judging only from where the trends froze and from the analog gauge), they are doing so at a slow rate, and we could be lucky and make it to Tuesday.
The IFO just dropped and Evan gave me the okay to restart. The SSH failed though so a power cycle of the computer is called for. This will certainly take down the Pump Station and therefore the SEI platforms. So we elect to wait.
If the trend had been steeper, the next indicator would have been a trip of the HEPI as the pressures dropped too low to drive the requested offsets. Meanwhile, the performance of the platform would likely have become strange as the actuators attempted to work at a pressure outside their design range. Had the trend been up rather than down, and if the actuators were still working at 125 psi, the in-line pressure relief valve would next have opened and closed repeatedly as the pressure exceeded and dropped below the threshold. Who knows whether the HEPI platform would survive this, but eventually enough fluid would have been lost through the pressure relief path that the reservoir fluid trip would abruptly trip the pump station, and of course the platform, ISI, and IFO; recovery would be more painful.
[Jenne, Gabriele]
It's been quite a struggle to get the IFO relocked today. Last night, Evan et al. ran a TCS tuning test, which left the TCSX CO2 power at 0.43 W. Yesterday, it had been at 0.23 W.
Just now (03:12:00-ish UTC) we returned the TCSX power to 0.23 W, and we're leaving it to re-equilibrate.
Separately, although we don't think this affected our ability to lock, the SYS_DIAG guardian is dead. It went into error at some point, and reloading it didn't clear the error. Stefan tried to restart it, but it never came back up. We await expert advice on fixing this.
It looks like (see log messages below) there is a syntax error somewhere, although no one (in the control room at least...) was editing it. Perhaps we hit some "if" branch that doesn't usually get run, which would be why this hasn't been caught before?
2015-08-09T00:33:40.06923 File "/opt/rtcds/userapps/release/sys/h1/guardian/SYS_DIAG_tests.py", line 403
2015-08-09T00:33:40.06929 yield 'ISI {} WD saturation count is greater than 75% of max'.format(chamber)
2015-08-09T00:33:40.06932 ^
2015-08-09T00:33:40.06939 SyntaxError: invalid syntax
2015-08-09T00:33:40.09600 guardian process stopped: 255 0
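(For what it's worth, the error message is consistent with a stray typo on the line above the yield: Python reports a SyntaxError on the line after an unclosed parenthesis or bracket, so the caret can point at an otherwise valid statement. The snippet below is a toy illustration of that behavior, not the actual contents of SYS_DIAG_tests.py.)

    # Toy illustration only -- not the actual SYS_DIAG_tests.py code.
    def check_wd_counts(chambers, counts, maximum):
        for chamber in chambers:
            # If the line below were missing its closing ')', e.g.
            #     if counts[chamber] > (0.75 * maximum:
            # the interpreter would report "SyntaxError: invalid syntax"
            # at the following 'yield' line, exactly as in the log above.
            if counts[chamber] > (0.75 * maximum):
                yield 'ISI {} WD saturation count is greater than 75% of max'.format(chamber)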
There were some typos in the SEI_STATE test that were preventing the node from restarting.
I destroyed and created the node (probably unnecessary), and then started it.
Also, SYS_DIAG_tests.py had not been checked in to the SVN for quite some time.
Thanks, Evan. It was definitely unnecessary to destroy/recreate the node, but it doesn't hurt anything. Probably all you needed to do was just reload.
I looked into the glitches created by Robert's dust injections. In brief: some of the glitches, but not all of them, look very similar to the loud glitches we are hunting down.
Here is the procedure:
In this way I could detect a total of 42 glitches. The last plot shows the time of each glitch (circles) compared with the times of Robert's taps (horizontal lines). They match quite well, so we can confidently conclude that all of the detected glitches are due to dust. The times of my detected glitches are reported in the attached text file, together with a rough classification (see below).
I then looked at all the glitches, one by one, to classify them based on the shape. My goal was to see if they are similar to the glitches we've been hunting.
A few of them (4) are not clear (class 0), and some others (14) are somewhat slower than what we are looking for (class 3). Seven of them have a shape very close to the loud glitches we are looking for (class 1), and 16 more are less obvious but could still be of the same kind, just larger (class 2).
See the attached plots for some examples of classes.
It seems the text file of glitch times didn't make it into the attachments, would you mind trying to attach it again?
Oops! Here's the file with the glitch times.
Gabriele, did you check which of Robert's glitches caused any ADC/DAC saturations? The glitch shape will start changing significantly once the amplitude is big enough to cause saturations. PS: The ODC master channel has a bit that will toggle to red (0) if any of the subsystems reports a saturation (with 16k resolution) -- it might be exactly what you need in this case.
Stefan, I checked for some ADC and DAC overflows during this data segment. The OMC DCPD ADCs overflowed during several of these. There were still some with SNRs of 10,000 that didn't overflow like this. The segments (GPS start/stop) are pasted below. They're a bit conservative because I'm using a 16 Hz channel without great timing. There were no ADC overflows in POP_A, and no DAC overflows in L2 or L3 of the ETMs. I didn't check anything else. This is not quite the same as what the ODC does, which is a little more stringent; I'm just looking for changes in the FEC OVERFLOW_ACC channels.

1123084682.4375  1123084682.5625
1123084957.3750  1123084957.5000
1123086187.3750  1123086187.5000
1123086446.8125  1123086447.3750
1123088113.0625  1123088113.1875
1123088757.4375  1123088757.6250
1123088787.1875  1123088787.3125
1123088832.3125  1123088832.4375
1123089252.6250  1123089252.7500
1123089497.2500  1123089497.3750
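(For reference, a minimal sketch of this kind of check, assuming gwpy access to the 16 Hz overflow accumulators; the channel name below is a placeholder, not the one actually used:)

    # Hedged sketch: flag 16 Hz samples where a front-end overflow accumulator
    # changed, i.e. where an ADC/DAC overflow was counted during the segment.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    channel = 'H1:FEC-123_ACCUM_OVERFLOW'   # placeholder FEC overflow-accumulator channel
    acc = TimeSeries.get(channel, 1123084600, 1123089600)

    changed = np.flatnonzero(np.diff(acc.value) != 0)
    for i in changed:
        # each change gives a conservative ~1/16 s window containing an overflow
        print(acc.times.value[i], acc.times.value[i + 1])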
I injected particulate by tapping on the beam tube at various locations with a pair of scissors, imitating the taps that the cleaners make by accident. The acceleration measured on the beam tube for similar taps was around 0.1 g, with a frequency peak at about 1000 Hz (Link). At each location I made the first tap at the top of the minute and then a tap at every multiple of 5 seconds for 1 or 2 minutes. My tapping uncertainty was about 1 second.
To observe the glitches in DARM, I filtered the time series of H1:CAL-DELTA_RESIDUAL_DQ so that it is dominated by the 120-1000 Hz band, with the violin modes notched. The table shows the times and the fraction of taps that made glitches. The distribution of glitch sizes is shown in the figure; there were roughly the same number of glitches in each size decade.
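(A minimal sketch of the kind of filtering described above, assuming gwpy; the channel name is as quoted in the entry and the notch frequencies are illustrative, not the exact list used:)

    # Hedged sketch: band-limit the residual-DARM channel named above to
    # 120-1000 Hz and notch a few (illustrative) violin-mode frequencies.
    from gwpy.timeseries import TimeSeries

    data = TimeSeries.get('H1:CAL-DELTA_RESIDUAL_DQ', 1123084600, 1123084720)

    filtered = data.bandpass(120, 1000)
    for f0 in (500.0, 501.6, 1003.7, 1009.0):   # illustrative violin-mode lines
        filtered = filtered.notch(f0)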
In summary:
1) Glitches were produced from most regions of the beam tube.
2) Only about 20% of taps produced glitches.
3) 9/252 taps produced multiple glitches.
4) Only 1 glitch of size comparable to the particulate glitches was observed more than 1s from a tap time in the entire 2 hour lock (DetChar may want to double check this). Thus the data suggest that there is not a reservoir of particles that are freed by the taps but fall later at an exponentially decreasing rate (so it is very unlikely that midnight glitches are particles freed by cleaning and cleaning is unlikely to have increased the background rate).
5) The figure shows that the number of glitches in each size decade was about the same, not increasing with smaller size.
Location on beamtube | Time of first tap (Aug. 8 UTC) | Tap spacing (s) | Duration (min) | Number of taps | Large glitches in DARM within 1 s of a tap | Large glitches not within 1 s
Y2-8 double | 15:52:00 | 5 | 1 | 12 | 4 | 0
Y2-2 double | 15:57:00 | 5 | 1 | 12 | 2 | 1 (2 s from a tap)
Y1-2 double | 16:02:00 | 5 | 1 | 12 | 5 | 0
X2-8 double | 16:22:00 | 5 | 2 | 24 | 7 | 0
X2-2 double | 16:27:00 | 5 | 2 | 24 | 1 | 0
X1-2 double | 16:33:00 | 5 | 2 | 24 | 0 | 0
Y2-8 +1Y single | 16:44:00 | 5 | 2 | 24 | 0 | 0
Y2-4 +1Y single | 16:48:00 | 5 | 2 | 24 | 1 | 0
Y1-4 +1Y single | 16:53:00 | 5 | 2 | 24 | 13 | 0
X2-8 +1X single | 17:05:00 | 5 | 2 | 24 | 13 | 0
X2-4 +1X single | 17:12:00 | 5 | 2 | 24 | 5 | 0
X1-4 +1X single | 17:18:00 | 5 | 2 | 24 | 5 | 0
Number of taps that produced glitches (percentage) | | | | | 42 (17%) |
Total number of glitches | | | | | 56 |
This is very interesting. Can you sort and plot the raw time series of the larger glitches according to X vs. Y arm? (The attack should be unipolar and of opposite sign for the two arms.) It will also be interesting to note if there is a FWHM dependence on axial position.
I've attached a list of all Omicron triggers in the lock with SNRs above 100. The columns are the GPS time, peak frequency, and SNR. Rows marked with an X have an ADC overflow in the OMC DCPDs, so they may not be as useful for determining the shape of the glitches. Several of the glitches have very messy shapes in the OMC DCPDs. There are a few in the Y arm that have a fairly unambiguous single upward spike, for instance 1123084687.570. The whitened timeseries is attached; the shape is a triangle with a base of roughly 1 or 2 milliseconds. I haven't found anything in the X arm yet with a simple unambiguous shape, but I haven't checked everything.
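(For anyone who wants to reproduce this kind of look at a single glitch, here is a minimal sketch assuming gwpy; the DCPD channel name is an assumption, not necessarily the one used for the attached plot:)

    # Hedged sketch: fetch OMC DCPD data around one of the listed triggers and
    # whiten it to inspect the glitch shape.
    from gwpy.timeseries import TimeSeries

    t0 = 1123084687.570   # trigger time quoted above
    data = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', t0 - 4, t0 + 4)   # assumed channel name

    white = data.whiten(fftlength=1, overlap=0.5)
    white.crop(t0 - 0.05, t0 + 0.05).plot()   # ~100 ms window around the glitch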
Dave O and I have been talking about measuring the SRC Gouy phase by dithering optics in the corner station and looking at the motion of the beam at the AS port. This should allow us to work out the ray transfer matrix for the SRC cavity. I misaligned PRM and ITMX, and misaligned the SRM slightly so that two beams could be seen on the AS camera: a straight-shot beam through the SRM, and a beam that had made one round trip of SRY. I put excitations on the optic align stage of the BS and PR3 suspensions and tracked the beam motion. I now need to do some further analysis on this. All optics have been returned to their nominal positions.
J. Kissel

I always forget where the StripTool templates for the wall displays live, and my first instinct is to search the aLOG for ".stp". For future me, here are their locations:

/ligo/home/ops/Templates/StripTool/
    ASC_Pitch.stp
    ASC_WFS_Central_1.stp
    ASC_WFS_Central_2.stp
    ASC_Yaw.stp
    BOUNCE_ROLL_DAMP.stp
    bounceroll.stp
    DAMP_ROLL.stp
    ETMs.stp
    IFO_LOCKING.stp
    IfoLock.stp
    initial_alignment.stp
    oldPRMIsb.stp
    oplevsPIT.stp
    oplevsYAW.stp
    PITCH_ASC_CONTROL_SIGNALS.stp
    PRC-SRC.stp
    PRMIsb.stp   <<< This is what is usually displayed to show the lock acquisition process
    RM-OM.stp
    X-Arm.stp
    Y-Arm.stp
    YAW_ASC_CONTROL_SIGNALS.stp
While we're at it, I copied the seismic FOM into userapps/isc/h1/scripts/Seismic_FOM_split.xml and checked it into the SVN, since the original directory (/ligo/home/controls/FOMs/) is not version controlled.
Dan, Nutsinee, Evan
Tonight we explored the damping settings for the first harmonic of the violin modes. Our goal was to reduce the intensity fluctuations on the DCPDs enough that we could engage a second stage of whitening.
We were able to damp nearly all of the peaks that were visible between 1002 and 1010 Hz; these correspond to the ETMs and are well separated in frequency. The phases required for the damping filters are not amenable to broad bandpass filters (ETMX in particular is pretty random). In the end I dealt with each mode one at a time, by hand.
After a while I got tired of saving the filters, so I just recorded the gain and phase that led to smooth damping. These settings can be easily replicated using a 20 mHz bandpass filter around the frequency of the line.
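(A rough sketch of that recipe, for illustration only -- the real filters live in foton; here the phase is approximated as a time shift at the line frequency, and the example numbers are taken from the 1003.6673 Hz row of the table below:)

    # Hedged sketch of the damping recipe: narrow band-pass around the mode,
    # an overall gain, and a phase adjustment approximated as a sample delay.
    import numpy as np
    from scipy import signal

    fs = 16384.0          # CDS model rate [Hz]
    f0 = 1003.6673        # mode frequency from the table below [Hz]
    bw = 0.020            # ~20 mHz bandpass width [Hz]
    gain_db = 260.0       # damping gain from the table below
    phase_deg = -60.0     # damping phase from the table below

    b, a = signal.iirpeak(f0, Q=f0 / bw, fs=fs)   # resonant biquad band-pass

    def damping_drive(err):
        """Band-pass the error signal, apply gain, and shift phase at f0."""
        bp = signal.lfilter(b, a, err)
        shift = int(round(-(phase_deg / 360.0) / f0 * fs))   # samples of delay
        return 10 ** (gain_db / 20.0) * np.roll(bp, shift)   # roll wraps; fine for a sketch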
We worked on the modes in order of height. Eventually we ran out of steam, but the first harmonic lines in the spectrum have been reduced by a factor of five compared to the reference. The RMS of the DCPD signals (about 300 counts) is now dominated by the residual length motion around 3-4Hz. We should be able to engage more whitening if we want to get some headroom over the ADC noise.
The frequencies in the table below are from Keith Riles' list o' lines in ER7: alog 19190. Eventually this will get propagated to Nutsinee's new violin mode wiki page.
Mode frequency (Hz) | Optic | Damping gain & phase | Filter settings for damping
991.7478 | | |
991.9345 | | |
992.4256 | | |
992.7944 | ITMX | 266 dB, +/-180 deg |
994.2767 | ITMX | 260 dB, +/-180 deg |
994.6456 | | |
994.7331 | | |
994.8973 | | |
995.3650 | | |
995.6447 | ITMX | 260 dB, 0 deg |
996.2517 | | |
996.5296 | ITMX | 260 dB, 0 deg |
997.7169 | | |
997.8868 | | |
998.6645 | | |
998.8050 | | |
1003.6673 | ETMX | 260 dB, -60 deg | ETMX L2 DAMP MODE2 FM6, FM8, FM9, gain=-50
1003.7788 | ETMX | 260 dB, 130 deg | ETMX L2 DAMP MODE3 FM6, FM8, FM9, gain=50
1003.9071 | ETMX | 280 dB, 0 deg | ETMX L2 DAMP MODE8 FM6, FM9, gain=1000
1004.0782 | ETMX | 272 dB, +/-180 deg |
1004.5370 | ETMX | 260 dB, 0 deg | ETMX L2 DAMP MODE1 FM6, FM9, gain=100
1005.1694 | ETMX | 266 dB, 0 deg |
1005.9378 | ETMX | 266 dB, +/-180 deg |
1006.5031 | ETMX | 266 dB, +/-180 deg |
1008.4502 | | |
1008.4938 | ETMY | 272 dB, 83 deg | ETMY L2 MODE3 FM1, FM3, FM4, gain=400
1009.0273 | ETMY | 266 dB, +/-180 deg | ETMY L2 MODE7 FM6, FM9, gain=-200
1009.2089 | | |
1009.4402 | ETMY | 272 dB, 67 deg | ETMY L2 MODE3 FM1, FM3, FM4, gain=400
1009.4863 | ETMY | 272 dB, 67 deg | ETMY L2 MODE3 FM1, FM3, FM4, gain=400
1009.6234 | | |
1009.6825 | | |
Added to the Wiki.
Dan, Nutsinee
Since the knowledge of the violin mode damping is scattered all over the aLOG, here's the H1 Violin Mode wiki page. The table includes frequencies, test masses, damping settings, and the filters that are being used to damp those modes. All the violin fundamentals are in. Harmonics are coming.
Enjoy!
Increased the damping of ITMY MODE5 to 400. This now makes the 501.606Hz mode fall at a rate of just under a decade per hour.
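(Back-of-the-envelope, not from the entry itself: an amplitude decay of one decade per hour means A(t) = A_0 * 10^(-t / 3600 s), i.e. an exponential time constant tau = 3600 s / ln(10), roughly 1.6e3 s or about 26 minutes.)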
Prior to this change, the photodiode (Tx and Rx PD) calibration coefficients were reported in metres/volt x (1/f^2). Now, with the suspension model in place, we have calibrated the photodiodes in terms of a force coefficient (N/V). The filter banks, as shown in the attachment above, now reflect these new N/V calibration factors. Appropriate changes have been made to the DCC document (T1500252) as well.
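(Rough sanity check of the units, using a free-mass approximation rather than the full suspension model that was actually installed: the Pcal radiation-pressure force is F = 2P/c, and a free test mass of mass M responds as x(f) = F / (M (2 pi f)^2). So a displacement coefficient of the form K * (1/f^2) in m/V corresponds to a force coefficient F/V = K * M * (2 pi)^2 in N/V. The front-end suspension model replaces the free-mass 1/(M (2 pi f)^2) with the full force-to-displacement response.)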
This is an additional note about the quack function -- when I was using quack, I had difficulty quacking a state-space suspension model into a foton filter. See the details below.

In matlab, I had been using something like:

    quad.d = minreal( zpk( c2d(quad.ss, samplingTime, 'tustin') ), tolerance );

where quad.ss is a state-space representation of the quad suspension response, and samplingTime in this case is 1/16384 sec. The reason I used the minreal function is that otherwise the result came with too many poles and zeros, exceeding the number that foton can handle. I tried adjusting the tolerance of minreal in order to reduce the number of poles and zeros, but it was extremely difficult because it ended up with either too many or too few poles and zeros.

The fix was to apply minreal before c2d:

    quad.d = c2d( minreal( zpk(plant.ss), tolerance ), samplingTime, 'tustin');

This allowed me to get a reasonable number of poles and zeros.

RickS, Darkhan
We adjusted the 35.9 Hz TST (L3 stage only) line drive level from 0.08 ct to 0.11 ct.
Now the amplitude of the TST calibration line in the DARM_ERR readout is close to the amplitudes of the 36.7 Hz Pcal and 37.3 Hz x_ctrl calibration lines. The target SNR for these lines, and for the 331.9 Hz Pcal line, is 100 with 10 s FFTs (see T1500377).
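(For a fixed noise floor and FFT length, the line SNR scales linearly with the drive amplitude, so the new amplitude follows from simple scaling: amp_new = amp_old * (SNR_target / SNR_measured). Scaling 0.08 ct up to 0.11 ct is consistent with a measured SNR of roughly 100 * 0.08 / 0.11 ≈ 73 at the old amplitude -- an inference, not a number quoted here.)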
Evan's script to automatically take frequency and intensity transfer functions is now running. As a reminder, the summing node B path was repurposed for this run; the interferometer won't relock unless we reconnect the nominal cabling.
At 4:20 UTC we started changing the TCSX CO2 power from 0.23 W to 0.4 W. The rotation stage took us on some full circles, but by 04:23 we reached 0.4 W.
At 4:45 I increased the frequency noise drive by a factor of 5 to gain back coherence.
At 6:04 I decreased the power to 0.35 W, trying to find the minimal frequency noise point (the coupling sign had changed).
At 6:34 I reduced it to 0.3 W.
It wasn't really clear from this run where the minimum in frequency coupling was (maybe because of the 5 W blast at the start), so I went back to 0.23 W of heating and let the frequency coupling reach a steady state (by driving the same line as before, this time with 100 ct amplitude). Around 2015-08-08 10:18:30 I kicked the TCS power to 0.53 W and started the datataking again.
Once the frequency coupling reached a steady state again, I made a guess for what TCS power we need to minimize the coupling. At 12:43:30 I changed the TCS power to 0.43 W.
I have reverted the ALS/CARM cabling to its nominal configuration.
Preliminary analysis from this second run suggests that, of the TCS powers that we probed, our current TCS power is the best in terms of intensity noise coupling.
The attached plot shows the transfer function which takes ISS outer loop PD #1 (in counts) to DCPD A (in milliamps). The coloring of the traces just follows the sequence in which they were taken. Black was taken at 0.23 W of TCS power, and the lightest gray was taken at 0.53 W of TCS power.
The measurements and the plotting script are in evan.hall/Public/Templates/LSC/CARM/FrequencyCouplingAuto/Run2.
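(Not Evan's script -- just a generic sketch of this kind of transfer-function estimate, assuming gwpy and assuming both channels are stored at the same rate; the channel names are guesses for illustration:)

    # Hedged sketch: estimate the ISS-PD-to-DCPD transfer function and coherence.
    from gwpy.timeseries import TimeSeriesDict

    chans = ['H1:PSL-ISS_SECONDLOOP_PD1_OUT_DQ', 'H1:OMC-DCPD_A_OUT_DQ']   # assumed names
    data = TimeSeriesDict.get(chans, 1123100000, 1123100600)

    iss, dcpd = data[chans[0]], data[chans[1]]
    tf = iss.transfer_function(dcpd, fftlength=10, overlap=5)   # input: ISS PD, output: DCPD
    coh = iss.coherence(dcpd, fftlength=10, overlap=5)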
Shivaraj, Darkhan
Summary
We replaced the whitening filter on the Delta L_{res} output channel with a 2-zero, 2-pole zpk, and the one on Delta L_{ctrl} with a 3-zero, 3-pole zpk.
Details
Madeline uses FIR filters to dewhiten Delta L_{res} and Delta L_{ctrl} in the GDS scripts. To make it easier to generate shorter dewhitening FIR filters for these channels, she requested that we replace the existing 5-zero, 5-pole zpk filters in both channels with simpler zpk's having fewer poles and zeros.
Shivaraj designed the new filters for both of these channels, and we implemented them today around 1 pm.
The power spectrum of H1:CAL-CS_DARM_DELTAL_CTRL_WHITEN_OUT before applying the new whitening filter is plotted in the attached "H1-CAL-CS_DARM_DELTAL_CTRL_WHITEN_OUT_z5x1_p5x100_MAG.pdf". After applying the new whitening filter, zpk([1; 1; 1], [500; 500; 500], 1), the interferometer unfortunately wasn't stable enough to take a spectrum measurement, so I couldn't evaluate the "whiteness" of the output of this channel.
The power spectrum of H1:CAL-CS_DARM_RESIDUAL_WHITEN_OUT before applying the new whitening filter is plotted in "H1-CAL-CS_DARM_RESIDUAL_WHITEN_OUT_z5x1_p5x100_MAG.pdf". After applying zpk([1; 1], [500; 500], 1), the spectrum of the same channel is given in "H1-CAL-CS_DARM_RESIDUAL_WHITEN_OUT_z2x1_p2x500_MAG.pdf".
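(To see what these choices do to the whitening shape, here is a small sketch comparing the old and new zpk magnitudes; foton's zpk() arguments are in Hz, converted to rad/s for scipy, and overall normalization is ignored:)

    # Hedged sketch: compare old vs. new whitening-filter magnitude responses.
    import numpy as np
    from scipy import signal
    import matplotlib.pyplot as plt

    def zpk_mag(zeros_hz, poles_hz, freqs_hz):
        z = -2 * np.pi * np.asarray(zeros_hz, dtype=float)
        p = -2 * np.pi * np.asarray(poles_hz, dtype=float)
        _, h = signal.freqs_zpk(z, p, 1.0, worN=2 * np.pi * freqs_hz)
        return np.abs(h)

    f = np.logspace(-1, 3.5, 1000)
    plt.loglog(f, zpk_mag([1] * 5, [100] * 5, f), label='old: 5 x 1 Hz zeros, 5 x 100 Hz poles')
    plt.loglog(f, zpk_mag([1] * 2, [500] * 2, f), label='new RESIDUAL: zpk([1;1],[500;500],1)')
    plt.loglog(f, zpk_mag([1] * 3, [500] * 3, f), label='new DELTAL_CTRL: zpk([1;1;1],[500;500;500],1)')
    plt.xlabel('Frequency [Hz]'); plt.ylabel('|H(f)| (arb.)'); plt.legend(); plt.show()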
We also plan to implement similar whitening filters on new (not yet existing) channels for Delta L_{TST} and Delta L_{PUM/UIM}.
Below are plots showing the effect of different whitening filters, including the one that was being used until now (5 x 1 Hz zeros and 5 x 100 Hz poles). These plots are the basis for the new filters for DeltaL residual and DeltaL control. In those plots the blue curve corresponds to the actual data, and the other colors represent different whitening filters applied to the data.
With the new filters installed, we looked at DeltaL residual and DeltaL control during a recent lock. Below we have attached the spectra of both, along with DeltaL external as a reference (which still uses the old 5 x 1 Hz zero, 5 x 100 Hz pole filter). In the plot the channels are dewhitened with the corresponding filters.
This morning, after the IFO was locked, DARM was super noisy in the kHz region. The ISS error point was also super noisy, and the coherence between the two was high.
It turns out that the ISS got noisy at the tail end of the lock stretch from last night, at around 2015-08-07 12:00:00 UTC. That's 5 AM local time.
We went to the floor, and sure enough a function generator was connected to the ISS injection point via an SR560. We switched them off and disconnected the cable from the ISS front panel, but left the equipment on the floor so the injection can be restarted later.
When Stefan and I hooked the injection back up, we found that the digital enable/disable switches weren't doing their jobs. Toggling the outputs of H1:PSL-ISS_TRANSFER2_INJ and H1:PSL-ISS_TRANSFER1_INJ had no effect on the appearance of the noise in DARM.
JeffK, Darkhan
The MEDM screens are undergoing changes to replicate the recent front-end model changes (see attached screenshots). The screens are currently in a state at which we can input the parameters needed for calibration: C_0, D_0, A_0^{tst}, A_0^{pum}, A_0^{uim}, A_0^{top}, and the line frequencies and amplitudes.
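(For context -- this is the standard DARM-loop bookkeeping, not spelled out in this entry: roughly, Delta L_res = d_err / C and Delta L_ctrl = A * d_ctrl, with A = A^{uim} + A^{pum} + A^{tst}, so Delta L_free = Delta L_res + Delta L_ctrl. The subscript-0 quantities C_0, D_0, and A_0^{...} are the reference-time models of the sensing function, the digital DARM filter, and the actuation stages, and the calibration-line frequencies and amplitudes entered on the screens are used to track their slow time dependence.)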
The current status of the screens has been committed to SVN (@ r11235). The modification process will continue tomorrow.
We also found a bug in the front-end model that needs to be fixed (using the Im instead of the Re part of a quantity; see attachment).