Measurements were carried out on the ETMX effective charge voltage over a period of 3.5 hours this morning.

Firstly, some housekeeping: I svn'd up the charge scripts directory to obtain updates from LLO (/ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/) and preserved local changes. ETMX was selected since it does not currently have low-pass filters installed on each quadrant. It was not necessary to re-align the ETMX optic, since the OpLev was already well centred. Linearization was bypassed (turned off) for the duration of these measurements.

The ESD_UL_LL_UR_LR_charge_07-H1.py script was run to drive the ESD at 4 Hz with an amplitude of 130k counts. The script attempts to write a bias to H1:SUS-ETMX_L3_LOCK_BIAS_OFFSET; however, this is nominally turned OFF at LHO. Therefore, it needs to be engaged, and the bias in DAC counts residing in H1:SUS-ETMX_L3_LOCK_INBIAS set to zero. Also, the ramp time should be reduced from 10 s to 5 s for the duration of the charge measurements.

Data was processed using the ESD_UL_LL_UR_LR_analysis_07_H1.m script; charging and charge deviations from today's measurement can be seen below (all times in UTC). A significant negative effective voltage is observed on the lower quadrants, which are quite variable. Upper quadrants exhibit less charge and are more stable (similar to what was seen for LLO ETMY before it was discharged). N.B. there is also a large discrepancy between the effective charge reported by OpLev Pitch and Yaw. Previous automated charge measurements carried out on ETMX by Brett after the most recent vent did not show as large an effective charge (see LHO aLOG entry 16057). Follow-up measurements may help determine how the charge is evolving. The infrastructure is in place to help determine whether any future discharging is successful.

Processed results and analysis scripts have been committed to the sus svn; raw data files have not.
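For reference, the LHO-specific setup described above can be scripted. Below is a minimal sketch using pyepics; the _TRAMP channel name is my assumption (verify channel names against the MEDM screen before running), and the real charge scripts may do this bookkeeping differently.

    from epics import caget, caput

    PREFIX = 'H1:SUS-ETMX_L3_LOCK_'

    # Remember the nominal in-loop bias so it can be restored after the measurement.
    nominal_inbias = caget(PREFIX + 'INBIAS')

    # Zero the in-loop bias; the charge script drives BIAS_OFFSET instead,
    # which is nominally turned OFF at LHO and must be engaged by hand.
    caput(PREFIX + 'INBIAS', 0)

    # Shorten the ramp from the nominal 10 s to 5 s for the charge measurements.
    # (The _TRAMP suffix is an assumption -- check the filter-bank screen.)
    caput(PREFIX + 'TRAMP', 5)

    print('BIAS_OFFSET currently: %g' % caget(PREFIX + 'BIAS_OFFSET'))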
At Jeff K's request, I've included v2 of my charge measurement notes, to help ensure the measurement can be repeated.
If the ion pump is open to the chamber you will have significant charge fluctuations over 5 to 10 hours.
Rai -- agreed. This was more to establish that (a) our measurement suite at LHO was functional and blessed by Stuart, (b) to confirm that, "yup! we've still got plenty of charge," and (c) there are locals on site who know how to run the measurement suite. We know that the charge will continue to swing around because, as you say, experience has shown when ion pumps are valved in, and there is some charge on the mass, it varies greatly. As such, we're not going to bother to continue tracking it on a day-to-day basis, or really at all because we know there's nothing we can do about it until we vent and remove the charge as LLO has done.
Today has been a rough day for the input HAM chambers. This morning we found that a channel went dead in an expansion chassis (see Richard's alog for the resolution), and troubleshooting involved a lot of restarts of SEIH23. This afternoon, after that problem was sorted, Evan found that the SEI MEDM screens for HAM2&3 were frozen. Dave and JimB should be posting a log about the resolution of that issue.
It would be very good if Detchar could do some comparisons of the H3 IPS on HAM3 HEPI with other sensors, to see whether the failure that took out this chamber this morning (killing commissioning efforts until ~now) gave us any warning. I attach some dataviewer trends of the IPS blend-ins, and I can kind of convince myself that the horizontal loops all look a little noisier 18 hours ago versus now. That's not necessarily true; I haven't looked in any detail and I'm just guessing based on the max/min being noisier 18 hours ago.
Plots like this, for the last couple of weeks. This is a spectrum from 23 hours ago, before the trip at 01:00 local today. Maybe also look at impacts on the ISI and/or MC2 and PR2?
I've attached spectra of H2 and H3 spaced by three hours. There's a clear indication that H3 is going bad on Apr 2 between 18 and 21 UTC. The next spectra pin the start time to between 19:20 and 19:25 UTC (I think that's noon local time). The time series, which starts at 19:15, shows that the problem may come in the form of bursts of noise. Update: I've added an hour-long time series of the channel, high-passed at 10 Hz. The problems start 12 minutes in. It looks like there are bursts of noise, as well as maybe an increase in the overall noise level.
According to the summary pages, this had an impact on the ISI and on the optic motion. The ISI spectrogram and optic motion spectrogram show an increase in noise right around 19:20 UTC. If we need a monitor for this sort of problem, a BLRMS from 30 to 100 Hz would probably work. The sensor is just flat white noise there, until the signal goes bad. Attached is a 12 hour BLRMS showing the onset of the problem.
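For what it's worth, here is a minimal sketch of such a 30-100 Hz BLRMS in Python/scipy, assuming the raw sensor data is already in hand as a numpy array (the sample rate and stride are illustrative, not the values used for the attached plot):

    import numpy as np
    from scipy import signal

    def blrms(x, fs, f_lo=30.0, f_hi=100.0, stride=1.0):
        """Band-limited RMS: bandpass x to [f_lo, f_hi] Hz, then take the RMS
        in non-overlapping windows of `stride` seconds."""
        sos = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        y = signal.sosfiltfilt(sos, x)
        n = int(stride * fs)
        nwin = len(y) // n
        return np.sqrt(np.mean(y[:n * nwin].reshape(nwin, n) ** 2, axis=1))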
Sadly, this doesn't show up in the BLRMS signals that we currently have on HEPI. The HPI-HAM3_BLRMS_X_30_100 channel is the BLRMS of the Cartesian L4C signal in the 30 to 100 Hz band, and it doesn't show the sensor going bad. The attached plot covers the same 12-hour window as Andy's plot just above, and the problem is not apparent. Mo channels, mo problems.
All models on the h1seih23 computer stopped running. A quick check showed that the I/O chassis appeared to be powered down, as no cards were reported from the I/O chassis by an lspci -v command. We assumed the power supply had failed, so we removed the computer from the Dolphin network, stopped the models, and powered off the computer. We then examined the I/O chassis, only to find that it was powered up and appeared to be running OK. After examining the one-stop cable to make sure it was seated properly and showed no signs of having been kinked or otherwise damaged, the I/O chassis was powered down, then back up, and the computer restarted. All models started normally, with only a slight IRIG-B deviation into a negative value which quickly recovered. Bottom line is we don't know what happened.
645 Jeff B. - LVEA CC stuff HAM6
713 Karen - LVEA
805 Jim W. - LVEA inspect HAM3
815 Jeff B. - Back
818 Jim W. - Back
840 Doug, Jason - LVEA prepping OpLev
909 Jim W. - LVEA more inspection
939 Doug, Jason - Back
946 Sudarshan - LVEA
1002 SEI HAM2,3 FE model restart
1024 Suresh, Doug, Jason - Tweak HAM3 OpLev
1031 Richard - LVEA swapping ADC
1039 Jeff B. - LVEA
1039 Ed M. - LVEA
1047 Richard - Back
1047 Ed M. - Back
1048 Jeff B. - Back
1052 Sudarshan - Back
1112 Richard - LVEA replace ribbon cable
1122 Richard - back
1204 Doug, Jason, Ed M, Suresh - LVEA adjusting OpLev
1225 Sudarshan - LVEA
1229 Sudarshan - Back
1232 Doug, Jason, Ed M. - Back
1300 Suresh - Back
1340 - Restarting IO chassis for HAM2,3
This is posted early since I have to go to the airport
Jim Warner reported a problem with the Pier 3 IPS horizontal. Investigating, we found that the signals into the IO expansion chassis had a problem: a large offset, and noise that, as viewed on dataviewer, was not the same as the others. I replaced the ADC interface card, the ribbon cable, and the ADC in an attempt to get the system back on line quickly; we can narrow down the problem on the test stand. I had to replace the ribbon cable a second time, as the first one I found had a problem with channel 1. Once this was complete the system seemed to be fine. I believe the ribbon cable is the problem, but further investigation is needed. Thanks to Jim Batch for taking the system out of the Dolphin network and shutting down the computer (multiple times) for me so the work could proceed.
J. Kissel, J. Batch Exploring why the optical gain compensation calculation wasn't working, Jim reminded me of what I'd already discovered when building the front-end model: the user-defined amplitude of the given calibration line is determined by a built-in EPICS variable inside the oscillator of the DEMOD simulink library part. This amplitude is needed for the math used to compute the optical gain compensation (see T1500121). However, the oscillator part (and therefore the DEMOD part) doesn't spit out a front-end graspable version of the amplitude, only the line itself and the sine and cosine. So I'd had to create *another* EPICS variable to feed into the front-end math, with the intent of *always* keeping it in sync with the oscillator's amplitude. My problem? I set the oscillator amplitude, but I forgot to put the parallel EPICS variable on the MEDM screen, so I forgot to set it when I set the amplitude of the calibration line (back in LHO aLOG 17622). I've added the variable to the MEDM screen (and committed it to the repo), and set it appropriately. Now we just need a DARM spectrum to further debug the computation!
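Until the front end can read the oscillator amplitude directly, the discipline is simply to write both records together. A minimal sketch of that bookkeeping, assuming pyepics (the channel names are placeholders, not the real CAL-CS records):

    from epics import caput

    def set_cal_line_amplitude(osc_amp_chan, parallel_chan, amplitude):
        """Set the oscillator amplitude and its front-end-readable copy together,
        so the optical-gain math never sees a stale amplitude."""
        caput(osc_amp_chan, amplitude)
        caput(parallel_chan, amplitude)  # must always track the oscillator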
Jeff K and I saw that Phase 3b (in-vacuum) TFs had not yet been taken for the OMC and OM suspensions; therefore, I've taken a complete set this morning, prior to next week's planned vent of HAM6, for reference and acceptance purposes. I've taken undamped DTT TFs, with the HAM-ISI isolated and damped via Guardian. All suspension alignment offsets remained ON throughout the measurement.

However, it soon became evident that OM1 was struggling in all DOFs; see snapshots of OM1 TFs below, where the red trace is a good reference and the black trace raises some concerns. I turned OFF the alignment offsets for OM1, but the issue persisted. I then noticed that the ISC signal path was biasing OM1 and saturating the DAC output. Turning OFF the ISC path restored OM1's performance. A 7-day trend for OM1 is attached below; apart from some transient glitches, the alignment offsets have remained steady. However, the ISC bias has transitioned between +/-4000 counts. The ISC_LOCK Guardian state is also included, noting that an index >500 indicates the IFO is locked with DC readout.

There was also a minor issue obtaining V-V DOF TFs on the OMC suspension, which resulted in incorrect scaling when compared to the model (all other DOFs were fine). After investigating the signal chain, I discovered a gain of -500 in the TEST_FILTERS bank. I returned this to the nominal gain of +1, which rectified the problem.

The OMC and OM TF measurements have been compared with previous Phase 3a (in-air) measurements, as well as with identical suspensions at LLO, and are available below. Summary: Transfer functions for both OMC and OM suspensions are consistent with previous measurements and with similar suspensions at LLO (with no biases applied to OM1). However, the HAM6 vent (and swap of the OM1 optic) may provide an ideal opportunity to offload some OM1 alignment, if that's deemed necessary. All data, scripts and plots have been committed to the sus svn as of this entry.
When the IFO unlocks, the HAM6 DC centering loops rail, and the histories aren't cleared until a point in the DRMI Guardian just before the ASC is turned on. So, in the morning after a night of IFO locking, there is usually a large, bogus ISC signal being sent to the HAM6 tip-tilts. This is why Stuart observed a huge bias on OM1 that was probably causing either rubbing or nonlinearity from large values in the coil drives.
The attached plot shows 1 hour of typical OM1 signals during low-noise operations (GRD-ISC_LOCK_STATE > 500). The LOCK inputs are small (~hundreds of counts), and the COILOUTs are well within the +/-32k range. The UR and LL COILOUTs are larger (~11k counts), but Stuart says we shouldn't worry until we reach +/-24k, or 75% of the DAC range.
Some of the inputs from the OSEMs are large (UR is -19k counts), but this reflects a particularly large open-light level and not an OSEM whose flag is about to come out of the barrel.
So, there's no evidence that OM1 is rubbing or saturating during normal operations. OM2 has larger DC alignment offsets than we would prefer (~16k in COILOUT), but this is within the linear range.
Last night, we again became unable to damp the roll modes (see previous experience in alog 17378) with the usual damping settings. After some random experiments, we were able to damp them with the usual damping settings, for some reason. We have no idea why the mode occasionally behaves in this way. Note that we use AS_WFS_A for damping them.
(The roll modes)
After the recycling gain study (alog 17645) and ASC study (alog 17646), we fully locked the interferometer with DC readout and 10 W. We immediately noticed that the DARM spectrum was extremely noisy, which turned out to be due to high roll modes saturating the OMC DC PDs at the ADC. Looking at the frequency of the peak in the DARM spectrum, we could identify the mode -- it was at 13.8 Hz, which is the one from ITMX. The peak height was as high as 10^-12 m/sqrtHz in the DARM spectrum with 0.1 Hz BW. We went back to ASQ in order to address the issue. The PSL power remained at 10 W. Evan tried different phases (e.g. +-60 deg) and even a negative sign in the damping gain, but none of them seemed to work. This is exactly the same situation as the one previously reported (alog 17378).
(Damping experiments)
- ITMX
Since I knew that it was mostly from ITMX, I first disabled the damping on ETMY in order to make the experiment straightforward. Then I narrowed the pass band on ITMX, which is nominally 1 Hz wide with a center frequency of 13.9 Hz, to a 100 mHz passband with a center frequency of 13.8 Hz. They are 4th-order Butterworth filters. According to foton, this change causes an extra phase rotation of 30 deg, which I did not try to correct as it seemed small enough. Engaging the narrower Butterworth, I was able to damp the mode with the same positive gain. This brought the peak height in the DARM spectrum down to as low as 10^-14 m/sqrtHz.
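For anyone wanting to sanity-check the phase numbers foton reports, here is a rough scipy equivalent of the two bandpass designs (the sample rate is assumed, and foton's own design will differ in detail; see the ETMY section below for the same change on the other test mass):

    import numpy as np
    from scipy import signal

    fs = 16384.0  # assumed model rate

    def bp_phase_at(f0, f_lo, f_hi):
        """Phase (deg) at f0 of a 4th-order Butterworth bandpass
        (scipy order 2 doubles to 4th order for a bandpass)."""
        sos = signal.butter(2, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        w, h = signal.sosfreqz(sos, worN=[f0], fs=fs)
        return np.degrees(np.angle(h[0]))

    # phase seen by the 13.8 Hz ITMX mode through each filter
    print(bp_phase_at(13.8, 13.4, 14.4))    # nominal 1 Hz band centered at 13.9 Hz
    print(bp_phase_at(13.8, 13.75, 13.85))  # narrow 100 mHz band centered at 13.8 Hz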
- ETMY
I then moved on to ETMY to propagate the same modification. ETMY also had a 1 Hz bandpass, 4th-order Butterworth, with a center frequency of 13.9 Hz. I tried a 100 mHz passband with a center frequency of 14 Hz. This again caused a 30 deg phase shift, but I neglected it. After engaging the narrower bandpass on both ITMX and ETMY, however, the modes slowly started growing. I tried different phases and a negative damping gain on ETMY, but none of them helped. I also tried several configurations (e.g. disabling ETMY and keeping ITMX, disabling ITMX and running ETMY, etc.), but I did not succeed in damping the modes. Moreover, they kept growing on a time scale of a couple of minutes.
- Ending up with the same old configuration
In the end, I switched the bandpass filters back to the 1 Hz passband ones to see if I could damp the modes. Yes, I was able to damp them; the decay time was on a time scale of a couple of minutes. No extra phases or sign flips were needed. Unsatisfactory (✖╭╮✖)
It's not a good idea to use the error signals for damping; using the AS WFS requires that the roll-to-angle TF not have phase shifts at the ~30 deg level.
But the TF includes not only the roll -> angle mechanical TF, but also
(roll -> DC readout -> DARM OLG -> SUS actuators L2A -> WFS) +
(roll -> DC readout -> DARM OLG -> SUS actuators L2L -> WFS L2A)
so it's complicated.
But the seemingly straightforward way of using DARM_OUT, which I support, also has issues, since the roll -> DC readout TF changes with beam positioning. But if the spot positions are controlled, this way ought to be best, as long as you always have a roll RG in the DARM loop.
In order to try to keep the diffracted power at a nominal 8%, I tried a simple ezcaservo. Running the following on opsws2:

    ezcaservo -r "H1:PSL-ISS_DIFFRACTION_AVG" -s 8.0 -g -0.001 -f 0.05 -t 240 "H1:PSL-ISS_REFSIGNAL"

This timed out after 4 minutes. Increasing the gain to -0.01 seems okay; -0.1 is too high; -0.05 is too high; -0.02 seems okay. The attached plot shows two manually introduced excursions from the reference signal set point and the recovery done by ezcaservo, which seems to work okay. The final command issued was:

    ezcaservo -r "H1:PSL-ISS_DIFFRACTION_AVG" -s 8.0 -g -0.02 -f 0.05 -t 240 "H1:PSL-ISS_REFSIGNAL"

The command timed out and is no longer running. It would be worth taking this out for a longer test drive at a later time.
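For clarity, the loop ezcaservo runs is essentially a slow integrator on the error between readback and set point. A rough Python equivalent of the final command, assuming pyepics and ignoring the -f pole (a sketch, not a replacement for the supported tool):

    import time
    from epics import caget, caput

    def diffraction_servo(readback='H1:PSL-ISS_DIFFRACTION_AVG',
                          actuator='H1:PSL-ISS_REFSIGNAL',
                          setpoint=8.0, gain=-0.02, timeout=240.0, dt=0.1):
        """Integrate gain * (readback - setpoint) onto the actuator until timeout."""
        t0 = time.time()
        while time.time() - t0 < timeout:
            err = caget(readback) - setpoint
            caput(actuator, caget(actuator) + gain * err)  # pure integrator step
            time.sleep(dt)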
Summary: H1:SUS-ETMX_M0_DAMP_L_IN1_DQ looks to have been saturating during the long lock, making glitches in DARM at 9 Hz.
In Detchar we're looking at the 2015-04-02 lock from ~8-13 UTC, which had good sensitivity and some amount of undisturbed time. The hveto page for that day has several interesting glitch classes. The first round winner is a series of glitches centered at around 9 Hz and associated with the SUS ETM* L1/M1 L channels. These glitches seem to show up only in this high-sensitivity lock and not in the low-sensitivity locks around it (perhaps due to higher RMS on M0 in the low-noise configuration?). Checking the raw data, it appears that H1:SUS-ETMX_M0_DAMP_L_IN1_DQ is saturating.
Note: CIS says this channel is calibrated in um. I don't know whether this is a digital saturation or some physical thing - will consult a SUS expert.
One mystery, to me, is why the glitches occur at GPS times ending in .000, .250, .500, and .750. This might be due to time domain clustering in our glitch algorithm. At first I thought it indicated a digital origin for the glitches, which turns out to have been a red herring.
To follow up on the above, Joe Areeda made omega scans of 50 of these glitches in DARM, a tar file is here. Two examples are attached (each over four different timespans). In these scans, the DARM glitches look like three peaks excited for about a second.
I looked at some of the slow longitudinal channels for ETMX, and it turns out the common tidal control signal for the ETMs was hitting its software limit all throughout Wednesday night.
The first plot attached is a one-hour trend of the ETMX-M0_DAMP_L output, which shows the behavior that Josh found, alongside the common tidal control signal for ETMX (the common tidal signal is the same for both ETMs, so picture this happening for ETMY as well). The DAMP_L channel wasn't saturating, but it was flat-topping whenever the LSC-X_COMM_CTRL signal hit the soft limit at 10 microns. An image of the offending filter bank is also attached.
The 10-micron limit is pretty huge, and we're scratching our heads to figure out how the common tidal drive (which is essentially the low-frequency component of IMC-F) could have acquired such a large DC offset. The third plot is a four-hour trend that shows the offset in IMC-F being offloaded to the tidal as the Guardian state climbs towards low-noise.
The common tidal signal is sent to the L1 stage of both ETMs, where it is combined with whatever other longitudinal drive is being applied to the optic, and then is offloaded to HEPI. During this lock, ETMY was also being driven by DARM, and the contribution of that signal was enough to provide a smooth control signal to the mass. But ETMX was only getting the common tidal signal, and it was stopping abruptly when it came up against the software limit.
Here's an idea for a control room tool: a script that looks at every CDS filter bank, figures out which ones have their limiter enabled, and checks whether the output is within 10% of the limit value during some span of time.
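A minimal sketch of what that tool could look like, assuming pyepics and the standard CDS filter-module records (_LIMIT for the limit value, _OUTMON for the output monitor); detecting whether the limiter is actually enabled from the SWSTAT bits is left out, so this version takes an explicit list of banks known to have their limits on:

    from epics import caget

    def check_limits(filterbanks, margin=0.9):
        """Warn for any filter bank whose output is within 10% of its limit."""
        for fb in filterbanks:
            limit = caget(fb + '_LIMIT')
            out = abs(caget(fb + '_OUTMON'))
            if limit and out > margin * limit:
                print('%s output %g is within 10%% of limit %g' % (fb, out, limit))

    check_limits(['H1:LSC-X_COMM_CTRL'])  # the bank that caused Wednesday's trouble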
Below are the trends from the last 10 days
Do not forget: Meetings are back to their normal Monday, Wednesday, Friday @ 830 schedule.
SEI: HAM3 is currently experiencing some issues, please stay away until the problem is sorted out.
SUS: Stuart is taking TFs this morning
CDS: Quiet day
VAC/FAC:
OpLev: wants to put new OpLev on HAM3, they will coordinate with SEI
We've had some difficulties locking today, in part because sometimes when the ASC comes on, it misaligns things and causes mode hopping.
When we did lock, we planned to work on reducing the ETMY DAC glitches (alog 17555), first by pushing up the L1/L3 crossover then by turning off the linearization. We got as far as measuring the crossover, screenshot attached. We designed a lead filter to give us some phase to push the crossover up to about 2 Hz, but ran into trouble with locking and now HAM3 HEPI.
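For the record, a sketch of the kind of lead filter we were designing, in scipy (the corner frequencies are illustrative, not the actual design, which was done in foton): a zero below and a pole above the target crossover buys phase in between.

    import numpy as np
    from scipy import signal

    z, p = 1.0, 4.0  # Hz (assumed); maximum lead falls at sqrt(z*p) = 2 Hz
    lead = signal.ZerosPolesGain([-2*np.pi*z], [-2*np.pi*p], p/z)  # unity DC gain

    w, mag, phase = signal.bode(lead, w=2*np.pi*np.logspace(-1, 1, 200))
    f = w / (2*np.pi)
    print('phase lead at 2 Hz: %.1f deg' % np.interp(2.0, f, phase))
    # for p/z = 4 the maximum lead is asin(3/5) ~ 36.9 deg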
Evan, Sheila, Koji
We ran into some difficulty tonight with HAM3 seismic, both HEPI and the ISI are tripping. We've tried several things:
It's a bad ADC. Richard is working on replacing it right now.
J. Kissel, J. Warner, R. McCarthy Update: H3 IPS sensor died last night around 7:30 UTC (see attached screenshot). We went out to the pier, no obvious cable problems. We manually moved the HEPI pier, and saw all Horz. sensors register motion *except* H3. We swapped H3 and V3 cables, and the H3 sensor readout by the V3 channel registers the motion. So, we assume problems are upstream of the sensor in the pier pod / satellite box or further up. Richard and Jim continue to investigate. Stay tuned!
This is a follow-up entry to LHO aLOG 17601.
A couple of days ago, a discrepancy between the responses of DCPDA and DCPDB was found. This was basically caused by misadjusted filter modules for the anti-whitening filters: some of them were using design values (like Z10:P1) and some others were just left as they had been imported from the LLO setup.
In order to correctly take the whitening transfer functions into account, the wiring of the in-vacuum and in-air connections had to be tracked down. The 1st attachment shows a sufficiently detailed wiring chain for this task. Using the test data (links indicated in the diagram), we can reconstruct what the correct anti-whitening filters should be. A summary can be found below.
[Trivia for Rich: DCPD1 (transmission side of the OMC BS) is connected to HEAD2, and DCPD2 (reflection side of the OMC BS) is connected to HEAD1. This is because of the twisted cable D1300369, which has J2 for HEAD2 and J3 for HEAD1. This twist exists at LLO and LHO consistently, as far as I know.]
=======
Characteristics of the DCPD electronics chain
Complex poles/zeros are expressed by f0 and Q
DCPD A
(DCPD at the transmission side of the OMC DCPD BS)
- Preamp D060572 SN005
Transimpedance: Z_LO = 100.2, Z_HI = 400.0
Voltage amplification ZPK: zeros: 7.094, 7.094, (204.44 k, 0.426), poles: 73.131, 83.167, 13.71k, 17.80k, gain: 1.984
- Whitening filter D1002559 S1101603
(That document defines the gain not at DC but at high frequency; the gains below are defined as DC gains.)
CH5 Whitening
Filter 1: zero 0.87, pole 10.07, DC gain 10.36/(10.07/0.87)
Filter 2: zero 0.88, pole 10.15, DC gain 10.36/(10.15/0.88)
Filter 3: zero 0.88, pole 10.20, DC gain 10.36/(10.20/0.88)
Gain: “0dB”: -0.051dB (nominal), “3dB”: 2.944dB, “6dB”: 5.963dB, “12dB”: 11.84dB, “24dB”: 24.04dB
DCPD B
(DCPD at the reflection side of the OMC DCPD BS)
- Preamp D060572 SN004
Transimpedance: Z_LO = 100.8, Z_HI = 400.9
Voltage amplification ZPK: zeros: 7.689, 7.689, (203.90 k, 0.429), poles: 78.912, 90.642, 13.69k, 17.80k, gain: 1.983
- Whitening filter D1002559 S1101603
CH6 Whitening
Filter 1: zero 0.88, pole 10.13, DC gain 10.41/(10.13/0.88)
Filter 2: zero 0.87, pole 9.96, DC gain 10.40/( 9.96/0.87)
Filter 3: zero 0.88, pole 10.15, DC gain 10.41/(10.15/0.88)
Gain: “0dB”: -0.012dB (nominal), “3dB”: 2.982dB, “6dB”: 6.007dB, “12dB”: 11.87dB, “24dB”: 24.04dB
=======
Now we put these transfer functions into the model and check whether we can reproduce the observed relative difference (Attachment 2). Indeed, the measurement is well explained by the model below 30 Hz, where the measurement S/N was good. As we saw in the previous entry, the difference between DCPDA and DCPDB after the whitening compensation is 20% max. Note that further inspection revealed that this 20% difference is, in fact, mostly coming from the difference between the preamp transfer functions rather than from the miscompensation.
So much for the relative calibration between DCPDA and DCPDB. How good is the compensation of each one individually? The 3rd attachment shows how much current we get at the outputs H1:OMC-DCPD_A_OUT, H1:OMC-DCPD_B_OUT, and H1:OMC-DCPD_SUM_OUT if we inject 1 mA of photocurrent into DCPD_A, DCPD_B, or both (half and half). Ideally, this should be unity. The plot shows that they had not been adjusted. For our main GW channel we take the sum of the two DCPDs; the individual deviations are averaged, and thus the sum channel has a max 10% deviation from ideal compensation. This shows up in the GW channel.
=======
So let’s implement the correct compensation. Basically, we can place the inverse of each filter. The preamplifier, however, includes some poles and zeros whose frequencies are higher than the Nyquist frequency. Here we just ignore them and assess the impact.
The result is shown in the 4th attachment. Up to 1 kHz, the gain error is less than 1%; this increases to 5% above 3 kHz. The phase error is 7 deg at 1 kHz, increasing to 20 deg above 3 kHz. These are the effects of the ignored poles/zeros. Note that these are static errors. In fact, the phase error is quite linear in frequency, so it behaves as a time delay of ~18.5 us (e.g. 20 deg / 360 deg / 3 kHz ≈ 18.5 us). Since the phase delay at 100 Hz is small, the impact on the DARM feedback servo is minimal. For the feedforward subtraction, however, this might limit the subtraction performance somewhat. In practice we measure the coupling transfer function in order to adjust the subtraction in any case, so this delay should not be a serious problem.
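To make the "place the inverse filter" step concrete, here is a scipy sketch for one whitening stage (DCPD A, CH5 Filter 1 from the table above): the anti-whitening filter simply swaps the zero and pole. In the continuous domain the product is exactly unity; the residual gain and phase errors quoted above come from discretization and the ignored super-Nyquist preamp poles/zeros, which this sketch does not model.

    import numpy as np
    from scipy import signal

    def zpk_hz(zeros, poles, gain):
        """Continuous-time ZPK with zero/pole frequencies given in Hz."""
        return signal.ZerosPolesGain([-2*np.pi*z for z in zeros],
                                     [-2*np.pi*p for p in poles], gain)

    f = np.logspace(-1, 4, 500)
    w = 2*np.pi*f

    whiten = zpk_hz([0.87], [10.07], 10.07/0.87)  # unity-DC-gain whitening stage
    anti   = zpk_hz([10.07], [0.87], 0.87/10.07)  # its inverse (anti-whitening)

    _, h_w = signal.freqresp(whiten, w=w)
    _, h_a = signal.freqresp(anti, w=w)
    print('max |residual - 1|: %.1e' % np.max(np.abs(h_w*h_a - 1)))

    # the ~18.5 us delay quoted above: 20 deg / (360 deg * 3 kHz) ~= 18.5 us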
The filter bank to implement the new compensation was already configured. The filter file is attached as foton_DCPDfilters.txt
Once we lock the full IFO, we will measure the DARM OLTF and give it to Kiwamu for recalibration.
With the new filters, the balance is extremely good now.
This indirectly suggests that the individual compensations are done pretty well.
J. Kissel Since the front-end calibration did not account for this whitening compensation mismatch, i.e. it assumed perfect compensation, the calibration of the sensing function was simply *wrong* (inaccurate) at those frequencies where there was a mismatch. (Recall the DARM UGF is ~40 [Hz], so the mismatch began influencing the calibration only above ~40 [Hz].) As such, now that the whitening and preamps have been more accurately compensated, the calibration as it stands has simply become *more correct*. Therefore we will not need to change or correct anything in the front-end calibration filters. Stay tuned for further study.
Jeff -- don't be so hasty. The absolute DC gain of the sensing function (or the inverse sensing function in the CAL CS model) is set by scaling an open loop gain TF measurement to a model. Thus far, open loop gain TFs have only been taken between ~10 and ~100 [Hz], exactly where this discrepancy occurs. Thus, the IFO's DC sensing function is likely off in overall scale factor by the ~10-20% caused by this discrepancy. So, once we get the IFO back up, we'll take another open loop gain transfer function, compare it against the prior, determine a new DC gain for optical gain / sensing function, and update the calibration accordingly.
In the section "Characteristics of the DCPD electronics chain", I wrote something inconsistent with the rest of the entry.
DCPD A is the DCPD at the reflection side of the OMC DCPD BS
DCPD B is the DCPD at the transmission side of the OMC DCPD BS
My hand written cartoon is correct.
I wish I could correct aLOG entries that are older than 24 hours.