JeffreyK, SudarshanK, DarkhanT,
Overview
We made comparison plots of DARM OLG TF and PCAL to DARM TF measurements taken at LHO against H1DARMmodel_ER8 and H1DARMmodel_O1 (both uncorrected and corrected with kappa factors).
One of the main changes in the DARM model update for O1 compared to ER8 is in the actuation function: the ER8 model did not account for an analog anti-imaging (AI) filter, which we have now included in the O1 model. Adding the previously missing analog AI filter to the actuation function model increased the (measurement / model) residual to about 1% in magnitude and to ~6 deg in phase around 500 Hz (~10 deg around 900 Hz). Initially, some of the ER8 model parameter estimations (ESD/CD gains) were done to best fit the measurements using an actuation function model that did not include the analog AI filter.
We also took kappas calculated from calibration lines within about 20 minutes of the DARM OLG TF measurement, and plotted the O1 DARM model corrected with kappas in two different ways against the measurement, to see how well the kappa corrections take care of systematics in the model. At this point we do not have enough comparisons of DARM OLG TF and PCAL2DARM TF measurements against kappa estimates to make a definitive statement. For this particular measurement from Sep 10, the DARM model corrected with κtst and κC produced smaller DARM OLG TF and actuation function residuals than the uncorrected model, but the sensing function residual did not improve with the correction (see attached pdf's for actuation and sensing residuals).
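For reference, here is a minimal sketch (in Python; the function and variable names are ours for illustration, not from the H1DARMmodel scripts) of how the kappa factors scale the model terms, following the T1500377 conventions:

    import numpy as np

    def corrected_darm_olg(f, C_norm, A_tst, A_pu, D,
                           kappa_tst, kappa_pu, kappa_C, f_cc):
        # C_norm: reference sensing function with the cavity-pole factor divided out
        # A_tst, A_pu: test-mass (ESD) and penultimate-stage actuation TFs
        # D: digital DARM filter TF; f: frequency vector [Hz]
        C_t = kappa_C * C_norm / (1.0 + 1j * f / f_cc)  # scaled sensing, updated cavity pole
        A_t = kappa_tst * A_tst + kappa_pu * A_pu       # scaled actuation stages
        return C_t * A_t * D                            # corrected open loop gain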
Details
Some of the known issues / systematics in our DARM OLG TF model include:
- inverse actuation filters need to account for an extra -1 sign (was fixed, see LHO alog 21703);
- CAL-CS reproduction should have a sign that's opposite from DARM output matrix (at LHO we had this correct, but it needed to be fixed at LLO).
This issue affects the EP1 value that is written into the EPICS record and used for estimation of the DARM time-dependent parameters (T1500377). At LHO, in the DARM model for ER8, we manually rotated the phase of EP1 to +44.4 degrees to account for this discrepancy; we modified both the parameter file and the DARM model script to account for the DAQ downsampling filter TF calculated at the xtst line frequency.
- residuals of the actuation function and total DARM OLG TF (systematic error);
- EP1-9 that are used for estimation of DARM temporal variations.
This variable might have been used in the GDS calibration; we need to verify with MaddieW that this extra time delay is not included in the GDS code.
One of the possible sources of systematic error in the sensing function model is the use of a single-pole TF to approximate the IFO response.
Some of the parameters of the actuation functions were estimated without taking the analog AI filter into account (one of the issues listed above). We need to revisit the ER8/O1 actuation function analysis results.
A comparison script, an updated DARM model script and DARM parameter files were committed to the calibration SVN:
CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/
Plots were committed to:
CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/DARMOLGTFs/
P.S. I'll add references later.
Maddie has confirmed that she has used the matlab model parameter par.t.actuation to inform the high-frequency and time-delay corrections to the output of the CAL-CS pipeline. This confirms that there is a systematic error in the output of the GDS pipeline at both observatories -- an extra IOP (65 [kHz]) clock cycle, a 15 [us] delay on the actuation path, which results in a ~0.5 [deg] phase mismatch between the reconstructed and true actuation and sensing paths at 100 [Hz]. This is a small effect, but given our dwindling person-power, and continued pressure to have been done yesterday, we will not quantitatively assess the impact this has on systematic errors. We will instead merely update the GDS pipeline to use the correct actuation delay (hopefully next Tuesday), and use that as our stopping point for when we stop re-calibrating prior data.
TITLE: 9/23 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: OBSERVATION @ 76Mpc
OUTGOING OPERATOR: Jim W.
SUPPORT: Darkhan still here (Sheila is on-call, if needed)
QUICK SUMMARY:
Noticed there is a RED Timing error for H1SUSETMX. Would like to hit Diag_Reset to see if this clears this error, but I'm not sure if this knocks us out of Observation Mode. Will hold off.
I brought up this question on my last shift, and I believe the answer was that it's inconsequential to reset this bit while Observing, except that subsequent errors may be happening during the period that it's RED and we won't know about them / be able to see them in the trend. I took a trend last week at the beginning of one of my shifts and found this error had only happened ~1/week. So as far as I can tell, it's ok to reset this error, but it would be nice to get this blessing from the CDS crew.
Verbal Alarms notified us of a GRB at 7:08:39UTC. We acknowledged on Verbal Alarm Terminal. And then went through the GRB checklist (in L1500117).
Title: 9/22/2015 Eve Shift: 23:00-7:00UTC
State of H1: Observation Mode at 70+Mpc for the last 12+hrs
Support: None needed. Various commissioners present
Shift Summary:Quiet shift.
Activity Log:
0:20 Kyle Gerardo returning from mid X station
J. Kissel, for the CAL team. I've created a representative ASD for the start of O1. For now -- it uses data from just before the updates to the GDS pipeline were started, at Sep 22 2015 11:29:47 UTC (Tuesday, Sep 22 2015 04:29:47 PDT). Why not after? Because NDS2 from matlab can't get data after the pipeline has been restarted. I'll work with Jonathan and Greg tomorrow to find out the problem. Also, because the comparison with the calibrated PCAL amplitude is within our stated uncertainty thus far (LHO aLOG 21689). Of course, this is just an ASD, and I have not respected the phase. More to come on that. The strain and displacement ASDs -- and a corresponding ASCII dump of both -- are housed permanently and publicly in G1501223, as part of the collection of ASDs from the Advanced LIGO Sensitivity Plots. Techniques for how the ASD was computed can be found in T1500365, but I've used essentially the exact same process as was used for the "best" ASD from ER7 (LHO aLOG 19275). A new feature this time around is a comparison of the PCAL vs. the GDS pipeline product. Delightfully -- even though the GDS pipeline was not yet updated, and I have not corrected for any time dependence, the GDS pipeline calibration of the PCAL excitation in DARM agrees with the displacement calibrated by PCAL itself at all line frequencies (36.7, 331.9, and 1083.7 [Hz]) to better than 5%, which is consistent with our current uncertainty budget (LHO aLOG 21689). Now before we get all greedy and say "then why did we bother to update the GDS pipeline and why have we bothered to even compute the time dependent factors," remember that this is one measurement, and this is only the amplitude/magnitude. We must make the same comparison over a long period of time (a few days), and look at both the amplitude and phase, so that we can get a feel for how these track with time and for our remaining systematics.
The script to produce the official strain, plots, and ASCII dumps can be found here: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/produceofficialstrainasds_O1.m
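For those curious about the mechanics, a generic sketch of a Welch-averaged ASD computation (Python; a stand-in under assumed parameters, not a transcription of the MATLAB script or of the exact T1500365 settings):

    import numpy as np
    from scipy.signal import welch

    fs = 16384                                  # GDS-CALIB_STRAIN sample rate [Hz]
    strain = np.random.randn(fs * 600) * 1e-23  # placeholder for fetched strain data
    f, psd = welch(strain, fs=fs, nperseg=fs * 10, window='hann')  # 10 s segments
    asd = np.sqrt(psd)                          # strain ASD [1/sqrt(Hz)]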
Cheryl and Jeff brought to my attention that GWIstat was reporting incorrect information today. It turns out that the ~gstlalcbc home directory at Caltech was moved to a new filesystem today; GWIstat gets its information from a process running under that account, and apparently got into a funny state. I have now restarted it. For the rest of the current observing segment it will report the duration only from the time I restarted it, about 3:32 UTC. I apologize for the problem!
I see this morning that GWIstat is not showing the correct duration for the current observing segment. The log file on ldas-grid.ligo.caltech.edu, where it is now running, shows that it was restarted twice during the night for no obvious reason, and it is reporting the duration only since it was restarted. I'll ask the Caltech computing folks to look into this. New hardware for ldas-grid was put into use yesterday, and maybe they were still shaking it down last night.
Stuart Anderson told me this is a known problem that seems to have arisen from a condor configuration change. They know how to fix it but will need to restart condor. Until they do that, gwistat should indicate status correctly (except for momentary outages) but may sometimes display the wrong duration for the current state.
During the maintenance window we left the DHARD yaw boost on (21768 and 21708). There was no evidence that it caused any problems, but I was putting excitations onto the transmon at the time and there were other maintenance activities going on. We'd like to check that it doesn't impact the glitch rate, so if LLO drops out of lock or if you see an earthquake on the way (0.1 um/sec or larger predicted by terramon), it would be great if you can turn it on. You can find it under ASC overview > ASC arm cavities, DHARD YAW FM3 (labeled boost). (screenshot)
It would be good to get more than an hour of data, so if you see that LLO has dropped it would be awesome if you could turn this on until they are back up.
This is just a temporary request, only for tonight or the next few days.
This is actually FM2.
I was texting with Mike to see if taking H1 out of Observation Mode (when L1 is down) for this test was OK by him, and he concurred. This work is referenced by Work Permit #5505. In the work permit, I see a time of 9/21-25 for Period of Activity. So Operators can allow this activity during this time since Mike has signed off on the work permit. (perhaps in the future, we can reference the work permit in alog entries so Operators will know this is an acceptable activity.)
I'm not totally sure about when to make the decision to preemptively turn ON this filter if we get a warning of an impending EQ. It's not totally clear which types of EQ will knock us out and which won't. I guess I can look to see if (1) Terramon gives us a RED warning, and (2) the 0.03-0.1 um/s seismic signal shows an order of magnitude increase. Perhaps in that case I could then end Observation Mode, turn ON the filter, and stay out of Observation Mode until L1 comes back. (Sorry, just trying to come up with a plan of attack in case L1 drops out.)
As it stands, L1 has been locked for 10 hrs, so we'll keep an eye on them. I asked William to contact me if they drop out (but I'll also watch the FOM & GWI.stat).
I believe that by switching this, while in 'Undisturbed', it will show as an SDF diff thereby automatically taking us to 'Commissioning' mode until the diff is accepted, the ODC Intent ready bit is Green(again) and we can once again click the intent bit to 'Undisturbed'. I asked this at the JRPC meeting yesterday.
Apologies for the wrong FM number; in the future I'll try to remember to put the WP number in the alog. Operators can probably stop toggling this filter for now. We will put this on the list of minor changes to make on maintenance day, so that next Tuesday it can be added to the guardian and the observe.snap, along with some HSTS bounce and roll notches.
SudarshanK, DarkhanT
We were using a 137-degree correction factor on kappa_tst in our time-varying parameter calculations (alog 21594). Darkhan found a negative sign placed at the wrong position in the DARM model, which gave us back 180 degrees of phase. Additionally, Shivaraj found that we were not accounting for the DAQ downsampling filter used on the ESD calibration line. These two factors gave us back almost all the phase we were missing. There was also an analog anti-imaging filter missing in the actuation TF, which has been applied in the new model. After these corrections, Darkhan created the new updated EPICS variables. These EPICS variables are committed at:
CalSVN/Runs/O1/Scripts/CAL_EPICS
Using these new EPICS variables, kappas were recalculated for LHO. For LLO, these EPICS variables do not exist yet. The new plot is attached below. The imaginary parts of all the kappas are now close to their nominal value of 0, and the real parts are within a few percent (2-3%) of their nominal value of 1, which is within the uncertainty of the model. The cavity pole is still off from its nominal value of 341 Hz but has stayed constant over time.
The script to calculate these time varying factors is committed to SVN:
LHO: CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CAL_PARAM/
LLO: CalSVN/aligocalibration/trunk/Runs/ER8/L1/Scripts/CAL_PARAM/
Recall that Stefan made changes to the OMC Power Scaling on Sunday 13 September 2015 (in the late evening PDT, which means Sept 14th UTC). One can see the difference in character (i.e. the subsequent consistency) of kappa_C after this change in Sudarshan's attached plot. One can also see that, for a given lock stretch, the change in optical gain is now no more than ~2-3%. That means that the ~5 [Mpc] trends we see in our 75 [Mpc] in-spiral range, which we've seen evolve over long, 6+ hour lock stretches, cannot be entirely attributed to optical gain fluctuations, as we've been flippantly claiming. However, now that we've started calculating these values in the GDS pipeline (LHO aLOGs 21795 and 21812), it will be straight-forward to make comparative plots between the calculated time dependent parameters and every other IFO metric we have. And we will! You can too! Stay tuned!
Just to drive the point home, I took 15 hours' worth of range and optical gain data from our ongoing 41+ hour lock. The optical gain fluctuates by a few percent, but the range fluctuates by more like 10 %.
Kyle, Gerardo
~1000 - 1315 hrs. local, ~1540 - 1720 hrs. local
Sprayed CF flanges between Y-1 and Y-2, excluding GV9, 10, 11 and 12 lead screw nipples (purposefully excluded lead screw bellows too).
SETUP: Y-mid turbo backed by LD (QDP80 running but valved-out). 6 x 10-8 torr*L/sec external calibrated leak measured 7 x 10-8 torr*L/sec. 4-5 LPM helium flow for 25 - 100 second dwell. Indicated helium background initially at 9 x 10-9 torr*L/sec, fell steadily during testing, eventually going off scale < 10-11 torr*L/sec.
RESULTS: Looked like a response when spraying near the closed vent/purge valve (high pressure, O-ring side) but couldn't duplicate after lunch. Soft-cycled IP9 isolation valve - pressure went up when closed. Shut down pumps and leak detector. Leaving turbo controller on overnight to ensure the rotor stays levitated until at rest.
These are the channels in the DMT (GDS) hoft frames, which include the calibrated strain channel (H1:GDS-CALIB_STRAIN) and the calibration factors (the kappas), listed with their sample rates in Hz:
H1:GDS-CALIB_STATE_VECTOR 16
H1:ODC-MASTER_CHANNEL_OUT_DQ 16384
H1:GDS-CALIB_STRAIN 16384
H1:GDS-CALIB_KAPPA_A_REAL 16
H1:GDS-CALIB_KAPPA_A_IMAGINARY 16
H1:GDS-CALIB_KAPPA_TST_REAL 16
H1:GDS-CALIB_KAPPA_TST_IMAGINARY 16
H1:GDS-CALIB_KAPPA_PU_REAL 16
H1:GDS-CALIB_KAPPA_PU_IMAGINARY 16
H1:GDS-CALIB_KAPPA_C 16
H1:GDS-CALIB_F_CC 16
These channels should be available using NDS2.
(For LLO the channels are the same with: H1-> L1.)
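For example, a minimal fetch using the nds2-client Python bindings (a sketch; the server address and GPS times below are placeholders to adjust as appropriate):

    import nds2

    # Placeholder LHO NDS2 server and example GPS interval
    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    start, stop = 1126623617, 1126623677
    buffers = conn.fetch(start, stop,
                         ['H1:GDS-CALIB_STRAIN', 'H1:GDS-CALIB_KAPPA_C'])
    for buf in buffers:
        print(buf.channel.name, buf.channel.sample_rate, len(buf.data))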
I strongly suggest we add EPICS mirrors of these channels (similar to what was done for the sensemon range). This will ensure that (1) they are available in dataviewer, and (2) we have trend data of these channels. We want to be able to look at long-term (week- or month-long) fluctuations of these parameters during O1.
Two things added:
Updated the CAL_INJ_CONTROL medm. It is organized a bit differently, labels have changed slightly, and it even has a new button! Duncan Macleod supplied us with an updated ext_alert.py that polls GraceDB for new events (both "E" and "G" types), places the new info in some EPICS records, and then will automatically pause injections for either 3600 s or 10800 s depending on the event.
The Transient Injection Control now has the ability to zero out the pause inj channel. Why is this necessary? The script running in the background of this screen will automatically PAUSE the injections when a new external event alert is detected. If we are down when we get a GRB alert, the script should still pause the injections. The Operator will then need to enable the injections and zero the pause time.
One other thing for Operators to look out for is if we want the injections to stop for longer than the automatic pause time. If we disable the injections by clicking the "Disable" button, and then a new event comes in, it will automatically switch from Disabled --> Paused (this happened to us a few minutes after we started up the script). I am not 100% positive on this, but it seems that when the pause time is up the injections will continue. If this is so, it's definitely something Operators need to watch for.
We will see how this goes and make changes if necessary.
New screen shot attached.
There was apparently some confusion about pausing mechanisms; see alog 21822. If the scheme referred to there is restored, the PAUSE and ENABLE features will be fully under the control of the operators. Independently, injections will automatically be paused by the action of the GRB alert code setting the CAL-INJ_EXTTRIG_ALERT_TIME channel. I have emailed Duncan to try to sort this out.
Last night there were two GRB alerts that paused the injections, and they DID NOT enable Tinj. The Tinj Control went back to Disabled as we had set it previously. This is good and works as outlined in the HWInjBookkeeping wiki (thank you Peter Shawhan!). This was my main worry, and it seems it has already been taken care of. It is a bit misleading when the Tinj control goes from Disabled --> Paused and begins to count up to the "Pause Until" time, but trending the channels shows that it will not enable Tinj when the pause time expires.
J. Kissel. Some combination of Dave, Jim, Duncan and TJ installed updates to the GRB alert code this morning during maintenance. This updated code now hits the "pause" button on the hardware injection software TINJ when it receives a GRB alert. There is an EPICS record, H1:CAL-INJ_TINJ_PAUSE, which records the GPS time at which TINJ was paused. Somehow, this record -- which is used as a readback / storage of information, not a setting -- got missed when we went through the un-monitoring of INJ settings-which-are-readbacks channels in the CAL-CS model (see LHO aLOG 21154). So this afternoon, while in observation mode, we received a GRB alert and the updated code pushed the TINJ pause button, which then filled in the H1:CAL-INJ_TINJ_PAUSE EPICS record, which triggered an SDF difference in the CAL-CS front end, which took us out of science mode. #facepalm. I've chosen to un-monitor this channel and accepted it in the OBSERVE.snap table of the SDF system to clear the restriction for observation mode. Note -- when we are next out of observation mode, we need to switch to the SAFE.snap table, un-monitor this channel, and switch back to the OBSERVE.snap table. We can't do this now, because switching the table would show the DIFF again, and take us out of observation intent mode again. #doublefacepalm
As I similarly pointed out to the folks at LLO when they tried to implement something similar, having the GRB alert process pause the injection process is a bad model for how to chain the dependencies. Is the GRB process expecting to unpause the injections as well? How do you plan on handling this when there are multiple external alert processes trying to pause the injections? They're all just going to be pausing and un-pausing as they see fit? Bad plan.
Apparently some confusion about this resurfaced after we had (I thought) resolved it in late August (alog 20013). Following the original scheme, CAL-INJ_TINJ_PAUSE and CAL-INJ_TINJ_ENABLE are intended to be under the control of the human operator to set or unset. In parallel, tinj also pauses injections automatically for one hour following the GPS time inserted in CAL-INJ_EXTTRIG_ALERT_TIME by the GRB alert code, ext_alert.py . I have emailed Duncan to try to sort this out.
Elli and Stefan showed in aLOG 20827 that the signals measured by AS 36 WFS for SRM and BS alignment appeared to be strongly dependent on the power circulating in the interferometer. This was apparently not seen to be the case in L1. As a result, I've been looking at the AS 36 sensing with a Finesse model (L1300231), to see if this variability is reproducible in simulation, and also to see what other IFO variables can affect this variability.
In the past when looking for differences between L1 and H1 length sensing (for the SRC in particular), the mode matching of the SRC has come up as a likely candidate. This is mainly because of the relatively large uncertainties in the SR3 mirror RoC combined with the strong dependence of the SRC mode on the SR3 RoC. I thought this would therefore be a good place to start when looking at the alignment sensors at the AS port. I don't expect the SR3 RoC to be very dependent on IFO power, but having a larger SR3 RoC offset (or one in a particular direction) may increase the dependence of the AS WFS signals on the ITM thermal lenses (which are the main IFO variables we typically expect to change with IFO power). This might therefore explain why H1 sees a bigger change in the ASC signals than L1 as the IFOs heat up.
My first step was to observe the change in AS 36 WFS signals as a function of SR3 RoC. The results for the two DOFs shown in aLOG 20827 (MICH = BS, SRC2 = SRM) are shown in the attached plots. I did not spend much time adjusting Gouy phases or demod phases at the WFS in order to match the experiment, but I did make sure that the Gouy phase difference between WFSA and WFSB was 90 deg at the nominal SR3 RoC. In the attached plots we can see that the AS 36 WFS signals are definitely changing with SR3 RoC, in some cases even changing sign (e.g. SRM Yaw to ASA36I/Q and SRM Pitch to ASA36I/Q). It's difficult at this stage to compare very closely with the experimental data shown in aLOG 20827, but at least we can say that, from the model, it's not unexpected that these ASC sensing matrix elements change with some IFO mode mismatches. The same plots are available for all alignment DOFs, but that's 22 in total, so I'm sparing you all the ones which weren't measured during IFO warm-up.
The next step will be to look at the dependence of the same ASC matrix elements on common ITM thermal lens values, for a few different SR3 RoC offsets. This is where we might be able to see something that explains the difference between L1 and H1 in this respect. (Of course, there may be other effects which contribute here, such as differential ITM lensing, spot position offsets on the WFS, drifting of uncontrolled DOFs when the IFO heats up... but we have to start somewhere).
Can you add a plot of the amplitude and phase of 36MHz signal that is common to all four quadrants when there's no misalignment?
As requested, here are plots of the 36MHz signal that is common to all quadrants at the ASWFSA and ASWFSB locations in the simulation. I also checked whether the "sidebands on sidebands" from the series modulation at the EOM had any influence on the signal that shows up here: apparently it does not make a difference beyond the ~100ppm level.
At Daniel's suggestion, I adjusted the overall WFS phases so that the 36MHz bias signal shows up only in the I-phase channels. This was done just by adding the phase shown in the plots in the previous comment to both I and Q detectors in the simulation. I've attached the ASWFS sensing matrix elements for MICH (BS) and SRC2 (SRM) again here with the new demod phase basis.
**EDIT** When I reran the code to output the sensitivities to WFS spot position (see below) I also output the MICH (BS) and SRC2 (SRM) DOFs again, as well as all the other ASC DOFs. Motivated by some discussion with Keita about why PIT and YAW looked so different, I checked again how different they were. In the outputs from the re-run, PIT and YAW don't look so different now (see attached files with "phased" suffix, now also including SRC1 (SR2) actuation). The PIT plots are the same as previously, but the YAW plots are different to previous and now agree better with PIT plots.
I suspect that the reason for the earlier difference had something to do with the demod phases not having been adjusted from default for YAW signals, but I wasn't yet able to recreate the error. Another possibility is that I just uploaded old plots with the same names by mistake.
To clarify the point of adjusting the WFS demod phases like this, I also added four new alignment DOFs corresponding to spot position on WFSA and WFSB, in pitch and yaw directions. This was done by dithering a steering mirror in the path just before each WFS, and double demodulating at the 36MHz frequency (in I and Q) and then at the dither frequency. The attached plots show what you would expect to see: in each DOF the sensitivity to spot position is all in the I quadrature (first-order sensitivity to spot position due to the 36MHz bias). Naturally, WFSA spot position doesn't show up at WFSB and vice versa, and yaw position doesn't show up in the WFS pitch signal and vice versa.
For completeness, the y-axis is in units of W/rad tilt of the steering mirror being dithered. For WFSA the steering mirror is 0.1 m from the WFSA location, and for WFSB the steering mirror is 0.2878 m from the WFSB location. We can convert the axes to W/mm spot position or similar from this information, or into W/beam_radius using the fact that the beam spot sizes are 567 µm at WFSA and 146 µm at WFSB.
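A quick sketch of that conversion (Python; it assumes the small-angle relation that a mirror tilt theta displaces the reflected beam by 2*d*theta at lever arm d -- the factor of 2 from reflection is our assumption, not stated above):

    def wfs_sensitivity_units(sens_W_per_rad, lever_arm_m, beam_radius_m):
        # mm of spot motion per rad of mirror tilt (reflection doubles the angle)
        mm_per_rad = 2.0 * lever_arm_m * 1e3
        per_mm = sens_W_per_rad / mm_per_rad       # [W/mm]
        per_radius = per_mm * beam_radius_m * 1e3  # [W/beam_radius]
        return per_mm, per_radius

    # WFSA: lever arm 0.1 m, spot size 567e-6 m; WFSB: 0.2878 m, 146e-6 m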
As shown above the 36MHz WFS are sensitive in one quadrature to spot position, due to the constant presence of a 36MHz signal at the WFS. This fact, combined with the possibility of poor spot centering on the WFS due to the effects of "junk" carrier light, is a potential cause of badness in the 36MHz AS WFS loops. Daniel and Keita were interested to know if the spot centering could be improved by using some kind of RF QPD that balances either the 18MHz (or 90MHz) RF signals between quadrants to effectively center the 9MHz (or 45MHz) sideband field, instead of the time averaged sum of all fields (DC centering) that is sensitive to junk carrier light. In Daniel's words, you can think of this as kind of an "RF optical lever".
This brought up the question of which sideband field's spot position at the WFS changes most when the BS, SR2 or SRM is actuated.
To answer that question, I:
Some observations from the plots:
I looked again at some of the 2f WFS signals, this time with a linear sweep over alignment offsets rather than a dither transfer function. I attached the results here, with detectors being phased to have the constant signal always in I quadrature. As noted before by Daniel, AS18Q looks like a good signal for MICH sensing, as it is pretty insensitive to beam spot position on the WFS. Since I was looking at larger alignment offsets, I included higher-order modes up to order 6 in the calculation, and all length DOFs were locked. This was for zero SR3 RoC offset, so mode matching is optimal.
J. Kissel. I've increased the actuation delay before the sum of the RESIDUAL (sensing) and CTRL (actuation) paths in the CAL-CS reproduction of H1:CAL-CS_DARM_DELTAL_EXTERNAL from four 16 [kHz] clock cycles (244 [us]) to seven 16 [kHz] clock cycles (427 [us]). This is done by changing the H1:CAL-CS_DARM_CTRL_DELAY_CYCLES EPICS record. The change is motivated in LHO aLOG 21746. Recall this only affects the reproduction of the DARM displacement signal H1:CAL-CS_DARM_DELTAL_EXTERNAL (and therefore its ASD projected on the wall). I attach screenshots of the before and after. Note that the ASDs were taken ~30 minutes apart, so I don't expect the detailed structure to be the same. However, the change in phase at the sensing / actuation crossover causes an overall shape change in the bucket. Also note that the transfer function used to remove the systematics from the DELTAL EXTERNAL channel will now have to be updated (documented in LHO aLOG 20481), to avoid double counting the high frequency effects. In the attached ASD, those corrections have *not* been changed, so we are likely double counting the corrections. More on that after I reconcile what Kiwamu had done and what Peter suggests. I've captured the updated setting in both the OBSERVE.snap and SAFE.snap SDF files, and committed each to the repository.
I've updated the slide I'd made in the PreER7 era that elucidates all of the time delays and approximations that we've made to come up with the seven 16 [kHz] clock-cycle delay between the actuation path and sensing path. The ER7 version is pg 7 of G1500750. See attached. In summary -- approximating all high-frequency response as delays in addition to the "true" delays from computer exchanges -- the total delay *between* the inverse sensing and actuation chains would be 442.6 [us] if we weren't limited to 16 [kHz] clock cycles. Because we are limited to 16 [kHz] clock cycles, we've chosen 7, which is a delay of 427.3 [us] -- a 15.3 [us] systematic error. You should also remember that having this delay between the chains means that CAL-CS_DELTAL_EXTERNAL has an overall delay or "latency" equivalent to that of the inverse sensing function advance, which is 213.6 [us]. Note that these numbers are LHO-centric -- the approximation for the OMC DCPD signal chain of 40 [us] assumes the pole frequencies of the H1 OMC DCPD chain, and the estimation of the systematic error in phase uses H1's DARM unity gain frequency of 40.65 [Hz]. For LLO, the details should be redone if a precise answer is needed.
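To make the bookkeeping explicit, a small sketch of the clock-cycle arithmetic (Python, using only numbers quoted above; the 100 [Hz] evaluation reproduces the ~0.5 [deg] GDS figure quoted earlier):

    fs = 16384                     # 16 [kHz] front-end rate
    t_cycle = 1.0 / fs             # one clock cycle = 61.04 [us]

    ideal = 442.6e-6               # "true" inter-chain delay [s]
    n = round(ideal / t_cycle)     # -> 7 cycles
    applied = n * t_cycle          # 427.25 [us]
    residual = ideal - applied     # ~15.3 [us] systematic error

    # Phase error from the residual delay, in degrees
    print(360 * 40.65 * residual)  # ~0.22 [deg] at the 40.65 [Hz] DARM UGF
    print(360 * 100.0 * residual)  # ~0.55 [deg] at 100 [Hz]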
To ride out earthquakes better, we would like a boost in DHARD yaw (alog 21708). I exported the DHARD YAW OLG measurement posted in alog 20084, made a fit, and tried a few different boosts (plots attached).
I think a reasonable solution is to use a pair of complex poles at 0.35 Hz with a Q of 0.7, and a pair of complex zeros at 0.7 Hz with a Q of 1 (and of course a high-frequency gain of 1). This gives us 12 dB more gain at DC than we have now, and we still have an unconditionally stable loop with 45 degrees of phase everywhere.
A foton design string that accomplishes this is
zpk([0.35+i*0.606218;0.35-i*0.606218],[0.25+i*0.244949;0.25-i*0.244949],9,"n")gain(0.444464)
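As a sanity check on the design (a scipy sketch of the equivalent continuous-time boost, not the foton implementation itself), built from the same pole/zero parameters quoted above:

    import numpy as np
    from scipy import signal

    def complex_pair(f0, Q):
        # s-plane complex-conjugate pair at frequency f0 [Hz] with quality factor Q
        w0 = 2 * np.pi * f0
        re = -w0 / (2 * Q)
        im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
        return [re + 1j * im, re - 1j * im]

    zeros = complex_pair(0.7, 1.0)   # complex zeros at 0.7 Hz, Q = 1
    poles = complex_pair(0.35, 0.7)  # complex poles at 0.35 Hz, Q = 0.7
    sys = signal.ZerosPolesGain(zeros, poles, 1.0)  # unity high-frequency gain

    w = 2 * np.pi * np.logspace(-2, 1, 500)
    _, h = signal.freqresp(sys, w)
    print(20 * np.log10(abs(h[0])))  # DC gain: (0.7/0.35)**2 = 4, i.e. +12 dB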
I don't want to save the filter right now because, as I learned earlier today, that will cause an error on the CDS overview until the filter is loaded, but there is an unsaved version open on opsws5. If anyone gets a chance to try this at the start of maintenance tomorrow it would be awesome. Any of the boosts currently in the DHARD yaw filter bank can be overwritten.
We tried this out this morning; I turned the filter on at 15:21, and it was on for several hours. The first screenshot shows error and control spectra with the boost on and off. As you would expect, there is a modest increase in the control signal at low frequencies and a bit more suppression of the error signal. The IFO was locked during maintenance activities (including Praxair deliveries), so there was a lot of noise in DARM. I tried on/off tests to see if the filter was causing the excess noise, and saw no evidence that it was.
We didn't get the earthquake I was hoping we would have during the maintenance window, but there was some large ground motion due to activities on site. The second attached screenshot shows a lockloss when the Chilean earthquake hit (21774), the time when I turned on the boost this morning, and the increased ground motion during maintenance day. The maintenance-day ground motion that we rode out with the boost on was 2-3 times higher than the EQ, but not all at the same time in all stations.
We turned the filter back off before going to observing mode, and Laura is taking a look to see if there was an impact on the glitch rate.
I took a look at an hour's worth of data after the calibration changes were stable and the filter was on (I sadly can't use much more time). I also chose a similar time period from this afternoon where things seemed to be running fine without the filter on. Attached are glitchgrams and trigger rate plots for the two periods. The trigger rate plots show data binned into 5 minute intervals.
When the filter was on we were in active commissioning, so the presence of high-SNR triggers is not so surprising. The increased glitch rate around 6 minutes is from Sheila performing some injections. In the trigger rate plots I am mainly looking to see if there is an overall change in the rate of low-SNR triggers (i.e. the blue dots), which contribute the majority of the background. In the glitchgram plots I am looking to see if I can see a change of structure.
Based upon the two time periods I have looked at, I would estimate the filter does not have a large impact on the background; however, I would like more stable time with the filter on to further confirm.
J. Kissel, on behalf of P. Fritschel & M. Wade

Peter and Maddie have been trying to understand the discrepancies seen between the CAL-CS front-end calibration and the (currently offline running) GDS pipeline -- see Maddie's comparisons in LHO aLOG 21638. Peter put together an excellent summary on the calibration mailing list that's worth reproducing here, because it motivates changing the actuation path delay in the CAL-CS model, which we intend to do tomorrow. We will change the actuation delay from its current 4 clock cycles to Peter's suggested 7 clock cycles.

On Sep 17, 2015, at 6:39 PM, Peter Fritschel wrote:

Maddie, et al.,

I spent some time looking into this (GDS vs CALCS) today, and I think I have a few insights to share. Bottom line is that I think the GDS code is doing the right thing, and that the corrections [to the front-end calibration that are used] make sense given the way things are done. And, I think there is a simple way to make the CAL-CS output get closer to the GDS output.

As Maddie pointed out, the amplitude corrections we are seeing from the GDS code in the bucket (50-300 Hz or so) are caused mainly by the phase from the anti-alias (AA) and anti-image (AI) filters, which are accounted for in the GDS model but not in the CAL-CS one. Maddie already gave some numbers for 100 Hz, and pointed out that the relative phase shift she is applying (16.4 degrees) is 8 degrees larger than the relative phase shift that the CAL-CS model applies (8.8 degrees, from 244 usec). I'm referring to the relative phase shift between the DELTAL_CTRL and DELTAL_RESIDUAL signals.

The first thing to note is that this difference is going to have different effects on the L1 and H1 GDS calibration, because they have different DARM open loop gain transfer functions. The simple picture for the region we are talking about is that we are looking at the errors in the sum: 1 + a*exp(i*phi), as a function of small changes in phi. Here, the '1' represents the DARM error signal, 'a' represents the DARM control signal, and is less than one (but not much smaller than 1). 'phi' is the relative phase between the two channels, and it is errors in this phase (or small changes to it) that we are talking about.

The magnitude of the sum is most sensitive to changes in phi for phi = 90 deg. So to bound the effect, assume phi = 90 deg. At this point, the sensitivity is approximately: d|sum|/dphi = a.

Sticking with 100 Hz as an example, the error in phi that GDS is correcting is 8 degrees, or phi = 0.14 rad. 'a' is the DARM open loop gain at 100 Hz, which is different for L1 and H1:
L1, a = 0.6 --> d|sum| = 0.084
H1, a = 0.4 --> d|sum| = 0.056
These are the maximum possible errors, depending on 'phi'. Maddie's latest plots show a correction at 100 Hz of 7% for L1, 3.5% for H1. Quite understandable.

For higher frequencies, the phase error is going to increase, but 'a' (open loop gain) will decrease, so you need to look at both. At these frequencies the phase shift/lag from the AA and AI filters (digital and analog) is linear in frequency, so we can easily make the extrapolations. Maddie's comparison plot shows that the biggest relative difference is at 250 Hz, where it is 9%. At 250 Hz, the phase shift error is going to grow to (250/100)*8 = 20 deg = 0.35 rad. For L1, the DARM OLG at 250 Hz is about 0.3 in magnitude (a). So the maximum error is: d|sum| = 0.105 = 10.5% (vs. 9% observed). For H1, Maddie's plot shows a relative difference of about 8% at just below 300 Hz -- say 280 Hz.
The phase shift error will be (280/100)*8 = 22.4 deg = 0.4 rad. The H1 OLG at 280 Hz is about 0.2 in magnitude. So the maximum error would be: d|sum| = 0.08 = 8% (vs. 8% observed). I think the frequencies where the differences go to very small values in Maddie's plots, like 150 Hz for LHO, are frequencies where phi = 0 mod pi, for which |sum| is to first order insensitive to dphi.

OK, so now I can believe that it is realistic to see the kinds of amplitude corrections that Maddie is seeing, in 'the bucket'. However, the above picture also suggests how CAL-CS should be able to get much closer to the GDS output. The frequencies where this is an issue are where 'a' (OLG magnitude) is not too small. But at these frequencies (below ~500 Hz), the phase lags from the AA/AI filters are very nearly linear in frequency. Thus, they can be well approximated by a time delay. So here's the suggestion: Why not increase the time delay that is applied in the CAL-CS model to approximate the AA/AI filter effects? Adding 3 more sample delays would come close: 3 sample delays = 183 usec; phase shift at 100 Hz = 6.6 degrees.
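To make Peter's numbers easy to re-derive, a tiny sketch of his linearized worst-case bound (Python; the function name is ours -- it just evaluates d|sum| = a * dphi at the worst-case phi = 90 deg):

    import numpy as np

    def max_mag_error(a, dphi_deg):
        # worst-case magnitude error of |1 + a*exp(i*phi)| for a phase error dphi
        return a * np.radians(dphi_deg)

    print(max_mag_error(0.6, 8))     # L1, 100 Hz: 0.084 (vs. 7% observed)
    print(max_mag_error(0.4, 8))     # H1, 100 Hz: 0.056 (vs. 3.5% observed)
    print(max_mag_error(0.3, 20))    # L1, 250 Hz: 0.105 (vs. 9% observed)
    print(max_mag_error(0.2, 22.4))  # H1, 280 Hz: 0.078 (vs. 8% observed)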
Check out the attachment to LHO aLOG 21815 for a graphical representation of why seven 16 [kHz] clock-cycles were chosen. Also in the above email, Peter has *not* included delay for the OMC DCPD signal chain, he has *only* considered extra delay from the AA and AI filtering.