Reports until 18:19, Tuesday 22 September 2015
H1 CAL (AOS, CAL)
sudarshan.karki@LIGO.ORG - posted 18:19, Tuesday 22 September 2015 - last comment - 18:35, Wednesday 23 September 2015(21817)
Time Varying Calibration Parameters- Updates

SudarshanK, DarkhanT

We were using a 137-degree correction factor on kappa_tst in our time-varying parameter calculation (alog 21594). Darkhan found a negative sign placed at the wrong position in the DARM model, which gave us back 180 degrees of phase. Additionally, Shivaraj found that we were not accounting for the DAQ downsampling filter used on the ESD calibration line. These two factors gave us back almost all of the phase we were missing. There was also an analog antialiasing filter missing in the actuation TF applied in the new model. After these corrections, Darkhan created the new updated EPICS variables. These EPICS variables are committed at:

CalSVN/Runs/O1/Scripts/CAL_EPICS

Using these new EPICS variables, kappas were recalculated for LHO. For LLO, these EPICS variables do not exist yet. The new plot is attached below. The imaginary parts of all the kappas are now close to their nominal value of 0, and the real parts are within a few percent (2-3%) of their nominal value of 1, which is within the uncertainty of the model. The cavity pole is still off from its nominal value of 341 Hz but has stayed constant over time.
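A minimal sketch of the sanity check described above. The function name and the thresholds (3% on the real part, 0.03 on the imaginary part) are illustrative assumptions, not the actual CAL_PARAM tolerances:

```python
# Hypothetical consistency check on time-varying calibration factors (kappas).
# Nominal value is 1 + 0j; tolerances here are illustrative only.

def kappa_ok(kappa, re_tol=0.03, im_tol=0.03):
    """Return True if a complex kappa is consistent with its nominal value 1+0j."""
    return abs(kappa.real - 1.0) < re_tol and abs(kappa.imag) < im_tol

# Example values: real parts a few percent from 1, imaginary parts near 0,
# as reported for the recalculated LHO kappas.
kappas = {"kappa_tst": 1.02 + 0.004j,
          "kappa_pu": 0.98 - 0.006j,
          "kappa_c": 1.015 + 0.002j}
results = {name: kappa_ok(k) for name, k in kappas.items()}
```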

The script to calculate these time varying factors is committed to SVN:

LHO: CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CAL_PARAM/

LLO: CalSVN/aligocalibration/trunk/Runs/ER8/L1/Scripts/CAL_PARAM/

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:44, Tuesday 22 September 2015 (21821)DetChar, ISC
Recall that Stefan made changes to the OMC Power Scaling on Sunday 13 September 2015 (in the late evening PDT, which means Sept 14th UTC). One can see the difference in character (i.e. the subsequent consistency) of kappa_C after this change on Sudarshan's attached plot. 

One can also see that, for a given lock stretch, the change in optical gain is no more than ~2-3%. That means the ~5 [Mpc] trends we see in our 75 [Mpc] in-spiral range, which we've seen evolve over long, 6+ hour lock stretches, cannot be entirely attributed to optical gain fluctuations, as we've been flippantly sure of and claiming.

However, now that we've started calculating these values in the GDS pipeline (LHO aLOGs 21795 and 21812), it will be straight-forward to make comparative plots between the calculated time dependent parameters and every other IFO metric we have. And we will! You can too! Stay tuned!
evan.hall@LIGO.ORG - 18:35, Wednesday 23 September 2015 (21871)

Just to drive the point home, I took 15 hours' worth of range and optical gain data from our ongoing 41+ hour lock. The optical gain fluctuates by a few percent, but the range fluctuates by more like 10 %.
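To make "a few percent" vs "~10%" concrete, the fractional fluctuation of each trend can be compared; a sketch with synthetic placeholder numbers, not the actual 15-hour data:

```python
# Fractional fluctuation (std/mean) of a time series, used to compare
# optical-gain scatter (~few %) against in-spiral-range scatter (~10 %).
# The sample values below are made-up stand-ins for the real trends.

def fractional_fluctuation(samples):
    """Standard deviation of the samples divided by their mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return (var ** 0.5) / mean

optical_gain = [1.00, 1.01, 0.99, 1.02, 0.98]   # ~1-2% scatter
range_mpc = [75.0, 80.0, 70.0, 82.0, 68.0]      # ~7% scatter

gain_frac = fractional_fluctuation(optical_gain)
range_frac = fractional_fluctuation(range_mpc)
```

If the range scatter is several times the optical-gain scatter, as here, optical gain alone cannot explain the range trend.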

Non-image files attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 17:51, Tuesday 22 September 2015 (21814)
Leak hunting at Y-mid
Kyle, Gerardo 

~1000 - 1315 hrs. local 

~1540 - 1720 hrs. local 

Sprayed CF flanges between Y-1 and Y-2 excluding GV9,10,11 and 12 lead screw nipples (purposefully excluded lead screw bellows too)

SETUP
Y-mid turbo backed by LD (QDP80 running but valved-out). A 6 x 10-8 torr*L/sec external calibrated leak measured as 7 x 10-8 torr*L/sec. 4-5 LPM helium flow with 25-100 second dwell. Indicated helium background initially at 9 x 10-9 torr*L/sec; fell steadily during testing, eventually going off scale (< 10-11 torr*L/sec).

RESULTS
Looked like a response when spraying near closed vent/purge valve (high pressure, O-ring side) but couldn't duplicate after lunch.  Soft-cycled IP9 isolation valve - pressure went up when closed.

Shut down pumps and leak detector - Leaving turbo controller on overnight to ensure rotor stays levitated until at rest
H1 DCS (CAL, DCS)
gregory.mendell@LIGO.ORG - posted 17:35, Tuesday 22 September 2015 - last comment - 18:43, Wednesday 23 September 2015(21812)
Channels in the DMT (GDS) hoft frames with the strain channel and calibration factors

These are the channels in the DMT (GDS) hoft frames, which include the calibrated strain channel (H1:GDS-CALIB_STRAIN) and the calibration factors (the kappas):

H1:GDS-CALIB_STATE_VECTOR 16
H1:ODC-MASTER_CHANNEL_OUT_DQ 16384
H1:GDS-CALIB_STRAIN 16384
H1:GDS-CALIB_KAPPA_A_REAL 16
H1:GDS-CALIB_KAPPA_A_IMAGINARY 16
H1:GDS-CALIB_KAPPA_TST_REAL 16
H1:GDS-CALIB_KAPPA_TST_IMAGINARY 16
H1:GDS-CALIB_KAPPA_PU_REAL 16
H1:GDS-CALIB_KAPPA_PU_IMAGINARY 16
H1:GDS-CALIB_KAPPA_C 16
H1:GDS-CALIB_F_CC 16

These channels should be available using NDS2.

(For LLO the channels are the same with: H1-> L1.)
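Since the LLO list differs only by the IFO prefix, it can be derived mechanically; a sketch using a subset of the channels listed above:

```python
# Derive the L1 channel list from the H1 one by swapping the IFO prefix,
# per the note that for LLO the channels are the same with H1 -> L1.
h1_channels = {
    "H1:GDS-CALIB_STATE_VECTOR": 16,
    "H1:GDS-CALIB_STRAIN": 16384,
    "H1:GDS-CALIB_KAPPA_TST_REAL": 16,
    "H1:GDS-CALIB_KAPPA_C": 16,
    "H1:GDS-CALIB_F_CC": 16,
}
l1_channels = {name.replace("H1:", "L1:", 1): rate
               for name, rate in h1_channels.items()}
```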

Comments related to this report
evan.hall@LIGO.ORG - 18:43, Wednesday 23 September 2015 (21872)

I strongly suggest we add EPICS mirrors of these channels (similar to what was done for the sensemon range). This will ensure that (1) they are available in dataviewer, and (2) we have trend data of these channels. We want to be able to look at long-term (week- or month-long) fluctuations of these parameters during O1.

H1 General
thomas.shaffer@LIGO.ORG - posted 17:20, Tuesday 22 September 2015 (21811)
Added a few things to the OPS_OVERVIEW today

Two things added:

  1. If the script polling GraceDB fails, a red box with "GraceDB Query Failure" will appear in the lower right corner. Please inform the CDS group if this appears; until it is resolved, the Operator should monitor GraceDB for any new events.
  2. On the bottom center, there is a box with the CW Inj signal and Transient Inj signal. The CW inj indicator is red when there is no injection (like right now) and green when there is. The Transient inj indicator is purple when there is an injection and grey when there is not.
Images attached to this report
H1 CAL (CDS, INJ)
thomas.shaffer@LIGO.ORG - posted 17:11, Tuesday 22 September 2015 - last comment - 10:06, Wednesday 23 September 2015(21810)
New CAL_INJ_CONTROL medm with updated ext_alert.py

Updated the CAL_INJ_CONTROL medm. It is organized a bit differently, labels have changed slightly, and it even has a new button! Duncan Macleod supplied us with an updated ext_alert.py that polls GraceDB for new events (both "E" and "G" types), places the new info in some EPICS records, and then automatically pauses injections for either 3600 s or 10800 s depending on the event.

The Transient Injection Control now has the ability to zero out the pause inj channel. Why is this necessary? The script running in the background of this screen will automatically PAUSE the injections when a new external event alert is detected. If we are down when we get a GRB alert, the script should still pause the injections. The Operator will then need to enable the injections and zero the pause time.

One other thing for Operators to look out for is the case where we want the injections stopped for longer than the automatic pause time. If we disable the injections by clicking the "Disable" button and a new event then comes in, the control will automatically switch from Disabled --> Paused (this happened to us a few minutes after we started up the script). I am not 100% positive on this, but it seems that when the pause time is up the injections will resume. If so, it's definitely something Operators need to watch for.

We will see how this goes and make changes if necessary.

New screen shot attached.

Images attached to this report
Comments related to this report
peter.shawhan@LIGO.ORG - 19:24, Tuesday 22 September 2015 (21823)INJ
There was apparently some confusion about pausing mechanisms; see alog 21822.  If the scheme referred to there is restored, the PAUSE and ENABLE features will be fully under the control of the operators.  Independently, injections will automatically be paused by the action of the GRB alert code setting the CAL-INJ_EXTTRIG_ALERT_TIME channel.  I have emailed Duncan to try to sort this out.
thomas.shaffer@LIGO.ORG - 10:06, Wednesday 23 September 2015 (21842)

Last night there were two GRB alerts that paused the injections, and they DID NOT enable Tinj. The Tinj Control went back to Disabled as we had set it previously. This is good and works as outlined in the HWInjBookkeeping wiki (thank you Peter Shawhan!). This was my main worry, and it seems it has already been taken care of. It is a bit misleading when the Tinj control goes from Disabled --> Paused and begins to count up to the "Pause Until" time, but trending the channels shows that it will not enable Tinj once the times meet.

H1 General
cheryl.vorvick@LIGO.ORG - posted 16:19, Tuesday 22 September 2015 (21804)
OPS Day Summary:

TITLE: Sept 22 Day Shift: 15:00-23:00UTC (08:00-16:00 PDT), all times posted in UTC

 

STATE Of H1

Locked for more than 12 hours - including MAINTENANCE, and currently in observing.  

Range is currently 76.5 Mpc.

 

SUPPORT: Nutsinee, Ed, MikeL, JeffK

SHIFT SUMMARY

15:00-19:40UTC: Maintenance

19:40-23:00UTC: IFO is clear of any disturbances

21:12:30UTC: GRB alert arrives

22:50:00UTC: IFO has remained locked, and has now been locked for more than 12 hours

HIGHLIGHTS:

- IFO remained locked through all of Maintenance

- GRB arrived and dropped the "Observe" mode, but IFO was "Undisturbed" before the GRB and throughout the one hour stand-down time that followed.

- 19:40UTC to 23:00UTC+ IFO Data is GOOD.  See alogs from me and JeffK about why the Observe mode was dropped; the IFO was not affected in any way!

INCOMING OPERATOR: Jim

ACTIVITY LOG

- All earlier Maintenance activities logged in 21780 and 21794

- All activities from Maintenance are complete except for Kyle and Gerardo at MY.

20:41UTC - Kyle and Gerardo drive back from MY

22:41UTC - Kyle and Gerardo drive to MY to continue working

- Praxair delivery to CP4 and CP5.

- DMT issues from earlier in the day are resolved 

CURRENTLY:

- Kyle and Gerardo at MY

- MC2 Guardian is not managed, and Evan will fix at next lock loss

- Injections are disabled, and it's suggested the incoming GRB is responsible, but the investigation is ongoing - TJ

H1 CDS
filiberto.clara@LIGO.ORG - posted 16:10, Tuesday 22 September 2015 (21802)
O1 Rack Build - Photo Documentation
This morning we took pictures of the racks in the Corner Station and End Stations. This is to document the rack build for each subsystem for O1. Pictures will be uploaded to resource space. Attached are a few pictures of some of the racks in the CER.
Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 16:08, Tuesday 22 September 2015 (21803)
New MEDM screen showing front end SDF reference files being used

I have created a new MEDM file which shows which reference file (safe.snap or OBSERVE.snap) each front end model's SDF is using. It is created by a new script called create_fe_sdf_source_file_list.py. It shows that all systems have been ported to the new OBSERVE.snap standard with the exception of the IOP models, PEM models, SUSAUX models and ODCMASTER.

The new screen is accessible from the SITEMAP via the CDS pull down, "FE SDF Reference Files" item

Images attached to this report
LHO FMCS
bubba.gateley@LIGO.ORG - posted 15:55, Tuesday 22 September 2015 (21801)
New DCS room fire suppression system
The installation of the DCS room fire suppression system is complete and the system is functional. 
H1 SEI
hugh.radkins@LIGO.ORG - posted 14:50, Tuesday 22 September 2015 (21799)
HEPI Fluid Checks show good

no further maintenance called for.

H1 INJ (CAL, DetChar)
jeffrey.kissel@LIGO.ORG - posted 14:34, Tuesday 22 September 2015 - last comment - 19:19, Tuesday 22 September 2015(21798)
H1:CAL-INJ_TINJ_PAUSE Changed to NOT MONITORED
J. Kissel

Some combination of Dave, Jim, Duncan and TJ installed updates to the GRB alert code this morning during maintenance. This updated code now hits the "pause" button on the hardware injection software TINJ when it receives a GRB alert. There is an EPICS record, H1:CAL-INJ_TINJ_PAUSE, which records the GPS time at which TINJ was paused. Somehow, this record -- which is used as a readback / storage of information, not a setting -- got missed when we went through the un-monitoring of INJ settings-which-are-readbacks channels in the CAL-CS model (see LHO aLOG 21154).

So this afternoon, while in observation mode, we received a GRB alert and the updated code pushed the TINJ pause button, which then filled in the H1:CAL-INJ_TINJ_PAUSE EPICS record, which triggered an SDF difference in the CAL-CS front end, which took us out of science mode. #facepalm.

I've chosen to un-monitor this channel and accepted it in the OBSERVE.snap table of the SDF system to clear the restriction for observation mode.

Note -- when we are next out of observation mode, we need to switch to the SAFE.snap table, un-monitor this channel, and switch back to the OBSERVE.snap table. We can't do this now, because switching the table would show the DIFF again, and take us out of observation intent mode again. #doublefacepalm

Comments related to this report
jameson.rollins@LIGO.ORG - 14:51, Tuesday 22 September 2015 (21800)

As I similarly pointed out to the folks at LLO when they tried to implement something similar, having the GRB alert process pause the injection process is a bad model for how to chain the dependencies.  Is the GRB process expecting to unpause the injections as well?  How do you plan on handling this when there are multiple external alert processes trying to pause the injections?  They're all just going to be pausing and un-pausing as they see fit?  Bad plan.

peter.shawhan@LIGO.ORG - 19:19, Tuesday 22 September 2015 (21822)
Apparently some confusion about this resurfaced after we had (I thought) resolved it in late August (alog 20013).  Following the original scheme, CAL-INJ_TINJ_PAUSE and CAL-INJ_TINJ_ENABLE are intended to be under the control of the human operator to set or unset.  In parallel, tinj also pauses injections automatically for one hour following the GPS time inserted in CAL-INJ_EXTTRIG_ALERT_TIME by the GRB alert code, ext_alert.py .  I have emailed Duncan to try to sort this out.
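The gating Peter describes (operator PAUSE/ENABLE controls in parallel with an automatic one-hour stand-down keyed off CAL-INJ_EXTTRIG_ALERT_TIME) can be sketched as follows. This is assumed logic for illustration, not the actual tinj source:

```python
# Sketch of the injection gating described above. The one-hour window and the
# parallel operator/automatic channels follow the text; the function itself
# and its signature are assumptions made for illustration.
EXTTRIG_PAUSE_SECONDS = 3600  # automatic stand-down after a GRB alert

def injections_allowed(now_gps, exttrig_alert_time, operator_enable, operator_pause):
    """Injections run only if the operator has them enabled and unpaused AND
    we are outside the automatic post-GRB stand-down window."""
    auto_paused = now_gps < exttrig_alert_time + EXTTRIG_PAUSE_SECONDS
    return operator_enable and not operator_pause and not auto_paused
```

Note that the two mechanisms are independent: clearing the operator pause does not shorten the automatic window, and vice versa.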
H1 DetChar
cheryl.vorvick@LIGO.ORG - posted 14:29, Tuesday 22 September 2015 - last comment - 16:24, Tuesday 22 September 2015(21797)
GRB received at 21:12:30UTC, kicks IFO out of observing, back now

19:40:08UTC - IFO in Observe

21:12:30UTC - GRB arrives and updates an EPICS record that kicks SDF into RED, and drops IFO out of Observe

21:19:10UTC - IFO back into Observe

 

At this time, there's no indication that anything other than the change in an EPICS record occurred.

Comments related to this report
cheryl.vorvick@LIGO.ORG - 16:23, Tuesday 22 September 2015 (21805)DetChar

THE DROP FROM OBSERVE MODE AT 21:12:30UTC was NOT a change in the IFO, and ALL DATA 19:40UTC to the end of the lock (currently the IFO is still locked) are GOOD!

cheryl.vorvick@LIGO.ORG - 16:24, Tuesday 22 September 2015 (21806)DetChar

It appears that the GRB alarm disabled injections, so GWIstat is OK but yellow.  TJ and others are looking into it.

H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 10:32, Tuesday 22 September 2015 - last comment - 18:13, Tuesday 22 September 2015(21788)
Actuation Delay Cycles changed to 7 [clock cycles] to better Approximate Super-Nyquist Corrections
J. Kissel

I've increased the actuation delay before the sum of the RESIDUAL (sensing) and CTRL (actuation) paths in the CAL-CS reproduction of H1:CAL-CS_DARM_DELTAL_EXTERNAL from four 16 [kHz] clock cycles (244 [us]) to seven 16 [kHz] clock cycles (427 [us]). This is done by changing the H1:CAL-CS_DARM_CTRL_DELAY_CYCLES EPICS record. The change is motivated in LHO aLOG 21746. Recall this only affects the reproduction of the DARM displacement signal H1:CAL-CS_DARM_DELTAL_EXTERNAL (and therefore its ASD projected on the wall). I attach screenshots of the before and after. Note that the ASDs were taken ~30 minutes apart, so I don't expect the detailed structure to be the same. However, the change in phase at the sensing / actuation crossover causes an overall shape change in the bucket. Also note that the transfer function used to remove the systematics from the DELTAL EXTERNAL channel will now have to be updated (documented in LHO aLOG 20481) to avoid double counting the high-frequency effects. In the attached ASD, those corrections have *not* been changed, so we are likely double counting the corrections. More on that after I reconcile what Kiwamu had done and what Peter suggests.

I've captured the updated setting in both the OBSERVE.snap and SAFE.snap SDF files, and committed each to the repository.
Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:13, Tuesday 22 September 2015 (21815)
I've updated the slide I'd made in the pre-ER7 era that elucidates all of the time delays and approximations that we've made to come up with the seven 16 [kHz] clock-cycle delay between the actuation path and sensing path. The ER7 version is pg 7 of G1500750. See attached.

In summary, if we approximate all high-frequency response as delays in addition to the "true" delays from computer exchanges, the total delay *between* the inverse sensing and actuation chains should be 442.6 [us] if we weren't limited to 16 [kHz] clock cycles. Because we are so limited, we've chosen 7 cycles, a delay of 427.3 [us], leaving a 15.3 [us] systematic error.

You should also remember, that having this delay between the chains means that CAL-CS_DELTAL_EXTERNAL has an overall delay or "latency" equivalent to that of the inverse sensing function advance, which is 213.6 [us].

Note that these numbers are LHO-centric -- the approximation for the OMC DCPD signal chain of 40 [us] assumes the pole frequencies of the H1 OMC DCPD chain, and the estimation of the systematic error in phase uses H1's DARM unity gain frequency of 40.65 [Hz]. For LLO, the details should be redone if a precise answer is needed.
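The clock-cycle arithmetic quoted above is easy to reproduce; a short sketch using the numbers from this comment:

```python
# Delay arithmetic for the CAL-CS actuation path, using the numbers above.
F_CLOCK = 16384.0            # [Hz] front-end clock rate (16 kHz system)
TRUE_DELAY_US = 442.6        # [us] total modeled inter-chain delay

cycle_us = 1e6 / F_CLOCK     # one clock cycle ~= 61.0 [us]
old_delay_us = 4 * cycle_us  # previous setting, ~244 [us]
chosen_us = 7 * cycle_us     # new setting, ~427.2 [us]
error_us = TRUE_DELAY_US - chosen_us  # ~15.3 [us] systematic residual

# Phase error this residual produces at H1's DARM UGF of 40.65 Hz
ugf_hz = 40.65
phase_err_deg = 360.0 * ugf_hz * error_us * 1e-6  # a small fraction of a degree
```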

Images attached to this comment
Non-image files attached to this comment
H1 ISC (CDS, DetChar, ISC)
sheila.dwyer@LIGO.ORG - posted 09:46, Tuesday 22 September 2015 - last comment - 17:05, Tuesday 22 September 2015(21783)
Large RF AM glitch during maintenance activites

We saw a large glitch in the RF AM monitors with high coherence with DARM at around 16:13 UTC on Sept 22nd, while the IFO was locked and maintenance was happening.  There were people in the LVEA (though not near the PSL) and people in the CER, but they were near the SEI and SUS racks, not the ISC racks. The first attached plot shows this on a 5 hour time scale; the second plot has 5 days.  This can be compared to Evan's plots of the last 3 weeks (21766).

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 11:34, Tuesday 22 September 2015 (21789)

Starting around 2015-09-22 17:51:00 Z we had a few minutes of what appeared to be full-on instability of the RFAM stabilization servo. The control signal spectrum was >10× the typical value from 10 to 100 Hz. [Edit: actually, it looks like glitching; see below.]

I tried turning the modulation index down by as much as 1.5 dB, but there was no clear effect.

I've attached time series as a zipped DTT xml for the driver channels (control signal, error signal, OOL sensor) during such a glitchy period.

In the control signal, all the glitches I looked at have the same characteristic shape (see the screenshot with the zoomed time series): an upward spike, a slight decay, a downward spike, and then a slower decay back to the nominal control signal level.

Images attached to this comment
Non-image files attached to this comment
evan.hall@LIGO.ORG - 17:05, Tuesday 22 September 2015 (21809)

The control signal during the Γ-reduction attempts seems quite smooth; the 0.2-dB steps do not produce glitches.

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 23:07, Monday 21 September 2015 - last comment - 18:35, Tuesday 22 September 2015(21768)
DHARD YAW boost ready to go

To ride out earthquakes better, we would like a boost in DHARD yaw (alog 21708).  I exported the DHARD YAW OLG measurement posted in alog 20084, made a fit, and tried a few different boosts (plots attached).  

I think a reasonable solution is to use a pair of complex poles at 0.35 Hz with a Q of 0.7, and a pair of complex zeros at 0.7 Hz with a Q of 1 (and of course a high-frequency gain of 1).  This gives us 12 dB more gain at DC than we have now, and we still have an unconditionally stable loop with 45 degrees of phase margin everywhere.  

A foton design string that accomplishes this is

zpk([0.35+i*0.606218;0.35-i*0.606218],[0.25+i*0.244949;0.25-i*0.244949],9,"n")gain(0.444464)
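The roots in the string above can be cross-checked against the quoted (frequency, Q) pairs, assuming foton's "n" format expresses a complex pair in Hz as f/2Q ± i·f·sqrt(1 − 1/(4Q²)):

```python
import math

def complex_pair(f, q):
    """Real and imaginary parts (in Hz) of a complex root pair with
    natural frequency f and quality factor q (assumed 'n'-format convention)."""
    re = f / (2.0 * q)
    im = f * math.sqrt(1.0 - 1.0 / (4.0 * q * q))
    return re, im

zero_re, zero_im = complex_pair(0.7, 1.0)    # zeros at 0.7 Hz, Q = 1
pole_re, pole_im = complex_pair(0.35, 0.7)   # poles at 0.35 Hz, Q = 0.7

# DC gain boost relative to the unity high-frequency gain:
# (f_zero / f_pole)^2 = 4x, i.e. about 12 dB, matching the text.
dc_boost_db = 20.0 * math.log10((0.7 / 0.35) ** 2)
```

The computed roots (0.35 ± 0.606218i and 0.25 ± 0.244949i) match the numbers in the zpk string.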

I don't want to save the filter right now because, as I learned earlier today, that will cause an error on the CDS overview until the filter is loaded, but there is an unsaved version open on opsws5.  If anyone gets a chance to try this at the start of maintenance tomorrow, it would be awesome.  Any of the boosts currently in the DHARD yaw filter bank can be overwritten. 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 13:50, Tuesday 22 September 2015 (21796)

We tried this out this morning; I turned the filter on at 15:21, and it was on for several hours.  The first screenshot shows error and control spectra with the boost on and off.  As you would expect, there is a modest increase in the control signal at low frequencies and a bit more suppression of the error signal.  The IFO was locked during maintenance activities (including Praxair deliveries), so there was a lot of noise in DARM.  I tried on/off tests to see if the filter was causing the excess noise, and saw no evidence that it was.  

We didn't get the earthquake I was hoping we would have during the maintenance window, but there was some large ground motion due to activities on site.  The second attached screenshot shows the lockloss when the Chilean earthquake hit (21774), the time when I turned on the boost this morning, and the increased ground motion during maintenance day.  The maintenance-day ground motion that we rode out with the boost on was 2-3 times higher than the EQ, but not all at the same time in all stations.  

We turned the filter back off before going to observing mode, and Laura is taking a look to see if there was an impact on the glitch rate.  

Images attached to this comment
laura.nuttall@LIGO.ORG - 18:35, Tuesday 22 September 2015 (21820)

I took a look at an hour's worth of data after the calibration changes were stable and the filter was on (sadly, I can't use much more time). I also chose a similar time period from this afternoon where things seemed to be running fine without the filter on. Attached are glitchgrams and trigger rate plots for the two periods. The trigger rate plots show data binned into 5 minute intervals.

When the filter was on we were in active commissioning, so the presence of high SNR triggers is not so surprising. The increased glitch rate around 6 minutes is from Sheila performing some injections. In the trigger rate plots I am mainly looking to see if there is an overall change in the rate of low SNR triggers (i.e. the blue dots), which contribute the majority of the background. In the glitchgram plots I am looking to see if I can see a change of structure.

Based upon the two time periods I have looked at, I would estimate the filter does not have a large impact on the background; however, I would like more stable time with the filter on to further confirm this.

Images attached to this comment
H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 14:51, Monday 21 September 2015 - last comment - 18:15, Tuesday 22 September 2015(21746)
Motivation to increase Actuation Path Delay before the SUM in CAL-DELTAL_EXTERNAL
J. Kissel on behalf of P. Fritschel & M. Wade

Peter and Maddie have been trying to understand the discrepancies seen between the CAL-CS front-end calibration and the (currently offline) running of the GDS pipeline -- see Maddie's comparisons in LHO aLOG 21638.

Peter put together an excellent summary on the calibration mailing list that's worth reproducing here, because it motivates changing the actuation path delay in the CAL-CS model, which we intend to do tomorrow. We will change the actuation delay from its current 4 clock cycles to Peter's suggested 7 clock cycles.

On Sep 17, 2015, at 6:39 PM, Peter Fritschel  wrote:

Maddie, et al.,

I spent some time looking into this (GDS vs CALCS) today, and I think I have a few
insights to share.

Bottom line is that I think the GDS code is doing the right thing, and that the corrections
[to the front-end calibration that are used] make sense given the way things are done. And, I think there is a simple
way to make the CAL-CS output get closer to the GDS output.

As Maddie pointed out, the amplitude corrections we are seeing from the GDS code in the
bucket (50-300 Hz or so) are caused mainly by the phase from the anti-alias (AA) and 
anti-image (AI) filters, which are accounted for in the GDS model but not in the CAL-CS one. 

Maddie already gave some numbers for 100 Hz, and pointed out that the relative phase shift
she is applying (16.4 degrees) is 8 degrees larger than the relative phase shift that
the CAL-CS model applies (8.8 degrees, from 244 usec). I’m referring to the relative 
phase shift between the DELTAL_CTRL and DELTAL_RESIDUAL signals.

The first thing to note is that this difference is going to have different effects on the
L1 and H1 GDS calibration, because they have different DARM open loop gain transfer functions. 

The simple picture for the region we are talking about is we are looking at the errors
in the sum: 1 + a*exp(i*phi), as a function of small changes in phi. Here, the ‘1’ represents
the DARM error signal, ‘a’ represents the DARM control signal, and is less than one (but 
not much smaller than 1). ‘phi’ is the relative phase between the two channels, and it is
errors in this phase (or small changes to) that we are talking about. The magnitude of the
sum is most sensitive to changes in phi for phi = 90 deg. So to bound the effect, assume
phi = 90 deg. At this point, the sensitivity is approximately:

   d|sum|/dphi = a

Sticking with 100 Hz as an example, the error in phi that GDS is correcting is 8 degrees,
or phi = 0.14 rad. ‘a’ is the DARM open loop gain at 100 Hz, which is different for L1 
and H1:

   L1, a = 0.6 —>  d|sum| = 0.084
   H1, a = 0.4  —> d|sum| = 0.056

These are the maximum possible errors, depending on ‘phi’. Maddie’s latest plots show a
correction at 100 Hz of 7% for L1, 3.5% for H1. Quite understandable.

For higher frequencies, the phase error is going to increase, but ‘a’ (open loop gain)
will decrease, so you need to look at both.

At these frequencies the phase shift/lag from the AA and AI filters (digital and analog)
is linear in frequency, so we can easily make the extrapolations. 

Maddie’s comparison plot shows that the biggest relative difference is at 250 Hz, where it
is 9%. At 250 Hz, the phase shift error is going to grow to (250/100)*8 = 20 deg = 0.35 rad.
For L1, the DARM OLG at 250 Hz is about 0.3 in magnitude (a). So the maximum error is:

  d|sum| = 0.105 = 10.5%. (vs. 9% observed)

For H1, Maddie’s plot shows a relative difference of about 8% at just below 300 Hz- say 280 Hz.
The phase shift error will be (280/100)*8 = 22.4 deg = 0.4 rad. The H1 OLG at 280 Hz is about
0.2 in magnitude. So the maximum error would be:

   d|sum| = 0.08 = 8%. (vs. 8% observed)

I think the frequencies where the differences go to very small values in Maddie’s plots, like
150 Hz for LHO, are frequencies where phi = 0 mod pi, for which |sum| is to first order
insensitive to dphi. 

OK, so now I can believe that it is realistic to see the kinds of amplitude corrections
that Maddie is seeing, in ‘the bucket’. 

However, the above picture also suggests how CAL-CS should be able to get much closer to
the GDS output. The frequencies where this is an issue is where ‘a’ (OLG magnitude) is not
too small. But at these frequencies (below ~500 Hz), the phase lags from the AA/AI filters are 
very nearly linear in frequency. Thus, they can be well approximated by a time delay.

So here’s the suggestion: Why not increase the time delay that is applied in the CAL-CS 
model to approximate the AA/AI filter effects? Adding 3 more sample delays would come close:

   3 sample delay = 183 usec; phase shift at 100 Hz = 6.6 degree
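Peter's bound, d|sum|/dphi ≈ a at phi = 90 deg, reproduces the numbers in the email above; a quick sketch (the a and dphi values are the ones quoted, and max_sum_error is a name introduced here for illustration):

```python
import math

# Error bound from Peter's email: the magnitude of 1 + a*exp(i*phi) is most
# sensitive to phase errors near phi = 90 deg, where d|sum| ~= a * dphi.
def max_sum_error(a, dphi_deg):
    """Bound on the error in |1 + a*exp(i*phi)| for a phase error dphi."""
    return a * math.radians(dphi_deg)

l1_100hz = max_sum_error(0.6, 8.0)    # ~0.084, vs 7% correction observed
h1_100hz = max_sum_error(0.4, 8.0)    # ~0.056, vs 3.5% observed
l1_250hz = max_sum_error(0.3, 20.0)   # ~0.105, vs 9% observed
h1_280hz = max_sum_error(0.2, 22.4)   # ~0.078, vs 8% observed
```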
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:15, Tuesday 22 September 2015 (21816)
Check out the attachment to LHO aLOG 21815 for a graphical representation of why seven 16 [kHz] clock-cycles were chosen.

Also in the above email, Peter has *not* included delay for the OMC DCPD signal chain, he has *only* considered extra delay from the AA and AI filtering.
H1 SUS
keith.riles@LIGO.ORG - posted 13:35, Sunday 20 September 2015 - last comment - 09:42, Wednesday 23 September 2015(21696)
Pair of lines near 41 Hz
It was noted recently elsewhere that there are a pair of lines in DARM near 41 Hz
that may be the roll modes of triplet suspensions. In particular, there is
a prediction of 40.369 Hz for the roll mode labeled ModeR3.

Attached is a zoom of displacement spectrum in that band from 50 hours of early 
ER8 data. Since one naively expects a bounce mode at 1/sqrt(2) of the roll mode,
also attached is a zoom of that region for which the evidence of
bounce modes seems weak. The visible lines are much narrower,
and one coincides with an integer frequency.

For completeness, I also looked at various potential subharmonics  and harmonics
of these lines, in case the 41-Hz pair come from some other source with non-linear coupling. 
The only ones that appeared at all plausible were at about 2/3 of 41 Hz.

Specifically, the peaks at 40.9365 and 41.0127 Hz have potential 2/3 partners at
27.4170 and 27.5025 Hz (ratios: 0.6697 and 0.6706) -- see 3rd attachment. The 
non-equality of the ratios with 0.6667 is not necessarily inconsistent with a harmonic
relation, since we've seen that quad suspension violin modes do not follow a strict harmonic 
progression, and triplets are almost as complicated as quads. On the other hand, I do not see
any evidence at all for the 4th or 5th harmonics in the data set, despite the comparable strain
strengths seen for the putative 2nd and 3rd harmonics. 

Notes:
* The frequency ranges of the three plots are chosen so that the two peaks would
appear in the same physical locations in the graphs if the nominal sqrt(2) and 2/3 relations were exact.
* There is another, smaller peak of comparable width between the two peaks near 27 Hz,
which may be another mechanical resonance.
* The 27.5025-Hz line has a width that encompasses a 25.5000-Hz line that is part of a
1-Hz comb with a 0.5-Hz offset reported previously.
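The quoted ratios and the naive 1/sqrt(2) bounce relation are easy to verify numerically:

```python
import math

# Ratio checks for the 41 Hz line pair and their candidate 2/3 partners,
# using the frequencies quoted in the report.
roll_lines = (40.9365, 41.0127)      # [Hz] the pair near 41 Hz
partner_lines = (27.4170, 27.5025)   # [Hz] candidate 2/3 partners

# These come out to ~0.6697 and ~0.6706, close to but not equal to 2/3.
ratios = tuple(p / r for p, r in zip(partner_lines, roll_lines))

# Naive expectation: a bounce mode at 1/sqrt(2) of the predicted
# 40.369 Hz ModeR3 roll frequency, i.e. ~28.55 Hz.
predicted_bounce = 40.369 / math.sqrt(2.0)
```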
Images attached to this report
Comments related to this report
nelson.christensen@LIGO.ORG - 14:09, Sunday 20 September 2015 (21698)DetChar, PEM
We are looking for the source of the 41 Hz noise lines. 
We used the coherence tool results for a week of ER8, with 1 mHz resolution:
https://ldas-jobs.ligo-wa.caltech.edu/~eric.coughlin/ER7/LineSearch/H1_COH_1123891217_1124582417_SHORT_1_webpage/
and as a guide looked at the structure of the 41 Hz noise, as seen in the PSD posted above by Keith.
Michael Coughlin then ran the tool that plots coherence vs channels, 
https://ldas-jobs.ligo-wa.caltech.edu/~mcoughlin/LineSearch/bokeh_coh/output/output-pcmesh-40_41.png
and made the following observations

Please see below. I would take a look at the MAGs listed, they only seem to be spiking at these frequencies.
The channels that spike just below 40.95:
 H1:SUS-ETMY_L3_MASTER_OUT_UR_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_UL_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_LR_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_LL_DQ
 H1:SUS-ETMY_L2_NOISEMON_UR_DQ
 H1:SUS-ETMY_L2_NOISEMON_UL_DQ
 H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ
 H1:PEM-CS_MAG_EBAY_SUSRACK_Y_DQ
 H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ

The channels that spike just above 41.0 are:

 H1:SUS-ITMY_L2_NOISEMON_UR_DQ
 H1:SUS-ITMY_L2_NOISEMON_UL_DQ
 H1:SUS-ITMY_L2_NOISEMON_LR_DQ
 H1:SUS-ITMY_L2_NOISEMON_LL_DQ
 H1:SUS-ITMX_L2_NOISEMON_UR_DQ
 H1:SUS-ITMX_L2_NOISEMON_UL_DQ
 H1:SUS-ITMX_L2_NOISEMON_LR_DQ
 H1:SUS-ITMX_L2_NOISEMON_LL_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_UR_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_UL_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_LR_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_LL_DQ
 H1:SUS-ETMY_L2_NOISEMON_UR_DQ
 H1:SUS-ETMY_L2_NOISEMON_UL_DQ
 H1:SUS-ETMY_L2_NOISEMON_LR_DQ
 H1:SUS-ETMY_L2_NOISEMON_LL_DQ
 H1:SUS-ETMY_L1_NOISEMON_UR_DQ
 H1:SUS-ETMY_L1_NOISEMON_UL_DQ
 H1:SUS-ETMY_L1_NOISEMON_LR_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_UR_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_UL_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_LR_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_LL_DQ
 H1:SUS-ETMX_L2_NOISEMON_UR_DQ
 H1:SUS-ETMX_L2_NOISEMON_LL_DQ
 H1:PEM-EY_MAG_EBAY_SUSRACK_Z_DQ
 H1:PEM-EY_MAG_EBAY_SUSRACK_Y_DQ
 H1:PEM-EY_MAG_EBAY_SUSRACK_X_DQ
 H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS
 H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ

The magnetometers do show coherence at the two spikes seen in Keith's plot. The SUS channels are also showing coherence at these frequencies, sometimes broad in structure, sometimes narrow. See the coherence plots below.

Nelson, Michael Coughlin, Eric Coughlin, Pat Meyers
 
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 14:43, Sunday 20 September 2015 (21700)DetChar, PEM
Nelson, et al.

Interesting list of channels. Though they seem scattered, I can imagine a scenario where the SRM's highest roll mode frequency is still the culprit. 

All of the following channels you list are the drive signals for DARM. We're currently feeding back the DARM signal to only ETMY. So, any signal you see in the calibrated performance of the instrument, you will see here -- they are part of the DARM loop.
 H1:SUS-ETMY_L3_MASTER_OUT_UR_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_UL_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_LR_DQ
 H1:SUS-ETMY_L3_MASTER_OUT_LL_DQ
 H1:SUS-ETMY_L2_NOISEMON_UR_DQ
 H1:SUS-ETMY_L2_NOISEMON_UL_DQ
 H1:SUS-ETMY_L2_NOISEMON_LR_DQ
 H1:SUS-ETMY_L2_NOISEMON_LL_DQ
 H1:SUS-ETMY_L1_NOISEMON_UR_DQ
 H1:SUS-ETMY_L1_NOISEMON_UL_DQ
 H1:SUS-ETMY_L1_NOISEMON_LR_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_UR_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_UL_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_LR_DQ
 H1:SUS-ETMY_L1_MASTER_OUT_LL_DQ

Further -- though we'd have to test this theory by measuring the coherence between, say the NoiseMon channels and these SUS rack magnetometers, I suspect these magnetometers are just sensing the requested DARM drive control signal
 H1:PEM-EY_MAG_EBAY_SUSRACK_Z_DQ
 H1:PEM-EY_MAG_EBAY_SUSRACK_Y_DQ
 H1:PEM-EY_MAG_EBAY_SUSRACK_X_DQ

Now comes the harder part. Why are the ITMs and corner station magnetometers firing off? The answer: SRCL feed-forward / subtraction from DARM and perhaps even angular control signals. Recall that the QUADs' electronics chains are identical, in construction and probably in emission of magnetic radiation.
 H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ
 H1:PEM-CS_MAG_EBAY_SUSRACK_Y_DQ
 H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ
sound like they're in the same location for the ITMs as the EY magnetometer for the ETMs. We push SRCL feed-forward to the ITMs, and SRM is involved in SRCL, and also there is residual SRCL to DARM coupling left-over from the imperfect subtraction. That undoubtedly means that the ~41 [Hz] mode of the SRM will show up in DARM, SRCL, the ETMs and the ITMs. Also, since the error signal / IFO light for the arm cavity (DARM, CARM -- SOFT and HARD) angular control DOFs have to pass through HSTSs as they come out of the IFO (namely SRM and SR2 -- the same SUS involved in SRCL motion), they're also potentially exposed to this HSTS resonance. We feed arm cavity ASC control signal to all four test masses.

That would also explain why the coil driver monitor signals show up on your list:
 H1:SUS-ITMY_L2_NOISEMON_UR_DQ
 H1:SUS-ITMY_L2_NOISEMON_UL_DQ
 H1:SUS-ITMY_L2_NOISEMON_LR_DQ
 H1:SUS-ITMY_L2_NOISEMON_LL_DQ
 H1:SUS-ITMX_L2_NOISEMON_UR_DQ
 H1:SUS-ITMX_L2_NOISEMON_UL_DQ
 H1:SUS-ITMX_L2_NOISEMON_LR_DQ
 H1:SUS-ITMX_L2_NOISEMON_LL_DQ

The 41 Hz showing up in
 H1:SUS-ETMX_L2_NOISEMON_UR_DQ
 H1:SUS-ETMX_L2_NOISEMON_LL_DQ
(and not in the L3 or L1 stage) is also supported by the ASC control signal theory -- we only feed ASC to the L2 stage, and there is no LSC (i.e. DARM) request to ETMX (which we *would* spread among the three stages L3, L2, and L1). Also note that there's a whole integration issue about how these noise monitor signals are untrustworthy (see Integration Issue #9); the ETMX noise mons have not been cleared as "OK," and in fact have been called out explicitly for their suspicious behavior in LHO aLOG 17890.

I'm not sure where this magnetometer lives:
 H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS
 H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ
but it's clear from the channel names that these are just two different readouts of the same magnetometer.

I'm surprised that other calibrated LSC channels like
H1:CAL-CS_PRCL_DQ
H1:CAL-CS_SRCL_DQ
H1:CAL-CS_MICH_DQ
don't show up on your list. I'm staring at the running ASD of these channels on the wall, and there's a line at 41 [Hz] in both the reference trace and the current live trace (though, because PRCL, SRCL, and MICH all involve light that bounces off HSTSs, I suspect you might find slightly different frequencies in each).

"I see your blind list of channels that couple, and raise you a plausible coupling mechanism that explains them all. How good is your hand?!"
keith.riles@LIGO.ORG - 14:42, Sunday 20 September 2015 (21701)
I neglected to state explicitly that the spectra I posted are taken
from non-overlapped Hann-windowed 30-minute SFTs,
hence with bins 0.5556 mHz wide and BW of about 0.83 mHz.
keith.riles@LIGO.ORG - 19:01, Sunday 20 September 2015 (21711)
Attached are close-in zooms of  the bands around 41 Hz peaks,
from the ER8 50-hour data integration, allowing an estimate of 
their Q's (request from Peter F). 

For the peak at about 40.9365 Hz, one has:
FWHM ~ 0.0057 Hz
-> Q = 40.94/.0057 = 7,200

For the peak at about 41.0127 Hz, one has:
FWHM ~ 0.0035 Hz
-> Q = 41.01/0.0035 = 12,000

Also attached are zooms and close-in zooms for the peak at 40.9365 Hz
from 145 hours of ER7 data when the noise floor and the peak were
both higher. The 41.0127-Hz peak is not visible in this data set integration.

In the ER7 data set, one has for 40.9365 Hz:
FWHM ~  0.0049 Hz
-> Q = 40.94/0.0049 = 8,400

Peter expected Q's as high as 4000-5000 and no lower than 2000 for
a triplet suspension. These numbers are high enough to qualify.
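The Q estimates above are just center frequency over FWHM; a minimal sketch reproducing the quoted numbers:

```python
# Q estimated as f0 / FWHM for each peak quoted above (frequencies in Hz).
peaks = {
    "ER8 40.9365 Hz": (40.9365, 0.0057),
    "ER8 41.0127 Hz": (41.0127, 0.0035),
    "ER7 40.9365 Hz": (40.9365, 0.0049),
}
qs = {name: f0 / fwhm for name, (f0, fwhm) in peaks.items()}
for name, q in qs.items():
    print(f"{name}: Q ~ {q:,.0f}")
```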

Images attached to this comment
keith.riles@LIGO.ORG - 19:35, Sunday 20 September 2015 (21717)
Andy Lundgren pointed out that there is a line at about 28.2 Hz that 
might be close enough to 40.9365/sqrt(2) = 28.95 Hz to qualify as
the bounce-mode counterpart to the suspected roll mode.

So I've checked its Q in the 50-hour ER8 set and the 145-hour ER7 set
and am starting to think Andy's suspicion is correct (see attached spectra).
I get Q's of about 9400 for ER8 and 8600 for ER7, where the line in 
ER7 is much higher than in ER8, mimicking what is seen at 41 Hz.
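The arithmetic behind the sqrt(2) comparison, using the roll frequency quoted above and Andy's 28.2 Hz candidate:

```python
import math

f_roll = 40.9365
f_bounce_nominal = f_roll / math.sqrt(2)  # nominal bounce counterpart, ~28.95 Hz
f_candidate = 28.2                        # line noted by Andy Lundgren

# How far the candidate line sits from the nominal sqrt(2) prediction
gap_pct = 100.0 * (f_candidate - f_bounce_nominal) / f_bounce_nominal
print(f"nominal bounce = {f_bounce_nominal:.2f} Hz; candidate is {gap_pct:+.1f}% away")
```

The candidate is a few percent low of the nominal prediction, comparable to the deviations from exact harmonic ratios already seen above.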
Images attached to this comment
nelson.christensen@LIGO.ORG - 07:38, Monday 21 September 2015 (21727)DetChar, PEM
In an email Gabriele Vajente has stated, "...the noise might be correlated to PRCL." There is a coherence spike between h(t) and H1:LSC-PRCL_OUT_DQ at 40.936 Hz. Here is the coherence for a week in ER8.
Images attached to this comment
norna.robertson@LIGO.ORG - 09:04, Monday 21 September 2015 (21731)DetChar, SUS
Peter F asked if Q of ~ 10,000 for bounce and roll modes was plausible.

Answer is yes. We have evidence that the material loss can be at least a factor of 2 better than 2e-4 -- e.g. see our paper (due to be published soon in Rev. Sci. Instrum.), P1400229, where we got an average loss of 1.1 x 10^-4 for music wire. Q = 1/loss.
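As a one-line sanity check of Q = 1/loss with the quoted music-wire loss:

```python
# Q = 1/phi for structural damping; 1.1e-4 is the average music-wire
# loss quoted from P1400229.
phi = 1.1e-4
q = 1.0 / phi
print(f"Q ~ {q:,.0f}")  # ~9,100, in line with the measured Q's of ~7,000-12,000
```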
stuart.aston@LIGO.ORG - 10:53, Monday 21 September 2015 (21738)
[Stuart A, Jeff K, Norna R]

After having looked through acceptance measurements, taken in-chamber (Phase 3), for all H1 HSTSs, it should be noted that our focus was on the lower frequency modes of the suspensions, so we have little data to refine the estimates of the individual mode frequencies for each suspension.

No vertical (modeV3 at ~27.3201 Hz) or roll (modeR3 at ~40.369 Hz) modes are present in the M1-M1 (top-to-top) stage TFs of the suspensions.

Some hints of modes can be observed in M2-M2 and M3-M3 TFs (see attached below), as follows:

1) M2-M2, all DOFs suffer from poor coherence above 20 Hz. However, there are some high Q features that stand out in the L DOF for SRM, at frequencies of 27.46 Hz and 40.88 Hz. In Pitch, there is a high Q feature at 27.38 Hz for PR2. In Yaw, a feature at 40.81 Hz is just visible for MC1.

2) M3-M3, again all DOFs suffer very poor coherence above 20 Hz. However, a feature can be seen standing above the noise at 26.7 Hz for MC2 in the L DOF. Also, a small peak is present at 40.92 Hz for SR2 in the Yaw DOF.
Non-image files attached to this comment
brett.shapiro@LIGO.ORG - 14:53, Monday 21 September 2015 (21741)
I had a look through the SVN data for the individual OSEMs on M2 of PR2 and PRM at both LHO and LLO because Gabriele suggested the power recycling cavity might be involved.
I also looked at SR2 and SRM on Peter's suggestion.
 
I found all the roll modes and most of the bounce modes for these.
 
SUS           Bounce (Hz)        Roll (Hz)
 
H1 PR2      27.41                    40.93
H1 PRM     27.59                    40.88
H1 SR2      27.51                    40.91
H1 SRM     27.45                    40.87
L1 PR2        ----                       40.88
L1 PRM     27.48                     40.70
L1 SR2      27.52                       ----
L1 SRM     27.51                     40.88
 
I found all these in the M2 to M2 TFs in the …SAGM2/Data directories on the SVN. Screenshots of the DTT sessions are attached. You can see the relevant file names where I found the modes in these screenshots (L1 PRM bounce came from the M2 Pitch to M2 LR transfer function, not shown in the screenshot).
 
The error bar on these frequencies is about +-0.01 Hz, due to the 0.01 Hz resolution of the measurements.
 
For reference, the HSTS matlab model given by the hstsopt_metal.m parameter file in (SusSVN)/sus/trunk/Common/MatlabTools/TripleModel_Production
gives the bounce and roll modes as respectively
 
27.32 Hz and 40.37 Hz 
 
 
Brett
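A quick comparison of the measured values in the table above against the model (the per-suspension frequencies are copied from the table; entries without data are omitted):

```python
# Measured HSTS bounce/roll frequencies (Hz) from the table above,
# vs. the hstsopt_metal.m model values, as fractional deviations.
measured_bounce = [27.41, 27.59, 27.51, 27.45, 27.48, 27.52, 27.51]
measured_roll = [40.93, 40.88, 40.91, 40.87, 40.88, 40.70, 40.88]
model_bounce, model_roll = 27.32, 40.37

mean_b = sum(measured_bounce) / len(measured_bounce)
mean_r = sum(measured_roll) / len(measured_roll)
print(f"bounce: measured {mean_b:.2f} Hz vs model {model_bounce} Hz "
      f"({100 * (mean_b / model_bounce - 1):+.1f}%)")
print(f"roll:   measured {mean_r:.2f} Hz vs model {model_roll} Hz "
      f"({100 * (mean_r / model_roll - 1):+.1f}%)")
```

The measured modes sit roughly 0.5-1% above the model values, well outside the +-0.01 Hz measurement error bars.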
Images attached to this comment
sheila.dwyer@LIGO.ORG - 20:27, Monday 21 September 2015 (21765)

We currently don't have any bandstops for these modes on the triples, except in the top-stage length path to SRM and PRM.  It would not impact our ASC loops to add bandstops to the P+Y inputs on all triples.  We will do this next time we have a chance to put some changes in.  

brett.shapiro@LIGO.ORG - 17:05, Tuesday 22 September 2015 (21808)

Ryan Derosa mentioned that he took some low resolution measurements that include an L1 SR2 roll mode at 41.0 Hz.

I have now looked at the data for all the MCs, to complement the PRs and SRs above in log 21741. Screenshots of the data are attached, a list of the modes found are below.

H1

SUS    bounce (Hz)      roll (Hz)

MC1      27.38                40.81
MC2      27.75                40.91
MC3      27.43?              40.84

L1

SUS    bounce (Hz)      roll (Hz)

MC1      27.55?              40.86
MC2        ---                   40.875
MC3      27.53                40.77

Error bars of +- 0.01 Hz.

I am not sure about the bounce modes for H1 MC3 and L1 MC1, since the peaks are pretty small. I couldn't find any data on L1 MC2 showing a bounce mode.

Images attached to this comment
patrick.meyers@LIGO.ORG - 09:42, Wednesday 23 September 2015 (21841)DetChar

Expanding the channel list to include all channels in the detchar O1 channel list:

https://wiki.ligo.org/DetChar/O1DetCharChannels

I ran a coherence study for a half hour of data towards the end of ER8.

I see particularly high coherence at 40.93 Hz in many channels in LSC, OMC, the ITM suspensions, and also a PR2 suspension channel. Based on these results, Keith's ASDs, and Brett's measurements, this particularly strong and highly coherent line is probably due to PR2.

Full results with coherence matrices and data used to create them (color = coherence, x axis = frequency, y axis = channels) broken down roughly by subsystem can be found here:

https://ldas-jobs.ligo-wa.caltech.edu/~meyers/coherence_matrices/1126258560-1801/bounce_roll4.html

Attached are several individual coherence spectra that lit up the coherence matrices with the frequency of maximum coherence in that range picked out.

-Pat
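For anyone wanting to reproduce this kind of narrowband coherence follow-up offline, here is a minimal scipy sketch on synthetic data (the actual study used the detchar coherence tools on the DQ channels; the sample rate, amplitudes, and witness channel here are illustrative assumptions only):

```python
import numpy as np
from scipy.signal import coherence

# Synthetic stand-ins for h(t) and a witness channel sharing a 40.93 Hz line.
fs = 256.0                       # sample rate, Hz (illustrative)
t = np.arange(0, 300, 1 / fs)    # 5 minutes of data
rng = np.random.default_rng(0)

line = np.sin(2 * np.pi * 40.93 * t)
x = line + 0.5 * rng.standard_normal(t.size)         # "h(t)"-like channel
y = 0.7 * line + 0.5 * rng.standard_normal(t.size)   # witness channel

# Welch-averaged coherence; nperseg sets the frequency resolution (fs/nperseg).
f, cxy = coherence(x, y, fs=fs, nperseg=4096)
f_peak = f[np.argmax(cxy)]
print(f"peak coherence {cxy.max():.2f} at {f_peak:.2f} Hz")
```

A shared narrow line like this shows up as a coherence value near 1 in a single frequency bin, which is the signature picked out of the coherence matrices above.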

Images attached to this comment