H1 ISC
stefan.ballmer@LIGO.ORG - posted 22:43, Sunday 13 September 2015 (21482)
OMC PSL-POWER_SCALE_OFFSETPREF and ERR_GAIN now hardcoded
Evan, Jeff, Stefan

Attached is a plot of H1:OMC-READOUT_PREF_OFFSET, H1:OMC-READOUT_ERR_GAIN and H1:OMC-READOUT_X0_OFFSET over the last 10 days.
Notice the significant lock-to-lock fluctuations - these values were recalculated every time the interferometer locked.

To avoid having these fake 'optical gain' changes we decided to hard-code
 H1:OMC-READOUT_PREF_OFFSET = 1.3689
 H1:OMC-READOUT_ERR_GAIN    = -8.7614e-7
which are their values during the calibration DARM UGF measurement on 2015/08/30 12:08:43 UTC (GPS 1124971740).
The values are now set in DC_READOUT in the ISC_LOCK guardian. The actual values are in the lscparams file: lscparams.omc_readout_pref and lscparams.omc_readout_err_gain.
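
For concreteness, a minimal Python sketch (illustrative only, not the actual ISC_LOCK / lscparams code) of how the hard-coded values might be written in the DC_READOUT state:

# lscparams-style constants (values from the 2015/08/30 reference measurement)
omc_readout_pref = 1.3689          # written to H1:OMC-READOUT_PREF_OFFSET
omc_readout_err_gain = -8.7614e-7  # written to H1:OMC-READOUT_ERR_GAIN

def set_omc_readout_constants(ezca):
    """Write the fixed OMC readout parameters so they are no longer
    recalculated on every lock acquisition ('ezca' is the channel-access
    object Guardian provides to its states)."""
    ezca['OMC-READOUT_PREF_OFFSET'] = omc_readout_pref
    ezca['OMC-READOUT_ERR_GAIN'] = omc_readout_err_gain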


Just before NOMINAL_LOW_NOISE we added a new state with code to average the IMC input power for 5 sec and then set
 H1:PSL-POWER_SCALE_OFFSET to the measured power, and 
 H1:OMC-READOUT_X0_OFFSET such that the DCPD values are servoed to lscparams.omc_dcpd_sum_target which currently is 20mA.
(This had been done before, but just based on one ezca read of the IMC input power.)
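
Schematically, the new state does something like the following (a sketch only, not the actual guardian code; the IMC power readback channel name here is assumed):

import time

def average_ezca_reads(ezca, channel, duration=5.0, dt=0.1):
    """Average repeated ezca reads of 'channel' over 'duration' seconds,
    rather than relying on a single read."""
    samples = []
    t_stop = time.time() + duration
    while time.time() < t_stop:
        samples.append(ezca[channel])
        time.sleep(dt)
    return sum(samples) / len(samples)

def set_power_scale(ezca):
    imc_power = average_ezca_reads(ezca, 'IMC-PWR_IN_OUTPUT')  # assumed readback name
    ezca['PSL-POWER_SCALE_OFFSET'] = imc_power
    # A similar loop then steps H1:OMC-READOUT_X0_OFFSET until the DCPD sum
    # reaches lscparams.omc_dcpd_sum_target (currently 20 mA).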

To avoid being affected by the DC coupled ISS 2nd loop engagement, we moved the ENGAGE_ISS_2nd_LOOP slightly earlier, between COIL_DRIVER and LOWNOISE_ESD_ETMY.
Images attached to this report
H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 21:42, Sunday 13 September 2015 - last comment - 06:38, Monday 14 September 2015(21484)
CAL-INJ_ODC bitmask misbehavior for ER8 hardware injections
Cregg Yancey ran the hardware injection cross-checking code he has been developing, and noticed a curious discrepancy in the periods marked by the burst injection ODC bits at H1 versus L1 for the hardware injection at GPS 1125390500.  I investigated more closely, reading ODC bitmask time series from the recorded raw frame file along with the hardware injection excitation record, H1:CAL-INJ_HARDWARE_OUT_DQ.  I focused in on the bits which indicate burst hardware injections: bit 25 of H1:ODC-MASTER_CHANNEL_OUT_DQ (sampled at 16384 Hz) and bit 11 of H1:CAL-INJ_ODC_CHANNEL_OUT (sampled at 256 Hz).  What I found is rather startling: these ODC bits "flicker" between 0 and 1 around the time of the injection in a way that they definitely should not.  Each bit is supposed to equal 1 when there is no injection (meaning two or more consecutive samples with excitation amplitude less than 10^-200, according to LLO alog 18913) and 0 when there IS an injection.  The bit flickering pattern shows no obvious connection with the excitation time series.

The first three attached plots show what I found for the injection around GPS 1125390500, at three different time scales.  This is the "too-loud burst" injection which has been studied (see https://wiki.ligo.org/LSC/JRPComm/G181472) and seems to have been anomalous due to saturating the actuation chain.  However, the ODC bit setting should only be based on H1:CAL-INJ_HARDWARE_OUT_DQ, unrelated to saturation occurring anywhere later in the chain, so I believe this is a different problem.  Also, another injection at GPS 1125400500 seemed fine as an injected burst (it was a 69 Hz sine-gaussian) but still has the crazy ODC bit behavior, as shown in the last three plots.  There are even alternating "stripes" which are very odd.

Although I'm only showing plots here for H1, the problem occurs in a similar (though not identical) way in L1 data.  Therefore it is NOT restricted to one site.

Also, this bit problem did NOT occur for the ER7 injections; we checked the bitmask transitions and they all made sense.  It seems to have been introduced since then.  Perhaps the model code update described at LLO alog 18913 is not working as expected?
Images attached to this report
Comments related to this report
peter.shawhan@LIGO.ORG - 06:38, Monday 14 September 2015 (21492)INJ
Isn't it nice when "sleeping on it" makes something more clear?  I woke up with a better understanding of what's going on -- I'm 90% sure this is the right story:

1. I was wrong when I wrote that the burst ODC bit is set based on CAL-INJ_HARDWARE (plotted as CAL-INJ_HARDWARE_OUT_DQ as I read it from the raw frame file).  Really it is set based on CAL-INJ_TRANSIENT (see, for instance, the CAL-INJ ODC documentation).  The CAL-INJ_HARDWARE channel is the sum of CAL-INJ_TRANSIENT and CAL-INJ_CW, so in the plots I made, much of the "fuzz" in the plotted trace is from the simulated pulsars in the CAL-INJ_CW channel; some of those are at high frequency, so the fuzz can have a fairly large amplitude.

2. There IS a bug in the way the ODC bit is set, but it is a deterministic, silly bug instead of something more devious.  When the code was updated to check for nonzero values using a threshold of 1e-200 instead of requiring them to be exactly zero to machine precision (https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18913), I think they neglected to take the absolute value before comparing it to 1e-200.  (The expressions written in the CAL-INJ ODC document are consistent with that if taken at face value -- they do not have absolute-value lines.)  That MOSTLY explains the stripes seen in the 5th and 6th plots attached to the main alog entry here.  That injected signal was a 69-Hz sine-gaussian, and now I see that the period of the stripes is just about 1/(69 Hz).
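
To illustrate (the real check lives in the front-end model code, not in Python), here is a toy version of the suspected comparison with and without the absolute value:

import numpy as np

THRESH = 1e-200

def no_injection_buggy(last_two):
    # suspected current logic: negative excitation also satisfies "x < 1e-200"
    return bool(np.all(np.asarray(last_two) < THRESH))

def no_injection_fixed(last_two):
    # intended logic: two consecutive samples with |excitation| below threshold
    return bool(np.all(np.abs(last_two) < THRESH))

t = np.arange(0, 0.05, 1.0 / 16384)
exc = 1e-20 * np.sin(2 * np.pi * 69 * t)   # toy 69 Hz excitation
buggy = [no_injection_buggy(exc[i-1:i+1]) for i in range(1, len(exc))]
fixed = [no_injection_fixed(exc[i-1:i+1]) for i in range(1, len(exc))]
# 'buggy' flags "no injection" during every negative half-cycle (stripes at
# the 1/(69 Hz) period); 'fixed' stays False for the whole excitation.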

3. The remaining question is why those stripes aren't solid blocks, i.e. why the bit value isn't just a square wave with a frequency of 69 Hz during the sine-gaussian.  I think the explanation for this is that the very small high-frequency component in the sine wave (or in the numerical discreteness noise in the way it was calculated or written out) was getting boosted by the inverse actuation filter SO MUCH that it is comparable in amplitude to the main sine-gaussian signal, so it can take the time series negative even when the SG waveform has swung positive. 

So, I think the course of action is to confirm and fix the bug in the ODC code (missing absolute-values, probably for all comparisons with 10^-200).  In addition, the sky-high gain for high-frequency content in the injected signal is, I think, a problem, because there is always going to be SOME high-frequency content from finite machine precision even if the intended waveform is all at low frequency.  As the ER8 inverse actuation filters are developed and refined, I think it would be an excellent idea to roll off the gain above ~1 or 2 kHz to avoid problems.
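
For example, one simple way to enforce such a rolloff (illustrative only, and applied to the injection waveform itself rather than to the inverse actuation filter) is a zero-phase low-pass before the waveform is handed to the injection system:

import numpy as np
from scipy import signal

fs = 16384.0   # sample rate of the injection waveform

def rolloff_waveform(h, f_corner=2000.0, order=4):
    """Zero-phase Butterworth low-pass to suppress residual high-frequency
    content (e.g. from finite machine precision) above ~2 kHz."""
    b, a = signal.butter(order, f_corner / (fs / 2.0))
    return signal.filtfilt(b, a, h)

# toy 69 Hz sine-gaussian, 1 s long, centered at 0.5 s
t = np.arange(0, 1.0, 1.0 / fs)
sg = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2)) * np.sin(2 * np.pi * 69 * t)
sg_rolled = rolloff_waveform(sg)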
H1 General
corey.gray@LIGO.ORG - posted 20:53, Sunday 13 September 2015 (21483)
Mid Shift Summary

H1 had a lockloss after a 2hr lock.  

During acquisition we noticed a rough alignment, and so locked PRMI to check it.  When bringing SRM back, it would saturate and its watchdog would trip.  SRM appears to be in an odd state now (see Evan's alog).  We toggled power on the SRM driver box in the CER (not sure if this did anything though).  Evan & Stefan then took control and carefully checked alignment & ASC.

Oh, and another issue is we've had to "FIND_IR" by hand for the last few lock attempts.

This is where we are now.  Looks like L1 is up to Observation Mode.

At 2:29 UTC there was a GRB.  We were out of lock.  I talked to Joe at LLO and they were locked.  They did receive the alarm.  They were in the middle of PEM injections, however.

Environmentally:  the winds have picked up to around 20mph.  Most other seismic channels look fairly quiet.

H1 CDS
evan.hall@LIGO.ORG - posted 20:08, Sunday 13 September 2015 (21481)
SRM M1 T3, LF, RT, and SD OSEMs are noisy

From the attached spectra (taken with the suspension in safe mode), it looks like the same signature as the PRM/PR3 problem.

We thought this might be the cause of today's repeated SRM trips.  But Andy Lundgren has already looked through all the OSEMs, and it seems even three weeks ago these OSEMs (among many others) had some kind of problem.  It is difficult to tell whether it's exactly the same problem, since the channels are only recorded at 256 Hz.

These four OSEMs all share the same driver and satellite box. We power cycled the driver, but the noise remains.

Non-image files attached to this report
H1 CAL (CAL)
sudarshan.karki@LIGO.ORG - posted 18:35, Sunday 13 September 2015 (21479)
Updates on Time Varying Calibration Parameters

Summary:

By introducing a fudge factor of 3.6 degrees in phase to one of the values from the model (EP1), we now get the time varying calibration parameters close to their nominal values. This basically sets the values of these parameters to their nominal value at the time of calibration.

Details:

Last week, I reported that some of our time-varying parameters (described in DCC T1500377) were off from their nominal values; in particular, kappa_tst was off by a factor of 2 and had the wrong sign as well (alog 21326).

Darkhan found this discrepancy was due to the phase of the actuation coefficient of x_tst (ESD calibration line) being off by 137 degrees, which was reported in alog 21391. He also corrected for this phase by introducing a fudge factor in H1:CAL-CS_TDEP_REF_INVA_CLGRATIO_TST_(REAL|IMAG), referred to as EP1.  This variable is basically the value of the ESD actuation at time t = t0 (time of calibration). After correcting for this phase, we got our kappa_tst very close to 1, but the real part of kappa_pu was still off by about 7% and the imaginary part of kappa_tst was about 0.05 (nominal value 0).

On further investigation today, I found that if we change the phase of EP1 by about 3.6 degrees (EP1 = oldEP1*exp(-1i*pi/180*3.6)), the real parts of kappa_pu, kappa_tst and kappa_A are close to 1 and their imaginary parts close to zero, which is what we expect. This shows that there is a discrepancy between the values produced by the model and the values obtained from this calculation. We still need to understand why this is different.
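
For reference, the same correction in Python form (equivalent to the MATLAB expression above):

import numpy as np

def apply_phase_fudge(ep1_old, fudge_deg=3.6):
    """EP1 = oldEP1 * exp(-1i*pi/180*fudge_deg)."""
    return ep1_old * np.exp(-1j * np.deg2rad(fudge_deg))

# a unit-magnitude EP1 picks up -3.6 degrees of phase:
print(np.angle(apply_phase_fudge(1.0 + 0.0j), deg=True))   # -3.6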

Plots:

The attached plots show the different parameters after applying the correction. The green stretch in each plot is 5 hours of data right after the measurement (DARM sweep and Pcal sweep) used towards the calibration was taken. The values at this point should be nominal for all the parameters. Plot 1 is the real part of kappa_tst, kappa_pu and kappa_A. All these values are close to 1 at the time of calibration. Similarly, in plot 2 the imaginary parts of kappa_tst, kappa_pu and kappa_A are only a few percent off from their nominal values of 0. Plot 3 shows the change in optical gain (kappa_C) and the cavity pole. Kappa_C is close to 1 at the time of calibration and varies from one lock stretch to the other. The cavity pole is close (maybe) to its reported value of 331 Hz (alog 21221) at the time of calibration and has evolved over time.

The script used for this calculation is committed to the svn:

/ligo/svncommon/CalSVN/aligocalibration/trunk/Projects/PhotonCalibrator/drafts_tests/ER8_Data_Analysis/Scripts/plotCalparameterFromSLMData.m

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 17:05, Sunday 13 September 2015 - last comment - 17:41, Sunday 13 September 2015(21475)
Transition To EVE Shift Update

TITLE:  9/13 EVE Shift:  23:00-7:00UTC (16:00-0:00PDT), all times posted in UTC

STATE OF H1:   Continues to be down since earthquakes from just after midnight.  

OUTGOING OPERATOR:  Jim

SUPPORT:  Arrived to a fairly full Control Room on a Sunday (Jenne, Sheila, Evan, Stefan, Jeff K, Greg, Sudarshan).

QUICK SUMMARY:   Ops Overview & O1 CDS Overview look nominal.  (SDF has lots of RED, but since they've been getting past DRMI a few times, assume we are OK according to SDF [had thought of going back to safe.snaps, but held off on that]).  Will continue troubleshooting.

UPDATE:  H1 BACK!  Jim was still here on-site, so he gets a cookie for his work (along with Jenne [who was here all DAY shift], Sheila, et al.).

Sounds like they restored IMs to pre-Earthquake/Cheryl alignment values & then did a careful Initial Alignment with Sheila watching them as they went.

Right now we are on our way to recovery:  Addressing SR3 cage servo, SDF diffs, etc.  Currently hovering at about 70Mpc for the last 45mins.  We are listed as "NOT OK" due to current SDF Diffs.

Comments related to this report
corey.gray@LIGO.ORG - 17:41, Sunday 13 September 2015 (21478)

Double Coincidence Status:  Just chatted with Joe H at LLO.  L1 is up, but they are OUT of OBSERVATION Mode for Anamaria/Robert PEM (magnetic/acoustic) injections.  Checked for H1 requests here and Evan may take an hour for a measurement.  

We will coordinate with each other as to when to get back to OBSERVATION Mode.

H1 General
jim.warner@LIGO.ORG - posted 16:18, Sunday 13 September 2015 - last comment - 00:31, Monday 14 September 2015(21473)
End of shift summary
Comments related to this report
sheila.dwyer@LIGO.ORG - 00:31, Monday 14 September 2015 (21488)

While looking into some of the problems Jim was having today, I found an error in the ISC_DRMI guardian.  The clearing of history was reworked a few weeks ago to use fast ezcas, but the order of operations became incorrect (gains were zeroed, histories were reset, then inputs were turned off; this means that the integrators in the top stage were cleared and then reacquired some history before the input was turned off).

Now this is fixed, so inputs are turned off, gains are zeroed, then history is cleared. 

This was causing DRMI to sometimes be misaligned if PRMI was locked first, the lock was dropped, and we attempted to relock DRMI. 
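
Schematically, the corrected ordering looks like this (illustrative filter-module name; not the actual ISC_DRMI code):

def clear_loop(ezca, sfm='LSC-MICH'):
    """Clear a locking loop without letting its integrators reacquire history:
    input off first, then gain to zero, then history reset."""
    ezca.switch(sfm, 'INPUT', 'OFF')   # 1. turn the input off
    ezca[sfm + '_GAIN'] = 0            # 2. zero the gain
    ezca[sfm + '_RSET'] = 2            # 3. clear the filter history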

H1 General
jim.warner@LIGO.ORG - posted 14:15, Sunday 13 September 2015 (21471)
Mid Shift Summary

It's been a rough shift so far. Cheryl dealt with earthquakes tripping everything; HAM5 is still annoying to reset. Jenne is here and has struggled with me through initial alignment, and now DRMI has been difficult. For some reason SRM just tripped and took the HAM5 ISI down with it. HAM5 also shows setpoint differences, insisting the masterswitch should be off while the ISI is isolated. This will probably prevent us from going to Observe. Progress is slow, and now the wind is picking up...

H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 09:11, Sunday 13 September 2015 (21469)
Status of reconfiguring/debugging CW injections
I hadn't realized that Dave and Jim B were already on the case for getting the CW injections to work again.  Dave emailed today: "Jim and myself have been installing Keith’s LLO modifications to get monit to autostart the psinject process. We are not quite there for the autostart, but we were able to run psinject manually as user hinj. I’ll take a look at why your manual start yesterday did not work.  Once we have psinject running under monit, we’ll do the same for tinj."

I added some debugging lines to the x_exec_psinject_from_script script to hopefully help track this down.  It's clear that there is a difference in environment variables when trying to start psinject from the start_psinject script versus just executing psinject in an interactive session.  You can compare, for instance, the file Details/log/x_exec_psinject_from_script.log against Details/log/interactive.log, and there are substantial differences, such as:

[hinj@h1hwinj1 Details]$ diff log/interactive.log log/x_exec_psinject_from_script.log
...
7c9,10
< APPSROOT=/ligo/apps/sl6
---
> APPSROOT=/ligo/apps/linux-x86_64
> ASC=H1:ASC
11a15
> CDSDIR=/ligo/cds/lho/h1
...
27c33,34
< EPICS_BASE=/ligo/apps/sl6/epics-3.14.12.2_long/base
---
> EPICS_BASE=/ligo/apps/linux-x86_64/epics/base
> EPICSBIN=/ligo/apps/linux-x86_64/epics/base/bin/linux-x86_64:/ligo/apps/linux-x86_64/epics/extensions/bin/linux-x86_64
...

Evidently, when executed from within the script, the environment is simply not set up properly to allow psinject to run -- e.g., libCore.so is not being found.  I'll let Dave and Jim take it from here.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:47, Sunday 13 September 2015 (21468)
CDS model and DAQ restart report, Saturday 12th September 2015

ER8 Day 27. No restarts reported.

H1 General (GRD, SEI, SUS)
cheryl.vorvick@LIGO.ORG - posted 08:26, Sunday 13 September 2015 (21466)
OPS OWL Summary:

IFO has not returned to locking after the big earthquakes 7 hours ago.

 

I've called Jenne Driggers and we did as much as was possible over the phone.  She'll be in this morning to help and then do measurements.

 

I've been working to relock the IFO since 4 earthquakes in Mexico.

- restored HEPI and ISI; HAM5 and ETMX are not yet back to their full nominal state.

- green is locking in the arms, red is not

- I aligned IM1, IM2, IM3, and IM4 to eliminate their alignment as an issue for locking red, and those diffs will probably show up and can be accepted

- the AS_AIR camera was fixed when Jim fixed HAM5 ISI

- ETMX HEPI and ISI Guardian was unhappy, but the isolation was OK, so this was not an issue for locking red

- with AS_AIR back on the camera, the alignment of the red for x arm looks not too bad, and is popping to 0.6+

 

Jim and Jenne are here for day shift

H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 07:54, Sunday 13 September 2015 - last comment - 17:15, Sunday 13 September 2015(21465)
Burst hardware injection didn't work
Since I set up a schedule of burst hardware injections last night, three of the scheduled injection times have passed.

The first one, at GPS 1126160499 (injection start at 1126160499, actual burst signal centered at 1126160500), should have been executed at H1 (not at L1 since it was down at the time), but didn't work.  The tinjburst.log file says: "1126160499.000000 hwinj_1126160499_2_ 1.00e+00 Compromised".  (It says that five times, because tinj retries a few times if it initially fails.)  The more verbose tinj.log says:
Injection imminent in ~297s...
  1126160499 2 1 hwinj_1126160499_2_
GRB alert at 1126151534: injection allowed.
Detector locked.
Intent bit is on.
calling awgstream...
  awgstream H1:CAL-INJ_TRANSIENT_EXC 16384.000000 ../config/Burst/Waveform/hwinj_1126160499_2_H1.txt 1.000000 1126160499.000000
awgstream failure: injection compromised!

That's strange... Maybe it has to do with environment variables.  I seem to be able to run awgstream fine from the command line in an interactive session, with this zero-amplitude test while H1 was down:
[hinj@h1hwinj1 tinj]$ awgstream H1:CAL-INJ_TRANSIENT_EXC 16384.000000 ../config/Burst/Waveform/hwinj_1126160499_2_H1.txt 0.0 1126187950.000000
Channel = H1:CAL-INJ_TRANSIENT_EXC
File    = ../config/Burst/Waveform/hwinj_1126160499_2_H1.txt
Scale   =          0.000000
Start   = 1126187950.000000
The fact that a zero-amplitude injection succeeded rules out a problem with awg.
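
For debugging, a wrapper along these lines (illustrative only; tinj itself is not Python) could record the exit status and the environment seen by each awgstream call, so the failing scripted invocation can be compared against the working interactive one:

import os
import subprocess

def run_awgstream(channel, rate, waveform_file, scale, start_gps, env_log):
    """Dump the current environment to 'env_log', then call awgstream and
    return its exit status (nonzero would be reported as 'compromised')."""
    with open(env_log, 'w') as f:
        for key in sorted(os.environ):
            f.write('%s=%s\n' % (key, os.environ[key]))
    cmd = ['awgstream', channel, '%f' % rate, waveform_file,
           '%f' % scale, '%f' % start_gps]
    return subprocess.call(cmd)

# e.g. run_awgstream('H1:CAL-INJ_TRANSIENT_EXC', 16384,
#                    '../config/Burst/Waveform/hwinj_1126160499_2_H1.txt',
#                    0.0, 1126187950, 'awgstream_env.log')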

I tried to do another test by adding a zero-amplitude injection to the schedule at GPS 1126191200, but since the detector was not locked, tinj did not even call awgstream, so I didn't learn anything from that.

The second and third scheduled injections, at GPS 1126180499 (injection start at 1126180499, actual burst signal centered at 1126180500) and 1126190499, did not occur because the detector was not in observing mode at either time; that's the correct behavior. 
Comments related to this report
eric.thrane@LIGO.ORG - 17:15, Sunday 13 September 2015 (21477)INJ
I repeated the injection in question (with zero amplitude and an updated injection) using tinj, run from the matlab command line. The injection seemed to proceed normally, with awgstream exiting with status=0. I tried recompiling/restarting tinj to see if that makes a difference. I looked at the injection file to make sure it was not corrupted, but it appears normal: every entry is a number.
H1 General (GRD, SEI, SUS)
cheryl.vorvick@LIGO.ORG - posted 06:42, Sunday 13 September 2015 (21464)
6 hours since the earthquake - update

State of the IFO:
- recovery from the earthquakes has gone ok
- nothing broken, but SEI at EX is not in the correct state, though it's good enough to lock, so I will work with Jim to understand the issue and fix it when he comes in at 15:00UTC

Timeline since the earthquake:

9:32UTC - I start relocking and IMC comes back good
10:30UTC - no AS_AIR image on the camera concerns me, but I move on to locking ALS and both arms are good
11:37UTC - I return to ETMX SEI, I wasn't able to get it into the correct state after the earthquake, and now it's a problem, so I get it to Robust_Damped and leave it there.
11:53UTC - ALS is done, and I start on locking red in the X arm - no signal on the qpd at the end station, but I can see it spike when the arm power does, so I know there's light.
I tried to fix the AS_AIR camera; nothing worked.  It's a red herring - the loose optic in HAM2 must have shifted again, or something else that's not an in-vacuum optic.

Current: 13:34UTC - writing an update and then returning to alignment

H1 INJ (DetChar, INJ)
andrew.lundgren@LIGO.ORG - posted 02:36, Sunday 13 September 2015 - last comment - 08:43, Sunday 13 September 2015(21463)
Is it possible to do a BNS injection without overflowing the ETMY ESD DAC?
We've seen that CBC hardware injections in ER8 have caused ETMY saturations - see this alog for the injection, and my comment about the overflows. These are often called 'ETMY saturations' but what is happening is that the DAC is being driven beyond its maximum range of 2^17 counts. This is occurring much more often now that the driver has much more analog lowpassing.

Peter Fritschel wrote this DCC document tabulating the available actuation amplitude before overflowing the DAC, which allows us to check whether a hardware injection of a binary neutron star coalescence would overflow the DAC.

The summary is that a BNS hardware injection at a typical SNR would saturate the digital actuation at H1 as soon as it gets into the 500 to 800 Hz range. And a BNS typically goes up to 1.5 kHz. This has nothing to do with the inverse actuation or notches, it's just the limits of how much the digital system can push on the mirrors using the ESD, which has a lot of low-passing to reduce noise. At these frequencies, the DARM loop has almost no feedback so I'm assuming that the injection just moves the mirrors freely.

I made a simple fit of Peter's tables, and plotted it along with the amplitude of a BNS signal as a function of frequency. My Python code is attached below and should be well commented. There's no interface, but you can edit the distance or masses in the file. The output plot is also attached.

The waveform is from Duncan Brown's thesis, because that's where it's given most clearly in the time domain with the amplitude fully specified. I chose a system with masses 1.4,1.4 Msolar and a distance 120 Mpc and optimal orientation. The waveform, over a short period of time, looks like

h(t) = A f^(2/3) cos (2 pi f t)

where f is the gravitational wave frequency. Since this is strain, I plot A f^(2/3) times the arm length (4000 meters).
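
A rough Python sketch in the spirit of the attached script (the attached code is not reproduced here; the amplitude below is the standard restricted post-Newtonian expression for an optimally oriented inspiral, and no fit to Peter's DAC-limit tables is included):

import numpy as np

MSUN_S = 4.925e-6   # G*Msun/c^3 in seconds
MPC_S = 1.029e14    # one Mpc in light-seconds
L_ARM = 4000.0      # arm length in meters

def bns_strain_amplitude(f_hz, m1=1.4, m2=1.4, dist_mpc=120.0):
    """A*f^(2/3): leading-order strain amplitude of h(t) = A f^(2/3) cos(2 pi f t)."""
    mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * MSUN_S   # chirp mass [s]
    dist = dist_mpc * MPC_S                                  # distance [s]
    return 4.0 / dist * mchirp ** (5.0 / 3.0) * (np.pi * f_hz) ** (2.0 / 3.0)

f = np.logspace(np.log10(100.0), np.log10(1500.0), 200)
mirror_motion_m = bns_strain_amplitude(f) * L_ARM   # strain times 4000 m
# compare mirror_motion_m against a fit of the available-actuation tables to
# find the frequency at which the DAC would overflow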

I tried to avoid the limit by changing the distance to 180 Mpc, and multiplying Peter's limit by 3.3. He assumed we've got 30,000 counts to work with, but we can manage 100,000 counts if we're lucky. It's still not enough and hits the DAC limit at 800 Hz. 

As soon as the DAC overflows (hits 2^17 counts), there's a huge glitch produced so the injection becomes pretty much useless. The likely path forward, in the short term, is to push the masses up into the NSBH range, like to make one object 10 solar masses. We also might need to roll off the high frequencies artificially.
Images attached to this report
Non-image files attached to this report
Comments related to this report
matthew.evans@LIGO.ORG - 08:43, Sunday 13 September 2015 (21467)

As a side note, I think the situation must get worse if we include merger and ringdown, as these go to even higher frequency (the ISCO is really an artificial cut-off, and BNS merger will have signal up to ~3kHz).  Do the current CBC injections stop at f_ISCO?  If so, why not set the cut-off frequency to avoid saturation instead?

H1 CAL
madeline.wade@LIGO.ORG - posted 18:51, Saturday 12 September 2015 - last comment - 22:01, Sunday 13 September 2015(21445)
Time-dependent correction factors for calibration as computed by GDS calibration pipeline
Attached are plots of the time-dependent calibration correction factors as computed by the GDS calibration pipeline for GPS times 1126003432-1126011584 (Fri Sep 11 03:43:35 PDT 2015 - Fri Sep 11 05:59:27 PDT 2015).  The derivation of these factors and their meaning is described in DCC-T1500377.  

The channel names correspond to the factors as such:

GDS-CALIB_F_CC: frequency of coupled cavity pole
  - average value ~ 350 Hz
GDS-CALIB_KAPPA_C: time-dependent scale factor for the sensing 
  - average value ~ 1.07
  - expected average value = 1
GDS-CALIB_KAPPA_A_IMAGINARY: imaginary part of time-dependent scale factor for the total actuation
  - average value ~ -0.01
  - expected average value = 0
GDS-CALIB_KAPPA_A_REAL: real part of time-dependent scale factor for the total actuation
  - average value ~ 1.01
  - expected average value = 1
GDS-CALIB_KAPPA_PU_IMAGINARY: imaginary part of time-dependent scale factor for the PUM/UIM stages of actuation
  - average value ~ 0.03
  - expected average value = 0
GDS-CALIB_KAPPA_PU_REAL: real part of time-dependent scale factor for the PUM/UIM stages of actuation
  - average value ~ 1.07
  - expected average value = 1
GDS-CALIB_KAPPA_TST_IMAGINARY: imaginary part of time-dependent scale factor for the TST stage of actuation
  - average value ~ 0.05
  - expected average value = 0
GDS-CALIB_KAPPA_TST_REAL: real part of time-dependent scale factor for the TST stage of actuation
  - average value ~ 1.0
  - expected average value = 1.0

These values indicate some (O(5%)) differences in gain between the current calibration models and the IFO.  We expect this level of difference for kappa_tst and kappa_c.  However, we anticipated kappa_pu to be closer to the expected average values.  This discrepancy is still being investigated.
Images attached to this report
Comments related to this report
madeline.wade@LIGO.ORG - 22:01, Sunday 13 September 2015 (21485)

I recomputed the factors for this time period using the "fudge factor" described in alog 21479.  The new computed factors match with those for the later times in Sudarshan's alog (#21479).  See attached plots.  (Note: The units on the f_cc plot are Hz on the y-axis. Sorry for the confusion in the labeling.)

Images attached to this comment
H1 AOS (DetChar, PEM, SEI)
joshua.smith@LIGO.ORG - posted 14:57, Saturday 12 September 2015 - last comment - 14:02, Wednesday 30 September 2015(21436)
The 50Hz glitches in DARM: EX mains glitches coupling into EX seismic?

Josh, Andy, David, Jess

Conclusion: We suspect that the 50Hz glitches in DARM are caused by EX Mains glitches that happen 400+/-70ms earlier (reliably matched) and may(?) couple through EX Seismic. 

Throughout September there has been a persistent line of glitches at 50Hz in DARM with a pretty high rate. See the attached glitch-gram from today's summary pages. Based on the one hour plots on the summary pages, the rate of these 50Hz glitches in DARM is about once per minute. 

We found (see e.g. the first round here) that these DARM glitches are correlated in time with glitches in EX channels, particularly PEM EX SEIS/ACC*, ISI ETMX ST1 BLND*, and ASC-X_TR*. 

It looks like the cause of this may be a big glitch in EX Mains. We see that the EX and DARM channels are all glitching with a 400+/-70ms delay (averaged over five examples followed up by hand) after EX mainsmon. Attached are two examples where we lined up the glitch in EX mainsmon and showed that the glitches in EX seismic and DARM are 400ms later. 
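
The by-hand alignment can also be cross-checked numerically; a rough sketch (not the procedure we actually used) of estimating the lag by cross-correlating two equally sampled time series:

import numpy as np

def estimate_lag(ref, sig, fs):
    """Lag (s) of 'sig' relative to 'ref' that maximizes their cross-correlation;
    positive means 'sig' glitches after 'ref' (e.g. DARM after EX mainsmon)."""
    ref = np.asarray(ref, float) - np.mean(ref)
    sig = np.asarray(sig, float) - np.mean(sig)
    xcorr = np.correlate(sig, ref, mode='full')
    lags = np.arange(-len(ref) + 1, len(sig)) / float(fs)
    return lags[np.argmax(xcorr)]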

Notes:

We would be happy to follow up further leads, but wanted to report what we found so far. 

PS. Should you wish to repeat this yourself, attached is a list of these glitches in DARM today (with nice GPS times) and a screenshot of the ldvw settings I used. 

Images attached to this report
Non-image files attached to this report
Comments related to this report
david.shoemaker@LIGO.ORG - 14:55, Saturday 12 September 2015 (21438)
Note that the signal in the seismometer is a bit later than the mainsmon glitch, and that the seismometer glitch is narrowband around 50 Hz in contrast to the mainsmon. A guess is that there is a mechanical coupling -- maybe a motor running slow due to it hitting something, or a transformer core pulsing -- which induces the 50 Hz characteristic. Seems like it might be easy to hear. The delay suggests that it could be something a bit away -- maybe a chiller?
john.worden@LIGO.ORG - 20:17, Saturday 12 September 2015 (21452)

The chiller compressor is probably on the order of 100 hp (>100 amps at 480v) and is cycling with a period of ~1hour during most of the day. The period depends on the heat load into the building. During the hot time of day one chiller compressor will continue to run while a second compressor will cycle as needed. In the attached plot the peaks represent the start of the chiller compressor. The valleys represent the shutdown. Times are PDT. 

Images attached to this comment
joshua.smith@LIGO.ORG - 14:14, Sunday 13 September 2015 (21470)DetChar, PEM, SEI

After emails with Robert S and John W we did an extensive search in the "HVE-EX:*" and "H0:FMC-EX*" channels for anything that is switching on or off at the same time as the glitches reported above. John Worden suggested that the instrument air compressor (a ~5hp motor at EX) would switch on and off with about the right period. I attach a plot that shows that the glitches discussed above for EX seismic (left) and DARM (right) are correlated with the compressor turning ON at the bottom of the triangle waveforms. 

Images attached to this comment
H1 CAL (CAL)
craig.cahillane@LIGO.ORG - posted 23:53, Friday 11 September 2015 - last comment - 23:06, Sunday 13 September 2015(21421)
Two Signal vs Response Function Strain Uncertainty
C. Cahillane

I have tracked down the differences between my Two Signal ( h = 1/C * DARM_ERR + A * DARM_CTRL ) and Response function ( h = (1 + G)/C * DARM_ERR ) calibration uncertainties.
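
As a reminder of why the two should agree: with DARM_CTRL = D * DARM_ERR and G = C*A*D (any signs folded into the transfer functions), the two expressions are algebraically identical, as this one-frequency numerical sketch with arbitrary values shows:

# arbitrary single-frequency transfer-function values, for illustration only
C = 3.0e6 + 1.0e5j      # sensing
A = 2.0e-13 - 5.0e-14j  # actuation
D = 4.0e5 + 0.0j        # digital DARM filter
G = C * A * D           # open-loop gain

d_err = 1.0 + 0.3j      # arbitrary DARM_ERR value at this frequency
d_ctrl = D * d_err      # DARM_CTRL derived through the digital filter

h_two_signal = d_err / C + A * d_ctrl
h_response = (1.0 + G) / C * d_err
print(abs(h_two_signal - h_response))   # ~0, up to floating-point error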

The trick lay in my weighting factors being slightly different with and without data included.  The Two Signal calibration method requires DARM_ERR and DARM_CTRL from the beginning, whereas the Response function does not.  This means that you must divide out DARM_ERR from the Two Signal method, or multiply it into your Response Function to get comparable results.  I did it both ways, and got identical plots (Shown in Plot 1). 
I found a bug in my Response function method since I was not including the hard-coded minus sign on the x_pcal line I use to calculate kappa_tst and kappa_pu.

Plot 1 and 2 are the same plot, but Plot 2 is zoomed in to show the data noise in the Two Signal method of calibration vs the smooth Response Function.
I have also included the updated components plots for Magnitude (Plot 3) and Phase (Plot 4).  Note that the uncertainty is inflated now compared to aLog 21390.

The next step is to fix the kappa_C and f_c calculations; they still yield ~0.75 and >1000 Hz, respectively.  Sudarshan ought to be able to help me here relatively quickly.
Then I must inform my sigma_(param) values more intelligently.

Non-image files attached to this report
Comments related to this report
craig.cahillane@LIGO.ORG - 23:06, Sunday 13 September 2015 (21480)CAL
C. Cahillane, K. Izumi

Kiwamu noted that my magnitude strain uncertainty plots looked strange.  My magnitude uncertainty in A_pu was 4%, and should dominate at low frequency, but it was not contributing at all.  Similarly I noticed |C_r| uncertainty was not contributing at high frequency when it ought to.
I have replotted my strain uncertainty components plots.  I am reporting uncertainty in percentage, but my sigmas were using fractional values, i.e. I was multiplying by 0.01 and should have been using 1.
The plots are repaired and look a bit more sensible.
Non-image files attached to this comment
H1 CAL (CAL, DetChar, INJ, ISC, SYS)
jeffrey.kissel@LIGO.ORG - posted 19:28, Thursday 10 September 2015 - last comment - 16:15, Sunday 13 September 2015(21386)
CAL-CS Calibration Parameters Updated; Info on DARM Loop Parameters for which GDS Pipeline will Compensate
J. Kissel, for the LHO CAL Team

Spoiler Alert: Kiwamu has updated the sensing side of front-end calibration, and has tuned the CAL-CS model to match a fit to the current optical gain of this lock stretch. 
This means the CAL-CS calibration has been updated, and is now complete for O1.

He has analyzed the results, and created the canonical parameter set for O1. That parameter set is:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/DARMOLGTFs/H1DARMparams_1125963332.m

This should be run with the now canonical model:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/DARMOLGTFs/H1DARMOLGTFmodel_ER8.m

To push things forward while he gets a final super-long integration time comparison between PCAL and the CAL-CS model, we give this parameter set to Maddie such that she can use it to inform the parameters of the high-frequency corrections to the output of the CAL-CS model.

In summary, those corrections are 
(1) The precisely known time delays (advances) for the (inverse) sensing and actuation chains
(2) The compensation for the average of the uncompensated, high-frequency poles in the OMC DCPD's readout chain.

%% Details:
Note that, though there are uncompensated 24e3 [Hz] poles in the ESD response portion of the actuation chain, we have chosen in ER8/O1 to ignore these poles. Preliminary results from Craig indicate that we're sitting at ~3 or 4 degrees strain uncertainty at 30 [Hz] (right where the PUM / TST cross-over lies, as expected), and ignoring this pole corresponds to a 0.07 [deg] systematic error, i.e. negligible. So, the above two items should be the only differences between the CAL-CS front-end model and the full DARM model: the relative time-delay between the paths, which affects the ERR and CTRL crossover, and the uncompensated poles, which mean there's a 15 [deg] phase loss at 1 [kHz].

To obtain these values from code in the SVN, run
[openloop,par] = H1DARMOLGTFmodel_ER8(H1DARMparams_1125963332);
and the values you (Maddie) need(s) are
(1) The precisely known time delays (advances) for the (inverse) sensing and actuation chains:
    - Sensing Delay (or Inverse sensing advance): par.t.sensing + par.t.armDelay =  8.9618e-05 [s]
    - Actuation Delay: par.t.actuation = 1.4496e-04 [s]

(2) The high-frequency poles in the OMC DCPD's readout chain:
    - par.C.uncompensatedomcpoles_Hz = 13700       17800       14570       18525       98650 [Hz]
    or you can use the preformulated LTI object,
    - par.C.uncompensatedomcdcpd.c
           ans =
                                    6.3585e+25
         ----------------------------------------------------------------
         (s+8.608e04) (s+9.155e04) (s+1.118e05) (s+1.164e05) (s+6.198e05)


Stay tuned for aLOGs from Maddie over the next day or so, as she compares the output of the CAL-CS model with the output of the GDS pipeline.

Thanks to everyone for all the hard work!!
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:15, Sunday 13 September 2015 (21472)
M. Wade, J. Kissel

In the above list of what the GDS pipeline must correct, I've neglected several pieces of both chains -- the anti-aliasing and anti-imaging filters of the sensing and actuation chains, respectively. I've slept a little more, and I repeat the list that is now complete:

In summary, those corrections are 
(1) The precisely known time delays (advances) for the (inverse) sensing and actuation chains
(2) The compensation for the average of the uncompensated, high-frequency poles in the OMC DCPD's readout chain.
(3) The digital and analog anti-aliasing filters in the sensing chain
(4) The digital and analog anti-imaging filters in the actuation chain

In more detail,

Actuation: 
(1) Actuation Delay
   -- par.t.actuation = 1.4496e-04 [s]

     
(4) The digital and analog anti-imaging filters (now separated, due to their export as LTI objects; they cannot be combined (without squeezing onto a frequency vector) because one is a discrete zpk and the other is continuous) 
Digital Anti-Imaging Filter (a.k.a. IOP up-sampling filter) 
   -- par.A.antiimaging.digital.response.ssd
Analog Anti-Imaging Filter 
   -- par.A.antiimaging.analog.c

Sensing
(1) Sensing Delay (or Inverse sensing advance): 
   -- par.t.sensing + par.t.armDelay =  8.9618e-05 [s]

(2) The high-frequency poles in the OMC DCPD's readout chain:
   -- par.C.uncompensatedomcpoles_Hz

(3) The digital and analog anti-aliasing filters 
Digital Anti-Aliasing Filter (a.k.a. IOP down-sampling filter)
   -- par.C.antialiasing.digital.response.ssd
Analog Anti-Aliasing Filter 
   -- par.C.antialiasing.analog.c