H1 ISC
keita.kawabe@LIGO.ORG - posted 15:54, Thursday 01 October 2015 - last comment - 09:24, Monday 05 October 2015(22154)
Current status of noise bumps that are supposedly from PSL periscope (PeterF, Keita)

Just in case you're wondering why LHO sees two noise bumps at 315 and 350 Hz (attached, middle blue) that LLO does not: we don't fully understand it either, but here is the summary.

There are three things here: the environmental noise level, the PZT servo, and the jitter coupling to DARM. Even though the former two explain part of the LLO-LHO difference, they cannot explain all of it, and the coupling at LHO seems to be larger.

Reducing the PSL chiller flow will help but that's not a solution for the future.

Reimplementing the PZT servo at LHO will help and should be done. Squashing it all will be hard, though, as we are talking about jitter between 300 and 370 Hz and there's a resonance at 620 Hz.

Reducing coupling is one area that was not well explored. Past attempts at LHO were on top of dubious IMC WFS quadrant gain imbalances.


1. Environmental difference

These bumps are supposed to be from the beam jitter caused by PSL periscope resonances (not from the PZT mirror resonances). In the attached you can see that the bumps in H1 (middle blue) correspond to the bumps in PSL periscope accelerometer (top blue). (Don't worry, we figured out which server we need to use for DTT to give us correct results.)

Because of the PSL chiller flow difference between LLO and LHO (LHO alog; couldn't find the LLO alog, but we have MattH's word), the LLO periscope noise level is in general lower than LHO's. However, the difference in the accelerometer signal is not enough to explain the difference in the IFO.

For example, at 350 Hz the LHO PSL periscope is only a factor of 2 noisier than LLO's, and at 330 Hz LHO is quieter than LLO by more than a factor of 2. Yet we have a huge hump in DARM at LHO that gets larger and smaller but never goes away, while LLO DARM is dead flat.

At LLO they do have a servo to suppress noise at about 300 Hz, but it shouldn't be doing much, if anything, at 350 Hz (see the next section).

So yes, it seems like environmental difference is one of the reasons why we have larger noise.

But the jitter to DARM coupling itself seems to be larger.

Turning down the chiller flow will help but that's not a solution for the future.


2. Servo difference

At LLO there's a servo to squash beam jitter in PIT at 300Hz. LHO used to have it but now it is disabled.

At LLO, the IOOWFS_A_I_PIT signal is used to suppress PIT jitter, targeting the 300 Hz peak that sits right on some mechanical resonance/notch structure in PZT PIT (which LHO also has); the servo reduced the noise between about 270 and 320 Hz (LLO alog 19310).

The same servo was successfully copied to LHO with some modification, also targeting the 300 Hz bump (except that YAW was more coherent than PIT here, so we used the YAW signal), with somewhat less (but not much less) aggressive gain and bandwidth. At that time the 300 Hz bump was problematic together with the 250 Hz and 350 Hz bumps; see the plots in alogs 20059 and 20093.

Somehow the 250 Hz and 300 Hz bumps subsided, and now LHO is suffering from the 315 Hz and 350 Hz bumps (compare the attached with the above-mentioned alogs). Since we never had time to tune the servo filter to target either of the new bumps, and since turning the servo on without modification would make only a marginal improvement at 300 Hz while making 250 Hz/350 Hz somewhat worse due to gain peaking, it was disabled.

Reimplementing the servo to target the 315 and 350 Hz bumps will help. But it's not going to be easy to make this servo wide-band enough to squash everything, because of the 620 Hz resonance, which is probably something in the PZT mirror itself (see the above-mentioned alog 20059 for the open loop transfer function of the current servo, for example). In principle we could go even wider band, but we'd need more than a 2 kHz sampling rate for that. We could also stiffen the mount if 620 Hz is indeed the mount.
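To make the bandwidth trade-off concrete, here is a minimal sketch in Python/scipy of a band-targeted controller shape. This is not the deployed LLO/LHO servo; every corner frequency, Q, and gain below is an illustrative assumption.

import numpy as np
from scipy import signal

def resonant_gain(f0, q_num=2.0, q_den=20.0):
    # Second-order section with the same resonant frequency in numerator and
    # denominator; the boost at f0 is roughly q_den / q_num.
    w0 = 2.0 * np.pi * f0
    return signal.TransferFunction([1.0, w0 / q_num, w0 ** 2],
                                   [1.0, w0 / q_den, w0 ** 2])

def lowpass2(fc, q=0.7):
    # Plain second-order low pass to roll the controller off above the bumps.
    wc = 2.0 * np.pi * fc
    return signal.TransferFunction([wc ** 2], [1.0, wc / q, wc ** 2])

freqs = np.array([300.0, 315.0, 350.0, 370.0, 620.0, 1024.0])  # 1024 Hz ~ Nyquist of a 2 kHz model
w = 2.0 * np.pi * freqs

mag = np.ones_like(freqs)
for tf in (resonant_gain(315.0), resonant_gain(350.0), lowpass2(450.0)):
    _, h = signal.freqresp(tf, w)
    mag *= np.abs(h)

for f0, m in zip(freqs, mag):
    print("%6.0f Hz : |controller| = %5.2f" % (f0, m))

Even with a deliberate roll-off above the bumps, a controller with useful gain at 300-370 Hz still has non-negligible magnitude at 620 Hz, which is the basic conflict described above.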


3. Coupling difference

As I wrote in the environmental difference section, the accelerometer data and the IFO signal suggest that the coupling is larger at LHO.

There are many jitter coupling measurements at LHO but the best one to look at is this one. We should be able to make a direct comparison with LLO but I haven't looked.

Anyway, it is known that the coupling depends on IMC alignment and OMC alignment (and probably the IFO alignment).

At LHO, IMC WFS has offsets in PIT and YAW in an attempt to minimize the coupling. This is on top of dubious imbalances in the IMC WFS quadrant gains at LHO (see alog 20065; the minimum quadrant gain is a factor of 16 smaller than the maximum). We should fix that before spending much time studying the jitter coupling via alignment.

At LLO, there's no such imbalance and there's no such offset.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 12:58, Saturday 03 October 2015 (22208)

The coupling of these peaks into DARM appears to pass through a null near the beginning of each full-power lock stretch, perhaps indicating that this coupling can be suppressed through TCS heating.

Already from the summary pages one can see that at the beginning of each lock, these peaks are present in DARM, then they go away for about 20 minutes, and then they come back for the duration of the lock.

I looked at the coherence (both magnitude and phase) between DARM and the IMC WFS error signals at three different times during a lock stretch beginning on 2015-09-29 06:00:00 Z. Blue shows the signals 10 minutes before the sign flip, orange shows the signals near the null, and purple shows the signals 20 minutes after the sign flip.

One can also see that the peaks in the immediate vicinity of 300 Hz decay monotonically from the beginning of the lock stretch onward; my guess is that these are generated by some interaction with the beamsplitter violin mode and have nothing to do with jitter.
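For reference, the coherence/phase comparison described here can be reproduced with something like the following sketch (scipy); the sample rate is an assumption and the arrays are placeholders standing in for the real DARM and IMC WFS time series.

import numpy as np
from scipy import signal

fs = 2048.0                              # assumed (decimated) sample rate
darm = np.random.randn(int(600 * fs))    # placeholder for 10 min of DARM
wfs = np.random.randn(int(600 * fs))     # placeholder for an IMC WFS error signal

f, coh = signal.coherence(darm, wfs, fs=fs, nperseg=int(10 * fs))
_, pxy = signal.csd(darm, wfs, fs=fs, nperseg=int(10 * fs))
phase = np.angle(pxy, deg=True)          # relative phase of the cross-spectrum

# A sign flip in the coupling shows up as a ~180 degree jump in `phase`
# around 300-350 Hz between segments taken before and after the null.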

Images attached to this comment
keita.kawabe@LIGO.ORG - 09:24, Monday 05 October 2015 (22235)

Addendum:

alog 20051 shows the PZT to IMC WFS transfer functions (without the servo) for PIT and YAW, which makes it easier to see which resonance is in which DOF.

H1 General (INJ)
peter.shawhan@LIGO.ORG - posted 14:08, Thursday 01 October 2015 (22153)
Protocol for upcoming coherent hardware injection tests
Chris Biwer and other members of the hardware injections team will likely be doing coherent hardware injections in the near future, and these will hopefully be detected successfully by one or more of the low-latency data analysis pipelines.  Currently, we are still testing the EM follow-up infrastructure, so the "Approval Processor" software is configured to treat hardware injections like regular triggers.  Therefore, these significant GW "event candidates" should cause audible alarms to sound in each control room, similar to a GRB alarm.  The operator at each site will be asked to "sign off" by going to the GraceDB page for the trigger and answering the question, "At the time of the event, was the operating status of the detector basically okay, or not?"  You can also enter a comment.

For the purpose of these tests, if you are the operator on shift, please:
  * Do not disqualify the trigger based on it being a hardware injection -- we know it is!  So, please sign off with "OKAY" if the detector was otherwise operating OK.
  * Pay attention to whether the audible alarm sounded.  In the past we had issues at one site or the other, so this is one of the things we want to test.
  * Feel free to enter a comment on the GraceDB page when you sign off, like maybe "this was a hardware injection and the audible alarm sounded".
  * You may get a phone call from a "follow-up advocate" who is on shift to remotely help check the trigger.

Note: in the future, once the EM follow-up project is "live", a known hardware injection will not cause the control-room alarms to sound (unless it is a blind injection).  You should not write anything in the alog about alarms from GW event candidates, because that is potentially sensitive information and the alogs are publicly readable.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 13:01, Thursday 01 October 2015 (22152)
Ops Mid Day Shift Summary
IFO has been locked at NOMINAL_LOW_NOISE, 23.0W, 72Mpc for the past 5 hours. Wind and seismic activity are low. 4 ETM-Y saturation alarms. Received GRB alert at 18:31 UTC (12:31 PT); LHO was in Observing mode during this event.
H1 ISC
daniel.sigg@LIGO.ORG - posted 11:06, Thursday 01 October 2015 (22150)
RF45 stabilization, two days after the swap

The attached plot shows the 2-day trend of the RF45 glitches. There were no glitches in the past day. The large glitches 24 hours ago were us. This is not inconsistent with a cable or connection problem, so no one should be surprised if the problem reappears.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:02, Thursday 01 October 2015 (22149)
Day Shift Transition Summary
Title:  10/01/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)

State of H1: At 15:00 (08:00) Locked at NOMINAL_LOW_NOISE, 23.0W, 72Mpc

Outgoing Operator: TJ

Quick Summary: Wind is calm, no seismic activity. All appears normal. Intent Bit at Commissioning while LLO was recovering from a lockloss.     
LHO General
thomas.shaffer@LIGO.ORG - posted 08:00, Thursday 01 October 2015 (22146)
Ops Owl Shift Summary
LHO General
thomas.shaffer@LIGO.ORG - posted 07:43, Thursday 01 October 2015 - last comment - 08:49, Thursday 01 October 2015(22145)
Relocked, but doing an injection while LLO is down

Relocked @ 14:38

Sheila wants to do a quick injection while LLO is down.

Comments related to this report
sheila.dwyer@LIGO.ORG - 08:49, Thursday 01 October 2015 (22148)

My excitation ended just before we got the GRB alert, but it was running at the time of the GRB itself (LLO was not observing, so we were taking advantage of some single-IFO time to investigate noise at 78 Hz in DARM that may come from EX).

When we heard the alert I stopped the DTT session and Jeff B went to Observing, but there were times, even when we weren't in Observing, when no excitations were running. GraceDB lists 1127747079.41 as the event time for the first GRB alert, and unfortunately my excitation was running at that time. My last excitation was ramping down by 1127747090, as shown in the first attached dataviewer screenshot, where the GRB time is approximately in the middle of the plot, so I was exciting the ETMX ISI at the time of the event.

The two channels that I was putting excitations on were H1:ISI-ETMX_ST2_ISO_Y_EXC and H1:ISI-ETMX_ST2_ISO_Y_EXC. These were white-noise excitations that produced ISI motion of 0.1 nm/rtHz at 20 Hz, with an amplitude that slowly drops off as the frequency increases up to 100 Hz (0.02 nm/rtHz). The excitation was band-passed from 20 Hz to 200 Hz. It produced no features in the DARM spectrum, although it was intended to excite the peaks at 78-80 Hz.
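For what it's worth, the spectral shaping described here can be sketched as follows. This is illustrative only: the sample rate, duration, overall normalization, and the flat 100-200 Hz portion are assumptions, not a record of the actual awg settings.

import numpy as np

fs, dur = 4096, 64                      # assumed sample rate [Hz] and duration [s]
n = fs * dur
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# Target shape: 0.1 (arb. units ~ nm/rtHz) at 20 Hz, falling like 1/f to
# 0.02 at 100 Hz, assumed flat out to 200 Hz, zero outside the 20-200 Hz band.
shape = np.zeros_like(freqs)
band = (freqs >= 20.0) & (freqs <= 200.0)
shape[band] = 0.1 * (20.0 / np.clip(freqs[band], 20.0, 100.0))

white = np.fft.rfft(np.random.randn(n))
excitation = np.fft.irfft(white * shape, n)   # overall normalization left arbitrary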

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 07:14, Thursday 01 October 2015 (22144)
Lockloss

Lockloss @ 13:59 UTC

ITMX saturation, and it tripped SUS OMC WD. No obvious reason for lockloss.

H1 CDS
thomas.shaffer@LIGO.ORG - posted 06:17, Thursday 01 October 2015 - last comment - 07:56, Thursday 01 October 2015(22142)
Some red popping up on CDS Overview at Mid Y

H1IOPPEMMY and H1PEMMY both started reporting errors for FE, ADC, and DDC on the CDS Overview around 12:55 UTC. There was also red around TDS, so I checked the timing screen, and there seems to be a problem with Port 13: "Invalid or no data".

Since this is only PEM at MidY, I have NOT taken us out of Observing.

Comments related to this report
james.batch@LIGO.ORG - 07:56, Thursday 01 October 2015 (22147)
The I/O chassis is no longer visible to the computer h1pemmy.  This is not critical to the operation of the interferometer. This can wait until Tuesday to fix unless someone desperately needs PEM data from MY.
LHO General
thomas.shaffer@LIGO.ORG - posted 04:30, Thursday 01 October 2015 (22141)
Ops Owl Mid Shift Report

Humming along @ 75Mpc. Have had a handful of glitches during my shift, but the RF noise seems to be under control for now.

H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 00:36, Thursday 01 October 2015 - last comment - 18:06, Friday 28 October 2016(22140)
Official, Representative Calibrated ASD for the Start of O1 -- Now With Time Dependent Corrections Displayed
J. Kissel, for the Calibration Team

I've updated the results from LHO aLOG 21825 and G1501223 with an ASD from the current lock stretch, such that I could display the computed time dependent correction factors, which have recently been cleared of systematics (LHO aLOG 22056), sign errors (LHO aLOG 21601), and bugs yesterday (22090). 

I'm happy to say that not only does the ASD *without* time-dependent corrections still fall happily within the required 10%, but if one eyeballs the time-dependent corrections and how they would be applied at each of the respective calibration line frequencies, they make sense.

To look at all relevant plots (probably only interesting to calibrators and their reviewers), see the first pdf attachment. The second and third .pdfs are the money plots, and the text files are a raw ASCII dump of the respective curves so you can plot them however or wherever you like. All of these files are identical to what is in G1501223.

This analysis and plots have been made by
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/produceofficialstrainasds_O1.m
which has been committed to the svn.
Non-image files attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 18:06, Friday 28 October 2016 (30973)

Apparently, this script has been moved to a slightly different location. The script can be found at

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMASDs/produceofficialstrainasds_O1.m

LHO General
thomas.shaffer@LIGO.ORG - posted 00:02, Thursday 01 October 2015 (22139)
Ops Owl Shift Transition
H1 General
travis.sadecki@LIGO.ORG - posted 00:00, Thursday 01 October 2015 (22138)
OPS Eve shift summary

Title: 9/30 Eve Shift 23:00-7:00 UTC (16:00-24:00 PST).  All times in UTC.

State of H1: Observing

Shift Summary: No news is good news.  Locked my entire shift in Observing.  Only 4 ETMy saturations.  Seismic and wind calm.  No RF45 issues.

Incoming operator: TJ

Activity log:

23:38 ETMy saturation

3:07 ETMy saturation

4:56 ETMy saturation

6:36 ETMy saturation

H1 General
betsy.weaver@LIGO.ORG - posted 23:05, Wednesday 30 September 2015 - last comment - 20:40, Thursday 01 October 2015(22137)
Balance of OBSERVE.SNAP files copied to userapps

Following on from where Hugh left off in alog 21412, I have copied the OBSERVE.snap files from their target area to their appropriate userapps/...h1xxxxx_observe.snap area in prep for committing them to svn. I have copied these files for: SUS, ALS, ISC, CALEX, CALEY. Hugh had already done LSC, LSCAUX, ASC, ASCIMC, OMC, OAF, TCSCS, CALCS, and all SEI. The balance (AUX, IOP, ODC, and PEM) have no observe.snap file since it is unnecessary. The ones that I've just copied will be committed to svn tomorrow.

Comments related to this report
betsy.weaver@LIGO.ORG - 20:40, Thursday 01 October 2015 (22176)

This eve, I finished committing the moved-over OBSERVE.snap files to the svn.  I also committed the lsc OBSERVE.snap since we had changed a few settings (DHARD Y FM2 for example) recently.

H1 CAL (DetChar, ISC, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 22:23, Wednesday 30 September 2015 - last comment - 22:44, Wednesday 30 September 2015(22135)
The Case For and Consequences of Flipping the ESD Bias on H1 ETMY
J. Kissel, B. Weaver

This aLOG serves more as a discussion of recent results and future impact, but I figure the aLOG is the most visible place to cover a message that needs discussion among many disparate groups. I discuss the latest evidence for charge evolution on the H1 ETMY test mass, why it matters for that test mass alone, and what it will impact when we do change it (especially in regards to calibration). Conclusions are in bold at the bottom of each section.

----------------
Why I think it's looming:

    :: kappa_TST is the calibration group's measure of the change in actuation strength of the test mass stage as a function of time, and since H1 ETMY is the only DARM actuator, it shows that the H1 ETMY ESD / TST / L3 stage's strength has increased: kappa_TST has shown an increase in actuation strength of ~2% over ~3 [weeks]. See the blue trace in the top subplot of the first attachment (copied from LHO aLOG 22082).
	
    :: H1 ETMY is showing steadily increasing charge since we've last flipped the ESD bias, as expected: See the trend in all subplots of the second attachment (copied from LHO aLOG 22062).
	
	> Recall that actuation strength changes as a function of charge, proportional to the ratio of effective bias voltage from charge to the bias voltage we apply intentionally, Vc / Vb.

	> Regrettably, we've measured the charge so infrequently since the start of the run that it's dicey to corroborate the charge against the actuation strength. But I'll try anyway. If you take Betsy's last and second-to-last charge measurement points, which bound Sudarshan's kappa_TST time span, you can eyeball that the change is 15 [V] of effective bias. This means an actuation strength change of 15 / 380 = 4%. Given the error bars on the charge measurements and how few measurements we've had, this is totally consistent with the ~2% change seen by tracking the calibration lines. Also, if the effective bias voltage from charge increases on an already positive bias, then you're increasing the total bias voltage, which is consistent with an increase in actuation strength, since the linear component of the actuation force is proportional to the bias voltage.

	> The current rate of change on ETMY is ~3 V / week. But also recall that the rate of change has *changed* every time we've flipped the bias; see Leo's analysis in LHO aLOG 20387, which (for ETMY) quotes 1.75 V / week for one epoch and 5 V / week for the next. So it's going to be subjective when we flip the bias sign, and we have to keep close tabs on it, especially since the error bars on an individual day's measurement mean many data points are needed to show a pattern. However, the evidence so far suggests that even though a bias sign flip changes the rate, the rate stays constant until the bias sign is flipped again.

    :: The calibration group -- now that we have finally removed all systematic errors, fixed all problems, and cleaned up all sign confusion -- finally believes that the live-tracking of this slow time dependence is working, and is tracking the real thing as far as these very long term trends go (see the first attachment again). But we have not yet started applying the corrections, because we haven't figured out the right *time scale* on which to apply them, and applying them takes some debugging. See the third attachment (this is new). These time-dependent parameters are being computed at a rate of 16 [Hz], but if you look at, say, one 420 [sec] chunk, the record wanders all over within a roughly +/-3% swing on a ~10 [sec] cadence. We have no evidence to believe the test mass actuation strength is changing this fast, so this is likely noise. So at this point, unless we're willing to lose a day or so of data (i.e. enough lock stretches to get a sense of a pattern) while we test this out, I don't think we *can* start correcting for these factors.

    :: The calibration group's requirement is to stay within 10% and 5 [deg] uncertainty over the course of the entire run. We already know from Craig's uncertainty analysis of ER8 (see the fourth attachment, copied from LHO aLOG 21689) that -- without including time dependence -- the reference time model (against which all time-dependent parameters are calculated) has an uncertainty of 5% and a little more than 5 [deg]. The change in actuation strength means that we'll have a systematic error that grows with time, and since we're fighting for what's left of the 10% uncertainty budget, the 2-3% change in actuation strength that we've tracked over these first three weeks of the run is significant.

    :: If we let the charge continue to accumulate at its current rate over the remaining ~11 weeks of the run, that means another ~35 [V], for a total accumulated effective bias of ~60 [V], and a total actuation strength change of 60 / 380 = 15%, which is *well* outside of our budget.
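A quick numeric restatement of the estimates above, using only the round numbers already quoted in this entry:

Vb = 380.0              # intentionally applied ESD bias [V]
dVc_recent = 15.0       # effective-bias change between the last two charge measurements [V]
print("strength change so far : %.1f%%" % (100.0 * dVc_recent / Vb))     # ~4%

rate, weeks_left = 3.0, 11.0            # ~3 V/week for ~11 more weeks
print("additional accumulation: ~%.0f V" % (rate * weeks_left))          # ~35 V

Vc_end_of_run = 60.0    # present accumulation plus the ~35 V above [V]
print("end-of-run change      : %.1f%%" % (100.0 * Vc_end_of_run / Vb))  # well outside the 10% budget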

In summary, we're going to need to change the sign of the H1 ETMY bias soon.
-------------------

What will be impacted when we change it:

    :: First, foremost, and easiest: when we change the digital bias sign, it changes the sign of the test mass actuation stage. That means we have to compensate for it in order for the DARM loop to remain stable, i.e. we change the sign of the gain in the L3 DRIVEALIGN bank, H1:SUS-ETMY_L3_DRIVEALIGN_L2L_GAIN. Done; easy.
    :: Now on to the hard stuff -- making sure it doesn't affect the calibration. The obvious first impact is on the CAL-CS replication of the reference model of the DARM loop (the digital sign flips themselves are sketched at the end of this list):

        > Of course, we need to flip sign of the digital replica of the drive align matrix, H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN
        > We also need to flip the sign of the replica of the ESD itself, H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN 
    
    :: The not-so-obvious impact is on the tracking of the actuation strength. Because the ESD / TST / L3 calibration line is injected downstream of the DARM distribution but upstream of the drive-align bank, the sign in the analysis code must change. We've already been bitten by this once -- see LHO aLOG 21601.

        > That means we need to update the EPICS records that capture the reference model values at the calibration line frequencies.
        > *That* means we need to create a new DARM model parameter set, which also maps all of the changes from the ESD / TST / L3 sign change (just like CAL-CS).
        > *THAT* means we ought to take a new DARM open loop gain and PCAL-to-DARM TF to verify that the parameter set is valid.
        > *THAT* means we need a fully functional and undisturbed IFO (i.e. this can't just be done on a Tuesday, or as a "target of opportunity" when L1 is down), *after* the sign flip, that we're willing to take out of observation mode for an hour or so.

    :: Once we have a new, validated model, we push the new EPICS records into the CAL-CS model, and everything that needs updating should be complete.

    :: Then there's the "offline" work of updating the SDF system -- this must be done quickly because otherwise the differences prevent you from setting the observation intent bit. For these particular records, which have very small values, there are precision issues that mean you must hand-edit the .snap files (see, e.g., LHO aLOGs 22079, 22065, and 21014), which is another hour of work, instead of just hitting "accept all" and "confirm" in the SDF GUI like any other record.
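For concreteness, here is a minimal sketch of just the digital sign flips listed above, assuming pyepics (caget/caput) and EPICS write access. It is illustrative only and deliberately omits the model, EPICS-record, and SDF updates described in the surrounding items.

from epics import caget, caput

# Channels named earlier in this entry whose gains flip sign with the bias.
channels = [
    "H1:SUS-ETMY_L3_DRIVEALIGN_L2L_GAIN",              # DARM actuation path
    "H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN",   # CAL-CS drive-align replica
    "H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN",              # CAL-CS ESD replica
]

for ch in channels:
    val = caget(ch)
    caput(ch, -val)   # flip the sign in place
    print("%s : %+g -> %+g" % (ch, val, -val))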

In summary, if we are properly prepared for this and everyone is on deck ready for it, I think this can all be done in one (very long, human) day, especially if we have a *team* of dedicated calibrators, detector engineers, and some commissioning support. Further, it's not a "just do it on a Tuesday" or "just do it as a target of opportunity when L1 is down" kind of task, and each of the above steps should be done slowly and methodically.

Also, I should say that there is room for improvement in just about every part of this bias sign flipping process, but all of those improvements would require non-science-run-friendly changes to front-end code and RCG infrastructure, and a good bit of commissioning time.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 22:44, Wednesday 30 September 2015 (22136)ISC, SUS, SYS
Why isn't this an issue with any of the other test masses?

Both sites' ETMX ESD drivers are turned entirely off during low-noise operation, so they have no impact on noise or actuation strength.

For lock acquisition, 
At H1, ETMX is used, but whether the ETMX acquisition ESD has a 10-20% change in actuation strength over the course of the run does not matter for calibration or lock acquisition. Regrettably the H1 ETMX bias is currently negative, so its current trend of positive charging means that its actuation strength will decrease over the run. But, again, a 10-20% drop in strength won't matter. The only place where I could see us run into trouble is wherever we're on the edge of stability / robustness, like acquiring during very high winds, or if we've designed the L3/L1 ALS DIFF cross-over particularly aggressively (which I hope we have not).

At L1, they've recently switched to using ETMY for lock acquisition, then transitioning to their *ITMX* driver, switching the ETMY driver to low noise, and then transitioning back. So only their ETMY ESD strength matters. And, goh'bless'm, they flipped their bias sign *just* before the run started, with the charge at a -40 [V] effective bias, so at their current positive charging rate of ~10 V/month (2.5 V/week) they'll pass through zero right around the end of the run. That means the impact on their actuation strength will slowly decrease over time.

At H1, the ITMs do not have any ESD drivers (high or low voltage), so they also need no consideration.

At L1, only ITMX has an ESD driver, but again, it's currently used in transit to low noise, so it doesn't play a role in calibration, and assuming the loops were designed with ample margin, a 10-20% change in actuation strength shouldn't be a bother.

In summary, H1 ETMY is the only test mass for which we will need to play such terrible games during O1.
H1 INJ (CAL, DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 14:18, Wednesday 30 September 2015 - last comment - 06:21, Thursday 01 October 2015(22121)
Testing PCAL as Hardware Injector
C. Biwer, J. Kissel

Taking advantage of single-IFO time to run PCAL vs DARM hardware injections. More details later.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:11, Wednesday 30 September 2015 (22122)
PCAL Injection tests complete. PCAL X has been restored to nominal configuration.


Injection           Approx End time (GPS)
DARM 1              1127683335
PCAL 1              1127683906
PCAL 2              1127684171
PCAL 3              1127684465
DARM 2              1127684766
DARM 3              1127685143

More details and analysis to come.

These were run from the hwinjection machine as hinj.
Usual DARM Command
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d

PCAL Command:
awgstream H1:CAL-PCALX_SWEPT_SINE_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d
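(A note on the commands, as an assumption from memory rather than from the awgstream documentation: the arguments above should be the excitation channel, the sample rate in Hz, the single-column waveform file, and a scale factor, with the repeated -d flags simply raising the debug verbosity.)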

We turned OFF the 3 [kHz] PCAL line during the excitation.

We're holding off on observation mode to confer about other single-IFO tests we can do while L1 is down.
christopher.biwer@LIGO.ORG - 15:14, Wednesday 30 September 2015 (22123)DetChar, INJ
I've attached omega scans of the PCAL and DARM injections.

All injections used the 15Hz template from aLog 21838.
Images attached to this comment
andrew.lundgren@LIGO.ORG - 17:08, Wednesday 30 September 2015 (22127)DetChar, INJ
The SNRs of the Pcal injections seem a bit lower than intended. Omega reports SNR 10.5 for the injection through the normal path, which is about right. But for the Pcal injections, the SNRs are 5.5, 7.6, and 7.2. Note that these are the SNRs in CAL-DELTAL; someone should check in GDS strain as well. Links to scans below:

Standard path
Pcal 1
Pcal 2
Pcal 3
peter.shawhan@LIGO.ORG - 06:21, Thursday 01 October 2015 (22143)INJ
*** Cross-reference: See alog 22124 for summary and analysis
H1 CDS
david.barker@LIGO.ORG - posted 09:10, Sunday 27 September 2015 - last comment - 12:53, Thursday 01 October 2015(21989)
restarted ext_alert.py, need to get this to autostart

The ext_alert.py script, which periodically queries GraceDB, had failed. I have just restarted it; instructions for restarting are in https://lhocds.ligo-wa.caltech.edu/wiki/ExternalAlertNotification

Getting this process to autostart is now on our high priority list (FRS3415).

here is the error message displayed before I did the restart:

  File "ext_alert.py", line 150, in query_gracedb
    return query_gracedb(start, end, connection=connection, test=test)
  File "ext_alert.py", line 150, in query_gracedb
    return query_gracedb(start, end, connection=connection, test=test)
  File "ext_alert.py", line 135, in query_gracedb
    external = log_query(connection, 'External %d .. %d' % (start, end))
  File "ext_alert.py", line 163, in log_query
    return list(connection.events(query))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 441, in events
    uri = self.links['events']
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 284, in links
    return self.service_info.get('links')
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 279, in service_info
    self._service_info = self.request("GET", self.service_url).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 325, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 201, in request
    response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1038, in getresponse
    response.begin()
  File "/usr/lib/python2.7/httplib.py", line 415, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 476, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 241, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 160, in read
    return self._sslobj.read(len)
ssl.SSLError: The read operation timed out

Comments related to this report
duncan.macleod@LIGO.ORG - 12:53, Thursday 01 October 2015 (22151)

I have patched the ext_alert.py script to catch SSLError exceptions and retry the query [r11793]. The script will retry up to 5 times before crashing completely, which is something we may want to rethink if we have to.
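For anyone curious without digging into the SVN, the change is functionally along these lines (a minimal sketch only; the wait time and wrapper structure here are assumptions, and the actual patch is whatever is in ext_alert.py r11793):

import ssl
import time

def with_retries(func, max_tries=5, wait=30):
    # Wrap a GraceDB query function so that ssl.SSLError triggers a retry.
    def wrapper(*args, **kwargs):
        for attempt in range(1, max_tries + 1):
            try:
                return func(*args, **kwargs)
            except ssl.SSLError:
                if attempt == max_tries:
                    raise              # give up (crash) after the final attempt
                time.sleep(wait)       # pause before querying again
    return wrapper

# e.g.: query_gracedb = with_retries(query_gracedb)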

I have asked both sites to svn up and restart the ext_alert.py process at the next convenient opportunity (i.e. the next time it crashes).
