H1 CAL (CAL)
sudarshan.karki@LIGO.ORG - posted 19:22, Thursday 01 October 2015 (22168)
DARMOLGTF and PCAL sweep during LLO Down

S. Karki, C.Biwer

Used the single-IFO time to take the regular DARMOLGTF and PCALY-to-DARM sweep measurements for calibration purposes. We also did a sweep with PCALX and used that measurement to create an inverse actuation filter.

The coherence dropped to 0.95 during the PCALY sweep at around 35 Hz because I forgot to turn off the x_ctrl line. Oops!

The measurement files are committed to the SVN:

DARMOLGTF: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/2015-10-01_H1_DARM_OLGTF_7to1200Hz.xml

PCALY SWEEP: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/2015-10-01_H1_PCALY2DARMTF_7to1200Hz.xml

PCALX SWEEP: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL/2015-10-01_H1_PCALX2DARMTF_7to1200Hz.xml

Images attached to this report
H1 CAL (AOS, CAL, INJ)
sudarshan.karki@LIGO.ORG - posted 18:59, Thursday 01 October 2015 (22160)
Pcal inverse actuation filter at Xend (Missing factor of ~2 and sign)

S. Karki, C. Biwer, R. Savage

The factor of two that was missing from the Pcal inverse actuation filter installed at X-end has now been recovered from a new transfer function measurement, and an appropriate new filter is ready to install. The filters we used previously were created using the transfer function taken from PCALY; the two Pcals should in principle be very similar, and we were working on that assumption.

Today, in trying to troubleshoot, we took a transfer function between RxPD and EXC at X-end and used it to create the inverse actuation filter. We were able to recover the missing factor and possibly the sign as well. This still does not explain why there is a factor of 2 in the Pcal response between the two ends. We will continue investigating.

The factor of two is not unexpected and should not impact the calibration. We pick off a small part of the laser light that goes to the ETM using an uncoated optic oriented close to Brewster’s angle.  The small fraction of the light reflected from this optic (less than 1%) is a strong function of the angle of incidence. The reflected light is directed to the Optical Follower Servo photodetector. The percentage of light reflected, along with attenuators installed in front of the PD, coupled with the diffraction efficiency of the AOM, could easily account for this factor of 2. The excitation signal generates an offset in the Optical Follower Servo that is compared with the light level from the OFS PD.  The OFS ensures that they are equal and opposite (in sign). We monitor the light that is actually directed to and reflected from the ETM, and the calibration is based on this.

In the meantime, we will install this new filter at the next opportunity and do a test through a full transfer function as well as waveform injection.

Attached is a plot with measured TF and an appropriate fit.
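
For context only, here is a minimal sketch of the "invert the fitted model" step, with made-up zeros, poles, and gain standing in for the real Pcal fit (scipy assumed; this is not the tool actually used):

import numpy as np
from scipy import signal

# Hypothetical fitted zpk model of the measured actuation path (RxPD/EXC);
# these zero/pole/gain values are placeholders, not the real fit.
z_fit = [-2 * np.pi * 300.0]
p_fit = [-2 * np.pi * 1.0, -2 * np.pi * 5.0]
k_fit = 2.0   # would carry the recovered factor of ~2 (and the sign)

# The inverse actuation filter swaps zeros and poles and inverts the gain.
z_inv, p_inv, k_inv = p_fit, z_fit, 1.0 / k_fit

f = np.logspace(0, 3, 500)          # 1 Hz to 1 kHz
w = 2 * np.pi * f
_, h_fwd = signal.freqs_zpk(z_fit, p_fit, k_fit, worN=w)
_, h_inv = signal.freqs_zpk(z_inv, p_inv, k_inv, worN=w)
print(np.allclose(h_fwd * h_inv, 1.0))   # True: the inverse undoes the fitted TF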

Images attached to this report
LHO VE
kyle.ryan@LIGO.ORG - posted 18:29, Thursday 01 October 2015 (22165)
Leak checked vent/purge valve at Y-mid
Kyle, Gerardo 

Today we bagged the Y-mid vent/purge valve NW50-4.5" adapter and sprayed helium (Valve is/was closed)

Warmed up LD for > 30 minutes and calibrated (cal-gas on Tee at LD inlet) -> Helium signal 6.4 x 10^-8 torr*L/sec with cal-leak valved-in, < 10^-11 torr*L/sec when cal-gas valved out

1.4 x 10^-8 torr*L/sec helium baseline when turbo initially valved-in to Y-mid -> Declined to 7.3 x 10^-9 torr*L/sec over next 90 minutes -> bagged vent/purge valve's NW50-4.5" adapter, filled bag with 4 LPM helium flow for 100 seconds and again for 10s of seconds -> LD signal began to increase ~30 seconds after initial helium applied and peaked at 1.2 x 10^-8 torr*L/sec -> Removed bag -> LD signal fell off to 6.4 x 10^-9 torr*L/sec after ???? minutes -> Bagged adapter a second time, only this time included the O2 sensor (with internal pump) -> Repeated exercise and achieved O2 < 1% -> Max signal 1.1 x 10^-8 torr*L/sec -> Removed bag and valved-out turbo etc. when Y-mid helium signal < 8 x 10^-9 torr*L/sec

Conclusion: 
Turbo aperture ~75 sq. in.; beam tube apertures (~1500 sq. in. each) x 2 = ~3,000 sq. in. >> Turbo "sees" ~3% of the actual signal -> Air leak rate is ~0.4 that of helium -> maybe can explain 1/10 of the assumed air leak.
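
For what it's worth, a quick sketch of the arithmetic behind this conclusion as I read it (the 1/sqrt(M) molecular-flow scaling is my assumption for where the ~0.4 factor comes from):

import math

# Rough numbers quoted above
turbo_aperture = 75.0          # sq. in.
tube_apertures = 2 * 1500.0    # sq. in., both beam-tube directions
frac_at_turbo = turbo_aperture / (turbo_aperture + tube_apertures)
print(frac_at_turbo)           # ~0.024, i.e. the turbo "sees" roughly 3% of the leak

# Air effuses more slowly than helium; in molecular flow the rate scales as 1/sqrt(M)
air_vs_helium = math.sqrt(4.0 / 29.0)
print(air_vs_helium)           # ~0.37, consistent with the ~0.4 quoted above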
H1 INJ (DetChar, INJ)
christopher.biwer@LIGO.ORG - posted 18:04, Thursday 01 October 2015 - last comment - 20:18, Thursday 01 October 2015(22163)
Scheduling coherent injections
Adam M., Chris B.

We are going to be scheduling coherent hardware injections. Will update with schedule lines.
Comments related to this report
christopher.biwer@LIGO.ORG - 18:07, Thursday 01 October 2015 (22164)
1127783813 1 1.0 coherentbbh0_1126259455_
1127784860 1 1.0 coherentbbh1_1126259455_
1127785760 1 1.0 coherentbbh2_1126259455_
1127786667 1 1.0 coherentbbh3_1126259455_
1127787567 1 1.0 coherentbbh4_1126259455_
1127788467 1 1.0 coherentbbh5_1126259455_
1127789367 1 1.0 coherentbbh6_1126259455_
1127790274 1 1.0 coherentbbh7_1126259455_
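
For reference, a minimal sketch for parsing these schedule lines, assuming the columns are GPS start time, injection type code, scale factor, and waveform file prefix (the column meanings are my reading, not taken from the tinj documentation):

def parse_schedule(text):
    entries = []
    for line in text.splitlines():
        if not line.strip():
            continue
        gps, inj_type, scale, prefix = line.split()
        entries.append((int(gps), int(inj_type), float(scale), prefix))
    return entries

schedule = parse_schedule("""
1127783813 1 1.0 coherentbbh0_1126259455_
1127784860 1 1.0 coherentbbh1_1126259455_
""")
print(schedule[0])   # (1127783813, 1, 1.0, 'coherentbbh0_1126259455_')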
christopher.biwer@LIGO.ORG - 18:43, Thursday 01 October 2015 (22166)
At H1, tinj is being paused and cannot be re-enabled; the second injection did go through at L1. We remembered to change CAL-INJ_EXTTRIG_ALERT_TIME. So we are removing from the schedule the injections that would have failed.

The schedule now reads:

1127783813 1 1.0 coherentbbh0_1126259455_
1127784860 1 1.0 coherentbbh1_1126259455_
1127789367 1 1.0 coherentbbh6_1126259455_
christopher.biwer@LIGO.ORG - 18:47, Thursday 01 October 2015 (22167)DetChar, INJ
We are done updating the tinj schedule. After the last injection we will post an aLog with more details.
peter.shawhan@LIGO.ORG - 20:18, Thursday 01 October 2015 (22171)INJ
Argh!  The version of ext_alert.py running at LHO is evidently an older version which is still setting H1:CAL-INJ_TINJ_PAUSE in addition to H1:CAL-INJ_EXTTRIG_ALERT_TIME.  tinj imposes an automatic 1-hour pause based on H1:CAL-INJ_EXTTRIG_ALERT_TIME; setting H1:CAL-INJ_TINJ_PAUSE is an independent pause mechanism, intended for humans to use.  This is the same issue we had for last week's hardware injection tests (alog 21932).  It's too bad the software wasn't updated to match the current version which is running at LLO.
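
For later readers, here is a rough sketch of the two pause mechanisms described above (pyepics assumed for the channel reads; the channel semantics, GPS seconds in EXTTRIG_ALERT_TIME and a nonzero TINJ_PAUSE meaning paused, are my assumptions, not taken from the tinj source):

from epics import caget   # pyepics assumed

EXTTRIG_HOLDOFF = 3600.0   # tinj's automatic 1-hour pause after an external trigger

def injection_allowed(gps_now):
    """Sketch of the automatic and manual pause checks; channel semantics assumed."""
    last_alert = caget('H1:CAL-INJ_EXTTRIG_ALERT_TIME')   # GPS of most recent GRB/external alert
    manual_pause = caget('H1:CAL-INJ_TINJ_PAUSE')          # independent, human-set pause
    return (gps_now - last_alert) > EXTTRIG_HOLDOFF and not manual_pause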
H1 SUS
jenne.driggers@LIGO.ORG - posted 17:56, Thursday 01 October 2015 (22162)
MC2 M2 saturations causing lockloss during CARM offset reduction - again

We had some trouble getting re-locked this last time. 

Several of the locklosses that we had while trying to get back online were due to MC2 M2 saturations, much like we saw in early September.  See, for example, aLog 21169.  At some point this stopped being something we noticed, so we didn't put much effort into determining the cause. Anyhow, we were able to lock and are currently back in Observing mode, so maybe we should just sit tight and see whether this happens again or goes away again.

H1 General
travis.sadecki@LIGO.ORG - posted 17:46, Thursday 01 October 2015 (22161)
Observing Mode

Back to Observing Mode @ 0:42 UTC.

H1 CDS
david.barker@LIGO.ORG - posted 16:19, Thursday 01 October 2015 (22156)
h1pemmy down due to bad AA-Chassis

Richard, Gerardo, Jim, Dave:

While the IFO was in commissioning we went to Mid-Y to investigate the h1pemmy failure which happened at 03:53 PDT this morning. We found that the IO Chassis was not powered up, but the +24V power supply was running. The +24V fuse in the fuse units at the top of the DC power supply rack had blown. We replaced the fuse, powered up the IO Chassis, and noted the DC power supply was drawing a steady 3.5 A as expected. It was then we noticed the AA Chassis was not on. It has a fuse on the +18V line, which had also blown. Putting a new fuse in produced a hot electrical odor from the AA Chassis, so the fuse was removed to de-energize the AA.

Initially we left all the systems down (computer and IO Chassis) and returned to the corner station. But with h1pemmy not running, the DAQ EDCU and CONLOG were red on the overview screen. We remotely powered up the h1pemmy computer, which then started the h1ioppemmy and h1pemmy models. With the EPICS IOCs running, this has appeased EDCU and CONLOG. For now the two models have FEC and DAQ errors since they are not actually running; only the EPICS IOC is running. We will remain in this state until the AA Chassis is running again.

A trend of a seismic signal and the front end status shows they both went down at 03:53 PDT Thu 01 Oct 2015. We presume the failure of the AA Chassis and the overcurrent on the 18V line glitched the 24V line, which in turn blew the fuse.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:08, Thursday 01 October 2015 (22155)
Ops Day Shift Summary
Activity Log: All Times in UTC (PT)

15:00 (08:00) Take over from TJ
15:05 (08:05) GRB Alert – Switch to Observing mode
16:33 (09:33) Jodi – Delivering storage items to mechanical building
16:59 (09:59) Jodi – Finished with delivery
18:31 (11:31) GRB Alert – In Observing mode before notice
20:47 (13:47) Switch to Commissioning mode – Commissioning work while LLO is down
21:26 (14:26) Kyle & Gerardo – Going to Mid-Y to start rotating pump and leak detector 
21:35 (14:35) Dave & Jim – Going to Mid-Y to check on PEM problem
22:31 (15:31) Filiberto – Going to Mid-Y to work on bad AA chassis 
22:45 (15:45) Lockloss – Commissioning activities
23:00 (16:00) Turn over to Travis


End of Shift Summary:

Title: 10/01/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)

Support: Sheila, Mike, 

Incoming Operator: Travis

Shift Summary: 
- 15:00 IFO locked. Intent Bit = Commissioning Mode. Wind is calm, no seismic activity. All appears normal. Sheila doing some testing while LLO is relocking.
    
- 15:05 (08:05) GRB Alert. Switch to Observing mode
- 18:31 (11:31) GRB Alert. In Observing mode 
- 23:00 (16:00) Relocking  
H1 ISC
keita.kawabe@LIGO.ORG - posted 15:54, Thursday 01 October 2015 - last comment - 09:24, Monday 05 October 2015(22154)
Current status of noise bumps that are supposedly from PSL periscope (PeterF, Keita)

Just in case you're wondering why LHO sees two noise bumps at 315 and 350 Hz (attached, middle blue) while LLO does not: we don't fully understand it either, but here is the summary.

There are three things here: the environmental noise level, the PZT servo, and the jitter coupling to DARM. Even though the first two explain part of the LLO-LHO difference, they cannot explain all of it, and the coupling at LHO seems to be larger.

Reducing the PSL chiller flow will help but that's not a solution for the future.

Reimplementing PZT servo at LHO will help and this should be done. Squashing it all will be hard, though, as we are talking about the jitter between 300 and 370Hz and there's a resonance at 620Hz.

Reducing coupling is one area that was not well explored. Past attempts at LHO were on top of dubious IMC WFS quadrant gain imbalances.


1. Environmental difference

These bumps are supposed to be from the beam jitter caused by PSL periscope resonances (not from the PZT mirror resonances). In the attached you can see that the bumps in H1 (middle blue) correspond to the bumps in PSL periscope accelerometer (top blue). (Don't worry, we figured out which server we need to use for DTT to give us correct results.)

Because of the PSL chiller flow difference between LLO and LHO (LHO alog, couldn't find LLO alog but we have MattH's words), in general LLO periscope noise level is lower than LHO. However, the difference in the accelerometer signal is not enough to explain the difference in IFO.

For example, at 350 Hz the LHO PSL periscope is only a factor of 2 noisier than LLO's. At 330 Hz, LHO is quieter than LLO by more than a factor of 2. Yet we have a huge hump in DARM at LHO; it becomes larger and smaller but never goes away, while LLO DARM is dead flat.

At LLO they do have a servo to suppress noise at about 300 Hz, but it shouldn't be doing much, if anything, at 350 Hz (see the next section).

So yes, it seems like environmental difference is one of the reasons why we have larger noise.

But the jitter to DARM coupling itself seems to be larger.

Turning down the chiller flow will help but that's not a solution for the future.


2. Servo difference

At LLO there's a servo to squash beam jitter in PIT at 300Hz. LHO used to have it but now it is disabled.

At LLO, the IOOWFS_A_I_PIT signal is used to suppress PIT jitter, targeting the 300 Hz peak which was right on some mechanical resonance/notch structure in PZT PIT (which LHO also has), and the servo reduced the noise between about 270 and 320 Hz (LLO alog 19310).

The same servo was successfully copied to LHO with some modification, also targeting the 300 Hz bump (except that YAW was more coherent than PIT, so we used the YAW signal), with somewhat less (but not much less) aggressive gain and bandwidth. At that time the 300 Hz bump was problematic together with the 250 Hz and 350 Hz bumps. Look at the plots from alogs 20059 and 20093.

Somehow the 250 Hz and 300 Hz bumps subsided, and now LHO is suffering from the 315 Hz and 350 Hz bumps (compare the attached with the above-mentioned alog). Since we never had time to tune the servo filter to target either of the new bumps, and since turning the servo on without modification would make only a marginal improvement at 300 Hz while making 250 Hz/350 Hz somewhat worse due to gain peaking, it was disabled.

Reimplementing the servo to target the 315 and 350 Hz bumps will help.  But it's not going to be easy to make this servo wide-band enough to squash everything because of the 620 Hz resonance, which is probably something in the PZT mirror itself (look at the above-mentioned alog 20059 for the open loop transfer function of the current servo, for example). In principle we can go even wider band, but we'll need more than a 2 kHz sampling rate for that. We could stiffen the mount if 620 Hz is indeed the mount.


3. Coupling difference

As I wrote in the environmental-difference section above, the accelerometer data and the IFO signal suggest that the coupling is larger at LHO.

There are many jitter coupling measurements at LHO but the best one to look at is this one. We should be able to make a direct comparison with LLO but I haven't looked.

Anyway, it is known that the coupling depends on IMC alignment and OMC alignment (and probably the IFO alignment).

At LHO, IMC WFS has offsets in PIT and YAW in an attempt to minimize the coupling. This is on top of dubious imbalances in the IMC WFS quadrant gains at LHO (see alog 20065; the minimum quadrant gain is a factor of 16 smaller than the maximum). We should fix that before spending much time on studying the jitter coupling via alignment.

At LLO, there's no such imbalance and there's no such offset.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 12:58, Saturday 03 October 2015 (22208)

The coupling of these peaks into DARM appears to pass through a null near the beginning of each full-power lock stretch, perhaps indicating that this coupling can be suppressed through TCS heating.

Already from the summary pages one can see that at the beginning of each lock, these peaks are present in DARM, then they go away for about 20 minutes, and then they come back for the duration of the lock.

I looked at the coherence (both magnitude and phase) between DARM and the IMC WFS error signals at three different times during a lock stretch beginning on 2015-09-29 06:00:00 Z. Blue shows the signals 10 minutes before the sign flip, orange shows the signals near the null, and purple shows the signals 20 minutes after the sign flip.

One can also see that the peaks in the immediate vicinity of 300 Hz decay monotonically from the beginning of the lock stretch onward; my guess is that these are generated by some interaction with the beamsplitter violin mode and have nothing to do with jitter.
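
For anyone repeating this, a hedged gwpy sketch of the coherence calculation (the channel names and the 10-minute window are illustrative assumptions, not the exact ones used for the attached plots):

from gwpy.timeseries import TimeSeries

start, end = 'Sep 29 2015 06:10:00', 'Sep 29 2015 06:20:00'
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
wfs = TimeSeries.get('H1:IMC-WFS_A_I_PIT_OUT_DQ', start, end)

# Bring both onto a common rate before computing the coherence spectrum
darm = darm.resample(wfs.sample_rate.value)
coh = darm.coherence(wfs, fftlength=8, overlap=4)
print(coh.max(), coh.frequencies[coh.argmax()])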

Images attached to this comment
keita.kawabe@LIGO.ORG - 09:24, Monday 05 October 2015 (22235)

Addendum:

alog 20051 shows the PZT to IMC WFS transfer function (without servo) for PIT and YAW, making it easier to see which resonance is on which DOF.

H1 General (INJ)
peter.shawhan@LIGO.ORG - posted 14:08, Thursday 01 October 2015 (22153)
Protocol for upcoming coherent hardware injection tests
Chris Biwer and other members of the hardware injections team will likely be doing coherent hardware injections in the near future, and these will hopefully be detected successfully by one or more of the low-latency data analysis pipelines.  Currently, we are still testing the EM follow-up infrastructure, so the "Approval Processor" software is configured to treat hardware injections like regular triggers.  Therefore, these significant GW "event candidates" should cause audible alarms to sound in each control room, similar to a GRB alarm.  The operator at each site will be asked to "sign off" by going to the GraceDB page for the trigger and answering the question, "At the time of the event, was the operating status of the detector basically okay, or not?"  You can also enter a comment.

For the purpose of these tests, if you are the operator on shift, please:
  * Do not disqualify the trigger based on it being a hardware injection -- we know it is!  So, please sign off with "OKAY" if the detector was otherwise operating OK.
  * Pay attention to whether the audible alarm sounded.  In the past we had issues at one site or the other, so this is one of the things we want to test.
  * Feel free to enter a comment on the GraceDB page when you sign off, like maybe "this was a hardware injection and the audible alarm sounded".
  * You may get a phone call from a "follow-up advocate" who is on shift to remotely help check the trigger.

Note: in the future, once the EM follow-up project is "live", a known hardware injection will not cause the control-room alarms to sound (unless it is a blind injection).  You should not write anything in the alog about alarms from GW event candidates, because that is potentially sensitive information and the alogs are publicly readable.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 13:01, Thursday 01 October 2015 (22152)
Ops Mid Day Shift Summary
IFO has been locked at NOMINAL_LOW_NOISE, 23.0W, 72Mpc for the past 5 hours. Wind and seismic activity are low. 4 ETM-Y saturation alarms. Received GRB alert at 18:31 UTC (11:31 PT) - LHO was in Observing mode during this event.
H1 ISC
daniel.sigg@LIGO.ORG - posted 11:06, Thursday 01 October 2015 (22150)
RF45 stabilization, two days after the swap

The attached plot shows the 2-day trend of the RF45 glitches. There were no glitches in the past day. The large glitches 24 hours ago were caused by us. This is not inconsistent with a cable or connection problem; no one should be surprised if the problem reappears.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:02, Thursday 01 October 2015 (22149)
Day Shift Transition Summary
Title:  10/01/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)

State of H1: At 15:00 (08:00) Locked at NOMINAL_LOW_NOISE, 23.0W, 72Mpc

Outgoing Operator: TJ

Quick Summary: Wind is calm, no seismic activity. All appears normal. Intent Bit at Commissioning while LLO was recovering from a lockloss.     
H1 ISC
sheila.dwyer@LIGO.ORG - posted 22:07, Wednesday 30 September 2015 - last comment - 16:55, Thursday 01 October 2015(22107)
bilinear coupling of End X motion to DARM follow up

I'm posting an early version of this alog so that people can see it, but plan to edit again with the results of the second test. 

Yesterday I took a few minutes to follow up on the measurements in alog 21869.  This time, in addition to driving TMS, I drove the ISI in the beam direction to reproduce the motion caused by the backreaction to the TMS motion. We also briefly had a chance to move the TMSX angle while exciting L. 

The main conclusions are:

Comparison of ISI drive to TMS drive for X and Y

The attached screenshot shows the main results of the first test (driving ISIs and TMSs).  In the top right plot you can see that I got the same amount of ISI motion for 3 cases (driving ETMX ISI, TMSX, ETMY ISI) and that driving TMSY with the same amplitude as TMSX resulted in a 50% smaller motion of the ISI.  Shaking the TMS in the L direction induces a larger motion, as measured by the GS13s, in the direction perpendicular to the beam than in the beam direction, which was not what I expected.  I chose the drive strength to get the same motion in the beam direction, so I have not reproduced the largest motion of the ISI with this test. If there is a chance, it would be interesting to repeat this measurement reproducing the backreaction in the direction perpendicular to the beam.  

The middle panels of the first screenshot show the motion measured by OSEMs. The TMS OSEMs see about a factor of 10 more motion when the TMS is driven than when the ISI is driven.  The signal is also visible in the quad top mass OSEMs, but not lower down the chain.  For the X end, the longitudinal motion seen by the top mass is about a factor of 2 higher when the TMS is excited than when the ISI is excited (middle left panel), which could be because I have not reproduced the full backreaction of the ISI to the TMS motion.  However, it is strange that for ETMY the top mass OSEM signal produced by driving TMS is almost 2 orders of magnitude larger than the motion produced by moving the ISI. It seems more likely that this is a problem of cross coupling between the OSEMs than real mechanical coupling. The ETMY top mass OSEMs are noisier than ETMX's, as Andy Lundgren pointed out (20675).  It would be interesting to see a transfer function between TMS and the quad top mass to see if this is real mechanical coupling or just cross talk. 

In the bottom left panel of the first screenshot, you can compare the TMS QPD B yaw signals.  The TMS drive produces larger QPD signals than the ISI drive, as you would expect for both end stations.  My first guess would be that driving the ISI in the beam direction could cause TMS pitch, but shouldn't cause as much yaw motion of the TMS.  However, we see the ETMX ISI drive in the yaw QPDs, but not pitch.  The Y ISI drive does not show up in the QPDs at all.  

Lastly, the first plot in the first screenshot shows that the level of noise in DARM produced by driving the ETMX ISI is nearly the same as what is produced by driving TMSX.  Since the TMS motion (seen by the TMS OSEMs) is about ten times higher when driving TMS, we can conclude that this coupling is not through TMS motion but through the motion of something else attached to the ISI. Driving the ETMY ISI produces nothing in DARM, but driving TMSY produces a narrow peak in DARM. 

For future reference:

I drove ETMX-ISI_ST2_ISO_X_EXC with an amplitude of 0.0283 cnts at 75 Hz from 20:07:47 to 20:10:00UTC sept 29th

I drove 2000 cnts in TMSX test L from 20:10:30 to 20:13:30UTC 

I drove ETMY-ISI_ST2_ISO_Y_EXC with an amplitude of 0.0612 cnts at 75 Hz from 20:13:40 to 20:16:30UTC 

I drove 2000 cnts in TMSY test L from 20:17:10 to 20:20:10UTC
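
If someone wants to reproduce the DARM comparison for these intervals, here is a hedged gwpy sketch (the DARM channel name and the choice of quiet reference window are my assumptions):

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

# Quiet stretch just before the TMSX drive vs. the drive window listed above
quiet = TimeSeries.get('H1:GDS-CALIB_STRAIN',
                       to_gps('Sep 29 2015 20:04:00'), to_gps('Sep 29 2015 20:07:00'))
drive = TimeSeries.get('H1:GDS-CALIB_STRAIN',
                       to_gps('Sep 29 2015 20:10:30'), to_gps('Sep 29 2015 20:13:30'))

asd_quiet = quiet.asd(fftlength=10, overlap=5)
asd_drive = drive.asd(fftlength=10, overlap=5)
ratio = asd_drive / asd_quiet   # excess near 75 Hz marks what the TMSX drive adds to DARM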

Driving TMSX L while rastering TMS position and angle

I put a 2000 cnt drive on TMSX L from about 2:09 UTC September 30th to 2:37, when I broke the lock. We found a ghost beam that hits QPD B when TMS is misaligned by 100 urad in the positive pitch direction.  There is about 0.5% as much power in this beam as in the main beam (not accounting for the dark offset).  I got another chance to do this this afternoon and was able to move the beam completely off of the QPDs, which did not make the noise coupling go away or reduce it much.  We can conclude, then, that scatter off of the QPDs is not the main problem.  There were changes in the shape of the peak in DARM as TMS moved, and changes in the noise at 78 Hz (which is normally non-stationary). Plots will be added tomorrow.

Speculation

There is a feature in the ETMX top mass OSEMs (especially P and T) around 78 Hz that is vaguely in the right place to be related to the excess noise in the QPDs and DARM. Also, Jeff showed us some B&K measurements from Arnaud (7762) that might hint at a quad cage resonance at around 78 Hz, although the measured Q looks a little low to explain the spectrum of the TMSX QPDs or the feature in DARM. One could speculate that the motion driving the noise at 78 Hz is the quad cage resonance, but this is not very solid.  Robert and Anamaria have data from their PEM injections that might be able to shed some light on this.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 16:37, Thursday 01 October 2015 (22157)

The units in the attached plots are wrong; the GS13s are calibrated into nm, not meters.

sheila.dwyer@LIGO.ORG - 16:55, Thursday 01 October 2015 (22159)

This morning I got the chance to do some white noise excitations on the ETMX ISI, in the X and Y directions.  The attached screenshot shows the result, which is that for ISI motion a factor of 10-100 above the normal level, over a wide range of frequencies, no noise shows up in DARM.  So the normal level of ISI motion in the X and Y directions is not driving the noise in DARM at 78 Hz.  We could do the same test for the other ISI DOFs to eliminate them as well.

Images attached to this comment
H1 AOS
robert.schofield@LIGO.ORG - posted 20:31, Tuesday 29 September 2015 - last comment - 16:42, Thursday 01 October 2015(22094)
Danger using DTT with NDS2 data on a channel whose sampling rate has changed

When DTT gets data from NDS2, it apparently gets the wrong sample rate if the sample rate has changed. The plot shows the result. Notice that the 60 Hz magnetic peak appears at 30 Hz in the NDS2 data displayed with DTT. This is because the sample rate was changed from 4k to 8k last February.  Keita pointed out discrepancies between his periscope data and Peter F's. The plot shows that the periscope signal, whose rate was also changed, has the same problem, which may explain the discrepancy if one person was looking at NDS and the other at NDS2. The plot shows data from the CIT NDS2. Anamaria tried this comparison for the LLO data and the LLO NDS2 and found the same type of problem. But the LHO NDS2 just crashes with a Test timed-out message.

Robert, Anamaria, Dave, Jonathan

Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 17:24, Wednesday 30 September 2015 (22128)

It can be a factor of 8 (or 2 or 4 or 16) using DTT with NDS2 (Robert, Keita)

In the attached, the top panel shows the LLO PEM channel pulled off of the CIT NDS2 server, and at the bottom is the same channel from the LLO NDS2 server, both from the exact same time. The LLO server result happens to be correct, but the frequency axis of the CIT result is a factor of 8 too small while its Y axis is a factor of sqrt(8) too large.

Jonathan explained this to me:

keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel                  Rate  chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          2048      raw    real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-la.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel                  Rate  chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384   online    real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          2048      raw    real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4

As you can see, both at CIT and LLO the raw channel sampling rate was changed from 2048 Hz to 16384 Hz, and raw is the only channel type available at CIT. However, at LLO there's also an "online" channel type available at 16k, which is listed before "raw".

Jonathan told me that DTT probably takes the sampling rate from the first entry in the channel list, regardless of the epoch during which each sampling rate was actually used. In this case DTT takes 2048 Hz from CIT but 16384 Hz from LLO, while obtaining the 16 kHz data from both. If that's true, there is a frequency scaling of 1/8 as well as an amplitude scaling of sqrt(8) for the CIT result.
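
A toy numpy/scipy check of that scaling (just the arithmetic, not the actual DTT internals):

import numpy as np
from scipy.signal import welch

fs_true = 16384.0    # rate of the data NDS2 actually returns
fs_wrong = 2048.0    # rate DTT assumes from the first channel-list entry

t = np.arange(0, 16, 1 / fs_true)
x = np.sin(2 * np.pi * 60 * t)              # a 60 Hz line, like the mains peak

f_ok, p_ok = welch(x, fs=fs_true, nperseg=65536)
f_bad, p_bad = welch(x, fs=fs_wrong, nperseg=65536)   # same samples, wrong rate

print(f_ok[np.argmax(p_ok)])     # 60.0 -> correct
print(f_bad[np.argmax(p_bad)])   # 7.5  -> 60/8, squeezed frequency axis
print(np.sqrt(p_bad.max() / p_ok.max()))   # ~2.83 = sqrt(8) excess in the ASD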

FYI, for the corresponding H1 channel in CIT and LHO NDS2 server, you'll get this:

keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel                  Rate  chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          8192      raw    real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-wa.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel                  Rate  chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384   online    real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          8192      raw    real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4

In this case, the data from LHO happens to be good, but the CIT frequency axis is a factor of 2 too small and the magnitude a factor of sqrt(2) too large.
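
The same check can be done from Python, assuming the nds2 client bindings are installed:

import nds2

# List every version of the channel the server knows about, so a rate change
# (raw 8192 Hz vs 16384 Hz here) is visible before trusting a DTT plot.
conn = nds2.connection('nds.ligo.caltech.edu', 31200)
for ch in conn.find_channels('H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ'):
    print(ch.name, ch.sample_rate)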

Images attached to this comment
jonathan.hanks@LIGO.ORG - 17:40, Wednesday 30 September 2015 (22131)

Part of this is that DTT does not handle the case of a channel changing sample rate over time.

DTT retrieves a channel list from NDS2 that includes all the channels with their sample rates; it takes the first entry for each channel name and ignores any following entries in the list with different sample rates.  It uses the first sample rate it receives as the sample rate for the channel at all possible times.  So when it retrieves data it may be 8k data, but it treats it as 4k data and interprets the data incorrectly.

I worked up a band-aid that inserts a layer between DTT and NDS2 and essentially makes it ignore specified channel/sample rate combinations.  This has let Robert do some work.  We are not sure how this scales and are investigating a fix to DTT.

jonathan.hanks@LIGO.ORG - 16:42, Thursday 01 October 2015 (22158)

As followup we have gone through two approaches to fix this:

  1. We created a proxy between DTT & NDS2 for Robert that strips out the versions of the channels we are not interested in. This was done yesterday and has allowed Robert to work, but it is not a scalable solution.
  2. Jim and I investigated what DTT was doing and have a test build of DTT that presents a list with multiple sample rates per channel. The test build is installed at LHO. There are rough edges, but we have filed an ECR to see about rolling out a solution in this vein in production (which would include LLO).
H1 CDS
david.barker@LIGO.ORG - posted 09:10, Sunday 27 September 2015 - last comment - 12:53, Thursday 01 October 2015(21989)
restarted ext_alert.py, need to get this to autostart

The ext_alert.py script, which periodically queries GraceDB, had failed. I have just restarted it; instructions for restarting are in https://lhocds.ligo-wa.caltech.edu/wiki/ExternalAlertNotification

Getting this process to autostart is now on our high priority list (FRS3415).

Here is the error message displayed before I did the restart:

  File "ext_alert.py", line 150, in query_gracedb
    return query_gracedb(start, end, connection=connection, test=test)
  File "ext_alert.py", line 150, in query_gracedb
    return query_gracedb(start, end, connection=connection, test=test)
  File "ext_alert.py", line 135, in query_gracedb
    external = log_query(connection, 'External %d .. %d' % (start, end))
  File "ext_alert.py", line 163, in log_query
    return list(connection.events(query))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 441, in events
    uri = self.links['events']
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 284, in links
    return self.service_info.get('links')
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 279, in service_info
    self._service_info = self.request("GET", self.service_url).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 325, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 201, in request
    response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1038, in getresponse
    response.begin()
  File "/usr/lib/python2.7/httplib.py", line 415, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 476, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 241, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 160, in read
    return self._sslobj.read(len)
ssl.SSLError: The read operation timed out

Comments related to this report
duncan.macleod@LIGO.ORG - 12:53, Thursday 01 October 2015 (22151)

I have patched the ext_alert.py script to catch SSLError exceptions and retry the query [r11793]. The script will retry up to 5 times before crashing completely, which is something we may want to rethink if it proves insufficient.
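
For reference, a minimal sketch of the retry behaviour described above (this is not the actual r11793 patch; the 10 s back-off between attempts is an assumption):

import ssl
import time

MAX_RETRIES = 5

def query_with_retry(query_fn, *args, **kwargs):
    """Retry a GraceDB query if the SSL read times out, giving up after 5 tries."""
    for attempt in range(MAX_RETRIES):
        try:
            return query_fn(*args, **kwargs)
        except ssl.SSLError:
            if attempt == MAX_RETRIES - 1:
                raise            # crash after the final attempt, as the patched script does
            time.sleep(10)       # assumed pause before retrying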

I have asked both sites to svn up and restart the ext_alert.py process at the next convenient opportunity (i.e. the next time it crashes).
