Reports until 08:29, Wednesday 30 September 2015
H1 General
jeffrey.bartlett@LIGO.ORG - posted 08:29, Wednesday 30 September 2015 (22105)
Ops Day Shift Transition Summary
Title:  09/30/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)

State of H1: At 15:00 (08:00) Locked at NOMINAL_LOW_NOISE, 22.4W, 71Mpc

Outgoing Operator: TJ

Quick Summary: Wind is calm; no seismic activity. All appears normal.     

LHO General
thomas.shaffer@LIGO.ORG - posted 08:00, Wednesday 30 September 2015 (22103)
Ops Owl Shift Summary
LHO General
thomas.shaffer@LIGO.ORG - posted 05:34, Wednesday 30 September 2015 (22101)
Ops Owl Mid Shift Report

Had one lockloss at 7:43 UTC, but brought it back up and into observing at 8:37. I'm still not sure what caused the lockloss.

Aside from that it is a quiet environment and everything seems to be humming along.

H1 CDS
thomas.shaffer@LIGO.ORG - posted 05:31, Wednesday 30 September 2015 (22100)
GraceDB Query Failure and Restart

There was a GraceDB query failure with the last query at 11:48 UTC. I followed the instructions on this wiki and it started up just fine.

LHO General
thomas.shaffer@LIGO.ORG - posted 01:38, Wednesday 30 September 2015 (22099)
Observing

Back to Observing

LHO General
thomas.shaffer@LIGO.ORG - posted 00:53, Wednesday 30 September 2015 - last comment - 05:43, Wednesday 30 September 2015(22098)
Lockloss

Lockloss at 07:43 UTC.

No idea what may have caused it yet. There was an ITMX saturation, control loops looked normal, no seismic activity, all the monitors showed normal operation.

Comments related to this report
thomas.shaffer@LIGO.ORG - 05:43, Wednesday 30 September 2015 (22102)

Here's some plots.

Images attached to this comment
H1 AOS
travis.sadecki@LIGO.ORG - posted 00:02, Wednesday 30 September 2015 (22097)
OPS Eve shift summary

Title: 9/29 Eve Shift 23:00-7:00 UTC (16:00-24:00 PST).  All times in UTC.

State of H1: Observing

Shift Summary: One lockloss due to an ITMy saturation.  One lockloss due to measurements being made while LLO was down.  This resulted in a net of 16 minutes of lost coincident observing time.  Observing for all but ~1 hour of my shift.  RF45 has been stable the entire shift.  Wind and seismic quiet.

Incoming operator: TJ

Activity log:

23:25 Lockloss, ITMy saturation

23:26 Kyle and Gerardo back from EY

2:04 Out of observing while LLO is down so Sheila can make measurements

2:32 Lockloss due to measurements

3:00 Observing Mode

H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:35, Tuesday 29 September 2015 (22095)
DHARD yaw boost is now on in nominal state

Today during the maintenance day we added the DHARD yaw boost to the guardian, after comparisons of glitch rates with it on and off (22043, 21820).

Now we can wait for some earthquakes to see what difference this makes. 

H1 AOS
robert.schofield@LIGO.ORG - posted 20:31, Tuesday 29 September 2015 - last comment - 16:42, Thursday 01 October 2015(22094)
Danger using DTT with NDS2 data on a channel whose sampling rate has changed

When DTT gets data from NDS2, it apparently gets the wrong sample rate if the sample rate has changed. The plot shows the result: notice that the 60 Hz magnetic peak appears at 30 Hz in the NDS2 data displayed with DTT. This is because the sample rate was changed from 4 kHz to 8 kHz last February.  Keita pointed out discrepancies between his periscope data and Peter F's. The plot shows that the periscope signal, whose rate was also changed, has the same problem, which may explain the discrepancy if one person was looking at NDS and the other at NDS2. The plot shows data from the CIT NDS2 server. Anamaria tried this comparison for the LLO data and the LLO NDS2 server and found the same type of problem. The LHO NDS2 server, however, just crashes with a "Test timed-out" message.

Robert, Anamaria, Dave, Jonathan

Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 17:24, Wednesday 30 September 2015 (22128)

It can be a factor of 8 (or 2 or 4 or 16) using DTT with NDS2 (Robert, Keita)

In the attached, the top panel shows the LLO PEM channel pulled off the CIT NDS2 server, and the bottom shows the same channel from the LLO NDS2 server, both for the exact same time. The LLO server result happens to be correct, but the frequency axis of the CIT result is a factor of 8 too small, while the Y axis of the CIT result is a factor of sqrt(8) too large.

Jonathan explained this to me:

keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel                  Rate  chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          2048      raw    real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-la.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel                  Rate  chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384   online    real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          2048      raw    real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4

As you can see, both at CIT and LLO the raw channel sampling rate was changed from 2048 Hz to 16384 Hz, and raw is the only channel type available at CIT. At LLO, however, there is also an "online" channel type available at 16k, which is listed prior to "raw".

Jonathan told me that DTT probably takes the sampling rate from the first entry in the channel list, regardless of the epoch in which each sampling rate was actually used. In this case DTT takes 2048 Hz from CIT but 16384 Hz from LLO, while obtaining the 16 kHz data in both cases. If that's true, the CIT result carries a frequency scaling of 1/8 as well as an amplitude scaling of sqrt(8).

FYI, for the corresponding H1 channel in CIT and LHO NDS2 server, you'll get this:

keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel                  Rate  chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          8192      raw    real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-wa.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel                  Rate  chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384   online    real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ          8192      raw    real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ         16384      raw    real_4

In this case, the data from LHO happens to be good, but CIT frequency is a factor of 2 too small and magnitude a factor of sqrt(2) too large.
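The mis-scaling described above can be reproduced offline. The sketch below (my own illustration, not code from DTT) feeds a 60 Hz sine actually sampled at 16384 Hz to a Welch PSD estimator twice: once with the true rate, and once with the stale 2048 Hz rate a first-entry lookup would return. The peak lands a factor of 8 too low in frequency and the ASD comes out a factor of sqrt(8) too high, matching the plots:

```python
import numpy as np
from scipy.signal import welch

fs_true = 16384.0   # actual sample rate of the data (Hz)
fs_wrong = 2048.0   # stale rate taken from the first channel-list entry
f_line = 60.0       # the 60 Hz line used as a marker

# 8 seconds of a pure 60 Hz sine at the true rate
t = np.arange(int(8 * fs_true)) / fs_true
x = np.sin(2 * np.pi * f_line * t)

# Same samples, two assumed sample rates
f_ok, p_ok = welch(x, fs=fs_true, nperseg=4096)
f_bad, p_bad = welch(x, fs=fs_wrong, nperseg=4096)

peak_ok = f_ok[np.argmax(p_ok)]    # 60.0 Hz with the correct rate
peak_bad = f_bad[np.argmax(p_bad)] # 7.5 Hz: a factor of 8 too low
asd_excess = np.sqrt(p_bad.max() / p_ok.max())  # sqrt(8): ASD too high
print(peak_ok, peak_bad, asd_excess)
```

The amplitude excess falls out of the PSD normalization: the assumed sample rate enters the denominator of the density estimate, so underestimating it by 8 inflates the PSD by 8 and the ASD by sqrt(8).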

Images attached to this comment
jonathan.hanks@LIGO.ORG - 17:40, Wednesday 30 September 2015 (22131)

Part of this is that DTT does not handle the case of a channel changing sample rate over time.

DTT retrieves a channel list from NDS2 that includes all the channels with their sample rates; it takes the first entry for each channel name and ignores any following entries in the list with different sample rates.  It uses the first sample rate it receives as the sample rate for the channel at all possible times.  So when it retrieves data it may be 8k data, but DTT treats it as 4k data and interprets it incorrectly.

I worked up a band-aid that inserts a layer between DTT and NDS2 and essentially makes it ignore specified channel/sample rate combinations.  This has let Robert do some work.  We are not sure how this scales and are investigating a fix to DTT.
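As an illustration of the failure mode described above (the helper functions here are hypothetical sketches, not DTT or NDS2 internals): if the client keys only on the channel name, the stale first entry wins, whereas deriving the rate from the data actually returned gives the right answer:

```python
# Channel list as the NDS2 server might report it: two entries, same name,
# different sample rates from different epochs (names from the report above).
entries = [
    ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 2048),   # old epoch
    ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 16384),  # current epoch
]

def dtt_like_rate(name, entries):
    """First match wins -- the buggy behaviour: epoch is ignored."""
    for n, rate in entries:
        if n == name:
            return rate

def rate_from_data(samples, duration_s):
    """Robust alternative: infer the rate from the data actually returned."""
    return len(samples) / duration_s

assumed = dtt_like_rate("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", entries)
# A 1 s stretch fetched from the server actually carries 16384 samples:
actual = rate_from_data(range(16384), 1.0)
print(assumed, actual, actual / assumed)  # 2048, 16384.0, off by a factor of 8
```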

jonathan.hanks@LIGO.ORG - 16:42, Thursday 01 October 2015 (22158)

As followup we have gone through two approaches to fix this:

  1. We created a proxy, placed between DTT & NDS2 for Robert, that strips out the versions of the channels we are not interested in. This was done yesterday and has allowed Robert to work, but it is not a scalable solution.
  2. Jim and I investigated what DTT was doing and have a test build of DTT that allows it to present a list with multiple sample rates per channel. We have a test build of this at LHO. There are rough edges, but we have filed an ECR to see about rolling out a solution in this vein in production (which would include LLO).
H1 General
travis.sadecki@LIGO.ORG - posted 20:04, Tuesday 29 September 2015 (22093)
Observing Mode

Back to Observing Mode @ 3:03 UTC.

H1 General
travis.sadecki@LIGO.ORG - posted 19:04, Tuesday 29 September 2015 - last comment - 19:36, Tuesday 29 September 2015(22091)
Out of Observing Mode for measurements

Since LLO went out of lock, Sheila asked if she could complete some measurements that she didn't finish during maintenance.  I gave her the OK and went to commissioning mode since we aren't losing any coincident data time.

Comments related to this report
sheila.dwyer@LIGO.ORG - 19:36, Tuesday 29 September 2015 (22092)

I caused a lockloss by moving TMSX too quickly while doing this test.  

I also spent some time earlier in the day (during maintenance recovery) doing some excitations on TMS and the end station ISIs to investigate the noise that seems to come from TMSX.  An aLOG with results will be coming soon.

H1 CAL
madeline.wade@LIGO.ORG - posted 18:28, Tuesday 29 September 2015 (22090)
Bug fix to GDS calibration correction filters

I updated the GDS calibration correction filters today to reflect the bug fixes to the actuation and sensing time delays (see aLOG #22056).  Attached are plots of the residual and control correction filters, which include the updated time delays.  I have also attached plots that compare the h(t) spectra from the CALCS and GDS calibration pipelines and the spectrum residuals.  There is now a larger discrepancy between CALCS and GDS because the time delays that were added to CALCS to bring the two closer together are no longer as accurate.  Updates to the delays in CALCS may be coming as the differences are investigated further.

The new GDS calibration correction filters were generated using

create_partial_td_filters_O1.m

which is checked into the calibration SVN (r1560) under

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/Common/MatlabTools
 
The new filters file is also checked into the calibration SVN under

aligocalibration/trunk/Runs/O1/GDSFilters

The filters file is called H1GDS_1127593528.npz.

Images attached to this report
Non-image files attached to this report
H1 General
travis.sadecki@LIGO.ORG - posted 16:50, Tuesday 29 September 2015 (22089)
Observing Mode

Back to Observing @ 23:50 UTC.

H1 General
travis.sadecki@LIGO.ORG - posted 16:38, Tuesday 29 September 2015 (22088)
Lockloss

Lockloss @ 23:25 UTC.  ITMy saturation.

H1 DetChar
jordan.palamos@LIGO.ORG - posted 16:31, Tuesday 29 September 2015 (22086)
Decreasing RF45 modulation index doesn't seem to change background trigger rate

Laura, Jordan

People have been decreasing the RF45 modulation index as a fix for extreme glitchiness associated with RF45 AM noise (described here among other places). I did a quick check to see if this has any noticeable effect on the rate of background triggers.  I made plots of omicron glitchgrams and trigger rates for times when the modulation index was decreased, and for nearby times with the modulation index at its nominal level and no other obvious issues (many plots).  Attached are rate plots from some recent times.

For reference, the nominal plot is from lock # 31 and the decreased plot is from # 39 according to https://ldas-jobs.ligo.caltech.edu/~detchar/summary/O1

Images attached to this report
H1 AOS
sheila.dwyer@LIGO.ORG - posted 07:34, Tuesday 29 September 2015 - last comment - 22:41, Tuesday 29 September 2015(22050)
running A2L script

Since LLO had already gone down (we think for maintenance), TJ let me start some maintenance work that needs the full IFO locked.  At about 14:32 UTC Sept 29th we went to commissioning to start running the A2L script as described in WP #5517.

Comments related to this report
sheila.dwyer@LIGO.ORG - 08:24, Tuesday 29 September 2015 (22053)

The script finished right before an EQ knocked us out of lock.  Attached are the results; we can decide if we are keeping these decouplings during the maintenance window.

The three changes made by the script which I would like to keep are ETMX pit, ETMY yaw, and ITMY pit.  These three gains are accepted in SDF.  Since we aren't going to do the other work described in the WP, this is now finished. 

All the results from the script are:

ETMX pit changed from 1.263 to 1.069 (1st attachment, keep)

ETMX yaw reverted (script changed it from 0.749 to 1.1723 based on the fit shown in the 2nd attachment)

ETMY pit reverted (script changed it from 0.26 to 0.14 based on the 3rd attachment)

ETMY yaw changed from -0.42 to -0.509, based on the fit shown in the 4th attachment

ITMX: no changes were made by the script (5th + 6th attachments)

ITMY pit changed from 1.37 to 1.13 (based on the 7th attachment, keep)

ITMY yaw reverted (script changed it from -2.174 to -1.7, based on the 8th attachment, which does not seem like a good fit)

Images attached to this comment
sheila.dwyer@LIGO.ORG - 16:20, Tuesday 29 September 2015 (22083)

By the way, the script that I ran to find the decoupling gains is in userapps/isc/common/decoup/run_a2l_vII.sh.  Perhaps next time we use this we should try a higher drive amplitude, to try to get better fits.

I ran Hang's script that uses the A2L gains to determine a spot position (alog 19904), here are the values after running the script today. 

       vertical (mm)   horizontal (mm)
ITMX   -9               4.7
ITMY   -5.1            -7.7
ETMX   -4.9             5.3
ETMY   -1.2            -2.3

I also re-ran this script for the old gains:

       vertical (mm)   horizontal (mm)
ITMX   -9               4.7
ITMY   -6.2            -7.7
ETMX   -5.8             5.3
ETMY   -1.2            -1.9

So the changes amount to +0.4 mm in the horizontal direction on ETMY, -0.9 mm in the vertical direction on ETMX, and -1.1mm in the vertical direction on ITMY.  

hang.yu@LIGO.ORG - 22:41, Tuesday 29 September 2015 (22096)

Please be aware that in my code estimating the beam's position, I neglected the L2 angle -> L3 length coupling, which would induce an error of l_ex / theta_L3, where l_ex is the length induced by the L2a -> L3l coupling when we dither L2, and theta_L3 is the angle L3 tilts through via the L2a -> L3a coupling.

Sorry about that...

H1 CDS
keita.kawabe@LIGO.ORG - posted 13:55, Wednesday 16 September 2015 - last comment - 08:45, Wednesday 30 September 2015(21585)
Binary inspiral range copy in EPICS is about 109 seconds delayed from the DMT

When you compare "H1 SNSW EFFECTIVE RANGE (MPC) (TSeries)" data in DMT SenseMonitor_CAL_H1 with its copy in EPICS (H1:CDS-SENSEMON_CAL_SNSW_EFFECTIVE_RANGE_MPC), you will find that the EPICS data is "delayed" from the DMT data by about 109 seconds (109.375 sec in this example, I don't know if it varies with time significantly).

In the attached, vertical lines are minute markers where GPS second is divisible by 60. Bottom is the DMT trend, top is its EPICS copy. In the second attachment you see that this results in the minute trend of this EPICS range data becoming a mixture of DMT trend from 1 minute and 2 minutes ago.

This is harmless most of the time, but if you want to see whether e.g. a particular glitch caused the inspiral range to drop, you need to do either mental math or real math.

(Out of this 109 seconds, 60 should come from the fact that DMT takes 60 seconds of data to calculate one data point and puts the start time of this 1 min window as the time stamp. Note that this start time is always at a minute boundary where the GPS second is divisible by 60. The remaining 49 seconds should be the sum of various latencies on the DMT end as well as in the copying mechanism.)

Images attached to this report
Comments related to this report
jonathan.hanks@LIGO.ORG - 08:45, Wednesday 30 September 2015 (22106)

The 109 s delay is a little higher than expected, but not too strange.  I'm not sure where DMT marks the time, i.e. whether at the start, middle, or end of the minute it outputs.

Start Time   Max End Time   Stage
0            60             Data being calculated in the DMT.
60           90             The DMT to EPICS IOC queries the DMT every 30 s.
90           91             The EDCU should sample it at 16 Hz and send it to the frame writer.

The 30s sample rate of the DMT to EPICS IOC is configurable, but was chosen as a good sample rate for a data source that produces data every 60 seconds.

It should also be noted that at least at LHO we do not make an effort to coordinate the sampling time (as far as which seconds in the minute) that happen with the DMT.  So the actual delay time may change if the IOC gets restarted.

EDITED TO ADD:

Also, for this channel we record the GPS time that DMT asserts is associated with each sample.  That way you should be able to get the offset.

The value is available in H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS
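With that companion channel, the offset is a simple difference between the readout time and the timestamp DMT asserts. A minimal sketch (the fetching of the two values is schematic; only the arithmetic and the 109.375 s example figure come from this report):

```python
def epics_delay(sample_gps, asserted_dmt_gps):
    """Delay of the EPICS range copy relative to the DMT timestamp.

    sample_gps:       GPS time at which the EPICS value was read out
    asserted_dmt_gps: value of the ..._RANGE_MPC_GPS companion channel
                      at that same moment (DMT stamps each point with
                      the *start* of its 60 s window)
    """
    return sample_gps - asserted_dmt_gps

# Example with the numbers from the report: a point stamped at the start
# of its minute, read out 109.375 s later.
print(epics_delay(1127000109.375, 1127000000.0))  # 109.375
```

Of this, 60 s is the DMT averaging window itself; the rest is IOC polling plus pipeline latency, per the table above.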
