Reports until 12:00, Thursday 05 November 2015
H1 General
edmond.merilh@LIGO.ORG - posted 12:00, Thursday 05 November 2015 (23140)
Mid-Shift Summary - DAY

MID-SHIFT SUMMARY:  Nothing big to report except for some of the usual ETMY glitches. The wind picked up for a bit, to around 20 mph, but it's starting to die down. Also, one GraceDB external notification script failure and recovery (that I've noticed) as I'm writing this.

H1 CAL (CAL)
craig.cahillane@LIGO.ORG - posted 10:22, Thursday 05 November 2015 - last comment - 13:47, Thursday 05 November 2015(23137)
LHO O1 Calibration Uncertainty
I have posted the latest LHO calibration uncertainty plots.

I have reduced our kappas' uncertainty to 1% in mag and 0.5 degrees in phase.  We are now limited by our measurements.

I believe that our "statistical uncertainty only" plots are underestimating error.

This is because the systematic error that remains in our measurements is ignored when we consider only the uncertainty bars.
One way to combat this underestimation is to fit our systematic model not to the final weighted mean of all our measurements, but to each measurement individually, and then take the standard deviation of the remaining systematic errors as our total systematic error.  This ought to fix the fact that I was ignoring remaining systematic errors in our "statistical uncertainty only" plots.
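
A minimal numpy sketch of that bookkeeping, using entirely synthetic data and a trivial stand-in for the systematic fit (this is not the real calibration code):

    import numpy as np

    rng = np.random.default_rng(0)
    freqs = np.logspace(1, 3, 50)   # Hz, illustrative measurement band
    n_meas = 5

    # Synthetic response-function residuals: a common systematic shape
    # plus 1%-level statistical noise, standing in for real sweeps.
    shape = np.log10(freqs / 100.0)
    residuals = 0.02 * shape + 0.01 * rng.standard_normal((n_meas, freqs.size))

    def fit_systematic(residual):
        """Stand-in 'fit': project one measurement's residual onto the
        assumed systematic shape."""
        coeff = residual @ shape / (shape @ shape)
        return coeff * shape

    # Old approach: one fit to the mean of all measurements, which hides
    # the measurement-to-measurement spread of the systematic.
    sys_from_mean = fit_systematic(residuals.mean(axis=0))

    # Proposed approach: fit each measurement separately and take the std
    # of the per-measurement systematics as the total systematic error.
    per_meas = np.array([fit_systematic(r) for r in residuals])
    total_systematic_error = per_meas.std(axis=0)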

Plots 1-4 show nominal O1 model mag and phase uncertainty (the statistical uncertainty and systematic error summed in quadrature).  
Plots 5-8 show the systematic corrections model mag and phase uncertainty (statistical uncertainty only).  I believe these are currently underestimated.
Plot 9 is the comparison of the nominal O1 calibration response function to the systematic corrections model I make.  (The red line is the systematic correction model, the dashed lines are the associated uncertainty)

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:47, Thursday 05 November 2015 (23142)
Several follow up comments:
- On the description of plots 1-4, I would say it differently. Plots 1-4,
    (1) 05-Nov-2015_Strain_Uncertainty_Magnitude.pdf
    (2) 05-Nov-2015_Strain_Uncertainty_Phase.pdf
    (3) 05-Nov-2015_Strain_Uncertainty_Squared_Magnitude_Components.pdf
    (4) 05-Nov-2015_Strain_Uncertainty_Squared_Phase_Components.pdf
are not the "nominal" uncertainty. These are the uncertainty if we incorrectly add systematic errors with statistical uncertainty in quadrature (i.e. implying we don't know the sign of the systematic errors and/or that they don't affect the mean of the Gaussian distribution, when in fact we do know the sign and they do shift the mean). The reason we show these is to show how the uncertainty has traditionally been quoted, given the level of sophistication search algorithms had been able to handle. It's also much easier to make a "single number" statement from this curve, which is what most people want so they can discuss the uncertainty colloquially.
 
Now that parameter estimation groups are interested in greater detail (i.e. have asked questions like "what do you *really* mean by '10%'??"), and we have solid handles on some of our systematic errors, we can offer an alternative display of the statistical uncertainty ONLY, namely plots 5-8,
    (5) 05-Nov-2015_Strain_Uncertainty_Magnitude_Systematics.pdf
    (6) 05-Nov-2015_Strain_Uncertainty_Phase_Systematics.pdf
    (7) 05-Nov-2015_Strain_Uncertainty_Squared_Magnitude_Components_Systematics.pdf
    (8) 05-Nov-2015_Strain_Uncertainty_Squared_Phase_Components_Systematics.pdf
and then display the resulting effect on the mean in plot 9,
    (9) 05-Nov-2015_Strain_Uncertainty_Systematics_vs_Nominal_Residuals.pdf
Black solid shows a zero mean error on the response function, with statistical uncertainty and systematic error incorrectly added in quadrature, shown in black dashed. Red solid shows the mean, corrected by the systematic errors, with the correct statistical uncertainty surrounding it.
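
As a toy illustration of the two displays, with made-up numbers for a single frequency bin (not taken from the actual budget):

    import numpy as np

    stat_unc = 0.010   # 1-sigma statistical uncertainty (fractional), made up
    sys_err = 0.030    # known, signed systematic error (fractional), made up

    # Traditional display (plots 1-4): pretend the systematic's sign is
    # unknown and add it in quadrature around a zero-mean error.
    quadrature_band = np.hypot(stat_unc, sys_err)   # ~0.032 about zero

    # Alternative display (plots 5-9): shift the mean by the known
    # systematic and quote only the statistical uncertainty around it.
    corrected_mean = sys_err          # mean error is now explicit
    stat_only_band = stat_unc         # +/-1% about the corrected mean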


- The f > 1 [kHz] statistical uncertainty is more ill-defined than reported. We simply don't have the measurements up there to confirm such small precision, so Craig, for the time being, has merely cut off the uncertainty at the last frequency sweep's data point (~900 [Hz]) and used that as the default value out to the limit of the frequency range. As such, the uncertainty appears to be limited by the 1% / 0.5 [deg] statistical uncertainty of the calibration lines, translated up from lower frequency (~330 [Hz]) because that's where we set the scale of the overall optical gain in the sensing function. While we don't expect the uncertainty to be much larger at high frequency, we simply don't have any quantitative upper bound, nor do we have any idea what kind of systematics dragons there be. As such, I suggest we continue to take the f > 1 [kHz] uncertainty with a grain of salt.
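
A sketch of that hold-last-value stopgap (mechanics assumed, numbers illustrative):

    import numpy as np

    # Measured 1-sigma uncertainties up to the last swept-sine point
    # (~900 Hz); frequencies and values here are illustrative only.
    meas_freqs = np.array([10.0, 30.0, 100.0, 330.0, 900.0])   # Hz
    meas_unc = np.array([0.040, 0.020, 0.015, 0.010, 0.012])   # fractional

    # np.interp holds the edge values constant outside the measured range,
    # so the ~900 Hz uncertainty is carried out to the end of the band.
    out_freqs = np.logspace(1, np.log10(5000.0), 200)
    out_unc = np.interp(out_freqs, meas_freqs, meas_unc)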

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:38, Thursday 05 November 2015 (23135)
CDS model and DAQ restart report, Wednesday 4th November 2015

O1 day 48

model restarts logged for Wed 04/Nov/2015
2015_11_04 11:16 h1nds1

Unexpected restart of nds1, maybe due to overloading and/or testpoint activity.

H1 General
jim.warner@LIGO.ORG - posted 08:07, Thursday 05 November 2015 (23134)
Shift Summary
TITLE: 11/04 Owl: 8:00-16:00 UTC 
STATE Of H1: Observing @ ~80 Mpc.
SHIFT SUMMARY: It ended well... 
SUPPORT:
ACTIVITY LOG:

H1 was locked when I came in. Lost lock about 11:00 UTC. Flashes weren't great during DRMI, so after 15 minutes of waiting for DRMI, then another 15 minutes of PRMI, all with low or no flashing, I did an initial alignment. After that DRMI was still rough. 

Back to observing at 14:00.

15:00 LLO called to say they were doing some environmental measurements, but not going out of low noise for a while.
H1 General
edmond.merilh@LIGO.ORG - posted 08:06, Thursday 05 November 2015 (23133)
Shift Summary -Day Transition

TITLE: Nov 5 DAY Shift 16:00-23:00 UTC (08:00-15:00 PT), all times posted in UTC

STATE Of H1: Observing

OUTGOING OPERATOR: Jim

QUICK SUMMARY: IFO is in Observing @ ~78.2 Mpc. EQ seismic bands are all in the 0.23 micron range. µSeism is around 0.2 µ. Wind is <10 mph. All lights appear to be off in E, M, CS & PSL. CW injections are running. Cal lines are running. Livingston is up and running. Quite_90 blends are being used.

H1 General
jim.warner@LIGO.ORG - posted 04:12, Thursday 05 November 2015 - last comment - 16:08, Friday 06 November 2015(23132)
Mid shift summary

Lost lock about 2 hours ago, not sure why. Trying to relock now, but DRMI is not cooperating. Otherwise, things are quiet. Winds down, useism is down. RF45 has not been awful.

Comments related to this report
jenne.driggers@LIGO.ORG - 16:08, Friday 06 November 2015 (23165)

Not sure what the cause of the power drift / alignment drift was, but it looks like we may have lost lock when the power recycling gain dropped below 33.5-ish.  See aLog 23164 for some plots and details.

LHO General
patrick.thomas@LIGO.ORG - posted 00:00, Thursday 05 November 2015 (23130)
Ops Eve End Shift Summary
TITLE: 11/04 [EVE Shift]: 00:00-08:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~80 Mpc.
SHIFT SUMMARY: Rode through a magnitude 5.2 earthquake in Russia. Taken out of observing for 20 minutes to offload SR3 M2 to M1. RF45 noise has not reappeared. No major change in wind or seismic.
SUPPORT: Jenne
INCOMING OPERATOR: Jim

ACTIVITY LOG:

Lost lock twice on DRMI split mode.
00:28 UTC Evan and Keita to LVEA to check on RF45 noise.
00:29 UTC Caught lock on split mode. Moved BS in pitch from 164.84 to 165.32 to lock DRMI on 1f.
00:34 UTC Lock loss on CARM_5PM
00:51 UTC Lock loss on PARK_ALS_VCO
00:57 UTC Lock loss on CARM_ON_TR
01:01 UTC Keita and Evan done
01:31 UTC Observing
07:21 UTC Out of observing to offload SR3 M2 to M1
07:41 UTC Observing
H1 General
patrick.thomas@LIGO.ORG - posted 23:42, Wednesday 04 November 2015 (23129)
Observing
Jenne finished offloading SR3 M2 to M1.

Intention Bit: Commissioning (Nov 5 07:21:59 UTC)
Intention Bit: Undisturbed (Nov 5 07:40:52 UTC)
H1 General
patrick.thomas@LIGO.ORG - posted 23:27, Wednesday 04 November 2015 - last comment - 01:00, Thursday 05 November 2015(23128)
Out of Observing
Jenne, Patrick

Happened to notice that a bit on the ODC mini overview on nuc1 was red. Traced it to a DAC overflow on SR3 M2. Jenne trended it back; it looks like it started about 2 hours ago. The signal originates from the guardian cage servo. We have taken the IFO out of observing to manually offload it to M1.
Comments related to this report
jenne.driggers@LIGO.ORG - 01:00, Thursday 05 November 2015 (23131)

The SR3 cage servo hit its limit, and started going crazy.  This wouldn't have caused (and didn't cause) an immediate lockloss, but probably would have caused one eventually, after SR3 drifted far enough that the other ASC loops ran out of range. 

As a reminder, the "cage servo" puts an offset into the M2 stage of SR3, such that the OSEM witness sensors on the M3 stage are kept at a constant pitch value.  There is no offloading of this offset to M1, and we just ran out of range on the M2 actuators. 

In the attached plot, the offset that we send to the M2 actuator is the "TEST_P_OUTPUT", which is multiplied by 8.3 and then sent to the M2 coils.  This factor of 8.3 in the Euler-to-OSEM matrix means that if the TEST_P output is above 15,600, we'll saturate the DAC output.  You can see that at about the time the output hits 15,000, the SR3_M3_WIT starts to drift away from the 922 setpoint, and the TEST_P_OUTPUT starts going crazy, since the servo thinks it needs to keep pushing harder while the witness is below 922. 
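
A back-of-envelope check of that saturation threshold, assuming an 18-bit (+/-2^17 count) DAC behind the quoted 8.3 Euler-to-OSEM gain:

    dac_limit = 2**17        # counts, assuming an 18-bit DAC (+/-131072)
    euler_to_osem = 8.3      # factor quoted above

    # Largest TEST_P output that keeps the M2 coil drive within range
    threshold = dac_limit / euler_to_osem
    print(round(threshold))  # ~15792, consistent with the ~15,600 quoted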

While I don't think the data is compromised, we did take the IFO out of Observing mode while I manually offloaded the pitch actuation to the M1 stage.  I moved the M1_OPTICALIGN_P_OFFSET by about 3 urad, and that brought the cage servo offset back to near zero.  The SRC1 and SRC2 ASC loops reacted to this, but I did the offloading slowly enough that we didn't have any problems. 

I have added a notification to the SR3 cage servo guardian to put up a message if the TEST_P_OUTPUT gets above 10,000 counts, so there's plenty of time (days) to offload the SR3 alignment before we run into this problem again.
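
A minimal sketch of the kind of check this adds (not the actual Guardian code; the channel name is an assumption):

    import epics  # pyepics

    # Assumed channel name for the cage-servo pitch offset at M2
    CHANNEL = 'H1:SUS-SR3_M2_TEST_P_OUTPUT'
    THRESHOLD = 10000  # counts, well below the ~15,600 saturation point

    value = epics.caget(CHANNEL)
    if value is not None and abs(value) > THRESHOLD:
        print('%s = %d counts: offload SR3 alignment to M1 before the '
              'M2 DAC saturates' % (CHANNEL, value))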

Images attached to this comment
LHO General
patrick.thomas@LIGO.ORG - posted 20:07, Wednesday 04 November 2015 (23126)
Ops Eve Mid Shift Summary
Have remained in observing. RF45 noise has not reappeared.

Rode through a jump in the 0.03 - 0.1 Hz seismic band to slightly above 0.1 um/s approximately 90 minutes ago.

From USGS:
5.2 11km WSW of Ust'-Kamchatsk Staryy, Russia
2015-11-05 01:59:22 UTC, 4.6 km deep

Terramon reports it arriving at 18:24:17 PST (02:24:17 UTC).
H1 General
patrick.thomas@LIGO.ORG - posted 17:37, Wednesday 04 November 2015 (23124)
Observing
ISI blends are at Quite_90. Seismic and winds are unchanged from start of shift. No SDF differences to check.

RF45 appears to have subsided for now.

Range is ~77 Mpc.
LHO General
patrick.thomas@LIGO.ORG - posted 16:23, Wednesday 04 November 2015 (23121)
Ops Eve Beginning Shift Summary
TITLE: 11/04 [EVE Shift]: 00:00-08:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Lock acquisition
OUTGOING OPERATOR: Jeff
QUICK SUMMARY:
Lights appear off in the LVEA, PSL enclosure, end X, end Y and mid X. I cannot tell from the camera whether they are off at mid Y.
Winds are between ~ 5 and 15 mph.
ISI blends are at Quite_90.
Earthquake seismic band is between 0.01 and 0.03 um/s. Microseism is between 0.08 and 0.2 um/s.

Evan and Keita just finished looking at RF45 cabling in PSL enclosure. Attempting to relock.
H1 DetChar (DetChar, PEM, SEI)
nairwita.mazumder@LIGO.ORG - posted 16:05, Wednesday 04 November 2015 (23113)
Are EY-Mainsmon glitches related to DARM glitches?
Jordan, TJ, Jess, Andy, Nairwita

We noticed that there was a significant increase in the loud-SNR glitch rate at the end of 3rd November's lock (Fig 1 and 2). While going through the HVETO results we noticed that these glitches were vetoed out at round 1, and the auxiliary channel used to veto these triggers is PEM-EY_MAINSMON_EBAY_QUAD_SUM (Fig 3). An Omega scan of one such glitch can be seen in Fig 4. 

The spectrum of one of the EY EBAY mainsmon channels is also attached. The blue trace is the current spectrum (4th Nov ~10:45 pm UTC) and the red trace corresponds to one of the coincident glitch times. Though the latter (red trace) is slightly elevated compared to the blue one, it's hard to believe that mainsmon glitches with such small SNR can create those glitches in DARM. 

We did notice some related glitches in the EX mainsmon (but not in the CS mainsmon channel, nor in any magnetometers or microphones). HEPI L4C, GS13, and ACC channels at the corner station have some glitches around the same time. 

The main question is why these glitches show up in the end station mainsmon channels and, around the same time, in the corner station seismometers. 
Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:01, Wednesday 04 November 2015 (23120)
Ops Day Shift Summary
Activity Log: All Times in UTC (PT)

15:51 (07:51) Check TCS Chillers 
16:00 (08:00) Take over from Jim
16:16 (08:16) Lockloss – Unknown
16:23 (08:23) Keita – Going to End-X to look for missing equipment
16:39 (08:39) Keita – Back from End-X
16:44 (08:44) Chris – Replacing filter in both mid stations
17:21 (09:21) Contractor (Tom) on site for John
18:10 (10:10) Bubba & John – Going to End-Y chiller yard
18:17 (10:17) Locked at NOMINAL_LOW_NOISE, 22.5W, 76Mpc	
18:25 (10:25) Set intent bit to Observing
18:56 (10:56) Bubba & John – Back from End-Y
19:10 (11:10) Chris – Beam tube sealing on X-Arm between Mid & End stations
19:26 (11:26) Set intent bit to Commissioning while Evan & Filiberto are in LVEA
19:30 (11:30) Evan & Filiberto – Going into the LVEA  
19:40 (11:40) Kyle & Gerardo – Going to Y28 to look for equipment
19:45 (11:45) Lockloss – Evan & Filiberto at electronics rack 
20:05 (12:05) Kyle & Gerardo – Back from Y28
20:40 (12:40) IFO locked & Intent bit set to Observing 
21:12 (13:12) Joe – Going to work with Chris on beam tube sealing on X-Arm
21:49 (13:49) Kyle & Gerardo – Going to X28
22:38 (14:38) Set the intent bit to Commissioning
22:40 (14:40) Keita & Evan – Going into PSL to check cabling for the RF45 problem
22:57 (14:57) Lockloss – Due to PSL entry
23:00 (15:00) Robert – Staging shakers in the LVEA
23:20 (15:20) Joe & Chris – Back from X-Arm
23:46 (15:46) Kyle & Gerardo – Back from X28
00:00 (16:00) Turn over to Patrick


Shift Summary:

Title: 11/04/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)

Support: Kiwamu, Evan, Keita, Mike, Filiberto, Daniel, Jason

Incoming Operator: Patrick

Shift Summary: 

Lockloss at 16:16 (08:16) - Unknown

After lock loss, had trouble getting past DRMI_1F. Decided to do an initial alignment. Relocked the IFO at 17:47 (09:47). Ran the A2L script. Had several SDF notifications for both ETMs and ITMs; could not find a SUS person. Took a snapshot of all and accepted. Set the intent bit to Observing at 18:25 (10:25)  
   
After relocking this morning, the RF45 noise came back at a rate of several times per minute. This knocked the range down into the high teens and low 20s Mpc. Conferred with Evan, Keita, Mike, Daniel, and Filiberto. Evan & Filiberto went into the LVEA to check cabling.
  
The cabling checks outside the PSL did not change the RF45 glitch rate. Keita & Evan went into the PSL to check for cabling issues there. Lockloss when the PSL environmental controls were switched on. 

Crew is out of the PSL. Starting to relock. 


H1 ISC (CDS)
evan.hall@LIGO.ORG - posted 15:57, Wednesday 04 November 2015 - last comment - 18:30, Wednesday 04 November 2015(23119)
EOM driver investigation in PSL: no conclusions

Keita, Evan

We went into the PSL to see if we could find a source for today's 45 MHz glitches.

We didn't find anything conclusive. Mostly, it seems that bending the main cable (the LMR195 that carries the 45 MHz into the PSL) causes large glitches in the AM stabilization control signal, similar to what is seen by bending/tapping the LMR195 cable at ISC R1. We did not really see anything by bending the slow controls / power cables, or the rf cable that drives the EOM.

The main cable passes from the ISC rack, through the PSL wall, through the (overstuffed) cable protector on the HPO side of the PSL table, over the water pipes underneath the PSL, and then terminates at the EOM driver, which sits just underneath the PMC side of the table. Keita notes that the pipes don't seem to be vibrating.

It is worth noting that these glitches, which are clearly seen in the control signal time series of the EOM driver in the PSL, do not show up in the control signal time series of the spare driver hooked up in the CER.

Comments related to this report
evan.hall@LIGO.ORG - 18:30, Wednesday 04 November 2015 (23125)

After this, Keita and I went to the ISC rack, inserted a 20 dB coupler after the balun on the patch panel, and looked at the coupled output on a scope. We didn't see anything.

However, around the time that we inserted this coupler, it seems that the glitches went away. The attachment shows 12 hours of the AM stabilization control signal. The first loud burst appears to coincide with the lockloss at 16:20 Z. The second loud burst, around 19:40, was Fil and me wiggling the cable. The third loud burst, around 23:30, is Keita and me in the PSL. The DC shift in the control signal around 00:30 is the time period with the coupler inserted.

When inserting the coupler, I noticed that the balun casing is slightly loose; I was able to twist the face of this casing just by unscrewing the cable from it.

Non-image files attached to this comment
H1 General
jeffrey.bartlett@LIGO.ORG - posted 12:08, Wednesday 04 November 2015 - last comment - 17:06, Wednesday 04 November 2015(23108)
Ops Day Mid-Shift Summary
Not a generally good morning. 

   IFO broke lock at 16:16 (08:16). After running initial alignment, relocked the IFO and went into Observing mode at 18:25 (10:25). Shortly after relocking we started seeing a lot of RF45 noise events, at a rate of several per minute. It was decided to have Evan & Filiberto go into the LVEA to check the RF cabling. 

   While Evan & Filiberto were in the LVEA working on the cabling, the IFO dropped lock. Waiting for them to come out before relocking.
         
Comments related to this report
john.worden@LIGO.ORG - 14:34, Wednesday 04 November 2015 (23112)

RE lockloss at 8:16.

Bubba and I were turning on a few heaters and one of these feeds the output arm (HAM5-6).

The plots show duct temperature near the heater, zone temperature near the floor, and 10 days of zone temperatures.

I have multiplied the Observation_Ready bit by 70 to get it on the same scale as the temperature.

Coincidence?

Images attached to this comment
thomas.shaffer@LIGO.ORG - 15:24, Wednesday 04 November 2015 (23114)

Robert suggested that looking at the OpLevs could give an idea as to whether this lockloss could have been caused by the temperature. I trended the OpLevs and compared their motion with the LVEA temperatures, and there is definitely some correlation between the SR3 OpLev movements and the LVEA temperature. You could probably make an argument for the ITMY and PR3 OpLevs as well, but it is not as clear as for SR3.
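
A rough sketch of this kind of trend comparison (the channel names are assumptions, and it needs LIGO NDS2 data access):

    from gwpy.timeseries import TimeSeries
    from gwpy.plot import Plot

    # Assumed channel names; the real OpLev and FMC temperature
    # channels may differ.
    span = ('Nov 4 2015 10:00', 'Nov 4 2015 17:00')  # UTC, around the lockloss
    oplev = TimeSeries.get('H1:SUS-SR3_M3_OPLEV_PIT_OUT_DQ', *span)
    temp = TimeSeries.get('H1:PEM-CS_TEMP_LVEA_ZONE4_DEGF', *span)

    # Stack the two traces on a shared time axis to eyeball the correlation
    plot = Plot(oplev, temp, separate=True, sharex=True)
    plot.savefig('sr3_oplev_vs_lvea_temp.png')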

So was this temperature excursion the cause of the lockloss? Well, the ASC signals didn't show any signs of a problem before the lockloss, seismic activity was low, winds were low, and I didn't see any other obvious culprits.

I attached a trend of some of the CS OpLevs and LVEA temp in a 6 hour window, as well as a group of Lockloss plots with ASC signals and such.

Images attached to this comment
john.worden@LIGO.ORG - 15:49, Wednesday 04 November 2015 (23116)

If air temperature was a factor in this lockloss I would be suspicious of electronics outside the vacuum near HAM5 or 6. I would not expect in-vacuum hardware to respond so promptly.

jenne.driggers@LIGO.ORG - 17:06, Wednesday 04 November 2015 (23123)

[JeffB, Jenne]

After some closer looking, we don't think that the temperature was a direct cause of the lockloss.  By the time the lock was lost, the temperature in FMC-ZONE4 had only changed by a few tenths of a degree - well within our regular diurnal temperature changes. About 16 seconds before the lockloss, SRC2 pitch and yaw both started walking away.  I don't know why they started walking away, but there's probably nothing to do about this, other than eventually moving the SRC ASC from AS36 to some other signal, as we're planning to do after O1 anyway.

H1 ISC (DetChar)
sheila.dwyer@LIGO.ORG - posted 13:35, Wednesday 21 October 2015 - last comment - 23:13, Wednesday 04 November 2015(22710)
evidence that scattered light couples anthropogenic noise to DARM up to 250 Hz

We have a few pieces of evidence suggesting that anthropogenic noise (probably trucks going to ERDF) couples to DARM through scattered light, most likely hitting something that is attached to the ground in the corner station.

  1. Our spectrum is more non-stationary between 100-200 Hz during times of high anthropogenic noise. Nairwita noted this by looking through summary pages (these glitches only seem to appear on weekdays between 7 am and 4 pm local (14-23 UTC), and not on Hanford Fridays when anthropogenic noise is low), and Jordan confirmed it by making a few comparisons of high/low anthropogenic noise within lock stretches (alog 22594).
  2. Corner station ground sensors are a good witness of these glitches. HVETO shows this clearly (see the page for October 14th, for example). Also, comparisons of bandpassed DARM to several corner station ground motion sensors and accelerometers show that glitches in DARM coincide with ground motion (for example, see Nutsinee's alog 22527).
  3. The DARM spectrogram at the time of these glitches shows what look like scattering arches from 1 Hz motion, with a total path length change velocity of around 40 um/s (alog 22523; see the rule-of-thumb sketch after this list). Both this high velocity and the fact that the seismometers on the tables don't seem to witness this motion well suggest that something attached to the ground is involved in the scattering, although this velocity is probably too high for something rigidly bolted to the ground.
  4. The scattering amplitude ratio (the ratio of scattered amplitude to the DC readout light on the DCPDs) that we would estimate based on the fringes in DARM is about 1e-5, similar to what we got in April. Using the ISCT6 accelerometer to predict the velocity of the motion doesn't quite work out.
  5. Annamaria and Robert did some PEM injections in the east bay, which showed a linear coupling to DARM. Annamaria is still working on the data and trying to disentangle downconversion from the linear coupling, but if we assume that scattered light is responsible for the linear coupling, the amplitude ratio is fairly consistent with what we got from the fringe wrapping when trucks go by.
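
As a rule-of-thumb check on the numbers in item 3, here is a sketch of the standard scattered-light fringe-frequency estimate (not the original analysis):

    # For light scattered back into the beam, the instantaneous fringe
    # frequency is f(t) = |d(path)/dt| / lambda, where d(path)/dt is the
    # rate of total path-length change.
    lam = 1.064e-6    # m, laser wavelength
    v_path = 40e-6    # m/s, path-length-change velocity quoted in item 3

    f_fringe = v_path / lam
    print('%.0f Hz' % f_fringe)  # ~38 Hz peak fringe frequency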

On Monday, Evan and I went to ISCT6 and listened to DARM and watched a spectrum while tapping and knocking on various things.  We couldn't get a response in DARM by tapping around ISCT6.  We tried knocking fairly hard on the table and the enclosure, and tapping aggressively on all the periscope top mirrors and several mounts on the table, and nothing showed up.  We did see something in DARM at around 100 Hz when I tapped loudly on the light pipe, but this seemed like an excitation much louder than anything that would normally happen.  Lastly, we tried knocking on the chamber walls on the side of HAM6 near ISCT6, and this did make some low-frequency noise in DARM.  Evan has the times of our tapping.

It might be worth revisiting the fringe wrapping measurements we made in April by driving the ISI, the OMC suspension, and the OMs.  It may also be worth looking at some of the things done at LLO to look at acoustic coupling through the HAM5 bellows (19450 and 19846).

Comments related to this report
evan.hall@LIGO.ORG - 21:37, Tuesday 03 November 2015 (23089)

14:31: tapping on HAM6 table

14:39: tapping on HAM6 chamber (ISCT6 side), in the region underneath AS port viewport

14:40: tapping on HAM6 chamber (ISCT6 side), near OMC REFL light pipe

14:44: with AS beam diverter open, tapping on HAM6 chamber (ISCT6 side)

14:45: with OMC REFL beam diverter open, tapping on HAM6 chamber (ISCT6 side)

14:47: beam diverters closed again, tapping on HAM6 chamber (ISCT6 side)

All times 2015-10-19 local

nutsinee.kijbunchoo@LIGO.ORG - 23:13, Wednesday 04 November 2015 (23122)DetChar

I've made some plots based on the tap times Evan recorded (the recorded times seem off by half a minute or so compared to what actually shows up in the accelerometer and DARM). Not all taps created signals in DARM, but every signal that showed up in DARM has the same features in a spectrogram (visible at ~0-300 Hz, 900 Hz, 2000 Hz, 3000 Hz, and 5000 Hz; see attachment 2). Timeseries also reveal that whether or not a tap shows up in DARM does not seem to depend on the overall amplitude of the tap (as seen in the HAM6 accelerometer; see attachment 3). The PEM spectra during different tap times don't give any clue why one tap shows up in DARM more than another (attachments 4, 5). Apologies for the wrong conclusion I drew earlier based on the spectrum I plotted using the wrong GPS time (those plots have been deleted).
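
A sketch of how one of these tap-time spectrograms could be reproduced with gwpy (the channel name is an assumption; 14:39 local on 2015-10-19 is roughly 21:39 UTC; requires LIGO data access):

    from gwpy.timeseries import TimeSeries

    # Two minutes of h(t) around one of the taps listed in the parent comment
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN',
                          'Oct 19 2015 21:38:30', 'Oct 19 2015 21:40:30')

    # Q-transform spectrogram covering the bands where the taps showed up
    qgram = darm.q_transform(frange=(10, 6000))
    plot = qgram.plot()
    plot.gca().set_yscale('log')
    plot.savefig('ham6_tap_spectrogram.png')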

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 20:41, Wednesday 04 November 2015 (23127)

I zoomed in a little closer at higher frequencies and realized this pattern is similar to the unsolved n*505 glitches. Could this be a clue to figuring out the mechanism that causes the n*505?

Images attached to this comment
H1 SUS
betsy.weaver@LIGO.ORG - posted 20:28, Thursday 01 October 2015 - last comment - 09:46, Thursday 05 November 2015(22174)
HXTX MEDM channel typo

Today, I discovered that on all of the HXTX Aux Ch screens (SUS_CUST_HXTX_MONITOR_OVERVIEW.adl), each of the M3 stage indicator lights is a COPY/PASTE of the M2 ones.  The channel values are all reading the appropriate channels, but the visual is incorrect for M3.  See attached.

Need to fix...

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 09:46, Thursday 05 November 2015 (23136)

Disregarding my "HXTX" references above which should read "HXTS", I've fixed the error in the screen and committed it to SVN.

Screen address:

/opt/rtcds/userapps/release/sus/common/medm/hxts/SUS_CUST_HXTS_MONITOR_OVERVIEW.adl

Displaying reports 62021-62040 of 84553.Go to page Start 3098 3099 3100 3101 3102 3103 3104 3105 3106 End