Reports until 09:53, Friday 22 July 2016
H1 DAQ (CDS)
james.batch@LIGO.ORG - posted 09:53, Friday 22 July 2016 - last comment - 10:43, Friday 22 July 2016(28582)
Data dropouts in second trend data

The data dropouts observed in second trend data (see alog 28579) are a result of missing data caused by restarts of frame writer 1.  If the raw data is used instead of second trend data, the dropouts are not present (see the attached example).  Both nds0 and nds1 are reading the same second trend data, and only frame writer 1 is currently writing second trend data while we explore frame writer instability.
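For anyone who wants to repeat the check, here is a minimal sketch of the raw-versus-trend comparison using gwpy over NDS2; the channel name and GPS times are placeholders, not the ones used for the attached plot:

import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1153240000, 1153240600  # hypothetical 10-minute window

# Second trend: currently written only by fw1, so it gaps when fw1 restarts
trend = TimeSeries.get('H1:EXAMPLE-CHANNEL_DQ.mean,s-trend', start, end,
                       pad=np.nan)

# Raw data: written by fw0, which has been stable, so it should be contiguous
raw = TimeSeries.get('H1:EXAMPLE-CHANNEL_DQ', start, end)

print('trend samples lost to dropouts:', int(np.isnan(trend.value).sum()))
print('raw samples:', raw.size)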

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:43, Friday 22 July 2016 (28584)

fw0 continues to be 100% stable; fw1 is now restarting about 12 times a day.

Currently fw0 is writing the two 64 second frames (commissioning and science) and fw1 is writing all four frame types (commissioning and science 64S-frames, second and minute trends). Hence the second trend drop-outs when fw1 restarts. Next week we will work on making the second trends contiguous.

H1 SUS (AOS, SEI, SUS)
travis.sadecki@LIGO.ORG - posted 08:39, Friday 22 July 2016 (28580)
Optical Lever 7 Day Trends

Attached are screenshots of the past 7 days' optical lever trends for PIT, YAW, and SUM.

This completes FAMIS request 4685.

Images attached to this report
H1 PSL
peter.king@LIGO.ORG - posted 07:36, Friday 22 July 2016 (28579)
Laser trip
The laser tripped around 3:20 this morning.  Suspect it is the "usual" problem.  The crystal
chiller was found off but with its dummy still in place.

    The data dropouts look like an artifact of data acquisition.

    Another observation is that the head temperatures all increase when the flow rates drop.
That is not surprising but the humidity also rises.  It might be that we have a tiny water leak
whose effect on humidity is noticeable when the temperature rises.  We have seen evidence of
tiny leaks before because of some corrosion stains in the laser base plate, however when the
laser is observed over time scales of ~30 minutes, no leak is visible.  The (possible) leak is
small enough that it does not make a noticeable difference in the water level of the chiller.
Images attached to this report
H1 ISC
kiwamu.izumi@LIGO.ORG - posted 00:48, Friday 22 July 2016 (28577)
Incomplete study of power recycling gain at 50 W

Jeff B, Kiwamu,

We raised the input power to 50 W tonight with SRM controlled by the dither alignment. We held the interferometer at 50 W for 30-ish minutes, but the lock was ended by a PI at 18056 Hz (28259).

During this short period, we moved the PRM pointing (or PRC1) in pitch and confirmed that moving PRM further in the negative direction (by introducing an offset in the error point of PRC1) improved the recycling gain. See the attached. The recycling gain slowly went back to 28 or so at 50 W. We did not get a chance to explore the optimum point yet. POPX started railing when we steered PRM at 40 W on the way to reach 50 W.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:06, Friday 22 July 2016 (28578)
Ops Evening Shift Summary
Title:  07/21/2016, Evening Shift 23:00 – 07:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: IFO locked at DC_READOUT. Wind is Calm to Moderate Breeze (0 to 18 mph). Seismic is a bit elevated at the End Stations, but should pose no operational difficulties. Microseism is low.
Commissioning: Commissioners are commissioning.
Outgoing Operator:  Travis
 
Activity Log: All Times in UTC (PT)

23:00 (16:00) Take over from Travis
23:08 (16:08) Nutsinee – Going to TCS-X Table
23:11 (16:11) Sheila & Jenne – Going to ISCT1 to add a new lens
23:45 (16:45) Sheila & Jenne – Out of LVEA
00:16 (17:16) Jenne – Going to ISCT1 
00:25 (17:25) Jenne – Out of LVEA
00:39 (17:39) Jenne – Going back to ISCT1
01:26 (18:26) Jenne – Out of the LVEA


Title: 07/21/2016, Evening Shift 23:00 – 07:00 (16:00 – 00:00) All times in UTC (PT)
Support:  Sheila, Jenne, Kiwamu 
Incoming Operator: N/A

Shift Detail Summary: IFO has been mostly up while the commissioners were working. The lock losses were relatively easily recovered. Had to do some minor touch-ups of PRMI a couple of times, but nothing too serious. The wind has come up over the last 3 hours. Base winds are in the mid to upper teens with gusts into the mid-30s. Microseism remains low and seismic activity has rung up with the increase in the winds.
H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 18:35, Thursday 21 July 2016 (28574)
PSL Chiller Check - FAMIS Task
   Added 150ml water to the Crystal chiller. Diode chiller was OK. Noted no new contamination or debris in either chiller filter. 

   Trended the pressures and flows for the Diode and Crystal chillers. The pressures are still slowly trending upward and the flows are trending downward. Both are changing by very small amounts. I have the new (higher flow and less restrictive) filters for the chiller room in-line filters. Plan to swap these at the next maintenance window.
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 18:34, Thursday 21 July 2016 (28573)
Lens for POPAIRB

[Jenne, Cheryl, Sheila]

We placed a lens in front of POPAIR B tonight, so that we are no longer overfilling the BBPD.  We seem to be able to lock okie dokie, so I think everything is okay. 

H1 SEI (ISC)
jim.warner@LIGO.ORG - posted 17:46, Thursday 21 July 2016 (28572)
More earthquake survival data

Last night during one of the lock stretches there was a small earthquake. It wasn't big enough to break the lock, but it was big enough to show up in a number of channels in the IFO.  During the earthquake last night, no change was made to the seismic configuration, so it makes for a good comparison with the "earthquake" configuration from my alog on June 29, 28050. The first two plots are the minute trends of RMS 30-100 mHz Z LVEA and ETMX motion (best I could do for both days) for the 10 minutes of data I'm comparing between the two earthquakes.  Keep in mind that the RMS motion in this band is higher for the June 28 earthquake than for last night's.

The next plot shows a version of DARM for 3 different times: red is June 28, green is last night's earthquake and pink is 10 minutes of quiet time before last night's earthquake. You could be forgiven for looking at this plot and wondering why we don't run the earthquake configuration all the time. The current "windy" configuration should do better during high microseism, where from O1 we know we need inertial isolation in the 0.1-0.3 Hz band, which the earthquake configuration won't provide. The next plot shows IMC-F, which is a measure of CARM (the units aren't really calibrated, just scaled to be similar to the other spectra); the color scheme remains the same: red is June 28, green is last night's earthquake and pink is 10 minutes of quiet time before last night's earthquake. CARM (IMC-F) and DARM are both lower for the bigger June earthquake (where we used the seismic earthquake configuration) than for the smaller earthquake last night. I think this shows that for low microseism, we want to use this configuration. More thought will be required for a winter configuration, but suppressing the microseism while staying locked to the ground below 100 mHz will be hard.
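For reference, the band-limited RMS figure of merit used in these trends can be reproduced offline; a rough sketch with gwpy, where the channel name and GPS times are placeholders:

import numpy as np
from gwpy.timeseries import TimeSeries

gnd = TimeSeries.get('H1:ISI-GND_STS_EXAMPLE_Z_DQ', 1151000000, 1151001200)
band = gnd.bandpass(0.03, 0.1)           # isolate the 30-100 mHz band
blrms = np.sqrt(np.mean(band.value**2))  # single-number RMS for the stretch
print('30-100 mHz RMS: %.3g' % blrms)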

We are working with MIT to get SEISMON/Terramon transplanted here, so we can get more reliable earthquake warnings. Right now the earliest and most reliable warning we get comes from watching the IMC and STS time series on the wall FOMs.

Images attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 16:48, Thursday 21 July 2016 - last comment - 15:33, Friday 22 July 2016(28567)
H1 ITMX Earthquake Sensitivity Study--Coil Driver Glitch causes lockloss

Just a theory here...But I believe the H2 coil driver needs attention.

The H2 drive heads off along with the RZ and X loops; H2 only directly affects RZ and X.  If it were the loops, the other actuator drives would respond as well.  Page 11 of the attachment shows the CD I & V INMONs going wacky, impacting the RZ and X (page 4) loops one or more seconds before the model responds.

I know this is long-winded; you should see the stuff in the text file not printed here as I went down other rabbit holes.

ITMX ISI tripped Wednesday July 13 AM from a M6.3 EQ in New Zealand. No other platforms tripped.  The extra sensitivity of ITMX to EQs is not new.  Based on the ground seismometer STS2-B, it looks like the trip occurred with the arrival of the S-wave (shear, not surface); see page 1 of the attachment. Based on the EPICS records, ST2 tripped first; based on the FIRSTTRIG_LATCH value, the Actuators hit their rail first.

Page 2: 80 seconds with the actuator drives. The upper right graphs have the horizontals and verticals on the same plot. Given that there is no RX or RY loop closed, it makes sense that the three vertical drives are exactly the same. Leading up to the trip, the H1 and H3 drives appear to be getting a greater request from the loops compared to the corner 2 horizontal. The H2 GS13 is parallel to the X arm and so contributes nothing to the Y DOF. The verticals are well behaved. However, a couple of seconds before the EPICS INMON registers the state change (1 to 2), the H2 drive shoots almost straight to the rail. This delay makes sense with the 8192 saturations required before tripping on this 4k model. The upper left plot zooms out on the H2 drive as it ramps its way up to ~1e7 before the watchdog trip takes it to zero. The H1 and H3 drives continue along until the trip, when they start to ring. After the trip the three horizontals behave similarly.  You might believe there is a kink in these other drives a few seconds earlier,...ehhh.

So the behavior of Drive2 may be of interest.  Is this a coil driver problem or does it originate from up stream?

Upstream are the damping, isolation, and feedforward paths. There is no indication that the vertical loop is a problem, and feedforward only goes to Z; there are no DQ channels for the damping loops—they will have to wait for further upstream examination. This leaves the isolation loops.

Page 3 shows 80 seconds of the Isolation loop inputs. The ITMX and ITMY IN1s are overlain on the same plots. Also shown are the horizontal ground motions; the vertical is uninteresting. The ground motion signals overlain on the isolation plots are scaled and shifted so as to be seen. This shows that both ITMX and ITMY X & Y DOFs are doing about the same and are fairly well correlated with the input ground motion hence the control loops are pretty much the same. I've also verified this looking at the Matlab Controller figures. Around xx8505 gps, the Y ground motion quickly increases, this may be the arrival of the s-wave. The Y loop responds but within several seconds, the response doesn't seem as reasonable and looks to be ringing up and getting behind the curve. The X gets hit with this some 30 seconds later. I don't know if the mostly flatish nature of the RZ loops is or is not remarkable. As well, is the scale of noise/buzz difference between ITMX and Y worth noting? Conversely, the Z DOFs seem to be responding to the same input but certainly drive harder on the ITMX.

 

So the H2 Drive goes wacky before the others. The X and RZ loops feed the H2 Drive and both those ISOs head off to the outer reaches. The Y loop sees a blip and then gets buzzy/noisy; the Z sees nothing. The H2 Drive is pushing the platform and the X and RZ motion is seen back at the ISO_IN1. It makes sense that the Y DOF would also see something if the H2 actuator is the only one pushing. The push on the H2 Drive could produce canceling effects on the H1 and H3 sensors and not push that DOF's iso off the page. If the RZ loop were driving it, the response would be the same on all horizontal GS13s, but they are not responding the same.

On page 4 the Drives and ISO INs are plotted together for 4 seconds. Clearly the H2 Drive leads the other drives, as already established. On the top plot of the ISOs, the X and RZ loops respond dramatically compared to the blip on Y. Further, the ISOs are scaled by 5 to emphasize their behavior before the Drive goes off. The Y loop looks to be trundling along, but X and RZ (maybe most clearly RZ) change what they had been doing.

This might suggest the loops are causing this but the fact that only the H2 Drive reacts implies that circuit is more sensitive.

On page 5, only ISOs X and RZ are shown with the H2 Drive. On the top plot, the drive is scaled down to be visible with the zoomed in ISO loops. The RZ loop really seems to be headed somewhere bad before the drive freaks out. Again however, if the loop was the source, the other drives would be reacting similarly and they are not.

On Page 6, one sees the ITMY loops don't respond like the ITMX X and RZ loops--not caused by the common ground motion.

On pages 7 & 8, an interesting beat is seen on the HEPI L4C RZ loops. Otherwise nothing else on the HEPI IPS or L4Cs of either ITM.

Page 9, the GS13s of ITMY are overlain on the ITMX GS13s for 20 seconds showing the motion on the platform is nearly identical on the two ITMs.

On page 10, the horizontal GS13s of ITMX are plotted together (left middle graph) showing the relative magnitudes. This implies only corner 2 is being driven and the others are just seeing the rotation caused by the one corner.

Page 11 plots the H2 CD INMONs, 80 seconds to show the normal before some badness at the trip time.

Page 12 zooms into 10 seconds showing the Coil Driver current and voltage making some really ugly step and excursion one or more seconds before the RZ loop starts to head off.  The model did not tell the coils to do this or it would be seen on the DRIVE.  The RZ loop and the X loop (page 4) are responding to what the coil driver is doing to the platform.  I'm pretty sure this tells us the coil driver is the source of this problem.

Maybe the coil driver is marginal in some way and when the extra demand of dealing with the work of the earthquake motion occurs, something lets go.

Non-image files attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:26, Friday 22 July 2016 (28583)

NDS2 Data Storage Timing Issue???

Attached is a DTT plot of the H2 INMONs, the ground motion, and the H2 MASTER DRIVE (the output of the model).  I had to go to NDS2 extraction as the full data is now gone from h1nds1.

On this DTT plot, the hump up to ~500 cts on the I mon (RED) starts at about 10 seconds, whereas the rapid increase of MASTER_H2_DRIVE (violet?) starts around 9-ish seconds.

On page 11 of the pdf above, the ramp up of the Drive channel (center graph) clearly happens after the step in the I_INMON channel (upper middle graph).

The INMON channels are slow whereas the DQ channels are fast...

I suspect my dataviewer looks are correct but...

Images attached to this comment
hugh.radkins@LIGO.ORG - 15:33, Friday 22 July 2016 (28590)

Okay--false alarm on the timing, but a warning to DTT-via-NDS2 users looking at time series with mixed data rate (16 Hz & 4 kHz) channels---these are the conditions I was working under.

See the attached plot, where the red trace is a 16 Hz FE channel and the reference in green is a 4 kHz channel; the measurement setup has the stop frequency at 1 Hz and a BW of 0.01 Hz.  Lower in the attachment are the Fourier Tools settings for the blue data, where the only thing changed from the green is the stop frequency, set to 7 Hz.  The reason for the change in the shape of the data may be obvious to those of you much sharper in signal and Fourier analysis than I, but it should serve as a warning to all those using DTT to display time series.  I had to slow things way down to accommodate the 16 Hz channel, and doing so greatly impacts the accompanying fast channel.  Stop frequencies of 2, 3 & 4 Hz all produce the brown trace; 5, 6 & 7 Hz all produce the blue trace.  7 Hz was as high as it would go.
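To make the mechanism concrete, here is a small Python/scipy analogy (not DTT itself): viewing a fast channel with a low stop frequency is effectively low-passing it to that bandwidth, which reshapes the displayed time series just as in the attached plot.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 4096
t = np.arange(0, 8, 1/fs)
x = np.sign(np.sin(2*np.pi*0.5*t))  # sharp-edged 0.5 Hz test signal

sos_lo = butter(4, 1.0, btype='low', fs=fs, output='sos')  # ~1 Hz stop freq
sos_hi = butter(4, 7.0, btype='low', fs=fs, output='sos')  # ~7 Hz stop freq

y_lo = sosfiltfilt(sos_lo, x)  # edges rounded off (like the brown trace)
y_hi = sosfiltfilt(sos_hi, x)  # much closer to the original square wave
print(np.max(np.abs(y_lo - x)), np.max(np.abs(y_hi - x)))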

Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Thursday 21 July 2016 (28565)
Ops Day Shift Summary

TITLE: 07/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:  It was one of those days for locking where everything seems to go wrong.  I struggled with initial alignment most of the day, which was interrupted by various issues such as PSL crashing and FSS oscillating.  ISIs were reset this morning, so the alignment of the arms was very bad and required a lot of work to recover.
LOG:
 

18:07 PSL trips off, called Peter

18:14 Peter toggling noise eater

22:12 Finally made it to DC_Readout

22:37 Nutsinee to LVEA racks

22:43 Nutsinee out

Dave restarting a bunch of models related to PI upgrades.  Several other inconsequential (to locking) tasks which I didn't record due to being busy trying to lock.

H1 ISC (OpsInfo)
sheila.dwyer@LIGO.ORG - posted 17:28, Wednesday 20 July 2016 - last comment - 19:11, Thursday 21 July 2016(28538)
SRM dither alignment loops working

We closed some alignment loops for SRM this morning, using a dither and demodulating POP90.  These loops seem to have a UGF below 100 mHz, so we may want to increase their gain, but they seem to be working well for now.  The guardian turns them on during the DRMI ASC and leaves them on.  They maintain POP90 (and the ratio of AS90/POP90) at a reasonable level as we go through the CARM offset reduction, engaging the soft loops, and increasing the power.

We saw an instability that seemed to be a cross-coupling between the SRM and SOFT yaw loops. Jenne increased the gain in the soft yaw loops by a factor of 10, which seems to have taken care of the problem and is now in the guardian.

As long as this keeps working, operators should no longer have to adjust SRM. 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 19:11, Thursday 21 July 2016 (28575)

Somehow, running this loop during acquisition was causing an HLTS bounce mode (28 Hz-ish, probably PR3) to ring up, which also saturated the quad L2 drives and therefore caused violin modes to ring up.  We are now using the normal 36 WFS during DRMI, turning them off, and turning on the dither loop in the ENGAGE_SRC_ASC state once we reach full lock.  This doesn't ring anything up.

H1 DetChar (DetChar, PEM)
paul.schale@LIGO.ORG - posted 16:01, Wednesday 20 July 2016 - last comment - 16:27, Thursday 21 July 2016(28534)
A class of O1 blip glitches is correlated with low relative humidity in the buildings

We have been studying the list of blip glitches that Miriam Cabero Müller generated for O1. We noticed that the rate of blip glitches increased dramatically during two time periods (see figure 1). We first checked whether the glitch rate might be correlated with outside temperature, in case the blip glitches were produced by beam tube particulate events. The correlation with outside temperature was strong, but, for beam tube particulate, we expected it to be stronger with the rate of temperature change than with temperature, and it was not. So we checked relative humidity and found that inside relative humidity was well correlated with outside temperature (and glitch rate), most likely because of the extra heating needed in cold spells.  A plot of the blip glitch rate and RH inside the CS mass storage room is attached.
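A sketch of the kind of comparison described above, with synthetic stand-ins for the hourly glitch rate and outside temperature (the real analysis used the O1 blip list and site weather records):

import numpy as np

rng = np.random.default_rng(0)
temp = 15 + 10*np.sin(np.linspace(0, 20, 1000)) + rng.normal(0, 1, 1000)
rate = np.clip(30 - temp + rng.normal(0, 3, 1000), 0, None)  # synthetic rate
dtemp = np.gradient(temp)  # rate of temperature change

print('corr(rate, T)     = %+.2f' % np.corrcoef(rate, temp)[0, 1])
print('corr(rate, dT/dt) = %+.2f' % np.corrcoef(rate, dtemp)[0, 1])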

While the correlation with inside relative humidity is not better than with outside temperature, we plot inside relative humidity because we can better speculate on reasons for the correlation. Dry conditions may lead to the build up and discharge of static electricity on electronics cooling fans. Alternatively, there may be current leakage paths that are more likely to discharge in bursts when the pathways dry out. While this is, at best, speculation, we set up a magnetometer near the HV line for the EY ESD to monitor for possible small short fluctuations in current that are correlated with blip glitches. At the same time, we suggest that, as risk mitigation, we consider humidifying the experimental areas during cold snaps.

The low-humidity correlated blip glitches may represent a different population of glitches because they have statistically significantly smaller SNRs than the background blip glitches.  We analyzed the distribution of SNR (as reported by pycbc) of the blip glitches during three time periods – segments 1, 2, and a relatively quiet period from October 5 – October 20 (segment 3).  This gave approximately 600 blip glitches for each segment.  Figure 2 is a histogram of these distributions.

To determine if these distributions are statistically different, we used the Mann-Whitney U test.  Segments 1 and 2 matched, reporting a one-sided p-value of 0.18.  The distribution in SNR for segment 3 - the low glitch rate times - did not match either segment 1 or 2, with p-values of 0.0015 and 2.0e-5, respectively.  Thus we can conclude that the distributions of 1 and 2 are statistically significantly different from 3.
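The test itself is one line in scipy; a sketch with synthetic stand-ins for the three SNR lists (the alog quotes one-sided p-values, selected with the alternative argument):

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
snr_seg1 = rng.lognormal(2.0, 0.3, 600)  # high-rate period 1 (synthetic)
snr_seg2 = rng.lognormal(2.0, 0.3, 600)  # high-rate period 2 (synthetic)
snr_seg3 = rng.lognormal(2.1, 0.3, 600)  # quiet Oct 5-20 period (synthetic)

for a, b, label in [(snr_seg1, snr_seg2, '1 vs 2'),
                    (snr_seg1, snr_seg3, '1 vs 3'),
                    (snr_seg2, snr_seg3, '2 vs 3')]:
    stat, p = mannwhitneyu(a, b, alternative='less')  # one-sided test
    print(label, 'p = %.3g' % p)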

We are currently examining the diurnal variations in the rate of these blip glitches, and will post an alog about that soon.

 

Paul Schale, Robert Schofield, Jordan Palamos

Images attached to this report
Comments related to this report
matthew.evans@LIGO.ORG - 22:06, Wednesday 20 July 2016 (28542)

This is probably something you already checked, but could it be just that there is more heating going on when the outside temperature is low? More heating would mean more energy consumption, which I guess could bother the electronics in a variety of ways that have nothing to do with humidity (magnetic fields, power glitches, vibration, acoustics in the VEAs, etc.).  Which other coupling mechanisms have you investigated?

robert.schofield@LIGO.ORG - 22:18, Wednesday 20 July 2016 (28543)

In a sense we are suggesting that the effect is due to the extra heating during the cold snaps; the humidity is just our best guess as to how the extra heating affects the electronics. We think it is unlikely to be temperature, since the temperature in the buildings changed little and did not correlate as well as humidity. Our understanding is that DetChar and others have looked carefully for, and not found, coincident events in auxiliary channels, which would argue against magnetic or power glitches from heaters. The heaters don't increase vibration or sound levels by much; the fans work continuously. The humidity, however, changed a lot.

john.zweizig@LIGO.ORG - 11:19, Thursday 21 July 2016 (28556)DAQ
Could the decrease in humidity affect the electronics cooling efficiency by changing the heat capacity of the cooling air? Is there any recorded direct measurement of the electronics heat-sink temperatures or exhaust temperature?
brian.oreilly@LIGO.ORG - 13:31, Thursday 21 July 2016 (28559)
If you want to do a similar study at L1 then one of the best periods (in terms of a fairly quick change in RH) is the period from Nov. 20th to Nov. 27th
2015. Of course RH values here in the swamps are much higher.

Where is the "blip glitch" list? I followed this link: https://wiki.ligo.org/viewauth/DetChar/GlitchClassificationStudy but there's nothing past
Sept. 23 there.
Images attached to this comment
paul.schale@LIGO.ORG - 14:08, Thursday 21 July 2016 (28560)

The list of blip glitches was emailed out by Miriam Cabero Müller.  They're stored at https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/full_O1/, and Omega scans for H1 are here: https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/blips/wscan_tables/  and for L1 here: https://ldas-jobs.ligo-la.caltech.edu/~miriam.cabero/blips/wscan_tables/.

paul.schale@LIGO.ORG - 16:27, Thursday 21 July 2016 (28568)

John:

Humidity has very little effect on the thermal properties of air at the relevant temperatures and humidities (20-40 degrees C, 0-20 % RH).  On pages 1104 and 1105 of this paper (http://www.ewp.rpi.edu/hartford/~roberk/IS_Climate/Impacts/Resources/calculate%20mixture%20viscosity.pdf), there are plots of specific heat capacity, viscosity, thermal conductivity, and thermal diffusivity.
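A back-of-the-envelope check of that claim, using the Magnus formula for saturation vapor pressure and textbook heat capacities (treat the numbers as estimates):

import math

T_c, RH, P = 30.0, 0.20, 101325.0  # deg C, relative humidity, Pa
e_s = 610.94 * math.exp(17.625*T_c / (T_c + 243.04))  # saturation pressure, Pa
w = 0.622 * RH*e_s / (P - RH*e_s)  # humidity ratio, kg water per kg dry air

cp_dry, cp_vap = 1005.0, 1860.0    # J/(kg K)
cp_mix = (cp_dry + w*cp_vap) / (1 + w)
print('cp change vs dry air: %+.2f%%' % (100*(cp_mix/cp_dry - 1)))  # ~+0.4%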

H1 ISC
evan.hall@LIGO.ORG - posted 10:47, Wednesday 20 July 2016 - last comment - 09:21, Friday 22 July 2016(28505)
Estimate of DARM noise from test mass dielectric loss

Stefan, Matt, Evan

We thought a bit about how thermally driven fluctuations in the polarization density of the test mass substrate could couple into DARM. We made a preliminary calculation of this noise for a single test mass biased at 380 V. This is not currently limiting the DARM sensitivity (since bias voltage reduction tests showed no effect), but could be important once aLIGO is closer to design sensitivity.

The test mass substrate and the reaction mass electrodes can be considered as a capacitor with position-dependent capacitance C(x) = C_0/(1+x/d), where d is the gap distance. The dielectric loss of the substrate will contribute a small imaginary part phi_C to this capacitance. If a significant fraction of the electrostatic field density from the electrodes is located inside the test mass substrate, then the loss angle of the capacitance will be similar to the dielectric loss of the substrate.

If a sinusoidal voltage $V(t) = V_0 \cos(\omega t)$ is applied to the bias electrode (and the quadrant electrodes are held at ground), then the charge accumulated on the electrodes is $q(t) = C_0 V_0 [\cos(\omega t) + \phi_C \sin(\omega t)]$. The time-averaged power dissipated per cycle is then $W = \langle \dot{q} V \rangle = \frac{1}{2} C_0 V_0^2 \phi_C \omega$.

Since $S_{qq}(f) = \frac{2 k_{\mathrm{B}} T}{\pi^2 f^2} \frac{W}{V_0^2} = \frac{2 k_{\mathrm{B}} T}{\pi f} C_0 \phi_C$, we therefore have $S_{VV}(f) = \frac{2 k_{\mathrm{B}} T}{\pi f} \frac{\phi_C}{C_0}$.

This voltage noise can then be propagated to test mass motion in the usual way. Using an ESD coefficient of 2×10^−10 N/V^2 and a gap distance of 5 mm, this fixes the capacitance at C_0 = 2 pF.

The loss tangent of Suprasil is quoted as 5×10^−4 at 1 kHz. If this is assumed to be structural, then for a 380 V bias, the expected displacement ASD is 8×10^−22 m/Hz^1/2 at 100 Hz, falling like f^−5/2. This is shown in the attachment, along with the aLIGO design curve.
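A quick numeric check of that figure is below; the temperature (295 K) and free 40 kg test mass are our assumptions (suspension dynamics ignored), the rest are the numbers quoted above.

import math

kB, T = 1.380649e-23, 295.0          # Boltzmann constant, assumed temperature
C0, phi, V0 = 2e-12, 5e-4, 380.0     # capacitance, loss angle, bias voltage
alpha, m, f = 2e-10, 40.0, 100.0     # ESD coefficient N/V^2, mass kg, freq Hz

S_VV = 2*kB*T*phi / (math.pi*f*C0)   # voltage noise PSD, V^2/Hz
dFdV = 2*alpha*V0                    # force per volt at the bias point, N/V
asd_x = dFdV*math.sqrt(S_VV) / (m*(2*math.pi*f)**2)
print('x ASD at 100 Hz: %.1e m/rtHz' % asd_x)  # ~8e-22, falling as f^-5/2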

We have not considered the capacitance of the ESD cables, or of nearby conductors (e.g., ring heaters). (The effect of the series resistors in the ESD cables was considered by Hartmut some time ago.)

Non-image files attached to this report
Comments related to this report
peter.fritschel@LIGO.ORG - 09:21, Friday 22 July 2016 (28581)

The Dynasil web site quotes dielectric properties of fused silica from MIT's Laboratory for Insulation Research, circa 1970.

These values for the loss tangent are much lower, e.g.: < 4e-6 at 100 Hz.

H1 SEI (PEM, SEI)
david.mcmanus@LIGO.ORG - posted 23:38, Tuesday 19 July 2016 - last comment - 16:34, Thursday 21 July 2016(28507)
Newtonian Noise Array set-up (Part 1)

David McManus, Jenne Driggers

Today I set up 16 of the sensors for the corner station Newtonian noise L4C array. These were the 16 that were most out of the way and least likely to be tripping hazards, mostly focused in the 'Beer Garden' area and around the arms. The channels I connected were: 4,8,13,18,19,20,21,22,23,24,25,26,27,28,29,30. The sensors corresponding to these channels are included in the table attached to this report. The sensors are stuck to the ground using a 5 minute epoxy, and a foam cooler is placed on top of each one and taped to the ground. These foam coolers have a small hole cut near the base so that the cable can get out without touching the cooler (shown in one of the pictures). The cut surfaces are sealed with tape to prevent foam from flaking off onto the LVEA floor. The cables have basic strain relief by taping them to the ground on either side of the foam cooler, which also helps to ensure that the cable is not touching the cooler. I've attached two pictures showing what the sensors look like with and without the cooler. 

As a side note, sensor 26 is quite difficult to access as it is placed almost directly beneath a vacuum tank. When it eventually needs to be removed a small person may be required to fetch it. The final attached photo shows how it is positioned (BSC7). I placed it by carefully stepping into the gap between the pipes that run along that section of the arm and then crawling under the vacuum tank support.

Images attached to this report
Comments related to this report
david.mcmanus@LIGO.ORG - 16:34, Thursday 21 July 2016 (28569)

The sensor channel names are H1:NGN-CS_L4C_Z_#_OUT, where # is the channel number I referenced in this post. Jenne made an MEDM screen, which can be found under the SEI tab and then 'Newtonian Seismic Array'.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 23:35, Tuesday 19 July 2016 - last comment - 10:47, Wednesday 27 July 2016(28508)
CDS maintenance summary

Upgrade of Timing Firmware

Daniel, Ansel, Jim, Dave

Most of today was spent upgrading the entire timing system to the new V3 firmware. This did not go as smoothly as planned, and took from 9am to 6pm to complete. By the end of the day we had reverted the timing master and the two CER fanouts to the original code (the end station fanouts were not upgraded). We did upgrade all the IRIG-B fanouts, all the IO chassis timing slaves, all the comparators and all the RF amplifiers.

The general order was: stop all front end models and power down all front end computers, upgrade the MSR units, upgrade the CER fanouts, upgrade PSL IO Chassis (h1psl0 was restarted, followed by a DAQ restart), upgrade all CER slaves (at this point the master was reverted to V2), at EY we upgraded IRIG-B and slaves (skipping fanout), at MY we upgraded the PEM IO Chassis, at EX we did the same as EY and at MX the same as MY. 

All remaining front ends were then powered up. The DAQ was running correctly, but the NDS were slow to complete their startup. Additional master work in the MSR required a second round of restarts; at this point the comparators which had been skipped were upgraded and the CER fanouts were downgraded. Finally, after h1iopsush56 cleared a long negative IRIG-B error, all systems were operational.

During these rounds of upgrades FEC and DAQ were restarted several times.

Addition of Beam Splitter Digital Camera

Richard, Carlos, Jim

An analog camera was replaced with a digital video GIGE-POE camera at the Beam Splitter.

New ASC code

Sheila:

new h1asc code was installed and the DAQ was restarted.

Reconfigured RAID for ldas-h1-frames file system

Dan:

The ldas-h1-frames QFS file system was reconfigured for faster disk access. This is the file system exported by h1ldasgw0 for h1fw0's use. After the system was upgraded, we reconfigured h1fw0 to write all four frame types (science, commissioning, second and minute). As expected, h1fw0 was still unstable at the 10 minute mark, similar to the test when h1fw0 wrote to its own file system. h1fw0 was returned to its science-frames-only configuration.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:26, Wednesday 20 July 2016 (28515)DetChar, INJ, PEM, SYS
Just curious -- it's my impression that the point of "upgrading the timing system to the new V3 firmware" was to reprogram all timing system hardware's LED lights so as to not blink every second or two, because we suspect that those LEDs are somehow coupling into the IFO and causing 1 to 2 Hz combs in the interferometer response. 

The I/O chassis, IRIG-B, comparators, and RF amplifiers are a huge chunk of the timing system. Do we think that this majority will be enough to reduce the problem to negligible, or do we think that because the timing master and fanouts -- which are the primary and secondary distributors of the timing signal -- are still at the previous version that we'll still have problems?
richard.mccarthy@LIGO.ORG - 09:27, Wednesday 20 July 2016 (28520)
With the I/O chassis timing upgrade we removed the separate power supply from the timing slaves on the LSC in the corner and both EX and EY ISC chassis.  Hopefully the timing work will eliminate the need for the separate supplies.
keith.riles@LIGO.ORG - 12:09, Wednesday 20 July 2016 (28528)
Could you clarify that last comment? Was yesterday's test of changing the LED blinking pattern
done in parallel with removal of separate power supplies for timing and other nearby electronics?

jeffrey.kissel@LIGO.ORG - 12:29, Wednesday 20 July 2016 (28529)CDS, DetChar, INJ, PEM
Ansel has been working with Richard and Robert over the past few months testing out separate power supplies for the LEDs in several I/O chassis (regrettably, there are no findable aLOGs showing results about this). Those investigations were apparently enough to push us over the edge of going forward with this upgrade of the timing system.

Indeed, as Richard says, those separate power supplies were removed yesterday, in addition to upgrading the firmware (to keep the LEDs constantly ON instead of blinking) on the majority of the timing system. 
ansel.neunzert@LIGO.ORG - 10:38, Thursday 21 July 2016 (28554)
To clarify Jeff's comment: testing on separate power supplies was done by Brynley Pearlstone, and information on that can be found in his alog entries. Per his work, there was significant evidence that the blinking LEDs were related to the DARM comb, but changing power supplies on individual timing cards did not remove the comb. This motivated changing the LED logic overall to remove blinking.

I'm not sure whether the upgrades done so far will be sufficient to fix the problem. Maybe Robert or others have a better sense of this?

Notable alog entries from Bryn:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25772
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25861
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27202
keith.riles@LIGO.ORG - 18:39, Thursday 21 July 2016 (28562)
I have gone through and manually compared FScan spectrograms and
normalized spectra for the 27 magnetometer channels that are
processed daily: https://ldas-jobs.ligo-wa.caltech.edu/~pulsar/fscan/H1_DUAL_ARM/H1_PEM/fscanNavigation.html,
to look for changes following Tuesday's timing system intervention,
focusing on the lowest 100 Hz, where DARM 1-Hz (etc.) combs are worst.

Because of substantial non-stationarity that seems to be typical,
it's not as straightforward as I hoped it would be to spot a change
in the character of the spectra. I compared today's generated FScans (July 20-21)
to an arbitrary choice two weeks ago (July 6-7).

But these six channels seemed to improve w.r.t. narrow line proliferation:

H1_PEM-CS_MAG_EBAY_LSCRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Z_DQ
H1_PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EY_MAG_VEA_FLOOR_X_DQ  (before & after figures attached)

while these four channels seemed to get worse w.r.t. narrow lines:

H1_PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_X_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Y_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Z_DQ

In addition, many of today's spectrograms show evidence of broad
wandering lines and a broad disturbance in the 40-70 Hz band
(including in the 2nd attached figure).

Images attached to this comment
keith.riles@LIGO.ORG - 10:47, Wednesday 27 July 2016 (28672)
Weigang Liu has results in for folded magnetometer channels for UTC days July 18 (before changes), July 19-20 (overlapping with changes) and July 21 (after changes):

Compare 1st and 4th columns of plots for each link below.

CS_MAG_EBAY_SUSRACK_X - looks slightly worse than before the changes
CS_MAG_EBAY_SUSRACK_Y - periodic glitches higher than before
CS_MAG_EBAY_SUSRACK_Z - periodicity more pronounced than before

CS_MAG_LVEA_VERTEX_X -  periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Y -  periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Z -  periodic glitches higher than before

EX_MAG_EBAY_SUSRACK_X - looks better than before
EX_MAG_EBAY_SUSRACK_Y - looks better than before
EX_MAG_EBAY_SUSRACK_Z - looks slightly worse than before

EY_MAG_EBAY_SUSRACK_Y  - looks slightly better after changes
EY_MAG_EBAY_SUSRACK_Z - looks the same after changes
(Weigang ran into a technical problem reading July 21 data for EY_MAG_EBAY_SUSRACK_X)

A summary of links for these channels from ER9 and from this July 18-21 period can be found here.
H1 AOS
travis.sadecki@LIGO.ORG - posted 12:59, Tuesday 19 July 2016 - last comment - 16:58, Thursday 21 July 2016(28491)
ITM camera housing installation

Chandra, Gerardo, Travis

Today we attempted to install both the X and Y arm ITM camera housings.  We successfully installed both of the viewport adapters with the new 1/4" dowel pins.  The installation of the X arm camera housing went smoothly (see pics 1 and 2).  However, when attempting to install the Y arm housing, I noticed that the gap between the expansion bellows of the vacuum system and the camera housing side plates was non-existent (see pics 3 and 4), whereas the X arm has ~1/8" of clearance.  Gerardo measured the length of the bellows in both cases and noted that the Y arm bellows were ~1/4" shorter in length than the X arm, meaning that the height of the peaks of the bellows would be greater.  We'll have to modify the Y arm camera housing to accommodate this difference.  I hope LLO does not have the same issue.

Images attached to this report
Comments related to this report
dennis.coyne@LIGO.ORG - 16:58, Thursday 21 July 2016 (28571)

Calum checked with the designer, Joe Gleason. These units are not installed properly. They need to be mounted so that the base plate is perpendicular to a radial line from the center of the vacuum spool, as shown in:

By mounting in this orientation there should be ~1" of clearance from the bellows.

H1 ISC
kiwamu.izumi@LIGO.ORG - posted 15:05, Thursday 14 July 2016 - last comment - 11:46, Friday 05 August 2016(28414)
Quick analysis of shot noise from last night

The shot noise level from last night seems higher (worse) than the O1 level by 6%. Here is the spectrum:

You can see that the red trace (the one from last night) is slightly higher than the (post-) O1 spectrum. The 6% increment was estimated by dividing the two spectra at frequencies above 1200 Hz and taking the median of the ratio.
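For the record, a sketch of that estimate; freqs, asd_new, and asd_o1 are assumed to be pre-computed arrays on a common frequency vector:

import numpy as np

def excess_above(freqs, asd_new, asd_ref, fmin=1200.0):
    """Median ratio of two ASDs above fmin, as a fractional excess."""
    sel = freqs > fmin
    return np.median(asd_new[sel] / asd_ref[sel]) - 1.0

# excess_above(freqs, asd_last_night, asd_o1) -> ~0.06 for the 6% quoted here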

Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 16:41, Friday 15 July 2016 (28441)

Evan H. suggested looking at the null and sum channels to see whether the excess in shot noise comes from an additional technical noise or not. The attached shows the spectra of the null and sum channels for the same stretch as the spectrum in the above entry.

From this plot, it is evident that the excess is not due to technical white noise.

Images attached to this comment
kiwamu.izumi@LIGO.ORG - 20:15, Thursday 21 July 2016 (28576)CAL, ISC

It is quite likely that the calibration is wrong -- the true shot noise level can be smaller than what we have measured.

I have checked the calibration of the DARM signal by comparing it against the Pcal excitation signals. I used the same lock stretch as the above entry. The height of the Pcal line at 331.9 Hz in the DARM spectrum was found to be too high by 13% relative to the Pcal TX and RX PDs. See the attached. This means that we have overestimated the DARM signal at 331.9 Hz due to a calibration error. If we assume this is all due to an inaccurate optical gain, the actual shot noise level should be smaller than what we thought by the same 13%, corresponding to a ~7% smaller shot noise level than in O1. We need to nail down whether this is an error in the optical gain or in the cavity pole in order to further evaluate the calibration error.
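A sketch of the line-height comparison, with a hypothetical helper; it assumes calibrated gwpy spectra for DARM and the Pcal RX PD in common displacement units:

import numpy as np

def line_height(asd, f0=331.9, df=0.05):
    """Peak ASD value within df Hz of the line frequency f0."""
    sel = np.abs(asd.frequencies.value - f0) < df
    return asd.value[sel].max()

# asd_darm = darm.asd(fftlength=100, overlap=50)     # gwpy TimeSeries.asd
# asd_pcal = pcal_rx.asd(fftlength=100, overlap=50)
# print(line_height(asd_darm) / line_height(asd_pcal))  # ~1.13 in this case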

Note that the Pcal Y uses a fresh set of the calibration factors that was updated a month ago (27983). The ratio of RX PD over TX PD was found to be 1.002 at 331.9 Hz and this makes me think that the Pcal Y calibration is reliable.

Images attached to this comment
shivaraj.kandhasamy@LIGO.ORG - 09:53, Friday 05 August 2016 (28863)CAL

Here I have attached plots of the optical gain during this lock as well as a few locks randomly picked during the month of July. I used the O1 model as reference (I wasn't quite sure whether there was a new time-zero reference after O1 with all kappas set to 1). The first plot, showing kappa_C over a few locks during July, shows that the kappa_C values were close to 1. However, we note that the gain in the inverse sensing function during July was set to 1.102e-6, compared to 8.834e-7 during O1 (the reference model has changed). At high frequencies, the relation between corrected h(t) and the h(t) recorded in the front end is

corrected h(t) ~ h(t) / kappa_C ~ inv_gain * DARM_ERR / kappa_C

So for the same DARM_ERR, a kappa_C of 1 during July 2016 corresponds to 0.8 * h(t)  (= 8.834e-7 / 1.102e-6) relative to O1. This assumes that there wasn't any change in the gain of the electronic chain on the OMC side.  The second plot shows the trend of kappa_C during the lock Kiwamu was looking at. An interesting thing to note here is that there was a ~10% change in the optical gain during this lock.  Kiwamu's plot corresponds to the time of the second peak we see in the plot (a coincidence!). The kappa_C value of 1.15 suggests that the measured h(t) in the above alog would correspond to 0.70 ( = 8.834e-7/1.102e-6/1.15) times the h(t) we would have measured during O1. Since the trend plot shows that there were times in the same lock during which the kappa_C values were different, I tried to compare the power spectrum between those times. The third plot shows that comparison. The mystery is that even though the ratio between the 331.9 Hz photon calibrator line and the DELTAL_EXTERNAL line is ~10% different between the times compared (and hence corresponds to a ~10% different optical gain), the shot noise level looks the same! We couldn't get the exact cavity pole frequencies because at this point I don't have the new LHO DARM model function, but the trend indicated that it didn't change during the lock. For completeness we also added the actuation strength variation during this time. The values are close to what we expect. Since the 35.9 Hz ESD line we used during O1 wasn't available, for the actuation strength comparison we used the 35.3 Hz ESD line.
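The scaling quoted above is simple arithmetic; spelled out:

inv_gain_o1, inv_gain_jul = 8.834e-7, 1.102e-6  # inverse sensing gains
kappa_c = 1.15                                  # measured during this lock

print(inv_gain_o1 / inv_gain_jul)            # ~0.80: gain change alone
print(inv_gain_o1 / inv_gain_jul / kappa_c)  # ~0.70: including kappa_C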

EDIT: We corrected the earlier estimate of high frequency h(t) level change.

Images attached to this comment
H1 CAL (ISC)
kiwamu.izumi@LIGO.ORG - posted 23:15, Wednesday 13 July 2016 - last comment - 16:38, Thursday 21 July 2016(28396)
A sign error in online CAL-CS calibration fixed

Stefan, Matt, Kiwamu,

The online calibration, aka CAL-CS, is more accurate now; the sign of the simulated ETMY L3 stage (CAL-CS DARM_ANALOG_ETMY) was found to be wrong and we fixed it. Fixing the error resulted in an improved noise level around 100 Hz in CAL-CS. This should not affect the GDS pipeline calibration. The attached shows a comparison of CAL-CS before the fix and an offline calibration using DARM IN1 and the full DARM model (28179). It is clear something wrong was going on in 40 - 200 Hz.

What we changed:

This fix gave us an actuator model which is consistent with the measurement (28179), in the sense that the PUM and TST stages have a relative phase of 180 deg at high frequencies. Also, traditionally, when ETMY had a positive bias, the gain of the L3 stage used to be set to -1 in the O1 era (see for example 25575). Therefore today's fix is consistent with the O1 era too. One thing I still don't understand is the relative calibration difference between GDS and CAL-CS (summary page). The relative magnitude should show a factor of ~2 difference around 100 Hz assuming the sign error was only in CAL-CS, but it does not show such a big difference. Not sure why.

Images attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 08:07, Thursday 14 July 2016 (28403)

The online GDS (called C00 during O1) calculation uses CAL_DELTAL_CTRL and CAL_DELTAL_RESIDUAL to produce h(t). Compared to the front end, it applies better relative timing between the two signals and other high-frequency corrections. Since CAL_DELTAL_CTRL is obtained after the application of the ANALOG_ETMY_L3 filter, the online GDS will have the same problem as the front-end DARM signal. Only the offline GDS (called C01, C02 during O1) uses CAL-DARM_ERR and CAL-DARM_CTRL along with actuation and sensing models to produce h(t), and hence would have been different. I am not sure whether we have produced that at this point.

kiwamu.izumi@LIGO.ORG - 09:08, Thursday 14 July 2016 (28405)

Thanks, Shivaraj.

You are right. I misinterpreted the subway diagram (G1501518-v10) last night. I agree that C00 must have the same sign error and therefore what we saw in the summary page is correct.

kiwamu.izumi@LIGO.ORG - 16:38, Thursday 21 July 2016 (28570)

The script that produced the comparison plot has been saved to the SVN, so that it can be reused in the future. The code lives at

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/ControlRoomCalib/H1CalibDoubleCheck.m
