Reports until 18:35, Thursday 21 July 2016
H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 18:35, Thursday 21 July 2016 (28574)
PSL Chiller Check - FAMIS Task
   Added 150ml water to the Crystal chiller. Diode chiller was OK. Noted no new contamination or debris in either chiller filter. 

   Trended the pressures and flows for the Diode and Crystal chillers. The pressures are still slowly trending upward and the flows are trending downward; both are changing by very small amounts. I have the new (higher flow and less restrictive) filters for the chiller room in-line filters. Plan to swap these at the next maintenance window.
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 18:34, Thursday 21 July 2016 (28573)
Lens for POPAIRB

[Jenne, Cheryl, Sheila]

We placed a lens in front of POPAIR B tonight, so that we are no longer overfilling the BBPD.  We seem to be able to lock okie dokie, so I think everything is okay. 

H1 SEI (ISC)
jim.warner@LIGO.ORG - posted 17:46, Thursday 21 July 2016 (28572)
More earthquake survival data

Last night during one of the lock stretches there was a small earthquake. It wasn't big enough to break the lock, but it was big enough to show up in a number of channels in the IFO.  During the earthquake last night, no change was made to the seismic configuration, so it makes for a good comparison with the "earthquake" configuration from my alog on June 29, 28050. The first two plots are the minute trends of RMS 30-100 mHz Z LVEA and ETMX motion (best I could do for both days) for the 10 minutes of data I'm comparing between the two earthquakes.  Keep in mind that the RMS motion in this band is higher for the June 28 earthquake than for last night's.

The next plot shows a version of DARM for 3 different times: red is June 28, green is last night's earthquake, and pink is 10 minutes of quiet time before last night's earthquake. You could be forgiven for looking at this plot and wondering why we don't run the earthquake configuration all the time. The current "windy" configuration should do better during high microseism, where from O1 we know we need inertial isolation in the .1-.3 Hz band, which the earthquake configuration won't provide. The next plot shows IMC-F, which is a measure of CARM (the units aren't really calibrated, just scaled to be similar to the other spectra); the color scheme remains the same: red is June 28, green is last night's earthquake, and pink is 10 minutes of quiet time before last night's earthquake. CARM (IMC-F) and DARM are both lower for the bigger June earthquake (where we used the seismic earthquake configuration) than for the smaller earthquake last night. I think this shows that for low microseism, we want to use this configuration. More thought will be required for a winter configuration, but suppressing the microseism while staying locked to the ground below 100 mHz will be hard.

We are working with MIT to get SEISMON/Terramon transplanted here, so we can get more reliable earthquake warnings. Right now the earliest and most reliable warning we get comes from watching the IMC and STS time series on the wall FOMs.

Images attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 16:48, Thursday 21 July 2016 - last comment - 15:33, Friday 22 July 2016(28567)
H1 ITMX Earthquake Sensitivity Study--Coil Driver Glitch causes lockloss

Just a theory here...But I believe the H2 coil driver needs attention.

The H2 drive heads off, as do the RZ and X loops, and H2 only directly affects RZ and X.  If it were the loops, all of the actuator drives would respond.  Page 11 of the attachment shows the CD I & V INMONs going wacky, impacting the RZ and X loops (page 4) one or more seconds before the model responds.

I know this is long winded; you should see the stuff in the text file not printed here, as I went down other rabbit holes.

ITMX ISI tripped Wednesday July 13 AM from an M6.3 EQ in New Zealand. No other platforms tripped.  The extra sensitivity of ITMX to EQs is not new.  Based on the ground seismometer STS2-B, it looks like the trip occurred with the arrival of the S-wave (shear, not surface); see page 1 of the attachment.  Based on the EPICS records, ST2 tripped first; based on the FIRSTTRIG_LATCH value, the Actuators hit their rail first.

Page 2: 80 seconds with the Actuator drives. The upper right graphs have the horizontals and verticals on the same plot. Given that there is no RX or RY loop closed, it makes sense that the three vertical drives are exactly the same. Leading up to the trip, the H1 and H3 drives appear to be getting a greater request from the loops compared to the corner-2 horizontal. The H2 GS13 is parallel to the X arm and so contributes nothing to the Y DOF. The verticals are well behaved. However, a couple of seconds before the EPICS INMON registers the state change (1 to 2), the H2 drive shoots almost straight to the rail. This delay makes sense with the 8192 saturations required before tripping on this 4k model. The upper left plot zooms out on the H2 drive as it ramps its way up to e7 before the watchdog trip takes it to zero. The H1 and H3 drives continue along until the trip, when they start to ring. After the trip the three horizontals behave similarly.  You might believe there is a kink in these other drives a few seconds earlier,...ehhh.
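(A quick sanity check on that delay, assuming the watchdog counts 8192 consecutive saturated samples at the 4096 Hz model rate:

    t_trip ≈ 8192 samples / 4096 samples/s = 2 s

which is consistent with the couple of seconds between the drive railing and the EPICS state change.)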

So the behavior of Drive2 may be of interest.  Is this a coil driver problem or does it originate from upstream?

Upstream are the damping, isolation, and feedforward paths. There is no indication that the vertical loop is a problem, and feedforward only goes to Z; there are no DQ channels for the damping loops—they will have to wait for further upstream examination. This leaves the isolation loops.

Page 3 shows 80 seconds of the Isolation loop inputs. The ITMX and ITMY IN1s are overlain on the same plots. Also shown are the horizontal ground motions; the vertical is uninteresting. The ground motion signals overlain on the isolation plots are scaled and shifted so as to be seen. This shows that both ITMX and ITMY X & Y DOFs are doing about the same and are fairly well correlated with the input ground motion; hence the control loops are pretty much the same. I've also verified this looking at the Matlab Controller figures. Around xx8505 gps, the Y ground motion quickly increases; this may be the arrival of the S-wave. The Y loop responds, but within several seconds the response doesn't seem as reasonable and looks to be ringing up and getting behind the curve. The X gets hit with this some 30 seconds later. I don't know if the mostly flattish nature of the RZ loops is or is not remarkable. As well, is the scale of the noise/buzz difference between ITMX and Y worth noting? Conversely, the Z DOFs seem to be responding to the same input but certainly drive harder on ITMX.

 

So the H2 Drive goes wacky before the others. The X and RZ loops feed the H2 Drive, and both those ISOs head off to the outer reaches. The Y loop sees a blip and then gets buzzy/noisy; the Z sees nothing. The H2 Drive is pushing the platform, and the X and RZ motion is seen back at the ISO_IN1. It makes sense that the Y DOF would also see something if the H2 actuator is the only one pushing. The push on the H2 Drive could produce canceling effects on the H1 and H3 sensors and not push that DOF's ISO off the page. If the RZ loop were driving it, the response would be the same on all horizontal GS13s, but they are not responding the same.

On page 4 the Drives and ISO INs are plotted together for 4 seconds. Clearly the H2 Drive leads the other drives, as already established. On the top plot of the ISOs, the X and RZ loops respond dramatically compared to the blip on Y. Further, the ISOs are scaled by 5 to emphasize their behavior before the Drive goes off. The Y loop looks to be trundling along, but X and RZ (RZ maybe more clearly) change what they had been doing.

This might suggest the loops are causing this but the fact that only the H2 Drive reacts implies that circuit is more sensitive.

On page 5, only ISOs X and RZ are shown with the H2 Drive. On the top plot, the drive is scaled down to be visible with the zoomed in ISO loops. The RZ loop really seems to be headed somewhere bad before the drive freaks out. Again however, if the loop was the source, the other drives would be reacting similarly and they are not.

On Page 6, one sees the ITMY loops don't respond like the ITMX X and RZ loops--not caused by the common ground motion.

On pages 7 & 8, an interesting beat is seen on the HEPI L4C RZ loops. Otherwise nothing else on the HEPI IPS or L4Cs of either ITM.

Page 9, the GS13s of ITMY are overlain on the ITMX GS13s for 20 seconds showing the motion on the platform is nearly identical on the two ITMs.

On page 10, the horizontal GS13s of ITMX are plotted together (left middle graph) showing the relative magnitudes. This implies only corner 2 is being driven and the others are just seeing the rotation caused by the one corner.

Page 11 plots the H2 CD INMONs, 80 seconds to show the normal before some badness at the trip time.

Page 12 zooms into 10 seconds showing the Coil Driver current and voltage making some really ugly step and excursion one or more seconds before the RZ loop starts to head off.  The model did not tell the coils to do this or it would be seen on the DRIVE.  The RZ loop and the X loop (page 4) are responding to what the coil driver is doing to the platform.  I'm pretty sure this tells us the coil driver is the source of this problem.

Maybe the coil driver is marginal in some way and when the extra demand of dealing with the work of the earthquake motion occurs, something lets go.

Non-image files attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:26, Friday 22 July 2016 (28583)

NDS2 Data Storage Timing Issue???

Attached is a ddt plot of the H2 INMONs, the ground motion, and the H2 MASTER DRIVE (the output of the model.)  I had to go to NDS2 extraction as the full data is now gone from h1nds1.

On this ddt plot, the hump up to ~500 cts on the I mon (RED) starts at about 10 seconds, whereas the rapid increase of MASTER_H2_DRIVE (violet?) starts around 9-ish seconds.

On page 11 of the pdf above, the ramp up of the Drive channel (center graph) clearly happens after the step in the I_INMON channel (upper middle channel.)

The INMON channels are slow whereas the DQ channels are fast...

I suspect my dataviewer looks are correct but...

Images attached to this comment
hugh.radkins@LIGO.ORG - 15:33, Friday 22 July 2016 (28590)

Okay--false alarm on the timing, but a warning to users of DDT via NDS2 looking at time series with mixed data rate (16 Hz & 4 kHz) channels---these are the conditions I was working under.

See the attached plot, where the red trace is a 16 Hz FE channel, the reference in green is a 4 kHz channel, and the measurement setup has the stop frequency at 1 Hz and a BW of 0.01 Hz.  Lower in the attachment are the Fourier tools settings for the blue data, where the only thing changed from the green is the stop frequency, set to 7 Hz.  The reason for the change in the shape of the data may be obvious to many of you much sharper in your signal and Fourier analysis than I, but it should serve as a warning to all those using DDT to display time series.  I had to slow things way down because of the 16 Hz channel, and doing so greatly impacts the accompanying fast channel.  Stop frequencies of 2, 3 & 4 Hz all produce the brown trace; 5, 6 & 7 Hz all produce the blue trace.  7 Hz was as high as it would go.
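One way to sidestep this is to pull the raw data at its native rates and plot the time series outside of DDT. A minimal sketch, assuming gwpy and NDS2 access from a control-room machine; the channel names and GPS times below are hypothetical placeholders, not the channels from this entry:

    # Fetch a slow (16 Hz) and a fast (4 kHz) channel at their native rates via NDS2.
    # Channel names and GPS times are hypothetical placeholders.
    from gwpy.timeseries import TimeSeriesDict

    channels = ['H1:SOME-SLOW_CHANNEL_INMON',    # 16 Hz EPICS record (placeholder)
                'H1:SOME-FAST_CHANNEL_DQ']       # 4 kHz DQ channel (placeholder)
    start, end = 1152500000, 1152500080          # placeholder GPS span

    data = TimeSeriesDict.fetch(channels, start, end, host='nds.ligo-wa.caltech.edu')
    for name, ts in data.items():
        print(name, ts.sample_rate)              # each series keeps its native rate
        ts.plot().savefig(name.replace(':', '_') + '.png')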

Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Thursday 21 July 2016 (28565)
Ops Day Shift Summary

TITLE: 07/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:  It was one of those days for locking where everything seems to go wrong.  I struggled with initial alignment most of the day, which was interrupted by various issues such as PSL crashing and FSS oscillating.  ISIs were reset this morning, so the alignment of the arms was very bad and required a lot of work to recover.
LOG:
 

18:07 PSL trips off, called Peter

18:14 Peter toggling noise eater

22:12 Finally made it to DC_Readout

22:37 Nutsinee to LVEA racks

22:43 Nutsinee out

Dave restarting a bunch of models related to PI upgrades.  Several other inconsequential (to locking) tasks which I didn't record due to being busy trying to lock.

H1 SEI (SEI)
cheryl.vorvick@LIGO.ORG - posted 15:52, Thursday 21 July 2016 (28566)
ISI CPS Noise Spectra Check - Weekly - BSCs and HAMs
Images attached to this report
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 14:51, Thursday 21 July 2016 (28563)
ISS 3rd loop measured at 40W

Yesterday, I measured the ISS 3rd loop while we were locked at 40W.  This is motivated by the fact that we can't increase past 40W without seeing this loop go unstable (see alog 28482). 

This measurement was taken before the PRM was moved, so the recycling gain was very low (~25).  Still to do is measure the 3rd loop after we recover the recycling gain. 

The screenshot shows Kiwamu's 20W measurement in blue (alog 27940; note that the phase is wrong by 180 deg), and the new 40W low recycling gain measurement in red (red and the unseen green are the same).  Interestingly, the gain at the peak is much higher at 40W than it is at 20W.  I'm not totally sure why that is.  The loop still looks stable though, so I don't know why it would go crazy with another couple of watts.

Images attached to this report
Non-image files attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 12:58, Thursday 21 July 2016 (28558)
h1fw0 went stable yesterday

Jonathan, Jim, Dave:

Yesterday morning we increased the log level on h1fw0's daqdrc from 2 to 10. At this point it ran for over an hour writing all but commissioning frames. I then restarted it, now with a log level of 30 and writing both commissioning and science frames (but not trends). h1fw0 has not crashed since then through to today's DAQ restart (23 hours). Interestingly the log file has not increased its verbosity, and we are now seeing random retransmission requests similar to what LLO used to see.

In the meantime h1fw1 continues to restart. We increased its log level from 2 to 30; it still restarts roughly once an hour on average (today's restarts are shown below). h1fw1 is writing all four types of frames.

We are building a third frame writer (h1fw2) which will be used to test new daqd code, for example mutex control of the writing threads to prevent more than one file from being written at a time.
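To illustrate the idea only (the real daqd is not Python; this is just a sketch of the serialization scheme under that caveat):

    # Sketch: serialize the frame-writing threads with a mutex so that only one
    # file is being written at any moment (illustrative only, not the real daqd code).
    import threading

    write_lock = threading.Lock()

    def write_frame(path, payload):
        with write_lock:                  # only one writer thread in here at a time
            with open(path, 'wb') as f:
                f.write(payload)

    frame_types = {'science': b'...', 'commissioning': b'...',
                   'second-trend': b'...', 'minute-trend': b'...'}
    threads = [threading.Thread(target=write_frame, args=('/tmp/%s.gwf' % name, data))
               for name, data in frame_types.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()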

H1 CDS (DAQ, SUS)
david.barker@LIGO.ORG - posted 12:46, Thursday 21 July 2016 (28557)
New SUS PI models installed

Carl, Dave:

This morning we installed the latest SUS PI model changes. The new system uses a brand new model (h1susprocpi) on the h1oaf0 machine to run additional signal processing. Ideally this model should be located on the h1lsc0 machine, but this currently does not have any spare cores whereas h1oaf does. We may upgrade h1lsc0 to a faster 10-core machine and move the model at a later time.

Additional Dolphin channels have been added to send channels between h1susprocpi and h1omcpi, h1omc, h1susitmpi. At the RFM IPC level nothing is changed, only corner station dolphin mods.

The models changed are: h1susetmxpi, h1susetmypi, h1pemex, h1pemey, h1omc, h1omcpi, h1susitmpi and h1susprocpi (new)

The details of h1susprocpi:

front end = h1oaf0

rate = 2048Hz

dcuid = 71

cpu-num = 5

DAQ was restarted to resync to the new models.

Still to do:

Dave: add new model to overviews and CDS, set SDF to monitor everything, make OBSERVE.snap a link to safe.snap

Carl: create and load all filters

H1 PSL (PSL)
peter.king@LIGO.ORG - posted 11:26, Thursday 21 July 2016 (28555)
Laser shutdown
Travis called, said the laser had just tripped out.

    From the status screen it looks like it was a flow rate error in the power meter cooling
circuit - the same suspected reason as last time.  This in turn trips the power watchdog.
Hopefully this is a sensor problem and not one of hardening of the arteries.

    Note the laser shutter(s) remain open.

   The injection relock counter was reset to 0 on both the Beckhoff PC and the MEDM screen.
Images attached to this report
H1 ISC (OpsInfo)
sheila.dwyer@LIGO.ORG - posted 17:28, Wednesday 20 July 2016 - last comment - 19:11, Thursday 21 July 2016(28538)
SRM dither alignment loops working

We closed some alignment loops for SRM using a dither and demodulating POP90 this morning.  These loops seem to have a UGF below 100 mHz, so we may want to increase their gain, but they seem to be working well for now.  They come on in the guardian during the DRMI ASC and are left on.  They maintain POP90 (and the ratio of AS90/POP90) at a reasonable level as we go through the CARM offset reduction, engaging the soft loops, and increasing the power.

We saw an instability that seemed to be a cross coupling between the SRM and SOFT yaw loops. Jenne increased the gain in the soft yaw loops by a factor of 10, which seems to have taken care of the problem and is now in the guardian.

As long as this keeps working, operators should no longer have to adjust SRM. 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 19:11, Thursday 21 July 2016 (28575)

Somehow, running this loop during acquisition was causing an HLTS bounce mode (28 Hz-ish, probably PR3) to ring up, which also saturated the quad L2 drives and therefore caused violins to ring up.  We are now using the normal 36 WFS during DRMI, turning them off, and turning on the dither loop in the ENGAGE_SRC_ASC state once we reach full lock.  This doesn't ring anything up.

H1 DetChar (DetChar, PEM)
paul.schale@LIGO.ORG - posted 16:01, Wednesday 20 July 2016 - last comment - 16:27, Thursday 21 July 2016(28534)
A class of O1 blip glitches is correlated with low relative humidity in the buildings

We have been studying the list of blip glitches that Miriam Cabero Müller generated for O1. We noticed that the rate of blip glitches increased dramatically during two time periods, see figure 1. We first checked if the glitch rate might be correlated with outside temperature, in case the blip glitches were produced by beam tube particulate events. The correlation with outside temperature was strong, but, for beam tube particulate, we expected it to be stronger with rate of temperature change than with temperature, and it was not. So we checked relative humidity and found that inside relative humidity was well correlated with outside temperature (and glitch rate), most likely because of the extra heating needed in cold spells.  A plot of the blip glitch rate and RH inside the CS mass storage room is attached.  

While the correlation with inside relative humidity is not better than with outside temperature, we plot inside relative humidity because we can better speculate on reasons for the correlation. Dry conditions may lead to the build up and discharge of static electricity on electronics cooling fans. Alternatively, there may be current leakage paths that are more likely to discharge in bursts when the pathways dry out. While this is, at best, speculation, we set up a magnetometer near the HV line for the EY ESD to monitor for possible small short fluctuations in current that are correlated with blip glitches. At the same time, we suggest that, as risk mitigation, we consider humidifying the experimental areas during cold snaps.

The low-humidity correlated blip glitches may represent a different population of glitches because they have statistically significantly smaller SNRs than the background blip glitches.  We analyzed the distribution of SNR (as reported by pycbc) of the blip glitches during three time periods – segments 1, 2, and a relatively quiet period from October 5 – October 20 (segment 3).  This gave approximately 600 blip glitches for each segment.  Figure 2 is a histogram of these distributions.

To determine if these distributions are statistically different, we used the Mann-Whitney U test.   Segments 1 and 2 matched, reporting a one-sided p-value of 0.18.  The distribution in SNR for segment 3 - the low glitch rate times -  did not match either segment 1 or 2, with p-values of 0.0015 and 2.0e-5, respectively.  Thus we can conclude that the distributions of 1 and 2 are statistically significantly different from 3.
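For anyone who wants to repeat the comparison, a minimal sketch with scipy; the input files are hypothetical placeholders for the per-segment SNR lists, and the 'alternative' option controls whether the reported p-value is one- or two-sided:

    # Pairwise Mann-Whitney U tests on the blip-glitch SNR distributions.
    # The three input files are hypothetical placeholders for the per-segment SNR lists.
    import numpy as np
    from scipy.stats import mannwhitneyu

    snr = {seg: np.loadtxt('segment%d_snr.txt' % seg) for seg in (1, 2, 3)}

    for a, b in [(1, 2), (1, 3), (2, 3)]:
        stat, p = mannwhitneyu(snr[a], snr[b], alternative='two-sided')
        print('segment %d vs %d: U = %g, p = %.2g' % (a, b, stat, p))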

We are currently examining the diurnal variations in the rate of these blip glitches, and will post an alog about that soon.

 

Paul Schale, Robert Schofield, Jordan Palamos

 

Images attached to this report
Comments related to this report
matthew.evans@LIGO.ORG - 22:06, Wednesday 20 July 2016 (28542)

This is probably something you already checked, but could it be just that there is more heating going on when the outside temperature is low? More heating would mean more energy consumption, which I guess could bother the electronics in a variety of ways that have nothing to do with humidity (magnetic fields, power glitches, vibration, acoustics in the VEAs, etc.).  Which other coupling mechanisms have you investigated?

robert.schofield@LIGO.ORG - 22:18, Wednesday 20 July 2016 (28543)

In a sense we are suggesting that the effect is due to the extra heating during the cold snaps; the humidity is just our best guess as to how the extra heating affects the electronics. We think it is unlikely to be temperature, since the temperature in the buildings changed little and did not correlate as well as humidity. Our understanding is that DetChar and others have looked carefully for, and not found, coincident events in auxiliary channels, which would argue against magnetic or power glitches from heaters. The heaters don't increase vibration or sound levels by much; the fans work continuously. The humidity, however, changed a lot.

john.zweizig@LIGO.ORG - 11:19, Thursday 21 July 2016 (28556)DAQ
Could the decrease in humidity affect the electronics cooling efficiency by changing the heat capacity of the cooling air? Is there any recorded direct measurement of the electronics heat-sink temperatures or exhaust temperature?
brian.oreilly@LIGO.ORG - 13:31, Thursday 21 July 2016 (28559)
If you want to do a similar study at L1 then one of the best periods (in terms of a fairly quick change in RH) is the period from Nov. 20th to Nov. 27th
2015. Of course RH values here in the swamps are much higher.

Where is the "blip glitch" list? I followed this link: https://wiki.ligo.org/viewauth/DetChar/GlitchClassificationStudy but there's nothing past
Sept. 23 there.
Images attached to this comment
paul.schale@LIGO.ORG - 14:08, Thursday 21 July 2016 (28560)

The list of blip glitches was emailed out by Miriam Cabero Müller.  They're stored at https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/full_O1/, and Omega scans for H1 are here: https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/blips/wscan_tables/  and for L1 here: https://ldas-jobs.ligo-la.caltech.edu/~miriam.cabero/blips/wscan_tables/.

paul.schale@LIGO.ORG - 16:27, Thursday 21 July 2016 (28568)

John:

Humidity has very little effect on the thermal properties of air at the relevant temperatures and humidities (20-40 degrees C, 0-20 % RH).  On pages 1104 and 1105 of this paper (http://www.ewp.rpi.edu/hartford/~roberk/IS_Climate/Impacts/Resources/calculate%20mixture%20viscosity.pdf), there are plots of specific heat capacity, viscosity, thermal conductivity, and thermal diffusivity.

H1 TCS (ISC)
jeffrey.kissel@LIGO.ORG - posted 12:04, Wednesday 20 July 2016 - last comment - 15:23, Thursday 21 July 2016(28527)
TCSY Delivered Laser Power -- The Day After
J. Kissel, E. Hall

Checking in on the power delivered to the ITMY compensation plate after the strange drop in TCSY's front-end laser power yesterday (see LHO aLOG 28506), it looks like the laser power has mostly recovered. The power, as measured by the pick-off beam just before the up-periscope into the vacuum system (H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT), is roughly stable at 0.3075 [W], where it used to deliver a really stable 0.312 [W].

I attach two zoom levels of the same 4-day trend.

There's also some weird, 10-min period feature in the *minimum* of the minute-trend, where the reported power drops to 0.16 [W]. Given its periodicity, one might immediately suspect dataviewer and data-retrieval problems, but one can see in the un-zoomed trend that this half-power drop has been happening since the drop-out yesterday and tracks the reported laser power even before the delivered power was increased back to nominal.
Images attached to this report
Comments related to this report
alastair.heptonstall@LIGO.ORG - 16:29, Wednesday 20 July 2016 (28537)

I'm wondering if this weird behaviour is due to the RF driver - we should try to swap in the spare driver soon to check this, because if the power output is really glitching low like that then it's likely to cause issues for commissioning, or possibly to fail altogether.

The temperature trend for the laser doesn't give any signs that it might have overheated.

nutsinee.kijbunchoo@LIGO.ORG - 15:23, Thursday 21 July 2016 (28564)

The delivered power was recovered because I added an offset to the RS calibration so that it allows the right amount of power through. The laser itself is still outputting 42W.

H1 SEI (PEM, SEI)
david.mcmanus@LIGO.ORG - posted 23:38, Tuesday 19 July 2016 - last comment - 16:34, Thursday 21 July 2016(28507)
Newtonian Noise Array set-up (Part 1)

David McManus, Jenne Driggers

Today I set up 16 of the sensors for the corner station Newtonian noise L4C array. These were the 16 that were most out of the way and least likely to be tripping hazards, mostly focused in the 'Beer Garden' area and around the arms. The channels I connected were: 4,8,13,18,19,20,21,22,23,24,25,26,27,28,29,30. The sensors corresponding to these channels are included in the table attached to this report. The sensors are stuck to the ground using a 5 minute epoxy, and a foam cooler is placed on top of each one and taped to the ground. These foam coolers have a small hole cut near the base so that the cable can get out without touching the cooler (shown in one of the pictures). The cut surfaces are sealed with tape to prevent foam from flaking off onto the LVEA floor. The cables have basic strain relief by taping them to the ground on either side of the foam cooler, which also helps to ensure that the cable is not touching the cooler. I've attached two pictures showing what the sensors look like with and without the cooler. 

As a side note sensor 26 is quite difficult to access as it is placed almost directly beneath a vacuum tank. When it eventually needs to be removed a small person may be required to fetch it. The final attached photo shows how it is positioned (BSC7). I placed it by carefully stepping into the gap between the pipes that run along that section of the arm and then crawling under the vacuum tank support.

Images attached to this report
Comments related to this report
david.mcmanus@LIGO.ORG - 16:34, Thursday 21 July 2016 (28569)

The sensor channel names are H1:NGN-CS_L4C_Z_#_OUT, where # is the channel number I reference in this post. Jenne made an MEDM screen which can be found under the SEI tab, and then 'Newtonian Seismic Array'.
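A trivial sketch (Python; the channel list is the one from the parent entry) for building the names of the channels connected so far:

    # Build the NGN channel names for the sensors connected in the parent entry.
    connected = [4, 8, 13, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
    channels = ['H1:NGN-CS_L4C_Z_%d_OUT' % n for n in connected]
    print('\n'.join(channels))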

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 23:35, Tuesday 19 July 2016 - last comment - 10:47, Wednesday 27 July 2016(28508)
CDS maintenance summary

Upgrade of Timing Firmware

Daniel, Ansel, Jim, Dave

Most of today was spent upgrading the entire timing system to the new V3 firmware. This did not go as smoothly as planned, and took from 9am to 6pm to complete. By the end of the day we had reverted the timing master and the two CER fanouts to the original code (the end station fanouts were not upgraded). We did upgrade all the IRIG-B fanouts, all the IO Chassis timing slaves, all the comparators and all the RF Amplifiers.

The general order was: stop all front end models and power down all front end computers, upgrade the MSR units, upgrade the CER fanouts, upgrade PSL IO Chassis (h1psl0 was restarted, followed by a DAQ restart), upgrade all CER slaves (at this point the master was reverted to V2), at EY we upgraded IRIG-B and slaves (skipping fanout), at MY we upgraded the PEM IO Chassis, at EX we did the same as EY and at MX the same as MY. 

All remaining front ends were now powered up. The DAQ was running correctly but the NDS servers were slow to complete their startup. Additional master work in the MSR required a second round of restarts; at this point the comparators which had been skipped were upgraded and the CER fanouts were downgraded. Finally, after h1iopsush56 cleared a long negative IRIG-B error, all systems were operational.

During these rounds of upgrades FEC and DAQ were restarted several times.

Addition of Beam Splitter Digital Camera

Richard, Carlos, Jim

An analog camera was replaced with a digital video GIGE-POE camera at the Beam Splitter.

New ASC code

Sheila:

new h1asc code was installed and the DAQ was restarted.

Reconfigured RAID for ldas-h1-frames file system

Dan:

The ldas-h1-frames QFS file system was reconfigured for faster disk access. This is the file system exported by h1ldasgw0 for h1fw0's use. After the system was upgraded, we reconfigured h1fw0 to write all four frame types (science, commissioning, second and minute). As expected, h1fw0 was still unstable at the 10 minute mark, similar to the test when h1fw0 wrote to its own file system. h1fw0 was returned to its science-frames-only configuration.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:26, Wednesday 20 July 2016 (28515)DetChar, INJ, PEM, SYS
Just curious -- it's my impression that the point of "upgrading the timing system to the new V3 firmware" was to reprogram all timing system hardware's LED lights so as to not blink every second or two, because we suspect that those LEDs are somehow coupling into the IFO and causing 1 to 2 Hz combs in the interferometer response. 

The I/O chassis, IRIG-B, comparators, and RF amplifiers are a huge chunk of the timing system. Do we think that this majority will be enough to reduce the problem to negligible, or do we think that because the timing master and fanouts -- which are the primary and secondary distributors of the timing signal -- are still at the previous version that we'll still have problems?
richard.mccarthy@LIGO.ORG - 09:27, Wednesday 20 July 2016 (28520)
With the I/O chassis timing upgrade we removed the separate power supply from the timing slaves on the LSC in the corner and both EX and EY ISC chassis.  Hopefully the timing work will eliminate the need for the separate supplies.
keith.riles@LIGO.ORG - 12:09, Wednesday 20 July 2016 (28528)
Could you clarify that last comment? Was yesterday's test of changing the LED blinking pattern
done in parallel with removal of separate power supplies for timing and other nearby electronics?



 
jeffrey.kissel@LIGO.ORG - 12:29, Wednesday 20 July 2016 (28529)CDS, DetChar, INJ, PEM
Ansel has been working with Richard and Robert over the past few months testing out separate power supplies for the LEDs in several I/O chassis (regrettably, there are no findable aLOGs showing results about this). Those investigations were apparently enough to push us over the edge of going forward with this upgrade of the timing system. 

Indeed, as Richard says, those separate power supplies were removed yesterday, in addition to upgrading the firmware (to keep the LEDs constantly ON instead of blinking) on the majority of the timing system. 
ansel.neunzert@LIGO.ORG - 10:38, Thursday 21 July 2016 (28554)
To clarify Jeff's comment: testing on separate power supplies was done by Brynley Pearlstone, and information on that can be found in his alog entries. Per his work, there was significant evidence that the blinking LEDs were related to the DARM comb, but changing power supplies on individual timing cards did not remove the comb. This motivated changing the LED logic overall to remove blinking.

I'm not sure whether the upgrades done so far will be sufficient to fix the problem. Maybe Robert or others have a better sense of this?

Notable alog entries from Bryn:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25772
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25861
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27202
keith.riles@LIGO.ORG - 18:39, Thursday 21 July 2016 (28562)
I have gone through and manually compared FScan spectrograms and
normalized spectra for the 27 magnetometer channels that are
processed daily: https://ldas-jobs.ligo-wa.caltech.edu/~pulsar/fscan/H1_DUAL_ARM/H1_PEM/fscanNavigation.html,
to look for changes following Tuesday's timing system intervention,
focusing on the lowest 100 Hz, where DARM 1-Hz (etc.) combs are worst.

Because of substantial non-stationarity that seems to be typical,
it's not as straightforward as I hoped it would be to spot a change
in the character of the spectra. I compared today's generated FScans (July 20-21)
to an arbitrary choice two weeks ago (July 6-7).

But these six channels seemed to improve w.r.t. narrow line proliferation:

H1_PEM-CS_MAG_EBAY_LSCRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Z_DQ
H1_PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EY_MAG_VEA_FLOOR_X_DQ  (before & after figures attached)

while these four channels seemed to get worse w.r.t. narrow lines:

H1_PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_X_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Y_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Z_DQ

In addition, many of today's spectrograms show evidence of broad
wandering lines and a broad disturbance in the 40-70 Hz band
(including in the 2nd attached figure).




Images attached to this comment
keith.riles@LIGO.ORG - 10:47, Wednesday 27 July 2016 (28672)
Weigang Liu has results in for folded magnetometer channels for UTC days July 18 (before changes), July 19-20 (overlapping with changes) and July 21 (after changes):

Compare 1st and 4th columns of plots for each link below.

CS_MAG_EBAY_SUSRACK_X - looks slightly worse than before the changes
CS_MAG_EBAY_SUSRACK_Y - periodic glitches higher than before
CS_MAG_EBAY_SUSRACK_Z - periodicity more pronounced than before

CS_MAG_LVEA_VERTEX_X -  periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Y -  periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Z -  periodic glitches higher than before

EX_MAG_EBAY_SUSRACK_X - looks better than before
EX_MAG_EBAY_SUSRACK_Y - looks better than before
EX_MAG_EBAY_SUSRACK_Z - looks slightly worse than before

EY_MAG_EBAY_SUSRACK_Y  - looks slightly better after changes
EY_MAG_EBAY_SUSRACK_Z - looks the same after changes
(Weigang ran into a technical problem reading July 21 data for EY_MAG_EBAY_SUSRACK_X)

A summary of links for these channels from ER9 and from this July 18-21 period can be found here.
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 22:14, Tuesday 19 July 2016 - last comment - 14:44, Thursday 21 July 2016(28506)
Mysterious CO2Y output power drop after today's maintenance activities

Jeff K, Alastair (by phone), Nutsinee

Jeff noticed that TCS CO2Y was throwing a bunch of guardian error messages, which led him to investigate and find that the CO2Y actual output power has been lower since the laser recovered from this morning's maintenance activity. The timeseries shows that CO2Y power dropped out at 15:41 UTC (8:41 local time) and never came back to its nominal (~57 W). The chiller temperature, which is read off the front end, was down at the same time, indicating CO2Y went down due to some front-end maintenance activity. The supply current to CO2Y was also low compared to CO2X (19 A vs 22 A), suggesting that the low power output was real. And indeed, we went out and measured about 40 W at the table (we stuck a handheld power meter right before the first steering mirror).

We don't know why today's front-end maintenance would affect CO2Y output power (CO2X is fine, by the way). On the plus side, the heating profile looks good on the FLIR camera, which means nothing was misaligned and we can still use the CO2Y laser. The beam dump that was in front of the FLIR screen hasn't been put back, so be mindful if you ever want to blast full power through the rotation stage.

 

I commented out the output power fault checker part of the TCS power guardian so that ISC_LOCK can still tell it to go places. I added a temporary +1 degree offset to the minimum angle parameter of the CO2Y rotation stage calibration so it would go to requested powers. We requested the TCS CO2 laser stabilization guardian to DOWN because it's not usable given the current output power.
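(For context, my rough understanding of why an angle offset helps — this is an assumption about the rotation stage calibration, not something stated in this entry: the waveplate/polarizer stage delivers roughly

    P(theta) ≈ P_laser * sin^2( 2 * (theta - theta_min) )

so with P_laser down from ~57 W to ~40 W, shifting the minimum-angle parameter moves the operating point along this curve and lets the guardian reach its requested power setpoints.)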

 

Quick conclusion: CO2Y is still functional. The reason for power loss is to be investigated.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:32, Wednesday 20 July 2016 (28521)
J. Kissel, S. Dwyer, N. Kijbunchoo, J. Bartlett, V. Sandberg

A few checks we forgot to mention to Nutsinee last night:
- Nutsinee and I checked the flow rate on the mechanical flowmeter for both the supply and return of the TCSY chiller line, and it showed (what Nutsinee said was) a nominal ~3 gallons per minute. This was after we manually measured the power to be 41 W coming out of the laser head to confirm the EPICS readout.

- Sheila and I went to the TCS chillers on the mezzanine. Their front-panel display confirmed the ~20.1 deg C EPICs setting for temperature.

- On our way out, we also noticed that a power supply in the remote rack that is by the chillers marked "TCSY" was drawing ~18 mA, and was fluctuating by about +/- 2mA. We didn't know what this meant, but it was different than the power supply marked TCSX. We didn't do anything about it.

- The RF oscillator mounted in that same remote rack appeared functional, spitting out some MHz frequency sine wave. Sheila and I did not diagnose any further than "merp -- looks like an oscillator; looks on; looks to be programmed to spit out some MHz sine wave." 

nutsinee.kijbunchoo@LIGO.ORG - 14:44, Thursday 21 July 2016 (28561)

Alastair, Nutsinee

Today I went and checked the CO2Y power supply set point. The voltage limit is set to 30 V and the current limit is set to 28 A. The same goes for the CO2X power supply. These are the correct settings, which means the CO2Y laser is really not behaving properly.

H1 AOS
travis.sadecki@LIGO.ORG - posted 12:59, Tuesday 19 July 2016 - last comment - 16:58, Thursday 21 July 2016(28491)
ITM camera housing installation

Chandra, Gerardo, Travis

Today we attempted to install both the X and Y arm ITM camera housings.  We successfully installed both of the viewport adapters with the new 1/4" dowel pins.  The installation of the X arm camera housing went smoothly (see pics 1 and 2).  However, when attempting to install the Y arm housing, I noticed that the gap between the expansion bellows of the vacuum system and the camera housing side plates was non-existent (see pics 3 and 4), whereas the X arm has ~1/8" of clearance.  Gerardo measured the length of the bellows in both cases and noted that the Y arm bellows were ~1/4" shorter in length than the X arm, meaning that the height of the peaks of the bellows would be greater.  We'll have to modify the Y arm camera housing to accommodate this difference.  I hope LLO does not have the same issue.

Images attached to this report
Comments related to this report
dennis.coyne@LIGO.ORG - 16:58, Thursday 21 July 2016 (28571)

Calum checked with the designer, Joe Gleason. These units are not installed properly. They need to be mounted so that the base plate is perpendicular to a radial line from the center of the vacuum spool, as shown in:

By mounting in this orientation there should be ~1" of clearance from the bellows.

H1 CAL (ISC)
kiwamu.izumi@LIGO.ORG - posted 23:15, Wednesday 13 July 2016 - last comment - 16:38, Thursday 21 July 2016(28396)
A sign error in online CAL-CS calibration fixed

Stefan, Matt, Kiwamu,

The online calibration, aka CAL-CS, is more accurate now; the sign of the simulated ETMY L3 stage (CAL-CSDARM_ANALOG_ETMY) was found to be wrong and we fixed it. Fixing the error resulted in an improved noise level at around 100 Hz in CAL-CS. This should not affect the GDS pipeline calibration. The attached shows a comparison of CAL-CS before the fix and an offline calibration using DARM IN1 and the full DARM model (28179). It is clear something wrong was going on in 40 - 200 Hz.

What we changed:

This fix gave us an actuator model which is consistent with the measurement (28179) in the sense that the PUM and TST stages have a relative phase of 180 deg at high frequencies. Also, traditionally, when ETMY had a positive bias, the gain of the L3 stage used to be set to -1 in the O1 era (see for example 25575). Therefore today's fix is consistent with the O1 era too. One thing I still don't understand is the relative calibration difference between GDS and CAL-CS (summary page). The relative magnitude should show a factor of 2 difference or so around 100 Hz assuming the sign error was only in CAL-CS, but it does not show such a big difference. Not sure why.

Images attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 08:07, Thursday 14 July 2016 (28403)

The online GDS (called C00 during O1) calculation uses CAL_DELTAL_CTRL and CAL_DELTAL_RESIDUAL to produce h(t). Compared to the front end, it applies better relative timing between the two signals and other high-frequency corrections. Since CAL_DELTAL_CTRL is obtained after the application of the ANALOG_ETMY_L3 filter, the online GDS will also have the same problem as the front-end DARM signal. Only the offline GDS (called C01, C02 during O1) uses CAL-DARM_ERR and CAL-DARM_CTRL along with actuation and sensing models to produce h(t), and hence would have been different. I am not sure whether we have produced that at this point.

kiwamu.izumi@LIGO.ORG - 09:08, Thursday 14 July 2016 (28405)

Thanks, Shivaraj.

You are right. I misinterpreted the subway diagram (G1501518-v10) last night. I agree that C00 must have the same sign error and therefore what we saw in the summary page is correct.

kiwamu.izumi@LIGO.ORG - 16:38, Thursday 21 July 2016 (28570)

The script, which produced the comparison plot, is saved in the SVN so that the code can be used in the future when it is needed. The code lives at

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/ControlRoomCalib/H1CalibDoubleCheck.m
