LHO FMCS
bubba.gateley@LIGO.ORG - posted 16:35, Friday 17 July 2015 (19719)
Beam Tube Enclosure Joint Repair on the X-Arm
Chris S. Joe D.

The crew installed metal strips on top of 350 meters of tube enclosure joints this week, bringing the total to 1075 meters of enclosure covered from the corner station on the X-Arm.
LHO VE
bubba.gateley@LIGO.ORG - posted 16:26, Friday 17 July 2015 (19717)
Beam Tube Washing
APOLOGIES FOR NOT REPORTING FOR THE PAST WEEK

Scott L. Ed P. Rodney H.

This report covers 7/13-7/17, dates inclusive.

The crew cleaned a total of 358.7 meters of tube this week. Test results for the week are also shown here.

We added another generator that we had on site to the cleaning operation so the third man could vacuum the support tubes and pre-clean the egregiously dirty areas of the tube. This seems to have increased productivity, as seen in the almost-72-meter-per-day average (358.7 m over 5 days ≈ 72 m/day).

Scott L. will be on vacation next week, so to keep up with the current pace I am bringing out another Apollo employee who is very familiar with the site. Mark Layne will be filling in for Scott.
 
Non-image files attached to this report
H1 CDS (CDS, VE)
patrick.thomas@LIGO.ORG - posted 16:25, Friday 17 July 2015 (19718)
Cathodes remotely turned off at end stations
These are NOT the cathodes used as interlocks for the high voltage.

For both end stations:
I logged into the Beckhoff computer. I went to the 'CoE - Online' tab for the Inficon gauge labeled 'Pressure Gauge NEG (BPG 402)' in the system manager. In index FB44:01, 'Emission ON / OFF Command: Command', I entered 00 02 in the Binary box. I then verified that index 6015:05, 'Input Hot Cathode Ion: Emission Status Off/On Module 2' had changed from TRUE to FALSE.
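For later spot checks from the control room, a minimal sketch (assuming the emission on/off status readback is also served as a slow EPICS channel; the channel name is left as a parameter since it is not quoted here):

# Minimal sketch: confirm an emission-status readback over EPICS.
# Assumes the status is exposed as a slow channel with 0 = off; the
# channel name is a parameter because the exact PV is not given above.
from epics import caget

def emission_is_off(status_channel):
    return caget(status_channel) == 0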

This was done around 11:35 PDT.

Richard will go to the end stations and verify that they are off on Monday.
LHO General
patrick.thomas@LIGO.ORG - posted 16:13, Friday 17 July 2015 (19716)
Ops Summary
Cheryl, Patrick, TJ, Ed

The ETMY LR RMS WD was tripped when I came in. I reset it by writing 0 and then 1 to it. Jim W. and I switched the ITMY, ITMX and BS ISI blends from Windy_90 to Quite_90. The mode cleaner was not locking because the input power was low. I had to do a search for home with the rotation stage. Spent most of the day keeping the IFO at DC power for commissioners. Reloaded Guardian a couple of times for script changes. 
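A minimal sketch of that watchdog reset step (write 0, then 1), with the reset channel left as a parameter rather than guessing the exact PV name:

# Sketch of the WD reset described above: write 0, then 1.
import time
from epics import caput

def reset_watchdog(reset_channel, pause=1.0):
    caput(reset_channel, 0)
    time.sleep(pause)   # short pause between the two writes
    caput(reset_channel, 1)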

09:18 Richard to roof
09:24 Jason and Peter taking diode box into PSL diode room
09:35 Richard off roof
09:43 Jason and Peter done
09:50 ETMX ISI WD tripped, indicated payload trip, but no WD trip on SUS or TMS
10:51 Richard to roof
11:35 I remotely turned off the cathodes at both end stations (WP 5363)
12:09 Pepsi truck through gate
15:44 Dave installing h1susetmypi model (WP 5365)
15:48 Jeff K. restarting h1susomc model (WP 5366)

Currently Stefan has the IFO and is working on ASC.
H1 SUS (ISC)
jeffrey.kissel@LIGO.ORG - posted 16:09, Friday 17 July 2015 (19714)
OMC ASC Signals Routed through Standard ISC Paths
J. Kissel, S. Dwyer
WP #5366

Continuing to pursue OMC ASC diagonalization (see LHO aLOG 19691), I've made changes to the top level of the OMC SUS front end model such that the ISC signals go through the originally intended ISC path, i.e. through the ISCINF, LOCK, and DRIVEALIGN banks. This is such that we can *use* the drivealign matrix to decouple L, P and Y drive. I've made the change in such a way that this is only a top-level model change, and does not impact any library parts. Sadly this means that the implementation is rather ugly, but if the new scheme is successful, we'll submit an ECR to clean up the model and install the scheme properly during a maintenance day. 
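As a toy illustration of what the DRIVEALIGN matrix buys us (made-up numbers, not the actual H1 values): if a matrix C describes how the actuator-basis drives couple into sensed L, P and Y, loading its inverse into DRIVEALIGN makes a requested pure-P drive come out as pure P.

# Toy numpy illustration of drive decoupling with a DRIVEALIGN-style matrix.
# C is a made-up coupling matrix (drive basis -> sensed L, P, Y); the real
# values would come from measured transfer functions.
import numpy as np

C = np.array([[1.0, 0.2, 0.1],   # sensed L response to (L, P, Y) drives
              [0.1, 1.0, 0.3],   # sensed P response
              [0.0, 0.1, 1.0]])  # sensed Y response

drivealign = np.linalg.inv(C)    # candidate decoupling matrix

request = np.array([0.0, 1.0, 0.0])   # ask for pure pitch
sensed = C @ (drivealign @ request)   # what the optic actually sees
print(np.round(sensed, 6))            # -> [0. 1. 0.]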

I've saved, compiled, installed, and restarted the model; confirmed that all settings have been restored as expected; confirmed that the alignment sliders are at the same values; and confirmed that the "new" (remapped) drive signals arrive in the expected banks. Since the former paths were not disconnected, this change is entirely backward compatible; all previous alignment schemes will still work.

The development of the control filter implementation has now been handed off to Sheila.
Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 12:21, Friday 17 July 2015 (19710)
FE channel access freeze ups now hourly

Starting late last night, the front end channel access freeze-ups have become fairly regular and hourly (in the 30th-40th minute of the hour). In the past few hours they have only occurred in the 37th minute of the hour. I have checked all the usual suspects (hourly rsync, autoburt, etc.) and not found any correlation so far. I have also checked the Ganglia and Observium logs; no obvious computing or networking events happen at this time of the hour.
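A quick sketch of the minute-of-hour check (assuming the freeze-up times have been collected as timestamp strings; the entries below are stand-ins, not real data):

# Histogram freeze-up times by minute-of-hour to confirm the clustering.
from collections import Counter
from datetime import datetime

events = ['2015-07-17 09:37:12', '2015-07-17 10:37:41', '2015-07-17 11:37:05']  # stand-ins

minutes = Counter(datetime.strptime(t, '%Y-%m-%d %H:%M:%S').minute for t in events)
for minute, count in sorted(minutes.items()):
    print('%02d: %d' % (minute, count))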

H1 ISC
jenne.driggers@LIGO.ORG - posted 12:01, Friday 17 July 2015 - last comment - 17:36, Friday 17 July 2015(19709)
H1 better low freq performance

Matt found some data from last night that looks pretty good - I'm not sure what the state of the IFO was at this particular time, so I won't say.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 13:04, Friday 17 July 2015 (19711)

The brute force coherence report for this period can be found here:

https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1120811417/

sheila.dwyer@LIGO.ORG - 15:38, Friday 17 July 2015 (19713)

Our NOMINAL_LOW_NOISE state now includes the BS coil drivers switched and the SRCL and MICH FF. The A2L coefficients were tuned before the vent but not carefully since then. We also had the ISS 2nd loop on at this time, which helped the noise around 300 Hz.

rana.adhikari@LIGO.ORG - 17:36, Friday 17 July 2015 (19721)

We have seen that low-frequency noise breathe somewhat; the noise was already low around 70 Hz when we switched on the SRC FF (the old filters). We have taken a few measurements with better coherence and with better fitting code, and will soon get a bit more subtraction. The high-frequency noise is, mysteriously, not as good as our best. The DARM offset was giving us 20 mA of total OMC DC current. We have not yet succeeded in being stable at higher offsets.

H1 SEI
jim.warner@LIGO.ORG - posted 11:10, Friday 17 July 2015 (19707)
HAM spring resonance testing

Because of Robert's interest in the HAM6 ISI spring flexure to DARM coupling, I dug up a spare blade left over from aLIGO assembly and rigged it up on a table in the staging building. It's very rough, but intended only as a first look. Hopefully, Robert will attach his test results.

Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 10:59, Friday 17 July 2015 (19706)
Morning meeting notes
One of the corner station HEPI pumps is making noise and not putting out as much pressure as the others. Hugh would like to address it next Tuesday.

No suspensions work scheduled today or over the weekend.

Richard working on roof for PEM.

PMC alignment scheduled for next Tuesday.

Carpenter shop rollup door was left open.

Electric Health will look at malfunctioning LSB lift pumps.

Richard and Patrick to attempt to remotely turn off hot cathode gauges at end stations.
H1 SEI
hugh.radkins@LIGO.ORG - posted 09:53, Friday 17 July 2015 - last comment - 10:02, Friday 17 July 2015(19699)
Corner Station HEPI Pump Station #1(8) is having problems & needs attention

On Tuesday all the HEPI pumps were spun down to check the system accumulators' pressures. Upon bringing them back into service, I noticed CS PS#8 sounded funny. I had also greased this and another motor, and I suspect the greasing may have packed the motor too tightly. See the first attachment of the Pump Controller MEDM to follow along if you wish.

See the second attachment showing the last four days, spanning the down time on Tuesday. Notice how, after the pumps come back on, PS1_PRESS1 (EPICS channel name) is lower by 10 psi and is also much noisier. PRESS1 is the sensor just after the pump, before the resistors and filters.

I was a little baffled at first to see that the other pump stations' PRESS1 was exactly the same, but it now makes sense. The system servos on the differential pressure across the actuators at BSC2, which is a direct function of the total output of the four pump stations. Working back upstream to the pump, the resistances giving the pressure drops across the 3u filter and the two laminar flow resistors have not changed, hence the pressure out of the other three pumps has not changed. However, now that Pump #8 is not putting out what it did before, the other pumps must spin faster to achieve the total desired output, hence the 20% increase in motor speed shown on the VOUT channel.

I propose to take this PS offline on Tuesday; it may stay offline for a week, or I may get it back in one day. I can do this without having to de-isolate any HEPI platforms, although there may be a minor pressure glitch when I valve out this pump station as the motor stops. This pressure glitch may still be seen on the platforms; see the effect on the BS (if you care) here, from when I greased motors in May 2014. The glitches from this can be seen as 'deep' as the SUS ISI witness channels, although I don't know if they are actually strong. Bottom line: this certainly should not be done during any measurements.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:02, Friday 17 July 2015 (19702)

And I will renumber the physical pump station and medm to stop the madness of different numbers.

H1 DetChar (DetChar, ISC, SUS)
sheila.dwyer@LIGO.ORG - posted 01:56, Friday 17 July 2015 - last comment - 19:46, Friday 17 July 2015(19696)
Lock Losses need some investigation

The following is a list of lock loss messages from the Guardian log. We've had a bunch of lock losses during the transition from locked to low noise this evening. As you can see there are a few different culprits, but one of the big ones is LOWNOISE_ESD_ETMY. It would be handy if someone could check out these lock losses and home in on what precisely went bad during this transition (e.g. ramping, switching, etc.); then we can get back to SRCL FF tuning.

2015-07-17 06:47:05.694550  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 06:48:22.162260  ISC_LOCK  LOCKING_ARMS_GREEN -> LOCKLOSS

2015-07-17 07:14:34.431170  ISC_LOCK  NOMINAL_LOW_NOISE -> LOCKLOSS

2015-07-17 07:26:40.249110  ISC_LOCK  CARM_10PM -> LOCKLOSS

2015-07-17 07:34:59.720030  ISC_LOCK  PREP_TR_CARM -> LOCKLOSS

2015-07-17 07:42:08.269350  ISC_LOCK  LOCKING_ALS -> LOCKLOSS

2015-07-17 08:02:41.773620  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 08:21:58.665420  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 08:31:29.035330  ISC_LOCK  REDUCE_CARM_OFFSET_MORE -> LOCKLOSS

2015-07-17 08:48:32.514870  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS
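For whoever picks this up, a quick sketch for tallying which state each lock loss came from, assuming the Guardian log lines look like the ones above (the list here is just a subset of those entries):

# Tally lock losses by the state they came from.
import re
from collections import Counter

log_lines = [
    '2015-07-17 06:47:05.694550  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS',
    '2015-07-17 08:02:41.773620  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS',
    '2015-07-17 08:31:29.035330  ISC_LOCK  REDUCE_CARM_OFFSET_MORE -> LOCKLOSS',
]

pattern = re.compile(r'ISC_LOCK\s+(\S+) -> LOCKLOSS')
counts = Counter()
for line in log_lines:
    m = pattern.search(line)
    if m:
        counts[m.group(1)] += 1
print(counts.most_common())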

 
Rana, Sheila
Comments related to this report
keita.kawabe@LIGO.ORG - 11:01, Friday 17 July 2015 (19705)

Guardian error causing lock loss in LOWNOISE_ESD_ETMY (Evan, Keita)

Summary:

Out of four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, one lock loss (15-07-17-06-47-05) was due to the guardian running main() of LOWNOISE_ESD_ETMY twice.

Running main() twice (sometimes, but not always) is apparently a known problem with the guardian, but this specific state is written such that running main() twice is not safe.

Details:

Looking at the lock loss, I found that the ETMY_L3_LOCK_L ramp time (left of the attached, red CH16) was set to zero at the same time as, or right after, the ETMX and ETMY L3 gains (blue ch3 and brown ch5) were set to their final values (0 and 1.25 respectively). There was a huge glitch in the EY actuators at that point, but not in EX.

This transition is supposed to happen with a ramp time of 10 seconds, so setting the ramp time to 0 right after setting the gain kills the lock.

In the guardian code (attached right), the ramp time is set to zero at the beginning of the state and set to 10 at the end.

Evan told me that main() could be executed twice, so we looked at the log (attached middle), and sure enough, right after LOWNOISE_ESD_ETMY.main finished at 2015-07-17T06:46:50.39059, the gain was set to zero again.
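Schematically, the state does something like the following (this is NOT the real ISC_LOCK code and the channel names are approximate; ezca is the guardian EPICS interface), which is why a second pass through main() is fatal:

# Schematic of the non-reentrant pattern; main() is only safe if it runs
# exactly once, start to finish.
def main(self):
    ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 0   # beginning: instant steps for setup writes
    # ... setup, then the EX -> EY gain handoff is commanded with a 10 s ramp ...
    ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 10  # end: ramp time restored

# If the guardian re-runs main(), its first action writes the ramp time
# back to 0 right after the first pass commanded the final gains, so the
# intended 10 s handoff collapses into a step.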

Images attached to this comment
jameson.rollins@LIGO.ORG - 11:55, Friday 17 July 2015 (19708)

I have identified the source of the double main execution and have a patch ready that fixes the problem:

https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=879#c7

If needed we can push out a point release essentially immediately, maybe during next Tuesday's maintenance period.

keita.kawabe@LIGO.ORG - 14:09, Friday 17 July 2015 (19712)

The bounce mode rang up during the EX-EY transition gain ramping in 3 of the 4 cases last night.

In three out of four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, the guardian made it all the way to the gain ramping at the end, and it did not run main() twice.

However, about 7 to 8 seconds after the ramping started, a 9.8 Hz oscillation built up in DARM, then fast glitches appeared in the ETMY L2 drive, and the IFO lost lock.

This looks like the bounce mode, but I have no idea why it was suddenly rung up.

See attached. The first attachment shows the very end of the lock losses and clearly shows the DARM oscillation.

The second plot shows the same lock losses zoomed out, so you can see that each lock loss happened 7 to 8 seconds after the ramping started.

The last attachment shows the DARM oscillation zoomed in so you can read off the frequency (about 3 cycles in 0.309 seconds, i.e. a 9.8 Hz signal).

Images attached to this comment
keita.kawabe@LIGO.ORG - 19:46, Friday 17 July 2015 (19725)

Update: after the bounce mode was rung up, the OMC DCPDs saturated before the IFO lost lock.

In the attached, while the 9.8 Hz oscillation was getting bigger (top left), if you high-pass DARM_IN1_DQ (middle left) you can see that the high-frequency part, dominated by 2.5 kHz, suddenly quenched at about t = 18 sec.

The same thing is observed in the OMC DCPDs (middle middle and bottom middle), and even though we don't have a fast channel for the DCPD ADCs, it looks like they were very close to saturation at 18 sec (bottom left).

Though we don't know why the 9.8 Hz mode was excited, we at least know that the DCPDs saturated and caused the lock loss.

Since the same thing happened 3 times, and each time it was 7 to 8 seconds after the ETMX and ETMY L3 LOCK_L gains started ramping, you could set the gains to the values corresponding to this in-between state, keep them there for a minute or so, and see if the IFO can stay locked. If you fail to keep it locked, it's a sure sign that this instability is somehow related to the L3 actuator balance between X and Y, the L3-L2 crossover in Y (or in X), or both.

The in-between gain would be something like 1.1 for EY L3 lock and 0.125 for EX.
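For reference, those numbers are consistent with freezing the ramp about 8-9 seconds into an (approximately linear) 10-second ramp, assuming the handoff runs the EX gain from 1 to 0 and the EY gain from 0 to 1.25:

# Quick check of the quoted in-between gains under a linear-ramp assumption.
def ramp_value(start, end, t, T=10.0):
    return start + (end - start) * min(t / T, 1.0)

print(ramp_value(0.0, 1.25, 8.8))  # ~1.10  (EY L3 LOCK_L gain)
print(ramp_value(1.0, 0.0, 8.8))   # ~0.12  (EX L3 LOCK_L gain)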

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:18, Thursday 16 July 2015 - last comment - 16:23, Friday 17 July 2015(19692)
PI at 15734 in Y arm

We were locked at 24 W for just over 2 hours before we rang up a PI that shows up in the Y arm QPDs at 15734 Hz. I increased the ring heater power (for both arms) from 0.5 to 0.6 W. A template with the QPD IOP channels is attached. I tried to reduce the power, but we lost lock when I did that, perhaps because the ISS second loop was on. The lock loss was at about 3:00 UTC on July 17.

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 01:03, Friday 17 July 2015 (19694)DetChar

We suspect this was not a PI, but rather the roll mode.

It would be useful if someone could track down which optic this was by looking at the roll mode peak RMS trends and checking whether it did in fact saturate any of the actuators.

Rana

edmond.merilh@LIGO.ORG - 09:44, Friday 17 July 2015 (19700)
nutsinee.kijbunchoo@LIGO.ORG - 09:52, Friday 17 July 2015 (19701)

The ETMX ring heater has asymmetric heating at the moment (0.5 W upper ring, 0.6 W lower ring). Not sure if you'd like to keep this setting, so I'm leaving it there...

sheila.dwyer@LIGO.ORG - 16:23, Friday 17 July 2015 (19715)

Matt, Sheila

Matt looked at this lock this morning and saw that although the roll mode might have increased in the last few minutes, it likely wasn't the culprit. However, there was a line at 1055 Hz that appeared and grew in the last 20 minutes of the lock, shown in the attached screenshot. This would indicate that the PI could be at 15329 or 17439 Hz, so this is a new PI for us (past incidents: alog 17903 and alog 18965). As far as I know this is also a different frequency from what has been seen at LLO.
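Those two candidates come from assuming the 1055 Hz line is an alias of a mechanical mode folded about the 16384 Hz sample rate of the DARM channel:

# Where the 15329 / 17439 Hz candidates come from, assuming aliasing
# about the 16384 Hz sampling rate.
fs = 16384.0
f_line = 1055.0
print(fs - f_line, fs + f_line)   # 15329.0 17439.0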
Unfortunately, in my hurry to grab some fast channels for the QPDs, I used the LLO channel names, but we are using a different ADC, so I got the wrong channels. So we don't really know which arm this was in.

I've made a template that anyone who suspects that a PI is rung up can run:

/ligo/home/sheila.dwyer/ParametricInstabilities/PI_IOP_template.xml

The asymmetry in the ring heater was my mistake.

Images attached to this comment
H1 SEI (SEI)
robert.schofield@LIGO.ORG - posted 11:38, Thursday 16 July 2015 - last comment - 10:02, Friday 17 July 2015(19682)
Wind tilt is very local and is lowest in the beer garden

Summary: the effect of wind in the sub-0.1 Hz tilt band is very local (little coherence between seismometers 20 m apart) and more than a factor of two greater at the HAM2 and HAM5 seismometer locations than in the beer garden. We may be less sensitive to wind if the sensor correction seismometer(s) are located only in the beer garden. Also, because tilt is so local, real tilt meters, like Krishna's at EX, should be as close as possible to the chambers.

Wind tilts our buildings, which produces spurious control signals from servo seismometers and can make it difficult to lock or maintain lock.  A previous log showed that there was almost no wind tilt at a location 40 m from the EY building, making it clear that wind tilt is a local effect (Link).  As a result, Hugh and I have been wondering if the HAM 5 seismometer location is better because it is down-wind for most storms or if the beer garden is better because it is furthest from walls. With Hugh’s help, I looked at chance coincidences between wind storms and seismometer huddles over the last few months as well as data with seismometers in the 3 locations. I think the answer is that the beer garden shows substantially less tilt than either the HAM 5 or 2 locations.

Figure 1 shows how local the tilt is. The blue seismometer traces are for "huddled" seismometers (about 2 m apart) in the beer garden and show high coherence below 0.1 Hz. But the red seismometer trace shows much lower coherence in this tilt band between the beer garden seismometer and the HAM5 seismometer, only about 20 m away. During high wind, I also found low coherence in the tilt band between the beer garden and the HAM2 seismometer locations. The local nature of the tilt has implications for true tilt meters used to correct the tilt signal from seismometers. The tilt meter at EX is about 4 m from the chamber, and in Figure 1 we saw very little coherence at 20 m. While it may not be enough of a return to move this one, it may be best to try to place the next one even closer, and, to the degree possible, engineer the BRS so that it can be as close as possible to, or even under, the chamber.
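A sketch of the kind of coherence estimate used here (standard scipy; the channel data and sample rate are inputs, and long segments are needed to resolve the sub-0.1 Hz band):

# Coherence between two seismometer time series in the tilt band.
from scipy.signal import coherence

def tilt_band_coherence(x, y, fs, seg_seconds=512):
    f, cxy = coherence(x, y, fs=fs, nperseg=int(seg_seconds * fs))
    band = (f > 0.01) & (f < 0.1)
    return f[band], cxy[band]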

Figures 2a and 2b show that the tilt is very different at different locations in the LVEA and that, of the 3 locations, the beer garden is the best. In both horizontal axes, the tilt in the beer garden is at least a factor of two better than the best of the HAM2 and HAM5 locations, and about a factor of ten better than the worst of them. I checked the 3 windstorms during the period when all 3 seismometers were working and, for each time that I examined, the beer garden seismometer was better. Figure 3 shows the two seismometers that were available during the windy period that caused locking problems last night: the tilt noise was half as much in the unused beer garden seismometer as in the HAM5 seismometer that was used for sensor correction. So, a sensor correction seismometer in the beer garden may be better than one in the HAM2 or HAM5 locations in the frequency band dominated by tilt rather than real acceleration (roughly below 0.5 Hz). This morning Jim switched sensor correction to the beer garden seismometer.

Finally, when we have two STSs available, I think we should do a more detailed study of tilt-band coherence length, and attenuation with distance from the walls.

 

Robert, Hugh

Non-image files attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 12:09, Thursday 16 July 2015 (19684)
Just to check - 
Are you sure that there was no activity in the LVEA during these data times? That will also cause local distortions of the floor and might confuse the results.
robert.schofield@LIGO.ORG - 10:02, Friday 17 July 2015 (19703)

Actually, people on the floor produce very different spectral signatures from wind and would be easy to identify in any of the spectra. But, nevertheless, I did check for any anomalous spikes in the 30 to 100 mHz band of the PEM seismometers, or, for more recent data, the equivalent bands of the ISI seismometers.

H1 SEI (CDS)
jim.warner@LIGO.ORG - posted 16:23, Wednesday 08 July 2015 - last comment - 10:10, Friday 17 July 2015(19509)
Quacking matlab filters and foton filter glitching

Earlier today, I wrote a couple of out-of-loop feedforward filters to the BS ISI foton file using Foton. When I hit the load coefficients button (while the ISI was isolated; the FF paths were off, so it shouldn't have done anything), the ISI tripped, hard. It rang up the T240s pretty badly and I couldn't isolate the ISI for several minutes afterward. Worried I had inadvertently written some other filter, I ran a diff on the most recent archived file and the file created yesterday when Jeff restarted the models. This showed a whole bunch of filter coefficient differences which shouldn't have been there (as reported by a diff of the two archive files; I don't know exactly what changed, see attached). Talking to Jim, Dave and Jeff, it sounds like the glitch was probably caused by my having used Quack recently (June 22nd) to load some blend filters. Jeff's model restart (and even a prior model restart on June 30) simply inherited that quack-written file. Today was the first time the BS's foton file had been opened and saved in Foton. Quack can apparently load coefficients with higher precision than Foton will accept, so when you open and save a "too high" precision filter with Foton, it rounds the coefficients off. A sudden change in the precision of the SOS coefficients in the blend filters = bad for the isolation loops = bad trip.

We've seen this Foton vs. Quack Rounding problem before -- see e.g. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=3553 -- and it's still biting us.

This sounds like a relatively easy thing to control for; I can think of two ways:

- getting Quack to check and do the rounding on its own before writing to the Foton file (a rough sketch of such a check is below), or

- having the post-build script run a "foton -c" on all filter files before the model gets restarted.
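A rough sketch of the first option, i.e. checking whether the coefficients survive a round-trip through a fixed decimal precision before they are written (the precision here is a free parameter, not Foton's documented value):

# Check whether SOS coefficients would change if re-written at reduced
# decimal precision (as an open/save in Foton apparently does).
def survives_rounding(coeffs, digits=15):
    fmt = '%%.%dg' % digits
    rounded = [float(fmt % c) for c in coeffs]
    return all(r == c for r, c in zip(rounded, coeffs))

print(survives_rounding([0.12345678901234567]))  # False: would change on open/save
print(survives_rounding([0.123456789012346]))    # True: already at reduced precision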

Is there someone in the CDS group who can fix this? Maybe it already has been? There are several versions of Quack floating around; June was my first attempt with it, so maybe I used the wrong one.

I used /ligo/svncommon/SeiSVN/seismic/Common/MatlabTools/autoquack.m

 https://svn.ligo.caltech.edu/svn/seismic/Common/MatlabTools/autoquack.m

Last Changed Author: brian.lantz@LIGO.ORG
Last Changed Rev: 7939
Last Changed Date: 2014-02-14 15:38:15 -0800 (Fri, 14 Feb 2014)
Text Last Updated: 2014-02-14 15:48:16 -0800 (Fri, 14 Feb 2014)

Non-image files attached to this report
Comments related to this report
richard.mittleman@LIGO.ORG - 07:02, Thursday 09 July 2015 (19514)

We should use the readfoton script to read and plot the installed filter; I can do that.

brian.lantz@LIGO.ORG - 13:57, Monday 13 July 2015 (19591)
I suspect that the problem appears because a change (however small) in the filter coefficients causes the filters to reset (clear history, start over), and a reset of the filter history = a glitch in the output. It is easy to imagine this glitch being quite large for an ISI loop which is holding a static offset.

I am working on an update to autoquack which will have it automatically call foton -c so that the filter updates happen in a deterministic way, and there is a log file telling you which filters have been touched.
 
brian.lantz@LIGO.ORG - 10:10, Friday 17 July 2015 (19704)
filed ECR 
https://services.ligo-wa.caltech.edu/integrationissues/show_bug.cgi?id=1077

testing of possible solution, see 
https://alog.ligo-la.caltech.edu/SEI/index.php?callRep=789
-Brian