Reports until 11:47, Wednesday 05 August 2015
LHO VE
kyle.ryan@LIGO.ORG - posted 11:47, Wednesday 05 August 2015 - last comment - 11:47, Wednesday 05 August 2015(20221)
X-end NEG regeneration (EX_NEG_REGEN1....)
Kyle, Gerardo 

0935 hrs. local -> Valved-in RGA to X-end with N2 cal-gas on 

0955 hrs. local -> Valved-out N2 cal-gas 

1005 hrs. local -> Valved-out NEG from X-end 

1015 hrs. local -> began NEG regeneration (heating)

~1145 hrs. local -> NEG regeneration (heating) ends -> Begin NEG cool down 

1240 hrs local -> Valved-in NEG to X-end 

Data attached (see also LIGO-T1500408)

Leaving RGA valved-in to X-end, N2 cal-gas valved-out and filament off
Non-image files attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 03:26, Wednesday 05 August 2015 (20243)VE

Attached is a plot of the pressure inside of the NEG pump's vessel during regeneration, along with temperature.

Temperature started at 22 °C and eventually reached 250 °C.

Non-image files attached to this comment
H1 CAL (AOS, CAL)
sudarshan.karki@LIGO.ORG - posted 11:31, Wednesday 05 August 2015 - last comment - 14:27, Wednesday 05 August 2015(20259)
PCAL AA Chassis at Xend

Pcal Team,

During maintenance and calibration yesterday we found that the PCAL AA chassis (S1400574) at End X has problems with channels 5-7. Channel 5 is dead, and channels 6 and 7 are railed at ~15000 cts. These channels are connected to a DB9-to-BNC chassis (D1400423) at the other end. We isolated this unit from the AA chassis to troubleshoot the location of the problem and confirmed that the AA chassis is at fault.

Comments related to this report
sudarshan.karki@LIGO.ORG - 14:27, Wednesday 05 August 2015 (20266)

Fil, Sudarshan

We tried power-cycling the AA chassis to see if that would solve the problem. It did not, so we replaced the broken AA chassis with a spare (S1102791) and brought the broken one back to the EE shop for troubleshooting. We will swap the original back in once it is fixed.

H1 DetChar (DetChar)
robert.schofield@LIGO.ORG - posted 09:47, Wednesday 05 August 2015 - last comment - 09:52, Wednesday 05 August 2015(20255)
Are rate variations of huge glitches inconsistent with beam tube particulate?

There has been some speculation that the huge glitches in DARM on weekends and in the middle of the night might be beam tube particulate falling through the beam. The absence of correlated events in auxiliary channels (Link) and the lack of saturations have not helped dissipate this speculation.

I think that we can test the (in my mind unlikely) hypothesis that these huge glitches are particulate glitches by comparing rate variations to what we would expect. If the glitches are produced by a constant ambient rate of particles falling through the beam, then we would not expect large gaps like the one at the beginning of the Aug. 1 run that Gabriele analyzed for the log linked above (see attached inspiral range plot). This is a fairly weak test when applied to this one day: I calculate that the distribution of glitches on Aug. 1 is only 20% likely to be consistent with a constant rate. But perhaps DetChar could strengthen this argument by looking at future variations in rates to test the hypothesis that the rate is constant. I checked that there was no cleaning or wind above 10 MPH for the Aug. 1 period.

If bangs during cleaning on July 30th had freed up some particulate that then fell over the next few days, and this dominated the glitch rate, then the expected rate would not be constant but exponentially declining, starting at the last cleaning. Since the gap was at the beginning of the Aug. 1 run, this would be even more unlikely than 20%. Bubba keeps a record of cleaning, so we could also test for exponential declines in rates.

But for starters, maybe DetChar could check for consistency with a constant rate for those glitches that are not associated with saturations, have auxiliary channel signatures similar to known particulate glitches (e.g. Link, and more to come), and happen on days without cleaning (weekends for sure), and with wind under 10 MPH. Since particulate glitches are likely to be an ongoing concern for some, and since glitch rate statistics can be a good discriminant for particulate glitches, I think that it would be worth setting up this infrastructure for rate statistics of unidentified glitches, if it doesn't already exist.
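As a sketch of the rate-statistics idea above: under a constant-rate (homogeneous Poisson) hypothesis the inter-arrival times of glitches are exponentially distributed, so a KS test on the gaps yields a consistency probability. This is a minimal illustration and not the DetChar pipeline; the function name and the example event times are hypothetical.

```python
import numpy as np
from scipy import stats

def constant_rate_pvalue(event_times):
    """KS-test p-value for the hypothesis that events come from a
    constant-rate (homogeneous Poisson) process, whose inter-arrival
    times are exponentially distributed."""
    t = np.sort(np.asarray(event_times, dtype=float))
    gaps = np.diff(t)
    scale = gaps.mean()  # maximum-likelihood exponential scale
    return stats.kstest(gaps, 'expon', args=(0.0, scale)).pvalue

# Illustrative data: 50 glitch times spread uniformly over one day (seconds)
rng = np.random.default_rng(0)
uniform_times = rng.uniform(0.0, 86400.0, 50)
p = constant_rate_pvalue(uniform_times)
print(p)
```

A low p-value would flag a day whose glitch distribution (e.g. one with a large gap, as on Aug. 1) is inconsistent with a constant rate; an exponentially declining rate after cleaning would need a separate fit.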

Images attached to this report
Comments related to this report
david.shoemaker@LIGO.ORG - 09:52, Wednesday 05 August 2015 (20257)
Also good to look for potential variation in rate due to other environmental conditions in addition to wind -- temperature (absolute or derivative) would be good to test. 
H1 SEI (DetChar)
jim.warner@LIGO.ORG - posted 09:46, Wednesday 05 August 2015 - last comment - 08:38, Monday 10 August 2015(20256)
Return of the "WTH is that"?

I can't find the posts now, but several months ago an intermittent issue with ETMX was spotted that was narrowed down to the CPSs, possibly specifically the corner 2 CPSs (?). This problem then somehow "fixed" itself and was quiet for months. As of the night of the 4th it seems to be back, intermittently (first attached image; the spectrum should be fairly smooth around 1 Hz, but it's decidedly toothy). Looking at the DetChar pages, it shows up at about 8 UTC and disappears sometime later. I took spectra from last night (second image) and everything was normal again.

Still don't know what this is. Did anybody turn anything on at EX on Monday afternoon that shouldn't be on?

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 11:27, Wednesday 05 August 2015 (20260)
I had turned on the NEG Bayard Alpert gauge at end X yesterday, but I have verified at least through Beckhoff that I turned it back off.
nairwita.mazumder@LIGO.ORG - 08:38, Monday 10 August 2015 (20323)
I have done some follow-up investigation; the DCC document can be found here. The feature seems to be related to an ETMX ESD driver issue (aLOGs 20219, 19487, 19487).
LHO VE
john.worden@LIGO.ORG - posted 09:40, Wednesday 05 August 2015 (20254)
X END NEG pump regen

Kyle and Gerardo regenerated the NEG pump at X End yesterday. The attached plot shows 30 days of the BSC chamber pressure. We gained a factor of two from the regen.

Kyle's aLOG: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=20221

The aLOG for the Y End regen is here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19998

RGA data to follow.

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 08:56, Wednesday 05 August 2015 (20250)
Morning Meeting Summary
H1 PSL
edmond.merilh@LIGO.ORG - posted 08:17, Wednesday 05 August 2015 (20249)
Adjusted AOM Diffracted Power

I adjusted the AOM diffracted power from 9% to ~7%.

H1 SUS
jenne.driggers@LIGO.ORG - posted 00:52, Wednesday 05 August 2015 - last comment - 16:11, Wednesday 05 August 2015(20242)
ETMY violin mode damping

[Sheila, Jenne]

We have had a violin mode at ~ 508.29 Hz rung up for the last several days.

Part of the problem was that the ETMY Mode7 filter was railing at its limiter.  This filter bank has the new "flat phase" filter that is being tried, for damping many modes at once.  Evan ramped the gain to zero for this filter bank.  After ~30 minutes we didn't see any noticeable change in the height of the peak.

The only other filter bank that was enabled was the Mode5 bank, with a 506-513 band pass.  We turned off the "-60deg" FM2.  After an hour and a half or so, we see the height of the violin mode has been reduced by almost a factor of 10.  We'll check back on it tomorrow.

 

Also, there is a new violin mode blrms screen, accessible from the quad overview screen.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 16:11, Wednesday 05 August 2015 (20272)

When looking into whether the violin mode is still going down, it occurred to me to look at the output of the violin filter module (ETMY Mode 7) that Evan set to zero last night. It turns out it had been railing for days. My guess is that this was turned on for testing of violin mode damping and then never added to the Guardian, so it never got turned off (violin mode damping should only be on when the IFO is locked). Oops. It's been off since last night, which is good.

Plotted are the Mode 5 output, which gets turned on and off appropriately, as well as the Mode 7 output which is just going rail-to-rail (where the rail is set by the filter bank's limiter here).

Images attached to this comment
H1 ISC (CAL, CDS, IOO, ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 22:47, Tuesday 04 August 2015 (20241)
IFO Maintenance Recovery Complete at 05:20 UTC (22:17 PDT)
J. Kissel, K. Izumi, P. Thomas, B. Weaver, for the whole team.

It was another rough Tuesday. Thanks all for your help and patience.

The biggest issues (time delays in recovery) were (all times PDT):
09:00 - 10:27 (~1.5 hrs) Debugging the Guardian code core before the blanket upgrade LHO aLOG 20210
12:26 - 13:10 (~0.5 hrs) Finding that HEPI foton files all needed a "foton -c" cleansing from bad auto-quacking over the years LHO aLOG 20212
12:50 - 13:25 (~0.5 hrs) Finding that ETMX ESD driver had been railed negative since the night before LHO aLOG 20203
14:15 - 15:40 (~1.5 hrs) End-station Beckhoff finally crashing, killing the ETMX ESD driver *again* during recovery, and it going unnoticed LHO aLOG 20219
16:10 - 16:40 (~0.5 hrs) LSC_CONFIG guardian becomes undead, is killed, which requires a DAQ restart, which fails miserably LHO aLOG 20228
18:05 - 19:15 (~1.5 hrs) PSL rotation stage roulette, which eventually requires restarting the corner station Beckhoff slow-controls chassis LHO aLOG 20235
19:34 - 21:40 (~2.0 hrs) OMC_LOCK guardian has logic that relies on two channels having a specific data rate, and those channels' data rate was changed today during the OMC / LSC science frame channel cleanup LHO aLOG 20237

I attach a picture of the full-whiteboard, recovery diary. 
Non-image files attached to this report
H1 GRD
jameson.rollins@LIGO.ORG - posted 22:38, Tuesday 04 August 2015 (20240)
H1 IFO Guardian top node briefly reports OK

For maybe the first time ever, the H1 IFO top node reported ALL_NODE_OK, and indicated OK==True for the entire H1 IFO:

This is a reflection of the fact that every node in the system is reporting OK==True, which in turn means that all nodes are operating correctly and their systems are in their nominal states.  In other words,

H1 guardian is ready for O1

Of course it dropped out almost immediately as the ISS was turned off for commissioning, but still...

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 22:22, Tuesday 04 August 2015 - last comment - 09:26, Wednesday 05 August 2015(20238)
CDS maintenance Summary

PEM channel renaming

WP5414. Robert, Dave:

h1pemcs was modified to rename the accelerometer channels PR1, SR1, MC to PRM, SRM, IMC. Change was applied to DAQ when it was restarted this morning.

h1isiham3 phantom ini change

Hugh, Dave

After the 07:55 restart of h1isiham3 and a DAQ restart the front end was reporting an ini file mismatch. We are not sure this was a true mismatch since the ini file had not been modified since Monday. We restarted h1isiham3 which cleared the alert.

h1susey computer

WP5413. Carlos, Dave, Jim:

The original h1susey computer was re-installed to remove the IOP glitching. The ETMY model is running longer than when this machine was last in service (was 51 µs, now 55 µs). We checked that the new SUSETMYPI model runs well with this hardware.

Reboot digital video servers

Dave

The digital video servers h1digivideo[0,1,2] were power cycled as part of the PM reboot project. These machines have been running for 280, 280 and 251 days respectively. We saw no problems with reboots.

Digital video snapshot problem. FRS3410.

Jenne, Dave, Jim, Elli, Kiwamu:

Jenne found that certain digital cameras are producing TIFF files which cannot be viewed by most graphics tools. The reboot of the video servers did not fix this. We tried power cycling the ITMX spool camera, which also did not help. We tracked this down to the "Frame Type = Mono12" change in the camera configuration. Kiwamu and Elli have methods to read the 32-bit TIFF files. This problem is now resolved; FRS3410 will be closed.

EPICS Gateway

Dave

Due to the extended downtime of h1ecatx1, CA clients on the H1FE network did not reconnect to this IOC (EDCU, conlog, Guardian). I restarted the h1slow-h1fe EPICS gateway which prompted the clients to reconnect to h1ecatx1.

DAQ Restarts

Jim, Dave

There were several DAQ restarts. The last restart was quite messy, details in earlier alog.

The FPGA duotone channels were added to the frame broadcaster for DMT timing diagnostics.

Comments related to this report
david.barker@LIGO.ORG - 09:26, Wednesday 05 August 2015 (20251)

The restart log for Tuesday 04Aug2015 is attached. No unexpected restarts.

Non-image files attached to this comment
H1 ISC
kiwamu.izumi@LIGO.ORG - posted 22:12, Tuesday 04 August 2015 - last comment - 05:06, Wednesday 05 August 2015(20237)
OMC guardian hustle

Jamie, Kiwamu,

Even though we didn't think that the change we made to the OMC model (aLOG 20197) would impact locking, it actually did.

The OMC guardian kept stopping because the code tries to access an out-of-range index in a data array. After some investigation, it turned out to be due to the data rate of H1:OMC-PZT1_MON_DC_OUT_DQ, whose sampling rate had been changed from 16 kHz to 512 Hz this morning. Since the OMC guardian accesses the data assuming a 16 kHz sampling rate, some lines in the code led it into this bad situation. The error happened in FIND_CARRIER, in which the code sweeps the OMC length and attempts to find peaks for the 45 MHz sidebands.

Jamie and I did a quick hack -- we decimated another channel, H1:OMC-DCPD_SUM_OUT_DQ, from 16 kHz down to 512 Hz using scipy.signal.decimate() so that the relevant lines are consistent at 512 Hz. We tested it once and it worked.
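For reference, the decimation step can be sketched as below. This is an illustration under assumptions, not the actual guardian code: the function name and test signal are made up, and I assume the "16 kHz" fast rate is the usual 16384 Hz, so the overall factor is 32 (SciPy's docs recommend keeping each decimation factor below ~13, hence two stages).

```python
import numpy as np
from scipy import signal

FS_FAST = 16384   # assumed fast sampling rate, Hz
FS_SLOW = 512     # target rate, Hz (overall factor of 32)

def downsample_to_512(x):
    """Decimate a 16384 Hz record to 512 Hz in two stages,
    with anti-alias filtering applied by scipy.signal.decimate."""
    y = signal.decimate(x, 8, zero_phase=True)   # 16384 Hz -> 2048 Hz
    y = signal.decimate(y, 4, zero_phase=True)   # 2048 Hz  ->  512 Hz
    return y

# One second of fast data (a 5 Hz tone) becomes 512 slow samples
x = np.sin(2 * np.pi * 5.0 * np.arange(FS_FAST) / FS_FAST)
y = downsample_to_512(x)
print(len(y))  # 512
```

With both channels at 512 Hz, index arithmetic written for one sampling rate stays consistent for the other.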

Comments related to this report
evan.hall@LIGO.ORG - 05:06, Wednesday 05 August 2015 (20247)

Somehow it still can't find the 00 mode on its own.

As for the warning message about bad filter coefficients, I changed the order of the Chebyshev from 8 to 4, and that seems to make the warning message go away.
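One plausible reading of the bad-coefficient warning (hedged; I don't have the guardian code in front of me) is that a high-order IIR design expressed as a single transfer function has numerically ill-conditioned polynomial coefficients. The sketch below uses made-up band edges purely for illustration: lowering the order reduces the conditioning problem, and second-order sections avoid it altogether.

```python
from scipy import signal

fs = 16384.0            # assumed sample rate, Hz
wn = 200.0 / (fs / 2)   # hypothetical normalized corner frequency

# 8th-order Chebyshev I lowpass in (b, a) form: numerically fragile
b8, a8 = signal.cheby1(8, 1, wn, btype='low')

# 4th-order design: better-conditioned coefficients, as in the fix above
b4, a4 = signal.cheby1(4, 1, wn, btype='low')

# Second-order sections keep the full order without the precision issue
sos = signal.cheby1(8, 1, wn, btype='low', output='sos')
```

If the full 8th-order response is actually needed, the SOS form is usually the safer long-term fix than dropping the order.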

H1 TCS (SYS)
evan.hall@LIGO.ORG - posted 20:14, Tuesday 04 August 2015 - last comment - 06:52, Wednesday 05 August 2015(20233)
Rotation stage roulette

Patrick, Jeff, Evan

We spent a few minutes cooking the IX compensation plate while trying to make the TCS rotation stage behave.

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 20:26, Tuesday 04 August 2015 (20236)
This was after the Beckhoff chassis was power cycled.
matthew.heintze@LIGO.ORG - 06:52, Wednesday 05 August 2015 (20248)

If, after the Beckhoff chassis is cycled, you do not first "refind home", you will see these issues with the rotation stage not knowing what angle to go to, since it loses track of where it is. I can't say I have noticed the waveplate acting up after this step is taken (but maybe it has without me noticing).

 

Also note that "search for home" does not necessarily take the waveplate to the minimum angle. You should then press "go to minimum power". So after a Beckhoff restart, here at LLO we usually search for home first, so that the rotation stage finds its zero point again, then go to minimum power, and then start operating it from there. A known issue all along is that as the waveplate rotates to home it can go through a brief period where it allows maximum power into the IMC or onto the ITM CPs.

H1 SEI
hugh.radkins@LIGO.ORG - posted 14:49, Tuesday 04 August 2015 - last comment - 09:26, Wednesday 05 August 2015(20212)
All LHO HEPI Foton Files run through foton -c

JimW, HughR

We took all the SEI systems down with Guardian, ran foton -c on each HEPI foton file, and then loaded the modified files. Re-isolated all platforms.

Comments related to this report
jim.warner@LIGO.ORG - 09:26, Wednesday 05 August 2015 (20252)

I've looked at a few archived foton files to see if this caused any significant changes in any coefficients. Mostly what I've found are changes in the order of header information, but H1HPIBS.txt shows a bunch of changes, all at the ~10^-6 level, so probably still harmless. Also, these changes are likely the result of a known (and now resolved) issue with quacking foton files with Matlab.

H1 PSL
jason.oberling@LIGO.ORG - posted 13:34, Tuesday 04 August 2015 - last comment - 05:27, Wednesday 05 August 2015(20204)
PMC/ISS TF Measurements

J. Oberling, P. King

Today we measured the OLTF of the ISS inner loop and of the PMC.  This was done in response to LLO's recent measurement of the same, documented here.  Peter has all the TF data and will post it as a comment to this report.  For the ISS inner loop, the gain was changed from 6 dB to 16 dB in the last couple of days, so we measured the TF at both 6 dB and 16 dB of gain to see how the increased gain changed things.  For the PMC we ended up increasing the gain to 22 dB (up from 18 dB).  This brought the UGF to ~7.5 kHz, closer to what we expect from E1200385.

I also had a chance to turn off the ISS without disturbing anything else and calibrate the 2 PDs discussed in aLOG 20043, thereby completing the work in WP 5391.  The new gain values for these PDs are as follows:

Comments related to this report
peter.king@LIGO.ORG - 04:42, Wednesday 05 August 2015 (20244)
Attached is the measured pre-modecleaner OLTF.  Two values of the gain slider were used.  The initial value, 18 dB, was where it was set when we started the measurements.  To increase the UGF we increased the gain to 22 dB.

With the gain slider at 18 dB, the UGF is ~3.56 kHz with a phase margin of 70 deg.  With the gain slider at 22 dB, the UGF is 7.7 kHz with a phase margin of 75 deg.

We could push the gain a bit higher but that comes at the cost of robustness.
Images attached to this comment
peter.king@LIGO.ORG - 05:27, Wednesday 05 August 2015 (20246)
Attached is the first loop OLTF.  It should be noted that this measurement was performed whilst Robert and Cheryl were working inside the PSL Enclosure and so the HEPA fans were on (ditto for the PMC measurement).  Plots are for the old value of gain slider and for the current value.  Currently the UGF is about ~65 kHz with a phase margin of ~25 deg.  The measurements are consistent with those taken around the time installation was completed and more recent measurements done by Kiwamu and Sudarshan.

We took a high frequency measurement but unfortunately the results don't seem right.
Images attached to this comment
H1 SUS
jenne.driggers@LIGO.ORG - posted 13:58, Monday 03 August 2015 - last comment - 21:49, Wednesday 05 August 2015(20099)
Post-lock drift of PR3, SR3 in pitch

Rana pointed out to me that the PR3 and SR3 suspensions may still have some shift due to wire heating during locks (which we won't see until a lockloss, since we control the angles of mirrors during lock).

Attached are the oplev signals for PR3 and SR3 at the end of a few different lock stretches, labeled by the time of the end of the lock. The lock ending 3 Aug was 14+ hours.  The lock ending 31 July was 10+ hours.  The lock ending 23 July was 5+ hours.  The lock ending 20 July was 6+ hours.

The PR3 shift is more significant than the SR3 shift, but that shouldn't be too surprising, since there is more power in the PRC than the SRC, so there is going to be more scattered light around PR3. Also, PR3 has some ASC feedback to keep the pointing.  SR3 does not have ASC feedback, but it does have a DC-coupled optical lever.  SR3 shifts are usually a few tenths of a microradian, but PR3's is often one or more microradians.  Interestingly, the PR3 shift is larger for medium-length locks (1 or 1.5 urad) than for very long locks (0.3 urad).  I'm not at all sure why this is.

This is not the end of the world for us right now, since we won't be increasing the laser power for O1; however, we expect that this drift will increase as we increase the laser power, so we may need to consider adding even more baffling to the recycling cavity folding mirrors during some future vent.

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 10:05, Wednesday 05 August 2015 (20258)

Note - PR3 and SR3 have 2 different baffles in front of them which do different things.  PR3 HAS a baffle which specifically shields the wires from the beam.  SR3 does not have this particular baffle; however, I believe we have a spare which we could mount at some point if deemed necessary.

Attached is a picture of the PR3 "wire shielding baffle" (D1300957), showing how it shields the suspension wires at the PR3 optic stage.  In fact, a picture of this baffle was taken from the control room and is in aLOG 8941.

The second attachment is a repost of the SR3 baffle picture from alog 16512.

Images attached to this comment
rana.adhikari@LIGO.ORG - 21:49, Wednesday 05 August 2015 (20279)AOS

From the pictures, it seems like we could get most of the rest of the baffling we need if the wire going underneath the barrel of PR3 were to be covered. Perhaps that's what accounts for the residual heating. Also, if it became a problem, perhaps we could get an SR3 baffle with a slightly smaller hole to cover its wires.
