Reports until 07:36, Thursday 03 December 2015
H1 CDS
thomas.shaffer@LIGO.ORG - posted 07:36, Thursday 03 December 2015 (23931)
Can't log into the workstations in the control room

Peter King arrived and couldn't log in, so I tried my account on two different machines with no luck. LLO isn't having this issue, so it must be a local problem, but I don't think it should be related to the full file system.

H1 General
peter.king@LIGO.ORG - posted 04:44, Thursday 03 December 2015 (23929)
remote MEDM screens
As per TJ's note about the hard disc space being filled, the remote screens and control room shots are amiss. Having said that, a number of the HEPI and ISI ones are still available, if that helps debug the issue.
H1 CDS (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 04:42, Thursday 03 December 2015 (23928)
/ligo system is full again I think

I'm seeing signs very similar to last time this happened (alog23006).

I first noticed something was up when I couldn't get the Lockloss tool to work due to an IOError of no space left on the device, and then the Lock Clock died. The clock works similarly to the reservation system: it repeatedly writes a new file with the updated times and deletes the old one, so when the file system is full it can't write a new one.
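
For anyone curious, here is a minimal sketch of the write-new-then-delete-old pattern described above and how it fails once the file system is full. This is not the actual Lock Clock code; the directory and file names are made up.

    import errno, os, time

    STATE_DIR = "/ligo/lockclock"   # hypothetical location on the full /ligo partition

    def update_clock_file():
        """Write a fresh state file with the updated times, then remove the old ones."""
        new_path = os.path.join(STATE_DIR, "clock_%d.txt" % int(time.time()))
        old_files = [f for f in os.listdir(STATE_DIR) if f.startswith("clock_")]
        try:
            with open(new_path, "w") as f:
                f.write("lock start: ...\nelapsed: ...\n")
        except IOError as e:
            if e.errno == errno.ENOSPC:   # shows up as "No space left on device"
                raise RuntimeError("/ligo is full -- cannot write a new clock file")
            raise
        for old in old_files:             # only delete the old files once the new write succeeded
            os.remove(os.path.join(STATE_DIR, old))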

I have deleted some stuff in my folder in hopes that it would help, but I don't really have any large files that would make a big dent.

Operators: VerbalAlarms has also crashed because it cannot write its notifications. I started it up on the Alarm Handler computer without the "-w" option so it will still run for now, but none of its notifications are being recorded. Please stop this process and start a new one when the issue is fixed. Same startup as before: type "VerbalAlarms" into the AH computer terminal (the -w and -l options are already aliased in).

LHO General
thomas.shaffer@LIGO.ORG - posted 04:26, Thursday 03 December 2015 (23927)
Back in Observing

Back at 12:21 UTC. Had to run through an initial alignment, where I struggled with locking ALSX. I couldn't seem to get the power above 0.5, similar to what Corey was going through on Nov 29 (alog23805). I'm not sure how I fixed it. I fiddled with ETMX, ITMX, and a tiny bit of TMSX, but when the power suddenly shot up near 1.0, I swear it was at values that I had already passed through for all three.

LHO General (Lockloss)
thomas.shaffer@LIGO.ORG - posted 02:21, Thursday 03 December 2015 - last comment - 02:40, Thursday 03 December 2015(23925)
Lockloss 10:17 UTC

Not sure of the cause yet. Control signal Striptools didn't show any signs of struggle. Seismic looks calm, no wind.

Comments related to this report
thomas.shaffer@LIGO.ORG - 02:40, Thursday 03 December 2015 (23926)

HEPI tidal seems to have been good (1st attachment).

Lockloss tool is failing due to "IOError: [Error 28] No space left on device"

 

Looks like I am going to have to do an initial alignment, pretty much everything seems very misaligned...odd.

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 00:01, Thursday 03 December 2015 (23924)
Ops Owl Shift Transition
H1 General
travis.sadecki@LIGO.ORG - posted 00:00, Thursday 03 December 2015 (23923)
OPS Eve shift summary

Title: 12/2 Eve Shift 0:00-8:00 UTC (16:00-24:00 PST).  All times in UTC.

State of H1: Observing

Shift Summary: Another very quiet shift.  Only 4 ETMy saturations.  Going on 31 hours of lock.

Incoming operator: TJ

Activity log:

0:45 Commissioning mode for Hugh to change HEPI limits (~1 min.)

0:49 Commissioning mode for Evan to change tidal limits (~1 min.)

H1 General
evan.goetz@LIGO.ORG - posted 22:42, Wednesday 02 December 2015 (23922)
LISA Pathfinder launch online outreach

Evan G, Travis S, Nutsinee K

 

Congratulations to the LISA Pathfinder team for the successful launch today! The LHO and LLO control rooms both took part in a Google Hangouts interactive live-stream in the lead-up to and following the launch of LISA Pathfinder. Besides a few technical difficulties with Google Hangouts/YouTube, I think it was a nice experience. We were able to answer a few questions about how LIGO and LISA are different, our current operational status, etc. We now look forward to the interesting scientific output from the mission!

Congratulations again! :)

H1 General
travis.sadecki@LIGO.ORG - posted 21:21, Wednesday 02 December 2015 (23921)
OPS Eve mid-shift summary

Other than a brief excursion to Commissioning for updating HEPI drive limits (see aLog 23918), we have been locked in Observing for over 28 hours now.  Congrats to the LISA Pathfinder mission for a successful launch.

H1 ISC (SEI)
hugh.radkins@LIGO.ORG - posted 16:58, Wednesday 02 December 2015 (23918)
HEPI Drive Limits increased

In hopes of not losing the lock, and with approval from MLandry, we increased the limits from 500 to 700 um. These are the limits changed (one way to set them is sketched after the list):

H1:HPI-ETMX_ISCINF_LONG_LIMIT

H1:HPI-ETMY_ISCINF_LONG_LIMIT

H1:LSC-X_TIDAL_CTRL_LIMIT &

H1:LSC-Y_TIDAL_CTRL_LIMIT
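
For reference, a minimal sketch of how limits like these could be written from a control-room Python session. This is not the procedure we actually used; it assumes pyepics is installed and that you have write permission on these channels.

    from epics import caget, caput   # pyepics, assumed available on the workstation

    LIMIT_CHANNELS = [
        "H1:HPI-ETMX_ISCINF_LONG_LIMIT",
        "H1:HPI-ETMY_ISCINF_LONG_LIMIT",
        "H1:LSC-X_TIDAL_CTRL_LIMIT",
        "H1:LSC-Y_TIDAL_CTRL_LIMIT",
    ]

    for ch in LIMIT_CHANNELS:
        old = caget(ch)
        caput(ch, 700, wait=True)    # raise the limit from 500 to 700 (um)
        print("%s: %s -> %s" % (ch, old, caget(ch)))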

 

The post-change trend looks like we may be out of the woods: the attached plot shows a turnover, and the tide may pull us back. If we last another 6 to 12 hours, we'll really know something.

Images attached to this report
H1 ISC
keita.kawabe@LIGO.ORG - posted 16:06, Wednesday 02 December 2015 (23914)
Some noise and a wandering peak that are coherent with RFAM and are common to PSL room and CER

The first attachment shows the RF45_AM_CTRL_OUT (and RF9) to DARM coherence when RF45 AM was not glitching. The left column is after we pulled the RFAM measurement unit and coupler out of the PSL room, the right is before we installed the RF AM measurement unit and coupler, but the distinction between left and right is not that important.

There are two things worth noting:

1. When it's not glitching, there's some coherence between RFAM and DARM at around 170-190Hz, 540-580Hz, and 810-860Hz.

The coherence is almost common to both RF45 and RF9 (which is monitoring RF45 in CER), so this is probably the harmonic generator or some common environmental noise coupling into two different distribution amplifier outputs.

Seems like the shape of the noise bumps in RF45_AM_CTRL_OUT_DQ is not stable over time. Also, this might have become somewhat worse over time.

2. Wandering peak between 600 and 700 Hz (and its second harmonic) that comes and goes.

It's apparent in the right column, but there's also one trace in the left (black trace, Nov/22).

In my plot the measurement times were arbitrarily and very sparsely chosen, so it's not clear if the peak is gone after Nov/22. The peak was not present in the Nov/07 data point either (right, brown traces).

Coherence between CARM and both the RF45 and RF9 control outputs is very high for this peak.

I can see the second harmonic but not 3rd and higher (second attachment).

The peak also seems to be visible in FOM reference trace from 2015/Oct/14 (third attachment).

I wonder if detchar identified the cause of this.
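
In case anyone wants to reproduce the coherence traces, here is a rough gwpy sketch. The channel names and GPS times are my guesses/placeholders, not necessarily what was used for the attached plots; pick a locked, non-glitching stretch.

    from gwpy.timeseries import TimeSeries

    start, end = 1132185617, 1132186641   # placeholder 1024 s stretch (around Nov 22)
    darm = TimeSeries.fetch('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
    rf45 = TimeSeries.fetch('H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ', start, end)  # check exact DAQ name

    coh = rf45.coherence(darm, fftlength=8, overlap=4)   # 0.125 Hz resolution
    plot = coh.plot()
    ax = plot.gca()
    ax.set_xscale('log')
    ax.set_xlim(10, 2000)
    ax.set_ylabel('Coherence')
    plot.savefig('rf45_darm_coherence.png')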

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:04, Wednesday 02 December 2015 (23917)
Ops Day Shift Summary
Activity Log: All Times in UTC (PT)

16:00 (08:00) Take over from JT
17:10 (09:10) Bubba – Finished with tractor clearing the roads
18:27 (10:27) CW Injection running alert
18:30 (10:30) CW injection inactive alert
19:55 (11:55) Kyle & Gerardo – Going to X2-8 
20:03 (12:03) Out of Observing for vacuum work at X2-8 (WP #5632)
20:16 (12:16) Kyle & Gerardo – Finished at X2-8, coming back to CS. IFO back to Observing
20:47 (12:47) Bubba – Going to Mid-X to work on actuators
21:11 (13:11) John- Going to Mid-X
21:57 (13:57) John & Bubba – Back from Mid-X
00:00 (16:00) Turn over to Travis


Shift Summary:

Support: None needed
 
Incoming Operator: Travis

Shift Detail Summary: IFO has been locked in Observing mode for almost 24 hours. Switched to Commissioning mode for 13 minutes, while Kyle & Gerardo were performing vacuum work at X2-8 (WP #5632). The range has generally been over 80Mpc for most of the shift. There were 8 ETM-Y saturations during the shift. Did not observe any RF45 glitches associated with the ETM-Y saturations. Wind has been building and is now a light to gentle breeze (4 to 12mph). Seismic has been flat all shift (around 0.03um/s). Microseism has a slight upward slope and is centered around 0.3um/s.


LHO VE
kyle.ryan@LIGO.ORG - posted 15:50, Wednesday 02 December 2015 (23916)
1210 hrs. local -> Valved-in new ion pump located at beam tube port X2-8
Resulting pressure reduction at nearest BT gauge is as expected, i.e. reduced by a factor of ~2.5.   

Also, there is now a large discrepancy between the pump-current-converted pressure at the ion pump controller and the independent gauge: the gauge voltage converts to 2e-9 torr, while the ion pump controller indicates 1e-10 torr (converting 3 uAmps). The 20 hour pump down curve of captured gauge data prior to opening the valve is as expected, so we are ignoring the pump-current-converted pressures at these small values.
H1 SEI (ISC)
hugh.radkins@LIGO.ORG - posted 15:36, Wednesday 02 December 2015 (23915)
Arm length as seen by Bleed Off to HEPI still racing to lock loss

In 24 hours, the lock stretch has already used 80% of the doubled range on the HEPI bleed off.  This is not 'tidal.'  The attached plot shows the trends of the bleed to HEPI and the LVEA and RefCav temps.  On this 3 day trend, the big dip in the LVEA temp looks very coincident with the step run-up on the HEPI drive. But if this increase in LVEA temp is getting to the refcav and increasing its length, the sign is wrong at HEPI: an increase in refcav length should lead to an increase in arm length.  Also, while not known with certainty, the mechanism for getting a temperature change into the refcav certainly has a pretty large delay.  Any thoughts?

If this trend does not turn around soon, the lock will be lost unless we further increase the limit.  If the drive hits 0.7 mm, angular integrity will start to be compromised as the IPS on HEPI reaches its linear limits.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 14:45, Wednesday 02 December 2015 (23912)
November DB & DES Cabinet Data
Posted are the November data for the 3IFO desiccant cabinet in the LVEA and the two Dry Box storage cabinets in the VPW. No interesting or disturbing trends are apparent.   
Non-image files attached to this report
H1 General (PSL)
edmond.merilh@LIGO.ORG - posted 14:06, Wednesday 02 December 2015 (23909)
PSL Status and Past 14 Day Trends

 Laser Status:

SysStat is good
Front End power is 31.22W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED
 
PMC:
It has been locked 8.0 days, 4.0 hr 0.0 minutes (should be days/weeks)
Reflected power is 1.654Watts and PowerSum = 25.14Watts.
 
FSS:
It has been locked for 0.0 days 22.0 h and 53.0 min (should be days/weeks)
TPD[V] = 1.41V (min 0.9V)
 
ISS:
The diffracted power is around 7.565% (should be 5-9%)
Last saturation event was 1.0 days 4.0 hours and 5.0 minutes ago (should be days/weeks)
Images attached to this report
H1 General (DetChar)
thomas.shaffer@LIGO.ORG - posted 07:39, Wednesday 02 December 2015 - last comment - 14:58, Wednesday 02 December 2015(23894)
Tractor clearing snow in parking lot, visible dip in range

At 15:06 UTC Bubba told me that he was headed out to clear the parking lots of snow. Since then there has been about a 5Mpc drop in range. Coincidence?

Comments related to this report
jeffrey.bartlett@LIGO.ORG - 09:46, Wednesday 02 December 2015 (23898)
Bubba finished using the tractor for snow removal at 17:10 (09:10).
jordan.palamos@LIGO.ORG - 14:58, Wednesday 02 December 2015 (23913)PEM

Attaching a plot that compares the DARM spectrum during the tractor activity (15:40 UTC) and an earlier quiet time from the same lock (8:00 UTC). It clearly shows excess noise from ~80-190 Hz.
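
For anyone who wants to remake this comparison, a rough gwpy sketch (the GPS times below are my approximate conversions of 8:00 and 15:40 UTC on Dec 2 and should be double-checked):

    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:CAL-DELTAL_EXTERNAL_DQ'
    quiet = TimeSeries.fetch(CHAN, 1133078417, 1133079017)   # ~8:00 UTC, 600 s
    noisy = TimeSeries.fetch(CHAN, 1133106017, 1133106617)   # ~15:40 UTC, 600 s

    asd_quiet = quiet.asd(fftlength=8, overlap=4)
    asd_noisy = noisy.asd(fftlength=8, overlap=4)

    plot = asd_quiet.plot(label='quiet (8:00 UTC)')
    ax = plot.gca()
    ax.plot(asd_noisy, label='tractor (15:40 UTC)')
    ax.set_xscale('log')
    ax.set_yscale('log')
    ax.set_xlim(40, 400)
    ax.legend()
    plot.savefig('darm_tractor_comparison.png')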

Images attached to this comment
H1 CAL (COC, DetChar, SUS, VE)
jeffrey.kissel@LIGO.ORG - posted 14:24, Tuesday 01 December 2015 - last comment - 19:48, Wednesday 02 December 2015(23866)
Charge Measurement Update; Measurements Continue to Agree with Calibration Line Actuation Strength, ETMY Charge Slowing Down?
J. Kissel, S. Karki, B. Weaver, R. McCarthy, G. Merano, M. Landry

After gathering the weekly charge measurements, I've compared the H1 SUS ETMY ESD's relative Pitch/Yaw actuation strength change (as measured by the optical levers) against the Longitudinal Actuation Strength (as measured by PCAL / ESD calibration lines). As has been shown previously (see LHO aLOG 22903), the pitch/yaw strength's slope trends very nicely along with the longitudinal strength change -- if you take a quick glance. Upon closer investigation, here are things that one begins to question:
(1) We still don't understand why the optical lever actuation strength assessments are offset from the longitudinal strength assessment after the ESD bias sign flip. 
(2) One *could* argue that, although prior to the flip the eye-ball-average of oplev measurements tracks the longitudinal strength, after the flip there are periods where two quadrants (magenta, in pitch, which is LR, from Oct 25 to Nov 8; black, in yaw, which is UR, from ~Nov 11 to Dec 06) track the longitudinal strength. As such, one *could* argue that the longitudinal actuation strength trend is dominated by a single quadrant's charge, instead of the average. Maybe.
(3) If you squint, you *could* say that the longitudinal actuation strength increase rate is slowly tapering off, whereas the optical lever strength increase *may* be remaining constant. One could probably also say that the rate of strength increase is different between oplevs and cal lines (oplev P/Y strength is increasing faster than cal line L strength).
All this being said, we are still unsure whether we want to flip the ETMY ESD bias sign again before the observation run is out. Landry suggests we either do it mid-December (say the week of Dec 14), or not at all. So we'll continue to track via optical lever, and compare against the longitudinal estimate from cal lines.

Results continue to look encouraging for ETMX -- ever since we've had great duty cycle, and turned off the ETMX ESD Bias when we're in low-noise and/or when the IFO is down, the charging rate has decreased. Even though the actuation strength of ETMX doesn't matter at the few % level like it does for ETMY (because ETMX is not used as the DARM actuator in nominal low-noise, so it doesn't affect the IFO calibration), it's still good to know that we can get an appreciable effect by simply reducing the bias voltage and/or turning it off for extended periods of time. This again argues for going the LLO route of decreasing the ETMY bias by a factor of 2, which we should certainly consider doing after O1.

---------------
As usual, I've followed the instructions from the aWiki to take the measurements. I had much less trouble today than I had last week gathering data from NDS, which is encouraging. One thing I'd done differently was wait a little longer before requesting the gathering and analysis (I waited until the *next* measurement had gone through the -9.0 and -4.0 [V] bias voltage points and started the 0.0 [V] point, roughly 5 minutes after the measurement I wanted to analyze ended). As such, I was able to get 6 and 4 oplev data points to compose the average for ETMX and ETMY, respectively (as opposed to the 3 and 1 I got last week; see LHO aLOG 23717).

Once all data was analyzed, I created the usual optical-lever-only assessment using 
/ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/Long_Trend.m
and saved the data to here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-01_H1SUSETMY_ChargeMeasResults.mat

However, I'd asked Sudarshan to gather the latest calibration line estimates of the ESD longitudinal actuation strength (aka kappa_TST), which he produced with his matlab tool that gathers the output of the GDS function "Standard Line Monitor." (He's promised me an updated procedure and an aLOG so that anyone can do it.) This is notably *not* the output of the GDS pipeline, but the answers should be equivalent. His data lives here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-01_Sep-Oct-Nov_ALLKappas.mat

Finally, I've made the comparison between oplev and cal line strength estimates using
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/CAL_PARAM/compare_chargevskappaTST_20151201.m
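
For anyone who wants to repeat the oplev vs. kappa_TST comparison outside of MATLAB, something along these lines should work. The field names inside the .mat files are hypothetical; print the keys and substitute the real ones first.

    from scipy.io import loadmat
    import matplotlib.pyplot as plt

    charge = loadmat('2015-12-01_H1SUSETMY_ChargeMeasResults.mat')
    kappas = loadmat('2015-12-01_Sep-Oct-Nov_ALLKappas.mat')
    print(sorted(charge.keys()), sorted(kappas.keys()))   # see what's actually stored

    # Hypothetical field names -- replace with the real ones from the printout above.
    t_oplev = charge['times'].ravel()
    pct_oplev = charge['strength_change_pct'].ravel()
    t_kappa = kappas['times'].ravel()
    kappa_tst = kappas['kappa_tst'].ravel()

    fig, ax = plt.subplots()
    ax.plot(t_oplev, pct_oplev, '.', label='oplev P/Y strength change [%]')
    ax.plot(t_kappa, 100 * (kappa_tst - 1), '.', label='100*(kappa_TST - 1) [%]')
    ax.set_xlabel('GPS time')
    ax.legend()
    fig.savefig('etmy_charge_vs_kappaTST.png')
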
Images attached to this report
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:25, Wednesday 02 December 2015 (23906)SUS, SYS, VE
J. Kissel, G. Merano, J. Worden

In order to facilitate figuring out what's left on the chambers that might be charging the test masses (and also to compare against LLO who has a few bonkers quadrants that had suddenly gained charge), I attach a drawing (apologies for my out-of-date SolidWorks version) of what gauges remain around the end-station chambers. 

The "Inficon wide-range gauge" is the BPG402-Sx ATM to UHV Gauge,
and the "Gauge Pair" are separate units merged together by LIGO.

Also, PS -- we're valving in the ol' ion pumps today (in their new 250 [m]-from-the-test-masses locations). Kyle and Gerardo are valving in the X-arm today (stay tuned for details from them).
Images attached to this comment
john.worden@LIGO.ORG - 19:48, Wednesday 02 December 2015 (23920)

Not sure what Jeff meant by "ol' ion pumps". Kyle and Gerardo valved in a "bran' new ion pump" at the 250 m location. The ol' ion pump remains mounted in the end station but valved out from the chamber. Only the X-arm pump has been valved in at the 250 m location. The Y-arm pump has yet to be baked prior to opening it to the tube.

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=23916

H1 SEI
jim.warner@LIGO.ORG - posted 16:03, Friday 20 November 2015 - last comment - 17:33, Wednesday 02 December 2015(23610)
BSC Blend switching glitches

Because the microseism is down to a level similar to what we saw over the summer, I asked Patrick to put the ISIs in the more wind-tolerant 90 mHz blends. Wind is also low right now, but it can come up suddenly, while microseism takes days to rise. We also need more data about what conditions necessitate switching. After Evan and Keita finished in the PSL, Patrick started relocking but was having trouble with the green. Fiddling the TMS alignments fixed it. I looked to see if the ISI positions had changed; they hadn't moved, but I found something interesting.

Most of the ISIs showed a relatively smooth switch from the 45 mHz to the 90 mHz blends, except for ETMY. ITMY is the first plot and is pretty consistent with all the other chambers except ETMY. Patrick switched the X & Y blends at the same time, in the middle of the time span I grabbed in the plot. You can kind of tell because the ITMY location mon range gets a little smaller and there are fewer low-frequency swings on X & Y, due to the lower gain peaking at higher frequency of the 90 mHz blend. The other DOFs don't seem to see anything.

ETMY, however, shows a huge 30 micron shift in the Y direction (second plot), which is visible in all DOFs. This chamber was only running the 45 mHz blend in the Y DOF, so only the Y blend got switched on this chamber, at about 21:48 UTC today. No idea why it should be any different from the other chambers, although BrianL did note a while ago that the ETMY STS is poorly centered (maybe relevant because the STS is used for sensor correction, so it gets summed with the CPS signal before the blend). It would be good to get some time to see if this is repeatable and truly limited to ETMY or not.

Images attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 17:33, Wednesday 02 December 2015 (23919)
For some discussion about how to fix this, see the SEI log, entry 887.
https://alog.ligo-la.caltech.edu/SEI/index.php?callRep=887

short answer - 
1) SEI team needs to update the blend-switch code to wait longer during the switch process so that the transients can settle down.
2) Don't try to change the blend filters while the tidal offload is pushing hard on HEPI - wait at least 200 seconds after it finishes (see the sketch after this list). 
3) SEI team needs to have less tilt on HEPI when it gets moved by the tidal offload.
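
As an illustration only, here is roughly what the guards in 1) and 2) could look like in a switching script. switch_blend() and the drive channel here are placeholders, not the real SEI code.

    import time
    from epics import caget

    TIDAL_DRIVE = "H1:HPI-ETMY_ISCINF_LONG_OUTMON"   # placeholder channel for the tidal offload drive

    def safe_blend_switch(switch_blend, dof="Y", settle=200, quiet_counts=1.0):
        """Wait for the tidal offload to go quiet, switch the blend, then let transients settle."""
        while abs(caget(TIDAL_DRIVE)) > quiet_counts:   # 2) don't switch while tidal is pushing hard
            time.sleep(10)
        time.sleep(settle)          # wait at least 200 s after it finishes
        switch_blend(dof)           # placeholder for the real blend-switch call
        time.sleep(settle)          # 1) wait longer so the switch transients can settle down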
