H1 CDS (GRD)
david.barker@LIGO.ORG - posted 11:08, Tuesday 18 July 2017 (37583)
h1guardian0 rebooted

Dave and TJ:

We rebooted h1guardian0 because its load average has regularly been in the 50s. TJ recovered the GRD nodes following the reboot.
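
For reference, watching the load that prompted the reboot needs nothing beyond the standard library; a minimal sketch (not the tool we actually used), with the warning threshold chosen only to match the "in the 50s" observation:

    # Minimal sketch: warn when the 1/5/15-minute load averages on the
    # Guardian host climb into the 50s.  Threshold is illustrative.
    import os
    import socket
    import time

    LOAD_WARN = 50.0

    while True:
        one, five, fifteen = os.getloadavg()
        if max(one, five, fifteen) > LOAD_WARN:
            print("%s load high: 1m=%.1f 5m=%.1f 15m=%.1f"
                  % (socket.gethostname(), one, five, fifteen))
        time.sleep(60)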

H1 SEI
hugh.radkins@LIGO.ORG - posted 10:24, Tuesday 18 July 2017 - last comment - 12:51, Tuesday 18 July 2017(37582)
BS ISI CPS Glitches -- None so Far

Looking over the trends since the interface chassis was restarted and the power cable secured, I see no large glitches on corner 3 alone, i.e. none that aren't seen on all CPSs and that aren't related to ground motion.  I can't access FRS to give a ticket number, but I'll keep an eye on this for a few more days.
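
For reference, a minimal sketch of the kind of trend check described above, written with gwpy; the CPS channel name and the "large glitch" cut are illustrative, not the exact ones used:

    # Pull a stretch of a BS ISI corner-3 CPS channel and flag excursions
    # well above its standard deviation.
    from gwpy.timeseries import TimeSeries

    channel = 'H1:ISI-BS_ST1_CPSINF_H3_IN1_DQ'   # hypothetical corner-3 CPS channel
    data = TimeSeries.get(channel, 'July 17 2017 00:00', 'July 18 2017 00:00')

    threshold = 10 * data.std().value            # illustrative "large glitch" cut
    outliers = data.times[abs(data.value - data.mean().value) > threshold]
    print('%d samples above threshold' % len(outliers))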

Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:51, Tuesday 18 July 2017 (37588)
I've updated FRS Ticket 8517.
H1 PSL
jason.oberling@LIGO.ORG - posted 09:55, Tuesday 18 July 2017 (37579)
PSL Weekly FAMIS Tasks (FAMIS 3659 & 8431)

This morning I performed the weekly PSL FAMIS tasks.

HPO Pump Diode Current Adjust (FAMIS 8431)

I adjusted the HPO pump diode currents; the changes are summarized in the table below.  All adjustments were done with the ISS OFF.  I have attached a screenshot of the PSL Beckhoff main screen for future reference.

HPO Diode Currents (A)
        Old    New
DB1     48.9   49.0
DB2     51.8   52.0
DB3     51.8   52.0
DB4     51.8   52.0

I then tweaked the DB operating temperatures.  The only change was to DB1, whose diodes all changed from 29.0 °C to 28.5 °C; no changes were made to the operating temps of DBs 2, 3, or 4.  The HPO is now outputting 154.9 W and the ISS has been turned back ON.  This completes FAMIS 8431.
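
For the record, the adjustments above are made on the PSL Beckhoff screen; below is a minimal sketch of reading the settings back afterwards with pyepics, where the channel names are my guess at the Beckhoff/EPICS naming and not verified:

    from epics import caget

    for db in ['DB1', 'DB2', 'DB3', 'DB4']:
        chan = 'H1:PSL-OSC_%s_CURRENT' % db                        # hypothetical channel name
        print('%s current: %s A' % (db, caget(chan)))
    print('HPO output power: %s W' % caget('H1:PSL-OSC_PWR_OUT'))  # hypothetical channel name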

PSL Power Watchdog Reset (FAMIS 3659)

I reset both PSL power watchdogs at 16:26 UTC (9:26 PDT).  This completes FAMIS 3659.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 08:04, Tuesday 18 July 2017 (37577)
Ops Owl Shift Summary
Ops Shift Log: 07/18/2017, Owl Shift 07:00 – 15:00 (00:00 - 08:00) Time - UTC (PT)
State of H1: Locked at NLN, power at 29.2W, range at 48.2Mpc
Intent Bit: Observing
Support: N/A
Incoming Operator: Jim
Shift Summary: Observing for the past 8 hours. Several smaller aftershocks near Nikol’skoye, Russia; none caused issues for the IFO. Range has been holding around 47 to 48Mpc as the noise continues at 20Hz and higher. Prep work underway for this morning’s maintenance activities.
 
Activity Log: Time - UTC (PT)
07:00 (00:00) Take over from Corey
07:03 (00:03) Damp PI Mode-28
07:28 (00:28) Drop out of Observing to run A2L repair script
07:40 (00:40) Back to Observing after A2L script finished
13:23 (06:23) Bubba – Going to both end station chiller yards to start compressors
14:00 (07:00) Bubba & Chris – Lubing Supply fans WP #7076
14:56 (07:56) HDF on site. Notified Bubba
15:00 (08:00) Turn over to Jim
LHO General
john.worden@LIGO.ORG - posted 06:32, Tuesday 18 July 2017 - last comment - 09:17, Tuesday 18 July 2017(37576)
MEDM screenshots not allowing log in

Tried two different browsers:

Unauthorized

This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.


Apache/2.4.10 (Debian) Server at lhocds.ligo-wa.caltech.edu Port 443
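
A quick way to reproduce the symptom from the command line (a sketch only; the server's real authentication goes through the LIGO.ORG/Grouper infrastructure, so a plain basic-auth request may not be what Apache actually expects):

    import getpass
    import requests

    url = 'https://lhocds.ligo-wa.caltech.edu/'   # screenshot path omitted here
    user = input('LIGO.ORG username: ')
    password = getpass.getpass()
    r = requests.get(url, auth=(user, password))
    print(r.status_code, r.reason)                # 401 reproduces the error above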
Comments related to this report
keith.thorne@LIGO.ORG - 09:17, Tuesday 18 July 2017 (37578)CDS
This is due to a big Grouper update - see LLO aLOG 34921.  It should not affect local logins to CDS workstations or the main aLOGs, but it does affect all remote logins and web site access.  This was not intended, and they are working hard to fix it.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 04:06, Tuesday 18 July 2017 (37575)
Ops Owl Mid-Shift Summary
IFO in Observing for past 4 hours. Environmental conditions are good. A2Ls are both below reference. Range is holding around 48Mpc.  
H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:31, Tuesday 18 July 2017 (37574)
Ops Owl Shift Transition
Ops Shift Transition: 07/18/2017, Owl Shift 07:00 – 15:00 (00:00 - 08:00) - UTC (PT)
State of H1: Locked at NLN 29.2W, range is 48.1Mpc
Intent Bit: Observing
Weather: Skies are clear, the wind is a Light Breeze, Temps are in the mid-60s 
Primary 0.03 – 0.1Hz: At 0.01um/s
Secondary 0.1 – 0.3Hz: At 0.04um/s
Quick Summary: Recovering after the earlier large EQ in the Russia area. Noise is elevated above 20Hz. With a bit of work, damped PI Mode-28. A2L Yaw is high; will run the A2L repair script while LLO is still down from the Russia EQ.
Outgoing Operator: Corey
LHO General
corey.gray@LIGO.ORG - posted 00:13, Tuesday 18 July 2017 (37568)
EVE Operator Summary

TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:

Main issue of the evening was a 7.7 earthquake.  There was some initial confusion about which ISI_CONFIG state to go to after the EQ (it knocked us out without much warning).  Originally went to LARGE_EARTHQUAKE_NOBRSXY, then switched to EARTHQUAKE_V2, and ultimately returned to the correct LARGE_EARTHQUAKE_NOBRSXY.  Then it was a waiting game for most of the shift.
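
For reference, a state request like the ones above can also be scripted; a minimal sketch assuming the standard Guardian convention of an EPICS request channel per node (the node name here follows the log text and may not match the actual manager):

    from epics import caput

    node = 'ISI_CONFIG'   # node name as written above; assumed, not verified
    caput('H1:GRD-%s_REQUEST' % node, 'LARGE_EARTHQUAKE_NOBRSXY')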

H1 has noise starting from 25Hz and above, giving us a range of 45Mpc.

We worried about violin modes, but they seem OK (under 1e-15).
LOG:

H1 ISC
corey.gray@LIGO.ORG - posted 21:25, Monday 17 July 2017 - last comment - 23:40, Monday 17 July 2017(37572)
Stuck With ALSy Problems

ALSx is fine.

ALSy has not been able to stay locked on a 00 mode.  I've tried tweaking ETMy to no avail.  I am currently in LOCKED_NO_SLOW_NO_WFS and want to reduce the WFS & CAM signals for the Y-arm, but it simply doesn't stay locked long enough for me to relieve the large output values.

Comments related to this report
corey.gray@LIGO.ORG - 23:40, Monday 17 July 2017 (37573)

Problem Resolved:  I was looking elsewhere, but Thomas V suggested a Clear History for ALSy, and this did the trick.
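
For the record, a Clear History can also be issued from a script; a minimal sketch assuming the standard CDS filter-module convention that writing 2 to a bank's _RSET channel clears its history (the ALS Y bank named below is a guess, not the exact one cleared):

    from epics import caput

    bank = 'H1:ALS-Y_REFL_SERVO'   # hypothetical ALS Y filter bank
    caput(bank + '_RSET', 2)       # 2 = clear history on a CDS filter module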

LHO General
corey.gray@LIGO.ORG - posted 20:30, Monday 17 July 2017 (37571)
Mid Shift Status

All SEI & SUS systems are back.

Going to check on IMs and then return to locking.

P.S. EQ was upgraded to 7.7

H1 SEI (DetChar, GRD, Lockloss, SEI)
jeffrey.kissel@LIGO.ORG - posted 17:41, Monday 17 July 2017 - last comment - 10:08, Tuesday 18 July 2017(37570)
Active Platforms vs. an Angry Earth
J. Kissel

As we're getting slammed by another earthquake, I wonder if there are a few things team SEI can change in order to get the SEI platforms back to DAMPED (HEPI robustly isolated on L4Cs, HAMs damping with GS13s, BSC ISIs damping with L4Cs and GS13s) and the optics back to DAMPED as soon as possible. Here are the flaws I see at the moment:
(1) SEI_CONFIG manager doesn't turn off BSC-ISI Z sensor correction (at least in EARTHQUAKE_v2), so giant EQs like the current still push HEPI in Z enough that it makes the T240s angry and trips the ISI's watchdog, even though the T240s aren't yet involved in any active control. The SEI manager should have a state where it turns off ALL sensor correction, not just horizontal sensor correction.
    EDIT: Never mind => I had not read Jim's updates to the SEI_CONFIG Guide. We should be in LARGE_EQ_NOBRSXY, and that state does indeed turn off ALL sensor correction.
(2) This may be hard, but it would be best if the T240s weren't in the watchdog system until they're used.

Just planting the seed for discussion, no need to pull out ECR pens just yet.
Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:08, Tuesday 18 July 2017 (37580)

Re #2: My recollection was that the T240 is ignored, or else it would trip the watchdog while it was riled up, which it does not.  Regardless, I reviewed the model and I did recall correctly: the T240s are ignored in the WD as long as all the isolation gains are zero.  I'm not sure what situation Corey was experiencing during the EQ ring-down that suggested the T240s were the problem.
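
A minimal sketch of the check described above, i.e. confirming that all of a chamber's isolation-loop gains are zero (in which case, per the model, the T240s are ignored by the WD); the channel names are illustrative of the BSC-ISI naming and not verified:

    from epics import caget

    dofs = ['X', 'Y', 'Z', 'RX', 'RY', 'RZ']
    gains = {dof: caget('H1:ISI-BS_ST1_ISO_%s_GAIN' % dof) for dof in dofs}
    print(gains)
    print('T240s ignored by WD:', all(g == 0 for g in gains.values()))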

H1 General (SEI, SUS)
corey.gray@LIGO.ORG - posted 17:35, Monday 17 July 2017 (37569)
7.4 EQ In Russia Near Where 6.2 From This Morning (Ugh!)

In the SEI World:

Waiting on BSC ISIs to come back.  Jeff is looking at them.

In the SUS World:

Jeff K said these most likely tripped because the SEIs tripped, so they were damped first and all came back fine.

Will look at IM pointing next, lock up IMC & wait for BSC ISIs.

LHO General
corey.gray@LIGO.ORG - posted 16:25, Monday 17 July 2017 (37567)
Transition To EVE

TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 11mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

H1 is in OBSERVING w/ the DARM spectrum elevated from 25Hz & above, giving us a range hovering at ~46Mpc.

H1 General
jim.warner@LIGO.ORG - posted 16:09, Monday 17 July 2017 (37566)
Shift Summary

TITLE: 07/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Most of the day was spent on the fast shutter; a PSL trip slowed things down earlier
LOG:

15:00 I came in and found several chambers tripped from the 6.5 in Russia; I recovered them and started IA

15:00 Richard to LVEA to cycle fast shutter chassis

15:30 PSL Trip, Jason is called, recover takes ~hour

18:30 After 2 rounds of IA, I make it to DC Readout. Richard and I test the fast shutter, which breaks lock; the next several hours are spent trying to convince ourselves (commissioners, et al.) that the fast shutter is working normally, but locking is easy at this point

21:40 NLN, waiting for PI and A2L to run

22:30 Observing, with awful range
 

H1 PSL
jason.oberling@LIGO.ORG - posted 10:35, Monday 17 July 2017 - last comment - 10:39, Tuesday 18 July 2017(37560)
PSL Tripped, Likely due to Flow in Laser Head 3

J. Oberling, E. Merilh, J. Warner

The PSL tripped this morning, likely due to the flow in laser head 3.  As can be seen in this alog, the flow in head 3 (and only head 3) dropped from 0.52 lpm to ~0.47 lpm sometime on the morning of Saturday, 7/15/2017.  It is likely this was the cause of the laser trip.  I will do more forensics when I am onsite tomorrow.  When restarting the crystal chiller, the flow in laser head 3 was back up around 0.54 lpm.  Maybe something worked its way through the system, causing the laser trip?

When restarting the laser, the Beckhoff software appeared to lose communication with the PSL hardware, requiring a reboot of the PSL Beckhoff PC.  Once this was done, everything worked fine and the laser came back up.  I had difficulty injection locking the 35W FE to the HPO; I believe this is due to the lower power out of the HPO.  I engaged the injection locking by turning the RAMP OFF and monitoring the power circulating in both directions of the HPO.  When the circulating power favored the forward direction, I manually engaged the injection locking; this worked on the first try.  Ed re-engaged the PMC, FSS, and ISS, and Jim reset the NPRO noise eater.  By this time ~30 minutes had elapsed, so I engaged the LRA and the power watchdogs.  The laser is now back up and running.

J. Kissel filed FRS 8539 for this trip.

Comments related to this report
edmond.merilh@LIGO.ORG - 11:25, Monday 17 July 2017 (37564)
Images attached to this comment
jason.oberling@LIGO.ORG - 10:39, Tuesday 18 July 2017 (37581)

Looking at some trends, it appears that it was Laser Head 4 that tripped the PSL, not Laser Head 3 as previously surmised.  The first attachment shows the 4 laser head flows around the time of the trip.  The second shows the flow through Head 4 and the Head 1-4 Flow Interlock, while the third attachment shows the flow through Head 3 and the interlock.  It is clear from this that the flow through Head 4 was at the trip point of 0.4 lpm at the time the interlock tripped, while the flow through Head 3 remained above the trip point.  It is unclear why the flow through Laser Head 4 fell so fast; possibly something moving through the system caused a glitch in the flow sensor?
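
For reference, a minimal sketch of the trend comparison above using gwpy, checking when each head's flow first touched the 0.4 lpm interlock level; the flow channel names are my guess at the PSL naming and not verified:

    from gwpy.timeseries import TimeSeriesDict

    channels = ['H1:PSL-OSC_HEAD3FLOW', 'H1:PSL-OSC_HEAD4FLOW']   # hypothetical channels
    data = TimeSeriesDict.get(channels, 'July 17 2017 15:00', 'July 17 2017 16:00')

    TRIP = 0.4   # lpm, interlock trip point quoted above
    for name, ts in data.items():
        below = ts.times[ts.value <= TRIP]
        print('%s first at/below %.1f lpm: %s'
              % (name, TRIP, below[0] if len(below) else 'never'))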

Images attached to this comment
H1 ISC (IOO)
sheila.dwyer@LIGO.ORG - posted 21:47, Sunday 16 July 2017 - last comment - 15:42, Monday 17 July 2017(37553)
IMC restored to pre EQ alignment, fast shutter problem, OWL canceled

Jim and I have been adjusting the interferometer alignment for the last few hours.

IMC alignment

We noticed, as have a few other people, that the IMC alignment is different from before the EQ according to the witness sensors and the spots on QPDs (IMC WFS DC, IM4, and ISS QPD). We tried restoring the optics and input PZT using the witness sensors, as I am sure people have tried in the last week, and saw that this made it impossible to lock the mode cleaner. Next we went back to the alignment from overnight last night.  I moved the uncontrolled DOF (DOF4) to bring the IMC WFS DC spots to where they were before the EQ (this was the idea behind Suresh's effort to control DOF4 several years ago).  Jim restored IM1, 2, and 3 to their pre-EQ alignments using the OSEMs (the biggest move was IM1 pitch).  This restored the spot positions on IM4 trans and the ISS QPD to their pre-EQ positions.  (See the attached plots of QPDs and range so you can identify the times of the EQ and the other alignment changes made while trying to recover from it.)
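
For reference, a minimal sketch of the "restore to pre-EQ witness values" step: average an optic's witness channels over a quiet stretch before the earthquake to get realignment targets.  The time window and channel names follow the usual SUS convention but are illustrative, not the exact ones we used:

    from gwpy.timeseries import TimeSeries

    pre_eq = ('July 15 2017 00:00', 'July 15 2017 01:00')   # quiet pre-EQ window (illustrative)
    for dof in ['P', 'Y']:
        chan = 'H1:SUS-IM1_M1_DAMP_%s_INMON' % dof          # hypothetical witness channel
        target = TimeSeries.get(chan, *pre_eq).mean().value
        print('IM1 %s pre-EQ witness average: %.2f' % (dof, target))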

One thing we noticed, which Corey has also logged, is that there is some kind of mistake in the IMC WFS offload script which misaligns the PZT.  We could replace this with our generic WFS offload guardian state anyway.

PR3+SR3 osem jumps

This is also something that other people have noted: there is a large jump in the OSEM values compared to the oplevs for PR3 and SR3 after the EQ.  We spent some time worrying about both of these optics because of a problem which turned out to be the fast shutter. PR3 moved after the EQ, but the OSEM and oplev do not agree about the size of the move, and it seems like it is some kind of permanent shift in the value of the OSEM.  Cheryl has a nice plot that shows this.  It seems like people already knew this and had realigned PR3 correctly using the oplev and reset the SR3 cage servo soon after the EQ.

Fast shutter problem

Jim and I did an initial alignment after the IMC + IM move, but ran into trouble at SRC align.  Cheryl and I spent several hours on this problem, which at first seemed very mysterious.  We could not move the beam in single bounce to center it on AS_C; if we tried to do this using either SR2 or SR3, the image on the AS camera would turn into a huge interference/clipping blob, and the power on the AS_C QPD would drop nearly to zero.  We tried moving many things with no improvement, but in the end closing the AS shutter fixed the problem.  Our best guess is that the shutter was somehow stuck in a half-open state from about 22 UTC July 16th (or earlier) to about 4:30 UTC July 17th.  This could have happened as early as this morning's unexplained lock loss. This happened a few more times while we were trying to test the shutter in DRMI.  It looks (from the strip tool) like the shutter never actually closes but gets stuck half shut while it is opening.
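
A minimal sketch of the strip-tool check described above: trend the AS_C sum alongside a fast-shutter readback over the suspect interval to see whether the light level recovers when the shutter is commanded open.  Both channel names are illustrative, not verified:

    from gwpy.timeseries import TimeSeriesDict

    chans = ['H1:ASC-AS_C_SUM_OUT_DQ',             # hypothetical AS_C sum channel
             'H1:SYS-MOTION_C_SHUTTER_G_STATE']    # hypothetical shutter readback
    data = TimeSeriesDict.get(chans, 'July 16 2017 22:00', 'July 17 2017 05:00')
    plot = data.plot()
    plot.savefig('fast_shutter_check.png')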

We are not going to lock for the rest of the night because we don't think the shutter is correctly protecting the OMC. We are trying to contact Jeff B to cancel the OWL shift.

Final note: I reverted the POP A offsets to what they were before the EQ, in hopes that the IFO alignment is now more similar to that.  This worked in DRMI.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:52, Monday 17 July 2017 (37561)
Opened FRS ticket 8540 for fast shutter issues.
richard.mccarthy@LIGO.ORG - 11:14, Monday 17 July 2017 (37562)

With the instrument down and the earthquake still ringing, I went out on the floor to investigate the fast shutter.  The first thing I noticed was that the front panel display was not showing the normal "Capacitor charging 250V" readout; instead it showed a garbled message (see pictures).  I rebooted the controller and everything appears to be normal.  I ran a test on the shutter and the HAM ISI tripped.  Before I could look at light levels the PSL tripped, so I will have to look once we restore.

Images attached to this comment
rich.abbott@LIGO.ORG - 15:42, Monday 17 July 2017 (37565)ISC
Regarding the LCD display on the front of the fast shutter, we have seen this type of symptom before.  Once in a while (not often by any means) the large electromagnetic pulse that is radiated out of the shutter driver chassis circuitry exceeds some critical threshold and upsets the communication to the LCD display.  Shielding was added to the LCD interface cabling, and it was believed that the addition of this shielding fixed the problem.  It appears as though there is still some finite chance that a pulse can upset the display.

The good news (if you can call it that) is that the display operation is completely separate from the operation of the shutter, so the garbled LCD is not an indicator of a malfunction in the actual shutter operation inside HAM6.

The bad news is that if the shutter has started to behave differently, the likelihood of it being a good thing is essentially zero.  Any chance that the beam is hitting the thin wiring leading to the moveable portion of the shutter should be taken seriously.  If anything (a lock-loss transient, etc.) causes a blemish to form in the Teflon insulation, the normal flexing behavior will likely change and prefer flexing at the damaged spot, leading to fatigue failure of the wire.

I would advise great caution and scrutiny be applied to the shutter for a while.  The tendency to get stuck in a partly blocking position didn't exist at installation time, so change may be afoot and it is not likely for the better.