Displaying reports 47521-47540 of 84714.
Reports until 00:00, Saturday 19 August 2017
LHO General
thomas.shaffer@LIGO.ORG - posted 00:00, Saturday 19 August 2017 (38269)
Ops Eve Shift Summary

TITLE: 08/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 49Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY: Wind is still a consistent 20mph, but we are on a 22hr lock.

LHO General
thomas.shaffer@LIGO.ORG - posted 20:37, Friday 18 August 2017 (38268)
Ops Mid Shift Report

Looks like we just rode through a M6.4 earthquake near Fiji, ~500 km deep.

19hr lock.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:05, Friday 18 August 2017 (38266)
Ops Eve Shift Transition

TITLE: 08/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 50Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 16mph Gusts, 10mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: 14hr lock at a lower 50Mpc.

 

H1 General
cheryl.vorvick@LIGO.ORG - posted 16:01, Friday 18 August 2017 (38265)
Ops Day Summary

TITLE: 08/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 49Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
LOG:

H1 General
cheryl.vorvick@LIGO.ORG - posted 10:21, Friday 18 August 2017 (38264)
Ops Day Transition:

TITLE: 08/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 50Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 5mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s

QUICK SUMMARY: After the EQ, range is down compared to 24 hours ago.

H1 General
jim.warner@LIGO.ORG - posted 08:14, Friday 18 August 2017 (38263)
Shift Summary

TITLE: 08/18 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG:

Recovery from the earthquake was rough. Jeff was waiting for violins to settle when I arrived; we got to Low Noise ESD ETMY and the ISC_DRMI guardian said something about the ETMY ESD not being on. Not knowing what to do, I pushed the "On" button on the ETMY SUS overview, which broke the lock. After that, I couldn't get past Locking ALS. Not knowing what else to do, I did an initial alignment. When I got to MICH alignment, I couldn't get locked on the dark fringe at all. I eventually just put the alignment guardian in DOWN and aligned the AS spot by eye, finished SRC, and went back to locking. After a few attempts at locking PRMI/DRMI I was able to move on. Violins were a bit high afterwards, but they damped down on their own. Ran A2L, which seemed to take an unusually long time to complete.

Back to observing at 9:20.
 

H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:00, Friday 18 August 2017 (38262)
Ops Evening Shift Summary
Ops Shift Log: 08/17/2017, Evening Shift 23:00 – 07:00 (16:00 - 00:00) Time - UTC (PT)
State of H1: Unlocked
Intent Bit: Environmental/Earthquake
Support: Jim W.,
Incoming Operator: Jim
Shift Summary: A2L Pitch and Yaw are both below the reference.
Lockloss due to Mag6.8 earthquake north of Ascension Island.
Put IFO in DOWN until mother Gaia calms down.
Relocking – while damping ETMX violin modes 1, 3, and 5, we took another incoming earthquake. Although not as strong as the Ascension Island event, it broke the lock and re-excited the violin modes. Relocking again; holding at ENGAGE_REFL_POP_WFS while damping ETMX violin modes 1, 3, 4, and 5.
 
Activity Log: Time - UTC (PT)
23:00 (16:00) Take over from Cheryl
03:51 (20:51) Lockloss – Mag6.8 EQ North of Ascension Island
                     Switch SEI_CONFIG to LARGE_EQ_NOBRSXY
03:59 (20:59) GRB Alert – No action – IFO down due to earthquake near Ascension Island
04:03 (21:03) Put the IFO into DOWN until seismic activity settles down a bit.
07:00 (00:00) Turn over to Jim
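The SEI_CONFIG switch to LARGE_EQ_NOBRSXY logged above is a judgment call based on the size of the incoming event. A hypothetical sketch of that kind of threshold logic (the state names are from the log; the function, thresholds, and inputs are illustrative assumptions, not the actual site automation):

```python
# Illustrative sketch only: choose a seismic-isolation configuration for an
# incoming earthquake. State names match the log; thresholds are made up.

def choose_sei_config(eq_magnitude: float, ground_velocity_um_s: float) -> str:
    """Pick a SEI_CONFIG state from the event magnitude and the predicted
    peak ground velocity at the site (um/s)."""
    if eq_magnitude >= 6.0 or ground_velocity_um_s > 1.0:
        # Large event: take the beam-rotation sensors out of the loops
        return "LARGE_EQ_NOBRSXY"
    return "WINDY" if ground_velocity_um_s > 0.3 else "NOMINAL"

print(choose_sei_config(6.8, 2.5))  # a Mag6.8 event selects LARGE_EQ_NOBRSXY
```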
H1 General
jeffrey.bartlett@LIGO.ORG - posted 20:02, Thursday 17 August 2017 (38261)
Ops Evening Mid-Shift Summary
   At mid shift all is well. We are at triple coincident observing. With the wind dropping a little, environmental conditions are good.  
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:23, Thursday 17 August 2017 (38259)
Ops Evening Shift Transition
Ops Shift Transition: 08/17/2017, Evening Shift 23:00 – 07:00 (16:00 - 00:00) - UTC (PT)
State of H1: Locked at NLN, 28.8w, 53.0Mpc
Intent Bit: Observing
Weather: Clear skies. Winds are Calm to Light breeze. Temperatures are in the high 80s.
Primary 0.03 – 0.1Hz: At 0.02um/s  
Secondary 0.1 – 0.3Hz: At 0.09um/s
Quick Summary: IFO locked in triple-coincident observing. No issues to report at this time.
Outgoing Operator: Cheryl
H1 General
cheryl.vorvick@LIGO.ORG - posted 15:55, Thursday 17 August 2017 (38258)
Day Ops Summary:

TITLE: 08/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: locked all shift
LOG:

H1 ISC
cheryl.vorvick@LIGO.ORG - posted 15:23, Thursday 17 August 2017 - last comment - 15:29, Thursday 17 August 2017(38255)
ran A2L

Ran A2L when LLO lost lock.

Comments related to this report
cheryl.vorvick@LIGO.ORG - 15:29, Thursday 17 August 2017 (38256)

Two attachments - DARM_a2l_passive:

  • 21:11:29 UTC - before running A2L
  • 21:31:49 UTC - after running A2L
Images attached to this comment
H1 General
cheryl.vorvick@LIGO.ORG - posted 15:18, Thursday 17 August 2017 (38254)
O2 Run Meeting:

Maintenance for 22 Aug 2017:

H1 ISC (CAL, DetChar, FRS, PEM, SEI, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 10:10, Thursday 17 August 2017 (38248)
Recovery Timeline from Jul 6th 2017 EQ
J. Kissel

Albert asked me to describe the recovery from the Jul 5-6th 2017 Montana EQ for his upcoming LVC-meeting LIGO status talk. To make sure I reported correct and citable information, I created a detailed timeline of the recovery.

I post it here, for the record, and in case it might jog some folks' memory in figuring out some smoking gun for the new noise source.

%%%%%%%%%%%
Exec Summary:
We were able to return to nominal low noise within a day. We had problems with higher-than-1.0 kHz harmonics (3rd harmonics at 1.5 kHz, mostly) rung up for only a few days. We had problems with the automated damping of the 0.5 kHz and 1.0 kHz modes because of coupling changes, more EQs, and HEPI pump failures, but only for about 8 days after the EQ. However, because of a bunch of other problems (cancelled shifts from defeated employees, regular maintenance days, 2 weekends with only the operator on site, and broken scripts coupled with operator confusion), it wasn't until Jul 19th 2017 (local), the Wednesday after maintenance (14 days after the Jul 5-6th 2017 EQ), that we recovered the new-normal sensitivity of ~55-60 Mpc. I back up this statement with the detailed timeline below, with aLOG citations, for your perusal.

%%%%%%%%%%%
Detailed Timeline:

- Jul 6th 2017 6:31 UTC, (Last half hour of Eve Shift Wednesday local time) EQ Hits.

- Owl shift and Day shift Thursday morning (Jul 6th 2017) is spent recovering the SEI and SUS systems, and restoring alignment. LHO aLOG 37347

- We were able to recover ALS DIFF by mid-day Thursday, after hand-damping some Bounce and Roll modes. LHO aLOG 37357

- We were up to DC readout that evening (with no stages of DCPD whitening), but then one of the end-station HEPI pump stations tripped. LHO aLOG 37359

- Jul 7th 2017 Once recovered, I worked on actively damping, for the first time, the 1.5 kHz 3rd harmonics that had rung up during the EQ. LHO aLOG 37361
	These were the only new modes that were particularly problematic, and it was only a few of them, because their mode separation was ~mHz and their coupling was weak.

- Then the weekend hit, and operators were still relatively inexperienced in tuning violin mode damping. While they were mostly successful by skipping the automated damping and slowly inching up the gain on the active damping, the modes would still occasionally ring up, as they do after any normal lock loss; but when operators tried to adjust the settings, things got more rung up; e.g. LHO aLOG 37394

- Jul 9 2017 at ~2 UTC on Travis’ eve shift, something went belly up as the range slowly decayed and the lock broke. LHO aLOG 37399
	Since it was a “slow” lock loss, we were likely sending junk to all of the test masses for an extended period of time. That, coupled with another EQ, LHO aLOG 37403, and high winds, 2017-07-10 Wind Summary Page, reverted all the hard work on violin modes, including some of the new 1.5 kHz modes. Upon recovery, all modes had rung back up, and modes began to become unresponsive to normal settings, LHO aLOG 37402

- Monday morning / day shift (Jul 10 2017), we picked up focused efforts on the new modes, and made sure no other modes went bad, LHO aLOG 37412 and LHO aLOG 37426. By that evening (~8:00 pm local, when the A team left), we had what we now know to be the new normal: LHO aLOG 37433, and we thought violin modes were under control.

- Then that night (Jul 10-11 2017), Patrick got his first exposure to what we only found out later: the coupling on some of the modes had changed (because of the Jul 6th EQ? because of the Jul 11th slow lock loss? Dunno.), and they would now run away with time, LHO aLOG 37438. Not yet understanding that the mode coupling had changed, coupled with more EQs and PSL trips due to continuing problems with flow sensors overnight, meant we were down, not debugging, for most of Monday Jul 11-12 UTC, and then we went into normal maintenance on Tuesday morning.

- Recovery after maintenance (Jul 11 2017) went OK, but because the fundamentals were rung up so high from the night before, we'd lose lock earlier in the lock acquisition sequence than normal. Confused and bewildered, we cancelled 12 hours' worth of shifts; LHO aLOG 37467 and LHO aLOG 37468. We found out later that it was just another PD that was saturating from the re-rung-up violins, LHO aLOG 37500.
At this point I’ll emphasize: we were no longer having trouble with high-order violin modes. It was that the fundamental (~500 Hz) and 1st harmonic (1 kHz) “normal” damping wasn’t working. Further, we were able to regularly get to reasonable sensitivity; we just didn’t hit the science mode button because we were still actively playing with violin mode damping filters to get them to work. It was Wednesday Jul 12, so only 7 days after the EQ.

- Another HEPI pump trip at EX while the IFO was down likely rang the normal 0.5 and 1.0 kHz modes back up, LHO aLOG 37475

- Jul 12 2017 Wednesday, we began to realize the normal automated damping wasn't working, LHO aLOG 37484, but it was unclear which modes were going bad. Over the next few shifts, we slowly and systematically checked the phase and gain of every loop, and updated the LHO Violin Mode Table and Guardian accordingly.

- The Beam Splitter CPS started glitching for no known reason, LHO aLOG 37499, which also decreased productivity.

- By Jul 13 2017 Thursday, 8 days after the EQ, we had re-configured all of the automated damping and quantified how many of the modes had changed coupling, LHO aLOG 37504, and violin modes were no longer a problem.
	
The remainder of the time before we regularly went back to observing was spent exploring why the sensitivity was much worse, and on other unrelated problems that resulted in more defeated shift cancelling. None of it was related to violin modes:
    Spot position moves (LHO aLOG 37506 and LHO aLOG 37536)
    Fast shutter problems (LHO aLOG 37553)
    More PSL trips (LHO aLOG 37560)
    A broken script and a mis-calculated SDF accept that left us on only one DCPD for a few days (LHO aLOG 37585)
etc.
And our sensitivity has not changed since then.

So I would say that, aside from this new mystery noise that we still don’t understand, we were fully recovered by Wednesday Jul 19th 2017, 14 days after the EQ.
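The "checked the phase and gain of every loop" step in the timeline amounts to watching whether each mode decays under its current damping settings, since a sign flip in the coupling turns damping into anti-damping. A minimal sketch of that check (function and inputs are hypothetical; the real settings live in the LHO Violin Mode Table and Guardian):

```python
# Hypothetical sketch: if a violin-mode amplitude grows while the damping
# loop is engaged, the coupling sign has flipped and the gain must be negated.

def damping_gain_ok(amplitudes: list[float]) -> bool:
    """True if the monitored mode amplitude is decaying under the current gain."""
    return amplitudes[-1] < amplitudes[0]

def corrected_gain(gain: float, amplitudes: list[float]) -> float:
    """Keep the gain if the mode is decaying; otherwise flip its sign."""
    return gain if damping_gain_ok(amplitudes) else -gain

print(corrected_gain(0.2, [1.0, 0.8, 0.5]))   # decaying: gain kept at 0.2
print(corrected_gain(0.2, [0.5, 0.8, 1.0]))   # running away: gain flipped to -0.2
```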
H1 TCS (TCS)
cheryl.vorvick@LIGO.ORG - posted 09:31, Thursday 17 August 2017 - last comment - 12:39, Thursday 17 August 2017(38246)
TCSX laser is off - PeterK heading out to reset - cause was a drop in chiller flow
Comments related to this report
cheryl.vorvick@LIGO.ORG - 10:58, Thursday 17 August 2017 (38249)

TCSX is back - H1 in Observe - details to follow

cheryl.vorvick@LIGO.ORG - 11:19, Thursday 17 August 2017 (38250)DetChar

Initial exit of H1 from Observe:

  • 1187018572 - chiller flow (blue) drops low enough to shut the laser off (red)
  • shortly after, the SDF diff count for TCSX goes from 0 to 1, and then from 1 to 2, which kicks H1 out of Observe
Images attached to this comment
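The SDF behavior described above (a non-zero setpoint-difference count dropping H1 out of Observe) boils down to a simple rule. A sketch of that rule, with a hypothetical channel name for illustration:

```python
# Sketch of the SDF logic described above: any non-zero difference count on a
# monitored front-end model drops the observation-intent bit. The channel name
# below is hypothetical.

def observe_ok(sdf_diff_counts: dict[str, int]) -> bool:
    """H1 can stay in Observe only if every monitored SDF diff count is zero."""
    return all(count == 0 for count in sdf_diff_counts.values())

print(observe_ok({"H1:TCS-ITMX_SDF_DIFF_CNT": 0}))  # True: stay in Observe
print(observe_ok({"H1:TCS-ITMX_SDF_DIFF_CNT": 2}))  # False: kicked out of Observe
```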
cheryl.vorvick@LIGO.ORG - 12:39, Thursday 17 August 2017 (38252)

H1 recovery / return to Observe:

  • 16:48:35 UTC - Thermal effects and TCSX guardian recovery are both complete - plot attached
  • 16:49:00 UTC - H1 in Observe
Images attached to this comment
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 11:37, Tuesday 08 August 2017 - last comment - 18:22, Thursday 24 August 2017(38077)
Single Stage SUSs Checked for Rubbing
J. Kissel

I've checked the last of the suspensions for any sign of rubbing. Preliminary results look like "Nope." 

The data has been committed to SUS repo here:
/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM1/SAGM1/Data/
2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM2/SAGM1/Data/
2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM3/SAGM1/Data/
2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM4/SAGM1/Data/
2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM1/SAGM1/Data/
2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM2/SAGM1/Data/
2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM3/SAGM1/Data/
2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/RM1/SAGM1/Data/
2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/RM2/SAGM1/Data/
2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_L_0p01to50Hz.xml
2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_P_0p01to50Hz.xml
2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_Y_0p01to50Hz.xml

Will post results in due time, but my measurement processing / analysis / aLOGging queue is severely backed up.
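The white-noise measurements above are transfer-function estimates: drive the stage with broadband noise, record the response, and form H(f) = Sxy(f)/Sxx(f) averaged over segments (in spirit what DTT does for these .xml templates). A self-contained sketch against a stand-in one-pole "suspension" (the estimator and test system here are illustrative, not the actual measurement code):

```python
import numpy as np

# Sketch of a white-noise transfer function estimate: average the cross-
# spectrum over the drive spectrum across segments. The FIR "suspension"
# below is a stand-in for illustration.

def tf_estimate(x, y, nseg=8):
    """Averaged H(f) = <X* Y> / <X* X> over nseg non-overlapping segments."""
    n = len(x) // nseg
    Sxy = np.zeros(n // 2 + 1, complex)
    Sxx = np.zeros(n // 2 + 1)
    for k in range(nseg):
        X = np.fft.rfft(x[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Sxy += np.conj(X) * Y
        Sxx += np.abs(X) ** 2
    return Sxy / Sxx

rng = np.random.default_rng(0)
drive = rng.standard_normal(2 ** 14)                        # white-noise excitation
kernel = 0.1 * 0.9 ** np.arange(50)                         # stand-in low-pass plant
resp = np.convolve(drive, kernel, mode="full")[:len(drive)]  # measured response
H = tf_estimate(drive, resp)                                 # |H| ~1 at DC, small at Nyquist
```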
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:59, Thursday 17 August 2017 (38253)
J. Kissel

Processed the IM1, IM2, and IM3 data from above. Unfortunately, it looks like I didn't actually save an IM4 Yaw transfer function, so I don't have plots for that suspension.

I can confirm that IM1, IM2, and IM3 do not look abnormal compared to their past measurements, other than a scale factor gain. Recall that the IMs had their coil driver range reduced in Nov 2013 (see LHO aLOG 8758); otherwise I can't explain the electronics gain drift, other than to suspect OSEM LED current decay, as has been seen to a much smaller degree in other, larger suspension types.

Will try to get the last DOF of IM4 soon.
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 17:26, Thursday 17 August 2017 (38260)
All HTTSs are clear of rubbing. 

Attached are:
- the individual measurements, to show the OSEM-basis transfer function results,
- each suspension's transfer functions as a function of time,
- all suspensions' (plus an L1 RM's) latest TFs, just to show how they're all nicely the same (now).

Strangely, and positively: though RM2 has always shown an extra resonance in YAW (the last measurement was in 2014, after the HAM1 vent work described in LHO aLOG 9211), that extra resonance has now disappeared, and RM2 looks like every other HTTS. Weird, but at least a good weird!
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 18:22, Thursday 24 August 2017 (38355)
J. Kissel

Still playing catch-up -- I was finally able to retake IM4 Y. Processed data is attached. Still confused about the scale factors, but the SUS is definitely not rubbing, and its frequency dependence looks exactly as it did 3 years ago.
Non-image files attached to this comment
H1 PSL
jason.oberling@LIGO.ORG - posted 10:58, Wednesday 02 August 2017 - last comment - 15:43, Thursday 17 August 2017(37967)
PSL HPO Diode Box and Front End/NPRO Decay Since O2 Start

Attached are two 270-day trends of the HPO diode box powers (in relative %, first attachment) and the 35W FE and NPRO power (second attachment).  Start date of the trends is 11-5-2016, roughly 3.5 weeks before the start of O2.

It is clear when we started adjusting the HPO diode box operating currents on 4-18-2017; prior to that date we were adjusting the currents on an as-needed basis.  The large jump in H1:PSL-OSC_DB1_PWR near the end of the trend is when we swapped that diode box for a spare in early June.  I was also going to include a trend of the HPO DB operating currents, but a read-back issue with DB3 makes this useless; the power supply reports an operating current to the PSL Beckhoff of 100 A, not the 52.3 A displayed on the front of the power supply (a power supply swap, also planned for after O2, should fix this issue).  In light of that I will make a plot similar to Matt's here and post it as a comment.

On the 2nd attachment, it is clear the drop in the FE power coincides with the drop in the NPRO power.  This is an issue because we are currently unable to increase the FE power by adjusting the FE DB operating currents or temperatures; we suspect this is due to the low NPRO power.  It should be noted that the calibration of H1:PSL-PWR_NPRO_OUTPUT is not correct; the NPRO output power was measured back in May to be 1.36 W.  We will correct this when we swap our aging NPRO for a spare at the end of O2.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 15:43, Thursday 17 August 2017 (38257)

Attached is a graph of the HPO pump diode box operating current for the 4 HPO diode boxes.  Graph starts on 4/18/2017, the date we started weekly adjustments of the operating current.  The swap of DB1 is clearly seen on 6/6/2017.  Since then the current increases have been linear, which we expect.

Images attached to this comment
Non-image files attached to this comment
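The "linear since the DB1 swap" claim can be checked by fitting a line to the weekly operating-current points and looking at the residuals. A sketch with made-up stand-in numbers (the real data are in the attached graph; days are counted from 4/18/2017):

```python
import numpy as np

# Illustrative check of a linear current trend: fit amps vs. days and look at
# the residuals. These numbers are stand-ins, not the actual DB currents.
days = np.array([0, 7, 14, 21, 28, 35], float)
amps = np.array([51.0, 51.2, 51.5, 51.6, 51.9, 52.1])

slope, intercept = np.polyfit(days, amps, 1)   # least-squares line
residual = amps - (slope * days + intercept)
print(f"slope = {slope * 7:.3f} A/week, max residual = {np.abs(residual).max():.3f} A")
```

A small, roughly constant residual supports the "increases have been linear" statement; curvature in the residuals would suggest accelerating diode degradation.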