H1 CDS
david.barker@LIGO.ORG - posted 14:28, Tuesday 18 July 2017 - last comment - 16:50, Tuesday 18 July 2017 (37589)
PSL IO GIGE cameras operational

Richard, Fil, Dave, Carlos, Cheryl

The two digital GIGE cameras Cheryl installed on the PSL table are now producing images.

The cameras are called h1iogige1 (10.106.0.52) and h1iogige2 (10.106.0.53).

I have reconfigured h1digivideo2's VID-CAM28 and VID-CAM29 to be IO GIGE 1 and 2. For some reason monit had VID-CAM28 commented out, so I reactivated this camera. I changed the MEDM screen to show the correct names (image attached).

Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 16:50, Tuesday 18 July 2017 (37595)

The GigEs were not connected to MEDMs until after the installation was complete, so the cameras were aligned without being able to see the images; further alignment and more attenuation are needed.

H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 12:59, Tuesday 18 July 2017 - last comment - 00:42, Tuesday 25 July 2017 (37585)
Only using OMC DCPD B for last 2.5 days

Keita pointed out that we have only been using OMC DCPD B for at least the last lock.  I trended the relative gains of DCPD A and DCPD B, and it looks like we've been in this situation since 16 July 2017 at around 05:30 UTC.

Attached is a 5-day trend of the gains.  The two earlier spikes are from when we were adding or removing a layer of whitening because the violin modes were too high.  On the 16th, it looks like the guardian started to switch the whitening but then was stopped partway through.  This explains why the shot noise has looked too high for the last few days.

Clearly we need to write a check into the READY_FOR_HANDOFF OMC lock state to ensure that the two gains are both at their nominal values.  Also, it looks like someone accepted the wrong values into the Observe SDF file, so we've been able to go to Observing with the wrong values :(  No good.  The safe SDF file was correct.  I'll fix the Observe file.

EDIT:  Looking through the guardian log file, it looks like the code gets a bit confused when you try to use these states before we're using the DCPDs for DARM.  So, now if we're at DC_Readout_Transition or before (and don't need to do any gain ramping), we skip the gain changes and just change the whitening.  If we're at DC_Readout or after, the change happens as before.  Either way, at the end of the state there is now a check that the DCPD gains are equal (they should both be 1000).  The new code is in the svn and has been reloaded, although not yet actually exercised.
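For concreteness, a minimal Guardian-style sketch of the end-of-state gain check described above (channel names and structure here are illustrative assumptions, not the actual code in the svn):

    # Illustrative Guardian-style check; the real state code in the svn may
    # differ in channel names and in how it notifies/holds.
    NOMINAL_DCPD_GAIN = 1000

    def dcpd_gains_ok(ezca):
        """Return True only if both OMC DCPD gains are at nominal."""
        gain_a = ezca['OMC-DCPD_A_GAIN']  # hypothetical channel name
        gain_b = ezca['OMC-DCPD_B_GAIN']  # hypothetical channel name
        return gain_a == NOMINAL_DCPD_GAIN and gain_b == NOMINAL_DCPD_GAIN

A check like this at the end of the state would have caught the half-finished whitening switch on the 16th before we went to Observing.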

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:47, Tuesday 18 July 2017 (37587) CAL, DetChar, ISC, OpsInfo
Tagging CAL and DetChar.
jeffrey.kissel@LIGO.ORG - 15:32, Monday 24 July 2017 (37743) CAL, DetChar
The exact time of departure from this configuration is
	Jul 16 2017 05:24:00 UTC (1184217858)
and return to normal is
	Jul 18 2017 18:58:00 UTC (1184439498)

We (the calibration group) have recommended that any observation time within the period be marked with a CAT1 veto, citing that the detector was in a non-O2-standard configuration and the calibration is suspect. 

Yes, it is plausible to reconstruct the calibration during these times, but given our limited person-power we have elected to abandon the effort at this time.
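For reference, the UTC/GPS pairs above can be reproduced with a short conversion sketch (astropy is used here as one option; LIGO tools such as gpstime or LAL would work just as well):

    from astropy.time import Time

    # Veto period boundaries, UTC -> GPS seconds
    start = Time('2017-07-16 05:24:00', scale='utc').gps  # -> 1184217858.0
    end   = Time('2017-07-18 18:58:00', scale='utc').gps  # -> 1184439498.0
    print(int(start), int(end))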
laura.nuttall@LIGO.ORG - 00:42, Tuesday 25 July 2017 (37756)

No problem, I made a flag and inserted it into the segment database. H1:DCH-CAL_NON_O2_STANDARD_CONFIG:1 captures this time.
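For anyone wanting to pull the segments back out, a gwpy sketch (the GPS boundaries are those quoted in the parent comment; the active segments are whatever the database returns):

    from gwpy.segments import DataQualityFlag

    # Query the new flag over the affected period
    flag = DataQualityFlag.query('H1:DCH-CAL_NON_O2_STANDARD_CONFIG:1',
                                 1184217858, 1184439498)
    print(flag.active)  # segments flagged as non-O2-standard configuration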

H1 IOO
cheryl.vorvick@LIGO.ORG - posted 12:25, Tuesday 18 July 2017 (37586)
IO GigE camera installation on the PSL is complete
Optic      Distance from IO_AB_L4   Focal length / beam diameter
IO_AB_L4     0                      fl = 1145.6 mm
IO_lens3    43.4 cm                 fl = -618.2 mm
IO_lens4   123.3 cm                 fl = 720.2 mm
IOGigE1    163.1 cm                 beam diameter = 1.9 mm
IOGigE2    245.5 cm                 beam diameter = 1.9 mm
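As a geometry cross-check, the lens chain above can be composed with thin-lens and free-space ABCD matrices; a minimal sketch (no input beam parameter is assumed, so this only builds the system matrix from IO_AB_L4 to the IOGigE2 position):

    import numpy as np

    def lens(f_mm):
        # Thin-lens ABCD matrix, focal length in mm
        return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

    def space(d_mm):
        # Free-space propagation ABCD matrix, distance in mm
        return np.array([[1.0, d_mm], [0.0, 1.0]])

    # Distances from the table (cm -> mm), composed right to left
    M = (space(2455.0 - 1233.0) @ lens(720.2) @ space(1233.0 - 434.0)
         @ lens(-618.2) @ space(434.0) @ lens(1145.6))
    print(M)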
Images attached to this report
H1 General (IOO)
cheryl.vorvick@LIGO.ORG - posted 11:24, Tuesday 18 July 2017 (37584)
IMC beam spot measurements - 17 July 2017

Beam spot measurements: according to these measurements, the beam spots moved very little from before to after the 5.8M EQ; all changes are 0.14 mm or less.

Optic/DOF   alpha     Beam dist. from center (mm), 17 July 2017   Diff from 17 May 2017 (mm)
mc1 p       -0.057     2.40                                        0.10
mc1 y        0.092     3.87                                        0.08
mc2 p       -0.183     7.67                                        0.04
mc2 y       -0.020    -0.85                                       -0.04
mc3 p       -0.054     2.26                                        0.10
mc3 y       -0.138    -5.84                                       -0.14
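Incidentally, the six rows above are consistent with a single linear scale between the A2L gain alpha and the spot offset, roughly 42 mm per unit alpha with opposite sign conventions for pitch and yaw; that scale is an empirical fit to this table, not an official calibration:

    # Ratios dist/alpha inferred from the table above
    rows = {  # 'optic dof': (alpha, dist_mm)
        'mc1 p': (-0.057, 2.40), 'mc1 y': (0.092, 3.87),
        'mc2 p': (-0.183, 7.67), 'mc2 y': (-0.020, -0.85),
        'mc3 p': (-0.054, 2.26), 'mc3 y': (-0.138, -5.84),
    }
    for name, (alpha, dist) in rows.items():
        print(name, round(dist / alpha, 1))  # ~ -42 for pitch, ~ +42 for yaw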
H1 CDS (GRD)
david.barker@LIGO.ORG - posted 11:08, Tuesday 18 July 2017 (37583)
h1guardian0 rebooted

Dave and TJ:

We rebooted h1guardian0 because its load average has been regularly in the 50s. TJ recovered the GRD nodes following the reboot.

H1 SEI
hugh.radkins@LIGO.ORG - posted 10:24, Tuesday 18 July 2017 - last comment - 12:51, Tuesday 18 July 2017 (37582)
BS ISI CPS Glitches: None so Far

Looking over the trends since the interface chassis was restarted and the power cable secured, I see no large glitches on corner 3 alone, i.e., none that aren't seen on all CPSs or attributable to ground motion.  I can't access FRS to give a ticket number, but I'll keep an eye on this for a few more days.
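For the record, this kind of trend check can be scripted; a sketch with gwpy (the CPS channel names are hypothetical stand-ins and should be verified against the H1 channel list):

    from gwpy.timeseries import TimeSeriesDict

    # Hypothetical names for the four BS ISI stage-1 corner CPS channels
    channels = ['H1:ISI-BS_ST1_CPSINF_H%d_IN1_DQ' % n for n in (1, 2, 3, 4)]
    data = TimeSeriesDict.get(channels, 'July 17 2017', 'July 18 2017')
    for name, ts in data.items():
        # A glitch on corner 3 alone would stand out here without showing
        # up in the other corners
        print(name, float(ts.min()), float(ts.max()))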

Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:51, Tuesday 18 July 2017 (37588)
I've updated FRS Ticket 8517.
H1 PSL
jason.oberling@LIGO.ORG - posted 09:55, Tuesday 18 July 2017 (37579)
PSL Weekly FAMIS Tasks (FAMIS 3659 & 8431)

This morning I performed the weekly PSL FAMIS tasks.

HPO Pump Diode Current Adjust (FAMIS 8431)

I adjusted the HPO pump diode currents; the changes are summarized in the table below.  All adjustments were done with the ISS OFF.  I have attached a screenshot of the PSL Beckhoff main screen for future reference.

HPO Diode Currents (A)
        Old    New
DB1     48.9   49.0
DB2     51.8   52.0
DB3     51.8   52.0
DB4     51.8   52.0

I then tweaked the DB operating temperatures.  The only change was to DB1: all of its diodes changed from 29.0 °C to 28.5 °C; no changes were made to the operating temps of DBs 2, 3, or 4.  The HPO is now outputting 154.9 W and the ISS has been turned back ON.  This completes FAMIS 8431.

PSL Power Watchdog Reset (FAMIS 3659)

I reset both PSL power watchdogs at 16:26 UTC (9:26 PDT).  This completes FAMIS 3659.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 08:04, Tuesday 18 July 2017 (37577)
Ops Owl Shift Summary
Ops Shift Log: 07/18/2017, Owl Shift 07:00 – 15:00 (00:00 - 08:00) Time - UTC (PT)
State of H1: Locked at NLN, power at 29.2W, range at 48.2Mpc
Intent Bit: Observing
Support: N/A
Incoming Operator: Jim
Shift Summary: Observing for past 8 hours. Several smaller aftershocks near Nikol’skoye, Russia; none caused issues for the IFO. Range has been holding around 47 to 48Mpc as the elevated noise at 20Hz and above continues. Prep work underway for this morning’s maintenance activities.
 
Activity Log: Time - UTC (PT)
07:00 (00:00) Take over from Corey
07:03 (00:03) Damp PI Mode-28
07:28 (00:28) Drop out of Observing to run A2L repair script
07:40 (00:40) Back to Observing after A2L script finished
13:23 (06:23) Bubba – Going to both end station chiller yards to start compressors
14:00 (07:00) Bubba & Chris – Lubing Supply fans WP #7076
14:56 (07:56) HDF on site. Notified Bubba
15:00 (08:00) Turn over to Jim
LHO General
john.worden@LIGO.ORG - posted 06:32, Tuesday 18 July 2017 - last comment - 09:17, Tuesday 18 July 2017 (37576)
MEDM screenshots not allowing log in

Tried two different browsers; both returned:

Unauthorized

This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.


Apache/2.4.10 (Debian) Server at lhocds.ligo-wa.caltech.edu Port 443
Comments related to this report
keith.thorne@LIGO.ORG - 09:17, Tuesday 18 July 2017 (37578) CDS
This is due to a big Grouper update - see LLO aLOG 34921.  It should not affect local logins to CDS workstations or the main aLOGs, but it does affect all remote logins and web site accesses.  This was not intended, and they are working hard to fix it.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 04:06, Tuesday 18 July 2017 (37575)
Ops Owl Mid-Shift Summary
IFO in Observing for past 4 hours. Environmental conditions are good. A2Ls are both below reference. Range is holding around 48Mpc.  
H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:31, Tuesday 18 July 2017 (37574)
Ops Owl Shift Transition
Ops Shift Transition: 07/18/2017, Owl Shift 07:00 – 15:00 (00:00 - 08:00) - UTC (PT)
State of H1: Locked at NLN 29.2W, range is 48.1Mpc
Intent Bit: Observing
Weather: Skies are clear, the wind is a Light Breeze, Temps are in the mid-60s 
Primary 0.03 – 0.1Hz: At 0.01um/s
Secondary 0.1 – 0.3Hz: At 0.04um/s
Quick Summary: Recovering after earlier large EQ in the Russia area. Noise is elevated above 20Hz. With a bit of work, damped PI Mode-28. A2L yaw is high; will run the A2L repair script while LLO is still down from the Russia EQ.
Outgoing Operator: Corey
LHO General
corey.gray@LIGO.ORG - posted 00:13, Tuesday 18 July 2017 (37568)
EVE Operator Summary

TITLE: 07/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 47Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY:

Main issue of the evening was a 7.7 earthquake.  There was some initial confusion about which ISI_CONFIG state to go to after the EQ (it knocked us out without much warning).  Originally went to LARGE_EARTHQUAKE_NOBRSXY, but then switched to EARTHQUAKE_V2; ultimately went back to the correct LARGE_EARTHQUAKE_NOBRSXY.  Then it was a waiting game for most of the shift.

H1 has excess noise from about 25Hz up, giving us a range of 45Mpc.

We worried about violin modes, but they seem OK (under 1e-15).
LOG:

H1 ISC
corey.gray@LIGO.ORG - posted 21:25, Monday 17 July 2017 - last comment - 23:40, Monday 17 July 2017 (37572)
Stuck With ALSy Problems

ALSx is fine.

ALSy has not been able to stay locked on a TEM00 mode.  I've tried tweaking ETMy to no avail.  I am currently in LOCKED_NO_SLOW_NO_WFS and want to reduce the WFS & CAM signals for the Y-arm, but it simply doesn't stay locked long enough for me to relieve the large output values.

Comments related to this report
corey.gray@LIGO.ORG - 23:40, Monday 17 July 2017 (37573)

Problem Resolved:  I was looking elsewhere, but Thomas V suggested a Clear History for ALSy, and this did the trick.

LHO General
corey.gray@LIGO.ORG - posted 20:30, Monday 17 July 2017 (37571)
Mid Shift Status

All SEI & SUS systems are back.

Going to check on IMs and then return to locking.

P.S. EQ was upgraded to 7.7

H1 SEI (DetChar, GRD, Lockloss, SEI)
jeffrey.kissel@LIGO.ORG - posted 17:41, Monday 17 July 2017 - last comment - 10:08, Tuesday 18 July 2017 (37570)
Active Platforms vs. an Angry Earth
J. Kissel

As we're getting slammed by another earthquake, I wonder if there are a few things team SEI can change in order to get the SEI platforms back to DAMPED (HEPI robustly isolated on L4Cs, HAM ISIs damping with GS13s, BSC ISIs damping with L4Cs and GS13s) and optics back to DAMPED as soon as possible. Here are the flaws I see at the moment:
(1) The SEI_CONFIG manager doesn't turn off BSC-ISI Z sensor correction (at least in EARTHQUAKE_v2), so giant EQs like the current one still push HEPI in Z enough to make the T240s angry and trip the ISI's watchdog, even though the T240s aren't yet involved in any active control. The SEI manager should have a state where it turns off ALL sensor correction, not just horizontal sensor correction.
    EDIT: Never mind; I had not read Jim's updates to the SEI_CONFIG Guide. We should be in LARGE_EQ_NOBRSXY, and that does indeed turn off ALL sensor correction.
(2) This may be hard, but it would be best if the T240s weren't in the watchdog system until they're used.

Just planting the seed for discussion, no need to pull out ECR pens just yet.
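To make (1) concrete, a hypothetical sketch of what an all-off sensor-correction helper might look like in Guardian (filter-bank and channel names are illustrative only; per the EDIT above, LARGE_EQ_NOBRSXY already does this):

    # Hypothetical Guardian-style helper: zero sensor correction in ALL
    # DOFs, vertical included, not just the horizontals.
    SC_DOFS = ['X', 'Y', 'Z']

    def turn_off_all_sensor_correction(ezca, chamber='BS'):
        for dof in SC_DOFS:
            # Illustrative channel naming; gain ramping omitted for brevity
            ezca['ISI-%s_ST1_SENSCOR_%s_GAIN' % (chamber, dof)] = 0.0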
Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:08, Tuesday 18 July 2017 (37580)

Re #2: my recollection was that the T240 is ignored; otherwise it would trip the watchdog while riled up, which it does not.  Regardless, I reviewed the model, and indeed I recalled correctly: the T240s are ignored in the WD as long as all the isolation gains are zero.  I'm not sure what situation Corey was experiencing during the EQ ring-down that suggested the T240s were the problem.
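In pseudocode, the model behavior described here reduces to something like the following (a sketch of the logic, not the actual RCG block diagram):

    def t240_counts_toward_watchdog(isolation_gains):
        # T240 saturations only matter once the platform is actually
        # isolating, i.e. once any isolation-loop gain is nonzero
        return any(g != 0.0 for g in isolation_gains)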

H1 PSL
jason.oberling@LIGO.ORG - posted 10:35, Monday 17 July 2017 - last comment - 10:39, Tuesday 18 July 2017 (37560)
PSL Tripped, Likely due to Flow in Laser Head 3

J. Oberling, E. Merilh, J. Warner

The PSL tripped this morning, likely due to the flow in laser head 3.  As can be seen in this alog, the flow in head 3 (and only head 3) dropped from 0.52 lpm to ~0.47 lpm sometime on the morning of Saturday, 7/15/2017.  It is likely this was the cause of the laser trip.  I will do more forensics when I am onsite tomorrow.  When restarting the crystal chiller, the flow in laser head 3 was back up around 0.54 lpm.  Maybe something worked its way through the system, causing the laser trip?

When restarting the laser, the Beckhoff software appeared to lose communication with the PSL hardware, requiring a reboot of the PSL Beckhoff PC.  Once this was done, everything worked fine and the laser came back up.  I had difficulty injection locking the 35W FE to the HPO; I believe this is due to the lower power out of the HPO.  I engaged the injection locking by turning the RAMP OFF and monitoring the power circulating in both directions of the HPO.  When the circulating power favored the forward direction, I manually engaged the injection locking.  This worked the first time.  Ed re-engaged the PMC, FSS, and ISS, and Jim reset the NPRO noise eater.  By this time ~30 minutes had elapsed, so I engaged the LRA and the power watchdogs.  The laser is now up and running.

J. Kissel filed FRS 8539 for this trip.

Comments related to this report
edmond.merilh@LIGO.ORG - 11:25, Monday 17 July 2017 (37564)
Images attached to this comment
jason.oberling@LIGO.ORG - 10:39, Tuesday 18 July 2017 (37581)

Looking at some trends, it appears that it was Laser Head 4 that tripped the PSL, not Laser Head 3 as previously surmised.  The first attachment shows the 4 laser head flows around the time of the trip.  The second shows the flow through Head 4 and the Head 1-4 Flow Interlock, while the third shows the flow through Head 3 and the interlock.  It is clear from this that the flow through Head 4 was at the trip point of 0.4 lpm at the time the interlock tripped, while the flow through Head 3 remained above the trip point.  It is unclear why the flow through Laser Head 4 fell so fast; possibly something moving through the system caused a glitch with the flow sensor?

Images attached to this comment