Displaying reports 49541-49560 of 85517.
Reports until 12:04, Tuesday 20 June 2017
H1 IOO (IOO, PSL)
cheryl.vorvick@LIGO.ORG - posted 12:04, Tuesday 20 June 2017 (37022)
GigE cameras installed on PSL, plugged in, but no beams on them

- Sheila, Cheryl

Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 11:29, Tuesday 20 June 2017 (37021)
Ops Day Shift Morning Log
Restarted all nuc computers in control room per Carlos's request.

15:09 UTC Karen to end Y (cleaning)
15:15 UTC Hugh and Chris to mechanical room
15:17 UTC Took ISC_LOCK to DOWN for Jason to start PSL FAMIS tasks
15:19 UTC Jim and Krishna to LVEA (compact BRS)
15:22 UTC Hugh HEPI CS
15:23 UTC Dick and John to LVEA
15:32 UTC Christina leaving end X
15:33 UTC Sudarshan to end X (PCAL)
15:35 UTC Filiberto to CER, slow controls chassis 6
15:39 UTC Kyle to LVEA to retrieve vacuum pump
15:40 UTC Jason done
15:46 UTC Carlos to CER to restart work station
15:49 UTC Hugh to end stations (HEPI)
15:51 UTC Pep, Ed and Paul to end Y (ESD power supply)
15:57 UTC Gerardo to end Y (pull cable)
15:58 UTC Karen leaving end Y
16:00 UTC Peter transitioning LVEA to laser safe
16:05 UTC Jason to end X to swap and recenter optical lever laser
16:10 UTC Richard to CS roof to re-terminate weather station temperature sensor
16:11 UTC Carlos back
16:11 UTC Karen to LVEA (cleaning)
16:11 UTC Peter done transitioning LVEA to laser safe. Peter to optics lab.
16:18 UTC Alex Urban restarting GDS calibration pipeline
16:19 UTC Alex done
16:27 UTC Rick to end X to work with Sudarshan
16:28 UTC Richard back
16:30 UTC Krishna back
16:37 UTC Krishna back to LVEA
16:40 UTC Dave making CALCS model change
16:44 UTC Hugh back from end stations, to LVEA to move roaming seismometer
16:47 UTC Keita taking SURF students on tour through LVEA
16:55 UTC Pep, Ed and Paul back
16:57 UTC Sheila and Cheryl to PSL enclosure to pull in cables for GigE camera
17:00 UTC Filiberto done
17:05 UTC Hugh back, Ed to end Y, LN2 delivery through gate
17:14 UTC Jenne, Joe, and Paul to beer garden to glue accelerometers on floor
17:17 UTC Kiwamu aligning PRM and SRM
17:21 UTC Paradise water delivery
17:29 UTC Jason back, to LVEA to recenter ITMX, SR3 and power adjust ITMY
17:32 UTC Filiberto to end Y to help Gerardo
17:35 UTC Kiwamu to LVEA to check cabling around ISC rack 3 (near HAM6)
17:39 UTC Hanford fire through gate
17:41 UTC Changed power to 30 W for Cheryl
17:42 UTC Ace toilet service through gate
17:43 UTC Changed power back to 2 W
17:46 UTC Sudarshan out of end X
17:55 UTC Chandra done checking AIP
18:01 UTC Ed back (was back earlier)
18:05 UTC Keita done with tour, Cheryl back
18:06 UTC Jason done in LVEA, going to end Y
18:11 UTC Chandra WP 7054
18:15 UTC Jenne, Joe and Paul back
H1 SEI
hugh.radkins@LIGO.ORG - posted 10:54, Tuesday 20 June 2017 - last comment - 10:57, Tuesday 20 June 2017(37019)
Visual Inspect of LHO HEPI Fluid Reservoirs--Not that bad

Joe Hanson at LLO sent me a photo of their HEPI fluid reservoir interior, so I wanted to check ours and did so this morning under WP 7040.  I was unable to capture an image, but I observed a small, light, dust-appearing patch (a few cm2) on the fluid surface and a few drop-sized globules on the walls of the vessel.  I think the surface patch could actually be dust, and the globules on the sides are likely dried-out fluid left behind as the reservoir level has gone through cycles.  This was at the Corner Station.  I saw no evidence of either at EndY, and just a small surface patch on the fluid at EndX.  What I saw was nothing like the image from LLO that I'll attach in a moment.  Have closed WP 7040.

Comments related to this report
hugh.radkins@LIGO.ORG - 10:57, Tuesday 20 June 2017 (37020)

Here is the image from LLO's reservoir.

Images attached to this comment
H1 ISC (CDS, IOO)
filiberto.clara@LIGO.ORG - posted 10:18, Tuesday 20 June 2017 (37018)
9 MHz EOM Driver's Beckhoff Readback Signals

WP 7043
FRS 8250
aLOG 36568

Replaced analog input Beckhoff terminal on EtherCAT Slow Controls Chassis 6, S1400077. Terminal replaced was an EL3104, middle terminal, slot 10. This was to address a drift seen in the readback signals.

H1 CAL (CAL, DetChar)
alexander.urban@LIGO.ORG - posted 09:26, Tuesday 20 June 2017 - last comment - 13:28, Wednesday 21 June 2017(37017)
GDS calibration pipeline restarted with gstlal-calibration-1.1.7

The primary and redundant h(t) pipelines were restarted at GPS second 1182010698. This pipeline restart is not accompanied by any filter changes, but does pick up gstlal-calibration-1.1.7. For more information on this version of the code, please see its redmine page.

For more information about the filters currently being run, please see the following aLOG entries:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36864
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36842

Comments related to this report
gregory.mendell@LIGO.ORG - 15:38, Tuesday 20 June 2017 (37034)

The corrected filter file needed for LHO Work Permit Number 7047 had already been staged on the DMT on June 14. Thus, this restart of the calibration code picked up the corrected filter file, and this completes LHO Work Permit Number 7047.

alexander.urban@LIGO.ORG - 13:28, Wednesday 21 June 2017 (37067)

Correction to this alog: the pipeline restart does not affect the version of gstlal-calibration running, since version 1.1.7 was already running at Hanford. However, it does pick up corrected filters; the file pointing to those filters had the same name, so no command-line change was needed. I apologize for the confusion.

H1 PSL
jason.oberling@LIGO.ORG - posted 08:50, Tuesday 20 June 2017 (37015)
PSL Weekly FAMIS Tasks (FAMIS 3655 & 8427)

I performed the weekly PSL FAMIS tasks this morning.

HPO Pump Diode Current Adjust

All pump diode currents were increased by 0.1 A; new and old currents are summarized in the table below.  The first attachment shows a 15-day minute-trend of how the DBs have decayed since the last current adjustment, while the second is a screenshot of the main PSL Beckhoff screen for future reference.

       Operating Current (A)
       Old     New
DB1    48.6    48.7
DB2    51.5    51.6
DB3    51.5    51.6
DB4    51.5    51.6
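The weekly adjustment described above is simple arithmetic; a minimal sketch, with the "Old" values taken from the table (this only computes the numbers, it is not an interface to the PSL Beckhoff system):

```python
# Sketch of the weekly current bump: every diode box (DB) operating
# current is raised by 0.1 A. "Old" values are from the table above.
old = {"DB1": 48.6, "DB2": 51.5, "DB3": 51.5, "DB4": 51.5}
new = {db: round(amps + 0.1, 1) for db, amps in old.items()}
print(new)  # matches the "New" column of the table
```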

I looked at and optimized the DB operating temps as well.  I changed the temps on all the diodes in DB1 from 28.0 °C to 29.0 °C; the operating temps of the other 3 DBs remained unchanged.  The HPO is now outputting ~157.7 W.  This completes FAMIS task 8427.

PSL Power Watchdog Reset

I reset both PSL power watchdogs at 15:35 UTC (8:35 PDT).  This completes FAMIS task 3655.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 08:00, Tuesday 20 June 2017 (37010)
OWL Operator Summary

TITLE: 06/20 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Cheryl (Patrick covering at beginning of shift)
SHIFT SUMMARY:

A bit of a rough shift, with a 0.44 Hz oscillation seen in the ASC pitch signals and on the oplevs (primarily ITMy).

So we had a few hours of CORRECTIVE MAINTENANCE (no FRS has been filed).
LOG:

H1 AOS (AOS, ISC, SUS)
corey.gray@LIGO.ORG - posted 07:40, Tuesday 20 June 2017 (37013)
Continuing To Watch ITMy & ASC: Was OK For Almost 2Hrs

Decided to slowly take H1 up to Nominal Low Noise, while watching the ITMy oplev & ASC (watched 0.05 Hz "live" spectra, oplev blrms channels on dataviewer, & the ASC Control signals).  

Miraculously, H1 eventually made it to Nominal Low Noise and the culprits of the evening did not appear, so H1 was cautiously taken to OBSERVING at 12:50 UTC.

Unfortunately, at 14:00 UTC the 0.44 Hz noise returned (the ITMy oplev spectra looked bad, the ITMy oplev BLRMS increased, and the ASC control signals all showed the 0.44 Hz feature).  Took H1 out of OBSERVING, marked it as CORRECTIVE MAINTENANCE, and tried adjusting the ASC CSOFT gain (went from 0.6 to 0.8 with a 30 s TRAMP), but H1 eventually dropped out of lock.

Attached is the ITMy oplev summary page (unfortunately it doesn't show data after 14:00 UTC yet).


Images attached to this report
H1 AOS (AOS, DetChar, ISC)
corey.gray@LIGO.ORG - posted 04:47, Tuesday 20 June 2017 (37012)
ITMy Oplev Exhibiting 0.44Hz Noise Again

As Jeff alluded to in his summary, Sheila posted an alog about oplev damping back in April, but I stopped looking at oplevs since the off-center ITMX & ETMX had been that way for over a week (i.e. they were not a new feature).

After looking into ASC Land, I returned to the oplevs after going through Sheila's alog and then also looking at Krishna's alog.  Summary of Oplev operations over the last couple of months:

So with oplevs not actively controlling optics, I didn't focus on them.  However, we still use them to monitor the positions of the optics, so I returned to checking them out after reading Krishna's alog about ITMy.  I took spectra of all (4) test masses (references from last night's lock and from when we had trouble with H1 tonight).  Of all the test masses, ITMy appears to be much noisier, and it all begins down at 0.44 Hz (this is the same oscillation seen in the ASC signals previously alogged tonight).

Following Krishna's diagnosis, I looked at the BLRMS signals for the oplevs in the H1 Summary Pages, and one can clearly see ITMy starting to look unhealthy compared to the other optics toward the end of Jeff's shift, starting between 4:00 and 5:00 UTC (look at the bands between 0.3 and 10 Hz).  The behavior isn't like what Krishna saw (back then ITMy reached values of 1.0+ urad for short times, while in this case ITMy stays just under 0.1 urad, but for longer stretches).
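A band-limited RMS of the kind shown on the summary pages is essentially a bandpass filter followed by an RMS.  A minimal sketch, using a synthetic 0.44 Hz, ~0.1 urad line rather than the actual summary-page code or channel configuration:

```python
# Minimal BLRMS sketch: bandpass the time series, then take the RMS.
# The sample rate, filter order, and band edges here are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def blrms(data, fs, f_lo, f_hi):
    """RMS of `data` after band-passing between f_lo and f_hi (Hz)."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return np.sqrt(np.mean(sosfilt(sos, data) ** 2))

# A 0.44 Hz line of ~0.1 urad amplitude, as in the log, dominates
# the 0.3-10 Hz band (RMS of a sine is amplitude / sqrt(2)).
fs = 256.0
t = np.arange(0, 60, 1 / fs)
pitch = 0.1 * np.sin(2 * np.pi * 0.44 * t)
print(blrms(pitch, fs, 0.3, 10.0))
```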

Since we don't use oplevs for damping, could the issue be noise from the ITMy oplev light being seen by H1?  Or is there an ASC effect which is also being observed with the ITMy oplev?

Images attached to this report
H1 ISC (ISC)
corey.gray@LIGO.ORG - posted 02:23, Tuesday 20 June 2017 (37011)
H1's ASC Pit Is Still Not Happy. ~0.44Hz Oscillation Breaks lock within 15min.

H1 locked back up to 70Mpc, but within 5 min the ASC pitch signals started to ring up at 0.44 Hz.  This time H1 couldn't hobble along with the ASC noise and quickly dropped out of lock on its own.

Marking as CORRECTIVE MAINTENANCE again until this can be resolved.

H1 ISC (ISC)
corey.gray@LIGO.ORG - posted 00:47, Tuesday 20 June 2017 - last comment - 01:45, Tuesday 20 June 2017(37007)
ASC Pitch Instabilities

H1's issue can clearly be seen in:

Snapshots of the ASC striptools are attached.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 01:04, Tuesday 20 June 2017 (37008)

Looking at DARM shows noisy lines at 0.44 Hz (which I presume is what is seen in the ASC pitch signals on the StripTools), 0.89 Hz, and 1.34 Hz, in addition to the broadband noise.  Spectra attached (the quiet reference is from 2:00 UTC (7pm PDT)).
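The three line frequencies are consistent with the first three harmonics of a single ~0.44 Hz fundamental, rather than three independent features.  A quick consistency check, using the values as read off the spectra (read-off resolution assumed ~0.01 Hz):

```python
# Check that 0.44, 0.89, and 1.34 Hz sit at near-integer multiples of
# one fundamental. f0 is the best single-fundamental estimate.
lines = [0.44, 0.89, 1.34]  # Hz, as read off the spectra
f0 = sum(f / n for n, f in enumerate(lines, start=1)) / len(lines)
deviations = [abs(f - n * f0) for n, f in enumerate(lines, start=1)]
print(f0, deviations)  # f0 ~ 0.444 Hz; each line within ~0.01 Hz of n*f0
```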

Images attached to this comment
corey.gray@LIGO.ORG - 01:45, Tuesday 20 June 2017 (37009)

Checking SDF for ASC & Related To Pitch

Looked in SDF & only found a few changes.  Most related to ASC & pitch were fairly small.

Using TimeMachine To Check For Diffs

Since the striptools were showing issues with the CSOFT PIT & MICH_P signals, I went about following the signal through ASC Land to see where the issues arise.  According to the ASC input matrix, the PDs for CSOFT are the TR A & B PDs, and one can clearly see the signal grow on these.  From CSOFT the control then goes out to all four test masses.
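As a sketch of how an input matrix forms a soft mode from the transmission QPD signals (the 0.5/-0.5 elements and the signal values here are a generic sum/difference illustration, not H1's actual matrix settings):

```python
# Illustrative only: soft ASC modes are linear combinations of the
# arm transmission QPD signals, applied through an input matrix.
import numpy as np

tr_a, tr_b = 0.3, 0.1          # example TR A / TR B pitch signals
M = np.array([[0.5,  0.5],     # CSOFT: common (sum) of TR A and TR B
              [0.5, -0.5]])    # DSOFT: differential
csoft, dsoft = M @ np.array([tr_a, tr_b])
print(csoft, dsoft)
```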

Out Of OBSERVING & Marking As Corrective Maintenance

Since we were not getting anywhere (H1 range was barely above 50Mpc), I decided to take H1 Out of OBSERVING & tag Observatory Mode as CORRECTIVE MAINTENANCE.  The thought is that starting anew will be useful here.  
I tried lowering the CSOFT pitch gain, first from 0.6 -> 0.5, and then from 0.5 -> 0.2 (this immediately rang up the ASC more and broke the lock).
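The ramped gain changes used tonight behave like a linear interpolation of the gain over the TRAMP time, rather than an instantaneous step.  A minimal sketch of that behavior only (it does not talk to EPICS; the 0.6 and 0.2 values are the ones tried here, and the 30 s ramp time is the one used earlier tonight):

```python
# Simulate a TRAMP-style gain change: the effective gain moves linearly
# from the old value to the new one over the ramp time.
def ramped_gain(g_old, g_new, t_ramp, t):
    """Effective gain at time t (s) after requesting g_new with ramp t_ramp."""
    if t >= t_ramp:
        return g_new
    return g_old + (g_new - g_old) * (t / t_ramp)

# Halfway through a 30 s ramp from 0.6 to 0.2, the gain is ~0.4:
print(ramped_gain(0.6, 0.2, 30.0, 15.0))
```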

H1 General
corey.gray@LIGO.ORG - posted 00:15, Tuesday 20 June 2017 (37006)
Transition To OWL

TITLE: 06/20 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 52Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
    Wind: 6mph Gusts, 5mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

ASC pitch control & error signals are huge, and this is resulting in broadband noise on DARM from 10-100 Hz and, obviously, in a low range.  I will start investigating further while we are still locked.  The ITMx & ETMx oplevs have been at/beyond their limits in pitch & yaw, but they've been like that for at least a week.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:11, Tuesday 20 June 2017 (37005)
Ops Evening Shift Summary
Ops Shift Log: 06/19/2017, Evening Shift 23:00 – 07:00 (16:00 - 00:00) Time - UTC (PT)
State of H1: Locked at NLN 29.2W, 48.2Mpc
Intent Bit: Observing
Support: N/A
Incoming Operator: Corey

Shift Summary: After the IFO relocked, I ran the A2L check script. Pitch was a bit rung up; since we were in commissioning mode, I decided to run the repair script to make things better.

LLO called about a GRB alert. LHO did not receive notice from Verbal Alarms. In one hour hold.

Around 06:00 (23:00) the ASC Pitch Control and Error signals started to ring up. Could find no apparent reason. Sheila’s aLOG #35371 talks about this and OpLev damping. ITMX and ETMX OpLevs are off center. However, OpLev damping appears to be off on ITMY and ETMY. Range is suffering.


Activity Log: Time - UTC (PT)
23:00 (16:00) Take over from Cheryl
23:25 (16:25) Kyle – Going to Mid-Y
23:44 (16:44) Kyle – Back from Mid-Y
00:16 (17:16) Relocked at NLN
00:31 (17:31) Intention bit set to Observing
00:51 (17:51) Damp PI Mode-27 by switching Phase sign
05:05 (22:05) GRB Alert – In one hour hold
06:05 (23:05) End GRB one hour hold
07:00 (00:00) Turn over to Corey


H1 General
jeffrey.bartlett@LIGO.ORG - posted 22:16, Monday 19 June 2017 - last comment - 08:53, Tuesday 20 June 2017(37004)
GRB Alert -
   LLO called about a GRB alert. We are in a one-hour stand-down for the alert. 

   LHO did not receive notice via Verbal Alarms. 
Comments related to this report
thomas.shaffer@LIGO.ORG - 08:53, Tuesday 20 June 2017 (37016)OpsInfo

There has not been a GRB since June 18 11:25 UTC according to GraceDb, so it looks like LLO received a false alarm.

Tagging OpsInfo for a reminder: L1500117 says to confirm that this is not a maintenance test, but maybe it should also include confirming that it is a real event. Please make sure to check GraceDb after receiving one of these alerts, to verify that it is not a test and that it is a current event.

Images attached to this comment
H1 General
jeffrey.bartlett@LIGO.ORG - posted 20:23, Monday 19 June 2017 (37003)
Ops Evening Mid-Shift Summary
  Back to Observing for about 3 hours. Environmental and seismic conditions are good. Range has been in the upper 60s Mpc. All monitors are green and clear. 
H1 CDS (SYS)
david.barker@LIGO.ORG - posted 11:25, Monday 19 June 2017 - last comment - 08:00, Tuesday 20 June 2017(36996)
investigation into recent timing alarms

In the past week we have had two timing alarms:

Monday 6/12 01:19 UTC (Sunday 6/11 18:19 PDT) EY GPS
Saturday 6/17 23:15 - 23:24 UTC (16:15 - 16:24 PDT) MSR Master GPS

The first was a short (one minute) alarm from EY GPS (CNS-II). I trended all the EPICS channels on this system and only found that the Dilution of Precision (DOP) and the Receiver mode channels showed any variation at this time (number of satellites locked did not change). Does this mean it is a real or bogus error?

The second was a longer alarm (9 minutes) with the MSR master's GPS receiver (I think the internal one). The only channels in alarm were "GPS locked" and the MSR comparator channel-01 (MFO GPS). This would suggest a real problem with the master's GPS receiver?

Time to pass this over to the timing group for further investigation (Daniel, Zsuzsanna, Stefan?)

Comments related to this report
daniel.sigg@LIGO.ORG - 08:00, Tuesday 20 June 2017 (37014)

These error messages seem real, but are non-critical. The internal GPS is only used to set the GPS date/time.
