Reports until 14:12, Tuesday 20 June 2017
H1 General
cheryl.vorvick@LIGO.ORG - posted 14:12, Tuesday 20 June 2017 (37030)
Maintenance Update: 21:00 UTC
H1 SEI
jenne.driggers@LIGO.ORG - posted 13:56, Tuesday 20 June 2017 (37029)
NN array sensors back

[PaulM, JoeM, Jenne]

Most of the Newtonian noise sensors that had to be removed for the May vent are back.  We did not reinstall the ~3 sensors that are north of the HAM4 north door; I'll do that quickly next Tuesday.

H1 General
edmond.merilh@LIGO.ORG - posted 13:50, Tuesday 20 June 2017 (37028)
ESD Electronics at EY moved to Temporary Power Supplies

This morning I executed WP #7044. The supplies were placed on the floor directly behind the rack/chassis that they are powering.

H1 AOS (DetChar, SUS)
jason.oberling@LIGO.ORG - posted 12:57, Tuesday 20 June 2017 (37027)
Optical Lever Maintenance (WP 7042)

Several oplev maintenance items were completed today.

Swap ETMx OpLev Laser

I swapped the ETMx oplev laser because the old laser was glitching and there was no adjustment room left in the laser power to stay in the glitch-free zone.  The new laser SN is 130-1; the old laser SN was 106-1.  The output power of laser SN 130-1 was set to the same point used in the Pcal lab (using the Current Mon port on the back of the laser): 0.793 V.  I will test laser SN 106-1 in the Pcal lab in the LSB to see if it is still useful or if a return for refurbishment is necessary.

Re-center ETMx, ITMx, and SR3 OpLevs

After the May vent the alignment of these 3 optics changed slightly, requiring re-centering of their oplevs.  This has now been completed.

Power Adjustment for ITMy and ETMy OpLevs

Both of these oplev lasers were showing signs of very slight mode-hop glitches, so I adjusted the output power of both lasers to return to a glitch-free operating power.  I used the Current Mon port on the back of the lasers to monitor the power increase (the port outputs a voltage).  The adjustments were:

All of the affected lasers (ETMx, ITMy, ETMy) will need a few hours to return to thermal equilibrium, so I will assess later this afternoon whether or not they are still glitching and require further adjustment.  This completes WP 7042.  Should further adjustments to these lasers be required, I will open a new work permit.

H1 AOS (SEI)
krishna.venkateswara@LIGO.ORG - posted 12:55, Tuesday 20 June 2017 (37026)
Compact-BRS relocated to ITMY and restarted

Jim, Krishna

We moved the c-BRS close to the ITMY chamber (roughly the same position relative to the chamber as the previous position relative to ITMX) (see attached photos with Jim for scale). I unlocked the beam-balance, centered it, and hooked up the fiber optics and the electronics. After wondering why we weren't getting light on the photodiodes, I realized I had to actually power the laser for that. doh.

The instrument is functioning normally now, though the drift in the beam-balance position is high, as expected. It will settle slowly over the next few days. It is back under guardian control, which is also working well. To compensate for the drift, which is currently in the opposite direction from normal, I have changed the setpoints of the Piezo1 Offset and Piezo2 Offset to 90,000 and -20,000 (compared to the normal values of 110,000 and -40,000, respectively). Once the drift normalizes, these offsets can be returned to their normal values.
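For reference, a minimal sketch of how such setpoint changes could be made over EPICS is below; it assumes pyepics, and the channel names are placeholders for illustration, not the actual c-BRS channels.

# Minimal sketch (pyepics) of stepping the c-BRS piezo offset setpoints.
# The channel names are assumptions for illustration only; the real c-BRS
# EPICS channels may differ.
from epics import caget, caput

new_offsets = {
    'H1:ISI-GND_BRS_ITMY_PZT1_OFFSET': 90000,    # normal value ~110,000
    'H1:ISI-GND_BRS_ITMY_PZT2_OFFSET': -20000,   # normal value ~-40,000
}

for channel, value in new_offsets.items():
    old = caget(channel)
    caput(channel, value, wait=True)
    print('{}: {} -> {}'.format(channel, old, value))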

Images attached to this report
H1 PEM (PEM)
richard.mccarthy@LIGO.ORG - posted 12:35, Tuesday 20 June 2017 (37025)
Corner Station Temperature
Per FRS 7691 and WP 7052 I looked at the weather station temperature sensor.  There is an intermittent fault on the signal that bothers DetChar, so I re-terminated the wiring in the hope that this fixes the problem.  Will have to trend the data for some time.
H1 General
cheryl.vorvick@LIGO.ORG - posted 12:34, Tuesday 20 June 2017 (37024)
Maintenance Update:

todo:

LHO VE
chandra.romel@LIGO.ORG - posted 12:34, Tuesday 20 June 2017 (37023)
Tested new AIP set points & rerouted IP6 HV cables

Closed WP 7053 & 7054

Tested new set points on the annulus IPs by power cycling the AIPs on HAM 4, 5, and 6. Now the pumps show red on the MEDM screen when power is lost.

Rerouted HV cables on IP6 for improved safety. Threads on bottom nut on west side connector are bad.

Closed FRS 7036 & 8088

H1 IOO (IOO, PSL)
cheryl.vorvick@LIGO.ORG - posted 12:04, Tuesday 20 June 2017 (37022)
GigE cameras installed on PSL, plugged in, but no beams on them

- Sheila, Cheryl

Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 11:29, Tuesday 20 June 2017 (37021)
Ops Day Shift Morning Log
Restarted all nuc computers in control room per Carlos's request.

15:09 UTC Karen to end Y (cleaning)
15:15 UTC Hugh and Chris to mechanical room
15:17 UTC Took ISC_LOCK to DOWN for Jason to start PSL FAMIS tasks
15:19 UTC Jim and Krishna to LVEA (compact BRS)
15:22 UTC Hugh HEPI CS
15:23 UTC Dick and John to LVEA
15:32 UTC Christina leaving end X
15:33 UTC Sudarshan to end X (PCAL)
15:35 UTC Filiberto to CER, slow controls chassis 6
15:39 UTC Kyle to LVEA to retrieve vacuum pump
15:40 UTC Jason done
15:46 UTC Carlos to CER to restart work station
15:49 UTC Hugh to end stations (HEPI)
15:51 UTC Pep, Ed and Paul to end Y (ESD power supply)
15:57 UTC Gerardo to end Y (pull cable)
15:58 UTC Karen leaving end Y
16:00 UTC Peter transitioning LVEA to laser safe
16:05 UTC Jason to end X to swap and recenter optical lever laser
16:10 UTC Richard to CS roof to re-terminate weather station temperature sensor
16:11 UTC Carlos back
16:11 UTC Karen to LVEA (cleaning)
16:11 UTC Peter done transitioning LVEA to laser safe. Peter to optics lab.
16:18 UTC Alex Urban restarting GDS calibration pipeline
16:19 UTC Alex done
16:27 UTC Rick to end X to work with Sudarshan
16:28 UTC Richard back
16:30 UTC Krishna back
16:37 UTC Krishna back to LVEA
16:40 UTC Dave making CALCS model change
16:44 UTC Hugh back from end stations, to LVEA to move roaming seismometer
16:47 UTC Keita taking SURF students on tour through LVEA
16:55 UTC Pep, Ed and Paul back
16:57 UTC Sheila and Cheryl to PSL enclosure to pull in cables for GigE camera
17:00 UTC Filiberto done
17:05 UTC Hugh back, Ed to end Y, LN2 delivery through gate
17:14 UTC Jenne, Joe, and Paul to beer garden to glue accelerometers on floor
17:17 UTC Kiwamu aligning PRM and SRM
17:21 UTC Paradise water delivery
17:29 UTC Jason back, to LVEA to recenter ITMX, SR3 and power adjust ITMY
17:32 UTC Filiberto to end Y to help Gerardo
17:35 UTC Kiwamu to LVEA to check cabling around ISC rack 3 (near HAM6)
17:39 UTC Hanford fire through gate
17:41 UTC Changed power to 30 W for Cheryl
17:42 UTC Ace toilet service through gate
17:43 UTC Changed power back to 2 W
17:46 UTC Sudarshan out of end X
17:55 UTC Chandra done checking AIP
18:01 UTC Ed back (was back earlier)
18:05 UTC Keita done with tour, Cheryl back
18:06 UTC Jason done in LVEA, going to end Y
18:11 UTC Chandra WP 7054
18:15 UTC Jenne, Joe and Paul back
H1 SEI
hugh.radkins@LIGO.ORG - posted 10:54, Tuesday 20 June 2017 - last comment - 10:57, Tuesday 20 June 2017(37019)
Visual Inspect of LHO HEPI Fluid Reservoirs--Not that bad

Joe Hanson at LLO sent me a photo of their HEPI Fluid Reservoir interior, so I wanted to check ours and did so this morning under WP 7040.  I was unable to capture an image, but my observation is that there was a small, light, dust-appearing patch (a few cm2) on the surface and a few globules (drop size) on the walls of the vessel.  I think the surface patch could actually be dust on the surface, and the globules on the sides are dried-out fluid left behind as the reservoir level has gone through level cycles.  This was at the Corner Station.  I saw no evidence of either at EndY, and just a small surface patch on the fluid at EndX.  What I saw was nothing like the image from LLO that I'll attach in a moment.  Have closed WP 7040.

Comments related to this report
hugh.radkins@LIGO.ORG - 10:57, Tuesday 20 June 2017 (37020)

Here is the image from LLO's reservoir.

Images attached to this comment
H1 ISC (CDS, IOO)
filiberto.clara@LIGO.ORG - posted 10:18, Tuesday 20 June 2017 (37018)
9 MHz EOM Driver's Beckhoff Readback Signals

WP 7043
FRS 8250
aLOG 36568

Replaced analog input Beckhoff terminal on EtherCAT Slow Controls Chassis 6, S1400077. Terminal replaced was an EL3104, middle terminal, slot 10. This was to address a drift seen in the readback signals.

H1 CAL (CAL, DetChar)
alexander.urban@LIGO.ORG - posted 09:26, Tuesday 20 June 2017 - last comment - 13:28, Wednesday 21 June 2017(37017)
GDS calibration pipeline restarted with gstlal-calibration-1.1.7

The primary and redundant h(t) pipelines were restarted at GPS second 1182010698. This pipeline restart is not accompanied by any filter changes, but does pick up gstlal-calibration-1.1.7. For more information on this version of the code, please see its redmine page.

For more information about the filters currently being run, please see the following aLOG entries:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36864
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36842

Comments related to this report
gregory.mendell@LIGO.ORG - 15:38, Tuesday 20 June 2017 (37034)

The corrected filter file needed for LHO Work Permit Number 7047 had already been staged on the DMT on June 14. Thus, this restart of the calibration code picked up the corrected filter file, and this completes LHO Work Permit Number 7047.

alexander.urban@LIGO.ORG - 13:28, Wednesday 21 June 2017 (37067)

Correction to this alog: the pipeline restart does not affect the version of gstlal-calibration running, since version 1.1.7 was already running at Hanford. However, it does pick up corrected filters; since the file pointing to those filters had the same name, no command line change was needed. I apologize for the confusion.

H1 PSL
jason.oberling@LIGO.ORG - posted 08:50, Tuesday 20 June 2017 (37015)
PSL Weekly FAMIS Tasks (FAMIS 3655 & 8427)

I performed the weekly PSL FAMIS tasks this morning.

HPO Pump Diode Current Adjust

All pump diode currents were increased by 0.1 A; new and old currents are summarized in the table below.  The first attachment shows a 15-day minute-trend of how the DBs have decayed since the last current adjustment, while the 2nd is a screenshot of the main PSL Beckhoff screen for future reference.

      Operating Current (A)
      Old     New
DB1   48.6    48.7
DB2   51.5    51.6
DB3   51.5    51.6
DB4   51.5    51.6

I looked at and optimized the DB operating temps as well.  I changed the temps on all the diodes in DB1 from 28.0 °C to 29.0 °C; the operating temps of the other 3 DBs remained unchanged.  The HPO is now outputting ~157.7 W.  This completes FAMIS task 8427.
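For reference, the kind of minute-trend shown in the first attachment could be pulled offline with something like the sketch below; it assumes gwpy, and the NDS2 minute-trend channel name is only a placeholder, not the actual PSL Beckhoff channel.

# Minimal sketch (gwpy): pull a ~15-day minute-trend of one diode box
# readback.  The channel name is an assumption for illustration.
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:PSL-OSC_DB1_CURRENT.mean,m-trend'  # assumed minute-trend name
trend = TimeSeries.fetch(CHANNEL, 'June 5 2017', 'June 20 2017')
plot = trend.plot()
plot.savefig('db1_trend.png')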

PSL Power Watchdog Reset

I reset both PSL power watchdogs at 15:35 UTC (8:35 PDT).  This completes FAMIS task 3655.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 08:00, Tuesday 20 June 2017 (37010)
OWL Operator Summary

TITLE: 06/20 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Cheryl (Patrick covering at beginning of shift)
SHIFT SUMMARY:

A bit of a rough shift, with a 0.44 Hz oscillation seen in the ASC PIT signals and on oplevs (primarily ITMy).

So we had a few hours of CORRECTIVE MAINTENANCE (no FRS has been filed).
LOG:

H1 AOS (AOS, ISC, SUS)
corey.gray@LIGO.ORG - posted 07:40, Tuesday 20 June 2017 (37013)
Continuing To Watch ITMy & ASC: Was OK For Almost 2Hrs

Decided to slowly take H1 up to Nominal Low Noise, while watching the ITMy oplev & ASC (watched 0.05 Hz "live" spectra, oplev blrms channels on dataviewer, & the ASC Control signals).  

Miraculously, H1 eventually made it to Nominal Low Noise & the culprits of the evening did not appear.  So H1 was cautiously taken to OBSERVING at 12:50 UTC.

Unfortunately, at 14:00 the 0.44 Hz noise returned (ITMy oplev spectra looked bad, ITMy oplev blrms increased, and the ASC control signals all showed the 0.44 Hz feature).  Took H1 out of OBSERVING, marked it as CORRECTIVE MAINTENANCE & tried to adjust the ASC CSOFT gain (went from 0.6 to 0.8 w/ a 30 sec TRAMP), but H1 eventually dropped out of lock.
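For context, a ramped gain change like this is normally done by writing the ramp time and then the new gain to the filter module; the sketch below assumes pyepics and standard *_TRAMP / *_GAIN channel suffixes, which may not match the actual CSOFT channels.

# Minimal sketch (pyepics) of a ramped gain change on a filter module.
# Channel names are assumed for illustration; the actual ASC CSOFT pitch
# channels may differ.
import time
from epics import caput

TRAMP_CH = 'H1:ASC-CSOFT_P_TRAMP'  # assumed ramp-time channel (seconds)
GAIN_CH = 'H1:ASC-CSOFT_P_GAIN'    # assumed gain channel

caput(TRAMP_CH, 30, wait=True)     # 30 s ramp so the gain change is smooth
caput(GAIN_CH, 0.8, wait=True)     # step from 0.6 to 0.8 over the ramp
time.sleep(30)                     # let the ramp finish before judging the effect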

Attached is the ITMy Oplev Summary Page (unfortunately it doesn't yet show data after 14:00 UTC).

 

Images attached to this report
H1 AOS (AOS, DetChar, ISC)
corey.gray@LIGO.ORG - posted 04:47, Tuesday 20 June 2017 (37012)
ITMy Oplev Exhibiting 0.44Hz Noise Again

As Jeff alluded to in his summary, Sheila posted an alog about oplev damping back in April, but I stopped looking at oplevs since the off-center ITMX & ETMX had been that way for over a week (i.e. they were not a new feature).

After looking into ASC Land, I returned to the oplevs after going through Sheila's alog and then also looking at Krishna's alog.  Summary of Oplev operations over the last couple of months:

So with Oplevs not actively not controlling optics, I didn't focus on them.  However, we still use them for monitoring position of the optics.  And with that, I returned to checking them out after reading Krishna's alog about ITMy.  I took spectra of all (4) test masses (ref from last night's lock & from when we had trouble with H1 tonight).  Of all the test masses, ITMy appears to be much more noisy, and it all begins down at 0.44Hz (this is the same oscillation seen in the ASC signals previously alogged tonight).

Following Krishna's diagnosis, I looked at the BLRMS signals for the oplevs in the H1 Summary Pages, and one can clearly see ITMy starting to look unhealthy compared to the other optics toward the end of Jeff's shift, between 4:00-5:00 UTC (look at bands between 0.3 and 10 Hz).  The behavior isn't like what Krishna saw (back then ITMy reached values of 1.0+ urad for short times; in this case ITMy stays just under 0.1 urad, but for longer stretches).
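That BLRMS check can be reproduced offline; the sketch below assumes gwpy, and the oplev channel name and time window are placeholders for illustration.

# Minimal sketch (gwpy) of a band-limited RMS check on the ITMy oplev
# pitch signal, similar to the 0.3-10 Hz BLRMS bands on the summary pages.
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:SUS-ITMY_L3_OPLEV_PIT_OUT_DQ'   # assumed oplev pitch channel

data = TimeSeries.fetch(CHANNEL, 'June 20 2017 04:00', 'June 20 2017 06:00')
blrms = data.bandpass(0.3, 10).rms(60)        # 0.3-10 Hz band, 60 s stride
print('peak BLRMS: {}'.format(blrms.max()))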

Since we don't use oplevs for damping, could the issue be noise from the light of the ITMy oplev being seen by H1?  Or is there an ASC effect which is also being observed with the ITMy oplev?

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 22:16, Monday 19 June 2017 - last comment - 08:53, Tuesday 20 June 2017(37004)
GRB Alert -
   LLO called about a GRB alert. We are in a one-hour stand-down for the alert.

   LHO did not receive notice via Verbal Alarms. 
Comments related to this report
thomas.shaffer@LIGO.ORG - 08:53, Tuesday 20 June 2017 (37016)OpsInfo

There has not been a GRB since June 18 11:25 UTC according to GraceDb, so it looks like LLO received a false alarm.

Tagging OpsInfo for a reminder: L1500117 says to confirm that this is not a maintenance test, but maybe it should also include confirming that it is a real event. Please make sure to check GraceDb after receiving one of these alerts to make sure it is not a test and is a current event.
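As an illustration of that check, something like the sketch below could be used to list recent external (GRB) triggers; it assumes the ligo-gracedb REST client, and the query string is an assumption about the GraceDb search syntax.

# Minimal sketch (ligo-gracedb) for confirming whether a GRB alert
# corresponds to a real, recent event in GraceDb.  The query string is an
# assumed form of the search syntax.
from ligo.gracedb.rest import GraceDb

client = GraceDb()
query = 'group: External created: 2017-06-18 .. now'   # assumed query form
for event in client.events(query):
    print(event['graceid'], event['created'], event['pipeline'])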

Images attached to this comment
H1 CDS (SYS)
david.barker@LIGO.ORG - posted 11:25, Monday 19 June 2017 - last comment - 08:00, Tuesday 20 June 2017(36996)
investigation into recent timing alarms

In the past week we have had two timing alarms:

Monday 6/12 01:19 UTC (Sunday 6/11 18:19 PDT) EY GPS
Saturday 6/17 23:15 - 23:24 UTC (16:15 - 16:24 PDT) MSR Master GPS

The first was a short (one minute) alarm from EY GPS (CNS-II). I trended all the EPICS channels on this system and only found that the Dilution of Precision (DOP) and the Receiver mode channels showed any variation at this time (number of satellites locked did not change). Does this mean it is a real or bogus error?
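For reference, one of those channels could be trended around the alarm time with something like the sketch below; it assumes gwpy, and the DOP channel name is only a placeholder, not the actual timing-system channel.

# Minimal sketch (gwpy) of trending an EY GPS receiver EPICS channel
# around the 6/12 alarm.  The channel name is an assumption for
# illustration; the real timing-system channel may differ.
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:SYS-TIMING_Y_GPS_A_DOP'   # assumed DOP readback channel
trend = TimeSeries.fetch(CHANNEL, 'June 12 2017 00:49', 'June 12 2017 01:49')
plot = trend.plot()
plot.savefig('ey_gps_dop.png')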

The second was a longer alarm (9 minutes) with the MSR master's GPS receiver (I think the internal one). The only channels in alarm were "GPS locked" and the MSR comparator channel-01 (MFO GPS). This would suggest a real problem with the Master's GPS receiver?

Time to pass this over to the timing group for further investigation (Daniel, Zsuzsanna, Stefan?)

Comments related to this report
daniel.sigg@LIGO.ORG - 08:00, Tuesday 20 June 2017 (37014)

These error messages seem real, but are non-critical. The internal GPS is only used to set the GPS date/time.
