H1 General
patrick.thomas@LIGO.ORG - posted 04:58, Tuesday 13 June 2017 (36827)
Observing
NOISE_TUNINGS was announced twice by verbal alarms, along with: 'TypeError while running test: LOCK_STATE'. This occurred the last time as well.

Accepted the attached SDF differences. Why are these coming up each time?

Verbal alarms crashed again upon hitting the observing intent bit.

Hopefully the initial alignment will help this lock.
Images attached to this report
H1 General
patrick.thomas@LIGO.ORG - posted 04:39, Tuesday 13 June 2017 (36826)
Lock loss
Lock loss at 9:46 UTC. Ran initial alignment. Relocking.
H1 General
patrick.thomas@LIGO.ORG - posted 03:02, Tuesday 13 June 2017 (36825)
Observing
Accepted the attached SDF differences. Verbal alarms crashed upon hitting the observing intent bit. Previously seen DARM noise is gone.
Images attached to this report
H1 General
patrick.thomas@LIGO.ORG - posted 02:04, Tuesday 13 June 2017 - last comment - 02:25, Tuesday 13 June 2017(36822)
Lock loss
Lock loss at 09:00 UTC. Possibly from the mechanism causing the DARM noise?
Comments related to this report
patrick.thomas@LIGO.ORG - 02:23, Tuesday 13 June 2017 (36823)
May have actually been from an earthquake.

I do not believe seismon saw it. A tconvert on the GPS time gives:
patrick.thomas@zotws3:~$ tconvert 1181363865
Jun 13 2017 04:37:27 UTC
patrick.thomas@zotws3:~$ 
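For reference, the same GPS-to-UTC conversion can be done in Python. A minimal sketch, assuming the astropy package is installed (astropy handles the leap-second bookkeeping):

    # GPS -> UTC, equivalent to the tconvert call above
    from astropy.time import Time

    gps_time = 1181363865
    print(Time(gps_time, format='gps').utc.iso)  # expected: 2017-06-13 04:37:27.000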
Images attached to this comment
patrick.thomas@LIGO.ORG - 02:25, Tuesday 13 June 2017 (36824)
I believe this is the one seismon is reporting, which was much earlier.
Images attached to this comment
H1 General (DetChar)
patrick.thomas@LIGO.ORG - posted 01:05, Tuesday 13 June 2017 - last comment - 10:02, Tuesday 13 June 2017(36819)
DARM noise
Range is dropping with it. See attached.
Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 01:08, Tuesday 13 June 2017 (36820)
I have seen this before: alog 35173. If I recall, Jeff K. knew what it was, but I don't remember what he said.
patrick.thomas@LIGO.ORG - 01:15, Tuesday 13 June 2017 (36821)
Found it: alog 35184
sheila.dwyer@LIGO.ORG - 09:07, Tuesday 13 June 2017 (36835)

What is rung up at 6 kHz? That might be the problem.

patrick.thomas@LIGO.ORG - 09:58, Tuesday 13 June 2017 (36839)
Good question. Somehow I hadn't noticed that.

It is not there in the screenshot in the first alog I linked to (from when this happened before).
patrick.thomas@LIGO.ORG - 10:02, Tuesday 13 June 2017 (36840)
Well, maybe it is, but at a much reduced amplitude.
LHO General
patrick.thomas@LIGO.ORG - posted 00:20, Tuesday 13 June 2017 (36818)
Ops Owl Shift Transition
TITLE: 06/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s 
QUICK SUMMARY:

Restarted nuc5 and video2.
LHO General
thomas.shaffer@LIGO.ORG - posted 00:00, Tuesday 13 June 2017 (36817)
Ops Eve Shift Summary

TITLE: 06/13 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 57Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: 15-hour lock. Range has been a bit lower than the last lock (59Mpc), but the wind has calmed down and the rest of the environment is calm.

LHO General
thomas.shaffer@LIGO.ORG - posted 20:53, Monday 12 June 2017 - last comment - 23:38, Monday 12 June 2017(36815)
Ops Mid shift report

Locked and Observing for 12hrs. The range drop that we saw previously definitely seems to correlate with the wind, which is odd since the wind only peaked at 35mph for a few tens of minutes and then slowly decreased again.

There does still seem to be some more noise than usual in the 30-1000 Hz band, and our range is still only at 60Mpc, about 5Mpc worse than the last lock.

Comments related to this report
thomas.shaffer@LIGO.ORG - 23:38, Monday 12 June 2017 (36816)

Some trends that may show low wind affecting our range.

Images attached to this comment
H1 CAL
aaron.viets@LIGO.ORG - posted 19:09, Monday 12 June 2017 (36814)
gstlal-calibration-1.1.7 now installed on DMT machines at LHO
When I restarted the calibration pipeline last Thursday, June 8 (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36727 ), I noticed that gstlal-calibration-1.1.7 had been installed on h1dmt0 and h1dmt2. This apparently happened automatically during Tuesday maintenance and was not intentional. However, this change did not affect the process or the output until the restart on June 8. Due to the amount of testing already done with this version (the last batch of C01 was made with it; also see https://bugs.ligo.org/redmine/issues/5531) and the difficulty of reverting to version 1.1.4, it was decided to continue using 1.1.7 at LHO. We have had 5 days of online running since this restart, and no problems have occurred. LLO plans to continue using 1.1.4 for one more week before upgrading to 1.1.7 on June 20.

New features/bug fixes included in 1.1.7 as compared to 1.1.4:
* Algorithm to compute SRC detuning parameters; 4 more 16 Hz channels (see the access sketch after this list):
        - {IFO}:GDS-CALIB_F_S
        - {IFO}:GDS-CALIB_F_S_NOGATE
        - {IFO}:GDS-CALIB_SRC_Q_INVERSE
        - {IFO}:GDS-CALIB_SRC_Q_INVERSE_NOGATE
* Command line option to remove calibration lines (not yet being used, but would add a 16 kHz channel, {IFO}:GDS-CALIB_STRAIN_CLEAN )
* Latency reduced by 5 seconds (was 10 - 14 seconds, now 5 - 9 seconds)
* Bug fix to mark the 3 seconds before and after raw data dropouts as bad in the CALIB_STATE_VECTOR. These 3 seconds are due to the settling of the FIR filters. The risk of running without this bug fix is that there can be glitches around raw data dropouts that need to be manually vetoed (see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=32486 )
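For anyone wanting to look at the new SRC detuning channels offline, here is a minimal sketch of how one of the 16 Hz channels listed above could be fetched, assuming gwpy is installed and NDS2 access to LHO data is available (the GPS interval is arbitrary and only for illustration):

    # Fetch a short stretch of one of the new 16 Hz SRC detuning channels
    from gwpy.timeseries import TimeSeries

    start, end = 1181300000, 1181300064  # arbitrary 64 s interval
    f_s = TimeSeries.fetch('H1:GDS-CALIB_F_S', start, end)
    print(f_s.sample_rate)  # should report 16 Hz
    print(f_s.mean())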
LHO VE
david.barker@LIGO.ORG - posted 17:11, Monday 12 June 2017 (36813)
no more robo-alog entries about CP3, CP4 autofill

The eagle-eyed alog reader would have noticed that the regular robo-alog entries reporting on CP3 and CP4 autofills have been missing lately. We have discontinued this service now that the vacuum team have fixed the LN2 pump liquid level gauges and these pumps are back under PID control.

H1 PSL (CDS, ISC)
koji.arai@LIGO.ORG - posted 16:56, Monday 12 June 2017 (36812)
A typo in ISC_CUST_BULLSEYE.adl fixed

I found and fixed a channel name typo in /opt/rtcds/userapps/release/isc/common/medm/ISC_CUST_BULLSEYE.adl. The file was committed to the repository.

Previously, the H1:PSL-DIAG_BULLSEYE_WID_INMON display on the screen actually showed H1:PSL-DIAG_BULLSEYE_PIT_INMON.
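One way to catch this class of typo is to pull the channel names out of the .adl and check that each one actually connects. A minimal sketch, assuming pyepics is installed and the machine can reach the site EPICS network (the regex is only a rough match for H1 channel names and will miss macro-substituted ones):

    # Scan an MEDM screen for H1 channel names and verify each connects
    import re
    import epics

    adl_path = '/opt/rtcds/userapps/release/isc/common/medm/ISC_CUST_BULLSEYE.adl'
    with open(adl_path) as f:
        text = f.read()

    channels = sorted(set(re.findall(r'H1:[A-Z0-9_-]+', text)))
    for name in channels:
        pv = epics.PV(name)
        if not pv.wait_for_connection(timeout=2.0):
            print('No connection:', name)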

H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 16:39, Monday 12 June 2017 (36811)
VerbalAlarms GRBs turned back on

I have returned VerbalAlarms to its normal operating mode, which reports GRBs. I was asked to turn off the GRBs and other acknowledgeable alarms for the vent, so I created a commissioning mode for Verbal to do that. Running "./VerbalAlarms.py -c" will run it without the acknowledgeable alarms and without a few other unnecessary tests. I had forgotten to switch back to normal mode before the weekend, but it should be all good now!
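For anyone curious how a flag like this might be wired up, a hypothetical sketch follows. This is not the actual VerbalAlarms code; the test names and structure are made up for illustration:

    # Hypothetical: a '-c' commissioning flag that drops acknowledgeable alarms
    import argparse

    parser = argparse.ArgumentParser(description='Verbal alarm announcer (sketch)')
    parser.add_argument('-c', '--commissioning', action='store_true',
                        help='run without acknowledgeable alarms such as GRBs')
    args = parser.parse_args()

    tests = ['LOCK_STATE', 'GRB_ALERT', 'SEI_STATE']  # illustrative names only
    acknowledgeable = {'GRB_ALERT'}
    if args.commissioning:
        tests = [t for t in tests if t not in acknowledgeable]
    print('Announcing tests:', ', '.join(tests))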

LHO General
thomas.shaffer@LIGO.ORG - posted 16:09, Monday 12 June 2017 (36810)
Ops Eve Shift Transition

TITLE: 06/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 16mph Gusts, 9mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY: Higher winds are predicted for this evening; we will see how that goes. Running on a 7.5 hr lock at 59Mpc, but there is some noise from 40-1000 Hz bringing our range down a bit. Vaishali said that it might be PRM misalignment?

 

H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Monday 12 June 2017 (36803)
Ops Day Shift Summary

TITLE: 06/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 59Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:  After relocking after this morning's EQ, it has been a quiet day of Observing.
LOG:

16:45 Peter and Gerardo to Optics Lab

16:55 Richard to roof to look at anemometer

17:14 Peter and Gerardo done in Optics Lab

17:15 Richard off the roof

17:33-17:50  Out of Observing while LLO is down to run A2L

18:14 Marc to MY parts hunting

18:53 Koji to Optics Lab

19:06 Marc back

20:17 Kyle to MX

20:33 Bubba to MX

20:42 Kyle back

21:15 Bubba back

22:17 Betsy and Calum to both mid stations

H1 CAL
aaron.viets@LIGO.ORG - posted 09:11, Thursday 08 June 2017 - last comment - 14:25, Monday 12 June 2017(36727)
GDS calibration pipelines restarted
The primary and redundant DMT processes at LHO were restarted at GPS 1180972740. It appears there was a raw data dropout starting at 2017-06-08 8:36 UTC that lasted almost 7 hours. After this, the calibration pipeline was running but producing no output. A simple restart seems to have gotten data flowing again.
Comments related to this report
aaron.viets@LIGO.ORG - 14:25, Monday 12 June 2017 (36809)
This restart also picked up the new version of the calibration code, gstlal-calibration-1.1.7. This was automatically (unintentionally) installed during Tuesday maintenance.