NOISE_TUNINGS was announced twice by verbal alarms, along with: 'TypeError while running test: LOCK_STATE'. This occurred the last time as well. Accepted the attached SDF differences. Why are these coming up each time? Verbal alarms crashed again upon hitting the observing intent bit. Hopefully the initial alignment will help this lock.
Lock loss at 9:46 UTC. Ran initial alignment. Relocking.
Accepted the attached SDF differences. Verbal alarms crashed upon hitting the observing intent bit. Previously seen DARM noise is gone.
Lock loss at 09:00 UTC. Possibly from the mechanism causing the DARM noise?
May have actually been from an earthquake. I do not believe seismon saw it. A tconvert on the GPS time gives:
patrick.thomas@zotws3:~$ tconvert 1181363865
Jun 13 2017 04:37:27 UTC
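As a side note, the same conversion can be reproduced in a few lines of Python, assuming the fixed 18-second GPS-UTC leap-second offset that was in effect in mid-2017 (tconvert itself looks this up in a leap-second table):

from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)   # GPS time starts 1980-01-06 00:00:00 UTC
LEAP_SECONDS = 18                  # GPS-UTC offset valid for mid-2017 (assumption)

def gps_to_utc(gps_seconds):
    # GPS time has no leap seconds, so subtract the accumulated offset
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

print(gps_to_utc(1181363865))      # -> 2017-06-13 04:37:27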
I believe this is the one seismon is reporting, which was much earlier.
Range is dropping with it. See attached.
I have seen this before: alog 35173. If I recall correctly, Jeff K. knew what it was, but I don't remember what he said.
Found it: alog 35184
What is rung up at 6kHz? That might be the problem.
Good question. Somehow I hadn't noticed that. It is not there in the screenshot in the first alog I linked to (from when this happened before).
Well, maybe it is, but at a much reduced amplitude.
TITLE: 06/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: Restarted nuc5 and video2.
TITLE: 06/13 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 57Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: 15-hour lock. The range has been a bit lower than during the last lock (59Mpc), but the wind has calmed down and the rest of the environment is calm.
Locked and Observing for 12 hrs. The range drop that we saw previously definitely seems to correlate with the wind, which is odd since the wind only peaked at 35mph for a few tens of minutes and then slowly decreased again.
There still seems to be more noise than usual in the 30-1000 Hz band, and our range is still only at 60Mpc, about 5Mpc worse than the last lock.
When I restarted the calibration pipeline last Thursday, June 8 (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36727 ), I noticed that gstlal-calibration-1.1.7 had been installed on h1dmt0 and h1dmt2. This apparently happened automatically during Tuesday maintenance and was not intentional. However, this change did not affect the process or the output until the restart on June 8. Due to the amount of testing already done with this version (the last batch of C01 was made with it; also see https://bugs.ligo.org/redmine/issues/5531) and the difficulty of reverting to version 1.1.4, it was decided to continue using 1.1.7 at LHO. We have had 5 days of online running since this restart, and no problems have occurred. LLO plans to continue using 1.1.4 for one more week before upgrading to 1.1.7 on June 20.
New features/bug fixes included in 1.1.7 as compared to 1.1.4:
* Algorithm to compute SRC detuning parameters; 4 more 16 Hz channels:
  - {IFO}:GDS-CALIB_F_S
  - {IFO}:GDS-CALIB_F_S_NOGATE
  - {IFO}:GDS-CALIB_SRC_Q_INVERSE
  - {IFO}:GDS-CALIB_SRC_Q_INVERSE_NOGATE
* Command line option to remove calibration lines (not yet being used, but would add a 16 kHz channel, {IFO}:GDS-CALIB_STRAIN_CLEAN )
* Latency reduced by 5 seconds (was 10-14 seconds, now 5-9 seconds)
* Bug fix to mark the 3 seconds before and after raw data dropouts as bad in the CALIB_STATE_VECTOR. This 3 seconds is due to the settling of the FIR filters. The risk of running without this bug fix is that there can be glitches around raw data dropouts that need to be manually vetoed (see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=32486 )
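As an aside, the last item can be pictured with a small sketch. This is not the gstlal-calibration code, just an illustration of padding a state-vector mask by 3 seconds on either side of a raw data dropout; the function name and the 16 Hz channel rate are chosen for the example:

import numpy as np

def flag_dropout_settling(data_ok, rate=16, pad_sec=3):
    # data_ok: per-sample boolean mask, False where raw data was missing.
    # Returns a "calibration good" mask with pad_sec seconds marked bad on
    # each side of any dropout, to cover FIR filter settling. Illustrative only.
    data_ok = np.asarray(data_ok, dtype=bool)
    pad = int(pad_sec * rate)
    good = data_ok.copy()
    for i in np.flatnonzero(~data_ok):
        good[max(0, i - pad):min(len(good), i + pad + 1)] = False
    return good

# Example: a single missing sample at 16 Hz also marks the surrounding 3 s bad
ok = np.ones(16 * 10, dtype=bool)
ok[80] = False
print(flag_dropout_settling(ok).sum())   # 63 samples remain good (97 flagged bad)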
The eagle-eyed alog reader would have noticed that the regular robo-alog entries reporting on CP3 and CP4 autofills have been missing lately. We have discontinued this service now that the vacuum team have fixed the LN2 pump liquid level gauges and these pumps are back under PID control.
I found a channel name typo in /opt/rtcds/userapps/release/isc/common/medm/ISC_CUST_BULLSEYE.adl. Previously, the H1:PSL-DIAG_BULLSEYE_WID_INMON display on the screen actually showed H1:PSL-DIAG_BULLSEYE_PIT_INMON. This was fixed and the file was committed to the repository.
I have taken VerbalAlarms back into its normal operating mode that will report GRBs. I was asked to turn off the GRBs and other acknowledgeable alarms for the vent, so I created a commissioning mode for Verbal to do that. Running "./VerbalAlarms.py -c" will run it without the acknowledgeable alarms and without a few other unnecessary tests. I had forgotten to turn this back on before the weekend, but it should be all good now!
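For anyone curious how such a mode switch might look, here is a minimal sketch of a commissioning-mode flag. It is not the actual VerbalAlarms.py code; the test names and the ALARM_TESTS/select_tests helpers are hypothetical:

import argparse

# Hypothetical registry of tests; the real VerbalAlarms.py maintains its own list.
ALARM_TESTS = {
    "GRB": {"acknowledgeable": True},
    "lock_state": {"acknowledgeable": False},
    "observing_intent": {"acknowledgeable": False},
}

def select_tests(commissioning):
    # In commissioning mode, skip acknowledgeable alarms such as GRBs.
    return [name for name, cfg in ALARM_TESTS.items()
            if not (commissioning and cfg["acknowledgeable"])]

parser = argparse.ArgumentParser(description="Verbal alarms (sketch)")
parser.add_argument("-c", "--commissioning", action="store_true",
                    help="run without acknowledgeable alarms (e.g. GRBs)")
args = parser.parse_args()

print("Running tests:", select_tests(args.commissioning))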
TITLE: 06/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 16mph Gusts, 9mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: Higher winds are predicted for this evening; we will see how that goes. Running on a 7.5 hr lock at 59Mpc, but there is some noise from 40-1000 Hz bringing our range down a bit. Vaishali said that it might be PRM misalignment?
TITLE: 06/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 59Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: After relocking after this morning's EQ, it has been a quiet day of Observing.
LOG:
16:45 Peter and Gerardo to Optics Lab
16:55 Richard to roof to look at anemometer
17:14 Peter and Gerardo done in Optics Lab
17:15 Richard off the roof
17:33-17:50 Out of Observing while LLO is down to run A2L
18:14 Marc to MY parts hunting
18:53 Koji to Optics Lab
19:06 Marc back
20:17 Kyle to MX
20:33 Bubba to MX
20:42 Kyle back
21:15 Bubba back
22:17 Betsy and Calum to both mid stations
The primary and redundant DMT processes at LHO were restarted at GPS 1180972740. It appears there was a raw data dropout starting at 2017-06-08 8:36 UTC that lasted almost 7 hours. After this, the calibration pipeline was running but producing no output. A simple restart seems to have gotten data flowing again.
This restart also picked up the new version of the calibration code, gstlal-calibration-1.1.7. This was automatically (unintentionally) installed during Tuesday maintenance.