Locked and Observing for 12 hours. The range drop that we saw previously definitely seems to correlate with the wind, which is odd since the wind only peaked at 35 mph for a few tens of minutes and then slowly decreased again.
There still seems to be more noise than usual in the 30-1000 Hz band, and our range is still only at 60 Mpc, about 5 Mpc worse than the last lock.
When I restarted the calibration pipeline last Thursday, June 8 (see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36727 ), I noticed that gstlal-calibration-1.1.7 had been installed on h1dmt0 and h1dmt2. This apparently happened automatically during Tuesday maintenance and was not intentional. However, this change did not affect the process or the output until the restart on June 8. Due to the amount of testing already done with this version (the last batch of C01 was made with it; also see https://bugs.ligo.org/redmine/issues/5531) and the difficulty of reverting to version 1.1.4, it was decided to continue using 1.1.7 at LHO. We have had 5 days of online running since this restart, and no problems have occurred. LLO plans to continue using 1.1.4 for one more week before upgrading to 1.1.7 on June 20.
New features/bug fixes included in 1.1.7 as compared to 1.1.4:
* Algorithm to compute SRC detuning parameters; 4 more 16 Hz channels (see the retrieval sketch after this list):
- {IFO}:GDS-CALIB_F_S
- {IFO}:GDS-CALIB_F_S_NOGATE
- {IFO}:GDS-CALIB_SRC_Q_INVERSE
- {IFO}:GDS-CALIB_SRC_Q_INVERSE_NOGATE
* Command line option to remove calibration lines (not yet being used, but would add a 16 kHz channel, {IFO}:GDS-CALIB_STRAIN_CLEAN )
* Latency reduced by 5 seconds (was 10 - 14 seconds, now 5 - 9 seconds)
* Bug fix to mark the 3 seconds before and after raw data dropouts as bad in the CALIB_STATE_VECTOR. These 3 seconds are needed for the FIR filters to settle. The risk of running without this bug fix is that there can be glitches around raw data dropouts that need to be manually vetoed (see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=32486 )
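As a quick cross-check that the new SRC channels are actually being produced, something like the following gwpy snippet can be used. This is just an illustrative sketch, not part of the calibration pipeline; the GPS interval is a placeholder and NDS2/frame access via gwpy is assumed.

# Sketch: confirm the new 16 Hz SRC detuning channels are present and sensible.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:GDS-CALIB_F_S",
    "H1:GDS-CALIB_F_S_NOGATE",
    "H1:GDS-CALIB_SRC_Q_INVERSE",
    "H1:GDS-CALIB_SRC_Q_INVERSE_NOGATE",
]

# Placeholder GPS interval during an observing stretch
start, end = 1181100000, 1181100600

data = TimeSeriesDict.get(channels, start, end)
for name, ts in data.items():
    # Expect 16 Hz sample rates for these channels
    print(name, ts.sample_rate, ts.value.mean())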
The eagle-eyed alog reader would have noticed that the regular robo-alog entries reporting on CP3 and CP4 autofills have been missing lately. We have discontinued this service now that the vacuum team have fixed the LN2 pump liquid level gauges and these pumps are back under PID control.
I fixed a channel name typo in /opt/rtcds/userapps/release/isc/common/medm/ISC_CUST_BULLSEYE.adl and committed the file to the repository.
Previously, the H1:PSL-DIAG_BULLSEYE_WID_INMON display on the screen actually showed H1:PSL-DIAG_BULLSEYE_PIT_INMON.
I have put VerbalAlarms back into its normal operating mode, which reports GRBs. I was asked to turn off the GRB and other acknowledgeable alarms for the vent, so I created a commissioning mode for Verbal to do that. Running "./VerbalAlarms.py -c" runs it without the acknowledgeable alarms and without a few other unnecessary tests. I had forgotten to switch this back before the weekend, but it should all be good now!
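For context, here is a purely hypothetical sketch (not the actual VerbalAlarms.py code) of how a -c commissioning flag like this can gate the acknowledgeable alarms while leaving the rest of the checks running; the test names are placeholders.

# Hypothetical sketch only -- not the real VerbalAlarms implementation.
import argparse

def run_alarms(commissioning=False):
    """Run the alarm tests, skipping acknowledgeable ones in commissioning mode."""
    always_on = ["lockloss", "seismic", "timing"]   # placeholder test names
    acknowledgeable = ["GRB", "SNEWS"]              # placeholder test names
    tests = always_on if commissioning else always_on + acknowledgeable
    for test in tests:
        print("running %s check" % test)            # stand-in for the real checks

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="VerbalAlarms-style runner (sketch)")
    parser.add_argument("-c", "--commissioning", action="store_true",
                        help="run without acknowledgeable alarms (GRBs etc.)")
    args = parser.parse_args()
    run_alarms(commissioning=args.commissioning)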
TITLE: 06/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 16mph Gusts, 9mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: Higher winds are predicted for this evening; we will see how that goes. Running on a 7.5 hr lock at 59 Mpc, but there is some noise from 40-1000 Hz bringing our range down a bit. Vaishali suggested that it might be PRM misalignment.
TITLE: 06/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 59Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: After relocking following this morning's EQ, it has been a quiet day of Observing.
LOG:
16:45 Peter and Gerardo to Optics Lab
16:55 Richard to roof to look at anemometer
17:14 Peter and Gerardo done in Optics Lab
17:15 Richard off the roof
17:33-17:50 Out of Observing while LLO is down to run A2L
18:14 Marc to MY parts hunting
18:53 Koji to Optics Lab
19:06 Marc back
20:17 Kyle to MX
20:33 Bubba to MX
20:42 Kyle back
21:15 Bubba back
22:17 Betsy and Calum to both mid stations
Created a command-line HWS WF plotting tool. See: https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=34263
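The tool itself is documented in the linked LLO entry. As a rough illustration only, a minimal command-line wavefront plotter could look something like the sketch below; the .npy input format, argument names, and colormap are my assumptions, not the actual tool's interface.

#!/usr/bin/env python
# Hypothetical skeleton of a command-line wavefront plotter -- not the actual
# HWS tool from the linked entry. Assumes the wavefront map has already been
# saved as a 2D array in a .npy file.
import argparse
import numpy as np
import matplotlib.pyplot as plt

parser = argparse.ArgumentParser(description="Plot a saved HWS wavefront map (sketch)")
parser.add_argument("wavefront", help="path to a .npy file holding a 2D wavefront array")
parser.add_argument("-o", "--output", help="save the figure instead of showing it")
args = parser.parse_args()

wf = np.load(args.wavefront)                        # 2D array of optical path distortion
fig, ax = plt.subplots()
im = ax.imshow(wf, origin="lower", cmap="RdBu_r")   # diverging map for +/- distortion
fig.colorbar(im, ax=ax, label="optical path distortion")
ax.set_title(args.wavefront)

if args.output:
    fig.savefig(args.output)
else:
    plt.show()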
Locked for ~4 hours. Range is a bit low. It has been suggested that this is due to low PRC gain. We'll wait for the next lockloss to address this.
Laser Status:
SysStat is good
Front End Power is 33.99 W (should be around 30 W)
HPO Output Power is 157.2 W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked for 5 days, 21 hr, 11 min (should be days/weeks)
Reflected power = 16.07 W
Transmitted power = 59.44 W
PowerSum = 75.51 W
FSS:
It has been locked for 0 days, 4 hr, 9 min (should be days/weeks)
TPD[V] = 3.485 V (min 0.9 V)
ISS:
The diffracted power is around 2.8% (should be 3-5%)
Last saturation event was 0 days 4 hours and 21 minutes ago (should be days/weeks)
Possible Issues:
Summary of the DQ shift from Thursday 8th June to Sunday 11th June (inclusive); click here for the full report:
* Very good start to the post-commissioning break, with a single lock that extended over the whole weekend and duty cycles of 19% (Thursday), 72% (Friday), 97% (Saturday) and 100% (Sunday) for the period of this DQ shift.
* Range stable, averaging around 64 Mpc.
* Not an eventful DQ shift, despite some strong winds and earthquakes in Mexico. Note, however, that the downtime from Friday's only lockloss lasted 3:30 hours: for a long time LHO had trouble progressing beyond DRMI, dropping out of lock wildly (the AS camera spot swings wildly and the AS90_OVER_POP90 channel has huge values). This required running an INITIAL ALIGNMENT, which had issues for a while until LHO realised that this was the first initial alignment run since changes on Wednesday evening, when the green PZT pointing and green ITM camera setpoints were reset. These changes were undone for both arms, and things worked better after this.
* Interesting PyCBC live glitches on Friday (one Koi fish and another of the type previously identified as related to radiation pressure) and Saturday (blip/chirp? glitch suggested as maybe being related to computer issues).
* On Saturday the highest-SNR Omicron glitches correlate with scattering driven by motion in H1:SUS-PRM_M1_DAMP_L and H1:SUS-SRM_M1_DAMP_L, caused by the earthquake in Mexico (see the fringe-frequency sketch after this list).
* On Saturday, very short consecutive dropouts of Observe mode due to the TCS ITMx laser losing lock.
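For reference, the scattering association is usually checked by predicting the fringe frequency 2|v|/lambda from the suspension longitudinal motion and comparing it with the glitch frequencies. Below is a minimal sketch of that check; it assumes gwpy data access, that the DAMP_L channel is calibrated in microns, and placeholder GPS times, so verify the calibration before trusting absolute numbers.

# Sketch: predicted scattering fringe frequency from PRM M1 longitudinal motion.
import numpy as np
from gwpy.timeseries import TimeSeries

LAMBDA_M = 1.064e-6                        # laser wavelength [m]
start, end = 1181000000, 1181000600        # placeholder GPS interval

motion = TimeSeries.get("H1:SUS-PRM_M1_DAMP_L", start, end)
displacement_m = motion.value * 1e-6       # microns -> metres (assumed calibration)
velocity = np.gradient(displacement_m, motion.dt.value)   # m/s

# Single-bounce fringe frequency; multiply by the bounce number for harmonics
fringe_hz = 2.0 * np.abs(velocity) / LAMBDA_M
print("peak predicted fringe frequency: %.1f Hz" % fringe_hz.max())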
TITLE: 06/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.10 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY: Just made it back to NLN and Observing. No issues with relocking after Patrick's IA. Had to accept a few SDF diffs: 2 for ISCEX and ISCEY that were at the 10^-17 level, so I didn't bother taking a screenshot. The H1SYSECATX1PLC2 diff I accepted is attached for reference.
TITLE: 06/12 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Travis
SHIFT SUMMARY: No issues until the earthquake. Finished initial alignment. Attempting to relock.
LOG:
12:23 UTC Noticed that the earthquake plot on the top display of nuc5 had disappeared. Restarted nuc5.
12:33 UTC Verbal alarms notice of incoming earthquake
13:15 UTC Lock loss
13:59 UTC Changed observatory mode from lock acquisition to earthquake. Should have done so earlier.
14:15 UTC Starting initial alignment
15:03 UTC Initial alignment done
I received a verbal alarm notification of an incoming earthquake. It was not clear at first which earthquake was being seen; I now believe it must be the 6.3 magnitude one in Greece. It seems that Seismon saw it first, then the USGS webpage, and then Terramon. Neither the USGS webpage nor Terramon saw it before it arrived. Is the verbal alarms notification looking at Seismon?
12:33 UTC Verbal alarms notice of an incoming earthquake. I do not see anything with an arrival time closer than 05:14 PST (12:14 UTC) on Terramon. I'm not certain how to read Seismon; it seems to predict a 6.4 earthquake, but I don't see one of this magnitude on USGS.
12:43 UTC Spike on seismic BLRMS.
13:15 UTC Lock loss
No issues to report. Lock is almost 64 hours old.
TITLE: 06/12 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 17mph Gusts, 14mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
No known issues.
TITLE: 06/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
LOG:
1:19:45 UTC Got this error. I have no idea what this is. It doesn't seem to have broken anything, to my present knowledge.
Full data plots of H1:SYS-TIMING_Y_GPS_A_ERROR_FLAG and H1:SYS-TIMING_Y_GPS_A_ERROR_CODE for the last 8 hours are attached. The error code momentarily went to 4. From the medm I might guess this is decoded as 'Waiting for GPS lock'. This seems familiar: alog 35111.
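For anyone who wants to reproduce the trend, a quick-look sketch along these lines should work; the GPS times are placeholders and gwpy NDS2/frame access is assumed.

# Sketch: trend the two timing channels over the 8-hour window and flag any
# samples with a nonzero error code (4 was decoded on the MEDM as
# 'Waiting for GPS lock').
from gwpy.timeseries import TimeSeriesDict

channels = ["H1:SYS-TIMING_Y_GPS_A_ERROR_FLAG", "H1:SYS-TIMING_Y_GPS_A_ERROR_CODE"]
end = 1181180000                 # placeholder GPS time of the check
start = end - 8 * 3600           # 8 hours earlier

data = TimeSeriesDict.get(channels, start, end)
code = data["H1:SYS-TIMING_Y_GPS_A_ERROR_CODE"]
bad = code.value != 0
if bad.any():
    print("%d samples with nonzero error code, max code = %d"
          % (bad.sum(), code.value.max()))

plot = data.plot()               # stacked time-series plot of both channels
plot.savefig("timing_error_trend.png")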
The primary and redundant DMT processes at LHO were restarted at 1180972740. It appears there was a raw data dropout starting at 2017-06-08 8:36 UTC that lasted almost 7 hours. After this, the calibration pipeline was running but producing no output. A simple restart seems to have gotten data flowing again.
This restart also picked up the new version of the calibration code, gstlal-calibration-1.1.7. This was automatically (unintentionally) installed during Tuesday maintenance.
Some trends possibly showing that low wind is affecting our range.