Changed the HV setting on IP7 controller (last of the old multivac Varian controllers in LVEA) from 5.6 kV to 7 kV.
PT-140 shows a pressure rise from turning the HV off for a couple of minutes.
Summary of the DQ shift from Thursday 26th to Sunday 29th (inclusive), click here for full report:
For reference:
I have learned that the H1:PEM-CS_ACC_HAM6 accelerometers, although not in vacuum but just beside the HAM6 chamber, are sensitive to the shutter closing to protect the output PDs after lockloss. Therefore the spikes in these channels will always be present at lockloss (see attached picture, where I plot this channel together with the power build-up in the PR cavity, which is a good indicator of lockloss).
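For anyone who wants to reproduce this kind of check, below is a minimal sketch, assuming gwpy/NDS2 access; the PR-cavity build-up channel name and the GPS time are placeholders, not the values behind the attached plot:

# Minimal sketch: fetch the HAM6 accelerometer and a PR-cavity build-up channel
# around a lockloss and plot them together. PRC_CHANNEL and T_LOCKLOSS are
# placeholders, not the values used for the attached picture.
from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

ACC_CHANNEL = 'H1:PEM-CS_ACC_HAM6_OMC_X_DQ'     # in-air accelerometer beside HAM6
PRC_CHANNEL = 'H1:LSC-PRC_BUILDUP_PLACEHOLDER'  # hypothetical PR build-up channel
T_LOCKLOSS = 1169560000                         # example GPS time only

data = TimeSeriesDict.get([ACC_CHANNEL, PRC_CHANNEL], T_LOCKLOSS - 30, T_LOCKLOSS + 30)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(data[ACC_CHANNEL].times.value, data[ACC_CHANNEL].value)
ax1.set_ylabel('HAM6 ACC [counts]')
ax2.plot(data[PRC_CHANNEL].times.value, data[PRC_CHANNEL].value)
ax2.set_ylabel('PR build-up [arb.]')
ax2.set_xlabel('GPS time [s]')
fig.savefig('ham6_acc_vs_prc.png')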
I also looked at the possibility that the overflow of H1:ASC-AS_C_SEG4 and H1:ASC-AS_C_SEG2 may actually have caused the lockloss (more to come from this analysis, which is worthy of an aLog in itself), but note that these signals are used, together with SEG1 and SEG3, to generate the alignment signals for the SR2 and SRM mirrors.
Finally, in order to look at the possible effect that the A2L script had on the excess low-frequency noise, I will run BruCo before and after the A2L script was run. The script is run occasionally to recenter the beam on the mirrors; Sheila has mentioned that it should not affect frequencies above 25 Hz, though.
The BruCo reports before and after the A2L script was run: in each case I look at 600 seconds of data, starting at GPS time 1169564417 for the 'before' case and 1169569217 for the 'after' case, and at the frequency band between 40 and 100 Hz. As an example, this is the command used for the 'before' case:
./bruco.py --ifo H1 --channel=OMC-DCPD_SUM_OUT_DQ --gpsb=1169564417 --length=600 --outfs=4096 --naver=100 --dir=~/public_html/detchar/O2/bruco/Before_at_1169564417_600_OMC_DCPD --top=100 --webtop=20 --xlim=40:100 --ylim=1e-10:1 --excluded=share/lho_excluded_channels.txt
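For context, what BruCo does for each auxiliary channel is essentially a magnitude-squared coherence calculation against the target channel. A stand-alone sketch of that calculation with the parameters from the command above (random placeholder data, not BruCo's actual code):

# Coherence between the target channel and one auxiliary channel, using the
# same sample rate, length and number of averages as the BruCo command above.
import numpy as np
from scipy.signal import coherence

fs = 4096                              # --outfs
duration = 600                         # --length [s]
navg = 100                             # --naver
nperseg = int(fs * duration / navg)    # samples per average

# darm/aux stand in for OMC-DCPD_SUM_OUT_DQ and an auxiliary channel;
# random data here only so the sketch runs on its own.
darm = np.random.randn(fs * duration)
aux = np.random.randn(fs * duration)

freq, coh = coherence(darm, aux, fs=fs, nperseg=nperseg)
band = (freq >= 40) & (freq <= 100)    # --xlim band of interest
print('max coherence in 40-100 Hz:', coh[band].max())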
Another look at accelerometers/seismometers.
In addition to the external (i.e. in-air) accelerometers, we also have in-vacuum seismometers (GS13s mounted within the Seismic Isolation [ISI] tables) which we can also look at. Attached is a look at one of the HAM6 ISI GS13s (H1:ISI-HAM6_GS13INF_H1_IN1_DQ) and a HAM6 in-air/external accelerometer (H1:PEM-CS_ACC_HAM6_OMC_X_DQ, probably mounted on the HAM door). So, when the lockloss occurs, the HAM6 Fast Shutter (referred to as "the Toaster") pops up and shakes the HAM6 table (sometimes tripping the HAM6 ISI). This motion is seen by the GS13 inside and by the accelerometer outside (which was a surprise to me).
Everything looks nominal.
No issues relocking.
Out of Observing from 17:08-17:09 UTC to turn off FM9 filter for 4.7kHz violin mode.
Guardian has repeatedly been turning FM9 back ON for the EY 4.7 kHz violin damping, which has been confusing operators.
I changed ISC_GEN_STATES.py line 597 so FM9 doesn't keep coming back.
Old: ey_mode[10].only_on('INPUT', 'FM4', 'FM9', 'FM10', 'OUTPUT', 'DECIMATION')
New: ey_mode[10].only_on('INPUT', 'FM4', 'FM10', 'OUTPUT', 'DECIMATION')
Next time H1 loses lock, ISC_LOCK guardian should be reloaded.
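For reference, a hedged sketch of the same operation done by hand through ezca; the filter-bank name is inferred from the related SDF entry for ETMY L2 DAMP MODE 10 elsewhere in this log, not copied from ISC_GEN_STATES.py, and this assumes the ezca object that guardian code already has in scope:

# 'ezca' is the Ezca instance available inside guardian code (assumption).
# Grab the ETMY L2 MODE10 damping bank and engage only the wanted buttons.
# FM9 is deliberately left out of the list, so it is no longer forced back on
# and an operator can keep it off for the 4.7 kHz mode.
mode10 = ezca.get_LIGOFilter('SUS-ETMY_L2_DAMP_MODE10')
mode10.only_on('INPUT', 'FM4', 'FM10', 'OUTPUT', 'DECIMATION')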
GRB alert 16:45. Unfortunately, we were relocking at the time.
model restarts logged for Sun 29/Jan/2017
2017_01_29 17:41 h1nds1
unexpected freeze of h1nds1, required power cycle.
model restarts logged for Sat 28/Jan/2017 - Wed 25/Jan/2017: No restarts reported
Nothing apparent on FOMs. Keita was looking at SDF files at the time, but nothing that should have been detrimental to locking.
I opened ETMY SDF, opened SDF restore screen, pressed "select request file" button to see where the request files are, and pressed cancel.
IFO lost lock at the same time I pressed cancel.
TITLE: 01/30 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY: Locked in Observing for 14.5 hours. No issues were handed off.
TITLE: 01/30 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY: Locked all shift.
LOG:
Stand-down time is complete in 5 minutes.
TITLE: 01/30 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.31 μm/s
TITLE: 01/30 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: One lockloss that I couldn't find the cause of. I did not need to run initial alignment. NDS1 crashed, but I called Dave and rescued it.
LOG:
More dust alarms with the wind in the PSL.
Attached is a plot of the dust counts for the past 2 days, from the dust monitor in the PSL Laser Room (101), in the PSL Anti-Room (102), and from the monitor between HAM1 and the PSL enclosure. The plots also contain the CS wind and temperature data for the same period. There does not appear to be a correlation of dust with temperature. There does appear to be a correlation of dust with wind, as TJ noted. Investigation plans are (1) install a dust monitor inside the closet where the make-up air intake for the PSL resides for a few days, to see what the particle counts of the air coming into the PSL are, and (2) check the filters on top of the PSL enclosure.
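A quick quantitative follow-up could be to correlate the dust counts against the CS wind speed over the same two days. A minimal sketch, assuming gwpy/NDS trend access; both channel names are placeholders since I don't have the exact dust-monitor channel names at hand:

# Correlate PSL dust counts with corner-station wind speed over two days.
# DUST_CHANNEL and WIND_CHANNEL below are hypothetical placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

DUST_CHANNEL = 'H1:PEM-CS_DUST_PSL101_PLACEHOLDER'
WIND_CHANNEL = 'H1:PEM-CS_WIND_PLACEHOLDER'
START, END = 'Jan 28 2017 16:00', 'Jan 30 2017 16:00'

dust = TimeSeries.get(DUST_CHANNEL, START, END).resample(1.0 / 60)  # 1-minute samples
wind = TimeSeries.get(WIND_CHANNEL, START, END).resample(1.0 / 60)

n = min(len(dust), len(wind))
r = np.corrcoef(dust.value[:n], wind.value[:n])[0, 1]  # Pearson correlation
print('dust/wind correlation coefficient: %.2f' % r)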
I got an alarm for the MX VEA temp, and it looks like it has been dropping for the past few days. I gave Bubba a call and he said it would be fine till tomorrow morning. Currently we are at 60F in there.
Another one of those locklosses with no signs of a struggle. Wind has picked up to gusts of 20 mph, but nowhere near anything that should kick us out of lock.
I will look at lockloss plots once I get the NDS1 going again.
Back to Observing at 01:45 UTC.
I accepted the FM9 filter difference on ETMY L2 DAMP MODE 10 for the 4.7 kHz violin mode.
Around the time of the lockloss, NDS1 went down. I will focus on relocking first (since NDS0 is still good) and then call to get NDS1 back up.
h1nds1 looks like it just froze up; its last log entry was a keep-alive message at 17:07 PST. TJ says H1 had lost lock before this time, so it looks like a coincidence.
TJ power cycled h1nds1, and after a file system check it came back with no problems.
Raising the HV setting improved the vacuum level in the diagonal volume.