Attached are 45-day trends of the output pressure from the pumps and the control drive to the motors. After seeing the dips that are very evident in the End Station control drives, I replaced the CS pump outputs for 1, 2 & 3 (which are as flat as PS4) with the outside air temperature. It is very clear that the change in temperature drives the pumps' need for speed.
The EX pressure is still much noisier than the others, but it cleaned itself up for some unknown reason a week ago...
And like clockwork, the EndY daily and weekly glitches are still present. Anybody?
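For anyone who wants to remake this kind of overlay, here is a minimal gwpy sketch; the channel names and time span are placeholders I made up, not the actual pump and weather channels:

from gwpy.timeseries import TimeSeries

# Placeholder channel names -- swap in the real pump and weather channels
span = ('Oct 25 2015', 'Dec 9 2015')            # ~45 days
drive = TimeSeries.fetch('H1:FMC-CS_WS_PUMP1_DRIVE.mean,m-trend', *span)
temp = TimeSeries.fetch('H1:PEM-CS_WEATHER_TEMP.mean,m-trend', *span)

plot = drive.plot(label='pump control drive')
ax = plot.gca()
ax.plot(temp, label='outside air temp')
ax.legend()
plot.show()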
Jenne, Sheila, EvanH, Hugh
We had a fortuitous rapid drop in the wind speeds around 14:00 PST and, while not attempting to lock, made a quick study of the GS13 signals while changing the blends, confirming that the 45 mHz blends were much quieter. Of course there is still the worry about the ETMX X 45 mHz blend ringing up MCF, but we decided that if it does, maybe we can study it or try something.
The blend switch, and why we thought it was a good idea given the reduced winds and the very elevated microseism, is clearly seen on the attached plot.
It still took some time to lock; maybe there was still a little too much wind, or something else. Still, we decided not to switch the ETMX X blend back, fearing that it would likely break the lock. So far, over more than an hour, we've only had minor flare-ups of MCF; it really hasn't grown, and it has faded after just a couple of cycles.
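As an illustration of the quick study described above, a before/after comparison of GS13 motion around the microseism could be done along these lines; the channel name and times below are stand-ins, not the ones we actually used:

from gwpy.timeseries import TimeSeries

chan = 'H1:ISI-ETMY_ST1_BLND_Y_GS13_CUR_IN1_DQ'   # illustrative GS13 channel
before = TimeSeries.fetch(chan, 'Dec 9 2015 21:00', 'Dec 9 2015 21:15')
after = TimeSeries.fetch(chan, 'Dec 9 2015 22:00', 'Dec 9 2015 22:15')

# Band-limited RMS in the 30-100 mHz region where the microseism and the
# 45 mHz blends live
for label, ts in [('before', before), ('after', after)]:
    rms = ts.bandpass(0.03, 0.1).rms(60)          # 60 s RMS trend
    print(label, rms.value.mean())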
TITLE: Dec 9 DAY Shift 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE Of H1: Observing
SUPPORT: Sheila, Hugh, TJ, Jenne, Robert and Jordan
LOCK DURATION: 36min
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
A combination of high microseism and high winds for most of the day made locking all but impossible.
Twice today the call was made to allow maintenance work to proceed.
Hugh and Sheila were able to determine the best blends to use at the End Stations for locking today.
Sheila, Jordan and Robert tweaked the COMM VCO to get rid of some unwanted lines in DARM.
DRMI lock was finally achieved at 23:02 UTC.
Intention Bit set to Undisturbed at 23:50 UTC.
There is a timing error showing on the ETMX CDS overview that was forgotten about. It can be reset in the event of a lockloss.
ACTIVITY LOG:
16:15 Fil out to LVEA to do some more cable pulling
16:25 Jeff Bartlett and Mitch out to LVEA to do some more work on dust monitor plumbing
16:30 Peter into LDR
16:40 Peter out of LDR
17:01 Jeff and Mitch out of LVEA
17:03 Jeff and Mitch to EX
17:21 Richard called to let me and Mike know that he has started his work in the CER.
17:40 Gerardo out to Y-2-8 to refuel the generator
17:54 Richard, et al, are finished in the CER and are heading down to EY.
18:19 Bubba heading to MX to work on a fan.
18:39 Gerardo back from Y-2-8
21:22 Kyle to Y-2-8 to take some temp readings
21:55 Kyle back to corner station
23:21 H1 is locked at Nominal Low Noise!
23:50 Intention Bit set to UNDISTURBED
TJ & Hugh
With the IFO down from elevated winds, we attempted to have the Guardian switch the gains more like how the old SEI COMMANDS script switches the gains--WP5647.
Original SVN update 24 Nov, alog 23695--change tested on HAM5 and worked.
On 7 Dec, ISI trips during an earthquake revealed a problem with this switching, but only for HAMs 2 & 3--aLOG 24028.
In alog 24054 I report how the GS13 gain switch, when done by the Guardian well after the platform was isolated and quiet, would still trip the HAM3 ISI. This led us to believe the problem was not the timing of the Guardian's GS13 switching (at the end of the isolation process, after/while finishing the drive to the reference CPS location) but rather how the Guardian was doing the switching: the Guardian switches both FM4 & FM5 at the same time, whereas the COMMAND script switches all the FM4s and then all the FM5s with a 1 s sleep in between--see the SEI Log 894 comment for a code comparison.
Today, TJ modified the Guardian code (see the comment below) to try to simulate the way the COMMAND script switches the gain. Bottom line: it still tripped.
We even tried reversing the order of FM4 & FM5--that is, first FM5 then FM4, to turn off the analog whitening before increasing the analog gain--and it still tripped... So we still scratch our heads; it's winter, my skin is dry and it itches anyway.
Below is the code that we tried. I just added in a 1-second sleep between the switching of the FM4 and FM5 banks in all DOFs. When executed, it looked to work the same as the old Perl script, but there must be some deeper magic that I need to look into.
def switch_gs13_gain(command, doflist):
    """Switch GS13 analog gain between HIGH and LOW."""
    # time is the standard library module; LIGOFilterManager, ezca, and log
    # are provided by the Guardian environment.
    lf_gs13 = LIGOFilterManager(['GS13INF_' + dof for dof in doflist], ezca)
    if command == 'HIGH':
        log('Switching GS13s to high gain')
        #lf_gs13.all_do('switch', 'FM4', 'OFF', 'FM5', 'OFF')
        lf_gs13.all_do('switch', 'FM5', 'OFF')
        time.sleep(1)
        lf_gs13.all_do('switch', 'FM4', 'OFF')
    elif command == 'LOW':
        log('Switching GS13s to low gain')
        #lf_gs13.all_do('switch', 'FM4', 'ON', 'FM5', 'ON')
        lf_gs13.all_do('switch', 'FM5', 'ON')
        time.sleep(1)
        lf_gs13.all_do('switch', 'FM4', 'ON')
    # To account for the 2 s zero-crossing timeout
    time.sleep(3)
    return
(I switched this back to what it was before and committed it back to the SVN.)
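For reference, a call site would look like the line below; the DOF list is my guess at the usual six GS13 sensor banks (GS13INF_H1 through _V3), not something copied out of the node:

# Hypothetical invocation: switch all six GS13 sensor banks to low gain
switch_gs13_gain('LOW', ['H1', 'H2', 'H3', 'V1', 'V2', 'V3'])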
Note to self: I had to restart the node in order for it to pick up any of the new code. I've seen this before, but I'm not sure what warrants a restart vs. just a reload. I'm writing this here so I can look back and find the trend.
Mike Landry has given the final "OK" to proceed with maintenance tasks this morning, with the IFO down due to high winds. Livingston has been contacted regarding our plan; they are currently back "UP".
This period ended at ≈19:00 UTC.
O1 days 81,82
model restarts logged for Tue 08/Dec/2015
2015_12_08 11:51 h1broadcast0
Maintenance day, DMT channel added to broadcaster
model restarts logged for Mon 07/Dec/2015
No restarts reported
TITLE: Dec 9 DAY Shift 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE Of H1: Environment
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: Doesn’t look like a good day for locking unless the wind dies back significantly.
TITLE: 12/09 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE Of H1: Unlocked
SHIFT SUMMARY: Lock loss likely from earthquake. High winds and microseism have hindered relocking ALS. Tried different combinations of ISI blends to no avail. The Y arm has been the least stable. There is a timing error on H1SUSETMX.
INCOMING OPERATOR: Ed
ACTIVITY LOG:
10:41 UTC Lock loss. Arms not staying locked on green. Put ISC_LOCK guardian to DOWN to wait out earthquake. Adjusted REF_SIGNAL to increase ISS diffracted power.
13:38 UTC Started attempting to relock
15:33 UTC Bubba driving down arms to assess tumbleweed buildup
In checking on the reference cavity transmission this morning, I noted that it took a dive over the last day in particular, and over the last three days in general. Also attached is a plot of the IMC locked status. Looking over the trend data, it appears to me that the reference cavity transmission suffers when there are long periods of time during which the IMC is trying to acquire lock. My hypothesis is that the IMC trying to acquire lock means that the VCO frequency of the double-passed AOM swings from one extreme to the other. This might result in heating of the AOM, which in turn changes the alignment of the AOM with respect to the input beam. We don't observe an alignment change on the nearby EOM because the EOM is too close for a small alignment change to be noticed. Of course we observe the alignment change on the iris located in front of the reference cavity, since that is some distance (~2 m) from the AOM. Also attached is a plot of the reference cavity transmission and VCO frequency.
One might ask: why not just fix it by re-aligning the AOM, as before? The convenient answer is that the AOM alignment is quite sensitive, and re-aligning doesn't necessarily fix the problem if overheating of the AOM is the culprit.
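One crude way to test this hypothesis against the trends is sketched below: bin the data by day, estimate the fraction of time the IMC was off resonance, and compare that to the mean reference cavity transmission. The channel names and lock threshold are placeholder assumptions:

import numpy as np
from gwpy.timeseries import TimeSeries

span = ('Dec 6 2015', 'Dec 9 2015')
imc = TimeSeries.fetch('H1:IMC-TRANS_OUT16.mean,m-trend', *span)          # placeholder
refcav = TimeSeries.fetch('H1:PSL-FSS_TPD_DC_OUT16.mean,m-trend', *span)  # placeholder

step = int(86400 / imc.dt.value)        # minute-trend samples per day
for i in range(0, len(imc) - step + 1, step):
    unlocked = np.mean(imc.value[i:i + step] < 0.5)   # crude "IMC unlocked" threshold
    trans = np.mean(refcav.value[i:i + step])
    print('day %d: unlocked %.0f%% of the time, mean refcav transmission %.3f'
          % (i // step + 1, 100 * unlocked, trans))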
Mike L. called and notified me that there was a SNEWS test alert entered into GraceDB (E206974, see attached screenshot). I did not get a verbal alarm.
There was no alarm for this in the control room because the script that queries GraceDB (ext_alert.py) will only look back 36000 seconds, and the delay for this alarm was 73681 seconds (≈20.5 hours, versus the 10-hour lookback).
We have been using the default 'lookback' time, but it can be set to any value we choose from the command line at the start of ext_alert.py. This delay was a special case, but it might be worth looking back a bit more than 10 hours...
Lookback time code is below if anyone is curious.
actions['run'].add_argument('-l', '--lookback-time', type=float, default=36000,
                            dest='lookback',
                            help='how far back in time to query, '
                                 'default: %(default)s')

...(later, in the loop)

# query gracedb
now = gps_time_now(ifo=args.ifo)
start = now - args.lookback
client, events = list(
    query_gracedb(start, now, ifo=args.ifo, connection=client,
                  far=args.far, test=args.test))
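So, assuming the 'run' subcommand shown above, extending the window to a full day (86400 s) would just be:

python ext_alert.py run --lookback-time 86400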
Put the ISC_LOCK Guardian to DOWN and am waiting for the earthquake to subside. Wind speeds are peaking around 40 mph.
Range was degrading with the increasing winds. There was also an earthquake, and LLO went down at the same time: M6.9, 106 km SE of Amahai, Indonesia, 2015-12-09 10:21:50 UTC, 33.9 km deep.
Evan Hall suggested changing the ETMY ISI blend filters after a couple of lock losses between DRMI and ENGAGE_ASC.
The change from all Quiet_90 blends to Quiet_90 in the X direction and 45 mHz in the Y direction was easily seen on ST1_ISO_RX and ST2_ISO_Y, and on the ASC control signal in pitch being sent to the optic, ASC-ETMY_PIT_OUTPUT.
The attached plot shows those channels before and after the blend filter change, along with the ETMX ASC control signal for comparison.
The current blend filter configuration MEDM screen is also attached.
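For anyone wanting to reproduce the before/after comparison, a minimal gwpy sketch follows; the channel and times are illustrative:

from gwpy.timeseries import TimeSeries

chan = 'H1:ISI-ETMY_ST2_ISO_Y_IN1_DQ'     # one of the channels in the attached plot
before = TimeSeries.fetch(chan, 'Dec 9 2015 20:00', 'Dec 9 2015 20:30')
after = TimeSeries.fetch(chan, 'Dec 9 2015 22:30', 'Dec 9 2015 23:00')

plot = before.asd(120, 60).plot(label='all Quiet_90 blends')
ax = plot.gca()
ax.plot(after.asd(120, 60), label='45 mHz blend in Y')
ax.set_xlim(0.01, 1)
ax.legend()
plot.show()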
Why are the Y blends set differently from the X blends?
Over the past month or so, having the 45 mHz blends on EX has caused the ISI to ring up in full lock (for example: alog 23674 and comments).