TJ & Hugh
With the IFO down from elevated winds, we attempted to have the Guardian switch the gains more like how the old SEI COMMAND script switches them (WP5647).
Original svn update 24 Nov (aLOG 23695): the change was tested on HAM5 and worked.
On 7 Dec, ISI trips during an earthquake revealed a problem with this switching, but only for HAMs 2 & 3 (aLOG 24028).
In aLOG 24054 I reported how the GS13 gain switch, when done by the Guardian well after the platform was isolated and quiet, would still trip the HAM3 ISI. This led us to believe the problem was not the timing of the Guardian's GS13 switching (at the end of the isolation process, after/while finishing the drive to the reference CPS location), but rather how the Guardian does the switching: the Guardian switches both FM4 & FM5 at the same time, whereas the COMMAND script switches all the FM4s and then all the FM5s, with a 1 sec sleep in between. See the SEI Log 894 comment for a code comparison.
Today, TJ modified the Guardian code (see comment below) to try simulating the way the COMMAND script switches the gains. Bottom line: it still tripped.
We even tried reversing the order of FM4 & FM5, that is, first FM5 then FM4, to turn off the analog whitening before we increased the analog gain; it still tripped... So we are still scratching our heads. It's winter, my skin is dry and it itches anyway.
Mike Landry has given the final "OK" to proceed with maintenance tasks this morning, given that the IFO is down due to high winds. Livingston has been contacted regarding our plan; they are currently back "UP".
This period ended at ≈19:00 UTC
O1 days 81, 82
model restarts logged for Tue 08/Dec/2015
2015_12_08 11:51 h1broadcast0
Maintenance day, DMT channel added to broadcaster
model restarts logged for Mon 07/Dec/2015
No restarts reported
TITLE: Dec 9 DAY Shift 08:00-16:00UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Environment
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: Doesn’t look like a good day for locking unless the wind dies back significantly.
TITLE: 12/09 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Unlocked
SHIFT SUMMARY: Lock loss likely from earthquake. High winds and microseism have hindered relocking ALS. Tried different combinations of ISI blends to no avail. The Y arm has been the least stable. There is a timing error on H1SUSETMX.
INCOMING OPERATOR: Ed
ACTIVITY LOG:
10:41 UTC Lock loss. Arms not staying locked on green. Put ISC_LOCK guardian to DOWN to wait out earthquake. Adjusted REF_SIGNAL to increase ISS diffracted power.
13:38 UTC Started attempting to relock
15:33 UTC Bubba driving down arms to assess tumbleweed buildup
In checking on the reference cavity transmission this morning, I noted that it took a dive over the last day in particular, and over the last three days in general. Also attached is a plot of the IMC locked status. Looking over the trend data, it appears to me that the reference cavity transmission suffers when there are long periods of time during which the IMC is trying to acquire lock.

My hypothesis is that while the IMC is trying to acquire lock, the VCO frequency of the double-passed AOM swings from one extreme to the other. This might result in heating of the AOM, which in turn changes the alignment of the AOM with respect to the input beam. We don't observe an alignment change on the nearby EOM because the EOM is too close for a small alignment change to be noticed. Of course we observe the alignment change on the iris located in front of the reference cavity, since that is some distance away from the AOM (~2 m). Also attached is a plot of the reference cavity transmission and VCO frequency.

One might ask: why not just fix it by re-aligning the AOM, as was done previously? The convenient answer is that the AOM alignment is quite sensitive, and re-aligning doesn't necessarily fix the problem if overheating of the AOM is the culprit.
Mike L. called and notified me that there was a SNEWS test alert entered into GraceDB (E206974, see attached screenshot). I did not get a verbal alarm.
There was no alarm for this in the control room because the script that queries GraceDB (ext_alert.py) will only look back 36000 seconds (10 hours), and the delay for this alarm was 73681 seconds (about 20.5 hours).
We have been using the default 'lookback' time, but it can be set to any value we choose from the command line at the start of ext_alert.py. This delay was a special case, but it might be worth looking back a bit more than 10 hours...
Lookback time code is below if anyone is curious.
actions['run'].add_argument('-l', '--lookback-time', type=float, default=36000,
                            dest='lookback',
                            help='how far back in time to query, '
                                 'default: %(default)s')

# ... later, in the main loop:
# query gracedb for events between (now - lookback) and now
now = gps_time_now(ifo=args.ifo)
start = now - args.lookback
client, events = list(
    query_gracedb(start, now, ifo=args.ifo, connection=client,
                  far=args.far, test=args.test))
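So to look back a full day, for example, something like this at startup should do it (a sketch of the invocation based on the argparse snippet above; any other required arguments are omitted):

python ext_alert.py run --lookback-time 86400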
Put ISC_LOCK guardian to DOWN and am waiting for the earthquake to subside. Wind speeds are peaking around 40 mph.
Range was degrading with increasing winds. There was also an earthquake, and LLO went down at the same time: M6.9, 106 km SE of Amahai, Indonesia, 2015-12-09 10:21:50 UTC, 33.9 km deep.
TITLE: 12/09 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~79 Mpc
OUTGOING OPERATOR: Cheryl
QUICK SUMMARY: From the cameras: The lights are off in the LVEA. The lights are off in the PSL enclosure. The lights are off at end X. The lights are off at end Y. I can not tell if the lights are on or off at mid X and mid Y. The 0.03 - 0.1 Hz (earthquake) seismic band is between ~0.01 and 0.1 um/s. The 0.1 - 0.3 Hz (microseism) seismic band is trending slightly up and is now between ~0.2 and 0.9 um/s. The winds are between ~0 and 15 mph. See screenshot for ISI blends. From pinging: CDS WAP is off at the LVEA. CDS WAP is off at end X. CDS WAP is off at end Y. CDS WAP is on at mid X. CDS WAP is on at mid Y.
Ops Eve Summary: 00:01-08:00 UTC (16:00-23:59 PT)
State of H1: locked in Observe for 2+ hours
Help: Jenne, Sheila, Evan H
Shift Summary:
Timeline:
Kyle was here and needed to drive down the Y arm:
Timeline including GRB:
Evan Hall suggested changing ETMY ISI blend filters after a couple lock losses between DRMI and ENGAGE_ASC.
The change from all Quiet_90s to Quiet_90 in the X direction and 45mHz in the Y direction was easily seen on ST1_ISO_RX and ST2_ISO_Y, and on the ASC control signal in pitch being sent to the optic, ASC-ETMY_PIT_OUTPUT.
Plot attached shows those channels before and after the blend filter change, and ETMX ASC control signal, to compare.
The current blend filter config MEDM is also attached.
Why are the Y blends set differently from the X blends?
Over the past month or so, having the 45 mHz blends on EX causes the ISI to ring up in full lock (for example: alog 23674 and comments).
I have edited the ISC_LOCK guardian so that it now turns violin mode damping off before the interferometer reaches nominal low noise. This will hopefully allow us to collect ringdown data on the violin mode fundamentals.
Violin mode damping is still turned on as usual in BOUNCE_VIOLIN_MODE_DAMPING, but it now is turned off in the COIL_DRIVERS state. Thus the modes will still receive a few minutes of damping before DARM is switched to dc readout.
If this behavior needs to be reverted, there is a flag in the main() function of COIL_DRIVERS called self.turnOffViolinDamping which can be set to False.
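For reference, a minimal sketch of this flag arrangement in a Guardian state (illustrative only: the damping-gain channel names below are made up, not the actual ISC_LOCK code):

from guardian import GuardState

class COIL_DRIVERS(GuardState):
    def main(self):
        # flag controlling whether violin damping is shut off here;
        # set this to False to revert to the old behavior
        self.turnOffViolinDamping = True
        # ... existing COIL_DRIVERS setup ...
        if self.turnOffViolinDamping:
            # zero the damping gains so the fundamentals ring freely
            # (hypothetical channel names; ezca is provided by the
            # Guardian runtime)
            for mode in ['MODE1', 'MODE2']:
                ezca['SUS-ETMY_L2_DAMP_%s_GAIN' % mode] = 0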
Violin mode damping in full lock will be re-enabled once sufficient data have been collected.
Are there any observable results from this? For example, does this mean we will now see these on the running power spectrum on nuc3? And is this the reason we now have the red boxes on the violin mode medm on nuc0? I hadn't noticed the latter before, so I was wondering why these were flashing red.
Since turning the damping off, the violin mode fundamentals seem to appear on the DARM FOM at the level of 10^-16 m/Hz^(1/2) or so. Before turning the damping off, they would eventually damp down below 10^-18 m/Hz^(1/2) after a few hours.
I'm guessing this is why the violin mode monitors are red, but I don't know what Nutsinee set the monitor thresholds to.
Also, since writing the above alog I changed the Guardian code for turning off the damping. It is no longer executed inside an if statement; it's just executed directly in the main() function of COIL_DRIVERS.
The mystery ~650Hz noise reported here and here also shows up in the PEM rf antennas (both 9MHz and 45MHz, located in the CER and LVEA). Further investigation revealed that this peak shows up in the PEM antenna during lock acquisition, at the start of the DC Readout Transition step (if it appears at all; it's not present in every lock--more on this later). At the start of this step the ALS COMM VCO is parked at some value.
To determine whether this VCO could be responsible for the ~650Hz noise, the frequency readback of the VCO was compared to the frequency of the mystery peak in the PEM antenna. Attached (figure 1) is a plot of the H1:ALS-C_COMM_VCO_FREQUENCY timeseries on top and a spectrogram of the PEM 45MHz LVEA antenna on the bottom. The frequency of the peak seems to track with the VCO frequency, if you take into account the fact that the VCO frequency readback is digitized into steps of 8Hz (does anyone know why / can we fix this?).
Also, there appear to be two different values where the VCO can be parked. Figure 2 shows plots similar to figure 1, over a 28hr stretch containing multiple locks during which the peak was sometimes present. In locks where the peak was present, the VCO was set to ~78.7873MHz; in locks where the peak was absent, the VCO was set to ~78.7944MHz. These values correspond to two different values of H1:ALS-C_COMM_VCO_TUNEOFS: ~-1.39 and ~1.25, respectively.
To test this, we tried moving the COMM VCO TUNE OFS slider with the IFO locked (before continuing to NLN / Observing). While initially it looked like the peak in the PEM rf channel moved as the slider was moved, the lock broke before we could conclusively tell. The lockloss occurred right as Sheila was moving the slider. We don't know why this should cause a lockloss, so this is a subject for further investigation (it was windy and ground motion-y at the time, so it could have been a coincidence).
Also included (figure 3) is a plot of the VCO frequency (again, 8Hz digitization) and the CER rack temperature. More data is needed, but it looks like the frequency trends down after the temperature rises.
Finally, there is still the question as to why this is showing up in the 9MHz and 45MHz channels (and, ultimately, DARM). As a first check, I compared 9.100230 MHz and harmonics to 78.7873 MHz and harmonics to see if a beat would show up within 600 Hz. Out to 10 harmonics of the VCO frequency the closest they came to each other was 200 kHz--still a mystery.
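For concreteness, that harmonic comparison can be done with a quick brute-force search (a sketch, using the frequencies quoted above):

# look for near-coincidences between harmonics of the 9 MHz modulation
# frequency and harmonics of the parked ALS COMM VCO frequency
f_mod = 9.100230e6   # Hz
f_vco = 78.7873e6    # Hz
closest = min(
    (abs(m * f_vco - n * f_mod), m, n)
    for m in range(1, 11)    # out to 10 VCO harmonics
    for n in range(1, 100))  # 9 MHz harmonics spanning the same range
print(closest)  # -> a couple hundred kHz (3*f_vco vs 26*f_mod), nowhere near 600 Hz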
Jordan, Robert, Sheila
8Hz is the single precision rounding, so somewhere somebody is casting the number to single. Beckhoff code?
The VCO frequency is about 80MHz, and 8e7 = (1 + fractional) * 2^26 (fractional is about 0.1921, but that's not important).
A single-precision number uses 23 bits for the fraction, so for any number in the [2^26, 2^27) range the least significant bit is 2^-23 * 2^26 = 8.
EPICS values are fine, so this is a problem of DAQ/NDS/DV.
Channels are double-precision in the front end, but stored as single-precision in the frames. Maybe Jordan was getting this data from the frames/NDS2, rather than live, so that's why there's this quantization error?
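The 8 Hz step is easy to confirm numerically (a sketch using numpy's spacing(), which returns the gap to the next representable float):

import numpy as np
f = np.float32(78.7873e6)  # the VCO frequency, cast to single precision
print(np.spacing(f))       # -> 8.0, the LSB step for values in [2^26, 2^27)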
Below is the gist of the code that we tried: I just added in a sleep of 1 second between the switching of the FM4 and FM5 banks in all dofs. When executed, it looked to have worked the same as the old perl script, but there must be some deeper magic that I need to look into.
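(A sketch of the change; the GS13 filter-bank names are illustrative, not the exact channels, and ezca is provided by the Guardian runtime.)

import time

# switch all the FM4s (analog gain stage) first, one dof at a time...
for dof in ['H1', 'H2', 'H3', 'V1', 'V2', 'V3']:
    ezca.switch('ISI-HAM3_GS13INF_%s' % dof, 'FM4', 'ON')

# ...then pause, mimicking the old COMMAND perl script...
time.sleep(1)

# ...before switching all the FM5s (analog whitening stage)
for dof in ['H1', 'H2', 'H3', 'V1', 'V2', 'V3']:
    ezca.switch('ISI-HAM3_GS13INF_%s' % dof, 'FM5', 'ON')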
(I switched this back to what it was before and committed it back in the svn).
Note to me: Had to restart the node in order for it to take any bits of new code. I've seen this before, but I'm not sure what warrants a restart vs. just a reload. I'm writing this here so I can look back and find the trend.