TITLE: 12/09 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Unlocked
SHIFT SUMMARY: Lock loss likely from an earthquake. High winds and microseism have hindered relocking on ALS. Tried different combinations of ISI blends to no avail. The Y arm has been the least stable. There is a timing error on H1SUSETMX.
INCOMING OPERATOR: Ed
ACTIVITY LOG:
10:41 UTC Lock loss. Arms not staying locked on green. Put ISC_LOCK guardian to DOWN to wait out the earthquake. Adjusted REF_SIGNAL to increase ISS diffracted power.
13:38 UTC Started attempting to relock
15:33 UTC Bubba driving down the arms to assess tumbleweed buildup
In checking on the reference cavity transmission this morning, I noted that it took a dive over the last day in particular, and over the last three days in general. Also attached is a plot of the IMC locked status. Looking over the trend data, it appears to me that the reference cavity transmission suffers when there are long periods during which the IMC is trying to acquire lock. My hypothesis is that while the IMC is trying to acquire lock, the VCO frequency of the double-passed AOM swings from one extreme to the other. This may heat the AOM, which in turn changes the alignment of the AOM with respect to the input beam. We don't observe an alignment change on the nearby EOM because the EOM is too close for a small alignment change to be noticed. We do, of course, observe the alignment change on the iris located in front of the reference cavity, since that is some distance (~2 m) from the AOM. Also attached is a plot of the reference cavity transmission and VCO frequency. One might ask why I didn't just fix this previously by re-aligning the AOM. The answer is that the AOM alignment is quite sensitive, and re-aligning doesn't necessarily fix the problem if overheating of the AOM is the culprit.
Mike L. called and notified me that there was a SNEWS test alert entered into GraceDB (E206974, see attached screenshot). I did not get a verbal alarm.
There was no alarm for this in the control room because the script that queries GraceDB (ext_alert.py) only looks back 36000 seconds, and the delay for this alert was 73681 s (~20.5 hours).
We have been using the default lookback time, but it can be set to any value we choose from the command line when ext_alert.py is started. This delay was a special case, but it might be worth looking back a bit more than 10 hours...
The lookback-time code is below, if anyone is curious.
actions['run'].add_argument('-l', '--lookback-time', type=float, default=36000,
                            dest='lookback',
                            help='how far back in time to query, '
                                 'default: %(default)s')

# ... later, in the loop:
# query gracedb
now = gps_time_now(ifo=args.ifo)
start = now - args.lookback
client, events = list(
    query_gracedb(start, now, ifo=args.ifo, connection=client,
                  far=args.far, test=args.test))
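As a rough illustration (a minimal sketch, not the production code; the numbers are just the ones quoted above), this shows why the alert fell outside the query window and which knob would widen it:

# minimal sketch: the query window is [now - lookback, now], so anything
# whose GraceDB entry lags by more than `lookback` seconds is never returned
lookback = 36000                # default lookback in seconds (10 hours)
alert_delay = 73681             # delay of this SNEWS test alert (~20.5 hours)
print(alert_delay > lookback)   # True -> event outside the window, hence no alarm
# Starting ext_alert.py with e.g. '--lookback-time 86400' would cover a full day.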
Put the ISC_LOCK guardian to DOWN and am waiting for the earthquake to subside. Wind speeds are peaking around 40 mph.
Range was degrading with increasing winds. There was also an earthquake, and LLO went down at the same time: M6.9, 106 km SE of Amahai, Indonesia, 2015-12-09 10:21:50 UTC, 33.9 km deep.
TITLE: 12/09 [OWL Shift]: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing @ ~79 Mpc
OUTGOING OPERATOR: Cheryl
QUICK SUMMARY: From the cameras: the lights are off in the LVEA, the PSL enclosure, end X, and end Y; I cannot tell if the lights are on or off at mid X and mid Y. The 0.03-0.1 Hz (earthquake) seismic band is between ~0.01 and 0.1 um/s. The 0.1-0.3 Hz (microseism) seismic band is trending slightly up and is now between ~0.2 and 0.9 um/s. The winds are between ~0 and 15 mph. See screenshot for ISI blends. From pinging: CDS WAP is off at the LVEA, end X, and end Y; CDS WAP is on at mid X and mid Y.
Ops Eve Summary: 00:01-08:00UTC (16:00-23:59PT)
State of H1: locked in Observe for 2+ hours
Help: Jenne, Sheila, Evan H
Shift Summary:
Timeline:
Kyle was here and needed to drive down the Y arm:
Timeline including GRB:
Evan Hall suggested changing ETMY ISI blend filters after a couple lock losses between DRMI and ENGAGE_ASC.
The change from all Quiet_90s to Quiet_90 in the X direction and 45 mHz in the Y direction was easily seen on ST1_ISO_RX and ST2_ISO_Y, and on the ASC control signal in pitch being sent to the optic, ASC-ETMY_PIT_OUTPUT.
Plot attached shows those channels before and after the blend filter change, and ETMX ASC control signal, to compare.
The current blend filter configuration MEDM screen is also attached.
Why are the Y blends set differently from the X blends?
Over the past month or so, having the 45 mHz blends on EX has caused the ISI to ring up in full lock (for example, alog 23674 and comments).
I have edited the ISC_LOCK guardian so that it now turns violin mode damping off before the interferometer reaches nominal low noise. This will hopefully allow us to collect ringdown data on the violin mode fundamentals.
Violin mode damping is still turned on as usual in BOUNCE_VIOLIN_MODE_DAMPING, but it is now turned off in the COIL_DRIVERS state. Thus the modes will still receive a few minutes of damping before DARM is switched to DC readout.
If this behavior needs to be reverted, there is a flag in the main() function of COIL_DRIVERS called self.turnOffViolinDamping which can be set to False.
Violin mode damping in full lock will be re-enabled once sufficient data have been collected.
Are there any observable results from this? For example, does this mean we will now see these on the running power spectrum on nuc3? And is this the reason we now have the red boxes on the violin mode medm on nuc0? I hadn't noticed the latter before, so I was wondering why these were flashing red.
Since turning the damping off, the violin mode fundamentals seem to appear on the DARM FOM at the level of 10^-16 m/√Hz or so. Before turning the damping off, they would eventually damp down below 10^-18 m/√Hz after a few hours.
I'm guessing this is why the violin mode monitors are red, but I don't know what Nutsinee set the monitor thresholds to.
Also, since writing the above alog I changed the Guardian code for turning off the damping. It is no longer executed inside an if statement; it's just executed directly in the main() function of COIL_DRIVERS.
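For reference, here is a minimal sketch of what turning damping off directly in a state's main() can look like in Guardian. This is illustrative only, not the actual ISC_LOCK code; the suspension and mode channel names are assumptions.

from guardian import GuardState

class COIL_DRIVERS(GuardState):
    def main(self):
        # ... coil driver switching happens here as before ...
        # Turn off violin-mode fundamental damping directly in main()
        # (no flag / if statement), so the modes ring down freely for the
        # rest of the lock stretch.
        for mode in (1, 2, 3):  # illustrative mode numbers only
            # `ezca` is provided by the Guardian environment
            ezca['SUS-ETMY_L2_DAMP_MODE%d_GAIN' % mode] = 0
        return True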
The mystery ~650 Hz noise reported here and here also shows up in the PEM RF antennas (both the 9 MHz and 45 MHz antennas, located in the CER and LVEA). Further investigation revealed that this peak shows up in the PEM antennas during lock acquisition at the start of the DC readout transition step (if it appears at all; it is not present in every lock -- more on this later). At the start of this step the ALS COMM VCO is parked at some value.
To determine whether this VCO could be responsible for the ~650 Hz noise, the frequency readback of the VCO was compared to the frequency of the mystery peak in the PEM antenna. Attached (figure 1) is a plot of the H1:ALS-C_COMM_VCO_FREQUENCY timeseries on top and a spectrogram of the PEM 45 MHz LVEA antenna on the bottom. The frequency of the peak seems to track the VCO frequency if you take into account the fact that the VCO frequency readback is digitized into steps of 8 Hz (does anyone know why / can we fix this?).
Also, there appear to be two different values at which the VCO can be parked. Figure 2 has plots similar to figure 1, over a 28-hour stretch which contained multiple locks where the peak was sometimes present. In locks where the peak was present, the VCO was set to ~78.7873 MHz. In locks where the peak was not there, the VCO was set to ~78.7944 MHz. These values correspond to two different values of H1:ALS-C_COMM_VCO_TUNEOFS: ~-1.39 and ~1.25, respectively.
To test this, we tried moving the COMM VCO TUNE OFS slider with the IFO locked (before continuing to NLN / Observing). While initially it looked like the peak in the PEM RF channel moved as the slider was moved, the lock broke before we could tell conclusively. The lockloss occurred right as Sheila was moving the slider. We don't know why this should cause a lockloss, so it is a subject for further investigation (it was windy and ground motion was high at the time, so it could have been a coincidence).
Also included (figure 3) is a plot of the VCO frequency (again with the 8 Hz digitization) and the CER rack temperature. More data is needed, but it looks like the frequency trends down after the temperature rises.
Finally, there is still the question of why this is showing up in the 9 MHz and 45 MHz channels (and, ultimately, DARM). As a first check, I compared 9.100230 MHz and its harmonics to 78.7873 MHz and its harmonics to see whether a beat would show up within 600 Hz (a sketch of the check is below). Out to 10 harmonics of the VCO frequency, the closest they came to each other was 200 kHz -- still a mystery.
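A minimal version of that harmonic check (just the arithmetic, using the numbers quoted above):

# compare the first 10 harmonics of the parked VCO frequency against the
# nearest harmonic of the 9 MHz modulation and find the smallest beat
f_mod = 9.100230e6   # Hz
f_vco = 78.7873e6    # Hz
beats = []
for n in range(1, 11):
    m = round(n * f_vco / f_mod)            # nearest 9 MHz harmonic
    beats.append(abs(n * f_vco - m * f_mod))
print(min(beats))    # a couple hundred kHz -- nowhere near 600 Hz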
Jordan, Robert, Sheila
8 Hz is the single-precision rounding, so somewhere somebody is casting the number to single. Beckhoff code?
The VCO frequency is about 80 MHz, and 8e7 = (1 + fractional) * 2^26 (the fractional part is about 0.192, but that's not important).
A single-precision number uses 23 bits for the fractional part. For any number in the [2^26, 2^27) range, the least significant bit is therefore 2^-23 * 2^26 = 8.
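A quick numerical check of this (assuming numpy; the point is just the float32 spacing in this binade):

import numpy as np

f = np.float32(78.7873e6)     # VCO readback value forced to single precision
print(np.spacing(f))          # 8.0 -> the LSB size for float32 values in [2**26, 2**27)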
EPICS values are fine, so this is a problem of DAQ/NDS/DV.
Channels are double-precision in the front end, but stored as single-precision in the frames. Maybe Jordan was getting this data from the frames/NDS2, rather than live, so that's why there's this quantization error?
Ops Eve Shift: 00:01-08:00UTC (16:00-23:59PT)
State of H1: locked in Observe
Help: Jenne, Sheila, Evan
Details:
Site:
We've been having difficulty locking this afternoon (microseism around 0.6 um/s RMS, winds consistently gusting up to 35-40 mph, at times up to 50 mph). The locklosses seem fairly typical of the type that plague us with high ground motion when we are locked on TR_CARM and ALS_DIFF; see alog 22211.
These locklosses seem to be caused by some kind of glitch that happens only while ALS is on DIFF. I had a look at the LLO guardian and talked to Adam, who doesn't think that LLO has this problem. There seem to be three differences in the way we do this step compared to LLO: they ramp the CARM offset much faster (3 seconds for the whole thing vs. 25 seconds here), they don't run a servo to ALS DIFF from AS45Q during this step, and I believe they go to a lower CARM offset before they switch DARM from ALS to AS45Q, if I'm interpreting the normalizations correctly.
For now I just sped up the ramp times a good deal, replacing both the 15-second ramp in the reduce-CARM-offset step and the 10-second ramp in SWITCH_TO_QPDs with 3-second ramps. This seems to be a bit better: I think it has worked 3 out of the 5 times we have tried it in the last few hours, which is better than earlier in the day. For now I will leave this in.
There were some testpoints left open on h1sustmsx, h1sustmsy, and h1pemey; I have cleared these.
In the past hour a partial filter module load of susetmx was done (ETMX_L3_OPLEV_YAW). The filter file for etmx was last modified Oct 29, so no new filters were in fact loaded. No one in the control room knew why this had happened, so I did a full load to clear the partial-load string. I checked that no filter archive file was generated, since the source file was not modified.
Rotating pumps and diesel generator will be running continuously until Friday -> I'll be out to refuel the generator tonight.
TITLE: Dec 8 DAY Shift 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
SUPPORT: Sheila, Hugh, TJ, Evan, Jenne
LOCK DURATION:
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: After maintenance day, winds picked up, sometimes in excess of 45 mph. Locking has been tedious and has not gotten past ASC part_3.
ACTIVITY LOG:
15:55 TV crew is on site
16:00 Christina and crew will start at End-Y
16:03 Jeff B out to LVEA, pulling vacuum plumbing for dust monitors along the input arm. Mitch will join him when he arrives
16:04 Bubba is going to LVEA to inspect the supports on the crane/snorkel lift
16:10 Richard testing phones at outbuildings. Will be calling ops to confirm numbers.
16:12 Schofield and Jordan out to CER.
16:21 Jodie 3IFO Storage monitoring
16:21 Gerardo out to LVEA to replace annulus ion pump on Y manifold.
16:25 took IMC down for Hugh to do some Guardian/ISI work HAM2/3 WP5640
16:28 Betsy into LVEA
16:41 Karen called from End-Y to report water on floor under pump in the mechanical room. John is going to investigate.
16:44 Fil out to LVEA to survey for a future cable run
16:45 Cleaning crew done at Y, heading to X
16:46 Richard is done with phones.
16:56 Jodi is done in LVEA
17:00 Joe returning a hose into the LVEA.
17:03 Travis out to LVEA to join Betsy.
17:04 Port-o-let service on site
17:05 GW/EM Test alarm received
17:11 Betsy called from LVEA. HWFS team can begin.
17:11 Richard out to join Fil in LVEA
17:15 Karen called from End-X. Lights not turning on.
17:23 HFD on site.
17:25 Kyle headed to Y-2-8 to start generator
17:35 Fil and Pete pulling a cable or two on the input arm and also around HAM4. WP5644
17:43 John and Bubba to EX and then EY. Water damage will have the lights down at EX until further notice.
17:46 Betsy into LVEA to check on maintenance work progress.
17:56 Hugh headed out to Ends to do weekly HEPI fluid checks.
18:14 re-locked IMC
18:15 Chris out to join crew at End-X
18:15 Alistair arrived on site
18:20 Jason reset the PSL WD
18:28 Hugh back from HEPI checks at End Stations
18:45 Hugh finished doing HEPI fluid checks.
18:52 Hugh into LVEA looking for 3IFO parts
18:56 Gerardo finished with annulus pump. Aux cart will stay on until noon.
18:58 CDS WP5637 executed.
19:04 Coca-Cola on site.
19:05 Hugh back from LVEA
19:24 Bartlett to End-Y to take pictures.
19:40 Cleared L4C saturation accumulators.
19:42 Jodi heading into LVEA to check equipment (scope) for auditor.
20:00 Jeff B doing LVEA sweep.
20:10 Begin Initial Alignment
20:20 Gerardo exits LVEA and turns out lights. Richard headed out for a minute before I move on.
20:23 Begin full Locking sequence.
20:35 Kyle heading back down from Y-2-8 arm.
21:43 switching to 90mHz blends in corner after many failed attempts w/high winds.
21:51 Kyle driving out to Y-2-8 again.
22:02 Switched corner back to 45mHz blends and changed ETMX to 45 on axis and 90 off axis. ETMY has been configured this way the whole time. Wind speeds seem to have decreased slightly at EX.
23:00 after many failed attempts, this has been put in the commissioner’s hands.
From time to time we see some big glitches when DRMI is locking, and it seems likely that some of the time these glitches cause locklosses. We sometimes wonder if this is a problem with a glitchy OSEM, especially when we are having difficulty locking.
Here is an example of this happening where DRMI survives. In this case at least it was the BS oplev glitching; you can see that the sum drops at the same time as the drops in the DRMI buildups.