Starting at approximately 5:30 UTC we are seeing a (small) increase in glitches on the control room glitchgram FOM (Figure 1). We think this is a symptom of the RF45-related issues again, as we noticed a jump in the RF45 coherence FOM during one of these glitches. Right now it seems much less damaging than the last episode.
See the spectrograms (specifically the feature ~350 seconds in):
Strain: https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=108900
RF45: https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=108901
This time, the trend of LSC-MOD_RF45_AM_CTRL_OUT_DQ shows nothing strange (Figure 2).
The only reason I said the increase in glitches was 'small' is that I was comparing it to this.
It seemed to go away for ~40 mins but now it has returned again at ~6:32 UTC, looking pretty bad. I think Ed is going to try something.
MID-SHIFT SUMMARY: IFO is still locked and Observing @ 72 Mpc. Tidal and ASC have been a little “active” from time to time, probably due to increasing winds peaking at ≈35 mph at times. We’ll see.
Correction... peaking at 40 mph since that last entry.
Most of the 8:00-16:00 PST shift was fairly smooth. There was one lock loss caused by an earthquake (I didn't have time to localize the source of this seismic event). Several ETMY DAC saturations.
One strange glitch affected the range of H1 simultaneously with a glitch in L1's range. It didn't look like a usual ETMY DAC saturation, so Nutsinee and I ran an omega scan here. Nothing obvious in H1:CAL-DELTAL_EXTERNAL_DQ, but several other channels show glitches. I'm not sure which ones to focus on for this particular event; obviously, I need to stare at a few more omega scans to see what is typical.
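For reference, a single-channel look similar to one panel of an omega scan can be made with gwpy's Q-transform. This is only a minimal sketch: the GPS time below is a placeholder for the actual event time, and the plotting choices are arbitrary.

```python
from gwpy.timeseries import TimeSeries

# Placeholder GPS time of the coincident range glitch; substitute the real one.
gps = 1131850000

# Pull a short stretch of calibrated strain around the event.
data = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', gps - 16, gps + 16)

# The Q-transform gives an omega-scan-like time-frequency map of the glitch.
qgram = data.q_transform(outseg=(gps - 2, gps + 2))
plot = qgram.plot()
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylabel('Frequency [Hz]')
plot.savefig('deltal_qscan.png')
```

The same snippet can be pointed at any of the auxiliary channels that showed glitches, which makes it easy to compare a handful of them against the DELTAL trace.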
h1build hung last Friday (13 Nov). This locked up the slow controls SDF monitors, which are being tested. As the slow controls SDF is not yet used by Guardian to determine the observation state, we rebooted h1build and restarted the slow controls SDF monitors.
At this time we do not know what caused it to lock up; the problem was noticed when Betsy, while examining difference counts, found a discrepancy between values on the SDF display and on other MEDM screens.
TITLE: Nov 16 EVE Shift 00:00-08:00 UTC (16:00-24:00 PST), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Jim
QUICK SUMMARY: IFO is locked and observing @ 73.8 Mpc. µSei ≈ 0.6 µm/s. EQ bands are ≈ 0.22 µm/s. Winds are ≤ 20 mph.
TITLE: 11/16 [Day Shift]: 16:00-24:00UTC
STATE Of H1: Observing at ~80 Mpc
SUPPORT: Usual CR crowd
SHIFT SUMMARY: Locked most of the shift. Winds quietish <20 mph, useism is medium low
ACTIVITY LOG:
17:00 JohnW Bubba driving around site looking at tumbleweed situation
17:00 Keita & Me running excitations on ETMX & ETMY ISIs, earthquake arrives at the same time, lockloss
I rotated the HAM2 STS2 to see if the elevated power in the Y channel was from tilting or a bad channel. The first attachment shows the five site SEI STS2s when the wind was fairly quiet. The top row of panels is from 19 October, before the HAM2 STS2 rotation; the bottom row is from 12 Nov, after the sensor was rotated. The HAM2 Y channel still shows an elevated signal even though it is now rotated to align with the X axis, so this isn't related to orientation wrt the SW-facing wall. Additionally, these are quiet wind times (<10 mph), so it is hard to explain this away with wind tilt noise.
The second attachment is similarly arranged, with the same 19 October panels in the top row, but now the lower panels are from 13 November when the wind was really honking (20 to 60 mph). Notice that all the X & Y signals are much larger at lower frequencies (<~0.3 Hz), except the HAM2 Y channel, which in the lower panels is actually pointing in X. This channel has a larger magnitude until you get to about 10 mHz, but it does not respond anywhere near the amount that the other sensors do. This indicates to me that the HAM2 Y channel is definitely not healthy.
O1 day 59
model restarts logged for Sun 15/Nov/2015: no restarts reported
In preparation for rezeroing the active oplevs tomorrow, I've taken 7-day minute trends of these oplevs.
LHO's second-Saturday public tour occurred on the afternoon of 11/14. Arrival time at LSB = 1:00 - 1:30 PM. Departure time = 3:45 - 4:15 PM. Group size = ~30 adults. Vehicles at the LSB = ~15 passenger cars. The group was on the overpass near 2:50 PM and in the control room from about 3:10 to 3:40. Landscapers were working on tumbleweeds near the corner station near 2:45 PM.
TITLE: "11/16 [OWL Shift]: 08:00-16:00UTC (00:00-08:00 PDT), all times posted in UTC"
STATE Of H1: Observing at ~80 Mpc
SUPPORT: Jenne
SHIFT SUMMARY: Spent the first couple of hours with initial alignment and lock acquisition. Issue with ALSY* both during initial alignment and while locking ALS. Later the OMC flashed on the wrong mode; an attempt to adjust the OMC alignment was made, but it seemed to fix itself somehow. Low wind and nominal EQ seismic band throughout the shift. Useism ~0.05 um/s.
INCOMING OPERATOR: Jim
ACTIVITY LOG: See alog23433 and alog23429 + comments therein.
* When I said I had a problem with ETMY I actually meant ALSY. There's no problem with ETMY.
Robert, Jess, Nutsinee
Back in September Robert and control room folks did an injection of "5 people jumping in sync" (alog21180). A followup showed that the injection indeed coupled into DARM (alog21191) from ~10-400 Hz (an omega scan of a jump can be found here). So I continued to follow up on the coupling mechanism, hoping it would shed some light on how ground motion couples into DARM (we suffer a slight range drop daily during high-traffic times). Using Robert's acoustic coupling functions (alog22797, figure 9) I was able to pin down the coupling mechanism above 100 Hz. Figure 1 shows the calibrated DARM and the PSL periscope accelerometer spectra at 100-400 Hz.
The predicted DARM value is expected to agree with the actual measurement to within a factor of 2, so it is reasonable to blame the PSL periscope for ground-to-DARM coupling above 100 Hz. However, at f < 100 Hz the coupling mechanism is still a mystery (Figure 4): Robert wasn't able to make periscope motion couple into DARM at low frequency (refer to figure 7 of alog22797). We were, however, able to rule out HAM2 and HAM6 as coupling sites based on the upper limits (alog22797, figures 3 and 6). Figures 2 and 3 attached show that both the HAM2 and HAM6 GS13s rise about an order of magnitude above the noise floor, while the upper limits from Robert's alog are about 1.5-2 orders of magnitude; that is, shaking HAM2 and HAM6 to two orders of magnitude above the GS13 noise floor at low frequency did not couple into DARM.
Whatever the coupling mechanism is at low frequency, we believe the same mechanism is responsible for HVAC coupling into DARM. Figure 5 shows the DARM spectrum during the HVAC injection time (alog22532); HVAC noise shows up in DARM at 20-100 Hz.
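For anyone who wants to redo the >100 Hz projection, the arithmetic is just the witness spectrum multiplied by the measured coupling function, compared against DARM. The sketch below assumes gwpy is available; the accelerometer channel name, the GPS span, and the coupling-function file are placeholders, and the units are whatever the exported coupling function uses.

```python
import numpy as np
from gwpy.timeseries import TimeSeries

# Placeholder GPS span during the jumping injection; replace with the real times.
start, end = 1126250000, 1126250060

# Witness (PSL periscope accelerometer, assumed channel name) and calibrated DARM.
acc = TimeSeries.get('H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ', start, end)
darm = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)

# Amplitude spectral densities over the injection.
acc_asd = acc.asd(fftlength=8, overlap=4)
darm_asd = darm.asd(fftlength=8, overlap=4)

# Measured acoustic coupling function (hypothetical export of the alog22797
# figure 9 data): frequency [Hz] and coupling [DARM units per witness unit].
freq, coupling = np.loadtxt('psl_periscope_coupling.txt', unpack=True)

# Project the accelerometer spectrum into DARM and compare in the 100-400 Hz band.
projected = np.interp(freq, acc_asd.frequencies.value, acc_asd.value) * coupling
measured = np.interp(freq, darm_asd.frequencies.value, darm_asd.value)
band = (freq > 100) & (freq < 400)
ratio = measured[band] / projected[band]
print('median measured/projected ratio, 100-400 Hz: %.2f' % np.median(ratio))
# A ratio within a factor of ~2 of unity is the agreement quoted above.
```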
I accepted the SDF change of the OMC alignment; I didn't want to risk losing lock by reverting it. A screenshot of the old and new values is attached below. This might have been caused by my attempt to touch the OMC alignment slider earlier.
I trended all the optics that I know are in the green light path back to where they were before we lost lock (~10 hours ago). TMSY alignment was off by 5.6 um. Adjusting TMSY alone improved the flashes from 0.4 to 0.6; the rest came from a combination of ITMY and ETMY adjustments. Seems like I have some magic fingers =p
One mistake I used to make was touching either the ETM or the ITM without realizing that both mirrors have to be aligned with respect to one another. Besides looking at the camera and dataviewer, trying to imagine how the optics actually move can be helpful.
The OMC seems to have a problem. The shutter is open but there's no light on the OMC Trans camera. Now stuck at DC_READOUT_TRANSITION.
The first time I hit INIT at OMC_LOCK nothing happened. I tried again and I saw flashes in a bad mode.
OMC alignment was bad. I set OMC_LOCK to Auto and DOWN so ISC_LOCK would stop kicking it and the OMC Guardian wouldn't fight my alignment sliders (I realized this after several WD trips). While I was going through what could have gone wrong with Jenne (we found that the LF and RT OSEM DAC outputs were saturated), the OMC fixed itself. I requested READY_FOR_HANDOFF and it is now locking on the right mode.
GREAT job, Nutsinee!!! So, by your trends it looks like TMSy Pitch was the culprit, being off by 5.6 um?
I could have sworn I returned it to its original value and started tweaking ETMy/ITMy, too. But obviously I had no luck! There were periods where I also did get the powers up to over 0.6, BUT I never could get a 0:0. I need to go back to alignment school, I guess!! And you DO have magic fingers!! :) GREAT WORK!!! :)
This is a classic tale of IM1-3 woes: IM1-3 are very likely to move when the HAM2 ISI trips, so they need to be checked every time.
The IMs come from IOO, so they are unlike any other optics we have; they behave in a very different way and are susceptible to changing alignment when they experience shaking, as they do when the ISI trips.
The IM OSEM values are consistent, and when the optic alignment shifts, it is consistently recovered by driving the optic back to previous OSEM values, regardless of slider values. The OSEM values, when restored, consistently restore the pointing onto the IM4 Trans QPD.
IM4 Trans QPD reads different values for in-lock vs out-of-lock, so it's necessary to trend a signal like OMC DC A PD to correctly compare times.
IM4 does sometimes shift its alignment after shaking, but because it's moved around by the IFO, choosing a starting value can be difficult. In the case of IM4, restoring its alignment to a recent out-of-lock value should be sufficient to lock, but ultimately IM4 needs to be pointed so that we can lock the X arm in red.
I've tracked the alignment changes for the IM1-3 since 9 Nov 2015, and they are listed below.
These alignment changes are big enough to affect locking, and it's possible that the IFO realignment that was necessary last night was in part a response to IM pointing changes.
I've attached a plot showing the IM alignment channels.
Armed with those channels, the knowledge that the IM OSEM values are trustworthy, and the knowledge that under normal running conditions IM1-3 only drift 1-2 urad in a day, checking and restoring IM alignment after a shaking event (ISI trip, earthquake) should be a fairly quick process.
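As a rough illustration of how quick that check can be, here is a minimal sketch using gwpy minute trends. The IM witness channel names, the trend-channel suffix, the GPS windows, and the assumption that the readback is in urad are all unverified and should be confirmed against the attached plot before use.

```python
from gwpy.timeseries import TimeSeriesDict

# Assumed IM alignment witness channels (minute trends); confirm the exact names.
chans = ['H1:SUS-IM%d_M1_DAMP_%s_INMON.mean,m-trend' % (n, dof)
         for n in (1, 2, 3) for dof in ('P', 'Y')]

# Placeholder 10-minute GPS windows before and after the shaking event.
before = (1131750000, 1131750600)
after = (1131800000, 1131800600)

ref = TimeSeriesDict.get(chans, *before)
now = TimeSeriesDict.get(chans, *after)

# Flag any optic that moved by more than the normal 1-2 urad daily drift.
for chan in chans:
    shift = now[chan].value.mean() - ref[chan].value.mean()
    flag = '  <-- restore to previous OSEM values' if abs(shift) > 2 else ''
    print('%-50s %+8.2f%s' % (chan, shift, flag))
```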
Thanks for the write-up here, Cheryl!
General Statement:
Honestly, when it comes to gross misalignments (those which CANNOT be fixed with an Initial Alignment, usually caused by something catastrophic [e.g. power outage, huge earthquake, etc.]), I don’t have a good idea of where to start.
For example, what specific channels does one check for misalignments? (i.e. the specific channel names; are they the same for all optics? What about ISIs/HEPI: do we need to check them for misalignment?) This is a more specific question for IO, SUS, SEI, & TMS.
Specific Statement/Question:
It sounds like you are finding that the Input Mirrors (IMs) are more susceptible to “shakes” from SEI, whereas the SUSes, being so much different and bigger, aren’t as susceptible. This is a big thing, and we should pay attention to changes to the IMs.
Side question: Are the IMs similar to the Tip Tilts?
For input pointing misalignments, what is the cookbook/procedure for checking & fixing (if needed) alignment? Sounds like we:
All of this can be done in the control room, yes? Do we ever have to go out on an IO table?
I’d like something similar for SUS, TMS, & SEI. What signals (specific channels) are best to look at to check the alignment of each suspension or platform?
Anyway, thank you for the write-up and helping to clarify this!