This is a classic tale of IM1-3 woes: IM1-3 are very likely to move when the HAM2 ISI trips, so they need to be checked every time.
The IMs come from IOO, so they are unlike any other optics we have and behave in a very different way; they are susceptible to changing alignment when they experience shaking, as they do when the ISI trips.
The IM OSEM values are consistent, and when the optic alignment shifts, it is consistently recovered by driving the optic back to the previous OSEM values, regardless of slider values. The OSEM values, when restored, consistently restore the pointing onto the IM4 Trans QPD.
IM4 Trans QPD reads different values in-lock vs. out-of-lock, so it's necessary to trend a signal like OMC DC A PD to correctly compare times.
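As a rough illustration (not a verified procedure, and the channel names below are guesses I have not checked against the H1 channel list), one could trend a lock-state proxy alongside the IM4 Trans QPD so that before/after comparisons only use times in the same lock state:

from gwpy.timeseries import TimeSeriesDict

# Channel names are placeholders -- substitute the real OMC DC and IM4 Trans QPD channels.
channels = [
    'H1:OMC-DC_A_OUT_DQ',         # assumed lock-state proxy (OMC DC A PD)
    'H1:IMC-IM4_TRANS_P_OUT_DQ',  # assumed IM4 Trans QPD pitch
    'H1:IMC-IM4_TRANS_Y_OUT_DQ',  # assumed IM4 Trans QPD yaw
]
data = TimeSeriesDict.get(channels, 'Nov 14 2015 08:00', 'Nov 14 2015 16:00')

# Only compare the QPD values at times when the OMC DC power indicates the same
# lock state; the threshold below is a made-up number, not a calibrated one.
out_of_lock = data[channels[0]].value < 0.1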
IM4 does sometimes shift its alignment after shaking, but because it's moved around by the IFO, choosing a starting value can be difficult. In the case of IM4, restoring its alignment to a recent out-of-lock value should be sufficient to lock, but ultimately IM4 needs to be pointed so that we can lock the X arm in red.
I've tracked the alignment changes for IM1-3 since 9 Nov 2015, and they are listed below.
These alignment changes are big enough to affect locking, and it's possible that the IFO realignment that was necessary last night was in part a response to IM pointing changes.
I've attached a plot showing the IM alignment channels.
Armed with those channels, the knowledge that the IM OSEM values are trustworthy, and the knowledge that under normal running conditions IM1-3 only drift 1-2urad in a day, checking and restoring IM alignment after a shaking event (ISI trip, earthquake) should be a fairly quick process.
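For what it's worth, here is a minimal sketch of what that check could look like (assuming pyepics is available; the channel names and reference numbers are placeholders, not the real IM alignment channels or values):

from epics import caget

# Reference pitch/yaw readbacks saved at a known-good time.
# Placeholder channel names and numbers -- use the real IM alignment channels.
reference = {
    'H1:SUS-IM1_M1_DAMP_P_INMON': -120.3,
    'H1:SUS-IM1_M1_DAMP_Y_INMON': 45.7,
    # ...IM2 and IM3 pitch/yaw entries would go here as well
}

TOLERANCE = 2.0  # urad; IM1-3 normally only drift 1-2 urad in a day

for channel, ref in reference.items():
    now = caget(channel)
    if now is None:
        print(channel + ': no EPICS response')
    elif abs(now - ref) > TOLERANCE:
        print('%s: moved %+.1f from reference -> restore alignment' % (channel, now - ref))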
Thanks for the write-up here, Cheryl!
General Statement:
Honestly, when it comes to gross misalignments (those which CANNOT be fixed with an Initial Alignment; usually caused by something catastrophic [e.g. power outage, huge earthquake, etc.]), I don’t have an idea of where to start.
For example, what specific channels does one check for misalignments (i.e. specific channel names, and are they the same for all optics? What about ISIs/HEPI, do we need to check them for misalignment?). This is a more specific question for IO, SUS, SEI, & TMS.
Specific Statement/Question:
It sounds like you are finding that the Input Mirrors (IMs) are more susceptible to “shakes” from SEI, whereas the SUSes are so much different and bigger that they aren’t as susceptible. This is a big thing, and we should pay attention to changes to the IMs.
Side question: Are the IMs similar to the Tip Tilts?
For input pointing misalignments, what is the cookbook/procedure for checking & fixing (if needed) alignment? Sounds like we:
All of this can be done in the control room, yes? Do we ever have to go out on an IO table?
I’d like something similar for SUS, TMS, & SEI. What signals (specific channels) are best to look at to check the alignment of each suspension or platform?
Anyway, thank you for the write-up and helping to clarify this!
O1 day 59
model restarts logged for Sat 14/Nov/2015: No restarts reported
Title: Ops Owl Summary, 08:00-16:00UTC (00:00-08:00PT)
State of H1: in Observe for 11 hours, range is 79.6Mpc
Incoming Operator: JimW
Shift Summary: IFO has been locked all shift with good range. Winds have been under 20mph all shift, and useism is currently about 0.5 at the Corner Station.
Shift Details:
- I restarted gracedb and added some instructions to the wiki page on how to tell if it's running, beyond the indicator on the OPS Overview
- there have been about 6 ETMY saturations
TITLE: 11/15 OWL Shift: 00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Observation Mode with range around 76Mpc
Incoming Operator: Cheryl
Support: Talked with Jenne on the phone (& Jim W briefly)
Quick Summary:
With seismicity lower this evening, went about attempting to get H1 back. Jenne helped walk through fixing the pointing of the input optics to get the PRM locking. Have been in Observation Mode for the last few hours. The range is a bit ragged looking (perhaps related to useism, which is still above the 90th percentile for the LVEA).
Shift Activities:
H1 went through Guardian states fairly easily the 2nd time around. I did have to engage the 2nd ISS Loop by hand again. I also hit the DIAG RESET for SUS ETMy Timing on the CDS Overview.
H1 had a big ETMy glitch right after going to Observation Mode, but has been running at 78-81Mpc.
Off to dinner!
Seismicity Note: Corner Station useism is at ~0.7um/s & winds are about 8mph.
(Thank you to Jenne for the alignment help, to Jim for the help with GS13 gains, & to Kiwamu for the ISS Loop instructions!)
After the alignment tweaking with Jenne, and getting through PRM, I continued with the Initial Alignment. The Dark Michelson came up on its own fairly well aligned and I only barely touched it.
After the IA, attempted locking. First attempt was on an ugly DRMI mode. Second attempt locked DRMI within 10-15min. Proceeded through Guardian states. First hitch was at ENGAGING ISS 2ND LOOP. Ended up having to engage it by hand (via Kiwamu's alog); this was scary, as it's hard to engage when the output is close to zero. Had a big glitch when I engaged, but it rode through (yay!).
Finally made it to NOMINAL LOW NOISE with range around 80Mpc (but had a few DIFFS on SDF). The Diffs were:
Input Pointing Diffs:
Diffs for some ISIs (HAM2, HAM5, ITMx) being in GS13 LO Gain Mode
Now working on getting H1 back UP (& going to have dinner soon).
I meant to attach snapshots of the SDF diffs I observed last night, but forgot to get to that (wrote alog on my laptop and saved snapshots on ops workstation Desktop).
Mainly thought the input pointing changes were worth noting, since that was a noticeable/big change to H1.
Will post when I'm back on shift tonight.
Here (attached) are differences from SDF I noted in the main entry.
On Daniel's suggestion, I have made some projections for how we can expect the interferometer range to improve if we double the power and/or fix the low-frequency power-law noise.
The first attachment shows various projected curves at the current power (23 W), and the second attachment shows projected curves at twice the power. The ranges are computed using Salvo and John M's new version of int73.
I've assumed the unknown noise has an ASD of 1×10^-20 m/rtHz at 100 Hz, with a shape of 1/f^2. This is just done by eye, with the magnitude chosen so that the subtraction does not force the remaining noise to dip significantly below the "total expected" trace.
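For concreteness, here is a small sketch of that by-eye model and the quadrature subtraction (the "total" ASD below is just a stand-in array, not the measured spectrum used for the range estimates):

import numpy as np

f = np.logspace(1, 3, 1000)            # 10 Hz to 1 kHz
excess_asd = 1e-20 * (100.0 / f)**2    # 1e-20 m/rtHz at 100 Hz, falling as 1/f^2

# Stand-in for the measured total displacement ASD; in practice this would come
# from the DARM spectrum that goes into the range calculation.
total_asd = np.sqrt(excess_asd**2 + (4e-20)**2 * np.ones_like(f))

# Incoherent (quadrature) subtraction of the assumed excess noise.
cleaned_asd = np.sqrt(np.clip(total_asd**2 - excess_asd**2, 0.0, None))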
It seems that either fixing the power-law noise or doubling the power gives similar results for both BNS range (for two 1.4M⊙ stars) and BBH range (for two 30M⊙ holes). Doing both gives us a bigger boost.
                                        | BNS range (Mpc) | BBH range (Gpc)
Present noise                           | 79              | 1.3
LF power-law noise fixed                | 99              | 1.8
Power doubled                           | 93              | 1.6
LF power-law noise fixed, power doubled | 119             | 2.3
From the curve in which the power is doubled and the low-frequency noise is fixed, it is apparent (if you believe the subtraction) that there is still some irregular excess noise (perhaps scatter) between 60 and 100 Hz. This actually seems to limit the range significantly; looking at the red traces, the "total expected" trace has ranges of 110 Mpc / 2.1 Gpc, and the "total expected" trace at twice the power has ranges of 135 Mpc / 2.7 Gpc. Additionally, some of the noise below 50 Hz is LSC control noise, which can be reduced by implementing better subtraction.
(Warner, Driggers, Gray)
Summary: Beams were not aligned onto PDs for PRM, so we had to touch alignment of PRM, RM1, & RM2
After getting a hand-off from Jim, went through Initial Alignment, just so I could see what he saw. I made notes as I went:
This is where I called in Jenne for a rescue.
So from PRM Align, we looked through ASC land and found big inputs on the DC1 & DC2 PDs.
Then took ISC ALIGN to DOWN, and then only to PRX_LOCKED.
While here, we ultimately moved some optics around to improve the alignment. I watched the DC PDs, and Jenne did the same while also looking at some QPD sum values, while I tweaked on the PRM (mainly). I also tweaked on the RM1 & RM2 mirrors a little to get the DC1/2 values down to decent values (from 0.9 down to below 0.2). Jenne could clearly see the beam off the QPDs and walked me into a decent spot, & I tweaked mirrors to minimize the inputs on the DC PDs.
Once we liked what we had, Jenne engaged the PRC1 p/y loops by hand, and this improved all the signals we were looking at (note: she turned OFF the integrators first [FM2's]... once the loops' INPUTs were ON, she turned the FM2's back on). At this point everything looked good. We then went to PRM_ALIGN (which didn't do much, since Jenne did it all by hand). I then waited and OFFLOADED the PRM alignment.
I also went to DOWN and back to PRM ALIGN to repeat and everything looked good.
Now continuing with Initial Alignment!
TITLE: 11/14 OWL Shift: 00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: H1 is in ALIGNING state (see Jim's alog for issues here).
Outgoing Operator: Jim Warner
Support: Jenne is On-Call commissioner
Quick Summary: I've emailed Jenne a heads up I may be employing her help, but will make an attempt to go through Initial Alignment first. Goal: Get H1 back to locking after being down for almost 43hrs!
Title: Day Summary Nov 14th, 16:00-00:00UTC
State of H1: Ugh....
Offsite Support: Jenne on the phone
Shift Summary: winds and useism kept the IFO down
Details:
Wind and useism were still high when I arrived, and ALS wouldn't stay locked. About 22:00 the wind had finally calmed down, so I started trying to lock. No flashes at DRMI, so I started initial alignment. Initial alignment wouldn't find X arm IR; Jenne was called, and turning up the gain on the LSC X-arm loop from 0.05 to 0.15 fixed it. PRM_ALIGN is now failing, so Corey will be getting support for that.
No locking so far today. Winds and useism are still high. So far the IFO hasn't made it past Find-IR. And now an earthquake in Japan is shaking us up.
Gracedb had some issues last night, details in Keith's alog Link
Our ext_alert program on h1fescript0 had given up attempting to reconnect due to the long duration of the server outage. This morning I tried restarting ext_alert via a monit restart, but this did not work and I ended up starting it by hand. It should be stable now.
Are operators supposed to restart this? I did not receive an alarm last night or tonight (the only way I knew of a "GraceDB query failure" was a red box appearing on the Ops Overview).
There used to be instructions for restarting this on a wiki, but those instructions have been removed from this page:
https://lhocds.ligo-wa.caltech.edu/wiki/ExternalAlertNotification
So I'm not sure if I'm supposed to use the old instructions to start this or have someone else restart it.
O1 day 57
model restarts logged for Fri 13/Nov/2015: No restarts reported
The landscape crew will begin removing tumbleweeds that have accumulated on the arm access roads. This will involve using the tractor and baling machine. I have requested they begin on the X-Arm, where the buildup seems to be the greatest.
Title: Ops Owl Summary, 08:00-16:00UTC (00:00-08:00PT)
State of H1: trying to relock
Incoming Operator: JimW
Shift Summary: winds and useism kept the IFO down
Details:
- when I arrived, the ETMX ISI was oscillating, and that took until about 14:00UTC to "fix", though I don't know what I did that had a good effect
--- I took the ISI to OFFLINE, then back to FULLY ISOLATED, and engaging stage2 threw it into oscillations
--- I had to manually engage stage2, and was confused by the ISI position not matching the setpoints; I will ask Jim, who's coming in at 8AM
--- I restored all guardian control and sent the ISI back to FULLY ISOLATED, and it went there and is currently OK
Current State:
- average winds at EY and EX and the Corner Station are now above 27mph according to LHO anemometers
- useism is slightly lower in the last 2 hours, end stations around 0.5 and Corner Station around 0.9
- IFO is struggling to get to CHECK IR
as of 14:34UTC X arm in red is locked and alignment is offloaded!
continuing on with the initial alignment.
winds at EX are about 25mph, and increasing.
winds at the Corner Station are about 10mph.
useism is unchanged from 2 hours ago.
ETMX ISI excessive motion seems to be gone for now.
Title: Ops Owl Mid-shift report, shift times 08:00-16:00UTC (00:00-08:00PT)
State of H1: unlocked
Shift Summary: Arriving on site, winds are good, useism is still high but trending down over the last 8 hours, ETMX ISI is oscillating
I'm working on ETMX ISI to stop the oscillations. So far I have taken it to OFFLINE and run an INIT, and am now working on going back to FULLY_ISOLATED.
This may not work, but maybe it will.
Other than ETMX ISI, no other outstanding issues from heavy winds.
oops, see separate alog.
Alastair, Corey, Nutsinee
First, the RTD/IR sens. alarm went red on the MEDM screen. A visual inspection at the controller box showed an IR FAULT alarm which couldn't be untripped by simply turning the key back and forth. Before going out to swap the IR sensor box, I discovered that the laser had actually stopped working almost two hours before the alarm went off; meaning, nothing was heating up the viewport at the time the IR sensor tripped. We followed Alastair's suggestion (with permission from Mike) and inspected the viewport from inside the table anyway to make sure there was nothing obvious (e.g. a bug crawled in and got zapped). While taking down the IR sensor box, the sensor happened to untrip itself. So we swapped the DB9 cable, plugged it into the spare IR box, and left it on the chassis. After the IR fault got untripped, we were able to restart the laser.
The cause of the CO2X trip is unclear. There was a sharp drop in the power supply, but only by a tiny amount (-0.005 V), at the same time that the laser output power, laser temperature, and the current drawn dropped to 0. All of this happened almost two hours before the IR FAULT alarm went off. The chiller was working fine.
Is the IR sensor output available to be plotted? Along with an output of a nearby temperature sensor? The comparator circuit in the IR sensor may have tripped due to a local rise in temperature, or an instability in its power supply voltages.
Unfortunately, I don't think there's a channel that allows us to monitor the IR sensor readout. If the comparator box were to trip due to a local rise in temperature, it should have happened sooner, since the box hasn't been moved since we installed it.