I trended all the optics that I know are in the green light path back to where they were before we lost lock (~10 hours ago). TMSY alignment was off by 5.6 um. Adjusting TMSY alone improved the flashes from 0.4 to 0.6. The rest was the combination of ITMY and ETMY adjustment. Seems like I have some magic fingers =p
One mistake I used to make was touching either the ETM or the ITM without realizing that both mirrors have to be aligned with respect to one another. Besides looking at the camera and dataviewer, trying to imagine how the optics actually move can be helpful.
OMC seems to have a problem. The shutter is open but there's no light at the OMC Trans camera. Now stuck at DC_READOUT_TRANSITION.
The first time I hit INIT at OMC_LOCK, nothing happened. I tried again and saw flashes in a bad mode.
OMC alignment was bad. I put OMC_LOCK to Auto and DOWN so ISC_LOCK would stop kicking it and the OMC Guardian wouldn't fight my alignment (I realized this after several WD trips). While I was going through what could have gone wrong with Jenne (we found that the LF and RT OSEM DAC outputs were saturated), the OMC fixed itself. I then requested READY_FOR_HANDOFF and it is now locking on the right mode.
GREAT job, Nutsinee!!! So, by your trends it looks like TMSy pitch was the culprit, being off by 5.6 um?
I could have sworn I returned it to its original value and started tweaking ETMy/ITMy, too. But obviously I had no luck! There were periods where I also got the power up over 0.6, BUT I never could get a 0:0. I need to go back to alignment school, I guess!! And you DO have magic fingers!! :) GREAT WORK!!! :)
TITLE: 11/16 OWL Shift: 00:00-08:00UTC (16:00-00:00PST), all times posted in UTC
STATE of H1: ALSy not locking
Incoming Operator: Nutsinee
Support: Talked with Jenne a couple of times (as well as Mike)
Quick Summary:
Tonight was a bust. ALSy was giving Jim trouble at the end of his shift, and it continued that way for all of my shift. Basically, I had trouble aligning ALSy and was not able to get to a 0:0 mode (and this was with tweaking ETMy, TMSy, & ITMy). Frustrating shift.
ALSy Notes:
To Do :
Hoping a new set of eyes (Nutsinee's!) has better luck here. Jenne and I both think it's an alignment issue. And since it's ALSy, the only knobs to turn are for ETMy/ITMy/TMSy. So maybe I just didn't have the magic touch: I just wanted a 0:0!!!!
I picked Jenne's brain for anything I could pass on to Nutsinee and basically all we have is:
Rough Night.
Jess, Laura, Paul
We have been following up on the periodic 60Hz EY glitches which have been seen at LHO since at least June (alog 18936). These glitches are witnessed by various magnetometers and couple into DARM.
It seems that the glitches indicate the switch-on of some electronic device. Spectrograms of H1:PEM-EY_MAG_EBAY_SUSRACK_QUAD_SUM_DQ show lines at several frequencies (30, 42, 60, 80, 120 Hz) that start when the glitch occurs, then stop with a rather softer glitch about 27 minutes later. Attached are spectrograms from yesterday and last Thursday. There doesn't appear to be anything correlated in the HVE or H0:FMC channels.
We recently investigated something similar at LLO (alog 22549), which we managed to trace to an unmonitored air-conditioning unit outside the EY station. The behavior at LLO is somewhat different: the glitch is much louder, the subsequent noise (in both PEM channels and DARM) is quieter, and it stays on longer each time. However, because the turn-on and turn-off are so predictable, it should be possible for someone to go down to EY and listen for anything turning on and off at the expected time.
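For anyone who wants to reproduce the spectrograms, a minimal gwpy sketch is below. The time window is a placeholder (it would need to be replaced with an actual glitch time), and NDS/frame access is assumed.

# Hedged sketch: spectrogram of the EY SUS-rack magnetometer around one of the
# glitches, using gwpy. The time window below is a placeholder, not a real event.
from gwpy.timeseries import TimeSeries

channel = 'H1:PEM-EY_MAG_EBAY_SUSRACK_QUAD_SUM_DQ'
start, end = '2015-11-16 12:00:00', '2015-11-16 13:00:00'   # hypothetical window

data = TimeSeries.get(channel, start, end)

# 10 s strides with 4 s FFTs: enough frequency resolution to separate the
# 30/42/60/80/120 Hz lines while still showing the turn-on/turn-off in time
spec = data.spectrogram(10, fftlength=4, overlap=2) ** (1/2.)

plot = spec.plot(norm='log', cmap='viridis')
ax = plot.gca()
ax.set_ylim(10, 150)
ax.set_ylabel('Frequency [Hz]')
plot.savefig('ey_susrack_mag_spectrogram.png')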
Trying to understand the 60 Hz problem better, I looked at 16 hours of H1:PEM-EY_MAG_EBAY_SUSRACK_QUAD data from Friday (2015-11-13 00:00:00 to 16:00:00), when the detector was mostly unlocked, to see if the 60 Hz bursts were present while the detector is unlocked. As seen on the attached time plot ('60Hz_Burst_locked_and_unlocked.png'), yes they are. This is good because it means we can look for the source while the detector is unlocked during maintenance tomorrow. In this plot I also show that for the first 6 hours the detector was locked and for the rest it was unlocked.
Each burst begins with a spike that lasts about 0.22 seconds (see attached figure 'Zoom-spikes_Mag_VEA_and_EBAY.png'). Notice that the spike looks different on the magnetometer in the VEA and the one in EBAY_SEIRACK: while the former has a frequency of about 60.24 Hz, the latter has a frequency of 121.65 Hz.
The spikes are spaced between 88.5 and 90.5 minutes apart; in the past, however, the spacing was reported to be 75 minutes. After each spike there is an excess signal that lasts about 23 minutes. I looked at these 23-minute segments at a time when the detector was locked (blue arrow on the time figure '60Hz_Burst_locked_and_unlocked.png'), a time when the detector was unlocked (green arrow on the same figure), and, for comparison, a time when there was no burst (red arrow on the same figure). Then, using the same color coding, I plotted the spectrum of each segment around 60 Hz and its 2nd and 3rd harmonics (attached figures labelled 'Spectrum_left_sideband_at_...png'). Interestingly, these 60 Hz harmonics show a one-sided sideband about 1 Hz from the 60 Hz carrier (particularly intense on the 2nd harmonic).
Then I looked at a microphone (MIC_VEA_MINUSY) and an accelerometer (ACC_VEA_FLOOR) and plotted spectra of the same three time segments around 60 Hz and the 2nd harmonic (figures 'Spectrum_MIC_VEA_...png' and 'Spectrum_ACC_VEA_...png'). Although we do not see the sideband there, we do see a peak very near to where the sideband is at the 60 Hz fundamental. This peak exists also when the burst is not there, so it is most probably unrelated, but worth noting.
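For reference, the segment comparison described above could be reproduced with something like the sketch below; the three start times are placeholders standing in for the arrow times in the figure, and gwpy/NDS access is assumed.

# Hedged sketch: overlay ASDs of the three 23-minute segments (burst/locked,
# burst/unlocked, no burst) near the 60 Hz harmonics. Start times are placeholders.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps
import matplotlib.pyplot as plt

channel = 'H1:PEM-EY_MAG_EBAY_SUSRACK_QUAD_SUM_DQ'
segments = {
    'burst, locked':   '2015-11-13 02:00:00',   # hypothetical
    'burst, unlocked': '2015-11-13 09:00:00',   # hypothetical
    'no burst':        '2015-11-13 12:00:00',   # hypothetical
}
duration = 23 * 60   # seconds

fig, ax = plt.subplots()
for label, start in segments.items():
    gps = to_gps(start)
    data = TimeSeries.get(channel, gps, gps + duration)
    asd = data.asd(fftlength=60, overlap=30)   # ~17 mHz resolution, resolves a 1 Hz sideband
    ax.plot(asd.frequencies.value, asd.value, label=label)

ax.set_xlim(118, 124)     # zoom on the 2nd harmonic, where the sideband is strongest
ax.set_yscale('log')
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('Magnetometer ASD')
ax.legend()
fig.savefig('60Hz_harmonic_comparison.png')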
Tomorrow morning we will go to End Y with portable magnetometers to see if we can notice anything. To predict what time the spikes should occur, I looked at the most recent data I have, from 2015-11-16 21:15:49 to 2015-11-17 05:15:50, shown in the attached plot 'Latest_60Hz_bursts.png'. The spacing there is 85 minutes, so we should expect spikes at about the following times (see also the short sketch after the table):
UTC                      LHO - Local
2015-11-17 16:02:00 8:02 am
2015-11-17 17:27:00 9:27 am
2015-11-17 18:52:00 10:52 am
2015-11-17 20:17:00 12:17 pm
2015-11-17 21:42:00 1:42 pm
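The table above is just the first predicted spike plus multiples of the 85-minute spacing; a minimal sketch of that arithmetic (assuming the spacing holds and PST for local time) is:

# Project spike times forward from the first predicted spike, assuming 85-minute spacing.
from datetime import datetime, timedelta

first_spike_utc = datetime(2015, 11, 17, 16, 2, 0)   # first entry in the table above
spacing = timedelta(minutes=85)
utc_to_local = timedelta(hours=-8)                    # LHO local time (PST) offset

for n in range(5):
    t_utc = first_spike_utc + n * spacing
    t_local = t_utc + utc_to_local
    print(t_utc.strftime('%Y-%m-%d %H:%M:%S UTC'), t_local.strftime('%I:%M %p local'))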
Returned ETMy, ITMy, & TMSy to the state they were in before the last lockloss (by trending the M0 & M1 DAMP P/Y INMON values and using the sliders to return the SUS to those values).
Still have the issue of not being able to get the ALSy power over 0.5. The best I could do was actually up to 0.65 with a 0:1 mode (I changed the trigger threshold to 0.8), but as soon as I took the threshold back down to 0.5, the WFS engaged and drove the alignment off (not surprising, since what they had was a bad 0:1 mode).
I have had zero luck at getting a 0:0. Not sure what other knob I can turn to help me get out of this bad alignment hole.
Have continued Jim's work on trying to get ALSy to a point where the WFS can take over, but have not been able to get to a 0:0 mode, and thus have not had much luck getting powers much over 0.5 (the WFS kick in after 10 sec).
The problem, from my point of view, is that ALSy will lock on a 0:1 mode (and uglier modes) for only a few seconds; why can't it stay in a mode longer so I can get an idea of which way to move the sliders(!)? A few seconds isn't long enough for me to tweak and observe the power to improve it. I wish I could get it to a 0:0, because then I could at least get it above 0.5.
I started out just moving ETMy, but at this point, I have also been adjusting sliders for TMSy & ITMy to no avail.
I did finally get a 10+ sec stretch of 0.5-0.6 (which was a 0:1 mode), but when the WFS kicked in, they drove the power down & knocked it out. SOOO, I really want to get a 0:0 with any power and try to get the WFS to help out.
SDF shows no diffs for the ISIs of the Y-arm. (SDF isn't useful for the SUSes because they are always RED in this state.)
Looking through Sheila's Locking Training Document, I don't see any obvious problems with the Y-arm suspensions (looking at ITMy, ETMy, TMSy):
I have previously shut down the HVAC over the whole site, increasing the inspiral range (Link, Link). The PEM injection report (Link) indicates that coupling in the 5-80 Hz HVAC band is largest at the corner station, so I tried shutting down just the corner station. Figure 1 shows that a shutdown of the corner station HVAC reduces the DARM floor above 80 Hz. Figure 2 shows that SF1, 2, 3 and 4 contribute little, but SF5, SF6 and the chiller pump (actually the turbulence in the system) are the worst contributors.
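For reference, the Figure 1 style comparison could be reproduced with something like the sketch below; the DARM channel name and the HVAC-on reference window are assumptions, and the HVAC-off window is taken loosely from the chiller-pump entry in the schedule that follows.

# Hedged sketch: overlay the DARM ASD for an HVAC-off window against a nearby
# HVAC-on window. Channel and exact times are assumptions for illustration.
from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

darm = 'H1:CAL-DELTAL_EXTERNAL_DQ'   # assumed calibrated DARM channel
windows = {
    'HVAC on':  ('2015-11-15 22:05:00', '2015-11-15 22:12:00'),   # hypothetical reference
    'HVAC off': ('2015-11-15 22:15:00', '2015-11-15 22:22:00'),   # during the chiller pump shutdown below
}

fig, ax = plt.subplots()
for label, (start, end) in windows.items():
    data = TimeSeries.get(darm, start, end)
    asd = data.asd(fftlength=16, overlap=8)
    ax.plot(asd.frequencies.value, asd.value, label=label)

ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlim(10, 500)
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('DARM ASD')
ax.legend()
fig.savefig('darm_hvac_comparison.png')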
Nov 15 UTC
AHU 1-4 and chiller pump off 16:40:00-16:42:25; Back on 16:50:00-16:51:30
AHU 1-4 and chilled water pump off 17:00:00-17:01:30; Back on 17:10:00-17:11:30
AHU 1-4 and chilled water pump off 17:20:00-17:21:45; Back on 17:30:00-17:31:30
AHU4 off 20:15:00-20:15:30; Back on 20:30:00-20:30:30
AHU3 (SF5 and 6) off 20:45:00-20:45:30; Back on 20:55:00-20:55:30
AHU1,2 (SF1,2,3,4) off 21:02:00-21:07:00; Back on 21:17:00-21:17:30
SF1,2,3,4,5,6 off 21:30:00-21:30:30; Back on 21:41:00-21:42:00
Chiller pump off 22:13:00-22:13:30; Back on 22:33:00-22:33:30
SF5,6 off 22:40:00-22:40:30; Back on 22:51:00-22:52:00
SF1,2,3,4 off 23:00:00-23:00:30; Back on 23:10:00-23:10:30
AHU4 (office) off 23:17:00-23:17:30; Back on 23:27:00
Title: Day Nov 15th Summary, 16:00-00:00UTC
State of H1: initial alignment
Shift Summary: Was a good shift until a little bit ago
Shift Details:
The IFO was locked when I arrived; Cheryl reported a quiet shift. Around 18:00 the winds started picking up (see my log 23419, where I briefly tried to get the ETMX ISI to settle down by turning off a boost, to no effect), eventually topping out over 40 mph. ALS and ASC both showed a lot of motion. The IFO stayed locked through the worst of it and the winds calmed until 23:30, when the wind picked back up to just over 20 mph. Then the lock just broke, with no clear cause. ASC was somewhat rung up, as was ALS, but not as much as when the winds were ~40 mph. Now ALS-Y is being difficult; I couldn't get it to re-lock. Corey is starting an initial alignment, but not having much more luck than I did.
I talk about a "new" filter in this alog. Before I get yelled at: I only tested it, I asked first, we are not running it anywhere, no changes have been made to the ISI configuration, and the foton files in the SVN are all still current. Thank you.
On Friday, high winds and high microseism were making locking impossible, so I took the time to mess around with sensor correction. The current X/Y rdr sensor correction was designed to give very minimal gain peaking while providing extra isolation around the 0.46 Hz quad resonance. This helps LHO out, because it is difficult to roll off a 90 mHz CPS blend quickly enough to get good isolation at half a Hz. At LHO we typically use a 90 mHz blend because our useism is low enough that we can tolerate being locked to the ground in the 0.1-0.3 Hz band, and the 90 mHz blends have less gain peaking at frequencies (< 0.1 Hz) that cause problems for the ASC loops. When the useism is high, however, we have to use the 45 mHz blends, but those blends couple in more platform tilt at bad frequencies, a problem that gets amplified when winds are high. I wanted to see if it was possible to change the sensor correction to provide some isolation at the microseism while giving up the gains at half a Hz. I based my design on the current one; I just pushed the region where the phases match up to lie on top of the microseism peak. The fifth picture is a quick and dirty plot I used to design the filter in Matlab: blue is the current filter, green is my new filter, and red is a ratio of transfer functions that indicates the "ideal" filter.
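To illustrate the kind of comparison in that design plot, here is a minimal, hedged sketch overlaying the magnitude responses of two sensor-correction-like high-pass shapes; the zpk values are placeholders for illustration only, not the installed foton filters.

# Placeholder comparison of two sensor-correction-like shapes: a "current"-style
# filter and a candidate re-tuned toward the microseism band. Illustrative only.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

f = np.logspace(-2, 0, 500)          # 0.01 - 1 Hz
w = 2 * np.pi * f

current   = signal.ZerosPolesGain([0, 0, 0], 2 * np.pi * np.array([-0.01, -0.02, -0.05]), 1.0)
candidate = signal.ZerosPolesGain([0, 0, 0], 2 * np.pi * np.array([-0.02, -0.04, -0.08]), 1.0)

for sys, label in [(current, 'placeholder "current" shape'),
                   (candidate, 'placeholder "useism" shape')]:
    _, resp = signal.freqresp(sys, w)
    plt.loglog(f, np.abs(resp), label=label)

plt.axvspan(0.1, 0.3, alpha=0.2, label='microseism band')
plt.axvline(0.46, linestyle='--', label='0.46 Hz quad resonance')
plt.xlabel('Frequency [Hz]')
plt.ylabel('|H(f)|')
plt.legend()
plt.savefig('sc_filter_comparison.png')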
The first plot shows the T240s for the different configurations I tried. Red is the nominal configuration (90 mHz blends, 0.46 Hz SC notch); blue is the high-microseism configuration (45 mHz blends, 0.46 Hz SC notch). Green is the 45 mHz blend with useism SC; brown is the 90 mHz blend with useism SC. This color key holds for all my other plots.
---Blue is pretty good until you get down to 0.1 Hz; then ISI tilt (RY to X coupling on the T240s) and gain peaking in the blends make ASC/LSC difficult. This is why 45 mHz blends don't work with high winds. The ground was a little worse for this measurement below 0.1 Hz, but I think that only explains half the difference here, at most.
---Red is okay until 0.1-0.3 Hz, where we know the microseism was moving the mirrors too much.
---Green should be compared to blue, as it indicates the difference in performance of the two SC filters. The performance is indeed worse at ~0.5 Hz, but better between 0.04-0.2 Hz. I really expected this to be much worse below ~0.1 Hz, instead of only a little worse over 0.02-0.03 Hz.
---Brown should be compared to red, again showing the difference in SC performance. The story is the same as green: better at 0.04-0.2 Hz and worse at ~0.5 Hz.
The second plot is the CPSes, with the same color key: blue & green compare the two SC filters with 45 mHz blends, while red & brown compare the two SC filters with the 90 mHz blends. I think the conclusions are the same, but the CPSes show the low-frequency story better.
The third plot shows the ground at EY at the times I was taking each measurement. Clearly the ground at 0.05 to 0.09 Hz was a little worse than at the other times, but I think it accounts for at most half the difference. The fourth plot shows the CPS and the ground motion for the two SC measurements with 45 mHz blends, which were the two extremes of ground motion. Maybe it's not clear, but I think solid blue is higher above dashed blue than solid green is above dashed green around 0.1 Hz. I should have gotten the ground-to-CPS TFs....
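Since the ground-to-CPS transfer functions weren't taken, here is a hedged sketch of how one could be estimated offline from data; the channel names are assumptions and would need to be checked.

# Hedged sketch: estimate a ground-to-CPS transfer function with Welch cross-spectra.
# Channel names are assumed for illustration; both channels are assumed to share a
# sample rate (resample one of them otherwise).
from gwpy.timeseries import TimeSeriesDict
from scipy import signal
import matplotlib.pyplot as plt

channels = ['H1:ISI-GND_STS_ETMX_Y_DQ',        # assumed ground seismometer channel
            'H1:ISI-ETMX_ST1_CPSINF_Y_DQ']     # assumed stage-1 CPS channel
data = TimeSeriesDict.get(channels, 'Nov 13 2015 08:00', 'Nov 13 2015 09:00')

gnd = data[channels[0]]
cps = data[channels[1]]
fs = gnd.sample_rate.value
nseg = int(256 * fs)     # 256 s segments -> ~4 mHz resolution

f, Pxy = signal.csd(gnd.value, cps.value, fs=fs, nperseg=nseg)
_, Pxx = signal.welch(gnd.value, fs=fs, nperseg=nseg)
_, coh = signal.coherence(gnd.value, cps.value, fs=fs, nperseg=nseg)

band = (f > 0.1) & (f < 0.3)
print('mean coherence in 0.1-0.3 Hz band:', coh[band].mean())

plt.loglog(f, abs(Pxy / Pxx))
plt.xlim(0.01, 1)
plt.xlabel('Frequency [Hz]')
plt.ylabel('|CPS / ground|')
plt.savefig('ground_to_cps_tf.png')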
I don't know if the losses at 0.5 Hz are too much (as much as a factor of 10); I didn't look at oplevs, and there were no cavities when I did this. Frankly, I'm surprised it worked at all. But maybe this is something worth exploring. I would like to try this again when the ground environment is making locking impossible.
Winds are starting to come back up. Earlier I noticed that the ETMX ISI was starting its slow ring-up. Suspecting the lock was not going to last long, I briefly took the IFO out of Observe so I could try disabling the boost on the ETMX ST1 X isolation bank, as that had been successful in the past at settling the ISI down (based on one data point, so it may not be true...). This didn't have any effect, however, so I've put the boost back on and gone back to observing. The wind seems to be getting worse, so this lock probably won't make it much longer. Useism is getting better, so we might be able to lock again if we switch back to 90 mHz blends, but the current lock probably wouldn't survive the switch.
I turned the boost off at about 18:34, then back on at about 18:48, UTC.
This is a classic tale of IM1-3 woes: IM1-3 are very likely to move when the HAM2 ISI trips, so they need to be checked every time.
The IMs come from IOO, so they are unlike any other optics we have and behave in a very different way; they are susceptible to changing alignment when they experience shaking, as they do when the ISI trips.
The IM OSEM values are consistent, and when the optic alignment shifts, it is consistently recovered by driving the optic back to the previous OSEM values, regardless of slider values. The OSEM values, when restored, consistently restore the pointing onto the IM4 Trans QPD.
IM4 Trans QPD reads different values in-lock vs. out-of-lock, so it's necessary to trend a signal like OMC DC A PD to correctly compare times.
IM4 does sometimes shift its alignment after shaking, but because it's moved around by the IFO, choosing a starting value can be difficult. In the case of IM4, restoring its alignment to a recent out-of-lock value should be sufficient to lock, but ultimately IM4 needs to be pointed so that we can lock the X arm in red.
I've tracked the alignment changes for the IM1-3 since 9 Nov 2015, and they are listed below.
These alignment changes are big enough to affect locking, and it's possible that the IFO realignment that was necessary last night was in part a response to IM pointing changes.
I've attached a plot showing the IM alignment channels.
Armed with those channels, the knowledge that the IM OSEM values are trustworthy, and the knowledge that under normal running conditions IM1-3 only drift 1-2 urad in a day, checking and restoring IM alignment after a shaking event (ISI trip, earthquake) should be a fairly quick process.
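A rough sketch of that quick check, assuming gwpy access and a guessed channel-name pattern for the IM damping readbacks (both assumptions would need to be verified), could look like:

# Compare IM1-3 pitch/yaw readbacks before and after a shaking event and flag
# anything that moved by more than the normal 1-2 urad/day drift.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

optics = ['IM1', 'IM2', 'IM3']
dofs = ['P', 'Y']
threshold = 2.0                       # roughly the normal daily drift quoted above
                                      # (assumes the readback is calibrated in urad)

before = 'Nov 9 2015 00:00'           # hypothetical pre-trip reference time
after = 'Nov 10 2015 00:00'           # hypothetical post-trip time

for optic in optics:
    for dof in dofs:
        chan = 'H1:SUS-{}_M1_DAMP_{}_INMON'.format(optic, dof)   # assumed pattern
        t0, t1 = to_gps(before), to_gps(after)
        ref = TimeSeries.get(chan, t0, t0 + 60).mean().value
        now = TimeSeries.get(chan, t1, t1 + 60).mean().value
        shift = now - ref
        flag = '  <-- check' if abs(shift) > threshold else ''
        print('{} {}: {:+.1f}{}'.format(optic, dof, shift, flag))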
Thanks for the write-up here, Cheryl!
General Statement:
Honestly, when it comes to gross misalignments (those which CANNOT be fixed with an Initial Alignment; usually caused by something catastrophic [i.e. power outage, huge earthquake, etc]), I don’t have an idea of where to start.
For example, what specific channels does one check for misalignments (i.e. specific channel names, and is it the same for all optics? What about for ISIs/HEPI, do we need to check them for misalignment?). This is a more specific question for IO, SUS, SEI, & TMS.
Specific Statement/Question:
It sounds like you are finding that the Input Mirrors (IMs) are more susceptible to "shakes" from SEI, whereas the SUSes, being so different and much bigger, aren't as susceptible. This is a big thing, and we should pay attention to changes to the IMs.
Side question: Are the IMs similar to the Tip Tilts?
For input pointing misalignments, what is the cookbook/procedure for checking & fixing (if needed) alignment? Sounds like we:
All of this can be done in the control room, yes? Do we ever have to go out on an IO table?
I’d like something similar for SUS, TMS, & SEI. What signals (specific channels) are best to look at to check the alignment of each suspension or platform?
Anyway, thank you for the write-up and helping to clarify this!
O1 day 59
model restarts logged for Sat 14/Nov/2015: No restarts reported
Title: Ops Owl Summary, 08:00-16:00UTC (00:00-08:00PT)
State of H1: in Observe for 11 hours, range is 79.6 Mpc
Incoming Operator: JimW
Shift Summary: IFO has been locked all shift with good range. Winds have been under 20 mph all shift, and useism is currently about 0.5 at the Corner Station.
Shift Details:
- I restarted gracedb and added some instructions to the wiki page on how to tell if it's running, beyond the indicator on the OPS Overview
- there have been about 6 ETMY saturations
TITLE: 11/15 OWL Shift: 00:00-08:00UTC (16:00-00:00PST), all times posted in UTC
STATE of H1: Observation Mode with range around 76Mpc
Incoming Operator: Cheryl
Support: Talked with Jenne on the phone (& Jim W briefly)
Quick Summary:
With seismicity lower this evening, went about attempting to get H1 back. Jenne helped walk me through fixing the pointing of the input optics to get the PRM locking. Have been in Observation Mode for the last few hours. The range is a bit ragged looking (perhaps related to useism, which is still above the 90th percentile for the LVEA).
Shift Activities:
After the alignment tweaking with Jenne and getting through PRM, I continued with the Initial Alignment. The Dark Michelson came up fairly well aligned on its own and I only barely touched it.
After the IA, attempted locking. The first attempt was on an ugly DRMI mode. The second attempt locked DRMI within 10-15 min. Proceeded through the Guardian states. The first hitch was at ENGAGING ISS 2ND LOOP. Ended up having to engage it by hand (via Kiwamu's alog); this was scary, as it's hard to engage when the output is close to zero. Had a big glitch when I engaged, but it rode through (yay!).
Finally made it to NOMINAL LOW NOISE with range around 80 Mpc (but had a few diffs on SDF). The diffs were:
Input Pointing Diffs:
Some ISIs with GS13 LO Gain Mode diffs (HAM2, HAM5, ITMx)
Now working on getting H1 back UP (& going to have dinner soon).
I meant to attach snapshots of the SDF diffs I observed last night, but forgot to get to that (wrote alog on my laptop and saved snapshots on ops workstation Desktop).
Mainly thought the input pointing changes were worth noting, since that was a noticeable/big change to H1.
Will post when I'm back on shift tonight.
Here (attached) are differences from SDF I noted in the main entry.
Gracedb had some issues last night, details in Keith's alog Link
Our ext_alert program on h1fescript0 had given up attempting to reconnect due to the long duration of the server outage. This morning I tried restarting ext_alert via a monit restart, but this did not work and I ended up starting it by hand. It should be stable now.
Are operators supposed to restart this? I did not receive an alarm last night or tonight (the only way I knew of a "GraceDB query failure" was a red box appearing on the Ops Overview).
There used to be instructions for re-starting this on a wiki, but those instructions have been removed from this page:
https://lhocds.ligo-wa.caltech.edu/wiki/ExternalAlertNotification
So I am not sure if I'm supposed to use the old instructions to start this or have someone else restart it.