TITLE: Sep 14 EVE Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing
SUPPORT: Jenne Driggers was called about comb oscillation in DARM
INCOMING OPERATOR: Nutsinee
Shift Summary: IFO locked since my shift started. There were no rises in seismic activity and the wind remained calm. About an hour before my shift ended, a 2Hz comb developed in DARM. I contacted the commissioner on call and was advised to let it ride. Nutsinee will look into it on her shift.
Activity log:
23:28 OIB/Mode set to Undisturbed/Observing ~ 70Mpc
00:01 Dave Barker noticed that someone ran a hardware injection briefly in CALCS while he happened to be looking over my shoulder.
00:07 ETMY Saturated
02:24 ETMY Saturated
03:29 ETMY Saturated
04:08 It seems there was a decrease in range over the last hour from 72Mpc down to about 68Mpc. Reason is unknown. Environment is quiet. Seismic activity is very quiet. Interestingly, it also seems that Livingston is on this same curve.
05:48 ETMY Saturated
05:50 A 2Hz comb appeared in DARM from ~20Hz to ~100Hz (as far as I can tell; it gets lost in the bucket), which resulted in a drop in range to ~63Mpc. I placed a call to Jenne, who was on call, to ask how I should proceed; the advice was that as long as we're locked, stay locked.
06:25 ETMY Saturated
06:27 ETMY Saturated
06:51 2Hz comb still very apparent in DARM
So, we're still locked and Observing at 63Mpc.
C. Cahillane

I have made a file in /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/compare_actcoeffs_ER8_craig_extrapolation.m. The purpose of this file is to find the frequency dependence of our uncertainty in A_pu and A_tst. It takes our three measurements from August 26, 28, and 29 and finds their weighted means and weighted standard deviations at all frequencies. (Thank you Jeff, Kiwamu, and Darkhan for the code infrastructure.)

I have used the Pcal measurements only for our uncertainty calculations, since Pcal has the lowest uncertainty and is the main method of calibrating our actuators. Free Swinging Mich and ALSDIFF are used to determine the accuracy of Pcal, and Pcal is relatively very precise.

Our measurements go from 3 Hz to 100 Hz, but I need uncertainty from 3 Hz to 5000 Hz, so I am currently using a zeroth-order extrapolation: I take our final uncertainty value at 100 Hz and let that be our uncertainty value for all data points from 100 Hz to 5000 Hz. The actuation term is expected to fall off quickly above 100 Hz, so this should be sufficient for our first-order result. If actuation turns out to be the main contribution to uncertainty, more intelligent steps must be taken to ensure we aren't underestimating it. I will do the same tomorrow with the sensing function residual C_r, probably using zeroth-order extrapolation down to low frequency in that case.

Plots 1, 2, and 3 represent our UIM, PUM, and TST actuation stages. For now I will fold systematic error and statistical uncertainty together in quadrature, i.e. if we have an |A_tst| std of 0.5 m/ct and a systematic residual of 3%, I will make our total variance 0.5^2 + (0.03 * |A_tst|)^2.
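A minimal numpy sketch of the procedure described above (the actual script is MATLAB; the function names and array shapes here are illustrative, not taken from the SVN script):

```python
import numpy as np

def weighted_stats(ys, sigmas):
    """Inverse-variance weighted mean and weighted std across measurements.

    ys, sigmas: arrays of shape (n_measurements, n_frequencies).
    """
    ys = np.asarray(ys, float)
    w = 1.0 / np.asarray(sigmas, float) ** 2          # inverse-variance weights
    mean = np.sum(w * ys, axis=0) / np.sum(w, axis=0)
    var = np.sum(w * (ys - mean) ** 2, axis=0) / np.sum(w, axis=0)
    return mean, np.sqrt(var)

def zeroth_order_extrapolate(freqs, unc, f_max_meas=100.0):
    """Hold the uncertainty at its last measured value above f_max_meas."""
    freqs = np.asarray(freqs, float)
    unc = np.array(unc, float)
    unc[freqs > f_max_meas] = unc[freqs <= f_max_meas][-1]
    return unc

def total_uncertainty(stat_std, A, sys_frac=0.03):
    """Fold statistical std and fractional systematic residual in quadrature."""
    return np.hypot(stat_std, sys_frac * np.abs(A))
```

For example, two measurements of 1 and 3 (equal sigma) give a weighted mean of 2 and weighted std of 1, and an uncertainty vector measured up to 100 Hz is simply held flat beyond it.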
Summary: IFO locked since my shift started. Aside from the earlier noise trouble we've been locked at ~72Mpc. No EQs showing and wind is calm. A couple of ETMY glitches dropped the range to 10Mpc. Rick and the Pcal team still hard at work.
00:55UTC The Transient Injection was toggled to "Disable"
TITLE: Sep 14 EVE Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Jim Warner
QUICK SUMMARY: Control room is pretty full of the usual suspects. Lights are on in the LVEA. Wind is below 20mph. EQ activity is quiet. IFO is experiencing some anomalous broadband noise. Commissioners are working on a solution.
J. Kissel

Since updates to the CAL-CS model have been a little sporadic over the past few days, I recap below what happened when, as logged by the filter archive:

What changed | Local time (PDT) | Archive of change | aLOG
SUS dynamical model update is complete | Thu Sep 10 13:50 | H1CALCS_1125953445.txt | LHO aLOG 21322
Cavity pole frequency changed from 389 to 341 [Hz] (+ optical gain name change) | Thu Sep 10 14:03 | H1CALCS_1125954209.txt |
Optical gain is renamed (no substantial change) | Thu Sep 10 14:03 | H1CALCS_1125954234.txt |
DARM_ERR optical gain is tuned | Thu Sep 10 17:49 | H1CALCS_1125967809.txt | LHO aLOG 21385
Installed inverse actuation filter | Sun Sep 13 22:35 | H1CALCS_1126244167.txt | LHO aLOG 21487
Kiwamu changes L1 and L2 violin modes | Mon Sep 14 09:03 | H1CALCS_1126281835.txt | LHO aLOG 21500
Copied new inverse actuation filter over to blind injection | Mon Sep 14 09:08 | H1CALCS_1126282148.txt |
Changed the names of the blind injection filter banks (no substantial change) | Mon Sep 14 09:09 | H1CALCS_1126282176.txt | LHO aLOG 21499

(Note the filter files are tagged with the GPS time at which they're created, so they're more accurate (to the second) than the local times I quote.)

Given that the DARM calibration was tuned at Sep 10 2015 17:49:00 PDT = Sep 11 2015 00:49:00 UTC, and Kiwamu started the PCAL to CAL-DELTAL_EXTERNAL transfer function at Sep 10 2015 18:08:12 PDT = Sep 11 2015 01:08:12 UTC, I would conclude that the front-end calibration has been stable for LHO from the first lock stretch placed in observation mode after that measurement, which starts at 1125972087 = Sep 11 2015 02:01:10 UTC = Sep 10 2015 19:01:10 PDT, until the last observation stretch before Kiwamu's change at Sep 14 2015 09:03:00 PDT = Sep 14 2015 16:03:00 UTC; that stretch ended at 1126281754 = Sep 14 2015 16:02:17 UTC = Sep 14 2015 09:02:17 PDT.
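For cross-checking the GPS-tagged archive names against local times, a small conversion sketch (17 s is the GPS-UTC leap-second offset valid in September 2015; it changes over time, so this constant is an assumption for these dates):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 17  # GPS-UTC offset for mid-2015 only; look it up for other epochs

def gps_to_utc(gps_seconds):
    """Convert a GPS timestamp (as in the H1CALCS_<gps>.txt archive names) to UTC."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

# e.g. the inverse-actuation-filter archive H1CALCS_1126244167.txt:
print(gps_to_utc(1126244167))  # 2015-09-14 05:35:50+00:00, i.e. Sep 13 22:35:50 PDT
```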
I say the front-end calibration has been "stable" instead of "valid" because we know that Kiwamu's work this morning was to correct a systematic error from a bug that had been found. We only briefly took the IFO out of observation mode (though the IFO stayed happily locked) while Kiwamu loaded new SUS filters and I loaded the copy of the inverse actuation filters. We resumed a new observation segment at Sep 14 2015 16:10:43 UTC. As noted in Kiwamu's most recent aLOG about updating the calibration this morning (LHO aLOG 21500), his update changed the UIM and PUM stages, and only for features well above the cross-over frequencies of those stages (TST/PUM ~30-40 [Hz], PUM/UIM ~2 [Hz]). The update was a bug fix, and it means the front end is now more accurately representing the violin modes that were already correct in the DARM model. We all suspect this is a small change in the overall actuation function, but the recent discovery of how poorly the PUM has been rolled off (LHO aLOG 21419) means we have to verify. After these messages, we'll be riiiight back. *whistle glitch*

P.S. Since Matt and I performed all of our inverse actuation filter design on the original matlab model of the DARM actuation function, the inverse actuation filter design and implementation is unaffected by Kiwamu's change, nor was it impacted by the systematic error that he found.
Mike Landry & Vern Sandberg ER8 to O1 Transition
Cancellation of Tuesday Maintenance Day, September 15, 2015 to allow for IFO ER8-Quality data collection.
Title: 9/14 Day Shift: 15:00-23:00UTC all times posted in UTC
State of H1: Observation Mode at 70Mpc for the last 8hrs
Support: All the commissioners
Shift Summary: Mostly quiet, but worsening range at the end
Activity Log:
Checked the Reservoir Levels this morning. All steady; no further maintenance called for.
Attached are 60 day means of the HEPI Drive outputs. The expectation is that if we have a bad valve, this would show in a trend up or step up as the valve is failing.
Not sure about the efficacy of this manner of investigation; this 60 day look could easily be too coarse to see the needed details, and it is confounded by platform trips and maybe the occasional alignment change. For now, though, it looks like there is no obviously bad or failing valve. I'll continue with weekly or bi-weekly short-duration trends. We are attempting to avoid doing the invasive valve check routine.
From the tinj log files, and visible on https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150914/plots/H1-ALL_893A96_ODC-1126224017-86400.png, I see that two scheduled burst injections were successful this morning. The GPS times were 1126270500 and 1126280500. That's good, because it seems to mean that yesterday's problem with injections was temporary. These injections were only done at LHO; LLO was not observing at those times.
Beverly, Jess
The full details for the shift may be found here.
Bubba and I inspected the instrument air compressors at both end stations and the corner.
The two end stations have failed vibration isolators. Bubba is checking our inventory of spares.
The corner station is in good condition - and has been mounted in a different fashion.
Attached are two photos; shorted.jpg is one of the end station units, notshorted.jpg is the corner.
Ref; https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=21436
Things were quiet. Then Stefan arrived, and I realized I had set the observatory mode but forgot the Intent Bit, which spiraled into a bunch of SDF differences. The LSC guardian showed a difference on a t-ramp (annoying that this kind of difference would keep us from going into Observe), and ASC had two diffs that I guess Evan logged about last night but didn't accept. Then Stefan went and set a bunch of ODC masks (this also would have kept us from going into Observe, so also annoying) as well as tuning the whitening on the IM4 QPD (which was saturating; Stefan assures me it is out of loop, so it shouldn't affect anything). Guardian recovered from the earlier lock loss in 20 minutes (not annoying!) back to 70 Mpc and it has been otherwise quiet.
We just came out of Observe for a minute so Stefan could set more ODC bits. Service has resumed.
The ODC status should not affect OBSERVATION READY in any way. If it does, then ODC is misconfigured and needs to be fixed.
SDF is unfortunately looking at all channels, including ODC mask bits. When anyone updates the ODC masks, the SDF goes red, and you then have to accept the ODC mask channel value in SDF, or ignore it. Again, it's a one-time thing to update these ODC values; it's just late in the game and not all at one time.
While we were in commissioning mode, we had a strange non-stationary spectrum, with large non-stationary noise also appearing in MICH and PRCL. I quickly looked at some triple witness sensors, since we've had issues with these recently, but didn't see anything obvious.
The strange noise started a few minutes before 2:50:34 UTC Sept 10, seemed to get better for a few minutes and then got worse again around 2:57:21.
We don't think that this was related to any commissioning we were doing. Evan had temporarily lowered the DARM UGF and put a notch in (neither of which should cause anything like this). He undid both changes when we saw the unusual noise, but the noise was unaffected.
We aren't going to investigate any further right now since the problem seems to have gone away.
This is happening again (this time it seems worse). It started about half an hour ago, at 22:30 UTC on Sept 14th.
We ran a bruco on 5 minutes of this excessively noisy period (2015-09-14 22:55:00). Results here.
From 10 Hz to 1 kHz, most of the LSC and ASC signals are coherent with DARM, so it's hard to say what's going on here.
Above 1 kHz, there is some coherence with the RFAM monitor on the EOM driver. A series of spectra are attached showing elevated RIN in this monitor, compared to a quiet period roughly 30 minutes later. What mechanism could cause this? Also included are spectra of the LSC magnetometer in the CER, since this also showed elevated coherence.
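A coherence scan of this kind can be reproduced offline with scipy; the sketch below uses synthetic stand-in traces rather than real H1 frames, with the "DARM" and "witness" roles purely illustrative:

```python
import numpy as np
from scipy.signal import coherence

fs = 16384                      # typical H1 fast-channel sample rate
rng = np.random.default_rng(0)

# Stand-in time series: a DARM-like trace and a witness (e.g. the RFAM
# monitor) sharing a common disturbance, plus independent sensor noise.
common = rng.standard_normal(fs * 8)
darm = common + 0.5 * rng.standard_normal(fs * 8)
witness = common + 0.5 * rng.standard_normal(fs * 8)

# nperseg = fs gives 1 Hz frequency resolution, Welch-averaged segments.
f, Cxy = coherence(darm, witness, fs=fs, nperseg=fs)
# High coherence flags bands where the witness explains the DARM noise.
```

A bruco essentially repeats this calculation for every auxiliary channel and ranks the results per frequency bin.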
RF45 AM stabilization looks fishy.
Look at feedback signal (Ch3) VS the range. They're in sync today (left) as well as back on 10th of Sept. (right).
Dan, Evan
This evening we made a qualitative study of the coupling of beam jitter before the IMC into DARM. This is going to need more attention, but it looks like the quiescent noise level may be as high as 10% of the DARM noise floor around 200Hz. While we don't yet understand the coupling mechanism, this might explain some of the excess noise between 100-200Hz in the latest noise budget.
We drove IMC-PZT with white noise in pitch, and then yaw. The amplitude was chosen to raise the broadband noise measured by IMC-WFS_A_I_{PIT,YAW} to approximately 10x the quiescent noise floor. This isn't a pure out-of-loop sensor, and since we were driving the control point of the DOF3 and DOF5 loops of the IMC alignment channels we will need to work out the loop suppression to get an idea of how much input beam motion was being generated. Unfortunately we don't have a true out-of-loop sensor of alignment before the IMC. We may try this test again with the loops off, or the gain reduced, or calibrate the motion using the IMC WFS dc channels with the IMC unlocked. Recall that Keita has commissioned the DOF5 YAW loop to suppress the intensity noise around 300Hz.
The two attached plots show the coherence between the excitation channel (PIT or YAW) and various interferometer channels. The coupling from YAW is much worse: at 200Hz, an excitation 10x larger than normal noise (we think) generates coherence around 0.6, so the quiescent level could generate a few percent of the DARM noise. Looking at these plots has us pretty stumped. How does input beam jitter couple into DARM? If it's jitter --> intensity noise, why isn't it coherent with something like REFL_A_LF or POP_A_LF (not shown, but zero)?
The third plot is a comparison of various channels with the excitation on (red) and off (blue). Note the DCPD sum in the upper right corner. Will have to think more about this after getting some sleep.
Transfer function please.
TFs of the yaw measurement attached.
If the WFS A error signal accurately represents the quiescent yaw jitter into the IMC, the orange TF suggests that this jitter contributes to the DCPD sum at a level of 3×10^-8 mA/Hz^(1/2) at 100 Hz, which is about a factor of 6 below the total noise.
Using this measured WFS A yaw → DCPD sum TF, I projected the noise from WFS A onto the DARM spectrum (using data from 2015-08-27). Since the coupling TF was taken during a completely different lock stretch than the noises, this should be taken with a grain of salt. However, it gives us an idea of how significant the jitter is above 100 Hz. (Pitch has not yet been included.)
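The projection described here amounts to multiplying the witness amplitude spectral density by the measured transfer-function magnitude; a schematic version (arrays are placeholders for the real WFS A and DCPD data):

```python
import numpy as np

def project_noise(freqs, witness_asd, tf_mag):
    """Project a witness spectrum onto a target: ASD_contrib(f) = |TF(f)| * ASD_witness(f).

    Valid only if the coupling is linear and stationary; since the TF here
    was measured in a different lock stretch than the noise data, treat the
    result as an order-of-magnitude estimate.
    """
    return np.abs(np.asarray(tf_mag)) * np.asarray(witness_asd, float)

# Illustrative: a witness ASD of 2 units/rtHz through a |TF| of 1.5e-8 mA/unit
# projects to 3e-8 mA/rtHz, the level quoted above at 100 Hz.
```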
PIT coupling per beam rotation angle is a factor of 7.5 smaller than YAW:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=21212
Re: "How does beam jitter couple to DARM?" : jitter can couple to DARM via misalignments of core optics (see https://www.osapublishing.org/ao/abstract.cfm?uri=ao-37-28-6734).
If this is the dominant coupling mechanism, you should see some coherence between a DARM BLRMS channel where this jitter noise is the dominant noise (you may need to drive jitter with white noise for this) and some of the main IFO WFS channels.
The BLRMS in the input beam jitter region (300-400 Hz) is remarkably stable over each lock (see my entry here), so there seems to be no clear correlation with residual motion of any IFO angular control.
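For reference, a band-limited RMS like the 300-400 Hz jitter-band channel mentioned above can be computed from a one-sided ASD as the square root of the PSD integrated over the band; a sketch with made-up numbers (not the live BLRMS implementation):

```python
import numpy as np

def blrms(freqs, asd, f_lo=300.0, f_hi=400.0):
    """Band-limited RMS from a one-sided ASD via rectangular integration.

    RMS = sqrt( sum( ASD(f)^2 * df ) ) over the half-open band [f_lo, f_hi).
    """
    freqs = np.asarray(freqs, float)
    asd = np.asarray(asd, float)
    df = np.median(np.diff(freqs))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return np.sqrt(np.sum(asd[band] ** 2) * df)

# A flat ASD of 1e-20 /rtHz over a 100 Hz band gives a BLRMS of 1e-19.
```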
Thanks for the link to that post, I hadn't seen it. It may still be possible though that there's some alignment offset in the main IFO that couples the jitter to DARM (i.e. a DC offset that is large compared to residual motion – perhaps caused by mode mismatch + miscentering on a WFS). This could be checked by putting offsets on WFS channels and seeing how the coupling changes.