The DCS Det Char computer was frozen again, causing the LHO Summary Pages to not show up. Dan Moraru has power cycled the box and Det Char has been informed. The summary pages should start to show up again (though it will take a while for the det char jobs to catch up). (See also alog 32162.)
The SRM M2 UL OSEM has ~1000 counts on it, which means that the magnet is very far into the OSEM head, blocking almost all of the light (nominal is ~15000 counts). I trended way back and found that this OSEM has been like this since the initial pointing of the IFO by commissioners just after install and pump-down in Aug 2014. So this OSEM isn't "dying"; it is just very poorly centered and needs attention at the next vent. Although this symptom has been around for a very long time, it apparently has not caused a specific issue for the IFO and therefore has not been logged explicitly. Still, I want to record it as a WHENVENT FRS.
Filing FRS.
[M Pirello, F Clara]
The +/- 430V High Voltage [HV] Electrostatic Driver Kepco power supplies at End Y were replaced with new lower noise Technology Dynamics HV power supplies. These supplies have analog controls for voltage and current and in the event of a power cycle, they will return to their previous values. They also should stop randomly tripping like the Kepco supplies. I have attached images of the front and back of the installed supplies.
The old Kepco supplies were placed on a cart next to the power rack while we evaluate the performance of these new supplies.
Greg Mendell, Patrick Brockill, Aaron Viets

I restarted the primary and redundant GDS pipelines at GPS time 1164999500. The latency was ~10 seconds and the CPU usage is ~80%, both in the usual range. This restart picked up gstlal-calibration-1.1.0, which includes a bug fix that should solve the problem Greg discovered last week with non-identical h(t) produced by the primary vs. redundant pipelines. This required two changes:
1) Removal of the start-time dependence from the demodulation routine used in computing the time-dependent corrections.
2) Removal of the GStreamer stock element audiocheblimit, which I replaced with lal_firbank. The audiocheblimit element appears to have a bug that makes its output irreproducible in some instances, even when run on the very same data.

With this fix, h(t) is identical (to 16 digits, as far as I can tell) between runs with different start times, after sufficient filter settling time. The settling time depends on the state of the interferometer at the time of the start (whether it's locked, and the coherence of the calibration lines). Also incorporated into this version is additional gating in the kappa_tst calculation. See this aLOG for more information: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=31911

The first two plots show the h(t) spectrum compared between GDS and CALCS. The third plot is a time series of the kappas. The filters have not been changed. See this aLOG for information on the filters: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=31712
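As an aside on why swapping the IIR low-pass for an FIR removes the start-time dependence: an FIR filter's output at a given sample depends only on a finite window of past input, so two runs started at different times agree exactly once both have seen that window, whereas an IIR filter carries a (decaying) memory of its initial state. A toy illustration in scipy (my own sketch, not the gstlal code; the sample rate, cutoff, and tap count are arbitrary):

    import numpy as np
    from scipy import signal

    fs = 16384.0                                   # assumed sample rate (toy value)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(int(10 * fs))          # 10 s of toy data

    ntaps = 1025
    fir = signal.firwin(ntaps, 10.0, fs=fs)        # 10 Hz FIR low-pass (arbitrary choice)

    offset = 4096                                  # second run starts 0.25 s later
    y_full = signal.lfilter(fir, 1.0, x)           # run started at t = 0
    y_late = signal.lfilter(fir, 1.0, x[offset:])  # run started at t = offset/fs

    # Once both runs have seen ntaps samples of the same data, the outputs agree exactly.
    settled = slice(ntaps, None)
    print(np.max(np.abs(y_full[offset:][settled] - y_late[settled])))   # expect 0.0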
J. Kissel

As more evidence of non-linear scattered light being a problem for the low-frequency sensitivity of the IFO, I show a trend of the 3-10 Hz BLRMS velocity of the ground in each translational DOF (as reported by the ITMY GND STS, in the PEM stay-clear zone of the H1/H2 biergarten) vs. the binary neutron star (BNS) range. There is a clear correlation between the two. The BLRMS levels that show correlation are roughly ~100-200 [nm/s] RMS, or ~1-5 [nm] RMS. I also show trends of the same BLRMS band for the ISI ST1 T240s and ST2 GS13s in the Z direction. While the overall amplitude is reduced by an order of magnitude or two, the variation remains identical to the ground.

@LSCFellows, @DetChar -- it would be good to do some scattering-fringe studies (e.g. LHO aLOG 22405) comparing how the speed of each chamber's isolation platform (as measured by T240s and/or GS13s) relates to the various BLRMS bands of the DARM spectrum. This may also have to do with Robert's recent discovery of problems with the ITM elliptical baffles -- see, e.g., LHO aLOG 31886. Remember that the 3-10 Hz band covers the HEPI crossbeam foot resonance (see LHO aLOG 13505 and G1401167), so ground motion is amplified in this region.

And just in case anyone wants to blame the changes in BNS range on fluctuations in optical gain due to slow alignment drifts or other optical plant changes, I attach a trend of the relative optical gain. There is no correlation there.
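For anyone who wants to reproduce this kind of trend offline, here is a minimal sketch of the band-limited RMS computation (my own scipy version with an assumed sample rate, filter order, and 60 s stride, not the site's BLRMS code):

    import numpy as np
    from scipy import signal

    def blrms(v, fs, f_lo=3.0, f_hi=10.0, stride=60.0):
        # Band-limit the velocity time series v to [f_lo, f_hi] Hz (zero-phase
        # Butterworth), then return the RMS over consecutive stride-second chunks.
        sos = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        v_band = signal.sosfiltfilt(sos, v)
        n = int(stride * fs)
        nseg = len(v_band) // n
        segs = v_band[:nseg * n].reshape(nseg, n)
        return np.sqrt(np.mean(segs**2, axis=1))

    # Example with fake data; real use would pull the GND STS channel from frames.
    fs = 256.0
    v = np.random.randn(int(3600 * fs))        # 1 hour of toy "velocity" data
    print(blrms(v, fs)[:5])                    # 3-10 Hz BLRMS, 60 s stride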
The most likely explanation for this behavior is that we have the ISCT1 beam diverter open (see alog 30835 for evidence that this is a problem which leads to upconversion).
This is why we had been trying to switch to REFL WFS before the run started, which might work even with the low recycling gain and would allow us to close the beam diverter. It would only be worth working on this if people are willing to accept a configuration change.
Since Daniel/Dave is about to reboot the MC1-3 SUS, ASC, and LSC models, Kissel and I are trying to tackle the SDF changes in the SAFE files - specifically the ~30+ that were sitting in the ASC SDF. Many of them were from changes made at various times over the last 2 weeks while the IFO was down. We have reconciled:
- ACCEPTED Sheila's threshold changes from Nov 23, 2016 - alog 31794.
- ACCEPTED changes that were accepted in OBSERVE, but not yet in SAFE, mostly loop offsets such as CSOFT, etc.
- ACCEPTED INMATRIX_P and _Y changes from someone's test a while ago; these are now 0.
- NOT MON - ASC-ADS_SEN_MTRX dither channels - under guardian control
- NOT MON - Various loop filter switches - now under guardian control
- ACCEPTED some random MC2 M3 DRIVEALIGN output switch that is not used.
(see WP #6377) We are comparing gauge drift behavior when pumped by a local turbo vs. when pumped by the site vacuum volume. The attached plot shows how other gauges which share the common 24VDC power supply respond to changing electrical demands - in this case, PT120B responds to PT180 traversing through different pressure ranges. We aren't interested in pursuing elimination of this phenomenon at this time; rather, we want to document it for future reference.
Since Livingston was down, we took the opportunity to tweak the alignment into the reference cavity, since its transmission had fallen from ~3.9V to ~2.1V. The alignment was tweaked (see RCAlignment.png). However, the noise eater decided to start oscillating; toggling its switch restored the reference cavity transmission to a higher value (NoiseEater.png). Christina also wiped/mopped the Ante-room and the Laser Room. We confirmed that all the computers were powered down. Christina/Jason/Peter
J. Kissel, E. Goetz for S. Karki
WP #6368

After a nice, high-duty-cycle weekend, we continue the schedule for this roaming line with a move from 3001.3 Hz to 3501.3 Hz.

[edit] SDF screenshot: the changes are now stored in the OBSERVE and SAFE .snaps for the H1CALEX SDF tables.

Frequency   Planned Amplitude   Planned Duration   Actual Amplitude   Start Time                 Stop Time                  Achieved Duration
(Hz)        (ct)                (hh:mm)            (ct)               (UTC)                      (UTC)                      (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------
1001.3      35k                 02:00              39322.0            Nov 28 2016 17:20:44 UTC   Nov 30 2016 17:16:00 UTC   days @ 30 W
1501.3      35k                 02:00              39322.0            Nov 30 2016 17:27:00 UTC   Nov 30 2016 19:36:00 UTC   02:09 @ 30 W
2001.3      35k                 02:00              39322.0            Nov 30 2016 19:36:00 UTC   Nov 30 2016 22:07:00 UTC   02:31 @ 30 W
2501.3      35k                 05:00              39322.0            Nov 30 2016 22:08:00 UTC   Dec 02 2016 20:16:00 UTC   days @ 30 W
3001.3      35k                 05:00              39322.0            Dec 02 2016 20:17:00 UTC   Dec 05 2016 16:58:57 UTC   days @ 30 W
3501.3      35k                 05:00              39322.0            Dec 05 2016 16:58:57 UTC
4001.3      40k                 10:00              39322.0
4301.3      40k                 10:00              39322.0
4501.3      40k                 10:00              39322.0
4801.3      40k                 10:00              39222.0
5001.3      40k                 10:00              39222.0

Frequency   Planned Amplitude   Planned Duration   Actual Amplitude   Start Time                 Stop Time                  Achieved Duration
(Hz)        (ct)                (hh:mm)            (ct)               (UTC)                      (UTC)                      (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------
1001.3      35k                 02:00              39322.0            Nov 11 2016 21:37:50 UTC   Nov 12 2016 03:28:21 UTC   ~several hours @ 25 W
1501.3      35k                 02:00              39322.0            Oct 24 2016 15:26:57 UTC   Oct 31 2016 15:44:29 UTC   ~week @ 25 W
2001.3      35k                 02:00              39322.0            Oct 17 2016 21:22:03 UTC   Oct 24 2016 15:26:57 UTC   several days (at both 50 W and 25 W)
2501.3      35k                 05:00              39322.0            Oct 12 2016 03:20:41 UTC   Oct 17 2016 21:22:03 UTC   days @ 50 W
3001.3      35k                 05:00              39322.0            Oct 06 2016 18:39:26 UTC   Oct 12 2016 03:20:41 UTC   days @ 50 W
3501.3      35k                 05:00              39322.0            Jul 06 2016 18:56:13 UTC   Oct 06 2016 18:39:26 UTC   months @ 50 W
4001.3      40k                 10:00              39322.0            Nov 12 2016 03:28:21 UTC   Nov 16 2016 22:17:29 UTC   days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3      40k                 10:00              39322.0            Nov 16 2016 22:17:29 UTC   Nov 18 2016 17:08:49 UTC   days @ 30 W
4501.3      40k                 10:00              39322.0            Nov 18 2016 17:08:49 UTC   Nov 20 2016 16:54:32 UTC   days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3      40k                 10:00              39222.0            Nov 20 2016 16:54:32 UTC   Nov 22 2016 23:56:06 UTC   days @ 30 W
5001.3      40k                 10:00              39222.0            Nov 22 2016 23:56:06 UTC   Nov 28 2016 17:20:44 UTC   days @ 30 W (line was OFF and ON for hardware INJ)
Maintenance today while LLO is down:
- Jeff B. - move cleanroom curtains, climbing on output tube
- Peter - reference cavity alignment (1/2 hour)
- Richard - swap SRM analog camera for Gig-E
- Richard - swap ESD HV supplies at End Y
- Daniel - DAQ changes
- Christina - cleaning in PSL enclosure
- Betsy - charge measurements
- Kyle - isolate PT180 and pump down
- Rana on site Tuesday with group of artists, would like to enter LVEA if possible
- TIAA-CREF representative on site
TITLE: 12/05 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 71.4238Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Locked for my whole shift. The drop in range seems to have fixed itself; I couldn't find a cause. Possibly Hanford traffic?
LOG:
I have increased the heat in the LVEA in sections 1A and 5 by 1 mA each.
I also raised the heat at End X from 9.5 mA to 10 mA.
End X is slow to respond, so I have raised the heater to 11.5 mA and increased the SF01 fan flow to 12 mA and ~11,500 cfm.
I assume these temperature increases were well motivated, but I hope the effects of such increases were considered beforehand, especially considering we're in the middle of a run. Small temperature variations can have a noticeable impact on the IFO, such as changing test mass suspension positions and/or acoustic mode frequencies.
It looks like the "temperature" difference units are given in amps? What is the temperature as a function of heater current?
Been locked and observing for 5+ hours. There have been a few glitches lately, and the range has started to drop a little (from 75 to 70 Mpc). There seems to be a bit more noise in DARM around 30-90 Hz. Everything else looks good aside from that. I'll keep investigating.
TITLE: 12/05 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.50 μm/s
QUICK SUMMARY: After a 39hr lock, Ed handed me a fully locked IFO. Nice.
06:30 Lockloss - reason unknown. There were no alerts and no obvious environmental causes. Going to begin re-locking. I will switch the ISI config back to WINDY per the email that Jim just sent out. After locking, I may run the a2l script if the a2l DTT measurement shows it is necessary.
H1 has been locked for 37 hours 10 min. The useism mean is resting at about the 90th percentile. Winds got a little feisty but have calmed to less than 20 mph. Some EQ activity caused a slight rise in the BLRMS about 2 hours ago. There have been some EY glitches, as noted in my activity log below. There was a CP4 alarm and a PSL dust monitor alarm. The TCSY chiller message was as it was on Patrick's shift. A plot of the glitch is also below.
At the beginning of my shift I ran the measurement that looks at the DARM coherence for a2l. The plot is below.
01:21 EY saturation.
01:51 Granted Carlos remote access to CDS
01:55 CP4 alarm
02:03 TCSY chiller flow message - Verbal Alarm
02:15 BIG EY glitch took the range to 4.1 Mpc
02:44 EY glitch
C. Cahillane

The ER10 LHO Actuation Uncertainty Budget

What has been done:
(1) Gathered all the Newtons/count actuation function measurements for the three stages (UIM, PUM, TST) from November 8th, 9th, 11th, 12th, and 18th.
(2) Ran an MCMC over the physical parameters gain and delay for all three stages. (Only used freq < 10 Hz for UIM and freq < 100 Hz for PUM.)
(3) Plotted MCMC corner plots and the resulting residuals uncertainty budget from the MCMC PDFs.
(4) Took the residuals from the MCMC physical parameters' fit and plugged them into a Gaussian Process with kernel k(x,y) = a + b * exp(-0.5*(x-y)**2/l**2), where a, b, and l are the hyperparameters of the GP.
(5) Trained the GP on the residuals, and produced a mean posterior fit and an uncertainty budget from the trained kernel (a toy sketch of this step is included at the end of this entry).

TL;DR: Take measurements -> Physical MCMC -> Unmodeled GP -> Frequency-Dependent Systematic Error +- Uncertainty Budget

The mean posterior fit is the frequency-dependent systematic error. The uncertainty budget is also frequency-dependent. The Gaussian Process results are shown in Plots 1, 2, and 3 for the UIM, PUM, and TST stages. The 1-sigma, 68% confidence intervals are plotted.

I want to call attention to how small the uncertainty is in the bucket. The frequency-dependent uncertainty in the bucket is on the order of 0.1%. This is exciting, but is it real?

Points in favor:
(1) We have five sets of measurements, and uncertainty roughly goes as 1/sqrt(N), where N is the number of measurements. This allows us to win a lot where the measurements are nearly the same and the error bars are small.
(2) The GP fit hits most of the measurement error bars.
(3) The uncertainty expands where we have less information.

Points against:
(1) Overfitting might be a problem. There seem to be unphysical wiggles that nail our data in the bucket.
(2) This uncertainty budget is lower than ever before by a factor of 10. This requires extraordinary proof.

Rebuttals:
(1) We might be able to combat overfitting by lengthening the kernel length scale and adding more noise to our measurements. Also, the whole point of this method is to capture unmodeled wiggles.
(2) The data analysis methods used are more advanced and designed to handle frequency dependence.

Plots 4, 5, and 6 are the UIM, PUM, and TST physical model MCMC fits with the measurements. Plots 7, 8, and 9 are the UIM, PUM, and TST physical model MCMC parameter corner plots.

A couple of points:
(1) The GP uncertainty doesn't take into account the uncertainty from the MCMC physical parameters. The GP uncertainty dominates nearly everywhere, since the MCMC uncertainty is so tiny (see Plots 4, 5, 6). I may try to incorporate the MCMC uncertainty directly into the measurement error bars that I input into the GP.
(2) A lot remains to be done: LLO actuation, sensing at both sites, and the final response uncertainty.
(3) If these results hold, we may have uncertainty dominated by time dependence. Stay tuned.
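For concreteness, here is a toy sketch of the GP-on-residuals step (my own illustration in scikit-learn, which is an assumed stand-in for the actual fitting code; the frequency band, residuals, error bars, and starting hyperparameter values are all made up):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ConstantKernel, RBF

    # Placeholder measurement residuals: |A_meas / A_model| vs. frequency, with
    # per-point 1-sigma error bars (all values here are invented for illustration).
    freq = np.logspace(1, 3, 50)                          # Hz
    resid = 1.0 + 0.003 * np.random.randn(freq.size)
    sigma = 0.005 * np.ones_like(resid)

    x = np.log10(freq).reshape(-1, 1)

    # k(x,y) = a + b * exp(-0.5*(x-y)**2 / l**2); a, b, l are the GP hyperparameters
    kernel = ConstantKernel(1.0) + ConstantKernel(1.0) * RBF(length_scale=0.3)

    gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2, normalize_y=True)
    gp.fit(x, resid)

    # Mean posterior = frequency-dependent systematic error estimate;
    # std = frequency-dependent uncertainty budget for this stage.
    x_pred = np.log10(np.logspace(1, 3, 500)).reshape(-1, 1)
    mean, std = gp.predict(x_pred, return_std=True)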
C. Cahillane

I have changed the kernel the Gaussian Process runs on in order to get more realistic error bars.

The plots for the UIM stage use a dot-product kernel:
k(x,y) = 1.0 + x.y + noise
The plots for the PUM and TST stages use a squared dot-product kernel:
k(x,y) = 1.0 + x.y + (x.y)**2 + noise
(A hedged sketch of these kernels is included at the end of this entry.)

The results show a simpler and more physically realistic correlation between measurement points (no more overfitting wiggles, slightly expanded uncertainty bars, and no instabilities in the GP results). Our minimum uncertainty in the TST stage in the bucket is on the order of ~0.6% and 0.3 degrees, so still very good.

Still to do:
(1) Remove time dependence from the measurements by modifying the .conf files with the relevant kappas.
(2) MCMC to find physical parameters on only the reference measurement.
(3) LLO actuation, LHO and LLO sensing.
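In the same assumed scikit-learn notation as the sketch in the previous entry, the updated kernels would be constructed roughly like this (the noise levels are placeholders); they drop into a GaussianProcessRegressor exactly as before:

    from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

    # UIM: k(x,y) = 1.0 + x.y + noise
    kernel_uim = DotProduct(sigma_0=1.0) + WhiteKernel(noise_level=1e-5)

    # PUM, TST: k(x,y) = 1.0 + x.y + (x.y)**2 + noise
    # DotProduct(sigma_0=1) gives 1 + x.y; squaring a zero-offset DotProduct gives (x.y)**2.
    kernel_pum_tst = (DotProduct(sigma_0=1.0)
                      + DotProduct(sigma_0=0.0, sigma_0_bounds="fixed") ** 2
                      + WhiteKernel(noise_level=1e-5))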