Posted are data for the two long term storage dry boxes (DB1 & DB4) in use in the VPW. Measurement data looks good, with no issues or problems being noted. I will collect the data from the desiccant cabinet in the LVEA during the next maintenance window.
This is the data for the 3IFO desiccant cabinet in the LVEA.
These are the past ten day trends.
Dan, Travis
Tonight during our long lock we measured the decay time constant of the ITMX bounce mode. At 10:10 UTC we set the intent bit to "I solemnly swear I am up to no good" and flipped the sign on the ITMX_M0_DARM_DAMP_V filter bank and let the bounce mode ring up until it was about 3e-14 m/rt[Hz] in the DARM spectrum. Then, we zeroed the damping gain and let the mode slowly decay over the next few hours.
We measured the mode's Q by fitting the decay curve in two different datasets. The first dataset is the 16 Hz-sampled output of Sheila's new RMS monitors; the ITMX bandpass filter is a 4th-order Butterworth with corner frequencies of 9.83 and 9.87 Hz (the mode frequency is 9.848 +/- 0.001 Hz). This data was lowpassed at 1 Hz and fit with an exponential curve.
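A minimal sketch of this kind of ringdown fit (not the attached code), assuming the 16 Hz RMS-monitor samples have already been fetched into a numpy array `rms` with time vector `t`; it uses Q = pi*f0*tau for an amplitude decay time tau:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.optimize import curve_fit

fs = 16.0     # sample rate of the RMS channel [Hz]
f0 = 9.848    # bounce-mode frequency [Hz]

# 1 Hz low-pass to smooth the RMS output before fitting
b, a = butter(4, 1.0 / (fs / 2.0), btype='low')
envelope = filtfilt(b, a, rms)

# exponential decay model: A * exp(-t/tau) + offset
def decay(t, A, tau, C):
    return A * np.exp(-t / tau) + C

popt, pcov = curve_fit(decay, t - t[0], envelope,
                       p0=[envelope[0], 1e4, 0.0])
tau = popt[1]                 # amplitude decay time [s]
Q = np.pi * f0 * tau          # Q of the mode
print('tau = %.0f s, Q = %.0f' % (tau, Q))
```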
For the second dataset I followed Koji's demodulation recipe from the OMC 'beacon' measurement. I collected 20 seconds of DELTAL_EXTERNAL_DQ data every 200 seconds; bandpassed between 9 and 12 Hz, demodulated at 9.848 Hz, and lowpassed at 2 Hz; and collected the median value of the sum of the squares of the demod products. Some data were neglected at the edges of each 20-sec segment to avoid filter transients. These every-200-sec datapoints were fit with an exponential curve.
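A rough sketch of the demodulation recipe (again, not the attached code), assuming one 20 s stretch of DELTAL_EXTERNAL_DQ is in an array `x` at a decimated rate of 256 Hz; the filter orders and edge trim here are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0        # sample rate after decimation [Hz]
f_demod = 9.848   # demodulation frequency [Hz]
t = np.arange(len(x)) / fs

# band-pass around the bounce mode
bb, ab = butter(4, [9.0 / (fs / 2), 12.0 / (fs / 2)], btype='band')
xbp = filtfilt(bb, ab, x)

# demodulate, then low-pass the I/Q products at 2 Hz
I = xbp * np.cos(2 * np.pi * f_demod * t)
Q_ = xbp * np.sin(2 * np.pi * f_demod * t)
bl, al = butter(4, 2.0 / (fs / 2), btype='low')
I, Q_ = filtfilt(bl, al, I), filtfilt(bl, al, Q_)

# neglect the edges to avoid filter transients, then take the
# median of the sum of the squares of the demod products
trim = int(2 * fs)
power = np.median(I[trim:-trim]**2 + Q_[trim:-trim]**2)
```

Since the sum of squares is a power-like quantity, it decays twice as fast as the amplitude, so converting the fitted decay time to a Q has to account for that factor of two.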
Results attached; the two methods give different results for Q:
RMS channel: 594,000
Demodulated DARM_ERR: 402,000
I fiddled with the data-collection and filtering parameters for both fits, but the results were robust. When varying parameters for each method, the results for Q were repeatable to within +/- 2,000, which gives some sense of the lower limit on the uncertainty of the measurement. (The discrepancy between the two methods gives a sense of the upper limit...) Given a choice between the two, I trust the RMS channel more; the demod path has more moving parts, and there could be a subtlety in the filtering that I am overlooking. The code is attached.
I figured out what was going wrong with the demod measurement: not enough low-passing before the decimation step, so the violin modes at ~510 Hz were beating against the 256 Hz sample rate. With another layer of anti-aliasing (a minimal sketch of the extra low-pass step follows the numbers below), the demod results are in very good agreement with the RMS channel:
RMS channel: 594,400
Demodulated DARM_ERR: 593,800
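For illustration, a minimal sketch of the extra anti-aliasing step, under the assumption that the raw channel is sampled at 16384 Hz; the cutoff and filter order are illustrative, the point being only that everything above the 128 Hz target Nyquist (including the ~510 Hz violin modes) is removed before downsampling:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs_in, fs_out = 16384.0, 256.0        # native and decimated sample rates [Hz]
factor = int(fs_in / fs_out)          # 64

# low-pass well below the target Nyquist so nothing can alias down
sos = butter(8, 100.0 / (fs_in / 2), btype='low', output='sos')
x_aa = sosfiltfilt(sos, x)            # x: raw DELTAL_EXTERNAL_DQ samples
x_dec = x_aa[::factor]                # now safe to work at 256 Hz
```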
To see what we might expect, I took the current GWINC model of suspension thermal noise and did the following:
1) Removed the horizontal thermal noise so that only the vertical contribution is plotted.
2) Updated the maraging steel phi to reflect the recent measurement (LLO alog 16740) of the Q of the UIM blade internal mode, 4 x 10^4. (It is phi of 10^-4, i.e. Q of 10^4, in the current GWINC.) I did this to give a better estimate of the vertical noise from higher up the chain.
3) Plotted only around the thermal noise peak and used 1 million points to be sure I resolved it.
The resulting curve is attached. The Q looks to be approximately 100K, which is less than what was reported in this log. That is encouraging to me. I know the GWINC model is not quite right - it doesn't reflect the tapered shape or the FEA results. However, to see a Q in excess of what we predicted in that model is definitely a step in the right direction.
Here we take the Mathematica model with the parameter set 20150211TMproduction and look at varying some of the loss parameters to see how the model compares with these measurements. The vertical thermal noise amplitude around the vertical bounce-mode resonance is tabulated, and we take the full width at 1/√2 height to calculate the Q (equivalent to 1/2 height for a power spectrum). With the recently measured mechanical loss value for the maraging steel blade springs of 2.4e-5, the Mathematica model predicts a Q of 430,000. This is a somewhat lower Q than the measurement here, but at this level the loss of the wires and the silica is starting to have an effect, so small differences between the model and reality could show up. Turning off the loss in the blade springs altogether only takes the Q to 550,000, so other losses are sharing equally in this regime. The attached Matlab figures show the mechanical loss factor of maraging steel versus the predicted bounce-mode Q and against the total loss, plus the resonance as a function of loss.
Angus, Giles, Ken & Borja
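As a side note on the linewidth method described above, a minimal sketch of extracting Q from a finely tabulated amplitude spectrum; the arrays `f` and `asd` around the resonance are assumed to come from the thermal-noise model:

```python
import numpy as np

ipk = np.argmax(asd)
f0 = f[ipk]
half = asd[ipk] / np.sqrt(2.0)     # 1/sqrt(2) in amplitude = 1/2 in power

# full width of the region above the 1/sqrt(2) level across the peak
above = np.where(asd >= half)[0]
width = f[above[-1]] - f[above[0]]
Q = f0 / width
print('f0 = %.4f Hz, width = %.2e Hz, Q = %.0f' % (f0, width, Q))
```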
Since there has been some modeling afoot, I wanted to post the statistical errors from the fits above, to give a sense of the [statistical] precision of these measurements. The best-fit Q values and the 67% confidence intervals for the two bounce-mode measurements are:
RMS channel: 594,410 +/- 26
Demodulated DARM_ERR: 594,375 +/- 1590
The data for the measurements are attached. Note that this is just the statistical error of the fit -- I am not sure what systematics are present that could bias the measurement in one direction or another. For example, we did not disable the top-stage local damping on ITMX during this measurement, only the DARM_CTRL --> M0 damping that is bandpassed around the bounce mode. There is also optical lever feedback to L2 in pitch, and ASC feedback to L2 in pitch and yaw from the TRX QPDs (although this is very low bandwidth). In principle this feedback could act to increase or decrease the observed Q of the mode, although the drive at the bounce mode frequency is probably very small.
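For completeness, a generic sketch of where a statistical error like the ones quoted above comes from: a least-squares fit returns a covariance matrix, and the square root of its diagonal gives the approximate 1-sigma uncertainty on each parameter, which then propagates to Q. This is not the attached analysis code; `t`, `envelope`, and the initial guesses are placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, tau, C):
    return A * np.exp(-t / tau) + C

popt, pcov = curve_fit(decay, t, envelope, p0=[envelope[0], 1e4, 0.0])
tau, tau_err = popt[1], np.sqrt(pcov[1, 1])

f0 = 9.848
Q = np.pi * f0 * tau
Q_err = np.pi * f0 * tau_err      # statistical error only; no systematics
print('Q = %.0f +/- %.0f' % (Q, Q_err))
```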
Times in UTC
7:00 Came in to find the IFO had been locked on LSC_FF for several hours already; left it as is
9:57 Seeing that Livingston had dropped lock, switched Intent Bit to commissioning, and with a suggestion/assistance from Dan H., undamped the bounce mode of ITMX for a ringdown measurement of its Q. Gain of H1SUS-ITMX_M0_DARM_DAMP_V set to 0.0 (from 0.3) for ringdown. Will need to set back to actively damp the mode once Dan is satisfied
10:58 Switched Intent Bit back to undisturbed
14:11 Switched Intent Bit to commissioning. Edited LSC_FF guardian to bring laser power back to 24W. Switched Intent Bit back to undisturbed.
14:35 Lockloss (Richard volunteered to take blame)
14:39 Reduced power back to 16W. Initial alignment.
15:00 Handoff to Patrick.
King Soft Water was on site yesterday and replaced 4 of the 9 membranes in the R.O. system, the other 5 are on order. This seemed to make a noticeable difference in that the water system actually ran all night without tripping. The water was also tested yesterday and found to be in very good condition.
model restarts logged for Tue 02/Jun/2015
2015_06_02 04:40 h1fw0*
2015_06_02 06:49 h1fw0*
2015_06_02 10:57 h1iopsush34
2015_06_02 10:59 h1susmc2
2015_06_02 10:59 h1suspr2
2015_06_02 10:59 h1sussr2
2015_06_02 11:31 h1calcs
* = two unexpected fw0 restarts. Restart of h1sush34 for re-calibration of the 18-bit DACs. New calcs model.
I was able to replace one coarse regulator with a new flowmeter and these are the results of that change.
Submitted by Bubba.
Dan, Travis
Around 06:50 UTC we started to observe frequent glitching in the PRCL and SRCL loops that generated a lot of nonstationary noise in DARM between 20 and 100 Hz. The glitches occur several times a minute; it's been two hours of more or less the same behavior and counting. Our range has dropped by a couple of megaparsecs. The first plot has a spectrogram of DARM compared to SRCL that shows a burst of excess noise at 08:08:20 UTC.
The noise shows up in POP_A_RF45_I and POP_A_RF9_I, but not so much in the Q phases (see second plot). (MICH is POP_A_RF45_Q; PRCL and SRCL are from the I phases.) A quick look at the PRM and SRM coil outputs doesn't reveal a consistent DAC value at the time of the glitches, so maybe DAC glitching isn't the problem (see third plot). The three optical levers that we're using in the corner station (ITMX, ITMY, SR3, all in pitch) don't look any different now than they did before 07:00 UTC.
I'm pretty sure these come from PRM, but one stage higher than you looked: M2. Attached is a lineup of PRM M2 UR and the glitches in CAL DELTAL. It looks like 2^16 = 65536 count DAC glitches to me. I'll check some other times and channels, but wanted to report it now.
After a lot of followup by many detchar people (TJ, Laura, Duncan, and Josh from a plane over the Atlantic), we haven't really been able to make DAC glitches work as an explanation for these glitches. A number of channels (especially PRM M2 and M3) start crossing +/- 2^16 around 7 UTC, when the glitches begin. Some of these glitches line up with the crossings, but there are plenty of glitches that go unexplained, and plenty of crossings that don't correspond to a glitch. It's possible that DAC glitches are part but not all of the explanation. We'll be following this up further since one of these glitches corresponds to an outlier in the burst search. Duncan does report that no software saturations (channels hitting their software limit, as with the tidal drive yesterday) were found during this time, so we can rule those out.
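As an illustration of the kind of check involved, a small sketch that flags times when a DAC drive channel steps across +/- 2^16 counts and compares them with a list of glitch times; the arrays `drive`, `fs`, `t0`, and `glitch_times` are placeholders for the real data:

```python
import numpy as np

threshold = 2**16
t = t0 + np.arange(len(drive)) / fs

# samples where the drive moves across the +/- 2^16 boundary
above = np.abs(drive) >= threshold
crossings = t[1:][np.diff(above.astype(int)) != 0]

# count glitches within 0.5 s of a crossing
matched = sum(np.any(np.abs(crossings - tg) < 0.5) for tg in glitch_times)
print('%d of %d glitches within 0.5 s of a +/- 2^16 crossing'
      % (matched, len(glitch_times)))
```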
Andy, Laura, TJ, Duncan, Josh: To add to what Andy said, here are a few more plots on the subject,
1) The bad glitching does coincide with the SRM and PRM channels being close to 2^16 (only positive values are plotted here; negative values are similar). Of course this is pretty weak evidence, as lots of things drift.
2) A histogram of the number of glitches versus the DAC value of the PRM M2 UR channel has a small line at 2^16. Almost not significant. Statistically, with the hveto algorithm, we find only a weak correlation with +/-2^16 crossings in the PRM M2 suspensions. Again, all very weak.
3) As you all reported, the glitches are really strong in SRCL and PRCL, ASC REFL channels, etc. hveto would veto ~60% of yesterday's glitches using SRCL and PRCL, as shown in this time-frequency plot. But the rest of the glitches would still get through and hurt the searches.
So we haven't found the right suspension, or there is something more complicated going on. Sorry we don't have a smoking gun - we'll keep looking.
Observation Bit: Commissioning
16:00 Day shift – IFO to LSC_FF
16:08 Lockloss – Guardian recovering
17:35 IFO locked at LSC_FF
17:38 Lockloss – Guardian recovering – Will take the IFO to LSC_FF at lower power (16.6 W)
18:24 Set Observation bit to Undisturbed
00:00 Some glitches, but lock held at 48 Mpc
I have created the FIR filters for the GDS calibration during ER7. The filters are based on the following models:

Total_Actuation = Actuation * AI * ActuationTimeDelay
Total_Sensing = Sensing * AA * OMCWhiteningPoles * SensingTimeDelay

These models follow the directions given in LHO alog #18769. All of the quantities below are loaded from aligocalibration/trunk/Runs/PreER7/H1/Results/DARMOLGTFs/2015-06-01_LVLNDriver_DARMOLGTF.mat:

Actuation is model(2).par.A.total.
Sensing is model(2).par.C.total.
AA is the antialiasing filter, model(2).par.C.antialiasing.total.
AI is the anti-imaging filter, model(2).par.A.antiimaging.total.
OMCWhiteningPoles are the super-Nyquist-frequency poles of the OMC whitening, model(2).par.C.uncompensatedomcdcpd.c.
ActuationTimeDelay is model(2).par.t.actuation, in seconds.
SensingTimeDelay is model(2).par.t.sensing + model(2).par.t.armDelay, in seconds.

To generate the FIR filters, run create_td_filters_ER7.m in aligocalibration/trunk/Runs/ER7/Common/MatlabTools/.

I've attached plots of the frequency response of the actuation and inverse sensing FIR filters compared to the models they are based on. In the plots, the left column shows the FIR frequency response in blue and the original frequency model in red (magnitude on top, phase on bottom). The right column shows the relative error in magnitude on top and the phase error in degrees on bottom. In addition, I performed a test where I took some random DARM_CTRL and DARM_ERR data and calibrated it in the time domain with the FIR filters and in the frequency domain with the original frequency models. I've also attached a plot of the comparison between this frequency-domain calibration and the time-domain calibration.
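The frequency-sampling idea behind this kind of FIR generation can be sketched as follows. This is a generic illustration, not the create_td_filters_ER7.m code; `model_response`, the sample rate, and the tap count are placeholder assumptions:

```python
import numpy as np

fs = 16384.0        # assumed sample rate [Hz]
ntaps = 16384       # FIR length (illustrative)
freqs = np.fft.rfftfreq(ntaps, d=1.0 / fs)

H = model_response(freqs)              # complex frequency-domain model
h = np.fft.irfft(H, n=ntaps)           # impulse response
h = np.roll(h, ntaps // 2) * np.hanning(ntaps)   # centre and window it

# frequency response of the FIR, with the ntaps/2-sample delay removed,
# for comparison against the original model
H_fir = np.fft.rfft(h) * np.exp(2j * np.pi * freqs * (ntaps // 2) / fs)
rel_err = np.abs(H_fir - H) / np.abs(H)
```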
For unknown reasons, dmtviewer on the Mac Mini computers cannot get data from Livingston.
DMTviewer is used to view data produced by the DMT systems for both Livingston and Hanford observatories. It is used in the control room to display recent seismic data and inspiral range data on projector or TV monitors. The dmtviewer program may also be run on a control room workstation or operator workstation.
To display the inspiral range on a workstation, follow the procedure on the CDS Wiki: search for "DMTviewer" and use the DMTviewer article.
L1 range can no longer be displayed on Mac OS machines. We are working on replacing the FOM Mac Mini computers with Ubuntu NUC computers to resolve this.
Shivaraj, Marco, and I have recently finished developing an online low-latency earthquake monitoring webpage, utilizing Michael Coughlin's seismon code, that is now available for use.
The site refreshes every 5 minutes to get the most recent data and then calculates the P/S-wave arrival times at each site (LLO, LHO, GEO, Virgo), the amplitude (velocity induced on the accelerometers), the source magnitude, source time, distance, and lat/lon. The webpage can also be used as an early-warning system and indicates the likelihood of an appreciable seismic disturbance at each observatory. This is determined by a set threshold amplitude based on previous studies done by Michael Coughlin using initial LIGO data. The link to the page is provided below.
https://ldas-jobs.ligo.caltech.edu/~hunter.gabbard/earthquake_mon/seismic.html
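As a toy illustration of the arrival-time estimate (the real seismon code uses proper travel-time models; the velocities and coordinates below are rough assumptions):

```python
import numpy as np

def greatcircle_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points in km."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * R * np.arcsin(np.sqrt(a))

# example: an event near Tokyo as seen from LHO (coordinates approximate)
d_km = greatcircle_km(35.7, 139.7, 46.45, -119.41)
v_p, v_s = 8.0, 4.5     # assumed average P and S velocities [km/s]
print('P arrival ~%.0f s, S arrival ~%.0f s after the origin time'
      % (d_km / v_p, d_km / v_s))
```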
SVN up /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/MatlabTools/QuadModel_Production.
I updated the quad model with the measured ETMY violin modes for harmonics 1 to 7. I got these values from 17610, 17365, 18764, and 18614.
Mode 4 is a bit of a guess because the measured values I saw are not identified by suspension. But ETMY is the highest for the other modes I was able to compare, so I chose the highest value I saw in that range. Where different values are posted for each fiber, I averaged them. The model doesn't yet account for different frequencies in different fibers (but I can add this if we need to).
Hopefully this helps clear up some discrepancies between the modeled and measured loop gain seen on page 2 of https://alog.ligo-wa.caltech.edu/aLOG/uploads/18769_20150602030143_2015-06-01_LVLNDriver_H1DARMOLGTF.pdf
Got the IFO to lock at high power (23 W) for about 3 minutes before a lockloss. Winds are in the high teens to low 20s MPH. Will bring the IFO to DC readout and then adjust the power up manually.
Sheila set the power to 15 W in Guardian. Took the IFO to LSC_FF (16.6 W) and it seems stable. Set the Observation bit to Undisturbed. When bringing the IFO up, we stopped at ENGAGE_ASC while the bounce and roll modes dropped to around 10E-13, then went to DC_READOUT and paused for a few minutes, then went to LSC_FF. At this time the IFO is locked and appears stable. The wind is still high (around 20 MPH) and gusty (into the 30s).
We've had three locklosses at 23 Watts in the last 4 hours. The first of these did coincide with a gust of wind (plot attached), although that might not have been the cause. We've been able to stay locked in gustier wind in the past at 10 Watts.
In all three of these locklosses there were fluctuations in the POP90 power on 10s of seconds timescales.
07:53 Christina opening OSB receiving rollup door
08:00 Peter to H2 PSL enclosure (WP 5233)
08:00 Karen and Christina to mid and end stations to clean
08:08 Jodi moving 3 boxes of TCS equipment in LVEA
08:12 Richard to end X to replace wireless router (WP 5236)
08:26 Bubba checking on RO alarm, then replacing nitrogen regulators on LTS containers in LVEA (WP 5238)
08:31 Jodi back
08:31 Richard called from end X to test phone
08:39 Jodi to mid X to look for mirror
08:43 Joe to LVEA to flush eyewash stations, put water in lift trucks, etc.
08:49 Filiberto to end Y to test cabling (WP 5103)
08:52 Peter done
08:58 Richard back
09:01 Jodi back
09:05 Jodi, Nutsinee to LVEA to get viewport guards
09:18 Karen, Christina leaving end Y, going to mid X then end X
09:25 Jodi and Nutsinee back
09:39 Kyle to end Y to turn RGA bakeout on, then end X to run up turbo (WP 5237)
09:40 Mitchel going to mid and end stations (not VEAs) to get serial numbers
09:40 Joe back
Jim B., Dave pulling fibers under floor tiles
09:55 Karen and Christina leaving mid X, going to end X
09:56 Hugh checking HEPI fluid reservoir levels
09:58 Filiberto back
10:06 Bubba back
10:08 Nutsinee and Rick taking pictures of ETMX (WP 5240)
10:20 Joe back to continue previous work in LVEA
10:26 Dave updating h1calcs model (WP 5241)
10:26 Bubba pulling tubing for TMDS
10:27 Karen and Christina leaving end X
10:36 Jim B. done with floor tiles
10:50 Dave, Jeff K. restarting h1sush34 IO chassis for 18-bit DAC recalibration
10:59 Joe back
11:09 Hugh back
11:10 Nutsinee and Rick done
11:25 Karen and Christina to LVEA to clean
11:26 Jeff K. restarting h1calcs model
11:32 Kingsoft water through gate, notified Bubba (WP 5239)
11:35 Mitchell back
11:37 Restart of guardian machine
11:54 Karen and Christina out of LVEA, opening OSB receiving rollup door to empty cardboard container
12:10 Pepsi truck through gate
13:28 Kyle done

Karen and Christina report that the phone is not working at mid Y.
Kyle reports that a set of cables is pushed against a pyramid shaped pier at end Y.

Cheryl, Evan, Kiwamu, Patrick, Sheila
Had trouble running through an initial alignment after maintenance. It turned out that the safe.snap files used for h1sush34 had the gains set to 0 for the inputs to MC2 and PR2. We changed these back to 1 by hand. The safe.snap files will need to be fixed. Sheila enabled optical lever DC alignment feedback on SR3 (alog 18777). We had trouble getting past the RESONANCE state. We stopped in the CARM_5PM state and Cheryl adjusted the alignment of one of the power recycling mirrors. After this we were able to lock twice on LSC_FF.
This is work I completed a while ago (the 7th of May, if Matlab is to be believed), but I wanted to put it in for comparison with my log 18453. These are the new (as of May 7th) and old isolation filters for ETMY. ETMs are definitely harder to do than ITMs, but the loops should now be very similar on all BSC chambers.
That is a lot of gain peaking in X and Y (7 and 4.6 for St1, times 4.2 for St2, so ~30 and ~20 overall) - worth remembering if there is a problem later on.