In response to alog 18831, work permit #5243 was put in to add an indicator light to the OPS_OVERVIEW_CUSTOM.adl screen. If the test point on H1CALCS_GDS_TP drops below 187, the light will change from green to red, so the operator can easily see when the injections have stopped working as they should.
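For reference, a minimal sketch of the threshold logic behind such an indicator, written with pyepics; the exact EPICS record name behind the MEDM light is an assumption here (the name below is just copied from the work permit description), and this is not the actual screen implementation:

```python
# Minimal sketch of the indicator-light logic described above.
# The channel name is taken from the entry text; the real EPICS record
# monitored by OPS_OVERVIEW_CUSTOM.adl may differ.
from epics import caget  # pyepics

THRESHOLD = 187
CHANNEL = "H1CALCS_GDS_TP"  # placeholder; actual record name may differ

value = caget(CHANNEL)
if value is None:
    print("channel not reachable")
elif value < THRESHOLD:
    print("RED: injections appear to have stopped (%g < %d)" % (value, THRESHOLD))
else:
    print("GREEN: injections running (%g >= %d)" % (value, THRESHOLD))
```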
Chris Pankow, Jeff Kissel, Adam M, Eric Thrane

We restarted tinj at LHO (GPS=1117411494) to resume transient injections. We scheduled a burst injection for GPS=1117411713. The burst waveform is in svn; it is, I understand, a white noise burst, and for the time being it is our standard burst injection waveform until others are added. The injection completed successfully. Following this test, we updated the schedule to inject the same waveform every two hours over the next two days. The next injection is scheduled for GPS=1116241935. However, this schedule may change soon as Chris adds new waveforms to the svn repository. We are not carrying out transient injection tests at LLO because the filter bank needs to be updated and we cannot be sure that the filters are even close to correct. Adam thinks they will be updated by ~tomorrow.
10:03 Betsy, Kyle, Nutsinee into LVEA (Nutsinee to take pictures with cell phone of ITMX and ITMY spools, Betsy to retrieve equipment, Kyle to check on equipment)
10:13 Kyle out
10:14 Nutsinee out
10:15 Betsy out
14:22 Joe moving cabinets from OSB to staging building
~15:25 - 15:53 Greg moving cabinets from computer users room to staging building

I ran an initial alignment in the morning and made it to LSC_FF, but the range was unstable and the lock didn't last long. Had difficulty locking and staying locked for the remainder of the shift. Evan and Sheila are continuing to track down the cause.
Scott L., Ed P.

Results from 5/18/15 thru 5/21/15 posted here.
6/1/15: Cleaning crew returns and sets up after last week off. Cleaned 40 meters, ending 3.8 meters north of HNW-4-060.
6/2/15: Cleaned 49 meters, ending 11.4 meters north of HNW-4-062.
6/3/15: Cleaned 33.5 meters, ending at HNW-4-064. Removed lights and relocated equipment to next section north and started hanging lights. Safety meeting.
Sheila, Elli
This morning we added filters to H1:SUS-PRM_M1_LOCK_L and H1:SUS-SRM_M1_LOCK_L to be used for offloading the M2_LOCK stage. The motivation was that last night the M2 coil drivers were approaching saturation, and also to see if this might address the 2^16 issue (alog 18815). The new filters are zp0.01:0 in FM1, zp0:0.01 in FM2, and -90 dB gain in FM3, and they are engaged with a gain of -0.2. The filters are turned on by the ISC_DRMI guardian at OFFLOAD_DRMI. We were having trouble locking this afternoon, so we turned off these filters for a while, but they are back on again now, as they didn't seem to be causing the locking difficulties.
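As an illustration of the shape of these offload filters, here is a sketch of the FM1 response using scipy. It assumes the foton-style "zpA:B" notation means a zero at A Hz and a pole at B Hz (my reading, not confirmed in this entry), and it is only meant to show the low-frequency boost that pushes the DC load onto M1:

```python
# Sketch only: frequency response implied by the FM1 design quoted above,
# assuming "zp0.01:0" = zero at 0.01 Hz, pole at 0 Hz (s-plane, in Hz).
import numpy as np
from scipy import signal

f = np.logspace(-4, 1, 500)      # Hz
w = 2 * np.pi * f                # rad/s

# zero at 0.01 Hz, pole at DC: the response rises below ~10 mHz, so the
# M1 path takes over the low-frequency drive from the M2 stage
zeros = [-2 * np.pi * 0.01]
poles = [0.0]
_, h = signal.freqs_zpk(zeros, poles, 1.0, worN=w)

mag_db = 20 * np.log10(np.abs(h))
print("gain at 1 mHz: %.1f dB, at 1 Hz: %.1f dB"
      % (mag_db[np.argmin(abs(f - 1e-3))], mag_db[np.argmin(abs(f - 1.0))]))
```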
Kiwamu, Elli
We are interested in whether we can see any changes in the spot size/location on the ITMs due to thermal drift during full lock. We have decreased the exposure on the ITMY SPOOL and ITMX SPOOL cameras from 100000 microseconds to 1000 microseconds. (These cameras are not used by any interferometer systems.) We will take images with these cameras at 5-minute intervals for the next 24 hours.
I've changed the interval at which the cameras automatically take images back to the nominal 60 minutes.
Jeff, Corey, Betsy, Dave, Keith, Eric

Betsy: After the reboots yesterday, it appears that the CW hardware injections have not been restarted. 1) Can you please restart them? 2) There is no indication that this injection is OFF when looking at CAL_INJ_CONTROL.adl. Dave B simply noticed that there was no longer an EXC showing on his CDS overview screen.

Eric: I just restarted the injections at LHO (at GPS=1117406895). Here are the instructions:
log on to h1hwinj
cd /data/scirun/O1/HardwareInjection/Details
bin/start_psinject
You will be prompted to give your name and your reason for starting the injections, which are saved to the psinject log. Injections begin 60 s after you hit enter. I'll add these instructions to the DCC document.
Since I touched a ton of safe.snaps while setting the SDF monitor switches and accepting newly commissioned settings over the last week, I committed them all to svn. The safe.snaps I committed:
h1susitmy_safe.snap | h1lscaux_safe.snap |
h1susbs_safe.snap | h1lsc_safe.snap |
h1susitmx_safe.snap | h1ascimc_safe.snap |
h1susmc2_safe.snap | h1asc_safe.snap |
h1suspr2_safe.snap | |
h1sussr2_safe.snap | h1omc_safe.snap |
h1sussrm_safe.snap | h1psliss_safe.snap |
h1susetmx_safe.snap | h1iscex_safe.snap |
h1susetmy_safe.snap | h1iscey_safe.snap |
h1susim_safe.snap | h1alsex_safe.snap |
h1sustmsy_safe.snap | h1alsey_safe.snap |
This morning I found 1 channel in alarm on SDF and accepted it:
H1:OMC-ASC_DITHER_MASTER setpoint was set to ON, but is now OFF.
Likely this is left-over from last night, and in fact should be off.
J. Kissel
I don't know why, but the TRANSIENT filter bank had been turned OFF (i.e., the gain had been set to zero and the output had been turned off). This caused the ODC bit that reflects the system status (H1:CAL-INJ_ODC_CHANNEL_LATCH) to report red. I've now turned ON the transient bank, and the status has turned green. I hope this is what the INJ team wants.
Brute Force Coherence report for last night's lock can be found here:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1117341016/
The most interesting features are the coherence with SRCL in the 20-80 Hz band and with MICH in the 20-200 Hz band.
Posted are data for the two long term storage dry boxes (DB1 & DB4) in use in the VPW. Measurement data looks good, with no issues or problems being noted. I will collect the data from the desiccant cabinet in the LVEA during the next maintenance window.
This is the data for the 3IFO desiccant cabinet in the LVEA.
These are the past ten day trends.
Dan, Travis
Tonight during our long lock we measured the decay time constant of the ITMX bounce mode. At 10:10 UTC we set the intent bit to "I solemnly swear I am up to no good" and flipped the sign on the ITMX_M0_DARM_DAMP_V filter bank and let the bounce mode ring up until it was about 3e-14 m/rt[Hz] in the DARM spectrum. Then, we zeroed the damping gain and let the mode slowly decay over the next few hours.
We measured the mode's Q by fitting the decay curve in two different datasets. The first dataset is the 16 Hz-sampled output of Sheila's new RMS monitors; the ITMX bandpass filter is a 4th-order Butterworth with corner frequencies of 9.83 and 9.87 Hz (the mode frequency is 9.848 +/- 0.001 Hz). This data was lowpassed at 1 Hz and fit with an exponential curve.
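For anyone who wants to reproduce this style of ringdown fit, here is a minimal sketch (not the actual code used for this entry, which is attached below): grab the RMS-monitor output, fit an exponential decay, and convert the amplitude decay time tau to a quality factor via Q = pi * f0 * tau. The channel name and GPS span below are placeholders.

```python
# Minimal sketch, not the authors' code. Channel name and GPS span are
# placeholders; the real data came from the 16 Hz RMS monitor output.
import numpy as np
from scipy.optimize import curve_fit
from gwpy.timeseries import TimeSeries

F0 = 9.848  # bounce-mode frequency in Hz (from this entry)

rms = TimeSeries.get("H1:SUS-ITMX_BOUNCE_RMS_OUT_DQ",   # placeholder name
                     1117395000, 1117410000)            # placeholder span
t = rms.times.value - rms.times.value[0]
y = rms.value

def decay(t, a, tau, c):
    # exponential amplitude decay with a constant noise floor
    return a * np.exp(-t / tau) + c

(a, tau, c), cov = curve_fit(decay, t, y, p0=[y.max(), 2e4, 0.0])

# for an amplitude 1/e time tau, the quality factor is Q = pi * f0 * tau
Q = np.pi * F0 * tau
print("tau = %.0f s  ->  Q = %.0f" % (tau, Q))
```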
For the second dataset I followed Koji's demodulation recipe from the OMC 'beacon' measurement. I collected 20 seconds of DELTAL_EXTERNAL_DQ data every 200 seconds; bandpassed between 9 and 12 Hz, demodulated at 9.484 Hz, and lowpassed at 2 Hz; and took the median value of the sum of the squares of the demod products. Some data were discarded at the edges of each 20-second segment to avoid filter transients. These every-200-seconds data points were fit with an exponential curve.
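And a rough sketch of the demodulation step for a single 20-second segment, again with placeholder channel access and filter designs (these are my assumptions about the recipe, not Koji's or the actual analysis code):

```python
# Rough sketch of one 20 s demodulated-amplitude estimate; not the real code.
import numpy as np
from scipy import signal
from gwpy.timeseries import TimeSeries

F_DEMOD = 9.848   # demodulation frequency near the mode, Hz

seg = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", 1117400000, 1117400020)
fs = seg.sample_rate.value
x = seg.value
t = np.arange(len(x)) / fs

# band-pass around the bounce mode (9-12 Hz)
sos_bp = signal.butter(4, [9, 12], btype="bandpass", fs=fs, output="sos")
x = signal.sosfiltfilt(sos_bp, x)

# demodulate, then low-pass the I/Q products at 2 Hz
i = x * np.cos(2 * np.pi * F_DEMOD * t)
q = x * np.sin(2 * np.pi * F_DEMOD * t)
sos_lp = signal.butter(4, 2, btype="lowpass", fs=fs, output="sos")
i = signal.sosfiltfilt(sos_lp, i)
q = signal.sosfiltfilt(sos_lp, q)

# drop the segment edges to avoid filter transients, then take the median
# of the sum of squares as the amplitude-squared estimate for this segment
trim = int(2 * fs)
amp2 = np.median(i[trim:-trim] ** 2 + q[trim:-trim] ** 2)
print(amp2)
```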
Results attached; the two methods give different results for Q:
RMS channel: 594,000
Demodulated DARM_ERR: 402,000
I fiddled with the data collection and filtering parameters for both fits, but the results were robust. When varying parameters for each method, the results for Q were repeatable to within +/- 2,000, which gives some sense of the lower limit on the uncertainty of the measurement. (The discrepancy between the two methods gives a sense of the upper limit...) Given a choice between the two, I think I trust the RMS channel more; the demod path has more moving parts, and there could be a subtlety in the filtering that I am overlooking. The code is attached.
I figured out what was going wrong with the demod measurement: not enough low-passing before the decimation step, so the violin modes at ~510 Hz were beating against the 256 Hz sample rate. With another layer of anti-aliasing, the demod results are in very good agreement with the RMS channel:
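For illustration, here is a sketch of the difference between naively taking every Nth sample and properly anti-aliased decimation; the sample rates are just illustrative, and this is not the analysis code used above.

```python
# Sketch only: anti-aliased decimation versus naively taking every Nth sample.
import numpy as np
from scipy import signal

fs_in, fs_out = 16384, 256
factor = fs_in // fs_out                 # 64x

x = np.random.randn(10 * fs_in)          # stand-in for DELTAL_EXTERNAL data

# naive decimation: anything above 128 Hz (e.g. the ~510 Hz violin modes)
# aliases into the band of interest
x_naive = x[::factor]

# filtered decimation: scipy.signal.decimate applies an anti-aliasing
# low-pass first; do the 64x in two 8x stages to keep the filters short
x_clean = x
for q in (8, 8):
    x_clean = signal.decimate(x_clean, q, ftype="fir", zero_phase=True)
```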
RMS channel: 594,400
Demodulated DARM_ERR: 593,800
To see what we might expect, I took the current GWINC model of suspension thermal noise and did the following:
1) Removed the horizontal thermal noise so I was only plotting vertical.
2) Updated the maraging steel phi to reflect the recent measurement (LLO alog 16740) of a Q of 4 x 10^4 for the UIM blade internal mode (it is phi of 10^-4, i.e. Q of 10^4, in the current GWINC). I did this to give a better estimate of the vertical noise from higher up the chain.
3) Plotted only around the thermal noise peak and used 1 million points to be sure I resolved it.
The resulting curve is attached. The Q looks approximately 100K, which is less than what was reported in this log. That is encouraging to me. I know the GWINC model is not quite right - it doesn't reflect the tapered shape and FEA results. However, to see a Q in excess of what we predicted in that model is definitely in the right direction.
Here we take the Mathematica model with the parameter set 20150211TMproduction and look at varying some of the loss parameters to see how the model compares with these measurements. The vertical thermal noise amplitude for the vertical bounce mode is tabulated around the resonance, and we take the full width at 1/√2 height to calculate the Q (equivalent to half height for the power spectrum). With the recently measured mechanical loss value for maraging steel blade springs of 2.4e-5, the Mathematica model predicts a Q of 430,000. This is a little lower than the Q measured here, but at this level the loss of the wires and the silica is starting to have an effect, and so small differences between the model and reality could show up. Turning off the loss in the blade springs altogether only takes the Q to 550,000, so other losses are sharing equally in this regime. The attached Matlab figures show the mechanical loss factor of maraging steel versus predicted bounce-mode Q and versus total loss, plus the resonance as a function of loss.

Angus, Giles, Ken & Borja
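For reference, the width-to-Q conversion used in this comment (and implicitly in the GWINC estimate above) is the standard resonance relation, stated here for completeness rather than taken from the entry:

$$ Q = \frac{f_0}{\Delta f} $$

where $\Delta f$ is the full width of the peak at $1/\sqrt{2}$ of its maximum amplitude (equivalently, half maximum in power) and $f_0 \approx 9.848$ Hz is the bounce-mode frequency; a narrower peak at fixed $f_0$ therefore corresponds to a higher Q.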
Since there has been some modeling afoot, I wanted to post the statistical error from the fits above, to give a sense of the [statistical] precision on these measurements. The best-fit Q value and the 67% confidence interval on the two measurements for the bounce mode are:
RMS channel: 594,410 +/- 26
Demodulated DARM_ERR: 594,375 +/- 1590
The data for the measurements are attached. Note that this is just the statistical error of the fit -- I am not sure what systematics are present that could bias the measurement in one direction or another. For example, we did not disable the top-stage local damping on ITMX during this measurement, only the DARM_CTRL --> M0 damping that is bandpassed around the bounce mode. There is also optical lever feedback to L2 in pitch, and ASC feedback to L2 in pitch and yaw from the TRX QPDs (although this is very low bandwidth). In principle this feedback could act to increase or decrease the observed Q of the mode, although the drive at the bounce mode frequency is probably very small.
Times in UTC
7:00 Came in to find the IFO had been locked to LSC FF for several hours already, left it as is
9:57 Seeing that Livingston had dropped lock, switched Intent Bit to commissioning, and with a suggestion/assistance from Dan H., undamped the bounce mode of ITMX for a ringdown measurement of its Q. Gain of H1SUS-ITMX_M0_DARM_DAMP_V set to 0.0 (from 0.3) for ringdown. Will need to set back to actively damp the mode once Dan is satisfied
10:58 Switched Intent Bit back to undisturbed
14:11 Switched Intent Bit to commissioning. Edited LSC_FF guardian to bring laser power back to 24W. Switched Intent Bit back to undisturbed.
14:35 Lockloss (Richard volunteered to take blame)
14:39 Reduced power back to 16W. Initial alignment.
15:00 Handoff to Patrick.
King Soft Water was on site yesterday and replaced 4 of the 9 membranes in the R.O. system; the other 5 are on order. This seemed to make a noticeable difference in that the water system actually ran all night without tripping. The water was also tested yesterday and found to be in very good condition.
model restarts logged for Tue 02/Jun/2015
2015_06_02 04:40 h1fw0*
2015_06_02 06:49 h1fw0*
2015_06_02 10:57 h1iopsush34
2015_06_02 10:59 h1susmc2
2015_06_02 10:59 h1suspr2
2015_06_02 10:59 h1sussr2
2015_06_02 11:31 h1calcs
* = two unexpected fw0 restarts. Restart of h1sush34 for re-calibration of 18bit-DACs. New calcs model.
Dan, Travis
Around 06:50 UTC we started to observe frequent glitching in the PRCL and SRCL loops that generated a lot of nonstationary noise in DARM between 20 and 100 Hz. The glitches occur several times a minute; it's been two hours of more or less the same behavior, and counting. Our range has dropped by a couple of megaparsecs. The first plot has a spectrogram of DARM compared to SRCL that shows a burst of excess noise at 08:08:20 UTC.
The noise shows up in POP_A_RF45_I and POP_A_RF9_I, but not so much in the Q phases, see second plot. (MICH is POPA_RF45_Q, PRCL and SRCL are from the I-phases.) A quick look at the PRM and SRM coil outputs doesn't reveal a consistent DAC value at the time of the glitches, so maybe DAC glitching isn't a problem, see third plot. The three optical levers that we're using in the corner station (ITMX, ITMY, SR3, all in pitch) don't look any different now than they did before 0700 UTC.
I'm pretty sure these come from PRM, but one stage higher than you looked, M2. Attached is a lineup of PRM M2 UR and the glitches in CAL DELTAL. Looks like 2^16 = 65536 counts DAC glitches to me. I'll check some other times and channels, but wanted to report it now.
After a lot of followup by many detchar people (TJ, Laura, Duncan, and Josh from a plane over the Atlantic), we haven't really been able to make DAC glitches work as an explanation for these glitches. A number of channels (especially PRM M2 and M3) start crossing +/- 2^16 around 7 UTC, when the glitches begin. Some of these glitches line up with the crossings, but there are plenty of glitches that go unexplained, and plenty of crossings that don't correspond to a glitch. It's possible that DAC glitches are part but not all of the explanation. We'll be following this up further since one of these glitches corresponds to an outlier in the burst search. Duncan does report that no software saturations (channels hitting their software limit, as with the tidal drive yesterday) were found during this time, so we can rule those out.
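For anyone who wants to repeat this kind of check, here is a rough sketch of flagging ±2^16 crossings in a suspension drive channel; the channel name and GPS span are placeholders, and this is not the detchar group's actual code:

```python
# Rough sketch, not the detchar follow-up code. Channel name and GPS span
# are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

LEVEL = 2 ** 16                                      # 65536 DAC counts
chan = "H1:SUS-PRM_M2_MASTER_OUT_UR_DQ"              # assumed channel name
data = TimeSeries.get(chan, 1117350000, 1117360000)  # placeholder span

x = data.value
t = data.times.value

# indices where the drive crosses +65536 or -65536 between adjacent samples
pos = np.where(np.diff(np.sign(x - LEVEL)) != 0)[0]
neg = np.where(np.diff(np.sign(x + LEVEL)) != 0)[0]
crossings = np.sort(t[np.concatenate([pos, neg])])

print("%d crossings of +/- 2^16; compare these times against the glitch times"
      % len(crossings))
```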
Andy, Laura, TJ, Duncan, Josh: To add to what Andy said, here are a few more plots on the subject,
1) The glitching being bad does coincide with SRM and PRM channels being close to 2^16 (plotted only positive values here, negative values are similar). Of course this is pretty weak evidence as lots of things drift.
2) A histogram of the number of glitches versus the DAC value of the PRM M2 UR channel has a small line at 2^16. Almost not significant. Statistically, with the hveto algorithm, we find only a weak correlation with +/-2^16 crossings in the PRM M2 suspensions. Again, all very weak.
3) As you all reported, the glitches are really strong in SRCL and PRCL, ASC REFL channels, etc. hveto would veto ~60% of yesterday's glitches using SRCL and PRCL, as shown in this time-frequency plot. But the rest of the glitches would still get through and hurt the searches.
So we haven't found the right suspension, or there is something more complicated going on. Sorry we don't have a smoking gun - we'll keep looking.
For unknown reasons, dmtviewer on the Mac Mini computers cannot get data from Livingston.
DMTviewer is used to view data produced by the DMT systems for both Livingston and Hanford observatories. It is used in the control room to display recent seismic data and inspiral range data on projector or TV monitors. The dmtviewer program may also be run on a control room workstation or operator workstation.
To display the inspiral range on a workstation:
Note: These procedures are found on the CDS Wiki. Search for "DMTviewer" and use the DMTviewer article.
L1 range cannot be displayed on Mac OS machines anymore. We are working on replacing the FOM Mac Mini computers with Ubuntu NUC computers to resolve this.
This is work I completed a while ago (the 7th of May, if Matlab is to be believed), but I wanted to put this in for comparison with my log 18453. These are the new (as of May 7th) and old isolation filters for ETMY. ETMs are definitely harder to do than ITMs, but the loops should now be very similar on all BSC chambers.
That is a lot of gain peaking in X and Y (7 and 4.6 for Stage 1, times 4.2 for Stage 2, so ~30 and ~20 overall); worth remembering if there is a problem later on.