Some LHO DMT monitors are not running. John and Maddie have been emailed about the problem. There is no CAL SENSEMON or SEIS BLRMS data at the moment from H1.
Looks like the DAQ broadcaster has stopped sending data. Its EPICS IOC is still running, but the EPICS values are frozen. There are no recent entries in the log files. We have not seen this failure mode before. CPU and memory usage on the machine look fine.
We should discuss restarting the broadcaster (this will not affect the rest of the DAQ).
Back to commissioning at 17:57 - glitches in ETMY causing range to drop to zero.
The improvement of a factor of 3 or 4 in peak height and a few Mpc in range seems to have held for at least a few hours, so I attach before-and-after spectra. Two hours was not much time for this tuning, considering that the settling time was several minutes. When possible, I would like to try, for example, adjusting some of the other degrees of freedom (we adjusted only 1 of 3).
We are testing a new CDS overview screen for ER7 called H1CDS_SCIRUN_STATE_WORD_CUSTOM.adl. It is identical to the non-SCIRUN version except that the EXC bits should all be GREEN all the time during nominal science running. This means that if any excitation other than h1calcs is engaged the bit turns RED, and if the h1calcs excitation were to be removed that would also turn RED (the assumption is that the CW excitations are always running).
This is more in line with the philosophy that green means nominal. Of course the control room is free to run whichever display is preferred.
A couple of ISIs (HAMs 5 & 6) and HEPI ETMX have accumulated some saturations on their inertial sensors, but these did not occur at the time of the lock loss.
Checking the offload drive to HEPI at the ETMs showed that we still had ~100,000 nm before hitting the ETMi_ISCINF_LONG limit of 250,000. Also, the Cartesian positions of HEPI still seemed to be moving comfortably as directed.
model restarts logged for Thu 04/Jun/2015
2015_06_04 18:26 h1fw1*
* = unexpected restart
The long lock from Patrick's shift roared through all of TJ's shift and most of my shift. It dropped out at 6:12 am today (yesterday it was around 5:45 am). Here is the time info for the long lock:
I ran through a couple of alignments after the last lock loss, but they have not looked great, so I am handing off to the Day Crew (Jim covering for Cheryl, who will be in soon).
Note: On my alignments, I mainly tweaked ETMy, PR3 (just a little), and then the BS.
In the SDF Overview MEDM, we only have three (3) channels which are RED during the current lock/Science Mode segment.
When we are happy with these changes, we should GREEN these channels up.
Once again, I was handed a nicely locked & in-Science-Mode H1 from TJ, with a range of 54 Mpc. We have had a couple of glitches which dropped the range a bit in the last few minutes, though. (As I am typing, I noticed the end of a huge glitch [~8:42:30-ish UTC] which took the range down to 2 Mpc! Nothing obvious on any of the FOMs as to why.)
Seismically, all is quiet on the LHO front with winds below 10mph.
Noticed that the CDS Overview (on video2) must have had an old MEDM up: while going through the checksheet and checking this window, I noticed H1CALCS had no purple EXC box (I mentioned this to Jeff and he was surprised). Then I noticed it was fine on my workstation, so the MEDM on video2 was restarted.
[Time for caffeine.]
J. Kissel, K. Izumi, E. Hall, for the CAL Team

We've been studying the discrepancies between the GDS low-latency pipeline (which produces H1:GDS-CALIB_STRAIN) and what comes out of the CAL-CS front-end real-time calibration (which produces H1:CAL-DELTAL_EXTERNAL_DQ). Please check out G1500750 for a discussion of this.
The MICH feedforward path into DARM has been retuned, since Gabriele recently found a nontrivial amount of coupling from 50 to 200 Hz.
First, the MICH → DARM and MICHFF → DARM paths were measured according to the prescription given here. Then the ratio of these TFs was vectfitted and loaded into FM4 of LSC-MICHFF. This is meant to stand in place of FM5, which is the old frequency-dependent compensation filter. The necessary feedforward gain has been absorbed into FM4, so the filter module gain should now be 1. These changes have been written into the LSC_FF state in the Guardian.
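For illustration, here is a minimal Python sketch (not the actual fitting code; the transfer functions below are placeholders) of how the FM4 target is formed: the ratio of the two measured paths gives the frequency-dependent correction, and because the overall gain is absorbed into the fit, the filter-module gain can stay at 1.

import numpy as np

# Band over which the excess MICH coupling was seen.
freqs = np.logspace(np.log10(50.0), np.log10(200.0), 200)

# Placeholder complex transfer functions; in practice these come from the
# swept-sine measurements of the MICH -> DARM and MICHFF -> DARM paths.
tf_mich_to_darm = 1e-9 / (1.0 + 1j * freqs / 100.0)
tf_michff_to_darm = 2e-8 / (1.0 + 1j * freqs / 150.0)

# The feedforward filter should cancel the MICH coupling that reaches DARM
# through the FF path, so the target response is minus the ratio of the two.
ff_target = -tf_mich_to_darm / tf_michff_to_darm

# This frequency response is what gets rational-fit (vectfitted) and loaded
# into FM4; with the gain absorbed into the fit, the module gain is unity.
print("|FF target| at 100 Hz:", np.abs(ff_target[np.argmin(np.abs(freqs - 100.0))]))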
The attachment shows the performance of the new retuning compared to the old retuning. At the start of this exercise, I had already changed the FM gain of the "old" retuning from 0.038 (in the Guardian) to 0.045, as this removed a lot of the coherence near 100 Hz.
I had also previously widened the violin stopband in BS M2 L, but had not propagated this change to the analogous filter in LSC-MICHFF. This is now fixed. Note also that if any change is made to the invBS compensation filter in BS M3 L, that change must be propagated to LSC-MICHFF as well.
I did not have time to implement SRCL feedforward. I suspect it would be a quick job, and could be done parasitically with other tweaking activities.
Coherence between MICH and DARM before and after Evan's work. (As far as I can tell, yes, DARM got better and the range improved, although it is not evident from the SNSH channel, as it had problems around the time of Evan's work.) Let's see how much this coupling changes over time.
The LSC-MICHFF_TRAMP is now set to 3 sec versus the snap setting of 5 sec. This now shows in the LSC SDF diff table and should be reverted or accepted. Evan?
Snap setting of 5 seconds is probably better.
I injected pitch (270 Hz) and yaw (280 Hz) lines with the PSL piezo mirror and then tried to minimize them in DARM by adjusting values of the offsets of DOF2P and DOF1Y of the IMC WFS. It was a bit tricky because of ten-minute alignment time scales. But we reduced the injected pitch peak height by a factor of 2, and the yaw by 5 (we cared most about yaw because the PSL jitter peaks are mainly in yaw). The inspiral range increased by a few Mpc and the jitter peaks seemed down by a factor of a few, but I will wait until tomorrow to put in spectra in order to be sure that the improvement is stable. DOF2P was changed from 135.9 to 350 and DOF1Y from 47.5 to 200. I think we could do better with another 2 hours.
Robert, Kiwamu, Evan
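As a rough illustration of how the injected line heights can be tracked while stepping the WFS offsets, here is a small Python sketch (the DARM data below is a random stand-in; channel fetching is omitted): estimate an ASD and read off the peak in a narrow band around each injected frequency.

import numpy as np
from scipy.signal import welch

fs = 16384.0
darm = np.random.randn(int(600 * fs))    # stand-in for 10 minutes of DARM data

# 0.1 Hz resolution is plenty to separate the 270 Hz and 280 Hz lines.
f, psd = welch(darm, fs=fs, nperseg=int(10 * fs))
asd = np.sqrt(psd)

for f_line in (270.0, 280.0):            # pitch and yaw injection frequencies
    band = (f > f_line - 0.5) & (f < f_line + 0.5)
    print(f"{f_line:.0f} Hz line height: {asd[band].max():.3e} /rtHz")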
This change now shows in the ASCIMC SDF Table.
Chris Pankow, Jeff Kissel, Adam M, Eric Thrane

We restarted tinj at LHO (GPS=1117411494) to resume transient injections. We scheduled a burst injection for GPS=1117411713. The burst waveform is in svn. It is, I understand, a white noise burst. It is, for the time being, our standard burst injection waveform until others are added. The injection completed successfully. Following this test, we updated the schedule to inject the same waveform every two hours over the next two days. The next injection is scheduled for GPS=1116241935. However, this schedule may change soon as Chris adds new waveforms to the svn repository. We are not carrying out transient injection tests at LLO because the filter bank needs to be updated and we cannot be sure that the filters are even close to correct. Adam thinks they will be updated by ~tomorrow.
First, some details: the injection is actually a sine-Gaussian with parameters:

SineGaussian
t0 = 1116964989.435322284
q = 28.4183770634
f0 = 1134.57994534

The t0 can be safely ignored since this injection would have no counterpart in L1, but it should be noted that this injection *does* have all relevant antenna patterns and polarization factors applied to it (i.e. it is made to look like a "real" GW signal). I attach the time- and frequency-domain plots of the waveform; however, they are relative to an O1-type spectrum and so may not be indicative of the actual performance of the instrument at this period of time. Given the most recent spectra and the frequency content of the injection, this could be weaker by up to a factor of ~2-3. The characteristic SNRs I calculated using the O1-type spectrum:

Waveform SineGaussian at 1116964989.435 has SNR in H1 of 7.673043
Waveform SineGaussian at 1116964989.435 has SNR in L1 of 20.470634
Network SNR for SineGaussian at 1116964989.435 is 21.861438

So, it's possible that this injection had an SNR as low as ~2, not accounting for variance from the noise.

The excitation channel (H1:CAL-INJ_TRANSIENT_EXCMON, trended) does show a non-zero value, and the "count" value is consistent with the amplitude of the strain; another monitor (H1:CAL-INJ_HARDWARE_OUT_DQ, at a higher sample rate) also shows the full injection, though it is not calibrated. So the injection was successfully scheduled and looks to have been made. I also did an omega scan of the latter channel, and the signal is at the correct frequency (but has, notably, a very long duration).

I did a little poking around to see if this showed up in h(t) (using H1:GDS-CALIB_STRAIN). Unfortunately, it is not visible in the spectrogram of H1:GDS-CALIB_STRAIN (attached). It may be some peculiarity of the scheduling, but it's interesting to note that the non-zero excitation occurs about a second after the GPS time that Eric quotes. More interestingly, this does not seem to have fired off the proper bits in the state vector. H1:GDS-CALIB_STATE_VECTOR reports the value 456 for this period, which corresponds to the data being okay and gamma being okay, but no injection taking place. It also appears to mean that no calibration was taking place (bits 3 and 4 are off). I'm guessing I'm just misinterpreting the meaning of this.

I'd recommend, for future testing, a scale factor of 3 or 4, to make it *clearly* visible and give us a point of reference. We should also close the loop with the ODC / calibration folks to see if something was missed.
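For context, the characteristic SNR quoted above is a matched-filter SNR of the waveform against a noise PSD, rho^2 = 4 * sum(|h(f)|^2 / Sn(f)) * df. Below is a hedged Python sketch of that calculation for a sine-Gaussian with the quoted q and f0; the flat PSD and the amplitude normalization are toy assumptions, so the number it prints is illustrative only, not a reproduction of the values above.

import numpy as np

def sine_gaussian(t, t0, f0, q, amp):
    # Simple linearly polarized sine-Gaussian; envelope width set by q.
    tau = q / (np.sqrt(2.0) * np.pi * f0)
    return amp * np.exp(-((t - t0) / tau) ** 2) * np.sin(2.0 * np.pi * f0 * (t - t0))

fs = 16384.0
t = np.arange(0.0, 4.0, 1.0 / fs)
h = sine_gaussian(t, t0=2.0, f0=1134.57994534, q=28.4183770634, amp=1e-22)

hf = np.fft.rfft(h) / fs                  # approximate continuous Fourier transform
f = np.fft.rfftfreq(t.size, 1.0 / fs)
Sn = np.full_like(f, 1e-46)               # toy flat one-sided PSD, strain^2/Hz

df = f[1] - f[0]
snr = np.sqrt(4.0 * np.sum(np.abs(hf) ** 2 / Sn) * df)
print("characteristic SNR with the toy PSD:", snr)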
I can see that burst injections have been occurring on schedule, generally. The schedule file (which you currently have to log into h1hwinj1 to view) reads, in part:

...
1117411713 1 1 burst_test_
1117419952 1 1 burst_test_
1117427152 1 1 burst_test_
...

Compare that to the bit transitions in CAL-INJ_ODC:

pshawhan@> ./FrBitmaskTransitions -c H1:CAL-INJ_ODC_CHANNEL_OUT_DQ /archive/frames/ER7/raw/H1/H-H1_R-11174/*.gwf -m fffffff
1117400000.000000 0x00003f9e Data starts
1117400027.625000 0x00003fde 6 on
1117400028.621093 0x00003fff 0 on, 5 on
1117406895.000000 0x00003e7f 7 off, 8 off
1117411714.394531 0x0000347f 9 off, 11 off
1117411714.480468 0x00003e7f 9 on, 11 on
1117419953.394531 0x0000347f 9 off, 11 off
1117419953.480468 0x00003e7f 9 on, 11 on
1117427153.394531 0x0000347f 9 off, 11 off
1117427153.480468 0x00003e7f 9 on, 11 on
...

Bits 9 and 11 go ON when a burst injection begins, and off when it ends. The offset of the start time is because the waveform file initially contains zeroes. To be specific, the first 22507 samples in the waveform file (= 1.37372 sec) are all exactly zero; then the next several samples are vanishingly small, e.g. 1e-74. At t = 1.39453 sec = 22848 samples into the waveform file, the strain amplitude of the waveform is about 1e-42. At the end time of the injection (according to the bitmask transition), the strain amplitude has dropped to about 1e-53, but I believe the filtering extends the waveform some. To give an idea of how much, the end time is about 130 samples, or ~8 msec, after the strain amplitude drops below 1e-42.

Earlier in the record, bit 6 went on to indicate that the transient filter gain was OK, bit 5 went on to indicate that the transient filter state was OK, and bit 0 went on at the same time to indicate a good summary. Somewhat later, bit 8 went off when CW injections began, and bit 7 went off at the same time to indicate the presence of any hardware injection signal.

Note that tinj seems to have been stopped (or died) at about GPS 1117456410, according to the tinj.log file, and that's consistent with a lack of bit transitions in CAL-INJ_ODC after that time.
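In case it helps anyone reproduce this kind of check, here is a small Python sketch of the bookkeeping FrBitmaskTransitions is doing (this is not that tool, just an illustration): given successive integer samples of a bitmask channel, report which masked bits turned on or off at each change.

def bit_transitions(samples, mask=0xFFFFFFF):
    """Yield (index, bits_on, bits_off) for each change in the masked value."""
    prev = samples[0] & mask
    for i, raw in enumerate(samples[1:], start=1):
        cur = raw & mask
        if cur == prev:
            continue
        changed = cur ^ prev
        bits_on = [b for b in range(changed.bit_length()) if changed >> b & 1 and cur >> b & 1]
        bits_off = [b for b in range(changed.bit_length()) if changed >> b & 1 and not cur >> b & 1]
        yield i, bits_on, bits_off
        prev = cur

# Example with three of the CAL-INJ_ODC values quoted above.
for i, on, off in bit_transitions([0x00003e7f, 0x0000347f, 0x00003e7f]):
    print(i, "on:", on, "off:", off)
# Sample 1: bits 9 and 11 turn off; sample 2: they turn back on,
# matching the ~0.086 s burst-injection window in the listing above.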
Checking the other bitmask channels, there's a curious pattern in how the hardware injection bits from CAL-INJ_ODC are getting summarized in ODC-MASTER:

pshawhan@> ./FrBitmaskTransitions -c H1:ODC-MASTER_CHANNEL_OUT_DQ -m 7000000 /archive/frames/ER7/raw/H1/H-H1_R-11174/H-H1_R-11174[12345]*.gwf
1117410048.000000 0x07000000 Data starts
1117411714.391540 0x05000000 25 off
1117411714.391723 0x07000000 25 on
1117411714.391845 0x05000000 25 off
1117411714.475463 0x07000000 25 on
1117419953.391540 0x05000000 25 off
1117419953.391723 0x07000000 25 on
1117419953.391845 0x05000000 25 off
1117419953.475463 0x07000000 25 on
1117427153.391540 0x05000000 25 off
1117427153.391723 0x07000000 25 on
1117427153.391845 0x05000000 25 off
1117427153.475463 0x07000000 25 on
...

The same pattern continues for all 7 of the injections. We can see that there's a brief interval (0.000183 s = 3 samples at 16384 Hz) which is marked as a burst injection, then "no injection" for 2 samples, then back on for 0.083618 s = 1370 samples. Knowing how the sine-Gaussian ramps up from vanishingly small amplitude, I think this is real in the sense that the first whisper of a nonzero cycle returns to effectively zero for a couple of samples before it grows enough to be consistently "on". It is also interesting to see that the interval ends in ODC-MASTER slightly (5.0 ms) earlier than it does in CAL-INJ_ODC. I suspect that this is OK: the model really runs at 16384 Hz, but CAL-INJ_ODC is a down-sampled record of the real bit activity.

I also confirmed that the injection bit got carried over to GDS-CALIB_STATE_VECTOR:

pshawhan@> ./FrBitmaskTransitions -c H1:GDS-CALIB_STATE_VECTOR -m 1ff /archive/frames/ER7/hoft/H1/H-H1_HOFT_C00-11174/H-H1_HOFT_C00-11174[012]*.gwf
1117401088.000000 0x000001c8 Data starts
1117408113.375000 0x000001cc 2 on
1117408119.375000 0x000001dd 0 on, 4 on
1117408143.750000 0x000001df 1 on
1117408494.812500 0x000001dd 1 off
1117408494.937500 0x000001c8 0 off, 2 off, 4 off
1117411714.375000 0x00000148 7 off
1117411714.500000 0x000001c8 7 on
1117419953.375000 0x00000148 7 off
1117419953.500000 0x000001c8 7 on
1117420916.625000 0x000001cc 2 on
1117420922.625000 0x000001dd 0 on, 4 on
1117421174.000000 0x000001df 1 on
1117424707.062500 0x000001dd 1 off
1117424954.437500 0x000001c8 0 off, 2 off, 4 off
1117426693.000000 0x000001cc 2 on
1117426699.000000 0x000001dd 0 on, 4 on
1117426833.625000 0x000001df 1 on
1117427153.375000 0x0000015f 7 off
1117427153.500000 0x000001df 7 on
...

Bit 7 indicates burst injections. It toggles for 2 16-Hz samples at the appropriate times.
As of 7:00 am PDT on Friday, June 5, there have been 13 burst hardware injections at LHO over the last two days. All of these are represented by segments (each 1 second long) in the DQ segment database, and can be retrieved using a command like:

pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_BURST --gps-start-time 1117300000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '

These time intervals also agree with the bits in the H1:GDS-CALIB_STATE_VECTOR channel in the H1_HOFT_C00 frame data on the Caltech cluster, except that there is a gap in the h(t) frame data (due to filters being updated and the h(t) process being restarted, as noted in an email from Maddie). Similar DB queries show no H1 CBC injection segments yet, but H1 CW injections are ongoing:

pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_CW --gps-start-time 1117400000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
1117406895 1117552800

By the way, by repeating that query I observed that the CW segment span was extended by 304 seconds about 130 seconds after the segment ended. Therefore, the latency of obtaining this information from the segment database ranges from 130 to 434 seconds, depending on when you query it (at least under current conditions). I also did similar checks at LLO, which revealed a bug in tinj -- see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18517 .
Kiwamu, Elli
We are interested in whether we can see any changes in the spot size/location on the ITMs due to thermal drift during full lock. We have decreased the exposure on the ITMY SPOOL and ITMX SPOOL cameras from 100,000 microseconds to 1,000 microseconds. (These cameras are not used by any interferometer systems.) We will take images with these cameras at 5-minute intervals for the next 24 hours.
I've changed the interval at which the cameras automatically take images back to the nominal 60 min.
Dan, Travis
Tonight during our long lock we measured the decay time constant of the ITMX bounce mode. At 10:10 UTC we set the intent bit to "I solemnly swear I am up to no good" and flipped the sign on the ITMX_M0_DARM_DAMP_V filter bank and let the bounce mode ring up until it was about 3e-14 m/rt[Hz] in the DARM spectrum. Then, we zeroed the damping gain and let the mode slowly decay over the next few hours.
We measured the mode's Q by fitting the decay curve in two different datasets. The first dataset is the 16 Hz-sampled output of Sheila's new RMS monitors; the ITMX bandpass filter is a 4th-order Butterworth with corner frequencies of 9.83 and 9.87 Hz (the mode frequency is 9.848 Hz +/- 0.001 Hz). These data were low-passed at 1 Hz and fit with an exponential curve.
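As a reference for the first method, here is a minimal Python sketch under stated assumptions (a synthetic ring-down stands in for the real 16 Hz RMS-monitor output): low-pass at 1 Hz, fit an amplitude decay A*exp(-t/tau), and convert the time constant to Q = pi*f0*tau.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.optimize import curve_fit

fs, f0 = 16.0, 9.848                       # monitor sample rate, bounce-mode frequency

# Synthetic stand-in for a few hours of ring-down (replace with the real channel).
t = np.arange(0.0, 4 * 3600.0, 1.0 / fs)
tau_true = 594000.0 / (np.pi * f0)         # the Q quoted below, for illustration
rms = 3e-14 * np.exp(-t / tau_true) * (1.0 + 0.01 * np.random.randn(t.size))

b, a = butter(4, 1.0, fs=fs)               # 1 Hz low-pass
smooth = filtfilt(b, a, rms)

decay = lambda t, A, tau: A * np.exp(-t / tau)
popt, pcov = curve_fit(decay, t, smooth, p0=[smooth[0], 1e4])
print("fitted Q =", np.pi * f0 * popt[1])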
For the second dataset I followed Koji's demodulation recipe from the OMC 'beacon' measurement. I collected 20 seconds of DELTAL_EXTERNAL_DQ data every 200 seconds; band-passed between 9 and 12 Hz, demodulated at 9.484 Hz, and low-passed at 2 Hz; and collected the median value of the sum of the squares of the demod products. Some data were discarded at the edges of each 20-sec segment to avoid filter transients. These every-200-sec data points were fit with an exponential curve.
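And a companion sketch of the second method (the filter orders, demod frequency, and edge trim below are my assumptions, not Koji's recipe verbatim): band-pass around the mode, demodulate, low-pass the I/Q products, and keep the median of I^2 + Q^2 for each 20 s stretch.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16384.0
f_mode = 9.848                                     # demodulation frequency used here

def demod_amp_sq(x, fs, f_demod, edge_sec=2.0):
    # Median of I^2 + Q^2 over one stretch, trimming filter transients at the edges.
    t = np.arange(x.size) / fs
    sos_bp = butter(4, [9.0, 12.0], btype="band", fs=fs, output="sos")
    xb = sosfiltfilt(sos_bp, x)
    i = xb * np.cos(2.0 * np.pi * f_demod * t)
    q = xb * np.sin(2.0 * np.pi * f_demod * t)
    sos_lp = butter(4, 2.0, fs=fs, output="sos")   # 2 Hz low-pass on the products
    i, q = sosfiltfilt(sos_lp, i), sosfiltfilt(sos_lp, q)
    n = int(edge_sec * fs)
    return np.median(i[n:-n] ** 2 + q[n:-n] ** 2)

# One 20 s stretch of synthetic data with a line at the mode frequency.
t = np.arange(0.0, 20.0, 1.0 / fs)
x = 1e-14 * np.sin(2.0 * np.pi * f_mode * t) + 1e-16 * np.random.randn(t.size)
print(demod_amp_sq(x, fs, f_mode))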
Results attached; the two methods give different results for Q:
RMS channel: 594,000
Demodulated DARM_ERR: 402,000
I fiddled with the data collection and filtering parameters for both fits, but the results were robust. When varying parameters for each method, the results for Q were repeatable to within +/- 2,000; this gives some sense of the lower limit on the uncertainty of the measurement. (The discrepancy between the two methods gives a sense of the upper limit...) Given a choice between the two, I think I trust the RMS channel more; the demod path has more moving parts, and there could be a subtlety in the filtering that I am overlooking. The code is attached.
I figured out what was going wrong with the demod measurement - not enough low-passing before the decimation step: the violin modes at ~510 Hz were beating against the 256 Hz sample rate. With another layer of anti-aliasing (a sketch of the decimation step follows the numbers below), the demod results are in very good agreement with the RMS channel:
RMS channel: 594,400
Demodulated DARM_ERR: 593,800
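For the record, a minimal sketch of the decimation fix (scipy stands in for the actual code): downsample to 256 Hz with explicit anti-alias filtering, done in two stages, so that content near ~510 Hz cannot fold down into the band around the bounce mode.

import numpy as np
from scipy.signal import decimate

fs_in, fs_out = 16384, 256                   # 16384 Hz -> 256 Hz is a factor of 64

x = np.random.randn(fs_in * 20)              # stand-in for 20 s of DELTAL data

# scipy's decimate applies a zero-phase low-pass before downsampling; doing the
# factor-of-64 reduction in two factor-of-8 stages keeps the filters well behaved.
y = decimate(decimate(x, 8, zero_phase=True), 8, zero_phase=True)
print(y.size, "samples at", fs_out, "Hz")    # 5120 samples = 20 s at 256 Hz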
To see what we might expect, I took the current GWINC model of suspension thermal noise and did the following:
1) Removed the horizontal thermal noise so I was only plotting vertical.
2) Updated the maraging steel phi to reflect the recent measurement (LLO alog 16740) of the Q of the UIM blade internal mode of 4 x 10^4. (It is a phi of 10^-4, i.e. a Q of 10^4, in the current GWINC.) I did this to give a better estimate of the vertical noise from higher up the chain.
3) Plotted only around the thermal noise peak and used 1 million points to be sure I resolved it.
The resulting curve is attached. The Q looks to be approximately 100K, which is less than what was reported in this log. That is encouraging to me. I know the GWINC model is not quite right - it doesn't reflect the tapered shape and FEA results. However, seeing a Q in excess of what we predicted in that model is definitely in the right direction.
Here we take the Mathematica model with the parameter set 20150211TMproduction and look at varying some of the loss parameters to see how the model compares with these measurements. The thermal noise amplitude in the vertical direction for the vertical bounce mode is tabulated around the resonance, and we take the full width at 1/√2 height to calculate the Q (equivalent to the ½ height for a power spectrum). With the recently measured mechanical loss value for maraging steel blade springs of 2.4e-5, the Mathematica model predicts a Q of 430,000. This is a slightly lower Q than the measurement here, but at this level the loss of the wires and the silica is starting to have an effect, and so small differences between the model and reality could show up. Turning off the loss in the blade springs altogether only takes the Q to 550,000, so other losses are sharing equally in this regime. The attached Matlab figures show the mechanical loss factor of maraging steel versus the predicted bounce mode Q and against total loss, plus the resonance as a function of loss.

Angus, Giles, Ken & Borja
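As a cross-check of the width-to-Q conversion described above, here is a small Python sketch (a toy Lorentzian stands in for the tabulated Mathematica output): find the full width at 1/√2 of the peak amplitude, i.e. the half-power width, and take Q = f0 / Δf.

import numpy as np

f0, Q_model = 9.848, 430000.0                # bounce-mode frequency, model Q

# Finely tabulated amplitude response around the resonance (arbitrary units).
f = np.linspace(f0 - 5e-4, f0 + 5e-4, 1_000_000)
amp = 1.0 / np.sqrt((f0**2 - f**2) ** 2 + (f0 * f / Q_model) ** 2)

threshold = amp.max() / np.sqrt(2.0)         # 1/sqrt(2) of the peak height
in_band = f[amp >= threshold]
delta_f = in_band[-1] - in_band[0]
print("recovered Q =", f0 / delta_f)         # ~= Q_model for a high-Q peak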
Since there has been some modeling afoot, I wanted to post the statistical error from the fits above, to give a sense of the [statistical] precision on these measurements. The best-fit Q value and the 67% confidence interval on the two measurements for the bounce mode are:
RMS channel: 594,410 +/- 26
Demodulated DARM_ERR: 594,375 +/- 1590
The data for the measurements are attached. Note that this is just the statistical error of the fit -- I am not sure what systematics are present that could bias the measurement in one direction or another. For example, we did not disable the top-stage local damping on ITMX during this measurement, only the DARM_CTRL --> M0 damping that is bandpassed around the bounce mode. There is also optical lever feedback to L2 in pitch, and ASC feedback to L2 in pitch and yaw from the TRX QPDs (although this is very low bandwidth). In principle this feedback could act to increase or decrease the observed Q of the mode, although the drive at the bounce mode frequency is probably very small.
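For completeness, a short sketch (using the same kind of synthetic ring-down as in the earlier example, not the actual analysis code) of how the roughly 67%, i.e. one-sigma, statistical interval on Q follows from the covariance matrix returned by the fit.

import numpy as np
from scipy.optimize import curve_fit

f0 = 9.848
t = np.linspace(0.0, 4 * 3600.0, 230400)
data = 3e-14 * np.exp(-t / 19200.0) * (1.0 + 0.001 * np.random.randn(t.size))

decay = lambda t, A, tau: A * np.exp(-t / tau)
popt, pcov = curve_fit(decay, t, data, p0=[3e-14, 1e4])

Q = np.pi * f0 * popt[1]                     # Q = pi * f0 * tau for an amplitude decay
sigma_Q = np.pi * f0 * np.sqrt(pcov[1, 1])   # one-sigma statistical uncertainty
print(f"Q = {Q:.0f} +/- {sigma_Q:.0f} (statistical only)")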