Dan, Corey
Tonight we followed up the measurement of ITMX's bounce mode Q with a measurement of the roll mode. Same procedure: flipped the sign on the damping gain, excited the mode, zeroed the damping gain, allowed the mode to decay.
This time we lost lock after two hours, but this was enough for a clean measurement. Also, as mentioned in a comment to the bounce mode entry, I did a better job of anti-aliasing before decimating the data for demodulation, and the fit to the RMS channel and the fit to the demodulated data are in good agreement.
RMS channel: 456,700
Demodulated DARM_ERR: 458,000
The mode frequency is 13.978Hz, plots attached. Btw, I chose ITMX for these measurements because it receives the least amount of actuation -- ETMY gets DARM_CTRL, both ETMs are controlled by DHARD and CHARD, and ITMY is used for the SRCL FF. We feed DSOFT and CSOFT back to the ITMs, and they both have oplev damping in pitch, but these loops should have very little gain at the bounce/roll frequencies. (Also there are bounce/roll bandstops in the L2 and OL filter banks.) So, there's not much chance that the ITMX modes are being damped/excited by control loops.
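The ringdown-to-Q conversion behind these numbers can be sketched as follows: fit an exponential A·exp(-t/τ) to the decaying mode envelope and convert via Q = π·f0·τ. The fit below runs on synthetic data built from this entry's numbers (13.978 Hz, Q ≈ 456,700); channel access and envelope extraction are omitted, and the function name is my own.

```python
# Sketch of the ringdown fit used for these Q measurements: fit an
# exponential to the mode envelope and convert decay time to Q.
import numpy as np
from scipy.optimize import curve_fit

def ringdown_q(t, envelope, f0):
    """Fit A*exp(-t/tau) to a decaying envelope; return Q = pi * f0 * tau."""
    model = lambda t, a, tau: a * np.exp(-t / tau)
    p0 = (envelope[0], (t[-1] - t[0]) / 2.0)     # rough initial guess
    (a, tau), cov = curve_fit(model, t - t[0], envelope, p0=p0)
    return np.pi * f0 * tau

# Synthetic check: a 13.978 Hz mode with Q = 456,700 has
# tau = Q / (pi * f0), roughly 10,400 s.
f0, q_true = 13.978, 456700.0
t = np.arange(0.0, 7200.0, 1.0 / 16)             # two hours at 16 Hz
env = 3e-14 * np.exp(-t / (q_true / (np.pi * f0)))
print(round(ringdown_q(t, env, f0)))             # -> 456700
```

The two-hour lock stretch covers about half a decay time at this Q, which is why it was "enough for a clean measurement."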
Noticed L1 went down around 10:30utc. I talked with Danny at LLO to confirm they were down (he estimated the quickest they could be back up would be 40min), so Dan & I took H1 out of Science Mode at ~10:33utc (3:33amPST) to perform a list of items we had been waiting to do.
#1 Ring up Roll mode for ITMx
The Roll mode was rung up by changing H1:SUS-ITMX_M0_DARM_DAMP_R_GAIN from its initial value of 20 to a negative value (we went from -5 up to -200), observing the increased Roll mode, then damping it a little (took the gain to +2.0), and then setting the gain to 0.0 (where it was left). Dan will take some measurements of the Roll mode.
#2 OMC Transitioned From "QPD Alignment" to "Dither Alignment"
Ramped H1:OMC-ASC_MASTERGAIN from 0.1 down to 0.0. Took the switch from QPD to Dither. And then took the Master Gain to 0.05. After this, we watched the OMC alignment lines (a group of lines between 1660-1760Hz) & watched them decrease as Dither improved the alignment. We kept the OMC in this state.
#3 ETMy Violin 1st Harmonic Suppression
Referencing Dan's alog #17365, we switched the MODE3 filters for ETMY (on the SUS_CUST_L2_DAMP_MODE_FILTERS.adl medm) & played with the gain. We were surprised that we ended up with a -200 for the gain (originally tried +100 & even higher).
H1 back to Science Mode at 11:03utc (4:03amPST)
NOTE: Our range appeared to take a little step up to 54Mpc, but we've seen it nosedive down to 43Mpc...hopefully it'll stabilize. Do not see any environmental effect leading to this sudden drop in the last 30min. We currently have a +7.5hr lock.
[Time for lunch.]
Winds have all died down.
H1 has stabilized, but we had a 9Mpc step down starting at ~11:30utc & have continued to stay at this range.
(Dan, Corey)
TJ handed off a nice H1 segment with a range hovering at a steady 51Mpc (double-coincident with L1's 59Mpc which has trended down to 57Mpc). We are Undisturbed, with Guardian at "LSC FF" and a power of 17.6W.
For the last four hours winds have taken a little bump up to ~12mph (which is not considered high). All seismic bands are now low (useism [0.1-0.3Hz] fairly low, below 0.1um/s).
I notice the LVEA & EY lights have been left ON!
TO DO (if L1 drops):
As of 10:00utc, going on ~4hrs of double coincidence. Winds have died down to under 10mph locally.
While going through the Ops Checksheet, had a few notes:
16:00 - Started the shift with Sheila and Evan trying to figure out why it isn't locking.
16:08 - Full lock, Intent bit set
16:16 - Lockloss
21:20 - Full lock, Intent bit set
23:18 - Intent bit OFF, Dan adjusted gain, Intent bit ON
0:00 - Interferometer still locked, have fun Corey!
Sheila, Evan
The SR3 pitch oplev servo appears to be causing interferometer instability, so it is now turned off.
Starting this morning, we found that we could not lock the interferometer for more than 15 minutes or so, even at 3 W with DARM controlled with AS45Q. The symptom was that irregular spikes would start to appear in POP90 a few minutes after acquiring lock. One could also see the effect as 0.9 Hz ASC oscillation in the BS, SRM, SR2, dETM, and sometimes cETM loops.
We initially did not suspect the SR3 oplev servo, since it worked fine during last night's 10 hour lock. We instead tried turning off our ASC loops, but that did not solve the issue. We then sat in the DRMI configuration with arms off resonance and no ASC, but that did not solve the issue. We turned off ITM oplev damping servos, but that did not solve the issue. Then finally we turned off the SR3 pitch oplev servo, and we also disabled the top-stage PRM and SRM LSC offloading. This seems to have solved the issue.
Because Sheila and Elli had at one point today turned off the offloading but still were not able to achieve a long lock, we suspect this means that the SR3 oplev servo is at fault. However, it is not obvious to us why it takes a few minutes for this spiking to start happening (one would think that if it is caused by oplev glitching, then one would see the spikes immediately). Also, at first glance there does not seem to be glitching in the oplev sum.
It turns out that the problem we have been having today, which seems like a 1-5 Hz alignment problem, is actually caused by the very-low-frequency offloading of PRM and SRM from M2 to M1. The length feedbacks themselves are stable, and very slow, but there must be some interaction with the alignment that means that one or both of these makes the IFO unstable.
We're not going to try turning them on again. We are turning the SR3 OpLev damping back on, since that seemed to solve the lockloss problems encountered on Travis' tuesday owl shift (alog 18777).
In response to alog 18831, work permit #5243 was put in for an indicator light on the OPS_OVERVIEW_CUSTOM.adl screen. If the test point on H1CALCS_GDS_TP goes below 187, the light will go from green to red. This way the operator can easily see when the injections have stopped working as they should.
Chris Pankow, Jeff Kissel, Adam M, Eric Thrane

We restarted tinj at LHO (GPS=1117411494) to resume transient injections. We scheduled a burst injection for GPS=1117411713. The burst waveform is in svn. It is, I understand, a white noise burst. It is, for the time being, our standard burst injection waveform until others are added. The injection completed successfully. Following this test, we updated the schedule to inject the same waveform every two hours over the next two days. The next injection is scheduled for GPS=1116241935. However, this schedule may change soon as Chris adds new waveforms to the svn repository. We are not carrying out transient injection tests at LLO because the filter bank needs to be updated and we cannot be sure that the filters are even close to correct. Adam thinks they will be updated by ~tomorrow.
First, some details: the injection is actually a sine-Gaussian with parameters:

SineGaussian
t0 = 1116964989.435322284
q  = 28.4183770634
f0 = 1134.57994534

The t0 can be safely ignored since this injection would have no counterpart in L1, but it should be noted that this injection *does* have all relevant antenna patterns and polarization factors applied to it (e.g. it is made to look like a "real" GW signal). I attach the time and frequency domain plots of the waveform -- however, they are relative to an O1-type spectrum and so may not be indicative of the actual performance of the instrument at this period of time. Given the most recent spectra and the frequency content of the injection, this could be weaker by up to a factor of ~2-3. The characteristic SNRs I calculated using the O1-type spectrum:

Waveform SineGaussian at 1116964989.435 has SNR in H1 of 7.673043
Waveform SineGaussian at 1116964989.435 has SNR in L1 of 20.470634
Network SNR for SineGaussian at 1116964989.435 is 21.861438

So, it's possible that this injection had an SNR as low as ~2, not accounting for variance from the noise. The excitation channel (H1:CAL-INJ_TRANSIENT_EXCMON, trended) does show a non-zero value, and the "count" value is consistent with the amplitude of the strain; another monitor (H1:CAL-INJ_HARDWARE_OUT_DQ, at a higher sample rate) also shows the full injection, though it is not calibrated. So, the injection was successfully scheduled and looks to have been made. I also did an omega scan of the latter channel, and the signal is at the correct frequency (but has, notably, a very long duration). I did a little poking around to see if this showed up in h(t) (using H1:GDS-CALIB_STRAIN). Unfortunately, it is not visible in the spectrogram of H1:GDS-CALIB_STRAIN (attached). It may be some peculiarity of the scheduling, but it's interesting to note that the non-zero excitation occurs about a second after the GPS time that Eric quotes.
More interestingly, this does not seem to have fired off the proper bits in the state vector. H1:GDS-CALIB_STATE_VECTOR reports the value 456 for this period, which corresponds to the data being okay and gamma being okay, but no injection taking place. It also appears to mean that no calibration was taking place (bits 3 and 4 are off). I'm guessing I'm just misinterpreting the meaning of this. I'd recommend, for future testing, a scale factor of 3 or 4, to make the injection *clearly* visible and give us a point of reference. We should also close the loop with the ODC / calibration folks to see if something was missed.
I can see that burst injections have been occurring on schedule, generally. The schedule file (which you currently have to log into h1hwinj1 to view) reads, in part:

...
1117411713 1 1 burst_test_
1117419952 1 1 burst_test_
1117427152 1 1 burst_test_
...

Compare that to the bit transitions in CAL-INJ_ODC:

pshawhan@> ./FrBitmaskTransitions -c H1:CAL-INJ_ODC_CHANNEL_OUT_DQ /archive/frames/ER7/raw/H1/H-H1_R-11174/*.gwf -m fffffff
1117400000.000000 0x00003f9e Data starts
1117400027.625000 0x00003fde 6 on
1117400028.621093 0x00003fff 0 on, 5 on
1117406895.000000 0x00003e7f 7 off, 8 off
1117411714.394531 0x0000347f 9 off, 11 off
1117411714.480468 0x00003e7f 9 on, 11 on
1117419953.394531 0x0000347f 9 off, 11 off
1117419953.480468 0x00003e7f 9 on, 11 on
1117427153.394531 0x0000347f 9 off, 11 off
1117427153.480468 0x00003e7f 9 on, 11 on
...

Bits 9 and 11 go ON when a burst injection begins, and off when it ends. The offset of the start time is because the waveform file initially contains zeroes. To be specific, the first 22507 samples in the waveform file (=1.37372 sec) are all exactly zero; then the next several samples are vanishingly small, e.g. 1e-74. At t=1.39453 sec = 22848 samples into the waveform file, the strain amplitude of the waveform is about 1e-42. At the end time of the injection (according to the bitmask transition), the strain amplitude has dropped to about 1e-53, but I believe the filtering extends the waveform some. To give an idea of how much, the end time is about 130 samples, or ~8 msec, after the strain amplitude drops below 1e-42. Earlier in the record, bit 6 went on to indicate that the transient filter gain was OK, bit 5 went on to indicate that the transient filter state was OK, and bit 0 went on at the same time to indicate a good summary. Somewhat later, bit 8 went off when CW injections began, and bit 7 went off at the same time to indicate the presence of any hardware injection signal.
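The bookkeeping in the FrBitmaskTransitions listing above can be sketched in a few lines: given two successive bitmask values, report which bits turned on or off. The mask values below are taken from the CAL-INJ_ODC listing; the function itself is a hypothetical stand-in, not the actual FrBitmaskTransitions code.

```python
# Sketch of the bit-transition bookkeeping: compare successive bitmask
# values and list which bits changed state.
def bit_transitions(prev, curr, nbits=16):
    changed = prev ^ curr
    on = [b for b in range(nbits) if changed >> b & 1 and curr >> b & 1]
    off = [b for b in range(nbits) if changed >> b & 1 and not curr >> b & 1]
    return on, off

# e.g. the transition at 1117411714.394531: 0x3e7f -> 0x347f
print(bit_transitions(0x3e7f, 0x347f))   # -> ([], [9, 11])
```

Applied to the listing above, this reproduces the "9 off, 11 off" / "9 on, 11 on" pairs that bracket each injection.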
Note that tinj seems to have been stopped (or died) at about GPS 1117456410, according to the tinj.log file, and that's consistent with a lack of bit transitions in CAL-INJ_ODC after that time.
Checking the other bitmask channels, there's a curious pattern in how the hardware injection bits from CAL-INJ_ODC are getting summarized in ODC-MASTER:

pshawhan@> ./FrBitmaskTransitions -c H1:ODC-MASTER_CHANNEL_OUT_DQ -m 7000000 /archive/frames/ER7/raw/H1/H-H1_R-11174/H-H1_R-11174[12345]*.gwf
1117410048.000000 0x07000000 Data starts
1117411714.391540 0x05000000 25 off
1117411714.391723 0x07000000 25 on
1117411714.391845 0x05000000 25 off
1117411714.475463 0x07000000 25 on
1117419953.391540 0x05000000 25 off
1117419953.391723 0x07000000 25 on
1117419953.391845 0x05000000 25 off
1117419953.475463 0x07000000 25 on
1117427153.391540 0x05000000 25 off
1117427153.391723 0x07000000 25 on
1117427153.391845 0x05000000 25 off
1117427153.475463 0x07000000 25 on
...

The same pattern continues for all 7 of the injections. We can see that there's a brief interval (0.000183 s = 3 samples at 16384 Hz) which is marked as a burst injection, then "no injection" for 2 samples, then back on for 0.083618 s = 1370 samples. Knowing how the sine-Gaussian ramps up from vanishingly small amplitude, I think this is real in the sense that the first whisper of a nonzero cycle returns to effectively zero for a couple of samples before it grows enough to be consistently "on". It is also interesting to see that the interval ends in ODC-MASTER slightly (5.0 ms) earlier than it does in CAL-INJ_ODC. I suspect that this is OK: the model really runs at 16384 Hz, but CAL-INJ_ODC is a down-sampled record of the real bit activity.
I also confirmed that the injection bit got carried over to CALIB-STATE_VECTOR:

pshawhan@> ./FrBitmaskTransitions -c H1:GDS-CALIB_STATE_VECTOR -m 1ff /archive/frames/ER7/hoft/H1/H-H1_HOFT_C00-11174/H-H1_HOFT_C00-11174[012]*.gwf
1117401088.000000 0x000001c8 Data starts
1117408113.375000 0x000001cc 2 on
1117408119.375000 0x000001dd 0 on, 4 on
1117408143.750000 0x000001df 1 on
1117408494.812500 0x000001dd 1 off
1117408494.937500 0x000001c8 0 off, 2 off, 4 off
1117411714.375000 0x00000148 7 off
1117411714.500000 0x000001c8 7 on
1117419953.375000 0x00000148 7 off
1117419953.500000 0x000001c8 7 on
1117420916.625000 0x000001cc 2 on
1117420922.625000 0x000001dd 0 on, 4 on
1117421174.000000 0x000001df 1 on
1117424707.062500 0x000001dd 1 off
1117424954.437500 0x000001c8 0 off, 2 off, 4 off
1117426693.000000 0x000001cc 2 on
1117426699.000000 0x000001dd 0 on, 4 on
1117426833.625000 0x000001df 1 on
1117427153.375000 0x0000015f 7 off
1117427153.500000 0x000001df 7 on
...

Bit 7 indicates burst injections. It comes on for 2 16-Hz samples at the appropriate times.
As of 7:00am PDT on Friday, June 5, there have been 13 burst hardware injections at LHO over the last two days. All of these are represented by segments (each 1 second long) in the DQ segment database, and can be retrieved using a command like:

pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_BURST --gps-start-time 1117300000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '

These time intervals also agree with the bits in the H1:GDS-CALIB_STATE_VECTOR channel in the H1_HOFT_C00 frame data on the Caltech cluster, except that there is a gap in the h(t) frame data (due to filters being updated and the h(t) process being restarted, as noted in an email from Maddie). Similar DB queries show no H1 CBC injection segments yet, but H1 CW injections are ongoing:

pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_CW --gps-start-time 1117400000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
1117406895 1117552800

By the way, by repeating that query I observed that the CW segment span was extended by 304 seconds about 130 seconds after the segment ended. Therefore, the latency of obtaining this information from the segment database ranges from 130 to 434 seconds, depending on when you query it. (At least under current conditions.) I also did similar checks at LLO, which revealed a bug in tinj -- see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18517 .
10:03 Betsy, Kyle, Nutsinee into LVEA (Nutsinee to take pictures with cell phone of ITMX and ITMY spools, Betsy to retrieve equipment, Kyle to check on equipment)
10:13 Kyle out
10:14 Nutsinee out
10:15 Betsy out
14:22 Joe moving cabinets from OSB to staging building
~15:25 - 15:53 Greg moving cabinets from computer users room to staging building

I ran an initial alignment in the morning and made it to LSC_FF, but the range was unstable and the lock didn't last long. Had difficulty locking and staying locked the remainder of the shift. Evan and Sheila are continuing to track down the cause.
Scott L., Ed P.

Results from 5/18/15 thru 5/21/15 posted here.
6/1/15 - Cleaning crew returns and sets up after last week off. Cleaned 40 meters ending 3.8 meters north of HNW-4-060.
6/2/15 - Cleaned 49 meters ending 11.4 meters north of HNW-4-062.
6/3/15 - Cleaned 33.5 meters ending at HNW-4-064. Removed lights and relocated equipment to next section north and started hanging lights. Safety meeting.
Sheila, Elli
This morning we added filters to H1:SUS-PRM_M1_LOCK_L and H1:SUS-SRM_M1_LOCK_L to be used for offloading the M2_LOCK stage. The motivation was that last night the M2 coil drivers were approaching saturation, and also to see if this might address the 2^16 issue (alog 18815). The new filters are zp0.01:0 in FM1, zp0:0.01 in FM2, and -90dB gain in FM3, and they are engaged with a gain of -0.2. The filters are turned on by the ISC_DRMI guardian at OFFLOAD_DRMI. We were having trouble locking this afternoon so we turned off these filters for a while, but they are back on again now as they didn't seem to be causing the locking difficulties.
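To illustrate the offloading idea (not the actual foton filters): the M1 path should carry the very-low-frequency content of the M2 drive. The sketch below assumes that "zp0.01:0" denotes a zero at 0.01 Hz and a pole at DC (an integrator-like stage); that reading of the shorthand is an assumption, as are the evaluation frequencies.

```python
# Sketch: frequency response of a zero-at-0.01-Hz / pole-at-DC stage,
# which boosts gain below the 0.01 Hz corner and is ~unity above it.
import numpy as np
from scipy import signal

z = [2 * np.pi * 0.01]        # zero at 0.01 Hz, in rad/s
p = [0.0]                     # pole at DC
sys = signal.ZerosPolesGain(z, p, 1.0)

# magnitude well below / well above the 0.01 Hz corner
w = 2 * np.pi * np.array([1e-4, 1.0])
_, mag = signal.freqresp(sys, w)
print(np.abs(mag))            # large gain at 1e-4 Hz, ~unity at 1 Hz
```

Such a stage sends the DC and sub-10-mHz part of the length drive to the top stage while leaving the in-band drive on M2 essentially untouched.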
Kiwamu, Elli
We are interested in whether we can see any changes in the spot size/location on the ITMs due to thermal drift during full lock. We have decreased the exposure on the ITMY SPOOL and ITMX SPOOL cameras from 100000 microseconds to 1000 microseconds. (These cameras are not used by any interferometer systems.) We will take images with these cameras at 5 minute intervals for the next 24 hours.
I've changed the interval at which the cameras automatically take images back to the nominal 60min.
Jeff, Corey, Betsy, Dave, Keith, Eric

Betsy: After the reboots yesterday, it appears that the CW hardware injections have not been restarted. 1) Can you please restart them? 2) There is no indication that this injection is OFF when looking at the CAL_INJ_CONTROL.adl. Dave B simply noticed that there was no longer an EXC showing on his CDS overview screen.

Eric: I just restarted the injections at LHO (at GPS=1117406895). Here are the instructions:

log on to the h1hwinj
cd /data/scirun/O1/HardwareInjection/Details
bin/start_psinject

You will be prompted to give your name and your reason for starting the injections, which are saved to the psinject log. Injections begin 60s after you hit enter. I'll add these instructions to the DCC document.
Since I touched a ton of safe.snaps when setting the SDF monitor switches and accepted newly commissioned settings over the last week, I committed them all to svn. safe.snaps I committed:
| h1susitmy_safe.snap | h1lscaux_safe.snap |
| h1susbs_safe.snap | h1lsc_safe.snap |
| h1susitmx_safe.snap | h1ascimc_safe.snap |
| h1susmc2_safe.snap | h1asc_safe.snap |
| h1suspr2_safe.snap | |
| h1sussr2_safe.snap | h1omc_safe.snap |
| h1sussrm_safe.snap | h1psliss_safe.snap |
| h1susetmx_safe.snap | h1iscex_safe.snap |
| h1susetmy_safe.snap | h1iscey_safe.snap |
| h1susim_safe.snap | h1alsex_safe.snap |
| h1sustmsy_safe.snap | h1alsey_safe.snap |
This morning I found 1 channel in alarm on SDF and accepted it:
H1:OMC-ASC_DITHER_MASTER setpoint was set to ON, but is now OFF.
Likely this is left-over from last night, and in fact should be off.
Dan, Travis
Tonight during our long lock we measured the decay time constant of the ITMX bounce mode. At 10:10 UTC we set the intent bit to "I solemnly swear I am up to no good" and flipped the sign on the ITMX_M0_DARM_DAMP_V filter bank and let the bounce mode ring up until it was about 3e-14 m/rt[Hz] in the DARM spectrum. Then, we zeroed the damping gain and let the mode slowly decay over the next few hours.
We measured the mode's Q by fitting the decay curve in two different datasets. The first dataset is the 16Hz-sampled output of Sheila's new RMS monitors; the ITMX bandpass filter is a 4th-order butterworth with corner frequencies of 9.83 and 9.87Hz (the mode frequency is 9.848Hz, +/- 0.001 Hz). This data was lowpassed at 1Hz and fit with an exponential curve.
For the second dataset I followed Koji's demodulation recipe from the OMC 'beacon' measurement. I collected 20 seconds of DELTAL_EXTERNAL_DQ data every 200 seconds; bandpassed between 9 and 12Hz, demodulated at 9.848Hz, and lowpassed at 2Hz; and collected the median value of the sum of the squares of the demod products. Some data were neglected at the edges of each 20-sec segment to avoid filter transients. These every-200-sec datapoints were fit with an exponential curve.
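The demodulation recipe can be sketched as follows. Data access is omitted; `x` stands in for a hypothetical 20 s stretch of DELTAL_EXTERNAL_DQ at sample rate `fs`, and the filter orders are my own choices, not necessarily those used in the actual analysis.

```python
# Sketch of the demod recipe: bandpass around the mode, demodulate at
# the mode frequency, lowpass the I/Q products, take the median of
# I^2 + Q^2, dropping the segment edges to avoid filter transients.
import numpy as np
from scipy import signal

def demod_power(x, fs, f0, edge=2.0):
    t = np.arange(len(x)) / fs
    # bandpass around the mode (9-12 Hz in the log)
    b, a = signal.butter(4, [9.0, 12.0], btype="bandpass", fs=fs)
    xb = signal.filtfilt(b, a, x)
    # demodulate at f0 and lowpass the I/Q products at 2 Hz
    i = xb * np.cos(2 * np.pi * f0 * t)
    q = xb * np.sin(2 * np.pi * f0 * t)
    bl, al = signal.butter(4, 2.0, btype="low", fs=fs)
    il, ql = signal.filtfilt(bl, al, i), signal.filtfilt(bl, al, q)
    # drop `edge` seconds on each side before taking the median
    n = int(edge * fs)
    return np.median(il[n:-n] ** 2 + ql[n:-n] ** 2)

# toy stand-in: a pure 9.848 Hz tone of amplitude 1e-14 should give
# a demodulated power of about (A/2)^2
fs, f0 = 256.0, 9.848
t = np.arange(0, 20, 1 / fs)
x = 1e-14 * np.sin(2 * np.pi * f0 * t)
print(demod_power(x, fs, f0))
```

Collecting this number every 200 seconds and fitting an exponential to the resulting series (power decays twice as fast as amplitude) gives the decay time and hence Q.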
Results attached; the two methods give different results for Q:
RMS channel: 594,000
Demodulated DARM_ERR: 402,000
I fiddled with the data collection and filtering parameters for both fits, but the results were robust. When varying parameters for each method the results for Q were repeatable within +/- 2,000; this gives some sense of the lower limit on the uncertainty of the measurement. (The discrepancy between the two methods gives a sense of the upper limit...) Given a choice between the two I think I trust the RMS channel more; the demod path has more moving parts and there could be a subtlety in the filtering that I am overlooking. The code is attached.
I figured out what was going wrong with the demod measurement - not enough low-passing before the decimation step, the violin modes at ~510Hz were beating against the 256Hz sample rate. With another layer of anti-aliasing the demod results are in very good agreement with the RMS channel:
RMS channel: 594,400
Demodulated DARM_ERR: 593,800
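The aliasing mechanism described here is easy to demonstrate: a line near 510 Hz, naively downsampled to 256 Hz, folds down to 2 Hz and contaminates the demodulated band. This toy sketch (frequencies chosen to match the log; not the actual analysis code) compares naive slicing against scipy's decimate, which applies an anti-aliasing filter first.

```python
# Demonstrate why anti-aliasing before decimation matters: a 510 Hz
# "violin mode" aliases to 2 Hz when naively downsampled to 256 Hz.
import numpy as np
from scipy import signal

fs = 16384.0
t = np.arange(0, 4, 1 / fs)
# a 10 Hz "bounce mode" plus a 510 Hz "violin mode"
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 510 * t)

q = 64                      # 16384 Hz -> 256 Hz
naive = x[::q]              # aliases 510 Hz down to |510 - 2*256| = 2 Hz
clean = signal.decimate(x, q, ftype="fir")   # FIR anti-alias, then downsample

# compare power near 2 Hz, where the alias lands
f1, p1 = signal.welch(naive, fs / q, nperseg=512)
f2, p2 = signal.welch(clean, fs / q, nperseg=512)
band = (f1 > 1) & (f1 < 3)
print(p1[band].max() / p2[band].max() > 100)   # alias strongly suppressed
```

With the extra lowpass in place, the beat between the violin modes and the decimated sample rate disappears, which is why the demod result moved into agreement with the RMS channel.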
To see what we might expect, I took the current GWINC model of suspension thermal noise and did the following:
1) Removed the horizontal thermal noise so I was only plotting vertical.
2) Updated the maraging steel phi to reflect the recent measurement (LLO alog 16740) of a Q of 4 x 10^4 for the UIM blade internal mode. (It is a phi of 10^-4, i.e. Q of 10^4, in the current GWINC.) I did this to give a better estimate of the vertical noise from higher up the chain.
3) Plotted only around the thermal noise peak and used 1 million points to be sure I resolved it.
The resulting curve is attached. Q looks approx 100K, which is less than what was reported in this log. That is encouraging to me. I know the GWINC model is not quite right - it doesn't reflect the tapered shape and FEA results. However, to see a Q in excess of what we predicted in that model is definitely in the right direction.
Here we take the Mathematica model with the parameter set 20150211TMproduction and look at varying some of the loss parameters to see how the model compares with these measurements. The thermal noise amplitude in the vertical direction for the vertical bounce mode is tabulated around the resonance, and we take the full width at 1/√2 height to calculate the Q (equivalent to ½ height for the power spectrum). With the recently measured mechanical loss value for maraging steel blade springs of 2.4e-5, the Mathematica model predicts a Q of 430,000. This is a little bit lower than the measurement here, but at this level the loss of the wires and the silica is starting to have an effect, and so small differences between the model and reality could show up. Turning off the loss in the blade springs altogether only takes the Q to 550,000, so other losses are sharing equally in this regime. The attached Matlab figures show the mechanical loss factor of maraging steel versus predicted bounce mode Q and against total loss, plus the resonance as a function of loss.

Angus, Giles, Ken & Borja
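The width-to-Q conversion used here can be sketched numerically: for a resonance peak in amplitude, Q = f0 / Δf, where Δf is the full width at 1/√2 of the peak height (equivalently the FWHM of the power spectrum). This is a toy check on a synthetic Lorentzian, not the Mathematica model itself.

```python
# Extract Q from the full width at 1/sqrt(2) of an amplitude peak,
# tabulated on a fine frequency grid (as described in the text).
import numpy as np

def q_from_width(f, amp):
    peak = amp.max()
    f0 = f[amp.argmax()]
    above = f[amp >= peak / np.sqrt(2)]
    return f0 / (above[-1] - above[0])

# synthetic check: a resonance with Q = 430,000 at the 9.848 Hz bounce mode
f0, q_true = 9.848, 4.3e5
f = np.linspace(f0 - 5e-4, f0 + 5e-4, 200_001)   # fine grid to resolve the peak
amp = 1.0 / np.sqrt((1 - (f / f0) ** 2) ** 2 + (f / (f0 * q_true)) ** 2)
print(f"{q_from_width(f, amp):.3g}")
```

The need for a very fine grid is apparent: at Q ~ 4e5 the peak is only ~23 µHz wide, hence the "1 million points" mentioned in the GWINC comparison above.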
Since there has been some modeling afoot, I wanted to post the statistical error from the fits above, to give a sense of the [statistical] precision on these measurements. The best-fit Q value and the 67% confidence interval on the two measurements for the bounce mode are:
RMS channel: 594,410 +/- 26
Demodulated DARM_ERR: 594,375 +/- 1590
The data for the measurements are attached. Note that this is just the statistical error of the fit -- I am not sure what systematics are present that could bias the measurement in one direction or another. For example, we did not disable the top-stage local damping on ITMX during this measurement, only the DARM_CTRL --> M0 damping that is bandpassed around the bounce mode. There is also optical lever feedback to L2 in pitch, and ASC feedback to L2 in pitch and yaw from the TRX QPDs (although this is very low bandwidth). In principle this feedback could act to increase or decrease the observed Q of the mode, although the drive at the bounce mode frequency is probably very small.
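To show how a statistical error like "594,410 +/- 26" comes out of such a fit: the parameter covariance returned by the fitter gives a one-sigma error on the decay time, which propagates linearly to Q = π·f0·τ. This is a hedged illustration with invented noise level and sampling, not the attached analysis code.

```python
# Exponential ringdown fit with a statistical error bar on Q,
# using the parameter covariance from scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
f0, q_true = 9.848, 594400.0
tau_true = q_true / (np.pi * f0)            # amplitude decay time, ~19,200 s

t = np.arange(0, 20000.0, 200.0)            # one datapoint every 200 s
env = np.exp(-t / tau_true) + rng.normal(0, 1e-3, t.size)

model = lambda t, a, tau: a * np.exp(-t / tau)
(a_fit, tau_fit), cov = curve_fit(model, t, env, p0=(1.0, 1e4))
q = np.pi * f0 * tau_fit
dq = np.pi * f0 * np.sqrt(cov[1, 1])        # one-sigma error, propagated
print(f"Q = {q:.0f} +/- {dq:.0f}")
```

As noted above, this error bar only captures the scatter of the datapoints about the fit; any servo-induced bias on the true Q is invisible to it.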
In the first observation intent time from today, there are DAC glitches in MC2 M3. They don't obviously appear in DARM at the time we checked, but they do appear in a number of channels. The first plot is an Omega scan showing glitches in MC_L, and the second shows that they correspond to zero crossings in MC2 M3 control.
I grabbed Andy's images and lined them up in keynote. Thought folks might want to see how convincing this is for DAC zero crossing glitches.
Jim, Dave
We checked to see if the 18-bit DAC card for SUS MC2 M3 happened to be close to the new DC power supply. It is not; in fact, it is the furthest from the power supply.
At least twice tonight there has been glitching of MC2, similar to what Kiwamu described last week. It is visible in POP18 and AS90, as well as the MC2 witness sensors. I was not looking at the IMC REFL camera at the time, so I can't say whether it was the same kind of kick in yaw as before.