The MICH feedforward path into DARM has been retuned, since Gabriele recently found a nontrivial amount of coupling from 50 to 200 Hz.
First, the MICH → DARM and MICHFF → DARM paths were measured according to the prescription given here. Then the ratio of these TFs was vectfitted and loaded into FM4 of LSC-MICHFF. This is meant to stand in place of FM5, which is the old frequency-dependent compensation filter. The necessary feedforward gain has been absorbed into FM4, so the filter module gain should now be 1. These changes have been written into the LSC_FF state in the Guardian.
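For illustration, the ratio-and-fit step might look like the sketch below. This is a hypothetical stand-in, not the code that was run: it uses a simple linearized least-squares rational fit (Levy's method) in place of vectfit, and the transfer-function arrays are synthetic placeholders for the measured MICH → DARM and MICHFF → DARM TFs.

```python
import numpy as np

def levy_fit(f, H, n_num, n_den):
    """Least-squares rational fit H(s) ~ N(s)/D(s) with D(0) = 1 (Levy's method)."""
    s = 2j * np.pi * f
    Vn = np.vander(s, n_num + 1, increasing=True)         # columns 1, s, s^2, ...
    Vd = np.vander(s, n_den + 1, increasing=True)[:, 1:]  # columns s, s^2, ...
    A = np.hstack([Vn, -H[:, None] * Vd])
    # split into real/imag parts so the fitted coefficients come out real
    M = np.vstack([A.real, A.imag])
    b = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    num = x[:n_num + 1]                           # ascending powers of s
    den = np.concatenate([[1.0], x[n_num + 1:]])  # leading coefficient fixed to 1
    return num, den

# the ratio of the two measured TFs is the frequency response to fit
# (h_mich and h_michff here are synthetic placeholders, not real data)
f = np.logspace(1, 3, 200)                # 10 Hz - 1 kHz
h_mich = 1.0 / (1.0 + 1j * f / 150.0)     # stand-in for MICH -> DARM
h_michff = np.full_like(h_mich, 2.0)      # stand-in for MICHFF -> DARM
num, den = levy_fit(f, h_mich / h_michff, n_num=0, n_den=1)
```

The fitted numerator/denominator coefficients would then be converted to a zpk filter and loaded into the filter module.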
The attachment shows the performance of the new retuning compared to the old retuning. At the start of this exercise, I had already changed the FM gain of the "old" retuning from 0.038 (in the Guardian) to 0.045, as this removed a lot of the coherence near 100 Hz.
Also, I had previously widened the violin stopband in BS M2 L, but had not propagated this change to the analogous filter in LSC-MICHFF. This is now fixed. Also note that if any change is made to the invBS compensation filter in BS M3 L, this change must be propagated to LSC-MICHFF as well.
I did not have time to implement SRCL feedforward. I suspect it would be a quick job, and could be done parasitically with other tweaking activities.
Coherence between MICH and DARM before and after Evan's work. (As far as I can tell, yes, DARM got better and the range improved, although it is not evident from the SNSH channel, as it had problems around the time of Evan's work.) Let's see how much this coupling changes over time.
The LSC-MICHFF_TRAMP is now set to 3 versus the snap setting of 5 sec. This now shows in the LSC SDF diff table and should be reverted or accepted. Evan?
Snap setting of 5 seconds is probably better.
I injected pitch (270 Hz) and yaw (280 Hz) lines with the PSL piezo mirror and then tried to minimize them in DARM by adjusting values of the offsets of DOF2P and DOF1Y of the IMC WFS. It was a bit tricky because of ten-minute alignment time scales. But we reduced the injected pitch peak height by a factor of 2, and the yaw by 5 (we cared most about yaw because the PSL jitter peaks are mainly in yaw). The inspiral range increased by a few Mpc and the jitter peaks seemed down by a factor of a few, but I will wait until tomorrow to put in spectra in order to be sure that the improvement is stable. DOF2P was changed from 135.9 to 350 and DOF1Y from 47.5 to 200. I think we could do better with another 2 hours.
Robert, Kiwamu, Evan
This change now shows in the ASCIMC SDF Table.
19:48 - Intent Bit set back to Undisturbed. Lock is still going strong.
16:00 - Took the chair with both H1 and L1 locked.
17:31 - L1 lost lock, so I turned the Intent Bit off for Evan and Robert to retune LSC feedforward and tune WFS offsets.
This work should last about 2 hours.
Hannah Fair, Stefan Ballmer
Just a few days ago the interferometer tended to lose lock at full input power (23 W) in ~10 to 40 min. (Hence the run is at 17.3 W.) We went back to the following lock losses:
1) Jun 02 2015 01:49:44 UTC = GPS 1117245000
2) Jun 02 2015 03:53:19 UTC = GPS 1117252415
3) Jun 02 2015 08:13:04 UTC = GPS 1117268000
4) Jun 02 2015 11:33:54 UTC = GPS 1117280050
All of them showed as the immediate cause of lockloss a run-away of the MICH_Y ASC loop on a time scale of about 0.2 sec (Plots 1-4). However, lock losses 3) and 4) also showed a significant control-signal drift in PRC2, SRC1, and SRC2 (both yaw and pitch) over about a one-minute time scale before the lock loss. We therefore suggest lowering the MICH_Y ASC gain slightly next time we try 23 W, and then investigating the ASC signals, most likely for the SRC.
Scott L., Ed P., Chris S.
We found some cracks in the concrete enclosure while hanging lights this afternoon, 730 meters south of X-End. The cracks run the entire length of a single panel on both the east and west sides of the tube, up approximately 2.1 and 2.3 meters respectively from the slab. Picture 018 also shows the crack extending >2" horizontally at the end of the panel. I have not ever put anything on resource space, but I do have 20 more pictures that I will attempt to upload there. Only 15 meters of tube were cleaned today after moving lights and looking at the cracks.
Had trouble in the morning maintaining lock in the transition to REFL_TRANS. I was going to do an initial alignment per Sheila's suggestion, but then it locked. I have not done an alignment today. Ended the shift in science mode.
The computer video6 which was used to display the Range display and the ISC guardian overview has been replaced. The new computer is an Intel NUC, which is a 4 core i5 CPU, 16G RAM computer running Ubuntu 14.04, with a KDE display manager. The computer has a wireless keyboard which the operator manages. The hostname for the computer is nuc0. This computer cannot be managed with remote desktop at this time, only with the remote keyboard. Instructions for maintaining the displays on the computer are in the CDS wiki, under Operations/FiguresOfMerit.
[Daniel, Paul]
We have updated our LHO FINESSE file (https://dcc.ligo.org/LIGO-T1300904) to compute an estimate of the DCCP (DARM coupled-cavity pole) as requested by Jeff.
The DCCP value we calculated was 369 Hz. This is for a well mode matched and aligned setup.
The optics details were taken from the galaxy page for the various mirrors installed at LHO. We also tried to get more accurate arm losses, taken from the following aLOG entries:
Optic | Loss [ppm] | LHO aLOG ref. |
ITMX | 42 | N.A. |
ETMX | 78 | 16082 |
XARM | 120 | 16579 |
ITMY | 30 | N.A. |
ETMY | 125 | 15919 |
YARM | 155 | 15937 |
The caveat here is that the DCCP value has been seen to wander depending on alignment, as was reproduced in some of Gabriele's simulations, https://dcc.ligo.org/LIGO-G1500641. Here Gabriele found a slightly higher well-aligned DCCP value of ~380 Hz. We know that the DCCP value depends on many variables (alignment, mode mismatch in the SRC, arm losses, etc.), some of which are not yet well constrained with measurements.
Attached is a plot of the DARM TF from the Finesse model, which is well described by a single pole at 369 Hz.
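As a rough analytic cross-check on the Finesse number: for tuned (broadband) RSE, the signal-recycling cavity broadens the arm cavity pole by approximately (1 + r_s)/(1 − r_s), where r_s is the SRM amplitude reflectivity. A minimal sketch, assuming nominal aLIGO transmissivities (the exact values in the model may differ):

```python
import numpy as np

c = 299792458.0   # speed of light [m/s]
L = 3994.5        # arm cavity length [m]
T_itm = 0.0148    # ITM power transmissivity (assumed nominal value)
T_srm = 0.37      # SRM power transmissivity (assumed nominal value)

# arm cavity pole, lossless high-finesse approximation: f_arm = c*T_itm/(8*pi*L)
f_arm = c * T_itm / (8 * np.pi * L)

# broadband RSE broadens the pole by (1 + r_s)/(1 - r_s)
r_srm = np.sqrt(1.0 - T_srm)
f_cc = f_arm * (1 + r_srm) / (1 - r_srm)
print(f_arm, f_cc)  # roughly 44 Hz and 380 Hz
```

This lands in the same ballpark as the 369-380 Hz figures quoted above, though it neglects losses and mode mismatch.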
This is my own reckoning of the loss measurements we've performed after the ETMs were cleaned in December. Note that these visibility measurements do not independently measure losses from the ITMs or the ETMs; they just give the total arm loss from scatter and absorption.
If these measurements are not satisfactory, we could always repeat them.
We could also repeat the ringdown measurements, but we would need to be more careful when collecting the data. Last time, the incident IR power on the arm fluctuated from lock to lock, which made the uncertainties in the inferred losses much too big for the measurements to be usable.
X arm:
Loss [ppm] | Date | Method | alog | Notes |
78(18) | 2015-01-14 | Visibility | 16082 | — |
Y arm:
Loss [ppm] | Date | Method | alog | Notes |
286(33) | 2015-01-05 | Visibility | 15874 | Green WFS not on |
125(19) | 2015-01-07 | Visibility | 15919 | — |
155(19) | 2015-01-08 | Visibility | 15937 | — |
140(16) | 2015-01-09 | Visibility | 15991 | — |
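If one wants a single number from the repeated Y-arm visibility measurements, an inverse-variance weighted mean of the three mutually consistent points (excluding the 2015-01-05 value taken without green WFS) is one option. A quick sketch:

```python
import numpy as np

# the three consistent Y-arm visibility measurements, in ppm
vals = np.array([125.0, 155.0, 140.0])
errs = np.array([19.0, 19.0, 16.0])

w = 1.0 / errs**2                      # inverse-variance weights
mean = np.sum(w * vals) / np.sum(w)    # weighted mean
err = 1.0 / np.sqrt(np.sum(w))        # uncertainty of the weighted mean
print("%.0f(%.0f) ppm" % (mean, err))  # ~140(10) ppm
```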
14:26 Starting beam tube washing at HNW-4-064 in ~ 10 minutes 15:11 Jim replacing video6 mac mini with NUC 15:49 Went to science mode 15:59 Beam tube washing is done for the day
Betsy, Patrick, Dave
The local copy of the H1SUSMC2.txt filter file was out of sync with the SVN repository. When we tried to commit it we got a "copy out of date" error. To get around this, we copied our version into a backup file, svn reverted the file, svn updated the file, copied our new version back in, and then were able to commit it to the repository. During this file shell game, the h1susmc2 front end computer saw the file change and flagged the CFC bit (reporting a modified file). This is a latched bit, so even though we returned the file to its original state, the flag persisted. During the H1 down time, we loaded all coefficients on h1susmc2 to clear the flag. No changes to the filters were applied.
This appears to be from Evan opening awggui and selecting a channel. It showed up even though he never hit start. It went away when he closed awggui.
Late post from Tuesday evening, when PNNL disrupted offsite network access for maintenance. The GRB alert system running on h1fescript0 reported that it was unable to contact the GraceDB resource, and then seamlessly reconnected when network access was restored. This outage lasted just over a minute, so the CAL_INJ_CONTROL MEDM screen would have been RED for a few seconds.
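The reconnect behavior described above (log the failure, wait, try again) can be sketched as a simple retry loop. This is a hypothetical illustration, not the actual ext-alert code; the function names and parameters are assumptions:

```python
import time

def query_with_retry(query, retries=5, delay=15.0, log=print):
    """Call query(); on a network error (e.g. errno 110/113), log it and retry later."""
    last = None
    for _ in range(retries):
        try:
            return query()
        except OSError as e:
            last = e
            log("CRITICAL: Error querying gracedb: %s" % e)
            time.sleep(delay)
    raise RuntimeError("gracedb unreachable after %d attempts: %s" % (retries, last))
```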
[ext-alert 1117332997] CRITICAL: Error querying gracedb: [Errno 113] No route to host
[ext-alert 1117333060] CRITICAL: Error querying gracedb: [Errno 110] Connection timed out
[ext-alert 1117333063] CRITICAL: Error querying gracedb: [Errno 113] No route to host
[ext-alert 1117333064] CRITICAL: Error querying gracedb: [Errno 113] No route to host
Since there have been a few uncommitted commissioning changes to filter files and guardian code, Dave and Sheila confirmed it was time to commit them all to the SVN, since the IFO is running well enough for E7. Dave and I are working to clean house in all subsystems. I have committed:
All SUS.txt filter files
All 2 ASC filter files - H1ASC.txt, H1ASCIMC.txt
All 2 ALS filter files - H1ALSEX.txt, H1ALSEY.txt
All LSC.py guardian scripts
Note, all SUS guardians are up-to-date in the SVN.
When we went to commit H1SUSMC2, it failed with an error stating the file is "out of date". So we copied the file to a backup and did a workaround:
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ cp H1SUSMC2.txt H1SUSMC2.txtbak
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ svn up H1SUSMC2.txt
Conflict discovered in 'H1SUSMC2.txt'.
Select: (p) postpone, (df) diff-full, (e) edit,
(mc) mine-conflict, (tc) theirs-conflict,
(s) show all options: ^Csvn: Caught signal
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ svn revert H1SUSMC2.txt
Reverted 'H1SUSMC2.txt'
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ svn up H1SUSMC2.txt
U H1SUSMC2.txt
Updated to revision 10745.
betsy.bland@operator1:/opt/rtcds/userapps/release/sus/h1/filterfiles$ cp H1SUSMC2.txtbak H1SUSMC2.txt
This however caused the FE DAQ status to show the error that the filter file had changed. Indeed the file changed name and then back, but we confirmed that the contents of the file are the same, so we will have to hit the LOAD COEFF button on MC2 to clear the FE alarm.
The IFO was down, so we hit the LOAD COEFF on MC2 during its walk back up to LSC_FF lock. The IFO didn't waver, so all seems fine.
Just before setting the intent bit for the current lock stretch, Patrick, Kiwamu and I looked at SDF. A few red alarms needed attention. We set SDFs:
11:32 Richard turned off lights in the LVEA, reports that the crane lights are still on. Lights are stuck on at end Y. 11:49 Karen getting paper towels out of mechanical room. 12:24 Kiwamu and Betsy have gone through the SDF differences. Went to Science mode and notified LLO control room.
11:28 Dave reloaded filter module coefficients for h1sus frontend. (WP 5245)
Chris Pankow, Jeff Kissel, Adam M, Eric Thrane
We restarted tinj at LHO (GPS=1117411494) to resume transient injections. We scheduled a burst injection for GPS=1117411713. The burst waveform is in svn. It is, I understand, a white noise burst. It is, for the time being, our standard burst injection waveform until others are added. The injection completed successfully. Following this test, we updated the schedule to inject the same waveform every two hours over the next two days. The next injection is scheduled for GPS=1116241935. However, this schedule may change soon as Chris adds new waveforms to the svn repository. We are not carrying out transient injection tests at LLO because the filter bank needs to be updated and we cannot be sure that the filters are even close to correct. Adam thinks they will be updated by ~tomorrow.
First, some details: the injection is actually a sine-Gaussian with parameters:
t0 = 1116964989.435322284
q = 28.4183770634
f0 = 1134.57994534
The t0 can be safely ignored since this injection would have no counterpart in L1, but it should be noted that this injection *does* have all relevant antenna patterns and polarization factors applied to it (i.e. it is made to look like a "real" GW signal). I attach the time and frequency domain plots of the waveform; however, they are relative to an O1-type spectrum and so may not be indicative of the actual performance of the instrument at this period of time. Given the most recent spectra and the frequency content of the injection, this could be weaker by up to a factor of ~2-3. The characteristic SNRs I calculated using the O1-type spectrum:
Waveform SineGaussian at 1116964989.435 has SNR in H1 of 7.673043
Waveform SineGaussian at 1116964989.435 has SNR in L1 of 20.470634
Network SNR for SineGaussian at 1116964989.435 is 21.861438
So it's possible that this injection had an SNR as low as ~2, not accounting for variance from the noise. The excitation channel (H1:CAL-INJ_TRANSIENT_EXCMON, trended) does show a non-zero value, and the "count" value is consistent with the amplitude of the strain; another monitor (H1:CAL-INJ_HARDWARE_OUT_DQ, at a higher sample rate) also shows the full injection, though it is not calibrated. So the injection was successfully scheduled and looks to have been made. I also did an omega scan of the latter channel, and the signal is at the correct frequency (but has, notably, a very long duration). I did a little poking around to see if this showed up in h(t) (using H1:GDS-CALIB_STRAIN). Unfortunately, it is not visible in the spectrogram of H1:GDS-CALIB_STRAIN (attached). It may be some peculiarity of the scheduling, but it's interesting to note that the non-zero excitation occurs about a second after the GPS time that Eric quotes.
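The quoted network SNR is just the quadrature sum of the single-detector SNRs, which is easy to verify:

```python
import math

snr_h1 = 7.673043
snr_l1 = 20.470634
snr_net = math.hypot(snr_h1, snr_l1)  # quadrature sum over detectors
print(snr_net)  # ~21.8614, matching the quoted 21.861438
```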
More interestingly, this does not seem to have fired off the proper bits in the state vector. H1:GDS-CALIB_STATE_VECTOR reports the value 456 for this period, which corresponds to the data being okay, gamma being okay, but no injection taking place. It also appears to mean that no calibration was taking place (bits 3 and 4 are off). I'm guessing I'm just misinterpreting the meaning of this. I'd recommend, for future testing, a scale factor of 3 or 4, to make it *clearly* visible and give us a point of reference. We should also close the loop with the ODC / calibration folks to see if something was missed.
I can see that burst injections have been occurring on schedule, generally. The schedule file (which you currently have to log into h1hwinj1 to view) reads, in part:
...
1117411713 1 1 burst_test_
1117419952 1 1 burst_test_
1117427152 1 1 burst_test_
...
Compare that to the bit transitions in CAL-INJ_ODC:
pshawhan@> ./FrBitmaskTransitions -c H1:CAL-INJ_ODC_CHANNEL_OUT_DQ /archive/frames/ER7/raw/H1/H-H1_R-11174/*.gwf -m fffffff
1117400000.000000 0x00003f9e Data starts
1117400027.625000 0x00003fde 6 on
1117400028.621093 0x00003fff 0 on, 5 on
1117406895.000000 0x00003e7f 7 off, 8 off
1117411714.394531 0x0000347f 9 off, 11 off
1117411714.480468 0x00003e7f 9 on, 11 on
1117419953.394531 0x0000347f 9 off, 11 off
1117419953.480468 0x00003e7f 9 on, 11 on
1117427153.394531 0x0000347f 9 off, 11 off
1117427153.480468 0x00003e7f 9 on, 11 on
...
Bits 9 and 11 go ON when a burst injection begins, and off when it ends. The offset of start time is because the waveform file initially contains zeroes. To be specific, the first 22507 samples in the waveform file (= 1.37372 sec) are all exactly zero; then the next several samples are vanishingly small, e.g. 1e-74. At t = 1.39453 sec = 22848 samples into the waveform file, the strain amplitude of the waveform is about 1e-42. At the end time of the injection (according to the bitmask transition), the strain amplitude has dropped to about 1e-53, but I believe the filtering extends the waveform some. To give an idea of how much, the end time is about 130 samples, or ~8 msec, after the strain amplitude drops below 1e-42. Earlier in the record, bit 6 went on to indicate that the transient filter gain was OK, bit 5 went on to indicate that the transient filter state was OK, and bit 0 went on at the same time to indicate a good summary. Somewhat later, bit 8 went off when CW injections began, and bit 7 went off at the same time to indicate the presence of any hardware injection signal.
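For readers without access to the FrBitmaskTransitions tool, the core of what it reports can be sketched in a few lines. This is a hypothetical standalone version operating on a list of integer samples rather than frame files:

```python
def bit_transitions(samples, mask=0xFFFFFFFF):
    """Report (index, value, bits_on, bits_off) whenever the masked value changes."""
    prev = samples[0] & mask
    out = [(0, prev, [], [])]
    for i, raw in enumerate(samples[1:], start=1):
        cur = raw & mask
        if cur == prev:
            continue
        diff = cur ^ prev  # bits that flipped at this sample
        on = [b for b in range(diff.bit_length()) if (diff >> b) & 1 and (cur >> b) & 1]
        off = [b for b in range(diff.bit_length()) if (diff >> b) & 1 and not (cur >> b) & 1]
        out.append((i, cur, on, off))
        prev = cur
    return out

# first few values from the CAL-INJ_ODC listing: bit 6 turns on, then bits 0 and 5
trans = bit_transitions([0x3F9E, 0x3FDE, 0x3FFF])
```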
Note that tinj seems to have been stopped (or died) at about GPS 1117456410, according to the tinj.log file, and that's consistent with a lack of bit transitions in CAL-INJ_ODC after that time.
Checking the other bitmask channels, there's a curious pattern in how the hardware injection bits from CAL-INJ_ODC are getting summarized in ODC-MASTER:
pshawhan@> ./FrBitmaskTransitions -c H1:ODC-MASTER_CHANNEL_OUT_DQ -m 7000000 /archive/frames/ER7/raw/H1/H-H1_R-11174/H-H1_R-11174[12345]*.gwf
1117410048.000000 0x07000000 Data starts
1117411714.391540 0x05000000 25 off
1117411714.391723 0x07000000 25 on
1117411714.391845 0x05000000 25 off
1117411714.475463 0x07000000 25 on
1117419953.391540 0x05000000 25 off
1117419953.391723 0x07000000 25 on
1117419953.391845 0x05000000 25 off
1117419953.475463 0x07000000 25 on
1117427153.391540 0x05000000 25 off
1117427153.391723 0x07000000 25 on
1117427153.391845 0x05000000 25 off
1117427153.475463 0x07000000 25 on
...
The same pattern continues for all 7 of the injections. We can see that there's a brief interval (0.000183 s = 3 samples at 16384 Hz) which is marked as a burst injection, then "no injection" for 2 samples, then back on for 0.083618 s = 1370 samples. Knowing how the sine-Gaussian ramps up from vanishingly small amplitude, I think this is real in the sense that the first whisper of a nonzero cycle returns to effectively zero for a couple of samples before it grows enough to be consistently "on". It is also interesting to see that the interval ends in ODC-MASTER slightly (5.0 ms) earlier than it does in CAL-INJ_ODC. I suspect that this is OK and the model really runs at 16384 Hz, but CAL-INJ_ODC is a down-sampled record of the real bit activity.
I also confirmed that the injection bit got carried over to GDS-CALIB_STATE_VECTOR:
pshawhan@> ./FrBitmaskTransitions -c H1:GDS-CALIB_STATE_VECTOR -m 1ff /archive/frames/ER7/hoft/H1/H-H1_HOFT_C00-11174/H-H1_HOFT_C00-11174[012]*.gwf
1117401088.000000 0x000001c8 Data starts
1117408113.375000 0x000001cc 2 on
1117408119.375000 0x000001dd 0 on, 4 on
1117408143.750000 0x000001df 1 on
1117408494.812500 0x000001dd 1 off
1117408494.937500 0x000001c8 0 off, 2 off, 4 off
1117411714.375000 0x00000148 7 off
1117411714.500000 0x000001c8 7 on
1117419953.375000 0x00000148 7 off
1117419953.500000 0x000001c8 7 on
1117420916.625000 0x000001cc 2 on
1117420922.625000 0x000001dd 0 on, 4 on
1117421174.000000 0x000001df 1 on
1117424707.062500 0x000001dd 1 off
1117424954.437500 0x000001c8 0 off, 2 off, 4 off
1117426693.000000 0x000001cc 2 on
1117426699.000000 0x000001dd 0 on, 4 on
1117426833.625000 0x000001df 1 on
1117427153.375000 0x0000015f 7 off
1117427153.500000 0x000001df 7 on
...
Bit 7 indicates burst injections. It changes state for 2 16-Hz samples at the appropriate times.
As of 7:00 am PDT on Friday, June 5, there have been 13 burst hardware injections at LHO over the last two days. All of these are represented by segments (each 1 second long) in the DQ segment database, and can be retrieved using a command like:
pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_BURST --gps-start-time 1117300000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
These time intervals also agree with the bits in the H1:GDS-CALIB_STATE_VECTOR channel in the H1_HOFT_C00 frame data on the Caltech cluster, except that there is a gap in the h(t) frame data (due to filters being updated and the h(t) process being restarted, as noted in an email from Maddie). Similar DB queries show no H1 CBC injection segments yet, but H1 CW injections are ongoing:
pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_CW --gps-start-time 1117400000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
1117406895 1117552800
By the way, by repeating that query I observed that the CW segment span was extended by 304 seconds about 130 seconds after the segment ended. Therefore, the latency of obtaining this information from the segment database ranges from 130 to 434 seconds, depending on when you query it. (At least under current conditions.) I also did similar checks at LLO, which revealed a bug in tinj -- see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18517.