J. Kissel Similar to what was done for the HARDWARE injection filter (LHO aLOG 21703), I've added a minus sign to the identical BLIND injection filter bank. This facilitates testing this bank, which we hope to do soon. I've also turned on FM5, where this minus sign lives, accepted the new configuration in the SDF system (in both the OBSERVE and SAFE snaps), and committed the new filter bank to the repo.
Test of Sheila's filter. GRB stand-down time is complete. Returning to Commissioning while LLO is down due to a temperature transient in their LVEA.
The FOM for H1 Range was modified at some point in the last 24 hours, and at Vern and Mike's request I backed out the 20Mpc horizontal line, and saved the template with a y-axis range of 0-101Mpc.
The FOM for H1 Range is posted on the website and so is considered to be under version control, though not in SVN; changes need to be approved before being implemented.
L1 went out of lock. At H1 we turned off the intent bit and injected some hardware injections. The injections used the same waveform that was injected on September 21; for more information about those injections see aLog entry 21759, and for information about the waveform see aLog entry 21774. tinj was not used to do the injections. The commands to do the injections were:

awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
ezcawrite H1:CAL-INJ_TINJ_TYPE 1
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt

To my chagrin, the first two injections were labeled as burst injections. Taken from the awgstream log, the approximate injection times are:

1127074640.002463000
1127074773.002417000
1127075235.002141000
1127075742.002100000

The expected SNR of the injection is ~18 without any scaling factor. I've attached omegascans of the injections. There is no sign of the "pre-glitch" that was seen on September 21.
Attached stdout of command line.
Neat! Looks good.
Hi Chris, It looks like there is a 1s offset between the times you report and the rough coalescence time of the signal. Do you know if it is exactly 1s difference?
Yes, as John said, all of the end times of the waveforms are just about 1 second later than what's in the original post. I ran a version of my simple bandpass-filtered overlay script for these waveforms. It filters both the model (the strain waveform injected into the system) and the data from 70-260 Hz, overlays them, and also does a crude (non-optimal) matched filter to estimate the relative amplitude and time offset. The four plots attached are for the four injected signals; note that the first one was injected with a scale factor of 0.5 and is not "reconstructed" by my code very accurately. The others actually look rather good, with reasonably consistent amplitudes and time delays. Note that the sign of the signal came out correctly!
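For anyone who wants to do a similar quick look, here is a minimal sketch of the approach. This is not the actual script; the data file name, sample rate, and filter order are assumptions.

# Minimal sketch (not the actual script above): band-pass both the injected
# waveform model and the strain data, then cross-correlate to estimate the
# relative time offset and amplitude. File names and sample rate are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 16384.0                                           # assumed sample rate [Hz]
b, a = butter(4, [70 / (fs / 2), 260 / (fs / 2)], btype='band')

model = np.loadtxt('H1-HWINJ_CBC-1126257410-12.txt')   # injected strain waveform
data = np.loadtxt('h1_strain_segment.txt')             # hypothetical data segment

model_bp = filtfilt(b, a, model)
data_bp = filtfilt(b, a, data)

# Crude (non-optimal) matched filter: cross-correlate and take the peak.
corr = np.correlate(data_bp, model_bp, mode='full')
ipeak = np.argmax(np.abs(corr))
time_offset = (ipeak - (len(model_bp) - 1)) / fs       # best-fit shift of the model within the data [s]

# Least-squares amplitude at the best-fit lag (valid when the model lies
# entirely within the data segment at that lag); a negative value flags a sign flip.
amplitude = corr[ipeak] / np.sum(model_bp ** 2)

print('time offset ~ %.4f s, relative amplitude ~ %.2f' % (time_offset, amplitude))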
I ran the daily BBH search with the injected template on the last two injections (1127075235 and 1127075742). For 1127075235, the recovered end time was 1127075235.986, the SNR was 20.42, the chi-squared was 29.17, and the newSNR was 19.19. For 1127075742, the recovered end time was 1127075742.986, the SNR was 20.04, the chi-squared was 35.07, and the newSNR was 19.19.
KW sees all the injections with the +1 sec delay, some of them in multiple frequency bands. From:

/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074624-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074752-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075200-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075712-64.trg

tcent              fcent  significance  channel
1127074640.979948  146    26.34         H1_GDS-CALIB_STRAIN_32_2048
1127074774.015977  119    41.17         H1_GDS-CALIB_STRAIN_8_128
1127074773.978134  165    104.42        H1_GDS-CALIB_STRAIN_32_2048
1127075235.980545  199    136.82        H1_GDS-CALIB_STRAIN_32_2048
1127075743.018279  102    74.87         H1_GDS-CALIB_STRAIN_8_128
1127075742.982020  162    113.65        H1_GDS-CALIB_STRAIN_32_2048

Omicron also sees them with the same delay. From:

/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074621-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074771-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075221-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075731-30.xml

peak time             fcent      snr
1127074640.977539062  88.77163   6.3716
1127074773.983397960  648.78342  11.41002   <- surprisingly high fcent, could be due to clustering
1127075235.981445074  181.39816  13.09279
1127075742.983397960  181.39816  12.39437

LIB single-IFO jobs also found all the events. Post-processing pages can be found here:

https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074640.98-0/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074773.98-1/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075235.98-2/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075742.98-3/H1L1/H1/posplots.html

All runs appear to have reasonable posteriors.
Here is how Omicron detects these injections:

https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074641/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074774/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075236/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075743/

Here are the parameters measured by Omicron (loudest tile):

1127074640: t=1127074640.981, f=119.9 Hz, SNR=6.7
1127074773: t=1127074773.981, f=135.3 Hz, SNR=11.8
1127075235: t=1127075235.981, f=114.9 Hz, SNR=12.8
1127075742: t=1127075742.981, f=135.3 Hz, SNR=12.4
The BayesWave single-IFO (glitch-only) analysis recovers these injections with the following SNRs:

4640: 8.65535
4773: 19.2185
5235: 20.5258
5742: 20.1666

The results are posted here: https://ldas-jobs.ligo.caltech.edu/~meg.millhouse/O1/CBC_hwinj/
Going to Observe without filter test.
General Question: Does this knock us out of Observation Mode? Could I have reset this this morning?
IFO was in Commissioning.
No, the DIAG_MAIN guardian node is NOT under the OBSERVATION READY check. It can be changed/reset/etc. without affecting OBSERVATION MODE.
I think Cheryl was talking about the diag reset button on the GDS overview screen for the front end, not the DIAG_MAIN guardian.
Finished hardware injection tests. More details in upcoming aLog. Last injection went in at approximately 20:35 UTC.
LLO is down - H1 in commissioning for injections and a filter test, starting at 20:16:27UTC.
At 1127059140 GPS (15:58:43 UTC) and 1127059260 GPS (16:00:43 UTC), the H1 Range DMT monitor did not get data, so it put in the default value, which is -1.
There was no glitch or change to the H1 IFO.
Will notify when done.
Intent mode was turned off before tests.
TITLE: 9/23 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: OBSERVATION @ 76Mpc
OUTGOING OPERATOR: Jim W.
SUPPORT: Darkhan still here (Sheila is on-call, if needed)
QUICK SUMMARY:
Noticed there is a RED Timing error for H1SUSETMX. Would like to hit Diag_Reset to see if this clears this error, but I'm not sure if this knocks us out of Observation Mode. Will hold off.
I brought up this question on my last shift, and I believe the answer was that it's inconsequential to reset this bit while Observing, except that subsequent errors may be happening during the period that it's RED and we won't know about them or be able to see them in the trend. I took a trend last week at the beginning of one of my shifts and found this error had only happened ~1/week. So as far as I can tell, it's OK to reset this error, but it would be nice to get this blessing from the CDS crew.
During the maintenance window we left the DHARD yaw boost on (21768 and 21708). There was no evidence that it caused any problems, but I was putting excitations onto the transmon at the time and there were other maintenance activities going on. We'd like to check that it doesn't impact the glitch rate, so if LLO drops out of lock or if you see an earthquake on the way (0.1 um/sec or larger predicted by Terramon), it would be great if you could turn it on. You can find it under ASC overview > ASC arm cavities, DHARD YAW FM3 (labeled boost). (screenshot)
It would be good to get more than an hour of data, so if you see that LLO has dropped it would be awesome if you could turn this on until they are back up.
This is just a temporary request, only for tonight or the next few days.
This is actually FM2.
I was texting with Mike to see if taking H1 out of Observation Mode (when L1 is down) for this test was OK by him, and he concurred. This work is referenced by Work Permit #5505. In the work permit, I see a time of 9/21-25 for Period of Activity. So Operators can allow this activity during this time since Mike has signed off on the work permit. (perhaps in the future, we can reference the work permit in alog entries so Operators will know this is an acceptable activity.)
I'm not totally sure when to make the decision to preemptively turn ON this filter if we get a warning of an impending EQ; it's not totally clear which types of EQ will knock us out and which won't. I guess I can look to see if (1) Terramon gives us a RED warning, and (2) the 0.03-0.1 um/s seismic signal increases by an order of magnitude. In that case I could end Observation Mode, turn ON the filter, and stay out of Observation Mode until L1 comes back. (Sorry, just trying to come up with a plan of attack in case L1 drops out.)
As it stands, L1 has been locked for 10 hrs, so we'll keep an eye on them. I asked William to contact me if they drop out (but I'll also watch the FOM & GWI.stat).
I believe that switching this while in 'Undisturbed' will show up as an SDF diff, thereby automatically taking us to 'Commissioning' mode until the diff is accepted and the ODC Intent ready bit is green again, at which point we can once again click the intent bit to 'Undisturbed'. I asked this at the JRPC meeting yesterday.
Apologies for the wrong FM number; in the future I'll try to remember to put the WP number in the alog. Operators can probably stop toggling this filter for now. We will put this on the list of minor changes to make on maintenance day, so that next Tuesday it can be added to the guardian and the observe.snap, along with some HSTS bounce and roll notches.
Updated the CAL_INJ_CONTROL medm screen. It is organized a bit differently, the labels have changed slightly, and it even has a new button! Duncan Macleod supplied us with an updated ext_alert.py that polls GraceDB for new events (both "E" and "G" types), places the new info in some EPICS records, and then automatically pauses injections for either 3600 s or 10800 s depending on the event.
The Transient Injection Control now has the ability to zero out the pause inj channel. Why is this necessary? The script running in the background of this screen will automatically PAUSE the injections when a new external event alert is detected. If we are down when we get a GRB alert, the script should still pause the injections. The Operator will then need to enable the injections and zero the pause time.
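For context, here is a minimal sketch of the polling pattern described above. This is NOT Duncan's ext_alert.py, just an illustration of the idea: the GraceDB query string, the polling cadence, and the use of pyepics are my own assumptions, and CAL-INJ_EXTTRIG_ALERT_TIME is the only channel actually named in this thread (see the comment below).

# Minimal sketch of a GraceDB-polling alert watcher (not the actual ext_alert.py).
# Query string, poll interval, and the use of pyepics are assumptions.
import time
from ligo.gracedb.rest import GraceDb   # GraceDB REST client
from epics import caput                 # pyepics channel-access write

client = GraceDb()
seen = set()
QUERY = 'External'                      # placeholder query for new "E"/"G" events

while True:
    for event in client.events(QUERY):
        gid = event['graceid']
        if gid in seen:
            continue
        seen.add(gid)
        # Record the alert GPS time; the downstream logic then pauses the
        # transient injections for 3600 s or 10800 s depending on event type.
        caput('H1:CAL-INJ_EXTTRIG_ALERT_TIME', event['gpstime'])
    time.sleep(60)                      # assumed polling cadence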
One other thing for Operators to look out for is if we want the injections to stop for longer than the automatic pause time. If we disable the injections by clicking the "Disable" button, and then a new event comes in, it will automatically switch from Disabled --> Paused (this happened to us a few minutes after we started up the script). I am not 100% positive on this, but it seems that when the pause time is up the injections will continue. If this is so, it's definitely something Operators need to watch for.
We will see how this goes and make changes if necessary.
New screen shot attached.
There was apparently some confusion about pausing mechanisms; see alog 21822. If the scheme referred to there is restored, the PAUSE and ENABLE features will be fully under the control of the operators. Independently, injections will automatically be paused by the action of the GRB alert code setting the CAL-INJ_EXTTRIG_ALERT_TIME channel. I have emailed Duncan to try to sort this out.
Last night there were two GRB alerts that paused the injections, and they DID NOT enable Tinj. The Tinj control went back to Disabled, as we had set it previously. This is good and works as outlined in the HWInjBookkeeping wiki (thank you Peter Shawhan!). This was my main worry, and it seems it has already been taken care of. It is a bit misleading that the Tinj control goes from Disabled --> Paused and begins counting up to the "Pause Until" time, but trending the channels shows that it will not enable Tinj once the times meet.
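As a small summary for operators, here is a sketch of the resulting state logic as I read the trending above; it is not the actual tinj code, and enable_flag / pause_until are hypothetical stand-ins for the real enable and pause records.

# Sketch of the Tinj state logic as inferred from the trending above (not the
# actual tinj code). enable_flag and pause_until stand in for the real records.
def tinj_state(enable_flag, pause_until, now):
    """Return 'DISABLED', 'PAUSED', or 'ENABLED' for the transient injections."""
    if now < pause_until:
        # A new alert shows up as Paused, even if injections were Disabled before.
        return 'PAUSED'
    if not enable_flag:
        # Once the pause expires, a prior Disable still wins: injections do not
        # resume until an operator re-enables them and zeros the pause time.
        return 'DISABLED'
    return 'ENABLED'

# Example: a GRB alert arrives while Disabled; an hour after the pause has
# expired the state returns to DISABLED rather than ENABLED.
print(tinj_state(enable_flag=False, pause_until=1127074000 + 3600, now=1127074000 + 7200))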
Elli and Stefan showed in aLOG 20827 that the signals measured by AS 36 WFS for SRM and BS alignment appeared to be strongly dependent on the power circulating in the interferometer. This was apparently not seen to be the case in L1. As a result, I've been looking at the AS 36 sensing with a Finesse model (L1300231), to see if this variability is reproducible in simulation, and also to see what other IFO variables can affect this variability.
In the past when looking for differences between L1 and H1 length sensing (for the SRC in particular), the mode matching of the SRC has come up as a likely candidate. This is mainly because of the relatively large uncertainties in the SR3 mirror RoC combined with the strong dependence of the SRC mode on the SR3 RoC. I thought this would therefore be a good place to start when looking at the alignment sensors at the AS port. I don't expect the SR3 RoC to be very dependent on IFO power, but having a larger SR3 RoC offset (or one in a particular direction) may increase the dependence of the AS WFS signals on the ITM thermal lenses (which are the main IFO variables we typically expect to change with IFO power). This might therefore explain why H1 sees a bigger change in the ASC signals than L1 as the IFOs heat up.
My first step was to observe the change in the AS 36 WFS signals as a function of SR3 RoC. The results for the two DOFs shown in aLOG 20827 (MICH = BS, SRC2 = SRM) are shown in the attached plots. I did not spend much time adjusting Gouy phases or demod phases at the WFS in order to match the experiment, but I did make sure that the Gouy phase difference between WFSA and WFSB was 90 deg at the nominal SR3 RoC. In the attached plots we can see that the AS 36 WFS signals are definitely changing with SR3 RoC, in some cases even changing sign (e.g. SRM Yaw to ASA36I/Q and SRM Pitch to ASA36I/Q). It's difficult at this stage to compare very closely with the experimental data shown in aLOG 20827, but at least we can say that, from the model, it's not unexpected for these ASC sensing matrix elements to change with some IFO mode mismatches. The same plots are available for all alignment DOFs, but that's 22 in total, so I'm sparing you the ones that weren't measured during IFO warm-up.
The next step will be to look at the dependence of the same ASC matrix elements on common ITM thermal lens values, for a few different SR3 RoC offsets. This is where we might be able to see something that explains the difference between L1 and H1 in this respect. (Of course, there may be other effects which contribute here, such as differential ITM lensing, spot position offsets on the WFS, drifting of uncontrolled DOFs when the IFO heats up... but we have to start somewhere).
Can you add a plot of the amplitude and phase of 36MHz signal that is common to all four quadrants when there's no misalignment?
As requested, here are plots of the 36MHz signal that is common to all quadrants at the ASWFSA and ASWFSB locations in the simulation. I also checked whether the "sidebands on sidebands" from the series modulation at the EOM had any influence on the signal that shows up here: apparently it does not make a difference beyond the ~100ppm level.
At Daniel's suggestion, I adjusted the overall WFS phases so that the 36MHz bias signal shows up only in the I-phase channels. This was done just by adding the phase shown in the plots in the previous comment to both I and Q detectors in the simulation. I've attached the ASWFS sensing matrix elements for MICH (BS) and SRC2 (SRM) again here with the new demod phase basis.
**EDIT** When I reran the code to output the sensitivities to WFS spot position (see below), I also output the MICH (BS) and SRC2 (SRM) DOFs again, as well as all the other ASC DOFs. Motivated by some discussion with Keita about why PIT and YAW looked so different, I checked again how different they were. In the outputs from the re-run, PIT and YAW don't look so different now (see the attached files with the "phased" suffix, now also including SRC1 (SR2) actuation). The PIT plots are the same as before, but the YAW plots differ from the earlier ones and now agree better with the PIT plots.
I suspect that the reason for the earlier difference had something to do with the demod phases not having been adjusted from default for YAW signals, but I wasn't yet able to recreate the error. Another possibility is that I just uploaded old plots with the same names by mistake.
To clarify the point of adjusting the WFS demod phases like this, I also added four new alignment DOFs corresponding to spot position on WFSA and WFSB, in the pitch and yaw directions. This was done by dithering a steering mirror in the path just before each WFS, double demodulating at the 36MHz frequency (in I and Q) and then at the dither frequency. The attached plots show what you would expect to see: in each DOF the sensitivity to spot position is all in the I quadrature (first-order sensitivity to spot position due to the 36MHz bias). Naturally, WFSA spot position doesn't show up at WFSB and vice versa, and yaw position doesn't show up in the WFS pitch signal and vice versa.
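As an aside for anyone unfamiliar with the double-demodulation trick, here is a small numerical sketch of the idea using scaled-down, purely illustrative frequencies (nothing here comes from the Finesse model): an RF line whose amplitude follows the dither is demodulated first at the RF frequency, the demod phase is rotated so the static bias sits in I, and a second demodulation at the dither frequency then extracts the dither-synchronous part of I and Q.

# Toy illustration of double demodulation (illustrative frequencies only,
# not the real 36 MHz / dither values, and not the Finesse model).
import numpy as np

fs = 100e3                    # sample rate [Hz]
f_rf = 10e3                   # stand-in for the 36 MHz demod frequency
f_dith = 37.0                 # stand-in for the steering-mirror dither frequency
t = np.arange(0, 2.0, 1 / fs)

# Fake photodiode signal: an RF line whose amplitude follows the dither,
# which is what a spot-position-dependent RF bias would produce.
pd = (1.0 + 0.1 * np.cos(2 * np.pi * f_dith * t)) * np.cos(2 * np.pi * f_rf * t + 0.3)

def lowpass(x, fc, fs):
    # Crude single-pole IIR low-pass, adequate for a sketch.
    a = np.exp(-2 * np.pi * fc / fs)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = a * y[i - 1] + (1 - a) * x[i]
    return y

# First demodulation at the RF frequency gives slowly varying I and Q.
I = lowpass(2 * pd * np.cos(2 * np.pi * f_rf * t), 500, fs)
Q = lowpass(2 * pd * np.sin(2 * np.pi * f_rf * t), 500, fs)

# Rotate the demod phase so the constant RF bias appears only in I
# (the same adjustment described a couple of comments above).
phi = np.arctan2(np.mean(Q), np.mean(I))
I_rot = I * np.cos(phi) + Q * np.sin(phi)
Q_rot = -I * np.sin(phi) + Q * np.cos(phi)

# Second demodulation at the dither frequency picks out the dither-synchronous content.
I2 = 2 * np.mean(I_rot * np.cos(2 * np.pi * f_dith * t))
Q2 = 2 * np.mean(Q_rot * np.cos(2 * np.pi * f_dith * t))
print(I2, Q2)   # expect I2 ~ 0.1 (the dither-induced amplitude) and Q2 ~ 0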
For completeness, the y-axis is in units of W/rad tilt of the steering mirror that is being dithered. For WFSA the steering mirror is 0.1 m from the WFSA location, and for WFSB the steering mirror is 0.2878 m from the WFSB location. We can convert the axes to W/mm spot position or similar from this information, or into W/beam_radius using the fact that the beam spot sizes are 567 µm at WFSA and 146 µm at WFSB.
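As a worked example of that conversion (my own arithmetic, assuming the usual factor of two between mirror tilt and beam deflection, and small angles):

# Convert the plotted W/rad (per rad of steering-mirror tilt) into W/mm of spot
# motion and W per beam radius. Assumes a mirror tilt theta deflects the beam
# by 2*theta, so the spot moves by 2*theta*L at a distance L from the mirror.
lever = {'WFSA': 0.1, 'WFSB': 0.2878}      # mirror-to-WFS distance [m], from the text above
w_spot = {'WFSA': 567e-6, 'WFSB': 146e-6}  # beam radius at the WFS [m], from the text above

for wfs in ('WFSA', 'WFSB'):
    mm_per_rad = 2 * lever[wfs] * 1e3      # spot motion per rad of tilt [mm/rad]
    per_mm = 1.0 / mm_per_rad              # multiply W/rad by this to get W/mm
    per_w0 = w_spot[wfs] * 1e3 * per_mm    # multiply W/rad by this to get W/beam_radius
    print(wfs, 'W/mm = W/rad x %.4g, W/beam_radius = W/rad x %.4g' % (per_mm, per_w0))

So, for example, 1 W/rad at WFSA corresponds to about 5e-3 W/mm of spot motion, or about 2.8e-3 W per beam radius.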
As shown above the 36MHz WFS are sensitive in one quadrature to spot position, due to the constant presence of a 36MHz signal at the WFS. This fact, combined with the possibility of poor spot centering on the WFS due to the effects of "junk" carrier light, is a potential cause of badness in the 36MHz AS WFS loops. Daniel and Keita were interested to know if the spot centering could be improved by using some kind of RF QPD that balances either the 18MHz (or 90MHz) RF signals between quadrants to effectively center the 9MHz (or 45MHz) sideband field, instead of the time averaged sum of all fields (DC centering) that is sensitive to junk carrier light. In Daniel's words, you can think of this as kind of an "RF optical lever".
This brought up the question of which sideband field's spot position at the WFS changes most when the BS, SR2, or SRM is actuated.
To answer that question, I:
Some observations from the plots:
I looked again at some of the 2f WFS signals, this time with a linear sweep over alignment offsets rather than a dither transfer function. I've attached the results here, with the detectors phased to have the constant signal always in the I quadrature. As noted before by Daniel, AS18Q looks like a good signal for MICH sensing, as it is pretty insensitive to beam spot position on the WFS. Since I was looking at larger alignment offsets, I included higher-order modes up to order 6 in the calculation, and all length DOFs were locked. This was for zero SR3 RoC offset, so the mode matching is optimal.
VerbalAlarms reports that the DIAG_MAIN guardian node is in error.
Appears to be a standard NDS2 burp:
2015-09-21T22:57:59.11305 File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/worker.py", line 459, in run
2015-09-21T22:57:59.11306 retval = statefunc()
2015-09-21T22:57:59.11306 File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 178, in run
2015-09-21T22:57:59.11307 return SYSDIAG.run_all()
2015-09-21T22:57:59.11307 File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 151, in run_all
2015-09-21T22:57:59.11308 ret &= self.run(name)
2015-09-21T22:57:59.11308 File "/opt/rtcds/userapps/release/sys/common/guardian/SYS_DIAG.py", line 136, in run
2015-09-21T22:57:59.11310 for msg in self[name](**kwargs):
2015-09-21T22:57:59.11311 File "/opt/rtcds/userapps/release/sys/h1/guardian/DIAG_MAIN.py", line 66, in PSL_ISS
2015-09-21T22:57:59.11311 diff_pwr = avg(-10, 'PSL-ISS_DIFFRACTION_AVG')
2015-09-21T22:57:59.11312 File "/ligo/apps/linux-x86_64/cdsutils-497/lib/python2.7/site-packages/cdsutils/avg.py", line 67, in avg
2015-09-21T22:57:59.11312 for buf in conn.iterate(*args):
2015-09-21T22:57:59.11313 RuntimeError: Requested data were not found.
Reloaded.
Having the guardian go into error because of an NDS2 hiccough is kind of irritating.
Based on this StackExchange answer, I added the following handler function to the DIAG MAIN guardian:
def try_avg(*args):
    # Retry wrapper around cdsutils.avg: keep trying until NDS returns data,
    # rather than letting a RuntimeError take the node into error.
    # (avg is cdsutils.avg; log() is provided by the guardian environment.)
    while True:
        try:
            q = avg(*args)
        except RuntimeError:
            log('Encountered runtime error while trying to average {}'.format(args[1]))
            continue
        break
    return q
where avg is the cdsutils.avg function.
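In the DIAG_MAIN tests this just replaces the direct avg() calls; for example, the ISS diffraction check from the traceback above presumably becomes something like:

diff_pwr = try_avg(-10, 'PSL-ISS_DIFFRACTION_AVG')   # retries instead of raising on an NDS dropout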
This is now used for the ISS diffraction and the ESD railing diag tests. If we like it, we should consider propagating it to the rest of the guardian.
This is a fine hack solution for this one case, but please don't propagate this around to all guardian NDS calls. Let me come up with a way to better handle it within the guardian infrastructure, so we don't end up with a lot of cruft in the guardian user code.