TITLE: 9/23 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: Continues to be locked from yesterday (that's 30+hrs!).
SUPPORT: None (& not needed)
SHIFT SUMMARY:
H1 continues to hum along.
Toward the end of the shift (roughly 11:00-13:00 UTC), H1's range trended down from 77 Mpc to 64 Mpc; the only thing I could see correlated with this was the 1-30 Hz seismic channels. They all peaked around 12:30 UTC and came back down, and H1 mirrored this. My guess is that this seismic bump was Hanford traffic, but that is only a guess, since I didn't see a similar bump during yesterday morning's shift in this same lock.
No opportunity to engage Sheila's new DHARD Yaw Boost filter, since H1 and L1 were in double coincidence the entire shift.
There is a Timing error on ETMx which should be cleared, but I held off doing this because I didn't want to bump H1 out of Observation Mode.
Incoming DAY Operator: Cheryl V.
SHIFT'S ACTIVITIES:
I've uploaded new and approved coherent waveforms for hardware injection testing; the SVN is at revision 5097. There is an H1L1 coherent version of the September 21 test injection that was done at LHO. It can be found here: * H1 waveform * L1 waveform * XML parameter file. There is also an H1L1 coherent version of the same September 21 test injection whose waveform begins at 15 Hz; it should be tested after the previous waveform has been tested. It can be found here: * H1 waveform * L1 waveform * XML parameter file
After completing a number of tests with operator and advocate sign-offs and log entry tagging, we THINK that Approval Processor and the automated programs which annotate an event are working properly. We have now set the FAR threshold used by Approval Processor to 3.8e-7 Hz, i.e. ~1 per month, for each pipeline. Therefore, please now take GW trigger alerts seriously, since they will represent apparently significant events found by the low-latency analyses. Operators (along with remote "EM follow-up Advocates") should review and sign off (OKAY or NOT OKAY) each alert, and the Run Coordinators should seriously consider activating the Rapid Response Teams. (We will need to see, though, whether the low-latency data analysis pipelines really generate event candidates at the intended rate.)
In truth, we are still in a testing mode now; any alerts generated still will not go out to astronomers for follow-up without deliberate manual intervention. However, in this phase we want to test the software and procedures end-to-end, so please try to follow all procedures as closely as possible if you are on shift.
There should be some hardware signal injections in the near future which, hopefully, will be identified by the low-latency pipelines and trigger this whole process (because, for now, Approval Processor is configured to treat hardware injections like real events). Seeing this in action should help assure us and the Data Analysis Council that the system is ready to go live.
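As a quick sanity check on that threshold (just the arithmetic; the 30-day month is my assumption):

far_threshold_hz = 3.8e-7             # FAR threshold per pipeline
seconds_per_month = 30 * 24 * 3600    # ~2.59e6 s, assuming a 30-day month
print(far_threshold_hz * seconds_per_month)   # ~0.98, i.e. roughly 1 alert/month/pipeline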
Received GRB at 10:19:20 UTC via Verbal Alarms.
JeffreyK, SudarshanK, DarkhanT,
Overview
We made comparison plots of DARM OLG TF and PCAL-to-DARM TF measurements taken at LHO against H1DARMmodel_ER8 and H1DARMmodel_O1 (both uncorrected and corrected with kappa factors).
One of the main changes in the DARM model update for O1, compared to ER8, is that in the ER8 actuation function we did not account for an analog anti-imaging (AI) filter; we included that filter in the O1 model. Adding the previously missing analog AI filter to the actuation function model increased the (measurement / model) residual to about 1% in magnitude and to ~6 deg around 500 Hz (~10 deg around 900 Hz). Initially, some of the ER8 model parameter estimations (ESD/CD gains) were done to best fit the measurements with an actuation function that did not include the analog AI filter.
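For a rough sense of scale, the quoted phase residuals are about what a single real pole in the few-kHz range would give; this is purely an illustration (the 4.8 kHz corner is my assumption, not the actual AI filter design, which is higher order):

import numpy as np

f_pole = 4800.0                      # Hz, assumed corner, purely for illustration
for f in (500.0, 900.0):
    print(f, np.degrees(np.arctan(f / f_pole)))   # ~5.9 deg at 500 Hz, ~10.6 deg at 900 Hz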
We also took kappas calculated from calibration lines within about 20 minutes of the DARM OLG TF measurement and plotted the O1 DARM model, corrected with kappas in two different ways, against the measurement, to see how the kappa corrections handle systematics in the model. At this point we don't have enough comparisons of DARM OLG TF and PCAL2DARM TF measurements and kappa estimates to make a definitive statement. For this particular measurement from Sep 10, the DARM model corrected with κtst and κC produced smaller DARM OLG TF and actuation function residuals than the uncorrected model, but the sensing function residual did not improve with the correction (see attached pdf's for actuation and sensing residuals).
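For reference, a minimal sketch of how the kappa factors enter when correcting a reference DARM model before comparing it against a measurement (T1500377-style parameterization; the function and argument names are mine, not those of the actual scripts):

import numpy as np

def kappa_corrected_olg(f, C_res, A_tst, A_other, D, kappa_C, f_cc, kappa_tst):
    # C_res   : reference sensing function with the cavity-pole factor divided out
    # A_tst   : reference test-mass (ESD) actuation path
    # A_other : remaining actuation stages, left unscaled in this sketch
    # D       : digital DARM filter
    sensing = kappa_C * C_res / (1.0 + 1j * f / f_cc)
    actuation = kappa_tst * A_tst + A_other
    return sensing * D * actuation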
Details
Some of the known issues / systematics in our DARM OLG TF model include:
- inverse actuation filters need to account for an extra -1 sign (was fixed, see LHO alog 21703);
- CAL-CS reproduction should have a sign that's opposite from the DARM output matrix (at LHO we had this correct, but it needed to be fixed at LLO).
This issue affects the EP1 value that's written into the EPICS record and used for estimation of DARM time-dependent parameters (T1500377). At LHO, in the DARM model for ER8, we manually rotated the phase of EP1 to +44.4 degrees to account for this discrepancy; we also modified both the parameter file and the DARM model script to account for the DAQ downsampling filter TF calculated at the xtst line frequency.
- residuals of the actuation function and total DARM OLG TF (systematic error);
- EP1-9, which are used for estimation of DARM temporal variations.
This variable might have been used in the GDS calibration; we need to verify with MaddieW that this extra time delay is not included in the GDS code.
One of the possible sources of systematic error in the sensing function model is using a single-pole TF to approximate the IFO response.
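Concretely, that approximation treats the frequency dependence of the sensing function as a simple low-pass at the coupled-cavity pole (a sketch; the 341 Hz value is just the nominal cavity pole quoted elsewhere in these entries):

import numpy as np

f_cc = 341.0                                  # Hz, nominal coupled-cavity pole
f = np.logspace(1, 4, 400)                    # 10 Hz - 10 kHz
C_norm = 1.0 / (1.0 + 1j * f / f_cc)          # normalized single-pole sensing response
# Any detuning or higher-order behavior of the real IFO response shows up as a
# systematic error relative to this single-pole approximation.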
Some of the parameters of the actuation functions were estimated without taking into account an analog AI filter (one of the issues listed above). We need to revisit ER8/O1 actuation function analysis results.
A comparison script, an updated DARM model script, and DARM parameter files were committed to the calibration SVN:
CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/
Plots were committed to:
CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/DARMOLGTFs/
P.S. I'll add references later.
Maddie has confirmed that she has used the matlab model parameter par.t.actuation to inform the high-frequency and time-delay corrections to the output of the CAL-CS pipeline. This confirms that there is a systematic error in the output of the GDS pipeline at both observatories -- an extra IOP (65 [kHz]) clock cycle, i.e. a ~15 [us] delay on the actuation path, which results in a ~0.5 [deg] phase mismatch between the reconstructed and true actuation and sensing paths at 100 [Hz]. This is a small effect, but given our dwindling person-power, and continued pressure to have been done yesterday, we will not quantitatively assess the impact this has on systematic errors. We will instead merely update the GDS pipeline to use the correct actuation delay (hopefully next Tuesday), and use that as the stopping point for when we stop re-calibrating prior data.
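The quoted phase error is just the pure-delay formula applied to one IOP clock cycle (a sketch of the arithmetic):

tau = 1.0 / 65536.0                           # one IOP clock cycle, ~15.26 us
for f in (100.0, 1000.0):
    print(f, 360.0 * f * tau)                 # ~0.55 deg at 100 Hz, ~5.5 deg at 1 kHz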
TITLE: 9/23 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: OBSERVATION @ 76Mpc
OUTGOING OPERATOR: Jim W.
SUPPORT: Darkhan still here (Sheila is on-call, if needed)
QUICK SUMMARY:
Noticed there is a RED Timing error for H1SUSETMX. Would like to hit Diag_Reset to see if this clears this error, but I'm not sure if this knocks us out of Observation Mode. Will hold off.
I brought up this question on my last shift, and I believe the answer was that it's inconsequential to reset this bit while Observing, except that subsequent errors may be happening during the period that it's RED and we won't know about them / be able to see them in the trend. I took a trend last week at the beginning of one of my shifts and found this error had only happened ~1/week. So as far as I can tell, it's OK to reset this error, but it would be nice to get this blessing from the CDS crew.
Verbal Alarms notified us of a GRB at 7:08:39UTC. We acknowledged on Verbal Alarm Terminal. And then went through the GRB checklist (in L1500117).
Title: 9/22/2015 Eve Shift: 23:00-7:00UTC
State of H1: Observation Mode at 70+Mpc for the last 12+hrs
Support: None needed. Various commissioners present
Shift Summary:Quiet shift.
Activity Log:
0:20 Kyle & Gerardo returning from mid-X station
J. Kissel, for the CAL team
I've created a representative ASD for the start of O1. For now, it uses data from just before the updates to the GDS pipeline were started, at Sep 22 2015 11:29:47 UTC (Tuesday, Sep 22 2015 04:29:47 PDT). Why not after? Because NDS2 from matlab can't get data after the pipeline has been restarted; I'll work with Jonathan and Greg tomorrow to find out the problem. Also, because the comparison with the calibrated PCAL amplitude is within our stated uncertainty thus far (LHO aLOG 21689). Of course, this is just an ASD, and I have not respected the phase. More to come on that.
The strain and displacement ASDs -- and a corresponding ASCII dump of both -- are housed permanently and publicly in G1501223, as part of the collection of ASDs from the Advanced LIGO Sensitivity Plots. Techniques for how the ASD was computed can be found in T1500365, but I've used essentially the exact same process as was used for the "best" ASD from ER7 (LHO aLOG 19275).
A new feature this time around is a comparison of the PCAL vs. the GDS pipeline product. Delightfully -- even though the GDS pipeline was not yet updated, and I have not corrected for any time dependence -- the GDS pipeline calibration of the PCAL excitation in DARM agrees with the displacement calibrated by PCAL itself at all line frequencies (36.7, 331.9, and 1083.7 [Hz]) to better than 5%, which is consistent with our current uncertainty budget (LHO aLOG 21689). Now, before we get all greedy and say "then why did we bother to update the GDS pipeline, and why have we bothered to even compute the time dependent factors?", remember that this is one measurement, and this is only the amplitude/magnitude. We must make the same comparison over a long period of time (a few days), looking at both amplitude and phase, so that we can get a feel for how these track with time and for our remaining systematics.
The script to produce the official strain, plots, and ASCII dumps can be found here: /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/produceofficialstrainasds_O1.m
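For anyone wanting to reproduce something similar outside the calibration SVN, a minimal stand-in is below; this is not the logic of produceofficialstrainasds_O1.m, just a Welch-averaged ASD with settings I've assumed:

import numpy as np
from scipy.signal import welch

def strain_asd(h_of_t, fs, seglen_s=16, overlap=0.5):
    # Welch-averaged amplitude spectral density of a strain time series.
    nperseg = int(seglen_s * fs)
    freqs, psd = welch(h_of_t, fs=fs, window='hann',
                       nperseg=nperseg, noverlap=int(overlap * nperseg))
    return freqs, np.sqrt(psd)                # ASD = sqrt(PSD), units 1/sqrt(Hz)

# Usage (fetching the calibrated strain from NDS2 is omitted; see above):
# f, asd = strain_asd(h_of_t, fs=16384)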
Cheryl and Jeff brought to my attention that GWIstat was reporting incorrect information today. It turns out that the ~gstlalcbc home directory at Caltech was moved to a new filesystem today; GWIstat gets its information from a process running under that account, and apparently got into a funny state. I have now restarted it. For the rest of the current observing segment it will report the duration only from the time I restarted it, about 3:32 UTC. I apologize for the problem!
I see this morning that GWIstat is not showing the correct duration for the current observing segment. The log file on ldas-grid.ligo.caltech.edu, where it is now running, shows that it was restarted twice during the night for no obvious reason, and it is reporting the duration only since it was restarted. I'll ask the Caltech computing folks to look into this. New hardware for ldas-grid was put into use yesterday, and maybe they were still shaking it down last night.
Stuart Anderson told me this is a known problem that seems to have arisen from a condor configuration change. They know how to fix it but will need to restart condor. Until they do that, gwistat should indicate status correctly (except for momentary outages) but may sometimes display the wrong duration for the current state.
During the maintenance window we left the DHARD yaw boost on (21768 and 21708). There was no evidence that it caused any problems, but I was putting excitations onto the transmon at the time, and there were other maintenance activities going on. We'd like to check that it doesn't impact the glitch rate, so if LLO drops out of lock, or if you see an earthquake on the way (0.1 um/sec or larger predicted by Terramon), it would be great if you could turn it on. You can find it under ASC overview > ASC arm cavities, DHARD YAW FM3 (labeled boost). (screenshot)
It would be good to get more than an hour of data, so if you see that LLO has dropped, it would be awesome if you could turn this on until they are back up.
This is just a temporary request, only for tonight or the next few days.
This is actually FM2.
I was texting with Mike to see if taking H1 out of Observation Mode (when L1 is down) for this test was OK by him, and he concurred. This work is referenced by Work Permit #5505. In the work permit, I see a time of 9/21-25 for Period of Activity. So Operators can allow this activity during this time since Mike has signed off on the work permit. (perhaps in the future, we can reference the work permit in alog entries so Operators will know this is an acceptable activity.)
I'm not totally sure when to make the decision to preemptively turn ON this filter if we get a warning of an impending EQ. It's not totally clear which types of EQ will knock us out and which won't. I guess I can (1) look to see if Terramon gives us a RED warning, and (2) watch the 0.03-0.1 um/s seismic band for an order-of-magnitude increase. Perhaps in that case I could then end Observation Mode, turn ON the filter, and stay out of Observation Mode until L1 comes back. (Sorry, just trying to come up with a plan of attack in case L1 drops out.)
As it stands, L1 has been locked for 10 hrs, so we'll keep an eye on them. I asked William to contact me if they drop out (but I'll also watch the FOM & GWI.stat).
I believe that by switching this while in 'Undisturbed', it will show as an SDF diff, thereby automatically taking us to 'Commissioning' mode until the diff is accepted, the ODC Intent-ready bit is green (again), and we can once again click the intent bit to 'Undisturbed'. I asked about this at the JRPC meeting yesterday.
Apologies for the wrong FM number; in the future I'll try to remember to put the WP number in the alog. Operators can probably stop toggling this filter for now. We will put this on the list of minor changes that we will make on maintenance day, so that next Tuesday it can be added to the guardian and the observe.snap, along with some HSTS bounce and roll notches.
SudarshanK, DarkhanT
We were using a 137 degree correction factor on kappa_tst in our time-varying parameter calculation (alog 21594). Darkhan found a negative sign that was placed in the wrong position in the DARM model, which gave us back 180 degrees of phase. Additionally, Shivaraj found that we were not accounting for the DAQ downsampling filter used on the ESD calibration line. These two factors gave us back almost all the phase we were missing. There was also an analog antialiasing filter missing in the actuation TF, which was applied in the new model. After these corrections, Darkhan created the new updated EPICS variables. These EPICS variables are committed at:
CalSVN/Runs/O1/Scripts/CAL_EPICS
Using these new EPICS variables, kappas were recalculated for LHO. For LLO, these EPICS variables do not exist yet. The new plot is attached below. The imaginary parts of all the kappas are now close to their nominal values of 0, and the real parts are within a few percent (2-3%) of their nominal values of 1, which is within the uncertainty of the model. The cavity pole is still off from its nominal value of 341 Hz but has stayed constant over time.
The script to calculate these time varying factors is committed to SVN:
LHO: CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CAL_PARAM/
LLO: CalSVN/aligocalibration/trunk/Runs/ER8/L1/Scripts/CAL_PARAM/
Recall that Stefan made changes to the OMC Power Scaling on Sunday 13 September 2015 (in the late evening PDT, which means Sept 14th UTC). One can see the difference in character (i.e. the subsequent consistency) of kappa_C after this change on Sudarshan's attached plot. One can also see that, for a given lock stretch, the change in optical gain is now no more than ~2-3%. That means that the ~5 [Mpc] trends we see in our ~75 [Mpc] inspiral range, which we've seen evolve over long, 6+ hour lock stretches, cannot be entirely attributed to optical gain fluctuations, as we've been flippantly sure of and claiming. However, now that we've started calculating these values in the GDS pipeline (LHO aLOGs 21795 and 21812), it will be straightforward to make comparative plots between the calculated time dependent parameters and every other IFO metric we have. And we will! You can too! Stay tuned!
Just to drive the point home, I took 15 hours' worth of range and optical gain data from our ongoing 41+ hour lock. The optical gain fluctuates by a few percent, but the range fluctuates by more like 10 %.
Updated the CAL_INJ_CONTROL medm. It is organized a bit differently, labels have changed slightly, and it even has a new button! Duncan Macleod supplied us with an updated ext_alert.py that polls GraceDB for new events (both "E" and "G" types), places the new info in some EPICS records, and then automatically pauses injections for either 3600 s or 10800 s depending on the event.
The Transient Injection Control now has the ability to zero out the pause inj channel. Why is this necessary? The script running in the background of this screen will automatically PAUSE the injections when a new external event alert is detected. If we are down when we get a GRB alert, the script should still pause the injections. The Operator will then need to enable the injections and zero the pause time.
One other thing for Operators to look out for is if we want the injections to stop for longer than the automatic pause time. If we disable the injections by clicking the "Disable" button, and then a new event comes in, it will automatically switch from Disabled --> Paused (this happened to us a few minutes after we started up the script). I am not 100% positive on this, but it seems that when the pause time is up the injections will continue. If this is so, it's definitely something Operators need to watch for.
We will see how this goes and make changes if necessary.
New screen shot attached.
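For context, a rough sketch of what a GraceDB-polling pause script of this kind could look like; this is not Duncan's actual ext_alert.py, and the query string and polling cadence are my assumptions (the EPICS channel name is the one discussed in these entries):

import time
from ligo.gracedb.rest import GraceDb   # ligo-gracedb REST client
from epics import caput                  # pyepics

QUERY = 'External'        # placeholder query; the real script's query is not shown here
POLL_CADENCE_S = 60       # assumed polling cadence

def poll_and_flag_alerts():
    # Record the GPS time of any new alert in the channel that tinj watches;
    # tinj then pauses the transient injections itself.
    client = GraceDb()
    seen = set()
    while True:
        for event in client.events(QUERY):
            if event['graceid'] not in seen:
                seen.add(event['graceid'])
                caput('H1:CAL-INJ_EXTTRIG_ALERT_TIME', event['gpstime'])
        time.sleep(POLL_CADENCE_S)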
There was apparently some confusion about pausing mechanisms; see alog 21822. If the scheme referred to there is restored, the PAUSE and ENABLE features will be fully under the control of the operators. Independently, injections will automatically be paused by the action of the GRB alert code setting the CAL-INJ_EXTTRIG_ALERT_TIME channel. I have emailed Duncan to try to sort this out.
Last night there were two GRB alerts that paused the injections, and they DID NOT re-enable Tinj. The Tinj Control went back to Disabled, as we had set it previously. This is good and works as outlined in the HWInjBookkeeping wiki (Thank you Peter Shawhan!). This was my main worry, and it seems it has already been taken care of. It is a bit misleading when the Tinj control goes from Disabled --> Paused and begins to count up to the "Pause Until" time, but trending the channels shows that it will not re-enable Tinj once that time is reached.
J. Kissel
Some combination of Dave, Jim, Duncan and TJ installed updates to the GRB alert code this morning during maintenance. This updated code now hits the "pause" button on the hardware injection software TINJ when it receives a GRB alert. There is an EPICS record, H1:CAL-INJ_TINJ_PAUSE, which records the GPS time at which TINJ was paused. Somehow, this record -- which is used as a read-back / storage of information, not a setting -- got missed when we went through the un-monitoring of INJ settings-which-are-readbacks channels in the CAL-CS model (see LHO aLOG 21154). So this afternoon, while in observation mode, we received a GRB alert, and the updated code pushed the TINJ pause button, which then filled in the H1:CAL-INJ_TINJ_PAUSE EPICS record, which triggered an SDF difference in the CAL-CS front end, which took us out of science mode. #facepalm. I've chosen to un-monitor this channel and accepted it in the OBSERVE.snap table of the SDF system to clear the restriction for observation mode. Note -- when we are next out of observation mode, we need to switch to the SAFE.snap table, un-monitor this channel there, and switch back to the OBSERVE.snap table. We can't do this now, because switching the table would show the DIFF again, and take us out of observation intent mode again. #doublefacepalm
As I similarly pointed out to the folks at LLO when they tried to implement something similar, having the GRB alert process pause the injection process is a bad model for how to chain the dependencies. Is the GRB process expecting to unpause the injections as well? How do you plan on handling this when there are multiple external alert processes trying to pause the injections? They're all just going to be pausing and un-pausing as they see fit? Bad plan.
Apparently some confusion about this resurfaced after we had (I thought) resolved it in late August (alog 20013). Following the original scheme, CAL-INJ_TINJ_PAUSE and CAL-INJ_TINJ_ENABLE are intended to be under the control of the human operator to set or unset. In parallel, tinj also pauses injections automatically for one hour following the GPS time inserted in CAL-INJ_EXTTRIG_ALERT_TIME by the GRB alert code, ext_alert.py . I have emailed Duncan to try to sort this out.
To ride out earthquakes better, we would like a boost in DHARD yaw (alog 21708). I exported the DHARD YAW OLG measurement posted in alog 20084, made a fit, and tried a few different boosts (plots attached).
I think a reasonable solution is to use a pair of complex poles at 0.35 Hz with a Q of 0.7, and a pair of complex zeros at 0.7 Hz with a Q of 1 (and of course a high-frequency gain of 1). This gives us 12 dB more gain at DC than we have now, and we still have an unconditionally stable loop with 45 degrees of phase everywhere.
A foton design string that accomplishes this is
zpk([0.35+i*0.606218;0.35-i*0.606218],[0.25+i*0.244949;0.25-i*0.244949],9,"n")gain(0.444464)
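A quick check of those numbers (my own sketch: it converts each (frequency, Q) pair to the real/imaginary root locations in Hz used in the design string, and confirms the quoted DC boost):

import numpy as np

def complex_pair(f, q):
    # Real and imaginary parts (in Hz) of a complex root pair at frequency f with quality factor q.
    return f / (2.0 * q), f * np.sqrt(1.0 - 1.0 / (4.0 * q**2))

print(complex_pair(0.7, 1.0))    # zeros: (0.35, 0.606218) -> matches the string above
print(complex_pair(0.35, 0.7))   # poles: (0.25, 0.244949) -> matches the string above

# With a high-frequency gain of 1, the DC gain is (0.7/0.35)**2 = 4:
print(20 * np.log10((0.7 / 0.35)**2))   # ~12 dB, as stated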
I don't want to save the filter right now because, as I learned earlier today, that will cause an error on the CDS overview until the filter is loaded, but there is an unsaved version open on opsws5. If anyone gets a chance to try this at the start of maintenance tomorrow, it would be awesome. Any of the boosts currently in the DHARD yaw filter bank can be overwritten.
We tried this out this morning; I turned the filter on at 15:21, and it was on for several hours. The first screenshot shows error and control spectra with the boost on and off. As you would expect, there is a modest increase in the control signal at low frequencies and a bit more suppression of the error signal. The IFO was locked during maintenance activities (including Praxair deliveries), so there was a lot of noise in DARM. I tried on/off tests to see if the filter was causing the excess noise, and saw no evidence that it was.
We didn't get the earthquake I was hoping for during the maintenance window, but there was some large ground motion due to activities on site. The second attached screenshot shows a lockloss when the Chilean earthquake hit (21774), the time when I turned on the boost this morning, and the increased ground motion during maintenance day. The maintenance-day ground motion that we rode out with the boost on was 2-3 times higher than the EQ, but not all at the same time in all stations.
We turned the filter back off before going to observing mode, and Laura is taking a look to see if there was an impact on the glitch rate.
I took a look at an hour's worth of data after the calibration changes were stable and the filter was on (I sadly can't use much more time). I also chose a similar time period from this afternoon where things seemed to be running fine without the filter on. Attached are glitchgrams and trigger rate plots for the two periods. The trigger rate plots show data binned into 5-minute intervals.
When the filter was on we were in active commissioning, so the presence of high-SNR triggers is not so surprising. The increased glitch rate around 6 minutes is from Sheila performing some injections. In the trigger rate plots I am mainly looking for an overall change in the rate of low-SNR triggers (i.e. the blue dots), which contribute the majority of the background. In the glitchgram plots I am looking for any change of structure.
Based upon the two time periods I have looked at, I would estimate the filter does not have a large impact on the background; however, I would like more stable time with the filter on to confirm this further.
I did a diag reset on a timing glitch that occurred at around 20:48 UTC. The reset was effected at ~23:28 UTC.
Does clicking Diag_Reset knock us out of Observation Mode?