Chris Biwer and other members of the hardware injections team will likely be doing coherent hardware injections in the near future, and these will hopefully be detected successfully by one or more of the low-latency data analysis pipelines. Currently, we are still testing the EM follow-up infrastructure, so the "Approval Processor" software is configured to treat hardware injections like regular triggers. Therefore, these significant GW "event candidates" should cause audible alarms to sound in each control room, similar to a GRB alarm. The operator at each site will be asked to "sign off" by going to the GraceDB page for the trigger and answering the question, "At the time of the event, was the operating status of the detector basically okay, or not?" You can also enter a comment.

For the purpose of these tests, if you are the operator on shift, please:
* Do not disqualify the trigger based on it being a hardware injection -- we know it is! So, please sign off with "OKAY" if the detector was otherwise operating OK.
* Pay attention to whether the audible alarm sounded. In the past we had issues at one site or the other, so this is one of the things we want to test.
* Feel free to enter a comment on the GraceDB page when you sign off, like maybe "this was a hardware injection and the audible alarm sounded".
* Be aware that you may get a phone call from a "follow-up advocate" who is on shift to remotely help check the trigger.

Note: in the future, once the EM follow-up project is "live", a known hardware injection will not cause the control-room alarms to sound (unless it is a blind injection). You should not write anything in the alog about alarms from GW event candidates, because that is potentially sensitive information and the alogs are publicly readable.
IFO has been locked at NOMINAL_LOW_NOISE, 23.0W, 72Mpc for the past 5 hours. Wind and seismic activity are low. 4 ETM-Y saturation alarms. Received GRB alert at 18:31UTC (12:31PT) - LHO was in Observing mode during this event
The attached plot shows the 2-day trend of the RF45 glitches. There were no glitches in the past day. The large glitches 24 hours ago were us. This is not inconsistent with a cable or connection problem. No one should be surprised if the problem reappears.
Title: 10/01/2015, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT) State of H1: At 15:00 (08:00) Locked at NOMINAL_LOW_NOISE, 23.0W, 72Mpc Outgoing Operator: TJ Quick Summary: Wind is calm, no seismic activity. All appears normal. Intent Bit at Commissioning while LLO was recovering from a lockloss.
Title: 10/1 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Locked but not Observing for inj
Shift Summary: I had one lockloss, but it came back up with relative ease. The RF Noise wasn't bothering me.
Incoming Operator: Jeff B
Activity Log:
Relocked @ 14:38
Sheila wants to do a quick injection while LLO is down.
The excitation ended just before we got the GRB alert, but I was making an excitation at the time of the GRB itself (LLO was not in observing, so we were taking advantage of some single-IFO time to investigate noise at 78 Hz in DARM that may come from EX).
When we heard the alert I stopped the DTT session and Jeff B went to observing, but even while we weren't in observing there were stretches with no excitations running. GraceDB lists 1127747079.41 as the event time for the first GRB alert, and unfortunately my excitation was running at that time. My last excitation was ramping down by 1127747090, as shown in the first attached dataviewer screenshot, where the GRB time is approximately in the middle of the plot, so I was exciting the ETMX ISI at the time of the event.
The two channels that I was putting excitations on were H1:ISI-ETMX_ST2_ISO_X_EXC and H1:ISI-ETMX_ST2_ISO_Y_EXC. These were white noise excitations that produced ISI motions of 0.1 nm/rtHz at 20 Hz, with an amplitude that slowly drops off as the frequency increases, down to 0.02 nm/rtHz at 100 Hz. The excitation was bandpassed from 20 Hz to 200 Hz. They produced no features in the DARM spectrum, although they were intended to excite the peaks at 78-80 Hz.
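As an aside, here is a minimal offline sketch (not the actual awg/DTT setup used) of generating and checking a 20-200 Hz band-passed white-noise excitation like the one described above; the sample rate, filter order, and overall scaling are illustrative assumptions.

    import numpy as np
    from scipy import signal

    fs = 4096                      # sample rate [Hz] -- illustrative, not the awg rate used
    dur = 60                       # duration [s]
    f_lo, f_hi = 20.0, 200.0       # band edges quoted in the entry

    # Band-passed white noise (4th-order Butterworth, zero-phase filtered)
    rng = np.random.default_rng(0)
    white = rng.standard_normal(int(fs * dur))
    sos = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    excitation = signal.sosfiltfilt(sos, white)

    # ASD of the drive (arbitrary units; in the real injection the amplitude was
    # tuned to give ~0.1 nm/rtHz of ISI motion at 20 Hz, falling to ~0.02 nm/rtHz
    # by 100 Hz)
    f, psd = signal.welch(excitation, fs=fs, nperseg=8 * fs)
    asd = np.sqrt(psd)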
Lockloss @ 13:59 UTC
ITMX saturation, and it tripped SUS OMC WD. No obvious reason for lockloss.
H1IOPPEMMY and H1PEMMY both started to report errors for FE, ADC, and DDC on the CDS Overview around 12:55 UTC. There was a red indicator around TDS, so I checked the timing screen, and there seems to be a problem with Port 13: "Invalid or no data".
Since this is only PEM at MidY, I have NOT taken us out of Observing.
The I/O chassis is no longer visible to the computer h1pemmy. This is not critical to the operation of the interferometer. This can wait until Tuesday to fix unless someone desperately needs PEM data from MY.
Humming along @ 75Mpc. Have had a handful of glitches during my shift, but the RF noise seems to be in control for now.
J. Kissel, for the Calibration Team

I've updated the results from LHO aLOG 21825 and G1501223 with an ASD from the current lock stretch, such that I could display the computed time-dependent correction factors, which have recently been cleared of systematics (LHO aLOG 22056), sign errors (LHO aLOG 21601), and bugs yesterday (22090). I'm happy to say that not only does the ASD *without* time-dependent corrections still fall happily within the required 10%, but if one eyeballs the time-dependent corrections and how they would be applied at each of the respective calibration line frequencies, they make sense.

To look at all relevant plots (probably only interesting to calibrators and their reviewers), look at the first pdf attachment. The second and third .pdfs are the money plots, and the text files are a raw ascii dump of the respective curves so you can plot them however or wherever you like. All of these files are identical to what is in G1501223.

This analysis and these plots have been made by /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/produceofficialstrainasds_O1.m, which has been committed to the svn.
Apparently, this script has been moved to a slightly different location. The script can be found at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMASDs/produceofficialstrainasds_O1.m
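As an aside, here is a minimal sketch of the basic ASD computation being referred to (a Welch estimate of the strain amplitude spectral density over a lock stretch). It is not the produceofficialstrainasds_O1.m code, the segment length is an assumed choice, and the time-dependent correction factors discussed above would enter as additional scalings that are not modeled here.

    import numpy as np
    from scipy import signal

    def strain_asd(h, fs, seg_sec=16):
        """Welch-averaged amplitude spectral density of a calibrated strain series.

        h       : strain (or DELTAL) time series
        fs      : sample rate [Hz]
        seg_sec : FFT segment length [s] (illustrative choice)
        """
        f, psd = signal.welch(h, fs=fs, nperseg=int(seg_sec * fs),
                              window='hann', detrend='constant')
        return f, np.sqrt(psd)

    # Example with placeholder data; in practice h would be the calibrated strain
    # channel for the lock stretch of interest.
    fs = 16384
    h = 1e-21 * np.random.default_rng(1).standard_normal(fs * 64)
    f, asd = strain_asd(h, fs)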
Title: 10/1 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
State of H1: Observation Mode at 75Mpc for the last 10hrs
Outgoing Operator: Travis S
Quick Summary: Locked his whole shift.
Title: 9/30 Eve Shift 23:00-7:00 UTC (16:00-24:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: No news is good news. Locked my entire shift in Observing. Only 4 ETMy saturations. Seismic and wind calm. No RF45 issues.
Incoming operator: TJ
Activity log:
23:38 ETMy saturation
3:07 ETMy saturation
4:56 ETMy saturation
6:36 ETMy saturation
Following on from where Hugh left off in alog 21412, I have copied the OBSERVE.snap files from their target area to their appropriate userapps/...h1xxxxx_observe.snap area in prep for committing them to svn. I have copied these files for: SUS, ALS, ISC, CALEX, CALEY. Hugh had already done LSC, LSCAUX, ASC, ASCIMC, OMC, OAF, TCSCS, CALCS, and all SEI. The balance (AUX, IOP, ODC, and PEM) have no observe.snap file since it is unnecessary. The ones that I've just copied will be committed to svn tomorrow.
This eve, I finished committing the moved-over OBSERVE.snap files to the svn. I also committed the lsc OBSERVE.snap since we had changed a few settings (DHARD Y FM2 for example) recently.
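For completeness, a minimal sketch of the copy-then-commit bookkeeping described in these two entries; the directory layout and file names below are placeholders, not the actual target or userapps paths.

    import shutil
    import subprocess

    # Placeholder paths -- the real target and userapps locations differ per model.
    models = ['sus', 'als', 'isc', 'calex', 'caley']
    for model in models:
        src = f'/path/to/target/h1{model}/burt/OBSERVE.snap'          # hypothetical
        dst = f'/path/to/userapps/{model}/h1/h1{model}_observe.snap'  # hypothetical
        shutil.copy2(src, dst)                            # copy into the svn working copy
        subprocess.run(['svn', 'add', dst], check=False)  # no-op if already under version control
        subprocess.run(['svn', 'commit', '-m',
                        f'h1{model} OBSERVE.snap copied from target', dst], check=True)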
J. Kissel, B. Weaver

This aLOG serves more as a discussion of recent results and future impact, but I figure the aLOG is the most visible place to cover a message that needs discussion among many disparate groups. I discuss the latest evidence for charge evolution on the H1 ETMY test mass, why it matters for that test mass alone, and what it will impact when we do change it (especially in regards to calibration). Conclusions are in bold at the bottom of each section.

----------------
Why I think it's looming:

:: kappa_TST is the calibration group's measure of the change in actuation strength of the test mass stage as a function of time, and since H1 ETMY is the only DARM actuator, it's showing that the H1 ETMY ESD / TST / L3 stage's strength has increased. kappa_TST has shown an increase in actuation strength of ~2% over ~3 [weeks]: see the blue trace in the top subplot of the first attachment (copied from LHO aLOG 22082).

:: H1 ETMY is showing steadily increasing charge since we last flipped the ESD bias, as expected: see the trend in all subplots of the second attachment (copied from LHO aLOG 22062).
> Recall that actuation strength changes as a function of charge, proportional to the ratio of effective bias voltage from charge to the bias voltage we apply intentionally, Vc / Vb.
> Regrettably, we've measured the charge so infrequently since the start of the run that it's dicey to corroborate between charge and actuation strength. But I'll try anyway. If you take Betsy's last and second-to-last charge measurement points, which bound Sudarshan's kappa_TST time span, you can eyeball that the change is 15 [V] effective bias. This means an actuation strength change of 15 / 380 = 4%. Given the error bars on the charge measurements and how few measurements we've had, this is totally consistent with the ~2% change seen by tracking the calibration lines. Also, if we increase the effective bias voltage from charge on an already positive bias, then we're increasing the bias voltage, which is consistent with an increase in actuation strength, since the linear component of the actuation force is proportional to the bias voltage.
> The current rate of change of ETMY is ~3 V / week. But also recall that the rate of change has *changed* every time we've flipped the bias; see Leo's analysis in LHO aLOG 20387, which (for ETMY) quotes 1.75 V / week for one epoch and 5 V / week for the next. So it's going to be subjective when we flip the bias sign, and we have to keep close tabs on it, especially since the error bars on an individual day's measurement require many data points to show a pattern. However, the evidence up to now suggests that even though a bias sign flip changes the rate, the rate stays constant until the bias sign is flipped again.

:: The calibration group -- now that we have finally removed all systematic errors, fixed all problems, and cleaned up all sign confusion -- finally believes that the live-tracking of this slow time dependence is working, and is tracking the real thing as far as these very long term trends go (see the first attachment again). But we have not yet started applying these corrections, because we haven't figured out the right *time scale* on which to apply them, and applying them takes some debugging. See the third attachment (this is new). These time-dependent parameters are being computed at a rate of 16 [Hz], but if you look at, say, one 420 [sec] chunk, the record wanders all over within a roughly +/-3% swing on a ~10 [sec] cadence.
We have no evidence to believe the test mass actuation strength is changing this fast, so this is likely noise. So at this point, unless we're willing to lose a day or so of data (i.e. enough lock stretches where we can get a sense of a pattern) while we test them out, I don't think we *can* start correcting for these factors.

:: The calibration group's requirement is to stay within a 10% and 5 [deg] uncertainty over the course of the entire run. We already know from Craig's uncertainty analysis of ER8 (see the fourth attachment, copied from LHO aLOG 21689) that -- without including time dependence -- the reference-time model (in reference to which all time-dependent parameters are calculated) has an uncertainty of 5% and a little more than 5 [deg]. The change in actuation strength means that we'll have a systematic error that grows with time, and since we're fighting for what's left of the 10% uncertainty budget, the 2-3% change in actuation strength that we've tracked over these first three weeks of the run is significant.

:: If we let the charge continue to accumulate at its current rate over the remaining ~11 weeks of the run, that means another ~35 [V], for a total accumulated effective bias of ~60 [V] and a total actuation strength change of 60 / 380 = 15%, which is *well* outside of our budget.

In summary, we're going to need to change the sign of the bias on H1 ETMY soon.

-------------------
What will be impacted when we change it:

:: First, foremost, and easiest: when we change the digital bias sign, it changes the sign of the test mass actuation stage. That means we have to compensate for it in order for the DARM loop to remain stable, which means we change the sign of the gain in the L3 DRIVEALIGN bank, i.e. H1:SUS-ETMY_L3_DRIVEALIGN_L2L_GAIN. Done. Easy.

:: Now on to the hard stuff -- making sure it doesn't affect the calibration. The CAL-CS replication of the reference model of the DARM loop is the obvious first impact.
> Of course, we need to flip the sign of the digital replica of the drive-align matrix, H1:CAL-CS_DARM_FE_ETMY_L3_DRIVEALIGN_L2L_GAIN.
> We also need to flip the sign of the replica of the ESD itself, H1:CAL-CS_DARM_ANALOG_ETMY_L3_GAIN.

:: The not-so-obvious impact is on the tracking of the actuation strength. Because the ESD / TST / L3 calibration line is injected downstream of the DARM distribution but upstream of the drive-align bank, the sign in the analysis code must change. We've already been bitten by this once -- see LHO aLOG 21601.
> That means we need to update the EPICS records that capture the reference model values at the calibration line frequencies.
> *That* means we need to create a new DARM model parameter set, which also maps all of the changes from the ESD / TST / L3 sign change (just like CAL-CS).
> *THAT* means we ought to take a new DARM open loop gain and PCAL-to-DARM transfer function, to verify that the parameter set is valid.
> *THAT* means we need a fully functional and undisturbed IFO (i.e. this can't just be done on a Tuesday, or as a "target of opportunity" when L1 is down), *after* the sign flip, that we're willing to take out of observation mode for an hour or so.

:: Once we have a new, validated model, we push the new EPICS records into the CAL-CS model, and everything that needs updating should be complete.

:: There's then the "offline" work of updating the SDF system -- but this must be done quickly because it prevents us from setting the observation intent bit.
For these particular records, which have very small numbers, there are precision issues that mean you must hand-edit the .snap files (see, e.g., LHO aLOGs 22079, 22065, and 21014), which is another hour of time, instead of just hitting "accept all" and "confirm" in the SDF GUI interface like any other record.

In summary: if we are properly prepared for this, and everyone is on deck ready for it, I think this can all be done in a single (very long) human day, especially if we have a *team* of dedicated calibrators, detector engineers, and some commissioning support.

Further: it's not a "just do it on a Tuesday" or "just do it as a target of opportunity when L1 is down" kind of task, and each of the above steps should be done slowly and methodically. Also, I should say that there is room for improvement in just about every part of this bias-sign-flipping process, but all of those improvements would require non-science-run-friendly changes to front-end code and RCG infrastructure, and a good bit of commissioning time.
Why isn't this an issue with any of the other test masses?

Both sites' ETMX ESD drivers are turned entirely off during low-noise operation, so they have no impact on noise or actuation strength. At H1, ETMX is used for lock acquisition, but whether the ETMX acquisition ESD has a 10-20% change in actuation strength over the course of the run does not matter for calibration or lock acquisition. Regrettably, the H1 ETMX bias is currently negative, so its current trend of positive charge means that its actuation strength will be decreasing over the run. But, again, a 10-20% drop in strength won't matter. The only place where I could see us running into trouble is wherever we're on the edge of stability / robustness, like acquiring during very high winds, or if we've designed the L3/L1 ALS DIFF cross-over particularly aggressively (which I hope we have not).

At L1, they've recently switched to using ETMY for lock acquisition, then transitioning to their *ITMX* driver, switching the ETMY driver to low noise, and then transitioning back. So only their ETMY ESD strength matters. And, goh'bless'm, they flipped their bias sign *just* before the run started with the charge at a -40 [V] effective bias, so at their current positive charging rate of ~10 [V]/month (2.5 [V]/week) they'll go through zero right around the end of the run. That means the impact of this on their actuation strength will be slowly decreasing over time.

At H1, the ITMs do not have any ESD drivers (high or low voltage), so they need no consideration. At L1, only ITMX has an ESD driver, but again, it's only used in the transition to low noise, so it doesn't play a role in calibration, and assuming the loops were designed with ample margin, a 10-20% change in actuation strength shouldn't be a bother.

In summary, H1 ETMY is the only test mass with which we will need to play such terrible games during O1.
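For anyone who wants to redo the extrapolation arithmetic above, here is a minimal sketch of the quoted numbers (effective charge voltage over applied bias, and the linear rate projection); the bias voltage, charging rate, and remaining run length are the values assumed in the discussion above, not new measurements.

    # Sketch of the charge / actuation-strength arithmetic quoted above.
    V_BIAS = 380.0  # applied ESD bias [V], as assumed in the discussion

    def strength_change(v_charge):
        """Fractional actuation-strength change ~ Vc / Vb (linear-in-bias term)."""
        return v_charge / V_BIAS

    # ~15 V effective-bias change between the last two charge measurements:
    print(strength_change(15.0))   # ~0.04, i.e. ~4%, consistent (within error bars)
                                   # with the ~2% kappa_TST trend

    # Projection: ~3 V/week over the remaining ~11 weeks of the run
    additional = 3.0 * 11          # ~33-35 V more accumulated effective bias
    print(strength_change(60.0))   # ~0.16, i.e. the ~15% quoted -- well outside the
                                   # 10% calibration uncertainty budget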
I'm posting an early version of this alog so that people can see it, but plan to edit again with the results of the second test.
Yesterday I took a few minutes to follow up on the measurements in alog 21869. This time, in addition to driving TMS, I drove the ISI in the beam direction to reproduce the motion caused by the backreaction to TMS motion. We also briefly had a chance to move the TMSX angle while exciting L.
The main conclusions are:
Comparison of ISI drive to TMS drive for X and Y
The attached screenshot shows the main results of the first test (driving ISIs and TMSs). In the top right plot you can see that I got the same amount of ISI motion for 3 cases (driving ETMX ISI, TMSX, ETMY ISI) and that driving TMSY with the same amplitude as TMSX resulted in a 50% smaller motion of the ISI. Shaking the TMS in the L direction induces a larger motion, as measured by the GS13s, in the direction perpendicular to the beam than in the beam direction, which was not what I expected. I chose the drive strength to get the same motion in the beam direction, so I have not reproduced the largest motion of the ISI with this test. If there is a chance, it would be interesting to also repeat this measurement reproducing the backreaction in the direction perpendicular to the beam.
The middle panels of the first screenshot show the motion measured by OSEMs. The TMS OSEMs see about a factor of 10 more motion when the TMS is driven than when the ISI is driven. The signal is also visible in the quad top mass OSEMs, but not lower down the chain. For the X end, the longitudinal motion seen by the top mass is about a factor of 2 higher when the TMS is excited than when the ISI is excited (middle left panel), which could be because I have not reproduced the full backreaction of the ISI to the TMS motion. However, it is strange that for ETMY the top mass OSEM signal produced by driving TMS is almost 2 orders of magnitude larger than the motion produced by moving the ISI. It seems more likely that this is a problem of cross coupling between the OSEMs than real mechanical coupling. The ETMY top mass OSEMs are noisier than ETMX, as Andy Lundgren pointed out (20675). It would be interesting to see a transfer function between TMS and the quad top mass to see if this is real mechanical coupling or just cross talk.
In the bottom left panel of the first screenshot, you can compare the TMS QPD B yaw signals. The TMS drive produces larger QPD signals than the ISI drive, as you would expect, for both end stations. My first guess would be that driving the ISI in the beam direction could cause TMS pitch, but shouldn't cause as much yaw motion of the TMS. However, we see the ETMX ISI drive in the yaw QPDs, but not pitch. The Y ISI drive does not show up in the QPDs at all.
Lastly, the first plot in the first screenshot shows that the level of noise in DARM produced by driving the ETMX ISI is nearly the same as what is produced by driving TMSX. Since the TMS motion (seen by TMS osems) is about ten times higher when driving TMS, we can conclude that this coupling is not through TMS motion but the motion of something else that is attached to the ISI. Driving ETMY ISI produces nothing in DARM but driving TMSY produces a narrow peak in DARM.
For future reference:
I drove ETMX-ISI_ST2_ISO_X_EXC with an amplitude of 0.0283 cnts at 75 Hz from 20:07:47 to 20:10:00UTC sept 29th
I drove 2000 cnts in TMSX test L from 20:10:30 to 20:13:30UTC
I drove ETMY-ISI_ST2_ISO_Y_EXC with an amplitude of 0.0612 cnts at 75 Hz from 20:13:40 to 20:16:30UTC
I drove 2000 cnts in TMSY test L from 20:17:10 to 20:20:10UTC
Driving TMSX L while rastering TMS position and angle
I put a 2000 cnt drive on TMSX L from about 2:09 UTC September 30th to 2:37, when I broke the lock. We found a ghost beam that hits QPD B when TMS is misaligned by 100 urad in the positive pitch direction. There is about 0.5% as much power in this beam as in the main beam (not accounting for the dark offset). I got another chance to do this this afternoon, and was able to move the beam completely off of the QPDs, which did not make the noise coupling go away or reduce it much. We can conclude, then, that scatter off of the QPDs is not the main problem. There were changes in the shape of the peak in DARM as TMS moved, and changes in the noise at 78 Hz (which is normally non-stationary). Plots will be added tomorrow.
Speculation
There is a feature in the ETMX top mass OSEMs (especially P and T) around 78 Hz that is vaguely in the right place to be related to the excess noise in the QPDs and DARM. Also, Jeff showed us some B&K measurements from Arnaud (7762) that might hint at a quad cage resonance at around 78 Hz, although the measured Q looks a little low to explain the spectrum of the TMSX QPDs or the feature in DARM. One could speculate that the motion driving the noise at 78 Hz is the quad cage resonance, but this is not very solid. Robert and Anamaria have data from their PEM injections that might be able to shed some light on this.
The units in the attached plots are wrong; the GS13s are calibrated into nm, not meters.
This morning I got the chance to do some white noise excitations on the ETMX ISI, in the X and Y directions. The attached screenshot shows the result, which is that for ISI motion a factor of 10-100 above the normal level, over a wide range of frequencies, no noise shows up in DARM. So the normal level of ISI motion in the X and Y directions is not driving the noise in DARM at 78 Hz. We could do the same test for the other ISI DOFs to eliminate them as well.
C. Biwer, J. Kissel

Taking advantage of single-IFO time to run PCAL vs DARM hardware injections. More details later.
PCAL injection tests complete. PCAL X has been restored to its nominal configuration.

Injection end times (GPS, approximate):
DARM 1: 1127683335
PCAL 1: 1127683906
PCAL 2: 1127684171
PCAL 3: 1127684465
DARM 2: 1127684766
DARM 3: 1127685143

More details and analysis to come. These were run from the hwinjection machine as hinj.

Usual DARM command:
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d

PCAL command:
awgstream H1:CAL-PCALX_SWEPT_SINE_EXC 16384 coherenttest1from15hz_1126257408.out 1.0 -d -d

We turned OFF the 3 [kHz] PCAL line during the excitation. We're holding off on observation mode to confer about other single-IFO tests we can do while L1 is down.
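As a convenience for anyone cross-checking these times against other logs, a small sketch converting the quoted GPS end times to UTC; this uses astropy's GPS time support and is just an aside, not part of the injection procedure.

    from astropy.time import Time

    end_times_gps = {
        'DARM 1': 1127683335, 'PCAL 1': 1127683906, 'PCAL 2': 1127684171,
        'PCAL 3': 1127684465, 'DARM 2': 1127684766, 'DARM 3': 1127685143,
    }
    for name, gps in end_times_gps.items():
        # GPS -> UTC conversion (handles the GPS-UTC leap-second offset)
        print(name, Time(gps, format='gps').utc.iso)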
I've attached omega scans of the PCAL and DARM injections. All injections used the 15Hz template from aLog 21838.
The SNRs of the Pcal injections seem a bit lower than intended. Omega reports SNR 10.5 for the injection through the normal path, which is about right, but for the Pcal injections the SNRs are 5.5, 7.6, and 7.2. Note that these are the SNRs in CAL-DELTAL; someone should check in GDS strain as well. Omega scans are attached.
*** Cross-reference: See alog 22124 for summary and analysis
The ext_alert.py script, which periodically queries GraceDB, had failed. I have just restarted it; instructions for restarting are at https://lhocds.ligo-wa.caltech.edu/wiki/ExternalAlertNotification
Getting this process to autostart is now on our high priority list (FRS3415).
Here is the error message displayed before I did the restart:
File "ext_alert.py", line 150, in query_gracedb
return query_gracedb(start, end, connection=connection, test=test)
File "ext_alert.py", line 150, in query_gracedb
return query_gracedb(start, end, connection=connection, test=test)
File "ext_alert.py", line 135, in query_gracedb
external = log_query(connection, 'External %d .. %d' % (start, end))
File "ext_alert.py", line 163, in log_query
return list(connection.events(query))
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 441, in events
uri = self.links['events']
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 284, in links
return self.service_info.get('links')
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 279, in service_info
self._service_info = self.request("GET", self.service_url).json()
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 325, in request
return GsiRest.request(self, method, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 201, in request
response = conn.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1038, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 415, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib/python2.7/ssl.py", line 241, in recv
return self.read(buflen)
File "/usr/lib/python2.7/ssl.py", line 160, in read
return self._sslobj.read(len)
ssl.SSLError: The read operation timed out
I have patched the ext_alert.py script to catch SSLError exceptions and retry the query [r11793]. The script will retry up to 5 times before crashing completely, which is something we may want to rethink if it proves necessary.
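For illustration, a minimal sketch of the catch-and-retry pattern described above; this is not the actual r11793 patch, and the function and parameter names are hypothetical.

    import ssl
    import time

    MAX_RETRIES = 5   # matches the behavior described above: give up after 5 tries

    def query_with_retry(query_fn, *args, retries=MAX_RETRIES, wait=10, **kwargs):
        """Call query_fn, retrying on ssl.SSLError (e.g. read timeouts)."""
        for attempt in range(1, retries + 1):
            try:
                return query_fn(*args, **kwargs)
            except ssl.SSLError:
                if attempt == retries:
                    raise            # out of retries: let the script crash as before
                time.sleep(wait)     # brief pause before querying GraceDB again

    # Hypothetical usage, wrapping the query_gracedb call seen in the traceback:
    # events = query_with_retry(query_gracedb, start, end, connection=connection, test=test)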
I have requested that both sites svn up and restart the ext_alert.py process at the next convenient opportunity (i.e. the next time it crashes).