J. Kissel

As a part / start of the review process for the calibration pipeline, it has been requested that -- along with a long-term view of the PCAL to GDS-CALIB_STRAIN ratios (as shown in LHO aLOG 24285) -- we take snapshots of the entire C01 GDS-CALIB_STRAIN spectrum at a smattering of times, and take the ASD ratios as has been done for the official strain sensitivity plots (see e.g. G1501223). I've done so over the early part of the run, between Sep 11 and Oct 22 2015, every 5 days or so:
Sep 11 2015 20:30:43 UTC
Sep 18 2015 04:50:43 UTC
Sep 25 2015 07:23:43 UTC
Oct 01 2015 01:30:43 UTC
Oct 05 2015 05:31:43 UTC
Oct 11 2015 18:57:43 UTC
Oct 17 2015 07:41:43 UTC
Oct 22 2015 03:12:43 UTC
(recall that we get 420 [sec] of data for each ASD -- see T1500365).

Attached are the zooms of the ASDs of both GDS-CALIB_STRAIN and PCAL displacement around the three PCAL frequencies (C01_H1_O1_Sensitivity_displacement_asd_pcalzoom.pdf) and the ratio of these ASDs (C01_H1_O1_Sensitivity_displacement_asd_pcalzoomratio.pdf) for these 8 data sets. I figured this was plot overload, so I leave all nine plots that come with each official strain set simply committed to the CalSVN here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/DARMASDs/
where the collections of plots are dated accordingly (e.g. 2015-10-22_C01_H1_O1_Sensitivity.pdf).

The message: for these spot-checks, the ratio between GDS C01 calibrated displacement and PCAL calibrated displacement is within the expected 10%, and all but the earliest data point are better than 5%. Very good -- great job, CAL team!
-------
These plots were made with a generalization of the official-strain-producing script that now lives in
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/Common/Scripts/produceofficialstrainasds_O1_C01.m
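For illustration only, here is a minimal Python sketch of the ASD-ratio spot check described above (this is not the produceofficialstrainasds_O1_C01.m script; the sample rate, stand-in data arrays, and the quoted PCAL line frequencies are all assumptions):

    import numpy as np
    from scipy.signal import welch

    def asd(x, fs, seglen=420.0):
        """Amplitude spectral density from one seglen-second stretch of data."""
        f, pxx = welch(x, fs=fs, nperseg=int(seglen * fs))
        return f, np.sqrt(pxx)

    # Stand-in arrays for 420 s of GDS-CALIB_STRAIN and PCAL signals, both
    # assumed already calibrated into displacement [m]; real data would come
    # from the frames.
    fs = 4096.0
    n = int(420.0 * fs)
    gds_disp = np.random.normal(size=n)
    pcal_disp = np.random.normal(size=n)

    f, gds_asd = asd(gds_disp, fs)
    _, pcal_asd = asd(pcal_disp, fs)

    # The spot-check quantity: the ASD ratio evaluated near each PCAL line.
    ratio = gds_asd / pcal_asd
    for f_line in (36.7, 331.9, 1083.7):  # illustrative line frequencies
        idx = np.argmin(np.abs(f - f_line))
        print("ratio near %.1f Hz: %.3f" % (f_line, ratio[idx]))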
Per the Shift Check Sheet (for Thursday's task), I went and checked on the Crystal & Diode Chillers for the PSL. The Crystal Chiller was not at maximum, so I added 125 mL (basically to get the water's meniscus close to the "MAX" level). NOTE: according to the notepad, it was also filled with 125 mL on 12/16.
Diode Chiller was fine (no red error light).
H1 has been locked for 5.5+ hrs with a range between 70 and 80 Mpc (only one ETMy saturation).
All running well here. I was not able to log into the ops workstation (operator0) at the beginning of the shift, but Jim B was able to remedy that.
Snowed earlier in the shift, and we have about 0.75" of snow on the ground currently. Winds are at about 6 mph & useism is at 0.4 um/s.
The omicron scans of the top ten Hanford BBH/BNS triggers on December 16th show a notable recurring wave-like glitch. The glitches are present in the following scans:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BBH/GW/1134302174/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BBH/GW/1134294300/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BBH/GW/1134291513/
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BNS/GW/1134295111/
Inspection of the auxiliary channels revealed that these signals seem to stem from one channel: H1:ASC-Y_TR_B_PIT_OUT_DQ. The scans for this channel are visible here:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BBH/1134302174/#H1:ASC-Y_TR_B_PIT_OUT_DQ
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BBH/1134294300/#H1:ASC-Y_TR_B_PIT_OUT_DQ
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BBH/1134291513/#H1:ASC-Y_TR_B_PIT_OUT_DQ
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/BNS/1134295111/#H1:ASC-Y_TR_B_PIT_OUT_DQ
The rest of the scans from December 16th can be accessed from the chart here:
https://ldas-jobs.ligo-wa.caltech.edu/~jacob.broida/Dec16/Results.html
BCV results for the above glitches.
BBH glitches:
https://ldas-jobs.ligo-wa.caltech.edu/~sudarshan.ghonge/BCV/O1/H1_glitch_151216_bbh/H1_1134289817_1134329417_webpage/
BNS glitches:
https://ldas-jobs.ligo-wa.caltech.edu/~sudarshan.ghonge/BCV/O1/H1_glitch_151216_bns/H1_1134282617_1134336617_webpage/
TITLE: 12/17 Day Shift: 16:00-24:00UTC
STATE of H1: Observing, only just
Support: Yes
Quick Summary: Quiet shift until an earthquake knocked us out of lock
Shift Activities:
17:45 Mitch to Optics Lab
19:00 Jodi to Mid-y and EndX, back 19:45
It was a relatively quiet shift. We were locked at about 75 Mpc until about 15 minutes ago, when a magnitude 6.4 earthquake in Mexico hit us. Ground motion in the 0.03-0.1 Hz band is about 5 microns. Otherwise, wind is moderate and microseism was about 0.5 microns. Now we play the waiting game.
I've taken some measurements indicating that the EX sensor correction in Z is causing problems for the ISI. I've ramped it off and am watching the IFO for issues while I take further measurements. This may eventually result in a change of configuration for the ISI.
All of these analyses were done on C00 data, and I am sure this discrepancy in phase will disappear in C01 data. I will analyze and confirm it once the SLM data for C01 are available.
J. Kissel, K. Izumi, J. Betzwieser

Since we last took the full actuation characterization measurement suite (see LHO aLOG 20940, T1500383), we've revised a few bits of the measurement methodology such that we might have more successful measurements. I recall them here to help us remember what we plan to do. The major differences are as follows:

(1) Order of measurements (LLO innovation). In order to make the suite as robust as possible against environmental or other causes of lock loss, Joe has suggested that we take the full-IFO portions of the suite *first*, while the IFO is stable and fully operational. These "final results" are really the only measurements that *need* to be in the same lock stretch, such that we can make sure that the optical gain of the FULL IFO is the same. Only after all 6 of those sweeps are complete do we *then* intentionally break the IFO and begin the ALS DIFF and free-swinging Michelson measurements with lesser configurations of the IFO. The full outline of the measurements is attached as a picture.

(2) Merging of the free-swinging Michelson and ALS DIFF IFO-propagation measurements (LLO innovation). To date, LHO has propagated the free-swinging Michelson absolute calibration from the ITMX L2 stage to ETMX using the traditional single arm locked on red. However, during his last attempt, Joe used ALS DIFF to propagate the absolute calibration because (a) ALS DIFF is a more sensitive measurement and (b) we need to lock ALS DIFF anyway on our path to propagating the ALS DIFF absolute calibration.

(3) Using the "super actuator" ALS DIFF transfer function (LHO innovation). This was mentioned in LHO's last attempt (see 20940). The idea is to reduce the number of sweeps by one by taking advantage of a little loop math. From a simple diagram of the loop, one can show that

    L3 LOCK IN2        1
    -----------  =  ----------
    L3 LOCK EXC     1 + G_DIFF

which is true for an excitation at any point around the loop, just as is "normally" done with a DARM IN2 / DARM EXC TF. Further, from

    DIFF_PLL_CTRL        1
    -------------  =  ----------  x  ETMX  x  DIFF
     L3 LOCK EXC      1 + G_DIFF

one can immediately see that the absolute calibration of the super-actuator ETMX falls out of the ratio of these two transfer functions, assuming you have the absolute calibration of DIFF such that you can divide it out (i.e. the [Hz/ct] and z:p = 40:1.6 Hz pair of the VCO, i.e. measurements (3) and (4), which we have a priori). What's great is that, since you're using the same excitation, as long as you store both of these channels in the template, you can directly measure and export the transfer function ratio that you really want,

    DIFF_PLL_CTRL
    -------------  =  ETMX  x  DIFF
     L3 LOCK IN2

and thus you've reduced two measurements into one.
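Written out as a worked equation (the same relations as the fractions above, with G_DIFF the ALS DIFF open-loop gain; nothing new here beyond the division step):

    \frac{\mathrm{DIFF\_PLL\_CTRL}}{\mathrm{L3\,LOCK\,IN2}}
      = \frac{\mathrm{DIFF\_PLL\_CTRL} / \mathrm{L3\,LOCK\,EXC}}
             {\mathrm{L3\,LOCK\,IN2} / \mathrm{L3\,LOCK\,EXC}}
      = \frac{\frac{1}{1+G_{\mathrm{DIFF}}} \times \mathrm{ETMX} \times \mathrm{DIFF}}
             {\frac{1}{1+G_{\mathrm{DIFF}}}}
      = \mathrm{ETMX} \times \mathrm{DIFF}

i.e. the loop-suppression factor 1/(1+G_DIFF) cancels in the ratio because both transfer functions share the same excitation.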
BNS range is taking a small dip of ~5 Mpc, probably due to increasing traffic. Otherwise everything looks good. Wind speed < 10 mph. Useism hanging out at the 50th percentile. Ground motion in the EQ band is trending downward. A couple of ETMY saturations since the shift started.
TITLE: 12/16 EVE Shift: 00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: NLN around 80Mpc (Observing for last 4.5+ hrs)
Incoming Operator: Nutsinee
Support: Sheila helped me out.
Quick Summary:
Documentary crew in Control Room at beginning of the shift. One lockloss due to an EPICS freeze & another due to operator error (trying to fix the SR3 cage servo to prevent the kind of lockloss caused by the EPICS freeze). Pretty decent double coincidence.
Shift Activities:
Summary: There were 11 scheduled hardware injections:
1134345924 1 1.0 coherentbbh10_1128678894_
1134348928 1 1.0 coherentbbh11_1128678894_
1134353728 1 1.0 coherentbbh15_1128678894_
1134354928 1 1.0 coherentbbh16_1128678894_
1134356128 1 1.0 coherentbbh17_1128678894_
1134357328 1 1.0 coherentbbh18_1128678894_
1134359800 1 1.0 coherentbbh11_1128678894_
1134361000 1 1.0 coherentbbh12_1128678894_
1134362200 1 1.0 coherentbbh13_1128678894_
1134363400 1 1.0 coherentbbh14_1128678894_
1134364600 1 1.0 coherentbbh19_1128678894_
The first column is the start time of the injection. The second column is an integer that specifies that it was a CBC injection. The third column is the scale factor. And the fourth column is the beginning prefix of the parameter/waveform files. (A minimal parser sketch for this format appears at the end of this entry.)
The waveform files can be found here:
https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/H1/
https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/L1/
The parameter files can be found here:
https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/
Two of these happened in L1 only, because H1 lost lock. Those two are:
1134348928 1 1.0 coherentbbh11_1128678894_
1134357328 1 1.0 coherentbbh18_1128678894_

Segments: The segment database reports 9 injections for the H1:ODC-INJECTION_CBC:2 flag:
1134345929,1134345931
1134353734,1134353735
1134354934,1134354935
1134356134,1134356135
1134359806,1134359807
1134361005,1134361007
1134362206,1134362207
1134363405,1134363407
1134364606,1134364607
And 11 for the L1:ODC-INJECTION_CBC:2 flag:
1134345929,1134345931
1134348934,1134348935
1134353734,1134353735
1134354934,1134354935
1134356134,1134356135
1134357333,1134357335
1134359806,1134359807
1134361005,1134361007
1134362206,1134362207
1134363405,1134363407
1134364606,1134364607
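As referenced above, a minimal Python sketch of a parser for the four-column schedule format (illustrative only; the "schedule" file path is a placeholder, not the actual tinj configuration):

    from collections import namedtuple

    # One row of the schedule: GPS start time, injection type (1 = CBC),
    # scale factor, and the prefix of the parameter/waveform files.
    ScheduleEntry = namedtuple("ScheduleEntry", "gps_start inj_type scale prefix")

    def parse_schedule(path):
        """Parse a four-column hardware-injection schedule file."""
        entries = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) != 4:
                    continue  # skip blank or malformed lines
                gps, inj_type, scale, prefix = fields
                entries.append(
                    ScheduleEntry(int(gps), int(inj_type), float(scale), prefix))
        return entries

    # Example usage: list the start time and waveform prefix of each CBC injection.
    for entry in parse_schedule("schedule"):  # placeholder path
        if entry.inj_type == 1:
            print(entry.gps_start, entry.prefix)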
Parameter estimation started. Results will appear here:
https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/ParameterEstimationModelSelection/O1_PE/HardwareInjs_12162015
Ping Salvo or John for questions/comments.
In loading the new code for the SR3 cage servo, I caused the guardian to go into error (because there is a self.counter that is initialized in main and used in run). This took us out of observing. I took the CAGE_SERVO guardian to manual and turned it off and back on so that it would initialize the counter. Then I attempted to INIT ISC_LOCK so that it would manage CAGE_SERVO again. The cage servo is managed by the DRMI guardian, though. I took DRMI to manual and hit INIT; this broke the lock for some reason that I don't understand at all.
Apologies.
TITLE: 12/16 EVE Shift: 00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: NLN just under 80Mpc
Outgoing Operator: Jim
Quick Summary:
Walked in to find the film crew in the middle of an interview, so I held off entering the Control Room for a while; Jim still had H1 running from the lock I started last night. Once I was able to squeeze into the Control Room, after a while, H1 had a lockloss, and commissioners noted that this was probably due to an "EPICS freeze". As far as I know, the symptom of an "EPICS freeze" is that all the StripTool traces flatline & thus we appear to lose the ability to control H1, leading to a lockloss. For this one, the flatline lasted on the order of 80 sec and then we had the lockloss. NOTE: people mentioned "EPICS freezes" last night as well, but they were shorter and we stayed locked.
Filming continues! :)
Dave, Sheila
At 00:47 UTC, we lost lock after a 45-second EPICS freeze. The attached screenshot shows that the cage servo stopped updating during the freeze, which is the intended behavior, since we modified the guardian (22754) to watch and not update the servo when PITMON does not update.
Dave looked up the guardian logs, and indeed the guardian does have the intended behavior. However, when the servo comes back on after being off this long, it puts out a large number. Now we have added a counter that gets incremented when PITMON has not updated, and cleared when it has. If PITMON has not updated in 16 cycles, the guardian will go to CAGE_SERVO_OFF, then should return to CAGE_SERVO_RUNNING and reinitialize the servo. This will happen once a second until the EPICS freeze is over. A sketch of this logic is below.
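A minimal Python sketch of the stale-PITMON counter logic described above (this is illustrative, not the actual CAGE_SERVO guardian code; the class name, method, and the way state jumps are expressed are simplified stand-ins):

    STALE_LIMIT = 16  # cycles without a PITMON update before cycling the servo

    class CageServoLogic:
        """Stand-in for the guardian run loop, called once per cycle (~1 Hz)."""

        def __init__(self):
            self.last_pitmon = None
            self.counter = 0

        def run_cycle(self, pitmon):
            """Return the state requested for this cycle, given the PITMON reading."""
            if pitmon == self.last_pitmon:
                # PITMON frozen (e.g. an EPICS freeze): don't update the servo,
                # just count the stale cycles.
                self.counter += 1
            else:
                # Fresh PITMON value: clear the counter.
                self.counter = 0
                self.last_pitmon = pitmon

            if self.counter >= STALE_LIMIT:
                # Frozen too long: go to CAGE_SERVO_OFF; the guardian then
                # returns to CAGE_SERVO_RUNNING and reinitializes the servo.
                self.counter = 0
                return "CAGE_SERVO_OFF"
            return "CAGE_SERVO_RUNNING"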
O1 days 88-89
model restarts logged for Tue 15/Dec/2015
2015_12_15 08:26 h1isietmx
2015_12_15 08:28 h1isietmy
2015_12_15 08:33 h1isiitmx
2015_12_15 08:38 h1isiitmy
2015_12_15 08:42 h1isibs
2015_12_15 08:49 h1isiham6
2015_12_15 08:50 h1isiham5
2015_12_15 08:52 h1isiham4
2015_12_15 08:55 h1isiham3
2015_12_15 09:02 h1isiham2
Maintenance day, new ISI code for both BSC and HAMS. No DAQ restart required.
model restarts logged for Mon 14/Dec/2015
No restarts reported
I'm preparing to do a set of hardware injection tests. I will update this aLog entry as injections are scheduled. I first need to svn up the repo to get the injection files and check that the latest version of tinj is installed.
I am beginning by scheduling one hardware injection. The schedule was updated with:
1134345924 1 1.0 coherentbbh10_1128678894_
tinj was enabled at 00:02 UTC at H1.
I did the following checks:
* I checked the first injection to make sure it was flagged in the segment database. I queried H1:ODC-INJECTION_CBC:2 and L1:ODC-INJECTION_CBC:2 and found that both returned the segment [1134345929,1134345931). This looks good; it is what I expect. (A scripted version of this kind of query is sketched at the end of this entry.)
* I checked the tinj.log file as the injection was being performed. It was logged as successful.
* I checked the TRANSIENT_OUT16 channel to confirm that the signal was injected. See attached plot.
Everything looks good, so I'm going to add the following to the schedule:
1134348928 1 1.0 coherentbbh11_1128678894_
1134350128 1 1.0 coherentbbh12_1128678894_
1134351328 1 1.0 coherentbbh13_1128678894_
1134352528 1 1.0 coherentbbh14_1128678894_
1134353728 1 1.0 coherentbbh15_1128678894_
1134354928 1 1.0 coherentbbh16_1128678894_
1134356128 1 1.0 coherentbbh17_1128678894_
1134357328 1 1.0 coherentbbh18_1128678894_
1134358528 1 1.0 coherentbbh19_1128678894_
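As referenced above, a sketch of how the segment-database check might be scripted with gwpy (assuming gwpy is available and the default segment-database server is reachable; the query window is illustrative):

    from gwpy.segments import DataQualityFlag

    # Query the injection flag around the first scheduled injection (GPS 1134345924).
    flag = DataQualityFlag.query('H1:ODC-INJECTION_CBC:2', 1134345900, 1134346000)

    # Print the active segments; we expect [1134345929, 1134345931).
    for seg in flag.active:
        print(int(seg[0]), int(seg[1]))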
LHO lost lock a couple of minutes before the injection scheduled for
1134348928 1 1.0 coherentbbh11_1128678894_
so this injection only went into L1. I've removed the lines
1134350128 1 1.0 coherentbbh12_1128678894_
1134351328 1 1.0 coherentbbh13_1128678894_
1134352528 1 1.0 coherentbbh14_1128678894_
from the schedule, since LHO is still trying to relock.
LHO lost lock again. The following line was then removed from the schedule file:
1134358528 1 1.0 coherentbbh19_1128678894_
Added the following lines to the schedule to re-do the injections that were skipped due to LHO losing lock:
1134359800 1 1.0 coherentbbh11_1128678894_
1134361000 1 1.0 coherentbbh12_1128678894_
1134362200 1 1.0 coherentbbh13_1128678894_
1134363400 1 1.0 coherentbbh14_1128678894_
1134364600 1 1.0 coherentbbh19_1128678894_
Just to recap, for tonight's tests the following injections are in the schedule:
1134345924 1 1.0 coherentbbh10_1128678894_
1134348928 1 1.0 coherentbbh11_1128678894_
1134353728 1 1.0 coherentbbh15_1128678894_
1134354928 1 1.0 coherentbbh16_1128678894_
1134356128 1 1.0 coherentbbh17_1128678894_
1134357328 1 1.0 coherentbbh18_1128678894_
1134359800 1 1.0 coherentbbh11_1128678894_
1134361000 1 1.0 coherentbbh12_1128678894_
1134362200 1 1.0 coherentbbh13_1128678894_
1134363400 1 1.0 coherentbbh14_1128678894_
1134364600 1 1.0 coherentbbh19_1128678894_
These injections are now complete.