I looked back at the first TCS X slewing that Kiwamu, Stefan, and I did at the start of August. It seems that during this slewing, there was no null of the IMC jitter coupling. On the other hand, we already know that the coupling passes through a null close to the beginning of each lock stretch (22208).
First, the caveats: (1) the ISS outer loop was off during this test, and (2) one can see that the shape and location of the jitter peaks are different from what we normally observe in DARM.
The first attachment is a normalized spectrogram of DARM error during the slewing. It can be seen that the jitter peaks around 250 and 350 Hz increase monotonically from start to finish.
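For reference, here is a minimal sketch of how such a normalized spectrogram can be made, assuming the DARM error time series has already been fetched into a NumPy array; the sample rate, segment length, and the normalization by the per-frequency median are assumptions, not necessarily what was used for the attached plot.

import numpy as np
from scipy import signal

def normalized_spectrogram(darm, fs=16384, nperseg=4 * 16384):
    # Spectrogram of DARM, normalized by the median PSD in each frequency bin.
    # A value of 1 means "typical" power at that frequency; rising jitter
    # peaks show up as values persistently above 1 toward the end of the test.
    f, t, Sxx = signal.spectrogram(darm, fs=fs, nperseg=nperseg,
                                   noverlap=nperseg // 2)
    median_psd = np.median(Sxx, axis=1, keepdims=True)  # per-frequency median
    return f, t, Sxx / median_psd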
As a check, I looked again at the complex coherence between the IMC WFS error signals and DARM at the beginning and the end of this test (at 01:19:00 and 03:19:00, 2015-08-03 Z). Unlike what we see at the start of each lock, there is no sign flip of the coupling:
This seems to indicate that thermal tuning of the X arm alone is not enough to minimize the coupling. Either the null that we see at the beginning of each lock cannot be reached by thermal tuning, or we need thermal compensation on both arms.
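As a reminder of what is meant by the complex coherence used above, here is a minimal sketch, assuming the IMC WFS error signal and DARM are available as equally sampled arrays; the sample rate and averaging length are placeholders.

import numpy as np
from scipy import signal

def complex_coherence(wfs, darm, fs=16384, nperseg=16 * 16384):
    # Complex coherence: CSD(wfs, darm) / sqrt(PSD(wfs) * PSD(darm)).
    # The magnitude is the ordinary coherence; the phase (sign) is what flips
    # when the jitter coupling passes through a null.
    f, Pxy = signal.csd(wfs, darm, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(wfs, fs=fs, nperseg=nperseg)
    _, Pyy = signal.welch(darm, fs=fs, nperseg=nperseg)
    return f, Pxy / np.sqrt(Pxx * Pyy)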
Attached are two weeks of data. No pressure fluctuations are seen, so the pumps are running well. Also shown on the plots is the Control Out from the VFD. Outside temperature trends appear to show up at EndY, and that is likely the reason for the large fluctuations on the L0 (CS) control (the facility HVAC adjusting). The daily and weekly glitches at EndY continue unabated.
Levels steady for several weeks now. No further maintenance required.
Motivated by our locking difficulties at the end of last week, I downloaded guardian state data for the last two weeks to look into our locklosses during the CARM offset reduction.
To start with the good news, the script that I use to do this automatically makes a few figures that tell us about our duty cycle, and we are doing well overall. As usual, I am only looking at the guardian state, and not considering any intent bits. My reasoning is that the hardest part is getting and keeping the IFO locked, and that is currently the important limit on our duty cycle.
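For illustration, a minimal sketch of the duty-cycle bookkeeping described above, assuming the guardian state is available as a sampled time series; the nominal low noise state number is an assumption, not necessarily what the real script uses.

import numpy as np

NOMINAL_LOW_NOISE = 600   # assumed ISC_LOCK state number for nominal low noise

def duty_cycle(guardian_state):
    # Fraction of samples for which the guardian reports nominal low noise.
    # Intent bits are deliberately ignored, as described above.
    locked = np.asarray(guardian_state) == NOMINAL_LOW_NOISE
    return locked.mean()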
The main weakness in our lock acquisition sequence now seems to be the early steps of CARM offset reduction, as we saw yesterday.
As requested, I've made a few plots which better show how individual locks contribute to our total low noise time. The first attached plot is a repeat of the histogram of lock times from above, with the lower panel showing the sum of the low noise time accumulated in all the locks in that bin of the histogram. The second attached plot shows the accumulated low noise time: there is a point for each lock stretch, with the length of the lock on the horizontal axis and the cumulative low noise time on the vertical axis.
The message of both plots is that we accumulate a lot of our time during a small number of long locks. The second plot shows that in the 3 longest locks (out of 33 total) we accumulated 133 hours of low noise time out of 312 total hours.
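A minimal sketch of how the second plot can be built from a list of per-lock low noise durations; the example numbers and plotting details are placeholders, not the actual script.

import numpy as np
import matplotlib.pyplot as plt

# Low noise time of each lock stretch, in hours (example values only).
lock_hours = np.array([0.5, 2.0, 1.2, 30.0, 45.0, 58.0])

# One point per lock: lock length on the horizontal axis, cumulative low
# noise time (accumulated from shortest to longest lock) on the vertical axis.
order = np.argsort(lock_hours)
cumulative = np.cumsum(lock_hours[order])

plt.plot(lock_hours[order], cumulative, '.')
plt.xlabel('Length of lock stretch [hours]')
plt.ylabel('Cumulative low noise time [hours]')
plt.show()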
I also had a look at relocking times for this period based on a request from Keita.
The minimum relocking time was 20 minutes (measured from the time of lockloss to the time that the guardian arrives in nominal low noise), the median was 53 minutes, and the maximum was 6 hours and 20 minutes (the long stretch of downtime on the evening of Oct 2nd is not included here).
A histogram is attached.
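A minimal sketch of the relocking-time calculation, assuming matched lists of lockloss times and the subsequent arrival times at nominal low noise (GPS seconds); the variable names and example values are hypothetical.

import numpy as np
import matplotlib.pyplot as plt

def relock_minutes(lockloss_gps, next_low_noise_gps):
    # Minutes from each lockloss to the next arrival at nominal low noise.
    return (np.asarray(next_low_noise_gps) - np.asarray(lockloss_gps)) / 60.0

# Example: histogram of relocking times in 10-minute bins.
times = relock_minutes([1127700000, 1127710000], [1127701200, 1127723000])
plt.hist(times, bins=np.arange(0, 400, 10))
plt.xlabel('Relocking time [minutes]')
plt.ylabel('Number of relocks')
plt.show()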
Link to the DQ shift page: DQ Shift page.
Summary:
* October 1 had locks lost doing injections looking for scattered light in the X end. Otherwise a nice set of data, with a duty cycle of ~80%.
* October 2 and 3 there were some problems locking: high winds, beam splitter optical lever glitching, and ALS X/Y problems locking. At one point we were unlocked for ~22 hours.
* October 4, recovered from the past two days and had some long locks with a duty cycle of ~80%.
I reset the PSL FE watchdog at 16:14 UTC (09:14 PDT).
TITLE: Oct 6 DAY Shift 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Commissioning
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: IFO in commissioning mode as Maintenance Day begins.
Expected Internet outage of about 15 minutes as one of our ISPs performs network maintenance this morning, between 0800 and 0900 Pacific (1500-1600 UTC).
Maintenance has been completed.
TITLE: 10/06 [OWL Shift]: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Locked. Commissioning.
SHIFT SUMMARY: Locked entire shift. One GRB alert. Went to commissioning at the end of the shift to allow Sheila to run injections.
INCOMING OPERATOR: Ed
ACTIVITY LOG:
11:08 - 11:18 UTC Stepped out of control room
14:18 UTC Went to commissioning to allow Sheila to run injections (WP 5528)
14:27 UTC Bubba driving to VPW
14:47 UTC Let Safety Clean truck through gate. Notified Bubba.
SUS ETMY saturating (Tue Oct 6 11:22:48 UTC)
SUS ETMY saturating (Tue Oct 6 13:54:13 UTC)
From watching the monitor it seemed that RF45 coherence may have risen with the DARM spectrum during these saturations.
17:20 UTC Stopped/restarted gracedb query script
07:42 - 07:44 UTC Stepped out of control room
09:55 UTC GRB alert. Notified LLO.
10:30 - 10:36 UTC Stepped out of control room
Still observing @ ~79 Mpc.
I saw on a terminal on the ops workstation that someone had logged into h1fescript0 and started the GraceDB query script without running it in screen. I ctrl-c terminated that script. I listed the current screen sessions (screen -ls) and saw two running. The one with PID 4403 had the printout from a crashed query db script. I restarted the script in that screen session and detached from it. It is now running in screen with PID 4403.
TITLE: 10/06 [OWL Shift]: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Observing @ ~78 Mpc.
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: From the cameras the lights are off in the LVEA, PSL enclosure, end X, end Y and mid X. I cannot tell if they are off at mid Y. Seismic in the 0.03 - 0.1 Hz band is around 0.015 um/s. Seismic in the 0.1 - 0.3 Hz band is around 0.1 um/s. Winds are less than ~5 mph.
TITLE: "10/05 [EVE Shift]: 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC"
STATE Of H1: Observing at ~75 Mpc for the past 9 hours. Range recently went up to ~78 Mpc for no apparent reason.
SUPPORT: Mike
SHIFT SUMMARY: Quiet shift. Minimal/nominal seismic activity. Wind below 5mph. GraceDB querying failed once.
INCOMING OPERATOR: Patrick
ACTIVITY LOG:
23:51 Chris started HW injection.
01:41 Injection done
04:00 Noticed GraceDB querying failure. Restarted the python script. Called Mike about GraceDB events that weren't labeled as INJ.
I was following up the LHO loud glitches from 26th September. I had a look at the transmitted power signals, as was done in alog 20395 for dust glitches. The attached pdf contains plots of band passed (10 Hz - 100 Hz) DARM, as well as the ASC-{X,Y}_TR_{A,B}_NSUM_OUT_DQ and SUS-ETMY_L3_MASTER_OUT_LL_DQ channels, for the 8 time instances when ETMY saturation was observed. The ETMY saturation times are given below:
1127380417.6250 1127380418.6875 1127384361.3750 1127384361.8125 1127389556.1250 1127389556.2500 1127403691.3750 1127403691.5000 1127409078.3750 1127409078.5625 1127418386.1875 1127418386.4375 1127423722.0000 1127423722.1250 1127424221.7500 1127424222.0000 1127427748.9375 1127427749.0625
It can be seen that the glitches also appear in the Y TR signals (e.g., ASC-Y_TR_{A,B}_NSUM_OUT_DQ) with large amplitude, except for the 4th one. The 4th glitch is not visible in ASC-Y_TR_{A,B}; in this particular case it is visible in the X QPDs, but much smaller than the other glitches are in Y. It also does not look much above the noise in the QPDs, and even the glitch shape in DARM seems different. In most of these cases DARM is going down and ASC-Y_TR_{A,B}_NSUM is going up initially. So it is interesting that for this glitch, in spite of the ETMY saturation, the glitch shape is completely different and nothing can be found in the Y QPDs.
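For reference, a minimal sketch of the 10 Hz - 100 Hz band-passing applied to the channels above, assuming the data around each saturation time have already been fetched at a known sample rate; the filter order and the zero-phase implementation are assumptions.

from scipy import signal

def bandpass_10_100(data, fs):
    # Zero-phase 10-100 Hz band-pass of a time series fetched around one of
    # the ETMY saturation times; fs is the channel sample rate (e.g. 16384 Hz
    # for DARM).
    sos = signal.butter(4, [10, 100], btype='bandpass', fs=fs, output='sos')
    return signal.sosfiltfilt(sos, data)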
Summary: I performed 8 coherent H1L1 CBC hardware injections. They were all successful.
Waveforms: Waveforms are in the hardware injection SVN: https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/${IFO}/coherentbbh*_1126259455_${IFO}.out
Waveform parameter files are in the hardware injection SVN: https://daqsvn.ligo-la.caltech.edu/svn/injection/hwinj/Details/Inspiral/coherentbbh*_1126259455.xml
Schedule: The following was appended to the schedule:
1128124325 1 1.0 coherentbbh1_1126259455_
1128125225 1 1.0 coherentbbh2_1126259455_
1128126125 1 1.0 coherentbbh3_1126259455_
1128127025 1 1.0 coherentbbh4_1126259455_
1128127925 1 1.0 coherentbbh5_1126259455_
1128128825 1 1.0 coherentbbh7_1126259455_
1128129725 1 1.0 coherentbbh8_1126259455_
1128130625 1 1.0 coherentbbh9_1126259455_
Recall we had the issue before where injections were being cancelled since they triggered an EM alert for testing. Peter S. documented what needed to be changed in aLog 22163. I reset ${IFO}:INJ-CAL_TINJ_PAUSE and ${IFO}:INJ-CAL_EXTTRIG_ALERT_TIME between injections so that they were not cancelled. It worked. The gracedb entries for these injections are H190051 to H190068. I've attached omegascans of the injections.
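For context, a minimal sketch of the kind of channel reset described above, using pyepics; the actual values to write are the ones documented in aLog 22163, and the values shown here (clearing the pause flag and the alert time) are assumptions for illustration only.

from epics import caput

def reset_tinj_channels(ifo='H1'):
    # Reset the injection pause/alert channels between injections so that a
    # test EM alert does not cancel the next scheduled injection.
    caput('%s:INJ-CAL_TINJ_PAUSE' % ifo, 0)            # assumed value
    caput('%s:INJ-CAL_EXTTRIG_ALERT_TIME' % ifo, 0)    # assumed value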
Since the current set of hardware injections is testing the injections themselves, not the human response to event candidates, I have changed the Approval Processor configuration to no longer treat hardware injections like real GW event candidates. That means that they will not get the H1OPS and L1OPS labels, and operators will not be presented with a sign-off box for them. However, with the CURRENT version of the external alert code (ext_alert.py), audible alerts will still sound in the control rooms. I am trying to arrange that during tomorrow's maintenance period we will update the ext_alert.py scripts so that the audible alerts sound only for triggers that operators are asked to sign-off on. After that change is made (hopefully tomorrow), please take any audible alert seriously as a genuine external-trigger alert or low-latency GW trigger alert. Exception: we are planning to set up weekly tests of the alert system that will happen during Tuesday maintenance periods; details to come later.
O1 days 16, 17
Saturday 3rd October 2015. Many unexpected restarts of h1fw0 due to failed SSD raid. No other restarts reported. Details in attached text file.
Sunday 4th October 2015. Many unexpected restarts of h1fw0 due to failed SSD raid. No other restarts reported. Details in attached text file.
I am starting some coherent H1L1 hardware injection tests. These are the 8 test injections that Adam and I did not get a chance to perform last Friday. I will add a comment with the schedule when they have been added. More details to follow later.
The schedule was appended with:
1128124325 1 1.0 coherentbbh1_1126259455_
1128125225 1 1.0 coherentbbh2_1126259455_
1128126125 1 1.0 coherentbbh3_1126259455_
1128127025 1 1.0 coherentbbh4_1126259455_
1128127925 1 1.0 coherentbbh5_1126259455_
1128128825 1 1.0 coherentbbh7_1126259455_
1128129725 1 1.0 coherentbbh8_1126259455_
1128130625 1 1.0 coherentbbh9_1126259455_
Last scheduled injection performed. More details later.