19:26UTC - 12:26PT, truck at receiving
19:37:35UTC - Commissioning, Praxair on site
Praxair takes about 2.5 hours, so expect to be in Commissioning until about 22:15UTC.
There is a second Praxair truck expected today as well.
19:44UTC - Richard, Filiberto, Mid-Y
19:45:55UTC - engaged FM2 on DHARD_Y, Sheila's filter
- heard the rumbling of the truck in the CR, and saw the range drop to 55Mpc just before engaging the filter
- apparently the truck had to turn around and go to the Y arm, which is the rumbling I heard
TITLE: IFO returns to Observe after multiple large earthquakes over the last 4 hours.
ASSISTANCE: Hugh (HAM5 SEI clearing all Guardian issues), Sheila (bounce-roll mode damping)
TIMELINE:
15:00UTC - start of shift, high ground motion
- SYS_DIAG telling me ISS diffracted power is too high - I reduced the slider. Diffracted power was 12 and went to 9.
15:47:12UTC - ground motion just coming down enough to relock, first DRMI lock since the earthquake in Canada
16:09:36UTC - IFO made it to Bounce Violin Mode Damping
16:19:37UTC - IFO unlocked
- A couple more DRMI locks that didn't survive.
- Four more 5-6 magnitude earthquakes come in and prevent locking.
18:04:31UTC - IFO ready to lock, and this starts the locking sequence that resulted in the IFO going to Low Noise
- Longer than usual time from ready to Low Noise; one reason was that the roll mode on ITMY was about 4, so with Sheila's help I started damping the roll mode manually and waited about 20 minutes for it to improve.
18:55:04UTC - IFO in Low Noise
18:58:57UTC - IFO in Observe
All buildings are beginning to respond to the lower outdoor temperatures so Bubba and I have turned on heaters in both end stations and the LVEA.
One stage of HC5 is now on in the LVEA. This will impact the input chambers the most. The response appears to be ~0.5 F.
The end stations have variac control, so these have been incremented from 4 mA (off) up to 8.5 mA.
TIME: 16:39UTC, 9:39PT
STATE OF THE IFO: Unlocked
EXPLANATION: The IFO did lock and made it past ENGAGE_ASC_PART3, but then another earthquake arrived and broke the lock; ground motion is high.
Video0's striptools have been modified and now have a red background, due to the DHARD_Y and DHARD_P channels.
When making a change to a screen, it's important to test it in all situations. Maybe this addition worked quite well during our long lock, but right now, with earthquakes and relocking, the change has rendered the striptools unusable.
FOM image attached.
This is a symptom of a rung up roll mode. Cheryl spent the last few minutes damping it (ITMY) and now these displays are back to looking normal.
Following Sheila's entry (alog 21708), I've tried to figure out which locklosses were due to earthquakes during the week of Sep 10th (ER8).
Out of the 22 locklosses seen this week, I've counted 6 locklosses due to EQs. Sheila counted 9, I'll double check my conclusions with her.
Magnitude | Location | Ground velocity at LHO (microns/s) | Status | Lockloss time (GPS)
5.9 | Alaska | 5.99 | Lockloss | 1125916264.5625
5.2 | Mexico | 1.08 | Lockloss | 1126042739.0625
5.7 | New Zealand | 1.16 | Lockloss | 1126127070.8125
6.3 | Indonesia | 3.20 | Lockloss | 1126427496.4375
6.1 | Papua New Guinea | 2.90 | Lockloss | 1126448905.3125
8.3 | Chile | 170 | Lockloss + SEI trip | 1126480080.5625
The ground velocities represent the maximum amplitude predicted by the Seismon software. These predictions could be inaccurate by a few percent.
Six examples are not enough for accurate statistics, but the behavior observed during ER8 doesn't contradict the conclusions drawn from ER7 (see DCC T1500230). My conclusions were:
Note: among the 16 remaining locklosses, some are due to high ground motion where an EQ is not the cause (wind? human activity?).
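The tally above can be sketched as a small script. The table is transcribed from this entry; the single-velocity threshold helper is purely illustrative (Seismon's actual predictions involve much more than one cut).

```python
# Hypothetical sketch: tally the EQ-lockloss table above and flag events whose
# Seismon-predicted peak ground velocity exceeded a chosen (illustrative) threshold.

EQ_LOCKLOSSES = [
    # (magnitude, location, predicted peak velocity [um/s], GPS lockloss time)
    (5.9, "Alaska",            5.99, 1125916264.5625),
    (5.2, "Mexico",            1.08, 1126042739.0625),
    (5.7, "New Zealand",       1.16, 1126127070.8125),
    (6.3, "Indonesia",         3.20, 1126427496.4375),
    (6.1, "Papua New Guinea",  2.90, 1126448905.3125),
    (8.3, "Chile",           170.00, 1126480080.5625),
]

def above_threshold(events, vel_um_s=1.0):
    """Return events whose predicted peak velocity exceeds vel_um_s (microns/s)."""
    return [e for e in events if e[2] > vel_um_s]

if __name__ == "__main__":
    loud = above_threshold(EQ_LOCKLOSSES, vel_um_s=2.0)
    print(f"{len(loud)} of {len(EQ_LOCKLOSSES)} EQ locklosses above 2.0 um/s")
```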
TITLE: 9/24 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: Unlocked. Waiting out Earthquake.
SUPPORT: None (& not needed)
SHIFT SUMMARY:
More action during this shift with the end of the long lock stretch (Jim saw no obvious reason for the lockloss, although he mentioned 45 MHz issues near the end of that lock). Went through an alignment and then had a 4 hr lock, followed by another unknown lockloss. While trying to bring H1 back I had some issues (noted earlier). Now handing off an unlocked H1 with seismic ringing down from the Vancouver EQ.
Shift Activities:
12:35 H1 Lockloss (nothing obvious: seismic quiet, and all the H1 strip tools showed nothing obvious before lockloss)
During Lock Acquisition (2nd locking attempt), had issues with ALS XARM:
1) You can either misalign SRM using the SUS_SRM guardian, or use a new state in ALIGN_IFO called SET_SUS_FOR_PRMI_W_ALS.
2) Request OFFLOAD_PRMI from the DRMI guardian.
3) Once PRMI locks adjust PRM and BS alignment until you get about 80 counts on POP90 and 50 counts on POP18.
4) Realign SRM by undoing whatever change you made in step 1.
5) Request DRMI_1F_OFFLOADED from the ISC_DRMI guardian.
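For reference, the five steps above can be sketched as a script in a Guardian-ish style. The `request`, `read`, and `adjust_alignment` helpers, the SUS_SRM state names, and the POP channel shorthand are all placeholders for the real operator actions and site interface, not actual code.

```python
# Hypothetical sketch of the PRMI recovery steps above. Helpers and state
# names are placeholders, not the real Guardian/ezca interface.

POP90_TARGET = 80   # counts, from step 3
POP18_TARGET = 50   # counts, from step 3

def prmi_recovery(request, read, adjust_alignment):
    """request(node, state): ask a Guardian node for a state.
    read(channel): return a demod signal level in counts.
    adjust_alignment(optic): tweak one optic's pointing (operator action)."""
    request("SUS_SRM", "MISALIGNED")            # step 1: misalign SRM
    request("ISC_DRMI", "OFFLOAD_PRMI")         # step 2: lock PRMI
    # step 3: tune PRM and BS until POP90/POP18 reach their targets
    while read("POP90") < POP90_TARGET or read("POP18") < POP18_TARGET:
        adjust_alignment("PRM")
        adjust_alignment("BS")
    request("SUS_SRM", "ALIGNED")               # step 4: restore SRM
    request("ISC_DRMI", "DRMI_1F_OFFLOADED")    # step 5: back to DRMI
```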
Waited for DRMI to lock again. Several times I had a Verbal Alarm that sounded like "Slip Mode" (?). What do we do about this? It looked ugly on the AS-AIR video. These alarms repeated a few times while I waited for DRMI, and then the Vancouver earthquake arrived!
After recovering the ISIs and ETMy, I took ISC_LOCK to LOCKING_ALS, but haven't made it there yet (the guardian log keeps saying "Waiting for arms to settle"). The 0.03-0.1Hz seismic band was ramping down, but flattened out at 0.3um/s (over an order of magnitude above normal quiet levels). Perhaps we should wait for this to come down to normal levels.
At 13:54 Verbal Alarm posted Earthquake.
Looks like a magnitude 5.5 from Vancouver Island at 13:49. (Terramon did not alert us to this one.) 0.03-0.1Hz seismic has increased 3 orders of magnitude (!) and 0.1-0.3Hz seismic has increased 2 orders of magnitude.
ETMy & all BSC ISI watchdogs have tripped.
H1 had a ~46 hr lock that ended toward the end of Jim's shift. He noted DRMI was looking pretty ugly. I waited on DRMI a little, but eventually gave Sheila's new PRMI procedure a try; POP18 & POP90 remained flat at zero and PRMI never locked. So I went through the Initial Alignment procedure (I went slow & took notes, since this is something we do rarely). Alignment and getting to NOMINAL_LOW_NOISE were straightforward.
Reason for Lockloss?
I saw nothing obvious from seismic signals when I walked in; everything was quiet! Jim mentioned the AS-AIR/OMC video spots took an odd move prior to lockloss. H1 was trending down over the 6 hrs before the lockloss, and Jim mentioned this could be the RF45 issue which has been coming up.
Initial Alignment Notes:
Accidental Slip Out Of Observation Mode: 9:04:15 - 9:04:33
While making the entry above and updating the Initial Alignment wiki, I wanted to look at a pull-down of an ISC Guardian node. Clicking on the ALS_YARM node pull-down dropped H1 out of Observation & into Commissioning. (After cursing, I quickly checked things and took H1 back to Observation.)
(I didn't bother touching the Observatory Mode button during this short slip.)
TITLE: 9/24 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: H1 in lock acquisition after ~45hr lock.
OUTGOING OPERATOR: Jim W.
SUPPORT: None
QUICK SUMMARY: DRMI looked ugly. Tried Sheila's new procedure for PRMI, but POP90 & POP18 were flat at zero. Proceeding with an Initial Alignment.
J. Kissel

I've taken new DARM open loop gain and PCALY to DARM transfer functions to validate the current calibration. During the PCALY to DARM transfer function, I take the transfer function from PCALY's RX PD (calibrated into [m] of ETMY motion) and the CAL-CS front end's DELTAL_EXTERNAL (calibrated into DARM [m], which -- since we're driving ETMY -- is identical to [m] of ETMY motion). These two different methods agree to within 4% and 3 [deg] over the 15 [Hz] to 1.2 [kHz] band. The calibration discrepancy expands to a whopping 9% and 4 [deg] if we look at frequencies between 5 and 15 [Hz] ;-). I think we're in great shape, boys and girls.

Details
--------------
- CAL-CS does not correct for any slow time dependence (optical gain, test mass actuation strength, etc.), so any agreement you see with the current interferometer is agreement with the reference model taken on Sep 10th 2015 (LHO aLOG 21385).
- In the previous measurement, Kiwamu had to fudge the phase by ~90 [us] to get the phase to agree. Now that we've updated the cycle delay between sensing and actuation to 7 [16 kHz clock cycles] to better approximate the high-frequency response of the AA, AI, and OMC DCPD signal chain, we no longer have to fudge the phase -- AND the phase between the two metrics agrees. NICE.
- I've made sure to turn OFF calibration lines during both of these measurements, but there should be ample data just before and just after with calibration lines ON, such that we can compare our results against theirs to help refine our estimates of systematic error.
- The measurements live in
  /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/DARMOLGTFs/2015-09-23_H1_DARM_OLGTF_7to1200Hz.xml
  /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/PCAL$/2015-09-23_PCALY2DARMTF_7to1200Hz.xml
  and have been committed to the CalSVN.

We'll process these results shortly, and perform a similar analysis as Darkhan has done in yesterday's aLOG 21827.
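The 4% / 3 [deg] agreement quoted above is simply a magnitude ratio and phase difference between two calibrated transfer functions. A minimal sketch of that comparison, using made-up stand-in arrays rather than the measured TFs:

```python
# Sketch: quantify agreement between two transfer functions (e.g. PCALY RX PD
# vs. DELTAL_EXTERNAL, both in meters of ETMY motion) as a percent magnitude
# deviation and a phase difference in degrees. Data below is illustrative only.
import numpy as np

def tf_discrepancy(tf_a, tf_b):
    """Return (percent magnitude deviation, phase difference in degrees)."""
    ratio = np.asarray(tf_a) / np.asarray(tf_b)
    mag_pct = 100.0 * np.abs(np.abs(ratio) - 1.0)
    phase_deg = np.degrees(np.angle(ratio))
    return mag_pct, phase_deg

# Example: two fake TFs differing by 4% in magnitude and 3 deg in phase
f = np.logspace(np.log10(15), np.log10(1200), 50)       # Hz
tf_b = 1e-18 * np.exp(-2j * np.pi * f * 400e-6)         # toy reference TF
tf_a = tf_b * 1.04 * np.exp(1j * np.deg2rad(3.0))       # 4% / 3 deg off
mag_pct, phase_deg = tf_discrepancy(tf_a, tf_b)
print(mag_pct.max(), phase_deg.max())                   # ~4.0 and ~3.0
```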
The parameter file for this measurement was committed to calibration SVN:
CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/DARMOLGTFs/H1DARMparams_1127083151.m
Attached plots show components of DARM loop TF and their residuals vs. DARM model for O1.
It looks better. Very nice.
By the way, I wanted to measure the open loop without the MICH or SRCL feedforward because I wanted to demonstrate that the unknown shape in the magnitude residual is not due to these feedforward corrections. Though this may be a crazy thought. Anyway, it would be great if you could run an open-loop measurement without the feedforwards at some point, just once.
L1 went out of lock. At H1 we turned off the intent bit and injected some hardware injections. The hardware injections were the same waveform that was injected on September 21. For more information about those injections see aLog entry 21759; for information about the waveform see aLog entry 21774. tinj was not used to do the injections. The commands to do the injections were:

awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
ezcawrite H1:CAL-INJ_TINJ_TYPE 1
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log2.txt

To my chagrin, the first two injections were labeled as burst injections. Taken from the awgstream log, the corresponding times are approximations of the injection times:

1127074640.002463000
1127074773.002417000
1127075235.002141000
1127075742.002100000

The expected SNR of the injection is ~18 without any scaling factor. I've attached omegascans of the injections. There is no sign of the "pre-glitch" that was seen on September 21.
Attached stdout of command line.
Neat! looks good.
Hi Chris, It looks like there is a 1s offset between the times you report and the rough coalescence time of the signal. Do you know if it is exactly 1s difference?
Yes, as John said, all of the end times of the waveforms are just about 1 second later than what's in the original post. I ran a version of my simple bandpass-filtered overlay script for these waveforms. Filtering both the model (the strain waveform injected into the system) and the data from 70-260 Hz, it overlays them, and also does a crude (non-optimal) matched filter to estimate the relative amplitude and time offset. The four plots attached are for the four injected signals; note that the first one was injected with a scale factor of 0.5 and is not "reconstructed" by my code very accurately. The others actually look rather good, with reasonably consistent amplitudes and time delays. Note that the sign of the signal came out correctly!
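The bandpass-plus-crude-matched-filter idea described above can be sketched as follows. The brick-wall FFT bandpass, the synthetic Gaussian-windowed tone, and the reduced sample rate are simplifications of my own for illustration, not the actual overlay script; only the 70-260 Hz band is taken from the entry.

```python
# Sketch: bandpass both the injected model and the data, then use the peak of
# the cross-correlation to estimate relative amplitude and time offset.
# All signals here are synthetic.
import numpy as np

FS = 2048  # Hz (reduced from 16384 to keep this toy example fast)

def bandpass(x, lo=70.0, hi=260.0, fs=FS):
    """Zero-phase brick-wall bandpass via the FFT."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def crude_matched_filter(model, data, fs=FS):
    """Estimate (relative amplitude, time offset [s]) of model within data."""
    m, d = bandpass(model, fs=fs), bandpass(data, fs=fs)
    corr = np.correlate(d, m, mode="full")
    k = np.argmax(np.abs(corr))
    lag = k - (len(m) - 1)          # samples of delay of model within data
    amp = corr[k] / np.dot(m, m)    # signed: recovers the waveform's sign
    return amp, lag / fs

# Synthetic check: a 150 Hz Gaussian-windowed tone, injected 1.0 s late
# into the data stream at half amplitude
t = np.arange(0, 2.0, 1.0 / FS)
model = np.sin(2 * np.pi * 150 * t) * np.exp(-((t - 1.0) ** 2) / 0.02)
data = np.zeros(4 * FS)
data[FS:FS + len(model)] = 0.5 * model
amp, dt = crude_matched_filter(model, data)
```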
I ran the daily BBH search with the injected template on the last two injections (1127075235 and 1127075742). For 1127075235, the recovered end time was 1127075235.986, the SNR was 20.42, the chi-squared was 29.17, and the newSNR was 19.19. For 1127075742, the recovered end time was 1127075742.986, the SNR was 20.04, the chi-squared was 35.07, and the newSNR was 19.19.
KW sees all the injections with the +1 sec delay, some of them in multiple frequency bands. From:

/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074624-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127074752-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075200-64.trg
/gds-h1/dmt/triggers/H-KW_RHOFT/H-KW_RHOFT-11270/H-KW_RHOFT-1127075712-64.trg

tcent | fcent | significance | channel
1127074640.979948 | 146 | 26.34 | H1_GDS-CALIB_STRAIN_32_2048
1127074774.015977 | 119 | 41.17 | H1_GDS-CALIB_STRAIN_8_128
1127074773.978134 | 165 | 104.42 | H1_GDS-CALIB_STRAIN_32_2048
1127075235.980545 | 199 | 136.82 | H1_GDS-CALIB_STRAIN_32_2048
1127075743.018279 | 102 | 74.87 | H1_GDS-CALIB_STRAIN_8_128
1127075742.982020 | 162 | 113.65 | H1_GDS-CALIB_STRAIN_32_2048

Omicron also sees them with the same delay. From:

/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074621-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127074771-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075221-30.xml
/home/reed.essick/Omicron/test/triggers/H-11270/H1:GDS-CALIB_STRAIN/H1-GDS_CALIB_STRAIN_Omicron-1127075731-30.xml

peak time | fcent | snr
1127074640.977539062 | 88.77163 | 6.3716
1127074773.983397960 | 648.78342 | 11.41002 <- surprisingly high fcent, could be due to clustering
1127075235.981445074 | 181.39816 | 13.09279
1127075742.983397960 | 181.39816 | 12.39437

LIB single-IFO jobs also found all the events.
Post-proc pages can be found here:

https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074640.98-0/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127074773.98-1/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075235.98-2/H1L1/H1/posplots.html
https://ldas-jobs.ligo.caltech.edu/~reed.essick/O1/2015_09_23-HWINJ/1127075742.98-3/H1L1/H1/posplots.html

All runs appear to have reasonable posteriors.
Here is how Omicron detects these injections:

https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074641/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127074774/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075236/
https://ldas-jobs.ligo-wa.caltech.edu/~frobinet/scans/hd/1127075743/

Here are the parameters measured by Omicron (loudest tile):

1127074640: t=1127074640.981, f=119.9 Hz, SNR=6.7
1127074773: t=1127074773.981, f=135.3 Hz, SNR=11.8
1127075235: t=1127075235.981, f=114.9 Hz, SNR=12.8
1127075742: t=1127075742.981, f=135.3 Hz, SNR=12.4
The BayesWave single-IFO (glitch-only) analysis recovers these injections with the following SNRs:

4640: 8.65535
4773: 19.2185
5235: 20.5258
5742: 20.1666

The results are posted here: https://ldas-jobs.ligo.caltech.edu/~meg.millhouse/O1/CBC_hwinj/
Folks have been complaining that the HAM5-ISI Rogue Excitation monitor is a pain (see e.g. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=21474). It looks like the coil-voltage-readback monitor for the V2 coil is busted somewhere, so the monitor sits around -500 counts all the time.

Hugh gave me the GPS time for a recent earthquake, and in the attached plot you can see the watchdog trip from normal (state 1) to damp-down (state 2) at T+3 seconds. The coil voltages come down pretty quickly. Then the WD goes to state 4 (full trip) about 3 seconds later, and the coil drive monitors (except V2) get quite small. The rogue excitation alarm goes off about 3 seconds after that, because the V2 monitor has not fallen to abs(Vmon) < +100 counts; it just sits at ~-500 counts all the time.

I'm pretty sure the V2 coil drive itself is working; otherwise the HAM5-ISI platform would act very poorly. I'm guessing the problem is somewhere in the readback chain.

Note - the channels I use for this are all EPICS channels, so the timing is a bit crude and the voltages are somewhat jumpy. The channels are:

H1:ISI-HAM5_ERRMON_ROGUE_EXC_ALARM
H1:ISI-HAM5_CDMON_H1_V_INMON
H1:ISI-HAM5_CDMON_H2_V_INMON
H1:ISI-HAM5_CDMON_H3_V_INMON
H1:ISI-HAM5_CDMON_V1_V_INMON
H1:ISI-HAM5_CDMON_V2_V_INMON
H1:ISI-HAM5_CDMON_V3_V_INMON
H1:ISI-HAM5_WD_MON_STATE_INMON

I've also attached screenshots of the "coil drive voltage too big" calculation and the "rogue excitation alarm generation" calculation from the HAM-ISI master model.
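For illustration, here is a simplified sketch of the alarm behavior as described above: after a full watchdog trip, the alarm fires if any coil-drive monitor readback is still at or above 100 counts. The function, the dict layout, and treating state 4 as the full trip are my own reading of the entry and the model screenshots, not the actual front-end implementation.

```python
# Simplified sketch of the rogue-excitation alarm logic (assumptions: state 4
# is the full watchdog trip; the check is abs(Vmon) < 100 counts per coil).

TRIP_STATE = 4
VMON_LIMIT = 100  # counts

def rogue_excitation_alarm(wd_state, vmons):
    """vmons: dict of coil name -> CDMON voltage readback (counts).
    After a full trip, any coil still reading |Vmon| >= limit raises the alarm."""
    if wd_state != TRIP_STATE:
        return False
    return any(abs(v) >= VMON_LIMIT for v in vmons.values())

# With a stuck V2 readback near -500 counts, the alarm fires after every trip:
readbacks = {"H1": 2, "H2": 0, "H3": 5, "V1": 3, "V2": -500, "V3": 1}
print(rogue_excitation_alarm(4, readbacks))   # True: V2 never comes down
```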
I've added integration issue 1127 on this https://services.ligo-wa.caltech.edu/integrationissues/show_bug.cgi?id=1127
I think I've got all the colors sorted out, and I'm pretty sure the H2 monitor isn't working either. At first I thought it was just sitting at zero on the plot, but I don't think so. At least it won't cause this problem, and since it's H2 it may help find the problem.