Dave Barker, TJ
The new version of ext_alert.py (the script that polls GraceDB and reports events) is now running here at LHO. It had been running at LLO while they tested it, and we got the OK from Duncan Macleod today. This new version alerts on "E" type events and now on "G" type events as well. The new events can be seen on the CAL_INJ_CONTROL.adl MEDM screen.
As a test, psinject is running excitations to the h1calcs model. Note that the actual injection is turned off using the hardware injection control MEDM screen. The psinject process is under control of Monit on h1hwinj1. This will be turned off at the conclusion of Tuesday maintenance.
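For reference, here is a minimal Python sketch of the kind of GraceDB polling that ext_alert.py performs, using the ligo-gracedb client. The query string and the E/G filtering below are illustrative assumptions for this sketch, not the actual ext_alert.py code:

    from ligo.gracedb.rest import GraceDb

    client = GraceDb()  # defaults to the production GraceDB server

    # Assumed query syntax: ask for events created in a recent window, then keep only
    # external ("E") and GW candidate ("G") events by the first letter of the graceid.
    for event in client.events('created: 2015-09-22 .. 2015-09-23'):
        graceid = event['graceid']
        if graceid.startswith(('E', 'G')):
            print(graceid, event.get('group'), event.get('created'))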
The following servers have been patched and rebooted:
- ldr.ligo-wa.caltech.edu
- ldas-pcdev2.ligo-wa.caltech.edu
The following servers were patched but were not and will not be rebooted today:
- detchar.ligo-wa.caltech.edu
- all compute nodes (node[1-270], gpu-node[1-5])
We saw a large glitch in the RF AM monitors with high coherence with DARM at around 16:13 UTC on Sept 22nd, while the IFO was locked and maintenance was happening. There were people in the LVEA (though not near the PSL) and people in the CER, but they were near the SEI and SUS racks, not the ISC racks. The first attached plot shows this on a 5 hour time scale, the second plot spans 5 days. This can be compared to Evan's plots of the last 3 weeks (21766).
Starting around 2015-09-22 17:51:00 Z we had a few minutes of what appeared to be full-on instability of the RFAM stabilization servo. The control signal spectrum was >10× the typical value from 10 to 100 Hz. [Edit: actually, it looks like glitching; see below.]
I tried turning the modulation index down by as much as 1.5 dB, but there was no clear effect.
I've attached time series as a zipped DTT xml for the driver channels (control signal, error signal, OOL sensor) during such a glitchy period.
In the control signal, all the glitches I looked at have the same characteristic shape (see the screenshot with the zoomed time series): an upward spike, a slight decay, a downward spike, and then a slower decay back to the nominal control signal level.
The control signal during the Γ-reduction attempts seems quite smooth; the 0.2-dB steps do not produce glitches.
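For anyone who wants to pull these time series outside of DTT, here is a minimal gwpy sketch. The channel name and GPS span are assumptions standing in for the RFAM control signal and the glitchy stretch, not the exact inputs used for the attachment:

    from gwpy.timeseries import TimeSeries

    # Approximate GPS span inside the glitchy stretch starting ~2015-09-22 17:51 UTC
    start, end = 1126979477, 1126979777   # ~5 minutes

    # Assumed channel name for the RFAM stabilization control signal
    ctrl = TimeSeries.get('H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ', start, end)

    # Plot the time series to look for the up-spike / decay / down-spike signature
    plot = ctrl.plot()
    plot.savefig('rf45_ctrl_glitches.png')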
The full report can be found on the detchar wiki, but here are the main highlights:
TITLE: Sept 22, Day Shift: starting 15:00UTC
STATE of H1: Locked/Commissioning, Range is 50+Mpc, trying to stay locked for filter tests
OUTGOING OPERATOR: Corey
QUICK SUMMARY: Maintenance started at 15:00UTC. Many people/trucks on site and people in VEAs and in the LVEA. Attempting to stay locked for ASC filter tests: Sheila.
- Jeff, forklift, dust monitor, mechanical building and high bay
- Christina - out buildings, no real cleaning, but checking for problems
- Ken, hammering 250m from end stations
- Richard, vault
- Bubba, 3IFO checking storage, and placing cones to protect elec.
- Jodi, 3IFO checks
- Pepsi, two trucks, on site
- Hanford Fire, hydrant checks
- Hugh, checking HEPI fluid levels,
- Jim, troubleshooting hardware injections, may glitch...
- Joe, taking Fire Dept. guy through LVEA and both end stations
- Mike, film crew, LVEA
- Praxair is onsite, though came through the open gate, so I don't know where they are
- Sheila, loading filter, started 15:20UTC
- Sprague, spraying perimeter
TITLE: 9/22 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: Observation Mode @ 70+Mpc, but getting ready for Maintenance
SUPPORT: None (& not needed)
SHIFT SUMMARY: Other than the Chilean aftershock, this was a stellar shift.
Incoming DAY Operator: Cheryl V.
SHIFT'S ACTIVITIES:
Just chatted with Brian and Jeremy at LLO about the start of their Maintenance time (which is usually 8am CT [13:00 UTC]), and they said they are going to let L1 stay in Observation Mode beyond their usual Maintenance start time so we can have more double-coincident time (they will let L1 drop out whenever activities/contractors bump it out of lock).
(For the record, LHO's Maintenance Time starts at 8am PDT [15:00 UTC].)
13:32 UTC Guimin just notified me they are out of lock and starting their Maintenance.
H1 in Observation Mode for 2.5+hrs @75Mpc. Looks like L1 just came back @60Mpc.
The 0.03-0.1 Hz seismic band has finally returned to 0.01 um/s (took about 3.5 hrs). Winds under 5 mph.
As mentioned earlier, Terramon reported an EQ arrival around 8:00 UTC....it was off by about 2 min, with a lockloss at 8:02 UTC. Here is evidence this was an earthquake lockloss:
Here's a timeline of how this event occurred:
TITLE: 9/22 OWL Shift: 7:00-15:00UTC (00:00-8:00PDT), all times posted in UTC
STATE OF H1: Jim handed over a nice boring shift (just how I like it!). Earth is not rumbling & winds are calm (below 10mph). He mentioned issues with RF45 they had at the beginning of the current lock (it sounds like if this occurs again, we ride it out? Stay in Observing Mode?)
OUTGOING OPERATOR: Jim W.
SUPPORT: Solo. (Sheila is on-call, if needed)
QUICK SUMMARY: H1 sailing along at 75Mpc.
Guardian has a YELLOW notification about HAM5 related to Master Switch.
A quick scan of DARM (compared to 9/10/15 reference):
Terramon Update: Reports R-waves (2.4 um/s) from the 6.1 Chile quake are due here in 16 min (8:00 UTC)! I see the 0.03-0.1 Hz band already starting to climb (peaking a little over an order of magnitude up, i.e. to 0.1 um/s), but we'll monitor to see how the ground shakes. Let's see how we ride through it.
Title: 9/21/2015 Eve Shift: 11:00-7:00UTC
State of H1: Observation Mode at 70+Mpc for the last 8hrs
Support: Evan, Jenne, Sheila, others in control room for most of shift
Shift Summary: Travis relocked right before I arrived, quiet aside from occasional ETMY glitches, ground and wind mostly quiet
Activity Log:
To ride out earthquakes better, we would like a boost in DHARD yaw (alog 21708). I exported the DHARD YAW OLG measurement posted in alog 20084, made a fit, and tried a few different boosts (plots attached).
I think a reasonable solution is to use a pair of complex poles at 0.35 Hz with a Q of 0.7, and a pair of complex zeros at 0.7 Hz with a Q of 1 (and of course a high frequency gain of 1). This gives us 12 dB more gain at DC than we have now, and we still have an unconditionally stable loop with 45 degrees of phase margin everywhere.
A foton design string that accomplishes this is
zpk([0.35+i*0.606218;0.35-i*0.606218],[0.25+i*0.244949;0.25-i*0.244949],9,"n")gain(0.444464)
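As a sanity check on those numbers, here is a minimal Python sketch (an illustration, not the foton design itself) showing how the complex root locations follow from the stated frequencies and Qs, and how the zero/pole frequency ratio gives the ~12 dB of extra DC gain:

    import numpy as np

    def complex_pair(f0, q):
        # Real and imaginary parts (in Hz) of a complex root pair
        # with resonant frequency f0 and quality factor q.
        real = f0 / (2.0 * q)
        imag = f0 * np.sqrt(1.0 - 1.0 / (4.0 * q**2))
        return real, imag

    print(complex_pair(0.7, 1.0))    # zeros: 0.35 +/- i*0.606218, matches the zpk string
    print(complex_pair(0.35, 0.7))   # poles: 0.25 +/- i*0.244949, matches the zpk string

    # DC gain of the boost with unity high-frequency gain: (f_zero / f_pole)**2
    print(20 * np.log10((0.7 / 0.35)**2))   # ~12 dB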
I don't want to save the filter right now because, as I learned earlier today, that will cause an error on the CDS overview until the filter is loaded, but there is an unsaved version open on opsws5. If anyone gets a chance to try this at the start of maintenance tomorrow it would be awesome. Any of the boosts currently in the DHARD yaw filter bank can be overwritten.
We tried this out this morning. I turned the filter on at 15:21, and it was on for several hours. The first screenshot shows error and control spectra with the boost on and off. As you would expect, there is a modest increase in the control signal at low frequencies and a bit more suppression of the error signal. The IFO was locked during maintenance activities (including Praxair deliveries), so there was a lot of noise in DARM. I tried on/off tests to see if the filter was causing the excess noise, and saw no evidence that it was.
We didn't get the earthquake I was hoping we would have during the maintenance window, but there was some large ground motion due to activities on site. The second attached screenshot shows a lockloss when the Chilean earthquake hit (21774), the time when I turned on the boost this morning, and the increased ground motion during maintenance day. The maintenance-day ground motion that we rode out with the boost on was 2-3 times higher than the EQ, but not all at the same time at all stations.
We turned the filter back off before going to observing mode, and Laura is taking a look to see if there was an impact on the glitch rate.
I took a look at an hour's worth of data after the calibration changes were stable and the filter was on (I sadly can't use much more time). I also chose a similar time period from this afternoon where things seemed to be running fine without the filter on. Attached are glitchgrams and trigger rate plots for the two periods. The trigger rate plots show data binned into 5-minute intervals.
When the filter was on we were in active commissioning, so the presence of high-SNR triggers is not so surprising. The increased glitch rate around 6 minutes is from Sheila performing some injections. Looking at the trigger rate plots, I am mainly looking for an overall change in the rate of low-SNR triggers (i.e. the blue dots), which contribute the majority of the background. In the glitchgram plots I am looking for a change of structure.
Based on the two time periods I have looked at, I would estimate the filter does not have a large impact on the background; however, I would like more stable time with the filter on to further confirm.
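For context, here is a minimal sketch of the 5-minute rate binning behind the trigger rate plots. The trigger file name and its single-column layout are assumptions for this sketch, not the actual detchar tooling:

    import numpy as np

    # Hypothetical file of trigger GPS times (one per line) for the hour of interest
    trigger_times = np.loadtxt('triggers.txt')
    start = trigger_times.min()
    end = start + 3600

    # Bin triggers into 5-minute (300 s) intervals and convert counts to a rate
    bins = np.arange(start, end + 300, 300)
    counts, _ = np.histogram(trigger_times, bins=bins)
    rate_hz = counts / 300.0

    for t0, r in zip(bins[:-1], rate_hz):
        print(f'{t0:.0f}  {r:.3f} Hz')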
Chris B., Jeff K.
We performed a series of single-IFO hardware injections at H1 as a test. The intent mode button was off at the time. All injections used the same waveform from aLog 21744. tinj was not used to do the injections. The command lines used to do the injections were:
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.2 -d -d
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 0.5 -d -d >> log.txt
awgstream H1:CAL-INJ_TRANSIENT_EXC 16384 H1-HWINJ_CBC-1126257410-12.txt 1.0 -d -d >> log.txt
I've attached the log (log.txt), which contains the standard output from running awgstream. Taken from the awgstream log, the approximate injection times are:
1126916005.002499000
1126916394.002471000
1126916649.002147000
1126916962.002220000
1126917729.002499000
The expected SNR of the waveform is ~18. The scale factors applied by awgstream should change the SNR by the same factors of 0.2 and 0.5 when used. I've attached time series of the INJ-CAL_HARDWARE and INJ-CAL_TRANSIENT channels. The injections did not reach the 200-count limit of the INJ_HARDWARE filter bank that we saw in the past. Watching the live noise curve in the control room, we did not notice any strong indication of ETMY saturation, which usually manifests itself as a rise in the bucket of the noise curve. But this needs follow-up. I've attached omega scans of the injections.
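As a quick sanity check on the expected strengths, assuming awgstream's scale factor multiplies the injected amplitude (and therefore the SNR) linearly, a short sketch:

    # Nominal SNR ~18, scaled linearly by each awgstream scale factor used above
    nominal_snr = 18.0
    scale_factors = [0.2, 0.5, 0.5, 0.5, 1.0]
    expected_snrs = [nominal_snr * s for s in scale_factors]
    print(expected_snrs)   # [3.6, 9.0, 9.0, 9.0, 18.0]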
It looks like there's a pre-injection glitch in the last spectrogram. Is that understood?
There were no ESD DAC overflows due to any of the injections. The only such overflow was at 1126916343, which was between injections. The glitch before the last injection is not understood. It does not correspond to the start of the waveform, which is at GPS time ___29.75. The glitch is at ___29.87 (see attached scan), and I can't find what feature in the waveform it might correspond to. It may be some feature in the inverse actuation filter. We should repeat this hardware injection to see if the glitch happens again. Subsequent injections should be done with a lower starting frequency of 15 Hz (this one started at 30 Hz), to make sure there are no startup effects. This will only make the injection about 3 seconds longer. In the above, I'm assuming that the hardware injection is always synchronized to the GPS second, so that features in the strain file correspond exactly to what is injected, with just an integer offset. I confirmed that by looking at the injection channel, but someone should correct me if the injection code ever applies non-integer offsets.
If you run awgstream without specifying a start time, it chooses a start time on an exact integer GPS second. (On the other hand, if you DO specify a start time, you can give it a non-integer GPS time and it will start the injection on the closest 16384 Hz sample to that time.)
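Under that integer-second assumption, here is a minimal sketch of how one could map a glitch GPS time back to a sample in the injected strain file and inspect the waveform there. The GPS values below are placeholders, not the actual injection or glitch times:

    import numpy as np

    SAMPLE_RATE = 16384  # Hz, matching the awgstream calls above

    def waveform_index(glitch_gps, injection_start_gps):
        # Sample index into the strain file for a given glitch time, assuming the
        # injection starts exactly on an integer GPS second.
        return int(round((glitch_gps - injection_start_gps) * SAMPLE_RATE))

    # Assumed single-column file of strain samples at SAMPLE_RATE
    strain = np.loadtxt('H1-HWINJ_CBC-1126257410-12.txt')

    # Placeholder times: an integer-second injection start and a glitch 0.12 s later
    idx = waveform_index(glitch_gps=1126916000.12, injection_start_gps=1126916000)
    segment = strain[max(idx - 100, 0):idx + 100]
    print(idx, segment.min(), segment.max())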
Note that these CBC injections were recorded by ODC as Burst injections (e.g., see https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150922/plots/H1-ALL_893A96_ODC-1126915217-86400.png) because the CAL-INJ_TINJ_TYPE channel was left at its previous setting, evidently equal to 2.
I completed LALInference follow-up of these events, linked from https://www.lsc-group.phys.uwm.edu/ligovirgo/cbcnote/aLIGOaVirgo/150827092943PEO1%20parameter%20estimation%20procedure#Hardware_Injections
I took higher resolution spectra of the supposed 41 Hz HSTS roll mode tonight (see first attachment). There are 2 peaks; however, only the first peak has any coherence with anything. The 2 peaks are at:
40.9375 Hz (lots of DARM-PRC-SRC coherence)
41.0117 Hz (no coherence with anything I looked at)
The second attachment shows the set of 41 Hz peaks in 3 of the last lock stretches. I attempted to use the ASC signals to check for coherence with DARM (see the coherence-check sketch below), but:
- the OSEM sensors on the HSTS suspensions are too noisy, so they see no coherence
- there is coherence with BOTH SRC and PRC, unfortunately, so it is hard to pinpoint where it comes from
- there is coherence with SRC1, which talks to SRM - is it cross-coupling through ASC/LSC?
- there is no coherence with SRC2, likely because it is a noisy detector (AS DC), so we can't tell whether the SRM and SR2 combination that it talks to contributes to the 41 Hz
- PRC1 not plotted, no ASC drive there, but I checked that there is no coherence there
- PRC2 goes to PR3, so while there is coherence there (not plotted), it is likely cross-coupling from SRC to PRC
- IMC_TRANS_P not plotted, but I saw no coherence; again, perhaps too noisy a detector?
This is also close to the third harmonic of the roll modes, as seen at 41.25 Hz at LLO (alog 20737), though this is probably only relevant to highly excited roll modes.
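For anyone repeating the coherence checks above, here is a minimal gwpy sketch of a DARM/ASC coherence measurement around 41 Hz. The channel names and GPS span are assumptions standing in for the ones actually used, not a record of the attached measurement:

    from gwpy.timeseries import TimeSeries

    # Placeholder GPS span during a locked stretch
    start, end = 1126900000, 1126901024

    # Assumed channel names; substitute the DARM and ASC channels of interest.
    # Resample both to a common rate (Nyquist well above 41 Hz) before coherence.
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end).resample(256)
    asc = TimeSeries.get('H1:ASC-SRC1_P_OUT_DQ', start, end).resample(256)

    # Long FFTs so the 40.94 Hz and 41.01 Hz peaks are resolved separately
    coh = darm.coherence(asc, fftlength=64, overlap=32)

    # Inspect the band around 41 Hz
    band = coh.crop(40.5, 41.5)
    print(band.max())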
The 40.9375 Hz mode is consistent with the PR2 M2 to M2 TFs, where a 40.93 +- 0.01 Hz mode was seen. See log 21741.
And just to clarify: this is not a third harmonic. It is the third (and highest) of the three roll modes of the HAM small triple suspension (HSTS).