There is a danger in using DTT together with NDS2, as described here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22128
When you look at data for a channel from the past, and the sampling rate of that channel was changed at some point, DTT can get confused and output a totally bogus calculation. If you're only looking at a spectrum, it's just a matter of a wrong frequency axis, but if you look at the coherence between a channel with this problem and a channel without it, the coherence is totally gone. I was recently hit by this behavior again and spent a day figuring it out.
In the first attachment, I was looking at the coherence between PEM magnetometers and DARM using DTT. On the left half of the bottom row, you can see that DARM and H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ are sometimes coherent with each other (BTW this cannot be magnetic coupling into HAM6, because the level of magnetic noise is too small according to Robert), but the individual X, Y and Z channels (top row right half, middle row) don't show any coherence. The strange thing is that QUAD_SUM is made inside the frontend by adding the squares of X, Y, and Z.
In the second attachment, I did the same measurement using matlab, and the X coherence is actually larger than QUAD_SUM. This is not as mysterious as the DTT plot made it seem.
The difference, it turns out, is that the sampling rate of the X, Y and Z channels was increased from 4096Hz to 8192Hz at some point (but that is not the case for SUM). DTT cannot handle this and assumes they were always 4096Hz. If you look at the spectrum of, say, the X channel in DTT, you'll see that the first mains line peak is at 30Hz instead of 60Hz.
This is really annoying.
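One way to guard against this is to check the sample rate that NDS2 actually reports for the epoch in question before trusting any cross-channel calculation. Here is a minimal sketch assuming the gwpy client is available; the GPS times are illustrative only and this is not the matlab code used for the second attachment:

```python
# Sanity-check the sample rates NDS2 actually reports, then compute
# coherence on the delivered data, independent of DTT's assumptions.
# GPS times below are illustrative only.
from gwpy.timeseries import TimeSeries

start, end = 1136700000, 1136700600  # example 10-minute stretch
darm = TimeSeries.fetch('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
mag = TimeSeries.fetch('H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ', start, end)

# If DTT is assuming the wrong rate, these will disagree with what DTT shows.
print('DARM sample rate:', darm.sample_rate)
print('MAG  sample rate:', mag.sample_rate)

# Match the rates explicitly before computing coherence.
darm = darm.resample(mag.sample_rate)
coh = darm.coherence(mag, fftlength=10)
print('peak coherence:', coh.max())
```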
After settling down from the problems during the first half of the shift, we are in Observing mode. General environmental conditions are good. At this time there are no apparent problems to report.
Conclusion:
We can account for 0.25% of the discrepancy measured by Jeff. If we also consider that the Foton filter values for the H1 Y-end Pcal have been incorrect by 1.5% since the beginning of the run, due to a change in the measured optical efficiency, we believe this should resolve the 1.8% discrepancy measured recently. The current calibration factors in the front-end filters use a measurement from May 2015, and ETMY was misaligned during that measurement; the current hypothesis is that the ETM alignment during the end-station calibration measurements can affect the results, so this could have changed the amount of light reaching the receiver module. Separately evaluating the two epochs could give a better measure of the systematic uncertainties (the difference between the front-end filter values and the correct values) in each epoch, rather than a combined uncertainty.
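For the bookkeeping, assuming the two contributions simply add: 0.25% + 1.5% = 1.75%, which is consistent with the ~1.8% discrepancy quoted above.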
These measurements should be repeated for L1.
Dave and Jim corrected the problem with the WD reset. Did an initial alignment and relocked with no apparent problems. Power at 21.4W, range at 80Mpc. Cleared a few SDF issues and set the intent bit to Observing.
Transition Summary: Title: 01/14/2016, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: IFO unlocked and OMC WD tripped. Working on fixing the WD trips (see Dave's aLOG), initial alignment, and relocking
Outgoing Operator: TJ
Jeff K, Jeff B. , Jenne, TJ, Jim, Dave:
Around 4pm PST TJ reported that the OMC had tripped and the watchdog could not be untripped. Jeff K. recommended a model restart. Unfortunately, due to a communication problem, we first mistakenly restarted the OMC model on the LSC front end (sorry OMC). Then we restarted the correct SUS-OMC model on SUSH56. This did not fix it. We then restarted all the models on SUSH56 (including the IOP). This did not fix it either. We then stopped all models and started only the IOP and SUS-SRM to do further debugging (in the meantime the SWWD on the IOP had tripped SEI for HAM5 and HAM6). After some debugging we found that the Perl script sus/common/scripts/wdreset_all.pl was throwing an error about not finding the Perl CA library. Jim tracked this down to a missing CaTools.pm perl module in the userapps/release/guardian directory. It turns out this file was removed from the SVN repository way back on 2nd March 2015, and the LHO working directory was only updated this afternoon by Jenne and TJ. This all nicely ties in with the watchdog resets working last night but not this afternoon.
In the meantime we had manually reset the watchdogs for SUS-SRM/SR3/OMC and SEI HAM5,6, and set the SDF back to OBSERVE for SUSH56IOP, SUSSRM/SR3/OMC and OMC.
For now we have manually copied the CaTools.pm file into userapps/release/sus/common/scripts to get the watchdog reset script working again.
This raises an FRS:
A perl module which is used by the watchdog systems has been deprecated. The watchdog system should be changed to no longer use PERL and instead use PYTHON (or perhaps BASH for exceptionally simple scripts).
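As an illustration of what a Python replacement might look like, here is a minimal sketch assuming pyepics is installed; the PV name pattern and suspension list are hypothetical and would need to be taken from the existing wdreset_all.pl:

```python
# Minimal sketch of a Python watchdog-reset helper, assuming pyepics.
# The PV name pattern below is hypothetical; the real reset channels
# should be copied from sus/common/scripts/wdreset_all.pl.
from epics import caput

SUSPENSIONS = ['SRM', 'SR3', 'OMC']  # example list of suspensions to reset

def reset_watchdogs(ifo='H1', suspensions=SUSPENSIONS):
    for sus in suspensions:
        pv = f'{ifo}:SUS-{sus}_WD_RESET'  # hypothetical PV name
        caput(pv, 1, wait=True)
        print(f'reset requested: {pv}')

if __name__ == '__main__':
    reset_watchdogs()
```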
We experienced a seemingly identical occurrence of this issue at LLO last Wednesday (see LLO aLOG entry 24156). However, in addition to the SUS/SEI watchdog reset scripts, our initial alignment script was also affected, since it has Perl dependencies. It is still unknown how the symbolic link to CaTools.pm became broken at LLO, see #4180.
Stuart, it was broken because I updated the same folder when I was visiting LLO. I am at fault for the CaTools.pm links being broken at both sites, though I had no idea that simply updating the SVN could cause this.
Thanks for shedding light on this mystery! I would suspect that svn'ing up pushed the changes to deprecate the Perl module sooner than was intended.
Last night's burst injections were scheduled for times when H1 was down, so we're attempting to do these again. This time I've scheduled 5 lots, the first lot starting at 22:00 CT (06:00 UTC), spaced 2 hours apart, with the last lot starting at 06:00 CT (14:00 UTC). Each injection is spaced 20 minutes apart. We're also including a BNS CBC injection in between the 5 lots of burst injections. All up there will be 30 injections, one every 20 minutes, starting at 22:00 CT (06:00 UTC) and ending at 07:40 CT (15:40 UTC). I'll also mention that transient hardware injections only go ahead when the IFO is in observation mode, so they won't interfere with any PEM measurements that may be happening at the time. Here is the updated schedule:
1136872817 2 1.0 burst_GPS_76.259_
1136874017 2 1.0 burst_GPS_76.262_
1136875217 2 1.0 burst_GPS_76.263_
1136876417 2 1.0 burst_GPS_76.264_
1136877617 2 1.0 burst_GPS_76.266_
1136878817 1 1.0 coherentbns1_1135135335_
1136880017 2 1.0 burst_GPS_76.259_
1136881217 2 1.0 burst_GPS_76.262_
1136882417 2 1.0 burst_GPS_76.263_
1136883617 2 1.0 burst_GPS_76.264_
1136884817 2 1.0 burst_GPS_76.266_
1136886017 1 1.0 coherentbns1_1135135335_
1136887217 2 1.0 burst_GPS_76.259_
1136888417 2 1.0 burst_GPS_76.262_
1136889617 2 1.0 burst_GPS_76.263_
1136890817 2 1.0 burst_GPS_76.264_
1136892017 2 1.0 burst_GPS_76.266_
1136893217 1 1.0 coherentbns1_1135135335_
1136894417 2 1.0 burst_GPS_76.259_
1136895617 2 1.0 burst_GPS_76.262_
1136896817 2 1.0 burst_GPS_76.263_
1136898017 2 1.0 burst_GPS_76.264_
1136899217 2 1.0 burst_GPS_76.266_
1136900417 1 1.0 coherentbns1_1135135335_
1136901617 2 1.0 burst_GPS_76.259_
1136902817 2 1.0 burst_GPS_76.262_
1136904017 2 1.0 burst_GPS_76.263_
1136905217 2 1.0 burst_GPS_76.264_
1136906417 2 1.0 burst_GPS_76.266_
1136907617 1 1.0 coherentbns1_1135135335_
06:00 UTC = 00:00 CT = 22:00 PT
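For anyone double-checking the schedule, the GPS start times can be converted to UTC with, for example, astropy (a quick sketch, assuming astropy is installed):

```python
# Convert a few of the scheduled GPS times to UTC as a cross-check.
from astropy.time import Time

for gps in (1136872817, 1136878817, 1136907617):
    print(gps, Time(gps, format='gps').utc.iso)
```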
TITLE: 1/14 Day Shift: 16:00-00:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Calibration ongoing
SHIFT SUMMARY: Calibration work for the majority of the day. Lockloss at 23:15 tripped the OMC WatchDog. It would not untrip so Dave had to restart the IOP, but this also tripped HAM5/6. Now SRM will not UNtrip. Dave and Jeff B are currently working to figure it out.
INCOMING OPERATOR: Jeff B
ACTIVITY LOG:
OMC WD would not UNtrip; Dave had to restart IOPSUSH56 to clear this. Restarting this also tripped HAM5,6.
We are currently untripping everything.
The SSD RAID for h1tw0 has failed, so h1tw0 is down until we can get it fixed. FRS 4228
For operators, you will notice that h1tw0 will be WHITE-boxed on the DAQ Detail medm (which you should be checking every shift in the Ops Checksheet).
I reduced one heater, HC1B, from 10mA to 9mA control current at 1:47 local time.
I have revamped my initial alignment lazy script and made a new medm to help operators and commissioners bring up what they need a bit faster.
The Script:
Location: /userapps/.../isc/h1/scripts/Init_align_lazy.py
Action: This will bring up 5 StripTools (XARM_GREEN_WFS.stp, YARM_GREEN_WFS.stp, PITCH_ASC_INITIAL_ALIGNMENT.stp, YAW_ASC_INITIAL_ALIGNMENT.stp, initall_alignment.stp) as well as the new medm I made (INIT_ALIGN.adl). The script will arrange them on the left monitor in a 3x2 grid. They can then be moved if so desired. If any of these windows are open in another workspace, the window manager will get confused and move the other one, so keep that in mind if they aren't in their proper positions.
I have aliased this for "ops" as 'initial_alignment'.
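For reference, the launch step amounts to something like the following sketch (illustrative only, not the actual script; the real window placement in the 3x2 grid is omitted here):

```python
# Rough sketch of the launch step: open the StripTool templates and the
# new medm screen. File paths are abbreviated; the real script also
# positions the windows on the left monitor.
import subprocess

STRIPTOOLS = [
    'XARM_GREEN_WFS.stp',
    'YARM_GREEN_WFS.stp',
    'PITCH_ASC_INITIAL_ALIGNMENT.stp',
    'YAW_ASC_INITIAL_ALIGNMENT.stp',
    'initall_alignment.stp',
]

for stp in STRIPTOOLS:
    subprocess.Popen(['StripTool', stp])

subprocess.Popen(['medm', '-x', 'INIT_ALIGN.adl'])
```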
The medm:
Location: /userapps/.../isc/h1/medm/INIT_ALIGN.adl
Combines the ALIGN_IFO, X and Y ALS Guardians with the alignment sliders for ETMX, ETMY, ITMX, ITMY, BS, and PR3. These are the most commonly adjusted optics for IA.
I have not linked this medm off of the sitemap yet, but if/when I do it will most likely be under OPS.
Comments and questions are always welcome.
Jenne noticed the control room symptoms of the ETMX ISI starting to ring up (H1:IMC-F_OUT16 on our Tidal.stp will begin to oscillate [shot attached], along with the ASC control signals). I brought up the template to watch it and sure enough, ISI ETMX X was ringing up. I switched the X direction blend to the 90mHz and it immediately settled down.
I'm attaching screen shots of the environment control room tools to hopefully help.
TITLE: 1/14 Day Shift 16:00-00:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Observing at 78Mpc for 2hrs
OUTGOING OPERATOR: Travis
QUICK SUMMARY: wind <10mph, useism 0.4um/s, CWinj running. More calibration measurements to come today.
Title: 1/14 Owl Shift 8:00-16:00 UTC (0:00-8:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: Unlocked at the beginning of my shift. After an IA, it locked up pretty quickly. Then there was a lockloss due to an EQ. After the EQ ringdown, we stumbled through the CARM section a few times before getting back to NLN.
Incoming operator: TJ
Activity log:
9:27 Locked at NLN
9:30 cleared ETMx TIM error
9:32 Observing
10:06 Set to Commissioning to tweak TMS in an attempt to get POP power higher (it wasn't drifting down, just not as high as usual, 15800). Unsuccessful.
10:13 Observing
11:42 Lockloss. EQ?
13:50 Locked NLN, attempting to tweak POP power again
14:08 Another unsuccessful POP attempt. I did however manage to ring up the ASC loops like crazy! Waiting for them to ring down.
14:21 Observing
Finally back to Observing after a few failed attempts.
According to our seismometers, it appears to be an EQ. The only thing showing on Terramon or USGS at the moment is a 5.1 near Wallis and Futuna, with Terramon predicting 0.17 um/s R-wave. However, our seismos are already at ~1 um/s a few minutes before the predicted arrival time for this EQ.
After running through an initial alignment, we are back to Observing. The issue with PRM align mentioned in Jeff's summary seems to have been due to PRM alignment being far off (hundreds of urads). I restored them to earlier values and it locked without issue.