J. Kissel After a long and arduous journey, we're finally ready to put the brand new H1 SUS OPO suspension under vacuum. All close-out TFs look acceptable, as do spectra of the sensors. All OSEMs are centered. SUS is ready for doors.
Well done!
J. Kissel, T. Shaffer TJ scanned over the OMs this morning, looking at and adjusting eddy current damping magnets and earthquake stops on all HTTSs in HAM6 -- OM1, OM2, OM3, and ZM1. Then he took the full suite of transfer functions to confirm all were free of rubbing. They're all free of rubbing. ZM1 still shows a bit of abnormal dynamics in its yaw-to-yaw DOF (abnormal w.r.t. all other HTTSs, showing a second resonance in yaw other than the expected primary mode, with the primary mode a bit lower in frequency than expected), but it will work well enough. I've processed TJ's results, and they're attached. Also, I've checked the OSEM centering -- looks good. Also, also, checked the high-frequency spectra -- looks good enough. Interestingly (is it really? do we wanna know? do we care enough to fix it?), ZM1 shows many more lines, junk, and ~6-7 kHz humps where the OMs do not. Fil & the CDS crew have yet to perform ground loop checks, so they may identify and fix a problem or two that alleviates this issue. Regardless, I decree the HAM6 HTTSs are good enough for us to close doors.
Alexei, Dan Brown
The design spec for the OMC has an astigmatic eigenmode. This is generally ignored when analyzing mode scans, since the finesse of the OMC is not large enough to resolve the separate second order peaks of the x and y eigenmodes, which are assumed to be close enough that they basically sit on top of each other.
Recent independent FINESSE models by Dan and me (mine is in the IFOSIM git) have found that the mismatch (as estimated by the ratio of the second order peak to the zeroth order peak [A2/A0]) is always underestimated by ~18.6% (i.e. a factor of 0.814) for any mismatch we put in.
My current working model is that the small separation between the x and y peaks causes the second order peak to appear smaller and wider than it would if the OMC eigenmode were not astigmatic. Therefore, in an ideal environment (no misalignments, perfect input Gaussian beam), the mismatch computed from an OMC mode scan will ALWAYS UNDERESTIMATE the actual mismatch between the beam and the cavity eigenmode by a constant factor.
Figure 1 demonstrates the effect. Lx(fwhm, 2*delta_fx) and Ly(fwhm, 2*delta_fy) are Lorentzians (corresponding to the second order resonances of each axis) with a FWHM and offset defined by the design OMC parameters:
fwhm = 643943 Hz
delta_fx = 5.813185e+07 Hz (mode spacing of xaxis modes)
delta_fy = 5.797750e+07 Hz (mode spacing of yaxis modes)
The height of the Lx+Ly peak is 18.69% (a factor of 0.814) smaller than the height of 2*Lx (what it would be without astigmatism).
To counteract this effect, it is sufficient to multiply the measured second order peak height by 1.23.
This factor can be computed by evaluating the ratio of 2*Lx at its center to Lx + Ly at the midpoint between their centers, which gives (x^2 + 1), where x = |2*delta_fx - 2*delta_fy| / fwhm.
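As a quick numerical sanity check (a sketch only, not the full derivation), the factor can be reproduced directly from the design parameters quoted above:

import numpy as np

fwhm     = 643943.0      # Hz
delta_fx = 5.813185e7    # Hz, x-axis second order peak sits at 2*delta_fx
delta_fy = 5.797750e7    # Hz, y-axis second order peak sits at 2*delta_fy

def lorentzian(f, f0, fwhm):
    # unit-height Lorentzian centered at f0
    return 1.0 / (1.0 + (2.0 * (f - f0) / fwhm)**2)

f = np.linspace(2*delta_fy - 5*fwhm, 2*delta_fx + 5*fwhm, 200001)
peak_sum = (lorentzian(f, 2*delta_fx, fwhm) + lorentzian(f, 2*delta_fy, fwhm)).max()
print('astigmatic peak / non-astigmatic peak = %.4f' % (peak_sum / 2.0))  # ~0.813

x = abs(2*delta_fx - 2*delta_fy) / fwhm
print('correction factor x**2 + 1 = %.3f' % (x**2 + 1))                   # ~1.23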
The complete derivation of this will be posted up on the DCC at some point in the near future.
Please note that this is only meant to be a first order approximation to what is going on. It breaks down for tiny mismatches (10^-4 and smaller), as the peak location stops being at the midpoint between Lx and Ly. If the cavity astigmatism or finesse is high enough to resolve the individual axes, then this model is also invalid. But in this particular case of the OMC it should hold, so my recommendation is that future mismatch calculations (the ones computed from A2/A0) from OMC scans be scaled by a factor of 1.23 to avoid underestimating the actual mismatch.
The DCC document containing the full derivation is at T1800191. Comments are welcome.
Both PT-423 and PT-424 tripped and have been reset. Expect alarms until the cold cathode gauge comes back on.
Gauges tripped again, including PT-410 this time. This is due to APS contractor work installing security system. FRS tickets filed.
FYI -
We've updated the ISO_RAMP_OFF.c code, and the ISI model update should be ready to install again on site - with caution.
The previous attempt (see alog 41368 and the comments thereto) was a complete failure, and we can't figure out why. We updated the c-code anyway to eliminate the chance of divide-by-zero errors (good practice, at least) and are glad that Jonathan and Dave pointed this out. The new code has the same name and is saved in userapps revision 17231; see SEI log 1338 for testing notes. If you want to be sure you have the correct version, look near the top of the file and you will see the line
* v2 - April 2018 - remove the div/0 issue and cleanup - BTL, Dane Stocks.
As discussed with Hugh - we cannot duplicate the errors here on campus, and we cannot explain them. The c-code update seems like a good idea, but as far as we can tell, it shouldn't make any difference - so-
please do be careful and continue to make good notes so we can figure out what's up.
-Brian
WP 7505 Updated the code on h0vaclx to change PT110 from a BPG402 Inficon gauge to a BCG450 Inficon gauge. DAQ has been restarted.
DAQ restarted at 12:45PDT for h0vaclx vacuum controls system changes for the Y0 gauge. Channel removal/additions:
WP 7504
Field cabling for SQZT6 has been disconnected. The fibers were pulled back on top of HAM5. RF and power cables are by the HAM5 door. PZT and fast shutter high voltage power supplies have been powered off. The interlock cable for the HV power supplies was reconnected.
PeterF pointed out that it would be good to see what beam jitter the IMC WFS are seeing now that we have the new 70W laser operating with the lower water flow.
I attach 2 screenshots: one of all the WFS, and one with just the WFS_B traces, since that historically has been, and still is, the sensor that sees the jitter motion best. You can see that the coherence between the WFS and the PEM accelerometer on the PSL periscope has decreased, as has the overall spectrum. There is a small new feature at about 583 Hz, but otherwise the spectra above 100 Hz are all notably better. I haven't confirmed the source of the extra low frequency noise in the WFS right now, but there's a lot going on in the LVEA, and the comparison time is Observation mode.
Perhaps I'll ask one of our Fellows who is working on the noise budget to use the old coupling TF to try to project what this noise would mean for our O2 DARM, but hopefully we'll also have significantly less coupling now that we've replaced ITMX, so that projection would be an upper limit.
EDIT: Note that these are the RF channels (I just realized that I forgot to include that information in my DTT-froze-on-me / redo things fiasco). I'll soon post a version with the calibrated WFS DC channels.
Really what we want to see is the WFS DC spectra, in calibrated units, so that we can see the ratio of the 1,0 modes to the 0,0 mode. However, the recent times the IMC has been locked have either been at such low power (0.9 W or less) or with the beam so far off center on the WFS that the data isn't great.
I have some data from a lock on April 4th with the new 70W amplifier but before the rotation stage was locked out at low power (and before the PMC and EOM were swapped), so the IMC was locked with 5.2W injected into the vacuum. Comparing with alog 34845 from March 2017, some of the peaks look perhaps a little better, but I need to retake the data with the IMC locked at higher input power to have better SNR. I don't have the unlocked version of WFS at this time - we went straight from locked to laser safe.
J. Kissel
Grabbed a few closeout measurements of H1 SUS OMC. All clear; good to go for door closure.
Transfer functions show that the dynamics are virtually identical to what they were before. See
- Individual measurement 2018-04-25_2324_H1SUSOMC_M1_ALL_TFs.pdf
- Comparison with others allomcss_2018-04-25_2324_H1SUSOMC_M1_Phase3a_ALL_ZOOMED_TFs.pdf
High Frequency OSEM sensor noise ASD,
- 2018-04-26_1831_H1SUSOMC_M1_OSEM_Noise_ASDs.png
has some spikes, but nothing egregious like a grounded BOSEM (e.g. LHO aLOG 40787). If I had to complain, I would complain about the T1 OSEM being a little higher in noise starting around 500 Hz. But I don't have to complain, so I won't. #WORKSFORME
TF Data Templates
2018-04-25_2324_H1SUSOMC_M1_WhiteNoise_L_0p02to50Hz.xml
2018-04-25_2324_H1SUSOMC_M1_WhiteNoise_P_0p02to50Hz.xml
2018-04-25_2324_H1SUSOMC_M1_WhiteNoise_R_0p02to50Hz.xml
2018-04-25_2324_H1SUSOMC_M1_WhiteNoise_T_0p02to50Hz.xml
2018-04-25_2324_H1SUSOMC_M1_WhiteNoise_V_0p02to50Hz.xml
2018-04-25_2324_H1SUSOMC_M1_WhiteNoise_Y_0p02to50Hz.xml
ASD Template
2018-04-26_1831_H1SUSOMC_M1_OSEM_ASDs.xml
I'm bypassing LX alarms to cell phones while the IOC is being worked on. Alarm bypass will expire 15:28 PDT.
bypass has been removed, h0velx is back.
As part of the HAM6 closeout and prep for pumping, I have disabled the picomotor drivers for HAM6/ISCT6 as well as Squeezer (those are the labels on the picomotor screen). All other picomotor drivers were already disabled.
TVo, Sheila, Dan Brown
Summary: Astigmatism in the OPO beam (that seems to be happening in HAM5 somewhere) is limiting the mode matching to the OMC. If the astigmatism can be fixed we could get 95% or better matching.
Along with the OMC mode scans taken the other night, we also took beam profile measurements in HAM6 with the Nanoscan. On the OMC side of the table we took profiles between HAM5 and OM1, between OM1 and OM2, and on OMC REFL; on the OPO side we took them after ZM1 and on the beam propagated off of the beam diverter onto SQZT6. Using the as-built Finesse model for the HAM5 to OMC path, I fit the input x- and y-plane beam parameters to the data. This is compared to the as-built OMC mode propagated to the SRM AR surface (beam parameter values in the legend).

The x-plane beam has an overlap of 83%, the y-plane 95%. Taking the ratio of the 2nd to 0th order peaks in the OMC scan averages the two planes, so we measure 89% matching, which agrees pretty well with what we measured the other night. Although the OMC scan ratio should underestimate the mismatch in general due to astigmatism in the OMC, the non-zero 1st order peak from misalignment also couples a bit into the 2nd order modes, which causes an overestimate of the mismatch as it makes 02/20 larger. It seems these two effects just happen to cancel each other out in these scans.
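For reference, the per-plane numbers above come from the usual beam-parameter overlap figure of merit. A minimal sketch of that calculation is below; the q values are illustrative placeholders chosen to roughly reproduce the quoted overlaps, not the actual fitted as-built parameters:

import numpy as np

def overlap(q1, q2):
    # power overlap between two fundamental Gaussian modes with complex
    # beam parameters q1, q2 evaluated at the same plane
    return 4.0 * abs(q1.imag * q2.imag) / abs(np.conj(q1) - q2)**2

# illustrative beam parameters [m] at the SRM AR surface (placeholders)
q_omc    =  0.0 + 2.00j   # OMC eigenmode propagated back to this plane
q_beam_x = -1.2 + 1.20j   # fitted x-plane beam
q_beam_y = -0.7 + 1.55j   # fitted y-plane beam

Ox = overlap(q_beam_x, q_omc)   # ~0.82
Oy = overlap(q_beam_y, q_omc)   # ~0.95

# an OMC scan's A2/A0 ratio averages the mismatch of the two planes
matching_scan = 1 - 0.5 * ((1 - Ox) + (1 - Oy))   # ~0.88
print('x: %.0f%%, y: %.0f%%, scan-averaged matching: %.0f%%'
      % (100*Ox, 100*Oy, 100*matching_scan))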
I also compared our measurements to what I originally predicted we should have gotten. As can be seen, the y prediction vs. fit isn't that far off, but the x-plane astigmatism causes a significant difference.

Lastly, I fit the beam profiles to a mode propagating away from ZM1. If there were no astigmatism (and the model parameters are correct) this beam propagated to the OMC would have ~98% matching.

BRS-X excursion is due to clean-rooms and work being done at the location. BRS-Y needs to be addressed. This downward trend is typical.
TITLE: 04/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
All Quiet on the Hanford Front.
I have locked PRX using the ALIGN_IFO guardian node, after a bit of alignment tweaking by hand. In particular, ITMX is quite far from where it was. PRX is not yet aligned well, since I got distracted trying to close the REFL DC centering loops (which are now closed).
Next up:
I think there are some sign flip shenanigans going on, which I think I have fixed; they make sense now. But the DC2 centering loops are still unstable at their old nominal gains. Right now the gains are lower by a factor of 4 - increasing them causes the loops to go unstable.
Recall that in alog 40853 JeffK flipped some signs so that the suspensions were matching the proper sign conventions. TVo, Sheila, and others found that this meant they needed to flip the feedback signs in the AS DC centering loops, as noted in alog 41436. In that alog, Sheila preemptively flipped the signs also in the REFL DC centering loops. However, as Jeff noted in his alog, RM2 didn't need the sign flip that the other *Ms did. So, just flipping the DC centering loop sign isn't quite the right thing. In the end, I put the REFL DC centering loops back to their previously nominal negative signs, and then flipped the sign of the RM1 elements in the ASC pitch and yaw output matrices, leaving the RM2 elements alone. I think this achieves the correct sign-flippage. But, the DC2 loops are still unstable when I go to their full gains (-10 for pit and -12 for yaw). So, for now they're set at -3 for both pitch and yaw. Tomorrow I'll measure the loops and see what's going on.
Attached is the alignment slider screenshot of where I have things right now.
By Jameson Rollins and Jonathan Hanks
Soon after the new guardian machine (h1guardian1 [0]) was moved into production (i.e. after all guardian nodes were moved to the new machine), we started seeing occasional segfaults of guardian node processes. We ran nodes under efence and valgrind but never saw a crash in either case, presumably either because the MTTF was increased significantly or because the crashes were circumvented entirely by serialized system calls (valgrind).
Adding to the confusion was our inability to reproduce the crashes in a test environment. 50 test guardian nodes running under the new production environment, subscribing to hundreds of front end channels (but with no writes), and with test clients subscribed to all control channels, failed to show any crashes in two weeks of straight running. (See below for the later-discovered reason why.)
The following steps were taken to help diagnose the problem. Inspection of the coredumps (captured with systemd-coredump [1]) generally turned up no useful information other than that the problem was a memory corruption error, likely in the EPICS CAS (or in the pcaspy python wrapping of it). The relatively long MTTF pointed to a threading race condition.
[0] Configuration of the new h1guardian1 machine:
[1] systemd-coredump is a very useful package. All core dump files are logged and archived, and the coredumpctl command provides access to those logs and an easy means of viewing them with gdb. Unfortunately the log files are cleared out by default after 3 days, and there doesn't seem to be a way to increase the expiration time. So be sure to back up the coredump files from /var/lib/systemd/coredump/ for later inspection.
In an attempt to get more informative error reporting with less impact on performance, Jonathan compiled python2.7 and pcaspy with libasan, the address sanitizer. libasan is similar to valgrind in that it wraps all memory allocation calls to detect memory errors that commonly lead to seg faults, but it's much faster and doesn't serialize the code, thereby leaving in place the threads that were likely triggering the crashes.
(As an aside, libtsan, the thread sanitizer, is basically impossible to use with python, since the python core itself seems to not be particularly thread safe. Running guardian under a libtsan-instrumented python caused guardian to crash immediately after launch with ~20k lines of tsan log output (yes, really). So this was abandoned as an avenue of investigation.)
Once we finally got guardian running under libasan [0], we started to observe libasan-triggered aborts. The libasan abort logs were consistent:
==20277==ERROR: AddressSanitizer: heap-use-after-free on address 0x602001074d90 at pc 0x7fa95a7ec0f1 bp 0x7fff99bc1660 sp 0x7fff99bc0e10
WRITE of size 8 at 0x602001074d90 thread T0
#0 0x7fa95a7ec0f0 in __interceptor_strncpy (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x6f0f0)
#1 0x7fa94f6acd78 in aitString::copy(char const*, unsigned int, unsigned int) (/usr/lib/x86_64-linux-gnu/libgdd.so.3.15.3+0x2bd78)
#2 0x7fa94f6a8fd3 (/usr/lib/x86_64-linux-gnu/libgdd.so.3.15.3+0x27fd3)
#3 0x7fa94f69a1e0 in gdd::putConvert(aitString const&) (/usr/lib/x86_64-linux-gnu/libgdd.so.3.15.3+0x191e0)
#4 0x7fa95001bcc3 in gdd_putConvertString pcaspy/casdef_wrap.cpp:4136
#5 0x7fa95003320d in _wrap_gdd_putConvertString pcaspy/casdef_wrap.cpp:7977
#6 0x564b9ecccda4 in call_function ../Python/ceval.c:4352
...
#67 0x564b9eb3fce9 in _start (/opt/python/python-2.7.13-asan/bin/python2.7+0xd9ce9)
0x602001074d90 is located 0 bytes inside of 8-byte region [0x602001074d90,0x602001074d98)
freed by thread T3 here:
#0 0x7fa95a840370 in operator delete[](void*) (/usr/lib/x86_64-linux-gnu/libasan.so.3+0xc3370)
#1 0x7fa94f6996de in gdd::setPrimType(aitEnum) (/usr/lib/x86_64-linux-gnu/libgdd.so.3.15.3+0x186de)
previously allocated by thread T0 here:
#0 0x7fa95a83fd70 in operator new[](unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.3+0xc2d70)
#1 0x7fa94f6acd2c in aitString::copy(char const*, unsigned int, unsigned int) (/usr/lib/x86_64-linux-gnu/libgdd.so.3.15.3+0x2bd2c)
Thread T3 created by T0 here:
#0 0x7fa95a7adf59 in __interceptor_pthread_create (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x30f59)
#1 0x564b9ed5e942 in PyThread_start_new_thread ../Python/thread_pthread.h:194
SUMMARY: AddressSanitizer: heap-use-after-free (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x6f0f0) in __interceptor_strncpy
Shadow bytes around the buggy address:
0x0c0480206960: fa fa fd fa fa fa 00 fa fa fa fd fd fa fa fa fa
0x0c0480206970: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fd
0x0c0480206980: fa fa fd fd fa fa fd fa fa fa fa fa fa fa fd fd
0x0c0480206990: fa fa fa fa fa fa fd fa fa fa fd fa fa fa fd fa
0x0c04802069a0: fa fa fa fa fa fa 00 fa fa fa fa fa fa fa fd fa
=>0x0c04802069b0: fa fa[fd]fa fa fa fd fd fa fa fd fd fa fa 00 fa
0x0c04802069c0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fd
0x0c04802069d0: fa fa 00 fa fa fa fd fa fa fa fd fd fa fa fd fa
0x0c04802069e0: fa fa fa fa fa fa fd fa fa fa 00 fa fa fa fd fd
0x0c04802069f0: fa fa 00 fa fa fa fd fd fa fa fd fd fa fa fd fd
0x0c0480206a00: fa fa fd fd fa fa fd fd fa fa fd fd fa fa 00 fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==20277==ABORTING
Here's a stack trace from a similar crash (couldn't find the trace from the exact same process, but the libasan aborts are all identical):
Stack trace of thread 18347:
#0 0x00007fe5c173efff __GI_raise (libc.so.6)
#1 0x00007fe5c174042a __GI_abort (libc.so.6)
#2 0x00007fe5c24ae329 n/a (libasan.so.3)
#3 0x00007fe5c24a39ab n/a (libasan.so.3)
#4 0x00007fe5c249db57 n/a (libasan.so.3)
#5 0x00007fe5c2442113 __interceptor_strncpy (libasan.so.3)
#6 0x00007fe5b7bf2d79 strncpy (libgdd.so.3.15.3)
#7 0x00007fe5b7beefd4 _ZN9aitString4copyEPKcj (libgdd.so.3.15.3)
#8 0x00007fe5b7be01e1 _Z10aitConvert7aitEnumPvS_PKvjPK18gddEnumStringTable (libgdd.so.3.15.3)
#9 0x00007fe5b8561cc4 gdd_putConvertString (_cas.so)
#10 0x00007fe5b857920e _wrap_gdd_putConvertString (_cas.so)
#11 0x000055f7f0c86da5 call_function (python2.7)
"strncpy" is the known-problematic string copy function, in this case used to copy strings into the EPICS GDD type used by the channel access server.
GDB backtraces of the core files show that the string being copied was always "seconds". The only place the string "seconds" is used in guardian is as the value of the "units" sub-record given to pcaspy for the EXECTIME and EXECTIME_LAST channels.
[0] systemd drop-in file used to run guardian under the libasan-instrumented python/pcaspy (~guardian/.config/systemd/user/guardian@.service.d/instrumented.conf):
[Service]
Type=simple
WatchdogSec=0
Environment=PYTHONPATH=/opt/python/pcaspy-0.7.1-asan/build/lib.linux-x86_64-2.7:/home/guardian/guardian/lib:/usr/lib/python2.7/dist-packages
Environment=ASAN_OPTIONS=abort_on_error=1:disable_coredump=0
ExecStartPre=
ExecStart=
ExecStart=/opt/python/python-2.7.13-asan/bin/python -u -t -m guardian %i
The discovery that the crash was caused by copying the string "seconds" in the CAS led to the revelation of why the test setup had not been reproducing the crashes. The "units" sub-record is part of the EPICS DBR_CTRL_* records and is the only string-type sub-record in use. The test clients were only subscribing to the base records of all the guardian channels, not the DBR_CTRL records. MEDM, on the other hand, subscribes to the CTRL records. Guardian overview screens are open all over the control room, subscribing to all the CTRL records of all the production guardian nodes.
CTRL record subscriptions involve copying the "units" string sub-record, and therefore trigger the crashes. No CTRL record subscriptions, no crashes.
So this all led us to take a closer look at how exactly guardian was using pcaspy.
The pcaspy documentation implies that pcaspy is thread safe. The package even provides a helper function that runs the server in a separate thread for you. The implication here is that running the server in a separate thread and pushing/pulling channel updates between the main thread and the cas thread is safe to do. Guardian was originally written to run the pcaspy.Server in a separate thread explicitly because of this implication in the documentation.
The main surface for threading issues in guardian's usage of pcaspy was between client writes, which trigger pcaspy.Driver.setParams() and pcaspy.Driver.updatePVs() calls inside of pcaspy.Driver.write(), and status channel updates being pushed from the main daemon thread into the driver, which also trigger updatePVs(). At Jonathan's suggestion, all guardian interaction with the core pcaspy cas functions (Driver.setParams(), Driver.updatePVs()) was wrapped with locks. We were skeptical that this would actually solve the problem, though, since pcaspy itself provides no means to lock its internal reading of the updated PVs for shipment out over the EPICS CTRL records (initiated during pcaspy.Server.process()). And in fact this turned out to be correct; crashes persisted even after the locks were in place.
We then started looking into ways to get rid of the separate pcaspy thread altogether. The main daemon loop runs at 16 Hz, and the main logic in the loop takes only about 5 ms to run. This leaves ~57 ms to run Server.process(), which should be plenty of time to process the cas without slowing things down noticeably. Moving the CAS select processing into the dead time of the main loop forces the main loop to keep track of its own timing. This has the added benefit of allowing us to drop the separate clock thread that had been keeping track of timing, eliminating two separate threads instead of just one.
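To illustrate the single-threaded pattern described above, here is a minimal sketch (not the actual guardian code) of servicing the pcaspy CAS in the dead time of a fixed-rate main loop; the channel prefix and PV database are made up for the example:

import time
from pcaspy import SimpleServer, Driver

CYCLE = 1.0 / 16     # 16 Hz main loop

prefix = 'X1:GRD-EXAMPLE_'                            # made-up prefix
pvdb = {'EXECTIME': {'prec': 3, 'unit': 'seconds'}}   # example PV

class ExampleDriver(Driver):
    def __init__(self):
        super(ExampleDriver, self).__init__()

server = SimpleServer()
server.createPV(prefix, pvdb)
driver = ExampleDriver()

while True:
    t0 = time.time()
    # ... main daemon logic here (~5 ms in guardian's case) ...
    driver.setParam('EXECTIME', time.time() - t0)
    driver.updatePVs()
    # service channel access requests for the rest of the cycle, in the
    # main thread, instead of running the server in a separate thread
    server.process(max(CYCLE - (time.time() - t0), 0.001))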
So a patch was prepared to eliminate the separate CAS thread from guardian, and it was tested on about a half dozen nodes. No crashes were observed after a day of running (far exceeding the previous MTTF).
A new version of guardian was wrapped up and put into production, and we have seen no spontaneous segfaults in nearly a week. We will continue to monitor the system to confirm that the behavior has not been adversely affected in any way by the elimination of the CAS thread (full lock recovery would be the best test of that), but we're fairly confident the issue has been resolved.
pcaspy and the CAS are not thread safe. This is the main takeaway. It's possible that guardian is the most intensive user of this library out there, which is why this has not been seen previously. I will report the issue to the EPICS community. We should be more aware of how we use this library in the future, and avoid running the server in a separate thread.
Multi-threading is tricky.
A quick followup about systemd-coredump. It's possible to control the expiration of the coredump files by dropping the following file into the file system:
root@h1guardian1:~# cat /etc/tmpfiles.d/00_coredump.conf
d /var/lib/systemd/coredump 0755 root root -
root@h1guardian1:~#
The final "-" tells systemd-tmpfiles to put no expiration on files in this directory. The "00" prefix is needed to make sure it always sorts lexically before any config that would set the expiration on this path (the system default is in /usr/lib/tmpfiles.d/systemd.conf).
Spring enabled the EE shop to work on setting up power for the LEMIs, and I had a look at the new signals. The top plot in the figure shows that we can see Schumann Resonances quite well, up to quite close to 60 Hz. The bottom two plots show some transient signals that might interfere with a feed-forward system.
It looks like the signals are degraded by wind. I am not surprised, because we see wind noise in buried seismometers. I think we would have this vibration problem even on a perfectly flat site because of the variation in Bernoulli forces associated with gusts. It may be that a LEMI signal is generated by the wind because of slight motions of the magnetometers in the earth's huge DC magnetic field. We buried the LEMIs about 18 inches deep (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=29096). I think we might be able to mitigate the noise somewhat by going much deeper. Once we have the vault seismometer working, it would be a good project to test the wind vibration hypothesis by comparing the LEMI and seismic signals.
There also seem to be some transients, some long and some short, possibly self-inflicted by our own systems. It would be good to look into which transients would be a problem and, for those, into details such as whether they are correlated with time of day, the average time between transients, etc., in order to help determine their source.
Finally, I would like to get the full system calibrated by comparing to a battery powered fluxgate magnetometer.
[Pat Meyers, Andrew Matas] We attach a few additional plots studying the Schumann resonances. Figures 1 and 2 show spectrograms using 16 hours of data from April 18, where the Schumann resonances are clearly visible. There are also a few glitches. We also show the coherence (Figure 3) and cross power (Figure 4) between the Hanford and Livingston LEMIs. The first two Schumann resonances, at about 8 Hz and 14 Hz, are coherent between the sites.
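For anyone who wants to reproduce these kinds of plots, a minimal sketch of the coherence/cross-power calculation is below; the sample rate, segment length, and placeholder arrays are assumptions, not the exact settings used for the attached figures:

import numpy as np
from scipy.signal import coherence, csd

fs = 256.0                             # Hz, assumed LEMI sample rate
# x, y = time series from the Hanford and Livingston LEMIs over the same span
x = np.random.randn(int(3600 * fs))    # placeholder data
y = np.random.randn(int(3600 * fs))    # placeholder data

nperseg = int(10 * fs)                 # ~0.1 Hz resolution, enough to resolve the 8 and 14 Hz peaks
f, Cxy = coherence(x, y, fs=fs, nperseg=nperseg)   # magnitude-squared coherence
f, Pxy = csd(x, y, fs=fs, nperseg=nperseg)         # cross power spectral density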
We disabled the vault power on April 20th to upgrade the power supply; it will remain down until this afternoon.