FAMIS #4718 For some reason HAM2 YAW did not plot. ETMX and ETMY may need to be recentered in pitch. HAM2 and ITMX have been close to -10 in pitch for the last 7 days.
TITLE: 03/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT: Wind: 4mph Gusts, 2mph 5min avg Primary useism: 0.01 μm/s Secondary useism: 0.14 μm/s
QUICK SUMMARY: No issues to report.
TITLE: 03/09 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Locked and in Observing most of the night (except for one small incident, see below). PI mode 27 has two BP filters on right now. Patrol visited the site last night. Foton doesn't seem to work when trying to use it from an MEDM screen (the usual right click >> foton >> click on filter).
LOG:
10:00 Hanford Patrol on site. They called the control room from the gate but the gate intercom didn't work; I only heard echoes of my voice when trying to talk. They came through anyway.
10:10 Patrol left through the exit gate, headed to the LSB. Off site a couple of minutes later.
13:18 Noticed two BP filters were on for PI mode 27. Ed told me about this before he left, but I didn't realize they were both being used. Accidentally went out of Observing while trying to revert the configuration. I left the BP filters as they are for now.
14:08 Bubba heading to Y arm to check on tumbleweed.
14:26 Bubba back.
The instafoton problem has been fixed. Some diagnostic code had been added which inadvertently introduced a dependence on a temporary file's ownership.
5.9M West of Macquarie Island
Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No
Magnitude (according to Terramon, USGS, SEISMON): 5.9, 5.9, na
Location: 60.146°S 150.296°E
Starting time of event (i.e. when BLRMS started to increase on the DMT on the wall): ~3:34 local time
Lock status? H1 stayed locked, LLO is down
EQ reported by Terramon BEFORE it actually arrived? Yes
Miriam, Hunter, Andy

A subset of blip glitches appear to be due to a glitch in the ETMY L2 coil driver chain. We measured the transfer function from the ETMY L2 MASTER channel to the NOISEMON channel (specifically, for the LR quadrant). We used this to subtract the drive signal out of the noisemon, so what remains would be any glitches in the coil drive chain itself (and not just feedback from DARM). The subtraction works very well, as seen in plot 1, with the noise floor a factor of 100 below the signal from about 4 to 800 Hz.

We identified some blip glitches from Feb 11 and 12 as well as Mar 6 and 7. Some of the Omega scans of the raw noisemon signals look suspicious, so we performed the subtraction. The noisemons seem to have an analog saturation limit at +/- 22,000 counts, so we looked for cases where the noisemon signal is clearly below this. In some cases, there was nothing seen in the noisemon after subtraction, or what remained was small and seemed like it might be due to a soft saturation or nonlinearity in the noisemon. However, we have identified at least three times where there is a strong residual. These are the second through fourth plots. We now plan to automate this process to look at many more blips and check all test mass L2 coils in all quadrants (a sketch of the subtraction is given after the list of times below).
In case someone wants to know, the times we report here are:
1170833873.5
1170934017
1170975288.38
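For reference, the drive subtraction described above can be sketched in a few lines of MATLAB. This is only an illustration with assumed variable names, not the code we actually ran: it assumes 'master' and 'noisemon' are column-vector time series at a common sample rate fs, and that 'tfdata' holds the measured MASTER-to-NOISEMON transfer function as [freq re im].

% Subtract the drive contribution from the noisemon using the measured TF
N   = length(master);
f1  = (0:floor(N/2))' * fs/N;                      % one-sided FFT frequencies
tfc = complex(tfdata(:,2), tfdata(:,3));
tfi = interp1(tfdata(:,1), tfc, f1, 'linear', 0);  % TF on the FFT grid, zero out of band
if mod(N,2) == 0                                   % rebuild the two-sided response
    tffull = [tfi; conj(tfi(end-1:-1:2))];
else
    tffull = [tfi; conj(tfi(end:-1:2))];
end
pred  = real(ifft(fft(master) .* tffull));         % drive signal as seen by the noisemon
resid = noisemon - pred;                           % residual: coil-driver glitches, if any

Omega scans or spectrograms of 'resid' around the candidate times would then show whether anything remains once the DARM feedback has been removed.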
I have noticed similarly caused glitches on the 10th March, in particular for the highest SNR Omicron glitch for the day:
Looking at the OmegaScan of this glitch in h(t), and then at the highest-SNR coincident channels, which are all the quadrants of H1:SUS-ETMY_L2_NOISEMON:
Hi Borja,
could you point us to the link to those omega scans? I would like to see the time series plots to check if the noisemon channels are saturating (we saw that sometimes they look like that in the spectrogram when it saturates).
I am also going to look into the blip glitches I got for March 10 to see if I find more of those (although I won't have glitches with such a high SNR like the one you posted).
Thanks!
Hi Miriam,
The above OmegaScan can be found here
Also I noticed that yesterday the highest New SNR glitch for the whole day reported by PyCBC live 'Short' is of this type as well. The OmegaScan for this one can be found here.
Hope it helps!
Hi Miriam, Borja,
While following up on a GraceDB trigger, I looked at several glitches from March 1 which seem to match those that Borja posted. The omegascans are here, in case these are also of interest to you.
Hi,
Borja, in the first omega scan you sent, the noisemon channels are indeed saturated. In that case it is difficult to tell whether that is the reason for the spectrogram looking like that or whether it might indeed be a glitch in the coil drive. Once Andy has a more final version of his code, we can check on that. In the second omega scan, the noisemon channels look just like the blip glitch looks in the calib_strain channel, which means the blip was probably already in the DARM loop before and the noisemon channels are just hearing it. Notice also that, besides the PyCBC_Live 'Short', we have a version of PyCBC_Live dedicated specifically to finding blip glitches (see aLog 34257), so at some point we will be looking into times coming from there (I will keep in mind to look into the March 10 list).
Paul, those omega scans do not quite look like what we are looking for. We did look into some blip glitches where the noisemon channels looked like what you sent and we did not find any evidence for glitches in the coil drive. But thanks for your omega scans, I will be checking those times when Andy has a final version of the subtraction code.
Attached are temperature trends for the past two days, the past year, and the past fortnight. The year trend is attached because Gerardo mentioned to me that this behaviour had in fact been going on for some time. Indeed there appear to be dips in the temperature except for the period between 21st December 2016 and 8th February 2017, or even 1st March 2017. One thing that is apparent is that there is a ~0.3 degC temperature shift from 1st March. That in turn seems to coincide with the recent spate of temperature excursions in the Laser Room temperature. I didn't find anything in the alog concerning work going on in the enclosure around that time. There was mention of a laser trip, but that was reset from the outside. No mention of any other work around 1st March.
Looks like an earthquake is on the way. SEISMON doesn't seem to be up to date. Terramon predicted the R-wave velocity from the 5.9 earthquake to be 2.4 micron/s.
TITLE: 03/09 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 61Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT: Wind: 5mph Gusts, 4mph 5min avg Primary useism: 0.01 μm/s Secondary useism: 0.18 μm/s
QUICK SUMMARY: Spotted a bunch of indicators on the EY CDS overview go red. The status reverted before I could identify which column was complaining.
I attached a plot of the H1IOPSUSEY status over the past 30 mins. I saw two columns go red on this one. The numbers read 6 and 512.
Note to operators: remember that h1susex and h1susey are running the newer, faster computers to keep processing time within the 60 us limit. This system has a feature: occasionally the IOP runs long for a single cycle, resulting in a latched TIM/ADC error on the IOP and sometimes an IPC error which propagates to the SUS, SEI, and ISC models at the end station. A cronjob running every minute at the zero-second mark clears these latched errors.
As an example of the frequency of these glitches, over the past 7 days, EX has glitched 61 times and EY 38 times (once every 2 and 4 hours respectively).
We should only report problems if the error bits are not being cleared after a minute has elapsed.
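As an illustration of that last point (hypothetical variable names; this is not an actual CDS tool), one could scan a 1 Hz trend of the IOP state word and flag only the error stretches that survive past the one-minute clearing cron:

% stateword : 1 Hz trend of the IOP state word (0 = no latched errors)
err = stateword(:) ~= 0;           % any latched error bit set (e.g. 6 or 512)
d   = diff([0; err; 0]);           % edges of the error stretches
on  = find(d == 1);
off = find(d == -1);
dur = off - on;                    % stretch durations [s]
report = on(dur > 60);             % only stretches the cron failed to clear
fprintf('%d error stretch(es) lasting longer than 60 s\n', numel(report));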
TITLE: 03/09 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
LOG:
I was successful in damping it by first adjusting the set frequency and then changing to a 90° phase and a gain of 1000.
I wasn't able to verify the effect of my changes on mode 23 due to a lockloss and a different set of PI conditions in the new lock; however, I did return the settings for that mode to what I had set them to previously.
I also had to switch on a BP filter to gain control of mode 27 which seemed to be impervious to my efforts at first. I finally got a lock going that looks like it will hold for a while.
All Times in UTC
00:29 PSL Temp Excursion.
00:45 Tumbleweed balers back from Y arm
00:47 Initiated locking sequence
16:54 Gerardo and Fil into LVEA to pull the AC breaker for the PSL AC to stop temp excursions
1:06 Gerardo and Fil out
1:19 Just realized that the OMC wasn't locking. Re-requested READY_FOR_HANDOFF.
1:31 finally moving THROUGH DC Readout
7:31 restarted VerbalAlarms
8:00 Handing off to Nutsinee.
Lockloss at 00:33
1:45 Accepted some SDF diffs for bounce mode damping (left over from my last shift?); modes ringing down nicely
1:47 Intention Bit "Undisturbed" 61.2Mpc
TITLE: 03/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: Ed
SHIFT SUMMARY: Tumbleweed baling throughout the day, currently on the Y arm. The PSL enclosure temperature has been slowly returning to nominal. The DMT inspiral range for LLO is back; the GW status webpage is not, and Dan is contacting Caltech to sort it out. I may have the details wrong, but it sounds like the detchar observing and calibration bits had not been set since Tuesday maintenance; this appears to have been resolved. John said that Apollo has been working at mid X on HVAC controls.
LOG:
Chris at mid Y WP 6518
16:01 UTC Christina to mid X, Karen to mid Y
17:04 UTC Karen leaving mid Y
[Heather Fong, Sheila Dwyer, Karl Toland, Vaishali Adya]
We went back to EY on Tuesday to reproduce our measurements (alog 34540). We did a few measurements with the illuminator powered off, followed by the PDs. The coherence with DARM didn't change by a significant amount.
The channels we looked at were:
Here is a list of the frequencies that show up apart from the known calibration lines and multiples of the power lines:
281.875, 282.375, 284.75, 280.25, 279.75, 279, 277.125 Hz
When we turned off the oplevs, however, we noticed an additional peak at 33.37 Hz.
This measurement is consistent with what we saw in 34540.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 410 seconds. LLCV set back to 18.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 868 seconds. TC A did not register fill. LLCV set back to 40.0% open.
PI mode 23 has rung up a couple of times, coincident with a glitch in DARM. It is not immediately obvious which precedes which. There is no gain set for damping this mode and it has come back down on its own. From the camera, the tumbleweed baling now appears to be close to the corner station on the X arm. We have remained in observing since the start of the shift. No observed jumps in the PSL enclosure temperature.
J. Kissel, D. Tuyenbayev

Analyzing the high-frequency data for the UIM that we took last night (LHO aLOG 31601), we find, as previously suspected, that there are lots of dynamical resonant features in the UIM / L1 actuation stage; it definitely does NOT fall as f^6 to infinity as one might naively suspect. There are even more features than the (now anticipated; LHO aLOG 31432) broad impacts of the violin modes of the Sus Point-to-TOP wires (~311 Hz) and UIM-to-PUM wires (~420 Hz). We had seen hints of these features previously (LHO aLOG 24917), but here they are fully characterized out to 500 Hz with a combination of swept-sine (SS) and broad-band (BB) transfer function ratios (the calibration standard measurements of PCAL2DARM = C / (1+G) and iEXC2DARM = C A_i / (1+G)).

The measurements yield the actuation strength of the UIM stage, in terms of [m] of test mass displacement per [ct] of drive from the L1_TEST_L bank, which is the Euler-basis equivalent of DAC [ct]. Scaling to [m/N] is a mere scale factor, measured to be 20/2^18 [V/ct] * 0.62e-3 [A/V] * 1.7082 [N/A] = 8.08e-8 [N/ct] (see LHO aLOG 31344).

Via private communication in January this year, Norna suspects that the 111 Hz feature is the first internal mode of the UIM blades, backed by a bench test of the blades at CIT which revealed a resonance at 109 Hz. No ideas on the 167 Hz mode though.

These high frequency dynamics continue to plague the estimate of the UIM actuation strength at DC using the traditional frequency-dependent sweep method, because they begin to affect the actuation strength at as low a frequency as ~30 Hz (LHO aLOG 31427), and any model fitting code gets totally distracted by these features.

A challenge to the CSWG team: fit this transfer function above 20 Hz and create a set of zeros and poles that can be used as a "correction" filter to a model that falls off as f^6. This filter need not perfectly resolve the details of all of the high-Q features, but it must track the overall frequency dependence over the 20 - 500 Hz region well.

I attach all of the measurements compressed onto one (discontinuous) frequency vector as an ascii file in the standard DTT form of [freq re(TF) im(TF)]. To use:
>> foo = load('2016-11-17_H1SUSETMY_L1_Actuation_HighFreqChar_asciidump.txt')
>> figure; loglog(foo(:,1), abs( foo(:,2)+1i*foo(:,3) ))
This data is also committed to the CalSVN repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Results/Actuation/2016-11-17_H1SUSETMY_L1_Actuation_HighFreqChar_asciidump.txt
Kiwamu has already tried to create such a filter from the previous data (LHO aLOG 28206), but was limited by that measurement's high-frequency bound falling between the 111, 137, and 167 Hz features.

Details:
Analysis code:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/FullIFOActuatorTFs/process_H1SUSETMY_L1_HFDynamicsTest_20161117.m
Config files:
IFOindepPars = '../../../Common/params/IFOindepParams.conf';
IFOdepPars = {'../../params/H1params.conf'};
IFOmeasPars = {'../../params/2016-11-12/H1params_2016-11-12.conf'};
PCALPars = {'../../params/2016-11-12/measurements_2016-11-12_ETMY_L1_actuator.conf'};
Model:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/DARMmodel/src/computeDARM.m

Will post the data for the fitting challenge later this afternoon.
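As a rough illustration of the kind of zero/pole "correction" fit being requested (not the eventual CSWG method; the fit orders, the weighting, and the availability of the Signal Processing Toolbox function invfreqs are assumptions for this sketch), one could start from the attached ascii file like so:

foo  = load('2016-11-17_H1SUSETMY_L1_Actuation_HighFreqChar_asciidump.txt');
f    = foo(:,1);
A    = foo(:,2) + 1i*foo(:,3);
band = f >= 20 & f <= 500;              % fit only the 20-500 Hz region of interest
w    = 2*pi*f(band);                    % invfreqs works in rad/s
wt   = 1 ./ abs(A(band)).^2;            % weighting to handle the large dynamic range
nb   = 10; na = 16;                     % placeholder orders, to be tuned
[b, a] = invfreqs(A(band), w, nb, na, wt, 30);   % iterative rational fit
Afit = freqs(b, a, w);
figure; loglog(f(band), abs(A(band)), f(band), abs(Afit));
legend('measurement', 'rational fit'); xlabel('Frequency [Hz]');

The resulting [b, a] could then be converted to zeros and poles with tf2zp(b, a) for use as a correction filter on top of the f^6 model.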
I made an update to the quad matlab model to account for these mystery features. See CSWG log 11197.
I describe my use of the Frequency Domain System Identification toolbox (FDIDENT) to fit this transfer function in CSWG elog #11205. FDIDENT is a third party Matlab toolbox which provides tools for identifying linear dynamic single-input/single-output (SISO) systems from time response or frequency response measurements. The toolbox is free for non-profit use.
https://www.mathworks.com/products/connections/product_detail/product_35570.html
http://home.mit.bme.hu/~kollar/fdident/
A stable, but non-minimum-phase, model without delay (compatible with a Linear Time Invariant (LTI) representation) gives the best fit with a 22nd-order numerator and 28th-order denominator, model m2228. The model is compared to the measurement data in the attached bode plot.
I attach several new parts of this high frequency characterization in order to facilitate incorporating the uncertainty in any future transfer function fitting. I attach three new text files:
"..._tf.txt" -- a copy of the originally attached text file, columns are [freq re(A) im(A)]
"..._coh.txt" -- an export of the (prefiltered) coherence, columns are [freq iEXCCoh PCALCoh]
"..._relunc.txt" -- an export of the combined relative uncertainty on the transfer function, columns are [freq sigma_A]

Computing the uncertainty on this actuation strength was a bit of a challenge. Remember, the above measure of the actuation strength of the UIM stage, A, is a combination of two transfer functions, as described in P1500248, Section V. In this aLOG they're referred to as "PCAL2DARM", where we use the photon calibrator as a reference actuator, and "iEXC2DARM", where the suspension stage under test is used as the actuator. Typically, the iEXC2DARM transfer function has the lowest coherence. Even worse, I've combined many data sets of both transfer functions covering different frequency regions, each with a different number of averages. Thus, to form the uncertainty, I've taken each frequency region's data set and:
- Filtered both the iEXC and PCAL transfer functions for data points in which the iEXC TF has coherence greater than 0.95,
- Created a relative uncertainty vector for each of the iEXC and PCAL transfer functions using the standard B&P equation, sigma_TF(f) / TF = sqrt( (1 - C(f)) / (2 N C(f)) ), where C(f) is the coherence and N is the number of averages (N was 10 for swept-sine TFs, 25 for broad-band TFs),
- Concatenated the data sets to form the overall transfer function, A,
- Combined the two uncertainty vectors in the standard way, sigma_A / A = sqrt( (sigma_iEXC / iEXC)^2 + (sigma_PCAL / PCAL)^2 ),
- Sorted the collection of [frequency complextf iexccoh pcalcoh sigma_A] by frequency,
- Exported the uncertainty.

Note that one only needs one column of uncertainty: the absolute uncertainty in magnitude is just |sigma_A| = abs(A) * (sigma_A / A), and the absolute uncertainty in phase is 180/pi * (sigma_A / A) [deg].

I attach a plot of the magnitude and its uncertainty for demonstrative purposes, so that when the files are used you can compare your plots against mine to be sure you're using the data right. Note that I've multiplied the uncertainty by a factor of 10, for plotting only, so that it's visible.

I've updated and committed the function that's used to process this data, and it can be found here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/FullIFOActuatorTFs/process_H1SUSETMY_L1_HFDynamicsTest_20161117.m
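For concreteness, the per-region uncertainty combination described above amounts to a few lines of MATLAB. This is a sketch with assumed variable names mirroring the B&P formula, not an excerpt of the committed processing script:

% cohI, cohP : iEXC and PCAL coherence vectors for one frequency region
% A          : complex actuation TF for that region; N : number of averages
N     = 10;                                  % 10 for swept-sine, 25 for broad-band segments
relI  = sqrt( (1 - cohI) ./ (2*N*cohI) );    % sigma_iEXC / iEXC
relP  = sqrt( (1 - cohP) ./ (2*N*cohP) );    % sigma_PCAL / PCAL
relA  = sqrt( relI.^2 + relP.^2 );           % sigma_A / A, combined in quadrature
magUnc   = abs(A) .* relA;                   % absolute uncertainty in |A|
phaseUnc = (180/pi) * relA;                  % absolute uncertainty in phase [deg]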