Why? Robert is going to guide me in helping him with the Chamber Pinning investigation. That is, does proximity to the chamber reduce wind-induced tilt of the floor?
SEI is using just one of our three LVEA STS2s, ITMY (STS2B), for HEPI and ISI sensor correction. So we'll move an underutilized STS around the floor to see how distance from the chamber legs affects the tilt of the floor induced by wind.
Right now, the HAM5 machine is actually in the BierGarten a couple meters -Y of the ITMY machine; I'll call the machine in this position Roam1. Not quite close enough for a huddle test (especially if it were windy) so I'll move it closer next week.
Meanwhile, we can compare things now to assess the seismometers' health.
Had a fairly quiet wind last night so attached are low wind comparisons.
Plot 1 shows the X axes. Roam1 and ITMY compare pretty well until we get down to about 20mHz. HAM2 X may have a problem, or it may just be the location. With these light winds, really less than 5mph for most of the 5500sec spectra, it is troubling to see the ETMY X channel. The wind is light and it is 4:30am; there is no reason for that noise compared to the others.
The Y axes in Plot 2 show that the good agreement between ITMY and Roam1 now extends down to below 10mHz. HAM2, again, we assume is suffering either from its location or its rough LIGO life over the years. The end station machines compare quite well to below 10mHz as well.
The 3rd plot shows great Z axis agreement between the site instruments to well below 10mHz (except maybe HAM2 again, around 5-6mHz). Too bad this isn't the channel we need for tilt studies.
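For anyone who wants to reproduce this kind of comparison, here is a minimal sketch using gwpy; the channel names, GPS times, and FFT length are illustrative assumptions, not the ones used for the attached plots.

```python
# Hedged sketch of comparing two ground STS channels at low frequency with gwpy.
from gwpy.timeseries import TimeSeries

start, end = 1174370000, 1174375500          # assumed 5500 s stretch
chans = ['H1:ISI-GND_STS_ITMY_X_DQ',         # assumed channel names
         'H1:ISI-GND_STS_HAM5_X_DQ']

asds = {}
for chan in chans:
    ts = TimeSeries.get(chan, start, end)               # fetch from NDS/frames
    asds[chan] = ts.asd(fftlength=1000, overlap=500)    # ~1 mHz resolution

# Ratio of the two spectra; deviations from 1 below ~20 mHz flag disagreement
ratio = asds[chans[0]] / asds[chans[1]]
print(ratio.crop(5e-3, 0.1))
```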
TITLE: 03/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: 11.5 hour lock, observing for most of the time, other than a 24 min period while Robert turned on the fire pump while LLO was down. DARM seems to be a bit noisier since then and our range has been dipping a bit. I'm not sure where the balers are, but they are not where Bubba said they were supposed to be. We are still seeing some large spikes in the seismic 3-10Hz BLRMS, so maybe this is the cause, but that is just a wild guess.
LOG:
Read below for more details on the methodology and the results.
At this point I have 5548 times identified as blip glitches, with the value of the suspension channels at each of those times, and 5548 more times of clean data, again with the value of the suspension channels at each of them.
Here's an example plot of the result. I picked one of the most interesting channels (SUS-ETMX_M0_DAMP_Y_INMON). The PDF files attached below contain the same kind of histograms for all channels, divided by test mass.
The first row shows results for the SUS-ETMX_M0_DAMP_Y_INMON signal. The first panel compares the histogram of the values of this signal for blip times (red) and clean times (blue). This histogram is normalized such that the value of the curve is the empirical probability distribution of having a particular value of the signal when a blip happens (or doesn't happen). The second panel in the first row is the cumulative probability distribution (the integral of the histogram): the value at an abscissa x gives the probability of having a value of the signal lower than x. It is a standard way to smooth out the histogram, and it is often used as a test for the equality of two empirical distributions (the Kolmogorov-Smirnov test). The third panel is the ratio of the histogram of glitchy times over the histogram of clean times: if the two distributions are equal, it should be one. The shaded region is the 95% confidence interval, computed assuming that the number of counts in each bin of the histogram follows a naive Poisson distribution. This is probably not a good assumption, but it's the best I could come up with.
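As a concrete illustration of the three panels (not the code used for the attached plots), here is a minimal sketch; `blip_vals` and `clean_vals` are assumed to be numpy arrays holding the channel value at the 5548 blip times and the 5548 clean times.

```python
import numpy as np
from scipy import stats

def compare_distributions(blip_vals, clean_vals, nbins=100):
    """blip_vals, clean_vals: 1-D arrays of the channel value at blip/clean times."""
    bins = np.linspace(min(blip_vals.min(), clean_vals.min()),
                       max(blip_vals.max(), clean_vals.max()), nbins)
    n_blip, _ = np.histogram(blip_vals, bins=bins)
    n_clean, _ = np.histogram(clean_vals, bins=bins)

    # Panel 1: empirical probability distributions (each normalized to sum to 1)
    p_blip, p_clean = n_blip / n_blip.sum(), n_clean / n_clean.sum()

    # Panel 2: Kolmogorov-Smirnov test on the two samples (uses the empirical CDFs)
    ks_stat, p_value = stats.ks_2samp(blip_vals, clean_vals)

    # Panel 3: blip/clean count ratio with a ~95% band from naive Poisson errors
    ratio = n_blip / np.maximum(n_clean, 1)
    sigma = ratio * np.sqrt(1.0 / np.maximum(n_blip, 1) + 1.0 / np.maximum(n_clean, 1))
    band = (ratio - 1.96 * sigma, ratio + 1.96 * sigma)
    return p_blip, p_clean, (ks_stat, p_value), ratio, band
```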
The second row is the same, but this time I'm considering the derivative of the signal. The third row is for the second derivative. I don't see much of interest in the derivative plots of any signal. This is probably due to the high-frequency content of those signals; I should try to apply a low-pass filter. On my to-do list.
However, looking at the histogram comparison in the very first panel, it's clear that the two distributions are different: a subset of blip glitches happens when the signal SUS-ETMX_M0_DAMP_Y_INMON has a value of about 15.5. There are almost no counts of clean data at this value. I think this is quite a strong correlation.
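For the low-pass idea on the to-do list, something like the following sketch might work (my suggestion, not existing code); the cutoff frequency is an arbitrary choice.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpassed_derivative(signal_ts, fs, cutoff=1.0, order=4):
    """Low-pass the time series (sampled at fs Hz), then take its derivative."""
    b, a = butter(order, cutoff, btype='low', fs=fs)
    smooth = filtfilt(b, a, signal_ts)          # zero-phase filtering
    return np.gradient(smooth, 1.0 / fs)        # derivative in units/s
```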
You can look at all the signals in the PDF files. Here's a summary of the most relevant findings. I'm listing all the signals that show a significant peak of blip glitches, as above, and the corresponding value:
| Channel | ETMX | ETMY | ITMX | ITMY |
|---|---|---|---|---|
| L1_WIT_LMON | -21.5 | -54.5 | | |
| L1_WIT_PMON | 495, 498 | 609 | 581 | |
| L1_WIT_YMON | 753 | -455 | 643, 647 | 500.5 |
| L2_WIT_LMON | 437, 437.5 | -24 | -34.3 | |
| L2_WIT_PMON | 1495 | -993, -987 | | |
| L2_WIT_YMON | -170, -168, -167.5 | 69 | | |
| L3_OPLEV_PIT_OUT16 | -13, -3 | 7 | | |
| L3_OPLEV_YAW_OUT16 | -4.5 | 7.5 | -10, -2 | 3 |
| M0_DAMP_L_INMON | 36.6 | 14.7 | 26.5, 26.9 | |
| M0_DAMP_P_INMON | -120 | 363 | 965, 975 | |
| M0_DAMP_R_INMON | 32.8 | -4.5 | 190 | -90 |
| M0_DAMP_T_INMON | 18.3 | 10.2 | 20.6 | 27.3 |
| M0_DAMP_V_INMON | -56.5 | 65 | -25 | -60 |
| M0_DAMP_Y_INMON | 15.5 | -71.5 | -19, -17 | 85 |
Some of the peaks listed above are quite narrow, some are wider. It looks like ITMY is the suspension with the most peaks, and probably the most significant correlations. But that's not very conclusive.
There were some days with extremely high rates of glitches in January. It's possible that clumping of glitches in time could be throwing off the results. Maybe you could try considering December, January, and February as separate sets and see if the results only hold at certain times. Also, it would be nice to see how specifically you can predict the glitches in time. Could you take the glitch times, randomly offset each one according to some distribution, and use that as the 'clean' times? That way, they would have roughly the same time distribution as the glitches, so you shouldn't be very affected by changes in IFO alignment. And you can see if the blips can be predicted within a few seconds or within a minute.
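Here is a minimal sketch of the randomly-offset clean-time idea suggested above (my illustration, not existing analysis code); the offset range is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def offset_clean_times(blip_gps, min_offset=10.0, max_offset=60.0):
    """Shift each blip time by a random +/- offset (seconds) drawn uniformly,
    so the 'clean' set inherits the blips' long-term time distribution."""
    n = len(blip_gps)
    offsets = rng.uniform(min_offset, max_offset, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    return np.asarray(blip_gps) + signs * offsets
```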
In my analysis I assumed that the list of blip glitches was not affected by the vetoes. I was wrong.
Looking at the histogram in my entry, there is a set of glitches that happen when H1:SUS-ETMX_M0_DAMP_Y_INMON > 15.5. So I plotted the value of this channel as a function of time for all blip glitches. In the plot below, the blue circles are all the blips, the orange dots are the blips that pass ANALYSIS_READY, and the yellow crosses are the blips that pass the vetoes. Clearly, the period of time when the signal was > 15.5 is completely vetoed.
So that's why I got different distributions: my sampling of clean times included the vetoed periods, while the blip list did not. I ran the analysis again including only non-vetoed blips, and that family of blips disappeared. There are still some differences in the histograms that might be interesting to investigate. See the attached PDF files for the new results.
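To avoid this in the future, the blip list and the clean-time sampling should be filtered against the same veto segments. A hedged sketch (the segment format and variable names are assumptions):

```python
import numpy as np

def outside_segments(times, segments):
    """Return the subset of `times` not contained in any (start, end) segment."""
    times = np.asarray(times)
    keep = np.ones(len(times), dtype=bool)
    for start, end in segments:
        keep &= ~((times >= start) & (times < end))
    return times[keep]

# non_vetoed_blips = outside_segments(blip_gps, veto_segments)
```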
One more thing to check is the distribution over time of the blip glitches and of the clean times. There is an excess of blips between days 39 and 40, while the distribution of clean data is more uniform. This is another possible source of issues in my analysis. To be checked.
@Gabriele
The excess of blips around day 40 corresponds to a known period of high glitchiness; I believe it was around Dec 18-20. I also got that peak when I made a histogram of the lists of blip glitches coming from the online blip glitch hunter (the histogram is at https://ldas-jobs.ligo.caltech.edu/~miriam.cabero/tst.png; it is not as pretty as yours, I just made it last week very quickly to get a preliminary view of variations in the rate of blips during O2 so far).
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 38 seconds. LLCV set back to 16.0% open. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 1902 seconds. TC A did not register fill. LLCV set back to 43.0% open.
Lowered CP3 to 15% open and raised CP4 to 44% open.
model restarts logged for Thu 23/Mar/2017 - Mon 20/Mar/2017: no restarts reported
No code changes during Tuesday maintenance.
We always thought that the video image of ITMY looks worse than that of ITMX, but we were not sure about the camera/lens settings.
Richard opened the iris of both the ITMX and ITMY GigE cameras all the way on Tuesday, and they still look the same. In the attached image, several pictures with different exposures are combined, but even without doing anything ITMY looks obviously brighter.
Is there any hidden variable here?
(Added later: the non-conclusion of this alog is that ITMY HR seems to have larger scattering than ITMX HR, and that ITMX HR doesn't seem to have a single big scatterer, but I'm still wondering if this is due to some hidden setting somewhere.)
I have added an (unreleased) algorithm into the GDS/DCS pipeline to compute the SRC spring frequency and Q. This algorithm was used to collect 18 hours of data on February 4 (first plot) and 18 hours of data on March 4 (second plot). The plots include kappa_c, the cavity pole, the SRC spring frequency, the SRC Q, and 4 of the coherence uncertainties (uncertainty of the 7.93 Hz line is not yet available).

The derivation this algorithm was based on is similar to what Jeff has posted ( https://dcc.ligo.org/DocDB/0140/T1700106/001/T1700106-v1.pdf ), with differences noted below:
1) The approximation made at the bottom of p. 4 and top of p. 5 was only used in the calculation of kappa_c and the cavity pole. So SRC detuning effects were not accounted for in computing S_c. However, kappa_c and the computed cavity pole were used in the calculation of S_s.
2) In eq. 18, I have a minus (-) sign instead of a plus (+) sign before EP6. S_c = S(f_1, t) has been computed this way in GDS/DCS since the start of O2 (I assume during O1 as well).
3) Similarly, in eq. 20, I have a minus sign (-) before EP12.
4) In the lower two equations of 13, I have the terms under the square root subtracted in the opposite order, as suggested by Shivaraj. (Also, I noted that the expression for Q should only depend on S(f_2, t), with no dependence on S(f_1, t).)

The smoothing (128s running median + 10s average) was done on f_s and 1/Q, since that is the way they would be applied to h(t). Therefore, the zero-crossings of 1/Q show up as asymptotes in the plot of Q. I think it would be better to output 1/Q in a channel rather than Q for this reason. There is a noticeable ramping up of f_s at the beginning of lock stretches, and the range of values agrees with what has been measured previously.

I've noted that it is quite difficult to resolve the value of Q with good accuracy. These are some reasons I suspect:
1) Higher uncertainty of the calibration measurements at low frequency can add a systematic error to the EPICS values computed at 7.93 Hz. This may be why the Q is more often negative than positive ??
2) In the calculation of S_s, the actuation strength is subtracted from the ratio of pcal and DARM_ERR. Since this is such a low frequency, the subtracted values are close to the same value in magnitude and phase. Thus, subtracting magnifies both systematic error and uncertainty.
3) The imaginary part of S_s (see eq. 13, bottom equation) in the denominator is very close to zero, so small fluctuations (about zero, as it turns out) in 1/Q cause large fluctuations in Q.

These reasons make it difficult to measure Q with this method. The effect of these measured-Q fluctuations on S_s, the factor we would actually apply to h(t) (see eq. 22), is not enormous, so long as we apply the smoothing to 1/Q, as I have done here.
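For concreteness, here is a hedged sketch (my illustration, not the GDS/DCS implementation) of the smoothing described above: a 128 s running median followed by a 10 s average, applied to 1/Q so that the zero crossings stay well behaved. The `inv_q` array and the 16 Hz sample rate are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def smooth_inverse_q(inv_q, fs=16):
    """128 s running median followed by a 10 s average, applied to 1/Q."""
    med = median_filter(inv_q, size=int(128 * fs), mode='nearest')    # 128 s running median
    return uniform_filter1d(med, size=int(10 * fs), mode='nearest')   # 10 s average

# Q itself is then 1/(smoothed value); asymptotes appear where 1/Q crosses zero.
```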
The first attachment is a handwritten note containing the derivation of equations 13 in DCC document T1700106. As Aaron mentioned above, in the derivation the order of the quantities in the sqrt function comes out to be the opposite (Re[S] - abs[S]^2 instead of abs[S]^2 - Re[S]).

The second plot shows the estimation of the four sensing function quantities for the 2017-01-24 calibration sweep measurement done at LHO (alog 33604). Instead of tracking across time, here we track across sweep frequencies. The top two plots in the second figure show the estimation of the optical gain and cavity pole frequency assuming no detuning. We see that above ~100 Hz we get almost constant values for the optical gain and cavity pole frequency, suggesting detuning doesn't affect the estimation of those quantities (currently we use the 331.9 Hz line at LHO for estimating the optical gain and cavity pole). Substituting back the optical gain and cavity pole calculated this way, we then calculated the detuning frequency and Q; the bottom two plots of the second figure show those. We see that up to ~60 Hz we can use the lines to estimate the detuning frequency (currently at LHO we are running the line at 7.83 Hz). However, the Q is hard to estimate; the variation is pretty large (Evan's recent alog 34967 also indicates this, and Aaron also finds this to be the case). Also, in the 7-10 Hz region its value seems to be negative (we need to look at more data to make sure that it is not just a fluctuation). With the current set of calibration lines, it seems tracking the detuning frequency would be easy, but estimating Q might be a little difficult.
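For orientation (my paraphrase of the generic detuned-SRC sensing model, not necessarily the exact conventions of T1700106), the four quantities discussed here (optical gain H_C, cavity pole f_cc, SRC detuning frequency f_s, and quality factor Q) enter the sensing function roughly as:

C(f) \propto \frac{H_C}{1 + i f/f_{cc}} \cdot \frac{f^2}{f^2 + f_s^2 - i\, f f_s / Q}

At high frequency the detuning factor approaches one, consistent with the flat optical gain and cavity pole estimates above ~100 Hz, while at low frequency the f_s and Q terms dominate the response.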
On the second page of the derivation, at the halfway point I unintentionally switched the notation from S_s to S_c (it should be S_s until the end of page 2).
[Daniel Finstad, Aaron Viets] The time series and histograms attached show additional data collected using the DCS calibration pipeline from Jan 19, 2017 at 21:44:14 UTC (GPS 1168897472) until Jan 20, 2017 at 09:02:38 UTC (GPS 1168938176).
TITLE: 03/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY: Glad to see us recovered from yesterday; we are now riding a 4 hour lock at 67Mpc.
TITLE: 03/24 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: One lock loss. Cause not immediately apparent. No major issues relocking. Had a verbal alarms notification of a timing system error (see alog 35058). Tumbleweed baling has started at the LSB.
LOG:
Closed bash terminal that was obscuring screenshot of nuc0 DMT Omega plot
10:27 UTC Lockloss
11:07 UTC Back to observing
11:11 UTC PI mode 28 rang up and back down on its own
11:19 UTC Changed phase to damp PI mode 27
11:50 UTC Verbal alarm: Timing system error
14:04 UTC Tumbleweed balers started tractor, waiting for it to warm up, then driving it to start baling at the LSB
14:38 UTC Damped PI mode 27
I have shut down the instrument air compressors at MX. All of the pneumatic actuators have been replaced with electric actuators. The only call for air at that station would be when the vacuum system needs to run the turbo pumps, or for the water storage tank. The compressors are still readily available for these functions; they will just need to be turned back on. Eventually all of the instrument air compressors will be placed in this standby mode instead of continually running and cycling, saving both electricity and maintenance costs. I do plan to cycle them on a monthly basis just so they will be ready when needed.
At 11:50:12 UTC verbal alarms reported: 'Timing system error'. I worked my way through the timing medm screens, trending each error channel to see which screen to go to next. In the end I found H1:SYS-TIMING_C_FO_A_PORT_2_SLAVE_ERROR_CODE went to 40 briefly. I'm not sure how to interpret this. Trends of the sequential error codes are attached.
I just found that H1:SYS-TIMING_C_FO_A_PORT_2_SLAVE_UPLINKCRCERRCOUNT increased to 1 at the same time.
The error code is 40 (0x28), which decodes to 'Firmware error' + 'DUOTONE'. The port in question feeds the IO chassis for h1seib2 (Beam Splitter seismic). I trended the IOP duotone and CPU signals over this time period; no problems were found. I spoke with Daniel and he agrees this looks like a random error (the timing around 4am local time is suspicious). I trended this system over the past 100 days; this is the only occurrence of a glitch. Hopefully not an early symptom of aging fiber transceivers.
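For reference, here is a small illustration of how a code like 40 = 0x28 splits into flag bits; the specific bit values below are my assumption (chosen only to be consistent with the 'Firmware error' + 'DUOTONE' decode quoted above), not the actual timing-system bit map.

```python
ERROR_BITS = {            # hypothetical flag definitions for this sketch
    0x08: 'DUOTONE',
    0x20: 'Firmware error',
}

def decode_error(code):
    """Return the names of the flag bits set in `code`."""
    return [name for bit, name in ERROR_BITS.items() if code & bit]

print(decode_error(0x28))   # -> ['DUOTONE', 'Firmware error']
```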
Lost lock at 10:27 UTC. Cause unclear. Almost back to NLN.
Back to observing at 11:07 UTC.
Upon entering the control room, I noticed that we have a low frequency oscillation in the PRC gain, POP DC, etc. on our front wall striptool. Turns out, this is the same as what Jim documented last night in alog 34999, and likely comes from Sheila's changing of the oplev damping cutoff in alog 34988.
I would like to go back to the old 10 Hz cutoff that we used to have in the oplevs, to see if that fixes this. However, Sheila noted to me that the old cutoff filter has zero history and no ramp, so we can't just go out of Observe and switch filters. I'll need to change the filters to always-on with a few-second ramp, load the filters, and then we can switch back.
[Jenne, Sheila]
Hmmm. Reverting the cutoff frequency filters did not fix the problem. We're leaving them with the old 10Hz filters for now.
We also tried lowering the oplev gains by a factor of 8 on all test masses at the same time (in factor of 2 steps), but that caused CSOFT to start being unhappy, so we put them back to nominal before we lost lock.
We're going to go back to Observe since we don't have another idea right now of how to fix this.
As a note, it looks like this is upconverting around the 36Hz cal lines and broadening them significantly, which is probably part of the reason we're at a reduced range right now.
Still no clever ideas, but it certainly seems like ITMY's oplev or something in the suspension is suspicious, and perhaps imposing noise through the ASC on the other optics. All 4 test masses see the 0.43 Hz oscillation in pitch, but ITMY seems to have many harmonics and other "hair" in the oplev spectrum. Not plotted, but ITMY yaw also sees the 0.43 Hz motion, but the other optics don't seem to see yaw problems.
EDIT to add spot position plot - spots on the test masses aren't moving very much over the last month, so it doesn't seem to be related to that. (Except ETMX yaw - still no idea why that's so scattered)
Beverly, Andy
On 22 Mar, a line of glitches at about 3.3 kHz appeared in apparent coincidence with these ASC problems. A known, always present line at the same frequency shows glitching at about 0.43 Hz (see attached spectrogram). A previous alog (27488) referred to this line as an SRM mode at 3335 Hz. In fact, the SRM spectrum (see attached spectrum) developed a feature at 0.43 Hz on 22 Mar that was not present on 21 Mar.
We haven't checked out anything related to Beverly's suggestion yet, since the 0.43 Hz hasn't seemed to be a problem in the last ~day and we've had other things to worry about. But Sheila's 8Hz oplev cutoffs were in the guardian, so other than the one lock during which I had changed them back to 10Hz, they've been on the 8Hz cutoffs.
The DARM spectrum is noticeably better below about 25Hz, so it's good that Sheila's new oplev cutoffs aren't causing any problems, and are just helping.
I analyzed the HWS data for the lock acquisition that occurred around 1173641000. The "point source" lens appears very quickly (within the first 60s) and then we see a larger thermal blooming over the next few hundred seconds. I've plotted the data below (both gradient field and wavefront OPD). There are a few obviously errant spots in the HWS gradient field data. Since I'm still getting used to Python, I haven't managed to strip these out yet. However, it is safe to ignore them.
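One possible way to strip the errant spots, in case it's useful (my suggestion, not existing HWS code): mask spots whose gradient magnitude deviates from the median by more than a few robust standard deviations. Here `gx` and `gy` are assumed per-spot gradient components.

```python
import numpy as np

def mask_errant_spots(gx, gy, nsigma=5.0):
    """Return the gradient components with outlier spots removed, plus the mask."""
    mag = np.hypot(gx, gy)
    med = np.median(mag)
    mad = np.median(np.abs(mag - med))            # median absolute deviation
    robust_sigma = 1.4826 * mad                   # MAD -> sigma for Gaussian data
    good = np.abs(mag - med) < nsigma * robust_sigma
    return gx[good], gy[good], good
```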
Check out the video too. The next step is to match a COMSOL model of thermal lensing with a point absorber to the data to get a best estimate of the size of the absorber and the power absorbed (preliminary estimates are of the order of 10mW).
FYI: this measurement compares the system before, during and after a lock-acquisition (ignore the title that says "lock-loss"). The previous measurement, 34853, looked at lens decay before, during and after a lock-loss.
For your amusement I've attached a GIF file of the same data. My start time is 79 seconds prior to Aidan's "current time".
The integrated gradient field data for H1ITMX is contained in the attached MAT file. It shows the accumulated optical path distortion (OPD) after 2.75 hours following the lock acquisition around 1173640800. This is the total accumulated OPD for a round trip through the CP+ITMX substrates, reflection off ITMX_HR and back through ITMX+CP substrates.
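For anyone wanting to look at the data, a minimal sketch of reading the attached MAT file follows; the file name and variable names are assumptions, so inspect the keys to find the actual OPD array and coordinate grids.

```python
from scipy.io import loadmat

data = loadmat('H1ITMX_integrated_gradient.mat')             # hypothetical file name
print([k for k in data.keys() if not k.startswith('__')])    # list stored variables

# e.g., if the file holds 'opd', 'x', 'y' (assumed names):
# import matplotlib.pyplot as plt
# plt.pcolormesh(data['x'].ravel(), data['y'].ravel(), data['opd'])
# plt.colorbar(label='round-trip OPD [m]')
# plt.show()
```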
Note: this image is inverted. In this coordinate system, the top of the ITM is at the bottom of the image.
Here is the wavefront data with the superimposed gradient field. The little feature around [+30mm, +20mm] does not appear in the animation of the wavefront or gradient field over much of the preceding 2.75 hours. My suspicion is that this is a data point with a larger variance than the other HWS data points, rather than a true representation of wavefront distortion.
I fitted, by eye, a COMSOL model of an absorber (14mm diameter Gaussian, ~25mW absorbed) to the measured HWS data. I then removed this modeled optical path distortion to get the residual shown in the lower left plot. I then fitted a COMSOL model of 30mW uniform absorption to this residual and subtracted it to get the residual shown in the lower right plot.
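The subtraction itself is straightforward; a minimal sketch follows (array names are assumptions, with the measured OPD and the two COMSOL model OPDs assumed to be on the same grid).

```python
import numpy as np

def residuals(opd_measured, opd_point_model, opd_uniform_model):
    """Subtract the two COMSOL model OPDs from the measured OPD (same grid)."""
    residual_point = opd_measured - opd_point_model        # lower-left plot
    residual_total = residual_point - opd_uniform_model    # lower-right plot
    return residual_point, residual_total
```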
Here's the total fitted distortion from the sum of two COMSOL models (point absorber + uniform absorption):