A little late with this entry due to Mother Nature:
TITLE: Sep 16 EVE Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Patrick
QUICK SUMMARY: Full control room. Wind is blowing just under the 20 mph mark. Seismic activity quieting down from earlier earthquakes. All lights at the Mid, End and LVEA stations are off. Then... the Chilean M8.3 earthquake caused a lockloss. Working with Hugh to get the tripped seismic systems to happy places to ride this out.
Terramon hadn't reported the arrival of this event until it had already "knocked us for a loop".
C. Cahillane

The five sensing function measurements and associated uncertainties at LHO have been analysed. Plot 1 shows the five measurements and models plotted together, alongside the residuals. The measurements are from August 26, 28, 29, and September 8 and 10. Plot 2 shows the weighted mean of the five measurement residuals in black, a naive systematic fit in red, and the weighted mean / systematic fit in blue.

To be consistent with what I've done for actuation, right now I'm going to quadratically sum the systematic and statistical uncertainty, and zeroth order extrapolate it. For the sensing function though this is very clearly the incorrect approach, since the systematics on low freq sensing mag or high freq sensing phase are not simply going to stop blowing up at the end of the measurement. I will certainly have to go back and fix this.

Also, since the systematics are so large here, I am thinking about applying my own "correction" to the sensing function the model gives me and then plotting only statistical uncertainty. I think this plot will display the directionality of our error alongside the statistical uncertainty.

Things to do:
1) Calculate weighted mean of actuation residuals for each measurement time for Sudarshan
2) Zeroth Order -> First Order Extrapolation
3) Find systematics of actuation stages
4) Apply correct A_pu uncertainty calculations
5) Apply systematics "corrections" to my uncertainty models to get directionality of error combined with statistical uncertainty
6) Go to LLO and do it all again!
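As a rough illustration of the combination described above (weighted mean of the measurement residuals, with statistical and systematic pieces summed in quadrature), here is a minimal numpy sketch; the array names and the naive systematic estimate are placeholders, not the actual calibration-group code:

```python
import numpy as np

def combine_residuals(residuals, stat_sigmas):
    """residuals:   complex measurement/model ratios, shape (n_meas, n_freq)
    stat_sigmas: per-measurement statistical uncertainties, same shape."""
    w = 1.0 / stat_sigmas**2                      # inverse-variance weights
    mean = np.sum(w * residuals, axis=0) / np.sum(w, axis=0)
    stat = np.sqrt(1.0 / np.sum(w, axis=0))       # uncertainty of the weighted mean
    syst = np.abs(mean - 1.0)                     # naive systematic: departure of the mean from the model
    total = np.sqrt(stat**2 + syst**2)            # quadratic sum of statistical and systematic
    return mean, total
```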
J. Kissel, with help from B. Weaver (remotely)

While the earth was still cruising at 10 [um/s], ringing down from the giant EQ in Chile, I gathered some charge measurements on both ETMX and ETMY. I'm unsure of the data quality, because we were only able to get the SEI system to DAMPED (HPI on position loops, ISIs damped only with low-gained GS13s), but the results live here:

/ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ETMX/SAGL3/Data/
    data_2015-09-17-00-37-40
    data_2015-09-17-00-50-55
    data_2015-09-17-01-04-09
    data_2015-09-17-01-17-51
    data_2015-09-17-01-31-53

/ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ETMY/SAGL3/Data/
    data_2015-09-17-00-37-41
    data_2015-09-17-00-43-23
    data_2015-09-17-00-56-34
    data_2015-09-17-01-09-50
    data_2015-09-17-01-24-05
    data_2015-09-17-01-37-59
    data_2015-09-17-01-39-45
    data_2015-09-17-01-52-34

Note that after the charge measurements were complete, I used the SDF system and conlog to restore the ESD driver / bias / linearization settings and the alignment offsets, respectively. Will post results tomorrow.
RickS, DarkhanT, JeffK, CraigC, SudarshanK
After discovering a discrepancy in phase of about 137 degrees between measurement and model for the reference value of kappa_tst for LHO, we applied this phase as a correction factor and computed the kappas. Most of the parameters are close to their nominal values after applying this factor, except that the imaginary part of kappa_tst is 0.05 (nominal 0) and the real part of kappa_pu is about 8% off from its nominal value of 1. The cavity pole is also off by about 20 Hz from its nominal value of 341 Hz.
A similar correction factor (a phase of 225 degrees) was applied to the LLO reference value of kappa_tst as well. For LLO, the imaginary parts of kappa_tst and kappa_pu are close to zero, whereas the real parts are off by about 7% for kappa_pu and 10% for kappa_tst. kappa_C is close to 1 (off by a few percent on some lock stretches) and the cavity pole is off by about 10 Hz from its nominal value of 388 Hz.
The 0 (zero) on the X-axis of each plot marks the time when the most recent calibration data was taken for each observatory. For LLO it is 14 Sep 2015 07:23:32 UTC and for LHO it is 10 Sep 2015 23:35:32 UTC.
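For illustration only (not the actual calibration pipeline code), applying such a phase correction to a complex reference value of kappa_tst would look roughly like the following; the numerical values here are placeholders:

```python
import numpy as np

phase_correction_deg = 137.0   # LHO discrepancy quoted above; 225 degrees for LLO

# Placeholder complex reference value of kappa_tst (magnitude ~1, phase off by the discrepancy)
kappa_tst_ref = 1.0 * np.exp(1j * np.deg2rad(137.0))

# Rotate the reference by the measured phase discrepancy so the recomputed
# kappas come out near their nominal (real, ~1) values
kappa_tst_corrected = kappa_tst_ref * np.exp(-1j * np.deg2rad(phase_correction_deg))

print(kappa_tst_corrected.real, kappa_tst_corrected.imag)   # -> ~1.0, ~0.0
```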
Correction on that last entry. We bumped the fan at ~1545 PT.
STATE of H1: Out of lock, riding out earthquake
SUPPORT: Sheila, Jenne
SHIFT SUMMARY: After Sheila and Jenne diagnosed the lock loss from the DSOFT loop, we were able to get back to observing without trouble. Took out of observing briefly to allow Sheila and TJ to load guardian code changes. Just lost lock to an earthquake in Chile.
INCOMING OPERATOR: Ed
ACTIVITY LOG:
21:51 UTC Sheila starting WP 5502, TJ starting WP 5499.
22:08 UTC Out of observing for Sheila and TJ to load guardian changes.
22:09 UTC Back to observing.
22:37 UTC Jeff B. to area in mechanical building near HEPI pump mezzanine to retrieve small bin of parts.
22:43 UTC Jeff B. back.
23:13 UTC Lost lock to a large earthquake in Chile.
I turned off corner station SF-3 at ~1350 PT. This fan has been suspect for some time (see FRS 3263), and while showing Tim Nelson from LLO our supply fans, we found what appears to be a bearing failure, judging by the excessive noise and vibration. I described this to John W. at the All Hands safety meeting, and afterwards, at ~1545 PT, we went and bumped the fan a couple of times and heard the noise and felt the vibration. I will put in a work permit to disassemble this fan and repair it as soon as I am able to work on it. The fan is locked out. For more information see CSFanStatus_3.
While Sheila was making her Guardian changes, I got the OK to remove the BRS tests and add 2 others to DIAG_MAIN via WP 5499.
Tests removed:
Tests added:
The code was tested, then saved and loaded into the node. I have not committed this to the svn yet as we are still locked.
Svn'd after the earthquake got us.
Not sure why anything would prevent you from committing to the SVN....
Attached is a gallery of 5 "dust" glitches. Still clueless as to what they are, but:
- ETMY saturation is a symptom, not a cause.
- It is not possible to produce such a white glitch from saturating a drive.
- The DCPD spectrum shows a roll-off for all of them.
- But the roll-off frequency (i.e. glitch duration) varies significantly, from about 300 Hz to 3 kHz.

Example 2: GPS: 1126294545  UTC: Sep 14 2015 19:35:28 UTC  ETMY saturation: yes
Example 3: GPS: 1126437892  UTC: Sep 16 2015 11:24:35 UTC  ETMY saturation: yes
Example 4: GPS: 1126434798  UTC: Sep 16 2015 10:33:01 UTC  ETMY saturation: yes
Example 5: GPS: 1126441165  UTC: Sep 16 2015 12:19:08 UTC  ETMY saturation: yes
Example 6: GPS: 1126442379  UTC: Sep 16 2015 12:39:22 UTC  ETMY saturation: yes
With Hang's help, I managed to investigate these glitches with the new lockloss tool using SUS-ETMY_L3_MASTER_OUT_LL_DQ as a reference channel. The script couldn't find any other optics that glitch prior to ETMY. Sometimes the glitches are seen by ETMX 30-40 milliseconds later.
I've attached the plot of the glitches at the times you've given. I've also attached the list of channels I told the script to look at, basically all the SUS MASTER OUT DQ channels. Please let me know if you have any suggestions on where else I should look.
Attached are time traces of the DCPD_SUM for the 5 examples.
We just loaded two minor guardian changes that should save us some time and locklosses during acquisition. They don't change anything about the configuration of the IFO in observing mode.
The first was to fix the issue that Jenne and Patrick wrote about in 21508. When the SOFT loops had not converged before the gain was increased, it could cause us to lose lock (which could be because the SRC1 loop is strongly contaminated by these signals, and we see POP90 running away in the usual way). I've simply added another if statement to the ENGAGE_ASC_PART3 state, which will wait for all the SOFT loop control signals to become less than 500:
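Something roughly along these lines; this is a sketch only, since the actual state code lives in the ISC_LOCK guardian, ezca is the Guardian EPICS interface, and the channel names below are placeholders:

```python
def soft_loops_converged(ezca, threshold=500):
    """Return True once all of the SOFT loop control signals are below threshold."""
    soft_ctrl_channels = [
        'ASC-DSOFT_P_OUTPUT', 'ASC-DSOFT_Y_OUTPUT',   # placeholder channel names
        'ASC-CSOFT_P_OUTPUT', 'ASC-CSOFT_Y_OUTPUT',
    ]
    return all(abs(ezca[chan]) < threshold for chan in soft_ctrl_channels)
```

In the state's run method, returning False until this check passes keeps Guardian in ENGAGE_ASC_PART3 and delays the gain increase until the loops have settled.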
The Initial Alignment Checklist and ops wiki page's "Initial alignment brief version" have been updated to reflect this change.
When you compare "H1 SNSW EFFECTIVE RANGE (MPC) (TSeries)" data in DMT SenseMonitor_CAL_H1 with its copy in EPICS (H1:CDS-SENSEMON_CAL_SNSW_EFFECTIVE_RANGE_MPC), you will find that the EPICS data is "delayed" from the DMT data by about 109 seconds (109.375 sec in this example, I don't know if it varies with time significantly).
In the attached, vertical lines are minute markers where GPS second is divisible by 60. Bottom is the DMT trend, top is its EPICS copy. In the second attachment you see that this results in the minute trend of this EPICS range data becoming a mixture of DMT trend from 1 minute and 2 minutes ago.
This is harmless most of the time, but if you want to see if, e.g., a particular glitch caused the inspiral range to drop, you need to do either mental math or real math.
(Out of this 109 seconds, 60 should come from the fact that DMT takes 60 seconds of data to calculate one data point and puts the start time of this 1 min window as the time stamp. Note that this start time is always at the minute boundary where the GPS second is divisible by 60. The remaining 49 seconds should be the sum of various latencies on the DMT end as well as in the copying mechanism.)
The 109 s delay is a little higher than expected, but not too strange. I'm not sure where the DMT marks the time: at the start, middle, or end of the minute it outputs.
| Start Time | Max End Time | Stage |
| 0 | 60 | Data being calculated in the DMT. |
| 60 | 90 | The DMT to EPICS IOC queries the DMT every 30 s. |
| 90 | 91 | The EDCU should sample it at 16 Hz and send it to the frame writer. |
The 30s sample rate of the DMT to EPICS IOC is configurable, but was chosen as a good sample rate for a data source that produces data every 60 seconds.
It should also be noted that, at least at LHO, we do not make an effort to coordinate the sampling time (i.e., which seconds within the minute the queries happen) with the DMT. So the actual delay time may change if the IOC gets restarted.
EDITED TO ADD:
Also, for this channel we record the GPS time that DMT asserts is associated with each sample. That way you should be able to get the offset.
The value is available in H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS
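So, assuming pyepics and a working channel-access environment, the instantaneous offset could be read out with something like the following sketch (the GPS conversion here is a crude Unix-time approximation):

```python
import time
from epics import caget   # pyepics

# GPS time the DMT associates with the current range sample (channel quoted above)
dmt_gps = caget('H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS')

# Approximate current GPS time from Unix time (GPS epoch 1980-01-06;
# the GPS-UTC leap-second offset was 17 s in 2015); a proper GPS library is better.
now_gps = time.time() - 315964800 + 17

print('EPICS copy lags the DMT time stamp by about %.0f s' % (now_gps - dmt_gps))
```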
20:28 UTC Back to observing mode.
C. Cahillane

I have generated the latest uncertainty results for LHO. I have characterized the uncertainty in the actuation function by quadratically summing the statistical and systematic errors. There are more elegant ways to combine systematic and statistical errors, but for now we will see if this is sufficient.

Plots 1 and 2 are the Mag and Phase components of σ_h^2, and Plots 3 and 4 are the Mag and Phase of σ_h itself. We have greater than 10% and 10 degrees uncertainty just below 100 Hz with our current analysis, but this is just an update with inflated flat uncertainties in the kappas and f_c.

Note that the linear sum of our PUM and UIM stage uncertainties into our A_pu uncertainty is not correct, but only an estimate used for calculations in the meantime. I have performed the correct calculations in Mathematica and will implement them given time.

The following are the sigma values I have used:
σ_|A_tst| = A_coeff_sigma_mag_A_tst .* abs(A_tst);
σ_|A_pu| = A_coeff_sigma_mag_A_pum .* abs(A_pum) + A_coeff_sigma_mag_A_uim .* abs(A_uim);
σ_|C_r| = 1.0 .* abs(C_r);
σ_|kappa_tst| = 5;
σ_|kappa_pu| = 5;
σ_φ_A_tst = A_coeff_sigma_phase_A_tst;
σ_φ_A_pu = A_coeff_sigma_phase_A_pum + A_coeff_sigma_phase_A_uim;
σ_φ_C_r = 5;
σ_φ_kappa_tst = 5;
σ_φ_kappa_pu = 5;
σ_kappa_C = 5;
σ_f_c = 23.4731;

I have inflated the errors in the kappas and f_c to 5% and 5 degrees based on the systematics seen from the calibration group calculations. We hope that we can deflate these errors when the calibration group discovers the systematics responsible for our kappa deviations from 1.

The next step is to propagate uncertainty for the sensing function, after which I will have finished analysing the uncertainty of measurements at LHO. Then I may move on to LLO.

Steps:
1) Get sensing function uncertainties at LHO
2) Implement correct A_pu uncertainty propagation
3) Check sanity
4) Move on to LLO uncertainty measurements (?)
Correct for kappa and f_c uncertainties as better calculations are made available. For now, apply flat 5% and 5 degrees.
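As a toy version of the quadratic summing described above (a simplification only; the real propagation weights each term by its sensitivity in the response function, and the names here are placeholders):

```python
import numpy as np

def sigma_quadrature(rel_mag_terms, phase_terms_deg):
    """Combine independent relative-magnitude and phase uncertainty terms
    in quadrature, frequency bin by frequency bin."""
    sigma_mag = np.sqrt(np.sum(np.asarray(rel_mag_terms)**2, axis=0))
    sigma_phase = np.sqrt(np.sum(np.asarray(phase_terms_deg)**2, axis=0))
    return sigma_mag, sigma_phase

# Example with the flat 5% / 5 degree values quoted for the kappas:
mag, phase = sigma_quadrature([0.05, 0.05, 0.05], [5.0, 5.0, 5.0])
```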
15:55 David N. through gate.
Had trouble locking on DRMI. Tried locking on just PRMI. Had a few locks on PRMI, but it would not stay. Eventually gave up and started an initial alignment at 17:47 UTC. Finished initial alignment with help from Sheila at 18:55 UTC. Had trouble with ALS during initial alignment. Sheila and Jenne now suspect we may be losing lock when engaging the gain for the DSOFT loop. They are starting a test now.
[Sheila, Patrick, Jenne]
We discovered that the problem was that we had some medium-large offsets on the inputs of the SOFT yaw loops, and when the gain was being increased in ENGAGE_ASC_PART3, the loops started to run away.
For this lock, we hand-ran the "main" part of ENGAGE_ASC_PART3, which turns the loops on with low gain. Once the loops had converged (~ 10 minutes for all of them to finish!), we let guardian run the full ENGAGE_ASC_PART3 state, and continue on. We have just arrived at NOMINAL_LOW_NOISE, and we don't have any SDF diffs, so there is no net change in the configuration of the IFO.
We would like to think about putting a "convergence checker" in this guardian state, so that it waits until the loops are happy before continuing. Sheila implemented this with great success several weeks ago in the DRMI ASC parts of the guardian code, so it should be relatively simple to do the same thing for the arm loops.
Things were quiet. Then Stefan arrived, and I realized I had set the observatory mode but forgot the intent bit, which spiraled into a bunch of SDF differences. The LSC guardian showed a difference on a t-ramp (annoying that this kind of difference would keep us from going into Observe), and ASC had two diffs that I guess Evan logged about last night but didn't accept. Then Stefan went and set a bunch of ODC masks (this also would have kept us from going into Observe, so also annoying) as well as tuning the whitening on the IM4 QPD (which was saturating; Stefan assures me it is out of loop so shouldn't affect anything). Guardian recovered from the earlier lock loss in 20 minutes (not annoying!) back to 70 Mpc, and it has been otherwise quiet.
We just came out of Observe for a minute so Stefan could set more ODC bits. Service has resumed.
The ODC status should not affect OBSERVATION READY in any way. If it does, then ODC is misconfigured and needs to be fixed.
SDF is unfortunately looking at all channels, including ODC mask bits. When anyone updates the ODC masks, the SDF goes red, and you then have to accept the ODC mask channel value in SDF, or ignore it. Again, it's a one-time thing to update these ODC values, just late in the game and not all at one time.