RickS, DarkhanT, JeffK, CraigC, SudarshanK
After discovering a phase discrepancy of about 137 degrees between measurement and model for the reference value of kappa_tst for LHO, we applied this phase as a correction factor and computed the kappas. Most of the parameters are close to their nominal values after applying this factor, except that the imaginary part of kappa_tst is 0.05 (nominal 0) and the real part of kappa_pu is about 8% off from its nominal value of 1. The cavity pole is also off by about 20 Hz from its nominal value of 341 Hz.
A similar correction factor (a phase of 225 degrees) was applied to the LLO reference value of kappa_tst as well. For LLO, the imaginary parts of kappa_tst and kappa_pu are close to zero, whereas the real parts are off by about 7% for kappa_pu and 10% for kappa_tst. kappa_C is close to 1 (off by a few percent on some lock stretches) and the cavity pole is off by about 10 Hz from its nominal value of 388 Hz.
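As a rough illustration of how such a phase correction is applied to a complex kappa, here is a minimal sketch; the sign convention and the placeholder measured value are assumptions for illustration only, not taken from the actual calibration scripts:

import numpy as np

# Hypothetical measured reference value carrying the ~137 degree phase discrepancy.
kappa_tst_ref = 1.0 * np.exp(1j * np.deg2rad(137.0))

# Apply the phase as a correction factor (sign convention assumed; 225 deg for LLO).
phi_correction = np.deg2rad(137.0)
kappa_tst_corrected = kappa_tst_ref * np.exp(-1j * phi_correction)

print(kappa_tst_corrected)   # ~1 + 0j after the correction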
The 0 (zero) on the x-axis of each plot marks the time when the most recent calibration data was taken for each observatory. For LHO it is 14 Sep 2015 07:23:32 UTC and for LLO it is 10 Sep 2015 23:35:32 UTC.
Correction on that last entry. We bumped the fan at ~1545 PT.
STATE Of H1: Out of lock, riding out earthquake
SUPPORT: Sheila, Jenne
SHIFT SUMMARY: After Sheila and Jenne diagnosed the lock loss from the DSOFT loop, we were able to get back to observing without trouble. Took out of observing briefly to allow Sheila and TJ to load guardian code changes. Just lost lock to an earthquake in Chile.
INCOMING OPERATOR: Ed
ACTIVITY LOG:
21:51 UTC Sheila starting WP 5502, TJ starting WP 5499.
22:08 UTC Out of Observing for Sheila and TJ to load guardian changes.
22:09 UTC Back to observing.
22:37 UTC Jeff B. to area in mechanical building near HEPI pump mezzanine to retrieve small bin of parts.
22:43 UTC Jeff B. back.
23:13 UTC Lost lock to a large earthquake in Chile.
I turned off corner station SF-3 @ ~1350 PT. This fan has been suspect for some time (see FRS 3263), and while showing Tim Nelson from LLO our supply fans, I noticed what appears to be a bearing failure, judging by the excessive noise and vibration. I described this to John W. at the All Hands Safety meeting, and afterwards (~1545 PT) we went and bumped the fan a couple of times and heard the noise and felt the vibration. I will put in a work permit to disassemble this fan and repair it as soon as I am able to work on it. The fan is locked out. For more information see CSFanStatus_3.
While Sheila was making her Guardian changes, I got the OK to remove the BRS tests and add 2 others to DIAG_MAIN via WP 5499.
Tests removed:
Tests added:
The code was tested, then saved and loaded into the node. I have not committed this to the svn yet as we are still locked.
Svn'd after the earthquake got us.
Not sure why anything would prevent you from committing to the SVN....
Attached is a gallery of 5 "dust" glitches. Still clueless about what they are, but:
- ETMY saturation is a symptom, not a cause.
- It is not possible to produce such a white glitch by saturating a drive.
- The DCPD spectrum shows a roll-off for all of them.
- But the roll-off frequency (i.e. glitch duration) varies significantly, from about 300 Hz to 3 kHz.
Example 2: GPS: 1126294545, UTC: Sep 14 2015 19:35:28, ETMY saturation: yes
Example 3: GPS: 1126437892, UTC: Sep 16 2015 11:24:35, ETMY saturation: yes
Example 4: GPS: 1126434798, UTC: Sep 16 2015 10:33:01, ETMY saturation: yes
Example 5: GPS: 1126441165, UTC: Sep 16 2015 12:19:08, ETMY saturation: yes
Example 6: GPS: 1126442379, UTC: Sep 16 2015 12:39:22, ETMY saturation: yes
With Hang's help, I managed to investigate these glitches with the new lockloss tool, using SUS-ETMY_L3_MASTER_OUT_LL_DQ as a reference channel. The script couldn't find any other optics that glitch prior to ETMY, and sometimes the glitches are seen by ETMX 30-40 milliseconds after.
I've attached the plot of the glitches at the times you've given. I've also attached the list of channels I told the script to look at: basically all the SUS MASTER OUT DQ channels. Please let me know if you have any suggestions on where else I should look.
Attached are time traces of the DCPD_SUM for the 5 examples.
We just loaded two minor guardian changes that should save us some time and locklosses during acquisition. They don't change anything about the configuration of the IFO in observing mode.
The first was to fix the issue that Jenne and Patrick wrote about in 21508. When the SOFT loops had not converged before the gain was increased, it could cause us to lose lock (which could be because the SRC1 loop is strongly contaminated by these signals, and we see POP90 running away in the usual way). I've simply added another if statement to the ENGAGE_ASC part 3 state, which will wait for all the SOFT loop control signals to become less than 500:
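Roughly, the check amounts to something like the following minimal sketch (the channel names are illustrative placeholders, not necessarily the ones used in the real ISC_LOCK guardian code; 'ezca' is the EPICS access object guardian provides to its system modules):

# Illustrative only: wait until all SOFT loop control signals are small
# before ramping the gain.
soft_loop_outputs = ['ASC-DSOFT_P_OUT16', 'ASC-DSOFT_Y_OUT16',
                     'ASC-CSOFT_P_OUT16', 'ASC-CSOFT_Y_OUT16']

if all(abs(ezca[chan]) < 500 for chan in soft_loop_outputs):
    # SOFT loops have converged; it is now safe to increase the gain
    ...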
The Initial Alignment Checklist and ops wiki page's "Initial alignment brief version" have been updated to reflect this change.
When you compare "H1 SNSW EFFECTIVE RANGE (MPC) (TSeries)" data in DMT SenseMonitor_CAL_H1 with its copy in EPICS (H1:CDS-SENSEMON_CAL_SNSW_EFFECTIVE_RANGE_MPC), you will find that the EPICS data is "delayed" from the DMT data by about 109 seconds (109.375 sec in this example, I don't know if it varies with time significantly).
In the attached, vertical lines are minute markers where GPS second is divisible by 60. Bottom is the DMT trend, top is its EPICS copy. In the second attachment you see that this results in the minute trend of this EPICS range data becoming a mixture of DMT trend from 1 minute and 2 minutes ago.
This is harmless most of the time, but if you want to see if, e.g., a particular glitch caused the inspiral range to drop, you need to do some mental math or some real math.
(Out of this 109 seconds, 60 should come from the fact that the DMT takes 60 seconds of data to calculate one data point and puts the start time of this 1 min window as the time stamp. Note that this start time is always at a minute boundary where the GPS second is divisible by 60. The remaining 49 seconds should be the sum of various latencies on the DMT end as well as in the copying mechanism.)
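For example, a rough illustration of the arithmetic only, using the ~109 s figure above (the exact latency may differ):

# Illustrative arithmetic: when does a glitch show up in the EPICS copy?
event_gps = 1126441165                             # example glitch time from the gallery above
dmt_window_start = event_gps - (event_gps % 60)    # DMT stamps the start of the 60 s window
epics_arrival = dmt_window_start + 109             # ~60 s window + ~49 s of latency
print(epics_arrival - event_gps)                   # seconds between the glitch and the range dip in EPICS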
The 109 s delay is a little higher than expected, but not too strange. I'm not sure where the DMT marks the time, i.e. whether it is the start, middle, or end of the minute it outputs.
Start Time | Max End Time | Stage
0          | 60           | Data being calculated in the DMT.
60         | 90           | The DMT to EPICS IOC queries the DMT every 30s.
90         | 91           | The EDCU should sample it at 16 Hz and send it to the frame writer.
The 30s sample rate of the DMT to EPICS IOC is configurable, but was chosen as a good sample rate for a data source that produces data every 60 seconds.
It should also be noted that, at least at LHO, we do not make an effort to coordinate the sampling time (i.e. which seconds in the minute the queries happen) with the DMT. So the actual delay time may change if the IOC gets restarted.
EDITED TO ADD:
Also, for this channel we record the GPS time that DMT asserts is associated with each sample. That way you should be able to get the offset.
The value is available in H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS
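If one wanted to measure this delay directly, something like the following sketch could work (gwpy is assumed to be available; the channel spelling and frame/NDS access details may differ):

import numpy as np
from gwpy.timeseries import TimeSeries

# Fetch the companion GPS channel over an example span.
start, end = 1126440000, 1126440600
gps_copy = TimeSeries.get('H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS', start, end)

# Each EPICS sample carries the DMT time stamp (the start of its 60 s window),
# so sample time minus carried time stamp gives the total delay.
delay = gps_copy.times.value - gps_copy.value
print('median delay: %.1f s' % float(np.median(delay)))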
20:28 UTC Back to observing mode.
C. Cahillane
I have generated the latest uncertainty results for LHO. I have characterized the uncertainty in the actuation function by quadratically summing the statistical and systematic errors. There are more elegant ways to combine systematic and statistical errors, but for now we will see if this is sufficient.
Plots 1 and 2 are the Mag and Phase components of σ_h^2, and Plots 3 and 4 are the Mag and Phase of σ_h itself. We have greater than 10% and 10 degree uncertainty just below 100 Hz with our current analysis, but this is just an update with inflated flat uncertainties in the kappas and f_c.
Note that the linear sum of our PUM and UIM stage uncertainties into our A_pu uncertainty is not correct, but only an estimate used for calculations in the meantime. I have performed the correct calculations in Mathematica and will implement them given time.
The following are the sigma values I have used:
σ_|A_tst| = A_coeff_sigma_mag_A_tst .* abs(A_tst);
σ_|A_pu| = A_coeff_sigma_mag_A_pum .* abs(A_pum) + A_coeff_sigma_mag_A_uim .* abs(A_uim);
σ_|C_r| = 1.0 .* abs(C_r);
σ_|kappa_tst| = 5;
σ_|kappa_pu| = 5;
σ_φ_A_tst = A_coeff_sigma_phase_A_tst;
σ_φ_A_pu = A_coeff_sigma_phase_A_pum + A_coeff_sigma_phase_A_uim;
σ_φ_C_r = 5;
σ_φ_kappa_tst = 5;
σ_φ_kappa_pu = 5;
σ_kappa_C = 5;
σ_f_c = 23.4731;
I have inflated the errors in the kappas and f_c to 5% and 5 degrees based on the systematics seen in the calibration group calculations. We hope that we can deflate these errors when the calibration group discovers the systematics responsible for our kappa deviations from 1.
The next step is to propagate uncertainty for the sensing function, after which I will have finished analysing the uncertainty of measurements at LHO. Then I may move on to LLO.
Steps:
1) Get sensing function uncertainties at LHO
2) Implement correct A_pu uncertainty propagation
3) Check sanity
4) Move on to LLO uncertainty measurements (?)
Correct for kappa and f_c uncertainties as better calculations are made available. For now, apply flat 5% and 5 degrees.
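For reference, a minimal sketch of the quadrature combination described above (the function and variable names here are placeholders, not those of the actual analysis code):

import numpy as np

# Combine statistical and systematic errors in quadrature (illustrative only).
def combine_in_quadrature(sigma_stat, sigma_sys):
    return np.sqrt(np.asarray(sigma_stat)**2 + np.asarray(sigma_sys)**2)

# e.g. sigma_total = combine_in_quadrature(sigma_A_tst_stat, sigma_A_tst_sys)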
15:55 David N. through gate.
Had trouble locking on DRMI. Tried locking on just PRMI. Had a few locks on PRMI, but it would not stay. Eventually gave up and started an initial alignment at 17:47 UTC. Finished initial alignment with help from Sheila at 18:55 UTC. Had trouble with ALS during initial alignment. Sheila and Jenne now suspect we may be losing lock while engaging the gain for the DSOFT loop. They are starting a test now.
[Sheila, Patrick, Jenne]
We discovered that the problem was that we had some medium-large offsets on the input of the SOFT yaw loops, and when the gain was being increased in ENGAGE_ASC_PART3, the loops started to run away.
For this lock, we hand-ran the "main" part of ENGAGE_ASC_PART3, which turns the loops on with low gain. Once the loops had converged (~ 10 minutes for all of them to finish!), we let guardian run the full ENGAGE_ASC_PART3 state, and continue on. We have just arrived at NOMINAL_LOW_NOISE, and we don't have any SDF diffs, so there is no net change in the configuration of the IFO.
We would like to think about putting a "convergence checker" in this guardian state, so that it waits until the loops are happy before continuing. Sheila implemented this with great success several weeks ago in the DRMI ASC parts of the guardian code, so it should be relatively simple to do the same thing for the arm loops.
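As a rough illustration, such a convergence-checking state might look something like this in Guardian (state and channel names are placeholders, not the actual ISC_LOCK code; 'ezca' is the EPICS access object guardian provides):

from guardian import GuardState

class WAIT_FOR_ARM_ASC_CONVERGENCE(GuardState):
    # Illustrative only: hold here until the arm SOFT loops have converged.
    def run(self):
        loops = ['ASC-DSOFT_Y_OUT16', 'ASC-CSOFT_Y_OUT16']   # placeholder channels
        # Returning True tells guardian the state is complete and it may move on.
        return all(abs(ezca[chan]) < 500 for chan in loops)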
TITLE: Sep 16 OWL Shift 07:00-15:00UTC (00:00-08:00 PDT), all times posted in UTC
STATE Of H1: Was observing at ~70 Mpc for 4 hours after the 6.3M earthquake in Indonesia. We lost lock again due to a 6.0M earthquake in Papua New Guinea. Mother Nature wasn't happy.
SUPPORT: Rick S., Darkhan T., Corey G.
SHIFT SUMMARY: After the 6.3M earthquake in Indonesia, things had been quiet until another 6.0M earthquake in Papua New Guinea. There were several small earthquakes (less than magnitude 3) in California, Oregon, and Alaska, but the interferometer hardly noticed them. Wind speed >= 10 mph. There were 4 ETMY glitches loud enough to be noticeable on the range plot. The overall trend of the microseism is slowly coming down. LLO has been down since the earthquake; Danny said there was a problem with the ALS PDH servo board.
INCOMING OPERATOR: Patrick
Activity log:
07:00 Took over from Ed. The ifo was locked and observing. No more nasty 2Hz comb (yay!).
08:07 Rick called about LVEA light. I switched to commissioning to go into the LVEA to turn off the light.
08:12 Back to Observing. The light in the PSL area was still on. Called Rick again. We thought it was the light from the change room.
08:30 After LLO went down I switched the intent bit to commissioning again and went to find the PSL change room light switch. It wasn't the change room light. It was the crane.
Came back a few minutes later and found out we had a lock loss at 08:01 due to an earthquake in Indonesia.
09:02 Began lock acquisition. The earthquake band amplitude was still 10^-1 um/s but ALS-TRX and TRY didn't glitch much so I gave it a try.
09:15 Craig and Darkhan left the control room.
09:18 Corey called about the crane light. I went out to the LVEA to take care of it. I switched off the door access reader on my way out.
09:23 Back in the control room. I realized that the operating mode was still commissioning; it should have been Environmental for ~30 minutes. Since I had started the lock acquisition, I switched the operating mode to LOCK ACQUISITION.
09:47 Back to Observing. The ifo had reached NOMINAL_LOW_NOISE a few minutes earlier, but I thought the 3001 Hz Pcal line wasn't there. I called Darkhan, and apparently the spectrum needed a narrower BW for the line to show (I saw it at 0.01 Hz BW).
12:00 Karen and Christina on site
13:00 Richard on site.
14:28 Lockloss. 6.0M earthquake in Papua New Guinea.
15:00 Handing off to Patrick
Note:
- EX dust monitor looks bogus. The alarm that went off probably wasn't real either.
STATE Of H1: Earthquake
OUTGOING OPERATOR: Nutsinee
QUICK SUMMARY: Unlocked, possibly due to earthquake. Winds less than 10 mph. Peak to around 1 um/s in 0.03 - 0.1 Hz seismic. 0.1 - 0.3 Hz seismic around 10^-1 um/s. Lights off in LVEA. Lights appear off in PSL enclosure. Lights appear off at mid and end stations.
ER8 Day 30. h1ecaty1 all three PLCs restarted due to Beckhoff chassis work.
Tue 15 Sep 2015 11:34:25 h1ecaty1plc[1,2,3]
Things were quiet. Then Stefan arrived, and I realized I had set the observatory mode but forgot the Intent Bit, which spiraled into a bunch of SDF differences. The LSC guardian showed a difference on a t-ramp (annoying that this kind of difference would keep us from going into Observe), and ASC had two diffs that I guess Evan logged about last night but didn't accept. Then Stefan went and set a bunch of ODC masks (this also would have kept us from going into Observe, so also annoying), as well as tuning the whitening on the IM4 QPD (which was saturating; Stefan assures me it is out of loop so shouldn't affect anything). Guardian recovered from the earlier lock loss in 20 minutes (not annoying!) back to 70 Mpc, and it has been otherwise quiet.
We just came out of Observe for a minute so Stefan could set more ODC bits. Service has resumed.
The ODC status should not affect OBSERVATION READY in any way. If it does, then ODC is misconfigured and needs to be fixed.
SDF is unfortunately looking at all channels, including ODC mask bits. When anyone updates the ODC masks, SDF goes red, and you then have to accept the ODC mask channel value in SDF, or ignore it. Again, it's a one-time thing to update these ODC values, just late in the game and not all at one time.