While investigating the 2 Hz comb in DARM, Sheila and Patrick found communication errors with Beckhoff at EY. The End Station 2 EtherCAT chassis was brought back to the lab for further troubleshooting, a spare unit was installed, and the Beckhoff computer was restarted. While troubleshooting the original unit we found a bad EK1100 coupler on the third rail (left) of the chassis. We re-scanned the unit and found 5 terminals, all EL3104 (analog inputs), that were giving us errors. After re-scanning, checking internal cabling, and power cycling, we were only able to reproduce 3 errors. We tried multiple times to reproduce all five errors, but could not. Eventually, after a power cycle, none of the errors we had previously seen could be reproduced. After multiple re-scans and power cycles with no errors showing up, we used a voltage calibrator and injected a 5 V DC signal into the EL3104 terminals. It was later decided to reinstall the unit back at EY.
I was trying to use the fast ODC channels (e.g. H1:SUS-ETMY_ODC_CHANNEL_OUT_DQ) to track down ETMY saturation causes, but ran into a number of bugs in our data access tools that made it impossible:
1) dataviewer can play the trend data of integer ODC channels, but not the full data.
2) both dtt (diaggui) and lockloss (pydv) can't access integer ODC channels.
3) using NDS2, the ODC channel is down-sampled to 256 Hz, making it useless for my purpose. (Why do we do that? That channel compresses either way, and takes the same disk space.)
4) dtt (diaggui) can access that channel, but only with the "now" setting, and IT RETURNS DATA WITH A TIME STAMP IN THE FUTURE!!!! (The attached plot has a UTC clock in the terminal, and dtt's reported time stamp...)
sigh... I give up for today.
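For the record, the kind of NDS2 fetch I was attempting looks roughly like this (a minimal sketch with the nds2 Python client; the LHO server name is the usual one, and the GPS times are placeholders, not the times I actually used):

    # Sketch: fetch 60 s of the full-rate ODC channel over NDS2.
    # In practice the data came back down-sampled to 256 Hz (bug 3 above).
    import nds2

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    start, stop = 1126000000, 1126000060   # placeholder GPS times
    bufs = conn.fetch(start, stop, ['H1:SUS-ETMY_ODC_CHANNEL_OUT_DQ'])
    print(len(bufs[0].data) / 60.0)        # effective sample rate in Hz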
TITLE: Sep 15 EVE Shift 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Locking
OUTGOING OPERATOR: Patrick/Jeff B
QUICK SUMMARY: Full control room, mostly working on last night's 2 Hz comb problem. Patrick is in the H2 building examining the EtherCAT chassis from EY, responsible for the ring heaters, for badness. Lights are on in the LVEA. Wind is below 20 mph. Seismic activity looks a little rumbly, perhaps from a quake 7 hours ago and subsequent smaller ones in various areas. IFO is locked again for the first time in 7 hours. We are currently going to make some compensatory ring heater adjustments.
Hugh, Sheila, Jenne, filling in for Patrick
We are filling in for Patrick at the moment because he is helping Fil use the Beckhoff test stand in the H2 building. We are trying to lock now, after Patrick and Jenne did a full initial alignment.
This morning, Fil, Vern, Elli, Nutsinee and I swapped out the End Station 2 Beckhoff Chassis, because all the modules on the right rail were in the state INIT NO_COMM, and we could not change their state by requesting different states (including safe) or clearing errors. We swapped the chassis with a spare. Hugh and Patrick burt restored plc1,2,3 to 10 pm last night.
When we first arrived we saw that the high voltage power supplies for the ESD were off; Fil turned them on and restored the settings. This may be because the vacuum gauges, which are controlled by Beckhoff, would have tripped them off. The ESD driver was also off, but we did not reset it, believing that we could reset it remotely. Later, after she could not reset it remotely in the normal way, Betsy reset it by driving back to EY.
Elli quickly noticed that something was amiss with the ring heaters at End Y, which are controlled by the same Beckhoff chassis. We checked that the settings were all the same; it seems there is some difference between the two chassis.
Fil, with help from several people, has spent most of the afternoon with the chassis we removed. We had trouble getting a test setup going that we could use to diagnose the problem, so Hugh, Jenne, and I are relieving Patrick for a few hours so he and Fil can work together on the removed chassis. They think they have found a problem with one of the modules, and are currently replacing it.
We are about to reach low noise (with the spare chassis, and incorrect power on the ring heaters). We hope to see that the comb is gone; if it is, we may break the lock to revert to the old chassis if possible.
Patrick's notes about his shift:
10 am PDT safety tour changed from X end to Y end (this may have broken our noisy lock this morning)
18:11 UTC placed Beckhoff chassis at End Y
18:20 UTC Rick retrieves (?) PCAL out of optics lab
18:30 UTC Dick in optics lab, out at 12:41 PDT
18:31 UTC Fil restarts Beckhoff computer (Guardian to down)
a little while later h1ecaty1 burt restores to 22:10 Sep 14
18:53 UTC Jeff B, Jason retrieve part from optics lab (?)
12:38 PDT UPS truck at LSB
20:37 start of initial alignment, finished at 21:22
20:54 Sheila to H2 electronics, back 20:56; Fil is still there
20:58 Betsy to EY to restart ESD, back 14:31 PDT
14:58 UTC Ryan B. patching and rebooting alog
15:09 UTC Gate phone rang. Bubba G. answered first. It was the fire department. Bubba let them in. They wanted to do fire hydrant tests. Bubba told them not to.
15:11 UTC Fire department leaving
15:12 UTC David N. through gate
15:59 UTC Turned away fire department from gate. They wanted to check fire extinguishers.
16:06 UTC ETMY saturation alarm, lock loss, end Y ALS in fault
16:16 UTC Christina to mechanical room to get supplies
17:51 UTC King Soft water through gate. Parking in main parking lot. Wheeling in equipment for RO system on hand truck.
17:56 UTC Dave B. running an svn update
18:20 UTC Rick S. retrieving part from optics lab
? UTC Rick S. back
18:30 UTC Dick G. in optics lab
18:53 UTC Jeff B, Jason O. retrieving part from optics lab
19:30 UTC Jeff B, Jason O. done
19:38 UTC UPS at LSB
19:41 UTC Dick G. out of optics lab
20:37 UTC Starting initial alignment
20:54 UTC Sheila D. to H2 electronics room
20:56 UTC Sheila D. back
20:58 UTC Betsy to end Y for ESD restart
21:22 UTC Finished initial alignment
21:31 UTC Betsy back
To clarify, I didn't go to the optics lab as noted in Patrick's log. When retrieving the PMC from the optics lab was approved, I let Jason know that there was an opportunity to go to the lab, which he did as noted by Patrick.
While the Beckhoff troubleshooting was going on this morning, I ran a set of charge measurements on the unused ETMx SUS. After they completed a few hours ago, OPS/commissioners started relocking attempts, which are ongoing.
Results of charge measurements to be posted later.
With all of the attempted Beckhoff repair work this morning, the ETMY ESD "railed", with all 5 channels reading ~-15k. I went down to EY and ran the reset procedure: push the red ON/OFF button, unplug the far-right DAQ cable, push the red ON/OFF button back to ON, and replug the DAQ cable. This worked; the DC bias channel is back to ~-32k while the other 4 channels are near zero at ~-200, as viewed on the lower right of the ETMY SUS screen.
There are several calibration lines whose injection amplitudes are set to give a signal-to-noise ratio of 100 for a 10-second integration time. Thus, in typical strain noise spectra made with a frequency resolution of order 0.1 Hz, these calibration lines show up about 100x above the noise floor. I have been concerned about whether there is any non-linear noise conversion from the calibration lines at these amplitudes. We certainly see upconversion around the test mass bounce and roll modes when they are at high amplitude. Any egregious upconversion from the calibration lines would have been flagged already, but it wasn't clear whether smaller effects had been ruled out.
To test this, a few days ago the calibration folks made a controlled test where the calibration lines were turned off and then on, while everything else in the interferometer remained (nominally) the same. These times are:
Sep-11-2015 1:45:25 UTC, calibration lines turned off, no excitations, fully locked at NOMINAL_LOWNOISE
Sep-11-2015 1:59:06 UTC, calibration lines turned back on
Sep-11-2015 2:50:03 UTC, unlocked
I made 0.05 Hz-resolution spectra of H1:CAL-DELTAL_EXTERNAL_DQ during these times; see the attached plots. I see no sign of any upconversion around the calibration lines in the 35-38 Hz band, nor around the one near 332 Hz. I also looked around twice the frequencies of the 35-38 Hz lines; there is no sign of any of these lines showing up at twice their frequency. So the calibration line amplitudes look OK from this standpoint. (Comment: I thought all 3 lines in the 35-38 Hz band were to have a high amplitude -- SNR ~ 100 -- but the 37.3 Hz line is smaller by a large factor.)
First figure: zoom around the 331.9 Hz line
Second figure: zoom around the 35-38 Hz band (upper) and twice that frequency (lower)
Note: the FFTs were performed with a Hanning window; the strong calibration lines in these spectra display the spectral leakage associated with the windowing/finite time series.
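For anyone who wants to remake these spectra, here is a minimal sketch of the equivalent computation in gwpy (my tool choice for illustration, not necessarily what was used here; the GPS times are placeholders for the lines-off span above):

    # Sketch: 0.05 Hz resolution ASD of DELTAL with a Hann window.
    from gwpy.timeseries import TimeSeries

    start, stop = 1125971000, 1125971600   # placeholder GPS times
    data = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, stop)
    # 20 s FFTs give the 0.05 Hz resolution used in the plots
    asd = data.asd(fftlength=20, overlap=10, window='hann')
    plot = asd.plot()
    plot.gca().set_xlim(30, 40)            # zoom on the 35-38 Hz band
    plot.show()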
The 37.3 Hz line is injected at the DARM CONTROL excitation point (see fig. 1 in LIGO-T1500377). Because of how Delta L_ext is generated (Delta L_ext = A*d_ctrl + d_err/C), lines injected at DARM CONTROL (x_ctrl) don't appear in Delta L_ext.
So of the three calibration lines near 36 Hz, we only expect to see the x_tst line at 35.9 Hz and the Pcal line at 36.7 Hz.
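To spell out the cancellation (a schematic derivation in the notation used above, with my own sign conventions; not copied from T1500377): writing the total control signal as d_ctrl = D*d_err + x_ctrl, the loop gives

    d_err = C * (dL_free - A*d_ctrl)
          = C * (dL_free - A*x_ctrl) / (1 + G),   with G = C*A*D

and the reconstruction is then

    dL_ext = A*d_ctrl + d_err/C
           = d_err*(1 + G)/C + A*x_ctrl
           = (dL_free - A*x_ctrl) + A*x_ctrl
           = dL_free

so an injection at x_ctrl drops out of Delta L_ext exactly, up to errors in the A and C models.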
Andy, Laura, Dr. Dan Hoak

Something went wrong with Pcal Y. It's not clear right now whether the Pcal problems were causing the glitches in ETMY, or whether there's a more general electrical failure at EY. The Beckhoff seems to have failed at 5:45 UTC (alog). At that time, we see the Pcal Y lines suddenly disappear, and this is obvious in DARM (first plot) as well as in the Pcal Y photodiodes (plots 2 and 3). There are bursts of noise in the TX channel seen in the summary page spectrogram (plot 4), but shorter spectrograms of TX and RX don't show anything at times when we know DARM was glitching (plots 5 and 6). We'll continue to investigate.
It seems that Pcal went down somewhat after Beckhoff. A big DARM glitch came almost exactly at the time of the Pcal power loss at 05:44:07.91, which was somewhat after the Beckhoff communication loss (1st attachment). Right after that glitch, DARM became 2 Hz-rich.
In the second plot, you can see that the AOM drive, Pcal receiver, and transmitter all went away at the same time. Since there was no light, Pcal cannot be the cause of the 2 Hz comb. It's still possible that the loss of Beckhoff was somehow responsible.
I checked the MAINSMON channels at EY, though I don't know exactly what they monitor. I don't see a MAINSMON glitch when Pcal was lost (2nd attachment, middle), though it does glitch once in a while, e.g. at t~440. This is through a 60 Hz and 180 Hz band stop.
16:45 UTC Sheila, Filiberto, Vern, Nutsinee to end Y to investigate problem with Beckhoff. Looks like some of the channels froze around 09/15 5:45 UTC. May be coincident with start of 2 Hz comb in spectrum.
Does this have any interaction with the Pcal? We've seen something go very wrong there near the time when the 2 Hz glitches started (we'll alog that shortly). There's also something wrong with some ETMY M0 OSEMs. DetChar is happy to have this investigated at the cost of locked time; this problem ruins the data, and it's probably unanalyzable.
The PCAL team is investigating.
17:37 UTC Vern, Nutsinee, Filiberto and Sheila back. Brought Beckhoff chassis with them. Vern says they found a broken Beckhoff terminal.
18:11 UTC Filiberto, Sheila, Vern replacing end station 2 EtherCAT chassis at end Y with spare
18:31 UTC Error remains in Beckhoff. Team at end Y is restarting Beckhoff computer. I put ISC_LOCK guardian to manual and down prior.
18:48 UTC Hugh burtrestored h1ecaty1plc1, h1ecaty1plc2, h1ecaty1plc3 to 09/14/22:10 as requested by team at end Y.
19:32 UTC Filiberto restarted Beckhoff vacuum gauges, going to turn back on high voltage for ESD
Strange DC level shift in the EY microphone channels, especially the EBAY racks (see ch3), but not in the low-frequency mic.
As soon as the Beckhoff froze, the EY microphone signals' DC level went down, and after Vern/Sheila/Elli/Fil restarted the Beckhoff the DC level came back close to original. Why? Is the ground level of the EBAY area pulled by something else controlled by Beckhoff (or by the Beckhoff chassis itself)?
Jeff, Darkhan, Sudarshan, Craig, Kiwamu,
(Verification measurement)
The above screenshot shows a measured transfer function from the displacement estimated by Pcal Y to the displacement estimated by CAL-CS. They agree within +/- 10% in magnitude and +/- 5 deg in phase across the frequency band we swept. Note that one data point at 10 Hz showed a magnitude slightly above 10%, but this was not repeatable and therefore we don't think it is a reliable data point. We measured the same transfer function three times within the same lock stretch and saw the magnitude at this particular frequency point change to values between 0.85 and 1.1. We are guessing that this is due to a bounce mode confusing our measurement.
Also, even though the coherence was high across the frequency band, the data points below 30 Hz seemed to change in magnitude in every sweep. So we increased the integration time from 3 sec to 6 sec, which seemed to improve the flatness.
The optical gain was adjusted by measuring the sensing function with a Pcal sweep within the same lock stretch. This gave me a 341 Hz cavity pole (the same as two nights ago, alog 21352) and an optical gain of 8.834e-7 meters/count. Both parameters are now loaded into the CALCS foton file and enabled.
(Phase correction)
Sudarshan will make a separate alog on this topic, but a trick to get this beautiful plot was to properly incorporate the known time delays. Based on our knowledge, we have included a 115 usec (= 41 + 61 + 13 usec) time delay. If we had not removed the delay, the phase would have been off by 40 deg at 1 kHz.
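As a quick sanity check on that number (my arithmetic, not from the log), a pure delay tau contributes 360*f*tau degrees of phase:

    # Sketch: phase contributed by a 115 usec delay at 1 kHz.
    tau = 115e-6                 # seconds (41 + 61 + 13 usec)
    f = 1000.0                   # Hz
    print(360.0 * f * tau)       # ~41.4 deg, consistent with the ~40 deg above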
(An extra measurement)
Independently of the calibration validation measurement, we did a simple measurement -- checking the binary range with and without the calibration lines. Here are the relevant time stamps:
We will check the range later.
All the data are accessible at the following SVN locations:
DARM open loop measurements
aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-09-10_H1_DARM_OLGTF_7to1200Hz.xml
aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-09-10_H1_DARM_OLGTF_7to1200Hz_halfamp.xml
For the analysis, I used the first measurement. The second measurement was meant to assess the repeatability of the measurement by applying half the usual excitation amplitude in DARM.
Pcal to DARM responses:
aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-09-10_PCALY2DARMTF_7to1200Hz.xml
aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-09-10_PCALY2DARMTF_7to1200Hz_v2.xml
aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-09-10_PCALY2DARMTF_7to1200Hz_v3.xml
The final plot that I posted above is from the third measurement, in which I doubled the integration time in order to obtain a better signal-to-noise ratio.
DARM parameter file (as reported in alog 21386):
aligocalibration/trunk/Runs/ER8/H1/Scripts/DARMOLGTFs/H1DARMparams_1125963332.m
On 2015-09-12 06:30:00, the gain from the DCPD sum to DARM IN1 was 3.477×10−7 ct/mA. Therefore, using Kiwamu's number of 8.834×10−7 m/ct, this gives the optical gain as 3.26 mA/pm. (One stage of DCPD whitening.)
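For the record, the arithmetic combining the two gains (my own check, not from the alog):

    # Sketch: optical gain in mA/pm from the two quoted numbers.
    m_per_ct = 8.834e-7          # DARM IN1 counts -> meters (Kiwamu's number)
    ct_per_mA = 3.477e-7         # DCPD sum (mA) -> DARM IN1 counts
    print(1e-12 / (m_per_ct * ct_per_mA))   # ~3.26 mA/pm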
Once we installed the spare EtherCAT chassis, the ETMY ring heater was outputting less than the requested power (first plot): H1:TCS-ETMY_RH_UPPERPOWER and H1:TCS-ETMY_RH_LOWERPOWER were 0.34 W and 0.2 W respectively, instead of the requested 0.5 W. Trending the ring heater input channels showed this change happened during the chassis swap. It appears there was a 0.2 W bias on H1:TCS-ETMY_RH_UPPERCURRENT and H1:TCS-ETMY_RH_LOWERCURRENT.
When we swapped the original chassis back in, the ETMY ring heater output the correct power (second plot).
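A sketch of the kind of trend lookup used here (again with the nds2 Python client; the '.mean,m-trend' suffix is the usual NDS2 minute-trend naming, and the GPS span is a placeholder covering the swap):

    # Sketch: minute trends of the ring heater power channels.
    import nds2

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    chans = ['H1:TCS-ETMY_RH_UPPERPOWER.mean,m-trend',
             'H1:TCS-ETMY_RH_LOWERPOWER.mean,m-trend']
    start, stop = 1126000000, 1126020000   # placeholder GPS span
    for name, buf in zip(chans, conn.fetch(start, stop, chans)):
        print(name, buf.data.min(), buf.data.max())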