TITLE: Sep 15 EVE Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing
LOCK DURATION: 5hr48min
SUPPORT: Sheila gave a tip: if the 2 Hz comb should re-appear, check the ALS phase frequency discriminator (PFD) RF Mon and LO Mon to see if the counts have stopped moving. More about this can be found in her aLog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=21564
INCOMING OPERATOR: Nutsinee
Activity log:
23:31 Team out to swap the EndY EtherCAT chassis 2
23:43 Sheila called from EY to confirm IFO state in order to begin work with EtherCat chassis.
00:17 Sheila called to say that they think they won.
00:18 Begin Locking
00:25 Team out of EY and headed back to control room.
01:00 IFO set to UNDISTURBED/Observing
01:17 I noticed the PCal lines missing
01:18 IFO set to UNDISTURBED/Observing
01:48 ETMY Saturation
02:19 ETMY Saturation
02:43 ETMY Saturation
02:51 ETMY Saturation
03:09 ETMY Saturation
05:33 Poking around in GraceDB, I noticed that there was a Gamma Ray Burst alarm while the IFO was down.
End-of-Shift Summary: IFO remains locked and Observing at ~70 Mpc. Seismic and wind activity remain quiet. Microseism had been steadily increasing over the last 22 hours but seems to have leveled off at around 2e-1. Occasional ETMY glitching (detailed in the end-of-shift activity log).
Sheila, Evan
Last night, we had a Beckhoff problem at end Y that started at 5:44 UTC (alogs: 21555 and comments).
The ESD high voltage remained on all night, even though the vacuum gauges that are meant to turn it off were not communicating.
The first screenshot shows the Beckhoff readout vacuum gauge (H1:VAC-EY_Y4_PT425_PRESS_TORR), which was reporting a constant value all night since the start of the Beckhoff problems. It was also reported as "valid" during this time. Before Fil, Vern, Nutsinee and I went out to EY, Patrick and I were attempting to clear the Beckhoff errors by re-initializing all the modules. This must have been partially successful around 18:33 UTC, when the pressure reading dropped to zero, the channel was no longer reported as valid, and the ESD high voltage was turned off. This is why we found the high voltage off even though it had been on all night.
One other concerning thing is that the values reported by the Beckhoff readout gauge changed before and after the problems; the normal vacuum gauge does not see such a big change. Also, it seems that Y5 was not working until we swapped the chassis today.
Note for the operators: There was no indication of this problem on the CDS overview screen. You can watch for this problem by opening an ALS PFD screen (for example, see screenshot) and checking that the RF MON and LO MON values are changing. If they are not changing, you may need to call someone. Ed has this screen open for both end stations on the ops station.
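For anyone who would rather script this check than stare at the MEDM screen, a minimal sketch of the "are these channels moving?" test might look like the following (the vacuum gauge name is taken from this entry, but the two PFD monitor channel names are guesses and should be read off the actual PFD screen):

import time
from epics import caget  # pyepics

CHANNELS = [
    'H1:VAC-EY_Y4_PT425_PRESS_TORR',  # Beckhoff vacuum gauge (from this entry)
    'H1:ALS-Y_PFD_RF_MON',            # hypothetical RF Mon channel name
    'H1:ALS-Y_PFD_LO_MON',            # hypothetical LO Mon channel name
]

WAIT = 60  # seconds; a healthy monitor channel should wander within a minute

first = {ch: caget(ch) for ch in CHANNELS}
time.sleep(WAIT)
for ch in CHANNELS:
    if caget(ch) == first[ch]:
        print('WARNING: %s unchanged for %d s -- possible frozen Beckhoff I/O' % (ch, WAIT))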
So the vacuum pressure was flatlined and the diagnostic channel reported it as valid? Not good.
Mid-Shift Summary:
When the Beckhoff crate went down last night, the Pcal laser shut down, likely due to not getting the enable signal from the Beckhoff chassis.
When it was swapped the second time (original unit swapped back in?), the Optical Follower Servo was not locked. Opening, then closing, the loop allowed it to relock.
For our first lock after today's shenanigans, Betsy noticed that our only SDF diffs were the PRM M2 coils. We discovered that they get switched to the high-range state occasionally, but not daily. It turns out that in the PRC Align state of the interferometer we set the PRM M2 coils to state 2, but nowhere were they ever set back to the low-noise state 3 for lock acquisition. It seems we had just been hand-setting the coils to the low-noise state whenever we realized they were wrong.
Anyhow, the PRM M2 coils are now set to the low-noise state 3 in ISC_DRMI's DOWN state. The code has been loaded, so next time we do an initial alignment the coils will be reset appropriately afterward.
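For the record, the change amounts to something like the following Guardian-style snippet (illustrative only: the coil-driver state channel name here is an assumption, not copied from the real ISC_DRMI.py, and ezca is provided by the Guardian environment):

from guardian import GuardState

class DOWN(GuardState):
    def main(self):
        # Return the PRM M2 coil drivers to low-noise state 3, undoing
        # the state-2 setting made during PRC Align.
        ezca['SUS-PRM_M2_COILSTATE_REQUEST'] = 3  # hypothetical channel name
        return True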
Separately, I have made a "checklist" version of the Initial Alignment procedure on the Ops wiki page. I was finding myself accidentally skipping steps (which contributed to the long day on Sunday) because the wiki page was too detailed for glancing through. The checklist is available via the wiki page, and also attached here.
IFO was temporarily switched to "Corrective Maintenance" to turn on the Y PCal lines.
While investigating the 2 Hz comb in DARM, Sheila and Patrick found communication errors with Beckhoff at EY. The End Station 2 EtherCAT chassis was brought back to the lab for further troubleshooting, a spare unit was installed, and the Beckhoff computer was restarted. While troubleshooting the unit, we found a bad EK1100 coupler on the third rail (left) of the chassis. We re-scanned the unit and found five terminals, all EL3104 (analog inputs), that were giving us errors. After re-scanning, checking internal cabling, and power cycling, we were only able to reproduce three errors. We tried multiple times to reproduce all five errors, but could not. Eventually, after a power cycle, none of the errors we had previously seen could be reproduced. After multiple re-scans and power cycles with no errors showing up, we used a voltage calibrator and injected a 5 V DC signal into the EL3104 terminals. It was later decided to reinstall the unit back at EY.
Once we installed the spare EtherCAT chassis, the ETMY ring heater was outputting less than the requested power (first plot): H1:TCS-ETMY_RH_UPPERPOWER and H1:TCS-ETMY_RH_LOWERPOWER were 0.34 W and 0.2 W respectively, instead of the requested 0.5 W. Trending the ring heater input channels showed that this change happened during the chassis swap. It appears there was a 0.2 W bias on H1:TCS-ETMY_RH_UPPERCURRENT and H1:TCS-ETMY_RH_LOWERCURRENT.
When we swapped the original chassis back in, the ETMY ring heater output the correct power (second plot).
Short history: We lost lock. I was having trouble locking the ALS Y arm. Sheila noticed that some of the end Y Beckhoff channels were not changing in value. We opened up the system manager for end Y and saw that a number of the signals had errors (see attached screenshots from earlier post). Sheila, Vern and Filiberto went to end Y to look at it. They brought the End Station 2 chassis back to the electronics lab. They swapped one of the terminals. They put in an End Station 2 chassis from H2 in its place. We were able to relock with the spare chassis, but there were problems with the TCS ring heater. Filiberto and I looked at the original chassis on the test system in the H2 building. We could not communicate with the right rail. Some of the terminals on the left rail had errors. We swapped the coupler on the right rail with a spare. This fixed that problem. We finally ended up clearing the errors on the left rail through some random process that involved power cycling the chassis a couple of times. We swapped the original chassis (with the mods) back in at end Y. We then had communication errors on all of the terminals for that chassis. Filiberto power cycled the chassis. One error remained. Filiberto power cycled the chassis again. The remaining error cleared. We checked that the Beckhoff vacuum gauges were reading and that the ESD drive was still on. We left end Y. It seems we may have a serious issue: there was no indication of a problem with the Beckhoff system until we noticed that -some- of the channels were not changing in value. The CDS overview for Beckhoff was fine; the only indication, short of opening the system manager, was the frozen channel values. Do we need to add diagnostic channels for each of the terminals into EPICS?
I was trying to use the fast ODC channels (e.g. H1:SUS-ETMY_ODC_CHANNEL_OUT_DQ) to track down ETMY saturation causes, but ran into a number of bugs in our data access tools that made it impossible:
1) dataviewer can play the trend data of integer ODC channels, but not the full data.
2) Both dtt (diaggui) and lockloss (pydv) can't access integer ODC channels.
3) Using NDS2, the ODC channel is down-sampled to 256 Hz, making it useless for my purpose. (Why do we do that? That channel compresses either way, and takes the same amount of disk space.)
4) dtt (diaggui) can access that channel, but only with the "now" setting, and IT RETURNS DATA WITH A TIME STAMP IN THE FUTURE!!!! (The attached plot has a UTC clock in the terminal, and dtt's reported time stamp...)
Sigh... I give up for today.
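For reference, the NDS2 access pattern that runs into item 3 looks roughly like this (a sketch; the GPS interval is a placeholder, and the server name is the usual LHO one but should be confirmed):

import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
gps_start, gps_stop = 1126250000, 1126250060  # placeholder 60 s interval
bufs = conn.fetch(gps_start, gps_stop, ['H1:SUS-ETMY_ODC_CHANNEL_OUT_DQ'])
odc = bufs[0]
print(odc.channel.sample_rate)  # comes back as 256 Hz, not the full rate
print(odc.data[:10])            # ODC bitmask samples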
TITLE: Sep 15 EVE Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Locking
OUTGOING OPERATOR: Patrick/Jeff B
QUICK SUMMARY: Full control room, mostly working on last night's 2 Hz comb problem. Patrick is in H2 examining the EtherCAT chassis from EY (the one responsible for the ring heaters) for badness. Lights are on in the LVEA. Wind is below 20 mph. Seismic activity looks a little rumbly, perhaps from a quake 7 hours ago and subsequent smaller ones in various areas. IFO is locked again for the first time in 7 hours. We are currently going to make some compensatory ring heater adjustments.
Hugh, Sheila, Jenne, filling in for Patrick
We are filling in for Patrick at the moment because he is helping Fil to use the beckhoff test stand in the H2 building. We are trying to lock now after Patrick and Jenne did a full initial alignment.
This morning, Fil, Vern, Elli, Nutsinee and I swapped out the End Station 2 Beckhoff Chassis, because all the modules on the right rail were in the state INIT NO_COMM, and we could not change their state by requesting different states (including safe) or clearing errors. We swapped the chassis with a spare. Hugh and Patrick burt restored plc1,2,3 to 10 pm last night.
When we first arrived, we saw that the high voltage power supplies for the ESD were off; Fil turned them on and restored the settings. This may be because the vacuum gauges, which are controlled by Beckhoff, would have tripped them off. The ESD driver was also off, but we did not reset it, believing that we could do so remotely. Later, after she could not reset it remotely in the normal way, Betsy reset it by driving back to EY.
Elli quickly noticed that something was amiss with the ring heaters at end Y, which are controlled by the same Beckhoff chassis. We checked that the settings were all the same; it seems like there is some difference between the two chassis.
Fil, with help from several people, has spent most of the afternoon with the chassis we removed. We had trouble getting a test set up going that we could use to diagnose the problem, so Hugh, Jenne, and I are relieving Patrick for a few hours so he and Fil can work together on the removed chassis. They think that they have found a problem with one of the modules, and are currently replacing that module.
We are about to reach low noise (with the spare chassis, and incorrect power on the ring heaters). We hope to see that the comb is gone, and if we see that we may break the lock to revert to the old chassis if possible.
Patrick's notes about his shift:
10 am PDT safety tour changed from X end to Y end (this may have broken our noisy lock this morning)
18:11 UTC placed beckhoff chassis at End Y
18:20 UTC Rick retrieves (?) PCAL out of optics lab
18:30 UTC Dick in optics lab out at 12:41 PDT
18:31 UTC Fil restarts Beckhoff computer (Guardian to DOWN)
a little while later h1ecaty1 burt restores to 22:10 Sep 14
18:53 UTC Jeff B, Jason retrieve part from optics lab (?)
12:38 PDT UPS truck at LSB
20:37 start of initial alignment finished at 21:22
20:54 Sheila to H2 electronics back 20:56, Fil is still there
20:58 Betsy to EY to restart ESD, back 14:31 PDT
14:58 UTC Ryan B. patching and rebooting alog
15:09 UTC Gate phone rang. Bubba G. answered first. It was the fire department. Bubba let them in. They wanted to do fire hydrant tests. Bubba told them not to.
15:11 UTC Fire department leaving
15:12 UTC David N. through gate
15:59 UTC Turned away fire department from gate. They wanted to check fire extinguishers.
16:06 UTC ETMY saturation alarm, lock loss, end Y ALS in fault
16:16 UTC Christina to mechanical room to get supplies
17:51 UTC King Soft water through gate. Parking in main parking lot. Wheeling in equipment for RO system on hand truck.
17:56 UTC Dave B. running an svn update
18:20 UTC Rick S. retrieving part from optics lab
? UTC Rick S. back
18:30 UTC Dick G. in optics lab
18:53 UTC Jeff B., Jason O. retrieving part from optics lab
19:30 UTC Jeff B., Jason O. done
19:38 UTC UPS at LSB
19:41 UTC Dick G. out of optics lab
20:37 UTC Starting initial alignment
20:54 UTC Sheila D. to H2 electronics room
20:56 UTC Sheila D. back
20:58 UTC Betsy to end Y for ESD restart
21:22 UTC Finished initial alignment
21:31 UTC Betsy back
To clarify, I didn't go to the optics lab as noted in Patrick's log. When retrieving the PMC from the optics lab was approved, I let Jason know that there was an opportunity to go to the lab, which he did as noted by Patrick.
While the Beckhoff troubleshooting was going on this morning, I ran a set of charge measurements on the unused ETMX SUS. After I finished a few hours ago, ops/commissioners started relocking attempts, which are ongoing.
Results of charge measurements to be posted later.
With all of the attempted Beckhoff repair work this morning, the ETMY ESD "railed", with all 5 channels reading ~-15k. I went down to EY and did the usual procedure: push the red ON/OFF button (off), unplug the far-right DAQ cable, push the red ON/OFF button back on, replug the DAQ cable. This worked: the DC bias channel is back to ~-32k while the other 4 channels are near zero at ~-200, as viewed on the lower right of the ETMY SUS screen.
16:45 UTC Sheila, Filiberto, Vern, Nutsinee to end Y to investigate problem with Beckhoff. Looks like some of the channels froze around 09/15 5:45 UTC. May be coincident with start of 2 Hz comb in spectrum.
Does this have any interaction with the Pcal? We've seen something go very wrong there at around the time the 2 Hz glitches started (we'll alog that shortly). There's also something wrong with some ETMY M0 OSEMs. DetChar is happy to have this investigated at the cost of locked time; this problem ruins the data, and it's probably unanalyzable.
The PCAL team is investigating.
17:37 UTC Vern, Nutsinee, Filiberto and Sheila back. Brought Beckhoff chassis with them. Vern says they found a broken Beckhoff terminal.
18:11 UTC Filiberto, Sheila, Vern replacing End Station 2 EtherCAT chassis at end Y with spare
18:31 UTC Error remains in Beckhoff. Team at end Y is restarting Beckhoff computer. I put the ISC_LOCK guardian to manual and DOWN prior.
18:48 UTC Hugh burtrestored h1ecaty1plc1, h1ecaty1plc2, h1ecaty1plc3 to 09/14/22:10 as requested by team at end Y.
19:32 UTC Filiberto restarted Beckhoff vacuum gauges, going to turn back on high voltage for ESD
Strange DC level shift in EY microphone channels, especially EBAY racks (see ch3). But not low frequency mic.
As soon as the Beckhoff froze, the EY microphone signals' DC level went down, and after Vern/Sheila/Elli/Fil restarted Beckhoff the DC level came back close to original. Why? Is the ground level of the EBAY area pulled by something else controlled by Beckhoff (or the Beckhoff chassis itself)?
Jeff, Darkhan, Sudarshan, Craig, Kiwamu,
(Verification measurement)
The above screenshot shows a measured transfer function from the displacement estimated by Pcal Y to the displacement estimated by CAL-CS. They agree within +/- 10% in magnitude and +/- 5 deg in phase across the frequency band we swept. Note that one data point at 10 Hz showed a magnitude slightly more than 10% off, but this was not repeatable, and therefore we don't think it is a reliable data point. We measured the same transfer function three times within the same lock stretch and saw the magnitude at this frequency change to values between 0.85 and 1.1. We are guessing that this is due to a bounce mode confusing our measurement.
Also, even though the coherence was high across the frequency band, the data points below 30 Hz changed in magnitude in every sweep. So we increased the integration time from 3 sec to 6 sec, which seemed to improve the flatness.
The optical gain was adjusted by measuring the sensing function with a Pcal sweep within the same lock stretch. This gave me a 341 Hz cavity pole (the same as two nights ago, alog 21352) and an optical gain of 8.834e-7 meters/count. Both parameters are now loaded into the CALCS foton file and enabled.
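For context, these two numbers parameterize the usual single-pole sensing approximation; a minimal sketch (illustrative only -- the real CALCS foton filter also handles delays, anti-aliasing, etc.):

import numpy as np

f_pole = 341.0    # Hz, fitted cavity pole
gain = 8.834e-7   # meters per DARM_ERR count at DC

def darm_err_to_meters(f):
    # Sensing C(f) ~ (1/gain) / (1 + 1j*f/f_pole), so the inverse
    # correction (counts -> meters) applied in CAL-CS is:
    return gain * (1 + 1j * f / f_pole)

f = np.logspace(1, 3, 5)  # 10 Hz to 1 kHz
print(np.abs(darm_err_to_meters(f)))  # m/ct grows ~linearly above the pole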
(Phase correction)
Sudarshan will make a separate alog on this topic, but a trick to get this beautiful plot was to properly incorporate the known time delays. Based on our knowledge, we have included a 115 usec (= 41 + 61 + 13 usec) time delay. If we had not removed the delay, the phase would have been off by ~40 deg at 1 kHz.
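(For reference, the ~40 deg figure is just the delay phase: a pure delay tau contributes phi = 360 deg * f * tau, so at f = 1 kHz this is 360 * 1000 Hz * 115e-6 s ≈ 41.4 deg.)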
(An extra measurement)
Independently of the calibration validation measurement, we did a simple measurement: checking the binary range with and without the calibration lines. Here are the relevant time stamps:
We will check the range later.
All the data are accessible at the following SVN locations:
DARM open loop measurements
aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-09-10_H1_DARM_OLGTF_7to1200Hz.xml
aligocalibration/trunk/Runs/ER8/H1/Measurements/DARMOLGTFs/2015-09-10_H1_DARM_OLGTF_7to1200Hz_halfamp.xml
For the analysis, I used the first measurement. The second measurement was meant to assess the repeatability of the measurement by applying half of the usual excitation amplitude in DARM.
Pcal to DARM responses:
aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-09-10_PCALY2DARMTF_7to1200Hz.xml
aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-09-10_PCALY2DARMTF_7to1200Hz_v2.xml
aligocalibration/trunk/Runs/ER8/H1/Measurements/PCAL/2015-09-10_PCALY2DARMTF_7to1200Hz_v3.xml
The final plot, which I posted above, is from the third measurement, in which I doubled the integration time in order to obtain a better signal-to-noise ratio.
DARM parameter file (as reported in alog 21386):
aligocalibration/trunk/Runs/ER8/H1/Scripts/DARMOLGTFs/H1DARMparams_1125963332.m
On 2015-09-12 06:30:00, the gain from the DCPD sum to DARM IN1 was 3.477e-7 ct/mA. Therefore, using Kiwamu's number of 8.834e-7 m/ct, this gives the optical gain as 3.26 mA/pm. (One stage of DCPD whitening.)
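(Checking the arithmetic with just the two numbers quoted above: 1 / (3.477e-7 ct/mA * 8.834e-7 m/ct) = 1 / 3.072e-13 m/mA ≈ 3.26e12 mA/m = 3.26 mA/pm, as stated.)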