Continued troubleshooting EY ESD remote restart capability. Replaced Sequence Module inside the ESD unit. Unit can now be controlled on/off from the control room. F. Clara, R. McCarthy, S. Aston
SudarshanS, RickS
Using a lock stretch from last night, we compared the Xarm and Yarm Pcal calibrations using the calibrated DARM spectrum (see plots attached below). We assume that any errors in the shape of the DARM response function are small over the few Hz between Xarm and Yarm lines.
We have calibrated both the photodetector inside the Pcal transmitter modules that samples the transmitted light (TxPD) and the photodetector mounted to the integrating sphere inside the receiver module that captures the light reflected from the ETM (RxPD). Their calibration coefficients are:
Arm | TxPD (m/V)*1/f^2 | RxPD (m/V)*1/f^2 |
Xarm | 8.517e-13 | 6.799e-13 |
Yarm | 1.789e-12 | 6.861e-13 |
By comparing the ratio of the (Xarm and Yarm) calibrated Pcal photodetector peaks with the ratio of the DARM peaks, we can compare the Pcal calibrations at the two end stations.
For the Tx PDs, the ratio of the calibrations is 1.0214 and for the Rx PDs the ratio is 1.0191. Ideally, the ratios would be 1.0. Some of the 2% variation could be due to response function errors in the DARM calibration. We can test this by swapping the line frequencies between the X and Y arms some time in the future.
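Spelled out, the comparison forms the Pcal-to-DARM line amplitude ratio in each arm and then takes the X/Y quotient (the A's here are our shorthand for the measured line amplitudes, not notation from the analysis itself):

R = (A_Pcal,X / A_DARM,X) / (A_Pcal,Y / A_DARM,Y)

This is the quantity quoted above (1.0214 for Tx, 1.0191 for Rx), with R = 1 indicating perfectly consistent end-station calibrations.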
We plan to repeat the Pcal calibrations to get some insight into statistical variations.
Note that the Pcal estimates of the calibrated DARM displacements are about 25-30% higher than the amplitudes reported by the calibrated DARM spectrum:
Pcal/DARM | TxPD | RxPD |
Xarm | 1.28 | 1.29 |
Yarm | 1.25 | 1.27 |
Added 38 channels. Removed 25 channels.
Sheila, Kiwamu, Koji, Evan
Today the goal was to get back to 34+ Mpc and put as many steps as possible into the Guardian.
Mostly we spent the day trying to get the ASC working again.
We spent a while puzzling over why the ASB36I→SRM ASC loops were not working in DRMI or in full lock.
Recall that in the new HAM6 centering scheme, ASB is not under active centering control. Over the past week, the beam on ASB has managed to stay near zero, but today it started to wander in the course of locking activities. So it is unsurprising that the ASB36I error signals were junk. (Miraculously, the ASB36Q→BS loops continued to work fine.)
With DRMI locked, we tried moving the ASB picomotor to center the beam. This worked temporarily, but the beam would be miscentered after each relock, or after SRM was moved. Then we tried using ASA36I instead, since ASA is still actively centered. This worked for pitch, but the yaw error signal was not good.
So finally we just decided to revert to the old HAM6 centering scheme in which OM1 and OM2 are used to center onto ASA and ASB, and OM3 and OMC Sus are used to align onto the OMC QPDs (specifically, we took settings from 2015-03-14 08:00:00 UTC). After that, the ASC worked fine. This isn't a long-term solution, but it is expedient and it seems to indicate that the problem is a lack of centering on ASB.
This "old" HAM6 centering scheme is more or less captured in the Ham6CenteringStatusQuo.snap (attached in zip file). The "new" HAM6 centering scheme is in Ham6CenteringNouveau.snap. Also there are screenshots of both configurations.
BS coil driver switching is now successfully implemented in the Guardian.
We made use of the remote EY ESD driver switching a few times. It is very convenient.
We didn't get any new Mpc tonight, but we definitely set a record for mph. The attached plot shows the wind reaching 45 mph while we were locking. Previously, we were having difficulty locking with wind speeds between 20 and 30 mph. Margarita's plots show that previously the wind would have caused difficulty locking something like 10-15% of the hours in a year, while now the wind should only interfere with locking about 0.7% of the year.
Now the wind is gusting up to 60 mph and we are still able to lock ALS. The problem now seems to be DRMI, which can acquire lock but loses it after less than a minute.
This is a lot of progress. Last spring we were struggling just to keep the seismic system from tripping in winds of about 30 mph, so it has taken a lot of work from a lot of people to be able to lock under these conditions. Some of the major steps were: fixing watchdogs, improving the seismic performance, high-bandwidth tidal from ALS, green WFS, windy blends, L2P on ETMX, switching the ALS fiber AOM drive to the IMC VCO, and lastly, tonight we added the oplev damping back for ETMX once the gusts were above 50 mph. We could think about high-bandwidth green WFS instead.
Sheila, Ed, Ross, Evan
This morning we had another incident where we seemed to have glitches in the channel H1:ALS-Y_REFL_CTRL_OUT_DQ that could have been the cause of ALS locklosses, similar to what was described in alog 15242 and alog 15402. Several screenshots are attached. There is a distinctive pattern in the REFL CTRL signal, and the error signal (ALS-Y_REFL_ERR_OUT_DQ) has a large glitch, as does the transmitted green light (ALS-C_TRY_A_LF_OUT_DQ). Today these things were happening only every 10-15 minutes, but that is often enough to prevent locking. As before, this problem went away after a while, without us doing anything to fix it. These glitches sometimes show up in the X arm, but that could be because the DIFF control imposes them on the X arm even if they originate in the Y arm. The glitches still happened when we had tidal off and COMM unlocked, so I would not think there is any way a glitch that originated in the X arm could have been imposed on the Y arm. There are no coincident glitches in the optical levers or in the fiber control or error signals.
It would be good to know how often these glitches happen, how intermittent the problem has been over the last several months, and how often they coincide with a lockloss. Ed and Ross started to look into this, and wrote a script to identify these glitches in a way that is easier than looking through a time series. I made an attempt to use an omega scan on ligodvweb, but I ran into trouble producing a plot.
I am wondering if anyone from Detchar has tools that could be helpful (and maybe used from the control room?) in finding times when these glitches have happened.
We don't currently have glitch-finding algorithms running on those channels, but we'll start running them today and also produce results for the last week. In the meantime, just looking at one event, it looks like a BRMS from 50 to 100 Hz on ALS-Y_REFL_CTRL_OUT_DQ would do a good enough job of finding these.
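As a rough illustration of what such a monitor could look like (a minimal MATLAB sketch, not the DetChar pipeline; `data` and `fs` are assumed to hold the full-rate ALS-Y_REFL_CTRL_OUT_DQ time series and its sample rate, and the 1 s stride and 10x-median threshold are placeholder choices):

% band-limited RMS (BRMS) in the 50-100 Hz band over 1 s strides
stride = fs;                                       % 1 s of samples
nSeg   = floor(numel(data)/stride);
brms   = zeros(1, nSeg);
for k = 1:nSeg
    seg     = data((k-1)*stride+1 : k*stride);
    brms(k) = sqrt(bandpower(seg, fs, [50 100]));  % band RMS of this second
end
candidates = find(brms > 10*median(brms));         % flag unusually loud seconds

bandpower is from the Signal Processing Toolbox; any equivalent PSD integration over the band would do.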
The primitive code for finding these ALS glitches is attached; it uses the MATLAB frontend to NDS2 to get the data, and searches the H1:ALS-Y_REFL_ERR_OUT_DQ channel for the following: a data sample that exceeds 20000 counts, followed in less than 500 samples (~30 ms) by at least one sample that is lower than -20000 counts. This is a crude bipolar burst search of the channel, and it seems to find ALS glitches with high efficiency.

There seem to be three classes of ALS glitches:
1) streams of large glitches that occur far from IFO locks, where the ALS servo is just not working very well / hopping between spatial modes,
2) smaller glitches coincident with losses of lock,
3) other smaller glitches whilst locked that are not coincident with loss of lock.

I have also attached 4 pdfs. The attachments are explained below:
1. findalsglitch.m - a glitch search code that downloads and searches a predetermined amount of full-rate data from ALS-Y_REFL_ERR_OUT_DQ
2. rangealsglitch.m - a code that calls findalsglitch.m multiple times, after breaking a predetermined data stretch into manageable chunks.
3. A pdf of a stream of large glitches in ALS occurring when the machine is out of lock.
4. A pdf of a small ALS glitch during a lock stretch where the IFO did not lose lock coincident with the glitch.
5. A pdf of a small ALS glitch during a lock stretch coincident with an IFO loss of lock.
6. A zoomed-in version of 5 showing the shape of the glitch in detail.
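For reference, the core of the bipolar test described above amounts to something like this (a minimal sketch, not the attached code itself; `data` is assumed to already hold the full-rate H1:ALS-Y_REFL_ERR_OUT_DQ time series, e.g. as fetched with the NDS2 MATLAB frontend):

fs     = 16384;                    % full rate of ALS-Y_REFL_ERR_OUT_DQ
thresh = 20000;                    % counts
win    = 500;                      % samples (~30 ms)
posIdx = find(data > thresh);      % samples exceeding +threshold
glitchTimes = [];
for k = 1:numel(posIdx)
    i   = posIdx(k);
    seg = data(i+1 : min(i+win, numel(data)));
    if any(seg < -thresh)          % bipolar partner within the window
        glitchTimes(end+1) = i/fs; % candidate glitch time [s] into the stretch
    end
end

Note that neighboring samples of the same burst will each be flagged, so in practice the candidate times would be clustered before counting glitches.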
Here is one more example of this kind of glitch, I think this is an example of the Y arm ALS glitch causing a lockloss.
The optimal location for a single version of Krishna's BRS (intermediate-frequency tilt sensor), or whether it would be better to have two of them, depends on the tilt spectrum in the beam direction. We suspected that wind-induced tilt is worse at EY than at EX, where the BRS is currently located, because, for typical wind storm directions, the building is being pushed roughly along the beam axis at Y-End and roughly perpendicular to it at X-End (the tumbleweeds usually roll down the Y-Arm). But we aren't sure whether a single sensor at EY would make sense (e.g. if EY is 10x worse than EX) or if two BRSs would be better. Since we have only the one BRS, we used the 0.03-0.08 Hz band of the STS seismometers to compare the two stations. This frequency band was selected as a proxy for tilt because it is below the microseismic peak frequency and, in windy conditions, ground motion in this band is usually dominated by tilt. Figure 1 shows the strong correlation between the 0.03-0.08 Hz seismometer band and the tilt measured by the BRS at EX for one wind storm. Each of the small points in the plots in this log represents a 60 s average of the wind speed and a 60 s FFT of the ground motion.
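For concreteness, each such point can be computed along these lines (a minimal MATLAB sketch under our assumptions; `sts` is taken to be a calibrated beam-line STS velocity record in nm/s with sample rate fs, and bandpower is from the Signal Processing Toolbox):

seg     = sts(1:60*fs);                           % one 60 s stretch
bandVal = sqrt(bandpower(seg, fs, [0.03 0.08]));  % 0.03-0.08 Hz band RMS [nm/s]
% each bandVal is then paired with the mean wind speed over the same minute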
Figure 2 shows 4 months (Aug 15, 2014 - Dec 15, 2014) of the 0.03 to 0.08 Hz beamline seismic band at EY and EX plotted against wind speed measured at EX. The large red and blue dots show the median of the minute points in 2 mph bins. Dipongkar has plotted the median because large earthquakes, which also appear in this band, would bias the mean. Roughly speaking, for a particular wind speed, the signal at EY is twice the signal at EX when averaged over the many storms in these 4 months. This data suggests to me that we may want a second BRS at EY rather than moving the sensor from EX to EY, because the difference is, on average, only a factor of 2.
The differences between the stations can change for different wind storms, possibly because of different wind directions. Figure 3 shows the effects of individual storms (each storm is a different color, the same color on both plots) at the two stations. One of the storms produced about 5 times more beam-line tilt at EY than at EX.
Caveat: Getting this data is very time-consuming, so we are posting this log entry even though we have obtained only 4 months of data. Dipongkar will continue to increase coverage to include the spring windy period and we will update if necessary.
Robert Schofield, Dipongkar Talukder
We were going to wait until we had a year of data before putting in corner station plots and the plots for tilt perpendicular to the beamline, but since Krishna asked, here are the CS plots for the same storms as Fig. 3.
Thanks for the study! I know it is very time-consuming, but I thought I'd say that it would also be very interesting to compare the tilt of the corner station against the end station during these wind storms. If I remember right, the corner station slab moves roughly a factor of 2-3 less than EX. If so, then once tilt at the end station is corrected for (by factors of 5-10), the corner-station tilt would limit the low-frequency ISI motion.
Added Figure 5 showing 4 months of CS-X and CS-Y tilt plotted against wind speed measured at EX.
Added four new figures, which are Figures 2, 3, 4, and 5 above recast with their y-axes converted from 0.03-0.08 Hz band velocity in [nm/s] into tilt [nrad] using the model from Figure 1 (replotted and attached in this comment). Note that x and y in the fit equation and model of Figure 1 are in units of nm/s and nrad, respectively.
Scott L., Ed P., Joe D. Starting at X-1-7 double doors and moving north towards the single door. Vacuumed tube supports and sprayed bleach/water mixture on soiled areas of the floor. Cleaned the DI water tank and vacuum machines. We have started cleaning the machines every time we fill the DI tank. Samples of dirty sections are posted here. We are installing support tube caps on previously cleaned supports. Started tube washing after lunch and cleaned 28 meters to the north. Beam tube pressures were continuously monitored by the control room operator during cleaning operations.
This is a slightly edited version of the original sample aLOG done by Andres Ramirez with actual current values:
Laser Status:
- SysStat is good
- Front End power is 31.8 W (should be around 30 W)
- FRONTEND WATCH is GREEN
- HPO WATCH is RED
PMC:
- It has been locked 13 days, 4 hr, 37 min (should be days/weeks)
- Reflected power is 2.2 W and PowerSum = 22.6 W (reflected power should be <= 10% of PowerSum)
FSS:
- It has been locked for 0 h 4 min (should be days/weeks)
- TPD[V] = 1.4 V (min 0.9 V)
ISS:
- The diffracted power is around 7.2% (should be 5-9%)
- Last saturation event was 0 h 6 min ago (should be days/weeks)
NOTES: ISS diffracted power was up to ~11% this morning. Since we prefer it in the 7-9% range at this time, I've adjusted the refsignal from -1.08 to -2.03 to yield ~7.5%.
In order for the remote on/off switch to work, we had to remove the sequencing module and set the jumper to the proper pins. We then connected a BNC-to-9-pin-D cable to the Beckhoff interface. At EX, Stuart A. was able to switch the unit on and off with little trouble using a command line. The MEDM will follow. EY was not nearly as successful. After moving the jumper, Stuart was able to turn the unit on but not off. After turning it on you could hear an audible sound as though a relay was continuing to switch. The unit would not turn off. I was able to operate the unit using my DVM set to continuity by touching the center pin and the shell; the ESD would turn on and off. Not sure if the problem is in the ESD sequence module or the Beckhoff chassis. Will investigate further Tuesday.
MEDM screens have been rolled out to support the remote reset and monitoring of the ESD drivers. However, due to channel name inconsistencies that are difficult/impossible to reconcile between sites, I've generated some custom screens for LHO. I updated the sitemap to link to these new screens via the paths "X-Arm ETM/TM->ETMXESD" and "Y-Arm ETM/TM->ETMYESD". Snapshots of both ETM Overview screens are available below (see ETMX_ESD_Monitor.png and ETMY_ESD_Monitor.png).

The ESD remote start/stop is toggled via the following Beckhoff channels:
H1:ISC-EXTRA_X_BO_3
H1:ISC-EXTRA_Y_BO_3

Cycling the required channel, for example using "caput H1:ISC-EXTRA_X_BO_3 1" followed by "caput H1:ISC-EXTRA_X_BO_3 0" at the command line, should be sufficient to transition the state of the ESD driver between active and idle (n.b. as noted in the above entry, further debugging is required for the ETMY ESD).

The newly generated ESD Overview MEDM screens have been committed to the cds userapps svn:
/opt/rtcds/userapps/release/sus/common/medm/quad/
A SUS_CUST_QUAD_MONITOR_EX_OVERVIEW.adl
A SUS_CUST_QUAD_MONITOR_EY_OVERVIEW.adl

n.b. we aim to provide read-out of the ESD monitoring channels later, after installing cables and swapping an ADC from the SUS front-end to the SUSAUX chassis (as denoted in the latest revision of D1002741, as was carried out at LLO).
All watchdogs tripped on the M7.5 earthquake (~ -20 hr) but survived the M4.8 at ~ -12 hr.
PSL frequency noise is better than indicated in DBB (diagnostic breadboard) scans.
JasonO et al. have been running weekly DBB scans (see this link) that indicate that the PSL frequency noise is well above requirements.
However, spectra of MC_F, the IMC control signal that drives the VCO, indicate that the frequency noise is much better than what is reported by the DBB.
Attached below is a recent DBB report and spectra of MC_F taken during a recent full interferometer lock.
I took a look at the ISS this morning and found the diffracted power up around ~11% with a refsignal of ~ -1.8. It's been adjusted to ~7% at a refsignal of -2.02. Also, the FSS TPD[V] is down to 1.4 from 1.6. There may be a need for an incursion on maintenance day to do touch-ups and discovery.
[Evan, Koji]
M7.5 - 55km SE of Kokopo, Papua New Guinea 2015-03-29 23:48:31 (UTC)
The EQ tripped most of the HEPI/ISI/SUSs. The ground motion calmed down after a couple of hours.
The WDs were reset.
Current status:
- All of the HEPIs are in the "ROBUST_ISOLATED" mode.
- All of the HAM ISIs are in the "ISOLATED" mode.
- All of the BSC ISIs except for the BS ISI are in the "FULLY_ISOLATED" mode. The BS ISI is in the "ISOLATED_DAMPED" mode. (It was already like this, so I left it; if it is necessary to go to "ISOLATED", please do so.)
- All SUSs except for PRM and SRM are in the "ALIGNED" mode. PRM and SRM are in the "MISALIGNED" mode.
- WDs of the TTs were restored too.
On my drive in I saw a kiddie pool flying across Jadwin Ave, so I knew it would be a good day to work on ALS. The gusts today are up to 45-50 mph; I can hear the building shake when one hits. While the changes I've made so far aren't enough to lock the IFO under these conditions, there is a lot of progress.
One more thing: this morning I opened the X end beam diverter and redid the initial alignment. Now both beam diverters are open. The initial alignment has its own problems when the wind is this high.
In hopes of capturing graphically what Sheila did in this log, switching the Fiber PLL's AOM input from a fixed oscillator at 160 [MHz] to the as-originally-planned IMC VCO input, I've updated the diagram I'd originally put together in G1400519, which Alexa and co. cleaned up / made more accurate / published in P1400105, to form the stand-alone diagram: The CARM / ALS Electro-Optical Controls Diagram, G1500456. I attach a copy of -v1 to this log for easy access.

Focus on the bottom left corner of the diagram, at the input of the red VCO, whose frequency is tuned by the fast output of the IMC common mode board. Its output, as currently shown, is how it's configured now -- it's split to head both to the double-passed FSS AOM and to the ALS Fiber PLL AOM. Previously* the path to the ALS Fiber PLL AOM had been replaced with a fixed 80 [MHz] oscillator.

*Recall the history / evolution of this connection: it had always been planned to be this way, and LLO, who commissioned their IFO in a more "natural," or serial, fashion, always had this connection. However, because LHO had commissioned their ALS system and DRMI in parallel during the HIFO Integration Phase, having the IMC control hooked up to the Fiber PLL would couple them in a distracting, detrimental fashion. Thus, the team "temporarily" replaced the IMC VCO output with an independent, fixed oscillator; see E1300659. As with many things in LIGO, "temporary" can mean "years," so it's only now that Sheila and Daniel decided that the full IFO was commissioned enough that we could restore the original plan.