CO2X:
- Turned on the laser at 10:00 AM PT; it mysteriously turned off around 3:24 PM PT. Still investigating causes, but we opened up the table and didn't see any problems. We're leaving this on overnight tonight.

CO2Y:
- Calibrate the on-table power meter.
- We're not leaving the laser on tonight, but we are leaving the chiller and the power supply on.

HWS HAM4:
- Aligned part of the X side on the in-air table.
- Retrieved spare in-vac temperature sensor for install by SUS.

RHs:
- We wanted to commission the ITMX ring heater today without bothering the integration team. Since the time constant of the ring heaters is on the order of many hours, we figured that quick impulses of very small powers (1-4 Watts for 1 or 2 seconds) wouldn't be a problem. We left the ITMY ring heater alone because it is actively being used to lock the PRMI, so we focused solely on ITMX.

Complication: We saw some problems with the ITMX RH. First off, the power we request gets translated into a current request, which should match the measured current pretty closely. When we request 3.5 Watts, we expect to get 0.29 Amps and 12.0 Volts, but we get 0.14 Amps and 6.00 Volts, for an output wattage of 0.8 Watts. Not good. Then we noticed that turning on the upper segment of the RH affected the readback on the lower segment. This is really strange, since we expect the two segments to be electrically isolated from each other.

Test: We tested the ITMX ring heater continuity out to the floor and got 50.2 Ohms for both the upper and lower segments, which is what we expect. Then, driving the ring heater on the floor from the RH driver chassis, we saw that it was driving the correct amount when we used a breakout board to measure the voltage across the ring heaters. The ring heater driver also provides the monitoring for the RHs (voltage_mon and current_mon), and this could be wired wrong in the Beckhoff chassis. So we carefully measured the continuity of the connections between the feedthrough and the EtherCAT modules to make sure they match our drawings, and this is where we found the wiring mistake that had us confused. Basically, there was a wiring switcharoo between the readouts of the upper and lower segments on ITMX, but in a unique way: the positive readback terminal was connected to the upper segment and the negative readback terminal to the lower. That would explain why the two segments were coupled when we know they shouldn't be. The good news is that ITMY seems to be working just fine and is wired up correctly; also, this mixup happened only on the readback side of the ITMX RH, which means we don't think it's actually driving the ring heaters in some unexpected way. At the first available opportunity, we want to fix this wiring and test the RH again. Before the next vent, we hope to do some serious testing with the ITM ring heaters, namely turning them on and off in 30-minute intervals for 24 hours to see their response.
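For reference, the expected readbacks quoted above just follow from P = I^2*R. A quick back-of-the-envelope check (illustrative only, not site code; the ~41 Ohm effective segment resistance is back-solved from the quoted 0.29 A / 12.0 V expectation, not a measured value):

R_seg = 41.0   # Ohm, assumed effective resistance of one RH segment (see note above)

def expected_readbacks(power_w, r_ohm=R_seg):
    """Expected current/voltage for a requested heater power, from P = I^2 * R."""
    current = (power_w / r_ohm) ** 0.5
    voltage = current * r_ohm
    return current, voltage

i, v = expected_readbacks(3.5)
print("request 3.5 W -> expect %.2f A, %.1f V" % (i, v))    # ~0.29 A, ~12.0 V
print("measured 0.14 A * 6.00 V = %.2f W" % (0.14 * 6.00))  # ~0.84 W, not 3.5 W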
With a woefully underestimated increase in payload, I removed a bunch of wall mass from the ISI. It unlocked pretty easily, and then I proceeded to remove more and yet more mass. But I got there, with the table locking/unlocking quite smoothly and with position shifts well within spec. A 10 kg D0901075 even had to be removed from the table top. In the end, the added payload & balance mass is lower by 52.45 kg. I have red-lined D1001132-v3 with the changes and current mass distribution and added that to the DCC. The ISI is unlocked, with the C3 covers pulled away from the ISI. HEPI remains locked.
Between 3:24 and 3:27 PM PT, we saw that the laser on CO2X turned off while we were trying to initialize CO2Y. It's not clear exactly what caused this glitch: we had not yet started touching either system during this time frame, so we're not sure whether it was due to a human interaction or some sort of electronics malfunction. All we know is that the laser gate was off, but the emergency shutoff had not been pressed on either table. Also, the laser itself was not shut off; the laser controller had somehow closed the gate. We're still investigating this issue. We turned the laser on at 10:00 AM this morning, so it had been running for about 5.5 hours.
08:50 Corey – Working around the squeezer area on laser enclosure
09:20 Hugh – Working on HAM4 ISI alignment & balance
09:42 Dave – At HAM4 area working on Hartmann table
10:00 Jason & Betsy – HAM5 for SRM alignment
10:05 Travis & Gerardo – Working on ACB alignment
10:18 Peter – Working in the H2-PSL enclosure
12:50 Karen – Going to End-Y and Mid-Y
12:57 King Water Systems on site to work on RO system
13:17 Thomas – In and out of LVEA working on TCS
13:51 Gerardo & Travis – Working on ACB alignment
14:22 Hugh & Scott – HAM4 ISI alignment & balance
14:30 Justin – Transition LVEA to laser hazard
15:45 Hugh – Turned down purge air at HAM4 and HAM6
Today, Jeff is on operator duty, so I tagged back in on HAM5. While Jason was resetting his equipment, I worked on:
- cable routing for SR3, including switching the cables in the bracket since they were mounted backwards.
- positioning all of the lower stage AOSEMs on SR3.
- removing the yaw adjustment hardware and torquing the remaining dog clamps.
- cable routing on SRM.
- resetting the SRM open light gains/offsets with Arnaud (and creating the safe_snap once set). Note: this was done because it was last done on a different set of electronics/cables. Currently, all of the OSEMs are backed out and reading open light; I'll resume setting these tomorrow.
- staging the dog clamps for the upcoming OFI and 3-baffle installs on this table.
On Friday night the wind was high (~25 mph) and the arm was hard to lock, although the angular motion of the test masses was not particularly different than on a quiet night. The first attachment shows the oplev spectra from Friday night, when the wind was ~25 mph; the second attachment shows spectra from yesterday night, with a 5 mph wind.
To note:
We tracked down the problems we had in the past windy nights to be related to the ALS tidal tuning, but it is still good to make these plots with the OpLev signals.
However, I think your plots didn't turn out the way you wanted: there is only YAW in the second attachment.
Also, it would be easier to make a comparison if you put windy/quiet curves for each DOF in the same plot. Thanks!
Thanks for noticing!
Attached are some other plots:
1. Wind at the corner and end-Y stations between May 17th and May 20th. High-wind references are marked in orange, low wind in green.
2. Spectra comparison of ground sensors (HEPI STS) at the end and corner stations for different winds. Blue and pink curves are during high winds (blue = May 17th, pink = May 18th); red is during low winds (May 19th). Rows from top to bottom: ETMX, ETMY and ITMY. Columns from left to right: X, Y, Z degrees of freedom.
3. Spectra comparison of oplevs during high winds (May 18th, blue curve) and low winds (May 19th, red curve) for ETMX, ETMY and ITMY, top to bottom. Left column is pitch, right column is yaw.
(Stefan, Lisa, Chris)
Looking through the locked stretches from the weekend, we noticed that the ALS slow feedback -- intended to relieve the VCO control signal of low frequency seismic and tidal motion -- was failing in its mission. What we saw is plotted in the first attachment. The green trace shows the VCO control signal, which should be forced toward zero when the slow test-mass feedback (red trace) is enabled about halfway through the timeseries. Obviously this wasn't happening. Meanwhile the VCO frequency servo that actuates the tune slider (blue trace) was running away along with the VCO control signal. When the tune slider hits its limit at +/-5 the lock is broken.
We tracked it down to a missing integrator in the ALS-X_ARM filter bank, without which the slow loop didn't have enough oomph at DC to null the fast control signal. I think this filter was removed intentionally (but without realizing its true purpose) during some earlier automation work. We put it back and, taking advantage of the improved test mass plant inversion, we were able to increase the gain upstream (in the ALS-X_REFL_SLOW filter bank) from 1 to 30. This should help prevent the arm from being blown away by the wind. When we lose lock the integrator is switched off by the Beckhoff, and the drive bleeds away through a 0.01 Hz pole. The improved behavior is shown in the second attachment (where the gain was cranked about 20 minutes in).
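To put a number on that bleed-off (a toy model, not the actual ALS-X_ARM filter coefficients): a first-order 0.01 Hz pole lets the accumulated drive decay with a time constant of 1/(2*pi*0.01 Hz), roughly 16 seconds.

import numpy as np

f_pole = 0.01                      # Hz, pole quoted above
tau = 1.0 / (2 * np.pi * f_pole)   # ~15.9 s decay time constant

fs = 16.0                          # Hz, assumed sample rate for this sketch
t = np.arange(0, 120, 1.0 / fs)
drive0 = 5.0                       # arbitrary drive accumulated at lock loss
drive = drive0 * np.exp(-t / tau)  # first-order decay through the pole

idx = int(60 * fs)
print("drive after 60 s: %.3f (%.1f%% of initial)" % (drive[idx], 100 * drive[idx] / drive0))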
Here is the list of commissioning tasks for the next 7-14 days:
Green team (XY-arm):
Blue team (X/Y-arm):
Red team:
SEI/SUS team:
In-Vacuum preparation:
As of 12:00 PSL Status:
SysStat: Green, except VB
Output power: 26.7 W
FRONTEND WATCH: Good
HPO WATCH: Red
PMC: Locked: 12 min; Reflected power: 1.5 W; Power Transmitted: 9.8 W; Total Power: 11.3 W
FSS: Locked: 12 min; Trans PD: 0.94 V
ISS: Diffracted power: 3.833%; Last saturation event: 12 min
Jason & Betsy – SRM fine alignment at HAM5
Gerardo – Putting the OFI into its storage box and flying it over the beam tube
Hugh – ISI balancing at HAM4
Travis – ACB balance and alignment
Richard & Filiberto – Chassis work in EE shop & cabling at HAM6
Peter & Olli – In H2-PSL working on ISS assembly
Dave – TCS Hartmann table alignment
The problems we noted over the weekend with the ETM SUS (and sometimes SEI) systems tripping on lock loss (alog 11957) can perhaps most accurately be characterized as a watchdog problem. The trips are caused by large drive transients after lock loss. It's hard to imagine that there's much we can do to prevent this from happening. Even if the DARM (CARM, etc.) control signals could be shut off immediately, the residual impulses would still produce a large impulse response in the LOCK filters in the suspensions. We could try to shut off the drive signals in the SUS controllers, or hold the outputs at their current value, but that's a bit more difficult to implement.
In general, though, the watchdogs should probably just not be tripping on transients. If the watchdogs were a bit smarter and only tripped on sustained saturations or oscillations, this would likely not be an issue. I vote that we solve the problem in the watchdogs by increasing the amount of time it takes before they trip.
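For concreteness, here is a minimal sketch of what "trip only on sustained saturation" could look like (illustrative only, not the actual SUS watchdog code; the block length and hold time are made-up parameters):

import numpy as np

def sustained_trip(signal, fs, threshold, hold_time=1.0, block=0.25):
    """Trip only if the block RMS stays above threshold for hold_time seconds.

    signal    : array of drive samples
    fs        : sample rate (Hz)
    threshold : trip level, in the same units as signal
    hold_time : assumed -- how long the excursion must persist before tripping
    block     : assumed -- RMS averaging block length (s)
    """
    n = int(block * fs)
    nblocks = len(signal) // n
    rms = np.sqrt(np.mean(signal[:nblocks * n].reshape(nblocks, n) ** 2, axis=1))
    need = int(np.ceil(hold_time / block))
    run = 0
    for r in rms:
        run = run + 1 if r > threshold else 0
        if run >= need:
            return True   # sustained saturation/oscillation -> trip
    return False          # brief transient -> no trip

# A short lock-loss transient does not trip; a sustained oscillation does.
fs = 2048
t = np.arange(0, 4, 1.0 / fs)
transient = 10.0 * np.exp(-t / 0.05)            # ~50 ms impulse response
oscillation = 10.0 * np.sin(2 * np.pi * 3 * t)  # ongoing 3 Hz oscillation
print(sustained_trip(transient, fs, threshold=5.0))    # False
print(sustained_trip(oscillation, fs, threshold=5.0))  # True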
I looked closely at one of Saturday's trips on ETMX at 04:44:40 UTC (May 18th 2014). The sequence was:
I will post some data tomorrow.
Just for bookkeeping purposes:
I recentered the beam on the IMC WFSs this past Saturday, using the picomotors. Both WFSs had been off by 0.2-ish counts in normalized pitch and yaw.
No restarts reported.
I made a couple of improvements to the IMC guardian:
The final point was to make it so that there is no activity in the LOCKED state, so that reaching the LOCKED state means the IMC is fully up. In general, I think we should start making all requestable states "idle" states, in that they don't do any action other than monitoring for exit conditions. The idea here is to make them true markers of a steady state of the system.

We end up with just two requestable states now: DOWN and LOCKED. The DOWN state doesn't actually prevent the IMC from locking briefly, so we may want to change the DOWN state to something that actually prevents the IMC from locking.
Next, I want to make the IMC guardian the manager of SUS_MC{1,2,3}, so that IMC is actually in control of setting the MC suspensions to their ALIGNED states. This will also allow us to achieve the point above, since we can then make the DOWN state into something where the SUS are misaligned, thereby preventing the IMC from locking.
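To show the "idle requestable state" pattern concretely, here is a rough sketch assuming Guardian's usual GuardState interface (main() runs once on entry, run() is polled until it returns True, and returning a state name requests a jump); the channel name and threshold are hypothetical placeholders, not the actual IMC guardian code:

from guardian import GuardState

class LOCKED(GuardState):
    """Requestable steady state: no actions, only monitoring for exit conditions."""

    def main(self):
        # Nothing to do on entry -- all lock acquisition happened in earlier states.
        return True

    def run(self):
        # Only watch for loss of lock; jump to DOWN if the IMC drops out.
        # ezca is provided by the Guardian runtime; channel/threshold are made up.
        if ezca['IMC-TRANS_OUT16'] < 50:
            return 'DOWN'
        return True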
model restarts logged for Fri 16/May/2014
2014_05_16 23:12 h1fw1
No restarts reported for Saturday.
One unexpected restart of h1fw1 over the two days.
Kiwamu, Jamie, Lisa
We figured out why we couldn't reduce the CARM offset below 1kHz while keeping the PRMI locked on REFL 45 I&Q.
The REFL 45 demod phase was tuned without the arms present; it turns out that with the arms, even with a large CARM offset (1 kHz), the demod phase changes enough to make MICH very close to instability.
So we just tuned the REFL 45 phase by monitoring the MICH OL TF and the REFL 45 I/Q decoupling (looking at a PRCL line around 50 Hz).
Not a very advanced technique, but we could easily recover a reasonable shape for the MICH loop and reduce the CARM offset.
Kiwamu will post some plots with the MICH OL TF; here are some numbers for the REFL 45 phase:
Without the arms: 144 deg
1 kHz --- 900 Hz CARM offset: 172 deg
800 Hz --- 700 Hz CARM offset: 180 deg
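To illustrate the I/Q mixing behind these numbers (a toy calculation, not site code): a demod phase error phi rotates the quadratures, so a line that should appear purely in I leaks into the other quadrature by a factor sin(phi).

import numpy as np

def rotate_iq(I, Q, phi_deg):
    """Rotate demodulated quadratures by phi_deg degrees."""
    phi = np.deg2rad(phi_deg)
    return (I * np.cos(phi) + Q * np.sin(phi),
            -I * np.sin(phi) + Q * np.cos(phi))

# Assume the ~50 Hz PRCL calibration line sits purely in I at the correct phase,
# and see how much leaks into Q (the MICH quadrature) for a given phase error.
t = np.linspace(0, 1, 16384)
prcl_line = np.sin(2 * np.pi * 50 * t)
for err in (0, 8, 28):   # 28 deg is roughly the shift quoted above (172 - 144)
    _, Q_leak = rotate_iq(prcl_line, 0 * t, err)
    print("phase error %2d deg -> PRCL leakage into Q: %.2f"
          % (err, np.std(Q_leak) / np.std(prcl_line)))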
We were happy and ready to do the transition to 3f, but another seismic event like the previous one made locking the Y arm more difficult.
We are still debating if we should go home or not...
Fig. 1: MICH open-loop transfer functions. The pink curve is the one after we adjusted the demod phase at a CARM offset of 1 kHz; all the other curves are from before the adjustment. You can see that the funny bump between 20 and 50 Hz disappeared as we adjusted the demod phase.
Fig. 2: PRCL open-loop transfer functions. The red curve is after the demod phase adjustment, the blue one before. Not a significant difference.
Fig. 3: Various error signals for the length control. The references were taken when the CARM offset was at 1 kHz; the live traces are with the offset at 800 Hz. REFL45I sees more noise above 40 Hz, which is seemingly from the common mode. The peak at 50 Hz is our injection in PRCL for calibrating the responses.