Arnaud, Alexa, Sheila, Kiwamu,
Tonight we tried closing the ALS diff loop, but it did not go well. We observed a high peak at 1 Hz which we don't understand, and it seems to prevent us from going back to the high-UGF configuration that we achieved last week. Since we ran out of energy at this point, we will have a closer look tomorrow. This may be gain peaking, a bad crossover, or some stupid reason.
We tried increasing the servo gain to the nominal value (which should give a UGF of 4 Hz), but today we were not able to reach that point due to the instability at 1 Hz. So we mostly worked with a UGF lower by a factor of 3 or 4, at which the loop was relatively stable. Changing the TST gain did not seem to help. The attachment shows the in-loop error signal (in red) and the free-running noise (in black). When we closed the loop, an oscillation appeared at 1 Hz. Hmmmm.....
A piece of good news is that Sheila succeeded in deploying a guardian for the ALS diff control, so it now allows us to keep track of good settings.
Fabrice's measurement has been started on ITMY, running on operator1.
After Sheila saw some excess angular motion caused by the length drive, I tried improving the decoupling of EX and EY UIM.
[Keita Arnaud]
Measurements live in /ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ETMX/SAGL1/Data called 2014-05-13_H1SUSETMX_L1_{L/P}2PY.xml
D. Hosken, G. Grabeel, T. Vo

Today we wanted to get the CO2X laser table in a position to run the closed system overnight. We've taken extensive precautions, using redundant beam dumps along the laser path, to make sure that we don't have stray beams and that the main laser beam does not get into the vacuum system via the periscope.

Beam dump locations:
- After the output beamsplitter
- Secondary beam dump after the final steering mirror
- A shield between the second iris and the first periscope mirror

The rotation stage was power-cycled to see if we could reproduce the errors that M. Heintze saw last week, but we couldn't get the same behavior. I also forced the Motor Warning label to "FALSE". We've left the rotation stage close to minimum power, with about 2.2 W going through after the polarizers, so we can monitor the beam power downstream with an on-table power meter that should read ~0.199 V (working but not yet calibrated). We expect it to go up and down a bit because of temperature fluctuations, and we left the PZT/chiller loop open. The jumper for the rotation stage interlock has been removed so that no one in the control room can rotate the HWP via EPICS. We left the HEPA and the lights off at 6:00 PM PT. The table is locked; we've tagged out the keys and left them in the control room box with our phone numbers attached.

ISS PD adjustments: We removed the covers and beam dump to let light onto the ISS PDs. We also translated the PDs back ~2-3 mm and fine-tuned the alignment onto the centers of the PDs.

Temperature sensors: The temperature sensors were not working very well at first. Only 2 of 6 were working, which turned into 1 of 6 as we were troubleshooting. It turns out that the problem was a loose connection at the feedthrough; once we tightened the connector screws, everything worked well.
We'll be monitoring these temperatures quite closely during the overnight observation because they'll give us a first-look indication if anything is wrong.
- H1-TCS-C_CO2_X_TEMPERATURESENSOR1 = AOM
- H1-TCS-C_CO2_X_TEMPERATURESENSOR2 = PLUMBING MANIFOLD
- H1-TCS-C_CO2_X_TEMPERATURESENSOR3 = AOM DRIVER
- H1-TCS-C_CO2_X_TEMPERATURESENSOR4 = WATERCOOLED BEAM DUMP
- H1-TCS-C_CO2_X_TEMPERATURESENSOR5 = RF DRIVER
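The overnight watch described above could be sketched as a simple out-of-band check over these five channels. This is a hypothetical illustration: the channel-to-label mapping comes from this entry, but the thresholds, function names, and the way readings are obtained (in practice, an EPICS client such as pyepics) are assumptions, not the actual TCS monitoring code.

```python
# Hypothetical sketch of an overnight temperature check for the CO2X table.
# Channel names are from the log entry; thresholds and the rest are assumed.

SENSORS = {
    "H1-TCS-C_CO2_X_TEMPERATURESENSOR1": "AOM",
    "H1-TCS-C_CO2_X_TEMPERATURESENSOR2": "PLUMBING MANIFOLD",
    "H1-TCS-C_CO2_X_TEMPERATURESENSOR3": "AOM DRIVER",
    "H1-TCS-C_CO2_X_TEMPERATURESENSOR4": "WATERCOOLED BEAM DUMP",
    "H1-TCS-C_CO2_X_TEMPERATURESENSOR5": "RF DRIVER",
}

def check_temperatures(readings, low=15.0, high=30.0):
    """Return (label, value) pairs that are missing or outside [low, high].

    `readings` maps channel name -> degrees C; in practice the values
    would come from an EPICS read (e.g. pyepics caget) of each channel.
    """
    alarms = []
    for chan, label in SENSORS.items():
        value = readings.get(chan)
        if value is None or not (low <= value <= high):
            alarms.append((label, value))
    return alarms
```

A run against live channel readings would then flag, say, an overheating water-cooled beam dump while the other sensors sit in band.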
This morning the End Y laser was off and had no power. Richard looked around and noticed that an extension cord had come loose. He changed the way it is plugged in, so this should not happen again.
Alexa and I placed the GigE camera on ISCT1 for the X arm.
I also went out to End Y a few times to work on the IR path there. The LSC PD is now aligned for the IR trans; the gain setting is 60 dB. The power cable for the analog camera is flaky. Aaron and Fil helped get the signal to the control room. Although the spot is faint, you can see it in the upper right hand corner of channel A on the lower monitor.
8:54 am, Jeff and Andres to the CS VEA, work around HAM4/5 area.
9:05 am, Jason to the CS VEA, East bay area for alignment work. <-- Done by 12:33 pm.
9:20 am, Jeff requested aid from Apollo, work on and around HAM6, 30 min. curtain work while using a small scissors lift.
9:20 am, Travis to the CS VEA, West bay area for ACB assembly work.
9:30 am, Betsy to the CS VEA, East bay area work.
9:42 am, Richard to End Y VEA, ESD work.
10:18 am, David H. to CS VEA, near HAM4, Hartmann sensor table work.
11:00 am, Restarted video4, loaded camera views only, no striptool, elimination process for "freezing" issue.
12:00 pm, ACB crew out of CS VEA.
12:45 pm, Cris to X-End station VEA, cleaning.
12:45 pm, Karen to Y-End station VEA, cleaning.
1:11 pm, Travis to CS VEA, West bay area for ACB work.
1:13 pm, Mitchell to CS VEA, West bay area for ACB work. <-- Done 2:20 pm.
1:16 pm, Betsy to CS VEA, West bay area for ACB work.
1:45 pm, Justin transitioned the LVEA to laser hazard.
2:30 pm, Thomas and David to CS VEA, East bay area for TCS work, powering laser inside table.
2:45 pm, Mark L. to CS VEA, TCS install work.
3:45 pm, Jeff and Andres to CS VEA cleaning area. <-- done 4:05 pm
I noticed that the PD offset nulling script for PRMI did not run for some reason. I looked into the script and found that this was due to too many PD channels being requested from tdsavg at once. So I modified the script so that it divides the PDs into a number of groups. This seems to run OK, at the expense of processing time. The modified script is named lsc/h1/scripts/pdOffsetNull_ver2.py, and I checked it into the SVN.
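The batching fix described here might look something like the following minimal sketch. The group size, function names, and the exact way tdsavg is invoked are assumptions for illustration, not the contents of the actual pdOffsetNull_ver2.py.

```python
# Sketch of averaging PD channels in small groups rather than passing
# them all to tdsavg in one call. Names and group size are hypothetical.
import subprocess

def grouped(channels, size):
    """Split a channel list into consecutive groups of at most `size`."""
    return [channels[i:i + size] for i in range(0, len(channels), size)]

def average_channels(channels, seconds=10, group_size=8):
    """Average each channel with tdsavg, a few channels per call.

    Assumes tdsavg is invoked as `tdsavg <seconds> <chan> [<chan> ...]`
    and prints one averaged value per requested channel.
    """
    results = {}
    for group in grouped(channels, group_size):
        out = subprocess.run(
            ["tdsavg", str(seconds)] + group,
            capture_output=True, text=True, check=True,
        ).stdout.split()
        results.update(zip(group, (float(v) for v in out)))
    return results
```

Splitting the request this way trades one long command line for several shorter tdsavg calls, which is why the modified script runs slower but no longer fails.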
I have begun setup for the pitch/yaw alignment; I will finish the fine setup tomorrow morning, and then we will commence with the SR3 pitch/yaw alignment.
BS ISI tripped at 12:25 pm, almost certainly due to a second earthquake (5.8, southern East Pacific Rise).
FYI - The EY wind channel died ~2 hours ago. Richard reports that "someone was working out there and it's probably related."
While poking around looking at screens in the control room this morning, I noticed the ETMx OL sum was low. Some trends reveal that there was a drastic drop in the laser power on the QPD on Saturday morning. A trend of ETMx signals, as well as some ISI and HEPI signals (not shown), indicates those systems are ~healthy. So it does not seem that the physical pointing of ETMx via SUS or SEI has changed the pointing on the OL. Note, this happened a day after the Friday EQ which caused all the ISIs to trip. We should probably investigate the OL itself to see what's going on with the laser beam/QPD.
Arnaud reports that at ~midnight last Friday night, all of the ISIs tripped due to an earthquake. There was a magnitude 6 EQ in Mexico on May 10 at 07:36 UTC, which correlates with the ISIs tripping at 07:50 UTC.
cf alog for ISI plots
Gerardo says there is an earthquake on the Oregon coast; so far End X is the only one tripped.
Here is the last 30 days of YEND temperatures. Shown is the LVEA average and two of the four zones from which the average is derived.
The large positive spike is experimentation with chilled water flow - (Robert and John on 4/20).
The smaller negative going spikes are not explained but may be related to human actions as I have stopped and started chillers several times in this period for diagnostic purposes.
It's not clear what caused the trips from this morning. The ones from Friday night seem to be related to earthquakes.
It's not a problem for now, even if we use oplev damping, because the oscillation leaking into the oplev error signal (attached; a tiny bump at 0.32 Hz in the green curve) is small enough.
Anyway, it seems like this has been going on for the past four weeks (second attachment), and even if it doesn't get worse, the oplev people should look at it after the integration test.
Oplev SUM peak-peak was a factor of 10 worse before it settled down to the current state (beginning of the second attachment), but we cannot see the raw data, so we cannot say what was going on then.
No restarts reported.