The Instrument Air for MX has been alarming (about every 15 min) since early Friday morning [see attached]. (Should the vacuum crew take care of this, or do we change the alarm level settings? Sending email to the Vacuum Team.)
UGF = 5 Hz now; the separation between the different DOFs is more than 20 dB. You can increase the BW further, but watch for gain peaking.
------
I decided to diagonalize based on the PZTs (I don't remember how it was done at EY). If you want to do it based on the QPDs, just swap the input and output matrices.
The sensing matrix was measured by turning on the servo with a 3-8 Hz band-stop filter, then stopping it while holding the output, and then injecting at 3.1, 4.1, 5 and 6.5 Hz for PZT1_PIT, PZT2_PIT, PZT1_YAW and PZT2_YAW respectively. I inverted the matrix to make a new input matrix such that POS=PZT1 and ANG=PZT2, and such that the filter TF becomes equal to the OLTF at low frequency where the PZT response is flat.
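For illustration only (the numbers below are made up, not the measured values), the inversion step amounts to building the new input matrix from the inverse of the measured sensing matrix, e.g. with numpy:

    import numpy as np

    # Hypothetical sensing matrix: rows = sensor readbacks, columns = PZT excitations
    # (PZT1_PIT, PZT2_PIT, PZT1_YAW, PZT2_YAW), entries = response at the injection
    # lines (3.1, 4.1, 5 and 6.5 Hz).
    S = np.array([[1.00, 0.12, 0.02, 0.01],
                  [0.08, 1.00, 0.01, 0.03],
                  [0.02, 0.01, 1.00, 0.09],
                  [0.01, 0.02, 0.06, 1.00]])

    # The new input matrix is the inverse of the sensing matrix, so that POS responds
    # only to PZT1 and ANG only to PZT2.
    M_in = np.linalg.inv(S)

    # Normalize each row so its diagonal element is 1, which leaves the loop gain
    # (and hence the filter TF ~ OLTF at low frequency) unchanged.
    M_in /= np.diag(M_in)[:, None]
    print(np.round(M_in, 3))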
I removed "calibration" filter bank as it's not useful. Now the filter comprises a simple integrator (FM1 and 2 together), "boost" (FM3) which I disabled as our UGF is much higher, and LF100 (FM10).
The integrator is scaled such that its magnitude is 1 at 1 Hz. With just the integrator and LF100, the filter gain is numerically equal to the UGF in Hz.
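In other words, wherever LF100 and the PZT response are flat, |H(f)| ≈ gain/f, so the UGF in Hz is just the gain setting. A trivial numerical check (my own sketch, not code from the front end):

    import numpy as np

    gain = 5.0                    # filter gain setting
    f = np.logspace(-1, 2, 5000)  # Hz
    H = gain / f                  # integrator scaled to magnitude = gain at 1 Hz

    ugf = f[np.argmin(np.abs(H - 1.0))]
    print(f"UGF ~ {ugf:.1f} Hz for gain = {gain}")  # ~5 Hz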
The first plot shows the OLTF of all four DOFs when the filter gain was set to 10. Since the phase margin at 10 Hz is already getting thin, I changed the gain to 5 after this measurement (UGF = 5 Hz).
The second plot (bottom right) shows that the loops are separated from each other by more than 20 dB (look at red and blue for the PIT modulations at 3.1 Hz and 4.1 Hz, green and brown for YAW at 5 and 6.5 Hz).
When you need more BW for some reason, disabling LF100 will allow you to push the gain to 50 or so. Since the PZT starts to fall off from 20 Hz or so, a gain of 50 doesn't give you a 50 Hz UGF; you get about 30 Hz instead.
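A rough way to see why (my own sketch; the single-pole rolloff at 20 Hz is an assumption, the entry only says the PZT "starts to fall off from 20 Hz or so"):

    import numpy as np

    gain = 50.0
    f_pole = 20.0                 # assumed single-pole PZT rolloff
    f = np.logspace(0, 2, 5000)   # 1-100 Hz

    oltf = (gain / f) / np.sqrt(1.0 + (f / f_pole) ** 2)
    ugf = f[np.argmin(np.abs(oltf - 1.0))]
    print(f"UGF ~ {ugf:.0f} Hz (instead of {gain:.0f} Hz with a flat PZT)")  # ~29 Hz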
If you need even higher BW for whatever reason, Stefan says that the MadCityLabs control box lets you switch to a high-BW mode, which he disabled at some stage because the 2" PZT was oscillating with that setting.
I made the EY QPD servo similar to EX (PZT-based diagonalization, UGF = 5 Hz, without boost).
The black trace shows the OLTF when the gain is set to 50 without LF100.
On Friday Jason shot us in on the WHAM5 ISI Optical Table position. We started out needing to go South 2.4 mm, East 3.6 mm and CCW 880 urad. Vertically it was 1.1 mm low (see log 11061). A couple of adjustments with the DSCW springs later, Jason gave us the close-enough sign at 1 mm West, <400 urad CW, and good N-S. After that work, I checked the DIs and it looked like we had disturbed the vertical.
This morning we checked, and sure enough we were out of level across the Optical Table by almost a mm. We corrected this easily by bringing up the three low corners. I thought this would correct the low elevation, but the average elevation is still 1.1 mm low, just outside of tolerance. Friday's disturbance must have dropped rather than raised anything; I should have looked closer earlier. Today's vertical move disturbed the horizontal position slightly, improving the yaw by ~50 urad and shifting us 0.1 mm west and south. I'll check with the PTB whether this is close enough to move on.
Gate valves GV5 and GV7 were soft-closed at 9:00 am this morning to allow Apollo to do some craning.
After Apollo was done craning, GV7 was reopened at 10:52 am. GV5 will remain soft-closed.
This was done at the request of A. Pele and J. Warner to confirm a noise source at the Y-end. Also, the leak detector at BSC6 was shut down (left off).
I am out at the end station working on the table (sometimes on the side closer to the chamber), and the ISI tripped. Stage 1 reported the CPSs as the first trigger; the GS13s were the first trigger on Stage 2.
I don't think the plotting scripts work on the CDS iMacs, but these trips were probably due to me walking around.
No, sorry, that was me... too many of the same screens open.
On the Guardian Screen, I noticed a RED HAM4 ISI. Looking at the LOG, it said it had an ERROR from last Wed (4/9) after a WD Trip. Here are the last few lines from Guardian's Log regarding the issue:
20140409_18:58:49.069 ISI_HAM4 W: EzcaError: Could not connect to channel (timeout=2s): H1:ISI-HAM4_WD_MON_STATE_INMON
20140409_18:58:49.090 ISI_HAM4 ERROR in state WATCHDOG_TRIPPED_FULL_SHUTDOWN. See log for more info. MODE=>LOAD to reset
I went to MODE, hit LOAD (the HAM4 Guardian went white for tens of seconds and then came back YELLOW), and then hit EXEC (now it's GREEN).
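For the record, the same LOAD/EXEC sequence could presumably be scripted over EPICS. The channel name below is my guess from the usual Guardian naming pattern (H1:GRD-<node>_MODE), so treat this as a sketch, not a verified procedure:

    import time
    import epics  # pyepics

    mode = "H1:GRD-ISI_HAM4_MODE"  # assumed Guardian mode channel; check the MEDM screen

    epics.caput(mode, "LOAD")      # reload the node after the EzcaError
    time.sleep(30)                 # it can sit white for tens of seconds
    epics.caput(mode, "EXEC")      # resume execution; should come back GREEN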
Started the morning with several systems tripped. Arnaud posted about an EQ around 1pm on Sat, so the trips I'm posting about here are the ones from Sat around 5pm and Sun around 5pm.
So by about 9am, I got everybody back to untripped (GREEN). This was pretty easy:
NOTE: For the ISIs, I wasn't aware that Guardian DOES NOT take care of the Blend Filters for us. So Operators (as long as it's OK with the commissioners) should make sure the Blend Filters are enabled. Jim tells me there have been alogs stating what type of Blend Filters we are to engage per ISI.
Here is the list of commissioning tasks for the next 7-14 days:
Green team:
Red team:
Blue team (ALS WFS):
Blue team (ISCTEY):
SEI/SUS team:
model restarts logged for Fri 11/Apr/2014
model restarts logged for Sat 12/Apr/2014
model restarts logged for Sun 13/Apr/2014
No restarts were reported for any of the three days.
Almost certainly because of an earthquake, 3 ISIs tripped two hours ago. ETMX and ITMX tripped on the ST1 actuators, whereas ITMY tripped on the L4Cs (a measurement was still running). ETMY survived, though!
Chris, Alexa, Pablo, Keita, Sheila
This is a late alog of work done Thursday afternoon and Friday morning. The short summary is that we think the beam quality problem we are having on WFS B is caused by the Faraday used to reject the reflected beam. We don't think we have clipping between our on-table PZTs and the TMS QPDs, but we might have some in the path from the PZTs to the WFS.
First we used the technique Keita described in alog 11280 to look for evidence of clipping. When we dither the PZTs we see 30-40 dB more response in the QPD PIT or YAW than in NSUM, but for the WFS the difference is more like 20-30 dB, which suggested to us that we could have clipping in the WFS path itself. (We had already checked all the optics in this path on the table and didn't see anything that looked bad, so we suspected the Faraday.)
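For reference, the comparison can be illustrated like this (placeholder data and dither frequency, not the actual analysis code): estimate the height of the dither line in each channel with an FFT and take the ratio in dB.

    import numpy as np

    def line_amplitude(x, fs, f_dither):
        """Amplitude of the spectral line nearest f_dither in time series x."""
        win = np.hanning(len(x))
        spec = np.abs(np.fft.rfft(x * win)) / win.sum() * 2.0
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return spec[np.argmin(np.abs(freqs - f_dither))]

    fs, f_dither = 2048.0, 3.1   # sample rate and dither frequency (placeholders)
    t = np.arange(0, 60, 1 / fs)

    # Placeholder signals; in practice these are the QPD PIT (or YAW) and NSUM
    # channels recorded while the PZT is being dithered.
    pit = 1e-2 * np.sin(2 * np.pi * f_dither * t) + 1e-4 * np.random.randn(len(t))
    nsum = 1e-4 * np.sin(2 * np.pi * f_dither * t) + 1e-4 * np.random.randn(len(t))

    ratio_db = 20 * np.log10(line_amplitude(pit, fs, f_dither) /
                             line_amplitude(nsum, fs, f_dither))
    print(f"PIT/NSUM line ratio: {ratio_db:.0f} dB")  # 30-40 dB suggests no clipping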
On Friday morning Pablo and I went out to the end station and set up the beam profiler again (images attached). We started by measuring the beam in the path to WFS B (this has been the troublesome one for beam quality) at a point upstream of where we had been, and saw that the beam quality is also bad there. We moved the NanoScan to approximately the position of WFS B, and tried moving the first optic after the Faraday to see if this has the same impact on the beam quality that we have seen from moving the TMS QPD servo offsets (see alog 11072), shown on the second page of the attachment. It didn't, so we concluded that the problem must be upstream of that optic. We then tried moving the Faraday using the new 4-axis mount, and that did have a similar impact to moving the TMS QPD offsets. This may mean that the problem is in the Faraday. We tried several iterations of moving the Faraday in pitch, but didn't really succeed in improving the beam quality much; we actually made it worse in the horizontal direction, and made the beam quality going into the chamber worse. We set the Faraday angle to restore a nice Gaussian profile for the beam going into the chamber.
We then tried putting an iris into the WFS path just after the split from the path to the LSC diode (photos will be attached soon). This dramatically improved the beam quality on WFS B, but didn't change the situation on WFS A. We set the iris to be slightly more open than the position that gave us the best Gaussian fit to our beam profile, to allow for some alignment drifts. We saw that the iris really doesn't impact the beam quality on WFS A.
So as we left things, we had good beam quality on WFS B and on the beam going into the chamber, and the same so-so quality that we have had on WFS A since we reworked the path (alog 11074).
Thanks for letting this machine do this. Likely busy until late Sunday afternoon.
Phase 3b spectra (in chamber, in vacuum) were taken last week and have been processed today. During the measurement, HEPI was floating, and the ISI was isolated with level 3 and blends at "start".
Also, I plotted a comparison of the ETMX and ETMY main chain and reaction chain top mass transfer functions for the 6 DOFs, from last week's measurement.
SPECTRA
1) ETMy / ETMx spectra comparison SUS damping off
2) ETMy / ETMx spectra comparison SUS damping on
3) ETMy main chain damping on / damping off comparison
4) ETMy reaction chain damping on / damping off comparison
The spectra look good. Only the F2 OSEM of the ETMY main chain has an unexpected sharp peak at ~30 Hz (it can be seen in all 4 PDFs). The ETMy amplitude is higher than ETMx due to the ISI states being different. The reaction chain vertical and yaw damping could be improved, judging from the damping on/off comparison.
TRANSFER FUNCTIONS
5) ETMx / ETMy M0-M0 and R0-R0 TF comparison damping OFF
6) ETMx / ETMy M0-M0 and R0-R0 TF comparison damping ON
Again, the TFs look good and the ETMy results match those from ETMx, except for pitch, whose second mode (1.33 Hz) is shifted down for ETMx.
ETMY isn't working as well as ETMX.
ETMY is running TCrappy blends
ETMX is running TBetter with sensor correction (lower blend frequencies in rX and rY)
There is what looks like a loop instability around 10 Hz (Stage 1 maybe?)
Too much motion in rX & rY around 1/2 Hz (probably due to the difference in blend frequencies)
The references in the ETMX GS13 and ETMX OpLev plots are with sensor correction off. Sensor correction is giving more peaking at the microseism than we want, so I'm turning it off.
The low-frequency excess motion is likely because we haven't finished the Stage 1 tilt decoupling; Jim needs a few hours to finish that up.