It appears that none of the MEDM screen snapshots are working - as a result we are unable to check on the vacuum status remotely.
After Kyle finished active pumping on BSC9 for today, I took V and P TFs on the main chain of ETMx. The pressure has come down to 11 Torr, and the TFs look healthy. Recall that this suspension previously started rubbing at around ~400 Torr, so we should finally be in pretty good shape. We'll take final TFs on Monday, but this looks very good. Attached is just the Vertical TF, but the Pitch TF is also well matched to the model and can be found in the appropriate svncommon/Sus... directory on CDS for further weekend scrutiny if wanted. (Not committed to svn yet, however.) I also looked at the reaction chain vertical DOF, and its TF still looks healthy too.
(Entry by Kyle)
Will finish roughing tomorrow and switch over to the turbo
Motivated by the rubbing saga of ETMX (15985), I have compiled a list of how much each stage of each suspension sags with temperature. This log is just like LLO 15636, except that instead of just top masses, all the stages are included here. The derivation of the temperature sensitivity is in LLO 12581.
Table 1: Vertical sag with temperature (microns / C)
| Stage | QUAD | BS | HLTS | HSTS | OMC | TMS |
|---|---|---|---|---|---|---|
| 1 (top mass) | -106 | -38 | -37 | -62 | -50 | -88 |
| 2 | -182 | -57 | -60 | -96 | -64 | -129 |
| 3 | -223 | -58 | -60 | -96 | -- | -- |
| 4 (test) | -224 | -- | -- | -- | -- | -- |
These numbers were calculated using the derivation in LLO log 12581. The formula from that log is
dz/dt = -254*m*g/K [microns / C]
where m is the total suspended mass hanging from a given SUS stage [kg], K is the total vertical stiffness supporting that mass [N/m], and g is the gravitational acceleration [m/s^2]. The negative sign indicates a drop in height with increasing temperature. The thermal sensitivity of the Young's modulus of maraging steel is given in "The maraging-steel blades of the Virgo superattenuator," Braccini et al., Meas. Sci. Technol. 11 (2000).
Table 2: Relevant SUS parameters
| Spring properties | QUAD | BS | HLTS | HSTS | OMC | TMS |
|---|---|---|---|---|---|---|
| Stage 1 mass (kg) | 123.32 | 40.42 | 36.46 | 8.99 | 10.02 | 123.94 |
| Stage 1 stiffness (N/m) | 2889 | 2684 | 2437 | 360 | 500 | 3519 |
| Stage 2 mass (kg) | 101.32 | 27.79 | 24.37 | 5.87 | 7.12 | 79.86 |
| Stage 2 stiffness (N/m) | 3333 | 3540 | 2689 | 439 | 1229 | 4847 |
| Stage 3 mass (kg) | 79.32 | 14.21 | 12.14 | 2.89 | -- | -- |
| Stage 3 stiffness (N/m) | 4875 | 83213 | 189190 | 43139 | -- | -- |
| Stage 4 mass (kg) | 39.64 | -- | -- | -- | -- | -- |
| Stage 4 stiffness (N/m) | 72139 | -- | -- | -- | -- | -- |
| Parameter file | quadopt_fiber | bsfmopt_glass | hltsopt_metal | hstsopt_metal | omcsopt_metal | tmtsopt_production |
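As a cross-check, the Table 1 entries can be reproduced from Table 2 by summing the per-stage sags down the chain, since each stage hangs from all the stages above it. A minimal sketch in Python, using the QUAD column of Table 2 (the variable names are mine, for illustration):

```python
# Per-stage vertical sag with temperature: dz/dT = -254 * m * g / K  [microns / C]
# The total sag of stage n is the sum of the per-stage sags of stages 1..n,
# because each stage rides on everything suspended above it.
g = 9.81  # gravitational acceleration [m/s^2]

# QUAD column of Table 2: suspended mass [kg] and vertical stiffness [N/m]
quad_masses = [123.32, 101.32, 79.32, 39.64]
quad_stiffness = [2889, 3333, 4875, 72139]

per_stage = [254 * m * g / k for m, k in zip(quad_masses, quad_stiffness)]

total = 0.0
for i, s in enumerate(per_stage, start=1):
    total += s
    print(f"Stage {i}: -{total:.0f} microns/C")
```

Running this reproduces the QUAD column of Table 1 (-106, -182, -223, -224 microns/C), which is a useful sanity check that the two tables are mutually consistent.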
Rick, Sudarshan, Shivaraj, Dave, Daniel:
Late entry from Friday morning. Daniel found that the PCAL binary output switching of the servo, laser power, and shutter for PCAL ETMY was due to a misconfiguration within h1ecaty1. He fixed the configuration and restarted all three PLCs on h1ecaty1, then burt restored these IOCs.
No unexpected restarts all 3 days.
Wed 7th, Thu 8th: no restarts reported
Friday 9th: no FE restarts, Beckhoff restarts as part of PCAL install at ETMY
Y1PLC1 11:42 1/9 2015
Y1PLC2 11:42 1/9 2015
Y1PLC3 11:42 1/9 2015
Alexa, Keita, Evan
Alexa and I performed our usual loss measurement: Pon = 1257(5) ct, Poff = 1304(2) ct, giving a loss of 140(16) ppm in the Y arm.
Next, we wanted to assess the amount of scatter in the arms by looking at the amount of IR light on the baffle PDs. With the arm locked on IR and the IR WFS running, we misaligned the green light and looked at the ITM/ETM baffle PDs, with both their 40 dB and 60 dB gain settings. Then we removed the IR light from the arm by unlocking the mode cleaner and misaligning PR2, which allowed us to measure the dark counts on the baffle PDs.
Relevant times are as follows, all on 2015-01-09 UTC:
We're still working on analyzing the data.
For now, here is a DTT plot of our measurements showing the gain settings, the IR transmission, and the power on the baffle PDs.
BPD powers are given in the following table. Dark powers have been subtracted. The data are taken from the BAFFLEPD_#_POWER channels, which already contain calibration from counts to milliwatts. Plots, data, and code attached.
| BPD | Power (nW), 40 dB | Power (nW), 60 dB |
|---|---|---|
| ITMY 1 | 111(13) | 118(11) |
| ITMY 2 | 4(12) | 10(9) |
| ITMY 3 | 15(11) | 20(9) |
| ITMY 4 | 110(13) | 116(10) |
| ETMY 1 | 65.6(2.3) | 65.5(1.8) |
| ETMY 2 | 1.1(1.3) | 1.18(24) |
| ETMY 3 | 3.1(9) | 3.05(19) |
| ETMY 4 | 105(13) | 104(12) |
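For reference, the dark subtraction and the quoted uncertainties can be formed in the usual way: subtract the mean dark power, and combine the statistical uncertainties of the bright and dark samples in quadrature. A minimal sketch (the sample values below are made up for illustration; the real data come from the BAFFLEPD_#_POWER channels):

```python
import numpy as np

def dark_subtracted(bright, dark):
    """Return (mean, uncertainty) of bright minus dark, in the same
    units as the input samples. The uncertainty is the quadrature sum
    of the standard errors of the two means."""
    bright = np.asarray(bright, dtype=float)
    dark = np.asarray(dark, dtype=float)
    mean = bright.mean() - dark.mean()
    err = np.hypot(bright.std(ddof=1) / np.sqrt(bright.size),
                   dark.std(ddof=1) / np.sqrt(dark.size))
    return mean, err

# Illustrative numbers only (not the real channel data), in nW:
bright = [120.0, 118.0, 122.0, 119.0]
dark = [8.0, 9.0, 7.0, 8.0]
power, sigma = dark_subtracted(bright, dark)
```

This mirrors the "value(uncertainty)" convention used in the table above.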
Alexa, Kiwamu, Sheila, Koji, Evan
Finally we were able to lock DRMI with the high-bandwidth ASC loops.
The key here was to move IM4 so as to center the forward-transmitted beam on POP B. In addition to reducing the amount of offset for the INP error signals, we believe (based on camera images) that this reduced the amount of light scattered on the PR2 baffle.
After moving IM4, we then adjusted PRM and PR2 so that PRX would lock again. We then proceeded with the usual initial alignment of the corner optics.
Once DRMI had locked, we engaged the MICH, SRC1, and SRC2 loops without issue, and then transitioned them to high bandwidth (by turning off the -20 dB filters and ramping down the BS oplev damping).
Then we were able to engage the PRC1_P and PRC2_P loops without issue, and transition them to high bandwidth (by turning off the -20 dB filters, and turning on the PRM M1 and PR3 M1 locking filters).
Initially we had difficulty turning on PRC1_Y and PRC2_Y. However, we found that we could get them to work by engaging them in close succession. Kiwamu conjectures that there may be some gain hierarchy at work here.
Then we were able to engage INP1_P. Initially we put in an offset at the error point so that the loop would not immediately try to integrate away the error signal dc value. However, we were able to turn the offset off without issue.
The only tricky business here was INP1_Y. At one point (before working on the PRC loops), we turned it on (with an offset) and found that we had to flip the sign of the gain (from 300 ct/ct to -300 ct/ct) to keep the POP buildup stable. However, once we engaged it last (after all the other loops), we found that the original gain works fine. It's still unclear what's going on here.
The new slider values for IM4 are outside the "safe" range found by Keita and Alexa (LHO#). But since the IMC pointing has been changed since then, it's not clear that these safe values are still valid.
We started a (hopefully) long DRMI1f+ASC lock at 2015-01-10 05:21:00 UTC.
When DRMI locking becomes sluggish, we found it helpful to misalign the SRM, wait for PRMI to lock, then adjust PRM and BS to maximize POPAIR_B_RF18. Upon breaking the lock and realigning the SRM, DRMI appears to lock more quickly.
These are the calibrated error signals and the calibrated unsuppressed displacement noises for the vertex DOFs for this DRMI lock. As instructed by Kiwamu, I de-whitened the corresponding OAF channels with the filter zpk([100; 100], [1; 1], 1) (gain 1 at DC). The RMS residual motion is: MICH ~ 50 pm, PRCL < 1 pm, SRCL ~ 5 pm.
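For concreteness, zpk([100; 100], [1; 1], 1) is a pair of zeros at 100 Hz and a pair of poles at 1 Hz, with the overall gain scaled so that the response is unity at DC: flat below 1 Hz, falling as 1/f^2 between 1 and 100 Hz, and flat at 1e-4 above 100 Hz. A sketch of this filter's frequency response in Python with scipy, just to illustrate the shape (this is not the code actually used):

```python
import numpy as np
from scipy import signal

# zpk([100;100],[1;1],1): two zeros at 100 Hz, two poles at 1 Hz,
# gain normalized to 1 at DC. Expressed in rad/s for scipy's analog zpk form:
zeros = [-2 * np.pi * 100] * 2
poles = [-2 * np.pi * 1] * 2
# H(0) = k * prod(-z) / prod(-p) = k * (100/1)**2, so k = 1e-4 for unity DC gain
k = (1.0 / 100.0) ** 2

f = np.logspace(-1, 3, 500)             # 0.1 Hz to 1 kHz
w, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
# |h| is ~1 below 1 Hz, rolls off as 1/f^2 between 1 and 100 Hz,
# and flattens near 1e-4 above 100 Hz
```

This is the inverse of a whitening filter with zeros at 1 Hz and poles at 100 Hz, which is why applying it recovers the physical (de-whitened) spectrum.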
Suspension clearance issues require BSC9 incursion to correct -> Aborted pump down

Kyle, Gerardo -> 4 hr. vent of X-end

Kyle, Gerardo, Bubba -> Removed BSC9 West door -> Installed BSC9 West door

Kyle -> Decoupled purge/vent line -> X-end "Blow down" air dew point measured -11C -> Began pumping BSC9 annulus -> Expect to begin "attended" rough pumping of X-end late Saturday morning for ??? duration and finish on Sunday
Note, for clarification: Kyle's alog here reports the day's activities as seen through VE eyes, not an update on the pump down starting this weekend. (I first read this alog and had another heart attack until I figured out that the format of his alog was a summary. The pump down abort happened this morning, followed by a vent and incursion.)
With a fresh set of expert eyes on site in the form of Brett Shapiro, we embarked on evaluating ETMx in search of the pesky rubbing EQ stop. Having theoretically narrowed the possibilities down to the PUM stage of the main chain, I started by evaluating the bottom barrel stops of the PUM on the right side (SUS convention, looking from the reaction chain down the arm). While the stops looked closer than the recently adjusted test mass stops (as was expected), I did not see anything glaring. I then moved to the left side of the suspension and very quickly identified the possible culprit. The bottom barrel stop on the left side nearest the reaction chain was very close to the mass; although not touching in the in-air state (as has been verified multiple times via in-air TFs), it was ~0.1-0.2 mm from the mass. I also noted that neither of the left-side lower barrel stops had its lock nut tight; in fact, they were several millimeters from the EQ stop mount bar. We continued to evaluate the remaining bottom-side stops at all stages, looking for other possible sources of rubbing, but did not find any. As per guidance from SYS/SUS representatives, we readjusted ALL bottom-side EQ stops at the PUM, UIM, and Top Mass stages, erring on the larger side of the 1 mm spec. We also re-verified that the Test Mass and ERM stops were set to the 1.5 mm spec set by Betsy earlier this week. Fingers crossed!
The above narrative focuses on the main goal of the incursion, finding the rubbing source. The step by step checklist summary is as follows:
Betsy plans to run remote TFs on this SUS over the weekend to evaluate its health on an ongoing basis. Hopefully she will NOT call Kyle's cell!
ETMX
Linking some BSC-ISI modeling notes from SEI log 672.
I got the LLO test mass charge measuring script running here at LHO. There was an NDS error initially that Dave Barker magically made go away. Otherwise, the script required no modification other than adding some missing commas in the bias voltage definition. There is a timing error that pops up at pseudorandom times; in these cases, the excitation appears shifted from the oplev response by 9 sec. See UR_timing_err.pdf. Of all the ESD quadrants it occurs most often on the UR, though I saw it once on the upper left. I am not sure if it has something to do with the settings of the script. As things stand, the timing error has appeared in 3 to 5 out of 20 measurements in each run. I have run the script in the current configuration 7 times.
In any case, I was able to get a 2 hour trend of the ESD charge. See ETMY_Charge_Trend_9Jan2015.pdf. The charge wanders a bit, but has some consistency.
The charge measuring script is:
.../SusSVN/sus/trunk/QUAD/Common/Scripts/ESD_UL_LL_UR_LR_charge_07-H1.py
The analysis script, which I am still updating for LHO is:
.../SusSVN/sus/trunk/QUAD/Common/Scripts/ESD_UL_LL_UR_LR_analysis07_BrettTest.m
A lot of work has been done on HAM3-ISI for the past month. I'm trying to summarize here what we know and which path we should follow.
. We see a high-Q peak at ~0.65 Hz in all the local sensors (GS13 + CPS) and all the DOFs except RZ.
. This peak is present only when the Z sensor correction is ON. It doesn't matter if the Z sensor correction is coming from HEPI or the ISI, the peak is present when one of them is ON (see here and here). It doesn't seem to have any link with the X,Y sensor correction.
. The peak seems non-stationary. First, the peak is not the same if the sensor correction is done with the ISI or with HEPI (see first attachment). Second, the amplitude and frequency of the peak vary with time (see second attachment and here).
. The problem doesn't come from the ground STS. We checked the electronics chain (swapping the distribution chassis) and tried another ground instrument (see here); the problem is independent of the instrument (see here).
. This is not a mechanical/rubbing issue: a driven transfer function around this frequency shows perfect coherence and no sharp peak.
. The problem doesn't come from HEPI. We turned OFF the HEPI loops with no improvement (see here)
. We don't think it comes from a specific sensor. Everything looks fine without sensor correction, plus the problem shows up in all the sensors (if it were, say, a CPS issue, it wouldn't appear in the GS13).
. It might come from the blend: by switching to a 750mHz blend, the peak seems to disappear. However, we know that the 750mHz blend is obsolete, so I wouldn't draw solid conclusions from that...
. It might come from the drive (see here)
Now it seems that the peak appears even when the sensor correction is OFF (see third attachment)! I don't know if that's a good or bad thing, but it's the first time we've seen it. The only thing I did this afternoon was to switch blend filters on all DOFs and turn the Z sensor correction ON and OFF... The ground motion doesn't seem any different than usual.
. Keep investigating to see if the sensor correction is the cause of this peak
. Implement a slightly different blend in Z to see if it makes a difference.
. PR2 and MC2 have a mode at 0.67Hz. Could it be a weird coupling between the sensor correction and suspension? We'll try to turn the damping loops ON and OFF to see if it makes a difference
. The ISI model has been restarted before, but we haven't tried to restart the actual computer
We'll also take driven transfer functions of MC2 and PR2 to confirm that their resonance Q is much lower than this feature's. Further, we'll not only try an ON/OFF test of the SUS damping, but also try *changing* the damping filters (something we want to do eventually anyway).
Since we ruled out the sensor chains from our investigation (I'll do a summary alog about that in a minute), I've looked at the actuation chain.
I took the transfer function between the actual output of the actuators (counts) and the corresponding voltage read by the coil driver readback channels. These are the same signals in different units, so apart from some coloring from the electronics, we shouldn't see any sharp peak in the transfer function. I did this exercise on HAM3 (HEPI Z sensor correction ON and OFF) and on HAM2 (sensor correction OFF) for comparison.
The results are interesting. First, I'm not sure I understand why we don't have perfect coherence (noise?), but I don't think that's linked to our issue. More interestingly, we can see a small peak around 0.66 Hz on HAM3 even when the sensor correction is OFF (which is not the case on HAM2).
This might be an indication that the problem is coming from the actuators.
I wrote a script which compares a target's autoBurt.req file against its safe.snap file. This is a channel list check, verifying that the safe.snap has no more and no less channels compared to autoBurt.req. I am not checking channel values or channel read-only-status.
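The comparison itself amounts to a set difference over the channel names in the two files. A minimal sketch of the idea (not the actual script; the BURT file layout is simplified here, and real files also carry per-channel values that this check deliberately ignores):

```python
def burt_channels(text):
    """Extract channel names from BURT-style file text: skip the header
    block and blank/comment lines, take the first token of each remaining
    line, and drop a leading read-only ("RO") marker if present."""
    channels = set()
    in_header = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("--- Start BURT header"):
            in_header = True
            continue
        if line.startswith("--- End BURT header"):
            in_header = False
            continue
        if in_header or not line or line.startswith("#"):
            continue
        tokens = line.split()
        name = tokens[1] if tokens[0] == "RO" else tokens[0]
        channels.add(name)
    return channels

def compare_req_snap(req_text, snap_text):
    """Return (missing_from_snap, extra_in_snap) channel-name sets."""
    req = burt_channels(req_text)
    snap = burt_channels(snap_text)
    return req - snap, snap - req
```

A safe.snap passes the check when both returned sets are empty, i.e. its channel list exactly matches the autoBurt.req.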
I ran the script on all the H1 models (100 of them). For models I manage (IOP and PEM) I fixed any mismatches. For any system which was missing a safe.snap, I created one by snapshotting the system.
The end result is that all safe.snaps are consistent with their autoBurt.req files, with the exception of:
certain SUS safe.snaps still retain the GUARD channels which were removed this Tuesday; Jeff K is going to re-snap these this weekend
h1pslfss.snap is missing some DINCO channels; I'll work with PSL to resolve this
The PSL FSS fix was trivial: I added the missing channels to h1pslfss_safe.snap by hand and committed it to svn.
Probably an issue with a CDS server? I can't see the Ops Schedule, the LHO CDS Wiki, or the LHO CDS webpage.