Arnaud, Apollo, Thomas. We made a lot of progress on the ETMY OL. The receiver is 90% finished, with only a cable left to run from the whitening chassis to the QPD, and we've confirmed all the mechanical components fit well together. The transmitter pylon ended up being too close to the viewport to fit all the parts (nozzles, adapters and bellows), so we need to move it back a few inches, but that requires moving a pipe bridge; Apollo is going to start on that tomorrow morning. We are still on track for a fully working system by tomorrow night, including calibration and whitening.
Alexa, Sheila, Keita, Fred, Chris
This morning after we moved the WFS, Alexa remeasured the sensing matrix and got (in counts/urad):
      | ETMX PIT | ITMX PIT | ETMX YAW | ITMX YAW
WFS A | -1809    | 828      | -896     | 894
WFS B | -362     | -311     | 370 (coherence 0.7) | 712
We also checked the phasing by injecting a signal to the laser frequency (through the PDH servo board) and looking at I and Q from each segment. Then we struggled to lock the loops.
Later on we remeasured the sensing matrix again, several times. We saw that the measurement isn't particularly sensitive to the centering on the WFS. We tried using DTT to do the excitation, and tried using AWGGUI and letting the excitation run for a few minutes before we took the measurement to avoid any transients, but that also did not seem to have a big impact.
Some elements were consistent (within 20-30%) each time we measured, and consistent between Alexa's morning measurement and the afternoon measurements. These were ETMX yaw to WFS A (-900 counts/urad average) and ITMX yaw to WFS B (562 counts/urad average). ETMX yaw to WFS B was consistently small and didn't have much coherence, so we have set it to zero. Each time we measured ITMX yaw to WFS A we got a magnitude of 700-750, but the phase varied (136, -134, -38, -21, 19 degrees). We decided to also set this to zero, and now have a yaw matrix that is just: -1.1 WFS A to DOF 1 (ETMX yaw) and 1.897 WFS B to DOF 2 (ITMX yaw). This seems to be working OK.
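For concreteness, the resulting yaw input matrix is just a diagonal 2x2 map from WFS error signals to the two DOFs. A minimal numpy sketch (the WFS signal values below are made up for illustration):

```python
import numpy as np

# Yaw input matrix as set above: the cross terms (ETMX yaw to WFS B,
# ITMX yaw to WFS A) have been zeroed out, leaving a diagonal map.
input_matrix = np.array([
    [-1.1,  0.0],    # DOF 1 (ETMX yaw) <- [WFS A, WFS B]
    [ 0.0,  1.897],  # DOF 2 (ITMX yaw) <- [WFS A, WFS B]
])

wfs = np.array([100.0, 50.0])  # illustrative [WFS A, WFS B] signals (counts)
dofs = input_matrix @ wfs
print(dofs)  # -> [-110.0, 94.85]
```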
For pitch we got values for WFS A that are consistent with Alexa's measurement from this morning, but not for WFS B. For WFS B PIT we got -630 to -853 counts/urad for ETMX pit and 566 counts/urad for ITMX pit, a sign flip compared to the earlier measurement.
Right now we have all four DOFs locked, using the new YAW matrix and Alexa's measurement from this morning for PIT. This seems to be stable; we pushed the gain up a little, so the current gains are -0.002 for DOF 1+2 YAW, -0.002 for DOF 1 PIT, and 0.005 for DOF 2 PIT. These have been locked with good buildup for about 20 minutes.
I have left 2 excitations running, on ITMX Yaw we are injecting 1 count at 3 Hz into the L2 Lock Yaw filter, and for ETMX 0.3 counts at 1.5 Hz. Hopefully this will stay locked a good part of the night so that we can look at how stable the sensing matrix was overnight.
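One way to track the sensing matrix overnight from these excitation lines is single-bin demodulation of the WFS signals at the drive frequencies. A rough sketch on simulated data (the sample rate, coupling, and noise level are invented for illustration, not our actual channel parameters):

```python
import numpy as np

fs = 256.0                    # assumed sample rate (Hz)
t = np.arange(0, 60, 1 / fs)  # one minute of data

def demod(signal, f0, t):
    """Complex amplitude of `signal` at frequency f0 (single-bin demodulation)."""
    return 2 * np.mean(signal * np.exp(-2j * np.pi * f0 * t))

# Simulated WFS response to the 3 Hz ITMX yaw line, plus sensor noise
rng = np.random.default_rng(0)
wfs_b = 562.0 * np.sin(2 * np.pi * 3.0 * t) + rng.normal(0, 10, t.size)

amp = demod(wfs_b, 3.0, t)
print(abs(amp))  # recovers roughly the injected 562-count coupling
```

Averaging over an integer number of cycles (as the 60 s stretch above does for 3 Hz) keeps spectral leakage from biasing the estimate.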
Aidan. Thomas. Greg. Eric G. Dave H.
Flow meter issue
Wired up flow meter. The wiring was
CO2 Controller Chassis
Leaky laser was enabled
Laser (SN 20510.20816D) was run. Maximum output power was 49.4W
Beckhoff issues
CO2 binary outputs not working - traced problem to Beckhoff System Manager not mapping CO2 output hardware to PLC variables.
Plumbing
It actually takes a large number of ducks to be in a row to turn these CO2 lasers on. So here's how to do it ...
Ensure that the laser power cables are connected from the Instek Power Supply (Mech Room) to the CO2X table feedthroughs and, inside the table, connected to the RF driver
The success of this can be gauged by checking:
Jim Warner, Greg Grabeel. Hugh noted there were some issues with a limited range of motion on HAM4. Jim and I checked for rubbing but were not able to find anything readily apparent. We replaced the Parker valve on the NW corner horizontal actuator. At ~2:45pm we started running the actuator in bleed mode. At ~4:30pm we changed the valve positions to run mode. There were no leaks on the Parker valve. I did an air purge on the accumulators shortly after switching over to the run state. Jim will be running linearity and transfer function tests to see if this fixes the issues.
That should read resistor stack and not accumulator.
ETMY ISI tf running from opsws0, should run til mid-morning. HAM5 ISI tf running from opsws1, should run in slightly less time. HAM4 HEPI local basis test running from opsws2, just a couple of hours. Anyone care to join a pool on which ones crash, with extra payout for guessing the reason?
- 9:30 am Travis, Gary, Giles and Margot to the LVEA, West bay monolithic work.
- 10:05 am Dale +1, to the LVEA, tour the LVEA.
- 12:33 pm Aaron to Y-End, check on OpLev cable.
- 12:50 pm Thomas and Arnaud to End stations, X-End to pick up tools, then to Y-End optical lever work.
- 12:55 pm Karen to Y-End.
- 1:14 pm Travis and Margot to Y-End, tool retrieval.
- 1:45 pm Travis, Gary, Giles and Jason to LVEA, west bay area for monolithic work.
- 1:55 pm John to LVEA, west bay area, monolithic work inspection.
- 2:37 pm Andres to LVEA, 3rd IFO parts search and retrieve by HAM3.
- 3:22 pm Dale and Margarita to LVEA, LVEA tour.
- 3:22 pm Cyrus to X-End, to retrieve dead iMac.
Andres R. & Jeff B. We made several centering adjustments to the BOSEMs and reset all to 50% light. The latest set of transfer functions show the undamped coupling issues in the 03/25/14 data have been reduced or eliminated. The plots for the last set of TFs are attached below. The plots for the power spectra taken on 03/25/14 are also included.
Because the spectra looked suspicious in the low frequency band, I retook some data from yesterday with SR3 on the test bench (M3 osems were disconnected), and compared it with the last measurements. In fact, the feature is not present anymore (green after vs blue before).
Other than this, spectra and transfer functions look acceptable for installation.
I upgraded and rebooted the tape backup machine this afternoon to clear up a problem which turned out to be just the tape head needed cleaning. The strange thing is that I cleaned the head just three days ago, I'm not sure why that didn't appear to take.
I've installed OS updates on opsws0, opsws7, and operator1, and moved opsws0 and operator1 to the workstations subnet. These were the machines missed in the last round of updates. All of the control room workstations have now been moved to the new subnet.
With the upcoming TCS CO2 work, we needed to work out any bugs in software land before trying to turn on the laser. A few additions (rotation stage) and deletions caused the old code to not work very well, so we re-mapped the configuration and re-linked the software to hardware. This solved the problem of getting the hardware channels all the way through to EPICS. A few more details:
- In order to not trip all the watchdogs in HAM2/3, Sheila temporarily turned off the connection between ISC and SUS, as well as putting the MC Guardian in a down->paused state.
- We created a new boot project for PLC3.
- Committed the new system manager to SVN.
(Sheila, Alexa)
Since the beam on the WFS seemed a little large, we moved WFSA closer to the lens by 5 inches and WFSB closer to the lens by 9 inches. Hopefully, this helps keep the sensing matrix more consistent when we recenter the beams. TBD...
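For a rough sense of why moving the WFS along the beam changes the spot size: the Gaussian beam radius grows with distance from the waist. A sketch with entirely made-up waist and wavelength numbers (not our actual telescope parameters):

```python
import numpy as np

lam = 532e-9              # green ALS wavelength (m)
w0 = 250e-6               # assumed waist radius (m)
zR = np.pi * w0**2 / lam  # Rayleigh range (~0.37 m for these numbers)

def spot_size(z):
    """Gaussian beam radius at distance z (m) from the waist."""
    return w0 * np.sqrt(1 + (z / zR) ** 2)

# Moving a sensor several inches along the beam changes the spot noticeably
for inches in (0, 5, 9):
    print(inches, spot_size(inches * 0.0254))
```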
Here is the list of commissioning tasks for the next 7-14 days:
Green team:
Red team:
Blue team (ALS WFS):
Blue team (ISCTEY):
SEI/SUS team:
Daniel means increasing the UIM driver range, as per ECR E1400164 and integration issue 762. Note also that, after conversations with LHO staff and subsequent follow-up with P. Fritschel, it has been agreed that we should modify all four UIM drivers (not just the ETM drivers as mentioned in the current versions of the ECR and II). Also, by ETMY diagonalization he means (once the optical lever installation is complete):
- Balancing the coils on the UIM and PUM stages
- Gathering measurements, designing, and installing filters for:
  - Length-to-angle frequency dependent decoupling
  - P and Y plant compensation filters for WFS (see LHO aLOG 11067 for the list of measurements)
Weekend report
model restarts logged for Sat 29/Mar/2014
no restarts reported for Sat
model restarts logged for Sun 30/Mar/2014
2014_03_30 00:42 h1fw0
Unexpected restart of h1fw0
Sheila, logged in as Alexa.
I spent some time working on the ALS WFS today. The guardian can now be used to turn on and off the WFS, although to get started we still need to align the cavity pretty well by hand and then center the WFS by hand.
I was able to lock one DOF (DOF_2_Y) with a 1Hz bandwidth (transfer function attached, this was using a gain of -0.05). However, when I tried this after recentering the WFS it was no longer stable, so I've set it back to -0.001
One difficulty I had today is that the sensing matrix changes enough as the WFS centering changes to make the loops become unstable. When I arrived I centered the WFS, locked all 4 DOFs, then the centering changed and the pit DOFs moved away from 0 until they saturated the DAC.
The second screen shot attached is the measurement of the sensing matrix for ITMX pitch that Alexa and Daniel did on Friday, next to one I did today. You can see they are totally different. After recentering I measured the sensing matrix again, and got back a value similar to Friday's.
Assembled aLIGO RGA on BSC6 and rough pumped RGA volume -> replaced gauge controller and foreline pirani gauge on MTP (gauge exhibiting failure mode of losing "enhanced" or stable readings in turbulent flow conditions, controller losing power to B channel pirani (foreline) intermittently) -> Demonstrated MTP "Safety Valve" closing as a function of foreline pressure setpoint when in "Turbo" mode -> Pumping BSC10 annulus 2100 hrs. local -> Kyle leaving site now
After a lot of struggle with the BSC ISIs this week, we finally have all commissioned BSCs under guardian control.
The BSC seismic stack is one of the more complicated systems we've deployed, as it currently consists of three nodes in a "managed" configuration: two separate nodes for the two ISI stages (e.g. ISI_ITMY_ST1 and ISI_ITMY_ST2), and a "chamber manager" (SEI_ITMY) that coordinates the activity of the two subordinate ISI stages. USERAPPS location of the top level modules (and primary loaded library):
USERAPPS/isi/common/guardian/SEI_ITMY.py (USERAPPS/isi/common/guardian/isiguardianlib/BSC_MANAGER)
USERAPPS/isi/common/guardian/ISI_ITMY_ST1.py (USERAPPS/isi/common/guardian/isiguardianlib/ISI_STAGE)
USERAPPS/isi/common/guardian/ISI_ITMY_ST2.py (USERAPPS/isi/common/guardian/isiguardianlib/ISI_STAGE)
All single ISI stages share the exact same code, for each BSC stage as well the single HAM stage. Here are the system graphs for the SEI manager and ISI stage 1:
[system graphs: SEI manager (left), ISI stage 1 (right)]
The manager (left above) has three main requestable states: READY, DAMPED, ISOLATED. The READY state puts both of the ISI stages in READY, the DAMPED state puts them both in DAMPED, and the ISOLATED state puts both into isolation levels that can be specified in the manager system description module (USERAPPS/isi/common/guardian/SEI_*.py). At the moment all systems are using their default configuration, which is to operate the ISI stages in the HIGH_ISOLATED state, which corresponds to "lvl3" from the old "command" scripts. This can't currently be changed on an individual BSC basis, due to a guardian core issue, but I'm working on it.
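Schematically, the manager-to-stage fanout described above can be sketched in plain Python (this is a hypothetical illustration, not the actual guardian code; only the node and state names follow the log):

```python
# Hypothetical sketch of the SEI chamber-manager fanout described above.
# Per-stage isolation levels mimic what the SEI_*.py system description
# module would specify; HIGH_ISOLATED is the default ("lvl3").
DEFAULT_ISOLATION = {"ST1": "HIGH_ISOLATED", "ST2": "HIGH_ISOLATED"}

class IsiStage:
    """Stand-in for a subordinate ISI stage node (e.g. ISI_ITMY_ST1)."""
    def __init__(self, name):
        self.name = name
        self.state = "READY"

    def request(self, state):
        self.state = state

class SeiManager:
    """Stand-in for the chamber manager (e.g. SEI_ITMY)."""
    def __init__(self, chamber, isolation=DEFAULT_ISOLATION):
        self.stages = {s: IsiStage(f"ISI_{chamber}_{s}") for s in ("ST1", "ST2")}
        self.isolation = dict(isolation)

    def request(self, state):
        if state in ("READY", "DAMPED"):
            for stage in self.stages.values():
                stage.request(state)
        elif state == "ISOLATED":
            # ISOLATED fans out to the per-stage isolation levels
            for key, stage in self.stages.items():
                stage.request(self.isolation[key])

mgr = SeiManager("ITMY")
mgr.request("ISOLATED")
print({s.name: s.state for s in mgr.stages.values()})
```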
The degrees of freedom for which to restore cart bias offsets can also be specified on a per stage basis. Note the ISI_ITMY_ST1 graph above has "RESTORE_ISO_CART_BIAS_*_RZ" states, and no corresponding states for the other degrees of freedom (X, Y, Z, RX, RY). This indicates that we are restoring only RZ target cart bias offsets for this stage.
There are a couple of important things to note about the current isolation procedure:
We had a lot of trouble dealing with the T240s in stage 1. Their extreme sensitivity to pitch causes them to saturate if a large pitch cart bias offset is applied to the stage. We therefore spent a lot of time trying to figure out how to gracefully bring them in and out of the loop, by reducing their gain and/or changing the blend filters during the isolation procedure. All of these approaches have problems, though:
This makes things very difficult if we need to hold pitch offsets on any of the ISIs. It looks like we need to fix the synchronous switching of the T240 compensation filters before we can handle pitch cart bias offsets.
However, after speaking with the integration team, it seems that it's ok to operate temporarily in a mode where we only restore RZ cart biases. Any needed pitch offsets can be held in the suspensions for now. Therefore we do not need to change the T240 gains or blend filters during the isolation procedure. This is good news, since it means we can run with the current guardian behavior. Obviously this will have to be fixed down the line, as we will likely want to hold pitch offsets on the ETMs, but we have a workable solution for the moment.
With the above configuration we can successfully isolate the BSCs without touching the T240s or blend filters. It seems to be fairly robust.
Important operator notes:
The guardian nodes can be stopped entirely ("guardctrl stop SEI_ITMY ISI_ITMY_ST1 ISI_ITMY_ST2"), at which point all the old command scripts can be used. This should obviously be done only as a very last resort.
The SEI managers represent a new step for guardian, as their primary task is to control the state of other guardian nodes, which we haven't needed up to this point. Getting their behavior robust has been quite tricky. It necessitated a lot of work on the guardian built-in "manager" library, as well as a lot of experimentation with the SEI manager module itself. The manager is designed to be fairly robust against commissioner noodling with its subordinates, but that is also fairly tricky to get right in a robust and useful way. Some things will need to be improved, but this is the behavior currently:
While things seem to be working well at the moment, I certainly don't claim that all the bugs have been excised. If the manager does get hung in some state, resetting to the INIT state should clear everything, reset the ISIs, and get things back on track.
guardctrl start ISI_ETMX_ST{1,2} && guardctrl start SEI_ETMX (this should then remove the "WIP" masks over the BSC ETMX SEI and ISI guardian boxes in the GUARD_OVERVIEW screen).
Note, the following code version number are applicable to the above configuration:
A. Staley, S. Dwyer, J. Kissel. After speaking with Sheila and Alexa, trying to understand the spectrum in LHO aLOG 11026, I realized that -- because of so much unexpected excess noise -- the current ALS setup has diverged significantly from the baseline plan. As such, we've worked together to create a diagram that is to-the-best-of-(my)-knowledge accurate, and makes sure to show how every control signal gets around to its actuators, including passes through digital land. The only things I did not include were the demodulator boards between the REFL diodes (ARM REFL, MC REFL, and the two green XARM and YARM REFLs) and lame things like whitening, AA, and AI chassis that facilitate getting in and out of digital land. In particular, I draw your attention to the unorthodox and new features that were not part of the original design plan:
(1) The PRM is misaligned. This way we can use the IFO REFL red diode signal (after normalizing / linearizing it with the red arm cavity trans) to measure the arm and offload at low frequency to the CARM board in the second input. This is unorthodox because normally we would feed REFL straight to the CARM board in all-analog and retain the large bandwidth, but because the noise is so large, it must be linearized in the front-end (the only place the arm TRANS signal exists), and fed out one of the corner station LSC DACs. Note that this configuration is just a special technique for this particular series of measurements and is not designed to become a normal part of the lock acquisition scheme.
(2) The Voltage Controlled Oscillator (VCO) actuated Phase-Locked Loops (PLLs) wrapped around the corner-station COMM and DIFF Phase-Frequency Discriminators (PFDs) are used to reduce the noise enough to keep the PFDs in the linear, phase-detecting mode (check out pgs 6-8 of the AD9901 datasheet -- it's a really nice explanation of how it works).
(3) Before the hand-off, there's another temporary step which is to use the Beckhoff system ("Slow ADC" in the diagram) to feed directly to the PSL VCO (again at low frequency) to reduce the noise enough for the COMM hand-off to work. This is most certainly an impressively complicated system! Note that this diagram has been committed into the noisebudget repo here: ${NBSVN}/trunk/HIFO/Common/ and I imagine will become an "as built" DCC document in the glorious future when everything is working beautifully.
Comments and my replies on the diagram from Bram, to be added/fixed later: You are missing the Fiber Distribution Box, which shifts the laser frequency by -160 MHz before it gets shipped to the end stations to compensate for the PSL AOM. The lasers at the end station are then locked at the + (X-end?) and - (Y-end?) 40 MHz around the '0'. Ah, yes. I had debated putting the distribution box in there, and decided against it for clarity because I forgot the important frequency shift feature. See D1200136 for details, or the figure in T1000555 for a schematic diagram. Also, the F and S in the 'servo' block diagrams, is the S Beckhoff? No -- the "F" and "S" are the "Fast" and "Slow" paths of what has been traditionally called a "common mode" board everywhere -- I was trying to start a clarifying revolution with this diagram... The lasers in the end station get the slow temp control directly from the Beckhoff from the demod error signal monitor. The Beckhoff links are a little confusing, maybe make a little color legend?
This trip happened around the time of a Beckhoff restart. A Beckhoff restart causes a crash of the IMC guardian, which causes it to stop. It's not clear to me why this should cause MC2 to get a large signal, but it seems to. This causes a cascade of trips. Even though I don't know why this happens, creating a safe state in the IMC guardian so that it handles missing channels better could help with the problem.
Another solution is to have the SUS WDs not trip HEPI. Can we get that fix soon?
HAM2 also tripped at the same time, Jeff and Hugh brought that back.
To be clear, IMC guardian did not "crash" in this particular situation. The guardian responded exactly as it's currently programmed to respond, which is to go into ERROR when it loses communication with any of the channels it's monitoring. I want to distinguish an ERROR condition, which is something that guardian handles, from a "crash", which means that the guardian process died unexpectedly.
Here's my guess for the sequence of events:
It's possible guardian could be made slightly more robust against loss of some of its channels, but that only helps up to a point. Eventually guardian has to drop into some error condition if it can't talk to whatever it's trying to control. It could try to move everything to some sort of safe state, but that only works if it can talk to the front-ends to actually change their state.
A Beckhoff restart also causes the IMC servo board to be reset, as well as all whitening for photodiodes, wavefront sensors and QPDs. I assume that the resulting transient caused the MC to trip. It would be interesting to know if this is due to the length or alignment system. Is it the initial transient or a run-away integrator? In either case, this should not result in a trip. A better action would be to simply turn off the ISC inputs.
After taking a look at the time of WD trips, it seems like HAM3 ISI trips before MC2, see green plot vs red plot (the X axis is the number of seconds after gps=1080083200)
Margot and I performed all of the BSC10 closeout tasks listed in my previous alog regarding this closeout. We pulled the FC from the ETMy optic a little after 11am, and the door was going on at ~1:30. It took "so long" (ha) because we ended up having to spend some extra time calling in the troops about one of the ISC viewports having what appeared to be a scratch on its inside surface. More details to come as needed.
Here are some further pictures of the ACB swing back and the FirstContact spray cone attachment to the HR side of the ETMy QUAD structure from yesterday. Yes, the cone protrudes into the baffle. Yes, Margot still was able to fit her head and an arm with the spray bottle in the cone.
The last picture shows the placement of the horizontal wafer and vertical 1" witness optic in the center of the chamber, placed today just before the door went on the chamber.
Notes and DCC numbers for Contamination Control Samples (ones that came out and ones that went in) to WBSC10 before doors went on:
1. Vertical 1" optics on QUAD: T1400246 (SN 1195) out and T1400247 (SN 262) in.
2. Vertical 1" optic under quad: T1400248 (SN 261) placed at end of closeout.
3. Vertical wafer on quad: T1400249 attached to quad under HR of ETM.
4. Horizontal wafer on floor under quad: T1400250 placed at end of closeout.
5. Horizontal 1" optic: T1400251 was left on beam tube floor between BSC10 and BSC6. Looking up SN/history. Didn't take it out, because we had no clean optic containers handy.
6. 24" PET swipe sample taken in tube near purge today, close to end of closeout. Labeled, and taken to PET microscope area.