I turned on the Ring Heater this morning to a nominal 30W of electrical power. We've yet to fully determine the ratio of the radiated power to electrical power, but the most recent number is approximately 2.5W radiated power per 6W electrical power.
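As a quick sanity check on those numbers (a sketch only; the 2.5 W per 6 W figure is the current best estimate, not a final calibration), the radiated power at 30 W electrical works out to roughly:

```python
# Estimate the radiated power delivered by the ring heater.
# The ratio (2.5 W radiated per 6 W electrical) is the preliminary
# number quoted above, not a measured calibration.
electrical_power = 30.0            # W, nominal ring heater drive
radiated_per_electrical = 2.5 / 6  # ~0.417, preliminary ratio

radiated_power = electrical_power * radiated_per_electrical
print(f"Estimated radiated power: {radiated_power:.1f} W")  # ~12.5 W
```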
I measured the spherical power of the HWS beam (probing the ETM thermal lens and surface deformation) and compared it to the results from a simple COMSOL model of the predicted thermal lens + surface deformation [no fitting]. The results are shown in the attached plot. Bear in mind that the scale of the model prediction will be affected by the delivered power (which depends on the radiated-to-electrical power ratio) and that the HWS is still operating on a nominal calibration for the optical system, rather than a measured one.
The results are quite similar, but there are obvious differences in the time constant. At the moment I would put that down to the simplicity of the COMSOL model: it doesn't include the flats on the sides of the ETM, the reaction mass, or the time constant associated with the ring heater itself.
However, this is the first real aLIGO measurement of thermal lensing with the HWS.
Caution: this is not solely the surface curvature of the ETM - the optical path distortion from the thermal lens is roughly 10x larger than that of the surface curvature.
10-Aug-2012 11:34AM - replaced plot. I had added the incorrect sign to the surface deformation in the model. I've fixed this.
Extra information:
Ring Heater on at 1028555836
Requested current: H2:TCS-ETMY_RING_HTR_SEG1_DC_I_SET_OUTPUT
Measured current: H2:TCS-ETMY_RING_HTR_SEG1_I_MON_OUTPUT
Segment | Requested current | Measured current | Measured V across RH | Electrical power (V*I)
Upper RH segment | 630 mA | 623.4 mA | 21.485 V | 13.394 W
Lower RH segment | 630 mA | 623.0 mA | 21.458 V | 13.368 W
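The electrical power column is just V*I per segment; a quick recomputation from the measured values in the table:

```python
# Recompute the electrical power column (P = V * I) from the
# measured currents and voltages in the ring heater table above.
segments = {
    "Upper RH segment": (0.6234, 21.485),  # (I in A, V in V)
    "Lower RH segment": (0.6230, 21.458),
}
for name, (current, voltage) in segments.items():
    power = voltage * current
    print(f"{name}: {power:.3f} W")
# Upper: 13.394 W, Lower: 13.368 W -- matching the table.
```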
HWS Channels:
Defocus @ HWS: H2:TCS-ETMY_HWS_POLYFIT_SPHERICAL_POWER [m^-1]
Defocus @ ETM: (1/mag^2)*H2:TCS-ETMY_HWS_POLYFIT_SPHERICAL_POWER [m^-1]
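The magnification scaling above can be sketched as follows (note: the magnification value used in the example is a placeholder for illustration; the actual HWS telescope magnification is not stated in this entry):

```python
# Convert the HWS-measured spherical power (defocus) to the
# equivalent defocus at the ETM, per the relation above:
#   S_ETM = (1 / mag**2) * S_HWS
def defocus_at_etm(s_hws, mag):
    """Scale a spherical power [m^-1] measured at the HWS back to the ETM plane."""
    return s_hws / mag**2

# Example with a placeholder magnification of 2:
# a 4e-4 m^-1 reading at the HWS maps to 1e-4 m^-1 at the ETM.
print(defocus_at_etm(4e-4, 2.0))
```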
The daqd process on h2nds0 died. The log file shows "Ask for retransmission of 6 packets; port 7097" (repeated 50 times), then "Have to skip 6 packets (retry limit exceeded)". It didn't automatically restart because /var/run/daqd.pid didn't get erased, so I manually restarted the process. The computer h2nds1 was completely unresponsive, with no error message on the console; I restarted that computer. With both NDS servers down, it was impossible to get channel data for any software that needed it. I suggest a more reliable method of determining whether daqd is actually running, along with some bug fixing.
I'm running repeated cavity transfer functions to monitor changes in the cavity during the ring heater test. This means that Excitation B on Common Mode Board A will be left on for the duration of the test (~12 h), and that the GPIB unit on the SR785 at EY is in use and should not be contacted.
Suspending this while oplev calibrations and suspension tests are going on. Will comment when I need to turn Excitation B on CMA on again.
Thomas tried to add a Python script (with extension .py) to his LHO aLOG 3758, but was thwarted and needed to change the extension to .txt in order to upload it. Please add .py to the list of acceptable file extensions.
I don't see why the log book should be rejecting any attachments based on extension at all. That doesn't make any sense. One should be able to attach anything they think is relevant to a report, regardless of file type. File name extensions aren't a particularly good way to determine a file type anyway.
I have added .py to LLO's logbook. The OSL software that we are using requires defining allowable attachment extensions and associating each with a non-null MIME type. I didn't write the code, or participate in the process that determined which software would be used.
Added at LHO as well.
Checking to see if anyone had centered the optical levers, in hopes of grabbing a spectrum to go with the Commissioning Team's awesome new spectra, I found that ETMY was still not centered, but had the corrected calibration, while ITMY was centered, but had the old calibration (cue sad trombone). I've restored the ITMY calibration that I installed yesterday morning.

I see that H2SUSITMY and H2SUSETMY were rebooted some time later in the day yesterday, according to the ops log, but I don't see any further details/aLOGs as to why... My guess is that safe.snap had not been updated since those calibrations were installed (though the right calibration stuck for ETMY...). I haven't gathered a new safe.snap myself; I'll wait for local staff, as I'm not sure about the state of the cavity. But when you do, remember there's a nifty script here:

${userapps}/release/cds/common/scripts/makeSafeBackup

which you can use to make a new safe.snap in the appropriate reboot location; it also makes a copy in the appropriate userapps directory (with the model name appended), e.g.

0$ cd /opt/rtcds/userapps/release/cds/common/scripts/
0$ ./makeSafeBackup sus h2susitmy

which puts files in
/opt/rtcds/lho/h2/target/h2susitmy/h2susitmyepics/burt/safe.snap and
/opt/rtcds/userapps/release/sus/h2/burtfiles/h2susitmy_safe.snap
Ha, that makes sense, as the ETM oplev didn't give me any indication I was yawing the test mass. In the early hours of the cavity locking I was trying to compensate for the pointing error caused by the heated ETM. Aiden mentioned that it was mostly in yaw (~25 urad), but the oplev wasn't able to show me my moves.
Now there are SUM indicators next to the QPD quadrant definitions, under the OPLEVINF button, in the lower left corner, on the SUS_CUST_L3_OPLEV.adl screens (which are linked off of the QUAD OVERVIEW screens). I attach screen shots of ETMY (spot OFF the QPD) and ITMY (spot ON the QPD). That bar is actually a live reading of the SUM channel, ${IFO}:SUS-${OPTIC}_L3_OPLEV_SUM_OUTMON, with limits from 0 to 10000 cts. (As shown, ITMY is aligned and has ~18k counts worth of light -- so a full green bar -- and ETMY is not aligned, and has ~6 cts -- so no green bar.) This is a good indication of whether you have light on the QPD or not.
Today, we finally stuffed the optic in the PR2 suspension and started correcting its roll. We will finish suspending it tomorrow. Also, Filiberto and Travis sorted out a cable issue on the PR2 chamberside testing setup. Since, for the moment, we cannot test both MC2 and PR2 simultaneously, and PR2 is our higher priority, we have "given" the shared cable between them to PR2 in hopes of getting its full data set run over the next 4 business days. See Filiberto if for some reason you need it moved back to MC2 (possibly to facilitate its testing later next week).
The Arm is locked, as of 03:24:00 UTC (20:24h local time).
The various controls are in the state listed below.
Channel of interest is 'H2:ALS-Y_ARM_LONG_IN1_DQ' and has units of nm (and is sampled at 16k).
HPI State:
All running fine. The cavity feedback is offloaded at low frequencies to HPI ETMY, with a UGF of ~1 mHz. ETMY_ISCINF_LONG has a 250k nm limit on the output (I put an effective limit of 200k nm on the output of the signal ISC sends into the RFM).
ISI State:
Stage 1 (ITM/ETM): X/Y 250 mHz blend, other DOFs 750 mHz blend
Stage 2 (ITM/ETM): X/Y 100 mHz blend, other DOFs 750 mHz blend
Quad State:
ETM: M0 damping on, L1 L/P damping only, L2 no damping (watch dog off), L3 no damping
ITM: M0 damping on, L1 L/P damping only, L2 no damping (watch dog off), L3 no damping (watch dog off)
Attached is an initial spectrum .... the frequency noise has dropped nicely! (due to 3791)
I was not able to close the PZT servo loop which keeps the beam nicely and tightly locked to the TMS. It turned out that the filter switches of the whitening board were "engaged" but not ON. I toggled all the buttons on the MEDM screens, which seemed to fix the problem.
The beam is nicely locked to the TMS ...
Elli, Robert, Bram
We had another look into the RefCav. I think we found the culprit ... the free-space-to-fiber coupler behind the RefCav is retro-reflecting back into the cavity. We use an angle-cleaved fiber to remedy this, but it is not enough. When we block the beam to the fiber coupler, the intensity flickering on the CCD is gone. You can still see the peaks on the transmitted diode, but they are most likely intensity fluctuations (to which we are 'immune').
Lacking any quarter waveplates, we placed a half waveplate, a PBS, and a Faraday rotator after the first steering mirror behind the cavity to capture any back reflections (from the fiber coupler, photodiode, and CCD).
Now we will need to relock the arm cavity to see if it made any improvements ..
I turned off the ETM ring heater at 1:37:50 UTC.
DaveB, JimB, HugoP
Models were recently updated at LLO so they can use Dolphin. The updates were transferred over to LHO, and the HAM3-ISI model was re-installed.
We also seized the opportunity to install HAM2-ISI model which is now running.
MEDM screens for HAM2-ISI were added to the VIDEO6 display of the control room, next to the ones of HAM3-ISI.
The directory tree and the unit-specific testing programs for HAM2 were created.
Coordinate transform matrices were filled in for both ISIs.
CoreyG ,HugoP
Transfer functions ran overnight yesterday. Unwanted resonances are present in the vicinity of 130 Hz, in all DOFs. The usual suspect is the 600 lb payload mass set on top of the ISI; one can hear it resonating when it is hit. The washers it sat on were too close to each other. Corey and I lifted this top mass with a forklift and reset the washers. The mass vibrated much less when hit afterwards.
Transfer functions are running overnight.
Moving the washers had the intended effect. It suppressed the unwanted resonances.
Before/After plots are attached
video3, which displays the PSL enclosure cameras in the control room, died spontaneously with a message in several languages stating it needed to be powered off. So it was powered off and turned back on. Someone who knows the passwords to the cameras will need to enter them to get the video to display again.
Attached is a spectrum from last night (7 August 2012), starting at ~23:30h.
There seems to be a narrow spike at 700 mHz, while the 2.74 Hz peaks are still there. I guess we are nudging it downwards ...