In preparation for updating the IMC to LLO-style control, I've brought all the current H1 ISC filter files under proper version control, so that we have a version-controlled backup. This meant:
(1) Adding and/or committing any outstanding changes that were present in the local copy in the userapps repo (if they already existed).
(2) Copying the chans directory copy over to the userapps repo, if the two differed, assuming the chans directory copy is the latest and greatest.
(3) Committing the latest and greatest to the repo.
(4) Removing the chans copy, and turning it into a soft link to the userapps copy.
(In case you're worried about whether Foton, the RCG, and autoquack can correctly follow this link, SUS has been interacting with the filter file / softlink for many months now without issue. It has survived many re-compiles, RCG upgrades, opening and editing the chans "copy" with Foton, and quacking new filters in.)
NOTE: It should now be common practice to open and edit the filter file in the userapps repo, NOT in the chans directory, even though Foton can handle both.
Now the ISC filter files in the chans directory look like this (and point to -> that):
controls@operator1:chans 0$ pwd
/opt/rtcds/lho/h1/chans
controls@operator1:chans 0$ ls -l H1{I,L,A}SC*.txt
lrwxrwxrwx 1 controls controls 59 Jul 31 12:07 H1ASCIMC.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASCIMC.txt
lrwxrwxrwx 1 controls controls 58 Jul 19 16:14 H1ASCTT.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASCTT.txt
lrwxrwxrwx 1 controls controls 56 Jul 31 12:24 H1ASC.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASC.txt
lrwxrwxrwx 1 controls controls 58 Jul 31 12:11 H1ISCEY.txt -> /opt/rtcds/userapps/release/isc/h1/filterfiles/H1ISCEY.txt
lrwxrwxrwx 1 controls controls 56 Jul 31 12:06 H1LSC.txt -> /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt
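For the record, a minimal per-file sketch of steps (1)-(4), using H1LSC.txt as the example (the svn invocations are my paraphrase of the procedure, not a transcript of what was typed):

cd /opt/rtcds/userapps/release/lsc/h1/filterfiles
svn add H1LSC.txt                                   # step 1, only if the file isn't tracked yet
cp /opt/rtcds/lho/h1/chans/H1LSC.txt .              # step 2, only if the chans copy is newer
svn commit -m "Latest H1LSC filter file" H1LSC.txt  # step 3
rm /opt/rtcds/lho/h1/chans/H1LSC.txt                # step 4: replace the chans copy...
ln -s /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt /opt/rtcds/lho/h1/chans/H1LSC.txt  # ...with a soft link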
Right now there is a mismatch between the MEDM HEPI screens and the model, more precisely in the BLND screen: three ON/OFF switches are still active but not displayed in the MEDM screen.
By default, the switches are OFF and the output of the BLND is 0 (see pictures attached).
This bug should disappear with the latest model version (we got rid of the switches). I'll try to put in a WP and do the update this week.
For now, the quick solution is to use caput commands in the terminal. Example for HEPI-HAM3:
Put the BLND L4C switch ON -> caput H1:HPI-HAM3_BLND_L4C_SWITCH 1
Put the BLND IPS switch ON -> caput H1:HPI-HAM3_BLND_IPS_SWITCH 1
Put the BLND BIAS switch ON -> caput H1:HPI-HAM3_BLND_BIAS_SWITCH 1
I took the PSL from "science" to "commissioning" mode for 10 minutes, from 11:20 to 11:32. I only entered the anteroom, where I retrieved an ALS Faraday and swapped a dust monitor for Patrick. Sheila
Just started a measurement on HEPI-HAM2 and ISI-HAM3 (on the computer 'operator3' in the control room). It should be done in the morning, around 10:30.
[J. Kissel, A. Pele] We've launched acceptance transfer function measurements on H1 SUS BS, using a MATLAB session on the operator1 computer. Measurement start: 30-Jul-2013 18:23:34 PDT. We expect it to take 12 (!!) hours, so it should finish 31-Jul-2013 06:30 PDT.
Measurements successfully completed.
[S. Biscans, K. Izumi, J. Kissel, A. Pele, J. Rollins]
Log of how we brought the IMC back up to locking today after the RCG 2.7 upgrade, in rough chronological order:
- SUS (for each SUS -- MC1, MC2, MC3, PRM, PR2, PR3, BS, ITMY, IM1-IM4)
  - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off, (stale*) alignment offsets OFF. **
  - Reset IOP watchdog
  - Reset inner-most USER watchdogs
  - Reset USER DACKILL
  - Turned on masterswitch (damping loops should already be trying to run)
  - Turned ON stale alignment offsets (in anticipation of having to burt restore ripe offsets that are not that different)
  - Recovered ripe alignment offsets for MC1, MC2, and MC3
    - snapped to 2013-07-29 20:00 (Local)
    - /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susmc*epics.snap
- HPI-HAM2 & HPI-HAM3
  - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off ***
  - Made sure no offsets were requested, ramped down master gain
  - Reset watchdog
  - Turned on master switch
  - Manually restored offsets in OUTF
  - Put in a 10 sec ramp
  - Entered and turned on offsets from LHO aLOG 7180 (turning on offsets now uses the ramp time as well)
- ISI-HAM2 & ISI-HAM3
  - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off
  - Made sure no offsets were requested, ramped down master gain
  - Reset watchdogs
  - Turned on master switch
  - Used the Command window to
    - damp the ISI
    - isolate the ISI to Level 2
- PSL
  - Runs entirely on Beckhoff, and was unaffected by the RCG 2.7 upgrade, so no recovery was needed
- IOO
  - Given to us as it comes up from a reboot
  - burt restored h1ascimc.snap (mainly to restore alignment of the Beckhoff-controlled mirror at the top of the periscope -- all other mirrors are fixed)
    - 2013-07-29 20:00 (Local)
    - /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susascimc.snap
  - Turned off IMC WFS loops
    - ramp down master gain in bottom left corner
  - Turned on guardian ****
    - go to the IMC screen, and have the guardian screen handy/visible
    - In a terminal:
      ]$ ssh h1guardian0
      ]$ screen -RAad imcguardian
      (opens a "screen" environment, which looks just like a new terminal)
      ]$ cd /opt/rtcds/userapps/release/ioo/h1/scripts/
      ]$ ./runIMCGuardian
      (some lights should go green on the IMC Guardian screen)
      (hit ctrl+a then ctrl+d in succession, which "detaches" the screen session, taking you back to the original h1guardian0 terminal)
      ]$ exit
      (logs you out of h1guardian0)
    - the guardian starts up not doing anything (state request and current state are blank)
    - hit "manual", "quiet," or "damped" on the IMC Guardian screen (since there are currently no "goto_" scripts for "locked," hitting "locked" doesn't do anything)
  - Manually ramped the IMC WFS master gain back up (the guardian doesn't know to touch it)
  - IMC locks up like a charm

* Because the safe.snaps for most suspensions had been captured a while ago, one can never trust the alignment offsets.
** For MC1, MC2, BS, and ITMY, the safe state came up with the BIO settings frozen and red -- specifically, the Test/Coil Enable switches were red, and changing the State Request didn't change the filter settings. Scared that this was a bug introduced with RCG 2.7, I had considered going through a full boot process and/or declaring a failure mode of the upgrade. However, since this did *not* appear on MC3, PRM, PR2, or PR3 (which ruled out that the bug was common to a given computer), my quick test to see whether it was frozen on a given SUS was to open the BIO screen and randomly pick a field to try to change.
When I got to the beam splitter, my randomness drove me to try changing the TEST/COIL Enable bit instead of the State Request. When I requested to turn on the SUS-BS M3 TEST/COIL Enable, it flipped the M1 and M2 TEST/COIL Enables, all BIO functionality returned, and the stages began behaving independently, as expected and as normal. I went back to the other suspensions and saw similar behavior, i.e. the BIO EPICS record was stuck, and the SAFE burt restore didn't sufficiently tickle the EPICS database to get the BIO request and monitors to report back correct information. Once tickled in the right way, all functionality returned. YUCK!
*** Sebastian, never having seen spectra of the LHO IPS, had assumed that the HPIs were locked or that the IPS were broken. Because of the nonsense described in **, I told him to wait a bit. Then we convinced ourselves that we needed to power-cycle the H1SEIH23 I/O chassis (because we'd heard that Dave had to restart the PSL I/O chassis this morning after it complained it had lost connection with the Dolphin network or something).
**** We tried to follow Chris' instructions from LHO aLOG 6920, but this failed because the h1script0 machine is still running Ubuntu 10.04 (as opposed to 12.04, which h1guardian0 -- like the rest of the control room machines -- has been updated to recently), so we don't know how Chris did it. Thankfully Jamie was helping and identified the problem immediately after querying Jim.
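For future reference, the "tickle" amounts to writing any value to one of the frozen BIO request records from a terminal; a hypothetical illustration using the standard EPICS command-line tools (the channel name below is made up for the example, not the actual H1 SUS BIO record):

caget H1:SUS-BS_BIO_M3_TESTENABLE_REQUEST      # hypothetical channel; reads back the frozen value
caput H1:SUS-BS_BIO_M3_TESTENABLE_REQUEST 1    # any write appears to wake the stuck EPICS record
caput H1:SUS-BS_BIO_M3_TESTENABLE_REQUEST 0    # then restore the desired state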
**** Regarding the guardian/h1script0 mystery -- it sounds like LHO's installation of perl has been upgraded recently. At the time of entry 6920, the situation was the opposite: the IMC guardian could not be started on the 12.04 workstations, and only the more outdated machines (such as h1script0) were capable of running it.
WP 4065
This morning we upgraded CDS to tag advLigoRTS-2.7. All daqd systems were compiled from source, all front end models were rebuilt, and all front end computers were rebooted.
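For anyone repeating this, the per-model rebuild follows the usual RCG pattern, roughly as below (a sketch: the build directory path and model name are examples, not a transcript of today's commands):

cd /opt/rtcds/lho/h1/rtbuild    # RCG build directory (path is my assumption)
make h1suspr3                   # compile one front end model against the new tag
make install-h1suspr3           # install the compiled model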
At the same time, the EY computers were powered down and will remain off for several weeks while we reconfigure EY to the aLIGO configuration.
The EY systems were taken out of the Dolphin manager, and their channels were marked as acquire=0 in DAQ so that they are not written to the frame. Their channels are still in the DAQ to allow trending of the old data.
h1dc0 required a reboot to load the new GPS drivers. h1susb123 and h1psl0 were power cycled to permit comms with their I/O chassis.
I did not update the PSL models today. This system was burt restored to 8am today.
We are testing the new science frame features in the DAQ. We have found some problems with the channel stats.
Unfortunately our dataValid flags are still incorrect (set to ONE, should be ZERO). Jim rebuilt tag 2.7 on the DTS, and the DAQ there is writing correct dataValid values. We are investigating what is different between the two systems.
Jeff reports strange binary I/O behavior on the sus models (QUAD and HAM). The BIO readbacks are bogus until an output channel has been switched, at which point the data becomes correct. We are investigating.
This behavior was noted at LLO during their RCG 2.7 upgrade by Stuart Aston (see his aLOG entry from last Thursday).
Was able to open GV5 and leak test the gate annulus piping joints on GV6 today -> 1 of 5 CFFs was found to be leaking. Volume = Y1, Y-mid, Y2, YBM, XBM, HAM2-3, BSC1,2,3,7,8, being pumped by CP1, CP2, the YBM MTP (backed by a leak detector), CP3, CP4 and IP9. Helium background 6.5x10^-8 torr*L/sec -> the 1.33" CFF on the north side of GV6 climbed steadily from background to 1.5x10^-7 torr*L/sec beginning 18 seconds after filling the bag, until purged with air at the 200 second mark. The demonstration was repeated after the signal fell back to near the original starting value.
Started draining fluid this morning and got 8 liters out today. Also, all the HEPI electronics at the piers were removed and all the actuators were disconnected from the crossbeam feet. Actuator lock-down is ongoing. Additionally, most of the ISI cabling, sans EM actuators, was disconnected from the chamber, and the CPS racks were brought down. Greg & Hugh
Took a new safe snapshot of ITMY using the save_safe_snap('H1','ITMY') MATLAB script. It includes the new gain values for the 2.1 damping filters recently installed.
h1susitmy_safe.snap is located in /opt/rtcds/userapps/release/sus/h1/burtfiles
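To restore from it later, something like the following should work (a sketch: burtwb is the standard BURT write-back tool, but check local practice):

burtwb -f /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmy_safe.snap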
EY: Cleanroom move, ISCT-EY move to LVEA, powering down lots of electronics, rerouting VAC conduit
CDS work: RCG 2.7 work begins (which takes down pretty much everything), reboot h0pemmx, much of afternoon devoted to restoring systems from RCG work.
Terminals upgraded with software to make MEDM faster (BUT we can no longer view out-building videos on control room terminals... because they freeze the computer(!))
Snow Valley working on Chillers, Praxair made a delivery, Sprague on-site
I measured the ETMX OL values with prettyOSEMgains('H1','ETMX'). Columns below are OSEM, open-light value (counts), gain, and offset:
M0F1 19894 1.508 -9947
M0F2 29453 1.019 -14727
M0F3 30661 0.978 -15330
M0LF 24512 1.224 -12256
M0RT 24724 1.213 -12362
M0SD 21784 1.377 -10892
R0F1 28922 1.037 -14461
R0F2 23013 1.304 -11507
R0F3 25716 1.167 -12858
R0LF 26662 1.125 -13331
R0RT 24388 1.230 -12194
R0SD 21961 1.366 -10980
L1UL 24267 1.236 -12133
L1LL 26538 1.130 -13269
L1UR 24545 1.222 -12273
L1LR 26259 1.142 -13130
L2UL 17935 1.673 -8967
L2LL 18726 1.602 -9363
L2UR 25124 1.194 -12562
L2LR 25518 1.176 -12759
I entered the gains and offsets into the OSEMINF screens, and updated and committed /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmx_safe.snap.
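As a side note, the numbers are consistent with gain = 30000/OL and offset = -OL/2 (my inference from the table, not a statement about what prettyOSEMgains does internally); a one-line check of the M0F1 row:

awk 'BEGIN { ol = 19894; printf "gain=%.3f  offset=%d\n", 30000/ol, -ol/2 }'
# prints: gain=1.508  offset=-9947, matching the first row above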
Betsy was concerned about the M0 F1 and SD OL values, which were a bit low (in spec, but rather lower than average). So she swapped out those two and I remeasured those two values only. The new master list is as follows:
M0F1 24018 1.249 -12009 (new)
M0F2 29453 1.019 -14727
M0F3 30661 0.978 -15330
M0LF 24512 1.224 -12256
M0RT 24724 1.213 -12362
M0SD 28519 1.052 -14259 (new)
R0F1 28922 1.037 -14461
R0F2 23013 1.304 -11507
R0F3 25716 1.167 -12858
R0LF 26662 1.125 -13331
R0RT 24388 1.230 -12194
R0SD 21961 1.366 -10980
L1UL 24267 1.236 -12133
L1LL 26538 1.130 -13269
L1UR 24545 1.222 -12273
L1LR 26259 1.142 -13130
L2UL 17935 1.673 -8967
L2LL 18726 1.602 -9363
L2UR 25124 1.194 -12562
L2LR 25518 1.176 -12759
I entered the new values into the OSEMINF screen and redid the safe.snap.
Now with associated serial numbers which are also in ICS:
M0F1 - 643
M0F2 - 486
M0F3 - 414
M0LF - 580
M0RT - 507
M0SD - 563
R0F1 - 426
R0F2 - 497
R0F3 - 504
R0LF - 468
R0RT - 418
R0SD - 434
When ETMX came up after the software upgrade, I attempted to measure the OL values on the OSEMs, which had been installed and left retracted by Betsy yesterday.
I noticed that four channels, R0 F1/F2/F3/SD, were behaving oddly. Trending revealed that these channels had gone bad at 00:09 Pacific this morning (see attached). Richard replaced the satellite amp and they came back good.
Summary:
Off-center the beam significantly in one direction on the curved mirrors and everything is within tolerance (first plot; this is the best-quality data we managed to squeeze out of this setup).
Center the beam on the curved mirrors and the data looks OK-ish, but things are somewhat out of nominal tolerance (second plot; some green error bars fall outside the two vertical lines representing the nominal tolerance), and the ellipticity and astigmatism become worse (you can tell because the X data and Y data moved in opposite directions).
The third and fourth attachments show how off-centered the beams were on the secondary and on the primary when the good data was obtained.
The fifth and sixth attachments show the centering on the secondary and the primary when the second set of data was taken.
This is repeatable. Every time good-looking data is obtained, the beam position on the primary is offset to the left. Every time we re-center the beam on the primary and the secondary, the scan data gets worse (but still OK-ish, not terrible).
Sounds like a problem with the mirror surface figure to me. Maybe we got unlucky with this pair.
This is the best we can achieve, and since it's not terrible when the beam is centered on the curved mirror, I'll declare this the final tuning of the H1TMSX. Tomorrow we'll mate the ISC table to the telescope.
Today I processed the data from the lower-stage transfer functions taken a few weeks ago on the 3 MC suspensions.
Attached are plots showing, respectively, the M2-to-M2 and M3-to-M3 Phase 3b DTT undamped transfer functions for MC1, MC2, and MC3.
Measurements are consistent with the models (blue curves). Only the MC2 M2-M2 and M3-M3 pitch measurements show a small discrepancy at around 0.85 Hz, a frequency that corresponds to the first vertical mode.
*Measurement and data processing details*
DTT templates are saved and committed under:
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/${susID}/${sagLevel}/Data/${date}_H1SUSMC1_M2_WhiteNoise_${DOF}_0p01to50Hz.xml
mat results files are saved and committed under:
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/${susID}/${sagLevel}/Results/${date}_H1SUS${susID}_${sagLevel}
During the processing, a function called "calib_hsts.m" was created under /ligo/svncommon/SusSVN/sus/trunk/HSTS/Common/MatlabTools; it is used in plotHSTS_dtttfs_${M1/2/3}.m, since the calibration of the second stage of MC2 is different from that of MC1 and MC3 (different coil driver).
The function returns the DC scaling factor used to calibrate the data, given the inputs:
- susID ('MC1','MC2','MC3',...)
- level ('M1','M2','M3')
- is the sensor input filter engaged? ('true','false')
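For example, to grab the scale factor from a terminal, something like this should work (a sketch: the call signature, including the boolean being passed as a string, is inferred from the input list above):

matlab -nodesktop -nosplash -r "addpath('/ligo/svncommon/SusSVN/sus/trunk/HSTS/Common/MatlabTools'); calib_hsts('MC2','M2','false'), exit"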
Initially, we were unable to do some needed leak testing on GV6 a few weeks ago, as the helium background signal in the site vacuum system was too high (~2x10^-6 torr*L/sec). Over the past few weeks we have reduced this by valving-in the YBM turbo during the day, then valving it out at the end of the day (it must be attended while open to the beam tube). As of the end of today, the helium background is about 2.5x10^-7 torr*L/sec, still too high for leak testing.

Based upon the physical parameters, the rate of removal via the YBM turbo pump suggests a reservoir of helium with some independent conductance to the vacuum system, as opposed to an initial quantity of helium fully present in the vacuum system. As such, we have vented, then pumped and/or purged, adjacent volumes which had been exposed to helium spraying in the recent past (GV1 annulus, GV3/GV4 gate annuli, HAM4 annulus, HAM4-5 OMC volume, HAM1-2 annulus, HAM1 interior). This had no significant effect, i.e. the reservoir+conductance could be within the vacuum envelope(?)

Today we were able to "take the gloves off" (HIFO Y having ended) and demonstrate that the large, 2500 L/sec ion pumps are the source of the helium. We conclude this by noting that soft-closing GV5 resulted in a slight increase in the helium background at the vertex (IP9 and IP11, not yet saturated, still pump helium a little bit), and that valving-out IP1, IP2, IP5 and IP6 resulted in the helium signal plummeting:

1100 hrs (t0) = 3.1x10^-7 torr*L/sec
1110 hrs = 1.7x10^-7
1120 hrs = 9x10^-8
1130 hrs = 5.0x10^-8
1140 hrs = 2.7x10^-8
1150 hrs = 1.5x10^-8
1305 hrs = 1.2x10^-9
1430 hrs = 1.1x10^-9

Conclusion: We believe that this is the first instance in which we have ever had a leak detector valved-in while one or more 2500 L/s ion pumps were simultaneously valved-in. Nominally, leak testing is performed on an isolatable volume pumped only by a turbo which is backed by a leak detector. When initially attempting to leak test GV6, the YBM turbo, IP1, IP2, IP9 and IP11 were all valved-in (GV6 was open at the time). Therefore we don't know if the helium concentrated/dissolved in the ion pumps that we see now is the result of years of low-level residual exposure or, conversely, one single large recent exposure "event". So we don't know if this is a new issue or an old issue(?)
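As an aside, the first hour of that decay is consistent with a single exponential (the signal drops by a nearly constant factor of ~1.8 per 10 minutes); a quick check of the e-folding time (my arithmetic, not from the entry):

awk 'BEGIN { dt = 50; r0 = 3.1e-7; r1 = 1.5e-8;              # 1100 hrs to 1150 hrs
             printf "e-folding time ~ %.0f min\n", dt/log(r0/r1) }'
# prints: e-folding time ~ 17 min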
You may be able to reduce the helium in the ion pumps by baking them at 150C (need to establish the Curie temperature of the magnets to decide if they can remain on the pump during the bake). The helium is not bonded to any of the molecules deposited on the walls and will diffuse out even though buried under layers of getter.
Yesterday, Gerardo bonded the 1st prism onto the LLO-destined PUM. He used the new procedure, which incorporates adding borosilicate glass beads to space the glue joint more appropriately. He intends to proceed with gluing in the magnet/flag discs and then the 2nd prism today/tomorrow.
Late last week, the discs and the second prism were glued. The second prism glue joint did not cure with glue across the entire surface. Work continues at CIT/LHO to investigate why and to revise procedures. After the optic was air-baked for additional cure per the existing procedure, and no further change was noticed, it was decided to ship the PUM to LLO. It should arrive at LLO by Wed, July 31st. LLO can proceed with using this PUM in the L1 ETMx monolithic assembly.