The setup script for the userapps directory has been modified to add paths for the new calibration subsystem, cal. The environment variables USERAPPS_LIB_PATH, USERAPPS_MEDM_PATH, and USERAPPS_SCRIPTS_PATH have been modified, and environment variables CAL_SRC and CAL_IFO_SRC have been added. This change is effective for new logins or new shells.
model restarts logged for Thu 09/Oct/2014
2014_10_09 08:51 h1fw1
2014_10_09 10:23 h1fw1
2014_10_09 13:12 h1fw1
2014_10_09 21:41 h1fw0
all unexpected restarts
In attendance: Sudarshan, Bubba, Vern, Keita, Jason, Patrick, Krishna, Jeff, Hugh, Jim, Kissel, Gerardo, Fil, Aaron, Christina, Dick, Travis & Greg.
SEI- Nothing new to report. Going to wait and see what happens when the waves created by yesterday's 6.8 quake start hitting the west coast
SUS- Working on OpLev slider calibration
PSL- working on ISS outer loop
BRS- seems to be working. Maybe test/tweak one more time
Commish- Robust locking DRMI. ASC helping greatly.
3IFO- ongoing
Facilities- beam tube cleaning is ongoing
BSC2 Test Stand was shut down on Wed the 8th!
Kiwamu, Sheila
DRMI ASC helps stability
We have REFL A 9I going to PRM with a gain of 1e-3 in yaw and 1e-4 in pitch, and AS A 45 Q going to the BS with a gain of -5e-5 in pitch and 1e-3 in yaw. We looked at using REFL 45 I to feed back to the SRM, but the REFL B signals have a large offset, and the REFL A signals don't seem sensitive to motion of the SRM. We checked the phasing of the REFL 45 WFS (it was already good) and set the dark offsets, but this didn't help. The four loops that we do have are working well; in the attached screen shot the loops come on at about -30 seconds and bring us to a good build up. After this, DRMI is stable and we have none of the mode hopping problems that have been plaguing us and preventing us from locking most of the day today. Since taking that screen shot I've edited the guardian to bring the ASC on faster, because sometimes we lose the lock due to mode hopping problems before the ASC comes on. The slowest part is the AS WFS centering servo. These ASC loops are all in the guardian, as well as offloading, but the user has to decide when they have done their job and manually request offloading for now.
Stable 3F DRMI
After getting the DRMI stable on 1F, we moved it to 3F to confirm that it is stable after the improvements Kiwamu made last night. That was easy; the guardian did it right away without any problems. It was locked on 3F and left alone for a half hour with no disturbance, from 5:21:25 to 5:51:07 UTC on Oct 10. This is in the middle of almost 2 hours of locked DRMI.
DRMI+ALS
After this we attempted to lock DRMI with the arms controlled by ALS. We had some success locking ALS without the ezca servo (and with slow feedback from the end station PDH), but the slow feedback needs some work (it misaligns the arms). Kiwamu decreased the ALS DIFF gain from 0.8 to 0.5 (already in the guardian), which helps prevent saturations and lets DIFF lock stably. We were able to lock both COMM and DIFF, find the IR resonances, offset the COMM VCO by 500 Hz (which is only 250 Hz in IR), and align PRM and SRM. We couldn't engage the ASC like this. We locked DRMI several times like this, but it quickly drops because of the mode hopping problem.
I'm testing a new commissioning reservation system I wrote this afternoon using Python. This is a file-based system replacing the old EPICS system. The main reason for the change is that the old EPICS system developed problems related to updates and required a lot of maintenance. It was also awkward to configure and restrictive in what it could do.
The reservation file is /opt/rtcds/lho/h1/cds/reservations.txt
There are three python scripts:
make_reservation.py is available to all users; it allows you to create your reservation
display_reservations.py loops every second and shows the currently open reservations
decrement_reservations.py is run as a cronjob every minute; it decrements the time-to-live of each reservation and deletes them when they expire (a minimal sketch of this step is shown after this list)
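As a rough illustration of the decrement step, here is a minimal Python sketch. The record layout (pipe-separated fields ending in minutes remaining) is an assumption for illustration only, not the actual format of reservations.txt:

#!/usr/bin/env python
# Hypothetical sketch of the cron-driven decrement step; the real field
# layout of reservations.txt is an assumption here.
RESERVATION_FILE = '/opt/rtcds/lho/h1/cds/reservations.txt'

def decrement_reservations():
    try:
        with open(RESERVATION_FILE) as f:
            lines = [l.rstrip('\n') for l in f if l.strip()]
    except IOError:
        return
    kept = []
    for line in lines:
        # assumed record format: system|name|task|contact|minutes_remaining
        fields = line.split('|')
        minutes = int(fields[-1]) - 1
        if minutes > 0:                     # drop reservations that have expired
            fields[-1] = str(minutes)
            kept.append('|'.join(fields))
    with open(RESERVATION_FILE, 'w') as f:
        f.write('\n'.join(kept) + ('\n' if kept else ''))

if __name__ == '__main__':
    decrement_reservations()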
I have a display running on the left control room TV closest to the projector screen.
Here is the usage document for the make_reservation.py script
controls@opsws6:~ 0$ make_reservation.py -h
usage: make_reservation.py [-h] system name task contact length
create a reservation for a system
positional arguments:
system the system you are reserving, e.g. model/daq
name your name
task description of your task
contact your contact info (location, phone num)
length length of time (d:hh:mm)
optional arguments:
-h, --help show this help message and exit
If you need to use spaces, surround the string with quotes. There are no limitations on string contents. Here is an example reservation
make_reservation.py "PEM and DAQ" david.barker "update PEM models, restart DAQ" "phone 255" 0:2:0
Here's an example, reserving Nutsinee for 1 hour on TCSX / BSC3: opsws8:~$ make_reservation.py TCSX nutsinee 'BSC Temp Sensor Replacement' LVEA 00:01:00
J. Kissel

I happened to have the H1 ISI ETMY overview screen open and noticed a blinking red light in the bottom corner, alarming that the pod pressures are low, indicative of a potential leak. Jim informed me that Gerardo had noticed this earlier as well (both interactions verbal, no aLOG). Further investigation reveals that, though the sensors indicate a slow leak over the past 5 months on all three L4Cs, the leak rate is ~0.25e-6 [torr.Liter/sec] (see attached 2014-10-09_H1ISIETMY_L4CPod_LeakRate.pdf) -- a rate that is 1/4th of what has been deemed acceptable (see T0900192). For further comfort -- though Brian's original guess (see G1000561, pg 15) says that the pod pressure sensors might only be able to sense 5e-6 [torr.Liter/sec]-level leaks -- it appears that we are at least a factor of 8 more sensitive than that. Though I don't understand it well enough to make adjustments, the action items are to (a) adjust the threshold to represent 1e-6 [torr.Liter/sec] (if we're still OK with that number), and (b) have @DetChar or @Operators make a similar study on the rest of the chambers across the project to ensure that the rest of the pods aren't leaking any worse than these L4Cs. Note that this ETMY is the second oldest ISI (save the LASTI ISI) in the project, as it was installed just after ITMY for the H2 OAT.

Details / Logical Path / Figures
--------------------------------------
On the MEDM pod-pressure screen (accessible in the bottom right corner of the overview), Corner 1 L4C and Corner 2 and 3 T240 are blinking around 96-97, 100-101, and 100-100 [kPa] respectively, which directly corresponds to the blinking alarm light. So, I trended them over the past 300 days. I quickly found that the signals have been non-flat, and in fact going down in pressure, indicating that the in-air pods are leaking air out into the in-vacuum chamber. I focused on the L4Cs, because they appeared to be the worst offenders. The major features in the 300-day minute-trend time series:
-- We begin to see data ~1/4 of the way into the time series, right when Hugh and Richard were cabling up the ISI ETMY, newly moved into BSC10, on Feb 25 2014 (see 10360),
-- The hump that starts at ~1/3 of the time axis is the beginning of the chamber closeout, where Kyle turned on the roughing pumps on March 28 (see LHO aLOG 11076), and
-- The shark-fin feature 3/4 of the way through the time axis corresponds to Rai's charge dissipation experiments on Aug 06 (see LHO aLOG 13274).
I believe that the sensors are indicating a real pressure signal, and not some electronics drift as Brian had worried in G1000561. Interestingly, the *differential* pressure does not show a trend, implying that all six L4C pods are leaking at roughly the same rate.

To quantify the leak rate, I grabbed the average of one hour of minute-trend data on the first of every month over the linear ramp-down of the pressure for all three L4C pods (i.e., from May 01 2014 to Oct 01 2014):
                   Pod 1    Pod 2    Pod 3
  pressure_kPa = [ 97.423   98.956   98.860;...   % May
                   97.358   98.910   98.771;...   % Jun
                   97.288   98.820   98.710;...   % Jul
                   97.199   98.734   98.573;...   % Aug
                   97.110   98.665   98.526;...   % Sep
                   97.026   98.568   98.369];     % Oct
(At this point, I'm just *hoping* the pressure sensors are correctly calibrated, but we know that 1 [atm] = ~750 [torr] = ~100 [kPa], so it seems legit.)
Taking the matrix of 6 months by 3 pods, I converted to torr,
  torr_per_kPa  = 7.5;                            % [torr/kPa]
  pressure_torr = pressure_kPa * torr_per_kPa;    % [torr]
and, assuming the volume enclosed in the pod is volume_L4C = 0.9 [Liter] as Brian assumed in G1000561, and taking time = 1 [month] = 2.62974e6 [sec], the leak rate over each month is
  leakRate(iMonth,iPod) = (pressure_torr(iMonth,iPod) - pressure_torr(iMonth+1,iPod)) * volume_L4C / time;
(manipulating the P1*V - L*T = P2*V equation on pg 15 of G1000561). I attach the .m file to make the calculation, if the above isn't clear enough to write your own.

It's a rather noisy calculation from month to month that could be refined, but it gets the message across -- the leak rate is roughly 0.25e-6 [torr.Liter/sec], a factor of 4 smaller than deemed acceptable. If you put on your trusty pair of astronomy goggles, you could argue that the leak rate is increasing, but I would refine the quantification of the leaks before making such claims.

Finally, I checked the GS13s and T240s to make sure they're leaking less, and indeed they are. I also post a copy of the simulink bit logic that creates the warning bit -- it's going to take me some time to verify it all -- but the goal will be to change the "ABS REF", "DEV REF", and "DEV REL" such that we don't alarm unnecessarily, as we've done here.
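For convenience, the same arithmetic as a short self-contained Python transcription (constants and pressures copied from the entry above; this is only a transcription, not the attached .m file):

import numpy as np

# Monthly L4C pod pressures [kPa], May through Oct 2014 (from the entry above)
pressure_kPa = np.array([
    [97.423, 98.956, 98.860],   # May
    [97.358, 98.910, 98.771],   # Jun
    [97.288, 98.820, 98.710],   # Jul
    [97.199, 98.734, 98.573],   # Aug
    [97.110, 98.665, 98.526],   # Sep
    [97.026, 98.568, 98.369],   # Oct
])

torr_per_kPa = 7.5          # [torr/kPa]
volume_L4C   = 0.9          # [Liter], per G1000561
month_sec    = 2.62974e6    # [sec] in one month

pressure_torr = pressure_kPa * torr_per_kPa
# Leak rate between successive months, per pod [torr.Liter/sec]
leak_rate = -np.diff(pressure_torr, axis=0) * volume_L4C / month_sec
print(leak_rate)            # roughly 0.2e-6 to 0.4e-6, ~0.25e-6 on average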
One can check if there are really leaks in the pods by looking at amu20 and amu22 on an RGA mounted on the chamber with the suspect pods. The pods are filled with neon. The peaks should be in the ratio of the isotopic abundance of the two neon isotopes.
Quoting T0900192, "We conclude that any leak is unacceptable from a contamination viewpoint..." This should be followed up.
I am skeptical.
We have many conflats and feedthrus installed on LIGO and the failure rate is extremely low once the initial leak test is passed.
I think it is more likely that there is an aging effect here with the sensors, or possibly some gettering action of air in the pod (we do not know how much air remained in the pod when the neon was filled). The aging could be mechanical fatigue or permeation of gas into the "zero" side of the capacitive sensor.
The kp125 sensors are "low cost" capacitive barometric sensors and a typical use would be in an automobile engine intake manifold. Long term drift of the sensor due to aging would not be a factor in this application because the manifold is routinely exposed to one atmosphere prior to startup - allowing for a calibration.
Depending on the air content in the pods, a chemical reaction (slow oxidation?) could also be responsible for this drift. The L4C is the smallest unit and would therefore show the largest loss of gas if this were the case (smaller reservoir).
Quote from the vendor spec sheet:
The table attached shows the pressure and "apparent leak rates" of the L4C pods for all BSC-ISIs. For the calculations, I used a short period of data from the first day of each month. Results for ETMY are consistent with Jeff's numbers.
Results/Comments:
- Unlike ETMY, other units don't show a consistent trend. The pressure signal seems to go up and down, rather than consistently down.
- The sign and amplitude of the apparent leak rate is always very similar within the three pods of a given chamber.
- Magnitude ~7 earthquake at South Pacific Rise at around 8pm PST
DAY's Activities:
Other Notes:
I did a tilt-decoupling-like measurement on the ETMY ISI: driving in Z through a low-pass filter (rolling off at ~200 mHz), with all the isolation loops engaged and all the blends switched to Start, I looked at the Z and RZ seismometer signals on St1 and St2 (results shown as dashed lines in the attached plot). I then switched the same drive signal to the ETMY HEPI and again looked at the seismometers (solid lines in the same plot). I don't know what to make of it, but the RZ coupling is clearly different between driving on the ISI and the same drive on HEPI. And it only shows up in the GS-13s. Huh. The template is in the H1 BSC-ISI svn, in the /Misc/Tilt Decoupling folder with the name Z_to_RZ_w_hepi.xml.
I've restored all settings on the ISI, so blends should be back to the new nominal for commissioners tonight.
K. Venkateswara
The damping turn-table for the BRS is now functioning as expected. The attached pdf shows an example of how it damps the BRS beam balance when it exceeds a certain threshold (currently roughly 200 counts amplitude). I also show the control signal which controls the turn-table's position.
The plot starts with the BRS being damped with a Q of ~30. At 600 s, I stopped the BRS code and doubled the feedback gain to reduce the Q to ~15. At 650 s, the code was restarted. There was a spike in the control signal (due to the restart) which kicked up the beam balance slightly, after which it began damping correctly. The turn-table controller is set to turn off 700 s after the threshold is reached, which here is around ~1700 s. After that the natural Q of the balance (~5000) is restored. The slow oscillation is due to the high-pass filter. Most of the parameters of this scheme are easy to change and I may improve them soon.
Before getting the turn-table to work correctly, I had to sort out a couple of issues. First, Richard McCarthy added a grounding cable to the platform on Wednesday, but the 'grounding issue' I saw on Tuesday was not related to that; it was due to an intermittent short between the controller board and its housing. After fixing that, the control signal looked clean. However, a different issue cropped up: it looked like the turn-table would occasionally apply a huge torque on the beam balance. I had seen this happen before, so I realized that it was due to vibrational coupling to some higher-order mode of the balance (maybe bounce). The stepper motor and turn-table make some noise/vibration which, when it hits the right frequency, imparts a large torque on the balance. To fix this, it was necessary to change something on the turn-table so that the vibration produced was different. After a couple of trial-and-error attempts, I finally managed it by adding o-rings under some of the bolts of the turn-table.
Cyrus, Jim, Dave
We are running a Python version of the bash script that copies the ITMY green laser digital camera image EPICS values (centroid and sum) to the ALS-Y front end model. The new script is more efficient: it does not issue a network broadcast for every EPICS read/write operation. The new code was committed to svn under userapps/release/sys/h1/scripts/h1cameracopy.py. It is running under screen on h1fescript0 as user controls; the pid file is ~/cameracopy.pid.
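For illustration, a minimal sketch of the persistent-connection approach, assuming pyepics. The channel names below are placeholders, not the actual H1 channels; the real logic lives in h1cameracopy.py:

#!/usr/bin/env python
# Sketch of a persistent-connection EPICS copy loop using pyepics.
import time
from epics import PV

# Source (camera IOC) -> destination (ALS-Y front end) channel pairs.
# These names are placeholders for illustration only.
PAIRS = [
    ('H1:CAM-EXAMPLE_CENTROID_X', 'H1:ALS-Y_EXAMPLE_CENTROID_X'),
    ('H1:CAM-EXAMPLE_CENTROID_Y', 'H1:ALS-Y_EXAMPLE_CENTROID_Y'),
    ('H1:CAM-EXAMPLE_SUM',        'H1:ALS-Y_EXAMPLE_SUM'),
]

# Connect once; PV objects hold their Channel Access connections open, so
# each subsequent get/put reuses the connection instead of issuing a new
# network broadcast (the inefficiency of per-operation caget/caput).
channels = [(PV(src), PV(dst)) for src, dst in PAIRS]

while True:
    for src, dst in channels:
        value = src.get()
        if value is not None:
            dst.put(value)
    time.sleep(1)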
I did a bit of cleanup of the SEI guardian code. H1 SEI guardian configurations have undergone some changes recently, not all of which had been committed to the SVN. But first...
I moved all H1-specific guardian configuration changes to isi/h1/guardian. This makes it clearer which systems have deviated from the "default", and it keeps the default from being such a moving target. All H1-specific changes should now be made in isi/h1/guardian, instead of in the "default" configurations in isi/common/guardian.
Once this was done, I committed all changes to the SVN.
Here is the current list of h1-specific SEI guardian modules:
isi/h1/guardian/HPI_ITMY.py
isi/h1/guardian/ISI_BS_ST1.py
isi/h1/guardian/ISI_BS_ST2.py
isi/h1/guardian/ISI_ETMX_ST1.py
isi/h1/guardian/ISI_ETMX_ST2.py
isi/h1/guardian/ISI_ETMY_ST1.py
isi/h1/guardian/ISI_ETMY_ST2.py
isi/h1/guardian/ISI_HAM4.py
isi/h1/guardian/ISI_HAM5.py
isi/h1/guardian/ISI_HAM6.py
isi/h1/guardian/ISI_ITMX_ST1.py
isi/h1/guardian/ISI_ITMX_ST2.py
isi/h1/guardian/ISI_ITMY_ST1.py
isi/h1/guardian/ISI_ITMY_ST2.py
NOTE: HPI_ITMY has not been committed to the SVN, as I'm not sure about the changes in that module.
All SEI guardian nodes should be restarted to make sure all changes properly take effect.
J. Kissel

Using the refined H1 SUS BS OPTICALIGN alignment offset slider calibration (see LHO aLOG 14321), I've refined the calibration of the optical levers using the same method as with the ITMs (see LHO aLOG 14312): moving the well-calibrated sliders and tracking the optical lever motion. Attached are the fitted slopes of this motion, which indicate that the new optical lever calibrations should be
    BS P 141   [urad/ct]
    BS Y 231.9 [urad/ct]
As with the ITMs, pitch has been corrected by a factor close to 2, where yaw needs only a ~15% correction. Because the optical lever damping is actively used, I'll wait until tomorrow morning to install the new calibrations and adjust the loop gains accordingly.

Details:
-----------
- Step through several alignment offset values (in [urad]) and record the DC optical lever output (in ["urad"], the quotes indicating the to-be-refined units). I chose to get a smattering of offsets between +/- 20 [urad] surrounding the currently saved "ALIGNED" values.
- Fit the slope of the data points to a line (see attached). The calibration corrections are
            ["urad"/urad]   [urad/"urad"]
    BS P    1.901           0.526
    BS Y    1.156           0.8654
- Previous Cal * Correction = New Cal
    BS P 268 ["urad"/ct] * 0.526  [urad/"urad"] = 141   [urad/ct]
    BS Y 268 ["urad"/ct] * 0.8654 [urad/"urad"] = 231.9 [urad/ct]
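For reference, the slope-fit step boils down to a one-line polynomial fit. A minimal Python sketch follows; the offset/readback pairs are illustrative placeholders chosen to reproduce the quoted BS pitch slope of 1.901, not the measured data:

import numpy as np

# Alignment slider steps [urad] about the ALIGNED value and the corresponding
# DC optical lever readbacks ["urad"]. Illustrative placeholder values only.
slider_urad = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
oplev_urad  = np.array([-38.1, -19.0, 0.2, 19.1, 37.9])

# Fit a line; the slope is the correction factor in ["urad"/urad].
slope, intercept = np.polyfit(slider_urad, oplev_urad, 1)
correction = 1.0 / slope                    # [urad/"urad"]

old_cal = 268.0                             # ["urad"/ct], previous BS oplev calibration
new_cal = old_cal * correction              # [urad/ct]; ~141 for BS pitch
print('slope = %.3f ["urad"/urad], new cal = %.1f [urad/ct]' % (slope, new_cal))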
Re my 14373 entry: I figured I could correct these to match T1000388, even though there is a "Check the signs" note in the doc, since Fabrice says we'll just change the signs in the SYM filters.
The matrix had correct elements in the primary transformations; only some of the cross-coupling elements differed. I went ahead and ran the populate-matrices MEDM script. It crashed when it tried to load CPSALIGN values, which are not in the data file. Makes me wonder how the GS13 & CART2ACT values got loaded before... So, I moved the CPSALIGN load section to the end and ran the script again. I then confirmed all the matrices complied with T1000388--all good. Then confirmed the ISI still isolates--yes.
Then, with the ISI just damped and HEPI not isolating (no position loops), I drove HEPI in Z with a random noise signal band-passed between 0.1 and 1 Hz. Monitoring the HEPI L4Cs and the ISI Stage 0 L4Cs (see attached), it appears that the vertical sensors are out of phase. So it seems we need that sign change in the SYM filters.
Made and committed safe.snap file.
Commissioners wanted the machine back before I could do other dof measurements, maybe I can do them with passive TFs. Either way, later.
Here is some horizontal data. I drove HEPI in RZ, but since the ISI L4Cs go to the sensor correction bank, it only has X, Y & Z. So here I look at the L4CINFs; the tangentially oriented sensors should see this RZ. Compared to the vertical data above, I drove 4x as hard to get the coherence higher (still somewhat poor). It may not say the same thing, but, based on the phase, the signs are the same. Of course, this is before the input filters (I should have grabbed the _OUTs, but too late for this run), much less the L4C2CART matrix, so I'm really mixing apples and applesauce.
no restarts reported
J. Kissel

Following a similar procedure as was done for the ITMs (see LHO aLOG 14265), I've refined the calibration for the H1 SUS BS OPTICALIGN alignment offset sliders. The new calibrations are
    BS P 6.9522 [ct/urad]
    BS Y 3.8899 [ct/urad]
They've not yet been installed; I will install them tomorrow during maintenance.

DETAILS
------------
Currently, the alignment slider calibration gains are
    BS P 4.714 [ct/"urad"]
    BS Y 4.268 [ct/"urad"]
based on dead-reckoned knowledge of the actuation chain (see LLO aLOG 5362).

Sheila and Alexa recently found the alignment values for the beam splitter which get red light onto the ETMY baffle PDs:
                P ["urad"]   Y ["urad"]
    ETMY PD1    184.0        -255.0
    ETMY PD4    237.1        -287.7
or a displacement of
    BS P 53.1 * 2 = 106.2 ["urad"]
    BS Y 32.7 * 2 = 65.4  ["urad"]
where the factor of two comes from the single-bounce optical lever effect.

I spoke with Gerardo, who informed me that the numbers Keita had posted (LHO aLOG 9087) for the locations of the baffle PDs on the Arm Cavity Baffles are slightly off from reality. He gave me links to D1200296 (ETM) and D1200313 (ITM), which indicate that the PD locations are identical between an ITM and ETM baffle, and are 11.329 [inches] = 0.288 [m] apart vertically and 11.313 [inches] = 0.287 [m] apart horizontally. Again using 3994.5 [m] for the length of the arm (LHO aLOG 11611), and adding 4.847 + 0.100 + 0.020 + 0.200 = 5.167 [m] for the distance from the HR surface of the BS to the back of the CP, through the thin CP, through the ITM QUAD's reaction-to-main chain gap, and on to the HR surface of the ITM, respectively (D0901920), the lever arm is 3999.7 [m]. Hence, a displacement of
    BS P 0.288 [m] / 3999.7 [m] = 72.01 [urad]
    BS Y 0.287 [m] / 3999.7 [m] = 71.76 [urad]

The alignment offset slider gains should therefore be corrected by
    BS P 72.01 / 106.2 = 0.67806 [urad/"urad"]
    BS Y 71.76 / 65.4  = 1.0972  [urad/"urad"]
or, equivalently,
    BS P 1.4748  ["urad"/urad]
    BS Y 0.91141 ["urad"/urad]
The new slider gains should therefore be
    BS P 4.714 [ct/"urad"] * 1.4748 ["urad"/urad] = 6.9522 [ct/urad]
    BS Y 4.268 [ct/"urad"] * 0.9114 ["urad"/urad] = 3.8899 [ct/urad]

We're now storing 4 alignments for the BS,
                      P ["urad"]   Y ["urad"]
    BS Aligned        210.6        -271.4
    Misaligned        236.5        -287
    To EY ACB PD1     184          -255
    To EY ACB PD4     237.1        -287.7
which should therefore become
                      P [urad]     Y [urad]
    BS Aligned        142.8        -297.3
    Misaligned        160.3        -314.9
    To EY ACB PD1     124.76       -279.8
    To EY ACB PD4     160.77       -315.66

To do:
- Update calibration in OPTICALIGN gain
- Update calibration in M1 LOCK bank
- Update, confirm, and save corrected alignments
- Capture new safe.snap
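The lever-arm arithmetic above is simple enough to sanity check; here is a short Python transcription using the numbers quoted in this entry (purely illustrative, not the attached calculation):

# Small-angle sanity check of the expected beam motion between the two
# ETMY baffle PDs, transcribed from the numbers quoted above.
inch = 0.0254                               # [m/inch]
arm_length = 3994.5                         # [m], ITM HR to ETM HR (LHO aLOG 11611)
bs_to_itm  = 4.847 + 0.100 + 0.020 + 0.200  # [m], BS HR to ITM HR (D0901920)
lever_arm  = arm_length + bs_to_itm         # ~3999.7 [m]

pd_sep_vertical   = 11.329 * inch           # [m], PD1-PD4 vertical separation (D1200296)
pd_sep_horizontal = 11.313 * inch           # [m], PD1-PD4 horizontal separation

# Beam-angle change needed to move from PD1 to PD4; this is compared against
# twice the slider (mirror-angle) change, since a mirror tilt steers the
# reflected beam by 2x (the single-bounce optical lever effect).
pitch_urad = pd_sep_vertical   / lever_arm * 1e6    # ~72.0 [urad]
yaw_urad   = pd_sep_horizontal / lever_arm * 1e6    # ~71.8 [urad]

# Correction factors applied to the dead-reckoned slider gains
pitch_correction = pitch_urad / 106.2       # ~0.678 [urad/"urad"]
yaw_correction   = yaw_urad   / 65.4        # ~1.10  [urad/"urad"]
print(pitch_urad, yaw_urad, pitch_correction, yaw_correction)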
OPTICALIGN calibration gains have been changed, but only the ALIGNED and MISALIGNED values have been stored. Still need to store the PD1 and PD4 values, commit the snaps to the userapps repo, and capture a new safe.snap. Turns out there are NO calibration filters in the H1SUSBS M1 or M2 LOCK filter banks yet, so they need not be updated. Will do what I can tomorrow.
Completed OPTICALIGN alignment offset slider calibration refinement this morning: saved ALIGNED_TO_PD1 and ALIGNED_TO_PD4 values, confirming that they're hitting the ITMY baffle PDs. Finally, captured a new safe.snap. Now moving on to optical lever calibration refinement using new values.
J. Kissel

Following the refinement of the ITM alignment slider calibration (see LHO aLOG 14265), I used the sliders as a reference to refine the calibration of the optical levers. As suspected (see LHO aLOG 12216), the correction factor to the ITMX calibration is around a factor of two. The following new calibrations have been installed as of Oct 06 2014 19:00:00 UTC (12:00:00 PDT, GPS 1096657216):
    IX P = 30.87 [urad/ct]
    IX Y = 25.29 [urad/ct]
    IY P = 23.94 [urad/ct]
    IY Y = 24.01 [urad/ct]
I still need to capture new safe.snaps for both these suspensions to make sure both the refined slider and optical lever calibrations stick.

The process:
- Step through several alignment offset values (in [urad]) and record the DC optical lever output (in ["urad"], the quotes indicating the to-be-refined units). I chose to get a smattering of offsets between +/- 20 [urad] surrounding the currently saved "ALIGNED" values.
- Fit the slope of the data points to a line (see attached). The calibration corrections are
            ["urad"/urad]   [urad/"urad"]
    IX P    1.578           0.6339
    IX Y    2.233           0.4478
    IY P    0.9767          1.024
    IY Y    1.031           0.9703
- Correct the calibration. Previous Cal * Correction = New Cal
    IX P 48.6954 ["urad"/ct] * 0.6339 [urad/"urad"] = 30.87 [urad/ct]
    IX Y 56.4889 ["urad"/ct] * 0.4478 [urad/"urad"] = 25.29 [urad/ct]
    IY P 23.38   ["urad"/ct] * 1.024  [urad/"urad"] = 23.94 [urad/ct]
    IY Y 24.74   ["urad"/ct] * 0.9703 [urad/"urad"] = 24.01 [urad/ct]
ITMX safe.snap captured as of this entry.
For the record, Thomas had changed the optical lever calibration (see LHO aLOG 10617), based on Keita and Stefan's refinement using the same method (see LHO aLOGs 10331 and 10454). This had *increased* the gain by a factor of 2, where my calculations suggest they should be re-*decreased* back closer to the original values. Keita hints that the factor of two is weird, but, at least in words, he seems to describe the same method. I have a feeling that this was done while the ETM and ITM baffle signals were crossed, and he was actually looking at PD3, which is twice as far away. I'm checking ETMX now to see if I get values consistent with Keita's.