After the ISCTEY rearrangement I looked at the WFS sensing, but it didn't behave well. Disappointing.
Apart from the table rearrangement, which shouldn't have had much impact on the sensing, the difference between before and now is the ITMY beam position. Maybe the WFS signal depends on the beam position?
Tomorrow I'll try to find a good set of centering offsets for the current beam location, as the alignment now is supposed to be good.
To get more range out of MCL PZTs, I swapped the position of fixed and PZT mirrors. See D1400241.
| WFS  | Mirror  | before | now   |
|------|---------|--------|-------|
| WFSA | ALS-M16 | fixed  | PZT   |
| WFSA | ALS-M17 | PZT    | fixed |
| WFSB | ALS-M18 | fixed  | PZT   |
| WFSB | ALS-M19 | PZT    | fixed |
This should have increased the centering range of WFSA by a factor of 3.7, and WFSB by a factor of 9.4. Even though WFSB centering is heavily coupled to WFSA, there seems to be no problem.
I have written a new H1 commissioning reservation system. The user guide can be found at:
https://lhocds.ligo-wa.caltech.edu/wiki/H1CommissioningReservationSystem
Please email me with any comments, problems, or suggestions for improvement.
ISI ETMX (Krishna, Jim W and Dave)
Restart of h1isietmx and an associated DAQ restart.
PCAL (Rick, Shivaraj, Dave)
Modified the h1calex.mdl and h1caley.mdl models to use one more ADC and one more DAC channel. This will permit simultaneous double duotone loop-back timing measurements. It required new h1pemex and h1pemey models that relinquish the DAC channels. These models were restarted several times, with associated DAQ restarts.
DAQ Solaris memory upgrades (Cyrus, Jim)
The DAQ Solaris QFS writer machines had additional memory installed to see if this fixes the frame-writer restart problem.
DNS configuration (Cyrus)
All DNS clients were confirmed to be using the new DNS servers. The old service on cdsfs0 was then turned off.
Here are the frame gaps from when the Solaris QFS writers were being upgraded this morning:
fw0
-rw-r--r-- 1 controls controls 1265333833 Oct 14 11:17 H-H1_C-1097345792-64.gwf
-rw-r--r-- 1 controls controls 1265519846 Oct 14 11:50 H-H1_C-1097347776-64.gwf
fw1
-rw-r--r-- 1 controls controls 1264327500 Oct 14 11:53 H-H1_C-1097347904-64.gwf
-rw-r--r-- 1 controls controls 1281982191 Oct 14 12:37 H-H1_C-1097350592-64.gwf
Observation bit already set to 'Commissioning'.

08:18 Ed to end X to remove whitening chassis for repair (WP 4897).
08:43 Aaron and Filiberto to end X to pull PCAL cables (WP 4896).
08:55 Betsy to LVEA West bay to work on 3IFO.
09:10 Jim W. driving ITMX ISI and HEPI.
09:12 Cyrus auditing DNS settings on CDS machines.
09:29 Ed back from end X.
09:30 Karen cleaning at mid Y, end Y.
09:40 Keita to end Y to align optics on ISCT1.
10:08 Chris cleaning at end X.
10:15 Keita back from end Y.
11:00 Ed to end X to return repaired whitening chassis.
11:17 Jim B. shutting down fw0. Cyrus installing memory in h1ldasgw0.
11:39 Karen done cleaning at mid Y, end Y.
11:42 Ed back from end X.
11:51 Dave restarting PEM and CAL models at end X.
11:53 Jim B. shutting down fw1. Cyrus installing memory in h1ldasgw1.
11:51 Dave making changes to end Y PCAL model.
13:02 Jim W. and Sudarshan to end X to check on status of activity.
13:06 Travis to LVEA. Turn off HAM5/6 and HAM2/3 cleanrooms. Work on 3IFO SUS quad build.
13:29 Jim W. and Sudarshan back. Jim W., Sudarshan working on adding BRS signal to end X ISI model (WP 4895). Rick, Dave, Shivaraj working on changes to PCAL model (WP 4894).
I've written a Matlab script to automate the blend switching of the BSC ISIs to the LLO configuration. It's called BSC_Blend_Switcher, and I've saved it locally to the Seismic common folder. To run it, you will need to have the SEI path loaded (this can be done in Matlab from the shortcuts on the controls account). The syntax at the Matlab command line is:
BSC_Blend_Switcher('ITMX')
where 'ITMX' can be replaced with any of the BSC optics, in whatever case you like. I haven't tested the full script yet, but the bits and pieces I tried worked. I'll test it tomorrow morning.
I only mean for it to be a temporary workaround, as it will probably take a couple of minutes to run and it isn't very smart, just a bunch of ezca calls. Hopefully Guardian or something smarter will replace it, sooner rather than later.
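For anyone wondering what "a bunch of ezca calls" amounts to, here is a minimal sketch of the idea, assuming the ezcaread/ezcawrite Matlab wrappers are on the path. The channel-name pattern, DOF list, and switch value below are illustrative placeholders only, not necessarily what BSC_Blend_Switcher actually writes:

function blend_switch_sketch(optic)
% Illustrative sketch only -- channel names and written values are placeholders.
ifo  = 'H1';
dofs = {'X', 'Y', 'RZ', 'Z', 'RX', 'RY'};
for ii = 1:length(dofs)
    % Hypothetical Stage 1 blend-switch request channel for this DOF
    chan = sprintf('%s:ISI-%s_ST1_BLND_%s_SWITCH', ifo, upper(optic), dofs{ii});
    ezcawrite(chan, 1);    % request the LLO-style blend (placeholder value)
    pause(1);              % crude wait for the switch to take effect
    fprintf('%s -> %g\n', chan, ezcaread(chan));
end
end

Called, for example, as blend_switch_sketch('itmx').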
I have audited the DNS configuration on all of the machines in CDS to ensure that they use the current redundant pair of servers (ns0,1). As a result, I have now also decommissioned the BIND config that was on cdsfs0 as it is no longer needed, and removed the associated BIND packages.
Added: 697 Removed: 98 No unmonitored channels.
Reset HAM3 and BS.
I've moved pslws0-3 and opsws19 into the workstations subnet. These were the last workstation machines left in what is to become the servers subnet - anything that's considered a CDS workstation is now in the new subnet.
Alexa, Kiwamu, Patrick

The EPICS variables for h1ecatx1 were all frozen/invalid. Logged into the machine. Checked the logs. Did not find anything obvious. Automatic updates were turned on. I turned them off. Made sure automatic updates were off on h1ecatc1 and h1ecaty1. One had them on, the other didn't (don't remember which).

Restarted the h1ecatx1 computer. Ran through the 'Stop', 'Update from source' and 'Compile' scripts. Would not compile because of uncommitted changes. Ran an svn update and got updates for:
TwinCAT/Source/Current/Interferometer/End/Plc1.pro
TwinCAT/Source/Current/Interferometer/Corner/Plc1.exp
TwinCAT/Source/Current/Interferometer/Corner/Plc1.pro
Scripts/Configuration/L1ECATY1/SYS/L1EcatY1.tsm

'Compile' then worked. Ran 'Activate and run' and 'Restart EPICS database'. System restored.
Forgot: The OPC to EPICS IOC had a bunch of 'Mystery Error' messages (quoted) when I first logged in.
I installed more memory in h1ldasgw0 and h1ldasgw1 at Dave's request; both now have 24GB of memory in total, using the original Sun/Oracle sticks that came out of these machines. On booting h1ldasgw1 after installing the memory, it complained about a non-recoverable ECC error from one of the sticks; in the ILOM management interface this was /SYS/MB/P0/D8. I re-opened the chassis, reseated the memory, cleared the ILOM fault, and the error did not return. However, we should keep an eye out for any ECC/memory faults that may occur on this machine in the future. Installing the memory required stopping h1fw0 and h1fw1, so there will be some (non-overlapping) frame data gaps for the period between approx. 11:15AM and 12:30PM.
While I was at ISCT1 to look at the green Y beam DC detector, I found two things:
The first picture shows the COMM BBPD before the adjustment. Though you cannot see the diode edge, you can easily tell that the beam is close to the top edge of the can.
The second picture is after the adjustment. It seems like I could have lowered it more.
ISC Whitening chassis S1101631 was returned to its original position following repair. The operation was confirmed by Kiwamu. Details of the findings can be found in E-Traveler.
The ISC Whitening chassis S1101631 mentioned in Kiwamu's log entry (14430) was removed from the X-End LVEA electronics rack. The -15VDC rail had failed. I also removed a small snake from the computer cart, all snuggled up to the warm monitor and keyboard.
I did not notice the snake!
It was the +15 rail, not the -15. My bad.
A whitening chassis at EX seems to have died last Tuesday. This should be fixed.
All four segments of both in-vac IR QPDs (ASC-X_TR_A and _B) had been reading bogus values since approximately 7 pm PDT on the 7th of October. Dave and I looked at the reboot logs and confirmed that no software activities had been performed since 3-ish pm on that day. We suspected an analog issue, and therefore I went down to EX to check out the circuits. The AA chassis seemed to be working fine, because the raw ADC counts went away when I disconnected the signal cables. Tracking the signal lines further, I found a whitening chassis whose +15 V LEDs were all off. I power-cycled the chassis, which changed the level of the bogus signals but still did not give me realistic signals. The +15 V indicators are still off. We need to replace this unit.
The binary readback cable was disconnected from this chassis when I went to investigate. Do you know why? I'm going to reinstall it as I found it. Be aware of this in case you should need it.
J. Kissel

I happened to have the H1 ISI ETMY overview screen open and noticed a blinking red light in the bottom corner alarming that the pod pressures are low, indicative of a potential leak. Jim informed me that Gerardo had noticed this earlier as well (both interactions verbal, no aLOG). Further investigation reveals that, though the sensors indicate a slow leak over the past 5 months on all three L4Cs, the leak rate is ~0.25e-6 [torr.Liter/sec] (see attached 2014-10-09_H1ISIETMY_L4CPod_LeakRate.pdf) -- a rate that is 1/4 of what has been deemed acceptable (see T0900192). For further comfort -- though Brian's original guess (see G1000561, pg 15) was that the pod pressure sensors might only be able to sense 5e-6 [torr.Liter/sec]-level leaks -- it appears that we are at least a factor of 8 more sensitive than that.

Though I don't understand it well enough to make the adjustments, the action items are to
(a) adjust the threshold to represent 1e-6 [torr.Liter/sec] (if we're still OK with that number), and
(b) have @DetChar or @Operators make a similar study on the rest of the chambers across the project to ensure that the rest of the pods aren't leaking any worse than these L4Cs.

Note that this ETMY is the second oldest ISI in the project (save the LASTI ISI), as it was installed just after ITMY for the H2 OAT.

Details / Logical Path / Figures
--------------------------------------
On the MEDM pod-pressure screen (accessible in the bottom right corner of the overview), the Corner 1 L4C and the Corner 2 and 3 T240s are blinking around 96-97, 100-101, and 100-100 [kPa] respectively, which directly correspond to the blinking alarm light. So, I trended them over the past 300 days. I quickly found that the signals have been non-flat, and in fact going down in pressure, indicating that the in-air pods were leaking air out into the in-vacuum chamber. I focused on the L4Cs because they appeared to be the worst offenders.

After identifying the major features in the 300-day minute-trend time series:
-- we begin to see data ~1/4 of the way into the time series, right when Hugh and Richard were cabling up the ISI ETMY, newly moved into BSC10, on Feb 25 2014 (see LHO aLOG 10360),
-- the hump that starts ~1/3 of the way along the time axis is the beginning of the chamber closeout, where Kyle turned on the roughing pumps on March 28 (see LHO aLOG 11076), and
-- the shark-fin feature 3/4 of the way along the time axis corresponds to Rai's charge dissipation experiments on Aug 06 (see LHO aLOG 13274),
I believe that the sensors are indicating a real pressure signal, and not some electronics drift as Brian had worried in G1000561. Interestingly, the *differential* pressure does not show a trend, implying that all six L4C pods are leaking at roughly the same rate.

To quantify the leak rate, I grabbed the average of one hour of minute-trend data on the first of every month over the linear ramp-down of the pressure for all three L4C pods (i.e. from May 01 2014 to Oct 01 2014):

                 Pod 1    Pod 2    Pod 3
pressure_kPa = [97.423   98.956   98.86;  ...  % May
                97.358   98.910   98.771; ...  % Jun
                97.288   98.820   98.710; ...  % Jul
                97.199   98.734   98.573; ...  % Aug
                97.110   98.665   98.526; ...  % Sep
                97.026   98.568   98.369];     % Oct

(At this point, I'm just *hoping* the pressure sensors are correctly calibrated, but we know that 1 [atm] = ~750 [torr] = ~100 [kPa], so it seems legit.)
Taking the matrix of 6 months by 3 pods, I converted to torr,

torr_per_kPa  = 7.5;                          % [torr/kPa]
pressure_torr = pressure_kPa * torr_per_kPa;  % [torr]

and, assuming the volume enclosed in the pod is volume_L4C = 0.9 [Liter] as Brian assumed in G1000561, and taking time = 1 [month] = 2.62974e6 [sec], the leak rate over each month is

leakRate(iMonth,iPod) = (pressure_torr(iMonth,iPod) - pressure_torr(iMonth+1,iPod)) * volume_L4C / time;

(manipulating the P1*V - L*t = P2*V equation on pg 15 of G1000561). I attach the .m file that makes the calculation, if the above isn't clear enough to write your own.

It's a rather noisy calculation from month to month that could be refined, but it gets the message across -- the leak rate is roughly 0.25e-6 [torr.Liter/sec], a factor of 4 smaller than what has been deemed acceptable. If you put on your trusty pair of astronomy goggles, you could argue that the leak rate is increasing, but I would refine the quantification of the leaks before making such claims.

Finally, I checked the GS13s and T240s to make sure they're leaking less, and indeed they are. I also post a copy of the simulink bit logic that creates the warning bit -- it's gunna take me some time to verify it all -- but the goal will be to change the "ABS REF", "DEV REF", and "DEV REL" thresholds such that we don't alarm unnecessarily, as we've done here.
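For reference, here is a consolidated sketch of the calculation described above, using only the numbers quoted in this entry (the 0.9 [Liter] pod volume and the 2.62974e6 [sec] month are the assumptions stated above); the attached .m file remains the authoritative version:

% Sketch of the ETMY L4C pod leak-rate estimate, assembled from the numbers quoted above.

%                Pod 1    Pod 2    Pod 3
pressure_kPa = [97.423   98.956   98.86;  ...  % May 2014
                97.358   98.910   98.771; ...  % Jun
                97.288   98.820   98.710; ...  % Jul
                97.199   98.734   98.573; ...  % Aug
                97.110   98.665   98.526; ...  % Sep
                97.026   98.568   98.369];     % Oct

torr_per_kPa  = 7.5;                           % [torr/kPa]
pressure_torr = pressure_kPa * torr_per_kPa;   % [torr]
volume_L4C    = 0.9;                           % [Liter], per G1000561
time          = 2.62974e6;                     % [sec] in one month

% From P1*V - L*t = P2*V  =>  L = (P1 - P2)*V/t, evaluated month over month
leakRate = -diff(pressure_torr, 1, 1) * volume_L4C / time;  % [torr.Liter/sec], 5 months x 3 pods

fprintf('mean leak rate = %.2e [torr.Liter/sec]\n', mean(leakRate(:)));  % roughly 0.2e-6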
One can check whether there are really leaks in the pods by looking at amu20 and amu22 on an RGA mounted on the chamber with the suspect pods. The pods are filled with neon, so the peaks should be in the ratio of the isotopic abundance of the two neon isotopes (roughly 10:1 for 20Ne:22Ne).
Quoting T0900192, "We conclude that any leak is unacceptable from a contamination viewpoint..." This should be followed up.
I am skeptical.
We have many conflats and feedthrus installed on LIGO and the failure rate is extremely low once the initial leak test is passed.
I think it is more likely that there is an aging effect here with the sensors, or possibly some gettering action of the air in the pod (we do not know how much air remained in the pod when the neon was filled). The aging could be mechanical fatigue or permeation of gas into the "zero" side of the capacitive sensor.
The KP125 sensors are "low cost" capacitive barometric sensors; a typical use would be in an automobile engine intake manifold. Long-term drift of the sensor due to aging would not be a factor in that application, because the manifold is routinely exposed to one atmosphere prior to startup, allowing for a recalibration.
Depending on the air content in the pods, a chemical reaction (slow oxidation??) could also be responsible for this drift. The L4C is the smallest unit (smaller reservoir) and would therefore show the largest loss of gas if this were the case.
Quote from the vendor spec sheet:
The attached table shows the pressure and "apparent leak rates" of the L4C pods for all BSC-ISIs. For the calculations, I used a short period of data from the first of each month. The results for ETMY are consistent with Jeff's numbers.
Results/Comments:
- Unlike ETMY, the other units don't show a consistent trend. The pressure signal seems to go up and down, rather than consistently down.
- The sign and amplitude of the apparent leak rate are always very similar within the three pods of a given chamber.
Could we have a ghost beam problem?