We studied one year of data, taken from May 01, 2014 to April 30, 2015. The results agree with the earlier findings from 4 months of data (here), i.e., EY tilts twice as much as EX along the beam line. The figure shows 1 year of the 0.03 to 0.08 Hz beamline seismic band at EX and EY plotted against wind speed measured at EX. The full study is documented here. Dipongkar Talukder & Robert Schofield
During ER7, the alignment of the IMC was adjusted to minimize the jitter peaks in DARM around 312 and 347 Hz (Link). We examine the stability of the tuning here. The top two traces in the figure are BLRMS bands centered on the peaks, and the tuning is the sharp drop at the left of the plot. The dashed horizontal lines on the plots show the pre-tuning and post-tuning levels. The tuning seems to have been fairly stable: the peak height at the end of the run was about what it was right after tuning. The peak height increased during the period when the power was increased to 24 W, which is indicated on the plot by a yellow background. This was expected, because the alignment was not retuned for this power. But when the power was returned to about 17 W (cyan background), the tuning returned to its good state. The main drawback that we found was that the peak height started high after each lock and drifted down to the tuned level over tens of minutes. Dipongkar Talukder & Robert Schofield
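As a reference for how the traces above are built, a band-limited RMS (BLRMS) can be sketched as below. This is a minimal illustration with synthetic data, not the actual monitor configuration; the band edges and sample rate are placeholders I chose for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def blrms(x, fs, f_lo, f_hi, stride_s=1.0):
    """Bandpass the timeseries, then compute the RMS over fixed-length strides."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    n = int(stride_s * fs)           # samples per stride
    nseg = len(y) // n
    segs = y[:n * nseg].reshape(nseg, n)
    return np.sqrt(np.mean(segs**2, axis=1))

# Synthetic example: a 312 Hz line buried in white noise
fs = 4096
rng = np.random.default_rng(0)
t = np.arange(0, 16, 1 / fs)
x = 1e-2 * rng.standard_normal(t.size) + 0.1 * np.sin(2 * np.pi * 312 * t)

in_band = blrms(x, fs, 302, 322)    # band centered on the 312 Hz peak
off_band = blrms(x, fs, 500, 520)   # control band away from the peak
```

A tuning that suppresses the jitter peak shows up as a drop in the in-band trace while the control band stays flat, which is exactly the signature in the figure.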
Filiberto, Patrick Filiberto moved the cable for the gauge, and I made the update to the system manager, committed it into svn, and activated it. Filiberto then removed chassis 3 for his work on it. I disabled that chassis and the one following it in the target system manager, then activated and ran that configuration. I have not restarted the PLCs because I am not sure they will run without the missing chassis. The gauge appears to still be operating, however, as it does not need a PLC program. So the end X Beckhoff PLCs are not running while Filiberto works on the chassis.
Filiberto reinstalled the modified chassis. I updated the system manager accordingly, committed the change to svn and activated it. I had to reboot the computer in order to get PLC2 to run. I burtrestored to 6:10 this morning. It is back up and running again.
Richard, Patrick Did essentially the same work as yesterday (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19310), except the gauge was connected at the end of the photodiode chassis.
Kate, Matt, and a wee bit of help from Calum The OMC black glass shroud is installed. More to follow... To-do list: 1) balance mass, 2) beam dump, 3) beam diverter, 4) laser beam check, 5) cables. Please note we should look at the cable from the OMC bench. Dan and Matt are going to look at this in the a.m.
So yes, the order we would like to do things in is: look at (and fix) the cable interference with the black glass shroud (there are different solutions... we will look at what we think is the best one), move the beam diverter/cabling/black glass V dump associated with this, and then we can probably go laser hazard to fine-tune the beam diverter move and also see if the laser goes through all the apertures in the black glass. If it doesn't, I may have to reposition some of the glass pieces (there is some adjustment by design to allow this to happen). Dan and I will start in on this work again this morning. Perhaps laser hazard by lunch, maybe a little after (but don't hold me to that :-))
Then we can put back all the things that were taken off, and then balance the ISI and that should, I think, be it.
Note: Though we had 2 people verify that the EQ stops were all unlocked, now that the glass shroud is on there is no access to the EQ stops. We might want to think about doing a transfer function on the OMC, if possible, right about now. Just as a sanity check... for me, as that's what I had nightmares about last night, that we somehow missed one...
Calum and Kate have lots of good pics on their camera but I will try to post some also today.
And one last thing. It's been a long hard slog by a lot of people to accomplish this. From cleaning the area prior to venting, to door removal (on Wednesday last week... just to get a timeline going), to staging all the cleanrooms, cleaning the area and keeping it all fully stocked, to removing payload on the ISI for us to do the work, to people working long hours on the weekend cleaning and staging the glass, to the three enormously long days this week pre-assembling and staging everything and then installing the shroud (where we had a few problems with some parts along the way, but nothing we couldn't overcome), to the people who ran looking for parts/equipment at the drop of a hat for us... basically, this couldn't have been done without a lot of help from a lot of people, and it was greatly appreciated.
Now for Dan and I to finish this stuff off so we can get about closing up.
Pic... we are so happy it's in... now we can sleep :-)
L. Prokhorov, J. Kissel All the quadrants of the ETMX ESD work OK. The charge level at all quadrants of ETMX is much lower after the venting and discharging procedure. See plots of today's results and the long-term* plot below. * Points for the LL quadrant (red) are correct only before May and for the last point; from May until June 12 the LL ESD was broken.
Sheila, Kiwamu, Jim, Dave:
It appears that following the last successful build of h1ascimc on April 21 of this year, the file was modified to use a fastPZT version of the common ASCIMC_MASTER part. This part was not found, and the model would not compile. For now I have modified h1ascimc.mdl to use the standard ASCIMC_MASTER/ASCIMC part. The common part's input ports aligned correctly, but I had to manually connect the output ports in the order found. Someone should verify that the DAC channels are correct; they are (counting from 0 through 15):
ch08 = PZT_P_OUT
ch09 = PZT_YAW_OUT
ch10 = SPAREOUT2
ch11 = MCREFL-SHUTTER_OPEN
The output IPC parts are labeled, so I'm confident they are connected correctly.
I have not installed this model; it needs checking before we do so.
Scott L., Ed P. 6/22/15: Cleaned 59 meters, ending at HNW-4-086. Serviced the diesel generator. 6/23/15: Cleaned 39.8 meters, ending at HNW-4-088. Removed lights and cords and began relocation to the next section north. Test results posted on this a-log. 6/24/15: Made a run to town for fuel for all support vehicles and the diesel tank. Filled the water tank and cleaned the vacuum machines. Restrung cords, hung lights, vacuumed support tubes, and sprayed diluted water/bleach solution on the floor. This is an exceptionally dirty section.
Jeff, Dave:
Jeff noted that the SUS ITMY SDF was reporting a difference for the HWWD STATE EPICS variable. This was due to the HWWD testing I did last Wednesday (Link).
The hardware watchdog (HWWD) is read out by the h1susitmy model via binary I/O. For a long period of time the unit was powered on but had its monitor cables disconnected. It was this state which was recorded in the safe.snap file. Last Wednesday, following the conclusion of my tests, I disconnected the input cables and then powered the unit down, which raised the SDF difference. Today I accepted the current STATE to clear the SDF difference.
Day's Activities:
R. McCarthy, J. Kissel, L. Prokhorov Richard has restored the functionality of the ETMX high voltage ESD driver. In addition to simply turning on the high voltage, he had to disconnect and reconnect the temporary Beckhoff binary input control cable for the "MSR" light to engage on the driver. Once the Beckhoff binary control was restored, we were able to drive. After aligning the suspension, we briefly confirmed the functionality of the ETMX optical lever, since its health had not been checked since it was replaced (LHO aLOG 19290). We looked at an ambient pitch and yaw performance ASD (looked OK -- saw SUS resonances, no apparent combs, and a noise floor below 1 [nrad/rtHz] above 1 [Hz]), and trended the SUM to check for glitches (I don't know what glitching does or doesn't look like, but over the past 6 hours the SUM time series does not at ALL look Gaussian). So it is still unclear whether the optical lever is perfect (i.e., we still probably can't use it for damping), but it should be good enough for now (i.e., for measuring charge). Once the optical lever health was confirmed good enough, we drove a 4.7 [Hz] sine wave with a 5e4 [ct] amplitude through each quadrant and confirmed that ALL quadrants drive (even LL, which had previously been busted; LHO aLOG 19097, 19060). Looks like the replacement of the feedthrough (LHO aLOG 19221) was a success. Nice work, in-vac and CDS teams! Leo has begun to take charge measurements.
Richard, Patrick Richard attached a BPG402-SE Pirani/Cold Cathode combination gauge to the end X End Link Beckhoff chassis. I downloaded the device description file from the Inficon website (http://products.inficon.com/en-us/Product/Detail/BPG402-S?path=Products%2Fpg-wide-range-vacuum-gauges) and copied it into C:/TwinCAT/Io/EtherCAT/Inficon.xml on h1ecatx1. I scanned for new devices in the system manager and added it after the ALS Laser Table Relay (EP2624-0002). I updated the CoE - Online parameters to set:
(800E:02) Low Trip Point Enable TP1: TRUE
(800E:14) Low Trip Point Limit TP1: 1E-5
(800F:02) Low Trip Point Enable TP2: TRUE
(800F:14) Low Trip Point Limit TP2: 1E-5
(F840:01) Data Units: 0x00A10000 (set pressure reading to Torr)
I stored these to non-volatile memory by writing 0x65766173 to (1010:01) Store all parameters. We tested the functionality by setting the low trip point limit to 1E-7, verified that the high voltage tripped off, and put it back to 1E-5. I committed the change in C:/SlowControls/Scripts/Configuration/H1ECATX1/SYS/H1ECatX1.tsm to svn. I then used Daniel's GUI to copy it to the target area, activate it, and run it. The high voltage tripped off when it was restarted (a good thing). At some point the EPICS IOC stopped and I had to restart it. I burtrestored PLC1, PLC2 and PLC3 to 6:10 this morning.
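Aside on the magic value: 0x65766173 written to CoE object (1010:01) is the standard CANopen "store all parameters" signature, which is just the ASCII string "save" packed little-endian. A quick check of the encoding (illustration only):

```python
import struct

# Value written to CoE object 0x1010:01 to commit parameters to non-volatile memory
STORE_ALL_PARAMETERS = 0x65766173

# Packed little-endian, the 32-bit value is the ASCII signature "save"
signature = struct.pack("<I", STORE_ALL_PARAMETERS)
print(signature.decode("ascii"))  # -> save
```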
Since some of the HEPI L4C Watchdog Monitors were non-zero, I went ahead and zeroed multiple counters. These are the counts they used to have:
wp 5305 RichardM HughR
The HEPI pump servos have forever been powered by bench test power supplies; these should remain available for bench use, and they are expensive for a permanent setup. Additionally, at EndX the pressure sensors have been excessively noisy since 26 Feb relative to the other locations, and we've suspected the power supply. So Richard procured/built a supply for permanent installation. We swapped it in, but there was no improvement in the noise of the pressure sensors.
We tried a number of additional things: disconnecting the grounds of the power supply; disconnecting the pressure sensors; powering down the VFD of the motor controller. No real changes. If you compare the pressures on the three servos at LHO, EndX is way noisier, so we know it can be better. Richard now has some suspicion of the servo itself.
I have installed RCG version tag-2.9.3 on h1build and made it the default version. Any model builds from now onwards will be using 2.9.3. I performed a "make World" on all FE models, all built except for h1ascimc. I have not performed an install. There was no modification to the H1.ipc file.
There is no DAQ update for this version; it addresses SDF and Guardian issues.
==================================================================================================
Changes for 2.9.3 release
==================================================================================================
- Bug Fix (850): SDF changes.
- As requested for Guardian, changed skeleton.st to read filter module ramp time values before
offsets and gains.
From Ed's summary yesterday, assumed to be unchanged, since there have been no emails about changing to Laser Hazard:
We are currently LASER SAFE in the corner and at the ends. The PSL is shuttered, and the CO2 LASERs are OFF. End station viewports and tables are closed and locked. Not sure about the on/off status of the ALS.
Timeline - Last night:
Timeline - This morning:
I had forgotten to mention the Pcal LASERs.
Data were extracted for the three-week period of ER7 (since the timing system would nominally be running steadily the entire time). The histograms show:
- Tight grouping of the minute trends in the data; min and max values for each minute end up in discrete bands.
- The Time Code Generators show nearly identical histograms for their mean minute trends.
- The mean minute trends of the GPS clock are much more tightly grouped, with a width of roughly 20 ns; with minimum and maximum offsets included, the width roughly doubles.
Issues:
- There was a single second during which the MSR Time Code Generator was off from the timing system by approximately 0.4 seconds. The issue self-rectified before the start of the next second. Second-trend data were not available through dataview (or at least I couldn't get any out of it). This did not happen in the EY and EX time code generators, and it did not happen again in the MSR time code generator. The anomaly happened at GPS time 1117315540.
I've attached histograms as well as screenshots of data taken from Grace.
Conclusion: The timing system is internally consistent and doesn't drift much relative to the atomic clock. We should look at this again once we hook up the Master's 1PPS input to the Symmetricom's 1PPS output; right now it's getting its 1PPS from the Master's built-in GPS clock, which isn't as accurate as the Symmetricom's signal. The time code generator in MSR is connected to an atomic clock, which we'd expect to provide more accurate short-term timing, though GPS beats it in the long run. So we're interested in short-term deviations from the atomic clock time, not the overall linear trend, which won't be flat unless the atomic clock itself is perfectly calibrated. For this reason, it's not surprising that the timeseries for the TCG and TCT show linear drift. The relevant metric (variation about the linear trend) is actually smaller than the above histograms would suggest, which is good. Even the naive measurement presented in these histograms shows variance of less than 100 ns.
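The minute-trend grouping described above can be reproduced along these lines. This is only a sketch with synthetic offsets; the sample rate and the 20 ns jitter width are placeholder assumptions for the example, not the real channel configuration.

```python
import numpy as np

def minute_trends(offsets, fs):
    """Collapse a 1D offset timeseries into per-minute min/mean/max trends."""
    n = int(60 * fs)                       # samples per minute
    nmin = len(offsets) // n
    mins = offsets[:n * nmin].reshape(nmin, n)
    return mins.min(axis=1), mins.mean(axis=1), mins.max(axis=1)

# Synthetic offsets: ~20 ns wide jitter around zero, sampled at 16 Hz for 3 hours
rng = np.random.default_rng(0)
fs = 16
offsets = rng.normal(0.0, 10e-9, size=int(60 * fs * 180))

lo, mean, hi = minute_trends(offsets, fs)
```

Histogramming `mean` gives the tight central grouping, while histogramming `lo` and `hi` reproduces the roughly doubled width seen when min and max offsets are included.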
I set up an accelerometer on the beam tube and measured the accelerations as I simulated the tapping that I observed during cleaning, and my tapping experiment mentioned here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19038. Examination of the time series indicated that accelerations during typical taps reached about 1 m/s^2. I suspect that some taps reached 1 g. The figure shows spectra for metal vs. fist taps. The above link notes that I was unable to produce glitches while hitting the beam tube with my fist. The fist and metal taps both had about the same maximum acceleration of 1 m/s^2 (ambient acceleration levels are about 4 orders of magnitude lower), so the source of the difference in liberating the particles is probably not acceleration. Instead, it may be the change in curvature of the beam tube, which would be expected to be greater at higher frequencies. The responses from metal taps in figure 1 peak at about 1000 Hz, while the fist taps peak at about 100 Hz. These results are consistent with the hypothesis that the metal oxide particles are liberated by fracture associated with changes in curvature rather than simple vertical acceleration.
When I was producing glitches in DARM for the link above, I noticed that we did not get a glitch with every metal tap, but only every several taps. I tried to quantify this by making many individual taps at several locations along the beam tube. Unfortunately, we lost lock with the first loud glitch, and I did not get a chance to repeat this before the vent. Nevertheless, it would be good for DetChar to look for smaller glitches at the times of the taps given below. Allow a 1 second window centered around the times given below for my tap timing uncertainty. I tapped once every ten seconds, starting at ten seconds after the top of the minute, so that there were six taps per minute.
UTC times, all on June 14
Location Y-1-8
20:37 - 20:38 every ten seconds
20:47 - 20:50 every ten seconds
Next to EY
20:55 - every ten seconds until loss of lock at 20:56:10
I don't think the IFO stayed locked for the whole time. The summary page says it lost lock at 20:48:41, and a time series of DCPD_SUM (first plot) seems to confirm that. I did a few spectrograms, and Omega scanned each 10-second interval in the second series, and the only glitch I find is a very big one at 20:48:00.5. Here is an Omega scan. The fourth tap after this one caused the lock loss. DetChar will look closer at this time to see if there are any quiet glitches. First we'll need to regenerate the Omicron triggers... they're missing around this time, probably due to having too many triggers caused by the lockloss. I'm not sure why there's a discrepancy in the locked times with Robert's report.
Any chance this could be scattered light? At 1 m/s^2 and 1 kHz, that is a displacement of 25 nm, so you don't need fringe wrapping.
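The 25 nm figure follows from converting the acceleration amplitude of a sinusoid to its displacement amplitude, x = a / (2*pi*f)^2. A quick arithmetic check:

```python
import math

def displacement_amplitude(a, f):
    """Displacement amplitude of a sinusoid with acceleration amplitude a [m/s^2] at frequency f [Hz]."""
    return a / (2 * math.pi * f) ** 2

x = displacement_amplitude(1.0, 1000.0)  # 1 m/s^2 at 1 kHz
print(f"{x * 1e9:.1f} nm")  # -> 25.3 nm
```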
I think you would expect any mechanism that does not require the release of a particle to occur with pretty much every tap. These glitches don't happen with every tap.