Scott L., Ed P., Chris S.
The crew relocated lights and equipment this morning. We will be starting at the single door between the X-1-5 and X-1-6 double doors and working toward the Mid station. We started vacuuming beam tube supports this afternoon and took samples of the dirt on the tube; results are posted in this entry. These are some of the dirtiest areas we have seen so far. A new generator was purchased Friday; however, it still needs to be shipped here. A rental generator was picked up this morning to get us by until the new unit arrives.
This is a reminder that the RF phase of the REFL_A_RF9 and REFLAIR_A_RF9 signals should always be adjusted using the delay line phase shifter. The analog phase shift is applied to the LO signal and therefore affects both the analog and digital feedback paths. On the other hand, the phase rotation in the real-time system only changes the digital path. It should always be set to 0° if the error signal of the REFL servo uses the I output of the demodulator, and to ±90° if it uses the Q output. One other caveat is that the delay line phase shifter may be in local mode, meaning one has to use the physical toggle switches on the chassis. In local mode the digital settings are completely ignored and there is no readback.
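For reference, a digital phase rotation by an angle θ acts on the demodulated quadratures as (assuming the standard rotation convention):

    I' =  I·cos(θ) + Q·sin(θ)
    Q' = -I·sin(θ) + Q·cos(θ)

so with θ = 0° the I output passes through unchanged, while with θ = ±90° the Q signal appears on the I' output (up to a sign). This is why the digital rotation should sit at one of these fixed values, with all phase tuning done in the delay line.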
Jamie, Dave, Jim:
Jamie reported that the location of the MEDM Print selection, at the top of the right-mouse pull-down menu, has resulted in many accidental print jobs. This explains the printouts of things like the sitemap screen with no data content, which no one ever picks up.
Jim modified the LHO CDS MEDM this afternoon to move the Print selection from the top to second from the bottom of the pull-down menu. Please see the attached before and after images.
During tomorrow's maintenance we will stop all old MEDMs in the control room so the new layout will be picked up. This is a Linux-only change; it was not applied to Mac OS.
Awesome. Thanks, Jim and Dave. The forests thank you.
LVEA: Laser Hazard
Observation Bit: Commissioning

07:00 Karen & Cris – Cleaning in the LVEA
07:45 Bubba – Chiller delivery from LLO; Bubba unloading truck at Mid-X
08:15 HFD – On site with Richard at Mid-X
08:25 Adjust ISS diffracted power from 5.5% to 8.2%
08:43 Corey – Going into Squeezer bay for 3IFO work
08:50 Filiberto – In LVEA making a survey of racks
09:00 Corey – Out of Squeezer bay
10:00 Corey – Going into Squeezer bay for 3IFO work
10:10 Bubba – Finished unloading truck at Mid-X; Big Red has been parked
10:30 Corey – Out of Squeezer bay
11:15 Ken – Working on CER
12:47 HFD on site for fire alarm maintenance
14:20 Richard – Going to Mid-X
14:35 HFD finished testing the OSB fire alarms
Even though the threshold is set to 32k, the Stage2 tripped when the Actuators only hit 30k. Is this just from the 512k downsampled data?
The attached trend plot shows the full data and, although it gets close, it doesn't appear to reach the threshold. A shame we get shut down for such a short transient.
Hugh means 512 [Hz], because the _DQ channel he's posted is downsampled from the native user model rate of 16384 [Hz] to 512 [Hz] before it is sent out for frame storage. There's no way to confirm your suspicion, Hugh, with the data that's currently stored. Brian says "We should store that stuff faster." I agree, especially if we add "if we *actually* want to know for these super-high-frequency glitch-type trips," because most people probably don't; they just want it not to happen. #TriggerECR Have y'all reduced the high-frequency roll-off of these loops yet? That's the real solution...
These are the 24-hour OpLev trends. We will look into the drops in PR3 and the BS.
PSL Status:
SysStat: All Green, except VB program offline
Output power: 31.9 W
Frontend Watch: Green
HPO Watch: Red
PMC:
  Locked: 13 days, 2 hours, 34 minutes
  Reflected power: 2.1 W
  Transmitted power: 22.5 W
  Total power: 24.6 W
ISS:
  Diffracted power: 9.2%
  Last saturation event: 0 days, 0 hours, 28 minutes
FSS:
  Locked: 0 days, 0 hours, 5 minutes
  Trans PD: 1.347 V
Sheila, Nutsinee
Attached below are plots of the PEM/ALS and PEM/ISI correlations, using data from March 14, when we had high winds of up to 45 mph. The x-axes are wind speed in mph; the y-axes are either ALS-X(Y)_REFL_SCTRL_OUT_DQ (rms) or ISI-ETMY_ST1_FFB_BLRMS_(DOF)_(100M or 300M)_(300M or 1) (mean). The correlation coefficient (r) is calculated and printed in parentheses. Since PEM-EX and ALS-Y seem to correlate the most (r = 0.7783), I only plot the correlation between PEM-EX and ISI-ETMY for the PEM/ISI case. PEM-EX and ISI-ETMY-RY (100-300 mHz) appears to have the highest correlation (r = 0.7469).
A Matlab script is also attached.
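For reference, a minimal Python sketch of the same kind of correlation computation (the file names and the data-reduction step are placeholders; the actual analysis is in the attached Matlab script):

    import numpy as np

    # Placeholder inputs: wind speed [mph] and ISI BLRMS, both already
    # reduced to one sample per minute over the same time span.
    wind = np.loadtxt('pem_ex_wind_mph.txt')         # hypothetical file
    blrms = np.loadtxt('isi_etmy_ry_100m_300m.txt')  # hypothetical file

    # Pearson correlation coefficient, the r quoted in the plot titles.
    r = np.corrcoef(wind, blrms)[0, 1]
    print('r = %.4f' % r)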
My apologies for the lack of axis labels earlier. I have attached better plots below.
Also, note that PEMEX, PEMEY, and PEMCS refer to the X, Y, Z directions (not locations). Also, the "ISI-ETMY_ST1_FFB_BLRMS_(DOF)_
Seismic: Hugh will be doing HEPI maintenance in the LVEA during the Tuesday maintenance window. Jim will be running transfer functions as opportunity allows.
CDS: Need to work on the shutter interface at End-Y. 3IFO work in the H2 Electronics building.
VAC: Kyle will swap the ion pumps at HAM1 and HAM2 during the maintenance window.
3IFO: Work continues on IO inventory & storage tasks. The temp/dew point sensors are ready; connecting the 3IFO storage containers to N2 can start.
See the attached trends of the BRS for a health assessment. It has been down since ~22 Feb.
The first plot is the minute trend of H1:ISI-GND_BRS_ETMX_RY_INMON: 100 days of just the mean, where the flat-lined sections are the periods when the BRS was in trouble. The second attachment is 60 minutes of second trends, which shows the healthy 9 mHz signal of the BRS.
Thanks Hugh. Some comments: The BRS was turned off around Dec. 18th for the ETMX vent and restored after things normalized again in the EX VEA. The 'spikes' in the BRS data are mostly disturbances from people in its vicinity, which damp out quickly (in ~10 mins). Therefore, in the normal/good state, BRS_RY_INMON shows 5-50 counts of amplitude and 9 mHz oscillations, as Hugh showed. When the BRS software crashes, the output remains flat and shows <2 counts of variation.
Currently the software crashes once every 2-3 weeks. A preventive restart (which takes 10-15 minutes) once every two weeks would be useful until a better solution can be found.
After a bit of a struggle getting the pre-modecleaner out of its tank, we managed to examine its mirrors. Visually most looked okay, except one which showed some evidence of a film. It was not easy to see or photograph, but the impression is certainly that something is there. This is most likely the reason for the pre-modecleaner's demise, and may be the result of it having been stored with its tank lid on. Ed, Peter
More on the scattering Dan mentioned on Friday, with some new and re-interpreted details (the responsible motion is horizontal) that became clear after further investigation.
This past Monday, DARM spectra showed a double scattering shelf occasionally reaching 60 Hz and 120 Hz (Figure 1), and even higher. I searched for the source of the scattering path length variation by looking for motion sensors that detected maximum motion at the times the higher-frequency scattering shelf reached its highest frequency. The top trace of Figure 2 is a 700 s time series of DARM, band-passed between 90 and 145 Hz. Each of the spikes in the time series was produced when the scattering shelf reached into this band. The lower plot shows that the maxima in one of the OMC OSEM signals coincide with the spikes in the DARM plot above it. All OMC OSEMs show this correlation except the side sensor (on the short side). I found no other GS13 or OSEM channels that showed this correlation; most importantly, it was not in the GS13 signals from HAM6. Figure 3 is a zoom-in on one of the clusters of scattering spikes, showing that the scattering spikes appear to occur at the steepest part of the OSEM signals (where velocity would be highest), and that the time spacing between DARM spikes agrees with the resonant frequencies of the OMC suspension. Beating between two of the suspension modes seems to cause the variation in motion.
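For anyone wanting to repeat this, a rough Python sketch of the band-pass-and-compare step (sample rate, file names, and data handling are assumptions, not the actual analysis code):

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 16384  # assumed sample rate [Hz]

    # darm and osem: numpy arrays covering the same 700 s stretch.
    darm = np.load('darm_700s.npy')      # hypothetical file
    osem = np.load('omc_osem_700s.npy')  # hypothetical file

    # Band-pass DARM between 90 and 145 Hz, as in the top trace of Figure 2.
    b, a = butter(4, [90.0 / (fs / 2), 145.0 / (fs / 2)], btype='band')
    darm_bp = filtfilt(b, a, darm)

    # Spikes in |darm_bp| should then line up with the extrema of the
    # OSEM signal when plotted against the same time axis.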
Using the 1 um/count calibration of the OSEM OUT channels, I obtained average velocity spectra that, for some OSEMs, reached 10 um/s (Figure 4).
In order to produce a shelf out to 60 Hz, the rate of path length change for a single bounce would have to reach about 30 um/s. The OSEMs measure displacement of the top mass, M1, and the OMC hangs below it. At the resonant frequencies of this suspension the motion of the OMC, while damped, would still be greater than that of the top mass. Also, the 10 um/s figure is only an average rms. Thus the OMC motion can account for the 60 Hz shelf with a single reflection.
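As a sanity check of that number, assuming the usual single-bounce fringe-wrapping relation f_max = 2·v_max/λ: a shelf reaching 60 Hz at λ = 1.064 um requires v_max = f_max·λ/2 ≈ 60 Hz × 1.064 um / 2 ≈ 32 um/s, consistent with the ~30 um/s figure above.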
There are two shelves in Figure 1 spaced by a factor of 2 in frequency. The spectrogram in Figure 5 shows that the frequency spacing of the two shelves is always a factor of 2. The shelf at 120 Hz in Figure 1 is about an order of magnitude below the shelf at 60 Hz. If this represents a scattered beam that reflects twice instead of once, then the reflectivities at each of the two extra surfaces would have to be very high. While there may be other mechanisms to generate the double shelf, it is probably worth looking for bright beam spots on highly reflective surfaces in HAM6.
Finally, it would be nice to reduce the motion of the OMC by a factor of ten, particularly the side-to-side and rolling motion perpendicular to, and about, the line connecting the two suspension points of M1 (the motion detected by the LF, RT, T2 and T3 OSEMs).
It's likely the OMC ASC control (Kiwamu, Daniel, Keita)
Summary:
OMC BOSEMs are usually very quiet, but they show the extremely large motion that Robert reported only when the IFO is in lock on DC readout. The in-lock vs. out-of-lock ratio is huge, about three orders of magnitude.
It turns out that this comes from the OMC ASC control actuating on the OMC suspension.
OMC YAW has a large coupling to the distance between the OMC and the IFO because the rotation axis is fairly distant from the first steering mirror on the OMC breadboard, probably 18 cm or so, and therefore this is likely the main motion coupling to the scattering path modulation. (With an 18 cm lever arm, 1 urad of yaw already corresponds to roughly 0.18 um of path length change.)
Apart from identifying the scattering source, there could be some mitigation tasks that we could do.
We will test the two TT mitigations tomorrow.
Details:
The attachment shows the same OMC BOSEM velocity signals that Robert used (the only difference is that these are calibrated in um/sec, not m/sec).
Solid lines are now; broken lines are from when Robert took his measurement. Around the big peaks there is a three-orders-of-magnitude difference. It turns out that the signals mostly look like the solid lines show, and become excited only when in DC lock.
Kiwamu will fill in more details.
I attach a 24-hour trend of some relevant channels from Mar-10-2015. As shown in the trend, it seems that every time the OMC ASC loops are in action, actuating on the OMC suspension, the OMC OSEMs read high fluctuations as well as a big shift in the DC values. When the OMC ASC loops are not in action, the OSEM readouts are quiet.
On Friday we reduced this scattering noise below the usual noise floor by reducing the OMC ASC gain by ~10x. This reduced the UGF of the QPD loops down to 0.1 Hz, which is around the UGF of the dither loops; this explains why we only saw this noise recently, when the QPD loops were used in low-noise. See the attached plot of the OMC SUS longitudinal signal: the blue traces are the QPD loops in a high-gain state, red is low-gain. The purple traces are from March 4, when the OMC was aligned with the dither loops (the dither signals are rolled off above 2 Hz since the signal-to-noise above that frequency is not good). The black traces are the quiescent OMC SUS noise without ASC feedback. For the current noise floor between 10-100 Hz, the motion of the red and purple traces is low enough to keep the scattering from being the limiting noise source.
That said, this scattering knowledge means that the experiment of feeding back alignment signals to the OMC SUS should end. I've added OM3 to the ASC model, so we can feed back the DC centering signal from AS_B to OM1-3, along with the two degrees of freedom from the OMC. This will give us a 3x3 control matrix for the HAM6 alignment, similar to what's being done at L1. It ignores the centering on AS_A, but centering on AS_B should be sufficient.
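Schematically, the 3x3 scheme amounts to inverting a measured sensing matrix to get the output matrix (the numbers below are made up for illustration; the real matrix has to be measured):

    import numpy as np

    # Rows: sensed signals (AS_B DC centering, plus the two OMC alignment
    # degrees of freedom); columns: actuators (OM1, OM2, OM3).
    # Placeholder values only.
    sensing = np.array([[1.0, 0.5, 0.1],
                        [0.2, 1.0, 0.4],
                        [0.1, 0.3, 1.0]])

    # The control (output) matrix is the inverse of the sensing matrix,
    # so each sensed degree of freedom maps to a combination of OMs.
    control = np.linalg.inv(sensing)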
Btw, the dither loops need to be re-commissioned because I moved the dither lines up to ~1.7 kHz. This changed the sensing matrix, so it needs to be remeasured and inverted for a new control matrix. Tomorrow we will 1) switch the control topology to use the OMs rather than the OMC SUS, and 2) switch the sensing topology back to the dither for low-noise operations.
Today we have high winds again. At 22:24 UTC I switched the ALS arm requests to LOCKED_NO_SLOW_NO_WFS. This means H1:ALS-REFL_CNTRL_OUT_DQ will be a measure of the arm length fluctuations (in um), contaminated by the coupling of angle into the length sensor, which is due to the high transmission of the ETMs. The locks are only lasting at most 30 seconds. We have both end station ISIs with 45 mHz blends along the beam direction.
Blends along the beam direction were switched to 90 mHz at 22:37 UTC.
At 23:23 UTC I switched the end X sensor correction to use the BRS, and turned off beam direction sensor correction in End Y.
TJ Massinger, Ryan F.
We created new MEDM screens for the new ASC and OMC ODC channels, and updated the screen for LSC. These are in their appropriate common/medm/ directories. LSC_ODC.adl has not been committed to the SVN because the local directory is in a strange state. Pictures attached! We also took a first pass at setting all of the EPICS records for the LSC and ASC ODC channels. These settings have been captured in the safe.snap files for the two models and committed to the SVN.
I remind everyone that white is a forbidden color for MEDM screens, since that's the color they turn when they have lost contact with the server.
This was done in order to fix a couple of outstanding usability issues with the SUS guardians, and to make things generally easier to deal with. Here are the key changes:
This is a fairly substantial change to procedure, but it has a couple of important benefits:
This gives an effective "one button" misalign/align. We also no longer need to worry about saving alignments. w00t!
Sheila declared that the ALIGN_TO_PD1 and ALIGN_TO_PD4 states are no longer needed with the new changes.
In general, guardian now touches less stuff. This is important because it reduces the cross section with the ISC automation that occasionally needs to adjust the LOCK filter module gains. It also increases what can be monitored by the new SDF system.
This will help get the suspensions to the correct state more quickly on guardian restarts, with less disturbance.
In the SAFE state, the SUS is in the following state:
The full "enable" and alignment procedure is as follows:
Here's the new graph:
The DAMPED state now has only the DAMP outputs engaged, so just the damping loops are on. The FULLY_ENABLED state has the remaining control outputs engaged. The ALIGNED and MISALIGNED states have the OPTICALIGN and TEST P/Y OFFSETs enabled, respectively.
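For illustration, a minimal guardian-style sketch of what such states might look like (this is not the actual SUS2.py code; the ITMX channel names are placeholders, and 'ezca' is the EPICS accessor that guardian provides to system modules at runtime):

    from guardian import GuardState

    class DAMPED(GuardState):
        """Only the DAMP filter outputs engaged, i.e. damping loops on."""
        def main(self):
            for dof in ['L', 'T', 'V', 'R', 'P', 'Y']:
                ezca.switch('SUS-ITMX_M0_DAMP_%s' % dof, 'OUTPUT', 'ON')
            return True

    class ALIGNED(GuardState):
        """OPTICALIGN P/Y offsets engaged on top of the enabled outputs."""
        def main(self):
            for dof in ['P', 'Y']:
                ezca.switch('SUS-ITMX_M0_OPTICALIGN_%s' % dof, 'OFFSET', 'ON')
            return True

    # The real graph also has FULLY_ENABLED and MISALIGNED states.
    edges = [('DAMPED', 'ALIGNED')]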
All SUS guardians were updated such that the TEST_P and TEST_Y filter modules have the same GAIN calibrations that are in the OPTICALIGN banks. The OFFSETS in the TEST_{P,Y} modules were set such that they corresponded to the last stored "misaligned" values in the alignment snapshots.
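Something like the following (again with placeholder channel names) captures the gain matching that was done:

    # Copy the OPTICALIGN gain calibration into the TEST bank so that an
    # OFFSET in either bank corresponds to the same physical angle.
    for dof in ['P', 'Y']:
        ezca['SUS-ITMX_M0_TEST_%s_GAIN' % dof] = \
            ezca['SUS-ITMX_M0_OPTICALIGN_%s_GAIN' % dof]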
All SUS guardian nodes were then restarted. They all came up without issue.
This is probably the biggest gotcha. We need to update the IFO_ALIGN screen to give an indication that the MISALIGNMENT (TEST) OFFSETs are enabled. There is also no indicator on the SUS_*_OVERVIEW screens that the suspension is in a misaligned state. Guardian should be considered the main authority for SUS alignment state now.
This means that after a front end reboot the stored alignment and misalignment OFFSETS will be lost. However, they can be easily restored from the BURT snapshots.
I will put together a script that will allow us to easily restore an alignment offset to any point in the past, including possibly the last alignment before a reboot.
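One possible shape for such a script, using the NDS client to pull the stored offset value from an arbitrary past time (server name, port, and channel are assumptions):

    import nds2

    def past_offset(channel, gps_time, span=60):
        # Fetch `span` seconds of `channel` starting at `gps_time` and
        # return the mean, suitable for writing back into the OFFSET field.
        conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
        buf = conn.fetch(int(gps_time), int(gps_time) + span, [channel])[0]
        return buf.data.mean()

    # e.g.: past_offset('H1:SUS-ITMX_M0_OPTICALIGN_P_OFFSET', 1109000000)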
This is to ensure the same calibrations for the OPTICALIGN and TEST OFFSETs. These don't usually change, so it shouldn't be an issue, but when they do need to be changed, they will need to be updated in two places.
The new code is in two new files:
The new SUS2.py holds all the new guardian logic, and the new sustools2.py is a slightly cleaned-up, guardian-ified and improved version of sustools. SUS2.py imports sustools2.py. The old SUS.py was renamed to SUS1.py. Reverting to the old configuration is a simple matter of repointing the new SUS.py symlink to the old SUS1.py module.
The plan is still to overhaul the OPTICALIGN parts such that they handle all of this in one place, without needing to use these TEST modules. This will make things even cleaner. We will also integrate the new Sigg integrators so that we can have glitch-less offloading of integrated DC values from the ASC loops to the OPTICALIGN biases.
So, things to do:
😁
I suppose this means that the ditherAlign script we use to align the TMSs needs to be somehow modified, since I think it currently overwrites these offsets in order to point the TMS to the ITM baffle PDs.