This is an update to alog 19166.
All points in the reliability and operations category have been addressed or completed. More work will be done to fine-tune the alignment control and the scripting. Improvements have been made at low frequencies due to tuning of A2L, aux DOFs, loop shapes, etc. Below is the updated list.
SudarshanS, TravisS, JoshuaF
After some investigation yesterday, it looks like the X-end Pcal laser beam is clipping inside the chamber on the way out (after hitting the test mass).
We were aware of a slow drift in the Pcal laser power, as observed by the RxPD (receiver-side photodetector) at X-end, sometime before ER7. We let the Pcal run without fixing it because we were confident the clipping was on the way out. This was confirmed by the ER7 data, which show that the Pcal-induced displacement on DARM agrees with the calibrated TxPD on the transmitter side of the Pcal enclosure.
Yesterday we investigated whether the clipping was on the aperture of the RxPD and/or the steering mirror in the RxPD enclosure, which would have been an easy fix. However, that is not the case: one of the two beams is clipping inside the chamber on the way out. We measured the power of each beam individually before it enters the chamber and after it exits, as follows.
                            | Inner Beam | Outer Beam
Before entering the chamber | 0.735 W    | 0.715 W
After exiting the chamber   | 0.726 W    | 0.077 W
This still does not affect the operation of the Pcal, because the displacement can be calibrated using the TxPD. However, from the trend plot we know that this change happened sometime around May 20, and we should be able to revert to the old configuration after some more investigation of what changed during that time.
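For reference, a minimal sketch of the fractional power loss implied by the table above (Python; the only inputs are the measured power numbers):

    # Fractional transmission of each Pcal beam through the chamber,
    # using only the power numbers in the table above.
    powers = {
        "inner": {"before": 0.735, "after": 0.726},  # Watts
        "outer": {"before": 0.715, "after": 0.077},  # Watts
    }

    for beam, p in powers.items():
        transmission = p["after"] / p["before"]
        print(f"{beam} beam: {transmission:.1%} transmitted, "
              f"{1 - transmission:.1%} lost")

This gives roughly 99% transmission for the inner beam and only about 11% for the outer beam, consistent with one beam clipping inside the chamber.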
This is an update to alog 19451.
All tasks which had been scheduled for the maintenance periods of 7/7, 7/14, and 7/21 have been completed. Furthermore, all pending SUS/SEI ECR model changes have been implemented. However, the update to the new, faster end-station SUS computers has proven somewhat unstable.
Completed tasks:
Maintenance period 8/4:
Maintenance period 8/11:
Between April 21 and July 14, the HEPI fluid system accumulators were properly charged--19632. Checking the charge has risks, though: not only does the pump need to be spun down, taking down the entire SEI isolation, but checking the pressure via the accumulator's Schrader valve may cause it to start leaking. This is what I suspect happened at EndY on 14 July. When I was at EndY on Tuesday 21 July looking at the reference voltage of the pressure sensors (attempting to learn why the sensors are noisy), I noted the HEPI reservoir fluid level was dramatically down. Yesterday I found the Schrader valve was indeed leaking and needed replacement.
Fortunately, it appears the platform is not that sensitive to the loss of just a single accumulator; or, maybe more importantly, it depends on where that accumulator is in the system. Attached are coherence plots between the servo pressure and the HEPI and ISI inertial sensors. The reference traces (thick, pale) are from 11 July, before my confirming measurement on July 14. The current traces (thin, dark) are from the early morning hours yesterday, after I saw the low fluid level. While there is a scattering of peaks of increased coherence, in no instance are they in adjacent frequency bins. There are a few higher-frequency (between 0.5 and 3 Hz) spikes that approach 0.9.
I'll look at this again in a couple of days and see if these spikes go away now that this one accumulator has been recharged. Also, I need to check the effect of a lost accumulator on the fluid level in the corner station. The actual fluid 'loss' should be the same, but the reservoir is larger, so the drop should be smaller. Still, the loss of one accumulator would be noticeable.
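For context, the coherence in the attached plots is the standard magnitude-squared coherence between the pump servo pressure and each inertial sensor. A minimal scipy sketch of that estimate, using placeholder data and an assumed sample rate rather than the real channels:

    import numpy as np
    from scipy.signal import coherence

    # Placeholder time series standing in for the pump servo pressure and an
    # ISI inertial sensor; in practice these would be fetched from the frames.
    fs = 256.0                              # assumed sample rate [Hz]
    rng = np.random.default_rng(0)
    pressure = rng.standard_normal(int(3600 * fs))
    l4c = 0.3 * pressure + rng.standard_normal(pressure.size)

    # Magnitude-squared coherence, averaged over ~10 s segments.
    f, coh = coherence(pressure, l4c, fs=fs, nperseg=int(10 * fs))
    print(f"peak coherence {coh.max():.2f} at {f[np.argmax(coh)]:.3f} Hz")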
I don't understand why the X & Y DOF L4C signals run so noticeably beneath the instrument noise floor. The RZ is just a little below and the others are okay. The Z looks most reasonable. The T240 signals look okay. I'm pretty sure the ISI remained locked during these measurements, but anything is possible.
Addition: Looks like there were data drops, and the last measurement started at 0745 local instead of ending around 3 am. Things could certainly have gotten bad on the platform at this time. Is this a problem with the frame writer dropouts/restarts? I may need to redo this, but maybe the coherence statements remain valid?
There was a question yesterday about the blends we are running on the BSC-ISIs. I'm attaching plots of the Quite90/250 and Windy90/250 blends. Pages 1-3 are the 90 mHz X/Y blends, pages 3-6 are the 250 mHz RX/RY blends. The Quite blends are what we are currently running on St1 in XYZ (90) and RX/RY (250), and there is some evidence that we can run them during windy times as well. The Windy blends are largely untested and don't currently run on the end-station ISIs; I don't know why, and I haven't had much of a chance to look into it. The end ISIs will switch to the Windy blends, but trip after a few seconds. Pages 3 and 6 at least show that the filters as installed on ETMY are properly complementary; ETMX is the same.
For comparison, here are the Quite blends compared to the LLO blends we used previously. Pages 1-3 are the 90 mHz X blends and pages 4-6 are the RY blends. Pages 2 & 5 are probably the clearest to look at.
For fun, here are the HAM 0128 blends. Page 2 shows the complementary form of the X and RY blends, page 1 is the as installed version.
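The complementarity check mentioned above (pages 3 and 6) amounts to verifying that the low-pass and high-pass halves of a blend sum to unity at all frequencies. A toy sketch of that check with scipy, using a made-up second-order pair rather than the installed foton filters:

    import numpy as np
    from scipy.signal import butter, freqz

    # Toy complementary blend pair (the installed ETMY blends are higher-order
    # foton filters; this only illustrates the complementarity check).
    fs = 512.0
    b_lp, a_lp = butter(2, 0.09 / (fs / 2))   # "90 mHz"-style low pass
    b_hp = a_lp - b_lp                        # complement: HP(z) = 1 - LP(z)

    f, h_lp = freqz(b_lp, a_lp, worN=4096, fs=fs)
    _, h_hp = freqz(b_hp, a_lp, worN=4096, fs=fs)

    # Properly complementary filters sum to 1 (0 dB magnitude, 0 deg phase).
    print(np.max(np.abs(h_lp + h_hp - 1.0)))  # ~1e-15 for this toy pair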
Work:
- Richard, cosmic ray work continues, not in LVEA
- no SEI
- Apollo, beam tube cleaning ongoing today
- Gerardo, vacuum, High Bay area
- SUS, no work
- PSL, yesterday's work OK, no work today
- Commissioning, intensity noise, frequency noise
- Jim, Frame Writers work continues
I turned on the 530-545 notch in FM7 of ETMY L3 LOCK, and switched to ETMY with the ESD low pass on. As the alogs of Evan and Rana suggested, by notching the Pcal lines in the drive to ETMY we can engage the low pass and still have an RMS of about 3000 counts on the ESD (screenshot of ETMY drives attached).
The resulting DARM spectrum seems slightly better at around 50-80 Hz. Things are worse below 20 Hz because I increased the gain of DHARD YAW. We can see if we are at least stable at high power with this increased gain for DHARD, and then worry about the 20 Hz noise.
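For reference, a minimal sketch of a band-stop like the 530-545 Hz notch described above. The installed FM7 filter is a foton design, so the order and shape below are only assumptions for illustration:

    import numpy as np
    from scipy import signal

    # Illustrative 530-545 Hz band-stop at the 16 kHz SUS rate; the real FM7
    # notch is a foton design, so the order and shape here are assumptions.
    fs = 16384.0
    sos = signal.butter(4, [530.0, 545.0], btype="bandstop", fs=fs, output="sos")

    # Check the attenuation near the middle of the stop band.
    f, h = signal.sosfreqz(sos, worN=2**16, fs=fs)
    idx = np.argmin(np.abs(f - 537.0))
    print(f"attenuation at {f[idx]:.1f} Hz: {20 * np.log10(np.abs(h[idx])):.1f} dB")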
We had one of the huge glitches that we have been calling beam tube glitches, at around 8:15 UTC July 24th, even though no one is cleaning the beam tubes. I've hit the intent bit and am leaving the IFO locked.
There is currently a discrepancy of about 25 Mpc between the range displayed in the control room and the range on the summary pages.
This lock has lots of glitches at harmonics of ~505 Hz. We've seen this before, and it seems to be some kind of upconversion of the violin modes. I checked to see if these were caused by overflows in the ETMY L3 DACs. I also checked the L2 DACs on both ETMs, and a bunch of photodiodes (DCPD, POP_A, and many ASC). The only DAC/ADC overflows on any of them were during two isolated incidents or at the last second of lock. So the 505 Hz harmonics are not related to saturations in any of these. I did find that the ETMY L3 DACs started to saturate about a minute before the end of the lock. There are two isolated times in the middle of the lock with overflows, 1121762742 and 1121765852, that detchar should look into.
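The overflow check itself was done by hand from the front-end overflow counters. Below is a rough sketch of how one might automate such a check with gwpy; the channel name is a placeholder, not the real ETMY L3 overflow channel, and the GPS span is simply chosen to cover the two isolated incidents noted above:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    # Hypothetical overflow-counter channel; the real accumulated-overflow
    # channel for the ETMY L3 DAC would be substituted here.
    CHAN = "H1:FEC-XXX_DAC_OVERFLOW_ACC_0_0"

    # Span of interest (GPS), covering the two isolated incidents noted above.
    start, end = 1121760000, 1121766000
    data = TimeSeries.get(CHAN, start, end)

    # The counter only increments when an overflow occurs, so flag samples
    # where it changes value.
    jumps = np.flatnonzero(np.diff(data.value) > 0)
    for i in jumps:
        print(f"overflow near GPS {data.times.value[i]:.0f}")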
Here is some data from the cavity pole tracker (alog 19852) from last night's lock stretch (which lasted approximately 100 minutes):
The pole frequency was stable for the first 75 minutes or so. Then it dropped by 50 Hz and became unstable for the remaining 20 minutes, followed by a lock loss at the end. There were two glitches in the estimated cavity pole, one at about 25 minutes and the other at about 75 minutes. At first I thought they were related to the SR3 optical lever, which also showed some glitchy behavior; however, the cavity pole and SR3 oplev glitches do not appear coincident. So the glitches in the cavity pole must have been driven by something else.
Interestingly, after 75 minutes the cavity pole seemed to start correlating with motion in PR3 yaw -- as PR3 oplev yaw went down the cavity pole also went down, and vice versa. On the other hand, I did not see a clear correlation of the cavity pole with the test mass oplevs.
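A sketch of how one could quantify that apparent correlation with a simple correlation coefficient; the arrays below are placeholders for the tracker output and the PR3 oplev yaw, resampled onto a common time base:

    import numpy as np

    # Placeholders for the estimated cavity pole and PR3 oplev yaw over the
    # last ~25 minutes of the lock, on a common time base.
    cavity_pole = np.random.randn(1500)   # Hz, placeholder
    pr3_yaw = np.random.randn(1500)       # urad, placeholder

    r = np.corrcoef(cavity_pole, pr3_yaw)[0, 1]
    print(f"Pearson correlation coefficient: {r:+.2f}")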
Jenne, Sheila
On the theory that our slow instability at 24 W (alog 19855) is caused by DHARD signal contaminating the SOFT signals and the AS_C signal, which therefore gets into the SRC WFS, we decided to boost the DHARD loops. For pitch this was easy: I just moved the pair of complex poles in the existing boost down from 0.2 Hz to 0.02 Hz. This worked the first time.
For yaw I tried similarly lowering the poles in the existing boost to 0 Hz, but this caused an instability that broke the lock on power-up. Jenne and I then had a look at the transfer function and remembered how not-great this loop was. In fact, at 3 W it had about 7 UGFs, the lowest one just under 0.3 Hz. We made a small adjustment to the plant inversion to get rid of two of them (the new plant inversion is invY2; the Q of the 1.8 Hz pole is higher), we increased the gain in the input matrix from 1 to 3, and last I modified the boost, which had been a pair of 0.4 Hz poles with a Q of 1.2 and is now a pair of 1 Hz poles with a Q of 2. The first attached screenshot shows the transfer function before and after these changes, measured at 3 W. This loop currently doesn't have any roll-off, but we don't have much phase margin to spare. The second attached screenshot shows measurements at 3 different powers; reference numbers are in the legend of the phase plot. As we increase the power, the gain below the resonance drops as expected; this means that at 24 W we barely have any gain, even after increasing it by about 20 dB.
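To visualize the boost change, a minimal scipy sketch of the old and new complex pole pairs. The boost's zeros and the rest of the loop shape are omitted, so this is not the installed filter, just the pole-pair portion described above:

    import numpy as np
    from scipy import signal

    def pole_pair(f0, q):
        """Analog complex pole pair at f0 [Hz] with quality factor q,
        normalized to unity gain at DC."""
        w0 = 2 * np.pi * f0
        return signal.TransferFunction([w0 ** 2], [1, w0 / q, w0 ** 2])

    # Old boost pole pair vs the new one described above.
    old = pole_pair(0.4, 1.2)
    new = pole_pair(1.0, 2.0)

    for name, tf in [("old (0.4 Hz, Q=1.2)", old), ("new (1 Hz, Q=2)", new)]:
        w, mag, _ = signal.bode(tf, w=2 * np.pi * np.logspace(-2, 1, 200))
        print(name, f"peak gain {mag.max():.1f} dB at "
              f"{w[np.argmax(mag)] / (2 * np.pi):.2f} Hz")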
Aside: Part of the reason we had trouble designing this loop in the first place is that during the CARM offset reduction the gain changes by about 30 dB and the resonances move. I thought one way of making things easier would be to use the dynamic power normalization, which we haven't used here yet. Scaling by POP DC would be about right: we could lower the gain by 26 dB between the point where we turn on the loop and on resonance, so we could design a loop with only one UGF above all the structure and not need so much gain margin. I tried this, but it didn't work. I tried turning up the gain in the input matrix and turning it back down; that was fine. So I tried to use the power scaling to do exactly the same thing, and that broke the lock. I don't know if I've been fooled by some hidden normalization, but I don't see anything in the model.
J. Kissel, D. Barker, J. Batch, J. Betzwieser

As the calibration group solidifies how to measure the slowly-time-dependent parameters of the aLIGO detector's DARM loop, we've identified that we can no longer get away with only comparing the real and imaginary parts of the current DARM open loop gain transfer function against those of a reference time -- at calibration line frequencies -- to produce a "gamma" coefficient (a real, scalar, multiplicative factor applied to the detector's strain output). Because the charge on the test mass evolves with time (on the actuation side of things) and the DARM coupled-cavity pole frequency evolves with time (on the sensing side of things), we must now at least track, if not correct for, time-varying optical gain, cavity pole frequency, and actuation strength. As such, we've parametrized the interferometer response (see T1500377), and this parametrization needs the real and imaginary parts of the actuation function, sensing function, and DARM filter -- again at calibration line frequencies -- for the reference times.

Long story short, I've added EPICS records to the CAL-CS front-end model to store these new real and imaginary parts of the actuation function, sensing function, and DARM filter. The new channels are

    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_A_REAL
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_A_IMAG
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_D_REAL
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_D_IMAG
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_C_REAL
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_C_IMAG
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_C_NOCAVPOLE_REAL
    H1:CAL-CS_TDEP_${ACTTYPE}_LINE${NUM}_REF_C_NOCAVPOLE_IMAG

where ${ACTTYPE} = ESD, PCALX, or PCALY and ${NUM} = 1, 2, (3, or 4); there are 4 lines for ESD and 2 lines for each PCAL. I will update the CAL-CS MEDM screens with a look-up table for these reference values tomorrow and/or over the weekend.

Short story long again, I ran into two problems trying to compile the changes I wanted to make:

(1) I'm not sure how best to describe this bug, but it comes up when you use several FROM and GOTO tags on multiple levels of subsystems without connecting them to a CDS part on each level. There were two examples of this in my desired CAL-CS model, where I'd tagged a signal, traversed down a subsystem level, tagged it again, and traversed down a few more subsystem levels before it connected to a filter bank. Schematically, the two cases were:

    CASE 1:
        IPC part --> CAL_CS_MASTER > TIMEDEPENDENCE ---< TAG 1
        TAG 2 >--- ESD_LINE1 > DEMOD > DEMOD --> filter bank

    CASE 2:
        DARM filter bank ---< TAG 1
        TAG 1 >--- TIMEDEP ---< TAG 2
        TAG 2 >--- PCALX_LINE1 > DEMOD --> filter bank

I'm not sure which of these two caused the problem, but the attached screenshots with the file tag "doesnotcompile" demonstrate what I *wanted* to compile. The correspondingly named screenshots *without* any extra file tag show what we had to do in order to get the model to compile. Pretty darn ugly.

(2) In the original "doesnotcompile" version, the PCAL and ESD line calculations were stuck in a subsystem I'd called TIMEDEPENDENCE. Sadly, this totally blew the channel character limit out of the water. It is a known feature that channel names can only be 60 characters or less; I had just forgotten, and also didn't appreciate how many sub-sub-sub systems there were in this block.
As such, I've reduced the sub system to the obfuscated "TDEP." Oh well.
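A quick sketch of how the new channel names expand from the template above, and the check against the 60-character limit that motivated the TDEP rename. The limit and line counts are taken from the text; everything else is illustrative:

    # Expand the new CAL-CS reference channels from the template above and
    # check them against the 60-character channel-name limit that forced
    # the TIMEDEPENDENCE -> TDEP rename.
    ACTTYPES = {"ESD": (1, 2, 3, 4), "PCALX": (1, 2), "PCALY": (1, 2)}
    PARTS = ["A_REAL", "A_IMAG", "D_REAL", "D_IMAG",
             "C_REAL", "C_IMAG", "C_NOCAVPOLE_REAL", "C_NOCAVPOLE_IMAG"]

    channels = [
        f"H1:CAL-CS_TDEP_{act}_LINE{num}_REF_{part}"
        for act, nums in ACTTYPES.items()
        for num in nums
        for part in PARTS
    ]

    too_long = [c for c in channels if len(c) > 60]
    print(f"{len(channels)} channels, {len(too_long)} over the 60-character limit")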
This is my ongoing attempt to damp the 501.606 Hz line as I slowly learn about damping filters. This mode is not touched by the Guardian and will always be off while I'm not using it.
Added 1109 channels. Removed 524 channels.
Got a notification somewhere around the CARM_ON_TR to REDUCE_CARM_OFFSET transition: NO IR in arms!!! I tried Sheila's alog suggestion of setting the ISC_LOCK guardian to manual mode and reselecting PREP_TR_CARM, but it did not seem to work. I tried going back and forth between CARM_ON_TR and REDUCE_CARM_OFFSET and got brief spikes in TR_X_NORM and TR_Y_NORM, but they did not last. We lost lock in the middle of this.

The ISC_LOCK guardian was still in manual mode when it lost lock, and the guardian appeared to go haywire when it did; I think it may have been quickly flashing between all of the states. I quickly set it back to auto mode, but something was still wrong. I tried different things, including reloading the guardian, but that ended with a user error message. At the same time the DAQ was being rebooted and the NDS servers went away; however, only the SYS_DIAG guardian seemed to have an error related to this, which cleared when they came back. The ISC_LOCK guardian user error message was still there. It seems to have been because, when I reloaded the guardian, it loaded code that Sheila was in the middle of editing. She commented something out and it was able to reload and move on. I am not certain whether any of the problems leading up to this were related to the NDS restarts.

I got a SYS_DIAG notification that there was an issue with the NPRO noise eater. I think this may have been there since the PSL work. Sheila, Cheryl, and I reset it. After that the ISS went into oscillation; Jason was able to fix it.

I have left the IFO in DC_READOUT while Sheila and Jenne investigate why an increase in power is breaking the lock.

Lessons learned:
- When the ALS_COMM guardian notifies you to find IR by hand: go to the ALS Overview, click on the lower-left 80 MHz VCO to open the H1:ALS-C_COMM_VCO MEDM screen, and move the Set Off (Hz) slider to bring back the LSC-TR_X_NORM signal. This can usually be done with 3 clicks to the left.
- When the ALS_DIFF guardian notifies you to find IR by hand: go to the LSC Overview and adjust the very small slider labeled ALS_DIFF at the bottom of that MEDM screen to bring back the LSC-TR_Y_NORM signal.
- Dave says losing the NDS servers *can* affect locking the IFO because the guardian connects to them.
Items completed today:
1. High Voltage Power Supply was installed (PEM-C1) and HV outputs were verified
2. Cosmic Ray Electronics D1500202 was installed (PEM-C1)
3. All BNC and MHV cables were terminated
4. Raymond Frey looked at the outputs of the cosmic detector - all within spec

Signals still need to be entered into the DAQ (ongoing).
WP 5382 has been opened to add the Cosmic Ray channels to the h1pemcs model. Currently one channel will be added to the DAQ at 16kHz.
On ISC_GUARDIANS.xml, the ISC_LOCK pull-down menu has two states named "Down" - why?
Picture attached.
There are also two each of the CHECK_IR, NONE, and DRMI_LOCKED states.
To fix the problem of accidental PUM/ESD crossovers near the violin mode resonances, the PUM is now rolled off more aggressively above the crossover (see attachment of EY L2 LOCK FM6 vs FM7, where grey is old and purple is new).
Previously, the PUM had an f2 plant inversion out to about 600 Hz. It is now more like 300 Hz. There is also a broad notch around the first violin mode just to make sure that we do not have an accidental crossover there.
DARM OLTF attached (blue and red are essentially part of the same measurement). The high end of the phase bubble has flattened out a bit.
The PUM/ESD crossover was remeasured and was found to be satisfactory (attachment). Additionally, the rms drives to the three stages seem to be acceptable as well (attachment), although these were taken during low seismic activity.
Taking the data from the time in the spectra posted above, I looked at what is using up the ESD range. It looks like it should be fine to engage two stages of low pass filter in the LVLN driver.
The first attached plot shows the ESD MASTER_OUT LL channel as well as the expected signal level after applying the compensation filter for the two (50:2.2) analog filters.
The RMS goes from 1450 to 30000 cts after switching the filter.
Most of the RMS increase would come from a few CAL lines around 540 Hz which are not accurately notched by the DARM filter bank. These filters should be modified when the line frequencies are changed. Also, the line amplitudes are too large. The line amplitudes should probably be set by determining what physical parameter we need to estimate with what SNR, instead of some ad-hoc amplitude based on the power spectra.
Notching out the CAL lines would reduce the DAC signal from an RMS of 30000 cts (un-tenable) to 3000 cts (reasonable).
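A minimal sketch of the kind of RMS bookkeeping behind those numbers: integrate the drive PSD with and without the CAL-line band. The data below are placeholders, not the actual MASTER_OUT signal:

    import numpy as np
    from scipy.signal import welch

    # Estimate how much of the ESD drive RMS comes from the CAL lines near
    # 540 Hz by integrating the PSD with and without that band.  `drive`
    # stands in for the compensated MASTER_OUT time series.
    fs = 16384.0
    drive = np.random.randn(int(60 * fs))        # placeholder data

    f, psd = welch(drive, fs=fs, nperseg=int(4 * fs))
    df = f[1] - f[0]

    total_rms = np.sqrt(np.sum(psd) * df)
    mask = (f < 530.0) | (f > 545.0)             # drop the CAL-line band
    notched_rms = np.sqrt(np.sum(psd[mask]) * df)

    print(f"RMS: {total_rms:.0f} cts  ->  {notched_rms:.0f} cts with the "
          f"530-545 Hz band removed")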
To help with getting the glitch rates, I'm running a script every minute which performs a DIAG clear on the models showing this issue. These models are: IOP-SUS[EX,EY], SUS-ETM[X,Y], IOP-SEI-E[X,Y], ALS-E[X,Y], ISC-E[X,Y].
Yesterday I started a cut-down version of this script which only cleared the ALS and ISC errors; however, not every SUS glitch produces a remote IPC receive error, so this was under-counting.
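For the record, a rough sketch of what such a once-a-minute diag-clear loop could look like with pyepics. The actual script and the per-model reset channel names are not reproduced here, so the ones below are placeholders:

    import time
    from epics import caput

    # Hypothetical front-end DIAG reset channels for the models listed above;
    # the real script and channel names may differ (the per-model reset record
    # is assumed here to follow an H1:FEC-<dcuid>_DIAG_RESET pattern).
    DIAG_RESET_CHANNELS = [
        "H1:FEC-101_DIAG_RESET",   # placeholder dcuid for IOP-SUS-EX
        "H1:FEC-102_DIAG_RESET",   # placeholder dcuid for SUS-ETMX
        # ... remaining end-station models ...
    ]

    while True:
        for chan in DIAG_RESET_CHANNELS:
            caput(chan, 1)         # momentary reset, clears accumulated errors
        time.sleep(60)             # once per minute, as in the running script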
We have noticed that in the past 20 hours only EY has glitched. We are still seeing two different types of IOP-SUS glitches, either with or without the TIM bit set.
During this morning's SUS Detector telecon, Stuart pointed us to an LLO alog about similar timing glitches observed on l1susb123 (also a new fast FE machine). See LLO alog 19236.