Jenne, Evan, Stefan
- Reverted the ramp times in PREP_TR back to Monday's values. This seemed to be more reliable and did not produce any transients at that stage of the script.
- Repeatedly brought the machine up to 24 W, but we still see a hint of the 0.41 Hz CSOFT resonance, and occasionally a YAW gain oscillation at maybe 1.5 Hz.
- The DC oplev PIT on SR3 was stuck on the limiter.
- We also cleaned up the Guardian flow chart. It now has a single flow line. (The previous spider web caused more lock losses than it was worth.)
- Since the 0.41 Hz problem seems to come and go, we decided to kill it with a good CSOFT loop design, using a 2nd UGF at the resonance to damp it. Evan is still testing this filter.
Kiwamu, Stefan, Jenne, Matt, Evan,
We've almost recovered from yesterday's "maintenance". Kiwamu found this morning that the X arm camera servo had been off; I think this was a miscommunication/mistake during maintenance recovery, which probably led us into some bad alignment last night. Stefan/Kiwamu/Jenne did some realignment this morning and early afternoon, and we were able to lock with good recycling gain and were stable at 24 Watts.
The gain of the OMC DC PDs was wrong; this was picked up by the SDF, but I am not sure what maintenance activity would have caused it. The ETMY ESD bias had the wrong sign. Stefan added it in SDF.
We were able to lock momentarily at low noise, and lost it for reasons that we don't understand yet, but which might have been a slow alignment instability.
About 2 hours ago the wind picked up; we have gusts up to 35-40 mph, forecast to stay this way until 1 am. Our improvement to the offloading of DRMI has certainly helped with windy locking: DRMI locking is still slow, but it does lock and stay locked. Stefan has raised the gains that are used for acquisition, so we can try to get some statistics about DRMI locking times. Now we get to the next windy locking problem, which seems to be the handoff from ALS COMM to the transmitted arm powers for CARM.
The guardian has been having an occasional problem where it cannot access channels (the channels exist and are spelled correctly). One solution has been to reload the guardian a few times; just now I had to reload the DRMI guardian about 10 times. Eventually Stefan paused and reloaded it and the problem went away.
Screen shot attached.
We've also had some more EPICS freeze incidents today. 4:42:39 UTC; Local 17:25:50
08:09 Ken working on GPS antenna on roof
08:46 Peter, Jeff B. to chiller room to look at chiller displays
08:52 Richard to roof
09:02 Peter, Jeff B. back
09:39 Travis to LVEA to look for part
09:45 Rebooted projector0 due to memory leak, Jim B. recreated credentials for seismic DMT
09:49 Travis back
11:26 Richard to DC power mezzanine to look at roof sections
11:32 Jeff B. to cleaning area
11:44 Richard back
12:17 Ken to electronics room to get measurements for drilling hole in building for GPS antenna cabling, then drill hole
12:58 Nutsinee to LVEA to take pictures of CO2 laser chassis
13:04 Nutsinee back
13:30 Ken done
Commissioners working on recovering IFO alignment.
We are seeing large CPU max numbers on the IOP and SUS models at the end stations. In addition ADC errors are showing up on the IOP model, and intermittent Dolphin IPC errors on SEI and ISC receivers. I have just cleared out the warnings so we can see how often these appear.
The alignment settings of the IFO this morning are a bit different than they were according to the hourly burt snap files from ~midnight on Monday night, when locking was ~good. The slider values from Monday's good locking are consistent with the alignment values in hourly snaps from a few days before Monday as well. Attached is the snap file from Monday at 11:10pm in the event the commissioners need to do a complete restore. Commissioners report that it is confusing why the ASC systems did not recover the pointing even when the starting point was not quite right. Evan is working on it now.
Most notably:
IM4 is 1400 uRad different in Pitch
IM4 is 200 uRad different in Yaw
MC3 is 100 uRad different in Pitch
MC1 is 80 uRad different in Pitch
PRM is 30 uRad different in Pitch
PRM is 40 uRad different in Yaw
Turns out the IMC slider values changed significantly, causing the IMC to hang at a different place. This caused a lot of the trouble we faced over the last day. Once we simply restored Monday's alignment slider values, the IMC mirror positions moved pretty much back to where they were. This meant we also reverted the alignment references back to Monday's values. This includes the following settings:
H1:ALS-X_CAM_ITM_PIT_OFS 256
H1:ALS-X_CAM_ITM_YAW_OFS 340.9
H1:ALS-Y_CAM_ITM_PIT_OFS 303.9
H1:ALS-Y_CAM_ITM_YAW_OFS 433.5
H1:ASC-X_TR_A_PIT_OFFSET 0
H1:ASC-X_TR_A_YAW_OFFSET -0.095
H1:ASC-X_TR_B_PIT_OFFSET -0.11
H1:ASC-X_TR_B_YAW_OFFSET -0.067
H1:ASC-Y_TR_A_PIT_OFFSET -0.128
H1:ASC-Y_TR_A_YAW_OFFSET -0.174
H1:ASC-Y_TR_B_PIT_OFFSET -0.516
H1:ASC-Y_TR_B_YAW_OFFSET -0.1
H1:ASC-POP_A_PIT_OFFSET 0.38
H1:ASC-POP_A_YAW_OFFSET 0.248
H1:ALS-X_QPD_A_PIT_OFFSET 0.2
H1:ALS-X_QPD_A_YAW_OFFSET 0
H1:ALS-X_QPD_B_PIT_OFFSET 0
H1:ALS-X_QPD_B_YAW_OFFSET -0.05
H1:ALS-Y_QPD_A_PIT_OFFSET 0.1
H1:ALS-Y_QPD_A_YAW_OFFSET -0.4
H1:ALS-Y_QPD_B_PIT_OFFSET 0
H1:ALS-Y_QPD_B_YAW_OFFSET 0.15
Sheila, Jeff, Evan
We had repeated locklosses handing off the DARM sensor from ALS DIFF to AS45Q. We changed the guardian so that the handoff happens at a slightly lower CARM offset, and with a different DARM loop gain (we had previously used these settings back in late February). This new CARM offset makes the AS port more unstable during the DARM handoff, but it makes the transition successful.
We were able to make it to resonance on rf darm, but with a mediocre recycling gain (30 W/W). We spent some time manually steering the ITMs in order to bring the recycling gain up to more than 40 W/W. Then we updated the TMS QPD spot positions and the green alignment references (green QPD offsets and camera positions). It is not clear to us why we had to do this, since we restored all the suspension alignments from before the maintenance work.
We did an initial alignment starting with the new green references. Subsequently, we came into resonance with good recycling gain (>40 W/W) again.
We were able to engage the ASC with these new spot positions. However, at 17 W we saw the same 0.4 Hz resonance that we saw a few days ago, meaning we should not power up further in this configuration.
We redid the dark offsets for the TMS QPDs, since they seemed to be stale.
For now, the DARM handoff has been returned to its old CARM offset. I have left the DARM gain slightly lower than before (80 rather than 125).
The ITM QPD offsets have been reverted to yesterday's values. We are able to engage them as usual in the ENGAGE_ASC state, and they give a good recycling gain. However, at 23 W the interferometer unlocks suddenly after a few minutes. The transmitted arm powers seem slightly less stable than with the new offsets tried above (a slow oscillation with a few-second period can be seen in the arm powers, as well as POP LF), but there is no 0.4 Hz oscillation.
J. Kissel for E. Hall, S. Dwyer, J. Driggers, K. Izumi The IMC is in terrible shape again this morning. Words I got quickly from Evan: "We think that the FSS began oscillating again *during* [full IFO] lock, then the IMC WFS began integrating to a bad place." Obviously the investigation is on-going, but any help from the PSL team in the control room would be appreciated.
J. Kissel, for just about everyone on site
Today was the official start of the so-called "re-locking team." As such, I've folded Ed, Jim (the operators on shift) and Betsy into the routine that I've done in the past for particularly busy Tuesdays and we've made an effort to focus more on speedy recovery from each task. Each part of the following tasks is necessary if we expect to robustly get back to locking ASAP:
- A best-effort understanding of all planned maintenance activities, in order to obtain
- an assessment of how the activities impact others
- an assessment of those activities' impact on the IFO
- an assessment of how we'll recover from, and what to test after, those activities
- Preparing the IFO for maintenance
- Keeping track of the chaos during maintenance
- Beginning regression testing and recovery on as much of the IFO as soon as possible
- Following through with activities, making sure they're closed out and performing regression testing, until the IFO performs as well as it did the night before
- aLOGging anything and everything you can about the day, highlighting the big stopping points and hold ups.
Here's how today went down.
First, refresh your memory with how the plan was supposed to go: LHO aLOG 19600
(All times PDT)
7:45a Ed, Jim, and I prep for maintenance (LHO aLOG 19616)
~8:00a Rick, Jeff, and Jason head into the PSL to do RefCav and PMC Alignment
Fil installs new 9.1 MHz OCXO, and hooks it up to the timing system
Dave, Jim, and Richard go to EX to replace EX SUS front end
Bubba heads to EX for TMDS Piping
Hugh begins checking corner station HEPI accumulator bladder pressures
~10:00a PSL team takes longer than expected, and only gets through RefCav alignment; forgoes PMC alignment until next week (LHO aLOG 19622)
Hugh is finished with Corner Station HEPI, moves on to EX and EY (LHO aLOG 19632)
Fil moves to running BNC cables with Jordan and Vinny near HAM2 and HAM4
Dave reports back that EX went terribly because they got shipped the wrong type of card (a DAC instead of an ADC) for the new PI damping ADC; moves on to EY (LHO aLOG 19659)
Recovery 1: Robert needs the IMC recovered for PSL periscope tuning, and it's part of the plan anyway to recover the IMC after PSL work is done, so the Relocking Team recovers corner station SEI, with a focus on HAM2/HAM3 so the IMC can get back up and running
10:15a TCS X LASER trips because of a finicky temperature sensor in the racks near HAM4; suspected Fil / Vinny / Jordan activity (LHO aLOG 19624)
10:30a Jim still having trouble relocking the IMC (LHO aLOG 19623); we call in Keita, who diagnoses it as an alignment problem (LHO aLOG 19627), likely because the IMC REFL camera was a *little* low, HAM2 had tripped during recovery, and we've seen IMC REFL periscope problems in the past.
11:00a Jeff begins compiling, installing, and restarting ITM models
Jim and Dave finish with EY and EX SUS front end replacement
Recovery 2: Leo measures charge to confirm ETM functionality before front-end model changes; confirms functionality, but finds a sign flip from an out-of-date snap of a non-monitored filter gain
Stefan installs ODC MASTER changes (LHO aLOG 19654)
Jordan, Vinny, Katie perform PEM tap-test calibrations in all VEAs
11:30a Jeff finished with ITMs, moves onto ETMs for SUS model changes
12:00p Kiwamu and Sudarshan begin install of ISS outerloop electronics
Dave finds a bug in Jeff's code changes for the ITMs that caused the front end to stay stalled after startup, and fixes it
12:15p IMC does OK for a bit, but continues to putter; Robert, Kiwamu, and Sudarshan are still delayed
Jeff done with end station models (LHO aLOG 19655)
Dave runs quickly through all other computer reboots and a DAQ restart (LHO aLOG 19659)
Recovery 3: Jenne and Ed begin recovery of end stations and green locking
12:30p Recovery 4: TCS X LASER recovered (LHO aLOG 19624)
PSL trips because a shutter closes (of course this means we lose the IMC), Ed investigates and recovers
12:40p IMC locked, but remains squirrelly
1:00p Jenne and Ed (with help) realize that alignment offsets and M0 settings for the QUADs are bogus because of an out-of-date SDF snap of non-monitored alignment offsets and M0 LOCK filter gains
2:00p Still battling the IMC; Robert calls Rick. Rick and Jeff discover that the real problem with the IMC had been the RefCav realignment causing FSS oscillations all along (LHO aLOG 19631), and alignment was a red herring.
2:11p Recovery 1 complete: stable IMC lock. Robert, Kiwamu, and Sudarshan finally get started
2:52p The first of many models gets killed by the SORT ON SUBSTRING SDF bug, starting with ETMY, further stalling green arm locking recovery (LHO aLOG 19628)
3:30p Diagnosed the problem of the SORT ON SUBSTRING SDF bug, so that stopped (LHO aLOG 19650)
Ed, Jenne, Sheila also get stalled with OFFSET RAMPING BUG (LHO aLOG 19653)
4:00p Jim and Ed's shift is up; Jim has to go but sticks around for a little, still neck deep in green ALS recovery
4:50p Ed has to go, Jenne and Sheila "take over" though they've already been heavily involved
Recovery 1 ... again: FSS oscillations start again, Sheila has to reduce the FSS gain by 3 dB (LHO aLOG 19641)
6:30p Initial alignment complete, begin lock acquisition attempts.
Sheila / Jenne identify that ALS COMM won't lock because the error signal for the VCO frequency is identically zero
7:20p Recovery 3 complete. After a call to Dave, we figure out that at 8:00a, during Fil's install of the new 9.1 MHz OCXO, the timing comparator signal from which the COMM (and DIFF) VCO frequencies are derived had been moved to a different port of the timing fanout, foiling the mapping hardcoded in the Beckhoff PLC that converts the fanout channels to ALS channels. So we reverted the comparator to the right port and moved the oscillator to another port. (LHO aLOG 19646)
Moving on to DRMI lock in the acquisition sequence
9:20p Find bug in ISC_LOCK guardian, a failing conditional statement from code installed a few days ago
9:38p Get to RESONANCE (i.e. full IFO RF locked), find that the ITMY Bounce Mode is EXTREMELY rung up. Trace it down to a bug in ITMY's Bounce / Roll / Violin mode error signal mapping (LHO aLOG 19657)
10:00p ITMY SUS and SEI recovered
10:30p Evan / Sheila continue to wait for DRMI to lock, having trouble and confusion about PRM alignment and ASC start up / lock acquisition
|
V
1:30a Resonance! Finally! Can begin to damp bounce and roll modes.
Still haven't fully tested and completed Recovery 2 as of this entry, because we haven't confirmed that we can get to ETMY low-noise lock.
I attach a panoramic of the white board that Betsy took a picture of yesterday (see LHO aLOG 19603), just to show how the best laid plans begin to fall to pieces as reality sets in throughout the maintenance day.
So, although we got a ton done (as shown in LHO aLOG 19600, there were double the number of tasks that I mention, but they didn't come up here because they didn't *end up* having an impact on immediate recovery), including all of Daniel's major tasks (from LHO aLOG 19451), we still lost a large fraction of time to the following:
- Much belated diagnosis of problems resulting from PSL alignment activity
- New/green users rediscovering the problems with computer restarts, because we still don't have a good solution for holding alignment offsets and other important unmonitored SDF filter bank channels through the computer reboots
- Much belated diagnosis of impact of moving timing comparator from PORT 11 of timing fanout
- Brand new problems with SDF monitoring system
- Much belated diagnosis of new QUAD software bugs
- The "relocking team" fizzling out after 4-5:00p.
This being said, I don't think this is any different from any other heavy maintenance day that I've planned / coordinated to this level of detail ahead of time, e.g. LHO aLOG 16165, LHO aLOG 10849, etc.
* = unexpected restart
model restarts logged for Tue 14/Jul/2015
2015_07_14 01:04 h1fw1*
2015_07_14 09:21 h1iopsusex
2015_07_14 09:21 h1susetmx
2015_07_14 09:21 h1sustmsx
2015_07_14 09:26 h1odcmaster
2015_07_14 09:28 h1alsex
2015_07_14 09:28 h1calex
2015_07_14 09:28 h1hpietmx
2015_07_14 09:28 h1iopiscex
2015_07_14 09:28 h1iopseiex
2015_07_14 09:28 h1iopsusex
2015_07_14 09:28 h1iscex
2015_07_14 09:28 h1isietmx
2015_07_14 09:28 h1pemex
2015_07_14 09:28 h1susetmx
2015_07_14 09:28 h1sustmsx
2015_07_14 10:32 h1alsey
2015_07_14 10:32 h1caley
2015_07_14 10:32 h1iopiscey
2015_07_14 10:32 h1iopsusey
2015_07_14 10:32 h1iscey
2015_07_14 10:32 h1pemey
2015_07_14 10:32 h1susetmy
2015_07_14 10:32 h1sustmsy
2015_07_14 10:34 h1hpietmy
2015_07_14 10:34 h1iopseiey
2015_07_14 10:34 h1isietmy
2015_07_14 11:06 h1alsex
2015_07_14 11:06 h1calex
2015_07_14 11:06 h1hpietmx
2015_07_14 11:06 h1iopiscex
2015_07_14 11:06 h1iopseiex
2015_07_14 11:06 h1iopsusex
2015_07_14 11:06 h1iscex
2015_07_14 11:06 h1isietmx
2015_07_14 11:06 h1pemex
2015_07_14 11:06 h1susetmx
2015_07_14 11:06 h1sustmsx
2015_07_14 11:17 h1susitmx
2015_07_14 11:19 h1susitmy
2015_07_14 12:10 h1susitmx
2015_07_14 12:10 h1susitmy
2015_07_14 12:28 h1iopsusex
2015_07_14 12:28 h1susetmx
2015_07_14 12:30 h1sustmsx
2015_07_14 12:43 h1iopsusey
2015_07_14 12:44 h1susetmy
2015_07_14 12:44 h1sustmsy
2015_07_14 12:45 h1broadcast0
2015_07_14 12:45 h1dc0
2015_07_14 12:45 h1fw0
2015_07_14 12:45 h1fw1
2015_07_14 12:45 h1nds0
2015_07_14 12:45 h1nds1
2015_07_14 14:10 h1susetmy
2015_07_14 15:32 h1susitmx
2015_07_14 15:32 h1susitmy
2015_07_14 15:34 h1isiham6
2015_07_14 15:51 h1odcmaster
2015_07_14 17:06 h1fw1*
2015_07_14 18:44 h1fw0*
2015_07_14 21:34 h1susitmy
2015_07_14 21:45 h1susitmy
Faster front end computer install in end station SUS [WP5351]
Jim, Dave:
h1susex and h1susey front end computers were upgraded to the faster computer model. The SUS QUAD models' processing time decreased from 51 us to 31 us. The TMS SUS models' processing time decreased from 13 us to 10 us.
Additional ADC install in end station SUS
Richard, Dave, Jim:
An additional ADC for the PI model was installed in h1susex and h1susey IO Chassis. The IOP models h1iopsus[ex,ey] were modified accordingly.
FE slow channels added to DAQ via EDCU [WP5352]
Dave:
The H1EDCU_FE.ini file was modified to add one slow channel from each front end computer to the DAQ via channel access. The IOP models' ADC DuoTone channel was chosen as a continuously varying signal. This is being used to investigate the FE channel access freeze issue.
Duplicate EPICS gateways fixed
Dave, Jim:
The two copies of the H1SLOW-H1FE EPICS gateways were stopped and one new gateway was started.
DAQ restart
Dave:
The DAQ was restarted to resync to: new SUS QUAD models, new ODC MASTER model, new FEC EDCU ini file.
Matt, Jeff
The TrueRMS part used in the new violin mode BLRMS appears to be having trouble... its output may be positive with no input, or zero with a large input. It appears that the trouble comes from an uninitialized variable in the RCG part (PART_n). The following code is an approximation to what the RCG generates:
// TrueRMS: PART
if (PART_first_time_through) {
PART_first_time_through = 0;
PART = in0;
PART_sqrsum = in0 * in0;
PART_indatsqrd[0] = PART_sqrsum;
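// NOTE: PART_n and PART_index are never initialized here, so the window
// logic in the else branch below starts from whatever garbage was in memory.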
} else {
if (PART_n < PART_WINSZ) {
PART_index = PART_n++;
} else {
PART_index = (1+PART_index) % PART_WINSZ;
PART_sqrsum -= PART_indatsqrd[PART_index];
}
PART_indatsqrd[PART_index] = in0 * in0;
PART_sqrsum += PART_indatsqrd[PART_index];
PART_sqrval = PART_sqrsum / (double) PART_n;
if (PART_sqrval > 0.0) {
PART = lsqrt(PART_sqrval);
}else{
PART = 0.0;
}
}
With PART_n uninitialized, the PART_sqrsum is polluted by uninitialized values in the PART_indatsqrd array, resulting in a persistent offset.
We'll need to get this fixed and rebuild the SUS FEs.
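For reference, here is a minimal sketch of the kind of initialization fix we have in mind. This is not the actual RCG code: PART_WINSZ is a made-up window size, sqrt() stands in for lsqrt(), and the small main() is just a self-test with a constant input.

#include <math.h>
#include <stdio.h>

#define PART_WINSZ 16384  /* placeholder window size; the real value comes from the RCG part parameters */

static int    PART_first_time_through = 1;
static int    PART_n, PART_index;
static double PART, PART_sqrsum, PART_sqrval;
static double PART_indatsqrd[PART_WINSZ];

/* One update cycle of the running true RMS, with all window state initialized on the first pass. */
static void truerms_update(double in0)
{
    if (PART_first_time_through) {
        PART_first_time_through = 0;
        PART = in0;
        for (int ii = 0; ii < PART_WINSZ; ii++)
            PART_indatsqrd[ii] = 0.0;            /* clear history so stale memory is never subtracted */
        PART_indatsqrd[0] = in0 * in0;
        PART_sqrsum = PART_indatsqrd[0];
        PART_n = 1;                              /* one sample in the window so far */
        PART_index = 0;
    } else {
        if (PART_n < PART_WINSZ) {
            PART_index = PART_n++;               /* window still filling */
        } else {
            PART_index = (1 + PART_index) % PART_WINSZ;
            PART_sqrsum -= PART_indatsqrd[PART_index];   /* now always a real, previously stored value */
        }
        PART_indatsqrd[PART_index] = in0 * in0;
        PART_sqrsum += PART_indatsqrd[PART_index];
        PART_sqrval = PART_sqrsum / (double) PART_n;
        PART = (PART_sqrval > 0.0) ? sqrt(PART_sqrval) : 0.0;
    }
}

int main(void)
{
    for (int ii = 0; ii < 100; ii++)
        truerms_update(1.0);        /* constant unit input ... */
    printf("RMS = %g\n", PART);     /* ... should report 1, with no spurious offset */
    return 0;
}

The only real difference from the generated code above is that PART_n, PART_index, and the PART_indatsqrd history are all set in the first-time-through branch.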
J. Kissel, E. Hall, S. Dwyer The SUS models got a full makeover this morning (see LHO aLOG 19655), in which a lot of top-level model changes were made. It was not until much later in the evening that we were finally able to get to a full-interferometer lock that might begin to resolve the QUADs' highest-frequency vertical and roll modes (a.k.a. Bounce / Roll modes) and violin modes. Thus it was not until then that we discovered that ITMY's top-level wiring for the error signals for each of these loops was incorrect. It was a simple bug to fix; it only took two tries (it's getting late...); but I've now fixed it. Unfortunately, as of this aLOG we haven't gotten *back* up to that same point in the locking sequence, but Evan was able to confirm with offsets that all signals are going to the right banks now. We've confirmed that all other test masses' wiring is correct. I've committed the new top-level models. This required taking the ITMY SEI system to DAMPED (so that we don't take down HEPI), taking ITMY SUS to SAFE, recompiling, re-installing, restarting, and restoring the SUS, and then bringing the chamber back up. We don't think the problems re-locking are because of this quick restart. There are still more issues to track down.
J. Kissel, D. Barker
WP 5346
ECR, II, What It Accomplishes
E1500271, 1066, Adding violin mode damping filter banks, and modifying monitor system
E1500228, 1055, Update tidal correction infrastructure to handle both LLO and LHO control schemes << CAN NOW CLOSE
E1500276, 1068, Solidify top mass Bounce Mode damping infrastructure to use DARM_CTRL as error signal << CAN NOW CLOSE
E1500090, 1068, Solidify top mass Bounce Mode damping infrastructure to use DARM_CTRL as error signal << CAN NOW CLOSE
E1400232, 859, Implementing remote monitoring, restart & reset capability of the High Voltage ESD Driver
E1500230, 1054, Remove some redundant EPICS channels recording IPC errors from top level
I've implemented almost* all of the changes described in the above work permit and integration issues, thanks to Stuart's hard work updating library parts at LLO; see LHO aLOG 18887 and LHO aLOG 18819. The only change that had *not* been previously developed by Stuart was the new(ish) ECR to use DARM_CTRL instead of DARM_ERR for the M0 highest vertical (a.k.a. "bounce") mode damping. This required changes to /opt/rtcds/userapps/release/sus/common/models/QUAD_MASTER.mdl because, not only did I want to change the input signal, I also wanted to clean up the path names. Further, the way it's implemented now, the paths and names in all library parts are agnostic to which sensor signal is used, and if LLO wishes NOT to recommission these damping loops, they can make that choice at the top level of the model. I've also cleaned up the naming and ordering of the inputs for both tidal and DARM damping, clustering them by function so that the top level can be cleaner. However, I did make a couple of modifications to other library parts because they hadn't been implemented exactly right:
/opt/rtcds/userapps/release/sus/common/models/FOUROSEM_STAGE_MASTER_OPLEV_TIDAL.mdl
At LHO, recall that we receive the tidal corrections at the UIM stage. But this implementation hadn't really been changed since Daniel installed it quickly, many moons ago (and when he did, there was no reflection of it on any part of the QUAD's MEDM screen). Stuart has drawn a line as to where it comes in and goes on the OVERVIEW screen, but I found myself craving an EPICS record of what was coming in *before* it was summed into the normal global longitudinal locking signal. So, I added an EPICS record ${IFO}:SUS-${OPTIC}_L1_TIDALMON to the above mentioned library part, and while I was there I added a flag-and-tag to clean up the drawing.
/opt/rtcds/userapps/release/sus/common/models/FOUROSEM_DAMPED_STAGE_MASTER_WITH_DAMP_MODE.mdl
The band-limiting filters for the violin mode monitoring were still *after* the RMS calculation. If we want a band-limited RMS monitor for the violin modes, we need our band-limiting filter *before* the RMS. So I've moved the filters, changed the name of the bank to "BL" instead of "RMS," and added an EPICS output monitor at the output of the RMS calculation. This change has also been reflected on the MEDM overview screen, and the sub-screen which has all of the filter banks.
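To make the ordering point concrete, here is a toy sketch only (not the RCG implementation; the one-pole filters and time constants are made up for the example): a band-limited RMS has to square and average the *band-limited* signal, whereas filtering the RMS output just smooths an already-rectified, always-positive signal and no longer selects the violin-mode band at all.

#include <math.h>
#include <stdio.h>

/* Toy stand-in for the band-limiting ("BL") filter bank: a one-pole high-pass
   followed by a one-pole low-pass. Coefficients are illustrative only. */
static double bandlimit(double in)
{
    static double prev_in = 0.0, hp = 0.0, lp = 0.0;
    const double a_hp = 0.995, a_lp = 0.05;
    hp = a_hp * (hp + in - prev_in);   /* crude high-pass */
    prev_in = in;
    lp += a_lp * (hp - lp);            /* crude low-pass */
    return lp;
}

/* Band-limited RMS: band-limit the raw signal FIRST, then square and average. */
static double blrms(double in)
{
    static double sqr_avg = 0.0;
    const double alpha = 1.0 / 4096.0;      /* averaging time constant, illustrative */
    double bl = bandlimit(in);
    sqr_avg += alpha * (bl * bl - sqr_avg); /* exponential average of the square */
    return sqrt(sqr_avg);
}

int main(void)
{
    const double pi = 3.141592653589793;
    double r = 0.0;
    for (int n = 0; n < 100000; n++)
        r = blrms(sin(2.0 * pi * 0.03 * n));   /* test tone */
    printf("steady-state BLRMS of test tone: %g\n", r);
    return 0;
}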
TOP LEVEL model changes that were required (and sadly, because of the differences in ESD driver, the list is different for the ITMs, ETMX, and ETMY):
- Modifications to the BIO_DECODE and BIO_ENCODE blocks to gather the appropriate signals for the digital monitors of the HV ESD driver (on ETMX and ETMY)
- Add BIO_DECODE/ENCODE connections for the LVLN ESD driver on ETMX, so we don't have to fix the model once we do install that driver (who am I kidding, I'm sure we'll find a reason to fix the model)
- Add BIO_DECODE/ENCODE connections for the HV driver. I wasn't sure how the LVLN driver handles the analog and digital types of the HV driver (and there's no drawing for it yet), so I just hooked it up like ETMX, steering around the LVLN BIO stuff that's already there.
- Re-organize the IPC receivers, their busses, tags, and flags, such that they're better grouped by functionality.
- Removed redundant EPICS outputs that were capturing the IPC errors.
- Re-arranged all connections to the main library part because of all of the library part rearrangements.
There are several things that I did *not* change because I ran out of time:
- I forgot to change the order of the quadrants of the ESD at ETMY for the LVLN driver that's in place. However, we've been running like this (i.e. with the order of the channels mixed up) for all of ER7, so I feel no need to rush to change the order of these channels. The DC bias channel is right, and otherwise we only use the quadrants as a whole for longitudinal control, so it can wait a little bit longer.
- I have not yet made the changes to the SUSAUX models that were needed to complete E1400232 -- however, the cables for them are not ready anyway, so the signals will just remain the junk they've been.
I've committed all library part models, all top level models, and all MEDM screens associated with these fixes.
One problem that took a while to deal with as a part of IFO recovery was the lack of good .snap values for the OPTICALIGN slider values. I am told that part of this is that the values in the safe.snap files are not updated very often. In particular, they had not been updated since before some of the suspensions were mechanically realigned to center the slider values, so the computer reboots this morning put the optics in very bad places. (We had to hand-trend each slider value and type the values in.)
As a solution, I have created a new .req file that includes all of the OPTICALIGN values from the IFO_Align screen. The .req file (and the corresponding .snap file) lives in /opt/rtcds/userapps/release/sus/h1/burtfiles/OptAlignBurt.req . I have also written scripts to capture new .snap files and restore the .snap file. The idea is that the capture script be run just before maintenance begins, and the restore script be run at the end of maintenance.
To run the capture script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/CaptureOptAlignBurt.sh
To run the restore script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/RestoreOptAlignBurt.sh
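For reference, a BURT .req file like this should just be a list of the PV names to capture, one per line. The channel names below are an illustrative guess at a few of the OPTICALIGN offsets, not a copy of the actual file:
H1:SUS-ITMX_M0_OPTICALIGN_P_OFFSET
H1:SUS-ITMX_M0_OPTICALIGN_Y_OFFSET
H1:SUS-ETMX_M0_OPTICALIGN_P_OFFSET
H1:SUS-ETMX_M0_OPTICALIGN_Y_OFFSET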
----------------
As a side note, the slider values for ITMX, ITMY, ETMX and ETMY have been accepted in the SDF system (made to be monitored, accepted, then un-monitored), so computer reboots should keep us closer, even if we forget to run the above scripts. We should do the same with the other major suspended optics.
Recall that we started writing the IFO Alignment slider values to an hourly burt such that we can easily grab-n-restore best alignment values - see alog 18799 from June 2.
The hourly burts are at:
/ligo/cds/lho/h1/burt/2015
under the appropriate date /h1ifoalignepics.snap
Sorry that no one in the CR recalled this info from last month for you yesterday...
All fifty (50) accumulators were checked for charge today. No accumulator needed charging. Only three accumulators showed a decrease in pressure since the last charge check on 21 April; see T1500280. These were small decreases (a few psi) and likely reflect loss from gauge pulloff (does the uncertainty principle apply?) The acceptable range of 60-93% of operating pressure is quite broad, and the lowest reading today was at 80%.
Given these results, and the reservoir-fluid-level indication of accumulator charge which can be checked with the system pumping, this invasive, system-must-be-off accumulator pressure check could be done just quarterly. As long as the weekly check of reservoir fluid levels shows no decrease, the accumulators can be assumed to be adequately charged. If a weekly check of the reservoir fluid indicates a volume loss, then the accumulators could be checked.
Good to hear that the accumulators are holding well. I like your plan. -Brian
Jenne, Sheila, Evan
We locked at 10 Watts with low noise, and redid the OMC excitations that Koji and I did in alog 17919. We plotted the OMC L excitation against a model with a peak-to-peak motion of 36 um, and the result seems consistent with the reflectivity of 160e-7 that we measured on Friday by exciting the ISI. This is slightly worse than what we measured in April.
We made these excitations with the same amplitudes and frequencies that we used in April, but some of the velocities seem to be smaller. Jenne is working on a more thorough comparison, but it seems that the scatter is better when we are exciting Yaw and Transverse, and a little worse for Longitudinal.
We used a frequency of 0.2 Hz for all excitations.
| DOF | excitation amplitude (at 0.2 Hz) | time (UTC) | Ref |
| OMC L | 20000 | 4:39:30 | 10 |
| T | 20000 | 4:43:51-4:47:00 | 11 |
| V | 20000 | 4:47:30-4:49:20 | 12 |
| P | 2000 | 4:51:38-4:53:20 | 13 |
| Y | 200 | 4:54:00-4:56:20 | 14 |
| R | 2000 | 4:56:47-4:58:00 | 15 |
I'm concerned that the times from the April data for the Longitudinal excitation that Sheila is using aren't quite correct. This means that for the "L" traces we're integrating some "no excitation" time in with our "excitation" time, and using this muddled spectra as the measurement of the OMC scattering.
I have pulled the data from April, and adjusted the start time of each measurement to ensure that the excitation channel was fully on at the start (the [0][0] "time series" trace in DTT), and was still fully on for the last average (the [0][9] "time series" trace). Since I only had to adjust the "L" start time, I think this is the only one that is affected. With this adjustment, I see that the knee frequency goes down for L and T. It stays about the same for P, and is hard to tell (almost no scattering) for Y. The amplitude is a little bit higher for L and P, but not by a lot. Since the knee frequency is directly proportional to the velocity (eq. 4.16, Tobin's thesis), this seems to imply that even though we were actuating with the same amplitude and frequency, the true motion is slower now than in April. Is this because we are also pushing around the weight of the glass shroud? I'm not sure how the glass is mounted.
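For reference, the relation being used here is the standard backscatter fringe formula (stated from memory, consistent with eq. 4.16 of Tobin's thesis, so worth double-checking the factor): a scattering surface moving with velocity v(t) produces a fringe frequency

    f_fringe(t) = 2 |v(t)| / lambda ,

so the knee of the scattering shelf sits near 2 v_max / lambda (lambda = 1064 nm for the main beam). A lower knee frequency at the same drive amplitude and frequency therefore directly implies a smaller true velocity of the scatterer.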
The times that I'm using are as follows:
| Excitation | 16-17 April 2015 (t0 UTC) | 14 July 2015 (t0 UTC) |
| No excitation | 23:33:39 | 04:49:57 |
| L excitation | 23:47:47 | 04:39:30 |
| T excitation | 23:59:00 | 04:43:56 |
| Y excitation | 00:31:00 | 04:55:00 |
| P excitation | 00:24:00 | 04:51:50 |
Another thing to add:
Since June 25 (right after the shroud work was done), and including the time this measurement was done, the OMCR beam diverter has been open, and nobody has closed it.
Though it's not clear if this makes any difference, any comparison should be done with the diverter closed.
Regarding Jenne's comment above, "Is this because we are also pushing around the weight of the glass shroud? I'm not sure how the glass is mounted." - the black glass shroud is mounted to the OMC structure, not the suspended mass. After installation, the ISI was rebalanced and retested.
Just to summarize, we seem to be suffering from several issues at 23+ W:
I was able to get a 30 minute lock at 23.7 W with the following:
Other notes:
We were able to get two more stable locks at 24 W, this time in the full low noise state.
However, Patrick and I found that EY L2 was periodically saturating, with the quadrants having something like 50,000 ct rms, coming primarily from the microseism. So the EY L1/L2 crossover is now increased in the Guardian (the L1 filter gain was 0.16, and now it is 0.3). The rms drive on L2 is now more like 30,000 ct rms. L1 is 6000 ct rms, and L3 is 1000 ct rms.