Hugh and Robert asked that all sensor correction for the corner station be switched to the ITMY STS. I've done this (because interferometry is being impeded by an earthquake) and accepted the changes in the SDF. I assume Robert and Hugh will post an alog shortly describing the need for the change.
Jenne, Evan, Stefan
- Reverted the ramp times in PREP_TR back to Monday's values - this seemed to be more reliable, and not produce any transients at that stage of the script.
- Repeatedly brought the machine up to 24 W, but we still see a hint of the 0.41 Hz CSOFT resonance, and occasionally a yaw gain oscillation at maybe 1.5 Hz.
- The DC oplev PIT on SR3 was stuck on the limiter.
- We also cleaned up the Guardian flow chart. It now has a single flow line. (The previous spider web caused more lock losses than it was worth.)
- Since the 0.41 Hz problem seems to come and go, we decided to kill it with a good CSOFT loop design - using a 2nd UGF at the resonance to damp it. Evan is still testing this filter.
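For context on the "2nd UGF at the resonance" idea: one way to damp a narrow mode is a resonant-gain (peaking) filter that boosts the loop gain in a small band around the mode, giving the loop a second unity-gain crossing there. The sketch below is only an illustration of that technique with scipy; the Q, boost, and sample rate are assumptions, not the actual filter Evan is testing.

```python
import numpy as np
from scipy import signal

def resonant_gain(f0_hz, q, boost_db, fs=2048.0):
    """Peaking filter: ~unity gain away from f0_hz, boosted by boost_db at f0_hz.

    Illustrative only -- f0, Q, boost, and fs here are assumptions, not the
    values in the real CSOFT filter bank.
    """
    w0 = 2.0 * np.pi * f0_hz
    g = 10.0 ** (boost_db / 20.0)
    # Analog prototype: H(s) = (s^2 + (g*w0/Q) s + w0^2) / (s^2 + (w0/Q) s + w0^2)
    # |H(j*w0)| = g, and |H| -> 1 far from w0.
    num = [1.0, g * w0 / q, w0 ** 2]
    den = [1.0, w0 / q, w0 ** 2]
    return signal.bilinear(num, den, fs)

b, a = resonant_gain(0.41, q=10.0, boost_db=20.0)
# Evaluate below, at, and above the 0.41 Hz resonance
f, h = signal.freqz(b, a, worN=[0.05, 0.41, 5.0], fs=2048.0)
print(np.abs(h))  # ~unity off-resonance, ~10x at 0.41 Hz
```

The narrow boost adds phase only near the mode, so the rest of the loop shape (and the original UGF) is left essentially untouched.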
Kiwamu, Stefan, Jenne, Matt, Evan,
We've almost recovered from yesterday's "maintenance". Kiwamu found this morning that the X arm camera servo had been off; I think this was a miscommunication/mistake during maintenance recovery which probably led us into some bad alignment last night. Stefan/Kiwamu/Jenne did some realignment this morning and early afternoon, and we were able to lock with good recycling gain and were stable at 24 Watts.
The gain of the OMC DC PDs was wrong; this was picked up by the SDF, but I am not sure what maintenance activity would have caused that. The ETMY ESD bias had the wrong sign. Stefan added it in SDF.
We were able to lock momentarily at low noise, and lost it for reasons that we don't understand yet, but might have been a slow alignment instability.
About 2 hours ago the wind picked up; we have gusts up to 35-40 mph, forecast to stay this way until 1 am. Our improvement to the offloading of DRMI has certainly helped with windy locking: DRMI locking is still slow, but it does lock and stay locked. Stefan has raised the gains that are used for acquisition, so we can try to get some statistics about DRMI locking times. Now we get to the next windy locking problem, which seems to be the handoff from ALS COMM to the transmitted arm powers for CARM.
The guardian has been having an occasional problem where it cannot access channels (the channels exist and are spelled correctly). One solution has been to reload the guardian a few times; just now I had to reload the DRMI guardian about 10 times. Eventually Stefan paused and reloaded it and the problem went away.
Screen shot attached.
We've also had some more epics freeze incidents today. 4:42:39 UTC Local 17:25:50
08:09 Ken working on GPS antenna on roof
08:46 Peter, Jeff B. to chiller room to look at chiller displays
08:52 Richard to roof
09:02 Peter, Jeff B. back
09:39 Travis to LVEA to look for part
09:45 Rebooted projector0 due to memory leak; Jim B. recreated credentials for seismic DMT
09:49 Travis back
11:26 Richard to DC power mezzanine to look at roof sections
11:32 Jeff B. to cleaning area
11:44 Richard back
12:17 Ken to electronics room to get measurements for drilling hole in building for GPS antenna cabling, then drill hole
12:58 Nutsinee to LVEA to take pictures of CO2 laser chassis
13:04 Nutsinee back
13:30 Ken done
Commissioners working on recovering IFO alignment.
We are seeing large CPU max numbers on the IOP and SUS models at the end stations. In addition ADC errors are showing up on the IOP model, and intermittent Dolphin IPC errors on SEI and ISC receivers. I have just cleared out the warnings so we can see how often these appear.
The alignment settings of the IFO this morning are a bit different than they were according to the hourly burt snap files from ~midnight on Monday night, when locking was ~good. The slider values from Monday's good locking are consistent with the alignment values in hourly snaps from a few days before Monday as well. Attached is the snap file from Monday at 11:10pm in the event the commissioners need to do a complete restore. Commissioners report that it is confusing as to why the ASC systems did not recover the pointing even when the starting point was not quite right. Evan is working on it now.
Most notably:
IM4 is 1400 uRad different in Pitch
IM4 is 200 uRad different in Yaw
MC3 is 100 uRad different in Pitch
MC1 is 80 uRad different in Pitch
PRM is 30 uRad different in Pitch
PRM is 40 uRad different in Yaw
Turns out the IMC slider values changed significantly, causing the IMC to hang at a different alignment. This caused a lot of the trouble we faced over the last day. Once we simply restored Monday's alignment slider values, the IMC mirror positions moved pretty much back to where they were. This meant we also reverted the alignment references back to Monday's values. This includes the following settings:
H1:ALS-X_CAM_ITM_PIT_OFS 256
H1:ALS-X_CAM_ITM_YAW_OFS 340.9
H1:ALS-Y_CAM_ITM_PIT_OFS 303.9
H1:ALS-Y_CAM_ITM_YAW_OFS 433.5
H1:ASC-X_TR_A_PIT_OFFSET 0
H1:ASC-X_TR_A_YAW_OFFSET -0.095
H1:ASC-X_TR_B_PIT_OFFSET -0.11
H1:ASC-X_TR_B_YAW_OFFSET -0.067
H1:ASC-Y_TR_A_PIT_OFFSET -0.128
H1:ASC-Y_TR_A_YAW_OFFSET -0.174
H1:ASC-Y_TR_B_PIT_OFFSET -0.516
H1:ASC-Y_TR_B_YAW_OFFSET -0.1
H1:ASC-POP_A_PIT_OFFSET 0.38
H1:ASC-POP_A_YAW_OFFSET 0.248
H1:ALS-X_QPD_A_PIT_OFFSET 0.2
H1:ALS-X_QPD_A_YAW_OFFSET 0
H1:ALS-X_QPD_B_PIT_OFFSET 0
H1:ALS-X_QPD_B_YAW_OFFSET -0.05
H1:ALS-Y_QPD_A_PIT_OFFSET 0.1
H1:ALS-Y_QPD_A_YAW_OFFSET -0.4
H1:ALS-Y_QPD_B_PIT_OFFSET 0
H1:ALS-Y_QPD_B_YAW_OFFSET 0.15
Sheila, Jeff, Evan
We had repeated locklosses handing off the DARM sensor from ALS DIFF to AS45Q. We changed the guardian so that the handoff happens at a slightly lower CARM offset, and with a different DARM loop gain (we had previously used these settings back in late February). This new CARM offset makes the AS port more unstable during the DARM handoff, but it makes the transition successful.
We were able to make it to resonance on RF DARM, but with a mediocre recycling gain (30 W/W). We spent some time manually steering the ITMs in order to bring the recycling gain up to more than 40 W/W. Then we updated the TMS QPD spot positions and the green alignment references (green QPD offsets and camera positions). It is not clear to us why we had to do this, since we restored all the suspension alignments from before the maintenance work.
We did an initial alignment starting with the new green references. Subsequently, we came into resonance with good recycling gain (>40 W/W) again.
We were able to engage the ASC with these new spot positions. However, at 17 W we saw the same 0.4 Hz resonance that we saw a few days ago, meaning we should not power up further in this configuration.
We redid the dark offsets for the TMS QPDs, since they seemed to be stale.
For now, the DARM handoff has been returned to its old CARM offset. I have left the DARM gain slightly lower than before (80 rather than 125).
The ITM QPD offsets have been reverted to yesterday's values. We are able to engage them as usual in the ENGAGE_ASC state, and they give a good recycling gain. However, at 23 W the interferometer unlocks suddenly after a few minutes. The transmitted arm powers seem slightly less stable than with the new offsets tried above (a slow oscillation with a few-second period can be seen in the arm powers, as well as POP LF), but there is no 0.4 Hz oscillation.
J. Kissel for E. Hall, S. Dwyer, J. Driggers, K. Izumi The IMC is in terrible shape again this morning. Words I got quickly from Evan: "We think that the FSS began oscillating again *during* [full IFO] lock, then the IMC WFS began integrating to a bad place." Obviously the investigation is on-going, but any help from the PSL team in the control room would be appreciated.
J. Kissel, for just about everyone on site
Today was the official start of the so-called "re-locking team." As such, I've folded Ed, Jim (the operators on shift) and Betsy into the routine that I've done in the past for particularly busy Tuesdays and we've made an effort to focus more on speedy recovery from each task. Each part of the following tasks is necessary if we expect to robustly get back to locking ASAP:
- Best effort basis understanding of all planned maintenance activities, to obtain
- an assessment of how the activities impact others
- an assessment of those activities' impact on the IFO
- an assessment of how we'll recover from and what to test after those activities
- Preparing the IFO for maintenance
- Keeping track of the chaos during maintenance
- Beginning regression testing and recovery on as much of the IFO as soon as possible
- Following through with activities, making sure they're closed out, performing regression testing, until the IFO performs as well as it did the night before
- aLOGging anything and everything you can about the day, highlighting the big stopping points and hold ups.
Here's how today went down.
First, refresh your memory with how the plan was supposed to go: LHO aLOG 19600
(All times PDT)
7:45a Ed, Jim, and I prep for maintenance (LHO aLOG 19616)
~8:00a Rick, Jeff, and Jason head into the PSL to do RefCav and PMC Alignment
Fil installs new 9.1 MHz OCXO, and hooks it up to the timing system
Dave, Jim, and Richard go to EX to replace EX SUS front end
Bubba heads to EX for TMDS Piping
Hugh begins checking corner station HEPI accumulator bladder pressures
~10:00a PSL team takes longer than expected, and only gets through RefCav alignment, forgoes PMC alignment until next week (LHO aLOG 19622)
Hugh is finished with Corner Station HEPI, moves on to EX and EY (LHO aLOG 19632)
Fil moves to running BNC cables with Jordan and Vinny near HAM2 and HAM4
Dave reports back that EX went terribly because they got shipped the wrong type of card (they got a DAC not an ADC) for the new PI damping ADC, moves to Y (LHO aLOG 19659)
Recovery 1: Robert needs the IMC recovered for PSL periscope tuning, and it's part of the plan anyway to recover the IMC after PSL work is done, so the Relocking Team recovers corner station SEI, with a focus on HAM2/HAM3 so the IMC can get back up and running
10:15a TCS X LASER trips because of finicky temperature sensor in racks near HAM4, suspected Fil / Vinny / Jordan activity (LHO aLOG 19624)
10:30a Jim still having trouble relocking the IMC (LHO aLOG 19623); we call in Keita, who diagnoses it as an alignment problem (LHO aLOG 19627), likely because the IMC REFL camera was a *little* low, HAM2 had tripped during recovery, and we've seen IMC REFL periscope problems in the past.
11:00a Jeff begins compiling, installing, and restarting ITM models
Jim and Dave finish with EY and EX SUS front end replacement
Recovery 2: Leo measures charge to confirm ETM functionality before front-end model changes; confirms functionality, but finds a sign flip from an out-of-date snap of a non-monitored filter gain
Stefan installs ODC MASTER changes (LHO aLOG 19654)
Jordan, Vinny, Katie perform PEM tap-test calibrations in all VEAs
11:30a Jeff finished with ITMs, moves onto ETMs for SUS model changes
12:00p Kiwamu and Sudarshan begin install of ISS outerloop electronics
Dave finds bug in Jeff's code changes for ITMs that caused front end to stay stalled after startup, fixes them
12:15p IMC does OK for a bit, but continues to putter; Robert, Kiwamu, and Sudarshan are still delayed
Jeff done with end station models, (LHO aLOG 19655)
Dave runs quickly through all other computer reboots and a DAQ restart (LHO aLOG 19659)
Recovery 3: Jenne and Ed begin recovery of end stations and green locking
12:30p Recovery 4: TCS X LASER recovered (LHO aLOG 19624)
PSL trips because a shutter closes (of course this means we lose the IMC), Ed investigates and recovers
12:40p IMC locked, but remains squirrelly
1:00p Jenne and Ed, with some help, realize that alignment offsets and M0 settings for QUADs are bogus because of an out-of-date SDF snap of non-monitored alignment offsets and M0 LOCK filter gains
2:00p Still battling the IMC; Robert calls Rick. Rick and Jeff discover that the real problem with the IMC had been the RefCav realignment causing FSS oscillations all along (LHO aLOG 19631), and alignment was a red herring.
2:11p Recovery 1 complete: stable IMC lock. Robert, Kiwamu, and Sudarshan finally get started
2:52p The first of many models gets killed by the SORT ON SUBSTRING SDF bug, starting with ETMY, further stalling green arm locking recovery (LHO aLOG 19628)
3:30p Diagnosed the problem of the SORT ON SUBSTRING SDF bug, so that stopped (LHO aLOG 19650)
Ed, Jenne, Sheila also get stalled with OFFSET RAMPING BUG (LHO aLOG 19653)
4:00p Jim and Ed's shift is up; Jim has to go, but Ed sticks around for a little, still neck deep in green ALS recovery
4:50p Ed has to go, Jenne and Sheila "take over" though they've already been heavily involved
Recovery 1 ... again: FSS oscillations start again; Sheila has to reduce the FSS gain by 3 dB (LHO aLOG 19641)
6:30p Initial alignment complete, begin lock acquisition attempts.
Sheila / Jenne identify that ALS COMM won't lock because error signal for VCO frequency is identically zero
7:20p Recovery 3 complete. After a call to Dave, we figure out that at 8:00a, during Fil's install of the new 9.1 MHz OCXO, the timing comparator signal from which the COMM (and DIFF) VCO frequencies are derived had been moved to a different port of the timing fanout, foiling the information hard-coded in the Beckhoff PLC that converts the fanout channels to ALS channels. So we reverted the comparator to the right port and moved the oscillator to another port. (LHO aLOG 19646)
Moving on to DRMI lock in the acquisition sequence
9:20p Find bug in ISC_LOCK guardian, a failing conditional statement from code installed a few days ago
9:38p Get to RESONANCE (i.e. full IFO RF locked), find that the ITMY Bounce Mode is EXTREMELY rung up. Trace it down to bug in ITMY's Bounce / Roll / Violin mode error signal mapping (LHO aLOG 19657)
10:00p ITMY SUS and SEI recovered,
10:30p Evan / Sheila Continue to wait for DRMI to lock, having trouble and confusion about PRM alignment, ASC start up / lock acquisition,
|
V
1:30a Resonance! Finally! Can begin to damp bounce and roll modes.
Still haven't fully tested and completed Recovery 2 as of this entry, because we haven't confirmed that we can get to ETMY low-noise lock.
I attach a panoramic of the white board that Betsy took a picture of yesterday (see LHO aLOG 19603), just to show how the best laid plans begin to fall to pieces as reality sets in throughout the maintenance day.
So, although we got a ton done (as shown in LHO aLOG 19600, there were double the number of tasks that I mention here, but they didn't come up because they didn't *end up* having an impact on immediate recovery), including all of Daniel's major tasks (from LHO aLOG 19451), we still lost a large fraction of time to the following:
- Much belated diagnosis of problems resulting from PSL alignment activity
- New/green users rediscovering the problems with computer restarts, because we still don't have a good solution for holding alignment offsets and other important unmonitored SDF filter bank channels through computer reboots
- Much belated diagnosis of the impact of moving the timing comparator from PORT 11 of the timing fanout
- Brand new problems with SDF monitoring system
- Much belated diagnosis of new QUAD software bugs
- The "relocking team" fizzling out after 4-5:00p.
This being said, I don't think this is any different from any other heavy maintenance day that I've planned / coordinated to this level of detail ahead of time, e.g. LHO aLOG 16165, LHO aLOG 10849, etc.
One problem that took a while to deal with as a part of IFO recovery was the lack of good .snap values for the OPTICALIGN slider values. I am told that part of this is that the values in the safe.snap files are not updated very often. In particular, they had not been updated since before some of the suspensions were mechanically realigned to center the slider values, so the computer reboots this morning put the optics in very bad places. (We had to hand-trend each slider value and type the values in.)
As a solution, I have created a new .req file that includes all of the OPTICALIGN values from the IFO_Align screen. The .req file (and the corresponding .snap file) lives in /opt/rtcds/userapps/release/sus/h1/burtfiles/OptAlignBurt.req . I have also written scripts to capture new .snap files and to restore the .snap file. The idea is that the capture script be run just before maintenance begins, and the restore script be run at the end of maintenance.
To run the capture script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/CaptureOptAlignBurt.sh
To run the restore script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/RestoreOptAlignBurt.sh
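For reference, a BURT .req file is essentially a list of channel names, one per line. The sketch below is a hypothetical illustration of generating such a list for OPTICALIGN sliders; the optic list and the M0/M1 stage names are assumptions for illustration (top-stage names differ between suspension types), and this is not the contents of the actual scripts above.

```python
# Hypothetical sketch: build the channel list for an OPTICALIGN .req file.
# Optic names and stage labels below are illustrative assumptions only
# (e.g. M0 for quads, M1 for triples); the real file may differ.
SUS_STAGES = {
    "ITMX": "M0", "ITMY": "M0", "ETMX": "M0", "ETMY": "M0",  # quads
    "PRM": "M1", "SRM": "M1", "MC1": "M1", "MC2": "M1",      # triples
}

def opticalign_req_lines(stages):
    """Return one channel name per pitch/yaw offset, sorted by optic."""
    lines = []
    for optic, stage in sorted(stages.items()):
        for dof in ("P", "Y"):
            lines.append(f"H1:SUS-{optic}_{stage}_OPTICALIGN_{dof}_OFFSET")
    return lines

print("\n".join(opticalign_req_lines(SUS_STAGES)))
```

Generating the list programmatically (rather than by hand) makes it easy to keep the .req file in sync when optics are added to the capture/restore routine.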
----------------
As a side note, the slider values for ITMX, ITMY, ETMX and ETMY have been accepted in the SDF system (made to be monitored, accepted, then un-monitored), so computer reboots should keep us closer, even if we forget to run the above scripts. We should do the same with the other major suspended optics.
Recall that we started writing the IFO Alignment slider values to an hourly burt such that we can easily grab-n-restore best alignment values - see alog 18799 from June 2.
The hourly burts are at:
/ligo/cds/lho/h1/burt/2015
under the appropriate date /h1ifoalignepics.snap
Sorry that no one in the CR recalled this info from last month for you yesterday...
All fifty (50) accumulators were checked for charge today. No accumulator needed charging. Only three accumulators showed a decrease in pressure since the last charge check on 21 April; see T1500280. These were small decreases (a few psi) and likely reflect loss from gauge pull-off (does the uncertainty principle apply?). The acceptable range of 60-93% of operating pressure is quite broad, and the lowest reading today was at 80%.
Given these results, and the reservoir-fluid-level indication of accumulator charge which can be checked with the system pumping, this invasive check, which requires the system to be off, could be done just quarterly. As long as the weekly check of reservoir fluid levels shows no decrease, the accumulators can be assumed to be adequately charged. If a weekly check of the reservoir fluid indicates a volume loss, then the accumulators could be checked.
Good to hear that the accumulators are holding well. I like your plan. -Brian
CO2X laser RTD sensor alarm (H1:TCS-ITMX_CO2_INTRLK_RTD_OR_IR_ALRM) tripped at 14 Jul 15 17:15:00 UTC this morning (10:15am), shutting off the CO2X laser. Folks were pulling cables near HAM4 this morning, which is probably why it tripped. CO2X laser was restarted at 19:30:00 UTC, and is now running normally again.
Just adding some words, parroting what Elli told me: this temperature sensor (RTD) is nominally supposed to be on "the" viewport (HAM4? Some BSC? The Injection port for the laser? Dunno). This sensor is not mounted on the viewport currently, it's mounted on "the" chassis, which (I believe) resides in the TCS remote racks by HAM4. She's seen this in the past: even looking at this sensor wrong (my words, not hers) while you're cabling / electronics-ing near HAM4, this sensor trips. As she says, this was noticed and recovered by her before it became an issue with the IFO because recovery went much slower than anticipated.
If I understand correctly which sensor you're talking about, then yes, this should be on the viewport (the BSC viewport through which the laser is injected). The viewport sensor though is an IR sensor, but for some parts of the wiring in the control box (and thus on the MEDM screen) the IR sensor and RTD sensor are wired in together, making it hard to know which one caused the trip. It's supposed to monitor scattered light coming off that viewport. It is very sensitive and can be affected by humans standing near it, light being shone onto it (one of the ways to set the trip level is to hold a lighter up to it), maybe also heat from electronics, etc. So just sitting in the rack I am not at all surprised that it is tripping all the time and causing grief.
My suggestion is to try to get this installed on the viewport if you can, otherwise if you can’t and it really is causing problems all the time, there is a pot inside the control box which you can alter to change the level at which it trips.
Jenne, Sheila, Evan
We locked at 10Watts with low noise, and redid the OMC excitations that Koji and I did in alog 17919. We plotted the OMC L excitation against a model with a peak to peak motion of 36 um, and the result seems consistent with a reflectivity of 160e-7 that we measured on Friday by exciting the ISI. This is slightly worse than what we measured in April.
We made these excitations with the same amplitudes and frequencies that we used in April, but some of the velocities seem to be smaller. Jenne is working on a more thorough comparison, but it seems that the scatter is better when we are exciting yaw and transverse, if a little worse for longitudinal.
We used a frequency of 0.2 Hz for all excitations.
| DOF | excitation amplitude (0.2Hz) | time | Ref |
| OMC L | 20000 | 4:39:30 | 10 |
| T | 20000 | 4:43:51-4:47:00 | 11 |
| V | 20000 | 4:47:30-4:49:20 | 12 |
| P | 2000 | 4:51:38-4:53:20 | 13 |
| Y | 200 | 4:54:00-4:56:20 | 14 |
| R | 2000 | 4:56:47-4:58:00 | 15 |
I'm concerned that the times from the April data for the longitudinal excitation that Sheila is using aren't quite correct. This means that for the "L" traces we're integrating some "no excitation" time in with our "excitation" time, and using this muddled spectrum as the measurement of the OMC scattering.
I have pulled the data from April, and adjusted the start time of each measurement to ensure that the excitation channel was fully on at the start (the [0][0] "time series" trace in DTT), and was still fully on for the last average (the [0][9] "time series" trace). Since I only had to adjust the "L" start time, I think this is the only one that is affected. With this adjustment, I see that the knee frequency goes down for L and T. It stays about the same for P, and is hard to tell (almost no scattering) for Y. The amplitude is a little bit higher for L and P, but not by a lot. Since the knee frequency is directly proportional to the velocity (eq. 4.16, Tobin's thesis), this seems to imply that even though we were actuating with the same amplitude and frequency, the true motion is slower now than in April. Is this because we are also pushing around the weight of the glass shroud? I'm not sure how the glass is mounted.
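As a rough numerical check of the velocity argument above: for a sinusoidal excitation, the peak velocity is 2*pi*f*A, and the scattered-light fringe (knee) frequency scales with that velocity (the text cites eq. 4.16 of Tobin's thesis; the specific relation f_knee = 2*v/lambda used below is the standard fringe-rate formula and is an assumption here, not quoted from the log). Using the 36 um pk-pk, 0.2 Hz OMC L excitation quoted above:

```python
import math

# Illustrative estimate of the scattered-light knee frequency.
# Assumes the standard fringe-rate relation f_knee = 2 * v_max / lambda;
# input numbers are the 36 um pk-pk, 0.2 Hz L excitation from the log.
wavelength = 1.064e-6      # m, main laser wavelength
f_exc = 0.2                # Hz, excitation frequency
amp = 36e-6 / 2.0          # m, amplitude (half the pk-pk motion)

v_max = 2.0 * math.pi * f_exc * amp   # peak velocity of the scatterer
f_knee = 2.0 * v_max / wavelength     # maximum fringe frequency
print(f"v_max = {v_max * 1e6:.1f} um/s, f_knee = {f_knee:.1f} Hz")
```

Since f_knee is linear in v_max, a measured knee below this estimate would indeed indicate that the true motion is slower than the commanded excitation implies, consistent with Jenne's observation.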
The times that I'm using are as follows:
| 16-17 April 2015 (t0 UTC) | 14 July 2015 (t0 UTC) | |
| No excitation | 23:33:39 | 04:49:57 |
| L excitation | 23:47:47 | 04:39:30 |
| T excitation | 23:59:00 | 04:43:56 |
| Y excitation | 00:31:00 | 04:55:00 |
| P excitation | 00:24:00 | 04:51:50 |
Another thing to add:
Since June 25 (right after the shroud work was done), and including the time this measurement was done, the OMCR beam diverter has been open and nobody cared to close it.
Though it's not clear if this makes any difference, any comparison should be done with the diverter closed.
Regarding Jenne's comment above, "Is this because we are also pushing around the weight of the glass shroud? I'm not sure how the glass is mounted." - the black glass shroud is mounted to the OMC structure, not the suspended mass. After installation, the ISI was rebalanced and retested.
Just to summarize, we seem to be suffering from several issues at 23+ W:
I was able to get a 30 minute lock at 23.7 W with the following:
Other notes:
We were able to get two more stable locks at 24 W, this time in the full low noise state.
However, Patrick and I found that EY L2 was periodically saturating, with the quadrants having something like 50,000 ct rms, coming primarily from the microseism. So the EY L1/L2 crossover is now increased in the Guardian (the L1 filter gain was 0.16, and now it is 0.3). The rms drive on L2 is now more like 30,000 ct rms. L1 is 6000 ct rms, and L3 is 1000 ct rms.