For anyone who is interested in classifying locklosses, or in earthquakes, here is an example of a full low noise lock broken by an earthquake.
Hugh and Robert asked that all sensor correction for the corner station be switched to the ITMY STS. I've done this (because interferometry is being impeded by an earthquake) and accepted the changes in the SDF. I'm assuming Robert and Hugh will post an alog shortly describing the need for the change.
Jenne, Evan, Stefan

- Reverted the ramp times in PREP_TR back to Monday's values; this seemed to be more reliable and did not produce any transients at that stage of the script.
- Repeatedly brought the machine up to 24 W, but we still see a hint of the 0.41 Hz CSOFT resonance, and occasionally a YAW gain oscillation at maybe 1.5 Hz.
- The DC oplev PIT on SR3 was stuck on the limiter.
- We also cleaned up the Guardian flow chart. It now has a single flow line. (The previous spider web caused more lock losses than it was worth.)
- Since the 0.41 Hz problem seems to come and go, we decided to kill it with a good CSOFT loop design, using a second UGF at the resonance to damp it. Evan is still testing this filter; a generic sketch of the filter shape is below.
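Purely as an illustration of the "second UGF at the resonance" idea, and not Evan's actual filter, here is a minimal sketch of a resonant-gain stage centered at 0.41 Hz; the damping ratios and peak gain are made-up placeholders.

```python
# Sketch of a resonant-gain ("boost") stage centered at 0.41 Hz, the generic
# shape one might use to push the CSOFT open-loop gain back above unity at the
# resonance so it gets actively damped. All numbers are illustrative.
import numpy as np
import scipy.signal as sig

f0 = 0.41                            # resonance frequency to damp [Hz]
w0 = 2 * np.pi * f0
zeta_zero, zeta_pole = 0.7, 0.02     # peak gain ~ zeta_zero / zeta_pole = 35

# H(s) = (s^2 + 2*zeta_zero*w0*s + w0^2) / (s^2 + 2*zeta_pole*w0*s + w0^2)
boost = sig.TransferFunction(
    [1, 2 * zeta_zero * w0, w0**2],
    [1, 2 * zeta_pole * w0, w0**2],
)

# Check how much gain this stage adds around 0.41 Hz
freqs = np.logspace(-1, 1, 500)                    # 0.1 Hz to 10 Hz
_, mag, _ = sig.bode(boost, w=2 * np.pi * freqs)   # magnitude in dB
print("peak boost: %.1f dB near %.2f Hz" % (mag.max(), freqs[np.argmax(mag)]))
```

Whether a design like this actually damps the mode depends on the phase of the full CSOFT open-loop transfer function around the new unity-gain crossing, which is presumably what Evan is checking.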
Just to summarize, we seem to be suffering from several issues at 23+ W:
I was able to get a 30 minute lock at 23.7 W with the following:
Other notes:
We were able to get two more stable locks at 24 W, this time in the full low noise state.
However, Patrick and I found that EY L2 was periodically saturating, with the quadrants having something like 50,000 ct rms, coming primarily from the microseism. So the EY L1/L2 crossover is now increased in the Guardian (the L1 filter gain was 0.16, and now it is 0.3). The rms drive on L2 is now more like 30,000 ct rms. L1 is 6000 ct rms, and L3 is 1000 ct rms.
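For reference, a change like this is just a single EPICS gain write; a minimal stand-alone sketch is below. The channel name is my guess at the relevant L1 lock filter gain and has not been verified.

```python
# Hypothetical sketch of raising the EY L1 lock-filter gain to move the L1/L2
# crossover up, as described above. The channel name is an assumption.
from epics import caget, caput

L1_GAIN = 'H1:SUS-ETMY_L1_LOCK_L_GAIN'   # assumed channel name, not verified

print('old gain:', caget(L1_GAIN))       # expect ~0.16 before the change
caput(L1_GAIN, 0.3, wait=True)           # the new value now set in the Guardian
print('new gain:', caget(L1_GAIN))
```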
Kiwamu, Stefan, Jenne, Matt, Evan,
We've almost recovered from yesterday's "maintenance". Kiwamu found this morning that the X arm camera servo had been off; I think this was a miscommunication/mistake during maintenance recovery, which probably led us into some bad alignment last night. Stefan/Kiwamu/Jenne did some realignment this morning and early afternoon, and we were able to lock with good recycling gain and were stable at 24 W.
The gain of the OMC DC PDs was wrong; this was picked up by the SDF, but I am not sure what maintenance activity would have caused that. The ETMY ESD bias had the wrong sign; Stefan added it in SDF.
We were able to lock momentarily at low noise, and lost it for reasons that we don't yet understand, but which might have been a slow alignment instability.
About 2 hours ago the wind picked up; we have gusts up to 35-40 mph, forecast to stay this way until 1 am. Our improvement to the offloading of DRMI has certainly helped with windy locking: DRMI locking is still slow, but it does lock and stay locked. Stefan has raised the gains that are used for acquisition, so we can try to get some statistics about DRMI locking times. Now we get to the next windy-locking problem, which seems to be the handoff from ALS COMM to the transmitted arm powers for CARM.
The Guardian has been having an occasional problem where it cannot access channels (the channels exist and are spelled correctly). One solution has been to reload the Guardian a few times; just now I had to reload the DRMI Guardian about 10 times. Eventually Stefan paused and reloaded it and the problem went away.
Screen shot attached.
We've also had some more EPICS freeze incidents today: 4:42:39 UTC, 17:25:50 local.
08:09 Ken working on GPS antenna on roof
08:46 Peter, Jeff B. to chiller room to look at chiller displays
08:52 Richard to roof
09:02 Peter, Jeff B. back
09:39 Travis to LVEA to look for part
09:45 Rebooted projector0 due to memory leak; Jim B. recreated credentials for seismic DMT
09:49 Travis back
11:26 Richard to DC power mezzanine to look at roof sections
11:32 Jeff B. to cleaning area
11:44 Richard back
12:17 Ken to electronics room to get measurements for drilling hole in building for GPS antenna cabling, then drill hole
12:58 Nutsinee to LVEA to take pictures of CO2 laser chassis
13:04 Nutsinee back
13:30 Ken done
Commissioners working on recovering IFO alignment.
We are seeing large CPU max numbers on the IOP and SUS models at the end stations. In addition ADC errors are showing up on the IOP model, and intermittent Dolphin IPC errors on SEI and ISC receivers. I have just cleared out the warnings so we can see how often these appear.
The alignment settings of the IFO this morning are a bit different from what they were according to the hourly burt snap files from ~midnight on Monday night, when locking was ~good. The slider values from Monday's good locking are consistent with the alignment values in hourly snaps from a few days before Monday as well. Attached is the snap file from Monday at 11:10 pm in the event the commissioners need to do a complete restore. Commissioners report that it is confusing why the ASC systems did not recover the pointing even when the starting point was not quite right. Evan is working on it now.
Most notably:
IM4 is 1400 uRad different in Pitch
IM4 is 200 uRad different in Yaw
MC3 is 100 uRad different in Pitch
MC1 is 80 uRad different in Pitch
PRM is 30 uRad different in Pitch
PRM is 40 uRad different in Yaw
Turns out the IMC slider values changed significantly, causing the IMC to hang at a different place. This caused a lot of the trouble we faced over the last day. Once we simply restored Monday's alignment slider values, the IMC mirror positions moved pretty much back to where they were. This meant we also reverted the alignment references back to Monday's values. This includes the following settings:

H1:ALS-X_CAM_ITM_PIT_OFS 256
H1:ALS-X_CAM_ITM_YAW_OFS 340.9
H1:ALS-Y_CAM_ITM_PIT_OFS 303.9
H1:ALS-Y_CAM_ITM_YAW_OFS 433.5
H1:ASC-X_TR_A_PIT_OFFSET 0
H1:ASC-X_TR_A_YAW_OFFSET -0.095
H1:ASC-X_TR_B_PIT_OFFSET -0.11
H1:ASC-X_TR_B_YAW_OFFSET -0.067
H1:ASC-Y_TR_A_PIT_OFFSET -0.128
H1:ASC-Y_TR_A_YAW_OFFSET -0.174
H1:ASC-Y_TR_B_PIT_OFFSET -0.516
H1:ASC-Y_TR_B_YAW_OFFSET -0.1
H1:ASC-POP_A_PIT_OFFSET 0.38
H1:ASC-POP_A_YAW_OFFSET 0.248
H1:ALS-X_QPD_A_PIT_OFFSET 0.2
H1:ALS-X_QPD_A_YAW_OFFSET 0
H1:ALS-X_QPD_B_PIT_OFFSET 0
H1:ALS-X_QPD_B_YAW_OFFSET -0.05
H1:ALS-Y_QPD_A_PIT_OFFSET 0.1
H1:ALS-Y_QPD_A_YAW_OFFSET -0.4
H1:ALS-Y_QPD_B_PIT_OFFSET 0
H1:ALS-Y_QPD_B_YAW_OFFSET 0.15
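For the record, restoring these references just means writing the values above back to their EPICS channels; a minimal sketch of that (assuming pyepics write access, and using exactly the channel/value pairs listed) is:

```python
# Sketch: write the reverted (Monday) alignment-reference values back to EPICS.
# Channel names and values are the ones listed above; pyepics is assumed.
from epics import caput

monday_refs = {
    'H1:ALS-X_CAM_ITM_PIT_OFS': 256,    'H1:ALS-X_CAM_ITM_YAW_OFS': 340.9,
    'H1:ALS-Y_CAM_ITM_PIT_OFS': 303.9,  'H1:ALS-Y_CAM_ITM_YAW_OFS': 433.5,
    'H1:ASC-X_TR_A_PIT_OFFSET': 0,      'H1:ASC-X_TR_A_YAW_OFFSET': -0.095,
    'H1:ASC-X_TR_B_PIT_OFFSET': -0.11,  'H1:ASC-X_TR_B_YAW_OFFSET': -0.067,
    'H1:ASC-Y_TR_A_PIT_OFFSET': -0.128, 'H1:ASC-Y_TR_A_YAW_OFFSET': -0.174,
    'H1:ASC-Y_TR_B_PIT_OFFSET': -0.516, 'H1:ASC-Y_TR_B_YAW_OFFSET': -0.1,
    'H1:ASC-POP_A_PIT_OFFSET': 0.38,    'H1:ASC-POP_A_YAW_OFFSET': 0.248,
    'H1:ALS-X_QPD_A_PIT_OFFSET': 0.2,   'H1:ALS-X_QPD_A_YAW_OFFSET': 0,
    'H1:ALS-X_QPD_B_PIT_OFFSET': 0,     'H1:ALS-X_QPD_B_YAW_OFFSET': -0.05,
    'H1:ALS-Y_QPD_A_PIT_OFFSET': 0.1,   'H1:ALS-Y_QPD_A_YAW_OFFSET': -0.4,
    'H1:ALS-Y_QPD_B_PIT_OFFSET': 0,     'H1:ALS-Y_QPD_B_YAW_OFFSET': 0.15,
}

for pv, value in monday_refs.items():
    caput(pv, value, wait=True)
    print('restored %s -> %s' % (pv, value))
```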
Sheila, Jeff, Evan
We had repeated locklosses handing off the DARM sensor from ALS DIFF to AS45Q. We changed the guardian so that the handoff happens at a slightly lower CARM offset, and with a different DARM loop gain (we had previously used these settings back in late February). This new CARM offset makes the AS port more unstable during the DARM handoff, but it makes the transition successful.
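As an illustration only, the handoff change boils down to: set the DARM gain to the value we want for the transition, step CARM to the slightly smaller offset, and then swap the DARM error signal from ALS DIFF to AS45Q. The sketch below uses placeholder channel names and placeholder offset/gain/timing values; it is not the actual Guardian code.

```python
# Hypothetical sketch of the ALS DIFF -> AS45Q DARM handoff sequence described
# above. Channel names, offsets, gains, and wait times are all placeholders.
import time
from epics import caput

caput('H1:LSC-DARM_GAIN', 80)            # placeholder transition gain
caput('H1:LSC-CARM_OFFSET', -40e-12)     # placeholder "slightly lower" CARM offset
time.sleep(5)                            # let the offset step settle

# Swap the DARM error signal: AS45Q in, ALS DIFF out (placeholder matrix elements)
caput('H1:LSC-INPUT_MTRX_DARM_AS45Q', 1.0)
caput('H1:LSC-INPUT_MTRX_DARM_ALSDIFF', 0.0)
```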
We were able to make it to resonance on RF DARM, but with a mediocre recycling gain (30 W/W). We spent some time manually steering the ITMs in order to bring the recycling gain up to more than 40 W/W. Then we updated the TMS QPD spot positions and the green alignment references (green QPD offsets and camera positions). It is not clear to us why we had to do this, since we restored all the suspension alignments from before the maintenance work.
We did an initial alignment starting with the new green references. Subsequently, we came into resonance with good recycling gain (>40 W/W) again.
We were able to engage the ASC with these new spot positions. However, at 17 W we saw the same 0.4 Hz resonance that we saw a few days ago, meaning we should not power up further in this configuration.
We redid the dark offsets for the TMS QPDs, since they seemed to be stale.
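For context, redoing a dark offset is conceptually simple: with no light on the QPD, average each segment for a while and store the negated mean as that segment's offset. A sketch under that assumption follows; the channel names are illustrative, not the real TMS QPD channels.

```python
# Hypothetical sketch of remeasuring QPD dark offsets: with the beam off,
# average each segment and write the negated mean as the offset.
# Channel names are illustrative placeholders.
import time
import numpy as np
from epics import caget, caput

SEGMENTS = ['H1:ALS-X_TR_A_SEG1', 'H1:ALS-X_TR_A_SEG2',
            'H1:ALS-X_TR_A_SEG3', 'H1:ALS-X_TR_A_SEG4']

for seg in SEGMENTS:
    samples = []
    for _ in range(100):                      # ~10 s of slow-channel samples
        samples.append(caget(seg + '_INMON'))
        time.sleep(0.1)
    caput(seg + '_OFFSET', -np.mean(samples))
    print('%s dark offset set to %.3f' % (seg, -np.mean(samples)))
```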
For now, the DARM handoff has been returned to its old CARM offset. I have left the DARM gain slightly lower than before (80 rather than 125).
The ITM QPD offsets have been reverted to yesterday's values. We are able to engage them as usual in the ENGAGE_ASC state, and they give a good recycling gain. However, at 23 W the interferometer unlocks suddenly after a few minutes. The transmitted arm powers seem slightly less stable than with the new offsets tried above (a slow oscillation with a few-second period can be seen in the arm powers, as well as POP LF), but there is no 0.4 Hz oscillation.
J. Kissel for E. Hall, S. Dwyer, J. Driggers, K. Izumi

The IMC is in terrible shape again this morning. Words I got quickly from Evan: "We think that the FSS began oscillating again *during* [full IFO] lock, then the IMC WFS began integrating to a bad place." Obviously the investigation is ongoing, but any help from the PSL team in the control room would be appreciated.
One problem that took a while to deal with as a part of IFO recovery was the lack of good .snap values for the OPTICALIGN slider values. I am told that part of this is that the values in the safe.snap files are not updated very often. In particular, they had not been updated since before some of the suspensions were mechanically realigned to center the slider values, so the computer reboots this morning put the optics in very bad places. (We had to hand-trend each slider value and type the values in.)
As a solution, I have created a new .req file that includes all of the OPTICALIGN values from the IFO_Align screen. The .req file (and the corresponding .snap file) lives in /opt/rtcds/userapps/release/sus/h1/burtfiles/OptAlignBurt.req. I have also written scripts to capture new .snap files and to restore from the .snap file. The idea is that the capture script is run just before maintenance begins, and the restore script is run at the end of maintenance.
To run the capture script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/CaptureOptAlignBurt.sh
To run the restore script, in a terminal paste the following:
/opt/rtcds/userapps/release/sus/h1/scripts/RestoreOptAlignBurt.sh
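The real logic lives in the .sh scripts above; purely as a hedged illustration of what capture and restore amount to (read every PV named in the request file and save name/value pairs, or write a saved snapshot back), something like the following would do an equivalent job. The file names here are placeholders, not the installed paths.

```python
# Minimal stand-in for the capture/restore idea: snapshot the PVs named in a
# request file, or write a saved snapshot back. Not the site scripts; the file
# names are placeholders and pyepics is assumed.
import sys
from epics import caget, caput

REQ_FILE = 'OptAlignBurt.req'    # one PV name per line (e.g. the OPTICALIGN sliders)
SNAP_FILE = 'OptAlignBurt.snap'  # "PVNAME VALUE" per line

def capture():
    with open(REQ_FILE) as req, open(SNAP_FILE, 'w') as snap:
        for line in req:
            pv = line.strip()
            if pv:
                snap.write('%s %s\n' % (pv, caget(pv)))

def restore():
    with open(SNAP_FILE) as snap:
        for line in snap:
            pv, value = line.split()
            caput(pv, float(value), wait=True)

if __name__ == '__main__':
    restore() if 'restore' in sys.argv else capture()
```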
----------------
As a side note, the slider values for ITMX, ITMY, ETMX and ETMY have been accepted in the SDF system (made to be monitored, accepted, then un-monitored), so computer reboots should keep us closer, even if we forget to run the above scripts. We should do the same with the other major suspended optics.
Recall that we started writing the IFO Alignment slider values to an hourly burt so that we can easily grab-n-restore best alignment values - see alog 18799 from June 2.
The hourly burts are at:
/ligo/cds/lho/h1/burt/2015
under the appropriate date directory, as h1ifoalignepics.snap
Sorry that no one in the CR recalled this info from last month for you yesterday...
All fifty (50) accumulators were checked for charge today. No accumulator needed charging. Only three accumulators showed a decrease in pressure since the last charge check on 21 April; see T1500280. These were small decreases (a few psi) and likely reflect loss from gauge pull-off (does the uncertainty principle apply?). The acceptable range of 60-93% of operating pressure is quite broad, and the lowest reading today was at 80%.
Given these results, and the fact that accumulator charge is also indicated by the reservoir fluid level, which can be checked with the system pumping, this invasive pressure check (which requires the system to be off) could be done just quarterly. As long as the weekly check of reservoir fluid levels shows no decrease, the accumulators can be assumed to be adequately charged. If a weekly check of the reservoir fluid indicates a volume loss, then the accumulators could be checked.
Good to hear that the accumulators are holding well. I like your plan. -Brian
CO2X laser RTD sensor alarm (H1:TCS-ITMX_CO2_INTRLK_RTD_OR_IR_ALRM) tripped at 14 Jul 15 17:15:00 UTC this morning (10:15am), shutting off the CO2X laser. Folks were pulling cables near HAM4 this morning, which is probably why it tripped. CO2X laser was restarted at 19:30:00 UTC, and is now running normally again.
Just adding some words, parroting what Elli told me: this temperature sensor (RTD) is nominally supposed to be on "the" viewport (HAM4? Some BSC? The Injection port for the laser? Dunno). This sensor is not mounted on the viewport currently, it's mounted on "the" chassis, which (I believe) resides in the TCS remote racks by HAM4. She's seen this in the past: even looking at this sensor wrong (my words, not hers) while you're cabling / electronics-ing near HAM4, this sensor trips. As she says, this was noticed and recovered by her before it became an issue with the IFO because recovery went much slower than anticipated.
If I understand correctly which sensor you're talking about, then yes, this should be on the viewport (the BSC viewport through which the laser is injected). The viewport sensor, though, is an IR sensor, but for some parts of the wiring in the control box (and thus on the MEDM screen) the IR sensor and RTD sensor are wired in together, making it hard to know which one caused the trip. It's supposed to monitor scattered light coming off that viewport. It is very sensitive and can be affected by humans standing near it, light being shone onto it (one of the ways to set the trip level is to hold a lighter up to it), maybe also heat from electronics, etc. So just sitting in the rack, I am not at all surprised that it is tripping all the time and causing grief.
My suggestion is to try to get this installed on the viewport if you can; otherwise, if you can't and it really is causing problems all the time, there is a pot inside the control box which you can adjust to change the level at which it trips.
Jenne, Sheila, Evan
We locked at 10 W with low noise and redid the OMC excitations that Koji and I did in alog 17919. We plotted the OMC L excitation against a model with a peak-to-peak motion of 36 um, and the result seems consistent with the reflectivity of 160e-7 that we measured on Friday by exciting the ISI. This is slightly worse than what we measured in April.
We made these excitations with the same amplitudes and frequencies that we used in April, but some of the velocities seem to be smaller. Jenne is working on a more thorough comparison, but it seems that the scatter is better when we are exciting Yaw and Transverse, if a little worse for Longitudinal.
We used a frequency of 0.2 Hz for all excitations.
| DOF | Excitation amplitude (at 0.2 Hz) | Time (UTC) | Ref |
| OMC L | 20000 | 4:39:30 | 10 |
| T | 20000 | 4:43:51-4:47:00 | 11 |
| V | 20000 | 4:47:30-4:49:20 | 12 |
| P | 2000 | 4:51:38-4:53:20 | 13 |
| Y | 200 | 4:54:00-4:56:20 | 14 |
| R | 2000 | 4:56:47-4:58:00 | 15 |
I'm concerned that the times from the April data that Sheila is using for the Longitudinal excitation aren't quite correct. This means that for the "L" traces we're integrating some "no excitation" time in with our "excitation" time, and using this muddled spectrum as the measurement of the OMC scattering.
I have pulled the data from April, and adjusted the start time of each measurement to ensure that the excitation channel was fully on at the start (the [0][0] "time series" trace in DTT), and was still fully on for the last average (the [0][9] "time series" trace). Since I only had to adjust the "L" start time, I think this is the only one that is affected. With this adjustment, I see that the knee frequency goes down for L and T. It stays about the same for P, and is hard to tell (almost no scattering) for Y. The amplitude is a little bit higher for L and P, but not by a lot. Since the knee frequency is directly proportional to the velocity (eq. 4.16, Tobin's thesis), this seems to imply that even though we were actuating with the same amplitude and frequency, the true motion is slower now than in April. Is this because we are also pushing around the weight of the glass shroud? I'm not sure how the glass is mounted.
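For reference, the proportionality being invoked is the usual scattered-light fringe relation (which I take to be the one cited from Tobin's thesis): for a scatterer moving sinusoidally, the knee of the upconverted noise sits at the maximum fringe frequency,

$$ f_{\mathrm{knee}} \simeq \frac{2\,v_{\mathrm{max}}}{\lambda}, \qquad v_{\mathrm{max}} = 2\pi f_{\mathrm{exc}}\,\frac{\Delta x_{\mathrm{pp}}}{2}, $$

so at fixed wavelength the knee tracks the velocity directly. As a rough worked number, taking the 0.2 Hz drive and the 36 um peak-to-peak motion quoted above, with lambda = 1064 nm, gives v_max ~ 2*pi * 0.2 Hz * 18 um ~ 23 um/s and f_knee ~ 42 Hz; a lower observed knee at the same nominal drive would then imply a smaller true velocity, as suggested above.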
The times that I'm using are as follows:
| Excitation | 16-17 April 2015 (t0 UTC) | 14 July 2015 (t0 UTC) |
| No excitation | 23:33:39 | 04:49:57 |
| L excitation | 23:47:47 | 04:39:30 |
| T excitation | 23:59:00 | 04:43:56 |
| Y excitation | 00:31:00 | 04:55:00 |
| P excitation | 00:24:00 | 04:51:50 |
Another thing to add:
Since June 25 (right after the shroud work was done), and including the time this measurement was done, the OMCR beam diverter has been open and nobody closed it.
Though it's not clear whether this makes any difference, any comparison should be done with the diverter closed.
Regarding Jenne's comment above, "Is this because we are also pushing around the weight of the glass shroud? I'm not sure how the glass is mounted." - the black glass shroud is mounted to the OMC structure, not the suspended mass. After installation, the ISI was rebalanced and retested.