I will be doing some work on the connection between CDS and GC this morning which will result in some brief interruption to access to CDS from the outside world (and vice versa). This includes any services hosted at lhocds.ligo-wa.caltech.edu. A follow up entry will be posted when work is complete.
Zero'd the Match gains starting at 1549utc. Jim reports the X leg of the sensor dropped by ~1/2 on 23 Dec when it appears the zero button was pressed. We will press it again to see if that might bring it back. I can't find a log of a button press but we had been struggling to get that STS2 signal down to reasonable levels. We're not sure this will work but we'll see.
model restarts logged for Mon 26/Jan/2015
2015_01_26 00:02 h1fw0
2015_01_26 04:29 h1fw1
2015_01_26 05:28 h1fw1
all unexpected restarts. Conlog frequently changing channels report attached.
Switched to manual control at 1510 UTC. Made one setpoint tweak at 1514. It looks like it will probably run without further adjustment (maintaining the ~70 psi differential pressure) for an hour or more.
J. Kissel, H. Radkins. Corner Station HEPI Pump Servo Control was restored at 9:21 PST (17:16 UTC). For the record -- we weren't collecting an Open Loop Transfer Function as the title indicated; we merely wanted a few hours of data with the pump servo off to assess out-of-loop noise and the impact on platform motion.
Alexa, Elli, Sheila, Rana, Evan
Today we worked some more to make the sqrt(TRX+TRY) handoff more robust.
Now we can pretty reliably complete the transition by hand. We are working on implementing the transition in the guardian.
We have tried to reduce the CARM offset while locked on sqrt(TRX+TRY), but we cannot seem to get beyond 2 or 3 times the single-arm power without blowing the lock. The next step is to go through the CARM reduction sequence more carefully and characterize the OLTF of the CARM loop in this state.
As discussed in LHO#16252 et seq., we suspect that the common-mode board has gain-dependent offsets. If this is true, it explains why the interferometer can be knocked out of lock while ramping down or turning off ALS COMM.
To boost the CARM loop's immunity to these offsets, we redistributed gains as follows: the IN2 gain on the common-mode board (LSC-REFL_SERVO_IN2GAIN) was increased. Correspondingly, the gain on the AO path into the IMC board was reduced from +16 dB to −4 dB (IMC-REFL_SERVO_IN2GAIN), and LSC-REFLBIAS_GAIN was increased from 0.8 to 8.
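As a sanity check on this redistribution: going from +16 dB to −4 dB is a 20 dB (10x) reduction on the AO path, which is matched by the 10x increase of LSC-REFLBIAS_GAIN from 0.8 to 8, so the overall CARM loop gain is nominally unchanged. A quick check (assuming, for illustration, that the two stages simply multiply in the loop):

```python
# Sanity check: the 20 dB drop in the IMC AO gain (+16 dB -> -4 dB)
# should be compensated by the LSC-REFLBIAS_GAIN change (0.8 -> 8),
# assuming the two stages simply multiply in the CARM loop.

def db_to_linear(db):
    """Convert a gain in dB to a linear voltage ratio."""
    return 10.0 ** (db / 20.0)

ao_change = db_to_linear(-4) / db_to_linear(16)   # 10x reduction
bias_change = 8 / 0.8                             # 10x increase

# Net loop gain change should be ~1 (i.e., unchanged)
net = ao_change * bias_change
print(round(net, 6))  # -> 1.0
```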
Even with these changes, success is not guaranteed. The gain steps can be heard very clearly when listening to LSC-CARM_IN1, and turning off ALS COMM on the summing board will blow the lock about half the time. We can get better results by using the gain slider on the common-mode board, but we can still hear the gain steps.
In LSC-REFLBIAS, we engage FM3 (1 Hz pole, 40 Hz zero) and FM9 (1.6 Hz pole, 40 Hz zero); the gain is 8 ct/ct. In LSC-TR_CARM, we set the offset to −0.5 ct. We engage LSC-REFL_DC_BIAS with a gain of 30 ct/ct, and later increase it to 50 ct/ct.
A lockloss plot is attached for an unexplained lock loss during the CARM offset reduction after handing off to sqrt(TRX+TRY). Other lock loss times, all 2015-01-27 UTC: 03:56:53, 04:05:25, 05:36:16, 06:40:59, 07:56:02, 08:37:20. The last four occurred during CARM offset reduction after the handoff was complete.
For a sqrt(TRX+TRY) offset of −1.5 ct, the gain we need for REFL_DC_BIAS is 50 ct/ct. Our OLTF looks good here. However, when we reach an offset of −2, we seem to lose lock, even though there is no obvious nastiness in the CARM spectrum.
Today, locking DRMI without arms was pretty painless. In contrast, DRMI+arms lock acquisition was very, very slow for most of the morning and afternoon. After about 5pm local, it became painless as well. This may have been correlated with our changing the trigger settings to be twice as high with arms (compared to no arms), since the POP18 buildup is twice as high. But we haven't investigated this systematically.
Last night Sheila requested DRMI_LOCKED on the ISC_LOCK guardian when she left for the night at 1 am, to check whether this configuration is stable.
DRMI with green in the arms and IR held off resonance stayed locked overnight, and the power slowly degraded until 4:30 am when the lock dropped. Guardian relocked DRMI until 6 am, when the DRMI lock dropped and did not recover. This shows that this configuration is fairly stable.
After changing our ALS gain rampdown to be in the CM board rather than ahead of it, we never broke lock due to the rampdown. However, we only tried it in this new way twice.
The lock loss plot attached above was typical of our current lock loss mystery. After moving to a new CARM offset, the CARM error signal seems stable (no peaking in the spectrum) and the loop shape looks good. The lock breaks without any characteristic sound - just a sudden lock loss. Investigation of the 5 LSC error and control signals shows no instability in the 100 ms before lock loss. The lock loss happens several seconds after the CARM offset is done ramping. Since the signal from the Thorlabs Transmon PDs is only 2000-3000 counts, we think that they are not saturating.
No guess yet about what is happening. Any speculation on lock loss causes is welcome.
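One cheap thing to rule out per lock loss is something hitting the ADC/DAC rails. A minimal sketch of the kind of check we mean (the data here are synthetic; a 16-bit converter rails at ±32768 counts, and the Transmon PD signals sit at only 2000-3000 counts):

```python
import numpy as np

# Hypothetical saturation check on a fetched time series.
# A 16-bit ADC/DAC rails at +/-32768 counts; the Transmon PD
# signals peak at only 2000-3000 counts, far from the rail.
ADC_LIMIT = 2**15  # 32768 counts

def is_saturating(data, limit=ADC_LIMIT, margin=0.95):
    """Flag a time series whose peak comes within `margin` of the rail."""
    return np.max(np.abs(data)) >= margin * limit

# Example with a signal at the observed ~3000-count level:
fake_trx = 3000 * np.sin(2 * np.pi * 0.1 * np.arange(0, 10, 1 / 256))
print(is_saturating(fake_trx))  # -> False
```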
We struggled with the ETMX bounce mode all of today. It seems to have started ringing up around 3 AM last night (according to the ETMX OL 3-10 Hz BLRMS trend). We spent a couple of hours damping it around noon today, but then it slowly started growing again today. We've now installed a 60 dB stopband filter centered at 9.77 Hz in the OL loops to see if this will stop the ringups. We have also installed a resonant gain filter for 9.77 Hz in the CARM loop to reduce the arm power fluctuations during the offset reduction.
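For reference, a notch like the OL stopband filter can be sketched with scipy; the 9.77 Hz center frequency is from above, while the sample rate and Q here are illustrative assumptions (the filter we actually installed is 60 dB deep):

```python
import numpy as np
from scipy import signal

fs = 256.0   # assumed OL loop sample rate (illustrative)
f0 = 9.77    # ETMX bounce mode frequency, Hz
Q = 10.0     # assumed notch quality factor

# Second-order notch centered on the bounce mode
b, a = signal.iirnotch(f0, Q, fs=fs)

# The response is ~unity away from f0 and very deep at f0
w, h = signal.freqz(b, a, worN=[1.0, f0], fs=fs)
print(abs(h[0]) > 0.99, abs(h[1]) < 1e-3)  # -> True True
```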
To better monitor the bounce mode, we set up the ETMX OL lockin screens to demod the OLPIT signal at 9.67 Hz. So now one can trend the 0.1 Hz beat note in the lowpassed output of this to see what the Bounce mode peak height is at all times. Probably should make a dedicated BLRMS for each suspension with an OL to monitor its bounce mode height.
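The same lock-in trick can be sketched offline in a few lines: demodulate OLPIT at 9.67 Hz, lowpass, and a 9.77 Hz bounce mode appears as a 0.1 Hz beat whose magnitude tracks the mode amplitude. A toy example (the sample rate and amplitude are invented):

```python
import numpy as np
from scipy import signal

fs = 256.0
t = np.arange(0, 120, 1 / fs)

# Toy OLPIT signal: a 9.77 Hz bounce-mode line of amplitude 2.0
olpit = 2.0 * np.sin(2 * np.pi * 9.77 * t)

# Demodulate at 9.67 Hz (I and Q), then lowpass at ~0.5 Hz
lo_i = olpit * np.cos(2 * np.pi * 9.67 * t)
lo_q = olpit * np.sin(2 * np.pi * 9.67 * t)
b, a = signal.butter(4, 0.5, fs=fs)
mag = 2 * np.abs(signal.filtfilt(b, a, lo_i) + 1j * signal.filtfilt(b, a, lo_q))

# The lowpassed magnitude recovers the 9.77 Hz line amplitude (~2.0);
# its phase rotates at the 0.1 Hz beat frequency.
print(round(np.median(mag), 2))  # -> 2.0
```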
I am taking a look at the lock losses Evan listed in this entry. All the correction signals seem good except the BS one. Attached is, for example, the 4:05 UTC lock loss, with different zooms of the correction signals during CARM offset reduction. A campaign of loop measurements for the vertex DOFs would probably help.
More lock loss science. As far as I can tell, 5:36 and 6:40 are similar to the previous 4:05 lock loss, while 7:55 looks like CARM is indeed the culprit.
(BS oplev seems OK)
Given the prior lock loss science by Lisa, I speculated that the BS oplev loops were doing something bad, such as glitching. (Note that the ASC loops were not used last night, so the oplev damping loops on BS were engaged the entire time.) Looking into the last five lock losses that Evan posted, I conclude that the oplev damping loops were not glitching or disturbing the MICH loop. Attached are 60-second full time series of various BS oplev-related channels for the five lock losses. The 4:05 event is the only one that clearly showed DAC saturation; the rest did not saturate the BS DAC before the lock loss. The oplev sum sometimes shows a fast transient, but it happens right after each lock was lost -- indicating that the transient was caused by motion on the oplev QPD as the BS was kicked, and it is NOT initiating the lock loss.
(TRX seems always lower than TRY)
I don't know if this is related to the cause of the lock losses, but I found that TRX has been consistently lower than TRY by roughly 10%, regardless of how big the CARM offset was. It is unclear whether this discrepancy is from an unintentional offset in the ALS DIFF operating point or some kind of calibration error in the TRs. In any case, we should fix it in order to reduce DARM coupling in the TR_CARM signal path.
What is the CARM offset and offset reduction in physical units (pm or Hz of the arm cavities)?
Peter, we can use Kiwamu's plot to convert this offset into physical units (alog 15389). An offset of -0.5 ct gives about 800 pm. We were able to bring the CARM offset to about 300 pm stably.
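For anyone redoing the conversion, it is just a scale factor from the calibration point above (−0.5 ct ≈ 800 pm, from alog 15389), keeping in mind that the true counts-to-displacement response is nonlinear, so this is only a local approximation:

```python
# Approximate conversion between TR_CARM offset counts and CARM offset,
# using the calibration point from alog 15389: -0.5 ct ~ 800 pm.
# The true response is nonlinear; this is only a local scale factor.
PM_PER_COUNT = 800.0 / 0.5  # ~1600 pm per count (sign dropped)

def counts_to_pm(cts):
    """Rough CARM offset in pm for a given TR_CARM offset in counts."""
    return abs(cts) * PM_PER_COUNT

print(counts_to_pm(-0.5))   # -> 800.0
print(counts_to_pm(-0.19))  # ~300 pm, about where we could hold lock
```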
Similar to the HEPIs done last week, I took the text file generated by Barker, which is extracted from the Guardian logs, and removed the truncated channels from that list. The MASTERSWITCH was also added to the list. This I used as the do-not-monitor channel list. I saved this file with the safe.snap files in /opt/rtcds/userapps/release/isi/h1/burtfiles.
Next, the safe.snap is modified to include the monitor bit on all channels that are not RO and not fields. This is then loaded into the FE with the LOAD TABLE ONLY button.
After confirming a successful load and noting the DIFF list, the safe.snap is then modified with the do-not-include list. When the TABLE is LOADED again, most of the diffs go away, and the remaining differences are the ones we care about with respect to the most recent safe.snap file: either the channel is set wrong or the snap needs updating.
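The list-building step above can be sketched roughly as follows; the truncation test, the MASTERSWITCH channel name, and the demo entries are all hypothetical stand-ins for what was actually done:

```python
# Rough sketch of building the do-not-monitor list from the
# Guardian-derived channel file. The "truncated channel" test and
# the channel names below are hypothetical stand-ins.

def build_do_not_monitor(lines, max_len=60):
    """Drop blank and truncated channel names, then append MASTERSWITCH."""
    channels = []
    for line in lines:
        name = line.strip()
        if not name:
            continue
        # Guardian log extraction truncates long names; a truncated
        # entry is modeled here as one hitting an assumed length limit.
        if len(name) >= max_len:
            continue
        channels.append(name)
    channels.append("ISI-HAM2_MASTERSWITCH")  # hypothetical channel name
    return channels

demo = ["ISI-HAM2_BLND_X_FIR_GAIN", "ISI-HAM2_TRUNCATEDNAME" + "X" * 40, ""]
print(build_do_not_monitor(demo))
```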
All HAM-ISIs are good except HAM3; there remain some blend diffs as we test configurations for the 0.6 Hz peak issue.
I'll work on the BSC-ISI soon.
This exercise exposed a problem: if you have more than 40 diffs, it only shows 40 and only lists 40, making you think there are only 40 when there are more. I've filed bug 796.
I have confirmed that the guardian reports (displays a notification) when a filter module is not in the correct state. This may not be true for other subsystem Guardians; Betsy and I saw this on some ETM filter modules that Dave's list reports Guardian touched. The monitoring that Guardian does for the SEI channels may need to be coded explicitly.
This has nothing to do with the new SDF functionality, nor has guardian behavior changed. The SEI guardian systems were programmed from the beginning to notify on changes to any of the controller filter banks. This is specific to SEI and is not being done by any of the other subsystems.
07:50 Cris to EX
08:00 Karen to EY
08:48 Bubba & Co to EX - no VEA
09:15 Karen leaving EY
09:21 Fil out to HAM 2 area for video work.
09:54 R Savage out to LVEA to look at viewports
10:00 Ron Carlson contractors on site for R McCarthy
10:02 Corey and Rick out of LVEA
10:29 McCarthy out to HAM6 area w/Daniel approval
11:03 McCarthy out of LVEA
11:09 Corey back out to squeezer bay to look for parts w/comm approval
13:40 Gary Traylor into the optics lab.
15:50 Gary out of optics lab
10:00 AM Guideline in effect for Commissioning starting today.
SEI - work on installing blend filters from LLO on HAM3
SUS - working on SDF
CDS - video work around HAM2. Microphone problem is fixed.
3IFO - no work in the LVEA this week; it will be postponed for some time.
Corey moving things in Squeezer bay.
Beam Tube Cleaning to resume today on X arm - Apollo on site.
Sudarshan mentioned a desire to work on P-Cal at EX and EY
Nutsinee reporting to LHO for operator position
Thomas Abbott visiting. Will be helping with Drift monitor/ P-Cal etc
The ITMY OPLEV SUM has been oscillating with a 10-min-ish period for a while. Apparently YAW picks this up with 3 urad pp amplitude, making it unusable as an angle monitor (first plot).
It's getting worse slowly. On Tuesday the oscillation was there but it was about 1urad pp (2nd plot).
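To pin down the "10-min-ish" period more precisely, one could FFT the YAW (or SUM) trend; here's a toy sketch on a synthesized 600 s oscillation at the observed 3 urad pp level (the trend rate is an assumption):

```python
import numpy as np

fs = 1.0                            # assumed 1 Hz trend data
t = np.arange(0, 6 * 3600, 1 / fs)  # six hours of trend

# Toy oplev YAW trend: 3 urad pp oscillation with a 600 s period
yaw = 1.5 * np.sin(2 * np.pi * t / 600.0)

# Dominant period from the FFT peak (ignoring the DC bin)
spec = np.abs(np.fft.rfft(yaw - yaw.mean()))
freqs = np.fft.rfftfreq(len(yaw), d=1 / fs)
f_peak = freqs[1:][np.argmax(spec[1:])]
print(round(1 / f_peak))  # dominant period in seconds -> 600
```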
Something happened on Wednesday afternoon and the SUM dropped by a factor of 2 (third plot), but it's not as if the noise jumped up right after that event.
Though this is not blocking the locking activity, I asked Doug if he can swap the laser during maintenance. If not, he might try early morning on Wednesday or Thursday.
While I can't comment on the apparent noise increase, the sawtooth with the 10 minute period looks like the problem we had last September. Check to see if the copper lines that carry the instrument air are not touching the transmitter pylon; the instrument air line is what causes the 10 minute period sawtooth. See alog 13932 for when this happened last September, with pictures.
The user environment for controls on video0 - video6 and projector0, projector1 has been modified to eliminate redundant and outdated command aliases. The aliases are now supposed to be set by a common file of aliases at login. If there are any problems running programs that used to work, let me know. This was driven by an outdated "sitemap" alias found on these computers.
Thank You Jim.
Laser Status:
SysStat is good
Output power is 29.3 W (should be around 30 W)
FRONTEND WATCH is GREEN
HPO WATCH is RED
PMC: has been locked 6 days, 1 hr 15 min (should be days/weeks); reflected power is 2.0 W and PowerSum = 25.6 W (reflected power should be <= 10% of PowerSum)
FSS: has been locked for 1 hr 37 min (should be days/weeks); threshold on transmitted photodetector PD = 1.59 V (should be 0.9 V)
ISS: diffracted power is around 4.7% (should be 5-15%)
People have asked for my alignment/lock acquisition DTT and StripTool templates. They can be found here:
/ligo/home/alexan.staley/Public/DTTTemplate
/ligo/home/alexan.staley/Public/StripToolTemplates/AlignmentStripTemp
Elli, Sheila, Alexa, Rana, Evan
Since we haven't been able to reliably transition ALS COMM to sqrt(TRX+TRY), we decided we'd back off a bit and try the transition with a single arm only (i.e., no DRMI).
The sequence that we found worked for us is as follows:
For most of this process we had a GPIB-controlled SR785 hooked up to the common mode and summing node boards in order to monitor the relative strengths of the ALS COMM and sqrt(TRX) signals. For the excitation we drove EXC A on the common-mode board; for ALS COMM we monitored TEST1 on SUM A, and for sqrt(TRX) we monitored TEST2 on SUM A. This was incredibly helpful for sorting out how to properly shape sqrt(TRX) and how to engineer a stable crossover during the transition.
And if anyone is looking for the freshest copy of the 40m GPIB scripts, they are maintained by Eric Quintero on github: https://github.com/e-q/netgpibdata
At around 3:30 this morning, the ETMX T240s started to drift to some large values. This morning we were having trouble locking the X arm with green light, and noticed that the REFL CNTRL output was huge, indicating that the cavity length was changing by ~20 um in about 10 seconds. We looked at the ISI screens and saw that the T240 mons were red. After interrupting Jeff's Saturday morning errands, we decided to try zeroing the T240s by turning on FM6 (labeled AutoZ) in the T240 X filter banks; when we did this, the ISI tripped. As it started to isolate, the outputs of the T240s were still large but not as large as they had been, and the stage 2 watchdog tripped; it then isolated, and the cavity motion is now much, much less.
The first attached screenshot shows the green laser control signal roughly calibrated in um; you can see the abrupt change when we tripped the ISI and the much smaller fluctuations after. The second screenshot shows a trend of the outputs over the last two days.
I've looked at this a bit more, but I can't find anything obvious. At ~11:40 UTC, something agitated the ISI, and it stayed that way until 21:30 when Sheila tripped the ISI (T240 time series trend shown in the first screenshot). The ETM optic was in a similar state, and returned to normal after the ISI tripped. The ground sensors don't show anything (second plot; the reference is from 8:00 UTC, the active trace from 12:00), and I didn't see any evidence that the ISI isolation loops were ringing (last screenshot; I only show a few, but I looked at them all).