J. Kissel

Summary
We're adding three new calibration lines around 30 Hz on the ETMY actuation stages in order to narrow down the uncertainty in actuation strength independently for each stage. Depending on the success of their analysis, and on interference with IFO operations, we'll decide whether to leave them on for ER9. We may also push further forward and cancel these lines with the Y-end PCAL, but for now I turn them on without cancelling for the week prior to ER9.

Motivation
Recall that during O1, H1 had a static, ~2% systematic error in the collective actuation strength ("kappa PU"), narrowed down using the cumulative integration time afforded by the overall DARM loop line coupled with the ESD-only line (see e.g. LHO aLOG 24569 or LHO aLOG 25031). Going forward, we intend to differentiate between the strengths of the upper stages, using the lines' constant presence to bring the uncertainty in relative actuation strength to essentially zero. Once we cancel these lines with PCAL, that'll bring the absolute calibration uncertainty to essentially zero.

Line Details
For now, without the man-power for further study of their "optimal" location, I've just stolen L1's ~30 Hz calibration line frequencies from O1 (see original source T1500377), given that L1 will not be involved in ER9. The details of the new lines are:

Isolation Stage   Frequency [Hz]   Amplitude   Oscillator Channel
TST / L3          35.3             0.11        H1:SUS-ETMY_L1_CAL_LINE
PUM / L2          34.7             1.1         H1:SUS-ETMY_L2_CAL_LINE
UIM / L1          33.7             11.0        H1:SUS-ETMY_LKIN_P_OSC

These new values have been accepted into the DOWN and SAFE SDF files. This is in addition to the "normal" calibration lines from O1, which will still be on so that we can replicate the O1 calculation without extra effort. On the TST / L3 stage we now have *two* calibration lines, so that we can still reproduce the O1 calibration-line, time-dependent parameter tracking without changing anything; we're not yet confident enough in the PCAL cancelling scheme for it to completely replace the O1 method, and we haven't installed / replaced any infrastructure. Thus, for now, I've stolen one of the Optical Lever lock-in oscillators and piped it out to the DAC output as a longitudinal drive using the LKIN2ESD matrix.
The above aLOG entry has some very confusing typos. Here's what I actually meant (and now includes the swap because of the need for synchronized oscillators -- see LHO aLOG 28086):

Isolation Stage   Frequency [Hz]   Amplitude   Oscillator Channel
TST / L3          35.3             0.11        H1:SUS-ETMY_L3_CAL_LINE
PUM / L2          34.7             1.1         H1:SUS-ETMY_L2_CAL_LINE
UIM / L1          33.7             11.0        H1:SUS-ETMY_L1_CAL_LINE

And to replicate the O1 calibration line scheme:

Isolation Stage   Frequency [Hz]   Amplitude   Oscillator Channel
TST / L3          35.9             0.11        H1:SUS-ETMY_LKIN_P_OSC
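For context, here is the rough idea of how a dedicated line per stage pins down that stage's strength (a sketch only; the full formalism, with the DARM loop-suppression corrections, is in T1500377): with a line of amplitude A driven on a single stage at f_line, the ratio of the measured DARM response at f_line to A gives the measured actuation transfer function of that stage, and dividing that by the modeled actuation transfer function at f_line gives the stage's relative actuation strength (a "kappa" for that stage). Since each stage gets its own line, the three strengths can be tracked independently and continuously.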
Below are the past 10 day trends. There seems to be some correlation between a decrease in XTALTEMP over the last couple of days and an upward trend in PSL-AMP "D" power as well as PSL_PWR_HPL_DC_LP OUTPUT. It is possible that these fluctuations are being facilitated by the change in chiller behavior (for the better) over the past couple of days.
As usual, for further in-depth analysis please direct questions to Jason O. and Peter K.
Chiller questions may be directed to Jeff B.
J. Kissel
Not sure why (I couldn't find an aLOG), but the SDF system found that the PCALY calibration lines at 36.7, 331.9, and 1083.7 Hz have been off since Monday, Jun 20. I've turned them back on, so that we can get as much information about this week's lock stretches as we can. This was done by setting the SIN and COS amplitudes back to 125, 2900, and 15000 [ct] respectively.
CP5 has a couple of features that distinguish it from the other CPs: 1. The Dewar exhaust pressure regulator was adjusted to raise the Dewar pressure from 15 psig to 19 psig (one turn on the regulator bolt head). 2. The upper end of the electronic actuator was "zeroed" at the full range of the device rather than where it landed when CP5 was set to 100% open. We added the range and increased the Dewar pressure because, in order to maintain 92% full, the LLCV setting was historically around 88% but has drifted to 98-100%.
Added 904 channels. Removed 410 channels. (see attached)
Flow meter is connected to CP4 exhaust again. Readings are bogus negative values. We will need to send it back to the company for recalibration and possibly sensor replacement, after freezing the device last week with an unexpected LN2 surge out the exhaust. Applied LOCTITE Threadlocker Blue 242 nut & bolt locker to the LLCV shaft and actuator coupling nut and re-zeroed the device (no room for a lock nut on the shaft).
20 sec. to overfill CP3 with 1/2 turn open on LLCV bypass valve
SEI: BRS commissioning continues; reinstall anemometers at wind fence
SUS: OSEM noise hunting; swap of 18-bit DACs that fail autocal Tuesday
PSL: looking at flow/pressure issue, drop in temp, jump in flow
FAC: property service inventory Tuesday, in LVEA, VEAs
VAC: apply Loctite to CP LLCV valves Tuesday
PI model modification Tuesday
EE: retrieval of VAC VME chassis for excess Tuesday
Other: 2 visitors in staging building working on PMC; tour scheduled Tuesday afternoon, in LVEA
I have remotely restarted h1nds1 via the management port. It looks like the machine probably kernel panicked. If it happens again today, please call me on my cell phone so we can capture the error message before restarting.
From h1nds1 logs, daqd crashed at 11:30PDT Sat morning. It was processing a second-trend request to retrieve 6 hours of data (actually it looks like two identical requests one second apart).
[Sat Jun 25 11:28:59 2016] ->12: start trend net-writer "7000" 1150892875 21600 { "H1:IMC-PWR_IN_OUT_DQ.min" "H1:IMC-PWR_IN_OUT_DQ.max" "H1:IMC-PWR_IN_OUT_DQ.mean" }
[Sat Jun 25 11:28:59 2016] ->23: version
[Sat Jun 25 11:29:00 2016] ->23: revision
[Sat Jun 25 11:29:00 2016] ->23: status channels 3
[Sat Jun 25 11:29:00 2016] connected to 10.22.0.108.22811; fd=32
[Sat Jun 25 11:29:00 2016] ->23: start trend net-writer "7001" 1150892875 21600 { "H1:IMC-PWR_IN_OUT_DQ.min" "H1:IMC-PWR_IN_OUT_DQ.max" "H1:IMC-PWR_IN_OUT_DQ.mean" }
h1fw1 continues to be stable, 1 day 18 hours now.
Stefan, Lisa
We leave the interferometer locked at 40W, starting at Jun 26, 3:32 UTC. The only things done by hand are the SOFT offsets and SRM alignment (as last night, the SRM alignment loop is left open), and the PI damping (we have the PI damping loops on as described in the previous log).
Jenne, Lisa, Stefan,
Still running at 40W, for 2h.
Tried to damp some PIs and go to low-noise.
- We successfully damped 15009Hz (ETMY) and 15542Hz (ETMY) (damp settings picture attached.)
- 15541Hz (ETMX) was ringing up. So we tried switching to ETMY and transitioning the ETMX coil driver to low noise.
- We successfully switched the coil drivers, but...
- We switched to ETMY, low noise, and held lock for a while, but saturated the ETMY ESD with 20Hz noise from the ASC, as well as a 532.77Hz line whose origin we don't know. (PI?) (picture)
===================================
Also, we realized that DTT is a bit too smart: the 64kHz channel H1:OMC-PI_DCPD_64KHZ_A_DQ can in principle look at PIs above the Nyquist frequency through aliasing. However, since DTT only allows you to select a frequency about 10% below the Nyquist frequency, there is an effective dead band where DTT cannot see PIs between ~29500Hz and ~36036Hz. The MATLAB script below produces a spectrum up to the Nyquist frequency. Indeed, we found an elevated mode at 30018.3Hz, although it was not quite saturating.
MATLAB code to get spectrum up to Nyquist frequency:
% Add the SimulinkNb utilities, which provide the GWData NDS-fetching class
addpath /ligo/svncommon/NbSVN/aligonoisebudget/trunk/Externals/SimulinkNb/Utils/
gwd = GWData();
gwd.make_kerberos_ready;                            % prepare Kerberos authentication
gwd.site_info(2).port = 31200;                      % point the fetcher at the LHO NDS2 server
gwd.site_info(2).server = 'nds.ligo-wa.caltech.edu';
H1.channels = {'H1:OMC-PI_DCPD_64KHZ_A_DQ'};        % the 64 kHz PI DCPD channel
time_fetch = tconvert('26 Jun 2016 01:15:00');      % convert UTC date string to GPS start time
[data, t, ~] = gwd.fetch(time_fetch, 1000, H1.channels);  % fetch 1000 s of data
Fs = 4*16384;                                       % 65536 Hz sample rate
pwelch(data,[],[],[],Fs)                            % spectrum all the way up to Fs/2 = 32768 Hz
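For reference, here is where the ~29500-36036 Hz dead band quoted above comes from; this is just the aliasing arithmetic (the 10% DTT margin is approximate), nothing instrument-specific:

fs    = 65536;          % sample rate of the 64 kHz DCPD channel [Hz]
fNyq  = fs/2;           % Nyquist frequency, 32768 Hz
fDTT  = 0.9*fNyq;       % ~29491 Hz, approximately the highest frequency DTT lets you select
% A mode at f > fNyq appears aliased at fs - f, so it only re-enters the
% DTT-visible band once fs - f < fDTT, i.e. f > fs - fDTT:
fDeadLow  = fDTT        % ~29.5 kHz, lower edge of the dead band
fDeadHigh = fs - fDTT   % ~36045 Hz, consistent with the ~36036 Hz quoted above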
[Lisa]
Attached is Jenne's PI knowledge; she also updated the PI wiki page.
This last 40W lock ended at June 26, 2:11 UTC while trying to transition to low noise to damp the 15541 Hz ETMX PI.
The 532.7Hz matches the beat frequency between the 15541.9Hz ETMY and 15009.2Hz ETMY modes, which were at very elevated amplitudes during this lock. 15541.9Hz appears to have been unstable, ringing up with a time constant of 190 seconds; damping was engaged with a gain of 1000, so it may have been a PI or it may have been being driven up. The 15009Hz mode damped over the duration of the lock with someone actively changing the control gain. See the figures of the 15540Hz and 15kHz mode group amplitudes over the last half hour.
There were two peaks ringing up in the unmonitored region between 29000Hz and 32768Hz, one appearing in the OMC DCPD signals at 30551.09Hz and another at 31083.90Hz. See the third figure. The beat note between these two peaks is also around 532.7Hz. The connection can be seen in the fourth figure when the beginning-of-lock spectrum (green) is compared to the end-of-lock spectrum (blue). The largest green peak is the sensing harmonic of the 15009Hz mode. The largest blue peak is the first sensing harmonic of the 15541Hz mode. These, mixing with the 532.7Hz, produce the center frequency and the 31620Hz peak. There appears to be something else producing messy ~480Hz 'sidebands' on some of these peaks.
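A quick arithmetic cross-check of the frequencies quoted above (using only the numbers in this and the previous entry):

f_a = 15541.9;       % unstable ETMY mode [Hz]
f_b = 15009.2;       % damped ETMY mode [Hz]
f_beat = f_a - f_b   % = 532.7 Hz, the low-frequency line seen in DARM
h_a = 2*f_a          % = 31083.8 Hz, first sensing harmonic of the 15541 Hz mode (observed 31083.90 Hz)
h_b = 2*f_b          % = 30018.4 Hz, matches the 30018.3 Hz mode reported earlier
h_a - f_beat         % = 30551.1 Hz, matches the 30551.09 Hz peak
h_a + f_beat         % = 31616.5 Hz, close to the quoted 31620 Hz peak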
When I got in a few minutes ago, I noticed that the power into the vacuum had been oscillating for at least as long as our 20 min wall trends. I'm not totally sure what was going on, but the PSL guardian was following along and adjusting the normalization factor so that the MC stayed locked. Perhaps TJ can look into this on Monday, but on the PSL guardian screen it said that 38W was requested and the state it was in was goto_2W, while the actual rotation stage request was going back and forth between 25W and 55W.
Anyhow, it seems like it's back under control now, but we should figure out what caused this so we don't do it too often. It probably has to do with the fact that Terra was trying to step down the PSL power just before the last lockloss (alog 27963), but still, the ISC_LOCK DOWN state or the PSL guardian needs to be able to handle this situation.
Attached is a 6 hour trend showing the situation just before I stopped the oscillation. (Actually, I was going to post a trend with the rotation stage request and the guardian states for the PSL and ISC_LOCK guardians, but now I can't connect to nds1. Sigh. nds1 is still down, but I can get data from nds0.)
15542 Hz (ETMY) and 15522 Hz (ITMX) are parametrically unstable at 40 W after a ~2 hour lock, even after the ring heater power increase. Only the 15522 Hz (ITMX) mode was successfully damped with ESD actuation.
After a few hours locked at 40 W, the 15.5 kHz modes began to grow for both ETMY and ITMX. I'd watched a few other modes rise and fall during the lock, so I didn't start trying to damp until the peaks had grown by over an order of magnitude. I was able to quickly damp 15522 Hz (ITMX) with a 2 Hz BP, -60 deg filter, and saturating the drive.
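For illustration only (this is not the foton filter that was actually loaded): a minimal MATLAB sketch of what a 2 Hz-wide band-pass around the 15522 Hz ITMX mode might look like; the -60 deg damping phase and the overall gain/sign would then be set empirically, as described above.

fs = 65536;                                   % assumed PI model sample rate [Hz]
f0 = 15522;                                   % ITMX mode frequency from this entry [Hz]
% 2nd-order Butterworth band-pass, +/- 1 Hz around the mode, built in zpk/sos form
% for numerical robustness at such a narrow relative bandwidth.
[z, p, k] = butter(2, [f0-1, f0+1]/(fs/2), 'bandpass');
sos = zp2sos(z, p, k);
% The damping phase (-60 deg here) and gain are applied on top of this band-pass.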
I was not able to damp 15542 Hz (ETMY). Driving did slow its growth, but I couldn't find a damping filter combination that actually damped it. I started trying to damp ETMY ~1 min after starting ITMX, so there's a chance I could've caught it earlier, but as you can see below it was also just ringing up faster. I suggest that on the next 40 W lock we turn the damping filter on from the beginning.
Below I've tracked both frequencies in the OMC DCPDs, showing the PI growth and damping (or failing at damping). I attach a power spectrum showing progression as well.
When it was clear I wasn't able to damp, I tried stepping the power down 40 --> 38 --> 36 W but the lock broke quickly during this (not directly from the PI); sorry Stefan + Lisa. I'm leaving the interferometer in the DOWN state. I've edited the new PI wiki accordingly.
We are seeing 8 peaks in at least two of the horizontal mechanical mode groups (instead of the expected 4, one for each test mass).
Below are spectra of the two mode groups as seen in the OMC DCPDs, taken while locked at 2 W and 20 W.
Compared to the 15000 vertical mode group here.
When I drove the 15070 Hz group:
When I drove the 15600 Hz group:
See wiki for frequencies I was able to drive and thus assign to a specific test mass.
Ideas:
Maybe one more possibility; suppose the field you're monitoring is proportional to some LF (~ Hz) alignment dof for these, so the mode is only seen at +/- sidebands (i.e., suppressed-carrier AM) ?
Should not appear in arm cavity transmission (per Carl).
We investigated the scenario of these peaks being amplitude-modulation sidebands by looking at the arm transmission signal. If they were sidebands from an acoustic mode w1 with amplitude modulation at w2 from some lower-frequency optic motion, the OMC signal of the form A*sin(w1*t)*sin(w2*t) describes the case where the motion moves the operating point around the point of zero response. In this scenario we would expect 2 sidebands and no w1 peak in the PSD of the OMC signal, while we would expect the arm transmission to be dominated by the w1 term (possibly with w2 sidebands). We drove up the 15606.2Hz peak.
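For reference, the expectation above is just the product-to-sum identity: A*sin(w1*t)*sin(w2*t) = (A/2)*[cos((w1-w2)*t) - cos((w1+w2)*t)], i.e. in the suppressed-carrier case all the power sits at the two sidebands w1-w2 and w1+w2, with none at the carrier w1 itself.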
Inspecting the arm transmission signal, we found that the 15606.2Hz peak appeared in the same location, indicating that the OMC signal we see is not an amplitude-modulation sideband.
The scenarios investigated with COMSOL for possible mode splitting.
Nominal model 3D deformation depicted in the first figure. Dimensions for ITMY taken from 'galaxy' and D080751. RoC not included. Resonant frequency 15078.408Hz
Asymmetry in the ears vertical position + 2mm. Frequency shift to 15078.452Hz
Asymmetry in the ears horizontal position +2mm. Frequency shift to 15078.431Hz
Rotation of one ear by 5 degrees. Frequency shift to 15078.384Hz
Flats not centered in cylinder by +/-1mm (one flat larger area than other). Frequency shift to 15078.420Hz
Wedge angle misaligned from vertical (defined by flats) 10 degrees. Frequency shift to 15078.893Hz
None of these asymmetries produced new modes in the vicinity of the 15078Hz mode, and the frequency shifts are small relative to the observed frequency differences between the modes, so these scenarios can be ruled out.
Another idea is coupling via the violin modes (29th-31st harmonics) to the penultimate mass. There is relatively large motion of the ears for the horizontal modes, which might explain the observation of mode splitting only in the horizontal modes.
ITMY PEN simulated resonant mode frequency is 15069.3Hz. (second figure).
Lisa, Stefan
- We looked at the DC stability of the transmon: over ~week time scale the TMS drifts by
~2 urad in P and Y
<10u in T and V
- This means that a TMS QPD combination that is tuned for SOFT (instead of being insensitive to P/Y) will drift the beam waist around by 4.3mm, or about 1/3 of the beam waist - too big (compare this to the 10u in T and V). We thus need to use the TMS P/Y-insensitive QPD combination as the long-term reference. We can blend to a soft/hard basis at AC if we need it for controlling the soft degree of freedom.
- For now we added an IF-statement to the Guardian that allows us to easily switch the input matrix and related offsets.
- We also hard-coded the QPD offsets (both Sheila's from today and the ones from 5 days ago).
- Running at 20W, we switched back to the input matrix and offsets from 5 days ago, resulting in a PR gain increase from 30 to 32.
- After that we fine-tweaked the soft loop yaw offsets. We just used the offsets in the control filter banks. We got the best recycling gain with
H1:ASC-DSOFT_Y_OFFSET = 0.12
H1:ASC-CSOFT_Y_OFFSET = 0.12
neither of which are in Guardian yet.
- Finally, we tried closing the SRC1 P and Y loops with the following matrix:
A36IP -0.5
B36IP 1.0
A36IY -1.5
B36IY 0.26 (changed today - not in guardian yet)
pitch worked, yaw ran away (see plot). It looks like the signal in A36IY triggered the runaway by flipping its sign (B36IY had the right sign).
- We also reduced the OMC_DCPD power setpoint (i.e. effectively the DARM offset) to 15mA.
- We thus left the SRM open for the night.
June 25, 5:09 UTC, Lock #1: lock up to 40W, recycling gain < 27, lock broken by angular instability
June 25, 5:35 UTC, Lock #2: lock up to 35W, optimized recycling gain by adjusting the soft loops yaw offset; recycling gain increased from 27 to 30; lost lock by doing a pitch adjustment
June 25, 7:15 UTC, Lock #3: lock directly to 35W (ISS3 loop engagement now in the guardian) with optimized soft offsets (only YAW offsets effective), recycling gain at 30; started powering up in steps of 1W from 35W; more soft loops tuning to keep recycling gain around 29; 8:00 UTC stable at 45W; 8:10 UTC 50W, recycling gain 27; unlocked at 8:15 due to saturations of some sort -- Terra was checking PI, nothing apparent; couldn't lock DRMI -- initial alignment
June 25, 9:25 UTC, Lock #4: test: tried high power offsets during the locking sequence -- worked fine; locking sequence all the way up to 35W, a few steps to 40W, recycling gain 28 at 9:51 UTC @ 40W; SRC loop closed at 10 UTC; bad error signal for SRC YAW; a very short long-term test ended 9 minutes afterwards...
June 25, 10:15 UTC, Lock #5: directly to 40W, all in the guardian except the OMC whitening switch and SOFT offset engagement; recycling gain recovered to 28.5, leaving the SRM alignment loop open
LEAVING THE INTERFEROMETER LOCKED AT 40W UNDISTURBED (in HIGH NOISE) AT JUNE 25, 10:45 UTC
To see if reducing h1fw1's disk loading would make it more stable, Thursday at 11:30PDT we changed h1fw1's daqdrc file to stop it writing science frames on the next daqd restart. Despite h1fw1 having restarted itself six times already Thursday morning, it then went into a period of stability from 09:53 through to 23:15, at which time it restarted and stopped writing science frames. What happened next was interesting, here is the timeline:
Thu 23:15PDT h1fw1 writes last science frame
Thu 23:20PDT h1fw1 restarts daqd
Thu 23:30PDT h1nds1 restarts
Thu 23:39PDT h1nds0 stops working, but its process still exists so monit does not restart it
Fri 05:02PDT h1fw1 restarts, test has failed at this point
The interesting points are: h1nds1 restarted once, 10 minutes after the config change. Perhaps not surprising, because it uses h1fw1's frames. At 23:39PDT h1nds0 stopped serving data. This is totally surprising; there is no link between h1nds0 and h1fw1 that we know of.
Since Guardian is the sole NDS client for h1nds0, several Guardian nodes reported NDS problems while h1nds0 was in its frozen state. DIAG_MAIN, for example, reported NDS failures from Thu 23:40PDT through Fri 10:54PDT.
Trouble getting h1fw1 to write science frames again.
The 05:02PDT restart of h1fw1 meant the test had failed. I reverted the daqdrc file back to write science frames. In light of the h1nds0 issues from last night, I decided to manually restart h1fw1. Unfortunately h1fw1 became very unstable, sometimes restarting before a single frame could be written. Here is what I did:
wait for monit to restart daqd several times before intervening
manually restart daqd
stop daqd and reboot h1fw1
stop daqd and power cycle h1fw1
finally, the nuclear option, power down h1fw1, power cycle h1ldasgw1, power up h1fw1
At the time of writing, the last restart seems to have made h1fw1 stable; it has been running for 30 mins.
In the past we noticed that power cycling the solaris QFS/NFS server has helped.
h1fw1 is stable again; presumably the reboot of the solaris QFS server h1ldasgw1 was the fix. It has been running for 18+ hours.