The BS ISI tripped on the GS-13 watchdog at 1328 UTC this morning. Traffic, wind, something; the plotting scripts are still not functioning...
Anyway, I brought it back to Level 2 with 750 mHz blends on Stage 2, and T250s on all Stage 1 blends except the T100mHz_0.44 blends on the X and Y DOFs.
I reset the target positions; there was a shift of about 700 nrad on RX and <4 um on Z, and all other shifts were smaller. Please let us know if this impacts any alignments. This allowed for one-button isolation.
I am done with the morning red lock and handed the interferometer over to Keita and Jax. Here are some notes for the green and blue teams:
PRMI locks:
Today I was able to lock the PRMI with the sidebands resonant in the PRC. There were three key points: (1) the alignment was not great, (2) the notches in FM6 of MICH (see alog 10127) were too aggressive for the initial acquisition, and (3) a 30 Hz low pass in MICH's FM9, which is usually engaged by the guardian, was not on.
My first guesses for the MICH and PRCL gains were 40 and -0.4 respectively (see alog 10168), because these are the nominal values we have been using in the past week. However, it turned out that the PRMI alignment was not good enough, so the optical gain was smaller by a factor of 2 to 3 for both MICH and PRCL. I empirically ended up with gains of 80 for MICH and -1.4 for PRCL to hold the lock for a long period. Tweaking PRM and PR2 then gave me a high build-up of approximately 30000 counts in POPAIR_B_RF18, which is about the same as we saw on the 11th of February. Attached is a trend of the power build-up and alignment sliders. The misalignment was mainly in pitch.
At the end, the gains were 40 and -0.6 for MICH and PRCL respectively. I didn't get a chance to measure the UGF.
Next steps:
Our short-term goal is to do the "one arm + PRMI 3f" test, so the stability study of 3f locking is the most critical item at this moment. However, I (re-)discovered that the daily alignment is time-consuming and is something we must automate. So I would like to get the dither system running first, before starting a serious 3f study.
Even though the PRMI didn't spontaneously drop lock at the end of the morning commissioning, the fluctuation in the intracavity power was large. The power could drop to half of its maximum and was oscillating mainly at 0.9 Hz. Looking at the PR3 gigE camera (VID-CAM09), I found that the oscillation of the cavity power was synchronized with scattered light off the PR3 cage, which appeared to oscillate mainly in pitch. So I tried to identify which optic was moving, using the data from this morning.
According to a coherence test (see the attachment), ITMY is the most suspicious at this point.
ITMY was oscillating at roughly 0.4 Hz and shows moderately high coherence with POP_B_RF18. It is possible that this 0.4 Hz motion of ITMY produced a fluctuation in POP_RF18 at twice the frequency, due to the quadratic response of the cavity power. This issue is not a killer at this point, but the study will continue.
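As a quick numerical illustration of that frequency-doubling argument (synthetic signal, not IFO data; the sample rate, duration, and 0.45 Hz line are arbitrary choices for the demo):

```python
import numpy as np

# A cavity held near a power extremum responds quadratically to mirror
# motion, so a displacement line at f appears in the cavity power at 2f.
# Square a synthetic "ITMY-like" 0.45 Hz motion and read off the
# dominant line in the resulting spectrum.
fs = 64.0                      # sample rate, Hz (arbitrary for this demo)
t = np.arange(0, 200, 1 / fs)  # 200 s of data
f_motion = 0.45                # assumed mirror-motion frequency, Hz
x = np.sin(2 * np.pi * f_motion * t)

power = x**2                   # quadratic coupling into cavity power
power -= power.mean()          # remove the DC term before the FFT

freqs = np.fft.rfftfreq(len(power), 1 / fs)
spectrum = np.abs(np.fft.rfft(power))
f_peak = freqs[np.argmax(spectrum)]
print(f_peak)  # expect 0.9 Hz, i.e. 2 * f_motion
```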
Done by switching off the DC biases. The oplev was already off.
ETMX is now realigned for the blue team. Oplev is still off.
Sorry, Jamie. I have another guardian job for you.
controls@opsws4:~ 0$ guardctrl start SUS_PRM
starting node SUS_PRM...
fail: SUS_PRM: unable to change to service directory: file does not exist
After the recent upgrade, where I rebuilt the node supervision infrastructure on h1guardian0, I had not yet gotten around to re-creating and restarting all of the nodes that had been running previously. Arnaud and I are now restarting all the SUS nodes, but in case it comes up again, this should be an easy issue to resolve:
The guardctrl utility will tell you which nodes are currently running:
jameson.rollins@operator1:~ 0$ guardctrl list
IFO_IMC * run: IFO_IMC: (pid 11768) 144328s, want down; run: log: (pid 26686) 145415s
ISI_HAM4 * run: ISI_HAM4: (pid 26143) 3148s, want down; run: log: (pid 11352) 53329s
LSC * run: LSC: (pid 20593) 48884s, want down; run: log: (pid 11727) 48972s
SUS_ETMX * down: SUS_ETMX: 145415s; run: log: (pid 26687) 145415s
SUS_MC1 * run: SUS_MC1: (pid 29305) 145317s, want down; run: log: (pid 26685) 145415s
SUS_MC2 * run: SUS_MC2: (pid 29314) 145317s, want down; run: log: (pid 26863) 145413s
SUS_MC3 * run: SUS_MC3: (pid 29327) 145317s, want down; run: log: (pid 26864) 145413s
SUS_SRM * run: SUS_SRM: (pid 1869) 63862s, normally down, want down; run: log: (pid 1027) 150829s
jameson.rollins@operator1:~ 0$
Any node you think should be there but is not showing up, you can just create:
jameson.rollins@operator1:~ 0$ guardctrl create SUS_PRM
creating node SUS_PRM...
adding node SUS_PRM...
guardian node created:
ifo: H1
name: SUS_PRM
path: /opt/rtcds/userapps/release/sus/common/guardian/SUS_PRM.py
prefix: SUS-PRM
usercode:
/opt/rtcds/userapps/release/sus/common/guardian/sustools.py
/opt/rtcds/userapps/release/sus/common/guardian/SUS.py
states (*=requestable):
0 MISALIGNED *
1 SAFE *
2 DAMPED *
3 ALIGNED *
4 INIT
5 TRIPPED
jameson.rollins@operator1:~ 0$
Once the node is created, it is ready to start. Before starting, I usually pop open a window viewing the log from the node so I can watch the start up. This is most easily done by opening up the medm control panel for the node via the GUARD_OVERVIEW screen, and clicking on the "log" link.
Finally, just start the node:
jameson.rollins@operator1:~ 0$ guardctrl start SUS_PRM
starting node SUS_PRM...
jameson.rollins@operator1:~ 0$
We're working on making all the guardians smart enough to identify the current state of the system on startup, and identify the correct state to jump to. The SUS guardians are programmed to go to the ALIGNED state on startup. We're now working on enabling them to identify if the optic is currently misaligned and to go to the MISALIGNED state in that case.
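As a rough sketch of that startup decision: the helper interface, dictionary arguments, and threshold below are invented for illustration and are not the actual Guardian API.

```python
# Hypothetical sketch of the planned SUS-guardian startup logic.
# The function signature, dict-based interface, and threshold are
# invented for illustration; the real Guardian code will differ.
MISALIGN_THRESHOLD = 0.5  # assumed: fractional deviation that counts as "far"


def startup_state(current_offsets, nominal_offsets):
    """Pick the state a SUS guardian should jump to on startup.

    If the optic's alignment offsets are far from their nominal values,
    assume it was deliberately misaligned and resume in MISALIGNED;
    otherwise resume in ALIGNED.
    """
    for dof, nominal in nominal_offsets.items():
        current = current_offsets.get(dof, 0.0)
        if nominal and abs(current - nominal) > MISALIGN_THRESHOLD * abs(nominal):
            return 'MISALIGNED'
    return 'ALIGNED'
```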
The IMC was not locking. I did the following to get it to lock:
Now it locks.
At 4:47AM local.
I've started building out an LSC guardian library, beginning with Kiwamu's PRMI sideband locking guardian module (LSC_PRMIsb.py). I started by making an LSC code library:
USERAPPS/lsc/h1/guardian/lsclib
This is organized as a Python package, with subpackages for the various locking configurations. I started by copying Kiwamu's PRMI sideband locking states from LSC_PRMIsb.py into a new module:
USERAPPS/lsc/h1/guardian/lsclib/prmi/sidebandlock.py
It consists of two top-level requestable states, LOCKED and THREEFLOCKED (see attached state graph). The above module is then loaded by a new LSC guardian module:
USERAPPS/lsc/h1/guardian/LSC.py
currently the full contents of which are:
from lsclib.prmi.sidebandlock import *
request = 'LOCKED'
The guardutil utility can be used to inspect the module as is, e.g. to draw the state graph and print system info:
$ guardutil print LSC
ifo: H1
name: LSC
path: /home/jrollins/ligo/src/userapps/lsc/h1/guardian/LSC.py
prefix:
usercode:
/home/jrollins/ligo/src/userapps/lsc/h1/guardian/lsclib/prmi/sidebandlock.py
states (*=requestable):
0 LOCKED *
1 UP *
2 THREEFLOCKED *
3 ACQUIRE
4 LOCKING
5 INIT
We can add new modules for new locking configurations, load them from the main LSC module, and add the necessary edges and states to connect them together.
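As a hypothetical sketch of that wiring: the module `lsclib.mich.darklock`, its `MICH_DARK_LOCKED` state, and the edge list below are invented for illustration only, and the real configuration will differ.

```python
# Hypothetical future LSC.py -- lsclib.mich.darklock and the edges
# below are invented for illustration; only the prmi import exists today.
from lsclib.prmi.sidebandlock import *   # LOCKED, THREEFLOCKED, ...
from lsclib.mich.darklock import *       # hypothetical second configuration

# extra edges to stitch the two configurations into one state graph
edges = [
    ('MICH_DARK_LOCKED', 'LOCKED'),
]

request = 'LOCKED'
```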
Eventually we might want to move all of this into lsc/common once it stabilizes a bit. I imagine this is also not the final configuration of this stuff.
I started up the "LSC" node on h1guardian0, and it started up without problem. I left it not doing anything for the moment, but it should be ready to run as is.
(Sheila, Alexa, Rana)
During the afternoon, the locking of Green PDH was quite unstable. We suspected that there were some oscillations of the NPRO PZT and/or accidental HOM resonances (since the mode-matching / clipping is so bad).
* Sweeping the NPRO PZT with a low-bandwidth PLL lock, we found no substantial features in the neighborhood of the peak (~27.4 kHz). Even though there are no resonances in the TF, the peak dominates the RMS of the PDH error signal. We thought this could perhaps be coming from an oscillation of the PSL FSS, but tweaking the FSS fast gain doesn't change the peak frequency.
* We tried a few different modulation frequencies for PDH (23.4, 23.9, and 24.4 MHz). These were calculated to place the upper sideband at ~0.3-0.4 of an FSR. As expected, we saw a big dip in the PDH loop in the 10-15 kHz range for these modulation frequencies. The dips were not very stationary; we guessed this was due to alignment fluctuations.
* Daniel turned on the 1000:100 boost in the servo board after a while, and this greatly helped the stability. At the best of times the green arm power fluctuations were ~10%; at the worst of times they were more like 50%, and the mode would hop between 00 and 01. We had mixed results with the dither alignment, and it's not always working for both DOFs.
* We should use a directional coupler to check that we're at the peak frequency for the EOM.
Some observations: after reverting to the original sideband frequency, we had a hard time locking. The behaviour was similar to what we experienced in the past when we had a lot of alignment fluctuations: we would stay "locked" but switch between the 00 mode and a higher-order transverse mode without losing lock. In the past the transition was to a 10 mode, whereas yesterday it was to a second-order mode. The locking was better when we switched back to the frequency that is 1 MHz off; it turned out that the sidebands had coincidentally been set near the second-order transverse mode spacing. Using a frequency near nominal with the same tuning worked as well. However, the real problem turned out to be a lack of low-frequency gain. With the standard network compensation we just have a pole near 1.6 Hz. With the boost turned on, the lock is a lot more stable. This seems especially important during elevated wind.
The h1nds0 computer died with a kernel panic; we had to reboot it.
After re-cabling for the ALS WFS, the link between the slow and fast controls stopped working. The newly assigned DAQ ADC channel has a large -5V offset and seems broken. The offset is there even if nothing is connected to the AA chassis.
Changing the AA chassis didn't fix the problem, so it is probably the ADC. To minimize the disruption we simply switched to a different channel for now.
Day's Activities
Apollo and Mitchell: The ACB was removed from the solid stack, wrapped, and prepped for transport to EY.
I copied some relevant scripts from LLO to
/opt/rtcds/userapps/release/lsc/h1/scripts/sensmat
Although we still need to adapt the scripts to LHO's environment, we can start looking at them and learning how they work.
Previous transfer functions of SR2 showed what looked to be cross coupling between Yaw and Roll at approximately 0.75 Hz.
After checking (1) the BOSEM centering and alignment, (2) the alignment of the BOSEM flags and flag mounts, (3) the level of the Upper Mass and the Tablecloth, and (4) verifying there were no mechanical interferences, we re-ran the TFs for H1-SR3.
The plots still show a Yaw peak at 0.73 Hz, with good coherence at this frequency. We ran DTT and did not see the same peak in the Yaw DOF. Jeff K. looked at the plots and, concluding that this peak was probably environmental noise, gave his consent that this suspension is ready for the glass mass installation.
The plots for the last round of transfer functions are posted below.
Andres R & Jeff B
We adjusted the EQ stops on the M2 and M3 levels of H1-SR2 to approximately their 0.75 mm in-vacuum position, to protect the glued-on magnets on these masses while other installation and commissioning activities are underway in and around HAM4. SR2 is freely suspended at this time.
Kiwamu and Dave
WP4449:
modified h1boot to include first install of the h1oaf model.
13:11 started the first instance of the h1oaf model running on the front end h1oaf0.
changed the DAQ configuration to include the new model
13:26 DAQ restart
now changing the overview MEDM screens to include the h1oaf
MEDM updates completed.
BTW: the new model added one Shared Memory IPC channel on the h1oaf0 front end.
In the latest oaf model, I intentionally deleted (for now) the IPC blocks that were supposed to receive signals from the HEPIs and ISIs. In any case, the original motivation for starting the OAF model was to get the IMC signal received and saved in the science frames, and this was achieved by this installation. At some point in the future we should revisit the model and put the IPC blocks back, which requires changes in some other models (see details in alog 9963).
[Yuta, Rana, Evan]
When Stefan left Friday evening, PRMI wouldn't lock. We poked around at MEDM screens for a while before deciding that a more systematic diagnosis was in order. We decided to attack just the Michelson first.
We parked PRM and misaligned ETMX. We then adjusted the LSC MICH filter bank to duplicate what was done for Kiwamu's and Yuta's previous Michelson lock characterization (elog 9698, 31 Jan 2014). Even with a 1:0 integrator engaged, we found that the Michelson would not lock for more than 30 s, and the error signal drifted by about a third of its peak-to-peak.
We were able to measure the OLTF, and found that it had a UGF of 3 Hz with no phase margin. Rana suggested we notch out the bounce mode of the BS suspension with filters from LLO. We got the filter, adjusted the frequency to match the LHO BS (17.8 Hz, as measured from the REFLAIR_A_RF45_Q_ERR spectrum), and added it to FM6 on LSC_MICH. After doing this, we found that the Michelson lock is much more stable; it appears to lock indefinitely.
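For illustration, the shape of such a notch can be sketched with scipy. This is not the actual LLO Foton filter we installed in FM6; the sample rate and Q below are assumptions.

```python
import numpy as np
from scipy import signal

# Illustrative digital notch at the measured BS bounce mode (17.8 Hz);
# the real FM6 filter came from LLO via Foton and differs in detail.
fs = 16384.0      # assumed front-end sample rate, Hz
f_notch = 17.8    # bounce-mode frequency from REFLAIR_A_RF45_Q_ERR
Q = 30.0          # notch quality factor, chosen for illustration

b, a = signal.iirnotch(f_notch, Q, fs=fs)

# check suppression at the notch and unity gain well away from it
freqs, h = signal.freqz(b, a, worN=[f_notch, 100.0], fs=fs)
print(np.abs(h))  # ~0 at 17.8 Hz, ~1 at 100 Hz
```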
In order to calibrate REFLAIR_A_RF45_Q_ERR in terms of mirror motion, we let the Michelson swing freely and recorded the fringing. We know that the fringing signal (in counts) as a function of the asymmetry l is A sin(4 pi l / lambda), so the linear portion has a slope of A * 4 pi / lambda, in counts per meter. I took the swinging data, trended the minimum, median, and maximum, and then took the median of the trended minimum and maximum values. A histogram of these values is attached. From this I find A = 643 counts; this gives the conversion factor as 7.6 counts per nanometer.
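The slope arithmetic can be checked directly (λ = 1064 nm assumed for the main laser):

```python
import math

# Reproduce the fringe-calibration arithmetic: fringe amplitude A (counts)
# and slope A * 4*pi / lambda on the linear part of A*sin(4*pi*l/lambda).
A = 643.0       # fringe amplitude in counts, from the trended data
lam = 1064e-9   # main laser wavelength, m (assumed)

slope_counts_per_m = A * 4 * math.pi / lam
slope_counts_per_nm = slope_counts_per_m * 1e-9
print(round(slope_counts_per_nm, 1))  # 7.6 counts per nanometer
```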
We used this value to get a calibrated spectrum of the dark noise of REFLAIR_A_RF45_Q_ERR, which we measured with the mode cleaner unlocked. A trended 10-minute time series is attached; we see that the drift is on the order of a few nanometers over this period. Also attached is a spectrum of the dark noise, along with Yuta's estimate of the control signal (LSC_MICH_OUT) of the Michelson, given in terms of length. The estimated length noise was 1.1 um RMS.
An OLTF of the improved Michelson loop is attached. The UGF is now 7.5 Hz, with a phase margin of 20 degrees. Also attached is Yuta's model of the expected OLTF; the agreement is excellent around the UGF, except for a flat gain offset. This model uses an already existing model of the triple suspension of the BS (/ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/TripleModel_Production).
We assumed that the suspension model gives the BS actuation efficiency from H1:SUS-BS_M2_LOCK_L_OUTPUT to the actual M3 motion in m/count. However, a factor of 1.7e-3 is missing from this actuation efficiency to fit the measured OLTF.
Written by Yuta
I found that I forgot to put the 0.05 in my OLTF model (the output matrix H1:LSC-OUTPUT_MTRX element for MICH to BS is set to 0.05). I also forgot the sqrt(2) factor that converts BS motion to MICH length change. I updated the OLTF figure, and the missing factor is now 0.024.
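A quick check that the two forgotten factors account for the change in the missing factor (numbers from the text above):

```python
import math

# Check how the original missing factor of 1.7e-3 reduces to 0.024 once
# the output-matrix element (0.05) and the BS-to-MICH geometry factor
# (sqrt(2)) are put back into the model.
original_missing = 1.7e-3
output_matrix = 0.05       # H1:LSC-OUTPUT_MTRX, MICH -> BS
geometry = math.sqrt(2)    # BS motion to MICH length change

remaining = original_missing / (output_matrix * geometry)
print(round(remaining, 3))  # 0.024
```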
Written by Yuta
The missing factor of 0.024 was from the conversion factor in uN/count. I had assumed that the suspension model gives the transfer function in m/count, but it is actually in m/uN.
The conversion factor can be calculated using the parameters in G1100968 (for BS specifics, see T1100479):
0.963 N/A * 0.32 mA/V * 20.0/2**18 V/counts = 2.35e-8 N/counts = 0.024 uN/counts
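The chain above can be checked numerically:

```python
# Verify the actuation-chain conversion quoted above
# (parameters from G1100968 / T1100479).
coil_driver = 0.963          # N/A
transconductance = 0.32e-3   # A/V (0.32 mA/V)
dac_gain = 20.0 / 2**18      # V/count (20 V over an 18-bit DAC range)

n_per_count = coil_driver * transconductance * dac_gain
print(n_per_count)        # ~2.35e-8 N/count
print(n_per_count * 1e6)  # ~0.024 uN/count
```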
The OLTF now agrees well with the expectation. Thanks to Jeff K. and Arnaud!
(But still, there is a missing factor in the PD signal chain. The measured value of 7.6 counts/nm is used in this expected curve. See alogs #9630 and #9857.)
Note that this factor (uN/count) is also missing in the current noise budget model, which lives in /ligo/svncommon/NbSVN/aligonoisebudget/trunk/PRMI.
I reported the WD plotting issue mentioned by Hugh last week. Details can be found in LHO aLog #10057.
I am not sure what is going on yet, scripting or server access issue, but I am looking into it.
The plotting scripts work for the HAM-ISIs. WD plotting is still dysfunctional on the BSCs. I think it is a scripting issue in that case, and I am working towards fixing it:
The BSC-ISI WD plotting software was fixed, see LHO aLog #10258 for more details.