Day Shift Summary
LVEA: Laser Hazard
08:45 PSL check OK
09:12 Craig & Scott – Working in H2-PSL enclosure
09:15 Richard – Working on dust monitor at HAM6
10:05 Betsy & Travis – To End-Y to lock Quad
11:06 Filiberto – Electrical work at End-X
12:36 Sheila – Restarting the ISCEX model
12:51 Thomas – At End-X to recenter optical lever
12:57 Karen – Cleaning at Mid-Y
13:15 Betsy & Travis – Working on ITM in LVEA test stand
13:45 Apollo – At End-Y to lower BSC cleanroom
14:06 Thomas – Going to End-X and then to the LVEA to align ITMY optical levers
14:45 Praxair – On site to deliver nitrogen to Mid-Y
15:00 Corey & Keita – At End-Y working on TMS
16:10 Dave – DAQ restart to add Guardian channels
16:10 DAQ restart. Installed the latest H1EDCU_GRD.ini file from Jamie, and added H1:ALS-C_COMM_A_LF_OUT_DQ to the frame broadcaster for DetChar.
[Jeff K, Duncan M]
The H1:SUS-BS_ODC_CHANNEL_BITMASK record has been modified so that the summary bit of the ODC for the BS suspension ignores the 'M3 WatchDog OK' bit, at Jeff's suggestion; this bit will be off for the foreseeable future, regardless of the state of the suspension itself. The record now reads 2014, rather than 2046:

$ caput H1:SUS-BS_ODC_CHANNEL_BITMASK 2014
Old : H1:SUS-BS_ODC_CHANNEL_BITMASK 2046
New : H1:SUS-BS_ODC_CHANNEL_BITMASK 2014

An equivalent change has been made at LLO.
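The bitmask arithmetic can be checked directly: 2046 and 2014 differ by exactly one set bit (value 32, i.e. bit 5). That this bit position corresponds to 'M3 WatchDog OK' is inferred here from the two values, not from the ODC bit definitions.

```python
# Verify that the new bitmask clears exactly one previously-set bit.
OLD_MASK = 2046
NEW_MASK = 2014

diff = OLD_MASK ^ NEW_MASK
assert diff == 32 and (OLD_MASK & diff), "exactly one set bit was cleared"
print(f"cleared bit value: {diff} (bit {diff.bit_length() - 1})")
print(f"new mask: {OLD_MASK & ~diff}")  # 2014
```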
[Yuta, Evan]
Michelson signal challenge (alog #9857) was solved.
MICHELSON REFLAIR PATH POWER BUDGET
Tabulated by Kiwamu and Lisa in alog #9954.
MICHELSON SIGNAL CHAIN
Michelson length change: dl=lx-ly [m]
|
Optical gain: dPmod/dl = 2*4*pi/lambda*Peff*J0(beta)*J1(beta)*sin(2*pi*fmod/c*lsch)*Ritm = 1.86 W/m (Confirmed with Optickle; Note that Optickle gives half of this since it gives demodulated signal with demod gain of 0.5)
| Effective input power: Peff=7.3uW (measured), Modulation depth beta=0.07 (alog #9395), Modulation frequency: fmod=5*9099471 Hz (alog #9695), Schnupp asymmetry: lsch=0.08 m (alog #9776)
|
PD response of REFLAIR_A: eta1064 = 0.76 A/W (Perkin Elmer C30642)
|
Trans impedance: Z = 341 Ohm (LIGO-S1203919)
|
Cable loss: Closs = 0.81 (measured; alog #9630)
|
Demodulator gain: Gdmd = 11 (measured; nominally 10.9 according to LIGO-F1100004)
|
Whitening gain: Gwtt = 45 dB (set by H1:LSC-REFLAIR_A_RF45_WHITEN_GAIN). (Anti/Whitening filters and AA filters are ignored here since they have DC gain of 1; see LIGO-D070081 and /opt/rtcds/rtscore/release/src/fe/controller.c.)
|
ADC conversion: V2C= 2**16/40 counts/V
|
H1:LSC-REFLAIR_A_RF45_(I|Q)_IN1
|
H1:LSC-REFLAIR_A_RF45_(I|Q)_GAIN = 5
|
MICH error signal (H1:LSC-REFLAIR_A_RF45_Q_ERR) [counts]
Multiplying these numbers gives dPmod/dl * eta1064 * Z * Closs * Gdmd * Gwtt * V2C * 5 = 6.2e9 counts/m (with error of ~10%)
The measured value using Michelson fringe was 7.6e9 counts/m on Feb 17 (alog #10127), and 6.8e9 counts/m on Jan 30 (alog #9698). So, the expected value and the measurements agree within ~20%.
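The chain product above can be reproduced numerically from the values quoted in the signal chain (the whitening gain is converted from dB to a linear voltage gain):

```python
# Values quoted in the MICH signal chain above
dPmod_dl = 1.86           # W/m, optical gain
eta1064  = 0.76           # A/W, PD response of REFLAIR_A
Z        = 341.0          # Ohm, transimpedance
Closs    = 0.81           # cable loss
Gdmd     = 11.0           # demodulator gain
Gwtt_dB  = 45.0           # whitening gain in dB
V2C      = 2**16 / 40.0   # counts/V, ADC conversion
Gdig     = 5.0            # H1:LSC-REFLAIR_A_RF45_(I|Q)_GAIN

Gwtt = 10 ** (Gwtt_dB / 20.0)  # 45 dB -> ~178x linear gain
total = dPmod_dl * eta1064 * Z * Closs * Gdmd * Gwtt * V2C * Gdig
print(f"{total:.3g} counts/m")  # consistent with the quoted 6.2e9 counts/m
```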
MEASUREMENT DETAILS
In the process of constructing the calibration chain, we ended up repeating some measurements that were done previously.
We re-measured the power in front of REFLAIR_A. With the PRM aligned, the DC output was 2.35(1) V, and with the PRM misaligned (so that the beam is predominantly the reflection from ITMX), the DC output was −1.30(3) mV. The dark voltage was −1.84(5) mV. The net voltage with the PRM misaligned is then 0.54(6) mV. Using the DC transimpedance (98 Ω) and responsivity of the PD, this gives a power of 7.3(7) μW. Additionally, the ratio of aligned to misaligned is 4.4(4) × 10^3, and the expected ratio is Ritm/(Tprm^2)*4 = 4.4 × 10^3, so we have good agreement.
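The 7.3 μW figure follows directly from the net misaligned DC voltage, the DC transimpedance, and the PD responsivity; the aligned/misaligned ratio can be checked the same way:

```python
# Numbers quoted in the paragraph above
V_net   = 0.54e-3   # V, net DC output with PRM misaligned (dark-corrected)
Z_dc    = 98.0      # Ohm, DC transimpedance
eta     = 0.76      # A/W, PD responsivity at 1064 nm

P = V_net / (Z_dc * eta)
print(f"P = {P * 1e6:.1f} uW")   # ~7.3 uW

V_aligned = 2.35    # V, DC output with PRM aligned
ratio = V_aligned / V_net
print(f"aligned/misaligned ratio = {ratio:.1e}")  # ~4.4e3
```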
We also used the Ophir power meter to measure the power with the PRM aligned; it was 33.9(1) mW. The power with the PRM misaligned was too small to see with the meter (we couldn't figure out how to remove the filter on the head).
We measured the demodulator gain by driving the RF input of the demodulator with a −10.0 dBm sine slightly offset from fMod, and then watching the Q monitor on a 50-Ω impedance scope. The ratio of pp voltage on the scope to pp voltage of the input sine was found to be 0.51(2). The output impedance of the monitor is 499 Ω, and after the monitor there is a symmetric drive (giving an extra factor of 2), so the gain is actually 11.2(4).
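The quoted demodulator gain follows from the monitor-point measurement: the 499 Ω monitor output driving a 50 Ω scope input divides the voltage by (499 + 50)/50, and the symmetric drive contributes a further factor of 2:

```python
# Measurement described above
ratio_pp  = 0.51    # scope pp voltage / input pp voltage, as measured
R_monitor = 499.0   # Ohm, monitor output impedance
R_scope   = 50.0    # Ohm, scope input impedance

divider = (R_monitor + R_scope) / R_scope   # voltage division at the monitor
gain = ratio_pp * divider * 2               # x2 for the symmetric drive
print(f"Gdmd = {gain:.1f}")  # ~11.2
```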
We found that the optical lever for ETMX was starting to come off the QPD's linear regime (41 urad in pitch), so it seemed like a good time to re-align. ITMY's optical lever has been completely off the QPD for a little while now, but we were able to recover the beam without opening the covers. The last time a fine alignment was done was January 8th, 2014.
After completing the install of IOHT2R, I've installed and aligned the trans-mon and PRMR paths on the table up to the 5 cameras. The HWPs for the trans-mon and PRMR beams are currently set to minimize transmission through the first BSs; their settings are 174 degrees and 290 degrees respectively. I've noted these settings on the table layout that is fixed to the interior of the enclosure. This table is now awaiting cabling for the cameras, which is being ordered by Richard M.
Thanks to Corey for his help. We got the 600 lbs off the keel and palleted it for the trip up onto the E-module. Corey continues his work on the TMS. SEI will collect one more PS before uncabling. Meanwhile, Apollo is lowering the cleanroom.
On 21 Jan 2014, the EPICS gateways were removed in favor of directly specifying the network broadcast addresses using the EPICS_CA_ADDR_LIST environment variable, as described in entry 9400. This eliminates the problem of MEDM displays being slow to respond to IOC reboots, etc., at the expense of additional network traffic. In the original setup, the gateway acts as a proxy for CA channel broadcasts and data streams, such that the gateway can reduce the traffic load for individual IOCs (the gateway can maintain a channel connection, and then fan out the data to clients as required). In the current configuration, clients broadcast and connect to individual IOCs directly; of particular interest is the change in the amount of broadcast traffic.
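For reference, the client-side configuration amounts to disabling automatic address resolution and listing the per-subnet broadcast addresses explicitly via standard EPICS Channel Access environment variables. The addresses below are placeholders (chosen to echo VLANs 101/105/106), not the actual CDS values:

```shell
# Placeholder broadcast addresses -- the real list is site-specific.
export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST="10.101.255.255 10.105.255.255 10.106.255.255"
echo "$EPICS_CA_ADDR_LIST"
```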
The Short Version
The current broadcast traffic on the H1FE network is approximately double what it was when the EPICS gateway was in place. For peak traffic, the change was from 300k to 500k bits/s (75k to 200k bits/s for average traffic). As a percentage of the total bandwidth available between the core switch and the H1FE switch (1 Gbps), this is a change from 0.03% to 0.05% peak utilization. Measured in packets/sec, the rate also essentially doubled, from ~10 pps to ~20 pps. This should not represent a significant additional traffic burden; however, it has made more evident some potential flaws with the model of switch used for the front ends, for which work is ongoing. This analysis is based solely on the broadcast traffic rates, which are the primary concern at hand.
Vlan101 Interface as a Barometer for Traffic Analysis
The core switch performs L3 routing for the CDS network. As such, the VLAN interface for VLAN 101 (the FE network) is an ideal point to monitor changes in traffic. With the gateway in place, this interface receives the CA beacons from the gateway. With the gateway removed, CA beacons traverse this interface to reach the front end computers. Note that the majority of the broadcast traffic comes from the hourly autoburt runs; even while the gateway proxies connections, it appears to re-broadcast channel name searches anyway.
The attached plots for Vlan101 show that the broadcast traffic flow inverts on the 21st as expected. The subsequent relative drop in traffic levels a week and a half later corresponds to a cleanup of the autoburt request files that eliminated invalid/non-existent channels, hence reducing the broadcast rate.
vlan101-bits.png: Vlan101 traffic rate in bits.
vlan101-unicast.png: Vlan101 traffic in unicast packets.
vlan101-non-unicast.png: Vlan101 traffic in broadcast/multicast packets.
Plot times run from 2014-01-12 10:49:59 PST to 2014-02-19 10:49:59 PST. The range is an arbitrary choice, other than including the region of interest. Light infill represents max peak traffic, dark infill average traffic.
cdsegw0 Interface Statistics
As a check, plots for cdsegw0 (the EPICS gateway) show a corresponding change in traffic. The two interfaces cannot be compared directly, as the physical interface for cdsegw0 includes traffic from vlans 101,105, and 106. However, the relative traffic changes match.
cdsegw0-bits.png: cdsegw0 traffic rate in bits.
cdsegw0-unicast.png: cdsegw0 traffic in unicast packets.
cdsegw0-non-unicast.png: cdsegw0 traffic in broadcast/multicast packets.
Plot times, traces are as described above for Vlan101.
Travis and I relocked the ETMy SUS and bagged it for the cartridge flight.
I committed outstanding changes to the base SUS guardian module:
USERAPPS/sus/common/guardian/SUS.py
The SUS.py from sus/l1/guardian was copied to sus/common/guardian, and a couple of improvements to the code were then made:
Outstanding tasks:
For the information of anyone looking at ETMX performance (Fabrice), we turned the oplev damping back on at 18:41 UTC this morning.
Keita, Sheila, Jax
Actually, this is Sheila impersonating Evan
We disabled the dither alignment feedback but left the dithers on for both P and Y, and adjusted the WFS demod phase to minimize the peak in Q. We looked at 380 Hz for P and 410 Hz for Y.
See attached for the demod phase and whitening settings. This is probably not accurate to better than the 10 deg level.
Fabrice gave approval, or at least a "good enough", on the TFs for the ISI, so this morning I went down and locked the ISI. We're now ready for SUS et al. to go down and start locking and adding covers in prep for flight. Our sensors are still powered on (I need to finish collecting spectra), so SEI can't unplug yet, but I should be done with that shortly.
Laser Status:
SysStat is good
Output power is 28.9 W (should be around 30 W)
FRONTEND WATCH is Active
HPO WATCH is red
PMC:
It has been locked 1 day, 22 hr, 20 minutes (should be days/weeks)
Reflected power is 1.1 W and PowerSum = 11.9 W (reflected power should be <= 10% of PowerSum)
FSS:
It has been locked for 0 d, 4 h, and 5 min (should be days/weeks)
Threshold on transmitted photo-detector PD = 0.89 V (should be 0.9 V)
ISS:
The diffracted power is around 6.9% (should be 5-15%)
Last saturation event was 0 d, 4 h, and 30 minutes ago (should be days/weeks)
The BS ISI tripped on a GS-13 watchdog at 13:28 UTC this morning. Traffic, wind, something; the plotting scripts are still not functioning...
Anyway, I brought it back to Level 2 with 750 mHz blends on Stage 2 and T250s on all Stage 1 blends, except for the T100mHz_0.44 blends on the X & Y DOFs.
I reset the target positions; there was a shift of about 700 nrad on RX and <4 um on Z, and all other shifts were smaller. Please let us know if this impacts any alignments. This allowed for a one-button isolation.
I reported the WD plotting issue mentioned by Hugh last week. Details can be found in LHO aLog #10057.
I am not sure what is going on yet, scripting or server access issue, but I am looking into it.
The plotting scripts work for the HAM-ISI. WD plotting is still dysfunctional on the BSC. I think it is a scripting issue in that case, and I am working towards fixing it.
The BSC-ISI WD plotting software was fixed, see LHO aLog #10258 for more details.
I am done with the morning red lock and handed the interferometer over to Keita and Jax. Here are some notes for the green and blue teams:
PRMI locks:
Today I was able to lock the PRMI with the sidebands resonant in the PRC. There were three key points: (1) the alignment was not great, (2) the notches in FM6 of MICH (see alog 10127) were too aggressive for the initial acquisition, and (3) a 30 Hz low pass in MICH's FM9, which is usually set up by the guardian, was not engaged.
My first guesses for the MICH and PRCL gains were 40 and -0.4 respectively (see alog 10168), because these are the nominal values we have been using in the past week. However, it turned out that the PRMI alignment was poor enough that the optical gain was smaller by a factor of 2 to 3 for both MICH and PRCL. So I empirically ended up with gain settings of 80 and -1.4 for MICH and PRCL respectively to hold the lock for a long period. Tweaking PRM and PR2 then gave a high build-up of approximately 30000 counts in POPAIR_B_RF18, which is about the same as we saw on the 11th of February. Attached is a trend of the power build-up and alignment sliders. The misalignment was mainly in pitch.
At the end, the gains were 40 and -0.6 for MICH and PRCL respectively. I didn't get a chance to measure the UGF.
Next steps:
Our short-term goal is to do the "one arm + PRMI 3f" test, and therefore the stability study of the 3f locking is the most critical item at this moment. However, I (re-)found that the daily alignment is time-consuming and is something we must automate. So I would like to get the dither system running first, before entering a serious 3f study.
Even though the PRMI didn't spontaneously drop lock at the end of the morning commissioning, the fluctuation in the intracavity power was large. The power could drop to half of its maximum and was oscillating mainly at 0.9 Hz. Looking at the PR3 GigE camera (VID-CAM09), I found that the oscillation of the cavity power was synchronized with scattered light off of the PR3 cage, which appeared to oscillate mainly in pitch. So I tried to identify which optic was moving, using the data from this morning.
According to a coherency test (see the attachment), ITMY is the most suspicious at this point.
ITMY was oscillating at 0.4-ish Hz and shows moderately high coherence with POP_B_RF18. It is possible that this ~0.4 Hz motion of ITMY produced a fluctuation in POP_RF18 at twice the frequency, due to the quadratic response of the cavity power. This issue is not a killer at this point, but the study will continue.
Sorry, Jamie. I have another guardian job for you.
controls@opsws4:~ 0$ guardctrl start SUS_PRM
starting node SUS_PRM...
fail: SUS_PRM: unable to change to service directory: file does not exist
After the recent upgrade, in which I rebuilt the node supervision infrastructure on h1guardian0, I had not yet gotten around to re-creating and restarting all of the nodes that had been running previously. Arnaud and I are now restarting all the SUS nodes, but in any case, this should be an easy issue to resolve:
The guardctrl utility will tell you which nodes are currently running:
jameson.rollins@operator1:~ 0$ guardctrl list
IFO_IMC * run: IFO_IMC: (pid 11768) 144328s, want down; run: log: (pid 26686) 145415s
ISI_HAM4 * run: ISI_HAM4: (pid 26143) 3148s, want down; run: log: (pid 11352) 53329s
LSC * run: LSC: (pid 20593) 48884s, want down; run: log: (pid 11727) 48972s
SUS_ETMX * down: SUS_ETMX: 145415s; run: log: (pid 26687) 145415s
SUS_MC1 * run: SUS_MC1: (pid 29305) 145317s, want down; run: log: (pid 26685) 145415s
SUS_MC2 * run: SUS_MC2: (pid 29314) 145317s, want down; run: log: (pid 26863) 145413s
SUS_MC3 * run: SUS_MC3: (pid 29327) 145317s, want down; run: log: (pid 26864) 145413s
SUS_SRM * run: SUS_SRM: (pid 1869) 63862s, normally down, want down; run: log: (pid 1027) 150829s
jameson.rollins@operator1:~ 0$
Any node that you think should be there but is not showing up, you can just create:
jameson.rollins@operator1:~ 0$ guardctrl create SUS_PRM
creating node SUS_PRM...
adding node SUS_PRM...
guardian node created:
  ifo: H1
  name: SUS_PRM
  path: /opt/rtcds/userapps/release/sus/common/guardian/SUS_PRM.py
  prefix: SUS-PRM
  usercode: /opt/rtcds/userapps/release/sus/common/guardian/sustools.py
            /opt/rtcds/userapps/release/sus/common/guardian/SUS.py
  states (*=requestable):
    0 MISALIGNED *
    1 SAFE *
    2 DAMPED *
    3 ALIGNED *
    4 INIT
    5 TRIPPED
jameson.rollins@operator1:~ 0$
Once the node is created, it is ready to start. Before starting, I usually pop open a window viewing the log from the node so I can watch the start up. This is most easily done by opening up the medm control panel for the node via the GUARD_OVERVIEW screen, and clicking on the "log" link.
Finally, just start the node:
jameson.rollins@operator1:~ 0$ guardctrl start SUS_PRM
starting node SUS_PRM...
jameson.rollins@operator1:~ 0$
We're working on making all the guardians smart enough to identify the current state of the system on startup, and identify the correct state to jump to. The SUS guardians are programmed to go to the ALIGNED state on startup. We're now working on enabling them to identify if the optic is currently misaligned and to go to the MISALIGNED state in that case.
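The intended startup behavior can be sketched as plain Python (the function name and the boolean misalignment test are illustrative only; the real node would inspect the suspension's alignment settings inside the Guardian state machine in SUS.py):

```python
def startup_state(optic_is_misaligned: bool) -> str:
    """Pick the state a SUS guardian node should jump to at startup.

    Illustrative sketch: currently nodes always head to ALIGNED; the
    planned change is to detect a misaligned optic and go to MISALIGNED.
    """
    return 'MISALIGNED' if optic_is_misaligned else 'ALIGNED'

print(startup_state(False))  # ALIGNED
print(startup_state(True))   # MISALIGNED
```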
(Sheila, Alexa, Rana)
During the afternoon, the locking of Green PDH was quite unstable. We suspected that there were some oscillations of the NPRO PZT and/or accidental HOM resonances (since the mode-matching / clipping is so bad).
* Sweeping the NPRO PZT with a low-bandwidth PLL lock, we found no substantial features in the neighborhood of the peak (~27.4 kHz). Even though there are no resonances in the TF, the peak dominates the RMS of the PDH error signal. We thought that this could perhaps be coming from an oscillation of the PSL FSS, but tweaking the FSS fast gain doesn't change the peak frequency.
* We tried a few different modulation frequencies for PDH (23.4, 23.9, and 24.4 MHz). These were calculated to make the upper SB be at ~0.3-0.4 of an FSR. As expected we saw a big dip in the PDH loop in the 10-15 kHz range for these different modulation frequencies. These dips were not very stationary - we guessed that this was due to the alignment fluctuations.
* Daniel turned on the 1000:100 boost in the servo board after a while, and this greatly helped the stability. At the best of times, the green arm power fluctuations were ~10%. At the worst of times, they were more like 50%, and the mode would hop between 00 and 01. We had mixed results with the dither alignment; it's not always working for both DOFs.
* We should use a directional coupler to check that we're at the peak frequency for the EOM.
Some observations: after reverting to the original sideband frequency, we had a hard time locking. The behaviour was similar to what we experienced in the past when we had a lot of alignment fluctuations: we would stay "locked" but switch between the 00 mode and a higher-order transverse mode without losing a step. In the past the transition was to a 10 mode, whereas yesterday it was to a second-order mode. The locking was better when we switched back again to the frequency that is 1 MHz off; it turned out that the sidebands were coincidentally set near the second-order transverse mode spacing. Using a frequency near nominal with the same tuning worked as well. However, it turned out the real problem was a lack of low-frequency gain. With the standard network compensation we just have a pole near 1.6 Hz; with the boost turned on, the lock is a lot more stable. This seems especially important during elevated wind.
After recabling for the ALS WFS, the link between the slow and fast controls stopped working. The newly assigned DAQ ADC channel has a large -5 V offset and seems broken. The offset is there even if nothing is connected to the AA chassis.
Changing the AA chassis didn't fix the problem, so it is probably the ADC. To minimize the disruption, we simply switched to a different channel for now.