The attachment shows the frequency control signal (IMC-F) and the transmitted intensity (IM4 trans) before and after turning on the HPO, with 2 W into the modecleaner and the interferometer unlocked.
Although Sheila's most recent DARM spectrum shows coherence with both frequency and intensity noise on the laser, I would hazard a guess that the frequency noise is not really the issue here, since it is only a factor of (at most) a few worse than before.
Keita, Evan
Here is a plot of transmission RIN spectra for the front-end, the HPO, the refcav, the ISS, and the IMC.
Most of the intensity noise transmitted through the IMC between 10 and 100 Hz is coherent with the HPO spectrum, not with the front-end spectrum. The front-end spectrum is also essentially unchanged from before the HPO turn-on (this is not shown in the attachment).
The HPO performance is only slightly worse than what was reported in the HPO paper (Winkelmann et al. 2011, fig. 12b). However, this RIN is only reduced by a factor of 10 by the time it gets through the IMC.
The ISS RIN spectra are as expected (see, e.g., Kwee et al. 2011, fig. 7), but these signals seem to be completely unrelated to the intensity noise that is actually transmitted to the downstream parts of the PSL and ISC systems. The out-of-loop diode reports a RIN that is better than 10^-6/Hz^(1/2), but obviously this isn't true for either the refcav or IMC signals. Note in particular the lack of coherence between the out-of-loop sensor and the IMC transmission. (These spectra use Keita's new antiwhitening filters.)
The refcav transmission spectrum is more or less identical to (and coherent with) the IMC transmission spectrum.
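For reference, RIN spectra like those discussed above can be computed from a photodiode power record. A minimal numpy sketch follows; the boxcar windowing and FFT-based PSD are simplifying assumptions for a quick look, not necessarily how the attached spectra were produced.

```python
import numpy as np

def rin_timeseries(power):
    """Convert a photodiode power record into relative intensity noise.

    RIN is the fractional power fluctuation dP/<P>; its amplitude
    spectral density (in 1/rtHz) is what RIN spectra plot.
    """
    p = np.asarray(power, dtype=float)
    mean = p.mean()
    return (p - mean) / mean

def rin_asd(power, fs):
    """One-sided amplitude spectral density of the RIN, in 1/sqrt(Hz).

    fs is the sample rate in Hz.  Uses a plain FFT with no window or
    averaging, which is fine for spotting a line but noisy otherwise.
    """
    rin = rin_timeseries(power)
    n = len(rin)
    spec = np.fft.rfft(rin)
    psd = 2.0 * np.abs(spec) ** 2 / (fs * n)  # one-sided PSD
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, np.sqrt(psd)
```

For example, a 1% sinusoidal power modulation at 10 Hz shows up as a peak at 10 Hz in the returned ASD.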
I did a set of tests with the guardian node. The codebase is in a state that should be ready for Jamie and me to set it up tomorrow on the guardian script machine. Going forward, things to do are:
* Update docstrings
* Install glue, gracedb, and grid credentials on the guardian machine
* Plan out how to run the gracedb process and get a robot certificate
* Do a series of injections with the guardian node on the guardian machine: test the full injection pathway, test killing an active injection, test reloading the schedule, test multiple injections in a row, etc.

Below I outline the tests I did.

How to do command line tests with the guardian daemon

The following tests can now be done on the command line at an LHO workstation:
* To test reading the schedule and finding the next injection: guardian INJ WAIT_FOR_NEXT_INJECT
* To test gracedb event creation: guardian INJ CREATE_GRACEDB_EVENT
* To test awg and inject a signal from the schedule into the detector: guardian INJ CREATE_AWG_STREAM INJECT_CBC_ACTIVE
* To test the schedule validation script:
PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian:${PYTHONPATH} python guardian_inj_schedule_validation.py --ifo H1 --schedule /opt/rtcds/userapps/release/cal/common/guardian/schedule/schedule_1148558052.txt --min-cadence 300

NOTE: You will need the glue and gracedb python packages to run some of these tests, and these packages are not system-installed on workstations in the control room. For gracedb upload testing you also need the grid credential tools, which are not on LHO workstations, and you need to make sure dev_mode is False.

Test injections

Injections from last night are in aLog 26749. Today I continued with some more development tests. These are constant-amplitude injections of 1e-26 with 1 second duration into H1:CAL-PINJX_TRANSIENT_EXC; the start times are:
* 1145554100
* 1145555100
* 1145555700
* 1145560262

(i) The call to awg works and the injection goes into INJ-PINJX_TRANSIENT_EXC.
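The min-cadence check done by the schedule validation script can be sketched in a few lines. This is a hedged illustration, not the actual guardian_inj_schedule_validation.py code; it assumes each non-comment schedule line begins with a GPS start time and ignores the rest of the line format.

```python
def validate_schedule(lines, min_cadence):
    """Check that scheduled injection times are increasing and at least
    min_cadence seconds apart.  Returns a list of error strings
    (empty list means the schedule passed).
    """
    errors = []
    times = []
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        try:
            times.append((lineno, float(line.split()[0])))
        except ValueError:
            errors.append("line %d: bad GPS time" % lineno)
    for (l1, t1), (l2, t2) in zip(times, times[1:]):
        if t2 <= t1:
            errors.append("line %d: times not increasing" % l2)
        elif t2 - t1 < min_cadence:
            errors.append("line %d: cadence %g s is below %g s"
                          % (l2, t2 - t1, min_cadence))
    return errors
```

With --min-cadence 300, two injections 100 s apart would produce exactly one cadence error.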
(ii) Injections are logged correctly and the metadata is propagating through the infrastructure to inform the searches. The hardware injection tests done with the guardian node can be seen on the detchar summary pages. The first three were not logged with an injection type (e.g. BURST), because in the initial tests I just wanted to exercise the awg module correctly. Thereafter the injections were flagged with a type in ODC, and this propagates to the low-latency frames for the online searches and to the segment database for the offline searches. See the attached plots for the ODC segments and the segment database segments.

(iii) Destroying a node with an open stream that has transmitted data to the front end does not perform the injection.

(iv) The gracedb upload functions have already been tested. Today I re-checked the functions; here is an example gracedb event that was uploaded: T235981. Adding messages to the event log on gracedb was also tested again; notice the "This is a test." message on the T235981 gracedb page.

(v) The schedule validation script was updated and tested.

Codebase developments

Some more changes:
* There is now a dev_mode in the code to run the tests mentioned in the section above. At the moment this does three things: (i) skips the check that the detector is locked, (ii) skips gracedb for now, until we get the robot certificate sorted out, and (iii) waits in the INJECT_CBC_ACTIVE state instead of the AWG_STREAM_OPEN_PREINJECT state, because we need to avoid jump transitions for the command line tests above.
* The schedule validation script works again (https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/scripts/guardian_inj_schedule_validation.py). One thing of note is that guardian does not allow subprocesses to be created by states, so the subprocess management that I had written will not work with guardian.
So right now, once the injection starts, the code waits for the injection to finish; this is just the implementation in the awg package (see awg.ArbitraryStream.close). It can only be killed by stopping the node.
I've also renamed the base module (INJ.py) to something less generic; it is now CAL_PINJX.py. See: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/CAL_PINJX.py So modify the examples in this aLog entry as appropriate, i.e. guardian INJ becomes guardian CAL_PINJX.
Chris B., Jamie R.

Started up the node with guardctrl start CAL_INJ and used guardmedm CAL_INJ to control the guardian node. Did a variety of tests with the hardware injection guardian node; these all passed:
* Tested killing an injection before the injection awg call is active, by requesting KILL_INJECT.
* Tested killing the injection during the awg.ArbitraryStream.close call (i.e. while the injection is in an active state), by requesting KILL_INJECT.
* Tested scheduling injections the minimum number of seconds apart, to make sure guardian picked the correct injection.
* An external alert happened while an injection was scheduled; the injection was aborted successfully, from AWG_STREAM_OPEN_PREINJECT to ABORT_INJECT_FOR_EXTTRIG. Commented out this check to continue working.
* Tested an out-of-order schedule file.
* Tested FAILURE_READ_WAVEFORM, e.g. the waveform file does not exist.
* Tested all injection states (INJECT_CBC_ACTIVE, INJECT_BURST_ACTIVE, INJECT_STOCHASTIC_ACTIVE, INJECT_DETCHAR_ACTIVE).
* Tested that an injection does not go into the detector if we turn off dev_mode, so that the node checks that the detector is locked.
* Injection start times, injection end times, and injection outcome values are all being set on the MEDM screen.

Added another failure mode: if the call to awg.ArbitraryStream.close is too close in time to the start of the injection, there is an error, so I added FAILURE_DURING_ACTIVE_INJECT. awg returns a generic AwgStreamError, so without some hacked parsing of the error message there is not much to differentiate why it failed during the function call.

None of the gracedb functionality was tested during this, since we still need to create a robot certificate.
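The test of "scheduling injections the minimum number of seconds apart to make sure guardian picked the correct injection" boils down to a small selection rule. A hedged sketch (the buffer_seconds setup margin is an assumed parameter, not the node's actual value), which also tolerates an out-of-order schedule:

```python
def next_injection(schedule_times, now, buffer_seconds=30):
    """Return the earliest scheduled GPS time that is still far enough
    in the future to set up (open the awg stream, create a GraceDB
    event, etc.), or None if nothing usable is left.

    schedule_times need not be sorted, matching the out-of-order
    schedule-file test described above.
    """
    upcoming = [t for t in schedule_times if t - now >= buffer_seconds]
    return min(upcoming) if upcoming else None
```

For example, with injections scheduled at GPS 100, 300, and 200 and the current time 150, the node should pick 200; if every remaining injection is within the setup margin, it should keep waiting.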
After doing a few more tests, I've started scheduling a long series of injections. The schedule file is here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/schedule/schedule_1148558052.txt
Injections were still going this morning from last night, as expected, every 400 seconds. Attached is an hour of last night's injections. I've also attached a plot zoomed in on the fast channel for one injection, to check the timing of the start of the injection. Looks good.
Chris B., Jamie R.

This aLog documents the work that has been done so far to get the guardian hardware injection node running at LHO. Documenting the work over the next few days should ease the installation at LLO, after we have sorted out everything at LHO. It also includes some tidbits about the development that has been done, since several members of the hardware injection subgroup wanted to be kept in the loop.

Installations

There are only a few things we need on the guardian script machines that were not already there. Things we have done:
(1) Updated the userapps SVN at /opt/rtcds/userapps/release/cal/common/guardian
(2) Checked that we can instantiate a guardian node with: guardctrl create INJ
(3) Installed awg on the guardian script machine

Things we have yet to install on the guardian machine:
* glue
* ligo-gracedb
* grid credentials

Codebase development

This afternoon was mostly spent implementing several new things in the codebase. I have attached a new graph of the node to this aLog, since there are a number of new states: new failure states, renamed active injection states (formerly called CBC, BURST, etc.), a renamed IDLE state, and a renamed GraceDB state. As always, the code lives in the SVN here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/

Some changes of note:
* Changed jump transitions in the successful injection pathway to edges. This changes how the node should be run: the model now is that the requested state should be INJECT_SUCCESS while running the node.
* The modules (e.g. inj_det, inj_types, etc.) have been moved to a new injtools subpackage.
* Added success/failure messages for GraceDB events after an injection is complete.
* Added guardian decorators for a few tasks that are often repeated, e.g. checking for external alerts.
* Process management for the awg call.
These changes have made the schedule validation script (https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/scripts/guardian_inj_schedule_validation.py) out of date, and it will need to be updated. Also, the GraceDB portions of the code have been commented out for now, since we are holding off until we have sorted out the grid credentials and how we will run the guardian process.

Tests

As I was developing, I did a couple of tests with the guardian node today: three hardware injections of constant amplitude (1e-26) into H1:CAL-PINJX_TRANSIENT_EXC, each with a duration of 1 second. The GPS start times are:
* 1145497100
* 1145497850
* 1145500600

These tests were mostly to check that the call to awg was working properly. The PINJX_HARDWARE filterbank has already been turned off (aLog 26748), so the signal will only appear in the PINJX_TRANSIENT and PINJX_HARDWARE channels. The attached plot shows the injection at 1145500600.
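The constant-amplitude test waveforms above are trivial to generate. A sketch, assuming a 16384 Hz excitation-channel sample rate (the actual rate used is not stated here):

```python
import numpy as np

def constant_waveform(amplitude=1e-26, duration=1.0, rate=16384):
    """Constant-amplitude test waveform like the 1e-26, 1 s injections
    described above.  rate is the assumed sample rate of the
    excitation channel, in Hz.
    """
    n = int(round(duration * rate))
    return np.full(n, amplitude)
```

Such a waveform exercises the awg call path without putting any astrophysically meaningful signal into the data.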
I have installed the following packages on the h1guardian0 machine:
These were installed from the production LSCSoft Debian wheezy archive, which should be fully compatible with this version of Ubuntu (12.04):
controls@h1guardian0:~$ cat /etc/apt/sources.list.d/lscsoft.list
deb http://software.ligo.org/lscsoft/debian wheezy contrib
deb-src http://software.ligo.org/lscsoft/debian wheezy contrib
controls@h1guardian0:~$
We'll be testing these installations today.
As for the awg installation, this was not actually a new install. Instead, the existing GDS installation in /ligo/apps was just made available in the guardian environment:
controls@h1guardian0:~$ grep gds /etc/guardian/local-env
. /ligo/apps/linux-x86_64/gds/etc/gds-user-env.sh
controls@h1guardian0:~$
That was sufficient to make the awg python bindings available to the guardian nodes.
Matt, Sheila
We locked and saw that we have 3 orders of magnitude or so too much noise at DC readout. It seems this is related to PSL problems: the attached screenshot shows coherence between the ISS second loop PDs and DARM up to 100 Hz, as well as coherence between frequency noise and DARM. The pink trace shows the coherence between intensity and frequency noise. With this noise we were able to power up to 10 Watts, but we saturated the DC PDs.
We looked a little bit at implementing a dither loop for SRM control. The alignment dither system currently would allow us to demodulate POP18, but POP90 has a much better signal, as you would expect. The IPC for adding this is already in the models, so I've added it to the matrix but didn't do the model restart yet (WP5840).
We then gave up on locking and went to the PSL racks to look at some signals at high frequencies. We saw a glitch that happens at a repetition rate of 37 kHz, with frequency content of nearly a MHz, which shows up in the laser intensity noise. Matt has a picture of this in the ISS PD and the refcav transmission. If we turn off the ISS this is still there (as expected, since it's above the loop bandwidth), but it is harder to see on top of the low frequency intensity noise.
When the ISS is off, the ISS PDs wander from rail to rail, and oscillate only when they are near the upper rail.
These pictures show what we saw at the PSL rack. The first shows the o'scope: channel 1 is the ISS first loop DCPDA (out of loop, but B looks the same at this frequency), and channel 2 shows the PMC TRANS. (The second and third pictures are there to document where we connected the cables.)
There are noise bursts in the PMC TRANS which oscillate at ~500kHz and repeat at 37kHz. (For offline data mining: the o'scope photo was taken at 18:55:56 local time.)
Before finding the problem shown in the previous comment, we noticed that the noise eater was oscillating. Sheila reset it and it stopped, but someone (Keita?) might want to look at the monitor channels to see how they behave when oscillating (and when not oscillating). We first noticed this on the FSS fastmon at 18:31 (first photo), saw it again on the PMC trans signal at 18:38 (second and third photos; a ~1 MHz triangle with ~15% modulation of the power), and fixed it shortly after that. By 18:55 (previous comment) it was not oscillating. (All times are local.)
A little more info on:
"When the ISS is off, the ISS PDs wander from rail to rail, and oscillate only when they are near the upper rail."
While looking at the ISS first loop PDs, we noticed that if the loop is open there are large ~1MHz noise bursts. Going to DC coupling and zooming out, it seems that the PD signals oscillate when they approach the upper rail at ~14V (see photo). This may indicate that the load resistors on the opamps involved are too small, and so the opamps become unstable when outputting large voltages... or maybe it is something else. In any case, the ISS first loop should not be operated with large PD voltages (currently 1.8V).
The message: things are not as bad as they seemed yesterday, but we still have a problem with frequency and intensity noise.
I was confused yesterday: the guardian did not make the transition to DC readout because it checks that the ISS is on before making the transition, so we were actually still locked on RF when I thought we were on DC readout. Tonight I turned the ISS back on and transitioned to DC readout without a problem. There is still a lot of coherence with the ISS and frequency noise. The IFO had no problem getting to 22 Watts, but we lost lock because of the HSTS coil driver switching (which I had moved to just after the BS coil driver switching, and have now moved back to just before it).
The first attached spectra (+coherences) were taken at 2 Watts, the second at 22 Watts.
The attached trend of the last two days shows that the noise eater was bad from about Apr/24/2016 1:31:50 to 1:52:50 UTC (that's 18:31:50 to 18:52:50 Pacific time).
According to this wiki entry https://lhocds.ligo-wa.caltech.edu/wiki/SYS_DIAG%20Guardian#PSL_Noise_Eater the noise eater is monitored by H1:PSL-MIS_NPRO_RRO_OUTPUT and its nominally good range is -5852 +/- 50.
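The nominal range quoted in the wiki turns directly into a monitoring check. A minimal sketch of the SYS_DIAG-style test on the noise eater monitor channel (the threshold logic here is an assumption based on the quoted numbers, not the actual SYS_DIAG code):

```python
def noise_eater_ok(rro_output, nominal=-5852.0, tolerance=50.0):
    """Flag a possible noise-eater oscillation from the monitor channel
    H1:PSL-MIS_NPRO_RRO_OUTPUT.

    The nominal value and tolerance come from the SYS_DIAG guardian
    wiki entry: good range is -5852 +/- 50 counts.
    """
    return abs(rro_output - nominal) <= tolerance
```

A value well outside the band (as during the Apr 24 event) would return False and could trigger a SYS_DIAG notification.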
Matt and I were curious about the ISS inner loop PD circuit design, so I made a quick LISO model. This essentially reproduces the analysis given in T1000634, just showing a little more detail about which components limit the noise (mostly the transimpedance and TFIN resistors, R6 and R2).
The design looks OK as far as the linear modeling is concerned -- and as long as the dynamic range constraint plotted in figure 5 of T1000634 is maintained when the second and third loops are engaged.
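The claim that the transimpedance resistor contributes to the noise floor can be sanity-checked by hand with Johnson and shot noise formulas. The sketch below uses an assumed 1 kohm resistance and 10 mA photocurrent purely for illustration; the actual R6/R2 values and operating photocurrent are in T1000634 and the LISO files, not reproduced here.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def johnson_current_noise(resistance, temperature=300.0):
    """Input-referred current noise of a resistor, in A/sqrt(Hz).

    For a transimpedance resistor this adds in quadrature with the
    photocurrent shot noise at the amplifier input.
    """
    return math.sqrt(4.0 * K_B * temperature / resistance)

def shot_noise_current(photocurrent):
    """Shot noise of a DC photocurrent, in A/sqrt(Hz)."""
    return math.sqrt(2.0 * E_CHARGE * photocurrent)
```

Comparing the two for a given resistor value and photocurrent shows which term dominates the inner loop PD noise budget at that operating point.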
Modeling files are in /ligo/home/christopher.wipf/Data/20160425_iss
Chris B., Matt E. Turned off the INJ-PCALX_HARDWARE filterbank output at 16:07 UTC. From today (April 23) to April 27, I plan to test the guardian hardware injection node. Turning off the output of INJ-PCALX_HARDWARE means that no signal will go into the detector.
TITLE: 04/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: After battling ISS oscillations and rotation stage issues all night, I was unable to get past INPUT_ALIGN in initial alignment. I think the IFO is tired of getting beat on this week.
LOG: See previous 2 aLogs from Sheila for details of most of the issues.
Today, many of us worked to relock H1. We've been able to get to DC readout, and had trouble powering up because of rotation stage problems we think we've solved, and as people have noted, there is something wrong with the ISS.
Here is a list of things that were small problems:
We spent some time trying to diagnose what was going on with the intensity noise of the laser.
Craig, Sheila, Jenne, Travis
There is still at least one problem with the rotation stage, that sometimes it goes the wrong way when it first starts moving.
The attached screenshot shows our first attempt at increasing the power since the vent. The rotation stage request and velocity are correctly set to 22 Watts and 10, respectively, before the rotation stage starts moving. When the rotation stage starts moving, you can see a drop in the input power (which happens faster than the normal rotation stage motion); the normalization (PSL-POWER_SCALE_OFFSET) is lowered by the guardian to adjust for this, but we lose lock anyway. After the lock is lost, the rotation stage proceeds at a reasonable pace to 20 Watts, and the normalization follows correctly.
This is not a new bug, we've had random locklosses like this for some time, but it is a bug that remains after last week's rotation stage fix.
[Travis, Craig, Jenne, Ross]
Somehow using the faster rotation stage velocity (RS_VELOCITY=100) makes things funny: when we try to change the laser power after using the faster velocity, the laser power dips before increasing. So we've changed the PSL power guardian so that the "fast" velocity is the same RS_VELOCITY=10 as the "slow" velocity. This seems to have fixed things.
It seems like the rotation stage still has the problem that the minimum-power angle drifts: tonight we have seen it change by 2 degrees after being rotated several times, and then change again after only being rotated once.
Changes as measured by OSEM values: last IMC lock before laser work to stable IMC lock today
              | April 4th | April 22 | diff (urad)
MC1 p         | -27       | -13.5    | +13.5
MC1 y         | -1044     | -1036    | +8
MC2 p         | 500       | 504      | +4
MC2 y         | -688      | -686     | +2
MC3 p         | -895      | -895     | 0
MC3 y         | -1027     | -1018    | +9
IM4 Trans p   | -0.425    | -0.490   | -0.065
IM4 Trans y   | 0.103     | 0.081    | -0.022
I worked on the IMs (IM1-4) this morning, and have verified that they are aligned to the April 4th values, so cannot account for the change in IM4 Trans.
The alignments of MC1-3 follow the alignment of the IMC input beam, so while the alignment changes for these optics are bigger than what we typically see, after the laser work they are not unexpected.
This morning I measured the AOM drive voltage as a function of "offset slider %", and also the amount of diffracted light both with and without the digital support loop. All measurements were taken with the HEPA fans running. With no diffracted light, the maximum output voltage on PDA was (-9.6,-9.3) V and on PDB was (-9.65,-9.4) V. No saturations were observed with the incident power at this level. Attached are the spectra and transfer function measured. The relative power noise is about a factor of 10 higher than it should be. In the spectrum is a peak at ~1 kHz, the source of which is unknown, but is probably related to the problems mentioned by Keita. Pushing the gain higher seems to increase the noise beyond 10 kHz. The noise at frequencies below 100 Hz is significantly higher than with the low power mode. At low frequencies the transfer function is not as clean as the measured result in the low power mode. I am not sure but this might be related to the linearisation of the AOM drive.
Attached are the plots for the AOM drive voltage and diffracted power as a function of the offset slider. With the current settings with the offset slider at 6%, we are diffracting ~4.5 W. The plotted and reported diffraction percentage on the MEDM screen needs to be recalibrated.
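Recalibrating the MEDM diffraction readout amounts to mapping the PD signal to watts and then taking a ratio against the total input power. A minimal sketch (both quantities assumed already converted to watts; the example numbers below are illustrative, not the measured PSL values):

```python
def diffraction_percent(diffracted_watts, total_watts):
    """Fraction of the input power diffracted by the AOM, in percent.

    Recalibrating the MEDM readout means first converting PD volts to
    watts; here both arguments are assumed to already be in watts.
    """
    if total_watts <= 0:
        raise ValueError("total power must be positive")
    return 100.0 * diffracted_watts / total_watts
```

For instance, 4.5 W diffracted out of an assumed 75 W incident would read 6%, which is the kind of consistency check the recalibration needs against the slider setting.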
The attached trend shows that while Peter was working this morning, analog PD outputs that are used for ISS (i.e. "filt" outputs that are monitored by PSL-ISS_PDA_OUT and PDB) were railing all the time.
Analog "DC" outputs might have been OK but they're not used for ISS.
There was about a 3 minute window where the PDs were railing less severely. Almost (but not quite) in the same window, the NPRO noise eater monitor was smaller than usual. But during O1, NEMON was always about 28. I don't know what this means.
During the relocking effort, for no apparent reason the ISS went crazy several times (attached, left), oscillating at about 1.6 kHz (attached, right). This is not unlike the 2.5 kHz oscillation reported in aLog 26666, but the frequency is different.
This is not the FSS, though the FSS is affected by it: the refcav was once knocked out of lock during the oscillation, but the oscillation in the ISS continued. It seems the FSS is affected simply because there is too much intensity noise (attached right: top, cyan; the right bottom panel shows the refcav transmission DC).
The HPO output itself seems to be fine (right top, brown trace), so this really looks like something that happens in the ISS.
Anyway, when this happens, do the following.
Another FYI.
Left is when the ISS was NOT oscillating. Bumpy structures at about 1 kHz and 4 kHz seem to come from the HPO (left top, brown). Below 1 kHz, the out-of-loop sensor (PDA) is not very coherent with the second loop 1-4 SUM. This shows nothing new, but there is a lot of sensing noise in the first loop sensors. The noise eater was at its usual value (28 or so).
Middle is when it was oscillating, and it is not that interesting: the oscillation was causing so much intensity noise that all the sensors agree with each other. Again the noise eater was at its usual value.
Right is also from when it was oscillating. The oscillation shows up in many things, and the non-intensity channel showing the highest coherence was the PMC mixer. This could just be that the PMC lock point was pushed off in one direction by the huge intensity noise, causing a large intensity-to-PDH coupling, or it could be that the PMC is locked a bit off, on the shoulder of the resonance.
With the main laser down, it looks like some work affecting the oplevs must have happened over the last week, because some signals sit at 0 and then recover to typical levels.
I zoomed in on the Y axis to the sections of data that have reasonable signals.
Attached:
I am closing this FAMIS task (4672) for Cheryl.