Unless otherwise noted, all powers were measured with the water-cooled 300 W Ophir power meter. The ISS was unlocked but still diffracting light; whilst this does distort the beam, it should be constant over the time scale of the measurements.

Using the locking photodiode over a 5 minute observation time:
locked: -260 mV to -262 mV
unlocked: -1.001 V to -1.004 V
Calculated visibility using the locking photodiode: (74.0 +/- 0.4)%

Using the power meter:
locked: 30.5 W to 30.6 W
unlocked: 121.6 W to 122.2 W
Calculated visibility using the power meter: (74.9 +/- 0.3)%

The power transmitted by the pre-modecleaner was 105.7 W to 106.1 W, which suggests a pre-modecleaner cavity transmission of (86.9 +/- 0.2)%.

The errors listed above reflect only the precision of the measurements, not their accuracy. Bearing in mind that the accuracy of the power meter is +/- 3%, the visibility and transmission measurements are on the edge of agreement. Taking accuracy into consideration, the visibility is (75 +/- 4)% and the transmission is (87 +/- 5)%. The two measurements are in general agreement but are neither very accurate nor very precise.
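For reference, a small sketch of the arithmetic. The uncertainty treatment here (half the quoted spread on each reading, propagated assuming independent errors) is an assumption and won't exactly reproduce the quoted error bars:

# Sketch: visibility and transmission from the locked/unlocked power readings above.
def mid_and_half_range(lo, hi):
    return (lo + hi) / 2.0, abs(hi - lo) / 2.0

def ratio_with_err(num, dnum, den, dden):
    """Propagate uncertainty for r = num/den assuming independent errors."""
    r = num / den
    dr = r * ((dnum / num) ** 2 + (dden / den) ** 2) ** 0.5
    return r, dr

# Power-meter readings (W) from the entry above.
p_lock, dp_lock = mid_and_half_range(30.5, 30.6)
p_unlock, dp_unlock = mid_and_half_range(121.6, 122.2)
p_trans, dp_trans = mid_and_half_range(105.7, 106.1)

r, dr = ratio_with_err(p_lock, dp_lock, p_unlock, dp_unlock)
print("visibility   = %.1f +/- %.1f %%" % (100 * (1 - r), 100 * dr))   # ~74.9%

t, dt = ratio_with_err(p_trans, dp_trans, p_unlock, dp_unlock)
print("transmission = %.1f +/- %.1f %%" % (100 * t, 100 * dt))          # ~86.9%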
Turning the ETMY ring heater on 0 --> 0.55 W top, 0 --> 0.55 W bottom at 7:25 UTC.
This is to revert alog 29702, bringing us back to a stronger optical mode overlap with the 47kHz ETMY mechanical mode so we can test the effectiveness of ESD damping at 50 W.
TITLE: 10/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Struggled to lock a few times but eventually made it to NLN. Noise hunting continues on the commissioning side.
Sheila Kiwamu Jenne Daniel
We measured the OLG of the PMC loop with the interferometer unlocked and saw that the UGF is around 500 Hz, while it is supposed to be at around 5 kHz. Converting the measured OLG to closed-loop suppression, we predict that this loop should have gain peaking approximately around the frequency of our lump in DARM (second attachment). We tried increasing and decreasing the gain by 6 dB and didn't see much of a change in DARM.
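As an illustration of that conversion (a toy 1/f loop with an assumed extra phase lag, not the measured OLG):

# Toy model: convert an open-loop gain G(f) to closed-loop suppression 1/(1+G)
# to see where a ~500 Hz UGF loop with little phase margin develops gain peaking.
import numpy as np

f = np.logspace(1, 4, 2000)                      # 10 Hz to 10 kHz
ugf = 500.0                                      # approximate measured UGF
tau = 4e-4                                       # assumed extra phase lag (delay), illustrative
G = (ugf / f) * np.exp(-2j * np.pi * f * tau)    # 1/f magnitude with delay phase

cl = 1.0 / (1.0 + G)                             # what an in-loop sensor sees
i = np.argmax(np.abs(cl))
print("peak closed-loop gain %.1f at %.0f Hz" % (np.abs(cl[i]), f[i]))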
However, a driven measurement using the newly amplified HV mon as a readback predicts that this noise is about a factor of 2 below DARM in our lump. The third and fourth attachments are the same noise injections that Jenne and I posted on Monday for MICH, SRCL and PZT jitter, with projections based on the PMC PZT HV mon. Kiwamu found an alog from April (26538) indicating that the gain should be set to 30 dB (it has been at 16 dB for the last several months). The third attachment shows the noise projection with the PMC gain at 16 dB, while the fourth one shows 30 dB. The fifth screenshot shows the difference in the DARM spectrum with the increased gain.
People are still investigating the coupling mechanism. We think that intensity noise (which we think was the explanation for the similar noise at LLO in 2014, 16186) is ruled out by the intensity noise injection, although it is interesting to note that the spectrum of this PMC HV lines up fairly well with the ISS control signal.
Based on various signals observed while the PMC HV was excited, we conclude that this is not a coupling through the intensity or frequency of the light. We don't know how the HV noise couples to DARM.
In the attached screenshot, the right two panels show various signals with and without a broadband excitation in the HV. The PMC control gain was at 30 dB throughout the measurements. The upper right panel shows an increase in the PDA spectrum (which has been used as the sensor for the inner loop), indicating that the ISS somehow witnesses the increase in the HV noise. However, the second loop sensors don't really show an increase in their noise level below 1 kHz. This means that the RIN at the OMC DCPDs should be at 1e-10 RIN/sqrtHz, which is a factor of 10 lower than shot noise, because our RIN-to-RIN coupling from the interferometer input to the OMC DCPDs is roughly -40 dB. So this does not look like an intensity noise coupling.
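For scale, a rough projection with placeholder numbers (the input RIN and the DCPD photocurrent below are illustrative assumptions, not measurements; the -40 dB coupling is the one quoted above):

# Project the ISS second-loop RIN to the OMC DCPDs through the ~-40 dB coupling
# and compare with the DCPD shot-noise-equivalent RIN.
import math

rin_input = 1e-8          # assumed second-loop RIN [1/rtHz], illustrative placeholder
coupling_db = -40.0       # input RIN to DCPD RIN coupling quoted above
i_dc = 40e-3              # assumed total DCPD photocurrent [A]
e = 1.602e-19             # electron charge [C]

rin_dcpd = rin_input * 10 ** (coupling_db / 20.0)
rin_shot = math.sqrt(2 * e / i_dc)

print("projected DCPD RIN: %.1e /rtHz" % rin_dcpd)
print("shot-noise RIN:     %.1e /rtHz" % rin_shot)
print("projection is %.0fx below shot noise with these assumptions" % (rin_shot / rin_dcpd))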
As for frequency noise, the situation seems similar to intensity. The CARM loop sees a higher noise level in frequency according to REFL_CTRL_OUT in the lower right panel. However, POP 9I, which is an out-of-loop frequency noise sensor, did not show any elevated noise at all below 1 kHz. Based on the coupling of POP 9I measured the other day (30610), POP 9I would have to show a noise level higher by a factor of a few in order for frequency noise to explain the increased noise level in DARM. So it does not seem to be frequency noise coupling either.
I compared nominal PMC locking gain (red, green) and 6dB lower (blue, brown).
Due to gain peaking at 240 Hz, the feedback signal doesn't decrease below 400 Hz.
Anyway, above 400 Hz I do see some reduction in DARM when the gain was lower, but the reduction seems to be mainly around the peaks of the three bumps (440, 580 and 700 Hz). For example, there seems to be no reduction of noise at 520 Hz even though the feedback signal to the PMC PZT was reduced by a factor of 4.
We can also see that DARM got worse at 240 Hz when the gain was reduced, due to the gain peaking.
Changing the PMC filter to allow us to lock at a much lower UGF would help, but the featureless bump might stay. The second plot is the same as the first one, but the PMC PZT (dashed) is overlaid on the DCPD (solid). The PMC PZT is arbitrarily scaled so that the DCPD with high-bandwidth PMC lock (red, green) looks like the sum of the DCPD with low-bandwidth PMC lock (blue, brown) and the PZT with high-bandwidth PMC lock (pink, orange).
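In case it's useful, here is a sketch of that scaling, assuming the "sum" means a quadrature sum of amplitude spectral densities (the spectra below are fake placeholders just to exercise the fit):

# Find the arbitrary PZT scale factor k that makes
# sqrt(asd_low**2 + (k*asd_pzt)**2) best match asd_high over a band.
import numpy as np

def fit_pzt_scale(asd_high, asd_low, asd_pzt):
    """Least-squares fit of k**2 in asd_high**2 ~ asd_low**2 + k**2 * asd_pzt**2."""
    y = asd_high**2 - asd_low**2
    k2 = np.sum(y * asd_pzt**2) / np.sum(asd_pzt**4)
    return np.sqrt(max(k2, 0.0))

# Illustrative fake spectra just to exercise the function.
f = np.linspace(100, 1000, 901)
asd_low = 1e-10 * np.ones_like(f)
asd_pzt = 1e-3 * np.exp(-((f - 600) / 100) ** 2)       # a bump near 600 Hz
asd_high = np.sqrt(asd_low**2 + (2e-7 * asd_pzt)**2)   # "true" scale of 2e-7
print(fit_pzt_scale(asd_high, asd_low, asd_pzt))       # recovers ~2e-7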
Last night the water level was at 7.7 cm so I didn't add any water. Today I went out and it had fallen to 6.9 cm. That's a drop of less than 1 cm! Will continue to keep an eye on it.
1. Noise
When the DBB HPO shutter was closed, the DARM noise decreased. I did open/close/open/close and this was quite repeatable (first attachment; red/green are shutter closed, blue/brown are open). Some of the jitter-ish peaks subsided and DARM looked much smoother when the shutter was closed.
These days the DBB shutter is open most of the time (it seems to have been in local mode most of the time for jitter FF). Now we're in a half-working remote mode where the MEDM screen says "local" all the time but we can close/open the shutter by pressing the buttons.
This should be some kind of scattering problem, e.g. optical feedback into the HPO and/or even into the front end.
Inside the DBB, the reflection from the DBBPMC is received by photodiodes, and there's one transmission received by a CCD camera, but the DBBPMC was not resonant. There's a thing called a "TFP low power attenuator" that seems to attenuate the power going into the DBBPMC, which sits between the DBB breadboard and the DBB HPO shutter.
Anyway, I wonder how many other significant scattering sources there are on the PSL table. There are other TFP low power attenuators as well as thermal-type power meters which might receive some non-negligible power. I don't know which ones are essential, but I'd like to temporarily block the non-essential ones using black glass.
2. Power
The stupid thing about this exercise was that the PSL power increased by about 3 W when closing the DBB shutter (second attachment; bottom left is the PSL power, middle left is the light the DBB REFL diode receives).
Even if you think the scatter causes some problem for the intensity, it should be taken care of by the ISS.
It turns out that this doesn't have anything to do with the scatter; it's probably shoddy electronics (design or implementation).
The 1st loop PD reading actually changes even though the 1st loop is DC-coupled. The REF signal is not changing. TRANSFER1_A is just the DCPD scaled with the reference voltage added, i.e. PDA_CALI_DC/5 + REF.
The "error signal" of the 1st loop is actually TRANSFER1_B (CH10), which is the sum of TRANSFER1_A and TRANSFER2_B.
TRANSFER2_B is the second loop output as measured by the 1st loop board, and this should be proportional to the second loop output measured by the second loop board (CH13). But the second loop was AC-coupled during this measurement.
If you look at all these signals, it's clear that the analog signal downstream of the second loop AC coupling point is pulled by a tiny amount by the opening/closing of the DBB shutter (because of the current the driver has to supply?), amplified by the boost stage of the second loop, and injected into the first loop, and the first loop dutifully responds by changing the power.
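To make the chain concrete, here is a toy model of the relations described above (the channel names follow this entry; the reference voltage, boost gain and injected offset are made-up illustrative numbers):

# A tiny offset entering after the second-loop AC-coupling point is boosted,
# summed into the first-loop error signal, and the DC-coupled first loop
# re-adjusts the power to null it.
def first_loop_error(pda_cali_dc, ref, transfer2_b):
    transfer1_a = pda_cali_dc / 5.0 + ref        # DCPD scaled plus reference voltage
    transfer1_b = transfer1_a + transfer2_b      # summed error signal of the 1st loop
    return transfer1_b

ref = -2.0                    # illustrative reference voltage [V]
boost_gain = 100.0            # assumed second-loop boost gain
offset_from_shutter = 1e-4    # assumed pull from opening/closing the DBB shutter [V]

# With no spurious offset, the loop servos PDA_CALI_DC/5 to -REF (error = 0):
pda_nominal = -5.0 * ref
# With the offset boosted into TRANSFER2_B, zero error now requires a different power:
transfer2_b = boost_gain * offset_from_shutter
pda_shifted = -5.0 * (ref + transfer2_b)

print("fractional power change: %.2e" % ((pda_shifted - pda_nominal) / pda_nominal))
print(first_loop_error(pda_shifted, ref, transfer2_b))   # ~0: loop has re-nulled its error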
This is not a huge deal as long as we always close the shutter, but it's disappointing.
Added 150ml to the crystal chiller.
Diode chiller didn't complain. No water added.
The CW injections have been restarted remotely (after checking with the control room). I discovered that because of a sign convention change in actuation functions between S6 and O1, the CW injections with explicit actuation correction were being injected with a default sign flip. That sign flip has now been removed. In addition, the amplitudes of pulsar injections 0, 1, 2, 4, 7, 9 and 14 have now been doubled over what they were in O1. (This amplitude change was supposed to take effect several months ago, but I had neglected to change a symlink to make it so.) In detail, the above amplitude changes were accomplished by changing the symlink RELEASE pointer in the hinj account from O2test to O2_H1_test2 (it was supposed to be O2_H1_test1 from early summer until now.) The removal of the sign flip was accomplished by adding an argument --actuationScale=1.0 to the call to lalapps_Makefakedata_v4 for each of the 15 injections. Previously, the default value for actuationScale was -1.0, a vestige of iLIGO sign conventions.
Correction: 6 of the 7 injection amplitudes listed above were indeed doubled, but the amplitude of the highest-frequency signal (pulsar 14 at 1991 Hz) was tripled. The attached spectra of the excitation channel from last night and tonight, 1 sidereal day apart, show the increases of amplitude for all injections above 200 Hz.
I tried for a while today to go back toward our July 16th alignments, as suggested by Sheila in alog 30648. I put offsets in POP_A and the SOFT loops to try to make the witnesses/oplevs go back toward their July 16th values.
I only made the DARM spectrum worse, not better. In the attached screenshot, green is the spectrum before I started doing anything, pink is after my alignment work, and red is after taking the offsets out again.
I also attach a screenshot showing that the buildups and the recycling gain all deteriorated while trying to go back to July's alignment. To be fair, I was not able to get all the optics' witnesses back to their July values, so it's still not really the same alignment, but I think it's closer than what we usually run with. Unfortunately, I don't really think that this is a fruitful direction to continue to pursue, especially in light of what has just been discovered about the noise improvement when we close the DBB.
TITLE: 10/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY: Locking has been better today, but ITMY bounce modes
16:00 Kyle to EX
2:30 pm local: Took 3 min. 30 sec. to overfill CP3 with the bypass LLCV 1/2 turn open. The newly installed thermocouples in the exhaust pipe (a few feet up the vertical run) responded well to LN2. On Friday let's overfill by doubling the LLCV setting (to ~34% open) and see how long it takes for LN2 to trickle out. Meanwhile I've increased the LLCV from 16% to 17% open. 3600 second trend attached. As soon as I opened the bypass valve I saw a (small) dip in temperature.
Ideal data for control loop input - nice!
Sheila, Terra, Jeff K, Patrick, Nutsinee
Tonight we continued with the PRMI locking difficulties of yesterday. Nutsinee managed to lock PRMI by lowering the MICH gain and commenting the boost and offloading out of the guardian.
Patrick tracked down some differences between the "good" PRMI time that Jenne mentioned (Oct 17 16:03 UTC) and now: there was a filter missing in the BS top mass offloading, and two whitening stages (with the anti-whitening) were engaged in the middle of the lock yesterday afternoon. We don't know why this happened, but just undoing it actually makes it impossible to lock at all.
To match the PRCL loop shape to a reference, I had to add 6 dB of gain (PRCL digital gain of 16 rather than 8). However, it looks like we are still missing some kind of boost in PRCL (1st screenshot). The second screenshot shows the MICH loop measured today on the left, before and after the PRCL fix, and an old measurement on the right.
We have commented out the boost and offloading of MICH in the guardian for PRMI, and Nutsinee created new prmi_mich_gain_als and prmi_prcl_gain_als parameters in lscparams. We have set them to 0.7 and 16 for now, although the nominal values should be 1.4 and 8. You will have to reduce the MICH gain by hand to get it to lock.
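For reference, an illustrative sketch of what the new entries in lscparams might look like (the real file lives in the guardian userapps; only the two values are from this entry):

# lscparams (sketch)
prmi_mich_gain_als = 0.7   # reduced MICH gain that let PRMI lock tonight (nominal 1.4)
prmi_prcl_gain_als = 16    # PRCL digital gain, 6 dB above the nominal 8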
Could whoever did measurements that required changing the REFLAIR whitening gain please triple-check that whatever they did is not causing us problems in the morning?
I logged in and checked the Beckhoff machines for errors. All of the terminals are in OP. There is a CRC on h1ecatc1 EndXLink R5_6 (not sure what this means). The only other thing I noticed is that the Send Frames diagnostic is very high and increasing on all three machines (also not sure what this means). Attached is a screenshot of h1ecatc1.
Jim W, Kiwamu
As Sheila suspected, the difficulty turned out to be due to wrong whitening settings on REFLAIR RF45, which we had changed this past Monday during the frequency noise study (30610). We found it had two whitening stages engaged with a gain of 12 dB, whereas according to the trend it should have had no whitening stages and 0 dB gain. So we set it back to no whitening stages with 0 dB gain. This apparently improved the situation -- we are now able to lock DRMI reliably and proceed with the rest of the full lock sequence.
The SDF is updated accordingly.
Evan G., Chris B., Rick S.

Summary: We started the INJ_TRANS guardian node, but due to a time zone issue in the gpstime python module, we couldn't schedule injections to verify the code was doing the right thing. We will come back to make the final tests once the time zone issue is sorted out.

Details: To do this test, we logged into the guardian machine and updated the guardian SVN to get the latest and greatest updates from Chris. Then, because the guardian node for INJ_TRANS was not running, we started it with:

$ guardctrl create INJ_TRANS; guardctrl start INJ_TRANS

We added a test injection to the new schedule file, set an injection time in the near future, and verified it with the following:

$ PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian:${PYTHONPATH}
$ CAL_USER_APPS=/opt/rtcds/userapps/release/cal/common
$ python ${CAL_USER_APPS}/scripts/guardian_inj_schedule_validation.py --schedule ${CAL_USER_APPS}/guardian/schedule/schedule_1160692574.txt --min-cadence 300 --ifos H1

The state of the guardian never changed as it should have. After some debugging, we found that the difference between the GPS time the node thinks it is and the GPS time it really is, is about 7 hours. This is likely because we got the following warning when starting the guardian node:

/ligo/apps/linux-x86_64/gpstime/lib/python2.7/site-packages/gpstime-0.1.2-py2.7.egg/gpstime/__init__.py:220: RuntimeWarning: GPS converstion requires timezone info. Assuming local time... RuntimeWarning

Chris is going to investigate how to set the time zone info for the GPS module before we try again to fully test the infrastructure.
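A minimal standard-library sketch (not the gpstime package API) of why the naive-local-time assumption produces the observed ~7 hour offset at LHO (PDT = UTC-7); the schedule time and the 17 s leap-second offset below are assumptions for illustration:

from datetime import datetime, timezone, timedelta

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 17  # leap-second offset, valid in late 2016 (assumption of this sketch)

def to_gps(dt):
    """GPS seconds from a timezone-aware datetime."""
    if dt.tzinfo is None:
        raise ValueError("timezone info required for GPS conversion")
    return (dt - GPS_EPOCH).total_seconds() + GPS_MINUS_UTC

# Intended schedule time, written as an aware UTC datetime (illustrative value):
t_utc = datetime(2016, 10, 13, 1, 16, tzinfo=timezone.utc)
# The same wall-clock time misread as Pacific local time (UTC-7 in October):
t_misread = datetime(2016, 10, 13, 1, 16, tzinfo=timezone(timedelta(hours=-7)))

print(to_gps(t_utc))
print((to_gps(t_misread) - to_gps(t_utc)) / 3600.0)   # -> 7.0 hours, the observed offset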
As mentioned in alog 30663, we have a slightly new infrastructure for balancing the OMC DCPDs. I used the method that JeffK et al. used in alog 29856 to measure the imbalance between A and B.
I'm not sure why I'm getting a different value for the imbalance than they did (I get that the ratio of B/A = 0.969 rather than their 0.958). Perhaps we should look at how this number is actually changing over time, if it is.
Anyhow, the balancing matrix is now populated. See screenshot for values.
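For reference, a minimal sketch of one way a balancing matrix can be built from the measured ratio. This is one simple convention chosen for illustration, not necessarily the convention used in the front end:

# Given a measured imbalance r = B/A, form gain-corrected SUM and NULL streams.
import numpy as np

r = 0.969                        # measured ratio B/A from the entry above
balance = np.array([
    [0.5, 0.5 / r],              # SUM  = (A + B/r) / 2
    [1.0, -1.0 / r],             # NULL = A - B/r  (common signal cancels)
])

# Quick check with a common fluctuation x seen as A = x, B = r*x:
x = 1.0
a, b = x, r * x
print(balance @ np.array([a, b]))   # -> [x, 0]: SUM recovers x, NULL nulls it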
Tagging CAL.
Don't have remote monitoring and will need to enter X-end VEA to make measurements sometime this afternoon.
~1320 hrs. local -> Measured zone temps and made small adjustments
The first attached screenshot is a series of DCPD spectra from the last several months. When we first turned on the HPO, the noise in DARM did not get any worse; it stayed similar to O1 until July 16th. The lump from 200 Hz to 1 kHz started to appear in late July, when we started a series of changes in alignment and TCS to keep us stable with a decent recycling gain at 50 W.
PMC
Last night, Gabriele pointed out that the noise in DARM is coherent with the PMC HV. The HV readback had been mostly white noise (probably ADC noise) until the last few weeks, but has been getting noisier, so that some of the jitter peaks now show up in it (second attached screenshot; colors correspond to the dates in the DCPD spectrum legend). This may be related to the problem described in LLO alogs 16186 and 15986. The PMC transmission has been degrading since July, which could be a symptom of misalignment. Since July, the REFL power has nearly doubled from 22 to 38 W, while the transmission has dropped 28%. The PMC sum has also dropped by 4%, roughly consistent with the 3% drop in the power out of the laser. Peter and Jason are planning on realigning the PMC in the morning, so it will be interesting to see if we see any difference in the HV readback.
TCS + PRC alignment:
The other two main changes we have had in this time are changes in the alignment through the PRC and changes to TCS. These things were done to improve our recycling gain and stability, without watching the noise impact carefully. In July we were using no offsets in the POP A QPDs. We started changing the offsets on August 1st, after the lumpy noise first appeared around July 22nd. We have continued to change them every few weeks since then, but generally moving in the same direction.
The only TCS change that directly corresponds to the dates when our noise got worse was the reduction of the ETMY RH from 0.75 W each on July 22nd; the other main TCS changes happened September 10th. It would be nice to undo some of these changes before turning off the HPO, even if it means reducing the power to be stable.
The HVMon signal (PMC length) shows a peak at about 600 Hz/rtHz. We don't think this is an indication of frequency noise from the laser, but rather an error point offset picked up in the PMC PDH signal. As such, this is length noise added to the PMC and suppressed by the cavity storage time. Assuming this suppression factor is about 1/10000, we would still get ~100 mHz/rtHz modulated onto the laser frequency. That seems like a lot.
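Back-of-envelope version of that estimate (the suppression factor is the assumption stated above, not an independently calibrated number):

hvmon_peak = 600.0          # HVMon peak, expressed as PMC length noise [Hz/rtHz]
cavity_suppression = 1e-4   # assumed suppression from the PMC storage time (~1/10000)

laser_freq_noise = hvmon_peak * cavity_suppression
print("~%.0f mHz/rtHz modulated onto the laser frequency" % (laser_freq_noise * 1e3))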
The HVMon measures 1/50 of the voltage sent to the PZT. With no whitening this is not very sensitive.
After the PSL team adjusted the PMC alignment the ~400Hz peaks are no longer visible in the HVMon spectrum. The coherence is gone as well—except for the 1kHz peak.
About the PMC:
The 1st screenshot shows the small improvement in DARM we got after the PMC realignment. While the coherence with the PMC HV may be gone, it might just be that the PMC HV signal is now buried in the noise of the ADC. At a lockloss I went to the floor and measured the HV mon, then plugged it into one of the SR560 preamps (AC coupled, 10 Hz high pass, gain of 100) and sent the output into H1:LSC-EXTRA_AI_1. We still have high coherence between this channel and DARM (last attachment).
Also, the PMC realignment this morning did decrease the reflected power, but the transmitted power also dropped.
          | refl (W) | trans (W) | sum (W) | laser power out (W)
July      | 20       | 126       | 157     | 174
Yesterday | 35       | 103       | 138     | 169
Today     | 27       | 100       | 126     | 169
About turning the HPO on not adding noise:
Kiwamu pointed out that the uncalibrated comparison above, showing that the noise did not get worse when the HPO came on, was not as convincing as it should have been. This morning he and I used the Pcal line height to scale these uncalibrated spectra to something that should be proportional to meters, although we did not worry about frequency-dependent calibration (4th screenshot). From this you can see that the noise in July was very close to what it was in March before the HPO came on, but there is some stuff in the bucket that is a little worse.
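A sketch of the scaling trick (the line frequency, line amplitude, channel data and spectra below are placeholders; the real values come from the Pcal configuration at the time, and frequency-dependent calibration is ignored as above):

# Rescale an uncalibrated DCPD ASD so the Pcal line has its known displacement,
# allowing spectra from different epochs to be compared on a common scale.
import numpy as np

def scale_by_pcal_line(freqs, asd_counts, line_freq, line_disp_m, df=1.0):
    """Return the ASD rescaled so the Pcal line matches its known displacement."""
    band = (freqs > line_freq - df) & (freqs < line_freq + df)
    line_height_counts = asd_counts[band].max()
    return asd_counts * (line_disp_m / line_height_counts)

# Illustrative use with fake data.
freqs = np.linspace(10, 2000, 10000)
asd = 1e-3 / freqs + 1e-5
asd[np.argmin(np.abs(freqs - 331.9))] += 0.5        # fake Pcal line near 331.9 Hz
scaled = scale_by_pcal_line(freqs, asd, 331.9, 1e-17)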
The point is made best by the last attached screenshot, which shows nearly identical noise in the last good lock I could find before the HPO came on and the first decent lock after it came on. Pcal was not working at this time, so we can't use it to verify the calibration, but the input powers were similar (20 W and 24 W), the DCPD currents were both 20 mA, and the DCPD whitening was on in both cases. (The decimation filters were changed around the same time that the HPO came on, which accounts for the difference at high frequencies.)
Regarding power available to the PMC, I know this is obvious, but another thing we have to consider is the ISS. Since the ISS AOM is before the PMC, it clearly also affects the amount of power available to the PMC. Peter K. can correct me if I am wrong, but it is my understanding that this happens primarily in 2 ways:
On 2016-9-21, for reasons unknown to me, the ISS control offset was changed from ~4.3 to 20. This means we are driving the ISS AOM much harder than we were previously. This in turn changes the beam profile, which affects the PMC mode matching and lowers the cavity visibility. This is likely why, even though we have had only a 5 W decrease in laser power since July, the total power into and the power transmitted by the PMC are down and the power reflected by the PMC has increased, and why we cannot return to the July PMC powers Sheila listed in her table in the above comment simply by tweaking the beam alignment into the PMC. I have attached a 120-day minute-trend of the ISS control offset (H1:PSL-ISS_CTRL_OFFSET) that shows the changes in the ISS control offset value since 2016-6-22, including the 2016-9-21 change. There are of course other reasons why the control offset changed (as can be seen in the attachment, the offset was changed several times over the last 4 months); the one on 9-21 just really stuck out.
Is there a reason why the control offset was changed so drastically? Something to do with the new ISS outer loop electronics?
Work Permit Number: 6252
The DMT calibration has been updated to gstlal-calibration 1.0.6-2.el7. Because of dependencies, GDS was also updated to gds-2.17.10-2. John Zweizig has restarted the DMT monitors.
Tagging CAL.