Reports until 17:30, Wednesday 19 October 2016
H1 AOS
jenne.driggers@LIGO.ORG - posted 17:30, Wednesday 19 October 2016 (30678)
Attempt to go back toward July alignments

I tried for a while today to go back toward our July 16th alignments, as suggested by Sheila in alog 30648.  I put offsets in POP_A and the SOFT loops to try to make the witnesses/oplevs go back toward their July 16th values. 

I only made the DARM spectrum worse, not better.  In the attached screenshot, green is the spectrum before I started doing anything, pink is after my alignment work, and red is after taking the offsets out again. 

I also attach a screenshot showing that the buildups and the recycling gain all deteriorated while trying to go back to July's alignment.  To be fair, I was not able to get all optics' witnesses back to their July values, so it's still not really the same alignment, but I think it's closer than what we usually run with.  Unfortunately, I don't think this is a fruitful direction to continue to pursue, especially in light of what has just been discovered about the noise improvement when we close the DBB. 

Images attached to this report
H1 General
jim.warner@LIGO.ORG - posted 16:06, Wednesday 19 October 2016 (30676)
Day Shift Summary

TITLE: 10/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition

SHIFT SUMMARY: Locking has been better today, but ITMY bounce modes

LOG:
16:00 Kyle to EX
16:00 Pcal crew to EY
16:00 JeffB to both ends
20:30 Kyle to EX
21:00 Chandra to MY
LHO VE
chandra.romel@LIGO.ORG - posted 14:49, Wednesday 19 October 2016 - last comment - 15:40, Wednesday 19 October 2016(30670)
CP3 overfill
2:30 pm local

Took 3 min. 30 sec. to overfill CP3 with 1/2 turn open on bypass LLCV. Newly installed thermocouples in exhaust pipe (a few feet up the vertical run) responded well to LN2. On Friday let's overfill by doubling LLCV setting (to ~34% open) and see how long it takes for LN2 to trickle out. Meanwhile I've increased LLCV from 16% to 17% open.

3600 second trend attached. As soon as I opened the bypass valve I saw a small dip in temperature. 

Images attached to this report
Comments related to this report
kyle.ryan@LIGO.ORG - 15:40, Wednesday 19 October 2016 (30673)
Ideal data for control loop input - nice!
H1 ISC
sheila.dwyer@LIGO.ORG - posted 14:08, Wednesday 19 October 2016 - last comment - 14:08, Wednesday 19 October 2016(30649)
What happened to REFLAIR on monday afternoon?

Sheila, Terra, Jeff K, Patrick, Nutsinee

Tonight we continued to struggle with the PRMI locking difficulties of yesterday.  Nutsinee managed to lock PRMI by lowering the MICH gain and commenting the boost and offloading out of the guardian. 

Patrick tracked down some differences between the "good" PRMI time that Jenne mentioned (Oct 17 16:03 UTC) and now: there was a filter missing in the BS top mass offloading, and two whitening stages (with the anti-whitening) were engaged in the middle of the lock yesterday afternoon.  We don't know why this happened, but just undoing it actually makes it impossible to lock at all. 

To match the PRCL loop shape to a reference, I had to add 6 dB of gain (PRCL digital gain of 16 rather than 8).   However, it looks like we are still missing some kind of boost in PRCL (1st screenshot).  The second screenshot shows the MICH loop measured today on the left, before and after the PRCL fix, and an old measurement on the right. 

We have commented out the boost and offloading of MICH in the guardian for PRMI, and Nutsinee created new prmi_mich_gain_als and prmi_prcl_gain_als parameters in lscparams; we have set them to 0.7 and 16 for now, although the nominal values should be 1.4 and 8.  You will have to reduce the MICH gain by hand to get it to lock. 
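For reference, a hypothetical sketch of what the new lscparams entries might look like (the actual lscparams.py layout may differ):

# Hypothetical sketch only -- the real lscparams.py structure may differ.
prmi_mich_gain_als = 0.7   # temporary value; nominal is 1.4 (reduce MICH gain by hand to lock)
prmi_prcl_gain_als = 16    # temporary value; nominal is 8 (matches the +6 dB PRCL gain change)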

Could whoever did the measurements that required changing the REFLAIR whitening gain please triple-check that whatever they did is not causing us problems in the morning?

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 05:58, Wednesday 19 October 2016 (30650)
I logged in and checked the Beckhoff machines for errors. All of the terminals are in OP. There is a CRC on h1ecatc1 EndXLink R5_6 (not sure what this means). The only other thing I noticed is that the Send Frames diagnostic is very high and increasing on all three machines (also not sure what this means).

Attached is a screenshot of h1ecatc1.
Images attached to this comment
kiwamu.izumi@LIGO.ORG - 10:51, Wednesday 19 October 2016 (30658)

Jim W, Kiwamu

As Sheila suspected, the difficulty turned out to be due to wrong whitening settings on REFLAIR RF45, which we had changed this past Monday during the frequency noise study (30610). We found it had two whitening stages engaged with a gain of 12 dB, whereas according to the trend it should have had no whitening stages and 0 dB gain. So we set it back to no whitening stages with 0 dB gain. This apparently improved the situation -- we are now able to lock DRMI reliably and proceed with the rest of the full lock sequence.

The SDF is updated accordingly.

H1 INJ (INJ)
evan.goetz@LIGO.ORG - posted 12:48, Wednesday 19 October 2016 (30669)
Half test of hardware injection machinery
Evan G., Chris B., Rick S.

Summary:
We started the INJ_TRANS guardian node, but due to a time zone issue in the gpstime python module, we couldn't schedule injections to verify the code was doing the right thing. We will come back to make the final tests once the time zone issue is sorted out. 

Details:
To do this test, we logged into the guardian machine and updated the guardian SVN to get the latest and greatest updates from Chris.

Then, because the guardian node for INJ_TRANS was not running, we started it by
$ guardctrl create INJ_TRANS; guardctrl start INJ_TRANS

We added a test injection to the new schedule file, set an injection time in the near future, and verified it by the following:
$ PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian:${PYTHONPATH}
$ CAL_USER_APPS=/opt/rtcds/userapps/release/cal/common
$ python ${CAL_USER_APPS}/scripts/guardian_inj_schedule_validation.py --schedule ${CAL_USER_APPS}/guardian/schedule/schedule_1160692574.txt --min-cadence 300 --ifos H1

The state of the guardian never changed as it should have, so after some debugging we found that the GPS time the node thinks it is differs from the real GPS time by about 7 hours. This is likely due to the following warning we got when starting the guardian node:
/ligo/apps/linux-x86_64/gpstime/lib/python2.7/site-packages/gpstime-0.1.2-py2.7.egg/gpstime/__init__.py:220: RuntimeWarning: GPS converstion requires timezone info.  Assuming local time...
  RuntimeWarning).

Chris is going to investigate how to set the time zone info for the GPS module before we try again to fully test the infrastructure.
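As a rough illustration of the failure mode (this is not the gpstime module itself, just a Python 3 sketch of the underlying arithmetic): a naive datetime interpreted as local time shifts the resulting GPS seconds by the local UTC offset, about 7 hours for PDT.

from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 17  # GPS-UTC offset as of October 2016

def utc_to_gps(dt):
    # Refuse naive datetimes: interpreting them as local time is exactly
    # the ~7 hour error described above.
    if dt.tzinfo is None:
        raise ValueError("datetime must be timezone-aware")
    return (dt - GPS_EPOCH).total_seconds() + LEAP_SECONDS

print(utc_to_gps(datetime(2016, 10, 19, 19, 0, tzinfo=timezone.utc)))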
H1 ISC
jenne.driggers@LIGO.ORG - posted 12:47, Wednesday 19 October 2016 - last comment - 15:50, Wednesday 19 October 2016(30668)
DCPDs balanced with new matrix infrastructure

As mentioned in alog 30663, we have a slightly new infrastructure for balancing the OMC DCPDs.  I used the method that JeffK et al. used in alog 29856 to measure the imbalance between A and B. 

I'm not sure why I'm getting a different value for the imbalance than they did (I get that the ratio of B/A = 0.969 rather than their 0.958).  Perhaps we should look at how this number is actually changing over time, if it is.

Anyhow, the balancing matrix is now populated.  See screenshot for values.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:50, Wednesday 19 October 2016 (30674)CAL
Tagging CAL.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 11:36, Wednesday 19 October 2016 (30665)
CDS model and DAQ restart reports, Monday 17th and Tuesday 18th October 2016

model restarts logged for Mon 17/Oct/2016 No restarts reported

model restarts logged for Tue 18/Oct/2016

RCG upgrade, all frontend systems and DAQ restarted. Full log in attached file.

Non-image files attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 11:31, Wednesday 19 October 2016 - last comment - 09:42, Thursday 20 October 2016(30664)
EY purge air skid maintenance complete
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30631

Satisfies FAMIS asset 364 maintenance
Comments related to this report
chandra.romel@LIGO.ORG - 09:42, Thursday 20 October 2016 (30691)
...just when I thought I understood FAMIS. I still need to "create request" to create a task associated with this asset, which already includes a schedule, procedure, and personnel assignment.
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 11:25, Wednesday 19 October 2016 (30663)
OMC model updates prevented OMC locking - now okay

[Kiwamu, Jenne]

It looks like we did an svn-up of the OMC model sometime recently, and then it was compiled and installed yesterday with the RCG upgrade.  Due to the changes, and the fact that matrix EPICS values initialize to zero, we were getting a value of digital zero for the OMC DCPD sum, even though we were clearly seeing nice flashes on the camera. 

Kiwamu logged in to LLO and did an svn checkin of the new OMC screens that match the new model (they were modified there, but not checked in), and then we pulled them here.  This allowed us to set the matrix, and we're back in business. 

It is the channel OMC-DCPD_BALANCE (or something like that) that disappeared; it has been replaced by an explicit matrix that takes the _A and _B signals and forms the sum and null channels. 
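As an illustration only (the matrix element channel names below are invented for the sketch, not the actual names in the new h1omc model), the new-style sum/null matrix could be populated with pyepics along these lines:

# Illustrative sketch with hypothetical channel names.  One possible convention:
# pass A through unchanged and rescale B by the measured imbalance so that
# SUM = A + B_corrected and NULL = A - B_corrected.
from epics import caput

balance = 0.969  # measured B/A ratio (alog 30668)

caput('H1:OMC-DCPD_MTRX_1_1', 1.0)             # SUM  <- DCPD A
caput('H1:OMC-DCPD_MTRX_1_2', 1.0 / balance)   # SUM  <- DCPD B (rescaled)
caput('H1:OMC-DCPD_MTRX_2_1', 1.0)             # NULL <- DCPD A
caput('H1:OMC-DCPD_MTRX_2_2', -1.0 / balance)  # NULL <- DCPD B (rescaled)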

Kiwamu is currently finishing resolving the svn conflicts, so that both sites will be back in sync.

LHO VE
chandra.romel@LIGO.ORG - posted 11:22, Wednesday 19 October 2016 (30662)
CP3 thermocouples
Installed two thermocouples per WP 6249 and connected to existing CDS channels:

H0:VAC-MY_CP3_TE202A_DISCHARGE_TEMP_DEGC
H0:VAC-MY_CP3_TE202B_DISCHARGE_TEMP_DEGC

Removed TC inside CP4 exhaust and reinstalled its check valve.
H1 DAQ (CDS, DCS)
david.barker@LIGO.ORG - posted 11:16, Wednesday 19 October 2016 (30661)
h1fw0 reports slow file system, power cycled solaris QFS/NFS server

At 09:30 PDT this morning, h1fw0 restarted itself with reports of a slow QFS disk system. Normally a power cycle of the solaris QFS/NFS server resolves these problems. Between 10:40 and 11:00 PDT I power cycled h1fw0 and h1ldasgw0. We will continue to monitor fw0's status.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 11:03, Wednesday 19 October 2016 (30660)
Monthly Dust Monitor Vacuum Pump Check (FAMIS #7506)
   Fit air bleed intake filters to all pumps, and adjusted pressures to 19.0 inHg. All pump temperatures are normal. Closed FAMIS #7506.
LHO VE
kyle.ryan@LIGO.ORG - posted 10:08, Wednesday 19 October 2016 - last comment - 14:48, Wednesday 19 October 2016(30654)
~0915 hrs. local -> Adjusted heating of RGA @ X-end
Don't have remote monitoring and will need to enter X-end VEA to make measurements sometime this afternoon.
Comments related to this report
kyle.ryan@LIGO.ORG - 14:48, Wednesday 19 October 2016 (30671)
~1320 hrs. local -> Measured zone temps and made small adjustments 
H1 ISC
sheila.dwyer@LIGO.ORG - posted 00:47, Wednesday 19 October 2016 - last comment - 15:59, Thursday 20 October 2016(30648)
PMC HV noise, TCS, and lump in DARM from 200-1000Hz

The first attached screenshot is a series of DCPD spectra from the last several months.  When we first turned on the HPO, the noise in DARM did not get any worse.  The noise stayed similar to O1 until July 16th; the lump from 200 Hz to 1 kHz started to appear in late July, when we started a series of changes in alignment and TCS to keep us stable with a decent recycling gain at 50 Watts.  

PMC

Last night, Gabriele pointed out that the noise in DARM is coherent with the PMC HV.  The HV readback had been mostly white noise (probably ADC noise) until the last few weeks, but has been getting noisier, so that some of the jitter peaks show up in it now (second attached screenshot; colors correspond to the dates in the DCPD spectrum legend).  This may be related to the problem described in LLO alogs 16186 and 15986.  The PMC transmission has been degrading since July, which could be a symptom of misalignment.  Since July, the REFL power has nearly doubled from 22 to 38 Watts, while the transmission has dropped 28%.  The PMC sum has also dropped by 4%, roughly consistent with the 3% drop in the power out of the laser.  Peter and Jason are planning on realigning the PMC in the morning, so it will be interesting to see if we see any difference in the HV readback.  

TCS + PRC alignment:

The other two main changes we have had in this time are changes in the alignment through the PRC, and changes to TCS.  These things were done to improve our recycling gain and stability without watching the noise impact carefully.  In July we were using no offsets in the POP A QPDs.  We started changing the offsets on August 1st, after the lumpy noise first appeared around July 22nd.  We have continued to change them every few weeks since then, but generally moving in the same direction.  

The only TCS change that directly corresponds to the dates when our noise got worse was the reduction of the ETMY RH from 0.75 W each on July 22nd; the other main TCS changes happened September 10th.  It would be nice to undo some of these changes before turning off the HPO, even if it means reducing the power to stay stable.  

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 07:26, Wednesday 19 October 2016 (30651)

The HVMon signal (PMC length) shows a peak of about 600 Hz/rtHz. We don't think this is an indication of frequency noise from the laser, but rather an error point offset picked up in the PMC PDH signal. As such, this is length noise added to the PMC and suppressed by the cavity storage time. Assuming this factor is about 1/10000, we would still get ~100 mHz/rtHz modulated onto the laser frequency. Seems like a lot.
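A back-of-envelope check of that estimate (the 1/10000 suppression factor is the assumption stated above, not a measured value):

peak_noise = 600.0    # Hz/rtHz, HVMon peak read as PMC length/frequency noise
suppression = 1.0e-4  # assumed suppression from the PMC cavity storage time
print(peak_noise * suppression * 1e3, "mHz/rtHz")  # ~60 mHz/rtHz, i.e. of order 100 mHz/rtHz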

HVMon measures 1/50 of the voltage sent to the PZT. With no whitening this is not very sensitive.

daniel.sigg@LIGO.ORG - 11:02, Wednesday 19 October 2016 (30659)

After the PSL team adjusted the PMC alignment the ~400Hz peaks are no longer visible in the HVMon spectrum. The coherence is gone as well—except for the 1kHz peak.

Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 16:38, Wednesday 19 October 2016 (30672)

About the PMC:

The 1st screenshot shows the small improvement in DARM we got after the PMC realignment.  While the coherence with the PMC HV may be gone, it might just be that the PMC HV signal is now buried in the noise of the ADC.  At a lockloss I went to the floor and measured HV mon, then plugged it into one of the SR560s (AC coupled, 10 Hz high pass, gain of 100) and sent the output into H1:LSC-EXTRA_AI_1.  We still have high coherence between this channel and DARM.  (last attachment)

Also, the PMC realignment this morning did decrease the reflected power, but the transmitted power also dropped.

            refl (W)   trans (W)   sum (W)   laser power out (W)
July 20        -          126         157          174
Yesterday      35         103         138          169
Today          27         100         126          169

About turning the HPO on not adding noise:

Kiwamu pointed out that the uncalibrated comparison above, showing that the noise did not get worse when the HPO came on, was not as convincing as it should have been.  This morning he and I used the Pcal line height to scale these uncalibrated spectra to something that should be proportional to meters, although we did not worry about frequency-dependent calibration (4th screenshot).  From this you can see that the noise in July was very close to what it was in March before the HPO came on, but there is some stuff in the bucket that is a little worse.  
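A minimal sketch of that scaling, assuming the Pcal line frequency and its known displacement amplitude are available (the function and argument names here are made up for illustration):

import numpy as np

def scale_to_pcal(freqs, asd, pcal_freq, pcal_displacement):
    # Rescale an uncalibrated DARM ASD so the bin nearest the Pcal line
    # frequency has the known Pcal-induced displacement amplitude.
    freqs = np.asarray(freqs)
    asd = np.asarray(asd, dtype=float)
    idx = np.argmin(np.abs(freqs - pcal_freq))
    return asd * (pcal_displacement / asd[idx])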

The point is made best by the last attached screenshot, which shows nearly identical noise in the last good lock I could find before the HPO came on and the first decent lock after it came on. Pcal was not working at this time, so we can't use that to verify the calibration, but the input powers were similar (20 W and 24 W), the DCPD currents were both 20 mA, and the DCPD whitening was on in both cases.  (The decimation filters were changed around the same time that the HPO came on, which accounts for the difference at high frequencies.)

Images attached to this comment
jason.oberling@LIGO.ORG - 15:59, Thursday 20 October 2016 (30693)PSL

Regarding the power available to the PMC, I know this is obvious, but another thing we have to consider is the ISS.  Since the ISS AOM is before the PMC, it clearly also affects the amount of power available to the PMC.  Peter K. can correct me if I am wrong, but it is my understanding that this happens primarily in 2 ways:

  • Obviously, if the ISS AOM is diffracting more light, the total amount of power available to the PMC decreases.
  • Related to the above, the ISS AOM can cause slight distortions in the beam profile, which depend on how hard the AOM is driven; the harder the AOM is driven, the more it distorts the beam.  These distortions change the mode matching into the PMC and therefore lower the visibility of the cavity.  This has the effect of increasing the reflected power and lowering the transmitted power.

On 2016-9-21, for reasons unknown to me, the ISS control offset was changed from ~4.3 to 20.  This means we are driving the ISS AOM much harder than we were previously.  This in turn changes the beam profile, which affects the PMC mode matching and lowers the cavity visibility.  This is likely why, even though we have had only a 5 W decrease in laser power since July, the power into and transmitted by the PMC are down and the power reflected by the PMC has increased, and why we cannot return to the July PMC powers Sheila listed in her table in the above comment by simply tweaking the beam alignment into the PMC.  I have attached a 120 day minute-trend of the ISS control offset (H1:PSL-ISS_CTRL_OFFSET) that shows the changes in the ISS control offset value since 2016-6-22, including the 2016-9-21 change.  There are of course other reasons why the control offset changed (as can be seen in the attachment, the offset was changed several times over the last 4 months); the one on 9-21 just really stuck out.

Is there a reason why the control offset was changed so drastically?  Something to do with the new ISS outer loop electronics?

Images attached to this comment
H1 DCS (CDS, DCS)
gregory.mendell@LIGO.ORG - posted 13:49, Tuesday 18 October 2016 - last comment - 15:55, Wednesday 19 October 2016(30626)
DMT calibration updated to gstlal-calibration 1.0.6-2.el7
Work Permit Number: 6252

The DMT calibration has been updated to gstlal-calibration 1.0.6-2.el7. Because of dependencies, GDS was also updated to gds-2.17.10-2.

John Zweizig has restarted the DMT monitors.
 

Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:55, Wednesday 19 October 2016 (30675)CAL, CDS, DAQ, DCS
Tagging CAL.
H1 ISC (CAL, DetChar)
jeffrey.kissel@LIGO.ORG - posted 22:15, Tuesday 20 September 2016 - last comment - 12:11, Wednesday 19 October 2016(29856)
DCPDs have 2.1% Imbalance; Imbalance Now Compensated
J. Kissel, S. Dwyer, L. Barsotti

At the advice of DetChar (LHO aLOG 29828) we've balanced the DCPDs, which indeed hasn't been done since the new breadboard and PDs were installed (LHO aLOG 28862). Unlike the previous pair of DCPDs which were out-of-the-box perfectly balanced (see LHO aLOG 17650), the current pair has an imbalance of 2.1106%. This value has been entered into H1:OMC-DCPD_BALANCE, and accepted into the SDF "down" state for the h1omc model.

Method:
For the DCPD sum, we take an average of the two: A/2 + B/2.
The h1omc front-end infrastructure is built such that the percentage imbalance, e, is applied to both equally: A*(1 + e)/2 + B*(1 - e)/2.
So the transfer function measurement of DCPD A/B is (1+e)/(1-e) ~ 1 + 2e. Since the measured transfer function magnitude is 0.957788, the imbalance we enter is e = ((A/B) - 1) / 2 = -0.021106 = -2.1106%.

We also found this value empirically (because we weren't sure how the infrastructure worked), by finding the imbalance that minimizes the DCPD NULL stream (where the coherence drops to zero). Overshooting the imbalance (e.g. by entering -4.2%) shows a clear sign flip from the reference trace with 0% imbalance compensation.
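A quick check of the arithmetic above, comparing the exact solution of (1+e)/(1-e) = |A/B| with the small-imbalance approximation used in the text:

ratio = 0.957788                      # measured DCPD A/B transfer function magnitude
e_exact = (ratio - 1) / (ratio + 1)   # exact solution of (1+e)/(1-e) = ratio
e_approx = (ratio - 1) / 2            # from the approximation 1 + 2e ~ ratio
print(e_exact, e_approx)              # about -0.0216 vs -0.0211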
Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 12:11, Wednesday 19 October 2016 (30666)

This is not relevant anymore, since the OMC model has been changed to a more transparent matrix, but there was no divide by 2 in the old scheme.  The sum was just a plain sum of the balanced values.  (Trying to incorporate that non-existent factor of 2 just caused a lockloss while trying to implement the new matrix, which is the only reason I mention it).
