Reports until 15:40, Thursday 25 February 2016
H1 SYS
daniel.sigg@LIGO.ORG - posted 15:40, Thursday 25 February 2016 (25728)
Commissioning Schedule

Thursday: AS WFS 90MHz centering (afternoon), noise hunting afterwards

Friday: HWS/TCS morning, OMC noise/noise hunting afterwards

Saturday: SRC work

Sunday: -

Monday: Noise hunting

Tuesday: TCS work

H1 SEI
jim.warner@LIGO.ORG - posted 12:40, Thursday 25 February 2016 (25724)
ETMY buried STS sensor correction

I wanted to try using the buried STS at ETMY for sensor correction when it wasn't windy, to see if it worked. On the 22nd I switched it over for half an hour or so, and it seems to work. The attached spectra show that using the buried STS gives similar performance to the STS in the building, at least when the wind is low and we are using sensor correction at the microseism. Red and pink are the ST1 T240 and the buried STS when using the buried STS; dark blue and light blue are the ST1 T240 and the inside STS when running in our normal configuration. Both measurements were taken with the 90 mHz blends, so all of the isolation between the T240 and ground between 0.1 and ~0.2 Hz is due to the sensor correction.

Unfortunately, when I tried this when it was windy, I got no isolation at all. I'm still looking into that.
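
As a reference for this kind of comparison, here is a minimal sketch (data access, channel handling and sample rate are assumptions; this is not the actual analysis code) that quantifies the isolation as the ratio of the ST1 T240 ASD to the ground STS ASD in the microseism band:

# Minimal sketch: quantify sensor-correction performance by comparing the
# platform motion (ST1 T240) to the ground motion (STS) in the 0.1-0.2 Hz band.
# `t240` and `sts` are assumed to be equal-length numpy arrays sampled at `fs`.
import numpy as np
from scipy.signal import welch

def isolation_ratio(t240, sts, fs, band=(0.1, 0.2), seg_sec=512):
    """Median ratio of the T240 ASD to the ground STS ASD over `band`."""
    nper = int(seg_sec * fs)
    f, p_t240 = welch(t240, fs=fs, nperseg=nper)
    _, p_sts = welch(sts, fs=fs, nperseg=nper)
    mask = (f >= band[0]) & (f <= band[1])
    return np.median(np.sqrt(p_t240[mask] / p_sts[mask]))

# A ratio well below 1 in this band means the sensor correction is providing
# isolation with the 90 mHz blends, as in the attached spectra.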

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 11:35, Thursday 25 February 2016 - last comment - 14:37, Thursday 25 February 2016(25719)
h1boot problems, all front end computers have locked up

At 11:10 PST h1boot generated an nfsd error (see below). All front end computers have locked up. We tried rebooting h1susauxh2, but h1boot is no longer permitting this (missing root file system). We need to first reboot h1boot, then all front end computers.

Non-image files attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:01, Thursday 25 February 2016 (25720)

h1boot has gone 230 days without a file system check. An fsck on the /opt/rtcds (800GB) file system is in progress and will take almost an hour to complete.

david.barker@LIGO.ORG - 13:25, Thursday 25 February 2016 (25726)

h1boot completed its fsck and started running. Looks like all the front end real-time cores ran the entire time. This raises an interesting question: should they run when there is no operator/guardian control of them? Did guardian keep trying to control the system and, with no feedback, continue to 'push' it?

The DAQ data continued to flow for the entire time. We see evidence of some channels glitching when the front end controls were recovered.

We will test this scenario further to see if a solution is needed.

Downtime was 11:10 - 12:40 PST

corey.gray@LIGO.ORG - 14:37, Thursday 25 February 2016 (25727)

Filed an FRS for this since we lost 1.5hrs of commissioning time.  This is FRS #4429.  

Note:  it wasn't clear what units of time to enter for "Orig. Est", and I entered 90 (as in minutes), but this turned out to be an entry of 90hrs!  I believe it has been corrected to 1.5hrs.

H1 TCS
eleanor.king@LIGO.ORG - posted 11:09, Thursday 25 February 2016 (25718)
Removal of polarization sensors from HWSX Path

Cao, Elli

In order to use the ITMX HWS during TCS commissioning, we have removed the polarization sensor optics which were placed in the HWSX path on the HWS table by HAM 4.  (Placement of the polarization sensors is detailed in alog 24046.)  We removed the PBS and the HWSX_ALIGN_M1 mirror from next to the periscope and have bolted them out of the way behind the HWSX camera for the time being.  We also dumped the two sled beams coming through HWS_ALIGN_BS.  We can see the sled beam on the HWS.  I adjusted HWS STEER M10 (the mirror right before the camera) to center the beam on the camera, and did not touch the periscope mirrors.

Images attached to this report
H1 SYS
daniel.sigg@LIGO.ORG - posted 10:37, Thursday 25 February 2016 - last comment - 22:26, Thursday 25 February 2016(25716)
Corner Beckhoff Software Update

Updated the corner Beckhoff code to incorporate the tidal servo engagement delay and the new EOM driver readback calibration.

Comments related to this report
sheila.dwyer@LIGO.ORG - 22:26, Thursday 25 February 2016 (25743)

Does this tidal servo engagement delay also need to be propagated to the end stations?  We have had the same problem tonight that was described in 25695.

Images attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 10:03, Thursday 25 February 2016 (25714)
Soft-cycled GV5 and GV7 for 5-Ton crane load


H1 SEI
jim.warner@LIGO.ORG - posted 10:01, Thursday 25 February 2016 - last comment - 10:34, Thursday 25 February 2016(25713)
Bad CPS at ETMY

Looks like there is a failing CPS at ETMY. I noticed on the DetChar summary page that ETMY was not performing as well as the other BSCs above 1 Hz, and had been for a little while (since about Feb 2nd). When I started looking at spectra, I saw that the Y CPS blend was high at high frequency; digging down to individual sensors, it looks like ST2 H3 is to blame. LLO had a similar failure last year sometime. We should be able to stick a new card in and still run okay, or maybe we just need to check some connections. Probably not a big deal for now, but we should address this soon. The attached plots show the CPS for ST2 (first plot) and ST1 (second). Green on the first plot is the bad sensor. The ST1 sensors look okay, though H2 (blue, second plot) looks a little high as well.
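
For reference, a minimal sketch (channel handling and frequency band are illustrative, not the DetChar code) of the kind of band-limited RMS comparison that flags a noisy sensor like ST2 H3:

# Minimal sketch: flag a misbehaving CPS by comparing band-limited RMS above
# 1 Hz across the sensors of a stage.  `data` is assumed to map sensor names
# (e.g. 'ST2_H3') to numpy arrays sampled at `fs`.
import numpy as np
from scipy.signal import welch

def blrms(x, fs, fmin=1.0, fmax=10.0):
    f, pxx = welch(x, fs=fs, nperseg=int(64 * fs))
    mask = (f >= fmin) & (f <= fmax)
    return np.sqrt(np.trapz(pxx[mask], f[mask]))

def flag_outliers(data, fs, factor=3.0):
    rms = {name: blrms(x, fs) for name, x in data.items()}
    med = np.median(list(rms.values()))
    return sorted(name for name, r in rms.items() if r > factor * med)

# A sensor sitting well above its peers in this band shows up as an outlier.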

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:34, Thursday 25 February 2016 (25715)

FRS 4424

H1 SEI
hugh.radkins@LIGO.ORG - posted 09:26, Thursday 25 February 2016 (25712)
LHO HEPI Guardians Restarted w/corrected code

WP 5744, II 1206, FR 4418

Code oversight fixed.  Guardian restarted.  All HPIs are good.  Tested on HAM6, positive function.  ISI guardians not restarted--should do next Tuesday to confirm no issues there.

files updated:

hugh.radkins@operator3:~ 0$ userapps
hugh.radkins@operator3:release 0$ cd isi/common/guardian/
hugh.radkins@operator3:guardian 0$ svn up
U    isiguardianlib/isolation/const.py
U    isiguardianlib/isolation/states.py
U    isiguardianlib/damping/states.py
Updated to revision 12715.

Here are the changes and additions to fix the oversight:

hugh.radkins@operator3:guardian 0$ diff isiguardianlib/damping/states.py isiguardianlib/damping/states.pybuhr
<         if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (top_const.CHAMBER_TYPE != 'BSC_ST1') & (top_const.CHAMBER_TYPE != 'HPI'):
---
>         if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (top_const.CHAMBER_TYPE != 'BSC_ST1'):
hugh.radkins@operator3:guardian 1$ diff isiguardianlib/isolation/const.py isiguardianlib/isolation/const.pybuhr
139,144d138
<             GS13_GAIN_DOF = ['H1', 'H2', 'H3', 'V1', 'V2','V3'],
<             SWITCH_GS13_GAIN = False,
<             FF_DOF = dict(
<                 FF = [],
<                 HPI_FF = [],
<             ),
 

hugh.radkins@operator3:guardian 1$ diff isiguardianlib/isolation/states.pybuhr isiguardianlib/isolation/states.py
319c319
<             if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (top_const.CHAMBER_TYPE != 'BSC_ST1'):
---
>             if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (top_const.CHAMBER_TYPE != 'BSC_ST1') & (top_const.CHAMBER_TYPE != 'HPI'):
321c321,322
<             iso_util.turn_FF('OFF', iso_const.ISOLATION_CONSTANTS['FF_DOF'])
---
>             if (top_const.CHAMBER_TYPE != 'HPI'):
>                 iso_util.turn_FF('OFF', iso_const.ISOLATION_CONSTANTS['FF_DOF'])
490c491
<             if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (self.has_requested_gs13_gain_switch == False) & (top_const.CHAMBER_TYPE != 'BSC_ST1'):
---
>             if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (self.has_requested_gs13_gain_switch == False) & (top_const.CHAMBER_TYPE != 'BSC_ST1') & (top_const.CHAMBER_TYPE != 'HPI'):
494c495,496
<                     iso_util.turn_FF('ON', iso_const.ISOLATION_CONSTANTS['FF_DOF'])
---
>                     if (top_const.CHAMBER_TYPE != 'HPI'):  
>                         iso_util.turn_FF('ON', iso_const.ISOLATION_CONSTANTS['FF_DOF'])
 

LHO General
corey.gray@LIGO.ORG - posted 08:54, Thursday 25 February 2016 (25709)
Thursday Morning Meeting Minutes

JRPC Notes

SEI

SUS

CDS

Facilities & VAC

(45) FRS Reports ongoing, and NO new FRS reports (Vern)

LHO General
corey.gray@LIGO.ORG - posted 08:27, Thursday 25 February 2016 (25708)
Morning Status

Operating Mode:  Engineering 

LVEA Laser SAFE

H1 was not locked & Guardian was in Ready state when I walked in.  

Was notified Bubba was on the floor doing some craning.  Concurrent work to be done with this activity:

Switched Operating Mode: Corrective Maintenance

H1 ISC
denis.martynov@LIGO.ORG - posted 03:36, Thursday 25 February 2016 - last comment - 23:06, Thursday 25 February 2016(25706)
100Hz noise investigations

Evan, Den

Tonight we asked whether the 1/f (or 1/f^2) noise seen in DARM around 100 Hz comes from the OMC or not. We put a bandstop filter into the DARM control loop at 92-127 Hz and made a correlation measurement between AS 45 WFS SUM and the OMC PDs in this frequency band. The idea is that if there is any OMC noise coherent between the two OMC PDs, it should not be present at the RF detector.

The attached plot shows the results of the measurement. The cross-spectrum between the WFS and the OMC PDs around 100 Hz is almost the same (15% lower) as the cross-spectrum between the two OMC PDs. We integrated for 3.5 hours, and the AS 45 WFS SUM channels are not DQ channels. We can significantly improve the measurement if we record these channels and integrate for ~20 hours.

At the current precision of the correlation we cannot say that the noise around 100 Hz comes from the OMC. Right now the evidence points the other way.
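
A minimal sketch (assuming the three time series have been fetched at a common sample rate; this is not the code used for the measurement) of the band-limited cross-spectrum comparison described above:

# Minimal sketch: compare the cross-spectrum of the two OMC DCPDs with the
# cross-spectrum of AS 45 WFS SUM against one DCPD in the 92-127 Hz notch band.
# Long averaging (hours) is what beats down the statistical error.
import numpy as np
from scipy.signal import csd

def band_csd(x, y, fs, band=(92.0, 127.0), seg_sec=10.0):
    f, pxy = csd(x, y, fs=fs, nperseg=int(seg_sec * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return f[mask], np.abs(pxy[mask])

# f, c_omc = band_csd(pd_a, pd_b, fs)   # OMC DCPD A vs B
# f, c_rf  = band_csd(wfs, pd_a, fs)    # AS 45 WFS SUM vs DCPD A
# If the OMC were the noise source, c_omc should clearly exceed c_rf;
# comparable levels (as measured) point away from the OMC.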

Non-image files attached to this report
Comments related to this report
denis.martynov@LIGO.ORG - 16:45, Thursday 25 February 2016 (25731)

We have also measured the coherence between DARM and the voltage noise of the +15V signal going into the vacuum. The noise level is 2 uV/√Hz at 100 Hz and the coherence is <10^-3. This noise is insignificant for the current sensitivity.

rich.abbott@LIGO.ORG - 17:13, Thursday 25 February 2016 (25734)ISC
Den, which +15V signal is this that you measured?
denis.martynov@LIGO.ORG - 23:06, Thursday 25 February 2016 (25745)

Rich, we have measured the noise on pin 6 (+15V head 1, D1300502). This noise is highly coherent with the noise on pin 2 (+15V on head 2) and partially coherent (0.4) with the noise on pins 7 and 3 (-15V, head 1 and 2). For this reason, we did not measure DARM coherence with the other pins.

H1 ISC (DetChar, ISC)
sheila.dwyer@LIGO.ORG - posted 19:28, Wednesday 24 February 2016 - last comment - 15:39, Sunday 28 February 2016(25703)
Moved OMC dither frequency by 0.21 Hz, 16 Hz comb on PZT1

Today Keita and I spent some time thinking about OMC length noise; there will be an update coming soon with more information and a noise projection.  

We spent some time looking at some nonlinear behavior in the drive to PZT1.  Our dither frequency is 4100 Hz, and looking at the low voltage PZT monitor we can see a small 8 Hz and a larger 16 Hz comb.  There is also other non-stationary noise in the monitor, and a broad peak at 12.7 kHz.  We have moved the dither line frequency to 4100.21 Hz, so if this was the cause of the 16 Hz comb in DARM we would now expect it to be more like a 16.84 Hz comb.  Evan Goetz tells us that we need 15 minutes or more of data in low noise to evaluate whether this has changed any combs in DARM. 

We have just reached nominal low noise at 3:14:34 UTC Feb 25th, although the low frequency noise (below 50 Hz) is worse than normal.  I've temporarily changed the dither frequency in the OMC guardian, so if there is a longer lock later tonight it should also have this changed dither frequency.  (If anyone wants to double check what the dither frequency is, the channel is H1:OMC-LSC_OSC_FREQ.)
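
As an aside, a sketch of the arithmetic behind the 16 Hz to 16.84 Hz expectation, under the assumption that the comb spacing is set by how far the 4th harmonic of the dither falls from the 16384 Hz model rate:

# Sketch: expected comb spacing from the offset between the 4th harmonic of the
# dither line and the 16384 Hz model rate (an assumed mechanism, consistent with
# the expected 16 Hz -> 16.84 Hz change).
FS = 16384.0  # front-end model rate in Hz

def comb_spacing(f_dither, harmonic=4):
    return abs(harmonic * f_dither - FS)

print(comb_spacing(4100.00))   # 16.00 Hz
print(comb_spacing(4100.21))   # ~16.84 Hz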

Comments related to this report
keith.riles@LIGO.ORG - 10:59, Thursday 25 February 2016 (25717)DetChar
To quote Bill Murray in Groundhog Day, "Anything different is good." (at least in this context)

The 16-Hz comb does indeed appear to have changed into a 16.84-Hz comb. 
A DARM spectrum from two hours (400-sec coherence time) last night in the 150-250 Hz band is
attached, along with one from January on a day when the 16-Hz comb
was strong. New lines are seen in this band at

151.56 Hz
168.40 Hz
185.24 Hz
202.08 Hz
218.92 Hz
235.76 Hz

Some 1-Hz zooms are shown for a couple of the new lines.
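
For reference, a quick check (assuming the lines are simply harmonics of the new spacing) that the listed frequencies are harmonics 9 through 14 of 16.84 Hz:

# Quick check: harmonics 9-14 of 16.84 Hz land exactly on the listed lines.
for n in range(9, 15):
    print(n, round(n * 16.84, 2))
# 9 151.56, 10 168.4, 11 185.24, 12 202.08, 13 218.92, 14 235.76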

So...can we fix this problem?
Images attached to this comment
sheila.dwyer@LIGO.ORG - 20:36, Thursday 25 February 2016 (25739)DetChar, ISC

Keita, Sheila

So that we can keep the OMC dither small while driving a reasonable level of counts out of the DAC, we added a voltage divider (somewhat creatively built) to the D-sub from the DAC to the driver chassis.  This is an 11k/110 Ohm divider on pins 1 and 6.  We have increased the dither amplitude from 6 counts to 600 counts, so the round-off errors will now be 100 times smaller compared to our signal.  
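
For the record, a sketch of the scaling (an ideal divider is assumed and the driver input impedance is neglected):

# Sketch: attenuation of the temporary 11k/110 Ohm divider and the resulting
# improvement of the dither-to-round-off ratio.
R_series, R_shunt = 11e3, 110.0
atten = R_shunt / (R_series + R_shunt)   # ~9.9e-3, i.e. about 1/101

old_amp, new_amp = 6.0, 600.0            # dither amplitude in DAC counts
print(atten)                             # divider attenuation
print(new_amp * atten)                   # ~5.9 counts equivalent at the driver input
print(new_amp / old_amp)                 # 100x more counts at the DAC, so the
                                         # round-off error is ~100x smaller
                                         # relative to the dither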

The attached screenshots show the PZT1 AC monitor before and after this change.  The lines below 1 kHz are always there (even when there is zero coming out of the DAC) and are not present on the analog signal coming into the driver chassis for the monitor.  

We have reverted the frequency to 4100 Hz.  If we get a long enough low noise lock tonight we can hope that the 16 Hz comb will be better.  If things look good we should upgrade our voltage divider. 

Images attached to this comment
keita.kawabe@LIGO.ORG - 09:43, Friday 26 February 2016 (25741)

Thing is, our dither line used to be 12 counts pk-pk, so the rounding error was actually significant (signal/error ratio is something like 10 in RMS), and the error showed up as lots of lines because we're sending in only one sinusoidal signal. These lines actually drive the PZT length.

Making the dither bigger doesn't change the round-off error RMS much, so the RIN will become smaller.

We inserted two 11k-110 Ohm resistive dividers, one each for the positive and negative inputs of the low-voltage PZT driver, because it was easy.  This is a temporary non-solution; a permanent solution is TBD.

The first attachment shows the spectrum of the DAC IOP channel for the dither, i.e. the very last digital stage, before we increased the amplitude. The RMS of the forest of lines is about a factor of 10 below the RMS of the dither.

The second plot is after increasing the amplitude by a factor of 100. The rounding error RMS is still at the same level (though you cannot tell from the plot), so the dither-to-error RMS ratio should be more like 1000 now.

The three large lines in the second plot are not round-off errors but imaging peaks that were previously buried in the round-off errors: 12283.8 Hz = 16384 − 4100.2 Hz, 20484.2 Hz = 16384 + 4100.2 Hz, and 28667.8 Hz = 32768 − 4100.2 Hz.
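
A numerical illustration of both points (simple rounding at the 16384 Hz model output is assumed; the real signal chain differs):

# Sketch: (i) the round-off error RMS is roughly independent of the dither
# amplitude (~0.3 counts for plain rounding), so a 100x larger dither gives a
# ~100x better dither-to-error RMS ratio; (ii) upsampling a 4100.2 Hz dither
# from the 16384 Hz model to the 65536 Hz IOP leaves images at
# n*16384 +/- 4100.2 Hz below the 32768 Hz IOP Nyquist.
import numpy as np

fs, f_dither, n = 16384.0, 4100.2, 1 << 18
t = np.arange(n) / fs
for amp in (6.0, 600.0):                      # ~12 and ~1200 counts pk-pk
    dither = amp * np.sin(2 * np.pi * f_dither * t)
    err = np.round(dither) - dither           # quantization (round-off) error
    print(amp, np.std(err), amp / np.sqrt(2) / np.std(err))

print([16384 - f_dither, 16384 + f_dither, 32768 - f_dither])
# -> 12283.8, 20484.2, 28667.8 Hz, the three imaging peaks in the second plot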

Images attached to this comment
sheila.dwyer@LIGO.ORG - 23:48, Thursday 25 February 2016 (25747)

Actually, the dither line is still at 4100.21 Hz for tonight (I had forgotten that I put this into the guardian).  We will revert it tomorrow.

keith.riles@LIGO.ORG - 15:13, Friday 26 February 2016 (25758)
Splendid -- no 16-Hz or 16.84-Hz comb seen in a 2-hour stretch from last night!

See the attached 150-250 Hz spectrum for comparison
with the above plots, along with 0-1000 Hz spectra from last night
and from the night before, when the 16.84-Hz comb was present.

Many thanks from the CW group as we look ahead to O2.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 17:22, Friday 26 February 2016 (25761)
Images attached to this comment
keith.riles@LIGO.ORG - 15:39, Sunday 28 February 2016 (25771)
Sheila pointed out that the noise floor for the 2nd spectrum
is considerably higher than for the 1st and could be hiding
residual lines due to the dither. So I tried shifting the 2-hour 
time window a half hour earlier, to avoid the hellacious glitch
seen in the inspiral range (see 1st figure) near the end of
the original Feb 26 interval. The 2nd figure shows the resulting
spectrum with a noise floor closer to that on Feb 25. The
16.84-Hz comb still does not appear. 

So I think it's safe to say that the 100 times multiplication / divide
trick did indeed suppress the 16.84 Hz (originally 16 Hz) comb a
great deal, but of course, we will need long coherence times and
long integrations to see if what's left causes residual trouble for CW searches.

Images attached to this comment
H1 ISC
evan.hall@LIGO.ORG - posted 18:08, Wednesday 24 February 2016 - last comment - 03:58, Thursday 25 February 2016(25701)
Reworking IX violin mode damping

Den, Patrick, Evan

During IX coil driver investigations yesterday, we rang up some violin modes to very high amplitude (~4×10^−14 m rms in DARM, about 4 orders of magnitude higher than our optimal, damped mode height).

The existing violin mode damping settings, which consisted of a few wide filters meant to actuate on multiple modes simultaneously, were damping some of the modes while causing others to ring up.

This is not the first time we have been hoist with our own wide-bandwidth petard. Additionally, we have more than enough filter modules per test mass to damp each first harmonic individually, which alleviates the problem of having to find a filter to damp multiple modes simultaneously.

There were five IX modes visible in DARM on RF readout. For these five modes, we constructed narrow, individual damping filters (8th-order Butterworths). The settings are as follows:

Freq. [Hz]   FWHM [mHz]   Phase   Gain [ct/ct]
500.054      100          60°     100
500.212      100                  100
501.208      30                   −10
501.256      30                   10
501.452      100                  100

Note the narrow bandwidth of the filters for the two 501.2 Hz modes.
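
For illustration, a minimal sketch (not the actual Foton design; the 16384 Hz model rate and the use of a plain bandpass are assumptions, and the phase rotation and gain from the table are not included) of an 8th-order Butterworth bandpass this narrow:

# Sketch: an 8th-order Butterworth bandpass centred on one violin mode.
# Second-order sections are used because such narrow digital filters are
# numerically delicate.
from scipy.signal import butter, sosfreqz

FS = 16384.0  # assumed model rate in Hz

def violin_bandpass(f0, fwhm_mhz, order=4):
    """Bandpass of order 2*order (order=4 gives an 8th-order filter)."""
    half = fwhm_mhz * 1e-3 / 2.0
    return butter(order, [f0 - half, f0 + half], btype='bandpass',
                  fs=FS, output='sos')

sos = violin_bandpass(501.208, 30)           # the 30 mHz wide filter at 501.208 Hz
w, h = sosfreqz(sos, worN=1 << 16, fs=FS)    # inspect the response near the mode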

These use the IX L2 damping SFMs, numbers 1 through 5. The corresponding OAF monitor filters have been updated as well.

That means there are three other first harmonics on IX that are yet to be damped.

Comments related to this report
evan.hall@LIGO.ORG - 03:58, Thursday 25 February 2016 (25707)

The damping phase of the mode at 501.208 Hz seems to have flipped sign at the start of the most recent lock. We'll have to keep an eye on this one.

H1 AOS (SUS, SYS)
richard.mccarthy@LIGO.ORG - posted 15:55, Tuesday 23 February 2016 - last comment - 11:25, Friday 26 February 2016(25685)
Add Capacitors to Ham2 SUS SAT Amp
Per work permit 5741 we began the process of modifying all Suspension Satellite Amplifiers.  Drawing D0901284-v4 calls for the addition of capacitors C601 (10 uF) and C602 (0.1 uF) between -17V and ground around U503, the negative regulator VEE1.
Today, with Ed M. soldering away in the lab, we were able to complete all of the HAM2 units.
Complete are:
MC1, MC3, PRM, PR3, MMT1, MMT2, SM1, SM2  All Stages.

We did have a problem with two Sat Amps: one shared between MC1 and MC3, and the one for MMT2; a trace blew when it was powered up.  So we replaced SN 1100117 with SN S1000287 and SN 1100068 with SN 1100066.


Comments related to this report
cheryl.vorvick@LIGO.ORG - 12:39, Thursday 25 February 2016 (25723)IOO

Tracking Names: SM1 = IM1, PMMT1 (MMT1) = IM2, PMMT2 (MMT2) = IM3, SM2 = IM4

edmond.merilh@LIGO.ORG - 11:25, Friday 26 February 2016 (25755)

UPDATE:

As of today all HAM3, HAM4 and EX Amps have been modified. HAM2 amps will have to be re-addressed due to an error in installation of the mod. 3IFO boxes are in process.

- Ed

H1 SEI
hugh.radkins@LIGO.ORG - posted 10:26, Tuesday 23 February 2016 - last comment - 09:13, Thursday 25 February 2016(25679)
All LHO HEPI Guardians report USER CODE ERROR after guardian machine restart

DaveB restarted the guardian computer to correct a tconvert issue, WP 5740.

After this, every HEPI platform guardian reports this error:

2016-02-23T17:52:24.50084 HPI_HAM1 [ROBUST_ISOLATED.enter]
2016-02-23T17:52:24.57873 HPI_HAM1 W: Traceback (most recent call last):
2016-02-23T17:52:24.57874   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/worker.py", line 459, in run
2016-02-23T17:52:24.57875     retval = statefunc()
2016-02-23T17:52:24.57875   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/state.py", line 240, in __call__
2016-02-23T17:52:24.57876     main_return = self.func.__call__(state_obj, *args, **kwargs)
2016-02-23T17:52:24.57876   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/state.py", line 240, in __call__
2016-02-23T17:52:24.57877     main_return = self.func.__call__(state_obj, *args, **kwargs)
2016-02-23T17:52:24.57877   File "/opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/decorators.py", line 86, in wrapper
2016-02-23T17:52:24.57878     return func(*args, **kwargs)
2016-02-23T17:52:24.57878   File "/opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/decorators.py", line 86, in wrapper
2016-02-23T17:52:24.57879     return func(*args, **kwargs)
2016-02-23T17:52:24.57879   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/state.py", line 240, in __call__
2016-02-23T17:52:24.57880     main_return = self.func.__call__(state_obj, *args, **kwargs)
2016-02-23T17:52:24.57880   File "/ligo/apps/linux-x86_64/guardian-1485/lib/python2.7/site-packages/guardian/state.py", line 240, in __call__
2016-02-23T17:52:24.57881     main_return = self.func.__call__(state_obj, *args, **kwargs)
2016-02-23T17:52:24.57882   File "/opt/rtcds/userapps/release/isi/common/guardian/isiguardianlib/isolation/states.py", line 490, in run
2016-02-23T17:52:24.57882     if (iso_const.ISOLATION_CONSTANTS['SWITCH_GS13_GAIN']) & (self.has_requested_gs13_gain_switch == False) & (top_const.CHAMBER_TYPE != 'BSC_ST1'):
2016-02-23T17:52:24.57883 KeyError: 'SWITCH_GS13_GAIN'
2016-02-23T17:52:24.57883
2016-02-23T17:52:24.57888 HPI_HAM1 [ROBUST_ISOLATED.run] USERMSG: USER CODE ERROR (see log)
2016-02-23T17:52:24.62596 HPI_HAM1 ERROR in state ROBUST_ISOLATED: see log for more info (LOAD to reset)
 

I've reloaded and restarted both HAM1 and HAM6 and the error returns.  It sure looks like it is related to the updates to get the ISI guardian to switch the GS13 gains depending on the state.  I cannot find documentation (logs) that the HPI nodes have been restarted since the gain-switching updates, which were intended for the ISI.

I've elected to not attempt any code corrections at this time; I suspect Hugo will correct this easily.  Meanwhile, I don't know if the guardian will actually function if a platform trips.  Re-isolating can be done easily, however, with the command scripts isolating to level 1.
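
A minimal sketch of the failure mode (simplified, not the actual isiguardianlib code, but consistent with the fix diffs later posted in 25712):

# Sketch: the HPI isolation-constants dict had no 'SWITCH_GS13_GAIN' key, so the
# new ISI-oriented check raised KeyError on every HEPI node after the restart.
def run_check(iso_constants, chamber_type):
    # simplified form of the pre-fix check in isolation/states.py (~line 490)
    if iso_constants['SWITCH_GS13_GAIN'] and chamber_type != 'BSC_ST1':
        print('would switch GS13 gains')

hpi_old = {}                                           # no 'SWITCH_GS13_GAIN' key
hpi_new = dict(GS13_GAIN_DOF=['H1', 'H2', 'H3', 'V1', 'V2', 'V3'],
               SWITCH_GS13_GAIN=False,                 # benign defaults added
               FF_DOF=dict(FF=[], HPI_FF=[]))

try:
    run_check(hpi_old, 'HPI')
except KeyError as exc:
    print('KeyError:', exc)                            # what the HEPI nodes hit

run_check(hpi_new, 'HPI')                              # no error; the committed fix
                                                       # also excludes CHAMBER_TYPE 'HPI'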

Comments related to this report
hugh.radkins@LIGO.ORG - 11:32, Tuesday 23 February 2016 (25681)

II 1206 FR 4418

hugh.radkins@LIGO.ORG - 09:13, Thursday 25 February 2016 (25710)

Code oversight fixed.  Guardian restarted.  All HPIs are good.  Tested on HAM6, positive function.  ISI guardians not restarted--should do next Tuesday to confirm no issues there.

 

files updated:

hugh.radkins@operator3:~ 0$ userapps
hugh.radkins@operator3:release 0$ cd isi/common/guardian/
hugh.radkins@operator3:guardian 0$ svn up
U    isiguardianlib/isolation/const.py
U    isiguardianlib/isolation/states.py
U    isiguardianlib/damping/states.py
Updated to revision 12715.
 

H1 ISC
evan.hall@LIGO.ORG - posted 03:51, Tuesday 23 February 2016 - last comment - 13:21, Thursday 25 February 2016(25673)
QPD offsets, 9 MHz phase noise

Den, Sheila, Evan

~> We tried reverting to the old QPD offsets (the ones we used throughout O1) for the soft loops and the PRM pointing loops. These seemed to make the recycling gain slightly worse, and did not improve the jitter coupling. This indicates that (as we had suspected) these offsets are no longer good to use.

~> We measured the 9 MHz oscillator phase noise coupling into DARM. This had been done previously (19911), but with a suspicious calibration and in a way that also drove the 45 MHz phase. This time, we used the OCXO to drive both the harmonic generator (bypassing the 9 MHz distribution amplifier) and to serve as a reference for an IFR/OCXO PLL with a >40 kHz bandwidth. The IFR is used to drive the 9 MHz distribution amplifier. The error point of the PLL was offset in order to maintain the relative time delay of the 9 MHz and 45 MHz signals. When we relocked, we found that the 18 MHz and 27 MHz signals had flipped sign, but otherwise the interferometer locked normally. However, there was a great excess of 45 MHz noise in DARM (worse even than before we installed the 9 MHz bandpass). Nevertheless, we were able to drive enough to see the effect of 9 MHz phase modulation in DARM. The coupling is roughly 2–4×10^−6 mA/Hz above 100 Hz.
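
For completeness, a sketch (channel selection, units and calibration are all assumptions; this is not the analysis used for the quoted number) of how such a coupling can be estimated as a transfer function from the injected 9 MHz modulation to the DCPD photocurrent:

# Sketch: coupling [mA/Hz] ~ |CSD(injection, DCPD sum)| / PSD(injection),
# with the injection calibrated into Hz of 9 MHz modulation and the DCPD
# sum calibrated into mA.
import numpy as np
from scipy.signal import csd, welch

def coupling_estimate(inj_hz, dcpd_ma, fs, seg_sec=10.0):
    nper = int(seg_sec * fs)
    f, pxy = csd(inj_hz, dcpd_ma, fs=fs, nperseg=nper)
    _, pxx = welch(inj_hz, fs=fs, nperseg=nper)
    return f, np.abs(pxy) / pxx

# Above 100 Hz this came out around 2-4e-6 mA/Hz in the measurement described above.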

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 22:36, Tuesday 23 February 2016 (25692)

Also, twice last night (while locked on the IFR) we were battling a 900 Hz line in DARM that increased over the duration of the lock, and eventually caused EY to saturate.

We suspected PI, but this line was also present at 900 Hz in the DCPD IOP channels. So it is not folding around the 8 kHz Nyquist of the digital downsampling.

evan.hall@LIGO.ORG - 18:46, Wednesday 24 February 2016 (25702)

This line does not appear in the IOP channels for the end station QPDs. (2 W, dc readout)

Images attached to this comment
evan.hall@LIGO.ORG - 13:21, Thursday 25 February 2016 (25725)

It's the third harmonic of the beamsplitter violin mode. Den added a stopband filter to the BS M2 length drive, and the line went down.

Unclear why this had not been a problem before.
