H1 General
jim.warner@LIGO.ORG - posted 05:15, Sunday 04 October 2015 (22217)
Mid Shift Update

Quiet night at LHO. Wind ~10mph, seismic relatively low, lock from yesterday continues.

H1 General (PEM)
cheryl.vorvick@LIGO.ORG - posted 00:14, Sunday 04 October 2015 (22216)
OPS EVE Summary:

TITLE:  10/3 EVE Shift:  23:00-07:00UTC (Oct.4) (16:00-23:59PT), all times posted in UTC     

STATE OF H1:  Observation at 77Mpc

SUPPORT:  Robert, Jordan, Sheila

INCOMING OPERATOR:  Jim

SHIFT SUMMARY:  

LLO is still having issues with useism.  

Robert did one round of injections, and produced many ETMY saturations in 2 minutes - he may need to redo this measurement tomorrow night.

Jordan did one PEM measurement and would like to take another.  The first was about 30 minutes and the second should be about this long as well.

Shift Activities:

00:00:42UTC, Oct. 4th - Robert's PEM injections start

01:34:37UTC - Robert's PEM injections end

03:35:37UTC - Jordan's PEM injections start

04:06:47UTC - Jordan's PEM injections end

 

 

H1 ISC
sheila.dwyer@LIGO.ORG - posted 18:00, Saturday 03 October 2015 - last comment - 09:28, Monday 05 October 2015(22213)
We should probably be pulling the OMC off resonance during CARM offset reduction

I started to look at our locking attempts over the last two weeks, especially trying to understand our difficulty yesterday.  I will write a more complete alog in the next few days, but I wanted to put this one in early so that operators can see it. 

We've known for a long time that at LLO they always pull the OMC off resonance during the CARM offset reduction, and they've told us that they can't lock if it is flashing.  We know that we can lock when it is flashing here, which might be because our output Faraday has better isolation.

In the two weeks of data that I looked at, we've locked DRMI 64 times; 33 of these locks resulted in low noise locks and 31 of them failed during the acquisition procedure.  Of these 31 failures, about 9 happened as the OMC was flashing.  We also had about 12 successful locking attempts where the OMC flashed.  OMC flashing probably wasn't our main problem yesterday, but it can't hurt and it might help to pull the OMC off resonance during the CARM offset reduction.

Operators:   If you see that the OMC is flashing (visible on the OMC trans camera, right under the AS camera on the center video screen), you can pull it off resonance by opening the OMC control screen and moving the PZT offset slider in the upper right hand quadrant of the screen.  Even if you don't see the OMC flashing on the camera, it can't hurt to pull the PZT away from the offset it was left at, which is the offset where it was locked in the last lock.  I will try to add this to guardian soon and will let people know when I do.

Comments related to this report
cheryl.vorvick@LIGO.ORG - 19:39, Saturday 03 October 2015 (22215)

screenshot with slider circled in a red dashed line

Images attached to this comment
sheila.dwyer@LIGO.ORG - 09:28, Monday 05 October 2015 (22237)

Evan Sheila

Here is a plot of the 24 locklosses we had from Sept 17th to Oct 2nd during the early stages of the CARM offset reduction.  The DCPD sum is shown in red while the black line shows H1:LSC-POPAIR_B_RF18_I_NORM (before the phase rotation) to help in identifying the lockloss time.  You can see that in many of these locklosses the OMC was flashing right before or as we lost lock. This is probably because the AS port was flashing right before lockloss and the OMC is usually nearly on resonance.

We looked at 64 total locking attempts in which DRMI locked; 24 of these resulted in locklosses in the early stages of CARM offset reduction (before the DHARD WFS are engaged).  In 28 of these 64 attempts the OMC DCPD sum was above 0.3mA sometime before we start locking the OMC, so the OMC flashed in 44% of our attempts. We lost lock 16 out of 28 times that the OMC was flashing (57% of the time) and 8 out of 36 times that the OMC was not flashing (22% of the time).
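As a quick cross-check of those numbers (a sketch only, using the counts quoted above):

# Sanity check of the lock-acquisition statistics above.
flashing_attempts = 28    # OMC DCPD sum above 0.3 mA before we start locking the OMC
quiet_attempts = 36       # the remaining DRMI locks out of 64
flashing_locklosses = 16
quiet_locklosses = 8
print("flashed in %.0f%% of attempts" % (100.0 * flashing_attempts / 64))                           # ~44%
print("lockloss rate while flashing: %.0f%%" % (100.0 * flashing_locklosses / flashing_attempts))   # ~57%
print("lockloss rate while quiet: %.0f%%" % (100.0 * quiet_locklosses / quiet_attempts))            # ~22%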

We will make the guardian pull the OMC off resonance before starting the acquisition sequence during tomorrow's maintenance window.
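As a rough illustration only (not the actual guardian change; the state name, channel name, and offset step below are placeholders), the guardian step might look something like this:

# Illustrative sketch of a guardian state that parks the OMC PZT away from
# resonance before the CARM offset reduction.  The channel name and offset
# value are placeholders, not the real H1 configuration.
from guardian import GuardState

class OMC_OFF_RESONANCE(GuardState):
    request = False

    def main(self):
        # move the PZT offset away from where it sat in the last lock
        ezca['OMC-PZT2_OFFSET'] = ezca['OMC-PZT2_OFFSET'] + 10.0  # placeholder channel and step
        return True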

H1 PEM (PEM)
cheryl.vorvick@LIGO.ORG - posted 17:08, Saturday 03 October 2015 (22212)
Eve Shift Start: one hour of data, Robert now doing PEM injections

LHO General
corey.gray@LIGO.ORG - posted 16:10, Saturday 03 October 2015 (22201)
DAY Ops Summary

TITLE:  10/3 DAY Shift:  15:00-23:00UTC (08:00-16:00 PDT), all times posted in UTC     

STATE OF H1:  Observation at 75Mpc

SUPPORT:  Evan, Daniel, Robert, Jordan

INCOMING OPERATOR:  Cheryl

SHIFT SUMMARY:  

Started off shift with H1 in DOWN mode.  Established a game plan by talking with Mike & Daniel over the phone.  Evan arrived and started investigations while I re-aligned, and after a few hours we had H1 back up.  It has been up with a decent range around 75Mpc (with a few of the usual ETMy saturations).

LLO is having issues with useism.  Robert is taking advantage of their downtime to run PEM injections.

If you squint your eyes and look sideways, you can see the useism beginning to trend down.  Winds are around 12-15mph.  

Shift Activities:

H1 CDS
corey.gray@LIGO.ORG - posted 15:16, Saturday 03 October 2015 (22210)
GraceDB Querying Failure & Code Start Up

Happened to notice a red box on the Ops Overview medm which said there was a GraceDB Querying Failure.  I wanted to figure out when this occurred, but was not able to (I trended the channel in DV, used conlog, and looked on the CAL_INJ medm [where this red box also lives]).  Maybe this happened between 21:00-22:00?

Checking alogs, I found a link to the following instructions.  It was not clear to me what state I was in: did I need a "code start up" or a "code restart"?  I followed the "code start up" instructions.

On the operator2 terminal (which is generally logged in as controls), I did the following:

ssh controls@h1fescript0
cd /opt/rtcds/userapps/release/cal/common/scripts
screen

python ext_alert.py run

This gave a GREEN "GraceDB querying Successful" box on the CAL_INJ medm (and the box entirely disappeared on the Ops Overview).

As I detached from the screen environment, I did not get a process ID # for the screen session; I only had a "[detached]" prompt.  So I did NOT store a file with a PID# under the home directory.  Maybe I should have followed the restart instructions?  Spelling out how to determine which error state one is in would help with the instructions here.
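In case it helps next time (a sketch only, not a vetted procedure): the PID of a detached screen session can be recovered afterwards from screen -ls, e.g.

# Sketch: recover the PID of a detached screen session after the fact and
# write it to a file in the home directory.  Assumes only one detached session
# belongs to this user on h1fescript0; adjust the match if there are several.
import os
import re
import subprocess

out = subprocess.run(["screen", "-ls"], capture_output=True, text=True).stdout
pids = re.findall(r"^\s*(\d+)\..*\(Detached\)", out, re.MULTILINE)
if pids:
    with open(os.path.expanduser("~/ext_alert_screen.pid"), "w") as f:
        f.write(pids[0] + "\n")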

I'm also assuming it's OK to restart/start this computer while in Observation Mode (because I did).

H1 ISC
daniel.sigg@LIGO.ORG - posted 13:52, Saturday 03 October 2015 (22209)
RF Levels in Electronics Room

The attached plot shows some unexpected variations in the controls signals of the two EOM drivers over the past 20 hours. This is also visible in the RF power monitors of all distribution amplifiers in the electronics room. This may be due to temperature fluctuations, but we don't seem to have a temperature readout in the electronics room. The LVEA shows no variation in temperature.

Images attached to this report
H1 General
corey.gray@LIGO.ORG - posted 12:16, Saturday 03 October 2015 (22207)
Mid-Shift Update: H1's Back On The Dancefloor!

H1 Status:

After 22+hrs of being down, H1 is finally back.  The only notable change to the system was Evan's change of an ASC ALS_X Pit Gain.  (Another minor point is the moving of PR3 during alignment; in my experience we don't generally have to move PR3.)

For SDF, ACCEPTED a few RED items:  (before / now)

After a few checks, H1 was taken to Observation Mode and  has been hovering pretty close to 77Mpc.  Robert/Evan noted a low periscope peak around 300Hz.

Robert Injection Request

Whenever L1 is down, Robert is looking to continue PEM injections (per WP#5531) with approval from Landry.

Environmental:

useism channel is hovering around 0.2um/s.  Winds are hovering around 15mph.

H1 ISC
evan.hall@LIGO.ORG - posted 11:30, Saturday 03 October 2015 (22206)
Reduced EX ALS WFS pitch gain

Corey, Daniel, Evan

For the past 24 hours the green transmission through the X arm has been uncharacteristically unstable, sometimes dipping to less than 60% of its maximum value on timescales of a few seconds.

Looking at the quad oplev signals, it seems that this dipping (perhaps unsurprisingly) is mostly associated with EX pitch. It could be because of wind (there were gusts above 40 mph yesterday), or it could be because of the microseism (the 0.1–0.3 Hz STS bands peaked yesterday around 0.4 µm/s, which is the highest they've been in the past 90 days, excepting earthquakes), or it could be because of something else entirely.

Turning down the EX green WFS pitch gain (H1:ALS-X_WFS_DOF_1_P_GAIN) by a factor of 5 seems to lessen the fluctuations in the transmitted green signal, making it stay within 75% of its maximum. It is a small effect, but it seemed to make an improvement for the transmitted IR light in the CHECK_IR step.

After this change we were able to make it past SWITCH_TO_QPDS and all the way to nominal low noise. It could just be a coincidence, though.
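For the record, a change like this can also be scripted; here is a minimal pyepics sketch (illustrative only; the change above was made by hand, and any such change while observing would show up in SDF):

# Illustrative sketch only: reduce the EX green WFS pitch gain by a factor of 5.
from epics import caget, caput

chan = "H1:ALS-X_WFS_DOF_1_P_GAIN"
old_gain = caget(chan)
caput(chan, old_gain / 5.0)
print("reduced %s from %g to %g" % (chan, old_gain, old_gain / 5.0))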

H1 CDS (AOS, CDS, SUS)
evan.hall@LIGO.ORG - posted 10:14, Saturday 03 October 2015 (22205)
Quad top-stage OSEM spectra

Taken with ALS WFS feedback going to the ETMs (and not the ITMs).

Images attached to this report
H1 General
corey.gray@LIGO.ORG - posted 10:02, Saturday 03 October 2015 (22204)
H1 Status

10/3 DAY Shift:  15:00-23:00UTC (08:00-16:00 PDT), all times posted in UTC   

15:56 - 16:39 Ran through an Initial Alignment

First lock attempt had DRMI lock within 2min, but it dropped out during DARM ON TR.

Support:  Robert, Evan, & Daniel on site.  

Investigations continue.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 09:57, Saturday 03 October 2015 (22203)
CDS model and DAQ restart report, Sunday 27th September - Friday 2nd October 2015

O1 days 10 to 15

model restarts logged for Sun 27/Sep/2015 No restarts reported

model restarts logged for Mon 28/Sep/2015 No restarts reported

model restarts logged for Tue 29/Sep/2015
2015_09_29 08:34 h1isiham5
2015_09_29 11:45 h1nds0
2015_09_29 11:47 h1nds1

Maintenance day. New ISI HAM5 model, NDS work with raw minute trends

model restarts logged for Wed 30/Sep/2015
2015_09_30 14:49 h1nds1
2015_09_30 14:52 h1nds1

Two unexpected restarts of nds1

model restarts logged for Thu 01/Oct/2015
2015_10_01 15:24 h1ioppemmy
2015_10_01 15:24 h1pemmy

Restarts of pemmy while investigating IO Chassis blown fuse

model restarts logged for Fri 02/Oct/2015
2015_10_02 12:11 h1ioppemmy
2015_10_02 12:11 h1pemmy
2015_10_02 12:21 h1ioppemmy
2015_10_02 12:22 h1pemmy

Restart of pemmy front end while AA chassis is being repaired

LHO General
corey.gray@LIGO.ORG - posted 08:13, Saturday 03 October 2015 - last comment - 08:45, Saturday 03 October 2015(22200)
Transition to DAY Shift Update

TITLE:  10/3 DAY Shift:  15:00-23:00UTC (08:00-16:00 PDT), all times posted in UTC     

STATE of H1:  Guardian in DOWN state

Outgoing Operator:  Empty Control Room (Jim was in for part of the OWL shift, but the shift was pseudo-called-off due to H1 issues)

Support:  None

Quick Summary:  The GWI.stat page says H1 has been in the "NOT OK" state for the last 19+hrs.  Will catch up on alog, see if there's anything I can do, and wait for assistance/input.

Comments related to this report
corey.gray@LIGO.ORG - 08:45, Saturday 03 October 2015 (22202)

A Little More:

Mike called, then I chatted with Daniel, and then Sheila called.  More items worth noting:

Seismic:  Winds are below 10mph.  useism (0.1-0.3Hz) has perhaps drifted down a little in the last 12 hrs.  The signal is still high-ish, mostly in the middle of the dashed lines (90th & 50th percentile).  So useism is still higher than normal, and according to Sheila we've not locked much with useism this high (NOT to say this is the reason for the locking issues, just something to NOTE).

Laundry List:  Daniel plans to be in around 10am.  He listed a few items which he'd like us to pursue.  Sheila mentioned issues she noticed last night (i.e. BS oplev glitching and ALS glitching); she said they are bad, but probably not the reason for the locking issues with H1.

  • Run Initial Alignment and return to try locking
  • DRMI: can we lock it alright? (we did last night)
  • Revisit/investigate Suspensions (starting with the end stations).  Try pushing on a suspension and see if we can see this with OSEMs/oplevs
  • Do we have standard spectra for each subsystem?  If we did, one could go through and make cursory checks for anything obviously bad.  If we don't have them, we should work on generating these (at least for unlocked states).
  • Something uncontrolled?  Is there a fuse blown?  How would one diagnose this?
  • Oplevs: compare spectra with other unlocked stretches in the past.

Observatory Mode:  This has been in the "Lock Acquisition" state all night, but we were in the DOWN state.  I'm not sure what we would call last night; a "Broken" state would be nice.  "Other", "Unavoidable", or "Unknown" are the only other states I would use to describe our current state, but they don't offer much information and are too similar to each other.

OK, onto an alignment!

H1 SUS (ISC)
peter.fritschel@LIGO.ORG - posted 06:13, Saturday 03 October 2015 - last comment - 17:46, Thursday 08 October 2015(22199)
Issues with the analog monitors of the low-voltage electro-static driver

The low-voltage electro-static driver (D1500016) includes monitors of the output quadrant drive signals that are sent to ADCs for sampling/monitoring (in a SUS IO chassis). The monitors look at the drive voltage after the normal inputs, test inputs, and parametric instability correction inputs are summed together. Each monitor path has a 1:4 voltage divider to fit the full driver range into the ADC input range.

Looking at these monitor channels for ETMY from a recent lock stretch shows several problems with these monitors as useful readbacks for the electro-static drive signals. In the attached plot, the two traces are:

There are several problems:

  1. The sampling rate of the archived monitor channels is too low. It is set to 256 Hz, and the digital AA filter for that rate starts cutting off at about 85 Hz. The sampling rate should be raised to 2048 Hz.
  2. The monitor path in the driver chassis includes a low-pass filter at 42 Hz. This is just a mistake in the production of these chassis; the low-pass was intended to be at 1 kHz. This has been corrected by Rich for all the spare driver chassis.
  3. More significantly, the monitor channel shows excess noise above 8 Hz or so. Except for some higher amplitude features in the drive (like the calibration lines between 30-40 Hz), the monitor channels are swamped by this excess noise, and are incoherent with the DAC drive. Though not shown, I looked at a time when the interferometer was not locked, so that there was no signal going to the ES-driver: the monitor readback spectrum was at the ADC noise floor of 6e-3 counts/rtHz down to low frequencies (below 10 Hz), so with no signal, the excess noise is not present.

Followup work to do: i) check the behavior of the same channels on L1 for comparison; ii) test the behavior of the monitor channels with a spare unit on the bench, in the presence of a signal

Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 17:46, Thursday 08 October 2015 (22342)

It's analog. We need a usable whitening for this. (Daniel, Keita)

The noise floor is not a digital artefact; the analog gain seems to be too small to see anything useful at 100Hz. Even if we move the 42Hz LPF up to 1kHz, we would still need useful whitening, e.g. two stages of ISC-style whitening.

In the top panel of the first attachment, red, blue and green are the same ETMY LL ESD low-voltage monitor at different points: red is the test point in SUSAUX (2k), blue is the DQ channel of the same (256), and green is in the IOP model (64k) before the signal comes into SUSAUX.

Also shown is the digital output test point (pink and cyan; pink was taken at the same time as red) projected onto the LV monitor by removing the whitening, applying a 40Hz LPF, and adjusting the DC scaling. No wonder we're not seeing anything useful at 100Hz. 

The RMS of the IOP channel is basically 1 count down to 8Hz or so, which means that the noise floor is just ADC noise. The RMS of this same signal only goes up to 180 or so counts at 1Hz. Replacing the 42Hz LPF with 1kHz is not enough for frequencies lower than maybe 400Hz or so.

If we boosted the analog gain by a factor of 100 or so, the RMS at low frequency would become uncomfortably large.

The second attachment shows what happens when there are two stages of ISC whitening (z=[1;1] p=[10;10]) plus 1kHz LPF instead of 42Hz, without changing DC gain.

The RMS between 1 and 10Hz becomes 2000 counts-ish, the RMS for f>10Hz becomes 100 counts-ish, and the monitor noise floor would be at least a factor of 10 larger than the noise floor for f<1kHz.
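As a cross-check of the shape of that whitening, here is a minimal sketch (assuming the zero/pole values quoted above, in Hz, and unity DC gain; not the actual filter file):

# Sketch: frequency response of two stages of ISC-style whitening
# (zeros at 1Hz, poles at 10Hz) plus a 1kHz low pass, normalized to unity DC gain.
import numpy as np
from scipy import signal

f = np.logspace(-1, 3, 500)                              # 0.1 Hz to 1 kHz
zeros = 2 * np.pi * np.array([-1.0, -1.0])               # two zeros at 1 Hz
poles = 2 * np.pi * np.array([-10.0, -10.0, -1000.0])    # two poles at 10 Hz plus the 1 kHz LP
k = np.prod(np.abs(poles)) / np.prod(np.abs(zeros))      # unity gain at DC
_, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
print("gain at 100 Hz: %.0f" % np.interp(100.0, f, np.abs(h)))   # roughly x100 from the two stages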

In the future, when our sensitivity increases by a factor of 3+ or something for f>100Hz and our drive drops by the safe factor, we might have to think about more whitening or more whitening gain. By that time, we might also be able to cut down the low-frequency part (f<5Hz) of the drive on the ESD more aggressively. According to Evan, the ESD-PUM crossover is about 20Hz (alog 19859), so it sounds doable.

[update 0:40 UTC] The third attachment shows the DQ channel (red), test point (blue) and IOP channel (green, which is almost completely masked by blue) measured at the same time up to 116Hz. They're the same except for the decimation filters.

Images attached to this comment
H1 General
jim.warner@LIGO.ORG - posted 03:48, Saturday 03 October 2015 (22198)
Shift Summary

Title: 10/2 OWL Shift 7:00-15:00 UTC

State of H1: Low noise, finally

Shift Summary: Locking problems continue, packing it in

Activity log:

Ed and Jeff both had problems locking today, so I came in to poke around at a few things.

Sheila called and suggested I try fiddling with a couple of things in Guardian, so I did, but I think other things (that I never touched!) are still broken.

7:41 loaded new gain in SWITCH TO QPDS -41111 to -80000, sat at DRMI for 5 mins then -> ENGAGE ASC
8:14 Third Try
8:30 #4, also fails
9:02 increased  Switch to QPDS Tramp, lost lock before I could try it
9:15 Trying longer TR CARM TRAMP
10:00 30 sec Tramp, gain -120000
10:20 Reduce gain to -20000, no good, AS port looked like a split mode
10:35 Packing up, revert all changes, leaving ISC_Lock in down

H1 PEM
gerardo.moreno@LIGO.ORG - posted 23:36, Friday 02 October 2015 (22197)
Removed, repaired and re-installed Mid-Y AA Chassis.

Filiberto, Richard, Gerardo

Per work permit 5529.
The Mid-Y chassis was removed and repaired.  The popped capacitor (C12) on the AA BNC interface board was replaced.  All channels were tested and passed.
Then the gains on channels 1, 2, and 3 were updated by replacing R1 and R4 on the AA BNC interface board.  These channels were tested and 10x gain was noted as expected.
Chassis was returned to the rack and powered up after installing a 5A fuse instead of a 3A.

H1 General
edmond.merilh@LIGO.ORG - posted 23:27, Friday 02 October 2015 (22196)
Shift Summary - OWL Transition

TITLE:  Oct 2 EVE Shift 23:00-07:00UTC (16:00-00:00 PDT), all times posted in UTC

STATE Of H1: Aligning/Locking

LOCK DURATION: N/A

SUPPORT: Sheila, Evan

INCOMING OPERATOR: Jim W

 

Activity log:

00:07 After much ALS alignment tweaking and the wind picking up to around 25mph, re-locking was initiated

02:09  Patrick called to let me know that he’ll be in the H2 building for a little while.

02:22 Patrick done in H2 building

04:57 Evan leaving (Patrick still here). There’s nothing more he can do. There seems to be an instability in ALS. Locking doesn’t make it past Switch to QPDs, if it even gets that far. The majority of the time is spent relocking the IMC when it unlocks while trying to find IR. Winds are still in the upper 20s.

I placed a call to Sheila to get her thoughts on the situation and Evan suggested that maybe Jim stand down on the OWL shift until the ground motion/wind/ALS is sorted out.

05:10 Called Landry about present condition of IFO and waving off the OWL shift

05:20 Called Sigg about present condition of IFO and waving off the OWL shift. The consensus is Call Jim and let him decide to stand down or come in and confirm proper operation of the Seismic systems, make an aLog and leave.

05:30 Called Jim. He’s going to go ahead and come in NOW and check his stuff and probably NOT stay for the entire OWL shift. His call.

06:10 Jim on site

06:24 Called Landry to give him an update.

 

Shift Summary: The long and short: no locking for me tonight. Evan and Sheila seemed to be at an impasse, and Daniel agreed that this trouble be addressed tomorrow during the day. Wind is still blowing ≥25mph. Ground motion is higher than typical. Not much noise above our raised seismic floor from any earthquakes reported as headed our way. Jim is coming in early to assess the seismic systems and probably won’t stay for the OWL shift.

H1 ISC
keita.kawabe@LIGO.ORG - posted 15:54, Thursday 01 October 2015 - last comment - 09:24, Monday 05 October 2015(22154)
Current status of noise bumps that are supposedly from PSL periscope (PeterF, Keita)

Just in case you're wondering why LHO sees two noise bumps at 315 and 350Hz (attached, middle blue) but LLO does not: we don't fully understand it either, but here is a summary.

There are three things here: environmental noise level, PZT servo, and jitter coupling to DARM. Even though the former two explain part of the LLO-LHO difference, they cannot explain all of it, and the coupling at LHO seems to be larger.

Reducing the PSL chiller flow will help but that's not a solution for the future.

Reimplementing PZT servo at LHO will help and this should be done. Squashing it all will be hard, though, as we are talking about the jitter between 300 and 370Hz and there's a resonance at 620Hz.

Reducing coupling is one area that was not well explored. Past attempts at LHO were on top of dubious IMC WFS quadrant gain imbalances.


1. Environmental difference

These bumps are supposed to be from the beam jitter caused by PSL periscope resonances (not from the PZT mirror resonances). In the attached you can see that the bumps in H1 (middle blue) correspond to the bumps in the PSL periscope accelerometer (top blue). (Don't worry, we figured out which server we need to use for DTT to give us correct results.)

Because of the PSL chiller flow difference between LLO and LHO (LHO alog; couldn't find an LLO alog but we have MattH's word), in general the LLO periscope noise level is lower than LHO's. However, the difference in the accelerometer signals is not enough to explain the difference in the IFO.

For example, at 350Hz the LHO PSL periscope is only a factor of 2 noisier than LLO's. At 330Hz, LHO is quieter than LLO by more than a factor of 2. Yet we have a huge hump in DARM at LHO (it gets larger and smaller but never goes away), while LLO DARM is dead flat.

At LLO they do have a servo to suppress noise at about 300Hz, but it shouldn't be doing much, if anything, at 350Hz (see the next section).

So yes, it seems like environmental difference is one of the reasons why we have larger noise.

But the jitter to DARM coupling itself seems to be larger.

Turning down the chiller flow will help but that's not a solution for the future.


2. Servo difference

At LLO there's a servo to squash beam jitter in PIT at 300Hz. LHO used to have it but now it is disabled.

At LLO, the IOOWFS_A_I_PIT signal is used to suppress PIT jitter, targeting the 300Hz peak which was right on some mechanical resonance/notch structure in PZT PIT (which LHO also has), and the servo reduced the noise between about 270 and about 320Hz (LLO alog 19310).

The same servo was successfully copied to LHO with some modification, also targeting the 300Hz bump (except that YAW was more coherent than PIT, so we used the YAW signal), with somewhat less (but not much less) aggressive gain and bandwidth. At that time the 300Hz bump was problematic together with the 250Hz and 350Hz bumps. Look at the plots from alogs 20059 and 20093.

Somehow the 250Hz and 300Hz bumps subsided, and now LHO is suffering from the 315Hz and 350Hz bumps (compare the attached with the above-mentioned alog). Since we never had time to tune the servo filter to target either of the new bumps, and since turning the servo on without modification would make only a marginal improvement at 300Hz while making 250Hz/350Hz somewhat worse due to gain peaking, it was disabled.

Reimplementing the servo to target the 315 and 350Hz bumps will help.  But it's not going to be easy to make this servo wide-band enough to squash everything, because of the 620Hz resonance, which is probably something in the PZT mirror itself (look at the above-mentioned alog 20059 for the open-loop transfer function of the current servo, for example). In principle we can go even wider band, but we'll need more than a 2kHz sampling rate for that. We could stiffen the mount if 620Hz is indeed the mount.


3. Coupling difference

As I wrote in the environmental-difference section above, the accelerometer data and the IFO signal suggest that the coupling is larger at LHO.

There are many jitter coupling measurements at LHO but the best one to look at is this one. We should be able to make a direct comparison with LLO but I haven't looked.

Anyway, it is known that the coupling depends on IMC alignment and OMC alignment (and probably the IFO alignment).

At LHO, IMC WFS has offsets in PIT and YAW in an attempt to minimize the coupling. This is on top of dubious imbalances in IMC WFS quadrant gains at LHO (see alog 20065; the minimum quadrant gain is a factor of 16 smaller than the maximum). We should fix that before spending much time on studying the jitter coupling via alignment.

At LLO, there's no such imbalance and there's no such offset.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 12:58, Saturday 03 October 2015 (22208)

The coupling of these peaks into DARM appears to pass through a null near the beginning of each full-power lock stretch, perhaps indicating that this coupling can be suppressed through TCS heating.

Already from the summary pages one can see that at the beginning of each lock, these peaks are present in DARM, then they go away for about 20 minutes, and then they come back for the duration of the lock.

I looked at the coherence (both magnitude and phase) between DARM and the IMC WFS error signals at three different times during a lock stretch beginning on 2015-09-29 06:00:00 Z. Blue shows the signals 10 minutes before the sign flip, orange shows the signals near the null, and purple shows the signals 20 minutes after the sign flip.

One can also see that the peaks in the immediate vicinity of 300 Hz decay monotonically from the beginning of the lock stretch onward; my guess is that these are generated by some interaction with the beamsplitter violin mode and have nothing to do with jitter.
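For anyone repeating this offline, here is a minimal sketch of the coherence magnitude/phase calculation (with synthetic stand-in data; the real DARM and IMC WFS time series would be fetched from NDS or the frames):

# Sketch of the coherence (magnitude and phase) calculation, with synthetic
# stand-in data.  Sampling rate, segment length, and signals are placeholders.
import numpy as np
from scipy import signal

fs = 2048.0
rng = np.random.default_rng(0)
x = rng.standard_normal(int(600 * fs))         # stand-in for DARM
y = 0.3 * x + rng.standard_normal(x.size)      # stand-in for an IMC WFS error signal

f, coh = signal.coherence(x, y, fs=fs, nperseg=8192)   # magnitude-squared coherence
f, pxy = signal.csd(x, y, fs=fs, nperseg=8192)         # cross spectrum, for the phase
phase_deg = np.degrees(np.angle(pxy))
print("coherence near 300 Hz: %.2f" % np.interp(300.0, f, coh))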

Images attached to this comment
keita.kawabe@LIGO.ORG - 09:24, Monday 05 October 2015 (22235)

Addendum:

alog 20051 shows the PZT to IMCWFS transfer function (without servo) for PIT and YAW. Easier to see which resonance is on which DOF.
