H1 TCS (ISC)
evan.hall@LIGO.ORG - posted 18:37, Sunday 02 August 2015 - last comment - 10:09, Monday 03 August 2015(20146)
TCS test in progress

Kiwamu, Stefan, Evan

We are trying to minimize the coupling of frequency and intensity noise into DARM by tuning the central heating on the IX CP.

The following excitations have been set up:

The amplitudes were chosen so that each line has an SNR of 50 or so in OMC DCPD sum with a 10 s FFT. Each demodulator demodulates OMC DCPD sum at the appropriate frequency, and then lowpasses I and Q with a 100 mHz, 4th-order Butterworth filter.
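For reference, each demodulator is a standard digital lock-in. A minimal offline sketch of the equivalent operation (not the front-end code; the sample rate and line frequency are whatever the excitation setup uses):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def demodulate(dcpd_sum, fs, f_line):
    """Lock-in demodulation of a DCPD sum time series at one line frequency."""
    t = np.arange(len(dcpd_sum)) / fs
    i_raw = dcpd_sum * np.cos(2 * np.pi * f_line * t)
    q_raw = dcpd_sum * np.sin(2 * np.pi * f_line * t)
    # 100 mHz, 4th-order Butterworth lowpass on each quadrature
    sos = butter(4, 0.1, btype='low', fs=fs, output='sos')
    return sosfilt(sos, i_raw), sosfilt(sos, q_raw)
```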

At 2015-08-03 01:19:45 Z we changed the IX CP heating power from 0.23 W to 0.36 W.

At 2015-08-03 02:57:25 Z we changed the IX CP heating power from 0.36 W to 0.53 W.

At 2015-08-03 04:26:20 Z we changed the IX CP heating power from 0.53 W to 0.41 W.
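The power steps themselves are a single EPICS write; a hypothetical snippet of the equivalent operation (the channel name is a placeholder, not necessarily the real TCS record):

```python
import time
from epics import caput

# Hypothetical record name for the IX CP central heating request
CP_POWER = 'H1:TCS-ITMX_CO2_CP_POWER_REQUEST'

def step_cp_power(watts):
    """Step the CP heating power and print the UTC time for the log."""
    caput(CP_POWER, watts)
    print(time.strftime('%Y-%m-%d %H:%M:%S Z', time.gmtime()), '->', watts, 'W')

step_cp_power(0.36)  # e.g. the first step, 0.23 W -> 0.36 W
```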


Additionally:

Comments related to this report
evan.hall@LIGO.ORG - 21:59, Sunday 02 August 2015 (20154)

Stefan has reverted the rewiring on the CARM board.

We are leaving the injected frequency line on so we can watch it as the interferometer settles into its new thermal state.

stefan.ballmer@LIGO.ORG - 22:42, Sunday 02 August 2015 (20155)
Also, we further increased the ISS gains: the first loop went up by 10dB, the second loop by 6dB. No immediate noise improvement was visible in DARM.
lisa.barsotti@LIGO.ORG - 10:09, Monday 03 August 2015 (20159)ISC
I looked at OMC SUM/NULL during the long lock last night, after the frequency noise injection was turned off.
There is no significant difference between the beginning and the end of the lock. The excess of noise was of the order of 10% shot noise level, similarly to the night before. The highest excess of noise I have seen is ~15%, corresponding to  a few days ago , July 31st.
Images attached to this comment
H1 CDS (CDS, DetChar, SEI)
sheila.dwyer@LIGO.ORG - posted 16:20, Sunday 02 August 2015 - last comment - 17:52, Monday 03 August 2015(20137)
ETMX IOP DACKILL, glitches in DARM

Jamie, Sheila, everyone,

Over the past several days, TJ's verbal alarms have been warning us about ETMX software watchdog trips which aren't really happening.  This is interesting, though, since we've noticed that sometimes this seems coincident with a huge glitch in DARM that can be seen in the spectrum.  The verbal alarm script is checking the channel H1:IOP-SEI_ETMX_DACKILL_STATE, which sometimes jumps to a value of 3 for about a second and then comes back to 0.
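A minimal sketch of the kind of check the script performs (using pyepics; the flagging logic here is an illustration, not the actual verbal alarm code):

```python
from epics import camonitor

def flag_dackill(pvname=None, value=None, timestamp=None, **kw):
    # The state normally sits at 0; flag any transient excursion (e.g. to 3)
    if value != 0:
        print('%s -> %s at %s' % (pvname, value, timestamp))

camonitor('H1:IOP-SEI_ETMX_DACKILL_STATE', callback=flag_dackill)
# (keep the process alive, e.g. with epics.poll() in a loop, to receive updates)
```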

Three incidents from Friday night happened in the 10 to 20 seconds preceding these times (UTC):

8/1/2015 7:37:40, 5:32:10, 4:00:00

In one of these incidents, a huge glitch is visible in the DARM time series before the DACKILL state changed.

Two questions probably need further investigation: is DACKILL behaving the way we want it to, and are the glitches in DARM causing the DACKILL state to change, or is something else causing both the DARM glitches and the change in DACKILL state?

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 17:52, Monday 03 August 2015 (20178)DetChar

Dave and Jim suggested two more channels to look at for this time when there was an obvious glitch in DARM and the SEI_ETMX_DACKILL state changed a second or so later.  Indeed, the SUS IOP state word changed at the same time as the DACKILL state, although there is no timing error.  Like the DACKILL state change, this seems to happen after the glitch.

Images attached to this comment
H1 ISC (GRD)
evan.hall@LIGO.ORG - posted 06:01, Sunday 02 August 2015 (20144)
24 W, no ITM oplev damping

Matt, Lisa, Hang, Evan

Tonight we went to full power, turned on the new boost and cutoff in dHard (along with a small lead filter around 3 Hz), and turned off the oplev damping on the ITMs. Then we took an OLTF. Blue shows the new loop with the lead off, and red is the loop with the lead on. So far we've been at full power for more than 3 hours without any sign of instability.

There is some new, untested code sitting in the ISC_LOCK guardian:

However, this new, untested code is commented out (search ISC_LOCK.py for 'guardbomb' to find it). We can uncomment it the next time there is someone in the control room to supervise the lock acquisition.

Additionally, I got impatient with damping the roll modes during the acquisition sequence, and so I have set the quad coil drivers to be high range until the COIL_DRIVERS state, at which point they are switched to low noise. This seems to work fine (i.e., I didn't notice any glitching on the cameras).

Images attached to this report
Non-image files attached to this report
H1 IOO
matthew.evans@LIGO.ORG - posted 02:53, Sunday 02 August 2015 (20142)
IMC_LOCK Guardian changed: new state PREPARE_ISS

The IMC_LOCK guardian now has a state PREPARE_ISS which tunes the offset slider to bring the second loop servo board out of saturation before engaging the second loop.

This work was previously being done by the CLOSE_ISS state, but since it can take a few minutes and the offset tuning does not disturb the IFO, it can be done in parallel with other changes as soon as the operating power level is reached.  The ISC_LOCK guardian will be changed accordingly.

A secondary advantage is that the IMC_LOCK guardian can return from PREPARE_ISS to LOCKED without going through ISS_ON, which can be useful for testing.
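In Guardian terms the new state looks roughly like this; a sketch only, assuming hypothetical channel names for the second-loop servo output and offset slider:

```python
import numpy as np
from guardian import GuardState

SATURATION_MARGIN = 0.5  # illustrative saturation threshold (V)
OFFSET_STEP = 0.01       # illustrative offset-slider increment

class PREPARE_ISS(GuardState):
    """Walk the second-loop offset until the servo board is out of saturation."""
    def run(self):
        # 'ezca' is the EPICS accessor Guardian provides to state code;
        # both channel names here are placeholders.
        out = ezca['PSL-ISS_SECONDLOOP_SERVO_OUT']
        if abs(out) < SATURATION_MARGIN:
            return True  # done; CLOSE_ISS can now simply engage the loop
        ezca['PSL-ISS_SECONDLOOP_OFFSET'] -= OFFSET_STEP * np.sign(out)
        return False     # not done; Guardian re-runs this method next cycle
```

Because run() returns False until the offset has converged, the state can sit here for a few minutes without blocking anything else, which is exactly why it was split out of CLOSE_ISS.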

Images attached to this report
H1 AOS (SUS)
thomas.abbott@LIGO.ORG - posted 19:29, Saturday 01 August 2015 (20141)
SUS DRIFTMON Updated
Drift monitor thresholds updated with 120-second averages during lock at GPS 1122488064,
Aug 01 2015 18:14:07 UTC.
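A sketch of the equivalent operation with cdsutils (the witness channel, DRIFTMON record names, and threshold band below are placeholders):

```python
import cdsutils
from epics import caput

BAND = 100  # illustrative threshold half-width, in counts

# Placeholder witness channel; the real DRIFTMON record names may differ
chan = 'H1:SUS-ETMX_M0_DAMP_P_INMON'
mean = cdsutils.avg(120, chan)  # 120-second average during lock
caput(chan.replace('INMON', 'DRIFTMON_HIGH'), mean + BAND)
caput(chan.replace('INMON', 'DRIFTMON_LOW'), mean - BAND)
```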
H1 GRD (GRD, ISC, SYS)
jameson.rollins@LIGO.ORG - posted 17:03, Saturday 01 August 2015 (20134)
Well that didn't work: major lockloss snafu caused by change to the edges around ISC_LOCK::DOWN

I didn't quite fully anticipate all of the effects of separating DOWN from the rest of the graph.  In particular, one really bad unanticipated effect was that after lockloss, when ISC_LOCK jumps to the LOCKLOSS state, it doesn't find any path from LOCKLOSS to the last requested state, which causes it to just stall out in LOCKLOSS and not proceed to DOWN.  In other words, DOWN was not run after the lockloss this morning that ended last night's 10-hour lock.

When I came in this morning I therefore found a bit of a poo show that I then had to clean up.  None of the control signals had been shut off, multiple SUS and SEI systems were tripped, and bounce and roll modes were rung up.  Evan and I eventually wrangled everything back under control, and we're now back to locking.

I have reconnected DOWN to the rest of the graph.  NOTE, however, that this problem is not inherent in the fact that DOWN was disconnected.  It's just that once you do something like that, you remove the ability of Guardian to find the right path for you, so you have to be careful to make sure you have all the appropriate jumps to get you where you need to be.  I'll rethink things.
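The fix amounts to restoring an explicit edge out of LOCKLOSS, e.g. (a sketch of Guardian's edge syntax, not the full ISC_LOCK edge list):

```python
# Guardian builds its directed graph from (from_state, to_state) pairs.
# Without the first edge there is no path out of LOCKLOSS, so the node
# stalls there instead of proceeding to DOWN.
edges = [
    ('LOCKLOSS', 'DOWN'),
    ('DOWN', 'READY'),
    # ... the rest of the ISC_LOCK edges
]
```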

Some notable issues:

Lessons learned:

H1 CDS
sheila.dwyer@LIGO.ORG - posted 16:32, Saturday 01 August 2015 - last comment - 12:27, Sunday 02 August 2015(20135)
Lots of EPICS freezes

It seems like the rate of EPICS freezes has increased today; I have seen more than 5 in the last 2 hours.

Comments related to this report
david.barker@LIGO.ORG - 12:27, Sunday 02 August 2015 (20145)

A quick look at my monitors is not showing anything unusual for Saturday. The Dolphin manager reports 5 connection errors spread evenly throughout Saturday (list shown below); my LSC, ASC, and SUSAUXB123 CA monitors only caught the 22:19 event. I'll do some more detailed analysis tomorrow using the EDCU DAQ channels.

08 01 01:29

08 01 12:39

08 01 16:17

08 01 17:27

08 01 22:19

H1 ISC (SUS)
sheila.dwyer@LIGO.ORG - posted 16:12, Saturday 01 August 2015 (20132)
damping bounce on ALS

Sheila, Evan, Jeff B, Corey

Both yesterday and this morning, we had extremely rung up bounce and roll modes (both times because the IFO lost lock and DOWN was not run: yesterday for the reasons explained in comments to alog 20103, today because of a different snafu).

When this happens, we need to damp bounce on ETMY while locked on ALS.  To do this, it seems that we need to use a phase that is +150 degrees compared to the phase we use in full lock.  This phase shift comes from the difference between the DARM loop here and in full lock.  When locked on RF DARM, we need to use +120 degrees compared to the normal settings.
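As a sketch, the phase could be applied on top of the stored full-lock setting (the channel name and full-lock value here are hypothetical, purely for illustration):

```python
FULL_LOCK_PHASE = -30.0  # illustrative stored full-lock damping phase (deg)

# Phase offsets relative to full lock, per DARM sensor (from the text above)
PHASE_OFFSET = {'ALS': 150.0, 'RF_DARM': 120.0, 'FULL_LOCK': 0.0}

def set_bounce_damp_phase(darm_state):
    # Hypothetical ETMY bounce-mode damping phase channel; 'ezca' is the
    # EPICS accessor available in Guardian code
    ezca['SUS-ETMY_M0_DARM_DAMP_V_PHASE'] = (
        FULL_LOCK_PHASE + PHASE_OFFSET[darm_state])
```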

We also had difficulty yesterday with rung up roll modes.  To damp roll we use AS WFS, so we need to get to RF DARM before we try to damp them this way. One difficulty we had was that the roll mode notches in the PUMs were not wide enough (Evan adds that the notch needs to be wide because of the Shapiro effect), so DHARD could saturate because of the roll mode.

Bringing these modes down when they are very rung up is slow, because the actuation authority is small compared to the amount of energy in the mode.  Fortunately, we are normally in the regime where the modes are small, and it only takes a few minutes to damp them.

H1 ISC
lisa.barsotti@LIGO.ORG - posted 16:00, Saturday 01 August 2015 - last comment - 18:34, Saturday 01 August 2015(20131)
High frequency excess noise is ~0.6 times shot + dark noise
Evan, Lisa

This entry is to clarify the fact that the impact of this excess high-frequency noise is actually bigger than the coherence with the ASC channels suggests, as can clearly be seen by comparing OMC NULL and SUM.

For example, around 2 kHz the discrepancy in the noise floor between OMC SUM (total noise) and OMC NULL (shot + dark noise) is about 15%, corresponding to a noise which is about 0.6 times shot + dark.
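The 15% → 0.6 conversion follows because independent noises add in quadrature:

```latex
\frac{\text{excess}}{\text{shot+dark}}
  = \sqrt{\left(\frac{\text{SUM}}{\text{NULL}}\right)^{2} - 1}
  = \sqrt{1.15^{2} - 1} \approx 0.57
```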

The attachment shows OMC SUM/NULL in H1 at low noise (left) compared to L1 (right). 

So, the message is that we are looking for something quite big here.

Images attached to this report
Comments related to this report
lisa.barsotti@LIGO.ORG - 18:34, Saturday 01 August 2015 (20140)
Maybe not surprisingly, this noise is not stationary from lock to lock. Last night the noise was lower than the night before (first plot: compare the OMC SUM green trace with the red trace; NULL was the same in both locks).
Images attached to this comment
H1 AOS
robert.schofield@LIGO.ORG - posted 12:37, Saturday 01 August 2015 - last comment - 18:01, Saturday 01 August 2015(20130)
PEM injections after 17:30 UTC

After 17:30 UTC the interferometer was not undisturbed: I was making PEM injections.

Comments related to this report
lisa.barsotti@LIGO.ORG - 16:20, Saturday 01 August 2015 (20133)DetChar, ISC
The interferometer had been locked undisturbed in low noise for several hours before Robert started his injections.

The range degraded slowly over time, and it was polluted by some huge glitches, similar to what has been observed in the past.
Images attached to this comment
lisa.barsotti@LIGO.ORG - 18:01, Saturday 01 August 2015 (20138)PSL
It turns out that the range was degraded by a changing ISS coupling during the lock. 
Evan and Matt had left the ISS second loop open, as they were having problems with it.

You would see a plot with a DARM spectrum at the beginning and at the end of this lock, showing large peaks appearing in DARM (a factor of a few above the noise floor), if DTT hadn't crashed on me twice while trying to save the plot as a PDF...
Non-image files attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 12:22, Saturday 01 August 2015 (20129)
1155 -1210 hrs. local -> In and out of X-end VEA
 Measured temps of heated areas of RGA -> 95 °C < temps < 120 °C -> Made slight changes to variac settings -> Aux. cart @ 2.5 × 10⁻⁵ Torr (seems high for this configuration)
H1 DAQ (CDS)
david.barker@LIGO.ORG - posted 10:14, Saturday 01 August 2015 - last comment - 10:42, Saturday 01 August 2015(20127)
DAQ still stable, one week on

It's been a week since the DAQ reconfiguration which reduced the NFS/QFS disk loading, and both framewriters continue to be 100% stable. The attached plot shows the restarts of h1fw0 (red circles), h1fw1 (green circles), and the DAQ system as a whole (blue squares) for the month of July. The magenta lines show when h1fw0 and h1fw1 were modified. In the past 7 days, the only restarts of the framewriters are associated with complete DAQ restarts.

Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 10:42, Saturday 01 August 2015 (20128)DAQ
Which indicates the existing aLIGO DAQ frame writers meet/exceed the original design requirement (~10 MB/s of frames to disk). They do not meet the current needs of ~30-40 MB/s, of course.
H1 ISC
evan.hall@LIGO.ORG - posted 03:36, Saturday 01 August 2015 - last comment - 18:01, Saturday 01 August 2015(20126)
Sum and null of OMC DCPDs, noch einmal

Matt, Lisa, Evan

Tonight we looked at the coherences between the OMC DCPD channels and ASC AS C, this time at several different interferometer powers. In the attached plots, green is at 11 W, violet is at 17 W, and apricot is at 24 W.

Evidently, the appearance of excess high-frequency noise in OMC DCPD sum (and the coherence of OMC DCPD sum with ASC AS C) grows as the power is increased. We believe that this behavior rules out the possibility that this excess noise is caused by RIN in the AS port carrier, assuming that any such RIN is independent of the DARM offset and of the PSL power. Since the DARM offset is adjusted during power-up to maintain a constant dc current on the DCPDs, RIN in the AS carrier should result in an optical power fluctuation whose ASD (in W/rtHz) does not vary during the power-up. This is the behavior that we see in the null stream, where the constant DCPD dc currents ensure that the shot-noise-induced power fluctuation is independent of the PSL power.
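To spell out the argument: in terms of the dc power P_dc on the DCPDs, the two contributions have ASDs

```latex
\sqrt{S_P^{\mathrm{shot}}} = \sqrt{2 h \nu P_{\mathrm{dc}}}\,, \qquad
\sqrt{S_P^{\mathrm{RIN}}} = \mathrm{RIN}(f)\, P_{\mathrm{dc}}
\quad [\mathrm{W}/\sqrt{\mathrm{Hz}}]
```

Since the DARM offset servo holds P_dc fixed through the power-up, neither expression changes with PSL power; the null stream behaves this way, but the excess in the sum does not, which points away from AS-carrier RIN.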

Images attached to this report
Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 18:01, Saturday 01 August 2015 (20139)

On a semi-related note, the slope in the OMC DCPDs at high frequencies is mostly explained by the uncompensated preamp poles and the uncompensated AA filter.

Non-image files attached to this comment
H1 ISC
jameson.rollins@LIGO.ORG - posted 01:44, Saturday 01 August 2015 - last comment - 17:16, Saturday 01 August 2015(20125)
ISC_LOCK::DOWN state is back to being a 'goto'

I modified the ISC_LOCK guardian to revert the DOWN state back to being a 'goto'.  This allows you to select the state directly, without having to go to MANUAL.

The reason it had been removed as a 'goto' was because occasionally someone would accidentally request a lower state while the IFO was locked, which would cause the IFO to go back through DOWN to get to the errantly requested state.  To avoid this I implemented some graph shenanigans:  I disconnected DOWN from the rest of the graph, but told it to jump to a new READY state at the bottom of the main connected part of the graph once it's done:

This allows DOWN to be a goto, so it's always directly requestable, but prevents guardian from seeing a path through it to the rest of the graph.  Once DOWN is done, though, it jumps into the main part of the graph at which point guardian will pick up with the last request and move on up as expected.
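Schematically, a sketch using Guardian's goto flag and a jump transition (the actual DOWN body is elided here):

```python
from guardian import GuardState

class DOWN(GuardState):
    goto = True  # directly requestable, no trip through MANUAL needed
    def main(self):
        # shut off control signals, reset everything for reacquisition
        pass
    def run(self):
        return 'READY'  # jump transition back into the main connected graph

class READY(GuardState):
    """Bottom of the main graph; pathfinding to the last request resumes here."""
    def run(self):
        return True
```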

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 17:16, Saturday 01 August 2015 (20136)

Well that didn't work.  See alog 20134.  Separating DOWN from the rest of the graph caused some unanticipated bad effects.  This is actually not inherent in disconnecting DOWN from the rest of the graph, but it needed to be considered a bit more carefully.  See the other post for more info.
