Reports until 16:00, Sunday 04 October 2015
H1 General
corey.gray@LIGO.ORG - posted 16:00, Sunday 04 October 2015 (22225)
DAY Ops Summary

TITLE:  10/4 DAY Shift:  15:00-23:00UTC (08:00-16:00PDT), all times posted in UTC     

STATE OF H1:  In Observation Mode at 72Mpc

SUPPORT:  Vinny, Robert

INCOMING OPERATOR:  Cheryl

SHIFT SUMMARY:  Quiet shift with only a small break for PEM injections.

Shift Activities:

H1 CDS
corey.gray@LIGO.ORG - posted 12:47, Sunday 04 October 2015 (22228)
h1tw

Raw minute-trend writer, h1tw0, has been popping up medm message windows warning of "virtual circuit disconnects" (and h1tw0 boxes on the DAQ Detail go WHITE for a few seconds every few minutes).

H1 AOS
corey.gray@LIGO.ORG - posted 12:12, Sunday 04 October 2015 (22227)
LLO GraceDB Failure: LHO Operators Please Remember To Contact LLO About Alerts/Triggers

LLO has had a GraceDB querying failure for the last few shifts/days.

Want to reiterate the importance of contacting them whenever we receive an Alert on VerbalAlarms.  The Alert/Trigger Site Response Checklist (L1500117), laminated at the Ops Work Station, states that the operator at each site must contact the other to confirm they received the alarm (the current state [LLO GraceDB Failure] is an example of the importance of this step).

H1 General
corey.gray@LIGO.ORG - posted 11:41, Sunday 04 October 2015 - last comment - 12:01, Sunday 04 October 2015(22224)
GRB Alarm: Stand-down 18:11-19:11UTC!

Received GRB Alert at 18:11UTC.  Going through the checklist (L1500117).

Comments related to this report
corey.gray@LIGO.ORG - 12:01, Sunday 04 October 2015 (22226)

We've both restarted our sessions on TeamSpeak (I rebooted the computer since ours would not allow me to open anything; upon reboot, TeamSpeak opened automatically [thanks, Ryan!]).  We are both now re-connected.

H1 CDS
corey.gray@LIGO.ORG - posted 08:43, Sunday 04 October 2015 (22222)
FOM/MEDM Note (from yesterday's shift)

Yesterday when taking H1 to Observation Mode, Evan & I noticed a RED SDF on video0 (I think it was for PEMEX or EY), but we did not see it on our SDF screens at our work stations.  I reopened the SDF and the RED went away.  The medm was not frozen, because we were Accepting channels and they would go green before we noticed this errant PEM RED.  Just thought it was something interesting.

H1 AOS
corey.gray@LIGO.ORG - posted 08:27, Sunday 04 October 2015 - last comment - 08:46, Sunday 04 October 2015(22219)
Transition to DAY Shift Update

TITLE:  10/4 DAY Shift:  15:00-23:00UTC (08:00-16:00PDT), all times posted in UTC

STATE of H1:  Observation Mode with Avg of 74Mpc

Outgoing Operator:  JimW

Support:  Vinny

Quick Summary:  useism continues a slow trend down (at about 0.15um/sec).  Winds hovering around 12mph.  

Comments related to this report
corey.gray@LIGO.ORG - 08:29, Sunday 04 October 2015 (22220)

Terramon has just come up with a RED warning of a 5.6 Peruvian earthquake whose Rayleigh wave (0.7um/s) is due here in a minute, at 15:30:38UTC.  We'll see what happens.

corey.gray@LIGO.ORG - 08:32, Sunday 04 October 2015 (22221)

L1 just went down at 15:30.  Terramon said the EQ's Rayleigh waves (of 1.4um/s) would arrive there at 15:17UTC.  

So we might not be out of the woods yet...watching 0.03-0.1Hz (all three axes have yet to move up at the same time and all are still under 0.1um/s...I've seen us drop out when all three go above that velocity...but that was a few weeks ago before the DHARD filter).

No signs of anything on tidal or ASC control strip tools either.
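The "all three axes above 0.1um/s" rule of thumb above can be sketched as follows. This is a hypothetical helper for illustration only, not a real monitor; the axis names and threshold are taken from the text:

```python
def all_axes_elevated(vel_um_s, threshold=0.1):
    """Return True only when every ground-motion axis exceeds the
    band-limited RMS velocity threshold (in um/s)."""
    return all(v > threshold for v in vel_um_s.values())

# Only one axis is elevated here, so by this rule we would expect to survive.
print(all_axes_elevated({"X": 0.05, "Y": 0.12, "Z": 0.08}))  # False
```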

corey.gray@LIGO.ORG - 08:46, Sunday 04 October 2015 (22223)

It's been 15min since the R-wave arrival estimate, so I'm assuming we rode through the EQ (it was barely observable here on seismic bands, range, striptools).  Time to make breakfast/coffee.

H1 General
jim.warner@LIGO.ORG - posted 08:05, Sunday 04 October 2015 (22218)
Shift Summary

Title: 10/3 OWL Shift 7:00-15:00 UTC

State of H1: Low noise, observing 75 Mpc

Shift Summary: Quiet night

Activity log:

Nothing happened. Quiet night, Corey's lock from yesterday made it through the night.

H1 General
jim.warner@LIGO.ORG - posted 05:15, Sunday 04 October 2015 (22217)
Mid Shift Update

Quiet night at LHO. Wind ~10mph, seismic relatively low, lock from yesterday continues.

H1 General (PEM)
cheryl.vorvick@LIGO.ORG - posted 00:14, Sunday 04 October 2015 (22216)
OPS EVE Summary:

TITLE:  10/3 EVE Shift:  23:00-07:00UTC (Oct.4) (16:00-23:59PT), all times posted in UTC     

STATE OF H1:  Observation at 77Mpc

SUPPORT:  Robert, Jordan, Sheila

INCOMING OPERATOR:  Jim

SHIFT SUMMARY:  

LLO is still having issues with useism.  

Robert did one round of injections, and produced many ETMY saturations in 2 minutes - he may need to redo this measurement tomorrow night.

Jordan did one PEM measurement and would like to take another.  The first was about 30 minutes and the second should be about this long as well.

Shift Activities:

00:00:42UTC, Oct. 4th - Robert's PEM injections start

01:34:37UTC - Robert's PEM injections end

03:35:37UTC - Jordan's PEM injections start

04:06:47UTC - Jordan's PEM injections end
H1 ISC
sheila.dwyer@LIGO.ORG - posted 18:00, Saturday 03 October 2015 - last comment - 09:28, Monday 05 October 2015(22213)
We should probably be pulling the OMC off resonance during CARM offset reduction

I started to look at our locking attempts over the last two weeks, especially trying to understand our difficulty yesterday.  I will write a more complete alog in the next few days, but I wanted to put this one in early so that operators can see it. 

We've known for a long time that at LLO they always pull the OMC off resonance during the CARM offset reduction, and they've told us that they can't lock if it is flashing.  We know that we can lock when it is flashing here, which might be because our output Faraday has better isolation.

In the two weeks of data that I looked at, we've locked DRMI 64 times; 33 of these locks resulted in low noise locks and 31 of them failed during the acquisition procedure.  Of these 31 failures, about 9 happened as the OMC was flashing.  We also had about 12 successful locking attempts where the OMC flashed.  OMC flashing probably wasn't our main problem yesterday, but it can't hurt and it might help to pull the OMC off resonance during the CARM offset reduction.

Operators:   If you see that the OMC is flashing (visible on the OMC trans camera right under the AS camera on the center video screen), you can pull it off resonance by opening the OMC control screen and moving the PZT offset slider, which is in the upper right hand quadrant of the screen.  Even if you don't see the OMC flashing on the camera, it might not hurt to pull the PZT away from the offset it is left at (the offset where it was locked in the last lock).  I will try to add this to guardian soon and will let people know when I do.
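The "pull the PZT away from its saved offset" step can be sketched in code. This is only an illustration of the logic; the step size and slider limits below are hypothetical, not the real MEDM slider range:

```python
def detuned_offset(current, step=10.0, lo=-100.0, hi=100.0):
    """Return an offset `step` away from `current`, staying inside the
    slider's range [lo, hi] (hypothetical limits for illustration)."""
    candidate = current + step
    if candidate > hi:
        # Near the top of the range: step down instead of up.
        candidate = current - step
    return max(lo, min(hi, candidate))

print(detuned_offset(95.0))  # 85.0 (stepping up would leave the range)
```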

Comments related to this report
cheryl.vorvick@LIGO.ORG - 19:39, Saturday 03 October 2015 (22215)

screenshot with slider circled in a red dashed line

Images attached to this comment
sheila.dwyer@LIGO.ORG - 09:28, Monday 05 October 2015 (22237)

Evan Sheila

Here is a plot of the 24 locklosses we had from Sept 17th to Oct 2nd during the early stages of the CARM offset reduction.  The DCPD sum is shown in red while the black line shows H1:LSC-POPAIR_B_RF18_I_NORM (before the phase rotation) to help in identifying the lockloss time.  You can see that in many of these locklosses the OMC was flashing right before or as we lost lock. This is probably because the AS port was flashing right before lockloss and the OMC is usually nearly on resonance.

We looked at 64 total locking attempts in which DRMI locked; 24 of these resulted in locklosses in the early stages of CARM offset reduction (before the DHARD WFS are engaged).  In 28 of these 64 attempts the OMC DCPD sum was above 0.3mA sometime before we start locking the OMC, so the OMC flashed in 44% of our attempts. We lost lock 16 out of 28 times that the OMC was flashing (57% of the time) and 8 out of 36 times that the OMC was not flashing (22% of the time).
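The quoted percentages can be reproduced from the counts in this comment (a quick sketch for checking the arithmetic, not the actual analysis code; note 16/28, not 16/18, matches the quoted 57%):

```python
total_attempts = 64
flashing = 28             # attempts with OMC DCPD sum above 0.3 mA
locklosses_flashing = 16  # locklosses among the flashing attempts
locklosses_quiet = 8      # locklosses among the non-flashing attempts

frac_flashing = flashing / total_attempts                          # -> 44%
loss_rate_flashing = locklosses_flashing / flashing                # -> 57%
loss_rate_quiet = locklosses_quiet / (total_attempts - flashing)   # -> 22%
print(f"{frac_flashing:.0%} {loss_rate_flashing:.0%} {loss_rate_quiet:.0%}")
```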

We will make the guardian pull the OMC off resonance before starting the acquisition sequence during tomorrow's maintenance window.

H1 PEM (PEM)
cheryl.vorvick@LIGO.ORG - posted 17:08, Saturday 03 October 2015 (22212)
Eve Shift Start: one hour of data, Robert now doing PEM injections

LHO General
corey.gray@LIGO.ORG - posted 16:10, Saturday 03 October 2015 (22201)
DAY Ops Summary

TITLE:  10/3 DAY Shift:  15:00-23:00UTC (00:00-8:00PDT), all times posted in UTC     

STATE OF H1:  Observation at 75Mpc

SUPPORT:  Evan, Daniel, Robert, Jordan

INCOMING OPERATOR:  Cheryl

SHIFT SUMMARY:  

Started off shift with H1 in DOWN mode.  Established a game plan by talking with Mike & Daniel over the phone.  Evan arrived and started investigations while I re-aligned, and after a few hours we had H1 back up.  It has been up with a decent range around 75Mpc (with a few of the usual ETMy saturations).

LLO is having issues with useism.  Robert is taking advantage of their downtime to run PEM injections.

If you squint your eyes and look sideways, you can see the useism beginning to trend down.  Winds are around 12-15mph.  

Shift Activities:

H1 CDS
corey.gray@LIGO.ORG - posted 15:16, Saturday 03 October 2015 (22210)
GraceDB Querying Failure & Code Start Up

Happened to notice a RED box on the Ops Overview medm which said there was a GraceDB Querying Failure.  I wanted to figure out when this occurred, but I was not able to (I trended the channel in DV, used conlog, and looked on the CAL_INJ medm [where this RED box also lives]).  Maybe this happened between 21-22:00?

So, checking alogs, I found a link to the following instructions.  It was not clear to me what state I was in: did I need a "code start up" or a "code restart"?  I followed the "code start up" instructions.

On the operator2 terminal (which is generally logged in as controls), I did the following:

ssh controls@h1fescript0
cd /opt/rtcds/userapps/release/cal/common/scripts
screen

python ext_alert.py run

This gave a GREEN "GraceDB querying Successful" box on the CAL_INJ medm (and the box entirely disappeared on the Ops Overview).

As I detached from the screen environment, I did not get a process ID # for the screen session; I only had a "[detached]" prompt.  So I did NOT store a file with a PID# under the home directory.  Maybe I should have followed the restart instructions?  Guidance on determining which error state one is in would help with the instructions here.
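For next time, a detached session's PID can be recovered afterwards from `screen -ls`: each session is listed as PID.tty.host. A sketch parsing a sample line (the line format is assumed from GNU screen; the real session is gone, so a hard-coded sample stands in for live output):

```python
# Sample `screen -ls` output line; the number before the first dot in the
# session name is the PID of the screen process.
sample_line = "\t12345.pts-0.opsws\t(Detached)"
session = sample_line.split()[0]   # "12345.pts-0.opsws"
pid = int(session.split(".")[0])   # reattach later with: screen -r <pid>
print(pid)  # 12345
```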

I'm also assuming it's OK to restart/start this computer while in Observation Mode (because I did).

H1 ISC
daniel.sigg@LIGO.ORG - posted 13:52, Saturday 03 October 2015 (22209)
RF Levels in Electronics Room

The attached plot shows some unexpected variations in the controls signals of the two EOM drivers over the past 20 hours. This is also visible in the RF power monitors of all distribution amplifiers in the electronics room. This may be due to temperature fluctuations, but we don't seem to have a temperature readout in the electronics room. The LVEA shows no variation in temperature.

Images attached to this report
H1 ISC
keita.kawabe@LIGO.ORG - posted 15:54, Thursday 01 October 2015 - last comment - 09:24, Monday 05 October 2015(22154)
Current status of noise bumps that are supposedly from PSL periscope (PeterF, Keita)

Just in case you're wondering why LHO sees two noise bumps at 315 and 350Hz (attached, middle blue) but not at LLO, we don't fully understand either but here is the summary.

There are three things here: environmental noise level, PZT servo, and jitter coupling to DARM. Even though the former two explain part of the LLO-LHO difference, they cannot explain all of it, and the coupling at LHO seems to be larger.

Reducing the PSL chiller flow will help but that's not a solution for the future.

Reimplementing PZT servo at LHO will help and this should be done. Squashing it all will be hard, though, as we are talking about the jitter between 300 and 370Hz and there's a resonance at 620Hz.

Reducing coupling is one area that was not well explored. Past attempts at LHO were on top of dubious IMC WFS quadrant gain imbalances.


1. Environmental difference

These bumps are supposed to be from the beam jitter caused by PSL periscope resonances (not from the PZT mirror resonances). In the attached you can see that the bumps in H1 (middle blue) correspond to the bumps in PSL periscope accelerometer (top blue). (Don't worry, we figured out which server we need to use for DTT to give us correct results.)

Because of the PSL chiller flow difference between LLO and LHO (LHO alog, couldn't find LLO alog but we have MattH's words), in general LLO periscope noise level is lower than LHO. However, the difference in the accelerometer signal is not enough to explain the difference in IFO.

For example, at 350Hz the LHO PSL periscope is only a factor of 2 noisier than LLO. At 330Hz, LHO is quieter than LLO by more than a factor of 2. Yet we have a huge hump in DARM at LHO; it becomes larger and smaller but never goes away, while LLO DARM is dead flat.

At LLO they do have a servo to suppress noise at about 300Hz, but it shouldn't be doing much, if anything, at 350Hz (see the next section).

So yes, it seems like environmental difference is one of the reasons why we have larger noise.

But the jitter to DARM coupling itself seems to be larger.

Turning down the chiller flow will help but that's not a solution for the future.


2. Servo difference

At LLO there's a servo to squash beam jitter in PIT at 300Hz. LHO used to have it but now it is disabled.

At LLO, the IOOWFS_A_I_PIT signal is used to suppress PIT jitter, targeting the 300Hz peak which was right on some mechanical resonance/notch structure in PZT PIT (which LHO also has), and the servo reduced the noise between about 270 and about 320Hz (LLO alog 19310).

The same servo was successfully copied to LHO with some modification, also targeting the 300Hz bump (except that YAW was more coherent than PIT, so we used the YAW signal), with somewhat less (but not much less) aggressive gain and bandwidth. At that time the 300Hz bump was problematic together with the 250Hz and 350Hz bumps. Look at the plots from alog 20059 and 20093.

Somehow 250Hz and 300Hz subsided, and now LHO is suffering from 315Hz and 350Hz bumps (compare the attached with the above mentioned alog). Since we never had time to tune the servo filter to target either of the new bumps, and since turning the servo on without modification is going to make marginal improvement at 300Hz and will make 250Hz/350Hz somewhat worse due to gain peaking, it was disabled.

Reimplementing the servo to target 315 and 350Hz bumps will help.  But it's not going to be easy to make this servo wide band enough to squash everything because of 620Hz resonance, which is probably something in the PZT mirror itself (look at the above mentioned alog 20059 for open loop transfer function of the current servo, for example). In principle we can go even wider band, but we'll need more than 2kHz sampling rate for that. We could stiffen the mount if 620Hz is indeed the mount.


3. Coupling difference

As I wrote in the environment difference, from the accelerometer data and IFO signal, it seems as if the coupling is larger at LHO.

There are many jitter coupling measurements at LHO but the best one to look at is this one. We should be able to make a direct comparison with LLO but I haven't looked.

Anyway, it is known that the coupling depends on IMC alignment and OMC alignment (and probably the IFO alignment).

At LHO, IMC WFS has offsets in PIT and YAW in an attempt to minimize the coupling. This is on top of dubious imbalances in IMC WFS quadrant gains at LHO (see alog 20065; the minimum quadrant gain is a factor of 16 smaller than the maximum). We should fix that before spending much time on studying the jitter coupling via alignment.

At LLO, there's no such imbalance and there's no such offset.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 12:58, Saturday 03 October 2015 (22208)

The coupling of these peaks into DARM appears to pass through a null near the beginning of each full-power lock stretch, perhaps indicating that this coupling can be suppressed through TCS heating.

Already from the summary pages one can see that at the beginning of each lock, these peaks are present in DARM, then they go away for about 20 minutes, and then they come back for the duration of the lock.

I looked at the coherence (both magnitude and phase) between DARM and the IMC WFS error signals at three different times during a lock stretch beginning on 2015-09-29 06:00:00 Z. Blue shows the signals 10 minutes before the sign flip, orange shows the signals near the null, and purple shows the signals 20 minutes after the sign flip.

One can also see that the peaks in the immediate vicinity of 300 Hz decay monotonically from the beginning of the lock stretch onward; my guess is that these are generated by some interaction with the beamsplitter violin mode and have nothing to do with jitter.

Images attached to this comment
keita.kawabe@LIGO.ORG - 09:24, Monday 05 October 2015 (22235)

Addendum:

alog 20051 shows the PZT to IMCWFS transfer function (without servo) for PIT and YAW. Easier to see which resonance is on which DOF.
