H1 SUS
betsy.weaver@LIGO.ORG - posted 15:58, Thursday 11 June 2015 - last comment - 17:24, Thursday 11 June 2015(19081)
ESD/oplev charge measurements - continued saga

After the LHO IFO dropped lock around 1 pm, Leo and I jumped on making more charge measurements of ETMx and ETMy via the oplev/ESDs.  However, we immediately reproduced the earlier problem: there seemed to be no coherence in the ETMy measurements.  We fumbled around for a while chasing the missing coherence in signals here and there and then invoked Kissel.  Sure enough, the ESD drive on ETMy wasn't driving in either "LO" or "HI" voltage mode.  Richard headed to End Y and re-discovered that turning things off and reterminating the DAC cable fixed the problem - see alog 13335 "Lock up of the DAC channels".

The ETMy measurements are now in progress.  Again.

Meanwhile, we attempted to look at why the ETMx LL ESD drive wasn't working and confirmed what Keita saw last night - it doesn't work.  We see the requested drive in the monitor channels, which means the signal goes bad somewhere beyond the chassis (toward the chamber).  As usual, "no one has been down there", but we're not sure how we can use this ESD to lock the IFO with a dead channel.  Richard reports that he will go there tomorrow to investigate.

Comments related to this report
betsy.weaver@LIGO.ORG - 17:24, Thursday 11 June 2015 (19084)

In case anyone ISN'T tired of hearing that the ETMx OL isn't healthy, here's a snapshot of ugly glitching in the YAW readback.  (Jason has stated numerous times that he plans to swap this laser when we give him time to do it.)  Just recording again here since we have to stare at it for ESD charge measurements.  Ick.

Images attached to this comment
H1 General
jeffrey.bartlett@LIGO.ORG - posted 15:55, Thursday 11 June 2015 (19080)
Day Shift Summary
Observation Bit: Undisturbed 

08:00 – Take over from Nutsinee – IFO locked at LSC_FF 
08:07 Robert – Working at End-Y, but not in the VEA
08:29 - Adjust ISS diffracted power from 11.1% (-2.06 V) to 8.3% (-2.09 V)
09:32 - Beam tube cleaning crew start work on X-Arm
09:55 Christina & Karen – Open/close rollup door to OSB receiving
10:00 Christina & Karen – Staging garb in the OSB cleaning area
11:52 – Beam cleaning crew stopping for lunch 
13:05 - Lockloss
13:10 IFO in down state for ETM charge testing
13:30 Richard – Going to End-X
13:31 – Beam tube cleaning crew start work on X-Arm
13:50 Gerardo – Going to Mid-Y to get a cable
14:12 Gerardo – Back from Mid-Y
14:25 Richard – Back from End-X
14:36 Richard – Going to End-Y to check cabling for ETM charge testing
15:25 Robert – Going to Beam tube near Mid-Y
15:46 – Beam tube cleaning crew finished for the day
16:00 – Hand over to Travis
H1 DetChar (DetChar, ISC)
andrew.lundgren@LIGO.ORG - posted 15:07, Thursday 11 June 2015 - last comment - 14:55, Friday 12 June 2015(19079)
RF beatnote whistles in PRCL
Andy, Jess

Since the 79.2 MHz fixed frequency source was powered off (see the earlier alog), we have not seen any RF beatnote whistles in DARM at Hanford. We do still see them in DARM at Livingston, but the mechanism there is much more complicated than at Hanford: it is not the PSL VCO beating against a fixed frequency.

Since we still see whistles at Hanford in auxiliary channels, we thought we'd revisit them to see if that gives us clues for L1. We looked at the lock of Jun 11 starting at 6 UTC. We see whistles in PRCL, LSC-MCL, and sometimes in SRCL. Choosing two times, we find that the whistles correspond exactly to a beatnote of the PSL VCO frequency with a fixed frequency of 78.5 MHz (or something within a few hundred Hz of that). So it's the same simple mechanism as before, just against a different frequency.

Attached are plots of two times in PRCL where we predict the exact shape of the whistle, using IMC-F as a proxy for the PSL VCO frequency. SRCL and MCL are similar. We'll go back and check other locks to see if there's any evidence for other frequencies or shifts in the frequency.
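
For reference, here is a minimal sketch of how such a prediction can be made, assuming imc_f holds the IMC-F signal calibrated in Hz (as a proxy for the PSL VCO frequency excursion) and that the VCO sits near 79.2 MHz; the nominal frequency and calibration are assumptions for illustration, not the actual analysis code:

    import numpy as np

    f_vco_nominal = 79.2e6   # assumed PSL VCO rest frequency [Hz]
    f_fixed = 78.5e6         # fixed frequency implicated in the PRCL whistles [Hz]

    def whistle_track(imc_f):
        """Predicted whistle frequency vs time: the beatnote of the
        (proxy) PSL VCO frequency against the fixed oscillator."""
        f_vco = f_vco_nominal + np.asarray(imc_f)
        return np.abs(f_vco - f_fixed)

Overlaying whistle_track(imc_f) on a PRCL spectrogram should trace the glitches exactly if the simple beatnote mechanism holds.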
Images attached to this report
Comments related to this report
andrew.lundgren@LIGO.ORG - 14:55, Friday 12 June 2015 (19103)
First, a question. Is there something at 34.7 MHz in the center station? I see this frequency on channel SYS-TIMING_C_FO_A_PORT_11_SLAVE_CFC_FREQUENCY_4 - the PSL VCO is number 5 on this fanout. The numerology just about works with 2*34.7+9.1 = 78.5, i.e. that frequency gets doubled and is seen in the 9 MHz demod of the POP and REFL PDs.

Jeff wanted me to also post an expanded version of the whistles story that I had sent by email, so here it is:

To be clear, H1 *did* have whistles in DARM. Once we got the secret decoder ring that told us how to figure out the PSL VCO frequency, we realized that the whistles in DARM were precisely a beatnote of that frequency with 79.2 MHz.

As a result of that investigation, that fixed frequency was turned off, and the whistles in DARM went away. Huge success!

We also see whistles in SRCL, PRCL, and MCL. We haven't been worrying about them, since they're not in DARM. But just yesterday we decided to see if this is also a simple mechanism. As you can see from the alog, it is - at least at the times we've checked, the whistles are a beatnote against something at 78.5 MHz. 

I realized just a little while ago that these channels all come from 9 MHz demods, so maybe the frequency we're looking for is actually 69.5 or 87.5 MHz. We'll check whether these signals show up on POP or REFL at either LF or 45 MHz.

We know that LLO is a very different mechanism. Not only do they not have this particular fixed oscillator, but these whistles:

1. Come from multiple, very different VCO frequencies.
2. Have beat frequencies that don't seem stable even within a lock.
3. Do not follow the PSL VCO frequency; they are more like 4 to 7 times the VCO frequency. The multiplier doesn't seem stable, and sometimes the whistles seem to decouple a bit from the VCO frequency.
4. Show up at the LF, 9 MHz, and 45 MHz PDs, on REFL and POP. Different crossings show up in different photodiodes and with different strengths.

So you can see why we want to tackle Hanford first. I was hoping it would be more complicated but tractable, and that would give us a clue to what's going on in L1.

In case you're wondering whether this is academic, the CBC search loves triggering on the whistles at LLO, and it's hard to automatically reject these because they look like linear or quadratic broadband chirps. I think these give the burst search trouble as well.
We'll probably spend another day nailing down the case at Hanford, then look over all ER7 to figure out what was going on at L1. 
H1 SYS (GRD, ISC, SYS)
sheila.dwyer@LIGO.ORG - posted 14:09, Thursday 11 June 2015 - last comment - 19:01, Thursday 29 August 2019(19075)
a look at duty cycle for the first week of ER7

I've taken a look at guardian state information from the last week, with the goal of getting an idea of what we can do to improve our duty cycle. The main message is that we spent 63% of our time in the nominal low noise state, 13% in the down state (mostly because DOWN was the requested state), and 8.7% of the week trying to lock DRMI. 

Details

I have not taken into account whether the intent bit was set during this time; I'm only considering the guardian state.  These are based on 7 days of data, starting at 19:24:48 UTC on June 3rd.  The first pie chart shows the percentage of the week the guardian spent in each state; for legibility, states that took up less than 1% of the week are unlabeled, and some of the labels are slightly out of position, but you can figure out where they should be if you care.  The second chart shows the same breakdown for the unlocked time only. 
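
As an illustration of the bookkeeping, here is a minimal sketch, assuming the state channel has already been fetched (e.g. via nds2) as a numpy array of state numbers sampled at 16 Hz; the state-number-to-name map is an illustrative placeholder, not the real ISC_LOCK index:

    import numpy as np

    def state_fractions(state_data, names):
        """Fraction of the stretch spent in each guardian state.
        state_data: samples of H1:GRD-ISC_LOCK_STATE_N (16 Hz)."""
        values, counts = np.unique(state_data.astype(int), return_counts=True)
        total = float(len(state_data))
        return {names.get(v, str(v)): c / total for v, c in zip(values, counts)}

    # placeholder state numbers, for illustration only
    names = {0: 'DOWN', 101: 'LOCKING_ARMS_GREEN', 600: 'NOMINAL_LOW_NOISE'}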

DOWN as the requested state

We were requesting DOWN for 12.13% of the week, or 20.4 hours.  DOWN could be the requested state because operators were doing initial alignment, we were in the middle of maintenance (4 hours), or it was too windy for locking.  Although I haven't done any careful study, I would guess that most of this time was spent on initial alignment.

There are probably three ways to reduce the time spent on initial alignment:

Bounce and roll mode damping

We spent 5.3% of the week waiting in states between LOCK_DRMI and LSC_FF when the state was already the requested state.  Most of this was after RF DARM, and is probably because people were trying to damp bounce and roll modes or waiting for them to damp.  A more careful study of how well we can tolerate these modes being rung up will tell us whether it is really necessary to wait, and better automation using the monitors can probably help us damp them more efficiently. 

Locking DRMI

We spent 8.7% of the week locking DRMI, or 14.6 hours.  During this time we made 109 attempts to lock it (10 of these ended in ALS locklosses), and the median time per lock attempt was 5.4 minutes.  From the histogram of DRMI locking attempt times (3rd attachment), you can see that the mean locking time is inflated by 6 attempts that took more than half an hour, presumably either because DRMI was not well aligned or because the wind was high. It is probably worth checking whether these were really due to wind or something else.  This histogram includes unsuccessful as well as successful attempts.  

Probably the most effective way to reduce the time we spend locking DRMI would be to prevent locklosses later in the lock acquisition sequence, which we have had many of this week.

Locklosses

A more careful study of locklosses during ER7 needs to be done. The last plot attached here shows which guardian state we lost lock from; the locklosses are fairly well distributed throughout the lock acquisition process. Locklosses from states after DRMI has locked are more costly to us, while locklosses from the "locking arms green" state don't cost us much time and are expected as the optics swing after a lockloss. 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 18:13, Friday 19 June 2015 (19251)

I used the channel H1:GRD-ISC_LOCK_STATE_N to identify locklosses for the lockloss pie chart here; specifically, I looked for times when the state was LOCKLOSS or LOCKLOSS_DRMI.  However, this is a 16 Hz channel and we can move through the lockloss state faster than 1/16th of a second, so doing this I missed some of the locklosses.  I've added 0.2 second pauses to the lockloss states to make sure they will be recorded by this 16 Hz channel in the future.  This could be a bad thing, since we should move to DOWN quickly to avoid ringing up suspension modes, but we can try it for now.  
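
For the record, a sketch of the kind of scan involved, assuming the channel is in hand as a 16 Hz numpy array and that the LOCKLOSS / LOCKLOSS_DRMI state numbers are known (the numbers below are placeholders). The 1/16 s sampling is exactly why a faster pass through the state can be missed:

    import numpy as np

    fs = 16                    # H1:GRD-ISC_LOCK_STATE_N sample rate [Hz]
    LOCKLOSS_STATES = [2, 3]   # placeholder numbers for LOCKLOSS, LOCKLOSS_DRMI

    def lockloss_times(state_data, t0):
        """GPS times when the state channel enters a lockloss state.
        Only visits lasting at least one sample (1/16 s) show up here."""
        s = np.isin(state_data.astype(int), LOCKLOSS_STATES)
        entries = np.flatnonzero(s[1:] & ~s[:-1]) + 1
        return t0 + entries / float(fs)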

A version of the lockloss pie chart that spans the end of ER7 is attached.  

Images attached to this comment
jameson.rollins@LIGO.ORG - 08:38, Sunday 21 June 2015 (19258)

I'm bothered that you found instances of the LOCKLOSS state not being recorded.  Guardian should never pass through a state without registering it, so I'm considering this a bug.

Another way you should be able to get around this in the LOCKLOSS state is by just removing the "return True" from LOCKLOSS.main().  If main returns True, the state will complete immediately, after only the first cycle, which apparently can happen in less than one CAS cycle.  If main does not return True, then LOCKLOSS.run() will be executed, which defaults to returning True if not specified.  That will give the state one extra cycle, which will bump its total execution time to just above one 16th of a second, thereby ensuring that the STATE channels will be set at least once.
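
In code form, a minimal sketch of the suggested pattern (assuming the standard guardian GuardState interface; the shutdown actions are placeholders):

    from guardian import GuardState

    class LOCKLOSS(GuardState):
        def main(self):
            # immediate shutdown actions go here
            # note: no "return True" -- so run() executes on the next cycle
            pass

        def run(self):
            # the extra cycle pushes the state's lifetime just above 1/16 s,
            # so the 16 Hz STATE channels record it at least once
            return True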

jameson.rollins@LIGO.ORG - 12:56, Sunday 21 June 2015 (19260)

reported as Guardian issue 881

sheila.dwyer@LIGO.ORG - 16:19, Monday 22 June 2015 (19278)

Note that the corrected pie chart includes times that I interpreted as locklosses but that were in fact times when the operators made requests that sent the IFO to DOWN.  So the message is that the true picture of locklosses is somewhere between the first and the second pie charts. 

I realized this new mistake because Dave asked me for an example of a gps time when a lockloss was not recorded by the channel I grabbed from nds2, H1:GRD-ISC_LOCK_STATE_N.  An example is

1117959175 

I got rid of the return True from main and added run methods that just return True, so hopefully next time around the saved channel will record all locklosses. 

H1 SEI
hugh.radkins@LIGO.ORG - posted 10:44, Thursday 11 June 2015 (19074)
Compare Local Signals with EQ predictor

If you look at the Online Earthquake Arrival Predictor, it shows the 5.7 mag EQ in Japan, along with many others.  This is just the largest in the last day or so.  I've overlaid the predicted arrival time of the Rayleigh waves on the STS2 ground seismometers and the BS stage 2 inertial sensors--see attached.  Unfortunately the IFO was not in lock at the time.

Anyway, we still need a bigger EQ to assess the prediction against the ground sensors.  Just to be more thorough, I shifted and extended the plot a little and now include the predicted arrival times of the P & S waves, second attachment.  I got rid of the Z ground seismometer and added the transmitted arm power as an IFO indicator.  Did the arrival of the P wave ring up the BS ISI?  Ehh, I don't think so given the ground seismo, unless the prediction is off by ~90 seconds.  But I think the prediction could be off by that much, given the complexity of the travel paths and the assumptions about velocities.  With this data set I'd say the ISI is being beat around by the locking attempts, not by this red earthquake.

Images attached to this report
H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 09:35, Thursday 11 June 2015 (19071)
Adjust ISS Diffracted Power
Adjusted the ISS diffracted power from 11.1% (-2.06 V) to 8.3% (-2.09 V) 
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:34, Thursday 11 June 2015 (19070)
Clear HEPI Watchdog Counters
Cleared the HEPI L4C watchdog counter for ITMY. All others were green and clear.
LHO VE
bubba.gateley@LIGO.ORG - posted 08:51, Thursday 11 June 2015 (19069)
Beam Tube Washing
Scott L. Ed P.
6/8/15
The following are some of the dirtiest areas we have seen on the X-arm.
Cleaned 58 meters ending 8.4 meters north of HNW-4-068.

6/9/15
Maintenance day, which means we can use the Hilti HEPA vacuum, the most effective vacuum for cleaning out the support tubes. We have been holding off on using this vacuum during ER7 because of the loud thump emitted by the internal cleaning system of the HEPA filter.
We will go back and clean 2 sets of support tubes and cap them, as well as move forward to clean as many supports as reasonable before the end of maintenance day at noon.
Cleaned 30 meters of tube ending at HNW-4-070. Test results posted on this A-log.
 
Robert Schofield came out to the area we are cleaning to look at our procedure and tube-cleaning methods, to investigate possible glitches seen by the control room operators during lock.

6/10/15
Remove lights, cords, vacuum machines, and all related equipment from enclosure and thoroughly clean all equipment, then relocate to next section north.
Started cleaning at HNW-4-070, cleaned 15 meters of tube.

To date we have cleaned a total of 3418 meters of tube. 



Non-image files attached to this report
H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 08:32, Thursday 11 June 2015 (19068)
Ending scheduled burst hardware injections for ER7
Since we got a few coincident burst hardware injections this morning, and the rest of ER7 (now through Sunday 8:00 PDT) will be split between local measurements/commissioning and running, I have disabled the scheduled burst hardware injections for the rest of ER7.  To be precise, I left the next few dozen in the schedule but I set their amplitudes to zero, in order to test the long-term behavior and stability of tinj; tinj will call awgstream to inject them, but because awgstream will just add zero strain to the instrument, these should have no effect and will not appear in ODC bits or database segments.

There is still a plan to add a stochastic injection sometime before ER7 ends, if time permits.
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 08:04, Thursday 11 June 2015 - last comment - 19:34, Friday 12 June 2015(19067)
Owl Shift Summary

00:00 The ifo locked right before I came in. Wind speed is <20 mph. 90 mHz blend filter is used for BSC2.

           I noticed the BS oplev sum is saturated (> 80000 counts). Is this alright? It's been around this value for 10+ days.

01:55 There's a big bump at ~30 Hz that caused a big dip in the BNS range. SUS oplev plots didn't show anything suspicious. The bump at this frequency happened throughout the night, just not as big.

02:00 A 4.7 mag earthquake in Ecuador shook PR3 a little and the BNS range dropped slightly (from 61 Mpc to 60 Mpc), but that's all it did. No WD tripped. 

08:00 We've been locked for 8+ hours and still going strong at 61 Mpc! We had 5+ hours of coincidence with Livingston tonight. Handing the ifo over to Jeff B.

Comments related to this report
daniel.hoak@LIGO.ORG - 17:16, Thursday 11 June 2015 (19083)

Judging from the normalized spectrograms on the summary pages, the 30 Hz noise looks like occasional scattering noise, likely from the alignment drives sent to the OMC suspension.  Currently the Guardian sets the OMC alignment gain at 0.2 (for a UGF of around 0.1-0.5 Hz in the QPD alignment loops).  This is probably too high from a scattering-noise perspective; it can be reduced by a factor of two without ill effects.

daniel.hoak@LIGO.ORG - 19:34, Friday 12 June 2015 (19105)DetChar

To follow up on this noise, here is a plot of one of the noise bursts around 20-30 Hz, alongside the OMC alignment control signals.  The noise has the classic scattering-arch shape, and it is correlated with the ANG_Y loop, which sends a large signal to the OMC SUS.  We've seen this kind of thing before.  The start time for the plot is 09:27:10 UTC, June 11 (the time axes of the two plots are a little off, because apparently indexing for mlab PSDs is the hardest thing I've had to do in grad school).

The second plot attached compares the OMC-DCPD_SUM and NULL channels at the time of the noise bursts in the first plot, to a quiet time one minute prior.  The scattering noise is largely coherent between the two DCPDs.
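
For anyone reproducing this, a minimal sketch of a median-normalized spectrogram that brings out scattering arches, assuming the DCPD time series has already been fetched into a numpy array (the file name and sample rate are assumptions):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import mlab

    fs = 16384                              # assumed OMC-DCPD_SUM sample rate [Hz]
    data = np.loadtxt('omc_dcpd_sum.txt')   # hypothetical pre-fetched data
    spec, freqs, t = mlab.specgram(data, NFFT=fs // 4, Fs=fs, noverlap=fs // 8)
    norm = spec / np.median(spec, axis=1, keepdims=True)  # whiten each frequency bin
    plt.pcolormesh(t, freqs, 10 * np.log10(norm), shading='auto')
    plt.ylim(10, 100)                       # arches live at low frequency
    plt.xlabel('Time [s]'); plt.ylabel('Frequency [Hz]')
    plt.show()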

Images attached to this comment
H1 SEI
hugh.radkins@LIGO.ORG - posted 08:00, Thursday 11 June 2015 (19066)
BS ISI stage1 ISO running high wind blends

Jim must have switched these.  The 90 mHz blends are on for X & Y rather than the 45 mHz blends.  The SDF is red for this reason.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 07:56, Thursday 11 June 2015 (19065)
CDS model and DAQ restart report, Wednesday 10th June 2015

model restarts logged for Wed 10/Jun/2015

no restarts reported

H1 INJ (INJ)
peter.shawhan@LIGO.ORG - posted 07:16, Thursday 11 June 2015 (19064)
Transient injections now running at LHO under the 'hinj' account
Dave Barker created a shared 'hinj' account for running hardware injections (alog 19057), and we restarted tinj (the transient injection process) under that account yesterday.  Unfortunately, the injections failed overnight due to awgstream errors.  That was puzzling because executing awgstream on the command line (with zero amplitude or extremely small amplitude) worked fine.  It turns out that different versions of the gds, awgstream and root packages are installed on the LHO injection machine compared to the LLO injection machine, so the environment setup that was copied over from LLO caused awgstream to fail when executed by tinj.  I made a separate environment setup script for LHO and restarted tinj under that, and now it seems to be working fine.

After doing a zero-amplitude test injection at 1118063333 (which should have had no effect on anything), I modified the schedule to more promptly do some burst injections (coincident at both sites), at GPS times 1118063933, 1118067123, and 1118067543.  (The actual signal comes a second or two later than those waveform file start times.)  As of this writing, the first of those was picked up by the burst pipelines: see https://gracedb.ligo.org/events/view/G159516 .  We'll see about the others.
H1 SUS (ISC, SUS)
keita.kawabe@LIGO.ORG - posted 02:04, Thursday 11 June 2015 - last comment - 04:46, Thursday 11 June 2015(19060)
Charging localization measurement (Leo, Betsy, Evan, Kiwamu, Daniel, Keita)

Related:

Den's alog 16624

Den's alog 16727

Summary:

Many things are dubious.

  1. EX LL ESD is broken, as the ESD-to-length response is 3 orders of magnitude smaller than the others. Probably a shoddy connection somewhere between the driver and the ESD electrode, as the voltage readback looks normal.
  2. It looks as if either the sign of EX ESD output is flipped (positive digital out induces negative voltage) or the sign of EY ESD output is flipped but not both.
  3. I'm just assuming that the sign convention for CAL-CS_DARM_EXTERNAL_DQ is length(X)-length(Y), and that it's correct, without any confirmation.

Despite these things, it seems as if the charges on the back are on the same order as reported in Den's alog 16624.

If we assume that the sign of EY ESD is wrong and we still take 1. into account, the charges are calculated as:

      front     back
X     4.4 nC    1.1 nC
Y     2.2 nC    1.2 nC

This looks semi-reasonable.

If we assume that the sign of EX ESD is wrong and we still take 1. into account, the charges are:

      front     back
X     5.7 nC   -0.9 nC
Y    -6 nC     -0.4 nC

I don't like that the signs are all over the place.

If we assume that everything is correct except that the EX LL is broken (i.e. we ignore 2. above but take 1. into account), the charges are:

      front     back
X     4.4 nC    1.1 nC
Y    -6 nC     -0.4 nC

Again the signs are all over the place.

These are based on the same calculation as Den's alog 16624.

I'm assuming that the sign convention of CAL-CS_DARM_EXTERNAL_DQ is length(X)-length(Y) (i.e. positive when X stretches and Y shrinks).

Anyway, no matter how you look at the data, the back surface charges are quite similar to what was reported in Den's alog (except for the signs that don't make much sense for the latter two tables).

We tried similar measurements as described in Den's alog 16727, but the angle data for X was unusable (no coherence at all). If you're interested in the Y data, all measurements were saved in Betsy's template directory.

The gist of the measurements:

Differences between EX and EY measurements:

Fishy sign of ESDs (Go to the floor and figure out):

EY ESD length drivealign matrix has a negative DC gain while the corresponding matrix for EX is positive even though the LSC DARM output matrix already takes care of the sign difference necessary for DARM control for EX and EY.

It looks as if either the bias line has a wrong sign for one ETM but not the other, or LL/LR/UL/UR lines have a wrong sign for one ETM but not the other.

Raw-ish data and calculations:

Measured the zero-bias transfer coefficients from the ESD segments and the ring heater (top and bottom combined) to the calibrated DARM channel, in meters/volt, at around 80 Hz. After taking the TFs of the drivers and the DAC V/cts into account, they are:

      LL [m/V]    LR [m/V]    UL [m/V]    UR [m/V]    ESD combined [m/V]    Ring Heater [m/V]
EX    +1.3E-18    +2.2E-15    +1.0E-15    +5.6E-16    +3.8E-15 * 4/3        -6.7E-16
EY    +5.6E-16    +6.5E-16    +1.4E-15    +1.5E-15    +4.1E-15              +1.9E-15

Positive data is actually about 24 deg (Y arm) or 30 deg (X arm) delayed, while negative data is about 210 deg (X arm) delayed.

EX LL is not working. The coherence is very high and the voltage readback looks OK, but the response is 3 orders of magnitude smaller than the other quadrants. EX LL did not change much when the nominal EX ESD bias was put back on.

I multiplied the ESD combined data by 4/3 only for EX to take into account that the EX LL driver is not working.

The force-to-length transfer function at 80 Hz is -1/(M*(2*pi*80 Hz)^2) = -1E-7 [m/N] (negative as the phase is 180 degrees relative to DC).

Also, the above is the TF to DARM, which is supposed to be X length - Y length. In order to move to a sign convention where positive means that the ETM moves closer to ITM, the sign of the X data should be flipped.

Combining these, the above raw-ish data is converted to N/V as:

      LL [N/V]    LR [N/V]    UL [N/V]    UR [N/V]    ESD combined [N/V]    Ring Heater [N/V]
EX    +1.3E-11    +2.2E-8     +1.0E-8     +5.6E-9     +3.8E-8 * 4/3         -6.7E-9
EY    -5.6E-9     -6.5E-9     -1.4E-8     -1.5E-8     -4.1E-8               -1.9E-8

The signs of this table don't really make sense (positive ESD electrode potential should move ETMX and ETMY in the same direction if the charge has the same sign).

Anyway, from here, you solve Den's rough formula:

F_RH / V_RH = A_front * Q_front + A_back * Q_back

F_ESD / V_ESD = B_front * Q_front + B_back * Q_back

A_front = 1 / 0.20 [1/m] ;  A_back = -1 / 0.04 [1/m]

B_front = 1 / 0.20 [1/m] ;  B_back = +1 / 0.04 [1/m]
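
As a cross-check, a minimal sketch of the arithmetic: solving that 2x2 system with numpy for the table values above, under the "EY ESD sign flipped" hypothesis (this is a sketch of the calculation described here, not Den's actual script):

    import numpy as np

    # coupling coefficients from the rough formula [1/m]
    A_front, A_back = 1 / 0.20, -1 / 0.04   # ring heater
    B_front, B_back = 1 / 0.20,  1 / 0.04   # ESD

    M = np.array([[A_front, A_back],
                  [B_front, B_back]])

    # measured (F_RH/V_RH, F_ESD/V_ESD) in [N/V] from the table above;
    # EX ESD is scaled by 4/3 for the dead LL quadrant, EY ESD sign flipped
    measured = {'EX': (-6.7e-9, 3.8e-8 * 4 / 3),
                'EY': (-1.9e-8, 4.1e-8)}

    for etm, rhs in measured.items():
        q_front, q_back = np.linalg.solve(M, rhs)
        print('%s: Q_front = %.1f nC, Q_back = %.1f nC'
              % (etm, q_front * 1e9, q_back * 1e9))
    # reproduces the first table: X ~ (4.4, 1.1) nC, Y ~ (2.2, 1.2) nC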

Comments related to this report
rainer.weiss@LIGO.ORG - 04:46, Thursday 11 June 2015 (19063)
It is too confusing to be sure. My guess is also that there is charge on the back of the test mass. So let me suggest
that, since you are now going to enter the chamber and use the top gun, the 10" flange with the off-axis nipple
be mounted on both the X and Y ETM chambers with the associated small gate valves, so that we have this capability
of the best location for discharge in the future. Do not remove the gate valves from the middle flanges.
H1 AOS
jim.warner@LIGO.ORG - posted 00:03, Thursday 11 June 2015 (19062)
Shift Summary

~16:00 Locked IFO

~16:45 Winds pick up, lock loss

Many frustrating hours of lock losses at REFL_TRANS or thereabouts

23:15 Lock IFO again.

H1 AOS
darkhan.tuyenbayev@LIGO.ORG - posted 20:57, Tuesday 09 June 2015 - last comment - 08:41, Monday 22 June 2015(19031)
Cavity pole fluctuations calculated from Pcal line at 540.7 Hz

Sudarshan, Kiwamu, Darkhan,

Abstract

According to the PCALY line at 540.7 Hz, the DARM cavity pole frequency dropped by roughly 7 Hz from the 17 W configuration to the 23 W configuration (alog 18923). The frequency remained constant after the power increase to 23 W. This certainly impacts the GDS and CAL-CS calibration, by 2% or so above 350 Hz.

Method

Today we've extracted CAL-DELTAL data from ER7 (June 3 - June 8) to track the cavity pole frequency shift in this period. Only stretches when DARM was stably locked can be used, so for our calculation we filtered the data, keeping only times when the guardian state was > 501.

From an FFT at a single frequency, it is possible to obtain the DARM gain and the cavity pole frequency from the magnitude and phase of a DARM line at a frequency where the drive phase is known or not changing. Since the phase of the resulting FFT does not depend on the optical gain but does depend on the cavity pole, looking at the phase essentially gives us the cavity pole information (see for example alog 18436). However, we do not know the phase offset due to the time delay and perhaps some uncompensated filter. We've decided to focus on cavity pole frequency fluctuations (Delta f_p) rather than trying to find the actual cavity pole frequency. In our calculations we have assumed that the change in phase comes entirely from cavity pole frequency fluctuations.

The phase of the DARM optical plant can be written as

phi = arctan(- f / f_p),

where f is the Pcal line frequency and f_p is the cavity pole frequency.

Since this equation has no dependence on the optical gain, the measured value of phi is, to our knowledge, not disturbed by changes in the optical gain. Introducing a first order perturbation in f_p, one can linearize the above equation to the following:

               f_p^2 + f^2
(Delta f_p) = ------------- (Delta phi)
                    f

An advantage of using this linearized form is that we don't have to do an absolute calibration of the cavity pole frequency, since it focuses on fluctuations rather than absolute values.

Results

Using f_p = 355 Hz, the cavity pole frequency measured at a particular time (see alog 18420), and f = 540.7 Hz (the Pcal EY line frequency), we can write Delta f_p as

Delta f_p = 773.78 * (Delta phi)
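
As an illustration, a minimal sketch of extracting the line phase with a single-bin DFT and applying the linearized conversion (the sample rate and data handling are assumptions; this mirrors the relation above, not the actual analysis script):

    import numpy as np

    fs = 16384.0     # assumed CAL-DELTAL sample rate [Hz]
    f_line = 540.7   # Pcal EY line frequency [Hz]
    f_p0 = 355.0     # reference cavity pole frequency [Hz]

    def line_phase(x, t0):
        """Phase [rad] of the Pcal line in a stretch x starting at GPS t0."""
        t = t0 + np.arange(len(x)) / fs
        return np.angle(np.sum(x * np.exp(-2j * np.pi * f_line * t)))

    coeff = (f_p0**2 + f_line**2) / f_line   # = 773.78 Hz/rad
    # for two stretches x1 (reference) and x2 of CAL-DELTAL:
    #   delta_phi = line_phase(x2, t2) - line_phase(x1, t1)
    #   delta_fp = coeff * delta_phi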

Delta f_p trend based on ER7 data is given in the attached plot: "Delta phi" (in degrees) in the upper subplot and "Delta f_p" (in Hz) in the lower subplot.

Judging by the overall trend in Delta f_p, we can say that the cavity pole frequency dropped by about 7 Hz after June 6, 3:00 UTC; this corresponds to the time when the PSL power was changed from 17 W to 23 W (see lho alog 18923, [WP] 5252)

Delta phi also shows fast fluctuations of about +/- 3 degrees, and right now we do not know the reason for this "fuzziness" of the measured phase.

Filtered channel data was saved into:

aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS/H1-calib_1117324816-1117670416_501above.txt (@ r737)

Scripts and results were saved into:

aligocalibration/trunk/Runs/ER7/H1/Scripts/PCAL_TRENDS (@ r736)
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 13:36, Thursday 11 June 2015 (19078)

Clarifications

Notice that this method does not give an absolute value of the cavity pole frequency. The equation

Delta f_p = 773.78 * (Delta phi)

gives a first order approximation of the change in cavity pole frequency with respect to the change in phase of the Pcal EY line in CAL-DELTAL at 540.7 Hz (with the assumptions given in the original message).

Notice that (Delta phi) in this equation is in "radians", i.e. (Delta f_p) [Hz] = 773.78 [Hz/rad] (Delta phi) [rad].

shivaraj.kandhasamy@LIGO.ORG - 08:41, Monday 22 June 2015 (19266)

Darkhan, did you also look at the low frequency line (~30 Hz), both amplitude and phase? If these variations come from just the cavity pole, then there shouldn't be any change in either amplitude or phase at low frequencies (below the cavity pole). If there is a change only in gain, then it is optical gain. Any change in the phase would indicate a more complex change in the response of the detector.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 11:23, Wednesday 03 June 2015 - last comment - 10:26, Thursday 11 June 2015(18826)
Storage Dry Box May Data
Posted are data for the two long term storage dry boxes (DB1 & DB4) in use in the VPW. Measurement data looks good, with no issues or problems being noted. I will collect the data from the desiccant cabinet in the LVEA during the next maintenance window.   
Non-image files attached to this report
Comments related to this report
jeffrey.bartlett@LIGO.ORG - 10:26, Thursday 11 June 2015 (19073)
This is the data for the 3IFO desiccant cabinet in the LVEA.
Non-image files attached to this comment
H1 SUS
betsy.weaver@LIGO.ORG - posted 15:33, Thursday 28 May 2015 - last comment - 12:50, Thursday 11 June 2015(18669)
More ETM charge measurements

After a measurement of charge on each ETM yesterday, I took a few more on each today.  Attached show the results trended with the measurements taken in April and Jan of this year.  There appears to be more charge on the ETMs than in previous measurements, although there is quite a spread in the measurements.  The ion pumps at the end stations are valved in.

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 11:53, Friday 29 May 2015 (18695)

Note, the measurement was saturating on ETMy, so Kiwamu pointed me to switch the ETMy HI/LOW voltage mode and BIO state.  This made the measurement run without saturation.  Attached is a snapshot of the settings I used for the ETMy charge measurement.

Images attached to this comment
leonid.prokhorov@LIGO.ORG - 12:50, Thursday 11 June 2015 (19076)
1. I think the results of the ETMY charge measurements on May 28 are probably mistaken. I didn't see any correlation in the dependence of pitch and yaw on the DC bias. 
2. It seems there was a very small response in the ETMX LL quadrant in these charge measurements; the other ETMX quadrants are OK. This is consistent with the results of June 10: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19049 
Images attached to this comment