H1 General
jeffrey.bartlett@LIGO.ORG - posted 11:25, Monday 22 June 2015 (19273)
End-X & End-Y Vent Dust counts
Posted below are the dust trends for the past week, during the vents of End-X and End-Y. Plots are as expected. There are large numbers of high counts in the VEAs (monitor VEA1), which are consistent with work activities in the VEAs. The counts within the cleanrooms over the chambers show decent isolation of these spaces from the clouds of particles found in the VEA in general.

   The large spikes within the cleanrooms (monitor VEA2) seen on 6/17 at End-Y and on 6/18 at End-X are the doors being installed. The flat line from VEA2 at End-Y starting on the 18th is the dust monitor being powered down.
Images attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 09:50, Monday 22 June 2015 (19270)
STS2 Update--Unit on HAM2 (located in BG) from ETF Lab looks iffy.

LHO got two STS2s back from Stanford a few days ago and they are in the BG area.  One is installed as ITMY, or STS2-B; the other is on temporary cabling going into HAM2, or STS2-A.  The attached ASD shows that the Y axis on unit A has excess noise below 3 Hz; the noise is much less on X & Z.  It might be reasonable that the tilt condition is different between the BG location for units A & B and the HAM5 location for unit C, so I'm not confident saying anything about the differences at low frequencies between A/B and C.  But A and B are within a couple of meters of each other, and we should expect the tilts for A & B to be very similar.  This further suggests the HAM2 unit has issues.  I should move unit A to its final location by HAM2 and onto its dedicated cabling, but we've been down this swap-fest route before.

The attached data is from Sunday, ~5 am local.

Images attached to this report
H1 PSL (PSL)
edmond.merilh@LIGO.ORG - posted 09:27, Monday 22 June 2015 (19267)
PSL Status report
Laser Status: 
SysStat is good
Front End power is 33.1W (should be around 30 W); 29.1W @ Power Monitor PD
FRONTEND WATCH is GREEN
HPO WATCH is RED

PMC:
It has been locked for 2 days, 21 hours, and 15 minutes (should be days/weeks)
Reflected power is 2.2 Watts  and PowerSum = 25.7 Watts.
(Reflected Power should be <= 10% of PowerSum)

FSS:
It has been locked for 2 d  21h and 29 min (should be days/weeks)
TPD[V] = 1.49V (min 0.9V)

ISS:
The diffracted power is around 7.2% (should be 5-9%)
Last saturation event was 2d 18 h and 5 m ago (should be days/weeks)

NOTES: ISS diffracted power was up to ~9% this morning. I've adjusted the refsignal from -2.12 to -2.14 to yield ~ 7.5%
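
For reference, the nominal ranges quoted in these status reports reduce to a few simple comparisons. A minimal sketch (values typed in by hand from today's numbers; the front-end tolerance is an assumption since the report only says "around 30 W"):

```python
# Sketch of the PSL status checks listed above; values are entered by hand
# rather than read from the EPICS channels.
def psl_status_ok(frontend_w, pmc_refl_w, pmc_powersum_w, iss_diffracted_pct):
    checks = {
        "front-end power near 30 W": 28.0 <= frontend_w <= 35.0,             # assumed tolerance
        "PMC reflection <= 10% of PowerSum": pmc_refl_w <= 0.10 * pmc_powersum_w,
        "ISS diffracted power 5-9%": 5.0 <= iss_diffracted_pct <= 9.0,
    }
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'CHECK'}")
    return all(checks.values())

psl_status_ok(frontend_w=33.1, pmc_refl_w=2.2, pmc_powersum_w=25.7,
              iss_diffracted_pct=7.2)
```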
H1 General
edmond.merilh@LIGO.ORG - posted 09:06, Monday 22 June 2015 (19268)
Morning Meeting Summary

We are currently LASER SAFE in the corner and the ends. The PSL is shuttered and the CO2 LASERs are OFF. End station viewports and tables are closed and locked. Not sure about the on/off status of the ALS.

Code Freeze has been deemed LIFTED

Basic LASER safety training for SURF and STAR students in LSB.

Corey replacing all handles on ISC I/O tables.

SEI - Jim working on blend filters

SUS - No ESDs until interlocks are re-installed; Kissel needs to determine which TFs still need to be run; HAM6 black glass has been cleaned - Heintze talked about how much of it will be installed in terms of the potential hampering of suspension wires.

VAC - https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19252 and https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19255. EY = 2.12e-06; EX = 2.45e-06.

FAC - Beam Tube cleaning ongoing; Ken hanging tubing in the beam tube enclosure for an ion pump.

CDS - Modifying end station Beckhoff; changing out the HV power supply scheme for the PSL from LVEA rack mounts to CER supplies.

LHO VE
bubba.gateley@LIGO.ORG - posted 07:59, Monday 22 June 2015 (19264)
Beam Tube Washing
Scott L. Ed P.

6/18/15
Cleaned 39.8 meters of tube, ending at HNW-4-082. Test results posted on this A-Log. Removed and began re-hanging lights in the next section north.

6/19/15
Completed hanging lights, started vacuuming of the support tubes and spraying of heavily soiled floor areas with a water/bleach solution. Cleaned 20.8 meters of tube.
Non-image files attached to this report
H1 INS
kaitlin.gushwa@LIGO.ORG - posted 19:44, Sunday 21 June 2015 - last comment - 20:17, Tuesday 23 June 2015(19263)
OMC Black Glass Shroud - update

Calum & Kate

Finished unwrapping and inspecting all 26 panels of black glass. 100% of the parts made it in one piece and all of the coatings look good. This is great news! The panels are dusty, smudgy, and streaky, and will need to be cleaned (this was expected). The plan is to use a methanol rinse followed by drag wiping. 24 of the 26 pieces of glass have been rinsed twice, so we're about halfway through the cleaning effort. We are currently using 3 large cleanroom tables, but we'll need at least one more for prep work.

Two points of note for LLO:

1) The following parts were packaged between 2 sheets of float glass:

D1500054-101 & 102

D1500055-101 & 102

It was a good idea to protect the oddly shaped parts, but care should be taken when opening them because a couple of pieces of the float (packaging) glass had chips and cracks, and these were spread around the wrapping. Again, the float glass did its job and protected the black glass (which was in good shape); this is just a heads up.

2) Some of the parts are double-packed in the bubble wrap. Again, use caution when opening.

Comments related to this report
gary.traylor@LIGO.ORG - 20:17, Tuesday 23 June 2015 (19301)
All of the black glass at LLO was unwrapped today and laid out on a flow bench in the optics lab for inspection. Upon initial inspection for damage, there were no obvious chips or dings in any of the glass. The odd shaped pieces were in great shape with no abnormalities around the edges due to being sandwiched between glass for packing (the float glass was even flawless). Tomorrow we will check the coatings and begin the cleaning process. I will leave it to the inspection crew to record tomorrow but there were a few minor spot flaws in the coating on about 3 pieces.
H1 AOS
robert.schofield@LIGO.ORG - posted 15:21, Sunday 21 June 2015 (19261)
Completed study of acoustic coupling at HAM6

Turning the chamber over to next group. Report to follow.

H1 INS
calum.torrie@LIGO.ORG - posted 20:42, Saturday 20 June 2015 - last comment - 21:10, Saturday 20 June 2015(19256)
OMC Black Glass Shroud

OMC Black Glass Shroud

9:35am: We arrived at the LSB after checking in at the control room to receive the OMC Black Glass. We had it shipped overnight from the vendor via FedEx. It was due before 12pm.

12:04pm: Black Glass arrived.

5:30pm: Left LVEA after unpacking the black glass and transporting it to the cleanroom by HAM6. We got about 20% of the glass unwrapped and inspected. (We started with the most critical items.) More to follow. We will complete unpacking and inspection in the morning. So far so good in terms of the quality of the machining in the black glass and the coating.

One point: before we entered the LSB we could hear an alarm. On entry, the alarm panel by the receiving entrance was flashing the fault "Batt. Charging". I pressed the "ACK" button to stop the alarm. It still reads Status: fault "Batt. Charging".

Particle count in cleanroom (4:52 pm): Humidity 37%. Temp 68 deg F. 0.3 um: 10 / 1.0 um: 10 / 2.0 um: 10. The rest were zero. (Kate, Calum and Robert in cleanroom.)

Particle count in cleanroom (5:58 pm): Humidity 34%. Temp 71 deg F. 0.3 um: 10 / 0.5 um: 10. The rest were zero. (Kate and Calum in cleanroom.)

Calum Torrie and Kate Gushwa.

Comments related to this report
kaitlin.gushwa@LIGO.ORG - 21:10, Saturday 20 June 2015 (19257)
Images attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 17:28, Saturday 20 June 2015 (19255)
1730 hrs. local -> Kyle leaving site now
End stations on turbos backed by scrolls -> QDP80s shut down
H1 DAQ (DAQ)
stefan.countryman@LIGO.ORG - posted 17:11, Saturday 20 June 2015 - last comment - 13:24, Thursday 25 June 2015(19254)
1PPS Time Offset Histograms for EY/EX/MSR Time Code Generators and MSR Trimble GPS Clock
Data were extracted for the three-week period of ER7 (since the timing system would nominally be running steadily the entire time). The histograms show:

- Tight grouping of the minute trends in the data; min and max values for each minute end up in discrete bands. 
- The Time Code Generators show nearly identical histograms for their mean minute trends.
- The mean minute trends of the GPS clock are much more tightly grouped with a width of roughly 20ns; with minimum and maximum offsets included, the width roughly doubles.

Issues:

- There was a single second during which the MSR Time Code Generator was off from the timing system by approximately 0.4 seconds. The issue self-rectified before the start of the next second. Second-trend data was not available through dataview (or at least I couldn't get any out of it). This did not happen in the EY and EX time code generators. It did not happen again in the MSR time code generator. The anomaly happened at GPS time 1117315540.

I've attached histograms as well as screenshots of data taken from Grace.
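
For anyone wanting to reproduce these plots offline, a minimal sketch assuming the minute-trend offsets have already been exported to a three-column text file (file name, units, and bin range are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical export: minute-trend mean/min/max of one 1PPS offset channel, in ns.
mean_ns, min_ns, max_ns = np.loadtxt("msr_tcg_1pps_minute_trend.txt", unpack=True)

bins = np.arange(-100, 101, 5)  # 5 ns bins over +/-100 ns
plt.hist(mean_ns, bins=bins, histtype="step", label="minute mean")
plt.hist(min_ns,  bins=bins, histtype="step", label="minute min")
plt.hist(max_ns,  bins=bins, histtype="step", label="minute max")
plt.xlabel("1PPS offset [ns]")
plt.ylabel("number of minutes")
plt.legend()
plt.show()
```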
Images attached to this report
Non-image files attached to this report
Comments related to this report
stefan.countryman@LIGO.ORG - 13:24, Thursday 25 June 2015 (19325)
Conclusion: The timing system is internally consistent and doesn't drift much relative to the atomic clock.

We should look at this again once we hook up the Master's 1PPS input to the Symmetricom's 1PPS output; right now it's getting its 1PPS from the Master's built-in GPS clock, which isn't as accurate as the Symmetricom's signal.


The time code generator in MSR is connected to an atomic clock, which we'd expect to provide more accurate short-term timing, though GPS beats it in the long run. So we're interested in short-term deviations from the atomic clock time, not the overall linear trend, which won't be flat unless the atomic clock itself is perfectly calibrated. For this reason, it's not surprising that the time series for the TCG and TCT show linear drift. The relevant metric (variation about the linear trend) is actually smaller than the above histograms would suggest, which is good. Even the naive measurement presented in these histograms shows a variation of less than 100 ns.
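
A minimal sketch of that metric (variation about the linear trend), assuming the offset has been exported as one sample per minute; the file name is a placeholder:

```python
import numpy as np

# Hypothetical export: 1PPS offset of the MSR TCG vs. the atomic clock, in ns, per minute.
offset_ns = np.loadtxt("msr_tcg_vs_atomic_1pps_mean.txt")

t = np.arange(offset_ns.size)                   # minutes since start
slope, intercept = np.polyfit(t, offset_ns, 1)  # linear drift term
residual = offset_ns - (slope * t + intercept)

print(f"linear drift: {slope:+.3f} ns/min")
print(f"std about the linear trend: {residual.std():.1f} ns")
```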
LHO VE
kyle.ryan@LIGO.ORG - posted 10:16, Saturday 20 June 2015 (19253)
1000 hrs. local -> Kyle on site to continue pump down of Ends -> will make log entry when leaving
LHO VE
kyle.ryan@LIGO.ORG - posted 18:16, Friday 19 June 2015 (19252)
End Stations
Kyle, Gerardo 

Removed the 1st generation ESD pressure gauges from BSC9 and BSC10 and installed 4.5" blanks in their place -> Installed a 1.5" metal angle valve and 2nd generation ESD pressure gauges on the domes of BSC5 and BSC6 -> Started rough pumping of X-end and Y-end. (NOTE: the dew point of the "blow down" air measured < -3.5 C and < -5.5 C for the X-end and Y-end respectively.  This is much wetter than normal.  I had found the purge air valve closed at the X-end today.  It was probably closed to aid door installation yesterday but not re-opened(?).  Purge air can be throttled back in some cases but should never be closed off.  The Y-end is explained in an earlier log entry.)


Kyle 

Finished assembly of aLIGO RGA and coupled to 2.5" metal angle valve at new location on BSC5 -> Valved-out pumping for tonight -> will resume tomorrow and expect to be on the turbos before I leave tomorrow.
H1 PEM (SEI)
robert.schofield@LIGO.ORG - posted 18:41, Wednesday 17 June 2015 - last comment - 09:40, Sunday 21 June 2015(19210)
Tilt noise is much smaller 40 m from building, coherence still high below 0.5 Hz

Summary: A seismometer was buried 40 m from EY to take advantage of strong attenuation of the tilt signal relative to the acceleration signal from distant sources. Seismometers located outside the buildings may be useful in reducing problems associated with tilt.

Our buildings are tilted by wind. Our seismometers do not discriminate between this tilt and acceleration (like a pendulum), so wind-induced tilt produces spurious control signals that can make it difficult to lock and maintain lock. A tilt sensor can be used to discriminate between tilt and acceleration, but we may also be able to discriminate between the two by taking advantage of source differences in the band below 0.5 Hz, where tilt generates the largest spurious acceleration signals. The tilt in this band is mainly generated locally by the wind pressure against the walls, while the acceleration signal is mainly generated by ocean waves and other distant sources.

While the seismometers are in the far field for most low frequency accelerations, they are in the near field for building-generated tilt (wavelengths at 0.1 Hz are about 50 km). To take advantage of the rapid attenuation with distance in the near-field, I buried an STS-2 seismometer in a meter deep hole I dug 40 m from EY (Figure 1).  Figures 2 and 3 show a comparison of the buried seismometer to the SEI seismometer on the ground inside the EY station. During the low wind period the spectra look virtually identical below 0.5 Hz and the coherence is high because the sources are mostly distant and the wavelengths are large. During the high wind period, the buried seismometer doesn’t change that much (there is some change in the microseismic peak because high/low wind were about 24 hours apart), but the building seismometer shows a huge signal from tilt.  The coherence in windy conditions is low because the tilt is highly attenuated 40 m from the building. Figure 4 shows spectra for higher wind, 15-35 MPH.

Of course the external seismometer would not detect real building accelerations due to the wind. But if the unwanted tilt signal dominates over the wanted wind acceleration signal, a seismometer outside the building may be useful, and I think that this is the case below about 0.5 Hz.

I note that I first tried this in the beam tube enclosure tens of meters from EY, but found that the wind tilts the beam tube enclosure almost as much as the building (but not coherently), supporting a hypothesis that aspect ratio is the important variable. Also, Jeff points out that, if we insulate the buried seismometer well (and I did not), we might even be able to do better than the building seismometers with their current insulation, even during low-wind periods. Figure 5 is like Figure 2 except that the signal from the stage 1 Trilliums is included.
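
For anyone repeating the comparison, a minimal sketch of the ASD and coherence calculation is below. The file names and sampling rate are assumptions; the real data would come from the frames.

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 256.0  # assumed sample rate
x_building = np.loadtxt("ey_building_sts2_y.txt")  # hypothetical exports of the two
x_buried   = np.loadtxt("ey_buried_sts2_y.txt")    # horizontal seismometer channels

nper = int(512 * fs)  # ~512 s segments, for resolution well below 0.5 Hz
f, Pxx = welch(x_building, fs=fs, nperseg=nper)
f, Pyy = welch(x_buried,   fs=fs, nperseg=nper)
f, Cxy = coherence(x_building, x_buried, fs=fs, nperseg=nper)

asd_building = np.sqrt(Pxx)  # compare with the attached spectra
asd_buried   = np.sqrt(Pyy)
band = (f > 0.03) & (f < 0.5)
print("median coherence, 0.03-0.5 Hz:", np.median(Cxy[band]))
```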

Non-image files attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 16:35, Thursday 18 June 2015 (19228)
Interesting data! I'm fascinated by the observation that at 25 mph the horz. spectra don't match at any frequency. Unfortunately, this makes it seem quite problematic to try and use the data in real time from the outside sensors, since you have to figure out at what frequency they are telling you something useful, and at what frequency they are not. Certainly at 0.5 Hz the slab sensors are the ones to use, and I suspect that at 0.05 Hz one should believe the outdoor sensor, but in between?

Several years ago, in response to an (incorrect) estimate of Newtonian noise from wind eddies, various suggestions of shrouds, fairings, shorter buildings, trees, berms, etc. were floated and dismissed. Any evolution of this thinking? This data seems to imply that if we put a wind block 40 m from the building, and kept the wind off the building, we'd be in better shape. Sadly, short of a giant pile of tumbleweeds, I don't see any practical way to achieve this.

It certainly bolsters Krishna's assertion that the wind-tilts are local to the buildings.

robert.schofield@LIGO.ORG - 09:40, Sunday 21 June 2015 (19259)

Yes, but we more often have to deal with the 10-20 MPH range, and I think it's clear that the buried seismometer is better for that in the 0.05-0.5 Hz range. For higher wind speeds, I would explore the tilt-acceleration crossover by substituting a buried seismometer for the building seismometers and seeing up to what wind speed the buried seismometer is an improvement.

H1 SUS
arnaud.pele@LIGO.ORG - posted 17:55, Wednesday 17 June 2015 - last comment - 11:20, Wednesday 24 June 2015(19208)
TFs started on ETMY and TMSY at 00:45 UTC

[sus crew]

Following the in-chamber work from today at End-Y, we took quick TFs on the quad and the transmon. They look OK.

Since the pump-down won't happen before tomorrow, I started Matlab TFs for all DOFs.

The measurement will be running for a few hours.

Images attached to this report
Comments related to this report
arnaud.pele@LIGO.ORG - 11:20, Wednesday 24 June 2015 (19306)

More clues on the TMSX measurement showing a different TF than before the vent (cf. log 19246 below). I looked at the response from pitch and vertical drive to the individual OSEMs (LF, RT), and it looks like something is funny with the RT OSEM, which should have the same response as the LF one; cf. the light red curves below.

EDIT: Actually, the LF and RT OSEM responses are superposed on the graphs (green LF lies under red RT), so their response is the same.


Images attached to this comment
arnaud.pele@LIGO.ORG - 23:16, Friday 19 June 2015 (19246)

EY measurements

ETMY and TMSY transfer functions were measured on Thursday after the doors went on, with the chamber at atmospheric pressure. They did not show signs of rubbing; cf. the first two PDFs attached, showing good agreement with previous measurements.

Today, I remeasured the vertical DOF after the suspension sagged ~120 um from the pressure drop, and it still looks fine; cf. the figures below.

EX measurements

ETMX and TMSX were measured today when the pressure in the chamber was about 3 Torr, after the QUAD sagged by about 130 um.

ETMX TFs are similar to previous ones (3rd PDF), but the dynamics of the TMSX table in pitch and vertical have changed since the last measurement; cf. the figure below or the 4th PDF. This might be OK, but the damping should be revised.
Images attached to this comment
Non-image files attached to this comment
arnaud.pele@LIGO.ORG - 14:54, Monday 22 June 2015 (19277)

The second PDF attachment on the log above was supposed to show the TMSY transfer functions. Attached is the correct PDF.

Non-image files attached to this comment
H1 ISC
keita.kawabe@LIGO.ORG - posted 16:10, Tuesday 16 June 2015 - last comment - 11:55, Monday 22 June 2015(19175)
Done with TMSY (Corey, Kiwamu, Keita)

ISI was locked by Jim in the morning.

Before doing anything, EY alignment slider [PIT, YAW] = [142.0, -75.1], TMSY = [116.6, -20.0], EY OPLEV=[-39, -15]-ish.

Transitioned to laser hazard, moved EY such that the green return beam hits the center of the refl PD: EY [PIT, YAW]=[207.4, -75.1].

- Krytox on beam diverter in situ: Good.

Everything went well as per yesterday.

- QPD strain relief: Good-ish.

Unlike TMSX, it turns out that all QPDs were already equipped with a makeshift strain relief using stainless steel cable clamps, the same clamps used for fixing the cables on the TMS ISC table, but the cables were without Kapton tubes. We decided to install the right strain relief anyway.

In the end, we were able to install the right ones on three out of four QPDs. As for the remaining one (Green QPDB), we weren't able to install it as the 1/4-20 Allen key to attach the PEEK strain relief to the QPD base would have interfered with the YAW knob of one of the QPD sled mirror holders (M102 in D1201458).  The stainless steel strain relief was left as is.

After this work, we checked if QPDs still work and they did (used green beam for the green QPDs and a flashlight for IR QPDs).

- TMS balancing: Good.

After the work, we checked the balance of the TMS table with the green light injected into the chamber. The vertical alignment was found to be already good, and therefore we did not make any mechanical adjustment. Similarly, the horizontal was also good; giving an extra digital bias of +13 urad (resulting in -7.0 urad in OPTICALIGN_OFFSET) made the return beam well centered on ALS-REFL_PD. So the balance is good.

After everything was done: EY slider [PIT, YAW] = [142.0, -75.1] (back to the original), TMSY = [116.6, -7.0], EY OPLEV=[-24, -31]-ish.

It seems like EY moves around by 15 urad-ish in both PIT and YAW, so the TMS alignment could only be as good as 15 urad-ish.

Comments related to this report
corey.gray@LIGO.ORG - 21:31, Tuesday 16 June 2015 (19189)

Here are photos from EY TMS work today:  https://ligoimages.mit.edu/?c=1616

keita.kawabe@LIGO.ORG - 09:19, Monday 22 June 2015 (19269)
jameson.rollins@LIGO.ORG - 11:55, Monday 22 June 2015 (19274)

The certificate for ligoimages.mit.edu has expired, so this site is currently not accessible.

H1 PSL
edmond.merilh@LIGO.ORG - posted 10:08, Monday 15 June 2015 - last comment - 10:06, Monday 22 June 2015(19147)
NPRO Tripped again
Comments related to this report
edmond.merilh@LIGO.ORG - 10:11, Monday 15 June 2015 (19148)

I'm told this was intentional. I wasn't informed.

jason.oberling@LIGO.ORG - 14:00, Monday 15 June 2015 (19153)

This was actually completely accidental.  Peter and I went out to see if we could pull a log off of the NPRO UPS as part of our investigation into this morning's NPRO trip.  Turns out that when you plug in a DB9 cable to the back of the UPS, the UPS shuts off.  We did not know this.  Now we do.  Note to self...

In other news we were not able to establish communication between the laptop and the UPS, not sure why.  Will continue to investigate.

edmond.merilh@LIGO.ORG - 10:06, Monday 22 June 2015 (19272)

That makes me feel much better about not having gone out there alone to try it!

H1 SYS (GRD, ISC, SYS)
sheila.dwyer@LIGO.ORG - posted 14:09, Thursday 11 June 2015 - last comment - 19:01, Thursday 29 August 2019(19075)
a look at duty cycle for the first week of ER7

I've taken a look at guardian state information from the last week, with the goal of getting an idea of what we can do to improve our duty cycle. The main message is that we spent 63% of our time in the nominal low noise state, 13% in the DOWN state (mostly because DOWN was the requested state), and 8.7% of the week trying to lock DRMI.

Details

I have not taken into account whether the intent bit was set during this time; I'm only considering the guardian state.  These are based on 7 days of data, starting at 19:24:48 UTC on June 3rd.  The first pie chart shows the percentage of the week the guardian spent in each state; the second chart shows only the unlocked time.  For legibility, states that took up less than 1% of the week are unlabeled, and some of the labels are slightly in the wrong position, but you can figure out where they should be if you care.

DOWN as the requested state

We were requesting DOWN for 12.13% of the week, or 20.4 hours.  DOWN could be the requested state because operators were doing initial alignment, we were in the middle of maintenance (4 hours), or it was too windy for locking.  Although I haven't done any careful study, I would guess that most of this time was spent on initial alignment.

There are probably three ways to reduce the time spent on initial alignment:

Bounce and roll mode damping

We spent 5.3% of the week waiting in states between locking DRMI and LSC FF, when the state was already the requested state.  Most of this was after RF DARM, and is probably because people were trying to damp bounce and roll modes or waiting for them to damp.  A more careful study of how well we can tolerate these modes being rung up will tell us whether it is really necessary to wait, and better automation using the monitors can probably help us damp them more efficiently.

Locking DRMI

We spent 8.7% of the week locking DRMI, or 14.6 hours.  During this time we made 109 attempts to lock it (10 of these ended in ALS locklosses), and the median time per lock attempt was 5.4 minutes.  From the histogram of DRMI locking attempt times (3rd attachment), you can see that the mean locking time is increased by 6 attempts that took more than half an hour, presumably either because DRMI was not well aligned or because the wind was high. It is probably worth checking whether these were really due to wind or something else.  This histogram includes unsuccessful as well as successful attempts.

Probably the most effective way to reduce the time we spend locking DRMI would be to prevent locklosses later in the lock acquisition sequence, which we have had many of this week.

Locklosses

A more careful study of locklosses during ER7 needs to be done. The last plot attached here shows from which guardian state we lost lock; they are fairly well distributed throughout the lock acquisition process. Locklosses from states after DRMI has locked are more costly to us, while locklosses from the state "locking arms green" don't cost us much time and are expected as the optics swing after a lockloss.
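
As a sketch of the bookkeeping behind these percentages (the state-channel export and the state numbers below are placeholders, not the real ISC_LOCK indices):

```python
import numpy as np

# Hypothetical export of H1:GRD-ISC_LOCK_STATE_N, sampled at 16 Hz over the week.
state = np.loadtxt("isc_lock_state_16hz.txt").astype(int)
fs = 16.0

STATES = {"NOMINAL_LOW_NOISE": 600, "DOWN": 2}  # assumed state numbers
total_s = state.size / fs
for name, num in STATES.items():
    frac = np.count_nonzero(state == num) / state.size
    print(f"{name}: {100 * frac:.1f}% of the week ({frac * total_s / 3600:.1f} hours)")
```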

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 18:13, Friday 19 June 2015 (19251)

I used the channel H1:GRD-ISC_LOCK_STATE_N to identify locklosses to make the pie chart of locklosses here; specifically, I looked for times when the state was lockloss or lockloss_drmi.  However, this is a 16 Hz channel and we can move through the lockloss state faster than 1/16th of a second, so doing this I missed some of the locklosses.  I've added 0.2 second pauses to the lockloss states to make sure they will be recorded by this 16 Hz channel in the future.  This could be a bad thing since we should move to DOWN quickly to avoid ringing up suspension modes, but we can try it for now.
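
For what it's worth, counting entries into the lockloss states (rather than raw samples) from the same 16 Hz channel might look like the sketch below; the state numbers are placeholders, and the sub-1/16 s caveat above still applies:

```python
import numpy as np

LOCKLOSS_STATES = [2, 3]  # e.g. LOCKLOSS, LOCKLOSS_DRMI (hypothetical numbers)
state = np.loadtxt("isc_lock_state_16hz.txt").astype(int)

in_lockloss = np.isin(state, LOCKLOSS_STATES)
# Count rising edges, i.e. transitions into a lockloss state.
entries = np.flatnonzero(in_lockloss[1:] & ~in_lockloss[:-1]) + 1
print(f"{entries.size} locklosses found")
# Any pass through LOCKLOSS shorter than 1/16 s leaves no sample and is still missed.
```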

A version of the lockloss pie chart that spans the end of ER7 is attached.  

Images attached to this comment
jameson.rollins@LIGO.ORG - 08:38, Sunday 21 June 2015 (19258)

I'm bothered that you found instances of the LOCKLOSS state not being recorded.  Guardian should never pass through a state without registering it, so I'm considering this a bug.

Another way you should be able to get around this in the LOCKLOSS state is by just removing the "return True" from LOCKLOSS.main().  If main returns True the state will complete immediately, after only the first cycle, which apparently can happen in less than one CAS cycle.  If main does not return True, then LOCKLOSS.run() will be executed, which defaults to returning True if not specified.  That will give the state one extra cycle, which will bump its total execution time to just above one 16th of a second, therefore ensuring that the STATE channels will be set at least once.
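
For reference, a minimal sketch of the pattern described here, assuming Guardian's GuardState interface (the state body is illustrative, not the actual ISC_LOCK code):

```python
from guardian import GuardState

class LOCKLOSS(GuardState):
    def main(self):
        # Do the immediate lockloss handling here, but do NOT return True:
        # returning True from main() completes the state in a single cycle,
        # which can be shorter than one sample of the 16 Hz STATE_N channel.
        pass

    def run(self):
        # run() executes on the following cycle and returns True, so the state
        # lasts at least one extra cycle and gets recorded at least once.
        return True
```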

jameson.rollins@LIGO.ORG - 12:56, Sunday 21 June 2015 (19260)

reported as Guardian issue 881

sheila.dwyer@LIGO.ORG - 16:19, Monday 22 June 2015 (19278)

Note that the corrected pie chart includes times that I interpreted as locklosses but that were in fact times when the operators made requests that sent the IFO to DOWN.  So, the message is that the true picture of locklosses is somewhere intermediate between the first and the second pie charts.

I realized this new mistake because Dave asked me for an example of a GPS time when a lockloss was not recorded by the channel I grabbed from nds2, H1:GRD-ISC_LOCK_STATE_N.  An example is

1117959175 

I got rid of the return True from main and added run methods that just return True, so hopefully next time around the channel that is saved will record all locklosses.

H1 AOS
darkhan.tuyenbayev@LIGO.ORG - posted 20:57, Tuesday 09 June 2015 - last comment - 08:41, Monday 22 June 2015(19031)
Cavity pole fluctuations calculated from Pcal line at 540.7 Hz

Sudarshan, Kiwamu, Darkhan,

Abstract

According to the PCALY line at 540.7 Hz, the DARM cavity pole frequency dropped by roughly 7 Hz from the 17 W configuration to the 23 W configuration (alog 18923). The frequency remained constant after the power increase to 23 W. This certainly impacts the GDS and CAL-CS calibration by 2% or so above 350 Hz.

Method

Today we've extracted CAL-DELTAL data from ER7 (June 3 - June 8) to track the cavity pole frequency shift in this period. The only portions of data that can be used are those when DARM was in a stable lock, so for our calculation we filtered the data, keeping only times when the guardian flag was > 501.

From an FFT at a single frequency it is possible to obtain the DARM gain and the cavity pole frequency from the phase of the DARM line at a particular frequency, provided the drive phase there is known or not changing. Since the phase of the resulting FFT does not depend on the optical gain but only on the cavity pole, looking at the phase essentially gives us information about the cavity pole (see for example alog 18436). However, we do not know the phase offset due to the time delay and perhaps some uncompensated filter. We've therefore decided to focus on cavity pole frequency fluctuations (Delta f_p) rather than trying to find the actual cavity pole. In our calculations we have assumed that the change in phase comes entirely from cavity pole frequency fluctuations.

The phase of the DARM optical plant can be written as

phi = arctan(-f / f_p),

where f is the Pcal line frequency and f_p is the cavity pole frequency.

Since this equation does not include any dependence on the optical gain, the measured value of phi is, to our knowledge, not disturbed by changes in the optical gain. Introducing a first order perturbation in f_p (note that d phi / d f_p = f / (f_p^2 + f^2)), one can linearize the above equation to the following:

Delta f_p = ((f_p^2 + f^2) / f) * (Delta phi)

An advantage of using this linearized form is that we don't have to do an absolute calibration of the cavity pole frequency, since it focuses on fluctuations rather than absolute values.

Results

Using f_p = 355 Hz, the frequency of the cavity pole measured at the particular time (see alog 18420), and f = 540.7 Hz (Pcal EY line freq.), we can write Delta f_p as

Delta f_p = 773.78 * (Delta phi)
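
As a quick numerical cross-check, a sketch using only the values quoted above (no channel access; the example phase change is chosen to match the ~7 Hz drop discussed below):

```python
import numpy as np

f_line = 540.7  # Pcal EY line frequency [Hz]
f_p = 355.0     # reference cavity pole frequency [Hz]

coeff = (f_p**2 + f_line**2) / f_line   # ~773.78 Hz per radian of phase change
delta_phi_deg = -0.52                   # example phase change, in degrees as plotted
delta_fp = coeff * np.deg2rad(delta_phi_deg)
print(f"coefficient = {coeff:.2f} Hz/rad, Delta f_p = {delta_fp:.1f} Hz")  # ~ -7 Hz
```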

Delta f_p trend based on ER7 data is given in the attached plot: "Delta phi" (in degrees) in the upper subplot and "Delta f_p" (in Hz) in the lower subplot.

Judging by the overall trend in Delta f_p, we can say that the cavity pole frequency dropped by about 7 Hz after June 6, 3:00 UTC; this corresponds to the time when the PSL power was changed from 17 W to 23 W (see LHO alog 18923, [WP] 5252).

Delta phi also shows fast fluctuations of about +/-3 degrees, and right now we do not know what causes this "fuzziness" of the measured phase.

Filtered channel data was saved into:

aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS/H1-calib_1117324816-1117670416_501above.txt (@ r737)

Scripts and results were saved into:

aligocalibration/trunk/Runs/ER7/H1/Scripts/PCAL_TRENDS (@ r736)
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 13:36, Thursday 11 June 2015 (19078)

Clarifications

Notice that this method does not give an absolute value of the cavity pole frequency. The equation

Delta f_p = 773.78 * (Delta phi)

gives a first order approximation of change in cav. pole frequency with respect to change in phase of Pcal EY line in CAL-DELTAL at 540.7 Hz (with the assumptions given in the original message).

Notice that (Delta phi) in this equation is in "radians", i.e. (Delta f_p) [Hz] = 773.78 [Hz/rad] (Delta phi) [rad].

shivaraj.kandhasamy@LIGO.ORG - 08:41, Monday 22 June 2015 (19266)

Darkhan, did you also look at the low frequency (~30 Hz) behavior, both amplitude and phase? If these variations come from just the cavity pole, then there shouldn't be any changes in either amplitude or phase at low frequencies (below the cavity pole). If there is a change only in gain, then it is the optical gain. Any change in the phase would indicate a more complex change in the response of the detector.
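
To put rough numbers on this point, a sketch using the single-pole phase phi = -arctan(f / f_p) and the values from the parent entry (355 Hz pole, ~7 Hz shift):

```python
import numpy as np

def pole_phase_deg(f, f_p):
    # Phase of a single-pole response at frequency f for pole frequency f_p.
    return -np.degrees(np.arctan2(f, f_p))

for f in (30.0, 540.7):
    d = pole_phase_deg(f, 348.0) - pole_phase_deg(f, 355.0)
    print(f"f = {f:6.1f} Hz: phase change for a 355 -> 348 Hz pole = {d:+.2f} deg")
# ~0.1 deg at 30 Hz vs. ~0.5 deg at the Pcal line; a pure optical-gain change,
# by contrast, leaves the phase at both frequencies untouched.
```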
