Reports until 13:41, Tuesday 02 June 2015
LHO VE
kyle.ryan@LIGO.ORG - posted 13:41, Tuesday 02 June 2015 (18791)
Replaced gauge controller on X-end turbo
Wasn't able to test set point interlock functionality (maybe next maintenance day?) 
H1 CDS (CDS)
david.barker@LIGO.ORG - posted 13:13, Tuesday 02 June 2015 - last comment - 15:03, Tuesday 02 June 2015(18789)
front end filter module files all cleanly loaded

Several front ends had partially loaded filter modules. To prepare H1 for ER7 data taking, I ran a script which pressed all LOAD_NEW_COEFF buttons on every model. I'll periodically "load all filters" if we encounter any partially loaded files during ER7.
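For reference, a sweep like that can be scripted against the per-model coefficient-load records. Below is a minimal sketch assuming pyepics and channel names of the form H1:FEC-<DCUID>_LOAD_NEW_COEFF; the DCU ID list is a placeholder, and this is not the actual script that was run.

# Hypothetical sketch of a "load all filters" sweep. The channel name pattern
# and the DCU ID list are assumptions, not taken from the actual script.
from epics import caput

DCUIDS = [10, 11, 12]  # placeholder DCU IDs; the real sweep covers every model

for dcuid in DCUIDS:
    chan = 'H1:FEC-%d_LOAD_NEW_COEFF' % dcuid
    caput(chan, 1)  # assumed to mimic pressing the LOAD_NEW_COEFF button
    print('requested coefficient load via %s' % chan)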

Comments related to this report
david.barker@LIGO.ORG - 15:03, Tuesday 02 June 2015 (18794)

After doing this, I noticed that three filters in the LSC were being regularly reloaded. Evan tracked this down to the ALIGN_IFO.py guardian, which was incorrectly writing a 1 to the RSET PV rather than a 2 (load coefficients rather than clear history). This was fixed.

H1 ISC (ISC)
sheila.dwyer@LIGO.ORG - posted 12:15, Tuesday 02 June 2015 - last comment - 16:49, Monday 22 June 2015(18777)
Trial of SR3 optical lever feedback to prevent locklosses

Daniel and I looked at three of the locklosses from Travis's shift last night, from 14:40, 14:02 and 11:33 UTC.  The earlier two both seem to be related to an alignment drift over 2-3 minutes before the lockloss, which shows up clearly in SR3 PIT.  (There is currently no feedback to SR3 PIT.)  According to the witness sensors, this drift is only seen on M3.  No optics saturated until after the lockloss.  The DC4 centering loop and both of the SRC alignment loops respond to the drift.

It's unclear what causes the drift to accelerate in the minutes before the lockloss.  There is also a drift of SR3 when we power up, as we noted yesterday, but this happens on a slower timescale than the drifts that precede a lockloss (3rd screenshot).  Also, there is a longer, slow drift that happens whenever we are locked.

With Patrick and Cheryl, I have engaged a DC-coupled optical lever loop for SR3 PIT; we will see if this helps.  The last screenshot attached shows the MEDM screen used to turn this on or off.

If the operators need to disable this (due to an earthquake, a trip, or if the optic becomes misaligned for any other reason), you can get to this screen from SR3, M2 OLDAMP.

Turning off:  

Turn off FM1 (labeled DC), then turn off the input.

Turning it back on:

Once the optic has settled and the beam is back on the oplev QPD, turn on the damping loop (with FM1 still off).  Average the INMON (from a command line: tdsavg 10 H1:SUS-SR3_M2_OLDAMP_P_INMON), type -1 times the average into the offset, make sure the offset is engaged, and finally turn on FM1 to make the loop DC coupled.
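For convenience, the offset-setting step could be scripted roughly as below. This is only a sketch; the OFFSET channel name (everything other than the INMON quoted above) is an assumption.

# Rough sketch of the offset-setting step above (not a tested procedure).
# The OFFSET channel name is an assumption; the INMON is quoted in the text.
import subprocess
from epics import caput

INMON = 'H1:SUS-SR3_M2_OLDAMP_P_INMON'
OFFSET = 'H1:SUS-SR3_M2_OLDAMP_P_OFFSET'   # assumed channel name

# Average the INMON for 10 seconds, as with the command line above.
out = subprocess.check_output(['tdsavg', '10', INMON])
avg = float(out.decode().split()[0])       # assumes the average is the first token

# Put -1 times the average into the offset (offset engaged, FM1 still off),
# then turn on FM1 by hand to make the loop DC coupled.
caput(OFFSET, -avg)
print('set %s to %g' % (OFFSET, -avg))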

Since this is just a trial, Jeff is not including these changes in his current SDF cleanup campaign. 

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 13:22, Tuesday 02 June 2015 (18790)

Looking at the initial power up, we can see that a power increase of a factor of ~10 causes ~0.7 µrad of pitch misalignment. During the accelerated drift in the last 3-5 minutes before the lock loss, another 0.4 µrad of pitch misalignment was acquired with only ~10% of power increase. One might wonder whether we are seeing a geometrically induced wire-heating runaway.

brett.shapiro@LIGO.ORG - 12:23, Saturday 06 June 2015 (18934)

I modeled how much the two front wires have to heat up to cause a bottom mass pitch of 1 microradian. The predicted temperature increase is very small.

* Assuming a constant temperature profile along the wire length (I'm sure this is not the case, but it is easy to calculate), it is

0.003 [C]

* Assuming a linear temperature profile where the maximum temperature is in the middle and the ends of the wire have no temperature increase, it is

0.006 [C]

So we can say an order of magnitude estimate is greater than 1 mC / urad and less than 10 mC / urad.

 

Calculations:

From gwinc, the thermal coefficient of expansion for C70 steel wire is

alpha = 12e-6 [1/C].

From the HLTS model at ../SusSVN/sus/trunk/Common/MatlabTools/TripleModel_Production/hltsopt_wire.m

wire length L = 0.255 [m]

front-back wire spacing s = 0.01 [m]

The change in wire length for pitch = 1 urad is then

dL = s * pitch = 0.01 * 1e-6 = 1e-8 [m]

* For uniform wire heating of dT, this change comes from

dL = alpha * L * dT

So, solving for dT

dT = dL / (alpha * L) = 1e-8 / ( 12e-6 * 0.255 ) = 0.0033 [C]

* For a linear temperature increase profile (max at middle, 0 at ends), I break the wire into many constant temperature segments of length Lsegment.

The temperature increase profile is a vector defined by

dT = dTmax * TempProfile

where TempProfile is a vector of the normalized shape of the temperature profile. It is triangular, 0 at the ends and 1 at the peak in the middle. Each element of the vector corresponds to a constant temperature segment of the wire. dTmax is a scalar representing the maximum temperature increase at the middle of the wire.

The change in wire length is then given by

dL = sum( alpha * Lsegment * TempProfile ) * dTmax

solving for dTmax

dTmax = dL / sum( alpha * Lsegment * TempProfile )

with 101 segments, this gives us

dTmax = 0.0063 [C]

about double the uniform heating case.
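For anyone who wants to re-run the numbers, a short numerical sketch of the two estimates above (constants as quoted; the 101-segment triangular profile follows the description):

# Reproduce the uniform and triangular-profile temperature estimates above.
import numpy as np

alpha = 12e-6    # 1/C, thermal expansion of C70 steel wire (from gwinc)
L     = 0.255    # m, wire length (HLTS model)
s     = 0.01     # m, front-back wire spacing
pitch = 1e-6     # rad, bottom-mass pitch

dL = s * pitch                    # required change in wire length, 1e-8 m

# Uniform temperature rise along the wire
dT_uniform = dL / (alpha * L)     # ~0.003 C

# Triangular profile: 0 at the ends, 1 at the middle, 101 segments
nseg = 101
Lseg = L / nseg
TempProfile = 1.0 - np.abs(np.linspace(-1.0, 1.0, nseg))
dTmax = dL / np.sum(alpha * Lseg * TempProfile)   # ~0.006 C, about double

print(dT_uniform, dTmax)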

* I also considered that since the wire has significant stress due to the test mass weight, the Young's modulus's temperature dependence might cause a different effective thermal expansion coefficient alpha_effective. This appears to be a negligible effect.

From gwinc, the temperature dependence of the Young's modulus E is

dE/dT = -2.5e-4 [1/C]

and young's modulus E is

E = 212e9 [Pa]

From https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=12581, we know that the change in spring length due to the modulus of elasticity dependence is

dL = -dE/dT * dT * Tension / Stiffness

where Tension is the load in the wire and Stiffness is the vertical stiffness of the wire.

The Stiffness is given by

Stiffness = E * A / L = E * pi * r^2 / L

where A is the cross sectional area of the wire, and r is the radius.

So plugging this in above

dL = -dE/dT * dT * Tension * L / ( E * pi * r^2 )

We get the correction on alpha by dividing this by L and dT, which eliminates both from the equation. From the HLTS model, the bottom mass is 12.142 kg and the wire radius is 1.346e-4 m.

Tension = 12.142 * 9.81 / 4 = 29.8 [N]

The correction on alpha is then

-dE/dT * Tension / ( E * pi * r^2 ) = 2.5e-4 * 29.8 / (212e9 * pi * 1.346e-4^2) = 6.2e-7 [1/C]

This changes alpha from

12e-6 to 12.6e-6 [1/C]

Not enough to matter for the estimates above.
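The correction itself is a one-liner; the sketch below simply repeats the arithmetic quoted above with the same numbers.

# Young's-modulus correction to alpha, following the arithmetic above.
import numpy as np

dEdT = 2.5e-4        # 1/C, magnitude of the quoted dE/dT from gwinc
E    = 212e9         # Pa, Young's modulus
r    = 1.346e-4      # m, wire radius (HLTS model)
Tension = 12.142 * 9.81 / 4.0     # N per wire, ~29.8 N

alpha_correction = dEdT * Tension / (E * np.pi * r**2)
print(alpha_correction)           # ~6.2e-7 1/C, negligible next to 12e-6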

keita.kawabe@LIGO.ORG - 16:49, Monday 22 June 2015 (19163)

Localizing the heat source:

I made a calculation of the heat absorption by wires.

Based on Brett's temperature estimate, and assuming radiation as the only heat dissipation mechanism, the heat the front wires must be absorbing is about 1uW total for the two wires when SR3 tilts by 1 urad, regardless of the temperature distribution.

If you look only at the power, any ghost beam originating from the PRC power (about 800W per 20W input, assuming a recycling gain of 40) could supply 1uW, as each of these beams carries O(10mW) or more.

I looked at the BS AR reflection of the X reflection, the CP wedge AR both ways, and the ITM AR both ways. I'm not sure about the first one, but the rest are mostly untouched by anything and fall on SR3 off center.

The attachment depicts the SR3 outline together with the positions of the CP wedge AR (green) and ITM AR (blue) reflections, assuming perfect centering of the main beam and the SR3 baffle on SR3. Note that the ITMX AR reflection of the +X propagating beam falls roughly at the same position on SR3 as the ITMY AR reflection of the +Y propagating beam. Ditto for all ITM and CP AR reflections. The radii of these circles represent the beam radius. The power is simply 20W*G_rec(40)*(AR(X)+AR(Y))/4 for ITM and CP (the extra factor of 2 is because the AR beam goes through the BS), and 20W*40*AR/2 for the BS AR reflection of the -X beam.

I haven't done any more calculations and I don't intend to, but just by looking at the numbers (the total power in the green and blue beams in the figure is about 240mW, 5 orders of magnitude larger than the heat absorbed by the wires), and considering that the centering on SR3 cannot be perfect, that the SR3 baffle is somewhat larger than SR3 itself, and that the CP alignment is somewhat arbitrary, it could be that these blobs seep through the space between the baffle and SR3 and provide the 1uW.

The red circle is where the BS AR reflection of the -X beam would be if it were not clipped by the SR2 scraper baffle. If everything is as designed, the SR2 scraper baffle will cut off 90% of the power (the SR2 edge is 5mm outside the center of the beam, which has an 8mm radius), and the remaining 10% comes back to the left edge of the red circle.

Any ghost beam originating from SRC power is (almost) exonerated, because the wire (0.0106" = 0.27mm diameter) is much smaller than any of the known beams, such that it's difficult for these beams to dump 1uW on the wires. For example, the SRC power hitting SRM is about 600mW per 20W input, and the SRM AR reflection is already only about 22uW.

Details of heat absorption:

When the temperature of a section of wire rises, the stretching of that section is proportional to the length of the section itself and the rise in temperature. Because of this, the total wire stretch is proportional to the temperature rise integrated over the wire length (which is equal to the mean temperature rise multiplied by the wire length), regardless of the temperature distribution, as is shown in effect by Brett's calculation:

stretch prop int^L_0 t dL = mean(t) * L

where L is the length of the wire and t is the difference from the room temperature.

Likewise, the heat dissipation of a short wire section of the length dL at temperature T+t via radiation is

sigma*E*C*dL*[(T+t)^4-T^4] ~ 4*sigma*E*C*dL*T^3*t

where sigma is the Stefan-Boltzmann constant, E the emissivity, C the circumference of the wire, and T the room temperature (about 300K). The heat dissipation for the entire length of wire is obtained by integrating this over the length, and the relevant integral is int^L_0 t dL, so again the heat dissipation via radiation is proportional to the temperature rise integrated over the wire length, regardless of the temperature distribution:

P(radiation) ~ 4*sigma*E*T^3*(C*L)*mean(t).

I assume the emissivity E of the steel wire surface to be O(0.1). These wires are drawn; I couldn't find the emissivity for a drawn surface, but it is 0.07 for a polished steel surface and 0.24 for a rolled steel plate.

I used T=300K, t=3mK (Brett's calculation for both of the temperature distributions), C=pi*0.0106", L=0.255m*2 for two front wires, and obtained:

P(radiation) ~ 0.8uW ~ 1uW.
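The same estimate in a few lines (the emissivity of O(0.1) and the 3 mK mean temperature rise are the assumptions stated above):

# Radiated power from the two heated front wires, per the estimate above.
import numpy as np

sigma = 5.67e-8                  # W m^-2 K^-4, Stefan-Boltzmann constant
emis  = 0.1                      # emissivity, assumed O(0.1)
T     = 300.0                    # K, room temperature
t     = 3e-3                     # K, mean temperature rise (Brett's estimate)
C     = np.pi * 0.0106 * 0.0254  # m, wire circumference (0.0106" diameter)
L     = 0.255 * 2                # m, total length of the two front wires

P = 4 * sigma * emis * T**3 * (C * L) * t
print(P)                         # ~0.8e-6 W, i.e. about 1 uW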

ITM AR:

ITM has a wedge of 0.08 deg, thick side down.

The ITM AR reflection of the beam propagating toward the ETM is deflected by 2*wedge in the +Z direction. For the beam propagating toward the BS, the ITM AR surface reflects the beam, deflecting it down; this beam is reflected by the ITM and comes back to the BS. The deflection of this beam relative to the main beam is -(1+n)*wedge.

The AR beam displacement at the BS is +14mm for the +Z deflection and -17mm for the -Z deflection. Since the BS baffle hole "radius" seen from the ITMs is 100+ mm, and since the beam radius is about 53mm, the AR beams are not blocked much by the BS baffle and reach SR3.

ITM AR reflectivity is about 300ppm.
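As a cross-check of the displacement numbers quoted above, the small-angle geometry is sketched below; the ~5 m ITM-to-BS lever arm used here is an assumption chosen only to illustrate how the millimetre values come out, not a number taken from this entry.

# Rough check of ghost-beam displacement at the BS from the ITM wedge.
import numpy as np

wedge = np.radians(0.08)   # ITM wedge angle
n     = 1.45               # assumed fused-silica index
lever = 5.0                # m, assumed ITM-to-BS propagation distance

# AR reflection of the ETM-going beam: deflected by 2*wedge (+Z)
print(2 * wedge * lever * 1e3)        # ~14 mm

# AR reflection of the BS-going beam: deflected by -(1+n)*wedge (-Z)
print((1 + n) * wedge * lever * 1e3)  # ~17 mm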

CP AR:

A similar calculation applies for the CPs, except that they have a horizontal wedge, the thick part being -Y for CPX and -X for CPY.

CP wedge is about 0.07 degrees.

I only looked at the surface of CP that is opposite of the ITM, and assumed that the surface facing ITM is more or less parallel to ITM AR, within an accuracy of O(100urad).

I assumed that S1 is the surface close to the ITM, and took S2 AR numbers from galaxy web page (43.7ppm for X, 5ppm for Y).

BS AR propagation:

BS wedge is 0.076 degrees, with a reflectivity of 50ppm.

Deflection of BS AR reflection of -X beam relative to the main beam is NOT -2*wedge as BS is tilted by 45 degrees. With some calculation it turns out that it is about -0.27 degrees, with a displacement of +48mm (positive = +X).

This beam is not obstructed at all by the BS baffle, hits SR3, and makes it to the SR2 baffle edge. Whatever makes it to the SR2 surface doesn't go to SRM and instead comes back to SR3, since SR2 is convex and the beam is heavily off-centered.

If there were no SR2 baffle and SR2 were much larger, the center of the reflected beam would be 50cm in the -X direction from the center of SRM, which happens to be on SR3.

I don't know what happens to the edge scattering and the reflection from SR2, but both of these are highly dependent on SR2 centering.

Images attached to this comment
H1 DetChar (DetChar)
keith.riles@LIGO.ORG - posted 12:12, Tuesday 02 June 2015 - last comment - 08:44, Thursday 09 July 2015(18764)
Narrow lines in H1 DARM
Attached is a pre-ER7 list of narrow lines seen above 5 Hz in recent H1 DARM data, along with spectra containing labels for the lines.

The spectra used for line-hunting are from 18 hours of DC-readout conditions on May 17. Most of the lines were also seen in the early-May mini-run data, but are more exposed in the more sensitive May 17 data (see figure 1).

Notable combs / lines:
  • Exact integers: 16.0000 Hz, 64.0000 Hz (distinctly louder than other 16-Hz lines), 81.0000 Hz, 1150.0000 Hz, 1672.0000 Hz, 1704.0000 Hz, 1880.0000 Hz, 1896.0000 Hz
  • Nearly exact-integer combs:
    • 3.9994 Hz (offset by ~2 Hz from zero; first visible harmonic at 13.9977 Hz) -- these seem to correlate with a near-4-Hz comb starting at 2 Hz in EX magnetometers (and to a lesser degree in EY magnetometers); the contamination in DARM at 10 Hz and below is presumably too faint to be seen against the rapidly rising DARM noise. A strong comb is apparent in the PEM FScans.
    • 99.9989 Hz (correlated with magnetometers in EX - see NoEMi and PEM FScans)
  • Quad suspension violin modes with fundamentals near 500 Hz are not quite truly harmonic, but some of their upconversion is (see figure 2)
  • Especially pervasive combs: 3.9994 Hz (34 harmonics), 16.0000 Hz (124 harmonics), 60.0 Hz (9 harmonics), 64.0000 Hz (31 harmonics), 36.9733 Hz (54 harmonics), 59.9954 Hz (6 harmonics), 99.9989 Hz (13 harmonics)
  • There are tentative identifications here of lines near 300 Hz and their harmonics as due to beam splitter violin modes, but their frequencies shifted by several mHz between the mini-run and the May 17 lock stretches, unlike the quad suspension violin modes which hardly moved at all (attributable to temperature?)
  • Designated calibration line frequencies are shown on the spectra even when there is no apparent line (some not yet enabled?)
Notes:
  • Because the binning in the spectra is 0.5 mHz (based on 30-minute FScan Hann-windowed SFTs), but the line widths in the plots are much larger, the spectra shown here look awful (but aren't). Dewhitening applies five poles at 1 Hz and five zeroes at 100 Hz.
  • I didn't bother labeling the forest of bounce/roll-mode sidebands on the quad suspensions and their harmonics
  • Although the 1150-Hz line seems well centered on the integer value, its energy is spread over several tenths of a Hz, unlike other integer-Hz lines
  • Many line frequencies are given to four digits after the decimal point, but their statistical uncertainties are typically no better than a few tenths of a mHz, and, based on changes between early and mid May, some lines have systematic drifts of O(mHz).
  • With one exception, the quad violin mode fundamental frequencies were determined from the mini-run data, where they were more excited than on May 17. Those frequencies agree (independently) to within 2 mHz (in most cases, to better than 1 mHz) with the frequencies measured for individual test masses here. The one exception was that I had originally marked 504.1492 Hz as a mode in the mini-run, while the earlier study had found a mode instead at 501.254 Hz. Since the earlier measurements are guided by test-stand data, I am deferring to them and tagging 501.2544 Hz here, since I do see a line there in the mini-run data, albeit weaker than other modes.
Key to spectra labels:
  b = Bounce modes
  r = Roll modes
  C = Calibration lines
  L = Lock-in oscillator
  M = Power mains comb
  N = Near to power mains comb
  S = 64 Hz comb
  s = 16 Hz comb
  F = Near to 4 Hz (3.9994 Hz) comb
  H = Near to 100 Hz (99.9989 Hz) comb
  D = 59.3155 Hz comb
  E = 36.9733 Hz comb
  G = 75.23 Hz comb
  Q = Quad violin modes
  B = Beam splitter violin modes
  x = Miscellaneous singlets

Figures:
  1 - Superposition of 0-2000 Hz spectra for the mini-run (29.5 hours) and the May 17 data (18 hours)
  2 - Quad violin mode regions for the first three harmonics (the actual modes are non-harmonic, but some upconversion of the fundamental propagates harmonically)
  3 - May 17 spectrum with labeled lines

Other attachments:
  1 - ASCII list of narrow lines / combs found in 0-2000 Hz
  2 - Zipped tar file of 27 sub-band spectra (mostly 100-Hz wide)
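As an illustration of how lines get tagged against these combs, here is a toy comb matcher; the tolerance and the short example line list are made up, and this is not the actual FScan/line-tagging code.

# Toy comb tagger: label frequencies that sit near offset + n*spacing.
# (offset, spacing) in Hz; 'F' carries the ~2 Hz offset noted above.
combs = {'s': (0.0, 16.0000), 'S': (0.0, 64.0000),
         'F': (2.0, 3.9994), 'H': (0.0, 99.9989)}
tol = 1e-3   # Hz, assumed matching tolerance

lines = [13.9977, 64.0000, 99.9989, 299.9967, 501.2544]   # example values

for f in lines:
    tags = []
    for label, (off, f0) in combs.items():
        n = int(round((f - off) / f0))
        if n >= 1 and abs(f - off - n * f0) < tol:
            tags.append('%s (n=%d)' % (label, n))
    print('%.4f Hz: %s' % (f, ', '.join(tags) if tags else 'unmatched'))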
Images attached to this report
Non-image files attached to this report
Comments related to this report
keith.riles@LIGO.ORG - 12:17, Tuesday 02 June 2015 (18785)
I meant to attach the excited violin mode spectrum stack from the mini-run, not from the mid-May data,
to illustrate the harmonicity of the upconversion. Here is the right plot.
Images attached to this comment
nelson.christensen@LIGO.ORG - 08:44, Thursday 09 July 2015 (19517)DetChar, PEM
We used the coherence tool on the full ER7 data to try and find coherence between h(t) and other channels for the 99.9989 Hz line and its harmonics.

There is a coherence between h(t) and ...
H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ
at
99.9989*1= 99.9989 Hz with coherence of 0.038
99.9989*2 = 199.9978 Hz with coherence of 0.03
99.9989*3 = 299.9967 Hz with coherence of 0.11
99.9989*4 = 399.9956 Hz with coherence of 0.11
99.9989*5 = 499.9945 Hz with coherence of 0.022
99.9989*10 = 999.989 Hz with coherence of 0.13
Similar results for
H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_X_DQ
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ
H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Z_DQ
H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-EY_MAG_EBAY_SUSRACK_Z_DQ
H1:PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1:PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1:PEM-EX_MAG_EBAY_SUSRACK_Z_DQ

The coherence is present but less strong in
H1:PEM-CS_MAG_LVEA_VERTEX_Z_DQ
99.9989*10 = 999.989 Hz with coherence of 0.06
Not really visible in
H1:PEM-CS_MAG_LVEA_VERTEX_Y_DQ

We don't see this line in
H1:PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1:PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1:PEM-EX_MAG_VEA_FLOOR_Y_DQ
H1:PEM-EX_MAG_VEA_FLOOR_X_DQ
H1:PEM-EY_MAG_VEA_FLOOR_X_DQ
H1:PEM-EY_MAG_VEA_FLOOR_Y_DQ
H1:PEM-EY_MAG_VEA_FLOOR_Z_DQ
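For context, the kind of narrow-band coherence estimate quoted above can be sketched with scipy; data fetching (NDS/frames) is omitted here and the arrays below are placeholders, so this is not the actual coherence tool.

# Sketch: coherence between h(t) and a witness channel at a line frequency.
import numpy as np
from scipy import signal

fs = 16384.0                          # Hz, assumed sample rate
x = np.random.randn(int(fs) * 60)     # stand-in for h(t)
y = np.random.randn(int(fs) * 60)     # stand-in for a magnetometer channel

# 10-second segments give 0.1 Hz resolution around the narrow lines
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=int(fs * 10))

line = 99.9989
for n in (1, 2, 3, 4, 5, 10):
    idx = np.argmin(np.abs(f - n * line))
    print('%.4f Hz: coherence %.3f' % (f[idx], Cxy[idx]))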

Nelson, Eric Coughlin, Michael Coughlin
H1 DetChar (CDS, SUS)
jeffrey.kissel@LIGO.ORG - posted 12:09, Tuesday 02 June 2015 (18783)
H1SUSH34 IOP Front End Model Restarted to Recalibrate 18-bit DACs
J. Kissel, D. Barker,

Having received word from DetChar on yesterday's run meeting call that they're seeing major carry-transition glitches in the IMC, Dave and I have restarted the h1iopsush34 front end model on the h1sush34 front end, which runs a calibration routine on the 18-bit DACs in the corresponding I/O chassis. The reboot is now complete, and the IMC is up and running. We've confirmed that all DAC cards had a successful auto-calibration.


controls@h1sush34 ~ 0$ uptime
 11:52:19 up 13 days, 33 min,  0 users,  load average: 0.01, 0.02, 0.00
controls@h1sush34 ~ 0$ dmesg | grep AUTOCAL
# After the I/O chassis Power Supply, Timing Slave Firmware, and 18 bit DAC card upgrades had finished on May 21 2015:
[   49.512048] h1iopsush34: DAC AUTOCAL SUCCESS in 5341 milliseconds 
[   54.875289] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds 
[   60.668130] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds 
[   66.030689] h1iopsush34: DAC AUTOCAL SUCCESS in 5341 milliseconds 
[   71.823494] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds 
[   77.186841] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds 
# This restart:
[1121136.381304] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds 
[1121141.741653] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds 
[1121147.529535] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds 
[1121152.889081] h1iopsush34: DAC AUTOCAL SUCCESS in 5340 milliseconds 
[1121158.677788] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds 
[1121164.037291] h1iopsush34: DAC AUTOCAL SUCCESS in 5340 milliseconds
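If it helps DetChar (or ops) script the check, something along these lines would verify the autocal results after a restart; this is just a sketch against the log format shown above.

# Parse dmesg for 18-bit DAC autocal results after an IOP model restart.
import re
import subprocess

out = subprocess.check_output(['dmesg']).decode('utf-8', 'replace')
pattern = re.compile(r'h1iopsush34: DAC AUTOCAL (\w+) in (\d+) milliseconds')

results = pattern.findall(out)
for i, (status, ms) in enumerate(results):
    print('DAC %d: %s (%s ms)' % (i, status, ms))

if results and all(status == 'SUCCESS' for status, _ in results):
    print('all DAC autocals succeeded')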


@ DetChar -- we are VERY interested to track how these DAC calibrations behave over time, to find out how often we need to run these autocal routines. Please make sure to look for glitches *every day* during ER7, and report back to us:
- Did *this* autocal fix the problem you've seen that made you request it?
- If you still see the problem, did the autocal at least *reduce* the glitches?
- How quickly, if at all, do glitches come back? (a plot of glitch amplitude vs time over ER7)
- You'd mentioned you can't see it in DARM -- confirm that this is true for the entire run
- Because we had so many of these DAC cards fail during the May 21 upgrade, we were forced to *not* upgrade the cards in the h1sush56 I/O chassis. This means that H1SUSSRM, H1SUSSR3 and H1SUSOMC *have not* had their DAC cards upgraded. Can you tell a difference? 
- I would expect H1SUSSRM and H1SUSSR3 to have great influence on DARM, given the known SRCL to DARM coupling. Is there evidence of this?
- We send ASC control to both SR2 and SRM, and LSC control to SRM. SR2 has new 18-bit DACs and SRM does not. If you can see glitching in SRCL -- can you tell if it's more SRM / SR3 than SR2? 

Thanks ahead of time!
H1 CDS (DCS)
david.barker@LIGO.ORG - posted 11:56, Tuesday 02 June 2015 (18782)
LDAS QLogic switch in MSR has new SFPs; we are not testing them until the end of ER7

Greg, Dan, Jim, Dave:

Dan has the new LDAS SFPs to install on the single-mode link between the MSR and LSB which was reporting errors. He has inserted the new SFPs in the QLogic switches, but for now these ports are still disabled. We will enable them and remove the link between the MSR switches at the end of ER7.

H1 CDS
david.barker@LIGO.ORG - posted 11:53, Tuesday 02 June 2015 (18781)
h1guardian0 reboot

To clear the accumulated processor load of 40 on h1guardian0, I rebooted this machine at 11:41PDT. All guardian nodes came back online with no problems. The load average is now back at its starting value of 7.

H1 CAL (CDS, DAQ)
david.barker@LIGO.ORG - posted 11:52, Tuesday 02 June 2015 - last comment - 12:21, Tuesday 02 June 2015(18780)
MC2 M3 DAC recalibrated

Jeff, Dave

We restarted the h1iopsush34 model, which performed a calibration on all six 18-bit DAC cards (including MC2's M3 DAC, which had reported zero-crossing glitching; see link). All autocals completed successfully.

Andrew, could you verify if the recalibration has made any improvement in MC2 M3? The calibration could deteriorate over time.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:21, Tuesday 02 June 2015 (18786)CDS, DetChar, SUS
For the record, my LHO aLOG 18783 documents the same work as the entry above. Sorry for the repeat logging!
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 11:47, Tuesday 02 June 2015 (18778)
CDS model and DAQ restart report, Sunday 31st May - Monday 1st June 2015

model restarts logged for Mon 01/Jun/2015
2015_06_01 01:02 h1fw1*

model restarts logged for Sun 31/May/2015
2015_05_31 06:42 h1fw0*
2015_05_31 23:10 h1fw1*

 

* = unexpected restart

H1 SEI
hugh.radkins@LIGO.ORG - posted 11:26, Tuesday 02 June 2015 (18774)
HEPI Fluid Levels Checked--all good--no leaks--Accumulators would appear charged

It has been 5 or more weeks since I last noted the fluid levels in the reservoirs, and there is no measurable change in the levels.  This indicates there are obviously no leaks larger than a few nuisance drips, and the accumulators remain well charged.

If there is a level trip anytime soon, it means a substantial leak has developed or one or more accumulators has lost its gas charge.

H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 11:14, Tuesday 02 June 2015 - last comment - 11:42, Tuesday 02 June 2015(18773)
Pcal beam localization camera images at Xend

NutsineeK, RickS

This morning, we went to Xend to capture images of the ETM with the illuminator on and the green ALS and red OptLev beams blocked.

The procedure for capturing the measurements is as follows:

  1. Turn off Pcal excitations on the PCal medm screen.
  2. Note the OFS Offset level (6.0 for LHO Xend)
  3. Set OFS offset to maximum level (10 at EndX)
  4. Close the ALS green light shutter
  5. Block the optical lever beam with the Pcal red aluminum beam block.  Need to remove the 8-32 screw from the strip that blocks the slot in the viewport protector and remove the strip (be careful not to drop the screw).
  6. Remove the Pcal Rx module cover and block the Pcal beams at the entrance to the Rx PD integrating sphere using a razor blade dump.
  7. Take two images of the ETM, the first with the illuminator on and focused on the edge of the ETM (visible) and the second without the illuminator and focused on the Pcal spots on the ETM surface (infrared).
  8. Set the OFS offset back to the nominal level.
  9. Turn on the excitations.
  10. Remove the beam block from the Optical Lever and replace the strip that covers the slot in the viewport protector.
  11. Open the shutter for the ALS green beam.
  12. Remove the beam dump from the Pcal Rx PD in the Rx module and replace the cover.
The images will be attached to this report shortly.
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 11:42, Tuesday 02 June 2015 (18776)

Images attached to this comment
H1 CAL (CAL)
duncan.macleod@LIGO.ORG - posted 09:13, Tuesday 02 June 2015 - last comment - 12:25, Tuesday 02 June 2015(18772)
Request rebuild/restart of h1calcs

[Ryan F, Duncan M, etc.]

I have committed a new version of the CAL_INJ_MASTER common library model used to control hardware injections in the front-end system. This change reimplements the logic for the ODC state vector, making the state reporting more robust, and implements logging for the stochastic injections. The change has no impact outside of the /INJ/ODC block except for input connections to that block.

If this model could be pulled onto the production system at LHO, and the h1calcs front-end rebuilt and restarted, that would be spiffing. At LLO this change did not require a restart of daqd, which is nice.

Once that change is made, I can remotely modify the related MEDM screens and EPICS variables to ensure that hwinj reporting is configured correctly before data-taking restarts this afternoon.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:25, Tuesday 02 June 2015 (18787)INJ
The h1calcs front end model has been recompiled, reinstalled, restarted and restored with an svn updated copy of the CAL_INJ_MASTER. Good luck and god speed!
H1 General
travis.sadecki@LIGO.ORG - posted 07:50, Tuesday 02 June 2015 (18766)
Owl Shift Summary

Times in UTC

7:10 Locked LSC FF, intent bit set to undisturbed (probably ignore this since I was a bit hasty in setting the bit)

8:12 Lockloss

9:00 Locked LSC FF

9:10 Lockloss.

9:20 Round of initial alignment

10:53 Locked LSC FF

10:59 Intent bit set to undisturbed

11:33 Lockloss

11:45 Another round of alignment as the X arm alignment didn't look good

13:09 Locked LSC FF

13:27 Intent bit set to undisturbed

14:02 Lockloss

14:12 Bubba to LVEA taking measurements

14:36 Locked LSC FF

14:37 Intent bit set to undisturbed

14:40 Bubba out

14:50 Lockloss.  Good luck Patrick!

H1 CAL (CAL, CDS, DetChar, INJ, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 03:50, Tuesday 02 June 2015 - last comment - 12:03, Tuesday 02 June 2015(18771)
H1 CAL-CS Front-End Calibration Has Been Updated!
J. Kissel, E. Hall

See Evan's entry here: LHO aLOG 18770.

We still need to double check it, and I'm sure we've made a mistake or two, but we think we've installed as much as we can based on the results of the DARM Open Loop Gain transfer functions we have compared against a model (see LHO aLOG 18769) and of the actuation coefficient measurements (see LHO aLOG 18767).

For lack of better quantitative understanding of the DARM OLGTFs we have, we should still consider this calibration at an accuracy of 50% and 20 [deg]. (At least it's better than the factor of two promised :-/ ).

Note that we have NOT yet updated the inverse actuation function in the hardware injection path. Sorry -- but that'll have to wait until the morning.

We've still got plenty more to do and understand, but thanks to all who have helped over the past 1.5 weeks. Your help has been invaluable, and much appreciated!!

P.S. IF NEED BE -- one can revert to the old CAL-CS calibration by switching back to ETMX, then reverting to the former sensing function via the filter archive.
Comments related to this report
kiwamu.izumi@LIGO.ORG - 12:03, Tuesday 02 June 2015 (18784)

I found a bug -- ETMY ESD needs another factor of 4. I increased the gain of the simulated ESD filter by a factor of 4 in the CAL-CS front end model. See the attached screen shot.  The SDF was consequently updated.

Also, in the course of trying to find the bug, I made a script which compares the filters in the CALCS front end model with the ones in the matlab H1DARM model. It is in the calibration svn:

aligocalibration/trunk/Runs/PreER7/H1/Scripts/DARMOLGTFs/compare_CALCS_and_DARMmodel.m

NOTE: This does not affect the gds calibration or h(t). This is only for CAL_DELTAL_EXTERNAL.

Images attached to this comment
H1 CDS (CAL)
david.barker@LIGO.ORG - posted 17:15, Monday 01 June 2015 - last comment - 11:48, Tuesday 02 June 2015(18759)
GRB Alert script running on h1fescript0, runs for several minutes and then stops

Sudarshan, Duncan, Branson, Andrew, Michael T, Greg, Dave:

I got a lot further in installing the GRB alert system at LHO. It now runs, but fails after a couple of minutes. Here is a summary of the install:

LHO and LLO sysadmins decided to run the GRB code on the front end script machine (Ubuntu 12). At LHO it is called h1fescript0.

I requested a Robot GRID Cert for this machine; Branson very quickly issued the cert for GraceDB queries last Friday.

Following Duncan's and the GraceDB install instructions, I was able to install the python-ligo-gracedb module. The initial install failed; Michael resolved this -- I was using the Debian Squeeze repository (which uses python2.6) rather than Wheezy, which uses python2.7.

Greg told us how to install the GRID cert on the machine and setup the environment variable so the program could find it.

I found a bug in the code for the lookback; it appears the start and stop times were reversed in the arguments to client.events().

For testing, I saw that a GRB event had happened within the past 10 hours, so I ran the program with a 10-hour lookback. It found the event and posted it to EPICS (see attachment).

But after running for several minutes, it stopped with an error. This is reproducible.

controls@h1fescript0:scripts 0$ python ext_alert.py run -l 36000
Traceback (most recent call last):
  File "ext_alert.py", line 396, in
    events = list(client.events('External %d.. %d' % (start, now)))
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 450, in events
    response = self.get(uri).json()
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 212, in get
    return self.request("GET", url, headers=headers)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 325, in request
    return GsiRest.request(self, method, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 200, in request
    conn.request(method, url, body, headers or {})
  File "/usr/lib/python2.7/httplib.py", line 958, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
    self.endheaders(body)
  File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 776, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 1157, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 571, in create_connection
    raise err
socket.error: [Errno 110] Connection timed out
 

Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 17:42, Monday 01 June 2015 (18763)CDS
We were having the same issues at LLO - Duncan and Jamie were looking at it.  We've got the robot cert, etc. all set up.  Likely can move to standard operation tomorrow.
duncan.macleod@LIGO.ORG - 11:48, Tuesday 02 June 2015 (18779)

The errors Keith mentioned seeing at LLO are unrelated; I cannot reproduce the connection timeout down there.

I have reproduced the timeout error at LHO as suggested, and have written up a retry workaround that will re-send the query up to 5 times in the event of a timeout error. This seems to run stably at LHO. The logging has been updated to record failed queries.
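The retry logic amounts to something like the sketch below (not necessarily the committed code; the query string mirrors the one in the traceback above).

# Minimal sketch of the GraceDB query retry workaround.
import socket
import time

MAX_RETRIES = 5

def query_events(client, start, now, max_retries=MAX_RETRIES):
    """Query GraceDB for external events, retrying on connection timeouts."""
    for attempt in range(1, max_retries + 1):
        try:
            return list(client.events('External %d .. %d' % (start, now)))
        except socket.error as e:
            # log the failed query and try again after a short pause
            print('GraceDB query failed (attempt %d/%d): %s'
                  % (attempt, max_retries, e))
            time.sleep(5)
    raise RuntimeError('GraceDB query failed after %d attempts' % max_retries)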

The SVN commit was made from h1fescript0 with Dave Barker's LIGO.ORG ID (unintentionally).
