Wasn't able to test set point interlock functionality (maybe next maintenance day?)
Several front ends had partially loaded filter modules. To prepare H1 for ER7 data taking, I ran a script which pressed all LOAD_NEW_COEFF buttons on every model. I'll periodically "load all filters" if we encounter any partially loaded files during ER7.
Daniel and I looked at three of the locklosses from Travis's shift last night, from 14:40, 14:02 and 11:33 UTC. The earlier two both seem to be related to an alignment drift over the 2-3 minutes before the lockloss, which shows up clearly in SR3 PIT (there is currently no feedback to SR3 PIT). According to the witness sensors, this drift is only seen on M3. No optics saturated until after the lockloss. The DC4 centering loop, as well as both of the SRC alignment loops, responds to the drift.
It's unclear what causes the drift to accelerate in the minutes before the lockloss. There is also a drift of SR3 when we power up, as we noted yesterday, but this happens on a slower timescale than the drifts that precede a lockloss (3rd screenshot). Also, there is a longer, slow drift that happens whenever we are locked.
With Patrick and Cheryl I have engaged a DC coupled optical lever for SR3 PIT; we will see if this helps. The last screenshot attached shows the MEDM screen used to turn this on or off.
If the operators need to disable this (due to an earthquake, a trip, or if the optic becomes misaligned for any other reason), you can get to this screen from SR3, M2 OLDAMP.
Turning off:
Turn off FM1 (labeled DC), then turn off the input.
Turning it back on:
Once the optic has settled and the beam is back on the oplev QPD, turn on the damping loop (with FM1 still off). Average INMON (from a command line: tdsavg 10 H1:SUS-SR3_M2_OLDAMP_P_INMON), type -1 times the average into the offset, make sure the offset is engaged, and finally turn on FM1 to make the loop DC coupled.
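For anyone who prefers to script the averaging/offset step, here is a minimal Python sketch using pyepics. The _OFFSET channel name is the standard filter-module field and is my assumption, not something read off the screen; FM1, the offset switch, and the input are still toggled from the OLDAMP MEDM screen as described above.

import time
from epics import caget, caput   # pyepics

CH = 'H1:SUS-SR3_M2_OLDAMP_P'

# Average INMON for ~10 s (the equivalent of: tdsavg 10 H1:SUS-SR3_M2_OLDAMP_P_INMON)
samples = []
t_end = time.time() + 10
while time.time() < t_end:
    samples.append(caget(CH + '_INMON'))
    time.sleep(0.1)
avg = sum(samples) / len(samples)

# Write -1 times the average into the offset so the loop closes around the current alignment
caput(CH + '_OFFSET', -avg)
print('offset set to %.3g; engage the offset, then turn on FM1 to DC couple' % -avg)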
Since this is just a trial, Jeff is not including these changes in his current SDF cleanup campaign.
Looking at the initial power up, we can see that an increase of a factor of ~10 in power causes ~0.7 µrad of pitch misalignment. During the accelerated drift in the last 3-5 minutes before the lockloss, another 0.4 µrad of pitch misalignment was acquired with only a ~10% power increase. One might wonder if we are seeing a geometrically induced wire-heating runaway.
I modeled how much the two front wires have to heat up to cause a bottom-mass pitch of 1 microradian. Only a very small temperature increase is needed to produce this.
* Assuming a constant temperature profile along the wire length (I'm sure this is not the case, but it is easy to calculate), it is 0.003 [C]
* Assuming a linear temperature profile with the max temperature in the middle and no temperature increase at the ends of the wire, it is 0.006 [C]
So we can say an order of magnitude estimate is greater than 1 mC / urad and less than 10 mC / urad.
Calculations:
From gwinc, the thermal coefficient of expansion for C70 steel wire is
alpha = 12e-6 [1/C].
From the HLTS model at ../SusSVN/sus/trunk/Common/MatlabTools/TripleModel_Production/hltsopt_wire.m
wire length L = 0.255 [m]
front-back wire spacing s = 0.01 [m]
The change in wire length for pitch = 1 urad is then
dL = s * pitch = 0.01 * 1e-6 = 1e-8 [m]
* For uniform wire heating of dT, this change comes from
dL = alpha * L * dT
So, solving for dT
dT = dL / (alpha * L) = 1e-8 / ( 12e-6 * 0.255 ) = 0.0033 [C]
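For reference, a few lines of Python reproducing the arithmetic of the uniform-heating case:

alpha = 12e-6    # 1/C, thermal expansion coefficient of C70 steel wire (from gwinc)
L     = 0.255    # m, HLTS wire length
s     = 0.01     # m, front-back wire spacing
pitch = 1e-6     # rad, bottom mass pitch

dL = s * pitch            # change in wire length needed for 1 urad of pitch
dT = dL / (alpha * L)     # uniform temperature rise that produces it
print(dT)                 # ~0.0033 C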
* For a linear temperature increase profile (max at middle, 0 at ends), I break the wire into many constant temperature segments of length Lsegment.
The temperature increase profile is a vector defined by
dT = dTmax * TempProfile
where TempProfile is a vector of the normalized shape of the temperature profile. It is triangular, 0 at the ends and 1 at the peak in the middle. Each element of the vector corresponds to a constant temperature segment of the wire. dTmax is a scalar representing the maximum temperature increase at the middle of the wire.
The change in wire length is then given by
dL = sum( alpha * Lsegment * TempProfile ) * dTmax
solving for dTmax
dTmax = dL / sum( alpha * Lsegment * TempProfile )
with 101 segments, this gives us dTmax = 0.0063 [C]
about double the uniform heating case.
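And a sketch of the segmented triangular-profile version; the exact answer depends slightly on how the segments sample the triangle, so this lands at ~0.0065 C rather than exactly the 0.0063 C quoted above.

import numpy as np

alpha = 12e-6          # 1/C
L     = 0.255          # m, wire length
dL    = 1e-8           # m, required stretch from dL = s * pitch above

N = 101
Lsegment = L / N
# Triangular profile: 0 at the wire ends, 1 at the middle
TempProfile = 1.0 - np.abs(np.linspace(-1, 1, N))

dTmax = dL / np.sum(alpha * Lsegment * TempProfile)
print(dTmax)           # ~0.0065 C, roughly double the uniform-heating case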
* I also considered that since the wire has significant stress due to the test mass weight, the Young's modulus's temperature dependence might cause a different effective thermal expansion coefficient alpha_effective. This appears to be a negligible effect.
From gwinc, the temperature dependence of the Young's modulus E is
dE/dT = -2.5e-4 [1/C]
and the Young's modulus E is
E = 212e9 [Pa]
From https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=12581, we know that the change in spring length due to the modulus of elasticity dependence is
dL = -dE/dT * dT * Tension / Stiffness
where Tension is the load in the wire and Stiffness is the vertical stiffness of the wire.
The Stiffness is given by
Stiffness = E * A / L = E * pi * r^2 / L
where A is the cross sectional area of the wire, and r is the radius.
So plugging this in above
dL = -dE/dT * dT * Tension * L / ( E * pi * r^2 )
We get the correction on alpha by dividing this by L and dT, which eliminates both from the equation. From the HLTS model, the bottom mass is 12.142 kg and the wire radius is 1.346e-4 m.
Tension = 12.142 * 9.81 / 4 = 29.8 [N]
The correction on alpha is then
-dE/dT * Tension / ( E * pi * r^2 ) = 2.5e-4 * 29.8 / (212e9 * pi * 1.346e-4^2) = 6.2e-7 [1/C]
This changes alpha from 12e-6 to 12.6e-6 [1/C].
Not enough to matter for the estimates above.
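The same correction in a few lines of Python (here dE/dT is treated as the fractional temperature coefficient, in 1/C, which is how it enters the formula above):

import math

dEdT    = -2.5e-4            # 1/C, fractional temperature dependence of E (gwinc)
E       = 212e9              # Pa, Young's modulus of the wire
m       = 12.142             # kg, HLTS bottom mass
r       = 1.346e-4           # m, wire radius
Tension = m * 9.81 / 4       # N per wire, ~29.8 N

alpha_correction = -dEdT * Tension / (E * math.pi * r**2)
print(alpha_correction)      # ~6.2e-7 1/C, taking alpha from 12e-6 to ~12.6e-6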
Localizing the heat source:
I made a calculation of the heat absorption by the wires.
Based on Brett's temperature estimate, and assuming radiation is the only heat dissipation mechanism, the heat the front wires must be absorbing is about 1uW total (for the two wires) when SR3 tilts by 1 urad, regardless of the temperature distribution.
If you only look at the power, any ghost beam coming from PRC power (about 800W per 20W input assuming recycling gain of 40) can supply 1uW as each of these beams has O(10mW) or more.
I looked at the BS AR reflection of the X reflection, the CP wedge AR both ways, and the ITM AR both ways. I'm not sure about the first one, but the rest are mostly untouched by anything and fall on SR3 off-center.
The attachment depicts the SR3 outline together with the positions of the CP wedge AR (green) and ITM AR (blue) reflections, assuming perfect centering of the main beam and the SR3 baffle on SR3. Note that the ITMX AR reflection of the +X propagating beam falls roughly on the same position on SR3 as the ITMY AR reflection of the +Y propagating beam. Ditto for all ITM and CP AR reflections. The radii of these circles represent the beam radius. The power is simply 20W*G_rec(40)*(AR(X)+AR(Y))/4 (extra factor of 2 due to the fact that the AR beam goes through the BS) for ITM and CP, and 20W*40*AR/2 for the BS AR of the -X beam.
I haven't done any more calculations and I don't intend to, but just looking at the numbers (the total power in the green and blue beams in the figure is about 240mW, 5 orders of magnitude larger than the heat absorbed by the wires), and considering that the centering on SR3 cannot be perfect, that the SR3 baffle is somewhat larger than SR3 itself, and that the CP alignment is somewhat arbitrary, it could be that these blobs seep through the space between the baffle and SR3 and provide 1uW.
The red circle is where the BS AR reflection of the -X beam would be if it were not clipped by the SR2 scraper baffle. If everything is as designed, the SR2 scraper baffle will cut off 90% of the power (the SR2 edge is 5mm outside of the center of the beam, which has an 8mm radius), and the remaining 10% comes back to the left edge of the red circle.
Any ghost beam originating from SRC power is (almost) exonerated, because the wire (0.0106" = 0.27mm diameter) is much smaller than any of the known beams, so it's difficult for these beams to dump 1uW on the wires. For example, the SRC power hitting SRM is about 600mW per 20W input, and the SRM AR reflection is already only about 22uW.
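For the record, here is the simple power bookkeeping above in Python, using the round numbers quoted in this entry (20 W input, recycling gain 40, ITM AR ~300 ppm, CP S2 AR 43.7/5 ppm, BS AR 50 ppm). The ~240 mW in the figure presumably used the measured AR values, so this only reproduces it to within tens of percent.

P_prc = 20.0 * 40            # W circulating in the PRC, ~800 W

# Per blob: 20W * G_rec * (AR(X) + AR(Y)) / 4 (the extra factor of 2 because
# the AR beam goes through the BS), and 20W * G_rec * AR / 2 for the BS AR.
P_itm_blob = P_prc * (300e-6 + 300e-6) / 4    # ~0.12 W per ITM AR blob
P_cp_blob  = P_prc * (43.7e-6 + 5e-6) / 4     # ~10 mW per CP AR blob
P_bsar     = P_prc * 50e-6 / 2                # ~20 mW, BS AR of the -X beam

total_green_blue = 2 * (P_itm_blob + P_cp_blob)
print(P_itm_blob, P_cp_blob, P_bsar, total_green_blue)
# A few hundred mW in the green+blue blobs -- five orders of magnitude more
# than the ~1 uW the front wires would need to absorb.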
Details of heat absorption:
When the temperature of a section of wire rises, the stretching of that section is proportional to the length of the section and the rise in temperature. Because of this, the total wire stretch is proportional to the temperature rise integrated over the wire length (which is equal to the mean temperature rise multiplied by the wire length), regardless of the temperature distribution, as is shown in effect by Brett's calculation:
stretch ∝ int_0^L t dL = mean(t) * L
where L is the length of the wire and t is the difference from the room temperature.
Likewise, the heat dissipation of a short wire section of the length dL at temperature T+t via radiation is
sigma*E*C*dL*[(T+t)^4-T^4] ~ 4*sigma*E*C*dL*T^3*t
where sigma is the Stefan-Boltzmann constant, E the emissivity, C the circumference of the wire, and T the room temperature (about 300K). The heat dissipation for the entire length of the wire is obtained by integrating this over the length, and the relevant integral is int_0^L t dL, so again the heat dissipation via radiation is proportional to the temperature rise integrated over the wire length, regardless of the temperature distribution:
P(radiation) ~ 4*sigma*E*T^3*(C*L)*mean(t).
I assume the emissivity E of the steel wire surface to be O(0.1). These wires are drawn; I couldn't find the emissivity of a drawn surface, but it's 0.07 for a polished steel surface and 0.24 for a rolled steel plate.
I used T=300K, t=3mK (Brett's calculation for both of the temperature distributions), C=pi*0.0106", L=0.255m*2 for two front wires, and obtained:
P(radiation) ~ 0.8uW ~ 1uW.
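The same estimate in Python:

import math

sigma = 5.67e-8                     # W m^-2 K^-4, Stefan-Boltzmann constant
E     = 0.1                         # emissivity, assumed O(0.1) for drawn steel
T     = 300.0                       # K, room temperature
t     = 3e-3                        # K, mean temperature rise (Brett's estimate)
C     = math.pi * 0.0106 * 0.0254   # m, wire circumference (0.0106" diameter)
L     = 0.255 * 2                   # m, total length of the two front wires

P = 4 * sigma * E * T**3 * C * L * t
print(P)                            # ~8e-7 W, i.e. ~1 uW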
ITM AR:
ITM has a wedge of 0.08 deg, thick side down.
The ITM AR reflection of the beam propagating toward the ETM is deflected by 2*wedge in the +Z direction. For the beam propagating toward the BS, the ITM AR reflects the beam, deflecting it down, and this beam is reflected by the ITM and comes back to the BS. The deflection of this beam relative to the main beam is -(1+n)*wedge.
The AR beam displacement at the BS is +14mm for the +Z-deflected beam and -17mm for the -Z-deflected beam. Since the BS baffle hole "radius" seen from the ITMs is 100+ mm, and since the beam radius is about 53mm, the AR beams are not blocked much by the BS baffle and reach SR3.
ITM AR reflectivity is about 300ppm.
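A rough cross-check of those displacements; the ITM-to-BS lever arm of ~5 m and n = 1.45 are my assumptions, not numbers from this entry.

import math

n        = 1.45                    # assumed fused silica index
wedge    = math.radians(0.08)      # ITM wedge angle
d_itm_bs = 5.0                     # m, assumed ITM-to-BS distance

# AR reflection of the beam heading toward the ETM: deflected by 2*wedge
print(2 * wedge * d_itm_bs * 1e3)          # ~14 mm at the BS
# AR reflection of the beam heading toward the BS: deflected by (1+n)*wedge
print((1 + n) * wedge * d_itm_bs * 1e3)    # ~17 mm at the BS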
CP AR:
A similar calculation for the CPs, except that they have a horizontal wedge, with the thick part toward -Y for CPX and -X for CPY.
CP wedge is about 0.07 degrees.
I only looked at the surface of the CP that is opposite the ITM, and assumed that the surface facing the ITM is more or less parallel to the ITM AR, to within an accuracy of O(100urad).
I assumed that S1 is the surface close to the ITM, and took S2 AR numbers from galaxy web page (43.7ppm for X, 5ppm for Y).
BS AR propagation:
BS wedge is 0.076 degrees, with a reflectivity of 50ppm.
Deflection of BS AR reflection of -X beam relative to the main beam is NOT -2*wedge as BS is tilted by 45 degrees. With some calculation it turns out that it is about -0.27 degrees, with a displacement of +48mm (positive = +X).
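As a rough sanity check (not the calculation referred to above), treating the BS as a fused-silica wedge at 45 deg incidence and applying Snell's law gives a deflection of the same order; the sign and the last few hundredths of a degree depend on the wedge orientation, which I have not tried to get right here.

import math

n     = 1.45                    # assumed fused silica index
aoi   = math.radians(45.0)      # angle of incidence on the BS
wedge = math.radians(0.076)     # BS wedge angle

theta_in  = math.asin(math.sin(aoi) / n)        # refraction into the substrate
theta_ar  = theta_in + 2 * wedge                # after reflecting off the wedged AR face
theta_out = math.asin(n * math.sin(theta_ar))   # refraction back out

print(math.degrees(theta_out - aoi))            # ~0.29 deg, same order as the quoted -0.27 deg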
This beam is not obstructed at all by the BS baffle, hits SR3 and makes it to the SR2 baffle edge. What makes it to the SR2 surface doesn't go to SRM but instead comes back to SR3, as SR2 is convex and the beam is heavily off-centered.
If there's no SR2 baffle and if SR2 is much larger, the center of the reflected beam is going to be 50cm in -X direction from the center of SRM, which happens to be on SR3.
I don't know what happens to the edge scattering and the reflection from SR2, but both of these are highly dependent on SR2 centering.
Attached is a pre-ER7 list of narrow lines seen above 5 Hz in recent H1 DARM data, along with spectra containing labels for the lines. The spectra used for line-hunting are from 18 hours of DC-readout conditions on May 17. Most of the lines were also seen in the early-May mini-run data, but are more exposed in the more sensitive May 17 data (see figure 1). Notable combs / lines:
I meant to attach the excited violin mode spectrum stack from the mini-run, not from the mid-May data, to illustrate the harmonicity of the upconversion. Here is the right plot.
We used the coherence tool on the full ER7 data to try to find coherence between h(t) and other channels for the 99.9989 Hz line and its harmonics. There is coherence between h(t) and H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ at
99.9989*1 = 99.9989 Hz with coherence of 0.038
99.9989*2 = 199.9978 Hz with coherence of 0.03
99.9989*3 = 299.9967 Hz with coherence of 0.11
99.9989*4 = 399.9956 Hz with coherence of 0.11
99.9989*5 = 499.9945 Hz with coherence of 0.022
99.9989*10 = 999.989 Hz with coherence of 0.13
Similar results for
H1:PEM-CS_MAG_EBAY_SUSRACK_X_DQ
H1:PEM-CS_MAG_EBAY_SUSRACK_Y_DQ
H1-PEM-CS_MAG_LVEA_OUTPUTOPTICS_X_DQ
H1-PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ
H1-PEM-CS_MAG_LVEA_OUTPUTOPTICS_Z_DQ
H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
H1-PEM-EY_MAG_EBAY_SUSRACK_Y_DQ
H1-PEM-EY_MAG_EBAY_SUSRACK_Z_DQ
H1-PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1-PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1-PEM-EX_MAG_EBAY_SUSRACK_Z_DQ
The coherence is present but less strong in H1:PEM-CS_MAG_LVEA_VERTEX_Z_DQ:
99.9989*10 = 999.989 Hz with coherence of 0.06
It is not really visible in H1:PEM-CS_MAG_LVEA_VERTEX_Y_DQ.
We don't see this line in
H1-PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1-PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1-PEM-EX_MAG_VEA_FLOOR_Y_DQ
H1-PEM-EX_MAG_VEA_FLOOR_X_DQ
H1-PEM-EY_MAG_VEA_FLOOR_X_DQ
H1-PEM-EY_MAG_VEA_FLOOR_Y_DQ
H1-PEM-EY_MAG_VEA_FLOOR_Z_DQ
Nelson, Eric Coughlin, Michael Coughlin
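For anyone wanting to reproduce this kind of check offline, here is a minimal sketch with scipy. The arrays are random placeholders; in practice h(t) and the magnetometer channel would be fetched from frames, and the actual coherence tool used above may work differently.

import numpy as np
from scipy.signal import coherence

fs = 4096.0
dur = 1800          # seconds of (placeholder) data
strain = np.random.randn(int(dur * fs))   # stand-in for h(t)
mag    = np.random.randn(int(dur * fs))   # stand-in for H1:PEM-CS_MAG_EBAY_SUSRACK_Z_DQ

# 100 s FFTs give 0.01 Hz resolution, enough to resolve the 99.9989 Hz line
f, coh = coherence(strain, mag, fs=fs, nperseg=int(100 * fs))
for k in (1, 2, 3, 4, 5, 10):
    line = 99.9989 * k
    print(line, coh[np.argmin(np.abs(f - line))])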
J. Kissel, D. Barker
Receiving word from DetChar on yesterday's run meeting call that they're seeing major carry transition glitches in the IMC, Dave and I have restarted the h1iopsush34 front end model on the h1sush34 front-end, which runs a calibration routine on the 18-bit DACs in the corresponding I/O chassis. The reboot is now complete, and the IMC is up and running. We've confirmed that all DAC cards had a successful auto-calibration.
controls@h1sush34 ~ 0$ uptime
11:52:19 up 13 days, 33 min, 0 users, load average: 0.01, 0.02, 0.00
controls@h1sush34 ~ 0$ dmesg | grep AUTOCAL
# After the I/O chassis Power Supply, Timing Slave Firmware, and 18 bit DAC card upgrades had finished on May 21 2015:
[ 49.512048] h1iopsush34: DAC AUTOCAL SUCCESS in 5341 milliseconds
[ 54.875289] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 60.668130] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 66.030689] h1iopsush34: DAC AUTOCAL SUCCESS in 5341 milliseconds
[ 71.823494] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 77.186841] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds
# This restart:
[1121136.381304] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds
[1121141.741653] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds
[1121147.529535] h1iopsush34: DAC AUTOCAL SUCCESS in 5344 milliseconds
[1121152.889081] h1iopsush34: DAC AUTOCAL SUCCESS in 5340 milliseconds
[1121158.677788] h1iopsush34: DAC AUTOCAL SUCCESS in 5345 milliseconds
[1121164.037291] h1iopsush34: DAC AUTOCAL SUCCESS in 5340 milliseconds
@ DetChar -- we are VERY interested to track how these DAC calibrations behave over time, to find out how often we need to do these auto cal routines. Please make sure to look for glitches *every day* during ER7, and report back to us:
- Did *this* autocal fix the problem you've seen that made you request it?
- If you still see the problem, did the autocal at least *reduce* the glitches?
- How quickly, if at all, do the glitches come back? (a plot of glitch amplitude vs time over ER7)
- You'd mentioned you can't see it in DARM -- confirm that this is true for the entire run.
- Because we had so many of these DAC cards fail during the May 21 upgrade, we were forced to *not* upgrade the cards in the h1sush56 I/O chassis. This means that H1SUSSRM, H1SUSSR3 and H1SUSOMC *have not* had their DAC cards upgraded. Can you tell a difference?
- I would expect H1SUSSRM and H1SUSSR3 to have great influence on DARM, given the known SRCL to DARM coupling. Is there evidence of this?
- We send ASC control to both SR2 and SRM, and LSC control to SRM. SR2 has new 18-bit DACs and SRM does not. If you can see glitching in SRCL -- can you tell if it's more SRM / SR3 than SR2?
Thanks ahead of time!
Greg, Dan, Jim, Dave:
Dan has the new LDAS SFP to install on the single-mode link between the MSR and the LSB, which was reporting errors. He has inserted the new SFPs in the QLogic switches, but for now these ports are still disabled. We will enable them and remove the link between the MSR switches at the end of ER7.
To clear the accumulated processor load of 40 on h1guardian0, I rebooted this machine at 11:41PDT. All guardian nodes came back online with no problems. The load average is now back at its starting value of 7.
Jeff, Dave
We restarted the h1iopsush34 model, which performed a calibration on all six 18-bit DAC cards (including MC2's M3 DAC, which had reported zero-crossing glitching; see link). All autocals completed successfully.
Andrew, could you verify whether the recalibration has made any improvement in MC2 M3? The calibration could deteriorate over time.
For the record, my LHO aLOG 18783 is documenting the same as this above entry. Sorry for the repeat logging!
model restarts logged for Mon 01/Jun/2015
2015_06_01 01:02 h1fw1*
model restarts logged for Sun 31/May/2015
2015_05_31 06:42 h1fw0*
2015_05_31 23:10 h1fw1*
* = unexpected restart
It has been 5 or more weeks since I noted the fluid levels in the reservoirs, and there is no measurable change in levels. This indicates there are obviously no leaks larger than a few nuisance drips, and the accumulators remain well charged.
If there is a level trip anytime soon, it means a substantial leak has developed or one or more accumulators has lost its gas charge.
NutsineeK, RickS
This morning, we went to Xend to capture images of the ETM with the illuminator on and the green ALS and red OptLev beams blocked.
The procedure for capturing the measurements is as follows:
[Ryan F, Duncan M, etc.]
I have committed a new version of the CAL_INJ_MASTER common library model used to control hardware injections in the front-end system. This change reimplements the logic for the ODC state vector, making the state reporting more robust, and implementing logging for the Stochastic injections. The change has no impact outside of the /INJ/ODC block excepting input connections to that block.
If this model could be pulled onto the production system at LHO, and the h1calcs front-end rebuilt and restarted, that would be spiffing. At LLO this change did not require a restart of daqd, which is nice.
Once that change is made, I can remotely modify the related MEDM screens and EPICS variables to ensure that hwinj reporting is configured correctly before data-taking restarts this afternoon.
The h1calcs front end model has been recompiled, reinstalled, restarted and restored with an svn updated copy of the CAL_INJ_MASTER. Good luck and god speed!
Times in UTC
7:10 Locked LSC FF, intent bit set to undisturbed (probably ignore this since I was a bit hasty in setting the bit)
8:12 Lockloss
9:00 Locked LSC FF
9:10 Lockloss.
9:20 Round of initial alignment
10:53 Locked LSC FF
10:59 Intent bit set to undisturbed
11:33 Lockloss
11:45 Another round of alignment as the X arm alignment didn't look good
13:09 Locked LSC FF
13:27 Intent bit set to undisturbed
14:02 Lockloss
14:12 Bubba to LVEA taking measurements
14:36 Locked LSC FF
14:40 Bubba out
14:50 Lockloss. Good luck Patrick!
14:37 Intent bit set to undisturbed
J. Kissel, E. Hall
See Evan's entry here: LHO aLOG 18770. We still need to double check it, and I'm sure we've made a mistake or two, but we think we've installed as much as we can based on the results of the DARM Open Loop Gain transfer functions we have compared against a model (see LHO aLOG 18769) and of the actuation coefficient measurements (see LHO aLOG 18767). For lack of better quantitative understanding of the DARM OLGTFs we have, we should still consider this calibration at an accuracy of 50% and 20 [deg]. (At least it's better than the factor of two promised :-/ )
Note that we have NOT yet updated the inverse actuation function in the hardware injection path. Sorry -- but that'll have to wait until the morning.
We've still got plenty more to do and understand, but thanks to all who have helped over the past 1.5 weeks. Your help has been invaluable, and much appreciated!!
P.S. IF NEED BE -- one can revert to the old CAL-CS calibration by switching back to ETMX, then reverting to the former sensing function via the filter archive.
I found a bug -- ETMY ESD needs another factor of 4. I increased the gain of the simulated ESD filter by a factor of 4 in the CAL-CS front end model. See the attached screen shot. The SDF was consequently updated.
Also, in the course of trying to find the bug, I made a script which compares the filters in the CALCS front end model with the ones in the matlab H1DARM model. It is in the calibration svn:
aligocalibration/trunk/Runs/PreER7/H1/Scripts/DARMOLGTFs/compare_CALCS_and_DARMmodel.m
NOTE: This does not affect the gds calibration or h(t). This is only for CAL_DELTAL_EXTERNAL.
Sudarshan, Duncan, Branson, Andrew, Michael T, Greg, Dave:
I got a lot further in installing the GRB alert system at LHO. It now runs, but fails after a couple of minutes. Here is a summary of the install:
LHO and LLO sysadmins decided to run the GRB code on the front end script machine (Ubuntu12). At LHO it is called h1fescript0
I requested a Robot GRID Cert for this machine, Branson very quickly issued the cert for GraceDB queries last Friday
Following Duncan's and the GraceDB install instructions, I was able to install the python-ligo-gracedb module. The initial install failed; Michael resolved this: I was using the Debian Squeeze repository (which uses python2.6) rather than Wheezy, which uses python2.7.
Greg told us how to install the GRID cert on the machine and set up the environment variable so the program could find it.
I found a bug in the code for the lookback; it appears the start and stop times were reversed in the arguments to client.events().
For testing, I saw that a GRB event had happened within the past 10 hours, so I ran the program with a 10 hour lookback. It found the event and posted it to EPICS (see attachment).
But after running for several minutes, it stopped with an error. This is reproducible.
controls@h1fescript0:scripts 0$ python ext_alert.py run -l 36000
Traceback (most recent call last):
File "ext_alert.py", line 396, in
events = list(client.events('External %d.. %d' % (start, now)))
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 450, in events
response = self.get(uri).json()
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 212, in get
return self.request("GET", url, headers=headers)
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 325, in request
return GsiRest.request(self, method, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/ligo/gracedb/rest.py", line 200, in request
conn.request(method, url, body, headers or {})
File "/usr/lib/python2.7/httplib.py", line 958, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 776, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 1157, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 110] Connection timed out
We were having the same issues at LLO - Duncan and Jamie were looking at it. We've got the robot cert, etc. all set up. Likely can move to standard operation tomorrow.
The errors Keith mentioned seeing at LLO are unrelated; I cannot reproduce the connection timeout down there.
I have reproduced the timeout error at LHO as suggested, and have written up a retry workaround that will re-send the query up to 5 times in the event of a timeout error. This seems to run stably at LHO. The logging has been updated to record failed queries.
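The retry logic is along these lines (a sketch, not the exact code committed to the SVN):

import logging
import socket
import time

def query_with_retry(client, query, max_tries=5, wait=5.0):
    """Re-send a GraceDB events() query on connection timeouts.

    `client` is the ligo.gracedb.rest client used in ext_alert.py; failed
    attempts are logged and the query is retried up to `max_tries` times.
    """
    for attempt in range(1, max_tries + 1):
        try:
            return list(client.events(query))
        except socket.error as e:
            logging.warning('GraceDB query failed (attempt %d of %d): %s',
                            attempt, max_tries, e)
            if attempt == max_tries:
                raise
            time.sleep(wait)

# usage in the lookback loop:
# events = query_with_retry(client, 'External %d .. %d' % (start, now))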
The SVN commit was made from h1fescript0 with Dave Barker's LIGO.ORG ID (unintentionally).
After doing this I noticed that three filters in the LSC were being regularly reloaded. Evan tracked this down to the ALIGN_IFO.py guardian, which was incorrectly writing a 1 to the RSET PV rather than a 2 (load coefficients rather than clear history). This was fixed.
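For reference, the corrected behaviour amounts to something like this (the channel name is just an example; the real fix lives inside ALIGN_IFO.py):

from epics import caput

# Writing 2 to a filter module's RSET field clears its history; writing 1
# (the bug) loads new coefficients instead.  Example channel only.
caput('H1:LSC-MICH_RSET', 2)   # clear history, as intended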