H1 General (INJ)
thomas.shaffer@LIGO.ORG - posted 17:48, Wednesday 03 June 2015 (18837)
CW Inject active light added to OPS OVERVIEW

In response to alog 18831, work permit #5243 was put in for an indicator light on the OPS_OVERVIEW_CUSTOM.adl screen. If the test point on H1CALCS_GDS_TP goes below 187, the light will go from green to red. This way the operator can easily see when the injections have stopped working as they should.
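
For reference, the indicator logic amounts to a simple threshold comparison on the injection excitation monitor. A minimal Python sketch, assuming a pyepics environment; the channel name below is a hypothetical stand-in, since the alog names the model (H1CALCS_GDS_TP) rather than the exact test-point channel:

    import time
    from epics import caget  # pyepics; assumes EPICS channel access is configured

    CHANNEL = "H1:CAL-CS_GDS_TP_MONITOR"   # hypothetical monitor channel
    THRESHOLD = 187

    while True:
        value = caget(CHANNEL)
        # Green while the monitor stays at or above threshold, red otherwise
        state = "GREEN" if value is not None and value >= THRESHOLD else "RED"
        print(CHANNEL, value, state)
        time.sleep(10)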

H1 INJ (INJ)
eric.thrane@LIGO.ORG - posted 17:32, Wednesday 03 June 2015 - last comment - 08:36, Friday 05 June 2015(18836)
First burst injection completed; two days of injections scheduled at a rate of 1/(2 hr)
Chris Pankow, Jeff Kissel, Adam M, Eric Thrane

We restarted tinj at LHO (GPS=1117411494) to resume transient injections. We scheduled a burst injection for GPS=1117411713. The burst waveform is in svn. It is, I understand, a white noise burst. It is, for the time being, our standard burst injection waveform until others are added. The injection completed successfully.

Following this test, we updated the schedule to inject the same waveform every two hours over the next two days. The next injection is scheduled for GPS=1116241935. However, this schedule may change soon as Chris adds new waveforms to the svn repository.

We are not carrying out transient injection tests at LLO because the filter bank needs to be updated and we cannot be sure that the filters are even close to correct. Adam thinks they will be updated by ~tomorrow.
Comments related to this report
chris.pankow@LIGO.ORG - 08:08, Thursday 04 June 2015 (18850)DetChar, INJ
First, some details: the injection is actually a sine-Gaussian with parameters:

SineGaussian t0 = 1116964989.435322284 q = 28.4183770634 f0 = 1134.57994534

The t0 can be safely ignored since this injection would have no counterpart in L1, but it should be noted that this injection *does* have all relevant antenna patterns and polarization factors applied to it (e.g. it is made to look like a "real" GW signal).
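
For concreteness, a minimal sketch of a sine-Gaussian with these parameters, assuming the common convention Q = sqrt(2)*pi*f0*tau (the actual injection-generation code may differ, and the antenna-pattern/polarization factors mentioned above are omitted):

    import numpy as np

    def sine_gaussian(t, t0, q, f0, amp=1.0):
        # Gaussian-enveloped sinusoid; tau from Q = sqrt(2)*pi*f0*tau
        tau = q / (np.sqrt(2.0) * np.pi * f0)
        return amp * np.exp(-((t - t0) / tau) ** 2) * np.sin(2 * np.pi * f0 * (t - t0))

    fs = 16384.0                       # front-end model rate
    t = np.arange(-0.1, 0.1, 1 / fs)   # time relative to t0
    h = sine_gaussian(t, t0=0.0, q=28.4183770634, f0=1134.57994534)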

I attach the time and frequency domain plots of the waveform; however, they are relative to an O1-type spectrum and so may not be indicative of the actual performance of the instrument at this period of time. Given the most recent spectra and the frequency content of the injection, this could be weaker by up to a factor of ~2-3. The characteristic SNRs I calculated using the O1-type spectrum:

Waveform SineGaussian at 1116964989.435 has SNR in H1 of 7.673043
Waveform SineGaussian at 1116964989.435 has SNR in L1 of 20.470634
Network SNR for SineGaussian at 1116964989.435 is 21.861438

So, it's possible that this injection had an SNR as low as ~2, not accounting for variance from the noise.
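
For reference, the characteristic SNR quoted here is presumably the usual matched-filter norm, rho^2 = 4 * integral |h~(f)|^2 / Sn(f) df. A minimal sketch (not Chris's actual code), assuming a callable one-sided PSD Sn(f):

    import numpy as np

    def characteristic_snr(h, fs, psd):
        # psd: callable returning the one-sided Sn(f) in 1/Hz
        hf = np.fft.rfft(h) / fs                 # approximate continuous FT
        f = np.fft.rfftfreq(len(h), 1 / fs)
        df = f[1] - f[0]
        integrand = 4.0 * np.abs(hf[1:]) ** 2 / psd(f[1:])   # skip the DC bin
        return np.sqrt(np.sum(integrand) * df)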

The excitation channel (H1:CAL-INJ_TRANSIENT_EXCMON, trended) does show a non-zero value, and the "count" value is consistent with the amplitude of the strain. Another monitor (H1:CAL-INJ_HARDWARE_OUT_DQ, at a higher sample rate) also shows the full injection, though it is not calibrated. So the injection was successfully scheduled and looks to have been made. I also did an omega scan of the latter channel, and the signal is at the correct frequency (but has, notably, a very long duration).

I did a little poking around to see if this showed up in h(t) (using H1:GDS-CALIB_STRAIN). Unfortunately, it is not visible in the spectrogram of H1:GDS-CALIB_STRAIN (attached). It may be some peculiarity of the scheduling, but it's interesting to note that the non-zero excitation occurs about a second after the GPS time that Eric quotes. More interestingly, this does not seem to have fired off the proper bits in the state vector. H1:GDS-CALIB_STATE_VECTOR reports the value 456 for this period, which corresponds to the data being okay and gamma being okay, but no injection taking place. It also appears to mean that no calibration was taking place (bits 3 and 4 are off). I'm guessing I'm just misinterpreting the meaning of this.

I'd recommend, for future testing, a scale factor of 3 or 4, to make it *clearly* visible and give us a point of reference. We should also close the loop with the ODC / calibration folks to see if something was missed.
Images attached to this comment
peter.shawhan@LIGO.ORG - 11:24, Thursday 04 June 2015 (18859)
I can see that burst injections have generally been occurring on schedule.  The schedule file (which you currently have to log into h1hwinj1 to view) reads, in part:
...
1117411713 1 1 burst_test_
1117419952 1 1 burst_test_
1117427152 1 1 burst_test_
...
Compare that to the bit transitions in CAL-INJ_ODC:
pshawhan@> ./FrBitmaskTransitions -c H1:CAL-INJ_ODC_CHANNEL_OUT_DQ /archive/frames/ER7/raw/H1/H-H1_R-11174/*.gwf -m fffffff
1117400000.000000  0x00003f9e  Data starts
1117400027.625000  0x00003fde  6 on
1117400028.621093  0x00003fff  0 on, 5 on
1117406895.000000  0x00003e7f  7 off, 8 off
1117411714.394531  0x0000347f  9 off, 11 off
1117411714.480468  0x00003e7f  9 on, 11 on
1117419953.394531  0x0000347f  9 off, 11 off
1117419953.480468  0x00003e7f  9 on, 11 on
1117427153.394531  0x0000347f  9 off, 11 off
1117427153.480468  0x00003e7f  9 on, 11 on
...
Bits 9 and 11 go ON when a burst injection begins, and OFF when it ends. The offset of the start time is because the waveform file initially contains zeroes. To be specific, the first 22507 samples in the waveform file (= 1.37372 sec) are all exactly zero; then the next several samples are vanishingly small, e.g. 1e-74. At t = 1.39453 sec = 22848 samples into the waveform file, the strain amplitude of the waveform is about 1e-42. At the end time of the injection (according to the bitmask transition), the strain amplitude has dropped to about 1e-53, but I believe the filtering extends the waveform somewhat. To give an idea of how much: the end time is about 130 samples, or ~8 msec, after the strain amplitude drops below 1e-42.

Earlier in the record, bit 6 went on to indicate that the transient filter gain was OK, bit 5 went on to indicate that the transient filter state was OK, and bit 0 went on at the same time to indicate a good summary. Somewhat later, bit 8 went off when CW injections began, and bit 7 went off at the same time to indicate the presence of any hardware injection signal.

Note that tinj seems to have been stopped (or died) at about GPS 1117456410, according to the tinj.log file, and that's consistent with the lack of bit transitions in CAL-INJ_ODC after that time.
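
For anyone re-deriving these transitions by hand, the decoding is just an XOR between successive mask samples; a minimal Python sketch (not the FrBitmaskTransitions source, which isn't reproduced here):

    def bit_changes(prev, curr, nbits=16):
        # Which bits turned on/off between two successive bitmask samples
        diff = prev ^ curr
        on = [b for b in range(nbits) if (diff >> b) & 1 and (curr >> b) & 1]
        off = [b for b in range(nbits) if (diff >> b) & 1 and not (curr >> b) & 1]
        return on, off

    # Example from the listing above: 0x3e7f -> 0x347f
    print(bit_changes(0x3e7f, 0x347f))   # ([], [9, 11]) -- bits 9 and 11 off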
peter.shawhan@LIGO.ORG - 14:11, Thursday 04 June 2015 (18865)
Checking the other bitmask channels, there's a curious pattern in how the hardware injection bits from CAL-INJ_ODC are getting summarized in ODC-MASTER:
pshawhan@> ./FrBitmaskTransitions -c H1:ODC-MASTER_CHANNEL_OUT_DQ -m 7000000 /archive/frames/ER7/raw/H1/H-H1_R-11174/H-H1_R-11174[12345]*.gwf
1117410048.000000  0x07000000  Data starts
1117411714.391540  0x05000000  25 off
1117411714.391723  0x07000000  25 on
1117411714.391845  0x05000000  25 off
1117411714.475463  0x07000000  25 on
1117419953.391540  0x05000000  25 off
1117419953.391723  0x07000000  25 on
1117419953.391845  0x05000000  25 off
1117419953.475463  0x07000000  25 on
1117427153.391540  0x05000000  25 off
1117427153.391723  0x07000000  25 on
1117427153.391845  0x05000000  25 off
1117427153.475463  0x07000000  25 on
...
The same pattern continues for all 7 of the injections. We can see that there's a brief interval (0.000183 s = 3 samples at 16384 Hz) which is marked as a burst injection, then "no injection" for 2 samples, then back on for 0.083618 s = 1370 samples. Knowing how the sine-Gaussian ramps up from vanishingly small amplitude, I think this is real, in the sense that the first whisper of a nonzero cycle returns to effectively zero for a couple of samples before it grows enough to be consistently "on". It is also interesting to see that the interval ends slightly (5.0 ms) earlier in ODC-MASTER than it does in CAL-INJ_ODC. I suspect that this is OK: the model really runs at 16384 Hz, but CAL-INJ_ODC is a down-sampled record of the real bit activity. I also confirmed that the injection bit got carried over to GDS-CALIB_STATE_VECTOR:
pshawhan@> ./FrBitmaskTransitions -c H1:GDS-CALIB_STATE_VECTOR -m 1ff /archive/frames/ER7/hoft/H1/H-H1_HOFT_C00-11174/H-H1_HOFT_C00-11174[012]*.gwf 
1117401088.000000  0x000001c8  Data starts
1117408113.375000  0x000001cc  2 on
1117408119.375000  0x000001dd  0 on, 4 on
1117408143.750000  0x000001df  1 on
1117408494.812500  0x000001dd  1 off
1117408494.937500  0x000001c8  0 off, 2 off, 4 off
1117411714.375000  0x00000148  7 off
1117411714.500000  0x000001c8  7 on
1117419953.375000  0x00000148  7 off
1117419953.500000  0x000001c8  7 on
1117420916.625000  0x000001cc  2 on
1117420922.625000  0x000001dd  0 on, 4 on
1117421174.000000  0x000001df  1 on
1117424707.062500  0x000001dd  1 off
1117424954.437500  0x000001c8  0 off, 2 off, 4 off
1117426693.000000  0x000001cc  2 on
1117426699.000000  0x000001dd  0 on, 4 on
1117426833.625000  0x000001df  1 on
1117427153.375000  0x0000015f  7 off
1117427153.500000  0x000001df  7 on
...
Bit 7 indicates burst injections; it toggles (off, then back on) for two 16-Hz samples at the appropriate times.
peter.shawhan@LIGO.ORG - 08:36, Friday 05 June 2015 (18886)INJ
As of 7:00am PDT on Friday, June 5, there have been 13 burst hardware injections at LHO over the last two days.  All of these are represented by segments (each 1 second long) in the DQ segment database, and can be retrieved using a command like:
pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_BURST --gps-start-time 1117300000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
These time intervals also agree with the bits in the H1:GDS-CALIB_STATE_VECTOR channel in the H1_HOFT_C00 frame data on the Caltech cluster, except that there is a gap in the h(t) frame data (due to filters being updated and the h(t) process being restarted, as noted in an email from Maddie). Similar DB queries show no H1 CBC injection segments yet, but H1 CW injections are ongoing:
pshawhan@> ligolw_segment_query_dqsegdb --segment-url https://dqsegdb5.phy.syr.edu --query-segments --include-segments H1:ODC-INJECTION_CW --gps-start-time 1117400000 --gps-end-time 'lalapps_tconvert now' | ligolw_print -t segment -c start_time -c end_time -d ' '
1117406895 1117552800
By the way, by repeating that query I observed that the CW segment span was extended by 304 seconds about 130 seconds after the segment ended. Therefore, the latency of obtaining this information from the segment database ranges from 130 to 434 seconds, depending on when you query it (at least under current conditions). I also did similar checks at LLO, which revealed a bug in tinj -- see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=18517 .
LHO General
patrick.thomas@LIGO.ORG - posted 17:08, Wednesday 03 June 2015 (18835)
Ops Summary
10:03 Betsy, Kyle, Nutsinee into LVEA (Nutsinee to take pictures with cell phone of ITMX and ITMY spools, Betsy to retrieve equipment, Kyle to check on equipment)
10:13 Kyle out
10:14 Nutsinee out
10:15 Betsy out
14:22 Joe moving cabinets from OSB to staging building
~15:25 - 15:53 Greg moving cabinets from computer users room to staging building

I ran an initial alignment in the morning and made it to LSC_FF, but the range was unstable and the lock didn't last long. Had difficulty locking and staying locked for the remainder of the shift. Evan and Sheila are continuing to track down the cause.
LHO VE
bubba.gateley@LIGO.ORG - posted 16:53, Wednesday 03 June 2015 (18834)
Beam Tube Washing
Scott L. Ed P.

Results from 5/18/15 thru 5/21/15 posted here.

6/1/15
Cleaning crew returns and sets up after last week off.
Cleaned 40 meters ending 3.8 meters north of HNW-4-060.

6/2/15
Cleaned 49 meters ending 11.4 meters north of HNW-4-062.

6/3/15
Cleaned 33.5 meters ending at HNW-4-064. Removed lights and relocated equipment to next section north and started hanging lights. Safety meeting.
Non-image files attached to this report
H1 ISC
eleanor.king@LIGO.ORG - posted 16:20, Wednesday 03 June 2015 - last comment - 18:06, Wednesday 03 June 2015(18833)
PRM, SRM offloading to M1

Sheila, Elli

This morning we added filters to H1:SUS-PRM_M1_LOCK_L and H1:SUS-SRM_M1_LOCK_L to be used for offloading the M2_LOCK stage.  The motivation was that last night the M2 coil drivers were approaching saturation, and also to see if this might address the 2^16 glitches (alog 18815).  The new filters are zp0.01:0 in FM1, zp0:0.01 in FM2, and -90dB gain in FM3, and they are engaged with a gain of -0.2.  The filters are turned on by the ISC_DRMI guardian at OFFLOAD_DRMI.  We were having trouble locking this afternoon so we turned off these filters for a while, but they are back on again now as they didn't seem to be causing the locking difficulties.
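
Reading that shorthand as zero:pole frequencies in Hz (so zp0.01:0 is a zero at 0.01 Hz and a pole at 0 Hz), the FM1 shape can be sketched with scipy. This illustrates the assumed notation only, not the actual foton design or its gain normalization:

    import numpy as np
    from scipy import signal

    # Zero at 0.01 Hz, pole at 0 Hz: flat above 0.01 Hz, rising ~1/f below,
    # i.e. a DC boost that shifts low-frequency drive to the M1 stage.
    z = [-2 * np.pi * 0.01]
    p = [0.0]
    f = np.logspace(-4, 1, 500)
    w, resp = signal.freqs_zpk(z, p, 1.0, worN=2 * np.pi * f)
    mag_db = 20 * np.log10(np.abs(resp))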

Comments related to this report
sheila.dwyer@LIGO.ORG - 18:06, Wednesday 03 June 2015 (18838)

Plot of M2 master outs over last night's lock

Images attached to this comment
H1 CDS
eleanor.king@LIGO.ORG - posted 16:09, Wednesday 03 June 2015 - last comment - 09:06, Friday 05 June 2015(18832)
Camera exposure changed on ITMY SPOOL and ITMX spool cameras

Kiwamu, Elli

We are interested in whether we can see any changes in the spot size/location on the ITMs due to thermal drift during full lock.  We have decreased the exposure on the ITMY SPOOL and ITMX SPOOL cameras from 100000 microseconds to 1000 microseconds.  (These cameras are not used by any interferometer systems.)  We will take images with these cameras at 5-minute intervals for the next 24 hours.

Comments related to this report
eleanor.king@LIGO.ORG - 09:06, Friday 05 June 2015 (18889)

I've changed the interval at which the cameras automatically take images back to the nominal 60 minutes.

H1 INJ (INJ)
eric.thrane@LIGO.ORG - posted 15:53, Wednesday 03 June 2015 (18831)
CW injections restarted + CW injection procedure
Jeff, Corey, Betsy, Dave, Keith, Eric

Betsy:
After the reboots yesterday, it appears that the CW hardware injections have not been restarted.
1) Can you please restart them?
2) There is no indication that this injection is OFF when looking at the CAL_INJ_CONTROL.adl.  Dave B simply noticed that there was no longer an EXC showing on his CDS overview screen.

Eric:
I just restarted the injections at LHO (at GPS=1117406895). Here are the instructions:

log on to the h1hwinj
cd /data/scirun/O1/HardwareInjection/Details
bin/start_psinject
You will be prompted to give your name and your reason for starting the injections, which are saved to the psinject log. Injections begin 60s after you hit enter.

I'll add these instructions to the DCC document.
H1 SUS (CDS, ISC, SYS)
betsy.weaver@LIGO.ORG - posted 14:49, Wednesday 03 June 2015 (18825)
safe.snaps committed to svn

Since I touched a ton of safe.snaps when setting the SDF monitor switches and accepting newly commissioned settings over the last week, I committed them all to svn.  The safe.snaps I committed:

h1susitmy_safe.snap
h1susbs_safe.snap
h1susitmx_safe.snap
h1susmc2_safe.snap
h1suspr2_safe.snap
h1sussr2_safe.snap
h1sussrm_safe.snap
h1susetmx_safe.snap
h1susetmy_safe.snap
h1susim_safe.snap
h1sustmsy_safe.snap
h1lscaux_safe.snap
h1lsc_safe.snap
h1ascimc_safe.snap
h1asc_safe.snap
h1omc_safe.snap
h1psliss_safe.snap
h1iscex_safe.snap
h1iscey_safe.snap
h1alsex_safe.snap
h1alsey_safe.snap
H1 AOS (ISC)
betsy.weaver@LIGO.ORG - posted 14:45, Wednesday 03 June 2015 (18830)
SDF OMC channel accepted

This morning I found 1 channel in alarm on SDF and accepted it:

H1:OMC-ASC_DITHER_MASTER  setpoint was set to ON, but is now OFF.

Likely this is left-over from last night, and in fact should be off.

H1 INJ
jeffrey.kissel@LIGO.ORG - posted 13:58, Wednesday 03 June 2015 (18828)
Hardware Injection TRANSIENT bank turned back ON
J. Kissel

I don't know why, but the TRANSIENT filter bank had been turned OFF (i.e. the gain had been set to zero and the output had been turned OFF). This caused the ODC bit that reflects the system status to report red (H1:CAL-INJ_ODC_CHANNEL_LATCH). I've now turned ON the transient bank, and the status has turned green. I hope this is what the INJ team wants.

Images attached to this report
H1 ISC (DetChar, ISC)
gabriele.vajente@LIGO.ORG - posted 11:44, Wednesday 03 June 2015 (18827)
Coherences

Brute Force Coherence report for last night's lock can be found here:

https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1117341016/

The most interesting features are the coherence with SRCL in the 20-80 Hz band and with MICH in the 20-200 Hz band.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 11:23, Wednesday 03 June 2015 - last comment - 10:26, Thursday 11 June 2015(18826)
Storage Dry Box May Data
Posted are data for the two long-term storage dry boxes (DB1 & DB4) in use in the VPW. The measurement data look good, with no issues or problems noted. I will collect the data from the desiccant cabinet in the LVEA during the next maintenance window.
Non-image files attached to this report
Comments related to this report
jeffrey.bartlett@LIGO.ORG - 10:26, Thursday 11 June 2015 (19073)
This is the data for the 3IFO desiccant cabinet in the LVEA.
Non-image files attached to this comment
H1 PSL (DetChar, PSL)
edmond.merilh@LIGO.ORG - posted 09:32, Wednesday 03 June 2015 (18824)
PSL Weekly Report

These are the past ten day trends.

Images attached to this report
H1 ISC (SUS)
daniel.hoak@LIGO.ORG - posted 08:19, Wednesday 03 June 2015 - last comment - 18:35, Friday 12 June 2015(18823)
bounce mode Q for ITMX

Dan, Travis

Tonight during our long lock we measured the decay time constant of the ITMX bounce mode.  At 10:10 UTC we set the intent bit to "I solemnly swear I am up to no good" and flipped the sign on the ITMX_M0_DARM_DAMP_V filter bank and let the bounce mode ring up until it was about 3e-14 m/rtHz in the DARM spectrum.  Then we zeroed the damping gain and let the mode slowly decay over the next few hours.

We measured the mode's Q by fitting the decay curve in two different datasets.  The first dataset is the 16 Hz-sampled output of Sheila's new RMS monitors; the ITMX bandpass filter is a 4th-order Butterworth with corner frequencies of 9.83 and 9.87 Hz (the mode frequency is 9.848 Hz +/- 0.001 Hz).  This data was lowpassed at 1 Hz and fit with an exponential curve.

For the second dataset I followed Koji's demodulation recipe from the OMC 'beacon' measurement.  I collected 20 seconds of DELTAL_EXTERNAL_DQ data every 200 seconds; bandpassed between 9 and 12 Hz, demodulated at 9.848 Hz, and lowpassed at 2 Hz; and collected the median value of the sum of the squares of the demod products.  Some data were discarded at the edges of each 20-second segment to avoid filter transients.  These every-200-seconds datapoints were fit with an exponential curve.
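
A minimal sketch of that recipe for one 20-second segment, assuming scipy and a timeseries x already resampled to fs = 256 Hz (filter orders here are guesses; the attached code is authoritative):

    import numpy as np
    from scipy import signal

    def demod_power(x, fs, f0, edge=2.0):
        # Bandpass 9-12 Hz, demodulate at f0, lowpass at 2 Hz, then take the
        # median of I^2 + Q^2 away from the segment edges (filter transients)
        bp = signal.butter(4, [9, 12], btype="bandpass", fs=fs, output="sos")
        xb = signal.sosfilt(bp, x)
        lp = signal.butter(4, 2, fs=fs, output="sos")
        t = np.arange(len(x)) / fs
        i = signal.sosfilt(lp, xb * np.cos(2 * np.pi * f0 * t))
        q = signal.sosfilt(lp, xb * np.sin(2 * np.pi * f0 * t))
        n0 = int(edge * fs)
        return np.median(i[n0:-n0] ** 2 + q[n0:-n0] ** 2)

    # The demodulated power decays as exp(-2*t/tau); fitting tau gives
    # Q = pi * f0 * tau.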

Results attached; the two methods give different results for Q:

RMS channel: 594,000

Demodulated DARM_ERR: 402,000

I fiddled with the data collection and filtering parameters for both fits, but the results were robust.  When varying parameters for each method, the results for Q were repeatable to within +/- 2,000, which gives some sense of the lower limit on the uncertainty of the measurement.  (The discrepancy between the two methods gives a sense of the upper limit...)  Given a choice between the two, I think I trust the RMS channel more; the demod path has more moving parts and there could be a subtlety in the filtering that I am overlooking.  The code is attached.

Images attached to this report
Non-image files attached to this report
Comments related to this report
daniel.hoak@LIGO.ORG - 01:19, Thursday 04 June 2015 (18843)

I figured out what was going wrong with the demod measurement: not enough low-passing before the decimation step, so the violin modes at ~510Hz were beating against the 256Hz sample rate (see the sketch after the numbers below).  With another layer of anti-aliasing, the demod results are in very good agreement with the RMS channel:

RMS channel: 594,400

Demodulated DARM_ERR: 593,800
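
A sketch of the fix, assuming a 16384 Hz input array x; scipy's decimate applies an anti-aliasing filter at each stage, which is the low-passing that was missing:

    from scipy import signal

    # 16384 Hz -> 256 Hz in two stages of 8. Without anti-aliasing, the
    # ~510 Hz violin modes alias down to |510 - 2*256| = 2 Hz and survive
    # the 2 Hz lowpass in the demod chain.
    x256 = signal.decimate(signal.decimate(x, 8), 8)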

Images attached to this comment
norna.robertson@LIGO.ORG - 09:39, Friday 05 June 2015 (18890)
To see what we might expect, I took the current GWINC model of suspension thermal noise and did the following.
1) Removed the horizontal thermal noise so I was only plotting vertical.
2) Updated the maraging steel phi to reflect recent  measurement (LLO alog 16740) of Q of UIM blade internal mode of 4 x 10^4. (It is phi of 10^-4, Q 10^4 in the current GWINC). I did this to give better estimate of the vertical noise from higher up the chain.
3) Plotted only around the thermal noise peak and used 1 million points to be sure I resolved it.

The resulting curve is attached. Q looks to be approximately 100k, which is less than what was reported in this log. That is encouraging to me. I know the GWINC model is not quite right -- it doesn't reflect the tapered shape and FEA results. However, to see a Q in excess of what we predicted in that model is definitely in the right direction.
Images attached to this comment
angus.bell@LIGO.ORG - 08:26, Friday 12 June 2015 (19092)DetChar, SUS
Here we take the Mathematica model with the parameter set 20150211TMproduction and look at varying some of the loss parameters to see how the model compares with these measurements. The thermal noise amplitude in the vertical for the vertical bounce mode is tabulated around the resonance, and we take the full width at 1/√2 height to calculate the Q (equivalent to ½ height for a power spectrum). With the recently measured mechanical loss value for maraging steel blade springs of 2.4e-5, the Mathematica model predicts a Q of 430,000. This is a somewhat lower Q than the measurement here, but at this level the loss of the wires and the silica is starting to have an effect, and so small differences between the model and reality could show up. Turning off the loss in the blade springs altogether only takes the Q to 550,000, so other losses are sharing equally in this regime. The attached Matlab figures show the mechanical loss factor of maraging steel versus predicted bounce mode Q and against total loss, plus the resonance as a function of loss.
Angus Giles Ken & Borja
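
A minimal sketch of that width-based estimate, assuming sampled frequency and amplitude arrays around the peak (this is not the Mathematica code itself):

    import numpy as np

    def q_from_peak(f, amp):
        # Q = f0 / (full width at 1/sqrt(2) of the peak amplitude),
        # equivalent to the half-power width
        i0 = np.argmax(amp)
        above = np.flatnonzero(amp >= amp[i0] / np.sqrt(2.0))
        fwhm = f[above[-1]] - f[above[0]]
        return f[i0] / fwhm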
Images attached to this comment
Non-image files attached to this comment
daniel.hoak@LIGO.ORG - 18:35, Friday 12 June 2015 (19107)SUS

Since there has been some modeling afoot, I wanted to post the statistical errors from the fits above, to give a sense of the [statistical] precision of these measurements.  The best-fit Q values and 67% confidence intervals for the two bounce-mode measurements are:

RMS channel: 594,410  +/-  26

Demodulated DARM_ERR: 594,375  +/-  1590

The data for the measurements are attached.  Note that this is just the statistical error of the fit -- I am not sure what systematics are present that could bias the measurement in one direction or another.  For example, we did not disable the top-stage local damping on ITMX during this measurement, only the DARM_CTRL --> M0 damping that is bandpassed around the bounce mode.  There is also optical lever feedback to L2 in pitch, and ASC feedback to L2 in pitch and yaw from the TRX QPDs (although this is very low bandwidth).  In principle this feedback could act to increase or decrease the observed Q of the mode, although the drive at the bounce mode frequency is probably very small.

Non-image files attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 08:02, Wednesday 03 June 2015 (18817)
Owl Shift Summary

Times in UTC

7:00 Came in to find the IFO had been locked to LSC FF for several hours already, left it as is

9:57 Seeing that Livingston had dropped lock, switched Intent Bit to commissioning, and with a suggestion/assistance from Dan H., undamped the bounce mode of  ITMX for a ringdown measurement of its Q.  Gain of H1SUS-ITMX_M0_DARM_DAMP_V set to 0.0 (from 0.3) for ringdown.  Will need to set back to actively damp the mode once Dan is satisfied

10:58 Switched Intent Bit back to undisturbed

14:11 Switched Intent Bit to commissioning.  Edited LSC_FF guardian to bring laser power back to 24W.  Switched Intent Bit back to undisturbed.

14:35 Lockloss (Richard volunteered to take blame)

14:39 Reduced power back to 16W.  Initial alignment.

15:00 Handoff to Patrick.

Non-image files attached to this report
LHO FMCS
bubba.gateley@LIGO.ORG - posted 08:00, Wednesday 03 June 2015 (18821)
RO System
King Soft Water was on site yesterday and replaced 4 of the 9 membranes in the R.O. system, the other 5 are on order. This seemed to make a noticeable difference in that the water system actually ran all night without tripping. The water was also tested yesterday and found to be in very good condition.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 07:58, Wednesday 03 June 2015 (18820)
CDS model and DAQ restart report, Tuesday 2nd June 2015

model restarts logged for Tue 02/Jun/2015
2015_06_02 04:40 h1fw0*
2015_06_02 06:49 h1fw0*
2015_06_02 10:57 h1iopsush34
2015_06_02 10:59 h1susmc2
2015_06_02 10:59 h1suspr2
2015_06_02 10:59 h1sussr2
2015_06_02 11:31 h1calcs

* = two unexpected fw0 restarts. Restart of h1sush34 for re-calibration of 18bit-DACs. New calcs model.

H1 ISC (DetChar)
daniel.hoak@LIGO.ORG - posted 02:44, Wednesday 03 June 2015 - last comment - 06:15, Thursday 04 June 2015(18815)
glitches in PRCL, SRCL

Dan, Travis

Around 06:50 UTC we started to observe frequent glitching in the PRCL and SRCL loops that generated a lot of nonstationary noise in DARM between 20 and 100 Hz.  The glitches occur several times a minute; it's been two hours of more or less the same behavior, and counting.  Our range has dropped by a couple of megaparsecs.  The first plot has a spectrogram of DARM compared to SRCL that shows a burst of excess noise at 08:08:20 UTC.

The noise shows up in POP_A_RF45_I and POP_A_RF9_I, but not so much in the Q phases; see the second plot.  (MICH is POP_A_RF45_Q; PRCL and SRCL come from the I-phases.)  A quick look at the PRM and SRM coil outputs doesn't reveal a consistent DAC value at the time of the glitches, so maybe DAC glitching isn't the problem; see the third plot.  The three optical levers that we're using in the corner station (ITMX, ITMY, SR3, all in pitch) don't look any different now than they did before 07:00 UTC.

Images attached to this report
Comments related to this report
joshua.smith@LIGO.ORG - 04:57, Wednesday 03 June 2015 (18816)DetChar, ISC, SUS

I'm pretty sure these come from PRM, but one stage higher than you looked: M2. Attached is a lineup of PRM M2 UR and the glitches in CAL DELTAL. Looks like 2^16 = 65536-count DAC glitches to me. I'll check some other times and channels, but wanted to report it now.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 14:12, Wednesday 03 June 2015 (18829)
After a lot of followup by many detchar people (TJ, Laura, Duncan, and Josh from a plane over the Atlantic), we haven't really been able to make DAC glitches work as an explanation for these glitches. A number of channels (especially PRM M2 and M3) start crossing +/- 2^16 around 7 UTC, when the glitches begin. Some of these glitches line up with the crossings, but there are plenty of glitches that go unexplained, and plenty of crossings that don't correspond to a glitch. It's possible that DAC glitches are part but not all of the explanation. We'll be following this up further since one of these glitches corresponds to an outlier in the burst search.

Duncan does report that no software saturations (channels hitting their software limit, as with the tidal drive yesterday) were found during this time, so we can rule those out.
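
For anyone repeating the crossing counts, a minimal sketch of flagging +/- 2^16 crossings in a drive channel, assuming numpy arrays of samples and times (not the actual detchar tooling):

    import numpy as np

    def crossing_times(x, t, level=2**16):
        # Sample times where |x| crosses the +/- level boundary in either direction
        above = (np.abs(x) >= level).astype(int)
        idx = np.flatnonzero(np.diff(above) != 0)
        return t[idx + 1]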
joshua.smith@LIGO.ORG - 06:15, Thursday 04 June 2015 (18849)DetChar, ISC, SUS

Andy, Laura, TJ, Duncan, Josh: To add to what Andy said, here are a few more plots on the subject, 

1) The worst of the glitching does coincide with SRM and PRM channels being close to 2^16 (only positive values are plotted here; negative values are similar). Of course this is pretty weak evidence, as lots of things drift.

2) A histogram of the number of glitches versus the DAC value of the PRM M2 UR channel has a small line at 2^16. Almost not significant. Statistically, with the hveto algorithm, we find only a weak correlation with +/-2^16 crossings in the PRM M2 suspensions. Again, all very weak.  

3) As you all reported, the glitches are really strong in SRCL and PRCL, ASC REFL channels, etc. hveto would veto ~60% of yesterday's glitches using SRCL and PRCL, as shown in this time-frequency plot. But the rest of the glitches would still get through and hurt the searches. 

So we haven't found the right suspension, or there is something more complicated going on. Sorry we don't have a smoking gun - we'll keep looking. 

Images attached to this comment
H1 General
jeffrey.bartlett@LIGO.ORG - posted 22:17, Tuesday 02 June 2015 - last comment - 07:59, Wednesday 03 June 2015(18812)
Display LLO & LHO Inspiral Range

For unknown reasons, dmtviewer on the Mac Mini computers cannot get data from Livingston.

DMTviewer is used to view data produced by the DMT systems for both Livingston and Hanford observatories. It is used in the control room to display recent seismic data and inspiral range data on projector or TV monitors. The dmtviewer program may also be run on a control room workstation or operator workstation.

To display the inspiral range on a workstation:

  1. Open a terminal window and type the commands shown in the following steps
  2. cd /ligo/home/controls/FOMs
  3. kinit albert.einstein (Use your own name!)
  4. (Enter your ligo.org password)
  5. export DMTWEBSERVER=marble.ligo-wa.caltech.edu:443/dmtview/LHO,llodmt.ligo-la.caltech.edu/dmtview/LLO
  6. dmtviewer HL-Range2.xml
  7. In the DMT viewer, click on "Run" near the bottom of the display.
  8. In the DMT viewer, click on the "Graphics" tab near the top of the display.
  9. You can hide the option panel by clicking the small box with the left arrow directly under the "Monitors" tab.
  10. When you are done using dmtviewer, you should enter the command kdestroy to dispose of your kerberos ticket.

Note: These procedures are found on the CDS Wiki. Search for "DMTviewer" and use the DMTviewer article.

Comments related to this report
david.barker@LIGO.ORG - 07:59, Wednesday 03 June 2015 (18822)

L1 range can no longer be displayed on Mac OS machines. We are working on replacing the FOM Mac Mini computers with Ubuntu NUC computers to resolve this.

H1 SEI
jim.warner@LIGO.ORG - posted 15:57, Tuesday 02 June 2015 - last comment - 07:33, Wednesday 03 June 2015(18800)
Isolation filters for ETMY, installed May 7th

This is work I completed a while ago (the 7th of May, if Matlab is to be believed), but I wanted to put this in for comparison with my log 18453. These are the new (as of May 7th) and old isolation filters for ETMY. ETMs are definitely harder to do than ITMs, but the loops should be very similar now on all BSC chambers.

Non-image files attached to this report
Comments related to this report
richard.mittleman@LIGO.ORG - 07:33, Wednesday 03 June 2015 (18819)

That is a lot of gain peaking in X and Y (7 and 4.6 for Stage 1, times 4.2 for Stage 2, so ~30 and ~20 overall); worth remembering if there is a problem later on.

Displaying reports 64801-64820 of 83146.Go to page Start 3237 3238 3239 3240 3241 3242 3243 3244 3245 End