H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 20:44, Monday 31 October 2016 (31049)
Calculation of LSC feedforward filters

Many people over the years have calculated what kind of filter one should create for LSC feedforward use, but the most commonly referenced note on the calculation is an Initial Virgo note by Bas and others (Lisa links to VIR-050A-08 in LLO alog 13242, and I re-attach it here).  I have rewritten the calculation in Advanced LIGO terms in T1600504, which I also attach here. This note does not (yet) cover our practice of summing two feedforward filters for the same degree of freedom; we use this for SRCL so that we can separately adjust the gains of the low-frequency and high-frequency parts of the transfer function, but it could alternatively be used to sum in the iterative "tweak" filter. A rough sketch of the filter construction follows.
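The core of the calculation is that the ideal feedforward filter is minus the measured noise coupling divided by the actuation path. Below is a minimal sketch with stand-in (not measured) transfer functions, including the two-filter summing described above; the coupling, actuation, and 30 Hz crossover used here are purely illustrative.

import numpy as np

f = np.logspace(0, 3, 500)                     # frequency vector, 1 Hz - 1 kHz
s = 2j * np.pi * f                             # Laplace variable on the jw axis

# Stand-in (hypothetical) transfer functions:
coupling = 1e-3 / (1 + s / (2 * np.pi * 50))   # noise coupling to DARM
actuation = 1.0 / s**2                         # actuation path to DARM

# Ideal feedforward filter: cancel the coupling through the actuator.
ff_ideal = -coupling / actuation

# Summing two filters for the same DOF with independently adjustable gains,
# split at an illustrative 30 Hz crossover (as done for SRCL):
g_low, g_high = 1.0, 1.0                       # tuned separately in practice
lp = 1 / (1 + s / (2 * np.pi * 30))
ff_total = g_low * ff_ideal * lp + g_high * ff_ideal * (1 - lp)

With both gains at unity the sum reduces to the ideal filter; adjusting g_low and g_high separately changes the low- and high-frequency parts of the cancellation independently.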

Non-image files attached to this report
H1 TCS (ISC, OpsInfo, TCS)
kiwamu.izumi@LIGO.ORG - posted 18:22, Monday 31 October 2016 (31048)
TCS long term measurement; raising CO2X seems to improve sensitivity

Jim W, Cheryl, Corey, Nutsinee, Kiwamu,

Over the weekend Jim, Cheryl, Corey and Nutsinee kindly tested various CO2 configurations for me. The data so far indicate that raising the CO2X power (with CO2Y kept at 0 W) improves the sensitivity.

I would like to see some more data points with the CO2X laser higher than 0.4 W while maintaining CO2Y at 0 W.

Some details

Attached are DARM spectra from several different times. Below is a list of the CO2 settings that people tried during the weekend.

        Time               CO2X power [W]   CO2Y power [W]   Goodness
ref 0   11:20 29/Oct/2016  0.2              0                OK
ref 1   12:20 29/Oct/2016  0.2              0.4              bad
ref 2   15:00 29/Oct/2016  0.4              0                good
ref 3   18:00 29/Oct/2016  0.4              0                good
ref 4   06:30 30/Oct/2016  0.4              0.2              OK
ref 5   18:00 30/Oct/2016  0.4              0                good
ref 6   10:00 31/Oct/2016  0.3              0.1              bad-ish
ref 7   11:30 31/Oct/2016  0.4              0.2              bad-ish
ref 8   13:00 31/Oct/2016  0.5              0.3              bad-ish
ref 9   14:00 31/Oct/2016  0.6              0.4              bad-ish

As one can see, a higher CO2X power seems to improve the noise curve. We should try putting even more power on CO2X (i.e., more than 0.4 W) next time.

Interestingly, the peak at 3340 Hz (an SRM resonance, 27488) becomes taller as the TCS tuning gets worse. This is consistent with the theory that the high-frequency part of the SRCL coupling is due to HOMs in the SRC. We might be able to use this peak to evaluate the goodness of the CO2 tunings.

Images attached to this report
H1 PSL
keita.kawabe@LIGO.ORG - posted 17:34, Monday 31 October 2016 (31047)
PMC locking electronics further change (Keita, Matt, Daniel)

In addition to what is already in the integration issue tracker (https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=6500), the following changes need to be made. Both are in T0900577.

ilspmc_servo3.pdf (both for PMC and ILS):

After the demod, the 5 MHz corner of the Sallen-Key filter is unnecessarily high, and the AD797 open-loop gain is only about 12 dB at 5 MHz, so the filter does not work as intended.

We'll add a passive 5 MHz pole, and move the Sallen-Key corner down to 170 kHz or so, by:

ilspmc_fieldbox4.pdf (both for PMC and ILS):

The DAC output for the ramp signal is permanently connected to the HV amp, which has a 200 Hz pole and a factor of 50 gain.

If the DAC output noise is 1 uV/rtHz (this depends on the DAC output, but the digital output is usually zero), the DAC noise seen in the HV monitor (which has a 1/50 attenuation) will be 1 uV/rtHz at DC, rolling off above 200 Hz; a rough numerical check is below. See attached for the PMC HV monitor signal in lock.
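As a sketch only (assuming a flat 1 uV/rtHz DAC output noise and the pole/gain values quoted above):

import numpy as np

f = np.logspace(0, 4, 200)                    # frequency [Hz]
dac_noise = 1e-6                              # [V/rtHz], assumed flat

hv_path = 50.0 / (1 + 1j * f / 200.0)         # HV amp: gain 50, 200 Hz pole
monitor = np.abs(hv_path) * dac_noise / 50.0  # HV monitor: 1/50 attenuation

# ~1 uV/rtHz at the low-frequency end, rolling off above 200 Hz
print(monitor[0], monitor[-1])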

We'll add zp = 100:10 by stuffing the unused whitening pattern on the board:

These are added to the existing FRS, which already has similar modifications for the HV monitor and mixer readback.

Note that only the HV monitor modification for the PMC board has been done; we intend to do the rest for the PMC as well as the ILS.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 17:09, Monday 31 October 2016 (31043)
Ops EVE Shift Transition



TITLE: 10/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
    Wind: 36mph Gusts, 26mph 5min avg
    Primary useism: 0.15 μm/s
    Secondary useism: 0.30 μm/s
QUICK SUMMARY:

H1 CDS
david.barker@LIGO.ORG - posted 16:37, Monday 31 October 2016 (31046)
TCS ITMY chiller water flow reduction does not look like a digital problem

The TCS ITMY chiller water flow dropped several times, starting at 18:00 PDT Sunday the 30th and continuing through 08:00 PDT Monday the 31st. The attached left-hand plot shows a one-day minute-trend; the right-hand plot is a zoom into the last sharp drop, at 07:01 PDT, using second trends. The drop looks real and not like a digital problem. It has not reappeared since 8 am this morning.

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:16, Monday 31 October 2016 (31045)
CP3 Manual Overfill at 22:50 UTC

Took 4 minutes 42 seconds to overfill CP3 with LLCV bypass valve 1/2 turn open.

H1 CAL (CAL, CDS, DetChar)
jeffrey.kissel@LIGO.ORG - posted 16:13, Monday 31 October 2016 - last comment - 12:42, Wednesday 02 November 2016(31040)
Example 1-hour Stretches of CAL Line Front-end-Calculated Coherence and Uncertainty
J. Kissel

We're exploring the functionality of the new features of the front-end calibration that calculate the coherence, and the subsequent uncertainty, of the transfer function between each CAL line source and DARM. As such, I plot three one-hour data stretches from different lock stretches in the past 24 hours.
    Data Set A: 2016-10-31 02:30 UTC
    Data Set B: 2016-10-31 07:00 UTC
    Data Set C: 2016-10-31 10:00 UTC

Note the mapping between channel names and the line each is analyzing:
H1:CAL-CS_TDEP_..._[COHERENCE/UNCERTAINTY]   Frequency [Hz]   Used in calculating
           DARM_LINE1                          37.3           kappa_TST              (ESD actuation strength)
           PCAL_LINE1                          36.7           kappa_TST & kappa_PU   (ESD and PUM/UIM act. strength)
           PCAL_LINE2                         331.9           kappa_C & f_C          (optical gain and cavity pole)
           SUS_LINE1                           35.9           kappa_PU               (PUM/UIM act. strength)
For details, refer to P1600063 and T1500377.

Recall also that our goal is to keep the uncertainty in the time-dependent parameters (which are calculated from combinations of these lines) at around ~0.3-5%, such that these uncertainties remain non-dominant (lines are strong enough) but non-negligible (not excessively strong). See the example total response function uncertainty budget in LHO aLOG 26889 for the level at which the time-dependent parameter estimation uncertainty impacts the total uncertainty. That means the uncertainty in each line estimate should be at the 0.1-0.3% level if possible. So, we can use these data sets to tune the amplitudes of the CAL lines, balancing uncertainty needs against sensitivity pollution. (A sketch of the coherence-to-uncertainty conversion is below.)
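For reference, here is a minimal sketch of how coherence maps to relative transfer-function uncertainty, using the standard estimate for N averages (the front-end implementation may differ in detail; the coherence values below are illustrative):

import numpy as np

def tf_relative_uncertainty(coherence, n_avg):
    # Standard relative uncertainty of a transfer-function estimate from
    # magnitude-squared coherence C and n_avg averages:
    #   sigma_rel = sqrt((1 - C) / (2 * n_avg * C))
    c = np.asarray(coherence, dtype=float)
    return np.sqrt((1.0 - c) / (2.0 * n_avg * c))

# Example: with 13 averages, a coherence of 0.999 gives ~0.6% uncertainty;
# reaching the 0.1% level requires a coherence of ~0.99997.
print(tf_relative_uncertainty(0.999, 13))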

There are several interesting things. It's best to look at the data sets in the order presented below: B, then C, then A.
In data set B -- 
- this is what we should expect if we manage to get a stable, O1-style interferometer in the next week or so for ER10 and O2. 
- With the current amplitudes, the uncertainty on the ~30 Hz lines hovers around 0.1% -- so we can probably reduce the amplitude of these lines by a factor of a few if the sensitivity stays this high.
- The 331 Hz line amplitude should probably be increased by a factor of a few.

In data set C -- (this is during the ghoulish lock stretch)
- One can see that when the data go bad, they go bad in weird, discrete chunks. The width of these chunks is almost exactly 130 sec, which I suspect is a digital artifact of the 13 averages of 10 sec FFTs (13 x 10 s = 130 s). The sensitivity was popping, whistling, and saturating SUS left and right during this stretch, on a much quicker timescale than 100s of seconds.

In data set A --
- This is an OK sensitivity stretch. The good thing is that the coherence/uncertainty appears to be independent of any fast glitching or of the overall sensitivity, as long as we stay in the 60-75 Mpc range.
- Interestingly, there's either a data dropout or a terrible time period during this stretch (as indicated by the BNS range going to 0) -- but it's only ~120 sec. If it's a data dropout -- good, the calculation is robust against whatever happens in DMT land. If it's a period of glitchy interferometer, it's very peculiar that it doesn't affect the uncertainty calculation, unlike data set C.

Based on these data sets, I think it will be safe to set the uncertainty threshold at 1%: if the uncertainty exceeds that threshold, the associated parameter value gets dumped from the calculation of the average that is applied to h(t).

So, in summary -- it looks like the calculations are working, and the calculated values roughly make sense when the IFO is calm. There are a few suspicious things that we need to iron out for times when the IFO isn't feeling so well, but I think we're very much ready to use these coherence calculations as a viable veto for the time-dependent parameter calculations.
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 12:42, Wednesday 02 November 2016 (31128)CAL

Jeff K, Darkhan T,

We investigated further the 130 s drop in coherence in data set C (see LHO aLOG 31040 above).

This drop was probably caused by one or more bad data points (a "glitch") at the beginning of the drop (when the first glitchy data point entered the 130 s averaging buffer). A quick look at the kappas calculated in PcalMon from 10 s FFTs during the 600 s around the time of the glitch indicates that outliers in the κTST and κPU values are found in only one of the 10 s intervals. This interval is GPS [1161968880, 1161968890) (see attachment 1).

A look at slow channels indicates that the glitch produced impulse responses lasting just under 10 s before the 0.1 Hz low-pass filter and roughly 30 s after it, in DARM_ERR demodulated at 35.9 Hz (see the upper panes in attachment 2). The start of the glitch is at ~1910 s (GPS 1161968887). In the coherence calculation block of the CAL-CS model (attachments 3 and 4), it can be seen that the glitch lasts 20-30 s in the EPICS records preceding the 130 s averaging blocks (BUFFER_AND_AVERAGE), but it reduces the calculated coherence value for a full 130 s (see attachment 5).

If we use coherence values from the CAL-CS front-end model as a threshold for "bad kappas", this kind of glitch will result in unnecessarily marking 130 s of kappas as "bad". GDS median kappas should not be sensitive to such short glitches; however, the CAL-CS front-end κTST was affected for ~250 s (the front-end kappas are low-passed with a 1/128 Hz IIR filter) (see attachment 5).

A potential (not yet tested) solution would be to replace the BUFFER_AND_AVERAGE (running average) script with a running median. A similar script could be used for averaging the front-end kappas, which would also reduce discrepancies between the GDS and front-end kappas. (A toy comparison of the two is sketched below.)
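A toy comparison (not the site code; the kappa series, outlier, and window are illustrative) showing how a single outlier biases a 13-sample running average for the full 130 s while a running median is unaffected:

import numpy as np

kappas = np.ones(60)       # one kappa value per 10 s FFT interval
kappas[30] = 5.0           # a single glitchy interval, as in data set C

window = 13                # 13 x 10 s = the 130 s buffer
running_mean = np.convolve(kappas, np.ones(window) / window, mode="valid")
running_median = np.array(
    [np.median(kappas[i:i + window]) for i in range(len(kappas) - window + 1)]
)

# The mean is biased in every window containing the outlier (130 s of "bad
# kappas"); the median stays at 1.0 throughout.
print(running_mean.max(), running_median.max())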

Images attached to this comment
H1 CDS (DAQ, SUS)
david.barker@LIGO.ORG - posted 16:01, Monday 31 October 2016 (31042)
Frame size and cpu usage changes resulting from last Tuesday's SUS model changes

Jeff, Dave:

Following Jeff's SUS changes last Tuesday (25 October), we have accrued enough data to report on the changes to the frame size and the SUS front-end CPU usage.

Attached is a two-week trend of the full frame size. This is a compressed frame, so its size varies with the state of the interferometer. Estimating the average by eye, the 64-second full-frame size has decreased from a mean value of 1.77 GB to 1.67 GB. This is a 6% disk/tape saving. It equates to a per-second data-rate decrease from 27.7 MB/s to 26.1 MB/s (a decrease of 1.6 MB/s); a quick check of the arithmetic is below.
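A quick check of the quoted rates (assuming decimal GB and MB):

for gb in (1.77, 1.67):
    print(f"{gb} GB / 64 s = {gb * 1000 / 64:.1f} MB/s")   # 27.7 and 26.1 MB/s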

The removal of filters and the reduction of DAQ channel decimation also improved the CPU processing time of most SUS models. The attached plot shows the 16 corner-station sus and susaux models; they typically report an improvement in CPU time of 10%.

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 15:56, Monday 31 October 2016 (31041)
Shift Summary - Day
TITLE: 10/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
Locked early in the shift and then instructed to stay in DOWN due to weather.
LOG:

15:49 Jeff B to mid stations to check on 3IFO items

16:03 Cleaning crew into highbay area before the LVEA to get some C3 covers.

16:07 Intention bit set to UNDISTURBED

16:13 Intention bit reverted to COMMISSIONING mysteriously (it seems)

16:15 Intention bit reset to UNDISTURBED

16:25 Jeff B to EX TMS lab

16:54 Jeff B is back

17:50 IFO will be in DOWN for the duration of the shift due to high winds

18:25 Travis, Darkhan and Yuki to EY for PCal calibrations

19:21 Sheila out to the LVEA to plug in power monitors for the ESD

20:11 Travis and co. back

21:14 Carlos to EY to take image of the HWS computer

21:25 BRS disabled at BOTH end stations now.

21:27 Robert into PSL enclosure

21:37 Peter out to the LVEA to trace out a cable

21:38 Sheila to end stations to plug in power monitors for the ESD

21:42 Automated phone call about a malfunction in a Hanford alarm test. Additional testing will be done between 2:45 and 3:45 PM

21:53 Peter is back

22:00 Fil out to LVEA North bay to look for a rack stand

22:27 Sheila back.
H1 IOO (IOO, ISC)
evan.hall@LIGO.ORG - posted 15:21, Monday 31 October 2016 (31039)
IMC REFL DC whitening and calibration

I added two stages of whitening to the IMC REFL DC readback, so we are no longer ADC noise limited anywhere.

I also added a ct2mW filter based on the old calibration described here.

H1 General (FMP)
betsy.weaver@LIGO.ORG - posted 14:52, Monday 31 October 2016 (31038)
C&B Desiccant Cabinet RH trends for the last month

Attached is the last 2 months of relative humidity data from the desiccant cabinets in the VPW.  Apparently the DB4 RH logger quit recording data sometime last month, so it is not attached.  I'm looking into a replacement logger.  If it's any consolation, when I've spot checked the RH reader periodically over the last month it has always been under 1%.

Non-image files attached to this report
H1 PEM
jeffrey.bartlett@LIGO.ORG - posted 14:48, Monday 31 October 2016 (31037)
PSL Enclosure Elevated Dust Counts
Peter K., Jeff B.   

   Today there have been elevated dust counts in the PSL laser room and in the anti-room. Trends over the past few weeks show several spikes of high counts in both rooms.

   The alarm level settings for the Anti-Room (DM 102) are set for clean 1000 (minor alarm 700 counts, major alarm 1000 counts). The PSL enclosure is set for a minor alarm at 140 counts and a major alarm at 200 counts.

   We are still looking into the cause of these spikes. 

P.S. Ops - Please let me know if you are getting dust monitor alarms and what the wind is doing at the time of the alarms.    
Images attached to this report
H1 General (SEI)
edmond.merilh@LIGO.ORG - posted 14:12, Monday 31 October 2016 (31036)
ISI Config Changed

I changed to WINDY_NO_BRSX for PCal work at EX. I forgot to switch the BRS off at EY for the earlier work that was done there, but it appears to be OK down there.

H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 14:07, Monday 31 October 2016 (31035)
1083.7 Hz CAL Line Still Resolved amid New ~1.1 kHz Non-Stationary Noise
J. Kissel, E. Hall

Worried that the 1083.7 Hz calibration line on PCAL EY -- the "standard candle" for the >1 kHz roaming calibration line on PCAL EX -- is being swamped by the new non-stationary noise around 1.1 kHz, I took a closer look. I've compared the DARM ASD (0.01 Hz BW, 10 avgs) and the coherence between PCALY RX PD and DARM at several different times:
     2016-01-30 22:22:27 UTC -- Canonical O1 Reference Time, with no such new noise (Shown in YELLOW)
     2016-10-31 07:00:00 UTC -- Beginning of last night's quiet, long lock stretch (Shown in PURPLE)
     2016-10-31 10:00:00 UTC -- In the middle of the same long lock stretch, when the range degraded a little bit (Shown in BLUE)
     2016-10-31 17:30:00 UTC -- Toward the end of the ghoulish lock stretch with high microseism, high wind, and a 5.4 [mag] earthquake. (Shown in NAVY)

One can see that, regardless of the non-stationary noise, we can still get ample coherence between PCAL and DARM with 10 averages and a 100 [sec] FFT. Recall that this 1083 Hz line is *not* used in any of the calculations of the time-dependent DARM loop parameters, so we're free to use any bandwidth and integration time we wish to get the best coherence / SNR / uncertainty. As such, we'll just leave the line where it is.

*phew*!
Images attached to this report
H1 INJ
evan.goetz@LIGO.ORG - posted 13:24, Monday 31 October 2016 - last comment - 13:26, Monday 31 October 2016(31033)
Injection successful after restarting Guardian node
Evan G., Dave B., Chris B., Thomas S.

After completely restarting the INJ_TRANS Guardian node, the transient injection problem reported in LHO aLOG 31030 was finally fixed. The interferometer was not locked, so there is no point in trying to recover the signal in a low-latency analysis. This is the entry in the schedule file:
1161980000 INJECT_CBC_ACTIVE 0 1.0 /ligo/home/evan.goetz/lscrepos/Inspiral/H1/coherentbbh7_1126259455_H1.out

I observed the signal in H1:CAL-PINJX_HARDWARE_OUT, H1:CAL-PCALX_RX_PD_VOLTS_OUT, and H1:CAL-PCALX_OFS_AOM_DRIVE_MON_IN2.
Comments related to this report
evan.goetz@LIGO.ORG - 13:26, Monday 31 October 2016 (31034)
Please note the following warning about "Aborting Logging" in the log file:
2016-10-31T20:12:43.18044 INJ_TRANS executing state: INJECT_CBC_ACTIVE (101)
2016-10-31T20:12:43.18068 INJ_TRANS [INJECT_CBC_ACTIVE.enter]
2016-10-31T20:12:43.18399 INJ_TRANS [INJECT_CBC_ACTIVE.main] USERMSG 0: INJECTION ACTIVE: 1161980000.000000
2016-10-31T20:12:51.05950 Aborting Logging because SIStrLog call failed -1
2016-10-31T20:13:19.57595 INJ_TRANS [INJECT_CBC_ACTIVE.main] GPS start time: 1161980000.000000
2016-10-31T20:13:19.57602 INJ_TRANS [INJECT_CBC_ACTIVE.main] Requested state: INJECT_CBC_ACTIVE
H1 General
edmond.merilh@LIGO.ORG - posted 13:13, Monday 31 October 2016 (31032)
Mid-Shift Summary - Day
H1 DetChar (SEI)
hugh.radkins@LIGO.ORG - posted 12:00, Monday 31 October 2016 - last comment - 12:28, Thursday 03 November 2016(31031)
Terramon--Question for the designers

On the attached snap of the Terramon window, the second event is a largish EQ in the Middle East.  The EQ distance to H1 is 1000 km shorter than the distance to L1, but the computed site velocity at H1 is 1/2 of that at L1.  Is this one of those cases where the waves arrive first from opposite directions, so the crust encountered by the travelling surface waves is different?  Interesting info I'm sure everyone wants to know.  I see a similar discrepancy between the G1 and V1 velocities, but those waves are certainly travelling in the same direction.  Maybe it is just the local site geology being taken into account?  HunterG?  Thanks-H

Images attached to this report
Comments related to this report
hunter.gabbard@LIGO.ORG - 12:28, Thursday 03 November 2016 (31164)

Hey Hugh! 

Apologies for the late response. I'm going to paraphrase what Michael Coughlin told me in a recent discussion.

We have historically attributed the different behavior to the sites themselves rather than to any particular selection effect from LHO's and LLO's locations relative to most EQs. Amplitude as a function of EQ direction would be interesting to look at, as we essentially fit it away by taking only distance into account. Might be a good summer project for someone.

--Hunter 

H1 INJ
evan.goetz@LIGO.ORG - posted 11:10, Monday 31 October 2016 (31030)
Error setting up an awg slot for transient hardware injections
Last Friday (28 Oct.) and again today (31 Oct.), I unfortunately ran across this error when trying to make a hardware injection test:
2016-10-31T17:47:43.17850 INJ_TRANS JUMP: AWG_STREAM_OPEN_PREINJECT->INJECT_CBC_ACTIVE
2016-10-31T17:47:43.17898 INJ_TRANS calculating path: INJECT_CBC_ACTIVE->INJECT_SUCCESS
2016-10-31T17:47:43.17943 INJ_TRANS new target: RAMP_GAIN_TO_0
2016-10-31T17:47:43.18017 INJ_TRANS executing state: INJECT_CBC_ACTIVE (101)
2016-10-31T17:47:43.18292 INJ_TRANS [INJECT_CBC_ACTIVE.enter]
2016-10-31T17:47:43.18413 INJ_TRANS [INJECT_CBC_ACTIVE.main] USERMSG 0: INJECTION ACTIVE: 1161971300.000000
2016-10-31T17:47:43.18625
2016-10-31T17:47:43.18632  *** Break *** write on a pipe with no one to read it
2016-10-31T17:47:43.18639 awgSetChannel: awg_clnt[124][0] = NULL
2016-10-31T17:47:43.18646 Error code from awgSetChannel: -5
2016-10-31T17:47:43.22723 INJ_TRANS [INJECT_CBC_ACTIVE.main]   File "/opt/rtcds/userapps/release/cal/common/guardian/INJ_TRANS.py", line 549, in main
2016-10-31T17:47:43.22726     self.hwinj.stream.send(self.hwinj.data)
2016-10-31T17:47:43.22727
2016-10-31T17:47:43.22770 INJ_TRANS [INJECT_CBC_ACTIVE.main]   File "/ligo/apps/linux-x86_64/gds-2.17.9/lib/python2.7/site-packages/awg.py", line 621, in send
2016-10-31T17:47:43.22772
2016-10-31T17:47:43.22772     self.append(data, scale=scale)
2016-10-31T17:47:43.22773 INJ_TRANS [INJECT_CBC_ACTIVE.main]   File "/ligo/apps/linux-x86_64/gds-2.17.9/lib/python2.7/site-packages/awg.py", line 599, in append
2016-10-31T17:47:43.22774
2016-10-31T17:47:43.22774     self.open()
2016-10-31T17:47:43.22775     + ": " + awgbase.SIStrErrorMsg(ret))
2016-10-31T17:47:43.22775 INJ_TRANS [INJECT_CBC_ACTIVE.main]   File "/ligo/apps/linux-x86_64/gds-2.17.9/lib/python2.7/site-packages/awg.py", line 584, in open
2016-10-31T17:47:43.22776
2016-10-31T17:47:43.22777 INJ_TRANS [INJECT_CBC_ACTIVE.main]  can't open stream to H1:CAL-PINJX_TRANSIENT_EXC: Error setting up an awg slot for the channel
2016-10-31T17:47:43.24578 INJ_TRANS JUMP target: FAILURE_DURING_ACTIVE_INJECT
2016-10-31T17:47:43.24605 INJ_TRANS [INJECT_CBC_ACTIVE.exit]
2016-10-31T17:47:43.32248 INJ_TRANS JUMP: INJECT_CBC_ACTIVE->FAILURE_DURING_ACTIVE_INJECT
2016-10-31T17:47:43.32252 INJ_TRANS calculating path: FAILURE_DURING_ACTIVE_INJECT->INJECT_SUCCESS
2016-10-31T17:47:43.32253 INJ_TRANS executing state: FAILURE_DURING_ACTIVE_INJECT (300)
2016-10-31T17:47:43.32253 INJ_TRANS new target: WAIT_FOR_NEXT_INJECT
2016-10-31T17:47:43.32254 INJ_TRANS [FAILURE_DURING_ACTIVE_INJECT.enter]
2016-10-31T17:47:43.32780 INJ_TRANS [FAILURE_DURING_ACTIVE_INJECT.main] ezca: H1:CAL-PINJX_TRANSIENT_GAIN => 0.0
2016-10-31T17:47:45.47968 INJ_TRANS [FAILURE_DURING_ACTIVE_INJECT.main] ezca: H1:CAL-PINJX_TINJ_OUTCOME => -4
2016-10-31T17:47:45.48415 INJ_TRANS [FAILURE_DURING_ACTIVE_INJECT.main] ezca: H1:CAL-PINJX_TINJ_ENDED => 1161971282.48
2016-10-31T17:47:45.52790 INJ_TRANS [FAILURE_DURING_ACTIVE_INJECT.run] USERMSG 0: ERROR


Chris Biwer suggested a simple test that I think worked:
>>> import awg
>>> import numpy
>>> stream = awg.ArbitraryStream('H1:CAL-PINJX_TRANSIENT_EXC', 16384, 1161970100) 
>>> timeseries_data = numpy.zeros(10)
>>> stream.send(timeseries_data)
Warning, couldn't open log file
Aborting Logging because SIStrLog call failed -4
Warning, couldn't open log file


Chris B. and Dave Barker have been notified of this issue.
H1 DetChar (DetChar, IOO, ISC, SEI, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 10:48, Monday 31 October 2016 - last comment - 13:38, Friday 04 November 2016(31029)
Delightfully Ghoulish SEI Environment Lock Stretch for Characterization
J. Kissel

Admiring the work of the SEI and ASC teams: we've just lost lock after a really impressive lock stretch in which we had ~40 mph winds, ~70th-percentile microseism, and a 5.4 mag earthquake in the horn of Africa, and survived. It would be most excellent if DetChar could compare the amplitudes of the ISC control signals, check out the beam rotation sensor tilt levels and the ISI platform sensor amplitudes, take a look at optical lever pitch and yaw compared with the ASC signals, etc.
    Start: Oct 31 2016 16:15:05  UTC
    End:               17:37-ish UTC
Comments related to this report
jim.warner@LIGO.ORG - 15:40, Tuesday 01 November 2016 (31083)DetChar, SEI

Winds and some ground BLRMS (showing microseism and the earthquake arrival) for this lock stretch. We survived at least one gust over 50mph before losing lock. No one changed seismic configuration during this time.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 13:38, Friday 04 November 2016 (31206)SEI
For the record, the units of the above attached trends (arranged in the same 4-panel format as the plot) are: top left ([nm/s] RMS in band), top right [none]; bottom left ([nm/s] RMS in band), bottom right [mph]. (A toy sketch of what these BLRMS channels compute follows the list below.)
Thus, 
- the earthquake band trend (H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M) shows the 5.3 [mag] EQ peaked at 0.1 [um/s] RMS (in Z, in the corner station, between 30-100 [mHz]), 
- the microseism (again in Z, in the corner station, H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M) is averaging 0.25 [um/s] RMS between 100-300 [mHz] (which is roughly average, or 50th percentile -- see LHO aLOG 22995), and 
- the wind speed (in the corner station) is beyond the 95th percentile (again, see LHO aLOG 22995) toward the end of this lock stretch, at 40-50 [mph].
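For readers unfamiliar with these channels: a BLRMS is just a band-pass filter followed by a running RMS. A minimal sketch (not the site code; the sample rate, filter order, and RMS window are assumptions), here for the 100-300 mHz microseism band:

import numpy as np
from scipy.signal import butter, sosfilt

fs = 8.0                                 # sample rate [Hz], assumed
t = np.arange(0, 600, 1 / fs)
velocity = np.random.randn(t.size)       # stand-in ground velocity [nm/s]

# Band-pass to 100-300 mHz, then take a running RMS over a 30 s window.
sos = butter(4, [0.1, 0.3], btype="bandpass", fs=fs, output="sos")
band = sosfilt(sos, velocity)

window = int(30 * fs)
blrms = np.sqrt(np.convolve(band**2, np.ones(window) / window, mode="same"))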

Aside from Jordan Palamos' work in LHO aLOG 22995, also recall David McManus' work in LHO aLOG 27688, which -- instead of a side-by-side bar graph -- shows a surface map. According to the cumulative surface map, with 50th percentile microseism and 95th percentile winds, the duty cycle was ~30% in O1.

So, this lock stretch is not yet *inconsistent* with O1's duty cycle, but it sure as heck-fy looks promising.