H1 ISC (CDS, DetChar, ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 22:54, Monday 20 July 2015 (19770)
7/21 Relocking Team's Maintenance Day Planning
J. Kissel, B. Weaver, J. Driggers, R. McCarthy, D. Barker, D. Sigg, J. Batch

Here is the list of tomorrow's maintenance day tasks, organized in the order we intend to execute them and prioritized so that the tasks with the most global impact on the IFO are done first (giving us the most time to recover from them). As with last Tuesday (LHO aLOG 19600), all tasks, their estimated times for completion, and responsible persons (or "task managers") will be added to the reservation system when they are *actually happening*, and removed after the task manager has checked in with the operator and confirmed completion. PLEASE PAY ATTENTION TO THE RESERVATION SYSTEM (to help, we're going to put it on the big projector during maintenance). 

As always, please keep the operators informed of your activities as accurately as possible / reasonable throughout the maintenance day so the reservation list can be adjusted accordingly and remain accurate. We appreciate your cooperation!


Group 0 -- prep for maintenance (to be done either the night before, or just before the start of maintenance; a scripted sketch follows this list):
    - Clear out all SDF system differences
    - Ensure an alignment offset backup snap has been captured / define a reference time to which we will restore them
    - Bring ISC_LOCK guardian to DOWN, Bring IMC_LOCK guardian to OFFLINE
    - Bring all SEI manager guardians to OFFLINE
    - Bring all SUS guardians to SAFE
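A minimal sketch of how this prep could be scripted (illustrative only -- assuming pyepics and the standard Guardian REQUEST channels; the node lists below are assumptions, and in practice the requests are typically made from the Guardian overview screens):

    # Group 0 prep: request DOWN / OFFLINE / SAFE states via EPICS.
    from epics import caput

    caput('H1:GRD-ISC_LOCK_REQUEST', 'DOWN')     # IFO lock acquisition down
    caput('H1:GRD-IMC_LOCK_REQUEST', 'OFFLINE')  # IMC offline

    SEI_MANAGERS = ['SEI_BS', 'SEI_ITMX', 'SEI_ITMY', 'SEI_ETMX', 'SEI_ETMY']  # assumed list
    SUS_NODES    = ['SUS_BS', 'SUS_ITMX', 'SUS_ITMY', 'SUS_ETMX', 'SUS_ETMY']  # assumed list

    for node in SEI_MANAGERS:
        caput('H1:GRD-%s_REQUEST' % node, 'OFFLINE')
    for node in SUS_NODES:
        caput('H1:GRD-%s_REQUEST' % node, 'SAFE')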

Group 1 -- (tasks that can be performed simultaneously) to begin as soon as the group 0 tasks they depend on are complete; otherwise, at 08:00a PT
    - Timing master's GPS reference swapped for external reference 30 min - 2 hours (R. McCarthy)
        - We expect that this timing system swap will not glitch the timing system (which would 
          crash all front-ends, site-wide). However, we are preparing for the worst by bringing 
          all systems to their respective DOWN / OFFLINE / SAFE states. If the front-ends do 
          crash, the recovery time will be of order 2 hours to get all front-ends back up and 
          running. If not, we expect little-to-no recovery time other than bringing guardians 
          back to their nominal states.
        - IF AND ONLY IF the front ends do crash and we have to restart them, we will recompile 
          any front-ends that have crashed against RCG 2.9.6 in order to pick up the bug fixes 
          that come with RCGs 2.9.1 through 2.9.6.
    - HEPI Pump Station Repair 30 min (H. Radkins)
        - Only one of the four pump stations appears noisy. The corner station can be run on 
          only three pump stations, so Hugh will merely ramp the errant pump out of the system 
          and convert to running the corner station on three pumps. This ramp-out should only 
          cause a brief, minor HEPI actuator pressure glitch. The 30 minutes is a conservative 
          over-estimate of how long it will take.

Group 1.5 -- can begin immediately after the effects of group 1 are known:
    - Potential recompilation and install of the front-end models of all front-end machines 
      that have crashed. See above 2 hours (J. Batch, D. Barker, J. Kissel)
    - Cable routing / pulling for PEM Cosmic Ray Detector 2 hours (F. Clara, V. Roma, J. Palamos)
    
 Recovery of corner station SEI / SUS, and relocking of the IMC, can begin upon assessment of the effects of switching the timing master's GPS reference.

Group 2 -- can begin while the corner station is being recovered, or after it has been:
    - PEM sensor calibrations 1 hour (V. Roma, J. Palamos)
    - Replace / Repair Timing Fanout at EY 30 minutes (J. Batch, D. Barker)
    - Upgrade BIOS on new EY SUS fast front-end 30 minutes (J. Batch, D. Barker)
    - EY SUS, EY Parametric Instability front-end models recompiled against RCG 2.9.6 and installed 30 minutes (J. Batch, D. Barker)
    - EX Low-Voltage, Low-Noise driver installation and cabling 1 hour (R. McCarthy)
    
 Recovery of all models at EY, restoration of settings, and bring-up of ETMY SEI / SUS; measure charge on the ETMY SUS ESD to confirm ESD health.

Group 2.5 -- can begin once work at EY is complete and/or while EY is being recovered:
    - Upgrade BIOS on new EX SUS fast front-end 30 minutes (J. Batch, D. Barker)
    - EX SUS, EX Parametric Instability front-end models recompiled against RCG 2.9.6 and installed 30 minutes (J. Batch, D. Barker)

 Recovery of all models at EX, restoration of settings, and bring-up of ETMX SEI / SUS.
 Confirm / commission the functionality of the new ETMX LVLN ESD driver.
 Measure charge on the ETMX SUS ESD to confirm ESD health.

Group 3 -- can begin once work and restoration at EX are finished
    - Power cycle corner station front-ends' network switch 10 minutes (J. Batch, D. Barker)
         - Workstations will briefly lose their connection to the h1boot server, so they will be down briefly.
    - Preventative maintenance reboots of the following computers
         - Conlog
         - EPICS gateway
         - Guardian machine

 Restoration of all alignment settings; recovery of FULL IFO can begin.

Group 3.5 -- can begin once workstations are back and preventative maintenance is complete.
    - Rename and include Mid Station / Beam Tube PEM Accelerometers into PEM MX and MY front-end models 10 minutes (J. Batch, D. Barker)
    - Parametric Instability monitor model install at EX 10 minutes (J. Batch, D. Barker)
    - SUS AUX model upgrade 10 minutes (J. Kissel, B. Weaver)
    - Fix LDAS communication fiber hardware 1 hour (J. Batch, D. Barker) 

 Complete IFO recovery and commissioning of new bits and pieces.


As seen last Tuesday, and on many prior Tuesdays, the above plan will not happen exactly as described, as reality strikes. But we will try our darnedest! Wish us luck!
Images attached to this report
Non-image files attached to this report
H1 ISC
stefan.ballmer@LIGO.ORG - posted 19:50, Monday 20 July 2015 (19769)
SRC1_Y fine-tuning
Hannah, Evan, Stefan

Since we redid the AS_A_RF36 re-phasing (alog 19572), we had never re-done an SRC coupling test while moving the SRC1_YAW offset (see alog 18436), so this was on the menu today - before redoing the SRCL decoupling.

- First we lowered the AS_A_RF36 whitening gain from 21dB to 18dB because some quadrants had too much signal.
- Again we found that for the matrix (see alog 19572)
    H1:ASC-AS_A_RF36_I_MTRX_2_1    0
    H1:ASC-AS_A_RF36_I_MTRX_2_2    0
    H1:ASC-AS_A_RF36_I_MTRX_2_3    -2
    H1:ASC-AS_A_RF36_I_MTRX_2_4    2
  an offset of -2500 counts in H1:ASC-SRC1_Y_OFFSET gives the lower POP90, higher AS90, and lower high frequency SRCL coupling. (see plot)
- Since I don't like running with offsets in WFS loops, I tried the following sensing matrix, which puts us at the same position:
    H1:ASC-AS_A_RF36_I_MTRX_2_1    0
    H1:ASC-AS_A_RF36_I_MTRX_2_2   -1
    H1:ASC-AS_A_RF36_I_MTRX_2_3    0
    H1:ASC-AS_A_RF36_I_MTRX_2_4    3
  This admittedly looks odd - it should also have some pitch content - but in alog 19572 we saw that the pitch signal is in a different phase anyway... whatever... (a minimal sketch of applying this change follows below)
 - With that new lock point we observed:
  - The same SRCL coupling at low frequencies - this one seems steady
  - A lower average SRCL coupling at high frequencies - as a result the notch moved up in frequency from ~75Hz to ~110Hz
  - The high frequency part is also the more variable part - both before and after the offset shift. Thus - even though the coupling now seems worse around the old notch frequency - that disadvantage should be easily compensated by the SRCLFF path.

 - We also updated the FM8 cut-off filter in SRCL - it is now a less aggressive low-pass filter starting at 80 Hz. This still kills the variable part of the coupling, but also reduces gain peaking in the SRCL loop - which previously made the coupling worse.
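For concreteness, a minimal sketch of applying the new sensing matrix and clearing the old offset (illustrative only -- assuming pyepics; not necessarily how the values were actually entered):

    from epics import caput

    # New AS_A_RF36_I input matrix row (row 2, assumed to feed SRC1_Y):
    new_row = {
        'H1:ASC-AS_A_RF36_I_MTRX_2_1':  0,
        'H1:ASC-AS_A_RF36_I_MTRX_2_2': -1,
        'H1:ASC-AS_A_RF36_I_MTRX_2_3':  0,
        'H1:ASC-AS_A_RF36_I_MTRX_2_4':  3,
    }
    for chan, val in new_row.items():
        caput(chan, val)

    # The new matrix puts the loop at the preferred operating point directly,
    # so the -2500 count WFS offset is no longer needed:
    caput('H1:ASC-SRC1_Y_OFFSET', 0)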


Images attached to this report
H1 ISC (ISC)
hang.yu@LIGO.ORG - posted 19:33, Monday 20 July 2015 (19767)
Updates of a2l
Matt, Hang

We ran the a2l decoupling optimization code this evening for all test masses and for both pitch and yaw. It successfully reduced the low frequency noise. Please see the attachment (darm_spectrum.png). 

The changes were:
H1:SUS-ETMX_L2_DRIVEALIGN_P2L_GAIN: 0.93 -> 1.21
H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_GAIN: 0.93 -> 1.32
H1:SUS-ETMY_L2_DRIVEALIGN_P2L_GAIN: 0.00 -> -0.02
H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_GAIN: -0.70 -> -0.59
H1:SUS-ITMX_L2_DRIVEALIGN_P2L_GAIN: 1.95 -> 2.04
H1:SUS-ITMX_L2_DRIVEALIGN_Y2L_GAIN: 0.63 -> 0.74
H1:SUS-ITMY_L2_DRIVEALIGN_P2L_GAIN: 1.05 -> 1.06
H1:SUS-ITMY_L2_DRIVEALIGN_Y2L_GAIN: -2.05 -> -1.48

More details of the measurements can be found under:
/opt/rtcds/userapps/trunk/isc/h1/scripts/a2l/rec
This directory contains both the raw measured data (I, Q, and total) and plots of our linear fits as well as the rotation. The optimal a2l gains correspond to the zeros of the rotated I's. Please note again that since our data are likely correlated, the error bars shown should be treated as only a rough estimate.

=================================================================================================================================================================================

We also wrapped the Python code in a bash script that can easily be called in the future. It can be found under:
/opt/rtcds/userapps/trunk/isc/h1/scripts/a2l

To rerun the optimization, you can simply enter
./run_a2l.sh
on the command line, and the code will run the optimization for all test masses and all angular dofs. 

If you just want to optimize specific optics and, say, only their pitch-to-length coupling, you can edit the 'a2l_input.in' file.

In case the interferometer loses lock, press Ctrl+C to terminate the code. On this keyboard interrupt, the code will automatically set the not-yet-optimized drive-align gains back to their original values and disable the dither input. 
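A minimal sketch of that restore-on-interrupt pattern (illustrative only -- the function names and gain bookkeeping below are assumptions, not the actual a2l code):

    # Restore un-optimized gains and kill the dither on Ctrl+C.
    from epics import caget, caput

    def run_with_restore(optimize_one, gain_channels, dither_switch_chan):
        """Optimize each gain channel in turn; on KeyboardInterrupt,
        restore every channel not yet optimized and disable the dither."""
        originals = {chan: caget(chan) for chan in gain_channels}
        done = set()
        try:
            for chan in gain_channels:
                optimize_one(chan)        # runs the dither measurement + fit
                done.add(chan)
        except KeyboardInterrupt:
            for chan, val in originals.items():
                if chan not in done:
                    caput(chan, val)      # put back the original a2l gain
            caput(dither_switch_chan, 0)  # disable the dither input
            raise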

For more instructions, please refer to 'readme.txt' under the same directory. 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 18:21, Monday 20 July 2015 (19766)
current rcg versions being run

I have updated the RCG_VERSIONS MEDM screen to show the currently running versions before tomorrow's upgrade.

Images attached to this report
H1 AOS
matthew.evans@LIGO.ORG - posted 18:09, Monday 20 July 2015 - last comment - 18:38, Friday 22 March 2019(19765)
Testing multi-thread writes in Guardian

I have put together a few Python functions which allow for briefly spawning multiple threads to switch many filters at (roughly) the same time.  The idea here is NOT to provide synchronous filter switching, but rather to speed up Guardian transitions that change the state of many filter modules (or, more generally, write many channels).

The new code is in:

userapps/release/isc/h1/guardian/

fast_ezca.py - new functions for writing, switching, and generally doing things quickly

test_write_many.py - test functions for multi-thread writing

test_switch_many.py - test functions for multi-thread switching

test_do_many.py - test functions for multi-thread compound actions

and it is being used in the ISC_library function slow_offload_fast.   There is a single-thread version of this function in ISC_library in case of trouble: slow_offload_many.  The only caller is gen_OFFLOAD_ALIGNMENT_MANY in ISC_GEN_STATES, so go there if you need to switch this out.

NOTE: usage of these functions should NOT spread.  It will be assimilated into the Guardian, and the API will change.
Adding         test_do_many.py
Adding         test_switch_many.py
Adding         test_write_many.py
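A minimal sketch of the multi-thread write idea (illustrative only -- not the actual fast_ezca.py implementation): spawn one short-lived thread per channel write so that many slow channel-access round trips overlap in time.

    import threading
    from epics import caput  # pyepics shown here; fast_ezca wraps ezca

    def write_many(channel_value_pairs):
        """Write all (channel, value) pairs in parallel, then wait for
        all writes to finish before returning."""
        threads = [threading.Thread(target=caput, args=(chan, val))
                   for chan, val in channel_value_pairs]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    # e.g. write_many([('H1:LSC-MICH_GAIN', 0.0), ('H1:LSC-PRCL_GAIN', 0.0)])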
Comments related to this report
sheila.dwyer@LIGO.ORG - 18:38, Friday 22 March 2019 (47780)

This allows the guardian to move on without setting a setting, and can cause problems because settings can be wrong and the user has no clue. 

I want to delete this completely.

H1 SUS (CAL, DetChar, ISC, SUS, SYS)
leonid.prokhorov@LIGO.ORG - posted 17:43, Monday 20 July 2015 - last comment - 16:29, Wednesday 22 July 2015(19764)
OPLEV Charge measurements
We continue the charge measurements on ETMs.
Results for ETMX are consistent with a negative trend; the charge is now between 10 and 20 [V] effective bias voltage for all quadrants.
Results for ETMY do not show a significant trend (the data are possibly beginning to be consistent with a positive trend). Charge is below 10 [V] effective bias voltage for all quadrants.

Note:  We had positive bias on ETMX and negative bias on ETMY after the discharging procedure. So it seems possible that the charging is caused by the bias voltage.
Images attached to this report
Comments related to this report
rich.abbott@LIGO.ORG - 17:40, Tuesday 21 July 2015 (19807)ISC
Has the bias on ETMX and ETMY remained positive and negative respectively for the duration of this observation?
leonid.prokhorov@LIGO.ORG - 16:29, Wednesday 22 July 2015 (19846)
The bias was the same for this and the next charge measurement. 
It was changed on July 22: alog 19821.
Today we have the first measurements after changing the bias sign: alog 19848.
H1 CDS
david.barker@LIGO.ORG - posted 17:09, Monday 20 July 2015 (19763)
models using TrueRMS part

The next RCG release will fix the TrueRMS issues. For the record, here is a list of the H1 user models which use this part:

h1susetmypi, h1susetmy, h1susetmx, h1susitmx, h1susitmy, h1omc, h1susomc, h1oaf

The SUS IOP models also use the part for the Software Watchdog:

h1iopsusb123, h1iopsusex, h1iopsusey, h1iopsush2a, h1iopsush34, h1iopsush56

H1 General
travis.sadecki@LIGO.ORG - posted 16:01, Monday 20 July 2015 (19762)
OPS Day Shift Summary

Times PST

9:58 Richard to EX to reconnect HEPI pump ground

10:15 Richard back

10:25 HFD on site

11:20 Leo taking charge measurements

13:44 Joe D to both Mid stations

14:11 Joe D back

14:41 Richard to roof checking out GPS antenna work for tomorrow

H1 General
travis.sadecki@LIGO.ORG - posted 13:09, Monday 20 July 2015 (19761)
PSL weekly

Laser Status:
SysStat is good
Front End power is 32.64W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED

PMC:
It has been locked 6.0 days, 0.0 hr 31.0 minutes (should be days/weeks)
Reflected power is 2.464Watts and PowerSum = 25.33Watts.

FSS:
It has been locked for 0.0 days 0.0 h and 1.0 min (should be days/weeks)
TPD[V] = 1.643V (min 0.9V)

ISS:
The diffracted power is around 7.629% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 1.0 minutes ago (should be days/weeks)
 

H1 AOS
jason.oberling@LIGO.ORG - posted 12:35, Monday 20 July 2015 - last comment - 19:10, Monday 20 July 2015(19760)
ETMx OpLev appears quiet

J. Oberling, E. Merilh

On June 23rd we swapped the laser on the ETMx oplev, see alog 19290.  We spent the next couple of weeks tweaking the operating power of the laser to get it into a stable zone; this has to be done since the thermal environment is different between the end station VEA and the LSB lab where the laser was originally stabilized.  After I got back from vacation last week, I've been looking at quiet times (no apparent optic movement; I looked at the pitch & yaw oplev signals and picked times where the optics were quiet) to see if the laser is stable and glitch-free.  I've attached 3 spectrograms of quiet times during the last week.

There is no obvious glitching in the ETMx oplev laser as shown in these spectrograms.  I think it is safe to say this oplev is healthy.  I have also attached a spectrum of the ETMx oplev pitch & yaw signals for the same 4-hour stretch on 2015-7-19 as the attached spectrogram.
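For anyone repeating this check, a minimal sketch of the spectrogram inspection (assumptions: the oplev signal is a numpy array at 256 Hz; a placeholder array stands in for the real data, and the attached plots were made with other tools):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.signal import spectrogram

    fs = 256                            # assumed oplev sample rate [Hz]
    pitch = np.random.randn(fs * 3600)  # placeholder for the real pitch signal

    # Laser glitches show up as broadband vertical stripes in the spectrogram.
    f, t, Sxx = spectrogram(pitch, fs=fs, nperseg=4096)
    plt.pcolormesh(t, f, 10 * np.log10(Sxx))
    plt.xlabel('Time [s]')
    plt.ylabel('Frequency [Hz]')
    plt.show()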

As usual if anyone notices anything not right with any of the optical levers, let me know.

Images attached to this report
Comments related to this report
suresh.doravari@LIGO.ORG - 19:10, Monday 20 July 2015 (19768)

Glitch Performance evaluation of diode lasers:

   It is sometimes difficult to see whether a laser is glitch-free because of the way the tools we employ display the data.  It is good to compare the performance of a laser under test with one that is known to be glitch-free and another known to be glitchy.  This way the display tool is validated (since it shows the glitches in the glitchy laser), and at the same time we would know whether the laser under test has achieved the reference laser's performance level.

Please see my post on laser evaluation.  It would also be preferable if the same kind of plotting tools (ligodv) are used as in the past, in order to make the comparisons easier.

H1 GRD (SEI)
thomas.shaffer@LIGO.ORG - posted 12:26, Monday 20 July 2015 (19759)
ISI_ETMY_ST1 Guardian Oddity

This morning around 16:45 UTC, the ETMY ISI Stage 1 and 2 WatchDogs tripped, reporting a "payload trip". Jim cleared these trips and then watched Guardian bring the ISI back up in an odd way. Guardian brought the ST1 node to HIGH_ISOLATED, but not all of the preferred isolation loops on Stage 1 were turned on, nor were the input, output, and decimation. (Snippet of the log attached.) Stage 2 proceeded to try to isolate, but since Stage 1 was not entirely isolated, the WatchDogs tripped again. This time, after clearing the WatchDogs, Guardian brought everything back up properly, and everything seems to have been working well since.

Jamie is unsure of the reason behind this, but suspects some epics connection issues.

Another odd bit to add is that the payload was never tripped...

Images attached to this report
H1 GRD
thomas.shaffer@LIGO.ORG - posted 12:02, Monday 20 July 2015 (19758)
Guardian Connection Issue in ISC_DRMI

At 17:40 UTC the ISC_DRMI Guardian had the following error: "EzcaError: Could not get value from channel: <PV 'H1:LSC-PD_DOF_MTRX_SETTING_5_18', count=1, type=double, access=read/write>, None"

A caget of the channel yielded a value, so it seems it was just Guardian that was not seeing it. Reloading the code did not fix the error, and we were already on the phone with Jamie about another Guardian oddity (alog to follow). Jamie suggested that we STOP the node and then set it back to EXEC. This worked!
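(The same sanity check from Python, for reference -- assuming pyepics:)

    # If this returns a value while Guardian still errors, the problem is
    # on the Guardian/ezca connection side, not with the channel itself.
    from epics import caget
    print(caget('H1:LSC-PD_DOF_MTRX_SETTING_5_18'))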

This has been seen before (example: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19673); it does not seem to be the same channel each time, but it is always one from H1:LSC-PD_DOF_MTRX. Jamie and Dave both have their thinking caps on and are trying to solve the problem.

H1 General
edmond.merilh@LIGO.ORG - posted 11:27, Monday 20 July 2015 (19757)
H1 operator morning locking summary

08:00-10:30 I got a little extra time for locking this morning.

Locking this morning was a little tricky, as there was some FSS oscillation and ISS instability that was giving the IMC a bit of trouble. Common gain manipulation got that settled down, and a bit of AOM diffracted power adjustment rectified the remainder.

H1 ISC (DetChar, ISC, SEI, SUS)
gabriele.vajente@LIGO.ORG - posted 09:59, Monday 20 July 2015 - last comment - 22:08, Wednesday 22 July 2015(19756)
Brute force coherence summary

Here is a summary of the brute-force coherence report already posted as a comment to a previous elog entry describing the good-sensitivity lock of last Friday.

Basically, there is no large coherence anywhere, except for the well known periscope peaks that are coherent with ISS signals, IMC angular signals and PSL periscope (figure 1-3)

At low frequency, there is coherence with SUS-ETMY_L3_ESDAMON_?? signals. This was not there in the past, so I guess this coherence is just due to a change in the control strategy. If I'm not mistaken, this signal is just a monitor of the correction sent to the ESD, so coherence with DARM is normal. Please correct me if wrong... (figure 4)

In the 10-30 Hz band there is coherence with ASC-MICH_P (figure 5)

In the 10-70 Hz region a leading identified noise source is longitudinal control, since there is coherence with MICH and SRCL (figures 6-7). This noise is not dominant, though, sitting still a factor of a few below the measured sensitivity.

In the higher-frequency region (above 100 Hz), there is coherence with the ISS and PSL periscope as already pointed out, but there is also some coherence with the AS signals: ASC-AS_A/B_DC_SUM, ASC-AS_A_RF36_I_PIT/YAW, etc. Together with the main jitter peaks, there is a broadband noise floor at about 1e-10 m/rHz from 100 to 1000 Hz. This might be intensity noise, or noise in higher-order modes that is not completely filtered by the OMC (figure 8).

Finally, a 90 Hz bump seems to be coherent with HAM5 signals (figure 9)
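For reference, a minimal sketch of the kind of scan behind these figures (illustrative only -- the actual brute-force coherence tool is separate; this assumes DARM and the auxiliary channels are already loaded as numpy arrays at a common sample rate):

    import numpy as np
    from scipy.signal import coherence

    def coherence_scan(darm, aux, fs=16384.0, fft_s=8.0, threshold=0.2):
        """Flag channels whose magnitude-squared coherence with DARM
        exceeds threshold anywhere; aux is a dict of {name: array}."""
        nperseg = int(fs * fft_s)
        flagged = {}
        for name, data in aux.items():
            f, cxy = coherence(darm, data, fs=fs, nperseg=nperseg)
            if cxy.max() > threshold:
                flagged[name] = (f[np.argmax(cxy)], cxy.max())  # worst frequency
        return flagged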

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 22:08, Wednesday 22 July 2015 (19854)SUS
The SUS-ETMY_L3_ESDAMON_?? channels are the recently well-connected and well-digitized analog monitors of the ESD (i.e. TST or L3 stage) actuators. Since we're using ETMY L3 as most of our DARM actuator, it's no surprise that there is coherence with DARM below the DARM UGF. What's strange is that you report coherence from before they were connected correctly in the SUS-AUX model from whence they come (see LHO aLOG 19780) ...
H1 ISC
rana.adhikari@LIGO.ORG - posted 04:34, Monday 20 July 2015 - last comment - 16:12, Tuesday 21 July 2015(19752)
Summary of last few days

Hang, Matt, Evan, Stefan, Rana

We mostly were chasing weird locklosses and instabilities, but also managed to implement a few noise improvements:

  1. SRCL FF better shaping via fitting and better measurements.
  2. A2L decoupling gave us 10-30 Hz improvement.
  3. MICH FF tuned up a bit. New measurements taken; fitting and filter generation yet to be done.
  4. ASC tune up: hard/soft modes for CARM.
  5. ETMY Butterfly mode identified and notched.

Of the locklosses, some were just due to marginal points in the transitions and loop margins. The main issue over the past few days turned out to be that the TIDAL servo had somehow been turned OFF on Friday evening. After switching it back on for ETMY, we have returned to having long locks. The high-passing of the SRCLFF has removed the high-power pitch instabilities that we were seeing.

We were able to get > 40 Mpc with 10 W input. The butterfly mode of ETMY @ 6053.81 Hz is preventing us from turning on the OMC DC whitening right now, so we don't know how good our range really is, but our low-frequency noise has clearly improved.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 04:16, Monday 20 July 2015 (19753)

We also got a chance to reimplement a small amount of the ASC changes from before the last maintenance day:

  • We reverted back to the true hard/soft basis for common pitch only (these settings are not controlled by the guardian right now). The attachment shows the OLTF of the cSoft loop with true soft actuation. Note the absence of the hard mode.
  • We turned up the gain on cHard pitch by 20 dB (not in the guardian).

Next:

  • Revert back to true hard/soft basis for common yaw.
  • Try out hard/soft basis for differential pitch and differential yaw.
Images attached to this comment
betsy.weaver@LIGO.ORG - 16:12, Tuesday 21 July 2015 (19800)

Note: Rana reports that the tidal was turned OFF at the far-right switch on the IMC_F filter module (the attached picture shows this switch now on, as it should be).

Images attached to this comment
H1 General (FMP, PEM, SEI)
rana.adhikari@LIGO.ORG - posted 04:03, Monday 20 July 2015 - last comment - 09:17, Monday 20 July 2015(19751)
Ops Station alarms

Might be nothing, but the following alarms have been going off continuously for the past few days:

EX HEPI DIFF PRESSURE (yellow)

EX DUST1, EY DUST1 (RED), and LVEA DUST

Comments related to this report
hugh.radkins@LIGO.ORG - 09:17, Monday 20 July 2015 (19755)

The EndX HEPI pressure sensors are especially noisy.  EE promises efforts to quiet them.  Meanwhile I've opened up the alarm levels to +-9 psi from the setpoint.

H1 CDS
david.barker@LIGO.ORG - posted 09:55, Sunday 19 July 2015 - last comment - 07:12, Monday 20 July 2015(19741)
more statistics on FE IOC freeze-up events

The probability that a freeze-up does not impact at least one Dolphin-connected FE is very small, so I'm using the h1boot Dolphin node manager's logs to data-mine these events. The Dolphin manager was restarted when h1boot was rebooted on Tuesday 7th July, so the data epoch starts at that time.

As I was seeing with my monitoring programs, the restarts preferentially happen in the 20-40 minute block within the hour. The first histogram is the number of events within the hour, divided into 10 minute blocks.

We are also seeing more events recently; the second histogram shows the number of events per day. The spike on Tue 14th is most probably due to front-end computer reboots during maintenance. Friday's increase is not so easily explained.

FE IOC freeze-up time listing:

controls on h1boot

grep "not reachable by ethernet" /var/log/dis_networkmgr.log |awk '{print $2r, $4}'|awk 'BEGIN{FS=":"}{print $1":"$2}'|sort -u

total events 197

minutes within the hour, divided into 10 min blocks
00-09 11  :*****
10-19 11  :*****
20-29 67  :*********************************
30-39 79  :****************************************
40-49 17  :*********
50-59 12  :******

events per day in July (start tue 07)
wed 08 09 :*****
thu 09 09 :*****
fri 10 08 :****
sat 11 07 :****
sun 12 08 :****
mon 13 09 :*****
tue 14 22 :***********
wed 15 10 :*****
thu 16 20 :**********
fri 17 38 :*******************
sat 18 16 :********
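For reference, a sketch of the minutes-within-the-hour binning (assuming one HH:MM timestamp per line on stdin, as produced by the grep/awk pipeline above; the star scaling is arbitrary):

    import sys
    from collections import Counter

    counts = Counter()
    for line in sys.stdin:
        hh, mm = line.strip().split(':')[:2]
        counts[int(mm) // 10] += 1      # index of the 10-minute block, 0..5

    for block in range(6):
        n = counts[block]
        print('%02d-%02d %3d :%s' % (block * 10, block * 10 + 9, n, '*' * (n // 2)))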
Comments related to this report
keith.thorne@LIGO.ORG - 07:12, Monday 20 July 2015 (19754)
This is a very clever analysis, Dave.
  I checked the LLO logs (there are three: corner, x-end, y-end). So far we only see these issues when we have a front-end down for an IO chassis or new hardware install.