Reports until 20:00, Saturday 04 May 2024
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:00, Saturday 04 May 2024 - last comment - 20:24, Saturday 04 May 2024(77618)
OPS Eve Midshift Update

Still relocking after the last NLN lockloss at 23:15 UTC - experiencing a whole host of problems.

In the meantime we've had the following issues, with locklosses listed in the lockloss alog 77615:

- BS glitch and other DRMI locklosses, 8 times. The glitch locklosses (4 of the 8) were overcome by waiting at ENGAGE_DRMI_ASC while ASC converges, to mitigate the glitch kick. That said, DRMI is extremely unstable in ENGAGE_DRMI_ASC, and we lost lock there twice right as ASC converged.

- Unknown locklosses at TRANSITION_FROM_ETMX and, after initial alignment, at LOWNOISE_ESD_ETMX.

- IR Not found (also after initial alignment).

I already ran an initial alignment and don't think alignment is the problem. I have troubleshot using Jenne's ppt to no avail. The problem is that I essentially have to wait 1 hr (for the EX transition issue) before experiencing a lockloss. Having done an initial alignment between EX transition locklosses, I think it's safe to assume that didn't fix the issue, so I'll resort to the call list when I'm near the EX transition.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 20:24, Saturday 04 May 2024 (77619)

UPDATE: Just made it past the ETMX transition for the first time since the lockloss 4 hrs and 15 min ago - slowly getting there.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 18:03, Saturday 04 May 2024 - last comment - 19:17, Saturday 04 May 2024(77615)
Repeat Locklosses at Transition DRMI to 3F during Lock Acquisition (BS Glitch)

While trying to re-lock, I ran into the same issues that Ryan C and Sheila noted in alog 77574 (most recent lock), where a glitch between the states ENGAGE_DRMI_ASC and TRANSITION_DRMI_TO_3F, thought to come from BS ST2, causes a lockloss; elevated primary microseism probably makes us more susceptible.

As in the last lock acquisition (survey), staying at ENGAGE_DRMI_ASC until ASC converges (so that the chance of a lockloss from the glitch is minimized) worked to get past the glitch, which had a markedly smaller effect.

Once we get to NLN, I will investigate this as noted in Sheila's comment (alog 77575). Additionally, alog 77573 talks briefly about this issue and lists where the glitch was seen. In the meantime, here are some more trends of the glitch being seen in multiple BS channels (again, and again, and again). I plan to post another alog summarizing any interesting findings.
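For anyone wanting to pull these trends themselves, here is a minimal sketch assuming gwpy is available on a workstation; the channel name and GPS time below are hypothetical placeholders, not the actual channels/times from this glitch (substitute the ones in the attached screenshots and alog 77615):

# Sketch only: gwpy assumed available; channel name and GPS time are
# hypothetical placeholders, not taken from this alog.
from gwpy.timeseries import TimeSeries

glitch_gps = 1398900000                          # placeholder glitch/lockloss time
channel = 'H1:SUS-BS_M2_MASTER_OUT_UL_DQ'        # placeholder BS stage-2 channel

# fetch 30 s centered on the glitch and save a quick-look plot
data = TimeSeries.get(channel, glitch_gps - 15, glitch_gps + 15)
plot = data.plot()
ax = plot.gca()
ax.set_ylabel('counts')
ax.set_title('BS channel around DRMI glitch')
plot.savefig('bs_glitch_trend.png')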

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 18:13, Saturday 04 May 2024 (77616)

Lost lock at TRANSITION_FROM_ETMX and then again at DRMI, so I'm just going to run an initial alignment.

ibrahim.abouelfettouh@LIGO.ORG - 19:17, Saturday 04 May 2024 (77617)

Another lockloss at LOWNOISE_ESD_ETMX, right after TRANSITION_FROM_ETMX. This is the first lock attempt after initial alignment.

H1 General (SQZ, SUS)
anthony.sanchez@LIGO.ORG - posted 16:15, Saturday 04 May 2024 (77614)
Saturday Ops Shift End

TITLE: 05/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1 is still locked and has been Observing for almost 43 hours!

SUS Violin update:
After the calibration sweep I increased the magnitude of the gain on EX mode9 again, from -3 to -6.
I kept FM1 and FM10 selected. It's been working all day with gain values between -0.1 and -6.
Basically, all day I was incrementally increasing this gain: I started at -0.01 and increased it exponentially until I hit what I thought was the drive threshold of 1100-1200 counts, and held there until the calibration sweep.

Dropped into Commissioning to adjust the squeeze angle.
22:43 UTC back to OBSERVING.

LOG:
No log

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:13, Saturday 04 May 2024 (77613)
OPS Eve Shift Start

TITLE: 05/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 19mph 5min avg
    Primary useism: 0.09 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING (42 hr 51 min lock!)

Nothing else of note

H1 CAL
anthony.sanchez@LIGO.ORG - posted 14:58, Saturday 04 May 2024 (77612)
Calibration Sweep

Calibration Sweep notes:
I waited until the drive actuations for the SUS EX mode9 violin were below 800 counts before running the calibration sweep, to hopefully minimize the lockloss potential.
Ran the Following at 20:40 UTC ~GPS: 1398890536:
pydarm measure --run-headless bb 

notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240504T204058Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240504T204058Z.xml saved
diag> quit
EXIT KERNEL

2024-05-04 13:46:08,970 bb measurement complete.
2024-05-04 13:46:08,970 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240504T204058Z.xml
2024-05-04 13:46:08,970 all measurements complete.

Then the following was run:
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime

Start:
GPS: 1398890967.961527
Stop:
GPS: 1398892259.616687

2024-05-04 21:10:41,542 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240504T204910Z.hdf5
2024-05-04 21:10:41,550 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240504T204910Z.hdf5
2024-05-04 21:10:41,555 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240504T204910Z.hdf5
2024-05-04 21:10:41,560 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240504T204910Z.hdf5
2024-05-04 21:10:41,565 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240504T204910Z.hdf5

Images attached to this report
H1 General (CAL, SUS)
anthony.sanchez@LIGO.ORG - posted 13:28, Saturday 04 May 2024 (77611)
Saturday Mid Ops Shift Report.

TITLE: 05/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 20mph Gusts, 17mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1 is still locked after 40 hours, and still OBSERVING!

SUS Violin update:
EX mode9 is still elevated but is coming down. I have been adjusting the gain H1:SUS-ETMX_L2_DAMP_MODE9_GAIN to increase the rate at which it declines, while also trying to keep the drive actuation (H1:SUS-ETMX_L2_DAMP_MODE9_OUTPUT) below 1200 counts but above 800, to maximize the rate at which we damp the violin mode.
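As a rough illustration of this workflow (not the actual procedure used), here is a sketch using pyepics that reads the drive output and nudges the damping gain so the drive stays roughly in the 800-1200 count window; the channel names come from above, but the step size, polling cadence, and limits are assumptions:

# Rough sketch only: assumes pyepics on a machine with EPICS access;
# step size, polling interval, and count limits are illustrative.
import time
from epics import caget, caput

GAIN_CH = 'H1:SUS-ETMX_L2_DAMP_MODE9_GAIN'
DRIVE_CH = 'H1:SUS-ETMX_L2_DAMP_MODE9_OUTPUT'

for _ in range(60):                    # poll for ~10 minutes
    drive = abs(caget(DRIVE_CH))
    gain = caget(GAIN_CH)
    if drive < 800:
        caput(GAIN_CH, gain * 1.2)     # gain is negative: this damps harder
    elif drive > 1200:
        caput(GAIN_CH, gain / 1.2)     # back off before the drive saturates
    time.sleep(10)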

CAL Sweeps:
I have requested (from Virgo & LLO)  that the Saturday Calibration time happen at 20:30 UTC to allow time for the Violin to damp more before calibration sweeps. All sites have agreed to 20:30 UTC Commissioning time.


 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:18, Saturday 04 May 2024 (77610)
Sat CP1 Fill

Sat May 04 10:12:16 2024 INFO: Fill completed in 12min 12secs

Images attached to this report
H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 08:35, Saturday 04 May 2024 - last comment - 18:35, Sunday 05 May 2024(77608)
Violin damping ETMX mode 09

Tony, Rahul

ETMX09 was rung up (even though Ibrahim had set the gains off last night), and Tony was fighting the Guardian early this morning trying to damp this mode. We were out of Observing (for 15 mins while we were figuring it out) and struggling to get back in (violins-too-high message). Hence, we first set the nominal gain values in lscparams to zero, then set the violin damping guardian to Damp_Violins_Full_Power, and then applied a fraction of the nominal gain. This seems to be slowly damping the mode.

Tony was then able to move back to Observing.

Will keep a watch over this mode (and others if needed) until we fully damp it.

Comments related to this report
rahul.kumar@LIGO.ORG - 18:35, Sunday 05 May 2024 (77634)SUS

I have updated the lscparams and loaded the Violin Guardian with the new gain settings for ETMX mode 09 as given below,

FM1+FM10 Gain = -6.0, Max gain = -8.0

H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 08:05, Saturday 04 May 2024 - last comment - 08:36, Saturday 04 May 2024(77607)
Saturday Ops Day Shift Start


TITLE: 05/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

H1 has been locked and OBSERVING all night.
But the violin ETMX mode 9 has been on the rise for the past 34 hours, and is now very high!
I will be trying to damp this mode first thing!

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 08:36, Saturday 04 May 2024 (77609)

Dropped to COMMISSIONING while trying to find the right setting to damp ETMX mode 9.
While I was trying to turn the gain on, the Violin guardian kept turning it off a second later, so I had to take the VIOLIN guardian to DAMP_ON_SIMPLE (which dropped us from Observing).
To resolve this, Rahul had me edit the lscparams file, find the gain setting for that suspension and mode, and set it to 0.
I then reloaded the Violin guardian, which allowed me to take it to DAMP_VIOLINS_FULL_POWER and thus return to OBSERVING.
The current settings for ETMX mode 9 have the violin mode going down.

EXm9: FM1, FM10  Gain: -1

 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:07, Saturday 04 May 2024 (77606)
OPS Eve Shift Summary

TITLE: 05/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING (26hr 46 min lock)

Violin Ring-Up:

From 04:40 UTC to 05:14 UTC, went in and out of Observing to try to damp violin ETMX mode 9. The nominal gain did not work, nor did testing different gains (and turning on the angle filters). I also checked the wiki/alog, but to no avail (I initially went to the violins wiki page and Rahul's ppt, and went to damp simple when the Guardian was fighting me with gain issues). This violin has been growing slowly since the lock acquisition 26 hrs and 39 minutes ago (screenshot). During this time, the violin count magnitude increased sevenfold. Pinged Rahul and expecting help with it tomorrow.

SQZ Unlock:

The squeezer unlocked at 05:20 UTC but relocked at 05:29 UTC. I did have to play around with guardian states because Guardian notified me that the SQZ ISS had saturated and pointed me to alog 70050 from May 2023. As I was playing around with the states, the SQZ relocked automatically. The SQZ lockloss was potentially caused by a time-correlated local earthquake that showed up very sharply on the EQ peak_outmon but was not high enough in magnitude to put us into EQ mode.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:11 SAF LASER HAZARD LVEA YES LVEA is LASER HAZARD 00:09
16:21 PCAL Francisco PCAL Lab Local Upgrading the PCAL Lab 19:39
16:31 Optic Terry Optics lab Local 2nd Harmonic generator work 19:29
18:34 TCS Camilla & Jason Optics lab Local Turning on the CO2 laser 20:15
20:39 SQZ Terry & Co Optics Lab Local 2nd harmonic Gen 23:39
20:58 PCAL Francisco PCAL Lab Local Setting up R&D on PCAL Lab system 21:07
21:26 FAC Richard PCAL Lab Local Plugging in & testing the new phone 21:46
22:02 PCAL Francisco PCAL lab Local Testing PCAL KVM 01:02
23:13 VAC Janos & Jordan Mid Y N Disassembly of CP3 23:43
Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:02, Friday 03 May 2024 (77605)
OPS Eve Midshift Update

IFO is still in NLN and OBSERVING (22 hr 40 min lock) at 156 MPc

Nothing else of note

H1 General
anthony.sanchez@LIGO.ORG - posted 16:15, Friday 03 May 2024 (77604)
Friday Ops day Shift End

TITLE: 05/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1 was in Observing the entire length of my shift. 
H1 has been locked and Observing for almost 19 hours.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:11 SAF LASER HAZARD LVEA YES LVEA is LASER HAZARD 00:09
15:48 FAC Karen Optics Lab No Technical Cleaning 16:09
16:14 VAC Janos Mid Y N Pump down Vac Jacket for N2 16:26
16:21 PCAL Francisco PCAL Lab Local Upgrading the PCAL Lab 19:39
16:27 VAC Janos & Jordan FTCE Receiving N Checking Vac Equipment 16:38
16:31 Optic Terry Optics lab Local 2nd Harmonic generator work 19:29
16:53 FAC Eric FTCE N Working on Door latch to FTCE 17:53
18:34 TCS Camilla & Jason Optics lab Local Turning on the CO2 laser 20:15
20:39 SQZ Terry & Co Optics Lab Local 2nd harmonic Gen 23:39
20:58 PCAL Francisco PCAL Lab Local Setting up R&D on PCAL Lab system 21:07
21:26 FAC Richard PCAL Lab Local Plugging in & testing the new phone 21:46
22:02 PCAL Francisco PCAL lab Local Testing PCAL KVM 01:02
23:13 VAC Janos & Jordan Mid Y N Disassembly of CP3 23:43
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:05, Friday 03 May 2024 (77603)
OPS Eve Shift Start

TITLE: 05/03 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 10mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING (18hr 45 min lock)

Nothing else of note

H1 OpsInfo
jenne.driggers@LIGO.ORG - posted 15:57, Friday 03 May 2024 (77602)
L1 trace on DARM FOM updated

[Anamaria, Tony, Jenne]

Anamaria has updated the L1 trace in our DARM FOM, so we can more easily see the difference between the sites. Tony made sure it's on the front wall TV via the launcher.

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:23, Friday 03 May 2024 (77601)
sensitivity comparison

Attached is a comparison of the DARM spectrum from our good time on April 11th (164 Mpc) with now, after we have mostly recovered from our OFI incident (161 Mpc). Our sensitivity below 25 Hz is slightly better than before, and above 1 kHz the squeezing is a bit worse.

There is a bruco running which will appear here soon: https://ldas-jobs.ligo.caltech.edu/~sheila.dwyer/brucos/CLEAN_1398770931/
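For reference, a rough sketch of how a comparison like this can be reproduced offline (this is not the script used for the attached plot); it assumes gwpy with data access, uses H1:GDS-CALIB_STRAIN as the DARM proxy, and the GPS stretches are illustrative (the second start time is simply the bruco time above):

# Illustrative sketch only: gwpy and data access assumed; channel choice
# and GPS stretches are assumptions, not taken from this alog.
from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

CHAN = 'H1:GDS-CALIB_STRAIN'
ref = TimeSeries.get(CHAN, 1396900000, 1396900600)    # placeholder April 11th stretch
new = TimeSeries.get(CHAN, 1398770931, 1398771531)    # stretch starting at the bruco time

ref_asd = ref.asd(fftlength=8, overlap=4)
new_asd = new.asd(fftlength=8, overlap=4)

plt.loglog(ref_asd.frequencies.value, ref_asd.value, label='April 11 reference')
plt.loglog(new_asd.frequencies.value, new_asd.value, label='current')
plt.xlabel('Frequency [Hz]')
plt.ylabel('ASD [1/sqrt(Hz)]')
plt.legend()
plt.savefig('darm_comparison.png')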

 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 14:22, Friday 03 May 2024 (77599)
Friday Ops Mid Shift report.

TITLE: 05/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 7mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1's current lock has just hit 17 hours of Observing.
Wind speed is ramping up.
No plans to drop out of Observing.

 

H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:52, Thursday 02 May 2024 - last comment - 16:10, Friday 03 May 2024(77583)
PRCL gain too high in DRMI on REFL 1F, manually adjusted to get IFO locked

We had several locklosses from DRMI in the last hour.  PRCL was oscillating at just above 100 Hz because its gain was too high.  I reduced the PRCL1 gain from 8 to 4.  I then remeasured once DRMI had transitioned to 3F; the gain was now too low by a factor of 2, so I set it back to 8.

I won't make this change in the guardian, because I don't know if this is consistent or not.
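For the record, a minimal sketch of what such a manual gain change looks like from a Python session, assuming pyepics; the exact gain channel name is an assumption (not confirmed in this alog):

# Sketch only: pyepics assumed; H1:LSC-PRCL1_GAIN is an assumed channel name.
from epics import caget, caput

GAIN_CH = 'H1:LSC-PRCL1_GAIN'
print('current PRCL1 gain:', caget(GAIN_CH))
caput(GAIN_CH, 4.0, wait=True)     # halve the gain while acquiring DRMI on 1F
# ...after transitioning to 3F signals, remeasure and restore if needed:
# caput(GAIN_CH, 8.0, wait=True)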

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 16:10, Friday 03 May 2024 (77600)

My notes on this PRCL motion before locklosses:
I have found multiple other times where PRCL has had rapid motion like that seen in the above alog.  Screenshots are included below.
This is just in ISC state 101 so far. 
But it does seem like this started on the 23rd, as I have not been able to find a time when this happened before April 23rd.
 


List of Locklosses that have happened in ISC state 101
    ID     GPS     UTC     guardian state     state duration     tags     analysis status
0     1398722295     1398722295.375     2024-05-02 21:57:57.375000 UTC     101             fail
1     1398342613     1398342613.1875     2024-04-28 12:29:55.187500 UTC     101         INITIAL_ALIGNMENT     fail
2     1397959320     1397959320.25     2024-04-24 02:01:42.250000 UTC     101     0:00:52     MAINTENANCE EARTHQUAKE INITIAL_ALIGNMENT     analyzed [0.29.1]
3     1397958844     1397958844.0     2024-04-24 01:53:46.000000 UTC     101         MAINTENANCE EARTHQUAKE     fail
*4     1397918529     1397918529.375     2024-04-23 14:41:51.375000 UTC     101             analyzing     -----------Good Example
*5     1397915450     1397915450.0     2024-04-23 13:50:32.000000 UTC     101             analyzing   -----------Good Example
6     1397490602     1397490602.0625     2024-04-18 15:49:44.062500 UTC     101     0:00:12         analyzed [0.29.1]
7     1397351257     1397351257.566406     2024-04-17 01:07:19.566406 UTC     101         WINDY OMC_DCPD REFINED     fail
8     1397349619     1397349619.261719     2024-04-17 00:40:01.261719 UTC     101         REFINED WINDY INITIAL_ALIGNMENT OMC_DCPD     fail
9     1397333742     1397333742.512695     2024-04-16 20:15:24.512695 UTC     101     0:00:33     WINDY REFINED     analyzed [0.29.1]
10     1397226213     1397226213.3125     2024-04-15 14:23:15.312500 UTC     101         INITIAL_ALIGNMENT     fail
11     1396929654     1396929654.377441     2024-04-12 04:00:36.377441 UTC     101         REFINED INITIAL_ALIGNMENT     fail
12     1396880628     1396880627.856934     2024-04-11 14:23:29.856934 UTC     101     0:00:11     REFINED     analyzed [0.29.1]
13     1396751608     1396751608.208984     2024-04-10 02:33:10.208984 UTC     101         REFINED     analyzing
14     1396370541     1396370540.857422     2024-04-05 16:42:02.857422 UTC     101         REFINED     fail
15     1396368947     1396368941.705078     2024-04-05 16:15:23.705078 UTC     101         REFINED OMC_DCPD     fail
16     1396363292     1396363292.875     2024-04-05 14:41:14.875000 UTC     101         FSS_OSCILLATION BOARD_SAT EARTHQUAKE INITIAL_ALIGNMENT     fail
17     1396184286     1396184286.200195     2024-04-03 12:57:48.200195 UTC     101

 

 


Make a histogram of the following data after excluding DAQ restarts (a rough sketch of this step follows the list below):

['H1:GRD-ISC_LOCK_STATE_N.mean,m-trend']
<H1:GRD-ISC_LOCK_STATE_N.mean (0.0166667Hz, MTREND, FLOAT64)>
lines read from servers: 47066
Number of time values where Query state H1:GRD-ISC_LOCK_STATE_N == state 101 is true: 277
Number of Unique times the channel was in such a state: 149
Length of SCstart: 149 Length of SCstop: 149 length of duration 149

0 1396042620 1396042620 0
1 1396042920 1396042920 300
2 1396095060 1396095060 52140
3 1396098000 1396098000 2940
4 1396133820 1396133820 35820
5 1396165320 1396165320 31500
6 1396184700 1396184700 19380
7 1396200540 1396200600 60
8 1396214760 1396214760 14220
9 1396218300 1396218300 3540
10 1396246980 1396246980 28680
11 1396358340 1396358340 111360
12 1396359600 1396360260 660
13 1396361700 1396361700 2100
14 1396362120 1396363200 1080
15 1396367940 1396367940 5820
16 1396368180 1396368180 240
17 1396368420 1396368840 420
18 1396370160 1396370160 1740
19 1396370460 1396370460 300
20 1396385340 1396385340 14880
21 1396451220 1396451220 65880
22 1396494480 1396494480 43260
23 1396548420 1396548600 180
24 1396549140 1396549320 180
25 1396686480 1396686480 137340
26 1396686780 1396686780 300
27 1396728060 1396728060 41280
28 1396746360 1396746360 18300
29 1396747140 1396747140 780
30 1396749420 1396749420 2280
31 1396752120 1396752120 2700
32 1396769520 1396769520 17400
33 1396773900 1396773900 4380
34 1396774140 1396774140 240
35 1396790760 1396790760 16620
36 1396794540 1396794540 3780
37 1396814460 1396814460 19920
38 1396814760 1396814820 60
39 1396930080 1396930140 60
40 1396930380 1396930440 60
41 1396998480 1396998480 68100
42 1397047080 1397047260 180
43 1397047500 1397047680 180
44 1397144280 1397144280 96780
45 1397146080 1397146080 1800
46 1397146620 1397146740 120
47 1397173920 1397173920 27300
48 1397187300 1397187300 13380
49 1397190360 1397190600 240
50 1397225160 1397225160 34800
51 1397226120 1397226120 960
52 1397229900 1397229900 3780
53 1397292240 1397292300 60
54 1397292600 1397292600 360
55 1397335440 1397335560 120
56 1397346900 1397346900 11460
57 1397354820 1397354820 7920
58 1397357220 1397357220 2400
59 1397370960 1397370960 13740
60 1397458140 1397458140 87180
61 1397458440 1397458620 180
62 1397466240 1397466240 7800
63 1397471460 1397471460 5220
64 1397471760 1397471760 300
65 1397494320 1397494320 22560
66 1397524440 1397524440 30120
67 1397524740 1397524800 60
68 1397526420 1397526420 1680
69 1397538420 1397538600 180
70 1397538900 1397538900 480
71 1397599800 1397599800 60900
72 1397600520 1397600700 180
73 1397601000 1397601060 60
74 1397646900 1397646900 45900
75 1397647140 1397647200 60
76 1397647740 1397647800 60
77 1397662560 1397662560 14820
78 1397663220 1397663220 660
79 1397690400 1397690400 27180
80 1397715900 1397715900 25500
81 1397716260 1397716260 360
82 1397716740 1397716860 120
83 1397718120 1397718120 1380
84 1397720280 1397720280 2160
85 1397720520 1397720520 240
86 1397726160 1397726160 5640
87 1397727180 1397727180 1020
88 1397763780 1397763780 36600
89 1397766480 1397766540 60
90 1397848140 1397848140 81660
91 1397848440 1397848500 60
92 1397852880 1397852880 4440
93 1397853660 1397853660 780
94 1397867760 1397867760 14100
95 1397868720 1397868780 60
96 1397912580 1397912580 43860
97 1397915820 1397915820 3240
98 1397916060 1397916060 240
99 1397918940 1397919000 60
100 1397937540 1397937540 18600
101 1397952300 1397952480 180
102 1397952960 1397952960 660
103 1397956860 1397957040 180
104 1397957280 1397957280 420
105 1397957880 1397957880 600
106 1397958600 1397958780 180
107 1397959680 1397959860 180
108 1397960220 1397960220 540
109 1397976900 1397976900 16680
110 1397977140 1397977140 240
111 1398037920 1398037980 60
112 1398038280 1398038280 360
113 1398081000 1398081000 42720
114 1398194400 1398194400 113400
115 1398195300 1398195300 900
116 1398219480 1398219480 24180
117 1398219780 1398219900 120
118 1398252540 1398252540 32760
119 1398275280 1398275280 22740
120 1398276660 1398276660 1380
121 1398341760 1398341760 65100
122 1398359160 1398359340 180
123 1398360060 1398360240 180
124 1398361020 1398361020 960
125 1398381240 1398381420 180
126 1398382260 1398382440 180
127 1398553560 1398553560 171300
128 1398554100 1398554100 540
129 1398574020 1398574080 60
130 1398574560 1398574740 180
131 1398574980 1398574980 420
132 1398607740 1398607740 32760
133 1398619380 1398619380 11640
134 1398619680 1398619800 120
135 1398620340 1398620460 120
136 1398629940 1398630120 180
137 1398630360 1398630360 420
138 1398661560 1398661740 180
139 1398662160 1398662160 600
140 1398664260 1398664320 60
141 1398708420 1398708420 44160
142 1398709140 1398709140 720
143 1398722220 1398722220 13080
144 1398723240 1398723240 1020
145 1398724620 1398724620 1380
146 1398733740 1398733800 60
147 1398734040 1398734220 180
148 1398734580 1398734700 120
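Below is a rough sketch of that histogram step, assuming the (index, start, stop, duration) rows above have been dumped to a text file; how DAQ restarts are identified is an assumption here (zero-length entries are simply dropped):

# Sketch only: numpy/matplotlib assumed; 'state101_durations.txt' is a
# hypothetical dump of the four-column list above, and dropping zero-length
# entries is a stand-in for a proper DAQ-restart exclusion.
import numpy as np
import matplotlib.pyplot as plt

rows = np.loadtxt('state101_durations.txt')
durations = rows[:, 3]                      # fourth column of the list above
durations = durations[durations > 0]        # crude DAQ-restart / zero-length cut

plt.hist(durations / 60.0, bins=50)
plt.xlabel('Duration [min]')
plt.ylabel('Count')
plt.title('ISC_LOCK state 101 durations')
plt.savefig('state101_duration_hist.png')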

 

 Looking for times where PRCL is above 1000:
['H1:LSC-PRCL_IN1_DQ.mean,m-trend']
<H1:LSC-PRCL_IN1_DQ.mean (0.0166667Hz, MTREND, FLOAT64)>
lines read from servers: 47066
Number of time values where Query state H1:LSC-PRCL_IN1_DQ > state 1000 is true: 11
Number of Unique times the channel was in such a state: 3
Length of SCstart: 3 Length of SCstop: 4 length of duration 4
1396110540 1396110660 120 120
1396117620 1396117800 180 180
1397322360 1397919840 597480
Counting states completed
Times ISC_Lock was in Aquire_DRMI_1F [101]
Start: Stop: Duration: Min: Max:
0 1396110540 1396110660 120
1 1396117620 1396117800 180
2 1397322360 1397322480 120


Maybe use the channel H1:LSC-PRCL_TRIG_MON to help filter out times.
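A rough sketch of that kind of query using gwpy minute trends is below; treating a nonzero H1:LSC-PRCL_TRIG_MON as "PRCL triggered" is an assumption, as are the exact trend channel names:

# Sketch only: assumes gwpy with NDS2 access to minute trends; the meaning
# of PRCL_TRIG_MON (nonzero == PRCL loop triggered) is an assumption.
from gwpy.timeseries import TimeSeries

start, end = 1396042620, 1398734700    # span covered by the list above

prcl = TimeSeries.get('H1:LSC-PRCL_IN1_DQ.mean,m-trend', start, end)
trig = TimeSeries.get('H1:LSC-PRCL_TRIG_MON.mean,m-trend', start, end)

# minutes where PRCL exceeded 1000 counts while the loop was triggered
mask = (prcl.value > 1000) & (trig.value > 0)
for t in prcl.times.value[mask]:
    print(int(t))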

 

 

Images attached to this comment