Reports until 00:55, Sunday 14 June 2015
LHO General
thomas.shaffer@LIGO.ORG - posted 00:55, Sunday 14 June 2015 - last comment - 15:14, Wednesday 17 June 2015(19132)
Ops Report

(times in PST)

0:04 - Locked @ LSC_FF, started PCAL swept line measurement

0:29 - PCAL measurement done, started DARM OLGTF measurement

0:50 - Both measurements done and no more planned for now; Intention Bit set to Undisturbed

Comments related to this report
kiwamu.izumi@LIGO.ORG - 15:14, Wednesday 17 June 2015 (19200)CAL

The transfer functions that TJ measured for us have been renamed to more descriptive names, as follows:

  • 2015-06-14_H1_DARM_OLGTF_LHOaLOG19132_ETMYL3LPOFF_17W.xml
  • 2015-06-14_H1PCALEY_2_DARM_LHOaLOG19132.xml

According to trend data, both measurements appear to have been taken at 17 W. The first file currently resides in aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs. The other one is in aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS.

By the way, according to what we had in the calibration SVN, TJ must have accidentally updated Jeff's DARM OL and Pcal Y sweep measurements with the above latest measurements. I restored Jeff's two measurements to their previous revisions in the SVN, so we now have both Jeff's and TJ's measurements checked in.

H1 DetChar (DetChar)
paul.altin@LIGO.ORG - posted 00:24, Sunday 14 June 2015 - last comment - 08:08, Wednesday 19 August 2015(19131)
DQ shift summary: LHO 1117843216 - 1118102415 (June 9 - 11)

There were eight separate locks during this shift, with typical inspiral ranges of 60 - 70 Mpc. Total observation time was 28.2 hours, with the longest continuous stretch 06:15 - 20:00 UTC on June 11. Lock losses were typically deliberate or due to maintenance activities.

The following features were investigated:

1 – Very loud (SNR > 200) glitches
Omicron picks up roughly 5-10 of these per day, coinciding with drops in range to 10 - 30 Mpc. They were not caught by Hveto and appear to all have a common origin due to their characteristic OmegaScan appearance and PCAT classification. Peak frequencies vary typically between 100 - 300 Hz (some up to 1 kHz), but two lines at 183.5 and 225.34 Hz are particularly strong. These glitches were previously thought to be due to beam tube cleaning, and this is supported by the coincidence of cleaning activities and glitches on June 11 at 16:30 UTC. However, they are also occurring in the middle of the night, when there should be no beam cleaning going on. Tentative conclusion: they all have a common origin that is somehow exacerbated by the cleaning team's activities.

2 – Quasi-periodic 60 Hz glitch every 75 min
Omicron picks up an SNR ~ 20 - 30 glitch at 60Hz which seems to happen periodically every 70 - 80 min. Hveto finds that SUS-ETMY_L2_WIT_L_DQ is an extremely efficient (use percentage 80-100%) veto, and that SUS-ETMY_L2_WIT_P_DQ and PEM-EY-MAG-EBAY-SEIRACK-X_DQ are also correlated. This effect is discussed in an alog post from June 6 (link): "the end-Y magnetometers witness EM glitches once every 75 minutes VERY strongly and that these couple into DARM".  Due to their regular appearance, it should be possible to predict a good time to visit EY to search for a cause. Robert Schofield is investigating.
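Since the glitch is quasi-periodic, a rough schedule of upcoming glitch windows can be drawn up from any recently observed glitch time and the nominal period, to help time a visit to EY. A minimal sketch, in which the reference GPS time, period, and jitter are placeholders rather than measured values:

# Sketch: predict windows for the ~75 min periodic 60 Hz glitch so a trip to
# EY can be timed to coincide with one. All numbers below are placeholders.
REF_GPS = 1118102000          # GPS time of a recently observed glitch (placeholder)
PERIOD = 75 * 60              # nominal period in seconds (~75 min)
JITTER = 5 * 60               # allow +/- 5 min of drift around each prediction (assumed)

def next_glitch_windows(now_gps, n=5):
    """Return (start, stop) GPS windows for the next n expected glitches."""
    k = int((now_gps - REF_GPS) // PERIOD) + 1    # whole periods elapsed so far
    return [(REF_GPS + i * PERIOD - JITTER, REF_GPS + i * PERIOD + JITTER)
            for i in range(k, k + n)]

for start, stop in next_glitch_windows(now_gps=1118110000):
    print(f"expect a glitch between GPS {start} and {stop}")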

3 – Non-stationary noise at 20 - 30Hz
This is visible as a cluster of SNR 10 - 30 glitches at 20 - 30 Hz, which became denser on June 11 and started showing up as short vertical lines in the spectrograms as well. The glitches are not caught by Hveto. Interestingly, they were completely absent from the first lock stretch on June 10, from 00:00 – 05:00 UTC. Daniel Hoak has concluded that this is scattering noise, likely from alignment drives sent to the OMC suspension, and plans to reduce the OMC alignment gain by a factor of two to stop this (link to alog).

4 – Broadband spectrogram lines at 310 and 340 Hz
A pair of lines at 310 and 340 Hz are visible in the normalized spectrograms, strongest at the beginning of a lock and decaying over a timescale of ~1 hr as the locked interferometer settles into the nominal alignment state. According to Robert Schofield, these are resonances of the optic support on the PSL periscope. The coupling to DARM changes as the alignment drifts in time (the peaks decay because the alignment was tuned to minimize them with the IFO in its settled state). Alogs about this: link, link, link.

There are lines of Omicron triggers at these frequencies too, which interestingly are weakest when the spectrogram lines are strongest (probably due to a 'whitening' effect that washes them out when the surrounding noise rises). Robert suspects that the glitches are produced by variations in alignment of the interferometer (changes in coupling to the interferometer making the peaks suddenly bigger or smaller).

5 – Wandering 430 Hz line
Visible in the spectrograms as a thin and noisy line, seen to wander slightly in Fscan. It weakened over the course of the long (14h) lock on June 11. Origin unknown.

6 – h(t) calibration
Especially noisy throughout the shift, with the ASD ratio showing unusually high variance. May be related to odd broadband behavior visible in the spectrogram. Jeff Kissel and the calibration group report that nothing changed in the GDS calibration at this time. Cause unknown.

Attached PDF shows some relevant plots.

More details can be found at the DQ shift wiki page.

Non-image files attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 14:35, Wednesday 17 June 2015 (19197)

I believe the 430 Hz wandering line is the same line Marissa found at 415 Hz (alog18796), which, as Gabriele observed, turns out to show coherence with SRCL/PRCL.
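For anyone reproducing this offline, the check amounts to computing the coherence between DARM and the SRCL/PRCL control signals around the line frequency. A minimal sketch using scipy; the sample rate, segment length, and the way the time series are obtained are assumptions, not details of the original analysis:

# Sketch: coherence between DARM and an auxiliary channel near the ~415-430 Hz
# line. The arrays darm and aux are assumed to be time series sampled at fs Hz
# (fetched e.g. via NDS/gwpy); fs and nperseg below are illustrative.
import numpy as np
from scipy.signal import coherence

fs = 4096            # assumed sample rate [Hz]
seg = 16 * fs        # 16 s segments -> ~0.06 Hz frequency resolution

def line_coherence(darm, aux, f_lo=400.0, f_hi=440.0):
    """Return (frequency, value) of the peak coherence in [f_lo, f_hi] Hz."""
    f, coh = coherence(darm, aux, fs=fs, nperseg=seg)
    band = (f >= f_lo) & (f <= f_hi)
    return f[band][np.argmax(coh[band])], coh[band].max()

# usage (with darm, srcl, prcl already loaded):
#   print(line_coherence(darm, srcl), line_coherence(darm, prcl))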

Images attached to this comment
edward.daw@LIGO.ORG - 08:08, Wednesday 19 August 2015 (20676)
Ross Kennedy, my Ph.D. student, implemented tracking of this line over 800 seconds using the iWave line tracker. Overlaid on a spectrogram, the tracker output shows quite good agreement as the frequency evolves. We're working on automating this tool to avoid hand-tuning parameters of the line tracker. It would also be interesting to track both this line and PSL behaviour at the same time, to check for correlation.

In the attached document there are two spectrograms - in each case the black overlay is the frequency estimate from iWave. 
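The iWave tracker itself is not reproduced here, but the same kind of frequency-versus-time estimate can be roughed out by following the spectrogram peak near the line. A minimal sketch, which is not iWave and does not use Ross's parameters:

# Sketch: crude line tracking by following the spectrogram peak near an
# initial frequency guess. This is NOT iWave; it only illustrates the sort
# of frequency-vs-time estimate that the black overlays in the plots show.
import numpy as np
from scipy.signal import spectrogram

def track_line(x, fs, f0=430.0, half_band=5.0, nperseg=4096):
    """Return (times, freqs): peak frequency within +/- half_band Hz of the track."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    freqs, f_track = [], f0
    for col in sxx.T:                              # one column per time bin
        band = (f > f_track - half_band) & (f < f_track + half_band)
        f_track = f[band][np.argmax(col[band])]    # follow the peak as it wanders
        freqs.append(f_track)
    return t, np.array(freqs)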

Non-image files attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 00:03, Sunday 14 June 2015 (19130)
EVE shift summary

Times UTC

3:12 Locked LSC_FF @ 23W by request of Evan. Evan running measurements.

4:00 Lockloss.

6:00 Locked LSC_FF.  Still at 23W and Evan and Dan still taking measurements.

6:55 Back to LSC_FF @ 16W.  Starting remaining PCal and OLGTF measurements.

7:00 Handing off to TJ to bring ER7 home.

H1 General
travis.sadecki@LIGO.ORG - posted 20:01, Saturday 13 June 2015 (19126)
EVE mid-shift update

Times UTC

23:00 Still locked LSC_FF.

23:05 Jeff K taking OLGTF measurements and working on OMC calibration.

23:23 Jeff K taking PCal measurements.

0:05 Lockloss.  Appears to be due to EQ in Canada.

0:45 Lockloss on the way up at BOUNCE_VIOLIN_MODE_DAMPING. 

0:51 GRB/SN alarm.  Unfortunately, not locked at the time.

1:22 Paused locking sequence at DC_READOUT_TRANSITION so Dan can take some OMC measurements.

3:00 Dan done with measurements.

H1 CAL (AOS, CAL, DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 17:24, Saturday 13 June 2015 - last comment - 13:15, Sunday 14 June 2015(19128)
DARM OLGTF, PCALX and half of PCALY to DARM Sweeps Complete
J. Kissel, T. Sadecki, D. Hoak, E. Merilh

As a last-ditch effort to be able to reconcile the calibration for the rest of the run, given the drastic change to the ETMY ESD, I've completed a DARM OLGTF and also tuned and completed a PCAL EX to DARM transfer function. I got halfway through the same transfer function using PCAL EY, but the lock broke from some sort of seismic disturbance. The interferometer was at 17 [W] requested input power for all of the measurements below. I attach the relevant digital parameters (sadly the list is growing, as so many things are changing!).

Analysis to come, but the measurements have been committed to the CalSVN repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs
2015-06-13_H1_DARM_OLGTF_LHOaLOG19128_ETMYL3LPOFF.xml

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS
2015-06-13_H1PCALEX_2_DARM.xml
2015-06-13_H1PCALEX_2_DARM_stronger.xml

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS
2015-06-13_H1PCALEY_2_DARM_stronger.xml

For the PCAL sweeps, I've modified the following settings from what Sudarshan had set up (2015-06-12_pcal_sweep_X.xml in the same folder) such that (a) the measurement would complete in a reasonable amount of time, (b) we'd get coherence over the entire band of measurement, and (c) the frequency vector would match the DARM OLGTF (a rough sketch of generating such a frequency vector follows the list below):
- Changed the frequency range to 5 to 5000 [Hz]
- Changed the frequency vector from linear to logarithmic
- Changed the user-defined amplitude format to Envelope
- Increased the number of cycles to 25
- Increased the duration of a cycle to 1 [sec]
- Increased the drive strength in various frequency bands where there was still no coherence
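For reference, a log-spaced frequency vector of the kind described above (spanning 5 Hz to 5 kHz so the PCAL points line up with the DARM OLGTF) can be generated as in this sketch; the number of points is illustrative, and the real values live in the xml templates:

# Sketch: logarithmically spaced frequency vector spanning 5 Hz - 5 kHz of the
# sort used so the PCAL sweep points line up with the DARM OLGTF. The point
# count here is illustrative, not the value in the xml template.
import numpy as np

f_start, f_stop, n_points = 5.0, 5000.0, 50
freqs = np.logspace(np.log10(f_start), np.log10(f_stop), n_points)
print(freqs[0], freqs[-1], len(freqs))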
Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 13:15, Sunday 14 June 2015 (19138)

I took one more DARM OLG at 22.1 Watts; it is in the SVN at /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs/2015-06-14_H1_DARM_OLGTF_LHOaLOG191138_ETMYL3LPOFF.xml

H1 DetChar
greg.ogin@LIGO.ORG - posted 16:09, Saturday 13 June 2015 (19124)
LHO ER7 data quality shift Wednesday 6/3 to Friday 6/5

LHO ER7 data quality shift Wednesday 6/3 to Friday 6/5 (1117378816-1117584015). TJ Massinger was the shift mentor.

Things we noticed:

- Thursday the sensemon was incorrectly generating the inspiral range, which has since been fixed. The range was between 45 and 55 Mpc on Thursday.

- The calibration accuracy seemed to vary more widely than in previous days, with large departures (factor of >5) at the high-frequency (~6 kHz) end.

- We saw the 14Hz roll mode ringing down at the beginning of each lock, except for the 11:00-13:00 UTC segment Thursday, where a ring-down measurement was being performed. Nothing to see here.

- Some ugly wandering lines in CARM around 600Hz that also showed up through their 4th harmonic.

- A particularly glitchy segment just before 18:00 UTC Friday. We tracked down the loudest glitch, identified at 2.16 kHz (SNR 3,190). A time plot shows this was actually a series of high-frequency glitches riding on a much larger ~7.7 Hz blip. The high-frequency components also showed up in the PRM M2 suspension control signals. Some discussion in the aLog about this: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=18918

 

A link to the shift report: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20150603

H1 General
edmond.merilh@LIGO.ORG - posted 16:01, Saturday 13 June 2015 - last comment - 19:30, Saturday 13 June 2015(19123)
H1 Locked at LSC_FF

15:58PDT

Jeff K in the control room wants to take a DARM open loop gain measurement and correct the calibration. Will wait to go into science mode. Handing off to Travis.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:17, Saturday 13 June 2015 (19125)CAL, DetChar, ISC
The calibration is wrong at the moment; I think the OMC's automatic scaling during the handoff failed. Dan confirms that he changed the order of power scaling / gain matching yesterday.

On the phone with Sheila, I measured the DARM OLG to be a factor of 0.63 too low, so with her advice I've rescaled the sensing gain (in the LSC input matrix) by the inverse of that factor,
13.20723 * 1.5737 = 20.78447
then reconfirmed both that the DARM UGF is correct and that the DARM ASD on the wall matches the reference, and we're back in the reasonable range of ~55 [Mpc].
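In other words, the sensing-matrix element is multiplied by the inverse of the measured gain deficit. A one-line restatement of the arithmetic quoted above:

# The DARM OLG came out a factor of ~0.63 (= 1/1.5737) too low, so the sensing
# gain in the LSC input matrix is scaled up by 1.5737.
old_gain = 13.20723            # previous LSC input matrix element
scale = 1.5737                 # inverse of the measured gain deficit
new_gain = old_gain * scale    # ~20.78, the value quoted above
print(new_gain)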

I'm now taking the full DARM OLGTF sweep, and if that's successful, I'll get the PCAL sweep as well.

daniel.hoak@LIGO.ORG - 19:30, Saturday 13 June 2015 (19129)

Just to clarify - while the DARM OLG did change due to the OMC-READOUT_ERR_GAIN setting, this wasn't due to edits to the OMC_LOCK code.  It's not clear why the gain-matching calculation missed on this lock; it worked fine for subsequent locks.  Looks like a one-off error.

H1 AOS
robert.schofield@LIGO.ORG - posted 15:50, Saturday 13 June 2015 (19122)
1.5 hours of PEM injections

LLO was down at the beginning but came up during the injections. I did most of what I had hoped to do in the first round.

H1 General
edmond.merilh@LIGO.ORG - posted 15:19, Saturday 13 June 2015 (19121)
H1 is in Science Mode

22:09 UTC H1 is in science mode.

H1 General
edmond.merilh@LIGO.ORG - posted 13:46, Saturday 13 June 2015 - last comment - 13:49, Saturday 13 June 2015(19119)
Afternoon Looking Much Better

After much diddling with the PRs and SRs while swinging them and watching camera images and dataviewer, I finally came to my happy place with the alignments!

 

Robert Schofield is here and looking to do some PEM injections. He'll be going into the LVEA. There seems to be a strange disturbance in the Force that we can't explain (see attached screenshot).

I spoke to Joe Hanson at LLO. At 20:30UTC they were locked on RF and were looking to move forward to DC Readout. I informed him as to our status and the testing that Robert was about to do for the next 1.5 hours or so.

For now.....life is good!


Images attached to this report
Comments related to this report
edmond.merilh@LIGO.ORG - 13:49, Saturday 13 June 2015 (19120)

LLO showed up on DMT at ~ 20:48UTC

Robert into LVEA @ 20:47UTC

Jeff K into control room with a tour group @ 21:48

H1 General
edmond.merilh@LIGO.ORG - posted 11:19, Saturday 13 June 2015 (19118)
Morning Shift Summary - 1st three hours

IFO locking difficulties continue:

 

As TJ mentioned in an earlier post, there is an excitation showing at ETMX, with 5-6 testpoints occupied by some kind of activity. We don't know what this is about, and we really hope it isn't a contributing factor to our locking woes.

I'm considering killing them if no one owns them.


LHO General
thomas.shaffer@LIGO.ORG - posted 08:27, Saturday 13 June 2015 (19116)
Ops Report

Still no locking for my shift. I talked to Sheila for a bit, and it seems that when the ISC_LOCK guardian was getting stuck in LOCK_DRMI_1F asking you to adjust by hand, it was only in a PRMI configuration and SRM was misaligned somehow (perhaps related to what I was seeing during IA). I moved SRM around for a bit and brought it back to some old values, but still nothing.

Ed will have to finish up the mess I made for him.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 07:52, Saturday 13 June 2015 (19115)
CDS model and DAQ restart report, Friday 12th June 2015

model restarts logged for Fri 12/Jun/2015
2015_06_12 23:27 h1fw1*

* = unexpected restart

LHO General
thomas.shaffer@LIGO.ORG - posted 03:44, Saturday 13 June 2015 (19114)
Ops Report

I took over for Travis as the interferometer was locking DRMI; shortly after, I had to damp ETMY and ITMX bounce and roll modes. It then made it to BOUNCE_VIOLIN_MODE_DAMPING before it broke. After recovery, it got stuck on LOCK_DRMI_1F for a long time, much longer than usual (25 min is where I would give up). I have heard this could be from a bad alignment, so I did a quick initial alignment with no issues and then made it a few times to REFL_TRANS/RESONANCE before losing lock.

Again LOCK_DRMI_1F took a long time, so I did another initial alignment. This time I struggled to keep the ALS locked: it would all seem good, then one of them would catch a flurry of large modes and then recover back to a steady power above 1. When I limped through IA to SRC_ALIGN, Guardian said that it was locked when it was not. This happens sometimes and is normally an easy fix, but I couldn't make it work well at all; it was constantly moving around. I eventually got something that was round(ish) but pretty wavy and tried to move on. Made it to LOCK_DRMI_1F, and it asked that I adjust by hand, which also did not work.

That's the rundown of my shift so far, let's hope it improves.

H1 General
thomas.shaffer@LIGO.ORG - posted 00:47, Saturday 13 June 2015 (19113)
Excitation on H1SUSETMX left on

Looks like an excitation got left on for H1SUSETMX. I saw it on the CDS Overview when I arrived; there are 6 test points running on it right now. I don't see anything in the alog about this being here intentionally.

H1 GRD (CDS, GRD, ISC)
sheila.dwyer@LIGO.ORG - posted 20:04, Friday 12 June 2015 - last comment - 17:29, Thursday 18 June 2015(19108)
some locking difficulties today

We had four known reasons for having difficulty locking today; one is an unsolved mystery that might be hurting us more often than we realize.

I reloaded both ISC_DRMI and ISC_LOCK guardians today, to incorporate these changes.  

Good luck TJ!

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 09:45, Saturday 13 June 2015 (19117)

I don't think the log snippet included above shows the problem, but I found where in the log it does:

2015-06-13T05:36:35.76316 ISC_LOCK [LOWNOISE_ESD_ETMY.enter]
2015-06-13T05:36:35.78009 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_TRAMP => 0
2015-06-13T05:36:35.78025 ISC_LOCK [LOWNOISE_ESD_ETMY.main] Preparing ETMY for DARM actuation transition...
2015-06-13T05:36:36.03624 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_M0_LOCK_L => OFF: INPUT
2015-06-13T05:36:36.03791 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L_GAIN => 0.16
2015-06-13T05:36:36.03886 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 0
2015-06-13T05:36:36.04021 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L_SW1S => 20804
2015-06-13T05:36:36.29496 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L => ONLY ON: INPUT, FM2, FM3, FM5, FM6, FM7, FM8, OUTPUT, DECIMATION
2015-06-13T05:36:36.55096 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L2_LOCK_L => ONLY ON: INPUT, FM6, OUTPUT, DECIMATION
2015-06-13T05:36:36.55229 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_SW1S => 16388
2015-06-13T05:36:36.80324 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L => ONLY ON: INPUT, FM6, FM8, FM9, FM10, OUTPUT, DECIMATION
2015-06-13T05:36:37.05938 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L2_DRIVEALIGN_L2L => ONLY ON: INPUT, FM2, OUTPUT, DECIMATION
2015-06-13T05:36:37.31538 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_DRIVEALIGN_L2L => ONLY ON: INPUT, FM3, FM4, FM5, OUTPUT, DECIMATION
2015-06-13T05:36:37.31694 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_TRAMP => 10
2015-06-13T05:36:37.32341 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L1_LOCK_L_SW1 => 16
2015-06-13T05:36:37.57613 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L1_LOCK_L => OFF: FM1
2015-06-13T05:36:38.57792 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 0.7
2015-06-13T05:36:38.57846 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L3_LOCK_L_GAIN => 0.5
2015-06-13T05:36:49.58941 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L_SW1 => 16
2015-06-13T05:36:49.84053 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L => ON: FM1
2015-06-13T05:36:50.84245 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 1.25
2015-06-13T05:36:50.84585 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L3_LOCK_L_GAIN => 0
2015-06-13T05:36:50.84640 ISC_LOCK [LOWNOISE_ESD_ETMY.main] timer['ETMswap'] = 10.0
2015-06-13T05:36:50.85290 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_TRAMP => 0
2015-06-13T05:36:50.85341 ISC_LOCK [LOWNOISE_ESD_ETMY.main] Preparing ETMY for DARM actuation transition...
2015-06-13T05:36:51.10580 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_M0_LOCK_L => OFF: INPUT
2015-06-13T05:36:51.11470 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 0
2015-06-13T05:36:51.11670 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L_SW1S => 20804
2015-06-13T05:36:51.37015 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L => ONLY ON: INPUT, FM2, FM3, FM5, FM6, FM7, FM8, OUTPUT, DECIMATION
2015-06-13T05:36:51.62486 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L2_LOCK_L => ONLY ON: INPUT, FM6, OUTPUT, DECIMATION
2015-06-13T05:36:51.88017 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L => ONLY ON: INPUT, FM6, FM8, FM9, FM10, OUTPUT, DECIMATION
2015-06-13T05:36:52.13170 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L2_DRIVEALIGN_L2L => ONLY ON: INPUT, FM2, OUTPUT, DECIMATION
2015-06-13T05:36:52.38753 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_DRIVEALIGN_L2L => ONLY ON: INPUT, FM3, FM4, FM5, OUTPUT, DECIMATION
2015-06-13T05:36:52.39457 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_TRAMP => 10
2015-06-13T05:36:52.65332 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L1_LOCK_L => OFF: FM1
2015-06-13T05:36:53.65498 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 0.7
2015-06-13T05:36:53.65562 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L3_LOCK_L_GAIN => 0.5
2015-06-13T05:37:04.66682 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L_SW1 => 16
2015-06-13T05:37:04.91783 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L1_LOCK_L => ON: FM1
2015-06-13T05:37:05.91955 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMY_L3_LOCK_L_GAIN => 1.25
2015-06-13T05:37:05.92183 ISC_LOCK [LOWNOISE_ESD_ETMY.main] ezca: H1:SUS-ETMX_L3_LOCK_L_GAIN => 0
2015-06-13T05:37:05.92216 ISC_LOCK [LOWNOISE_ESD_ETMY.main] timer['ETMswap'] = 10.0
2015-06-13T05:37:05.94222 ISC_LOCK [LOWNOISE_ESD_ETMY.run] MC not locked

Based on the ezca and log output during LOWNOISE_ESD_ETMY.main, it does in fact look like main() was executed twice in a row.  That should never happen under any circumstances.  I'm investigating.

jameson.rollins@LIGO.ORG - 16:50, Saturday 13 June 2015 (19127)
sheila.dwyer@LIGO.ORG - 17:29, Thursday 18 June 2015 (19231)

I think there are potentially two different issues, one being what is shown in the original alog, where run should return true but the guardian state doesn't change, even though the current state is not the requested state.  We could re-write the guardians (or at least this state) to reduce the harm from this, but it still seems like a bug in the way the guardian is working.

On the other hand, the problem that Jamie pointed out is more serious.  For other reasons, I have been looking at histograms of how long the guardian spends in each state.  Some states should take the same amount of time to execute each time, but ANALOG_CARM, for example, has three possibilities: we often detect a lockloss in the first second of the state; if the state executes normally, it takes 18 seconds; but there were 5 times that it took 35 seconds because it repeated main().  GPS times of these events are:

1117983542.06250
1118037936.06250
1118148903.93750
1118294947.06250
1118295997.06250

I looked at the guardian log for the first three of these events, and indeed they are times when main() was repeated. These were mostly successful locks, so the bug isn't causing locklosses here, although it easily could.
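The state-duration check described above can be reproduced by parsing the timestamps of the .enter lines in the guardian log. A minimal sketch; the log-line format follows the excerpt above, but the file name and parsing details are illustrative rather than the script actually used:

# Sketch: tally how long ISC_LOCK spends in each state by parsing guardian
# log timestamps. Outliers (e.g. ANALOG_CARM taking ~35 s instead of ~18 s)
# flag visits where main() was repeated. File name and parsing are assumptions.
import re
from collections import defaultdict
from datetime import datetime

ENTER = re.compile(r'^(\S+) ISC_LOCK \[(\w+)\.enter\]')

def state_durations(lines):
    """Map state name -> list of seconds elapsed until the next state's enter."""
    durations = defaultdict(list)
    prev_state, prev_time = None, None
    for line in lines:
        m = ENTER.match(line)
        if not m:
            continue
        t = datetime.strptime(m.group(1), '%Y-%m-%dT%H:%M:%S.%f')
        if prev_state is not None:
            durations[prev_state].append((t - prev_time).total_seconds())
        prev_state, prev_time = m.group(2), t
    return durations

# usage: durations = state_durations(open('ISC_LOCK.log'))   # path is assumed
#        sorted(durations['ANALOG_CARM'])                     # look for ~35 s outliers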

The ISC_LOCK code that was running at the time is in the svn as revision 10776.
Images attached to this comment