H1 PSL (PSL)
rana.adhikari@LIGO.ORG - posted 01:27, Sunday 19 July 2015 (19737)
IMC Lock losses => TTFSS Fast Gain adjustment

We're having some mysterious lock losses as we move from large CARM offset to less large offset. With the DRMI locked, the ALS starts bringing the arms in and the ALS / IMC lose lock.

Suspecting the recent FSS tunings, we looked at the FSS screen. The FAST gain was down at +5 dB. Also, the EOM Drive readback was up at +3V. The attached plot shows the EOM readback (PC_MON) as the FAST gain is tuned.

I have left it at 22.2 dB, where the EOM drive is minimized. In mid-April, there is a series of entries from Rick and Peter where the loop is tuned up, but the fast gain is turned down incrementally from 21 to 5 dB. Why so?

Also, it seems like we should aim for a ~250 kHz UGF for the FSS to avoid the peaking at 1.8 MHz where the notch is not getting all of the EOM resonance.

Images attached to this report
H1 ISC (ISC)
rana.adhikari@LIGO.ORG - posted 00:19, Sunday 19 July 2015 - last comment - 03:04, Monday 20 July 2015(19736)
Angular Instability in pitch at 10 W

Cataloging the many ways in which we are breaking lock or failing to lock since Friday, we found this one:

Sitting quietly at 10 W DC readout, there was a slow ring-up of a pitch instability in several ASC signals. Perhaps it's time we went back to seriously controlling the ASC in the hard/soft basis instead of the optic basis in which it's currently done. The frequency is ~1 Hz and the time constant is ~1 min. It would be great if someone could look at the signs of the fluctuations in the OLs and figure out if this was dHard or cHard or whatever.

Images attached to this report
Comments related to this report
rana.adhikari@LIGO.ORG - 22:01, Sunday 19 July 2015 (19744)AOS, ISC, SUS

In the attached plot, I've plotted the OpLev pit signals during the time of this ringup (0702 UTC on 7/19). The frequency is 1 Hz. It appears with the same sign and similar magnitudes in all TMs except ITMX (there's a little 1 Hz signal in ITMX, but much smaller).

  1. Do we believe the calibration of these channels at the 30% level?
  2. Do we believe the sign of these channels?
  3. If the signs are self-consistent, it seems to me that this is a Soft mode, common-arm fluctuation. But it's weird for it to be at such a high frequency, I think.
  4. Why does it take so long to ring up? If it's due to Sidles-Sigg alone, I would guess that the Q would be lower (because of local damping). But if it's a radiation pressure resonance and we have poor gain margin in our cSOFT loop, then it might be possible.
Images attached to this comment
rana.adhikari@LIGO.ORG - 03:04, Monday 20 July 2015 (19750)

Evan, Matt, Rana

We again saw the pitch instability tonight. We tried reducing it in a few ways, but the only successful way was to turn off the SRCL FF.

It appears that at higher powers, the SRCL_FF provides a feedback path for the pitch signals to get back to the arms (since SRCL_FF drives the ITMs; and both of them as of Thursday). I.e., cSOFT has a secondary feedback path that includes some length<->angle couplings and produces a high-Q unstable resonance. I don't understand how this works and I have never heard of this kind of instability before. But we were repeatedly able to see it ring up and ring down by enabling SRCL_FF.

To enable use of SRCL_FF, we've put a high pass filter into the SRCL_FF. This cuts off the SRCL_FF gain below a few Hz while preserving the phase above 10 Hz (where we want the subtraction to be effective). HP filter Bode plot attached.
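For reference, a sketch of how such a high-pass can be checked numerically. The 2 Hz Butterworth corner here is an assumption for illustration (the real filter lives in foton), but it reproduces the stated behavior: strong gain cut below a few Hz, with magnitude ~1 and only a small phase lead above 10 Hz.

```python
import numpy as np
from scipy import signal

# Hypothetical 2nd-order Butterworth high-pass with a ~2 Hz corner
# (assumed values -- the actual SRCL FF filter is defined in foton).
f_corner = 2.0  # Hz
b, a = signal.butter(2, 2 * np.pi * f_corner, btype='highpass', analog=True)

# Evaluate the response where the subtraction must stay effective.
w = 2 * np.pi * np.array([1.0, 10.0, 100.0])  # rad/s
_, h = signal.freqs(b, a, worN=w)

mag_1Hz = abs(h[0])                            # strongly cut below the corner
mag_10Hz = abs(h[1])                           # ~1 above 10 Hz
phase_10Hz = np.degrees(np.angle(h[1]))        # small residual phase lead
```

With these assumed values the gain at 1 Hz is ~0.24 while at 10 Hz the magnitude is ~0.999 with only ~16 degrees of phase lead, i.e. the subtraction phase above 10 Hz is approximately preserved.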

Non-image files attached to this comment
H1 ISC (DetChar, SUS)
rana.adhikari@LIGO.ORG - posted 00:02, Sunday 19 July 2015 - last comment - 07:17, Sunday 19 July 2015(19734)
BS DAC range not being used

The DAC for the BS M2 stage can put out 131000 counts, but the RMS is only 500 counts after transitioning into 'state 3' of the coil driver (Acq OFF, LP ON).

Seems like we're not in the best state here. We don't want to reduce the BS range for acquisition.

Should we be putting in an offset to avoid the DAC glitches or has this DAC been improved by some EEPROM upgrades?

Has anyone in DetChar seen BS DAC glitches from ER7?
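As a back-of-envelope for the offset idea: assuming a Gaussian drive with the quoted 500-count RMS, even a modest DC offset makes zero crossings of the drive vanishingly rare while using only a small fraction of the DAC range (the 5000-count offset below is a made-up example value).

```python
import math

rms = 500.0      # counts, quoted RMS of the BS M2 drive
offset = 5000.0  # hypothetical DC offset in counts (example value)

def crossing_fraction(offset, rms):
    """Fraction of Gaussian samples below zero for a drive
    centered at `offset` with standard deviation `rms`."""
    z = offset / (rms * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

frac = crossing_fraction(offset, rms)
# A 10-sigma offset makes zero crossings essentially impossible,
# while using only ~4% of the ~131000-count DAC range.
```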

Non-image files attached to this report
Comments related to this report
andrew.lundgren@LIGO.ORG - 07:17, Sunday 19 July 2015 (19740)
I don't remember us ever noticing BS M2 DAC glitches in ER7. The only ones we really saw were on MC2 M3. Looking back at two locks (Jun 5 5 UTC and Jun 8 14 UTC), BS M2 was centered on 0 with a peak-to-peak range of 6000 counts. So we don't really know what happens when we hit +/- 2^16 counts. I think we've seen that the -2^16 crossing is often the worst.

I just looked at some zero-crossings in MICH and the BS M2 noisemons during these two locks. Three quadrants look to be clear of DAC glitches. But in the UR noisemon there seem to be glitches at 300 Hz which match up pretty well with the times of zero crossings. The first attached plot is the Omega triggers of the noisemon with vertical lines showing zero-crossings in MASTER_OUT. The second plot is a different lock, where the conclusion still holds but the timing seems less exact. Next is a spectrum comparison of the UR NOISEMON versus MASTER_OUT. There's a notch around 300 Hz, presumably to avoid ringing up the BS violin modes, but the noisemon sees something coming out of the DAC in this range. For comparison, the other spectrum is the same thing for LR, where there doesn't seem to be excess noise in the notch.

I don't see any evidence of these glitches affecting MICH (I think that's where BS glitches would show up the best). That's probably why we never noticed these. We mostly look for things that affect DARM, although we sometimes serendipitously find other things (we noticed MC2 because it showed up in MCL). It's weird for DAC glitches to show up at high frequency, and the timing doesn't exactly match the zero crossings. It's probably worth keeping a close eye on the noisemons if more range is used on the DACs, and for detchar to check whether the DAC glitches had any effect on the BS stability.
Images attached to this comment
H1 ISC
stefan.ballmer@LIGO.ORG - posted 21:01, Saturday 18 July 2015 (19733)
Chasing random lock losses
Evan, Matt, Stefan

- Matt wrote a many-optic ASC relief function, which we added to the DRMI guardian. This saves us some time.

- For the rest, we were chasing random fast lock losses that hit us pretty much anywhere: PREP_TR_CARM, REFL_TRANS, just sitting on resonance at low power, and sitting at high power.

- Evan started to systematically go through and check loop gains.
- He found the digital REFL CARM loop to be slightly low in gain, and increased the Tr_REFLAIR_9 gain from -0.5 to -0.8.
- ALS diff looked fine.

H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 14:49, Saturday 18 July 2015 (19732)
New way for filter module code to permanently break
What I did
- Take a filter that ramps over 3 sec (always on)
- Edit the foton file to a 1 sec ramp
- Start the ramp, but before it finishes, load the new filter.
- The filter module keeps ramping, and never finishes...

- I could reproduce this twice.
- I attached a snap of the still ramping FM1 on LSC-REFLBIAS.

- To fix it, I considered rebooting, but since I suspected the problem to be a runaway counter, I simply added a filter with a 600 sec ramp time (long enough to catch the original filter ramp). 10 minutes later (the time it took to write this log) it was fixed...
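A toy model of the suspected runaway-counter mechanism (a guess, not the actual RCG front-end code): if ramp completion is tested with exact equality, shortening the ramp mid-flight strands the counter past the new end, and only a longer ramp can catch it again, consistent with the 600 sec workaround.

```python
# Guessed mechanism for the stuck ramp; the real filter code is the
# RCG front end, this is only an illustration.

def run_ramp(ramp_len, reload_at=None, new_len=None, max_cycles=20):
    """Count cycles until a ramp that tests `counter == ramp_len`
    (instead of `counter >= ramp_len`) finishes; None if it never does."""
    counter = 0
    for cycle in range(max_cycles):
        if reload_at is not None and cycle == reload_at:
            ramp_len = new_len  # filter reloaded mid-ramp
        if counter == ramp_len:  # exact-equality completion test
            return cycle
        counter += 1
    return None  # never finished

# Normal case: a 3-cycle ramp completes.
normal = run_ramp(3)
# Shrinking the ramp from 3 to 1 after 2 cycles skips the exact
# completion point: the counter is already past the new end.
stuck = run_ramp(3, reload_at=2, new_len=1)
# A much longer ramp loaded mid-flight catches the counter again,
# matching the 600 sec workaround described above.
rescued = run_ramp(3, reload_at=2, new_len=10)
```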
Images attached to this report
H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 04:33, Saturday 18 July 2015 (19731)
REFL_TRANS LOCK LOSSES
Evan, Rana, Stefan

After we mostly fixed the CARM_ON_TR lock losses we ran into the REFL_TRANS ones. There is definitely a loop instability on TR_REFL_TRANS. We sped up this transition, which at least once seemed to help. However, we also randomly lost lock at other places, and never made it to low noise. We'll take another systematic approach tomorrow.
H1 GRD (GRD, ISC)
evan.hall@LIGO.ORG - posted 02:19, Saturday 18 July 2015 (19730)
In-vac REFL transition automated in guardian

Handoff of CARM from in-air REFL to in-vac REFL is now automated via the IN_VACUUM_REFL state in the ISC_LOCK guardian.

It will run after ENGAGE_ASC and before BOUNCE_VIOLIN_MODE_DAMPING. It was tested and seems to work fine.

H1 ISC (CDS, ISC, SUS)
rana.adhikari@LIGO.ORG - posted 22:26, Friday 17 July 2015 (19728)
Fast channels for Parametric Instability monitoring

MattE, Hong, Kiwamu, Rana

We've made a temporary hookup at EY to get the in-vac, IR, TransMon QPD signals into the new fast 'h1susetmypi' model. This is so that we can monitor the amplitude and frequency of the unstable opto-acoustic modes in the interferometer (0910.2716). The only cabling change we've made is to add a breakout board at the AA chassis, so things ought to run as before after the EQ rings down.

 

Cabling Details:

The TransMon QPD cable goes into a transimpedance/whitening amplifier (D1001974) with Z = 1000 Ohms. Then there's a 0.4:40 pole:zero stage with a gain of 1 at DC. The output of this board then goes through a whitening chassis, and the output of that box (in the rack near the BSC) goes into the electronics room and into an ISC AA chassis via a 9-pin D-sub. We put the breakout board at the AA side of this cable. We used clip-doodles to go into a SCSI breakout board, and via ribbon cable into the ADC for h1susetmypi. This is a temporary setup to allow us to commission the model software. In this setup, since we're using the whitening filter outputs, we also get the whitening gain and amplification used for the QPD servos. Also, we do not need to use the PEM patch panels as initially planned.

The transimpedance box has a single pole at 80 kHz. The whitening filter has no poles below 80 kHz. So these should be fast enough to let us see PI modes up to 30 kHz.
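As a quick sanity check of that bandwidth claim, a sketch assuming only the stated single real pole at 80 kHz (the helper name is ours):

```python
import math

# Single real pole at 80 kHz (transimpedance stage, as stated above).
f_pole = 80e3  # Hz

def pole_mag(f, f_pole):
    """Magnitude response of a single real pole at f_pole."""
    return 1.0 / math.sqrt(1.0 + (f / f_pole) ** 2)

att_30kHz = pole_mag(30e3, f_pole)
# ~0.94: the 80 kHz pole costs only ~6% of signal at 30 kHz, so the
# chain is indeed fast enough to watch PI modes up to 30 kHz.
```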

We had to use some critical electrical tape to keep the BNC-clip shields from shorting with each other; take care in working near the AA side of this cabling - it may put offsets into the QPDs and disturb the lock acquisition.

H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 22:07, Friday 17 July 2015 - last comment - 00:31, Saturday 18 July 2015(19727)
h1lsc modification (work permit #5358)
Sheila, Stefan

Our first version of this fix had a bug: the linear fit below the sqrt limiter meant that there was a non-zero chance of driving the arm in the wrong direction, resulting in a DRMI lock loss. We addressed this in the next version of the code with a quadratic fit:
 - The initial limiter is set at l=1e-4. Above it (x>l), we still simply have sqrt(x) for CARM_TR.
 - For x<-3*l, we have TR_CARM=0
 - Between -3*l<x<l we add the 2nd path f=A*(x-x0)^2 + f0, with
      - f0=-sqrt(l)        = -0.01
      - x0=-3 l            = -3e-4
      - A = 1/(16*l^(3/2)) = 62500
 - The sum of the two paths gives a smooth interpolation. We tested this code and verified that the FE code does what it should.
 - Next we wanted to optimize the threshold limit l:
   - Looking  at past locks, the pk2pk during PREP_TR_CARM in TR_CARM is about 0.1 cts
   - Thus the following parameters might be even better:
      - l=0.01
      - f0=-sqrt(l)        = -0.1
      - x0=-3 l            = -3e-2
      - A = 1/(16*l^(3/2)) = 62.5
 - We installed this as version V2 of this code.
 - Due to the earthquake, we have not tested this yet.

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 00:31, Saturday 18 July 2015 (19729)
Evan, Stefan

After the earthquake we had a chance to test the code:
- The good news: the smooth turn-on of TR_CARM described above seems to work just fine - every time.
- The bad news: we still sometimes lost lock ~13 seconds after we grabbed TR_CARM.
- Attached are traces of TR_CARM_INMON for one failed attempt, and 2 successful attempts.
- At any rate - the lock losses happen when we are already clean on the sqrt(x) part - so we will keep the code change.


The lock loss happens most likely during the engaging of LSC-MCL FM3 (BounceRG):
Guardian line 642:
        ezca.switch('LSC-MCL', 'FM3', 'ON')
Guardian log:
2015-07-18T07:58:51.84052 ISC_LOCK [PREP_TR_CARM.main] ezca: H1:LSC-MCL => ON: FM3

We moved the LSC-MCL FM3 engaging to the end of CARM_ON_TR (line 733):
                ezca.switch('LSC-MCL', 'FM3', 'ON')

This seems to have fixed the problem - at least as far as we can tell (we are at 2 out of 2 for this type of lock loss...).

We also moved the zeroing of REFL_BIAS matrix elements to the DOWN state. 
Images attached to this comment
H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 20:53, Friday 17 July 2015 (19726)
ASC: PRC2 now feeding back to PR3
Evan, Stefan

We implemented the PR3 feed-back in ENGAGE_ASC in the ISC_LOCK guardian:
- We decided to leave the feed-back on PR2 during DRMI. This allows us to absorb the first correction to the initial alignment to PR2.
- We then switch the feed-back to PR3 in ENGAGE_ASC. This locks down PR3 during power-up - we confirmed that the REFL beam no longer moves in that transition.

- Details:
 - We modified the DRMI guardian ENGAGE_DRMI_ASC state to prepare both PR2 and PR3 for feed-back.
 - PR3 gains were set to roughly match PR2
 - The settings on PR3 are:
   -    ezca['SUS-PR3_M3_ISCINF_P_GAIN'] = 2
   -    ezca['SUS-PR3_M3_ISCINF_Y_GAIN'] = 5
   -    SUS-PR3_M1_LOCK have an integrator, a -120dB and a gain of 1
 - Also, it now has the flag self.usePR3, which can select feed-back to PR2 (False) or PR3 (True). The rest of the ENGAGE_DRMI_ASC state uses this flag.
 - The default is self.usePR3=False, i.e. it does just the old PR2 engaging.

 - The PRC2 loop is always off during the CARM reduction sequence. It is re-engaged in ENGAGE_ASC with feed-back to PR3 with the following steps:
 - The output of SUS-PR2_M3_ISCINF_P and SUS-PR2_M3_ISCINF_Y is held
 - The ASC-PRC2_P and ASC-PRC2_Y filters are cleared.
 - The output matrix is updated:
   -    asc_outrix_pit['PR3', 'PRC2'] = 1
   -    asc_outrix_yaw['PR3', 'PRC2'] = -1 # required to keep the same sign as PR2
 - And the loops are turned on again.
 - The current loop gain is still low - the step response is on the order of 30sec.
 - ENGAGE_ASC also has the self.usePR3 flag (default is True), so it is still backwards compatible.

 - The whole sequence (engage DRMI on PR2, switch to PR3 in full lock) was tested successfully once before an earthquake hit.
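The hand-off steps above can be sketched in guardian-style code. Here `ezca` is a plain dict standing in for the real EPICS layer, and the matrix channel names are simplified stand-ins, not the actual channels:

```python
# Sketch of the PRC2 -> PR3 hand-off described above. `ezca` is a
# plain dict standing in for guardian's real channel-access object,
# and the channel names here are simplified stand-ins.

ezca = {
    'ASC-OUTMATRIX_P_PR3_PRC2': 0.0,
    'ASC-OUTMATRIX_Y_PR3_PRC2': 0.0,
    'SUS-PR2_M3_ISCINF_P_HOLD': False,
    'SUS-PR2_M3_ISCINF_Y_HOLD': False,
}

def engage_prc2_on_pr3(usePR3=True):
    if not usePR3:
        return  # backwards-compatible: keep the old PR2 engaging
    # 1. Hold the PR2 ISCINF outputs so PR2 keeps its DRMI correction.
    ezca['SUS-PR2_M3_ISCINF_P_HOLD'] = True
    ezca['SUS-PR2_M3_ISCINF_Y_HOLD'] = True
    # 2. (Clearing the ASC-PRC2_P/Y filter histories would happen here.)
    # 3. Route PRC2 to PR3; yaw needs a sign flip to match PR2.
    ezca['ASC-OUTMATRIX_P_PR3_PRC2'] = 1.0
    ezca['ASC-OUTMATRIX_Y_PR3_PRC2'] = -1.0
    # 4. (Loops are then turned back on.)

engage_prc2_on_pr3()
```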

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 18:11, Friday 17 July 2015 (19724)
h1susetmypi model started

A new H1 model was started this afternoon on h1susey. It is h1susetmypi, which is a copy of the L1 model minus the IPC input receivers and the DAC outputs at the top level. The new model has DCU_ID=127 and runs at 64 kHz.

The initial startup failed with a "DAQ too small" error. We boosted the DQ channels from 2kHz to 64kHz (commissioning frame only) and added some EpicsIn parts at the top level to get through this error. We will investigate this further next week.

The new model was added to the DAQ and the DAQ was restarted. I have added it to the CDS ENG overview MEDM screen, I'll add it to the rest later.

H1 SUS (ISC)
nutsinee.kijbunchoo@LIGO.ORG - posted 17:52, Friday 17 July 2015 (19720)
Update to the complete violin mode study

A while ago I made a "complete" violin mode table here. Matt and Jeff were curious whether or not the frequencies belong to the correct test masses. So I went through the violin mode damping filters to see which ones work and which don't. So far I was able to confirm that 24 out of 32 frequencies belong to the correct test masses. The filters were able to damp most of them. Three frequencies got rung up. The rest were inconclusive (either the damping phases were off by 90 degrees or the test masses were wrong).

 

Frequency (Hz)  Test mass  Filter  Works?  Note
500.054         ITMX       MODE3   yes
500.212                    MODE3   yes
501.092         ITMX       MODE6   yes
501.208                    MODE3   yes
501.254         ITMX       MODE3   yes
501.450                    MODE3   yes
502.621         ITMX       MODE3   NO
502.744                    MODE3   yes

503.007         ITMY       MODE3   yes
503.119                    MODE1   yes
504.803         ITMY       MODE4   NO
504.872                    MODE4   yes
501.606         ITMY       MODE5   NO      Rung up!
501.682                    MODE5   yes
501.749         ITMY       MODE6   yes
501.811                    MODE6   NO

507.992         ETMY       MODE5   NO
508.146                    MODE5   yes
508.010         ETMY       MODE5   yes
508.206                    MODE5   yes
508.220         ETMY       MODE5   NO
508.289                    MODE5   NO
508.585         ETMY       MODE5   yes
508.661                    MODE5   NO

505.587         ETMX       MODE6   yes
505.707                    MODE6   NO
505.710         ETMX       MODE4   yes
505.805                    MODE4   NO      Rung up!
506.922         ETMX       MODE6   yes
507.159                    MODE6   yes
507.194         ETMX       MODE6   NO      Rung up!
507.391                    MODE6   yes

 

Below is the table of the violin mode damping filters used by Guardian and the frequencies they are supposed to damp:

w0 (Hz)   wc (Hz)   wc-w0 (Hz)   Filter (All FM1)   Test mass   Frequency covered
506       513       7            MODE5              ETMY        All ETMY

505.78    505.9     0.12         MODE4              ETMX        505.710, 505.805
502       520       18           MODE6              ETMX        The rest of ETMX (cheater…)

485.7     506.4     20.7         MODE3              ITMX        All ITMX
501.05    501.11    0.06         MODE6              ITMX        501.092

503.08    503.16    0.08         MODE1              ITMY        503.119
502.96    503.06    0.1          MODE3              ITMY        503.007
504.86    504.91    0.05         MODE4              ITMY        504.803, 504.872
501.63    501.7     0.07         MODE5              ITMY        501.606, 501.682
501.71    501.85    0.14         MODE6              ITMY        501.749, 501.811

 

Both tables are included in the excel file attached below.

Images attached to this report
Non-image files attached to this report
H1 GRD
jameson.rollins@LIGO.ORG - posted 17:42, Friday 17 July 2015 - last comment - 00:07, Sunday 19 July 2015(19723)
Guardian core upgraded to point release to address "double main" bug

Guardian core upgraded to fix "double main" execution bug.

I have just installed a new version of Guardian core:

guardian r1449

It addresses the "double main" execution bug that has been plaguing the system. See guardian bug 879, ECR 1078.

The new version is in place, but none of the guardian nodes have been restarted yet to pull in the new version.

You can either manually restart the nodes with 'guardctrl restart', or just try rebooting the whole guardian machine.  I might start with the former, to just target the important lock acquisition nodes (ISC_LOCK, etc.), and wait until Tuesday maintenance for a full restart of the Guardian system.

Comments related to this report
evan.hall@LIGO.ORG - 00:07, Sunday 19 July 2015 (19735)

ISC_LOCK and ISC_DRMI were restarted around 2015-07-19 07:07:00 Z.

LHO FMCS
bubba.gateley@LIGO.ORG - posted 16:35, Friday 17 July 2015 (19719)
Beam Tube Enclosure Joint Repair on the X-Arm
Chris S. Joe D.

The crew installed metal strips on the top of 350 meters of tube enclosure joints this week for a total of 1075 meters of enclosure from the corner station on the X-Arm. 
LHO VE
bubba.gateley@LIGO.ORG - posted 16:26, Friday 17 July 2015 (19717)
Beam Tube Washing
APOLOGIES FOR NOT REPORTING FOR THE PAST WEEK

Scott L. Ed P. Rodney H.

This report will cover 7/13-7/17 dates inclusive.

 The crew cleaned a total of 358.7 meters of tube this week. Test results for the week also shown here.

 We added another generator that we had on site to the cleaning operation so the third man could vacuum the support tubes and pre-clean the egregiously dirty areas of the tube. This seems to have increased productivity, as seen by the average of almost 72 meters a day.

Scott L. will be on vacation next week so to hopefully keep up with the current pace I am bringing out another Apollo employee who is very familiar with the site. Mark Layne will be filling in for Scott next week.  
 
Non-image files attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 12:01, Friday 17 July 2015 - last comment - 17:36, Friday 17 July 2015(19709)
H1 better low freq performance

Matt found some data from last night that looks pretty good - I'm not sure what the state of the IFO was at this particular time, so I won't say.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 13:04, Friday 17 July 2015 (19711)

A brute-force coherence report for this period can be found here:

https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1120811417/

sheila.dwyer@LIGO.ORG - 15:38, Friday 17 July 2015 (19713)

Our NOMINAL_LOW_NOISE state now includes the BS coil driver switching, plus SRCL and MICH FF. A2L coefficients were tuned before the vent, but not carefully since then. We also had the ISS second loop on at this time, which helped the noise around 300 Hz.

rana.adhikari@LIGO.ORG - 17:36, Friday 17 July 2015 (19721)

We have seen that low-frequency noise breathe somewhat; the noise was already low around 70 Hz when we switched on the SRC FF (the old filters). We have taken a few measurements with better coherence and better fitting code, and will soon get a bit more subtraction. The high-frequency noise is mysteriously worse than our best. The DARM offset was giving us 20 mA total OMC DC current. We have not yet succeeded in staying stable at higher offsets.

H1 DetChar (DetChar, ISC, SUS)
sheila.dwyer@LIGO.ORG - posted 01:56, Friday 17 July 2015 - last comment - 19:46, Friday 17 July 2015(19696)
Lock Losses need some investigation

The following is a list of lock-loss messages from the Guardian log. We've had a bunch of lock losses during the transition from locked to low noise this evening. As you can see, there are a few different culprits, but one of the big ones is LOWNOISE_ESD_ETMY. It would be handy if someone could check out these lock losses and home in on what precisely went bad during this transition (e.g. ramping, switching, etc.). Then we can get back to SRCL FF tuning.

2015-07-17 06:47:05.694550  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 06:48:22.162260  ISC_LOCK  LOCKING_ARMS_GREEN -> LOCKLOSS

2015-07-17 07:14:34.431170  ISC_LOCK  NOMINAL_LOW_NOISE -> LOCKLOSS

2015-07-17 07:26:40.249110  ISC_LOCK  CARM_10PM -> LOCKLOSS

2015-07-17 07:34:59.720030  ISC_LOCK  PREP_TR_CARM -> LOCKLOSS

2015-07-17 07:42:08.269350  ISC_LOCK  LOCKING_ALS -> LOCKLOSS

2015-07-17 08:02:41.773620  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 08:21:58.665420  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 08:31:29.035330  ISC_LOCK  REDUCE_CARM_OFFSET_MORE -> LOCKLOSS

2015-07-17 08:48:32.514870  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

 
Rana, Sheila
Comments related to this report
keita.kawabe@LIGO.ORG - 11:01, Friday 17 July 2015 (19705)

Guardian error causing lock loss in LOWNOISE_ESD_ETMY (Evan, Keita)

Summary:

Out of the four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, one lock loss (15-07-17-06-47-05) was due to the guardian running main() of LOWNOISE_ESD_ETMY twice.

Running main() twice (sometimes, but not always) is apparently a known problem of the guardian, but this specific state is written such that running main() twice is not safe.

Details:

Looking at the lock loss, I found that the ETMY_L3_LOCK_L ramp time (left attachment, red CH16) was set to zero at the same time as, or right after, the ETMX and ETMY L3 gains (blue CH3 and brown CH5) were set to their final values (0 and 1.25 respectively). There was a huge glitch in the EY actuators at that point, but not in EX.

This transition is supposed to happen with the ramp time of 10 seconds, so setting the ramp time to 0 after setting the gain kills the lock.

Looking at the guardian code (attached right), the ramp time is set to zero at the beginning and set to 10 at the end.

Evan told me that main() could be executed twice; we looked at the log (middle attachment), and sure enough, right after LOWNOISE_ESD_ETMY.main finished at 2015-07-17T06:46:50.39059, the gain was set to zero again.
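A toy reconstruction of why this state is unsafe against a double main() (channel names shortened, logic simplified from the description above): the state zeroes the ramp time for its setup writes and restores it at the end, so an unexpected second call steps the gain instantly.

```python
# Simplified from the description above: the 0 -> 1.25 gain values and
# the 10 s ramp are from the entry; everything else is a stand-in.

class FakeEzca(dict):
    """Dict stand-in for guardian's ezca; counts glitchy writes
    (a gain step commanded while the ramp time is zero)."""
    def __init__(self):
        super().__init__({'TRAMP': 10, 'GAIN': 0.0})
        self.glitches = 0

    def write(self, chan, value):
        if chan == 'GAIN' and self['TRAMP'] == 0 and value != self['GAIN']:
            self.glitches += 1  # instant gain step => lock-killing glitch
        self[chan] = value

def main(ezca):
    ezca.write('TRAMP', 0)    # instant steps for setup
    ezca.write('GAIN', 0.0)   # starting gain
    ezca.write('TRAMP', 10)   # restore the 10 s ramp
    ezca.write('GAIN', 1.25)  # final gain, ramped

ez = FakeEzca()
main(ez)
g_first = ez.glitches   # first run is clean
main(ez)
g_second = ez.glitches  # second run steps 1.25 -> 0 with zero ramp
```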

Images attached to this comment
jameson.rollins@LIGO.ORG - 11:55, Friday 17 July 2015 (19708)

I have identified the source of the double main execution and have a patch ready that fixes the problem:

https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=879#c7

If needed we can push out a point release essentially immediately, maybe during next Tuesday's maintenance period.

keita.kawabe@LIGO.ORG - 14:09, Friday 17 July 2015 (19712)

Bounce rang up during the EX-EY transition gain ramping, 3/4 of the times last night.

In three out of four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, the guardian made it all the way to the gain ramping at the end, and it did not run main() twice.

However, about 7 to 8 seconds after the ramping started, 9.8Hz oscillation built up in DARM, then there came fast glitches in ETMY L2 drive, then the IFO lost lock. 

This looks like the bounce mode, but I have no idea why it was suddenly rung up.

See attached. First attachment shows the very end of the lock losses that clearly shows DARM oscillation.

The second plot shows the same lock losses zoomed out, so you can see that each lock loss happened 7 to 8 seconds after the ramping started.

The last attachment shows one of the DARM oscillations, so you can see that 3 cycles = 0.309 seconds (i.e. a 9.8 Hz signal).

Images attached to this comment
keita.kawabe@LIGO.ORG - 19:46, Friday 17 July 2015 (19725)

Update: After the bounce was rung up, the OMC DCPDs saturated before the IFO lost lock.

In the attached, while the 9.8 Hz line was getting bigger (top left), if you high-pass DARM_IN1_DQ (middle left) you can see that the high-frequency part, dominated by 2.5 kHz, suddenly quenched at about t=18 sec.

The same thing is observed in the OMC DCPDs (middle middle and bottom middle), and even though we don't have a fast channel for the DCPD ADCs, it seems like they were very close to saturation at 18 sec (bottom left).

Though we don't know why the 9.8 Hz mode was excited, at least we know that the DCPDs saturated and caused the lock loss.

Since the same thing happened 3 times, and each time it was 7 to 8 seconds after the ETMX and ETMY L3 LOCK_L gains started ramping, you could set the gains to the values corresponding to this in-between state, keep them there for a minute or so, and see if the IFO can stay locked. If you fail to keep it locked, it's a sure sign that this instability is somehow related to the L3 actuator balance between X and Y, or the L3-L2 crossover in Y (or in X), or both.

The in-between gain would be something like 1.1 for EY L3 lock and 0.125 for EX.

Images attached to this comment
H1 SUS (SUS)
leonid.prokhorov@LIGO.ORG - posted 19:06, Tuesday 14 July 2015 - last comment - 17:39, Friday 17 July 2015(19645)
Night OPLEV Charge measurements procedure
I wrote a script for the OPLEV charge measurements which sets most of the settings and is easy to use. It runs for ~2.5 hours.
If you are the last person leaving LHO at night, please run it!

Instructions:
1. Set ISC_Lock state to DOWN
2. Set both ETMs to "ALIGNED" state
3. Align Optical levers (pitch and yaw) for both arms to 0 +/-.5 urad
4. Run the scripts: scripts directory is /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts
run the python files: ./ESD_Night_ETMX.py and the second script in another terminal: ./ESD_Night_ETMY.py

If it works, it:
a) In the first ~30 seconds it sets the channels and can warn about troubles with the ESD or alignment. If that happens, check this system and press 
b) During the measurements it should print "Receiving data for GPS second: 1234567" once a second.
c) After all the measurements it should restore all the settings.

If it gives errors but still receives some data, let it run.
If it obviously does not work, you can try running it again. If that does not help, please restore all settings by running ./ESD_Restore_Settings.py.

Since this is the first try, today I will be very thankful if you check the following after running the scripts:
1. L3 Lock:
   Bias Voltage - 0
   Offset - green light
   Ramp time - 5s
2. ESD linearization:    Bypass - ON
3. For ETMY:   turned on Hi-voltage driver.
Just in case: the scripts modify only these settings.
Comments related to this report
leonid.prokhorov@LIGO.ORG - 17:39, Friday 17 July 2015 (19722)
1. The script was updated to align the optical levers. If you did not align them, the script will do it between (a) and (b), i.e. in the first minutes. Measurements begin when the OPLEVs are aligned.
2. If you need to stop the charge measurements, there are two ways. 1. (preferable) Press 'Enter' - it will stop the script within at most 12 minutes, once all the biases and quadrants have been done for this cycle, and it will restore all the settings. 2. You can break it with Ctrl-C immediately, but then you will need to restore the ESD settings using ./ESD_Restore_Settings.py or do it manually. Using the second way you also lose, and need to manually restore, the optical lever offsets for the ETMs.
Note: The main charge measurement scripts set all the settings back to where they found them. If you break the script and use ESD_Restore_Settings.py, it will set all values to "standard".
(!) We are talking about changing the sign of the ESD bias voltage on ETMX. Using ESD_Restore_Settings.py will change it to today's value.

