Reports until 21:01, Saturday 18 July 2015
H1 ISC
stefan.ballmer@LIGO.ORG - posted 21:01, Saturday 18 July 2015 (19733)
Chasing random lock losses
Evan, Matt, Stefan

- Matt wrote a many-optic ASC relief function, which we added to the DRMI guardian. This saves us some time.

- For the rest, we were chasing random fast lock losses that hit us pretty much anywhere: PREP_TR_CARM, REFL_TRANS, just sitting on resonance at low power, and sitting at high power.

- Evan started to systematically go through and check loop gains.
- He found the gain of the digital REFL CARM loop to be slightly low, so he increased the Tr_REFLAIR_9 gain from -0.5 to -0.8.
- ALS diff looked fine.

H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 14:49, Saturday 18 July 2015 (19732)
New way for filter module code to permanently break
What I did
- Take a filter that ramps over 3 sec (always on).
- Edit the foton file to a 1 sec ramp.
- Start the ramp, but before it finishes, load the new filter.
- The filter module keeps ramping, and never finishes...

- I could reproduce this twice.
- I attached a snap of the still ramping FM1 on LSC-REFLBIAS.

- To fix it, I considered rebooting, but since I suspected the problem to be a runaway counter, I simply added a filter with a 600 sec ramp time (long enough to catch the original filter ramp). 10 min later (the time it took to write this log) it was fixed...
Images attached to this report
H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 04:33, Saturday 18 July 2015 (19731)
REFL_TRANS LOCK LOSSES
Evan, Rana, Stefan

After we mostly fixed the CARM_ON_TR lock losses, we ran into the REFL_TRANS ones. There is definitely a loop instability on TR_REFL_TRANS. We sped up this transition, which at least once seemed to help. However, we also randomly lost lock at other places, and never made it to low noise. We'll take another systematic approach tomorrow.
H1 GRD (GRD, ISC)
evan.hall@LIGO.ORG - posted 02:19, Saturday 18 July 2015 (19730)
In-vac REFL transition automated in guardian

Handoff of CARM from in-air REFL to in-vac REFL is now automated via the IN_VACUUM_REFL state in the ISC_LOCK guardian.

It will run after ENGAGE_ASC and before BOUNCE_VIOLIN_MODE_DAMPING. It was tested and seems to work fine.

H1 ISC (CDS, ISC, SUS)
rana.adhikari@LIGO.ORG - posted 22:26, Friday 17 July 2015 (19728)
Fast channels for Parametric Instability monitoring

MattE, Hong, Kiwamu, Rana

We've made a temporary hookup at EY to get the in-vac, IR, TransMon QPD signals into the new fast 'h1susetmypi' model. This is so that we can monitor the amplitude and frequency of the unstable opto-acoustic modes in the interferometer (0910.2716). The only cabling change we've made is to add a breakout board at the AA chassis, so things ought to run as before after the EQ rings down.

 

Cabling Details:

The TransMon QPD cable goes into a transimpedance/whitening amplifier (D1001974) with Z = 1000 Ohms. Then there's a 0.4:40 pole:zero stage with a gain of 1 at DC. The output of this board then goes through a whitening chassis, and the output of that box (in the rack near the BSC) goes into the electronics room and into an ISC AA chassis via a 9-pin D-sub. We put the breakout board at the AA side of this cable. We used clip-doodles to go into a SCSI breakout board, and via ribbon cable into the ADC for h1susetmypi. This is a temporary setup to allow us to commission the model software. In this setup, since we're using the whitening filter outputs, we also get the whitening gain and amplification which is used for the QPD servos. Also, we do not need to use the PEM patch panels as initially planned.

The transimpedance box has a single pole at 80 kHz. The whitening filter has no poles below 80 kHz. So these should be fast enough to let us see PI modes up to 30 kHz.
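As a quick sanity check on the "fast enough" claim, the droop of a single 80 kHz pole at a 30 kHz PI mode is well under 1 dB (a sketch; only the 80 kHz pole quoted above is modeled here):

```python
import math

def single_pole_mag(f, f_pole):
    """Magnitude response of a one-pole low-pass at frequency f."""
    return 1.0 / math.sqrt(1.0 + (f / f_pole) ** 2)

# attenuation of the 80 kHz transimpedance pole at a 30 kHz PI mode
droop_db = 20.0 * math.log10(single_pole_mag(30e3, 80e3))
print(round(droop_db, 2))  # about -0.57 dB
```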

We had to use some critical electrical tape to keep the BNC-clip shields from shorting with each other; take care in working near the AA side of this cabling - it may put offsets into the QPDs and disturb the lock acquisition.

H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 22:07, Friday 17 July 2015 - last comment - 00:31, Saturday 18 July 2015(19727)
h1lsc modification (work permit #5358)
Sheila, Stefan

Our first version of this fix had a bug: the linear fit below the sqrt limiter meant that there is a non-zero chance to drive the arm in the wrong direction, resulting in a DRMI lock-loss. We addressed this in the next version of the code with a quadratic fit:
 - The initial limiter is set at l=1e-4. Above it (x>l), we still simply have sqrt(x) for CARM_TR.
 - Below x<-3*l, we have TR_CARM=0
 - Between -3*l<x<l we add the 2nd path f=A*(x-x0)^2 + f0, with
      - f0=-sqrt(l)        = -0.01
      - x0=-3 l            = -3e-4
      - A = 1/(16*l^(3/2)) = 62500
 - The sum of the two paths gives a smooth interpolation. We tested this code and verified that the FE code does what it should.
 - Next we wanted to optimize the threshold limit l:
   - Looking at past locks, the pk2pk in TR_CARM during PREP_TR_CARM is about 0.1 cts
   - Thus the following parameters might be even better:
      - l=0.01
      - f0=-sqrt(l)        = -0.1
      - x0=-3 l            = -3e-2
      - A = 1/(16*l^(3/2)) = 62.5
 - We installed this as version V2 of this code.
 - Due to the earthquake we have not yet tested it.
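The two-path scheme above can be sketched as follows (an illustrative model only, assuming the first path is a sqrt with its input limited at l and the quadratic path is held at f0 below x0; the actual FE code may differ):

```python
import math

def tr_carm(x, l=1e-4):
    """Two-path smoothed sqrt for TR_CARM: plain sqrt(x) above the
    limiter l, zero below -3*l, quadratic blend in between."""
    f0 = -math.sqrt(l)            # -0.01 for l = 1e-4
    x0 = -3.0 * l                 # -3e-4
    A = 1.0 / (16.0 * l ** 1.5)   # 62500; matches sqrt's value and slope at x = l
    path1 = math.sqrt(max(x, l))  # sqrt with its input limited at l
    # second path: held at f0 below x0, quadratic up to x = l
    path2 = f0 if x <= x0 else A * (min(x, l) - x0) ** 2 + f0
    return path1 + path2
```

With these constants the sum is 0 at x = -3*l and sqrt(l) at x = l, where its slope equals 1/(2*sqrt(l)), so the handoff onto the plain sqrt(x) branch is smooth.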

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 00:31, Saturday 18 July 2015 (19729)
Evan, Stefan

After the earthquake we had a chance to test the code:
- The good news: the smooth turn-on of TR_CARM described above seems to work just fine - every time.
- The bad news:  we still sometimes lost lock about 13 seconds after we grabbed TR_CARM.
- Attached are traces of TR_CARM_INMON for one failed attempt, and 2 successful attempts.
- At any rate - the lock losses happen when we are already clean on the sqrt(x) part - so we will keep the code change.


The lock loss happens most likely during the engaging of LSC-MCL FM3 (BounceRG):
Guardian line 642:
        ezca.switch('LSC-MCL', 'FM3', 'ON')
Guardian log:
2015-07-18T07:58:51.84052 ISC_LOCK [PREP_TR_CARM.main] ezca: H1:LSC-MCL => ON: FM3

We moved the LSC-MCL FM3 engaging to the end of CARM_ON_TR (line 733):
                ezca.switch('LSC-MCL', 'FM3', 'ON')

This seems to have fixed the problem - at least as far as we can tell (we are at 2 out of 2 for this type of lock loss...).

We also moved the zeroing of REFL_BIAS matrix elements to the DOWN state. 
Images attached to this comment
H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 20:53, Friday 17 July 2015 (19726)
ASC: PRC2 now feeding back to PR3
Evan, Stefan

We implemented the PR3 feed-back in ENGAGE_ASC in the ISC_LOCK guardian:
- We decided to leave the feed-back on PR2 during DRMI. This allows us to absorb the first correction to the initial alignment to PR2.
- We then switch the feed-back to PR3 in ENGAGE_ASC. This locks down PR3 during power-up - we confirmed that the REFL beam no longer moves in that transition.

- Details:
 - We modified the DRMI guardian ENGAGE_DRMI_ASC state to prepare both PR2 and PR3 for feed-back.
 - PR3 gains were set to roughly match PR2
 - The settings on PR3 are:
   -    ezca['SUS-PR3_M3_ISCINF_P_GAIN'] = 2
   -    ezca['SUS-PR3_M3_ISCINF_Y_GAIN'] = 5
   -    The SUS-PR3_M1_LOCK filters have an integrator, -120 dB, and a gain of 1
 - Also, it now has the flag self.usePR3, which can select feed-back to PR2 (False) or PR3 (True). The rest of the ENGAGE_DRMI_ASC state uses this flag.
 - The default is self.usePR3=False, i.e. it does just the old PR2 engaging.

 - The PRC2 loop is always off during the CARM reduction sequence. It is re-engaged in ENGAGE_ASC with feed-back to PR3 with the following steps:
 - The output of SUS-PR2_M3_ISCINF_P and SUS-PR2_M3_ISCINF_Y is held
 - The ASC-PRC2_P and ASC-PRC2_Y filters are cleared.
 - The output matrix is updated:
   -    asc_outrix_pit['PR3', 'PRC2'] = 1
   -    asc_outrix_yaw['PR3', 'PRC2'] = -1 # required to keep the same sign as PR2
 - And the loops are turned on again.
 - The current loop gain is still low - the step response is on the order of 30sec.
 - ENGAGE_ASC also has the self.usePR3 flag (default is True), so it is still backwards compatible.

 - The whole sequence (engage DRMI on PR2, switch to PR3 in full lock) was tested successfully once before an earthquake hit.
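The output-matrix switch described above can be sketched as a small helper (illustrative only: plain dicts stand in for the ezca-backed asc_outrix objects, and the function name is made up):

```python
def route_prc2(outrix_pit, outrix_yaw, usePR3=True):
    """Point the PRC2 ASC loop at PR3 (usePR3=True) or at PR2 (False)."""
    tgt = 'PR3' if usePR3 else 'PR2'
    outrix_pit[(tgt, 'PRC2')] = 1
    # the yaw sign flips on PR3 to keep the same overall loop sign as PR2
    outrix_yaw[(tgt, 'PRC2')] = -1 if usePR3 else 1
    return tgt
```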

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 18:11, Friday 17 July 2015 (19724)
h1susetmypi model started

A new H1 model was started this afternoon on h1susey. It is h1susetmypi, which is a copy of the L1 model minus the IPC input receivers and the DAC outputs on the top level. The new model has DCU_ID=127 and runs at 64 kHz.

The initial startup failed with a "DAQ too small" error. We boosted the DQ channels from 2kHz to 64kHz (commissioning frame only) and added some EpicsIn parts at the top level to get through this error. We will investigate this further next week.

The new model was added to the DAQ and the DAQ was restarted. I have added it to the CDS ENG overview MEDM screen, I'll add it to the rest later.

H1 SUS (ISC)
nutsinee.kijbunchoo@LIGO.ORG - posted 17:52, Friday 17 July 2015 (19720)
Update to the complete violin mode study

A while ago I made a "complete" violin mode table here. Matt and Jeff were curious whether or not the frequencies belong to the correct test masses. So I went through the violin mode damping filters to see which ones work and which don't. So far I was able to confirm that 24 out of 32 frequencies belong to the correct test masses. The filters were able to damp most of them. Three frequencies got rung up. The rest were still inconclusive (either the damping phases were off by 90 degrees or the test masses were wrong).

 

Frequency Test mass Filter Does the filter work? Note
         
500.054 ITMX MODE3 yes  
500.212   MODE3 yes  
501.092 ITMX MODE6 yes  
501.208   MODE3 yes  
501.254 ITMX MODE3 yes  
501.450   MODE3 yes  
502.621 ITMX MODE3 NO  
502.744   MODE3 yes  
         
503.007 ITMY MODE3 yes  
503.119   MODE1 yes  
504.803 ITMY MODE4 NO  
504.872   MODE4 yes  
501.606 ITMY MODE5 NO Rung up!
501.682   MODE5 yes  
501.749 ITMY MODE6 yes  
501.811   MODE6 NO  
         
507.992 ETMY MODE5 NO  
508.146   MODE5 yes  
508.010 ETMY MODE5 yes  
508.206   MODE5 yes  
508.220 ETMY MODE5 NO  
508.289   MODE5 NO  
508.585 ETMY MODE5 yes  
508.661   MODE5 NO  
         
505.587 ETMX MODE6 yes  
505.707   MODE6 NO  
505.710 ETMX MODE4 yes  
505.805   MODE4 NO Rung up!
506.922 ETMX MODE6 yes  
507.159   MODE6 yes  
507.194 ETMX MODE6 NO Rung up!
507.391   MODE6 yes  

 

Below is the table of the violin mode damping filters used by Guardian and the frequencies they are supposed to damp:

w0 (Hz) wc (Hz) wc-w0 (Hz) Filter (All FM1) Test mass Frequency covered
506 513 7 MODE5 ETMY All ETMY
           
505.78 505.9 0.12 MODE4 ETMX 505.710, 505.805
502 520 18 MODE6 ETMX The rest of ETMX (cheater…)
           
485.7 506.4 20.7 MODE3 ITMX All ITMX
501.05 501.11 0.06 MODE6 ITMX 501.092
           
503.08 503.16 0.08 MODE1 ITMY 503.119
502.96 503.06 0.1 MODE3 ITMY 503.007
504.86 504.91 0.05 MODE4 ITMY 504.803, 504.872
501.63 501.7 0.07 MODE5 ITMY 501.606, 501.682
501.71 501.85 0.14 MODE6 ITMY 501.749, 501.811

 

Both tables are included in the excel file attached below.

Images attached to this report
Non-image files attached to this report
H1 GRD
jameson.rollins@LIGO.ORG - posted 17:42, Friday 17 July 2015 - last comment - 00:07, Sunday 19 July 2015(19723)
Guardian core upgraded to point release to address "double main" bug

Guardian core upgraded to fix "double main" execution bug.

I have just installed a new version of Guardian core:

guardian r1449

It addresses the "double main" execution bug that has been plaguing the system.  See guardian bug 879, ECR 1078.

The new version is in place, but none of the guardian nodes have been restarted yet to pull in the new version.

You can either manually restart the nodes with 'guardctrl restart', or just try rebooting the whole guardian machine.  I might start with the former, to just target the important lock acquisition nodes (ISC_LOCK, etc.), and wait until Tuesday maintenance for a full restart of the Guardian system.

Comments related to this report
evan.hall@LIGO.ORG - 00:07, Sunday 19 July 2015 (19735)

ISC_LOCK and ISC_DRMI were restarted around 2015-07-19 07:07:00 Z.

LHO FMCS
bubba.gateley@LIGO.ORG - posted 16:35, Friday 17 July 2015 (19719)
Beam Tube Enclosure Joint Repair on the X-Arm
Chris S. Joe D.

The crew installed metal strips on the top of 350 meters of tube enclosure joints this week for a total of 1075 meters of enclosure from the corner station on the X-Arm. 
LHO VE
bubba.gateley@LIGO.ORG - posted 16:26, Friday 17 July 2015 (19717)
Beam Tube Washing
APOLOGIES FOR NOT REPORTING FOR THE PAST WEEK

Scott L. Ed P. Rodney H.

This report will cover 7/13-7/17 dates inclusive.

 The crew cleaned a total of 358.7 meters of tube this week. Test results for the week also shown here.

 We added another generator that we had on site to the cleaning operation, so the third man could be vacuuming the support tubes and pre-cleaning the egregiously dirty areas of the tube. This seems to have increased productivity, as seen by the almost-72-meters-per-day average.

Scott L. will be on vacation next week, so to hopefully keep up with the current pace I am bringing out another Apollo employee who is very familiar with the site. Mark Layne will be filling in for Scott next week.
 
Non-image files attached to this report
H1 CDS (CDS, VE)
patrick.thomas@LIGO.ORG - posted 16:25, Friday 17 July 2015 (19718)
Cathodes remotely turned off at end stations
These are NOT the cathodes used as interlocks for the high voltage.

For both end stations:
I logged into the Beckhoff computer. I went to the 'CoE - Online' tab for the Inficon gauge labeled 'Pressure Gauge NEG (BPG 402)' in the system manager. In index FB44:01, 'Emission ON / OFF Command: Command', I entered 00 02 in the Binary box. I then verified that index 6015:05, 'Input Hot Cathode Ion: Emission Status Off/On Module 2' had changed from TRUE to FALSE.

This was done around 11:35 PDT.

Richard will go to the end stations and verify that they are off on Monday.
LHO General
patrick.thomas@LIGO.ORG - posted 16:13, Friday 17 July 2015 (19716)
Ops Summary
Cheryl, Patrick, TJ, Ed

The ETMY LR RMS WD was tripped when I came in. I reset it by writing 0 and then 1 to it. Jim W. and I switched the ITMY, ITMX and BS ISI blends from Windy_90 to Quiet_90. The mode cleaner was not locking because the input power was low. I had to do a search for home with the rotation stage. Spent most of the day keeping the IFO at DC power for commissioners. Reloaded Guardian a couple of times for script changes.

09:18 Richard to roof
09:24 Jason and Peter taking diode box into PSL diode room
09:35 Richard off roof
09:43 Jason and Peter done
09:50 ETMX ISI WD tripped, indicated payload trip, but no WD trip on SUS or TMS
10:51 Richard to roof
11:35 I remotely turned off the cathodes at both end stations (WP 5363)
12:09 Pepsi truck through gate
15:44 Dave installing h1susetmypi model (WP 5365)
15:48 Jeff K. restarting h1susomc model (WP 5366)

Currently Stefan has the IFO and is working on ASC.
H1 SUS (ISC)
jeffrey.kissel@LIGO.ORG - posted 16:09, Friday 17 July 2015 (19714)
OMC ASC Signals Routed through Standard ISC Paths
J. Kissel, S. Dwyer
WP #5366

Continuing to pursue OMC ASC diagonalization (see LHO aLOG 19691), I've made changes to the top level of the OMC SUS front end model such that the ISC signals go through the originally intended ISC path, i.e. through the ISCINF, LOCK, and DRIVEALIGN banks. This is such that we can *use* the drivealign matrix to decouple L, P and Y drive. I've made the change in such a way that this is only a top-level model change, and does not impact any library parts. Sadly this means that the implementation is rather ugly, but if the new scheme is successful, we'll submit an ECR to clean up the model and install the scheme properly during a maintenance day. 

I've saved, compiled, installed, restarted the model, confirmed that all settings have been restored as expected, confirmed alignment sliders at the same value, and that the "new" (or remapped) drive signals arrive in the expected banks as expected. Since the former paths were not disconnected, this change is entirely backward compatible; all previous alignment schemes will still work.

The development of the control filter implementation has now been handed off to Sheila.
Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 12:01, Friday 17 July 2015 - last comment - 17:36, Friday 17 July 2015(19709)
H1 better low freq performance

Matt found some data from last night that looks pretty good - I'm not sure what the state of the IFO was at this particular time, so I won't say.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 13:04, Friday 17 July 2015 (19711)

Brute force coherence report for this period can be found here:

https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1120811417/

sheila.dwyer@LIGO.ORG - 15:38, Friday 17 July 2015 (19713)

Our NOMINAL_LOW_NOISE state now includes BS coil drivers switched, and SRCL and MICH FF.  A2L coefficients were tuned before the vent, but not carefully since then.  We also had the ISS 2nd loop on at this time, which helped the noise around 300 Hz.

rana.adhikari@LIGO.ORG - 17:36, Friday 17 July 2015 (19721)

We have seen that low-frequency noise breathe somewhat; the noise was already low around 70 Hz when we switched on the SRC FF (the old filters). We have taken a few measurements with better coherence and with better fitting code, and will soon get a bit more subtraction. The high-frequency noise is mysteriously less good than our best. The DARM offset was giving us 20 mA total OMC DC current. We have not yet succeeded in being stable at higher offsets.

H1 DetChar (DetChar, ISC, SUS)
sheila.dwyer@LIGO.ORG - posted 01:56, Friday 17 July 2015 - last comment - 19:46, Friday 17 July 2015(19696)
Lock Losses need some investigation

The following are a list of lock loss messages in the Guardian log. We've had a bunch of locklosses during the transition from locked to low noise this evening. As you can see there are a few different culprits, but one of the big ones is LOWNOISE_ESD_ETMY. It would be handy if someone can check out these lock losses and home in on what precisely went bad during this transition (e.g. ramping, switching, etc.). Then we can get back to SRCL FF tuning.

2015-07-17 06:47:05.694550  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 06:48:22.162260  ISC_LOCK  LOCKING_ARMS_GREEN -> LOCKLOSS

2015-07-17 07:14:34.431170  ISC_LOCK  NOMINAL_LOW_NOISE -> LOCKLOSS

2015-07-17 07:26:40.249110  ISC_LOCK  CARM_10PM -> LOCKLOSS

2015-07-17 07:34:59.720030  ISC_LOCK  PREP_TR_CARM -> LOCKLOSS

2015-07-17 07:42:08.269350  ISC_LOCK  LOCKING_ALS -> LOCKLOSS

2015-07-17 08:02:41.773620  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 08:21:58.665420  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

2015-07-17 08:31:29.035330  ISC_LOCK  REDUCE_CARM_OFFSET_MORE -> LOCKLOSS

2015-07-17 08:48:32.514870  ISC_LOCK  LOWNOISE_ESD_ETMY -> LOCKLOSS

 
Rana, Sheila
Comments related to this report
keita.kawabe@LIGO.ORG - 11:01, Friday 17 July 2015 (19705)

Guardian error causing lock loss in LOWNOISE_ESD_ETMY (Evan, Keita)

Summary:

Out of four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, one lock loss (15-07-17-06-47-05) was due to the guardian running main() of LOWNOISE_ESD_ETMY twice.

Running main() twice (sometimes, but not always) is apparently a known problem of the guardian, but this specific state is written such that running main() twice is not safe.

Details:

Looking at the lock loss, I found that the ETMY_L3_LOCK_L ramp time (left of the attached, red CH16) was set to zero at the same time as, or right after, the ETMX and ETMY L3 gains (blue ch3 and brown ch5) were set to their final values (0 and 1.25 respectively). There was a huge glitch in the EY actuators at that point, but not in EX.

This transition is supposed to happen with the ramp time of 10 seconds, so setting the ramp time to 0 after setting the gain kills the lock.

Looking at the guardian code (attached right), the ramp time is set to zero at the beginning and set to 10 at the end.

Evan told me that main() could be executed twice, so we looked at the log (attached middle), and sure enough, right after LOWNOISE_ESD_ETMY.main finished at 2015-07-17T06:46:50.39059, the gain was set to zero again.
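Until the core fix is deployed, a defensive way to write such a state is to make main() a no-op if the transition has already completed. This is only a sketch (a plain dict stands in for the ezca object, and the channel names and target gain are illustrative), not the actual ISC_LOCK code:

```python
TARGET_GAIN = 1.25  # illustrative final EY L3 gain

def main(ezca):
    """Guardian-style main() written to tolerate a spurious second run:
    if the gain transition already completed, return before touching
    the ramp time again."""
    if ezca.get('SUS-ETMY_L3_LOCK_L_GAIN') == TARGET_GAIN:
        return  # already done; a repeated main() is now harmless
    ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 0    # instantaneous setup writes
    # ... other fast channel writes would go here ...
    ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 10   # restore the 10 s ramp
    ezca['SUS-ETMY_L3_LOCK_L_GAIN'] = TARGET_GAIN
```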

Images attached to this comment
jameson.rollins@LIGO.ORG - 11:55, Friday 17 July 2015 (19708)

I have identified the source of the double main execution and have a patch ready that fixes the problem:

https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=879#c7

If needed we can push out a point release essentially immediately, maybe during next Tuesday's maintenance period.

keita.kawabe@LIGO.ORG - 14:09, Friday 17 July 2015 (19712)

Bounce rang up during the EX-EY transition gain ramping, 3 out of 4 times last night.

In three out of four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, the guardian made it all the way to the gain ramping at the end, and it did not run main() twice.

However, about 7 to 8 seconds after the ramping started, 9.8Hz oscillation built up in DARM, then there came fast glitches in ETMY L2 drive, then the IFO lost lock. 

This looks like a bounce mode, but I have no idea why it suddenly rang up.

See attached. The first attachment shows the very end of the lock losses, which clearly shows the DARM oscillation.

The second plot shows the same lock losses zoomed out, so you can see that each lock loss happened 7 to 8 seconds after the ramping started.

The last attachment shows one of the DARM oscillation so you can see that 6 cycles = 0.309 seconds (i.e. 9.8Hz signal).

Images attached to this comment
keita.kawabe@LIGO.ORG - 19:46, Friday 17 July 2015 (19725)

Update: After the bounce mode was rung up, the OMC DCPDs saturated before the IFO lost lock.

In the attached, while the 9.8 Hz oscillation was getting bigger (top left), if you high-pass DARM_IN1_DQ (middle left) you can see that the high-frequency part, dominated by 2.5 kHz, suddenly quenched at about t=18 sec.

The same thing is observed in the OMC DCPDs (middle middle and bottom middle), and even though we don't have a fast channel for the DCPD ADCs, it seems like they were very close to saturation at 18 sec (bottom left).

Though we don't know why the 9.8 Hz mode was excited, at least we know that the DCPDs saturated, causing the lock loss.

Since the same thing happened 3 times, and each time it was 7 to 8 seconds after the ETMX and ETMY L3 LOCK_L gain started ramping, you could set the gains to the values corresponding to this in-between state, keep it there for a minute or so, and see if the IFO can stay locked. If you fail to keep it locked it's a sure sign that this instability is somehow related to the L3 actuator balance between X and Y, or L3-L2 crossover in Y (or in X) or both.

The in-between gain would be something like 1.1 for EY L3 lock and 0.125 for EX.

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:18, Thursday 16 July 2015 - last comment - 16:23, Friday 17 July 2015(19692)
PI at 15734 in Y arm

We were locked at 24 W for just over 2 hours before we rang up a PI that shows up in the Y arm QPDs at 15734 Hz.  I increased the ring heater power (for both arms) from 0.5 to 0.6 W.  A template with the QPD IOP channels is attached.  I tried to reduce the power, but we lost lock when I did that, perhaps because the ISS second loop was on.  The lock loss was at about 3:00 UTC on July 17.

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 01:03, Friday 17 July 2015 (19694)DetChar

We suspect this was not a PI, but rather the roll mode.

It would be useful if someone could track down which optic this was by looking at the roll mode peak RMS trends and looking to see if it in fact did saturate any of the actuators.

Rana

edmond.merilh@LIGO.ORG - 09:44, Friday 17 July 2015 (19700)
nutsinee.kijbunchoo@LIGO.ORG - 09:52, Friday 17 July 2015 (19701)

ETMX ring heater has asymmetrical heating at the moment (0.5W upper ring 0.6W lower ring). Not sure if you'd like to keep the setting so I'm leaving it there....

sheila.dwyer@LIGO.ORG - 16:23, Friday 17 July 2015 (19715)

Matt, Sheila

 Matt looked at this lock this morning, and saw that although the ROLL mode might have increased in the last few minutes, it likely wasn't the culprit.  However, there was a line at 1055 Hz that appeared and grew in the last 20 minutes of the lock, shown in the attached screenshot.  This would indicate that the PI could be at 15329 or 17439, so this is a new PI for us.  (Past incidents: alog 17903 and alog 18965.)  As far as I know this is also a different frequency from what has been seen at LLO.
Unfortunately, in my hurry to grab some fast channels for the QPDs, I used the LLO channel names, but we are using a different ADC, so I got the wrong channels.  So we don't really know which arm this was in.

I've made a template that anyone who suspects that a PI is rung up can run:

/ligo/home/sheila.dwyer/ParametricInstabilities/PI_IOP_template.xml

The asymmetry in the ring heater was my mistake.
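The candidate frequencies follow from aliasing: a mode at f in data sampled at fs shows up at |n*fs - f| in band. A quick helper (a sketch; a 16384 Hz sample rate is assumed for the channel showing the 1055 Hz line):

```python
FS = 16384  # assumed sample rate (Hz) of the channel with the 1055 Hz line

def alias_candidates(f_obs, fs=FS, n_max=1):
    """True frequencies n*fs +/- f_obs that would alias down to f_obs."""
    cands = []
    for n in range(1, n_max + 1):
        cands.extend([n * fs - f_obs, n * fs + f_obs])
    return sorted(c for c in cands if c > 0)

print(alias_candidates(1055))  # [15329, 17439], matching the candidates above
```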

Images attached to this comment
H1 SUS (SUS)
leonid.prokhorov@LIGO.ORG - posted 19:06, Tuesday 14 July 2015 - last comment - 17:39, Friday 17 July 2015(19645)
Night OPLEV Charge measurements procedure
I wrote a script for the OPLEV charge measurements which sets most of the settings and is easy to use. It runs for ~2.5 hours.
If you are the last person leaving LHO at night - please run it!

Instructions:
1. Set ISC_Lock state to DOWN
2. Set both ETMs to "ALIGNED" state
3. Align the optical levers (pitch and yaw) for both arms to 0 +/- 0.5 urad
4. Run the scripts: scripts directory is /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts
run the python files: ./ESD_Night_ETMX.py and the second script in another terminal: ./ESD_Night_ETMY.py

If it works, it:
a) In the first ~30 seconds, sets the channels and can warn about troubles with the ESD or alignment. If that happens, check this system and press 
b) During the measurements, it should print "Receiving data for GPS second: 1234567" once a second.
c) After all the measurements, it should restore all the settings.

If it gives errors but still receives some data, let it run.
If it obviously does not work, you can try to run it again. If that does not help, please restore all settings by running ./ESD_Restore_Settings.py.

Since this is the first try, today I will be very thankful if you check the following after running the scripts:
1. L3 Lock:
   Bias Voltage - 0
   Offset - green light
   Ramp time - 5s
2. ESD linearization:    Bypass - ON
3. For ETMY:   turned on Hi-voltage driver.
Just in case: the scripts modify only these settings.
Comments related to this report
leonid.prokhorov@LIGO.ORG - 17:39, Friday 17 July 2015 (19722)
1. The script was updated to align the optical levers. If you did not align them, the script will do it between (a) and (b), i.e. in the first minutes. Measurements begin once the OPLEVs are aligned.
2. If you need to stop the charge measurements, there are two ways. 1. (Preferable) Press 'Enter' - it will stop the script no later than in 12 minutes, when all the biases and quadrants are done for this cycle. It will restore all the settings. 2. You can break it using Ctrl-C immediately, but then you will need to restore the ESD settings using ./ESD_Restore_Settings.py or do it manually. With the second way you also lose, and need to manually restore, the optical lever offsets for the ETMs.
Note: The main charge measurement scripts set all the settings back to where they found them. If you break it and use ESD_Restore_settings.py, it will set all values to "standard".
(!) We are talking about changing the sign of the ESD bias voltage on ETMX. Using ESD_Restore_settings.py will change it to today's value.

