Displaying reports 69161-69180 of 85671.
Reports until 23:45, Thursday 26 February 2015
H1 SUS
keita.kawabe@LIGO.ORG - posted 23:45, Thursday 26 February 2015 - last comment - 23:55, Thursday 26 February 2015(16968)
Better Len2Pit decoupling of ETMX L1 with new top damping

Summary:

Somebody asked for a better L2P from the ETMX L1 stage, so I made an attempt at making one.

With the new top mass damping described in alog 16895, and without OL damping, the measurement was done and the filter was made; nothing was tricky, no hand tuning, no nonsense unstable poles, it just works.

But of course when OL damping is on, this nice L2P decoupling is broken (because the nice thing was made without OL damping).

I can probably make a similarly nice decoupling filter for when the OL damping is on, but the question is exactly when such nice decoupling is necessary. OL damping on? Off? Both?

The ETMX is left in the old configuration.

Details:

EX L1 drivealign L2L to oplev P, and drivealign L2P to oplev P were measured, and the latter was divided by the former. In both of these measurements, uniform noise from 0.1 to 1 Hz was used, and the amplitude was set so each coil outputs several thousand counts or so.

The resulting inversion function between 0.1 and 1.14Hz was fitted using happyvectfit, discarding any bad coherence data. No tuning was done except for selecting the measurement data and setting the fit order to 8.
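For illustration, the division step can be sketched like this (a hedged sketch only: the function and variable names are invented here, and the real fit was done on the complex data with happyvectfit, not by this snippet):

```python
def decoupling_ratio(freqs, tf_l2l, tf_l2p, coh, coh_min=0.9):
    # Divide the (L2P drive -> oplev P) response by the
    # (L2L drive -> oplev P) response, point by point, keeping only
    # bins with good coherence; this ratio is what gets fitted.
    out = []
    for f, a, b, c in zip(freqs, tf_l2l, tf_l2p, coh):
        if c >= coh_min and abs(a) > 0:
            out.append((f, b / a))
    return out

# Example with made-up complex responses; the last bin has bad coherence
# and is discarded before the fit:
freqs = [0.2, 0.5, 1.0]
tf_l2l = [1 + 1j, 2 + 0j, 0.5 - 0.5j]
tf_l2p = [2 + 2j, 1 + 0j, 1.0 - 1.0j]
coh = [0.95, 0.99, 0.5]
ratio = decoupling_ratio(freqs, tf_l2l, tf_l2p, coh)
```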

The resulting L2P filter looks much nicer, Qs are lower, and in general it makes more sense than the ancient filter that was eventually abandoned (first attachment).

The second attachment shows the step response of ETM oplev (blue) when a large length drive is applied to the L1 stage (brown).

Left panel shows that the step response was about an order of magnitude smaller with the new L2P filter than with the old flat gain filter that is used these days.

Right panel shows that when the OL damp is on, this nice reduction is gone: the flat filter and the new filter are about the same, and both are larger than "new" in the left panel.

Note that I had to make a new oldamp filter for this, as the old OLdamp filter is incompatible with the new M0 pit damping.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 23:55, Thursday 26 February 2015 (16969)

If you want to use the new settings:

Turn off the OL damping.

For M0 P damping, change H1:SUS-ETMX_M0_DAMP_P_GAIN from -1 to 0, turn off H1:SUS-ETMX_M0_DAMP_P FM2, turn on FM1, and set the gain to -3. This switches the M0 damping to a more aggressive one.

For L1 drivealign L2P decoupling, change H1:SUS-ETMX_L1_DRIVEALIGN_L2P_GAIN from 5.3 to 1, turn off FM10, turn on FM3.

If you want to enable OL P damping, change H1:SUS-ETMX_L2_OLDAMP_P_GAIN from -6500 to 0, turn off FM10, turn on FM9, then set the gain to -4500.
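The switching sequence above can be sketched in Python. This is a hedged illustration only: the FakeEzca class is invented here so the sequence can be shown and checked end to end; a real session would use ezca or caput on the H1:-prefixed channels. The channel names, gains, and filter-module numbers are from this entry.

```python
# FakeEzca is a stand-in invented for illustration, not the real ezca API.
class FakeEzca:
    def __init__(self):
        self.ch = {}
    def write(self, name, value):
        self.ch[name] = value
    def switch(self, filt, fm, state):
        self.ch[filt + '_' + fm] = (state == 'ON')

ez = FakeEzca()

# M0 P damping: zero the gain before touching filters, then go aggressive.
ez.write('SUS-ETMX_M0_DAMP_P_GAIN', 0)
ez.switch('SUS-ETMX_M0_DAMP_P', 'FM2', 'OFF')
ez.switch('SUS-ETMX_M0_DAMP_P', 'FM1', 'ON')
ez.write('SUS-ETMX_M0_DAMP_P_GAIN', -3)

# L1 drivealign L2P decoupling: unity gain into the new fitted filter.
ez.write('SUS-ETMX_L1_DRIVEALIGN_L2P_GAIN', 1)
ez.switch('SUS-ETMX_L1_DRIVEALIGN_L2P', 'FM10', 'OFF')
ez.switch('SUS-ETMX_L1_DRIVEALIGN_L2P', 'FM3', 'ON')

# Optional OL P damping with the new filter:
ez.write('SUS-ETMX_L2_OLDAMP_P_GAIN', 0)
ez.switch('SUS-ETMX_L2_OLDAMP_P', 'FM10', 'OFF')
ez.switch('SUS-ETMX_L2_OLDAMP_P', 'FM9', 'ON')
ez.write('SUS-ETMX_L2_OLDAMP_P_GAIN', -4500)
```

Note the ordering: each gain is zeroed before its filter modules are toggled, then restored, so the switch never drives through a half-configured filter.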

The new OL damping for the new M0 damping is far from good, but it's stable.

H1 SYS
jameson.rollins@LIGO.ORG - posted 18:37, Thursday 26 February 2015 (16967)
more guardian epicsMutex pthread_mutex_unlock errors encountered

The following nodes have reported the epicsMutex thread error:

epicsMutex pthread_mutex_unlock failed: error Invalid argument
epicsMutexOsdUnlockThread _main_ (0x2999f20) can't proceed, suspending.

node and log file:

ISI_ITMX_ST2/@4000000054e3b1f036d0862c.u
IMC_LOCK/@4000000054efab210f2ebddc.s
SUS_PRM/@4000000054eceebe32bce24c.u
SUS_ETMY/@4000000054eceebe20a258e4.u

Still unclear why this is happening.  It happened most recently on IMC_LOCK yesterday, and the first known incident was with ISI_ITMX_ST2 on February 16, while I was testing the new guardian release before the upgrade.

H1 SYS
jameson.rollins@LIGO.ORG - posted 18:14, Thursday 26 February 2015 (16778)
guardian upgrade overview

All guardian systems have been successfully upgraded to the latest release

This is a long overdue update on the guardian upgrade performed last week.  The current installed versions are:

A "final" bug fix patch was applied during the Tuesday 2/24 maintenance period, after which the guardian machine (h1guardian0) was rebooted.  All nodes recovered without issue.  There have been a couple of small issues that I'll note below.

RELEASE NOTES

OP/MODE split

The functionality of the MODE switch has been split into three independent interfaces:

RELOAD improvements

Code reload now happens seamlessly in the background, without interrupting the current state code execution at all.  The current state is no longer interrupted or restarted.  (Triggered by setting the LOAD momentary switch to 'True'.)

The only known limitation occurs when certain changes to the currently running state are loaded.  If the node is currently in the RUN method of a state and the new code references an attribute or variable that was expected to have been set in MAIN, you will encounter an exception.  You should be able to bypass this problem by re-requesting the current state, which causes the current state to be re-executed from the beginning (i.e. MAIN).
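A plain-Python toy of why this happens and why re-requesting fixes it (no actual guardian code; the class names and the attribute are invented for illustration):

```python
# V1 of a state: main() sets up an attribute, run() uses it.
class StateV1:
    def main(self):
        self.count = 0
    def run(self):
        self.count += 1
        return self.count >= 3

# V2 ("reloaded" code): run() now expects self.limit, set only in main().
class StateV2(StateV1):
    def main(self):
        self.count = 0
        self.limit = 5
    def run(self):
        self.count += 1
        return self.count >= self.limit

node = StateV1()
node.main()                   # node enters the state normally (MAIN)
node.run()                    # ...and is partway through RUN
node.__class__ = StateV2      # code reload while the node sits in RUN

try:
    node.run()                # new run() wants self.limit -> AttributeError
    reload_failed = False
except AttributeError:
    reload_failed = True

node.main()                   # re-requesting the state re-runs MAIN
recovered_ok = node.run()     # RUN now works (returns False: not done yet)
```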

request any STATE in the graph

The REQUEST interface now allows for requesting any state in the system.  "Requestable" states are now only used to populate the REQUEST drop-down menu on the guardian MEDM interfaces. 

A new STATES MEDM screen, accessed via the "all" button next to the REQUEST drop-down or via "guardmedm --states ...", allows for selecting any state in the system. 

The buttons are colored the same as states in the graph.  The "targets" to the right indicate the current STATE (inner), REQUEST (middle), NOMINAL (outer).

The system handles state requests exactly the same, regardless of whether the state is requestable or not.  In EXEC mode, guardian will follow the graph to the requested state, and hold there once it arrives.

NOTE: this is intended only as an aid to commissioning/debugging, so that intermediate states can be requested without having to modify the code to add/remove states from the request list.  However, we should continue to pursue the same philosophy of only making "requestable" the states in which the system is intended to come to rest.  The REQUEST drop-down menu is still intended to be the primary request interface.  This way it will always be clear which states are the intended "final" states of the system.

MANAGER registration and overhaul

Managers now register themselves with their subordinates.  The current manager is recorded in the MANAGER channel, and a new display on the main MEDM control panel displays the current manager.

NOTE: if the user manually overrides this by selecting a different mode, or if another manager steals the node, the managing node will need to be told to go back through a state where it runs set_managed() to re-acquire the subordinates.

MANUAL MODE

When in MANUAL mode, the graph is completely ignored and the REQUEST state is immediately executed, dropping whatever else the node was doing at the time.

NOTE: This mode should only be used with caution by those who understand the system they're controlling.  The graph is there to purposely constrain the dynamics of the system.  Ignoring these constraints can easily put the system into a bad state if you're not careful.

This mode can be accessed via the MANUAL button in the STATES MEDM screen.

"protected" states

A new "redirect = False" flag can be used on states that should never be left until they return True.  This is useful for FAULT states that should not be exited until the fault clears, even if another goto state is selected (e.g. DOWN).
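Sketched in plain Python (GuardState here is a minimal stand-in for guardian's real base class; only the "redirect" attribute name is from this entry):

```python
class GuardState:
    redirect = True           # default: a new request can pull us away

class FAULT(GuardState):
    redirect = False          # hold here until run() returns True
    def __init__(self):
        self.fault_active = True
    def run(self):
        # Only report "done" once the fault has cleared; until then,
        # a goto request (e.g. DOWN) is deferred.
        return not self.fault_active

state = FAULT()
may_leave_before = state.run()   # False: still faulted, hold the state
state.fault_active = False
may_leave_after = state.run()    # True: redirect may now proceed
```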

weighted edges

Edges can now have weights, which can be used to break path degeneracies.  Guardian always chooses paths with the lowest total edge weight sum.
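The path selection amounts to a standard lowest-total-weight graph search, which can be sketched as follows (illustrative only; guardian's internal implementation may differ, and the state names are made up):

```python
import heapq

def lowest_weight_path(edges, start, goal):
    # edges: (from_state, to_state, weight) triples for a directed graph.
    # Returns (total_weight, [states]) for the cheapest path.
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
    heap = [(0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

# Two routes from DOWN to ISOLATED; the weight on the COARSE edge
# breaks the degeneracy in favor of the FINE route.
edges = [('DOWN', 'COARSE', 1), ('COARSE', 'ISOLATED', 2),
         ('DOWN', 'FINE', 1), ('FINE', 'ISOLATED', 1)]
total, path = lowest_weight_path(edges, 'DOWN', 'ISOLATED')
```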

execution time recorded

The execution time of user code is now recorded in the EXECTIME (current execution time) and EXECTIME_LAST (execution time of last cycle) records.  Indicators of these values are now on the main control screen.

automatic code archiving

All usercode is committed to a per-node git code archive upon every restart or reload.  This gives us a complete record of exactly what code was running at any given point in time.

The archive root directory is:

An integer representation of the archive git SHA1 commit id is recorded in the new ARCHIVE_ID channel, which is also displayed on the main and compact control screens:

compact MEDM control screen

A new compact control screen can be accessed via e.g. "guardmedm --compact SUS_ETMX":

setpoint monitoring

A new setpoint monitoring system has been added.  The ezca object now records all EPICS writes performed by the usercode.  If the "ca_monitor=True" flag is set in the module, guardian checks the current value of all setpoint channels to determine if they differ from where they were set by guardian:

If any differences are detected, the SPM box on the control screen goes yellow:

By clicking on the "SPM DIFFS" button, a screen will open showing the current list of differences:

For filter modules, as shown above, just the SWSTAT value is recorded FOR THE ENTIRE MODULE, even if not all buttons are touched by guardian.  This allows guardian to cover the full state of filter modules, since the front end SDF monitoring cannot be told to watch individual states only.  The SPM DIFF screen shows filter engagement differences as shown above.

NOTE: this feature is still experimental, and there are likely kinks that need to be worked out.  In particular, anything that "legitimately" sets values that have been touched by guardian outside of guardian, e.g. a subprocess script or a BURT restore, will cause the SPM to report differences.  This is kind of unavoidable, since there's no way for guardian to know if changes that occur outside of its purview are legitimate or not.
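A toy version of the setpoint-monitor idea (names invented here; the real system hooks this into the ezca EPICS layer, and this sketch also shows how an outside write, e.g. a BURT restore, registers as a diff):

```python
class SetpointMonitor:
    def __init__(self):
        self.setpoints = {}
    def write(self, live, chan, value):
        live[chan] = value            # the actual channel write
        self.setpoints[chan] = value  # ...and the recorded setpoint
    def diffs(self, live):
        # Channels whose live value no longer matches what we set.
        return {c: (v, live.get(c))
                for c, v in self.setpoints.items() if live.get(c) != v}

live = {}                             # stands in for live EPICS values
spm = SetpointMonitor()
spm.write(live, 'SUS-ETMX_L1_DRIVEALIGN_L2P_GAIN', 1.0)

# Something outside guardian (e.g. a BURT restore) changes the channel:
live['SUS-ETMX_L1_DRIVEALIGN_L2P_GAIN'] = 5.3
spm_yellow = bool(spm.diffs(live))    # SPM box would go yellow
```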

Subsystem commissioners should experiment with this feature and report any issues to me.

improved notifications

Notifications (USERMSGs) now cycle through the USERMSG display on the main control window.  There's also a separate USERMSG MEDM screen where each individual message can be viewed.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 17:55, Thursday 26 February 2015 - last comment - 18:16, Thursday 26 February 2015(16964)
Apple Magsafe2 adapters installed in control room using MagCozy tethers

Jim, Dave

We have installed 10 Apple Magsafe2 power adapters in the control room on the Magsafe power cords. To hopefully prevent these from wandering or getting lost, we followed Rana's advice and purchased MagCozy tethers for these, please see attached photographs.

A reminder: please do not use the magsafe cable that is bundled into the iMac-to-Thunderbolt monitor cables. That cable is not long enough to use without pulling on the display port cable, which has led to breakages in the past. If we need more Apple power cords in the control room, please contact me and I will purchase some more.

Images attached to this report
Comments related to this report
lisa.barsotti@LIGO.ORG - 18:16, Thursday 26 February 2015 (16966)
👍
H1 CDS
david.barker@LIGO.ORG - posted 17:43, Thursday 26 February 2015 (16963)
RFM IPC errors on end station SUS models fixed

Jim, Dave

First the good news, the RFM receive error rates on the end station SUS models have been fixed. Now the bad news, we don't know why what we did would fix the problem.

Prior to today's work, the RFM loop was such that the LSC front end communicated directly with the end station SUS front end. The loop continued on to the end station ISC, then back to the corner through OAF and ASC back to LSC. The error rate at both end station SUS systems, for both channels being sent by the OMC model, was about one every 5 seconds. The OMC sender was running at 13 us, so we should have zero errors at the end station. Indeed, we added the very same receivers in the PEM model (running on the ISC front end) and verified a zero error rate there.

Working at EY, we first removed ISC from the RFM loop. The SUS errors went to zero (even though ISC was not in between LSC and SUS; we just shortened the loop by one node).

We reinserted ISC, and the SUS errors came back as before.

We then switched the position of SUS and ISC on the end station RFM switch, so now ISC is between LSC and SUS, and SUS error rate went to zero. We left EY in this configuration.

We then went to EX and repeated the above, with the same results. EX now also has SUS/ISC swapped from this morning.

To check once again that we know the direction the loop takes through the VMIC 5596 RFM switch, we repeated our DTS measurements (see X1 alog for details). We are sure that prior to the change LSC communicated directly with SUS, and now ISC is in between.

After many hours of running we have not had a single RFM IPC error.

Operators should now closely investigate any red blocks on the CDS Overview MEDM. The only ones we would expect are the occasional ADC errors on certain IOP models.

We will try to reproduce the original SUS errors on the DTS to investigate why removing ISC or moving the ISC fixes SUS's problem.

LHO VE
kyle.ryan@LIGO.ORG - posted 17:42, Thursday 26 February 2015 (16962)
Continuing HAM1 pumping/view port activities
Kyle 

Soft-closed GV5 and GV7 -> Resumed rough pumping of HAM1 -> New CDS vacuum signals were added for HAM1 pressure gauge pair (much thanks to Richard M., Filiberto C. and Dave B.) -> Associated vacuum computer(s) had to be rebooted to facilitate this -> CP1 level marginally impacted -> Switched pumping over to HAM1 turbo -> Connected leak detector (LD) and helium leak tested all view ports on HAM1 -> Pressure ~1 x 10^-5 Torr, LD baseline < 5 x 10^-10 Torr*L/sec, sprayed audible flow of helium for 20 seconds on air side of each window (through hole in VP protector) -> No LD response -> Disconnected LD and resumed pumping with HAM1 turbo

For tonight I am pumping HAM1 with its turbo backed by the HAM pump cart and am leaving GV5 and GV7 soft-closed.  
H1 CDS (VE)
david.barker@LIGO.ORG - posted 17:29, Thursday 26 February 2015 (16961)
Added HAM1 vacuum gauge pair to VE EPICS VME slow controls

Richard, Kyle, Filiberto, Patrick, Dave

The h0vely EPICS system was modified to add the records to monitor/control the new Pirani/Cold-Cathode gauge pair connected to HAM1. The HVE-LY:X0.db database file was created and added to the startup.h0vely file. The h0vely VME crate was rebooted and the new database was started. The vacuum MEDM screens were updated to show the new signals.

The H0EDCU_VE.ini file was updated and the DAQ restarted. The Pirani and Cold-Cathode calculated pressures in Torr are being recorded by the DAQ with the channel names HVE-LY:X0_100ATORR and HVE-LY:X0_100BTORR.

The vacuum overview MEDM screen which is being displayed on the CDS Web page was also updated for offsite monitoring.

LHO VE
kyle.ryan@LIGO.ORG - posted 17:28, Thursday 26 February 2015 (16960)
Temporary disruption of commissioning
The realization that I had not completed all of the required steps following Tuesday's pumping of HAM1 has required that I interrupt commissioning.  How soon the Corner Station can be opened to the End Stations is under discussion.  Until then, I am running rotating pumps on and near HAM1.  I am sorry for the inconvenience.  

H1 SEI
hugh.radkins@LIGO.ORG - posted 17:10, Thursday 26 February 2015 - last comment - 14:56, Friday 27 February 2015(16959)
HEPI Pump System Service/Upgrade/Improvement at EndXBSC9 ETMX

Well, unlike at EndY, the pump station maintenance did not pay off as well.  Bottom line: remaining coherences seen in the Z and HP dofs are not reduced as much as they were at EndY after the maintenance.  Possible reason: while I found lots of Accumulators that needed charging (just like at EndY), the explicit grounding of the power supply common legs did not make the Pressure Sensing noise better.  In fact something in this process has made the sensor noise worse; but it wasn't the adding of the ground.

Details:  First I added the explicit jumper from the common legs on the power supply to the supply ground plug, but this had no observable effect on the striptool I was watching.  I figured it was doing no harm.  After Guardian brought ISI/HPI down to OFFLINE, I ramped the pressure down with the servo.  Then the motor was greased and Accumulators were charged: most Accumulators were essentially uncharged, and two on the pump station were leaking.  I was able to play with the Schrader valve and stop the leaks (may be iffy).  After the Accumulators were charged, the system was brought back online and Guardian reisolated with no problems.

See the first attachment for the second trends where the system was down and then back on.  This is where I first saw that the noise on the pressure sensor channel more than doubled.  What happened to cause the noise to change like this?  I plugged the ground wire in before bringing the pump down and didn't see a noise increase.  Is the power supply flaky?  I did have to move the servo box around to access the Accumulators...

It is clear this is bad when you look at the second attachment with the Pump Pressure ASD: it is a few to several factors worse than Sunday (reference traces).  The remaining attachments are the coherences between the Pump Pressure and the HEPI L4C & ISI T240s.  The coherences have improved, suggesting the Accumulators serve us well.  But the improvements are not as good as those seen at EndY, where the Pump Pressure noise dropped by a factor of 5 to 10.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 14:56, Friday 27 February 2015 (16980)

It is clear even from the thumbnail that the RZ had some coherence as well, but it too is now reduced with the Accumulator charging.  The RZ didn't have anywhere near this much coherence at EndY.

H1 SEI
jim.warner@LIGO.ORG - posted 16:03, Thursday 26 February 2015 - last comment - 16:28, Thursday 26 February 2015(16956)
HAM1 HEPI has controls installed, might work

Hugh, Jim

Short version: Control loops have been installed on HAM1 HEPI. It's a hack, it could get ugly. Or it could totally work.

Since unlocking HAM1 HEPI was discussed, Hugh and I decided to see if we could quickly run through the commissioning scripts. I tried to look at the transfer functions we took in June of 2013, but only got errors. When I looked for the mat files, I found that the data had never been collected. Hugh reported having troubles, so we figured that our data collection scripts must have broken and Hugh was never able to get help to fix the problem. For now, we've copied the isolation filters from HAM2 HEPI (the HEPI controllers for HAM are all generic, so they should be okay). There are other things missing: blend filters (which are 1 for the IPS and 0 for the L4C, anyway), actuator symmetrization filters (which are just gains that should be close to 1) and L4C symmetrization filters (not needed). I've also installed the Z sensor correction filter, like we use on the BSC HEPI, which might allow us to get some inertial isolation between 0.1 and 1 Hz. It would be best to have data to at least compare transfer functions to HAM2, but maybe we'll get lucky.

Comments related to this report
hugh.radkins@LIGO.ORG - 16:28, Thursday 26 February 2015 (16957)

And if we get it unlocked and get some time, we could collect the TFs and possibly get these close enough to actually be correct.  The L4C is still a problem, but as long as we don't try any actual isolation (which none of the HEPIs do now), we will be okay.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:02, Thursday 26 February 2015 (16955)
Ops Day Shift Summary
LVEA: Laser Hazard
Observation Bit: Undisturbed   

07:15 Karen & Cris – Cleaning in the LVEA
08:00 Reset Observation Bit to Commissioning
08:12 Jim – Running testing on HAM4 & HAM5
08:14 Filiberto – Cabling vacuum gauge at HAM1
08:15 Elli – Going to HAM1 area to look for parts
08:15 Hugh – Doing HEPI maintenance at End-Y & End-X
08:20 Elli – Out of the LVEA
08:39 Nutsinee – Going to End-X to recycle PCal camera
08:52 Jim – Finished testing
09:12 Nutsinee – Back from End-X
09:23 Adjusted ISS RefSignal to reduce diffracted power
09:41 Kiwamu – Transition LVEA to Laser Safe
09:55 Sudarshan – Installing accelerometers on IOT2R and ISCT6
10:30 Mitch – Working in the LVEA on 3IFO stuff
10:50 Dave & Jim – Moving fibers at End-Y & restart End-Y PEM/PCal model
10:52 Betsy – Making Safe.Snap of all CS suspensions
11:21 Filiberto – Going to Mid-Y to get parts
11:32 Dave & Jim – Back from End-X
11:33 Dave & Jim – Going to End-Y to move fibers
11:52 TJ – Going to End-Y and End-X to drop off tools
12:10 Kyle – Closing GV5 & GV7
12:14 Dave & Jim – Back from End-Y – Restarting PEM models
12:15 Sheila – Transitioning LVEA back to Laser Hazard
12:16 Contractor on site to see Bubba
12:38 Karen – Cleaning at End-Y
12:44 Richard & Filiberto – Working on electronics at HAM1
12:53 Vendor on site to stock snack machines
13:03 TJ – Back from End stations
13:08 Bubba – Going to End-Y to get equipment
13:44 Karen – Leaving End-X
13:45 Mitch – Out of the LVEA
14:05 Stuart & Jason – Running B&K testing of ITM-Y OpLev piers
14:10 Hugh – Going to End-Y to check HEPI
14:15 Richard – Restarting vacuum system computer
14:18 Dave – Restarting vacuum model
14:19 Dick – Going into LVEA to check RF analyzer
14:35 Betsy, Mitch, & Travis – Going into the LVEA
15:05 Doug & Danny – Looking at OpLev piers to prep for grouting 
15:20 Sudarshan – Working on IOT tables
15:25 Doug & Danny – Out of the LVEA
15:25 Doug – Going to End-Y & End-X to check OpLev piers to prep for grouting
15:30 Stuart & Jason – Out of the LVEA
15:52 Sudarshan – Out of the LVEA
H1 SUS
betsy.weaver@LIGO.ORG - posted 14:03, Thursday 26 February 2015 - last comment - 15:44, Friday 27 February 2015(16949)
SDF Monitoring rolled-out for rest of Suspensions

This morning, Stuart and I used the SDF front end to take new SAFE.SNAP snapshots of the balance of the suspensions - recall, he had done the BS previously, see alog 16896.

 

Repeating his alog instructions on how to perform this:

- Transition the Suspension to a SAFE state via Guardian
- On the SDF_RESTORE MEDM screen available via the Suspension GDS_TP screen select FILE TYPE as "EPICS DB AS SDF" & FILE OPTIONS as "OVERWRITE" then click "SDF SAVE FILE" button to push a SDF snap shot to the target area (which is soft-linked to userapps).
- This safe SDF snapshot was then checked into the userapps svn:
/opt/rtcds/userapps/release/sus/h1/burtfiles/
M        h1susbs_safe.snap

 

Note, please don't use the BURT interface to take SAFE.SNAP snapshots anymore, as the SDF monitoring will become disabled.  All other snapshots are free to be taken via BURT, however.

 

Also, we found many alignment values not saved on IFO_ALIGN which, after confirming with the commissioners, we saved.

Comments related to this report
stuart.aston@LIGO.ORG - 16:36, Thursday 26 February 2015 (16958)
[Betsy W, Stuart A, Jamie R]

After taking safe SDF snapshots for the IM Suspensions, we found that Guardian had crashed for IM2 and IM3 when we attempted to transition them from SAFE to ALIGNED states. Oddly, IM1 and IM4 still transitioned fine.

Upon checking the Guardian log it was apparent it was falling over for IM2 & IM3 immediately after reading the alignments from the *.snap files. On initial inspection there was nothing obviously different or wrong with the IM2 & IM3 alignment files.

After contacting Jamie, he noticed an extra carriage return at the end of the IM2 & IM3 alignment files, which was causing issues for Guardian. Removal of the carriage return, and reloading Guardian rectified the problem.
jameson.rollins@LIGO.ORG - 18:00, Thursday 26 February 2015 (16965)

To be clear, the IM2/3 guardian nodes hadn't crashed, they had just gone into ERROR because of the snap file parsing issue.

Also to be clear, there is a bug in the ezca.burtwb() method, which is being used to restore the alignment offsets, such that it doesn't properly ignore blank lines.  This will be fixed.

Unclear why these alignment snap files had these extra blank lines.  My guess is that they were hand edited at some point.
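A sketch of the parsing fix (hedged: this uses a simplified one-channel-per-line format and an invented function name; the real restore goes through ezca.burtwb() and full BURT syntax):

```python
def read_alignment_snap(text):
    # Skip blank lines and comments so stray trailing newlines or
    # carriage returns cannot derail the restore.
    offsets = {}
    for raw in text.splitlines():
        line = raw.strip()            # also eats a bare '\r'
        if not line or line.startswith('#'):
            continue
        parts = line.split()
        offsets[parts[0]] = float(parts[-1])
    return offsets

# Example snap text with a \r\n line ending and a trailing blank line,
# the kind of thing that tripped the old parser:
snap = ("H1:SUS-IM2_M1_OPTICALIGN_P_OFFSET 1 -55.2\r\n"
        "H1:SUS-IM2_M1_OPTICALIGN_Y_OFFSET 1 103.7\n"
        "\n")
offsets = read_alignment_snap(snap)
```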

betsy.weaver@LIGO.ORG - 15:44, Friday 27 February 2015 (16985)

Above, when I said  "Note, please don't use the BURT interface to take SAFE.SNAP snapshots anymore, as the SDF monitoring will become disabled.  All other snapshots are free to be taken via BURT, however."

I was just speaking to SUS safe.snaps.  Sorry for the confusion.

H1 SUS
travis.sadecki@LIGO.ORG - posted 13:37, Thursday 26 February 2015 (16952)
SUS Driftmon updated again

I have updated the SUS Driftmon again with the values from the middle of the most recent good lock stretch (1108983616 GPS).  As of the update time, all SUSes are in the green, with the exception of the OMs, RMs, and OMC, whose alarm values have yet to be evaluated.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 13:32, Thursday 26 February 2015 (16951)
24 Hour OpLev Trend
Took 24 hour OpLev trend measurements. 
Images attached to this report
H1 ISC
gabriele.vajente@LIGO.ORG - posted 11:40, Thursday 26 February 2015 - last comment - 14:26, Thursday 26 February 2015(16942)
Wandering line in DARM

Last night we noticed, looking at the real-time spectrum of DARM, that there was a wandering line. The attached spectrograms show the peculiar behavior: about every 270 seconds (not regular) this line enters the spectrum from the high frequency range and moves down in a quite repeatable way (the frequency has a nearly perfect exponential evolution with time). Then there is some sort of burst of noise before the line starts again from the high frequency.

This behavior seems different from the wandering line related to IMC-F seen at Livingston.

Images attached to this report
Comments related to this report
thomas.massinger@LIGO.ORG - 12:02, Thursday 26 February 2015 (16943)DetChar

I was just looking at the same feature. The burst of noise looks like a beatnote whistle, similar to what we saw at Livingston with IMC-F. At first glance, it looks like the whistle is occurring when the drifting signal crosses through the OMC length dither at 3.3 kHz. I'm attaching a few spectrograms zoomed to various levels to look more closely at the feature. The frequencies look discrete when you zoom in; it doesn't seem to be a continuous signal. Was there some kind of swept sine injection that was unintentionally left on during the lock?

Images attached to this comment
thomas.massinger@LIGO.ORG - 12:19, Thursday 26 February 2015 (16944)DetChar

I plotted a spectrum long enough to catch all of the frequencies of the signal as it swept down. The placement of frequencies seems more sparse at higher frequencies and becomes more densely packed as it dips below the kHz range.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 13:16, Thursday 26 February 2015 (16945)

The feature is visible in the REFL signals as well, hinting at something going on in the laser. It's also visible in LSC-IMCL and LSC-REFL_SERVO_ERR.

Images attached to this comment
thomas.massinger@LIGO.ORG - 13:27, Thursday 26 February 2015 (16950)DetChar

This feature is showing up in MICH, SRCL, and PRCL. It's more faint in MICH, but is very strong in PRCL and SRCL. It's also showing up in the input to BS M3 LOCK filter for the length DoF, but it looks like MICH was being used to feed back on the BS position. I didn't see any evidence of the signal in MC2 trans, IM4 trans, IMC-F, or the IMC input power.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 14:26, Thursday 26 February 2015 (16953)

Problem solved: an SR785 was connected to the excitation input of the common mode board, and the excitation was on. We disabled the excitation input from the common mode board MEDM screen.

H1 ISC
evan.hall@LIGO.ORG - posted 01:23, Thursday 26 February 2015 - last comment - 14:32, Thursday 26 February 2015(16931)
Closing common ETM alignment loops

Sheila, Alexa, Gabriele, Evan

Summary

In addition to the differential ETM loops, we now have closed the common ETM degrees of freedom using REFLA9I + REFLB9I. These loops are slow, with bandwidths of a few tens of millihertz.

Details

Previously (LHO#16883), we had closed loops around IM4 in order to reduce the amount of reflected light into REFL_A_LF. However, tonight we decided instead to close the common ETM DOFs, so that the ETMs are nominally controlled in all four angular degrees of freedom. This (hopefully) leaves us free to pursue more loop-closing with the corner optics.

The common ETM loops are implemented in the CHARD filter modules. These modules are stuffed with the same filters as for their DHARD counterparts.

Comments related to this report
sheila.dwyer@LIGO.ORG - 01:40, Thursday 26 February 2015 (16933)

This is a screen shot of QPDs durring a well aligned lock tonight.  

Images attached to this comment
sheila.dwyer@LIGO.ORG - 02:28, Thursday 26 February 2015 (16934)

Since about 10:22 UTC Feb 26th, the IFO has been locked on DC readout with 4 ASC loops closed: DHARD PIT+YAW and CHARD PIT+YAW.

We are leaving this locked with the intent bit undisturbed.  

lisa.barsotti@LIGO.ORG - 08:26, Thursday 26 February 2015 (16938)
For the record, ~3 h lock
Images attached to this comment
evan.hall@LIGO.ORG - 14:32, Thursday 26 February 2015 (16954)

For this lock stretch the ETM ASC loop settings were a bit different from what I said above:

  • All filter modules: FM3, FM4, FM7, FM8, FM9, FM10
  • Gains were 8 ct/ct diff pitch, 30 ct/ct diff yaw, -4 ct/ct comm pitch, -5 ct/ct comm yaw.