H1 PSL
keita.kawabe@LIGO.ORG - posted 19:11, Monday 25 April 2016 - last comment - 11:44, Tuesday 06 September 2016(26782)
PSL-ISS_PDA_CALI filters don't make sense, new ones made

Summary:

The filter configuration for the ISS 1st loop RIN channels (H1:PSL-ISS_PDA_REL_OUTPUT and PDB) doesn't look good. It seems we have somehow chosen to use the inferior of the two filters in place for both "CALI_AC" and "CALI_DC", assuming that D1001998-V2 (PD circuit diagram) and D1001985-V2 (ISS circuit diagram) are correct.

I made a somewhat better filter for "CALI_AC". For "CALI_DC" it's probably good enough to use the one we're not using. I loaded the coefficients for the new filter file for H1PSLISS from the H1PSLISS_GDS_TP MEDM screen.

This doesn't affect the loop because these channels are not in the feedback path, but getting these filters right makes it easier to compare the different RIN channels.

Details:

Reading D1001998-V2, D1001985-V2 and T0900630, the analog part of the monitor outputs feeding the PSL-ISS_PDA and PSL-ISS_PDB ADC channels has analog z, p and k of [0.0723; 2700; 0.0707] Hz, [3.3607; 130; 3.12; 2300] Hz, and 0.2; all of this is in the PD box (the ISS box is just a pass-through as far as these outputs are concerned). These are DC coupled.

After it is received in the front end model, the signal is first converted to volts (but not dewhitened) by PSL-ISS_PDA, and then distributed to PSL-ISS_PDA_CALI_DC and PSL-ISS_PDA_CALI_AC. In the DC path the signal is low-passed; in the AC path it is dewhitened and high-passed, and AC is divided by DC to give RIN (first attachment).

In the CALI_DC path there are two filters, FM1 and FM2, and only FM1 is used (second attachment, left). I also plotted the inverse of the analog whitening transfer function, including the DC gain, in the same attachment. FM1 looks like a strange dewhitening filter; nothing wrong with that, but I don't see any reason to prefer FM1 over FM2. Just use FM2.

In the CALI_AC path there are also two filters, FM3 and FM4 (second attachment, right). Compared with the inverse of the analog response, it seems the one in use (FM3) underestimates the RIN by 18 dB at 1 kHz. FM4 looks better, but it's DC coupled. I made a new filter (green) and put it in FM5.
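For anyone who wants to redo this comparison, here is a minimal sketch (not the foton design itself) of how the inverse of the analog whitening can be computed from the zero/pole/gain values quoted above; treating the 0.2 as the overall zpk gain is my assumption (it may instead be meant as the DC gain, in which case the normalization changes).

import numpy as np
from scipy import signal

zeros_hz = [0.0723, 2700.0, 0.0707]
poles_hz = [3.3607, 130.0, 3.12, 2300.0]
k = 0.2  # assumed to be the overall zpk gain (see caveat above)

# analog zeros/poles in rad/s, left-half-plane convention
z = -2 * np.pi * np.array(zeros_hz)
p = -2 * np.pi * np.array(poles_hz)

f = np.logspace(-2, 4, 1000)                          # 0.01 Hz to 10 kHz
_, h_whiten = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)

# a correct CALI (dewhitening) filter should approximate 1/h_whiten
h_dewhiten = 1.0 / h_whiten
print('inverse whitening magnitude at 1 kHz: %.3g'
      % np.interp(1e3, f, np.abs(h_dewhiten)))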

It's kind of odd that we're keeping the DC gain factor in two different places (CALI_DC and CALI_AC) instead of upstream. However, moving this gain upstream affects PDASTAT (see the first attachment again), so I'll not fix this.

Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 11:44, Tuesday 06 September 2016 (29496)

I have enabled the filters that Keita created/recommended. The attached screenshot shows the new settings for CALI_AC and CALI_DC for both PDs. Additionally, I changed the sign of the CALI_ACs in order to make them consistent with the CALI_DCs, which had a minus sign in the gain field.

The SDF table is updated accordingly.

Images attached to this comment
H1 CDS (VE)
james.batch@LIGO.ORG - posted 17:29, Monday 25 April 2016 (26780)
Modified web page for MEDM vacuum detail screens
Changed the web view of vacuum MEDM screens to use the new Beckhoff MEDM screen for Mid Y in place of the old screen.
H1 PSL
patrick.thomas@LIGO.ORG - posted 16:23, Monday 25 April 2016 - last comment - 16:44, Monday 25 April 2016(26776)
PSL Weekly Report
Laser Status:
SysStat is good
Front End power is 30.32 W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is GREEN

PMC:
It has been locked 1.0 day(s) 21.0 hour(s) and 31.0 minute(s) (should be days/weeks)
Reflected power is 15.98 W and PowerSum is 120.3 W

FSS:
It has been locked 0.0 day(s) 2.0 hour(s) and 13.0 minute(s) (should be days/weeks)
TPD[V] = 3.935 V (minimum 0.9 V)

ISS:
The diffracted power is around 29.4% (should be 5-9%)
Last saturation event was 0.0 day(s) 2.0 hour(s) and 14.0 minute(s) ago (should be days/weeks)
Comments related to this report
peter.king@LIGO.ORG - 16:32, Monday 25 April 2016 (26777)
Note the diffracted power number is not calibrated any more. Even though it might indicate that the diffraction is over 20%, we are not diffracting 30 W.
patrick.thomas@LIGO.ORG - 16:43, Monday 25 April 2016 (26778)
Is the length of time the FSS is locked useful, since it loses lock with each IFO lockloss?
patrick.thomas@LIGO.ORG - 16:44, Monday 25 April 2016 (26779)
Also, is a minimum of .9 V for the TPD still true?
LHO General
patrick.thomas@LIGO.ORG - posted 16:04, Monday 25 April 2016 (26775)
Ops Day Summary
15:05 UTC Jeff B. to cleaning area to put away garb
15:11 UTC Jeff B. done
15:14 UTC Film crew on site
15:49 UTC Starting initial alignment
16:03 UTC Kyle to HAM 11 and HAM 12 to look at pumps
~16:30 UTC Richard and Filiberto starting Beckhoff vacuum upgrade at mid Y
Finished initial alignment
17:16 UTC Restarted video2 (MEDM screens updating very infrequently)
17:26 UTC Kyle back
18:02 UTC Richard and Filiberto done with Beckhoff vacuum upgrade at mid Y
Nutsinee turning on HWS X,Y
19:07 UTC DAQ restart for mid Y Beckhoff vacuum channels
19:35 UTC Received phone call for Hanford monthly alert test
19:47 UTC Film crew driving down an arm to record footage
20:18 UTC Sheila restarting ASC model (WP 5840)
20:27 UTC DAQ restart for SUSETMXPI and ASC model changes
20:36 UTC Travis and Betsy to LVEA to retrieve equipment
20:47 UTC Travis and Betsy back
20:51 UTC Film crew done recording beam tube footage
20:57 UTC Guardian node testing (WP 5841)
21:21 UTC Film crew to roof
21:22 UTC Gerardo to CP3 to overfill
21:51 UTC Gerardo done
LHO General
thomas.shaffer@LIGO.ORG - posted 16:02, Monday 25 April 2016 (26774)
Ops Eve Shift Transition

TITLE: 04/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 14mph Gusts, 7mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.31 μm/s
QUICK SUMMARY: Commissioners hard at work.

 

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 15:11, Monday 25 April 2016 (26772)
Manually over-filled CP3 at 21:36 utc

LLCV bypass valve 1/2 open, exhaust bypass valve fully open.
Flow was noted after 57 seconds; closed the LLCV valve, and 3 minutes later the exhaust bypass valve was closed.

H1 PSL (PSL)
patrick.thomas@LIGO.ORG - posted 15:09, Monday 25 April 2016 (26771)
Weekly PSL Chiller Reservoir Top-Off
Added 150mL H2O to the H1 PSL crystal chiller.
LHO VE
kyle.ryan@LIGO.ORG - posted 13:48, Monday 25 April 2016 (26769)
More Diagonal annulus leak sorting
Kyle 

HAM11/12

Gerardo and I had improved things Friday, but the rate of rise was still too high when the pump carts got valved out today -> Using large forces applied at great distances, I convinced all 160 of the obstinate 7/8-9 door bolts that, in fact, more clamping was possible (~11/160 were very loose, at random and not necessarily adjacent to each other - Friday, Gerardo and I had done only the non-door bolts connecting the HAMs to each other and to the OMC spool before time ran out).

Attached data is promising. 

BSC4 

BSC4 annulus has a "high gas load" as well -> After Gerardo had reminded me that the dome had been removed during the "gutting" of the 2K, I put a wrench on a few of the bolts that I was able to access from the ladder nearby -> most of the bolts were quite loose -> We will torque these tomorrow during maintenance day.
Non-image files attached to this report
H1 ISC
evan.hall@LIGO.ORG - posted 12:36, Monday 25 April 2016 - last comment - 15:14, Tuesday 24 May 2016(26768)
CARM pole measurement

With the elevated intensity noise currently being injected into the interferometer, we can passively make an estimate of the CARM pole frequency.

The attachment shows the transfer functions that take interferometer input power to transmitted arm power (as measured by the four end-station QPD sums) with the interferometer locked at 22 W. There isn't good coherence around the CARM pole frequency itself. However, if we normalize each signal to RIN, this fixes the dc value of the transfer function at 1. Hence, the magnitude of the slope is sufficient to extract the CARM pole.

At 10 Hz, the transfer function is 0.0631, which implies a CARM pole of 0.63 Hz. There is about 1.5 % uncertainty from the spread in the values from the four transfer functions, and about 2 % uncertainty from the drifts in dc intensity for the four QPD sums during the measurement period. Together that makes 2.5 % uncertainty. There may also be additional uncertainty from the whitening and antiwhitening of the QPD signals.
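For reference, the single-pole arithmetic behind that number can be written down directly (a sketch using the values quoted above, not the analysis code):

import numpy as np

f_meas = 10.0     # Hz, measurement frequency
mag = 0.0631      # |TF| in RIN/RIN at f_meas, with the dc value fixed at 1

# single real pole: |H(f)| = 1/sqrt(1 + (f/fp)^2)  =>  fp = f*|H|/sqrt(1 - |H|^2)
f_pole = f_meas * mag / np.sqrt(1.0 - mag**2)
print('CARM pole estimate: %.2f Hz' % f_pole)   # ~0.63 Hz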

If we assume an ETM transmissivity of 4 ppm, an ITM transmissivity of 1.45 %, a PRM transmissivity of 3.0 %, and 50 ppm of loss on each test mass, we expect a CARM pole of 0.64 Hz (see T1500325).
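A rough cross-check of that expectation (not the T1500325 calculation itself), using the usual high-finesse approximations for the arm cavity pole and the coupled-cavity pole; the 3995 m arm length and the approximation formulas are my assumptions, the transmissivities and losses are the ones quoted above:

import numpy as np

c = 299792458.0
L_arm = 3995.0                       # m (assumed)
T_itm, T_etm, T_prm = 1.45e-2, 4e-6, 3.0e-2
loss_rt = 2 * 50e-6                  # 50 ppm per test mass, round trip

fsr = c / (2 * L_arm)
f_arm = fsr * (T_itm + T_etm + loss_rt) / (4 * np.pi)            # arm cavity pole

r_prm = np.sqrt(1 - T_prm)
r_arm = 1 - 2 * (T_etm + loss_rt) / (T_itm + T_etm + loss_rt)    # over-coupled arm

f_carm = f_arm * (1 - r_prm * r_arm) / (1 + r_prm * r_arm)
print('arm pole ~%.1f Hz, CARM pole ~%.2f Hz' % (f_arm, f_carm))  # ~43.6 Hz, ~0.64 Hz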

Obviously a superior method is to take a driven transfer function that has good coherence at and around the CARM pole frequency, and to use the phase information to make a true fit.

Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 15:14, Tuesday 24 May 2016 (27357)

Kiwamu, Evan

Similarly, the transfer function from input intensity to POP dc can give the recycling gain of the 9 MHz sidebands. This transfer function is attached, in RIN/RIN. The ac magnitude of this TF is about 0.04.

Assuming the 45 MHz sideband power is negligible in the PRC, assuming the 9 MHz modulation depth is 0.22 rad, and assuming the carrier power recycling gain is 35 W/W, I believe this implies a 9 MHz recycling gain of 60 W/W or so.
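Here is the arithmetic as I read it (a hedged sketch: the assumption that the 9 MHz sidebands follow the input RIN directly while the carrier in the PRC is suppressed above the CARM pole is mine, not stated above):

from scipy.special import j0, j1

tf_ac = 0.04        # measured RIN/RIN magnitude at ac
gamma9 = 0.22       # 9 MHz modulation depth, rad
G_carrier = 35.0    # carrier power recycling gain, W/W

P_c = G_carrier * j0(gamma9)**2              # carrier power in the PRC per W input
# tf_ac ~= P_sb / (P_c + P_sb)  =>  P_sb = tf_ac * P_c / (1 - tf_ac)
P_sb = tf_ac * P_c / (1 - tf_ac)
G_9 = P_sb / (2 * j1(gamma9)**2)             # both 9 MHz sidebands
print('9 MHz recycling gain ~%.0f W/W' % G_9)   # ~60 W/W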

Non-image files attached to this comment
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 11:50, Monday 25 April 2016 (26764)
HWS sleds turned on and HWS code restarted

Dave, Nutsinee

The peak counts didn't make sense for either HWSX or HWSY when I first ran the code. Stream_image showed glitchy-looking images on HWSY and no image at all on HWSX. Turning the camera and frame grabber on and off didn't help, so we restarted the hwsmsr computer. That seems to have solved the issue.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 11:21, Monday 25 April 2016 (26763)
IP9 High Voltage Update

Changed the high voltage setting, now at 5.0 kV. Previous setting was 7.0 kV.

LHO VE
kyle.ryan@LIGO.ORG - posted 10:44, Monday 25 April 2016 (26761)
0900-1025 hrs. local -> Kyle in and out of LVEA climbing on HAM11/12 and BSC4


LHO General
patrick.thomas@LIGO.ORG - posted 09:52, Monday 25 April 2016 (26759)
morning meeting notes
Crew from Japanese television station is filming on site today and tomorrow
Nicole is here to work on 3IFO and property management

SEI: BRS is on at both end stations and has run over the weekend. Personnel need to notify the operator if they are going to end X or end Y so that the BRS can be turned off beforehand.
PSL: PMC mode matching and ISS issues
VAC: Tapping and drilling holes at mid stations on Tuesday for scroll pumps. Gerardo pulling cables at vertex.
FAC: Landscapers on site
CDS: Beckhoff vacuum upgrade planned for today. Beckhoff vacuum upgrade at LX and LY planned for Tuesday. Workstations will be rebooted at 6 am remotely each Tuesday starting on May 2. You must send an email to Carlos beforehand if you do not want a particular station restarted.
H1 INJ (INJ)
christopher.biwer@LIGO.ORG - posted 14:20, Sunday 24 April 2016 - last comment - 10:11, Tuesday 26 April 2016(26754)
checks on guardian hardware injection node logging plus some more development
I did a set of tests with the guardian node. The codebase is in a state that should be ready for Jamie and me to set it up tomorrow on the guardian script machine. Going forward, things to do are:
 * Update docstrings
 * Install glue, gracedb, and grid credentials on guardian machine
 * Plan out how to run the gracedb process and get robot certificate
 * Do series of injections with guardian node on guardian machine - test full injection pathway, test killing active injection, test reloading schedule, test multiple injections in a row, etc.

Below I outline the tests I did.

How to do command line tests with guardian daemon

Can now do the following tests on the command line at an LHO workstation:
 * To test reading schedule and finding the next injection: guardian INJ WAIT_FOR_NEXT_INJECT
 * To test gracedb event creation: guardian INJ CREATE_GRACEDB_EVENT
 * To test awg and inject a signal from schedule into the detector: guardian INJ CREATE_AWG_STREAM INJECT_CBC_ACTIVE
 * To test schedule validation script: PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian:${PYTHONPATH}; python guardian_inj_schedule_validation.py --ifo H1 --schedule /opt/rtcds/userapps/release/cal/common/guardian/schedule/schedule_1148558052.txt --min-cadence 300

NOTE: You will need the glue and gracedb python packages to run some of these tests; these packages are not system-installed on the control room workstations. For gracedb upload testing you also need the grid credential tools, which are not on LHO workstations, and you need to make sure dev_mode is False.

Test injections

Injections from last night are in aLog 26749.

Today I continued with some more development tests: injections with a constant amplitude of 1e-26 and a duration of 1 second into H1:CAL-PINJX_TRANSIENT_EXC. The injection start times are:
 * 1145554100
 * 1145555100
 * 1145555700
 * 1145560262

(i) Call to awg works and injection goes into INJ-PINJX_TRANSIENT_EXC.

(ii) Injections are logged correctly and the meta-data propagates through the infrastructure to inform the searches. The hardware injection tests done with the guardian node can be seen on the detchar summary pages. The first three were not logged with an injection type, e.g. BURST, because the initial tests were just about using the awg module correctly. Thereafter the injections were flagged with a type in ODC, and this propagates to the low-latency frames for the online searches and to the segment database for the offline searches. See the attached plots for the ODC segments and segment database segments.

(iii) Destroying a node with an open stream that has transmitted data to the front end does not perform the injection.

(iv) The gracedb upload functions have already been tested. Today I re-checked the functions; here is an example gracedb event that was uploaded: T235981. Adding messages to the event log on gracedb was also tested again - notice the "This is a test." message on the T235981 gracedb page.

(v) Schedule validation script updated and tested.

Codebase developments

Some more changes:
 * There is now a dev_mode in the code to run the tests mentioned in the section above. At the moment this does three things: (i) skips the check that the detector is locked, (ii) ignores gracedb for now, until we get the robot certificate sorted out, and (iii) waits in the INJECT_CBC_ACTIVE state instead of the AWG_STREAM_OPEN_PREINJECT state, because we need to avoid jump transitions for the command line tests above.
 * Schedule validation script works again (https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/scripts/guardian_inj_schedule_validation.py).

One thing of note is that guardian does not allow states to create subprocesses, so the subprocess management that I had written will not work with guardian. Right now, once the injection starts, the code waits for the injection to finish; this is just the implementation in the awg package (see awg.ArbitraryStream.close), and the injection can only be killed by stopping the node.
Images attached to this report
Comments related to this report
christopher.biwer@LIGO.ORG - 11:59, Monday 25 April 2016 (26766)INJ
I've also renamed the base module (INJ.py) to something less generic; it is now CAL_PINJX.py.

See: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/CAL_PINJX.py

So modify the examples in this aLog entry as appropriate, ie. guardian INJ becomes guardian CAL_PINJX.
christopher.biwer@LIGO.ORG - 18:08, Monday 25 April 2016 (26781)INJ
Chris B., Jamie R.

Started up the node with guardctrl start CAL_INJ.

Used guardmedm CAL_INJ to control the guardian node.

Did a variety of tests with the hardware injection guardian node, these all passed:
 * Tested killing injection before injection awg call is active by requesting KILL_INJECT.
 * Tested killing the injection during the awg.ArbitraryStream.close call, i.e. while the injection is in the active state, by requesting KILL_INJECT.
 * Tested scheduling injections minimum number of seconds apart to make sure guardian picked the correct injection.
 * An external alert happened while an injection was scheduled; the injection was aborted successfully, going from AWG_STREAM_OPEN_PREINJECT to ABORT_INJECT_FOR_EXTTRIG. Commented out this check to continue working.
 * Tested out of order schedule file.
 * Tested FAILURE_READ_WAVEFORM, eg. waveform file does not exist.
 * Tested all injection states (INJECT_CBC_ACTIVE, INJECT_BURST_ACTIVE, INJECT_STOCHASTIC_ACTIVE, INJECT_DETCHAR_ACTIVE).
 * Tested that injection does not go into the detector if we turn off dev_mode so that it checks that detector is locked.
 * Injection start time, injection end time, and injection outcome values are all being set on the MEDM screen.

Found another failure mode: if the call to awg.ArbitraryStream.close is too close in time to the start of the injection, there is an error. Added FAILURE_DURING_ACTIVE_INJECT for this. awg returns a generic AwgStreamError, so without some hacked parsing of the error message there's not much to differentiate why it failed during the function call.

None of the gracedb functionality was tested during this, since we need to create a robot certificate still.
christopher.biwer@LIGO.ORG - 20:44, Monday 25 April 2016 (26784)INJ
After doing a few more tests, I've started scheduling a long series of injections. The schedule file is here:
https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/schedule/schedule_1148558052.txt
christopher.biwer@LIGO.ORG - 10:11, Tuesday 26 April 2016 (26790)INJ
Injections were still going this morning from last night as expected, every 400 seconds.

Attached is an hour of last night's injections. I've also attached a zoomed-in plot of the fast channel for one injection to check the timing of the start of the injection. Looks good.
Images attached to this comment
H1 INJ (INJ)
christopher.biwer@LIGO.ORG - posted 23:19, Saturday 23 April 2016 - last comment - 11:00, Monday 25 April 2016(26749)
guardian hardware injection node development tests
Chris B., Jamie R.

This aLog is documenting the work that has been done so far to get the guardian hardware injection node running at LHO. Documenting the work over the next few days could ease the installation at LLO, after we have sorted out everything at LHO.

It also includes some tidbits about the development that's been done, since several members of the hardware injection subgroup wanted to be kept in the loop.

Installations

There are only a few things we need on the guardian script machines that were not already there. Things we have done:
 (1) Updated userapps SVN at /opt/rtcds/userapps/release/cal/common/guardian
 (2) Checked that we can instantiate a guardian node with: guardctrl create INJ
 (3) Installed awg on the guardian script machine

Things we have yet to install on the guardian machine:
 * glue
 * ligo-gracedb
 * grid credentials

Codebase Development

This afternoon was mostly spent implementing several new things in the codebase. I have attached a new graph of the node to this aLog since there are a number of new states, e.g. new failure states, renamed active injection states (formerly called CBC, BURST, etc.), a renamed IDLE state, and a renamed GraceDB state. As always, the code lives in the SVN here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/

Some changes of note:
 * Changed jump transitions in successful injection pathway to edges. This changes how the node should be run. The model now is that the requested state should be INJECT_SUCCESS while running the node.
 * The modules (eg. inj_det, inj_types, etc.) have been moved to a new injtools subpackage.
 * Added success/failure messages for GraceDB events after injection is complete.
 * Added guardian decorators for a few tasks that are often repeated, e.g. checking for external alerts (a sketch of the pattern is below).
 * Process management for the awg call.
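To illustrate the decorator pattern, a minimal sketch only: the helper exttrig_alert_active() is a hypothetical stand-in for however the node actually detects external alerts, and ABORT_INJECT_FOR_EXTTRIG is the external-trigger abort state mentioned in the test comments elsewhere in this log.

import functools

def exttrig_alert_active():
    # hypothetical placeholder: in the real node this would query whatever
    # external-alert flag the injection code uses
    return False

def check_exttrig(state_method):
    # Skip the wrapped state method and jump to the abort state if an
    # external alert is active; otherwise run the state method as usual.
    @functools.wraps(state_method)
    def wrapper(self, *args, **kwargs):
        if exttrig_alert_active():
            return 'ABORT_INJECT_FOR_EXTTRIG'
        return state_method(self, *args, **kwargs)
    return wrapper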

These changes have made the schedule validation script (https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/scripts/guardian_inj_schedule_validation.py) out-of-date, and that will need to be updated.

Also, the GraceDB portions of the code have been commented out for now, until we have sorted out the grid credentials and how we will run the guardian process.

Tests

As I was developing I did a couple of tests with the guardian node today. I did three hardware injections of constant amplitude (1e-26) into H1:CAL-PINJX_TRANSIENT_EXC. Each injection had a duration of 1 second. The GPS start times are:
 * 1145497100
 * 1145497850
 * 1145500600

These tests were mostly to check that the call to awg was working properly.

The PINJX_HARDWARE filterbank has already been turned off (aLog 26748) so the signal will only appear in the PINJX_TRANSIENT and PINJX_HARDWARE channels.

The attached plot shows the injection at 1145500600.
Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 11:00, Monday 25 April 2016 (26762)

I have installed the following packages on the h1guardian0 machine:

  • python-glue
  • python-ligo-gracedb

These were installed from the production LSCSoft Debian wheezy archive, which should be fully compatible with this version of Ubuntu (12.04):

controls@h1guardian0:~$ cat /etc/apt/sources.list.d/lscsoft.list
deb http://software.ligo.org/lscsoft/debian wheezy contrib
deb-src http://software.ligo.org/lscsoft/debian wheezy contrib
controls@h1guardian0:~$ 

We'll be testing these installations today.

As for the awg installation, this was not actually a new install.  Instead, the existing GDS installation in /ligo/apps was just made available in the guardian environment:

controls@h1guardian0:~$ grep gds /etc/guardian/local-env
. /ligo/apps/linux-x86_64/gds/etc/gds-user-env.sh
controls@h1guardian0:~$ 

That was sufficient to make the awg python bindings available to the guardian nodes.

H1 PSL (ISC, PSL)
sheila.dwyer@LIGO.ORG - posted 19:21, Saturday 23 April 2016 - last comment - 11:55, Monday 25 April 2016(26750)
PSL high frequency glitches, a little locking

Matt, Sheila

We locked and saw that we have three orders of magnitude or so too much noise in DC readout. It seems this is related to PSL problems: the attached screenshot shows coherence between the ISS second loop PDs and DARM up to 100 Hz, as well as coherence between frequency noise and DARM. The pink trace shows the coherence between intensity and frequency noise. With this noise we were able to power up to 10 Watts, but saturated the DC PDs.

We looked a little bit at implementing a dither loop for SRM control. The alignment dither system currently would allow us to demodulate POP18, but POP90 has a much better signal, as you would expect. The IPC for adding this is already in the models, so I've just added it to the matrix but haven't done the model restart yet (WP5840).

We then gave up on locking and went to the PSL racks to look at some signals at high frequencies. We saw a glitch that happens at a repetition rate of 37 kHz and has frequency content of nearly a MHz, which shows up in the laser intensity noise. Matt has a picture of this in the ISS PD and the ref cav transmission. If we turn off the ISS this is still there (as expected, since it's above the bandwidth), but it is harder to see on top of the low frequency intensity noise.

When the ISS is off, the ISS PDs wander from rail to rail, and oscillate only when they are near the upper rail.  

Images attached to this report
Comments related to this report
matthew.evans@LIGO.ORG - 19:44, Saturday 23 April 2016 (26751)

These pictures show what we saw at the PSL rack. The first shows the o'scope: channel 1 is the ISS first loop DCPDA (out of loop, but B looks the same at this frequency), and channel 2 is the PMC TRANS. (The second and third pictures are there to document where we connected the cables.)

There are noise bursts in the PMC TRANS which oscillate at ~500kHz and repeat at 37kHz.  (For offline data mining: the o'scope photo was taken at 18:55:56 local time.)

Images attached to this comment
matthew.evans@LIGO.ORG - 08:26, Sunday 24 April 2016 (26752)

Before finding the problem shown in the previous comment, we noticed that the noise eater was oscillating. Sheila reset it and it stopped, but someone (Keita?) might want to look at the monitor channels to see how they behave when oscillating (and when not oscillating). We first noticed this on the FSS fastmon at 18:31 (first photo), saw it again on the PMC trans signal at 18:38 (second and third photos, ~1 MHz triangle with ~15% modulation of the power), and fixed it shortly after that. By 18:55 (previous comment) it was not oscillating. (All times are local.)

Images attached to this comment
matthew.evans@LIGO.ORG - 08:34, Sunday 24 April 2016 (26753)

A little more info on:

"When the ISS is off, the ISS PDs wander from rail to rail, and oscillate only when they are near the upper rail."

While looking at the ISS first loop PDs, we noticed that if the loop is open there are large ~1MHz noise bursts.  Going to DC coupling and zooming out, it seems that the PD signals oscillate when they approach the upper rail at ~14V (see photo).  This may indicate that the load resistors on the opamps involved are too small, and so the opamps become unstable when outputting large voltages... or maybe it is something else.  In any case, the ISS first loop should not be operated with large PD voltages (currently 1.8V).

Images attached to this comment
sheila.dwyer@LIGO.ORG - 23:18, Sunday 24 April 2016 (26755)

The message: things are not as bad as they seemed yesterday, but we still have a problem with frequency and intensity noise.  

I was confused yesterday: the guardian did not make the transition to DC readout because it checks that the ISS is on before making the transition, so we were actually still locked on RF when I thought we were on DC readout. Tonight I turned the ISS back on and transitioned to DC readout without a problem. There is still a lot of coherence with the ISS and with frequency noise. The IFO had no problem getting to 22 Watts, but we lost lock because of the HSTS coil driver switching (which I had moved to just after the BS coil driver switching, and have now moved to just before it).

The first attached spectra (+coherences) was taken at 2 Watts, the second at 22 Watts. 

Images attached to this comment
keita.kawabe@LIGO.ORG - 08:26, Monday 25 April 2016 (26757)

The attached trend of the last two days shows that the noise eater was bad from about Apr/24/2016 1:31:50 to 1:52:50 UTC (that's 18:31:50 to 18:52:50 Pacific time).

According to this wiki entry, https://lhocds.ligo-wa.caltech.edu/wiki/SYS_DIAG%20Guardian#PSL_Noise_Eater, the noise eater is monitored by H1:PSL-MIS_NPRO_RRO_OUTPUT and its nominal good range is -5852 +/- 50.
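For anyone who wants to script this check, a minimal sketch (not site code) using pyepics, with the channel name and range quoted above:

from epics import caget

NOMINAL = -5852.0
TOLERANCE = 50.0

value = caget('H1:PSL-MIS_NPRO_RRO_OUTPUT')
if value is None or abs(value - NOMINAL) > TOLERANCE:
    print('Noise eater may be oscillating (or channel unreachable): %s' % value)
else:
    print('Noise eater looks OK: %.1f' % value)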

Images attached to this comment
christopher.wipf@LIGO.ORG - 11:55, Monday 25 April 2016 (26765)

Matt and I were curious about the ISS inner loop PD circuit design, so I made a quick LISO model. This essentially reproduces the analysis given in T1000634, just showing a little more detail about which components are limiting the noise (mostly the transimpedance and TFIN resistors, R6 and R2).

The design looks OK as far as the linear modeling is concerned -- and as long as the dynamic range constraint plotted in figure 5 of T1000634 is maintained when the second and third loops are engaged.

Modeling files are in /ligo/home/christopher.wipf/Data/20160425_iss

Images attached to this comment