Reports until 10:11, Thursday 20 February 2014
H1 ISC
keita.kawabe@LIGO.ORG - posted 10:11, Thursday 20 February 2014 (10209)
Phasing ALSX WFS (Jax, Keita)

We disabled the dither alignment feedback but left the dithers on for both P and Y, and adjusted the WFS demod phase to minimize the peak in Q. We looked at 380 Hz for P and 410 Hz for Y.

See the attachment for the demod phase and whitening settings. The phasing is probably not accurate to better than the 10 deg level.

Images attached to this report
H1 INS
jim.warner@LIGO.ORG - posted 09:41, Thursday 20 February 2014 (10206)
ETMY ISI locked, ready for other subsystems to prep for cartridge flight

Fabrice gave approval, or at least a "good enough", on the TFs for the ISI, so this morning I went down and locked the ISI. We're now ready for SUS et al. to go down and start locking and adding covers in prep for the flight. Our sensors are still powered on (I need to finish collecting spectra), so SEI can't unplug yet, but I should be done with that shortly.

H1 PSL
andres.ramirez@LIGO.ORG - posted 08:58, Thursday 20 February 2014 (10203)
PSL Check
Laser Status: 
SysStat is good
Output power is 28.9 W (should be around 30 W)
FRONTEND WATCH is Active
HPO WATCH is red

PMC:
It has been locked 1 day, 22 hr 20 minutes (should be days/weeks)
Reflected power is 1.1 W and PowerSum = 11.9 W.
(Reflected Power should be <= 10% of PowerSum)
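The arithmetic behind that criterion, using today's checklist numbers, can be sketched in a couple of lines:

```python
# PMC check from the entry above: reflected power should be <= 10% of PowerSum.
reflected = 1.1    # W, from the checklist
power_sum = 11.9   # W, from the checklist

ratio = reflected / power_sum
print(round(100 * ratio, 1))  # ~9.2%, within the <= 10% limit
```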

FSS:
It has been locked for 0 d 4 h and 5 min (should be days/weeks)
Threshold on transmitted photo-detector PD = 0.89 V (should be 0.9 V)

ISS:
The diffracted power is around 6.9 % (should be 5-15%)
Last saturation event was 0 d, 4 h and 30 minutes ago (should be days/weeks)
H1 SEI
hugh.radkins@LIGO.ORG - posted 08:15, Thursday 20 February 2014 - last comment - 17:01, Friday 21 February 2014(10201)
Thursday AM SEI States--All good except BS ISI Trip

The BS ISI tripped on its GS-13 watchdog at 13:28 UTC this morning.  Traffic, wind, something; the plotting scripts are still not functioning...

Anyway, I brought it back to Level 2 with 750 mHz blends on Stage 2, and T250s on all Stage 1 blends except the T100mHz_0.44 blends on the X and Y DOFs.

I reset the target positions; the shifts were about 700 nrad in RX and <4 um in Z, and all others were smaller.   Please let us know if this impacts any alignments. This allowed for a one-button isolation.

Comments related to this report
hugo.paris@LIGO.ORG - 09:57, Thursday 20 February 2014 (10207)

I reported the WD plotting issue mentioned by Hugh last week. Details can be found in LHO aLog #10057.

I am not sure yet whether it is a scripting issue or a server-access issue, but I am looking into it.

hugo.paris@LIGO.ORG - 10:12, Thursday 20 February 2014 (10208)

The plotting scripts work for the HAM-ISI. WD plotting is still dysfunctional on the BSC. I think it is a scripting issue in this case, and I am working on fixing it:

   Called with arguments: subsystem='ISI_ST1', debug=False, lookback=20, chamber='BS', lookforward=15, device='CPS', rough_trip_time=<object object at 0x7f70dd9050f0>, ifo='H1'
2014-02-20 10:09:21,199 : WDTripPlotter :
    Quitting program
Traceback (most recent call last):
  File "/opt/rtcds/userapps/release//isi/common/scripts/wd_plots/main.py", line 321, in <module>
    main()
  File "/opt/rtcds/userapps/release//isi/common/scripts/wd_plots/main.py", line 241, in main
    rough_trip_time, wd_trip_time_channel_name = util.get_rough_wd_trip_time(ifo, subsystem, chamber)
  File "/opt/rtcds/userapps/trunk/isi/common/scripts/wd_plots/util.py", line 33, in get_rough_wd_trip_time
    channel_suffix = const.WDMON_GPS_TIME_CHANNEL_SUFFIX[_subsystem]
KeyError: 'ISI'
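For reference, the failure is a plain missing-key lookup: `get_rough_wd_trip_time` indexes `WDMON_GPS_TIME_CHANNEL_SUFFIX` with a subsystem key of 'ISI', which the dictionary does not contain. A minimal reconstruction of the pattern follows; the dictionary entries and helper function are invented for illustration, and only the KeyError itself comes from the traceback:

```python
# Hypothetical reconstruction of the failing lookup; the dict contents are
# assumptions for illustration, only the KeyError pattern is from the log.
WDMON_GPS_TIME_CHANNEL_SUFFIX = {
    'HPI': 'WD_MON_GPS_TIME',        # assumed entry
    'ISI_ST1': 'WD_MON_GPS_TIME',    # assumed entry
}

def get_channel_suffix(subsystem):
    # If a caller collapses 'ISI_ST1' to 'ISI' before this lookup, bare
    # indexing raises KeyError: 'ISI', as seen above. Re-raising with an
    # explicit message makes the missing key obvious at a glance:
    try:
        return WDMON_GPS_TIME_CHANNEL_SUFFIX[subsystem]
    except KeyError:
        raise KeyError("no WD GPS-time suffix for %r; known subsystems: %s"
                       % (subsystem, sorted(WDMON_GPS_TIME_CHANNEL_SUFFIX)))
```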
hugo.paris@LIGO.ORG - 17:01, Friday 21 February 2014 (10259)

The BSC-ISI WD plotting software was fixed, see LHO aLog #10258 for more details.

H1 ISC
kiwamu.izumi@LIGO.ORG - posted 07:31, Thursday 20 February 2014 - last comment - 11:01, Thursday 20 February 2014(10199)
Done with the morning red lock

I am done with the morning red lock and handed the interferometer over to Keita and Jax. Here are some notes for the green and blue teams:

Comments related to this report
kiwamu.izumi@LIGO.ORG - 08:41, Thursday 20 February 2014 (10200)

PRMI locks:

Today I was able to lock the PRMI with the sidebands resonant in the PRC. There were three key points: (1) the alignment was not great, (2) the notches in FM6 of MICH (see alog 10127) were too aggressive for the initial acquisition, and (3) a 30 Hz low-pass was not engaged in MICH's FM9, which is usually set up by the guardian.

My first guesses for the MICH and PRCL gains were 40 and -0.4 respectively (see alog 10168), because these are the nominal values we have been using in the past week. However, it turned out that the alignment of the PRMI was not good enough, so the optical gain was smaller by a factor of 2 to 3 for both MICH and PRCL. I empirically ended up with gain settings of 80 and -1.4 for MICH and PRCL respectively to acquire a long-lived lock. Tweaking PRM and PR2 then gave me a high build-up of approximately 30000 counts in POPAIR_B_RF18, about the same as we saw on 11 February. Attached is a trend of the power build-up and the alignment sliders. The misalignment was mainly in pitch.
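The gain bookkeeping above follows from keeping the overall loop gain fixed: if the optical gain drops by some factor, the electronic gain must rise by roughly the same factor to hold the UGF. A toy sketch using the MICH numbers from this entry (the factor of 2 is one value picked from the measured 2-to-3 range):

```python
# Loop gain ~ optical_gain * electronic_gain; to keep the UGF roughly where
# it was, scale the electronic gain by the factor the optical gain dropped.
nominal_gain = 40.0        # nominal MICH gain (from this entry)
optical_gain_drop = 2.0    # assumed value within the measured 2-3 range

compensated_gain = nominal_gain * optical_gain_drop
print(compensated_gain)    # 80.0, the MICH gain used to acquire lock
```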

At the end, the gains were 40 and -0.6 for MICH and PRCL respectively. I didn't get a chance to measure the UGF.

Next steps:

Our short-term goal is to do the "one arm + PRMI 3f" test, so the stability study of the 3f locking is the most critical item at this moment. However, I (re-)found that the daily alignment is time-consuming and is something we must automate. So I would like to get the dither system running first before starting a serious 3f study.

Images attached to this comment
kiwamu.izumi@LIGO.ORG - 11:01, Thursday 20 February 2014 (10211)

Even though the PRMI didn't spontaneously drop lock at the end of the morning commissioning, the fluctuation in the intracavity power was large. The power could drop to half of its maximum and was oscillating mainly at 0.9 Hz. Looking at the PR3 GigE camera (VID-CAM09), I found that the oscillation of the cavity power was synchronized with scattered light off the PR3 cage, which appeared to oscillate mainly in pitch. So I tried to identify which optic was moving using the data from this morning.

According to a coherency test (see the attachment), ITMY is the most suspicious at this point.

ITMY was oscillating at roughly 0.4 Hz and shows moderately high coherence with POP_B_RF18. It is possible that this ~0.4 Hz motion of ITMY produced a fluctuation in POP_RF18 at twice the frequency, due to the quadratic response of the cavity power. This issue is not a killer at this point, but the study will continue.
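The frequency-doubling argument can be illustrated numerically: square a ~0.45 Hz oscillation (standing in for the ITMY angle) and the power spectrum of the result peaks near 0.9 Hz, because near a maximum the cavity power responds quadratically to misalignment. A sketch with made-up sample rate and noise levels:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 64.0                        # Hz, assumed sample rate
t = np.arange(0, 512, 1 / fs)    # ~8.5 minutes of fake data
f0 = 0.45                        # Hz, stand-in for the ~0.4 Hz ITMY motion

angle = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
power = angle**2 + 0.1 * rng.standard_normal(t.size)  # quadratic response

f, Pyy = signal.welch(power, fs=fs, nperseg=4096)
peak = f[np.argmax(Pyy[1:]) + 1]  # skip the DC bin
# peak lies near 2*f0 = 0.9 Hz: squaring doubles the oscillation frequency
```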

Images attached to this comment
H1 SUS
kiwamu.izumi@LIGO.ORG - posted 05:08, Thursday 20 February 2014 - last comment - 07:19, Thursday 20 February 2014(10197)
ETMX intentionally misaligned

Done by switching off the DC biases. The oplev was already off.

Comments related to this report
kiwamu.izumi@LIGO.ORG - 07:19, Thursday 20 February 2014 (10198)

ETMX is now realigned for the blue team. Oplev is still off.

H1 SUS
kiwamu.izumi@LIGO.ORG - posted 05:04, Thursday 20 February 2014 - last comment - 09:12, Thursday 20 February 2014(10196)
SUS_PRM guardian not running, unable to restart

Sorry, Jamie. I have another guardian job for you.

controls@opsws4:~ 0$ guardctrl start SUS_PRM
starting node SUS_PRM...
fail: SUS_PRM: unable to change to service directory: file does not exist

Comments related to this report
jameson.rollins@LIGO.ORG - 09:12, Thursday 20 February 2014 (10204)

After the recent upgrade, in which I rebuilt the node supervision infrastructure on h1guardian0, I had not yet gotten around to re-creating and restarting all of the nodes that had been running previously.  Arnaud and I are now restarting all the SUS nodes, but in any case this is an easy issue to resolve:

The guardctrl utility will tell you which nodes are currently running:

jameson.rollins@operator1:~ 0$ guardctrl list
IFO_IMC * run: IFO_IMC: (pid 11768) 144328s, want down; run: log: (pid 26686) 145415s
ISI_HAM4 * run: ISI_HAM4: (pid 26143) 3148s, want down; run: log: (pid 11352) 53329s
LSC * run: LSC: (pid 20593) 48884s, want down; run: log: (pid 11727) 48972s
SUS_ETMX * down: SUS_ETMX: 145415s; run: log: (pid 26687) 145415s
SUS_MC1 * run: SUS_MC1: (pid 29305) 145317s, want down; run: log: (pid 26685) 145415s
SUS_MC2 * run: SUS_MC2: (pid 29314) 145317s, want down; run: log: (pid 26863) 145413s
SUS_MC3 * run: SUS_MC3: (pid 29327) 145317s, want down; run: log: (pid 26864) 145413s
SUS_SRM * run: SUS_SRM: (pid 1869) 63862s, normally down, want down; run: log: (pid 1027) 150829s
jameson.rollins@operator1:~ 0$ 

Any node that you think should be there but is not showing up can simply be created:

jameson.rollins@operator1:~ 0$ guardctrl create SUS_PRM
creating node SUS_PRM...
adding node SUS_PRM...
guardian node created:
ifo: H1
name: SUS_PRM
path: /opt/rtcds/userapps/release/sus/common/guardian/SUS_PRM.py
prefix: SUS-PRM
usercode:
  /opt/rtcds/userapps/release/sus/common/guardian/sustools.py
  /opt/rtcds/userapps/release/sus/common/guardian/SUS.py
states (*=requestable):
  0 MISALIGNED *
  1 SAFE *
  2 DAMPED *
  3 ALIGNED *
  4 INIT
  5 TRIPPED
jameson.rollins@operator1:~ 0$

Once the node is created, it is ready to start.  Before starting, I usually pop open a window viewing the node's log so I can watch the startup.  This is most easily done by opening the MEDM control panel for the node via the GUARD_OVERVIEW screen and clicking on the "log" link.

Finally, just start the node:

jameson.rollins@operator1:~ 0$ guardctrl start SUS_PRM
starting node SUS_PRM...
jameson.rollins@operator1:~ 0$ 

We're working on making all the guardians smart enough to identify the current state of the system on startup and to determine the correct state to jump to.  The SUS guardians are programmed to go to the ALIGNED state on startup; we're now working on enabling them to detect whether the optic is currently misaligned and to go to the MISALIGNED state in that case.
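The kind of startup decision described here can be sketched as a small decision function. This is purely illustrative, not the actual guardian code; the state names come from the `guardctrl create` output above, while the idea of testing the DC biases is an assumption motivated by how ETMX was intentionally misaligned earlier this morning:

```python
# Hypothetical startup-state selection for a SUS guardian (not real code).
def pick_startup_state(wd_tripped, dc_bias_enabled):
    """Return the state a SUS guardian might jump to on startup."""
    if wd_tripped:
        return 'TRIPPED'       # a watchdog trip overrides everything else
    if not dc_bias_enabled:
        return 'MISALIGNED'    # biases off = intentionally misaligned optic
    return 'ALIGNED'           # current default startup target
```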

H1 IOO (ISC)
kiwamu.izumi@LIGO.ORG - posted 04:57, Thursday 20 February 2014 (10195)
IMC was not locking

The IMC was not locking. I did the following items to let it lock:

  1. The FSS trans threshold was decreased from 0.8 to 0.7 because it kept dropping its lock.
  2. I lowered the limiter MC2_M3_LOCK_LIMIT to 4e5 from 6.4e6. It seems this was intentionally increased to the high value yesterday, though I am not sure how effective this lower limit is.
  3. I ran als/h1/scripts/COMM_down.

Now it locks.

H1 ISC
evan.hall@LIGO.ORG - posted 04:47, Thursday 20 February 2014 (10194)
Red team's morning commissioning started

At 4:47AM local.

H1 ISC (ISC, SYS)
jameson.rollins@LIGO.ORG - posted 00:33, Thursday 20 February 2014 (10193)
new LSC guardian library, and initial PRMI sideband locking states

I've started building out an LSC guardian library, starting with Kiwamu's PRMI sideband locking guardian module (LSC_PRMIsb.py). I began by making an LSC code library:

USERAPPS/lsc/h1/guardian/lsclib

This is organized as a Python package, with subpackages for the various locking configurations. I started by copying Kiwamu's PRMI sideband locking states from LSC_PRMIsb.py into a new module:

USERAPPS/lsc/h1/guardian/lsclib/prmi/sidebandlock.py

It consists of two top-level requestable states, LOCKED and THREEFLOCKED (see the attached state graph). The above module is then loaded by a new LSC guardian module:

USERAPPS/lsc/h1/guardian/LSC.py

the full contents of which are currently:

from lsclib.prmi.sidebandlock import *
request = 'LOCKED'

The guardutil utility can be used to inspect the module as-is, e.g. to draw the state graph and print system info:

$ guardutil print LSC
ifo: H1
name: LSC
path: /home/jrollins/ligo/src/userapps/lsc/h1/guardian/LSC.py
prefix: 
usercode:
  /home/jrollins/ligo/src/userapps/lsc/h1/guardian/lsclib/prmi/sidebandlock.py
states (*=requestable):
  0 LOCKED *
  1 UP *
  2 THREEFLOCKED *
  3 ACQUIRE
  4 LOCKING
  5 INIT

We can add new modules for new locking configurations, load them from the main LSC module, and add the necessary edges and states to connect them together.

Eventually we might want to move all of this into lsc/common once it stabilizes a bit. I imagine this is also not the final configuration of this stuff.

I started up the "LSC" node on h1guardian0, and it started up without problem. I left it not doing anything for the moment, but it should be ready to run as is.

Images attached to this report
H1 ISC
alexan.staley@LIGO.ORG - posted 22:19, Wednesday 19 February 2014 - last comment - 09:16, Thursday 20 February 2014(10192)
Green accomplishments

(Sheila, Alexa, Rana)

 

Images attached to this report
Comments related to this report
rana.adhikari@LIGO.ORG - 08:29, Thursday 20 February 2014 (10202)ISC

During the afternoon, the locking of Green PDH was quite unstable. We suspected that there were some oscillations of the NPRO PZT and/or accidental HOM resonances (since the mode-matching / clipping is so bad).

* Sweeping the NPRO PZT with a low-bandwidth PLL lock, we found no substantial features in the neighborhood of the peak (~27.4 kHz). Even though there are no resonances in the TF, the peak dominates the RMS of the PDH error signal. We thought this could be coming from an oscillation of the PSL FSS, but tweaking the FSS fast gain doesn't change the peak frequency.

* We tried a few different modulation frequencies for PDH (23.4, 23.9, and 24.4 MHz). These were calculated to place the upper sideband at ~0.3-0.4 of an FSR. As expected, we saw a big dip in the PDH loop in the 10-15 kHz range for these modulation frequencies. The dips were not very stationary; we guessed this was due to alignment fluctuations.

* Daniel turned on the 1000:100 boost in the servo board after a while, and this greatly helped the stability. At the best of times, the green arm power fluctuations were ~10%; at the worst of times, more like 50%, and the mode would hop between 00 and 01. We had mixed results with the dither alignment; it is not always working for both DOFs.

* We should use a directional coupler to check that we're at the peak frequency for the EOM.

daniel.sigg@LIGO.ORG - 09:16, Thursday 20 February 2014 (10205)

Some observations: After reverting to the original sideband frequency we had a hard time locking. The behaviour was similar to what we experienced in the past when we had a lot of alignment fluctuations. We would stay "locked" but switch between the 00 mode and a higher-order transverse mode without losing a step. In the past the transition was to a 10 mode, whereas yesterday it was to a second-order mode. The locking was better when we switched back to the frequency that is 1 MHz off; it turned out that the sidebands had coincidentally been set near the second-order transverse mode spacing. Using a frequency near nominal with the same tuning worked as well. However, the real problem turned out to be a lack of low-frequency gain: with the standard network compensation we just have a pole near 1.6 Hz. With the boost turned on, the lock is a lot more stable. This seems especially important during elevated wind.
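The effect of such a boost stage can be sketched as a single zero/pole pair. Assuming "1000:100" denotes a zero at 1 kHz and a pole at 100 Hz (the naming convention is an assumption here), the stage has unity gain at high frequency and roughly a factor-of-10 extra gain below the pole, which is what supplies the missing low-frequency gain:

```python
import math

# Sketch of a "1000:100" boost stage: H(s) = (s + 2*pi*1000)/(s + 2*pi*100).
# Zero/pole placement is assumed from the name, not confirmed by the entry.
def boost_mag(f_hz, f_zero=1000.0, f_pole=100.0):
    s = 2j * math.pi * f_hz
    return abs((s + 2 * math.pi * f_zero) / (s + 2 * math.pi * f_pole))

print(round(boost_mag(1.0), 1))   # ~10x gain well below the pole
print(round(boost_mag(1e5), 2))   # ~1x (unity) gain well above the zero
```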

H1 DAQ (CDS)
james.batch@LIGO.ORG - posted 16:44, Wednesday 19 February 2014 (10191)
Restarted h1nds0
The h1nds0 computer died with a kernel panic, had to reboot.
H1 ISC
daniel.sigg@LIGO.ORG - posted 16:36, Wednesday 19 February 2014 - last comment - 14:33, Thursday 20 February 2014(10190)
RT comm link in EX

After re-cabling for the ALS WFS, the link between slow and fast controls stopped working. The newly assigned DAQ ADC channel has a large -5 V offset and seems broken. The offset is there even if nothing is connected to the AA chassis.

Comments related to this report
daniel.sigg@LIGO.ORG - 14:33, Thursday 20 February 2014 (10218)

Changing the AA chassis didn't fix the problem, so it is probably the ADC. To minimize the disruption we simply switched to a different channel for now.

LHO General
corey.gray@LIGO.ORG - posted 16:03, Wednesday 19 February 2014 (10170)
Ops DAY Summary

Day's Activities
