H1 ISC (ISC)
hang.yu@LIGO.ORG - posted 19:33, Monday 20 July 2015 (19767)
Updates of a2l
Matt, Hang

We ran the a2l decoupling optimization code this evening for all test masses and for both pitch and yaw. It successfully reduced the low frequency noise. Please see the attachment (darm_spectrum.png). 

The changes were:
H1:SUS-ETMX_L2_DRIVEALIGN_P2L_GAIN: 0.93 -> 1.21
H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_GAIN: 0.93 -> 1.32
H1:SUS-ETMY_L2_DRIVEALIGN_P2L_GAIN: 0.00 -> -0.02
H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_GAIN: -0.70 -> -0.59
H1:SUS-ITMX_L2_DRIVEALIGN_P2L_GAIN: 1.95 -> 2.04
H1:SUS-ITMX_L2_DRIVEALIGN_Y2L_GAIN: 0.63 -> 0.74
H1:SUS-ITMY_L2_DRIVEALIGN_P2L_GAIN: 1.05 -> 1.06
H1:SUS-ITMY_L2_DRIVEALIGN_Y2L_GAIN: -2.05 -> -1.48

More details of the measurements can be found under:
/opt/rtcds/userapps/trunk/isc/h1/scripts/a2l/rec
This directory contains both the raw measured data (I, Q, and total) and plots of our linear fits as well as the rotation. The optimal a2l gains correspond to the zeros of the rotated I's. Please note again that since our data were likely correlated, the error bars shown should be treated as only a rough estimate.

================================================================================

In addition, we wrapped the Python code in a bash shell script that can easily be called in the future. It can be found under:
/opt/rtcds/userapps/trunk/isc/h1/scripts/a2l

To rerun the optimization, simply enter
./run_a2l.sh
at the command line, and the code will run the optimization for all test masses and all angular dofs.

If you only want to optimize specific optics and, say, only their pitch-to-length coupling, you can edit the 'a2l_input.in' file.

If the interferometer loses lock, press Ctrl+C to terminate the code. On this keyboard interrupt, it will automatically set the not-yet-optimized drive-align gains back to their original values and disable the dither input.
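
To illustrate the interrupt handling, here is a rough sketch with hypothetical names (not the actual script):

# Sketch only: restore drive-align gains on Ctrl+C.
# 'ezca' is the EPICS wrapper; 'original_gains' maps channel names
# (e.g. 'SUS-ETMX_L2_DRIVEALIGN_P2L_GAIN') to their saved values;
# 'optimize_one' and 'disable_dither' stand in for the real steps.
def run_optimization(ezca, original_gains, optimize_one, disable_dither):
    done = set()
    try:
        for chan in original_gains:
            optimize_one(chan)        # dither, fit, write the new gain
            done.add(chan)
    except KeyboardInterrupt:
        for chan, value in original_gains.items():
            if chan not in done:      # only the not-yet-optimized gains
                ezca[chan] = value
        disable_dither()              # shut off the dither input
        raise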

For more instructions, please refer to 'readme.txt' under the same directory. 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 18:21, Monday 20 July 2015 (19766)
current rcg versions being run

I have updated the RCG_VERSIONS MEDM screen to show the currently running versions before tomorrow's upgrade.

Images attached to this report
H1 AOS
matthew.evans@LIGO.ORG - posted 18:09, Monday 20 July 2015 - last comment - 18:38, Friday 22 March 2019(19765)
Testing multi-thread writes in Guardian

I have put together a few python functions which allow for briefly spawning multiple threads to switch many filters at (roughly) the same time. The idea here is NOT to provide synchronous filter switching, but rather to speed up Guardian transitions which change the state of many filter modules (or, more generally, write many channels).

The new code is in:

userapps/release/isc/h1/guardian/

fast_ezca.py - new functions for writing, switching, and generally doing things quickly

test_write_many.py - test functions for multi-thread writing

test_switch_many.py - test functions for multi-thread switching

test_do_many.py - test functions for multi-thread compound actions

and it is being used in the ISC_library function slow_offload_fast.   There is a single-thread version of this function in ISC_library in case of trouble: slow_offload_many.  The only caller is gen_OFFLOAD_ALIGNMENT_MANY in ISC_GEN_STATES, so go there if you need to switch this out.
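
For illustration, the core multi-thread write idea looks roughly like this (a sketch, not the actual fast_ezca.py API):

# Sketch only: spawn one short-lived thread per channel write.
# This is NOT synchronous switching; it only avoids paying each
# write's network latency serially.
import threading

def write_many(ezca, channel_values):
    threads = [threading.Thread(target=ezca.write, args=(chan, val))
               for chan, val in channel_values.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()    # return only once every write has completed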

NOTE: usage of these functions should NOT spread.  It will be assimilated into the Guardian, and the API will change.
Adding         test_do_many.py
Adding         test_switch_many.py
Adding         test_write_many.py
Comments related to this report
sheila.dwyer@LIGO.ORG - 18:38, Friday 22 March 2019 (47780)

This allows the guardian to move on without a setting actually being applied, and can cause problems because settings can be wrong and the user has no clue.

I want to delete this completely.

H1 SUS (CAL, DetChar, ISC, SUS, SYS)
leonid.prokhorov@LIGO.ORG - posted 17:43, Monday 20 July 2015 - last comment - 16:29, Wednesday 22 July 2015(19764)
OPLEV Charge measurements
We continue the charge measurements on ETMs.
Results for ETMX are consistent with a negative trend; the charge is now from 10 to 20 [V] effective bias voltage for all quadrants.
Results for ETMY do not show a significant trend (though the data may be beginning to be consistent with a positive trend). The charge is below 10 [V] effective bias voltage for all quadrants.

Note: We had positive bias on ETMX and negative bias on ETMY after the discharging procedure, so it seems possible that the charging is caused by the bias voltage.
Images attached to this report
Comments related to this report
rich.abbott@LIGO.ORG - 17:40, Tuesday 21 July 2015 (19807)ISC
Has the bias on ETMX and ETMY remained positive and negative respectively for the duration of this observation?
leonid.prokhorov@LIGO.ORG - 16:29, Wednesday 22 July 2015 (19846)
The bias was the same for this and the next charge measurements.
It was changed on July 22: alog 19821.
Today we have the first measurements after changing the bias sign: alog 19848.
H1 CDS
david.barker@LIGO.ORG - posted 17:09, Monday 20 July 2015 (19763)
models using TrueRMS part

The next RCG release will fix the TrueRMS issues. For the record, here is a list of the H1 user models which use this part:

h1susetmypi, h1susetmy, h1susetmx, h1susitmx, h1susitmy, h1omc, h1susomc, h1oaf

The SUS IOP models also use the part for the Software Watchdog:

h1iopsusb123, h1iopsusex, h1iopsusey, h1iopsush2a, h1iopsush34, h1iopsush56

H1 General
travis.sadecki@LIGO.ORG - posted 16:01, Monday 20 July 2015 (19762)
OPS Day Shift Summary

Times PST

9:58 Richard to EX to reconnect HEPI pump ground

10:15 Richard back

10:25 HFD on site

11:20 Leo taking charge measurements

13:44 Joe D to both Mid stations

14:11 Joe D back

14:41 Richard to roof checking out GPS antenna work for tomorrow

H1 General
travis.sadecki@LIGO.ORG - posted 13:09, Monday 20 July 2015 (19761)
PSL weekly

Laser Status:
SysStat is good
Front End power is 32.64 W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED

PMC:
It has been locked 6.0 days, 0.0 hr 31.0 minutes (should be days/weeks)
Reflected power is 2.464 W and PowerSum = 25.33 W.

FSS:
It has been locked for 0.0 days 0.0 h and 1.0 min (should be days/weeks)
TPD[V] = 1.643V (min 0.9V)

ISS:
The diffracted power is around 7.629% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 1.0 minutes ago (should be days/weeks)
 

H1 AOS
jason.oberling@LIGO.ORG - posted 12:35, Monday 20 July 2015 - last comment - 19:10, Monday 20 July 2015(19760)
ETMx OpLev appears quiet

J. Oberling, E. Merilh

On June 23rd we swapped the laser on the ETMx oplev, see alog 19290.  We spent the next couple of weeks tweaking the operating power of the laser to get it into a stable zone; this has to be done since the thermal environment is different between the end station VEA and the LSB lab the laser was originally stabilized in.  After I got back from vacation last week, I've been looking at quiet times (no apparent optic movement; I looked at the pitch & yaw oplev signals and picked times where the optics were quiet) to see if the laser is stable and glitch-free.  I've attached 3 spectrograms of quiet times during the last week.

There is no obvious glitching in the ETMx oplev laser as shown in these spectrograms.  I think it is safe to say this oplev is healthy.  I have also attached a spectrum of the ETMx oplev pitch & yaw signals for the same 4-hour stretch on 2015-07-19 as the attached spectrogram.

As usual if anyone notices anything not right with any of the optical levers, let me know.

Images attached to this report
Comments related to this report
suresh.doravari@LIGO.ORG - 19:10, Monday 20 July 2015 (19768)

Glitch Performance evaluation of diode lasers:

   It is sometimes difficult to see whether a laser is glitch-free because of the way the tools we employ display the data.  It is good to compare the performance of a laser under test with one that is known to be glitch-free and another that is known to be glitchy.  This way the display tool is validated (since it shows the glitches in the glitchy laser), and at the same time we would know whether the laser under test has achieved the reference laser's performance level.

Please see my post on laser evaluation. It would also be preferable if the same kind of plotting tools (ligodv) were used as in the past, to make the comparisons easier.

H1 GRD (SEI)
thomas.shaffer@LIGO.ORG - posted 12:26, Monday 20 July 2015 (19759)
ISI_ETMY_ST1 Guardian Oddity

This morning around 16:45 UTC, the ETMY ISI Stage 1 and 2 WatchDogs tripped, reporting a "payload trip". Jim cleared these trips and then watched Guardian bring the ISI back up in an odd way. Guardian brought the ST1 node to HIGH_ISOLATED, but not all of the preferred isolation loops on Stage 1 were turned on, along with the input, output, and decimation. (Snippet of the log attached.) Stage 2 proceeded to try to isolate, but since Stage 1 was not entirely isolated, the WatchDogs tripped again. This time, after clearing the WatchDogs, Guardian brought everything back up properly, and everything seems to have been working well since.

Jamie is unsure of the reason behind this, but suspects some epics connection issues.

Another odd bit to add is that the payload was never tripped...

Images attached to this report
H1 GRD
thomas.shaffer@LIGO.ORG - posted 12:02, Monday 20 July 2015 (19758)
Guardian Connection Issue in ISC_DRMI

At 17:40 UTC the ISC_DRMI Guardian had the following error: "EzcaError: Could not get value from channel: <PV 'H1:LSC-PD_DOF_MTRX_SETTING_5_18', count=1, type=double, access=read/write>, None"

A caget of the channel yielded a value, so it seems that it was just the Guardian that was not seeing it. Reloading the code did not fix the error, and we were already on the phone with Jamie about another Guardian oddity (alog to follow). Jamie suggested that we STOP the node and then set it back to EXEC. This worked!

This has been seen before (example: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19673); it does not seem to be the same channel each time, but they all come from H1:LSC-PD_DOF_MTRX. Jamie and Dave both have their thinking caps on and are trying to solve the problem.

H1 General
edmond.merilh@LIGO.ORG - posted 11:27, Monday 20 July 2015 (19757)
H1 operator morning locking summary

08:00-10:30: I got a little extra time for locking this morning.

Locking this morning was a little tricky, as there was some FSS oscillation and ISS instability that was giving the IMC a bit of trouble. Common gain manipulation settled that down, and a bit of AOM diffracted power adjustment rectified the remainder.

H1 ISC (DetChar, ISC, SEI, SUS)
gabriele.vajente@LIGO.ORG - posted 09:59, Monday 20 July 2015 - last comment - 22:08, Wednesday 22 July 2015(19756)
Brute force coherence summary

Here is a summary of the brute force coherence report already posted in a previous comment to an elog entry describing the good sensitivity lock of last Friday.

Basically, there is no large coherence anywhere, except for the well-known periscope peaks that are coherent with ISS signals, IMC angular signals, and the PSL periscope (figures 1-3).

At low frequency, there is coherence with the SUS-ETMY_L3_ESDAMON_?? signals. This was not there in the past, so I guess this coherence is just due to a change in the control strategy. If I'm not mistaken, this signal is just a monitor of the correction sent to the ESD, so coherence with DARM is normal. Please correct me if I'm wrong... (figure 4)

In the 10-30 Hz band, there is coherence with ASC-MICH_P (figure 5).

In the 10-70 Hz region, one dominant source of noise is longitudinal control, since there is coherence with MICH and SRCL (figures 6-7). This noise is not dominant, being still a factor of a few below the measured sensitivity.

In the higher-frequency region (above 100 Hz), there is coherence with the ISS and PSL periscope as already pointed out, but there is also some coherence with AS signals: ASC-AS_A/B_DC_SUM, ASC-AS_A_RF36_I_PIT/YAW, etc. Together with the main jitter peaks, there is a broadband noise floor at about 1e-10 m/rtHz from 100 to 1000 Hz. This might be intensity noise, or noise in higher-order modes that is not completely filtered by the OMC (figure 8).

Finally, a 90 Hz bump seems to be coherent with HAM5 signals (figure 9).

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 22:08, Wednesday 22 July 2015 (19854)SUS
SUS-ETMY_L3_ESDAMON_?? are the recently well-connected and well-digitized analog monitors of the ESD (i.e. TST or L3 stage) actuators. Since we're using ETMY L3 as most of our DARM actuator, it's no surprise that there is coherence with DARM below the DARM UGF. What's strange is that you post that they have coherence before they were connected correctly in the SUS-AUX model from whence they come (see LHO aLOG 19780) ...
H1 ISC
rana.adhikari@LIGO.ORG - posted 04:34, Monday 20 July 2015 - last comment - 16:12, Tuesday 21 July 2015(19752)
Summary of last few days

Hang, Matt, Evan, Stefan, Rana

We mostly were chasing weird locklosses and instabilities, but also managed to implement a few noise improvements:

  1. SRCL FF better shaping via fitting and better measurements.
  2. A2L decoupling gave us 10-30 Hz improvement.
  3. MICH FF tuned up a bit. New measurements taken; fitting and filter generation yet to be done.
  4. ASC tune up: hard/soft modes for CARM.
  5. ETMY Butterfly mode identified and notched.

Of the locklosses, some were just due to marginal points in the transitions and loop margins. The main issue over the past few days turned out to be that the TIDAL servo was somehow turned OFF on Friday evening. After switching that back on for ETMY, we have returned to having long locks. The high-passing of the SRCL FF has removed the high-power pitch instabilities that we were seeing.

We were able to get > 40 Mpc with 10W input. The butterfly mode of ETMY @ 6053.81 Hz is preventing us from turning on the OMC DC whitening right now, so we don't know how good our range really is, but our low frequency

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 04:16, Monday 20 July 2015 (19753)

We also got a chance to reimplement a small amount of the ASC changes from before last maintenance day:

  • We reverted to the true hard/soft basis for common pitch only (these settings are not controlled by the guardian right now). The attachment shows the OLTF of the cSoft loop with true soft actuation. Note the absence of the hard mode.
  • We turned up the gain on cHard pitch by 20 dB (not in the guardian).

Next:

  • Revert back to true hard/soft basis for common yaw.
  • Try out hard/soft basis for differential pitch and differential yaw.
Images attached to this comment
betsy.weaver@LIGO.ORG - 16:12, Tuesday 21 July 2015 (19800)

Note: Rana reports that the tidal was OFF at the far-right switch on the IMC_F filter module (the attached picture shows this switch now on, as it should be).

Images attached to this comment
H1 General (FMP, PEM, SEI)
rana.adhikari@LIGO.ORG - posted 04:03, Monday 20 July 2015 - last comment - 09:17, Monday 20 July 2015(19751)
Ops Station alarms

Might be nothing, but the following alarms have been going off continuously for the past few days:

EX HEPI DIFF PRESSURE (yellow)

EX DUST1, EY DUST1 (RED), and LVEA DUST

Comments related to this report
hugh.radkins@LIGO.ORG - 09:17, Monday 20 July 2015 (19755)

The EndX HEPI pressure sensors are especially noisy.  EE promises efforts to quiet them.  Meanwhile I've opened up the alarm levels to ±9 psi from the setpoint.

H1 ISC (ISC)
hang.yu@LIGO.ORG - posted 00:37, Monday 20 July 2015 - last comment - 01:32, Monday 20 July 2015(19747)
a2l coupling optimizing
Matt, Hang

We wrote some code to optimize the a2l coupling coefficients. The script measures the OMC DCPD's response to an injected angular dither, and does a linear fit (for I and Q separately) of this response as a function of a2l gain. With a proper rotation of the I-Q plane such that the rotated Q' stays constant, the minimum of the response will be at the zero of the rotated I'.
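
As a rough sketch of the fit-and-rotate step (hypothetical function and variable names; the real script differs):

import numpy as np

# Sketch only: find the a2l gain at which the rotated I' crosses zero.
# 'gains' are the trial a2l gains; 'I' and 'Q' are the demodulated
# DCPD responses measured at each trial gain.
def optimal_a2l_gain(gains, I, Q):
    aI, bI = np.polyfit(gains, I, 1)    # I ~ aI*g + bI
    aQ, bQ = np.polyfit(gains, Q, 1)    # Q ~ aQ*g + bQ
    theta = np.arctan2(aQ, aI)          # rotation making Q' constant in g
    a_rot = aI * np.cos(theta) + aQ * np.sin(theta)
    b_rot = bI * np.cos(theta) + bQ * np.sin(theta)
    return -b_rot / a_rot               # zero crossing of the rotated I'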

We ran the script for ITMX, ETMX, and ETMY. Results are shown in the attachments. The error bars are underestimated because the raw data may be correlated if the sampling frequency is not low enough; we therefore plotted them only as a reference and did not minimize the chi-squared in the fit.

Based on them, we did the following changes:

H1:SUS-ITMX_L2_DRIVEALIGN_P2L_GAIN: 2.300 -> 1.947
H1:SUS-ITMX_L2_DRIVEALIGN_Y2L_GAIN: 1.350 -> 0.632

H1:SUS-ETMX_L2_DRIVEALIGN_P2L_GAIN: 1.100 -> 0.927
H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_GAIN: 1.370 -> 0.927

H1:SUS-ETMY_L2_DRIVEALIGN_P2L_GAIN: -0.200 -> 0.000
H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_GAIN: -0.600 -> -0.700
Images attached to this report
Comments related to this report
hang.yu@LIGO.ORG - 01:32, Monday 20 July 2015 (19749)
It seems that our script worked...
Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 09:55, Sunday 19 July 2015 - last comment - 07:12, Monday 20 July 2015(19741)
more statistics on FE IOC freeze-up events

The probability that a freeze-up does not impact at least one Dolphin-connected FE is very small, so I'm using the h1boot Dolphin node manager's logs to data-mine these events. The Dolphin manager was restarted when h1boot was rebooted on Tuesday 7th July, so the data epoch starts at that time.

As I was seeing with my monitoring programs, the restarts preferentially happen in the 20-40 minute block within the hour. The first histogram is the number of events within the hour, divided into 10-minute blocks.

We are also seeing more events recently; the second histogram shows the number of events per day. The spike on Tuesday 14th is most probably due to front-end computer reboots during maintenance. Friday's increase is not so easily explained.
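
For reference, a hedged sketch of the bucketing behind the first histogram, assuming a list of 'HH:MM' strings (one per unique event, as produced by the grep pipeline below):

from collections import Counter

# Sketch only: count freeze-up events in 10-minute blocks of the hour
# and print an ASCII bar chart in the style of the listing below.
def minute_blocks(timestamps):
    counts = Counter(int(t.split(':')[1]) // 10 for t in timestamps)
    for block in range(6):
        lo, hi = 10 * block, 10 * block + 9
        print('%02d-%02d %2d :%s' % (lo, hi, counts[block], '*' * counts[block]))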

FE IOC freeze up time listing:

controls on h1boot:

grep "not reachable by ethernet" /var/log/dis_networkmgr.log | awk '{print $2, $4}' | awk 'BEGIN{FS=":"}{print $1":"$2}' | sort -u

total events: 197

minutes within the hour, divided into 10 min blocks

00-09 11  :*****

10-19 11  :*****

20-29 67  :*********************************

30-39 79  :****************************************

40-49 17  :*********

50-59 12  :******

events per day in July (starting Tue 07)

wed 08 09 :*****

thu 09 09 :*****

fri 10 08 :****

sat 11 07 :****

sun 12 08 :****

mon 13 09 :*****

tue 14 22 :***********

wed 15 10 :*****

thu 16 20 :**********

fri 17 38 :*******************

sat 18 16 :********

Comments related to this report
keith.thorne@LIGO.ORG - 07:12, Monday 20 July 2015 (19754)
This is a very clever analysis, Dave.
I checked the LLO logs (there are three: corner, x-end, y-end). So far we only see these issues when we have a front end down for I/O chassis or new hardware installs.
H1 ISC (ISC)
rana.adhikari@LIGO.ORG - posted 00:19, Sunday 19 July 2015 - last comment - 03:04, Monday 20 July 2015(19736)
Angular Instability in pitch at 10 W

Cataloging the many ways in which we are breaking lock or failing to lock since Friday, we found this one:

Sitting quietly at 10 W DC readout, there was a slow ring-up of a pitch instability in several ASC signals. Perhaps it's time we went back to seriously controlling the ASC in the hard/soft basis instead of the optic basis in which it's currently done. The frequency is ~1 Hz and the time constant is ~1 min. It would be great if someone could look at the signs of the fluctuations in the OL signals and figure out whether this was dHard or cHard or whatever.

Images attached to this report
Comments related to this report
rana.adhikari@LIGO.ORG - 22:01, Sunday 19 July 2015 (19744)AOS, ISC, SUS

In the attached plot, I've plotted the OpLev pitch signals during the time of this ring-up (0702 UTC on 7/19). The frequency is 1 Hz. It appears with the same sign and similar magnitudes in all TMs except ITMX (there's a little 1 Hz signal in ITMX, but it is much smaller).

  1. Do we believe the calibration of these channels at the 30% level?
  2. Do we believe the sign of these channels?
  3. If the signs are self-consistent, it seems to me that this is a soft mode, common-arm fluctuation. But it's weird for it to be at such a high frequency, I think.
  4. Why does it take so long to ring up? If it's due to Sidles-Sigg alone, I would guess that the Q would be lower (because of local damping). But if it's a radiation-pressure resonance and we have poor gain margin in our cSOFT loop, then it might be.
Images attached to this comment
rana.adhikari@LIGO.ORG - 03:04, Monday 20 July 2015 (19750)

Evan, Matt, Rana

We again saw the pitch instability tonight. We tried reducing it in a few ways, but the only successful way was to turn off the SRCL FF.

It appears that at higher powers, the SRCL_FF provides a feedback path for the pitch signals to get back to the arms (since the SRCL_FF drives the ITMs, and both of them as of Thursday); i.e., cSOFT has a secondary feedback path that includes some length<->angle couplings and produces a high-Q unstable resonance. I don't understand how this works, and I have never heard of this kind of instability before. But we were repeatedly able to see it ring up and ring down by enabling and disabling the SRCLFF.

To enable use of the SRCL_FF, we've put a high-pass filter into the SRCL_FF path. This cuts off the SRCL_FF gain below a few Hz while preserving the phase above 10 Hz (where we want the subtraction to be effective). HP filter Bode plot attached.
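
For a sense of the trade-off, a hedged sketch of checking such a filter's phase at 10 Hz (hypothetical corner frequency and order; the real SRCL_FF filter may differ):

import numpy as np
from scipy import signal

# Sketch only: 2nd-order analog high-pass with an assumed 3 Hz corner;
# print the residual phase it leaves at and above 10 Hz.
f_corner = 3.0  # Hz (assumed)
b, a = signal.butter(2, 2 * np.pi * f_corner, btype='highpass', analog=True)
freqs_hz = np.array([1.0, 3.0, 10.0, 30.0])
_, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)
for f, resp in zip(freqs_hz, h):
    print('%5.1f Hz: |H| = %.3f, phase = %6.1f deg'
          % (f, abs(resp), np.degrees(np.angle(resp))))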

Non-image files attached to this comment