H1 AOS
robert.schofield@LIGO.ORG - posted 10:21, Tuesday 01 November 2016 (31061)
Piezo can be pointed to reduce jitter peaks in IMC-REFL_DC with IMC off

This weekend, as part of clipping investigations, I found that I could adjust the pointing of the PSL piezo mirror to minimize the jitter peaks in IMC-REFL_DC (pitch: 1846 to 2300, yaw: 2043 to 1500) with the IMC off. The figure shows the resulting reduction of peaks in REFL_DC. Shaking of HAM2 was used to enhance some of the peaks, but there was little shaking at the 280 Hz piezo mirror peak. Sheila and I tried to move the piezo mirror to these settings while keeping the IMC locked, but we were unsuccessful. It might be worth adjusting the beam on the diode while trying to minimize peaks.

Robert, Sheila

Non-image files attached to this report
H1 PSL (PSL)
cheryl.vorvick@LIGO.ORG - posted 09:49, Tuesday 01 November 2016 (31060)
Weekly PSL Chiller Reservoir Top-Off

added 125ml to the crystal chiller

nothing added to the diode chiller (no fault on the chiller panel, trend of 14 days shows no "check chiller" faults)

H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 08:27, Tuesday 01 November 2016 (31059)
PSL Weekly Report (FAMIS #7410)

PSL: 
SysStat: All Green, except VB program offline 
Frontend Output power: 34.7W  
Frontend Watch: Red
HPO Watch: Red

PMC:
Locked: 0 days, 0 hours, 0 minutes
Reflected power:   32.7W
Transmitted power: 100.5W 
Total Power:       133.2W 

ISS:
Diffracted power: 2.576%
Last saturation event: 0 days, 0 hours, 0 minutes 

FSS:
Locked: 0 days, 0 hours, 0 minutes
Trans PD: 0.073V
H1 General (OpsInfo)
nutsinee.kijbunchoo@LIGO.ORG - posted 03:11, Tuesday 01 November 2016 (31057)
Ops OWL shift summary

Not much hope here. With any luck I could get all the way to locking PRMI and DRMI, but they didn't last. I've attached some plots from the lockloss tool using Sheila's ALS channel list (/ligo/home/sheila.dwyer/Desktop/Locklosses/channels_to_look_at_ALS.txt). ALS REFL glitched prior to the locklosses, but why would that matter to DRMI and PRMI, which have nothing to do with the arms?

To run the lockloss tool with a custom channel list, do

$ lockloss -c custom_channel_list.txt select

 

The default channel list doesn't seem to work.

Images attached to this report
LHO General (OpsInfo)
corey.gray@LIGO.ORG - posted 00:18, Tuesday 01 November 2016 (31044)
OPS Eve Shift Summary

TITLE: 10/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:

Had some trouble with the MC.  Sometimes Clearing History (ASC WFS) would help.  Another time the FSS was oscillating, so the Fast Gain was ramped down to its lowest value and returned to 9.0.  (Sheila mentioned both of these to me.)

For the first attempts at locking, I simply tried to lock (no initial alignment) and would lock DRMI, but it would drop out shortly after.  Jenne mentioned going to LOCK_DRMI_1F, looking at the ASC error signals on a striptool, and tweaking the optics for the degrees of freedom with large error signals.

LOG:

H1 ISC
corey.gray@LIGO.ORG - posted 00:11, Tuesday 01 November 2016 - last comment - 10:47, Tuesday 01 November 2016(31056)
ISC_DRMI Node CONNECT ERROR

During locking tonight, had the following ERROR for ISC_DRMI:

EZCA CONNECTION ERROR:  Could not connect to channel (timeout=2s):  H1:LSC-PD_DOF_MTRX_SETTING_1_23

I tried a couple of things:  I hit "LOAD", which did nothing.  Then I hit "Execute", which broke the lock.

One thing I did not do was re-request the state I was in.  (Nutsinee just let me know that this is what works for her when she has had "CONNECTION ERRORS".)

Comments related to this report
thomas.shaffer@LIGO.ORG - 10:47, Tuesday 01 November 2016 (31063)

After double-checking that all CDS systems are running, waiting a few minutes, and confirming that you can do a caget on the channel in question, change the operating mode of the node with the connection error from EXEC to STOP.  Wait for the node to change to a yellow background before requesting EXEC again.  If the node was previously managed, you may also need to INIT the manager (if the manager is working, one way to do this is to wait for the current state to finish, if it can, then go to MANUAL, INIT, and back to where it was).
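The caget check can also be done from a Python session; a minimal sketch, assuming pyepics is available on the workstation (not stated above), using the channel from the error message:

# Confirm the channel is reachable before restarting the Guardian node.
from epics import PV

pv = PV('H1:LSC-PD_DOF_MTRX_SETTING_1_23')
if pv.wait_for_connection(timeout=5.0):
    print('channel connected, value =', pv.get())
else:
    print('channel did not connect -- check the front end / IOC before restarting the node')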

H1 PSL
robert.schofield@LIGO.ORG - posted 23:27, Monday 31 October 2016 (31054)
PSL IOO periscope and piezo mirror tuning

I tuned the periscope a little and tuned and damped the piezo mirror mount. The 280 Hz peak is from the piezo mirror on the table, like the 300 Hz peak at LLO, and not from the top mount on the periscope, which is holding as I tuned it last year.  I am not sure why the piezo mirror mount is suddenly more prominent than it was last year; I would like to check again for some problem like clipping.  In any case, I would like to sub in one of the newer piezo mirror mounts; what I did was pretty jury-rigged and I don't expect much improvement, especially since much time was spent trying to re-lock the mode cleaner. We haven't had a fully locked interferometer since then to evaluate the changes I made.

H1 CDS
sheila.dwyer@LIGO.ORG - posted 22:33, Monday 31 October 2016 - last comment - 23:43, Monday 31 October 2016(31053)
ALS glitches

We are having a recurrence of the glitches that have been described in 30519, 25523, and 22184.

This time they are very intermittent (they only seem to happen about once every ten minutes), which would make it difficult to troubleshoot right now.

Comments related to this report
corey.gray@LIGO.ORG - 23:43, Monday 31 October 2016 (31055)

Sheila said to watch time series of (4) ALS Channels for glitches.  They are:

  • H1:ALS-X-REFL_CTRL_OUT_DQ
  • H1:ALS-Y-REFL_CTRL_OUT_DQ
  • H1:ALS-C_TRX_A_LF_OUT_DQ
  • H1:ALS-C_TRY_A_LF_OUT_DQ

Have noticed some drops on these channels corresponding with locklosses while locking.

Sheila mentioned that previously this effect would go away on its own, so there may be hope.  Or, if the effect becomes more infrequent and one can zip through the locking sequence and get to at least RF_DARM, then one could be "home free".  (So far, the glitches have been taking H1 down at most steps from LOCKING_ARMS_GREEN up to PREP_TR_CARM.)

I just finished an Initial_Alignment, and am now trying to see if we can get past RF_DARM and elude the dreaded ALS glitches.  (So far I am 0-for-5 in the last 20 minutes of locking.)

H1 CDS (ISC)
evan.hall@LIGO.ORG - posted 20:50, Monday 31 October 2016 - last comment - 22:23, Monday 31 October 2016(31050)
IMC F readback not actually whitened?

Matt, Evan

We were perplexed by the steep slope of IMC F below 100 Hz, particularly since it seemed to vary with PMC gain in the same way as the flat part of the spectrum above 1 kHz.

The attached plot shows the IN1 readbacks (i.e., no digital filtering or compensation applied) for IMC F and IMC L. They appear to have the same spectral shape above 10 Hz. Since IMC L does not have any analog whitening, this would seem to indicate that the IMC F readback has no analog whitening applied (despite what is implied by the schematic for the MC board).

However, the IMC F filter module (which produces the calibrated IMC frequency control channel that we've been using to estimate the IMC control noise) has a filter consisting of two 10 Hz / 100 Hz p/z pairs, as if to compensate for some kind of analog whitening.

What is actually stuffed into the analog whitening for the IMC F readback on the MC board?

[Also, we don't claim to understand why the TF is 0 dB at the peaks but -5 dB everywhere else.]

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 21:00, Monday 31 October 2016 (31051)

According to the schematics both MC_L and MC_F have the same whitening: 10Hz/100Hz double zero/pole with DC gain of 1. MC_I has a simple gain of 100.
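For reference, a quick sketch of what that whitening and its digital compensation look like; the 10 Hz / 100 Hz double zero/pole and unity DC gain are taken from the numbers quoted above, while the frequency grid and the checks printed are purely illustrative:

# Two 10 Hz zero / 100 Hz pole stages with unity DC gain, plus the matching
# 10 Hz pole / 100 Hz zero compensation (as in the IMC F filter module).
import numpy as np
import scipy.signal as sig

f = np.logspace(0, 4, 500)                        # 1 Hz to 10 kHz
w = 2 * np.pi * f

z = [-2 * np.pi * 10] * 2                         # two zeros at 10 Hz
p = [-2 * np.pi * 100] * 2                        # two poles at 100 Hz
k = (100.0 / 10.0) ** 2                           # gain chosen so |H(0)| = 1
_, h_white = sig.freqs_zpk(z, p, k, worN=w)       # analog whitening
_, h_comp = sig.freqs_zpk(p, z, 1.0 / k, worN=w)  # digital compensation (inverse)

print('whitening gain at 1 Hz  :', abs(h_white[0]))    # ~1 (unity DC gain)
print('whitening gain at 10 kHz:', abs(h_white[-1]))   # ~100 (40 dB above the poles)
print('max |whitening * compensation - 1|:', np.max(np.abs(h_white * h_comp - 1)))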

evan.hall@LIGO.ORG - 22:23, Monday 31 October 2016 (31052)

Yes, this was confusion on our part about the analog source of IMC L. Indeed, they both seem to have whitening installed (by comparison with IMC I).

H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 20:44, Monday 31 October 2016 (31049)
Calculation of LSC feedforward filters

Many people over the years have calculated what kind of filter one should create for LSC feedforward use, but the most commonly referenced note on the calculation is an Initial Virgo note by Bas and others (Lisa links to VIR-050A-08 in LLO alog 13242, and I re-attach it here).  I have rewritten the calculation in Advanced LIGO terms in T1600504, which I also attach here. This note does not (yet) include the feature that we have of summing together two feedforward filters for the same degree of freedom - we use this for SRCL such that we can separately adjust the gain of the low frequency and high frequency parts of the transfer function, but it could alternatively be used to sum in the iterative "tweak" filter. 
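For orientation, the core of the usual feedforward recipe is a frequency-domain ratio: the measured coupling of the auxiliary loop into DARM, divided by the actuation path used for the subtraction, with a sign flip. The sketch below is only that generic recipe under placeholder numbers, not a substitute for the derivation in T1600504:

# F(f) = -C(f)/A(f): C is the measured aux-DOF -> DARM coupling, A is the
# transfer function of the path used to inject the correction.
import numpy as np

def ideal_feedforward(coupling_tf, actuation_tf):
    # Frequency-domain filter that nulls the coupling (complex arrays, same f grid)
    return -coupling_tf / actuation_tf

f = np.logspace(1, 3, 5)                        # Hz, placeholder grid
coupling_tf = 1e-3 / (1 + 1j * f / 50.0)        # made-up SRCL -> DARM coupling
actuation_tf = 1.0 / (1 + 1j * f / 300.0)       # made-up actuation path
print(np.abs(ideal_feedforward(coupling_tf, actuation_tf)))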

Non-image files attached to this report
H1 TCS (ISC, OpsInfo, TCS)
kiwamu.izumi@LIGO.ORG - posted 18:22, Monday 31 October 2016 (31048)
TCS long term measurement; raising CO2X seems to improve sensitivity

Jim W, Cheryl, Corey, Nutsinee, Kiwamu,

Over the weekend Jim, Cheryl, Corey and Nutsinee kindly tested various CO2 configurations for me. The data so far indicate that raising the CO2X power while keeping CO2Y at 0 W improves the sensitivity.

I would like to see some more data points with the CO2X laser higher than 0.4 W while maintaining CO2Y at 0 W.

 


Some details

Attached are DARM spectra from several different times. Below is a list of the CO2 settings that people tried over the weekend.

        Time               CO2X power [W]   CO2Y power [W]   Goodness
  ref 0  11:20 29/Oct/2016  0.2              0                OK
  ref 1  12:20 29/Oct/2016  0.2              0.4              bad
  ref 2  15:00 29/Oct/2016  0.4              0                good
  ref 3  18:00 29/Oct/2016  0.4              0                good
  ref 4  06:30 30/Oct/2016  0.4              0.2              OK
  ref 5  18:00 30/Oct/2016  0.4              0                good
  ref 6  10:00 31/Oct/2016  0.3              0.1              bad-ish
  ref 7  11:30 31/Oct/2016  0.4              0.2              bad-ish
  ref 8  13:00 31/Oct/2016  0.5              0.3              bad-ish
  ref 9  14:00 31/Oct/2016  0.6              0.4              bad-ish

As one can see, having a high CO2X seems to improve the noise curve. We should try putting even more power on CO2X (meaning more than 0.4 W) next time.

Interestingly, the peak at 3340 Hz (SRM resonance, 27488) becomes taller when the TCS tunings become worse. This is consistent with the theory that the high-frequency part of the SRCL coupling is due to HOMs in the SRC. We might be able to use this peak to evaluate the goodness of the CO2 tunings.

Images attached to this report
H1 PSL
keita.kawabe@LIGO.ORG - posted 17:34, Monday 31 October 2016 (31047)
PMC locking electronics further change (Keita, Matt, Daniel)

In addition to what is already in the integration issue tracker (https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=6500), the following changes need to be made. Both are in T0900577.

ilspmc_servo3.pdf (both for PMC and ILS):

After the demod, the 5 MHz corner of the Sallen-Key filter is unnecessarily high, and the AD797 open-loop gain is only about 12 dB at 5 MHz, so it does not work as intended.

We'll add a passive 5MHz pole, and move the Sallen-Key corner down to 170kHz or so by:

ilspmc_fieldbox4.pdf (both for PMC and ILS):

The DAC output for the ramp signal is permanently connected to the HV amp, which has a 200 Hz pole and a gain of 50.

If the DAC output noise is 1 uV/rtHz (it depends on the DAC output value, but the digital output is usually zero), the DAC noise in the HV monitor (which has a 1/50 attenuation) will be 1 uV/rtHz at DC, rolling off above 200 Hz. See the attachment for the PMC HV monitor signal in lock.
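A back-of-the-envelope restatement of those numbers (a sketch only, using just the figures quoted above):

# Flat 1 uV/rtHz DAC noise through the gain-of-50 HV amp with its 200 Hz pole,
# read back through the 1/50 HV monitor attenuation.
import numpy as np

f = np.array([1.0, 10.0, 100.0, 200.0, 1000.0, 10000.0])   # Hz
dac_noise = 1e-6                                           # V/rtHz, assumed flat
monitor_noise = dac_noise * (50.0 / np.abs(1 + 1j * f / 200.0)) / 50.0
for fi, n in zip(f, monitor_noise):
    print(f'{fi:8.0f} Hz : {n:.2e} V/rtHz at the HV monitor')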

We'll add zp=100:10 by stuffing the unused whitening pattern on the board:

These are added to the existing FRS, which already has similar modifications to HV monitor and mixer readback.

Note that only the HV monitor modification for the PMC board has been done; we intend to do the rest for the PMC as well as the ILS.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 17:09, Monday 31 October 2016 (31043)
Ops EVE Shift Transition



TITLE: 10/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
    Wind: 36mph Gusts, 26mph 5min avg
    Primary useism: 0.15 μm/s
    Secondary useism: 0.30 μm/s
QUICK SUMMARY:

H1 CDS
david.barker@LIGO.ORG - posted 16:37, Monday 31 October 2016 (31046)
TCS ITMY chiller water flow reduction does not look like a digital problem

The TCS ITMY chiller water flow dropped several times, starting at 18:00 PDT Sunday 30th and continuing through to 08:00 PDT Monday 31st. The attached left-hand plot shows a one-day minute trend; the right-hand plot is a zoom into the last sharp drop at 07:01 PDT using second trends. The drop looks real and not like a digital problem. It has not re-appeared since 8am this morning.

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:16, Monday 31 October 2016 (31045)
CP3 Manual Overfill at 22:50 utc

Took 4 minutes 42 seconds to overfill CP3 with LLCV bypass valve 1/2 turn open.

H1 CAL (CAL, CDS, DetChar)
jeffrey.kissel@LIGO.ORG - posted 16:13, Monday 31 October 2016 - last comment - 12:42, Wednesday 02 November 2016(31040)
Example 1-hour Stretches of CAL Line Front-end-Calculated Coherence and Uncertainty
J. Kissel

We're exploring the functionality of the new features of the front-end calibration that calculate the coherence, and the subsequent uncertainty, of the transfer function between each CAL line source and DARM. As such, I plot three one-hour data stretches from different lock stretches in the past 24 hours.
    Data Set A: 2016-10-31 02:30 UTC
    Data Set B: 2016-10-31 07:00 UTC
    Data Set C: 2016-10-31 10:00 UTC

Note the translation between channel names and to which line they're analyzing:
H1:CAL-CS_TDEP_..._[COHERENCE/UNCERTAINTY]     Frequency        Used In Calculating       
           DARM_LINE1                          37.3             kappa_TST                (ESD Actuation Strength)
           PCAL_LINE1                          36.7             kappa_TST & kappa_PU    (ESD and PUM/UIM Act. Strength)
           PCAL_LINE2                         331.9             kappa_C & f_C            (Optical Gain and Cavity Pole)
           SUS_LINE1                           35.9             kappa_PU                 (PUM/UIM Act. Strength)

For details on these lines and parameters, refer to P1600063 and T1500377.

Recall also that our goal is to keep the uncertainty in the time-dependent parameters (which are calculated from combinations of these lines) at around ~0.3-5%, such that these uncertainties remain non-dominant (lines are strong enough) but non-negligible (not excessively strong). See the example total response function uncertainty budget in LHO aLOG 26889 for the level at which the time-dependent parameter estimation uncertainty impacts the total uncertainty. That means the uncertainty in each line estimate should be at the 0.1-0.3% level if possible. So, we can use these data sets to tune the amplitudes of the CAL lines, so as to balance uncertainty needs against sensitivity pollution.
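For reference, a common way to turn coherence into a fractional transfer-function uncertainty is the standard single-bin estimator sqrt((1 - C) / (2 N C)), with C the magnitude-squared coherence and N the number of averages. Whether the front end uses exactly this form is an assumption here, but with the 13 averages of 10 s FFTs quoted below it gives a feel for the coherence needed to reach the per-line goal:

# sigma_rel ~ sqrt((1 - C) / (2 * N * C)); N = 13 averages of 10 s FFTs.
import numpy as np

def tf_relative_uncertainty(coherence, n_averages=13):
    # relative uncertainty of |TF| from magnitude-squared coherence
    return np.sqrt((1.0 - coherence) / (2.0 * n_averages * coherence))

for c in (0.999, 0.9999, 0.99999):
    print(f'C = {c:.5f} -> {100 * tf_relative_uncertainty(c):.3f} % relative uncertainty')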

There are several interesting things. It's best to look at the data sets in order B, then A, then C.
In data set B -- 
- this is what we should expect if we manage to get a stable, O1-style interferometer in the next week or so for ER10 and O2. 
- With the current amplitudes, the uncertainty on the ~30 Hz lines hovers around 0.1% -- so we can probably reduce the amplitude of these lines by a factor of a few if the sensitivity stays this high.
- The 331 Hz line amplitude should probably be increased by a factor of a few.

In data set C -- (this is during the ghoulish lock stretch)
- One can see when the data goes bad, it goes bad in weird, discrete chunks. The width of these chunks is 130 sec (almost exactly), which I suspect is a digital artifact of  the 13 averages and 10 sec FFTs. The sensitivity was popping, whistling, and saturating SUS left and right during this stretch, at a much quicker timescale than 100s of seconds.

In data set A --
- This is an OK sensitivity stretch. The good thing is that the coherence/uncertainty appears to be independent of any fast glitching or the overall sensitivity, as long as we stick in the 60-75 Mpc range.
- Interestingly, there's either a data dropout or a terrible time period during this stretch (as indicated by the BNS range going to 0) -- but it's only ~120 sec. If it's a data dropout -- good, the calculation is robust against whatever happens in DMT land. If it's a period of glitchy interferometer, it's very peculiar that it doesn't affect the uncertainty calculation, unlike with data set C.

Based on these data sets, I think it'll be safe to set the uncertainty threshold at 1%, and if the uncertainty exceeds that threshold, the associated parameter value gets dumped from the calculation of the average that is applied to h(t).

So, in summary -- it looks like the calculations are working, and the calculated values roughly make sense when the IFO is calm. There are a few suspicious things that we need to iron out when the IFO isn't feeling so well, but I think we're very much ready to use these coherence calculations as a viable veto for time-dependent parameter calculations.
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 12:42, Wednesday 02 November 2016 (31128)CAL

Jeff K, Darkhan T,

We investigated further the 130 s drop in coherence in data set C (see LHO aLOG 31040 above).

This drop was possibly caused by one or more bad data points (a "glitch") at the beginning of the drop (when the first glitchy data point entered the 130 s averaging buffer). A quick look at the kappas calculated in PcalMon from 10 s FFTs during the 600 s around the time of the glitch indicates that outliers in the κTST and κPU values are found in only one of the 10 s intervals. This interval is GPS [1161968880, 1161968890) (see attachment 1).

A look at the slow channels indicates that the glitch produced an impulse response in DARM_ERR demodulated at 35.9 Hz lasting just under 10 s before the 0.1 Hz low-pass filter and roughly 30 s after the filter (see upper panes in attachment 2). The start of the glitch is at ~1910 s (GPS 1161968887). In the coherence calculation block of the CAL-CS model (attachments 3 and 4), it can be seen that the glitch lasts 20-30 s in the EPICS records preceding the 130 s averaging blocks (BUFFER_AND_AVERAGE), but results in a reduced coherence value for the full 130 s (see attachment 5).

If we use coherence values from the CAL-CS front-end model as a threshold for "bad kappas", this kind of glitch will result in unnecessarily marking 130 s of kappas as "bad". The GDS median kappas should not be sensitive to such short glitches; however, the CAL-CS front-end κTST was affected for ~250 s (the front-end kappas are low-passed with a 1/128 Hz IIR filter) (see attachment 5).

A potential (not yet tested) solution would be to replace the BUFFER_AND_AVERAGE (running average) script with a running median. A similar script could be used for averaging the front-end kappas, which would also reduce discrepancies between the GDS and front-end kappas.
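A minimal offline illustration of why the median helps (the buffer length of 13 comes from the entries above; the data are made up):

# One glitchy sample in a 13-sample buffer: the running average is pulled low
# for the full buffer length, while the running median is unaffected.
import numpy as np

buffer_len = 13
data = np.ones(100)          # well-behaved coherence-like quantity
data[40] = 0.2               # single glitchy sample

running_avg = np.convolve(data, np.ones(buffer_len) / buffer_len, mode='valid')
running_med = np.array([np.median(data[i:i + buffer_len])
                        for i in range(len(data) - buffer_len + 1)])

print('buffers pulled low by the glitch (average):', int(np.sum(running_avg < 0.99)))
print('buffers pulled low by the glitch (median) :', int(np.sum(running_med < 0.99)))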

Images attached to this comment
H1 CDS (DAQ, SUS)
david.barker@LIGO.ORG - posted 16:01, Monday 31 October 2016 (31042)
Frame size and cpu usage changes resulting from last Tuesday's SUS model changes

Jeff, Dave:

Following Jeff's SUS changes last Tuesday (25th October), we have accrued enough data to report on changes to the frame size and SUS front-end CPU usage.

Attached is a two-week trend of the full frame size. This is a compressed frame, so its size varies depending upon the state of the interferometer. Drawing an average by eye, the 64-second full frame size has decreased from a mean value of 1.77 GB to 1.67 GB. This is a 6% disk/tape saving. It equates to a per-second data rate decrease from 27.7 MB/s to 26.1 MB/s (a decrease of 1.6 MB/s).
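A quick check of that arithmetic (treating GB and MB as decimal units, which matches the quoted rates):

# 64 s compressed frame sizes converted to per-second data rates.
for label, size_gb in (('before', 1.77), ('after', 1.67)):
    print(f'{label}: {size_gb * 1000 / 64:.1f} MB/s')
print(f'saving: {100 * (1 - 1.67 / 1.77):.1f} %')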

The removal of filters and the reduction of DAQ channel decimation also improved the CPU processing time on most SUS models. The attached plot shows the 16 corner-station SUS and SUSAUX models; they typically report an improvement in CPU time of 10%.

Images attached to this report