Displaying reports 63281-63300 of 83068.
Reports until 10:42, Friday 07 August 2015
H1 PSL
keita.kawabe@LIGO.ORG - posted 10:42, Friday 07 August 2015 - last comment - 06:22, Saturday 08 August 2015(20319)
ISS injection turned off at 17:30-ish UTC (Jenne, Keita)

This morning, after the IFO was locked, DARM was super noisy in the kHz region. The ISS error point was also super noisy, and the coherence between the two was high.

It turns out that the ISS got noisy at the tail end of last night's lock stretch, at around 2015-08-07 12:00:00 UTC. That's 5 AM local time.

We went to the floor and, sure enough, a function generator was connected to the ISS injection point via an SR560. We switched them off and disconnected the cable from the ISS front panel, but left the equipment on the floor so the injection can be restarted later.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 06:22, Saturday 08 August 2015 (20349)

When Stefan and I hooked the injection back up, we found that the digital enable/disable switches weren't doing their jobs. Toggling the outputs of H1:PSL-ISS_TRANSFER2_INJ and H1:PSL-ISS_TRANSFER1_INJ had no effect on the appearance of the noise in DARM.

H1 AOS
jeffrey.kissel@LIGO.ORG - posted 09:52, Friday 07 August 2015 (20316)
The ETMY 508.2896 Hz Violin Mode Saga is Over
J. Kissel, as a once-removed observer to the people doing the real work: J. Driggers, S. Dwyer, S. Ballmer, S. Karki, E. Hall, M. Evans, H. Yu, etc.

For the record, this mode that Stefan is just beginning to have victory over (see LHO aLOG 20307) has been giving us problems since this past Saturday. My strong suspicion is that this mode was rung up during the ISC_LOCK :: DOWN snafu (LHO aLOG 20134), in which the ISC system was blasting nonsense into the suspensions for many hours after it failed to recognize that the IFO was down (and it was a Saturday, so it was only by chance that Jamie happened to come on site). Further, I think that it was exacerbated / rung up further because turning OFF one of the failed techniques to damp the mode had not been included in the ISC_LOCK DOWN state, and this was not discovered for a few days (see LHO aLOG 20272). 

Also, the newly doctored Dan Hoak, currently the world's expert on violin mode damping in the H1 aLIGO IFO, has been on site for a few days. As soon as he heard we were violin mode damping, he said "Oh, is it the 508.289 mode? Yeah, I tried to damp that, and never could. Good luck!" So this particular ETMY mode has never been controllable.

We re-acquired lock about 4 hours ago, and the violin mode has since been damped down to the best reference levels. 
Fabulous work ladies and gents.

Note to self: let's not ring up this mode ever again!

The resolution / timescale of my statement about the causes is informed by scanning this past week's summary pages, and by control room / commissioning meeting conversations over the past week that don't make it into the log:
Saturday 08/01: [[Initial Ring Up]]

Sunday 08/02: [[Failed Violin Mode Damping Rails]]

Monday 08/03:

Tuesday 08/04:

Wednesday 08/05: [[Failed Violin Mode Damping Railing turned OFF]]

Thursday 08/06:

H1 DAQ (CDS)
james.batch@LIGO.ORG - posted 08:56, Friday 07 August 2015 (20314)
Channels added to h1broadcast0 for DMT GDS system
The h1broadcast0 daqd process was restarted to add channels to the frame sent to the DMT GDS system.  The following channels were added:

H1:CAL-CS_TDEP_ESD_LINE1_REF_A_IMAG
H1:CAL-CS_TDEP_ESD_LINE1_REF_A_REAL
H1:CAL-CS_TDEP_ESD_LINE1_REF_C_IMAG
H1:CAL-CS_TDEP_ESD_LINE1_REF_C_NOCAVPOLE_IMAG
H1:CAL-CS_TDEP_ESD_LINE1_REF_C_NOCAVPOLE_REAL
H1:CAL-CS_TDEP_ESD_LINE1_REF_C_REAL
H1:CAL-CS_TDEP_ESD_LINE1_REF_D_IMAG
H1:CAL-CS_TDEP_ESD_LINE1_REF_D_REAL
H1:CAL-CS_TDEP_ESD_LINE1_REF_OLG_REAL
H1:CAL-CS_TDEP_ESD_LINE1_REF_OLG_IMAG
H1:CAL-CS_TDEP_PCALY_LINE1_REF_A_IMAG
H1:CAL-CS_TDEP_PCALY_LINE1_REF_A_REAL
H1:CAL-CS_TDEP_PCALY_LINE1_REF_C_IMAG
H1:CAL-CS_TDEP_PCALY_LINE1_REF_C_NOCAVPOLE_IMAG
H1:CAL-CS_TDEP_PCALY_LINE1_REF_C_NOCAVPOLE_REAL
H1:CAL-CS_TDEP_PCALY_LINE1_REF_C_REAL
H1:CAL-CS_TDEP_PCALY_LINE1_REF_D_IMAG
H1:CAL-CS_TDEP_PCALY_LINE1_REF_D_REAL
H1:CAL-CS_TDEP_PCALY_LINE1_REF_OLG_IMAG
H1:CAL-CS_TDEP_PCALY_LINE1_REF_OLG_REAL
H1:CAL-CS_TDEP_PCALY_LINE2_REF_A_IMAG
H1:CAL-CS_TDEP_PCALY_LINE2_REF_A_REAL
H1:CAL-CS_TDEP_PCALY_LINE2_REF_C_IMAG
H1:CAL-CS_TDEP_PCALY_LINE2_REF_C_NOCAVPOLE_IMAG
H1:CAL-CS_TDEP_PCALY_LINE2_REF_C_NOCAVPOLE_REAL
H1:CAL-CS_TDEP_PCALY_LINE2_REF_C_REAL
H1:CAL-CS_TDEP_PCALY_LINE2_REF_D_IMAG
H1:CAL-CS_TDEP_PCALY_LINE2_REF_D_REAL
H1:CAL-CS_TDEP_PCALY_LINE2_REF_OLG_IMAG
H1:CAL-CS_TDEP_PCALY_LINE2_REF_OLG_REAL
H1:CAL-DELTAL_RESIDUAL_DQ
H1:CAL-DELTAL_CTRL_DQ

H1 General
edmond.merilh@LIGO.ORG - posted 08:40, Friday 07 August 2015 (20313)
Morning Meeting Summary

The Work Permit meeting was folded into today's meeting, since it was not held on Wednesday.

H1 AOS
stefan.ballmer@LIGO.ORG - posted 03:10, Friday 07 August 2015 - last comment - 09:22, Friday 07 August 2015(20307)
Victory over 508.2896Hz in 2nd round
Evan, Stefan, Sudarshan
Previous alogs 20298, 17610, 19720

- With the configuration from alog 20298 
  - MODE5: FM1, FM2, FM4, G=-100
  - MODE8: FM1, FM4, FM5, G=+500
  we noticed that the 508.2896Hz mode rang down but then settled at a fixed and still large value of about 9e-15 m RMS (2e-14 m/rtHz with BW=0.187Hz)
- This had two causes:
  - The MODE5 filters (broadband feed-back for all ETMY modes) were interfering. Thus we designed a notch for the 508.2896Hz mode in MODE5.
    This made the phase slightly worse for the 508.22Hz mode - but it still damps.
  - While MODE8 is driving pitch, it has some coupling to length, leading to some gain peaking at the violin mode frequency.
    This effect limits the gain we can use.

Next we fine-tweaked the damping phase with the following trick:
 - Right after turning on the MODE8 damping we see the immediate effect of the pitch-to-length coupling changing the violin mode height.
 - By fine-tweaking the phase, we can find the phases for which this immediate effect results in...
   - ... the biggest step-down (violin mode and direct drive are exactly out of phase), and ...
   - ... the biggest step-up (violin mode and direct drive are exactly in phase)
 - We then added roughly 90deg to the step-down version of the filter, such that there is no immediate step when turning on the damping.
   This implies that the direct drive is lagging about 90deg behind the violin mode, which is what we want for damping.
 - The settings in this configuration were (see snapshot)
  - MODE5: FM1, FM2, FM4, FM5, G=-100
  - MODE8: FM1, FM3, FM4, FM5, G=+800
 - In this configuration we saw a slope in the 508.2896Hz mode of about 1 decade of reduction per 3.5h. (Slow, but consistent!)
 - We tried increasing the gain, but ran into a growing beat with about 2min duration. I am guessing that this is related to the P2L coupling induced gain peaking.
 - After about 3h of this, we were again able to turn on the OMC DCPD whitening.
 - MODE8 is now turned on in the guardian.
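For scale, the 1-decade-per-3.5 h slope quoted above translates into an amplitude time constant and an effective (damped) quality factor; this arithmetic is mine, not part of the measurement:

```python
import math

f = 508.2896                     # Hz, mode frequency
t_decade = 3.5 * 3600.0          # s, observed: ~1 decade of amplitude per 3.5 h
tau = t_decade / math.log(10.0)  # amplitude e-folding time, ~1.5 h
Q_eff = math.pi * f * tau        # effective quality factor under damping
print(f"tau = {tau:.0f} s, Q_eff = {Q_eff:.2e}")
```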
   
  

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 04:31, Friday 07 August 2015 (20309)
Some numbers on the feed-back phases and gains

The filters currently used have:
MODE5:
Mode  |   507.992  508.146  508.010  508.206  508.220  508.585  508.661  
phase |   -173d     177d    -173d     170d     167d      167d     160d    
gain  |   260dB   for all                                                


MODE8:
Mode  |   508.289 
phase |   72d
gain  |   277dB

A transfer function measurement of the MODEx_OUT/DARM_IN1 gives roughly the same numbers, but lagging about 12deg, corresponding to 1 sample delay.

So, roughly speaking, the 508.289 mode likes to be driven with an additional 90deg phase lag, and a 17dB higher gain.
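As a cross-check on the one-sample-delay interpretation: assuming the front-end model runs at 16384 Hz (an assumption, not stated in this entry), one sample of delay at this mode frequency gives

```python
f_mode = 508.289    # Hz
fs = 16384.0        # Hz, assumed front-end model rate
lag_deg = 360.0 * f_mode / fs   # phase of a one-sample delay at f_mode
print(f"{lag_deg:.1f} deg")     # ~11 deg, consistent with the ~12 deg lag seen
```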
stefan.ballmer@LIGO.ORG - 04:34, Friday 07 August 2015 (20310)
After about 4h of damping, the mode is now about 10x smaller.
H1 ISC (SEI)
sheila.dwyer@LIGO.ORG - posted 22:51, Thursday 06 August 2015 - last comment - 10:25, Friday 07 August 2015(20305)
Earthquakes, not much progress this evening, coil driver lockloss

Sheila, Gabriele, Nutsinee

Today I asked the operators to log when they do initial alignment and note what prompted them to do it, so I will start doing the same.

Tonight we had a trip of the ITMX ISI, stage 1 on the L4Cs, which was probably triggered by a PEM injection (unfortunately coincident with the arrival of a small earthquake).  After this, it seemed as though the alignment of the ITM might have changed. We had two locklosses during CARM offset reduction, and the DRMI alignment was bad.  After an earthquake, an ISI trip, and several locklosses, we decided to do initial alignment. 

After this we had one successful lock, which I broke attempting to disengage CHARD so that we could phase the REFL WFS.  Then one and a half more earthquakes made us decide to call it a night without learning anything more about the violin mode. 

Note: We did add logic to the ISC_LOCK DOWN state that switches off ALL the violin mode damping, and added logic to the DRMI guardian (in engage DRMI ASC) that will prevent it from doing anything other than check for locklosses if SRM is misaligned. 

Also, earlier in the afternoon we had a lockloss that appeared to be because of coil driver switching. The lockloss seems to be coincident with BS M2 LL coil driver state changing. 

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 03:05, Friday 07 August 2015 (20308)

Not really sure what was going on seismically tonight...

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 09:52, Friday 07 August 2015 (20317)

The three big peaks seem to coincide with three earthquakes in Mexico (M5.3, M4.9, and M5.1) whose R-waves arrived at LHO at 21:47:49, 23:06:43, and 00:05:07 PDT. There was also a M5.5 earthquake in Rwanda that arrived at LHO around 19:36 PDT - around the time we started having trouble locking.

 

https://ldas-jobs.ligo.caltech.edu/~hunter.gabbard/earthquake_mon/seismic.html

hugh.radkins@LIGO.ORG - 10:25, Friday 07 August 2015 (20318)

The alignment shift from the BSC ISI trip amounts to ~400 nrad--see attached.  RX & RY shifted by about half that, and Z by maybe 150 nm.  No position DOF is restored for the BSC ISIs.  I also attach the Stage 2 Cartesian trends.  Z has a similar shift to Stage 1, but the others are much smaller.  Notice the trends of the tilt DOFs (loops not closed); for this 24-hour trend, it isn't much.

Images attached to this comment
H1 DetChar (DetChar)
gabriele.vajente@LIGO.ORG - posted 21:25, Thursday 06 August 2015 (20304)
Two-months-long statistics on loud glitches

In this elog entry I'll describe a statistical study of all the loud glitches of the last two months. Unfortunately, I don't have any clear conclusion, but I identified 151 glitches of the kind we are looking for.

Method

Looking at the Detchar summary pages, I selected the most stable and longest lock stretches from June 1st till today. For each of them, loud glitches were selected by looking at the minutes when the inspiral range dropped more than 5 Mpc with respect to the general trend (computed with a running average of 30 minutes). The list of lock stretches is reported in the first attached text file (lock_stretches.txt).

The range is sampled at 1/60 Hz, so to better identify the glitch time I loaded LSC-DARM_IN1 and computed a band-limited RMS (BLRMS) between 30 and 300 Hz. Empirically, this turned out to be a good indicator of the kind of glitches we are trying to investigate. The time of the glitch is taken as the time that corresponds to the maximum value of the said BLRMS. This works very well for our glitches, but may give slightly wrong results (within one minute) for other kinds of glitches.
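As a rough illustration, the BLRMS peak-finding could be sketched as below. This is not the actual analysis code: the filter order, RMS window, and the toy data are my own choices, and data loading is left to the caller.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms_glitch_time(x, fs, t0, f_lo=30.0, f_hi=300.0, rms_win=0.25):
    """Return the time of maximum band-limited RMS in a DARM segment.

    x  : time series (e.g. LSC-DARM_IN1 around the minute where the range dropped)
    fs : sample rate in Hz
    t0 : GPS time of x[0]
    """
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    n = int(rms_win * fs)
    # running RMS over rms_win-second windows
    rms = np.sqrt(np.convolve(y**2, np.ones(n) / n, mode="same"))
    return t0 + np.argmax(rms) / fs

# toy check: a loud in-band burst injected at t0 + 30 s should be recovered
fs, t0 = 4096.0, 1117400416.0
t = np.arange(int(60 * fs)) / fs
x = np.random.default_rng(0).normal(size=t.size)
x[int(30 * fs):int(30.05 * fs)] += 50 * np.sin(2 * np.pi * 100 * t[:int(0.05 * fs)])
print(blrms_glitch_time(x, fs, t0))
```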

In total I identified 285 loud glitches, as defined above. The corresponding GPS times are listed in the second attached text file (glitch_times.txt) together with a number which classifies the glitch (see more later).

Some general statistics

First of all, I wanted to understand if those loud glitches, regardless of their shape and origin, are clustered in particular hours of the day. The first attached figure shows three histograms:

  1. the top one tells you how many hours of lock were selected for each period of time. So the first histogram says that my analysis covered a total of about 8 hours of data between 0 am and 1 am (collected over the two months), about 10 hours between 1 am and 2 am, etc...
  2. the middle histogram is the count of loud glitches in the corresponding period of time, regardless of the day. So it is telling you that, over the two months, we had 11 glitches happening between 0 am and 1 am, and so on...
  3. the third histogram shows the ratio of the second over the first, so basically it's an estimate of the hourly glitch rate as a function of the time of the day, in local time. The red solid line is the average over the whole day; the dashed and dotted lines correspond to one and two standard deviations.
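The ratio in the third histogram is just glitch counts divided by live time per hour-of-day bin. A minimal sketch, with made-up inputs:

```python
import numpy as np

def hourly_glitch_rate(glitch_hours, lock_hours_per_bin):
    """Histogram glitches by local hour of day, then divide by the hours of
    analyzed lock in each bin to get a rate in glitches/hour."""
    counts, _ = np.histogram(glitch_hours, bins=24, range=(0, 24))
    live = np.asarray(lock_hours_per_bin, dtype=float)
    rate = np.divide(counts, live, out=np.zeros(24), where=live > 0)
    return counts, rate

# toy inputs: 11 glitches between 0 and 1 am over 8 h of selected lock, etc.
glitch_hours = [0.5] * 11 + [1.2] * 5
lock_hours = [8.0, 10.0] + [0.0] * 22
counts, rate = hourly_glitch_rate(glitch_hours, lock_hours)
print(rate[0])  # 11 glitches / 8 h
```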

I don't see any dramatic clustering; however, it seems that there are a bit more glitches happening between 5pm and 8pm. Not very significant though. Moreover, remember that this analysis covers all locks that I judged to be good, without having any information on the activity of the commissioners.

Statistics during ER7 and not-ER7

The second attached plot is the same kind of histogram, but here I restricted the analysis to the period of time marked as ER7 (between GPS 1117400416 and GPS 1118329216). This should be a more controlled period, without too much commissioning or tube cleaning. A total of 167 glitches were identified in this period.

Basically, the same daily distribution as before is visible, although the predominance of 5pm-8pm is somewhat lower.

The third attached plot shows the same histogram again, but for all the non ER7 periods. It looks like the dominant periods are later in the night, between 7pm and 10pm.

In conclusion, I don't see any striking dependency on the time of the day.

Glitch classification

I looked into the 285 glitches one by one, to try a classification based on their shape. The kind of glitches we are hunting down have a very clear shape, as pointed out by Keita. Here is my classification system:

Class 0: unidentified origin (not clear what caused the range to drop...)
Class 1: like the glitches we are looking for, but kind of small and sometimes not completely certain
Class 2: definitely the glitches we are looking for

Class 3: somewhat slower glitches, with a duration of 10-100 ms
Class 5: general noise increase on a longer time scale (seconds)
Class 6: messy things, including clear human actions (swept sines, etc..)

The classification is based on the behavior of the BLRMS in the 30-300 Hz band, the time series, and a 100 Hz high-passed time series.

In total, I could identify 151 glitches of class 1 and 2, which most likely correspond to what we are looking for. Attached figures 4 and 5 show two examples of class 1 and class 2 glitches. I saved similar plots for all 285 glitches, so ask me if you are interested in seeing all of them.

Statistics of class 1 and 2 glitches

I repeated the same statistical analysis described above, but this time using only class 1 and 2 glitches. The 6th attached plot shows the dependency on the time of the day. It's unclear to me whether there is anything significant. The peak is between 6pm and 7pm...

I also checked whether there is a correlation with the day of the week, see the 7th plot. Not clear either, although it seems we can exclude that there are fewer glitches over the weekend. If anything, it's the contrary.

Finally, the very last plot shows the glitch rate as a function of the date. It seems that three days were particularly glitchy: June 8th, July 25th and August 1st.

Images attached to this report
Non-image files attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 18:26, Thursday 06 August 2015 (20303)
Updated Conlog channel list
Added 356 channels.

The following channels remain unmonitored:

H1:GRD-LSC_CONFIGS_LOGLEVEL
H1:GRD-LSC_CONFIGS_MODE
H1:GRD-LSC_CONFIGS_NOMINAL_S
H1:GRD-LSC_CONFIGS_REQUEST
H1:GRD-LSC_CONFIGS_REQUEST_S
H1:GRD-LSC_CONFIGS_STATE_S
H1:GRD-LSC_CONFIGS_STATUS
H1:GRD-LSC_CONFIGS_TARGET_S

I'm not sure why they weren't caught by the conlog_create_pv_list.bsh script.
H1 General
cheryl.vorvick@LIGO.ORG - posted 16:41, Thursday 06 August 2015 (20301)
Locking Ops Summary:
H1 General
edmond.merilh@LIGO.ORG - posted 15:51, Thursday 06 August 2015 (20290)
Daily Ops Summary

ALL TIMES IN UTC

15:00 IFO locked at 32 Mpc

15:21 Looked at PSL status. Everything looks good.

15:23 Robert S. out into the LVEA to do some HF acoustic injections while the IFO is locked.

16:32 Sent Katie out to the LVEA to join Robert.

16:50 Robert out temporarily. Katie still in.

16:54 TJ working with SYS_DIAG. Red Guardian backgrounds are to be ignored until further notice.

17:00 Robert and Katie out until lock re-established.

17:10 Kyle and company out to Y2-8 (~300 m from Y-End) to deliver equipment.

17:34 Guardian: OMC_LOCK in error, as Kiwamu warned me. He's in a meeting. Jamie is looking into it for now.

18:00 Operator training in the control room.

22:00 Filled the chiller water. The Low Level alarm was sounding on the unit; filling restored it to normal.

H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 15:39, Thursday 06 August 2015 (20298)
508.2896Hz ETMY violin mode finally damping
The steps that lead to success:
- First I got the best mode frequency estimate so far, from Keith Riles' alog: 508.2892Hz (19190)
- Next I drove at a frequency nominally 1/(10 min) above it: 508.2892Hz + 1/(60sec*10) = 508.290867 Hz
- (Note: I picked a higher frequency because the neighbouring mode is at a lower frequency.)
- This produces a beat signal with a period of 13min, maybe +-10sec
- Thus, today our best mode frequency estimate is f0 = 508.2892Hz + 1/(60sec*10) - 1/(60sec*13) = 508.289585Hz, with uncertainty df = 0.000017Hz.
- Thus, a simple awggui drive at that frequency should have a phase that runs away by at most 1 deg per 1/(df*360deg) = 167 sec, or 2.8 min/deg.
- I guessed an initial phase and left the drive running during the commissioning call, and got a nice decline of the mode.
- Next I designed a feed-back filter that matched my successful awggui drive (all fast enough that my feed-back phase didn’t run away.)
- That loop looked good so far - except that we tripped the silly hardware watchdogs several times - they take us out at about 1/2 of the DAC range... :-(

- We then turned on all violin damping filters. Notably this includes MODE5 on ETMY, which is a broadband filter also affecting the drive at 508.2896Hz.
- We will observe this state for a while before declaring complete success.

The filters currently used are FM1, FM4 and FM5 (a +12deg phase), with a gain of about +500.
The settings are not in Guardian yet.
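The beat-note arithmetic from the list above can be restated in a few lines (all numbers as reported in this entry; nothing re-measured):

```python
f_prior = 508.2892                   # Hz, Keith Riles' estimate (alog 19190)
f_drive = f_prior + 1.0 / (60 * 10)  # drove nominally 1/(10 min) high

T_beat, dT = 13 * 60.0, 10.0         # observed beat period: 13 min +/- 10 s
f_mode = f_drive - 1.0 / T_beat      # the mode sits below the drive frequency
df = dT / T_beat**2                  # propagate the +/-10 s period uncertainty

sec_per_deg = 1.0 / df / 360.0       # worst-case open-loop phase run-away
print(f"f_mode = {f_mode:.6f} +/- {df:.6f} Hz, {sec_per_deg:.0f} s/deg")
```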
Images attached to this report
H1 PSL (PSL)
edmond.merilh@LIGO.ORG - posted 15:29, Thursday 06 August 2015 (20299)
PSL Chiller

Filiberto, Ed

Fil alerted me to the Low Water Level alarm going off in the chiller room. I added 400ml. Alarm reset. Situation returned to normal.

H1 SEI (ISC)
hugh.radkins@LIGO.ORG - posted 14:31, Thursday 06 August 2015 - last comment - 15:01, Thursday 06 August 2015(20296)
Tidal offset to HEPI Bleed rate reduced to 1um/sec

As I showed, and Daniel knew and confirmed, moving the HEPIs at 2 um/sec was too fast to avoid upsetting the BSC ISI Stage 1 T240 sensors.  With larger offsets, which therefore bleed down over longer periods of time, the T240s have a longer time to reach their trip point and trip the watchdog.

I changed H1:LSC-X(Y)_TIDAL_CTRL_BLEEDRATE from 2 to 1 um/s; the change has been accepted in SDF.  Will follow up after we've had a few lock stretches and bleedoffs.

Comments related to this report
hugh.radkins@LIGO.ORG - 15:01, Thursday 06 August 2015 (20297)

Did a quick scan through DV: there have been 3 trips of the ETMY ISI from this bleedoff since July 6.  It looks like there have been no trips of this type on ETMX since then.

H1 ISC
evan.hall@LIGO.ORG - posted 04:26, Thursday 06 August 2015 - last comment - 17:54, Thursday 06 August 2015(20288)
Thermal tuning preparation

Jamie, Dan, Gabriele, Evan

There are some files in evan.hall/Public/Templates/LSC/CARM/FrequencyCouplingAuto that will allow for automated transfer function measurements of frequency noise into DARM (using IOP channels) at specified intervals. Running ./launcher 1200 15 FreqCoupIOPrun.sh will open a dtt template, record the transfer function (and coherence) between the appropriate IOP channels, and then save a timestamped copy of said template. This then repeats every 20 minutes for 15 iterations (i.e., about 5 hours).
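For reference, the control flow of such a launcher might look like the following sketch. The real script lives in the Templates path above; here the dtt template run is reduced to a generic subprocess call, and the timestamp argument is my assumption about how the saved copies get named.

```python
import subprocess, time

def launcher(interval_s, count, script):
    """Run `script` every interval_s seconds, count times.
    Each call gets a timestamp argument so the script can save a
    timestamped copy of its output (as the dtt template runs do)."""
    for i in range(count):
        t_start = time.time()
        stamp = time.strftime("%Y%m%d_%H%M%S")
        subprocess.run([script, stamp], check=True)
        if i < count - 1:   # pace the remaining iterations
            time.sleep(max(0.0, interval_s - (time.time() - t_start)))

# e.g. launcher(1200, 15, "./FreqCoupIOPrun.sh")  -> about 5 hours of measurements
```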

The idea here is to hook up the usual broadband, analog frequency excitation into the CARM error point and leave it running for the duration of the thermal tuning (1 V rms, 300 Hz HPF, 30 kHz LPF, with the CARM IN2 slider at -17 dB seems to be OK). We settled on this approach (i.e., a broadband frequency measurement only) after Gabriele and I found that a similar excitation injected into the ISS error point was highly nonstationary in the OMC DCPD sum.

If we want to do broadband noise injection to both the ISS and CARM, then some modification of the script will be required; i.e., we probably will have to ramp the two excitations on and off so that they are not injecting simultaneously. That's not hard, since both the CARM board and the ISS board have digitally controlled enable/disable switches.

Right now a thermal sweep with transfer function measurements does not really seem compatible with the heinous upconversion in DARM from the 508.289 Hz mode. Dan and I tried for some time to continue on with the work started by Jenne, Stefan, and Kiwamu, but we could not make progress either. We even tried (on Stefan's suggestion) sending in an awg excitation at the mode frequency into the EY ESD, but this did not have any real effect on the mode height. If it's useful, the EY mode #8 BLRMS filters are set up to measure the behavior of the violin modes in the neighborhood of this one.

Comments related to this report
evan.hall@LIGO.ORG - 17:54, Thursday 06 August 2015 (20302)

This was rewritten to accommodate interleaved measurements of the ISS and FSS couplings:

./launcher 1200 15 FreqCoupIOPrun.py

It will turn on an FSS injection, measure the FSS coupling, turn off the injection, then repeat for the ISS (via the second-loop error point injection). It remains to be seen what the appropriate ISS drive level is.
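A minimal sketch of the interleaving logic, with the enable/disable writes and the measurement itself left as placeholders for the real EPICS switches and dtt template runs (the callback names are mine, not from the script):

```python
def interleaved_couplings(set_enable, measure):
    """Enable one injection at a time, measure its coupling, then disable
    it before moving on -- so the FSS and ISS excitations never overlap.
    set_enable(name, on) and measure(name) stand in for the real EPICS
    switch writes and the dtt transfer-function measurement."""
    results = {}
    for name in ("FSS", "ISS"):
        set_enable(name, True)
        results[name] = measure(name)
        set_enable(name, False)   # off before the other injection starts
    return results
```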

IOP channels for ISS second loop array:

H1:IOP-PSL0_MADC0_20 = H1:PSL-ISS_SECONDLOOP_PD1

...

H1:IOP-PSL0_MADC0_27 = H1:PSL-ISS_SECONDLOOP_PD8
