Reports until 11:10, Thursday 01 December 2016
H1 DetChar (DetChar)
greg.ogin@LIGO.ORG - posted 11:10, Thursday 01 December 2016 (32073)
DQ Shift Mon 28 – Wed 30 November

Report on DetChar DQ glitch shift for Monday Nov 28 - Wednesday Nov 30


Looked at 3 major sources of glitches during this shift:


Full report can be found at https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20161128


H1 CDS (GRD)
david.barker@LIGO.ORG - posted 10:40, Thursday 01 December 2016 - last comment - 11:14, Thursday 01 December 2016(32072)
h1guardian0 memory upgrade

WP6366 Increase memory in Guardian machine

Dave, TJ, Carlos:

At 08:47 PST h1guardian0 was powered back up after having its RAM increased from 12GB to 48GB. TJ reports all nodes are operational.

Comments related to this report
thomas.shaffer@LIGO.ORG - 11:14, Thursday 01 December 2016 (32074)

I spoke a bit too soon; the IFO node froze up after the reboot. On first inspection it looked OK, but later I noticed that it wasn't reporting which nodes it was waiting for. The last log before it froze was "2016-12-01_16:51:04.396720Z IFO W: initialized", and it had negative SPM diffs.

A quick node restart fixed it.

H1 CAL (CAL)
darkhan.tuyenbayev@LIGO.ORG - posted 09:12, Thursday 01 December 2016 (32067)
Sensing function reference-time parameter values (from MCMC fitting)

Preliminary results from MCMC fitting of the sensing function measurements taken on Nov 12 to a 5-parameter model are given below:

  gain    : 1.160e+06     (nonlinear fit was 1.153e6, 0.6% different from MCMC)
  f_c     : 342.3340 Hz   (... 346.7 Hz, difference is 1.3%)
  delay   : -8.809e-08 s  (... -2e us, this value from the nonlinear fit was ignored;
                               uncertainty on the value from the nonlinear fit was +/- 3.3 us)
  f_s     : 8.19 Hz       (... 7.389 Hz, difference is 9.8%)
  1/Q     : 0.0474        (... 0.0454, difference is 4.2%)

Values of the nonlinear fit were reported in LHO alog 31665.

The fitted sensing function and the corner plots are attached.

These are preliminary results; the uncertainties on the parameters have not yet been analyzed.
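For context, a minimal sketch of what such a 5-parameter sensing model can look like, assuming the common form with an optical-spring term (f_s, 1/Q), a coupled-cavity pole (f_c), and a residual time delay; the exact model and normalization used in the actual MCMC fit may differ:

```python
import numpy as np

def sensing_model(f, gain, f_c, tau, f_s, inv_Q):
    """Sketch of a 5-parameter sensing function:
    gain * (optical-spring term) * (coupled-cavity pole) * (residual delay).
    Assumed form for illustration only."""
    f = np.asarray(f, dtype=float)
    spring = f**2 / (f**2 + f_s**2 - 1j * f * f_s * inv_Q)  # detuned-spring term (f_s, 1/Q)
    pole   = 1.0 / (1.0 + 1j * f / f_c)                     # coupled-cavity pole f_c
    delay  = np.exp(-2j * np.pi * f * tau)                  # residual time delay tau
    return gain * spring * pole * delay

# Evaluate at 100 Hz with the MCMC values reported above
C = sensing_model([100.0], 1.160e6, 342.334, -8.809e-08, 8.19, 0.0474)
```

At 100 Hz the spring term is near unity and the cavity pole mildly attenuates, so the magnitude comes out a bit below the overall gain.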

Non-image files attached to this report
LHO General
corey.gray@LIGO.ORG - posted 08:40, Thursday 01 December 2016 (32063)
Ops Day Shift Transition

TITLE: 12/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 5mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY:

Received status about how the night/morning went from Cheryl.  

16:35:  Since we are not LOCKED, performing memory upgrade for h1guardian0 (WP6366, via Carlos).

Will get back to locking afterward, investigate around DC Readout, and wait for assistance.

H1 General
cheryl.vorvick@LIGO.ORG - posted 08:29, Thursday 01 December 2016 (32064)
Ops Owl Summary:

State of H1: losing lock at LOW_NOISE_ESD

Activities:

H1 General (OpsInfo)
cheryl.vorvick@LIGO.ORG - posted 08:23, Thursday 01 December 2016 (32059)
Violin Damping - ETMX mode4 damped with many changes, ETMY and ITMX small changes

ETMX changes:

mode 1 gain: 50 -> 100
mode 2 gain: 0 -> 150, FM3 engaged (+60deg), FM5 disengaged (N505.710)
mode 3 gain: 100 -> 300, FM3 engaged (+60deg)
mode 4 gain: set by guardian -> -120
mode 5 gain: 30 -> 100, FM3 engaged (+60deg)
mode 6 gain: -100 -> -300
mode 7 gain: 150 -> 300
mode 8 gain: 50 -> 200

ETMY changes:

mode 6 gain: -24 -> -60

ITMX changes:

mode 1 FM3 engaged (+60deg)

Bounce Mode Damping: I increased the gains

ETMY gain = 0.3
ETMX gain = 0.3
ITMX gain = 0.5
ITMY gain = 0.5

After relocking, the ETMX changes didn't work exactly as they did in my lock - picture attached.

Images attached to this report
H1 General
cheryl.vorvick@LIGO.ORG - posted 01:07, Thursday 01 December 2016 (32056)
Ops Owl Transition

TITLE: 12/01 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.47 μm/s
QUICK SUMMARY:

H1 General
jim.warner@LIGO.ORG - posted 00:04, Thursday 01 December 2016 (32055)
Shift Summary

TITLE: 12/01 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Having trouble relocking; it seems the 4.7 kHz violins are acting up
LOG:

IFO was locked when I got here. Jeff was working on tamping down second-harmonic violins. Eventually got that taken care of and got to observing. After an hour (after everyone else went home) started getting ETMY saturations. Guess I didn't notice the 4.7 kHz modes ringing up. Haven't been able to relock since; nothing I've tried has gotten these violins to quit playing. ISC_LOCK also keeps failing to go to and come out of DOWN; I've had to select DOWN after each lockloss and then manually go to READY. Don't know if that is a problem.
 

H1 ISC
jim.warner@LIGO.ORG - posted 21:22, Wednesday 30 November 2016 - last comment - 22:04, Wednesday 30 November 2016(32052)
Weird lockloss, 47 hz feature

Just had a lockloss. It started with a long string of continual EY saturations; some broad "low frequency" noise (roughly 10-50 Hz) appeared in the live DARM spectra, then a lump appeared around 50 Hz. This eventually became a sharp peak at about 47 Hz that would get large for a few seconds, then settle down a little. The attached spectra show a measurement just before the lockloss (red) with the 47 Hz feature, a measurement with the low-frequency noise before the 47 Hz peak came up (brown), and a measurement from 2 hours ago (green). 47 Hz sounds familiar, but I know not from whence.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 21:25, Wednesday 30 November 2016 (32053)

I'll add: during handoff it was mentioned that the low-frequency noise had been poking up occasionally all day. It didn't get bad until after everyone else went home, but looking at the verbal log, it looks like EY saturations had been increasing in rate over this lock stretch, though not in a noticeable way.

jim.warner@LIGO.ORG - 22:04, Wednesday 30 November 2016 (32054)

Looks like it might have been one of the 4.7 kHz violin modes. Red is from right before a lockloss just a couple minutes ago, green is from the earlier lockloss, and blue is from around the time Jeff had damped this mode down. Looks like the lower-frequency mode got rung up somehow and is still pretty high.

Images attached to this comment
H1 SUS (DetChar)
jeffrey.kissel@LIGO.ORG - posted 19:20, Wednesday 30 November 2016 - last comment - 09:34, Thursday 01 December 2016(32050)
Campaign to Reduce 2nd Harmonics (~1000 Hz) of QUAD Violin Modes
J. Kissel, E. Merilh, J. Warner, T. Hardwick, J. Driggers, S. Dwyer

Prompted by DetChar worries about glitching around the harmonics of violin modes, Ed, Jim, and I went on an epic campaign to damp the ~1kHz, 2nd harmonic violin modes. These are tricky because not all modes had been successfully damped before, and one has to alternate filters in two filter banks to hit all 8 modes for a given suspension. 

We've either used or updated Nutsinee's violin mode table; the notable newly damped entries are
994.8973    ITMY     -60deg, +gain      MODE9: FM2 (-60deg), FM4 (100dB), FM9 (994.87) 
997.7169    ITMY       0deg, -400gain   MODE9: FM4 (100dB), FM6(997.717)                 VERY Slow
997.8868    ITMY       0deg, -200gain   MODE10: FM4 (100dB), FM6(997.89) 

Also, we inadvertently rang up modes around 4735 Hz, so we spent a LONG time trying to fight that. We eventually won by temporarily turning on the 4735 Hz notch in FM3 of the LSC-DARM2 filter bank and waiting a few hours. I had successfully damped the ETMY mode at 4735.09 Hz by moving the band-pass filter in H1:SUS-ETMY_L2_DAMP_MODE9's FM10 from centered around 4735.5 Hz to centered around 4735 Hz exactly, and using positive gain with zero phase. However, there still remains a mode rung up at 4735.4 Hz, but it's from an as-yet-unidentified test mass, and we didn't want to spend the time exploring. These 4.7 kHz lines have only appeared once before, in late October (LHO aLOG 31020).
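The band-pass re-centering described above (moving the damping filter from ~4735.5 Hz to ~4735 Hz) can be illustrated with a minimal sketch; the filter order, bandwidth, sample rate, and design method here are assumptions for illustration, not the actual Foton filter:

```python
import numpy as np
from scipy import signal

# Illustrative narrow band-pass centered on the 4735 Hz violin-mode harmonic,
# in the spirit of the DAMP_MODE filter described above.  Order, bandwidth,
# and sample rate are assumptions, not the real Foton design.
fs = 16384.0              # assumed SUS front-end rate
f0, bw = 4735.0, 1.0      # assumed center frequency and ~1 Hz bandwidth
sos = signal.butter(2, [f0 - bw / 2, f0 + bw / 2], btype="bandpass",
                    fs=fs, output="sos")

# Response check: near-unity gain at the mode, strong rejection off-band,
# so the damping loop only acts on the targeted line
w, h = signal.sosfreqz(sos, worN=[4735.0, 4700.0], fs=fs)
```

A filter this narrow is why being centered 0.5 Hz off the mode matters: the mode at 4735.09 Hz sits well inside a 1 Hz band centered at 4735 Hz but near the skirt of one centered at 4735.5 Hz.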

Attached is a before vs. after ASD of DELTAL_EXTERNAL. I question the calibration, but what's important is the difference between the two traces. Pretty much all modes in this frequency band have been reduced by 2 or 3 orders of magnitude -- better than O1 levels. Hopefully these stick through the next few lock losses and acquisitions.

Thanks again to the above mentioned authors for all their help!
Images attached to this report
Comments related to this report
laura.nuttall@LIGO.ORG - 06:23, Thursday 01 December 2016 (32058)

Thanks to all for your efforts! You can really see the dramatic decrease in the glitch rate around 21:00 UTC in the attached plot. The glitch rate in the lock after you did this work (which ended around 5 UTC today) looks much more typical of what we know the glitch rate at LHO to be.

Images attached to this comment
joshua.smith@LIGO.ORG - 07:40, Thursday 01 December 2016 (32062)DetChar

Comparing yesterday before damping to today, the high-frequency effect of the damping seems to be the removal of glitchy forests around 2, 3, 4, and 5 kHz (base frequency 2007.9 Hz, but wide). Great! Not sure of the mechanism that produces these frequencies yet; it seems to be more than a doubling of the modes you damped. As noted above, the 4735 Hz line is pretty large.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 09:34, Thursday 01 December 2016 (32070)DetChar, ISC
Attached is a spectrogram showing how the 2000 and 3000 Hz bands go away as the 1000 Hz violin modes are damped. You can also see that the bursts in these bands correspond with places where the spectrogram is 'bright' at 1000 Hz. Having two violin modes very close at 1000 Hz is like having one mode at 2000 Hz with a slow amplitude modulation. Probably that is getting turned into bursts in DARM by some non-linear process, modulated by that effective amplitude variation.

The 1080 Hz band is bursting on its own time scale, and does not seem to be related.
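The two-close-modes picture can be checked with a toy numerical example (frequencies taken from the damping table above; the quadratic nonlinearity is a stand-in for whatever the real coupling is):

```python
import numpy as np

# Toy demo: two violin modes very close in frequency (997.7 and 997.9 Hz,
# from the table above) passed through a quadratic nonlinearity produce
# power near 2 kHz whose amplitude beats at the 0.2 Hz difference frequency.
fs = 16384
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 997.7 * t) + np.sin(2 * np.pi * 997.9 * t)
y = x**2                      # stand-in for an unknown nonlinear coupling

spec = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

# The strongest line in the 1990-2000 Hz band sits at f1+f2 ~ 1995.6 Hz
mask = (freqs > 1990) & (freqs < 2000)
peak_freq = freqs[mask][np.argmax(spec[mask])]
```

The sum-frequency term dominates that band, and its envelope varies slowly at f2 − f1, consistent with a single "bursting" 2 kHz feature in a spectrogram.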
Images attached to this comment
H1 CDS (GRD)
david.barker@LIGO.ORG - posted 17:35, Wednesday 30 November 2016 - last comment - 11:19, Thursday 01 December 2016(32048)
h1guardian0 memory usage rate has increased, we'll install more memory at the next convenient time

The free memory size on the guardian machine is about 4GB. At the current rate of usage we predict a reboot is needed before next Tuesday. At the next opportune time, we will increase the memory size from 12GB to 48GB and perhaps schedule regular reboots on Tuesdays. 

Plot of available memory for the month of November is attached (Y-axis mis-labelled, actually MB).
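The "reboot needed before next Tuesday" estimate amounts to a linear extrapolation of free memory to zero; a minimal sketch with made-up numbers (not the actual h1guardian0 monitoring data):

```python
import numpy as np

# Fit a line to free-memory samples and extrapolate to exhaustion.
# The samples below are illustrative, not real h1guardian0 data.
days = np.array([0.0, 7.0, 14.0, 21.0, 28.0])                 # time (days)
free_mb = np.array([9000.0, 7800.0, 6500.0, 5300.0, 4000.0])  # free RAM (MB)

slope, intercept = np.polyfit(days, free_mb, 1)  # MB/day and MB at t=0
days_until_empty = -intercept / slope            # where the fit crosses zero
```

With a steady leak-like decline, the zero crossing gives the latest safe date for a reboot or a memory upgrade.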

Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 06:46, Thursday 01 December 2016 (32060)CDS, GRD
We did a similar analysis at LLO (see LLO aLOG 30004). We do see memory usage from the guardian process increasing over time.
michael.thomas@LIGO.ORG - 06:56, Thursday 01 December 2016 (32061)
Does this LHO memory plot include cached memory?  It would be interesting to see the amount of cache memory used along with the free memory.
jameson.rollins@LIGO.ORG - 08:44, Thursday 01 December 2016 (32065)

The character of memory usage on the LLO guardian machine is quite different from what Dave has posted. The LLO usage seems to plateau rather than continually increase, whereas the plots Dave is showing look like a very steady increase. The LHO plot is more disturbing, as if there's a memory leak in something. Memory usage has been fairly flat when we've looked in the past, so I'm surprised to see such a high rate of increase.

I also note that something changed two Tuesdays ago, which is when we also noticed a change at LLO. Was there an OS upgrade on h1guardian0 on Nov. 14?

keith.thorne@LIGO.ORG - 11:19, Thursday 01 December 2016 (32075)
The LLO guardian script machine was rebooted 16 days ago on Nov 15 (typically after we do an 'aptitude safe-upgrade'). The other dips are likely due to Guardian restarts for DAQ work, etc.
H1 SUS (CDS)
jeffrey.kissel@LIGO.ORG - posted 11:16, Wednesday 30 November 2016 - last comment - 09:17, Wednesday 29 March 2017(32021)
SUS PR2 Frame Rate Differences -- Understood; Let's Leave It Be
J. Kissel, S. Aston, P. Fritschel

Peter was browsing through the list of frame channels and noticed that there are some differences between H1 and L1 on PR2 (an HSTS), even after we've both gone through and made the effort to revamp our channel list -- see Integration Issue 6463, ECR E1600316, LHO aLOG 30844, and LLO aLOG 29091.

The difference he found is the result of the LHO-only ECR E1400369 to increase the drive strength of the lower stages of *some* of the HSTSs. This requires the two sites to have different front-end model library parts for different types of the same suspension, because the BIO control of each stage differs depending on the number of drivers that have been modified.
At LHO the configuration is
    Library Part        Driver Configuration            Optics
    HSTS_MASTER.mdl     No modified TACQ Drivers        MC1, MC3
    MC_MASTER.mdl       M2 modified, M3 not modified    MC2
    RC_MASTER.mdl       M2 and M3 modified              PRM, PR2, SRM, SR2

At LLO, the configuration is
    Library Part        Driver Configuration            Optics
    HSTS_MASTER.mdl     No modified TACQ Drivers        MC1, MC3, PR2
    MC_MASTER.mdl       M2 modified, M3 not modified    MC2, PRM, SR2, SRM
    RC_MASTER.mdl       M2 and M3 modified              none

This model's DAQ channel list for the MC and RC masters is the same. The HSTS master is different, and slower, because these SUS are used for angular control only: 
                           HSTS (Hz)        MC or RC (Hz)
    M3_ISCINF_L_IN1        2048               16384

    M3_MASTER_OUT_UL       2048               16384
    M3_MASTER_OUT_LL       2048               16384
    M3_MASTER_OUT_UR       2048               16384
    M3_MASTER_OUT_LR       2048               16384

    M3_DRIVEALIGN_L_OUT    2048               4096

Since LLO's PR2 does not have any modifications to its TACQ drivers, it uses the HSTS_MASTER model, which means that PR2 alone shows up as a difference in the channel list between the sites; this is what seemed odd to Peter -- that L1 has 6 more 2048 Hz channels than H1. Sadly, it *is* used for longitudinal control, so LLO suffers the lack of stored frame rate.

In order to "fix" this difference, we'd have to create a new library part for LLO's PR2 alone that has the DAQ channel list of an MC or RC master but the BIO control logic of an HSTS master (i.e. operating the M2 and M3 stages with unmodified TACQ drivers). That seems excessive given that we already have 3 different models due to differing site preferences (and maybe range needs), so I propose we leave things as-is, unless there's a dire need to compare the high-frequency drive signals to the M3 stage of PR2 at LLO.

I attach a screenshot that compares the DAQ channel lists for the three library parts, and the two types of control needs as defined by T1600432.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 08:59, Thursday 01 December 2016 (32066)
Just to trace out the history of HSTS TACQ drivers at both sites:

Prototype of L1200226 to increase the drive strength of the MC2 M2 stage at LLO:
LLO aLOG 4356
      >> L1 MC2 becomes MC_MASTER.

ECR to implement the L1200226 on MC2, PRM, and SRM M2 stages for both sites: E1200931
      >> L1 PRM, SRM become MC_MASTERs
      >> H1 MC2, PRM, SRM become MC_MASTERs

LLO temporarily changes both PR2 and SR2 M2 drivers for an L1200226 driver: LLO aLOG 16945
And then reverted two days later: LLO aLOG 16985
     
ECR to increase the drive strength of the SR2 M2 stage only at LLO: E1500421
      >> L1 SR2 becomes MC_MASTER

ECR to increase the drive strength of the SR2 and PR2 M2 stages and the PRM, PR2, SRM, SR2 M3 stages at LHO only: E1400369
      >> H1 PRM, PR2, SRM, SR2 become RC_MASTERs.
stuart.aston@LIGO.ORG - 09:17, Wednesday 29 March 2017 (35180)
LLO have since had an ECR to increase the drive strength for PR2 M2 stage: E1700108
      >> L1 PR2 from HSTS_MASTER becomes MC_MASTER

This has now been implemented (and has stuck this time) at LLO: LLO aLOG's 32597 and 32623.
H1 CAL (CAL)
aaron.viets@LIGO.ORG - posted 12:03, Monday 28 November 2016 - last comment - 05:26, Thursday 01 December 2016(31911)
Gating kappa_tst with DARM line coherence in GDS pipeline
I have been investigating the spike in the computed value of kappa_tst shown here in the summary pages:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20161122/cal/time_varying_factors/
A closer look reveals an apparent correlation between the DARM line coherence and the values of kappa_tst (see the attached plot). Currently, computed kappa_tst values are gated with PCALY_LINE1 and SUS_LINE1, the only lines used to compute kappa_tst.

Things to note in the plot:
1) kappa_tst diverges from the expected range about a minute after the DARM line coherence uncertainty skyrockets, just the amount of time it takes to corrupt the 128-sample running median.
2) kappa_tst as computed by CALCS also diverges when the DARM coherence goes bad.
3) Ungated kappa_pu behaves similarly to kappa_tst, but since it is gated with the DARM coherence, the GDS output is not corrupted.
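A minimal sketch of the kind of coherence gating described here, with an assumed uncertainty threshold and median length (the GDS pipeline's actual thresholds and implementation differ):

```python
import numpy as np

# Sketch of coherence-based gating: samples of kappa_tst whose DARM-line
# coherence uncertainty is too high are held at the last good value before
# entering the running median.  The 0.01 threshold and 128-sample median
# length are assumptions, not the GDS pipeline's actual values.
def gate_and_median(kappa, unc, unc_max=0.01, med_len=128):
    gated = np.array(kappa, dtype=float)
    for i in range(len(gated)):
        if unc[i] > unc_max:
            gated[i] = gated[i - 1] if i > 0 else 1.0  # hold last good value
    # trailing running median over med_len samples
    return np.array([np.median(gated[max(0, i - med_len + 1):i + 1])
                     for i in range(len(gated))])

# A stretch with bad coherence no longer drags the median away from 1.0
kappa = np.concatenate([np.full(200, 1.0), np.full(50, 5.0), np.full(200, 1.0)])
unc = np.concatenate([np.full(200, 0.001), np.full(50, 0.1), np.full(200, 0.001)])
clean = gate_and_median(kappa, unc)
```

Without the gate, the 50 corrupted samples would bias the median for roughly a median-length afterwards, which matches the ~minute delay noted in point 1.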
Non-image files attached to this report
Comments related to this report
aaron.viets@LIGO.ORG - 05:26, Thursday 01 December 2016 (32057)
Here is a plot of offline-calibrated data that includes the same time, this time adding DARM coherence gating of kappa_tst. Note that the GDS timeseries now looks good, and kappa_tst is well behaved.
Non-image files attached to this comment
H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 16:11, Tuesday 22 November 2016 - last comment - 09:36, Thursday 01 December 2016(31738)
PCALX Roaming Calibration Line Frequency Changed from 4801.3 to 5001.3 Hz
J. Kissel for S. Karki

I've moved the roaming calibration line to the highest frequency we intend to use, and this is also the last super-long duration we need. We may run through the lower frequency points again, given that (a) they need much less data, and (b) those data points were taken at various input powers that will likely confuse/complicate the analysis. Below is the current schedule status.

Current Schedule Status:
Frequency    Planned Amplitude        Planned Duration      Actual Amplitude    Start Time                 Stop Time                    Achieved Duration
(Hz)         (ct)                     (hh:mm)                   (ct)               (UTC)                    (UTC)                         (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------------------
1001.3       35k                      02:00                   39322.0           Nov 11 2016 21:37:50 UTC    Nov 12 2016 03:28:21 UTC      ~several hours @ 25 W
1501.3       35k                      02:00                   39322.0           Oct 24 2016 15:26:57 UTC    Oct 31 2016 15:44:29 UTC      ~week @ 25 W
2001.3       35k                      02:00                   39322.0           Oct 17 2016 21:22:03 UTC    Oct 24 2016 15:26:57 UTC      several days (at both 50W and 25 W)
2501.3       35k                      05:00                   39322.0           Oct 12 2016 03:20:41 UTC    Oct 17 2016 21:22:03 UTC      days     @ 50 W
3001.3       35k                      05:00                   39322.0           Oct 06 2016 18:39:26 UTC    Oct 12 2016 03:20:41 UTC      days     @ 50 W
3501.3       35k                      05:00                   39322.0           Jul 06 2016 18:56:13 UTC    Oct 06 2016 18:39:26 UTC      months   @ 50 W
4001.3       40k                      10:00                   39322.0           Nov 12 2016 03:28:21 UTC    Nov 16 2016 22:17:29 UTC      days     @ 30 W (see LHO aLOG 31546 for caveats)
4301.3       40k                      10:00                   39322.0           Nov 16 2016 22:17:29 UTC    Nov 18 2016 17:08:49 UTC      days     @ 30 W          
4501.3       40k                      10:00                   39322.0           Nov 18 2016 17:08:49 UTC    Nov 20 2016 16:54:32 UTC      days     @ 30 W (see LHO aLOG 31610 for caveats)   
4801.3       40k                      10:00                   39222.0           Nov 20 2016 16:54:32 UTC    Nov 22 2016 23:56:06 UTC      days     @ 30 W
5001.3       40k                      10:00                   39222.0           Nov 22 2016 23:56:06 UTC
Images attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 19:26, Tuesday 22 November 2016 (31752)
Before the HW injection test, we turned off this line (before entering observation intent). I turned it back on at Nov 23 2016 03:25 UTC, but this did not drop us out of observation intent.
evan.goetz@LIGO.ORG - 20:13, Tuesday 22 November 2016 (31755)
This line was again turned off at 04:12 UTC Nov 23 2016 so that a DetChar safety study can be performed late tonight.
sudarshan.karki@LIGO.ORG - 09:36, Thursday 01 December 2016 (32068)

The attached plot shows the analysis of the sensing function at frequencies above 1 kHz, obtained from the roaming lines listed in the alog above. These lines were run at different times than the low-frequency sweep (below 1 kHz) taken on Nov 18 and included in this plot, so the lines above 1 kHz need to be compensated for the time-varying parameters to make an accurate comparison; that has not been done for this plot.

One way of compensating for the changes is by applying the kappas calculated using the SLM Tool (or GDS). The other is to compare each individual line against the always-on 1083.7 Hz line at time t (when each line is running) and at time t0 (the time of the low-frequency sweep).

Sensing Function [ct/m] = (DARM_ERR/TxPD)|_{f=hf, t} * (TxPD/DARM_ERR)|_{f=1083.7 Hz, t} * (DARM_ERR/TxPD)|_{f=1083.7 Hz, t0}

Both methods are essentially the same, but I will use the second; a plot with the correct compensation applied is coming soon.
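The second method amounts to multiplying three measured ratios; a small sketch with made-up numbers showing how a common drift between t0 and t cancels out of the result:

```python
# Sketch of the ratio-based compensation described above.  All numerical
# values are made up for illustration; only the algebra is the point.
def compensated_sensing(darm_over_tx_hf_t, darm_over_tx_ref_t, darm_over_tx_ref_t0):
    # (DARM_ERR/TxPD)|f=hf,t * (TxPD/DARM_ERR)|f=1083.7,t * (DARM_ERR/TxPD)|f=1083.7,t0
    return darm_over_tx_hf_t * (1.0 / darm_over_tx_ref_t) * darm_over_tx_ref_t0

# If the sensing function drifted by a common factor (here 2%) between t0
# and t, the 1083.7 Hz reference line drifts by the same factor, and the
# correction removes it, recovering the t0-referred value.
true_hf_t0 = 5.0e5
drift = 1.02
value = compensated_sensing(true_hf_t0 * drift, 1.0e6 * drift, 1.0e6)
```

Because the 1083.7 Hz line runs continuously, both epochs are available for every roaming-line measurement.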

Non-image files attached to this comment