Displaying reports 54721-54740 of 84704.
Reports until 07:43, Tuesday 25 October 2016
H1 DetChar
scott.coughlin@LIGO.ORG - posted 07:43, Tuesday 25 October 2016 - last comment - 07:43, Tuesday 25 October 2016(30804)
distribution of scratchy (also called Blue Mountains) noise in O1
Distribution of hours at which scratchy glitches occurred according to the ML output from GravitySpy. In addition, a histogram of the amount of O1 time spent in analysis-ready mode is provided. I have uploaded omega scans and FFT spectrograms of what Scratchy glitches looked like in O1.
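A minimal sketch of the hour-of-day binning described above (an illustration only, not the GravitySpy pipeline; the example times are hypothetical, and the GPS-to-UTC conversion ignores leap seconds, which shifts times by only 17 s in this era):

```python
from datetime import datetime, timedelta

# GPS epoch; leap seconds are ignored (GPS-UTC offset was 17 s during O1),
# which is negligible for hour-of-day binning.
GPS_EPOCH = datetime(1980, 1, 6)

def hour_histogram(gps_times):
    """Bin glitch GPS times into a 24-bin histogram by UTC hour of day."""
    counts = [0] * 24
    for t in gps_times:
        utc = GPS_EPOCH + timedelta(seconds=t)
        counts[utc.hour] += 1
    return counts

# Hypothetical glitch times (GPS seconds), one hour apart
example_times = [1126259462.0, 1126259462.0 + 3600, 1126259462.0 + 7200]
hist = hour_histogram(example_times)
```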
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:33, Monday 24 October 2016 (30810)
For those of us who haven't been on DetChar calls to have heard this latest DetChar nickname... "Scratchy glitches?"
joshua.smith@LIGO.ORG - 15:10, Monday 24 October 2016 (30814)DetChar

Hi Jeff, 

Scotty's comment above refers to Andy's comment to the range drop alog 30797 (see attachment here and compare to Andy's spectrogram). We're trying to help figure out its cause. It's a good lead that they seem to be related to RM1 and RM2 motion. 

"Scratchy" is the name used in GravitySpy for these glitches. They are called that because they sound like scratches in audio https://wiki.ligo.org/DetChar/InstrumentSounds . In FFT spectrograms they look like mountains, or if you look closer, like a series of wavy lines. They were one of the most numerous types of H1 glitches in O1. In DetChar we also once called them "Blue mountains." Confusing, I know. But there is a DCC entry disambiguating (in this case equating) scratchy and blue mountain https://dcc.ligo.org/LIGO-G1601301 , a further entry listing all of the major glitch types https://dcc.ligo.org/G1500642 , and the notes on the GravitySpy page. 

Images attached to this comment
LHO VE
chandra.romel@LIGO.ORG - posted 16:47, Monday 24 October 2016 (30824)
CP3 overfill
3:32 pm local

Overfilled CP3 by doubling LLCV to 34%. Took 18 min. to see vapor and tiny amounts of LN2 out the exhaust line. TCs responded but still did not read out high negative #s like I'd expect, or like I saw with a 1/2 turn on the bypass fill valve. So I left LLCV at 35% for an additional 7 min. but did not see a drastic change in temps. Never saw much flow out of the bypass exhaust pipe. 

Bumped LLCV nominal from 17% to 18% open.
Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:44, Monday 24 October 2016 - last comment - 21:12, Monday 24 October 2016(30823)
Replaced Batteries for UPSes on All Vacuum Racks

Removed and replaced battery packs for all vacuum rack UPSes (Ends/Mids/Corner station).  No glitches noted on racks.

Work done under WP#6270.

Comments related to this report
kyle.ryan@LIGO.ORG - 21:12, Monday 24 October 2016 (30834)
If FAMIS were allowed to digest this activity, it could expect to become more "regular" (I'm laughing at my own jokes!)
H1 PSL
peter.king@LIGO.ORG - posted 16:40, Monday 24 October 2016 (30822)
front end laser diode box power supply replaced
The front end diode box that was removed from service some months ago, S/N DB 12-07, had one of its Lumina power supplies replaced - the one on the right hand side as you face the front panel and key switch.
  old: S/N 38226
  new: S/N 118533


Fil/Gerardo/Peter
H1 SUS (CDS, DAQ, DetChar, ISC, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:36, Monday 24 October 2016 - last comment - 17:15, Monday 24 October 2016(30821)
Front-End Model Prep: All SUS Science Frame Channels Changed to Match T1600432
J. Kissel
Integration Issue 6463
ECR: E1600316
WP: 6263

P. Fritschel, S. Aston, and I have revamped the SUS channel list that is stored in the frames in order to
(1) Reduce the overall channel frame rate in light of the new scheme of storing only one science frame (no commissioning frames), and
(2) because the list was an unorganized hodgepodge of inconsistent ideas, accumulated over 6 years, of what to store.
The new channel list (and the rationale for each channel and its rate) can be found in T1600432, and will not change throughout O2.

I've spent the day modifying front-end models so that they all conform to this new list. This impacts *every* SUS model, and we'll install the changes tomorrow (including the removal of ISIWIT channels, prepped on Friday; LHO aLOG 30728).

For the SUS models used in any control system, the channel list was changed in the respective suspension type's library part,
  Sending        BSFM_MASTER.mdl
  Sending        HLTS_MASTER.mdl
  Sending        HSSS_MASTER.mdl
  Sending        HSTS_MASTER.mdl
  Sending        MC_MASTER.mdl
  Sending        OMCS_MASTER.mdl
  Sending        QUAD_ITM_MASTER.mdl
  Sending        QUAD_MASTER.mdl
  Sending        RC_MASTER.mdl
  Sending        TMTS_MASTER.mdl
  Transmitting file data ..........
  Committed revision 14509.
and committed to the userapps repo.

For monitor models, the changes are done on the secondary level, but that layer is not made of library parts, so they have to be individually changed per suspension. These models, 
  Sending        models/h1susauxb123.mdl
  Sending        models/h1susauxex.mdl
  Sending        models/h1susauxey.mdl
  Sending        models/h1susauxh2.mdl
  Sending        models/h1susauxh34.mdl
  Sending        models/h1susauxh56.mdl
  Transmitting file data ......
  Committed revision 14508.
are also now committed to the userapps repo.

We're ready for a rowdy maintenance day tomorrow -- hold on to your butts!
Comments related to this report
david.barker@LIGO.ORG - 17:15, Monday 24 October 2016 (30826)

Note that to permit sus-aux channels to be acquired at 4kHz, the models h1susauxh2, h1susauxh34 and h1susauxh56 will be modified from 2K models to 4K models as part of tomorrow's work.

H1 CDS
david.barker@LIGO.ORG - posted 16:27, Monday 24 October 2016 (30820)
h1fs0, cleared out old zfs snapshots

Today I did some more cleanup work on the /opt/rtcds file system following yesterday's full-filesytem errors.

We perform hourly zfs snapshots on this file system, and zfs-sync them to the backup machine h1fs1 at the same rate. h1fs0 had hourly snapshots going back to May 2016.

Yesterday I had deleted all of May and thinned June down to one-per-day. Today we made the decision that since all the files are backed up to tape, we can delete all snapshots older than 30 days. This will ensure that disk allocated to a deleted file will be recovered after the last snapshot which references it is destroyed after 30 days. I destroyed all snapshots up to 26th September 2016.
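The 30-day retention rule could be sketched like this; the snapshot naming scheme ("@auto-YYYY-MM-DD-HHMM") and pool/filesystem names are assumptions, not the actual h1fs0 layout, and the real destruction would be a `zfs destroy` per doomed name:

```python
from datetime import datetime, timedelta

def snapshots_to_destroy(snapshot_names, now, keep_days=30):
    """Return snapshot names older than keep_days, assuming names of the
    (hypothetical) form 'pool/fs@auto-YYYY-MM-DD-HHMM'."""
    cutoff = now - timedelta(days=keep_days)
    doomed = []
    for name in snapshot_names:
        stamp = name.split("@auto-", 1)[1]
        when = datetime.strptime(stamp, "%Y-%m-%d-%H%M")
        if when < cutoff:
            doomed.append(name)  # would be passed to 'zfs destroy'
    return doomed

snaps = [
    "h1fs0/opt-rtcds@auto-2016-09-20-0100",
    "h1fs0/opt-rtcds@auto-2016-10-20-0100",
]
old = snapshots_to_destroy(snaps, now=datetime(2016, 10, 24))
```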

After the snapshot cleanup, the 928G file system is using 728G (split as 157G used by snapshots and 571G used by the file system). This is a usage of 78%, which matches what df reports.
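As a quick check, the quoted numbers are self-consistent:

```python
# Disk usage figures from the log (GiB)
snapshots_gb = 157
filesystem_gb = 571
total_gb = 928

used_gb = snapshots_gb + filesystem_gb       # 728 G, as reported
percent_used = 100 * used_gb / total_gb      # ~78%, matching df
```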

H1 General
edmond.merilh@LIGO.ORG - posted 15:56, Monday 24 October 2016 (30818)
Shift Summary - Day
 
 
TITLE: 10/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
LOG:

16:45 Small cube truck on site for Bubba. (Gutter Kings) Installation of gutters on the North Side of OSB. WP#6266

17:27 Peter and Jason into the PSL . WP#6268

17:39 Kyle out to EX to take measurements

17:45 Took IMC to 'OFFLINE' as per the request of Peter and Jason

17:56 ISI config changed to no BRS for both end stations for maintenance purposes

17:58 Fil at EY

18:00 Gerardo out to execute WP#6270

18:27 Kyle back from EX

18:37 Fil leaving EY

19:00 Fil at EX

19:03 Jason and Peter out. Circumstances did not allow the intended task to be performed

19:08 Fil leaving EX

19:10 Begin bringing the IFO back. IMC locking a bit daunting. Cheryl assisting.

19:12 BRS turned on at both end stations

19:35 reset PSL Noise Eater

20:39 having some difficulty aligning X-arm

20:40 Rick S and a guest up to the observation deck

20:59 Sheila out to PSL rack

21:11 Jeff into CER

21:41 Bubba to MX to inspect a fan

22:25 Chandra to MY to do CP3

 
H1 GRD
sheila.dwyer@LIGO.ORG - posted 15:15, Monday 24 October 2016 - last comment - 09:06, Saturday 29 October 2016(30815)
ezca connection error

Ed, Sheila

Are ezca connection errors becoming more frequent?  Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).

The first one was from ISC_LOCK, the screenshot is attached. 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 18:15, Monday 24 October 2016 (30828)

Happened again but for a different channel H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON ( Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep trying to dig and see what I find.

Relevant ISC_LOCK log:

2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z   File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z   File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z     if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
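One generic way to paper over transient channel-access timeouts is a retry wrapper; the sketch below uses a stand-in `get_fn` for an ezca/pyepics-style caget (a hypothetical interface, not Guardian's actual mechanism, which instead re-runs the state method until the CERROR clears):

```python
import time

def caget_with_retry(get_fn, channel, attempts=3, delay=1.0):
    """Retry a channel read when the connection times out.

    get_fn stands in for an ezca/pyepics-style caget (hypothetical
    interface used for illustration only).
    """
    last_exc = None
    for i in range(attempts):
        try:
            return get_fn(channel)
        except ConnectionError as exc:
            last_exc = exc
            time.sleep(delay * (2 ** i))  # simple exponential backoff
    raise last_exc

# Demo with a fake getter that fails twice, then succeeds.
calls = {"n": 0}

def flaky_caget(channel):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Could not connect to channel (timeout=2s)")
    return 42.0

value = caget_with_retry(flaky_caget, "H1:LSC-PD_DOF_MTRX_7_4", delay=0.0)
```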

sheila.dwyer@LIGO.ORG - 21:12, Friday 28 October 2016 (30977)

It happened again just now. 

Images attached to this comment
david.barker@LIGO.ORG - 09:06, Saturday 29 October 2016 (30993)

Opened FRS on this, marked a high priority fault.

ticket 6558

H1 PSL
jenne.driggers@LIGO.ORG - posted 15:14, Monday 24 October 2016 - last comment - 18:36, Monday 24 October 2016(30817)
ISS 2nd loop offset somewhere?

[Jenne, Daniel, Stefan]

There seems to be an offset somewhere in the ISS second loop.  When the 2nd loop comes on, even though it is supposed to be AC coupled, the diffracted power decreases significantly.  This is very repeatable with on/off/on/off tests.  One bad thing about this (other than having electronics with unknown behavior) is that the diffracted power is very low, and can hit the bottom rail, causing lockloss - this happened just after we started trending the diffracted power to see why it was so low.

Daniel made it so the second loop doesn't change the DC level of diffracted power by changing the input offset for the AC coupling servo (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_SERVO_OFFSET from 0.0 to -0.5), the output bias of the AC coupling servo (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_INT_BIAS from 210 to 200), and the input offset of the 2nd loop (H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET from 24.0 to 23.5  - this is just summed in to the error point of the 2nd loop servo).  What we haven't checked yet is if we can increase the laser power with these settings.

Why is there some offset in the ISS 2nd loop that changes the diffracted power??  When did this start happening?

Comments related to this report
jenne.driggers@LIGO.ORG - 16:04, Monday 24 October 2016 (30819)

We were able to increase power to 25W okay, but turning off the AC coupling made things go crazy and we lost lock.  The diffracted power went up, and we lost lock around the time it hit 10%. 

keita.kawabe@LIGO.ORG - 18:36, Monday 24 October 2016 (30829)

The 2nd loop output offset observed by the 1st loop was about 30mV (attached, CH8). With the 2nd ISS gain slider set at 13dB and a fixed gain stage of 30, this corresponds to a 0.2mV offset at the AC coupling point. This offset is relatively small.
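That referred-back offset checks out numerically (a one-line sanity check of the gains quoted above):

```python
# Refer the ~30 mV offset seen by the 1st loop back to the AC-coupling
# point: divide by the 13 dB slider gain and the fixed gain stage of 30.
slider_gain = 10 ** (13 / 20)                        # 13 dB as an amplitude ratio, ~4.47
offset_at_ac_coupling = 0.030 / (slider_gain * 30)   # volts; ~0.22 mV, i.e. ~0.2 mV
```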

One thing that has happened in the past two weeks or so is that the power received by the 1st loop sensor (PDA) was cut by about half (second attachment). This was caused by moving the PD from its old position to its new one. 

Since the sensing gain of the 1st loop was reduced by a factor of two, seen from the 2nd loop the 1st loop is twice as efficient an actuator. Apparently the second loop gain slider was not changed (the slider is still at 13dB), so even if the same offset was there before, the effect was a factor of two smaller before.

Another thing which is kind of far-fetched is that I switched off the DBB crate completely and we know that opening/closing the DBB and frontend/200W shutters caused some offset change in the 2nd loop board.

Images attached to this comment
H1 CAL (CAL)
evan.goetz@LIGO.ORG - posted 13:38, Monday 24 October 2016 - last comment - 10:25, Wednesday 26 October 2016(30811)
Amplitude of 331.9 Hz Pcal line reduced
Since the noise of the detector has improved around the 331.9 Hz Pcal injection frequency, we can reduce the amplitude of the injection (current setting 9000 cts for both sine and cosine). I have reverted changes that increased the amplitude of this line (see LHO aLOG 30476). The new amplitude setting is 2900 (for both sine and cosine amplitudes), which is the same as it was before increasing the injection amplitude.

This also brings the total injections to Pcal Y below the threshold (see LHO aLOG 30802). The threshold is 44,000 counts. The current total injection is now 38650.0 counts.
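A sketch of the threshold bookkeeping, assuming (as the counts in the log suggest) that the relevant limit is the linear sum of per-line amplitudes; the line table below is a placeholder containing only the reverted 331.9 Hz setting, not the actual H1 Pcal Y excitation list:

```python
# Pcal Y excitation amplitudes in DAC counts. Only the reverted 331.9 Hz
# value is from the log; any other lines would be added to this dict.
lines = {
    331.9: 2900.0,  # reverted amplitude (sine and cosine each 2900 cts)
}
THRESHOLD = 44000.0  # counts

def total_injection(amplitudes):
    """Total drive, assumed to be the linear sum of per-line amplitudes."""
    return sum(amplitudes.values())

ok = total_injection(lines) < THRESHOLD
```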

Screenshot of excitation settings attached.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:45, Monday 24 October 2016 (30812)
Note to self: check the front-end calculations of the uncertainty and coherence of these lines before and after this change after the IFO reverted back to 25 [W] input power. 
Example checks: 
   - do the calculations show the expected decrease in coherence / increase in uncertainty? 
   - how much was the uncertainty / coherence when the SNR was so high? 
   - do we like that level of uncertainty? did it reveal more real optical parameter changes instead of noise?

evan.goetz@LIGO.ORG - 10:25, Wednesday 26 October 2016 (30889)
Delayed update, these changes were accepted in the SDF today (Oct. 26, 2016, ~10:20 PDT).
LHO VE
kyle.ryan@LIGO.ORG - posted 13:27, Monday 24 October 2016 (30807)
~1015 hrs. local -> Checked on X-end RGA bake -> OK
Bake exercise had been scheduled to end today but will now be extended, as it doesn't seem to be limiting others' work.  "Hotter, longer" is the game here.
H1 DetChar (ISC)
young-min.kim@LIGO.ORG - posted 12:42, Monday 24 October 2016 - last comment - 13:27, Monday 24 October 2016(30800)
BruCo scan around excess noises

I ran BruCo at two times around the excess noise, at Andy's suggestion.

Oct 24, 12:40:00 UTC : https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161348017-600/

Oct 24, 15:45:00 UTC : https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161359117-600/

 

The first time is right after the range drop which Stefan mentioned (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30790).

For the comparison, the bruco scan on nominal state around 70Mpc is here (https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161342497-600/)

 

It's not easy for me to find a specific channel to look at, so any comments/suggestions on what I should look at and/or any further analysis are welcome.

Comments related to this report
sheila.dwyer@LIGO.ORG - 13:26, Monday 24 October 2016 (30806)

Thanks-

This shows that we have ASC noise below 30 Hz, and that perhaps the A2L for ITMY was not tuned well at the end of the lock: ITMY_L3_ISCINF_P

SRCL noise is high from 15-50 Hz; we will attempt to make a better feedforward filter for this soon. (This is also the conclusion of some quick noise injections this morning.)

PRCL coherence is high both where the SRCL coherence is high and at the jitter peaks, which could be coupling through SRCL or frequency noise lock point errors.

PSL channels that have good whitening show coherence around our high frequency lump.  (OSC PD INT ISS PDB)

There are a few channels that I think could be added to the excluded list:

SUS-ETMY_L2_FASTIMON_LL_OUT_DQ

OMC-PI_DCPD_64KHZ_AHF_DQ

OMC-DCPD_NULL_OUT_DQ

gabriele.vajente@LIGO.ORG - 13:27, Monday 24 October 2016 (30808)

Looking at the frequency range of interest (mainly 50 - 200 Hz), there isn't any coherence with significant channels. This is not unexpected if the noise is due to scattering, since it would be a highly nonlinear coupling, and thus not seen with a coherence analysis.
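A toy illustration of that point, assuming scipy is available: a linear coupling between two channels shows up immediately in coherence, while a quadratic (scattering-like) coupling of the same witness produces essentially none.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1024.0
x = rng.standard_normal(1 << 15)                  # witness channel (white noise)
linear = x + 0.1 * rng.standard_normal(x.size)    # linear coupling into "DARM"
quadratic = x**2 - 1.0                            # nonlinear, scattering-like coupling

f, coh_lin = coherence(x, linear, fs=fs, nperseg=256)
_, coh_quad = coherence(x, quadratic, fs=fs, nperseg=256)
# coh_lin is near 1 broadband; coh_quad sits at the estimator noise floor.
```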

H1 ISC
stefan.ballmer@LIGO.ORG - posted 10:49, Monday 24 October 2016 - last comment - 14:08, Tuesday 25 October 2016(30790)
Range drop this morning as a hint for scattering?

At 5:20am local time we saw a significant range drop (from about 70Mpc to 60Mpc) that seems to be due to a significant increase of the line structure in the bucket that always lingers around the noise floor.

Attached are two spectra - 1h12min apart (from 11:18:00 and 12:20:00 UTC on 2016/10/24), showing that structure clearly.

Plot two shows the seismic BLRMS from the three LEAs - the corner shows the clearest increase. We are now chasing any particularly bad correlations around 12:20 this morning in the hope that it will give us a hint where this scatter is from.

Images attached to this report
Comments related to this report
young-min.kim@LIGO.ORG - 11:12, Monday 24 October 2016 (30792)

Per your request, I ran BruCo at those times. The results are as follows:

bad time (12:20 UTC) : https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161346817-600/

good reference(11:08 UTC): https://ldas-jobs.ligo-wa.caltech.edu/~youngmin/BruCo/PRE-ER10/H1/Oct24/H1-1161342497-600/

These could give you a hint about the range drop.

stefan.ballmer@LIGO.ORG - 11:23, Monday 24 October 2016 (30793)

Here is a plot of the auxiliary loops, again comparing good vs bad.

Note the two broad noise humps around 11.8Hz and 23.6Hz. They both increased at the bad time compared to the good time.

Interestingly, the peaks showing up in the DARM spectrum are the 4th, 5th, and so on up to the 12th harmonics of that 11.8-ish Hz.

It all smells to me like some form of scatter in the input chain.
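Listing those harmonics out confirms they land squarely in DARM's most sensitive band (a trivial check, nothing more):

```python
fundamental = 11.8  # Hz, the broad hump seen in the auxiliary loops
harmonics = [n * fundamental for n in range(4, 13)]  # 4th through 12th
# 47.2, 59.0, ..., 141.6 Hz - all inside the ~50-150 Hz "bucket"
```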

Images attached to this comment
jenne.driggers@LIGO.ORG - 11:38, Monday 24 October 2016 (30794)

The IMs do not change their motion between Stefan's good time (11:08 UTC) and bad time (12:20 UTC).  But, the RMs, particularly RM2, see elevated motion, almost a factor of 2 more motion between 8Hz - 15Hz.

First screenshot is IMs, second is RMs.  In both, the references are the good 11:08 time, and the currents are the bad 12:20 time.

Stefan and TeamSEI are looking at the HEPI and ISI motion in the input HAMs right now.

EDIT:  As one would expect, the REFL diodes see an increase in jitter at these same frequencies, predominantly in pitch.  See 3rd attachment.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 12:14, Monday 24 October 2016 (30797)DetChar, ISC, SUS
I quickly grabbed a time during O1 when this type of noise was happening, and it also corresponds to elevated motion around 6 Hz in RM1 and RM2. Attached are a spectrogram of DARM, and the pitch and yaw of RM2 at the time compared to a reference.

There is a vertical mode of the RMs at 6.1 Hz (that's the LLO value, couldn't find it for LHO). Maybe those are bouncing more, and twice that is what's showing up in PRCL?

Images attached to this comment
norna.robertson@LIGO.ORG - 13:06, Monday 24 October 2016 (30803)
There should not be any ~6 Hz mode from the RM suspensions (HSTS or HLTS), so I am puzzled what this is. 
For a list of expected resonant frequencies for HSTS and HLTS see links from this page

https://awiki.ligo-wa.caltech.edu/aLIGO/Resonances

jeffrey.kissel@LIGO.ORG - 13:32, Monday 24 October 2016 (30809)DetChar, SUS
@Norna: the RMs, for "REFL Mirrors" are HAM Tip-Tilt Suspensions, or HTTS (see, e.g. G1200071). These, indeed, have been modeled to have their highest (and only) vertical mode at 6.1 Hz (see HTTS Model on the aWiki).

I can confirm there is no data committed to the SUS repo on the measured vertical mode frequencies of these not-officially-SUS-group suspensions at H1. Apologies! Remember, these suspensions don't have transverse / vertical / roll sensors or actuators, so one has to rely on dirt coupling showing up in the ASDs of the longitudinal / pitch / yaw sensors. 

We'll grab some free-swinging ASDs during tomorrow's maintenance period.
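A sketch of how such a free-swing spectrum could reveal the mode via dirt coupling: estimate the PSD of a pitch sensor and read off the peak. The data below is synthesized (a 6.1 Hz line against a white floor), not real H1 channels, and scipy is assumed available.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 256.0
t = np.arange(0, 256, 1 / fs)
# Synthetic "pitch sensor" trace: dirt coupling of a 6.1 Hz vertical
# mode ringing against a white sensor-noise floor (illustrative only).
trace = 0.05 * rng.standard_normal(t.size) + 0.01 * np.sin(2 * np.pi * 6.1 * t)

f, psd = welch(trace, fs=fs, nperseg=4096)
peak_freq = f[np.argmax(psd)]  # should recover ~6.1 Hz
```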
jim.warner@LIGO.ORG - 13:59, Monday 24 October 2016 (30813)SEI

Stefan has had Hugh and me looking at SEI coupling to PRCL over this period, and so far I haven't found anything, but HAM1 HEPI is coherent with the RM damp channels and RM2 shows some coherence to CAL_DELTAL around 10 Hz.  The attached plot shows coherence from RM2_P to HEPI Z L4Cs (blue), RM2_P to CAL_PRCL (brown), and RM2_P to CAL_DELTAL (pink). The HAM1_Z to PRCL is similar to the RM2_P to CAL_PRCL, so I didn't include it. HAM1 X and RY showed less coherence, and X was at lower frequency. There are some things we can do to improve the HAM1 motion if it's deemed necessary, like increasing the gain on the Z isolation loops, but there's not a lot of extra margin there.

Images attached to this comment
hugh.radkins@LIGO.ORG - 15:15, Monday 24 October 2016 (30816)

Here are ASDs of the HAM3 HEPI L4Cs (~in-line dofs: RY RZ & X) and the CAL-CS_PRCL_DQ.  The HAM2 and HAM1 HEPI channels would be assessed the same way:  The increase in motion seen on the HAM HEPIs is much broader than that seen on the PRC signal.  Also, none of these inertial sensor channels see any broadband coherence with the PRC, example also attached.

Images attached to this comment
betsy.weaver@LIGO.ORG - 14:08, Tuesday 25 October 2016 (30856)

Free-swing PSDs of the RMs and OM are in alog 30852.
