H1 DAQ
daniel.sigg@LIGO.ORG - posted 09:38, Tuesday 25 October 2016 (30842)
Updated TwinCAT code

This update supports (WP 6259):

Images attached to this report
H1 DetChar
scott.coughlin@LIGO.ORG - posted 07:43, Tuesday 25 October 2016 - last comment - 07:43, Tuesday 25 October 2016(30804)
distribution of scratchy (also called Blue Mountains) noise in O1
Distribution of the hours at which scratchy glitches occurred, according to the machine-learning output from GravitySpy. In addition, a histogram of the amount of O1 time spent in analysis-ready mode is provided. I have uploaded omega scans and FFT spectrograms showing what Scratchy glitches looked like in O1.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:33, Monday 24 October 2016 (30810)
For those of us who haven't been on DetChar calls to have heard this latest DetChar nickname... "Scratchy glitches?"
joshua.smith@LIGO.ORG - 15:10, Monday 24 October 2016 (30814)DetChar

Hi Jeff, 

Scotty's comment above refers to Andy's comment to the range drop alog 30797 (see attachment here and compare to Andy's spectrogram). We're trying to help figure out its cause. It's a good lead that they seem to be related to RM1 and RM2 motion. 

"Scratchy" is the name used in GravitySpy for these glitches. They are called that because they sound like scratches in audio https://wiki.ligo.org/DetChar/InstrumentSounds . In FFT they look like mountains, or if you look closer, like series of wavy lines. They were one of the most numerous types of H1 glitches in O1. In DetChar we also once called them "Blue mountains." Confusing, I know. But there is a DCC entry disambiguating (in this case equating) scratchy and blue mountain https://dcc.ligo.org/LIGO-G1601301 and a further entry listing all of the major glitch types https://dcc.ligo.org/G1500642 and the notes on the GravitySpy page. 

Images attached to this comment
H1 ISC
stefan.ballmer@LIGO.ORG - posted 02:33, Tuesday 25 October 2016 (30841)
PCAL readback indicates calibration at 25 W is 11% off - ~80 Mpc once corrected (and if PCAL is correct)

We repeatedly noticed that the current front-end calibration is slightly off - tonight all cal lines (low and high freq) in DARM were 11% above the PCAL read-back.

If I take the PCAL readback as reference and scale down the calibrated spectrum (as attached), I get about 80 Mpc.
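
For reference, here is a minimal sketch of the scaling implied above; the starting range value is only an assumed example, since this entry quotes only the corrected ~80 Mpc:

  # Hedged sketch: if the calibrated DARM spectrum reads 11% high relative to
  # Pcal, the strain amplitude (and the inspiral range, which scales linearly
  # with it) is over-reported by the same factor.
  cal_error = 1.11          # DARM cal lines were 11% above the Pcal readback
  reported_range = 89.0     # Mpc -- assumed example value, not from this entry
  corrected_range = reported_range / cal_error
  print("corrected range ~ %.0f Mpc" % corrected_range)   # ~80 Mpc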

On the other hand, Evan Goetz reported that he thinks the PCAL is clipping (30827). We'll see whether this 11% is real...

Images attached to this report
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 01:36, Tuesday 25 October 2016 (30840)
Removing PR2 feedforward from MICH length

[Stefan, Jenne]

We removed the PR2 length feedforward that removes the MICH signal in PRCL.  We did this by ramping the PR2 LOCK_L gain at the lowest stage to 0.  We didn't see any change in DARM.  We also tried increasing the gain by a factor of 3.  Again, we didn't see any change in DARM. 
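
For context, a minimal sketch of how such a gain ramp is typically done through ezca's filter-module helper; it assumes the guardian-provided ezca object, and the module name SUS-PR2_M3_LOCK_L is an assumption, not taken from the actual code:

  # Sketch only: ramp the lowest-stage PR2 LOCK_L gain to zero over 5 s to
  # disable the feedforward, then ramp back (or to 3x nominal) for the second test.
  pr2_lock_l = ezca.get_LIGOFilter('SUS-PR2_M3_LOCK_L')
  pr2_lock_l.ramp_gain(0.0, ramp_time=5, wait=True)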

However, since we discovered and mitigated some scattering effects earlier tonight (but after this PR2 test), we should try this again. 

H1 ISC
stefan.ballmer@LIGO.ORG - posted 01:15, Tuesday 25 October 2016 (30839)
Fixed up PRMI / DRMI locking

Jenne, Stefan

PRMI and DRMI lock acquisition was very sloppy the last few days, so we actually looked at the fringes, gains, trigger thresholds, etc. A number of tweaks were required:

REFLAIR_A_RF45 PHASE was changed from 142 deg to 157 deg to minimize the I signal bleeding through.

PRMI acquisition gains: PRCL 16, MICH 2.8

PRMI locked gains: PRCL 8 (nominal UGF 40Hz), MICH 2.8 (nominal UGF 10Hz)

DRMI locked gains: PRCL 8 (nominal UGF 40Hz), MICH 1.4 (nominal UGF 10Hz), SRCL -45 (nominal UGF 70Hz)

DRMI acquisition gains: same as PRMI: PRCL 16, MICH 2.8, and SRCL -30
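
For readers unfamiliar with where these numbers live, here is a hedged sketch of how such gain sets can be written from a guardian state; the channel names follow the usual H1:LSC-<DOF>_GAIN convention and are assumptions, not copied from ISC_LOCK:

  # Sketch only; assumes the guardian-provided `ezca` object.
  PRMI_ACQUIRE = {'LSC-PRCL_GAIN': 16, 'LSC-MICH_GAIN': 2.8}
  DRMI_LOCKED  = {'LSC-PRCL_GAIN': 8, 'LSC-MICH_GAIN': 1.4, 'LSC-SRCL_GAIN': -45}

  def apply_gains(gains):
      # Write a dictionary of gain settings through ezca.
      for chan, value in gains.items():
          ezca[chan] = value

  apply_gains(PRMI_ACQUIRE)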

LHO General
thomas.shaffer@LIGO.ORG - posted 00:00, Tuesday 25 October 2016 - last comment - 16:17, Tuesday 25 October 2016(30837)
Ops Eve Shift Summary

TITLE: 10/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Locking DRMI/PRMI was not easy; it required large adjustments and waiting a good amount of time. The microseism is also getting pretty high, so I tried the WINDY_USEISM state, but brought it back to WINDY because I couldn't tell which was better. Aside from that, the commissioners are working.


Comments related to this report
jim.warner@LIGO.ORG - 12:39, Tuesday 25 October 2016 (30848)SEI

I should probably just remove or rename the WINDY_USEISM state. It may have a use, but I think people are taking the configuration guide on the SEI_CONF screen too literally. I'm reluctant to try to make the guide more accurate because I'm not a cubist. The WINDY_USEISM state should be thought of as a more wind resistant state than the high microseism configuration we used during O1 (USEISM in SEI_CONF). Anyone remember how hard locking was with 15mph winds and high microseism during our first observing run?

We are getting into new territory with the current configuration (implemented during the windy, low-microseism summer), but looking at the locks last night, it looks like the WINDY configuration is still what we want to use. The attached plots show the ISC_LOCK state, the SEI_CONF state (40 is WINDY, 35 is WINDY_USEISM), the ETMX Z 30-100 mHz STS BLRMS (in nm, so 1000 = 1 micron), and the corner station wind speed. The fifth plot shows all four channels together: red is the ISC state, blue the SEI_CONF state, green the STS BLRMS, and black the wind. It's kind of a mess, but it gives a better feel for the timeline.

Microseism was high over this entire period (around 1 micron RMS) and the wind was variable, so this was a good time to test. I think the takeaway is that the WINDY state was sufficient to handle the high microseism for the two NLN locks over this stretch, and is very probably more robust against the wind than the WINDY_USEISM state.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 16:17, Tuesday 25 October 2016 (30868)

This is great to know. I was pretty sure that you said WINDY is good for almost every situation, but I thought it worth a try.

Tagging OpsInfo so we can get the latest

H1 AOS
robert.schofield@LIGO.ORG - posted 23:14, Monday 24 October 2016 - last comment - 23:14, Monday 24 October 2016(30835)
Shaking ISCT1 produced noise, beam diverter now closed

This morning a broad increase in ground motion around 12 Hz reduced the range. ISCT1 has a table resonance there, so I shook it and noticed that shaking at several times the normal level produced significant noise (see attached figure). We switched to REFL B 9I and 45I so that we could close the beam diverter. The coupling went away.

Robert Stefan Jenne Evan

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 23:11, Monday 24 October 2016 (30836)

The Guardian now again uses the REFL WFS for PRC2 by default, and closes the beam diverters. While this didn't change the range much, it seems to have improved the non-stationarity in that frequency band. One down, more to go.

H1 ISC (CDS, GRD, ISC)
jenne.driggers@LIGO.ORG - posted 20:32, Monday 24 October 2016 - last comment - 09:10, Saturday 29 October 2016(30831)
cdsutils avg giving weird results in guardian??

The result of cdsutils.avg() in guardian sometimes gives us very weird values. 

We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM.  At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down).  Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.

The next lock, it correctly got a value for the TransX (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write the exact same value that was already there (epics writes aren't logged if they don't change the value). 

So, why did 3 different cdsutils averages all return a value of 751.242126465?

This isn't the first time that this has happened.  Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week. 

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 21:01, Monday 24 October 2016 (30832)

This is definitely a very strange behavior.  I have no idea why that would happen.

As with most things guardian, it's good to try to get independent verification of the effect.  If you make the same cdsutils avg calls from the command line do you get similarly strange results?  Could the NDS server be getting into a weird state?
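
For the record, an independent check along these lines could look like the sketch below; the QPD SUM channel names are assumptions (based on the offset channels named elsewhere in this thread), and cdsutils.avg is assumed to take a duration in seconds followed by the channel:

  # Sketch: average the trans QPD sums from an interactive python session,
  # bypassing guardian entirely, and compare against what guardian wrote.
  import cdsutils

  for chan in ['H1:LSC-TR_X_QPD_B_SUM_INMON',
               'H1:LSC-TR_Y_QPD_B_SUM_INMON']:
      print(chan, cdsutils.avg(10, chan))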

jenne.driggers@LIGO.ORG - 21:11, Monday 24 October 2016 (30833)

On the one hand, it works just fine right now in a guardian shell.  On the other hand, it also worked fine for the latest acquisition.  So, no conclusion at this time.

jenne.driggers@LIGO.ORG - 01:03, Tuesday 25 October 2016 (30838)OpsInfo

This happened again, but this time the numbers were not identical.  I have added a check to the Prep_TR_CARM state: if the absolute value of either offset is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), then notify and don't move on. 

Operators:  If you see the notification Check Trans QPD offsets! then look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET.  If you do an ezca read on that number and it's giant, you can "cheat" and try +0.3 for X, and +0.2 for Y, then go back to trying to find IR.
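
A minimal sketch of the kind of check described above (illustrative only, not the actual ISC_LOCK code; `ezca` and `notify` are the usual guardian-provided helpers, and the INMON channel names are assumptions):

  # Sketch: only write the trans QPD offsets if the averages look sane.
  import cdsutils

  MAX_SANE_OFFSET = 5.0   # normal values are ~0.2-0.3; bad ones were >100

  for arm in ['X', 'Y']:
      offset = -1.0 * cdsutils.avg(10, 'H1:LSC-TR_%s_QPD_B_SUM_INMON' % arm)
      if abs(offset) > MAX_SANE_OFFSET:
          notify('Check Trans QPD offsets!')
          break   # don't write a bogus offset; don't move on
      ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm] = offset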

sheila.dwyer@LIGO.ORG - 21:10, Friday 28 October 2016 (30976)OpsInfo

This happened again to Jim and Cheryl today and caused multiple locklosses.

I've commented out the averaging of the offsets in the guardian. 

We used to not do this averaging and just relied on the dark offsets not changing.  Maybe we could go back to that.  


For operators, until this is fixed you might need to set these by hand:

If you are having trouble with FIND IR, this is something to check.  From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen.  Then click on the R INput button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets.  (If there is no IR in the arms, the offset should be about -1*INMON.)
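
Equivalently, from a python prompt, the manual fix amounts to the sketch below; the INMON channel names and the ezca prefix argument are assumptions, and it is only valid with no IR resonating in the arms:

  # Sketch: set each dark offset to -1 * INMON, as described above.
  from ezca import Ezca
  ezca = Ezca('H1:')   # inside guardian this object already exists

  for arm in ['X', 'Y']:
      inmon = ezca['LSC-TR_%s_QPD_B_SUM_INMON' % arm]
      ezca['LSC-TR_%s_QPD_B_SUM_OFFSET' % arm] = -1.0 * inmon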

Images attached to this comment
david.barker@LIGO.ORG - 09:10, Saturday 29 October 2016 (30994)

Opened as high priority fault in FRS:

ticket 6559

H1 ISC
evan.hall@LIGO.ORG - posted 18:48, Monday 24 October 2016 (30830)
9 MHz reflected power now scales sensibly with input power, but mysteries remain

I drove a 222.3 Hz line in the 9 MHz RFAM stabilization error point (giving 3.4×10⁻⁴ RAN rms) and then watched the resulting lines in REFL LF and ASC NSUM as we powered up from 2 W to 25 W. [Note that the DC readback of the RFAM servo really does give us a RAN, not a RIN. This can be verified by noting that the dc value changes by a factor of 2 when the rf power is reduced by 6 dB.]

At 2 W, we have 0.013 W of 9 MHz power reflected from the PRM and 0.0007 W of 9 MHz power coming out of the AS port.

At 25 W, we have 0.11 W of 9 MHz power reflected from the PRM and 0.034 W of 9 MHz power coming out of the AS port.

The lock stretch happens around 2016-10-25 00:21:00 Z if anyone wants to look at it.

The values for the reflected PRM power still seem to imply that the 9 MHz sideband is either not strongly overcoupled to the PRC, or the modulation depth is smaller than the old PSL OSA measurement (0.22 rad). For 0.22 rad of modulation depth and strong overcoupling, we'd expect something like 0.045 W reflected off the PRM at 2 W of input power. Also, the amount of 9 MHz leaking out the AS port evidently does not scale linearly with the input power.
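
As a quick check of the "expected ~0.045 W" figure, here is a worked sketch assuming a simple Bessel-function sideband expansion, total reflection of the 9 MHz sidebands from a strongly overcoupled PRC, and no losses:

  # Power in the first-order 9 MHz sidebands for modulation depth 0.22 rad.
  from scipy.special import j1   # Bessel function of the first kind, order 1

  P_in  = 2.0      # W incident on the PRM
  Gamma = 0.22     # rad, old PSL OSA measurement

  P_sidebands = 2 * j1(Gamma)**2 * P_in   # upper plus lower sideband
  print("expected reflected 9 MHz power: %.3f W" % P_sidebands)
  # ~0.05 W, in line with the ~0.045 W quoted above, versus the 0.013 W measured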

H1 CAL (CAL, DetChar)
evan.goetz@LIGO.ORG - posted 18:17, Monday 24 October 2016 - last comment - 10:23, Tuesday 25 October 2016(30827)
Pcal Y laser likely clipping
Summary:
The Pcal Y laser beam is likely clipping somewhere in the beam path. This will need to be addressed ASAP. In the future we need to keep a close eye on the Pcal summary spectra on the DetChar web pages.

Details:
Jeff K. and I noticed that the spectrum for the Y-end Pcal seemed particularly noisy. I plotted some TX and RX PD channels at different times since Oct. 11. On several days since Oct. 11, the Pcal team has been to EY to perform Pcal maintenance. One of those times (I think Oct. 18, but we don't have an aLOG reference for this), we realigned the beams on the test mass. Potentially, this change caused some clipping.

Attached are the spectra for TX and RX. Notice that there are no dramatic changes in the TX spectra. In the RX spectra, there is structure becoming more apparent with time in the 15-30 Hz and 90-140 Hz regions. Also, various other peaks are growing.

Also attached is a minute trend of the TX and RX PD mean values. On Oct 18, after realignment (the step down), the RX PD starts to drift downward while the TX PD power holds steady. The decrease in RX PD is nearly 10% from the start of the realignment. 

The Pcal team should address this ASAP, hopefully during tomorrow's maintenance time.
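
For anyone wanting to reproduce the trend, a hedged sketch is below; the Pcal PD channel names, the approximate GPS times, and the cdsutils.getdata call are all assumptions:

  # Sketch: compare Pcal EY TX/RX PD means before and after the Oct 18 realignment.
  import cdsutils

  chans = ['H1:CAL-PCALY_TX_PD_OUT_DQ', 'H1:CAL-PCALY_RX_PD_OUT_DQ']

  def pd_means(gps_start, dur=600):
      tx, rx = cdsutils.getdata(chans, dur, gps_start)
      return tx.data.mean(), rx.data.mean()

  before = pd_means(1160179217)   # ~Oct 11 2016 00:00 UTC
  after  = pd_means(1161345617)   # ~Oct 24 2016 12:00 UTC
  print('RX/TX before: %.3f  after: %.3f' %
        (before[1] / before[0], after[1] / after[0]))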

Images attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 10:23, Tuesday 25 October 2016 (30845)CAL

Evan, it seems they are ~14% off. On top of the ~10% drift we see, there is also a ~4% difference between the RX and TX PDs immediately after the alignment. The alignment itself seems to have ended up with some clipping.

H1 DAQ (CDS)
david.barker@LIGO.ORG - posted 17:11, Monday 24 October 2016 (30825)
fw0 restart caused fw1 restart on Saturday 22nd October

Here is the sequence of events, all within the 18:23 minute PDT on Saturday 22 October:

18:23:10 - 18:23:24 h1fw0 asked for data retransmissions
18:23:24 h1fw0 stopped running
18:23:26 - 18:23:36 h1fw1 asked for data retransmissions
18:23:36 h1fw1 stopped running

Having both frame writers down meant we lost three full frames for the GPS times 1161220928, 1161220992, 1161221056

Clearly, fixing the retransmission errors will become a higher priority if they cascade like this rather than occurring at random as they have in the past. Our third frame writer, h1fw2, did not crash and could have been used to fill in the gap if it were connected to LDAS.

LHO VE
chandra.romel@LIGO.ORG - posted 16:47, Monday 24 October 2016 (30824)
CP3 overfill
3:32 pm local

Overfilled CP3 by doubling LLCV to 34%. It took 18 min. to see vapor and tiny amounts of LN2 out of the exhaust line. TCs responded but still did not read out high negative numbers like I'd expect, or like I saw with a 1/2 turn on the bypass fill valve. So I left LLCV at 35% for an additional 7 min. but did not see a drastic change in temps. Never saw much flow out of the bypass exhaust pipe. 

Bumped LLCV nominal from 17% to 18% open.
Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:44, Monday 24 October 2016 - last comment - 21:12, Monday 24 October 2016(30823)
Replaced Batteries for UPSes on All Vacuum Racks

Removed and replaced battery packs for all vacuum rack UPSes (Ends/Mids/Corner station).  No glitches noted on racks.

Work done under WP#6270.

Comments related to this report
kyle.ryan@LIGO.ORG - 21:12, Monday 24 October 2016 (30834)
If FAMIS were allowed to digest this activity, it could expect to become more "regular" (I'm laughing at my own jokes!)
H1 SUS (CDS, DAQ, DetChar, ISC, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:36, Monday 24 October 2016 - last comment - 17:15, Monday 24 October 2016(30821)
Front-End Model Prep: All SUS Science Frame Channels Changed to Match T1600432
J. Kissel
Integration Issue 6463
ECR: E1600316
WP: 6263

P. Fritschel, S. Aston, and I have revamped the SUS channel list that is stored in the frames in order to
(1) Reduce the overall channel frame rate in light of the new scheme of storing only one science frame (no commissioning frames), and
(2) because the list was an unorganized hodgepodge of inconsistent ideas, accumulated over 6 years, of what to store.
The new channel list (and the rationale for each channel and its rate) can be found in T1600432, and will not change throughout O2.

I've spent the day modifying front-end models such that they all conform to this new list. This impacts *every* SUS model, and we'll install the changes tomorrow (including the removal of ISIWIT channels, prepped on Friday; LHO aLOG 30728).

For the SUS models used in any control system, the channel list was changed in the respective suspension type's library part,
  Sending        BSFM_MASTER.mdl
  Sending        HLTS_MASTER.mdl
  Sending        HSSS_MASTER.mdl
  Sending        HSTS_MASTER.mdl
  Sending        MC_MASTER.mdl
  Sending        OMCS_MASTER.mdl
  Sending        QUAD_ITM_MASTER.mdl
  Sending        QUAD_MASTER.mdl
  Sending        RC_MASTER.mdl
  Sending        TMTS_MASTER.mdl
  Transmitting file data ..........
  Committed revision 14509.
and committed to the userapps repo.

For monitor models, the changes are done on the secondary level, but that layer is not made of library parts, so they have to be individually changed per suspension. These models, 
  Sending        models/h1susauxb123.mdl
  Sending        models/h1susauxex.mdl
  Sending        models/h1susauxey.mdl
  Sending        models/h1susauxh2.mdl
  Sending        models/h1susauxh34.mdl
  Sending        models/h1susauxh56.mdl
  Transmitting file data ......
  Committed revision 14508.
are also now committed to the userapps repo.

We're ready for a rowdy maintenance day tomorrow -- hold on to your butts!
Comments related to this report
david.barker@LIGO.ORG - 17:15, Monday 24 October 2016 (30826)

Note that to permit the sus-aux channels to be acquired at 4 kHz, the models h1susauxh2, h1susauxh34 and h1susauxh56 will be modified from 2K models to 4K models as part of tomorrow's work.

H1 GRD
sheila.dwyer@LIGO.ORG - posted 15:15, Monday 24 October 2016 - last comment - 09:06, Saturday 29 October 2016(30815)
ezca connection error

Ed, Sheila

Are ezca connection errors becoming more frequent?  Ed has had two in the last hour or so, one of which contributed to a lockloss (ISC_DRMI).

The first one was from ISC_LOCK, the screenshot is attached. 

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 18:15, Monday 24 October 2016 (30828)

Happened again, but for a different channel: H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON (Sheila's post was for H1:LSC-PD_DOF_MTRX_7_4). I trended and found data for both of those channels at the connection error times, and during the second error I could also caget the channel while ISC_LOCK still could not connect. I'll keep trying to dig and see what I find.

Relevant ISC_LOCK log:

2016-10-25_00:25:57.034950Z ISC_LOCK [COIL_DRIVERS.enter]
2016-10-25_00:26:09.444680Z Traceback (most recent call last):
2016-10-25_00:26:09.444730Z   File "_ctypes/callbacks.c", line 314, in 'calling callback function'
2016-10-25_00:26:12.128960Z ISC_LOCK [COIL_DRIVERS.main] USERMSG 0: EZCA CONNECTION ERROR: Could not connect to channel (timeout=2s): H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON
2016-10-25_00:26:12.129190Z   File "/ligo/apps/linux-x86_64/epics-3.14.12.2_long-ubuntu12/pyext/pyepics/lib/python2.6/site-packages/epics/ca.py", line 465, in _onConnectionEvent
2016-10-25_00:26:12.131850Z     if int(ichid) == int(args.chid):
2016-10-25_00:26:12.132700Z TypeError: int() argument must be a string or a number, not 'NoneType'
2016-10-25_00:26:12.162700Z ISC_LOCK EZCA CONNECTION ERROR. attempting to reestablish...
2016-10-25_00:26:12.175240Z ISC_LOCK CERROR: State method raised an EzcaConnectionError exception.
2016-10-25_00:26:12.175450Z ISC_LOCK CERROR: Current state method will be rerun until the connection error clears.
2016-10-25_00:26:12.175630Z ISC_LOCK CERROR: If CERROR does not clear, try setting OP:STOP to kill worker, followed by OP:EXEC to resume.
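
As a cross-check when this happens, the same channel can be probed outside guardian with pyepics directly (which is what ezca is built on); a minimal sketch:

  # Sketch: independent connectivity check for a channel ezca claims it
  # cannot connect to.
  import epics

  chan = 'H1:SUS-ITMX_L2_DAMP_MODE2_RMSLP_LOG10_OUTMON'
  pv = epics.PV(chan)
  ok = pv.wait_for_connection(timeout=2.0)
  print(chan, 'connected:', ok, 'value:', pv.get() if ok else None)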

sheila.dwyer@LIGO.ORG - 21:12, Friday 28 October 2016 (30977)

It happened again just now. 

Images attached to this comment
david.barker@LIGO.ORG - 09:06, Saturday 29 October 2016 (30993)

Opened FRS on this, marked a high priority fault.

ticket 6558

H1 PSL
jenne.driggers@LIGO.ORG - posted 15:14, Monday 24 October 2016 - last comment - 18:36, Monday 24 October 2016(30817)
ISS 2nd loop offset somewhere?

[Jenne, Daniel, Stefan]

There seems to be an offset somewhere in the ISS second loop.  When the 2nd loop comes on, even though it is supposed to be AC coupled, the diffracted power decreases significantly.  This is very repeatable with on/off/on/off tests.  One bad thing about this (other than having electronics with unknown behavior) is that the diffracted power is very low, and can hit the bottom rail, causing lockloss - this happened just after we started trending the diffracted power to see why it was so low.

Daniel made it so the second loop doesn't change the DC level of diffracted power by changing the input offset for the AC coupling servo (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_SERVO_OFFSET from 0.0 to -0.5), the output bias of the AC coupling servo (H1:PSL-ISS_SECONDLOOP_AC_COUPLING_INT_BIAS from 210 to 200), and the input offset of the 2nd loop (H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET from 24.0 to 23.5 - this is just summed into the error point of the 2nd loop servo).  What we haven't checked yet is whether we can increase the laser power with these settings.
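
For the record, the three settings changes above, written as a hedged ezca sketch (values taken from the text; assumes an `ezca` object with the H1: prefix already exists):

  # Sketch: the ISS second-loop settings changes described above.
  new_settings = {
      'PSL-ISS_SECONDLOOP_AC_COUPLING_SERVO_OFFSET': -0.5,   # was 0.0
      'PSL-ISS_SECONDLOOP_AC_COUPLING_INT_BIAS':     200,    # was 210
      'PSL-ISS_THIRDLOOP_OUTPUT_OFFSET':             23.5,   # was 24.0
  }
  for chan, value in new_settings.items():
      ezca[chan] = value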

Why is there some offset in the ISS 2nd loop that changes the diffracted power??  When did this start happening?

Comments related to this report
jenne.driggers@LIGO.ORG - 16:04, Monday 24 October 2016 (30819)

We were able to increase the power to 25 W okay, but turning off the AC coupling made things go crazy; the diffracted power went up, and we lost lock around the time it hit 10%. 

keita.kawabe@LIGO.ORG - 18:36, Monday 24 October 2016 (30829)

The 2nd loop output offset observed by the 1st loop was about 30 mV (attached, CH8). With the 2nd ISS gain slider set at 13 dB and a fixed gain stage of 30, this corresponds to a 0.2 mV offset at the AC coupling point. This offset is relatively small.
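
The arithmetic behind the 0.2 mV figure, as a quick check (treating 13 dB as a linear voltage gain):

  # Refer the 30 mV output offset back to the AC coupling point.
  offset_at_output_V = 30e-3            # seen by the 1st loop (CH8)
  slider_gain        = 10**(13 / 20.0)  # 13 dB as a voltage gain, ~4.5
  fixed_gain         = 30

  offset_at_input_V = offset_at_output_V / (slider_gain * fixed_gain)
  print('%.2f mV' % (offset_at_input_V * 1e3))   # ~0.22 mV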

One thing that has happened in the past two weeks or so is that the power received by the 1st loop sensor (PDA) was cut by about half (second attachment). This was caused by moving the PD from its old position to its new position.

Since the sensing gain of the 1st loop was reduced by a factor of two, the 1st loop now looks twice as efficient an actuator when seen from the 2nd loop. Apparently the 2nd loop gain slider was not changed (it is still at 13 dB), so even if the same offset was there before, its effect was a factor of two smaller.

Another thing, which is kind of far-fetched: I switched off the DBB crate completely, and we know that opening/closing the DBB and frontend/200W shutters has caused some offset change in the 2nd loop board.

Images attached to this comment