Reports until 13:42, Tuesday 25 October 2016
H1 TCS
betsy.weaver@LIGO.ORG - posted 13:42, Tuesday 25 October 2016 (30852)
RM and OM HTTS vertical modes measured and confirmed to be 6.1Hz

As a follow-on from alog 30790, where the RM and OM vertical modes are of interest, this morning we measured them for LHO - they are in fact at 6.1Hz.  Attached are spectra of these modes for RM1, RM2, OM1, OM2, and OM3.  The spectra are with the HEPI not isolating, while the black ref trace on all plots is from earlier in the morning when the HEPI was isolating.  At a glance, we also note that while the OMs look relatively healthy, there is some fishiness in the RM spectra from the 6.1Hz v-mode up to ~20Hz, seen on a few OSEMs.  OSEM open light voltages look OK, however.

 

Data can be found in

/ligo/svncommon/SusSVN/sus/trunk/HTTS/Common/Data/

2016-10-25_1924_H1SUSOM_PSD.png
2016_10_25_1924_H1SUSOM_PSD.xml
2016-10-25_1924_H1SUSRM_PSD.png
2016_10_25_1924_H1SUSRM_PSD.xml

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 13:41, Tuesday 25 October 2016 (30853)
Serviced Compressors #3, #4, and #5 at X-End Vent/Purge-Air Skid

Compressors #3, #4, and #5 were greased and pressure tested.

Compression test findings: #3 at 120 psi, #4 at 130 psi, and #5 at 110 psi.
All compressors and electrical motors were greased.

All relief valves passed their test.

All compressor assemblies were run tested after service was performed.

Work performed under WP#6277.

H1 SEI
hugh.radkins@LIGO.ORG - posted 13:12, Tuesday 25 October 2016 (30850)
LHO HEPI Fluid System Accumulators' Charge Checked

WP 6267

No accumulator needed charging and differences from June suggest none have leaks.  T1500280 updated.  Famis 4594 closed.

H1 DetChar (SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 12:58, Tuesday 25 October 2016 (30849)
Updated M0 Damping Loop State Comparators to Indicate Goodness -- QUAD ODCs now Green
J. Kissel

While browsing the new MEDM overview screens from Stuart (updated at LHO earlier today, LHO aLOG 30844), I noticed that the ODC lights for the QUADs were red, because the top-mass, main-chain damping loops claimed to be in a bad state. 

The ODC bit indicating the status is informed by a comparison between a "known good state," as defined by EPICS records entered by hand (e.g. H1:SUS-ETMX_M0_DAMP_P_STATE_GOOD), and the current state as reported by the front end (e.g. H1:SUS-ETMX_M0_DAMP_P_STATE_NOW). Both are shown on the right-hand side of the DAMP screen (see second attachment). These "known good state" values were probably not updated when Rana and Evan made changes to the QUAD damping loops back in May of 2016 (LHO aLOG 27464). 

I've now updated the good values, the comparator lights are green, all ODC lights are green, and I've accepted the new good values into the SDF system. 

Lots of redundancy there, but it's just indicative of all three generations of state-definition control that the SUS have seen and that haven't been cleaned up, de-scoped, or standardized.
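For concreteness, the GOOD-vs-NOW comparison can be sketched as below. The channel-name pattern follows the two examples quoted above; the helper function and the DOF list are illustrative, not site code.

```python
# Hypothetical sketch of the ODC damping-state comparator: every DOF's
# current state must match its hand-entered "known good" EPICS record
# for the light to be green.
def damp_state_ok(read, optic="ETMX", dofs=("L", "T", "V", "R", "P", "Y")):
    """True if every damping DOF's STATE_NOW matches its STATE_GOOD record."""
    return all(
        read(f"H1:SUS-{optic}_M0_DAMP_{dof}_STATE_NOW")
        == read(f"H1:SUS-{optic}_M0_DAMP_{dof}_STATE_GOOD")
        for dof in dofs
    )
```

In practice `read` would be an ezca/EPICS get; here it can be any callable mapping channel name to value.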
Images attached to this report
H1 DetChar (DetChar, ISC, PEM, SEI)
andrew.lundgren@LIGO.ORG - posted 12:38, Tuesday 25 October 2016 - last comment - 15:07, Tuesday 25 October 2016(30847)
Jumping line features in DARM and many motion channels
Andy, Jess, Josh, TJ

Following up alog 30790 and comments (range drop may be related to excess RM motion), we've found some very strange motion with apparently quantized jumps in frequency. This motion is seen in accelerometers near HAM1, in the nearby HAM HEPI L4Cs, in the RM mirror motion, and in some REFL and POP DC signals. Some of this motion seems to show up in DARM as well.

The first plot is a zoom in frequency on the kind of motion that we see in these channels. It looks like several lines which sometimes jump suddenly in frequency, then jump to another frequency, then another. It looks like a MIDI music file (or a music box). We've found this in many channels, but the clearest so far is the HAM1 floor accelerometer. It's also in the ISCT1 accelerometer, and the RM OSEMs see it in some degrees of freedom (which is how we first noticed it). The HAM1 and HAM2 HEPI L4Cs see some of it; see the second and third plots. Even the HAM6 accelerometer sees some of it, so it's not local just to HAM1/2 - but we haven't checked exactly how widespread it is. We've also checked, at least in the HEPI, that this was present earlier in the day, and also two weeks prior.

The last plot is DARM, showing that this seems to couple at least in the 10 to 20 Hz region. That could be through the RMs somehow, or maybe through scatter from ISCT1. Since the beam diverter was closed (alog 30835), the next locks can check if this is through ISCT1.

What's causing this motion? It looks really peculiar. It's hard to pick out in just a few minutes of data, because the lines are narrow and don't wander - they jump suddenly. So it may have gone unnoticed before, but it would be nice to understand it even if the coupling to DARM is easily fixed.
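The kind of high-resolution spectrogram that makes these narrow, hopping lines visible can be mimicked on toy data (everything below is synthetic, not IFO data; the sample rate and hop frequencies are made up):

```python
# Toy demonstration: a line that hops 12 -> 14 -> 13 Hz, like the
# "MIDI file" motion described above. Long FFT segments give the fine
# frequency resolution needed to separate narrow lines; heavy overlap
# preserves time resolution so the sudden jumps stand out.
import numpy as np
from scipy import signal

fs = 256.0                                  # assumed sample rate
t = np.arange(0, 64, 1 / fs)
f_hop = np.piecewise(t, [t < 20, (t >= 20) & (t < 40), t >= 40],
                     [12.0, 14.0, 13.0])
phase = 2 * np.pi * np.cumsum(f_hop) / fs   # integrate frequency to phase
rng = np.random.default_rng(0)
x = np.sin(phase) + 0.1 * rng.standard_normal(t.size)

# 16 s segments -> 0.0625 Hz resolution; 2 s hop between segments.
f, tt, Sxx = signal.spectrogram(x, fs, nperseg=4096, noverlap=3584)
```

In a plot of `Sxx`, the ridge sits at 12 Hz, then steps to 14 Hz and 13 Hz with no wandering in between.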
Images attached to this report
Comments related to this report
joshua.smith@LIGO.ORG - 15:07, Tuesday 25 October 2016 (30857)DetChar

This is a movie of me fading in and out between seismic and DARM just to show that the seismic features do weakly show up in DARM in the 10-30Hz range. It's a bit too big to attach to the alog, so here's a link

H1 PSL
daniel.sigg@LIGO.ORG - posted 11:16, Tuesday 25 October 2016 - last comment - 14:00, Tuesday 25 October 2016(30846)
PMC HVMon whitening filter

Jason Daniel (WP 6273)

We added a whitening filter to the PMC HV monitor in the PMC fieldbox, D1001619. The changes are:

This generates a whitening filter with a zero at 1 Hz, a pole at 100 Hz and a DC gain of –1. The inverse of this filter has been added to the corresponding PSL PMC filter bank, so that the slow readbacks are unchanged.

The attached spectrum shows the PZT high voltage, which is 50 times the monitor readback and has an additional pole at 770 Hz formed by the output series resistor; see alog 30729. The noise level of the unwhitened HV monitor was at 0.2 mV/√Hz before the change, see alog 30648. The signal is at least a factor of 10 above ADC noise at all frequencies and is coherent with the temporary channel hooked up to EXTRA_AI_1. The latter is no longer needed.
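A minimal sketch of the filter pair described above (zero at 1 Hz, pole at 100 Hz, DC gain of -1, and its inverse in the filter bank) — the zero/pole/gain values are from this entry, while the scipy modeling is purely illustrative:

```python
# Model the hardware whitening filter and its software inverse, and
# confirm the cascade is unity so the slow readbacks are unchanged.
import numpy as np
from scipy import signal

f = np.logspace(-1, 4, 500)                  # 0.1 Hz .. 10 kHz
w = 2 * np.pi * f

z = [-2 * np.pi * 1.0]                       # zero at 1 Hz
p = [-2 * np.pi * 100.0]                     # pole at 100 Hz
k = -100.0                                   # sets DC gain to -1

_, h_white = signal.freqs_zpk(z, p, k, w)    # hardware whitening
_, h_inv = signal.freqs_zpk(p, z, 1 / k, w)  # software inverse (dewhitening)

# Product is unity at all frequencies; between 1 and 100 Hz the hardware
# filter boosts the monitor signal by up to 40 dB above ADC noise.
assert np.allclose(np.abs(h_white * h_inv), 1.0)
```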

Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 14:00, Tuesday 25 October 2016 (30855)

The two SR560 units which were used for temporarily monitoring the HVMons have been disconnected and brought back to the shop.

H1 SUS (CAL, CDS, DAQ, DCS, DetChar, IOO, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 11:10, Tuesday 25 October 2016 (30844)
SUS Model Updates Complete: ISIWIT Channels Removed, Science Frame Channel List Revamped
J. Kissel

Integration Issues: 6463, 4694
ECRs: E1600316, E1600028
WP: 6263

I've completed the changes begun this past Friday (LHO aLOG 30728) and Monday (LHO aLOG 30821) by installing and restarting all SUS front-end models this morning. Again, changes are 
(1) the removal of the now-redundant ISI_WIT channel path, which projected ISI GS13s to the suspension-point basis. These are now computed elsewhere.
(2) the revamp of the SUS science frame channel storage, updated to match T1600432.
Regarding (1), I've also updated each SUS type's MEDM overview screens to remove these paths (just an SVN update; thanks Stuart!), and I've re-saved and reloaded all SUS safe.snaps in the SDF system to remove these channels, which were registering as no longer found.

We need only a DAQ restart, and then (2) takes effect. 

@Detchar: Please let us know if your down-stream / off-site software has been adversely affected (once maintenance is complete, of course).
H1 CDS
james.batch@LIGO.ORG - posted 09:58, Tuesday 25 October 2016 (30843)
NDS2 client software updated
WP 6275

Updated NDS2 client software package to nds2-client-0.13.0 for Ubuntu 12 and Ubuntu 14 control room workstations.
H1 DAQ
daniel.sigg@LIGO.ORG - posted 09:38, Tuesday 25 October 2016 (30842)
Updated TwinCAT code

This update supports (WP 6259):

Images attached to this report
H1 DetChar
scott.coughlin@LIGO.ORG - posted 07:43, Tuesday 25 October 2016 - last comment - 07:43, Tuesday 25 October 2016(30804)
distribution of scratchy (also called Blue Mountains) noise in O1
Distribution of the hours at which scratchy glitches occurred, according to the ML output from GravitySpy. In addition, a histogram of the amount of O1 time spent in analysis-ready mode is provided. I have uploaded omega scans and FFT spectrograms of what the Scratchy glitch looked like in O1.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:33, Monday 24 October 2016 (30810)
For those of us who haven't been on DetChar calls to have heard this latest DetChar nickname... "Scratchy glitches?"
joshua.smith@LIGO.ORG - 15:10, Monday 24 October 2016 (30814)DetChar

Hi Jeff, 

Scotty's comment above refers to Andy's comment to the range drop alog 30797 (see attachment here and compare to Andy's spectrogram). We're trying to help figure out its cause. It's a good lead that they seem to be related to RM1 and RM2 motion. 

"Scratchy" is the name used in GravitySpy for these glitches. They are called that because they sound like scratches in audio https://wiki.ligo.org/DetChar/InstrumentSounds . In FFT they look like mountains, or if you look closer, like series of wavy lines. They were one of the most numerous types of H1 glitches in O1. In DetChar we also once called them "Blue mountains." Confusing, I know. But there is a DCC entry disambiguating (in this case equating) scratchy and blue mountain https://dcc.ligo.org/LIGO-G1601301 and a further entry listing all of the major glitch types https://dcc.ligo.org/G1500642 and the notes on the GravitySpy page. 

Images attached to this comment
H1 ISC
stefan.ballmer@LIGO.ORG - posted 02:33, Tuesday 25 October 2016 (30841)
PCAL readback indicates calibration at 25W is 11% off - ~80Mpc once corrected (and if PCAL correct)

We repeatedly noticed that the current front-end calibration is slightly off - tonight all cal lines (low and high freq) in DARM were 11% above the PCAL read-back.

If I take the PCAL readback as reference and scale down the calibrated spectrum (as attached), I get about 80Mpc.

On the other hand, Evan Goetz reported that he thinks the PCAL is clipping (alog 30827). We'll see whether these 11% are real...
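The rescaling is simple arithmetic, since the range is linear in the calibrated strain amplitude. A back-of-envelope sketch; the 90 Mpc uncorrected range is an assumption for illustration, not a number from this entry:

```python
# If the DARM cal lines read 11% above the Pcal reference, the
# calibrated strain amplitude (and hence the range) comes down by
# the same factor.
uncorrected_range_mpc = 90.0   # assumed uncorrected range
cal_line_ratio = 1.11          # DARM line amplitude / Pcal readback
corrected = uncorrected_range_mpc / cal_line_ratio
print(round(corrected, 1))     # ~81 Mpc, in line with the "~80Mpc" above
```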

Images attached to this report
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 01:36, Tuesday 25 October 2016 (30840)
Removing PR2 feedforward from MICH length

[Stefan, Jenne]

We removed the PR2 length feedforward that removes the MICH signal in PRCL.  We did this by ramping the PR2 LOCK_L gain at the lowest stage to 0.  We didn't see any change in DARM.  We also tried increasing the gain by a factor of 3.  Again, we didn't see any change in DARM. 

However, since we discovered and mitigated some scattering effects earlier tonight (but after this PR2 test), we should try this again. 

H1 ISC
stefan.ballmer@LIGO.ORG - posted 01:15, Tuesday 25 October 2016 (30839)
Fixed up PRMI / DRMI locking

Jenne, Stefan

PRMI and DRMI lock acquisition was very sloppy the last few days, so we actually looked at the fringes, gains, trigger thresholds, etc. A number of tweaks were required:

REFLAIR_A_RF45 PHASE was changed from 142deg to 157 to minimize the I signal bleeding through.

PRMI acquisition gains: PRCL 16, MICH 2.8

PRMI locked gains: PRCL 8 (nominal UGF 40Hz), MICH 2.8 (nominal UGF 10Hz)

DRMI locked gains: PRCL 8 (nominal UGF 40Hz), MICH 1.4 (nominal UGF 10Hz), SRCL -45 (nominal UGF 70Hz)

DRMI acquisition gains: same as PRMI: PRCL 16, MICH 2.8, and SRCL -30

 

 
 
LHO General
thomas.shaffer@LIGO.ORG - posted 00:00, Tuesday 25 October 2016 - last comment - 16:17, Tuesday 25 October 2016(30837)
Ops Eve Shift Summary

TITLE: 10/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Locking DRMI/PRMI was not easy; it required large adjustments and waiting a good amount of time. useism is also getting pretty high, so I tried the WINDY_USEISM state, then brought it back to WINDY because I couldn't tell which was better. Aside from that, the commissioners are working.

 

Comments related to this report
jim.warner@LIGO.ORG - 12:39, Tuesday 25 October 2016 (30848)SEI

I should probably just remove or rename the WINDY_USEISM state. It may have a use, but I think people are taking the configuration guide on the SEI_CONF screen too literally. I'm reluctant to try to make the guide more accurate because I'm not a cubist. The WINDY_USEISM state should be thought of as a more wind resistant state than the high microseism configuration we used during O1 (USEISM in SEI_CONF). Anyone remember how hard locking was with 15mph winds and high microseism during our first observing run?

We are getting into new territory with the current configuration (implemented during the windy, low microseism summer), but looking at the locks last night, it looks like the WINDY configuration is still what we want to use. The five attached plots are the ISC_LOCK state, SEI_CONF state (40 is WINDY, 35 is WINDY_USEISM), the ETMX Z 30-100mhz STS BLRMS (in nm, so 1000=1 micron) and the corner station windspeed. The last plot shows all four channels together, red is the ISC state, blue the SEI_CONF state, green is the STS BLRMS, black is the wind. It's kind of a mess, but it gives a better feel for the time line.

Microseism was high over this entire period (around 1 micron RMS) and wind was variable, so this was a good time to test. I think the takeaway is that the WINDY state was sufficient to handle the high microseism for the 2 NLN locks over this stretch, and is very probably more robust against the wind than the WINDY_USEISM state.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 16:17, Tuesday 25 October 2016 (30868)

This is great to know. I was pretty sure that you said WINDY is good for almost every situation, but I thought it worth a try.

Tagging OpsInfo so we can get the latest

H1 AOS
robert.schofield@LIGO.ORG - posted 23:14, Monday 24 October 2016 - last comment - 23:14, Monday 24 October 2016(30835)
Shaking ISCT1 produced noise, beam diverter now closed

This morning a broad increase in ground motion around 12Hz reduced the range. ISCT1 has a table resonance there, so I shook it and noticed that shaking by several times the normal level produced significant noise (see attached figure). We switched to REFL B 9I and 45I so that we could close the beam diverter. The coupling went away.

Robert Stefan Jenne Evan

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 23:11, Monday 24 October 2016 (30836)

The Guardian now again uses the REFL WFS for PRC2 by default, and closes the beam diverters. While this didn't change the range much, it seems to have improved the non-stationarity in that frequency band. One down, more to go.

H1 ISC (CDS, GRD, ISC)
jenne.driggers@LIGO.ORG - posted 20:32, Monday 24 October 2016 - last comment - 09:10, Saturday 29 October 2016(30831)
cdsutils avg giving weird results in guardian??

cdsutils.avg() in guardian sometimes gives us very weird values. 

We use this function to measure the offset value of the trans QPDs in Prep_TR_CARM.  At one point, the result of the average gave the same (wrong) value for both the X and Y QPDs, to within 9 decimal places (right side of screenshot, about halfway down).  Obviously this isn't right, but the fact that the values are identical will hopefully help track down what happened.

The next lock, it correctly got a value for the TransX (left side of screenshot, about halfway down), but didn't write a value for the TransY QPD, which indicates that it was trying to write the exact same value that was already there (EPICS writes aren't logged if they don't change the value). 

So, why did 3 different cdsutils averages all return a value of 751.242126465?

This isn't the first time that this has happened.  Stefan recalls at least one time from over the weekend, and I know Cheryl and I found this sometime last week. 

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 21:01, Monday 24 October 2016 (30832)

This is definitely a very strange behavior.  I have no idea why that would happen.

As with most things guardian, it's good to try to get independent verification of the effect.  If you make the same cdsutils avg calls from the command line do you get similarly strange results?  Could the NDS server be getting into a weird state?

jenne.driggers@LIGO.ORG - 21:11, Monday 24 October 2016 (30833)

On the one hand, it works just fine right now in a guardian shell.  On the other hand, it also worked fine for the latest acquisition.  So, no conclusion at this time.

jenne.driggers@LIGO.ORG - 01:03, Tuesday 25 October 2016 (30838)OpsInfo

This happened again, but this time the numbers were not identical.  I have added a check to the Prep_TR_CARM state: if the absolute value of the offsets is larger than 5 (normally they're around 0.2 and 0.3, and the bad values have all been above several hundred), then notify and don't move on. 

Operators: If you see the notification "Check Trans QPD offsets!" then look at H1:LSC-TR_X_QPD_B_SUM_OFFSET and H1:LSC-TR_Y_QPD_B_SUM_OFFSET.  If you do an ezca read on that number and it's giant, you can "cheat" and try +0.3 for X and +0.2 for Y, then go back to trying to find IR.
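The sanity check described above can be sketched as a standalone function. The threshold of 5 and the channel names are from this thread; the function itself is illustrative, not the actual Prep_TR_CARM state code.

```python
# Reject implausibly large measured offsets: good values are ~0.2-0.3,
# while the bad averages have all been several hundred.
OFFSET_LIMIT = 5.0

def bogus_offsets(read, channels=("H1:LSC-TR_X_QPD_B_SUM_OFFSET",
                                  "H1:LSC-TR_Y_QPD_B_SUM_OFFSET")):
    """Return the channels whose offsets exceed the plausibility limit."""
    return [ch for ch in channels if abs(read(ch)) > OFFSET_LIMIT]

# In the guardian state, a non-empty result would trigger the
# "Check Trans QPD offsets!" notification instead of moving on.
```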

sheila.dwyer@LIGO.ORG - 21:10, Friday 28 October 2016 (30976)OpsInfo

This happened again to Jim and Cheryl today and caused multiple locklosses.

I've commented out the averaging of the offsets in the guardian. 

We used to not do this averaging, and just rely on the dark offsets not changing.  Maybe we could go back to that.  

 

For operators, until this is fixed you might need to set these by hand:

If you are having trouble with FIND IR, this is something to check.  From the LSC overview screen, click on the yellow TRX_A_LF TRY_A_LF button toward the middle of the left part of the screen.  Then click on the R Input button circled in the attachment, and from there check that both the X and Y arm QPD SUMs have reasonable offsets.  (If there is no IR in the arms, the offset should be about -1*INMON)

Images attached to this comment
david.barker@LIGO.ORG - 09:10, Saturday 29 October 2016 (30994)

Opened as high priority fault in FRS:

ticket 6559

H1 CAL (CAL, DetChar)
evan.goetz@LIGO.ORG - posted 18:17, Monday 24 October 2016 - last comment - 10:23, Tuesday 25 October 2016(30827)
Pcal Y laser likely clipping
Summary:
The Pcal Y laser beam is likely clipping somewhere in the beam path. This will need to be addressed ASAP. In the future we need to keep a close eye on the Pcal summary spectra on the DetChar web pages.

Details:
Jeff K. and I noticed that the spectrum for the Y-end Pcal seemed particularly noisy. I plotted some TX and RX PD channels at different times since Oct. 11. On several days since Oct. 11, the Pcal team has been to EY to perform some Pcal maintenance. One of those times (I think Oct. 18, but we don't have an aLOG reference for this), we realigned the beams on the test mass. Potentially, this change caused some clipping.

Attached are the spectra for TX and RX. Notice that there are no dramatic changes in the TX spectra. In the RX spectra, there is structure becoming more apparent with time in the 15-30 Hz and 90-140 Hz regions. Also, various other peaks are growing.

Also attached is a minute trend of the TX and RX PD mean values. On Oct 18, after realignment (the step down), the RX PD starts to drift downward while the TX PD power holds steady. The decrease in RX PD is nearly 10% from the start of the realignment. 

The Pcal team should address this ASAP, hopefully during tomorrow's maintenance time.

Images attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 10:23, Tuesday 25 October 2016 (30845)CAL

Evan, it seems they are ~14% off. On top of the ~10% drift, there is also a ~4% difference between the RX and TX PDs immediately after the alignment. The alignment itself seems to have ended up with some clipping.
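The two effects compound multiplicatively rather than simply adding; a quick illustrative sketch of the arithmetic (the percentages are the approximate values quoted above):

```python
# ~4% immediate RX/TX mismatch compounded with the subsequent ~10%
# drift gives roughly the ~14% total offset quoted.
mismatch = 0.04
drift = 0.10
total = (1 + mismatch) * (1 + drift) - 1
print(f"{total:.1%}")   # ~14.4%
```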

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 16:44, Monday 24 October 2016 - last comment - 21:12, Monday 24 October 2016(30823)
Replaced Batteries for UPSes on All Vacuum Racks

Removed and replaced battery packs for all vacuum rack UPSes (Ends/Mids/Corner station).  No glitches noted on racks.

Work done under WP#6270.

Comments related to this report
kyle.ryan@LIGO.ORG - 21:12, Monday 24 October 2016 (30834)
If FAMIS were allowed to digest this activity, it could expect to become more "regular" (I'm laughing at my own jokes!)