H1 CAL (CAL)
travis.sadecki@LIGO.ORG - posted 16:30, Tuesday 31 January 2017 (33783)
End X PCal Calibration

Yuki, Heather, Travis, Rick

Results of the latest End X PCal calibration can be found on the DCC at T1500129 and T1500622.  Although the baseline is rather short since the Working Standard was replaced before the last 2 measurements, the variation in the PDs is less than the 1% required by Calibration.  Results have been reviewed by Rick, myself, and Yuki and the only standout we noted was the power imbalance of the inner and outer beams, which varied by 1.2% from the mean.  We will continue to monitor this imbalance in further measurements and address it if it persists.

LHO General (CDS)
filiberto.clara@LIGO.ORG - posted 16:16, Tuesday 31 January 2017 (33782)
Safety System - Beckhoff

WP 6460

Set the addresses for the Beckhoff modules used for the safety system. The LVEA, mechanical mezzanine, and EY were completed. Will try to complete EX next Tuesday.

F. Clara, R. McCarthy

LHO General
corey.gray@LIGO.ORG - posted 16:08, Tuesday 31 January 2017 (33754)
Day Summary

TITLE: 01/31 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Maintenance Day roughly wrapped up around 23:00 UTC (3pm); not a ton of activities, but we allowed some work to take us past our noontime "deadline".  Thank you to Patrick, TJ, & Travis for helping me out with work I had to pawn off on them.

Immediately upon looking at ALS locking, BOTH arms looked atrocious.  Took a while to finally get away from zero and to a point where the alignment looked serviceable (Jim suggested I probably look at temperatures because of this...or it could be due to all the BSCISIs going down).  At any rate, in the last hour of the shift, I rushed to try and have an aligned H1 to hand off to Jim.

OK, handing an aligned H1 to Jim.

LOG:

Lost unsaved draft, so times are iffy below:

H1 SYS
filiberto.clara@LIGO.ORG - posted 16:05, Tuesday 31 January 2017 (33781)
EY Illuminator

WP 6459

Looked at the illuminator controller chassis at EY. Found a faulty crimp connection on one of the spade connectors. Removed the cabling and power supply that were used to power the illuminator. Cabling from the controller chassis is now connected and the illuminator can now be controlled through MEDM.

F. Clara, R. McCarthy

H1 SUS
thomas.shaffer@LIGO.ORG - posted 16:02, Tuesday 31 January 2017 (33780)
Charge Measurements Taken for EX EY

Took charge measurements for both ends today. Results attached; they all look good.

The Matlab scripts had trouble with a few of the measurements taken today. The first one for EY and the second for EX would bring up an error during the many-measurements script, something about matrix indices not matching. Betsy told me to just skip the ones that it would error on, and then the rest seemed to work. I also appended ".bad" to the names of the directories that were erroring.

Images attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 15:45, Tuesday 31 January 2017 - last comment - 10:53, Wednesday 01 February 2017(33779)
CP4,3 sensing line surgery
Update on theoretical length

Kyle, Chandra

WP 6463

Today we fed a 1/16" diam. rope wire through the bottom sensing lines of CP3 & CP4. A length of 109.5" penetrated CP4 (measured from the 1/4" Swagelok fitting). The length for CP3 was 105.5". The sensing line is 1" longer on CP4 than on CP3 (based on how much it protrudes from the nipple welded to the CP outer body).

The theoretical length from drawings V049-4-005, V049-4-090, and V049-4-121 is 111.7"; then add 1.2" for CP3 and 2.2" for CP4 for the additional length of the Swagelok connections.
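A quick arithmetic check of how far the wire got relative to the full theoretical length (my own sketch; the interpretation follows this entry and the comment below):

# Worked numbers from the measurements above (all lengths in inches); the inference
# about where the wire stopped is my own reading of the entry, not from the log.
theoretical = 111.7                  # from the drawings
cp3_expected = theoretical + 1.2     # + Swagelok connections -> 112.9"
cp4_expected = theoretical + 2.2     # -> 113.9"

cp3_measured = 105.5
cp4_measured = 109.5

print(cp3_expected - cp3_measured)   # ~7.4" short of the full length on CP3
print(cp4_expected - cp4_measured)   # ~4.4" short of the full length on CP4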

We felt resistance toward the end, where I thought we were hitting the 90 deg bend just past the bibraze joint (until looking at these measurements more closely); then we were able to push another ~2". The vertical length is 2.25". The gap between the inner and outer vessel walls at the bottom is only 7/8". The last 4-5" of the wire was cold and frosted when we pulled it out. We then cycled between ~1 Torr (diaphragm pump) and 100 psig pressure on the CP4 lower sensing line. Did not detect any breakthrough.

We left both CP3 and CP4 upper and lower sensing lines plumbed together through the shunt valve, essentially bypassing the transducer. Currently not pumping or pressurizing CP4 line.

Comments related to this report
chandra.romel@LIGO.ORG - 10:53, Wednesday 01 February 2017 (33805)

Based on drawing and measurements, the blockage in CP4 sensing line is just past the bibraze joint. For CP3 it is just before the bibraze joint.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 14:29, Tuesday 31 January 2017 (33778)
LVEA Swept After Maintenance Day Activities

Used the latest version of the VEA Sweep Checklist (T1500386), and we look good.

H1 SEI
hugh.radkins@LIGO.ORG - posted 14:18, Tuesday 31 January 2017 - last comment - 17:35, Tuesday 31 January 2017(33776)
H1 ISIs (sans HAM2 & 3) rebuilt with Coil Driver mod & restarted

WP 6457  ECR E1700032 FRS 7099 DCC T1700025

This model update takes the binary coil status data out of the WD tripping code.  If this data goes bad for 60 seconds, a red MEDM light labeled OVERTEMP will show on the ISI platform overview screen.  If the coil driver actually does experience an overtemperature condition, the hardware itself will trip and of course the ISI will trip as the actuators let go and seismometers rail, you know, cats & dogs living together, mass hysteria.  So there is little risk that anything too bad would result from this change.

The change was to prevent unnecessary tripping of the ISI when the binary signal from the Coil Driver went erroneously bad.  This happened at LHO in Oct 2015 (alog 22977), and the model was changed to allow a 10 sec wait before tripping the watchdog.  In May 2016 this was extended to the HAMs. In Sept 2016, this 10 second delay proved not sufficient for the LHO BS ISI.  It then became a problem for LLO earlier this month.  So this is a short (and likely incomplete) history of the problem leading to this model change.
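For clarity, here is a minimal Python sketch of the intended behavior described above; the real change lives in the front-end model, so the class and variable names here are purely illustrative:

# Sketch only: bad coilmon status lights an OVERTEMP indicator after 60 s instead of
# tripping the watchdog; a genuine overtemp trips the coil driver hardware itself.
BAD_LIMIT_SEC = 60

class CoilmonIndicator:
    def __init__(self, sample_rate_hz):
        self.bad_count = 0
        self.limit = BAD_LIMIT_SEC * sample_rate_hz
        self.overtemp_light = False   # red OVERTEMP light on the ISI platform overview

    def update(self, coilmon_status_ok):
        """Called once per cycle with the binary coil driver status."""
        if coilmon_status_ok:
            self.bad_count = 0
            self.overtemp_light = False
        else:
            self.bad_count += 1
            if self.bad_count >= self.limit:
                self.overtemp_light = True   # indicate, but do not trip the WD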

Comments related to this report
jim.warner@LIGO.ORG - 17:35, Tuesday 31 January 2017 (33787)

The coilmon status channels are now monitored in DIAG_MAIN, so the control room will get notifications if the status bit goes to zero. We'll need to modify the test code when we do HAMs 2&3.

 

@SYSDIAG.register_test
def SEI_COILMON_ALL_OK():
    """ISI coilmon status"""
    hams = ['HAM4', 'HAM5', 'HAM6']
    bscs = ['BS', 'ITMX', 'ITMY', 'ETMX', 'ETMY']
    # BSC and HAM platforms publish the coilmon status under slightly different channel names
    for chamber in bscs:
        if ezca['ISI-' + chamber + '_COILMON_STATUS_ALL_OK'] != 1:
            yield "ISI %s coilmon drop out" % (chamber)
    for chamber in hams:
        if ezca['ISI-' + chamber + '_BIO_IN_COILMON_STATUS_ALL_OK'] != 1:
            yield "ISI %s coilmon drop out" % (chamber)

 

H1 CDS (OpsInfo)
ryan.blair@LIGO.ORG - posted 14:16, Tuesday 31 January 2017 (33777)
LHO CDS wiki has moved

LHO CDS wiki (aka 'daqwiki') has been moved outside of the CDS network to a new server at https://cdswiki.ligo-wa.caltech.edu/wiki/. Redirections have been put in place on the CDS web server to ease the migration.

H1 PSL
filiberto.clara@LIGO.ORG - posted 13:59, Tuesday 31 January 2017 (33774)
PSL Rotation Stage Setup

WP 5855

Removed the temporary Beckhoff rotation stage setup by HAM1/PSL enclosure. The following items were disconnected:

1. Power supply
2. Rotation stage interface board
3. Beckhoff terminals
4. Two network cables

The two cables going into the PSL enclosure (DB15 and Conec 2W2) need to be pulled out, but this requires someone inside the enclosure to guide/feed them out.

H1 ISC
keita.kawabe@LIGO.ORG - posted 13:50, Tuesday 31 January 2017 (33773)
OMC length sensing noise in DARM

At the beginning of the maintenance window, at about 16:27:20 UTC, I increased the OMC LSC dither amplitude by a factor of two while reducing the OMC LSC-I gain by the same factor, so the overall OMC LSC control bandwidth doesn't change. I wanted to repeat the low/high comparison but the IFO broke lock (not because of this test).

Anyway, just for this one test, I don't see any difference.

The changes were reverted.
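A minimal sketch of the compensation described above (illustrative values only; the actual amplitude and gain numbers are not given in this entry):

# Doubling the dither amplitude doubles the error-signal slope, so halving the LSC-I
# gain keeps the open-loop gain, and hence the control bandwidth, unchanged.
dither_amp_nominal = 1.0    # arbitrary units
lsc_i_gain_nominal = 1.0

scale = 2.0
dither_amp_test = dither_amp_nominal * scale
lsc_i_gain_test = lsc_i_gain_nominal / scale

assert dither_amp_test * lsc_i_gain_test == dither_amp_nominal * lsc_i_gain_nominal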

Images attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 13:48, Tuesday 31 January 2017 (33767)
PEM STS (on ETMY BRS) centering--still not good

TJ and I went to EndY to address the potential mis-centering of the STS colocated on the BRS Table.  Krishna reported the spectra of the STS did not look ideal and suggested centering may be needed.

We checked the mass position voltage readings before doing any centering and they were well out of spec: U, V, W were -3.7, -1.8 & -12.0 V. Less than 2.5 V is essential and 2 V is even better. We then shorted the resonant frequency jumper to make it a 1 sec instrument and hit the centering button.  This is all done on the monitor port of the field satellite box, about 10 feet from the BRS where the STS is located.
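A trivial helper for judging readings against the spec quoted above (a hypothetical function for illustration, not an existing site tool):

def sts_mass_positions_ok(u, v, w, limit_volts=2.5):
    """Return True if all three STS mass position voltages are within spec (|V| < limit)."""
    return all(abs(volts) < limit_volts for volts in (u, v, w))

print(sts_mass_positions_ok(-3.7, -1.8, -12.0))  # False -- the readings above are out of spec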

Our repeated attempts and observations over the next couple of hours suggest the instrument masses would just go to the opposite rail and stay there, especially the W mass.  I thought we waited long enough, as we gave each centering several minutes (10 min) before trying again.  I did not think that the instrument was poorly leveled, as in that case a mass would float but just not in the center, or it would sit over on one rail, rather than going back and forth rail to rail at each centering attempt and staying there as it was doing.  Still, we eventually decided to check the bubble level on the STS and it was beautifully leveled.

So, either the masses take much longer than I expect to come off the rail (not my experience; maybe the jumper to make it a 1 sec machine did not work) and if we were to look later we'd see the positions to be different; or something is screwed up in the machine, with either the centering process or the masses themselves.

So, I'd like to look at the masses again at a later time if the coordinator would grant access to the VEA.

H1 CAL (CAL)
aaron.viets@LIGO.ORG - posted 13:43, Tuesday 31 January 2017 - last comment - 12:01, Wednesday 01 March 2017(33771)
DCS filters for LHO data starting at 1169326080
I have produced filters for offline calibration of Hanford data starting at GPS time 1169326080. The filters can be found in the calibration SVN at this location:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1169326080.npz

For information on the associated change in calibration, see:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33585

For suggested command line options to use when calibrating this data, see:
https://wiki.ligo.org/Calibration/GDSCalibrationConfigurationsO2

The filters were produced using this Matlab script in SVN revision 4251:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1169326080.m

The parameters files used (all in revision 4251) were:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/modelparams_H1_2017-01-24.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/H1_TDparams_1169326080.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/D20170124_H1_CAL_EPICS_VALUES.m
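
A quick way to inspect the filter file above, assuming it is a standard NumPy .npz archive checked out locally from the calibration SVN (a sketch, not part of the calibration pipeline):

import numpy as np

filters = np.load('H1DCS_1169326080.npz')
for name in filters.files:
    print(name, filters[name].shape)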

Several plots are attached. The first four (png files) are spectrum comparisons between CALCS, GDS, and DCS. GDS and DCS agree to the expected level. Kappas were applied in both the GDS plots and the DCS plots with a coherence uncertainty threshold of 0.4%.
Time domain vs. frequency domain comparison plots of the filters are also attached. Lastly, brief time series of the kappas and coherences are attached, for comparison with CALCS.
Images attached to this report
Non-image files attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 12:01, Wednesday 01 March 2017 (34493)CAL
Here is a plot that compares the ratios of GDS and DCS (C01) data (expected vs measured). Above ~8 Hz, the expected and measured ratios agree. Below ~8 Hz we see a difference. This comparison doesn't account for the FIR implementation of the ~9 Hz high-pass filter used in the GDS and DCS data. If there is a difference in how this is implemented, it could produce the difference we see here (needs to be checked).  The code used to make this plot is added to the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CALCS_FE/CALCSvsDARMModel_20170124.m 
This is just an updated version of the code Jeff used to make the CALCS and GDS comparison. 
Images attached to this comment
H1 TCS
filiberto.clara@LIGO.ORG - posted 13:35, Tuesday 31 January 2017 (33769)
TCSY CO2 Laser Interlock Chassis

WP 6428

The CO2 Laser Interlock Chassis (D1200745) was replaced with the original chassis, S1302125. The chassis removed was S1302122. Replacement of the chassis did not clear the glitches seen in the TCSY flowrate.  See alog 32776 and alog 33129.

LHO General
corey.gray@LIGO.ORG - posted 12:57, Tuesday 31 January 2017 (33766)
Maintenance Update

Most of our activities are complete.  We have a few items which are still ongoing:

Betsy mentioned Test Mass charge measurements after their Scattering work. 

H1 GRD (GRD, OpsInfo)
corey.gray@LIGO.ORG - posted 12:50, Tuesday 31 January 2017 (33765)
ISC_LOCK Guardian Node Loaded

This is per alog 33735 and regards the FM9 filter for the 4.7 kHz violin mode.

H1 DetChar (DetChar, PEM, SEI, SUS)
thomas.dent@LIGO.ORG - posted 11:59, Tuesday 31 January 2017 - last comment - 08:20, Wednesday 08 February 2017(33761)
Severe transient scattering events in DARM caused by loud 20-30Hz disturbances ('thuds') in CS/LVEA

PyCBC analysts, Thomas Dent, Andrew Lundgren

Investigation of some unusual and loud CBC triggers led to identifying a new set of glitches which occur a few times a day, looking like one or two cycles of extremely high-frequency scattering arches in the strain channel.  One very clear example is this omega scan (26th Jan) - see particularly LSC-REFL_A_LF_OUT_DQ and IMC-IM4_TRANS_YAW spectrograms for the scattering structure.  (Hence the possible name SPINOSAURUS, for which try Googling.)

The cause is a really strong transient excitation at around 30 Hz (aka 'thud') hitting the central station, seen in many accelerometer, seismometer, HEPI, ISI and SUS channels.  We made some sound files from a selection of these channels:

PEM microphones, interestingly, don't pick up the disturbance in most cases - so probably it is coming through the ground.

Note that the OPLEV accelerometer shows ringing at ~60-something Hz. 

Working hypothesis is that the thud is exciting some resonance/relative motion of the input optics which is causing light to be reflected off places where it shouldn't be ..

The frequency of the arches (~34 per second) would indicate that whatever is causing scattering has a motion frequency of about 17Hz (see eg https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=154054 as well as the omega scan above).
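A small illustrative calculation of that inference (my own sketch, not from the alog; the 1 micron motion amplitude is an assumed value):

import numpy as np

arch_rate = 34.0             # arches per second, from the omega scans above
f_motion = arch_rate / 2.0   # arches repeat at twice the scatterer's mechanical
                             # frequency, so the scatterer moves at ~17 Hz

# Peak fringe (arch) frequency for a scatterer oscillating as x(t) = x0*sin(2*pi*f*t):
# f_fringe_max = 2*v_max/lambda = 4*pi*f*x0/lambda
lam = 1064e-9                # laser wavelength [m]
x0 = 1e-6                    # assumed 1 micron motion amplitude (illustrative)
f_fringe_max = 4 * np.pi * f_motion * x0 / lam
print(f_motion, f_fringe_max)   # ~17 Hz motion -> arches reaching ~200 Hz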

Maybe someone at the site could recognize what this is from listening to the .wav files?

Comments related to this report
thomas.dent@LIGO.ORG - 12:07, Tuesday 31 January 2017 (33763)

A set of omega scans of similar events on 26th Jan (identified by thresholding on ISI-GND_STS_HAM2_Y) can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/

robert.schofield@LIGO.ORG - 13:26, Tuesday 31 January 2017 (33768)

Wow, that is pretty loud; it seems like it is even seen (though just barely) on seismometers clear out at EY, with about the right propagation delay for air or ground propagation in this band (about 300 m/s). Like a small quake near the corner station, or something really heavy, like the front loader, going over a big bump or setting its shovel down hard. Are other similar events during working hours and also seen at EY or EX?

thomas.dent@LIGO.ORG - 12:43, Wednesday 01 February 2017 (33811)

It's hard to spot any pattern in the GPS times.  As far as I have checked the disturbances are always much stronger in CS/LVEA than in end station (if seen at all in EX/EY ..).

More times can be found at https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan23/ https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/isi_ham2/jan24/

Hveto investigations have uncovered a bunch more times - some are definitely not in working hours, eg  https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ac-HPI-HAM2/scans/1169549195.98/ (02:46 local)   https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/hveto/O2Ab-HPI-HAM2/scans/1168330222.84/  (00:10 local)
 

thomas.dent@LIGO.ORG - 08:20, Thursday 02 February 2017 (33812)

Here's a plot which may be helpful as to the times of disturbances in CS showing the great majority of occurrences on the 23rd, 26th-27th and early on 28th Jan (all times UTC).  This ought to be correlated with local happenings.

The ISI-GND HAM2 channel also has loud triggers at times where there are no strain triggers as the ifo was not observing.  The main times I see are approximately (UTC time)

Jan 22 : hours 13, 18, 21-22

Jan 23 : hours 0-1, 20

Jan 24 : hours 0, 1, 3-6, 10, 18-23

Jan 25 : hours 21-22

Jan 26 : hours 17-19, 21-22

Jan 27 : hours 1-3, 5-6, 10, 15-17, 19, 21, 23

Jan 28 : hours 9-10

Jan 29 : hours 19-20

Jan 30 : hours 17, 19-20 

Hmm.  Maybe this shows a predominance of times around hour 19-20-21 UTC i.e. 11-12-13 PST.  Lunchtime??  And what was special about the 24th and 27th ..

Images attached to this comment
jim.warner@LIGO.ORG - 12:12, Thursday 02 February 2017 (33846)

Is this maybe snow falling off the buildings? The temps started going above the teens on the 18th or so and started staying near freezing by the 24th. Fil reported seeing a chunk he thought could be ~200 lbs fall.

corey.gray@LIGO.ORG - 12:48, Thursday 02 February 2017 (33847)DetChar

Ice Cracking On Roofs?

In addition to the ice/snow falls mentioned by Jim, thought I'd mention audible bumps I heard from the Control Room during some snowy evenings a few weeks ago (alog 33199)....Beverly Berger emailed me suggesting this could be ice cracking on the roof.  We currently do not have tons of snow on the roofs, but there are some drifts which might be on the order of 1' tall.

MSR Door Slams?

After hearing the audio files from Thomas' alog, I was sensitive to the noise this morning.  Because of this, thought I'd note some times this morning when I heard a noise similar to Thomas' audio; this noise was the door slamming when people were entering the MSR (Mass Storage Room adjacent to the Control Room; there was a pile of boxes which the door would hit when opened...I have since slid them out of the way).  Realize this isn't as big of a force as what Robert mentions or the snow falls, but just thought I'd note some times when they were in/out of the room this morning:

  • 19:00:55, 19:05:22, 19:10:16, 19:43:40-19:44:00 Mass Storage Room door slam (not seen on DARM spectra).
thomas.dent@LIGO.ORG - 06:02, Friday 03 February 2017 (33858)

I took a brief look at the times in Corey's previous 'bumps in the night' report; I think I managed to deduce correctly that it refers to UTC times on Jan 13.  Out of these I could only find glitches corresponding to the times 5:32:50 and 6:09:14.  There were also some loud triggers in the ISI-GND HAM2 channel on Jan 13, but only one corresponded in time with Corey's bumps: 1168320724 (05:31:46).

The 6:09 glitch seems to be a false alarm, a very loud blip glitch at 06:09:10 (see https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/H1_1168322968/) with very little visible in aux channels.  The glitch would be visible on the control room glitchgram and/or range plot but is not associated with PEM-CS_SEIS or ISI-GND HAM2 disturbances.

The 5:32:50 glitch was identified as a 'PSL glitch' some time ago - however, it also appears to be a spinosaurus!  So, a loud enough spinosaurus will also appear in the PSL. 
Evidence : Very loud in PEM-CS_SEIS_LVEA_VERTEX channels (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155306) and characteristic sail shape in IMC-IM4 (https://ldvw.ligo.caltech.edu/ldvw/view?act=getImg&imgId=155301).

The DetChar SEI/Ground BLRMS Y summary page tab has a good witness channel; see the 'HAM2' trace in this plot for the 13th - i.e. if you want to know 'was it a spinosaurus', check for a spike in HAM2.
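A rough sketch of that check around a candidate time, using gwpy; the exact BLRMS channel name and the spike threshold are assumptions, not from the alog:

from gwpy.timeseries import TimeSeries
import numpy as np

t0 = 1168320724    # candidate event GPS time from above
chan = 'H1:ISI-GND_STS_HAM2_Y_BLRMS_30M_100M'   # assumed channel name
data = TimeSeries.get(chan, t0 - 30, t0 + 30)

peak = data.value.max()
baseline = np.median(data.value)
if peak > 10 * baseline:   # arbitrary factor-of-10 spike criterion
    print("HAM2 ground spike found -- consistent with a spinosaurus")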

thomas.dent@LIGO.ORG - 06:44, Tuesday 07 February 2017 (33962)

Here is another weird-audio-band-disturbance-in-CS event (or series of events!) from Jan 24th ~17:00 UTC :
https://ldas-jobs.ligo-wa.caltech.edu/~tdent/detchar/o2/PEM-CS_ACC_LVEAFLOOR_HAM1_Z-1169312457.wav

Could be someone walking up to a piece of the instrument, dropping or shifting some heavy object then going away .. ??

Omega scan: https://ldas-jobs.ligo-wa.caltech.edu/~tdent/wdq/psl_iss/1169312457.3/

thomas.dent@LIGO.ORG - 08:20, Wednesday 08 February 2017 (33996)

The time mentioned in the last entry turns out to have been during a scheduled Tuesday maintenance period, when people were indeed in the LVEA doing work (and the ifo was not observing, though locked).

LHO FMCS
kyle.ryan@LIGO.ORG - posted 10:14, Tuesday 31 January 2017 - last comment - 16:42, Tuesday 31 January 2017(33758)
Reduced CP3's manual mode LLCV to 16% from 17% in response to LN2 delivery today

Comments related to this report
chandra.romel@LIGO.ORG - 13:40, Tuesday 31 January 2017 (33770)

I lowered it even more to 14% at 21:40 UTC.

chandra.romel@LIGO.ORG - 16:42, Tuesday 31 January 2017 (33784)

Temps still low and exhaust pressure still high, so I lowered it incrementally all the way down to 1%. Pressure fell to 0 psi but exhaust temps still read low (-33 C).

The Dewar was filled to a higher level than normal (86% full).

Raising the LLCV to 12% and will go out to check the actuator to make sure it's not frozen (we got snow today).

LHO VE
chandra.romel@LIGO.ORG - posted 11:45, Monday 30 January 2017 - last comment - 13:49, Tuesday 31 January 2017(33737)
IP7 voltage change

Changed the HV setting on the IP7 controller (the last of the old multivac Varian controllers in the LVEA) from 5.6 kV to 7 kV.

PT-140 shows a pressure rise from turning the HV off for a couple of minutes.

Comments related to this report
chandra.romel@LIGO.ORG - 13:49, Tuesday 31 January 2017 (33772)

Raising the HV setting improved vacuum level in diagonal volume.

Images attached to this comment
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 09:56, Tuesday 24 January 2017 - last comment - 14:00, Tuesday 31 January 2017(33571)
X2-8 Ion Pump Controller Modification

The controller unit was powered down and disconnected to change its output from 7.0 kV to 5.6 kV; a re-configuration/calibration was then done via the front panel.

The cable was reconnected and the controller powered back up; attached is a pressure trend of the volumes near X2-8 (2 hours).

Per WP#6447

Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 14:00, Tuesday 31 January 2017 (33775)

Pressures are holding fine with new settings on X2-8 and Y2-8 IPs.

Images attached to this comment