Reports until 20:55, Friday 13 January 2017
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 20:55, Friday 13 January 2017 - last comment - 07:49, Saturday 14 January 2017(33249)
Injection Guardian

It's been set to KILL since Jan 4th. Are we leaving it at KILL on purpose?

Comments related to this report
keith.thorne@LIGO.ORG - 07:49, Saturday 14 January 2017 (33261)CDS
This applies to the transient injections (not continuous pulsars).  It is likely that we are trying to accumulate a good stretch of data without such injections since we came back up in January to better measure sensitivity backgrounds.
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 19:35, Friday 13 January 2017 (33248)
Eve shift transition

Patrick handed me a locked interferometer. The range is still low. DMT Omega shows the low-frequency band up to 20 Hz is more glitchy compared to the past 12 hours. At 3:17 UTC (about 50 minutes into the lock) I wanted to hit the button to run the a2l DTT measurement but instead hit the run a2l button. Back to Observe 3:34 UTC.

H1 General
patrick.thomas@LIGO.ORG - posted 18:37, Friday 13 January 2017 (33247)
Back to observing
02:35 UTC Set to observing. I had to accept an SDF difference (see attached). I'm not sure what caused it. Would someone mind taking a look?
Images attached to this report
H1 ISC (DetChar, ISC)
sheila.dwyer@LIGO.ORG - posted 18:07, Friday 13 January 2017 (33244)
OMC length ugf set to 6Hz

The OMC length locking ugf was increased at 18:30 today (while the interferometer was unlocked). It will stay this way permanently now, and we will have fewer glitches. (See alog 33104 for the work DetChar has done checking that this configuration change is OK.)

LHO VE
kyle.ryan@LIGO.ORG - posted 17:15, Friday 13 January 2017 (33243)
~1100 - 1600 hrs local -> Pumped CP4's clogged level sensing line with small diaphragm pump
We intend to do this during the days when we are here as an experiment.  If the clog is due to ice, reverse sublimation would be a convenient fix but we haven't looked at this closely.
LHO General
patrick.thomas@LIGO.ORG - posted 16:36, Friday 13 January 2017 - last comment - 18:25, Friday 13 January 2017(33242)
Ops Day Shift Summary
TITLE: 01/14 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 54.8268 Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Ran initial alignment after losing lock. Stopped at DC readout transition while Jim damped 4735.09 Hz violin mode. Sheila increased the ISS diffracted power and I accepted the SDF differences (see attached). Sheila and Jenne took time at NLN to investigate the loss in range.
LOG:

16:42 UTC Chris to end Y to remove snow with John Deere tractor
16:49 UTC Bubba to end X to remove snow with front end loader
16:52 UTC Turned off sensor correction at end X
17:23 UTC Jim and Carlos to end X to check power supply for network switch
17:34 UTC Jim turning off another degree of sensor correction at end X, dropped to commissioning
17:36 UTC Jim accepted SDF difference, set back to observing
17:40 UTC Krishna changed filter that kicked us out of observing, set back
17:41 UTC Set back to observing
18:00 UTC Jim and Carlos back
18:11 UTC Lockloss. HAM6 ISI trip. Spike to ~ .1 um/s in end Y 0.03 - 0.1 Hz seismic band
18:15 UTC Jason to LVEA to increase SR3 laser power
18:19 UTC Karen driving to warehouse
18:24 UTC Starting initial alignment. ALS X is ~ .7
18:26 UTC Jason done
18:59 UTC Kyle to mid Y to turn on diaphragm pump for CP4 sensing line
19:08 UTC Turned off sensor correction at end Y
19:09 UTC Jim and Carlos to end Y
19:18 UTC Gerardo to mid Y
19:29 UTC Chris back from end Y
19:35 UTC Kyle back
19:40 UTC Jim and Carlos back
19:43 UTC Initial alignment done. Had some trouble locking SRC.
19:48 UTC Bubba done snow removal
19:49 UTC Sensor correction turned back on at both end X and end Y
20:01 UTC Stopped at CHECK_IR. Sheila adjusted ISS diffracted power from 1 to 4 percent following procedure in alog 31262
20:10 UTC Gerardo back
20:27 UTC Stopping at DC_READOUT_TRANSITION per Sheila's request. Jim damping 4735.09 Hz violin mode using procedure in alog 32081. Set H1SUS-ETMY_L2_DAMP_MODE10 gain to .1
20:52 UTC NLN. Set 4735.09 Hz damping gain back to .1, guardian must have reset it?
20:57 UTC Sheila starting investigation in low range
20:59 UTC Accepted SDF differences for ISS diffraction change
21:05 UTC Setting observation mode to corrective maintenance for Sheila's investigation
21:57 UTC Batteries in UPS for portable atomic clock died
22:30 UTC Filiberto and interns to mid Y to look for equipment
23:36 UTC Damped PI mode 28
23:41 UTC Gerardo to mid Y to retrieve power supply
23:43 UTC Filiberto back
23:50 UTC Sheila and Jenne done investigation. Set to observing.
00:28 UTC Lock loss after steadily falling range.

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 18:13, Friday 13 January 2017 (33245)
No flashes for DRMI or PRMI (later Sheila told me this is because she moved optics).
00:59 UTC Starting initial alignment. MC not locking. FSS is oscillating. Set to down a couple of times. Fixed. Struggled locking ALS X arm. Cannot get the COMM beatnote above 6 by adjusting PR3. SRC locked, was converging, then unlocked. Set ALIGN_IFO to down. Misaligned SRM. Moved SR2 to center pointing on ASC_AS_C. Realigned SRM. SRC locked.
02:00 UTC Initial alignment done. ** Note that this is the second time PR3 has needed to be moved a large amount to bring back the COMM beat note. **
On the way to NLN.
patrick.thomas@LIGO.ORG - 18:25, Friday 13 January 2017 (33246)
Summary of locking issues:
4735.09 Hz violin mode needed to be damped. Jim W. did this.

Ran initial alignment twice. Each time I had to move PR3 a large amount to bring back the COMM beat note. The second time I could not bring it above 6.

During both initial alignments I had trouble locking SRY. Sheila showed me a trick: bring IFO_ALIGN to down. Misalign SRM. Open Sitemap -> LSC -> Photo Detectors Overview. Click on AS_C (the box furthest to the right in the red circle). Move SR2 to center the beam spot in the X,Y plot on the right. Realign SRM. Take IFO_ALIGN back to SRC_ALIGN.

The ISS diffracted power was low. Sheila fixed this (see summary).
LHO General
patrick.thomas@LIGO.ORG - posted 16:05, Friday 13 January 2017 (33240)
Updated and restarted weather IOCs
WP 6434

I have updated the code to address the unsigned-to-signed conversion for the inside temperature, outside temperature, wind direction, and barometer (see alog 32653 and FRS 6963).

I had to move the IOCs from h0epics2 to h0epics. h0epics2 is running Ubuntu 10.04.4 and h0epics is running Ubuntu 12.04.5. The version of the glibc library installed with Ubuntu 10.04.4 is not compatible with the version EPICS base was compiled with.
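The sign fix described above amounts to reinterpreting a raw 16-bit register value as a two's-complement integer. A minimal sketch of the conversion (illustrative only; the actual IOC code is not shown in this entry, and the function name here is hypothetical):

```python
def to_signed_16bit(raw):
    """Reinterpret a raw 16-bit register value as a signed
    (two's-complement) integer, e.g. 0xFFFB -> -5."""
    raw &= 0xFFFF  # keep only the low 16 bits
    return raw - 0x10000 if raw & 0x8000 else raw
```

Without such a conversion, a below-zero outside temperature or a westward wind direction stored as, say, 0xFFFB would read as 65531 instead of -5.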
H1 CDS
david.barker@LIGO.ORG - posted 14:03, Friday 13 January 2017 - last comment - 16:06, Friday 13 January 2017(33236)
UPS battery in MSR swollen and burnt, removed from room

We were running the old atomic clock on a cart in the MSR; it was being powered by a small UPS unit. Around 13:30 PST two batteries in the UPS unit failed and produced a very strong odor. Gerardo and Fil quickly discovered the problem and removed the unit from the premises (good catch). The lead-acid batteries are swollen but did not break.

We are running with open doors (it's 5 degF outside, -15 C) and fans to remove the odor.

Comments related to this report
david.barker@LIGO.ORG - 16:06, Friday 13 January 2017 (33241)

"the big stink" seems to have dissipated somewhat. We have closed the doors and now there is just a lingering aroma.

H1 DAQ (DAQ)
david.barker@LIGO.ORG - posted 13:24, Friday 13 January 2017 (33233)
DAQ frame writer h1fw0 downtime due to LDAS power issues

h1fw0 was down for several hours last night due to LDAS power issues in the VPW server room. The data gap on h1fw0 was

Thu Jan 12 20:55 - Thu Jan 12 22:38 PST (about 1 hr 43 mins).

h1fw0 was down because its SATAboy disk RAIDs are located in the VPW. h1fw1 continued to run, as its RAIDs are in the MSR, so no O2 data was lost.

H1 ISC
jenne.driggers@LIGO.ORG - posted 13:10, Friday 13 January 2017 - last comment - 15:58, Friday 13 January 2017(33232)
ITM spots moved ~3mm in yaw since Nov

[Keita, Sheila, Daniel, Jenne]

Keita pointed out to me that the ITM a2l gains have been moving monotonically in yaw for the last month or two.  I ran the script to figure out what this means for spot positions, and it looks like the spots on the ITMs have moved about 3mm in yaw since the 20th of November. 

We may investigate trying to move the spots back toward their Nov/Dec values, in hopes that that will alleviate the noise that we've been seeing the last few days.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 15:58, Friday 13 January 2017 (33239)

Jenne, Sheila

We have tried the three things described in alog 33230:

We saw that the MICH to DARM coupling hasn't changed since the measurements made in November. The SRCL to DARM noise coupling is worse by about 5 dB, so our feedforward was slightly mistuned, but the projection of this noise is still about a factor of 5 below the DARM noise. The PRCL coupling has gotten a few dB better (but this noise is not important).

We moved PRM yaw by about 45 urad, which brings our spot position on the POP A QPD close to centered, and brings us much closer to centered on the ITMs.  We first thought that this helped, but it wasn't repeatable, so we have reverted.  We also tried opening the SRM loop and moving SRM, but this made no repeatable difference.

H1 ISC
jenne.driggers@LIGO.ORG - posted 12:53, Friday 13 January 2017 (33231)
ITMY IR camera exposure reduced

[Daniel, Jenne, Keita]

Looking at the movement of the beam spot on the ITMs, we decided that the ITMY camera spot size wasn't particularly useful, since the camera was very over-exposed. The exposure was at 10,000 and we have now changed it to 200. Now the camera looks reasonable at 30 W.

The calculated waist size for both ITMs is ~80 counts ±10, although ITMY had been reporting 180 counts with the overexposure.

LHO VE
logbook/robot/script0.cds.ligo-wa.caltech.edu@LIGO.ORG - posted 12:10, Friday 13 January 2017 - last comment - 13:52, Friday 13 January 2017(33228)
CP3, CP4 Autofill 2017_01_13
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 16 seconds. LLCV set back to 14.0% open.
Starting CP4 fill. TC A error. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 32 seconds. LLCV set back to 35.0% open.
Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 13:52, Friday 13 January 2017 (33234)

Lowered CP4 LLCV from 35% to 34% open because TCs are reading around -30degC.

CP3 filled fast but TCs are reading temps as expected.
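The autofill sequence logged above (enable LLCV, open to a fill percentage, wait for the fill to complete, restore the nominal opening) can be sketched as a simple control loop. This is a hypothetical illustration only: `set_llcv` and `read_tc` stand in for the real CDS interface, and `cold_degC`/`timeout_s` are placeholder thresholds, not site values:

```python
import time

def autofill(set_llcv, read_tc, fill_pct, nominal_pct,
             cold_degC=-60.0, timeout_s=120.0):
    """Open the liquid-level control valve (LLCV) to fill_pct, wait for the
    thermocouple to read cold (liquid reaching the sensing line), then
    restore the nominal valve position. Returns the fill time in seconds.

    set_llcv and read_tc are hypothetical control-system callables."""
    set_llcv(fill_pct)
    t0 = time.monotonic()
    try:
        while time.monotonic() - t0 < timeout_s:
            if read_tc() <= cold_degC:
                return time.monotonic() - t0
            time.sleep(0.01)
        raise TimeoutError("TC never read cold: possible TC error or clogged line")
    finally:
        set_llcv(nominal_pct)  # always restore the nominal opening
```

The `finally` clause mirrors the logged behavior of setting the LLCV back to its nominal percentage whether or not the fill completes (e.g. CP4's "TC A error" case).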

H1 AOS (DetChar, ISC, SUS)
jason.oberling@LIGO.ORG - posted 10:34, Friday 13 January 2017 - last comment - 14:06, Friday 13 January 2017(33224)
SR3 Oplev Laser Power Increased (WP 6433)

As noted here, the SR3 oplev began glitching at around the same time H1 experienced a large range drop. While the glitching is unlikely to be the cause of the range drop (no oplev damping on SR3), at Keita's request I have attempted to address the glitching. As soon as H1 lost lock, I increased the laser power; the voltage that monitors the laser diode current increased from 0.934 mV to 0.940 mV. This is the max for this laser, so if this does not alleviate the glitching I will try lowering the laser power. If that does not work, a laser swap will be necessary. I will leave WP 6433 open until it is known whether or not the laser power increase has alleviated the laser glitching.

Comments related to this report
jason.oberling@LIGO.ORG - 14:06, Friday 13 January 2017 (33237)

After about 3.5 hours it seems as though the power adjustment cleared the glitching. Here is a link to the SR3 summary pages; I've also attached the 2 relevant graphs to this alog. The first attachment is a time series of the oplev SUM, where the glitching is clearly visible, as are my adjustment and the subsequent cessation of the glitching. The second attachment is a spectrogram of the SUM signal, normalized relative to the median; here the start and end of the glitching can be clearly seen. I will continue to monitor this over the weekend, but initially it looks like the laser power adjustment has cleared the glitching for now.

Images attached to this comment
H1 DetChar
jeffrey.kissel@LIGO.ORG - posted 10:30, Friday 13 January 2017 - last comment - 18:39, Friday 13 January 2017(33223)
More Ideas and Requests From Site to DetChar about BNS Range Decay
J. Kissel, S. Dwyer, E. Goetz, K. Venkateswara

We're lamenting last night's severe decay in range, and the gross long-term trend toward zero since the restart after the break. Our most-discussed theory (which doesn't make it the best or the right theory) is that temperature is adversely affecting the global alignment system's operating point, which has steered the IFO into a place with much worse scattering coupling. As such, anthropogenic noise (3-10 and 10-30 Hz BLRMS of ground motion) is now adversely affecting the IFO much more than in the past.

We'd like DetChar's help to confirm:
- We've lost the fellow who ran BruCo for us. Does DetChar in general know how to run BruCo, or does Gabriele need to teach new people again? We suggest that no coherence with any particular channel would confirm the non-linear process of scattering.
- We see summary page spectrograms showing excess noise, but they're too long of a time scale to tell if the features are scattering arches.

Note that Bubba started plowing at the X-End this morning, so data from ~17:00 UTC will show elevated 3-10 and 10-30 Hz BLRMS seismic noise there. We understand that ground motion is much higher then, and (again, per the theory that the alignment operating point increases scattered-light coupling) the IFO is now more sensitive to it. The real puzzle is overnight, when there appears to be no elevation in seismic noise but the range is still terrible.

Any other ideas are welcome.

--------
Also because Bubba was going down to the X-end, we turned off sensor correction at 16:52 UTC (LHO aLOG 33221). This looks to coincide with the 1080 Hz glitching getting worse on the summary pages. This is consistent with the OMC length noise increase investigations we've done over the past few days (LHO aLOG 33104 and LHO aLOG 33037).
Images attached to this report
Comments related to this report
laura.nuttall@LIGO.ORG - 10:39, Friday 13 January 2017 (33225)
joshua.smith@LIGO.ORG - 10:47, Friday 13 January 2017 (33226)DetChar, ISC

Josh, TJ, Jess, Alex

This looks like a PI ringing up. In the spectrogram posted above there is a line at 4734.75 Hz that grows around the time of the noise. Here are some plots:

Fig 1: the line at 4734.75 Hz.

Fig 2: a BLRMS of this line vs time; it grows a lot.

Fig 3: 8.8 hours of the line growing, from the onset of the noise to a really ratty time.

We will work harder to tie this to the noise. We've started looking back at earlier pages and seeing this line rising around times that the noise rises, but we have work to do to tie them together.
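A band-limited RMS like the one in Fig 2 can be reproduced with a simple FFT-mask bandpass followed by stride-wise RMS. A sketch, assuming the data are already in hand (the band edges and stride here are placeholders, not the values used for the plots):

```python
import numpy as np

def blrms(x, fs, f_lo, f_hi, stride_s=1.0):
    """Band-limited RMS: bandpass x to [f_lo, f_hi] Hz with an FFT mask,
    then take the RMS over non-overlapping strides of stride_s seconds."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0  # zero out-of-band bins
    y = np.fft.irfft(X, n=len(x))
    n = int(stride_s * fs)
    nseg = len(y) // n
    return np.sqrt((y[:nseg * n].reshape(nseg, n) ** 2).mean(axis=1))
```

For tracking a narrow line such as 4734.75 Hz, one would choose a band a few Hz wide centered on the line and watch the stride-wise RMS grow over the lock.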

Images attached to this comment
sheila.dwyer@LIGO.ORG - 12:41, Friday 13 January 2017 (33230)

There are three things that we can try to improve or investigate the drop in range as Patrick relocks.

  • We will stop to damp the violin mode harmonic that Josh noted above (see alog 30281 for the solution to this problem in the past; note that notching this frequency in the DARM loop before damping it was important). Jim W and Patrick are currently working on this.
  • Based on the BruCo that Laura linked above, it looks like we have higher coherence with SRCL/MICH/PRCL than in the past, so we can try some quick injections to check that our feedforward is still well tuned.
  • We also have several reasons to think that we have slowly drifted away from the center of the optics on both ITMs in the horizontal direction (screenshot attached of the a2l gains, which indicate the spot position on the optic; PR3 and PR2 yaw witness sensors; POPB yaw, which is our out-of-loop QPD in the PRC; and the range over the last 45 days). We can adjust this using the PR3 spot move script to see if things get any better.
Images attached to this comment
alexander.urban@LIGO.ORG - 18:39, Friday 13 January 2017 (33235)DetChar

Josh Smith, Jess, TJ and I have investigated a bit and found that there is a line around 4700 Hz (Fig. 1) that rings up around the same time as a scratchy forest of lines in mid-range frequencies, right in the bucket. At Josh's suggestion I took a comparative look at the beginnings and ends of three lock stretches on Jan 6 where this pattern rings up (the circled regions in Fig. 1). The last three figures show 10-second averaged spectra of 100 seconds of data gathered at the start and end of these lock stretches (GPS times given in the plots). "Good" periods, where there is no 4.7 kHz line, are shown in blue; "bad" periods, where the line shows up, are plotted in red. You can clearly see that the ASD of H1:GDS-CALIB_STRAIN is systematically worse between 70-500 Hz when the high-frequency line is rung up, compared to when it wasn't. (We also note this behavior on Friday, 13 Jan, and on a couple of other days back in December, all coincident with a wandering downward trend in BNS range.)
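The 10-second averaged spectra described above correspond to Welch-averaged ASDs. A sketch of how such a good/bad comparison could be computed, assuming the 100 s of strain data have already been fetched (data retrieval and the plotting are omitted; scipy is assumed available):

```python
import numpy as np
from scipy.signal import welch

def averaged_asd(x, fs, seg_s=10.0):
    """Amplitude spectral density via Welch's method, averaging
    seg_s-second segments (10 s averages over 100 s of data would
    give ten averages, as in the comparison described above)."""
    f, psd = welch(x, fs=fs, nperseg=int(seg_s * fs))
    return f, np.sqrt(psd)
```

Comparing `averaged_asd(good_data, fs)` against `averaged_asd(bad_data, fs)` in the 70-500 Hz band would reproduce the blue-vs-red comparison in the attached figures.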

Images attached to this comment
H1 CAL (CAL)
travis.sadecki@LIGO.ORG - posted 15:57, Thursday 12 January 2017 - last comment - 15:28, Friday 13 January 2017(33187)
PCalY RX PD Clipping Alleviated

Taking the opportunity while both sites were down this afternoon, EvanG and I went to EY to investigate the possible cause of clipping at the PCalY RX PD, as noted in JeffK's aLog 33108. Looking at the beams in the RX enclosure, Evan and I agreed that the clipping appeared to be coming from the newly installed alignment irises, which were installed the last time we fixed a clipping issue at the same end station. We also agreed that the least invasive fix was to move the iris, rather than move the steering mirrors in the TX module as had been done the previous time. We still don't understand why these beams are moving (perhaps the mirrors on the in-vac periscope are moving, as has been hypothesized, or the steering mirrors in the TX module are mysteriously drifting), but both inner and outer beams were entering the input aperture of the RX PD integrating sphere after moving the iris, so we decided to stop there.

In attached ASD, the GREEN trace is the current measurement after fixing the clipping.

The 13 day trend of the RX PD also shows that it has returned to pre-clipping values.

Images attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 15:28, Friday 13 January 2017 (33238)
Evan G., Travis S., Jeff K.

Looking back at the calibrated Pcal TX and RX PD trends over the last 90 days, it appears that a small amount of clipping might have begun shortly after the iris apertures were installed near the end of October (see first attachment and LHO aLOG 30877). Also observed from this trend: the EPICS records were updated on Nov 7 (see LHO aLOG 31295); a slow trend continues until about mid-December, when variations become more apparent; shortly after Jan 4 the clipping becomes more severe, with much larger excursions; and finally, Travis and I fixed the clipping, returning the trend to the nominal value.

Jeff and I were concerned this might impact the reference model calibration measurements made early Jan 4 UTC. Looking carefully at the trend, Jeff's measurements happened at a very fortuitous time. The excursions from the nominal, good Pcal state are extremely small, so the impact on the measurements made for the O2b reference model is negligible.

To investigate the cause of the variations over the last 2 weeks, we trended the temperature of the VEA over the last 13 days to look for correlations between temperature and the observed fluctuations (see second attachment). The larger temperature change from Jan 3-Jan 4 correlates with the measured light at the RX PD, with changes at the 1% level. The large, rapid temperature change on Jan 6 impacts the light measured at the RX PD by 13%. Then, following this, the temperature holds steady while we observe variations at the few-percent level, indicating the alignment was brought into a bad state. These variations over the last two weeks mean that the time-dependent calibration factors (computed from the RX PD signal) are impacted at the few-percent level, larger than the requirement. We might need to fall back to using the TX PD as the reference for time-dependent calibration factors.

The estimated uncertainty of the photon calibrator (without clipping) is 0.75% (see P1500249). Also note from the first attachment that the ratio of TX and RX PD channels is ~0.2% (without clipping), well within the uncertainty of the Pcal.

We wondered if these variations might impact the range, but the variation and trend of the range do not appear to correlate by eye with the variations in the RX PD trend.
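The by-eye comparison could be made quantitative with a Pearson correlation coefficient between the two trends. A sketch, assuming the range and RX PD trends have already been resampled onto a common time base:

```python
import numpy as np

def trend_correlation(a, b):
    """Pearson correlation coefficient between two equally sampled
    trends; a quantitative stand-in for a by-eye comparison.
    Values near +/-1 indicate a strong linear relationship,
    values near 0 support the 'no correlation' conclusion."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])
```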

In summary, the reference measurements are not impacted, but the time-dependent calibration factors are impacted at the few-percent level until the clipping problem was fixed.
Images attached to this comment