Reports until 16:23, Thursday 12 January 2017
LHO General
patrick.thomas@LIGO.ORG - posted 16:23, Thursday 12 January 2017 (33189)
Ops Day Shift Summary
TITLE: 01/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 60.4067Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Began locked in observing. Snow removal continued, but not near end stations. Lock loss (see Sheila's alog). Travis and Evan took the opportunity to alleviate PCALY clipping. Ran initial alignment and a2l. Sheila ran test. Accepted SDF differences from test and set to observing.
LOG:
17:05 UTC Joe to mid and end stations to shovel snow around doors
17:10 UTC Filiberto and Marc to mid Y to look for parts for squeezer
17:23 UTC Chris to mid X to move snow
17:34 UTC Bubba and Ken back from mid X
18:09 UTC Chris done at mid X, moving to mid Y
18:59 UTC Chris done at mid Y, heading back to corner station
19:04 UTC Joe back from X arm
19:07 UTC Bubba to mid X to remove snow
19:54 UTC Joe to Y arm to shovel snow around doors
20:24 UTC Bubba back
20:43 UTC Joe back
22:02 UTC Out of observing for Sheila to run test. Lock loss.
22:08 UTC Evan and Travis to end Y to investigate PCAL clipping. Turned off sensor correction.
22:11 UTC Gerardo to end Y to check on cable for IP
22:17 UTC Starting initial alignment
23:01 UTC Gerardo back from end Y
23:03 UTC Initial alignment done
23:10 UTC Evan and Travis back
23:13 UTC Dave and Carlos to CER to check on switch
23:20 UTC Dave and Carlos back
23:30 UTC NLN
23:32 UTC Running a2l per Sheila's request
23:36 UTC Damped PI mode 28, a2l still running
23:58 UTC Damped PI mode 27
00:16 UTC Accepted SDF differences from Sheila's test (see attached) and set to observing.
Images attached to this report
H1 CAL (CAL)
travis.sadecki@LIGO.ORG - posted 15:57, Thursday 12 January 2017 - last comment - 15:28, Friday 13 January 2017(33187)
PCalY RX PD Clipping Alleviated

Taking the opportunity while both sites were down this afternoon, EvanG and I went to EY to investigate the possible cause of clipping at the PCalY RX PD, as noted in JeffK's aLog 33108.  Looking at the beams in the RX enclosure, Evan and I agreed that the clipping appeared to be coming from the alignment irises installed the last time we fixed a clipping issue at the same end station.  We also agreed that the least invasive fix was to move the iris, rather than move the steering mirrors in the TX module as had been done the previous time.  We still don't understand why these beams are moving (perhaps the mirrors on the in-vac periscope are moving, as has been hypothesized, or the steering mirrors in the TX module are mysteriously drifting), but after moving the iris both the inner and outer beams were entering the input aperture of the RX PD integrating sphere, so we decided to stop there.

In the attached ASD, the GREEN trace is the current measurement after fixing the clipping.

The 13 day trend of the RX PD also shows that it has returned to pre-clipping values.

Images attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 15:28, Friday 13 January 2017 (33238)
Evan G., Travis S., Jeff K.

Looking back at the calibrated Pcal TX and RX PD trends over the last 90 days, it appears that a small amount of clipping might have begun shortly after the iris apertures were installed near the end of October (see first attachment and LHO aLOG 30877). Also observed from this trend: the EPICS records were updated on Nov 7 (see LHO aLOG 31295); a slow trend continues until about mid-December, when variations become more apparent; shortly after Jan 4 the clipping becomes more severe, with much larger excursions; and finally, Travis and I fixed the clipping, returning the trend to its nominal value.

Jeff and I were concerned this might impact the reference model calibration measurements made early Jan 4 UTC. Looking carefully at the trend, Jeff's measurements happened at a very fortuitous time. The excursions from the nominal, good Pcal state are extremely small, so the impact on the measurements made for the O2b reference model is negligible.

To investigate the cause of the variations over the last two weeks, I trended the temperature of the VEA over the last 13 days to look for correlations between temperature and the observed fluctuations (see second attachment). The larger temperature change from Jan 3-Jan 4 correlates with the light measured at the RX PD, with changes at the 1% level. The large, rapid temperature change on Jan 6 impacts the light measured at the RX PD by 13%. Following this, the temperature holds steady while we observe variations at the few-percent level, indicating the alignment was brought into a bad state. These variations over the last two weeks mean that the time-dependent calibration factors (computed from the RX PD signal) are impacted at the few-percent level, larger than the requirement. We might need to fall back to using the TX PD as the reference for time-dependent calibration factors.
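The kind of temperature-vs-RX-PD correlation check described above can be sketched as follows. This is a minimal illustration with fabricated minute trends (not real VEA or RX PD data); the ~1%-per-couple-degrees coupling scale is taken from the text, and the step size and noise level are assumptions:

```python
import numpy as np

# Hypothetical minute trends: a 2 C temperature step that couples into the
# RX PD level at the ~1% scale, plus small readout noise (all values assumed).
rng = np.random.default_rng(1)
temp_C = 20.0 + np.concatenate([np.zeros(300), 2.0 * np.ones(300)])
rx_pd = 1.0 - 0.005 * (temp_C - 20.0) + 0.001 * rng.standard_normal(600)

# Pearson correlation coefficient; strongly negative when warmer VEA
# temperatures coincide with less light on the RX PD.
r = np.corrcoef(temp_C, rx_pd)[0, 1]
```

A strongly negative `r` for the real trends would support the temperature-coupling hypothesis; a lag scan (shifting one trend against the other) would be the natural next step.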

The estimated uncertainty of the photon calibrator (without clipping) is 0.75% (see P1500249). Also note from the first attachment that the ratio of TX and RX PD channels is ~0.2% (without clipping), well within the uncertainty of the Pcal.

We wondered if these variations might impact the range, but the variation and trend of the range do not appear to correlate by eye with the variations in the RX PD trend.

In summary, the reference measurements are not impacted, but the time-dependent calibration factors are impacted at the few-percent level until the clipping problem was fixed.
Images attached to this comment
H1 General
patrick.thomas@LIGO.ORG - posted 15:39, Thursday 12 January 2017 (33186)
Error during running of a2l
H1:ASC-ADS_YAW4_DEMOD_Q => OFF: FM1
H1:ASC-ADS_YAW5_DEMOD_Q => ON: FM2
H1:ASC-ADS_YAW5_DEMOD_Q => OFF: FM1
(The script apparently runs several instances in parallel; their identical tracebacks were printed interleaved, and are deinterleaved below.)

Traceback (most recent call last):
  File "/opt/rtcds/userapps/trunk/isc/common/scripts/decoup/run_1TM_a2l.py", line 54, in <module>
    decoupObject.runMain()
  File "/opt/rtcds/userapps/trunk/isc/common/scripts/decoup/deMod_deCoup.py", line 510, in runMain
    os.mkdir('rec_%s/'%self.siteName)
OSError: [Errno 17] File exists: 'rec_LHO/'

H1:SUS-ETMX_L2_DRIVEALIGN_P2L_GAIN => 1.4143
H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_GAIN => 1.2898
H1:SUS-ITMX_L2_DRIVEALIGN_P2L_GAIN => 1.6687
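For reference, the OSError above is the stock behavior of `os.mkdir` when the target directory already exists (here either left over from an earlier run or created first by a sibling instance). A hedged sketch of an idempotent replacement, assuming the script is free to reuse an existing `rec_LHO/` directory:

```python
import errno
import os

def ensure_dir(path):
    """Create path, ignoring the case where it already exists
    (including the race where another process creates it first)."""
    try:
        os.mkdir(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
```

On Python 3.2+ the same effect is available directly as `os.makedirs(path, exist_ok=True)`.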
H1 DetChar (DetChar)
beverly.berger@LIGO.ORG - posted 14:39, Thursday 12 January 2017 (33185)
DQ Shift: Monday 9 Jan 2017 00:00 UTC - Wednesday 11 Jan 2017 23:59 UTC

DQ Shifter: Beverly

LHO: Fellows: Young-Min, Evan

Full results may be found here.

Non-image files attached to this report
H1 General (Lockloss)
sheila.dwyer@LIGO.ORG - posted 14:18, Thursday 12 January 2017 (33184)
Lockloss at 2017-01-12_12:30:51

Caused by me trying to increase the OMC length gain (see alog 33104 and several comments).

LHO General
patrick.thomas@LIGO.ORG - posted 13:25, Thursday 12 January 2017 - last comment - 16:00, Thursday 12 January 2017(33180)
1 PM meeting notes
Bubba needs several more hours at end X to clear snow for the CP delivery next Tuesday.
PCAL clipping at end Y needs investigation.

Both of these will take place tomorrow (Thursday).
Comments related to this report
patrick.thomas@LIGO.ORG - 16:00, Thursday 12 January 2017 (33188)
PCAL clipping investigation done today after lockloss.
LHO General
patrick.thomas@LIGO.ORG - posted 12:41, Thursday 12 January 2017 (33178)
Ops Day Mid Shift Summary
Have remained locked since the beginning of the shift. No issues to report. Have not had to damp any PI modes. Snow removal has continued on both arms.
LHO General
patrick.thomas@LIGO.ORG - posted 08:09, Thursday 12 January 2017 (33174)
Ops Day Shift Start
TITLE: 01/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 58.2425Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s 
QUICK SUMMARY: Locked and observing with double coincidence. A "TCSY chiller flow is low" verbal alarm was active when I arrived. The flow had dropped to around 2.2; it just came back to around 3.1.
H1 General
jim.warner@LIGO.ORG - posted 07:50, Thursday 12 January 2017 (33173)
Shift Summary

TITLE: 01/12 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 54.185Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
LOG:
11:20 LLO was down, A2L had looked bad since I arrived, so ran A2L script

12:30 Lockloss, back to observing at 13:09

15:30 Bubba and Ken to MX to check on temps

LHO General
corey.gray@LIGO.ORG - posted 23:58, Wednesday 11 January 2017 (33166)
EVE Operator Summary

TITLE: 01/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 53.5793Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:

Other than an odd EQ (thanks to Krishna for pointing out its high-frequency content, 0.3-1 Hz, which we don't have up on the wall; at any rate, not your typical EQ), we were in OBSERVING all shift and the PI modes were straightforward.
LOG:

H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 19:59, Wednesday 11 January 2017 - last comment - 21:47, Wednesday 11 January 2017(33170)
Earthquake Report: 5.3 in Guatemala, Lockloss on L1 & Then H1
Comments related to this report
krishna.venkateswara@LIGO.ORG - 21:47, Wednesday 11 January 2017 (33171)

This was a low magnitude but relatively close event which implies higher frequency ground motion. You can see a spike in the 0.3-1 Hz blrms at the lockloss time. I suspect that is what caused the lockloss.
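The 0.3-1 Hz BLRMS Krishna refers to is a band-limited RMS of ground motion. A minimal sketch of the idea, using an assumed 16 Hz sample rate and fabricated data (the site tools compute a running BLRMS rather than a single number, so this is only the concept):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms(x, fs, f_lo, f_hi, order=4):
    """Band-limited RMS: bandpass the time series, then take its RMS."""
    sos = butter(order, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    return np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2))

# Fabricated ground-motion stand-ins (units arbitrary):
fs = 16.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(t.size)
arrival = quiet + 0.1 * np.sin(2 * np.pi * 0.5 * t)  # 0.5 Hz energy, in band

quiet_blrms = blrms(quiet, fs, 0.3, 1.0)
eq_blrms = blrms(arrival, fs, 0.3, 1.0)  # jumps when in-band motion arrives
```

A close, low-magnitude event puts its energy at these higher frequencies, which is why the spike shows up in the 0.3-1 Hz band rather than in the usual teleseismic bands.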

LHO General
corey.gray@LIGO.ORG - posted 18:07, Wednesday 11 January 2017 (33165)
Transition To EVE

TITLE: 01/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 53.5793Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s

No snow!  Winds under 10mph, a balmy 19degF, & seismic isn't too bad (other than some plowing at the beginning of this shift).  Roads were plowed & clear on way in.
QUICK SUMMARY:
Patrick took H1 to OBSERVING as we made the shift change.
There was some snow plowing in the Corner Station area at the beginning of this shift.

H1 TCS
filiberto.clara@LIGO.ORG - posted 11:59, Tuesday 10 January 2017 - last comment - 13:42, Thursday 12 January 2017(33129)
TCSY CO2 Laser Interlock Chassis Replaced

WP 6428

This morning the CO2 Laser Interlock Chassis D1200745 was replaced. This work is part of the ongoing investigation of the TCS flow sensor alarms/glitches. See alog 32776. Verified trip point for the temperature shutoff of the laser on new chassis. Steps taken to replace chassis:

1. Turn off laser at key
2. Turn off chiller in mechanical room
3. Turn off chassis and unplug all connections
4. Install new chassis, rocker switch set to CW (OPS) position
5. Redo connections to chassis & power on
6. Turn on chiller
7. Check there are no red lights on chassis front panel
8. Key on laser and press gate button
9. Check laser power actually powers on (MEDM screen).

Old Chassis S1302125
New Chassis S1302122

Fil, Richard, Nutsinee, Alastair

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 13:07, Thursday 12 January 2017 (33179)TCS

This swap unfortunately didn't seem to fix the problem. TCSY flowrate continues to glitch.

Images attached to this comment
alastair.heptonstall@LIGO.ORG - 13:42, Thursday 12 January 2017 (33181)
Okay, no problem. That hopefully means it's unlikely this will cause the lasers to shut off (i.e. it's more likely a data acquisition fault). We should look at Beckhoff next, I think.
H1 ISC (DetChar, TCS)
sheila.dwyer@LIGO.ORG - posted 12:29, Monday 09 January 2017 - last comment - 07:21, Friday 13 January 2017(33104)
change in OMC length gain helps with 1083 Hz glitches

This morning we sat in nominal low noise without going to observing from 19:21 to 19:51 UTC (Jan 9th) in a configuration that should be much better for the 1084Hz glitches. (WP6420)  

On Friday we noticed that the 1084Hz feature is due to OMC length fluctuations, and that the glitch problem started on Oct 11th when the dither line amplitude was decreased (alog 30380).  This morning I noticed that the digital gain change described in alog 30380, intended to compensate for the reduced dither amplitude, didn't make it into any guardian, so we have had a UGF that was a factor of 8 lower than what I used when projecting OMC length noise to DARM (alog 30510). The first attachment shows open loop gain measurements from the 3 configurations: before Oct 11th (high dither amplitude), after October 11th (lower dither amplitude, uncompensated) and the test configuration (lower dither amplitude, compensated).

We ran with the servo gain set to 24 (to give us the nominal 6Hz UGF) and the lowered dither line amplitude from 19:21 UTC to 19:51 UTC Jan 9th.  You can see the spectrum during this stretch in the second attached screenshot: in the test configuration the peak around 1083Hz is gone, with just the pcal line visible, and the OMC length dither at 4100Hz is reduced by more than an order of magnitude. You can also compare the glitches from this lock stretch with those from yesterday to see that the glitches at 1084 Hz seem to be gone. This is probably the configuration we would like to run with for now, but we may try one more test with increased dither line amplitude.
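The bookkeeping behind the factor-of-8 statement can be sketched with a toy first-order model: in a dither lock, the error-signal slope (and hence the open-loop gain and the UGF, for a 1/f-like loop) scales as dither amplitude times servo gain. The numbers below are illustrative assumptions, not measured values: a pre-change gain of 3 is inferred from 24/8, and the plant constant is simply chosen so the pre-Oct-11 settings give the nominal 6 Hz UGF.

```python
def ugf_hz(dither_amp, servo_gain, k=2.0):
    # Toy model: UGF proportional to dither amplitude times servo gain.
    # k (assumed) sets the overall scale so the old settings give 6 Hz.
    return k * dither_amp * servo_gain

before = ugf_hz(dither_amp=1.0, servo_gain=3.0)           # high dither: 6 Hz
uncompensated = ugf_hz(dither_amp=1.0 / 8, servo_gain=3.0)  # amplitude cut, gain unchanged
compensated = ugf_hz(dither_amp=1.0 / 8, servo_gain=24.0)   # gain raised to 24
```

In this picture the uncompensated UGF sits a factor of 8 below nominal, and raising the servo gain to 24 restores the 6 Hz UGF, which is consistent with the open loop gain measurements described above.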

Other notes because we don't have an operator  today due to weather:

This morning all 4 test mass ISIs were tripped probably from the Earthquake last night that brought the EQ  BLRMS to 10 um/second around midnight UTC.  ITMY tripped again while it was re-isolating, no problem on the second try. 

Richard added 400mL to the TCSY chiller around 10:15 or 10:30 local time, since we were getting low flow alarms. The flow alarms came back a few minutes before 11am local time.

I went through initial alignment without problems and got to the DC_readout transition. Then I measured the UGF of the OMC length loop in preparation for increasing the dither line height. From that measurement and trends it became clear that when the OMC dither amplitude was reduced, the compensation of the OMC digital gain described in alog 30380 didn't make it into the guardian.  This means we have been operating with a UGF in the OMC length loop that was a factor of 8 too low since mid-October.

We arrived in low noise at 19:21 UTC with the OMC ugf increased to 6Hz.  After about a half hour PI modes 27 and 28 rang up, and I wasn't fast enough to get them under control so we lost lock.  

Images attached to this report
Comments related to this report
andrew.lundgren@LIGO.ORG - 16:34, Monday 09 January 2017 (33114)DetChar, ISC
Here's a graphical version of what Sheila wrote, showing the time on Oct 11 when the 1083 Hz glitches started. The dither amplitude was reduced at 3:20 UTC, but the servo gain was increased to compensate. There are no 1083 Hz glitches at this time. Severe RF45 noise starts an hour later and lasts until the end of the lock. The 1083 Hz glitches are evident from the beginning of the next lock, and persist in every lock until the recent fix.

The dither amplitude stayed low in the second lock, but the servo gain was reset back to its low value. Apparently, both need to be low to produce the glitches.
Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 11:17, Thursday 12 January 2017 (33175)

Keita tells me that people are concerned about making this change because of the increased noise below 25 Hz in the screenshot attached to the original post.  We did not run the A2L decoupling during this lock stretch, and it was not well tuned.  The shape of the HARD loop cutoff at 25Hz is visible in the spectrum, which is one way of identifying bad ASC noise.  The high coherence between CHARD P and DARM at this time is another way of seeing that this is angular noise (attachment). 

So I think that this is unrelated to the OMC gain change and not really a problem. 

Images attached to this comment
joshua.smith@LIGO.ORG - 11:40, Thursday 12 January 2017 (33176)DetChar, ISC

1080Hz removal OMC gain/line conditions, does it make more low frequency noise?
Josh, Andy, TJ, Beverly

Conclusion: For two on/off times each for the two OMC gain tests (total 8 times) it looks like the high gain / low line configuration that takes away 1080 Hz (and also takes away some bumps around 6280Hz) coincides with a bit more noise below 25Hz.

Request: We hope this connection with noise below 25Hz is chance (it might have just been drift and we chose times unluckily) and we would like to debunk/confirm it. We could do that with a couple of cycles of on/off (e.g. 5 minutes each, with the current configuration vs the high gain / low dither configuration). 

See the attached PDF. The pages are: 

  • 2,3: January 9th test configuration from Sheila's page: "We ran with the servo gain set to 24 (to give us the nominal 6Hz ugf) and the lowered dither line amplitude from 19:21 UTC to 19:51 UTC Jan 9th." Red/Orange are the test time with low line / high gain, and no 1080Hz
  • 4,5: Similar experiment from October. Blue/green are the test time with low line / high gain, and no 1080Hz.

Also: There is no coherence above 10Hz between STRAIN and OMC LSC SERVO/I for any of these test times. So coupling must be non-linear. 
Also: When the 1080Hz bumps disappear we also see a bump around 6280Hz disappear (attached image, sorry no x-axis label but its 6280Hz)
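The linear-coupling check above (no coherence between STRAIN and OMC LSC SERVO above 10Hz, implying the coupling must be non-linear) can be sketched with scipy's magnitude-squared coherence. Here a fabricated pair of channels stands in for the real ones: a linear coupling shows up as high coherence, while a quadratic (non-linear) coupling of the same strength does not.

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0  # assumed sample rate for the toy channels
rng = np.random.default_rng(3)
x = rng.standard_normal(int(120 * fs))   # stand-in for OMC LSC SERVO
noise = rng.standard_normal(x.size)

lin = 0.5 * x + 0.1 * noise              # linear coupling into "STRAIN"
nonlin = 0.5 * x**2 + 0.1 * noise        # quadratic coupling: invisible to coherence

f, c_lin = coherence(x, lin, fs=fs, nperseg=1024)
_, c_nonlin = coherence(x, nonlin, fs=fs, nperseg=1024)
band = f > 10
```

This is why low measured coherence does not rule out a real coupling; it only rules out a linear one, consistent with the conclusion drawn above.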

Images attached to this comment
Non-image files attached to this comment
andrew.lundgren@LIGO.ORG - 12:10, Thursday 12 January 2017 (33177)DetChar, ISC
Our post crossed with Sheila's. If possible, we'd still like to see a quick on/off test with the A2L tuned. Could we have five minutes with the gain high and then ramp it down? Maybe with one repeat. Since this is a non-linear effect, we'd like to make sure there's no funny coupling with the CHARD noise. We're not too worried by excess noise below 25 Hz now, but it might be important when we're able to push lower.
sheila.dwyer@LIGO.ORG - 16:34, Thursday 12 January 2017 (33183)

While LLO was down I attempted to do a test by increasing the OMC length gain while in lock, which unlocked the IFO, so on/off tests aren't possible.  Edit: I broke the lock by changing the gain after the integrator (which had been OK when not on DC readout); we can change the gain upstream instead without unlocking.  

For now I put the new gain into the guardian so the next lock will be with the increased gain, and hopefully see that the low frequency noise is fine. 

Now that we have relocked, Patrick ran a2l, and Jeff, Evan, Krishna and I did an on/off test by ramping H1:OMC-LSC_I_GAIN:

  • high gain for about 15 minutes before 23:55 UTC Jan 12th
  • low gain from 23:56:20 UTC Jan 12th to 0:03:21 UTC Jan 13th
  • high gain from 0:04 UTC to 0:13:22 UTC
  • back to low gain

The attached screen shot using the same color scheme as in the presentation above shows that there is not a difference at low frequency between high gain and low gain.  

We are back in observing in the low gain configuration, but the gain is set in an unusual way (and accepted in SDF so that we can go to observing). Keita would like us to hear confirmation from DetChar before making this permanent. 
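One way to confirm the low-frequency behavior from the segments listed above is to compare ASDs between the high-gain and low-gain stretches and look at their ratio below 25 Hz. A sketch with fabricated white-noise segments standing in for DARM (channel access, calibration, and the actual segment times are omitted); a median band ratio near 1 indicates no excess noise:

```python
import numpy as np
from scipy.signal import welch

fs = 256.0  # assumed sample rate for the toy data
rng = np.random.default_rng(2)
seg_high = rng.standard_normal(int(300 * fs))  # stand-in: high-gain stretch
seg_low = rng.standard_normal(int(300 * fs))   # stand-in: low-gain stretch

def asd(x, fs):
    """Amplitude spectral density via Welch-averaged periodograms."""
    f, pxx = welch(x, fs=fs, nperseg=int(32 * fs))
    return f, np.sqrt(pxx)

f, a_high = asd(seg_high, fs)
_, a_low = asd(seg_low, fs)
band = (f > 5) & (f < 25)
ratio = np.median(a_high[band] / a_low[band])  # ~1 means no excess noise
```

Using a median over the band makes the comparison robust to individual lines; with real data one would also want error bars from the number of Welch averages.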

Images attached to this comment
joshua.smith@LIGO.ORG - 07:21, Friday 13 January 2017 (33214)DetChar, ISC

Thank you Sheila, this looks really good. No change at low frequency. 1080Hz gone. The 6280Hz just varies on its own timescale. From our end we're happy with the configuration change since it only does good. Sorry for the red herring about low frequencies. 

Images attached to this comment
H1 General (SEI)
cheryl.vorvick@LIGO.ORG - posted 14:32, Sunday 08 January 2017 - last comment - 13:56, Thursday 12 January 2017(33089)
H1 ISI CPS Noise Spectra Check - Weekly

Ran the diaggui template for the HAM and BSC ISI CPSs (CPS'...).

No overall issue with one sensor being elevated.

I found one thing that may be a known and understood feature, but I didn't find it in the alog with my searching (maybe the wrong search words?).

I notice that there is noise in HAM2, HAM3, HAM4, and HAM6 around 41.3Hz, and the elevated noise level goes from 41Hz to about 41.6Hz.

This noise is not present in the 1 Dec 2016 plots, there seems to be hint of it in the 14 Dec 2016 plots, and it's clearly visible in the 1 Jan 2017 plots.

Attached:

Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 14:36, Sunday 08 January 2017 (33091)
richard.mittleman@LIGO.ORG - 07:25, Monday 09 January 2017 (33100)

Good catch, Cheryl. When you see fun CPS signals, can you also check the inertial sensor signals (HEPI and ISI), to make sure that it is motion and, if so, of what?

thanks

hugh.radkins@LIGO.ORG - 13:56, Thursday 12 January 2017 (33182)

Looks too that ITMX has some broadband elevated noise on Stage1 V1 and Stage2 H1.  We may need to cycle board power/seating if it persists.
