Displaying reports 48841-48860 of 83204.
Reports until 11:38, Thursday 06 April 2017
LHO General
corey.gray@LIGO.ORG - posted 11:38, Thursday 06 April 2017 (35360)
LVEA Card Reader Was OFF

This morning Dick noticed that the Garb Room/LVEA card reader was OFF (not sure how long it had been OFF).  We like to keep these ON, so I turned it back on.

Noticed the VEA Sweep checklists do NOT mention checking these readers (it's an action for me to update these documents).

H1 General (AOS, CAL, SUS, TCS)
corey.gray@LIGO.ORG - posted 09:28, Thursday 06 April 2017 (35358)
H1 In COMMISSIONING Mode From 16-20UTC (9am - 1pm PST)

Activity On The Docket:

LHO General
corey.gray@LIGO.ORG - posted 08:45, Thursday 06 April 2017 (35357)
Ops Day Shift Transition

TITLE: 04/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 5mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.39 μm/s
QUICK SUMMARY:

H1 locked for over 31hrs.  Still plan to have Commissioning Break from 9am - 1pm (due to LLO Commissioning break) with Calibration sweep & Blip glitch activities on the docket.

H1 General
cheryl.vorvick@LIGO.ORG - posted 08:17, Thursday 06 April 2017 (35356)
Ops Owl Summary:

TITLE: 04/06 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
LOG: 12:32 UTC - GRB, with a clean hour of stand-down time after

H1 General
cheryl.vorvick@LIGO.ORG - posted 01:11, Thursday 06 April 2017 (35354)
Ops Owl Transition:

TITLE: 04/06 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 70Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 15mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.43 μm/s
QUICK SUMMARY: locked 24 hours as of 08:10UTC

H1 General
jim.warner@LIGO.ORG - posted 00:03, Thursday 06 April 2017 (35353)
Shift Summary

TITLE: 04/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 72Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY:
LOG: Not much happening. The environment is quiet, and the lock is almost 23 hrs long. Our range for the last ~4 hours has been pretty good; someone should figure out what we've been doing right.

LHO General
corey.gray@LIGO.ORG - posted 15:59, Wednesday 05 April 2017 (35337)
DAY Operator Summary

TITLE: 04/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 71.5Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:

H1 locked over 14 hrs with a fairly quiet shift.  Took H1 out of OBSERVING to address a rung-up violin mode.
LOG:

H1 SUS
corey.gray@LIGO.ORG - posted 15:10, Wednesday 05 April 2017 (35351)
Damped Out 4.735 kHz Violin Mode

While in double coincidence, Sheila happened to catch H1's 4734.75 Hz violin mode ringing up.  To nip this in the bud, H1 was taken out of OBSERVING to damp the mode out.  With StripTool & DTT in hand, I made adjustments to the ETMy MODE10 filter bank.  What was done to damp it out:

  1. FM9 (+30deg5k) was turned ON, but this immediately rang the mode up further.
  2. Keeping FM9 ON, I also turned ON FM6 (-60deg5k), and this damped out the 4735 Hz mode within minutes.

Out of OBSERVING from 20:17 - 20:24.

ACCEPTED the new MODE10 diffs, but should still commit the new snapshot to the SVN.
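The sign of the damping phase matters: a filter setting that anti-damps can damp once an additional rotation is stacked on. A minimal sketch, assuming ideal phase-rotation filters (the real +30deg5k / -60deg5k foton filters also shape gain versus frequency, so this is illustrative only):

```python
import cmath
import math

MODE_FREQ_HZ = 4734.75  # the violin mode being damped

def phase_rotation(phase_deg):
    """Ideal phase-rotation filter response at the mode frequency
    (hypothetical stand-in for the foton +30deg5k / -60deg5k filters)."""
    return cmath.exp(1j * math.radians(phase_deg))

# FM9 alone applied +30 deg -- this rang the mode up further.
fm9 = phase_rotation(+30.0)
# Stacking FM6 (-60 deg) on top of FM9 gives a net -30 deg, which damped it.
net = fm9 * phase_rotation(-60.0)

net_phase_deg = math.degrees(cmath.phase(net))
print(f"net damping phase: {net_phase_deg:.1f} deg")  # -> -30.0 deg
```

The point of the sketch: the two filters multiply in the signal chain, so their phases add, flipping an anti-damping loop into a damping one.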

Images attached to this report
LHO General
vernon.sandberg@LIGO.ORG - posted 14:16, Wednesday 05 April 2017 (35350)
Work Permit Summary for 2017 April 04
Work Permit Date Description alog/status
       
6560.html 04/04/17 08:58 AM Attempt to align HWSY periscopes (if misaligned). The most recent HWSY data suggested that there might be some misalignment on the table + bad SLED. Will be taking the Hartmann plate off to live-stream the camera and check the beam profile. Need: green light for alignment, SR3, and ITMY aligned  
6559.html 04/04/17 08:13 AM Look at PEM AA chassis. Reported bad channels by PEM group. 35316
6558.html 04/03/17 01:58 PM Grab regular, bi-weekly CAL measurement at some non-maintenance day this week while LLO is down. Measurement takes ~40 minutes at most.  
6557.html 04/03/17 01:56 PM Add a state "NLN_CAL_MEAS" to ISC_LOCK guardian which preps the detector (already in nominal low noise, NLN) for biweekly regular calibration measurement suites (so far, just involves turning off calibration lines). Prep code on Monday (Today 3/4), Load and Test tomorrow (Tuesday, 4/4). 35295
6556.html 04/03/17 01:01 PM Temporary swap the HWS camera fiber cable so we can collect data from ITMY. To see some gradient change we need either one power-up or one lockless during the time of data collecting. Will revert the connection back to ITMX once this is done. 35294
6555.html 04/03/17 10:37 AM Swap the optical lever laser with one that has been stabilized in the lab. This is to cure the ongoing glitching issues this oplev has been experiencing. No viewports will be exposed during this work. The laser will need ~2 hours to come to thermal equilibrium once installation is complete. 35309
6554.html 04/03/17 10:36 AM Move several elements of the NGN array in Bier Garten in prep for upcoming ITMX vent.  
6553.html 04/03/17 10:15 AM Soft close GV 5 & 7 to protect beam tubes during pcal camera housing installation, requiring access to viewport. 35315
6552.html 04/03/17 10:03 AM Recenter BRSY suspended proof mass to return probe mirror autocollimator image on CCD. Requires turning off ISI ETMY sensor correction. 35313
6551.html 04/03/17 09:57 AM Move Compact BRS clear of ITMX chamber in prep for Short Vent. Will leave cBRS in new location for the duration of O2.  
6550.html 04/01/17 01:13 PM Install newly modified ITM camera housing, install telescope and camera, take photos locally. Will require VAC group supervision/assistance since viewport protection will be removed during install. Taking photos locally will involve plugging the camera into a laptop and adjusting settings and aperture masks while in the LVEA. Total duration of install/photography may require more than the typical 8am-noon maintenance period. 35326, 35327
6549.html 03/30/17 04:23 PM Open IOT2R and move beam blocks to allow the beam to travel to the LVEA wall and then to the positions marked. The LVEA will need to be laser hazard. The IMC will need to be locked. IM1-4 will need to be aligned. There will be limited access past IOT2R while the beam is exposed. I'm on owl shift, so would like to do this at the beginning of maintenance. Deferred to next week.
6548.html 03/30/17 03:45 PM Remove Mid-Y Compressed Air alarms from the cell phone alarm system.  
6547.html 03/29/17 10:08 AM ECR1700111 to add 7 slow channels to DAQ broadcaster 35321
LHO VE
logbook/robot/script0.cds.ligo-wa.caltech.edu@LIGO.ORG - posted 12:10, Wednesday 05 April 2017 (35348)
CP3, CP4 Autofill 2017_04_05
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 86 seconds. TC B did not register fill. LLCV set back to 18.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 681 seconds. TC A did not register fill. LLCV set back to 35.0% open.
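The autofill sequence logged above can be sketched as a simple loop. Here `set_llcv` and `tc_registered` are hypothetical stand-ins for the real EPICS valve and thermocouple interfaces (note that in the fills above the TCs did not register, so the real script evidently declares completion by another criterion; this sketch uses TC-or-timeout):

```python
import time

def autofill_cp(name, fill_pct, nominal_pct, set_llcv, tc_registered,
                timeout_s=1200, log=print):
    """Sketch of the CP autofill sequence recorded in the log above.
    set_llcv / tc_registered are hypothetical EPICS-access callbacks."""
    log(f"Starting {name} fill. LLCV enabled. LLCV set to manual control.")
    set_llcv(fill_pct)
    log(f"LLCV set to {fill_pct}% open.")
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout_s and not tc_registered():
        time.sleep(1)  # poll the thermocouple for liquid overflow
    log(f"Fill completed in {int(time.monotonic() - t0)} seconds.")
    set_llcv(nominal_pct)  # return the valve to its standing position
    log(f"LLCV set back to {nominal_pct}% open.")

# Demo with stand-in callbacks:
autofill_cp("CP3", 50, 18.0,
            set_llcv=lambda pct: None,
            tc_registered=lambda: True)
```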
Images attached to this report
H1 CAL (CAL)
sudarshan.karki@LIGO.ORG - posted 12:09, Wednesday 05 April 2017 (35347)
Pcal beam spots for X-End and Y-End

Using the pictures recently taken using the Pcal camera system I have determined the Pcal beam spot position. The numbers quoted below are the Pcal beam offset (in mm) from their nominal position of [0,111.6] for upper beam and [0, -111.6] for lower beam.

LHOX: Upper Beam [ 1.4+/-0.2,  1.3+/-0.2]
      Lower Beam [-0.4+/-0.3,  1.3+/-0.3]

LHOY: Upper Beam [-0.6+/-0.2,  1.1+/-0.1]
      Lower Beam [-3.1+/-0.1,  1.3+/-0.3]
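The quoted offsets are simply the measured beam centroid minus the nominal position. A sketch with a hypothetical helper (image sign conventions assumed, rounding to the 0.1 mm level quoted above):

```python
# Nominal Pcal beam positions on the optic, in mm (from the report above).
NOMINAL_MM = {"upper": (0.0, 111.6), "lower": (0.0, -111.6)}

def beam_offset_mm(beam, centroid_mm):
    """Offset of a measured beam centroid from its nominal position, in mm.
    Hypothetical helper; sign conventions of the camera image are assumed."""
    nom_x, nom_y = NOMINAL_MM[beam]
    x, y = centroid_mm
    return (round(x - nom_x, 3), round(y - nom_y, 3))

# e.g. an upper-beam centroid measured at (1.4, 112.9) mm:
print(beam_offset_mm("upper", (1.4, 112.9)))  # -> (1.4, 1.3)
```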

The pictures for X-end were taken by Travis on 2017/03/28 and the pictures along with the analyzed figures can be found at:

https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/Results/pcalImageAnalysis/LHOX/D20170328/

The pictures for Y-end were taken by me on 2017/03/21 and the pictures along with the analyzed figures can be found at:

https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/Results/pcalImageAnalysis/LHOY/D20170321/

The last beam-spot analysis for the X-end can be found at LHO alog 29873, and for the Y-end at LHO alog 30105.

H1 SUS (DetChar, SUS)
krishna.venkateswara@LIGO.ORG - posted 10:16, Wednesday 05 April 2017 - last comment - 11:20, Wednesday 05 April 2017(35341)
Darm range drop at ~5:30 UTC due to increased quad motion - another clue incriminating ITMY oplev

Krishna, Jeff

There are several recent cases of sudden range drops coincident with increased quad motion, oplev noise, CAL line upconversion, OMC dither heights, angular motion, etc. See Jeff's log for some details, and also 34999. It happened again last night, 2017-04-05, from ~05:30 UTC to 06:40, before the earthquake knocked us out. DetChar has suspected the ITMY oplev.

As seen in the previous alogs, the causal chain is difficult to follow, but I suspect the OMC is unlikely to be the problem, since it cannot influence the QUAD motion, which clearly goes up each time. All the QUADs show similar PITCH motion, so it is not clear which was the cause. So far, the smoking gun seems to be the ITMY oplev YAW 0.3-1 Hz BLRMS: crossing a threshold of ~0.04 (microradians?) seems to trigger a range drop. Remember that oplevs are used only to damp PITCH motion of the quads, but they are independent witnesses in YAW.

I have attached four cases where the ITMY oplev YAW sees an increase in the 0.3-1 Hz BLRMS correlated with a range drop: April 5, April 2, March 29, and March 22. A quick look at the summary pages shows that this increase is associated only with ITMY oplev YAW. There are more cases, but I think this is sufficient.

It is interesting that even though other oplevs (such as ETMY) glitch more, only the ITMY oplev shows a broad low-frequency increase in apparent angular motion.

Edit: For clarification, I think the ITMY oplev sees an apparent increase in PITCH and YAW during these times. Since we use it for damping in PITCH, all QUADs start pitching more, affecting DARM. The ITMY oplev may need to be tuned or fixed. The other, less likely, possibility is that ITMY is occasionally rubbing.
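The 0.3-1 Hz band-limited RMS witness described above can be sketched as an in-band FFT RMS over a stretch of the oplev YAW time series (the production monitor is likely a filtered running RMS, so this is illustrative only):

```python
import numpy as np

def blrms(x, fs, f_lo=0.3, f_hi=1.0):
    """Band-limited RMS of time series x (sample rate fs, Hz) in [f_lo, f_hi],
    computed from the one-sided FFT -- a sketch of the witness channel above."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Parseval: in-band power, normalized so a sine of amplitude a gives a/sqrt(2)
    return np.sqrt(np.sum(np.abs(spec[band]) ** 2) * 2.0 / x.size ** 2)

# A 0.44 Hz line (as in the oplev spectra) of amplitude 0.06 has RMS ~0.042,
# above the ~0.04 threshold noted above:
fs = 16.0
t = np.arange(0, 512, 1.0 / fs)
line = 0.06 * np.sin(2 * np.pi * 0.44 * t)
print(blrms(line, fs) > 0.04)  # -> True
```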

Images attached to this report
Comments related to this report
krishna.venkateswara@LIGO.ORG - 11:20, Wednesday 05 April 2017 (35345)

It looks like there is a line in the oplev laser at ~0.44 Hz, which gets larger during the periods corresponding to the range drops. In the attached pdf, the dashed traces are from April 5 starting at 05:30 UTC, while the solid traces are from 03:30 UTC.

Edit: The reason we think this problem is in the Oplev and not the QUAD chain is that the 0.3-1 Hz blrms for the ITMY YAW shows a clear difference in behavior before and after the laser swap on March 7th.

Non-image files attached to this comment
H1 DetChar (DetChar)
corey.gray@LIGO.ORG - posted 09:15, Wednesday 05 April 2017 - last comment - 06:18, Thursday 06 April 2017(35338)
GWI.stat Not Updating Detector States

This morning I noticed that the GWI.stat tool is frozen with regard to the detector states (though the time stamp at the top is updating).  It currently lists H1 & L1 in NOT OK states.

Images attached to this report
Comments related to this report
peter.shawhan@LIGO.ORG - 09:27, Wednesday 05 April 2017 (35339)
I am looking into this.  It looks like it's a problem with the gstlal_gw_stat process, which extracts the status information from frames -- some side effect of Tuesday maintenance, since it has been this way since yesterday at 10:10 PDT.  I will ask Chad Hanna for help.
peter.shawhan@LIGO.ORG - 20:12, Wednesday 05 April 2017 (35352)
Chad modified the way the gstlal_gw_stat process checks checksums, and now GWIstat is working again.
peter.shawhan@LIGO.ORG - 06:18, Thursday 06 April 2017 (35355)
This morning there is a different problem -- condor is not running properly on ldas-grid.ligo.caltech.edu .  I've emailed the Caltech LDAS admins.
H1 TCS (GRD)
jim.warner@LIGO.ORG - posted 19:23, Tuesday 04 April 2017 - last comment - 12:05, Wednesday 05 April 2017(35329)
TCS guardian just knocked us out of observe

The interferometer had been locked and we had been observing for about 40 minutes when the TCS_ITMY_CO2 guardian knocked us out of observing. It created 3 diffs in TCSCS, and the ITMY_CO2 guardian complained it was not nominal. We couldn't get back to observing until the guardian had finished FIND_LOCK_POINT and returned to LASER_UP. Verbal has also complained several times that TCSY chill-air is low. I'm assuming for now that this is all related to the TCS work earlier.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 10:11, Wednesday 05 April 2017 (35342)OpsInfo, TCS

Dave came in to talk about this.  This sounds similar to this post from last month:  alog#34861.  This was followed by email discussion between Keita & Alastair.

david.barker@LIGO.ORG - 10:38, Wednesday 05 April 2017 (35343)

Guardian had reported that the ITMY CO2 laser became unlocked at 18:09:32 PDT last night:

2017-04-05T01:09:32.92258 TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point
2017-04-05T01:09:32.98424 TCS_ITMY_CO2 JUMP target: FIND_LOCK_POINT


So the SDF differences that were raised, and the fact that they terminated observation mode, appear to be correct.

david.barker@LIGO.ORG - 12:05, Wednesday 05 April 2017 (35346)

Jeff K cleared up the confusion of what should and shouldn't be monitored in this case.

The filter modules in question should not be monitored, they are being changed by guardian during observation. The TSTEP channel records the GPS time a step is made, and should never be monitored.

Taking TSTEP as an example, I checked the observe.snap files for TCS CO2 ITMX and ITMY through the SVN repository and found that in October 2016 neither was monitored. By 3 March this year both were monitored. On 22 March, ITMX was not monitored but ITMY was. We suspect that too many changes are being applied to the snap files by accident; for example, perhaps monitor-all was applied.
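A check like the one described, diffing monitor flags between observe.snap revisions, could be sketched as below. The 'CHANNEL VALUE FLAG' line format and the channel names are simplified stand-ins, not the real SDF snap syntax:

```python
def diff_monitor_flags(snap_a, snap_b):
    """Report channels whose monitor flag differs between two observe.snap
    revisions. The 'CHANNEL VALUE FLAG' line format is a simplified stand-in;
    the real SDF snap syntax differs."""
    def flags(text):
        out = {}
        for line in text.splitlines():
            parts = line.split()
            if len(parts) == 3:
                channel, _value, flag = parts
                out[channel] = (flag == "1")
        return out
    a, b = flags(snap_a), flags(snap_b)
    return {ch: (a.get(ch), b.get(ch))
            for ch in sorted(set(a) | set(b)) if a.get(ch) != b.get(ch)}

# Hypothetical channel names, for illustration:
oct_2016 = "H1:TCS-ITMY_CO2_TSTEP 1160000000 0\nH1:TCS-ITMY_CO2_GAIN 1.0 1"
mar_2017 = "H1:TCS-ITMY_CO2_TSTEP 1160000000 1\nH1:TCS-ITMY_CO2_GAIN 1.0 1"
print(diff_monitor_flags(oct_2016, mar_2017))
```

Run against successive `svn cat -r REV observe.snap` outputs, a diff like this would show exactly when a flag such as TSTEP's flipped to monitored.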

H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 17:29, Tuesday 04 April 2017 - last comment - 10:56, Wednesday 05 April 2017(35327)
ITMX images with newly-installed Pcal-style ITM camera

TravisS, KarlT, PeterK, RickS

We captured several series of images with the exposure for each successive image about a factor of three higher than the previous image.

Attached are four multi-page .pdf files containing the photos with:

- Resonant Green only (ITM OptLev on)

- Resonant IR and Green (ITM OptLev on)

- Resonant IR at 2W incident (ITM OptLev off)

- Resonant IR at 20W incident (ITM OptLev off)

The camera settings for the images are in the fifth attached .pdf file.

Non-image files attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 10:56, Wednesday 05 April 2017 (35344)

Is there any chance that these images are flipped left-right?  If not, the bright spots seem to be in a position inconsistent with the position of the heat absorption, as shown in Aidan's alog 35336.  According to Aidan's alog, the absorber is on the bottom right when viewing the ITMX HR side; however, the bright spots here seem to be on the bottom left when viewing ITMX's HR side.

H1 CAL (CAL)
sudarshan.karki@LIGO.ORG - posted 15:25, Tuesday 21 March 2017 - last comment - 12:11, Wednesday 05 April 2017(34980)
Pcal beam spot position

We were able to get images using the Pcal camera at ENDY today and these images will be used to determine the position of the Pcal beam. (Analysis to follow). We were not able to get down to ENDX because of tumbleweed infestation. We will probably try to get images from ENDX during Thursday's commissioning effort.

Images attached to this report
Comments related to this report
sudarshan.karki@LIGO.ORG - 12:11, Wednesday 05 April 2017 (35349)

Analysis can be found at: LHO alog #35347
