H1 General
travis.sadecki@LIGO.ORG - posted 12:44, Thursday 04 May 2017 (36025)
Ops Day Mid-shift Summary

We are back to being locked at NLN. Nutsinee is still running some measurements and the cleaning crew is still working on the cleanroom. We will remain in Commissioning mode until either the cleaning crew is done or LLO comes back online, whichever occurs first.

H1 AOS (AOS, SEI, SUS)
corey.gray@LIGO.ORG - posted 12:03, Thursday 04 May 2017 - last comment - 14:22, Thursday 04 May 2017(36024)
Optical Lever 7 Day Trends

Attaching screenshot of plots.  CLOSING out FAMIS #4726.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 14:22, Thursday 04 May 2017 (36030)

Everything with these trends looks normal, except for the ITMy SUM.  This now appears to be a failing laser diode (previous issue reported here), so the oplev laser was swapped this afternoon while LLO was down.

The HAM2 oplev can likely be removed from the script, as the HAM oplevs are now slated for decommissioning (ECR E1700123 and ECR tracker 7884).

H1 General
travis.sadecki@LIGO.ORG - posted 10:48, Thursday 04 May 2017 (36021)
Lockloss 17:45 UTC

Possibly related to cleaning activities.  Nutsinee is taking advantage of the coincident downtime to reinstall HWS plates at both end stations and take some measurements during relocking.

H1 General
travis.sadecki@LIGO.ORG - posted 10:39, Thursday 04 May 2017 (36019)
GRB alert 17:38 UTC

However, we are in Commissioning mode for the cleanroom cleaning, and LLO is still down. :(

H1 PEM
travis.sadecki@LIGO.ORG - posted 09:38, Thursday 04 May 2017 - last comment - 10:46, Thursday 04 May 2017(36017)
HAM2 GS13 injections

Sheila helped me sort out the issue from yesterday's attempt at these measurements. Today they were performed successfully while we were in Maintenance for the pre-vent cleaning. Results can be found in /ligo/home/sheila.dwyer/PEM/HPI_injections_fire_pump:

HPI_HAM2_X4May2017.xml

HPI_HAM2_Y4May2017.xml

HPI_HAM2_Z4May2017.xml

Comments related to this report
travis.sadecki@LIGO.ORG - 10:46, Thursday 04 May 2017 (36020)

Note that these are NOT HEPI injections, as the titles might lead you to believe; they are ISI injections. I'll leave it to Sheila to move and rename them appropriately.

H1 General
travis.sadecki@LIGO.ORG - posted 08:59, Thursday 04 May 2017 (36016)
Out of Observing for cleaning

Out of Observing at 15:52 UTC to let the cleaning crew get started on the second (first?) cleaning of the cleanroom in the beer garden. Set OPS_OBSERVATORY_MODE to PREVENTATIVE MAINTENANCE since there is no obvious mode choice for cleaning activities.
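
As a side note, a mode change like this could also be scripted; here is a minimal sketch using pyepics. The full EPICS channel name is an assumption (only the name OPS_OBSERVATORY_MODE appears above), so verify it against the Ops MEDM screen before using anything like this.

    from epics import caget, caput  # pyepics

    # Hypothetical full channel name -- the entry above only gives
    # "OPS_OBSERVATORY_MODE"; confirm the real PV on the Ops overview screen.
    MODE_CHANNEL = "H1:OPS-OBSERVATORY_MODE"

    print("Current mode:", caget(MODE_CHANNEL, as_string=True))
    # Mode used here because there is no dedicated "cleaning" choice.
    caput(MODE_CHANNEL, "PREVENTATIVE MAINTENANCE")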

H1 PSL
travis.sadecki@LIGO.ORG - posted 08:54, Thursday 04 May 2017 (36014)
Weekly PSL Chiller Reservoir Top-Off

Added 250 mL H2O to the Xtal chiller. Diode chiller is OK. Filters appear clean. This closes FAMIS 6521.

H1 General
travis.sadecki@LIGO.ORG - posted 08:06, Thursday 04 May 2017 (36013)
Ops Day Shift Transition

TITLE: 05/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
OUTGOING OPERATOR: Jeff
QUICK SUMMARY:  No issues were handed off.  Lock is just over 14 hours old.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 07:58, Thursday 04 May 2017 (36012)
Ops Owl Shift Summary
Ops Shift Log: 05/04/2017, Owl Shift 07:00 – 15:00 (00:00 - 08:00) Time - UTC (PT)
State of H1: Locked at NLN, 29.8 W and 63.7 Mpc of range
Intent Bit: Observing
Support: N/A
Incoming Operator: Travis

Shift Summary: Ran the A2L check script. Pitch is OK; yaw is up to 0.7. LLO is down, so I dropped out of Observing to run the A2L repair script. Wind and seismic are starting to rise. Otherwise, a good observing shift with no issues to report.

     Activity Log: Time - UTC (PT)
07:00 (00:00) Take over from TJ
07:41 (00:41) Drop out of Observing to run the A2L script
07:50 (00:50) Back to Observing
15:00 (08:00) Turn over to Travis
H1 General
jeffrey.bartlett@LIGO.ORG - posted 04:03, Thursday 04 May 2017 (36011)
Ops Owl Mid-Shift Summary
  Locked and observing for past 10 hours. Wind, weather, and seismic are all OK. There have been a few glitches, but no other concerns at this time. 
H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:17, Thursday 04 May 2017 (36010)
Ops Owl Shift Transition
Ops Shift Transition: 05/04/2017, Owl Shift 07:00 – 15:00 (00:00 - 08:00) - UTC (PT)
State of H1: IFO locked at NLN, 29.9W and 65.7Mpc  
Intent Bit: Observing
Weather: Wind is Calm; Skies are clear; Temps in the lower 60s  
Primary 0.03 – 0.1Hz: At 0.01um/s 
Secondary 0.1 – 0.3Hz: At 0.2um/s   
Quick Summary:  Locked for 6 hours. There have been several glitches during the previous shift. Right now all OK.
Outgoing Operator: TJ
LHO General
thomas.shaffer@LIGO.ORG - posted 00:00, Thursday 04 May 2017 (36007)
Ops Eve Shift Summary

TITLE: 05/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC

STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: LLO is down for the night with fast shutter problems and severe weather. Here, we have been very glitchy for the past ~8 hrs; not sure why. Range is still okay and we have been locked for 6 hrs.
LOG:

H1 General (CDS)
thomas.shaffer@LIGO.ORG - posted 17:35, Wednesday 03 May 2017 - last comment - 17:54, Wednesday 03 May 2017(36008)
Lockloss @00:20UTC

Not sure of the cause. The TCS ITMY laser lost its lock point 15 min prior, but quickly found it again, so that doesn't seem to be the issue.

I could not get the lockloss tool to work; I got the error:

"RuntimeError: Low level daq error occured [13]: Requested data were not found.
There is a gap in H1:GRD-ISC_LOCK_STATE_N"

I will see if I can dig up anything on that error after locking.

Comments related to this report
thomas.shaffer@LIGO.ORG - 17:54, Wednesday 03 May 2017 (36009)

Back to Observing at 00:53UTC. No issues coming back up.

H1 DetChar (DetChar)
evan.goetz@LIGO.ORG - posted 16:26, Wednesday 03 May 2017 (36006)
Bruco scans before and after 50 Hz glitching appearing on April 8
To better understand the cause of the 50 Hz glitches observed in Omicron scans (see, e.g., here), I made some Bruco scans, hoping to identify whether a coupling mechanism changed around the start of the day on April 8.

April 7 bruco scan results

April 8 bruco scan results

I don't see anything obvious, but maybe others with a more expert Bruco eye can spot something.
H1 PSL
thomas.shaffer@LIGO.ORG - posted 16:16, Wednesday 03 May 2017 (36005)
PSL Weekly Report

FAMIS#7436


Laser Status:
SysStat is good
Front End Power is 34.01 W (should be around 30 W)
HPO Output Power is 169.1 W
Front End Watch is GREEN
HPO Watch is GREEN

PMC:
It has been locked for 5 days, 17 hr 55 min (should be days/weeks)
Reflected power = 16.65 W
Transmitted power = 62.6 W
Power sum = 79.25 W

FSS:
It has been locked for 0 days 15 hr and 29 min (should be days/weeks)
TPD[V] = 3.514V (min 0.9V)

ISS:
The diffracted power is around 3.8% (should be 3-5%)
Last saturation event was 0 days 15 hours and 28 minutes ago (should be days/weeks)

Possible Issues:
None

H1 AOS (DetChar)
thomas.massinger@LIGO.ORG - posted 11:11, Wednesday 03 May 2017 - last comment - 11:10, Thursday 04 May 2017(35993)
EX/EY oplev laser glitches

TJ Massinger, Derek Davis, Laura Nuttall

Summary: the intervention on April 26th cleaned up the ETMY oplev glitching, but the ETMX oplev is still causing problems. Both noise sources have been seen to cause loud background triggers for transient searches when they're glitching.

Since the end-station oplev laser glitches have been seen to couple into h(t), we had to develop vetoes based on the BLRMS of the OPLEV_SUM channels to flag and remove these transients from the CBC and Burst searches. Using the thresholds that capture the coupling into h(t), we were able to trend the impact of the oplev glitching over long stretches using the veto evaluation toolkit (VET).

ETMY results: https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/detchar/VET/ETMY_OPLEV/1176422418-1177779618/65/

The take home message from this result page is the Omicron glitchgrams. In the first attached image, every blue point is an Omicron trigger. Those in red are triggers coincident with the ETMY L3 OPLEV SUM having a high 10-50 Hz BLRMS. This population of glitches in the bucket was severely damaging for transient searches, but the problem seems to have gone away since the intervention on April 26th (alog 35798). Looking at the segments where the OPLEV SUM BLRMS was high (attachment 2), we see that after April 26th there are few times when the flag is active, which indicates that there are fewer fluctuations in the OPLEV SUM readout.

ETMX results: https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/detchar/VET/ETMX_OPLEV/1176422418-1177779618/7_5/

Once again, looking at the Omicron glitchgram (attachment 3), every blue point is an Omicron trigger. Those in red are triggers coincident with ETMX L3 OPLEV SUM having a high 45-100 Hz BLRMS (chosen to capture the power fluctuations in the oplev). The ETMX oplev glitches aren't quite as damaging as the ETMY oplev glitches, but they're still producing loud background triggers in the transient searches. Looking at the segments where this flag was active (attachment 4), we see that the ETMX oplev laser has been glitching on and off over the last two weeks and coupling into h(t).
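
As a rough illustration of how a BLRMS-based flag like the ones described above can be built, here is a minimal gwpy sketch. The channel name, band, threshold, and times are placeholders; the production flags were generated with the standard DetChar/VET tooling, not this snippet.

    from gwpy.timeseries import TimeSeries

    # Placeholder channel, band, and threshold -- the real flags used the
    # OPLEV SUM BLRMS bands quoted above (10-50 Hz for ETMY, 45-100 Hz for ETMX).
    CHANNEL = "H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ"
    START, END = 1176422418, 1176508818   # first day of the trended interval
    THRESHOLD = 65                        # counts; illustrative value only

    data = TimeSeries.get(CHANNEL, START, END)
    blrms = data.bandpass(10, 50).rms(1.0)   # 1 s stride band-limited RMS
    flag = (blrms > THRESHOLD).to_dqflag(name="H1:ETMY_OPLEV_GLITCHING")
    print(flag.active)   # segments where the oplev SUM BLRMS is high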

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:57, Wednesday 03 May 2017 (35994)

I talked to Jason Oberling about this earlier, and it sounds like the same quick fix that worked for ETMY won't work for ETMX, since that laser is already operating near the high end of its power range and I turned up the power on Friday (35887). Jason is working on tuning up the one other laser that he has so it is ready for a swap on Tuesday, but the ITMY oplev laser may also be failing, so the new laser might be needed for that. If we need to fix the problem in hardware, the only immediate option we have is to turn off the oplev completely, which will mean that we lose our only independent reference of the optic angle. We're reluctant to lose this, especially since we have had several computer crashes recently.

Is the message of the alog above that the veto is good enough until the laser can be fixed (probably not until the vent)?  

Also, although Jason can probably fix this laser in a few weeks, these lasers fail frequently and we probably will continue to have problems like this throughout the rest of the run.  

keita.kawabe@LIGO.ORG - 12:26, Wednesday 03 May 2017 (35997)DetChar

It would be interesting if DetChar could also have a look at the ITMs and L1. I'd like to know whether similar glitches in oplev power cause similar glitches in DARM regardless of the test mass and IFO, which should be the case if the coupling is radiation pressure.

thomas.massinger@LIGO.ORG - 13:24, Wednesday 03 May 2017 (36000)DetChar

Sheila: The OPLEV SUM channels are good witnesses for this, so we can monitor them and veto glitchy times from the searches. The deadtime from the ETMX glitches isn't much; VET shows 0.04% deadtime over the roughly two weeks I ran this for, so it's not damaging to the searches in terms of analysis time if we need to continue to veto them.

Keita: I'm also curious about the ITMs and L1, they're next up on the list.

thomas.massinger@LIGO.ORG - 11:10, Thursday 04 May 2017 (36023)DetChar

Keita: It looks like the glitches in the ITMs typically have a lower SNR than those in the ETMs. I mentioned this in LLO alog 33531, but will attach the relevant figure here as well.

The attached figure shows SNR histograms of Omicron triggers from the LHO test mass OPLEV_SUM channels, with the x-axis restricted to show loud (SNR > 100) triggers. The loudest Omicron triggers in the ITM OPLEV_SUM channels have SNR 450 and 550 for ITMX and ITMY respectively, and they're part of a sparse tail of loud triggers. For ETMX and ETMY, the loudest Omicron triggers have SNR 750 and 5500 respectively, and both channels have a higher population of loud glitches (ETMY in particular).

Images attached to this comment
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 15:16, Tuesday 02 May 2017 - last comment - 10:15, Thursday 04 May 2017(35979)
PZT2 offset check -- only EX was done

To be able to take measurements with the HWS at the end stations, we need a proper PZT offset for both EX and EY to get rid of the reflection coming off the ITMs during full lock (when the arm is locked in RED). Today I tested Elli's EX PZT2 offset from 2 years ago (alog 17860) while taking the past and current PZT2 YAW output into account. A quick conclusion: the sum of output + misalign bias that Elli figured out still gives a goodish image on the HWSX camera, but it needed fine tuning.

How to lock the arms in red (one at a time; this is for the X arm; see the scripted sketch after these steps)

1) Request INITIAL_ALIGNMENT on the ISC_LOCK guardian, take ALIGN_IFO to DOWN then request XARM_IR_LOCKED

2) If the ALIGN_IFO guardian gets stuck at LOCKING_XARM_IR, the arm is too misaligned. Do INITIAL_ALIGNMENT for the arm that you want (or both)

    Once the arms are locked in green, they are good for red too. Go back to LOCKING_XARM_IR

    Guardian will misalign PRM, SRM, ITMY, and ETMY. Make sure no one else is using these.
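
A minimal sketch of how the guardian requests in the steps above could be issued from Python, assuming the usual H1:GRD-<node>_REQUEST EPICS interface; in practice this is done from the guardian MEDM screens as described, so treat the channel names here as assumptions to verify.

    from epics import caput

    # Step 1 of the list above: request INITIAL_ALIGNMENT on ISC_LOCK,
    # take ALIGN_IFO to DOWN, then request XARM_IR_LOCKED.
    # Wait for each node to reach its requested state before the next request.
    caput("H1:GRD-ISC_LOCK_REQUEST", "INITIAL_ALIGNMENT")
    caput("H1:GRD-ALIGN_IFO_REQUEST", "DOWN")
    caput("H1:GRD-ALIGN_IFO_REQUEST", "XARM_IR_LOCKED")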

Misaligning ALSX PZT2 (a channel-level sketch of this sequence follows the revert note below)

1) Turn off H1:ALSX_PZT2_YAW_OUTEN. This is an ON/OFF switch. The switch can be found in SITEMAP>ALSX Overview, PZT2_YAW

    Make sure you stream the image so you can see the transition

2) Add YAW offset to H1:ALS-X_PZT2_YAW_MISALIGN_BIAS

3) Set the type of misalignment to SUM

4) Click Misalign.

To revert the configuration, watch out for the growing output before turning the OUTEN switch back on. If the number grows, turn the integrator off and the bleed rate on; the counts will slowly ramp down.
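
A minimal channel-level sketch of the misalign/revert sequence above, using pyepics. The OUTEN and MISALIGN_BIAS channel names are the ones quoted in the steps, the OUTPUT channel is patterned on the EY one quoted in the comment below, and the 0/1 switch values are assumptions; the "type = SUM" and "Misalign" button steps are left to the MEDM screen since their channel names aren't given here.

    from epics import caget, caput

    # Step 1: turn off the PZT2 YAW output (ON/OFF switch from the steps above).
    caput("H1:ALSX_PZT2_YAW_OUTEN", 0)      # assuming 0 = OFF

    # Step 2: write the yaw misalignment offset (value from this measurement).
    caput("H1:ALS-X_PZT2_YAW_MISALIGN_BIAS", -2400)

    # Steps 3-4 (set misalignment type to SUM, click Misalign) are done on the
    # MEDM screen; those channel names are not given in this entry.

    # Revert: check that the output is not growing before re-enabling.
    print("PZT2 YAW output:", caget("H1:ALS-X_PZT2_YAW_OUTPUT"))
    caput("H1:ALSX_PZT2_YAW_OUTEN", 1)      # assuming 1 = ON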

I compared my streamed images to Elli's using the same Matlab script, but the images don't look quite the same (mine seem a bit more saturated). On April 24 I calculated the more recent offset to be -3381. This resulted in a very clipped-looking image on the HWSX, so I went backward a little; -2400 seems to have given the best result. Today's PZT2 YAW output (~10340) plus the misalignment bias (-2400) sums to 7940, which is 321 counts different from Elli's sum (10919 - 3300 = 7619).

I'm afraid that the ITMs' alignment might also affect this offset. If there's no more commissioning time this week to take the data, the offset should be checked again after the vent.

We had no problem going back to NLN later. If there was any hysteresis, it didn't cause a problem.

HWS plates are currently off at both end stations.

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 10:15, Thursday 04 May 2017 (36018)

I had a chance to check the EY PZT offset today. A PZT2 YAW misalign bias of -3500 worked best for H1:ALS-Y_PZT2_YAW_OUTPUT = 15233 (the sum being 11733).

If there's a lockloss, we can take some data with the end station HWS during the power-up. The Hartmann waveplates still have to be put back, though. Waiting for an opportunistic downtime.

Images attached to this comment
H1 PSL
edmond.merilh@LIGO.ORG - posted 16:12, Monday 01 May 2017 - last comment - 08:56, Thursday 04 May 2017(35951)
PSL Weekly Report - 10 Day Trends FAMIS #6146

Nothing unusual. There are obvious signs of diode current adjustments.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 08:56, Thursday 04 May 2017 (36015)

Concur with Ed, everything looks normal.
