H1 General (IOO)
cheryl.vorvick@LIGO.ORG - posted 04:54, Friday 05 May 2017 - last comment - 11:08, Friday 05 May 2017(36039)
Ops Owl Transition:
Comments related to this report
cheryl.vorvick@LIGO.ORG - 05:01, Friday 05 May 2017 (36040)

SDF files:

Images attached to this comment
jim.warner@LIGO.ORG - 11:08, Friday 05 May 2017 (36045)

The SDF diffs indicate that, again, HAMs 2&3 are in the wrong state.

LHO General
thomas.shaffer@LIGO.ORG - posted 01:32, Friday 05 May 2017 (36033)
Ops Eve Shift Summary

TITLE: 05/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Other
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: A large lightning storm swept through the area and caused numerous power glitches to the site. Since then it has been an effort to recover. Just to make things a bit more hectic, we had a 5.8M earthquake in Tajikistan to shake up the recovery effort. I unfortunately didn't write down times for the recovery process, but it went something like this:

Thank you to Dave Barker, Jason Oberling, Patrick Thomas, Richard McCarthy, and Keita Kawabe for your help!

I have restored all suspensions back to where they were during our last lock and made it through initial alignment up to SRC align without any major issues. Now handing off to Cheryl.

LOG:

H1 CDS
david.barker@LIGO.ORG - posted 23:16, Thursday 04 May 2017 (36038)
CDS is GREEN

I managed to clear the DAQ errors for h1pemm[x,y] by restarting the IOP models more times than should be necessary for them to latch to GPS time via EPICS. I suspect their EPICS GPS source was having issues earlier. Anyhow, all DAQ data is now good.

Greg checked on the LDAS and GDS systems and found no problems. We should be good to go to observation mode when ready.

I checked the HEPI pump controller computers; they did not reboot and seem to have ridden through the whole thing. BTW, they are running a 2.6.18 kernel, which predates the day208 bug.

In summary, I don't think we need to do anything more at this point.

H1 CDS
david.barker@LIGO.ORG - posted 22:33, Thursday 04 May 2017 - last comment - 23:10, Thursday 04 May 2017(36035)
recovery from extended electrical storm

We think the worst of the storm may be over. I'll let TJ fill in the details; I know of at least three glitches which took down front end models. The first just took out the ISC models at the end stations (suspect timing card power supplies); the second (at 8:40pm) was the biggest and rebooted every front end computer. Most front ends came back up after the reboots, but h1ioppsl0 had a timing excursion which took about 15 minutes to clear. At that point the only problem was bad DAQ data from both PEM systems at the midstations. A further glitch caused h1ioplsc0 to report a DAC error; I restarted all the LSC models to clear this.

The Beckhoff slow control machines also rebooted. All code restarted except for the three PLCs at EX. These have just been started.

At time of writing, the only computer issue I see is the bad data from the midstation PEM front end computers to the DAQ. If memory serves, these send their DAQ data back to the end stations via Ethernet-to-fiber-optic converters, and then piggy-back on the main switch-to-switch link to the MSR. Maybe the converters need a power cycle.

 

Comments related to this report
david.barker@LIGO.ORG - 22:44, Thursday 04 May 2017 (36036)

Greg is checking if HofT would be fine with invalid mid station PEM channels.

gregory.mendell@LIGO.ORG - 23:10, Thursday 04 May 2017 (36037)CDS, DCS

All of LDAS at LHO survived the power glitches.

Data from CDS and the DMT continue to flow, and I see H1 raw, trend, low-latency, and aggregated hoft arriving on the clusters at LHO and CIT. The summary pages are up to date.

Thus, I think things are fine from the point-of-view of LDAS.

Therefore, hoft should be marked as good once H1 gets back into observation mode, and it will be ready for low-latency analysis downstream.

 

H1 General
thomas.shaffer@LIGO.ORG - posted 20:06, Thursday 04 May 2017 (36034)
Lockloss 03:02UTC

Lightning strike. I just came back inside and looked at the camera to see it hit us. Dave is on the phone now.

H1 CDS (CDS, SEI)
patrick.thomas@LIGO.ORG - posted 16:54, Thursday 04 May 2017 (36032)
Logged out of h1hpipumpctrll0
Forgot I was logged in from when we were in maintenance for cleanroom cleaning. Logged out at ~23:51 UTC.
LHO General
thomas.shaffer@LIGO.ORG - posted 16:11, Thursday 04 May 2017 (36031)
Ops Eve Shift Transition

TITLE: 05/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY: LLO is still down, but we have been running at 65Mpc for 4 hrs.

H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Thursday 04 May 2017 (36022)
Ops Day Shift Summary

TITLE: 05/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: In and out of Observing a few times for cleanroom work and an OpLev swap, plus one lockloss likely caused by the cleaning crew (blame withheld). Otherwise no issues to report.
LOG:

17:45 Nutsinee to both ends to reinstall HWS plate

17:50 Hugh to MX

H1 AOS (DetChar)
jason.oberling@LIGO.ORG - posted 14:14, Thursday 04 May 2017 (36029)
ITMy Optical Lever Laser Swapped (WP 6616)

Looking at the trends posted in LHO alog 36024, the problem with the ITMy oplev appears to be a failing laser diode.  Since LLO was down, it was decided to swap this laser with one I recently stabilized in the lab.  The motivation is to have a before/after record of the ITMy pointing for the vent beginning next week; it was thought better to perform the swap now, rather than during the vent (whether for preventive maintenance or due to laser failure), and risk it affecting the alignment reading of the oplev (in theory it shouldn't, but we all know that old saying about theory and practice...).  New laser SN is 120-2, old laser SN is 189-1.  As usual, the laser will need a couple of hours to come to thermal equilibrium with its new home, then I can assess it for glitchiness.  I will leave WP 6616 open until the laser is operating glitch-free.

H1 General
travis.sadecki@LIGO.ORG - posted 14:02, Thursday 04 May 2017 (36028)
Observing at 21:01 UTC

OpLev laser swap is complete.  Back to Observing.

H1 General
travis.sadecki@LIGO.ORG - posted 13:40, Thursday 04 May 2017 (36027)
Commissioning mode at 20:39 UTC

For Jason to swap ITMy OpLev laser.  WP#6616.

H1 DetChar (DetChar)
beverly.berger@LIGO.ORG - posted 13:00, Thursday 04 May 2017 (36026)
DQ Shift: Monday 1 May 2017 00:00 UTC - Wednesday 3 May 2017 23:59 UTC

Shifter: Beverly Berger

LHO fellow: Evan Goetz

For full results see here.

H1 General
travis.sadecki@LIGO.ORG - posted 12:44, Thursday 04 May 2017 (36025)
Ops Day Mid-shift Summary

We are back to being locked at NLN.  Nutsinee is still running some measurements and the cleaning crew is still working on the cleanroom.  We will remain in Commissioning mode until either the cleaning crew is done or LLO comes back online, whichever occurs first.

H1 AOS (AOS, SEI, SUS)
corey.gray@LIGO.ORG - posted 12:03, Thursday 04 May 2017 - last comment - 14:22, Thursday 04 May 2017(36024)
Optical Lever 7 Day Trends

Attaching screenshot of plots.  CLOSING out FAMIS #4726.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 14:22, Thursday 04 May 2017 (36030)

Everything with these trends looks normal, except for the ITMy SUM.  This now appears to be a failing laser diode (previous issue reported here), so the oplev laser was swapped this afternoon while LLO was down.

For the HAM2 oplev, this can likely be removed from the script as the HAM oplevs are now set for decommissioning (ECR E1700123 and ECR tracker 7884).

H1 General
travis.sadecki@LIGO.ORG - posted 10:48, Thursday 04 May 2017 (36021)
Lockloss 17:45 UTC

Possibly related to cleaning activities.  Nutsinee is taking advantage of the coincident downtime to reinstall HWS plates at both end stations and take some measurements during relocking.

H1 PEM
travis.sadecki@LIGO.ORG - posted 09:38, Thursday 04 May 2017 - last comment - 10:46, Thursday 04 May 2017(36017)
HAM2 GS13 injections

Sheila helped me sort out the issue from yesterday's attempt at these measurements.  Today they were performed successfully while we were in Maintenance for cleaning for the upcoming vent.  Results can be found in /ligo/home/sheila.dwyer/PEM/HPI_injections_fire_pump:

HPI_HAM2_X4May2017.xml

HPI_HAM2_Y4May2017.xml

HPI_HAM2_Z4May2017.xml

Comments related to this report
travis.sadecki@LIGO.ORG - 10:46, Thursday 04 May 2017 (36020)

Note these are NOT HEPI injections as the titles may lead you to believe.  They are ISI injections.  I'll leave it to Sheila to move and rename them appropriately.

H1 AOS (DetChar)
thomas.massinger@LIGO.ORG - posted 11:11, Wednesday 03 May 2017 - last comment - 11:10, Thursday 04 May 2017(35993)
EX/EY oplev laser glitches

TJ Massinger, Derek Davis, Laura Nuttall

Summary: the intervention on April 26th cleaned up the ETMY oplev glitching but the ETMX oplev is still causing problems. Both noise sources have been seen to cause loud background triggers for transient searches when they're glitching.

Since the end station oplev laser glitches have been seen to couple into h(t), we had to develop vetoes based on the BLRMS of the OPLEV_SUM channels to flag and remove these transients from CBC and Burst searches. Using the thresholds that capture the coupling into h(t), we were able to take some long trends of the impact of the oplev glitching using the veto evaluation toolkit (VET).
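For illustration only, here is a minimal sketch of how a BLRMS threshold flag of this kind can be computed with numpy/scipy; the sample rate, band, stride, and threshold below are placeholder values chosen for the example (not the ones used in the actual veto definitions), and the data array stands in for an OPLEV_SUM time series:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms_flag(data, fs, f_lo, f_hi, stride, threshold):
    """Return start times (s) of strides whose band-limited RMS exceeds threshold."""
    # Band-pass the time series to the band of interest (e.g. 10-50 Hz).
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    banded = sosfiltfilt(sos, data)
    # Compute the RMS over non-overlapping strides and flag the loud ones.
    n = int(stride * fs)
    nseg = len(banded) // n
    segs = banded[: nseg * n].reshape(nseg, n)
    rms = np.sqrt(np.mean(segs ** 2, axis=1))
    return np.flatnonzero(rms > threshold) * stride

# Example with fake data: quiet background plus one loud 1-second glitch at t = 30 s.
fs = 256.0
t = np.arange(0, 60, 1 / fs)
oplev_sum = np.random.normal(0, 1e-3, t.size)
oplev_sum[int(30 * fs):int(31 * fs)] += 0.1 * np.sin(2 * np.pi * 30 * t[:int(fs)])
print(blrms_flag(oplev_sum, fs, 10, 50, stride=1.0, threshold=5e-3))  # -> [30.]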

ETMY results: https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/detchar/VET/ETMY_OPLEV/1176422418-1177779618/65/

The take home message from this result page is the Omicron glitchgrams. In the first attached image, every blue point is an Omicron trigger. Those in red are triggers coincident with the ETMY L3 OPLEV SUM having a high 10-50 Hz BLRMS. This population of glitches in the bucket was severely damaging for transient searches, but the problem seems to have gone away since the intervention on April 26th (alog 35798). Looking at the segments where the OPLEV SUM BLRMS was high (attachment 2), we see that after April 26th there are few times when the flag is active, which indicates that there are fewer fluctuations in the OPLEV SUM readout.

ETMX results: https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/detchar/VET/ETMX_OPLEV/1176422418-1177779618/7_5/

Once again, looking at the Omicron glitchgram (attachment 3), every blue point is an Omicron trigger. Those in red are triggers coincident with ETMX L3 OPLEV SUM having a high 45-100 Hz BLRMS (chosen to capture the power fluctuations in the oplev). The ETMX oplev glitches aren't quite as damaging as the ETMY oplev glitches, but they're still producing loud background triggers in the transient searches. Looking at the segments where this flag was active (attachment 4), we see that the ETMX oplev laser has been glitching on and off over the last two weeks and coupling into h(t).

 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:57, Wednesday 03 May 2017 (35994)

I talked to Jason Oberling about this earlier, and it sounds like the same quick fix that worked for ETMY won't work for ETMX, since that laser is already operating near the high end of its power range and I turned up the power on Friday (35887). Jason is working on tuning up the one other laser he has so it is ready for a swap on Tuesday, but the ITMY oplev laser may also be failing, so the new laser might be needed for that.  If we need to fix the problem in hardware, the only immediate option we have is to turn off the oplev completely, which will mean that we lose our only independent reference of the optic angle. We're reluctant to lose this, especially since we have had several computer crashes recently.

Is the message of the alog above that the veto is good enough until the laser can be fixed (probably not until the vent)?  

Also, although Jason can probably fix this laser in a few weeks, these lasers fail frequently and we probably will continue to have problems like this throughout the rest of the run.  

 

keita.kawabe@LIGO.ORG - 12:26, Wednesday 03 May 2017 (35997)DetChar

It would be interesting if detchar could also have a look at the ITMs and L1.  I'd like to know if similar glitches in oplev power cause similar glitches in DARM regardless of the test mass and IFO, which should be the case if it's radiation pressure.

thomas.massinger@LIGO.ORG - 13:24, Wednesday 03 May 2017 (36000)DetChar

Sheila: The OPLEV SUM channels are good witnesses for this, so we can monitor them and veto glitchy times from the searches. The deadtime from the ETMX glitches isn't much; VET shows 0.04% deadtime over the roughly two weeks I ran this for, so it's not damaging to the search in terms of analysis time if we need to continue to veto them.
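(For scale, 0.04% of the roughly two-week stretch analyzed works out to about 0.0004 × 14 days ≈ 8 minutes of vetoed analysis time.)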

Keita: I'm also curious about the ITMs and L1, they're next up on the list.

thomas.massinger@LIGO.ORG - 11:10, Thursday 04 May 2017 (36023)DetChar

Keita: It looks like the glitches in the ITMs typically have a lower SNR than those in the ETMs. I mentioned this in LLO alog 33531, but will attach the relevant figure here as well.

The attached figure shows SNR histograms of Omicron triggers from the LHO test mass OPLEV_SUM channels, with the x-axis restricted to show loud (SNR > 100) triggers. The loudest Omicron triggers in the ITM OPLEV_SUM channels have SNR 450 and 550 for ITMX and ITMY respectively, and they're part of a sparse tail of loud triggers. For the ETMs, the loudest Omicron triggers have SNR 750 and 5500 for ETMX and ETMY respectively, and both channels have a higher population of loud glitches (ETMY in particular).
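For reference, a restricted-axis SNR histogram of this kind can be made along the following lines; this is only a sketch, and the lognormal arrays below are fabricated stand-ins for the real Omicron trigger SNRs:

import numpy as np
import matplotlib.pyplot as plt

# Fake trigger SNRs standing in for two OPLEV_SUM channels.
triggers = {
    "ITMX OPLEV_SUM": np.random.lognormal(2.0, 0.8, 5000),
    "ETMY OPLEV_SUM": np.random.lognormal(2.5, 1.0, 5000),
}

bins = np.logspace(2, 4, 50)          # restrict the x-axis to SNR > 100
for name, snr in triggers.items():
    loud = snr[snr > 100]             # keep only the loud tail
    plt.hist(loud, bins=bins, histtype="step", label=name)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Omicron trigger SNR")
plt.ylabel("Number of triggers")
plt.legend()
plt.savefig("oplev_sum_snr_hist.png")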

Images attached to this comment