H1 CDS
david.barker@LIGO.ORG - posted 12:43, Tuesday 15 November 2016 - last comment - 13:01, Tuesday 15 November 2016(31496)
exttrig alerting for GRBs and supernovae is operational

At first glance, the EXTTRIG MEDM screen appeared not to be working correctly: the last query time was updating correctly, but the last event shown was from Nov 6th, and there have been many events since then.

Upon investigation, it turns out that the system is behaving correctly. Here is the sequence:

The EPICS channels exttrig uses are served by the h1calcs model. This runs on h1oaf0, which has been restarted many times over the past week. Each time the h1calcs model is restarted, two things happen:

  1. the EXTTRIG EPICS channels are restored from the safe.snap file, which has the Nov 6th event data recorded
  2. the EXTTRIG script crashes (due to lost connection with its EPICS channels) and is restarted by monit. On restart, if no events have occurred in the past 10 hours, the event info is not updated and the Nov 6th data stands (see the sketch below).
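
Purely as illustration (this is not the actual ext_alert.py code), the restart look-back might look like this in Python; the query string and the set_epics_event helper are assumptions:

    from datetime import datetime, timedelta

    LOOKBACK = timedelta(hours=10)

    def update_on_startup(client, set_epics_event):
        """client: a GraceDB REST client; set_epics_event: hypothetical
        helper that writes an event's info to the EXTTRIG EPICS channels."""
        since = (datetime.utcnow() - LOOKBACK).strftime('%Y-%m-%d %H:%M:%S')
        # Hypothetical query string; the real script's query may differ.
        events = list(client.events('created: %s .. now' % since))
        if events:
            set_epics_event(events[-1])
        # else: the channels keep whatever safe.snap restored (the Nov 6th event)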

For the record, here is the startup sequence of the code on h1fescript0:

The process is controlled by monit; its configuration file is /etc/monit/conf.d/monit_ext_alert. It monitors a process whose PID is stored in the file /var/log/ext_alert/ext_alert.pid.

If the process needs to be started or restarted, monit executes (as root) the file /etc/init.d/ext_alert. This in turn, via start-stop-daemon, runs the script /home/exttrig/run_ext_alert.sh, and this runs the script /opt/rtcds/userapps/release/cal/common/scripts/ext_alert.py with the appropriate arguments.
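
The check monit performs can be sketched as follows; this is only a sketch assuming the paths quoted above (monit itself is configured in its own stanza syntax, not Python):

    import os
    import subprocess

    PIDFILE = '/var/log/ext_alert/ext_alert.pid'
    INIT_SCRIPT = '/etc/init.d/ext_alert'

    def process_alive(pid):
        try:
            os.kill(pid, 0)          # signal 0 just tests for existence
            return True
        except OSError:
            return False

    def check_and_restart():
        try:
            pid = int(open(PIDFILE).read().strip())
        except (IOError, ValueError):
            pid = None
        if pid is None or not process_alive(pid):
            # monit runs this as root; the init script chains to
            # run_ext_alert.sh, which runs ext_alert.py
            subprocess.call([INIT_SCRIPT, 'restart'])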

We are also investigating why this morning's Tuesday test events were not recorded. These are T-events in gracedb with a label of H1OPS. The run_ext_alert.sh script was not calling ext_alert.py with the '--test' argument to query for test events. We turned on the gracedb query of test events in /home/exttrig/run_ext_alert.sh and got this morning's test event on startup. Duncan pointed out that there are many non-ops test events per day, so this will generate many false positives.
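
A minimal sketch of such a test-event query, assuming the ligo-gracedb REST client; the exact query string ext_alert.py uses with '--test' may differ:

    from ligo.gracedb.rest import GraceDb

    client = GraceDb()
    # 'Test' is the GraceDB group for T-numbered test events; the H1OPS
    # label mentioned above could be used to narrow the search further.
    for event in client.events('group: Test'):
        print('%s %s' % (event['graceid'], event['created']))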

Comments related to this report
david.barker@LIGO.ORG - 13:01, Tuesday 15 November 2016 (31499)

On Duncan's recommendation we turned off the reporting of test events.

By the way, it looks like today's SNEWS external test event at 09:00 PST did not get posted to GraceDB?

H1 IOO (IOO)
cheryl.vorvick@LIGO.ORG - posted 12:35, Tuesday 15 November 2016 (31494)
REFL DC incident power and output voltage at 22W, 31W, and 50W

Today I looked at the output voltage from REFL during H1 locks at 22W, 31W, and 50W input power, and calculated how the output voltage increases with input power to H1.

                    22 W                31 W      50 W
incident power      27 mW (measured)    —         61 mW (calculated from 22 W power budget)
output voltage      0.3 V               0.7 V     1.0 V

In the above chart the incident power at 50W is calculated from the measured 22W power budget numbers.

I used the 22W incident power to get a conversion from mW to counts on REFL_DC_INMON, and used that conversion to calculate the incident power at 31W and 50W.

input power    REFL DC     incident power    mW/count     incident power
(Watts)        (counts)    measured (mW)     (measured)   calculated (mW)
22             520         27                0.05192      —
31             1137        —                 —            59
50             1586        —                 —            82

At 50W input power REFL sees 82mW incident power.

The numbers above are incorrect (they are unlocked numbers, not locked).

The numbers below are calculated, based on the unlocked transmission through the HWP and CWP in the IMC MCR path.

input power     REFL DC     incident power     uW/count      incident power
measured (W)    (counts)    calculated (uW)    (calculated)  calculated (uW)
22              520         8                  0.0154        —
31              1137        —                  —             17
50              1586        —                  —             24
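
As a worked check of the conversion (same numbers as the corrected table above):

    counts_22w = 520.0
    incident_uw_22w = 8.0                          # calculated, uW, at 22 W input
    uw_per_count = incident_uw_22w / counts_22w    # ~0.0154 uW/count

    for watts, counts in [(31, 1137), (50, 1586)]:
        print('%d W input: ~%.0f uW incident on REFL' % (watts, counts * uw_per_count))
    # -> 31 W: ~17 uW;  50 W: ~24 uW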
H1 IOO
sheila.dwyer@LIGO.ORG - posted 12:18, Tuesday 15 November 2016 (31495)
IMC WFS re-centered

The IMC WFS were not well centered, in part because we recently moved the uncontrolled DOF and in part because they weren't well centered before that.  

I picomotored them during maintenance this morning.

H1 SUS
jeffrey.kissel@LIGO.ORG - posted 12:12, Tuesday 15 November 2016 (31493)
h1susauxb123 ITM LV ESD Monitor Channel Front-End Model Bug Fixed
J. Kissel, S. Aston

WP #6318
FRS #6511

Stuart had pointed out a bug in the channel ordering for the monitor signals of the new ITM LV ESD drivers (see LHO aLOG 30861). In response, I proposed a change to our common library part, because otherwise we'd be creating a two-wrongs-make-a-right situation. He graciously offered to make that fix first, such that I merely need to svn up and re-arrange the ADC inputs. He has done so -- see LLO aLOG 29498 -- so I've made the update and corrected the ADC inputs.

I attach a few screenshots for proof. In this case, I only attach the "after" shots. 

The fixed models have been compiled, installed, and restarted. This fix did not require a DAQ restart.

This closes out the work permit and FRS ticket.
Images attached to this report
H1 AOS (SEI, SUS)
edmond.merilh@LIGO.ORG - posted 10:32, Tuesday 15 November 2016 - last comment - 14:04, Tuesday 15 November 2016(31491)
ETMY OpLev centered

...as per WP6316. There is a ≈ -1.6 µrad offset in YAW due to the 'kick' when the positioner is turned on/off. Multiple attempts to compensate for the offset made no difference.

Comments related to this report
jason.oberling@LIGO.ORG - 10:35, Tuesday 15 November 2016 (31492)

I also centered the BS oplev, using a different picomotor controller from the EE shop (see LHO alog 31333 for details on BS oplev centering issues).  This closes WP 6316.

edmond.merilh@LIGO.ORG - 14:04, Tuesday 15 November 2016 (31505)

I noticed that YAW had changed to ≈2.8 µrad after I returned to the corner station. PCal work began immediately after I left the alignment; I noticed the change in position and re-centered after the PCal work was completed. During the PCal work, the PCal shutter was opened and closed so that I could observe any effect on the alignment. There didn't seem to be any with the shutter in either of its positions. After a brief period of time it seems that PIT has drifted to ≈ -1.8 µrad. This seems to be an inherent issue throughout most of the OpLev system.

H1 CDS
david.barker@LIGO.ORG - posted 10:19, Tuesday 15 November 2016 (31490)
h1oaf0 remains stable after one-stop fiber replacement, adcHoldTimeEverMax scan of FECs

Currently h1oaf0 has been stable for 22 hours following the one-stop cable-transceiver replacement (as suggested by Daniel).

When the oaf stopped driving the DAC, the h1iop model's proc status file showed a very large value for adcHoldTimeEverMax (in the 90s), while most systems showed this value around 17 uS.

On the assumption that this value is an indicator of a failing PCI-bus extender transceiver, I have written a script to scan all the front-end computers and report this value. It was run at 10:10 PST and the results are tabulated below (a sketch of the scan follows the table).

Note that they are all in the 16-20 uS range except for the h1suse[x,y] systems, which are in the 70s. The end-station SUS machines are the newer type, and this is a known issue not related to possible one-stop fiber problems.

IOP model          adcHoldTimeEverMax (uS)
h1iopsush2a        17
h1iopsush2b        18
h1iopsush34        19
h1iopsush56        20
h1iopsusauxh34     18
h1iopsusauxh56     18
h1iopsusauxh2      18
h1iopsusauxb123    19
h1ioppsl0          17
h1iopsusex         74
h1iopseiex         21
h1iopiscex         18
h1iopsusauxex      20
h1iopsusey         71
h1iopseiey         20
h1iopiscey         18
h1iopsusauxey      19
h1iopoaf0          17
h1iopsusb123       17
h1iopseib1         18
h1iopseib2         18
h1iopseib3         21
h1ioplsc0          17
h1iopseih16        19
h1iopseih23        16
h1iopseih45        17
h1iopasc0          17
h1ioppemmx         18
h1ioppemmy         19
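
The scan itself could look roughly like this; it is a sketch, where the proc-file path, field formatting, and host-naming convention are assumptions, and the model list is abbreviated:

    import subprocess

    # Abbreviated list for illustration; the real scan covers every IOP
    # model tabulated above.
    MODELS = ['h1iopsush2a', 'h1iopsusex', 'h1iopoaf0']

    for model in MODELS:
        host = model.replace('iop', '', 1)   # assumed host-naming convention
        out = subprocess.check_output(
            ['ssh', host, 'grep adcHoldTimeEverMax /proc/%s/status' % model])
        print('%-16s %s' % (model, out.decode().split()[-1]))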
LHO VE
chandra.romel@LIGO.ORG - posted 09:29, Tuesday 15 November 2016 (31489)
CP3 Dewar fill and LLCV
As the CP3 Dewar was being filled with the LLCV set to 21% in manual mode, the exhaust pressure rose to 0.5 psi and the TCs were reading lower-than-normal temperatures. So I lowered the LLCV to 16% open, which was the setting we used after the last Dewar fill.
Images attached to this report
LHO VE
chandra.romel@LIGO.ORG - posted 09:22, Tuesday 15 November 2016 (31488)
Opened CP exhaust bypass valve on CP1,2,5,6,8
Per WP 6320, yesterday I opened the exhaust bypass valves on the cryopumps along the x-arm and on CP1. CP3 and CP4 at MY are already open. The only one left to open is CP7 at EY.

These valves will remain open during normal operations as an added layer of safety against over-pressurization. LLO has been operating in this mode for some time.
H1 CDS
james.batch@LIGO.ORG - posted 09:00, Tuesday 15 November 2016 (31487)
New nds2 client software installed for control room.
WP 6319

Updated nds2_client software to version 0.13.1 for Ubuntu 12, Ubuntu 14, and Debian 8.
H1 General
travis.sadecki@LIGO.ORG - posted 08:00, Tuesday 15 November 2016 (31485)
Ops Owl Shift Summary

TITLE: 11/15 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
LOG:

10:32 Set SRC1_P and SRC1_Y gain to 0 per Sheila's recommendation.  Reopened POP beam diverters to monitor POP90 signal.  Successfully made it to NLN.  LLO lost lock just as we were getting to NLN, so I am going to wait 30-45 minutes before making Kissel's measurements and going to Observe to see if things seem stable.

11:04 Running a2l_min_LHO.

11:09 PI mode 27 ringing up. Changed phase from 130 to 180 and gain from 3000 to 4000.

11:10 Running a2l_min_PR2.

11:15 Running a2l_min_PR3.

11:26 Closed POP beam diverters.  Starting Kissel's PCAL2DARMTF measurement.

11:37 Finished PCAL2DARMTF.  Saved as /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Measurements/PCAL/2016_11_15_H1_PCAL2DARMTF_4to1200Hz_fasttemplate.xml.

11:38 Started Kissel's DARMOLGTF measurement.

11:51 Saved DARMOLGTF measurement as /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/Measurements/DARMOLGTF/2016_11_15_H1_DARM_OLGTF_4to1200Hz_fasttemplate.xml.

11:55 Restarted PCAL lines.

11:56 Set to Observing.

12:15 Out of Observing to damp PI mode 28.  Changed phase from 60 to -60, no gain change.

12:23 Observing

12:32 Lockloss.  From the error signal striptools, it appears that MICH_P, DHARD_P, and SRC1_P rang up over the course of 10 minutes prior to lockloss.  Recall that I had set the SRC1 gains to 0 at the beginning of this lock stretch.  Perhaps it needed to be turned back on at some point during the lock, but wasn't an issue for 2 hours or so.  HAM6 ISI WD tripped at lockloss.

14:29 NLN.  Took SRC1 gains to 0 again since it seemed to work last time.

14:35 Observing.

14:52 PI mode 28 ringing up.  Changed phase from -60 to 60.  Forgot to go out of Observing to do so.

H1 General
travis.sadecki@LIGO.ORG - posted 04:31, Tuesday 15 November 2016 (31486)
Ops Owl Mid-shift Summary

After a bit of a struggle to get to NLN, with SRC1 loop turned off, we are back to Observing.  Unfortunately, coincident with LHO coming to full lock, LLO lost lock.

LHO General (DetChar)
corey.gray@LIGO.ORG - posted 00:19, Tuesday 15 November 2016 (31480)
OPS Eve Shift Summary

TITLE: 11/15 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Travis
SHIFT SUMMARY:

A bit of a rough shift, with H1 making it to NLN but only staying locked on the order of ~25 min. We observed ASC signals growing (over a period of 4-5 min) just before each lockloss.

LOG:

Notes:

H1 ISC
terra.hardwick@LIGO.ORG - posted 23:36, Monday 14 November 2016 (31483)
30 W PI modes

At 30 (or 32) W we're actively damping PI modes 3, 26, 27, 28. I spent some time over the last two days with damping turned off, one mode at a time, to get the natural ring-up of each. Rough values are below (I got to see one ring-up each for modes 3, 27, and 28, and two for mode 26).

Mode #   Freq       Optic   tau
3        15606 Hz   ITMX    0.068 s
26       15009 Hz   ETMY    0.092 s
27       47495 Hz   ETMY    0.024 s
28       47477 Hz   ETMY    0.022 s
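
For reference, one way to extract tau from a ring-up (a sketch, not necessarily the method used here): with damping off the amplitude grows as A(t) = A0*exp(t/tau), so a line fit to log(amplitude) gives 1/tau as its slope:

    import numpy as np

    def fit_tau(t, amplitude):
        """Growth time constant from an exponential ring-up: fit a line
        to log(amplitude); its slope is 1/tau."""
        slope, _ = np.polyfit(t, np.log(amplitude), 1)
        return 1.0 / slope

    # Self-check with synthetic data using mode 26's quoted tau:
    t = np.linspace(0.0, 10.0, 1000)
    a = 1e-3 * np.exp(t / 0.092)
    print('%.3f s' % fit_tau(t, a))   # -> 0.092 s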

I also ran Dr. Evan's circulating-power-from-HEPI script on a recent power-up and am getting an estimated circulating arm power of 133.67 kW for 30 W PSL power, though the HEPI displacement should be recalibrated. The usual simple calculation gives 30 W (input power) * 0.88 (IMC and Faraday) * 0.5 (50/50 BS) * 40 W/W (PRG) * 283 W/W (arm build-up) = 149.4 kW.
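
Written out as a quick check (same numbers as above):

    p_in = 30.0    # W, PSL input power
    eta  = 0.88    # IMC and Faraday throughput
    bs   = 0.5     # 50/50 beamsplitter
    prg  = 40.0    # power recycling gain, W/W
    arm  = 283.0   # arm build-up, W/W
    print('%.1f kW' % (p_in * eta * bs * prg * arm / 1e3))   # -> 149.4 kW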

H1 General
corey.gray@LIGO.ORG - posted 21:23, Monday 14 November 2016 - last comment - 23:29, Monday 14 November 2016(31482)
Mid Shift Status: Short NLN Locks

Summary:  Back to locking, but locks do not last long so far (15-25min).  

After Sheila, Patrick, & Jeff restored H1 to a point where we could lock it, we have been getting to Nominal Low Noise. Unfortunately, it only stays locked for a little while. What we saw during the last NLN lock is captured in the attached images.

I have turned off some calibration lines because I wanted to run some measurements for Jeff K. (will need to remember to turn them back on).

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 23:29, Monday 14 November 2016 (31484)

Chatted with Sheila about the recent locklosses (noted above)  & Sheila suggested a few things to try:

Run Lockloss Tool

Zoomed in on SRC1_P_OUT & DHARD_P_OUT to see what frequency they were ringing up at and it looks like just under 0.5Hz.  (see attached)

Take SRC1 Pit to 0.0, Open Beam Diverter, & Tweak SRM

Haven't tried this yet. Want to either skip the CLOSE BEAM DIVERTER step or close the diverters after the fact. Then take SRC1_P's gain to zero, and tweak the alignment of SRM such that you:

  • Minimize POP90 (most important)
  • And maximize AS90
Images attached to this comment
H1 ISC
jenne.driggers@LIGO.ORG - posted 19:18, Monday 14 November 2016 (31481)
Skipping Reduce RF9 modulation depth

I added a path in ISC_LOCK to skip the Reduce_RF9_modulation_depth state.  It seems like the last 2 locklosses were at that state, and Sheila and I aren't sure why.  It's also not really clear that we need to reduce the 9MHz modulation depth, so we're going to run for the night without the reduction. 

Perhaps tomorrow we can look at some BruCos to see if this really changes anything other than what we see in the OMC trans camera when we turn the exposure up super high.

H1 CDS
david.barker@LIGO.ORG - posted 17:06, Monday 14 November 2016 (31478)
Summary of h1oaf0 investigation

Here is an overview of how the h1oaf0 problem is presenting and what has been tried so far:

Problems started when we added a 7th ADC for PEM expansion. The initial ADC card was a PMC card on a PMC-to-PCIe adapter. An old-style adapter was used, and the computer would not attempt to boot (no BIOS screen) if the chassis was powered. Eventually a PCIe ADC was installed as the 7th card. h1iopoaf0 was recording ADC/DAC errors at random times.

Investigating the proc files, each event is an ADC/DAC timing error. The ADC records a very high adcHoldTimeEverMax (>90), which is recoverable. The 16-bit DAC records a fifo_status of 0 (not first quarter), which is not recoverable.

----------------------------------------------------------------------------------------------
Thu 11/10:

Removed 7th ADC from chassis, reverted h1iopoaf0.mdl model back to earlier version.
Reseated and screwed down all cards.

----------------------------------------------------------------------------------------------
Fri 11/11:

09:55 PST: replaced DC power supply in chassis.
Reseated ADC, DAC, BIO and interface cards.

Replaced IO chassis with the one from x1psl0; onestop/BIO/ADC/DAC/interface/ribbon cables transferred over.

Timing and OneStop cards came with the new chassis.

Replaced 1st ADC set (ADC+ribbon+i/f) from x1psl0.

----------------------------------------------------------------------------------------------
Mon 11/14:

went back to original chassis.

installed new one stop card in chassis (original card had problems with heat-sink)

swapped first and second ADC card sets (ADC+ribbon+i/f)

replaced 18bit DAC with modified card

replaced SFP on fanout, and returned to second slot. Replaced SFP on timing slave.

pulled power cords out of h1oaf0 to get it to boot.

Replaced long-run one-stop fiber between computer and chassis.



We are currently testing whether the fiber change has helped; it has been running for 4 hours so far.

The attached plot shows the last 14 days' trend of the h1iopoaf0 STATE_WORD, showing the ADC+DAC+DK+OVF events.

Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 17:02, Monday 14 November 2016 (31477)
Ops Day Shift Summary
TITLE: 11/15 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Started the shift with the IFO locked at DC_READOUT. Shortly after the morning meeting the h1oaf frontend was powered down to try another hardware fix. When it was powered back up it would not boot and glitched the Dolphin network. Upon recovery, Jeff, Sheila and Cheryl restored the positions of the optics. I ran through an initial alignment and once again had trouble locking the SRC. In this case it appeared to be a misalignment of SR3 (see Jenne's alog). The next difficulty was with locking ALS DIFF. At first we thought that this was due to a high ETMY bounce mode, however the problem persisted after it was damped. Sheila and Jeff then found wrong settings in two LSC DARM filter banks. When these were fixed we were able to get through locking ALS DIFF. I then handed the IFO over to Corey. Sheila, Jenne and Jeff are helping with further recovery.

LOG:
16:24 UTC Restarted video2. The uptime of the Beckhoff computers was pausing on the CDS overview medm. The h1oaf timing bit is flashing.
17:11 UTC Jim B. took h1oaf down. Richard to CER to work on h1oaf IO chassis. Verbal alarm log is constantly repeating 'TypeError while running test: TCS'.
17:29 UTC Kyle to LVEA to assess requirements for bakeout of vacuum gauge valves
17:38 UTC Kyle back
17:46 UTC Stopped verbal alarms to see if restarting would clear the repeating error. It now crashes when I try to start it with: 'NameError: global name 'to_terminal' is not defined'.
18:00 UTC Richard to LVEA to take pictures by PSL rack while Jim B. restarts software.
18:04 UTC h1oaf frontend computer will not boot. Dolphin network has glitched.
18:45 UTC Frontends are being brought back. A complete power down of h1oaf with the power cord removed was necessary for it to boot on powerup. I have successfully restarted verbal alarms.
18:51 UTC Richard done in CER. Jeff, Sheila, Cheryl bringing optics back. Jason bringing PSL back.
19:21 UTC Starting initial alignment
19:50 UTC Cheryl had to move IM4 to lock X arm on IR in initial alignment
20:21 UTC h1oaf crashed. Used script to restart.
20:33 UTC h1oaf crashed
20:36 UTC Richard to CER
20:49 UTC Richard back
20:50 UTC Filiberto to LVEA to restart TCS lasers
20:57 UTC Initial alignment done
21:02 UTC Filiberto back
21:19 UTC Losing lock at ALS DIFF. Jim W. adjusted the fiber polarization in the MSR.
21:22 UTC Fire department through gate
21:25 UTC Still losing lock at ALS DIFF
22:00 UTC Fire department done at LSB and leaving site
23:03 UTC Chandra to CP3 to fill with valve at 100% open
H1 PSL
jason.oberling@LIGO.ORG - posted 15:00, Monday 14 November 2016 - last comment - 16:49, Monday 14 November 2016(31473)
HPO Pump Diode Decay

Attached is a 180-day minute trend that shows the decay of the output power of the 4 HPO pump diode boxes.  Everything looks as expected except for one thing: the decay for diode box #1 (H1:PSL-OSC_DB1_PWR) seems to have accelerated since Peter's adjustment of the HPO diode currents on October 6 (alogged here).  Doing a quick by-eye comparison, the diode box lost ~0.9% over 55 days (from 05-25-2016 to 07-18-2016); more recently it lost the same ~0.9% over only 35 days (from 10-12-2016 to 11-14-2016).  This is likely simply due to the age of the diode box; I can find no entry in the LHO alog where any of the HPO diode boxes were swapped (and we have not performed any HPO diode box swaps since I joined the PSL team), so it is likely these are still the original diode boxes installed with the PSL in 2011.  Will keep an eye on this.
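
Making the by-eye rate comparison explicit (same numbers as above):

    loss_pct = 0.9
    for label, days in [('05-25 to 07-18', 55), ('10-12 to 11-14', 35)]:
        print('%s: %.3f %%/day' % (label, loss_pct / days))
    # -> ~0.016 %/day before the adjustment vs ~0.026 %/day after (~1.6x faster)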

Looking at the overall trends I think we should be good for the duration of ER10/O2a; the HPO diode box currents will have to be adjusted before the start of O2b.

Images attached to this report
Comments related to this report
peter.king@LIGO.ORG - 16:49, Monday 14 November 2016 (31476)
These are the original diode boxes from the H2 installation (October 2011). The oscillator was running for quite some time in the H2 PSL Enclosure before being moved to the H1 PSL Enclosure (which was a consequence of the 3rd IFO decision).