H1 ISC (ISC)
stefan.ballmer@LIGO.ORG - posted 16:40, Friday 31 July 2015 (20114)
Power drop on 9MHz oscillator
Evan, Kiwamu, Stefan

Evan reported that he observed a sudden drop in several AS power signals, including 
H1:ASC-AS_C_SUM_OUT_DQ
H1:LSC-ASAIR_B_RF90_I_ERR_DQ
around Jul 31 2015 11:23:32 UTC

We found a corresponding drop of the 9MHz main oscillator feed, monitored by:
H1:ISC-RF_C_REFLAMP9M1_OUTPUTMON
H1:ISC-RF_C_AMP9M1_OUTPUTMON
Images attached to this report
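
For anyone wanting to cross-check, the drop can be reproduced by pulling the four channels above around the reported time. Below is a minimal sketch using gwpy; the NDS host and the exact window are assumptions for illustration, not part of the original report.

# Sketch only: fetch the AS power signals and 9 MHz amplifier monitors
# around the reported drop and plot them together.
from gwpy.timeseries import TimeSeriesDict
from gwpy.time import to_gps

channels = [
    'H1:ASC-AS_C_SUM_OUT_DQ',
    'H1:LSC-ASAIR_B_RF90_I_ERR_DQ',
    'H1:ISC-RF_C_REFLAMP9M1_OUTPUTMON',
    'H1:ISC-RF_C_AMP9M1_OUTPUTMON',
]

t0 = to_gps('2015-07-31 11:23:32')   # reported drop time (UTC)
data = TimeSeriesDict.get(channels, t0 - 300, t0 + 300,
                          host='nds.ligo-wa.caltech.edu')   # assumed NDS2 host

plot = data.plot()
plot.savefig('9mhz_power_drop.png')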
H1 CDS
david.barker@LIGO.ORG - posted 16:35, Friday 31 July 2015 - last comment - 18:23, Friday 31 July 2015(20113)
channels which differ between the two sites' science frames

Attached is a file listing the channel differences between the L1 and H1 science frames.

Non-image files attached to this report
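
For reference, a comparison like the attached file (and the '<' / '>' listing in the comment below) is essentially a set difference over (channel, rate) pairs. Here is a minimal sketch, assuming each site's channel list is exported as a plain-text file of "name rate" lines; the filenames are hypothetical.

# Sketch only: print channels present at one site but not the other,
# mirroring diff-style output ('<' = H1 only, '>' = L1 only).
def load(path):
    with open(path) as f:
        return dict(line.split() for line in f if line.strip())

h1 = load('H1_science_channels.txt')   # hypothetical filenames
l1 = load('L1_science_channels.txt')

# Strip the 'H1:'/'L1:' prefix so equivalent channels at the two sites line up.
h1_names = {name[3:]: rate for name, rate in h1.items()}
l1_names = {name[3:]: rate for name, rate in l1.items()}

for key in sorted(set(h1_names) - set(l1_names)):
    print('< H1:%s %s' % (key, h1_names[key]))
for key in sorted(set(l1_names) - set(h1_names)):
    print('> L1:%s %s' % (key, l1_names[key]))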
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:23, Friday 31 July 2015 (20121)DAQ, ISC, SEI, SUS, SYS
It looks like almost all of the non-PEM differences can be explained by differences in hardware, control scheme/choices, and channels that have not been deprecated due to little-to-no maintenance. 

LHO has a beam rotation sensor, and LLO does not.
< H1:ISI-GND_BRS_ETMX_REF_OUT_DQ 256
< H1:ISI-GND_BRS_ETMX_RY_OUT_DQ 256

LHO uses a different tidal scheme (T1400733).
< H1:LSC-Y_ARM_OUT_DQ 256
< H1:LSC-Y_TIDAL_OUT_DQ 256

LHO has not yet updated the CAL-CS calibration for IMC-F, so it remains in OAF.
< H1:OAF-CAL_IMC_F_DQ 16384

LLO has PI damping and LHO does not?
> L1:LSC-X_EXTRA_AI_1_OUT_DQ 2048
> L1:LSC-X_EXTRA_AI_2_OUT_DQ 2048
> L1:LSC-X_EXTRA_AI_3_OUT_DQ 2048
> L1:LSC-Y_EXTRA_AI_1_OUT_DQ 2048
> L1:LSC-Y_EXTRA_AI_2_OUT_DQ 2048
> L1:LSC-Y_EXTRA_AI_3_OUT_DQ 2048

Regardless of what was decided via the formal process, Daniel hasn't visited LLO recently to force-reduce the ODC data rate (as was done at LHO).
< H1:ODC-X_CHANNEL_OUT_DQ 16384
< H1:ODC-Y_CHANNEL_OUT_DQ 16384
< H1:PSL-ODC_CHANNEL_OUT_DQ 16384
---
> L1:ODC-X_CHANNEL_OUT_DQ 32768
> L1:ODC-Y_CHANNEL_OUT_DQ 32768
> L1:PSL-ODC_CHANNEL_OUT_DQ 32768

LLO has not completely deprecated OAF for all of its LSC DOF calibrations.
> L1:OAF-CAL_CARM_X_DQ 16384
> L1:OAF-CAL_DARM_DQ 16384
> L1:OAF-CAL_MICH_DQ 16384
> L1:OAF-CAL_PRCL_DQ 16384
> L1:OAF-CAL_SRCL_DQ 16384
> L1:OAF-CAL_XARM_DQ 16384
> L1:OAF-CAL_YARM_DQ 16384

LLO uses a different ISS second loop scheme (or hasn't deprecated one of its attempts that is no longer used)?
> L1:PSL-ISS_SECONDLOOP_PD_14_SUM_OUT_DQ 16384
> L1:PSL-ISS_SECONDLOOP_PD_58_SUM_OUT_DQ 16384

LLO has an HV ESD driver on its ITMX; LHO does not.
> L1:SUS-ITMX_L3_ESDAMON_DC_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_LL_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_LR_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_UL_DQ 256
> L1:SUS-ITMX_L3_ESDAMON_UR_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_CAS_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_HVN_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_HVP_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_LVN_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_LVP_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_MCU_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM1_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM2_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM3_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM4_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM5_DQ 256
> L1:SUS-ITMX_L3_ESDDMON_TM6_DQ 256
H1 CDS (SUS)
david.barker@LIGO.ORG - posted 16:32, Friday 31 July 2015 - last comment - 17:12, Friday 31 July 2015(20111)
SUS ITMX software watchdog trip during period of bad IFO state

During the period of bad IFO state which started at approximately 12:12 PDT, the ITMX SWWD tripped the SEI-B3 system. In the plot below, Ch1 shows the ITMX SUS top-stage F1 OSEM signal, which rapidly exceeded its 95 mV trip limit. This started the SUS SWWD counter. Five minutes later, Ch2 shows the signal sent to the SEI-B3 system dropping to zero (the BAD state). This started the SEI SWWD counter, shown in Ch3 as going from one (GOOD) to three (1ST COUNTER). Four minutes later the SEI SWWD transitioned to four (2ND COUNTER), and one minute later it tripped, which zeroed all SEI-B3 DAC outputs. This shows up on the RMS plot as a slightly elevated signal, but the SUS continued to be rung up. At 12:27 the operator intervened and manually panic'ed the SUS SWWD, at which point the RMS started decreasing.
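
For illustration only, the two-stage timing described above can be written down as a toy calculation. The real SWWD runs in the front-end IOP code; the threshold and durations below are simply the numbers quoted in this report.

# Toy sketch of the SWWD sequence described above (not the real implementation).
SUS_TRIP_MV    = 95.0       # SUS top-stage OSEM RMS trip level
SUS_COUNT_SEC  = 5 * 60     # SUS counter before the BAD flag is sent to SEI
SEI_STAGE1_SEC = 4 * 60     # SEI 1ST COUNTER duration (state 3)
SEI_STAGE2_SEC = 1 * 60     # SEI 2ND COUNTER duration (state 4) before DAC kill

def swwd_timeline(osem_rms_mv, t_exceed):
    """Return (t_bad_flag, t_dac_kill) if the RMS stays above the trip level."""
    if osem_rms_mv <= SUS_TRIP_MV:
        return None
    t_bad  = t_exceed + SUS_COUNT_SEC                   # SUS signals BAD to SEI
    t_kill = t_bad + SEI_STAGE1_SEC + SEI_STAGE2_SEC    # SEI zeroes its DAC outputs
    return t_bad, t_kill

# Matches the report: signal exceeds 95 mV at ~12:12, SEI-B3 DACs zeroed ~10 min later.
print(swwd_timeline(120.0, 0))   # -> (300, 600)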

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 17:12, Friday 31 July 2015 (20116)

Here is a timeline of the CDS issues we had today (all times local):

10:37 h1oaf0 models stop running

11:11 h1oaf0 models restarted

11:15 h1calex and h1caley models restarted with IRIGB channels added

11:55 DAQ restarted due to cal model changes

12:12 SUS ITMX rings up

12:22 SEI-B3 SWWD trip

12:27 SUSB123 manual trip of DAC

LHO VE
kyle.ryan@LIGO.ORG - posted 16:22, Friday 31 July 2015 - last comment - 18:12, Friday 31 July 2015(20112)
X-end -> ~100C bake of RGA at BSC5 over the weekend
Kyle, Gerardo

In and out of X-end VEA 

~1030 - 1230 hrs. local

Added 1.5" O-ring valve in-series with existing 1.5" metal angle valve -> Wrapped RGA with heater tapes and foil -> Elevated pump cart off of floor (resting on foam) -> NW40 inlet 50 L/s turbo (no vent valve) backed by aux. cart (no vent valve) -> Begin 100C bake 

In and out of X-end VEA between 

1405 - 1425 hrs. local, 

1450 - 1455 hrs. local and 

1600 - 1605 hrs. local. 

NOTE:  Will need to enter X-end VEA to make adjustments Saturday morning
Comments related to this report
kyle.ryan@LIGO.ORG - 18:12, Friday 31 July 2015 (20120)
~1710 -1800 hrs. local

I realized that I had a CFF inlet 50 L/s turbo on the shelf as well as a UHV 1.5" valve -> Swapped out the 1.5" O-ring valve and NW40 inlet turbo for their drier cousins -> resumed bakeout
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:11, Friday 31 July 2015 (20110)
Ops Day Shift Summary
LVEA: Laser Hazard
IFO: Locked
Observation Bit: Commissioning  

07:45 Cleared IPC errors on H1SUSTMSY & H1IOPASCO
08:00 IFO locked at Low Noise, 24.2w, 68Mpc 
08:35 Karen – Cleaning in the LSB cleaning area
09:16 LockLoss – Unknown
09:17 Kiwamu & Sudarshan – Going into LVEA to make ISS Outer Loop Servo measurements
09:19 Robert – Going into LVEA to setup for PEM injection
09:22 Jason – Going into LVEA to check PR3 OpLev
09:25 Kyle & Gerardo – Going to End-Y to get pump cart
09:30 ISI-ITMY WD trip – Robert doing PEM setup work in Beer Garden
09:53 Kyle & Gerardo – Finished at End-Y – back to CS
10:00 Kyle & Gerardo – Going to End-X to connect pump cart
10:17 Kyle – In LVEA to take valve off Pump Cart
10:20 Kyle – Out of LVEA
10:27 TJ – Going to End-X to reset BRS
11:07 Dave – Restarting models after OAF crash
11:12 Stefan – Going into MSR to work on atomic clock
11:27 Stefan – Out of MSR
11:30 Sudarshan – In LVEA plugging in monitoring equipment
11:50 IFO Locked at Low Noise, 24.1w, 65Mpc
11:55 Dave – DAQ restart – NOTE: DAQ restart did not take IFO out of lock
12:12 IFO LockLoss – DAQ restart/Guardian/etc
12:36 Kyle & Gerardo – Back from End-X
12:39 Vendor on site to restock machines
12:47 Bubba – Test fire pumps
14:28 Dave & Stefan – At End-Y to install attenuator 
14:50 Dave & Stefan – Going to Mid-Y to power cycle PEM
15:05 Dave & Stefan – Back from Mid-Y
15:15 Dave & Stefan – Going to End-X to install attenuator
15:30 Dave & Stefan – Back from End-X
LHO FMCS
bubba.gateley@LIGO.ORG - posted 16:06, Friday 31 July 2015 (20109)
Monthly fire pump test
Today I performed the monthly fire pump test. All numbers and flows were acceptable.
H1 AOS
david.barker@LIGO.ORG - posted 15:52, Friday 31 July 2015 - last comment - 15:56, Friday 31 July 2015(20107)
attenuated IRIG-B signal at end stations, rebooted h1pemmy

Robert, Vinny, Stefan C, Dave:

The recently installed IRIG-B signals were clipping the ADC due to the 10X gain in the PEM AA chassis. We installed a 10X attenuator in the line at the IRIG-B chassis in both end stations; the IRIG-B signal range is now 0 - 5000 counts.

We also power cycled h1pemmy to see if this would remove the 64Hz noise being seen in the SEIS channels there. The procedure was: stop all models, power down h1pemmy, power down IO Chassis, power up IO Chassis, power up h1pemmy. We got lucky on the auto-code start and the timing came back OK with no further IOP restarts required.

Unfortunately this power cycle does not seem to have fixed the noise.

Comments related to this report
david.barker@LIGO.ORG - 15:56, Friday 31 July 2015 (20108)

The attenuated IRIG-B signals are shown, the first covering 1/16th of a second at 16 kHz.

Images attached to this comment
LHO VE
bubba.gateley@LIGO.ORG - posted 15:48, Friday 31 July 2015 (20104)
Beam Tube Washing
Scott L. Ed P. Rodney H.

7/30/15
The crew relocated lights, fans and cords to begin cleaning the next section starting at HSW-2-043. 51 meters of tube cleaned, ending 12 meters east of HSW-2-041.
I repaired several of the extension cords and a cord on the vacuum used for cleaning the support tubes. 

7/31/15
Ed took today off. 
The support vehicles and all related equipment were relocated after lunch and a total of 61 meters of tube cleaned today ending 13.7 meters east of HSW-2-038.

The past ~200 meters of tube have been especially dirty with feces and urine. We have been giving these areas extra attention.

1239.5 meters of Y-Arm cleaned to date.
Non-image files attached to this report
H1 CDS (CAL, DAQ)
david.barker@LIGO.ORG - posted 15:48, Friday 31 July 2015 (20106)
IRIG-B channels added to end station cal models, DAQ and DMT

Stefan, Jim, Andres, Filiberto, Dave

WP5392

New h1calex and h1caley models were installed today; they read out ADC3-CHAN30 as the IRIG-B signal. These signals were added to the commissioning frame and the frame broadcaster. The DAQ was restarted at 11:55 PDT to install this change.

H1 General (DAQ, GRD, IOO, PSL, SEI, SUS)
cheryl.vorvick@LIGO.ORG - posted 13:43, Friday 31 July 2015 - last comment - 22:50, Friday 31 July 2015(20103)
something happened coincident with a lock loss, and we had a number of minutes of a bad IFO state

Something happened at lock loss and the IFO did not reach the defined DOWN state.

Symptoms:

Dave, Jaime, Sheila, operators, and others are investigating.

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 22:50, Friday 31 July 2015 (20105)

Let me try to give a slightly more detailed narrative as we were able to reconstruct it:

  • At around 11:55, Dave initiated a DAQ restart.  At this point the IFO was locked in NOMINAL_LOW_NOISE.
  • The DAQ restart came with a restart of the NDS server being used by Guardian.
  • The IMC_LOCK guardian node was in ISS_ON, which utilizes cdsutils.avg(), an NDS call.  When the NDS server went away, cdsutils.avg() threw a CdsutilError, which caused the IMC_LOCK node to go into ERROR, where it waits for operator reset (bug number 1).
  • The ISC_LOCK guardian node, which manages the IMC_LOCK node, noticed that the IMC_LOCK node had gone into ERROR and itself threw a notification.
  • No one seemed to notice that a) the IMC_LOCK node was in error and b) that the ISC_LOCK node was complaining about it (bug number 2)
  • At about 12:12 the IFO lost lock.
  • The ISC_LOCK guardian node relies on the IMC_LOCK guardian node to report lock losses.  But since the IMC_LOCK node was in error it wasn't doing anything, which of course includes not checking for lock losses.  Consequently the ISC_LOCK node didn't know the IFO had lost lock; it didn't respond, didn't reset to DOWN, and all the control outputs were left on.  This caused the ISS to go into oscillation, and it drove multiple suspensions to trip.

So what's the take away:

bug number 1: guardian should have caught the NDS connection error during the NDS restart and gone into a "connection error" (CERROR) state.  In that case, it would have continually checked the NDS connection until it was re-established, at which point it would have continued normal operation.  This is in contrast to the ERROR state where it waits for operator intervention.  I will work on fixing this for the next release.
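
For concreteness, here is a minimal sketch of the retry behaviour being proposed. This is not the actual guardian code; it just wraps a cdsutils.avg() call and keeps retrying while the NDS server is unavailable, instead of letting the exception put the node into ERROR.

# Sketch only: retry an NDS-backed average until the server comes back.
import time
import cdsutils

def avg_with_retry(seconds, channel, retry_interval=5):
    """Keep retrying cdsutils.avg() while the NDS server is unavailable."""
    while True:
        try:
            return cdsutils.avg(seconds, channel)
        except Exception as e:   # e.g. CdsutilError when the NDS server goes away
            # In guardian proper this would surface as a CERROR/notification
            # rather than a silent loop; here we just wait and try again.
            print('NDS unavailable (%s); retrying in %d s' % (e, retry_interval))
            time.sleep(retry_interval)

# usage inside a state's run() method (channel name is hypothetical):
# iss_level = avg_with_retry(5, 'H1:PSL-ISS_SECONDLOOP_SIGNAL_OUTMON')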

bug number 2: The operators didn't know about or didn't respond to the fact that the IMC_LOCK guardian had gone into ERROR.  This is not good, since we need to respond quickly to these things to keep the IFO operating robustly.  I propose we set up an alarm in case any guardian node goes into ERROR.  I'll work with Dave et al. to get this set up.

As an aside, I'm going to be working over the next week to clean up the guardian and SDF/SPM situation to eliminate all the spurious warnings.  We've got too many yellow lights on the guardian screen, which means that we're now in the habit of just ignoring them.  They're supposed to be there to inform us of problems that require human intervention.  If we just leave them yellow all the time they end up having zero effect and we're left with a noisy alarm situation that everyone just ignores.

thomas.shaffer@LIGO.ORG - 16:58, Friday 31 July 2015 (20115)

A series of events led the ISC_LOCK Guardian to not understand that there was a lockloss.

  1. DAQ restart by Dave at 11:55 PDT
  2. IMC_LOCK went into Error with a "No NDS server available" from the DAQ restart
  3. This was not seen by the operator, or was dismissed as a result of the restart.
  4. Lockloss at 12:12 PDT
  5. ISC_LOCK did not catch this lockloss because IMC_LOCK was still in Error.
  6. Since ISC_LOCK thought it was still in full lock, it was still actuating on many suspensions and tripped some watchdogs (like Dave's alog 20111)
  7. ISC_LOCK was brought to DOWN after realizing the confusion.

To prevent this from happening in the future, Jamie will have Guardian continue to wait for the NDS server to reconnect, rather than stopping and waiting for user intervention before becoming active again.  I also added a verbal alarm for Guardian nodes in Error to alert Operators/Users that action is required.
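
A sketch of what such an alarm check could look like is below; it assumes each guardian node exposes an H1:GRD-<NODE>_OK EPICS indicator, which is an assumption about the channel naming, not something verified here.

# Sketch only: periodically poll guardian nodes and flag any that need attention.
import time
import epics   # pyepics

NODES = ['ISC_LOCK', 'IMC_LOCK']   # nodes to watch

def watch(poll=10):
    while True:
        for node in NODES:
            ok = epics.caget('H1:GRD-%s_OK' % node)   # assumed channel name
            if ok is not None and not ok:
                # Hand this string to whatever speaks/posts operator alarms.
                print('ALARM: guardian node %s needs attention' % node)
        time.sleep(poll)

if __name__ == '__main__':
    watch()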

(If I missed something here please let me know)

H1 DetChar (DetChar, ISC)
gabriele.vajente@LIGO.ORG - posted 12:59, Friday 31 July 2015 (20102)
Coherence for latest quiet data

I ran BruCo on the quiet data period reported here. The report can be found at the following link:

https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1122369690/

I'll look into the table and provide a summary later today.

H1 DAQ (DAQ)
stefan.countryman@LIGO.ORG - posted 12:31, Friday 31 July 2015 (20101)
Recalibrated timing diagnostic Cs-III Cesium Atomic Clock Frequency and Phase in MSR
Between 11:10 and 11:25 I adjusted the frequency offset and the phase offset of the timing diagnostic system's cesium clock in the MSR so that the 1PPS time difference between it and our GPS-backed timing distribution system equals 0 ± 100 ns and the long-term drift rate is near zero.

Last time I adjusted the timing offset (3 weeks ago), I subtracted 68e-15 from the factory-set frequency offset of 1078e-15; this was apparently the wrong direction, as evidenced by a very visible doubling in the cesium clock's drift rate in the weeks since that adjustment. Since the last measurement of the drift rate was calculated from a month's worth of data and was ostensibly accurate, I went ahead and changed the frequency offset to (1078+68)e-15. The manufacturer's instructions are vague on the correct adjustment direction and the tech support reps aren't very knowledgeable, so I'm going to write up a short technical paper on Cs-III calibration with specific instructions for how to go about this.
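
For reference, the size of the needed correction follows directly from the observed 1PPS drift: the fractional frequency offset is just (net drift)/(elapsed time). A worked sketch with made-up numbers, not the actual measurement:

# Illustrative arithmetic only (numbers are invented, not the measurement).
drift_ns   = 170.0        # assumed net 1PPS drift accumulated over the interval
interval_s = 30 * 86400   # ~one month of data
fractional_offset = drift_ns * 1e-9 / interval_s
print('correction ~ %.0fe-15' % (fractional_offset * 1e15))   # ~66e-15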

I zeroed the phase using a 1PPS signal from the MSR master. The GPS timing jitter is O(10ns), which is well within the 100ns resolution of the CsIII phase syncing utility. I used a short (3') BNC cable to minimize delay.

I also went ahead and installed the most recent Monitor3 software on the Lenovo X61 Thinkpad in the MSR and put the Cs-III instruction manual on that computer's desktop, so it should be easy for someone else to make finer adjustments to the frequency offset once we have another month or two of 1PPS drift data. 
H1 CDS
david.barker@LIGO.ORG - posted 11:27, Friday 31 July 2015 (20098)
h1oaf0 timing error, restart of all models

At 10:37 PDT the IOP model on h1oaf0 received a timing error, and all models stopped running at this time (h1pemcs, h1oaf, h1tcscs, h1odcmaster and h1calcs). The calculated IFO range went to a high value and the DARM FOM display went noisy. Only one external model receives IPC from the h1oaf0 computer: h1omc has a dolphin channel from the calcs, and these channels went into 100% error.

There was PEM work in the CER around this time, which could have been the cause of the glitch. All the models were restarted and all is running correctly now. The intent bit was not set during this time.

H1 SEI
thomas.shaffer@LIGO.ORG - posted 11:04, Friday 31 July 2015 (20097)
BRS software restart

The BRS software crashed on the 25th, so today I went to EX to restart the code. I kept the damper commented out, so the damper is currently OFF. I will turn it back on when it seems like it will be calm at EX for a bit (the vac team is down there right now).

H1 SEI
thomas.shaffer@LIGO.ORG - posted 09:52, Friday 31 July 2015 - last comment - 10:19, Friday 31 July 2015(20095)
ISI_ETMY ST1/2 WD trip after lockloss, possibly Tidal?

Within a minute after a lockloss, the ISI ETMY ST1/2 WDs tripped. The MEDM said that it was the ST1 T240, and the plots showed a slow drift up to the WD trip level (plot attached). HEPI moved 170 um after lockloss before tidal began to bleed down. A further investigation is ongoing by the SEI team.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 10:19, Friday 31 July 2015 (20096)

Looks like this is a trip we will suffer whenever the drive to HEPI is large enough at the time of lockloss.  That doesn't necessarily scale with the length of the lock, but it is certainly related.  If the tide turns around and the offset given to HEPI heads back toward zero, we could be fine after a long lock stretch.

The attached 30-minute second-trend plots are around the three most recent locklosses from some low-noise state.  In each case you can see the T240 tilting towards the trip level, but only in the most recent trip was the tidal offset large enough to 'tilt' the T240 long enough to hit the trigger.  The HEPI output from ISC is in nm.

Is the problem that the bleed-off is too fast?  It seems this is 2 um/sec.  Is there horizontal-to-tilt coupling that needs to be addressed?  Is it a problem anyway?  The T240 will likely take a couple of minutes at worst to settle, and another minute for the ISI to re-isolate.

Images attached to this comment
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:38, Friday 31 July 2015 (20094)
08:30 Meeting Minutes
CDS: EE shop work to prep for Ring Heater installation
PSL: Wants to tweak PR3 OpLev at first available opportunity
FAC: Continued beam tube cleaning Y-Arm   
VAC: Moving pump cart to End-X for weekend long RGA bake out

H1 INJ
evan.hall@LIGO.ORG - posted 23:22, Thursday 30 July 2015 - last comment - 18:06, Friday 31 July 2015(20078)
1821 Hz TMSX sensor spikes

Matt, Evan

Why do the TMSX RT and SD OSEMs have such huge spikes at 1821 Hz and harmonics? These spikes are about 4000 ct pp in the time series. In comparison, the other OSEMs on TMSX are 100 ct pp or less (F1 and LF shown for comparison).

Also attached are the spectra and time series of the corresponding IOP channels.

Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 02:44, Friday 31 July 2015 (20083)

On a possibly related note: in full lock, the TMSX QPDs see more than 100 times more noise at 10 Hz than the TMSY QPDs do.

From Gabriele's bruco report, the X QPDs have some coherence with DARM around 78 Hz and 80 Hz. A coherence plot is attached.

Images attached to this comment
arnaud.pele@LIGO.ORG - 12:10, Friday 31 July 2015 (20100)

It seems similar to the problem from log 12465. Power cycling the AA chassis fixed the issue at the time.

keita.kawabe@LIGO.ORG - 18:06, Friday 31 July 2015 (20118)

Quenched the oscillation for now (Vern, Keita)

We were able to clearly hear some kHz-ish sound from the satellite amplifier of TMSX that is connected to SD and RT. Power cycling (i.e. removing the cable powering the BOSEM and connecting it again) didn't fix it despite many trials.

We moved to the driver, power cycled the driver chassis, and it didn't help either.

The tone of the audible oscillation changed when we wiggled the cable on the satellite amp, but that didn't fix it.

Vern gave the DB25 connector on the satellite amp a hard whack in a direction to push the connector further into the box, and that fixed the problem for now.

Images attached to this comment