Reports until 15:34, Tuesday 01 November 2016
H1 ISC (ISC)
marc.pirello@LIGO.ORG - posted 15:34, Tuesday 01 November 2016 (31081)
ISC Readback Channels Repaired

M. Pirello, E. Castrellon, F. Clara, R. McCarthy

Per FRS Ticket 6059 and Work Permit 6284 we investigated and repaired the bad channel and restored the unit to operation.

We initially suspected the 5V linear voltage regulator in Slow Controls Concentrator #2 (S1103451) and pulled the unit for inspection, but found that the regulator's outputs were working correctly.  We replaced the suspect regulator anyway and added a heat sink to help dissipate heat from it.

Upon reinstallation, most of the bad channels were working; only one set of related channels was not.  We traced this failure back to a bent pin on EtherCAT Corner Station Chassis #3 (S1107447).  This pin corresponds to channel DIV40M, which Daniel says had not been working since its installation.  The Beckhoff module associated with these signals was behaving poorly, acting as if it had an internal short.  We pulled Chassis #3 and replaced Beckhoff module #11 on panel #9 with a brand new E1124 Digital IO Module.

After installation, other problems with this chassis surfaced, so we also replaced one of the computer modules.  After much testing we were satisfied that the chassis would work when installed.  I attached an image of all blocks green as seen from the CER.

Images attached to this report
LHO VE
kyle.ryan@LIGO.ORG - posted 15:19, Tuesday 01 November 2016 (31082)
Moved (1) ea. EH2600 BT roughing pump from X-mid to Y-mid
Need also to move (1) ea. EDP200 BT roughing pump from CS Mechanical Room to Y-mid (Part of mid-term - next 6 months - effort to establish an emergency Beam Tube rough pumping option)
LHO VE
kyle.ryan@LIGO.ORG - posted 15:15, Tuesday 01 November 2016 - last comment - 14:51, Wednesday 02 November 2016(31080)
Completed commissioning of X-end RGA (now in its nominal state awaiting CDS hook up)
WP#6255 and WP#6293 completed  

0930 hrs. local -> Valved-in RGA turbo to RGA volume and energized filament.  

1130 hrs. local -> Took scans of the RGA volume with and without cal-gases -> isolated RGA turbo from RGA volume -> combined RGA volume with X-end volume and took scans of the X-end with and without calibration gases (inadvertently dumped ~5 x 10^-4 torr*L of Krypton, i.e. 2 hrs accumulation @ 5 x 10^-8 torr*L/sec, into the site) -> vented RGA turbo and removed it from the RGA hardware -> installed a 1 1/2" UHV valve in its place -> pumped the volume between the two 1 1/2" valves to the 10^-4 torr range before decoupling and de-energizing all pumps, controllers and noise sources, with the exception of the RGA electronics, which were left energized with the fan running 24/7.

Leaving the RGA exposed to the X-end, filament off and cal-gases isolated.  Will post scan data as a comment to this entry within the next 24 hrs.
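As a quick sanity check of the krypton dose quoted in the entry above, the accumulation arithmetic can be sketched as follows (values taken from the log; the quoted ~5 x 10^-4 figure appears to be an order-of-magnitude round-off):

```python
# Krypton dose: 2 hours of accumulation at the stated cal-gas rate.
rate = 5e-8          # torr*L/s, accumulation rate quoted in the log
t = 2 * 3600         # s, accumulation time (2 hrs)
dose = rate * t
print(f"{dose:.1e} torr*L")  # 3.6e-04 torr*L, same order as the quoted ~5e-4
```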
Comments related to this report
kyle.ryan@LIGO.ORG - 14:51, Wednesday 02 November 2016 (31140)
Here are the scans from yesterday: 

Note the presence of amu 7, obviously "sourced" from the N2 cal-gas bottle.  I will need to revisit the earlier observation that amu 7 appears when the cal-gas isolation valve used with Vacuum Bake Oven C is closed, and its baffling disappearance when the isolation valve is opened.
Non-image files attached to this comment
H1 OpsInfo
jim.warner@LIGO.ORG - posted 15:14, Tuesday 01 November 2016 (31079)
Changes to SEI_CONF screen, operators should take note

I have updated the SEI_CONF configuration table to more accurately reflect our experience with higher microseism. The extremes of this table (i.e. any version of very high wind and/or microseism) are still being explored as we roll into winter, but so far the nominal "WINDY" state has been sufficient up to 1+ micron/s RMS microseism and 40 mph winds. I have also made a few of the states in SEI_CONF not "requestable", mostly states with "microseism" in the name. These states are all versions of our high-microseism configuration from O1, which only worked in low winds. They are still available, but you will have to hit the "all" button on SEI_CONF; they no longer show up in the top-level drop-down.

We also might be close to getting some epics earthquake notifications, so that information might get included on this screen in the future.

Operators should reference Jeff's alog from yesterday (31029) and my alog 30848 when trying to make decisions about seismic configuration.

Images attached to this report
H1 PSL
peter.king@LIGO.ORG - posted 14:55, Tuesday 01 November 2016 (31078)
PSL work
Temporarily installed a 2 in. diameter, 45 degree thin film polariser in the output of the pre-modecleaner.
Measured 0.342 W reflected from the polariser with the 10A-V2-SH power meter.  Measured 92.5 W transmitted
through the pre-modecleaner with the polariser removed, and with the L300W-LP power meter.  The power
stabilisation was on for both measurements.

    The output polarisation is calculated to be (1 - 0.342/92.5)*100 =  99.6% linearly polarised.

    Using the same thin film polariser in the input beam to the pre-modecleaner, 15.0 mW was measured in
reflection.  1.51 W was measured in transmission.  The power stabilisation was off for this measurement.
The input polarisation is calculated to be (1 - 0.015/1.51)*100 = 99%.  It's not obvious why it would be
this low, other than perhaps the angle of incidence alignment being off a little - a degree of freedom we
do not have - since thin film polarisers are typically somewhat sensitive to input angle.
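The two purity calculations above can be collected into a small sketch (the function name is illustrative, not part of any PSL tooling):

```python
# Linear polarization purity from a reflected/transmitted power pair,
# as computed in the log entry above.
def linear_polarization_fraction(p_reflected, p_total):
    """Percent of the beam in the desired linear polarization."""
    return (1 - p_reflected / p_total) * 100

out_purity = linear_polarization_fraction(0.342, 92.5)  # PMC output: 0.342 W / 92.5 W
in_purity = linear_polarization_fraction(0.015, 1.51)   # PMC input: 15.0 mW / 1.51 W
print(f"output: {out_purity:.1f} %, input: {in_purity:.1f} %")  # 99.6 %, 99.0 %
```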

    Another high power attenuator was installed in the beam between the two Picomotor equipped mounts.
We confirmed that we could re-lock the pre-modecleaner prior to turning off the high power oscillator to
allow for modifications to the field box(es) by Daniel and Keita.




Jason/Peter
H1 CAL
aaron.viets@LIGO.ORG - posted 14:42, Tuesday 01 November 2016 (31074)
Updated Kappa Comparison between SLM tool and GDS pipeline with minor bug fix
I applied a bug fix suggested by John Zweizig in the demodulation routine in the GDS pipeline that reduces error due to finite machine precision. After this, it appears that the kappas as computed by GDS, especially the cavity pole, are significantly less noisy, but still not in agreement with the SLM tool (See this aLOG for reference: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=30888).

Below is a table of mean and standard deviation values for the data taken from GDS, SLM, and the ratio GDS / SLM:

                    SLM mean    SLM std    GDS mean     GDS std    ratio mean   ratio std

Re(kappa_tst)         0.8920     0.0068      0.8916      0.0056        0.9995      0.0043

Im(kappa_tst)        -0.0158     0.0039     -0.0145   0.0008882        1.0013      0.0041

Re(kappa_pu)          0.8961     0.0080      0.8958      0.0057        0.9997      0.0065

Im(kappa_pu)         -0.0050     0.0056     -0.0035      0.0013        1.0015      0.0059

kappa_c               1.1115     0.0094      1.1154      0.0072        1.0035      0.0060

f_c                 354.2338     2.9305    345.6435      0.7686        0.9758      0.0084
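As a consistency check on the f_c row above, the ratio of the quoted GDS and SLM means should land close to the quoted ratio mean (they agree exactly only if the per-segment ratios are averaged, but at this precision it is a useful sanity check):

```python
# Ratio of the quoted f_c means from the table above.
slm_fc_mean = 354.2338   # Hz, SLM mean cavity pole
gds_fc_mean = 345.6435   # Hz, GDS mean cavity pole
ratio = gds_fc_mean / slm_fc_mean
print(f"{ratio:.4f}")    # close to the quoted ratio mean of 0.9758
```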

Here are covariance matrices and correlation coefficient matrices between SLM and GDS:

       Covariance                          Correlation

Re(kappa_tst)
1.0e-04 *
    0.4615    0.3157                  1.0000    0.8238
    0.3157    0.3181                  0.8238    1.0000

Im(kappa_tst)
1.0e-04 *
    0.1506    0.0007                  1.0000     -0.0216
    0.0007    0.0079                  -0.0216    1.0000

Re(kappa_pu)
1.0e-04 *
    0.6387    0.3113                 1.0000    0.6866
    0.3113     0.3219                 0.6866    1.0000

Im(kappa_pu)
1.0e-04 *
    0.3139    -0.0036               1.0000    -0.0490
   -0.0036    0.0174               -0.0490    1.0000

kappa_c
1.0e-04 *
    0.8895    0.4815               1.0000    0.7118
    0.4815    0.5144                0.7118    1.0000

f_c
    8.5876   -0.0023               1.0000   -0.0010
   -0.0023    0.5908              -0.0010    1.0000
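The correlation matrices above follow from the covariance matrices via r = cov / (sigma_1 * sigma_2); a minimal sketch using the f_c pair (note that the diagonal square roots also reproduce the stds quoted in the table):

```python
import math

# f_c covariance matrix (SLM, GDS) as quoted above.
cov = [[8.5876, -0.0023],
       [-0.0023, 0.5908]]

# Standard deviations are the square roots of the diagonal.
sigmas = [math.sqrt(cov[i][i]) for i in range(2)]  # ~2.9305, ~0.7686

# Correlation: normalize each covariance element by the two sigmas.
corr = [[cov[i][j] / (sigmas[i] * sigmas[j]) for j in range(2)]
        for i in range(2)]
print(f"{corr[0][1]:.4f}")  # -0.0010, matching the quoted correlation matrix
```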

Plots and histograms are attached.
Images attached to this report
H1 CAL (CAL)
travis.sadecki@LIGO.ORG - posted 14:33, Tuesday 01 November 2016 - last comment - 10:18, Wednesday 02 November 2016(31077)
PCal working standard damaged

During the windstorm yesterday, the PCal team attempted to complete end station calibrations at both ends.  The calibration for EY went off without a hitch (results to come in a separate aLog).  However, while setting up for the EX calibration, I dropped the working standard from the top of the PCal pylon onto the floor of the VEA.  The working standard assembly ended up in 3 pieces: the integrating sphere, one spacer piece, and the PD with the second spacer piece.  Minor damage was noted, mostly to the flanges of the integrating sphere and spacer pieces where the force of the fall had pulled the set screws through the thin mating flanges.  I cleaned up and reassembled the working standard assembly and completed the end station calibration.  Worried that some internal damage had occurred to the PD or integrating sphere, I immediately did a ratios measurement in the PCal lab.  The results showed that the calibration of the working standard had changed by ~2%, which is at the edge of our acceptable error.  As a result of this accident, we are currently working to put together a new working standard assembly from PCal spares.  Unfortunately this means that we will lose the calibration history of this working standard and will start fresh with a new standard.  We plan to do frequent (~daily) ratios measurements of the new working standard in the PCal lab in order to establish a new calibration trend before the beginning of O2.

Comments related to this report
vernon.sandberg@LIGO.ORG - 10:18, Wednesday 02 November 2016 (31130)CAL

Opened FRS Ticket 6576 - PCal working standard damaged.

H1 CDS
david.barker@LIGO.ORG - posted 14:22, Tuesday 01 November 2016 (31076)
h1guardian0 reboot

WP6289 Dave

During the model shutdown due to the h1oaf0 ADC problem, h1guardian0 was rebooted (it had been up 28 days). Yesterday the INJ_TRANS node needed a restart to permit hardware injections, so we felt a reboot was in order.

All nodes came back automatically and correctly.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 14:18, Tuesday 01 November 2016 (31075)
added ADC card to h1oaf0, crashed a bunch of front ends, took ADC card out of h1oaf0

WP6287. Richard, Jim, Dave:

This morning between 10:18 and 11:20 PDT we installed a seventh ADC card in the h1oaf0 IO Chassis for PEM expansion. Unfortunately this ADC (a PMC card on a PMC-to-PCIe adapter) has a fault which prevents the front end computer from booting. In fact, with the One-Stop fiber optic cable attached, h1oaf0 did not output anything on its VGA graphics port, so not even the BIOS was shown. When the fiber was disconnected, the computer booted.

When h1oaf0 was powered up, it glitched the Dolphin network, even though it had been shut down correctly. This glitched all the Dolphined front ends in the corner station, which now include the PSL. While we were removing the new ADC card from the IO Chassis, I restarted the PSL models. On the second h1oaf0 restart the PSL was again shut down. At that point we restarted all the corner station models (starting with the PSL).

We did make two changes to the h1oaf0 IO Chassis which we left in:

H1 CDS
david.barker@LIGO.ORG - posted 13:59, Tuesday 01 November 2016 - last comment - 20:24, Tuesday 01 November 2016(31071)
clean power cycle of h1iscex did not reverse ADC DC level shift

WP6288 Dave, Jim:

We cleanly power cycled h1iscex and its IO Chassis this morning between 11:45 and 12:06 PDT. We were not able to reproduce the slight offset on an ADC channel (see attached). Note this channel is actually only seeing ADC bit-noise, so the offset is in the micro-volt level.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 20:24, Tuesday 01 November 2016 (31097)CDS, OpsInfo

Sheila, Jeff, Ed

This was an attempt to fix the drop in green arm power that happened last Sunday (30884).

Since it didn't work, operators will continue to see that green power from the X arm is low. 

If this can't be fixed, we can just rescale the normalization. 

H1 ISC
cheryl.vorvick@LIGO.ORG - posted 12:48, Tuesday 01 November 2016 (31070)
shutter control medm - a partial fix

Moved the isctey green beam shutter (Y, B) to the position matching the isctex green beam shutter; the name is defined outside of medm, however, so it still says green fiber shutter.   Info attached.

Images attached to this report
Non-image files attached to this report
H1 OpsInfo (SEI)
jim.warner@LIGO.ORG - posted 11:52, Tuesday 01 November 2016 - last comment - 16:02, Tuesday 01 November 2016(31067)
BRSY recentered, out of commission for a couple more hours

BRSY has been continuing its slow drift off of the photodiode, and was about 200-300 counts from the edge, so this morning I went to EY to try to recenter it. I think I was successful, but it will need a couple hours to tell. Right now it's still rung up pretty badly, so we will need to wait for it to damp down on its own a bit before trying to re-engage it. For now, operators should use one of the seismic configurations that doesn't use BRSY.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 15:55, Tuesday 01 November 2016 (31086)

Looks like BRSY is closer to center now (at ~ -3000) than before, but given the current drift of ~1500 cts/week I didn't get as much margin before the next adjustment as I'd prefer. Will probably have to do this again in ~2 months.

Images attached to this comment
krishna.venkateswara@LIGO.ORG - 16:02, Tuesday 01 November 2016 (31087)

Remember, it will probably drift up over the next 1-2 days because of the slow thermal equilibration, probably ending up above 3k counts. I think that is very good. Good job, you have mastered the BRS!

H1 TCS
kiwamu.izumi@LIGO.ORG - posted 11:20, Tuesday 01 November 2016 - last comment - 12:34, Monday 07 November 2016(31064)
CO2Y shut off for unknown reason

Patrick, Kiwamu,

This morning, Patrick found that CO2Y was not outputting any laser power. In the end we could not figure out why it had shut off. The laser is now back on.

[Some more details]

I thought this was a return of the faulty behavior that we were trying to diagnose in early October (30472). However, neither looking at the front panel of the laser controller nor trending the warning/alarm states showed us anything conclusive. So, no conclusion again.

When I went to the floor and checked the front panel, no red LED was lit. The only unusual thing was the GATE LED, which was off. Pressing the red gate button then brought the GATE LED back to green as expected. This could be an indication that the IR sensor momentarily went to the fault state and came back normal, leaving the laser shut off. In this scenario the IR sensor does not latch any LEDs, which is why I thought this could be it. However, looking at the trend around the time the laser went off, I did not find any alarm flags raised at all. Even if it were a fast transient in the IR sensor, I would expect to see it in the trend. So these two observations together can't support the IR sensor scenario. Another plausible scenario is that somebody accidentally hit the gate button, resulting in no laser output.

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 11:24, Tuesday 01 November 2016 (31065)

I also went to the chiller and confirmed no error there - water level was mid-way (which I topped off), all seemed good.

alastair.heptonstall@LIGO.ORG - 11:40, Tuesday 01 November 2016 (31066)

That certainly sounds like the IR sensor.  Unfortunately we don't currently have analogue readout from that channel, or a good error reporting system.  We are already planning on fixing this with a new version of the controller that we should be getting ready for post O2 install.

Has there been a temperature change in the LVEA recently?  And is the Y-arm laser power a bit higher than before, though not as high as during your recent testing?  I'm just wondering what else could be bringing this sensor close to its tripping point.

kiwamu.izumi@LIGO.ORG - 12:08, Tuesday 01 November 2016 (31068)

Alastair, if this was due to the IR sensor, how do you explain the fact that it didn't show up in ITMY_CO2_INTRLK_RTD_OR_IR_ALRM? Is it so fast that the digital system cannot record the transient?

alastair.heptonstall@LIGO.ORG - 12:13, Tuesday 01 November 2016 (31069)

I don't understand that.  Even if it doesn't latch the laser off, it should still show up on that channel.   Is it possible that the chassis itself got a brief power glitch?  If it was turned off/on momentarily, that would also put the laser into this state.

patrick.thomas@LIGO.ORG - 13:14, Tuesday 01 November 2016 (31072)
From trends the laser tripped off around 15:52 UTC this morning. This was well before the work on h1oaf took it down.
betsy.weaver@LIGO.ORG - 12:34, Monday 07 November 2016 (31287)

It's very possible that the Tuesday maintenance activity involving IO chassis hardware work, which may or may not have triggered the Dolphin network glitch and the Beckhoff issues that lasted most of the day Tuesday, is what caused this particular TCS laser issue.  It was compounded by the later h1oaf work that day, which caused other chiller trips.  The cause of this full saga is TBD...

H1 ISC
corey.gray@LIGO.ORG - posted 00:11, Tuesday 01 November 2016 - last comment - 10:47, Tuesday 01 November 2016(31056)
ISC_DRMI Node CONNECT ERROR

During locking tonight, had the following ERROR for ISC_DRMI:

EZCA CONNECTION ERROR:  Could not connect to channel (timeout=2s):  H1:LSC-PD_DOF_MTRX_SETTING_1_23

I tried a couple things:  I hit "LOAD", which did nothing.  Then I hit "Execute" which broke the lock.

One thing I did not do was re-request the state I was in.  (Nutsinee just let me know that this is what works for her when she has had "CONNECTION ERRORS".)

Comments related to this report
thomas.shaffer@LIGO.ORG - 10:47, Tuesday 01 November 2016 (31063)

After double checking that all CDS systems are running, waiting a few minutes, and checking that you can do a caget on the channel in question, change the operating mode of the node with the connection error from EXEC to STOP.  Wait for the node to change to a yellow background before requesting EXEC again. If the node was previously managed, you may need to INIT the manager (if the manager is working, one way to do this is to wait for the current state to finish, if it can, then go to manual, INIT, and then back to where it was).

H1 CAL (CAL)
evan.goetz@LIGO.ORG - posted 17:34, Tuesday 23 August 2016 - last comment - 10:25, Tuesday 01 November 2016(29259)
Better understanding Pcal timing signals
Summary:
Repeating the Pcal timing signals measurements made at LHO (aLOG 28942) and LLO (aLOG 27207) with more test point channels in the 65k IOP model, we now have a more complete picture of the Pcal timing signals and where there are time delays.

Bottom line: 61 usec delay from user model (16 kHz) to IOP model (65 kHz); no delay from IOP model to user model; 7.5 usec zero-order-hold delay in the DAC; and 61 usec delay in the DAC or the ADC or a combination of the two. Unfortunately, these measurements cannot determine which of the ADC or DAC carries that delay.

Details:
I turned off the nominal high frequency Pcal x-arm excitation and the CW injections for the duration of this measurement. I injected a 960 Hz sine wave, 5000 counts amplitude in H1:CAL-PCALX_SWEPT_SINE_EXC. Then I made transfer function measurements from H1:IOP-ISC_EX_ADC_DT_OUT to H1:CAL-PCALX_DAC_FILT_DTONE_IN1, H1:IOP-ISC_EX_MADC0_TP_CH30 to H1:CAL-PCALX_DAC_NONFILT_DTONE_IN1, and H1:CAL-PCALX_SWEPT_SINE_OUT to H1:CAL-PCALX_TX_PD_VOLTS_IN1, as well as points in between (see attached diagram, and plots)

The measurements match the expectation, except there is one confusing point: the transfer function H1:IOP-ISC_EX_MADC0_TP_CH30 to H1:CAL-PCALX_DAC_NONFILT_DTONE_IN1 does not see the 7.5 usec zero-order-hold DAC delay. Why?

There is a 61 usec delay from just after the digital AI and just before the digital AA (after accounting for the known phase loss by the DAC zero-order-hold, and the analog AI and AA filters). From these measurements, we cannot determine if the delay is in the ADC or DAC or a combination of both. For now, we have timing documentation such as LIGO-G-1501195 to suggest that there are 3 IOP clock cycles delay in the DAC and 1 IOP clock cycle delay at the ADC.

It is important to note that there is no delay in the channels measured in the user model acquired by the ADC. In addition, the measurements show that there is a 61 usec delay when going from the user model to the IOP model.

All this being said, I'm still a little confused from various other timing measurements. See, for example, LLO aLOG 22227 and LHO aLOG 22117. I'll need a little time to digest this and try to reconcile the different results.
Non-image files attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 09:53, Thursday 25 August 2016 (29298)

By looking at the phase of the DuoTone signals we can constrain whether there is any delay on the ADC side (like Keita's analysis here). The DuoTone signals are designed such that the two sinusoidal signals, 960 Hz and 961 Hz, are maximum at the start of a GPS second (and also in phase with each other). To be precise, the maximum is delayed 6.7 µs from the integer GPS boundary (T1500513). The phase of the 960 Hz signal at the IOP (L1:IOP-ISC_EX_ADC_DT_OUT) is -92.52 degrees with respect to the GPS integer boundary (LLO a-log 27207). Since the DuoTone signal is supposed to be maximum at the GPS integer boundary, i.e. it is a cosine function, this corresponds to a -2.52 degree phase change (the estimate of 92.52 assumes it is a sine function). Converting this phase change to a time delay gives 7.3 µs. Since there is an inherent 6.7 µs delay by the time the DuoTone signal reaches the ADC, we are left with only 0.6 µs of delay possibly from the ADC process (or some small systematic we haven't accounted for yet). This is what Keita's measurements were showing. Combining this measurement and the above transfer function measurements, we can say that we understand the ADC chain and that there are no time delays larger than 0.6 µs in it. This also suggests that the 61 µs delay we see in the ADC-DAC combination exists entirely on the DAC side.
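The phase-to-delay conversion above can be written out as a short check (values taken from the comment):

```python
# Convert the measured DuoTone phase at the IOP to a time delay.
f = 960.0                      # Hz, DuoTone line frequency
phase_meas = -92.52            # deg, measured phase treated as a sine
phase_cos = phase_meas + 90.0  # deg relative to the cosine maximum: -2.52
delay = abs(phase_cos) / 360.0 / f  # s: fraction of a cycle over f
residual = delay - 6.7e-6           # subtract the inherent DuoTone lag
print(f"delay {delay*1e6:.1f} us, residual ADC delay {residual*1e6:.1f} us")
# delay 7.3 us, residual ADC delay 0.6 us
```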

evan.goetz@LIGO.ORG - 10:44, Tuesday 27 September 2016 (29999)CAL
The DuoTone signals are sine waves, so a minor correction to Shivaraj's comment above: the zero-crossing, not the maximum, corresponds to the supposed GPS integer second. I looked at a time series and observed that the zero-crossing occurs at ~7.2 usec. Since the analog DuoTone signal lags the GPS second by ~6.7 usec, I can confirm that the ADC side has essentially no delay. Thus, the 61 usec seen through the DAC-ADC loop is entirely on the DAC side.

Attached is a time series zoom showing the zero crossing of the DuoTone signal.
Non-image files attached to this comment
kiwamu.izumi@LIGO.ORG - 16:41, Thursday 06 October 2016 (30282)

When using dtt to make a transfer function measurement between an IOP model and a user model, one has to keep in mind that dtt does another decimation silently. This is due to dtt trying to match the number of data points between two models. Fortunately, this does not seem to affect the phase, see my note at https://dcc.ligo.org/T1600454.

evan.goetz@LIGO.ORG - 10:25, Tuesday 01 November 2016 (31062)
Updated the timing diagram for consistency with other timing measurements (LHO aLOG 30965). See attached PDF to this comment.
Non-image files attached to this comment