H1 CDS (GRD, Lockloss)
sheila.dwyer@LIGO.ORG - posted 01:31, Tuesday 23 August 2016 - last comment - 10:28, Tuesday 23 August 2016(29238)
strange lockloss

Jenne, Sheila

We had an unusual lockloss a few minutes ago, related to 28255

It happened around 8:11 UTC on August 23rd; the DRMI guardian seemed to think that the lock was lost although it was not.

Comments related to this report
thomas.shaffer@LIGO.ORG - 09:51, Tuesday 23 August 2016 (29242)

There are two locklosses around that time, so I'll play detective for both.

1.) 8:09:33 UTC (1155974990)

Looking at the Guardian log:

2016-08-23_08:09:30.786330Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P_SW1 => 16
2016-08-23_08:09:31.037960Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P => OFF: FM1
2016-08-23_08:09:31.042700Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y_SW1 => 16
2016-08-23_08:09:31.290770Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y => OFF: FM1
2016-08-23_08:09:33.911750Z ISC_DRMI new request: DRMI_WFS_CENTERING
2016-08-23_08:09:33.911930Z ISC_DRMI calculating path: ENGAGE_DRMI_ASC->DRMI_WFS_CENTERING
2016-08-23_08:09:33.912540Z ISC_DRMI new target: DOWN
2016-08-23_08:09:33.912620Z ISC_DRMI GOTO REDIRECT
2016-08-23_08:09:33.912900Z ISC_DRMI REDIRECT requested, timeout in 1.000 seconds

Seems as though there was a request for a state that is behind its current position, so it had to go through DOWN to get there (a toy sketch of this path calculation follows the log excerpt below). This request came from ISC_LOCK:

2016-08-23_08:09:33.546800Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:09:33.546920Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:09:33.547020Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:09:33.547110Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:09:33.547210Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI lost lock
2016-08-23_08:09:33.602500Z ISC_LOCK state returned jump target: LOCKLOSS_DRMI
2016-08-23_08:09:33.602710Z ISC_LOCK [LOCK_DRMI_3F.exit]
2016-08-23_08:09:33.666340Z ISC_LOCK JUMP: LOCK_DRMI_3F->LOCKLOSS_DRMI
2016-08-23_08:09:33.667220Z ISC_LOCK calculating path: LOCKLOSS_DRMI->LOCK_DRMI_3F
2016-08-23_08:09:33.667760Z ISC_LOCK new target: LOCK_DRMI_1F
2016-08-23_08:09:33.668520Z ISC_LOCK executing state: LOCKLOSS_DRMI (3)
2016-08-23_08:09:33.668750Z ISC_LOCK [LOCKLOSS_DRMI.enter]
2016-08-23_08:09:33.854350Z ISC_LOCK EDGE: LOCKLOSS_DRMI->LOCK_DRMI_1F
2016-08-23_08:09:33.855110Z ISC_LOCK calculating path: LOCK_DRMI_1F->LOCK_DRMI_3F
2016-08-23_08:09:33.855670Z ISC_LOCK new target: ENGAGE_DRMI_ASC
2016-08-23_08:09:33.856260Z ISC_LOCK executing state: LOCK_DRMI_1F (101)
2016-08-23_08:09:33.856410Z ISC_LOCK [LOCK_DRMI_1F.enter]
2016-08-23_08:09:33.868100Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 0: node TCS_ITMY_CO2_PWR: NOTIFICATION
2016-08-23_08:09:33.868130Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 1: node SEI_BS: NOTIFICATION
2016-08-23_08:09:33.893890Z ISC_LOCK [LOCK_DRMI_1F.main] ezca: H1:GRD-ISC_DRMI_REQUEST => DRMI_WFS_CENTERING
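
Below is a toy sketch of the path calculation described above: DOWN is a "goto" state reachable from every node, so requesting a state that sits behind the current one forces the path through DOWN. The state graph here is invented for illustration and is not the actual ISC_DRMI graph or the guardian implementation.

# Toy illustration only -- not the actual guardian code or ISC_DRMI graph.
from collections import deque

EDGES = {
    'DOWN':               ['LOCK_DRMI_1F'],
    'LOCK_DRMI_1F':       ['DRMI_LOCK_WAIT'],
    'DRMI_LOCK_WAIT':     ['DRMI_WFS_CENTERING'],
    'DRMI_WFS_CENTERING': ['ENGAGE_DRMI_ASC'],
    'ENGAGE_DRMI_ASC':    [],
}
GOTO_STATES = ['DOWN']  # a goto state is implicitly reachable from everywhere

def find_path(start, goal):
    # Breadth-first search over the forward edges plus the implicit goto edges.
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EDGES.get(path[-1], []) + GOTO_STATES:
            if nxt not in path:
                queue.append(path + [nxt])
    return None

print(find_path('ENGAGE_DRMI_ASC', 'DRMI_WFS_CENTERING'))
# ['ENGAGE_DRMI_ASC', 'DOWN', 'LOCK_DRMI_1F', 'DRMI_LOCK_WAIT', 'DRMI_WFS_CENTERING']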
 

 

and

2.) 08:13:12 UTC (1155975209)

Doesn't seem to be any funny business here. The DRMI_locked() function looks at the channels in the log below, and when it returns False the state jumps back to LOCK_DRMI_1F; at that point it seems like the MC lost lock (see plots).

2016-08-23_08:13:17.613090Z ISC_DRMI [DRMI_WFS_CENTERING.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:13:17.613160Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:13:17.613230Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:13:17.613300Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:13:17.613500Z ISC_DRMI [DRMI_WFS_CENTERING.run] la la
2016-08-23_08:13:17.670880Z ISC_DRMI state returned jump target: LOCK_DRMI_1F
2016-08-23_08:13:17.671070Z ISC_DRMI [DRMI_WFS_CENTERING.exit]
2016-08-23_08:13:17.671520Z ISC_DRMI STALLED
2016-08-23_08:13:17.734330Z ISC_DRMI JUMP: DRMI_WFS_CENTERING->LOCK_DRMI_1F
2016-08-23_08:13:17.741520Z ISC_DRMI calculating path: LOCK_DRMI_1F->DRMI_WFS_CENTERING
2016-08-23_08:13:17.742080Z ISC_DRMI new target: DRMI_LOCK_WAIT
2016-08-23_08:13:17.742750Z ISC_DRMI executing state: LOCK_DRMI_1F (30)
2016-08-23_08:13:17.742920Z ISC_DRMI [LOCK_DRMI_1F.enter]
2016-08-23_08:13:17.744030Z ISC_DRMI [LOCK_DRMI_1F.main] MC not Locked
2016-08-23_08:13:17.795150Z ISC_DRMI state returned jump target: DOWN
2016-08-23_08:13:17.795290Z ISC_DRMI [LOCK_DRMI_1F.exit]
 

Here are the checker functions used by the decorators in DRMI_WFS_CENTERING:

def MC_locked():
    # Locked if the IMC transmission, normalized by the input power, exceeds threshold
    trans_pd_lock_threshold = 50
    return ezca['IMC-MC2_TRANS_SUM_OUTPUT']/ezca['IMC-PWR_IN_OUTPUT'] >= trans_pd_lock_threshold

def DRMI_locked():
    MichMon = ezca['LSC-MICH_TRIG_MON']
    PrclMon = ezca['LSC-PRCL_TRIG_MON']
    SrclMon = ezca['LSC-SRCL_TRIG_MON']
    if (MichMon > 0.5) and (PrclMon > 0.5) and (SrclMon > 0.5):
        # We're still locked and triggered, so return True
        return True
    else:
        # Eeep!  Not locked.  Log some stuff
        log('DRMI TRIGGERED NOT LOCKED:')
        log('LSC-MICH_TRIG_MON = %s' % MichMon)
        log('LSC-PRCL_TRIG_MON = %s' % PrclMon)
        log('LSC-SRCL_TRIG_MON = %s' % SrclMon)
        return False
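
For context, here is a minimal sketch of how a checker like DRMI_locked() can be wrapped into a guardian-style decorator that redirects a state. The wrapper name and usage below are hypothetical, not the code actually installed; a string returned from a state method is treated by guardian as a jump target, which is what produces the "state returned jump target" lines in the logs above.

import functools

def assert_drmi_locked(func):
    # Hypothetical decorator: if DRMI drops lock, return the jump target
    # 'LOCK_DRMI_1F' instead of running the decorated state method.
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if not DRMI_locked():
            return 'LOCK_DRMI_1F'
        return func(self, *args, **kwargs)
    return wrapper

# Usage sketch inside a state definition:
# class DRMI_WFS_CENTERING(GuardState):
#     @assert_drmi_locked
#     def run(self):
#         ...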

Images attached to this comment
thomas.shaffer@LIGO.ORG - 10:28, Tuesday 23 August 2016 (29244)

Something I also should have mentioned is that ISC_LOCK was brought into Manual and then requested LOCK_DRMI_3F right before the logs seen above. Seems as though it wasn't quite ready to be there yet, so it jumped back down to LOCK_DRMI_1F and reran that state, where it requested DRMI_WFS_CENTERING from the ISC_DRMI guardian.

LHO General
corey.gray@LIGO.ORG - posted 00:05, Tuesday 23 August 2016 (29234)
EVE Operator Summary

All Times Pacific Standard Time (PST):

H1 ISC
jenne.driggers@LIGO.ORG - posted 23:43, Monday 22 August 2016 (29237)
TCS heating wrong settings?

[Jenne, Sheila, Terra, Corey]

We've been having trouble with MICH ASC lately.  Sheila suggested that I double-check the TCS power settings, and in fact, the ITMX CO2 guardian is setting the laser to the wrong power. 

At 50W, we want TCS-CO2_X to be at 0.2W, and TCS-CO2_Y to be at 0.0W.  However, the guardian was taking both to 0.0W.  This is because the PSL_power_checker function inside of the TCS guardian has bad gain and offset values (I think).  This function calculates the desired TCS power based on the current PSL power, and the gain and offset values are defined separately for each CO2 laser.  The function wants to set the TCS-CO2_X power to 0.0W and the TCS-CO2_Y power to -0.2W.  It has a check so that if it's trying to go to a negative power it just goes to 0W, which is why both are being set to 0W.  At least for 50W, it seems that if we increase the offset values by 0.2W for each laser, we would get the power that we want.  However, I don't know that this will give us the correct CO2 power for other PSL powers, so I am not changing it.  Someone from the TCS group should look into the calculation of CO2 power versus PSL power in the guardian, please.
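
As an illustration of the calculation described above, here is a minimal sketch of a linear power map with a clamp at zero; the function name and the gain/offset numbers are placeholders, not the values in the TCS guardian.

def tcs_power_request(psl_power_w, gain, offset):
    # Illustrative linear map from PSL power to a requested CO2 power,
    # with negative requests clamped to 0 W as described above.
    return max(gain * psl_power_w + offset, 0.0)

# Hypothetical numbers only:
print(tcs_power_request(50.0, gain=0.004, offset=-0.2))  # 0.0  (we actually want 0.2 for CO2_X)
print(tcs_power_request(50.0, gain=0.000, offset=-0.2))  # 0.0  (raw -0.2 clamped to zero, CO2_Y)
# Raising each offset by 0.2 W recovers the desired 0.2 W / 0.0 W split at 50 W:
print(tcs_power_request(50.0, gain=0.004, offset=0.0))   # 0.2
print(tcs_power_request(50.0, gain=0.000, offset=0.0))   # 0.0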

After looking into things, I'm actually not sure why we were getting 0.2W for TCS-CO2_X.  It looks like the TCS guardian hasn't changed since its original checkin mid-May, and that it has always been calculating (using the PSL_power_checker function) that it should set TCS-CO2_X to 0.0W in the NOMINAL_LOCKED_CO2_LEVEL state, and that we've been requesting that state in the..... As I was typing, I realized why we weren't getting the wrong TCS powers.  Over the last few months, we've spent a lot of our time commissioning in the Increase_Power state for ISC_LOCK, where we had forgotten to comment out an explicit TCS power request from before we had the guardians.  The TCS guardians weren't requested to do anything until Coil_Drivers, the state after Increase_Power.  So, we were getting the TCS powers we wanted from the explicit request that should have been deleted, and weren't going to the wrong powers because we weren't advancing the ISC guardian.

When Sheila shuffled a few guardian states over the last few days to make things more clear after we arrive at 50W, she put the TCS request in a state that we are now always going to, so that's why we've recently run into this.  For now, I've commented out the request for the TCS guardians to go to their "nominal" powers, and am leaving in the explicit request in Increase_Power.  Once Team TCS confirms the calculation of CO2 power versus PSL power, we can go back to using the TCS guardians.

H1 ISC (COC, ISC)
evan.hall@LIGO.ORG - posted 18:33, Monday 22 August 2016 - last comment - 03:37, Tuesday 30 August 2016(29235)
EY butterfly and drumhead mode Q factors

Terra and I used the PI damping infrastructure to excite the butterfly and drumhead modes on EY, and then ring them down.

We excited the butterfly mode (6053.9 Hz) during a 50 W lock. The observed ringdown time was 23.5 minutes (= 1410 s), giving a Q of 27 × 10^6.

We excited the drumhead mode (8158.0 Hz) during a 2 W lock. The observed ringdown time was 13.5 minutes (= 810 s), giving a Q of 20 × 10^6.
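
For reference, a quick check of the quoted Q values, assuming the standard relation Q = pi * f * tau with tau the 1/e amplitude ringdown time:

import math

def quality_factor(freq_hz, tau_s):
    # Q = pi * f * tau, with tau the 1/e amplitude ringdown time (assumed)
    return math.pi * freq_hz * tau_s

print('butterfly: Q = %.1e' % quality_factor(6053.9, 1410))  # ~2.7e7
print('drumhead:  Q = %.1e' % quality_factor(8158.0, 810))   # ~2.1e7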

The templates containing the spectrum data for these ringdowns live in my directory under Public/Templates/SUS/BodyModes.

Comments related to this report
terra.hardwick@LIGO.ORG - 19:48, Monday 22 August 2016 (29236)

In PI model: 

MODE 29 = ETMX Drumhead

MODE 30 = ETMX Butterfly

MODE 31 = ETMY Butterfly

MODE 32 = ETMY Drumhead

evan.hall@LIGO.ORG - 03:37, Tuesday 30 August 2016 (29379)

The attached plot shows the expected ratio of the surface strain energy (in J/m) on the test mass face to the total strain energy (in J) in the test mass for the body modes between 5 kHz and 11 kHz. This is a simple Comsol model with a perfect silica cylinder.

Evidently, the drumhead and butterfly modes have similar energy ratios, so we should not expect their Q factors to be too different. It might be good to try the 9.2 kHz modes, since their energy ratio is rather different from the drumhead and butterfly modes, and they produce test mass strain in the beamline direction (the modes at 8.25 kHz and 9.4 kHz do not).

Non-image files attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 16:07, Monday 22 August 2016 (29230)
Ops Day Shift Summary

TITLE: 08/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Lots of commissioning work ongoing.
LOG:

LHO VE
chandra.romel@LIGO.ORG - posted 15:35, Monday 22 August 2016 (29233)
CP3 overfill
3:20pm local

34 sec. to overfill CP3 with 1/2 turn open on LLCV bypass valve
H1 CAL (CAL)
darkhan.tuyenbayev@LIGO.ORG - posted 15:22, Monday 22 August 2016 - last comment - 11:32, Tuesday 23 August 2016(29231)
Added a synchronized oscillator to L3 stage in QUAD_MASTER (SUS-ETMY recompilation pending)

Overview

A synchronized oscillator was added to the test mass stage (L3) of the QUAD_MASTER model. After re-compiling the SUS-ETMY model there will be two synchronized oscillators in the L3 stage that will be used for driving calibration lines: *_L3_CAL_LINE and *_L3_CAL2_LINE.

Removed channel LKIN_P_LO from the list of DQ channels and added L3_CAL2_LINE_OUT into the list.

The h1susetmy model must be recompiled in order for the changes to take effect.

Details

For one of the two calibration lines that we needed to run during ER9 we used a pitch dither oscillator, SUS-ETMY_LKIN_P (see LHO alog 28164). After analyzing the ER9 data we found two problems with this line (see LHO alog 29108):

The second synchronized oscillator was added at L3_CAL2_LINE_OUT and the list of DQ channels was modified accordingly. The L3_CAL2_LINE_OUT was added with sampling rate 512 Hz. LKIN_P_LO was removed from the list of DQ channels.

The changes were committed to the USERAPPS repository, rev. 14081.

Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 11:32, Tuesday 23 August 2016 (29249)CAL, DAQ

Dave, TJ, Jeff K, Darkhan,

H1:SUS-ETM(X|Y) were recompiled and restarted, DAQ was restarted (see LHO alog 29245, WP 6117).

The QUAD MEDM screen was updated to show the new oscillator settings.

The MEDM screen updates were committed to userapps repository (rev. 14088):

common/medm/quad/SUS_CUST_QUAD_ISTAGE_CAL2_LINE.adl
common/medm/quad/SUS_CUST_QUAD_OVERVIEW.adl

Images attached to this comment
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 14:22, Monday 22 August 2016 (29229)
PSL reference cavity temperature monitor(s)
After making corrections to the FSS front end model to reflect the hardware connections for
signals involving temperature stabilisation of the reference cavity, the filter gain(s) and
offset(s) for the ambient temperature and the average reference cavity temperature have been
changed.  These are for:
H1:PSL-FSS_DINCO_REFCAV_TEMP
  offset = 344582.9468
    gain = 9.011805496E-4
H1:PSL-FSS_DINCO_REFCAV_AMBTEMP
  offset = 328418.8499
    gain = 9.650437381E-4

    With these settings, these channels now read out the temperature in Kelvin.
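
As a sanity check, and assuming the usual CDS filter-module ordering (the offset is added to the input before the gain is applied), the conversion these settings implement would look like the sketch below; that ordering is an assumption on my part.

def counts_to_kelvin(counts, offset, gain):
    # Assumed filter-module ordering: output = (input + offset) * gain
    return (counts + offset) * gain

# With zero input counts the channels would read roughly:
print(counts_to_kelvin(0.0, 344582.9468, 9.011805496e-4))  # ~310.5 K (~37.4 C), REFCAV_TEMP
print(counts_to_kelvin(0.0, 328418.8499, 9.650437381e-4))  # ~316.9 K (~43.8 C), REFCAV_AMBTEMP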
H1 PSL
edmond.merilh@LIGO.ORG - posted 12:31, Monday 22 August 2016 - last comment - 12:41, Monday 22 August 2016(29225)
Weekly PSL Trends - Past 10 days FAMIS #6110

Weekly Xtal - tiny decreases in amp diode powers. No surprise there. All other powers seem nominally steady. Still have a bad current reading at OSC DB3.

Weekly LASER -  Osc box humidity down considerably after the internal water event. It appears to be even lower than it was in the days prior to the event. PMC temp output seems to be lower by a couple of degrees; I'm going to call this a good thing. All other power outputs seem nominally stable.

Weekly Environment - I see a decrease in all relative humidity counts. Also there seems to be a marginal drop in temps except for the LVEA and the PSL anteroom.

Weekly Chiller - Trends are zoomed in for high resolution. All flow and pressure trends show downward tendencies except for OSC head2 which trended slightly higher in flow.

Summary - all around, everything seems to be in good shape.  There doesn't seem to be any immediate cause for alarm at this time.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 12:41, Monday 22 August 2016 (29227)

Concur with Ed's analysis, everything looks to be running OK.

H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 10:48, Monday 22 August 2016 (29224)
Weekly PSL Chiller Reservoir FAMIS #6484
   FAMIS # 6428 - Checked the chillers and filters. Added 125ml to Crystal chiller. Added 250ml to diode as a preventative measure. Both filters are clean. No debris; no discoloration.
   Trends of chiller flows, pressures, and temperatures are all OK.
LHO General
thomas.shaffer@LIGO.ORG - posted 08:40, Monday 22 August 2016 (29222)
Morning Meeting Minutes

SEI - All good. Progress was made with the shutter last week; different filter, gain settings were too high (see alog 29149).

SUS - All good.

CDS - Running.

    Pulling chassis at ends for Beckhoff tomorrow.

PSL - All good.

Vac - HAM6 still coming down slowly.

    Tomorrow terminate the cables for the ion pumps. Software needs an update. RGA baking.

Facilities - HVAC system contractors here.

    Tomorrow moving items around in LVEA.

Interview and PNNL tour tomorrow in LVEA

Please finish maintenance by NOON tomorrow.

H1 AOS
sheila.dwyer@LIGO.ORG - posted 00:33, Sunday 21 August 2016 (29220)
locking today

Sheila Terra Evan

This afternoon we made some progress on things that were making locking difficult, and only a little progress on getting to low noise.  

H1 DetChar (DetChar)
keith.riles@LIGO.ORG - posted 09:35, Saturday 20 August 2016 (29166)
Checking recent timing system interventions with folded magnetometer data
Summary: The interventions in late July and early August to disable blinking LEDs and isolate timing system power supplies
have made some difference in the periodicities that emerge when folding magnetometer data from the end stations, with the largest
difference seen with the initial firmware updates done in late July. 

Details:
Weigang Liu has been cumulatively applying his folding algorithm to magnetometer data from January through
early August, including periods before, during and after recent attempts to mitigate leakage of periodic
(1 Hz and 0.5 Hz) transients seen in magnetometer channels into DARM. [Recent clogging of the Caltech 
cluster nodes with sufficient memory has delayed the automatic production of these plots, so Weigang did
a bunch of jobs manually on head nodes for this report.]
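
For readers unfamiliar with the technique, here is a minimal sketch of folding a time series at a candidate period (2 s here, matching the LED blink cycle); this is a generic illustration, not Weigang's actual pipeline.

import numpy as np

def fold(data, fs, period_s):
    # Average the time series modulo the candidate period: features coherent
    # with that period survive, incoherent noise averages down.
    samples = int(round(fs * period_s))
    n_folds = len(data) // samples
    return data[:n_folds * samples].reshape(n_folds, samples).mean(axis=0)

# Example: an hour of fake magnetometer data at 256 Hz with a weak pulse
# every 2 seconds buried in noise; the pulse stands out after folding.
fs = 256
t = np.arange(3600 * fs) / fs
data = np.random.randn(t.size) + 0.05 * ((t % 2.0) < 0.01)
folded = fold(data, fs, period_s=2.0)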

Summary of recent interventions:
  • July 19 - Upgrade of timing slave card firmware to disable alternating on/off blinking of LEDs with 2-second periodicity (on 1 second, off 1 second), suspected to lead to alternating positive/negative current transients that affect DARM.
  • August 2 - Upgrade of EX timing fanout card firmware for same reason
  • August 5 - Isolation of timing card power supplies
  • August 9 - Upgrade of EY timing fanout card firmware
  • August 16 - Additional timing card firmware upgrades in LVEA, EX, EY
The following table includes links to summary pages for most of the days of 2016 to date (some condor jobs are still pending) for six end-station magnetometer channels, along with comments on the changes visible from July 16 to July 21, then to August 6, then to August 18, and the figure attachment numbers showing the folded data plots for those days.
Channel (link to 2016 web pages) | Figure attachments | July 16 to July 21 | July 21 to August 6 | August 6 to August 18
H1:PEM-EX_MAG_EBAY_SUSRACK_X | 1-4   | Improved | Higher peaks | Similar
H1:PEM-EX_MAG_EBAY_SUSRACK_Y | 5-8   | Improved | Higher peaks | 1-Hz structure different (not better)
H1:PEM-EX_MAG_EBAY_SUSRACK_Z | 9-12  | 1-Hz structure worse | Similar | 1-Hz structure reduced
H1:PEM-EY_MAG_EBAY_SUSRACK_X | 13-16 | Improved | Similar | Similar
H1:PEM-EY_MAG_EBAY_SUSRACK_Y | 17-20 | Worse | Even worse | Similar
H1:PEM-EY_MAG_EBAY_SUSRACK_Z | 21-24 | 2-Hz structure smaller, 1-Hz structure worse | Similar | Improved
Non-image files attached to this report
H1 ISC
terra.hardwick@LIGO.ORG - posted 00:21, Saturday 20 August 2016 (29217)
PI work

PI damping working, though all gains required a sign flip. Successfully damped ETMX mode while ETMX ESD was in HV mode thanks to recent mod.

Modes 17 (ETMX), 25, 26, 27 (all ETMY) rang up. All four have been in guardian and were damped tonight with a sign flip of the gains. I was able to check some phase optimizing but locks were too short for much investigation. I've saved these changes to the guardian and they auto damped the next lock successfully. 

H1 PSL
jason.oberling@LIGO.ORG - posted 15:35, Friday 19 August 2016 - last comment - 12:59, Saturday 20 August 2016(29203)
PSL Recovered from Water Leak

P. King, J. Oberling

Short Version:  The PSL is now up and running following the HPO water leak (first reported here, repairs reported here).

Long Version:  This morning, after giving the HPO ~48 hours to completely dry, we inspected the HPO optical surfaces.  The only thing found was some water spots on the head 1 4f lens (this was drag wiped clean); all other optical surfaces look good.  We then slowly brought up each head individually to ensure no contamination was causing the optical surfaces to glow; all good here as well.  The HPO was then successfully powered up and allowed to warm up for several minutes.  The front end came on without issue and the injection locking locked immediately.  After allowing the system to warm up for ~1 hour, I attempted to lock the PSL subsystems (PMC, FSS, ISS).  The PMC did not want to lock; according to Peter this was likely due to a slight horizontal misalignment (this is seen in a trend of the QPD that lives in the ISS box; I unfortunately don't have a copy of it).  I returned to the enclosure and tweaked the beam alignment into the PMC and it locked without issue.  I then tweaked the PMC alignment further to maximize the power throughput.  PMC now has a visibility of ~80% with ~122W transmitted (with ISS on).  The FSS and ISS both locked without issue.  The PSL is now operational and fully recovered from the water leak.
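
For reference, PMC visibility is typically quoted from the reflected power with the cavity unlocked versus locked; the sketch below uses placeholder numbers, not measurements from this recovery.

def pmc_visibility(p_refl_unlocked, p_refl_locked):
    # Fraction of the incident light coupled into the cavity on resonance
    return 1.0 - p_refl_locked / p_refl_unlocked

# Placeholder numbers giving the ~80% visibility quoted above:
print(pmc_visibility(p_refl_unlocked=1.00, p_refl_locked=0.20))  # 0.8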

Comments related to this report
peter.king@LIGO.ORG - 12:59, Saturday 20 August 2016 (29219)
Information about the mis-alignment was obtained from the reflected spot CCD image,
not the ISS QPD.
H1 ISC
sheila.dwyer@LIGO.ORG - posted 23:46, Tuesday 16 August 2016 - last comment - 09:08, Monday 22 August 2016(29142)
some ASC work

Terra, Sheila

Tonight we had trouble engaging the ASC again.

Losing optical gain in POP X

We rang up what we think is a PR3 bounce mode when engaging the ASC the same way as last night.  We found that we could avoid ringing this mode up by keeping the PRC2 gain low (digital gains of -500).  Right before the OMC damage/vent, the POP X path was reworked and the optical gain seemed suspiciously low. 

Tonight we found that the optical gain has decreased even more.  Terra changed the demod phase by dithering PR3 pit (500 counts to M3) and rotating the phase by positive 65 degrees (Q1, Q2, Q3, Q4 from 55, 53, 54, 51 to 120, 118, 119, 116) to maximize the signal in I (minimize the Q signal).  The 2 attached figures show Terra's before and after OLG measurements (excitation gain of 50), both with Jenne's gain of -5000, showing a 10 dB increase in optical gain, which is about what we expected based on the dither amplitude change.
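
The demod phase change amounts to rotating each quadrant's I/Q pair; a generic sketch is below (this is not the actual front-end code, and the sign convention is assumed).

import numpy as np

def rotate_iq(i_sig, q_sig, phase_deg):
    # Rotate a demodulated I/Q pair by the given phase (degrees).
    phi = np.deg2rad(phase_deg)
    return (i_sig * np.cos(phi) - q_sig * np.sin(phi),
            i_sig * np.sin(phi) + q_sig * np.cos(phi))

# Choosing the phase that minimizes Q puts the full dither signal into I,
# which is how the +65 degree rotation above increased the optical gain.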

After optimizing the phase, we did not see the 28 Hz mode get rung up, but this seems to come and go because we also didn't see it yesterday.  We quickly tried moving L2 on the POP X path, while watching the amplitude of the PR3 dither line in the POP X signal.  We moved the lens about 4 inches closer to POP X and about 3 inches further away, and didn't find any location that had more signal for PR3 so we replaced it as we found it. 

We are going to leave the IFO locked in DC readout at 2 Watts with the request set to DOWN so that it will not try to relock. The noise is bad, as expected.

Images attached to this report
Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 09:21, Wednesday 17 August 2016 (29148)

POPX whitening gain is 0dB but should be odd, see alog 26307. FRS 6057 filed.

sheila.dwyer@LIGO.ORG - 14:20, Wednesday 17 August 2016 (29158)

The whitening gain on POP X was changed from a gain step of 7 (21 dB) to 0 (0 dB) on August 12th.  This whitening chassis has a problem and we must use odd gain settings, or else it will return an error and not set the gains equally on all quadrants, as Keita and Hang noted in 26307.

The change in gain probably happened during a beckhoff restart for the shutter code, but we could have been saved from this problem by SDF.  I cannot find a record for these whitening chassis in any SDF table. 

Also, this does not explain the drop in gain that Jenne saw, which happened before the whitening settings changed. 

sheila.dwyer@LIGO.ORG - 17:34, Wednesday 17 August 2016 (29164)

The stuck whitening gain bit is the LSB of the Q3 channel. In the past this was typically an indication of a cable problem (short).

sheila.dwyer@LIGO.ORG - 18:37, Wednesday 17 August 2016 (29170)

Sheila Daniel Terra

Connected the AM laser to the POP X head, and saw that we have very similar response in the electronics to what Evan measured in 27069

we had 3.3 mW out of the AM laser with a whitening gain of 21 dB, and used -40 dBm of RF drive at 45.501150 MHz.  We saw about 600 counts on each quadrant (except quadrant 3, which had 350 counts and also the least amount of DC light because of the way the laser was mis-centered on the diode).

We saw that there are rather large offsets when we changed the whitening gain, so Daniel reset the offsets.  The large offsets might have contributed to problems last night, along with confusion about the whitening gain. 

Also, we remembered that a factor of 6.7 of the mystery gain loss was due to adding a beamsplitter and forgetting to compensate for it on July 11.

(Edit:  Actually, Haocun and I did remember to correct for this gain change, we just compensated for it in the digital loop gain. )

So to summarize:

loops were initially commissioned with a whitening gain of 21, a digital gain of -21, a 1 Hz UGF, and electronics gain similar to what we have now. (late May)

Edit: loops were originally commissioned with a filter gain of -200 for pit, -0.1 in the input matrix, an analog gain of 21 dB, and the WFS head electronics performing in a way similar to what we have now.  This is when the reference that I think Jenne used was saved, and within a few days the pit input matrix was reduced by a factor of 2.

Edit: Around June 16th, we had difficulty staying locked when these loops were engaged, which was noted in the alog.  Terra and I just looked at trends of the filter gains, and it seems like we also reduced the digital gain from -220 to -3.4, although this was not noted in the alog.  This, together with the input matrix change, explains most of the missing gain that Jenne found.

On July 11th I forgot to compensate for the beamsplitter causing a gain reduction of 6.7 that no one noticed.

On July 26th, Evan and Keita relocated POP X and Jenne noticed that the digital gain had to be increased by a factor of 250 (or 500 for yaw) to keep the ugf the same.

August 12th the whitening gain was reduced to 0 dB from 21 dB by mistake in a beckhoff reboot.

August 16th Terra and I noticed this further reduction in gain, which is explained by the whitening gain.  We also changed the demod phase which increased the gain by about 10 dB.  We checked that small movements of the L2 don't change the optical gain much, and moving it by a few inches can decrease the signal. 

So, we are missing about a factor of 40 gain, which we cannot explain with electronics.

In the end only a factor of 2 of Jenne's gain change is unexplained.  It seems that we have had stable high power locks with both the high gain and low gain settings for PRC2, so we can decide which we want to use.  We also should have a factor of 3 increase in gain because of the phasing Terra and I did.

keita.kawabe@LIGO.ORG - 17:18, Friday 19 August 2016 (29211)

More complicated than that.

Date          | Whitening (dB) | POPX digital gain before rotation | Input matrix | PRC2_P_GAIN | BS (15% transmission) | Overall gain relative to original | alog
Originally    | 33 | 1   | -1    | -220  | none     | NA     |
May 24 ~1:02  | 33 | 1   | -0.05 | -220  | none     | 0.5    |
Jun. 17       | 33 | 1   | -0.05 | -3    | none     | 6.8E-3 |
Jun. 22 ~noon | 21 | 2.8 | -0.05 | -3    | none     | 4.8E-3 | 27901
Jul. 11-12    | 21 | 2.8 | -0.05 | -21   | inserted | 5.0E-3 | 28324
Jul. 27 ~4:20 | 21 | 2.8 | -0.05 | -5000 | inserted | 1.2    | 28666

No mystery optical/electronic gain reduction any more. Maybe a factor of 1.2 came from the rework on the table.

It's not clear to me why the PRC2 filter gain was reduced by a huge amount on Jun. 17, but I haven't searched through the alog.

keita.kawabe@LIGO.ORG - 09:08, Monday 22 August 2016 (29223)

Typo in the above table, originally the input matrix was -0.1, not -1.

H1 SEI
hugh.radkins@LIGO.ORG - posted 12:19, Friday 12 August 2016 - last comment - 08:12, Tuesday 23 August 2016(29056)
SEI response to 7.2 EQ in SW Pacific (New Caledonia)

HEPI BS tripped a few minutes before the ITMX ISI.  This is the only HEPI that tripped in the neighborhood of the large quake.

ITMY ISI tripped--the timing (H1:ISI-ITMY_ST2_WD_MON_GPS_TIME) indicates Stage2 tripped on ACTuators 1 second before Stage1 on T240s, but looking at the plots, the Actuators have only registered a few counts, nothing near saturation/trip level.  But the T240s hit their rail almost instantly.  It seems the Stage2 Last Trip (H1:ISI-ITMY_ST2_WD_MON_FIRSTTRIG_LATCH) should be indicating ST1WD rather than Actuator.  On ETMY, the Trip Time is the same for the two stages and Stage2 notes it is an actuator trip, but again there are only a few counts on the MASTER DRIVE; it seems this too should have been a ST1WD trip indication on Stage2--I'll look into the logic.

On the BS ISI, the Stage1 and Stage2 trip times are the same, and the Last Trip for Stage2 indicates ST1WD.  The Stage2 sensors are very rung up after the trip time but not before, unlike the T240s, which are ramping to the rail a few seconds before the trip.  ETMX shows this same logical pattern in the trip sequence indicators.

On the ITMX ISI, Stage1 Tripped 20 minutes before the last Stage2 trip. This indicates the Stage1 did not trip at the last Stage2 trip.

No HAM ISI Tripped on this EQ.

Bottom line: the logical outputs of the WDs are not consistent from this common model code--this needs investigating. Maybe I should open an FRS...

Attachment 1) Trip plots showing Stage2 trip time 1 second before the stage1 trip where the stage2 actuators do not go anywhere near saturation levels.

Attachment 2) Dataviewer plot showing the EQ on the CS ground STS and the platform trip times indicated.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 15:20, Monday 22 August 2016 (29232)

It seems this is not a problem with the watchdog but a problem with the plotting script.  For ST2 Actuators it seems to be missing a multiplier on the Y axis.  It works correctly for ST1 Actuators and all the sensors; the same ST2 ACT problem shows up on other chambers as well.  FRS 6072.

hugh.radkins@LIGO.ORG - 08:12, Tuesday 23 August 2016 (29241)

Actually, the plotting script is working fine.  When the spike is so large that the plotting decides to switch to exponential notation, the exponent is hidden by the title until you blow up the plot to very large size. 

H1 ISC
evan.hall@LIGO.ORG - posted 19:27, Friday 15 July 2016 - last comment - 11:18, Saturday 20 August 2016(28448)
Low-noise DARM loop retuning

I removed the 300 Hz and 600 Hz stopband filters in DARM, along with the 950 Hz low-pass filter.

I increased the gain from 840 ct/ct to 1400 ct/ct, giving a UGF of 55 Hz. This seems to have improved the gain peaking situation around 10 Hz (see attachment).

The new settings have been added to the guardian (in the EY transition state), but have not been tested. The calibration has not been updated.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 22:09, Friday 15 July 2016 (28451)CAL
Tagging CAL Group. Evan Goetz has also been working on a better PUM roll-off. He'll be installing those improvements soon as well, along with a full loop design comparison.
evan.hall@LIGO.ORG - 11:18, Saturday 20 August 2016 (29218)

Since we spend a nontrivial amount of time commissioning at high powers (>20 W) with DARM controlled by EX, I moved the DARM gain increase so that it comes on once the PSL power reaches 20 W.
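
A rough sketch of the conditional this describes; the channel name and the hook into the guardian below are illustrative placeholders only, not the actual ISC_LOCK code.

# Illustrative only -- placeholder channel name, not the real ISC_LOCK code.
DARM_GAIN_LOW = 840    # ct/ct, original setting
DARM_GAIN_HIGH = 1400  # ct/ct, gives a ~55 Hz UGF (see the parent entry)

def update_darm_gain(ezca, psl_power_w):
    # Step the DARM gain up once the requested PSL power reaches 20 W
    if psl_power_w >= 20:
        ezca['LSC-DARM_GAIN'] = DARM_GAIN_HIGH  # hypothetical channel
    else:
        ezca['LSC-DARM_GAIN'] = DARM_GAIN_LOW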
