H1 CDS (DAQ, DCS, GRD, SYS)
jeffrey.kissel@LIGO.ORG - posted 17:09, Monday 17 October 2016 (30607)
We Need Improvements to the SDF System
J. Kissel, J. Driggers, S. Dwyer

I already mentioned this a few months ago in LHO aLOG 27517, but I'll repeat these requests because we've thought a little more about how we'd like them implemented.

Things we need to make SDF more maintainable:
1) Remove the distinction between SAFE and DOWN for SUS and SEI. Because SEI and SUS come up with the watchdog tripped, and the guardian startup state for both systems has the masterswitch off, there is no difference between these states.
2) A rapid way to assess and change a group of front-end models' SDF reference files. I.e., we should be able to switch between OBSERVE and DOWN quickly, instead of the 4-screen, 15-clicks-per-front-end process it is now.
3) An easy, built-in way to find out which channels of a given front end are controlled by any of the guardians, so that we can un-monitor them.
4) A much easier way to reconcile non-initialized and not-found channels than the still-confusing SDF Save and SDF Restore screens.

Regarding 
(1) -- This has been completed. The SUS that *have* down.snaps all now have their safe.snaps soft-linked to their down.snaps.
(2) -- We now have a rapid way to assess which reference file a front end is looking at, because I've modified the overview. However, we still need a rapid way to change them all. Sheila and Jenne have programmed the DOWN state of the ISC_LOCK guardian to enforce that those front-end models that *have* down.snaps reference them, using shell scripts like
/opt/rtcds/userapps/release/isc/h1/guardian/
All_SDF_down.sh
All_SDF_observe.sh

These are rapid ways to change them all, once created, but because the SDF system identifies models by DCUID number instead of model name, such a script is arduous to create and maintain.
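To make the maintenance burden concrete, here is a minimal sketch (in Python, not the actual All_SDF_*.sh scripts above) of what a bulk switch looks like when every model must be addressed by DCUID; the DCUID-to-model map and the H1:FEC-<DCUID>_SDF_* channel names are illustrative assumptions only:

    # bulk_sdf_switch.py -- hypothetical sketch, not the production scripts
    from epics import caput   # pyepics

    # Hand-maintained DCUID -> model-name map: exactly the part that is arduous
    # to keep in sync as models are added, removed, or renumbered.
    DCUIDS = {
        10: 'h1susetmx',   # example entries only
        11: 'h1susetmy',
    }

    def switch_all(target='down'):
        """Point every model's SDF system at <target>.snap (channel names assumed)."""
        for dcuid, model in DCUIDS.items():
            # Assumed per-model SDF request channels of the form H1:FEC-<DCUID>_SDF_*
            caput('H1:FEC-%d_SDF_NAME' % dcuid, target)   # which snap file to use
            caput('H1:FEC-%d_SDF_RELOAD' % dcuid, 1)      # request a (re)load
            print('requested %s.snap for %s (DCUID %d)' % (target, model, dcuid))

    if __name__ == '__main__':
        switch_all('down')   # or 'observe'

If SDF could be addressed by model name, the hand-maintained DCUID table would go away.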

(3) -- We still don't have this. Jamie and Jenne (and several others, I'm sure) have already discussed that parsing guardian code to find what channels it touches is intractable. 
As such, the new idea is that we have some script that 
     (a) takes two burt snapshots of an entire front-end model EPICs settings database
     (b) compares them -- any channel that has a change between those two snap files gets flagged for not-monitoring, all others get flagged for monitoring.
     (c) those flags are absorbed into a *single* SDF file (with none of this unmaintainable several-file nonsense), and then
     (d) all monitored settings values are accepted.
We imagine this script being run once a week or so; a rough sketch of the idea follows.
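Here is a minimal sketch of the snapshot-diff idea, assuming the BURT .snap files are plain text with one 'CHANNEL COUNT VALUE...' record per line (header lines skipped); the file names are placeholders:

    # sdf_monitor_flags.py -- hypothetical sketch of steps (a) and (b) above
    def read_snap(path):
        """Return {channel: value string} from a BURT-style snapshot (format assumed)."""
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('--'):   # skip BURT header/comment lines
                    continue
                parts = line.split()
                if len(parts) >= 3:
                    settings[parts[0]] = ' '.join(parts[2:])
        return settings

    before = read_snap('h1susetmx_epics_1.snap')   # placeholder file names
    after = read_snap('h1susetmx_epics_2.snap')

    # Any channel whose value changed between the two snapshots (i.e. something,
    # presumably a guardian, touched it) gets flagged for NOT monitoring;
    # everything else is monitored and its current value accepted (steps (c) and (d)).
    changed = {c for c in before if before[c] != after.get(c, before[c])}
    monitored = sorted(set(before) - changed)
    print('%d channels to un-monitor, %d to monitor' % (len(changed), len(monitored)))

Writing the result back out as a single SDF file is the part that still needs CDS support.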

(4) -- We can now initialize a value by just flipping over to the "CHANS NOT INIT" menu and accepting them. However, by default they go into the list of channels *not* monitored. In order to initialize them as monitored, one has to remember to select *MON* (all) and then confirm. Until we get (3) addressed, the functionality should be that hitting *accept* for a non-initialized channel automatically monitors it. For CHANS NOT FOUND, we still must do the multi-screen, confusing save and reload.

This entry is posted as a request to the CDS software team to help us out. We don't think we can accomplish the solutions to (3) or (4), and we're mildly unhappy with the work-arounds we've done for (1) and (2). Thanks ahead of time.
H1 ISC (SUS)
jeffrey.kissel@LIGO.ORG - posted 16:25, Monday 17 October 2016 (30605)
SUS Alignment Offsets during This Afternoon's Lock Stretch, in prep for tomorrow's RCG Upgrade Recovery
The title says it all!
Images attached to this report
H1 OpsInfo
terra.hardwick@LIGO.ORG - posted 16:18, Monday 17 October 2016 (30602)
'Bouncing' PI modes

Operators: If PI modes bounce during your shift - such as Mode26 and Mode18 - tune the phase based on the overall slope of the top of the bounces. The bouncing is not real (it's a filter issue I'm working on), but the overall height of the bounces is, so damp that as usual. This should only happen when two modes close in frequency are rung up at the same time, so you might need to change both phases during this period. Note that this will likely occur within the first half hour of powering up from a cold(er) state.

- - - - - 

Since editing filters to help with the crossing modes - Mode18 and Mode26 - we've seen PI signals bounce on the PI StripTool several times. I think this occurs during a crossing when both modes are rung up; in this case there are two peaks of similar amplitude within a few tenths of a Hz of each other, the amplitudes are enough to engage the PLL, and the PLLs get confused about which peak is 'theirs' and end up jumping back and forth between the two. At the same time, bandpass filters are being turned on and off by the guardian based on the frequency readback of the PLLs; the bounciness is caused by alternately the right and wrong BP filters being engaged, so the amplitude of a peak appears to change dramatically as it falls inside or outside of the bandpass.

Attached are spectra during a bouncy time, when both peaks are elevated, and during normal damping time (ten minutes later), when only one peak is high. Also attached are trends of relevant channels during this time. Bounciness is seen in the first bump in the RMSMON channels; there is no bounciness ten minutes later in the second bump. The FREQ_MON channels show the first bump correlating with strong switchbacks in frequency (especially in Mode18), versus single-frequency tracking ten minutes later.

Still working on a good solution to handle this. For now, Modes 18 and 26 have only stayed this close and rung up for periods of ~10 minutes, and only during colder lock starts (after that, Mode18 rises in frequency away from Mode26); we only saw this once over the full weekend of locking. And if we damp based on the overall slope of the bouncing, it damps well.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:01, Monday 17 October 2016 (30588)
Ops Day Shift Summary

TITLE: 10/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Lost lock a few times just after getting to NLN or just after power-up, not really sure why (once was me trying to adjust the ISS diffracted power too late in the game). Been locked for 4 hours; commissioners hard at work.
LOG:

H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 15:47, Monday 17 October 2016 - last comment - 16:17, Monday 17 October 2016(30597)
PCALX Roaming Calibration Line Frequency Changed to 2001.3 Hz
J. Kissel for S. Karki

I've changed the > 1 kHz PCALX calibration line frequency, a.k.a. the "long duration sweep" line, from 2501.3 to 2001.3 Hz. Recall this started moving again on Oct 6th (see LHO aLOG 30269). I report the progress towards completing the sweep below.

Frequency    Planned Amplitude        Planned Duration      Actual Amplitude    Start Time                 Stop Time                    Achieved Duration
(Hz)         (ct)                     (hh:mm)                   (ct)               (UTC)                    (UTC)                         (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------------------
1001.3       35k                      02:00      
1501.3       35k                      02:00     
2001.3       35k                      02:00                   39322.0           Oct 17 2016 21:22:03 UTC
2501.3       35k                      05:00                   39322.0           Oct 12 2016 03:20:41 UTC    Oct 17 2016 21:22:03 UTC      days
3001.3       35k                      05:00                   39322.0           Oct 06 2016 18:39:26 UTC    Oct 12 2016 03:20:41 UTC      days
3501.3       35k                      05:00                   39322.0           Jul 06 2016 18:56:13 UTC    Oct 06 2016 18:39:26 UTC      months
4001.3       40k                      10:00
4301.3       40k                      10:00       
4501.3       40k                      10:00
4801.3       40k                      10:00  
5001.3       40k                      10:00
Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 15:22, Monday 17 October 2016 (30601)

I had changed the Pcal Y line heights yesterday around 10 am local time for a contrast defect test and have only now reverted them to their old values.

jeffrey.kissel@LIGO.ORG - 16:17, Monday 17 October 2016 (30604)DetChar
Saying more words for Evan:

He changed the 7.9 Hz and 1083.7 Hz line heights by adjusting the H1:CAL-PCALY_PCALOSC4_OSC_[SIN,COS]GAIN and H1:CAL-PCALY_PCALOSC3_OSC_[SIN,COS]GAIN oscillator gains, respectively. They were changed starting at Oct 16 2016 17:27 UTC and restored by Oct 17 2016 22:23 UTC. So any undisturbed time processed from this period should be excised from the collection of data for the 2501.3 Hz analysis, for fear of confusion on the optical gain.

Thus the new table of times for valid analysis is


Frequency    Planned Amplitude        Planned Duration      Actual Amplitude    Start Time                 Stop Time                    Achieved Duration
(Hz)         (ct)                     (hh:mm)                   (ct)               (UTC)                    (UTC)                         (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------------------
1001.3       35k                      02:00      
1501.3       35k                      02:00     
2001.3       35k                      02:00                   39322.0           Oct 17 2016 22:24:00 UTC
2501.3       35k                      05:00                   39322.0           Oct 12 2016 03:20:41 UTC    Oct 16 2016 17:27:00 UTC      days
3001.3       35k                      05:00                   39322.0           Oct 06 2016 18:39:26 UTC    Oct 12 2016 03:20:41 UTC      days
3501.3       35k                      05:00                   39322.0           Jul 06 2016 18:56:13 UTC    Oct 06 2016 18:39:26 UTC      months
4001.3       40k                      10:00
4301.3       40k                      10:00       
4501.3       40k                      10:00
4801.3       40k                      10:00  
5001.3       40k                      10:00
Images attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 14:38, Monday 17 October 2016 (30599)
Manually over-filled CP3
LN2 @ exhaust in 60 seconds after opening LLCV bypass valve 1/2 turn -> closed LLCV bypass valve. 

Next over-fill to be Wednesday, Oct. 19th.  
LHO VE (PEM)
thomas.shaffer@LIGO.ORG - posted 14:26, Monday 17 October 2016 - last comment - 15:51, Monday 17 October 2016(30598)
EX Pump NLN Test

WP#6221

While locked in Low Noise, Kyle headed to EX to start and stop a pump in two 5-minute intervals. The pump is connected to the north side of BSC5.

Times in UTC (GPS):

Start1 - 20:52:52 (1160772789)

Stop1 - 20:57:52 (1160773089)

Start2 - 21:02:52 (1160773389)

Stop2 - 21:07:52 (1160773689)

 

The purpose of this test is to determine whether these pumps can be run without interfering with commissioners.

Comments related to this report
kyle.ryan@LIGO.ORG - 15:51, Monday 17 October 2016 (30603)
"Going once...going twice....SOLD!"  

Having heard no complaints, I will forge ahead and plan on running these pumps for 5-7 days (starting tomorrow morning) in support of the required bake-out of the X-end RGA.
LHO FMCS
john.worden@LIGO.ORG - posted 14:01, Monday 17 October 2016 (30596)
Mid station chillers

The Mid station chillers were turned off today and will likely remain off until March or so when the warmer weather returns.

H1 DetChar (DetChar, ISC, PSL)
andrew.lundgren@LIGO.ORG - posted 13:35, Monday 17 October 2016 (30595)
Range loss due to jitter coupling
Some recent locks have had steeply downward-trending ranges, which seem to be due to a steady increase in jitter coupling.

I hadn't seen anyone say it explicitly, so I decided to check the cause of one of these drops in range. I picked the lock on Oct 15, where the detector was in intent mode around 19:30, and the range had dropped 20 Mpc by 20:30. I ran Bruco at this later time (Bruco results page).

Above 200 Hz, all the coherences are with ISS PDB, IMC WFS, and the like, suggesting that jitter coupling is steadily increasing (and I think there's no feedforward because of issues with the DBB). Attached are the h(t) spectrum and the change in coherence with PDB relative to the start of the lock - the spectra of PDB and the WFS themselves don't change.
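For reference, the kind of coherence check that Bruco automates can be reproduced by hand; the sketch below uses scipy on two already-available time series, with the data arrays standing in as placeholders for DARM and a jitter witness such as ISS PDB:

    # coherence_check.py -- minimal sketch; the data arrays are placeholders
    import numpy as np
    from scipy.signal import coherence

    fs = 16384.0                              # common sample rate [Hz]
    darm = np.random.randn(int(600 * fs))     # placeholder for h(t) / DARM
    witness = np.random.randn(int(600 * fs))  # placeholder for e.g. an ISS PDB channel

    # Welch-averaged coherence; 1 s segments give ~1 Hz resolution
    f, coh = coherence(darm, witness, fs=fs, nperseg=int(fs))

    # Flag frequencies above 200 Hz where the witness explains a sizable fraction
    # of the DARM power (the 0.1 threshold is arbitrary, for illustration only)
    band = (f > 200) & (coh > 0.1)
    for fi, ci in zip(f[band], coh[band]):
        print('%.1f Hz  coherence %.2f' % (fi, ci))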
Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 12:38, Monday 17 October 2016 (30593)
CDS model and DAQ restart reports: Thursday 13th - Sunday 16th October 2016

No restarts reported over these 4 days.

H1 General (PSL)
edmond.merilh@LIGO.ORG - posted 11:45, Monday 17 October 2016 - last comment - 12:41, Monday 17 October 2016(30591)
PSL Weekly 10 Day Trends - FAMIS #6118

Weekly Xtal - graphs show some strange power fluctuations in the amp diode powers starting on or around the 10th. This can also be seen in the OSC_DB4_PWR graph.

Weekly Laser - Osc Box humidity reached a high point at about the same time (the 10th), but seems to have started an upward trend sometime between the 8th and the 9th. PMC Trans power looks pretty erratic. Included is a zoomed view of the Osc Box humidity and the Osc Box temperature, just for correlative purposes.

Weekly Env - nothing notable. 

Weekly Chiller - some marginal downward trends in head flow for heads 1-3. Head 4 is either crazy stable and good OR this data is trash. ??

Images attached to this report
Comments related to this report
peter.king@LIGO.ORG - 12:41, Monday 17 October 2016 (30594)
Head 4, power meter circuit, and front end flows are "fake" due to force writing in TwinCAT.
H1 SEI
jim.warner@LIGO.ORG - posted 11:15, Monday 17 October 2016 (30592)
Some wind fence pics

I went to EX this morning to check on the wind fence after Friday's wind storm. The fence is still there, intact, and hasn't accumulated any tumbleweeds, which was one of Robert's concerns about a fence. However, a couple of the posts have been twisted, probably by the wind load and moisture, and all of the poured concrete footings have started creeping in the sand. I don't think there is any danger of the fence collapsing yet, but I'll keep an eye on this.

Attached photos are: a picture of the total coverage from a month or two back (this hasn't changed), a picture showing the worst twisted post (this is new; I didn't notice it the last time I looked), and a picture of the gap in the sand around one of the footings (not new, but it's been getting bigger).

Images attached to this report
H1 PSL
thomas.shaffer@LIGO.ORG - posted 09:40, Monday 17 October 2016 (30590)
PSL Weekly Report

Laser Status:
SysStat is good
Front End Power is 34.67W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0.0 days, 0.0 hr 48.0 minutes (should be days/weeks)
Reflected power is 33.57Watts and PowerSum = 126.8Watts.

FSS:
It has been locked for 0.0 days 0.0 hr and 42.0 min (should be days/weeks)
TPD[V] = 3.225V (min 0.9V)

ISS:
The diffracted power is around 6.695% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 51.0 minutes ago (should be days/weeks)

Possible Issues:
PMC reflected power is high
ISS diffracted power is High (This seems to pop up after power increases)

LHO General
thomas.shaffer@LIGO.ORG - posted 08:37, Monday 17 October 2016 (30587)
Morning Meeting Minutes

SEI - Testing out some new configurations to get ready for O2.

SUS - No report

CDS - Running (go catch it. Ha.)

PSL/TCS - All good

Vac - Kyle wants to do a test with pumps at the end stations with a locked IFO.

Fac - RO system issues, but it is back and running.

 


 

Maintenance

CDS - FRS's to address, cable pulling, power up RGAs at ends.

PCal - Travis to do some work.

H1 ISC (CAL, ISC)
evan.hall@LIGO.ORG - posted 14:05, Sunday 16 October 2016 - last comment - 17:53, Monday 17 October 2016(30573)
DARM contrast defect in full lock

I wanted to find out how much contrast defect light we have in DARM. It seems to be about 2.0(5) mA at the moment. Since we run with 20 mA of total photocurrent, this implies a homodyne angle that is mistuned by about 6° away from the nominal value of 90°. I did not check how stable it is over the course of several hours.

To measure the contrast defect, I watched the height of the 332 Hz Pcal line in DARM while varying the DC offset.

Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 19:55, Sunday 16 October 2016 (30575)

Also, I found that the DARM residual is microseism-dominated at 50 W of input power (the current blrms is about 0.5 µm/s). So I turned on a boost in FM6 of LSC-OMC_DC. We should incorporate this into the DARM filter modules.

jeffrey.kissel@LIGO.ORG - 17:53, Monday 17 October 2016 (30600)CAL
Expanding more on Evan's methods here:

Optical gain values in [mA/pm] were obtained by taking the magnitude of the transfer function at 331.9 [Hz] between H1:CAL-PCALY_RX_PD_OUT_DQ (pre-calibrated into [m] / zpk([],[1,1],1)) and H1:OMC-DCPD_SUM_OUT_DQ (pre-calibrated into [mA] to ~10% accuracy).

Total light on the OMC DCPD values in [mA] were pulled directly from H1:OMC-DCPD_SUM_OUT_DQ (again, pre-calibrated into [mA]).

The DARM DC offset was varied by adjusting the "fringe offset," H1:OMC-READOUT_X0_OFFSET (pre-calibrated into [pm] to ~20-30% accuracy). This EPICS record can be found on the "IFO DC READOUT" sub-screen (called OMC_DC_READOUT.adl) of the OMC_CONTROL.adl overview screen. The nominal value is 10.65623 [pm], and to obtain the above data Evan varied the DARM DC offset from 6 to 13 [pm] in 1 [pm] increments.

The homodyne angle was then computed as
     zeta = homodyne angle [rad] = arccot( contrast defect [mA] / total light on DCPDs [mA] )

where the contrast defect [mA] is the y intercept of the parabola.

The IFO optical gain vs. DC power on the DCPDs was then fit (by eye) to a blind/simple quadratic function with a DC offset to arrive at the answer.
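As a small worked example of the arccot relation above, using the numbers quoted in Evan's entry (about 2 mA of contrast defect against 20 mA of total photocurrent):

    # homodyne_angle.py -- worked example of the relation above
    import numpy as np

    contrast_defect = 2.0   # [mA], y-intercept of the parabola (Evan's value)
    total_light = 20.0      # [mA], total photocurrent on the DCPDs

    # arccot(x) = arctan(1/x); arctan2 keeps the quadrant sensible
    zeta = np.arctan2(total_light, contrast_defect)

    print('zeta = %.1f deg, i.e. %.1f deg from 90 deg'
          % (np.degrees(zeta), 90.0 - np.degrees(zeta)))
    # -> ~84.3 deg, i.e. ~5.7 deg from 90 deg, consistent with the quoted ~6 deg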

From Evan's presentation G1601599, which nicely distills the famous-yet-cryptic 2001 Buonanno & Chen paper, the response of a DRFPMI interferometer using detuned resonant sideband extraction can be parametrized into a pair of complex poles (for the optical spring, at frequency |p| and quality factor Q), a pair of real poles (for the coupled-cavity, or "RSE," pole, at frequency xi), and a zero, at frequency z, which can potentially (and typically does for low detuning) cancel one of the RSE poles: 
     dP                    1 + i f / z 
     -- = g * ---------------------------------------           (5)
     dL       (1 + if/|p|Q + (f/|p|)^2) - (xi / f)^2
The zero, in his presentation, is composed of the following fundamental parameters,
                 cos(phi + zeta) - r_s cos(phi - zeta)
      z = f_a * ------------------------------------------      (8)
                 cos(phi + zeta) + r_s cos(phi - zeta)
where f_a is the arm cavity pole frequency (assumed to be the same for both arms), phi is the SRC detuning phase, and zeta is the homodyne angle.

One of the outputs of the above measurement is that, if the homodyne angle zeta is consistently 90 +/- 6 [deg], then we can use Eq. (8) to simply fix the zero frequency in the overall IFO response (5), assuming the arm cavity pole frequency and SRC detuning phase also remain constant. This would reduce the parameter space over which the calibration group would have to run MCMC fits to measurements of the overall response (e.g. LHO aLOG 28302).
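As a quick sanity check of how strongly z depends on zeta, Eq. (8) can be evaluated directly; in the sketch below the detuning phase and r_s (taken here to be an amplitude reflectivity) are placeholder values, not measured numbers:

    # rse_zero_frequency.py -- illustrative evaluation of Eq. (8); inputs are assumed
    import numpy as np

    def zero_freq(f_a, phi, zeta, r_s):
        """Zero frequency z from Eq. (8); all angles in radians."""
        num = np.cos(phi + zeta) - r_s * np.cos(phi - zeta)
        den = np.cos(phi + zeta) + r_s * np.cos(phi - zeta)
        return f_a * num / den

    f_a = 83.0               # arm cavity pole [Hz], per the ~83 Hz quoted below
    phi = np.radians(1.0)    # placeholder SRC detuning phase
    r_s = 0.8                # placeholder amplitude reflectivity

    for zeta_deg in (84.0, 90.0, 96.0):   # spanning the 90 +/- 6 deg range
        z = zero_freq(f_a, phi, np.radians(zeta_deg), r_s)
        print('zeta = %5.1f deg  ->  z = %8.1f Hz' % (zeta_deg, z))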

However, 
(1) This is, again, *one* measurement of the homodyne angle, zeta. We're going to have to measure it multiple times, and quantify the uncertainty in the estimate better, to make sure that we're confident it stays there.
(2) The SRC detuning phase, phi, and the arm cavity pole frequency, f_a, also need to be measured with quantifiable uncertainty. These are also parameters believed to be fixed, but the question is always to what level. f_a has been measured before to be ~83 Hz, using several techniques (e.g. LHO aLOG 7054), but rarely with quantified uncertainty. Further, those measurements are typically taken of a single cavity, and there is worry that the pole frequency may change a bit in the full IFO due to different spot centering*. The detuning phase "can be determined by the spring frequency."
To me, this is quickly going down a rabbit hole of another independent MCMC parameter estimation fitting regime, but I'm still quite ignorant on the topic.

*Word on the street is that LLO has a technique, once in full lock, of "kicking the SRM fast enough" such that the full IFO remains locked but behaves simply as a PRFPMI. I couldn't find an aLOG on it, and these discussions with Evan today were the first I'd heard of it. 

Worth exploring!
Images attached to this comment
gabriele.vajente@LIGO.ORG - 16:50, Monday 17 October 2016 (30606)

Doing a real least-squares fit gives different results, depending on what you assume (a sketch of both fit variants is below):

  1. If we require the minimum of the curve to be at z = 0 (meaning that we have a good calibration of the optical gain zero), the fit gives a contrast defect of 1.6 mA. The sum of residuals is 1.3 mA^2.
  2. If we allow the most generic second-order polynomial, the fit is better (the sum of residuals is 0.68 mA^2), but the minimum is at an optical gain of 0.8 mA/pm and the contrast defect is larger, 4.3 mA.
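A minimal sketch of the two fit variants (using placeholder data, not the measured points): the constrained fit forces the linear term to zero, while the generic fit leaves it free:

    # two_fit_variants.py -- illustrative only; the arrays below are placeholders
    import numpy as np

    g = np.array([0.6, 0.8, 1.0, 1.2, 1.4])      # optical gain [mA/pm]
    P = np.array([8.5, 13.5, 20.0, 27.9, 37.3])  # DCPD sum [mA]

    # Variant 1: P = a*g^2 + c, i.e. minimum forced to g = 0
    A = np.vstack([g**2, np.ones_like(g)]).T
    coef1, res1, *_ = np.linalg.lstsq(A, P, rcond=None)

    # Variant 2: generic second-order polynomial P = a*g^2 + b*g + c
    coef2 = np.polyfit(g, P, 2)
    res2 = np.sum((np.polyval(coef2, g) - P)**2)

    print('constrained: intercept = %.2f mA, sum of residuals = %.3f mA^2'
          % (coef1[1], res1[0]))
    print('generic:     minimum at g = %.2f mA/pm, sum of residuals = %.3f mA^2'
          % (-coef2[1] / (2 * coef2[0]), res2))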
Images attached to this comment
H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 17:27, Friday 14 October 2016 - last comment - 10:14, Tuesday 18 October 2016(30548)
PCALY OFS Glitching; Not Related to CAL Line Changes
J. Kissel, D. MacLeod

Duncan had noticed that Omicron trigger generation for the H1 PCAL Y RX PD (H1:CAL-PCALY_RX_PD_OUT_DQ) had failed on Oct 13 02:51 UTC (Oct 12 18:51 PDT) because it was receiving too many triggers. 

Worried that it might have been a result of the recent changes in calibration line amplitudes (LHO aLOG 30476) or the restoration of the 1083.7 Hz line (LHO aLOG 30499), I've trended the output of the optical follower servo, making sure that it has not saturated and is not constantly glitching.

Attached is a 3 day and 30 day trend.

There is indeed a feature in the trend at Oct 13 02:51 UTC, but it is uncorrelated in time with the two changes mentioned above. Indeed, the longer trend shows that the OFS has been glitching semi-regularly for at least 30 days. I'll have DetChar investigate whether any of these correspond with heightened periods of glitching in DARM, but as of yet, I'm not sure we can say that this glitching is a problem.
Images attached to this report
Comments related to this report
shivaraj.kandhasamy@LIGO.ORG - 09:38, Monday 17 October 2016 (30589)DetChar

The number of glitches seems definitely large, and seeing them in the OFS indicates they are real (and will be seen in DARM). Since the Pcal interaction with DARM (at LHO) is one-way, i.e. DARM is not expected to influence Pcal, the glitching is probably originating in Pcal. At LLO we have seen glitches in Pcal when there were issues with power supplies (LLO aLOG 21430), so it might be good to check those possibilities.

evan.goetz@LIGO.ORG - 10:14, Tuesday 18 October 2016 (30619)CAL, CDS, DetChar
Evan G., Darkhan T., Travis S.

We investigated these glitches in the Y-end Pcal OFS PD more deeply and can fully explain all of the deviations. The excursions are due either to DAQ restarts, line changes by users (including manual oscillator restarts, or by request to make transfer function measurements), shuttering of the Pcal laser, or maintenance activities. See the attached 35-day trend of the excitation channel, shutter status, and OFS PD output (trends for both the 16 kHz and 16 Hz channels).

What sets the limits on Omicron triggers? Should Omicron be set to allow a higher number of triggers for Pcal?
Images attached to this comment