Reports until 12:04, Tuesday 09 June 2015
LHO General
thomas.shaffer@LIGO.ORG - posted 12:04, Tuesday 09 June 2015 - last comment - 16:14, Tuesday 09 June 2015(19013)
Ops Maintenance Log

809 - Richard to both ends

809 - Jeff B to LVEA then to ends setting up dust mons

811 - Christina fork lifting to OSB receiving and opening door

814 - Rick to manifolds on both arms

820 - Kyle to EY

823 - Jodi to LVEA to put signs up, then to MY

826 - Christina/Karen to EY to clean

827 - Beam tube cleaning start

832 - Robert to EY to move coils

832 - Patrick to restart EX Beckhoff

835 - Jodi and Rick out of LVEA

838 - Andres to LVEA to move parts around

845 - EX Beckhoff back up

853 - Fil/Peter K to LVEA

900 - Kingsoft on site for RO work

905 - Praxair truck on site

907 - Fil/Peter out of LVEA

909 - Jodi leaving MY

917 - Andres out

921 - Gerardo to LVEA for wrenches

923 - Mitch to ends for serial number hunt

927 - Karen/Christina leaving EY for EX

928 - Gerardo out and to EY

947 - Robert out of LVEA

1000 - Praxair truck #2 on site

1006 - Daniel to EY to look at TCS setup

1010 - Richard back from ends

1020 - Robert to LVEA

1020 - Karen/Christina leaving EX

1020 - Fil to EY

1027 - Ed to CER

1036 - Ed out

1040 - Richard to EY

1049 - Rick LVEA fit check

1052 - Daniel back, for a moment

1057 - Richard to LVEA looking at Cosmic Ray Detector

1105 - Christina/Karen to LVEA

1107 - Patrick restarting PEMEY

1128 - Mitch back

1128 - Daniel back

1133 - Jeff B back

Comments related to this report
jeffrey.bartlett@LIGO.ORG - 16:14, Tuesday 09 June 2015 (19032)
Log for second half of the day:

11:48 Beam Tube cleaning crew breaking for lunch
12:00 Start both dust monitors at End-Y
12:00 Kyle – Back from End-X – Note: Fan and power supply running while turbo pump spins down
12:27 Bubba & Gerardo – Back from End-Y 
13:35 Beam tube cleaning restarted HNWX2
14:48 Kyle – Going to End-X to shut off turbo pump fans and power supply
15:16 Kyle – Back from End-X
15:30 Beam cleaning team done for the day
H1 TCS
daniel.sigg@LIGO.ORG - posted 11:41, Tuesday 09 June 2015 (19020)
EY ring heater hooked up for charge measurements

The ring heater for ETMY was disconnected from the chassis. The power of the chassis was already off, since we were not using it. A special cable which shorts all ring heater segments together was connected to the output of an SR560. The SR560 ground was connected to the ESD driver ground, while its input is driven by the PEM DAC (H1:PEM-EY_GDS_1_EXC). A drive of 12000 counts corresponds to about 1.8Vpp.
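For reference, the quoted drive scaling can be wrapped in a tiny helper. This is only a sketch: it assumes the DAC-counts-to-volts relation is linear below saturation, anchored to the single measured point above.

```python
# Sketch: SR560 drive amplitude vs. PEM DAC excitation counts.
# Assumes linearity below DAC saturation; anchored to the measured
# point "12000 counts corresponds to about 1.8 Vpp".
VPP_PER_COUNT = 1.8 / 12000.0  # ~1.5e-4 Vpp per count

def drive_vpp(counts: float) -> float:
    """Approximate peak-to-peak drive voltage for a given excitation amplitude."""
    return counts * VPP_PER_COUNT

print(round(drive_vpp(12000), 3))  # 1.8
print(round(drive_vpp(4000), 3))   # 0.6
```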

H1 CDS
patrick.thomas@LIGO.ORG - posted 11:12, Tuesday 09 June 2015 (19017)
end Y weather station IOC restarted
Got disconnected as part of dust monitor work at end Y. Burtrestored.
H1 INJ (CAL, DetChar)
jeffrey.kissel@LIGO.ORG - posted 10:32, Tuesday 09 June 2015 (19015)
Updated Hardware Injection's Inverse Actuation Filter Turned ON
J. Kissel

In the H1:CAL-INJ_HARDWARE and H1:CAL-INJ_BLIND filter banks, I've turned ON the updated inverse actuation filter for ER7 (which lives in FM2 of both banks), turned OFF the mini-run filter (which *still* lives in FM1) and accepted the changes in the SDF system. See LHO aLOG 18997 for design details of the new filter.
H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 10:02, Tuesday 09 June 2015 - last comment - 11:02, Tuesday 09 June 2015(19012)
Lock Segments at 23W (and therefore calibration should be used with caution)
J. Kissel

Over the weekend, Evan, Stefan and Kiwamu explored running the IFO at 24 [W] requested PSL input power (see e.g. LHO aLOG 18923) instead of the 17 [W] for which the interferometer had been calibrated (see LHO aLOGs 18769 and 18813). Because we do not yet have automatic optical gain scaling built into the IFO's control system, the calibration will be incorrect for the following science segments:
Seg Num  GPS Start   GPS End     Duration [s]  UTC Start             UTC End
1        1117601338  1117665437  64099         Jun 06 2015 04:48:42  Jun 06 2015 22:37:01
2        1117667359  1117702258  34899         Jun 06 2015 23:09:03  Jun 07 2015 08:50:42
3        1117709888  1117743019  33131         Jun 07 2015 10:57:52  Jun 07 2015 20:10:03
4        1117748054  1117773188  15134         Jun 07 2015 21:33:58  Jun 08 2015 04:32:52
5        1117803358  1117814235  10877         Jun 08 2015 12:55:42  Jun 08 2015 15:56:59
6        1117814313  1117815464  1151          Jun 08 2015 15:58:17  Jun 08 2015 16:17:28
7        1117829201  1117833161  3960          Jun 08 2015 20:06:25  Jun 08 2015 21:12:25
---
                                 Total Time    Uptime@23[W]  Single IFO Duty Factor During These Segments
                                 64.40 hr      45.27 hr      70.3%
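As a quick cross-check, the totals row follows from the segment list above; summing the tabulated durations and spanning first GPS start to last GPS end reproduces the quoted numbers to within a tenth of an hour.

```python
# Cross-check the totals row from the tabulated segments.
durations_s = [64099, 34899, 33131, 15134, 10877, 1151, 3960]
gps_first_start = 1117601338
gps_last_end = 1117833161

span_s = gps_last_end - gps_first_start  # first GPS start to last GPS end
uptime_hr = sum(durations_s) / 3600.0
span_hr = span_s / 3600.0
duty_pct = 100.0 * sum(durations_s) / span_s

print(f"Total time : {span_hr:.2f} hr")    # 64.40 hr
print(f"Uptime     : {uptime_hr:.2f} hr")  # 45.35 hr
print(f"Duty factor: {duty_pct:.1f}%")     # 70.4%
```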

Offline optical gain calculations are being made by the GDS calibration pipeline, but they are not being applied directly since this is the first time they've been calculated. However, evidence suggests that LHO's DARM coupled cavity pole frequency (i.e. the single-pole approximation to the interferometer's response to gravitational waves / displacement noise) is still a moving target, so the calibration error (not uncertainty, but actual error) may not be just a scale factor but a frequency-dependent error. We should* have enough information from PCAL and DARM calibration lines to make an estimate of how the frequency dependence is changing over time.

*"should" is still "in principle;" we have not yet finished the commissioning of processing PCAL / DARM calibration lines to a point where we can determine at what precision the 6 lines will be able to determine the optical transfer function. This work is on-going.
Comments related to this report
kiwamu.izumi@LIGO.ORG - 10:47, Tuesday 09 June 2015 (19014)

"Incorrect" sounds too strong to me.

I would say it was incorrect only in the sense that the cavity pole frequency was uncertain, which is the case not only for the 24 W configuration but also for 17 W. Otherwise we believe that the calibration remained valid both in GDS and CAL-CS at 24 W (though remember that CAL-CS has not been fully updated to the equivalent of GDS, alog 1880, hence the discrepancy between them). The OMC has a power-scaling functionality (alog 18470) and therefore, ideally, the optical gain does not change as we change the PSL power. As for the cavity pole frequency, the Pcal lines should be able to tell us how stable it has been.

As reported in alog 18293, the optical gain seemed to have dropped by 4% in this particular lock according to the Pcal line at 540 Hz. Sudarshan is currently analyzing the Pcal trend, but it seems that the optical gain typically changes by 4-5% in every lock stretch, probably due to a different OMC error gain (which is computed in every RF->OMC transition) and perhaps different alignment somewhere. We compensated by increasing OMC-READOUT_ERR_GAIN by 4% at the beginning of this particular lock, and therefore we thought the calibration was good, assuming the cavity pole stayed at the same frequency, 355 Hz.

daniel.hoak@LIGO.ORG - 11:02, Tuesday 09 June 2015 (19016)DetChar

I suspect there is a lingering source of error in the gain of the OMC-DCPD --> DARM_IN1 path.  This may be due to the initial gain-matching calculation between DCPD_SUM and RF-DARM, but it could also be due to a scaling error as we adjust the overall gain during the power-up step.  We initially set the gain at a DARM offset of ~40pm, but as we power up to 23W we reduce the offset to ~15pm.  The current gain-scaling calculation that Kiwamu links to does not account for the small static DARM offset that we have observed (it's a fraction of a picometer, see here).  I will post a note about this today -- the overall effect should be very small, but may account for the ~4% change that we have observed.  (If this is the source of the gain error it will be proportional to DARM offset, which is the same as power level since we change both at the same time.)

H1 CDS
patrick.thomas@LIGO.ORG - posted 08:47, Tuesday 09 June 2015 (19010)
restarted h1ecatx1
Biweekly crash recovery. Burtrestored to 06/08/2015 00:10.
LHO General
thomas.shaffer@LIGO.ORG - posted 08:13, Tuesday 09 June 2015 - last comment - 08:21, Tuesday 09 June 2015(19008)
Ops Report

Out of Science mode.

I flipped the intent bit because with Tuesday maintenance comes lots of noise.

Comments related to this report
thomas.shaffer@LIGO.ORG - 08:21, Tuesday 09 June 2015 (19009)

Lock broke, we will keep it this way till the maintenance period is over or until we hear otherwise.

H1 INJ (DetChar, INJ)
peter.shawhan@LIGO.ORG - posted 07:43, Tuesday 09 June 2015 - last comment - 14:01, Tuesday 09 June 2015(19006)
First test of detchar 'safety' injections
Peter Shawhan, Andy Lundgren, Nutsinee Kijbunchoo

We did a first -- and successful! -- test of the "detchar" or "safety" hardware injections shortly after 6:00 PDT this morning, at the time recommended by Jeff (work permit 5262).

The detchar injections are a sequence of loud sine-Gaussians at a range of frequencies, primarily intended to check for couplings from the GW strain channel to auxiliary channels.  (See https://dcc.ligo.org/LIGO-G1500713 .)  For now, at least, we are using a set of 14 frequencies logarithmically spaced from 30 Hz to 2000 Hz, each injected at 3 different amplitudes to try to hit target SNR values, spaced 5 seconds apart.  Here is the full list, with times relative to the start time of the injection:
Matlab> GenerateSGSequence('H1','H1_ASD_at_1117710916.txt');
__time__   __freq__   __SNR__    __AMP__
    0.50       30.0      25.0    2.22e-20
    5.50       41.4      25.0    7.54e-21
   10.50       57.2      25.0    2.86e-21
   15.50       79.1      25.0    1.72e-21
   20.50      109.2      25.0    1.52e-21
   25.50      150.9      25.0     1.7e-21
   30.50      208.4      25.0    1.97e-21
   35.50      287.9      25.0    2.92e-21
   40.50      397.7      25.0    3.73e-21
   45.50      549.3      25.0    5.42e-21
   50.50      758.8      25.0    6.95e-21
   55.50     1048.2      25.0     1.1e-20
   60.50     1447.9      25.0    1.71e-20
   65.50     2000.0      25.0    2.68e-20
   70.50       30.0      50.0    4.44e-20
   75.50       41.4      50.0    1.51e-20
   80.50       57.2      50.0    5.71e-21
   85.50       79.1      50.0    3.44e-21
   90.50      109.2      50.0    3.03e-21
   95.50      150.9      50.0    3.41e-21
  100.50      208.4      50.0    3.94e-21
  105.50      287.9      50.0    5.85e-21
  110.50      397.7      50.0    7.47e-21
  115.50      549.3      50.0    1.08e-20
  120.50      758.8      50.0    1.39e-20
  125.50     1048.2      50.0     2.2e-20
  130.50     1447.9      50.0    3.41e-20
  135.50     2000.0      30.0    3.22e-20
  140.50       30.0     100.0    8.88e-20
  145.50       41.4     100.0    3.02e-20
  150.50       57.2     100.0    1.14e-20
  155.50       79.1     100.0    6.87e-21
  160.50      109.2     100.0    6.06e-21
  165.50      150.9     100.0    6.81e-21
  170.50      208.4     100.0    7.87e-21
  175.50      287.9     100.0    1.17e-20
  180.50      397.7     100.0    1.49e-20
  185.50      549.3     100.0    2.17e-20
  190.50      758.8     100.0    2.78e-20
  195.50     1048.2     100.0    4.41e-20
  200.50     1447.9      75.0    5.12e-20
  205.50     2000.0      30.0    3.22e-20
The file, on h1hwinj, is /data/scirun/O1/HardwareInjection/Details/config/Burst/Waveform/detchar_1117890580_3_H1.txt .

With H1 running in good low-noise mode, Nutsinee switched the intent bit to 'commissioning' and we first injected the sequence starting at 1117890580 with an overall scale factor of 0.25 -- so the target SNRs/amplitudes are a factor of 4 smaller than in the table above. Nutsinee didn't see anything obvious appearing in the live spectrum initially, but Andy looked at Omicron output afterward and saw that it had picked up at least some of the louder signals.

We then injected the sequence again starting at 1117891250, this time with an overall scale factor of 1.0. Nutsinee saw the signals clearly peak up in the live spectrogram, and Andy's quick check with Omicron showed many signals found with large SNR. The interferometer appeared to handle the injections fine, staying in lock. Afterward (and also in between the two injections), Nutsinee set the intent bit back to 'science'.

Note: In the future, we expect detchar safety injections such as these to be marked with the DetChar bit in the CAL-INJ_ODC bitmask, but for the test today we treated it as a burst injection -- it will be marked in ODC (and GDS-CALIB_STATE) as a burst injection, and should produce ODC-INJECTION_BURST segments in the DQ segment database.
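For reference, the 14-frequency grid in the table is a plain logarithmic spacing from 30 Hz to 2000 Hz. The actual waveforms came from the Matlab GenerateSGSequence script; this Python fragment only reproduces the frequency column.

```python
# Reproduce the injection frequency grid: 14 log-spaced values, 30-2000 Hz.
n = 14
f_lo, f_hi = 30.0, 2000.0
ratio = (f_hi / f_lo) ** (1.0 / (n - 1))
freqs = [f_lo * ratio**k for k in range(n)]
print([round(f, 1) for f in freqs])
# [30.0, 41.4, 57.2, 79.1, 109.2, 150.9, 208.4, 287.9, 397.7, 549.3,
#  758.8, 1048.2, 1447.9, 2000.0]
```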
Comments related to this report
andrew.lundgren@LIGO.ORG - 12:01, Tuesday 09 June 2015 (19019)DetChar, INJ
I've done a few checks of the injections. The first attachment is the spectrum before the first injections were started compared to the spectrum just after the last one finished. The spectrum is the same before as after, so I don't think anything got rung up. Maybe we can check the violin modes more carefully. There was a fairly big glitch two minutes later (Omega scan) but I don't think it was related.

The other four attachments are the injections of the first round of the injection set, done at normal gain. These are meant to have SNR of about 25, but that varies with the spectrum. Most look fine. However, the injections at 1 kHz and above are not correct. They look to be anti-aliased down, or maybe there's a saturation or something wrong with the actuation. We'll check our code to see if there's something wrong with the file we generated.
Images attached to this comment
peter.shawhan@LIGO.ORG - 14:01, Tuesday 09 June 2015 (19028)DetChar, INJ
When Andy presented this on the ER7 call and talked about the higher-frequency injections not appearing correctly, Jeff asked if we were hitting the software limit at +-200 counts.  That limit had been set based on some CBC injection studies; we don't really know the level at which unavoidable saturation in the software or hardware chain would set in.  Duncan M quickly confirmed that the H1:CAL-INJ_HARDWARE_OUT_DQ channel was hitting +-200 for some of the injections.  I took a look too and estimated what SNR value hits that software limit:
 At 549 Hz,  SNR=80  hits 200 counts
   759 Hz   SNR=33
  1048 Hz   SNR=10.4
  1448 Hz   SNR<6.25  (saturated even for the weakest one we injected)
  2000 Hz   SNR<6.25  (saturated even for the weakest one we injected)
So this tells us how much that (currently rather arbitrary) software limit would need to be relaxed to put in larger-SNR injections at those higher frequencies, if it's important to do so.
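Since the injected amplitude at a fixed frequency scales linearly with the requested SNR, the peak of H1:CAL-INJ_HARDWARE_OUT_DQ in DAC counts does too, and one measured peak fixes the SNR that first hits the limit. A sketch of that estimate (the 62.5-count example peak is back-derived from the quoted 549 Hz result, not an independent measurement):

```python
# SNR at which an injection first hits the +/-200-count software limit,
# assuming peak DAC counts scale linearly with requested SNR at a fixed
# frequency.
LIMIT_COUNTS = 200.0

def limiting_snr(peak_counts: float, injected_snr: float) -> float:
    """Requested SNR whose peak output reaches the software limit."""
    return LIMIT_COUNTS * injected_snr / peak_counts

# If the SNR-25 injection at 549 Hz peaked near 62.5 counts (back-derived):
print(limiting_snr(62.5, 25.0))  # 80.0, matching the quoted 549 Hz limit
```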
H1 General (DetChar)
nutsinee.kijbunchoo@LIGO.ORG - posted 04:01, Tuesday 09 June 2015 - last comment - 04:56, Tuesday 09 June 2015(19004)
Lock loss 2:42 PST still trying to reacquire the lock

Jim told me that Guardian kept losing lock at BOUNCE_VIOLIN_MODE_DAMPING. After the lock loss I let Guardian take care of the locking up until this state, and it lost lock as expected. I've just finished the initial alignment and will be doing the bounce/violin mode damping by hand. LOWNOISE_ESD_ETMY is also known to have problems today, so I will be doing the same thing when I get there. I have never run the code by hand, so success is not guaranteed. I'll do the best I can to get the IFO up for the 6 AM hardware injection. Detcharians, please stay tuned.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 04:56, Tuesday 09 June 2015 (19005)

04:30 Locking again at LSC_FF. Guardian worked fine when LOWNOISE_ESD_ETMY was requested. Intent bit switched to undisturbed once the range was stable.

H1 General
jim.warner@LIGO.ORG - posted 00:00, Tuesday 09 June 2015 - last comment - 19:32, Tuesday 09 June 2015(19001)
Shift Summary

Mostly quiet shift

16:00 Took over initial alignment from Cheryl, trying to lock

18:30 Finally locked

19:00 Robert to EY to bang on beam tube, lots of glitching -> particulate?

20:00 Evan breaks lock with a small change to Guardian

22:00 Lock reacquired, LLO is up too.

23:26 HFD is on site, out the gate at 23:55

Comments related to this report
robert.schofield@LIGO.ORG - 19:32, Tuesday 09 June 2015 (19038)

I went out to watch the cleaning earlier in the day and returned after work had finished to reproduce some of the cleaning activities. I was on the phone with the operator, who monitored DARM for glitches. I found that tapping the beam tube with metal objects like the water/vacuum nozzles produced large glitches, but brushing with the brushes did not. I found that the softer the instrument, the harder it was to make glitches. I was never able to make glitches with my fist, but there was nearly a one-to-one coincidence of glitches with metal taps. All glitches, according to Jim, the operator, were broadband and a couple of orders of magnitude above background. I had to wait quite a while for the spectrum to settle down before tapping again. The glitches were like delta functions, not like scattering shelves. There did not seem to be a difference between locations at a baffle and halfway between baffles.

I suggested to Bubba and John that we might make fewer glitches if there was a polymer guard on the nozzles.

I guess that the important quantities for freeing metal oxide particles are either acceleration or change in curvature of the beam tube. The difference between soft and hard "hammers" is consistent with both of these hypotheses. I think that it is important to estimate the inter-site coincidence rate and propose that I mount an accelerometer and a shaker on the tube to study glitching as a function of frequency and amplitude. I suspect that there is a soft threshold in curvature change or acceleration, and that this will be fairly constant with time since the oxide layers should no longer be growing. 

H1 General
jim.warner@LIGO.ORG - posted 23:36, Monday 08 June 2015 - last comment - 23:56, Monday 08 June 2015(19002)
HFD is on site

HFD reports a power outage (? I could barely hear him... gate phone, ftl) and they need to check some boxes. He's currently driving down the X-arm very slowly, so I've flipped the intent bit. Will revert when I see he's off site.

Comments related to this report
jim.warner@LIGO.ORG - 23:56, Monday 08 June 2015 (19003)

Elvis has left the building. HFD off site at 23:55.

H1 GRD
evan.hall@LIGO.ORG - posted 22:27, Monday 08 June 2015 (19000)
Guardian state for transitioning back to ETMX

The ISC_LOCK guardian now has a state HIGH_RANGE_ESD_ETMX, which is intended to be used to transition control of DARM from ETMY back to ETMX.

However, in the interest of maximizing the amount of coincident locking time with L1, this state has not yet been tested.

H1 ISC (DetChar, SUS)
jim.warner@LIGO.ORG - posted 21:24, Sunday 07 June 2015 - last comment - 21:53, Monday 08 June 2015(18965)
25.4 Hz peaks have reappeared, requested power down to 16-18 watts

Starting right around 21:00 local, the 25.4 Hz (well, DTT says something like 25.37) peak showed back up, creating a nasty looking comb in the DARM spectrum (first image). Unsure of what else to do, I turned the power down to 16 watts, and the peak has now kind of subsided (second image). I'll wait to see if the peak settles down any more, then maybe turn the power back up.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 22:43, Sunday 07 June 2015 (18966)

Dan called in and helped me look for a PI with a template he had ready. There were 2 peaks rung up, at 15516 and 15540 Hz; see attached plot. The main culprit causes the big bump in the RMS; the little peak next to it was also rung up more than Dan's quiet reference. We were troubleshooting when the IFO lost lock. Guardian is bringing everything back up.

Images attached to this comment
daniel.hoak@LIGO.ORG - 00:16, Monday 08 June 2015 (18969)

Thanks, Jim!

The channel H1:IOP-LSC0_MADC0_TP_CH12 is the 64kHz-sampled IOP input for OMC DCPD A.

Recall that the frequency of this mode matches what Elli measured for our parametric instability.

carl.blair@LIGO.ORG - 08:22, Monday 08 June 2015 (18975)

The other approach would be to change your ring heater power. I guess you are in the same situation as us, given that the previous step up in ring heater power was effective. So you need to increase your ring heater power up from 0.5 W per segment, which is what I think it was set to after your first observation of parametric instability. At Livingston, if we increase the ring heater too much, we then ring up a 15004 Hz mode. There is a new wiki here for operators, as we had an apparent change in the parametric gain after the last vent and we are still in the process of finding a new operating point for the ring heater.

carl.blair@LIGO.ORG - 21:53, Monday 08 June 2015 (18999)

The two acoustic modes appear to ring up with a similar time constant (see image). There is also a peak in DARM a bit further down (ldvw link); any idea if this is related? It's a bit big for an acoustic mode that is not ringing up.
I would guess that these two modes are ETMY and ETMX ringing up in the vertically oriented mode, as that is what we mostly see at Livingston. You could look at the transmission QPD channels for more information if you are interested.
At Livingston these channels are L1:IOP-ISC_EY_MADC1_TP_CH0-7 for the X and Y ends; pitch and yaw orientations can be derived.

Images attached to this comment
H1 DetChar
kiwamu.izumi@LIGO.ORG - posted 18:01, Friday 05 June 2015 - last comment - 10:00, Tuesday 09 June 2015(18918)
unidentified DARM glitches

Stefan, Kiwamu,

This morning, DARM had many glitches, which were visible in the DARM spectrum as wide-band noise. We looked at various channels to see what caused the glitches, but we were not able to identify them.

We would like to get some help from the detchar people. Could you guys please look for a cause of the DARM glitches?

The lock stretch with a high glitch rate is the one starting at 2015-06-05 17:17-ish and ending 18:00-ish UTC. I attach an example time series of the glitches as seen in OMC-DCPD A and B. We know that some of the loud ones were so large that they saturated the ADCs of the OMC DCPDs.

Note that in the same lock stretch, we changed the DARM offset multiple times which changed the amount of the carrier light resonating in OMC. We do not think our activity with the DARM offset caused the high glitch rate.

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 18:04, Friday 05 June 2015 (18919)
Here is also a DARM spectrum and time series, as well as the summary page range graph. The glitches occur in the short science segment just before 18h.
Images attached to this comment
thomas.massinger@LIGO.ORG - 11:32, Saturday 06 June 2015 (18932)DetChar, ISC

We need to do some more thorough follow-up, but I can give some quick feedback that might give some hints. The initial signs seem to suggest that the problem might be due to input pointing glitches or alignment fluctuations in the PRC.

The loudest glitches in that short science segment were coincident with ASC-REFL_A_RF9_I_PIT_OUT_DQ, which looks like it was being used as the error signal for the INP1 and PRC2 loops, feeding back onto IM4 and PR2. The corresponding YAW channel was also significant, but not quite as much. We also saw ASC-REFL_B_RF9_I_PIT_OUT_DQ as somewhat significant; it looks like it was being used in the INP1, INP2, and PRC2 loops, feeding back on IM4 and PR2.

kiwamu.izumi@LIGO.ORG - 10:00, Tuesday 09 June 2015 (19011)

So, we are now starting to believe that these loud glitches were related to the cleaning activity on the X arm vacuum tube (alog 18992). Here I show glitch waveforms in time series from three glitch events (out of many) that were observed this past Friday, 5th of June.

Typically the glitches were very fast, and I am guessing that the glitch itself happens on a time scale of 10 ms or maybe less. Usually the OMC DC signals or the DARM error follow with a relatively slow oscillation with a period of roughly 100 msec, which I believe is an impulse response of the DARM control. All three events showed a power drop in the carrier light everywhere (TRX, TRY and POP), indicating that the power recycling gain dropped simultaneously. For some reason POP_A_LF showed a slower power drop, which I do not understand. Also, in all three events that I looked at, the OMC DCPDs showed a small fluctuation roughly 20 ms before the big transient happened.

 

1. 2015-06-05 17:49 UTC

(This one contains two glitch events apart only by roughly 200 msec)

2. 2015-06-05 15:51 UTC

3. 2015-06-05 17:55 UTC

 

4. High passed version of glitch event 1

(All the time series are high-passed with zpk([0], [40], 1).) You can see glitchy behavior in the REFL WFS (as reported by TJ) as well as in the TRX QPD (and a little bit in the TRY QPD).
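For readers unfamiliar with the foton-style notation: zpk([0], [40], 1) is a first-order high-pass with a zero at 0 Hz and a pole at 40 Hz, roughly unity gain well above the pole. A rough scipy sketch of the equivalent filter (the 16384 Hz sample rate is an assumption, not taken from the log):

```python
import numpy as np
from scipy import signal

# foton-style zpk([0], [40], 1): zero at 0 Hz, pole at 40 Hz -- a
# first-order high-pass, ~unity gain well above 40 Hz.
fs = 16384.0                 # assumed sample rate
w_pole = 2 * np.pi * 40.0
# Continuous-time H(s) = s / (s + w_pole), discretized with the bilinear
# transform.
b, a = signal.bilinear([1.0, 0.0], [1.0, w_pole], fs=fs)

# Apply to a toy time series: a 1 Hz drift plus a 400 Hz "glitch ring".
t = np.arange(int(fs)) / fs
x = np.sin(2 * np.pi * 1.0 * t) + np.sin(2 * np.pi * 400.0 * t)
y = signal.lfilter(b, a, x)  # 1 Hz component suppressed, 400 Hz passed
```

The point of such a high-pass here is to remove slow drifts so that fast transients stand out in the time series.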

Images attached to this comment