H1 CDS (CDS)
keita.kawabe@LIGO.ORG - posted 20:29, Thursday 14 January 2016 (24957)
DTT-NDS2 incompatibility is still very dangerous

There is a danger in using DTT together with NDS2, as described here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22128

When you look at data of a channel from the past, and the sampling rate of that channel was changed at some point, DTT can be confused and output a totally bogus calculation. If you're only looking at a spectrum, it's just a matter of a wrong frequency axis, but if you look at the coherence between a channel with this problem and a channel without it, the coherence is totally gone. Recently I was hit by this behavior again and spent a day figuring it out.

In the first attachment, I was looking at the coherence between PEM magnetometers and DARM using DTT. On the bottom row, left half, you can see that DARM and H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ are sometimes coherent with each other (BTW this cannot be magnetic coupling into HAM6, because the level of magnetic noise is too small according to Robert), but the individual X, Y and Z channels don't show any coherence (top row right half, middle row). The strange thing is that QUAD_SUM is made inside the front end by summing the squares of X, Y, and Z.

In the second attachment, I did the same measurement using Matlab, and the X coherence is actually larger than QUAD_SUM. This is not as mysterious as I thought from the DTT plot.

The difference, it turns out, is that the sampling rate of the X, Y and Z channels was increased from 4096 Hz to 8192 Hz (but that's not the case with SUM). DTT cannot handle this and assumes that they were always 4096 Hz. If you look at the spectrum of, say, the X channel in DTT, you'll see that the first mains line peak is at 30 Hz, not 60.
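To illustrate (a minimal, self-contained sketch using plain NumPy/SciPy rather than DTT or NDS2): a 60 Hz line in data actually sampled at 8192 Hz shows up at 30 Hz if the spectrum is computed assuming the old 4096 Hz rate.

import numpy as np
from scipy.signal import welch

# A 60 Hz line in data sampled at 8192 Hz, the rate after the change.
fs_true = 8192
fs_assumed = 4096   # the old rate that DTT assumes
t = np.arange(0, 64, 1.0 / fs_true)
x = np.sin(2 * np.pi * 60.0 * t) + 0.1 * np.random.randn(t.size)

f_wrong, p_wrong = welch(x, fs=fs_assumed, nperseg=4096)
f_right, p_right = welch(x, fs=fs_true, nperseg=4096)

print("peak with assumed 4096 Hz:", f_wrong[np.argmax(p_wrong)], "Hz")  # ~30 Hz
print("peak with correct 8192 Hz:", f_right[np.argmax(p_right)], "Hz")  # ~60 Hz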

This is really annoying.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 20:01, Thursday 14 January 2016 (24956)
Ops Evening Mid-Shift
   After settling down from the problems during the first half of the shift, we are in Observing mode. General environmental conditions are good. At this time there are no apparent problems to report. 
H1 CAL (CAL)
evan.goetz@LIGO.ORG - posted 19:10, Thursday 14 January 2016 (24954)
Calibrated Pcal PD ratio trend investigations
Evan G., Darkhan T.
 
Summary:
Continuing to assess the observation Jeff made that there is a 1.8% discrepancy when comparing PCALY_?X_PD_OUT_DQ, ?=(T,R) (where there should be no discrepancy), we investigated the long-term trend of the TX and RX PD values. To measure this, we grabbed 10-minute mean value trends of PCAL*_?X_PD_OUT_DQ, where *=X or Y, starting from September 11, 2015, for 110 days. Plotting RX/TX for each end station, we can make the following observations:
  1. The H1 x-end Pcal has a very sad drift downward on the order of 20% over the 110 days.
  2. In the H1 y-end Pcal, there is a distinct change occurring on Nov. 17 (a maintenance day!); from the 10-minute trends, the new epoch starts at GPS = 1131826740
  3. The second epoch of H1 y-end Pcal may show signs of a two-mode Gaussian distribution, but only because this fits the histogram better than a single Gaussian function

Conclusion:
We can account for 0.25% of the discrepancy measured by Jeff, and if we also consider that the Foton filter values for the H1 y-end Pcal have been incorrect by 1.5% since the beginning of the run due to a change in measured optical efficiency, we believe this should resolve the 1.8% discrepancy measured recently. The current calibration factors in the front-end filters use a measurement from May 2015, and ETMY was misaligned during that measurement. The current hypothesis is that the ETM alignment during the end-station calibration measurements can affect the results, so this could have changed the amount of light reaching the receiver module. Separately evaluating the two epochs could give a better measure of the systematic uncertainties (the difference between the front-end filter values and the correct values) in each epoch, rather than a single combined uncertainty.

These measurements should be repeated for L1.

Details:
We took the ratio RX/TX of the trends and then removed outlier data points lying far from 1.0. (There were outliers when the end-station PDs were being calibrated or when the ETMs were misaligned.)
 
Then we plotted trends and made histograms. From here on we cannot say much about the RX/TX ratio for the x-end, because it is probably clipping and is therefore unusable. The results below are for the H1 y-end.
 
The change in the RX/TX ratio is easily observed, and we consider the two epochs separately. Each epoch histogram is fitted with a Gaussian distribution (not for the x-end, because its trend is not stable) or, in the case of epoch 2 at the y-end, a two-mode Gaussian distribution. The trend of the PCALY RX/TX calibrated PD output shows that on Nov 17 (Tuesday maintenance day, GPS = 1131826740) the RxPD to TxPD ratio changed by about 0.23%: the discrepancy between the two photodetectors increased from 1.0125 to 1.0148.
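For reference, here is a minimal sketch of this kind of analysis with synthetic stand-in arrays (in practice rx and tx would be the 10-minute mean trends of the Pcal RX and TX PD channels fetched from NDS):

import numpy as np
from scipy.stats import norm

# Synthetic stand-in for the 10-minute mean trends.
rng = np.random.default_rng(0)
n = 5000
tx = 1.0 + 0.001 * rng.standard_normal(n)
rx = tx * (1.0125 + 0.0005 * rng.standard_normal(n))
rx[::200] *= 0.5                            # fake outliers (PD calibrations, misaligned ETMs)

ratio = rx / tx
ratio = ratio[np.abs(ratio - 1.0) < 0.05]   # drop points lying far from 1.0

mu, sigma = norm.fit(ratio)                 # single-Gaussian fit to the histogram
print("mean RX/TX = %.4f, std = %.4f" % (mu, sigma))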
 
Comparison of RX/WS, TX/WS and e (optical efficiency) from the PCALY end-station measurements on Oct 13 and Dec 22 (thankfully one measurement in each epoch) gives

    Quantity     D20151013    D20151222    PercentChange
    _________    _________    _________    _____________

    'e'          0.98933        0.989       0.033457   
    'TX / WS'    -2.7427      -2.7382        0.16349   
    'RX / WS'    -4.0004      -4.0028      -0.061419   
    'RX / TX'     1.4585       1.4618       -0.22527   

 
from which we can conclude that the change seen on Nov 17 in the 110-day calibrated RX/TX trend is mainly due to a change in the TX/WS response ratio, while the RX/WS response changed by only about a third as much and the optical efficiency changed even less. The RX/TX ratio from these calibration measurements also gives a change of 0.23%, confirming the trend result and pointing to the TX/WS response as the main culprit. Quite possibly, the wedge splitter was bumped on the maintenance day, resulting in a change in the splitting ratio.
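As a quick arithmetic cross-check of the table (a sketch only; the sign convention is inferred from the listed values, and the rounded entries are used, so agreement is only to within rounding):

# Percent changes recomputed from the rounded table values, assuming the
# convention 100*(D20151013 - D20151222)/D20151222.
oct13 = {"e": 0.98933, "TX/WS": -2.7427, "RX/WS": -4.0004, "RX/TX": 1.4585}
dec22 = {"e": 0.98900, "TX/WS": -2.7382, "RX/WS": -4.0028, "RX/TX": 1.4618}
for key in oct13:
    print("%7s: %+.3f %%" % (key, 100.0 * (oct13[key] - dec22[key]) / dec22[key]))
# The ~0.23% magnitude of the RX/TX change matches the step seen in the
# 110-day trend on Nov 17.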
Non-image files attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 18:54, Thursday 14 January 2016 (24955)
IFO Relocked
    Dave and Jim corrected the problem with the WD reset. Did an initial alignment and relocked with no apparent problems. Power at 21.4 W, range at 80 Mpc. Cleared a few SDF issues and set the intent bit to Observing.  
H1 General
jeffrey.bartlett@LIGO.ORG - posted 18:04, Thursday 14 January 2016 (24953)
Ops Evening Shift Transition
  Transition Summary:
Title:  01/14/2016, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
	
State of H1: IFO unlocked and OMC WD tripped. Working on fixing the WD trips (see Dave's aLOG), initial alignment, and relocking  

Outgoing Operator: TJ
H1 SUS (CDS)
david.barker@LIGO.ORG - posted 17:47, Thursday 14 January 2016 - last comment - 15:42, Friday 15 January 2016(24950)
SUS watchdog resets stopped working this afternoon

Jeff K, Jeff B., Jenne, TJ, Jim, Dave:

Around 4pm PST TJ reported that OMC had tripped and the watchdog could not be untripped. Jeff K. recommended a model restart. Unfortunately, due to a communication problem, we first mistakenly restarted the OMC model on the LSC front end (sorry OMC). Then we restarted the correct SUS-OMC model on SUSH56. This did not fix it. We then restarted all the models on SUSH56 (including the IOP). This did not fix it. We then stopped all models and only started the IOP and SUS-SRM to do further debugging (in the meantime the SWWD on the IOP had tripped SEI for HAM5 and HAM6). After some debugging we found that the PERL script sus/common/scripts/wdreset_all.pl was throwing an error about not finding the PERL CA LIBRARY. Jim tracked this down to a missing CaTools.pm perl module in the userapps/release/guardian directory. It turns out this file was removed from the SVN repository way back on 2nd March 2015, and the LHO working directory was only updated this afternoon by Jenne and TJ. This all nicely ties in with the watchdog resets working last night but not this afternoon.

In the meantime we had manually reset the watchdogs for SUS-SRM/SR3/OMC and SEI HAM5,6, and set the SDF back to OBSERVE for SUSH56IOP, SUS SRM/SR3/OMC and OMC.

For now we have manually copied the CaTools.pm file into userapps/release/sus/common/scripts to get the watchdog reset script working again. 
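For what it's worth, a quick way to spot this class of problem (dangling symbolic links, such as a CaTools.pm link left broken after an SVN update) could look like the sketch below; the top-level path is only an example.

import os

def broken_links(top):
    # Yield paths of symlinks under 'top' whose targets no longer exist.
    for root, dirs, files in os.walk(top):
        for name in files + dirs:
            path = os.path.join(root, name)
            if os.path.islink(path) and not os.path.exists(path):
                yield path

# Example path only -- point this at the relevant userapps checkout.
for path in broken_links("/opt/rtcds/userapps/release"):
    print("broken symlink:", path)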

This raises an FRS:

A perl module which is used by the watchdog systems has been deprecated. The watchdog system should be changed to no longer use PERL and instead use PYTHON (or perhaps BASH for exceptionally simple scripts).

Comments related to this report
david.barker@LIGO.ORG - 17:51, Thursday 14 January 2016 (24951)
stuart.aston@LIGO.ORG - 07:06, Friday 15 January 2016 (24962)CDS
We experienced a seemingly identical occurrence of this issue at LLO last Wednesday (see LLO aLOG entry 24156). However, as well as the SUS/SEI watchdog reset scripts, our initial alignment script was also affected, since it has Perl dependencies.

It is still unknown how the symbolic-link to CaTools.pm became broken at LLO, see #4180.
thomas.shaffer@LIGO.ORG - 15:22, Friday 15 January 2016 (24974)

Stuart, it was broken because I updated the same folder when I was visiting LLO. I am at fault for both of these CaTools.pm links being broken at both sites, though I had no idea that simply updating the SVN could cause this.

stuart.aston@LIGO.ORG - 15:42, Friday 15 January 2016 (24975)
Thanks for shedding light on this mystery! I would suspect that svn'ing up pushed the changes to deprecate the Perl module sooner than was intended.
H1 INJ (DetChar, INJ)
adam.mullavey@LIGO.ORG - posted 17:02, Thursday 14 January 2016 - last comment - 17:54, Thursday 14 January 2016(24949)
More Burst (and some CBC) Injections scheduled for tonight
Last night's burst injections were scheduled for times when H1 was down, so we're attempting to do these again. This time I've scheduled 5 lots of burst injections, the first lot starting at 22:00 CT (06:00 UTC), lots spaced 2 hours apart, with the last lot starting at 06:00 CT (14:00 UTC). Each injection is spaced 20 minutes apart.

We're also including a BNS CBC injection in between the 5 lots of burst injections. All up there will be 30 injections, one every 20 minutes starting at 22:00 CT (06:00 UTC) and ending at 07:40 CT (15:40 UTC).
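For reference, the schedule below just repeats a fixed pattern: groups of five burst injections followed by one BNS injection, everything spaced 1200 s (20 minutes) apart. A minimal sketch that reproduces the timing column (waveform names and type codes are copied from the schedule itself):

start_gps = 1136872817
bursts = ["burst_GPS_76.259_", "burst_GPS_76.262_", "burst_GPS_76.263_",
          "burst_GPS_76.264_", "burst_GPS_76.266_"]
cbc = "coherentbns1_1135135335_"

rows = []
for group in range(5):
    for name in bursts:
        rows.append((start_gps + 1200 * len(rows), 2, name))   # burst, type 2
    rows.append((start_gps + 1200 * len(rows), 1, cbc))        # BNS CBC, type 1

for gps, inj_type, name in rows:
    print(gps, inj_type, "1.0", name)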

I'll also mention that transient hardware injections only go ahead when the IFO is in observation mode, so they won't interfere with any PEM measurements that may be happening at the time. 

Here is the updated schedule:

1136872817 2 1.0 burst_GPS_76.259_
1136874017 2 1.0 burst_GPS_76.262_
1136875217 2 1.0 burst_GPS_76.263_
1136876417 2 1.0 burst_GPS_76.264_
1136877617 2 1.0 burst_GPS_76.266_
1136878817 1 1.0 coherentbns1_1135135335_
1136880017 2 1.0 burst_GPS_76.259_
1136881217 2 1.0 burst_GPS_76.262_
1136882417 2 1.0 burst_GPS_76.263_
1136883617 2 1.0 burst_GPS_76.264_
1136884817 2 1.0 burst_GPS_76.266_
1136886017 1 1.0 coherentbns1_1135135335_
1136887217 2 1.0 burst_GPS_76.259_
1136888417 2 1.0 burst_GPS_76.262_
1136889617 2 1.0 burst_GPS_76.263_
1136890817 2 1.0 burst_GPS_76.264_
1136892017 2 1.0 burst_GPS_76.266_
1136893217 1 1.0 coherentbns1_1135135335_
1136894417 2 1.0 burst_GPS_76.259_
1136895617 2 1.0 burst_GPS_76.262_
1136896817 2 1.0 burst_GPS_76.263_
1136898017 2 1.0 burst_GPS_76.264_
1136899217 2 1.0 burst_GPS_76.266_
1136900417 1 1.0 coherentbns1_1135135335_
1136901617 2 1.0 burst_GPS_76.259_
1136902817 2 1.0 burst_GPS_76.262_
1136904017 2 1.0 burst_GPS_76.263_
1136905217 2 1.0 burst_GPS_76.264_
1136906417 2 1.0 burst_GPS_76.266_
1136907617 1 1.0 coherentbns1_1135135335_
Comments related to this report
evan.hall@LIGO.ORG - 17:54, Thursday 14 January 2016 (24952)

06:00 UTC = 00:00 CT = 22:00 PT

LHO General
thomas.shaffer@LIGO.ORG - posted 16:29, Thursday 14 January 2016 (24945)
Ops Day Shift Summary

TITLE: 1/14 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC

STATE Of H1: Calibration ongoing

SHIFT SUMMARY: Calibration work for the majority of the day. Lockloss at 23:15 tripped the OMC watchdog. It would not untrip, so Dave had to restart the IOP, but this also tripped HAM5/6. Now SRM will not untrip. Dave and Jeff B are currently working to figure it out.

INCOMING OPERATOR: Jeff B

ACTIVITY LOG:

LHO General
thomas.shaffer@LIGO.ORG - posted 16:18, Thursday 14 January 2016 (24947)
Lockloss 23:15 UTC OMC WD Tripped

The OMC WD would not untrip; Dave had to restart IOPSUSH56 to clear it. Restarting this also tripped HAM5 and HAM6.

We are currently untripping everything.

H1 DAQ (CDS)
james.batch@LIGO.ORG - posted 15:25, Thursday 14 January 2016 - last comment - 08:36, Friday 15 January 2016(24946)
Trend Writer disk array failure
The SSD RAID for h1tw0 has failed, so h1tw0 is down until we can get it fixed.

FRS 4228
Comments related to this report
corey.gray@LIGO.ORG - 08:36, Friday 15 January 2016 (24967)

For operators, you will notice that h1tw0 will be WHITE-boxed on the DAQ Detail medm (which you should be checking every shift in the Ops Checksheet).

LHO FMCS
john.worden@LIGO.ORG - posted 13:56, Thursday 14 January 2016 (24944)
LVEA temperatures

I reduced one heater, HC1B, from 10 mA to 9 mA control current at 1:47 local time.

H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 11:53, Thursday 14 January 2016 (24943)
Initial Alignment Tools

I have revamped my initial alignment lazy script and made a new medm to help operators and commissioners bring up what they need a bit faster.

The Script:

Location: /userapps/.../isc/h1/scripts/Init_align_lazy.py

Action: This will bring up 5 StripTools (XARM_GREEN_WFS.stp, YARM_GREEN_WFS.stp, PITCH_ASC_INITIAL_ALIGNMENT.stp, YAW_ASC_INITIAL_ALIGNMENT.stp, initall_alignment.stp) as well as the new medm I made (INIT_ALIGN.adl). The script will arrange them on the left monitor in a 3x2 grid. They can then be moved if so desired. If any of these windows are open in another workspace, the window manager will get confused and move the other one, so keep that in mind if they aren't in their proper positions.

I have aliased this in for "ops" as 'initial_alignment'
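For anyone curious, a rough sketch of what such a launcher can look like is below. This is not the actual Init_align_lazy.py; the paths, the wmctrl-based window placement, and the assumption that window titles contain the config file names are all just illustrative.

import subprocess, time

STP_DIR = "/opt/rtcds/userapps/release/isc/h1/scripts"   # example path only
configs = ["XARM_GREEN_WFS.stp", "YARM_GREEN_WFS.stp",
           "PITCH_ASC_INITIAL_ALIGNMENT.stp", "YAW_ASC_INITIAL_ALIGNMENT.stp",
           "initall_alignment.stp"]

# Launch the five StripTools and the new MEDM screen.
for cfg in configs:
    subprocess.Popen(["StripTool", STP_DIR + "/" + cfg])
subprocess.Popen(["medm", "-x",
                  "/opt/rtcds/userapps/release/isc/h1/medm/INIT_ALIGN.adl"])

time.sleep(5)   # give the windows time to appear

# Tile the StripTools into a 3x2 grid on the left monitor (assumes window
# titles contain the config file name, which may not hold in practice).
width, height = 640, 500
for i, cfg in enumerate(configs):
    x, y = (i % 3) * width, (i // 3) * height
    subprocess.call(["wmctrl", "-r", cfg,
                     "-e", "0,%d,%d,%d,%d" % (x, y, width, height)])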

The medm:

Location: /userapps/.../isc/h1/medm/INIT_ALIGN.adl

Combines the ALIGN_IFO, X and Y ALS Guardians with the alignment sliders for ETMX, ETMY, ITMX, ITMY, BS, and PR3. These are the most commonly adjusted optics for IA.

I have not linked this medm off of the sitemap yet, but if/when I do it will most likely be under OPS.

 

Comments and questions are always welcome.

Images attached to this report
H1 SEI (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 11:24, Thursday 14 January 2016 (24942)
Changed the ISI ETMX X blend from 45 to 90

Jenne noticed the control room symptoms of the ETMX ISI starting to ring up (H1:IMC-F_OUT16 on our Tidal.stp will begin to oscillate [shot attached], along with the ASC control signals). I brought up the template to watch it and, sure enough, ISI ETMX X was ringing up. I switched the X direction blend to the 90 mHz one and it immediately settled down.
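As an illustration only (this is not how the StripTool templates work), the same symptom could be flagged from the data afterwards, e.g. by checking whether the RMS of H1:IMC-F_OUT16 is growing; a hedged sketch assuming gwpy with NDS access, with placeholder GPS times and an arbitrary threshold:

from gwpy.timeseries import TimeSeries

start, end = 1136850000, 1136851800                  # example GPS span
imcf = TimeSeries.fetch("H1:IMC-F_OUT16", start, end)

rms = imcf.detrend().rms(60)                         # 60 s RMS trend
if rms.value[-1] > 3 * rms.value[:5].mean():
    print("IMC-F RMS is growing -- the ETMX ISI blend may be ringing up")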

I'm attaching screen shots of the environment control room tools to hopefully help.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 08:04, Thursday 14 January 2016 (24941)
Ops Day Transition

TITLE: 1/14 Day Shift 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC

STATE Of H1: Observing at 78Mpc for 2hrs

OUTGOING OPERATOR: Travis

QUICK SUMMARY: wind <10 mph, useism 0.4 um/s, CWinj running, more calibration measurements to come today.

H1 General
travis.sadecki@LIGO.ORG - posted 08:00, Thursday 14 January 2016 (24940)
OPS Owl shift summary

Title: 1/14 Owl Shift 8:00-16:00 UTC (0:00-8:00 PST).  All times in UTC.

State of H1: Observing

Shift Summary:  Unlocked at the beginning of my shift.  After an IA, it locked up pretty quickly.  Then there was a lockloss due to an EQ.  After the EQ ringdown, we stumbled through the CARM section a few times before getting back to NLN.

Incoming operator: TJ

Activity log:

9:27 Locked at NLN

9:30 cleared ETMx TIM error

9:32 Observing

10:06 Set to Commissioning to tweak TMS in an attempt to get POP power higher (it wasn't drifting down, just not as high as usual, 15800).  Unsuccessful.

10:13 Observing

11:42 Lockloss.  EQ?

13:50 Locked NLN, attempting to tweak POP power again

14:08 Another unsuccessful POP attempt.  I did however manage to ring up the ASC loops like crazy!  Waiting for them to ring down.

14:21 Observing

H1 General
travis.sadecki@LIGO.ORG - posted 06:24, Thursday 14 January 2016 (24939)
Observing at 14:21 UTC

Finally back to Observing after a few failed attempts.

H1 General
travis.sadecki@LIGO.ORG - posted 03:47, Thursday 14 January 2016 (24938)
Lockloss 11:42 UTC

According to our seismometers, it appears to have been an EQ.  The only thing showing on Terramon or USGS at the moment is a 5.1 near Wallis and Futuna, with Terramon predicting 0.17 um/s R-wave.  However, our seismometers are already at ~1 um/s a few minutes before the predicted arrival time for this EQ.

H1 General
travis.sadecki@LIGO.ORG - posted 01:34, Thursday 14 January 2016 (24937)
Observing at 9:32 UTC

After running through an initial alignment, we are back to Observing.  The issue with PRM align mentioned in Jeff's summary seems to have been due to PRM alignment being far off (hundreds of urads).  I restored them to earlier values and it locked without issue. 
