Reports until 13:01, Saturday 22 April 2017
H1 General
thomas.shaffer@LIGO.ORG - posted 13:01, Saturday 22 April 2017 (35721)
Ops Mid shift report

The range dropped down to ~60 Mpc for a few hours but seems to be back up. The 4.7 kHz mode is high right now, but it is not growing. I am waiting for LLO to drop before I damp it.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:03, Saturday 22 April 2017 (35720)
Ops Day Shift Transition

TITLE: 04/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 64Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 19mph Gusts, 11mph 5min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY: Seems like a calm Saturday so far.

H1 General
travis.sadecki@LIGO.ORG - posted 08:00, Saturday 22 April 2017 (35719)
Ops Owl Shift Summary

TITLE: 04/22 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 62Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:  One lockloss of unknown cause early in the shift.  No issues relocking or otherwise.
LOG:  See previous aLogs.

H1 General
travis.sadecki@LIGO.ORG - posted 02:09, Saturday 22 April 2017 - last comment - 02:37, Saturday 22 April 2017(35717)
Lockloss 9:00 UTC

No readily apparent cause.

Comments related to this report
travis.sadecki@LIGO.ORG - 02:37, Saturday 22 April 2017 (35718)

Observing 9:36 UTC.

H1 General
travis.sadecki@LIGO.ORG - posted 01:16, Saturday 22 April 2017 (35716)
GRB alert 8:14 UTC
H1 General
travis.sadecki@LIGO.ORG - posted 00:06, Saturday 22 April 2017 (35715)
Ops Owl Shift Transition

TITLE: 04/22 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 60Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY:  No issues handed off.  Lock is 5.5 hours old.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 23:59, Friday 21 April 2017 (35714)
Ops Evening Shift Summary
Ops Shift Log: 04/21/2017, Evening Shift 23:00 – 07:00 (16:00 - 00:00) Time - UTC (PT)
State of H1: Locked at NLN, 31.0 W, and 61.4 Mpc of range
Intent Bit: Observing
Support: N/A
Incoming Operator: Travis

Shift Summary: Ran the A2L check script. Pitch and Yaw are both under the reference. Lost lock – reason unknown; at the time the range was moving around a bit. Relocked with no problems. Accepted the SDF diff for ETMY_L2_DAMP_MODE10_GAIN and went back to Observing. Had to damp PI Mode-27 and Mode-28 a couple of times.

Ran A2L check – after relock, Yaw is slightly elevated, Pitch is good.

Locked in Observing for 5.25 hours. After the earlier lockloss, the remaining shift was quiet with no apparent problems. A2L remains at or below the reference.    

 
   Activity Log: Time - UTC (PT)
23:00 (16:00) Take over from Nutsinee
01:11 (18:11) Lockloss – Unknown
01:39 (18:39) Damp PI Mode-28
01:41 (18:41) Damp PI Mode-27
01:42 (18:42) Relocked and Observing
01:43 (18:43) Damp PI Mode-28
01:43 (18:43) Damp PI Mode-27
07:00 (00:00) Turn over to Travis
H1 General
jeffrey.bartlett@LIGO.ORG - posted 20:12, Friday 21 April 2017 (35713)
Ops Evening Mid-Shift Summary
   After relocking the IFO has been behaving. The environmental and seismic conditions are benign. Range has improved (now at 62.3 Mpc) and is steady. Before the lockloss the range was in the mid to upper 50s and moving around. A2L Pitch is below the reference; Yaw is elevated at 0.6.
        
H1 General
jeffrey.bartlett@LIGO.ORG - posted 18:53, Friday 21 April 2017 (35712)
Accept SDF Diff After Relock
   After relocking, accepted the SDF diff for ETMY_L2_DAMP_MODE10_GAIN. The setpoint was 0.0; the EPICS value was 0.02. Back to Observing.
H1 DetChar (DetChar)
evan.goetz@LIGO.ORG - posted 16:26, Friday 21 April 2017 (35711)
Disconnecting ethernet adapter for the EY illuminator and PD doesn't affect lines
Analyzing the time period when we disconnected an ethernet adapter for the EY illuminator (see aLOG 35640) shows that this does not impact the lines observed in the VEA magnetometer between 20 Hz and 120 Hz. I attach normalized spectrograms of the magnetometer data; we do not see any indication of lines increasing or decreasing around the time that we disconnected the adapter. The affected time would have been 30 to 34 minutes into these spectrograms.

So it appears this ethernet adapter isn't too bad, at least from this short test.

Keep hunting lines...
Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:09, Friday 21 April 2017 (35710)
Ops Evening Shift Transition
Ops Shift Transition: 04/21/2017, Evening Shift 23:00 – 07:00 (16:00 - 00:00) - UTC (PT)
State of H1: IFO locked at NLN, 31.0W and 61.2 Mpc  
Intent Bit: Observing
Weather: Wind is a Light Breeze, Clear with temps in the mid 60s  
Primary 0.03 – 0.1Hz: At 0.01um/s 
Secondary 0.1 – 0.3Hz: At 0.1um/s   
Quick Summary:  Locked for 26.30 hours. All appears normal at this time.     
Outgoing Operator: Nutsinee
H1 DetChar (ISC)
heather.fong@LIGO.ORG - posted 15:59, Friday 21 April 2017 - last comment - 17:23, Wednesday 26 April 2017(35709)
Finding correlations between auxiliary channels and sensitivity range (potential problem in ITMY reaction mass)

[Heather Fong, Sheila Dwyer]

Over the last few months, Sheila and I have been trying to find correlations between auxiliary channels and LHO's sensitivity range. In order to do so, we first made changes to the OAF BLRMS range channels by adding notch filters such that they can track changes to the SENSMON range (see alog 33437). After we made these changes, the summed OAF BLRMS range contributions now have a linear relationship with the SENSMON range, with their units roughly calibrated to be Mpc.

I then wrote a Python script that does the following:

- Loads in desired auxiliary channel data (using NDS and GWpy) for a specified period of time (we analyze minute trends)
- Calculates the Pearson correlation coefficient (PCC) between auxiliary channels and the OAF BLRMS range channels in order to determine how linearly correlated the channels are
- Plots and saves the channels with the highest PCC (aux channels vs. OAF BLRMS range and aux channels vs. time)

Attached to this entry are examples of the aux channels vs. OAF BLRMS range for the time period of Feb 24 2017 to Apr 10 2017 (45 days, H1 observing mode only). The complete list of channels that were analyzed is attached under the file name 'BLRMS_channel_list.txt'. With the exception of the OAF channels, both mean and RMS channels were analyzed. For ~360 channels over a time range of 45 days, this script takes ~2 hours to complete, where the data retrieval from NDS is the bottleneck. The Python script has been uploaded to the LIGO git repository and can be found here:

git clone https://heather-fong@git.ligo.org/heather-fong/BLRMS-channels-correlations.git

where BLRMS_channels_analysis.py is the Python script, and BLRMS_channel_list.txt is an example of channels that can be analyzed.
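
The central PCC step of the analysis can be sketched as follows. This is a minimal numpy illustration, not the actual BLRMS_channels_analysis.py; synthetic arrays stand in for the minute trends that the real script fetches via NDS/GWpy, and the channel relationship shown is made up.

```python
import numpy as np

def pearson_cc(x, y):
    """Pearson correlation coefficient of two series, ignoring non-finite samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    good = np.isfinite(x) & np.isfinite(y)
    x, y = x[good], y[good]
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic stand-ins for minute trends: an "aux channel" that is
# linearly anti-correlated with a "range" series, plus noise.
rng = np.random.default_rng(0)
range_mpc = 60 + rng.normal(0, 2, size=1000)
aux_chan = -0.5 * range_mpc + rng.normal(0, 1, size=1000)

print("PCC =", round(pearson_cc(range_mpc, aux_chan), 2))
```

A strongly negative (or positive) PCC flags a channel whose trend moves linearly with the range, which is exactly the ranking criterion described above.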

We found the channels with the highest absolute PCC values (and therefore the most correlated with the range) to be the following (plots are attached):
H1:ASC-AS_B_RF36_Q_PIT_OUT16
H1:ASC-AS_B_RF36_Q_YAW_OUT16
H1:SUS-ITMY_R0_DAMP_Y_INMON

Other channels we analyzed that appear to be correlated with the range include:
H1:SUS-ITMX_M0_DAMP_R_INMON (max PCC = -0.5 for 38-60Hz range)
H1:SUS-ITMY_R0_DAMP_L_INMON (max PCC = 0.5 for 38-60 Hz range)

The results of this analysis give us hints as to which parts of the interferometer are affecting the sensitivity range. In particular, the results suggest that there are problems with the ITMY reaction mass that are not seen in the ITMX reaction mass, and we can, for example, try putting different offsets in ITMY to confirm this.

Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 17:23, Wednesday 26 April 2017 (35813)

During the commissioning window this morning I tried moving the ITMY reaction mass in yaw so that the DAMP Y INMON moved from about -75 to -86 several times, to see if there is any noticeable difference in the DARM spectrum. I didn't see anything changing in the spectrum. Attached is a time series of the yaw OSEM and the DARM BLRMS.

Images attached to this comment
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 15:58, Friday 21 April 2017 (35708)
Ops Day Shift Summary

TITLE: 04/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC

STATE of H1: Observing at 60Mpc

INCOMING OPERATOR: Jeff

SHIFT SUMMARY: Observed the entire shift. No issue to report.

LOG:

15:05 Karen to optics lab

15:22 Christina to MX

15:25 Karen driving to retrieve vacuum from receiving, then to MY

16:12 Karen leaving MY to VPW

16:18 Christina leaving MX

16:51 Karen out of VPW

18:07 Marc to MY

18:34 Karen to H2 electronics

18:39 Marc back

18:58 Karen driving to mechanical area

21:19 Gerardo to all the chiller yards. MX, EX, MY, EY

LHO VE
logbook/robot/script0.cds.ligo-wa.caltech.edu@LIGO.ORG - posted 12:10, Friday 21 April 2017 (35707)
CP3, CP4 Autofill 2017_04_21
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 47 seconds. TC B did not register fill. LLCV set back to 22.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 286 seconds. LLCV set back to 40.0% open.
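
The fill sequence the autofill robot logs above can be sketched roughly as follows. This is a hypothetical Python illustration with a stubbed-out control interface; the real script drives the cryopump LLCV and reads the thermocouples through EPICS, and every class and channel name here is made up.

```python
import time

# Stubbed cryopump interface -- illustrative only; the real system
# would use EPICS reads/writes, not this simulated class.
class CryoPump:
    def __init__(self, name, fill_time_s):
        self.name = name
        self.llcv_percent = 0.0
        self._fill_time_s = fill_time_s   # how long until LN2 reaches the TC
        self._opened_at = None

    def set_llcv(self, percent):
        self.llcv_percent = percent
        self._opened_at = time.monotonic()

    def tc_registers_fill(self):
        # The thermocouple "registers" once liquid reaches it (note that
        # CP3's TC B failed to register in the log entry above).
        return (self._opened_at is not None
                and time.monotonic() - self._opened_at >= self._fill_time_s)

def autofill(pump, fill_percent, nominal_percent, timeout_s=10.0):
    """Open the LLCV to fill_percent, wait for the TC to register the
    fill (or time out), then set the LLCV back to its nominal opening."""
    pump.set_llcv(fill_percent)
    t0 = time.monotonic()
    while not pump.tc_registers_fill():
        if time.monotonic() - t0 > timeout_s:
            break
        time.sleep(0.01)
    elapsed = time.monotonic() - t0
    pump.set_llcv(nominal_percent)
    return elapsed

cp4 = CryoPump("CP4", fill_time_s=0.1)
elapsed = autofill(cp4, fill_percent=70, nominal_percent=40.0)
print(f"{cp4.name} fill completed in {elapsed:.2f} s; LLCV set back to {cp4.llcv_percent}% open")
```

The key design point mirrored from the log: the valve is always restored to its nominal opening whether or not the thermocouple registered, so a failed TC (as with CP3's TC B) does not leave the LLCV wide open.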
Images attached to this report
H1 General
travis.sadecki@LIGO.ORG - posted 08:00, Friday 21 April 2017 (35704)
Ops Owl Shift Summary

TITLE: 04/21 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 62Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:  No issues other than the EY CRC error.  The OPS TeamSpeak computer was rebooted and will need a wifi password I don't have before it can reconnect.  Lock is currently 18.5 hours old.
LOG:  See previous aLogs.

H1 General
travis.sadecki@LIGO.ORG - posted 04:10, Friday 21 April 2017 (35703)
Ops Owl Mid-shift summary

No issues other than the EY CRC error.

H1 General (CAL, CDS)
travis.sadecki@LIGO.ORG - posted 03:11, Friday 21 April 2017 - last comment - 10:50, Friday 21 April 2017(35702)
EY CRC Error 9:51 UTC

Verbal Alarms announced an "EY CRC Error" at 9:51 UTC.  Following Dave's instructions, I successfully restarted the HofT calibration code.  GWIstat is now reporting that LHO is green and "OK+Intent", where before it was yellow and "HofT Bad".  The DMT SPI page never showed an issue with the calibration (it was always green).  I'll check the Detchar summary page when it refreshes to make sure it has greened up.

Comments related to this report
david.barker@LIGO.ORG - 10:50, Friday 21 April 2017 (35705)

Travis, Greg, Dave:

Many thanks to Travis for getting all this running again in the early hours of this morning. The mx_stream of data coming from h1susey was interrupted for 23 data blocks (@16 blocks per second, 1.44 seconds) at 09:51 UTC 4/21 (02:51 PDT). The DAQ made all SUS EY channels invalid for this period. The DMT calibration code (because of a code bug) latched all H1 HofT data as being invalid from this point onwards until Travis restarted the code on h1dmt0 and h1dmt1. Greg has confirmed that these restarts caused no problems downstream. The attached Det-Char summary plot shows the HofT latching to invalid (RED 'Calibration' and 'Observing' with a GREEN 'Obs intent'), and then shows the problem being resolved soon after.

Sunday's event was at 03:07 PDT, this morning's at 02:51 PDT

Images attached to this comment
H1 DetChar (DetChar)
evan.goetz@LIGO.ORG - posted 15:52, Tuesday 18 April 2017 - last comment - 06:11, Friday 28 April 2017(35640)
Turned off Pcal camera ethernet adapter -- maybe mitigates a ~1 Hz comb?
Evan G., Robert S.

Looking back at Keith R.'s aLOGs documenting changes happening on March 14 (see 35146, 35274, and 35328), we found that one cause seems to be the shuttering of the OpLev lasers on March 14. Right around this time, 17:00 UTC on March 14 at EY and 16:07 UTC at EX, there is an increase in line activity.

The correlated cause is Travis' visit to the end station to take images of the Pcal spot positions. The images are taken using the Pcal camera system and need the OpLevs to be shuttered so that a clean image can be taken without light contamination. We spoke with Travis and he explained that he disconnected the USB interface between the DSLR and the ethernet adapter, and used a laptop to directly take images. Around this time, the lines seem to get worse in the magnetometer channels (see, for example, the plots attached to Keith's aLOG 35328).

After establishing this connection, we went to the end stations to turn off the ethernet adapters for the Pcal cameras (the cameras are blocked anyway, so this active connection is not needed). I made some magnetometer spectra before and after this change (see attached). This shows that a number of lines in the magnetometers are reduced or are now down in the noise.

Hopefully this will mitigate some of the recent reports of combs in h(t). 

We also performed a short test turning off another ethernet adapter for the H1 illuminator and PD. This was turned off at 20:05:16 18/04/2014 UTC and turned back on at 20:09:56 UTC. I'll post another aLOG with this investigation as well.
Images attached to this report
Comments related to this report
keith.riles@LIGO.ORG - 13:46, Wednesday 19 April 2017 (35667)DetChar
Good work! That did a lot of good in DARM. Attached are spectra in which many narrow lines went 
away or were reduced (comparing 22 hours of FScan SFTs before the change (Apr 18) with 10 hours of
SFTs after the change (Apr 19)). We will need to collect much more data to verify that all of the 
degradation that began March 14 has been mitigated, but this first look is very promising - many thanks!

Fig 1: 20-50 Hz 
Fig 2: 50-100 Hz
Fig 3: 100-200 Hz
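
The kind of before/after line comparison described here can be sketched as follows: flag bins that stood well above the noise floor in the earlier averaged spectrum but not in the later one. The synthetic spectra and threshold below are illustrative stand-ins for the FScan SFT averages, not the actual FScan pipeline.

```python
import numpy as np

def vanished_lines(freqs, asd_before, asd_after, floor, thresh=5.0):
    """Frequencies of bins that stood above thresh*floor before the
    change but dropped back to the floor afterwards."""
    was_line = asd_before > thresh * floor
    still_line = asd_after > thresh * floor
    return freqs[was_line & ~still_line]

# Synthetic 20-50 Hz spectra: a 1-Hz comb at +0.25 Hz offsets present
# before the change and gone afterwards.
freqs = np.arange(20.0, 50.0, 0.05)
floor = 1e-23
asd_before = np.full_like(freqs, floor)
asd_after = np.full_like(freqs, floor)
for f in np.arange(20.25, 50.0, 1.0):
    asd_before[np.argmin(np.abs(freqs - f))] = 1e-22

gone = vanished_lines(freqs, asd_before, asd_after, floor)
print(len(gone), "teeth wiped out, starting at", round(float(gone[0]), 2), "Hz")
```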

Images attached to this comment
keith.riles@LIGO.ORG - 09:01, Thursday 20 April 2017 (35682)DetChar
Attached are post-change spectra using another 15 hours of FScan SFTs since yesterday. Things continue to look good.

Fig 1: 20-50 Hz 
Fig 2: 50-100 Hz
Fig 3: 100-200 Hz

Images attached to this comment
evan.goetz@LIGO.ORG - 11:54, Friday 21 April 2017 (35706)
Correction: the date is 18/04/2017 UTC.
keith.riles@LIGO.ORG - 19:28, Thursday 27 April 2017 (35826)DetChar
Another follow-up with more statistics. The mitigation from turning off
the ethernet adapter continues to be confirmed with greater certainty.
Figures 1-3 show spectra from pre-March 14 (1210 hours), a sample of 
post-March 14 data (242 hours) and post-April 18 (157 hours) 
for 20-50 Hz, 50-100 Hz and 100-200 Hz.

With enough post-April 18 statistics, one can also look more closely at
the difference between pre-March 14 and post-April 18. Figures 4-6
and 7-9 show such comparisons with different orderings and therefore
different overlays of the curves. It appears there are lines in the post-April 18
data that are stronger than in the pre-March 14 data and lines in the earlier 
data that are not present in the recent data. Most notably, 1-Hz combs
with +0.25-Hz and 0.50-Hz offsets from integers have disappeared.

Narrow low-frequency lines that are distinctly stronger in recent data include these frequencies:

21.4286 Hz
22.7882 Hz - splitting of 0.0468 Hz
27.4170 Hz 
28.214 Hz 
28.6100 Hz - PEM in O1
31.4127 Hz and 2nd harmonic at  62.8254 Hz
34.1840 Hz
34.909 Hz (absent in earlier data)
41.8833 Hz 
43.409 Hz (absent in earlier data)
43.919 Hz
45.579 Hz
46.9496 Hz
47.6833 Hz
56.9730 Hz
57.5889 Hz
66.7502 Hz (part of 1 Hz comb in O1)
68.3677 Hz
79.763 Hz
83.315 Hz
83.335 Hz
85.7139 Hz
85.8298 Hz
88.8895 Hz
91.158 Hz
93.8995 Hz
95.995 Hz (absent in earlier data)
107.1182 Hz 
114.000 Hz (absent in earlier data)

Narrow low-frequency lines in the earlier data that no longer appear include these frequencies:

20.25 Hz - 50.25 Hz (1-Hz comb wiped out!)
24.50 Hz - 62.50 Hz (1-Hz comb wiped out!)
29.1957 Hz 
29.969 Hz


Note that I'm not claiming change points occurred for the above lines on March 14 (as I did for the
original set of lines flagged) or on April 18. I'm merely noting a difference in average line strengths 
before March 14 vs after April 18. Change points could have occurred between March 14 and April 18,
shortly before March 14, or shortly after April 18. 
Images attached to this comment
keith.riles@LIGO.ORG - 06:11, Friday 28 April 2017 (35858)DetChar
To pin down more precisely when the two 1-Hz combs disappeared from DARM,
I checked Ansel's handy-dandy comb tracker and found the answer immediately.

The two attached figures (screen grabs) show the summed power in the teeth of those combs.
The 0.5-Hz offset comb is elevated before March 14, jumps up after March 14
and drops down to normal after April 18. The 0.25-Hz offset comb is highly
elevated before March 14, jumps way up after March 14 and drops down to normal
after April 18.

These plots raise the interesting question of what was done on April 18 that
went beyond the mitigation of the problems triggered on March 14. 

Figure 1 - Strength of 1-Hz comb (0.5-Hz offset) vs time (March 14 is day 547 after 9/15/2014, April 18 is day 582)

Figure 2 - Strength of 1-Hz comb (0.25-Hz offset) vs time 
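
The statistic plotted in these figures, the summed power in the teeth of a 1-Hz comb at a given offset, can be sketched as follows. This is a minimal numpy illustration on a synthetic spectrum, not Ansel's actual comb tracker; the bin tolerance and amplitudes are made up.

```python
import numpy as np

def comb_power(freqs, asd, spacing=1.0, offset=0.5,
               fmin=20.0, fmax=200.0, tol=0.01):
    """Sum the power (ASD squared) in the bin nearest each comb tooth
    f = n*spacing + offset lying within [fmin, fmax]."""
    n_lo = int(np.ceil((fmin - offset) / spacing))
    n_hi = int(np.floor((fmax - offset) / spacing))
    total = 0.0
    for n in range(n_lo, n_hi + 1):
        f = n * spacing + offset
        i = int(np.argmin(np.abs(freqs - f)))
        if abs(freqs[i] - f) <= tol:
            total += asd[i] ** 2
    return total

# Synthetic spectrum: a flat noise floor with a 1-Hz comb at +0.5 Hz offsets.
freqs = np.arange(20.0, 200.0, 0.005)
asd = np.full_like(freqs, 1e-23)
for f in np.arange(20.5, 200.0, 1.0):
    asd[np.argmin(np.abs(freqs - f))] = 1e-21

p_on = comb_power(freqs, asd, offset=0.5)    # comb present at this offset
p_off = comb_power(freqs, asd, offset=0.25)  # floor only at this offset
print("comb power ratio:", p_on / p_off)
```

Tracking this sum over time, as the comb tracker does, makes step changes like the March 14 jump and April 18 drop stand out immediately.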
Images attached to this comment