H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 12:48, Tuesday 08 September 2015 (21272)
PEM injections complete at LHO; report on magnetic coupling, site activity coupling, and preliminary report on other coupling

Summary: We completed all of the most important PEM injections and I think we are good to go. Ambient magnetic fields are unlikely to produce DARM noise at more than 1/10 of the DARM floor. There is high magnetic coupling at the EY satellite box rack. Coupling of self-inflicted magnetic fields in the CS ebay may keep us from reaching design sensitivity unless corrected. Most site activities will not show up in DARM, but moderate impacts in the control room/EE shop/Vacuum shop etc. can produce DARM signals. Any such events can, like signals from off-site, be vetoed by redundant vibration sensors. Although vibration coupling analysis is ongoing and not presented here, ambient vibrations produce noise at near DARM floor levels at HAM2, HAM6 and the PSL table, and dominate DARM around 300 Hz. Radio signals at 9 and 45 MHz would have to be at least 100 times background on the radio channels before they start to show in DARM. Some sensor issues are also discussed.

Introduction

Between Tuesday and Saturday, we made roughly 100 injections to measure the environmental coupling levels to the LHO interferometer. The table below shows the number of injection locations in each building.

Injection type                          CS locations   EX locations   EY locations
magnetic                                     5              3              3
acoustic                                     5              3              3
shaking                                      6              0              0
radio at modulation frequencies, etc.        1              -              -
site activities                             10              -              1

In most cases we attempted to inject from a great enough distance that the field levels at coupling sites would be about the same as they were at the sensors nearest to the coupling sites.  For magnetic and acoustic injections we split the potential coupling sites into regions. In the LVEA, these regions were usually the vertex, the ITM optical lever regions, the PSL, the input arm, the output arm and the electronics bay. At the end stations the regions were the VEAs and the electronics bays. For shaking, we selected particular chambers or sites that had been identified previously as coupling sites or potential coupling sites.

We started with analysis of magnetic coupling because of its importance to the stochastic GW search, and results are presented below. In addition, it is important to set up site rules for the run so analysis of site activity injections is also included here. Other coupling analyses are not yet ready, but preliminary observations are discussed below. 

Magnetic field coupling

Figure 1 summarizes results with single magnetic coupling functions for each station in meters of differential test mass motion per Tesla of magnetic field. In general, our coupling functions are applicable to signals originating at a distance from the coupling site that is large compared to the distance between the sensor and the coupling site. If the signal originates much closer to the coupling site than the sensor is, these coupling functions are likely to underestimate the resulting test mass motion. The magnetic coupling functions of Figure 1 are based on the maximum regional coupling observed at each station.  The highest of the 3 stations should be used for estimating Schumann resonance inter-site correlation.

The lower panel in Figure 1 shows S6 (iLIGO) magnetic coupling functions. At 20 Hz the highest coupling is about an order of magnitude lower than it was in iLIGO. At 100 Hz aLIGO is about 2 orders of magnitude better. This is mainly due to the lack of magnets on the test mass. The coupling functions are not as linear in a log-log plot as they were for initial LIGO, most likely because test mass magnet coupling is no longer dominant and coupling to electronics and cables makes significant contributions.

The much higher coupling at EY than at EX is likely due to coupling to cables and connectors in the satellite amp rack. We used a small coil to track the coupling to the satellite amp rack in the VEA. It was clear that the excess coupling was at this rack, and not, for example, at the rack next to it, but we could not find a specific coupling site within this rack. This is what would be expected for cable coupling and so we should probably check the cable shield grounding for the coil drive signals in this rack. However, even at this elevated coupling level, the estimated DARM noise for ambient field levels is more than 10 times lower than the O1 DARM floor.

Figure 2 shows an example of spectra from an injection. This injection focuses on the vertex, and uses a coil set up about 20m down the Y-manifold. The top plot shows magnetometer spectra and the bottom plot DARM spectra. The injection is a 6Hz ramp, producing a 6Hz comb in the red injection spectra (blue are no-injection). The amplitudes of the peaks produced in DARM are divided by the amplitudes of the peaks in the magnetometer signal to give coupling functions in m/T. Estimates of the noise contribution to DARM are made by multiplying the coupling functions by the local ambient background from the injection-free spectrum. These estimates thus assume linearity. While vibration coupling has usually been found to be linear, we have found that, in certain bands, acoustic coupling can be non-linear, in which case the estimates are upper limits. To minimize the overestimate, we try to inject with as little amplitude over ambient as possible. We have not observed non-linear magnetic coupling.
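The peak-ratio arithmetic described above can be sketched as follows. This is an illustrative reconstruction with made-up numbers, not the actual analysis code; in particular, the quadrature subtraction of the injection-free background is an assumption of this sketch.

```python
import numpy as np

def coupling_estimate(darm_inj, darm_bg, mag_inj, mag_bg):
    """Coupling function and ambient-noise projection at the comb peaks.

    All inputs are amplitude spectral densities sampled at the injection
    comb frequencies: DARM in m/rtHz, magnetometer in T/rtHz.  Linearity
    is assumed, as has been observed for magnetic coupling; the quadrature
    background subtraction is an assumption of this sketch.
    """
    darm_excess = np.sqrt(np.maximum(darm_inj**2 - darm_bg**2, 0.0))
    mag_excess = np.sqrt(np.maximum(mag_inj**2 - mag_bg**2, 0.0))
    coupling = darm_excess / mag_excess       # m/T
    ambient = coupling * mag_bg               # projected DARM noise, m/rtHz
    return coupling, ambient

# Toy numbers for a 6 Hz comb (not measured values):
n = 5
c, est = coupling_estimate(darm_inj=np.full(n, 1e-17), darm_bg=np.full(n, 1e-19),
                           mag_inj=np.full(n, 1e-9),  mag_bg=np.full(n, 1e-12))
# c ~ 1e-8 m/T; est ~ 1e-20 m/rtHz, to be compared against the DARM floor.
```

If the coupling is non-linear, as seen in some acoustic bands, the projected `ambient` values become upper limits, which is why injections are kept as close to ambient amplitude as possible.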

The estimate of the ambient magnetic contribution to DARM for the injection of Figure 2 is shown as black squares.  Also included in the figure are estimates for the other corner station injection zones. The closest that the estimates come to the O1 DARM noise floor is about a factor of ten below for coupling in the ebay at 12 Hz. The ebay does not have the highest coupling function, but it does have the highest ambient fields. Thus the ebay would not be the most sensitive place for Schumann resonance coupling, but, according to this estimate, coupling of self-inflicted fields in the ebay will keep us from reaching design sensitivity unless we make improvements.

Other environmental coupling

As with magnetic coupling, coupling of ambient RF (including self-inflicted) does not seem to be a problem. However, vibrational coupling noise reaches within a factor of a few of the DARM noise floor at a couple of locations, and in bands around 300 Hz dominates DARM. It is likely that our sensitivity to site activities is due to coupling through the HAM2 and 6 ISIs. We should be able to test this possibility with the coupling functions we measured for GS13s in the HAMs. In addition to HAMs, PSL table/periscope vibrations produce the peaks near 300 Hz via beam jitter, and perhaps contribute more broadly to the DARM floor.

No noise from pre-identified scattering sites

Scattering does not seem to be a problem at current sensitivities. We installed shakers at sites that had been identified as potential scattering coupling sites through photographs taken from the points of view of the test masses and other optics (here).  We mounted shakers at the GV8 valve seat, the input mode cleaner beam tube, the signal recycling cavity beam tube and the BSC2 chamber walls (connected to the TCS mirror holders), all of which had been identified in photos as potential sites (here). Increasing motion at these sites by 2 orders of magnitude did not produce any visible features in DARM.

Sensor problems

All identified coupling sites were well covered with sensors, but we found a few sensor problems. The most important of these are problems with the VEA magnetometers at the end stations. These are essential PEM sensors since there is no redundancy. These magnetometers did detect our injected magnetic fields, but the signal conditioning boxes appear to be producing approximately 1 Hz glitches that effectively raise the sensor noise floor by one to two orders of magnitude. We think that it is the conditioning boxes because we swapped out the filter boxes, and a separate magnetometer that we set up nearby did not see these glitches. We were unable to replace the boxes because we found no working spares. If we cannot replace these before the run, I suggest swapping them with the e-bay magnetometer setups in the end stations, where there is redundancy.

A more minor problem is that the axes of the input arm magnetometer and the two VEA magnetometers appear to be incorrect. I swapped the axes on the input magnetometer, but did not get to the VEA magnetometers. These should be checked.

Incomplete injections

We ran out of time and were not able to make shaker injections at the end and mid-stations. Perhaps we can do these during the run using the commissioning budget.

Site activities

We should expect coupling of site activities, since vibration coupling levels for ambient, non-transient vibrations are near or at the DARM noise floor. Figure 3 has tables showing all of the site activity injections, with the ones that showed up in DARM marked in red, and a link to Nutsinee’s spectrogram analyses.

The things that showed up in DARM:

·       large super ball dropped in control room from about 5 feet

·       hammer dropped from 4 feet in vacuum lab next to EE shop

·       jumping in LVEA changing area

·       jumping in control room

·       setting car battery down in OSB shipping area

·       sudden braking of silver van (things fly off seat) in high bay area

 

Some of the things that didn’t show up in DARM:

·       sudden braking (things fly off seat) in most other areas,  including just outside of EY

·       car horns

·       crowds walking in control room or halls (no jumping)

·       loud music in control room

·       rolling in chair across control room

·       bouncing on seating/exercise ball in control room

·       airlock and external door actuation

·       slamming doors in office area

·       outer roll up door actuation, OSB shipping

·       quick human movements in the OSB optics lab

In summary, sudden impacts in the control room, EE shop, vacuum lab and all other areas on the lab side of the air lock may cause events in DARM. This includes jumping, dropping heavy (>1 lb) objects and quickly setting down heavy packages. I don’t think that we need to dramatically alter our usual activities because of this observation; we just need to be careful. I say this because we can veto all of these events with the highly redundant PEM and SEI vibration monitoring systems, just as we can veto signals from off-site. But since impacts in the control room/lab/shop area can produce DARM events, we should be mindful of minimizing them.

We see no evidence that a drive down the arms and a trip into the mid/end stations will be more likely to produce events in DARM than control room activities. Tour crowds in the control room are also unlikely to produce DARM events, unless they jump around or knock chairs over. Just as for our own activities, we would rely on the sensor systems to veto any events that these tours produced.

We should be able to identify the coupling sites for these site activities once we have coupling functions for vibrational signals (stay tuned). In the meantime, I would speculate that the coupling sites are mainly HAM2 and 6, up through the ISI suspension, and, possibly, increased vibration of the PSL table.

 

Robert Schofield, Anamaria Effler, Jordan Palamos, Nutsinee Kijbunchoo, Katie Banowetz with help from crowds of others.

Non-image files attached to this report
H1 ISC
stefan.ballmer@LIGO.ORG - posted 12:19, Tuesday 08 September 2015 (21297)
ODC housekeeping
Updated the following ODC settings:
ASC: DHARD input range check limit from 1500cts to 2500 to avoid occasional crossing.
ASC: INP1  input range check limit from 1.5cts to 5 to avoid occasional crossing.
OMC: include bit 17 & 18 in subsystem mask (this is effectively an OMC DCPD saturation monitor).
OMC: DARM CTRL input range check limit from 120000cts to 150000 to avoid occasional crossing.
LSC, OMC, ASC: exclude ADC saturation bit from ODCmaster ADC saturation monitor:
     for ASC the CDS saturation monitor is stuck on for unknown reasons
     for LSC and OMC, the parked ALS inputs saturate the ADC constantly. 
TCS: exclude DAC saturation bit from ODCmaster DAC saturation monitor
     for TCS, the DAC saturation monitor is reporting on constantly for two channels - not sure why.
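The include/exclude operations above are ordinary bit-mask bookkeeping; a generic sketch (the bit assignments and helper names here are illustrative, not the real ODC EPICS records):

```python
# Illustrative bit-mask bookkeeping, in the spirit of the ODC subsystem
# masks above.  Bit numbers are examples only.
def include_bits(mask, bits):
    """Set the given bit positions in the mask."""
    for b in bits:
        mask |= (1 << b)
    return mask

def exclude_bits(mask, bits):
    """Clear the given bit positions in the mask."""
    for b in bits:
        mask &= ~(1 << b)
    return mask

mask = 0
mask = include_bits(mask, [17, 18])   # e.g. include two saturation bits
mask = exclude_bits(mask, [17])       # e.g. later exclude a stuck monitor
```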

All setting changes have been accepted in SDF.

Left to do: there are some SUS filter status checks that changed their nominal state since I last updated them in August. I can only update them once the IFO is locked.
H1 SUS (ISC, SUS)
sheila.dwyer@LIGO.ORG - posted 12:12, Tuesday 08 September 2015 - last comment - 21:43, Tuesday 08 September 2015(21295)
PRM coil driver switching causing locklosses

We have large glitches when we switch the coil driver state for PRM in every example that I've looked at.  This causes locklosses, about 5 over the weekend.  

One thing that we can do to make this less painful is to move the switch earlier in the locking process (for example, we can try doing this right after DRMI locks rather than waiting until DRMI on POP).  

A real solution might be to change the front end model so that we can switch each coil separately.   

We could try tuning the delay, as described for the LLO BS in alog 16295.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 21:43, Tuesday 08 September 2015 (21321)

We didn't get a chance to look at the delay, but we have made a few guardian changes that should help mitigate this situation.  

We are now increasing the PRM and SRM offloading after we transition to 3F, before starting the CARM offset reduction.  This gives us a bit more headroom when we switch the coil driver.  We also moved the coil driver switching earlier (it is now in ISC_DRMI in the LOCKED_3F state) so that we don't waste as much time if this breaks the lock.  

H1 PEM
jordan.palamos@LIGO.ORG - posted 12:09, Tuesday 08 September 2015 (21292)
End station magnetometers swapped

Jordan Palamos, Vinny Roma

We swapped some PEM magnetometer power supplies at both end stations to fix the 1 Hz glitches mentioned by Robert in his alog 21272.  Since we don't have enough working boxes at the moment, both EY and EX have one of their electronics bay magnetometers disconnected. Specifically, EX_MAG_SUSRACK and EY_MAG_SEIRACK (all axes) are disconnected.

Each VEA magnetometer was using a 'new style' power supply / signal conditioning box that was causing the big glitches (these glitches seemed to go away when the box was unplugged and running on batteries; we are not sure what the battery life is, but it would probably be infeasible to run that way). At both end stations we replaced these boxes with ones taken from the electronics bay (old style). After swapping, the bad glitches disappeared and the noise floors look much better. The glitchy boxes are now in the EE room.

H1 DAQ (CDS)
james.batch@LIGO.ORG - posted 11:53, Tuesday 08 September 2015 (21293)
h1fw0, h1fw1 replaced with new computers.
ECR 1500312, WP 5455

The h1fw0 and h1fw1 computers have been replaced with new computers, which were formerly tested as h1fw3 and h1fw2.  The old h1fw0 and h1fw1 computers have been renamed h1tw0 and h1tw1, and have been reconfigured to write raw minute files to the locally attached SSD RAID.

Both h1fw0 and h1fw1 are now writing science, commissioning, minute trend, and second trend files to SATABoy disk arrays through the ldas gateway computers.

The myricom drivers still need to be updated and configured on h1tw0, h1tw1, h1nds0, h1nds1, and h1broadcast0.
H1 CDS
james.batch@LIGO.ORG - posted 08:44, Tuesday 08 September 2015 (21291)
MSR UPS Bypass switch replaced
The UPS in the MSR is back in service.  The bypass switch which failed a couple of weeks ago has been replaced.  No power glitches occurred, systems are running normally.
H1 CDS
patrick.thomas@LIGO.ORG - posted 08:38, Tuesday 08 September 2015 (21289)
restarted Conlog
Sep  8 03:23:35 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Out of range value for column 'value' at row 1: Error code: 1264: SQLState: 22003: Exiting.

Also took the opportunity to reconfigure the replica to allow connections from other hosts. (WP 5481)
H1 AOS
travis.sadecki@LIGO.ORG - posted 08:00, Tuesday 08 September 2015 (21288)
OPS Owl shift summary

Once the series of EQs ceased (2+ hour ringdown), I began the locking procedure.  Alignment looked good so I decided to forego initial alignment and move straight to locking.  A bit of tweaking of the BS in PRMI was all that was required to get the IFO locked, after a few false starts with locklosses at various places on the way up.

10:35 lockloss @ DC_READOUT, ITMx, SR2, ITMy, MC2, and SRM saturated

10:51 lockloss @ INCREASE_POWER, ITMx saturated

11:06 lockloss @ SWITCH_TO_QPDS, PRM saturated

11:13 lockloss @ SWITCH_TO_QPDS, PRM saturated

11:21 lockloss @ DRMI_ON_POP, all TMs and PRM saturated

11:36 locked NOMINAL_LOW_NOISE 70+ Mpc

11:43 set to Observing mode after engaging OMC whitening

12:27 OMC DCPD and ETMy saturation cause large momentary glitch/drop in range

13:45 Bubba starts collecting grouting equipment

14:06 ETMy saturation causes large momentary glitch/drop in range

H1 CAL (CAL)
darkhan.tuyenbayev@LIGO.ORG - posted 01:34, Tuesday 08 September 2015 (21283)
H1 SUS ETMY UIM coil driver electronics analysis

Jeffrey K, Kiwamu I, Darkhan T

Overview

In this alog we present a summary of the analysis of the H1 SUS ETMY UIM coil driver electronics measurements (LHO alog 20846). The UIM coil driver consists of three switchable low-pass filters and a static high-pass filter (DCC D070481). According to the UIM driver state machine diagram (DCC T1100507), each of the switchable filters was designed to have a zero at 10.5 Hz and a pole at 1 Hz. The non-switchable filter was designed to have a zero at 50 Hz and a pole at 300 Hz.
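The designed ( 10.5 : 1.0 ) Hz stage response can be sketched numerically; this is a toy model for orientation, not the LISO fit, and the unity-DC-gain normalization is an assumption:

```python
import numpy as np
from scipy import signal

# One switchable stage: zero at 10.5 Hz, pole at 1.0 Hz,
# normalized to unity gain at DC (the normalization is an assumption).
f_z, f_p = 10.5, 1.0
b, a = signal.zpk2tf([-2 * np.pi * f_z], [-2 * np.pi * f_p], f_p / f_z)

f = np.logspace(-1, 2, 400)                  # 0.1 Hz to 100 Hz
_, H = signal.freqs(b, a, worN=2 * np.pi * f)
mag = np.abs(H)
# Unity below the 1 Hz pole, rolling off between the pole and the zero,
# and flat again at f_p / f_z ~ 0.095 above 10.5 Hz: a low-pass shelf,
# consistent with "switchable low-pass filters" above.
```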

The fitted zeros of all of the UIM driver low-pass filters were mostly within +/- 0.2 Hz, and the poles within 0.02 Hz, of the designed ( 10.5 : 1.0 ) Hz values, with uncertainties mostly under 0.3% (see plots below). For the individual uncertainties in each of the fit results, see tables 1-3.

In this analysis it was not possible to check the accuracy of the ( 50 : 300 ) Hz pair, because the FAST I MONs used for measuring the coil driver transfer functions are immune to the effect of this filter; see LHO alog 21142 for more details. This means that the UIM driver TF in state 1 should be flat; attachment 4 shows that the TF is mostly flat at low frequencies, but we still see something that looks like a high-frequency pole.

Another thing to mention is that we see an unexplained high-frequency feature in UL quadrant measurement that does not appear in LL, UR and LR quadrants.

Details

Similarly to what was explained in our previous alog 21232, the measured transfer functions between excitation and readout signals, apart from driver itself, include also frequency dependent effects from IOP upsampling, digital anti-imaging, analog anti-imaging, and analog anti-aliasing filters shown in the attached diagram (also explained in LHO alog comment 21127).

LP1

The LP1 TF was isolated from all of the other frequency dependencies by taking the ratio of the State 2 TF to the State 1 TF. Results of LISO fitting one zero and one pole to the measured TF in the range [0.1, 100.0] Hz are given in table 1. The fitted (model) zero-pole TF vs. the measurement, and the residuals, are shown in the plot under the table.

 Table 1. H1:SUSETMY-UIM driver LP1 fit details 
=================================================
               Fitted LP1 and fit uncertainty  
                        ( z : p ) [Hz]         
-------------------------------------------------
 UL     ( 10.52 +/-  0.06 % : 0.98 +/-  0.06 % )
 LL     ( 10.61 +/-  0.06 % : 0.99 +/-  0.06 % )
 UR     ( 10.34 +/-  0.05 % : 0.96 +/-  0.05 % )
 LR     ( 10.48 +/-  0.06 % : 0.98 +/-  0.07 % )
=================================================

LP2

Similarly to LP1, the LP2 TF was isolated by taking the ratio of the State 3 TF to the State 2 TF. Results of LISO fitting one zero and one pole to the measured TF in the range [0.1, 100.0] Hz are given in table 2. The fitted (model) zero-pole TF vs. the measurement, and the residuals, are shown in the plot under the table.

 Table 2. H1:SUSETMY-UIM driver LP2 fit details 
=================================================
               Fitted LP2 and fit uncertainty  
                        ( z : p ) [Hz]         
-------------------------------------------------
 UL     ( 10.50 +/-  0.14 % : 1.02 +/-  0.14 % )
 LL     ( 10.43 +/-  0.12 % : 1.01 +/-  0.12 % )
 UR     ( 10.56 +/-  0.12 % : 1.02 +/-  0.12 % )
 LR     ( 10.55 +/-  0.13 % : 1.02 +/-  0.14 % )
=================================================

LP3

Similarly to LP1 and LP2, the LP3 TF was isolated by taking the ratio of the State 4 TF to the State 3 TF. Results of LISO fitting one zero and one pole to the measured TF in the range [0.1, 100.0] Hz are given in table 3. The fitted (model) zero-pole TF vs. the measurement, and the residuals, are shown in the plot under the table.

 Table 3. H1:SUSETMY-UIM driver LP3 fit details 
=================================================
               Fitted LP3 and fit uncertainty  
                        ( z : p ) [Hz]         
-------------------------------------------------
 UL     ( 10.36 +/-  0.33 % : 0.97 +/-  0.34 % )
 LL     ( 10.66 +/-  0.27 % : 0.99 +/-  0.27 % )
 UR     ( 10.60 +/-  0.26 % : 0.99 +/-  0.27 % )
 LR     ( 10.21 +/-  0.36 % : 0.97 +/-  0.37 % )
=================================================

Scripts and Plots

Measurement parameters were committed to calibration SVN:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/Electronics/H1SUSETMY_UIMDriver_$(state)_param_$(gps_time).m

The script that loads all of the measurement parameters, calls the fitting function and produces the plots was committed to the calibration SVN:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/Electronics/runFit_H1SUSETMY_UIMdriver.m

This script uses the same Matlab functions that were used for PUM coil driver electronics analysis in the same SVN directory.

Plots were committed to the calibration SVN under following names:

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Results/Electronics/2015-09-07_H1SUSETMY_UIMDriver_$(short_description).pdf

CalSVN/aligocalibration/trunk/Runs/ER8/H1/Results/Electronics/2015-09-07_H1SUSETMY_UIMDriver_$(short_description).png

Images attached to this report
H1 General
travis.sadecki@LIGO.ORG - posted 00:42, Tuesday 08 September 2015 - last comment - 02:22, Tuesday 08 September 2015(21282)
Lockloss due to another EQ

Another EQ in New Zealand got us (and LLO).  We'll see how long it takes to ring down.

Comments related to this report
travis.sadecki@LIGO.ORG - 01:36, Tuesday 08 September 2015 (21284)

Add 2 more EQs to that.  5.5 in Mexico and 3.9 in Oregon.  Seismos are rung up higher than they were for the 6.4 in New Zealand that took us down last night.  Could be a bumpy night.

travis.sadecki@LIGO.ORG - 01:47, Tuesday 08 September 2015 (21285)

And another 5.5 in New Zealand.

travis.sadecki@LIGO.ORG - 02:22, Tuesday 08 September 2015 (21286)

And another 4.6 in Mexico.  Yep, that's 5 EQs since my shift started 2 hours ago.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:00, Tuesday 08 September 2015 (21281)
Evening Ops Summary
LVEA: Laser Hazard
IFO: Locked
Intent Bit: Observing  

All Times in UTC (PT)

23:00 (16:00) Take over from Ed
23:00 (16:00) IFO Locked at NOMINAL_LOW_NOISE, 23W @70Mpc
23:18 (16:18) Commissioners commissioning work while LLO is recovering
23:25 (16:25) Kiwamu – Loading foton files for calibration 
23:37 (16:37) Lockloss - 
23:42 (16:42) Stop at LOCK_DRMI_1F for commissioning work
23:48 (16:48) Lockloss - 
00:00 (17:00) Stop at DRMI_LOCKED for commissioning work
00:05 (17:05) Lockloss – 
01:09 (18:09) Stop at DRMI_LOCKED for commissioning work
02:12 (19:12) Locked at NOMINAL_LOW_NOISE, 22.9W, 74Mpc
02:16 (19:16) Set intent to Observing Mode
07:00 Turnover to Travis



Shift Summary & Observations:

Took over from Ed at 23:00 (16:00). IFO has just relocked at the shift change. Intent bit set to Observing. LLO is down, working on relocking problem. 

Commissioners working on IFO while LLO is down 

Saturations of ETMX at 00:01, 00:02, 00:03 while at DRMI_LOCKED

00:45 (17:45) Reset fiber polarization for Y-Arm to get ALS to lock

02:15 (19:15) 4.5 mag earthquake in Hawthorne, NV – Did not lose lock 
02:50 (19:50) 5.2 mag earthquake near L’Esperance Rock, New Zealand – Did not lose lock
Smooth shift with good locking range. 

 
 

  

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 23:04, Monday 07 September 2015 (21280)
Reference ER8 / O1 Actuation Strength Report and ER8 Matlab Model Virtually Complete
J. Kissel, K. Izumi, D. Tuyenbayev, S. Karki, C. Cahillane

We've completed all the to-do list items for comparing all three methods of measuring actuation strength (listed in LHO aLOG 21015). This means that the ER8 / O1 DARM model is virtually complete. Now we just need to compare the full model against measured DARM OLGTFs to confirm that no high-frequency systematics remain in either the actuation or sensing function; then we can declare victory on the frequency-domain model side of things. Once victorious there, we
- Update the CAL-CS front-end model to match the low-frequency content of matlab model
- Update the GDS pipeline to match the high-frequency content of the matlab model
- Generate an inverse actuation filter, and install it in the CAL-CS bank
We hope to complete these items within the next few days.

Results on the Actuation Strength of ETMY
-----------------------------------------
Though there still remain some unexplained systematics, we are confident enough in the PCAL results that we've chosen to use only the PCAL to determine the actuation strength to high precision. The other two methods, ALS DIFF and Free-Swinging Michelson, though less precise, confirm the accuracy within their statistical uncertainty (though a rigorous statistical comparison was not done). The results are as follows:

    'Optic'      'Weighted Mean'    '1-sigma Uncertainty'    '1-sigma Uncertainty'
    'Stage'      '[m/ct]'           '[m/ct]'                 '%'                  
    'ETMY L1'    '5.15e-11'         '2e-12'                  '3.9'                
    'ETMY L2'    '7.3e-13'          '5.6e-16'                '0.076'              
    'ETMY L3'    '1.11e-14'         '1.1e-17'                '0.096' 


Discussion Against ER7, Expectations, & Alternate Displays of Above
-------------------------------------------------------------------
ETMY L1 and L2 are 0.5% and 4.5% larger than what was used during ER7 (see LHO aLOG 18767), as expected, because we do not expect the actuation strength itself to change. The larger percent change on L2 is almost certainly because we've greatly refined our knowledge of the actuation chain electronics. However, both numbers still remain very consistent with models of the transconductance of the coil drivers (the ER8 model uses the canonical values from G1100968) and of the actuation strength of the A/BOSEM coil/magnet system (1.74 [N/A] and 0.0333 [N/A] are the fit values for the UIM and PUM used in this model, as opposed to the canonical 1.694 and 0.0309 [N/A], originally from T1000164).

ETMY L3 is 45% stronger than it was prior to ER7 (or 82%, depending on whether you choose the pre-ER7 result or this measurement as the reference; again, see LHO aLOG 18767), we believe simply because the test mass has been discharged. For those who like the numbers reported in more "fundamental" units, the ESD strength has changed from 7.96e-11 to 1.55e-10 [N/V^2].

ETMY L1's uncertainty is so much larger than L2 and L3 because a relatively huge, frequency-dependent systematic still remains in the data. Indeed, if we believe the measurements (and ALL methods show it) the UIM has a rather unsettling right-half-plane zero at around 100 [Hz].
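As an aside on why a right-half-plane zero is "unsettling": it has the rising magnitude of an ordinary zero but the phase lag of a pole, so it costs phase margin in any loop that relies on this stage, and a magnitude-only fit cannot distinguish the two. A toy illustration (not the measured UIM data):

```python
import numpy as np

def zero_response(f, f0, rhp=False):
    """Unity-DC-gain real zero at f0 Hz: H = 1 + s/w0 (LHP) or 1 - s/w0 (RHP)."""
    jf = 1j * np.asarray(f) / f0
    return 1 - jf if rhp else 1 + jf

f = np.array([10.0, 100.0, 1000.0])
lhp = zero_response(f, 100.0)               # ordinary (minimum-phase) zero
rhp = zero_response(f, 100.0, rhp=True)     # right-half-plane zero

# Magnitudes are identical, but at the zero frequency the LHP zero
# leads by +45 deg while the RHP zero lags by -45 deg.
mag_equal = np.allclose(np.abs(lhp), np.abs(rhp))
phase_lhp = np.degrees(np.angle(lhp[1]))    # +45
phase_rhp = np.degrees(np.angle(rhp[1]))    # -45
```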

These numbers, in the form of [N/ct], will be added to the CAL-CS model within the next day or so,
    'Optic'      'Weighted Mean'    '1-sigma Uncertainty'    '1-sigma Uncertainty'
    'Stage'      '[N/ct]'           '[N/ct]'                 '%'                  
    'ETMY L1'    '8.17e-08'         '3.2e-09'                '3.9'                
    'ETMY L2'    '6.82e-10'         '5.2e-13'                '0.076'              
    'ETMY L3'    '4.24e-12'         '4.1e-15'                '0.096'

Details & Plots
-----------------
I attached several sets of plots, one set comparing all three calibration methods against the model and each other for each stage of actuation (*_AllMethods.pdf) for all three days of measurement, and the other set comparing all three days of PCAL on one plot for each stage. It is to the latter combined data set that we fit the model and form the uncertainty estimations based on the residuals between that fit and the data. 

The model lives here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/DARMOLGTFs/
H1DARMOLGTFmodel_ER8.m
with the parameter set
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/DARMOLGTFs/
H1DARMparams_1124827626.m

The comparison script
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/
compare_actcoeffs_ER8.m

uses Kiwamu's recently functionalized
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/
PCAL/analyze_pcal.m
ALSDiff/Matlab/analyze_alsdiff.m
FreeSwingMich/analyze_mich_freeswinging.m

to "quickly" analyze all of last week's data and to compare it against the same model.

One can immediately see that the UIM is the outlier in terms of gross systematics here. I've been trying to chase down the high-frequency discrepancy for days, but in the interest of time, we must move on. Unfortunately, the pre-ER7 UIM measurements were only taken up to the 7 [Hz] upper limit of the FSM method, so we cannot say whether this feature has always been there. Thankfully, because the UIM is well rolled off by 100 [Hz] with the hierarchical control filters, even this nasty a systematic should not impact the DARM calibration above 10 [Hz], given that the UIM / PUM cross-over frequency is roughly 2 [Hz] (we will confirm this more precisely with our model, but it has been confirmed via measurement in LHO aLOG 20941). 

We are using updated electronics chain information for the PUM and TST, based on Darkhan's and my work earlier last week (see LHO aLOGs 21232 and 21189), and this has cleaned up the results greatly from when the data was previously, individually processed in LHO aLOGs 21049 and 21015.


Non-image files attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:06, Monday 07 September 2015 - last comment - 21:05, Wednesday 09 September 2015(21276)
a little commissioning time spent on recycling mirror offloading

Evan, Sheila, Jeff B Ed

Earlier today we were knocked out of lock by a 5.2 in New Zealand, the kind of thing that we would like to be able to ride out.  The lockloss plot is attached; we saturated M3 of the SRM before the lockloss, and PRM M3 was also close to saturating.  

While LLO was down, we spent a little time on the offloading, basically the changes described in alog 21084.  This offloading scheme worked fine in full lock for PRM; however, we ran into trouble using it during the acquisition sequence.  Twice we lost lock on the transition to DRMI, and twice we lost lock when the PRM coil state switched in DRMI on POP.  However, we can acquire lock with the new filter in the top stage of PRM and SRM, but the old low gain (-0.02).  We've been able to turn the gain up by a factor of 2 in full lock twice, so I've left the guardian so that it will turn up the gain in M2 (before the integrator) in the noise tuning step. 

If anyone decides they need to undo this change overnight, they can comment out lines 344-347 and  2508-2514 of the ISC_LOCK guardian. 

Before we started this, the PRM top mass damping drive was 10000 cnts rms at frequencies above a few hundred Hz, because of problems in the OSEMs (alog 21060).  Evan put low passes at 200 Hz in the RT and SD OSEMINF filters, which reduces this to 2000 cnts rms.  Jeff B accepted this change in SDF.  

The second attached screenshot shows the PRM drives; the references are from the minutes before the earthquake dropped us out of lock.  The red and blue curves show the current drives, with the high-frequency reduction in M1 due to Evan's low pass, and the new offloading on.  The last attached screenshot shows the SRM drives with the new offloading.  

Images attached to this report
Comments related to this report
sebastien.biscans@LIGO.ORG - 08:42, Wednesday 09 September 2015 (21333)

I don't think the 5.2 EQ is the cause of the lockloss.

According to your plot, the lockloss happened on Sep 08 at 00:31:07 UTC. The 5.2 EQ happened on Sep 07 at 20:24:56.84 UTC and hit the site at 20:38:21 UTC according to Seismon (so about 4 hours earlier). The BLRMS plot confirms that statement (see attachment).

Around loss time, the ground seems as quiet as usual.
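The roughly-four-hour gap quoted above is easy to verify from the timestamps alone; a quick check (times taken directly from the comment):

```python
# Sanity check of the timeline: time from the EQ's arrival at the site
# (per Seismon) to the lockloss, using the UTC timestamps quoted above.
from datetime import datetime

eq_hit   = datetime(2015, 9, 7, 20, 38, 21)  # UTC, Seismon arrival time
lockloss = datetime(2015, 9, 8, 0, 31, 7)    # UTC, from the lockloss plot

gap_hours = (lockloss - eq_hit).total_seconds() / 3600.0
# gap_hours ~ 3.9, i.e. the lockloss came about 4 hours after the EQ hit
```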

Images attached to this comment
sheila.dwyer@LIGO.ORG - 21:05, Wednesday 09 September 2015 (21354)

I was mistaken in identifying the earthquake, but the ground motion did increase slightly, which seems to be what caused the lockloss.  

Images attached to this comment
H1 DetChar
paul.altin@LIGO.ORG - posted 20:03, Monday 07 September 2015 - last comment - 06:37, Tuesday 08 September 2015(21277)
DQ shift summary: LHO Sep 3 - 6 (1125273617 - 1125619217)

There were six separate locks during this shift, including a new record lock stretch of 25 hours. Typical inspiral range was ~ 75 Mpc. At least two locklosses were caused by earthquakes. Total observation time 57 hours (duty cycle 59%).

Very loud (SNR ~ 1e3) glitches associated with range drops and ETMY saturations continue (roughly 40 during this shift). The correlation between glitches and saturations was investigated; we found that every Omicron trigger with SNR > 1000 (and most with SNR 100 – 1000) was simultaneous with a range drop and an ETMY saturation. We are considering the possibility that the 'dust glitches' and the ETMY saturations are actually the same thing. (More detailed alog on this coming soon; details and follow-up at PreO1WorstGlitches wiki.)
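The coincidence test described above amounts to checking whether each loud Omicron trigger time falls within some window of an ETMY saturation time. A minimal sketch with made-up numbers (the 1 s window and the toy times are illustrative assumptions, not the parameters of the actual analysis):

```python
# Sketch of a trigger/saturation coincidence test: return the Omicron
# trigger times that lie within +/- window seconds of any saturation.
import bisect

def coincident(trigger_times, saturation_times, window=1.0):
    """Triggers within +/- window of at least one saturation time."""
    sats = sorted(saturation_times)
    matched = []
    for t in trigger_times:
        i = bisect.bisect_left(sats, t)
        # Only the nearest saturations on either side can be in-window.
        near = [sats[j] for j in (i - 1, i) if 0 <= j < len(sats)]
        if any(abs(t - s) <= window for s in near):
            matched.append(t)
    return matched

# Toy example: three loud triggers, two of them near saturations.
trigs = [100.2, 250.0, 300.7]
sats  = [100.0, 301.0, 450.0]
hits  = coincident(trigs, sats)  # -> [100.2, 300.7]
```

In the real analysis one would read the trigger times from the Omicron output and the saturation times from the ETMY saturation flags, then compare the matched fraction against what random coincidence would predict.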

Spectrograms showed some excess noise in several broadband lines between 10 and 40 Hz on Thursday, and a set of narrower lines between 10 and 50 Hz on Friday. Cause unknown.

The third observation period on Saturday had a significantly higher strain noise floor and lower range (~ 60 Mpc). Robert suggested that this was high-frequency noise in an oscillator (alog). This was not followed up.

The periodic 60 Hz glitch, which occurs every 72 minutes, continues with slightly lower SNR than during ER7. These glitches are vetoed very efficiently by H1:SUS-ETMY_L2_WIT_L_DQ.

More details can be found at the DQ shift wiki page.

Comments related to this report
daniel.hoak@LIGO.ORG - 06:37, Tuesday 08 September 2015 (21287)

The ~60 Mpc segment was probably due to a calibration issue -- in the previous lock, the OMC-READOUT_ERR_GAIN was off by a few tens of percent, and this will change the DARM loop gain and the calibration.  I suspect the gain-matching calculation was off for this lock, too.  You can check whether this was the cause by comparing the height of the calibration lines from one lock to the next.  This is something the summary pages can do, but it looks like they haven't been updated for the new line frequencies and amplitudes.

(A source of RF noise had been recently suppressed with a bandpass filter on the 9MHz oscillator, this would not have changed between the locks.)
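The suggested check is straightforward: demodulate the uncalibrated data at the calibration line frequency in each lock and compare amplitudes; a ratio away from unity flags the lock with the gain error. A sketch with synthetic data standing in for the real channels (the 331.9 Hz line frequency is an illustrative choice, not the actual line schedule):

```python
# Compare the height of a calibration line between two locks. If the
# DARM loop gain changed, the line amplitude in the uncalibrated
# spectrum changes with it.
import numpy as np

def line_amplitude(ts, fs, f_line):
    """Amplitude of a sinusoid at f_line via single-frequency demodulation."""
    t = np.arange(len(ts)) / fs
    return 2 * abs(np.mean(ts * np.exp(-2j * np.pi * f_line * t)))

fs, dur, f_line = 4096, 8, 331.9
t = np.arange(fs * dur) / fs
lock_a = 1.0 * np.sin(2 * np.pi * f_line * t)  # reference lock
lock_b = 0.7 * np.sin(2 * np.pi * f_line * t)  # lock with ~30% gain error

ratio = line_amplitude(lock_b, fs, f_line) / line_amplitude(lock_a, fs, f_line)
# ratio ~ 0.7 flags the miscalibrated lock
```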

H1 ISC (ISC)
daniel.hoak@LIGO.ORG - posted 04:07, Saturday 05 September 2015 - last comment - 13:27, Monday 14 September 2015(21234)
Input beam jitter coupling to DARM

Dan, Evan

This evening we made a qualitative study of the coupling of beam jitter before the IMC into DARM.  This is going to need more attention, but it looks like the quiescent noise level may be as high as 10% of the DARM noise floor around 200Hz.  While we don't yet understand the coupling mechanism, this might explain some of the excess noise between 100-200Hz in the latest noise budget.

We drove IMC-PZT with white noise in pitch, and then yaw.  The amplitude was chosen to raise the broadband noise measured by IMC-WFS_A_I_{PIT,YAW} to approximately 10x the quiescent noise floor.  This isn't a pure out-of-loop sensor, and since we were driving the control point of the DOF3 and DOF5 loops of the IMC alignment channels we will need to work out the loop suppression to get an idea of how much input beam motion was being generated.  Unfortunately we don't have a true out-of-loop sensor of alignment before the IMC.  We may try this test again with the loops off, or the gain reduced, or calibrate the motion using the IMC WFS dc channels with the IMC unlocked.  Recall that Keita has commissioned the DOF5 YAW loop to suppress the intensity noise around 300Hz.

The two attached plots show the coherence between the excitation channel (PIT or YAW) and various interferometer channels.  The coupling from YAW is much worse: at 200Hz, an excitation 10x larger than normal noise (we think) generates coherence around 0.6, so the quiescent level could generate a few percent of the DARM noise.  Looking at these plots has us pretty stumped.  How does input beam jitter couple into DARM?  If it's jitter --> intensity noise, why isn't it coherent with something like REFL_A_LF or POP_A_LF (not shown, but zero)? 
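The "few percent" estimate above follows from the usual coherence bookkeeping: a coherence C between the excitation and DARM implies the driven jitter accounts for sqrt(C) of the DARM amplitude, and scaling back down by the drive factor gives the quiescent contribution. A minimal sketch with the numbers quoted above (the linear-coupling assumption is ours):

```python
# With the jitter driven a factor "drive" above the quiescent level, a
# measured coherence C between the excitation and DARM implies the
# quiescent jitter accounts for sqrt(C)/drive of the DARM amplitude.
import math

def quiescent_fraction(coherence, drive):
    """Estimated quiescent contribution to DARM, as a fraction of its ASD."""
    return math.sqrt(coherence) / drive

frac = quiescent_fraction(0.6, 10.0)  # ~0.08, i.e. "a few percent" of DARM
```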

The third plot is a comparison of various channels with the excitation on (red) and off (blue).  Note the DCPD sum in the upper right corner.  Will have to think more about this after getting some sleep.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 08:43, Tuesday 08 September 2015 (21290)

Transfer function please.

evan.hall@LIGO.ORG - 12:04, Tuesday 08 September 2015 (21294)

TFs of the yaw measurement attached.

If the WFS A error signal accurately represents the quiescent yaw jitter into the IMC, the orange TF suggests that this jitter contributes to the DCPD sum at a level of 3×10⁻⁸ mA/Hz^(1/2) at 100 Hz, which is about a factor of 6 below the total noise.

Images attached to this comment
evan.hall@LIGO.ORG - 02:02, Friday 11 September 2015 (21393)

Using this measured WFS A yaw → DCPD sum TF, I projected the noise from WFS A onto the DARM spectrum (using data from 2015-08-27). Since the coupling TF was taken during a completely different lock stretch than the noises, this should be taken with a grain of salt. However, it gives us an idea of how significant the jitter is above 100 Hz. (Pitch has not yet been included.)
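The projection itself is just the witness ASD multiplied by the magnitude of the measured transfer function, compared against the total DCPD sum ASD. A minimal sketch, with synthetic arrays standing in for the real measured spectra (the numbers at 100 Hz are chosen to match the factor-of-6 figure quoted earlier in this thread):

```python
# Project the witness (WFS A yaw) ASD through the measured
# witness -> DCPD transfer function and compare to the total noise.
import numpy as np

freqs    = np.array([100.0, 200.0, 300.0])         # Hz
wfs_asd  = np.array([1e-3, 8e-4, 6e-4])            # witness ASD (arb./rtHz)
tf_mag   = np.array([3e-5, 4e-5, 5e-5])            # |TF|, mA per arb. unit
dcpd_asd = np.array([1.8e-7, 1.5e-7, 1.2e-7])      # total DCPD sum ASD (mA/rtHz)

projection = wfs_asd * tf_mag        # jitter contribution to DCPD sum
fraction   = projection / dcpd_asd   # ~1/6 at 100 Hz with these numbers
```

Because the coupling TF came from a different lock stretch than the noise spectra, the caveat in the comment applies to the projection as a whole, not just its overall scale.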

Non-image files attached to this comment
keita.kawabe@LIGO.ORG - 11:33, Friday 11 September 2015 (21402)

PIT coupling per beam rotation angle is a factor of 7.5 smaller than YAW:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=21212

paul.fulda@LIGO.ORG - 07:38, Monday 14 September 2015 (21496)

Re: "How does beam jitter couple to DARM?" : jitter can couple to DARM via misalignments of core optics (see https://www.osapublishing.org/ao/abstract.cfm?uri=ao-37-28-6734).

If this is the dominant coupling mechanism, you should see some coherence between a DARM BLRMS channel where this jitter noise is the dominant noise (you may need to drive jitter with white noise for this) and some of the main IFO WFS channels. 

gabriele.vajente@LIGO.ORG - 09:00, Monday 14 September 2015 (21498)

The BLRMS in the input beam jitter region (300-400 Hz) is remarkably stable over each lock (see my entry here), so there seems to be no clear correlation with residual motion of any IFO angular control.

paul.fulda@LIGO.ORG - 13:27, Monday 14 September 2015 (21509)

Thanks for the link to that post, I hadn't seen it. It may still be possible though that there's some alignment offset in the main IFO that couples the jitter to DARM (i.e. a DC offset that is large compared to residual motion – perhaps caused by mode mismatch + miscentering on a WFS). This could be checked by putting offsets on WFS channels and seeing how the coupling changes. 

H1 INJ (INJ)
eric.thrane@LIGO.ORG - posted 17:34, Thursday 03 September 2015 - last comment - 14:51, Tuesday 08 September 2015(21198)
LIMIT in HWINJ filter bank turned off
Eric, Cheryl

Following advice from D Shoemaker et al., we have disabled the LIMIT on the HWINJ filter bank. The change was made at approximately GPS = 1125361473. The LLO LIMIT was turned off earlier today:

https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=20249
Comments related to this report
jenne.driggers@LIGO.ORG - 12:18, Tuesday 08 September 2015 (21296)

[TJ, Jenne]

TJ pointed out to me that the Cal Hardware Injection ODC bit is red, and I tracked down where it's coming from.  The ODC is upset that the CAL-INJ_HARDWARE limiter was turned off.  I don't think that this has been preventing us from going to Observing mode, since the limiter was turned off on Thursday, but if it does (particularly if some updates have been made during maintenance day that tie this bit into our "OK" status) I will turn the limiter back on until the ODC can be updated to know that the limit switch should be off.

I have sent EricT an email asking for him to help figure out the ODC part of this.

jameson.rollins@LIGO.ORG - 12:54, Tuesday 08 September 2015 (21299)

Just to be clear, there should be NO ODC CHECKS INVOLVED in the IFO READY bit.  ODC is only used as transport of the bits, and none of the checks being done by ODC affect IFO READY.  The only thing that should be going in to IFO READY now is the GRD IFO top node.  In other words, this ODC issue should not have been preventing you from going to Observing mode.

betsy.weaver@LIGO.ORG - 14:51, Tuesday 08 September 2015 (21303)

Note, when the EXC bit in the CALCS CDS overview is in alarm, we tend to open the CAL_INJ_CONTROL screen to attempt to diagnose it.  This shows a big red light for some ODC Channel OK Latch, leading us to misdiagnose what is actually in alarm.  We have two operational problems:

1) If generically, there is a red light on the CDS screen - where do you go?  Normally, we follow the logical medm and are able to get to the bottom of the red status via logical nested reds.  This is not the case for the CALCS screen - the CDS H1:FEC-117_STATE_WORD bit is RED on the H1CALCS line of the overview screen, yet this bit is nowhere on the CALCS screen.

So, where does the information for the EXC bit of the H1CALCS state word specifically come from, such that we can do something about it?

2) Someone should rework CAL_INJ_CONTROL.adl so that it doesn't cause us to misdiagnose actual reds.  Currently, the HARDWARE INJECTIONS are out of configuration (an outstanding issue still to be sorted), and yet there is NO INDICATION of that on the CAL_INJ_CONTROL screen.  Also, the CW injection appears to be off, but there is no red alarm on the screen.

BTW, the HARDWARE INJ appear to be off.  They dropped around 7pm local time last night (20 hours ago).

Images attached to this comment