Reports until 08:09, Thursday 12 January 2017
LHO General
patrick.thomas@LIGO.ORG - posted 08:09, Thursday 12 January 2017 (33174)
Ops Day Shift Start
TITLE: 01/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 58.2425Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s 
QUICK SUMMARY: Locked and observing with double coincidence. A "TCSY chiller flow is low" verbal alarm was active when I arrived; flow had dropped to around 2.2 and has just come back to around 3.1.
H1 General
jim.warner@LIGO.ORG - posted 07:50, Thursday 12 January 2017 (33173)
Shift Summary

TITLE: 01/12 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 54.185Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
LOG:
11:20 LLO was down, A2L had looked bad since I arrived, so ran A2L script

12:30 Lockloss, back to observing at 13:09

15:30 Bubba and Ken to MX to check on temps

LHO General
corey.gray@LIGO.ORG - posted 23:58, Wednesday 11 January 2017 (33166)
EVE Operator Summary

TITLE: 01/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 53.5793Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:

Other than an odd EQ (thanks to Krishna for pointing out its high-frequency content, 0.3-1Hz, a band we don't have up on the wall; at any rate, not your typical EQ), H1 was in OBSERVING all shift and the PI modes were straightforward.
LOG:

H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 19:59, Wednesday 11 January 2017 - last comment - 21:47, Wednesday 11 January 2017(33170)
Earthquake Report: 5.3 in Guatemala, Lockloss on L1 & Then H1
Comments related to this report
krishna.venkateswara@LIGO.ORG - 21:47, Wednesday 11 January 2017 (33171)

This was a low magnitude but relatively close event which implies higher frequency ground motion. You can see a spike in the 0.3-1 Hz blrms at the lockloss time. I suspect that is what caused the lockloss.

LHO General
corey.gray@LIGO.ORG - posted 18:07, Wednesday 11 January 2017 (33165)
Transition To EVE

TITLE: 01/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 53.5793Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s

No snow!  Winds under 10mph, a balmy 19degF, & seismic isn't too bad (other than some plowing at the beginning of this shift).  Roads were plowed & clear on the way in.
QUICK SUMMARY:
Patrick took H1 to OBSERVING as we made the shift change.
There was some snow plowing in the Corner Station area at the beginning of this shift.

H1 SEI
patrick.thomas@LIGO.ORG - posted 17:02, Wednesday 11 January 2017 (33168)
Earthquake report
M5.5 - 42km SSW of Betafo, Madagascar

Reported twice by Terramon, once by USGS

Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No

Magnitude (according to Terramon, USGS, SEISMON): (5.4, 5.5), 5.5, NA

Location: 42km SSW of Betafo, Madagascar, 20.158°S, 46.633°E, 8.7 km depth 

 

Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~ 23:20 UTC

Lock status? Not locked at time

EQ reported by Terramon BEFORE it actually arrived? Unknown.
Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:30, Wednesday 11 January 2017 (33164)
CDS O2 restart report: Wednesday 4th - Tuesday 10th January 2017

model restarts logged for Tue 10/Jan/2017
2017_01_10 08:47 h1nds1

Complete h1tw1 offloading, reconfigure daqdrc on nds1.

model restarts logged for Mon 09/Jan/2017
2017_01_09 10:02 h1tw1

Restart tw1 after NFS failure Fri 06

model restarts logged for Sun 08/Jan/2017 - Sat 07/Jan/2017 No restarts reported (minute trends unavailable due to h1tw1 NFS error)

model restarts logged for Fri 06/Jan/2017
2017_01_06 08:13 h1nds0
2017_01_06 08:13 h1nds1

Restart both nds's as tw1 offloading starts. h1tw1 stopped NFS exporting at 17:20 this day.

model restarts logged for Thu 05/Jan/2017 - Wed 04/Jan/2017 No restarts reported

Outside of the NDS, TW1 and FW0 restarts, the rest of the DAQ has now been running for 37+ days (DC, FW1, FW2, GDS-BRCSTR).

LHO General
patrick.thomas@LIGO.ORG - posted 16:20, Wednesday 11 January 2017 - last comment - 16:29, Wednesday 11 January 2017(33162)
Ops Day Shift Summary
TITLE: 01/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 67.1386Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Just made it back to observing. Lockloss likely from snow removal at end X ringing up BRS X.
LOG:
15:46 UTC John starting clearing snow by CP1
16:13 UTC Starting initial alignment
16:20 UTC Terra called from LLO to ask about our state. They will probably start a test.
16:36 UTC John at end Y clearing snow
16:46 UTC Krishna turning sensor correction at end Y off
16:52 UTC Krishna turning sensor correction at end Y back on
16:58 UTC May have accidentally moved PR2 instead of BS during MICH_DARK_LOCKED. Trended back and changed pitch back to previous value. Went back to redo PRM_ALIGN.
17:01 UTC Accidentally selected INPUT_ALIGN_OFFLOADED instead of PRM_ALIGN_OFFLOADED. Reselected PRM_ALIGN.
17:04 UTC Karen to warehouse
17:15 UTC Initial alignment complete
17:28 UTC John done clearing snow at end Y, working way back to corner
17:34 UTC Sensor correction had not actually been turned back on at end Y. Reselecting guardian state turned it on.
17:45 UTC Lockloss at INCREASE_POWER. HAM6 ISI tripped.
17:52 UTC Karen out of warehouse
18:07 UTC Lockloss at INCREASE_POWER. HAM6 ISI tripped.
18:20 UTC Gerardo to end Y to check on IP controller. Not going into VEA. Not changing BRS.
18:42 UTC Damped PI mode 27
18:46 UTC NLN
18:50 UTC Changed H1:SYS-MOTION_C_PICO_D_CURRENT_ENABLE from Enabled to Disabled. Observing.
19:02 UTC GRB. LLO is down. Called LLO control room. Joe says he did not receive alert. LLO currently down due to wind.
19:19 UTC Damped PI modes 27 and 28
19:22 UTC Kyle to mid Y to take pictures
19:26 UTC Damped PI mode 27. Had to change phase.
19:34 UTC GRB. LLO is still down.
19:49 UTC Chandra running down Y arm
19:52 UTC Ran passive a2l check: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml. Looks fine.
20:08 UTC Greg to warehouse
20:19 UTC Greg back from warehouse
20:20 UTC Kyle back from mid Y
20:32 UTC Joe to mid Y and end Y to shovel snow
21:25 UTC Bubba starting clearing snow on X arm
21:36 UTC Joe back from mid Y
21:38 UTC Damped PI mode 28
21:50 UTC Gerardo, Filiberto and Marc starting WP 6430
22:23 UTC Out of observing for Sheila to test the effect on DARM of closing the beam diverter
22:37 UTC Sheila done. Back to observing.
22:58 UTC Gerardo, Filiberto and Marc done
23:08 UTC Lock loss. BRS X in fault. Bubba at end X clearing snow with front end loader close to building.
23:15 UTC Changed ISI Config at end X to BLEND_Quite_250_SC_None.
23:25 UTC Bubba on the way back from end X
23:26 UTC John to mid X
23:46 UTC Changed ISI Config at end X back to BLEND_QUITE_250_SC_BRS
00:05 UTC Corey reports front end loader is back at corner station
00:09 UTC Krishna rerequested ISI Config at end X to BLEND_QUITE_250_SC_BRS
00:14 UTC Observing
Comments related to this report
patrick.thomas@LIGO.ORG - 16:29, Wednesday 11 January 2017 (33163)
00:28 UTC Diag reset H1IOPASC0 to clear timing error.
H1 CAL
evan.goetz@LIGO.ORG - posted 15:57, Wednesday 11 January 2017 (33161)
Updated calibration for Pcal to DARM injections
Since we have now been correctly accounting for the ~0.99 AA gain used in the DARM calibration model (see LHO aLOG 32907), I have updated the control room DTT templates with the correct calibration. This is done similarly to LHO aLOG 32542.

The math needs to be updated to account for the improved understanding (see calibration subway map diagram):

DELTAL_EXT   W * [Derr/C_foton + A*Dctrl*delay]
---------- = ----------------------------------
PCAL_RX_PD    m * f^2 * AA_a/(AA_a gain) * AA_d

 m     DELTAL_EXT * [C_foton/C_real] / [W * AA_a * AA_d * OMCpoles * OMCtoCALCSdelay * lightTimeDelay * unknownSensingDelay]
--- = -----------------------------------------------------------------------------------------------------------------------
 m                                 sign * PCAL_RX_PD / [f^2 * AA_a/(AA_a gain) * AA_d]

Then the Pcal to DARM calibration is:

                                     [C_foton/C_real] * f^2
---------------------------------------------------------------------------------------------------------
[W * sign * (AA_a gain) * uncompensatedOMCpoles * OMCtoCALCSdelay * lightTimeDelay * unknownSensingDelay]

For the broad band Pcal to DARM measurement, since the calibration is applied separately to PCAL_RX_PD, we only need:

                                         [C_foton/C_real]
---------------------------------------------------------------------------------------------------------
[W * sign * (AA_a gain) * uncompensatedOMCpoles * OMCtoCALCSdelay * lightTimeDelay * unknownSensingDelay]
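The correction above is a product of a few scalar and frequency-dependent factors. As an illustration only, here is a minimal Python sketch of assembling such a correction transfer function; all parameter values (OMC pole frequency, total delay, unity C_foton/C_real) are hypothetical placeholders, not the numbers from the DARM reference model:

```python
import numpy as np

# Hypothetical parameter values -- the real numbers come from the DARM
# reference model in the CalSVN scripts listed below.
W = 1.0                    # whitening gain (placeholder)
sign = -1.0                # overall sign convention (placeholder)
AA_a_gain = 0.99           # analog AA gain, ~0.99 (see LHO aLOG 32907)
f = np.logspace(0, 3, 100)  # frequency vector, 1 Hz to 1 kHz
f_omc = 5.0e3               # hypothetical uncompensated OMC pole [Hz]
tau = 1.0e-4                # hypothetical combined delay [s]
C_ratio = np.ones_like(f)   # C_foton / C_real (unity placeholder)

omc_pole = 1.0 / (1.0 + 1j * f / f_omc)   # uncompensated OMC pole response
delay = np.exp(-2j * np.pi * f * tau)     # OMCtoCALCS + light-time + sensing delay

# Pcal-to-DARM correction, following the expression in this entry:
# [C_foton/C_real] * f^2 / [W * sign * (AA_a gain) * OMC poles * delays]
correction = C_ratio * f**2 / (W * sign * AA_a_gain * omc_pole * delay)
```

The real products are generated by the Matlab script below; this sketch only shows the shape of the bookkeeping.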

I have made a new control room calibration folder containing these scripts and calibration transfer function products, using the latest DARM reference model:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/ControlRoomCalib/H1_pcal2darm_correction.m
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/ControlRoomCalib/pcal2darm_calib.txt
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/ControlRoomCalib/caldeltal_calib.txt

The latest DTT measurements have been updated to include these transfer function products:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/2017-01-03_H1_PCAL2DARMTF_4to1200Hz_8min.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/2017-01-03_H1_PCAL2DARMTF_BB_5to1000Hz.xml
LHO VE
filiberto.clara@LIGO.ORG - posted 15:30, Wednesday 11 January 2017 - last comment - 16:01, Wednesday 11 January 2017(33158)
Y2-8 HV Ion Pump Cable

WP 6430

Chandra reported issues with the Y2-8 ion pump, see alog entry. Gerardo verified the controller had not tripped off. The following error messages were displayed on the controller:

1. Controller found with "Error 02"
2. After restarting controller, "Error 10", excessive arcing condition has been detected

With the HV cable disconnected at the power supply, HV output was enabled without issue. Disconnected the HV cable at both ends and tested with the HiPOT tester. The test failed at ~500V. Marc tested the cable with the FieldFox N9912A and will post some data.

Fil, Gerardo, Marc

Comments related to this report
marc.pirello@LIGO.ORG - 16:01, Wednesday 11 January 2017 (33160)

We compared scans from 8-2-2016 to today.

The short appears to be in approximately the same location as the previous short.

Images attached to this comment
H1 CDS
james.batch@LIGO.ORG - posted 15:25, Wednesday 11 January 2017 (33159)
Updated nuc3, nuc6 control room displays
Changed to new versions of the H1 DARM figure of merit displays for nuc3 and nuc6 in the control room.  New versions are in userapps, checked in to svn.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 15:09, Wednesday 11 January 2017 (33157)
CDS maintenance summary, Tuesday 10th January 2017

(Late entry from yesterday's maintenance)

h1tw1 offload old minute trends

Jim

h1nds1 was restarted to use the recent offload of minute trends from h1tw1.

WP 6411 Add Chiller Yard H2O Supply Temperatures to cell phone alerts

John, Dave:

CS, EX and EY chiller yard supply water temperatures were added to the cell phone alert system. Alarm levels are HIGH=55F, LOW=32F.

H1 General (OpsInfo)
sheila.dwyer@LIGO.ORG - posted 14:53, Wednesday 11 January 2017 (33156)
H1 DARM FOM updated

With some help from Jim Batch, Evan G and Jeff K I've updated the DARM FOM for display on nuc3.  The gold trace in the new version is updated to a spectrum taken in the O2 configuration, when the range was good (75Mpc, Dec 13th).  This should make it easier for operators to assess the current performance of H1 against what it has been in O2.  

The old gold reference from Jan 30th 2016, which is similar in range but has better low frequency sensitivity, is still saved in the template. 

LHO General
corey.gray@LIGO.ORG - posted 20:29, Tuesday 10 January 2017 - last comment - 16:51, Wednesday 11 January 2017(33143)
EVE Operator Summary

TITLE: 01/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 60.9258Mpc
INCOMING OPERATOR: Jim may or may not decide to venture in (I have been in contact with him this evening)
SHIFT SUMMARY:
H1 has been in OBSERVING for almost 6hrs at a fairly steady 60Mpc.

Continue to experience non-normal winter weather with a forecast for possibly more snow overnight & freezing temperatures.  Have also had fairly blizzard-like conditions with winds just below 20mph.  Have been in contact with the OWL shift operator with regards to coverage & they will do what feels safe.  

I am leaving NOW to brave roads home & so H1 will be flying on automatic pilot for a few hours or until the morning. 

I have talked with William Parker (LLO operator) and apprised him of our situation.

LOG:

DIAG_MAIN message:   PCAL:  Y RX PD is 0.01 OFF

Comments related to this report
corey.gray@LIGO.ORG - 00:46, Wednesday 11 January 2017 (33144)OpsInfo

~6:37utc (10:37pmPST) H1 had a lockloss (I originally logged this as ~5:30utc).  Looking at the seismic trends showed nothing (my guess is this was a PI mode ringing up).

Sheila had remote access and tried to relock, but H1 kept dropping at ALS LOCKING.  She mentioned that the Diff beatnote was low.  I made an attempt to drive back in, but Rt10 was impassable for my vehicle.  

Observatory Mode was left at OBSERVING through the lockloss and up to around 12:40am.  Then I asked Sheila to switch it to UNAVOIDABLE (we don't have a SNOW STORM option).

This is where H1 will be left until the morning.

terra.hardwick@LIGO.ORG - 08:00, Wednesday 11 January 2017 (33146)

Looking at the summary pages (PI Overview under ISC, and then the broadband monitoring; the blue spike right at lockloss is from signals saturating during lockloss, not from a ringing up PI), this lockloss didn't appear to be from PI.

corey.gray@LIGO.ORG - 16:51, Wednesday 11 January 2017 (33167)

Thanks, Terra!  Yeah, also checking the VERBAL_ALARM log, we have a lockloss at 6:36:55utc & no PI Mode notifications.

H1 ISC (DetChar, TCS)
sheila.dwyer@LIGO.ORG - posted 12:29, Monday 09 January 2017 - last comment - 07:21, Friday 13 January 2017(33104)
change in OMC length gain helps with 1083 Hz glitches

This morning we sat in nominal low noise without going to observing from 19:21 to 19:51 UTC (Jan 9th) in a configuration that should be much better for the 1084Hz glitches. (WP6420)  

On Friday we noticed that the 1084Hz feature is due to OMC length fluctuations, and that the glitch problem started on Oct 11th when the dither line amplitude was decreased (alog 30380).  This morning I noticed that the digital gain change described in alog 30380, which was intended to compensate for the reduced dither amplitude, didn't make it into any guardian, so we have had a UGF that was a factor of 8 lower than what I used when projecting OMC length noise to DARM (alog 30510).  The first attachment shows open loop gain measurements from the 3 configurations: before Oct 11th (high dither amplitude), after October 11th (lower dither amplitude, uncompensated) and the test configuration (lower dither amplitude, compensated).  

We ran with the servo gain set to 24 (to give us the nominal 6Hz UGF) and the lowered dither line amplitude from 19:21 UTC to 19:51 UTC Jan 9th.  You can see the spectrum during this stretch in the second attached screenshot: in the test configuration the peak around 1083Hz is gone, with just the pcal line visible, and the OMC length dither at 4100Hz is reduced by more than an order of magnitude.  You can also compare the glitches from this lock stretch with one from yesterday to see that the glitches at 1084Hz seem to be gone.  This is probably the configuration we would like to run with for now, but we may try one more test with increased dither line amplitude.  
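As a sanity check on the factor-of-8 statement: for a loop whose open-loop gain falls as roughly 1/f near the crossover, scaling the overall gain by 1/8 moves the unity-gain frequency down by the same factor. A toy numeric sketch (the pure 1/f loop shape is an assumption, not a measured OMC length-loop model):

```python
import numpy as np

f = np.logspace(-1, 2, 2000)   # frequency vector [Hz]
ugf_nominal = 6.0              # nominal OMC length-loop UGF [Hz]

def olg_mag(f, gain):
    """Toy open-loop gain magnitude with a 1/f slope near crossover."""
    return gain * ugf_nominal / f

def find_ugf(f, mag):
    """Frequency at which the open-loop gain magnitude crosses unity."""
    return f[np.argmin(np.abs(mag - 1.0))]

ugf_compensated = find_ugf(f, olg_mag(f, 1.0))          # ~6 Hz
ugf_uncompensated = find_ugf(f, olg_mag(f, 1.0 / 8.0))  # ~0.75 Hz
```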

Other notes because we don't have an operator  today due to weather:

This morning all 4 test mass ISIs were tripped probably from the Earthquake last night that brought the EQ  BLRMS to 10 um/second around midnight UTC.  ITMY tripped again while it was re-isolating, no problem on the second try. 

Richard topped up the TCSY chiller with 400mL around 10:15 or 10:30 local time, since we were getting low flow alarms. The flow alarms came back a few minutes before 11am local time.  

I went through initial alignment without problems and got to the DC_readout transition. Then I measured the UGF of the OMC length loop in preparation for increasing the dither line height.  From that measurement and trends it became clear that when the OMC dither amplitude was reduced, the compensation of the OMC digital gain described in alog 30380 didn't make it into the guardian.  This means we have been operating with a UGF in the OMC length loop that was a factor of 8 too low since mid October.  

We arrived in low noise at 19:21 UTC with the OMC ugf increased to 6Hz.  After about a half hour PI modes 27 and 28 rang up, and I wasn't fast enough to get them under control so we lost lock.  

Images attached to this report
Comments related to this report
andrew.lundgren@LIGO.ORG - 16:34, Monday 09 January 2017 (33114)DetChar, ISC
Here's a graphical version of what Sheila wrote, showing the time on Oct 11 when the 1083 Hz glitches started. The dither amplitude was reduced at 3:20 UTC, but the servo gain was increased to compensate. There are no 1083 Hz glitches at this time. Severe RF45 noise starts an hour later and lasts until the end of the lock. The 1083 Hz glitches are evident from the beginning of the next lock, and persist in every lock until the recent fix.

The dither amplitude stayed low in the second lock, but the servo gain was reset back to its low value. Apparently, both need to be low to produce the glitches.
Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 11:17, Thursday 12 January 2017 (33175)

Keita tells me that people are concerned about making this change because of the increased noise below 25 Hz in the screenshot attached to the original post.  We did not run the A2L decoupling during this lock stretch, and it was not well tuned.  The shape of the HARD loop cutoff at 25Hz is visible in the spectrum, which is one way of identifying bad ASC noise.  The high coherence between CHARD P and DARM at this time is another way of seeing that this is angular noise (attachment). 

So I think that this is unrelated to the OMC gain change and not really a problem. 

Images attached to this comment
joshua.smith@LIGO.ORG - 11:40, Thursday 12 January 2017 (33176)DetChar, ISC

1080Hz removal OMC gain/line conditions, does it make more low frequency noise?
Josh, Andy, TJ, Beverly

Conclusion: For two on/off times each for the two OMC gain tests (total 8 times) it looks like the high gain / low line configuration that takes away 1080 Hz (and also takes away some bumps around 6280Hz) coincides with a bit more noise below 25Hz.

Request: We hope this connection with noise below 25Hz is chance (it might have just been drift and we chose times unluckily) and we would like to debunk/confirm it. We could do that with a couple of cycles of on/off (e.g. 5 minutes each, with the current configuration vs. the high gain / low dither configuration). 

See the attached PDF. The pages are: 

  • 2,3: January 9th test configuration from Sheila's page: "We ran with the servo gain set to 24 (to give us the nominal 6Hz ugf) and the lowered dither line amplitude from 19:21 UTC to 19:51 UTC Jan 9th." Red/Orange are the test time with low line / high gain, and no 1080Hz
  • 4,5: Similar experiment from October. Blue/green are the test time with low line / high gain, and no 1080Hz.

Also: There is no coherence above 10Hz between STRAIN and OMC LSC SERVO/I for any of these test times. So coupling must be non-linear. 
Also: When the 1080Hz bumps disappear we also see a bump around 6280Hz disappear (attached image; sorry, no x-axis label, but it's 6280Hz)

Images attached to this comment
Non-image files attached to this comment
andrew.lundgren@LIGO.ORG - 12:10, Thursday 12 January 2017 (33177)DetChar, ISC
Our post crossed with Sheila's. If possible, we'd still like to see a quick on/off test with the A2L tuned. Could we have five minutes with the gain high and then ramp it down? Maybe with one repeat. Since this is a non-linear effect, we'd like to make sure there's no funny coupling with the CHARD noise. We're not too worried by excess noise below 25 Hz now, but it might be important when we're able to push lower.
sheila.dwyer@LIGO.ORG - 16:34, Thursday 12 January 2017 (33183)

While LLO was down I attempted a test by increasing the OMC length gain while in lock, which unlocked the IFO, so on/off tests done that way aren't possible.  Edit: I broke the lock by changing the gain after the integrator (which had been OK when not on DC readout); we can change the gain upstream instead without unlocking.  

For now I put the new gain into the guardian so the next lock will be with the increased gain, and hopefully see that the low frequency noise is fine. 

Now we have relocked, Patrick ran a2l, and Jeff, Evan, Krishna and I did an on off test by ramping H1:OMC-LSC_I_GAIN:

  • high gain for about 15 minutes before 23:55 UTC Jan 12th
  • low gain from 23:56:20 UTC Jan 12th to 0:03:21 UTC Jan 13th
  • high gain from 0:04 UTC to 0:13:22 UTC
  • back to low gain
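Ramping a gain like H1:OMC-LSC_I_GAIN amounts to a linear interpolation over a ramp time. A minimal standalone sketch of such a ramp schedule (the 5 s ramp time and the sampling interval are assumptions for illustration; 24 is the compensated gain from this entry, and 3 is simply 24 divided by the factor of 8):

```python
def gain_ramp(start, stop, ramp_time, dt=0.1):
    """Linear gain ramp from start to stop over ramp_time seconds,
    sampled every dt seconds (endpoints included)."""
    n = int(round(ramp_time / dt))
    return [start + (stop - start) * i / n for i in range(n + 1)]

# e.g. ramp from the compensated gain (24) down to the uncompensated
# value (3, i.e. 24/8) over a hypothetical 5 s ramp time
schedule = gain_ramp(24.0, 3.0, 5.0)
```

In practice the front-end filter module's own ramp handles this; the sketch just makes the on/off switching explicit.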

The attached screen shot using the same color scheme as in the presentation above shows that there is not a difference at low frequency between high gain and low gain.  

We are back in observing in the low gain configuration, but the gain is set in an unusual way (and accepted in SDF so that we can go to observing). Keita would like to hear confirmation from detchar before making this permanent. 

Images attached to this comment
joshua.smith@LIGO.ORG - 07:21, Friday 13 January 2017 (33214)DetChar, ISC

Thank you Sheila, this looks really good. No change at low frequency. 1080Hz gone. The 6280Hz just varies on its own timescale. From our end we're happy with the configuration change since it only does good. Sorry for the red herring about low frequencies. 

Images attached to this comment
H1 CAL (CAL)
aaron.viets@LIGO.ORG - posted 14:43, Wednesday 04 January 2017 - last comment - 18:05, Wednesday 11 January 2017(32965)
DCS filters for LHO data for early O2 A (2016)
I have produced filters for offline calibration of Hanford data from the beginning of O2 A until the end of 2016. The filters can be found in the calibration SVN at this location:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1163173888.npz

For information on the calibration model, see
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=31693
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32329
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32907

For suggested command line options to use when calibrating this data, see:
https://wiki.ligo.org/Calibration/GDSCalibrationConfigurationsO2

The filters were produced using this Matlab script in SVN revision 4050:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1163173888.m

The parameters files used (all in revision 4050) were:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/Common/params/IFOindepParams.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/H1params.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/2016-11-12/H1params_2016-11-12.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/H1_TDparams_1163173888.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/D20161122_H1_CAL_EPICS_VALUES.m

Several plots are attached. The first four (png files) are spectrum comparisons between CALCS, GDS, and DCS. Kappas were applied in both GDS and DCS plots with a coherence uncertainty threshold of 0.4%. Time domain vs. frequency domain comparison plots of the filters are also attached. Lastly, brief time series of the kappas and coherences are attached, for comparison with CALCS.
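The kappa application with a coherence uncertainty threshold can be sketched schematically as a gating step: accept a kappa sample only when its coherence-based relative uncertainty is below the threshold. This is an illustration of the gating idea only, not the actual GDS/DCS pipeline code, and the hold-last-accepted-value behavior is an assumption:

```python
import numpy as np

def gate_kappas(kappas, rel_uncertainty, threshold=0.004):
    """Schematic kappa gating: accept a sample only when its
    coherence-based relative uncertainty is below the threshold
    (0.4% here); otherwise hold the last accepted value."""
    out = np.empty_like(kappas)
    last_good = 1.0  # nominal starting value
    for i, (k, u) in enumerate(zip(kappas, rel_uncertainty)):
        if u < threshold:
            last_good = k
        out[i] = last_good
    return out

# three samples; the middle one exceeds the 0.4% threshold and is held
gated = gate_kappas(np.array([1.01, 1.05, 1.02]),
                    np.array([0.001, 0.010, 0.002]))
```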
Images attached to this report
Non-image files attached to this report
Comments related to this report
aaron.viets@LIGO.ORG - 06:25, Thursday 05 January 2017 (32986)
More plots from beginning of O2 (Nov 30) to show that these filters still have the right model and EPICS.
Images attached to this comment
Non-image files attached to this comment
aaron.viets@LIGO.ORG - 07:01, Thursday 05 January 2017 (32987)
Same set of plots one more time, this time in early ER10 (Nov 16). Note that kappas were not applied in the GDS pipeline this time, leading to a notable difference in the spectra.
Images attached to this comment
Non-image files attached to this comment
aaron.viets@LIGO.ORG - 18:05, Wednesday 11 January 2017 (33169)CAL
These filters have been updated to account for corrections made to the DARM loop parameters since the AA/AI filter bug fixes. For information on the model changes, see:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=33153

The updated filters were produced using all the same files (updated versions) in SVN revision #4133. The only exception is that the EPICS file and the parameters file to produce it were:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/DCS20161112_H1_CAL_EPICS_VALUES.m
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CAL_EPICS/callineParams_20161118.m

Note from the plots the slight discrepancy between GDS and DCS, presumably due to the corrections to the model. Also note that DCS and CALCS do not agree on the kappas. This is likely not cause for concern, as the model used to compute them was different. The EPICS and pcal correction factors were produced using the same parameter files as the filters, so they should be correct.
Images attached to this comment
Non-image files attached to this comment