Reports until 15:57, Tuesday 04 April 2017
LHO General
patrick.thomas@LIGO.ORG - posted 15:57, Tuesday 04 April 2017 - last comment - 16:36, Tuesday 04 April 2017(35324)
Ops Day Shift Summary
TITLE: 04/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Jim
SHIFT SUMMARY: X arm is locked on IR. ALS is shuttered. ITMX optical lever damping is off. Travis said they would turn ITMX optical lever off. Input power is at 19.6 W. PCAL team is taking images of ITMX with this configuration. We are riding through the tail end of a 5.5 magnitude earthquake in Alaska.
LOG:

Apollo is at mid Y
15:00 UTC Took IFO down. Peter to LVEA to transition.
15:00 UTC Jeff to LVEA, measurements and dust monitors.
15:00 UTC Travis to LVEA.
15:00 UTC Krishna to end Y. Set to SC_OFF_NOBRSXY.
15:07 UTC Balers on X arm. Apollo and Chris back from end Y.
15:14 UTC Peter back.
15:27 UTC Filiberto looking at PEM chassis in CER.
15:30 UTC Rotorooter and LN2 through gate.
15:35 UTC Karen and Christina to end stations.
15:53 UTC Jason and Ed to end Y to swap optical lever laser.
15:58 UTC Pest control on site.
15:58 UTC Cintos through gate.
16:00 UTC Chandra back, gate valves closed.
16:01 UTC GRB. Ignored.
16:13 UTC Chris taking pest control to LVEA.
16:21 UTC Nutsinee to LVEA to work on HWS X camera install
16:24 UTC Chris taking pest control down Y arm.
16:28 UTC Apollo to end X mechanical room. Bubba to LVEA.
16:30 UTC Jeff B. done in LVEA. Heading to both end station VEAs for dust monitor maintenance.
16:37 UTC Dick G. to CER with signal analyzer to chase RF noise.
16:43 UTC Bubba back from LVEA.
16:44 UTC Chris and pest control moving from Y arm to X arm.
16:51 UTC Krishna done.
16:52 UTC Jason and Ed done.

Restarted all nuc computers in CR per Carlo's request.

17:36 UTC Jeff B. done
17:40 UTC Filiberto done in CER.
17:44 UTC Karen leaving end Y.
18:01 UTC Karen to LVEA
18:22 UTC Jeff B. to LVEA to take pictures
18:27 UTC Tumbleweed baling in front of OSB
18:30 UTC Set ISI config to windy
18:34 UTC Pest control done on site
18:42 UTC TCS Y chiller flow is low verbal alarm
19:47 UTC Rick, Travis and crew out for lunch.
19:48 UTC Nutsinee out. Not completing remaining work (no laser hazard)
19:48 UTC Starting attempt to lock X arm on green
19:54 UTC Peter: PSL unshuttered. TCS back on.
20:12 UTC Dick G. done
20:40 UTC Rick, Travis, Carl to LVEA to take pictures with new ITMX camera
20:44 UTC Peter to join PCAL team at ITMX.
21:32 UTC Dave WP 6547
21:35 UTC Nutsinee to LVEA to take picture
21:56 UTC Nutsinee back
22:11 UTC X arm locked on IR
22:13 UTC PCAL crew to LVEA to take next set of pictures.

Closed ALS shutters. Turned off optical lever damping on ITMX. Increased power to 19.5 W.
Comments related to this report
patrick.thomas@LIGO.ORG - 16:36, Tuesday 04 April 2017 (35325)
Found INJ_TRANS guardian set to INJECT_KILL upon start of shift. Just set to INJECT_SUCCESS.
H1 SEI
patrick.thomas@LIGO.ORG - posted 15:48, Tuesday 04 April 2017 (35323)
Earthquake Report
5.5 Adak, Alaska

Was it reported by Terramon, USGS, SEISMON? Yes, Yes, No

Magnitude (according to Terramon, USGS, SEISMON): 5.5, 5.5, NA

Location: 69km SSE of Adak, Alaska; 51.269°N   176.440°W

Starting time of event (ie. when BLRMS started to increase on DMT on the wall): ~22:16 UTC

Lock status? L1 remained locked. H1 out of lock for maintenance.

EQ reported by Terramon BEFORE it actually arrived? Not sure
Images attached to this report
H1 TCS (TCS)
nutsinee.kijbunchoo@LIGO.ORG - posted 15:30, Tuesday 04 April 2017 (35322)
Camera and pick-off BS installed

Fil, Richard, Nutsinee

 

Quick conclusion: The camera and the pick-off beam splitter are in place, but not aligned.

 

Details: First I swapped the camera fiber cable back so h1hwsmsr could stream images from the HWSX camera while I installed the BS.

While watching the streamed images, I positioned the BS such that it did not cause a significant change to the images (I didn't take the HW plate off).

Then I installed the camera (screwed onto the table). Because the gate valves were closed for the Pcal camera installation, I didn't have the green light to do the alignment.

Richard got the network streaming to work. Now we can look at what the GigE sees through CDS >> Digital Video Cameras. There's nothing there.

The alignment will have to wait for the next opportunity now that green and IR are back (the LVEA is still laser safe).

The camera is left powered on, connected to the Ethernet cable, with the CCD cap off.

 

I re-ran the python script and retook the reference centroids. From GPS time 1175379652 onward, the data written to /data/H1/ITMX_HWS comes from the HWSX camera.
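For convenience, that GPS time can be converted to UTC to line it up with the shift log above; a minimal sketch, assuming gwpy is available:

    # Sketch: convert the GPS time quoted above to UTC (needs gwpy).
    from gwpy.time import tconvert

    gps_changeover = 1175379652          # time the /data/H1/ITMX_HWS data switched to the HWSX camera
    print(tconvert(gps_changeover))      # -> 2017-04-04 22:20:34 UTC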

 

Specification

Camera: BASLER scA1400-17gc

Beam Splitter: Thorlabs BSN17, 2" diameter 90:10 UVFS BS plate, 700-1100 nm, t = 8 mm

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 15:02, Tuesday 04 April 2017 (35321)
DAQ broadcaster restarted, added 7 slow channels

WP6547, ECR-E1700111

John Z, Dave:

The DAQ broadcaster was restarted after 7 additional slow channels were added (H1:OMC-BLRMS_32_BAND{1,7}_RMS_OUTMON)

Once again we noticed that after the broadcaster restart, a portion of the seismic FOM data went missing (see attached). This was also observed last Tuesday.

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35155

Images attached to this report
H1 ISC
patrick.thomas@LIGO.ORG - posted 14:46, Tuesday 04 April 2017 (35320)
ITMX green camera moved
Jenne D., Patrick T.

We suspect that the ITMX green camera got moved during the PCAL camera installation today. Initial alignment for the green arm was not working right. We closed the initial alignment loops for the green WFS and aligned ITMX by hand to maximize the transmitted power. We then set the camera's nominal position such that the output was zero at that alignment.

Attached is a screen shot of the nominal reference position before our change.
Images attached to this report
H1 PEM
jeffrey.bartlett@LIGO.ORG - posted 14:40, Tuesday 04 April 2017 (35319)
Monthly PSL Chiller Filter Inspection (FAMIS #8295)
   Inspection of the inline filters in the PSL Chiller Room showed no contamination, discoloration, or debris. See attached photos. The inline filters in the PSL enclosure were inspected two weeks ago; no problems were noted. Closing FAMIS #8295 
Images attached to this report
H1 PEM
jeffrey.bartlett@LIGO.ORG - posted 14:31, Tuesday 04 April 2017 (35318)
Dust Monitor Quarterly Testing (FAMIS #7314)
   Did a zero count and flow rate test on all pump-less dust monitors in the CS and at both end stations. The PSL monitors were checked a couple of weeks ago. All the monitors are running well with no problems or issues to report. Closing FAMIS task #7314.
LHO General
patrick.thomas@LIGO.ORG - posted 13:49, Tuesday 04 April 2017 (35317)
Ops Day Late Mid Shift Summary
ISI config is back to WINDY. Gate valves are back open. PSL is unshuttered. TCS lasers are back on. IMC is locked. X and Y arms are locked on green. PCAL team is working on taking pictures.
H1 CDS (PEM)
filiberto.clara@LIGO.ORG - posted 13:06, Tuesday 04 April 2017 (35316)
H1 PEM AA Chassis

WP 6559

PEM group reported possible issues with some of the PEM "Test" channels in one of the AA chassis in the CER (PEM/OAF Rack slot U7 & U6). Channels 23-32 were all verified to be working.

F. Clara, R. McCarthy

LHO VE
chandra.romel@LIGO.ORG - posted 12:33, Tuesday 04 April 2017 (35315)
GV 5, 7 open

Soft closed GV 5 and 7 at 15:25 UTC and re-opened them at 19:00 UTC during the viewport Pcal camera installation. We let the accumulated gas load in the gate annuli be burped in.

Took the opportunity to train a couple of operators on stroking pneumatic valves: Jeff Bartlett and Nutsinee.

Thanks to all for transitioning to laser safe for the installation.

Images attached to this report
H1 SEI (SEI)
krishna.venkateswara@LIGO.ORG - posted 11:12, Tuesday 04 April 2017 (35313)
BRS-Y Recentered

Krishna

I recentered the BRS-Y this morning. The driftmon should end up around +5k counts, which gives ~20k counts of useful range before the next recentering. The drift rate is currently about 180 counts per day (see image), so the next recentering will be in ~3 months. As noted in 34145, recentering was much easier this time because the recentering rod prefers certain positions, which happened to fall in an acceptable range.
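The scheduling arithmetic is simple enough to jot down; a minimal sketch using the numbers quoted above (the -15k counts "do not use" limit is from aLOG 35314):

    # Back-of-the-envelope estimate of when the next BRS-Y recentering is due.
    start_counts  = 5000      # driftmon right after recentering (from above)
    limit_counts  = -15000    # BRS-Y should not be used below this (aLOG 35314)
    drift_per_day = 180       # current drift rate, counts/day

    useful_range = start_counts - limit_counts         # ~20k counts
    days_left = useful_range / float(drift_per_day)    # ~111 days, i.e. roughly 3-4 months
    print("Next recentering due in about %.0f days" % days_left)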

BRS-Y is ready for SEI use.

Images attached to this report
H1 PSL
jason.oberling@LIGO.ORG - posted 10:30, Tuesday 04 April 2017 (35310)
PSL Power Watchdogs Reset (FAMIS 3644)

I reset the PSL power watchdogs at 16:56 UTC (9:56 PDT).  This closes FAMIS 3644.

H1 AOS (DetChar)
jason.oberling@LIGO.ORG - posted 10:28, Tuesday 04 April 2017 - last comment - 15:32, Thursday 06 April 2017(35309)
ETMy Optical Lever Laser Swapped (WP 6555)

J. Oberling, E. Merilh

This morning we swapped the oplev laser for the ETMy oplev, which has been having issues with glitching. The swap went smoothly with zero issues. Old laser SN is 130-1, new laser SN is 194-1. This laser operates at a higher power than the previous laser, so the SUM counts are now ~70k (used to be ~50k); the individual QPD segments are sitting between 16k and 19k counts. This laser will need a few hours to come to thermal equilibrium, so I will assess this afternoon whether or not the glitching has improved; I will keep the work permit open until this has been done.

For those investigating the possibility of these lasers causing a comb in DARM, the laser was off and the power unplugged for ~11 minutes.  The laser was shut off and unplugged at 16:14 UTC (9:14 PDT); we plugged it back in and turned it on at 16:25 UTC (9:25 PDT).

Comments related to this report
keith.riles@LIGO.ORG - 21:55, Tuesday 04 April 2017 (35328)DetChar
Attached are spectrograms (15:00-18:00 UTC vs 20-22 Hz) of the EY optical lever power sum over a 3-hour period today containing the laser swap, and of a witness magnetometer channel that appeared to indicate on March 14 that a change in laser power strengthened the 0.25-Hz-offset 1-Hz comb at EY. Today's spectrograms, however, don't appear to support that correlation. During the 11-minute period when the optical lever laser is off, the magnetometer spectrogram shows steady lines at 20.25 and 21.25 Hz.

For reference, corresponding 3-hour spectrograms are attached from March 14 that do appear to show the 20.25-Hz and 21.25-Hz teeth appear right after a power change in the laser at about 17:11 UTC.

Similarly, 3-hour spectrograms are attached from March 14 that show the same lines turning on at EX at about 16:07 UTC. Additional EX power sum and magnetometer spectrograms are also attached, to show that those two lines persist during a number of power level changes over an additional 8 hours. In my earlier correlation check, I noted the gross changes in magnetometer spectra, but did not appreciate that the 0.25-Hz lines were relatively steady.

In summary, those lines strengthened at distinct times on March 14 (roughly 16:07 UTC at EX and 17:11 at EY) that coincide (at least roughly) with power level changes in the optical lever lasers, but the connection is more obscure than I had appreciated and could be chance coincidence with other maintenance work going on that day. Sigh. 

Can anyone recall some part of the operation of increasing the optical lever laser powers that day that could have increased coupling of combs into DARM, e.g., tidying up a rack by connecting previously unconnected cables? A shot in the dark, admittedly, but it's quite a coincidence that these lines started up at separate times at EX and EY right after those lasers were turned off (or blocked from shining on the power sum photodiodes) and back on again. 


Spectrograms of optical lever power sum and magnetometer channels

Fig 1: EY power - April 4 - 15:00-18:00 UTC
Fig 2: EY witness magnetometer - Ditto

Fig 3: EY power - March 14 - 15:00-18:00 UTC
Fig 4: EY magnetometer - Ditto

Fig 5: EX power - March 14 - 14:00-17:00 UTC
Fig 6: EX witness magnetometer - Ditto

Fig 7: EX power - March 14 - 17:00-22:00 UTC
Fig 8: EX witness magnetometer - Ditto

Fig 9: EX power - March 15 - 00:00-04:00 UTC
Fig 10: EX witness magnetometer - Ditto
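
For anyone wanting to regenerate plots like these, a rough gwpy sketch (the oplev power-sum channel name below is a guess, not necessarily the one used for the attached figures):

    # Sketch: 3-hour spectrogram of the EY oplev power sum, zoomed on 20-22 Hz (Fig 1 style).
    from gwpy.timeseries import TimeSeries

    chan = 'H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ'     # hypothetical channel name for the EY oplev sum
    data = TimeSeries.get(chan, 'Apr 4 2017 15:00', 'Apr 4 2017 18:00')   # needs NDS2 access

    spec = data.spectrogram(30, fftlength=10, overlap=5) ** (1/2.)        # 30 s strides, ASD
    plot = spec.plot(norm='log')
    plot.gca().set_ylim(20, 22)                   # the 20.25 Hz and 21.25 Hz comb teeth
    plot.savefig('EY_oplev_sum_spectrogram.png')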

Images attached to this comment
jason.oberling@LIGO.ORG - 15:32, Thursday 06 April 2017 (35368)

Laser continued to glitch after the swap; see attachment from 4/5/2017 ETMy oplev summary page.  My suspicion is that the VEA temp was just different enough from the Pcal lab (where we stabilize the lasers before install) that the operating point of the laser once installed was just outside the stable range set in the lab.  So during today's commissioning window I went to End Y and slightly increased the laser power to hopefully return the operating point to within the stable range.  Using the Current Mon port on the laser to monitor the power increase:

  • Old value: 0.860 V
  • New value: 0.865 V

Preliminary results look promising, so I will let it run overnight and evaluate in the morning whether or not further tweaks to the laser power are necessary.

Images attached to this comment
LHO FMCS
bubba.gateley@LIGO.ORG - posted 09:54, Tuesday 04 April 2017 (35308)
Turning off more heat in the LVEA
I have turned the heat off in Zone 3B in the LVEA. I will continue to closely monitor these temperatures. 
LHO General
patrick.thomas@LIGO.ORG - posted 08:54, Tuesday 04 April 2017 (35307)
Ops Day Shift Transition
TITLE: 04/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
    Wind: 7mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.48 μm/s 
QUICK SUMMARY:

Peter has transitioned the LVEA to laser safe. Chandra has closed the gate valves. IFO is down. IMC is set to OFFLINE. ISS second loop is off. ISI config is set to SC_OFF_NOBRSXY. Krishna is working at end Y. Filiberto is investigating PEM chassis in CER. Tumbleweed balers are on site. Rotorooter is here to backfill hole. Travis has started prep work for camera install. Ed and Jason are heading to end Y to swap optical lever laser.
H1 General
cheryl.vorvick@LIGO.ORG - posted 07:49, Tuesday 04 April 2017 (35306)
Ops Owl Summary:

TITLE: 04/04 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY: one lock loss, relocked and in Observe
LOG:

H1 SEI
hugh.radkins@LIGO.ORG - posted 13:56, Friday 31 March 2017 - last comment - 11:26, Tuesday 04 April 2017(35250)
BRSY Remote Desktop session exited at 2050--2051utc

As Krishna noted in aLog 35160, this machine does not have a lot of extra juice to run this. I should have closed out earlier but was watching the diagnostics. Jim turned off the sensor correction so that, in case the session closing glitched things, it would not glitch the ISI/IFO. It did not, and SC has been turned back on.

Comments related to this report
krishna.venkateswara@LIGO.ORG - 11:26, Tuesday 04 April 2017 (35314)DetChar, SEI

I've attached about 10 hours of BRS-Y driftmon data around the time of this crash. It looks like this crash was caused by trying to use BRS-Y in a bad range. Even if the data looks temporarily smooth, BRS-Y should not be used if driftmon is below -15k counts.

The close time association with the closing of the remote desktop terminal was likely just a coincidence. It is still advised not to log in to the machine remotely when BRS-Y is being used for feedforward, unless really necessary.
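
A trivial pre-use check along these lines (the driftmon channel name is my guess; the -15k limit is the one quoted above):

    # Sketch: check the BRS-Y driftmon before trusting it for sensor correction.
    from epics import caget    # pyepics

    DRIFTMON = 'H1:ISI-GND_BRS_ETMY_DRIFTMON'   # hypothetical channel name
    LIMIT = -15000                              # counts; do-not-use threshold from above

    value = caget(DRIFTMON)
    if value is None or value < LIMIT:
        print("BRS-Y driftmon = %s counts: out of range, do not use for sensor correction" % value)
    else:
        print("BRS-Y driftmon = %d counts: OK to use" % value)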

Images attached to this comment
H1 CDS (SYS)
david.barker@LIGO.ORG - posted 12:12, Monday 27 March 2017 - last comment - 11:09, Tuesday 04 April 2017(35111)
Timing error at 08:45 PDT this morning

Jim, Dave:

At 15:44:58 UTC (08:44:58 PDT) we received a timing error which lasted for only one second. The error was reported by the CNS-II independent GPS receivers at both end stations; they both went into the 'Waiting for GPS lock' error state at 15:44:58, stayed there for one second, and then went good. The IRIG-B signals from these receivers are being acquired by the DAQ (and monitored by GDS). The IRIG-B signals for the second prior, the second of the error, and the following two seconds (4 seconds in total) are shown below.

As can be seen, even though EX and EY both reported the error, only EX's IRIG-B is missing during the bad second.

The encoded seconds in the IRIG-B are shown in the table below. Note that the GPS signal does not have leap seconds applied, so GPS = UTC +18.

Actual seconds | EX IRIG-B seconds | EY IRIG-B seconds
15             | 15                | 15
16             | missing           | 16
17             | 16                | 17
18             | 18                | 18

So EY was sequential through this period. EX reported the 16 second one second late, skipped 17, and resynced at 18.
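
As a cross-check on the seconds in the table: the error was reported at 15:44:58 UTC, and since the IRIG-B here encodes GPS time (UTC + 18 s, no leap seconds applied), the encoded seconds field around the event should indeed read 15-18:

    # Sketch: the IRIG-B (GPS-encoded) seconds field expected at the time of the error.
    utc_seconds = 58                 # event reported at 15:44:58 UTC
    gps_minus_utc = 18               # GPS = UTC + 18 s in 2017
    print((utc_seconds + gps_minus_utc) % 60)   # -> 16, the second EX dropped in the table above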

Images attached to this report
Comments related to this report
stefan.countryman@LIGO.ORG - 08:02, Tuesday 28 March 2017 (35126)
Summary: All problems were in CNS II GPS channels at LHO. No problems were observed in the Trimble GPS channels at either site, nor in the LLO CNS II channels, with the exception of a change of -80 ns in the LLO Trimble GPS PPSOFFSET a few seconds after the anomaly (see below). It seems that both LHO CNS II clocks simultaneously dropped from 10 to 3 satellites tracked for a single second. There is no channel recording the number of satellites locked by the Trimble clocks, but the RECEIVERMODEs at both sites remained at the highest level of quality, OverDeterminedClock (level 5 for the Trimbles), with no interruption at the time of the anomaly.

It is unclear whether the LLO PPSOFFSET is causally related to the LHO event; the lack of other anomalous output from the LLO Trimble clock suggests that it is otherwise performing as intended.

Descriptions of anomalous plots below. All anomalous plots are attached.

  • Dilution of precision at BOTH LHO CNS II clocks skyrockets to ~100 around the event (nominal values are around 1) (H1:SYS-TIMING_X_GPS_A_DOP, H1:SYS-TIMING_Y_GPS_A_DOP).
  • Number of satellites tracked by BOTH LHO CNS II clocks plummets for two seconds from 10 to 3 (H1:SYS-TIMING_X_GPS_A_TRACKSATELLITES, H1:SYS-TIMING_Y_GPS_A_TRACKSATELLITES).
  • In the second before the anomaly, both of the LHO CNS II clocks' RECEIVERMODEs went from 3DFix to 2DFix for exactly one second, as evidenced by a change in state from 6 to 5 in their channels' values (H1:SYS-TIMING_X_GPS_A_RECEIVERMODE, H1:SYS-TIMING_Y_GPS_A_RECEIVERMODE).
  • The 3D speed also spiked right around the anomaly for both LHO CNS clocks (H1:SYS-TIMING_X_GPS_A_SPEED3D, H1:SYS-TIMING_Y_GPS_A_SPEED3D).
  • The LHO CNS II clocks' 2D speeds both climb up to ~0.1 m/s (obviously fictitious) (H1:SYS-TIMING_X_GPS_A_SPEED2D, H1:SYS-TIMING_Y_GPS_A_SPEED2D).
  • The LHO Y-end CNS II clock calculated a drop in elevation of 1.5 m following the anomaly (obviously spurious) (H1:SYS-TIMING_Y_GPS_A_ALTITUDE).
  • The LHO X-end CNS II clock thinks it dropped by 25 m following the anomaly! I'm not sure why this is so much more extreme than the Y-end calculated drop (H1:SYS-TIMING_X_GPS_A_ALTITUDE).
  • The Livingston corner GPS PPSOFFSET went from its usual value of ~0 +/- 3 ns to -80 ns for a single second at t_anomaly + 3 s (L1:SYS-TIMING_C_GPS_A_PPSOFFSET).
  • The GPS error flag for both LHO CNS II clocks came on, of course (H1:SYS-TIMING_Y_GPS_A_ERROR_FLAG, H1:SYS-TIMING_X_GPS_A_ERROR_FLAG).
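
A minimal sketch of pulling a couple of these channels around the anomaly to see the satellite-count drop (channel names as quoted above; assumes NDS2 access via gwpy):

    # Sketch: fetch the CNS II tracked-satellite counts around the 15:44:58 UTC event.
    from gwpy.timeseries import TimeSeriesDict

    chans = ['H1:SYS-TIMING_X_GPS_A_TRACKSATELLITES',
             'H1:SYS-TIMING_Y_GPS_A_TRACKSATELLITES']
    data = TimeSeriesDict.get(chans, 'Mar 27 2017 15:40', 'Mar 27 2017 15:50')

    for name, ts in data.items():
        drops = ts.times.value[ts.value <= 3]      # seconds with 3 or fewer satellites tracked
        print(name, "drops at GPS times:", drops)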
Images attached to this comment
patrick.thomas@LIGO.ORG - 11:09, Tuesday 04 April 2017 (35312)
Using my very limited knowledge of Windows administration, I have attempted to list the events logged on h1ecatc1 from 8:00 - 10:00 AM on Feb. 27 2017. Attached is a screenshot of what was reported. I don't see anything at the time in question. However, there is a quite reasonable chance that there are other places to look that I am not aware of and/or I did not search correctly.
Images attached to this comment
H1 CAL
jeffrey.kissel@LIGO.ORG - posted 16:12, Tuesday 21 March 2017 - last comment - 10:50, Tuesday 04 April 2017(34984)
2017-03-21 New Calibration Sensing Function Measurement Suite
J. Kissel

I've gathered our "bi-weekly" calibration suite of measurements to track the sensing function, ensure that all calibration is within reasonable uncertainty, and gather corroborating evidence for a time-dependent detuning spring frequency & Q. Trends of previous data have now confirmed time dependence -- see LHO aLOG 34967.

Evan is processing the data and will add this day's suite to the data collection.

We will begin analyzing the 7.93 Hz PCAL line that's been in place since the beginning of ER10, using a method outlined in T1700106, and check the time dependence in a much more continuous fashion. My suspicion is that the SRC detuning parameters will change on the same sort of time scale as the optical gain and cavity pole frequency.
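
The actual method is the one in T1700106; for orientation only, the core of any such line-tracking is a demodulation of DARM at the line frequency, roughly along the lines of the sketch below (times are placeholders, pick a locked stretch; in the real analysis the Pcal readback is demodulated the same way and the ratio is used):

    # Sketch (illustration only, not the T1700106 pipeline): amplitude of the 7.93 Hz
    # Pcal line in DARM_ERR, in 60-second chunks.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    f_line = 7.93                                   # Hz
    darm = TimeSeries.get('H1:LSC-DARM_IN1_DQ', 'Mar 21 2017 18:00', 'Mar 21 2017 19:00')

    fs = darm.sample_rate.value
    t = np.arange(len(darm)) / fs
    lo = np.exp(-2j * np.pi * f_line * t)           # digital local oscillator at the line

    stride = int(60 * fs)
    for i in range(0, len(darm) - stride + 1, stride):
        amp = 2 * np.abs(np.mean(darm.value[i:i+stride] * lo[i:i+stride]))
        print("t = %5d s: 7.93 Hz line amplitude = %.4g counts" % (t[i], amp))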

Note also that I've grabbed a much longer data set for the broad-band injection, as requested by Shivaraj -- from 22:50:15 UTC to 22:54:20 UTC, roughly 4 minutes.

The data have been saved and committed to:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs
    2017-03-21_H1DARM_OLGTF_4to1200Hz_25min.xml
    2017-03-21_H1_PCAL2DARMTF_4to1200Hz_8min.xml

    2017-03-06_H1_PCAL2DARMTF_BB_5to1000Hz_0p25BW_250avgs_5min.xml
The data have been exported with similar names to the same location in the repo.

For time-tracking, this suite took ~38 minutes from 2017-03-21, 22:18 - 22:56 UTC.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:37, Tuesday 21 March 2017 (34987)CAL, DetChar, GRD, ISC
J. Kissel

Because the calibration suite requires one to turn OFF all calibration lines before the measurements then back ON after, the time-dependent correction factor computation is spoiled temporarily. In the GDS pipeline, which uses FIR filters, it takes about 2 minutes for the calculation to return to normal functionality and produce sensible results (Good! this is what's used to correct h(t)). However, because the front-end's version of this calculation (NOT used in any corrections of any astrophysical or control room product) uses IIR filters, it remains polluted until one manually clears the history on all filter banks involved in the process. 

Normally, as the ISC_LOCK guardian runs through the lock acquisition sequence, it clears these filter banks history appropriately. However, the calibration suite configuration is still a manual action.

Moral of the story -- I'd forgotten to do this history clearing until about 1 hr into the current observation stretch. The history was cleared at approximately  2017-03-22 00:10 UTC.

Why am I aLOGging it? Because clearing this history does NOT take us out of observation mode. Rightfully so in this case, because again the front-end calculation is not yet used in any control system, or to correct any data stream; it is merely a monitor. I just aLOG it so that the oddball behavior shown at the tail end of today's UTC summary page has an explanation (both 20170321 and 20170322 show the effect).

To solve this problem in the future, I'm going to create a new state in the ISC_LOCK guardian that does the simple configuration switches necessary so no one forgets in the future.
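
For concreteness, such a state might look something like the sketch below (the channel and filter-bank names are placeholders, not the real list; writing 2 to a filter module's RSET field is what clears its history):

    # Sketch of a possible ISC_LOCK state; guardian provides 'ezca' in the module namespace.
    from guardian import GuardState

    CAL_LINE_GAINS = ['CAL-PCALY_PCALOSC1_OSC_SINGAIN']     # placeholder oscillator gain channels
    TDEP_FILTERS   = ['CAL-CS_TDEP_KAPPA_TST_REAL',         # placeholder TDEP filter banks
                      'CAL-CS_TDEP_KAPPA_C']

    class RESTORE_CAL_LINES(GuardState):
        def main(self):
            for chan in CAL_LINE_GAINS:       # turn the calibration lines back on
                ezca[chan] = 1.0
            for fb in TDEP_FILTERS:           # clear the polluted IIR filter histories
                ezca[fb + '_RSET'] = 2
            return True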
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 18:48, Tuesday 21 March 2017 (34991)CAL, ISC
J. Kissel

On the discussion of "Why Can't LLO Get the Same SNR / Coherence / Uncertainty below 10 Hz for These Sensing Function Measurements?"

It was re-affirmed by Joe on Monday's CAL check-in call that LLO cannot get SNR on 5-10 Hz data points. There are two things that have been investigated that could be the reason for this:
   (1) The L1 DARM Loop Gain is too large ("much" larger than H1) at these frequencies, which suppresses the PCAL and SUS actuator drive signals.
   (2) L1's choice of where to apply the optical plant's DC readout DARM offset and the avoiding-DAC-zero-crossing-glitching SUS offset means there are single- vs. double-precision problems in using the very traditional DARM_IN1/DARM_IN2 location for the open loop gain transfer function.
Both investigations are described in LHO aLOG 32061.

They've convinced me that (2) is a small effect, and the major reason for the loss in SNR is the loop gain. However, Evan G. has put together a critique of the DARM loop (see G1700316), which shows that the difference in suppression in the 5-10 Hz band is only about a factor of 4. I've attached a screen cap of page 4, which shows the suppression.

I attach a whole bunch of supporting material that shows relevant ASDs for both during the lowest frequency points of the DARM OLG TF and the PCAL 2 DARM TF:
     - DACRequest -- shows that a factor of 4 increase in drive strength would not saturate any stage of the ETMY suspension actuators
     - SNR_in_DARM_ERR -- shows the loop suppressed SNR of the excitation
     - SNR_in_DELTAL_EXT -- shows the calibrated displacement driven
     - SNR_in_OMC_DCPDs -- shows that a factor of 4 increase in drive strength would not saturate the OMC DCPDs 

 So ... is there something I'm missing?
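
As a numeric illustration of the suppression argument (toy numbers, not the real DARM model): an excitation injected into the loop shows up in DARM_ERR suppressed by 1/|1+G|, so for |G| >> 1 the SNR at fixed drive scales roughly as 1/|G|:

    # Toy numbers only: SNR cost of a factor-of-4 larger loop gain at fixed drive strength.
    G_H1 = 300.0                 # made-up H1 open-loop gain magnitude in the 5-10 Hz band
    G_L1 = 4.0 * G_H1            # "difference in suppression ... only about a factor of 4"

    supp_H1 = 1.0 / abs(1.0 + G_H1)
    supp_L1 = 1.0 / abs(1.0 + G_L1)
    print("Relative SNR (L1/H1) at fixed drive: %.2f" % (supp_L1 / supp_H1))   # ~0.25
    # ...so driving ~4x harder at LLO recovers the SNR, which the DAC/DCPD headroom
    # checks listed above suggest is available.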
Images attached to this comment
shivaraj.kandhasamy@LIGO.ORG - 09:39, Tuesday 28 March 2017 (35134)

Attached is a plot showing a comparison of PCal, CAL-DELTAL_EXTERNAL, and GDS for the broadband injection. As expected, GDS agrees better with the PCal injection signal. The code used to make the plot has been added to the SVN:

/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/PCAL/PcalBroadbandComparison20170321.m
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 28 March 2017 (35135)
Just to close out the question in Comment #2 above, LLO was indeed able to use LHO-like templates and drastically improve their SNR at low-frequency; check out LLO aLOG 32495. 

Hazaah!
jeffrey.kissel@LIGO.ORG - 10:50, Tuesday 04 April 2017 (35311)
J. Kissel, E. Goetz

The processed results for this data set are attached.

For context of how this measurement fits in with the rest of measurements taken during ER10 / O2, check out LHO aLOG 35163.
Non-image files attached to this comment