H1 GRD
sheila.dwyer@LIGO.ORG - posted 16:51, Thursday 01 December 2016 - last comment - 17:29, Thursday 01 December 2016(32087)
problem with LOCKLOSS_SHUTTER_CHECK, solution

TJ investigated why the LOCKLOSS_SHUTTER_CHECK guardian sometimes mistakenly identified locklosses, when there had not been any, while ISC_LOCK was in the DRMI_ON_POP state.

As a reminder, the only purpose of LOCKLOSS_SHUTTER_CHECK is to check that the shutter triggers after locklosses in which we had more than 25 kW of circulating power in the arms.  The lockloss checking for this guardian is independent of any other guardian.  

The problem TJ found was a consequence of the work described in 31980.  Since that work, when we switch the whitening gain on the transmon QPDs there is a large spike in the arm transmission channels, which the LOCKLOSS_SHUTTER_CHECK guardian recognizes as a lockloss (TJ will attach a plot).  

We edited the ISC_LOCK guardian to hold the output of the TR_A,B_NSUM filters before making the switch, and to turn the hold off after the switch is done.  We loaded this when we were knocked out of observe by TCS.  This is a simple change, but if operators have any trouble with DRMI_ON_POP tonight you can call TJ or me.  
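A minimal sketch of what this fix might look like in guardian code, assuming guardian's global ezca object and illustrative filter-bank names (the real ISC_LOCK code and channel names may differ):

import time

# Assumed names for the arm transmission NSUM filter banks:
TR_BANKS = ['LSC-TR_A_NSUM', 'LSC-TR_B_NSUM']

# Freeze the filter outputs so the whitening-gain spike never reaches
# the LOCKLOSS_SHUTTER_CHECK node ('HOLD' is the filter-module
# hold-output switch)
for bank in TR_BANKS:
    ezca.switch(bank, 'HOLD', 'ON')

# ... switch the transmon QPD whitening gain here ...

time.sleep(2)  # assumed settling time for the transient

# Release the holds once the switch is done
for bank in TR_BANKS:
    ezca.switch(bank, 'HOLD', 'OFF')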

Comments related to this report
thomas.shaffer@LIGO.ORG - 17:29, Thursday 01 December 2016 (32088)

Here are some plots with the TR_A,B_NSUM channels and the numeric states for ISC_LOCK. The Lockloss Shutter Check node would think that the power in the arms was above its 25 kW threshold, so it would move to its High Power state. That state checks whether the arm power drops below its low threshold (interpreting that as a lockloss) and then jumps to the Check Shutter state, which takes the last 10 sec of data and tests for a kick in the HAM6 GS13s. This test would fail, since there was no lockloss; we were not even at high power at the time.
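For illustration, the node logic described above might look like the following guardian-style sketch. The state names match the log, but the thresholds, channel name, and check_gs13_kick() helper are assumptions, not the real LOCKLOSS_SHUTTER_CHECK code (ezca and notify are provided by the guardian runtime):

from guardian import GuardState

HIGH_THRESHOLD = 25e3  # 25 kW arm power, from the log above
LOW_THRESHOLD = 1e3    # assumed low-power threshold for the lockloss test

class HIGH_POWER(GuardState):
    def run(self):
        # a downward glitch in TR_A/B_NSUM looks like a lockloss
        if ezca['LSC-TR_A_NSUM_OUTPUT'] < LOW_THRESHOLD:  # assumed channel
            return 'CHECK_SHUTTER'
        # returning nothing keeps this state running

class CHECK_SHUTTER(GuardState):
    def main(self):
        # grab the last 10 sec of data and test for a kick in the HAM6
        # GS13s; check_gs13_kick() is a hypothetical stand-in for that test
        if not check_gs13_kick(duration=10):
            notify('shutter did not fire after lockloss!')
        return True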

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 16:18, Thursday 01 December 2016 - last comment - 18:11, Thursday 01 December 2016(32069)
Ops Day Shift Summary

TITLE: 12/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Jim
SHIFT SUMMARY:

Main issues for much of the morning:

  1. ETMy 4735Hz Violin Mode Harmonic damping!  Upgraded Filter Banks for ETMy & damped out the mode(!)
  2. HAM3 ISI Saturation & Trips.  Hugh noted and addressed these, & they appear to be better so far.

LOG:

Locking Notes:

After hand-off this morning, we held at VIOLIN_MODE_DAMPING.  Kiwamu came in to take a look at H1 & wanted to note a few items he checked.

 

Comments related to this report
jenne.driggers@LIGO.ORG - 17:08, Thursday 01 December 2016 (32089)

A note on the OMC whitening:

The 4.7 kHz mode was super rung up; this was causing the saturations, and a giant comb of upconversion around the line.  I turned off the stage of whitening so that we would have a hope of damping anything, which is nearly impossible to do while saturations are happening everywhere.  Anyhow, hopefully this won't be a problem anymore, since we have found filters that work well for this mode, but any operator can use this trick to save a lock if a mode is super rung up and needs serious damping.

To remove a stage of whitening, I opened the "all" screen of the OMC_LOCK guardian, and selected RemoveWhiteningStage.  Once it starts that state, you can re-select ReadyForHandoff (the nominal state) and it'll return there when it is done.  You should see 2 SDF diffs in the OMC, which ensures that you aren't going to Observe with this weird state - it's just for use while damping seriously bad modes.

sheila.dwyer@LIGO.ORG - 18:11, Thursday 01 December 2016 (32091)Lockloss

Young-min and I looked into the 22:08 lockloss that is still unexplained, and attempted to use the BLRMS tool.

The first suspensions to saturate are the ETMY ESD channels, which saturate at almost exactly the lockloss time.  There isn't much in the ASC until after the lockloss, and other than DARM the other LSC loops don't seem to be having trouble.  

The first thing that we see happening is a fast glitch in the DCPDs.  We don't see anything in the CARM signals, OMC PZTs, or ISS, but there is a similar glitch in AS_C_SUM, AS_A, and AS_B.

It is hard to imagine optics moving fast enough to cause this lockloss, but I am not sure what would have caused it.  

Images attached to this comment
H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 15:53, Thursday 01 December 2016 - last comment - 09:53, Friday 02 December 2016(32084)
Earthquake Report: 6.3, Peru, H1 Lockloss

USGS Link

Comments related to this report
sheila.dwyer@LIGO.ORG - 17:07, Thursday 01 December 2016 (32086)GRD, Lockloss, OpsInfo

Corey, Sheila, Jim W

TerraMon and LLO warned Corey that this EQ was coming, with a predicted R-wave velocity of 4.8 um/second (it showed up in our EQ-band BLRMS, peaking at about 1 um/second RMS at about the time predicted). Our microseism BLRMS is around 0.3-0.4 um/second right now.

Since Corey had a warning, he consulted with Jim W, who suggested trying BLEND_QUIET_250_SC_EQ for both end station ISIs (one at a time).  The attached screenshot shows the transition from BLEND_QUIET_250_SC_EQ back to our normal windy configuration, BLEND_QUIET_250_SC_BRS, which is much quieter at 50-70 mHz.  

Jim explains that this sensor correction has a notch at around 50 mHz (he will attach a plot).  It reduces the amount of isolation that we get at the microseism, which was fine when he first tested it during the summer, when the microseism was very low.  

If an EQ moves the whole site in common, we can lock all the chambers to the ground at EQ frequencies to reduce the common motion.  Our problem this time was probably that we switched only the end stations without changing the corner. 

For now, the recommended operator action during earthquakes is:

If the IFO is locked, don't do anything.  We want to collect some data about what size EQ we can ride out with our normal WINDY configuration.

If the IFO unlocks, and the earthquake is going to be large enough to trip ISIs (several um/sec), switch the ISI configuration node to LARGE_EQ_NOBRSXY.  This just prevents ISIs from tripping.

Once the BLRMS are back to around 1 um/sec, you can set the SEI_CONF back to WINDY and ask ISC_LOCK to try LOCKING_ARMS_GREEN. If the arms stay locked for a minute or so, you can try relocking the IFO. 

Images attached to this comment
michael.coughlin@LIGO.ORG - 09:53, Friday 02 December 2016 (32102)
I took a quick look at Seismon performance on the MIT test setup. The internal notice was written a few hundred seconds after the earthquake.

Internal:
File: /Seismon/Seismon/eventfiles/private/pt16336050-1164667247.xml
EQ GPS: 1164667247.0
Written GPS: 1164667525.0

H1 (P): 1164667949.1
L1 (P): 1164667773.9

We beat the p-wave arrival by about 200s at LLO and 400s at LHO.
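Those margins follow directly from the timestamps above: the notice was written at GPS 1164667525.0, so the lead on the L1 P-wave is 1164667773.9 - 1164667525.0 ≈ 249 s, and on the H1 P-wave it is 1164667949.1 - 1164667525.0 ≈ 424 s.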

Arrivals below:

-bash-4.2$ /usr/bin/python seismon_info -p /Seismon/Seismon/seismon/input/seismon_params_earthquakesInfo.txt -s 1164667243 -e 1164670843 --eventfilesType private --doEarthquakes --doEPICs
/Seismon/Seismon/all/earthquakes_info/1164667243-1164670843
1164667246.0 6.3 1164667949.1 1164667963.2 1164671462.6 1164669655.5 1164668932.7 4.52228e-06 1164667900 1164671500 -15.3 -70.5 8.433279e+06 H1
1164667246.0 6.3 1164667773.9 1164667787.6 1164670002.2 1164668821.0 1164668348.5 1.12682e-05 1164667700 1164670100 -15.3 -70.5 5.512348e+06 L1
1164667246.0 6.3 1164668050.7 1164668064.9 1164672594.3 1164670302.2 1164669385.3 6.73904e-06 1164668000 1164672600 -15.3 -70.5 1.069658e+07 G1
1164667246.0 6.3 1164668041.4 1164668055.5 1164672479.8 1164670236.7 1164669339.5 3.22116e-06 1164668000 1164672500 -15.3 -70.5 1.046759e+07 V1
1164667246.0 6.3 1164667831.5 1164667845.3 1164670438.5 1164669070.3 1164668523.0 6.99946e-06 1164667800 1164670500 -15.3 -70.5 6.385045e+06 MIT
1164667243.2 6.3 1164667948.9 1164667953.5 1164671451.9 1164669648.2 1164668926.7 4.74116e-06 1164667900 1164671500 -15.3 -70.8 8.417411e+06 H1
1164667243.2 6.3 1164667773.6 1164667778.0 1164669993.8 1164668815.0 1164668343.5 1.15920e-05 1164667700 1164670000 -15.3 -70.8 5.501199e+06 L1
1164667243.2 6.3 1164668052.2 1164668056.8 1164672601.7 1164670305.2 1164669386.6 7.10833e-06 1164668000 1164672700 -15.3 -70.8 1.071690e+07 G1
1164667243.2 6.3 1164668043.0 1164668047.6 1164672488.9 1164670240.7 1164669341.5 3.35518e-06 1164668000 1164672500 -15.3 -70.8 1.049125e+07 V1
1164667243.2 6.3 1164667832.1 1164667836.6 1164670436.3 1164669067.8 1164668520.5 7.31460e-06 1164667800 1164670500 -15.3 -70.8 6.386137e+06 MIT
1164667247.0 6.2 1164667941.5 1164667978.2 1164671455.2 1164669651.7 1164668930.3 2.75907e-06 1164667900 1164671500 -15.4 -71.0 8.416356e+06 H1
1164667247.0 6.2 1164667767.1 1164667802.5 1164669998.4 1164668819.2 1164668347.6 7.79549e-06 1164667700 1164670000 -15.4 -71.0 5.502860e+06 L1
1164667247.0 6.2 1164668045.1 1164668082.4 1164672612.9 1164670313.2 1164669393.4 3.86408e-06 1164668000 1164672700 -15.4 -71.0 1.073178e+07 G1
1164667247.0 6.2 1164668035.9 1164668073.2 1164672500.5 1164670249.0 1164669348.4 1.94756e-06 1164668000 1164672600 -15.4 -71.0 1.050694e+07 V1
1164667247.0 6.2 1164667825.7 1164667861.5 1164670443.9 1164669073.8 1164668525.8 4.24775e-06 1164667800 1164670500 -15.4 -71.0 6.393872e+06 MIT


H1 SUS (DetChar, ISC, Lockloss)
jeffrey.kissel@LIGO.ORG - posted 14:53, Thursday 01 December 2016 (32081)
Successful Damping of 4735.09 Hz Violin Mode -- Some Filter Rearrangement
J. Kissel, J. Driggers, C. Gray 

We've successfully damped the 4735.09 [Hz] violin mode harmonic. The key: the +/- 60 deg phase filters in the ETMY_L2_DAMP_MODE9 bank, which we thought were moving the damping control signal phase around the unit circle, were actually not adjusting the phase at this 4.7 [kHz] mode, since they were tuned for the 2nd harmonics at 1 [kHz] (thanks to Jenne, who dug into foton to check that these filters made sense). This left us with essentially only 0 [deg] (+ gain) or 180 [deg] (- gain) as options, and neither worked well. 

After rearranging some filters between the MODE9 and MODE10 filter banks, Jenne was able to create new phase-adjustment filters with 30 [deg] increments for 5 [kHz]. +60 [deg] with a positive gain worked well for the first two orders of magnitude of reduction, but we eventually needed to nudge by -30 [deg] once other modes around these frequencies began to be resolved, confusing the error signal. Thus, the final settings that we think will work well:
MODE10: 
gain = +0.02   (we were able to use up to +0.1 while we were trying hard)
FM4       ("100dB")        gain(100,"dB")
FM9       ("+30deg5k")     zpk([0],[28.5238+i*4744.41;28.5238-i*4744.41],1,"n")gain(2.07488e-06)
FM10      ("4735")         gain(100,"dB")*butter("BandPass",4,4734.5,4735.5)gain(120,"dB")
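When building phase-adjustment filters like these, it can help to verify how much phase the zpk design actually delivers at the mode frequency. A sketch using scipy, assuming foton's convention of writing stable roots in Hz with positive real part (the overall gain differs from foton's "n" normalization, but the phase is unaffected):

import numpy as np
from scipy import signal

f0 = 4735.09  # mode frequency [Hz]
# foton design: zpk([0],[28.5238+i*4744.41;28.5238-i*4744.41],1,"n")
zeros_s = 2*np.pi*np.array([0.0])
poles_s = 2*np.pi*np.array([-28.5238 + 1j*4744.41, -28.5238 - 1j*4744.41])

w = 2*np.pi*f0  # evaluate at the mode's angular frequency
_, h = signal.freqs_zpk(zeros_s, poles_s, 1.0, worN=[w])
print('phase at %.2f Hz: %+.1f deg' % (f0, np.degrees(np.angle(h[0]))))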

I think all the success I claimed yesterday () was merely from turning on the notch filter in the DARM path and waiting.

I've updated the LHO Violin Mode Wiki, and I've also updated the ISC_LOCK guardian code to ensure this continues to get turned on (specifically I edited the gen_VIOLIN_MODE_DAMPING functional state in the ISC_GEN_STATES.py subfunction that creates VIOLIN_MODE_DAMPING_1 and VIOLIN_MODE_DAMPING_2 in the acquisition sequence).

Unclear if this is related to Cheryl's reported problems with the fundamentals overnight (LHO aLOG 32059). Note that neither the Violin Mode Wiki nor the ISC_LOCK guardian has been updated with her changes.
Images attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 14:14, Thursday 01 December 2016 - last comment - 15:43, Thursday 01 December 2016(32079)
WHAM3 ISI Corner2 CPS Glitches break IFO lock--Sat Rack Power Cycled and Satellite Cards Reseated

See the first attached plot, showing the ISC lock state (Ch16) dropping as the HAM3 ISI WD (Ch1) trips to State3 (damping only).

All the CPSs show a twitch and shift as the DC-coupled ISO loops disengage, but note the scale of the glitch on the H2 and V2 CPSs.  What is curious to me is how the C2 sensors clearly exceed the saturation level (20000) but the SAT_COUNT (Ch2) does not count them.  Ahhh, I think that any 'trip' of the watchdog, even if only to damping, will clear the saturations.  I think we should change this in the code, although I think someone coded it explicitly this way (BTL/HRAP).  I think this hinders us, though, and maybe we can rework it.

I decided to power cycle the CPSs and, while they were off, reseated the corner2 gauge boards in the satellite rack.  Did this several times.  The ISI re-isolated without difficulty.

Attached as well is another view, from a few hours ago, of this glitching dropping the IFO out of some state better than zero.  Prior to the ISI trip there is another glitch on the corner2 CPSs that was not enough to rile things up.

Finally, a 7 hour trend capturing all the recent occurrences of this.  Prior to this, 22 Nov saw a CPS saturation and trip, but that was a Tuesday.

Images attached to this report
Comments related to this report
brian.lantz@LIGO.ORG - 15:43, Thursday 01 December 2016 (32083)
Hugh - 
Recall that we added the channel
H1:ISI-HAM3_WD_CPS_SAT_SINCE_RESTART
which can probably do what you want. It will have some big number in it, but it should only count up; the delta will be what you want. 
I think we added this just for you, so merry christmas!
-Brian
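A sketch of using that channel, reading it through pyepics (the channel name is from Brian's comment; the wait time is arbitrary):

import time
from epics import caget

CHAN = 'H1:ISI-HAM3_WD_CPS_SAT_SINCE_RESTART'

baseline = caget(CHAN)  # some big number; only deltas are meaningful
time.sleep(600)         # watch for 10 minutes
delta = caget(CHAN) - baseline
print('new CPS saturations in the last 10 min: %d' % delta)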
H1 SEI (PEM)
edmond.merilh@LIGO.ORG - posted 14:10, Thursday 01 December 2016 (32071)
H1 ISI CPS Sensor Noise Spectra Check - Weekly FAMIS#6874

All BSC spectra look pretty normal.  The differences in noise below 20 Hz are due to the platforms behaving differently.

HAM spectra look fine above 20 Hz, except for some inconsequential bumps between 20 and 40 Hz in HAMs 2 & 5.

Images attached to this report
H1 SUS
jenne.driggers@LIGO.ORG - posted 14:05, Thursday 01 December 2016 (32080)
Selected violin mode gains not-monitored

While trying to get to Observe, JeffK noticed that some of the violin mode gains change from lock to lock.  We looked in the guardian, and the violin damping mode generator state uses a function called adjustDamperGain(), which lowers the damping gain of selected modes if the peak is too high. 

For all of the modes which are currently under this adjuster's control, I have not-monitored the gain channel for that mode.  The attached screenshot is of the guardian code that is adjusting the gain, and serves as a list of all the modes whose gains I've not-monitored. 

If we comment some of these adjusters out, or add more, we'll need to re-visit the list of not-monitored channels.
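For illustration, an adjuster of this kind might look like the sketch below; the real adjustDamperGain() lives in the guardian code, and the threshold, reduction factor, channel pattern, and get_mode_peak_height() helper here are made up (ezca is guardian's runtime channel-access object):

def adjustDamperGain(optic, mode, threshold=1e-18, factor=0.5):
    # Reduce the damping gain of a mode whose peak is too high.
    peak = get_mode_peak_height(optic, mode)  # hypothetical monitor lookup
    gain_chan = 'SUS-%s_L2_DAMP_MODE%d_GAIN' % (optic, mode)
    if peak > threshold:
        ezca[gain_chan] = factor * ezca[gain_chan]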

Images attached to this report
H1 General
corey.gray@LIGO.ORG - posted 13:38, Thursday 01 December 2016 (32078)
H1 Status

This morning we continued to see issues with a rung-up 4735 Hz Violin Mode Harmonic.  We managed to squash it after a few hours of upgrading filter banks (Jenne) and getting schooled on technique (Jenne, Terra, Jeff); the pesky mode finally went away and we made it to NLN.....

....But then HAM3 ISI tripped as we were about ready to join L1 in OBSERVING.

With H1 down, Hugh went to address HAM3.

On our way back up to NLN & hopefully OBSERVING soon!

H1 OpsInfo (CSWG, ISC, OpsInfo, SUS)
jeffrey.kissel@LIGO.ORG - posted 12:38, Thursday 01 December 2016 (32077)
Resonant Mode Damping Phase Scratch Pad
J. Kissel

Since we've spent so much time over the past few days searching for the correct damping phases on violin modes, I figured it was time we upgraded our unit-circle-sticky-note system. As such, I've created a one-sheet scratch pad for damping mode phases that one can use to keep oneself sane while searching for just the right phase on up to 4 resonant modes.

The official place for this diagram, in case it ever needs changing, is G1602364, but the source also lives in the SUS SVN repo,
/ligo/svncommon/SusSVN/sus/trunk/Common/Documents/G1602364, and I attach it here.

This has been printed out, laminated, and put next to the operator's work station with an appropriate dry erase marker.

Happy hunting!
Non-image files attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 12:02, Thursday 01 December 2016 (32076)
WHAM3 ISI Corner2 CPSs are glitching, occasionally

I don't know the reason yet, as I don't see the control loops responding before the actuators.  See SEI log 1083 for some details.

The first attached plot shows the glitches on H2 and V2 and the saturation counts totalizing.  The other CPSs look fine.

This has not caused us to lose lock every time.  The second plot spans 6 hours and shows the ISI tripping twice--lots of things happen here, so you can't say much from such a quick, coarse look.  However, at the two places where the CPS SAT_COUNT totals up and we don't trip, saturations are really only seen on H2 & V2.

Looking back 30 days suggests to me this might have happened a couple of times, but the platform tripped, so without diving deeper I can't say for sure.

I'd say this warrants a satellite rack power cycle and a few iterations of board reseating to clean up contacts at the first opportune time.

Images attached to this report
H1 DetChar (DetChar)
greg.ogin@LIGO.ORG - posted 11:10, Thursday 01 December 2016 (32073)
DQ Shift Mon 28 – Wed 30 November

Report on DetChar DQ glitch shift for Monday Nov 28 - Wednesday Nov 30

 

Looked at 3 major sources of glitches during this shift:

 

Full report can be found at https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20161128


H1 CDS (GRD)
david.barker@LIGO.ORG - posted 10:40, Thursday 01 December 2016 - last comment - 11:14, Thursday 01 December 2016(32072)
h1guardian0 memory upgrade

WP6366 Increase memory in Guardian machine

Dave, TJ, Carlos:

At 08:47 PDT h1guardian0 was powered back up after having its RAM size increased from 12GB to 48GB. TJ reports all nodes are operational.

Comments related to this report
thomas.shaffer@LIGO.ORG - 11:14, Thursday 01 December 2016 (32074)

I spoke a bit too soon: the IFO node froze up after the reboot. On first inspection it looked OK, but later I noticed that it wasn't reporting which nodes it was waiting for. The last log message before it froze was "2016-12-01_16:51:04.396720Z IFO W: initialized", and it had negative SPM diffs.

A quick node restart fixed it.

H1 SUS (DetChar)
jeffrey.kissel@LIGO.ORG - posted 19:20, Wednesday 30 November 2016 - last comment - 09:34, Thursday 01 December 2016(32050)
Campaign to Reduce 2nd Harmonics (~1000 Hz) of QUAD Violin Modes
J. Kissel, E. Merilh, J. Warner, T. Hardwick, J. Driggers, S. Dwyer

Prompted by DetChar worries about glitching around the harmonics of violin modes, Ed, Jim, and I went on an epic campaign to damp the ~1kHz, 2nd harmonic violin modes. These are tricky because not all modes had been successfully damped before, and one has to alternate filters in two filter banks to hit all 8 modes for a given suspension. 

We've either used or updated Nutsinee's violin mode table, with the notable newly damped entries being 
994.8973    ITMY     -60deg, +gain      MODE9: FM2 (-60deg), FM4 (100dB), FM9 (994.87) 
997.7169    ITMY       0deg, -400gain   MODE9: FM4 (100dB), FM6(997.717)                 VERY Slow
997.8868    ITMY       0deg, -200gain   MODE10: FM4 (100dB), FM6(997.89) 

Also, we inadvertently rang up modes around 4735 Hz, so we spent a LONG time trying to fight that. We eventually won by temporarily turning on the 4735 Hz notch in FM3 of the LSC-DARM2 filter bank and waiting a few hours. I had successfully damped the ETMY mode at 4735.09 Hz by moving the band-pass filter in H1:SUS-ETMY_L2_DAMP_MODE9's FM10 from centered around 4735.5 Hz to centered around exactly 4735 Hz, and using positive gain with zero phase. However, there still remains a mode rung up at 4735.4 Hz from an as-yet-unidentified test mass, and we didn't want to spend the time exploring. These 4.7 kHz lines have only appeared once before, in late October (LHO aLOG 31020).

Attached is a before-vs-after ASD of DELTAL_EXTERNAL. I question the calibration, but what's important is the difference between the two traces. Pretty much all modes in this frequency band have been reduced by 2 or 3 orders of magnitude -- better than O1 levels. Hopefully these stick through the next few lock losses and acquisitions.

Thanks again to the above mentioned authors for all their help!
Images attached to this report
Comments related to this report
laura.nuttall@LIGO.ORG - 06:23, Thursday 01 December 2016 (32058)

Thanks to all for your efforts! You can really see the dramatic decrease in the glitch rate around 21:00 UTC in the attached plot. The glitch rate in the lock after you did this work (which ended around 5 UTC today) looks much more typical of what we know the glitch rate at LHO to be.

Images attached to this comment
joshua.smith@LIGO.ORG - 07:40, Thursday 01 December 2016 (32062)DetChar

Comparing yesterday (before damping) to today, the high-frequency effect of the damping seems to be the removal of glitchy forests around 2, 3, 4, and 5 kHz (base frequency 2007.9 Hz, but wide). Great! Not sure of the mechanism to get these frequencies yet; it seems to involve more than simply doubling the modes you damped. As noted above, the 4735 Hz line is pretty large.  

Images attached to this comment
andrew.lundgren@LIGO.ORG - 09:34, Thursday 01 December 2016 (32070)DetChar, ISC
Attached is a spectrogram showing how the 2000 and 3000 Hz bands go away as the 1000 Hz violin modes are damped. You can also see that the bursts in these bands correspond with places where the spectrogram is 'bright' at 1000 Hz. Having two violin modes very close at 1000 Hz is like having one mode at 2000 Hz with a slow amplitude modulation. Probably that is getting turned into bursts in DARM by some non-linear process, modulated by that effective amplitude variation.

The 1080 Hz band is bursting on its own time scale, and does not seem to be related.
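Andy's picture follows from a little trigonometry. Two equal-amplitude modes at nearby frequencies f1 ≈ f2 ≈ 1000 Hz sum to a beat:

cos(2 pi f1 t) + cos(2 pi f2 t) = 2 cos(pi (f1 - f2) t) cos(pi (f1 + f2) t),

i.e. a ~1000 Hz carrier whose envelope varies at (f1 - f2)/2. Squaring this (the simplest non-linear process) produces components at 2 f1, f1 + f2, and 2 f2 -- a cluster near 2000 Hz spaced by f1 - f2, whose combined envelope therefore beats at f1 - f2. That gives bursts in the 2000 Hz band timed with the bright moments at 1000 Hz, as seen in the spectrogram.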
Images attached to this comment
H1 CDS (GRD)
david.barker@LIGO.ORG - posted 17:35, Wednesday 30 November 2016 - last comment - 11:19, Thursday 01 December 2016(32048)
h1guardian0 memory usage rate has increased, we'll install more memory at the next convenient time

The free memory size on the guardian machine is about 4GB. At the current rate of usage we predict a reboot is needed before next Tuesday. At the next opportune time, we will increase the memory size from 12GB to 48GB and perhaps schedule regular reboots on Tuesdays. 

Plot of available memory for the month of November is attached (Y-axis mis-labelled, actually MB).

Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 06:46, Thursday 01 December 2016 (32060)CDS, GRD
We did a similar analysis at LLO (see LLO aLOG 30004). We do see memory usage increasing over time from the guardian process.
michael.thomas@LIGO.ORG - 06:56, Thursday 01 December 2016 (32061)
Does this LHO memory plot include cached memory?  It would be interesting to see the amount of cache memory used along with the free memory.
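For reference, a minimal sketch of separating truly free memory from reclaimable cache by parsing the standard Linux /proc/meminfo fields:

def meminfo_kb():
    # /proc/meminfo lines look like 'MemFree:  4012345 kB'
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, rest = line.split(':', 1)
            info[key] = int(rest.split()[0])  # values in kB
    return info

m = meminfo_kb()
print('MemFree:      %d MB' % (m['MemFree'] // 1024))
print('Cached:       %d MB' % (m['Cached'] // 1024))
print('Free + cache: %d MB' % ((m['MemFree'] + m['Cached']) // 1024))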
jameson.rollins@LIGO.ORG - 08:44, Thursday 01 December 2016 (32065)

The character of memory usage on the LLO guardian machine is quite different from what Dave has posted: the LLO usage seems to plateau, while the LHO plots show a very steady increase. The LHO plot looks more disturbing, as if there's a memory leak in something. Memory usage has been fairly flat when we've looked in the past, so I'm surprised to see such a high rate of increase.

I also note that something changed two Tuesdays ago, which is what we also notice at LLO.  Was there an OS upgrade on h1guardian0 on Nov. 14?

keith.thorne@LIGO.ORG - 11:19, Thursday 01 December 2016 (32075)
The LLO guardian script machine was rebooted 16 days ago, on Nov 15 (typically we reboot after doing an 'aptitude safe-upgrade').  The other dips are likely due to Guardian restarts for DAQ work, etc.
H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 04:06, Wednesday 23 November 2016 - last comment - 15:12, Thursday 01 December 2016(31767)
Guardian INJ_TRANS node in error

I hit load and cleared the error. But I have no idea about the state of the injection since it didn't happen the first time. Looks like it will try again at GPS 1163938117?

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 04:12, Wednesday 23 November 2016 (31768)

And looks like another error.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 04:15, Wednesday 23 November 2016 (31769)

I hit init. Seems like it's going to try the next one at 1163938617.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 04:19, Wednesday 23 November 2016 (31770)

Alright I'm seeing some malicious noise in DARM now. I think the injection finally came through.

Looks like only the first two injections (1169397617 and 1163938117) didn't happen.


eric.thrane@LIGO.ORG - 15:12, Thursday 01 December 2016 (32082)
Looking at the error message, it appears that the injection "failed" as opposed to being skipped:

https://dcc.ligo.org/DocDB/0113/T1400349/013/hwinjections.pdf (p11)
• CAL-PINJX TINJ OUTCOME. Set as follows:
– 1 = success
– -1 = skipped because interferometer is not operating normally
– -2 = skipped due to GRB alert
– -3 = skipped due to operator override (pause OR override)
– -4 = injection failed
– -5 = skipped due to detector not being locked
– -6 = skipped due to intent bit off (but detector locked)

In previous experience, injections have failed when AWG has been unable to access a test point. Sometimes, this error is fixed by rebooting the awg computer. I'm not sure why it went away this time.
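For scripted checks, the outcome codes above map naturally onto a lookup table. A sketch using pyepics; the exact EPICS channel name is an assumption based on the CAL-PINJX TINJ OUTCOME record quoted above:

from epics import caget

TINJ_OUTCOME = {
     1: 'success',
    -1: 'skipped: interferometer not operating normally',
    -2: 'skipped: GRB alert',
    -3: 'skipped: operator override (pause OR override)',
    -4: 'injection failed',
    -5: 'skipped: detector not locked',
    -6: 'skipped: intent bit off (but detector locked)',
}

outcome = int(caget('H1:CAL-PINJX_TINJ_OUTCOME'))  # assumed channel name
print(TINJ_OUTCOME.get(outcome, 'unknown code %d' % outcome))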
H1 CAL (DetChar)
jeffrey.kissel@LIGO.ORG - posted 16:11, Tuesday 22 November 2016 - last comment - 09:36, Thursday 01 December 2016(31738)
PCALX Roaming Calibration Line Frequency Changed from 4801.3 to 5001.3 Hz
J. Kissel for S. Karki

I've moved the roaming calibration line to the highest frequency we intend to use, and this is also the last super-long duration we need. We may run through the lower frequency points again, given that (a) they need much less data, and (b) those data points were taken at various input powers, which will likely confuse/complicate the analysis. Below is the current schedule status.

Current Schedule Status:
Frequency    Planned Amplitude        Planned Duration      Actual Amplitude    Start Time                 Stop Time                    Achieved Duration
(Hz)         (ct)                     (hh:mm)                   (ct)               (UTC)                    (UTC)                         (hh:mm)
---------------------------------------------------------------------------------------------------------------------------------------------------------
1001.3       35k                      02:00                   39322.0           Nov 11 2016 21:37:50 UTC    Nov 12 2016 03:28:21 UTC      ~several hours @ 25 W
1501.3       35k                      02:00                   39322.0           Oct 24 2016 15:26:57 UTC    Oct 31 2016 15:44:29 UTC      ~week @ 25 W
2001.3       35k                      02:00                   39322.0           Oct 17 2016 21:22:03 UTC    Oct 24 2016 15:26:57 UTC      several days (at both 50W and 25 W)
2501.3       35k                      05:00                   39322.0           Oct 12 2016 03:20:41 UTC    Oct 17 2016 21:22:03 UTC      days     @ 50 W
3001.3       35k                      05:00                   39322.0           Oct 06 2016 18:39:26 UTC    Oct 12 2016 03:20:41 UTC      days     @ 50 W
3501.3       35k                      05:00                   39322.0           Jul 06 2016 18:56:13 UTC    Oct 06 2016 18:39:26 UTC      months   @ 50 W
4001.3       40k                      10:00                   39322.0           Nov 12 2016 03:28:21 UTC    Nov 16 2016 22:17:29 UTC      days     @ 30 W (see LHO aLOG 31546 for caveats)
4301.3       40k                      10:00                   39322.0           Nov 16 2016 22:17:29 UTC    Nov 18 2016 17:08:49 UTC      days     @ 30 W          
4501.3       40k                      10:00                   39322.0           Nov 18 2016 17:08:49 UTC    Nov 20 2016 16:54:32 UTC      days     @ 30 W (see LHO aLOG 31610 for caveats)   
4801.3       40k                      10:00                   39222.0           Nov 20 2016 16:54:32 UTC    Nov 22 2016 23:56:06 UTC      days     @ 30 W
5001.3       40k                      10:00                   39222.0           Nov 22 2016 23:56:06 UTC
Images attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 19:26, Tuesday 22 November 2016 (31752)
Before the HW injection test, we turned off this line (before entering observation intent). I turned it back on at Nov 23 2016 03:25 UTC, but this did not drop us out of observation intent.
evan.goetz@LIGO.ORG - 20:13, Tuesday 22 November 2016 (31755)
This line was again turned off at 4:12 Nov 23 2016 UTC so that a DetChar safety study can be made late tonight.
sudarshan.karki@LIGO.ORG - 09:36, Thursday 01 December 2016 (32068)

The analysis of the sensing function at frequencies above 1 kHz, obtained from the roaming lines listed in the alog above, is attached.  These lines were run at different times than the low-frequency sweep (below 1 kHz) taken on Nov 18 and included in this plot, so the lines above 1 kHz need to be compensated for the time-varying parameters to make an accurate comparison; that has not been done for this plot.

One way of compensating for the changes is to apply the kappas calculated using the SLM Tool (or GDS). The other is to compare each individual line against the 1083.7 Hz line (which is always on) at time t (when each line is running) and at time t0 (the time of the low-frequency sweep):

Sensing Function [ct/m] = (DARM_ERR/TxPD)|(f = f_line, t) * (TxPD/DARM_ERR)|(f = 1083.7 Hz, t) * (DARM_ERR/TxPD)|(f = 1083.7 Hz, t0)

Both methods are essentially the same, but I will use the second method; a plot with the correct compensation applied is to come soon.
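A sketch of the second method as code, assuming a hypothetical helper ratio(num, den, f, t) that returns the demodulated transfer-function ratio num/den at frequency f and time t:

def ratio(num, den, f, t):
    # Hypothetical helper: in practice this would come from demodulating
    # the DARM_ERR and TxPD data at frequency f around time t.
    raise NotImplementedError

def compensated_sensing(f_line, t, t0):
    # Sensing function at f_line, referred to the time t0 of the
    # low-frequency sweep, per the relation above.
    return (ratio('DARM_ERR', 'TxPD', f_line, t)
            * ratio('TxPD', 'DARM_ERR', 1083.7, t)
            * ratio('DARM_ERR', 'TxPD', 1083.7, t0))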

Non-image files attached to this comment