H1 SUS (SEI)
iain.dorrington@LIGO.ORG - posted 14:36, Friday 07 April 2017 - last comment - 14:37, Friday 07 April 2017(35395)
Suspension watchdog threshold values
I have been investigating the seismic system watchdogs. We want to know whether the threshold amount of motion at which the watchdogs shut down the system's controls can be relaxed. As a first step, I found the threshold values for each of the suspension watchdogs. The channels and their corresponding thresholds are listed below (a sketch of reading these values programmatically follows the list).
H1:SUS-ITMX_M0_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-ITMX_R0_WD_OSEMAC_RMS_MAX 10000.0
H1:SUS-ITMX_L2_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-ITMX_L1_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-TMSY_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-SRM_M1_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-SRM_M3_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-SRM_M2_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-SR3_M1_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-SR3_M3_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-SR3_M2_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-PR2_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-PR2_M3_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-PR2_M2_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-ETMX_M0_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-ETMX_R0_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-ETMX_L2_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-ETMX_L1_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-OMC_M1_WD_OSEMAC_RMS_MAX 12000.0
H1:SUS-IM2_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-OM3_M1_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-ITMY_M0_WD_OSEMAC_RMS_MAX 13000.0
H1:SUS-ITMY_R0_WD_OSEMAC_RMS_MAX 13000.0
H1:SUS-ITMY_L2_WD_OSEMAC_RMS_MAX 13000.0
H1:SUS-ITMY_L1_WD_OSEMAC_RMS_MAX 13000.0
H1:SUS-BS_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-BS_M2_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-IM1_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-MC3_M1_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-MC3_M3_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-MC3_M2_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-RM1_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-IM4_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-OM1_M1_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-MC1_M1_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-MC1_M3_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-MC1_M2_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-ETMY_M0_WD_OSEMAC_RMS_MAX 16000.0
H1:SUS-ETMY_R0_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-ETMY_L2_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-ETMY_L1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-TMSX_M1_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-SR2_M1_WD_OSEMAC_RMS_MAX 11000.0
H1:SUS-SR2_M3_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-SR2_M2_WD_OSEMAC_RMS_MAX 8000.0
H1:SUS-OM2_M1_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-PR3_M1_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-PR3_M3_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-PR3_M2_WD_OSEMAC_RMS_MAX 15000.0
H1:SUS-MC2_M1_WD_OSEMAC_RMS_MAX 18000.0
H1:SUS-MC2_M3_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-MC2_M2_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-RM2_M1_WD_OSEMAC_RMS_MAX 25000.0
H1:SUS-PRM_M1_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-PRM_M3_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-PRM_M2_WD_OSEMAC_RMS_MAX 80000.0
H1:SUS-IM3_M1_WD_OSEMAC_RMS_MAX 8000.0
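For reference, a minimal sketch of how a list like this can be collected with pyepics over EPICS channel access, assuming a machine with access to the H1 CDS network (the suspension/stage dictionary below is abbreviated, not the full set above):

from epics import caget

SUSPENSIONS = {
    'ITMX': ['M0', 'R0', 'L1', 'L2'],
    'SRM': ['M1', 'M2', 'M3'],
    'BS': ['M1', 'M2'],
    # ... remaining suspensions and stages as in the list above
}

for sus, stages in sorted(SUSPENSIONS.items()):
    for stage in stages:
        chan = 'H1:SUS-%s_%s_WD_OSEMAC_RMS_MAX' % (sus, stage)
        value = caget(chan)  # returns None if the channel is unreachable
        print('%s %s' % (chan, value))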
Comments related to this report
iain.dorrington@LIGO.ORG - 14:37, Friday 07 April 2017 (35396)
A link to the corresponding report for Livingston: https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=32886
LHO VE
logbook/robot/script0.cds.ligo-wa.caltech.edu@LIGO.ORG - posted 12:10, Friday 07 April 2017 - last comment - 08:42, Monday 10 April 2017(35392)
CP3, CP4 Autofill 2017_04_07
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 1984 seconds. TC B did not register fill. LLCV set back to 18.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 3264 seconds. LLCV set back to 35.0% open.
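For context, a hypothetical sketch of the fill sequence these robot log lines describe; the real autofill script and its channel names live in CDS and are not reproduced here, so set_llcv() and tc_is_cold() below are illustrative stand-ins for the actual EPICS I/O:

import time

def set_llcv(pump, percent):
    # Stand-in for the EPICS write that sets the LLCV percent open.
    print('%s LLCV set to %s%% open' % (pump, percent))

def tc_is_cold(pump):
    # Stand-in for the exhaust thermocouple readback (cold = liquid overflow).
    return False  # replace with the real TC readback

def overfill(pump, fill_percent, nominal_percent, timeout=3600):
    # Open the valve wide, wait for the TC to register the fill,
    # then restore the nominal setting (e.g. 50% -> 18% for CP3).
    set_llcv(pump, fill_percent)
    start = time.time()
    while time.time() - start < timeout and not tc_is_cold(pump):
        time.sleep(10)
    print('%s fill completed in %d seconds' % (pump, time.time() - start))
    set_llcv(pump, nominal_percent)

overfill('CP3', 50, 18.0)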
Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 13:24, Friday 07 April 2017 (35393)

Raised CP3 to 19% open and CP4 to 38% open.

david.barker@LIGO.ORG - 14:06, Friday 07 April 2017 (35394)

This is the first run of an actual overfill using the new virtual strip tool system. This completes the CP3/CP4 portion of FRS7782. That ticket has been extended to cover the vacuum pressure strip tool.

chandra.romel@LIGO.ORG - 08:42, Monday 10 April 2017 (35434)

Reduced CP4 valve setting to 37% open - looked to be overfilling.

H1 DetChar (DetChar)
miriam.cabero@LIGO.ORG - posted 11:55, Friday 07 April 2017 - last comment - 18:05, Friday 07 April 2017(35373)
Sub-set of blip glitches related to computer errors

Summary: there seems to be a correlation between computer errors (for instance timing or IPC errors) and one or two types of blip glitches in DARM.

Explanation:

With the help of Dave Barker, over the last few days I have been looking at several FEC_{}_STATE_WORD channels to find times of computer errors. First I used the minute trends to go back 90 days, and then I used the second trends to pin down the error times to one-second accuracy (going back two weeks). I have been looking for error codes up to 128, where 2=TIM, 4=ADC, 8=DAC, 16=DAQ, 32=IPC, 64=AWG, and 128=DK. Finally, I generated omega scans of GDS-CALIB_STRAIN at the error times and found that they often show glitches.
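As an aside, decoding the STATE_WORD bit mask is straightforward once a trend value is in hand; a minimal sketch (the sample value is illustrative):

ERROR_BITS = {2: 'TIM', 4: 'ADC', 8: 'DAC', 16: 'DAQ',
              32: 'IPC', 64: 'AWG', 128: 'DK'}

def decode_state_word(value):
    # Return the list of error flags set in a STATE_WORD sample.
    return [name for bit, name in ERROR_BITS.items() if int(value) & bit]

print(decode_state_word(34))  # -> ['TIM', 'IPC']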

Since there were some error times that did not glitch, we are trying to track down how the coupling between the errors and the glitches happens, to see whether it can be fixed or whether the affected times can be vetoed. It might not be possible to fix the issue until we go into commissioning time at the end of O2, so DetChar might want to consider creating a flag for these times.

Investigations:

With the times obtained from the second trends, I have created tables of omega scans to see how the errors propagate and where the glitches appear. All of those tables can be found here as html files, along with the lists of error times for the last ~two weeks as txt files. So far, the model with the highest ratio of glitches to non-glitches is susetm(x/y)pi.

I have been looking at some of these glitches, and the loudest ones coincide with small range drops. Also, the three glitches that had a strong residual in the subtraction of the MASTER signal from the NOISEMON signal (see aLog 34694) appear in the minute trends as times when there were computing errors in the h1iopsusey model (I haven't tracked those times down to the second, though).

Comments related to this report
miriam.cabero@LIGO.ORG - 18:05, Friday 07 April 2017 (35400)

Between March 24 and April 3, this population of glitches corresponds on average to approximately 10% of the population of blip glitches reported by the low-latency blip hunter (see aLog 34257 for more details on the blip hunter). The maximum percentage obtained is 22.2%, and the minimum is 2.8%.
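A sketch of how such an overlap fraction can be computed, given sorted lists of blip-hunter trigger times and computer-error times; the one-second coincidence window is an assumption matching the second-trend accuracy:

import numpy as np

def coincident_fraction(blip_times, error_times, window=1.0):
    # Fraction of blip triggers within `window` seconds of an error time.
    blips = np.sort(np.asarray(blip_times, dtype=float))
    errors = np.sort(np.asarray(error_times, dtype=float))
    idx = np.searchsorted(errors, blips)
    # distance from each blip to its nearest error time
    lo = np.abs(blips - errors[np.clip(idx - 1, 0, len(errors) - 1)])
    hi = np.abs(blips - errors[np.clip(idx, 0, len(errors) - 1)])
    return np.mean(np.minimum(lo, hi) <= window)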

H1 General (Lockloss)
corey.gray@LIGO.ORG - posted 11:00, Friday 07 April 2017 - last comment - 12:26, Friday 07 April 2017(35389)
Lockloss: WINDY!

17:44 Lockloss

H1 had been degrading in range, but we are in the middle of a windstorm, currently riding through sustained 30 mph winds with gusts up to 50 mph.

Have taken Observatory Mode to WIND!

Comments related to this report
corey.gray@LIGO.ORG - 11:14, Friday 07 April 2017 (35390)

18:12 With winds in the mid-30s to mid-40s and useism (0.1-0.3 Hz) at about 0.7 microns, took ISI CONFIG to VERY_WINDY_NOBRSXY (the guide recommends VERY_WINDY, but that is no longer a state).

Even if we can't lock, we'll stay here during these high winds to give Jim W. some data with these conditions and in this state.

corey.gray@LIGO.ORG - 12:26, Friday 07 April 2017 (35391)

VERY_WINDY_NOBRSXY didn't look like it improved the situation (Sheila had a strip tool up while we were in LOCKING_ARMS_GREEN, and it looked noticeably worse). So we collected ~15 min of data for Jim in this state while the winds blow.

18:27-18:57 MORE_WINDY state

18:58 Back to WINDY, but the ETMY BRS velocity trace (middle right on the MEDM) has a RED box around it, signifying that DAMPING is ON. This is just because we have so much wind at EY.

At any rate, Jim W. is here & working through various ISI states while we still have high winds.

H1 AOS
aidan.brooks@LIGO.ORG - posted 09:32, Friday 07 April 2017 (35386)
Evidence of point source on ITMX back to March 2016 - as far as HWS data goes back

We have full gradient field data from the ITMX HWS archived back to March 2016. In order to see the thermal lens, we need coincident times between low-noise HWS operation and a relatively high-power lock acquisition or lockloss of the IFO.

By mining the archived data, I can see evidence for the point source going back to 12-March-2016 (the gradient field is relatively noisy). Unfortunately, there is no archived data earlier than this time.
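A sketch of the coincidence search this kind of mining requires: intersecting segments of low-noise HWS operation with segments bracketing lock acquisitions or locklosses. The segment lists and GPS times here are purely illustrative:

def intersect_segments(a, b):
    # Intersect two time-sorted lists of (start, stop) segments.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start, stop = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if start < stop:
            out.append((start, stop))
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

hws_quiet = [(1141000000, 1141005000), (1141020000, 1141030000)]
lock_events = [(1141004000, 1141006000)]
print(intersect_segments(hws_quiet, lock_events))  # [(1141004000, 1141005000)]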

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 08:18, Friday 07 April 2017 (35384)
Transition To DAY

TITLE: 04/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    Wind: 21mph Gusts, 15mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.61 μm/s

Looks like there was a bit of a windstorm starting at about 5am PDT (13:00 UTC), with a few minutes of steady 45-50 mph winds 30 min before the H1 lockloss.
QUICK SUMMARY:

LHO General
thomas.shaffer@LIGO.ORG - posted 07:45, Friday 07 April 2017 (35381)
Ops Owl Shift Summary

TITLE: 04/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SUMMARY: Locked and Observing until the last hour or so, then PI mode 19 SDF diffs and a lockloss. I could not get ALSX to relock and was unable to bring up a camera to see how it was misaligned. The normalized power was ~0.5, and moving ETMX could not improve it. Not being able to see what I was doing made it basically impossible. Corey came in early, and Richard went to see what he could do as well.

H1 General
thomas.shaffer@LIGO.ORG - posted 07:04, Friday 07 April 2017 (35383)
Lockloss @13:53UTC

A few minutes after I fixed the SDF diff we dropped lock; not sure of the cause yet.

H1 General
thomas.shaffer@LIGO.ORG - posted 06:59, Friday 07 April 2017 - last comment - 13:36, Wednesday 12 April 2017(35382)
Briefly out of Observing from PI mode 19 SDF diff

PI mode 19 started to ring up a bit, causing the SUS_PI node to change some of the PLL filters and producing an SDF diff. Mode 19 is not one of the modes that we regularly damp while at 30 W, but it may have been excited by the wind. I went to clear the SDF diff, but just clicking the unmonitor button would not work; there was an exclamation point next to "MON", and I'm not quite sure whether that was supposed to pop up a screen that didn't work remotely, or whether it was something else. I managed to select unmonitor-all, with only that channel having an FM3 diff, and got it accepted that way. I'm worried that this may have accepted the entire filter bank, and that there was another screen I couldn't see that would have let me choose which parts of that filter bank to monitor.

We are back to Observing and there is a screenshot below of the SDF diff.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 13:36, Wednesday 12 April 2017 (35505)

I brought this back to how it was before I unmonitored the entire bank, but I left the FMs unmonitored since the PI Guardian can change them. I was correct that I was supposed to get another screen for the filter bank when clicking the !MON button, but I could not get it remotely.

H1 General
thomas.shaffer@LIGO.ORG - posted 04:52, Friday 07 April 2017 (35380)
Ops Mid-Shift report

Smooth sailing so far. The wind just jumped from about 5 mph to 30 mph. It is forecast to get very windy today, so this may be the beginning.

LHO General
thomas.shaffer@LIGO.ORG - posted 00:46, Friday 07 April 2017 (35379)
Ops Owl Shift Transition

TITLE: 04/07 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
OUTGOING OPERATOR: Jim
SUMMARY: OPERATOR IS WORKING REMOTELY FOR THE DURATION OF THIS SHIFT. I still have not recovered from my pinkeye and we could not find an emergency replacement, so I am logged in and will be monitoring from home. I will do as much as I can from here, and Corey offered to drive on site in the early morning if need be. I have contacted LLO, and I am on the Control Rooms TeamSpeak channel if anyone needs to get hold of me.

H1 General
jim.warner@LIGO.ORG - posted 00:11, Friday 07 April 2017 - last comment - 09:54, Friday 07 April 2017(35378)
Shift Summary

TITLE: 04/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 69Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Commissioning changes made staying locked difficult, but that is sorted now. TJ is attempting to do his shift remotely (otherwise he's "out sick").
LOG:
3:30 UTC Noticed range was falling off & ASC was ringing up

4:00 UTC Lockloss. Started trying to untangle the commissioning changes from earlier; got locked again, but the range deteriorated and ASC rang up again. Sheila helped me figure out that her CSOFT changes hadn't been added to the ISC_LOCK guardian, so CSOFT currently still has to be turned up by hand. We should sort this out tomorrow.

Comments related to this report
sheila.dwyer@LIGO.ORG - 09:54, Friday 07 April 2017 (35387)

Just to clarify what happened:

The ITMY oplev has been going bad, and it got bad enough to start causing locklosses during the commissioning window yesterday, so we spent some of our commissioning time doing corrective maintenance to get rid of oplev damping. Since my change to the CSOFT gain got overwritten in the guardian (not sure why), it didn't come on with the same gain in the next lock. This meant that we had the radiation pressure instability that the oplev damping was originally intended to suppress, which is different from the 0.44 Hz oscillations we've been having for the past few weeks, which are actually caused by the oplev.

So the original cause of the trouble was not commissioning, but the oplev going bad.  

H1 SUS (ISC, SUS)
sheila.dwyer@LIGO.ORG - posted 16:40, Thursday 06 April 2017 - last comment - 22:52, Thursday 06 April 2017(35371)
Oplev damping off on all test masses, a look at top mass damping

Brett, Jeff, and I have been looking a little at Brett's suspension model, trying to assess how our damping design (both top mass and oplev) performs at high power, based on some conversations at the LLO commissioning meeting. Today I turned off oplev damping on all test masses.

Top mass damping:

The first attached plot shows the magnitudes of the top-to-top transfer functions for L, P, and Y in three different states: no damping and no power in the arms; top-mass-only damping and no power in the arms; and top-mass-only damping with 30 W of input power. You can see that while it would be possible to design better damping loops to deal with the radiation pressure changes in the plant, there isn't anything terrible happening at 30 W. I also looked at 50 W of input power, and there aren't any large undamped features. I'm not plotting T, V, and R because the radiation pressure doesn't have any impact on them according to the model.
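To make the effect of a damping loop on these magnitudes concrete, here is a toy single-mode model (not the actual quad model or our loop design) showing how velocity feedback reshapes a transfer function around a resonance; the mode frequency, Q, and gain are illustrative:

import numpy as np
from scipy import signal

f0, Q = 0.44, 100.0                # mode frequency [Hz] and undamped Q
w0 = 2 * np.pi * f0
# Plant: w0^2 / (s^2 + (w0/Q) s + w0^2)
plant = signal.TransferFunction([w0**2], [1.0, w0 / Q, w0**2])
# Closed loop with velocity feedback H = g*s:
# w0^2 / (s^2 + (w0/Q + g*w0^2) s + w0^2)
g = 0.1
damped = signal.TransferFunction([w0**2], [1.0, w0 / Q + g * w0**2, w0**2])

w = 2 * np.pi * np.logspace(-1, 1, 2001)   # 0.1 Hz to 10 Hz
_, mag_plant, _ = plant.bode(w)
_, mag_damped, _ = damped.bode(w)
print('peak suppression: %.1f dB' % (mag_plant.max() - mag_damped.max()))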

Oplev damping:

The next three plots show the transfer functions from the PUM to test mass pitch, in the optic basis, hard, and soft. One thing that is clear is that we should not try to damp the modes above 1 Hz in the optic basis, so if we really want to use oplev damping we should do it in the radiation pressure (hard/soft) basis and probably have a power-dependent damping filter. The next two plots show the impact on the hard and soft modes. You can see that the impact of the oplev damping on the hard modes (and therefore the hard ASC loops) is minimal, but there is an impact on the soft loops around half a hertz. DSOFT pitch has a low bandwidth, but we use a bandwidth of something like 0.7 Hz for CSOFT P to suppress the radiation pressure instability.

Krishna had noted that the ITMY oplev seems to have been causing several problems recently (aLog 34351), and this afternoon we think it caused us a lockloss. I tried just turning the oplev damping off, which was fine, except that we saw signs of the radiation pressure instability returning after a few minutes at 30 W. I turned the CSOFT gain up by a factor of 2 (from 0.2 to 0.4), and the instability went away. The last attached plot is a measurement of the CSOFT pitch loop at 2 W, with no oplev damping, before I increased the gain to 0.4.

The guardian is now set to not turn on the oplev damping, and this is accepted in SDF. Hopefully this saves us some problems, but it is probably still a good idea to fix the ITMY oplev on Tuesday.

Images attached to this report
Non-image files attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 21:12, Thursday 06 April 2017 (35374)ISC

I don't know if we can run like this. Five hours in, all of the ASC pitch loops are angry and the range is suffering. Pretty sure the lock won't last in this configuration. The attached plot of ASC DHARD pitch shows that all of the increased signal is in the 0.44 Hz quad mode. Red is from 4 hours ago; blue is current.

Images attached to this comment
jim.warner@LIGO.ORG - 21:27, Thursday 06 April 2017 (35375)ISC

We just lost lock. I think we should re-engage the oplevs.

Images attached to this comment
brett.shapiro@LIGO.ORG - 22:33, Thursday 06 April 2017 (35376)

Recall that top-to-top transfer functions do not show the damping of the test mass. They can mislead you into thinking there is high damping when there isn't really, at the test mass. So all transfer functions in this case should show the test mass as the output. The input probably isn't too important, but from a consistency point of view we may as well take all transfer functions between the same stages.

I agree that doing the oplev damping in hard and soft coordinates may be better than in local coordinates. In local coordinates, all test masses need high-bandwidth damping to accommodate the hard modes. In hard and soft coordinates, only the hard loops need high-bandwidth damping. So that might give us a win of up to sqrt(2) in noise for the same amount of damping.

jim.warner@LIGO.ORG - 22:52, Thursday 06 April 2017 (35377)ISC

After getting relocked, the range started dropping again and the ASC FOMs showed the 0.44 Hz mode was still ringing up even with oplev damping. After talking to Sheila, we realized the CSOFT gain wasn't in the guardian yet (it was still at 0.2; it should be 0.4 with oplev damping off), so I set 30-second tramps on CSOFT P and the oplev damping and, with some quick clicking, ramped the CSOFT gain up and the oplev gains back to zero. The lock survived this, the ASC FOMs settled down, and the range started recovering. I've accepted the changes in SDF. Hopefully the lock will survive the night, but if it doesn't, the CSOFT gain will show up in SDF. It should be set to 0.4.
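For the record, a quick-clicking sequence like this can also be scripted; a hedged sketch with pyepics, where the _TRAMP/_GAIN suffixes follow the standard CDS filter-module convention but the specific bank names (especially the oplev damping banks) are assumptions, not verified channels:

import time
from epics import caput

RAMP = 30  # seconds, matching the tramps used above
caput('H1:ASC-CSOFT_P_TRAMP', RAMP)              # assumed bank name
for optic in ('ITMX', 'ITMY', 'ETMX', 'ETMY'):
    caput('H1:SUS-%s_L3_OLDAMP_P_TRAMP' % optic, RAMP)  # assumed bank name

caput('H1:ASC-CSOFT_P_GAIN', 0.4)                # CSOFT P up from 0.2 to 0.4
for optic in ('ITMX', 'ITMY', 'ETMX', 'ETMY'):
    caput('H1:SUS-%s_L3_OLDAMP_P_GAIN' % optic, 0.0)    # oplev damping off
time.sleep(RAMP)  # let both ramps complete together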

H1 ISC (SEI)
jim.warner@LIGO.ORG - posted 00:12, Wednesday 05 April 2017 - last comment - 10:43, Friday 07 April 2017(35330)
Large MC2 length drive causes IMC power fluctuations

I've been looking to see if LHO needs to pursue better L2A decoupling in the corner station suspensions to improve our wind and earthquake robustness. The good news is that I had to look for a while to find a candidate, but I know better what to look for now, so I'll see what else I can find. Looking at a couple of recent earthquakes, I noticed that we seemed to lose lock when IM4 TRANS QPD pitch hit a threshold of -0.6. After talking to Jenne about it, we looked at other QPDs close by, and it was immediately obvious that MC2 TRANS QPD pitch was being driven by the MC2 M1 length drive. The attached plot tells the story.

Both plots are time series for an earthquake on March 27 of this year, where we lost lock at around GPS 1174648460. The top plot shows MC2_TRANS_PIT_INMON, MC2_M1_DRIVEALIGN_L_OUTMON, and MC2_TRANS_SUM_OUT16. The bottom plot is the ITMY STS in the Y direction. The first 600 seconds are before the earthquake arrives and are quiet. The spike in the STS at about 700 seconds is the arrival of the P waves. This makes the MC2 suspension move more, but the MC2 TRANS sum isn't affected much. At about 900 seconds the R waves arrive; MC2 starts moving more and more, moving the spot on the QPD and driving down the QPD sum. I've looked at the other PDs used for ASC, and only IM4 TRANS and MC2 TRANS move this much during an earthquake.
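For anyone wanting to reproduce the plot, a sketch of fetching the same data with gwpy, assuming NDS access; the full channel prefixes are my guesses from the short names in the text, and the GPS window loosely brackets the lockloss:

from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:IMC-MC2_TRANS_PIT_INMON',            # assumed prefix
    'H1:SUS-MC2_M1_DRIVEALIGN_L_OUTMON',
    'H1:IMC-MC2_TRANS_SUM_OUT16',            # assumed prefix
    'H1:ISI-GND_STS_ITMY_Y_DQ',              # assumed name for the ITMY STS Y
]
data = TimeSeriesDict.get(channels, 1174647800, 1174648600)
plot = data['H1:IMC-MC2_TRANS_SUM_OUT16'].plot()
plot.show()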

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 10:43, Friday 07 April 2017 (35388)

[Vaishali, JimW, Jenne]

We started looking at transfer functions yesterday to do the length-to-angle decoupling, but I misread Jim's plot and focused on the lowest (M3) stage rather than the low-frequency top stage.

Anyhow, hopefully we can take some passive TFs over the next few days (especially now, with the >90th percentile useism and >98th percentile wind) and have a decoupling filter ready for the next commissioning window.
