H1 AOS (SUS)
corey.gray@LIGO.ORG - posted 11:27, Friday 14 July 2017 - last comment - 12:20, Friday 14 July 2017(37527)
Violin Modes Rung Up Again!

During current lock:

Jenne had me quickly maintain lock by:

This calmed things down.  Then I moved on to the violin modes.

ETMx MODE4 (505.805 Hz) was once again the largest rung-up violin mode.  I have stayed with positive gain, played with increasing it, and also fiddled with the phase, but have not had any luck with that yet.


Comments related to this report
corey.gray@LIGO.ORG - 12:20, Friday 14 July 2017 (37528)

I'm not sure whether anything I did improved things, or if it was just time that allowed the ETMx MODE4 to ring down on its own and reach a value where we could finally go back to locking (I spent about 45 minutes tweaking the violin modes).

Kiwamu took a look at spectra of the RMS of OMC_DCPD_A_IN1 & OMC_DCPD_B_IN1, and they looked fine for ADD_WHITENING.  So I went ahead and did that:

  • Go to OMC_LOCK node
  • Go to MANUAL
  • Select ADD_WHITENING state & wait for it to complete
  • Select READY_FOR_HANDOFF
  • Go back to AUTO.  Edit: go to MANUAL (the Manager field will list this node as being managed by YOU, "User", etc.; you will want to change this to ISC_LOCK at some point*; see the scripted sketch below)

BUT....I must not have done something right here, because ISC_LOCK complains about OMC_LOCK no longer being MANAGED by ISC_LOCK.  How do we get it to reclaim OMC_LOCK?

* See Sticky Note for how to change the Manager Flag from "User" to "ISC_LOCK".
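
For reference, a minimal sketch of the sequence above, scripted with pyepics.  The Guardian channel names (H1:GRD-OMC_LOCK_MODE, _REQUEST, _STATE, _MANAGER) and the value strings follow the usual naming pattern but are assumptions here; verify them against the Guardian MEDM screen before trying anything like this.

    import time
    from epics import caget, caput

    NODE = 'H1:GRD-OMC_LOCK'   # assumed channel prefix for the OMC_LOCK node

    caput(NODE + '_MODE', 'MANUAL')            # take the node out of AUTO
    caput(NODE + '_REQUEST', 'ADD_WHITENING')  # select the whitening state

    # wait for the requested state to complete
    while caget(NODE + '_STATE', as_string=True) != 'ADD_WHITENING':
        time.sleep(1)

    caput(NODE + '_REQUEST', 'READY_FOR_HANDOFF')

    # the Manager field may now read "USER"; hand it back to ISC_LOCK
    # (see the Sticky Note) before returning the node to AUTO
    print('Manager:', caget(NODE + '_MANAGER', as_string=True))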

Either way, back to NLN.

VIOLIN_MODE_DAMPING_1 Issue?

Since these violins appeared to ring up as we were raising power & went through the VIOLIN_MODE_DAMPING_1 guardian state, does this mean we might still have an issue with the Guardian settings for the Violin Modes?

H1 ISC
daniel.sigg@LIGO.ORG - posted 10:44, Friday 14 July 2017 (37526)
ALS Fiber distribution

Looking at the ALS fiber distribution box:

We have 15-20 mW into the fiber. With a ~7% splitting ratio (the losses through the AOM are about 3 times higher than the specifications would indicate), we will have about 1 mW at the SQZ LO fiber output. This is in line with the L1 measurement in alog 28961.
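
As a quick sanity check of the arithmetic above (a sketch using only the quoted numbers, not new measurements):

    # expected SQZ LO power from the quoted input power and split ratio
    split = 0.07                  # ~7% splitting ratio, incl. excess AOM loss
    for p_in_mW in (15.0, 20.0):  # quoted power into the fiber
        print(f'{p_in_mW:.0f} mW in -> {p_in_mW * split:.2f} mW at the SQZ LO output')
    # gives ~1.0-1.4 mW, in line with the ~1 mW L1 value in alog 28961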

H1 SEI
hugh.radkins@LIGO.ORG - posted 10:23, Friday 14 July 2017 (37522)
LHO BS CPS Glitches

The first plot attached shows 24 hours of Corner3 and other CPS In1s along with Lock state signals.

The two large spikes in the first half of the trend show saturating spikes on the Corner 3 CPS and, concurrently, the BS Watchdog tripping.  These glitches show up only on Corner 3 and appear on all four of the Corner 3 CPSs.  Not shown are earlier locklosses and BS trips that JimW, in aLog 37499, attributed to Corner 3 glitching.

The first action Jim performed was power cycling the satellite racks at the chamber; he did this by unplugging the rack power cord.

After the first LL on this log's plot, Jim un- and re-seated the gauge boards in the satellite racks.

After the second LL at ~1740pdt, I power cycled the C3 interface chassis and reseated and tightened the power cord of the chassis.

There is a final lockloss on this plot around 6am pdt, but there is little to no spiking of the CPS channels and certainly none of the WD-tripping saturations seen earlier.

So the 6am lockloss was not the CPS, but only time will tell if things are completely cleared up.

I still plan to look closer at the glitching and locklosses here, so there may be more to come; a sketch of one possible check is below.
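
As a sketch of the kind of closer look planned here, a simple threshold test could flag CPS glitch candidates in a stretch of data.  The median baseline and the example threshold are illustrative assumptions, not a calibrated criterion.

    import numpy as np

    def find_glitches(x, threshold):
        """Return sample indices where x deviates from its median
        by more than threshold (counts)."""
        dev = np.abs(x - np.median(x))
        return np.flatnonzero(dev > threshold)

    # usage: idx = find_glitches(cps_counts, threshold=5000)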

Images attached to this report
H1 PSL
jason.oberling@LIGO.ORG - posted 09:36, Friday 14 July 2017 (37519)
PSL Laser Head Flow Rates

Attached are 3 day trends of the laser head flow rates in the PSL HPO.  So far everything looks to be holding steady; the jagged-looking nature of the signals at the beginning of the trends is the tail end of the flow rate tests performed by Peter and Jeff on Tuesday.  Will continue to keep an eye on these flows.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 08:29, Friday 14 July 2017 - last comment - 12:14, Friday 14 July 2017(37518)
Transition To DAY

TITLE: 07/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.07 μm/s

Quiet at the moment with small useism & low winds.
QUICK SUMMARY:

Richard and Travis were troubleshooting IMC when I walked in.  Richard asked me to run the DOWN state (Guardian was in the INITIAL_ALIGNMENT state), and this locked up the IMC.  Will start an initial alignment from scratch now.

Note:  OBSERVATORY MODE is in the EARTHQUAKE state.  It was taken to OBSERVING last night.

Comments related to this report
corey.gray@LIGO.ORG - 12:14, Friday 14 July 2017 (37525)

Initial Alignment/Locking Notes:

INPUT_ALIGN

While running through the INPUT_ALIGN step, I noticed a big ASC signal ringing down after OFFLOADING (screenshot attached).  I don't recall this happening before & just thought I'd mention it, because Travis had his IMC issue during this step of INPUT_ALIGN.

MICH_DARK_LOCKED

The spot here was bright & I wasn't able to get it to the usual dark spot.  I centered this spot & moved on.

Locking Issues:

So after the alignment, I couldn't get much action while trying to lock PRMI (it looked very quiet), and it also looked very quiet/dead for CHECK_MICH_FRINGES.  So I went to DRMI locking, looked at the flashes, and moved the BS accordingly.  Managed to eventually get a lock.  Just wondering if my alignment was bad due to the issue with MICH_DARK_LOCKED.

Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 08:04, Friday 14 July 2017 (37517)
Ops Owl Shift Summary

TITLE: 07/14 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Corey
SHIFT SUMMARY:  After the lockloss (still not convinced it was due to the EQ), I started an IA since the AS spot looked iffy and DRMI was not locking.  When I got to Input_align, the IMC lost lock and would not relock.  I followed the procedure in the troubleshooting wiki and recovered flashing in the IMC, but it still would not lock.  I started looking around at the MC SUS screen and found a channel that had a huge value coming into it.  Thinking it might be electronics, I called in Richard to have a look.  At the moment, he is still poking through MEDM screens to try to find the source of the signal. 
LOG:  See previous aLogs.

H1 General (SEI)
travis.sadecki@LIGO.ORG - posted 06:06, Friday 14 July 2017 - last comment - 07:34, Friday 14 July 2017(37514)
Lockloss 12:58 UTC - Earthquake ?

I set the OPS_OBSERVATORY-MODE to Earthquake, but I'm not entirely certain that was the culprit.  We were at the peak of the motion from a 5.3M EQ in Chile, but the BLRMS topped out at only ~0.1 um/s, well below the typical lockloss threshold.  Jim's StripTool shows some wiggles in the BS CPS traces, but not what I'd call glitches.

For the SEI team, I attach screenshots of the USGS, Terramon, and SEISMON 5-EQ screens that show that SEISMON is not reporting the latest EQs (there have been several EQs over 4.0M with dates of 2017-07-14 that don't show up on SEISMON). 

Images attached to this report
Comments related to this report
keith.thorne@LIGO.ORG - 07:34, Friday 14 July 2017 (37516)CDS, SEI
SEISMON seems to be working (at least updating) at LLO (See attached).

We may need to update the USGS code, and other code, to get the location field to work.  There is also the phenomenon that, after a restart of all the scripts, it can take a couple of days for all the processing to catch up to current events.
Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 05:49, Friday 14 July 2017 (37513)
GRB alert 12:40 UTC

Called LLO control room to verify they received the alert.  Jeremy at LLO verified they did, and logged back in to Teamspeak since their Teamspeak machine had rebooted.

H1 SEI (SEI)
travis.sadecki@LIGO.ORG - posted 00:15, Friday 14 July 2017 - last comment - 00:16, Friday 14 July 2017(37511)
H1 ISI CPS Noise Spectra Check - Weekly

All appears to be well.

Images attached to this report
Comments related to this report
travis.sadecki@LIGO.ORG - 00:16, Friday 14 July 2017 (37512)

FAMIS task 6906.

H1 General
travis.sadecki@LIGO.ORG - posted 00:00, Friday 14 July 2017 (37509)
Ops Owl Shift Transition

TITLE: 07/14 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 53Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:  We are finally back to Observing even with the range still a bit lower than we'd like.  I'll keep an eye on the BS ISI CPS for glitches if we have any locklosses.

LHO General
patrick.thomas@LIGO.ORG - posted 23:59, Thursday 13 July 2017 (37510)
Ops Eve Shift Summary
TITLE: 07/13 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 54Mpc
INCOMING OPERATOR: Travis
SHIFT SUMMARY:

I spent the beginning of the shift trying to help Jeff and Hugh damp violin modes. I worked on adjusting the gains for the first fundamental modes 2, 4, 6 and 8 of ETMY. Increasing the gain on modes 4 and 6 did not seem to have any effect on the ~508.219 Hz mode. Increasing the gain on modes 2 and 8 may have helped damp the ~507.992 Hz and ~508.66 Hz modes respectively, or they could have just been coming down regardless; it was unclear. Thomas, Pep and Jenne worked on moving the beam spot positions but reverted their changes before they left. We had another lock loss attributed to the BS ISI CPS glitching. Hugh made another change to try to address it (see his alog). We can now make it to NLN without intervening in the violin mode damping and without skipping any states. After consulting with Vern we set the intent bit to observing despite the range of ~54 Mpc.

LOG:

23:55 UTC GRB verbal alarm. LLO not on teamspeak. Ignoring.
00:42 UTC Lock loss. BS ISI CP glitch. Thomas and Pep reverting beam spot moving ASC changes.
00:44 UTC Hugh to CER to powercycle BS interface chassis.
00:48 UTC Untripped BS ISI. Hugh back to CER to tighten screws.
01:00 UTC Thomas and Pep done. Relocking.
01:36 UTC NLN.
02:01 UTC Observing after consulting with Vern.
H1 AOS (DetChar)
robert.schofield@LIGO.ORG - posted 22:54, Thursday 13 July 2017 (37503)
After Swiss cheese baffle damping: no rush-hour range drop and improved jitter-subtracted insprial range

Damping of the baffle has been shown to have successfully reduced noise in DARM from local vibration injections around the input beam tube (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36979), but an improvement for site-wide vibration has not been shown until now.

While we have been down for some time following the Montana quake, there was enough data before the quake that I could look into indicators of site-wide coupling. The evidence here suggests that the baffle damping also reduced coupling of global 10-30 Hz vibrations and thus supports the hypothesis that the baffle was the dominant coupling location on site for this band.

The rush hour range drop was thought to be produced by scattering from the Swiss cheese baffle (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35735), and it was thought that damping might reduce this nearly daily drop in range.  Each of the panels in Figure 1 shows the 10-30 Hz seismic band and the inspiral range. The rush hour traffic features are the broad, roughly Gaussian peaks (from many cars) at the beginning and end of the work days. The panels from April and last December show that the range dropped during each rush hour (unless the range was already low), and that this had been happening for the entire run. The recent plot does not show similar range drops, suggesting that site-wide coupling in this band has been substantially reduced (as an aside, we do still often lose lock during the morning rush hour, even though the range doesn’t drop, and this bears further investigation).

To test for a global effect when the traffic noise was not elevated, I used Jenne’s jitter subtraction code and compared non-rush-hour times before and after the vent. I took 1024-second stretches that were matched in original inspiral range and applied Jenne’s code. Figure 2 shows that the range with subtraction was, for each pair, higher after the vent. The difference was only a few percent but is significant (at p < 0.05). Because the scattering noise is highly non-stationary, damping may also have improved stationarity (compare the minute-scale range variation in Figure 1).
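
For illustration, a paired one-sided test of the kind the p < 0.05 claim implies might look like the sketch below.  The range values are placeholders rather than the measured pairs, and the choice of a Wilcoxon signed-rank test is an assumption, not necessarily the test actually used.

    # paired before/after comparison of jitter-subtracted inspiral ranges
    from scipy import stats      # needs scipy >= 1.3 for `alternative`

    before = [52.1, 53.4, 51.8, 54.0, 52.9]  # hypothetical pre-vent Mpc
    after  = [53.0, 54.6, 52.9, 55.1, 54.2]  # hypothetical post-vent Mpc

    # one-sided: is the post-vent subtracted range systematically higher?
    stat, p = stats.wilcoxon(after, before, alternative='greater')
    print(f'Wilcoxon signed-rank p = {p:.3f}')  # significant if p < 0.05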

Non-image files attached to this report
H1 SEI
hugh.radkins@LIGO.ORG - posted 18:40, Thursday 13 July 2017 - last comment - 09:59, Friday 14 July 2017(37505)
CPS Glitch on BS Corner3 Again appears to drop lock

Still drilling into the fine print, but this still holds, although there may be more details to uncover.

After this glitch, I power cycled the sensor chassis in the CER--this did not include the T240s.  After doing so I noticed the power cable to the Corner3 chassis was not screwed in, the cable was somewhat tight, and the connector did not seem square/fully seated.  I pulled a bit of slack in, seated the connector, and screwed it down--we'll see...

Comments related to this report
hugh.radkins@LIGO.ORG - 09:59, Friday 14 July 2017 (37520)

This power cycle was just the Corner3 Chassis.

H1 SEI
jim.warner@LIGO.ORG - posted 13:52, Thursday 13 July 2017 - last comment - 10:07, Friday 14 July 2017(37499)
BS CPS glitching caused the locklosses this morning

As was alluded to earlier by Corey, it looks like the CPSs on BSC2 started glitching this morning and caused the ISI to trip, breaking several locks. The attached plot shows several hours around the locklosses this morning. The windows where the ST1 WD was at state 4 are when the WD tripped. The long fuzzy period on the left of the CPS plots is the earthquake this morning, only on ST1 H3 & V3. It looks like the CPS started glitching some time after the earthquake. I went out and pulled and reconnected the power on the satellite racks at the chamber. I've had a striptool of the CPS running since power-cycling, and it looks like the glitching has stopped. Given that the glitches start before getting bad enough to trip the ISI, maybe we can come up with some way of diagnosing this before it becomes a problem, so we at least know what we have to fix. I don't know what that would be, though. We could at least get longer trends (an hour or two) of the CPS if a "mysterious" ISI trip is suspected of causing a lock loss; see the sketch below.
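
As a sketch of the longer-trends idea, something like the following gwpy snippet could pull a couple of hours of a BS CPS channel around a suspect trip.  The channel name and GPS time are illustrative assumptions; substitute the actual H1 BS ST1 CPS channel and trip time.

    from gwpy.timeseries import TimeSeries

    channel = 'H1:ISI-BS_ST1_CPSINF_H3_IN1_DQ'  # assumed channel name
    trip_gps = 1184100000                       # GPS time of the suspect trip

    # two hours before the trip through ten minutes after
    data = TimeSeries.fetch(channel, trip_gps - 7200, trip_gps + 600)
    plot = data.plot()
    plot.savefig('bs_cps_trend.png')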

Images attached to this report
Comments related to this report
richard.mittleman@LIGO.ORG - 07:26, Friday 14 July 2017 (37515)

Do the corner 3 Stage 2 sensors show any of this glitchiness?

hugh.radkins@LIGO.ORG - 10:07, Friday 14 July 2017 (37521)

Yes Richard, see my later plots.

H1 ISC
kiwamu.izumi@LIGO.ORG - posted 13:38, Thursday 13 July 2017 - last comment - 18:45, Tuesday 18 July 2017(37498)
Modification of RF cabling for 72 MHz WFSs

WP 7075

To further proceed with the 72 MHz WFSs (wavefront sensors) (37042), today I made the hardware changes (mostly cabling) summarized below, while the interferometer was in a violin-mode-damping state.

I am going to leave the hardware configuration as it is. If this new setup doesn't cause extra noise in the interferometer, it will stay semi-permanently.

Here is a summary of the new configuration:

[The modifications]

Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 10:45, Friday 14 July 2017 (37524)

Later, Jenne pointed out that the dark offset should have been readjusted, so we re-adjusted it. As a result, the -1 dB gain I originally set turned out to be inaccurate. I set it to 2 dB in order to get roughly 550 counts at the normalized in-phase output when the DRMI is locked with the arm at an off-resonance point. The RF phase was also adjusted accordingly.
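
For reference, the amplitude factor implied by that gain change (a sketch of the dB arithmetic only; the starting count value below is inferred for illustration, not a measurement):

    # amplitude ratio for a digital gain change from -1 dB to +2 dB
    db_old, db_new = -1.0, 2.0
    factor = 10 ** ((db_new - db_old) / 20)   # ~1.41 for a +3 dB step
    print(f'{db_new - db_old:+.0f} dB -> x{factor:.2f} in amplitude')
    # e.g. a signal reading ~390 counts at -1 dB would read ~550 at +2 dB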

sheila.dwyer@LIGO.ORG - 18:45, Tuesday 18 July 2017 (37601)

It seems that since this work there has been excess low-frequency noise in the RF9 AM stabilization control signal. The attachment shows the difference.

Images attached to this comment