Reports until 20:07, Saturday 28 May 2016
H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:07, Saturday 28 May 2016 - last comment - 14:08, Friday 03 June 2016(27437)
locklosses possibly related to RF problem or SR3 glitches

Evan and I spent most of the day trying to investigate the sudden locklosses we've had over the last 3 days.  

1) We can stay locked for ~20 minutes with ALS and DRMI if we don't turn on the REFL WFS loops.  If we turn these loops on, we lose lock within a minute or so.  Even with these loops off we are still not stable, though, and we saw last night that we can't make it through the lock acquisition sequence.

2) In almost every lockloss, you can see a glitch in the SR3 M2 UR and LL noisemons just before the lockloss, which lines up well in time with glitches in POP18.  Since the UR noisemon has a lot of 60 Hz noise, the glitches can only be seen there in the OUT16 channel, but the UR glitches are much larger.  (We do not actuate on this stage at all.)  However, there are two reasons to be skeptical that this is the real problem:

It could be that the RF problem that started in the last few days somehow makes us more sensitive to losing lock because of tiny SR3 glitches, or that the noisemons are just showing some spurious signal which is related to the lockloss/RF problems. Some lockloss plots are attached.

It seems like the thing to do would be to try to fix the RF problem, but we don't have many ideas for what to do.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 20:25, Saturday 28 May 2016 (27438)

We also tried running Hang's automatic lockloss tool, but it is a little difficult to interpret the results.  There are some AS 45 WFS channels that show up in the third plot that appears, which could be related either to a glitchy SR3 or to an RF problem.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 20:27, Saturday 28 May 2016 (27439)

One more thing: nds1 crashed today and Dave helped us restart it over the phone.

andrew.lundgren@LIGO.ORG - 07:41, Wednesday 01 June 2016 (27470)DetChar, ISC, Lockloss
For the three locklosses that Sheila plotted, there actually is something visible on the M3 OSEM in length. It looks like about two seconds of noise from 15 to 25 Hz; see the first plot. There's also a huge ongoing burst of noise in the M2 UR NOISEMON that starts when POP18 starts to drop. The second through fourth attachments are these three channels plotted together, with causal whitening applied to the noisemon and OSEM.

Maybe the OSEM is just witnessing the same electrical problem as is affecting the noisemon, because it does seem a bit high in frequency to be real. But I'm not sure. It seems like whatever these two channels are seeing has to be related to the lockloss even if it's not the cause. It's possible that the other M2 coils are glitching as well. None of the other noisemons look as healthy as UR, so they might not be as sensitive to what's going on.
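
For anyone who wants to poke at these channels themselves, here is a minimal sketch of pulling and whitening a noisemon/OSEM pair with gwpy. The channel names, the GPS time, and the whitening method (gwpy's FFT-based whiten rather than the causal whitening used for the attached plots) are all stand-ins, not the actual settings used.

# Rough sketch only: pull a noisemon and an OSEM channel around a lockloss
# and whiten them for comparison. gwpy's whiten() is not the causal whitening
# used for the attached plots, and the channel names and time below are
# illustrative guesses, not the ones actually used.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

t0 = to_gps('2016-05-28 19:00:00')  # placeholder time near one of the locklosses
channels = [
    'H1:SUS-SR3_M2_NOISEMON_UR_OUT_DQ',  # hypothetical channel name
    'H1:SUS-SR3_M3_OSEMINF_LF_OUT_DQ',   # hypothetical channel name
]

for chan in channels:
    data = TimeSeries.get(chan, t0 - 10, t0 + 10)
    white = data.whiten(fftlength=4, overlap=2)
    white.plot().savefig(chan.replace(':', '_') + '_whitened.png')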
Images attached to this comment
keita.kawabe@LIGO.ORG - 14:08, Friday 03 June 2016 (27501)

RF "problem" is probably not a real RF problem.

The bad RFAM excess was only observed in the out-of-loop RFAM sensor, not in the RFAM stabilization control signal. In the attached plot, the top panel is the out-of-loop signal, the middle is the control signal, and the bottom is the error signal.

Anyway, whatever this low frequency excess is, it should come in after the RF splitter for the in- and out-of-loop boards. Since it is observed in both the 9 and 45 MHz RFAM chassis, it should be due to a difference in how the in- and out-of-loop boards are configured. See D0900761. I cannot pinpoint what that is, but my guess is that it is some DC effect coming into the out-of-loop board (e.g. the auto bias adjustment feedback, which only exists in the out-of-loop board).

Note that even if it is real RFAM, 1 ppm RIN at 0.5 Hz is negligible, assuming the calibration of that channel is correct.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 15:22, Wednesday 01 June 2016 (27486)DetChar, ISC, Lockloss
Correction: The glitches are visible on both the M2 and M3 OSEMs in length, also weakly in pitch on M3. The central frequency looks to be 20 Hz. The height of the peaks in length looks suspiciously similar between M2 and M3.
Images attached to this comment
andrew.lundgren@LIGO.ORG - 01:42, Thursday 02 June 2016 (27496)DetChar, ISC, Lockloss
Just to be complete, I've made a PDF with several plots. Every time the noise in the noisemons comes on, POP18 drops and it looks like lock is lost. There are some times when the lock comes back with the noise still there, and the buildup of POP18 is depressed. When the noise ends, the buildup goes back up to its normal value. The burst of noise in the OSEMs seems to happen each time the noise in the noisemons pops up. The noise is in a few of the noisemons, on M2 and M3.
Non-image files attached to this comment
H1 ISC
evan.hall@LIGO.ORG - posted 17:06, Saturday 28 May 2016 (27436)
Oscillator RFAM is worse since the morning of May 26

Not clear why. It may correspond to a PSL incursion the same morning.

Also, what used to be hooked up to outputs 7 and 8 on the 9 MHz distribution amplifier in the CER? Sheila and I found some terminated attenuators on these ports, but I recall it used to be cabled up somehow.

Non-image files attached to this report
H1 CDS (CDS, TCS)
sheila.dwyer@LIGO.ORG - posted 13:05, Saturday 28 May 2016 (27435)
How to fix TCS chiller tripping problem

Evan H. arrived on site this morning to find the TCS chillers tripped. He reset them and found the same behavior as described in alogs 27381 and 27374.

On the OAF IOP GDS TP screen, both the DAC and ADC bits (as well as DK) were red; the ADC error cleared with a diag reset, but the DAC error would not.  We called Dave and he asked us to check that the DAC outputs were all zero by looking at the DAC MON screens (accessed by clicking the blue buttons labeled D0). This means it is the same DAC problem Nutsinee described.

To fix the problem, you need to stop and start all the models on the front end, including the IOP model.  This can be done by the following steps (a rough sketch of the restart sequence is included after the list):

1) Ideally, check SDF before you kill the models (I forgot this step).

2) Log in to h1oaf0 as controls.

3) Run the script /etc/kill_models.sh and wait for all the models to be shut down in the correct order, with the IOP model last.

4) Run the script /etc/start_models.sh.

5) Dave said that for the PEM model only, we restore the settings by loading OBSERVE.snap and hitting LOAD TABLE +EDB.  Since I forgot to check SDF before killing the models, I am using the automatic BURTs to restore the rest of the models.  Confusingly, the automatic BURTs always appear to indicate that there are no diffs, because all of the channels are unmonitored in them; to actually check, you need to select the full table, select "all mon", then set the table back to showing setting diffs.  Time machine doesn't work for the SDF screens, which would be handy in a situation like this.

6) Go to the mezzanine and follow the instructions at https://lhocds.ligo-wa.caltech.edu/wiki/TCS to restart the chillers and the laser controllers near the TCS tables.
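
For completeness, the core of steps 2 through 4 boils down to something like the sketch below. This is only an illustration (in practice we ran the scripts by hand in an ssh session), and it assumes you can ssh to h1oaf0 as controls.

# Sketch of the front-end model restart in steps 2-4 above. In practice this
# was done interactively; treat this as documentation, not a tested tool.
import subprocess

HOST = 'controls@h1oaf0'

def run_remote(cmd):
    """Run a command on the front end over ssh and stop if it fails."""
    print('Running on %s: %s' % (HOST, cmd))
    subprocess.run(['ssh', HOST, cmd], check=True)

# Stop all models (the script shuts them down in order, IOP model last) ...
run_remote('/etc/kill_models.sh')
# ... then bring them all back up.
run_remote('/etc/start_models.sh')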

H1 General
edmond.merilh@LIGO.ORG - posted 21:30, Friday 27 May 2016 (27434)
Shift Summary - Evening
TITLE: 05/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Unknown
INCOMING OPERATOR: None
SHIFT SUMMARY:
  • Tour group in the Control Room.
  • Tara also gave a private tour to a couple of friends.
  • Work on TCS is done for the evening but not complete.
  • Sheila and Evan engaged in ongoing investigation of locklosses occurring around the ENGAGE_SRC_ASC stage.
  • ~2:15 UTC Going-away party for Brynley has me at the helm, solo. I was instructed to see how long I can keep things locked (with not much hope). Otherwise, I can call it if I'm not having any luck.
  • 3:13 UTC EY 0.3 micron dust alarm
  • 3:00 UTC Patrick left. I didn't realize he'd remained.
  • DRMI can get past its ASC stage.
  • After the last DRMI lockloss I now have NO IR flashes in the arms. I guess it's time I called it a night.
LOG:
H1 TCS (ISC)
nutsinee.kijbunchoo@LIGO.ORG - posted 19:08, Friday 27 May 2016 (27433)
CO2Y table alignment work

Kiwamu, Tega, Nutsinee

 

Quick Conclusion: We are done for the day. The clipping is pretty much fixed but the heating profile remains non-uniform. We will resume the work on Tuesday. CO2Y power has been set to zero while CO2X remains at its nominal power. The beam dump has been put back in front of the FLIR camera.

Details: Today we went back to the table and tried to find the clipping point somewhere around the M4 and M4A mirrors because of the low transmissivity we saw the day before (alog 27369). Seeing that the beam profile looks good reflecting off M4, we measured the power again using a power meter with a bigger aperture. We measured more power this time and the transmissivity from point 2 to point 3 wasn't as bad (78% instead of 35% -- see first attachment). We moved on to fix the horizontal clipping on M5, starting by adjusting the M4A mirror. After centering the beam on M5 we moved M5, M6, and moved the annulus mask position sensor away from the beam path by about a centimeter. We adjusted BS1 to align the beam through both irises and fine-tuned the alignment using the steering mirror between BS1 and the first iris. We adjusted M3 slightly to center the beam spot on the FLIR camera screen. Looking at the camera image we noticed the beam profile has a temperature gradient of 2 deg C from lowest to highest when the power was 0.15 W at the screen. We are not sure if this is critical but we can improve it on Tuesday. The beam dump was put back in place before we closed out.

 

Also, by fixing this clipping we improved the maximum CO2Y power to the ITM from ~2.5 W to ~3.7 W.

 

FLIR image yesterday

 

FLIR image today (I don't know why the image rotates. Use your imagination.)

 

 

Images attached to this report
LHO VE (VE)
david.barker@LIGO.ORG - posted 16:13, Friday 27 May 2016 (27432)
CP3 refill reminders schedule changes

The CP3 refill reminders (emails and cell phone text messages) have been rescheduled from every other day (actually every odd-day of the year) to Mon, Wed, Fri. This means that reminders will no longer be sent during weekends. The change was requested by the vacuum group.

LHO General
patrick.thomas@LIGO.ORG - posted 16:11, Friday 27 May 2016 (27431)
Ops Day Summary
13:55 UTC Chris S. opened high bay outside door for approx. 30 min to remove barrels
Mode cleaner lost lock, NPRO noise eater went out of range
15:22 UTC Jim W. to LVEA to toggle noise eater switch
15:28 UTC Jim W. done
15:55 UTC Betsy to LVEA to look for equipment
16:11 UTC Travis to HAM6 with tape measure
16:22 UTC Jim B. to staging building
16:28 UTC Travis and Betsy done
16:42 UTC Nutsinee and Kiwamu to TCSY CO2 table
17:02 UTC Jeff B. and Jason using forklift by mechanical room
17:32 UTC Jim B. back
17:55 UTC Keita and Haocun to HAM6 to take measurements
18:02 UTC Jeff B. and Jason done
18:09 UTC Kyle to LVEA to look for property tag
18:11 UTC Boom lift delivery through gate
18:55 UTC Kyle back. Kyle going to mid Y.
19:05 UTC Nutsinee and Kiwamu back
19:25 UTC Kyle back from mid Y
20:05 UTC Jeff B. using forklift near mechanical building
20:21 UTC Christina to open OSB receiving door
20:31 UTC Chandra and Gerardo to mid Y to fill CP3 and look at equipment in building
20:51 UTC Kiwamu and Nutsinee back to TCSY CO2 table
21:01 UTC Keita to CER to check ISC racks
21:13 UTC Jeff B. done
21:13 UTC Chandra and Gerardo done

Sheila and Evan H. have been troubleshooting locklosses. Nutsinee and Kiwamu have been working on TCSY CO2.
H1 General
edmond.merilh@LIGO.ORG - posted 16:08, Friday 27 May 2016 (27430)
Shift Summary - Evening Transition
TITLE: 05/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
    Wind: 17 mph gusts, 7 mph 5-min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.07 μm/s 
QUICK SUMMARY:
LHO VE
chandra.romel@LIGO.ORG - posted 14:28, Friday 27 May 2016 (27428)
CP3 LLCV decrease
Lowered LLCV value from 20% to 19% because exhaust pressure was reading 1.2 psi. 

Chandra and Gerardo made arrangements for CP3 fill on Monday, Memorial Day.
LHO VE
chandra.romel@LIGO.ORG - posted 14:15, Friday 27 May 2016 (27427)
CP3 overfill
2pm local

1/2 turn open LLCV bypass - took 22 sec. to overfill CP3. 

H1 CDS
david.barker@LIGO.ORG - posted 12:16, Friday 27 May 2016 - last comment - 15:53, Friday 27 May 2016(27425)
Staging building SUS test stand restarted

Jim, Dave:

Thursday afternoon and this morning we brought the Staging Building SUS test stand back to life. The front end machine (bscteststand2) and its IO Chassis started with no problems. Most of our work was in getting the workstation running to permit DTT and DATAVIEWER to run. The original SUS workstation, which used to be in the office area, has been repurposed. We located the old SEI workstation in the DTS and pressed that into SUS service. In order to run the workstation outside of the clean area but not in the office, we have temporarily set up in the communications closet. Next week, when the area behind the large roll-up door becomes available, we will move the workstation there to avoid the closet's cooling issues.

Originally a NAT router (bisbee) in the FEC rack was connected to the GC switch via a wall outlet. We have decommissioned the NAT router, and used the ethernet run to the closet as the way of hooking the workstation up to the iMac and the FEC. We will not be replacing the NAT router; this test stand will no longer be accessible from GC.

Next week Betsy will be able to use the iMac in the clean room to X-forward DTT sessions running on the workstation.

Comments related to this report
david.barker@LIGO.ORG - 15:53, Friday 27 May 2016 (27429)

The workstation in the comms closet has been powered down and the door has been closed.

H1 SEI
hugh.radkins@LIGO.ORG - posted 09:41, Friday 27 May 2016 (27423)
EndY BRS Drift--May want to recenter before power outage

Attached are the trends for the BRSY drift.  We are close to -10000 counts and drifting about 5000 counts per week.  Next weekend's power outage may suggest we just wait until after power recovery.  We will check with UW whether we should recenter before or after, as well as the power recovery procedure.

Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 08:43, Friday 27 May 2016 (27422)
morning meeting notes
TCS CO2 Y arm alignment in the morning (Nutsinee)

Vacuum group will replace remaining pneumatic LLCV actuators with electrically driven actuators on Tuesday (Chandra)

Boom lift arriving on site this morning from Sun Valley rentals (Bubba)
H1 PSL
peter.king@LIGO.ORG - posted 08:01, Friday 27 May 2016 (27420)
PSL Beckhoff computer reboot
To fix a problem with the readouts for the diode chiller flow and conductivity, the PSL Beckhoff computer was rebooted this morning.

I also took advantage of the opportunity to reset the clock on the computer from Central European time to US Pacific.

At first glance it looks like rebooting the computer fixed the signal(s).

The laser was brought back to life; by the time I got back to the Control Room all the servos were locked and engaged.



Jeff, Peter
H1 ISC
terra.hardwick@LIGO.ORG - posted 02:33, Friday 27 May 2016 - last comment - 12:43, Friday 27 May 2016(27416)
First successful electrostatic damping of PI at LHO, new PI mode?

Ross, Tega, Evan, Terra

Tonight we successfully damped a known parametric instability at 15540.6 Hz with the newly implemented ESD damping scheme. 

In April last year, this mode was detected in the X-arm during a 15W lock. Ultimately it was avoided by turning on the ETMX ring heater (0.5 W requested power top and bottom), shifting the optical mode peak down in frequency and away from ~15540 Hz mechanical modes. To test the new active damping scheme, we turned off the ETMX ring heater, allowed 15540.6 Hz to start to ring up during a 24W lock, and damped it by driving the UR and LL quadrants of the ETMX ESD.  

Below we tracked the amplitude of the ~15540.6 Hz mode. The leftmost action is the important part: first we briefly rang it up manually (gain -1000) before switching the gain sign (gain +500) to rapidly damp it. Attached images show the power spectrum before damping and immediately after.  We had planned to ring it up and down again to get a better idea of the gain settings, but with the newly lowered peak the line tracker got confused by another peak ~1 Hz away, and then we lost lock shortly after for unrelated reasons.

Briefly, the damping setup: we grab the mechanical mode signal from the OMC transmission DCPDs (H1:OMC-PI_DCPD_64KHZ_A) and send it to the relevant end station, downconverting before the trip and upconverting after, using synced oscillators set approximately to the known mechanical mode frequency. There, the mechanical mode peak is tracked with iWave. The output is run through a damping filter for gain control and finally sent to actuate on the UR and LL quadrants of the ETM LNLV ESD. Overall, we get early detection of PI from the OMC and actuation on the test mass at exactly the mechanical mode frequency that is ringing up, with opposite phase, enabling damping to happen earlier in the lock acquisition process, before the PI has had much time to ring up. This is necessary as we increase power yet remain working with relatively low actuation force from the ESDs.
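
For illustration, here is a conceptual offline sketch of that chain (demodulate the DCPD signal near the mode, keep the slowly varying envelope, re-modulate with a chosen gain). The sample rate matches the 64 kHz channel, but the filter settings and gain are placeholders, and the real-time system uses synced oscillators and the iWave tracker rather than this code.

# Conceptual sketch of the PI damping signal chain described above, run on
# recorded data rather than in the real-time system. The oscillator frequency,
# filter settings, and gain are placeholders; the real implementation uses
# synced real-time oscillators and the iWave line tracker, not this code.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 65536.0          # OMC PI DCPD sample rate (64 kHz channel)
f_pi = 15540.6        # mechanical mode frequency being damped (Hz)
gain = 500.0          # damping gain; the sign sets damping vs anti-damping

def damping_drive(dcpd, fs=fs, f0=f_pi, gain=gain):
    """Down-convert the DCPD signal at f0, isolate the slowly varying
    mode envelope, then re-modulate at f0 to form an ESD drive signal."""
    t = np.arange(len(dcpd)) / fs
    lo_i = np.cos(2 * np.pi * f0 * t)
    lo_q = np.sin(2 * np.pi * f0 * t)
    # Complex demodulation followed by a low-pass to keep only the mode envelope
    sos = butter(4, 10.0, btype='low', fs=fs, output='sos')
    i_bb = sosfiltfilt(sos, dcpd * lo_i)
    q_bb = sosfiltfilt(sos, dcpd * lo_q)
    # Re-modulate at the same frequency; the phase/gain choice sets the damping
    return gain * (i_bb * lo_i + q_bb * lo_q)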

PI at 15520 Hz: While working with 15540.6 Hz, we witnessed a mode at 15520 Hz begin to ring up as well. During a second 24 W lock, we allowed both to ring up for ~15 min; they grew rapidly at similar rates, ultimately producing a strong 20 Hz comb and breaking the lock. We will investigate (and attempt to damp) more this weekend.

We didn't get another good lock to test on tonight and we're still working out issues so I've left the damping system in manual mode and have turned the ETMX ring heater back on. 

Images attached to this report
Comments related to this report
ross.kennedy@LIGO.ORG - 11:33, Friday 27 May 2016 (27424)

We used offline data from the same time as this damping and tracked the amplitude and frequency of the line. At around 700 s you see the same response as discussed above. From the frequency tracking you can see that the amplitude is just from the 15540 Hz mode, i.e. our line tracker was locked on this mode. The scale of the amplitude in this plot compared to the above plot differs by ~sqrt(2) due to a forgotten factor in our h1susetmxpi model.

Images attached to this comment
aidan.brooks@LIGO.ORG - 12:43, Friday 27 May 2016 (27426)

Here are the estimates for the HOM spacing (in Hz) for the X and Y arm cavities over the last two days. 

Remember (a sketch of the spacing formula follows this list):

  • Self-heating creates a bulge on the surface that makes the concave optic surface flatter and hence ROC becomes larger
    • responsible for short time scale spikes
  • RH turning on makes the surface more concave, hence ROC becomes smaller
    • around 3/4 of the way through this time series
  • RH turning off makes the ROC larger
    • about 1/3 of the way through this time series
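
As a reference for how the ROC feeds into these estimates, below is a sketch of the standard two-mirror-cavity mode-spacing formula. The arm length and mirror ROCs are nominal cold values and are assumptions on my part; the actual estimates use the modelled thermal state of the optics.

# Sketch of the standard transverse-mode-spacing formula behind these
# estimates. The arm length and mirror ROCs below are nominal cold values
# and are assumptions here, not the values used for the attached plots.
import numpy as np

c = 299792458.0  # speed of light, m/s

def hom_spacing(L, R_itm, R_etm):
    """First-order higher-order-mode spacing of a two-mirror cavity (Hz)."""
    g1, g2 = 1.0 - L / R_itm, 1.0 - L / R_etm
    # One-way Gouy phase; the sign of g picks the branch of the arccos
    zeta = np.arccos(np.sign(g1) * np.sqrt(g1 * g2))
    return (c / (2.0 * L)) * zeta / np.pi

# Nominal aLIGO arm numbers (assumed): L ~ 3994.5 m, ROCs ~ 1934 m / 2245 m.
# Gives ~3.2e4 Hz; mode offsets then fold modulo the ~37.5 kHz FSR.
print(hom_spacing(3994.5, 1934.0, 2245.0))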

Images attached to this comment
H1 INJ (DetChar, INJ)
keith.riles@LIGO.ORG - posted 16:23, Thursday 26 May 2016 - last comment - 08:26, Friday 27 May 2016(27409)
CW injections - 24 hours with new actuation scheme
Following up on yesterday's restart of CW HW injections with a new actuation scheme,
here are comparisons over 24-hour intervals of the excitation channel H1:CAL-PINJX_HARDWARE
with what it was previously when a time-domain inverse actuation filter was used.
One benefit for transient search groups is that if sporadic CW injection dropouts are seen again in O2, 
they should not induce nasty glitches in DARM (see figures 11-13 below).

The bottom line for CW searches is that things look close to what is expected, but the amplitudes of the
highest-frequency pulsar injections (above 1 kHz) are significantly lower than before.

The small residual discrepancy does not seem to be explained by the difference
between the old and new inverse actuation filter curves that Evan G. posted yesterday. Perhaps both the old and new inverse actuation filters simply amplify
the 1000-2000 Hz band too much (by 20-30%)? 

The figures below show 24-hour second-trend plots of the excitation channel envelope and
4-minute spectrum snapshots taken at 6-hour intervals, along with samples of sudden shutting
off of the injections.

Figure 1 - 24-hour trend (min/mean/max) of the channel for old actuation, showing the envelope of injections,
which is affected by the rotating antenna pattern of the interferometer w.r.t. 15 different
points on the sky with various intrinsic source polarization and strengths.

Figure 2 - 24-hour trend for new actuation - one can see a small drop in amplitude, driven by the highest
frequency pulsars for which the inconsistency between old inverse actuation filter and new actuation function is largest

Figure 3 - 4-minute spectrum at 22:34 UTC on May 24 (old actuation)

Figure 4 - 4-minute spectrum at 22:30 UTC on May 25 (new actuation) - approximately one sidereal day later

Figure 5 - 4-minute spectrum at 04:34 UTC on May 25 (old actuation)

Figure 6 - 4-minute spectrum at 04:30 UTC on May 26 (new actuation) - approximately one sidereal day later

Figure 7 - 4-minute spectrum at 10:34 UTC on May 25 (old actuation)

Figure 8 - 4-minute spectrum at 10:30 UTC on May 26 (new actuation) - approximately one sidereal day later

Figure 9 - 4-minute spectrum at 10:34 UTC on May 25 (old actuation)

Figure 10 - 4-minute spectrum at 10:30 UTC on May 26 (new actuation) - approximately one sidereal day later

Figure 11 - Glitch induced by sudden shutoff of CW injections with old inverse actuation filter

Figure 12 - Vertical zoom of glitch

Figure 13 - No glitch induced by shutoff of CW injections with new direct application of inverse actuation function


Note that the new trend (Figure 2) is a little smoother than the old one, as expected, 
without the amplification of tiny glitches seen with the old inverse filter. Another
manifestation is the much cleaner noise floors seen in the new spectra, away from
the injected lines. 
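
For anyone wanting to reproduce this kind of old-versus-new spectrum comparison, a rough gwpy sketch is below (the posted plots and trends were made with ldvw, not this code). The FFT settings are arbitrary choices of mine, and the channel name is taken from the text above.

# Rough sketch: compare 4-minute spectra of the CW excitation channel taken
# one sidereal day apart, before and after the actuation change. Not the
# actual procedure used for the attached figures.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:CAL-PINJX_HARDWARE'
DURATION = 240  # 4-minute snapshots, as in the figures

t_old = to_gps('2016-05-24 22:34:00')  # old time-domain inverse actuation filter
t_new = to_gps('2016-05-25 22:30:00')  # new actuation, ~1 sidereal day later

asds = {}
for label, start in [('old', t_old), ('new', t_new)]:
    data = TimeSeries.get(CHANNEL, start, start + DURATION)
    asds[label] = data.asd(fftlength=60, overlap=30)

# The ratio highlights any broadband difference, e.g. above 1 kHz
ratio = asds['new'] / asds['old']
ratio.plot().savefig('pinjx_actuation_comparison.png')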
Images attached to this report
Comments related to this report
keith.riles@LIGO.ORG - 08:26, Friday 27 May 2016 (27421)INJ
At Rick's request, I am attaching more information about the desired injection strengths.
Attached are a time series plot and a CSV file for 10 seconds of H1:CAL-PINJX_CW on May 24, when
the old time-domain IAF was in use, along with a spectrum and CSV file for one minute
starting at the same time (rectangular window, no overlap, amplitude spectrum, not density).
Graphs and files were generated via ldvw.

This sample of May 24 data starting at 22:34 UTC corresponds closely to what should
have been injected on May 25 at 22:30 UTC, i.e., the first pair of spectral snapshots above.

Images attached to this comment
Non-image files attached to this comment