LHO General
corey.gray@LIGO.ORG - posted 09:20, Monday 06 June 2016 - last comment - 11:44, Monday 06 June 2016(27563)
Running Alog Of CDS Power Outage Recovery

This will be a running log of activities as they occur for bringing back CDS after the planned outage over the weekend.  Refer to Richard's alog for what happened over the weekend and the prep.

We started the shift with lights, front gate operation, wifi, etc.

Comments related to this report
corey.gray@LIGO.ORG - 09:28, Monday 06 June 2016 (27564)

Step 6.3:  LVEA Network Switches 

This box appeared to be running (green lights & no errors).

corey.gray@LIGO.ORG - 09:30, Monday 06 June 2016 (27565)

Step 6.4 Work Station Computers

Work Stations powered in Control Room----->  But BEFORE these were powered up, the NFS File Server in the MSR should have been checked.  This needs to be added to the procedure.

We have found several issues due to computers being started before the NFS File Server was addressed.  These items had to be brought up again:

  • EPICS Gateway
  • NDS

----->  The items above now allow us to run Dataviewer and also bring up the Vacuum Overview.
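For the procedure update, a pre-flight check along these lines could be scripted (a minimal sketch; /opt/rtcds appears elsewhere in this log, while the "/ligo" mount is a hypothetical placeholder):

#!/usr/bin/env python
# Minimal pre-flight check that NFS is being served before workstations come up.
# Mount points are examples; substitute the real workstation NFS mounts.
import os
import sys

NFS_MOUNTS = ["/opt/rtcds", "/ligo"]  # "/ligo" is a hypothetical placeholder

def nfs_ok(path):
    """True if path is a live mount point whose contents can be listed."""
    if not os.path.ismount(path):
        return False
    try:
        os.listdir(path)  # stalls or raises OSError if the NFS server is down
    except OSError:
        return False
    return True

bad = [p for p in NFS_MOUNTS if not nfs_ok(p)]
if bad:
    sys.exit("NFS not ready -- hold off on workstations: " + ", ".join(bad))
print("NFS mounts look healthy; OK to power up workstations.")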

Step 6.5  Wiki & alog & CDS Overview MEDM

They are back.  Overview on the wall (thanks, Kissel).

Update:  The wiki was actually NOT back since it was started before the NFS File Server was started.  So the wiki was restarted.

corey.gray@LIGO.ORG - 09:43, Monday 06 June 2016 (27566)

Sec 8:  EX  (Richard & TJ)

They have run through the document & are moving on to other outbuildings.  On the overview for EX, we see Beckhoff & SUSAUX are back.

Sec 10:  MX (Richard & TJ)

Haven't heard from them, but we see that PEMMX is back on the Overview.

corey.gray@LIGO.ORG - 09:57, Monday 06 June 2016 (27567)

Sec 9:  EY (Richard & TJ)

This is back online.  So now we can start front ends in the Corner Station!

They are now heading to MY.....

Sec 11:  MY (Richard & TJ)

...but it looks like MY is already back according to the CDS Overview.

corey.gray@LIGO.ORG - 10:09, Monday 06 June 2016 (27568)

STATUS at 10AM (after an hour of going through procedure):

Most of the CDS Overview is GREEN, -except- the LSC.  Dave said there were issues with bringing the LSC front end back up and he will need to investigate in the CER.

End Stations: (TJ, Richard)

  • ALS lasers powered on (which wasn't mentioned in the procedure).
  • The high-voltage power supplies are intentionally left off so the VAC team can check their system first.
corey.gray@LIGO.ORG - 10:15, Monday 06 June 2016 (27569)

7.5 Front Ends (Updated from Kissel)

Everything is looking good on the CDS Overview.

Sec 7.8:  Guardian (updated from Kissel)

Working on bringing this back.  Some of these nodes need data from LDAS (namely, ISC), so some of these may take a while.

BUT, basic nodes such as SUS & SEI may be ready fairly soon.

corey.gray@LIGO.ORG - 10:18, Monday 06 June 2016 (27570)

7.1 CORNER STATION DC POWER SUPPLIES

These have all been powered ON (Richard).

(Still holding off on End Stations for VAC team to let us know it's ok.)

corey.gray@LIGO.ORG - 10:49, Monday 06 June 2016 (27571)

EY (Richard)

High-voltage power supplies & ESD are back (this step is not in the recovery document).

corey.gray@LIGO.ORG - 11:10, Monday 06 June 2016 (27572)

EX (Richard)

High-voltage power supplies & ESD are back (this step is not in the recovery document).

corey.gray@LIGO.ORG - 11:36, Monday 06 June 2016 (27573)

There appear to be some ordering issues here.  (Jeff, Nutsinee, and others are working on fixing the order in the document.)

1)  We held off on addressing DC high power because we wanted to wait for the Vacuum System Team at the LVEA (for the vacuum gauge) and at the End Stations (for the vacuum gauge & the ESD).

2)  We held off on some Corner Station items because they are on the Dolphin Network.  To address the End Stations FIRST, Richard & TJ were assigned to head out, start the End Station sections of the document, & get their Dolphin Network items online.  Once they were done, Dave started on the Corner Station front ends on the Dolphin network.

Extraneous items:

  • Conlog was brought up (Patrick)
corey.gray@LIGO.ORG - 11:41, Monday 06 June 2016 (27574)

Sec 7.6 LVEA HEPI Pump Station Controller

After Hugh's HEPI Maintenance, Jim brought this back.

Sec 8.5 EX HEPI Pump Station Controller

After Hugh's HEPI Maintenance, Jim brought this back.

Sec 9.5 EY HEPI Pump Station Controller

After Hugh's HEPI Maintenance, Jim brought this back.

 
corey.gray@LIGO.ORG - 11:44, Monday 06 June 2016 (27575)

UPDATE:

At this point (11:42am), we appear to be mostly restored with regard to the CDS side.  Most of the operations subsystems are back (we are mostly green on the Ops Overview).  The VAC group's Annulus Ion Pumps are back to using site power.

Lingering CDS/Operations items:

  • NDS may still be down
  • PSL restoration is still underway (Jason, Ed)
H1 ISC (ISC, SUS)
rich.abbott@LIGO.ORG - posted 08:54, Monday 06 June 2016 - last comment - 12:01, Monday 06 June 2016(27562)
Installation of ITM ESD
Filiberto, Ed, Rich

Installation is complete for the ITM ESD Driver for ITMX and ITMY.  A short was found on the bias connection to ITMX (see attached sketch to clear up the pin numbering of the legacy connector).  The shield was cut and insulated on the 2ft section of cable up near the vacuum feedthrough for this connection.  All nominally 24V connections were HIPOT tested to 500V, and the high voltage bias connections were tested to 700V.

An ADC (ADC07) was added to SUS-C5-14 to permit the ITM ESD readback channels to be implemented per the latest wiring diagram (D1500464).

At time of writing, no aspects of the installed system have been verified in situ.  This is the next item on the checkout.

Some useful system constants (total for both drivers):
+/- 18VDC Nominal Quiescent Current -> +460mA, -420mA
+/- 24VDC Nominal Quiescent Current -> +40mA, -40mA
+/- 430VDC Nominal Quiescent Current -> +/- 5.6mA

Serial Numbers:
ITMY ESD Driver (D1600092) (installed in SUS-R6) -> S1600266
ITMX ESD Driver (D1600092) (installed in SUS-R5) -> S1600267
ITM PI AI Chassis (D1600077) (installed in SUS-C6 U23) -> S1600245
Images attached to this report
Comments related to this report
carl.blair@LIGO.ORG - 12:01, Monday 06 June 2016 (27576)

There is some noise on the LL quadrant of the ITMX readback (H1:IOP-SUS_AUX_B123_MADC7_TP_CH4).  It appears at ~8 kHz with a peak of ~3000 counts.
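A quick way to confirm the feature (a sketch using gwpy; assumes the channel is reachable through NDS, and the times below are placeholders):

# Sketch: check the ~8 kHz feature on the ITMX LL readback via NDS.
# Times are placeholders around when the noise was seen.
from gwpy.timeseries import TimeSeries

chan = "H1:IOP-SUS_AUX_B123_MADC7_TP_CH4"
data = TimeSeries.fetch(chan, "2016-06-06 18:00", "2016-06-06 18:04")

asd = data.asd(fftlength=1, overlap=0.5)  # 1 Hz resolution
plot = asd.plot()
plot.gca().set_xlim(6e3, 10e3)            # zoom in around 8 kHz
plot.savefig("itmx_ll_8khz.png")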
 

Images attached to this comment
LHO General
richard.mccarthy@LIGO.ORG - posted 08:11, Monday 06 June 2016 (27561)
LHO Power Outage
As noted by Dave B., we began the orderly shutdown of the site at 1500 local time on Friday.
The power actually went out at 1715 and we were almost ready.
Setting up the vacuum system on the generators did not go as smoothly as it could have.
1. Generator output was too low for the UPS to operate on: the UPS looks for 115V and we were at 112V.  We bypassed the UPS and ran the 24V DC supplies directly from the generator.
2. The GFI outlet on the EY generator would not function, so it was replaced.
By 1930 we were comfortable leaving the site.

Sat.
0800 Site was dark; all buildings, generators, vacuum system in good shape
1800 New switch at substation installed.

Sun.
0800 Site was dark; all buildings, generators, vacuum system in good shape
1200 Testing of switch complete (hiccups included)
1300 Site power restored.
Facility cooling began.
Converted vacuum system to building power; generators turned off.
1500 left for day.
Other Vacuum details by Vacuum group.
LHO OpsInfo
carlos.perez@LIGO.ORG - posted 15:45, Friday 03 June 2016 (27559)
h1ecatc1
During the power-off procedure h1ecatc1 reported it had unsaved changes.  As the changes were unknown at the moment, we ignored them and continued with the normal shutdown.
H1 General
cheryl.vorvick@LIGO.ORG - posted 15:40, Friday 03 June 2016 - last comment - 15:57, Friday 03 June 2016(27558)
Ops Day Summary: weekend prep activities as of 22:40UTC

State of H1: systems are off

 

Weekend Prep activities:

* indicates there was some small glitch in our procedure

Comments related to this report
cheryl.vorvick@LIGO.ORG - 15:57, Friday 03 June 2016 (27560)
  • 22:47 - Control Room screens off
  • 22:48 - Fil to mids and ends to power off
  • 22:56 - Control Room machines off
  • 22:57 - OPS workstation / aux computers off
H1 ISC
cheryl.vorvick@LIGO.ORG - posted 15:27, Friday 03 June 2016 (27557)
H1 alignment snapshot:

This alignment snapshot is the final lock before the weekend.

Images attached to this report
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 15:26, Friday 03 June 2016 (27556)
All TCS systems (HWS, CO2) shutdown

Ready for the power outage. Safe.snap last updated June 1st.

HWS:

CO2:

H1 CDS
david.barker@LIGO.ORG - posted 15:15, Friday 03 June 2016 (27555)
orderly shutdown of LHO CDS systems has started

we are starting the shutdown procedure for LHO CDS.

H1 SUS
jeffrey.kissel@LIGO.ORG - posted 15:04, Friday 03 June 2016 (27554)
Beam Splitter (BSFM) Library Part Updated to Handle Newly Named TACQ Coil Driver Library Part
J. Kissel, D. Barker 

After the modifications to the HAM triple models to make coil driver filtering individually switchable (see LHO aLOG 27223), I had renamed the library parts in the 
/opt/rtcds/userapps/release/sus/common/models/STATE_BIO_MASTER.mdl
in order to clean up the confusion between the digital control of a modified triple-acquisition driver vs. an unmodified triple-acquisition driver.

However, this renaming destroyed the library link's reference in other models that use the specific block. This was identified by Dave, who was trying to compile the BSFM model in prep for the power outage (the BSFM being the only *other* suspension type that uses the TACQ driver). As such, I copied in the newly renamed part from the library, which restored the link. The model now compiles nicely and has been committed to the userapps repository.
H1 General
cheryl.vorvick@LIGO.ORG - posted 14:58, Friday 03 June 2016 (27553)
Ops Day Summary: update at 21:57UTC

State of H1:  at 21:33UTC locked in ENGAGE_REFL_POP_WFS

 

Activities at 21:33UTC:

 

Activities during the day:

 

H1 locked:

 

H1 unlocked:

H1 SUS (COC, SUS, SYS)
calum.torrie@LIGO.ORG - posted 14:35, Friday 03 June 2016 (27550)
Fiber experiment in optics lab

Travis Sadeki and Calum Torrie

Fiber experiment in optics lab to investigate effect (if any) of using the Top Gun Ionizing Air Gun System in proximity to the LIGO production fibers. For Top Gun information refer to https://dcc.ligo.org/LIGO-T1300687. Note the Temp was 71-73 deg F and the Humidity was ~ 38% on day 1 and day 2. The gas used is ultra high grade pure Nitrogen.
 
Experiment 1.1 (Day 1: June 2nd 2016 2:00pm).
Fiber #1 with Top Gun at 10 psi. Production Fiber with 15 kg x2 (overload test weight). Note this production fiber (#1) was hanging for 2 hours in the test box prior to spraying with the Top Gun. Also note that all of these experiments were conducted in a clean space, not in a dedicated clean room. For Fiber #1 we opened the door and sprayed across the opening (back and forth) for 30s, 8" from the fiber, at a position 1/2 way down the fiber. The door was then closed and we noted the fiber and mass got a small kick from this action. Note this experiment is to mimic what a fiber could be exposed to during a 1st contact peel. However, it should be noted that this is a worst-case scenario, as in general the fiber guard is between the Top Gun and the fiber.
 
Experiment 1.2 (Day 1: June 2nd 2016 2:10pm).
Fiber #2 with Top Gun at 10 psi. Production Fiber with 15 kg x2 (overload test weight). Note this production fiber (#2) was hanging for 2 hours in the test box prior to spraying with the Top Gun. Also note that all of these experiments were conducted in a clean space, not in a dedicated clean room. For Fiber #2 we opened the door for 30s. No spray was performed. Then we closed the door. Same kick seen on the fiber as above. Note this is / was a control for experiment 1.1.
 
Experiment 1.1 (Day 2 am: June 3rd 2016 9:30am).
Fiber #1 still hanging. We then poked the fiber in each box via the peep hole - all good.
Now with Fiber #1 we repeated the experiment from the previous day, again at 10 psi (2nd time) and again for 30s as before.
Note - the fiber was swinging back and forth during this experiment, as a result of the previous poke. All good i.e. fiber still hanging at end.

Experiment 1.2 (Day 2 am: June 3rd 2016 9:40am).
Fiber #2 still hanging. We then poked the fiber in each box via the peep hole - all good.
Experiment 1.2 is now closed.

Experiment 2.1 (Day 2 am: June 3rd 2016 9:45am).
Now with Fiber #2 we increased the regulator to 20 psi. Else same as above. All good i.e. fiber still hanging at end. Note this experiment is to mimic what a fiber could be exposed to during a gap charge removal at an ETM, refer to https://dcc.ligo.org/LIGO-T1500101. However, it should be noted that this is a worst-case scenario, as in general the fiber guard is between the Top Gun and the fiber.
 
Experiment 1.1 (Day 2 pm: June 3rd 2016 1:30pm).
Back to Fiber #1 and again at 10 psi (3rd time). We noted that there is a small swing induced on the fiber / mass by the Top Gun. All good.

Experiment 2.1 (Day 2 pm: June 3rd 2016 1:35pm).
Then again back to Fiber #2. 20 psi 30s again at 8" (2nd time). All good.
 
Conclusion
So far so good: no effect seen on suspended fibers as a result of exposing them to ionized air from the Top Gun at both 10 psi and 20 psi. Travis and Calum.
 
Images attached to this report
H1 INJ (INJ)
keith.riles@LIGO.ORG - posted 14:24, Friday 03 June 2016 - last comment - 20:04, Tuesday 07 June 2016(27548)
Implemented new inverse actuation for CW injections with time delay correction
I have copied Evan's new actuation function to the h1hwinj1 directory currently used
for CW injections: ~hinj/Details/pulsar/O2test/. I used the one that corrects for the
actuation delay:  H1PCALXactuationfunction_withDelay.txt.

For reference, the uncorrected version (no "_withDelay") sits in the same directory,
along with the one we first tried last week: H1PCALXactuationfunction.txt.25may2016.
The perl script that generates the command files in.N (N=0-14) has been updated to
use "_withDelay" and the command files regenerated.

The CW injections have been killed and automatically restarted by monit. Attached 
are second trends before and after the gap showing that things look about the same,
as expected, but there is a small increase in injection amplitude (~5%).
Images attached to this report
Comments related to this report
keith.riles@LIGO.ORG - 20:04, Tuesday 07 June 2016 (27642)INJ
Evan wondered if the ~5% increase in total injection amplitude was dominated
by the highest frequency injection or one at lower frequencies. I took a look 
for this time interval and found that the total amplitude is dominated by
the injection at ~1220.5 Hz. Simply comparing spectral line strengths 
before and after the changeover turned out not to be a robust way to
estimate the frequency-dependent ratio of the new to the old inverse actuation 
function, because some pulsar injections (especially the highest frequency one)
are going through rapid antenna pattern modulations during this period.

But comparing the new to the old spectral line strengths at the same
sidereal time several days later (after power outage recovery) gives
robust measures for a sampling of low-, medium- and high-frequency
injections:

Freq (Hz)   "Old" amplitude (before switchover)   New amplitude (4 sidereal days later)   Ratio (new/old)
 190.90              0.32292                               0.32211                            1.00
 849.00             60.502                                62.344                             1.03
1220.5             299.37                                318.70                              1.06
1393.1             207.50                                224.37                              1.08
1991.2              28.565                                32.788                             1.15
These results seem in reasonable agreement with Evan's expectation of a new/old ratio rising with frequency, reaching 15% at 2 kHz. Plots and text files of 4-minute spectral averages are attached for June 3 before the switchover and for June 7 with the newer inverse actuation function.
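For reference, a minimal sketch reproducing the ratio column of the table (amplitudes copied from the table above; no data access needed):

# Recompute the new/old amplitude ratios from the table above.
lines = {
    # freq (Hz): (old amplitude, new amplitude)
    190.90: (0.32292, 0.32211),
    849.00: (60.502, 62.344),
    1220.5: (299.37, 318.70),
    1393.1: (207.50, 224.37),
    1991.2: (28.565, 32.788),
}
for freq, (old, new) in sorted(lines.items()):
    print("%7.2f Hz: new/old = %.2f" % (freq, new / old))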
Images attached to this comment
Non-image files attached to this comment
H1 SEI
hugh.radkins@LIGO.ORG - posted 14:22, Friday 03 June 2016 (27549)
Wandering Bump in ITMY ISI CPS Stage2 Corner2--Don't see in GS13s of the ITMY

Just did a quick check on the bump I first reported in 27479, and yes it is still there and yes it is still wandering.  Of note, it can move quickly: it went from ~60 Hz to ~80 Hz in less than 30 minutes (2 June, 1055 to 1125 PDT).
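A sketch of one way to follow the bump over time (the channel name is only my guess at the Stage 2 corner-2 CPS readout, so treat it as a placeholder, as are the times):

# Sketch: spectrogram of the ITMY ISI Stage 2 CPS to follow the wandering bump.
# The channel name is an assumption; substitute the actual corner-2 readout.
from gwpy.timeseries import TimeSeries

chan = "H1:ISI-ITMY_ST2_CPSINF_H2_IN1_DQ"  # hypothetical name
data = TimeSeries.fetch(chan, "2016-06-02 10:30", "2016-06-02 11:30")

spec = data.spectrogram(stride=30, fftlength=4, overlap=2) ** (1 / 2.)
plot = spec.plot(norm="log")
plot.gca().set_ylim(40, 100)  # the bump moved from ~60 Hz to ~80 Hz
plot.savefig("itmy_cps_bump.png")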

Images attached to this report
H1 CDS (SUS)
james.batch@LIGO.ORG - posted 09:53, Friday 03 June 2016 - last comment - 13:36, Friday 03 June 2016(27535)
Additional ADC added to h1susauxb123
An additional ADC was added to the h1susauxb123 I/O chassis.  The new card was added to the expansion board at bus 1-2, making it ADC1 in relation to the existing ADC cards.  All ribbon cables between the ADC and interface cards were rearranged with the exception of ADC0 so that none of the cables to the AA chassis needed to be swapped and the h1susauxb123 model doesn't need to be modified.
Comments related to this report
keith.thorne@LIGO.ORG - 10:06, Friday 03 June 2016 (27536)
Which ECR is this done under (I assume it is the ITM ESD install)?
david.barker@LIGO.ORG - 11:19, Friday 03 June 2016 (27541)

I'm assuming it is covered by ECR E1600064 though it is not clear if that ECR shows the additional ADC channels for the sus-aux system needed to support the PI-ESD install on the ITMs. 

james.batch@LIGO.ORG - 13:36, Friday 03 June 2016 (27547)
I should have made it apparent that there are now 8 ADC cards installed in the h1susauxb123 I/O chassis. My original post omitted this important detail.  The newly installed card is ADC7.
H1 General
cheryl.vorvick@LIGO.ORG - posted 09:42, Friday 03 June 2016 - last comment - 14:28, Friday 03 June 2016(27533)
Morning Meeting:

Activities:

Currently:

Work still planned for today:

Tours:

Monday timeline:

Comments related to this report
hugh.radkins@LIGO.ORG - 14:28, Friday 03 June 2016 (27551)

To be more clear regarding the HEPI task I will perform Monday morning, see WP 5910.

This is the Capacitive Accumulator Pressure checking which requires the HEPI Pumps off.  This is done only every 3 to 6 months.

H1 SUS (CDS, SUS)
sheila.dwyer@LIGO.ORG - posted 19:32, Thursday 02 June 2016 - last comment - 14:59, Friday 03 June 2016(27521)
SR3 glitches do seem to be a problem

Tonight we are again having random, fast locklosses in different configurations.  We are also seeing some large glitches that don't knock us out of lock.  Again they seem to correspond to times when there is something noisy in SR3 channels.  While it's not clear that the SR3 channels are seeing real optic motion, it is probably worth swapping some electronics as a test, because these frequent locklosses are making commissioning very difficult.

See 27437 and Andy Lundgren's comments

The first attached plot shows that something about this channel changed on May 10th, and that there have been noisy periods since then.  The next two are two more examples of sudden unexplained locklosses where something shows up in SR3.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 22:17, Thursday 02 June 2016 (27526)

SR3 is not the problem after all. 

Kiwamu and I unplugged the cables from the Sat amp to the chamber for both M2 and M3, and the locklosses and glitches still happened.  The good news is that Kiwamu seems to have found a good clue about the real culprit.

kiwamu.izumi@LIGO.ORG - 22:42, Thursday 02 June 2016 (27527)ISC, PSL

Our current theory is that the locklosses are due to the ISS, which shuts itself off for some reason at random times, at a rate of about once every 10 minutes. This causes a glitch in the laser intensity. Before a lockloss, there was a fast glitch (~milliseconds) in the PRCL, SRCL and CARM error signals. That made us think that the laser field may be glitching. Indeed, we then found that the ISS had gone off automatically at the same time as the glitch and seemingly had caused the subsequent lockloss. We then tested the stability of the ISS in a simpler configuration where only the IMC is locked. We saw glitches of the same type in this configuration too.

In order to localize the issue, we are leaving the ISS open overnight to see if some anomaly is there without the ISS loop.

Images attached to this comment
kiwamu.izumi@LIGO.ORG - 09:47, Friday 03 June 2016 (27532)ISC, PSL

Conclusion: it was the ISS, which had too low a diffraction power.

In the overnight test, I did not find glitchy behavior in the laser intensity (I looked at IMC-MC2_TRANS_SUM). This means that the ISS first loop is the culprit. Looking at the trend of the recent diffraction power, it kept decreasing over the past few days from 12-ish to almost 10% (see the attached). As Keita studied before (alog 27277), a diffraction power of 10% is about the value where the loop can go unstable (or hit too low a diffraction value and shut off the auto-locked loop). I increased the diffraction power to about 12% so that the variation in the diffraction power looks small to my eyes.
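A sketch of the trend check described above (the channel name is an assumption; substitute the actual diffracted-power channel):

# Sketch: trend the ISS diffracted power over the past few days and mark the
# ~10% level where the loop can go unstable (alog 27277). Channel name assumed.
from gwpy.timeseries import TimeSeries

chan = "H1:PSL-ISS_DIFFRACTION_AVG"  # assumed name for the diffracted power
trend = TimeSeries.fetch(chan, "2016-05-31 00:00", "2016-06-03 16:00")

plot = trend.plot()
ax = plot.gca()
ax.set_ylabel("Diffracted power (%)")
ax.axhline(10, linestyle="--")  # approximate instability threshold
plot.savefig("iss_diffraction_trend.png")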

Images attached to this comment
keita.kawabe@LIGO.ORG - 14:59, Friday 03 June 2016 (27552)

Note that there are two reasons that the diffracted power changes, i.e. intentional change of the set point (left top) and the HPO power drift (right bottom). When the latter goes down, ISS doesn't have to diffract as much power, so the diffraction goes lower.

In the attached, at the red vertical line somebody lowered the diffraction for whatever reason, and immediately the ISS got somewhat unhappy (you can see it by the number of ISS "saturation" in right middle panel).

Later at the blue vertical line (that's the same date when PSL air conditioning was left on), the diffraction was reduced again, but the HPO power went up, and for a while it was OK-ish.

After the PSL was shut down and came back, however, the power slowly degraded, the diffraction went lower and lower, and the number of saturation events sky-rocketed.

Images attached to this comment
H1 SUS
rana.adhikari@LIGO.ORG - posted 15:56, Wednesday 01 June 2016 - last comment - 09:59, Wednesday 06 July 2016(27488)
SRM composite mass thermal noise

Rana, Evan

We measured the SRM to SRCL TF today to find the frequency and Q of the internal mode. Our hypothesis is that the thermal noise from the PEEK screws used to clamp the mirror into the mirror holder might be a significant contribution to DARM.

The attached Bode plot shows the TF. The resonance frequency is ~3340 Hz and the Q is ~150. Our paper-and-pencil estimate is that this may be within an order of magnitude of DARM, depending upon the shape of the thermal noise spectrum. If it's steeper than structural damping it could be very close.

"But isn't this ruled out by the DARM offset / noise test ?", you might be thinking. No! Since the SRCL->DARM coupling is a superposition of radiation pressure  (1/f^2) and the 'HOM' flat coupling, there is a broad notch in the SRCL->DARM TF at ~80 Hz. So, we need to redo this test at ~50 Hz to see if the changing SRCL coupling shows up there.

Also recall that the SRCLFF is not doing the right thing for SRM displacement noise; it is designed to subtract SRC sensing noise. Stay tuned for an updated noise budget with SRM thermal noise added.
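To make the notch argument above concrete, a toy model (coefficients made up, chosen only so the cancellation lands at 80 Hz): a 1/f^2 radiation-pressure term and a flat 'HOM' term of opposite sign sum to a magnitude with a broad minimum at f = sqrt(a/b).

# Toy SRCL->DARM coupling: radiation pressure falls as 1/f^2, the 'HOM' term is
# flat; with opposite signs they cancel at f = sqrt(a/b). Coefficients are
# illustrative only, picked to put the notch at 80 Hz.
import numpy as np
import matplotlib.pyplot as plt

f = np.logspace(1, 3, 500)        # 10 Hz to 1 kHz
a = 0.64                          # radiation-pressure coefficient (made up)
b = 1e-4                          # flat HOM coupling, m/m (made up)
coupling = np.abs(a / f**2 - b)

plt.loglog(f, coupling)
plt.axvline(np.sqrt(a / b), linestyle="--")  # notch at sqrt(0.64/1e-4) = 80 Hz
plt.xlabel("Frequency (Hz)")
plt.ylabel("|SRCL -> DARM| (m/m)")
plt.savefig("srcl_darm_notch.png")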

** see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27455 for pictures of the SRM composite mass.

Images attached to this report
Comments related to this report
rana.adhikari@LIGO.ORG - 16:24, Wednesday 01 June 2016 (27489)

The peak is also visible in the DARM spectrum. In this plot the peak is at 3335 instead of 3340 Hz. Why is there a 0.15% frequency shift?

Images attached to this comment
evan.hall@LIGO.ORG - 17:33, Wednesday 01 June 2016 (27490)

Here are projected SRM thermal noise curves for structural and viscous damping.

Given a typical SRC coupling into DARM of 1×10^-4 m/m at 40 Hz, 20 W of PSL power, and 13 pm of DARM offset (25019), this would imply a noise in DARM of 1×10^-20 m/Hz^(1/2) at 40 Hz if the damping is structural.

Non-image files attached to this comment
calum.torrie@LIGO.ORG - 18:07, Wednesday 01 June 2016 (27493)

When I modelled the optics in https://dcc.ligo.org/LIGO-T1500376, and in particular the surrogate SRM, I had assumed the optic was bonded. After looking again earlier with Rana and Betsy, I realised it is held with 2 set screws (PEEK?) on the barrel at 12 o'clock and two line contacts at 4 and 8 o'clock. See https://dcc.ligo.org/LIGO-D1200886.

The previous bonded model for the SRM surrogate (I believe) had a first mode predicted around 8 kHz. However, from a quick model I ran today (with the set screws etc ...) the first mode appears to be around 3400 Hz. The mode is associated with the optic held with the PEEK screws. (I was running the model over remote desktop, so I will need to check it again when I get a better connection; more to follow on this. I will also post the updated model once I get back to Caltech.)

stefan.ballmer@LIGO.ORG - 07:27, Thursday 02 June 2016 (27498)

The ~3340Hz peak is also clearly visible in the PDA/PDB x-correlation spectrum. See alog 26345.

peter.fritschel@LIGO.ORG - 14:11, Thursday 02 June 2016 (27510)

A couple of comments on this topic:

  • There is no feature at 3340 Hz in the L1 DARM spectrum, nor within several hundred Hz of this. So the main mode of the L1 composite SRM seems to be at a different frequency, though I think there has never been a transfer function measured there above ~1 kHz that would indicate the mode frequency (put on todo list). The first attached plot shows Kiwamu's full-O1 DARM and cross-correlation spectra for H1 and L1, zoomed in to the several kHz region. There are some peaks in L1 in the 4700-5100 Hz region, but it's fairly complicated in that region.
  • The thermal noise peak in Evan's SRM plot is at a level of 2e-15 m/rtHz, which is a bit above the SRCL sensing shot noise level, so we should be able to see it in SRCL. The second attached plot shows a spectrum of the SRCL error signal from January 9, 2016. From the Aug 2015 entry 20270, the shot noise corresponds to a displacement noise of 1.3e-15 m/rtHz. In the attached spectrum, the mode peak (which is at 3320 Hz here - ?), is 2.5x above the shot noise level, putting it at about 3e-15 m/rtHz. This is a bit higher than in Evan's model, but it is actually remarkably close. (I didn't include it in this plot, but the DARM spectrum also shows this peak at 3320 Hz at this time, and there is 0.8 coherence with SRCL at the peak.)
Images attached to this comment
Non-image files attached to this comment
matthew.heintze@LIGO.ORG - 08:11, Friday 03 June 2016 (27530)SUS

Danny, Matt (Peter F remotely)

Due to the issues currently seen at LHO, we were asked how the LLO SRM surrogate was put together and if we could add to the alog for a record of the process. The easiest way is to do it via photos (which we have of the assembly process).

IMG_1462....There are only two setscrews that hold the optic in place. They can be seen being put in place below, in the "cup" that holds the optic (eventually). I'm not sure of the material, but Peter F's speculation is that "I think those set screws must be the carbon-loaded PEEK type. The only other option I can think of for a black set screw would be carbon-steel, and it surely isn't that."

IMG_1455...Here you see the three main parts: the optic, the "cup" that the optic goes into, and then the main mass the cup goes in. Note in the "cup" you see the two raised parts at around 4 and 8 o'clock that the setscrews 'push' the optic onto. So it's not 'really' a three-point contact; it's 2 points (set screws) and 2 lines (in the holder).

IMG_1466...Here is the optic going into the cup making sure the fiducial on the optic lines up with the arrow on the cup

IMG_1470.....Optic now in the cup; doing up the setscrews that hold it in place. I can't remember how much we torqued them (we only did it by hand). But as Peter F again speculated, perhaps we just did the setscrews up tighter than LHO.

IMG_1475....Flipping the cup (with the optic in it) over and placing in main mass

IMG_1478....Cup now sitting in Main mass (without screws holding cup into main mass)

IMG_5172......the SRM surrogate installed into the suspension

Images attached to this comment
peter.fritschel@LIGO.ORG - 13:36, Friday 03 June 2016 (27546)

It looks like there might be a mode in the L1 SRM at 2400 Hz. See the attached plot of SRCL error signal from January, along with DARM and the coherence. There is also a broad peak (hump) around 3500 Hz in SRCL, with very low coherence (0.04 or so) with DARM. The SRCL data has been scaled by 5e-5 here so that it lines up with DARM at 2400 Hz.

Non-image files attached to this comment
evan.hall@LIGO.ORG - 12:56, Tuesday 07 June 2016 (27625)ISC

Here are two noise budgets showing the expected DARM noise assuming (1) structural (1/f^(1/2)) SRM damping and (2) hyperstructural (1/f^(3/4)) SRM damping. This hyperstructural damping could explain the DARM noise around 30 to 40 Hz, but not the noise at 50 Hz and above.

I also attach an updated plot of the SRCL/DARM coupling during O1, showing the effect of the feedforward on both the control noise and on the cavity displacement noise (e.g., thermal noise). Above 20 Hz, the feedforward is not really making the displacement noise coupling any worse (compared to having the feedforward off).

Note that the PEEK thermal noise spectrum along with the SRCL/DARM coupling is able to explain quite well the appearance of the peak in DARM.
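For illustration, a minimal sketch of the two damping slopes (normalization is arbitrary; both curves are pinned to a made-up 1e-20 m/rtHz at 40 Hz):

# Compare the two SRM thermal-noise slopes above: structural (1/f^1/2) vs
# 'hyperstructural' (1/f^3/4). Amplitude anchor is arbitrary/illustrative.
import numpy as np
import matplotlib.pyplot as plt

f = np.logspace(1, 3, 400)
ref_f, ref_amp = 40.0, 1e-20      # anchor point (made up for plotting)

structural = ref_amp * (ref_f / f) ** 0.5
hyperstructural = ref_amp * (ref_f / f) ** 0.75

plt.loglog(f, structural, label="structural ~ 1/f^(1/2)")
plt.loglog(f, hyperstructural, label="hyperstructural ~ 1/f^(3/4)")
plt.xlabel("Frequency (Hz)")
plt.ylabel("DARM noise (m/rtHz)")
plt.legend()
plt.savefig("srm_damping_slopes.png")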

Non-image files attached to this comment
evan.hall@LIGO.ORG - 09:59, Wednesday 06 July 2016 (28191)

I am attaching noise budget data for the structural case in 27625.

Non-image files attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 20:07, Saturday 28 May 2016 - last comment - 14:08, Friday 03 June 2016(27437)
locklosses possibly related to RF problem or SR3 glitches

Evan and I spent most of the day trying to investigate the sudden locklosses we've had over the last 3 days.  

1) We can stay locked for ~20 minutes with ALS and DRMI if we don't turn on the REFL WFS loops.  If we turn these loops on we lose lock within a minute or so.  Even with these loops off we are still not stable though, and saw last night that we can't make it through the lock acquisition sequence.

2)In almost every lockloss, you can see a glitch in SR3 M2 UR and LL noisemons just before the lockloss, which lines up well in time with glitches in POP18.  Since the UR noisemon has a lot of 60 Hz noise, the glitches can only be seen there in the OUT16 channel, but the UR glitches are much larger.  (We do not actuate on this stage at all).  However, there are two reasons to be skeptical that this is the real problem:

It could be that the RF problem that started in the last few days somehow makes us more sensitive to losing lock because of tiny SR3 glitches, or that the noisemons are just showing some spurious signal which is related to the lockloss/RF problems. Some lockloss plots are attached.

It seems like the thing to do would be trying to fix the RF problem, but we don't have many ideas for what to do. 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 20:25, Saturday 28 May 2016 (27438)

We also tried running Hang's automatic lockloss tool, but it is a little difficult to interpret the results from this.  There are some AS 45 WFS channels that show up in the third plot that appears, which could be related to either a glitchy SR3 or an RF problem.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 20:27, Saturday 28 May 2016 (27439)

One more thing: nds1 crashed today and Dave helped us restart it over the phone.

andrew.lundgren@LIGO.ORG - 07:41, Wednesday 01 June 2016 (27470)DetChar, ISC, Lockloss
For the three locklosses that Sheila plotted, there actually is something visible on the M3 OSEM in length. It looks like about two seconds of noise from 15 to 25 Hz; see first plot. There's also a huge ongoing burst of noise in the M2 UR NOISEMON that starts when POP18 starts to drop. The second through fourth attachments are these three channels plotted together, with causal whitening applied to the noisemon and osem.

Maybe the OSEM is just witnessing the same electrical problem as is affecting the noisemon, because it does seem a bit high in frequency to be real. But I'm not sure. It seems like whatever these two channels are seeing has to be related to the lockloss even if it's not the cause. It's possible that the other M2 coils are glitching as well. None of the other noisemons look as healthy as UR, so they might not be as sensitive to what's going on.
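For anyone repeating this check, a sketch of a similar whitening step with gwpy (note gwpy's whitening is not the causal filter used in the plots above; the DQ channel name is an assumption and the times are placeholders):

# Sketch: whiten the SR3 M2 UR noisemon around a lockloss to expose the burst.
# Channel name is assumed; gwpy's whitening is non-causal, unlike the plots above.
from gwpy.timeseries import TimeSeries

chan = "H1:SUS-SR3_M2_NOISEMON_UR_OUT_DQ"  # assumed DQ name
data = TimeSeries.fetch(chan, "2016-05-28 19:00", "2016-05-28 19:02")

white = data.whiten(fftlength=4, overlap=2)
plot = white.plot()
plot.savefig("sr3_m2_ur_whitened.png")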
Images attached to this comment
keita.kawabe@LIGO.ORG - 14:08, Friday 03 June 2016 (27501)

RF "problem" is probably not a real RF problem.

The bad RFAM excess was only observed in the out-of-loop RFAM sensor, not in the RFAM stabilization control signal. In the attached, top is out-of-loop, middle is the control signal, and the bottom is the error signal.

Anyway, whatever this low-frequency excess is, it should come in after the RF splitter for the in- and out-of-loop boards. Since this is observed in both the 9 and 45 MHz RFAM chassis, it should lie in how the in- and out-of-loop boards are configured differently. See D0900761. I cannot pinpoint what that is, but my guess is that this is some DC effect coming into the out-of-loop board (e.g. the auto bias adjustment feedback, which only exists in the out-of-loop board).

Note that even if it's a real RFAM, 1ppm RIN at 0.5Hz is nothing assuming that the calibration of that channel is correct.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 15:22, Wednesday 01 June 2016 (27486)DetChar, ISC, Lockloss
Correction: The glitches are visible on both the M2 and M3 OSEMs in length, also weakly in pitch on M3. The central frequency looks to be 20 Hz. The height of the peaks in length looks suspiciously similar between M2 and M3.
Images attached to this comment
andrew.lundgren@LIGO.ORG - 01:42, Thursday 02 June 2016 (27496)DetChar, ISC, Lockloss
Just to be complete, I've made a PDF with several plots. Every time the noise in the noisemons comes on, POP18 drops and it looks like lock is lost. There are some times when the lock comes back with the noise still there, and the buildup of POP18 is depressed. When the noise ends, the buildup goes back up to its normal value. The burst of noise in the OSEMs seems to happen each time the noise in the noisemons pops up. The noise is in a few of the noisemons, on M2 and M3.
Non-image files attached to this comment