LHO General
corey.gray@LIGO.ORG - posted 09:20, Monday 06 June 2016 - last comment - 11:44, Monday 06 June 2016(27563)
Running Alog Of CDS Power Outage Recovery

This will be a running log of activities, as they occur, for bringing CDS back after the planned outage over the weekend.  For the prep work and what happened over the weekend, refer to Richard's alog.

We started the shift with lights, front gate operation, wifi, etc.

Comments related to this report
corey.gray@LIGO.ORG - 09:28, Monday 06 June 2016 (27564)

Step 6.3:  LVEA Network Switches 

This box appeared to be running (green lights & no errors).

corey.gray@LIGO.ORG - 09:30, Monday 06 June 2016 (27565)

Step 6.4 Work Station Computers

Workstations powered up in the Control Room----->  But the NFS File Server in the MSR should have been checked BEFORE anything was powered up.  This needs to be added to the procedure.

We have found several issues due to computers being started before the NFS File Server was addressed.  These items had to be brought up again:

  • EPICS Gateway
  • NDS

-----> The items above now allow us to run Dataviewer and also bring up the Vacuum Overview.
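For future recoveries, a minimal sanity check like the following (a sketch only -- it assumes the pyepics and python-nds2 client packages are on the workstation, and the hostname, port, GPS times, and channel are placeholders) can confirm that the EPICS Gateway and NDS are actually serving data again before moving on:

# Quick post-recovery check of the EPICS Gateway and NDS (illustrative sketch).
import epics   # pyepics (assumed available)
import nds2    # python-nds2 client (assumed available)

# 1) EPICS Gateway: a caget that returns a value (not None) means channel
#    access traffic is being forwarded again.
value = epics.caget('H1:IMC-MC2_TRANS_SUM')   # example channel from this log
print('EPICS gateway OK' if value is not None else 'EPICS gateway still down')

# 2) NDS: fetch one second of data; if this succeeds, Dataviewer should work too.
conn = nds2.connection('h1nds1', 8088)        # hostname/port are assumptions
bufs = conn.fetch(1149264000, 1149264001, ['H1:IMC-MC2_TRANS_SUM'])
print('NDS OK: %d samples returned' % len(bufs[0].data))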

Step 6.5  Wiki & alog & CDS Overview MEDM

They are back.  Overview on the wall (thanks, Kissel).

Update:  The wiki was actually NOT back since it was started before the NFS File Server was started.  So the wiki was restarted.

corey.gray@LIGO.ORG - 09:43, Monday 06 June 2016 (27566)

Sec 8:  EX  (Richard & TJ)

They have run through the document & are moving on to other outbuildings.  On the overview for EX, we see Beckhoff & SUSAUX are back.

Sec 10:  MX (Richard & TJ)

Haven't heard from them, but we see that PEMMX is back on the Overview.

corey.gray@LIGO.ORG - 09:57, Monday 06 June 2016 (27567)

Sec 9:  EY (Richard & TJ)

This is back online.  So now we can start front ends in the Corner Station!  

They are now heading to MY.....

Sec 11:  MY (Richard & TJ)

...but it looks like MY is already back according to the CDS Overview.

corey.gray@LIGO.ORG - 10:09, Monday 06 June 2016 (27568)

STATUS at 10AM (after an hour of going through procedure):

Most of the CDS Overview is GREEN, -except- the LSC.  Dave said there were issues with bringing the LSC front end back up and he will need to investigate in the CER.

End Stations: (TJ, Richard)

  • ALS lasers powered on (a step which wasn't mentioned in the procedure).
  • They mentioned that the high-voltage power supplies are intentionally being left off so the VAC team can check their system first.
corey.gray@LIGO.ORG - 10:15, Monday 06 June 2016 (27569)

7.5 Front Ends (Updated from Kissel)

Everything is looking good on the CDS Overview

Sec 7.8:  Guardian (updated from Kissel)

Working on bringing this back.  Some of these nodes need data from LDAS (namely, ISC); so some of these may take a while.  

BUT, basic nodes such as SUS & SEI may be ready fairly soon.

corey.gray@LIGO.ORG - 10:18, Monday 06 June 2016 (27570)

7.1 CORNER STATION DC POWER SUPPLIES

These have all been powered ON (Richard).

(Still holding off on End Stations for VAC team to let us know it's ok.)

corey.gray@LIGO.ORG - 10:49, Monday 06 June 2016 (27571)

EY (Richard)

High-voltage power & ESD are back (this step is not in the recovery document).

corey.gray@LIGO.ORG - 11:10, Monday 06 June 2016 (27572)

EX (Richard)

High-voltage power & ESD are back (this step is not in the recovery document).

corey.gray@LIGO.ORG - 11:36, Monday 06 June 2016 (27573)

There appear to be some order issues here.  (Jeff, Nutsinee, and others are working on fixing the order in the document.)

1)  We held off on addressing the DC high power because we wanted to wait for the Vacuum System Team at the LVEA (for the vacuum gauge) and at the End Stations (for the vacuum gauge & the ESD).

2)  We held off on some Corner Station items because they are on the Dolphin Network.  So, to address the End Stations FIRST, Richard & TJ were assigned to head out, start the End Station sections of the document, and get their Dolphin Network items online.  Once they were done, Dave started on the Corner Station Front Ends on the Dolphin network.

Extraneous items:

  • Conlog was brought up (Patrick)
corey.gray@LIGO.ORG - 11:41, Monday 06 June 2016 (27574)

Sec 7.6 LVEA HEPI Pump Station Controller

After Hugh's HEPI Maintenance, Jim brought this back.

Sec 8.5 EX HEPI Pump Station Controller

After Hugh's HEPI Maintenance, Jim brought this back.

Sec 9.5 EY HEPI Pump Station Controller

After Hugh's HEPI Maintenance, Jim brought this back.

 
corey.gray@LIGO.ORG - 11:44, Monday 06 June 2016 (27575)

UPDATE:

At this point (11:42am), we appear to be mostly restored with regard to the CDS side.  Most of the operations subsystems are back (we are mostly green on the Ops Overview).  The VAC group's Annulus Ion Pumps are back to using site power.

Lingering CDS/Operations items:

  • NDS may still be down
  • PSL restoration is still underway (Jason, Ed)
H1 ISC (ISC, SUS)
rich.abbott@LIGO.ORG - posted 08:54, Monday 06 June 2016 - last comment - 12:01, Monday 06 June 2016(27562)
Installation of ITM ESD
Filiberto, Ed, Rich

Installation is complete for the ITM ESD Driver for ITMX and ITMY.  A short was found on the bias connection to ITMX (see attached sketch to clear up the pin numbering of the legacy connector).  The shield was cut and insulated on the 2ft section of cable up near the vacuum feedthrough for this connection.  All nominally 24V connections were HIPOT tested to 500V, and the high voltage bias connections were tested to 700V.

An ADC (ADC07) was added to SUS-C5-14 to permit the ITM ESD readback channels to be implemented per the latest wiring diagram (D1500464).

At time of writing, no aspects of the installed system have been verified in situ.  This is the next item on the checkout.

Some useful system constants (total for both drivers):
+/- 18VDC Nominal Quiescent Current -> +460mA, -420mA
+/- 24VDC Nominal Quiescent Current -> +40mA, -40mA
+/- 430VDC Nominal Quiescent Current -> +/- 5.6mA
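
As a quick cross-check (an illustration added here, not part of the original measurement), the quiescent dissipation implied by the currents above works out roughly as follows:

# Back-of-the-envelope quiescent power from the constants above
# (currents are the quoted totals for both drivers).
rails = [
    (18.0, 0.460), (18.0, 0.420),      # +/-18 VDC rails
    (24.0, 0.040), (24.0, 0.040),      # +/-24 VDC rails
    (430.0, 0.0056), (430.0, 0.0056),  # +/-430 VDC bias rails
]
total_w = sum(v * i for v, i in rails)
print('Total quiescent dissipation: %.1f W' % total_w)   # ~22.6 W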

Serial Numbers:
ITMY ESD Driver (D1600092) (installed in SUS-R6) -> S1600266
ITMX ESD Driver (D1600092) (installed in SUS-R5) -> S1600267
ITM PI AI Chassis (D1600077) (installed in SUS-C6 U23) -> S1600245
Images attached to this report
Comments related to this report
carl.blair@LIGO.ORG - 12:01, Monday 06 June 2016 (27576)

There is some noise on the LL quadrant of the ITMX readback, H1:IOP-SUS_AUX_B123_MADC7_TP_CH4.  It appears at ~8 kHz with a peak of about 3000 counts.
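
A minimal sketch of how one might quantify this line (assuming the gwpy package and NDS access; the GPS times below are approximate placeholders):

# Sketch: look for the ~8 kHz feature on the ITMX LL readback channel.
from gwpy.timeseries import TimeSeries

chan = 'H1:IOP-SUS_AUX_B123_MADC7_TP_CH4'
data = TimeSeries.fetch(chan, 1149275000, 1149275060)   # ~60 s example stretch

asd = data.asd(fftlength=4, overlap=2)      # 0.25 Hz resolution spectrum
band = asd.crop(7000, 9000)                 # zoom in around 8 kHz
peak = band.frequencies[band.value.argmax()]
print('Loudest line between 7 and 9 kHz: %.1f Hz' % peak.value)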
 

Images attached to this comment
LHO General
richard.mccarthy@LIGO.ORG - posted 08:11, Monday 06 June 2016 (27561)
LHO Power Outage
As noted by Dave B., we began the orderly shutdown of the site at 1500 local time on Friday.
The power actually went out at 1715 and we were almost ready.
Setting up the vacuum system on the generators did not go as smoothly as it could have.
1. Generator output was too low for the UPS to operate on.  The UPS was looking for 115V and we were at 112V.  We bypassed the UPS and ran the 24V DC supplies directly from the generator.
2. The GFI outlet on the EY generator would not function, so it was replaced.
By 1930 we were comfortable leaving the site.

Sat.
0800 Site was dark; all buildings, generators, and the vacuum system were in good shape.
1800 New switch at substation installed.

Sun.
0800 Site was dark; all buildings, generators, and the vacuum system were in good shape.
1200 Testing of switch complete (hiccups included).
1300 Site power restored.
Facility cooling began.
Converted vacuum system to building power; generators turned off.
1500 left for day.
Other Vacuum details by Vacuum group.
LHO OpsInfo
carlos.perez@LIGO.ORG - posted 15:45, Friday 03 June 2016 (27559)
h1ecatc1
During the power-off procedure h1ecatc1 reported that it had unsaved changes. As the changes were unknown at the time, we ignored them and continued with the normal shutdown.
H1 General
cheryl.vorvick@LIGO.ORG - posted 15:40, Friday 03 June 2016 - last comment - 15:57, Friday 03 June 2016(27558)
Ops Day Summary: weekend prep activities as of 22:40UTC

State of H1: systems are off

 

Weekend Prep activities:

* indicates there was some small glitch in our procedure

Comments related to this report
cheryl.vorvick@LIGO.ORG - 15:57, Friday 03 June 2016 (27560)
  • 22:47 - Control Room screens off
  • 22:48 - Fil to mids and ends to power off
  • 22:56 - Control Room machines off
  • 22:57 - OPS workstation / aux computers off
H1 ISC
cheryl.vorvick@LIGO.ORG - posted 15:27, Friday 03 June 2016 (27557)
H1 alignment snapshot:

This alignment snapshot is from the final lock before the weekend.

Images attached to this report
H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 15:26, Friday 03 June 2016 (27556)
All TCS systems (HWS, CO2) shut down

Ready for the power outage. Safe.snap last updated June 1st.

HWS:

CO2:

H1 CDS
david.barker@LIGO.ORG - posted 15:15, Friday 03 June 2016 (27555)
orderly shutdown of LHO CDS systems has started

we are starting the shutdown procedure for LHO CDS.

H1 SUS
jeffrey.kissel@LIGO.ORG - posted 15:04, Friday 03 June 2016 (27554)
Beam Splitter (BSFM) Library Part Updated to Handle Newly Named TACQ Coil Driver Library Part
J. Kissel, D. Barker 

After the modifications to the HAM triple models to make coil driver filtering individually switchable (see LHO aLOG 27223), I had renamed the library parts in the 
/opt/rtcds/userapps/release/sus/common/models/STATE_BIO_MASTER.mdl
in order to clean up the confusion between the digital control of a modified triple-acquisition driver vs. an unmodified triple-acquisition driver.

However, this renaming destroys the library link's reference in other models that use the specific block. This was identified by Dave, who was trying to compile the BSFM model (the only *other* suspension type that uses the TACQ driver) in prep for the power outage. As such, I copied in the newly renamed part from the library, which restored the link. The model now compiles nicely and has been committed to the userapps repository.
H1 General
cheryl.vorvick@LIGO.ORG - posted 14:58, Friday 03 June 2016 (27553)
Ops Day Summary: update at 21:57UTC

State of H1:  at 21:33UTC locked in ENGAGE_REFL_POP_WFS

 

Activities at 21:33UTC:

 

Activities during the day:

 

H1 locked:

 

H1 unlocked:

H1 SUS (COC, SUS, SYS)
calum.torrie@LIGO.ORG - posted 14:35, Friday 03 June 2016 (27550)
Fiber experiment in optics lab

Travis Sadeki and Calum Torrie

Fiber experiment in the optics lab to investigate the effect (if any) of using the Top Gun Ionizing Air Gun System in proximity to the LIGO production fibers. For Top Gun information refer to https://dcc.ligo.org/LIGO-T1300687. Note the temperature was 71-73 deg F and the humidity was ~38% on day 1 and day 2. The gas used is ultra-high-grade pure nitrogen.
 
Experiment 1.1 (Day 1: June 2nd 2016 2:00pm).
Fiber #1 with Top Gun at 10 psi. Production fiber with 15 kg x2 (overload test weight). Note this production fiber (#1) had been hanging for 2 hours in the test box prior to spraying with the Top Gun. Also note that all of these experiments were conducted in a clean space, not in a dedicated clean room. For Fiber #1 we opened the door and sprayed across the opening (back and forth) for 30s, 8" from the fiber, at a position 1/2 way down the fiber. The door was then closed and we noted the fiber and mass got a small kick from this action. Note this experiment is meant to mimic what a fiber could be exposed to during a 1st contact peel. However, it should be noted that this is a worst-case scenario, as in general the fiber guard is between the Top Gun and the fiber.
 
Experiment 1.2 (Day 1: June 2nd 2016 2:10pm).
Fiber #2 with Top Gun at 10 psi. Production fiber with 15 kg x2 (overload test weight). Note this production fiber (#2) had been hanging for 2 hours in the test box prior to the test. Also note that all of these experiments were conducted in a clean space, not in a dedicated clean room. For Fiber #2 we opened the door for 30s. No spray was performed. Then we closed the door. The same kick was seen on the fiber as above. Note this is / was a control for experiment 1.1.
 
Experiment 1.1 (Day 2 am: June 2nd 2016 9:30am).
Fiber #1 still hanging. We then poked the fiber in each box via the peep hole - all good.
Now with Fiber #1 we repeated the experiment from the previous day, again at 10 psi (2nd time) and again for 30s as before.
Note - the fiber was swinging back and forth during this experiment, as a result of the previous poke. All good, i.e. the fiber was still hanging at the end.

Experiment 1.2 (Day 2 am: June 2nd 2016 9:40am).
Fiber #2 still hanging. We then poked fiber in each box via peep hole - all good.
Experiment 1.2 is now closed.

Experiment 2.1 (Day 2 am: June 2nd 2016 9:45am).
Now with Fiber #2 we increased the regulator to 20 psi for this test. Otherwise the same as above. All good, i.e. the fiber was still hanging at the end. Note this experiment is meant to mimic what a fiber could be exposed to during a gap charge removal at an ETM; refer to https://dcc.ligo.org/LIGO-T1500101. However, it should be noted that this is a worst-case scenario, as in general the fiber guard is between the Top Gun and the fiber.
 
Experiment 1.1 (Day 2 pm: June 2nd 2016 1:30pm).
Back to Fiber #1 and again at 10 psi (3rd time). We noted that there is a small swing induced on the fiber / mass by the Top Gun. All good.

Experiment 2.1 (Day 2 pm: June 2nd 2016 1:35pm).
Then again back to Fiber #2. 20 psi 30s again at 8" (2nd time). All good.
 
Conclusion
So far so good: no effect seen on the suspended fibers as a result of exposing them to ionized air from the Top Gun at both 10 psi and 20 psi. Travis and Calum.
 
Images attached to this report
H1 INJ (INJ)
keith.riles@LIGO.ORG - posted 14:24, Friday 03 June 2016 - last comment - 20:04, Tuesday 07 June 2016(27548)
Implemented new inverse actuation for CW injections with time delay correction
I have copied Evan's new actuation function to the h1hwinj1 directory currently used
for CW injections: ~hinj/Details/pulsar/O2test/. I used the one that corrects for the
actuation delay:  H1PCALXactuationfunction_withDelay.txt.

For reference, the uncorrected version (no "_withDelay") sits in the same directory,
along with the one we first tried last week: H1PCALXactuationfunction.txt.25may2016.
The perl script that generates the command files in.N (N=0-14) has been updated to
use "_withDelay" and the command files regenerated.

The CW injections have been killed and automatically restarted by monit. Attached 
are second trends before and after the gap showing that things look about the same,
as expected, but there is a small increase in injection amplitude (~5%).
Images attached to this report
Comments related to this report
keith.riles@LIGO.ORG - 20:04, Tuesday 07 June 2016 (27642)INJ
Evan wondered if the ~5% increase in total injection amplitude was dominated
by the highest frequency injection or one at lower frequencies. I took a look 
for this time interval and found that the total amplitude is dominated by
the injection at ~1220.5 Hz. Simply comparing spectral line strengths 
before and after the changeover turned out not to be a robust way to
estimate the frequency-dependent ratio of the new to the old inverse actuation 
function, because some pulsar injections (especially the highest frequency one)
are going through rapid antenna pattern modulations during this period.

But comparing the new to the old spectral line strengths at the same
sidereal time several days later (after power outage recovery) gives
robust measures for a sampling of low-, medium- and high-frequency
injections:

Freq (Hz)"Old" amplitude (before switchover)New amplitude (4 sidereal days later)Ratio (new/old)
190.900.322920.322111.00
849.0060.50262.3441.03
1220.5299.37318.701.06
1393.1207.50224.371.08
1991.228.56532.7881.15
These results seem in reasonable agreement with Evan's expectation of a new/old ratio rising with frequency, reaching 15% at 2 kHz. Plots and text files of 4-minute spectral averages are attached for June 3 before the switchover and for June 7 with the newer inverse actuation function.
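
The new/old ratios in the table follow directly from the quoted amplitudes; a trivial check (illustrative only):

# Check of the new/old amplitude ratios quoted in the table above.
rows = [  # (freq_Hz, old_amp, new_amp)
    (190.90, 0.32292, 0.32211),
    (849.00, 60.502, 62.344),
    (1220.5, 299.37, 318.70),
    (1393.1, 207.50, 224.37),
    (1991.2, 28.565, 32.788),
]
for f, old, new in rows:
    print('%7.1f Hz   ratio = %.2f' % (f, new / old))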
Images attached to this comment
Non-image files attached to this comment
H1 General
cheryl.vorvick@LIGO.ORG - posted 09:42, Friday 03 June 2016 - last comment - 14:28, Friday 03 June 2016(27533)
Morning Meeting:

Activities:

Currently:

Work still planned for today:

Tours:

Monday timeline:

Comments related to this report
hugh.radkins@LIGO.ORG - 14:28, Friday 03 June 2016 (27551)

To be more clear regarding the HEPI task I will perform Monday morning, see WP 5910.

This is the capacitive accumulator pressure check, which requires the HEPI pumps to be off.  It is done only every 3 to 6 months.

H1 SUS (CDS, SUS)
sheila.dwyer@LIGO.ORG - posted 19:32, Thursday 02 June 2016 - last comment - 14:59, Friday 03 June 2016(27521)
SR3 glitches do seem to be a problem

Tonight we are again having random, fast locklosses in different configurations.  We are also seeing some large glitches that don't knock us out of lock.  Again they seem to correspond to times when there is something noisy in the SR3 channels.  While it's not clear that the SR3 channels are seeing real optic motion, it is probably worth swapping some electronics as a test, because these frequent locklosses are making commissioning very difficult. 

See 27437 and Andy Lundgren's comments

The first attached plot shows that something about this channel changed on May 10th, and that there have been noisy periods since then.  The next two are two more examples of sudden unexplained locklosses where something shows up in SR3.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 22:17, Thursday 02 June 2016 (27526)

SR3 is not the problem after all. 

Kiwamu and I unplugged the cables from the Sat amp to the chamber for both M2 and M3, and the locklosses and glitches still happened.  The good news is that Kiwamu seems to have found a good clue about the real culprit.

kiwamu.izumi@LIGO.ORG - 22:42, Thursday 02 June 2016 (27527)ISC, PSL

Our current theory is that the locklosses are due to the ISS, which shuts itself off for some reason at random times, at a rate of about once every 10 minutes. This causes a glitch in the laser intensity. Before a lockloss there was a fast glitch (~milliseconds) in the PRCL, SRCL and CARM error signals. That made us think that the laser field might be glitching. Indeed, we then found that the ISS had gone off automatically at the same time as the glitch and had seemingly caused the subsequent lockloss. We then tested the stability of the ISS in a simpler configuration where only the IMC is locked. We saw glitches of the same type in this configuration too.

In order to localize the issue, we are leaving the ISS open overnight to see if some anomaly is there without the ISS loop.

Images attached to this comment
kiwamu.izumi@LIGO.ORG - 09:47, Friday 03 June 2016 (27532)ISC, PSL

Conclusion: it was the ISS, which had too low a diffracted power.

In the overnight test last night I did not find glitchy behavior in the laser intensity (I looked at IMC-MC2_TRANS_SUM), which means that the ISS first loop is the culprit. Looking at a trend of the recent diffracted power, it kept decreasing over the past few days from 12-ish to almost 10% (see the attached). As Keita studied before (alog 27277), a diffracted power of 10% is about the value where the loop can go unstable (or hit a diffraction value low enough to shut off the auto-locked loop). I increased the diffracted power to about 12% so that the variation in the diffracted power looks small to my eyes.
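
For reference, a trend like the one described above could be pulled with something like the following (a sketch only -- it assumes gwpy with NDS access, and the diffracted-power channel name is an assumption; substitute the real one):

# Sketch: trend the ISS first-loop diffracted power over a recent stretch.
from gwpy.timeseries import TimeSeries

chan = 'H1:PSL-ISS_DIFFRACTION_AVG'   # assumed channel name
# gwpy accepts date strings; for multi-day spans the minute-trend channels
# would normally be used instead of full-rate data.
diff = TimeSeries.fetch(chan, 'June 2 2016 12:00', 'June 3 2016 00:00')

print('mean diffracted power: %.1f %%' % diff.mean().value)
print('min / max: %.1f %% / %.1f %%' % (diff.min().value, diff.max().value))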

Images attached to this comment
keita.kawabe@LIGO.ORG - 14:59, Friday 03 June 2016 (27552)

Note that there are two reasons that the diffracted power changes, i.e. an intentional change of the set point (top left) and the HPO power drift (bottom right). When the latter goes down, the ISS doesn't have to diffract as much power, so the diffraction goes lower.

In the attached, at the red vertical line somebody lowered the diffraction for whatever reason, and immediately the ISS got somewhat unhappy (you can see it from the number of ISS "saturations" in the middle right panel).

Later at the blue vertical line (that's the same date when PSL air conditioning was left on), the diffraction was reduced again, but the HPO power went up, and for a while it was OK-ish.

After the PSL was shut down and came back, however, the power slowly degraded, the diffraction went lower and lower, and the number of saturation events sky-rocketed.

Images attached to this comment