Reports until 17:25, Tuesday 03 May 2016
H1 PSL (PSL)
peter.king@LIGO.ORG - posted 17:25, Tuesday 03 May 2016 - last comment - 17:46, Wednesday 04 May 2016(26972)
HPO status
The attenuated output from the front end, or seed, laser was admitted into the high
power oscillator ring.  No signs of clipping on the intermediate optics were observed.
However, the overlap between the beam promptly reflected from the oscillator's output
coupler and the beam that traversed the ring was slightly off - the interference
fringes were clearly left of centre.  This was corrected.  The mis-alignment of
the output coupler was most likely caused by the drag-wipe cleaning the previous day.

    Each laser head was powered up with 5A of pump current.  No bright spots were
observed on the optics that would indicate some kind of point damage.  Each head was
powered up to 50A and again, no point damage spots were observed.

    The oscillator was then fully powered up, starting at 40A-45A per head.  The
laser power was noticeably down.  The beam from the oscillator, shown in
FirstTurnOn1.png, was ugly but stable.  My interpretation of this was that there
was no point damage on the optics but that the resonator was severely mis-aligned.
The exact reason why the resonator would have become mis-aligned is not clear
to me.

    Adjusting the output coupler did improve the beam shape but not the output
power.  A more thorough alignment process will be embarked on tomorrow.
Images attached to this report
Comments related to this report
peter.king@LIGO.ORG - 17:46, Wednesday 04 May 2016 (27017)
Forgot to add this picture yesterday, which was after adjusting the output coupler.
Images attached to this comment
LHO VE
kyle.ryan@LIGO.ORG - posted 17:23, Tuesday 03 May 2016 (26973)
~1200 hrs. local -> bake of Vertex RGA started
Kyle 

Multiple hiccups delayed the actual start of the bake and, as such, this exercise will now drag into Friday -> I am utilizing a second isolation valve by adding, in series, the turbo+gauge portion of a donor pump cart.  I modified the wiring so that this redundant isolation valve closes on a foreline pressure set point. The nominal foreline isolation valve closes only upon the loss of AC to the scroll pump motor.
H1 General
cheryl.vorvick@LIGO.ORG - posted 16:03, Tuesday 03 May 2016 - last comment - 16:58, Tuesday 03 May 2016(26969)
Day Ops Summary: Maintenance

Ops Day Shift: 16:00-23:59 UTC (08:00-15:59 PT)

State of H1: PSL is down; hardware, software, and channel name changes were made today

Activities today - general:

Activities - details:

Currently working on site, as of 4PM PT:

Comments related to this report
cheryl.vorvick@LIGO.ORG - 16:58, Tuesday 03 May 2016 (26971)

as of 5PM PT:

  • Stefan is at EY
  • DaveB restarting the DAQ to incorporate fixes to ISI models, VAC models, and additional code fixes needed after the first restart today
  • red lights on the DAQ overview are all excitations; the restart was successful
H1 ISC
jenne.driggers@LIGO.ORG - posted 15:55, Tuesday 03 May 2016 - last comment - 17:34, Tuesday 03 May 2016(26968)
Restored ALS EndY

I noticed that the Yend ALS laser was not hitting its input pointing QPDs.  While looking around, I saw that many Beckhoff EPICS channels were zeros.  I used the SDF interface to load the down_160502_115544.snap EPICS database (i.e., the new way of doing a burt restore), and things immediately came back.

This looks like it won't be necessary on the Xend.

Comments related to this report
aidan.brooks@LIGO.ORG - 17:34, Tuesday 03 May 2016 (26974)

The HWS camera and RCX CLINK were restored in the OFF state. I just restarted them:

aidan.brooks@cdsssh:~$ caput H1:TCS-ETMY_HWS_RCXCLINKSWITCH On

Old : H1:TCS-ETMY_HWS_RCXCLINKSWITCH Off

New : H1:TCS-ETMY_HWS_RCXCLINKSWITCH On

aidan.brooks@cdsssh:~$ caput H1:TCS-ETMY_HWS_DALSACAMERASWITCH On

Old : H1:TCS-ETMY_HWS_DALSACAMERASWITCH Off

New : H1:TCS-ETMY_HWS_DALSACAMERASWITCH On

H1 CDS
jonathan.hanks@LIGO.ORG - posted 15:01, Tuesday 03 May 2016 (26967)
Updated the slow controls SDF monitor code to the RCG 3.0.2 release

I updated and restarted the slow controls SDF monitors using the RCG 3.0.2 code.

H1 INJ (CAL)
jeffrey.kissel@LIGO.ORG - posted 14:43, Tuesday 03 May 2016 (26966)
Blind Injection Infrastructure Removed from CAL CS and EX Front End Models
J. Kissel
ECR: E1600118
FRS/II Ticket: 5307
WP: 5866.

As my last duty serving on the O1 blind injection team, I've removed the blind injection front-end code infrastructure from the common library part,
/opt/rtcds/userapps/release/cal/common/models/CAL_INJ_MASTER.mdl
and from each of the top level of our local models,
/opt/rtcds/userapps/release/cal/h1/models/
h1calcs.mdl
h1calex.mdl
and committed them to the repo. Thankfully, the only MEDM infrastructure that was ever created / used was the automatically generated screens from the RCG, so no work needs doing there.

Note that this *gives back* two 16 kHz channels to the data rate pool. Nice!

LLO need only update the CAL_INJ_MASTER.mdl part, and then remove any summation and tags from the top level of the corner / end-station model.
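
For reference, the usual cycle after a model change like this is a rebuild, install, model restart, and DAQ restart. The commands and build directory below are assumed from the standard RCG workflow of this era, not copied from the entry:

  # assumed standard RCG build sequence (paths illustrative)
  cd /opt/rtcds/lho/h1/rtbuild
  make h1calcs && make install-h1calcs
  make h1calex && make install-h1calex
  # then restart the models on their front ends and restart the DAQ
  # so the reduced channel list takes effect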
Images attached to this report
H1 GRD
jameson.rollins@LIGO.ORG - posted 13:56, Tuesday 03 May 2016 (26965)
guardian log access issues

There have been reports of some issues with the new guardian log reading infrastructure.  I have a suspicion that some of the problems might have been associated with the h1fs0 crash this morning (alog 26963, https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=26963).  In any event, I'm looking into the issue, and will report what I find.

Also, please make sure you're using the latest version of the client, which is r1541.  This new version was installed on Friday to fix a couple of issues.  You may need to log out/in to refresh.

H1 CDS
david.barker@LIGO.ORG - posted 11:20, Tuesday 03 May 2016 (26963)
h1fs0 crash

During the "make installWorld" part of the RCG 3.0.2 install, the /opt/rtcds NFS server (h1fs0) crashed. We reset h1fs0, but the NFS services did not come back cleanly. We restarted the nfs-server daemon, after which the services restarted correctly and the NFS clients reconnected.
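
The exact recovery commands aren't recorded here, so the following is only a sketch of a typical sequence for bringing an NFS export like /opt/rtcds back on a Linux server:

  # restart the NFS service (systemd hosts; on sysvinit use "service nfs-kernel-server restart")
  sudo systemctl restart nfs-server
  # re-export the shares and confirm /opt/rtcds is being served again
  sudo exportfs -ra
  showmount -e localhost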

Looking at the h1fs0 logs, problems were being reported starting at 09:05 PDT this morning.

We are restarting the install process and monitoring the error logs and disk usage carefully.

H1 General
jeffrey.bartlett@LIGO.ORG - posted 11:17, Tuesday 03 May 2016 (26962)
3IFO Storage Cabinets Temp/RH Checks
   Collected the temperature and RH data from the two 3IFO Dry Boxes in the VPW, and the 3IFO desiccant cabinet in the LVEA. Relative humidity data for all three containers are fine (mean range between -0.71 and 3.29%). 

   Temperature data shows a different story. There were several 20-plus degree swings in the VPW temperature during the first part of the month. During the second half of the month, the temperature swings were around 10 degrees.
Non-image files attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 11:00, Tuesday 03 May 2016 (26961)
Check Dust Monitor Operations
   Did a flow check and Zero Count test of all operating dust monitors (except in the PSL as those were checked at install). All DMs are performing normally.  
H1 SUS
betsy.weaver@LIGO.ORG - posted 09:09, Tuesday 03 May 2016 - last comment - 09:19, Tuesday 03 May 2016(26957)
Finished SUSAUX model changes, closing out ECR 1200, E1600033

Today, I finished tweaking the DAQ channel data rates for the SUS HAM FASTIMONS, which are all in the commissioning frames - Kissel and Jenne discussed upping some of these rates, especially at the lowest stage of the globally controlled SUSes.  Namely, the rates per stage of SUS are as follows.

For MC2, PRM, SRM:
  M1: 256
  M2: 2048
  M3: Full rate

For MC1, MC3, PR2, PR3, SR2, SR3:
  M1: 256
  M2: 2048
  M3: 2048

Pictures of examples of each type are attached.

I have compiled, with a "successful" message:
  h1susauxh2
  h1susauxh34
  h1susaux56

I think this means we can close out ECR 1200.  Will work on that with Kissel.

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 09:19, Tuesday 03 May 2016 (26958)

Note, this bug used to be ECR 1200; on FRS it is now apparently 4702.

H1 General
edmond.merilh@LIGO.ORG - posted 08:49, Tuesday 03 May 2016 (26955)
Shift Summary - Day transition
TITLE: 05/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    Wind: 6mph Gusts, 2mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s 
QUICK SUMMARY: Maintenance Day!
 
Ed sitting in for Cheryl for the first hour or so.
 

14:30 

15:24 The portable toilet people are on site

15:45 Joe into LVEA

15:49 Jeff B added 200ml to the Xtal chiller

15:50 Jeff B to End stations

15:50 No L4C watchdog counts for me to reset. Perhaps it was done by someone else this morning?

H1 ISC (ISC, SUS)
stefan.ballmer@LIGO.ORG - posted 20:48, Monday 02 May 2016 - last comment - 19:45, Tuesday 03 May 2016(26948)
ITM L2 noise - 1.5nV/rtHz at 25Hz

Supersedes alogs 26910 and 26924

Bad news: There is lots of MHz-ish pick-up on the cables to the ITM L2 coils: ~50mVpk
Good news: The ITM L2 coil noise at low frequency is very good: 1.5nV/rtHz at 25Hz, and we might not care about all that pickup.

Details:
The 10Hz harmonics reported in alog 26910 were a measurement problem, generated in Rai's preamplifier box (D060205). The cables pick up on the order of 50mVpk at around 1MHz, which was amplified by 100x, causing slew-rate down-conversion.
This was fixed (in the measurement setup) with a 270nF capacitor in parallel with the 23.8 Ohm cable and coil resistance, resulting in a 24.7kHz pole to cut off the cable pick-up.
 
Plot 1 and 2:
ITMX and ITMY coil noise.
Configuration:
- Everything (coil driver, cable, coil) was connected. The breakout box was inserted between coil driver and cable to the satellite amplifier.
- The L2 coil drivers were both in state 3: Acq Off, LP On, which is the run mode. They are never switched for the ITMs.
- The coil driver inputs were left connected to the DAC/AI. I also sent a 100ctpk, 3kHz signal into H1:SUS-ITM[XY]_L2_DRIVEALIGN_L2L_EXC, corresponding to a 2.6ctpk signal on the DAC. I did this to make sure the DAC is at least flipping bits, which raises its noise level.
- A 270nF capacitor was put in parallel to the coil using a pomona box to avoid saturating the D060205-preamp.
- The preamp has a gain of 100. After the preamp a 100Hz low pass was used (1595Ohm and 1uF) to allow the SR785 to run in the lowest noise mode.
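
As a sanity check of the quoted corner frequencies: f = 1/(2*pi*R*C) gives 1/(2*pi * 23.8 Ohm * 270 nF) ≈ 24.8 kHz, consistent with the 24.7 kHz pole quoted above, and 1/(2*pi * 1595 Ohm * 1 uF) ≈ 99.8 Hz for the low pass in front of the SR785.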

Plot 3:
Noise projection assuming incoherent noise and assuming the ETMs behave the same.


Plot 4:
High frequency noise pick-up on the coil cable (coil driver disconnected).
The dominant noise is at ~1MHz, broadband.
Shown are two traces: one in the nominal configuration (green), and one with an additional choke on the cable to the coils.

Plot 5:
Scope trace of the high-frequency signal (only the x100 amplifier is used). The signal is made up of ~10msec bursts every ~100msec.

Plot 6:
All 4 coils (without amplifier) directly connected to the scope. Note that the grounded inputs of the scope slightly change the signal.

Plot 7:
Scope trace with only an antenna connected to the scope. The signal pickup was largest between the racks - I could not trace it to a source yet.

Images attached to this report
Comments related to this report
stefan.ballmer@LIGO.ORG - 21:01, Monday 02 May 2016 (26951)

For reference, the data and matlab code is available at ~controls/sballmer/20160429/plotIt.m

stefan.ballmer@LIGO.ORG - 21:08, Monday 02 May 2016 (26952)

Also, since we probably have to do similar noise checks when we have the IFO back, here is the equipment I used.  Top: ring antenna.  Bottom, from left to right: choke, breakout card, 270nF parallel capacitor box, Rai's preamp box, 100Hz LP filter, AC coupling for looking at RF on the spectrum analyzer.

Images attached to this comment
richard.mccarthy@LIGO.ORG - 08:28, Tuesday 03 May 2016 (26956)
Plot 4: With the cable disconnected from the coil driver, the shield is no longer terminated.  This may contribute to the pickup.
stefan.ballmer@LIGO.ORG - 19:41, Tuesday 03 May 2016 (26986)

The high-frequency noise coupling seen in plot 4 is mostly common-mode, and shows up because Rai's preamp has no differential sensing.

In the attached plot the noise seen on ITMX coil 1 is plotted, once sensed with Rai's single-ended preamp, once with an SR560 in differential mode.

 

Conclusion: This noise does not show up on the coil current.

However: The same common-mode HF noise pickup seems to be present on all cables. This now makes me worry about the ESD: I suspect the ESD has much less common-mode rejection, because the +400V and -400V come in on different cables. Moreover, broadband noise at 1MHz on the ESD will produce noise near DC due to the quadratic nature of the ESD coupling.
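
To spell out the quadratic argument: the ESD force goes roughly as F ∝ V^2, so for V = V_bias + n(t) the force contains a 2*V_bias*n(t) term plus an n(t)^2 term. Squaring broadband pickup centred near 1MHz mixes pairs of frequency components down to their difference frequencies, so pickup far above the detection band can still produce force noise near DC and in the audio band.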

Images attached to this comment
stefan.ballmer@LIGO.ORG - 19:45, Tuesday 03 May 2016 (26987)

I also tried to use the antenna to locate the source. The field is strongest in a circle around the rack, suggesting the source might not be in the electronics room, but rather just brought in by all the cables.

H1 PSL (INJ, PSL)
matthew.evans@LIGO.ORG - posted 19:19, Monday 02 May 2016 - last comment - 17:54, Tuesday 03 May 2016(26945)
PSL Rotation Stage Working Even Better

Daniel, Patrick, Matt

We did a little more rotation stage science today.  The objective was to understand the remaining acceleration mystery, and to confirm that the resistor was helping.  The on-screen EPICS values are the ones being used for acceleration and deceleration, and they now have an upper limit of 65000 (or 65s to reach the maximum speed of 100 RPM).  Note that the on-screen velocity is in units of 0.01% of the maximum, so a value of 10000 gives the maximum speed of 100 RPM, and a value of 100 gives 1 RPM.  (These RPM values are presumably for the motor, not the waveplate.)
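
As a worked example of that scaling: velocity = 3000 corresponds to 3000 x 0.01% = 30% of maximum, i.e. 30 RPM of motor speed, and velocity = 300 corresponds to 3 RPM.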

We found that with the current firmware settings (which Patrick will append), the 50 Ohm resistor was not necessary, so we removed it.  This means that other waveplates in the field need no hardware modification to achieve the 0.01 deg accuracy we are seeing with this rotation stage.

The attached screenshot shows a move from 10W to 2W (velocity = 3000, acc = 6000, dec = 6000) and then from 2W to 10W (velocity = 300, acc = 60000, dec = 60000).  Note that the higher values of acceleration and deceleration for lower velocities result in a smoother ride.

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 19:29, Monday 02 May 2016 (26946)
Current settings attached.
Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 21:41, Monday 02 May 2016 (26953)

A couple of diagnostics features have been added to the code:

  • Channels to look at the difference between measured and actual position (both counts and degrees)
  • A channel which calculates the power from the measured angle (see the note after this list)
  • Channels which look at the duty cycle and current of the PWM controller
  • A halt flag which indicates that the rotation stage hasn't moved in the past 1 sec
  • A busy error flag which indicates that the motor is no longer moving, but still indicating busy
  • A tolerance value for the allowed angular error
  • A target error flag which indicates that the tolerance has not been reached after the motor halted
  • An auto abort button which, when enabled, will reset the busy flag after the busy error has lasted more than 2 sec
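
A note on the power-from-angle channel above: for a half-wave plate followed by a polarizer (which is how the rotation stage adjusts power), the expected relation is P(theta) ≈ P_max * cos^2(2*(theta - theta_0)), with theta the measured waveplate angle and theta_0 the angle of maximum transmission. The exact formula implemented in the code isn't given here, so this is only the presumed form.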
daniel.sigg@LIGO.ORG - 17:54, Tuesday 03 May 2016 (26976)

I reduced the calibration velocity from 3000 to 500. Driving too fast towards the home position seems to reduce its reproducibility. This test will have to be repeated by looking at the laser power.

The TCS rotation stages also got the new motor settings and can be tested. The TCS medm screens need to be updated as well. (Why are they different?)

H1 CDS (CDS, ISC, PEM, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 18:39, Monday 02 May 2016 - last comment - 12:00, Tuesday 03 May 2016(26930)
Creating target area softlinks for all SDF safe, down, and OBSERVE files; Committed to userapps repo.
J. Kissel

Continuing the work that Corey et al. have done cleaning up SDF files (see LHO aLOG 26917), I've gone one level deeper to ensure that all snap files used in the target areas are soft links to locations in the userapps repo.

There *is* a safe.snap for every front-end model / EPICS db, of which there are 129. Unfortunately, because they're human constructions, there are fewer OBSERVE.snaps (112) and down.snaps (28). OBSERVE.snaps at least exist for every front-end model / EPICS db that existed during O1. However, weather station dbs, dust monitor dbs, and PI front-end models are new since O1, so OBSERVE.snaps don't exist for them. Further, down.snaps seem to have only been created for ISC models, the globally controlled SUS models, and the ISC-related Beckhoff PLCs. We know the safe.snaps are poorly maintained, and sadly we haven't been in a configuration we'd call OBSERVE.snap-worthy in a long time, so they're also out of date. On top of all this, each subsystem seems to have a different philosophy about safe vs. down.

Daniel, Sheila, Jamie, and I were discussing this on Friday, and we came to the conclusion that it is far too difficult to maintain three different SDF files. If the SDF mask is built correctly, then there should be no difference between the "down" and "safe" states. The inventors of the "safe" state are the SEI and SUS teams, because they have actuators strong enough to damage hardware. As such, they've designed the front-end models such that all watchdogs come up tripped and user intervention is required to allow for excitations. So, as the model comes up, it's already "safe" regardless of its settings. Of course, even though the IFO is "down" at that starting point, we still want the platforms to be fully isolated. So, in that sense, for the ISIs "down" is the same as "OBSERVE." And again, if all settings that change via guardian are correctly masked out, then "safe" is the same as "down" is the same as "observe," and you only need one file.

So, eventually -- we should get back to having only one file per subsystem. But this will take a good bit of effort to make sure that what's controlled via guardian is masked out of every SDF, and vice versa, that what is masked out of SDF *is* controlled by guardian. The temporary band-aid idea will be at least to make sure that every model's down is the same as its safe. Because Corey et al. put a good bit of effort into reconciling the down and safe.snap files today, I've copied all of the down.snaps over to the safe.snaps and committed them to the repo. I've not yet gone as far as to change the safe.snap softlinks to point to the down.snaps, but that will be next.

Anyways -- this aLOG is kind of rambling, because this activity has been disjointed, rushed, and sporadic, but I wanted to get these thoughts down and give an update on the progress. In summary, at least every safe, down, and OBSERVE.snap in the target area is a soft link to the userapps repo, and all of those files in the userapps repo are committed. More tomorrow, maybe.
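
As a concrete illustration of the softlink arrangement (the exact paths below are assumptions based on the standard RCG target-area layout, not copied from this entry):

  # example only: replace a target-area safe.snap with a link into the userapps repo
  cd /opt/rtcds/lho/h1/target/h1calcs/h1calcsepics/burt
  mv safe.snap safe.snap.bak
  ln -s /opt/rtcds/userapps/release/cal/h1/burtfiles/h1calcs_safe.snap safe.snap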


Comments related to this report
corey.gray@LIGO.ORG - 12:00, Tuesday 03 May 2016 (26964)

Thanks for the write-up here!  A couple of comments/notes:

1)  Does every frontend really have a safe.snap?  I thought I could not find some safe.snaps for some of the ECAT (i.e. slow control) frontends.  Or is there a way for the SDF Overview medm to not display *all* SDF files?

2)  If we manage to get to ONE SDF file, what will we name it?  Will we stay with "safe" since that's what the RCG calls out, or will we change it to a name more preferred? (This was another subtle note I overheard you all discussing on Fri.)

H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 14:04, Monday 02 May 2016 - last comment - 11:20, Thursday 05 May 2016(26927)
Restarted h1hwsmsr and restarted HWSX code

~21:01 UTC I turned off the camera, frame grabber, then powercycled the computer (then turned the frame grabber and the camera back on). Only HWSX code is running at the moment. Things look good for now.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 09:45, Tuesday 03 May 2016 (26960)

May 3 16:44 UTC Stopped HWSX code and ran HWSY code alone. HWSX code had been running fine since yesterday.

nutsinee.kijbunchoo@LIGO.ORG - 11:20, Thursday 05 May 2016 (27030)

May 5th 18:20 UTC  I noticed the HWSY code stopped running. There have been many computer and front end restarts since I left it running, so it is unclear what caused it to stop. I reran it and am going to leave it for another day.

H1 CDS
patrick.thomas@LIGO.ORG - posted 14:00, Monday 02 May 2016 - last comment - 22:42, Monday 02 May 2016(26926)
h1conlog1-master down
May  2 12:08:37 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Data too long for column 'value' at row 1: Error code: 1406: SQLState: 22001: Exiting.

Coincident with Beckhoff restarts?
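
One possible mitigation for this error (not described in this entry; the database and table names below are placeholders) is to widen the 'value' column so long string values fit:

  # placeholder names -- check the actual conlog schema first
  mysql -u root -p conlog -e "ALTER TABLE data MODIFY COLUMN value MEDIUMTEXT;"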
Comments related to this report
patrick.thomas@LIGO.ORG - 14:32, Monday 02 May 2016 (26929)
Restarted.
patrick.thomas@LIGO.ORG - 18:52, Monday 02 May 2016 (26944)
Restarted again. Same error.
daniel.sigg@LIGO.ORG - 22:42, Monday 02 May 2016 (26954)

This may be different, but we also ended up with a couple of corrupt autoburt files. There the problem seems to be that string values are not properly escaped. A carriage return character in a string will force a line break in the autoburt text file. A burt restore will then complain that the string is not terminated by double quotes.
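
A quick way to spot (and work around) the offending carriage returns in a suspect autoburt snapshot -- the file name here is just an illustration; the real fix is for the autoburt writer to escape such characters:

  # count lines containing a carriage return in a suspect snapshot (bash)
  grep -c $'\r' autoburt.snap
  # write a cleaned copy with the CR characters removed, which may make it restorable
  tr -d '\r' < autoburt.snap > autoburt_clean.snap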
