Reports until 23:58, Tuesday 25 April 2017
LHO General
patrick.thomas@LIGO.ORG - posted 23:58, Tuesday 25 April 2017 (35793)
Ops Eve Shift Summary
TITLE: 04/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

Nutsinee covered the first hour or so of my shift while I finished up work on the FMCS IOC changes.

I'm afraid I am not certain if the LVEA has been swept. I do know that the WAP and FLIR cameras are off.

I spent a considerable amount of time damping the ITMY bounce and roll modes with Sheila's help.

Greg helped me verify that the calibration code was running after I went into observing.

At one point while attempting to lock, the ETMX RMS watchdog tripped. I reset it.

The verbal alarms script crashed when I set the intention bit to observing. I restarted it.

I had to accept some minor SDF differences to go into observing. A screenshot of these is attached to the mid shift summary.

The ops_auto_alog.py script does not return from attempting to get data when run for the shift transition log. I suspect an NDS issue.

LOG:

00:28 UTC Kiwamu to CER to turn off LVEA WAP
00:31 UTC Kiwamu back
00:41 UTC Kyle back from mid stations
00:53 UTC ETMX RMS WD tripped
Stopped at ENGAGE_REFL_POP_WFS to damp ITMY bounce and roll modes.
01:59 UTC Moving on
02:20 UTC NLN. Accepted SDF differences.
02:24 UTC Running a2l.
02:33 UTC a2l done. Set intent bit to observing. Verbal alarms script crashed. Restarted it.
02:42 UTC Damped PI mode 28 by changing phase
02:54 UTC Damped PI mode 27 by changing phase
H1 CDS
patrick.thomas@LIGO.ORG - posted 23:26, Tuesday 25 April 2017 - last comment - 00:49, Tuesday 08 August 2017(35792)
Migration of FMCS EPICS channels to BACnet IOC
WP 6577

Dave B., Carlos P., Bubba G., John W., Patrick T.

I have migrated a subset of the EPICS channels provided by the FMCS IOC on h0epics to an IOC I created on fmcs-epics-cds. The IOC on fmcs-epics-cds connects to the BACnet server that Apollo installed as part of the FMCS upgrade. The channels that I migrated have been taken over by this upgrade and can no longer be read out by the server that the IOC on h0epics reads from. The fmcs-epics-cds computer connects to the slow controls network (10.105.0.1) on eth0 and the BACnet network (10.2.0.1) on eth1. It is running Debian 8.

The IOC on h0epics is started from the target directory /ligo/lho/h0/target/h0fmcs (https://cdswiki.ligo-wa.caltech.edu/wiki/h0fmcs). I commented out the appropriate channels from the fmcs.db and chiller.db files in the db directory of this path and restarted this IOC. I made no changes to the files in svn.

The IOC on fmcs-epics-cds uses code from SNS: http://ics-web.sns.ornl.gov/webb/BACnet/ and resides in /home/cdsadmin/BACnet_R0-8. This is a local directory on fmcs-epics-cds. This IOC is started as cdsadmin:

> ssh cdsadmin@10.105.0.112
cdsadmin@fmcs-epics-cds: screen
(hit Enter)
cdsadmin@fmcs-epics-cds: cd /home/cdsadmin/BACnet_R0-8/iocBoot/e2b-ioc/
cdsadmin@fmcs-epics-cds: ../../bin/linux-x86_64/epics2bacnet st.cmd
(hit CTRL-a then 'd' to detach from the screen session)

Issues:

I came to realize during this migration that the logic behind the binary input channels is different in BACnet. In BACnet, a value of 0 corresponds to 'inactive' and a value of 1 corresponds to 'active'. In the server being migrated from, a value of 0 corresponds to 'invalid'. This was verified for the reverse osmosis alarm, H0:FMC-CS_WS_RO_ALARM. In the BACnet server it reads as 0 or 'inactive' when not in alarm. When John W. forced it into alarm it read as 1 or 'active'. I believe Dave has updated his cell phone alarm notifier to match this.
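The difference in binary semantics can be sketched as follows. This is a hypothetical illustration, not the actual IOC database logic; the helper name is mine:

```python
# Hypothetical helper illustrating the BACnet binary-input semantics
# described above: 0 = 'inactive', 1 = 'active'. The old server used 0
# for 'invalid', so values cannot be carried over unchanged.

BACNET_BINARY = {0: "inactive", 1: "active"}

def bacnet_alarm_state(value):
    """Interpret a BACnet binary input, e.g. H0:FMC-CS_WS_RO_ALARM."""
    try:
        return BACNET_BINARY[int(value)]
    except (KeyError, ValueError):
        return "unknown"

print(bacnet_alarm_state(0))  # not in alarm -> 'inactive'
print(bacnet_alarm_state(1))  # in alarm     -> 'active'
```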

A similar situation exists for the state of the chiller pumps. In the server being migrated from, a value of 1 appears to correspond to 'OFF' and a value of 2 to 'ON'. It has not been verified, but I believe that in the BACnet server a value of 0 corresponds to 'OFF' and a value of 1 to 'ON'. The pump status for each building is calculated from the state of the pumps. The calculation in the database for the IOC being migrated from appears to be such that as long as one pump is running the status is 'OPERATIONAL'; if no pump is running the status is 'FAILED'. I need to double-check this with John or Bubba. I updated the corresponding calc records in the database for the BACnet IOC to match this.
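The pump-status logic described above can be sketched like this. The encodings are the assumed ones stated in the paragraph (old server 1 = OFF / 2 = ON, BACnet 0 = OFF / 1 = ON, neither yet fully verified), and the function names are mine, not the calc-record names:

```python
# Sketch of the per-building pump-status calculation described above.
# Assumed encodings (not verified against the live databases):
#   old server: 1 = OFF, 2 = ON;  BACnet server: 0 = OFF, 1 = ON.

def pump_is_on(value, bacnet=True):
    return value == (1 if bacnet else 2)

def building_status(pump_values, bacnet=True):
    """'OPERATIONAL' if at least one pump is running, else 'FAILED'."""
    running = any(pump_is_on(v, bacnet) for v in pump_values)
    return "OPERATIONAL" if running else "FAILED"

print(building_status([0, 1]))                # one BACnet pump on
print(building_status([1, 1], bacnet=False))  # both old-server pumps off
```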

In the server being migrated from, channels that BACnet reads as binary inputs and binary outputs are read as analog inputs. I changed these in the database for the BACnet IOC to binary inputs and set the ONAM to 'active' and the ZNAM to 'inactive'. The alarm levels also need to be updated. They are currently set through the autoburt snapshot files, which contain a separate channel for each alarm field. The autoburt request file has to be updated so that the binary channels have entries for .ZSV and .OSV instead of .HSV, .LSV, etc. So currently there is no control room alarm set for the binary channels, including the reverse osmosis alarm.
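The request-file update amounts to swapping the analog alarm-severity fields for the binary ones on the affected channels. A rough sketch of that transformation (the real autoBurt.req format, field list, and channel list may differ; this is an illustration only):

```python
# Rough sketch: in an autoburt request file, replace analog alarm-field
# entries (.HSV, .LSV, ...) with binary ones (.ZSV, .OSV) for channels
# that became binary inputs. File format and field list are assumptions.

ANALOG_FIELDS = (".HSV", ".LSV", ".HIGH", ".LOW")
BINARY_FIELDS = (".ZSV", ".OSV")

def convert_req_lines(lines, binary_channels):
    out, done = [], set()
    for line in lines:
        name = line.strip()
        base = name.split(".")[0]
        if base in binary_channels and name.endswith(ANALOG_FIELDS):
            if base not in done:            # emit binary fields once per channel
                out.extend(base + f for f in BINARY_FIELDS)
                done.add(base)
            continue                        # drop the analog alarm fields
        out.append(name)
    return out

req = ["H0:FMC-CS_WS_RO_ALARM.HSV", "H0:FMC-CS_WS_RO_ALARM.LSV",
       "H0:FMC-CS_WS_TEMP.HSV"]
print(convert_req_lines(req, {"H0:FMC-CS_WS_RO_ALARM"}))
```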

I also need to update the medm screens to take account of this change.

Also, there is an invalid alarm on the control room alarm station computer for the mid X air handler reheat temperature. Looking at the BACnet FMCS server, this channel does appear to be genuinely invalid.

It should be noted that this BACnet IOC is a temporary install until an OPC server is installed on the BACnet server.

I would like to leave the permit for this work open until the FMCS upgrade is complete and all the channels have been migrated to the BACnet IOC.
Comments related to this report
patrick.thomas@LIGO.ORG - 14:33, Wednesday 26 April 2017 (35806)
I updated the autoBurt.req file.
patrick.thomas@LIGO.ORG - 16:56, Wednesday 26 April 2017 (35811)
I have set the alarm on the RO channel: caput H0:FMC-CS_WS_RO_ALARM.OSV MAJOR
patrick.thomas@LIGO.ORG - 14:19, Thursday 27 April 2017 (35844)
I have updated the medm screens.
patrick.thomas@LIGO.ORG - 00:49, Tuesday 08 August 2017 (38061)
Note: The 'ARCH = linux-x86' line in the Makefile under 'BACnet_R0-8/iocBoot/e2b-ioc' had to be changed to 'ARCH = linux-x86_64' in the code copied from SNS.
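The Makefile change noted above is a one-line substitution; a sketch of it, shown here on a sample file rather than the real `BACnet_R0-8/iocBoot/e2b-ioc/Makefile` (GNU sed assumed):

```shell
# Illustrate the one-line ARCH fix on a sample file; in practice the
# target is BACnet_R0-8/iocBoot/e2b-ioc/Makefile from the SNS code.
printf 'ARCH = linux-x86\n' > /tmp/Makefile.sample
sed -i 's/^ARCH = linux-x86$/ARCH = linux-x86_64/' /tmp/Makefile.sample
grep '^ARCH' /tmp/Makefile.sample
```

The anchored pattern (`^...$`) keeps the substitution from touching a line that already reads `linux-x86_64`.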
LHO General
patrick.thomas@LIGO.ORG - posted 20:42, Tuesday 25 April 2017 (35791)
Ops Eve Mid Shift Summary
Back to observing. Accepted attached SDF differences.

Spent a significant amount of time damping ITMY roll and bounce modes with Sheila's help. Confirmed that the calibration code is running with Greg's help.
Images attached to this report
LHO General
patrick.thomas@LIGO.ORG - posted 17:48, Tuesday 25 April 2017 (35790)
Ops Eve Shift Transition
Nutsinee covered the first hour or so of my shift while I finished migrating the FMCS EPICS channels to the BACnet IOC.

Commissioning appears to be complete. Attempting to relock.

Note: The ops_auto_alog.py script hangs while trying to get data. NDS issue?
H1 General
jim.warner@LIGO.ORG - posted 16:06, Tuesday 25 April 2017 (35787)
Shift Summary

TITLE: 04/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
LOG:

15:00 Chris, Bubba, Christina, Gerardo to LVEA, LVEA to laser safe
15:15 Fil to HAM3 for temp sensor work
15:30 Fred, Mike & guests out to LVEA, out 16:45
15:30 JimW, Krishna to LVEA to move cBRS
16:15 Nutsinee to EY, transitioning VEA to laser hazard
16:15 Jason to LVEA, ITMY oplev power tweak
16:30 Gerardo & Chandra to LVEA
16:30 Jason to EY
16:45 Fred, Mike & guests to EY

17:30 JeffK doing charge measurements

19:00 Krishna, Hugh to EY to recenter T240 on BRS table
19:30 LVEA to laser hazard, Nutsinee to HWSX, Jenne looking at ISCT1, out 20:00
21:30 Richard out of LVEA
21:45 JeffK starting rubbing measurement on ITMY
22:45 Gerardo to both ends
22:45 Nutsinee to EX

LHO VE
kyle.ryan@LIGO.ORG - posted 15:47, Tuesday 25 April 2017 - last comment - 16:51, Tuesday 25 April 2017(35786)
Dislodged double-sided tape-mounted (to floor) accelerometer beneath CP1 in LVEA
Today I mounted the CP1 ion pump (described as "auxiliary ion pump" in ECR?) to the floor beneath CP1.  I did not make the vacuum connection at this time - TBC.  In the process, I managed to dislodge the accelerometer that is mounted on the floor with double-sided tape.  I set it back on the tape but have no idea 1. if I damaged the unit and 2. if my re-attachment attempt is adequate.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 16:51, Tuesday 25 April 2017 (35789)CDS, DetChar, PEM
Tagging CDS, PEM, and DetChar -- this smells like one of Robert's. He should comment.
LHO FMCS
bubba.gateley@LIGO.ORG - posted 15:44, Tuesday 25 April 2017 (35785)
LVEA Crane in a different location
The LVEA crane is not in its regular parking spot. It is located above the clean room just west of the bier garden; that clean room was slated to be moved into the bier garden. We ran into some electrical issues with some of the FFUs, which were resolved with the help of Richard and Fil; however, we ran out of time before we were able to place the clean room in the bier garden. We will relocate the clean room at the next opportune time.
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 15:26, Tuesday 25 April 2017 (35783)
CDS Maintenance Summary, Tuesday 25th April 2017

WP6590 Gather Ubuntu12 workstations from outbuildings and LVEA

Richard, Carlos:

All ubuntu12 workstations were removed from the LVEA and VEAs and are stored in the CUR awaiting OS upgrade to Debian8.

WP6594 NDS2 client update

Jonathan:

The new NDS2 client 0.14 was made the default for matlab.

WP6577 Read out new FMCS data using the BACnet EPICS IOC

Patrick, Carlos, Dave, Bubba, John

EPICS FMCS channels which are now being monitored by the new FMCS system were removed from the old IOC (h0epics) and added to the new IOC (fmcs-epics-cds). Due to a difference in the way binary data is handled, the cell phone text alert configuration was changed and restarted. This change is being propagated to the MEDM screens by Patrick.

Carlos installed the new server in the MSR; it spans the CDS-SLOW (10.105) and the FMCS (10.2) networks, talking BACnet on the FMCS network and EPICS on the SLOW network.

WP6603 Rack up 4th DMT computer

Dan, Carlos, Dave:

An iLIGO Sun X2200 was racked up in the MSR (second DAQ rack) above h1dmt2. This will be configured as the fourth DMT machine (h1dmt3) and be used as a test system.

WP6602 Downgrade DMT calibration/hoft code

Greg, Aaron, John Z.

The DMT code was downgraded to the version as-of three weeks ago. This is a temporary status while the new code is made ready for release.

LHO FMCS (CDS)
filiberto.clara@LIGO.ORG - posted 15:10, Tuesday 25 April 2017 (35782)
HVAC Temperature Cabling - HAM2
WP 6584

Continued work from last week, see alog 3564. Final temperature sensor cable was pulled from HAM3 over to HAM2. This completes work for WP 6584.

H1 CDS
jonathan.hanks@LIGO.ORG - posted 14:59, Tuesday 25 April 2017 (35781)
NDS client 0.14.0 now the default on the Debian 8 workstations

At the request of the control room, and as outlined in WP #6607, the new NDS client package has been made the default on the Debian 8 workstations.  You no longer need to source an additional environment file or do a javaaddpath in matlab.

The default matlab (2012b) for the Debian 8 workstations will pick up the new client by default.

This does not affect the remaining Ubuntu12 systems (including the guardian).

LHO VE
chandra.romel@LIGO.ORG - posted 14:48, Tuesday 25 April 2017 - last comment - 15:33, Tuesday 25 April 2017(35780)
GV 5, 7 closed/opened

Gerardo, Chandra

Soft closed GV 5,7 this morning in preparation for craning over large cleanroom. Unfortunately there were some issues with the cleanroom so it didn't get moved today. We opened valves at 2:30 pm local.

 

Comments related to this report
gerardo.moreno@LIGO.ORG - 15:33, Tuesday 25 April 2017 (35784)VE

Pressure build at the vertex due to closure of GVs.

Images attached to this comment
H1 General (CAL, ISC)
keita.kawabe@LIGO.ORG - posted 14:29, Tuesday 25 April 2017 - last comment - 13:05, Wednesday 26 April 2017(35779)
Commissioning Wed 1600-2000 UTC, calibration Thu 1600-1900 UTC.
We'll have a four-hour commissioning window on Wed. Apr. 26, 1600-2000 UTC (0900-1300 Pacific) in coincidence with LLO.
* ASC measurements/tuning
* Fixing EY oplev laser

We've also reserved three hours for calibration on Thu. Apr. 27, 1600-1900 UTC (0900-1200 Pacific) in coincidence with LLO.

Comments related to this report
corey.gray@LIGO.ORG - 13:05, Wednesday 26 April 2017 (35804)

Keita writing as Corey.

ASC -> Jenne, Sheila and others. Locked IFO needed.

EY oplev -> Jason. IFO status doesn't matter.

Additionally, the following might happen.

OMC jitter measurement -> Sheila, locked IFO.

Removing/putting on the Hartmann plate on the end station HWS camera when the IFO is unlocked -> Nutsinee.

H1 SUS (CAL, CDS, DetChar, ISC, OpsInfo, SYS, VE)
jeffrey.kissel@LIGO.ORG - posted 13:54, Tuesday 25 April 2017 (35778)
Charge Measurement Update; New Plan -- Same as the Old Plan
J. Kissel

I've taken this week's effective bias voltage measurements that track accumulated charge on the test mass/reaction mass electrostatic drive system. 

The standard metrics:
- Monitoring the single-frequency actuation strength of each quadrant as a function of requested bias voltage, using Pitch and Yaw optical lever signals to measure the response.
- Monitoring the overall longitudinal strength comparing an ESD excitation against a photon calibrator excitation at adjacent single frequencies.

The conclusions from both these metrics have not changed from the last two measurements (LHO aLOGs 35553, 35366):

Just before we vent on May 8th, during last minute preparations as we bring the IFO down, let's 
    - Turn OFF the ETMX ESD bias completely, and leave it OFF 
    - Leave the ETMY bias ON at +400 [V], with the opposite sign as in observation (Negative in NLN, switch to Positive for vent)
for the duration of the vent (i.e. until we open back up to the arms)
Images attached to this report
H1 AOS (AOS, DetChar, PEM)
sheila.dwyer@LIGO.ORG - posted 12:58, Tuesday 25 April 2017 - last comment - 17:57, Wednesday 26 April 2017(35774)
End Y oplev fiber disconnected

Sheila, Jason, Evan G, Krishna

Mode hopping of the ETMY oplev has been showing up in hveto since April 19th, although the oplev damping is off.  The glitches that show up are definitely glitches in the sum, and the oplev is well centered, so the issue is not that the optic is moving. There is a population of DARM glitches around 30 Hz that is present on days when the oplev is glitching but not on other days.  We are curious about the coupling mechanism for these glitches and wonder if this coupling could be causing problems even when the oplev is not glitching loudly.

Evan, Jason and I connected the monitor of the oplev laser diode power to one of the PEM ADC channels used to monitor SUS rack power (we used H1:PEM-EY_ADC0_14_OUT_DQ, which was monitoring the +24V power and is channel 7 on the field rack patch panel).  Jason can make the laser glitch by tapping it; with this test we saw clear glitches in the sum but no sign of anything in the monitor, so this monitor might not be very useful.  Plugging this in means that the lid of the cooler is slightly open.

We also unplugged the fiber, so that for the time being there is no light going into the chamber from the oplev. If these glitches are coupling to DARM electromagnetically, we expect to keep seeing them in DARM.  If they were somehow coupling through the light (radiation pressure, something else), we would expect them to go away now.  One glitch that we looked at is about a 75 uW drop in the laser power on the optic.  (A=2P/(c*m*omega^2)= 3e-19 meters if all the power were at 20 Hz).  We don't really know how centered the beam is on the optic, or what the reflectivity is for the oplev laser, but it seems like radiation pressure could be at the right level to explain this.   
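The back-of-the-envelope number above can be checked numerically. The sketch below assumes the ~40 kg aLIGO test mass and full reflection of the oplev beam (both assumptions; as the entry notes, the actual centering and reflectivity are unknown). With these inputs the formula gives ~8e-19 m, the same order of magnitude as the 3e-19 m quoted:

```python
# Order-of-magnitude check of the radiation-pressure estimate above:
# A = 2P / (c * m * omega^2)
# Assumptions: m ~ 40 kg test mass, full reflection, all power at 20 Hz.
import math

P = 75e-6            # W, observed drop in oplev power on the optic
c = 299792458.0      # m/s, speed of light
m = 40.0             # kg, approximate aLIGO test mass (assumption)
omega = 2 * math.pi * 20.0   # rad/s, for a 20 Hz glitch

A = 2 * P / (c * m * omega**2)
print(f"A ~ {A:.1e} m")   # -> A ~ 7.9e-19 m with these assumptions
```

The exact prefactor depends on reflectivity and beam centering, so agreement at the order-of-magnitude level is all this check can claim.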

Using an ASD of the oplev sum during a time when the oplev is quiet, this noise is more than 3 orders of magnitude below DARM at 30 Hz.

Images attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 16:38, Tuesday 25 April 2017 (35788)AOS, DetChar, PEM
The fiber was disconnected at ~19:05:00 25 April 2017 UTC. There will not be any Hveto correlations after this time because the OpLev QPD will not be receiving any light. We will be looking at Omicron histograms from the summary pages to determine whether this is the cause of noise. 
jason.oberling@LIGO.ORG - 13:00, Wednesday 26 April 2017 (35803)DetChar

With this test over, I have reverted the above changes; the ETMy oplev is now fully functional again, as of ~18:30 UTC (11:30 PDT).  I also unplugged the cable we used for the laser diode monitor port and reconnected H1:PEM-EY_ADC0_14_OUT_DQ so it is now once again monitoring the +24V power on the SUS rack.

To address the glitching, I increased the oplev laser output power; using the Current Mon port on the back of the laser:

  • Old: 0.865 V
  • New: 0.875 V

The laser will need several hours to come to thermal equilibrium, as the cooler was left open overnight (as a result of the above test).  Once this is done I can assess the need for further output power tweaks.

keita.kawabe@LIGO.ORG - 17:57, Wednesday 26 April 2017 (35815)

If the glitch happens tonight while Jason is unavailable, leave it until tomorrow when Jason can make another attempt to tune the temperature.

Even when H1 is observing, operators can go out of observing, let Jason work for a short while, and go back into observation as soon as he's done.

But it's clear that we need a long term solution that doesn't include intensive babysitting like this.

H1 AOS
jason.oberling@LIGO.ORG - posted 12:40, Tuesday 25 April 2017 - last comment - 11:04, Tuesday 02 May 2017(35776)
ITMy Optical Lever Laser Swapped, Yet Again (WP 6591)

As I noted here, the oplev laser SN 191 was found to be running very warm; this in turn made it very difficult to eliminate glitches.  In light of this, as per WP 6591, this morning I re-installed laser SN 189-1 into the ITMy oplev.  The laser will need a few hours to come to thermal equilibrium, then I can assess whether or not further tweaks to the laser output power are needed to obtain glitch-free operation.  I will leave WP 6591 open until the laser is operating glitch-free.

Comments related to this report
jason.oberling@LIGO.ORG - 11:04, Tuesday 02 May 2017 (35970)

SUM counts for this laser have been very low; ~2.8k versus the ~30k the last time this laser was used (March 2017).  Today I pulled the laser out of the cooler and tested it in the Pcal lab and found the output power to be very low; at the setting being used I measured 0.11 mW versus the 2.35 mW I measured before I installed the laser.  By tweaking the lens alignment (the lens that couples light from the laser diode into the internal 1m fiber) I was able to increase the output power to ~0.2 mW.  There is clearly something not quite right with this laser, my suspicion being either a gross misalignment of the coupling assembly (which takes longer than a maintenance period to correct) or a laser diode that is going bad.  Knowing the history of these lasers, both are equally probable in my opinion.

Unfortunately there is not a spare currently ready for install.  In light of this, since the laser is currently working, I reinstalled SN 189-1 into the ITMy optical lever so at least we have a functional ITMy oplev.  Once I get a spare ready for install this laser will be swapped out at the earliest opportunity.  I have closed WP 6591.

H1 CAL (CAL)
aaron.viets@LIGO.ORG - posted 11:58, Tuesday 25 April 2017 - last comment - 12:33, Tuesday 25 April 2017(35772)
LHO calibration pipeline restarted with gstlal-calibration-1.1.4-v2
[Greg Mendell, Aaron Viets]

Due to the bug found in gstlal-calibration-1.1.5, the DMT machines have been reverted to version 1.1.4. Primary and redundant pipelines, as well as the DMTDQ processes, were restarted at 1177181448. Output looks normal so far.
Comments related to this report
gregory.mendell@LIGO.ORG - 12:33, Tuesday 25 April 2017 (35775)CAL, DCS

Aggregation of calibration hoft has been restarted on DCS, with these channels removed,

H1:GDS-CALIB_F_S
H1:GDS-CALIB_F_S_NOGATE
H1:GDS-CALIB_SRC_Q_INVERSE
H1:GDS-CALIB_SRC_Q_INVERSE_NOGATE

starting from 1177181608 == Apr 25 2017 11:53:10 PDT == Apr 25 2017 18:53:10 UTC

 

H1 ISC
jenne.driggers@LIGO.ORG - posted 14:49, Tuesday 18 April 2017 - last comment - 13:15, Tuesday 25 April 2017(35636)
New beam dump in place on ISCT1

[Jenne, Vaishali, Karl]

We have replaced the razor beam dump on ISCT1 that was causing scattered light problems (see alog 35538) with an Arai-style black glass dump, provided by the 40m (see 40m log 8089, first style). You can see the new dump just to the left of the PD in the attached photo.  I was thinking about sending the reflection from this dump (after several black glass bounces) to the razor dump, but I can't see any reflection with just a card, so skipped this step for now.  We can come back to it with an IR viewer if we have more time in the future.

We're on our way to NLN, so maybe we'll see if this helps any, if we happen to get high ground motion sometime.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 13:15, Tuesday 25 April 2017 (35777)

[Jenne, Vaishali, Karl, Betsy]

Koji pointed out to me that even though the new black glass beam dump had been sitting on a HEPA table at the 40m, since it has been so long since it was cleaned, it could accumulate a bit of dust or film. 

So, we temporarily put the razor dump back, disassembled the black glass dump, and with Betsy's guidance cleaned the surfaces of the black glass with first contact.  We then reassembled the dump and put it back on the table. 

Taking advantage of a few minutes while those working on the cleanroom took a short break, we transitioned to laser hazard so that we could do a fine alignment of the beam dump with the DRMI flashing.  The LVEA was transitioned back to laser safe after this brief work was completed, so that the cleanroom team could work more easily.
