H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 16:44, Tuesday 28 March 2017 (35162)
CDS Maintenance summary, Tuesday 28th March 2017

WP6540 Reduce cronjobs on HEPI Pump Controllers

Dave:

Completed EY config, applied config to EX and L0.

WP6539 Relocate HWS frame grabber card from EX to MSR1

Carlos, Nutsinee, Dave:

Card was relocated, but we appear to have driver issues on the spare HWS machine. Carlos is working with TCS to install this machine from scratch and put the configuration into puppet.

WP6543 Upgrade cdslogin to Debian8

Carlos, Jonathan, Jim, Dave

cdslogin was upgraded from U12 to Deb8.

WP6544 New code h1calcs and h1omc

Daniel, Kiwamu, Jim, Dave:

New code was installed on h1omc and h1calcs. A new dolphin sender was installed on omc, with corresponding receiver on calcs. Two 4k DQ channels were added to the DAQ, along with a few dozen slow channels.

WP6546 New PCAL guardian node

TJ, Rick, Sudarshan

A new temporary Guardian node named HIGH_FREQ_LINES was added.

WP6519 Put seismon channels into DAQ.

Not done; deferred to a later time.

H1 CDS
david.barker@LIGO.ORG - posted 16:34, Tuesday 28 March 2017 (35161)
cdslogin machine upgraded from U12.04 to Debian 8

Carlos, Jonathan, Jim, Dave:

The sshd and alarms machine cdslogin was upgraded from Ubuntu 12.04 LTS to Debian 8 (Jessie) today. This is a 2FA sshd server machine which also runs EPICS CA client code to send vacuum and FMCS alerts to LHO staff cell phones as text messages. Thanks to Carlos and Jonathan's hard work putting this configuration into puppet, the entire upgrade only took a couple of hours.

H1 SEI
hugh.radkins@LIGO.ORG - posted 16:17, Tuesday 28 March 2017 - last comment - 12:05, Friday 31 March 2017(35160)
Exiting BRSY Remote Desktop trips ISI! But not always, eh!

Was logged on checking health from the earlier invasive work. Everything was working fine, so I closed (pushed X) on the remote desktop shell. The BRS output went to never-never land and the ISI tripped. This of course did nothing useful for the IFO or Observation mode.

When I logged back onto the BRSY, it was still running but giving some errors, and the output was still very rung up. I am not sure which was causing which, though. Following the BRS2 Manual (T1600103), I restarted the TwinCAT code, killed the old occurrence and restarted the C#, and finally the EPICS. I exited the session the same way, and this time it survived. Yikes!

The amplitude is still a bit large, with the camera image swinging into the reference image at the edge. Once it stays off the edge during its cycle, the BRS will be useful again.


Comments related to this report
hugh.radkins@LIGO.ORG - 17:34, Tuesday 28 March 2017 (35164)

The BRSY is now damping itself down and is no longer swinging out of range, but it is still getting itself under control. It is coming down quickly but may take some time. Operators should feel free to contact me if they aren't sure whether it can be returned to service.

krishna.venkateswara@LIGO.ORG - 12:05, Friday 31 March 2017 (35247)

It is strongly advised not to log in to the BRS machines while they are in use, because the spike in CPU use disrupts the autocollimator fitting routine. This causes delays of ~seconds in the tilt output, which affects the tilt subtraction and so on.

LHO General
vernon.sandberg@LIGO.ORG - posted 16:13, Tuesday 28 March 2017 (35158)
Work Permit Summary for 2017 March 28
Work Permit | Date | Description | alog/status
WP 6546 | 03/27/17 04:05 PM | Create and test a Guardian node that will turn on and adjust a set of Cal lines for later analysis. | 35153
WP 6545 | 03/27/17 01:43 PM | Inject blip-like glitches to test performance of NOISEMON channels. | 35116
WP 6544 | 03/27/17 01:20 PM | We will add a new signal path in the h1omc and h1calcs models in order to implement the DCPD cross correlation scheme (whose ECR, E1700107, has been approved). Because this implementation is independent of the main interferometer control signals, we don't expect any impact on locking or sensitivity. DAQ restart is required. | 35156, 35139, 35115
WP 6543 | 03/27/17 09:56 AM | Upgrade LHO CDS login servers cdsssh, cdslogin and opslogin to Debian 8. Needed because the current OS is near end-of-support. | 35119
WP 6542 | 03/27/17 09:03 AM | Repair underground water leak on feed line to VPW. This will require digging up asphalt and digging down to the PVC line. |
WP 6541 | 03/27/17 08:40 AM | Perform scheduled maintenance on the scroll compressors at the Y-MID vent/purge-air supply skid. Maintenance will require the compressors to run for brief periods of time to check compression. Lock-out/tag-out power to the skid as required. | 35149
WP 6540 | 03/23/17 12:57 PM | Reduce crontab tasks on h1hpipumpctrl[l0,ex] to those used on the EY unit. Check local clocks are correct. | 35141
WP 6539 | 03/23/17 11:36 AM | We would like to "borrow" an HWS PCIe card from one of the end stations (whichever one we can get to on Tuesday) and put it in the spare HWS machine at the corner station so we can get HWSY running. | Transfer of PCIe card completed.
WP 6538 | 03/23/17 11:03 AM | Remove the INSTAIR alarms for the MX compressor from the cell phone text alarm system. |
WP 6537 | 03/23/17 07:04 AM | Swap out the harmonic frequency generator with a spare to see if this is the cause of glitches in the RF 45. | Swap complete, now monitor for glitches.

Updates to previous Work Permits
WP 6527 | 03/19/17 09:36 PM | Access to Pcal system and camera to take pictures and possibly move the Pcal beams. | 34980, 35137
H1 CAL (TCS)
david.barker@LIGO.ORG - posted 16:13, Tuesday 28 March 2017 (35159)
ITMX HWS had duplicate channels for a while today

Aidan, Nutsinee, Dave:

During the HWS image grabber card swap this morning, a duplicate of the ITMX EPICS IOC was running on the h1hwsmsr1 machine. The DAQ EDCU incorrectly connected to the duplicate and had bad data between 19:45 and 22:56 UTC. I powered h1hwsmsr1 down, and the EDCU reconnected to the correct channels sometime later.

H1 ISC
kiwamu.izumi@LIGO.ORG - posted 16:05, Tuesday 28 March 2017 (35156)
Infrastructure for DCPD cross correlation installed

WP 6544, ECR E1700107,

Related alogs: 35115, 35139

We have installed the infrastructure in the frontend models to relatively easily produce the DCPD cross correlation spectrum.

It seems to do what it is supposed to do. See the first attachment for demonstration of data acquisition and calibration using DTT.


[Additional model changes]

In addition to what we reported yesterday (35115), we implemented two additional minor changes today.

[Front end settings and other settings]

Also, I have made a DTT template in which the frequency-domain calibration (33161) is applied to all the relevant spectra. The template is saved at

/opt/rtcds/userapps/release/isc/h1/scripts/CrossCorrTemplate.xml

Two new MEDM screens were made for this infrastructure; a screenshot of each is attached. They are saved in the common MEDM directories at

/opt/rtcds/userapps/release/omc/common/medm/OMC_NULL_READOUT.adl

/opt/rtcds/userapps/release/cal/common/medm/CAL_CS_CUST_CROSSCORR.adl

[A quick verification: it seems OK]

To check whether the new infrastructure is doing the right thing, I exported the spectra from DTT (the ones shown in the first attachment). I then computed the difference between the ordinary DARM spectrum and the cross-correlated spectrum by subtracting one from the other. If things are correct, this leaves only the sensing noise, which should be dominated by shot noise at most frequencies. The result is shown in the last attachment: the difference is reasonably smooth in its spectral shape, as expected, at most frequencies.

There are a few points/regions where the difference deviates from shot noise. The peaks/valleys at 35 Hz, 60 Hz, 350 Hz and 500 Hz, and the low-frequency wall below 30 Hz, are suppressed by at least roughly 20 dB relative to the ordinary DARM, which may be limited by the number of averages. On the other hand, the peak at 180 Hz was reduced by only 6 dB or so; I am not sure why. Otherwise, it looks reasonably good.
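For anyone less familiar with why the cross-correlated spectrum suppresses the uncorrelated sensing noise, here is a minimal, self-contained numpy/scipy sketch of the idea using synthetic data (an illustration only, not the DTT export described above; the signal levels and the 35 Hz line are arbitrary):

```python
# Illustration only: two simulated DCPD streams sharing a common "DARM" signal
# plus independent shot noise. The cross-spectral density keeps the common part
# while the uncorrelated noise averages away.
import numpy as np
from scipy.signal import welch, csd

fs = 16384                                   # sample rate [Hz], assumed
t = np.arange(0, 64, 1 / fs)
darm = 1e-2 * np.sin(2 * np.pi * 35 * t)     # common (correlated) signal
dcpd_a = darm + np.random.normal(0, 1, t.size)   # independent shot noise in PD A
dcpd_b = darm + np.random.normal(0, 1, t.size)   # independent shot noise in PD B

f, p_single = welch(dcpd_a, fs=fs, nperseg=fs)        # DARM + shot noise
f, p_cross = csd(dcpd_a, dcpd_b, fs=fs, nperseg=fs)   # shot noise suppressed ~1/sqrt(N_avg)

# Subtracting one from the other, as in the verification above, leaves an
# estimate of the uncorrelated sensing noise in a single photodiode.
sensing_noise_est = p_single - np.abs(p_cross)
```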

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 16:01, Tuesday 28 March 2017 (35129)
Shift Summary - Day

TITLE: 03/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:

LOG:

15:06 Intention bit set to "Preventative Maintenance"

15:07 Peter and Jeff B to PSL enclosure

15:08 Fire Dept. on site smoke/heat testing

15:15 Hugh to EX

15:35 Travis to EX for PCAL work

15:50 Fil to EX for Interlock cabling.

15:53 Christina and Karen to out buildings

15:55 Gerardo to mid-stations

16:00 NORCO LN2 Y-ARM

16:13 Jeff B going to End Stations for FAMIS dust monitor checks

16:13 Hugh back from EX?

16:31 TJ doing Charge measurements on ETMY

16:34 More FD personnel on site

16:40 Taking FD to CER

16:43 NORCO LN2 X-ARM

16:45 More fire personnel on site

16:45 Betsy to LVEA for parts. HWS work is postponed.

17:04 Travis back from EX

17:05 Karen leaving EY

17:09 Jeff B back

17:14 Carlos out to EX to retrieve HWS card

17:31 Bubba bringing FD into LVEA

17:44 Dick into LVEA - non invasive

17:44 Carlos reported back from EX

17:53 Fire testing in OSB is done

17:54 Hugh into CER

18:00 Bubba out of LVEA with FD

18:28 DAQ restart

18:38 Paradise Water on site for delivery

19:06 Fil back

19:07 Bailing still going on

21:31 Betsy, Travis and TJ to MX

22:34 Lockloss - cause not yet determined, but possibly linked to the BRS work Hugh was doing. He and Jeff K are hashing it out.

22:41 Switched ISI_CONFIG back to NOBRSY and recovered ETMY ISI to resume.

22:49 TJ, Betsy, and Travis back

23:00 Handing off to Nutsinee

H1 TCS (TCS)
aidan.brooks@LIGO.ORG - posted 15:38, Tuesday 28 March 2017 (35157)
HWS-ITMX EPICS channels are not updating because of H1HWSMSR1

In prepping for the high-power PRMI test tomorrow, I noticed that the HWS-ITMX EPICS channels are not being updated in the MEDM screens. The gradient field data is still being written to file.

Looking at the frames, we can see normal data until 19:47 UTC (around 12:47PM PDT).

I ran a caget on h1hwsmsr1 and can see the data just fine in the channels. The same caget on opsws4 returns zero.
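For reference, a minimal sketch (assuming pyepics is installed) of this kind of cross-machine check, which also reports which IOC host answered the request; the channel name below is a placeholder, not the exact channel inspected:

```python
# Hypothetical example: read a HWS EPICS channel and report the serving IOC.
# Running this from two different workstations should return the same host;
# a duplicate IOC would show up as different hosts or inconsistent values.
from epics import PV

pv = PV('H1:TCS-ITMX_HWS_EXAMPLE_CHANNEL')   # placeholder channel name
value = pv.get(timeout=5.0)

print('value     :', value)
print('served by :', pv.host)                # host:port of the answering IOC
```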

Dave Barker and Nutsinee are looking into this.


Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 14:42, Tuesday 28 March 2017 (35155)
strange gap in seismic BLRMS FOM plot from this morning

Jim, Dave:

We noticed a strange gap in the FOM seismic BLRMS plot. It starts at around 11:00 PDT and seems to end when the DAQ was restarted at 11:28. The subsequent 12:47 PDT DAQ restart shows as a sharp line, as expected. To the best of our knowledge, nothing was happening on CDS or GDS at 11:00.

Images attached to this report
H1 General
edmond.merilh@LIGO.ORG - posted 14:31, Tuesday 28 March 2017 (35154)
Relocking - H1 Observing

19:11 Assessing re-locking - TJ performed LVEA sweep.

21:03 NLN

21:13 Intention Bit - UNDISTURBED

H1 CDS (CAL, GRD)
thomas.shaffer@LIGO.ORG - posted 14:30, Tuesday 28 March 2017 - last comment - 18:12, Tuesday 28 March 2017(35153)
Added new HIGH_FREQ_LINES PCal node

Sudarshan, Shaffer

Today I created, started, and tested a node that Sudarshan made to change some EX PCal lines every 24 hours while we are not locked. The code waits for the GPS time to advance more than a day since its last change, then waits until the IFO is not locked, and finally adjusts the frequency by 500 Hz and jumps back to waiting for another day to go by. Once the frequency has reached 5100 Hz, it will hang out in the FINISHED state. From there, Sudarshan can let us know what he wants to do with it.
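The actual code is attached to Jeff's comment below; purely to illustrate the logic just described, here is a minimal Guardian-style sketch (the state flow and thresholds come from this entry, the ISC_LOCK state channel follows the usual GRD-<node>_STATE_N pattern, and wall-clock time stands in for GPS time):

```python
# Sketch only -- not the real HIGH_FREQ_LINES.py. The ezca object is provided
# by the Guardian environment in real node code.
from guardian import GuardState
import time

OSC_FREQ = 'CAL-PCALX_PCALOSC1_OSC_FREQ'   # PCALX oscillator frequency record
STEP_HZ = 500.0                            # frequency step per cycle
FINAL_HZ = 5100.0                          # stop once this frequency is reached
ONE_DAY = 24 * 3600                        # wall-clock stand-in for a GPS day

class WAIT_FOR_A_DAY(GuardState):
    def main(self):
        self.t0 = time.time()
    def run(self):
        return (time.time() - self.t0) > ONE_DAY

class CHECK_IFO_STATUS(GuardState):
    def run(self):
        # Only proceed when the IFO is down: ISC_LOCK state number below 11
        return ezca['GRD-ISC_LOCK_STATE_N'] < 11

class STEP_FREQUENCY(GuardState):
    def main(self):
        new_freq = ezca[OSC_FREQ] + STEP_HZ
        ezca[OSC_FREQ] = new_freq
        return 'FINISHED' if new_freq >= FINAL_HZ else 'WAIT_FOR_A_DAY'

class FINISHED(GuardState):
    def run(self):
        return True   # hang out here until it is decided what comes next

edges = [
    ('WAIT_FOR_A_DAY', 'CHECK_IFO_STATUS'),
    ('CHECK_IFO_STATUS', 'STEP_FREQUENCY'),
]
```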

Attached are the graph for the node and the Guardian overview.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:12, Tuesday 28 March 2017 (35165)CDS, DetChar, GRD
Just a few points of verification, clarification, and tagging DetChar, letting them (especially the CW group) know this sweeping PCAL X calibration line has been turned on.

I attach a full and zoomed CAL-DELTAL spectrum against a PCALX spectrum (among the other usual things from the front wall's sensitivity FOM). 

Note that pulsar injections are ongoing, and it doesn't look like these features will interfere. This calibration line sweep sticks to the 2000 to 5000 Hz region, and pulsar injections only go up to 1991.2 Hz (for a reminder of the full list of pulsar injection frequencies, see LHO aLOG 27642).

The starting frequency is 2001.3 Hz, and the excitation amplitude is 30000 [ct], hard-coded to remain at this amplitude for all frequencies. The excitation frequency can be tracked via EPICS using this channel:
    H1:CAL-PCALX_PCALOSC1_OSC_FREQ
and you can see a fast-channel version of the requested output here:
    H1:CAL-PCALX_EXC_SUM_DQ
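If it is useful to keep a simple record of the sweep outside of trends, here is a minimal polling sketch (assuming pyepics; the full channel name with the H1: prefix is as given above):

```python
# Poll the slow oscillator-frequency record once a minute and log it.
from epics import caget
import time

while True:
    freq = caget('H1:CAL-PCALX_PCALOSC1_OSC_FREQ')
    print(time.strftime('%Y-%m-%d %H:%M:%S'), 'PCALX line frequency [Hz]:', freq)
    time.sleep(60)   # the frequency only changes roughly once per day
```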

The guardian code for this node lives here 
/opt/rtcds/userapps/release/cal/h1/guardian/HIGH_FREQ_LINES.py

and is attached for ease of reference.

After some initial debugging, the guardian's log suggests that TJ left the frequency at 2001.3 Hz starting at 2017-03-28 17:16:31 UTC, though it looks like the frequency has only been stable at 2001.3 Hz since 19:49:00 UTC. TJ has the CHECK_IFO_STATUS state check whether the ISC_LOCK guardian's state is lower than 11,
which means the excitation is killed if we're in
    - INITIAL ALIGNMENT
    - DOWN
    - IDLE
    - LOCKLOSS
    - LOCKLOSS_DRMI
    - INIT
but maybe we want to rethink this, because it seems to turn off the excitation at unnecessary times (see attached dataviewer trend, where the guardian has changed the frequency twice since TJ left it).
Images attached to this comment
Non-image files attached to this comment
H1 PEM
jeffrey.bartlett@LIGO.ORG - posted 14:17, Tuesday 28 March 2017 (35152)
Monthly Dust Monitor Vacuum Pump Check (FAMIS 7511)
   Adjusted pressures and checked the temperatures of the dust monitor vacuum pumps at EX, EY, and the CS. All operating within normal ranges. Closing FAMIS #7511 
H1 SUS
thomas.shaffer@LIGO.ORG - posted 13:57, Tuesday 28 March 2017 (35148)
Charge Measurements For EX and EY

EY charge is still growing and the angular actuation strength continues to slowly change, but the longitudinal relative actuation strength is still around 3-4%. This is basically the same as last week (look at Jeff's alog, not mine).

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 13:56, Tuesday 28 March 2017 (35149)
Serviced Compressors #1 and #2 at Y-Mid Vent/Purge-Air Skid

Units #1 and #2 are done; grease was applied to the scroll pumps for units #1 and #2, and to all electrical motors.

Compression test results:  
- Unit #1 pressure 125 psi
- Unit #2 pressure 120 psi
- Unit #3 pressure 120 psi
- Unit #4 pressure 125 psi
- Unit #5 pressure 125 psi

Scroll pumps #3, #4 and #5 need to be greased.

Unit #2 needs a new one-way valve.

Unit #5 needs a new relief valve, but all other relief valves are OK.

Done under WP 6541.


H1 SEI
hugh.radkins@LIGO.ORG - posted 13:43, Tuesday 28 March 2017 - last comment - 14:10, Tuesday 28 March 2017(35147)
More Insulation added to BRSY Seismo Table

Following the improvements seen from insulating the legs of the table (34805, 34869), 1 to 2 inches of insulation has been added to the tabletop held by the legs, which holds the T240. See the photos in 34805 and compare with the photo attached here.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 14:10, Tuesday 28 March 2017 (35150)

Sad thing though, I should have realized: the decline of the driftmon now puts the camera image in jeopardy of being too close to the edge. I had the enclosure open for a time, and the cool-down has pulled the drift signal down to -15660. It could be 24 hours before things warm up enough to bring it back to the pre-incursion level of -14000.

Ed has the BRSY engaged to nominal now but we are watching to make sure it is doing some good for the IFO.

We'll need to make another centering effort within a couple weeks.

H1 CDS (DAQ, ISC)
david.barker@LIGO.ORG - posted 11:33, Tuesday 28 March 2017 - last comment - 14:16, Tuesday 28 March 2017(35139)
h1calcs, h1omc new code, DAQ restart

WP6544

Daniel, Kiwamu, Dave:

New model code was installed for h1calcs and h1omc. This added one Dolphin IPC channel (omc sender, calcs receiver) and two 4 kHz DAQ channels from calcs.

Models were restarted, followed by a DAQ restart.

Comments related to this report
david.barker@LIGO.ORG - 14:16, Tuesday 28 March 2017 (35151)

BTW: prior to the DAQ restart it had been running for 28 days 0 hours and 38 minutes.

H1 CAL (CAL, ISC)
evan.goetz@LIGO.ORG - posted 11:23, Tuesday 21 March 2017 - last comment - 17:11, Tuesday 28 March 2017(34967)
Trends of H1 optical response parameters for ER10 and thus far in O2
Using MCMC fitting of the sensing function measurements made in ER10 and O2, we can establish an estimate of the variation of the optical response parameters. The table below gives the typical (simple mean), maximum, and minimum of the measured maximum a posteriori values from the MCMC fits.

Parameter             typical    maximum    minimum
---------------------------------------------------
Gain (ct/m)           1.143e6    1.166e6    1.085
Cavity pole (Hz)      347.1      358        341
Time delay (usec)     0.5        2.5        -1.3
Detuned spring (Hz)   7.3        8.8        4.9
Detuned spring 1/Q    0.04       0.08       1e-3
This covers measurement dates from Nov 07 2016 through Mar 06 2017. Attached are plots showing these trends for ER10 and O2. Note that I have added, where possible, the GDS-calculated values for kappa_C and f_c (black crosses). These values do not come with error bars because the uncertainty would need to be computed from the measurement uncertainty of everything that goes into the calculation of kappa_C and f_c. It would be useful to have calibration lines running during the measurements to see if there is any trend or drift during the measurements themselves.
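For context, here is a minimal numpy sketch of the sensing-function form these parameters describe (optical gain, single cavity pole, detuned SRC spring, residual time delay), evaluated at the "typical" values in the table; the exact model used in the MCMC fits is in the calibration scripts listed in the comment below:

```python
# Sketch of the parametrized optical response C(f); an illustration, not the
# CAL group's fitting code.
import numpy as np

def sensing_model(f, gain=1.143e6, f_c=347.1, tau=0.5e-6, f_s=7.3, inv_q=0.04):
    """Optical response [ct/m] vs frequency f [Hz], using the typical values above."""
    pole = 1.0 / (1.0 + 1j * f / f_c)                        # single cavity pole
    spring = f**2 / (f**2 + f_s**2 - 1j * f * f_s * inv_q)   # detuned spring response
    delay = np.exp(-2j * np.pi * f * tau)                    # residual time delay
    return gain * pole * spring * delay

f = np.logspace(0, 3, 500)            # 1 Hz to 1 kHz
magnitude = np.abs(sensing_model(f))
```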
Non-image files attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 17:11, Tuesday 28 March 2017 (35163)
Attached are trend plots for all ER10 and O2 measurements.

The plots are stored at:
${CALSVN}/aligocalibration/trunk/Runs/O2/H1/Results/SensingFunctionTFs

The script to produce plots is:
${CALSVN}/aligocalibration/trunk/Runs/O2/H1/Scripts/SensingFunctionTFs/runSensingAnalysis_H1_O2.m
Non-image files attached to this comment