Reports until 07:30, Thursday 27 April 2017
H1 SEI (SEI)
peter.king@LIGO.ORG - posted 07:30, Thursday 27 April 2017 - last comment - 08:04, Thursday 27 April 2017(35824)
Seismic EX computer froze
The seismic computer at EX froze (see attached picture). It was rebooted along with the AI chassis and IO chassis.

Dave Barker was notified.

Richard / Peter
Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 08:04, Thursday 27 April 2017 (35825)

Richard power cycled h1seiex in the following order: CPU power down, IO chassis power down, AI chassis power down, IO chassis power up, CPU power up, AI chassis power up. (The AI chassis power cycle is to prevent problems when the 16-bit DAC sends voltage while the unmanaged IO chassis is powered up.)

I attempted to remotely take h1seiex out of the Dolphin fabric using h1iscex, but this failed. When h1seiex was powered back up, it glitched the other Dolphin-connected models at EX. This required a restart of all the models on h1susex and h1iscex as well as h1seiex.

The order of model restarts was: kill all models on h1susex, h1seiex, h1iscex, then turn on all models in the same order (using /etc/model_kill.sh and /etc/model_start.sh to ensure the correct local order). We had no timing excursions from the IOP models.

All IPC errors were then cleared (global DIAG_RESET). Corey reset the SWWD-EX. The accumulated DAQ CRC counts were cleared. User models' watchdogs were reset which cleared the DK status in the state words.

The CW excitation bit became good; however, someone on the Hardware Injection team should verify that the CW excitations are running correctly.

Restarts were completed at 07:25 PDT (14:25 UTC).

H1 SEI (CDS, SEI)
corey.gray@LIGO.ORG - posted 06:33, Thursday 27 April 2017 (35823)
13:10 Out Of OBSERVING: EX SEI Front Ends DOWN!

At 13:10 UTC (6:10 am PDT) the EX SEI front ends went down. Attached is a screenshot of all the WHITE screens we have for EX.

Images attached to this report
H1 ISC (ISC)
corey.gray@LIGO.ORG - posted 05:47, Thursday 27 April 2017 - last comment - 10:43, Thursday 27 April 2017(35822)
11:51-12:24utc Out Of OBSERVING: Lockloss

Random lockloss (no obvious seismic to blame).

Out of OBSERVING for 33min.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:43, Thursday 27 April 2017 (35830)

The wrong gain of CHARD P was because I forgot to load the guardian. Should be fixed now. Thanks, Corey.

LHO General
corey.gray@LIGO.ORG - posted 04:36, Thursday 27 April 2017 (35820)
Mid Shift Status

Steady running for H1 with an 18 hr lock (15 hrs in Observing).

The Lock Clock was restarted & does NOT show the correct time, so until the next lockloss, add 14.5 hrs to the Lock Clock time (i.e. right now it is ~3.5 + ~14.5 = ~18 hrs).

useism is slightly higher than last night but still happily down near the 50th percentile. Here's to more steady running.

LHO General
corey.gray@LIGO.ORG - posted 01:00, Thursday 27 April 2017 (35818)
Transition To OWL

TITLE: 04/27 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
    Wind: 6mph
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s (at the 50th percentile)
QUICK SUMMARY:

H1 has been at NLN since 17:32 UTC (~14.5 hrs). I need to get the Lock Timer going on nuc1 and also the Inspiral Integrand on video0.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 00:30, Thursday 27 April 2017 (35817)
Ops Eve Shift Summary

TITLE: 04/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC

STATE of H1: Observing at 62Mpc

INCOMING OPERATOR: Corey

SHIFT SUMMARY: Been observing the whole shift. No issues to report.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 19:55, Wednesday 26 April 2017 (35816)
Ops Eve Midshift Summary

Been observing since Jim handed over. Wind speed has been fluctuating. Not much else going on.

H1 SEI
patrick.thomas@LIGO.ORG - posted 17:41, Wednesday 26 April 2017 (35814)
H1 ISI CPS Noise Spectra Check - Weekly
FAMIS 6895

I do not see any that are particularly elevated.
Images attached to this report
LHO General
vernon.sandberg@LIGO.ORG - posted 17:28, Wednesday 26 April 2017 (35812)
Work Permit Summary for 2017 April 25
Work Permit Date Description alog/status
6608 4/26/2017 13:22 Add EPICS channels to the new FMCS IOC to report on the run status of the fire pumps in the WS. Setup verbal alarms and cell phone text alarms if the pumps are started.  
6607 4/25/2017 13:48 Make the new nds2-client 0.14.0 the default on debian 8 systems. This includes updating the matlab config for /ligo/apps/debian/matlab* to automatically load the 0.14.0 client. This will NOT change the nds client for the guardian. This was a request of commissioning/recovery staff after the client was installed. 35781
6606 4/25/2017 10:55 engage more aggressive CHARD Y cut off, add to guardian  
6605 4/25/2017 10:28 Investigate how oplev glitches are coupling to DARM with damping off. Unplug fiber from laser so that no light goes into chamber, watch for glitches in DARM. Borrow PEM ADC channel that is monitoring sus rack power to monitor the current fed to the laser diode to see if this witnesses oplev laser glitches. This will involve leaving the cooler lid ajar. No viewports will be exposed. 35774, 35798
6604 4/25/2017 10:22 Remove black glass beam dump from POP path on ISCT1 (temp. replace with the razor dump). Drag wipe glass pieces. Re-install black glass dump. 35636
6603 4/25/2017 9:24 rack up new Sun X2200 as a 4th DMT machine (h1dmt3). Requires powering down h1dmt2 and relocating it to make space. 35783
6602 4/25/2017 9:21 Revert calibration code running on the DMT from gstlal-calibration-1.1.5 to gstlal-calibration-1.1.4. This is the recommendation from SCCB and John Zweizig is available to help with this. 1.Run "yum downgrade gstlal-calibration" to revert to gstlal-calibration-1.1.4 on the DMT computers. 2. Edit /home/dmtexec/pars/H-DQmod_O2.json to remove SRC channels. 3. Aaron to update the calibration configuration/filter files. 4. Aaron to restart htCalib_H1, htCalibR_H1 and DMTDQ_H1 monitors. 35783, 35772
6601 4/25/2017 8:26 Taking Indian visitors on tours of LVEA and around site.  
6600 4/24/2017 17:39 Leak test LN2 vaporizers located outside next to Dewars. We will flow LN2 from Dewar into vaporizer (with a 15 psi pressure relief valve at exit) to locate any cracked, leaky welds. We will also install new GN2 flow meters. This should not disrupt observing, but we will let the operator know when we do this work.  
6599 4/24/2017 16:37 Re-measure the alignment sensing matrix to reproduce measurements from alog 35694 35694
6598 4/24/2017 15:04 I would like to check the HWS beam health at the end stations and test Elli's PZT offsets from alog 17860 with PZT output drifts taken into account.  
6597 4/24/2017 14:26 Comment out a line in violin mode damping 1 and 2 that turns on the damping gain for 4.7kHz (meaning, the mode will not be damped by Guardian). Then wait and see if the mode become less of a problem. 35759
6596 4/24/2017 13:13 Soft close GV5 & GV7 to protect beam tube before craning activities, GVs will be open when craning activities are finished.  
6595 4/24/2017 12:37 Extend the current operator DMT restart instructions to cover bad HofT not linked to CDS-CRC errors. Requires additional Verbal Alarm.
6594 4/24/2017 11:31 Install the latest nds2_client (0.14.0) on the Debian 8 cds workstations (Dave has requested that I do not install this for Ubuntu12 as those workstations are being removed and guardian is not being updated). This will not be set as the default and should not impact running software. 35783, 35771
6593 4/24/2017 11:12 Tweak output power of oplev laser to attempt to lessen frequency and severity of glitching. No viewports will be exposed during this work. BRSy will need to be disabled, as the oplev laser is located next to the BRSy enclosure.  
6592 4/24/2017 10:36 Move engine hoist into bier garden. Move clean room into bier garden after BRS removal. All of this work is in preparation for the upcoming vent on 05/08/2017. 35785, 35768
6591 4/24/2017 10:33 Swap the laser in the ITMy optical lever. Current laser (SN 191) was found to be running very warm, which is increasing the glitching of the laser. Will install a known working laser (SN 189-1) and investigate laser SN 191 in the Pcal lab. No viewports will be exposed during this work. 35776
6590 4/21/2017 14:05 Move all U12 workstations from LVEA and End Stations to CUR for updates. Replace with Debian8 as available. 35783
6589 4/19/2017 10:18 Until the DMT calibration code bug is fixed, provide a mechanism for the operator to restart the DMT calibration code if a DAQ error caused by a SUS-EY glitch has triggered the marking of all HofT frames invalid.  
       
Past WPs      
6584 4/17/2017 14:37 Run new cabling for temperature sensors from the mechanical room into the LVEA. Runs in the LVEA are to the following chambers: HAM2, HAM3, HAM4, HAM5, BSC1, and BSC3. Work will require climbing on some of the chambers. This work is part of the HVAC upgrade being done by Apollo. 35643,
6577 4/14/2017 11:39 Remove the channels that have already been migrated to BACNet during the FMCS upgrade from the existing FMCS IOC where they are now invalid. Start and run a separate BACNet IOC for these channels and additional ones created during the upgrade. Continue this process as the upgrade continues. 35792, 35783
H1 ISC
sheila.dwyer@LIGO.ORG - posted 17:13, Wednesday 26 April 2017 - last comment - 17:53, Thursday 27 April 2017(35810)
CHARD cut offs

I added two band stops to the CHARD loops, which have reduced the CHARD drives from 15-25 Hz by about a factor of 10. ASC noise should no longer be limiting our DARM sensitivity above 15 Hz, although the noise is only slightly better from 15-20 Hz.

The first attachment shows the slight improvement in the DARM noise, the difference in the drives (the cut-offs I added were 2nd-order elliptic bandstops), and the reduction in the coherence between the ASC drives and DARM. For CHARD Y, the first screenshot shows the loop measurement before I added the bandstop; the bandstop should have reduced the phase at the upper UGF (2.5 Hz) by about 5 degrees. For CHARD P, I reduced the gain by about 3 dB; the third screenshot shows the before measurement in blue and a measurement after I reduced the gain but before I added the cut-off in red. For CHARD P, the cut-off only reduced the phase at the upper UGF of 3 Hz by 6 degrees, so we are left with almost 50 degrees of phase margin.
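
For reference, here is a minimal scipy sketch of a 2nd-order elliptic band-stop over the 15-25 Hz band described above. This is not the actual Foton filter; the sample rate, passband ripple and stopband attenuation are assumptions made for illustration only.

import numpy as np
from scipy import signal

fs = 16384.0                  # assumed front-end model sample rate (Hz)
band = [15.0, 25.0]           # band to suppress, taken from the entry above
rp, rs = 1.0, 20.0            # assumed passband ripple (dB) and stopband attenuation (dB)

# "2nd order" elliptic band-stop; the exact design parameters are not given in the log
b, a = signal.ellip(2, rp, rs, band, btype='bandstop', fs=fs)

# Phase cost near the ~2.5-3 Hz upper UGFs discussed above, and depth in the stop band
freqs = np.array([2.5, 3.0, 20.0])
w, h = signal.freqz(b, a, worN=2 * np.pi * freqs / fs)
for f, resp in zip(freqs, h):
    print(f"{f:5.1f} Hz: |H| = {abs(resp):.3f}, phase = {np.degrees(np.angle(resp)):6.1f} deg")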

I also re-ran the noise budget injections for CHARD, DHARD, MICH, SRCL, PRCL and IMC PZT jitter. The only real change is that the ASC noise is lower, and there is a larger gap between the sum and the measured noise at 25 Hz. I am not able to download GDS strain from this afternoon, so I will post a noise budget when I can get data.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 05:31, Thursday 27 April 2017 (35821)ISC

Was there a change to H1:ASC-CHARD_P_GAIN from -0.10 to -0.14?  (This came up as an SDF Diff for the next lock.)

sheila.dwyer@LIGO.ORG - 17:53, Thursday 27 April 2017 (35838)

Yes, Corey, sorry I forgot to load the guardian.  

The first attachment is the noise budget with measurements from yesterday. You can see that the broad lump that we blame on beam size jitter is worse: there is a gap between the measured noise and the sum of the predicted noises from 300-800 Hz which was not present in the noise budget from early Jan (here). Looking at the summary pages, you can see that this has happened in the last week (April 18th compared to yesterday). Kiwamu and I had a look at some alignment sensors, and at first glance it doesn't seem like we've had an unusual alignment change this week. We asked Aidan to check the Hartmann data to see if there has been a change in absorption on ITMX.

The linear jitter is also slowly getting worse, which you can see by comparing the 350 Hz peak to January. The next two attached pngs are screenshots of the jitter transfer functions measured yesterday using the IMC PZT. You can compare these to measurements from mid-February and see that the coupling is about 50% worse for yaw and almost a factor of 2 worse for pit.

The 4th attachment shows a comparison of the coherence between DARM and the IMC WFS DC signals for February to earlier today. We now have broad coherence between IMC WFS B pit and DARM, which I don't think I have seen before, even when we had a broad lump of noise in DARM before our pre-O2 alignment change.

The last attachment shows coherences between DARM and the bullseye PD on the PSL.

Images attached to this comment
Non-image files attached to this comment
LHO General
kyle.ryan@LIGO.ORG - posted 15:26, Wednesday 26 April 2017 (35809)
1400 - 1405 hrs. local -> Ran air compressor in warehouse

H1 CDS (VE)
david.barker@LIGO.ORG - posted 15:05, Wednesday 26 April 2017 (35808)
started vacuum pressures striptool on the virtual server

Adding to the list of vacuum monitoring striptools being run on the virtual machine, I've included the vacuum pressures striptool (called press.stp). Reminder: these images are available on the web page:

vacuum_striptools

(accessed from the CDS home page by clicking the Vac (screen shots) link)

LHO VE
logbook/robot/script0.cds.ligo-wa.caltech.edu@LIGO.ORG - posted 12:10, Wednesday 26 April 2017 - last comment - 14:42, Wednesday 26 April 2017(35802)
CP3, CP4 Autofill 2017_04_26
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 2460 seconds. TC B did not register fill. LLCV set back to 16.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 654 seconds. TC A did not register fill. LLCV set back to 41.0% open.
Images attached to this report
Comments related to this report
chandra.romel@LIGO.ORG - 14:42, Wednesday 26 April 2017 (35807)

Raised CP3 by 1% to 17% open.

H1 CDS
david.barker@LIGO.ORG - posted 10:44, Wednesday 26 April 2017 - last comment - 13:18, Wednesday 26 April 2017(35799)
new FMCS EPICS-IOC down for about 30 minutes

The new FMCS IOC (BACNet interface) was down for about 30 minutes while we investigated a routing issue. It will be briefly restarted later this morning so that it runs within a screen session.

Comments related to this report
david.barker@LIGO.ORG - 13:18, Wednesday 26 April 2017 (35805)

Patrick and I did a quick restart at noon to get the IOC running within a screen session.

H1 CDS
patrick.thomas@LIGO.ORG - posted 23:26, Tuesday 25 April 2017 - last comment - 00:49, Tuesday 08 August 2017(35792)
Migration of FMCS EPICS channels to BACNet IOC
WP 6577

Dave B., Carlos P., Bubba G., John W., Patrick T.

I have migrated a subset of the EPICS channels provided by the FMCS IOC on h0epics to an IOC I created on fmcs-epics-cds. The IOC on fmcs-epics-cds connects to the BACNet server that Apollo has installed as part of the FMCS upgrade. The channels that I migrated have been taken over by this upgrade and can no longer be read out by the server that the IOC on h0epics reads from. The fmcs-epics-cds computer connects to the slow controls network (10.105.0.1) on eth0 and the BACNet network (10.2.0.1) on eth1. It is running Debian 8.

The IOC on h0epics is started from the target directory /ligo/lho/h0/target/h0fmcs (https://cdswiki.ligo-wa.caltech.edu/wiki/h0fmcs). I commented out the appropriate channels from the fmcs.db and chiller.db files in the db directory of this path and restarted this IOC. I made no changes to the files in svn.

The IOC on fmcs-epics-cds uses code from SNS: http://ics-web.sns.ornl.gov/webb/BACnet/ and resides in /home/cdsadmin/BACnet_R0-8. This is a local directory on fmcs-epics-cds. This IOC is started as cdsadmin:

> ssh cdsadmin@10.105.0.112
cdsadmin@fmcs-epics-cds: screen          (hit Enter to get past the screen startup message)
cdsadmin@fmcs-epics-cds: cd /home/cdsadmin/BACnet_R0-8/iocBoot/e2b-ioc/
cdsadmin@fmcs-epics-cds: ../../bin/linux-x86_64/epics2bacnet st.cmd
Hit CTRL-a then 'd' to detach from the screen session, leaving the IOC running.

Issues:

I came to realize during this migration that the logic behind the binary input channels is different in BACNet. In BACNet a value of 0 corresponds to 'inactive' and a value of 1 corresponds to 'active'. In the server being migrated from, a value of 0 corresponds to 'invalid'. This was verified for the reverse osmosis alarm: H0:FMC-CS_WS_RO_ALARM. In the BACNet server it reads as 0 or 'inactive' when not in alarm. When John W. forced it into alarm it read as 1 or 'active'. I believe Dave has updated his cell phone alarm notifier to match this.

A similar situation exists for the state of the chiller pumps. In the server being migrated from, a value of 1 appears to correspond to 'OFF' and a value of 2 appears to correspond to 'ON'. It has not been verified, but I believe in the BACNet server a value of 0 corresponds to 'OFF' and a value of 1 corresponds to 'ON'. The pump status for each building is calculated by looking at the state of the pumps. The calculation in the database for the IOC being migrated from appears to be such that as long as one pump is running the status is 'OPERATIONAL'. If no pump is running the status is 'FAILED'. I need to double check this with John or Bubba. I updated the corresponding calc records in the database for the BACNet IOC to match this.
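
As a minimal sketch only (not the actual EPICS calc records), the pump-status logic described above can be written as follows; the BACNet 0/1 encoding is the unverified assumption noted in the paragraph, and the example pump values are made up.

# Sketch of the building pump-status calculation, not production code.
# Old FMCS server: 1 = 'OFF', 2 = 'ON'.
# BACNet server (believed, not yet verified): 0 = 'OFF'/inactive, 1 = 'ON'/active.
def pump_on(value, bacnet=True):
    """Interpret a single pump state reading."""
    return value == 1 if bacnet else value == 2

def building_status(pump_values, bacnet=True):
    """OPERATIONAL as long as at least one pump is running, otherwise FAILED."""
    return "OPERATIONAL" if any(pump_on(v, bacnet) for v in pump_values) else "FAILED"

print(building_status([0, 1]))   # -> OPERATIONAL (one pump running)
print(building_status([0, 0]))   # -> FAILED (no pump running)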

In the server being migrated from, channels that are read by BACNet as binary inputs and binary outputs are read as analog inputs. I changed these in the database for the BACNet IOC to binary inputs and set the ONAM to 'active' and the ZNAM to 'inactive'. The alarm levels also need to be updated. They are currently set through the autoburt snapshot files that contain a separate channel for each alarm field. The autoburt request file has to be updated for the binary channels to have channels for .ZSV and .OSV instead of .HSV, .LSV, etc. So currently there is no control room alarm set for the binary channels, including the reverse osmosis alarm.

I also need to update the medm screens to take account of this change.

Also, there is an invalid alarm on the control room alarm station computer for the mid X air handler reheat temperature. Looking on the BACNet FMCS server this channel actually does appear to be genuinely invalid.

It should be noted that this BACNet IOC is a temporary install until an OPC server is installed on the BACNet server.

I would like to leave the permit for this work open until the FMCS upgrade is complete and all the channels have been migrated to the BACNet IOC.
Comments related to this report
patrick.thomas@LIGO.ORG - 14:33, Wednesday 26 April 2017 (35806)
I updated the autoBurt.req file.
patrick.thomas@LIGO.ORG - 16:56, Wednesday 26 April 2017 (35811)
I have set the alarm on the RO channel: caput H0:FMC-CS_WS_RO_ALARM.OSV MAJOR
patrick.thomas@LIGO.ORG - 14:19, Thursday 27 April 2017 (35844)
I have updated the medm screens.
patrick.thomas@LIGO.ORG - 00:49, Tuesday 08 August 2017 (38061)
Note: The 'ARCH = linux-x86' line in the Makefile under 'BACnet_R0-8/iocBoot/e2b-ioc' had to be changed to 'ARCH = linux-x86_64' in the code copied from SNS.
H1 General (CAL, ISC)
keita.kawabe@LIGO.ORG - posted 14:29, Tuesday 25 April 2017 - last comment - 13:05, Wednesday 26 April 2017(35779)
Commissioning Wed 1600-2000 UTC, calibration Thu 1600-1900 UTC.
We'll have a four-hour commissioning window on Wed. Apr. 26, 1600-2000 UTC (0900-1300 Pacific), in coincidence with LLO.
* ASC measurements/tuning
* Fixing EY oplev laser

We've also reserved three hours for calibration on Thu. Apr. 27, 1600-1900 UTC (0900-1200 Pacific) in coincidence with LLO.

Comments related to this report
corey.gray@LIGO.ORG - 13:05, Wednesday 26 April 2017 (35804)

Keita writing as Corey.

ASC -> Jenne, Sheila and others. Locked IFO needed.

EY oplev -> Jason. IFO status doesn't matter.

Additionally, the following might happen.

OMC jitter measurement -> Sheila, locked IFO.

Removing/putting on the Hartmann plate on the end station HWF camera when the IFO is unlocked -> Nutsinee.

H1 AOS (AOS, DetChar, PEM)
sheila.dwyer@LIGO.ORG - posted 12:58, Tuesday 25 April 2017 - last comment - 17:57, Wednesday 26 April 2017(35774)
End Y oplev fiber disconnected

Sheila, Jason, Evan G, Krishna

Mode hopping of the ETMY oplev has been showing up in hveto since April 19th, although the oplev damping is off. The glitches that show up are definitely glitches in the sum, and the oplev is well centered, so the issue is not that the optic is moving. There is a population of DARM glitches around 30 Hz that is present on days when the oplev is glitching but not on other days. We are curious about the coupling mechanism for these glitches and wonder if this coupling could be causing problems even when the oplev is not glitching loudly.

Evan, Jason and I connected the monitor on the oplev laser diode power to one of the PEM ADC channels used to monitor SUS rack power (we used H1:PEM-EY_ADC0_14_OUT_DQ, which was monitoring the +24V power and is channel 7 on the field rack patch panel). Jason can make the laser glitch by tapping it; with this test we saw clear glitches in the sum but no sign of anything in the monitor, so this monitor might not be very useful. Plugging this in means that the lid of the cooler is slightly open.

We also unplugged the fiber, so that for the time being there is no light going into the chamber from the oplev. If these glitches are coupling to DARM electromagnetically, we expect to keep seeing them in DARM.  If they were somehow coupling through the light (radiation pressure, something else), we would expect them to go away now.  One glitch that we looked at is about a 75 uW drop in the laser power on the optic.  (A=2P/(c*m*omega^2)= 3e-19 meters if all the power were at 20 Hz).  We don't really know how centered the beam is on the optic, or what the reflectivity is for the oplev laser, but it seems like radiation pressure could be at the right level to explain this.   
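
As a quick order-of-magnitude check of the number quoted above (assuming a 40 kg test mass, normal incidence and full reflection, none of which are stated here), the same formula gives a displacement within a factor of a few of the quoted 3e-19 m:

import numpy as np

c = 299792458.0            # speed of light (m/s)
m = 40.0                   # assumed test-mass mass (kg)
dP = 75e-6                 # power drop on the optic, from the glitch described above (W)
omega = 2 * np.pi * 20.0   # assume all of the power fluctuation is at 20 Hz

A = 2 * dP / (c * m * omega**2)
print(f"radiation-pressure displacement ~ {A:.1e} m")   # same order of magnitude as the estimate above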

Using an ASD of the oplev sum during a time when the oplev is quiet, this noise is more than 3 orders of magnitude below DARM at 30 Hz.

Images attached to this report
Comments related to this report
evan.goetz@LIGO.ORG - 16:38, Tuesday 25 April 2017 (35788)AOS, DetChar, PEM
The fiber was disconnected at ~19:05:00 25 April 2017 UTC. There will not be any Hveto correlations after this time because the OpLev QPD will not be receiving any light. We will be looking at Omicron histograms from the summary pages to determine whether this is the cause of noise. 
jason.oberling@LIGO.ORG - 13:00, Wednesday 26 April 2017 (35803)DetChar

With this test over, I have reverted the above changes; the ETMy oplev is now fully functional again, as of ~18:30 UTC (11:30 PDT).  I also unplugged the cable we used for the laser diode monitor port and reconnected H1:PEM-EY_ADC0_14_OUT_DQ so it is now once again monitoring the +24V power on the SUS rack.

To address the glitching, I increased the oplev laser output power; using the Current Mon port on the back of the laser:

  • Old: 0.865 V
  • New: 0.875 V

The laser will need several hours to come to thermal equilibrium, as the cooler was left open overnight (as a result of the above test).  Once this is done I can assess the need for further output power tweaks.

keita.kawabe@LIGO.ORG - 17:57, Wednesday 26 April 2017 (35815)

If the glitch happens tonight while Jason is unavailable, leave it until tomorrow when Jason can make another attempt to tune the temperature.

Even when H1 is observing, operators can go out of observing, let Jason work for a short while, and go back into observation as soon as he's done.

But it's clear that we need a long term solution that doesn't include intensive babysitting like this.

H1 DetChar (ISC)
heather.fong@LIGO.ORG - posted 15:59, Friday 21 April 2017 - last comment - 17:23, Wednesday 26 April 2017(35709)
Finding correlations between auxiliary channels and sensitivity range (potential problem in ITMY reaction mass)

[Heather Fong, Sheila Dwyer]

Over the last few months, Sheila and I have been trying to find correlations between auxiliary channels and LHO's sensitivity range. In order to do so, we first made changes to the OAF BLRMS range channels by adding notch filters such that they can track changes to the SENSMON range (see alog 33437). After we made these changes, the summed OAF BLRMS range contributions now have a linear relationship with the SENSMON range, with their units roughly calibrated to be Mpc.

I then wrote a Python script that does the following:

- Loads in desired auxiliary channel data (using NDS and GWpy) for a specified period of time (we analyze minute trends)
- Calculates the Pearson correlation coefficient (PCC) between auxiliary channels and the OAF BLRMS range channels in order to determine how linearly correlated the channels are
- Plots and saves the channels with the highest PCC (aux channels vs. OAF BLRMS range and aux channels vs. time)

Attached to this entry are examples of the aux channels vs. OAF BLRMS range for the time period of Feb 24 2017 to Apr 10 2017 (45 days, H1 observing mode only). The complete list of channels that were analyzed is attached under the file name 'BLRMS_channel_list.txt'. With the exception of the OAF channels, both mean and RMS channels were analyzed. For ~360 channels over a time range of 45 days, this script takes ~2 hours to complete, with the data retrieval from NDS being the bottleneck. The Python script has been uploaded to the LIGO git repository and can be found here:

git clone https://heather-fong@git.ligo.org/heather-fong/BLRMS-channels-correlations.git

where BLRMS_channels_analysis.py is the Python script, and BLRMS_channel_list.txt is an example of channels that can be analyzed.
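
As a minimal sketch of the correlation step (not the full BLRMS_channels_analysis.py), assuming GWpy/NDS2 access to minute trends; the channel names, trend suffixes and time span below are illustrative only:

import numpy as np
from gwpy.timeseries import TimeSeriesDict

range_chan = 'H1:OAF-RANGE_RLP_5_OUT16.mean,m-trend'      # placeholder BLRMS range channel
aux_chan   = 'H1:SUS-ITMY_R0_DAMP_Y_INMON.mean,m-trend'   # aux channel from the results below

data = TimeSeriesDict.get([range_chan, aux_chan], 'Feb 24 2017', 'Apr 10 2017')

x = data[aux_chan].value
y = data[range_chan].value
good = np.isfinite(x) & np.isfinite(y)     # drop gaps / non-observing samples

# Pearson correlation coefficient between the aux channel and the range channel
pcc = np.corrcoef(x[good], y[good])[0, 1]
print(f"PCC = {pcc:.2f}")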

We found the channels with the highest absolute PCC values (and are therefore most correlated with the range) to be the following (plots are attached):
H1:ASC-AS_B_RF36_Q_PIT_OUT16
H1:ASC-AS_B_RF36_Q_YAW_OUT16
H1:SUS-ITMY_R0_DAMP_Y_INMON

Other channels we analyzed that appear to be correlated with the range include:
H1:SUS-ITMX_M0_DAMP_R_INMON (max PCC = -0.5 for 38-60 Hz range)
H1:SUS-ITMY_R0_DAMP_L_INMON (max PCC = 0.5 for 38-60 Hz range)

The results of this analysis give us hints as to which parts of the interferometer are affecting the sensitivity range. In particular, the results suggest that there are problems with the ITMY reaction mass that are not seen in the ITMX reaction mass, and we can, for example, try putting different offsets in ITMY to confirm this.

Images attached to this report
Non-image files attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 17:23, Wednesday 26 April 2017 (35813)

During the commissioning window this morning I tried moving the ITMY reaction mass in yaw so that the DAMP Y INMON moved from about -75 to -86 several times, to see if there is any noticeable difference in the DARM spectrum. I didn't see anything changing in the spectrum. Attached is a time series of the yaw OSEM and the DARM BLRMS.

Images attached to this comment