H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 22:02, Tuesday 24 January 2017 - last comment - 22:57, Tuesday 24 January 2017(33608)
Lockloss

05:32 UTC Just like last night, nothing's obvious on the FOM. No runaway signal that I can see. The range was stable. The IFO just dropped lock.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 22:38, Tuesday 24 January 2017 (33609)

Back to Observe at 06:37 UTC

nutsinee.kijbunchoo@LIGO.ORG - 22:54, Tuesday 24 January 2017 (33610)

After I set the intent bit I stepped out for a bit and came back to find PI mode 28 (18059 Hz) rung up pretty high, but it was coming down on its own. First time I've seen this.

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 22:57, Tuesday 24 January 2017 (33611)

I also accepted the SUS-PR3_M1_LOCK_L SDF diff before setting the intent bit. There's nothing on this filter bank anyway.

Images attached to this comment
H1 General (ISC, SUS)
nutsinee.kijbunchoo@LIGO.ORG - posted 18:41, Tuesday 24 January 2017 - last comment - 00:25, Wednesday 25 January 2017(33605)
1k harmonics damping added to the guardian

I was tasked by Sheila to find out which of the 1st violin mode harmonics are consistently rung up. We were hoping to see just a few modes that like to ring up, so the guardian could damp them automatically. However, pretty much every mode near 1 kHz rings up over time (this was also true back in O1). Some of the lines seem to ring down on their own and some don't. Because we only have 2-3 free violin filter banks per test mass, I chose to damp some of the lines that are high and don't ring down on their own. The lucky frequencies that will be damped automatically by the guardian are:

[ITMX]

992.42Hz -- MODE9: FM8(BP), FM2(-60deg), FM4(100dB), -10gain

994.27Hz -- MODE10: FM5(BP), FM2(-60deg), FM4(100dB), -10gain

[ITMY]

994.65Hz -- MODE9: FM8(BP), FM2(-60deg), FM4(100dB), +10gain

998.81Hz -- MODE10: FM8(BP), FM2(-60deg), FM4(100dB), -10gain

[ETMX]

1003.67, 1003.78, 1003.91 -- MODE10: FM1(BBP), FM2(-60deg), FM4(100dB), +30gain -- Luckily the phases work out such that I can make a broader BP filter that covers all three lines. I wanted to make more of these for the other test masses, but they weren't as straightforward.

1004.54 -- MODE9: FM9(BP), FM2(-60deg), FM4(100dB), +10gain

[ETMY]

1009.44, 1009.49 -- MODE1: FM6(BBP), FM4(100dB), +15gain (these two lines are generally the highest of the 1 kHz violin amplitudes. I don't want to crank the gain up too much; just let them damp slowly, safely, and surely)

 

With permission from Keita I took the IFO out of observing briefly to test all the settings (LLO was down), wrote them into ISC_GEN_STATES.py, and accepted the differences in the SDFs. I didn't hit the load button right away; I loaded the code after a lockloss.

 

Note: BBP = Broad Band Pass filter (not that broad, actually, just broader than usual)
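
For illustration, here is a minimal sketch of what each of the guardian damping steps above amounts to, using the ezca filter-module interface (the helper is hypothetical, and the SUS-<OPTIC>_L2_DAMP_MODE<N> channel naming is assumed from the usual violin damping convention; the real code lives in ISC_GEN_STATES.py):

    def damp_violin_mode(ezca, optic, mode, fms, gain):
        # Hypothetical helper: enable the listed filter modules and ramp
        # the damping gain for one violin damping bank, e.g.
        # H1:SUS-ITMX_L2_DAMP_MODE9 (channel naming assumed).
        bank = ezca.get_LIGOFilter('SUS-%s_L2_DAMP_MODE%d' % (optic, mode))
        bank.ramp_gain(0, ramp_time=1, wait=True)      # start from zero gain
        bank.switch_on(*fms)                           # e.g. 'FM8', 'FM2', 'FM4'
        bank.ramp_gain(gain, ramp_time=5, wait=False)  # ramp to the damping gain

    # ITMX 992.42 Hz line from the table above:
    damp_violin_mode(ezca, 'ITMX', 9, ('FM8', 'FM2', 'FM4'), -10)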

Images attached to this report
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 18:51, Tuesday 24 January 2017 (33606)

All the settings were tested for 30+ minutes and will be monitored until the end of shift. I also updated the violin mode table.

borja.sorazu@LIGO.ORG - 23:02, Tuesday 24 January 2017 (33612)

Notice that the damping filter for the 1009 Hz lines may have an effect on the 2 kHz glitch line reported here.

nutsinee.kijbunchoo@LIGO.ORG - 00:25, Wednesday 25 January 2017 (33616)

Seven hours later the 1009.44 Hz and 1009.49 Hz modes are still being damped (I gave them a very small gain given their amplitudes). The other modes seem to be done damping.

 

Borja: I hope the "effect" is a positive one!

Images attached to this comment
H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 17:05, Tuesday 24 January 2017 (33604)
New Calibration Sensing Function Measurement Suite
J. Kissel

I've taken a new set of calibration measurements that can be used to help track the slow changes in the sensing function over time. The goal is to have a set of these measurements once every few weeks over the course of the run to eventually build up enough of a data set that we can compare the results against the parameter tracking from calibration lines -- specifically the SRC detuning spring frequency and Q, which we're currently not tracking.

The new data live here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Measurements/SensingFunctionTFs/
   2017-01-24_H1DARM_OLGTF_4to1200Hz_25min.xml
   2017-01-24_H1_PCAL2DARMTF_4to1200Hz_8min.xml
   2017-01-24_H1_PCAL2DARMTF_BB_5to1000Hz.xml

These do *not* need to serve as a new set of reference measurements, given that we've only changed digital filters today (see LHO aLOG 33585).

The broad-band (BB) PCAL2DARM transfer function was in the spectrum from Jan 23 2017 23:09:32 UTC for a minute or so (50 averages, 75% overlap, 0.5 Hz BW), so this can be used to verify the GDS pipeline output as well.

More detailed analysis of the sweeps to come.
Images attached to this report
H1 OpsInfo
jim.warner@LIGO.ORG - posted 16:20, Tuesday 24 January 2017 (33602)
Changes to ISC_LOCK today for BRSX

With Jenne and Krishna's approval, I added a couple of steps to ISC_LOCK to reduce the chance that BRSX will engage damping while we are locked. In the DOWN state I added an ezca call on line 218 to set the BRS damping upper limit to a level lower than we usually run:

        ezca['ISI-GND_BRS_ETMX_HIGHTHRESHOLD'] = 1000

On line 1098 (TURN_ON_BS_STAGE2) I reset this value to the normal run value:

        ezca['ISI-GND_BRS_ETMX_HIGHTHRESHOLD'] = 2000

If either of these changes causes a problem relocking, the on-shift operator should just comment both of these lines out. Then check on the BRSX overview that the damping threshold value has been set to 2000. If it's still at 1000, you can use caput in a terminal or right click --> probe --> adjust to set it to 2000.
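
For example, from a control room terminal (assuming the full channel name is just the ezca suffix with the H1: prefix):

    ]$ caput H1:ISI-GND_BRS_ETMX_HIGHTHRESHOLD 2000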

H1 General
edmond.merilh@LIGO.ORG - posted 16:01, Tuesday 24 January 2017 (33600)
Verbal Alarms crashed at 00:00 UTC

Restarted immediately.

H1 General
edmond.merilh@LIGO.ORG - posted 15:59, Tuesday 24 January 2017 (33599)
Shift Summary - Day
TITLE: 01/24 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
Maintenance day went well. It ran about 1.5 hours over, into midday. Relocking was not a problem except for a small guardian script glitch that briefly kept us from advancing beyond DRMI_LOCKED.
LOG:
16:05 UTC Dick G. starting WP 6453
16:10 UTC Bubba to endY to start fan lubrication
16:16 UTC Bubba and Evan to end Y for PCAL calibration. Set ISI config to SC_OFF_NOBRSXY.
16:21 UTC Karen and Christina to end stations
16:25 UTC Joe D. to LVEA (eye wash, fire ext., etc.)
16:27 UTC Carlos restarting nuc for control station camera control
16:30 UTC Gerardo to end X
16:40 Jeff B into LVEA near HAM2 for WP#
16:41 Fil into LVEA near HAM6 for WP#6389
16:44 Travis transitioned EY to LASER HAZARD
17:11 Ken finished his work in the warehouse and headed to the LSB
17:16 Christina leaving EX and heading for the LVEA
17:29 Gerardo back from EX
17:40 Due to exemplary seismic isolation efforts, work being done around HAM2 has failed to cause lockloss! So I've manually broken the lock to afford others the opportunity to work.
17:47 Cheryl into PSL anteroom to look for I/O parts.
17:55 Chandra into LVEA to check gate valves WP#6445
18:00 Fil done in LVEA and headed to EX
18:02 Karen back from EY and heading into the LVEA
18:05 Jim doing measurement on HAM1. Will involve RMs and deisolating HAM1 HEPI
18:06 Multiple corner watchdog trips occurred, maybe because of Chandra in the biergarten near the STS??
18:14 Jeff B out to Ends/compressor rooms to check dust monitor
18:15 Dave and Jim B to mechanical room.
18:20 Joe D out of the LVEA
18:21 Dick G into CER
18:23 Evan and Travis done at EY. It is SAFE
18:27 Jim and Dave back
18:30 Cintas on site to change mats
18:40 Cheryl out
18:50 Cheryl and Betsy out to Mid Stations - 3IFO inventory
18:56 Chandra out of LVEA and headed out to EX
18:57 Joe D out to LVEA to run the oxidation out of the eyewash water
19:10 Jeff B out
19:13 Travis out
19:27 Robert to EY
19:38 Cheryl and Betsy back
19:42 Karen out of LVEA; Jason and Betsy to MY
19:44 DAQ restart imminent
19:47 DAQ has been restarted....and we're back
20:33 Begin initial alignment
21:16 Begin Locking sequence
23:28 Intention Bit set to Undisturbed
LHO VE
kyle.ryan@LIGO.ORG - posted 15:24, Tuesday 24 January 2017 (33597)
1830-1930 UTC -> Ran EDP200 rotating shaft vacuum pump in the Corner Station Mechanical Room
Background - 
The site's roughing pumps haven't been run for a few years and are to be re-purposed as standby "Beam Tube Roughing Pumps" (BTRPs) in the near future.  The plan is to run one (of 6) during each of the next five consecutive maintenance days as a demonstration of their functionality.  Following this, they will be brought up to current maintenance standards and relocated to the LVEA on or around the Beam Tube building penetrations.

Today - 
EDP200 s/n CSM1165
Oil level good but dirty -> no action required at this time.  Internal coolant 12 fluid oz. low - Added 12 fluid oz. of 50/50 mix of distilled water and Drystar coolant w/rust inhibitor.  Installed modified ASA blank to cap inlet.  Modification included tapped hole allowing addition of a mechanical vacuum gauge and 1/4" ball valve for gas admission.  Ran pump for 1 hour with the only load being 2 LPM of UHP GN2.  Pump ran quiet and achieved operating temperature; pressure < 29" Hg - no problems with this unit.  
 
Non-image files attached to this report
H1 General (OpsInfo)
corey.gray@LIGO.ORG - posted 15:10, Tuesday 24 January 2017 - last comment - 07:55, Wednesday 22 February 2017(33595)
LVEA Swept After Maintenance Day Activities

Went through an updated LVEA Sweep checklist (T1500386) after Maintenance Day activities.  Issues/Notes are below (and some photos are attached).

Cleanroom curtains on ISC Tables:

Do we worry about this? Found "mechanical shorts" of curtains contacting various ISC tables. Below is what I found for curtains/tables and the actions taken (if any):

Other Items:

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 07:55, Wednesday 22 February 2017 (34313)OpsInfo

Blip Glitch Set-Up:  EXTEND Until End of O2

This is for Robert.  We should update the note on this set-up to say "Set-up until End of O2".

LHO VE
david.barker@LIGO.ORG - posted 15:01, Tuesday 24 January 2017 (33596)
modified vacuum text alarms to cell phones

WP6451 Dave:

The cell phone text alert system was modified to:

1. restore the actual pump level for CP4

2. annotate the daily messages with the error status for each channel

I restored CP4 and its ERROR channel in the h0ve alarm configuration file.

I modified the epics_alarm_texter.py code so each daily text for cryopump and vacuum gauge status shows the corresponding ERROR status of each value. The ERROR status is a single character following the value:

SPACE means no error

"!" means error

"?" means error channel data unavailable

Today's vacuum gauge text is:

2.602e-09  1.907e-09  1.365e-09  1.624e-09  
2.434e-09  9.976e-10  1.203e-09  1.317e-09  

Spaces following the numbers mean no errors.

Today's cryopump text is:

cp1=92.0  cp2=92.0  cp3=-1.0? cp4=0.0! 
cp5=92.0  cp6=92.0  cp7=92.0  cp8=92.1  

CP3 has a value -1.0? which means both the value and the error are unknown (this is currently not monitored by the system)

CP4 has a value of 0.0! which means we are receiving a value of 0.0 but the ERROR channel is asserted (4-20mA current below 4mA)
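
As a sketch of this encoding, a hypothetical helper (not the actual epics_alarm_texter.py code) might look like:

    def format_reading(value, error, error_available):
        # Append the single-character ERROR status to a reading:
        # ' ' = no error, '!' = error asserted, '?' = error channel unavailable
        if not error_available:
            flag = '?'
        elif error:
            flag = '!'
        else:
            flag = ' '
        return '%.3e%s' % (value, flag)

    # e.g. format_reading(2.602e-09, False, True) returns '2.602e-09 '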

H1 GRD
sheila.dwyer@LIGO.ORG - posted 14:41, Tuesday 24 January 2017 (33594)
changes to down state of ISC_DRMI

Borja has continued working on his analysis of signals sent to the suspensions after locklosses (see 23831); he is working on another alog with the current results.

After his original log, I made some changes to the ISC_LOCK down state so that it turns off loops more efficiently, and removed some things that were shutting off DRMI loops in the ISC_LOCK guardian, because those loops are also handled in the ISC_DRMI guardian. However, what was done in ISC_DRMI wasn't turning everything off. Today we made similar edits to ISC_DRMI to make sure it efficiently turns off ASC and LSC feedback to the suspensions. We have tested this once and fixed one error. The new code is very slow to complete, ten times slower than extremely similar code in ISC_LOCK, and takes a total of 19 seconds to finish turning things off. Since we don't understand why this code should be slower, we filed bug 1068. The current svn revision of ISC_DRMI is 14928.

For now we are leaving the new, slow code in place because we think it is at least better than the old code, which was also slow.

H1 SUS
keita.kawabe@LIGO.ORG - posted 14:13, Tuesday 24 January 2017 - last comment - 00:40, Saturday 28 January 2017(33547)
When ITMY started rubbing?

The attached script checks for jumps in the ITM M0 TEST outputs (e.g. when operators run initial alignment) and in the L2 witness monitors (or oplev), starting 09/Jan/2017 UTC for 14 days. It calculates the DC "response" in urad/urad, i.e. the angle measured by the witness (or oplev) per angle offset applied at M0.

In the attached, just look at the middle two panels. The second panel from the top shows the DC "response" for both ITMX and ITMY, and you can clearly see that ITMY started to act erratically after Jan 15 2017 10:30:00 UTC or so, and that it was fixed as soon as the vertical offset was applied to push the optic up (third panel from the top).

Seems like V=-65.7um is the threshold for ITMY to rub something. Interestingly, when ITMY rubs, YAW is constrained but PIT moves more in DC. Also, since the vertical offset was applied, ITMY has slowly drifted back up by about 30um, probably due to temperature change.

Unfortunately this doesn't show anything about CPX because the M0 offset only acts on ITMX chain, not CPX.
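
A minimal sketch of the kind of calculation the script performs (the real script is attached; this version assumes equally sampled numpy time series and a simple step detector):

    import numpy as np

    def dc_response(offset, witness, jump_thresh, navg=60):
        # Find step changes in the M0 TEST offset, then compute the DC
        # "response" as (change in witness angle) / (change in offset)
        # around each step, averaging navg samples on either side.
        jumps = np.flatnonzero(np.abs(np.diff(offset)) > jump_thresh)
        resp = []
        for j in jumps:
            pre = slice(max(j - navg, 0), j)
            post = slice(j + 1, min(j + 1 + navg, len(offset)))
            d_off = offset[post].mean() - offset[pre].mean()
            d_wit = witness[post].mean() - witness[pre].mean()
            resp.append(d_wit / d_off)
        return jumps, np.array(resp)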

Images attached to this report
Non-image files attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 00:40, Saturday 28 January 2017 (33691)

Borja fixed a bug that could make the script skip the last few TEST OFFSET jumps. For this particular analysis the result was the same. The fixed script is attached.

Thanks Borja!

Non-image files attached to this comment
H1 SUS
betsy.weaver@LIGO.ORG - posted 13:24, Tuesday 24 January 2017 - last comment - 13:21, Wednesday 25 January 2017(33591)
Weekly TUES ETM charge measurement Trends

This morning I took the weekly ETM charge data set.  Unfortunately I did not put two and two together that the Beckhoff EX issues may have caused problems with the SUS ETMX HV power supply while I was running the measurement on ETMX, so its data has a rather large error bar.  Even so, it mostly looks like things are OK there wrt charge accumulation (or lack thereof!).  All looks good at ETMY as well.

Images attached to this report
Comments related to this report
patrick.thomas@LIGO.ORG - 00:03, Wednesday 25 January 2017 (33614)
What were the Beckhoff EX issues?
richard.mccarthy@LIGO.ORG - 13:21, Wednesday 25 January 2017 (33636)
ETM X was not a Beckhoff issue; the power supply had a problem.  Though still on, the display was not displaying properly, so we were not sure the unit was capable of providing power.  A power cycle did not fix this, but removing the HV card from the back and re-installing it did.

The ETM-Y BRS Beckhoff computer had a problem and needed to be restarted.
H1 CAL (ISC)
jeffrey.kissel@LIGO.ORG - posted 12:48, Tuesday 24 January 2017 - last comment - 16:47, Tuesday 24 January 2017(33585)
Improved 4.7 kHz Notch in DARM2 Filter Bank, Calibration Model Updated, EPICS records installed.
J. Kissel, S. Dwyer
WP # 6448

I've installed the improved 4.7 kHz elliptic bandstop notch in FM3 of the DARM2 bank, as designed yesterday (see LHO aLOG 33546),
    User Model     Bank Name        Filter Module       Name             Design String
    H1OMC          H1:LSC-DARM2     FM3                 "not4735.25"     ellip("BandStop",4,1,80,4720,4750)gain(1.12202)
The filter file has been committed to the userapps repo, 
    /opt/rtcds/userapps/release/omc/h1/filterfiles/H1OMC.txt      (r14921)

Because this filter bank is unmonitored by SDF, yet under guardian control, we've ensured that this filter is always on by hard-coding it into the DOWN state of the ALS_DIFF.py guardian (line 112)
    /opt/rtcds/userapps/release/als/common/guardian/ALS_DIFF.py   (r14918)
which has been committed.
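
For reference, the hard-coded switch in the DOWN state amounts to something like the following sketch, using the ezca LIGOFilter interface (the exact call in ALS_DIFF.py line 112 may differ):

    # force the 4.7 kHz notch on whenever ALS_DIFF passes through DOWN
    darm2 = ezca.get_LIGOFilter('LSC-DARM2')
    darm2.switch_on('FM3')   # the "not4735.25" elliptic bandstop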

Calibration impacts:
Both the CAL-CS (which produces DELTAL_EXTERNAL) and the GDS (which produces GDS-CALIB_STRAIN) time-independent portions of the low-latency calibration pipeline are unaffected by changes to the DARM filters. However, the calculation of time-dependent correction factors (TDCFs) involves correcting for the entire DARM loop frequency dependence between calibration lines. Thus as the change to the DARM filter banks changes the DARM loop, it impacts the computation of the TDCFs, which impacts h(t). See T1500377 for details of the calculation.

That being said, the improved notch reduces this impact at the calibration line frequencies to a negligible level. However, for completeness, traceability, reproducibility, and sanity's sake, we update the calibration model in order to account for this new filter.

Calibration update process:
Here're the steps to ensure that the calibration has been updated to reflect the change:
- Copy the updated OMC filter file from the chans archive to the copy of the archive in the CalSVN and commit,
    ]$ cp /opt/rtcds/lho/h1/chans/filter_archive/h1omc/H1OMC_1169320184.txt /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1omc/H1OMC_1169320184.txt
- Create a new IFOparams.conf and corresponding IFOparams_YYYY-MM-DD.conf configuration files in the CalSVN,
    ]$ cd /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/
    ]$ svn mkdir 2017-01-24
    ]$ svn commit -m "" 2017-01-24
    ]$ cp H1params.conf 2017-01-24/     #note that the H1params.conf file in the top level of the O2/H1/params/ folder is the 2017-01-03 reference model (LHO aLOG 33004)
    ]$ cp 2017-01-03/H1params_2017-01-03.conf 2017-01-24/H1params_2017-01-24.conf
- Update IFOparams.conf file to point to the newly updated H1OMC filter file in the variable DfilterFile, and make sure the DfilterModulesInUse2 is calling out that the new filter is on in FM3, and commit to the repo.
- Make a new CAL_EPICS parameter file that points to the new reference model,
    ]$ cd /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/
    ]$ cp callineParams_20170103.m callineParams_20170124.m
- Change par.conf.ifoDepFilename and par.conf.ifoMeasParams to point to IFOparams.conf and corresponding IFOparams_YYYY-MM-DD.conf
- Use 
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/writeH1_CAL_EPICS.m
  to create and install new EPICs records. Be sure to update the par0 that calls the above mentioned callineParams_YYYYMMDD.m
- Accept the new records in the SDF system for both safe.snap and OBSERVE.snap, and commit
    /opt/rtcds/userapps/release/cal/h1/burtfiles/
    h1calcs_safe.snap
    h1calcs_OBSERVE.snap.

This process generates the following output for offline DCS consumption, and represents the new calibration model applicable for all lock stretches beyond 2017-01-24 20:00 UTC:
    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/
    H1params.conf                    (r4212, lc: 4212)
    H1params_2017-01-24.conf         (r4212, lc: 4212)

    /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/
    20170124_H1_CAL_EPICS_VALUES.txt (r4215, lc: r4215)
    D20170124_H1_CAL_EPICS_VALUES.m  (r4215, lc: r4215)
    
Because it's about time we take a new set of calibration suite measurements, I'll take a set later today and compare against this new model.
Images attached to this report
Non-image files attached to this report
Comments related to this report
aaron.viets@LIGO.ORG - 16:47, Tuesday 24 January 2017 (33603)
I just checked the output of GDS on the DMT machines, and the time dependent corrections are all close to nominal values:

H1:GDS-CALIB_KAPPA_TST_REAL:  ~1.005
H1:GDS-CALIB_KAPPA_PU_REAL:   ~1.001
H1:GDS-CALIB_KAPPA_C:         ~1.032
H1:GDS-CALIB_F_CC:            ~342 Hz
H1 PSL (PSL)
travis.sadecki@LIGO.ORG - posted 11:25, Tuesday 24 January 2017 - last comment - 15:15, Tuesday 24 January 2017(33582)
PSL Weekly

Laser Status:
SysStat is good
Front End Power is 34.06W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 2.0 days, 17.0 hr 57.0 minutes (should be days/weeks)
Reflected power is 13.65Watts and PowerSum = 77.54Watts.

FSS:
It has been locked for 0.0 days 0.0 hr and 11.0 min (should be days/weeks)
TPD[V] = 2.413V (min 0.9V)

ISS:
The diffracted power is around 3.6% (should be 3-5%)
Last saturation event was 0.0 days 0.0 hours and 41.0 minutes ago (should be days/weeks)

Possible Issues:
(Be sure to look into these, as they will not be printed in the report)
PMC reflected power is high
 

Note: See attached screenshot of the PMC Refl power for the past month.  It has been reported as being high for the past 2 weeks (see Corey's aLog 33378).  Is this due to some adjustment by the PSL team, in which case we should update our weekly script with a higher nominal value, or is this a real issue?

This completes FAMIS 7422.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 15:15, Tuesday 24 January 2017 (33598)

Updated the script with the new PMC values.

H1 SEI
hugh.radkins@LIGO.ORG - posted 11:10, Tuesday 24 January 2017 - last comment - 14:12, Tuesday 24 January 2017(33580)
WHAM6 HEPI Pringle Loops Removed from CartBias Restore list

TJ & Hugh--Re WP 6286

Successfully modified the Guardian Parameters to just restore the non-pringle-mode target positions:

from isiguardianlib.isolation.const import ISOLATION_CONSTANTS
ISOLATION_CONSTANTS['CART_BIAS_DOF_LISTS'] = (['Z','RX','RY'],['X','Y','RZ'])
from isiguardianlib.HPI import *
prefix = 'HPI-HAM6'
ca_monitor = True

The top two lines of the HPI_HAM6.py code above are the modifications.

However, it does not sequence things quite as we would like: it completely restores the vertical dofs before even starting the isolation of the horizontal loops.  Although this platform did not have difficulty completing the isolation, it is plausible that some platforms could have trouble, since the horizontal dofs are isolated after the vertical dofs have been driven to their target positions.  If those targets are extreme enough, the error point of the horizontal loops may be too large for the loops to engage stably.  The preference would be for all the loops to be running before any of the target positions are restored.

We'll look into changing the sequence more to our preference but TJ feels it could be complicated or at least deeper into the code.  WP remains active.

Comments related to this report
thomas.shaffer@LIGO.ORG - 13:26, Tuesday 24 January 2017 (33592)GRD

Two comments to this work:

1.  We loaded in the new code, saw the "code changes detected and committed" log, and then opened the graph to make sure that it had the new states we expected. It did. We brought the node to READY and then started to isolate, but here it did not follow the new state graph; it followed the old one instead. I think this is bug 1013 (https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=1013), but we only changed the HPI_HAM6.py file, which is not in a subdirectory. The module that creates the edges is in a subdirectory though, so this is what makes me think it is the same bug.

2.  Reorganizing the states as Hugh hopes to do will require a rewrite, or at least a major change to (userapps)/isi/common/guardian/isiguardianlib/isolation/edges.py. This can be done, if we get LLO on board, but probably during a commissioning break.

hugh.radkins@LIGO.ORG - 14:12, Tuesday 24 January 2017 (33593)

Given TJ's assessment of doing this the desired way, I will endeavor to make these changes to all the other HEPIs for restart next Maintenance period.  We'll find out if any of the platforms are particularly sensitive to the sequence; and, plan to improve the process after O2.

H1 GRD (CDS, TCS)
thomas.shaffer@LIGO.ORG - posted 11:02, Tuesday 24 January 2017 - last comment - 16:16, Tuesday 24 January 2017(33578)
Added New TCS_RH_MON Node

I created and started Nutsinee's TCS_RH_MON node that will monitor the TCS Ring Heaters. There were no issues creating it and it seems to work as it should.

It has been added to the GUARD_OVERVIEW.adl as well under TCS.

I updated (userapps)/cds/h1/scripts/check_guardian_nodes_against_medm_screen.bsh to use the more up-to-date guardian client command to get the nodes. All nodes are on the overview.

Comments related to this report
david.barker@LIGO.ORG - 11:59, Tuesday 24 January 2017 (33589)

TJ also restarted the DIAG_MAIN node; the free memory on h1guardian0 went from 34.2 GB to 43.5 GB.

thomas.shaffer@LIGO.ORG - 13:05, Tuesday 24 January 2017 (33590)

At Nutsinee's request, this node is not being monitored by the top-level IFO node yet. She wants to make sure it behaves as it should and won't kick us out of Observing. I updated the exclude list with this node.

nutsinee.kijbunchoo@LIGO.ORG - 16:16, Tuesday 24 January 2017 (33601)TCS

The code lives in /opt/rtcds/userapps/release/tcs/common/guardian and is called TCS_RH_MON.py. This single script monitors all ring heater segments at all test masses.
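
For illustration, the skeleton of such a guardian monitor state might look like this (the channel name, nominal values, and tolerance are hypothetical; see TCS_RH_MON.py for the real logic):

    from guardian import GuardState

    # hypothetical nominal requested powers [W] and tolerance
    NOMINAL = {'ITMX': 0.0, 'ITMY': 0.0, 'ETMX': 0.0, 'ETMY': 0.0}
    TOLERANCE = 0.05

    class MONITORING(GuardState):
        request = True
        def run(self):
            # ezca and notify are provided by the guardian runtime;
            # flag any ring heater whose setpoint has drifted off nominal
            for optic, nominal in NOMINAL.items():
                power = ezca['TCS-%s_RH_SETPOINT' % optic]  # hypothetical channel
                if abs(power - nominal) > TOLERANCE:
                    notify('%s ring heater off nominal!' % optic)
            # never return True, so the check keeps running every cycle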

Displaying reports 50441-50460 of 83117.Go to page Start 2519 2520 2521 2522 2523 2524 2525 2526 2527 End