LHO VE
chandra.romel@LIGO.ORG - posted 12:02, Tuesday 10 April 2018 - last comment - 15:32, Tuesday 10 April 2018(41358)
Vertex turbo spun up

Spun up vertex turbo right before lunch at ~0.6 Torr. Still need to close purge/vent isolation valves (OMC and IMC) and IP3 GV and little right angle valve.

Comments related to this report
chandra.romel@LIGO.ORG - 15:32, Tuesday 10 April 2018 (41365)

Kyle closed the IMC & OMC purge valves and I closed the right angle metal valve on top of IP3 GV. Leak checker is connected to vertex turbo cart, valved out, and warming up for leak checking tomorrow or whenever He bottle is freed up from EY.

H1 DCS (CDS, DCS)
gregory.mendell@LIGO.ORG - posted 11:23, Tuesday 10 April 2018 (41357)
Patch and reboot the DMT production computers
The DMT production computers in the MSR, h1dmt0, h1dmt1, and h1dmt2, have been patched and rebooted. WP 7466 is done.
H1 SEI
hugh.radkins@LIGO.ORG - posted 08:37, Tuesday 10 April 2018 - last comment - 15:47, Tuesday 10 April 2018(41356)
Some extreme values on the ETMY CPS sensors--suspect timing

At ~2256 UTC on 3 April (precisely 1206831398 GPS), all the CPS sensor readouts on both stages jumped to different values. Eight of the twelve spiked to a 33k-ish number before settling somewhere else.  A few on Stage2 are not crazy numbers but still shifted.  Most on Stage1 are extreme, ~positive 8k.  See attached.

Another item to be investigated is the Coil Driver Thermal trip that hit at 1557 UTC on the 5th...

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 15:47, Tuesday 10 April 2018 (41367)

Wasted a bit of time power cycling the CPS power and the sensor interface chassis.  Found the CPS Timing Sync fanout chassis power supply unplugged.  Guess it got swept up in the VE group's house cleaning that was found to be necessary.  While the sensors don't read exactly what they did before, they are at least much more reasonable, and the platform will isolate.

H1 TCS (TCS)
cheryl.vorvick@LIGO.ORG - posted 08:19, Tuesday 10 April 2018 - last comment - 08:36, Tuesday 10 April 2018(41354)
TCS is in an unknown state

I confirmed with Patrick yesterday that the RS232 flow rate readbacks for both chillers show that the chillers are on, with flow rates around 3 gpm.

The flow rates on the MEDM are negative, both around -2.5 gpm.

Greg's most recent alog 41314 from April 5th says the chiller is running in a small closed loop, and it appears that this is for both TCSX and TCSY chillers.

Images attached to this report
Comments related to this report
cheryl.vorvick@LIGO.ORG - 08:36, Tuesday 10 April 2018 (41355)TCS

Patrick went to look, and all 3 TCS chillers are running and all read around 3gpm flow.

LHO VE
kyle.ryan@LIGO.ORG - posted 21:05, Monday 09 April 2018 (41353)
IP11 leak found
Kyle, Gerardo, Chandra 

Today we pumped IP11 using a local turbo backed by an aux. cart.  IP11 is isolated from the Y-end via its closed GV.  As I squirted IPA around each CFF joint and vacuum welds of the Chevron Baffle nipple, Gerardo M. monitored the pressure gauge on the aux. cart to see if there was a response.  Chandra R. had already established this to be a very large leak (0.1 torr*L/sec).  We tried "listening" for the audio leak using a device intended for that purpose -> Something seems to be originating from the Chevron Baffle but difficult to eliminate background noise in the room -> We shut down the Clean Room (1417 hrs. local) and that helped a little.  Next, we modified the as-found bag such that each CFF joint was exposed but keeping the nipple's tube vacuum welds within the bagged volume.  We then purged the bag with bottled N2 while spraying helium around each CFF joint -> no significant response.  Finally, we reversed the bagging, now purging the CFF joint(s) while spraying helium around the welds of the nipple tube -> found very large leak coming from the bottom tube-to-flange vacuum weld (nipple s/n 01).  

Having found one leaking Chevron Baffle, Chandra R. examined a different one stored in the VPW.  She found suspicious-looking welds.  

Also, we prepared IP11+chevron baffle nipple for tomorrow's removal by venting it with bottled UHP N2 and removing all but (3) ea. of the nipple-to-GV nuts and studs.  Note that with the local turbo spun down and disconnected from the aux. cart, I applied the N2 to the Tee at the inlet of the turbo, letting it flow through the turbo and into the room via the open-ended turbo exhaust line.  This setup would prevent the possibility of over-pressurizing IP11 as it remained open to the room.  I then opened the 1 1/2" vent/pump valve to vent IP11.  Gerardo M. noticed that room air was then entering through the exhaust.  Thus, for a short period, I had been back streaming through the local turbo and into IP11.  I then closed the 1 1/2" valve and removed the turbo. This is a "dirty" pump in that it is not a maglev.  I'll consult Chandra R. as to what, if any, action is needed (FTIR?).   
LHO VE
chandra.romel@LIGO.ORG - posted 18:27, Monday 09 April 2018 (41352)
valved out vertex mechanical pump for the night

Isolated QDP80 at vertex with pressure at 1.3 Torr. Will resume tomorrow morning.

LHO VE
david.barker@LIGO.ORG - posted 16:41, Monday 09 April 2018 (41351)
increase TE202A thermocouple high alarm level

Chandra has requested that the high alarm level for H0:VAC-MY_CP3_TE202A_DISCHARGE_TEMP_DEGC be raised from 130.0 degC to 140.0 degC (this signal is currently operating at 131.1 degC). This was done and the alarm system was restarted.
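For reference, a quick way to spot-check this channel against the new threshold from a control-room workstation is with pyepics; this is just a sketch, assuming pyepics is installed and the channel is reachable from the host:

# Sketch: spot-check the CP3 discharge thermocouple against the new alarm level.
# Assumes pyepics is installed and the EPICS channel is reachable from this host.
from epics import caget

CHANNEL = "H0:VAC-MY_CP3_TE202A_DISCHARGE_TEMP_DEGC"
HIGH_ALARM_DEGC = 140.0  # new high alarm level per this entry

temp = caget(CHANNEL)
if temp is None:
    print("Could not connect to %s" % CHANNEL)
else:
    print("%s = %.1f degC (%.1f degC below the high alarm)"
          % (CHANNEL, temp, HIGH_ALARM_DEGC - temp))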

H1 General
cheryl.vorvick@LIGO.ORG - posted 16:29, Monday 09 April 2018 (41350)
OPS summary:

day started with JeffB's alog: 41345

All Times UTC:

H1 ISC
marc.pirello@LIGO.ORG - posted 16:23, Monday 09 April 2018 - last comment - 17:57, Tuesday 24 April 2018(41349)
End Y Timing Comparator Modification Complete
Taking advantage of the pump down, we retrieved, modified, and reinstalled the timing comparator at End Y.  The following modifications were applied to unit S1201222:

LIGO-E1800008: ECR: Timing Comparator replace 5V regulator with separate DC Power board

LIGO-E1200034: Timing RF Counter/Comparator Software Release

LIGO-E1700246: ECR: Add frequency counter channels to timing comparator

One down, three to go!

Comments related to this report
marc.pirello@LIGO.ORG - 17:57, Tuesday 24 April 2018 (41652)

Reinstalled the proper Timing Comparator S1107952 into the CER and returned the spare to the rack.  *Spare still requires upgrade.

Pulled, Upgraded and Reinstalled MSR Timing Comparator S1201224.

No front ends were harmed in this operation. 

Three Timing Comparators done, one remains unmodified at X end.

H1 SUS
betsy.weaver@LIGO.ORG - posted 15:35, Monday 09 April 2018 - last comment - 15:22, Tuesday 10 April 2018(41346)
WBSC9 ETMX QUAD halves back together

Today, Travis and I finished assembling the ETMX lower reaction chain.  We assembled the PenRe mass in the structure, attaching the balance of parts to make the weight from last week, including AOSEMs, cables, and cable routing brackets.  We then installed the new annular AERM07 mass below the PenRe in the lower structure and adjusted it in its 6 DOFs by eye before suspending it.  We then reclamped everything and rolled the main lower chain up to it.  A quick lift via the Genie duct jack and we set the main lower structure and newly fiber'ed masses into the trolley with the reaction set.  We added the UIM and PUM magnets/flags (set with polarities per the QUAD controls poster), and then shoved the 2 structures together and fastened them.

We wrapped the now complete lower unit and rolled it to the chamber side.  After attaching some of the LSAT lifting blocks, we staged the Genie duct jack so that we are ready to install it on the arm and into the chamber tomorrow.

Comments related to this report
betsy.weaver@LIGO.ORG - 15:39, Monday 09 April 2018 (41347)

Note - Travis noticed a door bolt sitting in the BSC9 flange in the unused hole at 3 o'clock.  The washer from the bolt (which held nothing) looked to be rubbing funny on the o-ring, so he removed it.  The vacuum crew will need to inspect this after we remove the install arm.  (Note, the install arm and stiffening flanges cover most of the right side of the holes and o-ring on this chamber, except for a ~6" gap at 3 o'clock.  There are o-ring covers everywhere else.  This bolt should not have been in this hole, since it served no purpose.)

betsy.weaver@LIGO.ORG - 15:22, Tuesday 10 April 2018 (41364)

From Chandra and Kyle who inspected this o-ring portion yesterday late afternoon:

Kyle and I inspected the o-ring and deemed it ok to reuse. Kyle peeled away a sliver of viton material that was hanging off of the o-ring. To the naked eye the surface looks ok. We will take note of this when we install the door and pump down the annulus volume.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:58, Monday 09 April 2018 (41345)
Ops Day Shift Summary
Ops Shift Log: 04/09/2018, Day Shift 15:00 – 23:00 (08:00 - 16:00) Time - UTC (PT)
State of H1: Unlocked for vent and upgrades
Intent Bit: Engineering  
Support: N/A
Incoming Operator: Cheryl
Shift Summary: Morning fill-in for Cheryl. Rebuilds, upgrades, and commissioning continue where possible.
 
Activity Log: Time - UTC (PT)
14:50 (07:50) Start of shift – AM fill in for Cheryl
16:22 (09:22) Marc – Going to End-Y
16:25 (09:25) Amber – School tour of the CR
16:50 (09:50) Marc – Back from End-Y
16:55 (09:55) Betsy & Travis – Going to End-X
17:00 (10:00) Turn over to Cheryl
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:56, Monday 09 April 2018 (41344)
09:00 Meeting Minutes
Vent:
Plan to put HAM6 doors on mid next week

PSL: 
Tweaks and alignment continue. 
Will be adding the shutter shortly. 
Commissioning wants the beam on Wednesday 

VAC:
NO ACCESS TO HAM6 until Wednesday
Leak checking at various points across the site 
Leak at End-Y
CS vertex is pumping down
Viewport work at HAM6
MY CP4 bake out continues

CDS/EE:
Working on End-Y timing system
Working on End-Y AOS laser interlock

PEM:
Moving a weather station to a Beckhoff system
H1 PSL
edmond.merilh@LIGO.ORG - posted 09:23, Monday 09 April 2018 (41343)
PSL Weekly Report - 10 Day Trends FAMIS #6195

70W install is ongoing.

Images attached to this report
H1 GRD
jameson.rollins@LIGO.ORG - posted 17:12, Sunday 08 April 2018 - last comment - 18:04, Tuesday 10 April 2018(41337)
subset of guardian nodes moved to h1guardian1 for further evaluation

I have moved a subset of guardian nodes to the new configuration on h1guardian1.  This is to try to catch more of the segfaults we were seeing during the last upgrade attempt, which we have not been able to reproduce in testing.

The nodes should function normally on the new system, but given what we saw before, we expect to see segfaults with a mean time to failure of about 100 hours.  I will be babysitting the nodes on the new setup and will restart them as soon as they crash.

The nodes that have been moved to the new system are all the SUS and SEI nodes in the input chambers, BS, and the arms.  No nodes from HAM4, HAM5, or HAM6 were moved.  Full list of nodes now running on h1guardian1:

jameson.rollins@opsws12:~ 0$ ssh guardian@h1guardian1 list
HPI_BS
HPI_ETMX
HPI_ETMY
HPI_HAM1
HPI_HAM2
HPI_HAM3
HPI_ITMX
HPI_ITMY
ISI_BS_ST1
ISI_BS_ST1_BLND
ISI_BS_ST1_SC
ISI_BS_ST2
ISI_BS_ST2_BLND
ISI_BS_ST2_SC
ISI_ETMX_ST1
ISI_ETMX_ST1_BLND
ISI_ETMX_ST1_SC
ISI_ETMX_ST2
ISI_ETMX_ST2_BLND
ISI_ETMX_ST2_SC
ISI_ETMY_ST1
ISI_ETMY_ST1_BLND
ISI_ETMY_ST1_SC
ISI_ETMY_ST2
ISI_ETMY_ST2_BLND
ISI_ETMY_ST2_SC
ISI_HAM2
ISI_HAM2_SC
ISI_HAM3
ISI_HAM3_SC
ISI_ITMX_ST1
ISI_ITMX_ST1_BLND
ISI_ITMX_ST1_SC
ISI_ITMX_ST2
ISI_ITMX_ST2_BLND
ISI_ITMX_ST2_SC
ISI_ITMY_ST1
ISI_ITMY_ST1_BLND
ISI_ITMY_ST1_SC
ISI_ITMY_ST2
ISI_ITMY_ST2_BLND
ISI_ITMY_ST2_SC
SEI_BS
SEI_ETMX
SEI_ETMY
SEI_HAM2
SEI_HAM3
SEI_ITMX
SEI_ITMY
SUS_BS
SUS_ETMX
SUS_ETMY
SUS_IM1
SUS_IM2
SUS_IM3
SUS_IM4
SUS_ITMX
SUS_ITMY
SUS_MC1
SUS_MC2
SUS_MC3
SUS_PR2
SUS_PR3
SUS_PRM
SUS_RM1
SUS_RM2
SUS_TMSX
SUS_TMSY
jameson.rollins@opsws12:~ 0$

guardctrl for nodes on h1guardian1

NOTE: Until the new system has been put fully into production, "guardctrl" interaction with these nodes on h1guardian1 is a bit different.  To start/stop the nodes, or get status or view the logs, you will need to send the appropriate guardctrl command to guardian@h1guardian1 over ssh, e.g.:

jameson.rollins@opsws12:~ 0$ ssh guardian@h1guardian1 status SUS_BS
● guardian@SUS_BS.service - Advanced LIGO Guardian service: SUS_BS
   Loaded: loaded (/usr/lib/systemd/user/guardian@.service; enabled; vendor preset: enabled)
  Drop-In: /home/guardian/.config/systemd/user/guardian@.service.d
           └─timeout.conf
   Active: active (running) since Sun 2018-04-08 14:48:47 PDT; 1h 53min ago
 Main PID: 24724 (guardian SUS_BS)
   CGroup: /user.slice/user-1010.slice/user@1010.service/guardian.slice/guardian@SUS_BS.service
           ├─24724 guardian SUS_BS /opt/rtcds/userapps/release/sus/common/guardian/SUS_BS.py
           └─24745 guardian-worker SUS_BS /opt/rtcds/userapps/release/sus/common/guardian/SUS_BS.py

Apr 08 14:48:50 h1guardian1 guardian[24724]: SUS_BS executing state: ALIGNED (100)
Apr 08 14:48:50 h1guardian1 guardian[24724]: SUS_BS [ALIGNED.enter]
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS REQUEST: ALIGNED
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS calculating path: ALIGNED->ALIGNED
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS same state request redirect
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS REDIRECT requested, timeout in 1.000 seconds
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS REDIRECT caught
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS [ALIGNED.redirect]
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS executing state: ALIGNED (100)
Apr 08 16:01:45 h1guardian1 guardian[24724]: SUS_BS [ALIGNED.enter]
jameson.rollins@opsws12:~ 0$
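For anyone checking several nodes at once, the same ssh pattern can be wrapped in a few lines of Python.  This is a sketch only, using just the "status" subcommand shown above (any other guardctrl subcommand names would be assumptions):

# Sketch: query guardctrl status for several h1guardian1 nodes over ssh.
# Only uses the "status" subcommand demonstrated above.
import subprocess

HOST = "guardian@h1guardian1"

def guardctrl_status(node):
    """Run 'ssh guardian@h1guardian1 status <node>' and return its output."""
    cmd = ["ssh", HOST, "status", node]
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

for node in ["SUS_BS", "SUS_ETMX", "SUS_ETMY"]:
    out = guardctrl_status(node)
    # The systemd "Active:" line is enough for a quick health check.
    active = [line for line in out.splitlines() if "Active:" in line]
    print(node, active[0].strip() if active else "(no status returned)")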

Problems encountered during move

A couple of the SEI systems did not come back up to the same states they were in before the move.  This caused a trip on ETMY HPI, and ETMX ISI_ST1.  I eventually recovered everything back to the states they were in at the beginning of the day.

The main problem I've been having is with the ISI_*_SC nodes.  They are all supposed to be in the SC_OFF state, but a couple of the nodes are cycling between TURNING_OFF_SC and SC_OFF.  For instance, ISI_ITMY_ST2_SC is showing the following:

2018-04-09_00:00:39.728236Z ISI_ITMY_ST2_SC new target: SC_OFF
2018-04-09_00:00:39.729272Z ISI_ITMY_ST2_SC executing state: TURNING_OFF_SC (-14)
2018-04-09_00:00:39.729667Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.enter]
2018-04-09_00:00:39.730468Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.main] timer['ramping gains'] = 5
2018-04-09_00:00:39.790070Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.run] USERMSG 0: Waiting for gains to ramp
2018-04-09_00:00:44.730863Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.run] timer['ramping gains'] done
2018-04-09_00:00:44.863962Z ISI_ITMY_ST2_SC EDGE: TURNING_OFF_SC->SC_OFF
2018-04-09_00:00:44.864457Z ISI_ITMY_ST2_SC calculating path: SC_OFF->SC_OFF
2018-04-09_00:00:44.865347Z ISI_ITMY_ST2_SC executing state: SC_OFF (10)
2018-04-09_00:00:44.865730Z ISI_ITMY_ST2_SC [SC_OFF.enter]
2018-04-09_00:00:44.866689Z ISI_ITMY_ST2_SC [SC_OFF.main] SENSCOR_Y_IIRHP FMs:[4] is not in the correct configuration
2018-04-09_00:00:44.866988Z ISI_ITMY_ST2_SC [SC_OFF.main] USERMSG 0: SENSCOR_Y_IIRHP FMs:[4] is not in the correct configuration
2018-04-09_00:00:44.927099Z ISI_ITMY_ST2_SC JUMP target: TURNING_OFF_SC
2018-04-09_00:00:44.927619Z ISI_ITMY_ST2_SC [SC_OFF.exit]
2018-04-09_00:00:44.989053Z ISI_ITMY_ST2_SC JUMP: SC_OFF->TURNING_OFF_SC
2018-04-09_00:00:44.989577Z ISI_ITMY_ST2_SC calculating path: TURNING_OFF_SC->SC_OFF
2018-04-09_00:00:44.989968Z ISI_ITMY_ST2_SC new target: SC_OFF
2018-04-09_00:00:44.991117Z ISI_ITMY_ST2_SC executing state: TURNING_OFF_SC (-14)
2018-04-09_00:00:44.991513Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.enter]
2018-04-09_00:00:44.993546Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.main] timer['ramping gains'] = 5
2018-04-09_00:00:45.053773Z ISI_ITMY_ST2_SC [TURNING_OFF_SC.run] USERMSG 0: Waiting for gains to ramp

Note that the problem seems to be that it's failing a check for the SENSCOR filter banks being in the correct state once SC_OFF has been achieved.  Here are the nodes that are having problems, and the messages they're throwing:

ISI_HAM2_SC     [SC_OFF.main] SENSCOR_GND_STS_Y_FIR FMs:[1] is not in the correct configuration
ISI_HAM3_SC     [SC_OFF.main] SENSCOR_GND_STS_Y_FIR FMs:[1] is not in the correct configuration
ISI_BS_ST2_SC   [SC_OFF.main] SENSCOR_Y_IIRHP FMs:[4] is not in the correct configuration
ISI_BS_ST1_SC   [SC_OFF.main] SENSCOR_GND_STS_Y_WNR FMs:[6] is not in the correct configuration
ISI_ITMX_ST2_SC [SC_OFF.main] SENSCOR_Y_IIRHP FMs:[4] is not in the correct configuration
ISI_ITMY_ST2_SC [SC_OFF.main] SENSCOR_Y_IIRHP FMs:[4] is not in the correct configuration
ISI_ETMY_ST1_SC [SC_OFF.main] SENSCOR_GND_STS_Y_WNR FMs:[6] is not in the correct configuration
ISI_ETMY_ST1_SC [SC_OFF.main] SENSCOR_GND_STS_Y_WNR FMs:[6] is not in the correct configuration

I've tried to track down where exactly the problem is coming from, but haven't been able to figure it out yet.  It looks like the expected configuration just does not match how they're currently set.  I will need to consult with the SEI folks tomorrow to sort this out.  In the meantime, I'm leaving all of the above nodes paused.
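For context on the pattern in the log above: the cycling happens because SC_OFF re-checks the SENSCOR filter configuration on entry and jumps back to TURNING_OFF_SC whenever that check fails.  The sketch below only illustrates that state logic; the helper is hypothetical and this is not the actual sei_config code:

# Illustrative sketch only -- not the actual sei_config code.
# A state that re-verifies the filter configuration on entry will cycle
# indefinitely if the expected configuration never matches reality.
from guardian import GuardState

def filters_in_expected_config():
    """Hypothetical helper: compare the live SENSCOR filter-module switches
    against the configuration file and return True if they match."""
    raise NotImplementedError

class SC_OFF(GuardState):
    index = 10

    def main(self):
        if not filters_in_expected_config():
            # The real node also posts the USERMSG seen in the log; returning
            # a state name here requests the jump back to TURNING_OFF_SC,
            # which is what produces the cycle.
            return 'TURNING_OFF_SC'
        return True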

Comments related to this report
thomas.shaffer@LIGO.ORG - 07:52, Monday 09 April 2018 (41339)

A note on the SC nodes:

Since these new SC nodes are still in a bit of a testing phase, I don't think all of the filters that will be used are in the configuration file. One way we could get around this, until the config file is set exactly how the SEI team wants it, is to remove the check temporarily. I'm hesitant to remove it entirely, but that might be best, since leaving it in doesn't allow for any testing of new filters.
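One minimal way to implement that "remove the check temporarily" option, assuming the check lives in a shared module such as sei_config (the flag and function names here are hypothetical), would be to gate it behind a module-level switch rather than delete it:

# Hypothetical sketch of the suggestion above: gate the configuration check
# behind a module-level flag so it can be disabled while the filter set is
# still in flux, then re-enabled once the config file is final.
ENFORCE_FILTER_CONFIG = False  # flip back to True when the config file is settled

def filter_config_ok(current, expected):
    """Return True if the configuration matches, or if enforcement is off."""
    if not ENFORCE_FILTER_CONFIG:
        return True
    return current == expected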

jameson.rollins@LIGO.ORG - 07:58, Monday 09 April 2018 (41340)

As of 7:50 am this morning (after I restarted 5 nodes last night):

HPI_BS              enabled    active     2018-04-08 14:48:46-07:00
HPI_ETMX            enabled    active     2018-04-08 14:48:05-07:00
HPI_ETMY            enabled    active     2018-04-08 15:55:18-07:00
HPI_HAM1            enabled    active     2018-04-08 14:42:32-07:00
HPI_HAM2            enabled    failed     2018-04-08 14:40:54-07:00
HPI_HAM3            enabled    active     2018-04-08 14:41:09-07:00
HPI_ITMX            enabled    active     2018-04-08 14:48:05-07:00
HPI_ITMY            enabled    failed     2018-04-08 14:48:05-07:00
ISI_BS_ST1          enabled    active     2018-04-08 14:49:54-07:00
ISI_BS_ST1_BLND     enabled    active     2018-04-08 15:38:51-07:00
ISI_BS_ST1_SC       enabled    active     2018-04-08 16:16:57-07:00
ISI_BS_ST2          enabled    failed     2018-04-08 14:49:54-07:00
ISI_BS_ST2_BLND     enabled    active     2018-04-08 15:38:51-07:00
ISI_BS_ST2_SC       enabled    active     2018-04-08 16:16:56-07:00
ISI_ETMX_ST1        enabled    failed     2018-04-08 14:47:10-07:00
ISI_ETMX_ST1_BLND   enabled    active     2018-04-08 15:34:22-07:00
ISI_ETMX_ST1_SC     enabled    active     2018-04-08 16:16:57-07:00
ISI_ETMX_ST2        enabled    active     2018-04-08 14:47:10-07:00
ISI_ETMX_ST2_BLND   enabled    active     2018-04-08 15:34:21-07:00
ISI_ETMX_ST2_SC     enabled    active     2018-04-08 16:16:56-07:00
ISI_ETMY_ST1        enabled    active     2018-04-08 14:47:10-07:00
ISI_ETMY_ST1_BLND   enabled    active     2018-04-08 15:37:45-07:00
ISI_ETMY_ST1_SC     enabled    active     2018-04-08 16:16:57-07:00
ISI_ETMY_ST2        enabled    active     2018-04-08 14:47:10-07:00
ISI_ETMY_ST2_BLND   enabled    failed     2018-04-08 15:37:45-07:00
ISI_ETMY_ST2_SC     enabled    active     2018-04-08 16:16:57-07:00
ISI_HAM2            enabled    active     2018-04-08 14:40:54-07:00
ISI_HAM2_SC         enabled    failed     2018-04-08 16:16:57-07:00
ISI_HAM3            enabled    failed     2018-04-08 14:41:09-07:00
ISI_HAM3_SC         enabled    active     2018-04-08 16:16:56-07:00
ISI_ITMX_ST1        enabled    active     2018-04-08 14:47:10-07:00
ISI_ITMX_ST1_BLND   enabled    active     2018-04-08 23:17:14-07:00
ISI_ITMX_ST1_SC     enabled    active     2018-04-08 16:16:57-07:00
ISI_ITMX_ST2        enabled    active     2018-04-08 23:17:14-07:00
ISI_ITMX_ST2_BLND   enabled    failed     2018-04-08 15:33:53-07:00
ISI_ITMX_ST2_SC     enabled    active     2018-04-08 16:16:57-07:00
ISI_ITMY_ST1        enabled    active     2018-04-08 14:47:10-07:00
ISI_ITMY_ST1_BLND   enabled    active     2018-04-08 15:33:12-07:00
ISI_ITMY_ST1_SC     enabled    failed     2018-04-08 16:16:57-07:00
ISI_ITMY_ST2        enabled    active     2018-04-08 14:47:10-07:00
ISI_ITMY_ST2_BLND   enabled    failed     2018-04-08 15:33:37-07:00
ISI_ITMY_ST2_SC     enabled    active     2018-04-08 16:16:57-07:00
SEI_BS              enabled    active     2018-04-08 14:48:45-07:00
SEI_ETMX            enabled    active     2018-04-08 14:47:26-07:00
SEI_ETMY            enabled    active     2018-04-08 23:17:13-07:00
SEI_HAM2            enabled    active     2018-04-08 14:40:53-07:00
SEI_HAM3            enabled    active     2018-04-08 14:41:08-07:00
SEI_ITMX            enabled    failed     2018-04-08 14:47:26-07:00
SEI_ITMY            enabled    active     2018-04-08 14:47:26-07:00
SUS_BS              enabled    active     2018-04-08 23:17:13-07:00
SUS_ETMX            enabled    active     2018-04-08 14:47:27-07:00
SUS_ETMY            enabled    active     2018-04-08 14:47:27-07:00
SUS_IM1             enabled    active     2018-04-08 14:05:38-07:00
SUS_IM2             enabled    failed     2018-04-08 14:05:38-07:00
SUS_IM3             enabled    failed     2018-04-08 14:05:38-07:00
SUS_IM4             enabled    failed     2018-04-08 14:05:38-07:00
SUS_ITMX            enabled    active     2018-04-08 14:47:27-07:00
SUS_ITMY            enabled    active     2018-04-08 23:17:13-07:00
SUS_MC1             enabled    active     2018-04-08 23:17:14-07:00
SUS_MC2             enabled    active     2018-04-08 14:40:11-07:00
SUS_MC3             enabled    failed     2018-04-08 14:40:11-07:00
SUS_PR2             enabled    failed     2018-04-08 14:40:11-07:00
SUS_PR3             enabled    failed     2018-04-08 14:40:11-07:00
SUS_PRM             enabled    active     2018-04-08 14:40:11-07:00
SUS_RM1             enabled    failed     2018-04-08 13:45:41-07:00
SUS_RM2             enabled    active     2018-04-08 13:45:48-07:00
SUS_TMSX            enabled    active     2018-04-08 14:53:45-07:00
SUS_TMSY            enabled    active     2018-04-08 14:53:46-07:00

Including the five nodes I restarted last night, that's 23 segfaults out of 68 nodes in roughly 18 hours = 6 hour MTTF.  That's a higher failure rate than we saw previously.  I'm reverting all nodes back to h1guardian0.

jameson.rollins@LIGO.ORG - 08:26, Monday 09 April 2018 (41342)

All nodes have been reverted back to h1guardian0

jonathan.hanks@LIGO.ORG - 18:04, Tuesday 10 April 2018 (41371)
I have attached a pdf with a breakdown of stack traces and pids, so that we can see what the causes of the failures were.
Non-image files attached to this comment
H1 GRD
jameson.rollins@LIGO.ORG - posted 15:36, Sunday 08 April 2018 - last comment - 08:25, Monday 09 April 2018(41338)
fixed "sei_config" guardian node imports

The "sei_config" guardian nodes (ISI_*_{BLND,SC}) were showing import errors having to do with not finding the module "SEI_CONFIG".  It looks like this module was renamed to "sei_config" on March 26, but the nodes that import it were not updated to import the module under the new name.  I've updated them appropriately, and committed to the SVN:

jameson.rollins@opsws12:/opt/rtcds/userapps/release/isi/h1/guardian 0$ svn status
M       ISI_BS_ST1_BLND.py
M       ISI_BS_ST1_SC.py
M       ISI_BS_ST2_BLND.py
M       ISI_BS_ST2_SC.py
M       ISI_ETMX_ST1_BLND.py
M       ISI_ETMX_ST1_SC.py
M       ISI_ETMX_ST2_BLND.py
M       ISI_ETMX_ST2_SC.py
M       ISI_ETMY_ST1_BLND.py
M       ISI_ETMY_ST1_SC.py
M       ISI_ETMY_ST2_BLND.py
M       ISI_ETMY_ST2_SC.py
M       ISI_HAM2_SC.py
M       ISI_HAM3_SC.py
M       ISI_HAM4_SC.py
M       ISI_HAM5_SC.py
M       ISI_HAM6_SC.py
M       ISI_ITMX_ST1_BLND.py
M       ISI_ITMX_ST1_SC.py
M       ISI_ITMX_ST2_BLND.py
M       ISI_ITMX_ST2_SC.py
M       ISI_ITMY_ST1_BLND.py
M       ISI_ITMY_ST1_SC.py
M       ISI_ITMY_ST2_BLND.py
M       ISI_ITMY_ST2_SC.py
jameson.rollins@opsws12:/opt/rtcds/userapps/release/isi/h1/guardian 0$ svn commit
Sending        ISI_BS_ST1_BLND.py
Sending        ISI_BS_ST1_SC.py
Sending        ISI_BS_ST2_BLND.py
Sending        ISI_BS_ST2_SC.py
Sending        ISI_ETMX_ST1_BLND.py
Sending        ISI_ETMX_ST1_SC.py
Sending        ISI_ETMX_ST2_BLND.py
Sending        ISI_ETMX_ST2_SC.py
Sending        ISI_ETMY_ST1_BLND.py
Sending        ISI_ETMY_ST1_SC.py
Sending        ISI_ETMY_ST2_BLND.py
Sending        ISI_ETMY_ST2_SC.py
Sending        ISI_HAM2_SC.py
Sending        ISI_HAM3_SC.py
Sending        ISI_HAM4_SC.py
Sending        ISI_HAM5_SC.py
Sending        ISI_HAM6_SC.py
Sending        ISI_ITMX_ST1_BLND.py
Sending        ISI_ITMX_ST1_SC.py
Sending        ISI_ITMX_ST2_BLND.py
Sending        ISI_ITMX_ST2_SC.py
Sending        ISI_ITMY_ST1_BLND.py
Sending        ISI_ITMY_ST1_SC.py
Sending        ISI_ITMY_ST2_BLND.py
Sending        ISI_ITMY_ST2_SC.py
Transmitting file data .........................
Committed revision 17121.
jameson.rollins@opsws12:/opt/rtcds/userapps/release/isi/h1/guardian 0$
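For clarity, the change in each node file is just the import line; illustratively (the exact import style in the real files is an assumption):

# Illustrative one-line fix in each ISI_*_{BLND,SC} node file:
# before -- fails after the module was renamed on March 26
#   from SEI_CONFIG import *
# after -- matches the new lowercase module name
from sei_config import *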


Comments related to this report
jameson.rollins@LIGO.ORG - 08:25, Monday 09 April 2018 (41341)

A couple other notes about the ISI_*_BLND and ISI_*_SC nodes:

  • See alog 41337: the *_SC nodes are not going to their correct configuration, presumably due to filter banks not being loaded.
  • Neither the _SC nor the _BLND nodes are able to determine their current configuration and jump to the appropriate state on INIT (a sketch of what such an INIT could look like follows).
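As a rough illustration of the second point, an INIT that works out the as-found configuration and requests the matching state could look like the following; the helper is hypothetical and this is a sketch, not a proposal for the actual node code:

# Rough sketch of an INIT that decides its target from the as-found settings,
# so a node restart does not disturb a running platform.  The helper is
# hypothetical; a real version would read the live filter-module and gain
# settings via ezca.
from guardian import GuardState

def current_sc_state():
    """Hypothetical helper: return 'SC_ON' or 'SC_OFF' based on the live
    sensor-correction filter and gain settings."""
    raise NotImplementedError

class INIT(GuardState):
    def main(self):
        # Jump straight to whichever state matches the as-found configuration.
        return current_sc_state()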