Displaying reports 70001-70020 of 77083.
Reports until 16:21, Wednesday 31 July 2013
LHO VE
john.worden@LIGO.ORG - posted 16:21, Wednesday 31 July 2013 - last comment - 16:50, Thursday 01 August 2013(7311)
Plot of QDP80 Roughing the vertex

QDP80 roughing the vertex volume. For Mike Z.

Volume pumped would be the Y Beam Manifold + HAM2,3 + BSC1,2.

Images attached to this report
Comments related to this report
john.worden@LIGO.ORG - 16:50, Thursday 01 August 2013 (7327)

Volume pumped should be Y Beam Manifold + HAM2,3 + BSC 1,2,3

LHO General
gerardo.moreno@LIGO.ORG - posted 16:15, Wednesday 31 July 2013 (7310)
Ops Summary

- Rick to visit the roof, 10:30 am.
- Betsy and Travis at X-End, SUS work @ ETMX.
- Sheila to H1-PSL enclosure, dust monitor on hand.
- Filiberto to Y-End, disassembly of racks.
- Hugh and Greg, Y-End station, HEPI movement.
- Dave and Jim, 1 DAQ reboot, cheers!

H1 DAQ
david.barker@LIGO.ORG - posted 16:14, Wednesday 31 July 2013 (7309)
DAQ incorrect dataValid fixed by reducing the number of DCUs; also fixed corrupted DUST slow data

Since the EX system was added to the DAQ we have been seeing two problems in the frame: incorrect dataValid flags, and corrupted DUST slow data.

Alex found that there is a limit on the number of DCUs acquired by the DAQ; the limit is 85. We had 94 DCUs in the DAQ.

We removed 16 DCUs from the DAQ this afternoon, and both problems have gone away (Patrick verified all DUST channels have good data).

Alex will test a fix to allow the full complement of DCUs in the DAQ next week. For now the systems listed below are out of the DAQ:

h1iopsusauxh56, h1susauxh56

h1iopsusquadtst, h1susquadtst

h1iopsusbstst, h1susbstst

h1iopseitst, h1isitst

h1iopsusex, h1iopseiex, h1iopiscex, h1iopsusauxex

h1iopsusey, h1iopseiey, h1iopiscey, h1iopsusuxey
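As a cross-check of the arithmetic above, the six lines list exactly 16 models, taking the DAQ from 94 DCUs down to 78, safely under the limit of 85. A minimal sketch (model names copied verbatim from the lists above):

```python
# Front-end models removed from the DAQ to get under the 85-DCU limit.
removed = [
    "h1iopsusauxh56", "h1susauxh56",
    "h1iopsusquadtst", "h1susquadtst",
    "h1iopsusbstst", "h1susbstst",
    "h1iopseitst", "h1isitst",
    "h1iopsusex", "h1iopseiex", "h1iopiscex", "h1iopsusauxex",
    "h1iopsusey", "h1iopseiey", "h1iopiscey", "h1iopsusuxey",
]

DCU_LIMIT = 85   # current limit on DCUs acquired by the DAQ
before = 94      # DCU count before the trim

after = before - len(removed)
print(len(removed), after, after <= DCU_LIMIT)  # -> 16 78 True
```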

LHO VE
kyle.ryan@LIGO.ORG - posted 16:07, Wednesday 31 July 2013 (7308)
Running rotating pumps @ HAM3 and BSC7

H1 SUS
mark.barton@LIGO.ORG - posted 15:09, Wednesday 31 July 2013 (7306)
SUS_CUST_HAUX_DACKILL.adl fix

Jeff K pointed out that the button to go to the IM3 WD screen from the SUS_CUST_HAUX_DACKILL screen was broken (it gave the right screen but with lots of white channels). It turned out that I had missed updating an args="..." line in the recent cleanup, so I fixed it and committed the new file.

H1 ISC
jeffrey.kissel@LIGO.ORG - posted 12:30, Wednesday 31 July 2013 (7300)
ISC Filter File Configuration Control
In preparation for updating the IMC to LLO-style control, I've brought all the *current* H1 ISC filter files under proper version control, so that we have a version-controlled backup.
This meant:
(1) Adding and/or committing any outstanding changes that were present in the local copy in the userapps repo (if one already existed).
(2) Copying the chans directory copy over to the userapps repo, if they differed, assuming the chans directory copy is the latest and greatest.
(3) Committing the latest and greatest to the repo.
(4) Removing the chans copy and turning it into a soft link to the userapps copy.

(In case you're worried about whether Foton, the RCG, and autoquack can correctly follow this link, SUS has been interacting with the filter file / softlink for many months now without issue. It survived many re-compiles, RCG upgrades, opening and editing the chans "copy" with Foton, and quacking new filters in.)

NOTE: It should now be common practice to open and edit the filter file in the userapps repo, NOT in the chans directory even though foton can handle both.

Now the ISC filter files in the chans directory look like this (and point to -> that):
controls@operator1:chans 0$ pwd
/opt/rtcds/lho/h1/chans
controls@operator1:chans 0$ ls -l H1{I,L,A}SC*.txt
lrwxrwxrwx 1 controls controls 59 Jul 31 12:07 H1ASCIMC.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASCIMC.txt
lrwxrwxrwx 1 controls controls 58 Jul 19 16:14 H1ASCTT.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASCTT.txt
lrwxrwxrwx 1 controls controls 56 Jul 31 12:24 H1ASC.txt -> /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASC.txt
lrwxrwxrwx 1 controls controls 58 Jul 31 12:11 H1ISCEY.txt -> /opt/rtcds/userapps/release/isc/h1/filterfiles/H1ISCEY.txt
lrwxrwxrwx 1 controls controls 56 Jul 31 12:06 H1LSC.txt -> /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt
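Steps (2)-(4) above amount to a copy-then-symlink pattern. A minimal Python sketch of that pattern, using throwaway temp paths for illustration (the svn commit of step (3) is omitted):

```python
import os
import shutil
import tempfile

def link_filter_file(chans_copy, userapps_copy):
    """Replace the chans copy of a filter file with a soft link to the
    version-controlled userapps copy, keeping the latest content."""
    shutil.copy2(chans_copy, userapps_copy)   # chans copy is the latest and greatest
    os.remove(chans_copy)                     # drop the plain copy...
    os.symlink(userapps_copy, chans_copy)     # ...and soft-link to the repo version

# Demonstration with throwaway paths (the real files live under
# /opt/rtcds/lho/h1/chans and /opt/rtcds/userapps/release/.../filterfiles):
tmp = tempfile.mkdtemp()
chans = os.path.join(tmp, "H1LSC.txt")
userapps = os.path.join(tmp, "userapps_H1LSC.txt")
with open(chans, "w") as f:
    f.write("latest filters")
with open(userapps, "w") as f:
    f.write("stale filters")

link_filter_file(chans, userapps)
print(os.path.islink(chans), open(chans).read())  # -> True latest filters
```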
H1 SEI
sebastien.biscans@LIGO.ORG - posted 11:58, Wednesday 31 July 2013 (7299)
HEPI model needs an update - Switches buried in the model

Right now there is a mismatch between the MEDM HEPI screens and the model, specifically in the BLND screen: three ON/OFF switches are still active in the model but not displayed on the MEDM screen.

By default, the switches are OFF and the output of the BLND is 0 (see pictures attached).

This bug should disappear in the latest model version (we got rid of the switches). I'll try to put in a WP and do the update this week.


For now, the quick solution is to use caput commands in a terminal. Example for HEPI-HAM3:

Turn the BLND L4C switch ON -> caput H1:HPI-HAM3_BLND_L4C_SWITCH 1

Turn the BLND IPS switch ON -> caput H1:HPI-HAM3_BLND_IPS_SWITCH 1

Turn the BLND BIAS switch ON -> caput H1:HPI-HAM3_BLND_BIAS_SWITCH 1
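Since the three commands share one pattern, they can also be generated in a loop; a Python sketch (channel names copied from the commands above; actually issuing them would require `caput` or pyepics on a CDS workstation):

```python
# Build the caput commands that turn the buried BLND switches back ON.
chamber = "HAM3"
blends = ["L4C", "IPS", "BIAS"]

commands = [
    "caput H1:HPI-{0}_BLND_{1}_SWITCH 1".format(chamber, blend)
    for blend in blends
]

for cmd in commands:
    print(cmd)
```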

Images attached to this report
H1 PSL
sheila.dwyer@LIGO.ORG - posted 11:38, Wednesday 31 July 2013 (7298)
PSL in commissioning mode for 10 minutes
I took the PSL from "science" to "commissioning" mode for 10 minutes, from 11:20 to 11:32.  I only entered the anteroom, where I retrieved an ALS Faraday and swapped a dust monitor for Patrick.  

Sheila
H1 SEI
sebastien.biscans@LIGO.ORG - posted 19:28, Tuesday 30 July 2013 (7296)
HEPI-HAM2 and HAM-ISI HAM3 Measurements
Just started measurements on HEPI-HAM2 and ISI-HAM3 (on the computer 'operator3' in the control room). Should be done in the morning, around 10:30.
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 18:43, Tuesday 30 July 2013 - last comment - 14:03, Wednesday 31 July 2013(7295)
H1SUSBS Phase 3b M1-M1 Measurements Launched
[J. Kissel, A. Pele]

We've launched acceptance transfer function measurements on H1 SUS BS, using a matlab session on the operator1 computer. Measurement start: 30-Jul-2013 18:23:34 PDT. We expect it to take 12 (!!) hours, so it should finish around 31-Jul-2013 06:30 PDT.
Comments related to this report
arnaud.pele@LIGO.ORG - 14:03, Wednesday 31 July 2013 (7304)

Measurements successfully completed.

H1 General
jeffrey.kissel@LIGO.ORG - posted 17:13, Tuesday 30 July 2013 - last comment - 19:48, Tuesday 30 July 2013(7289)
Restoring the IMC After RCG 2.7 Upgrade
[S. Biscans, K. Izumi, J. Kissel, A. Pele, J. Rollins]

Log of how we brought the IMC back up to locking today after the RCG 2.7 upgrade in rough chronological order:


- SUS (for each SUS --  MC1, MC2, MC3, PRM, PR2, PR3, BS, ITMY, IM1-IM4)
   - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off, (stale*) alignment offsets OFF. ** 
   - Reset IOP watchdog
   - Reset Inner-most USER watchdogs
   - Reset USER DACKILL
   - Turned on Masterswitch (damping loops should already be trying to run)
   - Turned on stale alignment offsets (in anticipation of having to burt restore ripe offsets that are not that different)
   - Recovered ripe alignment offsets for MC1, MC2, and MC3
       - snapped to 2013-07-29 20:00 (Local)
       - /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susmc*epics.snap

- HPI-HAM2 & HPI-HAM3
   - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off ***
   - Make sure no offsets are requested, ramp down master gain
   - reset watchdog
   - turned on master switch
   - manually restored offsets in OUTF 
      - put in a 10 sec ramp 
      - entered and turned on offsets from LHO aLOG 7180 (offsets turn on use the ramp time now as well)


- ISI-HAM2 & ISI-HAM3
   - Given to us as it comes up from a reboot, in SAFE -- watchdogs tripped, masterswitch off
   - Make sure no offsets are requested, ramp down master gain
   - reset watchdogs
   - Turned on Master Switch
   - Used the Command window to 
      - damp ISI
      - isolate ISI Level 2 

- PSL
   - Runs entirely on Beckhoff, and was unaffected by the RCG 2.7 upgrade, so no recovery was needed

- IOO
   - Given to us as it comes up from a reboot
   - burt restored h1ascimc.snap (mainly to restore alignment of Beckhoff controlled mirror at the top of the periscope -- all other mirrors are fixed)
      - 2013-07-29 20:00 (Local)
      - /ligo/cds/lho/h1/burt/2013/07/29/20:00/h1susascimc.snap
   - turned off IMC WFS loops
      - ramp down master gain in bottom left corner
   - turned on guardian ****
      - go to IMC screen, have the guardian screen handy/visible
      - In a terminal
         ]$ ssh h1guardian0
         ]$ screen -RAad imcguardian
           (opens a "screen" environment, which looks just like a new terminal)
         ]$ cd /opt/rtcds/userapps/release/ioo/h1/scripts/
         ]$ ./runIMCGuardian
           (some lights should go green on the IMC Guardian Screen)
            (hit ctrl+a, then ctrl+d, which "detaches" the screen session, taking you back to the original h1guardian0 terminal)
         ]$ exit 
           (logs you out of h1guardian)
      - the guardian starts up not doing anything (state request or current state are blank)
      - hit "manual", "quiet," or "damped" on the IMC Guardian Screen (since there are currently no "goto_" scripts for "locked," hitting "locked" doesn't do anything)
      - Manually Ramp IMC WFS master gain back up (guardian doesn't know to touch it)
      - IMC Locks up like a charm


* Because the safe.snaps for most suspensions had been captured a while ago, one can never trust the alignment offsets. 

** For MC1, MC2, BS, and ITMY, the safe state came up with the BIO settings frozen and red -- specifically, the Test/Coil Enable switches were red, and changing the state request didn't change the filter settings. Scared that this was a bug introduced by the RCG 2.7 upgrade, I had considered going through a full boot process and/or declaring a failure mode of the upgrade. However, since this did *not* appear on MC3, PRM, PR2, or PR3 (which ruled out that the bug was common to a given computer), my quick test for whether a given SUS was frozen was to open the BIO screen and randomly pick a field to try to change. When I got to the beam splitter, my randomness drove me to try changing the TEST/COIL Enable bit instead of the State Request. When I requested to turn on the SUS-BS M3 TEST/COIL Enable, it flipped the M1 and M2 TEST/COIL Enables, all BIO functionality returned, and the stages began behaving independently, as expected and as normal. I went back to the other suspensions and saw similar behavior, i.e. the BIO EPICS record was stuck, and the SAFE burt restore didn't sufficiently tickle the EPICS database to get the BIO request and monitors to report back correct information. Once tickled in the right way, all functionality returned. YUCK!

*** Sebastien had never seen spectra of the LHO IPS, so we had assumed that the HPIs were locked or the IPS were broken. Because of the nonsense described in **, I told him to wait a bit. Then we convinced ourselves that we needed to power-cycle the H1SEIH23 I/O chassis (because we'd heard that Dave had to restart the PSL I/O chassis this morning after it complained it had lost connection with the Dolphin network or something). 

**** We tried to follow Chris' instructions from LHO aLOG 6920, but this failed because the h1script0 machine is still running Ubuntu 10.04 (as opposed to 12.04, which h1guardian0 was recently updated to, like the rest of the control room machines) -- so we don't know how Chris did it. Thankfully Jamie was helping and identified the problem immediately after querying Jim.
Comments related to this report
christopher.wipf@LIGO.ORG - 19:48, Tuesday 30 July 2013 (7297)

**** Regarding the guardian/h1script0 mystery -- it sounds like LHO's installation of perl has been upgraded recently.  At the time of entry 6920, the situation was the opposite: the IMC guardian could not be started on the 12.04 workstations, and only the more outdated machines (such as h1script0) were capable of running it.

H1 CDS
david.barker@LIGO.ORG - posted 17:04, Tuesday 30 July 2013 - last comment - 18:30, Tuesday 30 July 2013(7293)
RCG2.7 upgrade, decommission of EY

WP 4065

This morning we upgraded CDS to tag advLigoRTS-2.7. All daqd systems were compiled from source, all front-end models were rebuilt, and all front-end computers were rebooted.

At the same time, the EY computers were powered down and will remain off for several weeks while we reconfigure EY to the aLIGO configuration.

The EY systems were taken out of the Dolphin manager, and their channels were marked as acquire=0 in DAQ so that they are not written to the frame. Their channels are still in the DAQ to allow trending of the old data.

h1dc0 required a reboot to load the new GPS drivers. h1susb123 and h1psl0 were power cycled to permit comms with their I/O chassis. 

I did not update the PSL models today. This system was burt restored to 8am today.

We are testing the new science frames features in the DAQ. We have found some problems with the channel stats.

Unfortunately our dataValid flags are still incorrect (set to ONE, should be ZERO). Jim rebuilt tag 2.7 on the DTS, and the DAQ there is writing correct dataValid values. We are investigating what is different between the two.

Jeff reports strange binary I/O behavior on the sus models (QUAD and HAM). The BIO readbacks are bogus until an output channel has been switched, at which point the data becomes correct. We are investigating.

Comments related to this report
keith.thorne@LIGO.ORG - 18:30, Tuesday 30 July 2013 (7294)
This behavior was noted at LLO during the RCG 2.7 upgrade by Stuart Aston (see the aLOG entry from last Thursday).
LHO VE
kyle.ryan@LIGO.ORG - posted 17:00, Tuesday 30 July 2013 (7292)
Found leak on GV6 gate annulus piping
Was able to open GV5 and leak test the gate annulus piping joints on GV6 today -> 1 of 5 CFFs was found to be leaking.  


Volume = Y1, Y-mid, Y2, YBM, XBM, HAM2-3, BSC1,2,3,7,8, being pumped by CP1, CP2, the YBM MTP (backed by a leak detector), CP3, CP4 and IP9.  
Helium background 6.5x10^-8 torr*L/sec -> the 1.33" CFF on the North side of GV6 climbed steadily from background to 1.5x10^-7 torr*L/sec beginning 18 seconds after filling the bag, until purged with air at the 200 second mark. The demonstration was repeated after the signal fell back to near the original starting value. 
H1 SUS
mark.barton@LIGO.ORG - posted 14:54, Tuesday 30 July 2013 - last comment - 16:02, Wednesday 31 July 2013(7287)
ETMX OL values, gains and offsets

I measured the ETMX OL values

prettyOSEMgains('H1','ETMX')

M0F1 19894 1.508  -9947
M0F2 29453 1.019 -14727
M0F3 30661 0.978 -15330
M0LF 24512 1.224 -12256
M0RT 24724 1.213 -12362
M0SD 21784 1.377 -10892
R0F1 28922 1.037 -14461
R0F2 23013 1.304 -11507
R0F3 25716 1.167 -12858
R0LF 26662 1.125 -13331
R0RT 24388 1.230 -12194
R0SD 21961 1.366 -10980
L1UL 24267 1.236 -12133
L1LL 26538 1.130 -13269
L1UR 24545 1.222 -12273
L1LR 26259 1.142 -13130
L2UL 17935 1.673  -8967
L2LL 18726 1.602  -9363
L2UR 25124 1.194 -12562
L2LR 25518 1.176 -12759
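The three columns above (open-light value in counts, gain, offset) appear consistent with the usual OSEM normalization: gain = 30000/OLV (scaling the open-light value to 30000 counts) and offset = -OLV/2 (centering at half the open-light value). That convention is my reading of the numbers, not stated in the entry; a quick check against the M0F1 row:

```python
def osem_gain_offset(olv):
    """OSEM normalization inferred from the table above (an assumption):
    gain scales the open-light value (OLV) to 30000 counts, and the
    offset centers the signal at minus half the OLV."""
    gain = round(30000.0 / olv, 3)
    offset = -round(olv / 2.0)
    return gain, offset

# Check against the M0F1 row of the table: OLV 19894 -> 1.508, -9947
print(osem_gain_offset(19894))  # -> (1.508, -9947)
```

Rows with odd OLVs differ by at most one count in the offset, depending on the rounding convention.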

Comments related to this report
mark.barton@LIGO.ORG - 15:15, Tuesday 30 July 2013 (7288)

I entered the gains and offsets into the OSEMINF screens and updated and committed the /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmx_safe.snap .

mark.barton@LIGO.ORG - 12:37, Wednesday 31 July 2013 (7301)

Betsy was concerned about the M0 F1 and SD OL values, which were a bit low (in spec, but rather lower than average). So she swapped out those two and I remeasured those two values only. The new master list is as follows:

M0F1 24018 1.249 -12009 (new)
M0F2 29453 1.019 -14727
M0F3 30661 0.978 -15330
M0LF 24512 1.224 -12256
M0RT 24724 1.213 -12362
M0SD 28519 1.052 -14259 (new)
R0F1 28922 1.037 -14461
R0F2 23013 1.304 -11507
R0F3 25716 1.167 -12858
R0LF 26662 1.125 -13331
R0RT 24388 1.230 -12194
R0SD 21961 1.366 -10980
L1UL 24267 1.236 -12133
L1LL 26538 1.130 -13269
L1UR 24545 1.222 -12273
L1LR 26259 1.142 -13130
L2UL 17935 1.673  -8967
L2LL 18726 1.602  -9363
L2UR 25124 1.194 -12562
L2LR 25518 1.176 -12759

I entered the new values into the OSEMINF screen and redid the safe.snap.

betsy.weaver@LIGO.ORG - 16:02, Wednesday 31 July 2013 (7307)

Now with associated serial numbers which are also in ICS:

M0F1 - 643

M0F2 - 486

M0F3 - 414

M0LF - 580

M0RT - 507

M0SD - 563

R0F1 - 426

R0F2 - 497

R0F3 - 504

R0LF - 468

R0RT - 418

R0SD - 434

LHO General
patrick.thomas@LIGO.ORG - posted 19:49, Monday 22 July 2013 - last comment - 12:44, Wednesday 31 July 2013(7170)
dust monitors in LVEA
The dust monitors in the LVEA are NOT currently being recorded. It appears swapping the dust monitor in the H1 PSL enclosure has broken the communications.
Comments related to this report
patrick.thomas@LIGO.ORG - 09:45, Tuesday 23 July 2013 (7177)
Upon startup the IOC communicates correctly with each dust monitor until it gets to location 16 (the one that was swapped yesterday). After this it starts reporting back errors of the form:

Error: ../commands.c: 49: Sent command � not echoed
Received ?
patrick.thomas@LIGO.ORG - 10:54, Wednesday 24 July 2013 (7205)
I power cycled the Comtrol this morning. It worked past location 16 for a little while, but the error has returned.
patrick.thomas@LIGO.ORG - 17:58, Wednesday 24 July 2013 (7210)
Robert says he swapped the dust monitor in the H1 PSL laser enclosure.

First, one dust monitor was disconnected from the breakout box outside the H1 PSL enclosure. If I recall correctly, the dust monitor at location 16 was then still found by the IOC, and the communication errors persisted. The first dust monitor was plugged back in and the other one disconnected. The IOC still found the dust monitor at location 16, but the communication errors went away. The dust monitor at location 16 reported calibration errors.

It may be that the wrong dust monitor was swapped, leaving two set to the same location number, but this would not explain why the communication errors persisted after the first one was disconnected.

As it stands, one of the dust monitors in the H1 PSL enclosure is disconnected. The dust monitor at location 16 is reporting calibration errors. I am not sure where the dust monitor at location 16 is. The dust monitor at location 10 is not found by the IOC. The remainder of the dust monitors in the LVEA are running again.
patrick.thomas@LIGO.ORG - 12:44, Wednesday 31 July 2013 (7302)
Sheila swapped the dust monitor in the anteroom with one programmed at location 10. The one she removed from the anteroom is labeled 'H'. It had no charge left in the battery when I got it.

There was no change in the status. The dust monitor at location 10 is still unseen, and the dust monitor at location 16 is still giving calibration errors.

This leads me to believe that:
The dust monitor at location 16 is in the laser room and has calibration errors.
The dust monitor at location 10 is in the anteroom and is unplugged at the breakout box outside the enclosure.
H1 SEI
jess.mciver@LIGO.ORG - posted 21:18, Thursday 18 July 2013 - last comment - 13:40, Wednesday 31 July 2013(7129)
ETMY seismic and suspensions glitches correlated with ground motion
Daniel Halbe, Josh Smith, Jess McIver

Summary: strong, semi-periodic transient ground motion is propagating up the SEI platforms and down the suspension stages at ETMY. The cause of the ground motion is not yet determined. The effect on the HIFOY signal is not yet evaluated. 

Glitching in the top stage of the ETMY BOSEMs was first identified by Daniel Halbe (see Spectrogram_SUS_Longitundinal_M0_BOSEM_July2.png). These glitches are seen in all DOFs of the suspensions and seismic isolation platforms; have an irregular periodicity of about every 10-20 minutes, a duration of a few minutes, a central frequency of 3-5 Hz, and an amplitude in the Stage 1 T240s of the ISI on the order of a thousand nm/s (~1 um/s) above baseline noise; and have been occurring since at least June 12, 2013. They are not seen in ITMY suspension channels. 

For a table that traces these glitches across each DOF and up the stages of seismic isolation to the top stage of the suspension, see: https://wiki.ligo.org/DetChar/HIFOYETMYglitching > Normalized spectrograms (PSD) of the periodic glitches for 1 hour 10 min

Daniel also found them in the lower stage ETMY OSEMs: https://wiki.ligo.org/DetChar/SpectrogramsOfH1AllMassQuadSUSETMY

And Josh Smith traced them to excess ground motion using a representative top stage BOSEM channel (see EMTY_top_stage_BOSEM_pitch_correlation_to_excess_ground_motion.png). 

These glitches have a strong correlation with local ground motion and a significant correlation with ground motion near the vault. There appears to be a faint correlation with ground motion near MX and the LVEA that merits further investigation.  
(See the normalized spectrogram Top_stage_BOSEM_ETMY_longitudinal_glitching.png and compare to normalized spectrograms Ground_motion_PEM_{location}_spectrogram.png of the same time period)

For additional plots of ground motion at various locations around the ifo during these glitches, see again: https://wiki.ligo.org/DetChar/HIFOYETMYglitching
(If you are unable to see some of the plots on this page, please see the instructions under 'Normalized spectrograms (PSD) of the periodic glitches for 1 hour and 10 min'). 

Note that the reported units of counts are incorrect for all plots (a bug in ligoDV) - these channels are calibrated to nm/s for inertial sensors or to um for BOSEMs and OSEMs.
Images attached to this report
Comments related to this report
jess.mciver@LIGO.ORG - 16:41, Friday 26 July 2013 (7253)

According to the summary bit of the ODC, the ETMY ISI was not in a 'good' state during this time. 

From the Hanford cluster:

$ligolw_segment_query -t https://segdb-er.ligo.caltech.edu  --query-segments --include-segments H1:ODC-ISI_ETMY_SUMMARY:1 --gps-start-time 1056797416 --gps-end-time 1056801616 | ligolw_print  -t segment:table -c start_time -c end_time -d ' '

Returned no results. 


jess.mciver@LIGO.ORG - 13:40, Wednesday 31 July 2013 (7303)

TJ Massinger, Jess McIver

TJ did a similar study of the H1 BS top stage BOSEMs and found glitching at a lower frequency (2.8 Hz) than we've seen in the ETMY (3-5 Hz). 

A comparison of the top stage BOSEMs of the core optics at Hanford is attached. The glitches seen in the beam splitter BOSEMs do not seem coincident in time with the glitches in the ETMY. 

ISI states at this time are below (note that if an isolation loop is not indicated to be in a good state, it may be because the 'correct state' value used in the comparison that generates the ODCs was wrong/outdated for some chambers until Celine fixed it a few hours ago):


ETMY

H1:ODC-ISI_ETMY_MASTER_SWITCH:1
H1:ODC-ISI_ETMY_ST1_DAMP:1
H1:ODC-ISI_ETMY_ST1_WATCHDOG:1
H1:ODC-ISI_ETMY_ST2_DAMP:1
H1:ODC-ISI_ETMY_ST2_WATCHDOG:1
 
ITMY
H1:ODC-ISI_ITMY_MASTER_SWITCH:1
H1:ODC-ISI_ITMY_ST1_DAMP:1
H1:ODC-ISI_ITMY_ST1_WATCHDOG:1
H1:ODC-ISI_ITMY_ST2_DAMP:1
H1:ODC-ISI_ITMY_ST2_ISOLATION:1
H1:ODC-ISI_ITMY_ST2_WATCHDOG:1
 
BS
H1:ODC-ISI_BS_MASTER_SWITCH:1
H1:ODC-ISI_BS_ST1_DAMP:1
H1:ODC-ISI_BS_ST1_ISOLATION:1
H1:ODC-ISI_BS_ST1_WATCHDOG:1
H1:ODC-ISI_BS_ST2_DAMP:1
H1:ODC-ISI_BS_ST2_ISOLATION:1
H1:ODC-ISI_BS_ST2_WATCHDOG:1
H1:ODC-ISI_BS_SUMMARY:1

Images attached to this comment