Reports until 19:25, Tuesday 08 September 2015
H1 DetChar
paul.altin@LIGO.ORG - posted 19:25, Tuesday 08 September 2015 - last comment - 19:40, Tuesday 08 September 2015(21314)
Loud glitches & ETMY saturations

I've been investigating the range drops associated with loud (SNR > 1000) glitches.

During ER7, these were thought to be due to particles falling through the beam (see alogs on "dust glitches" 20276, 20354, 20355, 20328, 20395, 20484).

In ER8, we are still seeing loud glitches associated with range drops, however now there are also saturations on SUS ETMY (see alogs 19939, 19947, 20071, 20612, and almost every Ops summary since August 17).

Every Omicron trigger with SNR > 1000 (and most of those with SNR 100 – 1000 as well) is simultaneous with a range drop and an ETMY saturation during the 92 hours from September 3 - 6.

By contrast, none of the SNR > 1000 triggers from June 9 - 11 are associated with ETMY saturations.
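The coincidence test behind these statements can be sketched as a simple time-windowed match between two trigger lists. This is only an illustration, assuming one already has the Omicron trigger times and ETMY saturation times as GPS seconds (the times and window below are made up):

```python
import bisect

# Sketch: flag Omicron triggers that have an ETMY saturation within a
# short window. Times are hypothetical GPS seconds; a real analysis
# would read Omicron output and the SUS overflow records.

def coincident(trigger_times, saturation_times, window=1.0):
    """Return triggers with a saturation within +/- window seconds."""
    sats = sorted(saturation_times)
    matched = []
    for t in trigger_times:
        i = bisect.bisect_left(sats, t)
        nearby = sats[max(0, i - 1):i + 1]
        if any(abs(t - s) <= window for s in nearby):
            matched.append(t)
    return matched

# made-up example:
print(coincident([100.0, 250.3, 410.7], [100.2, 600.0]))  # [100.0]
```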

On the other hand, OmegaScans and timeseries plots of the ER7 and ER8 glitches look similar, suggesting that all the glitches may be related.

So, if there are two distinct glitch classes, the "dust glitches" appear to have stopped occurring.

Or, if all the glitches have the same cause, then something seems to have changed to make ETMY more sensitive to them.

Has anything in the control system changed in a way that could lead to larger signals appearing on the ETMs for the same ‘glitch stimulus’?

See plots attached, and PreO1WorstGlitches page for more details.

Non-image files attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 19:40, Tuesday 08 September 2015 (21315)

Yes, since ER7 we have turned on analog low-pass filtering on the EY ESD. For any signal above 50 Hz, we drive the DAC 500 times harder than before in order to overcome the effect of this filter.
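For a single-pole analog low-pass, the digital pre-emphasis needed to cancel it grows linearly with frequency above the pole, which is why the drive factor quoted above is so large. A sketch of that scaling; the actual EY ESD filter shape and corner frequency are not given in this entry, so f0 below is purely illustrative:

```python
import math

# Sketch: magnitude of 1/H(f) for a single real pole at f0, i.e. how
# much harder the DAC must be driven to cancel the analog low-pass.
# f0 is a hypothetical corner frequency, not the real filter's.

def inverse_lowpass_gain(f, f0):
    return math.sqrt(1.0 + (f / f0) ** 2)

f0 = 1.0  # Hz, illustrative only
for f in (10.0, 50.0, 500.0):
    print(f"{f:6.1f} Hz -> x{inverse_lowpass_gain(f, f0):.1f}")
```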

H1 DCS (DCS)
gregory.mendell@LIGO.ORG - posted 17:46, Tuesday 08 September 2015 (21312)
Restart of DCS Disk2Disk and diffFrames

Restarted DCS Disk2Disk and diffFrames, which copy data from CDS to LDAS and check for diffs between the science frames from the two framewriters. The restart was between 16:28 PDT and 17:19 PDT today (08 Sept. 2015). This was to fix a minor bug found in these scripts: they did not close a file used internally to check their own run status. This change should have no impact on CDS.
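The unclosed-file bug described here is the classic case for a context manager. A minimal sketch, not the actual Disk2Disk internals; the path and the "running" convention are made up:

```python
import os

# Sketch: check/update a run-status file without leaking the file
# handle. The path and "running" convention are hypothetical.

def already_running(status_file):
    if not os.path.exists(status_file):
        return False
    with open(status_file) as f:      # closed automatically on exit
        return f.read().strip() == "running"

def mark_running(status_file):
    with open(status_file, "w") as f:
        f.write("running")
```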

H1 CDS
patrick.thomas@LIGO.ORG - posted 17:30, Tuesday 08 September 2015 (21311)
Updated Conlog channel list
I added 'H1:OMC-READOUT_ERR_GAIN' to the exclude list, since it sometimes generates 'nan' values and stops Conlog. I then regenerated the channel list and set Conlog to use it.

+ H1:OMC-FPGA_DTONE_GAIN
+ H1:OMC-FPGA_DTONE_LIMIT
+ H1:OMC-FPGA_DTONE_OFFSET
+ H1:OMC-FPGA_DTONE_RSET
+ H1:OMC-FPGA_DTONE_SW1S
+ H1:OMC-FPGA_DTONE_SW2S
+ H1:OMC-FPGA_DTONE_SWSTAT
+ H1:OMC-FPGA_DTONE_TRAMP
- H1:OMC-READOUT_ERR_GAIN
inserted 8 pv names
deleted 1 pv names
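A guard of the following kind would let a logger skip NaN values instead of stopping on them, which is the failure mode that forced the channel onto the exclude list. The (channel, value) tuples are made up; real code would use an EPICS client:

```python
import math

# Sketch: drop channel updates whose value is NaN. Non-numeric values
# (e.g. enum strings) are kept, since only NaN is the problem here.

def is_nan(value):
    try:
        return math.isnan(float(value))
    except (TypeError, ValueError):
        return False

def usable_updates(updates):
    return [(ch, v) for ch, v in updates if not is_nan(v)]

updates = [("H1:OMC-READOUT_ERR_GAIN", float("nan")),
           ("H1:OMC-FPGA_DTONE_GAIN", 1.0)]
print(usable_updates(updates))  # [('H1:OMC-FPGA_DTONE_GAIN', 1.0)]
```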
H1 SEI
hugh.radkins@LIGO.ORG - posted 16:31, Tuesday 08 September 2015 - last comment - 12:31, Wednesday 09 September 2015(21309)
LHO SEI HEPI has OBSERVE.snap for SDF

Collected OBSERVE.snap files for use in full observing configuration monitoring.

With the SDF happy (green) and Guardian Nominal (Robust Isolated) and green, I made an OBSERVE.snap file with SDF_SAVE screen

choosing EPICS DB TO FILE  & SAVE AS == OBSERVE

This saves the current values of all switches, preserving the monitor/not-monitor bit, into the target area, e.g. /opt/rtcds/lho/h1/target/h1hpietmy/h1hpietmyepics/burt as OBSERVE.snap

This file is then copied to the svn: /opt/rtcds/userapps/release/hpi/h1/burtfiles as h1hpietmy_OBSERVE.snap and added/committed as needed.

The OBSERVE.snap in the target area is now deleted and a soft link is created for OBSERVE.snap to point to the svn copy:

ln -s /opt/rtcds/userapps/release/hpi/h1/burtfiles/h1hpietmy_OBSERVE.snap OBSERVE.snap

Finally, the SDF_RESTORE screen is used to select the OBSERVE.snap softlink and loaded with the LOAD TABLE button.
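The file-handling steps above (copy to svn, delete, symlink) could be wrapped in one small helper. A sketch for a single platform, with the svn add/commit still done by hand; paths in the comments are the ones from this entry:

```shell
# Sketch: publish a freshly saved OBSERVE.snap to the svn working copy
# and leave a soft link in the target area, as described above.
publish_observe_snap() {
    target=$1   # e.g. /opt/rtcds/lho/h1/target/h1hpietmy/h1hpietmyepics/burt
    svndir=$2   # e.g. /opt/rtcds/userapps/release/hpi/h1/burtfiles
    platform=$3 # e.g. h1hpietmy
    cp "$target/OBSERVE.snap" "$svndir/${platform}_OBSERVE.snap"
    rm "$target/OBSERVE.snap"
    ln -s "$svndir/${platform}_OBSERVE.snap" "$target/OBSERVE.snap"
}
```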

Now, for the HEPIs for example, the not-monitored channels handled by the guardian will differ in value from the safe.snap, but since they are still not monitored, the SDF remains green and happy.  And if the HEPI platform trips, it will still be happy and green, again because the not-monitored channels are still not monitored.

What's the use of all this you say?  Okay, I say, go to the SDF_TABLE screen and switch the MONITOR SELECT choice to ALL (vs MASK.)  Now, the not monitored channel bit flag is ignored and all records are monitored and differences (ISO filters when the platform is tripped for example) will show in the DIFF list until guardian has the platform back to nominal.

Notice too that the SDF_OVERVIEW has the pink light indicating monitor ALL is set.  This should stay this way unless Guardian is having trouble reisolating the platform, in which case the operator may want to reenable the bit mask to make more apparent any switches that guardian isn't touching.

Images attached to this report
Comments related to this report
jameson.rollins@LIGO.ORG - 12:31, Wednesday 09 September 2015 (21345)

But rather than rely on selecting ALL in the SDF_MON_ALL selection, I would suggest you actually set the monitor bit to True for all channels in the OBSERVE.snap.  That way we don't have to do a two-step select process to activate it, and we can indicate if there are special channels that we don't monitor, for whatever reason.

hugh.radkins@LIGO.ORG - 12:05, Wednesday 09 September 2015 (21343)

Yes Jameson.  That is why I selected the ALL button allowing all channels to be monitored.

jameson.rollins@LIGO.ORG - 09:48, Wednesday 09 September 2015 (21337)

Hugh, I think the OBSERVE snaps should have the monitor bit set for all channels.  In some sense that's the whole point of having separate OBSERVE files to be used in this way: we use them to define setpoints against which every channel should be monitored.
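Setting the monitor bit for every channel could be done directly on the snap file. A sketch, assuming the SDF snap format puts the monitor flag as the last whitespace-separated field on each channel line; that assumption should be verified against a real file before trusting this:

```python
# Sketch: force the monitor bit on for every channel line in an
# OBSERVE.snap. ASSUMPTION: the last field on a channel line is the
# monitor flag; verify against a real SDF snap file first.

def monitor_all(lines):
    out = []
    for line in lines:
        fields = line.split()
        if len(fields) >= 4 and fields[0].startswith("H1:"):
            fields[-1] = "1"          # set monitor flag
            out.append(" ".join(fields))
        else:
            out.append(line.rstrip("\n"))
    return out

example = ["H1:HPI-ETMY_ISO_X_GAIN 1 1.0 0"]
print(monitor_all(example))  # ['H1:HPI-ETMY_ISO_X_GAIN 1 1.0 1']
```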

LHO General
thomas.shaffer@LIGO.ORG - posted 16:13, Tuesday 08 September 2015 - last comment - 16:42, Tuesday 08 September 2015(21308)
Maintenance Day Summary

Tasks completed (Time in PDT)

HAM1 Grouting (800-1220)

Crane TCS Chiller (-1220)

EX replace annulus ion pump (1012-1451)

Duotone Frame change (-1211)

CDS Frame Writers (831-1300)

UPS bypass repair (828-841)

PSL laser WD on Beckhoff computer (819-821)

PZT peri mirror LPF swap (1231-1321)

HWS alignment (1315-1409)

GDS channels added to DAQ (1300)

ETMX charge measurements (945-1033)

ETMY Coil Driver measurement (950-1150)

Comments related to this report
keita.kawabe@LIGO.ORG - 16:42, Tuesday 08 September 2015 (21310)

Duotone: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=21222

After installation, I confirmed that H1:CAL-PCALX_FPGA_DTONE_IN1_DQ, H1:CAL-PCALY_FPGA_DTONE_IN1_DQ and H1:OMC-FPGA_DTONE_IN1_DQ are listed with the "acquire=3" line in H1CALEX.ini, H1CALEY.ini and H1OMC.ini, respectively.
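That confirmation could be scripted rather than done by eye. A sketch, assuming the usual CDS DAQ .ini layout of [CHANNEL] sections with key=value lines; the file contents in the test are made up:

```python
import configparser

# Sketch: list which of the wanted channels appear in a DAQ .ini file
# with acquire=3. Assumes [CHANNEL] sections with key=value lines.

def acquired_channels(ini_path, wanted):
    cp = configparser.ConfigParser(strict=False)
    cp.read(ini_path)
    return [ch for ch in wanted
            if cp.has_section(ch)
            and cp.get(ch, "acquire", fallback="") == "3"]
```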

LHO General
thomas.shaffer@LIGO.ORG - posted 16:06, Tuesday 08 September 2015 (21298)
Ops Day Shift Summary

Log:

Handing off to Jeff B trying to lock DRMI_1F.

Longer maintenance Day, but no huge surprises.

LHO VE
kyle.ryan@LIGO.ORG - posted 15:08, Tuesday 08 September 2015 (21306)
~1030 - 1445 hrs. local -> Replaced expired annulus ion pump at BSC9
Pump cart running next to BSC9 until further notice
H1 AOS
betsy.weaver@LIGO.ORG - posted 14:52, Tuesday 08 September 2015 - last comment - 18:19, Tuesday 08 September 2015(21304)
HARDWARE INJ OFF

Note, when the EXC bit in the CALCS CDS overview is in alarm, we tend to open the screen CAL_INJ_CONTROL to attempt to diagnose - This shows a big red light for some ODC Channel OK Latch, leading us to misdiagnose what is actually in alarm.  We have 2 operational problems:

 

1) Generically, if there is a red light on the CDS screen, where do you go?  Normally, we follow the logical medm and are able to get to the bottom of the red status via logical nested reds.  This is not the case for the CALCS screen - the CDS H1:FEC-117_STATE_WORD bit is RED on the H1CALCS line of the overview screen, yet this bit is nowhere on the CALCS screen.

So, where does the info come from for specifically the EXC bit of the H1CALCS state word, such that we can do something about it?

 

2) Someone should rework the CAL_INJ_CONTROL.adl so that it doesn't cause us to misdiagnose actual reds.  Currently, the HARDWARE INJECTIONS are out of configuration (outstanding issue to still be sorted) and yet, there is NO INDICATION of that on the CAL_INJ_CONTROL screen...  Also, the CW injection appears to be off, but there is no "red alarm" on the screen.

 

BTW, the HARDWARE INJ appear to be off.  They dropped around 7pm local time last night (20 hours ago).

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 14:52, Tuesday 08 September 2015 (21305)

Images attached to this comment
jameson.rollins@LIGO.ORG - 15:21, Tuesday 08 September 2015 (21307)

The hardware injection folks should comment, but I spoke to Duncan Macleod at LLO last week about redesigning that screen.  I told him not to use red for things that don't actually indicate errors, or conditions that need to be acted upon.  I think he was working on the screen, so maybe it can be updated at LHO soon.

The EXC bit in the CALCS is expressly excluded from the DIAG_EXC checks, because excitations in the CALCS model are used for hardware injections.

jenne.driggers@LIGO.ORG - 18:19, Tuesday 08 September 2015 (21313)

Our confusion with the Cal EXC was that we were expecting an excitation to exist, but it didn't.  The CDS overview screen that Betsy posted is modified to show red for the CALCS EXC when there *is not* an excitation, whereas it shows red for every other model if there *is* an excitation.  But, we were having trouble on the CAL CS overview screen determining if there was actually a problem.

H1 TCS
nutsinee.kijbunchoo@LIGO.ORG - posted 14:22, Tuesday 08 September 2015 (21302)
ITMX HWS alignment work

I went in to have a quick look at the HWSX table, hoping it would be a quick fix to get the SLED beam centered without taking the Hartmann plate off. However, I couldn't get the streamed images to look any better as I moved the periscope mirrors, so I took the Hartmann plate off to see what the beam looks like. I've attached a picture. The earthquake stops are there, so the beam *should* be roughly centered. Is this a scaling problem? Or do we have dead pixels somewhere?

 

I put the Hartmann plate back on although the streamed image seems useless...

Images attached to this report
H1 SUS
betsy.weaver@LIGO.ORG - posted 13:53, Tuesday 08 September 2015 (21301)
ETM charge measurements

Taking over from where Leonid left off, between last Tuesday and today I took the weekly sets of charge measurements during the Tuesday maintenance down time.  Attached are the long trends, which now include the last 2 weeks of data.

Images attached to this report
H1 IOO
keita.kawabe@LIGO.ORG - posted 13:30, Tuesday 08 September 2015 (21300)
PSL PZT peri mirror LPF swap (Filiberto, Sheila, Keita)

I found that the PSL PZT peri mirror LPF (D1500001) uses 220 Ohm resistors in line with the PZTs, each axis having 6uF capacitance. That gives a pole at 120 Hz, which is too high.

It turns out that LLO uses 220 kOhm, which is a bit too big.

After consulting with Peter Fritschel I decided to use 22kOhm, which gives us 1.2Hz pole.
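The three pole frequencies quoted above all follow from f = 1/(2*pi*R*C) with the 6uF per-axis PZT capacitance:

```python
import math

# Single-pole RC corner frequency for the PZT low-pass filter,
# using the 6 uF per-axis capacitance quoted above.

def pole_hz(r_ohm, c_farad=6e-6):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

print(round(pole_hz(220), 1))     # ~120.6 Hz: old LHO value, too high
print(round(pole_hz(220e3), 3))   # ~0.121 Hz: LLO value, a bit too low
print(round(pole_hz(22e3), 2))    # ~1.21 Hz: new value installed
```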

Filiberto fitted the spare (S1500002) with 22kOhm resistors. Sheila and I went into the PSL room and swapped the old one (S1500001) with the spare.

Before the swap we checked the beam position outside of the PSL room, which was maybe 1mm lower than the marks.

After the swap the beam position didn't change and MC locked without problem.

Since IMC WFS DOF5Y loop was NOT doing anything useful these days (as the noise spectrum shape changed), and since I need to measure the TF again if we want to enable this loop, I turned off the output of the loop.

This is the new configuration for now.

H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 12:48, Tuesday 08 September 2015 (21272)
PEM injections complete at LHO; report on magnetic coupling, site activity coupling, and preliminary report on other coupling

Summary: We completed all of the most important PEM injections and I think we are good to go. Ambient magnetic fields are unlikely to produce DARM noise at more than 1/10 of the DARM floor. There is high magnetic coupling at the EY satellite box rack. Coupling of self-inflicted magnetic fields in the CS ebay may keep us from reaching design sensitivity unless corrected. Most site activities will not show up in DARM, but moderate impacts in the control room/EE shop/Vacuum shop etc. can produce DARM signals. Any such events can, like signals from off-site, be vetoed by redundant vibration sensors. Although vibration coupling analysis is ongoing and not presented here, ambient vibrations produce noise at near DARM floor levels at HAM2, 6 and the PSL table and dominate DARM around 300 Hz. Radio signals at 9 and 45 MHz would have to be at least 100 times background on the radio channels before they start to show in DARM. Some sensor issues are also discussed.

Introduction

Between Tuesday and Saturday, we made roughly 100 injections to measure the environmental coupling levels to the LHO interferometer. The table shows the number of locations in each building.

Injection type                        | CS locations | EX locations | EY locations
magnetic                              | 5            | 3            | 3
acoustic                              | 5            | 3            | 3
shaking                               | 6            | 0            | 0
radio at modulation frequencies, etc. | 1            |              |
site activities                       | 10           |              | 1

In most cases we attempted to inject from a great enough distance that the field levels at coupling sites would be about the same as they were at the sensors nearest to the coupling sites.  For magnetic and acoustic injections we split the potential coupling sites into regions. In the LVEA, these regions were usually the vertex, the ITM optical lever regions, the PSL, the input arm, the output arm and the electronics bay. At the end stations the regions were the VEAs and the electronics bays. For shaking, we selected particular chambers or sites that had been identified previously as coupling sites or potential coupling sites.

We started with analysis of magnetic coupling because of its importance to the stochastic GW search, and results are presented below. In addition, it is important to set up site rules for the run so analysis of site activity injections is also included here. Other coupling analyses are not yet ready, but preliminary observations are discussed below. 

Magnetic field coupling

Figure 1 summarizes results with single magnetic coupling functions for each station in meters of differential test mass motion per Tesla of magnetic field. In general, our coupling functions are applicable to signals originating at a distance from the coupling site that is large compared to the distance between the sensor and the coupling site. If the signal originates much closer to the coupling site than the sensor is, these coupling functions are likely to underestimate the resulting test mass motion. The magnetic coupling functions of Figure 1 are based on the maximum regional coupling observed at each station.  The highest of the 3 stations should be used for estimating Schumann resonance inter-site correlation.

The lower panel in Figure 1 shows S6 (iLIGO) magnetic coupling functions. At 20 Hz the highest coupling is about an order of magnitude lower than it was in iLIGO. At 100 Hz aLIGO is about 2 orders of magnitude better. This is mainly due to the lack of magnets on the test mass. The coupling functions are not as linear in a log-log plot as they were for initial LIGO, most likely because test mass magnet coupling is no longer dominant and coupling to electronics and cables is a significant contribution.

The much higher coupling at EY than at EX is likely due to coupling to cables and connectors in the satellite amp rack. We used a small coil to track the coupling to the satellite amp rack in the VEA. It was clear that the excess coupling was at this rack, and not, for example, at the rack next to it, but we could not find a specific coupling site within this rack. This is what would be expected for cable coupling and so we should probably check the cable shield grounding for the coil drive signals in this rack. However, even at this elevated coupling level, the estimated DARM noise for ambient field levels is more than 10 times lower than the O1 DARM floor.

Figure 2 shows an example of spectra from an injection. This injection focuses on the vertex, and uses a coil set up about 20m down the Y-manifold. The top plot shows magnetometer spectra and the bottom plot DARM spectra. The injection is a 6Hz ramp, producing a 6Hz comb in the red injection spectra (blue are no-injection). The amplitudes of the peaks produced in DARM are divided by the amplitudes of the peaks in the magnetometer signal to give coupling functions in m/T. Estimates of the noise contribution to DARM are made by multiplying the coupling functions by the local ambient background from the injection-free spectrum. These estimates thus assume linearity. While vibration coupling has usually been found to be linear, we have found that, in certain bands, acoustic coupling can be non-linear, in which case the estimates are upper limits. To minimize the overestimate, we try to inject with as little amplitude over ambient as possible. We have not observed non-linear magnetic coupling.
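The projection arithmetic described in this paragraph reduces to two multiplications. A sketch with entirely made-up numbers, for one comb line:

```python
# Sketch of the projection described above: coupling function from one
# injected comb line (DARM peak / magnetometer peak, in m/T), then
# projected DARM noise from the ambient field. All numbers are made up.

def project_ambient(darm_peak, mag_peak, mag_ambient):
    """Coupling (m/T) from one comb line, times ambient field level."""
    coupling = darm_peak / mag_peak      # m / T
    return coupling * mag_ambient        # projected DARM contribution

# hypothetical single 6 Hz comb line:
darm_peak = 2e-18      # m/rtHz during injection
mag_peak = 1e-8        # T/rtHz during injection
mag_ambient = 5e-12    # T/rtHz without injection
print(project_ambient(darm_peak, mag_peak, mag_ambient))  # ~1e-21 m/rtHz
```

As noted above, this assumes linear coupling; where coupling is non-linear the result is an upper limit.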

The estimate of the ambient magnetic contribution to DARM for the injection of Figure 2 is shown as black squares.  Also included in the figure are estimates for the other corner station injection zones. The closest that the estimates come to the O1 DARM noise floor is about a factor of ten below for coupling in the ebay at 12 Hz. The ebay does not have the highest coupling function, but it does have the highest ambient fields. Thus the ebay would not be the most sensitive place for Schumann resonance coupling, but, according to this estimate, coupling of self-inflicted fields in the ebay will keep us from reaching design sensitivity unless we make improvements.

Other environmental coupling

As with magnetic coupling, coupling of ambient RF (including self-inflicted) does not seem to be a problem. However, vibrational coupling noise reaches within a factor of a few of the DARM noise floor at a couple of locations, and in bands around 300 Hz dominates DARM. It is likely that our sensitivity to site activities is due to coupling through the HAM2 and 6 ISIs. We should be able to test this possibility with the coupling functions we measured for GS13s in the HAMs. In addition to HAMs, PSL table/periscope vibrations produce the peaks near 300 Hz via beam jitter, and perhaps contribute more broadly to the DARM floor.

No noise from pre-identified scattering sites

Scattering does not seem to be a problem at current sensitivities. We installed shakers at sites that had been identified as potential scattering coupling sites through photographs taken from the points of view of the test masses and other optics (here).  We mounted shakers at the GV8 valve seat, the input mode cleaner beam tube, the signal recycling cavity beam tube and the BSC2 chamber walls (connected to the TCS mirror holders), all of which had been identified in photos as potential sites (here). Increasing motion at these sites by 2 orders of magnitude did not produce visible features in DARM.

Sensor problems

All identified coupling sites were well covered with sensors, but we found a few sensor problems. The most important of these are problems with the VEA magnetometers at the end stations. These are essential PEM sensors since there is no redundancy. These magnetometers did detect our injected magnetic fields, but the signal conditioning boxes appear to be producing approximately 1Hz glitches that effectively raise the sensor noise floor by one to two orders of magnitude. We think that it is the conditioning boxes because we swapped out the filter boxes and set up a separate magnetometer nearby that did not see these glitches. We were unable to replace the boxes because we found no working spares. If we cannot replace these by the run, I suggest swapping them with the e-bay magnetometer setups in the end stations, where there is redundancy.

A more minor problem is that the axes of the input arm magnetometer and the two VEA magnetometers appear to be incorrect. I swapped the axes on the input magnetometer, but did not get to the VEA magnetometers. These should be checked.

Incomplete injections

We ran out of time and were not able to make shaker injections at the end and mid-stations. Perhaps we can do these during the run using the commissioning budget.

Site activities

We should expect coupling of site activities since vibration coupling levels for ambient, non-transient vibration levels are near or at the DARM noise floor. Figure 3 has tables showing all of the site activity injections, with the ones that showed up in DARM marked in red, and a link to Nutsinee’s spectrogram analyses.

The things that showed up in DARM:

·       large super ball dropped in control room from about 5 feet

·       hammer dropped from 4 feet in vacuum lab next to EE shop

·       jumping in LVEA changing area

·       jumping in control room

·       setting car battery down in OSB shipping area

·       sudden braking of silver van (things fly off seat) in high bay area

 

Some of the things that didn’t show up in DARM

·       sudden braking (things fly off seat) in most other areas,  including just outside of EY

·       car horns

·       crowds walking in control room or halls (no jumping)

·       loud music in control room

·       rolling in chair across control room

·       bouncing on seating/exercise ball in control room

·       airlock and external door actuation

·       slamming doors in office area

·       outer roll up door actuation, OSB shipping

·       quick human movements in the OSB optics lab

In summary, sudden impacts in the control room, EE shop, vacuum lab and all other areas on the lab side of the air lock may cause events in DARM. This includes jumping, dropping heavy (>1 lb) things and quickly setting down heavy packages. I don’t think that we need to dramatically alter our usual activities because of this observation; we just need to be careful. I say this because we can veto all of these events with the highly redundant PEM and SEI vibration monitoring systems, just as we can veto signals from off-site. But impacts in the control room/lab/shop area can produce DARM events, so we should be mindful of minimizing them.

We see no evidence that a drive down the arms and a trip into the mid/end stations will be more likely to produce events in DARM than control room activities. Tour crowds in the control room are also unlikely to produce DARM events, unless they jump around or knock chairs over. Just as for our own activities, we would rely on the sensor systems to veto any events that these tours produced.

We should be able to identify the coupling sites for these site activities once we have coupling functions for vibrational signals (stay tuned). In the mean time, I would speculate that the coupling sites are mainly HAM2 and 6, up through the ISI suspension, and, possibly, increased vibration of the PSL table.

 

Robert Schofield, Anamaria Effler, Jordan Palamos, Nutsinee Kijbunchoo, Katie Banowetz with help from crowds of others.

Non-image files attached to this report
H1 ISC
stefan.ballmer@LIGO.ORG - posted 12:19, Tuesday 08 September 2015 (21297)
ODC housekeeping
Updated the following ODC settings:
ASC: DHARD input range check limit from 1500cts to 2500 to avoid occasional crossing.
ASC: INP1  input range check limit from 1.5cts to 5 to avoid occasional crossing.
OMC: include bit 17 & 18 in subsystem mask (this is effectively an OMC DCPD saturation monitor).
OMC: DARM CTRL input range check limit from 120000cts to 150000 to avoid occasional crossing.
LSC, OMC, ASC: exclude ADC saturation bit from ODCmaster ADC saturation monitor:
     for ASC the CDS saturation monitor is stuck on for unknown reasons
     for LSC and OMC, the parked ALS inputs saturate the ADC constantly. 
TCS: exclude DAC saturation bit from ODCmaster DAC saturation monitor
     for TCS, the DAC saturation monitor is reporting on constantly for two channels - not sure why.

All setting changes have been accepted in SDF.

Left to do: there are some SUS filter status checks that changed their nominal state since I last updated them in August. I can only update them once the IFO is locked.
H1 INJ (INJ)
eric.thrane@LIGO.ORG - posted 17:34, Thursday 03 September 2015 - last comment - 14:51, Tuesday 08 September 2015(21198)
LIMIT in HWINJ filter bank turned off
Eric, Cheryl

Following advice from D Shoemaker et al., we have disabled the LIMIT on the HWINJ filter bank. The change was made at approximately GPS = 1125361473. The LLO LIMIT was turned off earlier today:

https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=20249
Comments related to this report
jenne.driggers@LIGO.ORG - 12:18, Tuesday 08 September 2015 (21296)

[TJ, Jenne]

TJ pointed out to me that the Cal Hardware Injection ODC bit is red, and I tracked down where it's coming from.  The ODC is upset that the CAL-INJ_HARDWARE limiter was turned off.  I don't think that this has been preventing us from going to Observing mode, since the limiter was turned off on Thursday, but if it does (particularly if some updates have been made during maintenance day that tie this bit into our "OK" status) I will turn the limiter back on until the ODC can be updated to know that the limit switch should be off.

I have sent EricT an email asking for him to help figure out the ODC part of this.

jameson.rollins@LIGO.ORG - 12:54, Tuesday 08 September 2015 (21299)

Just to be clear, there should be NO ODC CHECKS INVOLVED in the IFO READY bit.  ODC is only used as transport of the bits, and none of the checks being done by ODC affect IFO READY.  The only thing that should be going in to IFO READY now is the GRD IFO top node.  In other words, this ODC issue should not have been preventing you from going to Observing mode.

betsy.weaver@LIGO.ORG - 14:51, Tuesday 08 September 2015 (21303)

Note, when the EXC bit in the CALCS CDS overview is in alarm, we tend to open the screen CAL_INJ_CONTROL to attempt to diagnose - This shows a big red light for some ODC Channel OK Latch, leading us to misdiagnose what is actually in alarm.  We have 2 operational problems:

 

1) Generically, if there is a red light on the CDS screen, where do you go?  Normally, we follow the logical medm and are able to get to the bottom of the red status via logical nested reds.  This is not the case for the CALCS screen - the CDS H1:FEC-117_STATE_WORD bit is RED on the H1CALCS line of the overview screen, yet this bit is nowhere on the CALCS screen.

So, where does the info come from for specifically the EXC bit of the H1CALCS state word, such that we can do something about it?

 

2) Someone should rework the CAL_INJ_CONTROL.adl so that it doesn't cause us to misdiagnose actual reds.  Currently, the HARDWARE INJECTIONS are out of configuration (outstanding issue to still be sorted) and yet, there is NO INDICATION of that on the CAL_INJ_CONTROL screen...  Also, the CW injection appears to be off, but there is no "red alarm" on the screen.

 

BTW, the HARDWARE INJ appear to be off.  They dropped around 7pm local time last night (20 hours ago).

Images attached to this comment