Displaying reports 57001-57020 of 85462.Go to page Start 2847 2848 2849 2850 2851 2852 2853 2854 2855 End
Reports until 15:50, Tuesday 23 August 2016
H1 ISC
marc.pirello@LIGO.ORG - posted 15:50, Tuesday 23 August 2016 (29257)
ISC Anti Aliasing Chassis

M. Pirello, R. McCarthy, E. Castrellon

Work Permit #6116

We removed the ISC Anti-Aliasing Chassis and investigated channel 14 which was suspected to have a failure, per ALOG 29189.  To test the channels, we attached the DB9 Breakout to the front DB9 connector and used the SR785 to test the transfer function through the filter.  We applied the test leads directly to the differential test points on the PCB.  Channel 14 looked much like all of the other channels. Unfortunately, the data was corrupted on the thumb drive, but I did get one scan of channel 9 and have attached this scan.  Every scan was nearly identical to this scan.
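For offline checks like this, channel scans can also be compared numerically rather than by eye. A minimal sketch, assuming the SR785 data were exported as simple (frequency, magnitude-in-dB) pairs — the file format and the numbers below are illustrative, not the actual measurement:

```python
# Compare two transfer-function scans point by point (toy data below;
# a real scan would be parsed from the SR785 ASCII export).
def max_deviation_db(ref_scan, test_scan):
    """Largest magnitude difference in dB between two scans taken on the
    same frequency points."""
    return max(abs(r_mag - t_mag)
               for (_, r_mag), (_, t_mag) in zip(ref_scan, test_scan))

ref_ch9 = [(10.0, 0.0), (100.0, -0.1), (1000.0, -3.0)]  # reference channel
ch14    = [(10.0, 0.1), (100.0, -0.2), (1000.0, -2.8)]  # channel under test
print(round(max_deviation_db(ref_ch9, ch14), 3))  # prints 0.2
```

A channel that "looked much like all of the other channels" would show only a fraction of a dB of deviation across the band.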

We returned the chassis to its original location and reattached all of the cables.

Images attached to this report
Non-image files attached to this report
H1 CAL (ISC, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 15:41, Tuesday 23 August 2016 (29256)
Charge Measurement Update; Charge Now Comparable to Largest Levels from O1 -- 40 [V] Effective Bias
J. Kissel

Here's your weekly broken record. Time to change the ETMX and ETMY bias sign!

Several quadrants of the H1SUSETMX and H1SUSETMY's TST L3 ESD systems show an effective bias voltage of around 40 [V], which is roughly 10% of the request / applied bias voltage of 430 [V]. This means the actuation strength is 10% different from its value in late June when we had last herded the effective bias voltage to 0 [V] and we'd hoped to begin the campaign of regular bias flipping (see LHO aLOG 27890). 
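Since the ESD force scales linearly with the total (applied plus effective) bias voltage, the fractional strength change quoted above follows directly from the ratio. A quick sanity check with the numbers in this entry:

```python
# ESD actuation strength scales with total bias voltage, so the fractional
# strength change is roughly the effective bias over the applied bias.
def fractional_strength_change(v_effective, v_applied):
    return v_effective / v_applied

print(f"{fractional_strength_change(40.0, 430.0):.1%}")  # prints 9.3%
```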

We're still waiting for a robust enough interferometer / acquisition sequence to declare we're happy (and probably also for me to be in the control room for a non-Tuesday late night when we're ever happy) to flip the bias sign and debug why we had troubles with ALS DIFF and switching to ETMY (see LHO aLOGs 28362 and 28152). 
Images attached to this report
H1 ISC
chris.whittle@LIGO.ORG - posted 14:36, Tuesday 23 August 2016 - last comment - 18:34, Tuesday 23 August 2016(29250)
Installed FSS IMC 200kHz LPF daughter board

Marc P, Chris W

As per Work Permit #6104, we added a 200 kHz lowpass filter (D1600314) to the fast path of the Common Mode Servo Board (D040180), located in the LVEA ISC-R1 rack. The serial number of the servo board is S1102626.

When reconnecting the power to the Common Mode Servo Board, the 17V supply was connected first (the 24V should be connected first, or the sequencer used), which resulted in blown diodes D4 and D5 at the power input (see D0901846). At the time, we saw the LEDs of other boards in the rack turn off. The diodes were replaced, and the current drawn by the Common Mode Servo Board was confirmed to be the same as that drawn by an equivalent spare board. The board was then reinstalled in the ISC-R1 rack, this time using the sequencer switch to power cycle the whole rack.

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 14:12, Tuesday 23 August 2016 (29253)

For reasons unknown, it seems the first channel on the modecleaner IQ demodulator broke today. The I output gives a fixed 300 mV offset. We switched to the second channel (the spare).

I also removed 2 TNC-to-BNC adapters, 2 BNC cables and a BNC barrel which were used to connect the I output of the demodulator with the IMC board input. This mashup was replaced with a proper TNC cable.

vernon.sandberg@LIGO.ORG - 17:30, Tuesday 23 August 2016 (29262)FRS, ISC

An FRS ticket has been opened for this event.  See https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=6084

daniel.sigg@LIGO.ORG - 18:34, Tuesday 23 August 2016 (29264)

The IMC VCO had a couple of blown OpAmps that were replaced. The IMC is locking again and the MC_F spectrum looks OK.

H1 CDS
david.barker@LIGO.ORG - posted 14:21, Tuesday 23 August 2016 (29254)
New 'Beckhoff' SDF systems started, h1susprocpi added to SDF overview MEDM

I have created two new "Beckhoff" SDF systems: h1hpipumpctrlsdf and h1pslopcsdf. These are built in the same way as Jonathan's slow controls SDF systems, following Jonathan's wiki-page instructions. They run on h1build and take the next available DCUIDs.

h1hpipumpctrlsdf (dcuid=1033). Monitors settings on the three "purple box" hepi pump controllers

h1pslopcsdf (dcuid=1034). Monitors the diode-room Beckhoff OPC system settings (Peter K provided the channel list)

The two new SDF systems were added to the SDF_OVERVIEW.adl MEDM screen. The missing h1susprocpi was also added, which necessitated making the screen taller to accommodate the additional SUS system. The new systems are marked on the attached screen capture image.

Images attached to this report
H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 13:55, Tuesday 23 August 2016 - last comment - 14:06, Tuesday 23 August 2016(29251)
DAQ restart, latest corner station Beckhoff slow controls code

After Daniel's h1ecac1plc1 code changes, I have installed the new INI file and restarted the DAQ.

Comments related to this report
daniel.sigg@LIGO.ORG - 14:06, Tuesday 23 August 2016 (29252)

This TwinCAT update included:

  1. A better lock loss monitor by triggering on the transmitted arm power,
  2. Adding save/restore functionality to the PEM library,
  3. Fixing the save/restore for the RF channels in the corner PLC2,
  4. Marking 2 output channels as readonly in the laser power controller.
LHO VE
chandra.romel@LIGO.ORG - posted 11:28, Tuesday 23 August 2016 (29248)
CP4 ramp
Ramped CP4 LLCV in 5% increments, every 5 minutes, until it reached 100% full (39% to 63% open). 

Note:  generated alarm in CDS due to overfilling

Attached is a snap of the exhaust flow and pressure as LLCV ramped.

I lowered the fill set point from 92% to 88% and will redo the experiment once the CP ramps down to 88%.
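
The incremental ramp described above is easy to express as a schedule of setpoints; a sketch (the step logic is inferred from this entry, and the actual EPICS channel writes are omitted):

```python
# Build the LLCV setpoint schedule: fixed-size steps from the starting
# valve position up toward the target, clamped at the target.
def ramp_steps(start_pct, stop_pct, step_pct=5):
    steps = [start_pct]
    while steps[-1] + step_pct < stop_pct:
        steps.append(steps[-1] + step_pct)
    steps.append(stop_pct)
    return steps

print(ramp_steps(39, 63))  # prints [39, 44, 49, 54, 59, 63]
```

Each setpoint would be applied five minutes apart; with the schedule in hand, the waiting and writing are a simple loop.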
Images attached to this report
H1 OpsInfo (SYS)
jim.warner@LIGO.ORG - posted 11:17, Tuesday 23 August 2016 - last comment - 09:11, Wednesday 24 August 2016(29247)
Python script to do weekly oplev trends

I've made a script to somewhat automate the weekly oplev trends FAMIS task. It makes 3 plots like the attached image of the oplev pit, yaw and sum channels for the test masses, BS, PR3 and SR3. It still requires a little fiddling with the plots: you have to zoom in manually on any that have 1e9-like spikes, but this should still be easier than running dataviewer templates. It uses h1nds for data and a pre-release version of the python nds2 client that has gap handling, so updates in the future could break this. I'll try to maintain this script, so any changes or improvements should come to me. The script lives in the userapps/sys/h1/scripts folder.

The script is run by going to the sys/h1/scripts folder:

jim.warner@opsws0:~ 0$ cd /opt/rtcds/userapps/release/sys/h1/scripts

And running the oplev_trends.py script with python:

jim.warner@opsws0:scripts 0$ python oplev_trends.py

You will then need to do the usual zooming in on useful data, saving screen shots and posting to the alog. I'll look into automating more of this, but it works well enough for now. It would also be very easy to add this to a "Weeklies" tab on the sitemap, which I believe LLO has done with some similar tasks.
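For reference, the bulk of such a script is just assembling the channel list for the nds2 fetch; a hedged sketch (the channel-name pattern below, including the stage suffix, is an assumption for illustration and is not copied from oplev_trends.py):

```python
# Build minute-trend channel names for the optics trended by the script.
# The "_L3_OPLEV_" stage/name pattern is assumed for illustration; the real
# suspensions differ by stage (e.g. the BS oplev hangs off a different stage).
OPTICS = ["ETMX", "ETMY", "ITMX", "ITMY", "BS", "PR3", "SR3"]
DOFS = ["PIT", "YAW", "SUM"]

def oplev_channels(optics=OPTICS, dofs=DOFS):
    return [f"H1:SUS-{opt}_L3_OPLEV_{dof}_OUT16.mean,m-trend"
            for opt in optics for dof in dofs]

print(len(oplev_channels()))  # prints 21 (7 optics x 3 dofs)
```

The resulting list would then be handed to the nds2 client's fetch call against h1nds, which is where the gap-handling pre-release matters.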

Images attached to this report
Non-image files attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 15:01, Tuesday 23 August 2016 (29255)

I've now added the HEPI monthly pressure trends to the same folder. Admittedly, there's little difference here between running my python script and running the dataviewer template, as the HEPI trends all fit on one dataviewer window easily. But this was pretty easy to throw together, and may allow us to automate these tasks more in the future, say if we could couple this with something like TJ's shift summary alog script.

Running it is similar to the oplev script:

jim.warner@opsws0:~ 0$ cd /opt/rtcds/userapps/release/sys/h1/scripts

jim.warner@opsws0:scripts 0$ python hepi_trends.py

Images attached to this comment
Non-image files attached to this comment
jason.oberling@LIGO.ORG - 09:11, Wednesday 24 August 2016 (29275)

The oplev trends look good.  I'll update the FAMIS procedure to run this script instead of using dataviewer.

Can you add the HAM2 oplev to this as well?  While its usefulness is debated, it is an active optical lever so we should be trending it as well.

Thanks Jim!

LHO VE
kyle.ryan@LIGO.ORG - posted 10:59, Tuesday 23 August 2016 (29246)
~1040 - 1045 hrs. local -> climbed on HAM6 to disconnect turbo controller cable
All vacuum pumping related connections are now removed from HAM5/HAM6
H1 CDS
david.barker@LIGO.ORG - posted 10:39, Tuesday 23 August 2016 (29245)
new sus etm code, bad DAQ restart

Darkhan, Jeff, Jim, Dave:

h1susetm[x,y] models were restarted following changes to QUAD_MASTER.mdl.

Change is to replace DAQ DQ channel H1:SUS-ETMY_LKIN_P_LO_DQ with H1:SUS-ETMY_L3_CAL2_LINE_OUT_DQ and add an oscillator (H1:SUS-ETMY_L3_CAL2_LINE).

DAQ did not restart nicely. My change to add h1sysecatxxplcysdf INI and PAR files to daq master caused h1dc0 to stop on error "channel '' has bad DCU id 1024". I have removed these from the master and the DAQ is running.

LHO VE
chandra.romel@LIGO.ORG - posted 09:53, Tuesday 23 August 2016 (29243)
PT-210
PT-210b CC was not reading pressure, so I remotely rebooted.
H1 ISC (ISC)
jenne.driggers@LIGO.ORG - posted 02:02, Tuesday 23 August 2016 - last comment - 12:53, Sunday 28 August 2016(29239)
Dither loop for offset of MICH ASC

[Sheila, Jenne]

This is starting to feel a bit like we're using popsicle sticks and duct tape to make a splint and hold things together, but we're still having trouble with the vertex ASC at high powers, so we have tried implementing a dither/ezcaservo combo loop for MICH pitch.  As of now, Sheila has a script that will turn on a dither line for BS pitch and set up the demodulators in the ADS.  The script then sets up a cdsutils ezca servo to move the offset of ASC-MICH_P to zero the output of the demodulator. 

Earlier, we tried just a regular dither servo, moving the BS to minimize the demodulated version of AS90, but that wasn't working.  Note that earlier today we removed the AS90 element from the SRM dither loop, so the SRM dither now only looks at POP90. 

Since adding MICH offsets worked okay over the weekend to fix the sideband buildup after the POP offsets are adjusted, we thought we'd give the demodulator-adjusted offsets a try. This is well after fixing the TCS situation described in alog 29237.

On at least one occasion, this new system didn't keep up with the IFO heating when starting the offset from zero, so now it starts the pit offset from the value that Sheila et al. found the other day.

We've tried it now with both BS pitch and yaw under this new additive-offset-like configuration, but the sideband buildups still seem to be decaying.  Perhaps SRM dither should go back to the AS/POP ratio, and BS should be servoed to maximize POP18?
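The dither/demodulate/servo idea can be illustrated with a toy loop: dither an offset, demodulate a quadratic readback at the dither frequency (a stand-in for AS90), and step the offset toward the demodulated zero. Everything below is a numerical illustration, not the actual cdsutils/ADS implementation:

```python
import math

def dither_servo_step(offset, optimum, f=4.0, amp=1e-3, gain=0.1, fs=256, T=2.0):
    """One servo cycle: dither about `offset`, demodulate a quadratic
    readback at the dither frequency f, and step toward the null."""
    n = int(fs * T)
    # For a quadratic readback, the demodulated first harmonic is
    # proportional to amp * (offset - optimum).
    demod = sum((offset + amp * math.sin(2 * math.pi * f * k / fs) - optimum) ** 2
                * math.sin(2 * math.pi * f * k / fs) for k in range(n)) / n
    return offset - gain * demod / amp  # integrator-like step

x = 0.5  # start away from the (toy) optimum
for _ in range(200):
    x = dither_servo_step(x, optimum=0.2)
print(round(x, 3))  # prints 0.2 -- the loop converges to the optimum
```

In the real system the "offset" is the ASC-MICH_P offset, the readback is AS90 demodulated in the ADS, and the stepping is done by the cdsutils ezca servo.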


Side note:  We have got to find time to re-look at the OMC ASC.  Engaging the dither loops once again rails the OMC suspension.  This was happening last summer, and then the problem somewhat mysteriously went away, so I don't have any magical solution right now.  But this certainly can't be good for our noise.

Comments related to this report
evan.hall@LIGO.ORG - 08:55, Sunday 28 August 2016 (29354)

Reminder of Dan's parallel-universe HAM6 dc centering scheme: no OMC sus actuation needed, but requires giving up centering on one of the two AS WFS.

koji.arai@LIGO.ORG - 12:53, Sunday 28 August 2016 (29355)

In case the DC range of the OMC SUS is the issue:

The WFS spot centering pushes OM1 and OM2, and causes OM3 and the OMC SUS to struggle to align the beam to the OMC.

To avoid this, align the OMC with OM1 and OM2, then use picomotors to align the WFS spots.
This offloading is not trivial to do while the WFS spot servo and the OMC ASC are running.
So the single-bounce beam will help to work on this offloading.

H1 CDS (GRD, Lockloss)
sheila.dwyer@LIGO.ORG - posted 01:31, Tuesday 23 August 2016 - last comment - 10:28, Tuesday 23 August 2016(29238)
strange lockloss

Jenne, Sheila

We had an unusual lockloss a few minutes ago, related to 28255

It happened around 8:11 UTC on August 23rd; the DRMI guardian seemed to think that the lock was lost although it was not.

Comments related to this report
thomas.shaffer@LIGO.ORG - 09:51, Tuesday 23 August 2016 (29242)

There are two locklosses around that time, so Ill play detective for both.

1.) 8:09:33 UTC (1155974990)

Looking at the Guardian log:

2016-08-23_08:09:30.786330Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P_SW1 => 16
2016-08-23_08:09:31.037960Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P => OFF: FM1
2016-08-23_08:09:31.042700Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y_SW1 => 16
2016-08-23_08:09:31.290770Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y => OFF: FM1
2016-08-23_08:09:33.911750Z ISC_DRMI new request: DRMI_WFS_CENTERING
2016-08-23_08:09:33.911930Z ISC_DRMI calculating path: ENGAGE_DRMI_ASC->DRMI_WFS_CENTERING
2016-08-23_08:09:33.912540Z ISC_DRMI new target: DOWN
2016-08-23_08:09:33.912620Z ISC_DRMI GOTO REDIRECT
2016-08-23_08:09:33.912900Z ISC_DRMI REDIRECT requested, timeout in 1.000 seconds

Seems as though there was a request for a state that is behind its current position, so it had to go through DOWN to get there. This request came from ISC_LOCK:

2016-08-23_08:09:33.546800Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:09:33.546920Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:09:33.547020Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:09:33.547110Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:09:33.547210Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI lost lock
2016-08-23_08:09:33.602500Z ISC_LOCK state returned jump target: LOCKLOSS_DRMI
2016-08-23_08:09:33.602710Z ISC_LOCK [LOCK_DRMI_3F.exit]
2016-08-23_08:09:33.666340Z ISC_LOCK JUMP: LOCK_DRMI_3F->LOCKLOSS_DRMI
2016-08-23_08:09:33.667220Z ISC_LOCK calculating path: LOCKLOSS_DRMI->LOCK_DRMI_3F
2016-08-23_08:09:33.667760Z ISC_LOCK new target: LOCK_DRMI_1F
2016-08-23_08:09:33.668520Z ISC_LOCK executing state: LOCKLOSS_DRMI (3)
2016-08-23_08:09:33.668750Z ISC_LOCK [LOCKLOSS_DRMI.enter]
2016-08-23_08:09:33.854350Z ISC_LOCK EDGE: LOCKLOSS_DRMI->LOCK_DRMI_1F
2016-08-23_08:09:33.855110Z ISC_LOCK calculating path: LOCK_DRMI_1F->LOCK_DRMI_3F
2016-08-23_08:09:33.855670Z ISC_LOCK new target: ENGAGE_DRMI_ASC
2016-08-23_08:09:33.856260Z ISC_LOCK executing state: LOCK_DRMI_1F (101)
2016-08-23_08:09:33.856410Z ISC_LOCK [LOCK_DRMI_1F.enter]
2016-08-23_08:09:33.868100Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 0: node TCS_ITMY_CO2_PWR: NOTIFICATION
2016-08-23_08:09:33.868130Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 1: node SEI_BS: NOTIFICATION
2016-08-23_08:09:33.893890Z ISC_LOCK [LOCK_DRMI_1F.main] ezca: H1:GRD-ISC_DRMI_REQUEST => DRMI_WFS_CENTERING

and

2.) 08:13:12 UTC (1155975209)

Doesn't seem to be any funny business here. The DRMI_locked() function looks at the channels in the log below and will then return to DRMI_1F, and at this point it seems like the MC lost lock (see plots).

2016-08-23_08:13:17.613090Z ISC_DRMI [DRMI_WFS_CENTERING.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:13:17.613160Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:13:17.613230Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:13:17.613300Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:13:17.613500Z ISC_DRMI [DRMI_WFS_CENTERING.run] la la
2016-08-23_08:13:17.670880Z ISC_DRMI state returned jump target: LOCK_DRMI_1F
2016-08-23_08:13:17.671070Z ISC_DRMI [DRMI_WFS_CENTERING.exit]
2016-08-23_08:13:17.671520Z ISC_DRMI STALLED
2016-08-23_08:13:17.734330Z ISC_DRMI JUMP: DRMI_WFS_CENTERING->LOCK_DRMI_1F
2016-08-23_08:13:17.741520Z ISC_DRMI calculating path: LOCK_DRMI_1F->DRMI_WFS_CENTERING
2016-08-23_08:13:17.742080Z ISC_DRMI new target: DRMI_LOCK_WAIT
2016-08-23_08:13:17.742750Z ISC_DRMI executing state: LOCK_DRMI_1F (30)
2016-08-23_08:13:17.742920Z ISC_DRMI [LOCK_DRMI_1F.enter]
2016-08-23_08:13:17.744030Z ISC_DRMI [LOCK_DRMI_1F.main] MC not Locked
2016-08-23_08:13:17.795150Z ISC_DRMI state returned jump target: DOWN
2016-08-23_08:13:17.795290Z ISC_DRMI [LOCK_DRMI_1F.exit]

Here are the functions that are used as decorators in DRMI_WFS_CENTERING

def MC_locked():
    trans_pd_lock_threshold = 50
    return ezca['IMC-MC2_TRANS_SUM_OUTPUT']/ezca['IMC-PWR_IN_OUTPUT'] >= trans_pd_lock_threshold

def DRMI_locked():
    MichMon = ezca['LSC-MICH_TRIG_MON']
    PrclMon = ezca['LSC-PRCL_TRIG_MON']
    SrclMon = ezca['LSC-SRCL_TRIG_MON']
    if (MichMon > 0.5) and (PrclMon > 0.5) and (SrclMon > 0.5):
        # We're still locked and triggered, so return True
        return True
    else:
        # Eeep!  Not locked.  Log some stuff
        log('DRMI TRIGGERED NOT LOCKED:')
        log('LSC-MICH_TRIG_MON = %s' % MichMon)
        log('LSC-PRCL_TRIG_MON = %s' % PrclMon)
        log('LSC-SRCL_TRIG_MON = %s' % SrclMon)
        return False

Images attached to this comment
thomas.shaffer@LIGO.ORG - 10:28, Tuesday 23 August 2016 (29244)

Something I also should have mentioned: ISC_LOCK was brought into Manual and LOCK_DRMI_3F was requested right before the logs seen above. It seems it wasn't quite ready to be there yet, so it jumped back down to LOCK_DRMI_1F and reran the state where it requests DRMI_WFS_CENTERING from the ISC_DRMI guardian.

LHO General
corey.gray@LIGO.ORG - posted 00:05, Tuesday 23 August 2016 (29234)
EVE Operator Summary

All Times Pacific Standard Time (PST):

H1 CAL (CAL)
darkhan.tuyenbayev@LIGO.ORG - posted 15:22, Monday 22 August 2016 - last comment - 11:32, Tuesday 23 August 2016(29231)
Added a synchronized oscillator to L3 stage in QUAD_MASTER (SUS-ETMY recompilation pending)

Overview

A synchronized oscillator was added to the test mass stage (L3) of the QUAD_MASTER model. After re-compiling the SUS-ETMY model there will be two synchronized oscillators in the L3 stage that will be used for driving calibration lines: *_L3_CAL_LINE and *_L3_CAL2_LINE.

Removed channel LKIN_P_LO from the list of DQ channels and added L3_CAL2_LINE_OUT into the list.

The h1susetmy model must be recompiled in order for the changes to take effect.

Details

For one of the two calibration lines that we needed to run during ER9 we used a pitch dither oscillator, SUS-ETMY_LKIN_P (see LHO alog 28164). After analyzing the ER9 data we found two problems with this line (see LHO alog 29108).

The second synchronized oscillator was added at L3_CAL2_LINE_OUT and the list of DQ channels was modified accordingly. The L3_CAL2_LINE_OUT was added with sampling rate 512 Hz. LKIN_P_LO was removed from the list of DQ channels.

The changes were committed to the USERAPPS repository, rev. 14081.

Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 11:32, Tuesday 23 August 2016 (29249)CAL, DAQ

Dave, TJ, Jeff K, Darkhan,

H1:SUS-ETM(X|Y) were recompiled and restarted, DAQ was restarted (see LHO alog 29245, WP 6117).

The QUAD MEDM screen was updated to show the new oscillator settings.

The MEDM screen updates were committed to userapps repository (rev. 14088):

common/medm/quad/SUS_CUST_QUAD_ISTAGE_CAL2_LINE.adl
common/medm/quad/SUS_CUST_QUAD_OVERVIEW.adl

Images attached to this comment
H1 SEI
hugh.radkins@LIGO.ORG - posted 12:19, Friday 12 August 2016 - last comment - 08:12, Tuesday 23 August 2016(29056)
SEI response to 7.2 EQ in SW Pacific (New Caledonia)

HEPI BS Tripped few minutes before ITMX ISI.  This is the only HEPI that tripped in the neighborhood of the large quake.

ITMY ISI tripped -- the timing (H1:ISI-ITMY_ST2_WD_MON_GPS_TIME) indicates Stage2 tripped on Actuators 1 second before Stage1 tripped on the T240s, but looking at the plots, the Actuators registered only a few counts, nothing near the saturation/trip level, while the T240s hit their rail almost instantly.  It seems the Stage2 Last Trip (H1:ISI-ITMY_ST2_WD_MON_FIRSTTRIG_LATCH) should be indicating ST1WD rather than Actuator. On ETMY, the trip time is the same for the two stages and Stage2 notes an actuator trip, but again there are only a few counts on the MASTER DRIVE; this too seems like it should have been a ST1WD trip indication on Stage2 -- I'll look into the logic.

On the BS ISI, the Stage1 and Stage2 trip times are the same, and the Last Trip for Stage2 indicates ST1WD.  The Stage2 sensors are very rung up after the trip time but not before, unlike the T240s, which are ramping to the rail a few seconds before the trip.  ETMX shows this same logical pattern in the trip sequence indicators.

On the ITMX ISI, Stage1 Tripped 20 minutes before the last Stage2 trip. This indicates the Stage1 did not trip at the last Stage2 trip.

No HAM ISI Tripped on this EQ.

Bottom line: the logical output of the WDs are not consistent from this common model code--needs investigating. Maybe I should open an FRS...

Attachment 1) Trip plots showing Stage2 trip time 1 second before the stage1 trip where the stage2 actuators do not go anywhere near saturation levels.

Attachment 2) Dataviewer plot showing the EQ on the CS ground STS and the platform trip times indicated.

Images attached to this report
Comments related to this report
hugh.radkins@LIGO.ORG - 15:20, Monday 22 August 2016 (29232)

It seems this is not a problem with the watchdog but with the plotting script: for ST2 Actuators, it misses a multiplier on the Y axis.  It works correctly for ST1 Actuators and all the sensors, and the same ST2 ACT problem appears on other chambers as well.  FRS 6072.

hugh.radkins@LIGO.ORG - 08:12, Tuesday 23 August 2016 (29241)

Actually, the plotting script is working fine.  When the spike is so large that the script switches to exponential notation, the exponent is hidden by the title until you enlarge the plot to a very large size.
