H1 CDS
david.barker@LIGO.ORG - posted 17:41, Wednesday 10 June 2015 (19056)
GRB EPICS alarm

Dave Barker. WP5269

I have put together a quick EPICS alarm handler for Gamma Ray Bursts and Supernovae (GRB/SN) which runs on the operator alarm machine. It is called grbsn.alhConfig (in svn under cds/h1/alarmfiles).

The GRB alert system (which runs on h1fescript0) polls the GraceDB database every 10 seconds looking for External events (Gamma Ray Bursts and Supernovae). If an event is detected, its information is written to EPICS records which are hosted on the FE model h1calcs.

To put together a quick alarm system for ER7, I am using the alarm fields of the H1:CAL-INJ_EXTTRIG_ALERT_TIME record, which stores the GPS time of the trigger. By setting the HIGH alarm field to the current GPS time plus one second, this PV goes into the MAJOR alarm state when the next event is recorded.
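For illustration only (this is not the actual reset script), the priming step could look roughly like the following with pyepics:

    from epics import caget, caput

    PV = 'H1:CAL-INJ_EXTTRIG_ALERT_TIME'

    # GPS time of the most recent GRB/SN event currently held in the record
    latest_event_gps = caget(PV)

    # Prime the HIGH alarm field just above the current event time, so the
    # record only re-enters the MAJOR alarm state when a newer event is written.
    caput(PV + '.HIGH', latest_event_gps + 1)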

The grbsn.alhConfig file provides the operator with two buttons:

[G] button opens the guidance text. It refers to the alog entry  https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19037

[P] button runs a script which resets the alarm level to the current event, turning off the current alarm

When a GRB/SN alarm is raised the operator should:

1. acknowledge the alarm to stop it beeping/flashing

2. press the [P] button to turn off the current alarm and prime the system for the next event

3. read the guidance to see if any further action is needed

Information on the current event can be obtained by running the script grb_latest_event_info.py. This gives you the event time as local time, so you can easily determine when the one hour stand-down time will expire.
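As a rough illustration of the stand-down arithmetic (not the actual script, which reports local time; this sketch stays in UTC and assumes astropy is available):

    from datetime import timedelta
    from astropy.time import Time

    event_gps = 1117900000        # example GPS time of a trigger (hypothetical value)

    event_utc = Time(event_gps, format='gps').utc.datetime
    standdown_end = event_utc + timedelta(hours=1)
    print('Event time (UTC):     ', event_utc)
    print('Stand-down ends (UTC):', standdown_end)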

Images attached to this report
H1 IOO
daniel.sigg@LIGO.ORG - posted 17:27, Wednesday 10 June 2015 (19055)
Strange IO alignment shift

(baffled control room crew)

After the most recent lock loss the IMC wasn't relocking and looked completely misaligned. The screenshot shows about 20 minutes of the previous lock, 30 minutes of confusion and 10 minutes on a new lock. The only DOF which shows a significant change is the input pointing. Strange.

Images attached to this report
H1 General
jeffrey.bartlett@LIGO.ORG - posted 16:34, Wednesday 10 June 2015 (19054)
Ops Day Shift Summary
Observation Bit: Commissioning 

08:00 – Working on relocking
08:05 – Add 150ml water to PSL crystal chiller
08:11 – IFO Locked at LSC_FF
08:13 – Intent bit set to Undisturbed 
08:30 – Beam tube cleaning team working on X-Arm
09:11 – Set intent bit to Commissioning – Keita running charging measurements at both end stations
09:33 – LLO ops reported a power hit this AM. Working on recovery
09:49 – Lockloss – Commissioning activities 
11:52 – Jim Blunt on site to meet with Richard
11:55 Dan – Making delivery to VPW with PU truck
11:59 – IFO Locked at LSC_FF
12:00 Keita – Running charging measurements – Intent bit Commissioning 
12:21 – Beam tube cleaning crew breaking for lunch
13:08 Daniel – Moving charge test rig from End-Y to End-X
13:45 – Beam tube cleaning crew working on X-Arm
14:52 Daniel – Back from End stations
15:00 Keita – Finished with charge measurements
15:03 - IFO locked at LSC_FF
15:05 - Set Intent bit to Undisturbed 
15:15 Bubba – Going to Mid-X to get some tubing
15:17 - Lockloss – Possibly due to beam tube cleaning
15:42 - Beam tube cleaning crew finished for the day
16:29 - Turn over to Jim W. – Still working on relocking
H1 SUS (DetChar, ISC, SUS, SYS)
betsy.weaver@LIGO.ORG - posted 11:20, Wednesday 10 June 2015 - last comment - 16:10, Wednesday 10 June 2015(19049)
ESD/charge measurements have ~commenced - status report on IFO

We attempted to set up the IFO for some end station ESD measurements around 10am.  During some of the switch settings, the IFO lost lock.  Relocking is ongoing with help from the commissioners.  A suite of ESD/charge measurements has become increasingly important, hence time has been allotted for them today.

Comments related to this report
kiwamu.izumi@LIGO.ORG - 12:45, Wednesday 10 June 2015 (19050)

I was asked by Keita to update the ETMX actuator calibration path for their charge measurement in full lock. I updated the foton filter of CAL-CS and a few settings associated with the simulated ETMX actuators in CAL-CS. The SDF is updated accordingly. We are now temporarily using the simulated ETMX in CAL-CS by switching the output matrix as of around 2015-06-10 19:00 UTC. We did not touch the ETMY actuator path at all.

 

* * * * * *

The changes I made are:

  • H1:CAL-CS_DARM_ANALOG_ETMX_L3_GAIN
    • 1.42 -> 1.0
  • H1:CAL-CS_DARM_ANALOG_ETMX_L1_GAIN
    • -1.0 -> 1.0 (this must be a typo)
  • H1:CAL-CS_DARM_FE_ETMX_L1_LOCK_OUTSW_L
    • OFF -> ON
  • H1:CAL-CS_DARM_FE_ETMX_L3_LOCK_OUTSW_L
    • OFF -> ON
  • Changed the DC gain of FM1 in H1:CAL-CS_DARM_ANALOG_ETMX_L3
    • DC gain is set to 3.3491e-13 [m/cnts] (alog 18756)

 

betsy.weaver@LIGO.ORG - 13:18, Wednesday 10 June 2015 (19051)

With Keita's guidance, we calibrated the TR QPDs against the ETM OPLEVs to get a calibration constant, which will be needed to interpret the charge data being taken.  We found the calibration by driving the L2 stage at 6 Hz and watching the QPDs and OPLEVs.  Note that the Y-arm QPD TR B was not as well centered as the X-arm TR A and B and the Y-arm TR A.

Here are the calibration numbers:

 

                   OPLEV        QPD A        QPD B        OL/QPD Calib Const
                   [uRad/ct]    [ct/ct]      [ct/ct]      [uRad/ct]

ETMX PIT   AMP     3.26654e-8   2.30053e-8   2.31317e-8   1.419907587
           PHASE   -12          169          169

ETMX YAW   AMP     4.1988e-8    3.08716e-8   2.88229e-8   1.360084997
           PHASE   -12          169          169

ETMY PIT   AMP     3.40891e-8   2.30263e-8   5.51346e-9   1.480441929
           PHASE   -31          167          165

ETMY YAW   AMP     4.07971e-8   3.26463e-8   1.5472e-8    1.249669947
           PHASE   -31          169          169
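For what it's worth, the OL/QPD calibration constants above are consistent with simply taking the ratio of the OPLEV amplitude to the QPD A amplitude; a minimal check of the arithmetic (not the analysis code that was actually used):

    oplev_amp = {'ETMX PIT': 3.26654e-8, 'ETMX YAW': 4.1988e-8,
                 'ETMY PIT': 3.40891e-8, 'ETMY YAW': 4.07971e-8}   # uRad/ct
    qpd_a_amp = {'ETMX PIT': 2.30053e-8, 'ETMX YAW': 3.08716e-8,
                 'ETMY PIT': 2.30263e-8, 'ETMY YAW': 3.26463e-8}   # ct/ct

    for dof, ol in oplev_amp.items():
        # ratio reproduces the "Calib Const" column, in uRad/ct
        print(dof, ol / qpd_a_amp[dof])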
             
betsy.weaver@LIGO.ORG - 16:10, Wednesday 10 June 2015 (19053)

Keita has completed the ESD/RH charge measurements for ETMX and ETMY; however, he is still chewing on the numbers.  He expects to post something later today.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 10:44, Wednesday 10 June 2015 (19045)
Owl Shift Summary

00:00 IFO not locked. Jim said it lost lock twice at BOUNCE_VIOLIN_MODE_DAMPING, and it lost lock again at ANALOG_CARM when I arrived.

          Knowing that it fails to grab lock repeatedly, I approached all the Guardian states with caution (waiting until the spectrum settled before requesting the next state).

01:13 Lock lost at REFL_TRANS. I brought everything down and readjusted SR2, PR2, PR3, BS, and IM4. It ended up taking me a while because I messed up the alignment.

         Couldn't get ASC-C_COMM_A_DEMOD_RFMON to reach 5 counts. I touched PR3 as much as I could. Are there any other optics I could touch to improve this?

2:18 4.7 Earthquake in Japan - no WD tripped.

4:00 Lock loss at LOCK_DRMI_1F

4:10 Trouble locking ALS-X

5:00 Kristina to the wood shop.

        The ifo still tries to lock DRMI.

5:10 Kristina back.

6:16 Still not grabbing DRMI lock. Evan suggested bad BS alignment, as AS air momentarily locked on a 10 mode. I couldn't get it to lock just by adjusting the BS, so I redid the initial alignment. The earthquake earlier might have thrown the optics off.

6:50 Lock loss at REFL_TRANS.

7:19 Lock loss at REFL_TRANS AGAIN. I waited until AS90 and POP18 became stable before requesting the state.

7:31 Stopping at DRMI_LOCK. Waiting for the 6.0 earthquake in Chile to pass (it didn't affect us - no WD tripped at least).

8:06 Hugh came and found that ITMY OpLev damping had been turned off. This could have been the cause of the repeated lock losses throughout the night. Next time I'll be sure to check SDF before even touching the interferometer....

       Evan touched PRM to bring up POP18 (and AS90) before requesting RESONANCE. Maybe helped with the lock loss at REFL_TRANS?

       The ifo is now locking. Handing the ifo off to Jeff B.

H1 PSL
jeffrey.bartlett@LIGO.ORG - posted 09:32, Wednesday 10 June 2015 (19048)
Add Water to PSL Chiller
Added 150 ml of water to the PSL crystal chiller.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 09:28, Wednesday 10 June 2015 (19047)
08:30 Meeting Minutes
Operators and commissioners were reminded to check the SDF file for changes. There were problems with locking last night due in part to the ITMY OpLev. This error was apparent when looking at the SDF overview.

All were reminded to aLOG, aLOG, and aLOG. The aLOG is a valuable repository for documenting the many things that happen during a day. When in doubt aLOG it.
 
The kitchen sink has been fixed. The Facilities folks asked that food scraps not be put into the sink, as they cause plugged pipes. Food scraps should be put into the compost bins or at the least put in the garbage, but not down the sink drain.

Repairs to the RO system are complete. There is no potable water in the staging building due to a water leak. 

Beam tube cleaning continues.

No safety meeting for today.

Charge measurements will be taken today at both end stations.  
H1 AOS
edmond.merilh@LIGO.ORG - posted 09:19, Wednesday 10 June 2015 (19046)
PSL Weekly Report - Past 10 day Trends
Images attached to this report
H1 SYS
betsy.weaver@LIGO.ORG - posted 08:46, Wednesday 10 June 2015 (19044)
SDF upon arrival

Upon arrival, I see that the IFO is back up, but there are 2 red SDF channels:

One is a somewhat benign LIMIT setting on the BS OPLEV YAW bank.  The other is 2 new filters that are enabled on ITMY_M0_DARM_DAMP_R.  I *thought* I saw these being switched a bit by commissioners yesterday during the locking attempts that were taking a while.  There are no alogs about these being turned on.  This needs to be addressed by ops/commissioners today.

 

OPERATORS:  The SDF and GUARDIAN overview screens are very useful for determining what is out of configuration when attempting to lock.  If the GRD overview screen is lit up like a "holiday" tree on the SUS and SEI banks, things are wrong and likely inhibiting lock.  Likewise on the SDF overview screen: if there are red indications of channel diffs, it might help locking to work through the diffs.  Yesterday we had numerous indications on both of these screens that things were not in good states to lock.

Images attached to this report
H1 AOS
david.barker@LIGO.ORG - posted 08:08, Wednesday 10 June 2015 (19043)
CDS model and DAQ restart report, Tuesday 9th June 2015

model restarts logged for Tue 09/Jun/2015
2015_06_09 10:26 h1iopsush2a
2015_06_09 10:26 h1susmc1
2015_06_09 10:26 h1susmc3
2015_06_09 10:28 h1suspr3
2015_06_09 10:28 h1susprm
2015_06_09 10:32 h1iopsush2b
2015_06_09 10:32 h1susim
2015_06_09 10:37 h1iopsush34
2015_06_09 10:37 h1susmc2
2015_06_09 10:37 h1suspr2
2015_06_09 10:37 h1sussr2
2015_06_09 10:48 h1broadcast0
2015_06_09 11:47 h1iopsusb123
2015_06_09 11:47 h1susbs
2015_06_09 11:47 h1susitmx
2015_06_09 11:47 h1susitmy
2015_06_09 17:25 h1fw1*
2015_06_09 19:16 h1fw1*
2015_06_09 21:03 h1fw1*

* = unexpected restart

Maintenance day. Restart of SUS IOP models for 18bit DAC recalibration. Reboot DMT broadcaster.

H1 General
nutsinee.kijbunchoo@LIGO.ORG - posted 07:39, Wednesday 10 June 2015 (19042)
Lock lost on REFL_TRANS third time tonight

The last time it lost lock I tried my best to approach this state slowly. I have attached screenshots of the spectrum and the PRMI StripTool when it was at CARM_5PM. I waited until POP18 and AS90 became relatively stable before moving on and requesting REFL_TRANS.  Didn't help...

Images attached to this report
H1 INJ (INJ)
eric.thrane@LIGO.ORG - posted 06:59, Wednesday 10 June 2015 (19041)
transient injections restarted with new lock check + intent check
I restarted transient injections at LHO and LLO at 06:43 and 08:51 local time respectively. The injection code was not running when I logged on at each site. I've started a new log file, tinj.err, to try to ascertain why. Before restarting, I implemented a new check so that tinj only attempts an injection if the detector is locked and the intent bit is on. At LHO, I restarted tinj using the new user=hinj account.
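Schematically, the new check amounts to something like the following (a sketch only; the channel names here are placeholders, not necessarily what tinj actually reads):

    from epics import caget

    def ok_to_inject(ifo='H1'):
        """Allow an injection only if the detector is locked and the intent bit is set."""
        lock_state = caget(ifo + ':GRD-ISC_LOCK_STATE_N')             # guardian lock state (placeholder name)
        intent_bit = caget(ifo + ':ODC-OPERATOR_OBSERVATION_READY')   # intent bit (placeholder name)
        return lock_state is not None and intent_bit == 1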
H1 General
jim.warner@LIGO.ORG - posted 00:19, Wednesday 10 June 2015 (19036)
Shift Summary

17:13 IFO Locked, Intent bit set,

20:08 LLO has gone down, I start a DARM OLG measurement, done at 20:19

21:49 Lock loss

23:00 I start initial alignment again, start trying to lock at 23:45

H1 AOS
darkhan.tuyenbayev@LIGO.ORG - posted 20:57, Tuesday 09 June 2015 - last comment - 08:41, Monday 22 June 2015(19031)
Cavity pole fluctuations calculated from Pcal line at 540.7 Hz

Sudarshan, Kiwamu, Darkhan,

Abstract

According to the PCALY line at 540.7 Hz, the DARM cavity pole frequency dropped by roughly 7 Hz when going from the 17 W configuration to 23 W (alog 18923). The frequency remained constant after the power increase to 23 W. This certainly impacts the GDS and CAL-CS calibration by 2 % or so above 350 Hz.

Method

Today we've extracted CAL-DELTAL data from ER7 (June 3 - June 8) to track the cavity pole frequency shift over this period. Only the portions of data during which DARM had a stable lock can be used, so for our calculation we filtered the data, keeping only samples at GPS times when the guardian flag was > 501.
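Schematically, the selection is just a mask on the guardian state; a minimal sketch (assuming the guardian state is available as an array sampled at the same times as the CAL-DELTAL data):

    import numpy as np

    def select_stable_lock(deltal, guardian, threshold=501):
        """Keep only CAL-DELTAL samples recorded while the guardian state was above `threshold`."""
        deltal = np.asarray(deltal)
        guardian = np.asarray(guardian)
        return deltal[guardian > threshold]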

From an FFT at a single frequency it is possible to obtain the DARM gain and the cavity pole frequency from the phase of the DARM line at a particular frequency at which the drive phase is known or not changing. Since the phase of the resultant FFT does not depend on the optical gain but on the cavity pole, looking at the phase essentially gives us information about the cavity pole (see for example alog 18436). However, we do not know the phase offset due to the time delay and perhaps some uncompensated filter. We've decided to focus on cavity pole frequency fluctuations (Delta f_p) rather than trying to find the actual cavity pole. In our calculations we have assumed that the change in phase comes entirely from cavity pole frequency fluctuations.

The phase of the DARM optical plant can be written as

phi = arctan(- f / f_p),

where          f    is the Pcal line frequency;
               f_p  is the cavity pole frequency.

Since this equation does not include any dependence on the optical gain, the measured value of phi is, to our knowledge, not disturbed by changes in the optical gain. Introducing a first-order perturbation in f_p (differentiating the phase gives d(phi)/d(f_p) = f / (f_p^2 + f^2)), one can linearize the above equation to the following:

               f_p^2 + f^2
(Delta f_p) = ------------- (Delta phi)
                    f

An advantage of using this linearized form is that we don't have to do an absolute calibration of the cavity pole frequency, since it focuses on fluctuations rather than absolute values.

Results

Using f_p = 355 Hz, the cavity pole frequency measured at a particular time (see alog 18420), and f = 540.7 Hz (the Pcal EY line frequency), we can write Delta f_p as

Delta f_p = 773.78 * (Delta phi)
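The numerical factor follows directly from the linearized expression; a quick check:

    f_p = 355.0    # DARM cavity pole frequency [Hz] (alog 18420)
    f = 540.7      # Pcal EY line frequency [Hz]

    coeff = (f_p**2 + f**2) / f
    print(coeff)   # ~773.8 Hz per radian of phase change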

Delta f_p trend based on ER7 data is given in the attached plot: "Delta phi" (in degrees) in the upper subplot and "Delta f_p" (in Hz) in the lower subplot.

Judging by the overall trend in Delta f_p, we can say that the cavity pole frequency dropped by about 7 Hz after June 6, 3:00 UTC; this corresponds to the time when the PSL power was changed from 17 W to 23 W (see lho alog 18923, [WP] 5252).

Delta phi also shows fast fluctuations of about +/-3 degrees, and right now we do not know the reason for this "fuzziness" of the measured phase.

Filtered channel data was saved into:

aligocalibration/trunk/Runs/ER7/H1/Measurements/PCAL_TRENDS/H1-calib_1117324816-1117670416_501above.txt (@ r737)

Scripts and results were saved into:

aligocalibration/trunk/Runs/ER7/H1/Scripts/PCAL_TRENDS (@ r736)
Images attached to this report
Comments related to this report
darkhan.tuyenbayev@LIGO.ORG - 13:36, Thursday 11 June 2015 (19078)

Clarifications

Notice that this method does not give an absolute value of the cavity pole frequency. The equation

Delta f_p = 773.78 * (Delta phi)

gives a first order approximation of change in cav. pole frequency with respect to change in phase of Pcal EY line in CAL-DELTAL at 540.7 Hz (with the assumptions given in the original message).

Notice that (Delta phi) in this equation is in "radians", i.e. (Delta f_p) [Hz] = 773.78 [Hz/rad] (Delta phi) [rad].

shivaraj.kandhasamy@LIGO.ORG - 08:41, Monday 22 June 2015 (19266)

Darkhan, did you also look at the low frequency (~30 Hz), both amplitude and phase? If these variations come from just the cavity pole, then there shouldn't be any changes in either amplitude or phase at low frequencies (below the cavity pole). If there is a change only in gain, then it is the optical gain. Any change in the phase would indicate a more complex change in the response of the detector.

H1 General
jim.warner@LIGO.ORG - posted 20:08, Tuesday 09 June 2015 - last comment - 20:19, Tuesday 09 June 2015(19039)
Intent bit turned off, DARM OLG measurement starting

LLO just went down and Jeff asked for a DARM OLG measurement, should be 20 minutes or so. Will revert intent bit when done.

Comments related to this report
jim.warner@LIGO.ORG - 20:19, Tuesday 09 June 2015 (19040)

Measurement done.

H1 CDS (SUS)
david.barker@LIGO.ORG - posted 14:28, Tuesday 09 June 2015 - last comment - 15:41, Wednesday 10 June 2015(19030)
status of 18bit DAC calibrations

We are seeing two issues with the autocal of the 18bit DAC cards (used almost exclusively by suspension models). The first is a failure of the autocal; the second is the autocal succeeding but taking longer than normal.

There are three calibration completion times: ~5.1 s for an unmodified DAC, ~5.3 s for a modified DAC, and ~6.5 s for a modified long-running DAC.

h1sush2b, h1sush34, h1susex and h1susey all have no DAC issues

h1sush2a: the third DAC is taking 6.5 s to complete. This DAC is shared between the PRM and PR3 models. The first two channels are PRM M1 Right and Side; the last six channels are PR3 M1 T1, T2, T3, Left, Right and Side.

h1sush56: this has unmodified cards. 4th DAC is failing autocal

h1susb123: This one is strange:

On first autocal run after a computer reboot: 7th DAC failed autocal

On first restart of all models: 1st DAC failed autocal, 7th DAC succeeded

On second restart of all models: 1st DAC failed autocal, all others good (consistent restart behavior)

In all cases, the 5th DAC card is running long for autocal (6.57 s).

In the current situation with the 7th card now succeeding and the 1st DAC failing, the 1st DAC is driving ITMY M0 (F1,F2,F3,LF,RT,SD) and M0 (LF,RT)

Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:41, Wednesday 10 June 2015 (19052)CDS, DetChar, ISC, SUS
I've been asked to translate / expand on the above entry, and I figure my reply was worth just commenting on the aLOG itself. If one person asks, many more are just as confused.

---------
We know that DetChar has seen Major Carry Transition or "DAC" (digital-to-analog converter) glitches in some of the detector's interferometric signals, which have been pinpointed to a few select stages of a few select suspensions (see e.g. LHO aLOGs 18983, 18938, or 18739).

To combat the issue, yesterday, we restarted all the front-end processes (including the Input Output Processor [IOP] process) on the four corner station SUS computers:
h1sush2a (controlling MC1, MC3, PRM and PR3 all in HAM2)
h1sush2b (controlling IMs 1 through 4)
h1sush34 (controlling MC2 and PR2 in HAM3 and SR2 in HAM4)
h1susb123 (controlling ITMY, BS, and ITMX in BSCs 1, 2 and 3 respectively)

Restarting the IOP process for any front end that is coupled with an I/O chassis running 18-bit DAC cards performs the auto-calibration (autocal) routine on those 18-bit DACs, recalibrating the voltage bias between the (2-bit + 16-bit) cards of the 18-bit DAC to reduce Major Carry transition glitching. When the front-end computer is rebooted or power-cycled, the IOP process is started first, and therefore runs the auto-calibration routine as well. After autocalibration is complete, the user models for each suspension are started. 

Note that the other 3 SUS "control" computers,
h1sush56 (controlling SRM and SR3 in HAM5 and the OMC in HAM6 )
h1susex (controlling ETMX and TMSX)
h1susey (controlling ETMY and TMSY)
were NOT restarted yesterday, but have been restarted several times over recent history.

Each of these front ends / I/O chassis contains many DAC cards, because it (typically) controls many suspensions (as described above), hence the distinction between the card numbers in each front end. Said differently, each DAC card has 8 channels; because of initial attempts to save money and conserve space, the above-mentioned corner station computers / I/O chassis have some DAC cards that control OSEMs of two different suspensions. There is a map of which DAC card controls which OSEM in the wiring diagrams; there is a diagram for each of the above-mentioned computers / I/O chassis. Each diagram, named after the chambers the I/O chassis controls, can be found from the suspension electronics drawing tree, E1100337.

Recall that we have recently swapped out *most* of the suspensions' 18-bit DAC cards for a "new" 18-bit DAC card with upgraded EPROMs (see LHO aLOGs 18557 and 18503, ECR E1500217, and Integration Issue 945). This is what Dave means when he references "modified" vs. "unmodified" DAC cards. All DAC cards in h1sush56 remain unmodified, whereas the DAC cards in all other corner-station SUS I/O chassis have been upgraded.

Dave also notes that 18-bit DACs are used "almost exclusively by suspension models." The PCAL and PEM DACs are also 18-bit DACs. Every other fast DAC (used by TCS, SEI, PSL, ISC, etc.) is a 16-bit DAC, and these have not been found to have any issues with major carry transition glitching. Note that because the tip-tilt suspensions (the OMs and RMs, collectively abbreviated as the HTTS, for HAM Tip-Tilt Suspensions) were purchased with ISC dollars, and ISC made the decision to do this (as with many other things) differently, they have 16-bit DACs. 

After we've restarted the IOP front-end process, we track its start-up log file to ensure that the auto-calibration was successfully completed. We've found that there are two "failure" modes to this auto-calibration process that get reported in these log files. 
(1) The IOP process log reports that the auto-calibration process has failed.
(2) The IOP process log reports that the auto-calibration process took longer than is typical. 
We don't know what the practical result of either of these two failure modes is. This was brought up on the CDS hardware call today, but no one had any idea. I've pushed on the action item for Rolf & Co. to follow up with the vendor to try to figure out what this means.

As for (2), 
- the typical time for an IOP process to complete the auto-calibration of one unmodified DAC card is 5.1 [sec]. 
- the typical time for a modified DAC card with the new EEPROM is 5.3 [sec].
- the atypical "errant" time appears to be 6.5 [sec]. 
But, again, since we have no idea what "running long" means, our only reason to call this a "failure mode" is that it is atypical.

So re-writing Dave's above status (which is a combined status that is the results of yesterday's IOP model restarts and other previous IOP model restarts) with the jargon explained a little differently and in more detail:
- h1sush2b, h1sush34, h1susex, h1susey all have no DAC auto-calibration issues. All DAC cards in these front ends / I/O chassis report a successful auto-calibration routine.
- h1sush2a: of all the DAC cards in this I/O chassis, only the 3rd DAC (which controls some of the TOP stage (M1) OSEMs of PRM and some of the TOP stage (M1) OSEMs of PR3) is suffering from failure mode (2).
- h1sush56: the 4th DAC is suffering from failure mode (1).
- h1susb123: this computer's IOP process was restarted several times. 
	- Upon the first autocal, which was performed as a result of rebooting the computer, 
		- the 1st and 7th DAC card suffered from failure mode (1), 
		- the 5th DAC card suffered from failure mode (2),
		- all other DAC cards reported success.
	- Upon the second autocal, which was performed as a result of restarting the IOP process, 
		- the 1st DAC card suffered from failure mode (1), 
		- the 5th DAC card suffered from failure mode (2),
		- all other DAC cards reported success.
	- Upon the third autocal, which was performed as a result of restarting the IOP process again, 
		- the 1st DAC card suffered from failure mode (1), 
		- the 5th DAC card suffered from failure mode (2),
		- all other DAC cards reported success.
	The first DAC card is shared between the two TOP masses of ITMY, M0 and R0 (note Dave's typo in the initial statement, where R0 appears as M0).

For previous assessments, check out LHO aLOG 18584.