H1 General (SEI)
ryan.crouch@LIGO.ORG - posted 00:07, Sunday 20 July 2025 (85868)
OPS OWL assistance

12:01 GRD called; we got hit by two large, fairly close earthquakes from the eastern Russian peninsula, a 6.7 then a 7.4. A few ISIs and suspensions tripped, and it'll be a few hours until the ground motion comes down enough to relock. We were going through DRMI_ASC at the time of the earthquakes.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:14, Saturday 19 July 2025 (85867)
Ops Eve Shift Report

TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Inherited a Locked IFO.
Dropped from Observing at 3:35:45 UTC for SQZ_FC locking issues.
I followed the instructions for FC troubleshooting found here.
We went back into Observing at 3:47:32 UTC.
Wind started to pick up.
Lockloss, potentially from an M4.7 earthquake in Alaska.

Locking Notes:
Initial alignment was run and completed.
 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 20:30, Saturday 19 July 2025 (85866)
Mid Shift Ops Eve shift & Fire Watch report.

TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 23mph Gusts, 17mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:
I have attached a series of pictures from the LIGO Hanford Corner Station roof.
Conditions are not smoky at all here. No fires or smoke can be seen.

Came back down from the roof and immediately heard these from Verbals.

GRB-Short E582309 02:16:06 UTC
SuperEvent S250720J 02:16:58
GRB-Short E582309 02:17:09
SuperEvent S250720J 02:21:27

I'm not sure why there are duplicates like this.
 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:48, Saturday 19 July 2025 (85865)
Saturday Ops Eve Shift Start

TITLE: 07/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 9mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

H1 has been locked for over 12 Hours!

A stand-down alerts failure happened earlier.
Ryan pointed out these instructions to me: 
https://cdswiki.ligo-wa.caltech.edu/wiki/Ryan%20Crouch?highlight=%28Ryan%29%7C%28Crouch%29

We were able to get it up and running again fairly quickly.

LHO General
corey.gray@LIGO.ORG - posted 16:31, Saturday 19 July 2025 (85860)
Sat DAY Ops Summary

TITLE: 07/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Another nice DAY shift with H1 being locked for more than the entire DAY shift (over 13hrs!). 

Since H1 was locked the entire shift, I did not get another chance at removing the SR3 Pit OFFSET (so it is still there and SR3 is at the new Pit bias I put in yesterday---it was aligned to this last night). When we want to fix this, we'll need to take the SR3 Pit offset to 0.0 and then run an alignment.

Attempted the Saturday Calibration, but it was most likely not successful (but ran a 2nd Calibration at the end of the shift---which was SUCCESSFUL!)
LOG:

H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 16:29, Saturday 19 July 2025 (85864)
Attempt #2: Saturday H1 Calibration Measurement (broadband headless + simulines)

NOTE:  Saw that L1 had a lockloss, so took the opportunity to run my 2nd Calibration of the day (WITHOUT any CTRL-C's!!!)

Measurement NOTES:

Attached is a screenshot of the Calibration Monitor + pdf of the Pydarm Report (but I only ran "pydarm report" vs. "pydarm report --skip-gds").

Images attached to this report
Non-image files attached to this report
H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 14:25, Saturday 19 July 2025 (85862)
(Probably Errant) Saturday H1 Calibration Measurement (broadband headless + simulines)

Summary

Measurement NOTES:

Attached is a screenshot of the Calibration Monitor; unfortunately, I did not get to run a PyDarm Report. I'm assuming this is due to my CTRL-C from the headless measurement noted above, because at the end of this measurement there was also an SDF Diff! Luckily, Tony is here and he was able to take care of the SDF Diff. The SDF was for PCAL Y (medm is SiteMap/CAL EY), and it was related to the In-Loop (OFS) PD (H1:CAL-PCALY_OFS_PD_OUT16) being railed at -7.8. Tony fixed this by toggling the Loop Enable button (H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE) to Off and then On. This is all mentioned at the top of the PCal Known Issues wiki.
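For reference, a minimal sketch of that toggle as it could be scripted with pyepics. The channel names are the ones quoted above, but the enum values (0 = Off, 1 = On) and the settle times are assumptions; the MEDM button and the PCal Known Issues wiki remain the authoritative procedure.

# Hypothetical sketch: toggle the PCAL Y optical follower servo enable off
# and back on with pyepics. Assumes 0 = Off, 1 = On for this binary switch;
# verify against the CAL EY MEDM before using anything like this.
import time
from epics import caget, caput

OFS_ENABLE = "H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE"
OFS_PD = "H1:CAL-PCALY_OFS_PD_OUT16"

print("OFS PD before:", caget(OFS_PD))   # railed near -7.8 in this case
caput(OFS_ENABLE, 0)                      # Loop Enable -> Off
time.sleep(2)                             # let the servo output settle
caput(OFS_ENABLE, 1)                      # Loop Enable -> On
time.sleep(2)
print("OFS PD after:", caget(OFS_PD))     # should come off the rail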

Once the SDF was cleared H1 was taken back to Observing, but there was discussion about trying to run the calibration again since L1 was still relocking.  Opted to not drop out of Observing for this since we were already out of Observing for over 30min.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 12:27, Saturday 19 July 2025 (85863)
SaturDAY Mid-Shift Status

Smooth sailing thus far with H1 locked for almost 9hrs (H1 even rode through two M5+ earthquakes off the Guatemalan coast!). Delayed the Saturday Calibration to allow L1 to thermalize after their recent lockloss; will start the calibration in about 30min.

LHO VE
david.barker@LIGO.ORG - posted 10:23, Saturday 19 July 2025 (85861)
Sat CP1 Fill

Sat Jul 19 10:09:44 2025 INFO: Fill completed in 9min 40secs

 

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:57, Saturday 19 July 2025 (85859)
Sat DAY Ops Transition

TITLE: 07/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:

H1's been locked almost 4.5hrs with a decent night; microseism continues to drop and is below the 50th percentile and winds have been calm the last 7hrs.

H1 General
anthony.sanchez@LIGO.ORG - posted 22:03, Friday 18 July 2025 (85858)
Friday Eve Shift Report.

TITLE: 07/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
I inherited an unlocked IFO. After a few DRMI locking attempts, I was able to relock the IFO.
But the changes to SR3 (H1:SUS-SR3_M1_DITHER_P_OUTPUT) were reverted by SDF Revert after a DRMI lockloss.

H1 was locked at NLN at 2:26:39 UTC
and Observing at 2:18:19 UTC.

SQZ_manager Dropped from FREQ_DEP_SQZ and took H1 into commissioning @ 2:28:38 UTC.
SQZ_FC is Stuck between GR_SUS_LOCKING and Down.
H1:SUS-FC2_M1_OPTICALIGN_P_OFFSET & its Yaw counterpart were moved to relock the FC.
We got back to Observing at 3:00:01 UTC

LOG:
No Log.

H1 SUS (SUS)
edgard.bonilla@LIGO.ORG - posted 20:34, Friday 18 July 2025 (85857)
Changes to the HLTS_W_EST model to test the OSEM estimator on H1 SR3

Edgard, Ivey, Brian.

Relevant FRS ticket : 32526

We made modifications to the HLTS_W_EST and estimator library parts to add DQ channels to monitor the total drive request to the M1 OSEMs with and without the estimator damping. In passing, we made a few changes to the names of channels on the EST block (by modifying ESTIMATOR_PARTS.mdl) to make them a bit more readable/less redundant. These changes affect only the H1 SR3/PR3 models.

The changes were committed to the userapps svn under revision 32426.

 

Oli mentioned that they will do a model restart to get these changes in on Tuesday, as long as we got the changes in before Monday.

The estimator MEDM screens haven't been updated yet, but I think Brian will get to it on Monday.

____________

This is a summary of the library part changes [see attached.pdf for screenshots of these changes in the library parts]:

SIXOSEM_T_STAGE_MASTER_W_EST.mdl

HLTS_MASTER_W_EST.mdl

ESTIMATOR_PARTS.mdl

 

 

Non-image files attached to this report
H1 DetChar (DetChar, PEM)
derek.davis@LIGO.ORG - posted 17:22, Friday 18 July 2025 - last comment - 11:20, Thursday 24 July 2025(85856)
20.2 Hz line appeared Jun 9, turns on and off

Prompted by me noticing on-off behaviors in the daily strain spectrogram for today at around 20.2 Hz, I've done some additional investigations into the source and behavior of this line: 

The 20.2 Hz line, which is currently prominent in DARM, first appeared in accelerometer and microphone data from the corner station on June 9. The first appearance of this line that I found was in the PSL mics, as shown in this spectrogram. This line then appeared in DARM in the first post-vent locks a few days later. The summary of work from June 9 does not show anything obvious to me that would be the source of this new noise.

This feature also turns off and on multiple times during the day. An example from today can be seen in this spectrogram. Most corner station microphones and accelerometers exhibit this feature, but it is most pronounced visually in the PSL microphone spectrograms. I was unable to identify any other non-PEM channels that showed the same on-off behavior, but this does reveal many change points that should aid in tracking down the source. Almost every day, this line exhibits abrupt on-off features at different times of the day and for varying durations. Based on my initial review, these change points appear to be more likely during the local daytime (although not at any specific time).  When the line first appeared, it was usually in the "off" state and then turned on for short periods. However, this has slowly changed, so that now the line is generally in the "on" state and turns off for brief periods. 
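For anyone wanting to repeat this check, here is a sketch of the kind of spectrogram used; the PEM channel name and GPS times are placeholders, not the exact ones behind the attached plots.

# Sketch: look for on/off behavior of the ~20.2 Hz line in a PSL mic.
# Channel name and GPS times are illustrative placeholders.
from gwpy.timeseries import TimeSeries

channel = "H1:PEM-CS_MIC_PSL_CENTER_DQ"     # assumed corner-station PEM mic channel
start, end = 1437000000, 1437007200          # placeholder GPS span (~2 h)

data = TimeSeries.get(channel, start, end)
spec = data.spectrogram(30, fftlength=10, overlap=5) ** (1/2.)

plot = spec.crop_frequencies(15, 25).plot(norm="log")
plot.savefig("psl_mic_20Hz_spectrogram.png")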

  

Images attached to this report
Comments related to this report
derek.davis@LIGO.ORG - 09:24, Monday 21 July 2025 (85887)

Looking into past alogs, I noticed that I reported this same issue last summer in alog 79948. Additional discussion about this line can be found in the detchar-requests repository (requires authentication). In this case, the line appeared in late spring and disappeared in early autumn of 2024. No source was identified before the line disappeared. 

Going back further, I also see the same feature appearing in late spring and disappearing in early autumn of 2023. The presence of the line is hence correlated with the outside temperature, likely related to some aspect of the air conditioning system that is only needed when it is (roughly) hotter outside than inside. This also means that we can expect this line to remain present in the data until autumn unless mitigation measures are taken.

timothy.ohanlon@LIGO.ORG - 11:20, Thursday 24 July 2025 (85959)

I looked briefly into the 20 Hz Noise without much success. Comparing the floor accelerometers, the noise is louder in the EBAY than the LVEA (although the signal of the EBAY accelerometer doesn't look good since the vent). The next closest is HAM1 followed by BS. So the noise is around the -X-Y corner of the LVEA, likely in the EBAY, Transition Area or Optics Lab because HAM6 sees less motion than HAM1 and EBAY sees the most.

Images attached to this comment
H1 CAL
elenna.capote@LIGO.ORG - posted 17:14, Friday 18 July 2025 (85851)
Summary of Calibration Confusion So Far

For background, I attempted to push a new calibration on 7/3 to account for the change in the SRCL offset that we made on 6/26, but it failed due to the broadband PCAL measurement showing a larger uncertainty than we had beforehand (see 85529). Since then, we have been running with the same calibration we have had since 6/10, which has a low error (~3%), but is based on a model that we know to be incorrect. Namely, the model created and pushed on 6/10 has a small, positive spring, and we believe now that DARM has no spring down to at least 10 Hz. We are especially confused because we expected the model change to be focused around the 10-30 Hz region, since this is the band where we expect significant change due to the SRCL offset, but the measurement shows large, >5%, error at 100 Hz.

I have made a series of plots comparing a variety of PCAL broadband measurements from different points since 6/10, measuring PCAL with GDS CALIB STRAIN and CAL DELTA L.

Plot 1 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/11 after we pushed a new calibration modeled with a positive spring. The calibration at this point was very good; the calibration line uncertainties showed error of 3% or less. However, this plot already shows something a bit confusing: a difference between CAL DELTA L and GDS CALIB STRAIN, where GDS CALIB STRAIN has a higher uncertainty around 70-200 Hz. We believe the application of the kappas should further reduce the uncertainty of GDS CALIB STRAIN.

Plot 2 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/26 after we changed the SRCL offset. The calibration report generated that day indicates that the sensing function is flatter with the adjusted SRCL offset. Because the calibration still expects a spring, we were not surprised to see that the low frequency uncertainty changed.

Plot 3 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3 after we pushed a new calibration which was supposed to account for the flatter sensing function. However, we saw that the uncertainty increased at 100 Hz, which we did not expect. This measurement was run slightly early during the "TDCF burn in" so it may not have been an accurate look at the effect of the new calibration.

Plot 4 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3 after we pushed a new calibration and had then been relocked for only 10 minutes. The uncertainty was even larger than in the previous measurement. We were also very confused that CAL DELTA L changed significantly compared to plot 3. We're not sure if the kappas were significantly different from 1, which could also cause problems in GDS CALIB STRAIN when applied.
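As a rough, illustrative sketch of how a ratio of this kind could be formed offline (not the method used for the attached plots; channel names and GPS times below are placeholders, and the PCAL channel would still need its counts-to-strain calibration applied):

import numpy as np
from scipy.signal import csd, welch
from gwpy.timeseries import TimeSeries

# Placeholder GPS span covering a broadband PCAL injection
start, end = 1435000000, 1435000300
strain = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)
pcal = TimeSeries.get("H1:CAL-PCALY_RX_PD_OUT_DQ", start, end)

fs = strain.sample_rate.value                  # both channels assumed to share this rate
f, Pxy = csd(pcal.value, strain.value, fs=fs, nperseg=int(10 * fs))
f, Pxx = welch(pcal.value, fs=fs, nperseg=int(10 * fs))
tf = np.abs(Pxy / Pxx)                         # strain per PCAL count vs frequency
# dividing tf by the PCAL displacement model would give fractional-error traces like those attached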

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 17:14, Friday 18 July 2025 (85854)
Friday Eve Shift

TITLE: 07/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 10mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Unknown Lockloss 2025-07-18 23:23:49

Relocking notes:
Ran initial Alignment to get relocked quickly.
But that ran us through SDF Revert, which undid Corey's changes.
Re-offloaded SR3 Offsets
H1:SUS-SR3_M1_DITHER_P_OUTPUT was 32 and is now 0.
 

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 17:06, Friday 18 July 2025 (85842)
Fri Ops DAY Shift Summary

TITLE: 07/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

H1's been locked the entire Day shift with a lock of over 10hrs!  Fairly quiet day with decent triple coincidence.  There was a forecast of a Red Flag wind day, but it's not been horrible---Corner Station gusts have gotten over 30mph, but nothing worse than that so far.

Did not get to offload the SR.....SCRATCH THAT!  H1 had a lockloss right at the end of the shift, so I took the opportunity to Offload the SR3 Pitch Offset (see alog 85855)....but SDF Revert took the Offset back to 32.3.  
LOG:

H1 SQZ
sheila.dwyer@LIGO.ORG - posted 16:56, Friday 18 July 2025 - last comment - 16:02, Monday 25 August 2025(85852)
first look at sqz dataset, limit on IFO to OMC mode mismatch

I've taken a first look at the data that Camilla and Matt took in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=85813

In the past, I've started modeling these data sets by assuming a fixed arm power and finding the IFO readout losses needed to fit the measured shot noise without squeezing at 2kHz.  This time, inspired in part by a comment from Begum, I instead used only the known IFO readout losses and attributed the rest to mode mismatch between the IFO and OMC; this allows us to put an upper limit on the mode mismatch from the IFO to the OMC.

From the google sheet, I will include SRC losses as a known readout loss (although they are listed as sqz injection losses, I think for the IFO they are readout losses): 0.99(SRC)*0.995(OFI)*0.9993(OM1)*0.985(OM3)*0.9904(QPD)*0.956(OMC)*0.98(QE) = 10% known readout losses.  The known injection losses (not including SRC losses) are then 0.985(OPO)*0.99^3 (3 SFI passes)*0.99(FC QPDs)*0.99(other HAM7 loss)*0.99(OFI) = 7.2%.
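For bookkeeping, the two products above can be checked quickly (the individual efficiencies are just the values quoted from the google sheet):

import numpy as np

# Known IFO readout efficiencies: SRC, OFI, OM1, OM3, QPD, OMC, QE
readout = [0.99, 0.995, 0.9993, 0.985, 0.9904, 0.956, 0.98]
# Known sqz injection efficiencies (excluding SRC): OPO, 3x SFI passes, FC QPDs, HAM7, OFI
injection = [0.985, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99]

print("known readout loss:   %.1f%%" % (100 * (1 - np.prod(readout))))    # ~10.0%
print("known injection loss: %.1f%%" % (100 * (1 - np.prod(injection))))  # ~7.3%, quoted as 7.2% above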

Fitting the level of squeezing and anti-squeezing at 2kHz suggests an NLG of 13.2 (fairly close to Camilla's measurement of 13.4), and a total efficiency of 0.752 using the Aoki equations (treating mismatches as losses).  Looking at the interactive sqz gui, the IFO to OMC mismatch reduces the measured sqz and anti-squeezing at 2kHz, but the mismatch phase only has an impact below about 400Hz.
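A minimal sketch of those relations as I read them (assuming the standard single-mode OPO formulas with all mismatches folded into the total efficiency; this is not the actual fitting script):

import numpy as np

def sqz_antisqz_db(nlg, eta):
    """Shot-noise-relative squeezing/anti-squeezing (dB) for a lossy OPO.

    Assumes the standard single-mode relations: nlg = 1/(1-x)^2,
    S- = 1 - eta*4x/(1+x)^2, S+ = 1 + eta*4x/(1-x)^2.
    """
    x = 1 - 1 / np.sqrt(nlg)
    s_minus = 1 - eta * 4 * x / (1 + x) ** 2
    s_plus = 1 + eta * 4 * x / (1 - x) ** 2
    return 10 * np.log10(s_minus), 10 * np.log10(s_plus)

# Numbers from the fit above: NLG ~13.2, total efficiency ~0.752
print(sqz_antisqz_db(13.2, 0.752))   # roughly (-5.7 dB, +14.7 dB)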

Using only the known readout (10%) and injection (7.2%) losses, and assuming perfect sqz to OMC mode matching, we can get some limit on the amount of OMC to IFO mode mismatch that can be compatible with our 2kHz squeezing.  A 5.1% mismatch (which would imply 355kW in the arm cavity) seems too high to be compatible with our squeezing, while a mode mismatch of 3.7% with 350kW in the arm does seem compatible if the sqz to OMC mode matching is perfect. So, we can take 3.7% as an upper limit on the IFO to OMC mode mismatch that is compatible with known squeezing losses.  The data could be compatible with mode mismatches as low as 2.3% (345kW in arms) without introducing any extra losses.  Any unknown squeezer losses, like excess crystal losses, will reduce this amount.  The upper limit on the arm power that we'd infer from this is 350kW, but this depends sensitively on what we assume the non-quantum noise is at 2kHz.  I will try to redo this estimate soon using the cross-correlation data that Elenna is working on to have more confident limits on the arm power.

These first two plots show that this model isn't well tuned at a few hundred Hz; I haven't yet tried to set the SRCL offset, the homodyne angle, or the OMC to IFO mismatch phase.  At first glance it does not seem like I will be able to make this match well by adjusting the mismatch phase.

The last two plots show the squeezing level in dB, just so that we have a plot we can look at.  The script to make these plots is committed here.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 16:02, Monday 25 August 2025 (85942)

Posting this comment, which had been saved in my drafts and which I thought I'd posted a while ago:

I've taken a second quick look at this data, constructing a model in the same way that I've done in the past without any mode mismatches.  This works OK, and suggests that for 347kW arm power we'd have 2.5% unknown readout losses and

This data can be fit reasonably well with no frequency-independent losses, no mode mismatch, and an SRC detuning, similar to previous data sets.  This doesn't mean that there's not a mode mismatch, but there's no new evidence for mode mismatch here.  Kevin did mention that I should look at scenarios where we have both SQZ to OMC and IFO to OMC mismatches.  Indeed, when I do that the mismatch phase is important for the sqz and anti-sqz levels at 2kHz.

Images attached to this comment
H1 SUS (SUS)
corey.gray@LIGO.ORG - posted 16:54, Friday 18 July 2025 (85855)
SR3 Offset Offload

Lockloss toward the end of the shift, but took the opportunity to do the SR3 Pitch Offset Offload (per Oli's alog 85830). 

Sheila ran me through what we should change the SR3 Pitch to once we zero/offload the SR3 Dither Offset:

Made the change above, but an SDF Revert undid the Offset change!  So, I zeroed the SR3 Pit Dither Offset once again.  Now we are waiting for DRMI to lock.

Attached are screenshots of the (1) ndscope showing the offload and (2) medms involved.

Images attached to this report
H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 13:52, Friday 18 July 2025 (85850)
PSL Status Report (FAMIS #26431)

This is for FAMIS #26431.

Laser Status:
    NPRO output power is 1.87W
    AMP1 output power is 70.35W
    AMP2 output power is 141.0W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 3 days, 2 hr 42 minutes
    Reflected power = 23.67W
    Transmitted power = 105.6W
    PowerSum = 129.3W

FSS:
    It has been locked for 0 days 8 hr and 10 min
    TPD[V] = 0.8367V

ISS:
    The diffracted power is around 3.8%
    Last saturation event was 0 days 8 hours and 10 minutes ago


Possible Issues:
    PMC reflected power is high

H1 PSL (IOO)
jennifer.wright@LIGO.ORG - posted 11:56, Thursday 17 July 2025 - last comment - 15:32, Monday 21 July 2025(85795)
ISS array work - horizontal scan

Jennie, Rahul

On Tuesday Rahul and I took the measurements for the horizontal coupling in the ISS array currently on the optical table.

The QPD read 9500e-7 W (0.95 mW).

The X position was 5.26 V, the Y position was -4.98 V.

PD   DC Voltage [mV] pk-pk   AC Voltage [mV] pk-pk
1    600                     420
2    600                     380
3    600                     380
4    600                     420
5    800                     540
6    800                     500
7    600                     540
8    800                     540

After thinking about this data I realise we need to retake it, as we should record the mean value for the DC-coupled measurements. This was with a 78 V signal applied from the PZT driver, an input dither signal of 2 Vpp at 100 Hz on the oscilloscope, and I think 150 mA pump current on the laser.

Comments related to this report
jennifer.wright@LIGO.ORG - 16:14, Friday 18 July 2025 (85853)

Rahul, Jennie W

 

Yesterday we went back into the lab and retook the DC and AC measurements with the horizontal dither on, this time measuring using the 'mean' setting and without changing the overall input pointing from what it was in the above measurement.

 

PD   DC Voltage [V] mean   AC Voltage [V] mean
1    -4.08                 -0.172
2    -3.81                  0.0289
3    -3.46                  0.159
4    -3.71                  0.17
5    -3.57                 -0.0161
6    -3.5                   0.00453
7    -2.91                  0.187
8    -3.36                  0.0912

 

 

QPD direction   Mean Voltage [V]   Pk-Pk Voltage [V]
X                5.28               2.20
Y               -4.98               0.8

QPD sum is roughly 5V.

 

Next time we need to plug in the second axis of the PZT driver so as to take the dither coupling measurement in the vertical direction.

jennifer.wright@LIGO.ORG - 15:12, Monday 21 July 2025 (85890)

horizontal dither calibration = 10.57 V/mm

dither Vpk-pk on QPD x-direction = 2.2V

dither Vpk-pk on QPD y-direction = 0.8V

dither motion in horizontal direction in V on QPD = sqrt(2.2^2 + 0.8^2) = 2.34 V

motion in mm on QPD that corresponds to dither of input mirror = sqrt(2.2^2 + 0.8^2) / 10.57 = 0.222 mm
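A quick check of that arithmetic, using the 10.57 V/mm dither calibration quoted above:

import numpy as np
v_dither = np.hypot(2.2, 0.8)    # quadrature sum of X and Y pk-pk dither signals [V]
print(v_dither / 10.57)          # beam motion at the QPD [mm], ~0.22 mm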

Code is here for calibration of horizontal beam motion to QPD motion plus calibration of dither measurements.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:32, Monday 21 July 2025 (85891)

To work out the relative intensity noise:

RIN = change in power/ power

= ( change in current/ current) / responsivity of PD

= (change in voltage/voltage) / (responsivity * load resistance)

 

Therefore to minimise RIN we want to minimise change in voltage / voltage for each PD.

To get the least coupling to array input alignment we work out

relative RIN coupling = (delta V/ V) / beam motion at QPD

 

This works because the QPD is designed to be in the same plane as the PD array.

 

PD   DC Voltage [V] mean   AC Voltage [mV] pk-pk   Beam Motion at QPD [mm]   Relative Coupling [1/m]
1    -4.08                 420                     0.222                     465
2    -3.81                 380                     0.222                     450
3    -3.46                 380                     0.222                     496
4    -3.71                 420                     0.222                     511
5    -3.57                 540                     0.222                     683
6    -3.5                  500                     0.222                     645
7    -2.91                 540                     0.222                     838
8    -3.36                 540                     0.222                     726
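A short sketch reproducing the coupling column from the numbers above (beam motion recomputed from the 10.57 V/mm dither calibration):

import numpy as np

# DC mean [V] and AC pk-pk [mV] per PD, from the tables above
dc = np.array([-4.08, -3.81, -3.46, -3.71, -3.57, -3.50, -2.91, -3.36])
ac_mv = np.array([420, 380, 380, 420, 540, 500, 540, 540])

beam_motion_m = np.hypot(2.2, 0.8) / 10.57 * 1e-3          # QPD beam motion [m], ~0.222 mm
coupling = (ac_mv * 1e-3 / np.abs(dc)) / beam_motion_m     # (dV/V) per metre of beam motion

for pd, c in enumerate(coupling, start=1):
    print(f"PD{pd}: {c:.0f} 1/m")    # ~465, 450, 496, 511, 683, 645, 838, 726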

 

These are all a factor of ~50 higher than those measured by Mayank and Shiva; after discussion with Keita, either we need higher-resolution measurements or we need to further optimise input alignment to the array to minimise the coupling.
