H1 General
travis.sadecki@LIGO.ORG - posted 12:05, Thursday 22 October 2015 - last comment - 00:27, Friday 23 October 2015(22746)
Lockloss

Lockloss @ 18:46 UTC.  Cause is currently under investigation.

Comments related to this report
jenne.driggers@LIGO.ORG - 13:02, Thursday 22 October 2015 (22747)

This lockloss was caused by an EPICS freeze. 

We regularly run the SR3 "cage servo" instead of an oplev servo.  It is normally always on (even if the IFO is unlocked), but just in case it's not, the locking guardian requests that the SR3_CAGE_SERVO guardian turn on the servo sometime during the DRMI lock sequence.  As a reminder, this servo looks at the OSEM pitch value of the M3 stage and actuates on the M2 stage to keep the OSEM values constant. This has been determined to be more stable over very long time scales than the optical levers.

The SR3_CAGE_SERVO guardian was running along like normal, trying to write values to the SUS-SR3_M2_TEST_P_OFFSET channel. However, since EPICS was frozen, these values weren't actually being written.  The servo thinks it just needs to push harder and harder, so it starts significantly changing the offset value it is trying to write (normally this changes by +-10 counts or so, but during the freeze it starts changing by about 1000 counts).  Once the EPICS freeze is over, it successfully writes one of these new, significantly different values.  This kicks SR3 pretty significantly, and we lose lock shortly afterward.
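To illustrate the wind-up mechanism (a toy model of my own, not the actual guardian code; all numbers are made up), consider an integrator whose witness readback is frozen: the error never shrinks, so the requested offset keeps growing until the first post-freeze write lands far from where it started.

# Toy model of an integrating servo during an EPICS freeze (illustrative numbers only).
gain = 0.5             # integrator gain, counts of offset per count of error
setpoint = 100.0       # M3 OSEM pitch setpoint, counts
stale_witness = 100.2  # value the frozen readback keeps returning, counts
offset = 0.0           # requested M2 pitch offset, counts

for cycle in range(240):
    frozen = cycle < 239              # EPICS frozen until the last cycle
    error = setpoint - stale_witness  # error never changes while the readback is stale
    offset += gain * error            # servo keeps pushing harder and harder
    if not frozen:
        # the first write that actually succeeds carries the whole accumulated offset
        print("offset finally written: %+.0f counts" % offset)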

The attached lockloss plot shows POPAIR dropping at -1.75 seconds, which is when the actual lockloss happens.  Starting at -20 seconds, the OFFSET channel flatlines, which causes the M2_MASTER_OUT channels to also flatline.  You can see in 3 of the 4 NOISEMON channels that the RMS is significantly reduced temporarily.  Perhaps the M2 UL Noisemon is one of the ones that is broken, and that's why we don't see it there?  Anyhow, at around -6 seconds, the EPICS freeze is over, and the OFFSET comes back on with a very different value.  This causes a big glitch in the MASTER_OUTs and the NOISEMONs.  We lose lock about 4 seconds later.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 18:06, Thursday 22 October 2015 (22754)

Dave, Evan, Sheila

We've added a few lines to the SR3 cage servo guardian to hopefully avoid this in the future. With this change, the servo will not update the offset unless the witness sensor value has changed since the last check. This may cause the cage servo to occasionally skip a step, but this doesn't seem like a problem.

 

In main() we added:

        self.wit = ezca['SUS-SR3_M3_WIT_PMON']

 

    def run(self):
        if ezca['GRD-SUS_SR3_STATE_N'] == 100 and not self.not_aligned_flag:
            #if the value has not changed (possibly an epics freeze) skip running the servo
            if self.wit == ezca['SUS-SR3_M3_WIT_PMON']:
                pass
            else:
                self.servo.step()
                return True
        else:
            notify('SR3 not aligned!')
            return 'CAGE_SERVO_OFF'
sheila.dwyer@LIGO.ORG - 00:27, Friday 23 October 2015 (22764)GRD
Actually the code above won't do what was intended. We need to make sure self.wit gets updated each time run is executed. This code does no harm as is, so I will wait until morning to fix it.
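For reference, a minimal sketch of the intended fix (an assumption on my part, not code that has been installed): self.wit needs to be refreshed whenever the servo actually steps, so that each cycle compares against the previous reading rather than the value cached once in main().

    def run(self):
        if ezca['GRD-SUS_SR3_STATE_N'] == 100 and not self.not_aligned_flag:
            current = ezca['SUS-SR3_M3_WIT_PMON']
            #if the value has not changed (possibly an epics freeze) skip running the servo
            if current == self.wit:
                pass
            else:
                # remember this reading so the next cycle compares against it
                self.wit = current
                self.servo.step()
                return True
        else:
            notify('SR3 not aligned!')
            return 'CAGE_SERVO_OFF'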
H1 General
jeffrey.bartlett@LIGO.ORG - posted 08:07, Thursday 22 October 2015 (22743)
Ops Owl Shift Summary
Activity Log: All Times in UTC (PT)

07:00 (00:00) Take over from TJ
07:15 (00:15) Marissa & Kiwamu – Leaving for the night
09:42 (02:42) ETM-Y saturation
09:51 (02:51) ETM-Y saturation
10:05 (03:05) ETM-Y saturation
10:14 (03:14) ETM-Y saturation
14:07 (07:07) Received GRB alert – Executed checklist procedures – Stand-down until 15:07
14:41 (07:41) Received GRB alert – Executed checklist procedures – Stand-down until 15:41 (08:41)
15:00 (08:00) Turn over to Travis




End of Shift Summary:

Title: 10/22/2015, Owl Shift 07:00 – 15:00 (00:00 – 08:00) All times in UTC (PT)

Support:  

Incoming Operator: Travis

Shift Summary: 
   A few ETM-Y saturations during the first half of the shift. Some of these saturations roughly coincided with a spike in the CS-Z microseism. There was a slight ring-up of the 45 MHz modulation at the same time.  
   GRB alert – Ran through checklist – In stand-down until 15:07 (08:07)
   Second GRB alert – Executed checklist procedures
H1 General
jeffrey.bartlett@LIGO.ORG - posted 04:13, Thursday 22 October 2015 (22742)
Ops Owl Mid-Shift Summary
   A quiet first half of the shift. IFO locked at LOW_NOISE in Observation mode for the past 14 hours. LHO is running at 22.6W with a 77Mpc inspiral range. The wind is a light breeze (0-4 mph); microseism, although elevated, is starting to decline. There have been 6 ETM-Y saturations.
H1 General
jeffrey.bartlett@LIGO.ORG - posted 00:10, Thursday 22 October 2015 (22741)
Ops Owl Shift Transition
 
Title:  10/22/2015, Owl Shift 07:00 – 15:00 (00:00 – 08:00) All times in UTC (PT)

State of H1: 07:00 (00:00) IFO is locked at NOMINAL_LOW_NOISE for the past 10 hours; Intent Bit is set to Observing, 22.6W, 75Mpc.		

Outgoing Operator: TJ

Quick Summary: Wind is a light to gentle breeze; microseism is a bit elevated. Operations appear to be normal.

LHO General
thomas.shaffer@LIGO.ORG - posted 00:00, Thursday 22 October 2015 (22740)
Ops Eve Shift Summary
LHO General
thomas.shaffer@LIGO.ORG - posted 20:53, Wednesday 21 October 2015 (22737)
Ops Eve Mid Shift Report

Observing @ 78Mpc, locked for 12 hours

LLO is up

Environment calm

Handful of glitches

LHO VE
john.worden@LIGO.ORG - posted 16:39, Wednesday 21 October 2015 - last comment - 10:00, Thursday 22 October 2015(22736)
MIDY vacuum leak

Kyle will report more but here is a plot of pressure for the last 4 hours. We are not sure how permanent this "fix" might be but the magnitude of recovery suggests that this is the dominant or only leak in this volume.

Images attached to this report
Comments related to this report
john.worden@LIGO.ORG - 10:00, Thursday 22 October 2015 (22745)

This morning the pressure is at 1.6e-9 torr.  We had been at 4.6e-9 torr. At these pressures the main ion pump may have 1000 l/s pumping speed (a guess) for air. Given this, the air leak was on the order of 3e-6 torr-l/s.
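For reference, the arithmetic behind that estimate (a back-of-the-envelope sketch; the 1000 l/s pumping speed is the guess quoted above):

# Back-of-the-envelope check of the quoted air leak rate: Q = S * delta_P
pump_speed = 1000.0                  # l/s, guessed effective ion pump speed for air
p_before, p_after = 4.6e-9, 1.6e-9   # torr, pressure before and after the leak was plugged
leak_rate = pump_speed * (p_before - p_after)
print("air leak ~ %.1e torr-l/s" % leak_rate)   # ~3e-6 torr-l/s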

The largest signal seen on the helium leak detector was ~1e-7 torr-l/s for helium. However, the leak detector was only sampling a fraction of the helium, as we are open to the beam tube on both sides.

LHO VE
kyle.ryan@LIGO.ORG - posted 16:36, Wednesday 21 October 2015 (22735)
Y-mid leak stopped (for now)
Kyle, John 

Today we applied Apiezon Q putty to that portion of the leak region that we could access, i.e. 2" of seam weld+stitch weld behind the support gusset and 2" of seam weld+stitch weld on the opposite side of the stiffener ring (not blocked by the gusset).  Basically the known region of the leak site minus the inaccessible space between the stiffener ring and the spool wall -> No change in Y-mid pressure.  

Kyle

After lunch, I wet the entire region with Isopropyl alcohol -> no change -> Next, I removed the Apiezon Q putty using various scrapers and wire brushes and then used opposite facing wedges to compress a silicone rubber mat against a cut "feature" (that Gerardo had noticed yesterday which is just below the seam weld behind the support gusset and which is ~1.25" away from the stiffener ring) and the support stand.  The Y-mid pressure began to respond.  This cut had been covered by the putty and should have been sealed.

Kyle, Gerardo, John, Bubba 

We removed the silicone rubber mat but the pressure continued to drop -> We sprayed the entire area with Isopropyl alcohol and the pressure continued to drop -> We wet the region with Acetone and the pressure continued to drop -> We applied VACSEAL (SPI #5052-AB, Lot 1180402) to the cut and the pressure continued to drop 

CONCLUSION: 
It may be that the compressed rubber mat was more effective at working remnants of the initially applied putty into the deep cut feature than John's and my initial attempt using rigid flat tools.  Another possibility is that the brushes used to remove the putty may have embedded putty more effectively into the leak.  Phase-change freezing of the initially applied alcohol is another possibility, except that the leak stayed plugged for longer than the alcohol would have stayed frozen.

At this point, I think that we can say that the leak is at the cut in the spool wall.  How permanently it stays plugged will be a function of the interaction of all of the substances which were applied to it.
H1 SEI (PEM)
jeffrey.kissel@LIGO.ORG - posted 16:22, Wednesday 21 October 2015 (22733)
H1 Vault STS Broken / Busted
J. Kissel, for R. McCarthy, R. Schofield, V. Roma, J. Warner

This came up recently after I'd asked Duncan Macleod to plot it on the summary pages, but I found out today, through the grapevine of those mentioned above, that the vault STS2 is busted. It's under semi-active investigation, also by the team mentioned above, but I just want to make sure something is in the aLOG about it. See the attached ASD demonstrating the badness.

Also for the record, I used a DTT template that lives in the SeiSVN,
/ligo/svncommon/SeiSVN/seismic/Common/Data/2014-01-31_0000UTC_GND_STS2_ASD.xml
with the calibrations originally from LHO aLOG 9727.
Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:17, Wednesday 21 October 2015 (22734)
Ops Eve Shift Transition
H1 General
travis.sadecki@LIGO.ORG - posted 16:00, Wednesday 21 October 2015 (22730)
OPS Day Shift Summary

Title: 10/21 Day Shift 15:00-23:00 UTC (8:00-16:00 PST).  All times in UTC.

State of H1: Observing

Shift Summary: Commissioners were mid-relocking when I arrived this morning.  After some hunting down of alignment changes of the IMs and accepting them in SDF, in addition to letting the violin modes ring down, H1 locked without much of a problem.  It has been locked for 7 hours now.  Microseism is trending upwards, but wind is calm.

Incoming operator: TJ

Activity log:

15:30 Sheila and Kiwamu to LVEA to turn on PZT driver

16:27 Joe D to X arm for beam tube sealing

18:09 Kyle to MY leak hunting

19:30 Kyle back

20:13 Joe D back to X arm

20:33 Kyle to MY

21:22 Kyle done

21:38 Richard to roof deck

21:48 Kyle, Gerardo, John, and Bubba to MY

21:50 Richard off roof

22:11 Joe D done

22:44 Kyle, Gerardo, John, and Bubba back from MY

H1 General
travis.sadecki@LIGO.ORG - posted 14:12, Wednesday 21 October 2015 (22727)
Back to Observing Mode

Observing Mode at 21:12 UTC.

H1 ISC (DetChar)
sheila.dwyer@LIGO.ORG - posted 13:35, Wednesday 21 October 2015 - last comment - 23:13, Wednesday 04 November 2015(22710)
evidence that scattered light couples anthropogenic noise to DARM up to 250 Hz

We have a few pieces of evidence suggesting that anthropogenic noise (probably trucks going to ERDF) couples to DARM through scattered light, which is most likely hitting something that is attached to the ground in the corner station.

  1. Our spectrum is more non-stationary between 100-200 Hz during times of high anthropogenic noise. Nairwita noted this by looking through summary pages (these glitches only seem to appear on weekdays between 7 am and 4 pm local (14-23 UTC), and not on Hanford Fridays when anthropogenic noise is low), and Jordan confirmed this by making a few comparisons of high/low anthropogenic noise within lock stretches.  (alog 22594)
  2. Corner station ground sensors are a good witness of these glitches.  HVETO shows this clearly (see the page for October 14th, for example).  Also, comparison of bandpassed DARM to several corner ground motion sensors and accelerometers shows that glitches in DARM coincide with ground motion (for example, see Nutsinee's alog 22527).
  3. The DARM spectrogram at the time of these glitches shows what look like scattering arches from 1 Hz motion, with the total path length changing at around 40 um/s (alog 22523; see the sketch after this list).  Both this high velocity and the fact that the seismometers on the tables don't seem to witness this motion well suggest that something bolted to the ground is involved in the scattering. This velocity is probably too high for something bolted to the ground.
  4. The scattering amplitude ratio (ratio of scattered amplitude to the DC readout light on the DCPDs) that we would estimate based on the fringes in DARM is about 1e-5, similar to what we got in April.  Using the ISCT6 accelerometer to predict the velocity of the motion doesn't quite work out.
  5. Annamaria and Robert did some PEM injections in the east bay, which showed a linear coupling to DARM.  Annamaria is still working on the data and trying to disentangle downconversion from the linear coupling, but if we assume that scattered light is responsible for the linear coupling, the amplitude ratio is fairly consistent with what we got from the fringe wrapping when trucks go by.
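As a reference for item 3, here is a minimal sketch of the usual fringe-wrapping relation (an illustration, not a measurement): if the total scattered path length changes at a peak rate v, the fringes in DARM reach a maximum frequency of v/lambda. It assumes the 40 um/s quoted above is the peak rate of total path length change.

# Fringe-wrapping arch height from scatterer motion (illustrative numbers only).
lam = 1.064e-6    # m, laser wavelength
v_max = 40e-6     # m/s, assumed peak rate of total path length change
f_max = v_max / lam
print("maximum fringe frequency ~ %.0f Hz" % f_max)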

On Monday, Evan and I went to ISCT6 and listened to DARM and watched a spectrum while tapping and knocking on various things.  We couldn't get a response in DARM by tapping around ISCT6.  We tried knocking fairly hard on the table and the enclosure, tapping aggressively on all the periscope top mirrors and several mounts on the table, and nothing showed up.  We did see something in DARM at around 100 Hz when I tapped loudly on the light pipe, but this seemed like an excitation much louder than anything that would normally happen.  Lastly, we tried knocking on the chamber walls on the side of HAM6 near ISCT6, and this did make some low frequency noise in DARM.  Evan has the times of our tapping.

It might be worth revisiting the fringe wrapping measurements we made in April by driving the ISI, the OMC suspension, and the OMs.  It may also be worth looking at some of the things done at LLO to look at acoustic coupling through the HAM5 bellows (LLO aLOGs 19450 and 19846).

Comments related to this report
evan.hall@LIGO.ORG - 21:37, Tuesday 03 November 2015 (23089)

14:31: tapping on HAM6 table

14:39: tapping on HAM6 chamber (ISCT6 side), in the region underneath AS port viewport

14:40: tapping on HAM6 chamber (ISCT6 side), near OMC REFL light pipe

14:44: with AS beam diverter open, tapping on HAM6 chamber (ISCT6 side)

14:45: with OMC REFL beam diverter open, tapping on HAM6 chamber (ISCT6 side)

14:47: beam diverters closed again, tapping on HAM6 chamber (ISCT6 side)

All times 2015-10-19 local

nutsinee.kijbunchoo@LIGO.ORG - 23:13, Wednesday 04 November 2015 (23122)DetChar

I've made some plots based on the tap times Evan recorded (the recorded times seem off by half a minute or so compared to what really shows up in the accelerometer and DARM). Not all taps created signals in DARM, but every signal that showed up in DARM has the same feature in a spectrogram (visible at ~0-300 Hz, 900 Hz, 2000 Hz, 3000 Hz, and 5000 Hz; see attachment 2). Timeseries also reveal that whether or not a tap shows up in DARM does not seem to depend on the overall amplitude of the tap (seen in the HAM6 accelerometer, see attachment 3). The PEM spectrum during the different tap times doesn't seem to give any clue why one tap shows up in DARM more than another (attachments 4, 5). Apologies for the wrong conclusion I drew earlier based on the spectrum I plotted using the wrong GPS time (those plots have been deleted).

Images attached to this comment
nutsinee.kijbunchoo@LIGO.ORG - 20:41, Wednesday 04 November 2015 (23127)

I zoomed in a little closer at higher frequency and realized this pattern is similar to the unsolved n*505 glitches. Could this be a clue to figuring out the mechanism that causes the n*505 glitches?

Images attached to this comment
H1 General
travis.sadecki@LIGO.ORG - posted 13:05, Wednesday 21 October 2015 (22726)
Commissioning Mode for jitter measurements

Out of Observing Mode for jitter measurements while LLO is down.

H1 General
travis.sadecki@LIGO.ORG - posted 10:00, Wednesday 21 October 2015 - last comment - 14:14, Wednesday 21 October 2015(22718)
Back to Observing (finally)

We have made it back to Observing Mode at 16:59 UTC.  The violin modes are a bit rung up, but they are coming down in their characteristically slow fashion.

Comments related to this report
travis.sadecki@LIGO.ORG - 11:43, Wednesday 21 October 2015 (22724)

I ran the A2L script before going to Observing.  It gave the following error:

IOError: [Errno 13] Permission denied: 'LinFit.png'
 

jenne.driggers@LIGO.ORG - 14:14, Wednesday 21 October 2015 (22728)

The script creates a temporary file (LinFit.png) that I had forgotten to chmod, so the Ops account couldn't write to it.

Travis just successfully ran the a2l from the Ops account, so I *think* all the bugs are fixed.  Note that the script won't return the command line prompt to you until you hit Enter on the keyboard, so it's a bit confusing to tell when it's finished, but if all the SDF diffs are gone, it's over and you can go to Observe.
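For what it's worth, a possible way to sidestep the permission problem entirely (a sketch only, not what the a2l script actually does) would be to write the fit plot to a per-user temporary path rather than a shared file:

# Write the linear-fit plot to a per-user temp path so any account can run the script.
import os
import tempfile

plot_path = os.path.join(tempfile.gettempdir(), 'LinFit_%d.png' % os.getuid())
# then e.g.  fig.savefig(plot_path)
print('writing linear-fit plot to', plot_path)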

H1 SYS (CAL, CDS, DAQ, GRD, INJ, ISC, PEM, PSL, SEI, SUS, TCS, VE)
jeffrey.kissel@LIGO.ORG - posted 18:24, Tuesday 20 October 2015 - last comment - 09:23, Thursday 22 October 2015(22703)
Summary of Power Outage Recovery
J. Kissel, for R. McCarthy, J. Worden, G. Moreno, J. Hanks, R. Bork, C. Perez, R. Blair, K. Kawabe, P. King, J. Oberling, J. Warner, H. Radkins, N. Kijbunchoo, E. King, B. Weaver, T. Sadecki, E. Hall, P. Thomas, S. Karki, D. Moraru, G. Mendell

Well, it turns out the IFO is a complicated beast with a lot of underlying infrastructure that we rely on before we can even begin recovering the IFO. Since LHO so infrequently loses power, I summarize the IFO systems that are necessary before we can begin the alignment / recovery process, with pointers to aLOGs and/or the names of the people who did the work, so that we have a global perspective on all of the worlds that need attention when all power dies. One could consider this a sort of check-list, so I've roughly prioritized the items into stages, where the items within each stage can be done in parallel if the manpower exists and/or is on-site.

Stage 1
--------------------
Facilities - Richard

Vacuum - John / Kyle / Gerardo

Stage 2
--------------------
CDS - 
     Work Stations - Richard
     Control Room FOMs - Operators / Carlos
     DC Power Supplies - Richard

Stage 3
--------------------
CDS continued
     Front-Ends and I/O Chassis - Dave (LHO aLOG 22694, LHO aLOG 22704)
         Timing System 
         Guardian Machine (comes up OK with a simple power cycle)
     Beckhoff PLCs - Patrick (LHO aLOG 22671)

PSL - Peter / Jason / Keita (LHO aLOG 22667, LHO aLOG 22674, LHO aLOG 22693)
     Laser
     Chillers
     Front Ends
     TwinCAT Beckhoff (separate from the rest of the IFO's Beckhoff)
     IO Rotation Stage

TCS - Nutsinee / Elli (LHO aLOG 22675)
     Laser
     Chillers
     TCS Rotation Stage (run on same Beckhoff chassis as IO Rotation Stage, and some PSL PEM stuff too)

ALS Green Lasers - Keita
     The interlocks for these lasers are on top of the ISCT-Ends and need a key turn as well as a "start" button push, so it's a definite trip to the end stations

PCAL Lasers - Sudarshan
     These either survived the power outage, don't have an interlock, or can be reset remotely. I asked Sudarshan about the health of the PCAL lasers, and he was able to confirm goodness without leaving the control room.

High-Voltage - Richard McCarthy
     ESD Drivers, PZTs
     
HEPI Pumps and Pump Servos - Hugh (LHO aLOG 22679)

Stage 4
------------------
Cameras - Carlos
    PCAL Spot-position Cameras
    Green and IR cameras
     
SDF System - Betsy / Hugh
    Changing the default start-up SAFE.snap tables to OBSERVE.snap tables (LHO aLOG 22702)

Hardware Injections - Chris Biwer / Keith Riles / Dave Barker
    These have not yet been restarted

DMT / LDAS - Greg Mendell / Dan Moraru (LHO aLOG 22701)


May we have to exercise this list very infrequently, if at all, in the future!
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:23, Thursday 22 October 2015 (22744)
C. Vorvick should be added to the list of participants! Apologies for anyone else that slipped from my mind late in the evening.
H1 CAL (CAL, DetChar)
andrew.lundgren@LIGO.ORG - posted 02:03, Monday 19 October 2015 - last comment - 23:51, Wednesday 21 October 2015(22631)
Calibration artifact around 508 Hz in ER8
During ER8, there was a calibration artifact around 508 Hz - a non-stationary peak with a width of about 5 Hz. The peak went away on Sep 14 at 16 UTC, probably due to an update of the calibration filters, which was documented in this alog. When re-calibrated data is produced, it's worth having a look at some of this ER8 time to check that the peak is removed.

I made a comparison spectrum a bit before and after the change of the filters (plot 1). The wide peak is removed and the violin modes that it covered (ETMY modes, maybe some others) appear. I did the same thing for a longer span of time, comparing Sep 11 and Oct 17 (plot 2). The artifact also manifests itself as an incoherence between GDS-CALIB_STRAIN and OMC-DCPD_SUM (plot 3). The only other frequency where these channels aren't coherent is at the DARM_CTRL calibration line at 37.3 Hz.

I've also made a spectrogram (plot 4) of the artifact. It has blobs of power every several seconds. The data now looks more even (plot 5), though it's more noisy because the calibration lines are lower.
Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 23:51, Wednesday 21 October 2015 (22739)

I agree that it was due to bad digital filters in CAL-CS. See my recent investigation on this issue at alog 22738.

H1 CAL
kiwamu.izumi@LIGO.ORG - posted 09:36, Monday 14 September 2015 - last comment - 23:46, Wednesday 21 October 2015(21500)
violin filters updated in CAL CS

WP 5489

I have updated the violin mode filters in CAL-CS in order to make them more accurate. This will impact the calibration at the sub-percent level around 30 Hz.

Joe B pointed out to me that the way my matlab script (alog 21322) handles the violin zpk data was not ideal (i.e. I was implicitly assuming a certain ordering of the zeros and poles in the zpk data format). I corrected the script as was already done at Livingston (LLO alog 20512). This resulted in somewhat better accuracy for the PUM in the 1-100 Hz band. The attached screenshots are the new filters and the discrepancy between the full ss model and the installed discrete filters. Compared with what I previously reported in alog 21322, the magnitude of the PUM is now somewhat better. The magnitude of the PUM at 30 Hz is now more accurate, with a very small discrepancy of 0.08% (which used to be 0.2%), and it is also more accurate at 100 Hz, with a small discrepancy of 0.65% (which used to be 2.4%). I do not expect any noticeable change in the binary range with this update.
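For illustration, here is a minimal sketch (in Python rather than the actual matlab script, and with made-up parameters) of the kind of check quoted above: compare a continuous resonance zpk model against a discretized IIR version and report the magnitude discrepancy at 30 Hz and 100 Hz.

# Compare a continuous violin-mode zpk model to a bilinear-discretized IIR version
# at a few frequencies (all parameters here are illustrative guesses).
import numpy as np
from scipy import signal

fs = 16384.0            # Hz, front-end sample rate
f0, q = 508.0, 1e3      # Hz, example violin mode frequency and (capped) Q
w0 = 2 * np.pi * f0
poles = [complex(-w0 / (2 * q), w0), complex(-w0 / (2 * q), -w0)]
sys_ct = signal.ZerosPolesGain([], poles, w0**2)           # resonance with ~unity DC gain
sys_dt = sys_ct.to_discrete(dt=1.0 / fs, method='bilinear')

for f in (30.0, 100.0):
    _, h_ct = signal.freqresp(sys_ct, w=[2 * np.pi * f])
    _, h_dt = signal.dfreqresp(sys_dt, w=[2 * np.pi * f / fs])
    disc = abs(abs(h_dt[0]) / abs(h_ct[0]) - 1) * 100
    print("%5.0f Hz: magnitude discrepancy %.3f %%" % (f, disc))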

I have installed the new filters and loaded the coefficients in CAL-CS.

Images attached to this report
Comments related to this report
kiwamu.izumi@LIGO.ORG - 23:46, Wednesday 21 October 2015 (22738)

This is a follow up on the change we made on the L1 and L2 stage violin mode filters for calibration.

 

As Andy reported in alog 22631, there had been a prominent peak at 508 Hz before the change to the violin calibration filters. This was due to the fact that the ETMY L1 and L2 stage filters in CAL-CS mistakenly had a far-too-tall violin mode peak at 508 Hz (see the original entry above). The spectral shape of the violin modes that he posted looks very similar to what we mistakenly had in CAL-CS before Sep. 14th.

I am concluding that the 508 Hz nonstationary behavior seen in the calibrated signals before Sep. 14th is indeed an artifact of the too-high response in the violin calibration filters.

I made a comparison between the violin calibration filters before and after my fix on the matlab code. See the attached screenshots below:

Fig.1 L1 stage violin calibration filter. (Blue) before the bug fix, (red) after the bug fix.

Fig.2 L2 stage violin calibration filter. (Blue) before the bug fix, (red) after the bug fix.

 

It is clear in the plots that the previous filters had a high peak at 508 Hz, as tall as 120 dB! Therefore the ETMY suspension calibration filters must have been unnecessarily sensitive to any small signals in DARM_CTRL before Sep. 14th. In fact, this was exactly the thing I was worried about, and it was the main motivation for decreasing the violin Qs down to 1e3 (alog 21322). Note that, according to the suspension model, the Q-factor of the violin modes can be as high as 1x10^9. However, we decided to artificially decrease the violin Qs in the actuator calibration filters in order to keep the IIR filters reasonably stable. Otherwise, the modes would be easily rung up by numerical precision errors, a small step in the actual signal, or anything else.
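As an illustration of what "decreasing the violin Qs" means in practice, here is a minimal sketch (a Python stand-in for the matlab script, with assumed numbers) that caps the Q of complex pole pairs in a zpk model while keeping the mode frequency fixed.

# Cap the Q of complex poles in a zpk model so the resulting IIR filter stays well behaved.
import numpy as np

def cap_pole_q(poles, q_max=1e3):
    # For a complex pole p: w0 = |p|, Q = w0 / (2*|Re(p)|).
    # If Q exceeds q_max, push the real part out to -w0/(2*q_max), keeping w0 fixed.
    capped = []
    for p in np.atleast_1d(poles):
        if p.imag != 0 and p.real != 0:
            w0 = abs(p)
            if w0 / (2.0 * abs(p.real)) > q_max:
                re = -w0 / (2.0 * q_max)
                im = np.sign(p.imag) * np.sqrt(max(w0**2 - re**2, 0.0))
                p = complex(re, im)
        capped.append(p)
    return np.array(capped)

# Example: a 508 Hz violin mode pole pair with Q ~ 1e9 gets capped to Q = 1e3.
w0 = 2 * np.pi * 508.0
poles = np.array([complex(-w0 / 2e9, w0), complex(-w0 / 2e9, -w0)])
print(cap_pole_q(poles))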

I also attach the difference of the filters in zpk format. See the third and fourth attachments. Since their violin Qs are capped at 1e3, both the L1 and L2 stages have the same frequency response.

Images attached to this comment