[dpdk-dev] [PATCH 0/9] mbuf: structure reorganization

Ananyev, Konstantin konstantin.ananyev at intel.com
Thu Mar 30 18:45:18 CEST 2017



> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, March 30, 2017 1:23 PM
> To: Olivier Matz <olivier.matz at 6wind.com>
> Cc: dev at dpdk.org; Ananyev, Konstantin <konstantin.ananyev at intel.com>; mb at smartsharesystems.com; Chilikin, Andrey
> <andrey.chilikin at intel.com>; jblunck at infradead.org; nelio.laranjeiro at 6wind.com; arybchenko at solarflare.com
> Subject: Re: [dpdk-dev] [PATCH 0/9] mbuf: structure reorganization
> 
> On Thu, Mar 30, 2017 at 02:02:36PM +0200, Olivier Matz wrote:
> > On Thu, 30 Mar 2017 10:31:08 +0100, Bruce Richardson <bruce.richardson at intel.com> wrote:
> > > On Wed, Mar 29, 2017 at 09:09:23PM +0100, Bruce Richardson wrote:
> > > > On Wed, Mar 29, 2017 at 05:56:29PM +0200, Olivier Matz wrote:
> > > > > Hi,
> > > > >
> > > > > Does anyone have any other comment on this series?
> > > > > Can it be applied?
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Olivier
> > > > >
> > > >
> > > > I assume all driver maintainers have done performance analysis to check
> > > > for regressions. Perhaps they can confirm this is the case.
> > > >
> > > > 	/Bruce
> > > > >
> > > In the absence of anyone else reporting performance numbers with this
> > > patchset, I ran a single-thread testpmd test using 2 x 40G ports (i40e)
> > > driver. With RX & TX descriptor ring sizes of 512 or above, I'm seeing a
> > > fairly noticeable performance drop. I still need to dig in more, e.g. do
> > > an RFC2544 zero-loss test, and also bisect the patchset to see what
> > > parts may be causing the problem.
> > >
> > > Has anyone else tried any other drivers or systems to see what the perf
> > > impact of this set may be?
> >
> > I did, of course. I didn't see any noticeable performance drop on
> > ixgbe (4 NICs, one port per NIC, 1 core). I can replay the test with
> > the current version.
> >
> I had no doubt you did some perf testing! :-)
> 
> Perhaps the regression I see is limited to i40e driver. I've confirmed I
> still see it with that driver in zero-loss tests, so next step is to try
> and localise what change in the patchset is causing it.
> 
> Ideally, though, I think we should see acks or other comments from
> driver maintainers at least confirming that they have tested. You cannot
> be held responsible for testing every DPDK driver before you submit work
> like this.
> 

Unfortunately I also see a regression.
I did a quick flood test on a 2.8 GHz IVB with 4x10Gb ports.
Observed a drop even with default testpmd RXD/TXD numbers (128/512):
from 50.8 Mpps down to 47.8 Mpps.
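That is a drop of about 3 Mpps, or roughly 6%.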
From what I am seeing, the particular patch causing it is:
[dpdk-dev,3/9] mbuf: set mbuf fields while in pool
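
To illustrate what I mean (this is only my rough sketch of the idea
behind that patch title, not the actual code from the series; the real
field handling may well differ), the change as I read it moves mbuf
field initialization out of the allocation path and into the free path,
so that mbufs sitting in the pool are already initialized:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Before (sketch): fields are written on every allocation. */
static inline struct rte_mbuf *
sketch_alloc_old(struct rte_mempool *mp)
{
	struct rte_mbuf *m;
	void *obj;

	if (rte_mempool_get(mp, &obj) < 0)
		return NULL;

	m = obj;
	rte_mbuf_refcnt_set(m, 1);
	m->next = NULL;
	m->nb_segs = 1;
	return m;
}

/* After (sketch): mbufs in the pool are kept initialized, so allocation
 * is just a raw dequeue and the free path restores the fields instead. */
static inline struct rte_mbuf *
sketch_alloc_new(struct rte_mempool *mp)
{
	void *obj;

	if (rte_mempool_get(mp, &obj) < 0)
		return NULL;
	return obj; /* fields are already valid */
}

static inline void
sketch_free_new(struct rte_mbuf *m)
{
	m->next = NULL;
	m->nb_segs = 1;
	/* refcnt stays at 1 while the mbuf sits in the pool */
	rte_mempool_put(m->pool, m);
}

If that reading is right, the stores moved onto the free/TX side would
be my first guess for where the extra cycles go, but that is only a
guess until someone profiles it.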

cc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC)
cmdline:
./dpdk.org-1705-mbuf1/x86_64-native-linuxapp-gcc/app/testpmd  --lcores='7,8'  -n 4 --socket-mem='1024,0'  -w 04:00.1 -w 07:00.1 -w 0b:00.1 -w 0e:00.1 -- -i
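
To check the larger-ring case Bruce mentions, the same command with
testpmd's --rxd/--txd options should do (assuming the same ports and
cores):

./dpdk.org-1705-mbuf1/x86_64-native-linuxapp-gcc/app/testpmd --lcores='7,8' -n 4 --socket-mem='1024,0' -w 04:00.1 -w 07:00.1 -w 0b:00.1 -w 0e:00.1 -- -i --rxd=512 --txd=512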

Konstantin


