[dpdk-dev] [PATCH v2 0/2] AVX2 Vectorized Rx/Tx functions for i40e

Bruce Richardson bruce.richardson at intel.com
Wed Jan 10 15:38:57 CET 2018


On Wed, Jan 10, 2018 at 03:25:23PM +0100, Vincent JARDIN wrote:
> On 10/01/2018 at 10:27, Richardson, Bruce wrote:
> > > Hi Bruce,
> > > 
> > > Just curious, can you provide some hints on percent increase in at least
> > > some representative cases? I'm just trying to get a sense of if this is
> > > 5%, 10%, 20%, more... I know mileage will vary depending on system, setup,
> > > configuration, etc.
> > > 
> > The best conditions to test under are with testpmd, since that is where any IO improvement is most visible. As a ballpark figure, on my system testing testpmd with both 16B and 32B descriptors (RX/TX ring sizes 1024/512), I saw a ~15% performance increase, and sometimes quite a bit higher, e.g. when testing with 16B descriptors and larger burst sizes.
> 
> Hi Bruce,
> 
> Then, about the next limit after this performance increase: is it the
> board, the Mpps capacity, or the PCI bus? If so, you should see CPU usage
> on testpmd's cores decrease. Can you be more explicit about it?
> 

Hi Vincent,

Again, it really depends on your setup. In my case I was using 2 NICs
with one 40G port each, and each one using a PCIe Gen3 x8 connection to
the CPU. I chose this particular setup because there is sufficient NIC
capacity and PCIe bandwidth available that, for 64-byte packet sizes,
there will be more IO available than a single core can handle. This
patchset basically reduces the cycles needed for a core to process each
packet, so in cases where the core is the bottleneck you will get
improved performance. For other cases where PCIe or NIC capability is the
issue, this patch almost certainly won't help, as there are no changes to
the way in which the NIC descriptor ring is used, e.g. no changes to
descriptor write-back over PCIe etc.
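The bottleneck argument above can be made concrete with some back-of-envelope
arithmetic. The core clock speed and baseline single-core forwarding rate
below are illustrative assumptions, not figures from this thread, and the
PCIe figure ignores TLP/protocol overhead:

```python
# Hypothetical figures: 2.5 GHz core clock, 40 Mpps baseline single-core
# forwarding rate. Only the 40G link speed and PCIe Gen3 x8 width come
# from the setup described above.
FRAME = 64                # 64-byte Ethernet frame, including FCS
WIRE_OVERHEAD = 8 + 12    # preamble + inter-frame gap on the wire
LINK_BPS = 40e9           # one 40G port

# Theoretical 64-byte line rate on a single 40G port
line_rate_mpps = LINK_BPS / ((FRAME + WIRE_OVERHEAD) * 8) / 1e6
print(f"64B line rate: {line_rate_mpps:.2f} Mpps")          # ~59.52 Mpps

# Cycle budget per packet at the assumed 2.5 GHz, and what a 15%
# throughput gain implies: ~13% fewer cycles per packet.
clock_hz = 2.5e9
baseline_mpps = 40.0
base_cycles = clock_hz / (baseline_mpps * 1e6)
gain_cycles = clock_hz / (baseline_mpps * 1.15 * 1e6)
print(f"cycles/pkt: {base_cycles:.1f} -> {gain_cycles:.1f} "
      f"({(1 - gain_cycles / base_cycles) * 100:.1f}% fewer)")

# PCIe Gen3 x8 raw bandwidth (8 GT/s per lane, 128b/130b encoding)
# versus payload DMA plus a 32-byte descriptor write-back per packet.
pcie_gb = 8e9 * (128 / 130) * 8 / 8 / 1e9
needed_gb = (FRAME + 32) * line_rate_mpps * 1e6 / 1e9
print(f"PCIe Gen3 x8: {pcie_gb:.2f} GB/s raw vs "
      f"{needed_gb:.2f} GB/s needed at line rate")
```

With these assumed numbers the PCIe link has headroom (real TLP overhead
narrows it), while a single core at ~40 Mpps is well short of the ~59.5 Mpps
line rate, consistent with the core being the bottleneck this patchset
addresses.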

> What about other packet sizes, like 66 bytes or 122 bytes, which are not
> aligned on 64 bytes?
> 
Sorry, I don't have comparison data for that to share.

/Bruce
