[dpdk-dev] [PATCH 2/5] ixgbe: add prefetch to improve slow-path tx perf

Neil Horman nhorman at tuxdriver.com
Wed Sep 17 19:59:36 CEST 2014


On Wed, Sep 17, 2014 at 03:35:19PM +0000, Richardson, Bruce wrote:
> 
> > -----Original Message-----
> > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > Sent: Wednesday, September 17, 2014 4:21 PM
> > To: Richardson, Bruce
> > Cc: dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 2/5] ixgbe: add prefetch to improve slow-path tx
> > perf
> > 
> > On Wed, Sep 17, 2014 at 11:01:39AM +0100, Bruce Richardson wrote:
> > > Make a small improvement to slow path TX performance by adding in a
> > > prefetch for the second mbuf cache line.
> > > Also move assignment of l2/l3 length values only when needed.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
> > > ---
> > >  lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 12 +++++++-----
> > >  1 file changed, 7 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > index 6f702b3..c0bb49f 100644
> > > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > @@ -565,25 +565,26 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf
> > **tx_pkts,
> > >  		ixgbe_xmit_cleanup(txq);
> > >  	}
> > >
> > > +	rte_prefetch0(&txe->mbuf->pool);
> > > +
> > 
> > Can you explain what all of these prefetches are doing?  It looks to me like
> > they're just fetching the first cache line of the mempool structure, which it
> > appears amounts to the pool's name.  I don't see that having any use here.
> > 
> This does make a decent enough performance difference in my tests (the amount varies depending on the RX path being used by testpmd). 
> 
> What I've done with the prefetches is two-fold:
> 1) changed it from prefetching the mbuf (first cache line) to prefetching the mbuf pool pointer (second cache line) so that when we go to access the pool pointer to free transmitted mbufs we don't get a cache miss. When clearing the ring and freeing mbufs, the pool pointer is the only mbuf field used, so we don't need that first cache line.
ok, this makes some sense, but you're not guaranteed that the prefetch will be
needed, nor that the data will still be in cache by the time you get to the
free call.  Seems like it might be preferable to prefetch the data pointed to
by tx_pkt, as you're sure to use that on every loop iteration.

> 2) changed the code to prefetch earlier - in effect to prefetch one mbuf ahead. The original code prefetched the mbuf to be freed as soon as it started processing the mbuf to replace it. Instead now, every time we calculate what the next mbuf position is going to be we prefetch the mbuf in that position (i.e. the mbuf pool pointer we are going to free the mbuf to), even while we are still updating the previous mbuf slot on the ring. This gives the prefetch much more time to resolve and get the data we need in the cache before we need it.
> 
Again, earlier isn't necessarily better, as it just means more time for the
data in cache to get victimized. It seems like it would be better to prefetch
the tx_pkts data a few cache lines ahead.

Neil

> Hope this clarifies things.
> 
> /Bruce
> 

