[dpdk-dev] [PATCH 4/4] lib/librte_eal: Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX platforms

Wang, Zhihong zhihong.wang at intel.com
Wed Jan 21 04:18:40 CET 2015



> -----Original Message-----
> From: Neil Horman [mailto:nhorman at tuxdriver.com]
> Sent: Wednesday, January 21, 2015 3:16 AM
> To: Stephen Hemminger
> Cc: Wang, Zhihong; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 4/4] lib/librte_eal: Optimized memcpy in
> arch/x86/rte_memcpy.h for both SSE and AVX platforms
> 
> On Tue, Jan 20, 2015 at 09:15:38AM -0800, Stephen Hemminger wrote:
> > On Mon, 19 Jan 2015 09:53:34 +0800
> > zhihong.wang at intel.com wrote:
> >
> > > Main code changes:
> > >
> > > 1. Differentiate architectural features based on CPU flags
> > >
> > >     a. Implement separate move functions for SSE/AVX/AVX2 to make
> > > full use of cache bandwidth
> > >
> > >     b. Implement a separate copy flow specifically optimized for the
> > > target architecture
> > >
> > > 2. Rewrite the memcpy function "rte_memcpy"
> > >
> > >     a. Add store aligning
> > >
> > >     b. Add load aligning based on architectural features
> > >
> > >     c. Put block copy loop into inline move functions for better
> > > control of instruction order
> > >
> > >     d. Eliminate unnecessary MOVs
> > >
> > > 3. Rewrite the inline move functions
> > >
> > >     a. Add move functions for unaligned load cases
> > >
> > >     b. Change instruction order in copy loops for better pipeline
> > > utilization
> > >
> > >     c. Use intrinsics instead of assembly code
> > >
> > > 4. Remove slow glibc call for constant copies
> > >
> > > Signed-off-by: Zhihong Wang <zhihong.wang at intel.com>
> >
> > Dumb question: why not fix glibc memcpy instead?
> > What is special about rte_memcpy?
> >
> >
> Fair point.  Though, does glibc implement optimized memcpys per arch?  Or
> do they just rely on the __builtin's from gcc to get optimized variants?
> 
> Neil

Neil, Stephen,

Glibc has per-arch implementations, but they target the general-purpose case, while rte_memcpy is aimed at small-size, in-cache memcpy, which is the DPDK case. This leads to different trade-offs and optimization techniques.
Also, glibc's updates from version to version are based on general judgments. Roughly speaking, glibc 2.18 is tuned for Ivy Bridge and 2.20 for Haswell, though that's not fully accurate. But we need an implementation that works well on both Sandy Bridge and Haswell.
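
To make the small-size point concrete, here is a rough sketch (function name and the 16-byte case are mine, illustrative only, not the actual rte_memcpy code) of why inlining matters for the DPDK case: with an always-inline function the compiler sees constant sizes, so a 16-byte copy collapses into one vector load/store pair instead of a call through the PLT into glibc.

    #include <string.h>
    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Illustrative sketch only -- not the actual rte_memcpy code. */
    static inline void *
    tiny_memcpy(void *dst, const void *src, size_t n)
    {
            if (n == 16) {
                    /* One 16-byte load/store pair, fully inlined. */
                    __m128i reg = _mm_loadu_si128((const __m128i *)src);
                    _mm_storeu_si128((__m128i *)dst, reg);
                    return dst;
            }
            /* Larger sizes would take the optimized block-copy path;
             * plain memcpy stands in for it here. */
            return memcpy(dst, src, n);
    }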

For instance, glibc 2.18 has a load-aligning optimization for unaligned memcpy but doesn't support 256-bit mov, while glibc 2.20 adds support for 256-bit mov but removes the load-aligning optimization. This hurts unaligned memcpy performance a lot on architectures like Ivy Bridge. Glibc's rationale is that load aligning doesn't help when src/dst aren't in cache, which may be the general case, but it is not the DPDK case.
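
For a concrete picture of the aligning idea, here is a rough sketch assuming AVX is available (names are mine, not the patch's code): copy a few head bytes so dst reaches a 32-byte boundary, then run the main loop with aligned 256-bit stores fed by unaligned loads. When src/dst are in cache, as in DPDK, this is where the win comes from.

    #include <stdint.h>
    #include <stddef.h>
    #include <immintrin.h>  /* AVX intrinsics */

    /* Rough sketch of store aligning with 256-bit mov --
     * illustrative only, not the patch's code. */
    static void
    copy_aligned_stores(uint8_t *dst, const uint8_t *src, size_t n)
    {
            /* Head: bring dst up to a 32-byte boundary. */
            size_t head = (32 - ((uintptr_t)dst & 31)) & 31;
            if (head > n)
                    head = n;
            for (size_t i = 0; i < head; i++)
                    dst[i] = src[i];
            dst += head; src += head; n -= head;

            /* Body: unaligned loads feeding aligned stores. */
            while (n >= 32) {
                    __m256i reg = _mm256_loadu_si256((const __m256i *)src);
                    _mm256_store_si256((__m256i *)dst, reg);
                    dst += 32; src += 32; n -= 32;
            }

            /* Tail: remaining bytes. */
            for (size_t i = 0; i < n; i++)
                    dst[i] = src[i];
    }

The sketch shows only the store-aligning half; load aligning fixes up the src side instead, typically by combining two aligned loads with a shift before each store.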

Zhihong (John)

