[dpdk-dev] [PATCH 0/4] DPDK memcpy optimization

Wang, Zhihong zhihong.wang at intel.com
Wed Jan 28 03:06:57 CET 2015



> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, January 27, 2015 8:20 PM
> To: Wang, Zhihong; Richardson, Bruce; 'Marc Sune'
> Cc: 'dev at dpdk.org'
> Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> 
> 
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Tuesday, January 27, 2015 11:30 AM
> > To: Wang, Zhihong; Richardson, Bruce; Marc Sune
> > Cc: dev at dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> >
> >
> >
> > > -----Original Message-----
> > > From: Wang, Zhihong
> > > Sent: Tuesday, January 27, 2015 1:42 AM
> > > To: Ananyev, Konstantin; Richardson, Bruce; Marc Sune
> > > Cc: dev at dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Tuesday, January 27, 2015 2:29 AM
> > > > To: Wang, Zhihong; Richardson, Bruce; Marc Sune
> > > > Cc: dev at dpdk.org
> > > > Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > >
> > > > Hi Zhihong,
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Wang,
> > > > > Zhihong
> > > > > Sent: Friday, January 23, 2015 6:52 AM
> > > > > To: Richardson, Bruce; Marc Sune
> > > > > Cc: dev at dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Bruce
> > > > > > Richardson
> > > > > > Sent: Wednesday, January 21, 2015 9:26 PM
> > > > > > To: Marc Sune
> > > > > > Cc: dev at dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > > >
> > > > > > On Wed, Jan 21, 2015 at 02:21:25PM +0100, Marc Sune wrote:
> > > > > > >
> > > > > > > On 21/01/15 14:02, Bruce Richardson wrote:
> > > > > > > >On Wed, Jan 21, 2015 at 01:36:41PM +0100, Marc Sune wrote:
> > > > > > > >>On 21/01/15 04:44, Wang, Zhihong wrote:
> > > > > > > >>>>-----Original Message-----
> > > > > > > >>>>From: Richardson, Bruce
> > > > > > > >>>>Sent: Wednesday, January 21, 2015 12:15 AM
> > > > > > > >>>>To: Neil Horman
> > > > > > > >>>>Cc: Wang, Zhihong; dev at dpdk.org
> > > > > > > >>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy
> > > > > > > >>>>optimization
> > > > > > > >>>>
> > > > > > > >>>>On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > > > > > > >>>>>>On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > > > > > > >>>>>>>-----Original Message-----
> > > > > > > >>>>>>>From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > > > > > > >>>>>>>Sent: Monday, January 19, 2015 9:02 PM
> > > > > > > >>>>>>>To: Wang, Zhihong
> > > > > > > >>>>>>>Cc: dev at dpdk.org
> > > > > > > >>>>>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy
> > > > > > > >>>>>>>optimization
> > > > > > > >>>>>>>
> > > > > > > >>>>>>>On Mon, Jan 19, 2015 at 09:53:30AM +0800,
> > > > > > > >>>>>>>zhihong.wang at intel.com
> > > > > > > >>>>wrote:
> > > > > > > >>>>>>>>This patch set optimizes memcpy for DPDK for both
> > > > > > > >>>>>>>>SSE and AVX platforms.
> > > > > > > >>>>>>>>It also extends memcpy test coverage with unaligned
> > > > > > > >>>>>>>>cases and more test points.
> > > > > > > >>>>>>>>Optimization techniques are summarized below:
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>1. Utilize full cache bandwidth
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>2. Enforce aligned stores
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>3. Apply load address alignment based on
> > > > > > > >>>>>>>>architecture features
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>4. Make load/store address available as early as
> > > > > > > >>>>>>>>possible
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>5. General optimization techniques like inlining,
> > > > > > > >>>>>>>>branch reducing, prefetch pattern access
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>Zhihong Wang (4):
> > > > > > > >>>>>>>>   Disabled VTA for memcpy test in app/test/Makefile
> > > > > > > >>>>>>>>   Removed unnecessary test cases in test_memcpy.c
> > > > > > > >>>>>>>>   Extended test coverage in test_memcpy_perf.c
> > > > > > > >>>>>>>>   Optimized memcpy in arch/x86/rte_memcpy.h for both
> > > > > > > >>>>>>>>     SSE and AVX platforms
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>  app/test/Makefile                                  |   6 +
> > > > > > > >>>>>>>>  app/test/test_memcpy.c                             |  52 +-
> > > > > > > >>>>>>>>  app/test/test_memcpy_perf.c                        | 238 +++---
> > > > > > > >>>>>>>>  .../common/include/arch/x86/rte_memcpy.h           | 664 +++++++++++++++------
> > > > > > > >>>>>>>>  4 files changed, 656 insertions(+), 304 deletions(-)
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>--
> > > > > > > >>>>>>>>1.9.3
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>Are you able to compile this with gcc 4.9.2?  The
> > > > > > > >>>>>>>compilation of test_memcpy_perf is taking forever for
> > > > > > > >>>>>>>me.  It appears hung.
> > > > > > > >>>>>>>Neil
> > > > > > > >>>>>>Neil,
> > > > > > > >>>>>>
> > > > > > > >>>>>>Thanks for reporting this!
> > > > > > > >>>>>>It should compile, but it will take quite some time if the
> > > > > > > >>>>>>CPU doesn't support AVX2. The reasons are:
> > > > > > > >>>>>>1. The SSE & AVX memcpy implementation is more complicated
> > > > > > > >>>>>>than the AVX2 version, so the compiler takes more time to
> > > > > > > >>>>>>compile and optimize it.
> > > > > > > >>>>>>2. The new test_memcpy_perf.c contains 126 constant memcpy
> > > > > > > >>>>>>calls for better test case coverage, which is quite a lot.
> > > > > > > >>>>>>
> > > > > > > >>>>>>I've just tested this patch on an Ivy Bridge machine with
> > > > > > > >>>>>>GCC 4.9.2:
> > > > > > > >>>>>>1. The whole compile process takes 9'41" with the original
> > > > > > > >>>>>>test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls).
> > > > > > > >>>>>>2. It takes only 2'41" after I reduce the number of constant
> > > > > > > >>>>>>memcpy calls to 12 + 12 = 24.
> > > > > > > >>>>>>
> > > > > > > >>>>>>I'll reduce the memcpy calls in the next version of the patch.
> > > > > > > >>>>>>
> > > > > > > >>>>>ok, thank you.  I'm all for optimization, but I think a
> > > > > > > >>>>>compile that takes almost
> > > > > > > >>>>>10 minutes for a single file is going to generate some
> > > > > > > >>>>>raised eyebrows when end users start tinkering with it
> > > > > > > >>>>>
> > > > > > > >>>>>Neil
> > > > > > > >>>>>
> > > > > > > >>>>>>Zhihong (John)
> > > > > > > >>>>>>
> > > > > > > >>>>Even two minutes is a very long time to compile, IMHO.
> > > > > > > >>>>The whole of DPDK doesn't take that long to compile
> > > > > > > >>>>right now, and that's with a couple of huge header files
> > > > > > > >>>>with routing tables in it. Any chance you could cut
> > > > > > > >>>>compile time down to a few seconds while still having
> > > > > > > >>>>reasonable tests?
> > > > > > > >>>>Also, when there is AVX2 present on the system, what is
> > > > > > > >>>>the compile time like for that code?
> > > > > > > >>>>
> > > > > > > >>>>	/Bruce
> > > > > > > >>>Neil, Bruce,
> > > > > > > >>>
> > > > > > > >>>Some data first.
> > > > > > > >>>
> > > > > > > >>>Sandy Bridge without AVX2:
> > > > > > > >>>1. original w/ 10 constant memcpy: 2'25"
> > > > > > > >>>2. patch w/ 12 constant memcpy: 2'41"
> > > > > > > >>>3. patch w/ 63 constant memcpy: 9'41"
> > > > > > > >>>
> > > > > > > >>>Haswell with AVX2:
> > > > > > > >>>1. original w/ 10 constant memcpy: 1'57"
> > > > > > > >>>2. patch w/ 12 constant memcpy: 1'56"
> > > > > > > >>>3. patch w/ 63 constant memcpy: 3'16"
> > > > > > > >>>
> > > > > > > >>>Also, to address Bruce's question: we have to reduce the test
> > > > > > > >>>cases to cut down compile time, because we use:
> > > > > > > >>>1. intrinsics instead of assembly, for better flexibility and
> > > > > > > >>>to utilize more compiler optimization
> > > > > > > >>>2. a complex function body, for better performance
> > > > > > > >>>3. inlining
> > > > > > > >>>All of this increases compile time. But I think it'd be okay
> > > > > > > >>>to do that as long as we can select a fair set of test points.
> > > > > > > >>>
> > > > > > > >>>It'd be great if you could give some suggestions, say, 12 points.
> > > > > > > >>>
> > > > > > > >>>Zhihong (John)
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>While I agree that in the general case these long compilation
> > > > > > > >>times are painful for users, having a factor of 2-8x in memcpy
> > > > > > > >>operations is quite an improvement, especially in DPDK
> > > > > > > >>applications which (unfortunately) need to rely heavily on
> > > > > > > >>them -- e.g. IP fragmentation and reassembly.
> > > > > > > >>
> > > > > > > >>Why not have fast compilation by default, and a tunable config
> > > > > > > >>flag to enable a highly optimized version of rte_memcpy (e.g.
> > > > > > > >>RTE_EAL_OPT_MEMCPY)?
> > > > > > > >>
> > > > > > > >>Marc
> > > > > > > >>
> > > > > > > >Out of interest, are these 2-8x improvements something you
> > > > > > > >have benchmarked in these app scenarios? [i.e. not just in
> > > > > > > >micro-benchmarks].
> > > > > > >
> > > > > > > How much that micro-speedup will end up affecting the
> > > > > > > performance of the entire application is something I cannot
> > > > > > > say, so I agree that we should probably have some additional
> > > > > > > benchmarks before deciding whether it pays off to maintain two
> > > > > > > versions of rte_memcpy.
> > > > > > >
> > > > > > > There are however a bunch of possible DPDK applications that
> > > > > > > could potentially benefit; IP fragmentation, tunneling and
> > > > > > > specialized DPI applications, among others, since they
> > > > > > > involve a reasonable amount of memcpys per pkt. My point
> > > > > > > was, *if* it proves beneficial enough, why not have it as an
> > > > > > > option?
> > > > > > >
> > > > > > > Marc
> > > > > >
> > > > > > I agree, if it provides the speedups then we need to have it
> > > > > > in - and quite possibly on by default, even.
> > > > > >
> > > > > > /Bruce
> > > > >
> > > > > Since we're clear now that the long compile time is mainly caused
> > > > > by too many inline function calls, I think it's okay not to do this.
> > > > > Would you agree?
> > > >
> > > > Actually I wonder, if instead of:
> > > >
> > > > +	switch (srcofs) {
> > > > +	case 0x01: MOVEUNALIGNED_LEFT47(dst, src, n, 0x01); break;
> > > > +	case 0x02: MOVEUNALIGNED_LEFT47(dst, src, n, 0x02); break;
> > > > +	case 0x03: MOVEUNALIGNED_LEFT47(dst, src, n, 0x03); break;
> > > > +	case 0x04: MOVEUNALIGNED_LEFT47(dst, src, n, 0x04); break;
> > > > +	case 0x05: MOVEUNALIGNED_LEFT47(dst, src, n, 0x05); break;
> > > > +	case 0x06: MOVEUNALIGNED_LEFT47(dst, src, n, 0x06); break;
> > > > +	case 0x07: MOVEUNALIGNED_LEFT47(dst, src, n, 0x07); break;
> > > > +	case 0x08: MOVEUNALIGNED_LEFT47(dst, src, n, 0x08); break;
> > > > +	case 0x09: MOVEUNALIGNED_LEFT47(dst, src, n, 0x09); break;
> > > > +	case 0x0A: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0A); break;
> > > > +	case 0x0B: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0B); break;
> > > > +	case 0x0C: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0C); break;
> > > > +	case 0x0D: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0D); break;
> > > > +	case 0x0E: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0E); break;
> > > > +	case 0x0F: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0F); break;
> > > > +	default:;
> > > > +	}
> > > >
> > > > We'll just do:
> > > > MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
> > > >
> > > > That should reduce the size of the generated code quite a bit, wouldn't it?
> > > > On the other hand, MOVEUNALIGNED_LEFT47() is a pretty big chunk of code,
> > > > so the performance difference between having the offset value in a
> > > > register vs an immediate shouldn't be significant.
> > > >
> > > > Konstantin
> > > >
> > > > >
> > > > > Zhihong (John)
> > >
> > > Hey Konstantin,
> > >
> > > We have to use a switch here because PALIGNR requires the shift count
> > > to be an 8-bit immediate.
> >
> > Ah ok, then can we move the switch inside the block of code that uses
> > PALIGNR?
> > Or would that be too big a performance drop?
> 
> I meant 'inside the MOVEUNALIGNED_LEFT47()' macro. :)

I think it's more a matter of programming taste :) and I agree that it looks clearer inside the macro.
Will add this in the next version. Thanks!
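
For reference, here is a rough sketch of the direction (heavily simplified,
just to illustrate the idea -- the real MOVEUNALIGNED_LEFT47() copies much
larger blocks per case, and the *_SKETCH names below are only illustrative,
not the actual macros in rte_memcpy.h):

#include <tmmintrin.h> /* SSSE3: _mm_alignr_epi8 (PALIGNR) */

/*
 * Store to dst the 16 bytes starting at src + off, assembled from two
 * unaligned 16-byte loads with PALIGNR. The PALIGNR shift count must be
 * an 8-bit immediate, so 'off' has to be a compile-time constant here.
 */
#define MOVE16_ALIGNR_SKETCH(dst, src, off) do {                          \
	__m128i lo = _mm_loadu_si128((const __m128i *)(src));             \
	__m128i hi = _mm_loadu_si128(                                     \
		(const __m128i *)((const char *)(src) + 16));             \
	_mm_storeu_si128((__m128i *)(dst), _mm_alignr_epi8(hi, lo, off)); \
} while (0)

/*
 * The switch lives inside the macro, so the caller can pass a runtime
 * srcofs; each case expands the body above with a constant immediate.
 */
#define MOVEUNALIGNED_LEFT47_SKETCH(dst, src, srcofs) do {                \
	switch (srcofs) {                                                 \
	case 0x01: MOVE16_ALIGNR_SKETCH(dst, src, 0x01); break;           \
	case 0x02: MOVE16_ALIGNR_SKETCH(dst, src, 0x02); break;           \
	/* ... cases 0x03 through 0x0E in the same pattern ... */         \
	case 0x0F: MOVE16_ALIGNR_SKETCH(dst, src, 0x0F); break;           \
	default: break;                                                   \
	}                                                                 \
} while (0)

With the per-offset dispatch hidden like this, the caller side collapses to
a single MOVEUNALIGNED_LEFT47(dst, src, n, srcofs) call, as you suggested.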

Zhihong (John)

> 
> > Konstantin
> >
> > >
> > > Zhihong (John)

