[dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue

Wang, Zhihong zhihong.wang at intel.com
Thu Sep 22 08:58:24 CEST 2016



> -----Original Message-----
> From: Jianbo Liu [mailto:jianbo.liu at linaro.org]
> Sent: Thursday, September 22, 2016 1:48 PM
> To: Yuanhan Liu <yuanhan.liu at linux.intel.com>
> Cc: Wang, Zhihong <zhihong.wang at intel.com>; Maxime Coquelin
> <maxime.coquelin at redhat.com>; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
> 
> On 22 September 2016 at 10:29, Yuanhan Liu <yuanhan.liu at linux.intel.com>
> wrote:
> > On Wed, Sep 21, 2016 at 08:54:11PM +0800, Jianbo Liu wrote:
> >> >> > My setup consists of one host running a guest.
> >> >> > The guest generates as much 64bytes packets as possible using
> >> >>
> >> >> Have you tested with other different packet size?
> >> >> My testing shows that performance is dropping when packet size is
> >> >> more than 256.
> >> >
> >> >
> >> > Hi Jianbo,
> >> >
> >> > Thanks for reporting this.
> >> >
> >> >  1. Are you running the vector frontend with mrg_rxbuf=off?
> >> >
Yes, my testing uses mrg_rxbuf=off, but not the vector frontend PMD.
> 
> >> >  2. Could you please specify what CPU you're running? Is it Haswell
> >> >     or Ivy Bridge?
> >> >
> It's an ARM server.
> 
> >> >  3. How many percentage of drop are you seeing?
> The testing result:
> size (bytes)    improvement (%)
> 64                    3.92
> 128                  11.51
> 256                  24.16
> 512                 -13.79
> 1024                -22.51
> 1500                -12.22
> A correction: performance is dropping when packet size is 512 bytes or larger.


Jianbo,

Could you please verify whether this patch really causes the enqueue
performance to drop?

You can test the enqueue path alone by setting the guest to do rxonly
forwarding, then compare the Mpps reported by "show port stats all" in
the guest.
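A sketch of such a check, assuming the guest runs DPDK's testpmd (the exact port setup and EAL arguments depend on your environment):

```
testpmd> set fwd rxonly
testpmd> start
testpmd> show port stats all
```

Comparing the Rx-pps figure with and without the patch isolates the host-side enqueue path, since the guest only receives and never transmits.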


Thanks
Zhihong

> 
> >> >
> >> > I expected this, because I've already found the root cause and
> >> > the way to optimize it; but since it missed the v0 deadline and
> >> > requires changes in eal/memcpy, I've postponed it to the next
> >> > release.
> >> >
> >> > After the upcoming optimization, performance for packets larger
> >> > than 256 bytes will improve, and the new code will be much faster
> >> > than the current code.
> >> >
> >>
> >> Sorry, I tested on an ARM server, but I wonder whether the same
> >> issue exists on the x86 platform.
> >
> > Would you please provide more details? Say, answer the two remaining
> > questions from Zhihong?
> >
> > Thanks.
> >
> >         --yliu

