[dpdk-users] Dpdk poor performance on virtual machine

edgar helmut helmut.edgar100 at gmail.com
Thu Dec 15 18:29:54 CET 2016


Stephen, that is not the case; the setup relies on transparent hugepages,
which appear to be 2M in size.
Why should it be a problem to back the guest's 1G pages with 2M pages on
the host?
Transparent hugepages make the deployment much more flexible.
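
For reference, the THP state and the host-wide THP usage can be checked
with something like this (generic commands, not specific to this setup):

  cat /sys/kernel/mm/transparent_hugepage/enabled
  grep AnonHugePages /proc/meminfo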



On Thu, Dec 15, 2016 at 7:17 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> On Thu, 15 Dec 2016 14:33:25 +0000
> "Hu, Xuekun" <xuekun.hu at intel.com> wrote:
>
> > Are you sure the AnonHugePages size was equal to the total VM memory
> > size? Sometimes the transparent hugepage mechanism doesn't guarantee
> > that the application is using real huge pages.
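> > One way to verify (assuming a single QEMU process; the process name
> > below is an example) is to sum AnonHugePages in the QEMU process map
> > and compare it with the VM memory size:
> >
> >   awk '/AnonHugePages/ {sum += $2} END {print sum " kB"}' \
> >       /proc/$(pidof qemu-system-x86_64)/smaps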
> >
> >
> > -----Original Message-----
> > From: users [mailto:users-bounces at dpdk.org] On Behalf Of edgar helmut
> > Sent: Thursday, December 15, 2016 9:32 PM
> > To: Wiles, Keith
> > Cc: users at dpdk.org
> > Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine
> >
> > I have a single socket, which is an Intel(R) Xeon(R) CPU E5-2640 v4 @
> > 2.40GHz.
> >
> > I just took two more steps (sketched below):
> > 1. setting iommu=pt for better use of igb_uio
> > 2. using taskset and isolcpus, so the relevant DPDK cores now run on
> > dedicated cores
> >
> > This improved performance, though I still see a significant difference
> > between the VM and the host which I can't fully explain.
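> >
> > A minimal sketch of those two settings, with example core numbers
> > (cores 1-3 isolated and used by testpmd):
> >
> >   # kernel command line additions (host and/or guest, as appropriate)
> >   iommu=pt isolcpus=1-3
> >
> >   # pin testpmd to the isolated cores
> >   taskset -c 1-3 ./testpmd -c 0xe -n 4 -- -i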
> >
> > Any further ideas?
> >
> > Regards,
> > Edgar
> >
> >
> > On Thu, Dec 15, 2016 at 2:54 PM, Wiles, Keith <keith.wiles at intel.com> wrote:
> >
> > >
> > > > On Dec 15, 2016, at 1:20 AM, edgar helmut <helmut.edgar100 at gmail.com> wrote:
> > > >
> > > > Hi.
> > > > Some help is needed to understand a performance issue on a virtual
> > > > machine.
> > > >
> > > > Running testpmd on the host works well (testpmd forwards 10G
> > > > between two 82599 ports).
> > > > However, the same application running on a virtual machine on the
> > > > same host shows a huge degradation in performance.
> > > > testpmd is then not even able to read 100 Mbps from the NIC without
> > > > drops, and from a profile I made it looks like the DPDK application
> > > > runs more than 10 times slower than on the host…
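> > > >
> > > > For reference, the forwarding test is along these lines; the core
> > > > mask, channel count and port mask are examples, not the exact
> > > > command:
> > > >
> > > >   ./testpmd -c 0xf -n 4 -- -i --portmask=0x3
> > > >   testpmd> start
> > > >   testpmd> show port stats all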
> > >
> > > Not sure I understand the overall setup, but did you make sure the
> > > NIC/PCI bus is on the same socket as the VM, if you have multiple
> > > sockets on your platform? If you have to access the NIC across the
> > > QPI link it could explain some of the performance drop, though I am
> > > not sure that much of a drop is down to this problem.
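> > >
> > > A quick way to check (the PCI address below is an example) is to
> > > compare the NIC's NUMA node with the node the VM's vCPUs are pinned
> > > to:
> > >
> > >   cat /sys/bus/pci/devices/0000:02:00.0/numa_node
> > >   lscpu | grep NUMA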
> > >
> > > >
> > > > The setup is Ubuntu 16.04 for the host and Ubuntu 14.04 for the
> > > > guest.
> > > > QEMU is 2.3.0 (though I tried a newer version as well).
> > > > The NICs are connected to the guest using PCI passthrough, and the
> > > > guest's CPU is set as passthrough (same as the host).
> > > > On guest start the host allocates transparent hugepages
> > > > (AnonHugePages), so I assume the guest memory is backed by real
> > > > hugepages on the host.
> > > > I tried binding with igb_uio and with uio_pci_generic, but both
> > > > give the same performance.
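> > > >
> > > > For reference, binding inside the guest looks roughly like this
> > > > (the PCI addresses and build path are examples; on older DPDK
> > > > releases the script is dpdk_nic_bind.py rather than
> > > > dpdk-devbind.py):
> > > >
> > > >   modprobe uio
> > > >   insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> > > >   ./tools/dpdk-devbind.py --bind=igb_uio 0000:00:05.0 0000:00:06.0
> > > >   ./tools/dpdk-devbind.py --status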
> > > >
> > > > Given the performance difference I guess I am missing something.
> > > >
> > > > Please advise: what might I be missing here?
> > > > Is this a native penalty of QEMU?
> > > >
> > > > Thanks
> > > > Edgar
> > >
> > > Regards,
> > > Keith
> > >
> > >
>
> Also make sure you run the host with 1G hugepages and back the guest with
> hugepage memory. If not, the IOMMU has to do 4K operations and thrashes.
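>
> A minimal sketch of that setup, with example sizes and paths:
>
>   # host kernel command line
>   default_hugepagesz=1G hugepagesz=1G hugepages=8
>
>   # expose the pool and back the guest memory from it
>   mkdir -p /dev/hugepages1G
>   mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
>   qemu-system-x86_64 ... -m 8192 -mem-path /dev/hugepages1G -mem-prealloc ...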
>

