[dpdk-users] Dpdk poor performance on virtual machine

edgar helmut helmut.edgar100 at gmail.com
Thu Dec 15 14:32:04 CET 2016


I have a single-socket machine: an Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz.

I just took two more steps (a rough sketch of both follows below):
1. setting iommu=pt for better use of igb_uio
2. using taskset and isolcpus, so the relevant DPDK threads now appear to
run on dedicated cores.
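
For reference, here is roughly what those two steps look like (the grub
file, core IDs, and testpmd options are illustrative, not my exact values):

  # host kernel command line, e.g. in /etc/default/grub
  # (then update-grub and reboot); cores 2-5 are an example:
  GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt isolcpus=2-5"

  # pin the DPDK threads to the isolated cores:
  taskset -c 2-5 ./testpmd -l 2-5 -n 4 -- -i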

This improved performance, though I still see a significant difference
between the VM and the host that I can't fully explain.

Any further ideas?

Regards,
Edgar


On Thu, Dec 15, 2016 at 2:54 PM, Wiles, Keith <keith.wiles at intel.com> wrote:

>
> > On Dec 15, 2016, at 1:20 AM, edgar helmut <helmut.edgar100 at gmail.com>
> wrote:
> >
> > Hi.
> > Some help is needed to understand a performance issue on a virtual machine.
> >
> > Running testpmd on the host works well (testpmd forwards 10G between
> > two 82599 ports).
> > However, the same application running in a virtual machine on the same
> > host shows a huge degradation in performance.
> > testpmd is then not even able to read 100 Mbps from the NIC without
> > drops, and from a profile I made it looks like the DPDK application
> > runs more than 10 times slower than on the host…
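> >
> > For reference, the invocation was roughly like this (core list and
> > port mask are illustrative):
> >
> >   ./testpmd -l 1-3 -n 4 -- --portmask=0x3 -i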
>
> Not sure I understand the overall setup, but did you make sure the NIC/PCI
> bus is on the same socket as the VM, if you have multiple sockets on your
> platform? If you have to access the NIC across the QPI link, it could
> explain some of the performance drop, though I'm not sure a drop that
> large is down to this alone.
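>
> A quick way to check (the PCI address below is an example):
>
>   cat /sys/bus/pci/devices/0000:03:00.0/numa_node
>   # then compare with the NUMA node of the cores the VM runs on:
>   lscpu | grep NUMA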
>
> >
> > Setup is Ubuntu 16.04 for the host and Ubuntu 14.04 for the guest.
> > Qemu is 2.3.0 (though I tried a newer version as well).
> > The NICs are connected to the guest using PCI passthrough, and the
> > guest's CPU is set to passthrough (same as the host).
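> >
> > Roughly, the relevant qemu flags (device addresses are examples, and
> > vfio-based passthrough is an assumption):
> >
> >   qemu-system-x86_64 -enable-kvm -cpu host -m 4096 \
> >       -device vfio-pci,host=03:00.0 \
> >       -device vfio-pci,host=03:00.1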
> > On guest start the host allocates transparent hugepages (AnonHugePages),
> > so I assume the guest memory is backed with real hugepages on the host.
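> >
> > For what it's worth, that assumption can be checked with something like
> > this (the qemu process name is an example):
> >
> >   grep AnonHugePages /proc/meminfo
> >   grep AnonHugePages /proc/$(pidof qemu-system-x86_64)/smaps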
> > I tried binding with igb_uio and with uio_pci_generic, but both result
> > in the same performance.
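> >
> > For reference, the binding looks roughly like this (paths and PCI
> > addresses are examples; older releases ship the script as
> > dpdk_nic_bind.py):
> >
> >   modprobe uio
> >   insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> >   ./tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0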
> >
> > Given the performance difference, I guess I'm missing something.
> >
> > Please advise: what might I be missing here?
> > Is this an inherent penalty of qemu?
> >
> > Thanks
> > Edgar
>
> Regards,
> Keith
>
>

