[dpdk-dev] Performance hit - NICs on different CPU sockets

Wiles, Keith keith.wiles at intel.com
Mon Jun 13 21:35:33 CEST 2016


On 6/13/16, 9:07 AM, "dev on behalf of Take Ceara" <dev-bounces at dpdk.org on behalf of dumitru.ceara at gmail.com> wrote:

>Hi,
>
>I'm reposting here as I didn't get any answers on the dpdk-users mailing list.
>
>We're working on a stateful traffic generator (www.warp17.net) using
>DPDK and we would like to control two XL710 NICs (one on each socket)
>to maximize CPU usage. It looks like we run into the following
>limitation:
>
>http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
>section 7.2, point 3
>
>We completely split memory/cpu/NICs across the two sockets. However,
>the performance with a single CPU and both NICs on the same socket is
>better.
>Why do all the NICs have to be on the same socket? Is there a
>driver/hw limitation?

Normally the limitation is in the hardware, basically how the PCI buses are connected to the CPUs (or sockets). How the PCI buses are connected to the system depends on the motherboard design. I normally see the buses attached to socket 0, but you could have some of the buses attached to the other sockets, or all on one socket via a PCI bridge device.

No easy way around the problem if some of your PCI buses are split across sockets or all on a single socket. You need to look at your system docs, or use lspci, which has an option (-t) to dump the PCI bus as an ASCII tree, at least on Ubuntu.
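
Not from the original thread, but as a rough sanity check for the split-per-socket setup described above, a small DPDK snippet along these lines can print which NUMA socket each port and each lcore sits on. This is a minimal sketch, and it assumes a recent DPDK where RTE_ETH_FOREACH_DEV is available (newer than the release current at the time of this thread):

    /* numa_check.c - minimal sketch: print the NUMA socket of each DPDK
     * port and lcore so queues, lcores and mbuf pools can be kept local
     * to their NIC. Assumes a recent DPDK (RTE_ETH_FOREACH_DEV available);
     * older releases iterate ports with rte_eth_dev_count() instead.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        uint16_t port_id;
        RTE_ETH_FOREACH_DEV(port_id) {
            /* -1 (SOCKET_ID_ANY) means the socket could not be determined */
            int port_socket = rte_eth_dev_socket_id(port_id);
            printf("port %u is on socket %d\n", port_id, port_socket);

            unsigned lcore_id;
            RTE_LCORE_FOREACH(lcore_id) {
                unsigned lcore_socket = rte_lcore_to_socket_id(lcore_id);
                if (port_socket >= 0 && (unsigned)port_socket != lcore_socket)
                    continue; /* skip remote cores: crossing sockets costs throughput */
                printf("  lcore %u (socket %u) is local to port %u\n",
                       lcore_id, lcore_socket, port_id);
            }
        }
        return 0;
    }

The socket id reported for a port can then be passed to rte_pktmbuf_pool_create() and rte_eth_rx_queue_setup() so that mbufs and descriptor rings are allocated on the NIC's own node; when a core on one socket drives a NIC attached to the other, every packet has to cross the inter-socket link, which is usually where the performance drop described above comes from.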
>
>Thanks,
>Dumitru Ceara
>




