[dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK

Shihabur Rahman Chowdhury shihab.buet at gmail.com
Wed Apr 12 23:00:10 CEST 2017


Hello,

We are running a simple DPDK application and observing quite low
throughput. We are currently testing the application with the following
setup:

- 2 machines, each with 2x Intel Xeon E5-2620 CPUs
- Each machine with a single-port 10G Mellanox ConnectX-3 card
- Mellanox DPDK 16.11
- Mellanox OFED 4.0-2.0.0.1 and the latest firmware for the ConnectX-3

The application is doing almost nothing. It reads a batch of 64 packets
from a single rxq, swaps the MAC addresses of each packet, and writes the
packets back to a single txq. The rx and tx are handled by separate lcores
on the same NUMA socket. We are running pktgen on another machine. With
64B packets we are seeing a ~14.8Mpps Tx rate but only a ~7.3Mpps Rx rate
in pktgen. We checked the NIC on the machine running the DPDK application
(with ifconfig) and it looks like a large number of packets are being
dropped by the interface. Our ConnectX-3 card should theoretically be able
to handle 10Gbps Rx + 10Gbps Tx (with a x4 link, the theoretical maximum
on PCIe 3.0 should be ~31.2Gbps). Interestingly, when the Tx rate in
pktgen is reduced to ~9Mpps, the Rx rate also increases to ~9Mpps.
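
For concreteness, the per-lcore work is roughly the sketch below (DPDK
16.11-era API; the port/queue numbers, the ring hand-off between the rx
and tx lcores, and names such as BURST_SIZE, rx_loop and tx_loop are
illustrative rather than our exact code):

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST_SIZE 64

/* rx lcore: read a burst from the single rxq, swap the MAC addresses,
 * and hand the packets to the tx lcore over a ring. */
static int
rx_loop(void *arg)
{
        struct rte_ring *ring = arg;        /* rx -> tx hand-off */
        struct rte_mbuf *pkts[BURST_SIZE];

        for (;;) {
                uint16_t n = rte_eth_rx_burst(0, 0, pkts, BURST_SIZE);

                for (uint16_t i = 0; i < n; i++) {
                        struct ether_hdr *eth =
                            rte_pktmbuf_mtod(pkts[i], struct ether_hdr *);
                        struct ether_addr tmp = eth->s_addr;

                        eth->s_addr = eth->d_addr;
                        eth->d_addr = tmp;
                }
                if (n > 0) {
                        unsigned q = rte_ring_enqueue_burst(ring,
                            (void **)pkts, n);
                        while (q < n)       /* ring full: drop the rest */
                                rte_pktmbuf_free(pkts[q++]);
                }
        }
        return 0;
}

/* tx lcore: drain the ring and write the packets to the single txq. */
static int
tx_loop(void *arg)
{
        struct rte_ring *ring = arg;
        struct rte_mbuf *pkts[BURST_SIZE];

        for (;;) {
                unsigned n = rte_ring_dequeue_burst(ring,
                    (void **)pkts, BURST_SIZE);
                uint16_t sent = rte_eth_tx_burst(0, 0, pkts, (uint16_t)n);

                while (sent < n)            /* free unsent packets */
                        rte_pktmbuf_free(pkts[sent++]);
        }
        return 0;
}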

We would highly appreciate some pointers as to what could possibly be
causing this mismatch between Tx and Rx. Ideally, we should be able to
see ~14Mpps Rx as well. Is it because we are using a single port? Or
something else?

FYI, we also ran the sample l2fwd application and testpmd in the same
setup and got comparable results.
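
For the testpmd run, the invocation was along the lines of the following
single-queue macswap test (the exact EAL core list and memory channel
arguments here are illustrative, not our exact command line):

./testpmd -l 0-2 -n 4 -- --forward-mode=macswap --rxq=1 --txq=1 --burst=64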

Thanks
Shihab

