[dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card

zhilong zheng zhengzl0715 at gmail.com
Tue Jul 25 15:20:38 CEST 2017


Hi Adrien,

Thanks for your reply and suggestion. I changed the packet size to 128B, and it now generates ~34Gbps, and ~40Gbps with 256B and larger packets.
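For reference, the arithmetic behind these figures, as a minimal C sketch (the assumption that the throughput pktgen reports counts frame bits only, without wire overhead, is mine):

    #include <stdio.h>

    int main(void)
    {
            /* Frame sizes (bytes) and the throughput observed at each,
             * as reported above. */
            const struct { int size; double gbps; } obs[] = {
                    { 128, 34.0 },
                    { 256, 40.0 },
            };
            size_t i;

            for (i = 0; i < sizeof(obs) / sizeof(obs[0]); i++) {
                    /* Mpps = (Gbps * 1e9) / (bits per frame) / 1e6 */
                    double mpps = obs[i].gbps * 1e9 / (obs[i].size * 8) / 1e6;

                    printf("%3dB @ %.0f Gbps -> %.1f Mpps\n",
                           obs[i].size, obs[i].gbps, mpps);
            }
            return 0;
    }

At 128B the card is pushing roughly 33 Mpps, while 40Gbps of 256B frames is only about 19.5 Mpps, so the smaller sizes look packet-rate limited rather than bandwidth limited.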

Regards,
Zhilong

> On Jul 25, 2017, at 02:20, Adrien Mazarguil <adrien.mazarguil at 6wind.com> wrote:
> 
> Hi Zhilong,
> 
> On Sat, Jul 22, 2017 at 12:05:51AM +0800, zhilong zheng wrote:
>> Hi all,
>> 
>> I have some problem when generating packets to the Mellanox ConnectX-3 dual 40G ports card from the latest pktgen-dpdk.
>> 
>> The problem is that it can only generate ~22Gbps per port (actually I only use one port), which does not saturate the 40G port. The server has two 12-core E5-2650 v4 @ 2.20GHz CPUs and 128GB of 2400MHz DDR4 memory. The DPDK version is 16.11.
>> 
>> This is the driver bound to the NIC:   0000:81:00.0 'MT27500 Family [ConnectX-3]' if=p6p1,p6p2 drv=mlx4_core unused=
>> I guess it's a driver problem. The documentation says the driver name should be librte_pmd_mlx4 (url: http://dpdk.org/doc/guides/nics/mlx4.html), but after completing the installation, the NIC is bound to mlx4_core.
> 
> That's expected: mlx4_core is the name of the kernel driver, while
> librte_pmd_mlx4 is that of the DPDK PMD. There is no librte_pmd_mlx4
> kernel module; see the prerequisites section of the documentation [1].
> 
>> Any clue about this problem? Is it caused by the driver or by something else?
> 
> Depending on packet size and other configuration settings, you may have hit
> the maximum packet rate; these devices cannot reach line rate with 64-byte
> packets, for instance (see the worked numbers after this message).
> 
> [1] http://dpdk.org/doc/guides/nics/mlx4.html#prerequisites
> 
> -- 
> Adrien Mazarguil
> 6WIND
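To put numbers on the packet-rate limit Adrien mentions, here is a minimal C sketch computing the packet rate a 40GbE link requires at each frame size. The 20 bytes of per-frame overhead (preamble, SFD, inter-frame gap) is standard Ethernet; the frame sizes chosen are illustrative:

    #include <stdio.h>

    int main(void)
    {
            const double link_bps = 40e9; /* 40GbE */
            /* 7B preamble + 1B SFD + 12B inter-frame gap per frame. */
            const int overhead = 20;
            const int sizes[] = { 64, 128, 256, 512, 1518 };
            size_t i;

            for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
                    double mpps = link_bps / ((sizes[i] + overhead) * 8) / 1e6;

                    printf("%4dB frames: %6.2f Mpps at line rate\n",
                           sizes[i], mpps);
            }
            return 0;
    }

This prints ~59.5 Mpps for 64B, ~33.8 Mpps for 128B and ~18.1 Mpps for 256B. A NIC whose hardware tops out in the low-to-mid 30s of Mpps would therefore saturate a 40G link only from 256B frames upward, which matches the figures reported above.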


