[dpdk-dev] [PATCH 00/24] Refactor mlx5 to improve performance

Nélio Laranjeiro nelio.laranjeiro at 6wind.com
Tue Jun 14 08:57:52 CEST 2016


On Mon, Jun 13, 2016 at 11:50:48AM -0700, Javier Blazquez wrote:
>[...] 
> This is a very exciting patch. I applied it and reran some microbenchmarks
> of mine that test the TX and RX paths separately. These are the results I
> got:
> 
> TX path (burst = 64 packets)
> 
> 1 thread - 2 ports - 4 queues per port: 39Mpps => 48Mpps
> 2 threads - 2 ports - 2 queues per port: 60Mpps => 60Mpps (hardware
> limitation?)

To reach higher values you will need to enable the inline feature with the
txq_inline device argument, and restrict it to configurations with more
than one queue per port with the txqs_min_inline device argument.

This feature helps the NIC by reducing PCI back-pressure; in return it
consumes more CPU cycles.

You can take a look at the NIC documentation (doc/guides/nics/mlx5.rst)
updated in this patch set, which explains both the txq_inline and
txqs_min_inline device arguments.

> RX path (burst = 32 packets)
> 
> 1 thread - 2 ports - 4 queues per port: 38Mpps => 46Mpps
> 2 threads - 2 ports - 2 queues per port: 43Mpps => 50Mpps
> 
> The tests were run on the following hardware, using DPDK master with this
> patch and the "Miscellaneous fixes for mlx4 and mlx5" patch applied:
> 
> 2x Intel Xeon E5-2680 v3 2.5GHz
> 64GB DDR4-2133
> 1x Mellanox ConnectX-4 EN, 40/56GbE dual-port, PCIe3.0 x8 (MCX414A-BCAT)
> 
> I haven't tested it extensively outside of these microbenchmarks, but so far
> this patch has been working great on my end, so:
> 
> tested-by: Javier Blazquez <jblazquez at riotgames.com>

Regards,

-- 
Nélio Laranjeiro
6WIND
