[dpdk-dev] [PATCH 00/24] Refactor mlx5 to improve performance

Javier Blazquez jblazquez at riotgames.com
Mon Jun 13 20:50:48 CEST 2016


> Enhance mlx5 with a data path that bypasses Verbs.
>
> The first half of this patchset removes support for functionality
> completely rewritten in the second half (scatter/gather, inline send),
> while the data path is refactored without Verbs.
>
> The PMD remains usable during the transition.
>
> This patchset must be applied after "Miscellaneous fixes for mlx4 and
> mlx5".
>
> Adrien Mazarguil (8):
>   mlx5: replace countdown with threshold for TX completions
>   mlx5: add debugging information about TX queues capabilities
>   mlx5: check remaining space while processing TX burst
>   mlx5: resurrect TX gather support
>   mlx5: work around spurious compilation errors
>   mlx5: remove redundant RX queue initialization code
>   mlx5: make RX queue reinitialization safer
>   mlx5: resurrect RX scatter support
>
> Nelio Laranjeiro (15):
>   mlx5: split memory registration function for better performance
>   mlx5: remove TX gather support
>   mlx5: remove RX scatter support
>   mlx5: remove configuration variable for maximum number of segments
>   mlx5: remove inline TX support
>   mlx5: split TX queue structure
>   mlx5: split RX queue structure
>   mlx5: update prerequisites for upcoming enhancements
>   mlx5: add definitions for data path without Verbs
>   mlx5: add support for configuration through kvargs
>   mlx5: add TX/RX burst function selection wrapper
>   mlx5: refactor RX data path
>   mlx5: refactor TX data path
>   mlx5: handle RX CQE compression
>   mlx5: add support for multi-packet send
>
> Yaacov Hazan (1):
>   mlx5: add support for inline send
>
>  config/common_base             |    2 -
>  doc/guides/nics/mlx5.rst       |   94 +-
>  drivers/net/mlx5/Makefile      |   49 +-
>  drivers/net/mlx5/mlx5.c        |  158 ++-
>  drivers/net/mlx5/mlx5.h        |   10 +
>  drivers/net/mlx5/mlx5_defs.h   |   26 +-
>  drivers/net/mlx5/mlx5_ethdev.c |  188 +++-
>  drivers/net/mlx5/mlx5_fdir.c   |   20 +-
>  drivers/net/mlx5/mlx5_mr.c     |  280 +++++
>  drivers/net/mlx5/mlx5_prm.h    |  155 +++
>  drivers/net/mlx5/mlx5_rxmode.c |    8 -
>  drivers/net/mlx5/mlx5_rxq.c    |  757 +++++---------
>  drivers/net/mlx5/mlx5_rxtx.c   | 2206 +++++++++++++++++++++++-----------------
>  drivers/net/mlx5/mlx5_rxtx.h   |  176 ++--
>  drivers/net/mlx5/mlx5_txq.c    |  362 ++++---
>  drivers/net/mlx5/mlx5_vlan.c   |    6 +-
>  16 files changed, 2578 insertions(+), 1919 deletions(-)
>  create mode 100644 drivers/net/mlx5/mlx5_mr.c
>  create mode 100644 drivers/net/mlx5/mlx5_prm.h
>
> --
> 2.1.4

This is a very exciting patch. I applied it and reran some microbenchmarks
of mine that test the TX and RX paths separately. These are the results I
got:

TX path (burst = 64 packets, before => after):

1 thread - 2 ports - 4 queues per port: 39Mpps => 48Mpps
2 threads - 2 ports - 2 queues per port: 60Mpps => 60Mpps (hardware limitation?)

RX path (burst = 32 packets, before => after):

1 thread - 2 ports - 4 queues per port: 38Mpps => 46Mpps
2 threads - 2 ports - 2 queues per port: 43Mpps => 50Mpps
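For context, each microbenchmark is essentially a tight loop around
rte_eth_tx_burst()/rte_eth_rx_burst() with the burst sizes above. Here is a
minimal sketch of that kind of loop, assuming ports, queues, and a mempool
are already configured; the function names, the duration_tsc parameter, and
the 64-byte packet stamping are mine for illustration, not my actual
benchmark code:

    #include <rte_cycles.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define TX_BURST 64
    #define RX_BURST 32

    /* TX loop: allocate a burst, stamp a minimal length, transmit,
     * and free whatever the PMD did not accept. */
    static uint64_t
    tx_bench(uint16_t port, uint16_t queue, struct rte_mempool *pool,
             uint64_t duration_tsc)
    {
            struct rte_mbuf *bufs[TX_BURST];
            const uint64_t end = rte_rdtsc() + duration_tsc;
            uint64_t sent = 0;
            uint16_t i, nb;

            while (rte_rdtsc() < end) {
                    if (rte_pktmbuf_alloc_bulk(pool, bufs, TX_BURST) != 0)
                            continue;
                    for (i = 0; i != TX_BURST; ++i) {
                            /* Real headers/payload omitted for brevity. */
                            bufs[i]->data_len = 64;
                            bufs[i]->pkt_len = 64;
                    }
                    nb = rte_eth_tx_burst(port, queue, bufs, TX_BURST);
                    sent += nb;
                    for (i = nb; i != TX_BURST; ++i)
                            rte_pktmbuf_free(bufs[i]);
            }
            return sent;
    }

    /* RX loop: drain the queue in bursts of 32 and drop the packets. */
    static uint64_t
    rx_bench(uint16_t port, uint16_t queue, uint64_t duration_tsc)
    {
            struct rte_mbuf *bufs[RX_BURST];
            const uint64_t end = rte_rdtsc() + duration_tsc;
            uint64_t rcvd = 0;
            uint16_t i, nb;

            while (rte_rdtsc() < end) {
                    nb = rte_eth_rx_burst(port, queue, bufs, RX_BURST);
                    for (i = 0; i != nb; ++i)
                            rte_pktmbuf_free(bufs[i]);
                    rcvd += nb;
            }
            return rcvd;
    }

Mpps is then just the returned count divided by the elapsed time in seconds
(rte_get_tsc_hz() converts TSC ticks to seconds), summed across threads.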

The tests were run on the following hardware, using DPDK master with this
series and the "Miscellaneous fixes for mlx4 and mlx5" series applied:

2x Intel Xeon E5-2680 v3 2.5GHz
64GB DDR4-2133
1x Mellanox ConnectX-4 EN, 40/56GbE dual-port, PCIe3.0 x8 (MCX414A-BCAT)

I haven't tested it extensively outside of these microbenchmarks, but so far
this patchset has been working great on my end, so:

Tested-by: Javier Blazquez <jblazquez at riotgames.com>
