[dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO

Raslan Darawsheh rasland at mellanox.com
Tue Jul 23 08:48:52 CEST 2019


Hi,

> -----Original Message-----
> From: dev <dev-bounces at dpdk.org> On Behalf Of Matan Azrad
> Sent: Monday, July 22, 2019 5:52 PM
> To: Ferruh Yigit <ferruh.yigit at intel.com>; Shahaf Shuler
> <shahafs at mellanox.com>; Yongseok Koh <yskoh at mellanox.com>; Slava
> Ovsiienko <viacheslavo at mellanox.com>
> Cc: dev at dpdk.org; Dekel Peled <dekelp at mellanox.com>
> Subject: [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO
> 
> Introduction:
> LRO (Large Receive Offload) is intended to reduce host CPU overhead when
> processing Rx TCP packets.
> LRO works by aggregating multiple incoming packets from a single stream
> into a larger buffer before they are passed up the networking stack, thus
> reducing the number of packets that have to be processed.
> 
> Use:
> MLX5 PMD will query the HCA capabilities on initialization to check if LRO is
> supported and can be used.
> LRO in MLX5 PMD is intended for use by applications using a relatively small
> number of flows.
> LRO support can be enabled only per port.
> In each LRO session, packets of the same flow will be coalesced until one of
> the following occurs:
>   *   Buffer size limit is exceeded.
>   *   Session timeout is exceeded.
>   *   Packet from a different flow is received on the same queue.
> 
> When an LRO session ends, the coalesced packet is passed to the PMD, which
> updates the header fields before passing the packet to the application.
> For efficient memory utilization, the MPRQ mechanism is used.
> Support of non-LRO flows will not be impacted.
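> 
> As a minimal illustrative sketch (not part of this series), an application
> could request LRO at port configuration time roughly as follows, assuming
> "port_id" refers to an already probed mlx5 port and using only the existing
> ethdev calls and the DEV_RX_OFFLOAD_TCP_LRO capability described below:
> 
>   #include <errno.h>
>   #include <rte_ethdev.h>
> 
>   /* Check the per-port LRO capability and request it at configure time
>    * (LRO can be enabled only per port). */
>   static int
>   configure_port_with_lro(uint16_t port_id)
>   {
>       struct rte_eth_dev_info dev_info;
>       struct rte_eth_conf port_conf = { 0 };
> 
>       rte_eth_dev_info_get(port_id, &dev_info);
>       if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO))
>           return -ENOTSUP; /* device does not report LRO support */
>       port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
>       /* 1 Rx queue and 1 Tx queue, just for the example. */
>       return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
>   }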
> 
> Existing API:
> Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate that the
> device supports LRO.
> The testpmd command-line option "--enable-lro" will be used to request the
> LRO feature at application start.
> The testpmd runtime command setting rx_offload "tcp_lro" on or off will be
> used to enable or disable the LRO feature while the application is running.
> Offload flag PKT_RX_LRO will be used. This flag can be set in an Rx mbuf to
> indicate that it is an LRO coalesced packet.
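> 
> A minimal sketch of how an application could consume the PKT_RX_LRO flag in
> its Rx loop (the burst size, queue arguments and the printf are placeholders
> for the example):
> 
>   #include <stdio.h>
>   #include <rte_ethdev.h>
>   #include <rte_mbuf.h>
> 
>   /* Detect coalesced packets by testing PKT_RX_LRO in ol_flags. */
>   static void
>   rx_poll(uint16_t port_id, uint16_t queue_id)
>   {
>       struct rte_mbuf *pkts[32];
>       uint16_t i, nb_rx;
> 
>       nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
>       for (i = 0; i < nb_rx; i++) {
>           if (pkts[i]->ol_flags & PKT_RX_LRO)
>               /* Coalesced packet, headers already updated by the PMD. */
>               printf("LRO packet, len=%u\n",
>                      rte_pktmbuf_pkt_len(pkts[i]));
>           rte_pktmbuf_free(pkts[i]);
>       }
>   }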
> 
> New API:
> PMD configuration parameter lro_timeout_usec will be added.
> This parameter can be used by the application to select the LRO session
> timeout (in microseconds).
> If this value is not specified, the minimal value supported by the device
> will be used.
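> 
> A minimal sketch of how the new device argument could be passed, assuming the
> usual mlx5 "-w <PCI>,key=value" device-argument mechanism and a hypothetical
> PCI address; the 128 usec timeout below is only an example value (omit the
> parameter to fall back to the minimal value supported by the device):
> 
>   #include <rte_eal.h>
> 
>   int
>   main(void)
>   {
>       /* "0000:03:00.0" and 128 usec are example placeholders. */
>       char *eal_args[] = {
>           "app", "-w", "0000:03:00.0,lro_timeout_usec=128",
>       };
> 
>       if (rte_eal_init(3, eal_args) < 0)
>           return -1;
>       /* ... continue with port configuration as in the sketches above ... */
>       return 0;
>   }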
> 
> Known limitations:
>   *   mbuf headroom is zero for any packet if LRO is configured on the port.
>   *   Keep CRC offload cannot be supported with LRO.
>   *   CQE compression is not supported with LRO.
> 
> v2:
> Fix small compilation issues detected per commit (found by Ferruh).
> 
> Dekel Peled (23):
>   net/mlx5: remove redundant item from union
>   net/mlx5: add LRO APIs and initial settings
>   net/mlx5: support LRO caps query using devx API
>   net/mlx5: glue func for queue query using new API
>   net/mlx5: glue function for action using new API
>   net/mlx5: check conditions to enable LRO
>   net/mlx5: support Tx interface query using new API
>   net/mlx5: update Tx queue create for LRO
>   net/mlx5: create advanced RxQ object using new API
>   net/mlx5: modify advanced RxQ object using new API
>   net/mlx5: create advanced Rx object using new API
>   net/mlx5: create advanced RxQ table using new API
>   net/mlx5: allocate door-bells using new API
>   net/mlx5: rename RxQ verbs to general RxQ object
>   net/mlx5: rename verbs indirection table to obj
>   net/mlx5: rename hash RxQ verbs to general
>   net/mlx5: update queue state modify function
>   net/mlx5: store protection domain number on create
>   net/mlx5: func to create Rx verbs completion queue
>   net/mlx5: function to create Rx verbs work queue
>   net/mlx5: create advanced RxQ using new API
>   net/mlx5: support LRO with single RxQ object
>   doc: update MLX5 doc and release notes with LRO
> 
> Matan Azrad (5):
>   net/mlx5: replace the external mbuf shared memory
>   net/mlx5: update LRO fields in completion entry
>   net/mlx5: handle LRO packets in Rx queue
>   net/mlx5: zero the LRO mbuf headroom
>   net/mlx5: adjust the maximum LRO message size
> 
>  doc/guides/nics/features/mlx5.ini      |    1 +
>  doc/guides/nics/mlx5.rst               |   14 +
>  doc/guides/rel_notes/release_19_08.rst |    2 +-
>  drivers/net/mlx5/Makefile              |    5 +
>  drivers/net/mlx5/meson.build           |    2 +
>  drivers/net/mlx5/mlx5.c                |  223 ++++++-
>  drivers/net/mlx5/mlx5.h                |  160 ++++-
>  drivers/net/mlx5/mlx5_devx_cmds.c      |  326 +++++++++
>  drivers/net/mlx5/mlx5_ethdev.c         |   14 +-
>  drivers/net/mlx5/mlx5_flow.h           |    6 +
>  drivers/net/mlx5/mlx5_flow_dv.c        |   28 +-
>  drivers/net/mlx5/mlx5_flow_verbs.c     |    3 +-
>  drivers/net/mlx5/mlx5_glue.c           |   33 +
>  drivers/net/mlx5/mlx5_glue.h           |    6 +-
>  drivers/net/mlx5/mlx5_prm.h            |  379 ++++++++++-
>  drivers/net/mlx5/mlx5_rxq.c            | 1132 ++++++++++++++++++++++----------
>  drivers/net/mlx5/mlx5_rxtx.c           |  167 ++++-
>  drivers/net/mlx5/mlx5_rxtx.h           |   80 ++-
>  drivers/net/mlx5/mlx5_rxtx_vec.h       |    6 +-
>  drivers/net/mlx5/mlx5_rxtx_vec_sse.h   |   16 +-
>  drivers/net/mlx5/mlx5_trigger.c        |   12 +-
>  drivers/net/mlx5/mlx5_txq.c            |   27 +-
>  drivers/net/mlx5/mlx5_vlan.c           |   32 +-
>  23 files changed, 2194 insertions(+), 480 deletions(-)
> 
> --
> 1.8.3.1

Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh

