[v3] doc: fix update release notes for Mellanox drivers

Message ID 1557747662-188493-1-git-send-email-orika@mellanox.com (mailing list archive)
State Accepted, archived
Series [v3] doc: fix update release notes for Mellanox drivers

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Ori Kam May 13, 2019, 11:41 a.m. UTC
This patch adds some missing features to Mellanox drivers release notes.
It also updates the mlx5/mlx4 documentation.

Fixes: d85b204b5dba ("doc: update release notes for Mellanox drivers")
Cc: yskoh@mellanox.com

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>

---
v3:
 * Address ML comments.

V2:
 * Fix checkpatch issues.

---
 doc/guides/nics/mlx4.rst               |   2 +-
 doc/guides/nics/mlx5.rst               | 184 +++++++++++++++++++++++++--------
 doc/guides/rel_notes/release_19_05.rst |   7 +-
 3 files changed, 148 insertions(+), 45 deletions(-)
  

Comments

Thomas Monjalon May 13, 2019, 6:53 p.m. UTC | #1
13/05/2019 13:41, Ori Kam:
> This patch adds some missing features to Mellanox drivers release notes.
> It also updates the mlx5/mlx4 documentation.
> 
> Fixes: d85b204b5dba ("doc: update release notes for Mellanox drivers")
> Cc: yskoh@mellanox.com
> 
> Signed-off-by: Ori Kam <orika@mellanox.com>
> Acked-by: Shahaf Shuler <shahafs@mellanox.com>

Applied, thanks
  
Tom Barbette May 14, 2019, 5:47 a.m. UTC | #2
Hi all,

I still find the documentation a little bit unclear about Direct Rules, and therefore how to enable this large-scale mode.

Is it only available for the E-Switch? Will PF rules therefore still be slow? How can I be sure I am in direct mode?

At some point someone mentioned that the performance of using a second group would be much faster than updating rules in the main group. I think having that somewhere in the doc, or in an internal Mellanox guide, would be useful.

Last point: is the requirement for Direct Rules really OFED 4.6.2? It is still not publicly available.

Thanks,

Tom

  
Ori Kam May 14, 2019, 8:18 a.m. UTC | #3
Hi Tom,

Thanks for your mail.

PSB

Best,
Ori Kam

From: Tom Barbette <barbette@kth.se>
Sent: Tuesday, May 14, 2019 8:47 AM
To: Ori Kam <orika@mellanox.com>
Cc: Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>; Matan Azrad <matan@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v3] doc: fix update release notes for Mellanox drivers

Hi all,

I still find the documentation a little bit unclear about Direct Rules, and therefore how to enable this large-scale mode.

Is it only available for the E-Switch? Will PF rules therefore still be slow? How can I be sure I am in direct mode?

[Ori] Direct Rules are activated automatically when you enable DV using dv_flow_en=1 (in the device parameters) and Direct Rules are
supported by the driver.
As stated in the "Supported hardware offloads using rte_flow API" table, Direct Rules for the E-Switch will be supported starting from OFED 4.6.2 or
RDMA-CORE V24.
The support for NIC Direct Rules will begin in OFED 4.6.2.
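
For example (a minimal sketch, not part of this patch; the PCI address and
the rest of the testpmd command line are placeholders), the devarg can be
passed per device on the EAL command line:

    testpmd -w 0000:05:00.0,dv_flow_en=1 -- -i

The new dv_esw_en parameter documented in this patch controls the E-Switch
Direct Rules path and is enabled by default when supported.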

At some point someone mentioned that the performance of using a second group would be much faster than updating rules in the main group. I think having that somewhere in the doc, or in an internal Mellanox guide, would be useful.

[Ori] You are correct: in order to get the fast insertion rate you need to use a group larger than 0. (I will update the doc accordingly.)
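
For illustration only (a hypothetical testpmd sketch; port 0, the match
pattern and the drop action are placeholders), jumping from group 0 into
group 1 and inserting the actual rules in group 1 could look like:

    testpmd> flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
    testpmd> flow create 0 ingress group 1 pattern eth / ipv4 src is 10.0.0.1 / end actions drop / end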


Last point: is the requirement for Direct Rules really OFED 4.6.2? It is still not publicly available.

[Ori] OFED 4.6.2 is scheduled for release at the end of the month.

Thanks,

Tom

  

Patch

diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index f6d7a16..5c6bbde 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -253,7 +253,7 @@  thanks to these environment variables:
 Mellanox OFED as a fallback
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- `Mellanox OFED`_ version: **4.4, 4.5**.
+- `Mellanox OFED`_ version: **4.4, 4.5, 4.6**.
 - firmware version: **2.42.5000** and above.
 
 .. _`Mellanox OFED`: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 325e9f6..9540657 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -7,7 +7,7 @@  MLX5 poll mode driver
 
 The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
 for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** , **Mellanox
-ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox Bluefield** families
+ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox BlueField** families
 of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
 in SR-IOV context.
 
@@ -62,8 +62,8 @@  Features
 - RX VLAN stripping.
 - TX VLAN insertion.
 - RX CRC stripping configuration.
-- Promiscuous mode.
-- Multicast promiscuous mode.
+- Promiscuous mode on PF and VF.
+- Multicast promiscuous mode on PF and VF.
 - Hardware checksum offloads.
 - Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
   RTE_ETH_FDIR_REJECT).
@@ -78,6 +78,10 @@  Features
 - Rx HW timestamp.
 - Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP.
 - Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
+- NIC HW offloads: encapsulation (VXLAN, GRE, MPLSoUDP, MPLSoGRE), NAT, routing, TTL
+  increment/decrement, count, drop, mark. For details please see :ref:`Supported hardware offloads using rte_flow API`.
+- Flow insertion rate of more than a million flows per second, when using Direct Rules.
+- Support for multiple rte_flow groups.
 
 Limitations
 -----------
@@ -112,8 +116,6 @@  Limitations
   is set to multi-packet send or Enhanced multi-packet send. Otherwise it must have
   less than 50 segments.
 
-- Count action for RTE flow is **only supported in Mellanox OFED**.
-
 - Flows with a VXLAN Network Identifier equal (or ends to be equal)
   to 0 are not supported.
 
@@ -147,30 +149,16 @@  Limitations
   To receive IPv6 Multicast messages on VM, explicitly set the relevant
   MAC address using rte_eth_dev_mac_addr_add() API.
 
-- E-Switch VXLAN tunnel is not supported together with outer VLAN.
-
-- E-Switch Flows with VNI pattern must include the VXLAN decapsulation action.
-
-- E-Switch VXLAN decapsulation Flow:
+- E-Switch decapsulation Flow:
 
   - can be applied to PF port only.
   - must specify VF port action (packet redirection from PF to VF).
-  - must specify tunnel outer UDP local (destination) port, wildcards not allowed.
-  - must specify tunnel outer VNI, wildcards not allowed.
-  - must specify tunnel outer local (destination)  IPv4 or IPv6 address, wildcards not allowed.
-  - optionally may specify tunnel outer remote (source) IPv4 or IPv6, wildcards or group IPs allowed.
   - optionally may specify tunnel inner source and destination MAC addresses.
 
-- E-Switch VXLAN encapsulation Flow:
+- E-Switch encapsulation Flow:
 
   - can be applied to VF ports only.
   - must specify PF port action (packet redirection from VF to PF).
-  - must specify the VXLAN item with tunnel outer parameters.
-  - must specify the tunnel outer VNI in the VXLAN item.
-  - must specify the tunnel outer remote (destination) UDP port in the VXLAN item.
-  - must specify the tunnel outer local (source) IPv4 or IPv6 in the , this address will locally (with scope link) assigned to the outer network interface, wildcards not allowed.
-  - must specify the tunnel outer remote (destination) IPv4 or IPv6 in the VXLAN item, group IPs allowed.
-  - must specify the tunnel outer destination MAC address in the VXLAN item, this address will be used to create neigh rule.
 
 Statistics
 ----------
@@ -227,7 +215,7 @@  These options can be modified in the ``.config`` file.
 
 .. note::
 
-   For Bluefield, target should be set to ``arm64-bluefield-linux-gcc``. This
+   For BlueField, target should be set to ``arm64-bluefield-linux-gcc``. This
    will enable ``CONFIG_RTE_LIBRTE_MLX5_PMD`` and set ``RTE_CACHE_LINE_SIZE`` to
    64. Default armv8a configuration of make build and meson build set it to 128
    then brings performance degradation.
@@ -277,8 +265,8 @@  Run-time configuration
 
   Supported on:
 
-  - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
-  - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+  - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
+  - POWER9 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
 
 - ``rxq_cqe_pad_en`` parameter [int]
 
@@ -296,7 +284,7 @@  Run-time configuration
 
   Supported on:
 
-  - CPU having 128B cacheline with ConnectX-5 and Bluefield.
+  - CPU having 128B cacheline with ConnectX-5 and BlueField.
 
 - ``rxq_pkt_pad_en`` parameter [int]
 
@@ -308,8 +296,8 @@  Run-time configuration
 
   Supported on:
 
-  - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
-  - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+  - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
+  - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
 
 - ``mprq_en`` parameter [int]
 
@@ -375,13 +363,13 @@  Run-time configuration
 
   This option should be used in combination with ``txq_inline`` above.
 
-  On ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield without
+  On ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField without
   Enhanced MPW:
 
         - Disabled by default.
         - In case ``txq_inline`` is set recommendation is 4.
 
-  On ConnectX-5, ConnectX-6 and Bluefield with Enhanced MPW:
+  On ConnectX-5, ConnectX-6 and BlueField with Enhanced MPW:
 
         - Set to 8 by default.
 
@@ -395,14 +383,14 @@  Run-time configuration
         - Set to 8 by default on ARMv8.
         - Set to 4 by default otherwise.
 
-  On Bluefield
+  On BlueField
 
         - Set to 16 by default.
 
 - ``txq_mpw_en`` parameter [int]
 
   A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
-  enhanced multi-packet send (Enhanced MPS) for ConnectX-5, ConnectX-6 and Bluefield.
+  enhanced multi-packet send (Enhanced MPS) for ConnectX-5, ConnectX-6 and BlueField.
   MPS allows the TX burst function to pack up multiple packets in a
   single descriptor session in order to save PCI bandwidth and improve
   performance at the cost of a slightly higher CPU usage. When
@@ -417,13 +405,13 @@  Run-time configuration
   DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
   When those offloads are requested the MPS send function will not be used.
 
-  It is currently only supported on the ConnectX-4 Lx, ConnectX-5, ConnectX-6 and Bluefield
+  It is currently only supported on the ConnectX-4 Lx, ConnectX-5, ConnectX-6 and BlueField
   families of adapters.
   On ConnectX-4 Lx the MPW is considered un-secure hence disabled by default.
   Users which enable the MPW should be aware that application which provides incorrect
   mbuf descriptors in the Tx burst can lead to serious errors in the host including, on some cases,
   NIC to get stuck.
-  On ConnectX-5, ConnectX-6 and Bluefield the MPW is secure and enabled by default.
+  On ConnectX-5, ConnectX-6 and BlueField the MPW is secure and enabled by default.
 
 - ``txq_mpw_hdr_dseg_en`` parameter [int]
 
@@ -443,14 +431,14 @@  Run-time configuration
 
 - ``tx_vec_en`` parameter [int]
 
-  A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and Bluefield NICs if the number of
+  A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and BlueField NICs if the number of
   global Tx queues on the port is less than ``txqs_max_vec``.
 
   This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
   DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
   When those offloads are requested the MPS send function will not be used.
 
-  Enabled by default on ConnectX-5, ConnectX-6 and Bluefield.
+  Enabled by default on ConnectX-5, ConnectX-6 and BlueField.
 
 - ``rx_vec_en`` parameter [int]
 
@@ -480,10 +468,15 @@  Run-time configuration
 
   A nonzero value enables the DV flow steering assuming it is supported
   by the driver.
-  The DV flow steering is not supported on switchdev mode.
 
   Disabled by default.
 
+- ``dv_esw_en`` parameter [int]
+
+  A nonzero value enables E-Switch using Direct Rules.
+
+  Enabled by default if supported.
+
 - ``mr_ext_memseg_en`` parameter [int]
 
   A nonzero value enables extending memseg when registering DMA memory. If
@@ -545,7 +538,7 @@  DPDK and must be installed separately:
 - **libmlx5**
 
   Low-level user space driver library for Mellanox
-  ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices, it is automatically loaded
+  ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices, it is automatically loaded
   by libibverbs.
 
   This library basically implements send/receive calls to the hardware
@@ -567,7 +560,7 @@  DPDK and must be installed separately:
   their devices:
 
   - mlx5_core: hardware driver managing Mellanox
-    ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices and related Ethernet kernel
+    ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel
     network devices.
   - mlx5_ib: InifiniBand device driver.
   - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
@@ -575,7 +568,7 @@  DPDK and must be installed separately:
 - **Firmware update**
 
   Mellanox OFED/EN releases include firmware updates for
-  ConnectX-4/ConnectX-5/ConnectX-6/Bluefield adapters.
+  ConnectX-4/ConnectX-5/ConnectX-6/BlueField adapters.
 
   Because each release provides new features, these updates must be applied to
   match the kernel modules and libraries they come with.
@@ -622,7 +615,8 @@  thanks to these environment variables:
 Mellanox OFED/EN
 ^^^^^^^^^^^^^^^^
 
-- Mellanox OFED version: **4.4, 4.5** / Mellanox EN version: **4.5**
+- Mellanox OFED version: **4.5, 4.6** /
+  Mellanox EN version: **4.5, 4.6**
 - firmware version:
 
   - ConnectX-4: **12.21.1000** and above.
@@ -630,7 +624,7 @@  Mellanox OFED/EN
   - ConnectX-5: **16.21.1000** and above.
   - ConnectX-5 Ex: **16.21.1000** and above.
   - ConnectX-6: **20.99.5374** and above.
-  - Bluefield: **18.99.3950** and above.
+  - BlueField: **18.25.1010** and above.
 
 While these libraries and kernel modules are available on OpenFabrics
 Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
@@ -766,6 +760,56 @@  Quick Start Guide on OFED/EN
 6. Compile DPDK and you are ready to go. See instructions on
    :ref:`Development Kit Build System <Development_Kit_Build_System>`
 
+Enable switchdev mode
+---------------------
+
+Switchdev mode is an E-Switch mode that binds a representor to a VF.
+A representor is a DPDK port connected to a VF in such a way that,
+assuming there are no offload flows, each packet sent from the VF is
+received by the corresponding representor, while each packet sent to
+the representor is received by the VF.
+This is very useful in SR-IOV mode, where the first packet sent by a VF
+is received by the DPDK application, which decides whether this flow
+should be offloaded to the E-Switch. After the flow is offloaded,
+packets from the VF that match the flow are no longer received by the
+DPDK application.
+
+1. Enable SRIOV mode:
+
+  .. code-block:: console
+
+        mlxconfig -d <mst device> set SRIOV_EN=true
+
+2. Configure the max number of VFs:
+
+  .. code-block:: console
+
+        mlxconfig -d <mst device> set NUM_OF_VFS=<num of vfs>
+
+3. Reset the FW:
+
+  .. code-block:: console
+
+        mlxfwreset -d <mst device> reset
+
+4. Configure the actual number of VFs:
+
+  .. code-block:: console
+
+        echo <num of vfs> > /sys/class/net/<net device>/device/sriov_numvfs
+
+5. Unbind the device (it can be rebound after switchdev mode is enabled):
+
+  .. code-block:: console
+
+        echo -n "<device pci address>" > /sys/bus/pci/drivers/mlx5_core/unbind
+
+6. Enable switchdev mode:
+
+  .. code-block:: console
+
+        echo switchdev > /sys/class/net/<net device>/compat/devlink/mode
+
 Performance tuning
 ------------------
 
@@ -842,6 +886,62 @@  Performance tuning
    - Configure per-lcore cache when creating Mempools for packet buffer.
    - Refrain from dynamically allocating/freeing memory in run-time.
 
+Supported hardware offloads using rte_flow API
+----------------------------------------------
+
+.. _Supported hardware offloads using rte_flow API:
+
+.. table:: Supported hardware offloads using rte_flow API
+
+   +-----------------------+-----------------+-----------------+
+   | Offload               | E-Switch        | NIC             |
+   |                       |                 |                 |
+   +=======================+=================+=================+
+   | Count                 | | DPDK 19.05    | | DPDK 19.02    |
+   |                       | | OFED 4.6      | | OFED 4.6      |
+   |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+   |                       | | ConnectX-5    | | ConnectX-5    |
+   +-----------------------+-----------------+-----------------+
+   | Drop / Queue / RSS    | | DPDK 19.05    | | DPDK 18.11    |
+   |                       | | OFED 4.6      | | OFED 4.5      |
+   |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+   |                       | | ConnectX-5    | | ConnectX-4    |
+   +-----------------------+-----------------+-----------------+
+   | Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
+   | (VXLAN / NVGRE / RAW) | | OFED 4.6.2    | | OFED 4.6      |
+   |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+   |                       | | ConnectX-5    | | ConnectX-5    |
+   +-----------------------+-----------------+-----------------+
+   | Header rewrite        | | DPDK 19.05    | | DPDK 19.02    |
+   | (set_ipv4_src /       | | OFED 4.6.2    | | OFED 4.6.2    |
+   | set_ipv4_dst /        | | RDMA-CORE V24 | | RDMA-CORE V23 |
+   | set_ipv6_src /        | | ConnectX-5    | | ConnectX-5    |
+   | set_ipv6_dst /        |                 |                 |
+   | set_tp_src /          |                 |                 |
+   | set_tp_dst /          |                 |                 |
+   | dec_ttl /             |                 |                 |
+   | set_ttl /             |                 |                 |
+   | set_mac_src /         |                 |                 |
+   | set_mac_dst)          |                 |                 |
+   +-----------------------+-----------------+-----------------+
+   | Jump                  | | DPDK 19.05    | | DPDK 19.02    |
+   |                       | | OFED 4.6.2    | | OFED 4.6.2    |
+   |                       | | RDMA-CORE V24 | | N/A           |
+   |                       | | ConnectX-5    | | ConnectX-5    |
+   +-----------------------+-----------------+-----------------+
+   | Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
+   |                       | | OFED 4.6      | | OFED 4.5      |
+   |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+   |                       | | ConnectX-5    | | ConnectX-4    |
+   +-----------------------+-----------------+-----------------+
+   | Port ID               | | DPDK 19.05    | | N/A           |
+   |                       | | OFED 4.6      | | N/A           |
+   |                       | | RDMA-CORE V24 | | N/A           |
+   |                       | | ConnectX-5    | | N/A           |
+   +-----------------------+-----------------+-----------------+
+
+* Minimum version for each component and NIC.
+
 Notes for testpmd
 -----------------
 
@@ -863,7 +963,7 @@  Usage example
 -------------
 
 This section demonstrates how to launch **testpmd** with Mellanox
-ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices managed by librte_pmd_mlx5.
+ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
 
 #. Load the kernel modules:
 
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 4e0eed5..ce90f65 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -148,10 +148,13 @@  New Features
    * Added support for multiport InfiniBand device.
    * Added control of excessive memory pinning by kernel.
    * Added support of DMA memory registration by secondary process.
-   * Added Direct Rule support in Direct Verbs flow driver.
    * Added support of per-process device registers, reserving identical VA space
      is not needed anymore.
-   * Added E-Switch support in Direct Verbs flow driver.
+   * Added support for jump action for both E-Switch and NIC.
+   * Added support for multiple rte_flow groups in NIC steering.
+   * Flow engine re-design to support large scale deployments. This includes:
+      * Support for millions of offloaded flow rules.
+      * Fast flow insertion and deletion, up to 1M flow updates per second.
 
 * **Renamed avf to iavf.**