[dpdk-dev] [PATCH 3/4] ethdev: introduce Tx queue offloads API

Shahaf Shuler shahafs at mellanox.com
Mon Sep 4 09:12:17 CEST 2017


Introduce a new API to configure Tx offloads.

The new API re-uses the existing DEV_TX_OFFLOAD_* flags
to enable the different offloads. This eases the process
of adding new Tx offloads, as no ABI breakage is involved.
In addition, Tx offloads are disabled by default and enabled
according to application needs. This greatly simplifies PMD
management of the different offloads.
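
For illustration, a queue that only needs IPv4 checksum and TSO would
request exactly those flags through the new offloads field. A minimal
sketch (the descriptor count is arbitrary, the function name is
invented, and the zeroed thresholds rely on the documented behavior
that a zero value selects the driver defaults):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    setup_txq_with_offloads(uint8_t port_id, uint16_t queue_id)
    {
            struct rte_eth_txq_conf txq_conf;

            memset(&txq_conf, 0, sizeof(txq_conf));
            /* Ignore the legacy txq_flags and use the offloads field. */
            txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
            /* Enable only the offloads this queue actually needs. */
            txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
                                DEV_TX_OFFLOAD_TCP_TSO;

            return rte_eth_tx_queue_setup(port_id, queue_id, 512,
                                          rte_eth_dev_socket_id(port_id),
                                          &txq_conf);
    }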

The new API does not have an equivalent for the following
benchmark-specific flags:

	- ETH_TXQ_FLAGS_NOREFCOUNT
	- ETH_TXQ_FLAGS_NOMULTMEMP

The Tx queue offload API can be used only with devices which advertise
the RTE_ETH_DEV_TXQ_OFFLOAD capability.
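
A minimal capability check might look as follows; this is a sketch
which assumes the flag is read from the device data's dev_flags field,
next to the other RTE_ETH_DEV_* flags it is defined with:

    #include <rte_ethdev.h>

    /* Return non-zero when the port accepts the new Tx queue offloads API. */
    static int
    txq_offloads_supported(uint8_t port_id)
    {
            return (rte_eth_devices[port_id].data->dev_flags &
                    RTE_ETH_DEV_TXQ_OFFLOAD) != 0;
    }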

PMDs which move to the new API but support Tx offloads only per port
should return an error (-ENOTSUP) from the queue setup in case of a
mixed configuration.
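
A hypothetical sketch of such a check inside a PMD's queue setup
callback (struct pmd_priv and its port_tx_offloads field are invented
for illustration; the exact comparison a driver performs may differ):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Invented per-port private data, for illustration only. */
    struct pmd_priv {
            uint64_t port_tx_offloads; /* offloads applied at port level */
    };

    static int
    pmd_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
                       uint16_t nb_desc, unsigned int socket_id,
                       const struct rte_eth_txq_conf *conf)
    {
            struct pmd_priv *priv = dev->data->dev_private;

            (void)queue_id;
            (void)nb_desc;
            (void)socket_id;

            /* Offloads are only configurable per port: refuse a queue
             * that asks for a different set than the port-wide one.
             */
            if ((conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) &&
                conf->offloads != priv->port_tx_offloads)
                    return -ENOTSUP;

            /* ... regular descriptor ring setup would follow ... */
            return 0;
    }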

The old Tx offloads API is kept for the time being, in order to enable
a smooth transition of PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs at mellanox.com>
---
 doc/guides/nics/features.rst  |  8 ++++++++
 lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f2c8497c2..bb25a1cee 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,6 +131,7 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
+* **[uses]    rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
@@ -220,6 +221,7 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
@@ -510,6 +512,7 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +529,7 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
@@ -541,6 +545,7 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -576,6 +582,7 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 90934418d..1293b9922 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -719,6 +719,14 @@ struct rte_eth_rxq_conf {
 #define ETH_TXQ_FLAGS_NOXSUMS \
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+	/**
+	 * When set, the txq_flags field should be ignored and the Tx
+	 * offloads should be taken from the offloads field of
+	 * struct rte_eth_txq_conf instead.
+	 * This flag is temporary, until the rte_eth_txq_conf.txq_flags
+	 * API is deprecated.
+	 */
 /**
  * A structure used to configure a TX ring of an Ethernet port.
  */
@@ -730,6 +738,12 @@ struct rte_eth_txq_conf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Enable Tx offloads using DEV_TX_OFFLOAD_* flags.
+	 * Supported only for devices which advertise the
+	 * RTE_ETH_DEV_TXQ_OFFLOAD capability.
+	 */
 };
 
 /**
@@ -954,6 +968,8 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< Multi-segment send is supported. */
 
 struct rte_pci_device;
 
@@ -1750,6 +1766,8 @@ struct rte_eth_dev_data {
 #define RTE_ETH_DEV_INTR_RMV     0x0008
 /** Device supports the rte_eth_rxq_conf offloads API */
 #define RTE_ETH_DEV_RXQ_OFFLOAD 0x0010
+/** Device supports the rte_eth_txq_conf offloads API */
+#define RTE_ETH_DEV_TXQ_OFFLOAD 0x0020
 
 /**
  * @internal
@@ -2011,12 +2029,18 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
  * @return
  *   - 0: Success, the transmit queue is correctly set up.
  *   - -ENOMEM: Unable to allocate the transmit ring descriptors.
+ *   - -ENOTSUP: The device does not support the queue configuration.
  */
 int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-- 
2.12.0


