[PATCH v5 1/2] ethdev: introduce the Tx map API for aggregated ports

Thomas Monjalon thomas at monjalon.net
Thu Feb 16 18:42:00 CET 2023


For the title, I suggest
ethdev: add Tx queue mapping of aggregated ports

14/02/2023 16:48, Jiawei Wang:
> When multiple ports are aggregated into a single DPDK port,
> (example: Linux bonding, DPDK bonding, failsafe, etc.),
> we want to know which port is used for Tx via a given queue.
> 
> This patch introduces the new ethdev API
> rte_eth_dev_map_aggr_tx_affinity(), which maps a Tx queue
> to an aggregated port of the DPDK port (specified with port_id).
> The affinity is the number of the aggregated port.
> Value 0 means no affinity: traffic can be routed to any
> aggregated port. This is the current default behavior.
> 
> The maximum affinity value is given by rte_eth_dev_count_aggr_ports().
> 
> Add trace points for the ethdev rte_eth_dev_count_aggr_ports()
> and rte_eth_dev_map_aggr_tx_affinity() functions.
> 
> Add the testpmd command line:
> testpmd> port config (port_id) txq (queue_id) affinity (value)
> 
> For example, two physical ports are connected to
> a single DPDK port (port id 0); affinity 1 stands for
> the first physical port and affinity 2 for the second
> physical port.
> Use the commands below to configure the Tx affinity per Tx queue:
>         port config 0 txq 0 affinity 1
>         port config 0 txq 1 affinity 1
>         port config 0 txq 2 affinity 2
>         port config 0 txq 3 affinity 2
> 
> These commands configure Tx queue index 0 and Tx queue index 1 with
> affinity 1: packets sent with Tx queue 0 or Tx queue 1
> will go out via the first physical port, and similarly via
> the second physical port when sending packets with Tx queue 2
> or Tx queue 3.
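> 
> From an application, the testpmd commands above could be reproduced
> with the two new functions. A minimal sketch, assuming the aggregated
> DPDK port is port 0 and the device is not yet started (error handling
> kept minimal; not part of this patch):
> 
> ```c
> #include <stdio.h>
> #include <rte_ethdev.h>
> 
> static int
> map_tx_queues_to_aggr_ports(uint16_t port_id)
> {
> 	/* Number of ports aggregated under this DPDK port (0 if none). */
> 	int aggr_ports = rte_eth_dev_count_aggr_ports(port_id);
> 	if (aggr_ports <= 0)
> 		return aggr_ports; /* nothing to map, or error */
> 
> 	/* Queues 0-1 -> first aggregated port, queues 2-3 -> second,
> 	 * matching the testpmd example in this commit message. */
> 	uint8_t affinity[] = {1, 1, 2, 2};
> 	for (uint16_t q = 0; q < 4; q++) {
> 		int ret = rte_eth_dev_map_aggr_tx_affinity(port_id, q,
> 							   affinity[q]);
> 		if (ret != 0) {
> 			printf("txq %u: mapping failed (%d)\n", q, ret);
> 			return ret;
> 		}
> 	}
> 	return 0;
> }
> ```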
> 
> Signed-off-by: Jiawei Wang <jiaweiw at nvidia.com>

Acked-by: Thomas Monjalon <thomas at monjalon.net>
