[EXT] Re: [dpdk-dev] [PATCH 22.02 2/2] net/cnxk: add devargs for configuring SDP channel mask

Satheesh Paul psatheesh at marvell.com
Tue Jan 11 15:29:48 CET 2022


Hi,

Please find reply inline.

Thanks,
Satheesh.

-----Original Message-----
From: Ferruh Yigit <ferruh.yigit at intel.com> 
Sent: 11 January 2022 05:26 PM
To: Satheesh Paul <psatheesh at marvell.com>; Nithin Kumar Dabilpuram <ndabilpuram at marvell.com>; Kiran Kumar Kokkilagadda <kirankumark at marvell.com>; Sunil Kumar Kori <skori at marvell.com>; Satha Koteswara Rao Kottidi <skoteshwar at marvell.com>
Cc: dev at dpdk.org; Ori Kam <orika at nvidia.com>; Andrew Rybchenko <andrew.rybchenko at oktetlabs.ru>
Subject: [EXT] Re: [dpdk-dev] [PATCH 22.02 2/2] net/cnxk: add devargs for configuring SDP channel mask

On 11/9/2021 9:42 AM, psatheesh at marvell.com wrote:
> From: Satheesh Paul <psatheesh at marvell.com>
> 
> This patch adds support to configure channel mask which will be used 
> by rte flow when adding flow rules on SDP interfaces.
> 

>Hi Satheesh,

>+ Ori & Andrew.

>What does 'SDP' stand for?
It stands for "System DMA Packet Interface". It is used when the system acts as a PCIe endpoint. For instance, an x86 machine can act as a host with an Octeon TX* board plugged in through this PCIe interface, and packets are transferred over that interface.

>And can this new devarg be provided with the flow rule? Why does it need to be a new devarg?
SDP and its channel-related information are specific to the hardware, and the rte_flow API cannot be extended to carry them. Hence, the configuration is added as a new devarg.
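
For illustration, the devarg is supplied on the EAL command line together with the device, in the same form as the example in the patch documentation below (the PCI address and values here are copied from that example):

    dpdk-testpmd -a 0002:1d:00.0,sdp_channel_mask=0x700/0xf00 -- -i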

>Can you please give a sample of the rte flow API that will be used?
This channel mask is used by the rte_flow_create() API, but it is transparent at the rte_flow_create() invocation itself. That is, the user does not supply any additional information when calling rte_flow_create(). Internally, the driver's flow creation path takes the SDP channel/mask value supplied at startup through the devarg and applies it. In Octeon TX*, each interface has a "channel identifier" number, and the rules in the packet classification hardware (MCAM) are configured to match on this channel number. With this change, we relax the exact match and allow a range of channels for the SDP interface.
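
To make this concrete, here is a minimal sketch (not taken from the patch; the port id, pattern and queue action are illustrative assumptions) showing that the application creates the rule through the standard rte_flow API without passing any channel information, while the cnxk driver programs the MCAM channel/mask from the sdp_channel_mask devarg internally:

#include <rte_flow.h>

/* Create a rule on the SDP port that steers all Ethernet traffic to
 * the given Rx queue. No channel/mask appears here; the driver adds
 * it to the MCAM entry from the sdp_channel_mask devarg. */
static struct rte_flow *
create_sdp_rule(uint16_t port_id, uint16_t queue_idx)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = queue_idx };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}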

>Thanks,
>ferruh


> Signed-off-by: Satheesh Paul <psatheesh at marvell.com>
> ---
>   doc/guides/nics/cnxk.rst               | 21 ++++++++++++++
>   drivers/net/cnxk/cnxk_ethdev_devargs.c | 40 ++++++++++++++++++++++++--
>   2 files changed, 59 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst index 
> 837ffc02b4..470e01b811 100644
> --- a/doc/guides/nics/cnxk.rst
> +++ b/doc/guides/nics/cnxk.rst
> @@ -276,6 +276,27 @@ Runtime Config Options
>      set with this custom mask, inbound encrypted traffic from all ports with
>      matching channel number pattern will be directed to the inline IPSec device.
>   
> +- ``SDP device channel and mask`` (default ``none``)
> +   Set channel and channel mask configuration for the SDP device. This
> +   will be used when creating flow rules on the SDP device.
> +
> +   By default, for rules created on the SDP device, the RTE Flow API sets the
> +   channel number and mask to cover the entire SDP channel range in the channel
> +   field of the MCAM entry. This behaviour can be modified using the
> +   ``sdp_channel_mask`` ``devargs`` parameter.
> +
> +   For example::
> +
> +      -a 0002:1d:00.0,sdp_channel_mask=0x700/0xf00
> +
> +   With the above configuration, RTE Flow rules API will set the channel
> +   and channel mask as 0x700 and 0xF00 in the MCAM entries of the  flow rules
> +   created on the SDP device. This option needs to be used when more than one
> +   SDP interface is in use and RTE Flow rules created need to distinguish
> +   between traffic from each SDP interface. The channel and mask combination
> +   specified should match all the channels(or rings) configured on the SDP
> +   interface.
> +
>   .. note::
>   

<...>

