[dpdk-dev] [PATCH] doc: announce changes to ethdev rxconf structure

Jerin Jacob jerinjacobk at gmail.com
Thu Aug 6 18:41:54 CEST 2020


On Thu, Aug 6, 2020 at 9:56 PM Stephen Hemminger
<stephen at networkplumber.org> wrote:
>
> On Thu, 6 Aug 2020 16:58:22 +0100
> Ferruh Yigit <ferruh.yigit at intel.com> wrote:
>
> > On 8/4/2020 2:32 PM, Jerin Jacob wrote:
> > > On Mon, Aug 3, 2020 at 6:36 PM Slava Ovsiienko <viacheslavo at mellanox.com> wrote:
> > >>
> > >> Hi, Jerin,
> > >>
> > >> Thanks for the comment,  please, see below.
> > >>
> > >>> -----Original Message-----
> > >>> From: Jerin Jacob <jerinjacobk at gmail.com>
> > >>> Sent: Monday, August 3, 2020 14:57
> > >>> To: Slava Ovsiienko <viacheslavo at mellanox.com>
> > >>> Cc: dpdk-dev <dev at dpdk.org>; Matan Azrad <matan at mellanox.com>;
> > >>> Raslan Darawsheh <rasland at mellanox.com>; Thomas Monjalon
> > >>> <thomas at monjalon.net>; Ferruh Yigit <ferruh.yigit at intel.com>; Stephen
> > >>> Hemminger <stephen at networkplumber.org>; Andrew Rybchenko
> > >>> <arybchenko at solarflare.com>; Ajit Khaparde
> > >>> <ajit.khaparde at broadcom.com>; Maxime Coquelin
> > >>> <maxime.coquelin at redhat.com>; Olivier Matz <olivier.matz at 6wind.com>;
> > >>> David Marchand <david.marchand at redhat.com>
> > >>> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
> > >>>
> > >>> On Mon, Aug 3, 2020 at 4:28 PM Viacheslav Ovsiienko
> > >>> <viacheslavo at mellanox.com> wrote:
> > >>>>
> > >>>> The DPDK datapath in the transmit direction is very flexible.
> > >>>> The applications can build multisegment packets and manage almost all
> > >>>> data aspects - the memory pools where segments are allocated from, the
> > >>>> segment lengths, the memory attributes like external, registered, etc.
> > >>>>
> > >>>> In the receive direction, the datapath is much less flexible: the
> > >>>> applications can only specify the memory pool to configure the
> > >>>> receive queue and nothing more. In order to extend the receive
> > >>>> datapath capabilities it is proposed to add new fields to the
> > >>>> rte_eth_rxconf structure:
> > >>>>
> > >>>> struct rte_eth_rxconf {
> > >>>>     ...
> > >>>>     uint16_t rx_split_num; /* number of segments to split */
> > >>>>     uint16_t *rx_split_len; /* array of segment lengths */
> > >>>>     struct rte_mempool **mp; /* array of segment memory pools */
> > >>>
> > >>> The pool has the packet length it has been configured for,
> > >>> so I think rx_split_len can be removed.
> > >>
> > >> Yes, that is one of the possible options - if the pointer to the array of
> > >> segment lengths is NULL, queue_setup() could use the lengths from the pool's
> > >> properties. But we are talking about packet split; in general, it should not
> > >> depend on pool properties. What if the application provides a single pool
> > >> and just wants to have the tunnel header in the first dedicated mbuf?
> > >>
> > >>>
> > >>> This feature is also available in Marvell HW, so it is not specific to one
> > >>> vendor. Maybe we could just mention the use case in the deprecation notice
> > >>> along with the tentative change in rte_eth_rxconf, and the exact details
> > >>> can be worked out at the time of implementation.
> > >>>
> > >> So, if I understand correctly, the struct changes in the commit message
> > >> should be marked as just a possible implementation?
> > >
> > > Yes.
> > >
> > > We may need to have a detailed discussion on the correct abstraction for the
> > > various HW available with this feature.
> > >
> > > On Marvell HW, we can configure TWO pools for a given eth Rx queue.
> > > One pool can be configured as a small packet pool and the other one as
> > > a large packet pool.
> > > And there is a threshold value to decide between the small and large pool.
> > > For example:
> > > - The small pool is configured with 2k
> > > - The large pool is configured with 10k
> > > - The threshold value is configured as 2k.
> > > Any packet of size <=2k will land in the small pool and others in the large pool.
> > > The use case we are targeting is to save memory space for jumbo frames.
> >
> > Out of curiosity, do you provide two different buffer addresses in the
> > descriptor and HW automatically uses one based on the size,
> > or does the driver use one of the pools based on the configuration and the
> > possible largest packet size?

The latter one.
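
(Purely as an illustration, not something the proposal mandates: the two pools
from the example above could be created with the standard mbuf pool helper as
in the sketch below. The element counts and cache sizes are arbitrary, and how
the 2k threshold is conveyed to the PMD is exactly the detail still to be
defined.)

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *small_pool, *large_pool;

    static int
    create_rx_pools(void)
    {
            /* Small-packet pool: 2k data room (plus headroom). */
            small_pool = rte_pktmbuf_pool_create("rx_small", 8192, 256, 0,
                            2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
            /* Large-packet pool: 10k data room for jumbo frames. */
            large_pool = rte_pktmbuf_pool_create("rx_large", 1024, 64, 0,
                            10240 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
            return (small_pool != NULL && large_pool != NULL) ? 0 : -1;
    }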

>
> I am all for allowing more configuration of the buffer pool,
> but I don't want that to be exposed as a hardware-specific requirement in the
> API for applications. The worst case would be if your API changes required:
>
>   if (strcmp(dev->driver_name, "marvell") == 0) {
>      // make another mempool for this driver
>   }

There are no HW-specific requirements here. If one pool is specified (like
the existing situation), HW will create a scatter-gather frame.

It is mostly useful for the application use case where it needs a single
contiguous buffer of data for processing (like crypto) and/or for improving
Rx/Tx performance by running in single-segment mode without losing too much
memory.
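
To make that a bit more concrete, here is a rough sketch against the *proposed*
rxconf fields from the top of this thread. rx_split_num/rx_split_len/mp are not
an existing DPDK API and may well change; hdr_pool, data_pool, port_id, nb_rxd
and dev_info are assumed to have been set up by the application, and presumably
the classic mb_pool argument of rte_eth_rx_queue_setup() would then be NULL:

    /* Sketch only: split every received packet into a 128-byte header
     * segment from hdr_pool and the rest into a 2k segment from data_pool. */
    uint16_t seg_len[2] = { 128, 2048 };
    struct rte_mempool *seg_pool[2] = { hdr_pool, data_pool };
    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
    int ret;

    rxconf.rx_split_num = 2;        /* number of segments per packet */
    rxconf.rx_split_len = seg_len;  /* per-segment lengths */
    rxconf.mp = seg_pool;           /* per-segment mempools */

    ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
                                 rte_eth_dev_socket_id(port_id),
                                 &rxconf, NULL);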



