[dpdk-dev] [PATCH v2] doc: announce changes to ethdev rxconf structure

Thomas Monjalon thomas at monjalon.net
Thu Aug 6 14:39:11 CEST 2020


05/08/2020 13:14, Andrew Rybchenko:
> On 8/5/20 11:49 AM, Viacheslav Ovsiienko wrote:
> > The DPDK datapath in the transmit direction is very flexible.
> > Applications can build multi-segment packets and manage
> > almost all data aspects: the memory pools the segments
> > are allocated from, the segment lengths, and memory
> > attributes such as external or registered.
> > 
> > In the receive direction, the datapath is much less flexible:
> > applications can only specify the memory pool used to configure
> > the receive queue, and nothing more. A received packet can only
> > be pushed into a chain of mbufs of the same data buffer size,
> > all allocated from the same pool. In order to extend the
> > receive datapath buffer description, it is proposed to add
> > the following new fields to the rte_eth_rxconf structure:
> > 
> > struct rte_eth_rxconf {
> >     ...
> >     uint16_t rx_split_num; /* number of segments to split */
> >     uint16_t *rx_split_len; /* array of segment lengths */
> >     struct rte_mempool **mp; /* array of segment memory pools */
> >     ...
> > };
> > 
> > A non-zero rx_split_num configures the receive queue to split
> > each ingress packet into multiple segments placed in mbufs
> > allocated from the specified memory pools, according to the
> > specified lengths. A zero rx_split_num preserves backward
> > compatibility: the queue is configured in the regular way,
> > with one or more mbufs of the same data buffer length
> > allocated from a single memory pool.
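> > 
> > A minimal usage sketch, assuming the fields are adopted exactly
> > as declared above. The pool names, lengths, and descriptor count
> > are illustrative only, not part of the proposal:
> > 
> > #include <string.h>
> > #include <rte_ethdev.h>
> > #include <rte_mempool.h>
> > 
> > /* Split each packet into a 128-byte header segment and a payload
> >  * segment of up to 2048 bytes, drawn from two different pools. */
> > static int
> > setup_split_rxq(uint16_t port_id, uint16_t queue_id,
> >                 unsigned int socket_id,
> >                 struct rte_mempool *hdr_pool,
> >                 struct rte_mempool *data_pool)
> > {
> >         uint16_t seg_len[2] = { 128, 2048 };
> >         struct rte_mempool *seg_mp[2] = { hdr_pool, data_pool };
> >         struct rte_eth_rxconf rxconf;
> > 
> >         memset(&rxconf, 0, sizeof(rxconf));
> >         rxconf.rx_split_num = 2;        /* non-zero enables the split */
> >         rxconf.rx_split_len = seg_len;  /* per-segment buffer lengths */
> >         rxconf.mp = seg_mp;             /* per-segment memory pools */
> > 
> >         /* With rx_split_num == 0 this call behaves as today: all
> >          * buffers come from the single mempool argument. How that
> >          * argument interacts with a non-zero split is left to the
> >          * detailed design. */
> >         return rte_eth_rx_queue_setup(port_id, queue_id, 512,
> >                                       socket_id, &rxconf, hdr_pool);
> > }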
> > 
> > The new approach would allow splitting ingress packets into
> > multiple parts placed in memory with different attributes.
> > For example, the packet headers can be placed in the embedded
> > data buffers within mbufs, while the application data goes
> > into external buffers attached to mbufs allocated from
> > different memory pools. The memory attributes of the split
> > parts may differ as well; for example, the application data
> > may be placed in external memory located on a dedicated
> > physical device, say a GPU or an NVMe drive. This would
> > improve the flexibility of the DPDK receive datapath while
> > preserving compatibility with the existing API.
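> > 
> > As a sketch of that use case, the payload pool from the example
> > above could be backed by external memory using the existing
> > pktmbuf external-buffer pool API; dev_mem, dev_iova, and the
> > sizes are hypothetical placeholders for device-mapped memory:
> > 
> > #include <rte_mbuf.h>
> > 
> > /* dev_mem/dev_iova stand for device memory obtained and
> >  * registered elsewhere (e.g. via rte_extmem_register()). */
> > static struct rte_mempool *
> > create_ext_data_pool(void *dev_mem, rte_iova_t dev_iova,
> >                      size_t ext_size, int socket_id)
> > {
> >         struct rte_pktmbuf_extmem ext_mem = {
> >                 .buf_ptr  = dev_mem,   /* external buffer area */
> >                 .buf_iova = dev_iova,  /* its IOVA */
> >                 .buf_len  = ext_size,  /* total area size */
> >                 .elt_size = 2048,      /* per-buffer element size */
> >         };
> > 
> >         return rte_pktmbuf_pool_create_extbuf(
> >                 "ext_data_pool", 8192, 256, 0, 2048, socket_id,
> >                 &ext_mem, 1);
> > }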
> > 
> > The proposed extended receive buffer description might be
> > adopted by other vendors to support similar features; this
> > is a subject for further discussion.
> > 
> > Signed-off-by: Viacheslav Ovsiienko <viacheslavo at mellanox.com>
> > Acked-by: Jerin Jacob <jerinjacobk at gmail.com>
> 
> I'm OK with the idea in general and we'll work on details
> in the next release cycle.
> 
> Acked-by: Andrew Rybchenko <arybchenko at solarflare.com>

I agree we need to be more flexible with the mempools in Rx.

Acked-by: Thomas Monjalon <thomas at monjalon.net>



