Can I use rte_pktmbuf_chain to chain multiple mbufs for calling only a single rte_eth_tx_burst API
Stephen Hemminger
stephen at networkplumber.org
Wed Feb 9 23:46:17 CET 2022
On Wed, 9 Feb 2022 22:18:24 +0000
Ferruh Yigit <ferruh.yigit at intel.com> wrote:
> On 2/9/2022 6:03 PM, Ansar Kannankattil wrote:
> > Hi
> > My intention is to decrease the number of rte_eth_tx_burst calls. I know that passing nb_pkts > 1 results in sending multiple packets in a single call.
> > But will providing nb_pkts=1 and posting a head mbuf with a number of mbufs linked to it result in sending multiple packets?
>
> If the driver supports it, you can do it.
> Driver should expose this capability via RTE_ETH_TX_OFFLOAD_MULTI_SEGS flag,
> in 'dev_info->tx_offload_capa'.
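>
> A hedged sketch of that check (assuming an already-configured port; `tx_one_multiseg` is a made-up helper name, the DPDK calls themselves are the standard ethdev/mbuf APIs). Note that chaining still produces ONE multi-segment packet, not several packets:
>
> ```c
> #include <rte_ethdev.h>
> #include <rte_mbuf.h>
>
> /* Sketch: verify multi-segment TX support, chain two mbufs into
>  * ONE packet, and transmit it as a single burst entry. */
> static int
> tx_one_multiseg(uint16_t port_id, uint16_t queue_id,
>                 struct rte_mbuf *head, struct rte_mbuf *tail)
> {
>     struct rte_eth_dev_info dev_info;
>
>     if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
>         return -1;
>     if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
>         return -1;          /* driver cannot TX multi-segment packets */
>
>     /* rte_pktmbuf_chain() appends tail's segments to head:
>      * head->nb_segs and head->pkt_len are updated.
>      * The result is still ONE packet occupying one burst slot. */
>     if (rte_pktmbuf_chain(head, tail) != 0)
>         return -1;          /* chain would exceed the segment limit */
>
>     /* nb_pkts = 1: one (multi-segment) packet */
>     return rte_eth_tx_burst(port_id, queue_id, &head, 1) == 1 ? 0 : -1;
> }
> ```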
>
> > If not, what is the use case of linking multiple mbufs together?
>
> It is also used in the Rx path (again, if the driver supports it).
I think Ansar was asking about chaining multiple packets in one call to tx burst.
The chaining in DPDK is to make a single packet out of multiple pieces (like writev).
DPDK mbufs were based on original BSD concept.
In BSD, an mbuf has two linked lists:
BSD m->m_next (== DPDK m->next) for the multiple parts of one packet, and
BSD m->m_nextpkt for the next packet in a queue.
There is no nextpkt in DPDK.