[PATCH v3 05/33] net/ena: fix fast mbuf free

Brandes, Shai shaibran at amazon.com
Sun Mar 10 15:58:56 CET 2024



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit at amd.com>
> Sent: Friday, March 8, 2024 7:23 PM
> To: Brandes, Shai <shaibran at amazon.com>
> Cc: dev at dpdk.org; stable at dpdk.org
> Subject: RE: [EXTERNAL] [PATCH v3 05/33] net/ena: fix fast mbuf free
> 
> CAUTION: This email originated from outside of the organization. Do not click
> links or open attachments unless you can confirm the sender and know the
> content is safe.
> 
> 
> 
> On 3/6/2024 12:24 PM, shaibran at amazon.com wrote:
> > From: Shai Brandes <shaibran at amazon.com>
> >
> > In case the application enables fast mbuf release optimization, the
> > driver releases 256 TX mbufs in bulk upon reaching the TX free
> > threshold.
> > The existing implementation uses rte_mempool_put_bulk for bulk
> > freeing Tx mbufs, which supports direct mbufs only.
> > In case the application transmits indirect mbufs, the driver must
> > also decrement the mbuf reference count and unlink the mbuf segment.
> > For such cases, the driver should employ rte_pktmbuf_free_bulk.
> >
> >
> 
> Ack.
> 
> I wonder if you observe any performance impact from this change, just for
> reference if we encounter similar decision in the future.
[Brandes, Shai] We did not see any performance impact in our testing.
The issue was discovered by a new latency application we crafted that uses the bulk free option: it transmitted packets one by one, each copied from a common buffer, and revealed that packets were going missing.


