[dpdk-dev] Free up completed TX buffers

Zoltan Kiss zoltan.kiss at linaro.org
Mon Jun 1 19:51:04 CEST 2015



On 01/06/15 09:50, Andriy Berestovskyy wrote:
> Hi Zoltan,
>
> On Fri, May 29, 2015 at 7:00 PM, Zoltan Kiss <zoltan.kiss at linaro.org> wrote:
>> The easy way is just to increase your buffer pool's size to make
>> sure that doesn't happen.
>
> Go for it!

I went for it; my question is whether it's a good and worthwhile idea 
to give applications a last-resort option for rainy days. It's a 
problem which probably won't occur very often, but when it does, I 
think it can take painfully long to figure out what's wrong.
>
>>   But there is no bulletproof way to calculate such
>> a number
>
> Yeah, there are many places for mbufs to stay :( I would try:
>
> Mempool size = sum(numbers of all TX descriptors)
>      + sum(rx_free_thresh)
>      + (mempool cache size * (number of lcores - 1))
>      + (burst size * number of lcores)

It heavily depends on what your application does, and I think it's easy 
to make a mistake in these calculations.
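
Just to illustrate that heuristic (every name below is made up for 
the example, the real values depend entirely on your config):

	/* Hypothetical helper sketching the heuristic above: a rough
	 * upper bound on the number of mbufs that can be in flight */
	static unsigned
	mbuf_pool_size_estimate(unsigned total_tx_descs,
				unsigned total_rx_free_thresh,
				unsigned cache_size, unsigned burst_size,
				unsigned nb_lcores)
	{
		return total_tx_descs               /* sum of all TX descriptors */
		     + total_rx_free_thresh         /* sum of rx_free_thresh */
		     + cache_size * (nb_lcores - 1) /* per-lcore mempool caches */
		     + burst_size * nb_lcores;      /* bursts held by each lcore */
	}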

>
>> I'm thinking about a foolproof way, which is exposing functions like
>> ixgbe_tx_free_bufs from the PMDs, so the application can call it as a last
>> resort to avoid deadlock.
>
> Have a look at rte_eth_dev_tx_queue_stop()/start(). Some NICs (i.e.
> ixgbe) do reset the queue and free all the mbufs.

That's a bit drastic; I just want to flush the finished TX buffers, 
even if tx_free_thresh hasn't been reached.
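
For reference, that approach would be roughly the following (port_id 
and queue_id are placeholders, error handling omitted):

	/* Heavy-handed: stop the TX queue -- on some PMDs (e.g. ixgbe)
	 * this resets the ring and frees all mbufs -- then restart it */
	rte_eth_dev_tx_queue_stop(port_id, queue_id);
	rte_eth_dev_tx_queue_start(port_id, queue_id);
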
An easy option would be to use rte_eth_tx_burst(..., nb_pkts=0); I'm 
already using this to enforce TX completion when it's really needed. 
It checks tx_free_thresh, like this:

	/* Check if the descriptor ring needs to be cleaned. */
	if ((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh)
		i40e_xmit_cleanup(txq);
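
From the application side that call looks something like this 
(assuming the PMD doesn't touch the packet array when nb_pkts is 0):

	/* Poke the PMD's TX completion path without sending anything */
	rte_eth_tx_burst(port_id, queue_id, NULL, 0);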

My idea is to extend this condition and add "|| nb_pkts == 0", so you 
can force a cleanup. But there might be others who use this same call 
for manual TX completion, and they expect that it only happens when 
tx_free_thresh is reached.
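
With that change the check above would become something like:

	/* Clean up when the threshold is crossed, or when the caller
	 * explicitly asks for it by passing nb_pkts == 0 */
	if (((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh) ||
	    nb_pkts == 0)
		i40e_xmit_cleanup(txq);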

>
> Regards,
> Andriy
>

