[PATCH v2] kni: fix possible alloc_q starvation when mbufs are exhausted

Matt zhouyates at gmail.com
Fri Nov 11 10:12:20 CET 2022


On Thu, Nov 10, 2022 at 12:39 AM Stephen Hemminger <
stephen at networkplumber.org> wrote:

> On Wed,  9 Nov 2022 14:04:34 +0800
> Yangchao Zhou <zhouyates at gmail.com> wrote:
>
> > In some scenarios, the mbufs returned by rte_kni_rx_burst are not
> > freed immediately, so kni_allocate_mbufs may fail silently.
> >
> > Even worse, once alloc_q is completely exhausted, kni_net_tx in
> > rte_kni.ko drops all tx packets, and kni_allocate_mbufs is never
> > called again, even after the mbufs are eventually freed.
> >
> > With this patch, we always try to allocate mbufs for alloc_q.
> >
> > There is no need to worry about alloc_q getting too many mbufs; in
> > practice, the old logic gradually fills up alloc_q anyway.
> > The cost of the extra calls to kni_allocate_mbufs should also be acceptable.
> >
> > Fixes: 3e12a98fe397 ("kni: optimize Rx burst")
> > Cc: Hemant at freescale.com
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Yangchao Zhou <zhouyates at gmail.com>
>
> Since fifo_get returning 0 (no buffers) is very common, would this
> change impact performance?
>
It does add a little cost, but no extra mbuf allocation or
deallocation is involved.

>
> If the problem is pool draining might be better to make the pool
> bigger.
>
Yes, using a larger pool can avoid this problem, but it may waste
resources, and calculating the full requirement is a challenge for
developers because it involves the mempool caching mechanism, the IP
fragment cache, the ARP cache, the NIC txq, other transit queues, etc.

Mbuf allocation failures can also occur in many NIC drivers, but there,
when allocation fails, the mbuf is not taken out of the ring, so it can
be recovered by a later retry. KNI currently has no such take-out and
recovery mechanism. We could consider implementing something similar to
the NIC drivers, but that would require more changes and add other
overhead.

