[dpdk-stable] [dpdk-dev] [PATCH v3] kni: fix mbuf allocation for alloc FIFO

Ajit Khaparde ajit.khaparde at broadcom.com
Thu Jun 24 03:55:59 CEST 2021


On Tue, Jun 22, 2021 at 5:44 AM wangyunjian <wangyunjian at huawei.com> wrote:
>
> From: Yunjian Wang <wangyunjian at huawei.com>
>
> In kni_allocate_mbufs(), we allocate mbufs for alloc_q with this code:
> allocq_free = (kni->alloc_q->read - kni->alloc_q->write - 1) \
>                 & (MAX_MBUF_BURST_NUM - 1);
> The value of allocq_free may be zero. For example:
> The ring size is 1024. After init, write = read = 0. Then we fill
> kni->alloc_q until it is full. At this point, write = 1023, read = 0.
>
> Then the kernel sends 32 packets to userspace. At this point, write
> = 1023, read = 32. Userspace receives these 32 packets and refills
> kni->alloc_q: (32 - 1023 - 1) & 31 = 0, so nothing is filled.
> ...
> Then the kernel sends 32 packets to userspace. At this point, write
> = 1023, read = 992. Userspace receives these 32 packets and refills
> kni->alloc_q: (992 - 1023 - 1) & 31 = 0, so nothing is filled.
>
> When the kernel then sends another 32 packets to userspace,
> kni->alloc_q holds only 31 mbufs, so one packet is dropped.
>
> Admittedly, this is a corner case. Normally some mbufs are filled on
> each call, but they may not be enough for the kernel to use.
>
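
To make the arithmetic concrete, here is a minimal standalone sketch
(plain C, not the actual KNI code; the constants mirror the example
above) showing how masking with (MAX_MBUF_BURST_NUM - 1) instead of
the FIFO length reports zero free slots:

/* Sketch only: reproduces the masked free-count computation and shows
 * it returning 0 even though 992 slots are actually free. */
#include <stdio.h>
#include <stdint.h>

#define MAX_MBUF_BURST_NUM 32
#define FIFO_LEN 1024            /* ring size from the example above */

int main(void)
{
	uint32_t write = 1023, read = 992;   /* state from the example */

	/* old computation: wraps modulo the burst size */
	uint32_t allocq_free_old =
		(read - write - 1) & (MAX_MBUF_BURST_NUM - 1);

	/* slots really free in the ring (one slot is kept empty) */
	uint32_t really_free = (read - write - 1) & (FIFO_LEN - 1);

	printf("old allocq_free = %u, actually free = %u\n",
	       allocq_free_old, really_free);   /* prints 0 and 992 */
	return 0;
}
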
> With this patch, we always keep kni->alloc_q full for the kernel
> to use.
>
> Fixes: 49da4e82cf94 ("kni: allocate no more mbuf than empty slots in queue")
> Cc: stable at dpdk.org
>
> Signed-off-by: Cheng Liu <liucheng11 at huawei.com>
> Signed-off-by: Yunjian Wang <wangyunjian at huawei.com>
> Acked-by: Ferruh Yigit <ferruh.yigit at intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde at broadcom.com>

> ---
> v3:
>    update patch title
> v2:
>    add fixes tag and update commit log
> ---
>  lib/kni/rte_kni.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/lib/kni/rte_kni.c b/lib/kni/rte_kni.c
> index 9dae6a8d7c..eb24b0d0ae 100644
> --- a/lib/kni/rte_kni.c
> +++ b/lib/kni/rte_kni.c
> @@ -677,8 +677,9 @@ kni_allocate_mbufs(struct rte_kni *kni)
>                 return;
>         }
>
> -       allocq_free = (kni->alloc_q->read - kni->alloc_q->write - 1)
> -                       & (MAX_MBUF_BURST_NUM - 1);
> +       allocq_free = kni_fifo_free_count(kni->alloc_q);
> +       allocq_free = (allocq_free > MAX_MBUF_BURST_NUM) ?
> +               MAX_MBUF_BURST_NUM : allocq_free;
>         for (i = 0; i < allocq_free; i++) {
>                 pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
>                 if (unlikely(pkts[i] == NULL)) {
> --
> 2.23.0
>
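
For reference, a simplified sketch of the fixed logic. It assumes
kni_fifo_free_count() amounts to (read - write - 1) & (len - 1),
consistent with the scenario described above; the struct and helper
below are stand-ins, not the exact DPDK internals:

#include <stdio.h>
#include <stdint.h>

#define MAX_MBUF_BURST_NUM 32

/* stand-in for the alloc FIFO: only the fields the sketch needs */
struct fifo {
	uint32_t write;
	uint32_t read;
	uint32_t len;
};

/* simplified free-count helper, mirroring what kni_fifo_free_count()
 * is assumed to compute (one slot is always kept empty) */
static uint32_t fifo_free_count(const struct fifo *f)
{
	return (f->read - f->write - 1) & (f->len - 1);
}

int main(void)
{
	struct fifo alloc_q = { .write = 1023, .read = 992, .len = 1024 };

	/* fixed logic: take the real free count, then cap at one burst */
	uint32_t allocq_free = fifo_free_count(&alloc_q);
	if (allocq_free > MAX_MBUF_BURST_NUM)
		allocq_free = MAX_MBUF_BURST_NUM;

	printf("mbufs to allocate this call = %u\n", allocq_free); /* 32 */
	return 0;
}

With the real free count available, the cap at MAX_MBUF_BURST_NUM only
limits how many mbufs are allocated per call; repeated calls can still
top the FIFO back up, which is what the patch intends.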

