[dpdk-dev] vhost-net stops sending to virtio pmd -- already fixed?

Xie, Huawei huawei.xie at intel.com
Fri Sep 18 07:58:30 CEST 2015


On 9/17/2015 1:25 AM, Kyle Larose wrote:
> Hi Huawei,
>
>> Kyle:
>> Could you tell us how you produced this issue: a very small pool size,
>> or are you using the pipeline model?
> If I understand correctly, by pipeline model you mean a model whereby
> multiple threads handle a given packet, with some sort of IPC (e.g. dpdk
> rings) between them? If so, yes: we are using such a model. And I
> suspect that this model is where we run into issues: the length of the
> pipeline, combined with the queuing between stages, can lead to us
> exhausting the mbufs, particularly when a stage's load causes queuing.
Yes, exactly.
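To make the failure mode concrete, here is a minimal sketch of such a
pipeline's rx stage (the ring name, burst size and drop-on-full policy
are illustrative, not taken from Kyle's application). Every mbuf sitting
in the ring between stages stays allocated, so a backed-up worker can
hold most of the pool while the rx stage keeps allocating:

    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    static void
    rx_stage(uint8_t port, struct rte_ring *worker_ring)
    {
            struct rte_mbuf *pkts[32];
            uint16_t nb_rx = rte_eth_rx_burst(port, 0, pkts, 32);
            unsigned sent = rte_ring_enqueue_burst(worker_ring,
                            (void **)pkts, nb_rx);

            /* everything queued in worker_ring is still an allocated
             * mbuf; if the worker stalls, these queues drain the pool */
            while (sent < nb_rx)
                    rte_pktmbuf_free(pkts[sent++]); /* ring full: drop */
    }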
>
> When I initially ran into this issue, I had a fairly large mbuf pool
> (32K entries), with 3 stages in the pipeline: rx, worker, tx. There
> were two worker threads, with a total of 6 rings. I was sending some
> fairly bursty traffic, at a high packet rate (it was bursting up to
> around 1Mpkt/s). With that configuration there was only a low chance
> of hitting the problem. However, when I decreased the mbuf pool to
> 1000 entries, it *always* happened.
>
> In summary: the pipeline model is important here, and a small pool
> size definitely exacerbates the problem.
>
> I was able to reproduce the problem using the load_balancer sample
> application, though it required some modification to get it to run
> with virtio. I'm not sure if this is because I'm using DPDK 1.8, or
> something else. Either way, I made the number of mbufs configurable
> via an environment variable, and was able to show that decreasing it
> from the default of 32K to 1K would cause the problem to always happen
> when using the same traffic as with my application. Applying the below
> patch fixed the problem.
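(For reference, that modification presumably amounts to something like
the sketch below; "LB_NB_MBUF" is an invented variable name, and the
pool parameters follow the common DPDK 1.8 idiom rather than the sample
app's exact code.)

    #include <stdlib.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    create_pool(const char *name, int socket)
    {
            unsigned nb_mbuf = 32 * 1024;            /* original default */
            const char *env = getenv("LB_NB_MBUF");  /* invented name */

            if (env != NULL && atoi(env) > 0)
                    nb_mbuf = (unsigned)atoi(env);

            return rte_mempool_create(name, nb_mbuf, MBUF_SIZE, 32,
                            sizeof(struct rte_pktmbuf_pool_private),
                            rte_pktmbuf_pool_init, NULL,
                            rte_pktmbuf_init, NULL,
                            socket, 0);
    }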
>
> The following patch seems to fix the problem for me, though I'm not
> sure it's the optimal solution. It does so by removing the early exit
> that prevents us from allocating mbufs: when there are no packets we
> simply fall through the packet processing loop, and the mbuf
> allocation loop still runs. Note that the patch is on DPDK 1.8.
Yes, it will fix your problem. We could try to do the refill each time
we enter the loop, whether or not there are packets available.
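Roughly this shape, i.e. let the allocation loop at the bottom of
virtio_recv_pkts() run unconditionally. This is only a sketch of the
idea, not the driver code: the hw_* helpers below are hypothetical
stand-ins for the driver's virtqueue_dequeue_burst_rx /
virtqueue_enqueue_recv_refill internals.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    struct rxq;  /* stand-in for the driver's receive queue */
    uint16_t hw_dequeue(struct rxq *q, struct rte_mbuf **pkts, uint16_t n);
    int hw_ring_full(const struct rxq *q);
    int hw_post_rx_buffer(struct rxq *q, struct rte_mbuf *m);
    struct rte_mempool *hw_pool(struct rxq *q);

    static uint16_t
    poll_rx(struct rxq *q, struct rte_mbuf **pkts, uint16_t n)
    {
            uint16_t got = hw_dequeue(q, pkts, n);  /* may be 0 */

            /* No early return when got == 0: an avail ring that went
             * completely empty can only recover if we refill here once
             * the application frees mbufs back to the pool. */
            while (!hw_ring_full(q)) {
                    struct rte_mbuf *m = rte_pktmbuf_alloc(hw_pool(q));
                    if (m == NULL)
                            break;  /* pool still empty; retry next poll */
                    if (hw_post_rx_buffer(q, m) != 0) {
                            rte_pktmbuf_free(m);
                            break;
                    }
            }
            return got;
    }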

> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
> index c013f97..7cadf52 100644
> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
> @@ -463,9 +463,6 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>         if (likely(num > DESC_PER_CACHELINE))
>                 num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>
> -       if (num == 0)
> -               return 0;
> -
>         num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>         PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>         for (i = 0; i < num ; i++) {
> @@ -549,9 +546,6 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>
>         rmb();
>
> -       if (nb_used == 0)
> -               return 0;
> -
>         PMD_RX_LOG(DEBUG, "used:%d\n", nb_used);
>
>         while (i < nb_used) {
>
> Thanks,
>
> Kyle
>


