[PATCH v2] examples/vhost: fix retry logic on eth rx path
Xia, Chenbo
chenbo.xia at intel.com
Tue Jun 21 15:34:51 CEST 2022
Hi Yuan,
> -----Original Message-----
> From: Wang, YuanX <yuanx.wang at intel.com>
> Sent: Friday, June 17, 2022 3:02 PM
> To: maxime.coquelin at redhat.com; Xia, Chenbo <chenbo.xia at intel.com>;
> dev at dpdk.org
> Cc: Hu, Jiayu <jiayu.hu at intel.com>; He, Xingguang <xingguang.he at intel.com>;
> Wang, YuanX <yuanx.wang at intel.com>; stable at dpdk.org; Ling, WeiX
> <weix.ling at intel.com>
> Subject: [PATCH v2] examples/vhost: fix retry logic on eth rx path
>
> drain_eth_rx() uses rte_vhost_avail_entries() to calculate
> the available entries to determine if a retry is required.
> However, this function only works with split rings; on packed
> rings it returns a wrong value, causing unnecessary retries and
> a significant performance penalty.
>
> This patch fixes that by using the difference between the rx and
> enqueue burst counts as the retry condition.
>
> Fixes: 4ecf22e356de ("vhost: export device id as the interface to applications")
> Cc: stable at dpdk.org
>
> Signed-off-by: Yuan Wang <yuanx.wang at intel.com>
> Tested-by: Wei Ling <weix.ling at intel.com>
> ---
> V2: Rebase to 22.07 rc1.
> ---
> examples/vhost/main.c | 27 ++++++++++-----------------
> 1 file changed, 10 insertions(+), 17 deletions(-)
>
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index e7fee5aa1b..e7a84333ed 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -634,7 +634,7 @@ us_vhost_usage(const char *prgname)
> {
> RTE_LOG(INFO, VHOST_CONFIG, "%s [EAL options] -- -p PORTMASK\n"
> " --vm2vm [0|1|2]\n"
> - " --rx_retry [0|1] --mergeable [0|1] --stats [0-N]\n"
> + " --rx-retry [0|1] --mergeable [0|1] --stats [0-N]\n"
> " --socket-file <path>\n"
> " --nb-devices ND\n"
> " -p PORTMASK: Set mask for ports to be used by
> application\n"
> @@ -1383,27 +1383,20 @@ drain_eth_rx(struct vhost_dev *vdev)
> if (!rx_count)
> return;
>
> - /*
> - * When "enable_retry" is set, here we wait and retry when there
> - * is no enough free slots in the queue to hold @rx_count packets,
> - * to diminish packet loss.
> - */
> - if (enable_retry &&
> - unlikely(rx_count > rte_vhost_avail_entries(vdev->vid,
> - VIRTIO_RXQ))) {
> - uint32_t retry;
> + enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
> + VIRTIO_RXQ, pkts, rx_count);
>
> - for (retry = 0; retry < burst_rx_retry_num; retry++) {
> + /* Retry if necessary */
> + if (enable_retry && unlikely(enqueue_count < rx_count)) {
> + uint32_t retry = 0;
> +
> + while (enqueue_count < rx_count && retry++ < burst_rx_retry_num) {
> rte_delay_us(burst_rx_delay_time);
> - if (rx_count <= rte_vhost_avail_entries(vdev->vid,
> - VIRTIO_RXQ))
> - break;
> + enqueue_count += vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
> + VIRTIO_RXQ, pkts, rx_count);
Looking at the code again, it seems the current logic can enqueue more than
rx_count packets: the retry loop calls enqueue_pkt_burst() repeatedly with
the same mbuf array and the same rx_count, so packets that were already
enqueued may be sent again.
Thanks,
Chenbo
> }
> }
>
> - enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
> - VIRTIO_RXQ, pkts, rx_count);
> -
> if (enable_stats) {
> __atomic_add_fetch(&vdev->stats.rx_total_atomic, rx_count,
> __ATOMIC_SEQ_CST);
> --
> 2.25.1