patch 'vdpa/mlx5: fix queue enable drain CQ' has been queued to stable release 23.11.1

Xueming Li <xuemingl at nvidia.com>
Tue Mar 5 10:46:41 CET 2024


Hi,

FYI, your patch has been queued to stable release 23.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/31/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply it to the stable branch. If there were code changes for rebasing
(i.e., not only metadata diffs), please double-check that the rebase was
done correctly.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=19f0cf0927c7171c0fe06526388f51c17c5ce62b

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From 19f0cf0927c7171c0fe06526388f51c17c5ce62b Mon Sep 17 00:00:00 2001
From: Yajun Wu <yajunw at nvidia.com>
Date: Thu, 25 Jan 2024 11:17:55 +0800
Subject: [PATCH] vdpa/mlx5: fix queue enable drain CQ
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit 32fbcf3139fbff04651b3fe173e9f3457f105221 ]

For the case of `ethtool -L eth0 combined xxx` run inside the VM, the VQs
are disabled and re-enabled without the device being closed. In that case,
the CQ must be drained before the event QP is reused or reset.

Fixes: 24969c7b6224 ("vdpa/mlx5: reuse event queues")

Signed-off-by: Yajun Wu <yajunw at nvidia.com>
Acked-by: Matan Azrad <matan at nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin at redhat.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 9557c1042e..32430614d5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -244,22 +244,30 @@ mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv)
 	return max;
 }

+static void
+mlx5_vdpa_drain_cq_one(struct mlx5_vdpa_priv *priv,
+	struct mlx5_vdpa_virtq *virtq)
+{
+	struct mlx5_vdpa_cq *cq = &virtq->eqp.cq;
+
+	mlx5_vdpa_queue_complete(cq);
+	if (cq->cq_obj.cq) {
+		cq->cq_obj.cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
+		virtq->eqp.qp_pi = 0;
+		if (!cq->armed)
+			mlx5_vdpa_cq_arm(priv, cq);
+	}
+}
+
 void
 mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	unsigned int i;

 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
-		struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq;
-
-		mlx5_vdpa_queue_complete(cq);
-		if (cq->cq_obj.cq) {
-			cq->cq_obj.cqes[0].wqe_counter =
-				rte_cpu_to_be_16(UINT16_MAX);
-			priv->virtqs[i].eqp.qp_pi = 0;
-			if (!cq->armed)
-				mlx5_vdpa_cq_arm(priv, cq);
-		}
+		virtq = &priv->virtqs[i];
+		mlx5_vdpa_drain_cq_one(priv, virtq);
 	}
 }

@@ -632,6 +640,7 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	if (eqp->cq.cq_obj.cq != NULL && log_desc_n == eqp->cq.log_desc_n) {
 		/* Reuse existing resources. */
 		eqp->cq.callfd = callfd;
+		mlx5_vdpa_drain_cq_one(priv, virtq);
 		/* FW will set event qp to error state in q destroy. */
 		if (reset && !mlx5_vdpa_qps2rst2rts(eqp))
 			rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)),
--
2.34.1
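
For readers skimming the change: the reuse path in mlx5_vdpa_event_qp_prepare()
previously kept whatever completions were still pending in the CQ from before
the virtqueue was disabled, so re-enabling the queue could start from stale
state. The new mlx5_vdpa_drain_cq_one() helper consumes those completions,
effectively resets the stored CQE counter to UINT16_MAX (i.e. -1 in 16-bit
wrap-around arithmetic, so the next completion, counter 0, is counted as new),
zeroes the event QP producer index, and re-arms the CQ. Below is a standalone
sketch of that drain-before-reuse ordering; all types and names (sketch_*) are
illustrative placeholders, not the mlx5 vDPA driver API:

/*
 * Minimal sketch of the drain-before-reuse pattern from the patch above.
 * The structures and helpers here are hypothetical simplifications.
 */
#include <stdint.h>
#include <stdio.h>

struct sketch_cq {
	uint16_t last_wqe_counter; /* counter of the last consumed CQE */
	int armed;                 /* CQ armed for the next event? */
};

struct sketch_eqp {
	struct sketch_cq cq;
	uint16_t qp_pi;            /* event QP producer index */
};

/* Consume any completions left over from before the queue was disabled. */
static void
sketch_drain_cq_one(struct sketch_eqp *eqp)
{
	/* 1. Poll and consume stale CQEs (elided in this sketch). */
	/* 2. Reset bookkeeping: UINT16_MAX means "nothing consumed yet",
	 *    so the next completion (counter 0) is seen as one new entry. */
	eqp->cq.last_wqe_counter = UINT16_MAX;
	eqp->qp_pi = 0;
	/* 3. Re-arm so the next completion raises an event. */
	if (!eqp->cq.armed)
		eqp->cq.armed = 1;
}

/* Reuse path: called on queue enable when CQ/QP resources already exist. */
static void
sketch_event_qp_reuse(struct sketch_eqp *eqp)
{
	sketch_drain_cq_one(eqp);  /* the fix: drain before reset/reuse */
	/* ...move the QP back to RTS and repost WQEs (elided)... */
}

int
main(void)
{
	struct sketch_eqp eqp = { .cq = { .last_wqe_counter = 7, .armed = 0 } };

	sketch_event_qp_reuse(&eqp);
	printf("counter=%u pi=%u armed=%d\n",
	       (unsigned)eqp.cq.last_wqe_counter, (unsigned)eqp.qp_pi,
	       eqp.cq.armed);
	return 0;
}

The ordering is the point of the fix: the drain happens before the QP is moved
back to RTS, so nothing left over from the previous disable/enable cycle can be
misread as a fresh completion.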

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2024-03-05 17:39:32.765921359 +0800
+++ 0060-vdpa-mlx5-fix-queue-enable-drain-CQ.patch	2024-03-05 17:39:30.773566493 +0800
@@ -1 +1 @@
-From 32fbcf3139fbff04651b3fe173e9f3457f105221 Mon Sep 17 00:00:00 2001
+From 19f0cf0927c7171c0fe06526388f51c17c5ce62b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 32fbcf3139fbff04651b3fe173e9f3457f105221 ]
@@ -11 +13,0 @@
-Cc: stable at dpdk.org

