[dpdk-stable] patch 'net/mlx5: fix overflow of Rx SW ring' has been queued to stable release 17.08.1

Yuanhan Liu yliu at fridaylinux.org
Tue Nov 21 14:16:54 CET 2017


Hi,

FYI, your patch has been queued to stable release 17.08.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/24/17. So please
shout if anyone has objections.

Thanks.

	--yliu

---
From f29ef55dd26f9c00818258ef1d6b47a3b50726fd Mon Sep 17 00:00:00 2001
From: Yongseok Koh <yskoh at mellanox.com>
Date: Thu, 5 Oct 2017 14:37:29 -0700
Subject: [PATCH] net/mlx5: fix overflow of Rx SW ring

[ upstream commit fc048bd52cb7e3382da86629a5aef89f1377aca8 ]

If the vectorized Rx burst runs short of mbufs during replenishment, the
Rx SW ring can overflow because the Rx burst handles 4 packets per loop
iteration. The function fills the SW ring and its mbufs first and checks
the validity of each completion only afterwards, so some buffer slots must
be reserved at the tail of the ring to protect mbufs that are already
owned by the application.

Fixes: 6cb559d67b83 ("net/mlx5: add vectorized Rx/Tx burst for x86")

Reported-by: Martin Weiser <martin.weiser at allegro-packets.com>
Signed-off-by: Yongseok Koh <yskoh at mellanox.com>
---
 drivers/net/mlx5/mlx5_rxtx_vec_sse.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
index 6f4e1e8..b5a7657 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
@@ -642,6 +642,13 @@ rxq_cq_decompress_v(struct rxq *rxq,
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
 			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
 	/*
+	 * Not to overflow elts array. Decompress next time after mbuf
+	 * replenishment.
+	 */
+	if (unlikely(mcqe_n + MLX5_VPMD_DESCS_PER_LOOP >
+		     (uint16_t)(rxq->rq_ci - rxq->cq_ci)))
+		return;
+	/*
 	 * A. load mCQEs into a 128bit register.
 	 * B. store rearm data to mbuf.
 	 * C. combine data from mCQEs with rx_descriptor_fields1.
@@ -1031,8 +1038,10 @@ rxq_burst_v(struct rxq *rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	}
 	elts_idx = rxq->rq_pi & q_mask;
 	elts = &(*rxq->elts)[elts_idx];
-	/* Not to overflow pkts array. */
-	pkts_n = RTE_ALIGN_FLOOR(pkts_n - rcvd_pkt, MLX5_VPMD_DESCS_PER_LOOP);
+	pkts_n = RTE_MIN(pkts_n - rcvd_pkt,
+			 (uint16_t)(rxq->rq_ci - rxq->cq_ci));
+	/* Not to overflow pkts/elts array. */
+	pkts_n = RTE_ALIGN_FLOOR(pkts_n, MLX5_VPMD_DESCS_PER_LOOP);
 	/* Not to cross queue end. */
 	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
 	if (!pkts_n)
-- 
2.7.4
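
For readers less familiar with the vectorized Rx path, below is a minimal,
self-contained C sketch of the capping idea the patch applies; it is not the
driver code itself. The names cap_burst_size and DESCS_PER_LOOP are
hypothetical stand-ins, and rq_ci/cq_ci mirror the ring and completion-queue
indices used in the hunks above, whose wrapping difference is assumed to be
the number of SW ring slots still backed by freshly replenished mbufs.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for MLX5_VPMD_DESCS_PER_LOOP: the vector path
 * handles packets in groups of 4. */
#define DESCS_PER_LOOP 4

/*
 * Cap a requested burst so the 4-packet loop cannot write past the SW
 * ring slots that are still backed by freshly replenished mbufs.
 * rq_ci counts descriptors posted with mbufs, cq_ci counts completions
 * already consumed; their wrapping difference is the number of slots
 * that are safe to overwrite.
 */
static uint16_t
cap_burst_size(uint16_t pkts_n, uint16_t rq_ci, uint16_t cq_ci)
{
	uint16_t avail = (uint16_t)(rq_ci - cq_ci);

	if (pkts_n > avail)
		pkts_n = avail;
	/* Round down to a multiple of the loop stride, as
	 * RTE_ALIGN_FLOOR() does in the patch. */
	return (uint16_t)(pkts_n & ~(DESCS_PER_LOOP - 1));
}

int
main(void)
{
	/* Replenishment fell short: only 6 fresh slots are available for a
	 * 32-packet request, so the burst is capped to 4 (one full loop). */
	printf("capped burst: %u\n", cap_burst_size(32, 106, 100));
	return 0;
}

With the numbers in main(), the requested burst of 32 is first capped to the
6 available slots and then rounded down to 4, one full loop iteration, which
is the same effect RTE_ALIGN_FLOOR(pkts_n, MLX5_VPMD_DESCS_PER_LOOP) has in
the second hunk.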


