[dpdk-stable] patch 'net/mlx5: fix Rx buffer replenishment threshold' has been queued to LTS release 17.11.4

Yongseok Koh yskoh at mellanox.com
Fri Jul 27 04:09:25 CEST 2018


Hi,

FYI, your patch has been queued to LTS release 17.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 07/28/18. So please
shout if anyone has objections.

Thanks.

Yongseok

---
From b9e1edfe8ee644be2fe5c13e3239c8ee1f48afb4 Mon Sep 17 00:00:00 2001
From: Yongseok Koh <yskoh at mellanox.com>
Date: Tue, 26 Jun 2018 04:33:35 -0700
Subject: [PATCH] net/mlx5: fix Rx buffer replenishment threshold

[ upstream commit e10245a13b2e340f48ce80484f19bcbc13e9ebe6 ]

The threshold of buffer replenishment for vectorized Rx burst is a constant
value (64). If the size of the Rx queue is comparatively small, the device
could run out of buffers. For example, if the Rx queue size is 128, buffers
are replenished only twice per wraparound. This can cause jitter in packet
reception, and that jitter can cause unnecessary retransmissions on TCP
connections.

Fixes: 6cb559d67b83 ("net/mlx5: add vectorized Rx/Tx burst for x86")
Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")

Signed-off-by: Yongseok Koh <yskoh at mellanox.com>
Acked-by: Shahaf Shuler <shahafs at mellanox.com>
---
 drivers/net/mlx5/mlx5_defs.h          | 5 +++--
 drivers/net/mlx5/mlx5_rxtx_vec.h      | 4 ++--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 2 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h  | 2 +-
 4 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index d70635767..e9cda9a08 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -92,10 +92,11 @@
 #define MLX5_VPMD_MIN_TXQS 4
 
 /* Threshold of buffer replenishment for vectorized Rx. */
-#define MLX5_VPMD_RXQ_RPLNSH_THRESH   64U
+#define MLX5_VPMD_RXQ_RPLNSH_THRESH(n) \
+	(RTE_MIN(MLX5_VPMD_RX_MAX_BURST, (unsigned int)(n) >> 2))
 
 /* Maximum size of burst for vectorized Rx. */
-#define MLX5_VPMD_RX_MAX_BURST        MLX5_VPMD_RXQ_RPLNSH_THRESH
+#define MLX5_VPMD_RX_MAX_BURST 64U
 
 /*
  * Maximum size of burst for vectorized Tx. This is related to the maximum size
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 1f08ed0b2..d504e2aee 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -106,9 +106,9 @@ mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq, uint16_t n)
 	volatile struct mlx5_wqe_data_seg *wq = &(*rxq->wqes)[elts_idx];
 	unsigned int i;
 
-	assert(n >= MLX5_VPMD_RXQ_RPLNSH_THRESH);
+	assert(n >= MLX5_VPMD_RXQ_RPLNSH_THRESH(q_n));
 	assert(n <= (uint16_t)(q_n - (rxq->rq_ci - rxq->rq_pi)));
-	assert(MLX5_VPMD_RXQ_RPLNSH_THRESH > MLX5_VPMD_DESCS_PER_LOOP);
+	assert(MLX5_VPMD_RXQ_RPLNSH_THRESH(q_n) > MLX5_VPMD_DESCS_PER_LOOP);
 	/* Not to cross queue end. */
 	n = RTE_MIN(n - MLX5_VPMD_DESCS_PER_LOOP, q_n - elts_idx);
 	if (rte_mempool_get_bulk(rxq->mp, (void *)elts, n) < 0) {
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index cf424778a..1604d0430 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -754,7 +754,7 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts, uint16_t pkts_n,
 	 *   N - (rq_ci - rq_pi) := # of buffers consumed (to be replenished).
 	 */
 	repl_n = q_n - (rxq->rq_ci - rxq->rq_pi);
-	if (repl_n >= MLX5_VPMD_RXQ_RPLNSH_THRESH)
+	if (repl_n >= MLX5_VPMD_RXQ_RPLNSH_THRESH(q_n))
 		mlx5_rx_replenish_bulk_mbuf(rxq, repl_n);
 	/* See if there're unreturned mbufs from compressed CQE. */
 	rcvd_pkt = rxq->cq_ci - rxq->rq_pi;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index 793142922..01a93a4b4 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -735,7 +735,7 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts, uint16_t pkts_n,
 	 *   N - (rq_ci - rq_pi) := # of buffers consumed (to be replenished).
 	 */
 	repl_n = q_n - (rxq->rq_ci - rxq->rq_pi);
-	if (repl_n >= MLX5_VPMD_RXQ_RPLNSH_THRESH)
+	if (repl_n >= MLX5_VPMD_RXQ_RPLNSH_THRESH(q_n))
 		mlx5_rx_replenish_bulk_mbuf(rxq, repl_n);
 	/* See if there're unreturned mbufs from compressed CQE. */
 	rcvd_pkt = rxq->cq_ci - rxq->rq_pi;
-- 
2.11.0
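
For context only (not part of the queued patch): below is a minimal
standalone C sketch of how the new replenishment threshold scales with
the Rx queue size. The macro bodies mirror the mlx5_defs.h hunk above;
RTE_MIN here is a simplified stand-in for the DPDK macro, and the queue
sizes are purely illustrative.

#include <stdio.h>

/* Simplified stand-in for RTE_MIN from rte_common.h. */
#define RTE_MIN(a, b) ((a) < (b) ? (a) : (b))

/* Mirrors the patched definitions in drivers/net/mlx5/mlx5_defs.h. */
#define MLX5_VPMD_RX_MAX_BURST 64U
#define MLX5_VPMD_RXQ_RPLNSH_THRESH(n) \
	(RTE_MIN(MLX5_VPMD_RX_MAX_BURST, (unsigned int)(n) >> 2))

int main(void)
{
	/*
	 * With the old constant threshold (64), a 128-entry ring was only
	 * replenished twice per wraparound; the new macro shrinks the
	 * threshold to a quarter of the ring size for small rings, so
	 * replenishment happens more often.
	 */
	unsigned int sizes[] = { 128, 256, 512, 1024 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("q_n=%4u -> replenish threshold=%u\n",
		       sizes[i], MLX5_VPMD_RXQ_RPLNSH_THRESH(sizes[i]));
	return 0;
}

For a 128-entry ring this prints a threshold of 32 instead of the old
fixed 64, while larger rings stay capped at MLX5_VPMD_RX_MAX_BURST.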


