patch 'net/mlx5: fix partial inline of fine grain packets' has been queued to stable release 20.11.4

Xueming Li xuemingl at nvidia.com
Sun Nov 28 15:53:58 CET 2021


Hi,

FYI, your patch has been queued to stable release 20.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/21. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e., not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/steevenlee/dpdk

This queued commit can be viewed at:
https://github.com/steevenlee/dpdk/commit/630a42f437228b5b7aaab5ceee5ebeaea23e1d9c

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From 630a42f437228b5b7aaab5ceee5ebeaea23e1d9c Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski at nvidia.com>
Date: Wed, 17 Nov 2021 11:50:50 +0200
Subject: [PATCH] net/mlx5: fix partial inline of fine grain packets
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit 7775172c045f3387cee47d3f32633255d37ba785 ]

When a user tried to send multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set on a device with
minimum inlining requirements (such as ConnectX-4 Lx, or when the
user specified them explicitly), sending such packets caused a
segfault. The segfault was caused by assertion failures in the
mlx5_tx_packet_multi_inline function.

This patch introduces logic for multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set to skip mbuf scanning
when filling the inline buffer and to inline only the minimal amount
of data required.

Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")

Signed-off-by: Dariusz Sosnowski <dsosnowski at nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 5ec823b024..97e0995c66 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -3468,7 +3468,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(txq->inlen_mode >=
 				    MLX5_ESEG_MIN_INLINE_SIZE);
 			MLX5_ASSERT(txq->inlen_mode <= txq->inlen_send);
-			inlen = txq->inlen_mode;
+			inlen = RTE_MIN(txq->inlen_mode, inlen);
 		} else if (vlan && !txq->vlan_en) {
 			/*
 			 * VLAN insertion is requested and hardware does not
@@ -3481,6 +3481,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		} else {
 			goto do_first;
 		}
+		if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
+			goto do_build;
 		/*
 		 * Now we know the minimal amount of data is requested
 		 * to inline. Check whether we should inline the buffers
@@ -3513,6 +3515,8 @@ do_first:
 				mbuf = NEXT(mbuf);
 				/* There should be not end of packet. */
 				MLX5_ASSERT(mbuf);
+				if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
+					break;
 				nxlen = inlen + rte_pktmbuf_data_len(mbuf);
 			} while (unlikely(nxlen < txq->inlen_send));
 		}
@@ -3540,6 +3544,7 @@ do_align:
 	 * Estimate the number of Data Segments conservatively,
 	 * supposing no any mbufs is being freed during inlining.
 	 */
+do_build:
 	MLX5_ASSERT(inlen <= txq->inlen_send);
 	ds = NB_SEGS(loc->mbuf) + 2 + (inlen -
 				       MLX5_ESEG_MIN_INLINE_SIZE +
-- 
2.34.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2021-11-28 22:41:06.078722329 +0800
+++ 0054-net-mlx5-fix-partial-inline-of-fine-grain-packets.patch	2021-11-28 22:41:03.380206560 +0800
@@ -1 +1 @@
-From 7775172c045f3387cee47d3f32633255d37ba785 Mon Sep 17 00:00:00 2001
+From 630a42f437228b5b7aaab5ceee5ebeaea23e1d9c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 7775172c045f3387cee47d3f32633255d37ba785 ]
@@ -18 +20,0 @@
-Cc: stable at dpdk.org
@@ -23 +25 @@
- drivers/net/mlx5/mlx5_tx.h | 7 ++++++-
+ drivers/net/mlx5/mlx5_rxtx.c | 7 ++++++-
@@ -26,5 +28,5 @@
-diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
-index ad13b5e608..bc629983fa 100644
---- a/drivers/net/mlx5/mlx5_tx.h
-+++ b/drivers/net/mlx5/mlx5_tx.h
-@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
+diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
+index 5ec823b024..97e0995c66 100644
+--- a/drivers/net/mlx5/mlx5_rxtx.c
++++ b/drivers/net/mlx5/mlx5_rxtx.c
+@@ -3468,7 +3468,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
@@ -39 +41 @@
-@@ -1946,6 +1946,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
+@@ -3481,6 +3481,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
@@ -43 +45 @@
-+		if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
++		if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
@@ -48 +50 @@
-@@ -1978,6 +1980,8 @@ do_first:
+@@ -3513,6 +3515,8 @@ do_first:
@@ -52 +54 @@
-+				if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
++				if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
@@ -57 +59 @@
-@@ -2005,6 +2009,7 @@ do_align:
+@@ -3540,6 +3544,7 @@ do_align:
