patch 'examples/packet_ordering: fix Rx with reorder mode disabled' has been queued to stable release 23.11.1

Xueming Li xuemingl at nvidia.com
Sat Apr 13 14:49:09 CEST 2024


Hi,

FYI, your patch has been queued to stable release 23.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24, so please
shout if you have any.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=10296d5f506e4ad4e5c7bb2c32a1f6369ffdcd9c

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From 10296d5f506e4ad4e5c7bb2c32a1f6369ffdcd9c Mon Sep 17 00:00:00 2001
From: Qian Hao <qi_an_hao at 126.com>
Date: Wed, 13 Dec 2023 19:07:18 +0800
Subject: [PATCH] examples/packet_ordering: fix Rx with reorder mode disabled
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit 7ba49dc729937ea97642a615e9b08f33919b94f4 ]

The packet_ordering example works in two modes (selected via --disable-reorder):
  - When reorder is enabled: rx_thread - N*worker_thread - send_thread
  - When reorder is disabled: rx_thread - N*worker_thread - tx_thread
N parallel worker_thread(s) generate out-of-order packets.

When reorder is enabled, send_thread uses the sequence number generated in
rx_thread (L459) to enforce packet ordering. Otherwise, tx_thread simply
sends packets in whatever order it receives them.
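
For context, a minimal sketch of how the reorder-enabled send path consumes
that sequence number (not the exact example code: send_in_order, the burst
size and the port/queue values are illustrative, handling of early/late
packets is omitted, and only the rte_reorder_* and rte_eth_tx_burst calls
are real DPDK API):

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_reorder.h>

static void
send_in_order(struct rte_reorder_buffer *buffer,
		struct rte_mbuf **pkts, uint16_t nb_pkts, uint16_t port)
{
	struct rte_mbuf *ordered[32];
	unsigned int nb_drained;
	uint16_t i;

	/* Hand the possibly out-of-order packets to the reorder buffer;
	 * rte_reorder_insert() keys on the sequence number that rx_thread
	 * wrote via rte_reorder_seqn(). */
	for (i = 0; i < nb_pkts; i++)
		rte_reorder_insert(buffer, pkts[i]);

	/* Drain only what is now in sequence and transmit it. */
	nb_drained = rte_reorder_drain(buffer, ordered, RTE_DIM(ordered));
	rte_eth_tx_burst(port, 0, ordered, (uint16_t)nb_drained);
}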

rx_thread writes the sequence number into a dynamic mbuf field, which is
only registered by calling rte_reorder_create() (Line 741) when reorder is
enabled. However, rx_thread marks the sequence number onto each packet
regardless of whether reorder is enabled, so with reorder disabled the write
overwrites the leading bytes of the packet mbufs, resulting in segfaults
when the PMD tries to DMA packets.
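
A simplified, hedged view of why that write corrupts the mbuf, based on a
reading of rte_mbuf_dyn.h/rte_reorder.h rather than the literal code
(seqn_location is an illustrative name):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_reorder.h>

static inline uint32_t *
seqn_location(struct rte_mbuf *m)
{
	/* The accessor returns the mbuf address plus a global offset.
	 * Until rte_reorder_create() registers the dynamic field, the
	 * offset keeps its unregistered sentinel (-1), so the pointer
	 * lands just before the mbuf structure and the 4-byte store in
	 * rx_thread clobbers the leading bytes of the mbuf header that
	 * the PMD later relies on for DMA. */
	return RTE_MBUF_DYNFIELD(m, rte_reorder_seqn_dynfield_offset,
			uint32_t *);
}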

An `if (!disable_reorder_flag) {...}` check is added in rx_thread to fix the
bug. rx_thread is made __rte_always_inline and called through two
__rte_noinline wrappers that pass the flag as a compile-time constant, so
the compiler resolves the test at build time and the hot path pays no
per-burst branch.

Signed-off-by: Qian Hao <qi_an_hao at 126.com>
Acked-by: Volodymyr Fialko <vfialko at marvell.com>
---
 .mailmap                        |  1 +
 examples/packet_ordering/main.c | 32 +++++++++++++++++++++++++-------
 2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/.mailmap b/.mailmap
index 9541b3b02e..daa1f52205 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1131,6 +1131,7 @@ Przemyslaw Czesnowicz <przemyslaw.czesnowicz at intel.com>
 Przemyslaw Patynowski <przemyslawx.patynowski at intel.com>
 Przemyslaw Zegan <przemyslawx.zegan at intel.com>
 Pu Xu <583493798 at qq.com>
+Qian Hao <qi_an_hao at 126.com>
 Qian Xu <qian.q.xu at intel.com>
 Qiao Liu <qiao.liu at intel.com>
 Qi Fu <qi.fu at intel.com>
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d2fd6f77e4..f839db9102 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 #include <signal.h>
 #include <getopt.h>
+#include <stdbool.h>
 
 #include <rte_eal.h>
 #include <rte_common.h>
@@ -427,8 +428,8 @@ int_handler(int sig_num)
  * The mbufs are then passed to the worker threads via the rx_to_workers
  * ring.
  */
-static int
-rx_thread(struct rte_ring *ring_out)
+static __rte_always_inline int
+rx_thread(struct rte_ring *ring_out, bool disable_reorder_flag)
 {
 	uint32_t seqn = 0;
 	uint16_t i, ret = 0;
@@ -454,9 +455,11 @@ rx_thread(struct rte_ring *ring_out)
 				}
 				app_stats.rx.rx_pkts += nb_rx_pkts;
 
-				/* mark sequence number */
-				for (i = 0; i < nb_rx_pkts; )
-					*rte_reorder_seqn(pkts[i++]) = seqn++;
+				/* mark sequence number if reorder is enabled */
+				if (!disable_reorder_flag) {
+					for (i = 0; i < nb_rx_pkts;)
+						*rte_reorder_seqn(pkts[i++]) = seqn++;
+				}
 
 				/* enqueue to rx_to_workers ring */
 				ret = rte_ring_enqueue_burst(ring_out,
@@ -473,6 +476,18 @@ rx_thread(struct rte_ring *ring_out)
 	return 0;
 }
 
+static __rte_noinline int
+rx_thread_reorder(struct rte_ring *ring_out)
+{
+	return rx_thread(ring_out, false);
+}
+
+static __rte_noinline int
+rx_thread_reorder_disabled(struct rte_ring *ring_out)
+{
+	return rx_thread(ring_out, true);
+}
+
 /**
  * This thread takes bursts of packets from the rx_to_workers ring and
  * Changes the input port value to output port value. And feds it to
@@ -772,8 +787,11 @@ main(int argc, char **argv)
 				(void *)&send_args, last_lcore_id);
 	}
 
-	/* Start rx_thread() on the main core */
-	rx_thread(rx_to_workers);
+	/* Start rx_thread_xxx() on the main core */
+	if (disable_reorder)
+		rx_thread_reorder_disabled(rx_to_workers);
+	else
+		rx_thread_reorder(rx_to_workers);
 
 	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (rte_eal_wait_lcore(lcore_id) < 0)
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2024-04-13 20:43:07.152866717 +0800
+++ 0069-examples-packet_ordering-fix-Rx-with-reorder-mode-di.patch	2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 7ba49dc729937ea97642a615e9b08f33919b94f4 Mon Sep 17 00:00:00 2001
+From 10296d5f506e4ad4e5c7bb2c32a1f6369ffdcd9c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 7ba49dc729937ea97642a615e9b08f33919b94f4 ]
@@ -25,2 +27,0 @@
-Cc: stable at dpdk.org
-
@@ -35 +36 @@
-index 1b346f630f..55913d0450 100644
+index 9541b3b02e..daa1f52205 100644
@@ -38 +39 @@
-@@ -1142,6 +1142,7 @@ Przemyslaw Czesnowicz <przemyslaw.czesnowicz at intel.com>
+@@ -1131,6 +1131,7 @@ Przemyslaw Czesnowicz <przemyslaw.czesnowicz at intel.com>

