[dpdk-stable] patch 'test/distributor: fix shutdown of busy worker' has been queued to stable release 19.11.6

luca.boccassi at gmail.com
Wed Oct 28 11:45:45 CET 2020


Hi,

FYI, your patch has been queued to stable release 19.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 10/30/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if any rebasing was needed
to apply it to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
done correctly.

Thanks.

Luca Boccassi

---
From 6b792339111a059ebe146e9ea984628f94555958 Mon Sep 17 00:00:00 2001
From: Lukasz Wojciechowski <l.wojciechow at partner.samsung.com>
Date: Sat, 17 Oct 2020 05:06:49 +0200
Subject: [PATCH] test/distributor: fix shutdown of busy worker

[ upstream commit cf669d6930116b80493d67cdc5d7a1a568eed8e9 ]

The sanity test with worker shutdown delegates all bufs
to be processed by a single lcore worker, then it freezes
one of the lcore workers and continues to send more bufs.
The frozen core shuts down first by calling
rte_distributor_return_pkt().

The test's intention is to verify that packets assigned to
the shut-down lcore are reassigned to another worker.

However, the shut-down core was not always the one that was
processing packets. The lcore processing mbufs might be
different every time the test is launched, because the wkr
static variable in the rte_distributor_process() function
keeps its value between test cases.

The test always froze the lcore with id 0. The patch stores
the id of the worker that is processing the data in the
zero_idx global atomic variable, so the frozen lcore is
always the proper one.

Fixes: c3eabff124e6 ("distributor: add unit tests")

Signed-off-by: Lukasz Wojciechowski <l.wojciechow at partner.samsung.com>
Tested-by: David Hunt <david.hunt at intel.com>
---
 app/test/test_distributor.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 52230d2504..dcc4e76a9a 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -28,6 +28,7 @@ struct worker_params worker_params;
 static volatile int quit;      /**< general quit variable for all threads */
 static volatile int zero_quit; /**< var for when we just want thr0 to quit*/
 static volatile unsigned worker_idx;
+static volatile unsigned zero_idx;
 
 struct worker_stats {
 	volatile unsigned handled_packets;
@@ -340,26 +341,43 @@ handle_work_for_shutdown_test(void *arg)
 	unsigned int total = 0;
 	unsigned int i;
 	unsigned int returned = 0;
+	unsigned int zero_id = 0;
+	unsigned int zero_unset;
 	const unsigned int id = __atomic_fetch_add(&worker_idx, 1,
 			__ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 
+	if (num > 0) {
+		zero_unset = RTE_MAX_LCORE;
+		__atomic_compare_exchange_n(&zero_idx, &zero_unset, id,
+			0, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
+	}
+	zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
+
 	/* wait for quit single globally, or for worker zero, wait
 	 * for zero_quit */
-	while (!quit && !(id == 0 && zero_quit)) {
+	while (!quit && !(id == zero_id && zero_quit)) {
 		worker_stats[id].handled_packets += num;
 		count += num;
 		for (i = 0; i < num; i++)
 			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
+
+		if (num > 0) {
+			zero_unset = RTE_MAX_LCORE;
+			__atomic_compare_exchange_n(&zero_idx, &zero_unset, id,
+				0, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
+		}
+		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
+
 		total += num;
 	}
 	worker_stats[id].handled_packets += num;
 	count += num;
 	returned = rte_distributor_return_pkt(d, id, buf, num);
 
-	if (id == 0) {
+	if (id == zero_id) {
 		/* for worker zero, allow it to restart to pick up last packet
 		 * when all workers are shutting down.
 		 */
@@ -578,6 +596,7 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 	rte_eal_mp_wait_lcore();
 	quit = 0;
 	worker_idx = 0;
+	zero_idx = RTE_MAX_LCORE;
 }
 
 static int
-- 
2.20.1
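
For readers unfamiliar with the claim-once pattern the patch relies on,
here is a minimal standalone sketch (not part of the patch). ZERO_UNSET
and the claim_zero_worker()/reset_zero_worker() helpers are illustrative
stand-ins for RTE_MAX_LCORE and the inline code in
handle_work_for_shutdown_test()/quit_workers(); only the GCC __atomic
builtins and memory orders mirror the queued change, and the reset via
__atomic_store_n is a sketch choice (the patch uses a plain assignment to
the volatile variable in quit_workers()).

#include <stdio.h>

#define ZERO_UNSET 128u   /* stand-in sentinel; the test uses RTE_MAX_LCORE */

static unsigned int zero_idx = ZERO_UNSET;   /* shared by all workers */

/* Called by a worker that just received packets (num > 0). The first such
 * worker atomically swaps the sentinel for its own id; later callers fail
 * the compare-exchange and simply read back the winner's id.
 */
static unsigned int
claim_zero_worker(unsigned int id)
{
	unsigned int expected = ZERO_UNSET;

	__atomic_compare_exchange_n(&zero_idx, &expected, id,
			0 /* strong CAS */,
			__ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
	/* Whether or not the CAS succeeded, zero_idx now holds the id of
	 * the worker that is actually processing packets.
	 */
	return __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
}

/* Reset between test cases, mirroring quit_workers() setting zero_idx
 * back to RTE_MAX_LCORE.
 */
static void
reset_zero_worker(void)
{
	__atomic_store_n(&zero_idx, ZERO_UNSET, __ATOMIC_RELEASE);
}

int
main(void)
{
	/* Worker 3 gets packets first and claims the slot ... */
	printf("worker 3 sees zero_id = %u\n", claim_zero_worker(3));
	/* ... so a later claim by worker 0 does not overwrite it. */
	printf("worker 0 sees zero_id = %u\n", claim_zero_worker(0));

	reset_zero_worker();
	printf("after reset, zero_idx = %u\n",
			__atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE));
	return 0;
}

Because the compare-exchange only succeeds while zero_idx still holds the
sentinel, whichever worker first receives packets wins the race, and the
test then freezes that worker instead of blindly freezing lcore 0.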

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2020-10-28 10:35:17.482204903 +0000
+++ 0186-test-distributor-fix-shutdown-of-busy-worker.patch	2020-10-28 10:35:11.796834322 +0000
@@ -1,8 +1,10 @@
-From cf669d6930116b80493d67cdc5d7a1a568eed8e9 Mon Sep 17 00:00:00 2001
+From 6b792339111a059ebe146e9ea984628f94555958 Mon Sep 17 00:00:00 2001
 From: Lukasz Wojciechowski <l.wojciechow at partner.samsung.com>
 Date: Sat, 17 Oct 2020 05:06:49 +0200
 Subject: [PATCH] test/distributor: fix shutdown of busy worker
 
+[ upstream commit cf669d6930116b80493d67cdc5d7a1a568eed8e9 ]
+
 The sanity test with worker shutdown delegates all bufs
 to be processed by a single lcore worker, then it freezes
 one of the lcore workers and continues to send more bufs.
@@ -23,7 +25,6 @@
 variable. This way the freezed lcore is always the proper one.
 
 Fixes: c3eabff124e6 ("distributor: add unit tests")
-Cc: stable at dpdk.org
 
 Signed-off-by: Lukasz Wojciechowski <l.wojciechow at partner.samsung.com>
 Tested-by: David Hunt <david.hunt at intel.com>
@@ -32,7 +33,7 @@
  1 file changed, 21 insertions(+), 2 deletions(-)
 
 diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
-index 52230d2504..6cd7a2edda 100644
+index 52230d2504..dcc4e76a9a 100644
 --- a/app/test/test_distributor.c
 +++ b/app/test/test_distributor.c
 @@ -28,6 +28,7 @@ struct worker_params worker_params;
@@ -57,7 +58,7 @@
 +	if (num > 0) {
 +		zero_unset = RTE_MAX_LCORE;
 +		__atomic_compare_exchange_n(&zero_idx, &zero_unset, id,
-+			false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
++			0, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
 +	}
 +	zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
 +
@@ -74,7 +75,7 @@
 +		if (num > 0) {
 +			zero_unset = RTE_MAX_LCORE;
 +			__atomic_compare_exchange_n(&zero_idx, &zero_unset, id,
-+				false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
++				0, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
 +		}
 +		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
 +

