[dpdk-stable] patch 'app/testpmd: optimize mbuf pool allocation' has been queued to LTS release 16.11.9

Luca Boccassi bluca at debian.org
Thu Sep 27 10:44:03 CEST 2018


Hi,

FYI, your patch has been queued to LTS release 16.11.9

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 09/27/18, so please
shout if you have any objections.

Also note that after the patch there's a diff of the upstream commit vs the patch applied
to the branch. If the code is different (i.e. not only metadata diffs), for example due to
a change in context or macro names, please double-check it.

Thanks.

Luca Boccassi

---
From b2e58e851630e7fd9b2959909e33773812e5f983 Mon Sep 17 00:00:00 2001
From: Phil Yang <phil.yang at arm.com>
Date: Wed, 12 Sep 2018 09:54:26 +0800
Subject: [PATCH] app/testpmd: optimize mbuf pool allocation

[ upstream commit dbfb8ec7094c7115c6d620929de2aedfc9e440aa ]

By default, testpmd creates an mbuf pool for all NUMA nodes and ignores the
EAL configuration.

Count the number of available NUMA nodes according to the EAL core mask or
core list configuration. Optimize by creating an mbuf pool only for those
nodes.

Fixes: c9cafcc82de8 ("app/testpmd: fix mempool creation by socket id")

Signed-off-by: Phil Yang <phil.yang at arm.com>
Acked-by: Gavin Hu <gavin.hu at arm.com>
Acked-by: Bernard Iremonger <bernard.iremonger at intel.com>
---
 app/test-pmd/testpmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index e48cf8a1ab..a6d80f35ca 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -358,14 +358,14 @@ set_default_fwd_lcores_config(void)
 
 	nb_lc = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
 		sock_num = rte_lcore_to_socket_id(i) + 1;
 		if (sock_num > max_socket) {
 			if (sock_num > RTE_MAX_NUMA_NODES)
 				rte_exit(EXIT_FAILURE, "Total sockets greater than %u\n", RTE_MAX_NUMA_NODES);
 			max_socket = sock_num;
 		}
-		if (!rte_lcore_is_enabled(i))
-			continue;
 		if (i == rte_get_master_lcore())
 			continue;
 		fwd_lcores_cpuids[nb_lc++] = i;
-- 
2.18.0
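
For context, here is a minimal standalone sketch of the reordered loop, showing
why checking rte_lcore_is_enabled() first means only sockets that are part of
the EAL core mask/core list are counted. This is illustrative only, not testpmd
code: the program, its EAL bootstrap and the printout are assumptions for a
DPDK 16.11-era environment.

#include <stdio.h>
#include <stdlib.h>

#include <rte_config.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_debug.h>

int
main(int argc, char **argv)
{
	unsigned int i, sock_num, max_socket = 0;

	/* Parse the EAL options (core mask/core list, etc.). */
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	for (i = 0; i < RTE_MAX_LCORE; i++) {
		/* Skip lcores not enabled by the EAL configuration, so
		 * their sockets never enter the count (this is the check
		 * the patch moves to the top of the loop). */
		if (!rte_lcore_is_enabled(i))
			continue;
		sock_num = rte_lcore_to_socket_id(i) + 1;
		if (sock_num > max_socket) {
			if (sock_num > RTE_MAX_NUMA_NODES)
				rte_exit(EXIT_FAILURE,
					 "Total sockets greater than %u\n",
					 RTE_MAX_NUMA_NODES);
			max_socket = sock_num;
		}
	}

	/* max_socket is the highest enabled socket id plus one. */
	printf("sockets used by the EAL configuration: %u\n", max_socket);
	return 0;
}

As in the backported hunk, max_socket ends up as the highest socket id (plus
one) seen among enabled lcores, which is what lets testpmd create mbuf pools
only for the sockets the EAL configuration actually uses.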

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2018-09-25 13:26:56.945792405 +0100
+++ 0008-app-testpmd-optimize-mbuf-pool-allocation.patch	2018-09-25 13:26:56.787424707 +0100
@@ -1,8 +1,10 @@
-From dbfb8ec7094c7115c6d620929de2aedfc9e440aa Mon Sep 17 00:00:00 2001
+From b2e58e851630e7fd9b2959909e33773812e5f983 Mon Sep 17 00:00:00 2001
 From: Phil Yang <phil.yang at arm.com>
 Date: Wed, 12 Sep 2018 09:54:26 +0800
 Subject: [PATCH] app/testpmd: optimize mbuf pool allocation
 
+[ upstream commit dbfb8ec7094c7115c6d620929de2aedfc9e440aa ]
+
 By default, testpmd creates an mbuf pool for all NUMA nodes and ignores the
 EAL configuration.
 
@@ -11,7 +13,6 @@
 nodes.
 
 Fixes: c9cafcc82de8 ("app/testpmd: fix mempool creation by socket id")
-Cc: stable at dpdk.org
 
 Signed-off-by: Phil Yang <phil.yang at arm.com>
 Acked-by: Gavin Hu <gavin.hu at arm.com>
@@ -21,21 +22,20 @@
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
-index 571ecb4ac8..001f0e5529 100644
+index e48cf8a1ab..a6d80f35ca 100644
 --- a/app/test-pmd/testpmd.c
 +++ b/app/test-pmd/testpmd.c
-@@ -475,6 +475,8 @@ set_default_fwd_lcores_config(void)
+@@ -358,14 +358,14 @@ set_default_fwd_lcores_config(void)
  
  	nb_lc = 0;
  	for (i = 0; i < RTE_MAX_LCORE; i++) {
 +		if (!rte_lcore_is_enabled(i))
 +			continue;
- 		sock_num = rte_lcore_to_socket_id(i);
- 		if (new_socket_id(sock_num)) {
- 			if (num_sockets >= RTE_MAX_NUMA_NODES) {
-@@ -484,8 +486,6 @@ set_default_fwd_lcores_config(void)
- 			}
- 			socket_ids[num_sockets++] = sock_num;
+ 		sock_num = rte_lcore_to_socket_id(i) + 1;
+ 		if (sock_num > max_socket) {
+ 			if (sock_num > RTE_MAX_NUMA_NODES)
+ 				rte_exit(EXIT_FAILURE, "Total sockets greater than %u\n", RTE_MAX_NUMA_NODES);
+ 			max_socket = sock_num;
  		}
 -		if (!rte_lcore_is_enabled(i))
 -			continue;

