[v2] app/testpmd: optimize membuf pool allocation

Message ID 1536717266-6363-1-git-send-email-phil.yang@arm.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series: [v2] app/testpmd: optimize membuf pool allocation

Checks

Context               Check    Description
ci/checkpatch         success  coding style OK
ci/Intel-compilation  success  Compilation OK

Commit Message

Phil Yang Sept. 12, 2018, 1:54 a.m. UTC
  By default, testpmd will create a membuf pool for all NUMA nodes and
ignore the EAL configuration.

Count the number of available NUMA nodes according to the EAL core mask
or core list configuration, and optimize by creating membuf pools only
for those nodes.

Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")

Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Gavin Hu <gavin.hu@arm.com>
---
 app/test-pmd/testpmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
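
For reference, this is the loop in set_default_fwd_lcores_config() as it
reads after the patch, reconstructed from the diff in the Patch section
below; the rte_exit() body inside the overflow check is assumed, since
the diff elides it. Disabled lcores are now skipped before their socket
ID is recorded, so socket_ids[] only collects sockets that actually have
enabled cores:

	nb_lc = 0;
	for (i = 0; i < RTE_MAX_LCORE; i++) {
		if (!rte_lcore_is_enabled(i))
			continue; /* skip lcores outside the EAL core mask/list */
		sock_num = rte_lcore_to_socket_id(i);
		if (new_socket_id(sock_num)) {
			if (num_sockets >= RTE_MAX_NUMA_NODES) {
				/* overflow handling assumed; elided in the diff */
				rte_exit(EXIT_FAILURE,
					 "Total sockets greater than %u\n",
					 RTE_MAX_NUMA_NODES);
			}
			socket_ids[num_sockets++] = sock_num;
		}
		if (i == rte_get_master_lcore())
			continue; /* master lcore does not forward */
		fwd_lcores_cpuids[nb_lc++] = i;
	}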
  

Comments

Iremonger, Bernard Sept. 12, 2018, 10:15 a.m. UTC | #1
> -----Original Message-----
> From: phil.yang@arm.com [mailto:phil.yang@arm.com]
> Sent: Wednesday, September 12, 2018 2:54 AM
> To: dev@dpdk.org
> Cc: Iremonger, Bernard <bernard.iremonger@intel.com>; gavin.hu@arm.com;
> stable@dpdk.org; phil.yang@arm.com
> Subject: [PATCH v2] app/testpmd: optimize membuf pool allocation
> 
> By default, testpmd will create membuf pool for all NUMA nodes and ignore EAL
> configuration.
> 
> Count the number of available NUMA according to EAL core mask or core list
> configuration. Optimized by only creating membuf pool for those nodes.
> 
> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
> 
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Acked-by: Gavin Hu <gavin.hu@arm.com>

Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
  
Thomas Monjalon Sept. 19, 2018, 1:38 p.m. UTC | #2
> > By default, testpmd will create membuf pool for all NUMA nodes and ignore EAL
> > configuration.
> > 
> > Count the number of available NUMA according to EAL core mask or core list
> > configuration. Optimized by only creating membuf pool for those nodes.
> > 
> > Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")

Fixes: c9cafcc82de8 ("app/testpmd: fix mempool creation by socket id")
Cc: stable@dpdk.org

> > Signed-off-by: Phil Yang <phil.yang@arm.com>
> > Acked-by: Gavin Hu <gavin.hu@arm.com>
> 
> Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>

Applied, thanks
  
Ferruh Yigit Oct. 8, 2018, 11:33 a.m. UTC | #3
On 9/12/2018 2:54 AM, dev-bounces@dpdk.org wrote:
> By default, testpmd will create membuf pool for all NUMA nodes and
> ignore EAL configuration.
> 
> Count the number of available NUMA according to EAL core mask or core
> list configuration. Optimized by only creating membuf pool for those
> nodes.
> 
> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
> 
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Acked-by: Gavin Hu <gavin.hu@arm.com>
> ---
>  app/test-pmd/testpmd.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index ee48db2..a56af2b 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
>  
>  	nb_lc = 0;
>  	for (i = 0; i < RTE_MAX_LCORE; i++) {
> +		if (!rte_lcore_is_enabled(i))
> +			continue;
>  		sock_num = rte_lcore_to_socket_id(i);
>  		if (new_socket_id(sock_num)) {
>  			if (num_sockets >= RTE_MAX_NUMA_NODES) {
> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
>  			}
>  			socket_ids[num_sockets++] = sock_num;
>  		}
> -		if (!rte_lcore_is_enabled(i))
> -			continue;
>  		if (i == rte_get_master_lcore())
>  			continue;
>  		fwd_lcores_cpuids[nb_lc++] = i;
> 


This is causing testpmd to fail for the case where all cores are from socket 1
and a virtual device is added which will try to allocate memory from socket 0.


 $ testpmd -l<cores from socket 1> --vdev net_pcap0,iface=lo -- -i
 ...
 Failed to setup RX queue:No mempool allocation on the socket 0
 EAL: Error - exiting with code: 1
   Cause: Start ports failed
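
For context on where this error comes from: at port start, testpmd looks
up the mempool for the port's socket by name and bails out if none was
created. A minimal sketch of that lookup path, assuming the
mbuf_pool_find()/mbuf_poolname_build() helpers and the pool-name format
from the testpmd sources of this period:

#include <stdio.h>
#include <rte_mempool.h>

/* Sketch only: names and pool-name format are assumed. With this patch
 * applied, no pool is created for socket 0 when no enabled lcore sits
 * there, so the lookup returns NULL and RX queue setup aborts with the
 * error shown above. */
static inline void
mbuf_poolname_build(unsigned int sock_id, char *mp_name, int name_size)
{
	snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
}

static inline struct rte_mempool *
mbuf_pool_find(unsigned int sock_id)
{
	char pool_name[RTE_MEMPOOL_NAMESIZE];

	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name));
	return rte_mempool_lookup(pool_name);
}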
  
Anatoly Burakov Oct. 8, 2018, 11:35 a.m. UTC | #4
On 08-Oct-18 12:33 PM, Ferruh Yigit wrote:
> On 9/12/2018 2:54 AM, dev-bounces@dpdk.org wrote:
>> By default, testpmd will create membuf pool for all NUMA nodes and
>> ignore EAL configuration.
>>
>> Count the number of available NUMA according to EAL core mask or core
>> list configuration. Optimized by only creating membuf pool for those
>> nodes.
>>
>> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
>>
>> Signed-off-by: Phil Yang <phil.yang@arm.com>
>> Acked-by: Gavin Hu <gavin.hu@arm.com>
>> ---
>>   app/test-pmd/testpmd.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>> index ee48db2..a56af2b 100644
>> --- a/app/test-pmd/testpmd.c
>> +++ b/app/test-pmd/testpmd.c
>> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
>>   
>>   	nb_lc = 0;
>>   	for (i = 0; i < RTE_MAX_LCORE; i++) {
>> +		if (!rte_lcore_is_enabled(i))
>> +			continue;
>>   		sock_num = rte_lcore_to_socket_id(i);
>>   		if (new_socket_id(sock_num)) {
>>   			if (num_sockets >= RTE_MAX_NUMA_NODES) {
>> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
>>   			}
>>   			socket_ids[num_sockets++] = sock_num;
>>   		}
>> -		if (!rte_lcore_is_enabled(i))
>> -			continue;
>>   		if (i == rte_get_master_lcore())
>>   			continue;
>>   		fwd_lcores_cpuids[nb_lc++] = i;
>>
> 
> 
> This is causing testpmd fail for the case all cores from socket 1 and added a
> virtual device which will try to allocate memory from socket 0.
> 
> 
>   $ testpmd -l<cores from socket 1> --vdev net_pcap0,iface=lo -- -i
>   ...
>   Failed to setup RX queue:No mempool allocation on the socket 0
>   EAL: Error - exiting with code: 1
>     Cause: Start ports failed
> 
> 

It's an open question as to why the pcap driver tries to allocate on
socket 0 when everything is on socket 1, but perhaps a better improvement
would be to take into account not only the socket IDs of lcores, but
those of ethdev devices as well?
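
A hypothetical sketch of that direction (not the fix that was eventually
merged): extend the socket scan to the probed ethdev ports as well, so a
mempool exists for a device's socket even when no enabled lcore lives
there:

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical: also record each probed port's socket.
 * rte_eth_dev_socket_id() can return -1 when the device has no
 * detectable NUMA affinity, so that case is skipped here. */
uint16_t pid;

RTE_ETH_FOREACH_DEV(pid) {
	int sid = rte_eth_dev_socket_id(pid);

	if (sid >= 0 && new_socket_id((unsigned int)sid)) {
		if (num_sockets >= RTE_MAX_NUMA_NODES)
			rte_exit(EXIT_FAILURE,
				 "Total sockets greater than %u\n",
				 RTE_MAX_NUMA_NODES);
		socket_ids[num_sockets++] = (unsigned int)sid;
	}
}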
  
Phil Yang Oct. 11, 2018, 7:11 a.m. UTC | #5
> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Monday, October 8, 2018 7:36 PM
> To: Ferruh Yigit <ferruh.yigit@intel.com>; dev-bounces@dpdk.org;
> dev@dpdk.org
> Cc: bernard.iremonger@intel.com; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; stable@dpdk.org; Phil Yang (Arm Technology China)
> <Phil.Yang@arm.com>
> Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool
> allocation
>
> On 08-Oct-18 12:33 PM, Ferruh Yigit wrote:
> > On 9/12/2018 2:54 AM, dev-bounces@dpdk.org wrote:
> >> By default, testpmd will create membuf pool for all NUMA nodes and
> >> ignore EAL configuration.
> >>
> >> Count the number of available NUMA according to EAL core mask or core
> >> list configuration. Optimized by only creating membuf pool for those
> >> nodes.
> >>
> >> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
> >>
> >> Signed-off-by: Phil Yang <phil.yang@arm.com>
> >> Acked-by: Gavin Hu <gavin.hu@arm.com>
> >> ---
> >>   app/test-pmd/testpmd.c | 4 ++--
> >>   1 file changed, 2 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> >> ee48db2..a56af2b 100644
> >> --- a/app/test-pmd/testpmd.c
> >> +++ b/app/test-pmd/testpmd.c
> >> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
> >>
> >>   	nb_lc = 0;
> >>   	for (i = 0; i < RTE_MAX_LCORE; i++) {
> >> +		if (!rte_lcore_is_enabled(i))
> >> +			continue;
> >>   		sock_num = rte_lcore_to_socket_id(i);
> >>   		if (new_socket_id(sock_num)) {
> >>   			if (num_sockets >= RTE_MAX_NUMA_NODES) {
> >> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
> >>   			}
> >>   			socket_ids[num_sockets++] = sock_num;
> >>   		}
> >> -		if (!rte_lcore_is_enabled(i))
> >> -			continue;
> >>   		if (i == rte_get_master_lcore())
> >>   			continue;
> >>   		fwd_lcores_cpuids[nb_lc++] = i;
> >>
> >
> >
> > This is causing testpmd fail for the case all cores from socket 1 and
> > added a virtual device which will try to allocate memory from socket 0.
> >
> >
> >   $ testpmd -l<cores from socket 1> --vdev net_pcap0,iface=lo -- -i
> >   ...
> >   Failed to setup RX queue:No mempool allocation on the socket 0
> >   EAL: Error - exiting with code: 1
> >     Cause: Start ports failed
> >
> >
>
> It's an open question as to why pcap driver tries to allocate on socket
> 0 when everything is on socket 1, but perhaps a better improvement would be to
> take into account not only socket ID's of lcores, but ethdev devices as well?
>
> --
> Thanks,
> Anatoly

Hi Anatoly,

Agree.

Since NUMA awareness is enabled by default in testpmd, the NUMA setting for vdev ports should be configurable.

testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo --socket-mem=64 -- --numa --port-numa-config="(0,1)" --ring-numa-config="(0,1,1),(0,2,1)" -i

...
Configuring Port 0 (socket 0)
Failed to setup RX queue:No mempool allocation on the socket 0
EAL: Error - exiting with code: 1
  Cause: Start ports failed

This should be a defect.

Thanks
Phil Yang
  
Phil Yang Oct. 11, 2018, 10:37 a.m. UTC | #6
Hi Anatoly/Yigit,

I've prepared a patch to fix this issue. 
I will send out the patch once the internal review is done.

Thanks,
Phil Yang

  

Patch

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ee48db2..a56af2b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -476,6 +476,8 @@  set_default_fwd_lcores_config(void)
 
 	nb_lc = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
 		sock_num = rte_lcore_to_socket_id(i);
 		if (new_socket_id(sock_num)) {
 			if (num_sockets >= RTE_MAX_NUMA_NODES) {
@@ -485,8 +487,6 @@  set_default_fwd_lcores_config(void)
 			}
 			socket_ids[num_sockets++] = sock_num;
 		}
-		if (!rte_lcore_is_enabled(i))
-			continue;
 		if (i == rte_get_master_lcore())
 			continue;
 		fwd_lcores_cpuids[nb_lc++] = i;