mempool: fix mempool obj alignment for non x86

Message ID 20191219134227.3841799-1-jerinj@marvell.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series mempool: fix mempool obj alignment for non x86

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/travis-robot warning Travis build: failed
ci/Intel-compilation success Compilation OK

Commit Message

Jerin Jacob Kollanukkaran Dec. 19, 2019, 1:42 p.m. UTC
  From: Jerin Jacob <jerinj@marvell.com>

The existing optimize_object_size() function addresses the memory object
alignment constraint on x86 for better performance.

Different (micro) architectures may have different memory alignment
constraints for better performance, and these are not necessarily the
same as what the existing optimize_object_size() function computes.
Some use an XOR (kind of CRC) scheme to enable DRAM channel distribution
based on the address, and some may have a different formula.

Introduce an arch_mem_object_align() function to abstract the
differences between (micro) architectures and avoid wasting memory on
mempool object alignment for architectures where the existing
optimize_object_size() scheme is not valid.

Additional details:
https://www.mail-archive.com/dev@dpdk.org/msg149157.html

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
 doc/guides/prog_guide/mempool_lib.rst |  6 +++---
 lib/librte_mempool/rte_mempool.c      | 17 +++++++++++++----
 2 files changed, 16 insertions(+), 7 deletions(-)
  

Comments

Gavin Hu Dec. 20, 2019, 3:26 a.m. UTC | #1
Hi Jerin,

It got two coding style warnings, otherwise, 
Reviewed-by: Gavin Hu <gavin.hu@arm.com>

WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#144: FILE: lib/librte_mempool/rte_mempool.c:84:
+arch_mem_object_align(unsigned obj_size)

WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#154: FILE: lib/librte_mempool/rte_mempool.c:106:
+arch_mem_object_align(unsigned obj_size)
  
Jerin Jacob Dec. 20, 2019, 3:45 a.m. UTC | #2
On Fri, Dec 20, 2019 at 8:56 AM Gavin Hu <Gavin.Hu@arm.com> wrote:
>
> Hi Jerin,
>
> It got two coding style warnings, otherwise,
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>

Thanks Gavin for the review. Since the existing code uses "unsigned"
in that file, I thought of not changing it in this patch.
If someone thinks it is better to change, I can send a v2, fixing
"unsigned" to "unsigned int" across the file as the first patch in the
series.

>
> WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
> #144: FILE: lib/librte_mempool/rte_mempool.c:84:
> +arch_mem_object_align(unsigned obj_size)
>
> WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
> #154: FILE: lib/librte_mempool/rte_mempool.c:106:
> +arch_mem_object_align(unsigned obj_size)
  
Morten Brørup Dec. 20, 2019, 10:54 a.m. UTC | #3
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Friday, December 20, 2019 4:45 AM
> 
> On Fri, Dec 20, 2019 at 8:56 AM Gavin Hu <Gavin.Hu@arm.com> wrote:
> >
> > Hi Jerin,
> >
> > It got two coding style warnings, otherwise,
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> 
> Thanks Gavin for review. Since the existing code is using "unsigned"
> in that file, I thought of not change by this patch.
> If someone thinks, It is better to change then I can send v2 by fixing
> "unsigned" to "unsigned int" across the file as a first patch in the
> series.
> 

The use of the type "unsigned" is a general issue with older DPDK libraries. Anyone touching any of these libraries will get these warnings. They should all be fixed. I'm not saying you should do it; I'm only suggesting that someone should create a dedicated patch to fix them all.

> >
> > WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of
> 'unsigned'
> > #144: FILE: lib/librte_mempool/rte_mempool.c:84:
> > +arch_mem_object_align(unsigned obj_size)
> >
> > WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of
> 'unsigned'
> > #154: FILE: lib/librte_mempool/rte_mempool.c:106:
> > +arch_mem_object_align(unsigned obj_size)
  
Honnappa Nagarahalli Dec. 20, 2019, 3:55 p.m. UTC | #4
<snip>

> 
> From: Jerin Jacob <jerinj@marvell.com>
> 
> The exiting optimize_object_size() function address the memory object
> alignment constraint on x86 for better performance.
> 
> Different (Mirco) architecture may have different memory alignment
> constraint for better performance and it not same as the existing
> optimize_object_size() function. Some use, XOR(kind of CRC) scheme to
> enable DRAM channel distribution based on the address and some may have
> a different formula.
If I understand correctly, address interleaving is a characteristic of the memory controller and not the CPU.
For example, different SoCs using the same Arm architecture might have different memory controllers. So, the solution should not be architecture specific, but SoC specific.

> 
> Introducing arch_mem_object_align() function to abstract the differences in
> different (mirco) architectures and avoid wasting memory for mempool
> object alignment for the architecture the existing optimize_object_size() is
> not valid.
> 
> Additional details:
> https://www.mail-archive.com/dev@dpdk.org/msg149157.html
> 
> Fixes: af75078fece3 ("first public release")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
>  doc/guides/prog_guide/mempool_lib.rst |  6 +++---
>  lib/librte_mempool/rte_mempool.c      | 17 +++++++++++++----
>  2 files changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/mempool_lib.rst
> b/doc/guides/prog_guide/mempool_lib.rst
> index 3bb84b0a6..eea7a2906 100644
> --- a/doc/guides/prog_guide/mempool_lib.rst
> +++ b/doc/guides/prog_guide/mempool_lib.rst
> @@ -27,10 +27,10 @@ In debug mode
> (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled),  statistics about get
> from/put in the pool are stored in the mempool structure.
>  Statistics are per-lcore to avoid concurrent access to statistics counters.
> 
> -Memory Alignment Constraints
> -----------------------------
> +Memory Alignment Constraints on X86 architecture
> +------------------------------------------------
> 
> -Depending on hardware memory configuration, performance can be greatly
> improved by adding a specific padding between objects.
> +Depending on hardware memory configuration on X86 architecture,
> performance can be greatly improved by adding a specific padding between
> objects.
>  The objective is to ensure that the beginning of each object starts on a
> different channel and rank in memory so that all channels are equally loaded.
> 
>  This is particularly true for packet buffers when doing L3 forwarding or flow
> classification.
> diff --git a/lib/librte_mempool/rte_mempool.c
> b/lib/librte_mempool/rte_mempool.c
> index 78d8eb941..871894525 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq)
>  #define CALC_CACHE_FLUSHTHRESH(c)	\
>  	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
> 
> +#if defined(RTE_ARCH_X86)
>  /*
>   * return the greatest common divisor between a and b (fast algorithm)
>   *
> @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b)  }
> 
>  /*
> - * Depending on memory configuration, objects addresses are spread
> + * Depending on memory configuration on x86 arch, objects addresses are
> + spread
>   * between channels and ranks in RAM: the pool allocator will add
>   * padding between objects. This function return the new size of the
>   * object.
>   */
> -static unsigned optimize_object_size(unsigned obj_size)
> +static unsigned
> +arch_mem_object_align(unsigned obj_size)
>  {
>  	unsigned nrank, nchan;
>  	unsigned new_obj_size;
> @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned
> obj_size)
>  		new_obj_size++;
>  	return new_obj_size * RTE_MEMPOOL_ALIGN;  }
> +#else
This applies to Arm (and PPC as well) SoCs, which might have different schemes depending on the memory controller. IMO, this should not be architecture specific.

> +static unsigned
> +arch_mem_object_align(unsigned obj_size) {
> +	return obj_size;
> +}
> +#endif
> 
>  struct pagesz_walk_arg {
>  	int socket_id;
> @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size,
> uint32_t flags,
>  	 */
>  	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
>  		unsigned new_size;
> -		new_size = optimize_object_size(sz->header_size + sz-
> >elt_size +
> -			sz->trailer_size);
> +		new_size = arch_mem_object_align
> +			    (sz->header_size + sz->elt_size + sz->trailer_size);
>  		sz->trailer_size = new_size - sz->header_size - sz->elt_size;
>  	}
> 
> --
> 2.24.1
  
Jerin Jacob Dec. 20, 2019, 4:55 p.m. UTC | #5
On Fri, Dec 20, 2019 at 9:25 PM Honnappa Nagarahalli
<Honnappa.Nagarahalli@arm.com> wrote:
>
> <snip>
>
> >
> > From: Jerin Jacob <jerinj@marvell.com>
> >
> > The exiting optimize_object_size() function address the memory object
> > alignment constraint on x86 for better performance.
> >
> > Different (Mirco) architecture may have different memory alignment
> > constraint for better performance and it not same as the existing
> > optimize_object_size() function. Some use, XOR(kind of CRC) scheme to
> > enable DRAM channel distribution based on the address and some may have
> > a different formula.
> If I understand correctly, address interleaving is the characteristic of the memory controller and not the CPU.
> For ex: different SoCs using the same Arm architecture might have different memory controllers. So, the solution should not be architecture specific, but SoC specific.

Yes.  See below.

> > -static unsigned optimize_object_size(unsigned obj_size)
> > +static unsigned
> > +arch_mem_object_align(unsigned obj_size)
> >  {
> >       unsigned nrank, nchan;
> >       unsigned new_obj_size;
> > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned
> > obj_size)
> >               new_obj_size++;
> >       return new_obj_size * RTE_MEMPOOL_ALIGN;  }
> > +#else
> This applies to add Arm (PPC as well) SoCs which might have different schemes depending on the memory controller. IMO, this should not be architecture specific.

I agree in principle.
I will summarize the
https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback:

1) For the x86 arch, it is architecture-specific.
2) For the PowerPC arch, it is architecture-specific.
3) For the ARM case, it will be memory-controller specific.
4) For the ARM case, the memory controller is not using the existing
x86 arch formula.
5) If it is memory/arch-specific, can userspace code find the optimal
alignment? In the case of octeontx2/arm64, the memory controller does
XOR on the PA address, over which userspace code doesn't have much
control.

This patch addresses the known cases (1), (4) and (5); (2) can be
added to this framework when the POWER9 folks want it.

We can extend this patch to address (3) if there is such a case. If
someone can share an alignment formula that is memory-controller
specific and does not come under (4), then we can create an extra
layer for the memory controller and an abstraction to probe it.
Again, there is no standard way of probing the memory controller from
userspace, and we would need a platform #define, which won't work for
a distribution build.
So the solution needs to be arch-specific first, and then fine-tuned
to the memory controller where possible.

I can work on creating an extra layer of code if someone can provide
the details of the memory controller and its probing mechanism, or
this patch can be extended to support such a case if it arises in the
future.

Thoughts?

>
> > +static unsigned
> > +arch_mem_object_align(unsigned obj_size) {
> > +     return obj_size;
> > +}
> > +#endif
> >
> >  struct pagesz_walk_arg {
> >       int socket_id;
> > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size,
> > uint32_t flags,
> >        */
> >       if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
> >               unsigned new_size;
> > -             new_size = optimize_object_size(sz->header_size + sz-
> > >elt_size +
> > -                     sz->trailer_size);
> > +             new_size = arch_mem_object_align
> > +                         (sz->header_size + sz->elt_size + sz->trailer_size);
> >               sz->trailer_size = new_size - sz->header_size - sz->elt_size;
> >       }
> >
> > --
> > 2.24.1
>
  
Honnappa Nagarahalli Dec. 20, 2019, 9:07 p.m. UTC | #6
<snip>
> > > From: Jerin Jacob <jerinj@marvell.com>
> > >
> > > The exiting optimize_object_size() function address the memory
> > > object alignment constraint on x86 for better performance.
> > >
> > > Different (Mirco) architecture may have different memory alignment
> > > constraint for better performance and it not same as the existing
> > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme
> > > to enable DRAM channel distribution based on the address and some
> > > may have a different formula.
> > If I understand correctly, address interleaving is the characteristic of the
> memory controller and not the CPU.
> > For ex: different SoCs using the same Arm architecture might have different
> memory controllers. So, the solution should not be architecture specific, but
> SoC specific.
> 
> Yes.  See below.
> 
> > > -static unsigned optimize_object_size(unsigned obj_size)
> > > +static unsigned
> > > +arch_mem_object_align(unsigned obj_size)
> > >  {
> > >       unsigned nrank, nchan;
> > >       unsigned new_obj_size;
> > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned
> > > obj_size)
> > >               new_obj_size++;
> > >       return new_obj_size * RTE_MEMPOOL_ALIGN;  }
> > > +#else
> > This applies to add Arm (PPC as well) SoCs which might have different
> schemes depending on the memory controller. IMO, this should not be
> architecture specific.
> 
> I agree in principle.
> I will summarize the
> https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback:
> 
> 1) For x86 arch, it is architecture-specific
> 2) For power PC arch, It is architecture-specific
> 3) For the ARM case, it will be the memory controller specific.
> 4) For the ARM case, The memory controller is not using the existing
> x86 arch formula.
> 5) If it is memory/arch-specific, Can userspace code find the optimal
> alignment? In the case of octeontx2/arm64, the memory controller does  XOR
> on PA address which userspace code doesn't have much control.
> 
> This patch address the known case of (1), (2),  (4) and (5). (2) can be added to
> this framework when POWER9 folks want it.
> 
> We can extend this patch to address (3) if there is a case. Without the actual
> requirement(If some can share the formula of alignment which is the
> memory controller specific and it does not come under (4))) then we can
> create extra layer for the memory controller and abstraction to probe it.
> Again there is no standard way of probing the memory controller in
> userspace and we need platform #define, which won't work for distribution
> build.
> So solution needs to be arch-specific and then fine-tune to memory controller
> if possible.
> 
> I can work on creating an extra layer of code if some can provide the details
> of the memory controller and probing mechanism or this patch be extended
Inputs for BlueField, DPAAx, ThunderX2 would be helpful.

> to support such case if it arises in future.
> 
> Thoughts?
How much memory will this save for your platform? Is it affecting performance?

> 
> >
> > > +static unsigned
> > > +arch_mem_object_align(unsigned obj_size) {
> > > +     return obj_size;
> > > +}
> > > +#endif
> > >
> > >  struct pagesz_walk_arg {
> > >       int socket_id;
> > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size,
> > > uint32_t flags,
> > >        */
> > >       if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
> > >               unsigned new_size;
> > > -             new_size = optimize_object_size(sz->header_size + sz-
> > > >elt_size +
> > > -                     sz->trailer_size);
> > > +             new_size = arch_mem_object_align
> > > +                         (sz->header_size + sz->elt_size +
> > > + sz->trailer_size);
> > >               sz->trailer_size = new_size - sz->header_size - sz->elt_size;
> > >       }
> > >
> > > --
> > > 2.24.1
> >
  
Jerin Jacob Dec. 21, 2019, 5:06 a.m. UTC | #7
On Sat, Dec 21, 2019 at 2:37 AM Honnappa Nagarahalli
<Honnappa.Nagarahalli@arm.com> wrote:
>
> <snip>
> > > > From: Jerin Jacob <jerinj@marvell.com>
> > > >
> > > > The exiting optimize_object_size() function address the memory
> > > > object alignment constraint on x86 for better performance.
> > > >
> > > > Different (Mirco) architecture may have different memory alignment
> > > > constraint for better performance and it not same as the existing
> > > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme
> > > > to enable DRAM channel distribution based on the address and some
> > > > may have a different formula.
> > > If I understand correctly, address interleaving is the characteristic of the
> > memory controller and not the CPU.
> > > For ex: different SoCs using the same Arm architecture might have different
> > memory controllers. So, the solution should not be architecture specific, but
> > SoC specific.
> >
> > Yes.  See below.
> >
> > > > -static unsigned optimize_object_size(unsigned obj_size)
> > > > +static unsigned
> > > > +arch_mem_object_align(unsigned obj_size)
> > > >  {
> > > >       unsigned nrank, nchan;
> > > >       unsigned new_obj_size;
> > > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned
> > > > obj_size)
> > > >               new_obj_size++;
> > > >       return new_obj_size * RTE_MEMPOOL_ALIGN;  }
> > > > +#else
> > > This applies to add Arm (PPC as well) SoCs which might have different
> > schemes depending on the memory controller. IMO, this should not be
> > architecture specific.
> >
> > I agree in principle.
> > I will summarize the
> > https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback:
> >
> > 1) For x86 arch, it is architecture-specific
> > 2) For power PC arch, It is architecture-specific
> > 3) For the ARM case, it will be the memory controller specific.
> > 4) For the ARM case, The memory controller is not using the existing
> > x86 arch formula.
> > 5) If it is memory/arch-specific, Can userspace code find the optimal
> > alignment? In the case of octeontx2/arm64, the memory controller does  XOR
> > on PA address which userspace code doesn't have much control.
> >
> > This patch address the known case of (1), (2),  (4) and (5). (2) can be added to
> > this framework when POWER9 folks want it.
> >
> > We can extend this patch to address (3) if there is a case. Without the actual
> > requirement(If some can share the formula of alignment which is the
> > memory controller specific and it does not come under (4))) then we can
> > create extra layer for the memory controller and abstraction to probe it.
> > Again there is no standard way of probing the memory controller in
> > userspace and we need platform #define, which won't work for distribution
> > build.
> > So solution needs to be arch-specific and then fine-tune to memory controller
> > if possible.
> >
> > I can work on creating an extra layer of code if some can provide the details
> > of the memory controller and probing mechanism or this patch be extended
> Inputs for BlueField, DPAAx, ThunderX2 would be helpful.

Yes. Probably the memory controller used in the n1sdp SoC as well.

>
> > to support such case if it arises in future.
> >
> > Thoughts?
> How much memory will this save for your platform? Is it affecting performance?

No performance difference.

The existing code adds the trailer to each obj. The additional/trailer
space will be a function of the number of objects in the mempool, the
obj_size and its alignment, and the selected
rte_memory_get_nchannel() and rte_memory_get_nrank().

I will wait for inputs from BlueField, DPAAx, ThunderX2 and n1sdp (if
any) before any rework on the patch.

>
> >
> > >
> > > > +static unsigned
> > > > +arch_mem_object_align(unsigned obj_size) {
> > > > +     return obj_size;
> > > > +}
> > > > +#endif
> > > >
> > > >  struct pagesz_walk_arg {
> > > >       int socket_id;
> > > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size,
> > > > uint32_t flags,
> > > >        */
> > > >       if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
> > > >               unsigned new_size;
> > > > -             new_size = optimize_object_size(sz->header_size + sz-
> > > > >elt_size +
> > > > -                     sz->trailer_size);
> > > > +             new_size = arch_mem_object_align
> > > > +                         (sz->header_size + sz->elt_size +
> > > > + sz->trailer_size);
> > > >               sz->trailer_size = new_size - sz->header_size - sz->elt_size;
> > > >       }
> > > >
> > > > --
> > > > 2.24.1
> > >
  
Olivier Matz Dec. 27, 2019, 3:54 p.m. UTC | #8
Hi,

On Sat, Dec 21, 2019 at 10:36:15AM +0530, Jerin Jacob wrote:
> On Sat, Dec 21, 2019 at 2:37 AM Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com> wrote:
> >
> > <snip>
> > > > > From: Jerin Jacob <jerinj@marvell.com>
> > > > >
> > > > > The exiting optimize_object_size() function address the memory
> > > > > object alignment constraint on x86 for better performance.
> > > > >
> > > > > Different (Mirco) architecture may have different memory alignment
> > > > > constraint for better performance and it not same as the existing
> > > > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme
> > > > > to enable DRAM channel distribution based on the address and some
> > > > > may have a different formula.

typo: Mirco -> Micro
Maybe the whole sentence can be reworded a bit (I think a word is missing).

> > > > If I understand correctly, address interleaving is the characteristic of the
> > > memory controller and not the CPU.
> > > > For ex: different SoCs using the same Arm architecture might have different
> > > memory controllers. So, the solution should not be architecture specific, but
> > > SoC specific.
> > >
> > > Yes.  See below.
> > >
> > > > > -static unsigned optimize_object_size(unsigned obj_size)
> > > > > +static unsigned
> > > > > +arch_mem_object_align(unsigned obj_size)
> > > > >  {
> > > > >       unsigned nrank, nchan;
> > > > >       unsigned new_obj_size;
> > > > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned
> > > > > obj_size)
> > > > >               new_obj_size++;
> > > > >       return new_obj_size * RTE_MEMPOOL_ALIGN;  }
> > > > > +#else
> > > > This applies to add Arm (PPC as well) SoCs which might have different
> > > schemes depending on the memory controller. IMO, this should not be
> > > architecture specific.
> > >
> > > I agree in principle.
> > > I will summarize the
> > > https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback:
> > >
> > > 1) For x86 arch, it is architecture-specific
> > > 2) For power PC arch, It is architecture-specific
> > > 3) For the ARM case, it will be the memory controller specific.
> > > 4) For the ARM case, The memory controller is not using the existing
> > > x86 arch formula.
> > > 5) If it is memory/arch-specific, Can userspace code find the optimal
> > > alignment? In the case of octeontx2/arm64, the memory controller does  XOR
> > > on PA address which userspace code doesn't have much control.
> > >
> > > This patch address the known case of (1), (2),  (4) and (5). (2) can be added to
> > > this framework when POWER9 folks want it.
> > >
> > > We can extend this patch to address (3) if there is a case. Without the actual
> > > requirement(If some can share the formula of alignment which is the
> > > memory controller specific and it does not come under (4))) then we can
> > > create extra layer for the memory controller and abstraction to probe it.
> > > Again there is no standard way of probing the memory controller in
> > > userspace and we need platform #define, which won't work for distribution
> > > build.
> > > So solution needs to be arch-specific and then fine-tune to memory controller
> > > if possible.
> > >
> > > I can work on creating an extra layer of code if some can provide the details
> > > of the memory controller and probing mechanism or this patch be extended
> > Inputs for BlueField, DPAAx, ThunderX2 would be helpful.
> 
> Yes. Probably memory controller used in n1sdp SoC also.
> 
> >
> > > to support such case if it arises in future.
> > >
> > > Thoughts?
> > How much memory will this save for your platform? Is it affecting performance?

Currently, I think Arm-based architectures use the default (nchan=4,
nrank=1). The worst case is for object whose size (including mempool
header) is 2 cache lines, where it is optimized to 3 cache lines (+50%).

Examples for cache lines size = 64:
  orig     optimized
  64    -> 64           +0%
  128   -> 192          +50%
  192   -> 192          +0%
  256   -> 320          +25%
  320   -> 320          +0%
  384   -> 448          +16%
  ...
  2304  -> 2368         +2.7%  (~mbuf size)
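The padding in the table above follows directly from the gcd-based spread in optimize_object_size(); a minimal standalone sketch, assuming the defaults nchan=4, nrank=1 and a 64-byte RTE_MEMPOOL_ALIGN (a plain Euclid gcd stands in for DPDK's "fast" get_gcd; the result is the same):

```c
#define RTE_MEMPOOL_ALIGN 64	/* assumes a 64-byte cache line */

/* plain Euclid gcd; DPDK uses a binary variant with the same result */
static unsigned int
get_gcd(unsigned int a, unsigned int b)
{
	while (b != 0) {
		unsigned int t = b;

		b = a % b;
		a = t;
	}
	return a;
}

/* the x86 spread logic with the defaults nchan=4, nrank=1 */
static unsigned int
optimize_object_size(unsigned int obj_size)
{
	unsigned int new_obj_size =
		(obj_size + RTE_MEMPOOL_ALIGN - 1) / RTE_MEMPOOL_ALIGN;

	/* grow until the size in cache lines is coprime with nchan*nrank,
	 * so consecutive objects start on different channels/ranks */
	while (get_gcd(new_obj_size, 4 * 1) != 1)
		new_obj_size++;
	return new_obj_size * RTE_MEMPOOL_ALIGN;
}
```

For example, optimize_object_size(128) returns 192 and optimize_object_size(2304) returns 2368, matching the +50% and +2.7% rows.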

> No performance difference.
> 
> The existing code adding the tailer for each objs.
> Additional space/Trailer space will be function of number of objects
> in mempool  and its obj_size, its alignment and selected
> rte_memory_get_nchannel() and rte_memory_get_nrank()
> 
> I will wait for inputs from Bluefield, DPAAx, ThunderX2 and n1sdp(if
> any) for any rework on the patch.

If there is no performance impact on other supported Arm-based
architectures, I think it is a step in the right direction.

> > > > > +static unsigned
> > > > > +arch_mem_object_align(unsigned obj_size) {
> > > > > +     return obj_size;
> > > > > +}
> > > > > +#endif

I'd prefer "unsigned int" for new code.
Also, the opening brace should be on a separate line.

The documentation of the MEMPOOL_F_NO_SPREAD flag in the .h could be
slightly modified, as you did for the comment above
arch_mem_object_align().
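With both style comments applied, the non-x86 fallback from the patch would read (a sketch of the requested v2 style, not the submitted code):

```c
/* On non-x86 architectures the spread scheme above does not apply;
 * return the size unchanged rather than padding the object. */
static unsigned int
arch_mem_object_align(unsigned int obj_size)
{
	return obj_size;
}
```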

> > > > >
> > > > >  struct pagesz_walk_arg {
> > > > >       int socket_id;
> > > > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size,
> > > > > uint32_t flags,
> > > > >        */
> > > > >       if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
> > > > >               unsigned new_size;
> > > > > -             new_size = optimize_object_size(sz->header_size + sz-
> > > > > >elt_size +
> > > > > -                     sz->trailer_size);
> > > > > +             new_size = arch_mem_object_align
> > > > > +                         (sz->header_size + sz->elt_size +
> > > > > + sz->trailer_size);
> > > > >               sz->trailer_size = new_size - sz->header_size - sz->elt_size;
> > > > >       }
> > > > >
> > > > > --
> > > > > 2.24.1
> > > >
  

Patch

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 3bb84b0a6..eea7a2906 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -27,10 +27,10 @@  In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled),
 statistics about get from/put in the pool are stored in the mempool structure.
 Statistics are per-lcore to avoid concurrent access to statistics counters.
 
-Memory Alignment Constraints
-----------------------------
+Memory Alignment Constraints on X86 architecture
+------------------------------------------------
 
-Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects.
+Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects.
 The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded.
 
 This is particularly true for packet buffers when doing L3 forwarding or flow classification.
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 78d8eb941..871894525 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -45,6 +45,7 @@  EAL_REGISTER_TAILQ(rte_mempool_tailq)
 #define CALC_CACHE_FLUSHTHRESH(c)	\
 	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
 
+#if defined(RTE_ARCH_X86)
 /*
  * return the greatest common divisor between a and b (fast algorithm)
  *
@@ -74,12 +75,13 @@  static unsigned get_gcd(unsigned a, unsigned b)
 }
 
 /*
- * Depending on memory configuration, objects addresses are spread
+ * Depending on memory configuration on x86 arch, objects addresses are spread
  * between channels and ranks in RAM: the pool allocator will add
  * padding between objects. This function return the new size of the
  * object.
  */
-static unsigned optimize_object_size(unsigned obj_size)
+static unsigned
+arch_mem_object_align(unsigned obj_size)
 {
 	unsigned nrank, nchan;
 	unsigned new_obj_size;
@@ -99,6 +101,13 @@  static unsigned optimize_object_size(unsigned obj_size)
 		new_obj_size++;
 	return new_obj_size * RTE_MEMPOOL_ALIGN;
 }
+#else
+static unsigned
+arch_mem_object_align(unsigned obj_size)
+{
+	return obj_size;
+}
+#endif
 
 struct pagesz_walk_arg {
 	int socket_id;
@@ -234,8 +243,8 @@  rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	 */
 	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
 		unsigned new_size;
-		new_size = optimize_object_size(sz->header_size + sz->elt_size +
-			sz->trailer_size);
+		new_size = arch_mem_object_align
+			    (sz->header_size + sz->elt_size + sz->trailer_size);
 		sz->trailer_size = new_size - sz->header_size - sz->elt_size;
 	}