[v3,3/5] lib/mempool: allow page size aligned mempool

Message ID 20190321091845.78495-4-xiaolong.ye@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: Introduce AF_XDP PMD

Checks

Context               Check    Description
ci/Intel-compilation  fail     Compilation issues
ci/checkpatch         success  coding style OK

Commit Message

Xiaolong Ye March 21, 2019, 9:18 a.m. UTC
  Allow creating a mempool with a page-size-aligned base address.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
---
 lib/librte_mempool/rte_mempool.c | 3 +++
 lib/librte_mempool/rte_mempool.h | 1 +
 2 files changed, 4 insertions(+)
  

Comments

Ananyev, Konstantin March 21, 2019, 2 p.m. UTC | #1
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xiaolong Ye
> Sent: Thursday, March 21, 2019 9:19 AM
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Karlsson, Magnus <magnus.karlsson@intel.com>; Topel, Bjorn <bjorn.topel@intel.com>; Ye,
> Xiaolong <xiaolong.ye@intel.com>
> Subject: [dpdk-dev] [PATCH v3 3/5] lib/mempool: allow page size aligned mempool
> 
> Allow create a mempool with page size aligned base address.
> 
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 3 +++
>  lib/librte_mempool/rte_mempool.h | 1 +
>  2 files changed, 4 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 683b216f9..33ab6a2b4 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -543,6 +543,9 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  		if (try_contig)
>  			flags |= RTE_MEMZONE_IOVA_CONTIG;
> 
> +		if (mp->flags & MEMPOOL_F_PAGE_ALIGN)
> +			align = getpagesize();
> +

Might be a bit safer:
pg_sz = getpagesize();
align = RTE_MAX(align, pg_sz);

BTW, why do you need it to always be aligned to the default page size?
Is it for 'external' memory allocation or even for eal hugepages too?
Konstantin

>  		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>  				mp->socket_id, flags, align);
>
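
Applied to the quoted hunk, the safer variant would look roughly like the
sketch below. Only MEMPOOL_F_PAGE_ALIGN, getpagesize() and RTE_MAX() come
from the thread; the pg_sz local and its size_t type are assumptions about
the surrounding function, not part of the posted patch.

		if (try_contig)
			flags |= RTE_MEMZONE_IOVA_CONTIG;

		if (mp->flags & MEMPOOL_F_PAGE_ALIGN) {
			/* Raise the alignment to the page size, but never
			 * lower an alignment computed earlier in the loop. */
			size_t pg_sz = getpagesize();

			align = RTE_MAX(align, pg_sz);
		}

		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
				mp->socket_id, flags, align);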
  
Qi Zhang March 21, 2019, 2:23 p.m. UTC | #2
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 21, 2019 10:01 PM
> To: Ye, Xiaolong <xiaolong.ye@intel.com>; dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Karlsson, Magnus
> <magnus.karlsson@intel.com>; Topel, Bjorn <bjorn.topel@intel.com>; Ye,
> Xiaolong <xiaolong.ye@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 3/5] lib/mempool: allow page size aligned
> mempool
> 
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xiaolong Ye
> > Sent: Thursday, March 21, 2019 9:19 AM
> > To: dev@dpdk.org
> > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Karlsson, Magnus
> > <magnus.karlsson@intel.com>; Topel, Bjorn <bjorn.topel@intel.com>; Ye,
> > Xiaolong <xiaolong.ye@intel.com>
> > Subject: [dpdk-dev] [PATCH v3 3/5] lib/mempool: allow page size
> > aligned mempool
> >
> > Allow create a mempool with page size aligned base address.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
> > ---
> >  lib/librte_mempool/rte_mempool.c | 3 +++
> > lib/librte_mempool/rte_mempool.h | 1 +
> >  2 files changed, 4 insertions(+)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c
> > b/lib/librte_mempool/rte_mempool.c
> > index 683b216f9..33ab6a2b4 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -543,6 +543,9 @@ rte_mempool_populate_default(struct rte_mempool
> *mp)
> >  		if (try_contig)
> >  			flags |= RTE_MEMZONE_IOVA_CONTIG;
> >
> > +		if (mp->flags & MEMPOOL_F_PAGE_ALIGN)
> > +			align = getpagesize();
> > +
> 
> Might be a bit safer:
> pg_sz = getpagesize();
> align = RTE_MAX(align, pg_sz);
> 
> BTW, why do you need it to always be aligned to the default page size?
> Is it for 'external' memory allocation or even for eal hugepages too?

This helps us enable zero copy between the XDP umem and mbufs.
The af_xdp umem requires a 2K chunk size, with each chunk aligned on a 2K address.

Qi


> Konstantin
> 
> >  		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> >  				mp->socket_id, flags, align);
> >
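
As an illustration of how a consumer such as the AF_XDP PMD could request
such a pool, a minimal sketch follows. Only the MEMPOOL_F_PAGE_ALIGN flag
comes from this patch; the helper itself, the pool name, the 2048-byte
element size and the object count are illustrative assumptions.

#include <rte_mempool.h>

/* Hypothetical helper: create a pool whose backing memzone starts on a
 * page boundary, which the AF_XDP zero-copy path builds on. */
static struct rte_mempool *
umem_pool_create(const char *name, unsigned int n_objs, int socket_id)
{
	return rte_mempool_create(name, n_objs,
			2048,           /* elt_size: one 2K umem chunk */
			0,              /* no per-lcore cache */
			0,              /* no private data */
			NULL, NULL,     /* no pool constructor */
			NULL, NULL,     /* no per-object init callback */
			socket_id,
			MEMPOOL_F_PAGE_ALIGN);
}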
  

Patch

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 683b216f9..33ab6a2b4 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -543,6 +543,9 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 		if (try_contig)
 			flags |= RTE_MEMZONE_IOVA_CONTIG;
 
+		if (mp->flags & MEMPOOL_F_PAGE_ALIGN)
+			align = getpagesize();
+
 		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 				mp->socket_id, flags, align);
 
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 7c9cd9a2f..75553b36f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -264,6 +264,7 @@  struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
 #define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
+#define MEMPOOL_F_PAGE_ALIGN     0x0040 /**< Chunk's base address is page aligned */
 
 /**
  * @internal When debug is enabled, store some statistics.