[dpdk-dev] [PATCH v4 6/7] mempool: introduce block size align flag

Olivier MATZ olivier.matz at 6wind.com
Mon Sep 4 18:20:39 CEST 2017


On Tue, Aug 15, 2017 at 11:37:42AM +0530, Santosh Shukla wrote:
> Some mempool hardware, like the octeontx/fpa block, demands that the
> object start address be aligned to the block size (i.e. total_elt_sz).
> 
> Introduce a MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
> If this flag is set:
> - Align object start address to a multiple of total_elt_sz.

Please specify whether it is the virtual or the physical address that must
be aligned.

What do you think about MEMPOOL_F_BLK_ALIGNED_OBJECTS instead?

I don't really like BLK because the word "block" is not used anywhere
else in the mempool code. But I cannot find any good replacement for
it. If you have another idea, please suggest it.

> - Allocate one additional object. The additional object is needed to
>   make sure that the requested 'n' objects get correctly populated.
>   Example:
> 
> - Let's say we get a memory chunk of size 'x' from the memzone.
> - And the application has requested 'n' objects from the mempool.
> - Ideally, we would place the n objects starting at address 0 and going
>   up to (x - block_sz).
> - But the first object address, i.e. 0, is not necessarily aligned to
>   block_sz.
> - So we derive an offset 'off' for block_sz alignment purposes.
> - That 'off' makes sure that the start address of each object is
>   block_sz aligned.
> - Applying 'off' may sacrifice the first block_sz area of the memzone
>   area x. So the total number of objects that fit in the pool area is
>   n-1, which is incorrect behavior.
> 
> Therefore we request one additional object (i.e. one more block_sz area)
> from the memzone when the F_BLK_SZ_ALIGNED flag is set.
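
For readers following the thread, here is a rough standalone sketch of the
logic described above (illustrative names only, this is not the patch code):

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch: offset needed so that the first object starts at an address
 * that is a multiple of total_elt_sz.
 */
static size_t
blk_align_off(uintptr_t start, size_t total_elt_sz)
{
	uintptr_t rem = start % total_elt_sz;

	return rem ? total_elt_sz - rem : 0;
}

int main(void)
{
	size_t total_elt_sz = 1900;      /* example block size */
	unsigned int n = 10;             /* objects requested */
	uintptr_t chunk_start = 0x1000;  /* example chunk start address */

	size_t off = blk_align_off(chunk_start, total_elt_sz);

	/* One extra block is reserved so that losing up to 'off' bytes at
	 * the start of the chunk does not reduce the usable count below n. */
	size_t chunk_len = (n + 1) * total_elt_sz;
	unsigned int usable = (chunk_len - off) / total_elt_sz;

	printf("off=%zu usable=%u\n", off, usable);
	return usable >= n ? 0 : 1;
}

The extra block simply absorbs the bytes sacrificed to 'off', so the usable
object count stays at n.
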
> 
> Signed-off-by: Santosh Shukla <santosh.shukla at caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
>  lib/librte_mempool/rte_mempool.h |  1 +
>  2 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 19e5e6ddf..7610f0d1f 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>   */
>  size_t
>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> -		      __rte_unused const struct rte_mempool *mp)
> +		      const struct rte_mempool *mp)
>  {
>  	size_t obj_per_page, pg_num, pg_sz;
>  
> +	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
> +		/* alignment need one additional object */
> +		elt_num += 1;
> +
>  	if (total_elt_sz == 0)
>  		return 0;

I'm wondering whether this is still correct when the mempool area is not
contiguous.

For instance:
 page size = 4096
 object size = 1900
 elt_num = 10

With your calculation, you will request (11+2-1)/2 = 6 pages.
But you may actually need up to 10 pages, since the number of objects per
page matching the alignment constraint can be 1, not 2.
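
To make that concrete, here is a rough standalone sketch (not mempool code)
which assumes that an object must start at a multiple of the element size
and must not cross a page boundary:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const size_t pg_sz = 4096, elt_sz = 1900;
	unsigned int placed = 0, pages = 0;

	/* Walk consecutive pages; an object is usable only if its start
	 * address is a multiple of elt_sz and it fits entirely in the page. */
	for (uintptr_t pg = 0; placed < 10; pg += pg_sz, pages++) {
		uintptr_t obj = pg + (pg % elt_sz ? elt_sz - pg % elt_sz : 0);

		for (; obj + elt_sz <= pg + pg_sz && placed < 10; obj += elt_sz)
			placed++;
	}
	printf("pages needed: %u\n", pages); /* prints 8 with this layout */
	return 0;
}

Even with consecutive page addresses this needs 8 pages instead of the 6
computed above; if the pages are not contiguous, the worst case is one
object per page, i.e. 10 pages.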


