[dpdk-dev] [PATCH v2] mempool: Introduce _populate_mz_range api

Santosh Shukla santosh.shukla at caviumnetworks.com
Tue Jan 31 15:32:26 CET 2017


Hi Olivier,

Reply inline.

On Tue, Jan 31, 2017 at 11:31:51AM +0100, Olivier Matz wrote:
> Hi Santosh,
> 
> I guess this patch is targeted for 17.05, right?
>

Yes.

> Please see some other comments below.
> 
> On Fri, 20 Jan 2017 20:43:41 +0530, <santosh.shukla at caviumnetworks.com>
> wrote:
> > From: Santosh Shukla <santosh.shukla at caviumnetworks.com>
> > 
> > HW pool managers, e.g. on Cavium SoCs, need s/w to program the start
> > and end address of the pool. Currently there is no such api in ext-mempool.
> 
> Today, the mempool objects are not necessarily contiguous in
> virtual or physical memory. The only assumption that can be made is
> that each object is contiguous (virtually and physically). If the flag
> MEMPOOL_F_NO_PHYS_CONTIG is passed, each object is assured to be
> contiguous virtually.
>

Some clarification: IIUC, the pa/va addresses within each hugepage (aka mz)
are contiguous, but there is no guarantee of contiguity across hugepages (aka
multiple mzs). Right?

If the above is correct, then a case where the pool start addr is in one mz and
the end addr is in another mz is a problem scenario, correct? That said,
ext-mempool drivers will be affected in such cases.

> > So introducing _populate_mz_range API which will let HW(pool manager)
> > know about hugepage mapped virtual start and end address.
> 
> rte_mempool_ops_populate_mz_range() looks a bit long. What about
> rte_mempool_ops_populate() instead?

Ok.

> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 1c2aed8..9a39f5c 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -568,6 +568,10 @@ static unsigned optimize_object_size(unsigned obj_size)
> >  		else
> >  			paddr = mz->phys_addr;
> >  
> > +		/* Populate mz range */
> > +		if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0)
> > +			rte_mempool_ops_populate_mz_range(mp, mz);
> > +
> >  		if (rte_eal_has_hugepages()
> 
> Given what I've said above, I think the populate() callback should be
> in rte_mempool_populate_phys() instead of
> rte_mempool_populate_default(). It would be called for each contiguous
> zone.
>

Ok.

> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -387,6 +387,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
> >   */
> >  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> > +/**
> > + * Set the memzone va/pa addr range and len in the external pool.
> > + */
> > +typedef void (*rte_mempool_populate_mz_range_t)(struct rte_mempool
> > *mp,
> > +		const struct rte_memzone *mz);
> > +
> 
> And this API would be:
> typedef void (*rte_mempool_populate_t)(struct rte_mempool *mp,
> 		char *vaddr, phys_addr_t paddr, size_t len)
>  
> 
> If your hw absolutely needs contiguous memory, a solution could be:
> 
> - add a new flag MEMPOOL_F_CONTIG (maybe a better name could be
>   found) saying that all the mempool objects must be contiguous.
> - add the ops_populate() callback in rte_mempool_populate_phys(), as
>   suggested above
> 
> Then:
> 
>   /* create an empty mempool */
>   rte_mempool_create_empty(...);
> 
>   /* set the handler:
>    *   in the ext handler, the mempool flags are updated with
>    *   MEMPOOL_F_CONTIG
>    */
>   rte_mempool_set_ops_byname(..., "my_hardware");
> 

You mean the ext handler will set the mempool flag using the 'pool_config'
param, something like rte_mempool_set_ops_byname(..., "my_hardware", MEMPOOL_F_CONTIG)?

>   /* if MEMPOOL_F_CONTIG is set, all populate() functions should ensure
>    * that there is only one contiguous zone
>    */
>   rte_mempool_populate_default(...);
> 

I am not too sure about the scope of MEMPOOL_F_CONTIG. How can the
MEMPOOL_F_CONTIG flag {set by the application/driver by calling
rte_mempool_create(..., flag)} make sure there is only one contiguous zone?
I'd like to understand this from you.

In my understanding:
rte_mempool_populate_default() will request memzones based on the mp->size
value, and if mp->size is more than one hugepage_sz {i.e. one mz request is
not enough} then it will request more hugepages {i.e. more mz requests}. So
IIUC, contiguity is not guaranteed.

> Regards,
> Olivier

