[dpdk-dev] [PATCH] mempool: limit cache_size

Ananyev, Konstantin konstantin.ananyev at intel.com
Mon May 18 15:14:50 CEST 2015



> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss at linaro.org]
> Sent: Monday, May 18, 2015 1:50 PM
> To: Ananyev, Konstantin; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> 
> 
> 
> On 18/05/15 13:41, Ananyev, Konstantin wrote:
> >
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Zoltan Kiss
> >> Sent: Monday, May 18, 2015 1:28 PM
> >> To: dev at dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> >>
> >> Hi,
> >>
> >> Any opinion on this patch?
> >>
> >> Regards,
> >>
> >> Zoltan
> >>
> >> On 13/05/15 19:59, Zoltan Kiss wrote:
> >>> Otherwise cache_flushthresh can be bigger than n, and
> >>> a consumer can starve others by keeping every element
> >>> either in use or in the cache.
> >>>
> >>> Signed-off-by: Zoltan Kiss <zoltan.kiss at linaro.org>
> >>> ---
> >>>    lib/librte_mempool/rte_mempool.c | 3 ++-
> >>>    lib/librte_mempool/rte_mempool.h | 2 +-
> >>>    2 files changed, 3 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >>> index cf7ed76..ca6cd9c 100644
> >>> --- a/lib/librte_mempool/rte_mempool.c
> >>> +++ b/lib/librte_mempool/rte_mempool.c
> >>> @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> >>>    	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> >>>
> >>>    	/* asked cache too big */
> >>> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> >>> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> >>> +	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
> >>>    		rte_errno = EINVAL;
> >>>    		return NULL;
> >>>    	}
> >
> > Why not just 'cache_size > n' then?
> 
> The commit message says: "Otherwise cache_flushthresh can be bigger than
> n, and a consumer can starve others by keeping every element either in
> use or in the cache."

Ah yes, you're right - your condition is more restrictive, which is better.
Though here you implicitly convert cache_size and n to floats and compare two floats:
(uint32_t)cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n
Shouldn't it be:
(uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n
so that the product is converted back to uint32_t and we compare unsigned integers instead?
That would also match what is done below:
mp->cache_flushthresh = (uint32_t)
                (cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
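
To illustrate the difference, a small standalone sketch (not the DPDK code
itself; the values are made up just to show the two behaviours):

#include <stdio.h>
#include <stdint.h>

#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5

int main(void)
{
	unsigned cache_size = 3;
	unsigned n = 4;

	/* Form from the patch: the multiplication promotes the casted
	 * cache_size to double, n gets converted as well, so '>' compares
	 * two floating point values: 4.5 > 4.0 is true. */
	int fp_cmp = (uint32_t)cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n;

	/* Suggested form: the product is truncated back to uint32_t first,
	 * so '>' compares unsigned integers: 4 > 4 is false. */
	int uint_cmp = (uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n;

	printf("float compare: %d, integer compare: %d\n", fp_cmp, uint_cmp);
	return 0;
}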

In fact, as we use it more than once, it probably makes sense to create a macro for it,
something like:
#define CALC_CACHE_FLUSHTHRESH(c)	((uint32_t)((c) * CACHE_FLUSHTHRESH_MULTIPLIER))

Or even

#define CALC_CACHE_FLUSHTHRESH(c)	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
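
For completeness, a rough sketch of how such a macro could be used in both
places (assuming the uint32_t variant; an illustration only, not the actual
patch):

	/* parameter check in rte_mempool_xmem_create() */
	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
		rte_errno = EINVAL;
		return NULL;
	}

	/* ... later, when the mempool structure is filled in ... */
	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);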


Konstantin

> 
> > Konstantin
> >
> >>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >>> index 9001312..a4a9610 100644
> >>> --- a/lib/librte_mempool/rte_mempool.h
> >>> +++ b/lib/librte_mempool/rte_mempool.h
> >>> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> >>>     *   If cache_size is non-zero, the rte_mempool library will try to
> >>>     *   limit the accesses to the common lockless pool, by maintaining a
> >>>     *   per-lcore object cache. This argument must be lower or equal to
> >>> - *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> >>> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> >>>     *   cache_size to have "n modulo cache_size == 0": if this is
> >>>     *   not the case, some elements will always stay in the pool and will
> >>>     *   never be used. The access to the per-lcore table is of course
> >>>

