[dpdk-dev] [PATCH] mempool: allow for user-owned mempool caches

Lazaros Koromilas l at nofutznetworks.com
Thu Mar 24 14:51:25 CET 2016


Hi Konstantin,

On Mon, Mar 21, 2016 at 3:15 PM, Ananyev, Konstantin
<konstantin.ananyev at intel.com> wrote:
>
> Hi lads,
>
>>
>> Hi Lazaros,
>>
>> Thanks for this patch. To me, this is a valuable enhancement.
>> Please find some comments inline.
>
> Yep, patch seems interesting.
> One question - what would be the usage model for the get/put_with_cache functions for non-EAL threads?
> I mean, for each non-EAL thread, the user will need to maintain an array (or list or some other structure)
> of <mempool_pointer, mempool_cache> pairs to make sure that the cache always matches the mempool,
> correct?

Yes, the user would also have to keep track of the caches. As for the
use case, the original idea was to enable caches for threads that run
on the same core. There are cases where this can work, as described in
the programming guide's section on multiple pthreads. Even with two
EAL threads, we currently cannot have multiple caches on a single
core, unless we use the --lcores long option to separate the lcore id
(which really is a thread id here) from the actual core affinity.

So the cache should be a member of the thread and not the mempool, in
my opinion. It has to be used consistently with the same mempool,
though.
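
To make the usage model concrete, here is roughly what I have in mind
for a non-EAL thread. This is only a sketch: I'm assuming the
with_cache bulk variants take the cache as a trailing argument,
mirroring the existing bulk calls.

    #include <rte_mempool.h>
    #include <rte_errno.h>

    /* Pair a mempool with the cache this thread uses for it. */
    struct thread_ctx {
            struct rte_mempool *mp;
            struct rte_mempool_cache *cache; /* valid for mp only */
    };

    static int
    thread_ctx_init(struct thread_ctx *ctx, struct rte_mempool *mp,
                    uint32_t cache_size)
    {
            ctx->mp = mp;
            ctx->cache = rte_mempool_cache_create(cache_size);
            return ctx->cache != NULL ? 0 : -rte_errno;
    }

    static int
    thread_get_bulk(struct thread_ctx *ctx, void **objs, unsigned n)
    {
            /* Always use the cache with its own mempool. */
            return rte_mempool_get_bulk_with_cache(ctx->mp, objs, n,
                                                   ctx->cache);
    }

    static void
    thread_put_bulk(struct thread_ctx *ctx, void * const *objs,
                    unsigned n)
    {
            rte_mempool_put_bulk_with_cache(ctx->mp, objs, n,
                                            ctx->cache);
    }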

> Again, for non-EAL threads, don't we need some sort of invalidate_cache()
> that would put all mbufs in the cache back to the pool?
> For the thread termination case, for example?

I think EAL threads would also benefit from an invalidate() function
call. This could be part of the free() that Olivier proposed.
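
Something along these lines, for example. The name is hypothetical,
and I'm assuming the cache flushes straight to mp->ring, as the
existing flush-threshold path does:

    /* Return all objects held in the cache to the pool and reset
     * the cache.  Meant to be called before thread termination.
     * The pool's ring is sized to hold all objects, so the enqueue
     * cannot fail for objects that came from this pool. */
    static void
    rte_mempool_cache_invalidate(struct rte_mempool *mp,
                                 struct rte_mempool_cache *cache)
    {
            if (cache->len == 0)
                    return;
            rte_ring_mp_enqueue_bulk(mp->ring, cache->objs, cache->len);
            cache->len = 0;
    }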

Thanks!
Lazaros.

>
>>
>> On 03/10/2016 03:44 PM, Lazaros Koromilas wrote:
>> > The mempool cache is only available to EAL threads as a per-lcore
>> > resource. Change this so that the user can create and provide their own
>> > cache on mempool get and put operations. This works with non-EAL threads
>> > too. This commit introduces new API calls with the 'with_cache' suffix,
>> > while the current ones default to the per-lcore local cache.
>> >
>> > Signed-off-by: Lazaros Koromilas <l at nofutznetworks.com>
>> > ---
>> >  lib/librte_mempool/rte_mempool.c |  65 +++++-
>> >  lib/librte_mempool/rte_mempool.h | 442 ++++++++++++++++++++++++++++++++++++---
>> >  2 files changed, 467 insertions(+), 40 deletions(-)
>> >
>> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> > index f8781e1..cebc2b7 100644
>> > --- a/lib/librte_mempool/rte_mempool.c
>> > +++ b/lib/librte_mempool/rte_mempool.c
>> > @@ -375,6 +375,43 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>> >     return usz;
>> >  }
>> >
>> > +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>
>> I wonder if this wouldn't cause a conflict with Keith's patch
>> that removes some of the RTE_MEMPOOL_CACHE_MAX_SIZE #ifdefs.
>> See: http://www.dpdk.org/dev/patchwork/patch/10492/
>>
>> As this patch is already acked for 16.07, I think that your v2
>> could be rebased on top of it to avoid conflicts when Thomas
>> applies it.
>>
>> By the way, I also encourage you to have a look at other works in
>> progress in mempool:
>> http://www.dpdk.org/ml/archives/dev/2016-March/035107.html
>> http://www.dpdk.org/ml/archives/dev/2016-March/035201.html
>>
>>
>> > +static void
>> > +mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
>> > +{
>> > +   cache->size = size;
>> > +   cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
>> > +   cache->len = 0;
>> > +}
>> > +
>> > +/*
>> > + * Creates and initializes a cache for objects that are retrieved from and
>> > + * returned to an underlying mempool. This structure is identical to the
>> > + * structure included inside struct rte_mempool.
>> > + */
>>
>> On top of Keith's patch, this comment may be reworked as the cache
>> structure is not included in the mempool structure anymore.
>>
>> nit: I think the imperative form is preferred
>>
>>
>> > +struct rte_mempool_cache *
>> > +rte_mempool_cache_create(uint32_t size)
>
> I suppose an extra socket_id parameter is needed, so you can call
> zmalloc_socket() beneath.
>
>> > +{
>> > +   struct rte_mempool_cache *cache;
>> > +
>> > +   if (size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
>> > +           rte_errno = EINVAL;
>> > +           return NULL;
>> > +   }
>> > +
>> > +   cache = rte_zmalloc("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE);
>> > +   if (cache == NULL) {
>> > +           RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache!\n");
>>
>> I would remove the '!'
>>
>>
>>
>> > @@ -587,10 +624,18 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>> >     mp->elt_size = objsz.elt_size;
>> >     mp->header_size = objsz.header_size;
>> >     mp->trailer_size = objsz.trailer_size;
>> > -   mp->cache_size = cache_size;
>> > -   mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>> > +   mp->cache_size = cache_size; /* Keep this for backwards compat. */
>>
>> I'm wondering if this should be kept for compat or if it makes sense
>> to keep it. The question is: do we want the cache_size to be a parameter
>> of the mempool or should it be a parameter of the cache?
>>
>> I think we could remove this field from the mempool structure.
>>
>>
>> > @@ -673,13 +718,17 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>> >  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>> >     unsigned lcore_id;
>> >     unsigned count = 0;
>> > +   unsigned cache_size;
>> >     unsigned cache_count;
>> >
>> >     fprintf(f, "  cache infos:\n");
>> > -   fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
>> >     for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>> > +           cache_size = mp->local_cache[lcore_id].size;
>> > +           fprintf(f, "    cache_size[%u]=%"PRIu32"\n",
>> > +                   lcore_id, cache_size);
>> >             cache_count = mp->local_cache[lcore_id].len;
>> > -           fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
>> > +           fprintf(f, "    cache_count[%u]=%"PRIu32"\n",
>> > +                   lcore_id, cache_count);
>> >             count += cache_count;
>>
>> Does it still make sense to dump the contents of the caches, given
>> that some external caches may exist?
>>
>> If we want to list all caches, we could imagine a sort of cache
>> registration before using a cache structure. Example:
>>
>>    cache = rte_mempool_add_cache()
>>    rte_mempool_del_cache()
>>
>> All the caches could be browsed internally using a list.
>>
>> Thoughts?
>
> Seems a bit excessive to me.
> People can still have and use an external cache without any registration.
> I suppose all we can do here is print the 'internal' caches.
> Konstantin
>

