[dpdk-dev] [PATCH 1/5] mempool: add external mempool manager support

Hunt, David david.hunt at intel.com
Tue Mar 8 11:04:31 CET 2016


Hi Olivier,

On 3/4/2016 9:05 AM, Olivier MATZ wrote:
> Hi David,
>
>>>> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name,
>>>> unsigned n, unsigned elt_size,
>>>>
>>>>        mp->elt_va_end = mp->elt_va_start;
>>>>
>>>> +    /* Parameters are setup. Call the mempool handler alloc */
>>>> +    if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
>>>> +        goto exit;
>>>> +
>>> I think some memory needs to be freed here. At least 'te'.
>> Done in v2
> Please note that in the meanwhile, this fix has been pushed (as we need
> it for next release):
> http://dpdk.org/browse/dpdk/commit/lib/librte_mempool/rte_mempool.c?id=86f36ff9578b5f3d697c8fcf6072dcb70e2b246f

v3 will be rebased on top of the latest head of the repo.
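For reference, the kind of cleanup Olivier is asking for would look
roughly like this (a sketch of the error path, not the literal pushed
fix; 'te' is the tailq entry allocated earlier in
rte_mempool_xmem_create()):

    /* Parameters are set up; let the handler allocate its storage. */
    if (rte_mempool_ext_alloc(mp, name, n, socket_id, flags) == NULL) {
        /*
         * Free what has been allocated up to this point so a failed
         * create doesn't leak, at minimum the tailq entry 'te'.
         */
        rte_free(te);
        mp = NULL;
        goto exit;
    }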


>>>> diff --git a/lib/librte_mempool/rte_mempool.h
>>>> b/lib/librte_mempool/rte_mempool.h
>>>> index 6e2390a..620cfb7 100644
>>>> --- a/lib/librte_mempool/rte_mempool.h
>>>> +++ b/lib/librte_mempool/rte_mempool.h
>>>> @@ -88,6 +88,8 @@ extern "C" {
>>>>    struct rte_mempool_debug_stats {
>>>>        uint64_t put_bulk;         /**< Number of puts. */
>>>>        uint64_t put_objs;         /**< Number of objects successfully
>>>> put. */
>>>> +    uint64_t put_pool_bulk;    /**< Number of puts into pool. */
>>>> +    uint64_t put_pool_objs;    /**< Number of objects into pool. */
>>>>        uint64_t get_success_bulk; /**< Successful allocation number. */
>>>>        uint64_t get_success_objs; /**< Objects successfully allocated. */
>>>>        uint64_t get_fail_bulk;    /**< Failed allocation number. */
>>> I think the comment of put_pool_objs is not very clear.
>>> Shouldn't we have the same stats for get?
>>>
>> Not used, removed. Covered by put_bulk.
> I guess you mean it will be removed in v3? It is still there in the v2
> (the field, not the comment that has been fixed).
>
> Shouldn't we have the same stats for get?

I believe gets are covered by the get_success_bulk and get_fail_bulk stats.
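The get path already bumps both of those counters through the stats
macro, along these lines (paraphrasing the long-standing
__MEMPOOL_STAT_ADD() pattern in rte_mempool.h, not quoting the patch):

    if (ret < 0)
        __MEMPOOL_STAT_ADD(mp, get_fail, n);
    else
        __MEMPOOL_STAT_ADD(mp, get_success, n);

    /*
     * The macro updates both counters for the chosen prefix, i.e.
     * get_success_bulk += 1 and get_success_objs += n, so every get
     * is already accounted for without extra fields.
     */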

>>>>    /**
>>>>     * The RTE mempool structure.
>>>>     */
>>>>    struct rte_mempool {
>>>>        char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>>>> -    struct rte_ring *ring;           /**< Ring to store objects. */
>>>>        phys_addr_t phys_addr;           /**< Phys. addr. of mempool
>>>> struct. */
>>>>        int flags;                       /**< Flags of the mempool. */
>>>>        uint32_t size;                   /**< Size of the mempool. */
>>>> @@ -194,6 +270,11 @@ struct rte_mempool {
>>>>
>>>>        unsigned private_data_size;      /**< Size of private data. */
>>>>
>>>> +    /* Common pool data structure pointer */
>>>> +    void *rt_pool __rte_cache_aligned;
>>> What is the meaning of rt_pool?
>> I agree that it's probably not a very good name. Since it's basically
>> the pointer which is used by the handlers callbacks, maybe we should
>> call it mempool_storage? That leaves it generic enough that it can point
>> at a ring, an array, or whatever else is needed for a particular handler.
> My question was more about the "rt_" prefix. Maybe I missed something
> obvious? I think "pool" or "pool_handler" is ok.

v3 will use "pool"
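As a sketch, the renamed member in v3 would then read something like
this (the comment wording is mine):

    /* In struct rte_mempool (v3 sketch): */
    void *pool __rte_cache_aligned; /**< Opaque storage used by the
                                     * handler callbacks: a ring for
                                     * the default handlers, but a
                                     * custom handler can point it at
                                     * an array, a stack, or whatever
                                     * else it needs. */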

>>>> -
>>>> +
>>>> +    /* add new handler index */
>>>> +    handler_idx = mempool_handler_list.num_handlers++;
>>>> +
>>>> +    snprintf(mempool_handler_list.handler[handler_idx].name,
>>>> +                RTE_MEMPOOL_NAMESIZE, "%s", h->name);
>>>> +    mempool_handler_list.handler[handler_idx].alloc = h->alloc;
>>>> +    mempool_handler_list.handler[handler_idx].put = h->put;
>>>> +    mempool_handler_list.handler[handler_idx].get = h->get;
>>>> +    mempool_handler_list.handler[handler_idx].get_count = h->get_count;
>>>> +
>>>> +    rte_spinlock_unlock(&mempool_handler_list.sl);
>>>> +
>>>> +    return handler_idx;
>>>> +}
>>> Why not using a similar mechanism than what we have for PMDs?
>>>
>>>      void rte_eal_driver_register(struct rte_driver *driver)
>>>      {
>>>          TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>>>      }
>>>
>> Already discussed previously, index needed over pointer because of
>> secondary processes.
> Could you add a comment stating this? It may help for next readers
> to have this info.

v3: A comment has been added to the header file where we define
handler_idx, explaining the use of an index versus pointers.
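Roughly the kind of comment intended (the wording below is mine, not
the v3 text):

    /*
     * Index into the global handler table rather than a pointer to
     * the callbacks. Function pointers are only valid inside the
     * process that registered them: a secondary process maps the same
     * rte_mempool from shared memory, but its copy of the handler
     * table (and the code it points to) may sit at different virtual
     * addresses. An index is position-independent, so each process
     * resolves it against its own local table.
     */
    int32_t handler_idx;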

>>> Last thing, I think this code should go in rte_mempool.c, not in
>>> rte_mempool_default.c.
>> I was trying to keep the default handlers together in their own file,
>> rather than having them in with the mempool framework. I think it's
>> better having them separate, and new handlers can go in their own files
>> also. no?
> OK for the following functions:
>   common_ring_mp_put()
>   common_ring_sp_put()
>   common_ring_mc_get()
>   common_ring_sc_get()
>   common_ring_get_count()
>   rte_mempool_common_ring_alloc()  (note: only this one has
>         a rte_mempool prefix, maybe it should be fixed)
>
> The other functions are part of the framework to add an external
> handler, I don't think they should go in rte_mempool_default.c
> They could either go in rte_mempool.c or in another file
> rte_mempool_handler.c.

v3: Agreed. The bulk of the v3 is simplification of the files.
All of the "common" callbacks are now in in rte_mempool_handler.c and 
rte_mempool_handler.h.
I've renamed the 'alloc' function above to be in line with the naming of 
the others.
The 'custom' handler has been banished to the autotest code, so as to 
keep the library as clean as possible.
What's interesting is that the autotest can have all of the code defining
the custom mempool (including its registration), keeping the library
free of user code.
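For illustration, the autotest side could register its handler along
these lines (the register function name and the custom_* callbacks are
assumptions based on the fields copied in the diff above, not the
exact v3 code):

    /* Callbacks defined elsewhere in app/test/test_ext_mempool.c. */
    static const struct rte_mempool_handler mempool_handler_custom = {
        .name = "custom_handler",
        .alloc = custom_mempool_alloc,
        .put = custom_mempool_put,
        .get = custom_mempool_get,
        .get_count = custom_mempool_get_count,
    };

    /* Registered at load time via a constructor, so the library never
     * has to reference any user code. */
    static void __attribute__((constructor))
    register_custom_handler(void)
    {
        rte_mempool_handler_register(&mempool_handler_custom);
    }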

>>>> --- /dev/null
>>>> +++ b/lib/librte_mempool/rte_mempool_internal.h
>>> Is it the proper name?
>>> We could imagine a mempool handler provided by a plugin, and
>>> in this case this code should go in rte_mempool.h.
>> I was trying to keep the public APIs in rte_mempool.h, and aal the
>> private stuff in rte_mempool_internal.h. Maybe a better name would be
>> rte_mempool_private.h?
> Are these functions internal? I mean, is it possible for an application
> or an external PMD (.so) to provide its own handler? I think it would
> be really interesting to have this capability.
>
> Then, I would prefer to have this either in rte_mempool.h or in
> rte_mempool_handler.h (that would be coherent with the .c)
>   

Now rte_mempool_handler.h


So to synopsise:

rte_mempool.[ch] - mempool create, populate, audit, dump, etc.
rte_mempool_handler.[ch] - handler registration, plus the functions that invoke the callbacks
rte_mempool_default.c - default internal handlers sp/sc, mp/mc, etc.

The custom handler has been moved out to app/test/test_ext_mempool.c.

Thanks,
Dave.



