[RFC] mempool: zero-copy cache put bulk

Honnappa Nagarahalli Honnappa.Nagarahalli at arm.com
Sun Nov 6 00:11:20 CET 2022


+ Akshitha, who is working on a similar patch

A few comments inline.

> -----Original Message-----
> From: Morten Brørup <mb at smartsharesystems.com>
> Sent: Saturday, November 5, 2022 8:40 AM
> To: dev at dpdk.org; olivier.matz at 6wind.com;
> andrew.rybchenko at oktetlabs.ru; Honnappa Nagarahalli
> <Honnappa.Nagarahalli at arm.com>
> Subject: [RFC] mempool: zero-copy cache put bulk
> 
> Zero-copy access to the mempool cache is beneficial for PMD performance,
> and must be provided by the mempool library to fix [Bug 1052] without a
> performance regression.
> 
> [Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052
> 
> 
> This RFC offers a conceptual zero-copy put function, where the application
> promises to store some objects, and in return gets an address where to store
> them.
> 
> I would like some early feedback.
> 
> Notes:
> * Allowing the 'cache' parameter to be NULL, and getting it from the
> mempool instead, was inspired by rte_mempool_cache_flush().
I am not sure why the 'cache' parameter is required for this API. This API should take the mempool as the parameter.

We have based our API on 'rte_mempool_do_generic_put' and removed the 'cache' parameter. On success, this new API returns a pointer to the memory where the objects are to be copied. On failure it returns NULL, and the caller has to call 'rte_mempool_ops_enqueue_bulk'. Alternatively, the new API could do this as well, so that the PMD does not need to do anything if it gets a NULL pointer.
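
To make this concrete, here is a rough sketch of the shape we have in mind (the function name and details are preliminary and may change in the RFC we send out):

static __rte_always_inline void **
rte_mempool_cache_zc_put_bulk(struct rte_mempool *mp, unsigned int n)
{
    struct rte_mempool_cache *cache;
    void **cache_objs;

    cache = rte_mempool_default_cache(mp, rte_lcore_id());

    /* No cache, or request too big for the cache: zero-copy not possible */
    if (cache == NULL || n > cache->flushthresh)
        return NULL; /* caller falls back to rte_mempool_ops_enqueue_bulk() */

    if (cache->len + n > cache->flushthresh) {
        /* Flush the cache to the backend to make room for n objects */
        rte_mempool_ops_enqueue_bulk(mp, &cache->objs[0], cache->len);
        cache->len = 0;
    }

    cache_objs = &cache->objs[cache->len];
    cache->len += n;

    return cache_objs; /* the caller copies n object pointers here */
}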

We should think about providing a similar API on the Rx side to keep it symmetric.
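
Purely as an illustration of what a symmetric Rx/get-side API could look like (hypothetical, not part of our patch):

static __rte_always_inline void **
rte_mempool_cache_zc_get_bulk(struct rte_mempool *mp, unsigned int n)
{
    struct rte_mempool_cache *cache;

    cache = rte_mempool_default_cache(mp, rte_lcore_id());

    /* No cache, or request bigger than the cache: zero-copy not possible */
    if (cache == NULL || n > cache->size)
        return NULL; /* caller falls back to the regular get path */

    if (cache->len < n) {
        /* Refill the cache from the backend first */
        if (rte_mempool_ops_dequeue_bulk(mp, &cache->objs[cache->len],
                cache->size - cache->len) < 0)
            return NULL;
        cache->len = cache->size;
    }

    cache->len -= n;

    return &cache->objs[cache->len]; /* the caller reads n object pointers here */
}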

> * Asserting that the 'mp' parameter is not NULL is not done by other
> functions, so I omitted it here too.
> 
> NB: Please ignore formatting. Also, this code has not even been compile
> tested.
We are a little bit ahead: we have tested the changes with the i40e PF PMD, written unit test cases, and are going through internal review. We will send out the RFC on Monday.

> 
> /**
>  * Promise to put objects in a mempool via zero-copy access to a user-owned
> mempool cache.
>  *
>  * @param cache
>  *   A pointer to the mempool cache. If NULL, the default cache of the
>  *   mempool is used.
>  * @param mp
>  *   A pointer to the mempool.
>  * @param n
>  *   The number of objects to be put in the mempool cache.
>  * @return
>  *   The pointer to where to put the objects in the mempool cache.
>  *   NULL on error, with rte_errno set appropriately.
>  */
> static __rte_always_inline void *
> rte_mempool_cache_put_bulk_promise(struct rte_mempool_cache *cache,
>         struct rte_mempool *mp,
>         unsigned int n)
> {
>     void **cache_objs;
> 
>     if (cache == NULL)
>         cache = rte_mempool_default_cache(mp, rte_lcore_id());
>     if (cache == NULL) {
>         rte_errno = EINVAL;
>         return NULL;
>     }
> 
>     rte_mempool_trace_cache_put_bulk_promise(cache, mp, n);
> 
>     /* The request itself is too big for the cache */
>     if (unlikely(n > cache->flushthresh)) {
>         rte_errno = EINVAL;
>         return NULL;
>     }
> 
>     /*
>      * The cache uses the following algorithm:
>      *   1. If the objects cannot be added to the cache without crossing
>      *      the flush threshold, flush the cache to the backend.
>      *   2. Add the objects to the cache.
>      */
> 
>     if (cache->len + n <= cache->flushthresh) {
>         cache_objs = &cache->objs[cache->len];
>         cache->len += n;
>     } else {
>         cache_objs = &cache->objs[0];
>         rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
>         cache->len = n;
>     }
> 
>     RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
>     RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> 
>     return cache_objs;
> }
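
To make the intended usage concrete, a PMD's Tx free path would do something like this with the function above ('free_mbufs' and 'nb_free' are hypothetical PMD variables):

    void **cache_objs;
    unsigned int i;

    cache_objs = rte_mempool_cache_put_bulk_promise(cache, mp, nb_free);
    if (likely(cache_objs != NULL)) {
        /* Zero-copy path: store the mbuf pointers directly in the cache */
        for (i = 0; i < nb_free; i++)
            cache_objs[i] = free_mbufs[i];
    } else {
        /* Fallback: ordinary bulk put, which copies via an object table */
        rte_mempool_put_bulk(mp, (void **)free_mbufs, nb_free);
    }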
> 
> 
> Med venlig hilsen / Kind regards,
> -Morten Brørup
> 


