[PATCH v5] mempool cache: add zero-copy get and put functions

Konstantin Ananyev konstantin.ananyev at huawei.com
Mon Jan 23 13:52:24 CET 2023



> > > > > @@ -1364,32 +1556,25 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
> > > > >   {
> > > > >   	void **cache_objs;
> > > > >
> > > > > -	/* No cache provided */
> > > > > -	if (unlikely(cache == NULL))
> > > > > -		goto driver_enqueue;
> > > > > +	/* No cache provided? */
> > > > > +	if (unlikely(cache == NULL)) {
> > > > > +		/* Increment stats now, adding in mempool always succeeds. */
> > > > > +		RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > > > > +		RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > > > >
> > > > > -	/* increment stat now, adding in mempool always success */
> > > > > -	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > > -	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > > > +		goto driver_enqueue;
> > > > > +	}
> > > > >
> > > > > -	/* The request itself is too big for the cache */
> > > > > -	if (unlikely(n > cache->flushthresh))
> > > > > -		goto driver_enqueue_stats_incremented;
> > > > > +	/* Prepare to add the objects to the cache. */
> > > > > +	cache_objs = __rte_mempool_cache_zc_put_bulk(cache, mp, n);
> > > > >
> > > > > -	/*
> > > > > -	 * The cache follows the following algorithm:
> > > > > -	 *   1. If the objects cannot be added to the cache without crossing
> > > > > -	 *      the flush threshold, flush the cache to the backend.
> > > > > -	 *   2. Add the objects to the cache.
> > > > > -	 */
> > > > > +	/* The request itself is too big for the cache? */
> > > > > +	if (unlikely(cache_objs == NULL)) {
> > > > > +		/* Increment stats now, adding in mempool always succeeds. */
> > > > > +		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > > +		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > >
> > > > Shouldn't it be RTE_MEMPOOL_STAT_ADD() here?
> > >
> > > I can see why you are wondering, but the answer is no. The statistics
> > > in the mempool cache are not related to the cache; they are related
> > > to the mempool, and are kept in the cache to provide faster per-lcore
> > > update access [1].
> > >
> > > [1]: https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/mempool/rte_mempool.h#L94
> >
> > But the condition above:
> > if (unlikely(cache_objs == NULL))
> > means that we can't put these objects into the cache and have to put
> > them straight into the pool (skipping the cache completely), right?
> 
> Correct.
> 
> > If so, then why to update cache stats instead of pool stats?
> 
> Because updating the stats in the cache structure is faster than updating the stats in the pool structure. Refer to the two macros:
> RTE_MEMPOOL_STAT_ADD() [2] is effectively five lines of code, but RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) [3] is a one-liner: ((cache)->stats.name += (n)).
> 
> [2]: https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/mempool/rte_mempool.h#L325
> [3]: https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/mempool/rte_mempool.h#L348
> 
> And to reiterate that this is the correct behavior here, I will rephrase my previous response: the stats kept in the cache are part of the
> pool stats; they are not stats for the cache itself.

Ah ok, that's the same as the current behavior.
It still looks a bit strange to me that we increment the cache (not pool) stats here.
But that's another story, so no extra comments from me for that case.
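
For readers who want to compare the cost of the two macros: when stats are enabled at build time they expand to roughly the following (a paraphrase of the v22.11 source linked above, not a verbatim copy, so guard names and details may differ slightly):

/* Pool-level stats: the per-lcore slot in mp->stats[] must be looked up first. */
#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {                     \
		unsigned int __lcore_id = rte_lcore_id();          \
		if (likely(__lcore_id < RTE_MAX_LCORE))            \
			(mp)->stats[__lcore_id].name += (n);       \
	} while (0)

/* Cache-level stats: the cache is already per-lcore, so it is a plain add. */
#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) ((cache)->stats.name += (n))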

> 
> > > > >
> > > > > -	if (cache->len + n <= cache->flushthresh) {
> > > > > -		cache_objs = &cache->objs[cache->len];
> > > > > -		cache->len += n;
> > > > > -	} else {
> > > > > -		cache_objs = &cache->objs[0];
> > > > > -		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
> > > > > -		cache->len = n;
> > > > > +		goto driver_enqueue;
> > > > >   	}
> > > > >
> > > > >   	/* Add the objects to the cache. */
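
To put the hunk above in context, here is a minimal sketch of how a caller could use the public zero-copy put function added by this series. It assumes rte_mempool_cache_zc_put_bulk(cache, mp, n) behaves like the internal helper in the hunk, i.e. returns a pointer to n free slots in the cache or NULL when the request cannot be served from the cache; names follow this v5 and may differ from the final API:

#include <rte_mempool.h>

/* Sketch: return a burst of objects to the mempool through the zero-copy
 * put path, falling back to the regular bulk put when the cache cannot
 * take the whole burst. */
static void
put_objs_zc(struct rte_mempool *mp, struct rte_mempool_cache *cache,
		void * const *objs, unsigned int n)
{
	void **slots = rte_mempool_cache_zc_put_bulk(cache, mp, n);
	unsigned int i;

	if (slots != NULL) {
		/* Write the object pointers directly into the cache. */
		for (i = 0; i < n; i++)
			slots[i] = objs[i];
	} else {
		/* Request too big for the cache; the regular put goes
		 * straight to the backend driver for large requests. */
		rte_mempool_put_bulk(mp, objs, n);
	}
}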


