[dpdk-dev] Question about cache_size in rte_mempool_create

roy roy.shterman at gmail.com
Fri Nov 24 10:39:54 CET 2017


Thanks for your answer, but I still can't understand the dimensioning of
the ring and how it is affected by the cache size.

On 24/11/17 11:30, Bruce Richardson wrote:
> On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
>> Hi,
>>
>> In the documentation it says that:
>>
>>   * @param cache_size
>>   *   If cache_size is non-zero, the rte_mempool library will try to
>>   *   limit the accesses to the common lockless pool, by maintaining a
>>   *   per-lcore object cache. This argument must be lower or equal to
>>   *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. *It is advised to
>>   *   choose cache_size to have "n modulo cache_size == 0": if this is
>>   *   not the case, some elements will always stay in the pool and will
>>   *   never be used.* The access to the per-lcore table is of course
>>   *   faster than the multi-producer/consumer pool. The cache can be
>>   *   disabled if the cache_size argument is set to 0; it can be useful to
>>   *   avoid losing objects in cache.
>>
>> I wonder if someone can please explain the highlighted sentence, i.e. how
>> the cache size affects the objects inside the ring.
> It has no effect on the objects themselves. Having a cache is
> strongly recommended for performance reasons. Accessing a shared ring
> for a mempool is very slow compared to pulling packets from a per-core
> cache. To test this you can run testpmd using different --mbcache
> parameters.
Still, I don't understand the sentence quoted above:

*It is advised to choose cache_size to have "n modulo cache_size == 0":
if this is not the case, some elements will always stay in the pool and
will never be used.*
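
For example (hypothetical numbers and names, just to make the question
concrete; "example_pool", create_example_pool() and the sizes below are
mine, not from the docs), suppose the pool is created roughly like this,
with n a multiple of cache_size:

#include <rte_memory.h>
#include <rte_mempool.h>

#define POOL_SIZE   8192   /* n: total number of objects in the pool      */
#define ELT_SIZE    2048   /* size of each object, in bytes               */
#define CACHE_SIZE   256   /* per-lcore cache; 8192 % 256 == 0 as advised */

static struct rte_mempool *
create_example_pool(void)
{
    /* No object/pool constructors here, only the sizing parameters the
     * documentation above is talking about. */
    return rte_mempool_create("example_pool",
            POOL_SIZE, ELT_SIZE, CACHE_SIZE,
            0,             /* private_data_size */
            NULL, NULL,    /* mp_init, mp_init_arg */
            NULL, NULL,    /* obj_init, obj_init_arg */
            SOCKET_ID_ANY, /* socket_id */
            0);            /* flags */
}

Is the advice saying that if I had instead picked, say, a cache_size of
300 (8192 % 300 != 0), some of the 8192 objects could end up never being
handed out?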

>
>> And also, does it mean that if I'm sharing a pool between different
>> cores, a core can see the pool as empty even though it still has
>> objects in it?
>>
> Yes, that can occur. You need to dimension the pool to take account of
> your cache usage.
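
If I understand you correctly, do you mean sizing along these lines? This
is only my own interpretation, not something I found in the docs, and
required_pool_size()/objs_in_flight are names I made up:

#include <rte_lcore.h>

/*
 * My reading of "dimension the pool to take account of your cache usage":
 * in the worst case every lcore's cache is holding objects that the shared
 * ring can no longer hand out, so the pool has to be created large enough
 * to cover the objects actually in flight plus whatever may be parked in
 * the per-lcore caches.
 */
static unsigned int
required_pool_size(unsigned int objs_in_flight, unsigned int cache_size)
{
    return objs_in_flight + cache_size * rte_lcore_count();
}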

Can you please elaborate more on this issue? I'm working with
multi-consumer multi-producer pools, and in my understanding an object
is either in some lcore's cache or in the ring.
When a core looks for objects in the pool (ring) it checks the prod/cons
head/tail indexes, so how can the caches of other cores affect what it
sees?
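
In other words, I picture something like the following happening (just a
sketch of my understanding, assuming the pool was created with a non-zero
cache_size; park_objects_in_cache() and try_to_get() are made-up names):

#include <stdio.h>
#include <rte_mempool.h>

/* Running on lcore 1: get a burst and put it straight back.  With a
 * per-lcore cache enabled, the put should land in lcore 1's cache, so
 * those objects are no longer sitting in the shared ring. */
static void
park_objects_in_cache(struct rte_mempool *mp)
{
    void *objs[32];

    if (rte_mempool_get_bulk(mp, objs, 32) == 0)
        rte_mempool_put_bulk(mp, objs, 32);
}

/* Running later on lcore 2: a bulk get can fail if the ring alone cannot
 * satisfy it, even though rte_mempool_avail_count() (which also walks the
 * per-lcore caches) reports that the objects are still "there". */
static void
try_to_get(struct rte_mempool *mp)
{
    void *objs[32];

    printf("available (ring + caches): %u\n", rte_mempool_avail_count(mp));
    if (rte_mempool_get_bulk(mp, objs, 32) != 0)
        printf("bulk get failed on this lcore\n");
    else
        rte_mempool_put_bulk(mp, objs, 32);
}

Is that what happens here, i.e. the ring's prod/cons head/tail simply no
longer covers the objects that moved into another lcore's cache?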

>
> /Bruce


