[dpdk-dev] memory allocation requirements

Sergio Gonzalez Monroy sergio.gonzalez.monroy at intel.com
Fri Apr 15 10:47:33 CEST 2016


On 15/04/2016 08:12, Olivier Matz wrote:
> Hi,
>
> On 04/14/2016 05:39 PM, Sergio Gonzalez Monroy wrote:
>>> Just to mention that some evolutions [1] are planned in mempool in
>>> 16.07, allowing to populate a mempool with several chunks of memory,
>>> and still ensuring that the objects are physically contiguous. It
>>> completely removes the need to allocate a big virtually contiguous
>>> memory zone (and also physically contiguous if not using
>>> rte_mempool_create_xmem(), which is probably the case in most of
>>> the applications).
>>>
>>> Knowing this, the code that remaps the hugepages to get the largest
>>> possible physically contiguous zone probably becomes useless after
>>> the mempool series. Changing it to only one mmap(file) in hugetlbfs
>>> per NUMA socket would clearly simplify this part of EAL.
>>>
>> Are you suggesting to make those changes after the mempool series
>> has been applied but keeping the current memzone/malloc behavior?
> I wonder if the default property of memzone/malloc which is to
> allocate physically contiguous memory shouldn't be dropped. It could
> remain optional, knowing that allocating a physically contiguous zone
> larger than a page cannot be guaranteed.
>
> But yes, I'm in favor of doing these changes in eal_memory.c; it would
> drop a lot of complex code (all the rtemap* stuff), and today I'm not
> seeing any big issue with doing it... maybe we'll find one during the
> discussion :)

I'm in favor of doing those changes, but then I think we need to support
allocating non-contiguous memory through memzone/malloc, or other
libraries such as librte_hash may not be able to get the memory they
need, right?
Otherwise every library would need a rework like the mempool series to
deal with non-contiguous memory.
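
For reference, the flow that series enables would look roughly like
this. A minimal sketch, assuming the create_empty/populate API ends up
more or less as posted; the helper name pktmbuf_pool_from_chunks() is
just for illustration:

#include <rte_mempool.h>
#include <rte_mbuf.h>

/* Sketch only: create an empty pool, then let the populate step add as
 * many memory chunks as it needs.  Objects stay physically contiguous
 * within each chunk, but no single big contiguous zone is required. */
static struct rte_mempool *
pktmbuf_pool_from_chunks(const char *name, unsigned n, int socket_id)
{
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty(name, n,
                sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
                0 /* no per-lcore cache, to keep the sketch simple */,
                sizeof(struct rte_pktmbuf_pool_private),
                socket_id, 0);
        if (mp == NULL)
                return NULL;

        /* NULL opaque: default data room derived from elt_size. */
        rte_pktmbuf_pool_init(mp, NULL);

        /* May map several chunks instead of one big reserved zone. */
        if (rte_mempool_populate_default(mp) < 0) {
                rte_mempool_free(mp);
                return NULL;
        }

        /* Initialize each object as an mbuf. */
        rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
        return mp;
}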

For contiguous memory, I would prefer a new API for DMA areas
(something similar to rte_eth_dma_zone_reserve() in ethdev) that would
transparently deal with the case where we have multiple huge page
sizes.
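
Something along these lines, just to make the idea concrete. The
function name rte_dma_zone_reserve() below is hypothetical, and the
page-size fallback is simplified; a real version would iterate over
whatever hugepage sizes the system actually provides:

#include <rte_memzone.h>

/* Hypothetical API: reserve a physically contiguous DMA area and deal
 * with the available hugepage sizes transparently, instead of every
 * caller juggling the RTE_MEMZONE_2MB/RTE_MEMZONE_1GB flags itself. */
static const struct rte_memzone *
rte_dma_zone_reserve(const char *name, size_t len, int socket_id,
                     unsigned align)
{
        const struct rte_memzone *mz;

        /* Try 1G pages first (fails if none are mounted)... */
        mz = rte_memzone_reserve_aligned(name, len, socket_id,
                                         RTE_MEMZONE_1GB, align);
        if (mz != NULL)
                return mz;

        /* ...then fall back to 2M pages. */
        return rte_memzone_reserve_aligned(name, len, socket_id,
                                           RTE_MEMZONE_2MB, align);
}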

Sergio


> Regards,
> Olivier


