[dpdk-dev] mmap fails with more than 40000 hugepages

Damjan Marion (damarion) damarion at cisco.com
Thu Feb 5 14:20:01 CET 2015


> On 05 Feb 2015, at 13:59, Neil Horman <nhorman at tuxdriver.com> wrote:
> 
> On Thu, Feb 05, 2015 at 12:00:48PM +0000, Damjan Marion (damarion) wrote:
>> Hi,
>> 
>> I have a system with 2 NUMA nodes and 256G of RAM in total. I noticed that DPDK crashes in rte_eal_init()
>> when the number of available hugepages is around 40000 or above.
>> Everything works fine with lower values (e.g. 30000).
>> 
>> I also tried allocating all 40000 on node0 and 0 on node1; the same crash happens.
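>> 
>> (The per-node split was done by writing the counts straight to sysfs, roughly:
>> 
>>   echo 40000 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>>   echo 0     > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
>> 
>> as root, before starting the application.)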
>> 
>> 
>> Any idea what might be causing this?
>> 
>> Thanks,
>> 
>> Damjan
>> 
>> 
>> $ cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages
>> 20000
>> 20000
>> 
>> $ grep -i huge /proc/meminfo
>> AnonHugePages:    706560 kB
>> HugePages_Total:   40000
>> HugePages_Free:    40000
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:       2048 kB
>> 
> What's your shmmax value set to? 40000 2MB hugepages is way above the default
> setting for how much shared RAM a system will allow.  I've not done the math on
> your logs below, but judging by the size of some of the mapped segments, I'm
> betting you're hitting the default limit of 4GB.
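
Doing the math: 40000 pages of 2048 kB each is roughly 78 GiB of hugepage memory in total:

$ echo $(( 40000 * 2048 * 1024 ))   # total hugepage bytes
83886080000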

$ cat /proc/sys/kernel/shmmax
33554432

$ sysctl -w kernel.shmmax=8589934592
kernel.shmmax = 8589934592

same crash :(
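
That 8 GiB is still well below the ~78 GiB of hugepage memory, so if shmmax really were the limit I'd expect to need a value above the total, e.g. (not tried yet):

$ sysctl -w kernel.shmmax=85899345920   # 80 GiB, just above the total hugepage size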

Thanks,

Damjan

