[dpdk-users] Mempool allocation fails on first boot, but succeeds after system reboot

Sarthak Ray sarthak_ray at outlook.com
Wed Aug 17 15:10:35 CEST 2016


Hi,

I am using dpdk-2.1.0 on a platform appliance, where I am facing an issue with mempool allocation.

On the first boot of a newly installed appliance, my DPDK application fails to start, reporting an mbuf pool allocation failure on socket 0. Once I reboot the system, it comes up without any issues.

I used the rte_malloc_dump_stats() API to check the heap statistics right before allocating the mbuf pools.

Heap Statistics on first boot (with --socket-mem=128,128)
Socket:0
    Heap_size:134215808,
    Free_size:127706432,
    Alloc_size:6509376,
    Greatest_free_size:8388544, // This value is much smaller than the contiguous memory block my app is trying to allocate
    Alloc_count:29,
    Free_count:31,

Please note: increasing the --socket-mem value from 128 to 192 has no impact on the Greatest_free_size value, and I don't see this fragmentation on socket 1.

Heap Statistics after reboot (with --socket-mem=128,128)
Socket:0
    Heap_size:134217600,
    Free_size:127708224,
    Alloc_size:6509376,
    Greatest_free_size:125982080,
    Alloc_count:29,
    Free_count:3,

After the reboot, the largest free block size increases drastically and the mbuf pool allocation succeeds. So this looks like a heap fragmentation issue on socket 0.
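For reference, below is a minimal sketch of how such a check can sit right before the pool creation; the pool name, NB_MBUF and CACHE_SIZE here are placeholder values for illustration, not the exact ones from my application.

#include <stdio.h>
#include <rte_errno.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Placeholder sizing values, not the real ones from my application. */
#define NB_MBUF    65536
#define CACHE_SIZE 256

static struct rte_mempool *
create_pool_with_heap_check(int socket_id)
{
    struct rte_malloc_socket_stats stats;

    /* Dump the per-socket heap statistics shown above. */
    rte_malloc_dump_stats(stdout, NULL);

    /* The same numbers are also available programmatically. */
    if (rte_malloc_get_socket_stats(socket_id, &stats) == 0)
        printf("socket %d: greatest_free_size=%zu bytes\n",
               socket_id, stats.greatest_free_size);

    /* On first boot this call fails for socket 0 in my case. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool_0",
            NB_MBUF, CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
    if (mp == NULL)
        printf("mbuf pool allocation failed: %s\n",
               rte_strerror(rte_errno));
    return mp;
}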

Output of "numactl -H" on my system:
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
node 0 size: 65170 MB
node 0 free: 49476 MB
node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 1 size: 65536 MB
node 1 free: 50759 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

Kernel boot arguments for the hugepage setup:
hugepagesz=1g hugepages=24
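
To rule out an uneven split of the reserved 1G pages between the two nodes on first boot, the per-node counters under sysfs can also be checked. A minimal sketch, assuming the standard Linux sysfs layout for 1G hugepages:

#include <stdio.h>

/* Print the reserved and free 1G hugepages for each NUMA node. */
static void
dump_node_hugepages(void)
{
    const char *fmt =
        "/sys/devices/system/node/node%d/hugepages/hugepages-1048576kB/%s";
    const char *files[] = { "nr_hugepages", "free_hugepages" };

    for (int node = 0; node < 2; node++) {
        for (unsigned i = 0; i < 2; i++) {
            char path[256];
            unsigned long val = 0;

            snprintf(path, sizeof(path), fmt, node, files[i]);
            FILE *f = fopen(path, "r");
            if (f == NULL)
                continue;
            if (fscanf(f, "%lu", &val) == 1)
                printf("node%d %s = %lu\n", node, files[i], val);
            fclose(f);
        }
    }
}

This at least shows whether both nodes received their share of the 24 reserved pages at boot.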

Can anyone please comment on how to address this issue? Is there any way to reserve hugepage memory so that the heap does not get fragmented like this?

Thanks in advance for any valuable suggestions.

Regards,
Sarthak

