[dpdk-dev] [PATCH v2] mem: balanced allocation of hugepages
Thomas Monjalon
thomas.monjalon at 6wind.com
Mon Apr 10 12:03:50 CEST 2017
2017-04-10 11:04, Ilya Maximets:
> Currently, EAL allocates hugepages one by one without paying
> attention to which NUMA node the allocation came from.
>
> Such behaviour leads to allocation failure if the number of
> hugepages available to the application is limited by cgroups
> or hugetlbfs and memory is requested from more than just the
> first socket.
>
> Example:
> # 90 x 1GB hugepages available in the system
>
> cgcreate -g hugetlb:/test
> # Limit to 32GB of hugepages
> cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
> # Request 4GB from each of 2 sockets
> cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...
>
> EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
> EAL: 32 not 90 hugepages of size 1024 MB allocated
> EAL: Not enough memory available on socket 1!
> Requested: 4096MB, available: 0MB
> PANIC in rte_eal_init():
> Cannot init memory
>
> This happens because all allocated pages are
> on socket 0.
>
> Fix this issue by setting the mempolicy MPOL_PREFERRED for each
> hugepage to one of the requested nodes in a round-robin fashion.
> In this case all allocated pages will be fairly distributed
> among all requested nodes.
>
> A new config option, RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES, is
> introduced. It is disabled by default because of the external
> dependency on libnuma.
>
> Fixes: 77988fc08dc5 ("mem: fix allocating all free hugepages")
>
> Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
Status: Changes Requested
per Sergio's advice: "I would be inclined towards v3 targeting v17.08."