[dpdk-dev] long initialization of rte_eal_hugepage_init

王志克 wangzhike at jd.com
Wed Sep 6 08:02:47 CEST 2017


Do you mean "pagesize" when you say the "size" option? I have specified the pagesize as 1G.
Also, I already use "--socket-mem" to specify that the application only needs 1G per NUMA node.

The problem is that map_all_hugepages() maps all free huge pages first, and only then selects the proper ones. If I have 500 free huge pages (1G each) and the application only needs 1G per NUMA socket, mapping all of them is unreasonable.
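To illustrate, here is a hypothetical, much-simplified sketch of that pattern (the real map_all_hugepages() in eal_memory.c is more involved, with two mapping passes, physical-address sorting, etc.):

    /* Hypothetical, simplified sketch of the "map all free pages
     * first" pattern; not the actual DPDK source. */
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define HUGEPAGE_SZ (1ULL << 30) /* 1G pages */

    static void map_all_free_hugepages(int num_free_pages, const char *hugedir)
    {
        char path[PATH_MAX];
        int i;

        /* Touches every free huge page on the host, even when
         * --socket-mem asked for only a fraction of them; faulting
         * in 500 x 1G pages is where the ~2 minutes go. */
        for (i = 0; i < num_free_pages; i++) {
            snprintf(path, sizeof(path), "%s/rtemap_%d", hugedir, i);
            int fd = open(path, O_CREAT | O_RDWR, 0600);
            if (fd < 0)
                continue;
            void *va = mmap(NULL, HUGEPAGE_SZ,
                            PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_POPULATE, fd, 0);
            /* The real code records va here, later keeps only the
             * pages it actually needs and munmap()s the rest. */
            if (va != MAP_FAILED)
                munmap(va, HUGEPAGE_SZ);
            close(fd);
        }
    }

So the startup cost grows with the number of free huge pages on the host, not with the amount of memory the application asked for.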

My use case is OVS+DPDK. OVS+DPDK itself only needs 2G, and other applications (QEMU/VMs) use the remaining huge pages.
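For reference, I limit what OVS itself requests along these lines (assuming OVS 2.7 or later, where the dpdk-socket-mem option exists):

    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"

But even so, EAL still maps all 500 free pages at startup before trimming down to those 2G.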

Br,
Wang Zhike


-----Original Message-----
From: Tan, Jianfeng [mailto:jianfeng.tan at intel.com] 
Sent: Wednesday, September 06, 2017 12:36 PM
To: 王志克; users at dpdk.org; dev at dpdk.org
Subject: RE: long initialization of rte_eal_hugepage_init



> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of 王志克
> Sent: Wednesday, September 6, 2017 11:25 AM
> To: users at dpdk.org; dev at dpdk.org
> Subject: [dpdk-users] long initialization of rte_eal_hugepage_init
> 
> Hi All,
> 
> I observed that rte_eal_hugepage_init() takes quite a long time when there are
> lots of huge pages. For example, with 500 1G huge pages it takes about 2
> minutes. That is too long, especially for the application restart case.
> 
> If the application only needs a limited number of huge pages while the host
> has lots of them, the algorithm is not very efficient. For example, we only
> need 1G of memory from each socket.
> 
> What is the proposal from the DPDK community? Any solution?

You can mount hugetlbfs with the "size" option and use the "--socket-mem" option in DPDK to restrict the memory to be used.
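For example, something along these lines (the mount point and application name are placeholders; sizes assume 1G per socket on a two-socket machine):

    mount -t hugetlbfs -o pagesize=1G,size=2G nodev /mnt/huge_1g
    ./your_dpdk_app --huge-dir /mnt/huge_1g --socket-mem 1024,1024 ...

The "size=2G" mount option caps how much memory can be allocated from that mount, so EAL cannot map more than 2G from it, regardless of how many free huge pages the host has.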

Thanks,
Jianfeng

> 
> Note: I tried DPDK version 16.11.
> 
> Br,
> Wang Zhike

