Bug 608 - DPDK cannot allocate 32GB of memory
Summary: DPDK cannot allocate 32GB of memory
Status: RESOLVED FIXED
Alias: None
Product: DPDK
Classification: Unclassified
Component: core
Version: 20.11
Hardware: x86 Linux
Importance: Normal major
Target Milestone: ---
Assignee: dev
URL:
Depends on:
Blocks:
 
Reported: 2021-01-07 08:54 CET by mengxiang0811
Modified: 2021-01-14 05:51 CET
CC: 3 users
Description mengxiang0811 2021-01-07 08:54:47 CET
Currently, applications based on the latest DPDK cannot allocate 32GB of memory. Even the minimum DPDK program that simply tries to allocate 32GB fails with the error message: "EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list". A similar issue was reported in SPDK [1].


[1] https://github.com/spdk/spdk/issues/922.
Comment 1 David Marchand 2021-01-07 09:09:26 CET
That is probably due to the max number of memory segments.

You could play with the internals of the DPDK memory allocator (values in config/rte_config.h) but on the other hand, since you need so much memory, can you use 1GB hugepages?
Comment 2 Anatoly Burakov 2021-01-07 11:45:38 CET
It's actually slightly more complex than that (with 1G pages you can allocate 32GB per socket, plus another 16GB per socket with 2M pages, and the total cannot be higher than 512GB, I think), but this is intended behavior.

Since 18.05, DPDK uses a different memory allocation scheme, where we can grow and shrink our hugepage usage at will. We also still need to support secondary processes, which work by duplicating the address space of the primary process: when the primary process allocates a new page, the secondary process must allocate the same page at the same address. Because we are not in control of our address space (the kernel is), we reserve a certain amount of address space at startup and map pages into that space at runtime.

So, this is intended behavior.
Comment 3 Michel Machado 2021-01-07 15:12:59 CET
Hi there,

I'm working with Qiaobin, who opened this issue.

Replying to David, we are already using 1GB hugepages.

Anatoly, is there a plan or intention to improve the allocator to support more memory? In the meantime, do you have any suggestions on how to work around this issue?

This problem was triggered while allocating an array with 2^29 entries for a hash table. Each entry is 128 bytes; that's where the 2^29 * 128 = 32GB figure comes from. This hash table is a flow table.
Comment 4 Anatoly Burakov 2021-01-07 16:52:46 CET
It's not a question of "improving" the allocator; the allocator already supports more memory. It's just that previously, no one was hitting this limitation in practice. You can change the default settings to allow more memory.

If you're using meson, you have to edit the `config/rte_config.h` file and change the following values:

#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
#define RTE_MAX_MEMSEG_PER_TYPE 32768
#define RTE_MAX_MEM_MB_PER_TYPE 65536

Just doubling these values should be foolproof enough not to break anything else. If you're using Make, edit the relevant config file in a similar way.
Comment 5 mengxiang0811 2021-01-08 07:49:15 CET
Thanks so much for the suggestions! After changing the above values, I was able to allocate 32GB of memory using the latest DPDK.
Comment 6 mengxiang0811 2021-01-14 05:51:22 CET
Closing this ticket.