Bug 893
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | no_hugetlbfs should not mean legacy_mem on iommu/vfio platforms | | |
| Product | DPDK | Reporter | Niket Kandya (niketkandya) |
| Component | core | Assignee | dev |
| Status | UNCONFIRMED | | |
| Severity | normal | CC | dmitry.kozliuk |
| Priority | Normal | | |
| Version | unspecified | | |
| Target Milestone | --- | | |
| Hardware | All | | |
| OS | All | | |
Description
Niket Kandya
2021-12-03 03:17:40 CET
Let's first clarify why huge pages are used. Memory pinning is possible with regular pages (subject to system limits). Huge pages are needed:

1. to minimize TLB cache misses;
2. to increase the chances of allocating large physically contiguous blocks.

While 2) is irrelevant with an IOMMU, 1) still affects performance. Using small pages would also consume more IOMMU entries, depleting them and degrading performance even further (the IOMMU still has to translate VA to PA even though the buffer is VA-contiguous).

The --no-huge mode is intended for running (unit) tests with minimal requirements from the system, without even privileged access, including IOMMU access. That said, enabling dynamic memory mode with --no-huge should be possible and even desirable as part of gradually moving to dynamic mode entirely. What is your use case; which restriction do you want to alleviate?

I agree that huge pages offer performance benefits. The reason I filed this bug is that we are trying to figure out "guaranteed" huge page support in our containers to the extent we need it. That is, the ability to reserve many huge pages in our container is best-effort. Therefore I have been looking into falling back to --no-huge mode (accepting the degraded performance) in a production scenario.
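To put a rough number on the IOMMU-entry argument above, here is a small back-of-the-envelope sketch (illustration only, not DPDK code): mapping the same DMA region with 4 KiB pages versus 2 MiB huge pages changes the number of translation entries by a factor of 512.

```python
# Back-of-the-envelope: translation entries needed to map a DMA region.
# Hypothetical helper for illustration; page sizes are the common x86
# values (4 KiB base pages, 2 MiB huge pages).

GIB = 1 << 30  # 1 GiB region

def entries_needed(region_bytes: int, page_bytes: int) -> int:
    """Number of page-table / IOMMU entries to map a region, rounding up."""
    return (region_bytes + page_bytes - 1) // page_bytes

small = entries_needed(GIB, 4 << 10)   # 4 KiB pages
huge  = entries_needed(GIB, 2 << 20)   # 2 MiB huge pages

print(small, huge, small // huge)      # 262144 512 512
```

With small pages the same 1 GiB of buffers consumes 262,144 entries instead of 512, which is why --no-huge costs performance even when the IOMMU makes physical contiguity irrelevant.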