[dpdk-dev,v3,1/2] memalloc: do not leave unmapped holes in EAL virtual memory area

Message ID 1527857960-109306-1-git-send-email-dariuszx.stojaczyk@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Stojaczyk, DariuszX June 1, 2018, 12:59 p.m. UTC
  EAL reserves a huge area in virtual address space
to provide virtual address contiguity for e.g.
future memory extensions (memory hotplug). During
memory hotplug, if the hugepage mmap succeeds but
doesn't satisfy EAL's requirements, EAL unmaps
this mapping straight away, leaving a hole in
its virtual memory area and making it available
to everyone. As EAL still thinks it owns the
entire region, it may try to mmap it later with
MAP_FIXED, possibly overwriting a user's mapping
that was made in the meantime.

This patch ensures each hole is mapped back by EAL,
so that it won't be available to anyone else.

Changes from v2:
  * replaced rte_panic() with a CRIT log.
  * added "git fixline" tags

Changes from v1:
 * checkpatch fixes

Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
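
For illustration, here is a minimal standalone sketch (not the actual EAL
code) of the idea described in the commit message: once the unusable
hugepage mapping has been unmapped, the same address range is immediately
re-reserved with an anonymous PROT_NONE mapping so no other mmap() can
land in the hole. The reserve_hole() helper below is hypothetical and uses
plain mmap(); the patch itself does the equivalent through EAL's internal
eal_get_virtual_area() helper, adding MAP_HUGETLB on ppc64.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* Hypothetical helper, for illustration only: re-reserve a range that
 * was just munmap()'d, so the hole in our virtual address space cannot
 * be taken over by another mapping in the meantime.
 */
static int
reserve_hole(void *addr, size_t len)
{
	void *ret;

	/* MAP_FIXED claims exactly this range; PROT_NONE keeps it unusable
	 * until we deliberately map something real on top of it later.
	 */
	ret = mmap(addr, len, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (ret == MAP_FAILED || ret != addr) {
		if (ret != MAP_FAILED)
			munmap(ret, len);
		fprintf(stderr, "can't re-reserve hole at %p\n", addr);
		return -1;
	}
	return 0;
}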
  

Comments

Burakov, Anatoly June 1, 2018, 3:06 p.m. UTC | #1
On 01-Jun-18 1:59 PM, Dariusz Stojaczyk wrote:
> EAL reserves a huge area in virtual address space
> to provide virtual address contiguity for e.g.
> future memory extensions (memory hotplug). During
> memory hotplug, if the hugepage mmap succeeds but
> doesn't satisfy EAL's requirements, EAL unmaps
> this mapping straight away, leaving a hole in
> its virtual memory area and making it available
> to everyone. As EAL still thinks it owns the
> entire region, it may try to mmap it later with
> MAP_FIXED, possibly overwriting a user's mapping
> that was made in the meantime.
> 
> This patch ensures each hole is mapped back by EAL,
> so that it won't be available to anyone else.
> 
> Changes from v2:
>    * replaced rte_panic() with a CRIT log.
>    * added "git fixline" tags
> 
> Changes from v1:
>   * checkpatch fixes
> 
> Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
> Cc: anatoly.burakov@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
> ---

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

Thomas, could you please fix the commit message on apply?
  
Thomas Monjalon July 12, 2018, 9:42 p.m. UTC | #2
01/06/2018 17:06, Burakov, Anatoly:
> On 01-Jun-18 1:59 PM, Dariusz Stojaczyk wrote:
> > EAL reserves a huge area in virtual address space
> > to provide virtual address contiguity for e.g.
> > future memory extensions (memory hotplug). During
> > memory hotplug, if the hugepage mmap succeeds but
> > doesn't satisfy EAL's requirements, EAL unmaps
> > this mapping straight away, leaving a hole in
> > its virtual memory area and making it available
> > to everyone. As EAL still thinks it owns the
> > entire region, it may try to mmap it later with
> > MAP_FIXED, possibly overwriting a user's mapping
> > that was made in the meantime.
> > 
> > This patch ensures each hole is mapped back by EAL,
> > so that it won't be available to anyone else.
> > 
> > Changes from v2:
> >    * replaced rte_panic() with a CRIT log.
> >    * added "git fixline" tags
> > 
> > Changes from v1:
> >   * checkpatch fixes
> > 
> > Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
> > Cc: anatoly.burakov@intel.com
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
> > ---
> 
> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> Thomas, could you please fix the commit message on apply?

Yes

Applied, thanks
  

Patch

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 8c11f98..6be6680 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -39,6 +39,7 @@ 
 #include "eal_filesystem.h"
 #include "eal_internal_cfg.h"
 #include "eal_memalloc.h"
+#include "eal_private.h"
 
 /*
  * not all kernel version support fallocate on hugetlbfs, so fall back to
@@ -490,6 +491,8 @@  alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	int ret = 0;
 	int fd;
 	size_t alloc_sz;
+	int flags;
+	void *new_addr;
 
 	/* takes out a read lock on segment or segment list */
 	fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx);
@@ -585,6 +588,20 @@  alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 
 mapped:
 	munmap(addr, alloc_sz);
+	flags = MAP_FIXED;
+#ifdef RTE_ARCH_PPC_64
+	flags |= MAP_HUGETLB;
+#endif
+	new_addr = eal_get_virtual_area(addr, &alloc_sz, alloc_sz, 0, flags);
+	if (new_addr != addr) {
+		if (new_addr != NULL)
+			munmap(new_addr, alloc_sz);
+		/* we're leaving a hole in our virtual address space. if
+		 * somebody else maps this hole now, we could accidentally
+		 * override it in the future.
+		 */
+		RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n");
+	}
 resized:
 	if (internal_config.single_file_segments) {
 		resize_hugefile(fd, path, list_idx, seg_idx, map_offset,