[dpdk-stable] patch 'mem: do not use physical addresses in IOVA as VA mode' has been queued to LTS release 17.11.2

Yuanhan Liu yliu at fridaylinux.org
Sun Apr 22 17:09:31 CEST 2018


Hi,

FYI, your patch has been queued to LTS release 17.11.2

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/29/18. So please
shout if you have any objections.

Thanks.

	--yliu

---
From 8888d6f0905cac90bd2335d799778a6d12472055 Mon Sep 17 00:00:00 2001
From: Anatoly Burakov <anatoly.burakov at intel.com>
Date: Wed, 4 Apr 2018 15:40:45 +0100
Subject: [PATCH] mem: do not use physical addresses in IOVA as VA mode

[ upstream commit 048303b6f3fdfb5ce863e293f6b3dfabc0a0a2db ]

We already use VA addresses for IOVA purposes everywhere if we're in
RTE_IOVA_VA mode:
 1) rte_malloc_virt2phy()/rte_malloc_virt2iova() always return VA addresses
 2) Because of 1), a memzone's IOVA is set to its VA address on reserve
 3) Because of 2), a mempool's IOVA addresses are set to VA addresses

The only place where actual physical addresses are stored is in memsegs at
init time, but we're not using them anywhere, and there is no external API
to get those addresses (aside from manually iterating through memsegs), nor
should anyone care about them in RTE_IOVA_VA mode.
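
For reference, such a manual walk over the memseg array on 17.11 would
look roughly like this (a minimal sketch, not part of the patch; it
assumes rte_eal_init() has already run and uses the 17.11
rte_eal_get_physmem_layout() API and memseg field names):

#include <stdio.h>
#include <inttypes.h>

#include <rte_config.h>
#include <rte_memory.h>

/* Sketch: walk the memseg array and print each segment's VA and the
 * address stored in its iova field. Unused entries have addr == NULL.
 */
static void
dump_memsegs(void)
{
	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
	unsigned int i;

	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
		if (ms[i].addr == NULL)
			break;
		printf("memseg %u: va=%p iova=0x%" PRIx64 " len=%zu\n",
			i, ms[i].addr, (uint64_t)ms[i].iova, ms[i].len);
	}
}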

So, fix EAL initialization to allocate VA-contiguous segments at the start
without regard for physical addresses (as if they weren't available), and
use VA to set final IOVA addresses for all pages.
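
As a quick way to check the end result, one can compare a malloc'd
buffer's VA against the IOVA the EAL reports for it; in RTE_IOVA_VA
mode the two should be identical. A minimal sketch (not part of the
patch; the "iova_check" allocation name is arbitrary and error
handling is trimmed):

#include <stdio.h>
#include <inttypes.h>

#include <rte_eal.h>
#include <rte_memory.h>
#include <rte_malloc.h>

int
main(int argc, char **argv)
{
	void *buf;
	rte_iova_t iova;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* any malloc'd buffer will do; "iova_check" is just a tag */
	buf = rte_malloc("iova_check", 4096, 0);
	if (buf == NULL)
		return -1;

	iova = rte_malloc_virt2iova(buf);
	printf("iova mode: %s\n",
		rte_eal_iova_mode() == RTE_IOVA_VA ? "VA" : "PA");
	printf("va=%p iova=0x%" PRIx64 " -> %s\n", buf, (uint64_t)iova,
		(uintptr_t)buf == (uintptr_t)iova ? "match" : "differ");

	rte_free(buf);
	return 0;
}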

Fixes: 62196f4e0941 ("mem: rename address mapping function to IOVA")

Signed-off-by: Anatoly Burakov <anatoly.burakov at intel.com>
Acked-by: Santosh Shukla <santosh.shukla at caviumnetworks.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 16a181c36..17c20d4b3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -491,6 +491,9 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
 			hugepg_tbl[i].orig_va = virtaddr;
 		}
 		else {
+			/* rewrite physical addresses in IOVA as VA mode */
+			if (rte_eal_iova_mode() == RTE_IOVA_VA)
+				hugepg_tbl[i].physaddr = (uintptr_t)virtaddr;
 			hugepg_tbl[i].final_va = virtaddr;
 		}
 
@@ -1109,7 +1112,8 @@ rte_eal_hugepage_init(void)
 				continue;
 		}
 
-		if (phys_addrs_available) {
+		if (phys_addrs_available &&
+				rte_eal_iova_mode() != RTE_IOVA_VA) {
 			/* find physical addresses for each hugepage */
 			if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0) {
 				RTE_LOG(DEBUG, EAL, "Failed to find phys addr "
-- 
2.11.0


