[dpdk-dev,5/5] RFC: vfio/ppc64/spapr: Use correct bus addresses for DMA map

Message ID 20170420072402.38106-6-aik@ozlabs.ru (mailing list archive)
State Rejected, archived
Delegated to: Thomas Monjalon
Headers

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Alexey Kardashevskiy April 20, 2017, 7:24 a.m. UTC
  VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address of the
just-created DMA window. It happens to start from zero because the default
window is removed (leaving no windows) and the new window starts from zero.
However, this is not guaranteed and the new window may start from another
address, so this adds an error check.

Another issue is that the IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
bus address, while in this case the physical address of a user page is used.
This changes the IOVA to start from zero, in the hope that the rest of DPDK
expects this.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
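The check the patch adds can be sketched as a standalone helper (hypothetical name; the real code runs inside vfio_spapr_dma_map() right after the VFIO_IOMMU_SPAPR_TCE_CREATE ioctl and reads the kernel-reported window start from create.start_addr):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of the error check this patch adds (hypothetical helper):
 * after VFIO_IOMMU_SPAPR_TCE_CREATE, the kernel reports in
 * create.start_addr where the new DMA window actually begins.  The
 * patch refuses to continue unless it begins at bus address 0, because
 * the mapping loop that follows hands out IOVAs starting from 0. */
static int check_window_start(uint64_t start_addr)
{
	if (start_addr != 0) {
		fprintf(stderr,
		        "  DMA offsets other than zero are not supported, "
		        "new window is created at %" PRIx64 "\n", start_addr);
		return -1;
	}
	return 0;
}
```

With the default window removed and recreated, start_addr happens to be 0 and the helper succeeds; a relocated window is rejected.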
  

Comments

Jonas Pfefferle1 April 20, 2017, 9:04 a.m. UTC | #1
Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:

> From: Alexey Kardashevskiy <aik@ozlabs.ru>
> To: dev@dpdk.org
> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
> Date: 20/04/2017 09:24
> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
> addresses for DMA map
>
> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
> just created DMA window. It happens to start from zero because the default
> window is removed (leaving no windows) and new window starts from zero.
> However this is not guaranteed and the new window may start from another
> address, this adds an error check.
>
> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
> bus address while in this case a physical address of a user page is used.
> This changes IOVA to start from zero in a hope that the rest of DPDK
> expects this.

This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
phys_addr of the memory segment it got from /proc/self/pagemap, cf.
librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
actual IOVA, which basically makes the whole virtual-to-physical mapping
with pagemap unnecessary, which I believe should be the case for VFIO
anyway. Pagemap should only be needed when using pci_uio.
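For background, the pagemap translation mentioned here works by indexing /proc/self/pagemap with the virtual page number: each 64-bit entry carries the page frame number in bits 0-54 and a "present" flag in bit 63. A minimal sketch of that arithmetic (illustration only, not the DPDK implementation):

```c
#include <stdint.h>

#define PAGEMAP_PFN_MASK ((1ULL << 55) - 1) /* bits 0-54: page frame number */
#define PAGEMAP_PRESENT  (1ULL << 63)       /* page resident in RAM */

/* Byte offset of the pagemap entry describing vaddr. */
static uint64_t pagemap_offset(uint64_t vaddr, uint64_t page_size)
{
	return (vaddr / page_size) * sizeof(uint64_t);
}

/* Physical address recovered from a pagemap entry, or 0 if the page is
 * not present.  (Unprivileged readers see the PFN zeroed on modern
 * kernels, so this normally needs root.) */
static uint64_t entry_to_phys(uint64_t entry, uint64_t vaddr, uint64_t page_size)
{
	if (!(entry & PAGEMAP_PRESENT))
		return 0;
	return (entry & PAGEMAP_PFN_MASK) * page_size + vaddr % page_size;
}
```

With 64K pages (typical on ppc64), the entry for a memseg mapped at VA 0x3effaf000000 and backed by PA 0x1e0b000000 would translate any offset within the page back to the physical address.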

>
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
>  lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
> librte_eal/linuxapp/eal/eal_vfio.c
> index 46f951f4d..8b8e75c4f 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>  {
>     const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>     int i, ret;
> -
> +   phys_addr_t io_offset;
>     struct vfio_iommu_spapr_register_memory reg = {
>        .argsz = sizeof(reg),
>        .flags = 0
> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>        return -1;
>     }
>
> +   io_offset = create.start_addr;
> +   if (io_offset) {
> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
> +            "new window is created at %lx\n", io_offset);
> +      return -1;
> +   }
> +
>     /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>     for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>        struct vfio_iommu_type1_dma_map dma_map;
> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>        dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>        dma_map.vaddr = ms[i].addr_64;
>        dma_map.size = ms[i].len;
> -      dma_map.iova = ms[i].phys_addr;
> +      dma_map.iova = io_offset;
>        dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>               VFIO_DMA_MAP_FLAG_WRITE;
>
> @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>           return -1;
>        }
>
> +      io_offset += dma_map.size;
>     }
>
>     return 0;
> --
> 2.11.0
>
  
Alexey Kardashevskiy April 20, 2017, 1:25 p.m. UTC | #2
On 20/04/17 19:04, Jonas Pfefferle1 wrote:
> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
> 
>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> To: dev@dpdk.org
>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>> Date: 20/04/2017 09:24
>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>> addresses for DMA map
>>
>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>> just created DMA window. It happens to start from zero because the default
>> window is removed (leaving no windows) and new window starts from zero.
>> However this is not guaranteed and the new window may start from another
>> address, this adds an error check.
>>
>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>> bus address while in this case a physical address of a user page is used.
>> This changes IOVA to start from zero in a hope that the rest of DPDK
>> expects this.
> 
> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
> phys_addr of the memory segment it got from /proc/self/pagemap cf.
> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
> actual iova which basically makes the whole virtual to phyiscal mapping
> with pagemap unnecessary which I believe should be the case for VFIO
> anyway. Pagemap should only be needed when using pci_uio.


Ah, ok, makes sense now. But it sure needs a big fat comment there, as it is
not obvious why the host RAM address is used there when the DMA window start
is not guaranteed.


> 
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>>  lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>>  1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>> librte_eal/linuxapp/eal/eal_vfio.c
>> index 46f951f4d..8b8e75c4f 100644
>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>  {
>>     const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>>     int i, ret;
>> -
>> +   phys_addr_t io_offset;
>>     struct vfio_iommu_spapr_register_memory reg = {
>>        .argsz = sizeof(reg),
>>        .flags = 0
>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>        return -1;
>>     }
>>  
>> +   io_offset = create.start_addr;
>> +   if (io_offset) {
>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
>> +            "new window is created at %lx\n", io_offset);
>> +      return -1;
>> +   }
>> +
>>     /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>>     for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>>        struct vfio_iommu_type1_dma_map dma_map;
>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>        dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>>        dma_map.vaddr = ms[i].addr_64;
>>        dma_map.size = ms[i].len;
>> -      dma_map.iova = ms[i].phys_addr;
>> +      dma_map.iova = io_offset;
>>        dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>>               VFIO_DMA_MAP_FLAG_WRITE;
>>  
>> @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>           return -1;
>>        }
>>  
>> +      io_offset += dma_map.size;
>>     }
>>  
>>     return 0;
>> --
>> 2.11.0
>>
>
  
Alexey Kardashevskiy April 20, 2017, 2:22 p.m. UTC | #3
On 20/04/17 23:25, Alexey Kardashevskiy wrote:
> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>>
>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>>> To: dev@dpdk.org
>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>>> Date: 20/04/2017 09:24
>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>>> addresses for DMA map
>>>
>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>>> just created DMA window. It happens to start from zero because the default
>>> window is removed (leaving no windows) and new window starts from zero.
>>> However this is not guaranteed and the new window may start from another
>>> address, this adds an error check.
>>>
>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>>> bus address while in this case a physical address of a user page is used.
>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>>> expects this.
>>
>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
>> actual iova which basically makes the whole virtual to phyiscal mapping
>> with pagemap unnecessary which I believe should be the case for VFIO
>> anyway. Pagemap should only be needed when using pci_uio.
> 
> 
> Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
> not obvious why host RAM address is used there as DMA window start is not
> guaranteed.

Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
have the exact same value; in my setup it is 3fffb33c0000, which is a
userspace address - at least ms[i].phys_addr must be a physical address.


> 
> 
>>
>>>
>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>> ---
>>>  lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>>>  1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>>> librte_eal/linuxapp/eal/eal_vfio.c
>>> index 46f951f4d..8b8e75c4f 100644
>>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>  {
>>>     const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>>>     int i, ret;
>>> -
>>> +   phys_addr_t io_offset;
>>>     struct vfio_iommu_spapr_register_memory reg = {
>>>        .argsz = sizeof(reg),
>>>        .flags = 0
>>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>        return -1;
>>>     }
>>>  
>>> +   io_offset = create.start_addr;
>>> +   if (io_offset) {
>>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
>>> +            "new window is created at %lx\n", io_offset);
>>> +      return -1;
>>> +   }
>>> +
>>>     /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>>>     for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>>>        struct vfio_iommu_type1_dma_map dma_map;
>>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>        dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>>>        dma_map.vaddr = ms[i].addr_64;
>>>        dma_map.size = ms[i].len;
>>> -      dma_map.iova = ms[i].phys_addr;
>>> +      dma_map.iova = io_offset;
>>>        dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>>>               VFIO_DMA_MAP_FLAG_WRITE;
>>>  
>>> @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>           return -1;
>>>        }
>>>  
>>> +      io_offset += dma_map.size;
>>>     }
>>>  
>>>     return 0;
>>> --
>>> 2.11.0
>>>
>>
> 
>
  
Jonas Pfefferle1 April 20, 2017, 3:15 p.m. UTC | #4
Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 16:22:01:

> From: Alexey Kardashevskiy <aik@ozlabs.ru>
> To: Jonas Pfefferle1 <JPF@zurich.ibm.com>
> Cc: dev@dpdk.org, Gowrishankar Muthukrishnan
> <gowrishankar.m@in.ibm.com>, Adrian Schuepbach <DRI@zurich.ibm.com>
> Date: 20/04/2017 16:22
> Subject: Re: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
> addresses for DMA map
>
> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
> > On 20/04/17 19:04, Jonas Pfefferle1 wrote:
> >> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
> >>
> >>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>> To: dev@dpdk.org
> >>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
> >>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
> >>> Date: 20/04/2017 09:24
> >>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
> >>> addresses for DMA map
> >>>
> >>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
> >>> just created DMA window. It happens to start from zero because the default
> >>> window is removed (leaving no windows) and new window starts from zero.
> >>> However this is not guaranteed and the new window may start from another
> >>> address, this adds an error check.
> >>>
> >>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
> >>> bus address while in this case a physical address of a user page is used.
> >>> This changes IOVA to start from zero in a hope that the rest of DPDK
> >>> expects this.
> >>
> >> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
> >> phys_addr of the memory segment it got from /proc/self/pagemap cf.
> >> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
> >> actual iova which basically makes the whole virtual to physical mapping
> >> with pagemap unnecessary which I believe should be the case for VFIO
> >> anyway. Pagemap should only be needed when using pci_uio.
> >
> >
> > Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
> > not obvious why host RAM address is used there as DMA window start is not
> > guaranteed.
>
> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
> address - at least ms[i].phys_addr must be physical address.

This might be the case if you are not using hugetlbfs, i.e. passing
"--no-huge", cf. eal_memory.c:980

	/* hugetlbfs can be disabled */
	if (internal_config.no_hugetlbfs) {
		addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
		if (addr == MAP_FAILED) {
			RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
					strerror(errno));
			return -1;
		}
		mcfg->memseg[0].phys_addr = (phys_addr_t)(uintptr_t)addr;
		mcfg->memseg[0].addr = addr;
		mcfg->memseg[0].hugepage_sz = RTE_PGSIZE_4K;
		mcfg->memseg[0].len = internal_config.memory;
		mcfg->memseg[0].socket_id = 0;
		return 0;
	}

If it fails to get the virt2phys mapping it actually assigns iovas starting
from 0 to the memory segments, cf. set_physaddrs eal_memory.c:263

>
>
> >
> >
> >>
> >>>
> >>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>> ---
> >>>  lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
> >>>  1 file changed, 10 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
> >>> librte_eal/linuxapp/eal/eal_vfio.c
> >>> index 46f951f4d..8b8e75c4f 100644
> >>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
> >>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
> >>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
> >>>  {
> >>>     const struct rte_memseg *ms = rte_eal_get_physmem_layout();
> >>>     int i, ret;
> >>> -
> >>> +   phys_addr_t io_offset;
> >>>     struct vfio_iommu_spapr_register_memory reg = {
> >>>        .argsz = sizeof(reg),
> >>>        .flags = 0
> >>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
> >>>        return -1;
> >>>     }
> >>>
> >>> +   io_offset = create.start_addr;
> >>> +   if (io_offset) {
> >>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
> >>> +            "new window is created at %lx\n", io_offset);
> >>> +      return -1;
> >>> +   }
> >>> +
> >>>     /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
> >>>     for (i = 0; i < RTE_MAX_MEMSEG; i++) {
> >>>        struct vfio_iommu_type1_dma_map dma_map;
> >>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
> >>>        dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
> >>>        dma_map.vaddr = ms[i].addr_64;
> >>>        dma_map.size = ms[i].len;
> >>> -      dma_map.iova = ms[i].phys_addr;
> >>> +      dma_map.iova = io_offset;
> >>>        dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
> >>>               VFIO_DMA_MAP_FLAG_WRITE;
> >>>
> >>> @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
> >>>           return -1;
> >>>        }
> >>>
> >>> +      io_offset += dma_map.size;
> >>>     }
> >>>
> >>>     return 0;
> >>> --
> >>> 2.11.0
> >>>
> >>
> >
> >
>
>
> --
> Alexey
>
  
Gowrishankar April 20, 2017, 7:16 p.m. UTC | #5
On Thursday 20 April 2017 07:52 PM, Alexey Kardashevskiy wrote:
> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>>>
>>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>> To: dev@dpdk.org
>>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>>>> Date: 20/04/2017 09:24
>>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>>>> addresses for DMA map
>>>>
>>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>>>> just created DMA window. It happens to start from zero because the default
>>>> window is removed (leaving no windows) and new window starts from zero.
>>>> However this is not guaranteed and the new window may start from another
>>>> address, this adds an error check.
>>>>
>>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>>>> bus address while in this case a physical address of a user page is used.
>>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>>>> expects this.
>>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
>>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
>>> actual iova which basically makes the whole virtual to phyiscal mapping
>>> with pagemap unnecessary which I believe should be the case for VFIO
>>> anyway. Pagemap should only be needed when using pci_uio.
>>
>> Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
>> not obvious why host RAM address is used there as DMA window start is not
>> guaranteed.
> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
> address - at least ms[i].phys_addr must be physical address.

This patch breaks i40e_dev_init() on my server.

EAL: PCI device 0004:01:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1583 net_i40e
EAL:   using IOMMU type 7 (sPAPR)
eth_i40e_dev_init(): Failed to init adminq: -32
EAL: Releasing pci mapped resource for 0004:01:00.0
EAL: Calling pci_unmap_resource for 0004:01:00.0 at 0x3fff82aa0000
EAL: Requested device 0004:01:00.0 cannot be used
EAL: PCI device 0004:01:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1583 net_i40e
EAL:   using IOMMU type 7 (sPAPR)
eth_i40e_dev_init(): Failed to init adminq: -32
EAL: Releasing pci mapped resource for 0004:01:00.1
EAL: Calling pci_unmap_resource for 0004:01:00.1 at 0x3fff82aa0000
EAL: Requested device 0004:01:00.1 cannot be used
EAL: No probed ethernet devices

I have two memseg each of 1G size. Their mapped PA and VA are also different.

(gdb) p /x ms[0]
$3 = {phys_addr = 0x1e0b000000, {addr = 0x3effaf000000, addr_64 = 0x3effaf000000},
  len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x1, nchannel = 0x0, nrank = 0x0}
(gdb) p /x ms[1]
$4 = {phys_addr = 0xf6d000000, {addr = 0x3efbaf000000, addr_64 = 0x3efbaf000000},
  len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x0, nchannel = 0x0, nrank = 0x0}

Could you please recheck this. Maybe, if the new DMA window does not start
from bus address 0, only then reset dma_map.iova by this offset?
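The suggestion above can be sketched as follows (hypothetical helper; the real change would live in the mapping loop of vfio_spapr_dma_map()): keep the 1:1 PA==IOVA mapping DPDK expects whenever the window starts at bus address 0, and only rebase the IOVA when the kernel placed the window elsewhere.

```c
#include <stdint.h>

/* Sketch: preserve PA==IOVA in the common case and only shift the IOVA
 * when the DMA window was created at a nonzero bus address. */
static uint64_t segment_iova(uint64_t window_start, uint64_t phys_addr)
{
	if (window_start == 0)
		return phys_addr;            /* identity mapping, as DPDK expects */
	return window_start + phys_addr; /* rebase into the relocated window */
}
```

Whether a rebased mapping would satisfy the rest of DPDK (which assumes PA==IOVA elsewhere) is exactly the open question in this thread.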


Thanks,
Gowrishankar

>
>>
>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>> ---
>>>>   lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>>>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>>>> librte_eal/linuxapp/eal/eal_vfio.c
>>>> index 46f951f4d..8b8e75c4f 100644
>>>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>   {
>>>>      const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>>>>      int i, ret;
>>>> -
>>>> +   phys_addr_t io_offset;
>>>>      struct vfio_iommu_spapr_register_memory reg = {
>>>>         .argsz = sizeof(reg),
>>>>         .flags = 0
>>>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>         return -1;
>>>>      }
>>>>   
>>>> +   io_offset = create.start_addr;
>>>> +   if (io_offset) {
>>>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
>>>> +            "new window is created at %lx\n", io_offset);
>>>> +      return -1;
>>>> +   }
>>>> +
>>>>      /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>>>>      for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>>>>         struct vfio_iommu_type1_dma_map dma_map;
>>>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>         dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>>>>         dma_map.vaddr = ms[i].addr_64;
>>>>         dma_map.size = ms[i].len;
>>>> -      dma_map.iova = ms[i].phys_addr;
>>>> +      dma_map.iova = io_offset;
>>>>         dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>>>>                VFIO_DMA_MAP_FLAG_WRITE;
>>>>   
>>>> @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>            return -1;
>>>>         }
>>>>   
>>>> +      io_offset += dma_map.size;
>>>>      }
>>>>   
>>>>      return 0;
>>>> --
>>>> 2.11.0
>>>>
>>
>
  
Alexey Kardashevskiy April 20, 2017, 10:01 p.m. UTC | #6
On 21/04/17 01:15, Jonas Pfefferle1 wrote:
> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 16:22:01:
> 
>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> To: Jonas Pfefferle1 <JPF@zurich.ibm.com>
>> Cc: dev@dpdk.org, Gowrishankar Muthukrishnan
>> <gowrishankar.m@in.ibm.com>, Adrian Schuepbach <DRI@zurich.ibm.com>
>> Date: 20/04/2017 16:22
>> Subject: Re: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>> addresses for DMA map
>>
>> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>> > On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>> >> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>> >>
>> >>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> >>> To: dev@dpdk.org
>> >>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>> >>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>> >>> Date: 20/04/2017 09:24
>> >>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>> >>> addresses for DMA map
>> >>>
>> >>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>> >>> just created DMA window. It happens to start from zero because the default
>> >>> window is removed (leaving no windows) and new window starts from zero.
>> >>> However this is not guaranteed and the new window may start from another
>> >>> address, this adds an error check.
>> >>>
>> >>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>> >>> bus address while in this case a physical address of a user page is used.
>> >>> This changes IOVA to start from zero in a hope that the rest of DPDK
>> >>> expects this.
>> >>
>> >> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
>> >> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>> >> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
>> >> actual iova which basically makes the whole virtual to phyiscal mapping
>> >> with pagemap unnecessary which I believe should be the case for VFIO
>> >> anyway. Pagemap should only be needed when using pci_uio.
>> >
>> >
>> > Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
>> > not obvious why host RAM address is used there as DMA window start is not
>> > guaranteed.
>>
>> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
>> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
>> address - at least ms[i].phys_addr must be physical address.
> 
> This might be the case if you are not using hugetlbfs i.e. passing
> "--no-huge" cf. eal_memory.c:980
> 
> 	/* hugetlbfs can be disabled */
> 	if (internal_config.no_hugetlbfs) {
> 		addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
> 				MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
> 		if (addr == MAP_FAILED) {
> 			RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
> 					strerror(errno));
> 			return -1;
> 		}
> 		mcfg->memseg[0].phys_addr = (phys_addr_t)(uintptr_t)addr;
> 		mcfg->memseg[0].addr = addr;
> 		mcfg->memseg[0].hugepage_sz = RTE_PGSIZE_4K;
> 		mcfg->memseg[0].len = internal_config.memory;
> 		mcfg->memseg[0].socket_id = 0;
> 		return 0;
> 	}
> 
> If it fails to get the virt2phys mapping it actually assigns iovas starting
> from 0 to the memory segments, cf. set_physaddrs eal_memory.c:263

Right, this is the case here.


> 
>>
>>
>> >
>> >
>> >>
>> >>>
>> >>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> >>> ---
>> >>>  lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>> >>>  1 file changed, 10 insertions(+), 2 deletions(-)
>> >>>
>> >>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>> >>> librte_eal/linuxapp/eal/eal_vfio.c
>> >>> index 46f951f4d..8b8e75c4f 100644
>> >>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>> >>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>> >>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>  {
>> >>>     const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>> >>>     int i, ret;
>> >>> -
>> >>> +   phys_addr_t io_offset;
>> >>>     struct vfio_iommu_spapr_register_memory reg = {
>> >>>        .argsz = sizeof(reg),
>> >>>        .flags = 0
>> >>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>        return -1;
>> >>>     }
>> >>>  
>> >>> +   io_offset = create.start_addr;
>> >>> +   if (io_offset) {
>> >>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
>> >>> +            "new window is created at %lx\n", io_offset);
>> >>> +      return -1;
>> >>> +   }
>> >>> +
>> >>>     /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>> >>>     for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>> >>>        struct vfio_iommu_type1_dma_map dma_map;
>> >>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>        dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>> >>>        dma_map.vaddr = ms[i].addr_64;
>> >>>        dma_map.size = ms[i].len;
>> >>> -      dma_map.iova = ms[i].phys_addr;
>> >>> +      dma_map.iova = io_offset;
>> >>>        dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>> >>>               VFIO_DMA_MAP_FLAG_WRITE;
>> >>>  
>> >>> @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>           return -1;
>> >>>        }
>> >>>  
>> >>> +      io_offset += dma_map.size;
>> >>>     }
>> >>>  
>> >>>     return 0;
>> >>> --
>> >>> 2.11.0
>> >>>
>> >>
>> >
>> >
>>
>>
>> --
>> Alexey
>>
>
  
Alexey Kardashevskiy April 21, 2017, 3:42 a.m. UTC | #7
On 21/04/17 05:16, gowrishankar muthukrishnan wrote:
> On Thursday 20 April 2017 07:52 PM, Alexey Kardashevskiy wrote:
>> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>>> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>>>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>>>>
>>>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>> To: dev@dpdk.org
>>>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>>>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>>>>> Date: 20/04/2017 09:24
>>>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>>>>> addresses for DMA map
>>>>>
>>>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>>>>> just created DMA window. It happens to start from zero because the
>>>>> default
>>>>> window is removed (leaving no windows) and new window starts from zero.
>>>>> However this is not guaranteed and the new window may start from another
>>>>> address, this adds an error check.
>>>>>
>>>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>>>>> bus address while in this case a physical address of a user page is used.
>>>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>>>>> expects this.
>>>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
>>>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>>>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
>>>> actual iova which basically makes the whole virtual to phyiscal mapping
>>>> with pagemap unnecessary which I believe should be the case for VFIO
>>>> anyway. Pagemap should only be needed when using pci_uio.
>>>
>>> Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
>>> not obvious why host RAM address is used there as DMA window start is not
>>> guaranteed.
>> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
>> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
>> address - at least ms[i].phys_addr must be physical address.
> 
> This patch breaks i40e_dev_init() in my server.
> 
> EAL: PCI device 0004:01:00.0 on NUMA socket 1
> EAL:   probe driver: 8086:1583 net_i40e
> EAL:   using IOMMU type 7 (sPAPR)
> eth_i40e_dev_init(): Failed to init adminq: -32
> EAL: Releasing pci mapped resource for 0004:01:00.0
> EAL: Calling pci_unmap_resource for 0004:01:00.0 at 0x3fff82aa0000
> EAL: Requested device 0004:01:00.0 cannot be used
> EAL: PCI device 0004:01:00.1 on NUMA socket 1
> EAL:   probe driver: 8086:1583 net_i40e
> EAL:   using IOMMU type 7 (sPAPR)
> eth_i40e_dev_init(): Failed to init adminq: -32
> EAL: Releasing pci mapped resource for 0004:01:00.1
> EAL: Calling pci_unmap_resource for 0004:01:00.1 at 0x3fff82aa0000
> EAL: Requested device 0004:01:00.1 cannot be used
> EAL: No probed ethernet devices
> 
> I have two memseg each of 1G size. Their mapped PA and VA are also different.
> 
> (gdb) p /x ms[0]
> $3 = {phys_addr = 0x1e0b000000, {addr = 0x3effaf000000, addr_64 = 0x3effaf000000},
>   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x1, nchannel = 0x0, nrank = 0x0}
> (gdb) p /x ms[1]
> $4 = {phys_addr = 0xf6d000000, {addr = 0x3efbaf000000, addr_64 = 0x3efbaf000000},
>   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x0, nchannel = 0x0, nrank = 0x0}
> 
> Could you please recheck this. May be, if new DMA window does not start
> from bus address 0,
> only then you reset dma_map.iova for this offset ?

As we figured out, it is the --no-huge effect.

Another thing - as I read the code - the window size comes from
rte_eal_get_physmem_size(). On my 512GB machine, DPDK allocates only a 16GB
window, so it is far away from the 1:1 mapping which is believed to be DPDK's
expectation. Looking now for a better version of rte_eal_get_physmem_size()...


And another problem - after a few unsuccessful starts of app/testpmd, all
huge pages are gone:

aik@stratton2:~$ cat /proc/meminfo
MemTotal:       535527296 kB
MemFree:        516662272 kB
MemAvailable:   515501696 kB
...
HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB


How is that possible? What is pinning these pages so that testpmd's process
exit does not clean them up?




> 
> 
> Thanks,
> Gowrishankar
> 
>>
>>>
>>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>> ---
>>>>>   lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>>>>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>>>>> librte_eal/linuxapp/eal/eal_vfio.c
>>>>> index 46f951f4d..8b8e75c4f 100644
>>>>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>   {
>>>>>      const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>>>>>      int i, ret;
>>>>> -
>>>>> +   phys_addr_t io_offset;
>>>>>      struct vfio_iommu_spapr_register_memory reg = {
>>>>>         .argsz = sizeof(reg),
>>>>>         .flags = 0
>>>>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>         return -1;
>>>>>      }
>>>>>   +   io_offset = create.start_addr;
>>>>> +   if (io_offset) {
>>>>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not
>>>>> supported, "
>>>>> +            "new window is created at %lx\n", io_offset);
>>>>> +      return -1;
>>>>> +   }
>>>>> +
>>>>>      /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>>>>>      for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>>>>>         struct vfio_iommu_type1_dma_map dma_map;
>>>>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>         dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>>>>>         dma_map.vaddr = ms[i].addr_64;
>>>>>         dma_map.size = ms[i].len;
>>>>> -      dma_map.iova = ms[i].phys_addr;
>>>>> +      dma_map.iova = io_offset;
>>>>>         dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>>>>>                VFIO_DMA_MAP_FLAG_WRITE;
>>>>>   @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>            return -1;
>>>>>         }
>>>>>   +      io_offset += dma_map.size;
>>>>>      }
>>>>>        return 0;
>>>>> -- 
>>>>> 2.11.0
>>>>>
>>>
>>
> 
>
  
Alexey Kardashevskiy April 21, 2017, 8:43 a.m. UTC | #8
On 21/04/17 13:42, Alexey Kardashevskiy wrote:
> On 21/04/17 05:16, gowrishankar muthukrishnan wrote:
>> On Thursday 20 April 2017 07:52 PM, Alexey Kardashevskiy wrote:
>>> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>>>> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>>>>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>>>>>
>>>>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>>> To: dev@dpdk.org
>>>>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>>>>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>>>>>> Date: 20/04/2017 09:24
>>>>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>>>>>> addresses for DMA map
>>>>>>
>>>>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>>>>>> just created DMA window. It happens to start from zero because the
>>>>>> default
>>>>>> window is removed (leaving no windows) and new window starts from zero.
>>>>>> However this is not guaranteed and the new window may start from another
>>>>>> address, this adds an error check.
>>>>>>
>>>>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>>>>>> bus address while in this case a physical address of a user page is used.
>>>>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>>>>>> expects this.
>>>>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
>>>>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>>>>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
>>>>> actual iova which basically makes the whole virtual to phyiscal mapping
>>>>> with pagemap unnecessary which I believe should be the case for VFIO
>>>>> anyway. Pagemap should only be needed when using pci_uio.
>>>>
>>>> Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
>>>> not obvious why host RAM address is used there as DMA window start is not
>>>> guaranteed.
>>> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
>>> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
>>> address - at least ms[i].phys_addr must be physical address.
>>
>> This patch breaks i40e_dev_init() in my server.
>>
>> EAL: PCI device 0004:01:00.0 on NUMA socket 1
>> EAL:   probe driver: 8086:1583 net_i40e
>> EAL:   using IOMMU type 7 (sPAPR)
>> eth_i40e_dev_init(): Failed to init adminq: -32
>> EAL: Releasing pci mapped resource for 0004:01:00.0
>> EAL: Calling pci_unmap_resource for 0004:01:00.0 at 0x3fff82aa0000
>> EAL: Requested device 0004:01:00.0 cannot be used
>> EAL: PCI device 0004:01:00.1 on NUMA socket 1
>> EAL:   probe driver: 8086:1583 net_i40e
>> EAL:   using IOMMU type 7 (sPAPR)
>> eth_i40e_dev_init(): Failed to init adminq: -32
>> EAL: Releasing pci mapped resource for 0004:01:00.1
>> EAL: Calling pci_unmap_resource for 0004:01:00.1 at 0x3fff82aa0000
>> EAL: Requested device 0004:01:00.1 cannot be used
>> EAL: No probed ethernet devices
>>
>> I have two memseg each of 1G size. Their mapped PA and VA are also different.
>>
>> (gdb) p /x ms[0]
>> $3 = {phys_addr = 0x1e0b000000, {addr = 0x3effaf000000, addr_64 =
>> 0x3effaf000000},
>>   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x1, nchannel =
>> 0x0, nrank = 0x0}
>> (gdb) p /x ms[1]
>> $4 = {phys_addr = 0xf6d000000, {addr = 0x3efbaf000000, addr_64 =
>> 0x3efbaf000000},
>>   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x0, nchannel =
>> 0x0, nrank = 0x0}
>>
>> Could you please recheck this. May be, if new DMA window does not start
>> from bus address 0,
>> only then you reset dma_map.iova for this offset ?
> 
> As we figured out, it is --no-huge effect.
> 
> Another thing - as I read the code - the window size comes from
> rte_eal_get_physmem_size(). On my 512GB machine, DPDK allocates only 16GB
> window so it is far away from 1:1 mapping which is believed to be DPDK
> expectation. Looking now for a better version of rte_eal_get_physmem_size()...


I have not found any helper to get the total RAM size or to round up to a
power of two - I could look through the memory segments, find the one with
the highest ending physical address, round it up to a power of two (a
requirement on the POWER8 platform for the DMA window size) and use that as
the DMA window size - is there an analog of the kernel's order_base_2()?


> 
> 
> And another problem - after few unsuccessful starts of app/testpmd, all
> huge pages are gone:
> 
> aik@stratton2:~$ cat /proc/meminfo
> MemTotal:       535527296 kB
> MemFree:        516662272 kB
> MemAvailable:   515501696 kB
> ...
> HugePages_Total:    1024
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:      16384 kB
> 
> 
> How is that possible? What is pinning these pages so testpmd process exit
> does not clear that up?

Still not clear - any ideas what might be causing this?



btw what is the correct way of running DPDK with hugepages?

I basically create a folder in ~aik/hugepages and do
sudo mount -t hugetlbfs hugetlbfs ~aik/hugepages
sudo sysctl vm.nr_hugepages=4096

This creates a bunch of pages:
aik@stratton2:~$ cat /proc/meminfo | grep HugePage
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:    4096
HugePages_Free:     4096
HugePages_Rsvd:        0
HugePages_Surp:        0


And then I watch testpmd detect the hugepages (it does see 4096 16MB
pages) and allocate them:
rte_eal_hugepage_init() calls map_all_hugepages(... orig=1) - here all 4096
pages are allocated; then it calls map_all_hugepages(... orig=0) - and here
I get lots of "EAL: Cannot get a virtual area: Cannot allocate memory" for
the obvious reason that all pages are already allocated. Since you folks
have this tested somehow - what am I doing wrong? :) This is all very
confusing - what is this orig=0/1 business all about?




> 
> 
> 
> 
>>
>>
>> Thanks,
>> Gowrishankar
>>
>>>
>>>>
>>>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>>> ---
>>>>>>   lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>>>>>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>>>>>> librte_eal/linuxapp/eal/eal_vfio.c
>>>>>> index 46f951f4d..8b8e75c4f 100644
>>>>>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>>>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>>>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>   {
>>>>>>      const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>>>>>>      int i, ret;
>>>>>> -
>>>>>> +   phys_addr_t io_offset;
>>>>>>      struct vfio_iommu_spapr_register_memory reg = {
>>>>>>         .argsz = sizeof(reg),
>>>>>>         .flags = 0
>>>>>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>         return -1;
>>>>>>      }
>>>>>>   +   io_offset = create.start_addr;
>>>>>> +   if (io_offset) {
>>>>>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not
>>>>>> supported, "
>>>>>> +            "new window is created at %lx\n", io_offset);
>>>>>> +      return -1;
>>>>>> +   }
>>>>>> +
>>>>>>      /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>>>>>>      for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>>>>>>         struct vfio_iommu_type1_dma_map dma_map;
>>>>>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>         dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>>>>>>         dma_map.vaddr = ms[i].addr_64;
>>>>>>         dma_map.size = ms[i].len;
>>>>>> -      dma_map.iova = ms[i].phys_addr;
>>>>>> +      dma_map.iova = io_offset;
>>>>>>         dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>>>>>>                VFIO_DMA_MAP_FLAG_WRITE;
>>>>>>   @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>            return -1;
>>>>>>         }
>>>>>>   +      io_offset += dma_map.size;
>>>>>>      }
>>>>>>        return 0;
>>>>>> -- 
>>>>>> 2.11.0
>>>>>>
>>>>
>>>
>>
>>
> 
>
  
Gowrishankar April 21, 2017, 8:51 a.m. UTC | #9
On Friday 21 April 2017 09:12 AM, Alexey Kardashevskiy wrote:
> On 21/04/17 05:16, gowrishankar muthukrishnan wrote:
>> On Thursday 20 April 2017 07:52 PM, Alexey Kardashevskiy wrote:
>>> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>>>> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>>>>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>>>>>
>>>>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>>> To: dev@dpdk.org
>>>>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>>>>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>>>>>> Date: 20/04/2017 09:24
>>>>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>>>>>> addresses for DMA map
>>>>>>
>>>>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>>>>>> just created DMA window. It happens to start from zero because the
>>>>>> default
>>>>>> window is removed (leaving no windows) and new window starts from zero.
>>>>>> However this is not guaranteed and the new window may start from another
>>>>>> address, this adds an error check.
>>>>>>
>>>>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>>>>>> bus address while in this case a physical address of a user page is used.
>>>>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>>>>>> expects this.
>>>>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It will use the
>>>>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>>>>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here to the
>>>>> actual iova which basically makes the whole virtual to phyiscal mapping
>>>>> with pagemap unnecessary which I believe should be the case for VFIO
>>>>> anyway. Pagemap should only be needed when using pci_uio.
>>>> Ah, ok, makes sense now. But it sure needs a big fat comment there as it is
>>>> not obvious why host RAM address is used there as DMA window start is not
>>>> guaranteed.
>>> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64 both
>>> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
>>> address - at least ms[i].phys_addr must be physical address.
>> This patch breaks i40e_dev_init() in my server.
>>
>> EAL: PCI device 0004:01:00.0 on NUMA socket 1
>> EAL:   probe driver: 8086:1583 net_i40e
>> EAL:   using IOMMU type 7 (sPAPR)
>> eth_i40e_dev_init(): Failed to init adminq: -32
>> EAL: Releasing pci mapped resource for 0004:01:00.0
>> EAL: Calling pci_unmap_resource for 0004:01:00.0 at 0x3fff82aa0000
>> EAL: Requested device 0004:01:00.0 cannot be used
>> EAL: PCI device 0004:01:00.1 on NUMA socket 1
>> EAL:   probe driver: 8086:1583 net_i40e
>> EAL:   using IOMMU type 7 (sPAPR)
>> eth_i40e_dev_init(): Failed to init adminq: -32
>> EAL: Releasing pci mapped resource for 0004:01:00.1
>> EAL: Calling pci_unmap_resource for 0004:01:00.1 at 0x3fff82aa0000
>> EAL: Requested device 0004:01:00.1 cannot be used
>> EAL: No probed ethernet devices
>>
>> I have two memseg each of 1G size. Their mapped PA and VA are also different.
>>
>> (gdb) p /x ms[0]
>> $3 = {phys_addr = 0x1e0b000000, {addr = 0x3effaf000000, addr_64 =
>> 0x3effaf000000},
>>    len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x1, nchannel =
>> 0x0, nrank = 0x0}
>> (gdb) p /x ms[1]
>> $4 = {phys_addr = 0xf6d000000, {addr = 0x3efbaf000000, addr_64 =
>> 0x3efbaf000000},
>>    len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x0, nchannel =
>> 0x0, nrank = 0x0}
>>
>> Could you please recheck this. May be, if new DMA window does not start
>> from bus address 0,
>> only then you reset dma_map.iova for this offset ?
> As we figured out, it is --no-huge effect.
>
> Another thing - as I read the code - the window size comes from
> rte_eal_get_physmem_size(). On my 512GB machine, DPDK allocates only 16GB
> window so it is far away from 1:1 mapping which is believed to be DPDK
> expectation. Looking now for a better version of rte_eal_get_physmem_size()...

If your memory segments are many in count (they are not contiguous unless
reserved at boot time), could you check CONFIG_RTE_MAX_NUMA_NODES and
CONFIG_RTE_MAX_MEMSEG?
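For context, these are build-time options in DPDK's config of that era (the exact file name and default values below are from memory and should be double-checked against the tree):

```
# config/common_base (DPDK 17.x era)
CONFIG_RTE_MAX_NUMA_NODES=8
CONFIG_RTE_MAX_MEMSEG=256
```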

Thanks,
Gowrishankar
> And another problem - after few unsuccessful starts of app/testpmd, all
> huge pages are gone:
>
> aik@stratton2:~$ cat /proc/meminfo
> MemTotal:       535527296 kB
> MemFree:        516662272 kB
> MemAvailable:   515501696 kB
> ...
> HugePages_Total:    1024
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:      16384 kB
>
>
> How is that possible? What is pinning these pages so testpmd process exit
> does not clear that up?
>
>
>
>>
>> Thanks,
>> Gowrishankar
>>
>>>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>>> ---
>>>>>>    lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>>>>>>    1 file changed, 10 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>>>>>> librte_eal/linuxapp/eal/eal_vfio.c
>>>>>> index 46f951f4d..8b8e75c4f 100644
>>>>>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>>>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>>>>>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>    {
>>>>>>       const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>>>>>>       int i, ret;
>>>>>> -
>>>>>> +   phys_addr_t io_offset;
>>>>>>       struct vfio_iommu_spapr_register_memory reg = {
>>>>>>          .argsz = sizeof(reg),
>>>>>>          .flags = 0
>>>>>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>          return -1;
>>>>>>       }
>>>>>>    +   io_offset = create.start_addr;
>>>>>> +   if (io_offset) {
>>>>>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not
>>>>>> supported, "
>>>>>> +            "new window is created at %lx\n", io_offset);
>>>>>> +      return -1;
>>>>>> +   }
>>>>>> +
>>>>>>       /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>>>>>>       for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>>>>>>          struct vfio_iommu_type1_dma_map dma_map;
>>>>>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>          dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>>>>>>          dma_map.vaddr = ms[i].addr_64;
>>>>>>          dma_map.size = ms[i].len;
>>>>>> -      dma_map.iova = ms[i].phys_addr;
>>>>>> +      dma_map.iova = io_offset;
>>>>>>          dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>>>>>>                 VFIO_DMA_MAP_FLAG_WRITE;
>>>>>>    @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>>>>>>             return -1;
>>>>>>          }
>>>>>>    +      io_offset += dma_map.size;
>>>>>>       }
>>>>>>         return 0;
>>>>>> -- 
>>>>>> 2.11.0
>>>>>>
>>
>
  
Alexey Kardashevskiy April 21, 2017, 8:59 a.m. UTC | #10
On 21/04/17 18:35, Jonas Pfefferle1 wrote:
> ----------------------------------------
> Jonas Pfefferle
> Cloud Storage & Analytics
> IBM Zurich Research Laboratory
> Saeumerstrasse 4
> CH-8803 Rueschlikon, Switzerland
> +41 44 724 8539
> 
> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 21/04/2017 05:42:35:
> 
>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> To: gowrishankar muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
>> Cc: Jonas Pfefferle1 <JPF@zurich.ibm.com>, Gowrishankar
>> Muthukrishnan <gowrishankar.m@in.ibm.com>, Adrian Schuepbach
>> <DRI@zurich.ibm.com>, "dev@dpdk.org" <dev@dpdk.org>
>> Date: 21/04/2017 05:42
>> Subject: Re: [dpdk-dev] [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use
>> correct bus addresses for DMA map
>>
>> On 21/04/17 05:16, gowrishankar muthukrishnan wrote:
>> > On Thursday 20 April 2017 07:52 PM, Alexey Kardashevskiy wrote:
>> >> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>> >>> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>> >>>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>> >>>>
>> >>>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> >>>>> To: dev@dpdk.org
>> >>>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>> >>>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>> >>>>> Date: 20/04/2017 09:24
>> >>>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>> >>>>> addresses for DMA map
>> >>>>>
>> >>>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>> >>>>> just created DMA window. It happens to start from zero because the
>> >>>>> default
>> >>>>> window is removed (leaving no windows) and new window starts from zero.
>> >>>>> However this is not guaranteed and the new window may start from
> another
>> >>>>> address, this adds an error check.
>> >>>>>
>> >>>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be a PCI
>> >>>>> bus address while in this case a physical address of a user
>> page is used.
>> >>>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>> >>>>> expects this.
>> >>>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It
>> will use the
>> >>>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>> >>>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it here
> to the
>> >>>> actual iova which basically makes the whole virtual to phyiscal mapping
>> >>>> with pagemap unnecessary which I believe should be the case for VFIO
>> >>>> anyway. Pagemap should only be needed when using pci_uio.
>> >>>
>> >>> Ah, ok, makes sense now. But it sure needs a big fat comment
>> there as it is
>> >>> not obvious why host RAM address is used there as DMA window start is not
>> >>> guaranteed.
>> >> Well, either way there is some bug - ms[i].phys_addr and ms[i].addr_64
> both
>> >> have exact same value, in my setup it is 3fffb33c0000 which is a userspace
>> >> address - at least ms[i].phys_addr must be physical address.
>> >
>> > This patch breaks i40e_dev_init() in my server.
>> >
>> > EAL: PCI device 0004:01:00.0 on NUMA socket 1
>> > EAL:   probe driver: 8086:1583 net_i40e
>> > EAL:   using IOMMU type 7 (sPAPR)
>> > eth_i40e_dev_init(): Failed to init adminq: -32
>> > EAL: Releasing pci mapped resource for 0004:01:00.0
>> > EAL: Calling pci_unmap_resource for 0004:01:00.0 at 0x3fff82aa0000
>> > EAL: Requested device 0004:01:00.0 cannot be used
>> > EAL: PCI device 0004:01:00.1 on NUMA socket 1
>> > EAL:   probe driver: 8086:1583 net_i40e
>> > EAL:   using IOMMU type 7 (sPAPR)
>> > eth_i40e_dev_init(): Failed to init adminq: -32
>> > EAL: Releasing pci mapped resource for 0004:01:00.1
>> > EAL: Calling pci_unmap_resource for 0004:01:00.1 at 0x3fff82aa0000
>> > EAL: Requested device 0004:01:00.1 cannot be used
>> > EAL: No probed ethernet devices
>> >
>> > I have two memseg each of 1G size. Their mapped PA and VA are
> alsodifferent.
>> >
>> > (gdb) p /x ms[0]
>> > $3 = {phys_addr = 0x1e0b000000, {addr = 0x3effaf000000, addr_64 =
>> > 0x3effaf000000},
>> >   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x1, nchannel =
>> > 0x0, nrank = 0x0}
>> > (gdb) p /x ms[1]
>> > $4 = {phys_addr = 0xf6d000000, {addr = 0x3efbaf000000, addr_64 =
>> > 0x3efbaf000000},
>> >   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x0, nchannel =
>> > 0x0, nrank = 0x0}
>> >
>> > Could you please recheck this. May be, if new DMA window does not start
>> > from bus address 0,
>> > only then you reset dma_map.iova for this offset ?
>>
>> As we figured out, it is --no-huge effect.
>>
>> Another thing - as I read the code - the window size comes from
>> rte_eal_get_physmem_size(). On my 512GB machine, DPDK allocates only 16GB
>> window so it is far away from 1:1 mapping which is believed to be DPDK
>> expectation. Looking now for a better version of
> rte_eal_get_physmem_size()...
> 
> You can try specifying the size with -m or --socket-mem.


Oh, right. Thanks.


>>
>>
>> And another problem - after few unsuccessful starts of app/testpmd, all
>> huge pages are gone:
>>
>> aik@stratton2:~$ cat /proc/meminfo
>> MemTotal:       535527296 kB
>> MemFree:        516662272 kB
>> MemAvailable:   515501696 kB
>> ...
>> HugePages_Total:    1024
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:      16384 kB
>>
>>
>> How is that possible? What is pinning these pages so testpmd process exit
>> does not clear that up?
> 
> I've also seen this. I think that happens if it does not cleanly shutdown.
> I regularly clean /dev/hugepages ...


Oh, I am learning new things about hugepages as we speak :) I think the
mapping not being anonymous has this effect. Anyway, this is a bug - pages
stay allocated after every run of testpmd, even if it does not crash but
just calls exit() :-/
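The leftover backing files in the hugetlbfs mount are what keep the pages allocated after the process exits; removing them returns the pages to the free pool. A sketch of a cleanup helper (the rtemap_* pattern is, as far as I can tell, the naming convention DPDK's eal_memory.c uses for its hugepage files - treat it as an assumption):

```shell
#!/bin/sh
# Remove leftover DPDK hugepage backing files so the reserved pages
# show up as HugePages_Free again in /proc/meminfo.
clean_hugepages() {
    mnt="${1:-/dev/hugepages}"
    rm -f "$mnt"/rtemap_*
}
```

Usage would be e.g. `clean_hugepages ~aik/hugepages`, then recheck HugePages_Free.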

I still cannot get it working, now with an Intel 40G Ethernet card; this is
how far I get:

USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=1419456, size=2176,
socket=1
EAL: Error - exiting with code: 1
  Cause: Creation of mbuf pool for socket 1 failed: Cannot allocate memory
aik@stratton2:~$


I have put more details to another email.


> 
>>
>>
>>
>>
>> >
>> >
>> > Thanks,
>> > Gowrishankar
>> >
>> >>
>> >>>
>> >>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> >>>>> ---
>> >>>>>   lib/librte_eal/linuxapp/eal/eal_vfio.c | 12 ++++++++++--
>> >>>>>   1 file changed, 10 insertions(+), 2 deletions(-)
>> >>>>>
>> >>>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/
>> >>>>> librte_eal/linuxapp/eal/eal_vfio.c
>> >>>>> index 46f951f4d..8b8e75c4f 100644
>> >>>>> --- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
>> >>>>> +++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
>> >>>>> @@ -658,7 +658,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>>>   {
>> >>>>>      const struct rte_memseg *ms = rte_eal_get_physmem_layout();
>> >>>>>      int i, ret;
>> >>>>> -
>> >>>>> +   phys_addr_t io_offset;
>> >>>>>      struct vfio_iommu_spapr_register_memory reg = {
>> >>>>>         .argsz = sizeof(reg),
>> >>>>>         .flags = 0
>> >>>>> @@ -702,6 +702,13 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>>>         return -1;
>> >>>>>      }
>> >>>>>   +   io_offset = create.start_addr;
>> >>>>> +   if (io_offset) {
>> >>>>> +      RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not
>> >>>>> supported, "
>> >>>>> +            "new window is created at %lx\n", io_offset);
>> >>>>> +      return -1;
>> >>>>> +   }
>> >>>>> +
>> >>>>>      /* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
>> >>>>>      for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>> >>>>>         struct vfio_iommu_type1_dma_map dma_map;
>> >>>>> @@ -723,7 +730,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>>>         dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
>> >>>>>         dma_map.vaddr = ms[i].addr_64;
>> >>>>>         dma_map.size = ms[i].len;
>> >>>>> -      dma_map.iova = ms[i].phys_addr;
>> >>>>> +      dma_map.iova = io_offset;
>> >>>>>         dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
>> >>>>>                VFIO_DMA_MAP_FLAG_WRITE;
>> >>>>>   @@ -735,6 +742,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
>> >>>>>            return -1;
>> >>>>>         }
>> >>>>>   +      io_offset += dma_map.size;
>> >>>>>      }
>> >>>>>        return 0;
>> >>>>> --
>> >>>>> 2.11.0
>> >>>>>
>> >>>
>> >>
>> >
>> >
>>
>>
>> --
>> Alexey
>>
>
  
Alexey Kardashevskiy April 22, 2017, 12:12 a.m. UTC | #11
On 21/04/17 19:19, Jonas Pfefferle1 wrote:
> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 21/04/2017 10:43:53:
> 
>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> To: gowrishankar muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
>> Cc: Jonas Pfefferle1 <JPF@zurich.ibm.com>, Gowrishankar
>> Muthukrishnan <gowrishankar.m@in.ibm.com>, Adrian Schuepbach
>> <DRI@zurich.ibm.com>, "dev@dpdk.org" <dev@dpdk.org>
>> Date: 21/04/2017 10:44
>> Subject: Re: [dpdk-dev] [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use
>> correct bus addresses for DMA map
>>
>> On 21/04/17 13:42, Alexey Kardashevskiy wrote:
>> > On 21/04/17 05:16, gowrishankar muthukrishnan wrote:
>> >> On Thursday 20 April 2017 07:52 PM, Alexey Kardashevskiy wrote:
>> >>> On 20/04/17 23:25, Alexey Kardashevskiy wrote:
>> >>>> On 20/04/17 19:04, Jonas Pfefferle1 wrote:
>> >>>>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote on 20/04/2017 09:24:02:
>> >>>>>
>> >>>>>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>> >>>>>> To: dev@dpdk.org
>> >>>>>> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, JPF@zurich.ibm.com,
>> >>>>>> Gowrishankar Muthukrishnan <gowrishankar.m@in.ibm.com>
>> >>>>>> Date: 20/04/2017 09:24
>> >>>>>> Subject: [PATCH dpdk 5/5] RFC: vfio/ppc64/spapr: Use correct bus
>> >>>>>> addresses for DMA map
>> >>>>>>
>> >>>>>> VFIO_IOMMU_SPAPR_TCE_CREATE ioctl() returns the actual bus address for
>> >>>>>> just created DMA window. It happens to start from zero because the
>> >>>>>> default
>> >>>>>> window is removed (leaving no windows) and new window starts from
> zero.
>> >>>>>> However this is not guaranteed and the new window may start
>> from another
>> >>>>>> address, this adds an error check.
>> >>>>>>
>> >>>>>> Another issue is that IOVA passed to VFIO_IOMMU_MAP_DMA should be
> a PCI
>> >>>>>> bus address while in this case a physical address of a user
>> page is used.
>> >>>>>> This changes IOVA to start from zero in a hope that the rest of DPDK
>> >>>>>> expects this.
>> >>>>> This is not the case. DPDK expects a 1:1 mapping PA==IOVA. It
>> will use the
>> >>>>> phys_addr of the memory segment it got from /proc/self/pagemap cf.
>> >>>>> librte_eal/linuxapp/eal/eal_memory.c. We could try setting it
>> here to the
>> >>>>> actual iova which basically makes the whole virtual to phyiscal mapping
>> >>>>> with pagemap unnecessary which I believe should be the case for VFIO
>> >>>>> anyway. Pagemap should only be needed when using pci_uio.
>> >>>>
>> >>>> Ah, ok, makes sense now. But it sure needs a big fat comment
>> there as it is
>> >>>> not obvious why host RAM address is used there as DMA window start
> is not
>> >>>> guaranteed.
>> >>> Well, either way there is some bug - ms[i].phys_addr and ms
>> [i].addr_64 both
>> >>> have exact same value, in my setup it is 3fffb33c0000 which is auserspace
>> >>> address - at least ms[i].phys_addr must be physical address.
>> >>
>> >> This patch breaks i40e_dev_init() in my server.
>> >>
>> >> EAL: PCI device 0004:01:00.0 on NUMA socket 1
>> >> EAL:   probe driver: 8086:1583 net_i40e
>> >> EAL:   using IOMMU type 7 (sPAPR)
>> >> eth_i40e_dev_init(): Failed to init adminq: -32
>> >> EAL: Releasing pci mapped resource for 0004:01:00.0
>> >> EAL: Calling pci_unmap_resource for 0004:01:00.0 at 0x3fff82aa0000
>> >> EAL: Requested device 0004:01:00.0 cannot be used
>> >> EAL: PCI device 0004:01:00.1 on NUMA socket 1
>> >> EAL:   probe driver: 8086:1583 net_i40e
>> >> EAL:   using IOMMU type 7 (sPAPR)
>> >> eth_i40e_dev_init(): Failed to init adminq: -32
>> >> EAL: Releasing pci mapped resource for 0004:01:00.1
>> >> EAL: Calling pci_unmap_resource for 0004:01:00.1 at 0x3fff82aa0000
>> >> EAL: Requested device 0004:01:00.1 cannot be used
>> >> EAL: No probed ethernet devices
>> >>
>> >> I have two memseg each of 1G size. Their mapped PA and VA are
>> also different.
>> >>
>> >> (gdb) p /x ms[0]
>> >> $3 = {phys_addr = 0x1e0b000000, {addr = 0x3effaf000000, addr_64 =
>> >> 0x3effaf000000},
>> >>   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x1, nchannel =
>> >> 0x0, nrank = 0x0}
>> >> (gdb) p /x ms[1]
>> >> $4 = {phys_addr = 0xf6d000000, {addr = 0x3efbaf000000, addr_64 =
>> >> 0x3efbaf000000},
>> >>   len = 0x40000000, hugepage_sz = 0x1000000, socket_id = 0x0, nchannel =
>> >> 0x0, nrank = 0x0}
>> >>
>> >> Could you please recheck this. May be, if new DMA window does not start
>> >> from bus address 0,
>> >> only then you reset dma_map.iova for this offset ?
>> >
>> > As we figured out, it is --no-huge effect.
>> >
>> > Another thing - as I read the code - the window size comes from
>> > rte_eal_get_physmem_size(). On my 512GB machine, DPDK allocates only 16GB
>> > window so it is far away from 1:1 mapping which is believed to be DPDK
>> > expectation. Looking now for a better version of
>> rte_eal_get_physmem_size()...
>>
>>
>> I have not found any helper to get a total RAM size or
>> round-up-to-power-of-two - I could look through memory segments, find the
>> one with highest ending physical address, round it up to power of two
>> (requirement on POWER8 platform for a DMA window size) and use it as a DMA
>> window size - is there kernel's order_base_2() analog?
> 
> 
> I guess you have to iterate over the memory segments and create multiple
> windows covering each of them if you want to do a 1:1 mapping.


As of today, POWER8 systems can only do 2 windows. And one window is
actually enough - it can be as big as the entire RAM and still contain only
the mappings DPDK needs. The problem is knowing this RAM size, which is
easy; I just did not want to reinvent the wheel here with reading
/proc/meminfo, etc.


> 
>>
>>
>> >
>> >
>> > And another problem - after few unsuccessful starts of app/testpmd, all
>> > huge pages are gone:
>> >
>> > aik@stratton2:~$ cat /proc/meminfo
>> > MemTotal:       535527296 kB
>> > MemFree:        516662272 kB
>> > MemAvailable:   515501696 kB
>> > ...
>> > HugePages_Total:    1024
>> > HugePages_Free:        0
>> > HugePages_Rsvd:        0
>> > HugePages_Surp:        0
>> > Hugepagesize:      16384 kB
>> >
>> >
>> > How is that possible? What is pinning these pages so testpmd process exit
>> > does not clear that up?
>>
>> Still not clear, any ideas why might be causing this?
>>
>>
>>
>> btw what is the correct way of running DPDK with hugepages?
>>
>> I basically create a folder in ~aik/hugepages and do
>> sudo mount -t hugetlbfs hugetlbfs ~aik/hugepages
>> sudo sysctl vm.nr_hugepages=4096
>>
>> This creates bunch of pages:
>> aik@stratton2:~$ cat /proc/meminfo | grep HugePage
>> AnonHugePages:         0 kB
>> ShmemHugePages:        0 kB
>> HugePages_Total:    4096
>> HugePages_Free:     4096
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>>
>>
>> And then I watch testpmd detect the hugepages (it does see 4096 16MB
>> pages) and allocate them:
>> rte_eal_hugepage_init() calls map_all_hugepages(... orig=1) - here all 4096
>> pages are allocated; then it calls map_all_hugepages(... orig=0) - and here
>> I get lots of "EAL: Cannot get a virtual area: Cannot allocate memory" for
>> the obvious reason that all pages are already allocated. Since you folks
>> have this tested somehow - what am I doing wrong? :) This is all very
>> confusing - what is that orig=0/1 business all about?
>>
> 
> DPDK tries to allocate all hugepages that are available to find the
> smallest number of physically contiguous memory segments covering the
> specified memory size. It then releases all those hugepages that it did
> not need; not sure how this is related to orig=1/0 though.


No, it never does release a single page :-/
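One way to see this directly is to compare the kernel counters and the hugetlbfs mount contents before and after a run (a debugging sketch; /mnt/huge is a placeholder for the actual mount point, and the rtemap_* naming is DPDK's default file prefix):

```shell
# Kernel-side hugepage accounting before/after a testpmd run
awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo

# Files DPDK leaves behind on the hugetlbfs mount (rtemap_* by default);
# /mnt/huge stands in for whatever mount point is actually used
ls -l /mnt/huge 2>/dev/null || true

# Processes still mapping files from a mount containing "huge" in its path
grep -l 'huge' /proc/[0-9]*/maps 2>/dev/null | head || true
```

If HugePages_Free stays at 0 after exit while files remain under the mount, the pages are being held by leftover hugetlbfs files rather than by a live process.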

> You can specify your
> hugepage mount with --huge-dir; maybe this helps.

No, makes no difference.

How do you run DPDK (full command line) on POWER8 to make any use of it?


Anatoly Burakov April 24, 2017, 9:40 a.m. UTC | #12
Hi Alexey,

> > DPDK tries to allocate all hugepages that are available to find the
> > smallest number of physically contiguous memory segments covering the
> > specified memory size. It then releases all those hugepages that it
> > did not need; not sure how this is related to orig=1/0 though.
> 
> 
> No, it never does release a single page :-/

That is weird.

As far as I can remember, when EAL initializes the pages, it checks if there are any active locks on hugepage files for a given prefix (which presumably you didn't set, so it uses a default "rte" prefix), and if there aren't, it removes the hugepage files. That way, if the pages are still in use (e.g. by a secondary process), they aren't removed, but if they aren't used, then they are freed, and reserved back.

That is, technically, DPDK never "frees" any pages (unless you don't supply the -m/--socket-mem switch, in which case it does free unused pages, but still leaves used pages behind after exit), so after a DPDK process exits they're not cleaned up. However, whenever a primary DPDK process runs again, it is usually able to clean them up and should thus be able to initialize again. Perhaps something is preventing file removal from your hugetlbfs? Like, maybe a permissions issue or something?

Thanks,
Anatoly
  

Patch

diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index 46f951f4d..8b8e75c4f 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -658,7 +658,7 @@  vfio_spapr_dma_map(int vfio_container_fd)
 {
 	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
 	int i, ret;
-
+	phys_addr_t io_offset;
 	struct vfio_iommu_spapr_register_memory reg = {
 		.argsz = sizeof(reg),
 		.flags = 0
@@ -702,6 +702,13 @@  vfio_spapr_dma_map(int vfio_container_fd)
 		return -1;
 	}
 
+	io_offset = create.start_addr;
+	if (io_offset) {
+		RTE_LOG(ERR, EAL, "  DMA offsets other than zero is not supported, "
+				"new window is created at %lx\n", io_offset);
+		return -1;
+	}
+
 	/* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
 	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
 		struct vfio_iommu_type1_dma_map dma_map;
@@ -723,7 +730,7 @@  vfio_spapr_dma_map(int vfio_container_fd)
 		dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
 		dma_map.vaddr = ms[i].addr_64;
 		dma_map.size = ms[i].len;
-		dma_map.iova = ms[i].phys_addr;
+		dma_map.iova = io_offset;
 		dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
 				 VFIO_DMA_MAP_FLAG_WRITE;
 
@@ -735,6 +742,7 @@  vfio_spapr_dma_map(int vfio_container_fd)
 			return -1;
 		}
 
+		io_offset += dma_map.size;
 	}
 
 	return 0;