[dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check

Alejandro Lucero alejandro.lucero at netronome.com
Tue Nov 6 10:32:04 CET 2018


On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly <anatoly.burakov at intel.com>
wrote:

> On 05-Nov-18 3:33 PM, Alejandro Lucero wrote:
> >
> >
> > On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly
> > <anatoly.burakov at intel.com> wrote:
> >
> >     On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
> >      > On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A
> >      > <wenjiex.a.li at intel.com> wrote:
> >      >
> >      >> 1. With GCC32, testpmd could not start up without '--iova-mode pa'.
> >      >> ./i686-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
> >      >> The output is:
> >      >> EAL: Detected 16 lcore(s)
> >      >> EAL: Detected 1 NUMA nodes
> >      >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >      >> EAL: Some devices want iova as va but pa will be used because.. EAL: few device bound to UIO
> >      >> EAL: No free hugepages reported in hugepages-1048576kB
> >      >> EAL: Probing VFIO support...
> >      >> EAL: VFIO support initialized
> >      >> EAL: wrong dma mask size 48 (Max: 31)
> >      >> EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask
> >      >> error allocating rte services array
> >      >> EAL: FATAL: rte_service_init() failed
> >      >> EAL: rte_service_init() failed
> >      >> PANIC in main():
> >      >> Cannot init EAL
> >      >> 5: [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda) [0x56606fda]]
> >      >> 4: [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6) [0xf74d1276]]
> >      >> 3: [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21) [0x565fcee1]]
> >      >> 2: [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d) [0x565edc68]]
> >      >> 1: [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33) [0x5675f333]]
> >      >> Aborted
> >      >>
> >      >> 2. With '--iova-mode pa', testpmd could start up.
> >      >> 3. With GCC64, there is no such issue.
> >      >> Thanks!
> >      >>
> >      >>
> >      > Does 32-bit support require an IOMMU? That would be a surprise. If
> >      > there is no IOMMU hardware, no DMA mask should be there at all.
> >
> >     IOMMU is supported on 32-bit, however limited the address space
> >     might be. Maybe limit the IOMMU width to RTE_MIN(31, value) bits for
> >     everything on 32-bit?
> >
> >
> > If the IOMMU is supported on 32-bit, then the DMA mask check should not be
> > happening. AFAIK, IOMMU hardware addressing limitations are a problem
> > only on 64-bit systems. The worst situation I have heard of is 39 bits
> > for a virtualized IOMMU with QEMU.
> >
> > I would prefer not to invoke rte_mem_set_dma_mask for 32-bit systems in
> > the Intel IOMMU case. The only other DMA mask client is the NFP PMD, and
> > there we do not support 32-bit systems.
> >
>
> I don't think skipping the DMA mask check is the right choice here. In
> practice it may be, but I'd rather the behavior be "correct", if at
> all possible :) It is theoretically possible to have an IOMMU with an
> addressing limitation of, say, 30 bits (even though such hardware doesn't
> exist in reality), so our code should handle it, should it encounter
> one, and it should also handle the "proper" ones correctly (as in, treat
> them as 32-bit-limited instead of 39- or 48-bit-limited).
>
>
Fine.

The problem is the current sanity check on the DMA mask width, which is
31 for 32-bit systems.
Should we just use a single maximum DMA mask width of 63? This covers the
possibility of a 32-bit system integrating an IOMMU designed for 64 bits. I
really doubt this is a real possibility on x86, although I can see it being
more likely in embedded systems, where this sort of hardware component
integration does happen.
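
To make that concrete, here is a rough sketch (not the actual EAL code) of
what a single 63-bit cap could look like. check_dma_mask_width() and
MAX_DMA_MASK_BITS are illustrative names only; the point is just that the
width sanity check rejects clearly bogus values instead of rejecting
anything above the host word size:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: one upper bound regardless of 32/64-bit build, so a
 * 32-bit system integrating an IOMMU designed for 64 bits still passes. */
#define MAX_DMA_MASK_BITS 63

/* Hypothetical helper mirroring the sanity check discussed above. */
static int
check_dma_mask_width(uint8_t maskbits)
{
        if (maskbits == 0 || maskbits > MAX_DMA_MASK_BITS) {
                fprintf(stderr, "wrong dma mask size %d (Max: %d)\n",
                        maskbits, MAX_DMA_MASK_BITS);
                return -1;
        }
        return 0;
}

int
main(void)
{
        /* A 48-bit mask, as reported by the Intel IOMMU, is accepted even
         * on a 32-bit build; widths above 63 are still rejected. */
        printf("48 bits: %d\n", check_dma_mask_width(48));
        printf("64 bits: %d\n", check_dma_mask_width(64));
        return 0;
}

The actual IOVA-vs-mask comparison can of course still fail later if the
addresses really exceed the hardware limit; only the width sanity check
stops rejecting a 48-bit mask on a 32-bit build.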

>
> >     --
> >     Thanks,
> >     Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>

