[dpdk-dev] kni: continuous memory restriction?

Ferruh Yigit ferruh.yigit at intel.com
Tue Mar 20 16:25:13 CET 2018


> 
> On 2018-03-13 22:57, "Ferruh Yigit" <ferruh.yigit at intel.com> wrote:
> 
> 
>     On 3/9/2018 12:14 PM, cys wrote:
>     > Commit 8451269e6d7ba7501723fe2efd0 said "remove continuous memory
>     restriction";
>     >
>     http://dpdk.org/browse/dpdk/commit/lib/librte_eal/linuxapp/kni/kni_net.c?id=8451269e6d7ba7501723fe2efd05745010295bac
>     > For chained mbufs (nb_segs > 1), the function va2pa uses the offset of the
>     previous mbuf to calculate the physical address of the next mbuf.
>     > So is it guaranteed anywhere that all mbufs have the same offset (buf_addr -
>     buf_physaddr)?
>     > Or have I misunderstood chained mbufs?
> 
>     Hi,
> 
>     Your description is correct: KNI chained mbufs are broken if the chained
>     mbufs come from different mempools.
> 
>     Two commits seem to be involved, in chronological order:
>     [1] d89a58dfe90b ("kni: support chained mbufs")
>     [2] 8451269e6d7b ("kni: remove continuous memory restriction")
> 
>     With the current implementation, the kernel needs to know the physical
>     address of an mbuf to be able to access it.
>     For chained mbufs, the first mbuf is fine, but for the rest the kernel side
>     gets only the virtual address of the mbuf, and translating it works only if
>     all chained mbufs come from the same mempool (the va2pa logic in question is
>     sketched below).
> 
>     I don't have a good solution indeed, but it is possible to:
>     a) If you are using chained mbufs, keep the old limitation of using a single
>     mempool
>     b) Serialize chained mbufs for KNI in userspace (a sketch of this appears at
>     the end of this message)
> 
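
For reference, the kernel-side translation being discussed looks roughly like
this (paraphrased from kni_net.c of that period; a simplified sketch, not the
verbatim source):

    /* Derive a segment's physical address from a virtual address using the
     * VA-PA offset of the mbuf 'm'. This is only correct when the segment
     * pointed to by 'va' shares that offset with 'm'. */
    static void *
    va2pa(void *va, struct rte_kni_mbuf *m)
    {
            return (void *)((unsigned long)va -
                            ((unsigned long)m->buf_addr -
                             (unsigned long)m->buf_physaddr));
    }

    /* When walking a chain, 'kva->next' holds a userspace virtual address;
     * its physical address is computed with the current segment's offset: */
    next_kva = pa2kva(va2pa(kva->next, kva));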

On 3/14/2018 12:35 AM, cys wrote:
> Thanks for your reply.
> With your solution a), I guess 'single mempool' means a mempool that fits in
> one memseg (contiguous memory).

Yes, I mean physically contiguous memory, i.e. a mempool allocated from a
single memseg; otherwise it has the same problem.
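
Put differently, every segment of the chain must have the same
(buf_addr - buf_physaddr) offset. A minimal userspace check, assuming the
mbuf layout of this era (with the buf_physaddr field) and a hypothetical
helper name:

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper: returns 1 if all segments of the chain share the
     * same VA-PA offset, i.e. the chain is safe for the kernel-side va2pa. */
    static int
    kni_chain_offsets_match(const struct rte_mbuf *m)
    {
            uintptr_t off = (uintptr_t)m->buf_addr -
                            (uintptr_t)m->buf_physaddr;

            for (m = m->next; m != NULL; m = m->next)
                    if ((uintptr_t)m->buf_addr -
                        (uintptr_t)m->buf_physaddr != off)
                            return 0; /* offsets differ: va2pa miscomputes */
            return 1;
    }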

> What about a mempool across many memsegs? I'm afraid it's still not safe.
> Just like this one:
> -------------- MEMPOOL ----------------
> mempool <mbuf_pool[0]>@0x7ff9e4833d00
>   flags=10
>   pool=0x7ff9fbfffe00
>   phys_addr=0xc4fc33d00
>   nb_mem_chunks=91
>   size=524288
>   populated_size=524288
>   header_size=64
>   elt_size=2432
>   trailer_size=0
>   total_obj_size=2496
>   private_data_size=64
>   avg bytes/object=2496.233643
>
> Zone 0: name:<rte_eth_dev_data>, phys:0xc4fdb7f40, len:0x34000,
> virt:0x7ff9e49b7f40, socket_id:0, flags:0
> Zone 1: name:<MP_mbuf_pool[0]>, phys:0xc4fc33d00, len:0x182100,
> virt:0x7ff9e4833d00, socket_id:0, flags:0
> Zone 2: name:<MP_mbuf_pool[0]_0>, phys:0xb22000080, len:0x16ffff40,
> virt:0x7ffa3a800080, socket_id:0, flags:0
> Zone 3: name:<RG_MP_mbuf_pool[0]>, phys:0xc199ffe00, len:0x800180,
> virt:0x7ff9fbfffe00, socket_id:0, flags:0
> Zone 4: name:<MP_mbuf_pool[0]_1>, phys:0xc29c00080, len:0x77fff40,
> virt:0x7ff9e5800080, socket_id:0, flags:0
> Zone 5: name:<MP_mbuf_pool[0]_2>, phys:0xc22c00080, len:0x67fff40,
> virt:0x7ff9ed200080, socket_id:0, flags:0
> Zone 6: name:<MP_mbuf_pool[0]_3>, phys:0xc1dc00080, len:0x3bfff40,
> virt:0x7ff9f4800080, socket_id:0, flags:0
> Zone 7: name:<MP_mbuf_pool[0]_4>, phys:0xc1bc00080, len:0x1bfff40,
> virt:0x7ff9f8600080, socket_id:0, flags:0
> Zone 8: name:<MP_mbuf_pool[0]_5>, phys:0xbf4600080, len:0xffff40,
> virt:0x7ffa1ea00080, socket_id:0, flags:0
> Zone 9: name:<MP_mbuf_pool[0]_6>, phys:0xc0e000080, len:0xdfff40,
> virt:0x7ffa06400080, socket_id:0, flags:0
> Zone 10: name:<MP_mbuf_pool[0]_7>, phys:0xbe0600080, len:0xdfff40,
> virt:0x7ffa32000080, socket_id:0, flags:0
> Zone 11: name:<MP_mbuf_pool[0]_8>, phys:0xc18000080, len:0xbfff40,
> virt:0x7ff9fd000080, socket_id:0, flags:0
> Zone 12: name:<MP_mbuf_pool[0]_9>, phys:0x65000080, len:0xbfff40,
> virt:0x7ffa54e00080, socket_id:0, flags:0
> Zone 13: name:<MP_mbuf_pool[0]_10>, phys:0xc12a00080, len:0x7fff40,
> virt:0x7ffa02200080, socket_id:0, flags:0
> Zone 14: name:<MP_mbuf_pool[0]_11>, phys:0xc0d600080, len:0x7fff40,
> virt:0x7ffa07400080, socket_id:0, flags:0
> Zone 15: name:<MP_mbuf_pool[0]_12>, phys:0xc06600080, len:0x7fff40,
> virt:0x7ffa0de00080, socket_id:0, flags:0
> ...
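
The memzone dump above already shows the problem: the per-zone VA-PA offsets
differ. For example, Zone 2 gives 0x7ffa3a800080 - 0xb22000080 =
0x7fef18800000, while Zone 4 gives 0x7ff9e5800080 - 0xc29c00080 =
0x7fedbbc00000, so two segments allocated from different chunks of this one
mempool would already defeat the kernel-side calculation. In that case option
(b) is the safer route; one way to sketch it, assuming a DPDK version that
has rte_pktmbuf_linearize() (added in 17.02) and a 'kni' handle set up
elsewhere:

    /* Sketch of option (b): flatten a multi-segment packet into its first
     * segment before handing it to KNI. rte_pktmbuf_linearize() returns a
     * negative value if the data does not fit in the first segment. */
    if (m->nb_segs > 1 && rte_pktmbuf_linearize(m) < 0) {
            rte_pktmbuf_free(m);  /* or handle the oversized packet otherwise */
    } else {
            nb_sent = rte_kni_tx_burst(kni, &m, 1);
    }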

