[dpdk-dev] Question regarding mempool changes impact to XEN PMD

Christian Ehrhardt christian.ehrhardt at canonical.com
Mon Jun 13 10:30:43 CEST 2016


On Mon, Jun 13, 2016 at 10:14 AM, Olivier Matz <olivier.matz at 6wind.com>
wrote:

> Hi Christian,
>
> On 06/13/2016 09:34 AM, Christian Ehrhardt wrote:
> > Hi David,
>
> I guess this mail is for me, not for David :)
>

Absolutely yes, sorry to both of you; I probably read too many patch
headers this morning :-)


> > it seems to be the first time I have compiled with
> > CONFIG_RTE_LIBRTE_PMD_XENVIRT=y since the bigger mempool changes around
> > "587d684d doc: update release notes about mempool allocation".
> >
> > I've seen a related patch touching mempool/xen in that regard,
> > "c042ba20 mempool: rework support of Xen dom0".
> >
> > But with the above config symbol enabled I got:
> > drivers/net/xenvirt/rte_xen_lib.c: In function ‘grant_gntalloc_mbuf_pool’:
> > drivers/net/xenvirt/rte_xen_lib.c:440:69: error: ‘struct rte_mempool’ has no member named ‘elt_va_start’
> >   if (snprintf(val_str, sizeof(val_str), "%"PRIxPTR, (uintptr_t)mpool->elt_va_start) == -1)
> >   SYMLINK-FILE include/rte_eth_bond.h
> > mk/internal/rte.compile-pre.mk:126: recipe for target 'rte_xen_lib.o' failed
> > make[4]: *** [rte_xen_lib.o] Error 1
> > make[4]: *** Waiting for unfinished jobs....
> >
> > The change around the mempools is complex, so I can't tell at first
> > glance whether that needs a minor or major rework in the xen sources.
> > I mean I don't just want it to compile, but to work, and that could be
> > more than just fixing that changed structure :-)
> >
> > So I wanted to ask whether you, as the author, can tell if it is a
> > trivial change that needs to be made?
>
> Sorry, I missed this reference to elt_va_start in my patches.
>
> I'm not very familiar with the xen code in dpdk, but from what I see:
>
> - in the PMD, grant_gntalloc_mbuf_pool() stores the mempool virtual
>   address in the xen key/value database
> - in examples/vhost_xen, the function parse_mpool_va() retrieves it
> - this address is used in new_device()
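>
> The value exchanged is just the pointer printed as a hex string; here
> is a small self-contained illustration of that round trip (the helper
> names are made up for the example, this is not the actual xen code):
>
>   #include <inttypes.h>
>   #include <stdint.h>
>   #include <stdio.h>
>   #include <stdlib.h>
>
>   /* store_va() mimics what grant_gntalloc_mbuf_pool() writes into the
>    * xen key/value database; load_va() mimics what parse_mpool_va()
>    * recovers from it on the other side. */
>   static int store_va(char *buf, size_t len, const void *va)
>   {
>           return snprintf(buf, len, "%" PRIxPTR, (uintptr_t)va);
>   }
>
>   static void *load_va(const char *buf)
>   {
>           return (void *)(uintptr_t)strtoull(buf, NULL, 16);
>   }
>
>   int main(void)
>   {
>           char val_str[64];
>           int obj;
>
>           store_va(val_str, sizeof(val_str), &obj);
>           printf("stored \"%s\", recovered %p\n",
>                  val_str, load_va(val_str));
>           return 0;
>   }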
>
> I think the patch would be quite similar to what I did in the mlx
> drivers in this commit:
>
> http://dpdk.org/browse/dpdk/commit/?id=84121f1971873c9f45b2939c316c66126d8754a1
>
> or in librte_kni in this commit:
>
> http://dpdk.org/browse/dpdk/commit?id=d1d914ebbc2514f334a3ed24057e63c8bb76363d
>
> To be more precise:
>
> - before the patchset, mp->elt_va_start was the virtual address of the
>   mempool objects table. It was always virtually contiguous
>
> - now, a mempool can be fragmented into several virtually contiguous
>   chunks. In case there is only one chunk, it can safely be replaced
>   by STAILQ_FIRST(&mp->mem_list)->addr (= the virtual address of the
>   first chunk).
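>
> Concretely, for the line that fails above, a sketch of the single-chunk
> replacement in grant_gntalloc_mbuf_pool() (just an illustration of the
> idea, not the actual draft patch):
>
>   /* before the rework (no longer compiles): */
>   if (snprintf(val_str, sizeof(val_str), "%"PRIxPTR,
>           (uintptr_t)mpool->elt_va_start) == -1)
>           return -1;
>
>   /* after, assuming the mempool has exactly one chunk: */
>   if (snprintf(val_str, sizeof(val_str), "%"PRIxPTR,
>           (uintptr_t)STAILQ_FIRST(&mpool->mem_list)->addr) == -1)
>           return -1;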
>
> In case there are more chunks in the mempool, it would require deeper
> modifications I think. But we should keep in mind that having a
> virtually fragmented mempool was not possible before the patchset
> (it would have failed at init). If it fails later in the xen code
> because xen does not support fragmented mempools, there is no
> regression compared to what we had before.
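>
> A defensive check could make that failure explicit up front; again
> just a sketch:
>
>   /* xen needs the pool in one virtually contiguous chunk */
>   if (mpool->nb_mem_chunks != 1) {
>           RTE_LOG(ERR, PMD,
>                   "xenvirt: fragmented mempool not supported\n");
>           return -1;
>   }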
>

Ack to that; I only cared about the regression, and I think you covered
that excellently.
Making fragmented pools work would be a task for someone who "cares"
enough to use them.


>
> I'll send a draft patch; if you could give it a try, that would be great!
>

I can compile and review the patch, but I don't have a setup to actually
run it.
Maybe someone else on the list has one; please feel encouraged to give
it a try.


> Thanks for reporting,
> Olivier
>
>

