[dpdk-dev] dpdk 16.07, issues with rte_mempool_create and rte_kni_alloc()

Gopakumar Choorakkot Edakkunni gopakumar.c.e at gmail.com
Thu Aug 25 16:19:05 CEST 2016


Thank you Ferruh, I will give this a spin over the weekend and let you know.

Rgds,
Gopa.

On Thu, Aug 25, 2016 at 6:51 AM, Ferruh Yigit <ferruh.yigit at intel.com>
wrote:

> On 8/10/2016 11:51 AM, Ferruh Yigit wrote:
> > Hi Gopakumar,
> >
> > On 8/4/2016 5:14 PM, Ferruh Yigit wrote:
> >> On 8/1/2016 10:19 PM, Gopakumar Choorakkot Edakkunni wrote:
> >>> Well, for my purpose I just ended up creating a separate/smaller pool
> >>> earlier during bootup to try to guarantee it's from one memseg.
> >>>
> >>> But I am assuming that this KNI restriction is something that's
> "currently"
> >>> not fixed and is "fixable"?
> >>
> >>
> >>> Any idea what the reason
> >>> for this restriction is? I was going to check if I can fix it.
> >>
> >> KNI expects all mbufs to come from physically contiguous memory. This is
> >> because of the current address translation implementation.
> >>
> >> mbufs are allocated in userspace and accessed from both user and kernel
> >> space, so an mbuf's userspace virtual address needs to be converted into
> >> a kernelspace virtual address.
> >>
> >> Currently this address translation is done by first calculating an offset
> >> between virtual addresses using the first field of the mempool, then applying
> >> the same offset to all mbufs. This is why all mbufs should be in physically
> >> contiguous memory.
> >>
> >> I think this address translation can be done in a different way that would
> >> remove the restriction, but I am not sure about the effect on
> >> performance. I will send a patch for this.
> >
> > I have sent a patch to remove KNI restriction:
> > http://dpdk.org/dev/patchwork/patch/15171/
> >
> > Can you please test this patch with a mempool with multiple memzones?
> > You need to manually remove the following check in KNI:
> >     if (mp->nb_mem_chunks != 1)
> >         goto kni_fail;
>
> Hi Gopakumar,
>
> Off the list.
>
> Any chance to test this?
>
> Thanks,
> ferruh
>
>
