[dpdk-dev] [RFC 0/7] PMD driver for AF_XDP

Zhang, Qi Z qi.z.zhang at intel.com
Thu Mar 1 13:56:49 CET 2018



> -----Original Message-----
> From: Jason Wang [mailto:jasowang at redhat.com]
> Sent: Thursday, March 1, 2018 3:46 PM
> To: Zhang, Qi Z <qi.z.zhang at intel.com>; dev at dpdk.org
> Cc: Karlsson, Magnus <magnus.karlsson at intel.com>; Topel, Bjorn
> <bjorn.topel at intel.com>
> Subject: Re: [dpdk-dev] [RFC 0/7] PMD driver for AF_XDP
> 
> 
> 
> On 2018年03月01日 12:20, Zhang, Qi Z wrote:
> > +Magnus, since there was a typo in the email address in my first batch.
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Zhang, Qi Z
> >> Sent: Thursday, March 1, 2018 12:19 PM
> >> To: Jason Wang <jasowang at redhat.com>; dev at dpdk.org
> >> Cc: magnus.karlsson at intei.com; Topel, Bjorn <bjorn.topel at intel.com>
> >> Subject: Re: [dpdk-dev] [RFC 0/7] PMD driver for AF_XDP
> >>
> >>
> >>
> >>> -----Original Message-----
> >>> From: Jason Wang [mailto:jasowang at redhat.com]
> >>> Sent: Thursday, March 1, 2018 10:52 AM
> >>> To: Zhang, Qi Z <qi.z.zhang at intel.com>; dev at dpdk.org
> >>> Cc: magnus.karlsson at intei.com; Topel, Bjorn <bjorn.topel at intel.com>
> >>> Subject: Re: [dpdk-dev] [RFC 0/7] PMD driver for AF_XDP
> >>>
> >>>
> >>>
> >>> On 2018年02月27日 17:32, Qi Zhang wrote:
> >>>> The RFC patches add a new PMD driver for AF_XDP, a proposed
> >>>> faster version of the AF_PACKET interface in Linux; see the links
> >>>> below for an introduction to AF_XDP:
> >>>> https://fosdem.org/2018/schedule/event/af_xdp/
> >>>> https://lwn.net/Articles/745934/
> >>>>
> >>>> This patchset is based on v18.02.
> >>>> It also requires a Linux kernel with the AF_XDP RFC patches below
> >>>> applied.
> >>>> https://patchwork.ozlabs.org/patch/867961/
> >>>> https://patchwork.ozlabs.org/patch/867960/
> >>>> https://patchwork.ozlabs.org/patch/867938/
> >>>> https://patchwork.ozlabs.org/patch/867939/
> >>>> https://patchwork.ozlabs.org/patch/867940/
> >>>> https://patchwork.ozlabs.org/patch/867941/
> >>>> https://patchwork.ozlabs.org/patch/867942/
> >>>> https://patchwork.ozlabs.org/patch/867943/
> >>>> https://patchwork.ozlabs.org/patch/867944/
> >>>> https://patchwork.ozlabs.org/patch/867945/
> >>>> https://patchwork.ozlabs.org/patch/867946/
> >>>> https://patchwork.ozlabs.org/patch/867947/
> >>>> https://patchwork.ozlabs.org/patch/867948/
> >>>> https://patchwork.ozlabs.org/patch/867949/
> >>>> https://patchwork.ozlabs.org/patch/867950/
> >>>> https://patchwork.ozlabs.org/patch/867951/
> >>>> https://patchwork.ozlabs.org/patch/867952/
> >>>> https://patchwork.ozlabs.org/patch/867953/
> >>>> https://patchwork.ozlabs.org/patch/867954/
> >>>> https://patchwork.ozlabs.org/patch/867955/
> >>>> https://patchwork.ozlabs.org/patch/867956/
> >>>> https://patchwork.ozlabs.org/patch/867957/
> >>>> https://patchwork.ozlabs.org/patch/867958/
> >>>> https://patchwork.ozlabs.org/patch/867959/
> >>>>
> >>>> There is no clean upstream target yet since the kernel patches are
> >>>> still in the RFC stage. The purpose of this patchset is to let
> >>>> anyone evaluate AF_XDP with a DPDK application and to gather
> >>>> feedback for further improvement.
> >>>>
> >>>> To try the new PMD:
> >>>> 1. Compile and install the kernel with the above patches applied.
> >>>> 2. Configure $LINUX_HEADER_DIR (the "make headers_install" output
> >>>>    directory) and $TOOLS_DIR (the <kernel_src>/tools directory) in
> >>>>    driver/net/af_xdp/Makefile before compiling DPDK.
> >>>> 3. Make sure libelf and libbpf are installed.
> >>>>
> >>>> BTW, performance testing shows our PMD can reach 94%~98% of the
> >>>> original benchmark when shared memory is enabled.
> >>> Hi:
> >>>
> >>> Looks like zero copy is not used in this series. Any plan to support that?
> >> Zero copy is enabled in patch 5: if a mempool passes check_mempool,
> >> it will be registered to the AF_XDP socket, so there will be no
> >> memcpy between the mbuf and AF_XDP.
> 
> Aha, I see. So zero copy is limited to some specific use cases. And if
> I understand correctly, ZC mode cannot be used for a VM.

I think that, aside from the limitation on the mempool layout, zero copy is transparent to the DPDK application; the only difference is performance.
Sorry, I may be missing your point; could you explain more about the VM usage?
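
To illustrate what I mean by the mempool layout limitation, below is a rough sketch of the kind of check that could decide between zero-copy and copy mode (hypothetical names and constraints, not the actual check_mempool code from patch 5): the idea is that zero copy only applies when the whole mempool sits in memory that can be registered with the AF_XDP socket as the packet buffer area, and the PMD falls back to copying otherwise, so the application itself does not change.

#include <stdint.h>
#include <unistd.h>
#include <rte_mempool.h>

/*
 * Illustrative only: NOT the code from patch 5. It assumes zero copy
 * requires a mempool backed by a single, page-aligned memory chunk so
 * the whole area can be handed to the AF_XDP socket.
 */
static int
af_xdp_umem_check_mempool(struct rte_mempool *mp)
{
	struct rte_mempool_memhdr *memhdr;
	size_t page_sz = (size_t)getpagesize();

	/* One contiguous memory chunk only (assumption). */
	if (mp->nb_mem_chunks != 1)
		return -1;

	memhdr = STAILQ_FIRST(&mp->mem_list);
	if (memhdr == NULL)
		return -1;

	/* The chunk must start on a page boundary (assumption). */
	if ((uintptr_t)memhdr->addr & (page_sz - 1))
		return -1;

	return 0; /* looks usable for zero copy */
}

If a check like this fails, the PMD would simply stay on the memcpy path, which is why only the performance differs from the application's point of view.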

Regards
Qi
> 
> Thanks
> 
> >>> If not, what's the advantage compared to vhost-net + tap +
> >>> XDP_REDIRECT?
> >>>
> >>> Have you measured l2fwd performance in this case? I believe the
> >>> number you refer to here is rxdrop (XDP_DRV), which is 11.6 Mpps.
> >> Actually, we measured the performance of rxonly / txonly / l2fwd on
> >> i40e with XDP_SKB and XDP_DRV_ZC.
> >>
> >> Regards
> >> Qi
> >>
> >>> Thanks


