[dpdk-dev] [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?

O'Driscoll, Tim tim.odriscoll at intel.com
Wed Mar 15 05:10:56 CET 2017


I've included a couple of specific comments inline below, and a general comment here.

We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the MAINTAINERS file, along with the relevant documentation changes. I haven't seen any negative comments on the patches themselves, except for a request from John McNamara for a Release Notes update, which was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.

I'd suggest that this is a good topic for the next Tech Board meeting.


> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Vincent JARDIN
> 
> Allain,
> 
> see inline,
> Also, I restored the thread from
> http://dpdk.org/ml/archives/dev/2017-March/060087.html into this same
> email.
> 
> To make it short: by using ivshmem, you keep people's focus away from virtio.

I agree with the desire to have virtio as the preferred solution. I think the way to do that is by promoting the benefits of a standard solution and continually improving the performance, as we are doing. I don't think it's a reason to reject alternative solutions though.

> > Vincent,
> > Perhaps you can help me understand why the performance or functionality
> > of AVP vs. virtio is relevant to the decision of accepting this driver.
> > There are many drivers in the DPDK, most of which provide the same
> > functionality at comparable performance rates. AVP is just another such
> > driver. The fact that it is virtual rather than physical should not, in
> > my opinion, influence the decision of accepting this driver.
> 
> No, it is a different logic: your driver is about qemu vNIC support.
> Other PMDs are here to support vendors and different hypervisors. For
> instance, in the case of vmxnet3, different versions, evolutions and
> optimisations can be managed inside vmxnet3 itself. We need to avoid a
> proliferation of PMDs for the same hypervisor when there is already an
> efficient solution: virtio.
> 
> > On the other hand, code quality/complexity or lack of a maintainer are
> > reasonable grounds for rejection. If our driver is accepted, we are
> > committed to maintaining it and to testing changes as required by any
> > driver framework changes which may impact all drivers.
> 
> But then you are diverting your own capabilities. As a community, we
> need to be sure that we are all focused on improving existing solutions.
> Since virtio is that solution, I would rather see more people working on
> improving the virtio community instead of everyone losing focus.
> 
> >
> > Along the same lines, I do not understand why upstreaming AVP into the
> > Linux kernel or qemu/kvm should be a prerequisite for inclusion in the
> > DPDK. Continuing my analogy from above, the AVP device is a commercial
> > offering tied to the Wind River Systems Titanium product line. It
> > enables virtualized DPDK applications and increases DPDK adoption.
> > Similarly to how a driver from company XYZ is tied to a commercial NIC
> > that must be purchased by a customer, our AVP device is available to
> > operators that choose to leverage our Titanium product to implement
> > their Cloud solutions. It is not our intention to upstream the qemu/kvm
> > or host vswitch portion of the AVP device. Our qemu/kvm extensions are
> > GPL, so they are available to our customers if they desire to rebuild
> > qemu/kvm with their own proprietary extensions.
> 
> That was a solution before 2013, but now it is 2017 and vhost-user is
> here; show them the new, proper path. Let's stay focused, please.
> 
> > Our AVP device was implemented in 2013 in response to the challenge of
> > lower than required performance of qemu/virtio, in both user space and
> > DPDK applications in the VM. Rather than making complex changes to
> > qemu/virtio and continuously having to forward-port them as we upgraded
> > to newer versions of qemu, we decided to decouple ourselves from that
> > code base. We developed the AVP device as an evolution of KNI+ivshmem,
> > enhancing both with features that would meet the needs of our
> > customers: better performance, multi-queue support, live-migration
> > support, and hot-plug support. As I said in my earlier response, since
> > 2013 qemu/virtio has seen improved performance with the introduction
> > of vhost-user. However, vhost-user has still not achieved performance
> > levels equal to our AVP PMD.
> 
> Frankly, I was in the same boat as you: I strongly pitched the use of
> ivshmem for vNICs because of its many benefits. But the Red Hat folks
> pushed back and asked everyone to focus on virtio. So, back in 2013, I
> would have supported your AVP approach, but now it is 2017 and we must
> stay focused on a single, proper qemu solution => virtio/vhost.
> 
> >
> > I acknowledge that the AVP driver could exist as an out-of-tree driver
> > loaded as a shared library at runtime. In fact, two years ago we
> > released our driver source on github for this very reason. We provide
> > instructions and support for building the AVP PMD as a shared library.
> > Some customers have adopted this method, while many insist on an
> > in-tree driver for several reasons.
> >
> > Most importantly, they want to eliminate the burden of needing to
> > build and support an additional package in their product. An in-tree
> > driver would eliminate the need for a separate build/packaging
> > process. They also want an option that allows them to develop directly
> > on the bleeding edge of DPDK rather than waiting on us to update our
> > out-of-tree driver based on stable releases of the DPDK. In this
> > regard, an in-tree driver would allow our customers to work directly
> > on the latest DPDK.
> 
> Make them move to virtio; then this pain will disappear.
> 
> > An in-tree driver provides obvious benefits to our customers, but keep
> > in mind that this also benefits the DPDK. If a customer must develop
> > on a stable release because they must use an out-of-tree driver, then
> > they are less likely to contribute fixes/enhancements/testing
> > upstream. I know this first hand because I work with software from
> > different sources on a daily basis, and it is a significant burden to
> > have to reproduce and test fixes on master when you build and ship on
> > an older stable release. Accepting this driver would increase the
> > potential pool of developers available for contributions and reviews.
> 
> So, again, you are making a case for shifting focus away from virtio.
> 
> > Again, we are committed to contributing to the DPDK community by
> > supporting our driver and upstreaming other fixes/enhancements we
> > develop along the way. We feel that if the DPDK is limited to only a
> > single virtual driver of any type, then choice and innovation are also
> > limited. In the end, if more variety and innovation increase DPDK
> > adoption, then this is a win for DPDK and everyone involved in the
> > project.
> 
> Frankly, demonstrate that AVP can be faster and perform better than
> virtio and I will support you strongly, since, back in 2013, I strongly
> believed in the innovation and agility of ivshmem. But right now you
> have not provided any relevant data; you have only proven the point
> that you are pulling the community's focus away from virtio four years
> later. We are not in 2013 anymore.
> 
> > This patch series submits an initial version of the AVP PMD from Wind
> > River Systems. The series includes shared header files, the driver
> > implementation, and changes to documentation files in support of this
> > new driver. The AVP driver is a shared-memory-based device. It is
> > intended to be used as a PMD within a virtual machine running on a
> > Wind River virtualization platform.
> > See: http://www.windriver.com/products/titanium-cloud/
> >
> > It enables optimized packet throughput without requiring any packet
> > processing in qemu. This allowed us to provide our customers with a
> > significant performance increase for both DPDK and non-DPDK
> > applications in the VM. Since our AVP implementation supports VM
> > live-migration, it is viewed as a better alternative to PCI
> > passthrough or PCI SR-IOV, because neither of those supports VM
> > live-migration without manual intervention or significant performance
> > penalties.
> >
> > Since the initial implementation of AVP devices, vhost-user has become
> > part of the qemu offering, with a significant performance increase
> > over the original virtio implementation. However, vhost-user still
> > does not achieve the level of performance that the AVP device can
> > provide to our customers for DPDK-based guests.
> >
> > A number of our customers have requested that we upstream the driver
> > to dpdk.org.
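For context on what "used as a PMD within a virtual machine" means in practice: an AVP port is driven through DPDK's generic ethdev API like any other PMD. Below is a minimal, hypothetical sketch of a guest-side receive loop against 17.05-era APIs; the port number and queue sizes are illustrative and nothing here is AVP-specific:

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    int
    main(int argc, char **argv)
    {
            struct rte_eth_conf port_conf = {
                    .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN },
            };
            struct rte_mempool *mp;
            uint8_t port = 0; /* assume the AVP device probed as port 0 */

            if (rte_eal_init(argc, argv) < 0)
                    return -1;

            mp = rte_pktmbuf_pool_create("mbuf_pool", 8191, 256, 0,
                                         RTE_MBUF_DEFAULT_BUF_SIZE,
                                         rte_socket_id());
            if (mp == NULL)
                    return -1;

            /* One RX and one TX queue; the PMD hides the shared-memory
             * FIFOs behind the usual queue setup calls. */
            rte_eth_dev_configure(port, 1, 1, &port_conf);
            rte_eth_rx_queue_setup(port, 0, 512,
                                   rte_eth_dev_socket_id(port), NULL, mp);
            rte_eth_tx_queue_setup(port, 0, 512,
                                   rte_eth_dev_socket_id(port), NULL);
            rte_eth_dev_start(port);

            for (;;) {
                    struct rte_mbuf *bufs[32];
                    uint16_t i, nb = rte_eth_rx_burst(port, 0, bufs, 32);

                    for (i = 0; i < nb; i++)
                            rte_pktmbuf_free(bufs[i]); /* a real app would
                                                          process it */
            }
            return 0;
    }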
> >
> > v2:
> > * Fixed coding style violations that slipped in accidentally because
> >   of an out-of-date checkpatch.pl from an older kernel.
> >
> > v3:
> > * Updated 17.05 release notes to add a section for this new PMD
> > * Added additional info to the AVP nic guide document to clarify the
> >   benefit of using AVP over virtio.
> > * Fixed a spelling error in a debug log missed by the local
> >   checkpatch.pl version
> > * Split the transmit patch to separate the stats functions, as they
> >   accidentally got squashed in the last patchset.
> > * Fixed debug log strings so that they exceed 80 characters rather
> >   than span multiple lines.
> > * Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_*
> >   instead
> > * Replaced usage of RTE_WRITE32 and RTE_READ32 with
> >   rte_write32_relaxed and rte_read32_relaxed.
> > * Declared the rte_pci_id table as const (both of these last two
> >   changes are sketched below).
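As a reader's aid, the last two v3 items translate roughly to the following. This is a hypothetical sketch with made-up PCI IDs and register offsets, not the actual AVP values:

    #include <rte_io.h>
    #include <rte_pci.h>

    /* Illustrative IDs only; not the real AVP vendor/device IDs. */
    #define EXAMPLE_VENDOR_ID 0x1af4
    #define EXAMPLE_DEVICE_ID 0x1110

    /* Declaring the PCI ID table const moves it to read-only data. */
    static const struct rte_pci_id pci_id_example_map[] = {
            { RTE_PCI_DEVICE(EXAMPLE_VENDOR_ID, EXAMPLE_DEVICE_ID) },
            { .vendor_id = 0 }, /* sentinel */
    };

    /* rte_read32_relaxed()/rte_write32_relaxed() omit the memory
     * barriers that rte_read32()/rte_write32() imply, which is safe
     * when no ordering against other I/O accesses is required. */
    static inline uint32_t
    example_read_status(volatile void *registers)
    {
            return rte_read32_relaxed((volatile uint8_t *)registers + 0x10);
    }

    static inline void
    example_ack_interrupt(volatile void *registers)
    {
            rte_write32_relaxed(1, (volatile uint8_t *)registers + 0x14);
    }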
> >
> > v4:
> > * Split our interrupt handlers to a separate patch and moved them to
> >   the end of the series.
> > * Removed memset() from the stats_get API
> > * Removed usage of RTE_AVP_ALIGNMENT
> > * Removed unnecessary parentheses in rte_avp_common.h
> > * Removed an unneeded "goto unlock" where there are no statements in
> >   between the goto and the end of the function.
> > * Re-tested with pktgen and found that rte_eth_tx_burst() was being
> >   called with 0 packets even before starting traffic, which resulted
> >   in incrementing oerrors; fixed in the transmit patch (see the
> >   sketch below).
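The oerrors fix described in that last item presumably reduces to an early-return guard in the transmit handler. A hypothetical sketch; the function and structure names are assumptions, not the actual driver code:

    #include <stdint.h>
    #include <rte_branch_prediction.h>
    #include <rte_mbuf.h>

    /* Hypothetical per-queue stats; the real driver's layout differs. */
    struct example_tx_queue {
            uint64_t errors;
    };

    static uint16_t
    example_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
                      uint16_t nb_pkts)
    {
            struct example_tx_queue *txq = tx_queue;

            /* pktgen can call rte_eth_tx_burst() with nb_pkts == 0
             * before traffic starts; bail out early so an empty burst
             * is not counted against txq->errors (oerrors). */
            if (unlikely(nb_pkts == 0))
                    return 0;

            /* ... enqueue tx_pkts to the shared-memory FIFO here; this
             * stub pretends every packet was queued successfully. */
            (void)txq;
            (void)tx_pkts;
            return nb_pkts;
    }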
> >
> > Allain Legacy (17):
> >   config: added attributes for the AVP PMD
> >   net/avp: added public header files
> >   maintainers: claim responsibility for AVP PMD
> >   net/avp: added PMD version map file
> >   net/avp: added log macros
> >   drivers/net: added driver makefiles
> >   net/avp: driver registration
> >   net/avp: device initialization
> >   net/avp: device configuration
> >   net/avp: queue setup and release
> >   net/avp: packet receive functions
> >   net/avp: packet transmit functions
> >   net/avp: device statistics operations
> >   net/avp: device promiscuous functions
> >   net/avp: device start and stop operations
> >   net/avp: migration interrupt handling
> >   doc: added information related to the AVP PMD
> >
> >  MAINTAINERS                             |    6 +
> >  config/common_base                      |   10 +
> >  config/common_linuxapp                  |    1 +
> >  doc/guides/nics/avp.rst                 |   99 ++
> >  doc/guides/nics/features/avp.ini        |   17 +
> >  doc/guides/nics/index.rst               |    1 +
> >  drivers/net/Makefile                    |    1 +
> >  drivers/net/avp/Makefile                |   61 +
> >  drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
> >  drivers/net/avp/avp_logs.h              |   59 +
> >  drivers/net/avp/rte_avp_common.h        |  427 ++++++
> >  drivers/net/avp/rte_avp_fifo.h          |  157 ++
> >  drivers/net/avp/rte_pmd_avp_version.map |    4 +
> >  mk/rte.app.mk                           |    1 +
> >  14 files changed, 3215 insertions(+)
> >  create mode 100644 doc/guides/nics/avp.rst
> >  create mode 100644 doc/guides/nics/features/avp.ini
> >  create mode 100644 drivers/net/avp/Makefile
> >  create mode 100644 drivers/net/avp/avp_ethdev.c
> >  create mode 100644 drivers/net/avp/avp_logs.h
> >  create mode 100644 drivers/net/avp/rte_avp_common.h
> >  create mode 100644 drivers/net/avp/rte_avp_fifo.h
> >  create mode 100644 drivers/net/avp/rte_pmd_avp_version.map
> >
> 
> So, still a nack because:
>    - no performance data of AVP vs. virtio,

I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.

>    - 2013 is gone,
>    - it pulls focus away from virtio.
> 
> Best regards,
>    Vincent

