[dpdk-dev] RFC: Kunpeng DMA driver API design decision
Thomas Monjalon
thomas at monjalon.net
Sat Jun 12 10:31:33 CEST 2021
12/06/2021 09:01, fengchengwen:
> Hi all,
>
> We are preparing to support the Kunpeng DMA engine under the rawdev framework,
> and have observed two different styles of data-plane API:
> 1. rte_rawdev_enqueue/dequeue_buffers, implemented by the dpaa2_qdma and
> octeontx2_dma drivers.
> 2. rte_ioat_enqueue_xxx/rte_ioat_completed_ops, implemented by the ioat
> driver.
>
> For the following reasons (mainly performance), we plan to implement a
> data-plane API similar to ioat's (not identical; there are some differences):
> 1. rte_rawdev_enqueue_buffers uses vendor-specific opaque buffer references,
> so the application must first translate its parameters into an opaque
> descriptor, which the driver then decodes and writes to hardware; this
> extra translation step may hurt performance.
> 2. The rte_rawdev_xxx APIs provide no memory-barrier primitive, so one would
> have to be added through opaque data (e.g. a flag on every request), which
> may introduce some complexity.
>
> Also, examples/ioat is currently used to compare DMA and CPU-memcpy
> performance; could we generalize it to support multiple vendors?
>
> We are not sure whether the community will accept this kind of
> implementation, so any comments or feedback are welcome.
I would love having a common generic API.
I would prefer having the drivers under a drivers/dma/ directory
rather than under rawdev.