[dpdk-dev] RFC: Kunpeng DMA driver API design decision
fengchengwen
fengchengwen at huawei.com
Sat Jun 12 09:01:28 CEST 2021
Hi all,
We are preparing to support the Kunpeng DMA engine under the rawdev framework,
and have observed that there are two different styles of data-plane API:
1. rte_rawdev_enqueue/dequeue_buffers, which is implemented by the dpaa2_qdma
and octeontx2_dma drivers.
2. rte_ioat_enqueue_xxx/rte_ioat_completed_ops, which is implemented by the
ioat driver.
For the following reasons (mainly performance), we plan to implement an
ioat-like data-plane API (not identical; there will be some differences):
1. rte_rawdev_enqueue_buffers takes opaque buffer references whose layout is
vendor-specific, so the application must first translate its parameters into
the opaque format, and the driver must then decode that opaque data before
writing it to hardware; this extra translation step may hurt performance.
2. The rte_rawdev_xxx APIs provide no memory-barrier (doorbell) call, so
submission would have to be signalled through the opaque data (e.g. a flag on
every request), which introduces extra complexity.
Also, examples/ioat is currently used to compare DMA and CPU-memcpy
performance. Could we generalize it so that it supports multiple vendors?
I don't know whether the community would accept this kind of implementation,
so please provide feedback if you have any comments.
Best Regards.