[RFC] net/vhost: support asynchronous data path
Maxime Coquelin
maxime.coquelin@redhat.com
Mon Jan 2 11:58:33 CET 2023
Hi Yuan,
On 12/16/22 03:00, Yuan Wang wrote:
> Vhost asynchronous data path offloads packet copy from the CPU
> to the DMA engine. As a result, large packet copies can be accelerated
> by the DMA engine, and vhost can free CPU cycles for higher-level
> functions.
>
> In this patch, we enable the asynchronous data path for the vhost PMD.
> The asynchronous data path is enabled per tx/rx queue, and users need
> to specify the DMA device used by each queue. A tx/rx queue can only
> use one DMA device, but one DMA device can be shared among multiple
> tx/rx queues of different vhost PMD ports.
>
> Two PMD parameters are added:
> - dmas: specify the DMA device used by a tx/rx queue.
> (Default: no queues enable the asynchronous data path)
> - dma-ring-size: DMA ring size.
> (Default: 4096).
>
> Here is an example:
> --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=4096'
>
> Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
> ---
> drivers/net/vhost/meson.build | 1 +
> drivers/net/vhost/rte_eth_vhost.c | 512 ++++++++++++++++++++++++++++--
> drivers/net/vhost/rte_eth_vhost.h | 15 +
> drivers/net/vhost/version.map | 7 +
> drivers/net/vhost/vhost_testpmd.c | 67 ++++
> 5 files changed, 569 insertions(+), 33 deletions(-)
> create mode 100644 drivers/net/vhost/vhost_testpmd.c
>
This RFC is identical to the v5 that you sent for the last release, so
the comments I made on it still apply.
Was it intentionally re-sent?
Regards,
Maxime