[dts] [PATCH v1] test_plans/vhost_cbdma_test_plan: add one cbdma performance case

Tu, Lijuan lijuan.tu at intel.com
Wed Jun 9 10:26:34 CEST 2021



> -----Original Message-----
> From: dts <dts-bounces at dpdk.org> On Behalf Of Yinan Wang
> Sent: 2021年6月9日 19:47
> To: dts at dpdk.org
> Cc: Wang, Yinan <yinan.wang at intel.com>
> Subject: [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan: add one cbdma
> performance case
> 

I see two major changes in your patch: one adds a test case, but it also modifies other cases that are not mentioned in your commit message. Please refine it.

> Signed-off-by: Yinan Wang <yinan.wang at intel.com>
> ---
>  test_plans/vhost_cbdma_test_plan.rst | 65 +++++++++++++++++++++++++++-
>  1 file changed, 63 insertions(+), 2 deletions(-)
> 
> diff --git a/test_plans/vhost_cbdma_test_plan.rst
> b/test_plans/vhost_cbdma_test_plan.rst
> index c827adaa..ce0fdc3e 100644
> --- a/test_plans/vhost_cbdma_test_plan.rst
> +++ b/test_plans/vhost_cbdma_test_plan.rst
> @@ -73,7 +73,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
> 
>  1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below
> command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0 at 80:04.0],dmathr=1024' \
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0 at 80:01.0],dmathr=1024' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024

Why change the PCI address?

>      >set fwd mac
>      >start
> @@ -145,7 +145,7 @@ Test Case2: Split ring dynamic queue number test for DMA-accelerated vhost Tx op
>      >set fwd mac
>      >start
> 
> -3. Send packets with packet size [64,1518] from packet generator with random ip, check performance can get target.
> +3. Send imix packets from packet generator with random ip, check performance can get target.
> 
>  4. Stop vhost port, check vhost RX and TX direction both exist packets in two queues from vhost log.
> 
> @@ -355,3 +355,64 @@ Test Case5: Packed ring dynamic queue number test for DMA-accelerated vhost Tx o
>       >start
> 
>  11. Stop vhost port, check vhost RX and TX direction both exist packets in two queues from vhost log.
> +
> +Test Case 6: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy
> +============================================================================================
> +
> +CPU copy means vhost enqueue w/o cbdma channel; CBDMA copy needs vhost enqueue with cbdma channel using parameter 'dmas'; Sync copy needs vhost enqueue with cbdma channel, but the threshold (which can be adjusted by changing the value of f.async_threshold in dpdk code) is larger than the forwarding packet length.
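The three copy modes described in the quoted paragraph can be summarized with a small illustrative sketch. This is a simplification for readability, not actual DPDK code, and the exact comparison against the threshold (inclusive vs. exclusive) is an assumption here:

```python
def copy_mode(pkt_len, has_cbdma_channel, async_threshold):
    """Illustrative decision logic for the three copy modes.

    CPU copy:   no CBDMA channel configured (no 'dmas' parameter).
    CBDMA copy: channel configured and the packet is at least as
                long as the threshold, so the DMA engine copies it.
    Sync copy:  channel configured but the packet is shorter than
                the threshold, so vhost falls back to a CPU copy.
    """
    if not has_cbdma_channel:
        return "cpu"
    return "cbdma" if pkt_len >= async_threshold else "sync"

# With dmathr=1024: 64B packets fall back to sync copy,
# while 1518B packets go through the CBDMA engine.
print(copy_mode(64, True, 1024))     # sync
print(copy_mode(1518, True, 1024))   # cbdma
print(copy_mode(64, False, 1024))    # cpu
```

This matches the measurement plan below: with dmathr=1024 the 64B run measures sync copy and the 1518B run measures CBDMA copy, while raising the threshold to 2000 forces the 1518B run into sync copy as well.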
> +
> +1. Bind one cbdma port and one nic port which are on the same numa node to igb_uio, then launch vhost by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0 at 00:01.0],dmathr=1024' \
> +    -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +2. Launch virtio-user with inorder mergeable path::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
> +    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +3. Send packets with 64b and 1518b separately from packet generator, record the throughput as sync copy throughput for 64b and cbdma copy for 1518b::
> +
> +    testpmd>show port stats all
> +
> +4. Quit vhost side, relaunch with below cmd::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0 at 00:01.0],dmathr=2000' \
> +    -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +5. Send packets with 1518b from packet generator, record the throughput as sync copy throughput for 1518b::
> +
> +    testpmd>show port stats all
> +
> +6. Quit two testpmd, relaunch vhost by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
> +    -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +7. Launch virtio-user with inorder mergeable path::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
> +    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +8. Send packets with 64b from packet generator, record the throughput as cpu copy for 64b::
> +
> +    testpmd>show port stats all
> +
> +9. Check performance can meet below requirement::
> +
> +   (1) CPU copy vs. sync copy delta < 10% for 64B packet size
> +   (2) CBDMA copy vs. sync copy delta > 5% for 1518B packet size
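The pass/fail check in step 9 can be sketched as a relative-delta calculation over the throughput values recorded in steps 3, 5 and 8. The figures below are made-up placeholders; real values come from `show port stats all`:

```python
def delta_pct(measured, baseline):
    # Relative difference of a measured throughput vs. a baseline, in percent.
    return abs(measured - baseline) / baseline * 100.0

# Placeholder throughput figures in Mpps (illustrative only):
cpu_64b, sync_64b = 7.4, 7.1        # steps 8 and 3
cbdma_1518b, sync_1518b = 3.2, 2.9  # steps 3 and 5

# Requirement (1): CPU copy vs. sync copy within 10% at 64B.
assert delta_pct(cpu_64b, sync_64b) < 10
# Requirement (2): CBDMA copy beats sync copy by more than 5% at 1518B.
assert delta_pct(cbdma_1518b, sync_1518b) > 5
```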
> --
> 2.25.1


