[dts][PATCH V3 1/2] test_plans/virtio_event_idx_interrupt_cbdma_test_plan: modify the dmas parameter for the DPDK-22.11 change

Wei Ling weix.ling at intel.com
Tue Nov 22 08:13:17 CET 2022


From DPDK-22.11, the dmas parameter has changed, so modify the dmas
parameter in the test plan.
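
As an illustration, the DMA-to-queue mapping moves from the testpmd --lcore-dma
option into the dmas list of the vhost vdev itself. A minimal before/after
sketch, using the example CBDMA address from the hunks below:

    # DPDK < 22.11: queues listed in dmas, DMA device assigned per lcore
    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore29@0000:00:04.0]

    # DPDK >= 22.11: each queue names its DMA device directly in dmas
    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024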

Signed-off-by: Wei Ling <weix.ling at intel.com>
---
 ...io_event_idx_interrupt_cbdma_test_plan.rst | 118 +++++++++---------
 1 file changed, 60 insertions(+), 58 deletions(-)
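
Each test case below first binds the NIC port and the CBDMA channels to
vfio-pci. A typical binding sketch (the CBDMA address is the example BDF used
in this plan; the NIC address is hypothetical):

    # load vfio-pci and bind the devices used by the test cases
    modprobe vfio-pci
    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:18:00.0   # example NIC port
    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:04.0   # CBDMA channel referenced in dmas=[...]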

diff --git a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
index 0926e052..d8694ad5 100644
--- a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
@@ -12,10 +12,11 @@ This feature is to suppress interrupts for performance improvement, need compare
 interrupt times with and without virtio event idx enabled. This test plan test
 virtio event idx interrupt with cbdma enable. Also need cover driver reload test.
 
-..Note:
-1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
-3.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+.. note::
+
+   1. For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1; packed ring multi-queues does not support reconnect in qemu yet.
+   2. For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, due to a reconnect issue in old qemu with multi-queues test.
+   3. A DPDK local patch for the vhost pmd is needed when testing the vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
@@ -53,27 +54,29 @@ General set up
 Test case
 =========
 
-Test Case1: Split ring virtio-pci driver reload test with CBDMA enable
-----------------------------------------------------------------------
-This case tests split ring event idx interrupt mode workable after reload virtio-pci driver several times when vhost uses the asynchronous operations with CBDMA channels.
+Test Case 1: Split ring virtio-pci driver reload test with CBDMA enable
+-----------------------------------------------------------------------
+This case tests that the split ring event idx interrupt mode is workable after
+reloading the virtio-pci driver several times when vhost uses asynchronous
+operations with CBDMA channels.
 
 1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore29@0000:00:04.0]
+    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd> start
 
 2. Launch VM::
 
-	taskset -c 32-33 \
-	qemu-system-x86_64 -name us-vhost-vm1 \
+	taskset -c 32-33 qemu-system-x86_64 -name vm1 \
 	-cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
-	-chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
+	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
+	-monitor unix:/tmp/vm2_monitor.sock,server,nowait \
+	-device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+	-chardev socket,id=char1,path=./vhost-net \
+	-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
 	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on  \
 	-vnc :11 -daemonize
 
@@ -95,30 +98,29 @@ This case tests split ring event idx interrupt mode workable after reload virtio
 
 6. Rerun step4 and step5 10 times to check event idx workable after driver reload.
 
-Test Case2: Split ring 16 queues virtio-net event idx interrupt mode test with cbdma enable
--------------------------------------------------------------------------------------------
-This case tests the split ring virtio-net event idx interrupt with 16 queues and when vhost uses the asynchronous operations with CBDMA channels.
+Test Case 2: Split ring 16 queues virtio-net event idx interrupt mode test with cbdma enable
+--------------------------------------------------------------------------------------------
+This case tests the split ring virtio-net event idx interrupt with 16 queues when
+vhost uses asynchronous operations with CBDMA channels.
 
-1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands::
+1. Bind one nic port and 4 cbdma channels to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
-    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 \
-    --lcore-dma=[lcore2@0000:00:04.0,lcore3@0000:00:04.1,lcore4@0000:00:04.2,lcore5@0000:00:04.3,lcore6@0000:00:04.4,lcore7@0000:00:04.5,lcore8@0000:00:04.6,lcore9@0000:00:04.7,\
-	lcore10@0000:80:04.0,lcore11@0000:80:04.1,lcore12@0000:80:04.2,lcore13@0000:80:04.3,lcore14@0000:80:04.4,lcore15@0000:80:04.5,lcore16@0000:80:04.6,lcore17@0000:80:04.7]
+    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \
+    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd> start
 
 2. Launch VM::
 
-	taskset -c 32-33 \
-	qemu-system-x86_64 -name us-vhost-vm1 \
+	taskset -c 32-33 qemu-system-x86_64 -name us-vhost-vm1 \
 	-cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
-	-chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on  \
+	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
+	-monitor unix:/tmp/vm2_monitor.sock,server,nowait \
+	-device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+	-chardev socket,id=char1,path=./vhost-net \
+	-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
+	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
 	-vnc :11 -daemonize
 
 3. On VM1, give virtio device IP and enable vitio-net with 16 quques::
@@ -136,28 +138,30 @@ This case tests the split ring virtio-net event idx interrupt with 16 queues and
     testpmd> start
     testpmd> stop
 
-Test Case3: Packed ring virtio-pci driver reload test with CBDMA enable
------------------------------------------------------------------------
-This case tests packed ring event idx interrupt mode workable after reload virtio-pci driver several times when uses the asynchronous operations with CBDMA channels.
+Test Case 3: Packed ring virtio-pci driver reload test with CBDMA enable
+------------------------------------------------------------------------
+This case tests that the packed ring event idx interrupt mode is workable after
+reloading the virtio-pci driver several times when vhost uses asynchronous
+operations with CBDMA channels.
 
 1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore29@0000:00:04.0]
+    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd> start
 
 2. Launch VM::
 
-	taskset -c 32-33 \
-	qemu-system-x86_64 -name us-vhost-vm1 \
+	taskset -c 32-33 qemu-system-x86_64 -name vm1 \
 	-cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
-	-chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on  \
+	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
+	-monitor unix:/tmp/vm2_monitor.sock,server,nowait \
+	-device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+	-chardev socket,id=char1,path=./vhost-net \
+	-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
+	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
 	-vnc :11 -daemonize
 
 3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
@@ -178,30 +182,29 @@ This case tests packed ring event idx interrupt mode workable after reload virti
 
 6. Rerun step4 and step5 10 times to check event idx workable after driver reload.
 
-Test Case4: Packed ring 16 queues virtio-net event idx interrupt mode test with cbdma enable
---------------------------------------------------------------------------------------------
-This case tests the packed ring virtio-net event idx interrupt with 16 queues and when vhost uses the asynchronous operations with CBDMA channels.
+Test Case 4: Packed ring 16 queues virtio-net event idx interrupt mode test with cbdma enable
+---------------------------------------------------------------------------------------------
+This case tests the packed ring virtio-net event idx interrupt with 16 queues when vhost
+uses asynchronous operations with CBDMA channels.
 
-1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands::
+1. Bind one nic port and 4 cbdma channels to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
-    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 \
-    --lcore-dma=[lcore2@0000:00:04.0,lcore3@0000:00:04.1,lcore4@0000:00:04.2,lcore5@0000:00:04.3,lcore6@0000:00:04.4,lcore7@0000:00:04.5,lcore8@0000:00:04.6,lcore9@0000:00:04.7,\
-	lcore10@0000:80:04.0,lcore11@0000:80:04.1,lcore12@0000:80:04.2,lcore13@0000:80:04.3,lcore14@0000:80:04.4,lcore15@0000:80:04.5,lcore15@0000:80:04.6,lcore15@0000:80:04.7]
+    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \
+    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd> start
 
 2. Launch VM::
 
-	taskset -c 32-33 \
-	qemu-system-x86_64 -name us-vhost-vm1 \
+	taskset -c 32-33 qemu-system-x86_64 -name vm1 \
 	-cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
-	-chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on  \
+	-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
+	-monitor unix:/tmp/vm2_monitor.sock,server,nowait \
+	-device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+	-chardev socket,id=char1,path=./vhost-net \
+	-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
+	-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
 	-vnc :11 -daemonize
 
 3. On VM1, configure virtio device IP and enable vitio-net with 16 quques::
@@ -218,4 +221,3 @@ This case tests the packed ring virtio-net event idx interrupt with 16 queues an
     testpmd> stop
     testpmd> start
     testpmd> stop
-
-- 
2.25.1


