[dts] [PATCH] test_plans: fix doc build warnings

Marvin Liu yong.liu at intel.com
Tue Apr 24 13:18:18 CEST 2018


Signed-off-by: Marvin Liu <yong.liu@intel.com>
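
The fixes fall into two groups: literal blocks whose introducing "::" was
producing build warnings are rewritten with an explicit directive, and test
plans missing from the toctree in index.rst are added to it. As an
illustrative sketch (the packet line is taken from the GTP queue region test
plan), a console snippet can be introduced in either of these ReST forms:

    10. Send different src address, check pmd receives packet from same
        queue::

            p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
            IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)

or with the explicit Sphinx directive used in this patch:

    .. code-block:: console

        p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
        IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)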

diff --git a/test_plans/ddp_gtp_qregion_test_plan.rst b/test_plans/ddp_gtp_qregion_test_plan.rst
index 4a08e41..ced70ff 100644
--- a/test_plans/ddp_gtp_qregion_test_plan.rst
+++ b/test_plans/ddp_gtp_qregion_test_plan.rst
@@ -193,7 +193,9 @@ Test Case: Outer IPv6 dst controls GTP-C queue in queue region
     GTP_U_Header()/Raw('x'*20)
 	
 10. Send different outer src GTP-C packet, check pmd receives packet from 
-    same queue::
+    same queue
+
+.. code-block:: console
 
     p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002",
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/
@@ -405,7 +407,9 @@ Test Case: Inner IP src controls GTP-U IPv4 queue in queue region
     IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
 
 10. Send different dst GTP-U IPv4 packet, check pmd receives packet from same
-    queue::
+    queue
+
+.. code-block:: console
     
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
@@ -452,7 +456,9 @@ Test Case: Inner IP dst controls GTP-U IPv4 queue in queue region
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
 
-10. Send different src address, check pmd receives packet from same queue::
+10. Send different src address, check pmd receives packet from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
@@ -635,14 +641,14 @@ Test Case: Inner IPv6 src controls GTP-U IPv6 queue in queue region
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
 		
 10. Send different inner dst GTP-U IPv6 packet, check pmd receives packet 
-    from same queue::
+    from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001",
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002)/UDP()/Raw('x'*20)
-	
 
-	
 Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region
 =========================================================================
 1. Check flow type to pctype mapping::
@@ -693,7 +699,9 @@ Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
 
 10. Send different inner src GTP-U IPv6 packets, check pmd receives packet 
-    from same queue::
+    from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002",
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 40984b2..b2a7d28 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -120,6 +120,15 @@ The following are the test plans for the DPDK DTS automated test system.
     qinq_filter_test_plan
     ddp_gtp_test_plan
     generic_flow_api_test_plan
+    ddp_gtp_qregion_test_plan
+    interrupt_pmd_kvm_test_plan
+    ipingre_test_plan
+    multi_vm_test_plan
+    runtime_queue_number_test_plan
+    sriov_live_migration_test_plan
+    vhost_multi_queue_qemu_test_plan
+    vhost_qemu_mtu_test_plan
+    vlan_fm10k_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -150,3 +159,5 @@ The following are the test plans for the DPDK DTS automated test system.
     ptpclient_test_plan
     distributor_test_plan
     efd_test_plan
+    l2fwd_fork_test_plan
+    l3fwdacl_test_plan
diff --git a/test_plans/runtime_queue_number_test_plan.rst b/test_plans/runtime_queue_number_test_plan.rst
index f3353c9..fc07bb5 100644
--- a/test_plans/runtime_queue_number_test_plan.rst
+++ b/test_plans/runtime_queue_number_test_plan.rst
@@ -425,7 +425,9 @@ Test case: pass through VF to VM
  
 5. Bind VF to kernel driver i40evf, check the rxq and txq number.
    if set VF Max possible RX queues and TX queues to 2 by PF,
-   the VF rxq and txq number is 2::
+   the VF rxq and txq number is 2
+
+.. code-block:: console
 
     #ethtool -S eth0
     NIC statistics:
diff --git a/test_plans/sriov_live_migration_test_plan.rst b/test_plans/sriov_live_migration_test_plan.rst
index cd690b6..11d5997 100644
--- a/test_plans/sriov_live_migration_test_plan.rst
+++ b/test_plans/sriov_live_migration_test_plan.rst
@@ -1,289 +1,319 @@
-.. Copyright (c) <2016>, Intel Corporation
-      All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
-
-====================
-SRIOV live migration
-====================
-Qemu not support migrate a Virtual Machine which has an SR-IOV Virtual
-Function (VF). To get work around of this, bonding PMD and VirtIO is used.
-
-Prerequisites
--------------
-Connect three ports to one switch, these three ports are from Host, Backup
-host and tester.
-
-Start nfs service and export nfs to backup host IP:
-    host# service rpcbind start
-    host# service nfs start
-    host# cat /etc/exports
-    host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
-
-Make sure host nfsd module updated to v4 version(v2 not support file > 4G)
-
-Enable vhost pmd in configuration file and rebuild dpdk on host and backup host
-    CONFIG_RTE_LIBRTE_PMD_VHOST=y
-
-Create enough hugepages for testpmd and qemu backend memory.
-    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host# mount -t hugetlbfs hugetlbfs /mnt/huge
-
-Generate VF device with host port and backup host port
-    host# echo 1 > /sys/bus/pci/devices/0000\:01\:00.1/sriov_numvfs
-    backup# echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
-    
-Test Case 1: migrate with tap VirtIO
-====================================
-Start qemu on host server 
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 \
-          -no-reboot \
-          -drive file=/home/vm-image/vm0.img,format=raw \
-          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
-          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
-          -device pci-assign,host=01:10.1,id=vf1 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Bridge tap and PF device into one bridge
-    host# brctl addbr br0
-    host# brctl addif br0 tap1
-    host# brctl addif br0 $PF
-    host# ifconfig tap1 up
-    host# ifconfig $PF up
-    host# ifconfig br0 up
-
-Login into vm and bind VirtIO and VF device to igb_uio, then start testpmd
-    host# telnet localhost 5432
-    host vm# cd /root/dpdk
-    host vm# modprobe uio
-    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host vm# modprobe -r ixgbevf
-    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
-    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-
-Create bond device with VF and virtIO
-    testpmd> create bonded device 1 0
-    testpmd> add bonding slave 0 2
-    testpmd> add bonding slave 1 2
-    testpmd> set bonding primary 1 2
-    testpmd> port start 2
-    testpmd> set portlist 2
-    testpmd> show config fwd
-    testpmd> set fwd rxonly
-    testpmd> set verbose 1
-    testpmd> start
-
-Send packets from tester with bonding device's mac and check received
-    tester# scapy
-    tester# >>> VF="AA:BB:CC:DD:EE:FF"
-    tester# >>> sendp([Ether(dst=VF, src=get_if_hwaddr('p5p1')/IP()/UDP()/Raw('x' * 18)],
-                       iface='p5p1', loop=1, inter=1)
-
-Start qemu on backup server 
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc \
-          -smp 4 -cpu host -name VM1 \
-          -no-reboot \
-          -drive file=/mnt/nfs/vm0.img,format=raw \
-          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
-          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
-          -incoming tcp:0:4444 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Bridge tap and PF device into one bridge on backup server
-    backup# brctl addbr br0
-    backup# brctl addif br0 tap1
-    backup# brctl addif br0 $PF
-    backup# ifconfig tap1 up
-    backup# ifconfig $PF up
-    backup# ifconfig br0 up
-
-Before migration, remove VF device in host VM
-    testpmd> remove bonding slave 1 2
-    testpmd> port stop 1
-    testpmd> port close 1
-    testpmd> port detach 1
-
-Delete VF device in qemu monitor and then start migration
-    host# telnet localhost 3333
-        (qemu) device_del vf1
-        (qemu) migrate -d tcp:backup server ip:4444
-
-Check in migration process, still can receive packets
-
-After migration, check backup vm can receive packets
-
-After migration done, attached backup VF device
-    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
-
-Login backup VM and attach VF device
-    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
-    backup vm# testpmd> stop
-    backup vm# testpmd> port attach 0000:00:04.0
-
-Change backup VF mac address to same of host VF device
-    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
-    testpmd> port start 1
-    testpmd> add bonding slave 1 2
-    testpmd> set bonding primary 1 2
-    testpmd> show bonding config 2
-    testpmd> show port stats all
-
-Remove virtio device
-    testpmd> remove bonding slave 0 2
-    testpmd> show bonding config 2
-    testpmd> port stop 0
-    testpmd> port close 0
-    testpmd> port detach 0
-
-Check bonding device still can received packets
-
-
-Test Case 2: migrate with vhost user pmd
-========================================
-Start testpmd with vhost user pmd device on host
-    host# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
-          --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
-          --socket-mem 1024 -- -i
-
-Start qemu with vhost user on host
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm \
-          -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
-          -drive file=/home/vm-image/vm0.img,format=raw \
-          -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -chardev socket,id=char0,path=/root/dpdk/vhost-net \
-          -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-          -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
-          -device pci-assign,host=01:10.1,id=vf1 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Start testpmd on backup host with vhost user pmd device
-    backup# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
-            --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
-            --socket-mem 1024 -- -i
-
-Start qemu with vhost user on backup host
-    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-            -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-            -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
-            -drive file=/mnt/nfs/vm0.img,format=raw \
-            -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-            -chardev socket,id=char0,path=/root/dpdk/vhost-net \
-            -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-            -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
-            -monitor telnet::3333,server,nowait \
-            -serial telnet:localhost:5432,server,nowait \
-            -incoming tcp:0:4444 \
-            -daemonize
-
-Login into host vm, start testpmd with virtio and VF devices
-    host# telnet localhost 5432
-    host vm# cd /root/dpdk
-    host vm# modprobe uio
-    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host vm# modprobe -r ixgbevf
-    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
-    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-
-Check host vhost pmd connect with VM’s virtio device
-    host# testpmd> host testpmd message for connection
-
-Create bond device and then add VF and virtio devices into bonding device
-    host vm# testpmd> create bonded device 1 0
-    host vm# testpmd> add bonding slave 0 2
-    host vm# testpmd> add bonding slave 1 2
-    host vm# testpmd> set bonding primary 1 2
-    host vm# testpmd> port start 2
-    host vm# testpmd> set portlist 2
-    host vm# testpmd> show config fwd
-    host vm# testpmd> set fwd rxonly
-    host vm# testpmd> set verbose 1
-    host vm# testpmd> start
-
-Send packets matched bonding device’s mac from tester, check packets received
-by bonding device
-
-Before migration, removed VF device from bonding device. After that, bonding device
-can’t receive packets
-    host vm# testpmd> remove bonding slave 1 2
-    host vm# testpmd> port stop 1
-    host vm# testpmd> port close 1
-    host vm# testpmd> port detach 1
-
-Delete VF device in qemu monitor and then start migration
-    host# telnet localhost 3333
-    host# (qemu) device_del vf1
-    host# (qemu) migrate -d tcp:10.239.129.125:4444
-
-After migration done, add backup VF device into backup VM
-    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
-
-Login into backup VM and bind VF device to igb_uio
-    backup# ssh -p 5555 root at localhost
-    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
-
-Connect to backup VM serial port  and attach backup VF device
-    backup# telnet localhost 5432
-    backup vm# testpmd> port attach 0000:00:04.0
-
-Change backup VF mac address to match host VF device
-    backup vm# testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
-
-Add backup VF device into bonding device
-    backup vm# testpmd> port start 1
-    backup vm# testpmd> add bonding slave 1 2
-    backup vm# testpmd> set bonding primary 1 2
-    backup vm# testpmd> show bonding config 2
-    backup vm# testpmd> show port stats all
-
-Remove virtio device from backup bonding device
-    backup vm# testpmd> remove bonding slave 0 2
-    backup vm# testpmd> show bonding config 2
-    backup vm# testpmd> port stop 0
-    backup vm# testpmd> port close 0
-    backup vm# testpmd> port detach 0
-    backup vm# 
-
-Check still can receive packets matched VF mac address
-
+.. Copyright (c) <2016>, Intel Corporation
+      All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================
+SRIOV live migration
+====================
+Qemu does not support migrating a Virtual Machine which has an SR-IOV Virtual
+Function (VF). To work around this, the bonding PMD and VirtIO are used.
+
+Prerequisites
+-------------
+Connect three ports to one switch; these three ports are from the host, the
+backup host and the tester.
+
+Start nfs service and export nfs to backup host IP::
+
+    host# service rpcbind start
+    host# service nfs start
+    host# cat /etc/exports
+    host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
+
+Make sure the host nfsd module is updated to v4 (v2 does not support files > 4G)
+
+Enable vhost pmd in configuration file and rebuild dpdk on host and backup host
+    CONFIG_RTE_LIBRTE_PMD_VHOST=y
+
+Create enough hugepages for testpmd and qemu backend memory::
+
+    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+Generate VF device with host port and backup host port::
+
+    host# echo 1 > /sys/bus/pci/devices/0000\:01\:00.1/sriov_numvfs
+    backup# echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
+    
+Test Case 1: migrate with tap VirtIO
+====================================
+Start qemu on host server::
+
+    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 \
+          -no-reboot \
+          -drive file=/home/vm-image/vm0.img,format=raw \
+          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
+          -device pci-assign,host=01:10.1,id=vf1 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Bridge tap and PF device into one bridge::
+
+    host# brctl addbr br0
+    host# brctl addif br0 tap1
+    host# brctl addif br0 $PF
+    host# ifconfig tap1 up
+    host# ifconfig $PF up
+    host# ifconfig br0 up
+
+Log in to the VM and bind the VirtIO and VF devices to igb_uio, then start testpmd::
+
+    host# telnet localhost 5432
+    host vm# cd /root/dpdk
+    host vm# modprobe uio
+    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    host vm# modprobe -r ixgbevf
+    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
+    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Create bond device with VF and virtIO::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> port start 2
+    testpmd> set portlist 2
+    testpmd> show config fwd
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+Send packets from the tester with the bonding device's mac and check they are received::
+
+    tester# scapy
+    tester# >>> VF="AA:BB:CC:DD:EE:FF"
+    tester# >>> sendp([Ether(dst=VF, src=get_if_hwaddr('p5p1'))/IP()/UDP()/Raw('x' * 18)],
+                       iface='p5p1', loop=1, inter=1)
+
+Start qemu on backup server::
+
+    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc \
+          -smp 4 -cpu host -name VM1 \
+          -no-reboot \
+          -drive file=/mnt/nfs/vm0.img,format=raw \
+          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
+          -incoming tcp:0:4444 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Bridge tap and PF device into one bridge on backup server::
+
+    backup# brctl addbr br0
+    backup# brctl addif br0 tap1
+    backup# brctl addif br0 $PF
+    backup# ifconfig tap1 up
+    backup# ifconfig $PF up
+    backup# ifconfig br0 up
+
+Before migration, remove the VF device in the host VM::
+
+    testpmd> remove bonding slave 1 2
+    testpmd> port stop 1
+    testpmd> port close 1
+    testpmd> port detach 1
+
+Delete VF device in qemu monitor and then start migration::
+
+    host# telnet localhost 3333
+        (qemu) device_del vf1
+        (qemu) migrate -d tcp:backup server ip:4444
+
+Check that packets can still be received during the migration process
+
+After migration, check the backup VM can receive packets
+
+After migration is done, attach the backup VF device::
+
+    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
+
+Log in to the backup VM and attach the VF device::
+
+    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
+    backup vm# testpmd> stop
+    backup vm# testpmd> port attach 0000:00:04.0
+
+Change the backup VF mac address to the same as the host VF device::
+
+    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+    testpmd> port start 1
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> show bonding config 2
+    testpmd> show port stats all
+
+Remove virtio device::
+
+    testpmd> remove bonding slave 0 2
+    testpmd> show bonding config 2
+    testpmd> port stop 0
+    testpmd> port close 0
+    testpmd> port detach 0
+
+Check the bonding device can still receive packets
+
+
+Test Case 2: migrate with vhost user pmd
+========================================
+Start testpmd with vhost user pmd device on host::
+
+    host# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
+          --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
+          --socket-mem 1024 -- -i
+
+Start qemu with vhost user on host::
+
+    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm \
+          -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
+          -drive file=/home/vm-image/vm0.img,format=raw \
+          -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+          -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+          -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
+          -device pci-assign,host=01:10.1,id=vf1 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Start testpmd on backup host with vhost user pmd device::
+
+    backup# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
+            --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
+            --socket-mem 1024 -- -i
+
+Start qemu with vhost user on backup host::
+
+    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+            -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+            -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
+            -drive file=/mnt/nfs/vm0.img,format=raw \
+            -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+            -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+            -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+            -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
+            -monitor telnet::3333,server,nowait \
+            -serial telnet:localhost:5432,server,nowait \
+            -incoming tcp:0:4444 \
+            -daemonize
+
+Log in to the host VM and start testpmd with the virtio and VF devices::
+
+    host# telnet localhost 5432
+    host vm# cd /root/dpdk
+    host vm# modprobe uio
+    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    host vm# modprobe -r ixgbevf
+    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
+    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Check the host vhost pmd has connected with the VM's virtio device::
+
+    host# testpmd> host testpmd message for connection
+
+Create bond device and then add VF and virtio devices into bonding device::
+
+    host vm# testpmd> create bonded device 1 0
+    host vm# testpmd> add bonding slave 0 2
+    host vm# testpmd> add bonding slave 1 2
+    host vm# testpmd> set bonding primary 1 2
+    host vm# testpmd> port start 2
+    host vm# testpmd> set portlist 2
+    host vm# testpmd> show config fwd
+    host vm# testpmd> set fwd rxonly
+    host vm# testpmd> set verbose 1
+    host vm# testpmd> start
+
+Send packets matching the bonding device's mac from the tester, check the
+packets are received by the bonding device
+
+Before migration, remove the VF device from the bonding device. After that, the
+bonding device can't receive packets::
+
+    host vm# testpmd> remove bonding slave 1 2
+    host vm# testpmd> port stop 1
+    host vm# testpmd> port close 1
+    host vm# testpmd> port detach 1
+
+Delete VF device in qemu monitor and then start migration::
+
+    host# telnet localhost 3333
+    host# (qemu) device_del vf1
+    host# (qemu) migrate -d tcp:10.239.129.125:4444
+
+After migration is done, add the backup VF device into the backup VM::
+
+    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
+
+Log in to the backup VM and bind the VF device to igb_uio::
+
+    backup# ssh -p 5555 root@localhost
+    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
+
+Connect to the backup VM serial port and attach the backup VF device::
+
+    backup# telnet localhost 5432
+    backup vm# testpmd> port attach 0000:00:04.0
+
+Change backup VF mac address to match host VF device::
+
+    backup vm# testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+
+Add backup VF device into bonding device::
+
+    backup vm# testpmd> port start 1
+    backup vm# testpmd> add bonding slave 1 2
+    backup vm# testpmd> set bonding primary 1 2
+    backup vm# testpmd> show bonding config 2
+    backup vm# testpmd> show port stats all
+
+Remove virtio device from backup bonding device::
+
+    backup vm# testpmd> remove bonding slave 0 2
+    backup vm# testpmd> show bonding config 2
+    backup vm# testpmd> port stop 0
+    backup vm# testpmd> port close 0
+    backup vm# testpmd> port detach 0
+    backup vm# 
+
+Check packets matching the VF mac address can still be received
diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
index c2a7558..bb13a81 100644
--- a/test_plans/vhost_multi_queue_qemu_test_plan.rst
+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
@@ -85,7 +85,8 @@ flow:
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command, 
-   ensure the vhost using 2 queues: 
+   ensure the vhost using 2 queues::
+
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -106,7 +107,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -vnc :2 -daemonize
 
 3. On VM, bind virtio net to igb_uio and run testpmd,
-   using one queue for testing at first  ::
+   using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \
     --rss-ip --nb-cores=1
@@ -114,6 +115,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     testpmd>start
 
 4. Use scapy send packet::
+
     #scapy
     >>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
     >>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
@@ -159,7 +161,8 @@ flow:
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command, 
-   ensure the vhost using 2 queues: 
+   ensure the vhost using 2 queues::
+
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -180,7 +183,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -vnc :2 -daemonize
 
 3. On VM, bind virtio net to igb_uio and run testpmd,
-   using one queue for testing at first  ::
+   using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \
     --tx-offloads=0x0 --rss-ip --nb-cores=2
@@ -188,6 +191,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     testpmd>start
  
 4. Use scapy send packet::
+
     #scapy
     >>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
     >>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
diff --git a/test_plans/virtio_1.0_test_plan.rst b/test_plans/virtio_1.0_test_plan.rst
index 69f6794..265c586 100644
--- a/test_plans/virtio_1.0_test_plan.rst
+++ b/test_plans/virtio_1.0_test_plan.rst
@@ -44,8 +44,7 @@ test with virtio0.95 to ensure they can co-exist. Besides, we need test virtio
 
 
 Test Case 1: test_func_vhost_user_virtio1.0-pmd with different tx-offloads
-=======================================================================
-
+==========================================================================
 Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
 
 1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-- 
1.9.3


