[dts] [PATCH] test_plans: fix some syntax issues

Marvin Liu yong.liu at intel.com
Fri Aug 25 04:23:23 CEST 2017


Signed-off-by: Marvin Liu <yong.liu at intel.com>

diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index c41cefd..7e349b5 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -48,9 +48,13 @@ The sample should be validated on Forville, Niantic and i350 Nics.
 Test case: ptp client
 ======================
 Start ptp server on tester with IEEE 802.3 network transport::
+
     ptp4l -i p785p1 -2 -m
+
 Start ptp client on DUT and wait few seconds::
+
     ./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 0 -p 0x1
+
 Check that output message contained T1,T2,T3,T4 clock and time difference
 between master and slave time is about 10us in niantic, 20us in Fortville,
 8us in i350.
@@ -58,16 +62,22 @@ between master and slave time is about 10us in niantic, 20us in Fortville,
 Test case: update system
 ========================
 Reset DUT clock to initial time and make sure system time has been changed::
+
     date -s "1970-01-01 00:00:00"    
+
 Strip DUT and tester board system time::
+
     date +"%s.%N"
+
 Start ptp server on tester with IEEE 802.3 network transport::
+
     ptp4l -i p785p1 -2 -m -S
+
 Start ptp client on DUT and wait few seconds::
+
     ./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 1 -p 0x1
+
 Make sure DUT system time has been changed to same as tester.
 Check that output message contained T1,T2,T3,T4 clock and time difference
 between master and slave time is about 10us in niantic, 20us in Fortville,
 8us in i350.
-
-
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst
index 516c167..fc2aef8 100644
--- a/test_plans/qinq_filter_test_plan.rst
+++ b/test_plans/qinq_filter_test_plan.rst
@@ -57,6 +57,7 @@ Testpmd configuration - 4 RX/TX queues per port
 #. set up testpmd with fortville NICs::
 
     ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --txqflags=0x0  --disable-rss
+
 #. enable qinq::
 
     testpmd command: vlan set qinq on 0
@@ -75,19 +76,21 @@ Testpmd configuration - 4 RX/TX queues per port
 
 tester Configuration
 -------------------- 
-      
+
 #. send dual vlan packet with scapy, verify it can be recognized as qinq packet::
+
     sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=2)/Dot1Q(type=0x8100,vlan=3)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
 
 Test Case 2: qinq packet filter to PF queues
 ============================================
 
 Testpmd configuration - 4 RX/TX queues per port
-------------------------------------------------
+-----------------------------------------------
 
 #. set up testpmd with fortville NICs::
 
     ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --txqflags=0x0  --disable-rss
+
 #. enable qinq::
 
     testpmd command: vlan set qinq on 0
@@ -105,6 +108,7 @@ Testpmd configuration - 4 RX/TX queues per port
     testpmd command: start
 
 #. create filter rules::
+
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions pf / queue index 1 / end
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions pf / queue index 2 / end
 
@@ -112,6 +116,7 @@ tester Configuration
 -------------------- 
 
 #. send dual vlan packet with scapy, verify packets can filter to queues::
+
     sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=1)/Dot1Q(type=0x8100,vlan=4093)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
     sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=2)/Dot1Q(type=0x8100,vlan=4093)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
 
@@ -145,15 +150,14 @@ Test Case 3: qinq packet filter to VF queues
 
     testpmd command: start
        
- #. create filter rules::
+#. create filter rules::
  
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions vf id 0 / queue index 2 / end
-
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end
-
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end
 
 #. set up testpmd with fortville VF0 NICs::
+
     ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -w 81:02.0 -- -i --rxq=4 --txq=4 --rss-udp
 
 #. PMD fwd only receive the packets::
@@ -169,6 +173,7 @@ Test Case 3: qinq packet filter to VF queues
     testpmd command: start
 
 #. set up testpmd with fortville VF0 NICs::
+
     ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -w 81:02.0 -- -i --rxq=4 --txq=4 --rss-udp
 
 #. PMD fwd only receive the packets::
@@ -187,6 +192,7 @@ tester Configuration
 -------------------- 
 
 #. send dual vlan packet with scapy, verify packets can filter to the corresponding PF and VF queues::
+
     sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=1)/Dot1Q(type=0x8100,vlan=4094)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
     sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=2)/Dot1Q(type=0x8100,vlan=4094)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
     sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=3)/Dot1Q(type=0x8100,vlan=4094)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
@@ -228,12 +234,11 @@ Test Case 4: qinq packet filter with diffierent tpid
 #. create filter rules::
  
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions vf id 0 / queue index 2 / end
-
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end
-
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end
 
 #. set up testpmd with fortville VF0 NICs::
+
     ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -w 81:02.0 -- -i --rxq=4 --txq=4 --rss-udp
 
 #. PMD fwd only receive the packets::
@@ -249,6 +254,7 @@ Test Case 4: qinq packet filter with diffierent tpid
     testpmd command: start
 
 #. set up testpmd with fortville VF0 NICs::
+
     ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -w 81:02.0 -- -i --rxq=4 --txq=4 --rss-udp
 
 #. PMD fwd only receive the packets::
@@ -273,6 +279,7 @@ Note
 ====================================================
 
 #. How to send packet with specific TPID with scapy::
+
     1. wrpcap("qinq.pcap",[Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=1)/Dot1Q(type=0x8100,vlan=4092)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)]).
     2. hexedit qinq.pcap; change tpid field, "ctrl+w" to save, "ctrl+x" to exit.
-    3. sendp(rdpcap("qinq.pcap"), iface="eth17").
\ No newline at end of file
+    3. sendp(rdpcap("qinq.pcap"), iface="eth17").
diff --git a/test_plans/sriov_live_migration_test_plan.rst b/test_plans/sriov_live_migration_test_plan.rst
index cd690b6..981b3f8 100644
--- a/test_plans/sriov_live_migration_test_plan.rst
+++ b/test_plans/sriov_live_migration_test_plan.rst
@@ -1,289 +1,321 @@
-.. Copyright (c) <2016>, Intel Corporation
-      All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
-
-====================
-SRIOV live migration
-====================
-Qemu not support migrate a Virtual Machine which has an SR-IOV Virtual
-Function (VF). To get work around of this, bonding PMD and VirtIO is used.
-
-Prerequisites
--------------
-Connect three ports to one switch, these three ports are from Host, Backup
-host and tester.
-
-Start nfs service and export nfs to backup host IP:
-    host# service rpcbind start
-    host# service nfs start
-    host# cat /etc/exports
-    host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
-
-Make sure host nfsd module updated to v4 version(v2 not support file > 4G)
-
-Enable vhost pmd in configuration file and rebuild dpdk on host and backup host
-    CONFIG_RTE_LIBRTE_PMD_VHOST=y
-
-Create enough hugepages for testpmd and qemu backend memory.
-    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host# mount -t hugetlbfs hugetlbfs /mnt/huge
-
-Generate VF device with host port and backup host port
-    host# echo 1 > /sys/bus/pci/devices/0000\:01\:00.1/sriov_numvfs
-    backup# echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
-    
-Test Case 1: migrate with tap VirtIO
-====================================
-Start qemu on host server 
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 \
-          -no-reboot \
-          -drive file=/home/vm-image/vm0.img,format=raw \
-          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
-          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
-          -device pci-assign,host=01:10.1,id=vf1 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Bridge tap and PF device into one bridge
-    host# brctl addbr br0
-    host# brctl addif br0 tap1
-    host# brctl addif br0 $PF
-    host# ifconfig tap1 up
-    host# ifconfig $PF up
-    host# ifconfig br0 up
-
-Login into vm and bind VirtIO and VF device to igb_uio, then start testpmd
-    host# telnet localhost 5432
-    host vm# cd /root/dpdk
-    host vm# modprobe uio
-    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host vm# modprobe -r ixgbevf
-    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
-    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-
-Create bond device with VF and virtIO
-    testpmd> create bonded device 1 0
-    testpmd> add bonding slave 0 2
-    testpmd> add bonding slave 1 2
-    testpmd> set bonding primary 1 2
-    testpmd> port start 2
-    testpmd> set portlist 2
-    testpmd> show config fwd
-    testpmd> set fwd rxonly
-    testpmd> set verbose 1
-    testpmd> start
-
-Send packets from tester with bonding device's mac and check received
-    tester# scapy
-    tester# >>> VF="AA:BB:CC:DD:EE:FF"
-    tester# >>> sendp([Ether(dst=VF, src=get_if_hwaddr('p5p1')/IP()/UDP()/Raw('x' * 18)],
-                       iface='p5p1', loop=1, inter=1)
-
-Start qemu on backup server 
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc \
-          -smp 4 -cpu host -name VM1 \
-          -no-reboot \
-          -drive file=/mnt/nfs/vm0.img,format=raw \
-          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
-          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
-          -incoming tcp:0:4444 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Bridge tap and PF device into one bridge on backup server
-    backup# brctl addbr br0
-    backup# brctl addif br0 tap1
-    backup# brctl addif br0 $PF
-    backup# ifconfig tap1 up
-    backup# ifconfig $PF up
-    backup# ifconfig br0 up
-
-Before migration, remove VF device in host VM
-    testpmd> remove bonding slave 1 2
-    testpmd> port stop 1
-    testpmd> port close 1
-    testpmd> port detach 1
-
-Delete VF device in qemu monitor and then start migration
-    host# telnet localhost 3333
-        (qemu) device_del vf1
-        (qemu) migrate -d tcp:backup server ip:4444
-
-Check in migration process, still can receive packets
-
-After migration, check backup vm can receive packets
-
-After migration done, attached backup VF device
-    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
-
-Login backup VM and attach VF device
-    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
-    backup vm# testpmd> stop
-    backup vm# testpmd> port attach 0000:00:04.0
-
-Change backup VF mac address to same of host VF device
-    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
-    testpmd> port start 1
-    testpmd> add bonding slave 1 2
-    testpmd> set bonding primary 1 2
-    testpmd> show bonding config 2
-    testpmd> show port stats all
-
-Remove virtio device
-    testpmd> remove bonding slave 0 2
-    testpmd> show bonding config 2
-    testpmd> port stop 0
-    testpmd> port close 0
-    testpmd> port detach 0
-
-Check bonding device still can received packets
-
-
-Test Case 2: migrate with vhost user pmd
-========================================
-Start testpmd with vhost user pmd device on host
-    host# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
-          --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
-          --socket-mem 1024 -- -i
-
-Start qemu with vhost user on host
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm \
-          -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
-          -drive file=/home/vm-image/vm0.img,format=raw \
-          -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -chardev socket,id=char0,path=/root/dpdk/vhost-net \
-          -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-          -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
-          -device pci-assign,host=01:10.1,id=vf1 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Start testpmd on backup host with vhost user pmd device
-    backup# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
-            --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
-            --socket-mem 1024 -- -i
-
-Start qemu with vhost user on backup host
-    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-            -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-            -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
-            -drive file=/mnt/nfs/vm0.img,format=raw \
-            -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-            -chardev socket,id=char0,path=/root/dpdk/vhost-net \
-            -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-            -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
-            -monitor telnet::3333,server,nowait \
-            -serial telnet:localhost:5432,server,nowait \
-            -incoming tcp:0:4444 \
-            -daemonize
-
-Login into host vm, start testpmd with virtio and VF devices
-    host# telnet localhost 5432
-    host vm# cd /root/dpdk
-    host vm# modprobe uio
-    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host vm# modprobe -r ixgbevf
-    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
-    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-
-Check host vhost pmd connect with VM’s virtio device
-    host# testpmd> host testpmd message for connection
-
-Create bond device and then add VF and virtio devices into bonding device
-    host vm# testpmd> create bonded device 1 0
-    host vm# testpmd> add bonding slave 0 2
-    host vm# testpmd> add bonding slave 1 2
-    host vm# testpmd> set bonding primary 1 2
-    host vm# testpmd> port start 2
-    host vm# testpmd> set portlist 2
-    host vm# testpmd> show config fwd
-    host vm# testpmd> set fwd rxonly
-    host vm# testpmd> set verbose 1
-    host vm# testpmd> start
-
-Send packets matched bonding device’s mac from tester, check packets received
-by bonding device
-
-Before migration, removed VF device from bonding device. After that, bonding device
-can’t receive packets
-    host vm# testpmd> remove bonding slave 1 2
-    host vm# testpmd> port stop 1
-    host vm# testpmd> port close 1
-    host vm# testpmd> port detach 1
-
-Delete VF device in qemu monitor and then start migration
-    host# telnet localhost 3333
-    host# (qemu) device_del vf1
-    host# (qemu) migrate -d tcp:10.239.129.125:4444
-
-After migration done, add backup VF device into backup VM
-    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
-
-Login into backup VM and bind VF device to igb_uio
-    backup# ssh -p 5555 root at localhost
-    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
-
-Connect to backup VM serial port  and attach backup VF device
-    backup# telnet localhost 5432
-    backup vm# testpmd> port attach 0000:00:04.0
-
-Change backup VF mac address to match host VF device
-    backup vm# testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
-
-Add backup VF device into bonding device
-    backup vm# testpmd> port start 1
-    backup vm# testpmd> add bonding slave 1 2
-    backup vm# testpmd> set bonding primary 1 2
-    backup vm# testpmd> show bonding config 2
-    backup vm# testpmd> show port stats all
-
-Remove virtio device from backup bonding device
-    backup vm# testpmd> remove bonding slave 0 2
-    backup vm# testpmd> show bonding config 2
-    backup vm# testpmd> port stop 0
-    backup vm# testpmd> port close 0
-    backup vm# testpmd> port detach 0
-    backup vm# 
-
-Check still can receive packets matched VF mac address
-
+.. Copyright (c) <2016>, Intel Corporation
+      All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================
+SRIOV live migration
+====================
+Qemu does not support migrating a Virtual Machine which has an SR-IOV Virtual
+Function (VF). To work around this, a bonding PMD together with VirtIO is used.
+
+Prerequisites
+-------------
+Connect three ports to one switch; these three ports are from the host, the
+backup host and the tester.
+
+Start nfs service and export nfs to backup host IP::
+
+    host# service rpcbind start
+    host# service nfs start
+    host# cat /etc/exports
+    host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
+
+Make sure the host nfsd module is updated to v4 (v2 does not support files > 4G)
+
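+Mount the exported directory on the backup host; a minimal example, assuming
+<host-ip> is the address of the host exporting /home/vm-image::
+
+    backup# mkdir -p /mnt/nfs
+    backup# mount -t nfs <host-ip>:/home/vm-image /mnt/nfs    # <host-ip>: NFS server address
+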
+Enable vhost pmd in configuration file and rebuild dpdk on host and backup host::
+
+    CONFIG_RTE_LIBRTE_PMD_VHOST=y
+
+Create enough hugepages for testpmd and qemu backend memory::
+
+    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+Generate VF device with host port and backup host port::
+
+    host# echo 1 > /sys/bus/pci/devices/0000\:01\:00.1/sriov_numvfs
+    backup# echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
+
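+The created VFs can be verified, for example, by listing the PCI virtual functions::
+
+    host# lspci -nn | grep -i "virtual function"
+    backup# lspci -nn | grep -i "virtual function"
+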
+Test Case 1: migrate with tap VirtIO
+====================================
+Start qemu on host server::
+
+    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 \
+          -no-reboot \
+          -drive file=/home/vm-image/vm0.img,format=raw \
+          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
+          -device pci-assign,host=01:10.1,id=vf1 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Bridge tap and PF device into one bridge::
+
+    host# brctl addbr br0
+    host# brctl addif br0 tap1
+    host# brctl addif br0 $PF
+    host# ifconfig tap1 up
+    host# ifconfig $PF up
+    host# ifconfig br0 up
+
+Log into the vm, bind the VirtIO and VF devices to igb_uio, then start testpmd::
+
+    host# telnet localhost 5432
+    host vm# cd /root/dpdk
+    host vm# modprobe uio
+    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    host vm# modprobe -r ixgbevf
+    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
+    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Create bond device with VF and virtIO::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> port start 2
+    testpmd> set portlist 2
+    testpmd> show config fwd
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+Send packets from the tester to the bonding device's mac and check they are
+received::
+
+    tester# scapy
+    tester# >>> VF="AA:BB:CC:DD:EE:FF"
+    tester# >>> sendp([Ether(dst=VF, src=get_if_hwaddr('p5p1'))/IP()/UDP()/Raw('x' * 18)],
+                       iface='p5p1', loop=1, inter=1)
+
+Start qemu on backup server::
+
+    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc \
+          -smp 4 -cpu host -name VM1 \
+          -no-reboot \
+          -drive file=/mnt/nfs/vm0.img,format=raw \
+          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
+          -incoming tcp:0:4444 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Bridge tap and PF device into one bridge on backup server::
+
+    backup# brctl addbr br0
+    backup# brctl addif br0 tap1
+    backup# brctl addif br0 $PF
+    backup# ifconfig tap1 up
+    backup# ifconfig $PF up
+    backup# ifconfig br0 up
+
+Before migration, remove VF device in host VM::
+
+    testpmd> remove bonding slave 1 2
+    testpmd> port stop 1
+    testpmd> port close 1
+    testpmd> port detach 1
+
+Delete VF device in qemu monitor and then start migration::
+
+    host# telnet localhost 3333
+        (qemu) device_del vf1
+        (qemu) migrate -d tcp:<backup server ip>:4444
+
+Check that packets can still be received during the migration process
+
+After migration, check that the backup vm can receive packets
+
+After migration is done, attach the backup VF device::
+
+    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
+
+Log into the backup VM and attach the VF device::
+
+    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
+    backup vm# testpmd> stop
+    backup vm# testpmd> port attach 0000:00:04.0
+
+Change the backup VF mac address to be the same as the host VF device::
+
+    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+    testpmd> port start 1
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> show bonding config 2
+    testpmd> show port stats all
+
+Remove virtio device::
+
+    testpmd> remove bonding slave 0 2
+    testpmd> show bonding config 2
+    testpmd> port stop 0
+    testpmd> port close 0
+    testpmd> port detach 0
+
+Check that the bonding device can still receive packets
+
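+For example, the receive counters of the bonded port (port 2 above) can be
+checked with::
+
+    testpmd> show port stats 2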
+
+Test Case 2: migrate with vhost user pmd
+========================================
+Start testpmd with vhost user pmd device on host::
+
+    host# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
+          --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
+          --socket-mem 1024 -- -i
+
+Start qemu with vhost user on host::
+
+    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm \
+          -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
+          -drive file=/home/vm-image/vm0.img,format=raw \
+          -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+          -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+          -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
+          -device pci-assign,host=01:10.1,id=vf1 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Start testpmd on backup host with vhost user pmd device::
+
+    backup# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
+            --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
+            --socket-mem 1024 -- -i
+
+Start qemu with vhost user on backup host::
+
+    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+            -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+            -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
+            -drive file=/mnt/nfs/vm0.img,format=raw \
+            -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+            -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+            -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+            -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
+            -monitor telnet::3333,server,nowait \
+            -serial telnet:localhost:5432,server,nowait \
+            -incoming tcp:0:4444 \
+            -daemonize
+
+Log into the host vm and start testpmd with the virtio and VF devices::
+
+    host# telnet localhost 5432
+    host vm# cd /root/dpdk
+    host vm# modprobe uio
+    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    host vm# modprobe -r ixgbevf
+    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
+    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Check that the host vhost pmd connects with the VM's virtio device::
+
+    host# testpmd> host testpmd message for connection
+
+Create bond device and then add VF and virtio devices into bonding device::
+
+    host vm# testpmd> create bonded device 1 0
+    host vm# testpmd> add bonding slave 0 2
+    host vm# testpmd> add bonding slave 1 2
+    host vm# testpmd> set bonding primary 1 2
+    host vm# testpmd> port start 2
+    host vm# testpmd> set portlist 2
+    host vm# testpmd> show config fwd
+    host vm# testpmd> set fwd rxonly
+    host vm# testpmd> set verbose 1
+    host vm# testpmd> start
+
+Send packets matching the bonding device's mac from the tester, and check the
+packets are received by the bonding device
+
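+A scapy example similar to Test Case 1, using the bonding device's mac as the
+destination (the mac and tester interface below are examples)::
+
+    tester# scapy
+    tester# >>> # "AA:BB:CC:DD:EE:FF" is the bonding device's mac, p5p1 the tester port
+    tester# >>> sendp([Ether(dst="AA:BB:CC:DD:EE:FF")/IP()/UDP()/Raw('x' * 18)],
+                       iface='p5p1', loop=1, inter=1)
+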
+Before migration, remove the VF device from the bonding device. After that, the
+bonding device can't receive packets::
+
+    host vm# testpmd> remove bonding slave 1 2
+    host vm# testpmd> port stop 1
+    host vm# testpmd> port close 1
+    host vm# testpmd> port detach 1
+
+Delete VF device in qemu monitor and then start migration::
+
+    host# telnet localhost 3333
+    host# (qemu) device_del vf1
+    host# (qemu) migrate -d tcp:10.239.129.125:4444
+
+After migration is done, add the backup VF device into the backup VM::
+
+    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
+
+Log into the backup VM and bind the VF device to igb_uio::
+
+    backup# ssh -p 5555 root at localhost
+    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
+
+Connect to the backup VM serial port and attach the backup VF device::
+
+    backup# telnet localhost 5432
+    backup vm# testpmd> port attach 0000:00:04.0
+
+Change backup VF mac address to match host VF device::
+
+    backup vm# testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+
+Add backup VF device into bonding device::
+
+    backup vm# testpmd> port start 1
+    backup vm# testpmd> add bonding slave 1 2
+    backup vm# testpmd> set bonding primary 1 2
+    backup vm# testpmd> show bonding config 2
+    backup vm# testpmd> show port stats all
+
+Remove virtio device from backup bonding device::
+
+    backup vm# testpmd> remove bonding slave 0 2
+    backup vm# testpmd> show bonding config 2
+    backup vm# testpmd> port stop 0
+    backup vm# testpmd> port close 0
+    backup vm# testpmd> port detach 0
+    backup vm#
+
+Check that packets matching the VF mac address can still be received
+
diff --git a/test_plans/vf_kernel_test_plan.rst b/test_plans/vf_kernel_test_plan.rst
index 01627a5..b8f08d5 100644
--- a/test_plans/vf_kernel_test_plan.rst
+++ b/test_plans/vf_kernel_test_plan.rst
@@ -30,13 +30,13 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-============================
+===========================
 VFD as SRIOV Policy Manager
-============================
+===========================
 
-VFD is SRIOV Policy Manager (daemon) running on the host allowing 
-configuration not supported by kernel NIC driver, supports ixgbe and 
-i40e NIC. Run on the host for policy decisions w.r.t. what a VF can and 
+VFD is SRIOV Policy Manager (daemon) running on the host allowing
+configuration not supported by kernel NIC driver, supports ixgbe and
+i40e NIC. Run on the host for policy decisions w.r.t. what a VF can and
 cannot do to the PF. Only the DPDK PF would provide a callback to implement 
 these features, the normal kernel drivers would not have the callback so 
 would not support the features. Allow passing information to application 
@@ -45,38 +45,43 @@ so action could be taken based on host policy. Stop VM1 from asking for
 something that compromises VM2. Use DPDK DPDK PF + kernel VF mode to verify 
 below features. 
 
-Case 1: Set up environment and load driver
-============================================
+Test Case 1: Set up environment and load driver
+===============================================
 1. Get the pci device id of DUT, load ixgbe driver to required version, 
    take Niantic for example::
-        rmmod ixgbe
-        insmod ixgbe.ko
+
+    rmmod ixgbe
+    insmod ixgbe.ko
 
 2. Host PF in DPDK driver. Create VFs from PF with dpdk driver::
+
 	./tools/dpdk-devbind.py -b igb_uio 05:00.0
 	echo 2 >/sys/bus/pci/devices/0000\:05\:00.0/max_vfs 
 	
-3. Check ixgbevf version and update ixgbevf to required version::
+3. Check ixgbevf version and update ixgbevf to required version
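+
+   The ixgbevf module version available on the host can be checked, for
+   example, with::
+
+    modinfo ixgbevf | grep -i ^version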
 	
 4. Detach VFs from the host::
-	rmmod ixgbevf
+
+    rmmod ixgbevf
 
 5. Pass through VF 05:10.0 and 05:10.2 to VM0,start and login VM0
 
 6. Check ixgbevf version in VM and update to required version
 
 
-Case 2 : Link
-==========================================
+Test Case 2: Link
+=================
 Pre-environment::
-  (1)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0, 
+
+  (1)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0,
      start VM0 
   (2)Load host DPDK driver and VM0 kernel driver
 
 Steps:  
 
 1. Enable multi-queues to start DPDK PF::
-        ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
+
+    ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
 
 2. Link up kernel VF and expect VF link up
 
@@ -85,13 +90,13 @@ Steps:
 4. Repeat above 2~3 for 100 times, expect no crash or core dump issues. 
 
 
-		   
 Test Case 3: ping 
-===========================================
+==================
 Pre-environment:: 
+
   (1)Establish link with link partner.
-  (2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0, 
-     start VM0 
+  (2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0,
+     start VM0
   (3)Load host DPDK driver and VM0 kernel driver
 
 Steps: 
@@ -109,10 +114,11 @@ Steps:
    
 
 Test Case 4: reset
-==========================================
+==================
 Pre-environment::
+
   (1)Establish link with link partner.
-  (2)Host one DPDK PF and create two VFs, pass through VF0 to VM0 and VF1 to 
+  (2)Host one DPDK PF and create two VFs, pass through VF0 to VM0 and VF1 to
      VM1, start VM0 and VM1
   (3)Load host DPDK driver and VM kernel driver
 
@@ -141,9 +147,10 @@ Steps:
 Test Case 5: add/delete IP/MAC address
 ==========================================
 Pre-environment::
-  (1)Establish link with link partner.
-  (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (3)Load host DPDK driver and VM0 kernel drive
+
+    (1)Establish link with link partner.
+    (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (3)Load host DPDK driver and VM0 kernel drive
 
 Steps: 
 
@@ -154,13 +161,16 @@ Steps:
 3. Kernel VF0 ping tester PF, tester PF ping kernel VF0
 
 4. Add IPv6 on kernel VF0(e.g: ens3)::
-        ifconfig ens3 add efdd::9fc8:6a6d:c232:f1c0
+
+    ifconfig ens3 add efdd::9fc8:6a6d:c232:f1c0
 
 5. Delete IPv6 on kernel VF::
-        ifconfig ens3 del efdd::9fc8:6a6d:c232:f1c0
+
+    ifconfig ens3 del efdd::9fc8:6a6d:c232:f1c0
 
 6. Modify MAC address on kernel VF::
-        ifconfig ens3 hw ether 00:AA:BB:CC:dd:EE
+
+    ifconfig ens3 hw ether 00:AA:BB:CC:dd:EE
 
 7. Send packet to modified MAC, expect VF can receive packet successfully
 
@@ -168,19 +178,22 @@ Steps:
 Test Case 6: add/delete vlan
 ==========================================
 Pre-environment::
-  (1)Establish link with link partner.
-  (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (3)Load host DPDK driver and VM0 kernel driver
+
+    (1)Establish link with link partner.
+    (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (3)Load host DPDK driver and VM0 kernel driver
 
 Steps: 
 
 1. Add random vlan id(0~4095) on kernel VF0(e.g: ens3), take vlan id 51 
    for example::
-        modprobe 8021q
-        vconfig add ens3 51
+
+    modprobe 8021q
+    vconfig add ens3 51
 
 2. Check add vlan id successfully, expect to have ens3.51 device::
-        ls /proc/net/vlan 
+
+    ls /proc/net/vlan
 
 3. Send packet from tester to VF MAC with not-matching vlan id, check the 
    packet can't be received at the vlan device
@@ -189,7 +202,8 @@ Steps:
    packet can be received at the vlan device.
 
 5. Delete configured vlan device::
-        vconfig rem ens3.51
+
+    vconfig rem ens3.51
 
 6. Check delete vlan id 51 successfully
 
@@ -200,24 +214,27 @@ Steps:
 Test Case 7: Get packet statistic
 ==========================================
 Pre-environment::
-  (1)Establish link with link partner.
-  (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (3)Load host DPDK driver and VM0 kernel driver
+
+    (1)Establish link with link partner.
+    (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (3)Load host DPDK driver and VM0 kernel driver
 
 Steps: 
 
 1. Send packet to kernel VF0 mac
 
 2. Check packet statistic could increase correctly::
-        ethtool -S ens3
+
+    ethtool -S ens3
 
 
 Test Case 8: MTU
 ==========================================
 Pre-environment::
-  (1)Establish link with link partner.
-  (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (3)Load host DPDK driver and VM0 kernel driver
+
+    (1)Establish link with link partner.
+    (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (3)Load host DPDK driver and VM0 kernel driver
 
 Steps: 
 
@@ -230,15 +247,17 @@ Steps:
    DST MAC, check that Kernel VF can't receive packet
 
 4. Change DPDK PF mtu as 3000,check no confusion/crash on kernel VF::
-        Testpmd > port stop all
-        Testpmd > port config mtu 0 3000
-        Testpmd > port start all
+
+    Testpmd > port stop all
+    Testpmd > port config mtu 0 3000
+    Testpmd > port start all
 
 5. Use scapy to send one packet with length as 2000 with DPDK PF MAC as 
    DST MAC, check that DPDK PF can receive packet
 
 6. Change kernel VF mtu as 3000, check no confusion/crash on DPDK PF::
-        ifconfig eth0 mtu 3000
+
+    ifconfig eth0 mtu 3000
 
 7. Use scapy to send one packet with length as 2000 with kernel VF MAC 
    as DST MAC, check Kernel VF can receive packet
@@ -252,9 +271,10 @@ effect.
 Test Case 9: Enable/disable promisc mode
 =========================================
 Pre-environment::
-  (1)Establish link with link partner.
-  (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (3)Load host DPDK driver and VM0 kernel driver
+
+    (1)Establish link with link partner.
+    (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (3)Load host DPDK driver and VM0 kernel driver
 
 Steps:
  
@@ -262,7 +282,8 @@ Steps:
 
 2. Set up kernel VF tcpdump without -p parameter, without/with -p parameter 
    could enable/disable promisc mode::
-        sudo tcpdump -i ens3 -n -e -vv
+
+    sudo tcpdump -i ens3 -n -e -vv
 
 3. Send packet from tester with random DST MAC, check the packet can be 
    received by DPDK PF and kernel VF
@@ -271,7 +292,8 @@ Steps:
 
 5. Set up kernel VF tcpdump with -p parameter, which means disable promisc 
    mode::
-        sudo tcpdump -i ens3 -n -e –vv -p
+
+    sudo tcpdump -i ens3 -n -e -vv -p
 
 6. Send packet from tester with random DST MAC, check the packet can't be 
    received by DPDK PF and kernel VF
@@ -289,9 +311,10 @@ Niantic NIC un-supports this case.
 Test Case 10: RSS
 =========================================
 Pre-environment::
-  (1)Establish link with link partner.
-  (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (3)Load host DPDK driver and VM0 kernel driver
+
+    (1)Establish link with link partner.
+    (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (3)Load host DPDK driver and VM0 kernel driver
 
 Steps: 
 
@@ -313,10 +336,11 @@ Niantic NIC un-supports this case.
 Test Case 11: DPDK PF + kernel VF + DPDK VF
 ============================================
 Pre-environment::
-  (1)Establish link with IXIA.
-  (2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0, 
-     start VM0
-  (3)Load host DPDK driver, VM0 DPDK driver and kernel driver 
+
+    (1)Establish link with IXIA.
+    (2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0,
+       start VM0
+    (3)Load host DPDK driver, VM0 DPDK driver and kernel driver 
 
 Steps:
  
@@ -343,10 +367,11 @@ Steps:
 Test Case 12: DPDK PF + 2kernel VFs + 2DPDK VFs + 2VMs
 ======================================================
 Pre-environment::
-  (1)Establish link with IXIA.
-  (2)Host one DPDK PF and create 6 VFs, pass through VF0, VF1, VF2 and VF3 
-     to VM0, pass through VF4, VF5 to VM1, start VM0 and VM1
-  (3)Load host DPDK driver, VM DPDK driver and kernel driver 
+
+    (1)Establish link with IXIA.
+    (2)Host one DPDK PF and create 6 VFs, pass through VF0, VF1, VF2 and VF3
+       to VM0, pass through VF4, VF5 to VM1, start VM0 and VM1
+    (3)Load host DPDK driver, VM DPDK driver and kernel driver
 
 Steps:
  
@@ -379,10 +404,11 @@ Steps:
 
 
 Test Case 13: Load kernel driver stress
-======================================================
+========================================
 Pre-environment::
-  (1)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0 
-  (2)Load host DPDK driver and VM0 kernel driver
+
+    (1)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
+    (2)Load host DPDK driver and VM0 kernel driver
 
 Steps:
  
-- 
1.9.3


