[dts] [PATCH] Add test plans of Vhost-virtio in DPDK2.0

Qian Xu qian.q.xu at intel.com
Mon Mar 16 04:21:52 CET 2015


In DPDK 2.0, user space vhost is supported in two ways: one is vhost-cuse, the other is socket-based vhost (vhost-user). As to virtio, there used to be multiple implementations, and these have now been consolidated into a single virtio implementation. These test plans cover the above features.

Signed-off-by: Qian Xu <qian.q.xu at intel.com>

diff --git a/test_plans/vhost-virtio-one-copy-test-plan.rst b/test_plans/vhost-virtio-one-copy-test-plan.rst
new file mode 100644
index 0000000..66a17af
--- /dev/null
+++ b/test_plans/vhost-virtio-one-copy-test-plan.rst
@@ -0,0 +1,712 @@
+.. Copyright (c) <2015>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+   
+===============
+Virtio One Copy
+===============
+This test plan is for virtualization application scenarios using Intel DPDK vhost and virtio. The vhost backend has two implementations: vhost-user and vhost-cuse. To test vhost-user, QEMU version >= 2.2 is required; for vhost-cuse, QEMU version >= 1.5 can be used. The Makefile in lib/librte_vhost differs between the two implementations. For vhost-user, comment out the line SRCS-$(CONFIG_RTE_LIBRTE_VHOST) += vhost_cuse/vhost-net-cdev.c vhost_cuse/virtio-net-cdev.c vhost_cuse/eventfd_copy.c; for vhost-cuse, uncomment that line and comment out the vhost-user related line instead. The default is the vhost-user implementation. There are four scenarios; for the vhost-user implementation there are 9 functional/performance test cases and 2 stability test cases, and for vhost-cuse there are 6 functional/performance test cases, so 17 test cases in total.
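+
+Below is a sketch of the Makefile toggle described above (for reference only; the exact content of lib/librte_vhost/Makefile may vary between releases)::
+
+    # vhost-user build (default): keep the vhost_cuse sources commented out
+    # SRCS-$(CONFIG_RTE_LIBRTE_VHOST) += vhost_cuse/vhost-net-cdev.c vhost_cuse/virtio-net-cdev.c vhost_cuse/eventfd_copy.c
+
+    # vhost-cuse build: uncomment the line above and comment out the
+    # corresponding vhost-user source line, then rebuild librte_vhost.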
+
+Note: All the scripts and commands in this test plan are for reference only; they are not standard or official ones.
+
+My test environment is QEMU version 2.2 on FC20 (kernel 3.11).
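+
+The QEMU version can be confirmed before testing, for example::
+
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 --version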
+
+Prerequisites
+=============
+The platform needs VT-d enabled, and the host needs the kvm and kvm_intel modules loaded. To run the vhost-switch sample, first change one line of the config file common_linuxapp::
+
+    CONFIG_RTE_LIBRTE_VHOST=y  
+
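+The same change can be applied non-interactively, for example::
+
+    sed -i 's/CONFIG_RTE_LIBRTE_VHOST=.*$/CONFIG_RTE_LIBRTE_VHOST=y/' ./config/common_linuxapp
+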
+Then build dpdk target and vhost-sample::
+
+    make install -j20 T=x86_64-native-linuxapp-gcc
+    cd <dpdk_folder>/lib/librte_vhost
+    make
+    cd ./eventfd_link
+    make
+    cd <dpdk_folder>/examples/vhost
+    make
+
+For the one-VM scenario, one physical NIC port is needed by vhost-switch. The flow is bi-directional: virtio1 <--> virtio2.
+For the VM-to-VM scenario, at least two physical NIC ports are needed: one is used by vhost-switch, the other is used to send/receive traffic in the VMs. There are 2 VMs, each with one virtio device and one VF, and the flow is uni-directional: VF1 -> Virtio1 -> Virtio2 -> VF2.
+
+To bind port to igb_uio::
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind=igb_uio device_bus_id
+
+To create a VF, set the VF's MAC address and bind it to pci-stub; below is an example for the 82599 NIC::
+
+    $ modprobe -r ixgbe
+    
+    $ modprobe ixgbe max_vfs=2
+    
+    $ ifconfig eth1 up # eth1 is a PF which generates 2 VFs.
+
+    $ ifconfig eth2 up # eth2 is VF1 for PF.
+       
+    $ ip link set eth1 vf 0 mac 52:54:00:12:34:00 # Set VF1 MAC address
+    
+    $ echo "8086 10ed" >/sys/bus/pci/drivers/pci-stub/new_id
+    
+    $ echo "0000:08:10.0" >/sys/bus/pci/drivers/ixgbevf/unbind # Unbind from the VF's driver
+    
+    $ echo "0000:08:10.0" >/sys/bus/pci/drivers/pci-stub/bind # Bind to pci-stub
+
+On the host, before launching the vhost sample, first configure the hugepages and the environment. The script below can be used as an example::
+    
+    modprobe kvm
+    modprobe kvm_intel
+    awk '/Hugepagesize/ {print $2}' /proc/meminfo
+    awk '/HugePages_Total/ { print $2 }' /proc/meminfo
+    umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge -o pagesize=1G
+    rm -f /dev/vhost-net
+    rmmod vhost-net
+    modprobe fuse
+    modprobe cuse
+    rmmod eventfd_link
+    rmmod igb_uio
+
+    cd ./dpdk
+    insmod lib/librte_vhost/eventfd_link/eventfd_link.ko
+
+    modprobe uio
+    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+    ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:08:00.1
+    
+
+
+Test Case 1:  test_perf_virtio_one_vm_dpdk_fwd_vhost-cuse_jumboframe
+====================================================================
+
+On host:
+
+1. Start up vhost-switch; mergeable 1 means the jumbo frame feature is enabled, and vm2vm 0 means a single VM without VM-to-VM communication::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
+   
+
+2. Start VM with vhost cuse as backend::
+
+    taskset -c 4-6  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
+    -drive file=/home/img/dpdk1-vm1.img \
+    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -netdev tap,id=vhost4,ifname=tap_vhost4,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost4,mac=52:54:00:00:00:02,id=net4,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
+    -localtime -nographic
+
+On guest:
+
+3. Ensure the dpdk folder is copied to the guest with the same config file and build process as on the host. Then bind the 2 virtio devices to igb_uio and start testpmd; below are the steps for reference::
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --max-pkt-len 9000 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. After typing start tx_first in testpmd, the user can see 2 virtio devices with their MAC and VLAN id registered in the vhost sample; the log is shown in the host's vhost-sample output.
+
+5. Send traffic (30 seconds) to virtio1 and virtio2, with packet sizes from 64 to 1518 bytes as well as 3000-byte jumbo frames. Check the performance in Mpps. The traffic sent to virtio1 should have virtio1's MAC as DEST MAC and virtio1's VLAN id; the traffic sent to virtio2 should have virtio2's MAC as DEST MAC and virtio2's VLAN id. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+    
+Test Case 2:  test_perf_virtio_one_vm_linux_fwd_vhost-cuse_jumboframe
+=====================================================================
+
+On host:
+
+Same steps as in TestCase1.
+
+On guest:   
+  
+1. Set up routing on guest::
+
+    $ systemctl stop firewalld.service
+    
+    $ systemctl disable firewalld.service
+    
+    $ systemctl stop ip6tables.service
+    
+    $ systemctl disable ip6tables.service
+
+    $ systemctl stop iptables.service
+    
+    $ systemctl disable iptables.service
+
+    $ systemctl stop NetworkManager.service
+    
+    $ systemctl disable NetworkManager.service
+ 
+    $ echo 1 >/proc/sys/net/ipv4/ip_forward
+
+    $ ip addr add 192.168.1.2/24 dev eth1    # eth1 is virtio1
+    
+    $ ip neigh add 192.168.1.1 lladdr 00:00:00:00:0a:0a dev eth1
+    
+    $ ip link set dev eth1 up
+    
+    $ ip addr add 192.168.2.2/24 dev eth2    # eth2 is virtio2
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a  dev eth2
+    
+    $ ip link set dev eth2 up
+
+2. Send traffic (30 seconds) to virtio1 and virtio2. According to the above script, traffic sent to virtio1 should have SRC IP (e.g. 192.168.1.1), DEST IP (e.g. 192.168.2.1), DEST MAC as virtio1's MAC and VLAN ID as virtio1's VLAN. Traffic sent to virtio2 has similar settings: SRC IP (e.g. 192.168.2.1), DEST IP (e.g. 192.168.1.1) and VLAN ID as virtio2's VLAN. Set the packet size from 64 to 1518 bytes as well as jumbo frames. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+    
+Test Case 3:  test_perf_virtio_vm2vm_dpdk_fwd_NIC_L2_switch_vhost-cuse-jumboframe
+=================================================================================
+
+On host:
+
+1. Start up vhost-switch; mergeable 1 means the jumbo frame feature is enabled, and vm2vm 2 means VM-to-VM communication through the NIC layer-2 switch::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 2
+   
+
+2. Set the PF's MTU to 9000 and start the VMs with vhost cuse as backend::
+
+    ifconfig pf_interface mtu 9000
+
+    VM1 Startup:
+
+    taskset -c 4-6  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
+    -drive file=/home/img/dpdk1-vm1.img \
+    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -device pci-assign,host=08:10.0 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
+    -localtime -nographic
+
+    VM2 Startup:
+
+    taskset -c 7-9  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm2 \
+    -drive file=/home/img/dpdk1-vm2.img \
+    -netdev tap,id=vhost4,ifname=tap_vhost4,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost4,mac=52:54:00:00:00:02,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -device pci-assign,host=08:10.2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:02 \
+    -localtime -nographic
+
+On guest: 
+
+3. Ensure the dpdk folder is copied to each guest with the same config file and build process as on the host. Then, in each VM, bind the virtio device and the VF to igb_uio and start testpmd; below are the steps for reference::
+
+    On VM1:
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --eth_peer=0,peer-mac(virtio in VM2) --max-pkt-len 9000 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+    On VM2:
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --max-pkt-len 9000 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. After typing start tx_first in testpmd, the user can see the 2 virtio devices with their MAC and VLAN id registered in the vhost sample; the log is shown in the host's vhost-sample output.
+    
+5. Send traffic (30 seconds) to VF1; the flow should be VF1->Virtio1->Virtio2->VF2. Set the packet size from 64 to 1518 bytes as well as 3000-byte jumbo frames. The traffic sent to VF1 can have DEST MAC=VF1's MAC without a VLAN ID. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+
+Test Case 4:  test_perf_virtio_vm2vm_linux_fwd_NIC_L2_switch_vhost-cuse_jumbo_frame
+===================================================================================
+
+On host:
+
+Same steps as in TestCase3.
+
+On guest: 
+
+1. Set up routing on the guests and set the VF's MTU to 9000 for jumbo frames::
+
+    on VM1::
+    
+    $ ip addr add 192.168.1.2/24 dev eth1 # Suppose eth1 is Virtio, eth2 is VF
+    
+    $ ip neigh add 192.168.1.1 lladdr 52:54:00:00:00:02 dev eth1 # Set the neighbor MAC to the next virtio's MAC
+    
+    $ ip link set dev eth1 up
+
+    $ ip addr add 192.168.2.2/24 dev eth2
+
+    $ ifconfig eth2 mtu 9000
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a  dev eth2
+    
+    $ ip link set dev eth2 up
+    
+    on VM2::
+
+    $ ip addr add 192.168.2.2/24 dev eth1 # Suppose eth1 is virtio, eth2 is VF
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:0a:0a dev eth1
+    
+    $ ip link set dev eth1 up
+    
+    $ ip addr add 192.168.1.2/24 dev eth2
+ 
+    $ ifconfig eth2 mtu 9000
+    
+    $ ip neigh add 192.168.1.1 lladdr 90:e2:ba:36:99:3d dev eth2
+    
+    $ ip link set dev eth2 up
+
+2. Send traffic (30 seconds) to VF1; the flow should be VF1->Virtio1->Virtio2->VF2. Set the packet size from 64 to 1518 bytes as well as 3000-byte jumbo frames. The traffic sent to VF1 can have DEST MAC=VF1's MAC without a VLAN ID, DEST IP=192.168.1.1, SRC IP=192.168.2.1. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+    
+Test Case 5:  test_perf_virtio_vm2vm_dpdk_fwd_soft_switch_vhost-cuse
+====================================================================
+
+On host:
+
+1. Start up vhost-switch; mergeable 0 means the jumbo frame feature is disabled, and vm2vm 1 means VM-to-VM communication through the software switch::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
+   
+
+2. Start VM with vhost cuse as backend::
+
+    VM1 Startup:
+
+    taskset -c 4-6  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
+    -drive file=/home/img/dpdk1-vm1.img \
+    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -device pci-assign,host=08:10.0 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
+    -localtime -nographic
+
+    VM2 Startup:
+
+    taskset -c 7-9  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm2 \
+    -drive file=/home/img/dpdk1-vm2.img \
+    -netdev tap,id=vhost4,ifname=tap_vhost4,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost4,mac=52:54:00:00:00:02,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -device pci-assign,host=08:10.2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:02 \
+    -localtime -nographic
+  
+On guest: 
+
+3. Ensure the dpdk folder is copied to each guest with the same config file and build process as on the host. Then, in each VM, bind the virtio device and the VF to igb_uio and start testpmd; below are the steps for reference::
+
+    On VM1:
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --eth_peer=0,peer-mac(virtio in VM2) 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+    On VM2:
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. After typing start tx_first in testpmd, the user can see the 2 virtio devices with their MAC and VLAN id registered in the vhost sample; the log is shown in the host's vhost-sample output.
+    
+5. Send traffic (30 seconds) to VF1; the flow should be VF1->Virtio1->Virtio2->VF2. Set the packet size from 64 to 1518 bytes. The traffic sent to VF1 can have DEST MAC=VF1's MAC without a VLAN ID. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+
+Test Case 6:  test_perf_virtio_vm2vm_linux_fwd_soft_switch_vhost-cuse
+=====================================================================
+
+On host:
+
+Same steps as in TestCase5.
+
+On guest: 
+
+1. Set up routing on the guests::
+
+    on VM1:
+    
+    $ ip addr add 192.168.2.2/24 dev eth1 # Virtio is eth1, VF is eth2
+    
+    $ ip neigh add 192.168.2.1 lladdr 52:54:00:00:00:02 dev eth1 # Set the neighbor MAC to the next virtio's MAC
+    
+    $ ip link set dev eth1 up
+
+    $ ip addr add 192.168.1.2/24 dev eth2
+    
+    $ ip neigh add 192.168.1.1 lladdr 00:00:00:00:00:0a  dev eth2
+    
+    $ ip link set dev eth2 up
+    
+    on VM2:
+
+    $ ip addr add 192.168.2.2/24 dev eth2
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:0a:0a dev eth2
+    
+    $ ip link set dev eth2 up
+    
+    $ ip addr add 192.168.3.2/24 dev eth1
+    
+    $ ip neigh add 192.168.3.1 lladdr 90:e2:ba:36:99:3d dev eth1
+    
+    $ ip link set dev eth1 up
+
+2. Send traffic (30 seconds) to VF1; the flow should be VF1->Virtio1->Virtio2->VF2. Set the packet size from 64 to 1518 bytes. The traffic sent to VF1 can have DEST MAC=VF1's MAC without a VLAN ID, DEST IP=192.168.1.1, SRC IP=192.168.2.1. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+
+Test Case 7:  test_perf_virtio_one_vm_dpdk_fwd_vhost-user_jumboframe
+====================================================================
+
+This case is similar to TestCase1; just change the backend from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase1. The command to launch the VM is different, see the example below::
+
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 -chardev socket,id=char1,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+
+On the guest, add one parameter at the end of the testpmd command line: --disable-hw-vlan-filter.
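+
+For reference, the testpmd command line from TestCase1 with this parameter appended would look like::
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --max-pkt-len 9000 --disable-hw-vlan-filter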
+
+Test Case 8:  test_perf_virtio_one_vm_dpdk_fwd_vhost-user
+=========================================================
+
+This case is similar to TestCase7; just set mergeable=0 (disable jumbo frame) when launching the vhost sample, and send packet sizes from 64B to 1518B to check the performance and basic functions.
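+
+For reference, the vhost-switch command line then becomes::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0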
+    
+Test Case 9:  test_perf_virtio_one_vm_linux_fwd_vhost-user_jumboframe
+=====================================================================
+
+This case is similar to TestCase2: set mergeable=1 and change the backend vhost sample from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase2. The command to launch the VM is the same as in TestCase7. On the guest, add one parameter at the end of the testpmd command line: --disable-hw-vlan-filter.
+
+Test Case 10:  test_perf_virtio_one_vm_linux_fwd_vhost-user
+===========================================================
+
+This case is similar to TestCase9; just set mergeable=0 (disable jumbo frame) when launching the vhost sample, and send packet sizes from 64B to 1518B to check the performance and basic functions.
+
+Test Case 11:  test_perf_virtio_vm2vm_dpdk_fwd_NIC_L2_switch_vhost-user_jumboframe
+==================================================================================
+
+This case is similar to TestCase3: set mergeable=1 and change the backend from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase3. The command to launch the 2 VMs is different from vhost cuse, see the examples below for reference::
+
+    VM1 Startup:
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 -device pci-assign,host=08:10.0  -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01  -nographic
+
+    VM2 Startup:
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm2 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm2.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:12:34:22,netdev=mynet1 -device pci-assign,host=08:10.2  -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm2,id=net0,mac=00:00:00:00:00:10  -nographic
+
+On the guest, add one parameter at the end of the testpmd command line: --disable-hw-vlan-filter.
+
+
+Test Case 12:  test_perf_virtio_vm2vm_dpdk_fwd_NIC_L2_switch_vhost-user
+=======================================================================
+
+This case is similar to TestCase11; just set mergeable=0 (disable jumbo frame) when launching the vhost sample, and send packet sizes from 64B to 1518B to check the performance and basic functions.
+
+
+Test Case 13:  test_perf_virtio_vm2vm_linux_fwd_NIC_L2_switch_vhost-user_jumboframe
+===================================================================================
+
+This case is similar to TestCase4: set mergeable=1 and change the backend from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase4. The command to launch the 2 VMs is the same as in TestCase11.
+
+
+Test Case 14:  test_perf_virtio_vm2vm_linux_fwd_NIC_L2_switch_vhost-user
+========================================================================
+
+This case is similar to TestCase13; just set mergeable=0 (disable jumbo frame) when launching the vhost sample, and send packet sizes from 64B to 1518B to check the performance and basic functions.
+
+    
+Test Case 15:  test_perf_virtio_vm2vm_dpdk_fwd_soft_switch_vhost-user_jumboframe
+================================================================================
+
+This case is similar to TestCase5: set mergeable=1 and change the backend from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase5. The command to launch the 2 VMs is the same as in TestCase11. On the guest, add one parameter at the end of the testpmd command line: --disable-hw-vlan-filter.
+
+
+Test Case 16:  test_perf_virtio_vm2vm_dpdk_fwd_soft_switch_vhost-user
+=====================================================================
+
+This case is similar to TestCase15; just set mergeable=0 (disable jumbo frame) when launching the vhost sample, and send packet sizes from 64B to 1518B to check the performance and basic functions.
+
+
+Test Case 17:  test_perf_virtio_vm2vm_linux_fwd_soft_switch_vhost-user_jumboframe
+=================================================================================
+
+This case is similar to TestCase6: set mergeable=1 and change the backend from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase6. The command to launch the 2 VMs is the same as in TestCase11.
+
+
+Test Case 18:  test_perf_virtio_vm2vm_linux_fwd_soft_switch_vhost-user
+======================================================================
+
+This case is similar to TestCase17; just set mergeable=0 (disable jumbo frame) when launching the vhost sample, and send packet sizes from 64B to 1518B to check the performance and basic functions.
+
+
+Test Case 19:  test_function_virtio_one_vm_vlan_insert_dpdk_vhost-user
+======================================================================
+
+On host:
+
+1. Start up vhost-switch::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
+
+2. Start VM::
+
+    taskset <qemu-2.2.0>/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 4 -cpu host -name dpdk1-vm1 \
+    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on  -numa node,memdev=mem -mem-prealloc \ 
+    -drive file=/home/img/dpdk1-vm1.img  \ 
+    -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
+    -chardev socket,id=char1,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+
+On guest:
+
+3. Ensure the dpdk folder is copied to the guest with the same config file and build process as on the host. Then bind the virtio devices to igb_uio and start testpmd; below are the steps for reference::
+
+    $ dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    $ ./x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan-filter
+    
+    $ >set fwd mac
+
+    $ >tx_vlan set 0x123 0 
+ 
+    $ >tx_vlan set 0x456 1
+    
+    $ >start tx_first
+
+4. Suppose virtio0 is port 0 and virtio1 is port 1. Send traffic to virtio0; the flow will be virtio0 -> virtio1, and the received packets should have VLAN id 0x456. Then send traffic to virtio1; the flow will be virtio1 -> virtio0, and the received packets should have VLAN id 0x123.
+
+Test Case 20:  test_function_virtio_one_vm_vlan_strip_dpdk_vhost-user
+=====================================================================
+
+On host:
+
+1. Start up vhost-switch with vlan-strip 0::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0 --vlan-strip 0
+
+2. Start VM::
+
+    taskset <qemu-2.2.0>/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 4 -cpu host -name dpdk1-vm1 \
+    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on  -numa node,memdev=mem -mem-prealloc \ 
+    -drive file=/home/img/dpdk1-vm1.img  \ 
+    -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
+    -chardev socket,id=char1,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+
+On guest:
+
+3. Ensure the dpdk folder is copied to the guest with the same config file and build process as on the host. Then bind the virtio devices to igb_uio and start testpmd; below are the steps for reference::
+
+    $ dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    $ ./x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan-filter
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. Send traffic (30 seconds) to virtio1 or virtio2 with a VLAN id, and check that the packet size received by IXIA is correct (for 64B packets, IXIA should receive 68B with the VLAN id).
+
+Test Case 21:  test_function_virtio_one_vm_port_io_dpdk_vhost-user
+==================================================================
+
+On host:
+
+1. Start up vhost-switch::
+
+    taskset -c 1-3 vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0 
+
+2. Start VM::
+
+    taskset <qemu-2.2.0>/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 4 -cpu host -name dpdk1-vm1 \
+    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on  -numa node,memdev=mem -mem-prealloc \ 
+    -drive file=/home/img/dpdk1-vm1.img  \ 
+    -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
+    -chardev socket,id=char1,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+
+On guest:
+
+3. Start testpmd on the guest; there is no need to insmod igb_uio or bind the ports to it::
+
+    $ python ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind=virtio-pci 00:03.0 00:04.0
+   
+    $ rmmod igb_uio
+
+    $ ./x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -w 00:03.0 -w 00:04.0 -- -i --txqflags 0x0f00 --disable-hw-vlan-filter
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. Send traffic (30 seconds) to virtio1 or virtio2 with a VLAN id and packet sizes from 64B to 1518B, and check that the packets are forwarded from one port to the other correctly.
+
+Test Case 22:  test_stability_netperf_virtio_vm2vm_vhost-user
+========================================================================
+
+On host:
+
+1. Start up vhost-switch::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 2
+
+2. Start VM::
+
+    VM1 Start up:
+
+    taskset -c 4-6 <qemu-2.2.0>/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 4 -cpu host -name dpdk1-vm1 \
+    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on  -numa node,memdev=mem -mem-prealloc \ 
+    -drive file=/home/img/dpdk1-vm1.img  \
+    -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
+    -device pci-assign,host=08:10.1 -nographic 
+    
+    VM2 Start up:
+
+    taskset -c 7-9 <qemu-2.2.0>/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 4 -cpu host -name dpdk1-vm2 \
+    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on  -numa node,memdev=mem -mem-prealloc \ 
+    -drive file=/home/img/dpdk1-vm2.img  \
+    -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
+    -device pci-assign,host=08:10.3 -nographic 
+
+3. Run netperf on the guests for long hours and check whether the vhost-switch fd and memory information are correct. Also check the netperf logs for any exceptions. Netperf needs to be installed on both VMs. For example::
+
+    VM1: launch netserver.
+    $ ifconfig eth0 192.168.1.2 # eth0 is the virtio port
+    $ arp -s 192.168.1.3 52:54:00:00:00:02 # set the arp table to the virtio in VM2
+    $ netserver
+
+    VM2:
+    $ ifconfig eth0 192.168.1.3 # eth0 is the virtio port
+    $ arp -s 192.168.1.2 52:54:00:00:00:01 # set the arp table to the virtio in VM1
+    for((i=0;i<99999999999999;i++));
+    do
+    netperf -H 192.168.1.2 -t TCP_STREAM -l 1800 >> log/tcp_stream.txt 2>&1
+    netperf -H 192.168.1.2 -t UDP_STREAM -l 1800 >> log/udp_stream.txt 2>&1
+    netperf -H 192.168.1.2 -t TCP_RR -l 1800 >> log/tcp_rr.txt 2>&1
+    netperf -H 192.168.1.2 -t TCP_CRR -l 1800 >> log/tcp_crr.txt 2>&1
+    netperf -H 192.168.1.2 -t UDP_RR -l 1800 >> log/udp_rr.txt 2>&1
+    done
+
+Test Case 23:  test_stability_virtio_multiple_vms_explorer_dpdk_legacy_vhost-user
+=================================================================================
+
+On host:
+
+1. Start up vhost-switch; mergeable 1 means the jumbo frame feature is enabled, and vm2vm 0 means a single VM without VM-to-VM communication::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
+   
+
+2. Start VMn with vhost-user as backend (n>=1)::
+
+    taskset -c 4-6 <qemu-2.2.0>/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \ 
+    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on  -numa node,memdev=mem -mem-prealloc \
+    -drive file=/home/img/dpdk1-vm1.img \
+    -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
+    -chardev socket,id=char1,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+
+On guest:
+
+3. Send traffic to VMn.
+ 
+4. First run the Linux legacy test for 30 seconds, then go to the next step.
+
+5. Start testpmd on the guest for 30 seconds, quit testpmd, rebind the ports from igb_uio to virtio-net, then bind them back to igb_uio and restart testpmd. Repeat this step N times, for example N=10 (see the sketch after this list).
+
+6. Unbind igb_uio, then go back to step 4. Repeat steps 4 and 5 N times, for example N=10.
+
+7. Restart VMn and repeat steps 4-6. In the outermost loop, repeatedly reboot/shut down and start the VMs.
+
+8. At the same time, start N VMs running the same steps 2 to 7. It is a multiple-VM case, but the VMs are independent of each other.
+ 
+9. Check whether any exception or missing received data occurs. Also check the vhost process's memory file and fd information to ensure there is no memory exception.
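+
+A minimal sketch of the bind/unbind cycle in step 5, assuming the two virtio devices are at 00:03.0 and 00:04.0 (as in the earlier cases), N=10, and rebinding to the kernel virtio-pci driver as in TestCase21::
+
+    for i in `seq 1 10`; do
+        ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+        # start testpmd, forward traffic for about 30 seconds, then quit testpmd
+        ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind virtio-pci 00:03.0 00:04.0
+    done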
+
+
+Test Case 24:  test_guest_memory_one_vm_linux_fwd_vhost-user
+============================================================
+
+1. Start up vhost-switch with vhost-user as backend::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
+
+2. Start the VM with different memory sizes: 512MB, 1024MB, 2048MB, 4096MB, 8192MB::
+
+     <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m <From 512 to 8192> -object memory-backend-file,id=mem,size=<From 512MB to 8192MB>,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 -chardev socket,id=char1,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+
+3. Set up routing on guest::
+
+    $ systemctl stop firewalld.service
+    
+    $ systemctl disable firewalld.service
+    
+    $ systemctl stop ip6tables.service
+    
+    $ systemctl disable ip6tables.service
+
+    $ systemctl stop iptables.service
+    
+    $ systemctl disable iptables.service
+
+    $ systemctl stop NetworkManager.service
+    
+    $ systemctl disable NetworkManager.service
+ 
+    $ echo 1 >/proc/sys/net/ipv4/ip_forward
+
+    $ ip addr add 192.168.1.2/24 dev eth1
+    
+    $ ip neigh add 192.168.1.1 lladdr 00:00:00:00:0a:0a dev eth1
+    
+    $ ip link set dev eth1 up
+    
+    $ ip addr add 192.168.2.2/24 dev eth0
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a  dev eth0
+    
+    $ ip link set dev eth0 up
+
+4. Send traffic (30 seconds) to virtio1 and virtio2. According to the above script, traffic sent to virtio1 should have SRC IP (e.g. 192.168.1.1), DEST IP (e.g. 192.168.2.1), DEST MAC as virtio1's MAC and VLAN ID as virtio1's VLAN. Traffic sent to virtio2 has similar settings: SRC IP (e.g. 192.168.2.1), DEST IP (e.g. 192.168.1.1) and VLAN ID as virtio2's VLAN. Set the packet size to 64 bytes and check that the packets are received correctly.
+
+Test Case 25:  test_2M_huge_page_one_vm_linux_fwd_vhost-user
+============================================================
+
+Use 2M huge pages; the above cases use 1G huge pages. Repeat the TestCase10 steps to check that everything works well.
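+
+A reference sketch of preparing the host hugepages with 2M pages instead of the 1G mount shown in the prerequisites; the page count (4096, i.e. 8GB) is only an example::
+
+    echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M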
\ No newline at end of file
diff --git a/test_plans/vhost-virtio-zero-copy-test-plan.rst b/test_plans/vhost-virtio-zero-copy-test-plan.rst
new file mode 100644
index 0000000..aa9e110
--- /dev/null
+++ b/test_plans/vhost-virtio-zero-copy-test-plan.rst
@@ -0,0 +1,328 @@
+.. Copyright (c) <2015>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+   
+================
+Virtio Zero Copy
+================
+This test plan is for virtualization application scenarios using Intel DPDK vhost and virtio. The vhost backend has two implementations: vhost-user and vhost-cuse. To test vhost-user, QEMU version > 2.1 is required; for vhost-cuse, QEMU version > 1.5 can be used. The Makefile in lib/librte_vhost differs between the two implementations. For vhost-user, comment out the line SRCS-$(CONFIG_RTE_LIBRTE_VHOST) += vhost_cuse/vhost-net-cdev.c vhost_cuse/virtio-net-cdev.c vhost_cuse/eventfd_copy.c; for vhost-cuse, uncomment that line and comment out the vhost-user related line instead. The default is the vhost-user implementation. There are four scenarios; for the vhost-user implementation there are 4 functional/performance test cases, and for vhost-cuse there are 4 functional/performance test cases as well, so 8 test cases in total.
+
+Compared with one-copy mode, zero-copy has lower priority. In zero-copy mode, the host and guest share the same huge page space, hence the name zero-copy.
+
+Prerequisites
+=============
+
+The platform needs VT-d enabled, and the host needs the kvm and kvm_intel modules loaded. To run the vhost-switch sample in zero-copy mode, first update the config file common_linuxapp::
+
+    sed -i 's/CONFIG_RTE_MBUF_SCATTER_GATHER=.*$/CONFIG_RTE_MBUF_SCATTER_GATHER=n/' ./config/common_linuxapp
+    grep CONFIG_RTE_MBUF_SCATTER_GATHER ./config/common_linuxapp
+    sed -i 's/CONFIG_RTE_LIBRTE_IP_FRAG=.*$/CONFIG_RTE_LIBRTE_IP_FRAG=n/' ./config/common_linuxapp
+    sed -i 's/CONFIG_RTE_LIBRTE_DISTRIBUTOR=.*$/CONFIG_RTE_LIBRTE_DISTRIBUTOR=n/' ./config/common_linuxapp
+    sed -i 's/CONFIG_RTE_LIBRTE_ACL=.*$/CONFIG_RTE_LIBRTE_ACL=n/'  ./config/common_linuxapp
+    sed -i 's/CONFIG_RTE_LIBRTE_PMD_BOND=.*$/CONFIG_RTE_LIBRTE_PMD_BOND=n/'  ./config/common_linuxapp
+    sed -i 's/CONFIG_RTE_LIBRTE_VHOST=.*$/CONFIG_RTE_LIBRTE_VHOST=y/'  ./config/common_linuxapp
+    sed -i 's/RTE_MBUF_REFCNT=.*$/RTE_MBUF_REFCNT=n/'  ./config/common_linuxapp
+    grep RTE_MBUF_REFCNT ./config/common_linuxapp
+
+Then build dpdk target and vhost-sample::
+
+    make install -j20 T=x86_64-native-linuxapp-gcc
+    cd <dpdk_folder>/lib/librte_vhost
+    make
+    cd ./eventfd_link
+    make
+    cd <dpdk_folder>/examples/vhost
+    make
+
+For the one-VM scenario, one physical NIC port is needed by vhost-switch. The flow is bi-directional: virtio1 <--> virtio2.
+For the VM-to-VM scenario, at least two physical NIC ports are needed: one is used by vhost-switch, the other is used to send/receive traffic in the VMs. There are 2 VMs, each with one virtio device and one VF, and the flow is uni-directional: VF1 -> Virtio1 -> Virtio2 -> VF2.
+
+To bind port to igb_uio::
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind=igb_uio device_bus_id
+
+To create a VF, set the VF's MAC address and bind it to pci-stub; below is an example for the 82599 NIC::
+
+    $ modprobe -r ixgbe
+    
+    $ modprobe ixgbe max_vfs=2
+    
+    $ ifconfig eth1 up # eth1 is a PF which generates 2 VFs.
+
+    $ ifconfig eth2 up # eth2 is VF1 for PF.
+       
+    $ ip link set eth1 vf 0 mac 52:54:00:12:34:00 # Set VF1 MAC address
+    
+    $ echo "8086 10ed" >/sys/bus/pci/drivers/pci-stub/new_id
+    
+    $ echo "0000:08:10.0" >/sys/bus/pci/drivers/ixgbevf/unbind # Unbind from the VF's driver
+    
+    $ echo "0000:08:10.0" >/sys/bus/pci/drivers/pci-stub/bind # Bind to pci-stub
+
+On the host, before launching the vhost sample, first configure the hugepages and the environment. The script below can be used as an example::
+    
+    modprobe kvm
+    modprobe kvm_intel
+    awk '/Hugepagesize/ {print $2}' /proc/meminfo
+    awk '/HugePages_Total/ { print $2 }' /proc/meminfo
+    umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge -o pagesize=1G
+    rm -f /dev/vhost-net
+    rmmod vhost-net
+    modprobe fuse
+    modprobe cuse
+    rmmod eventfd_link
+    rmmod igb_uio
+
+    cd ./dpdk
+    insmod lib/librte_vhost/eventfd_link/eventfd_link.ko
+
+    modprobe uio
+    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+    ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:08:00.1
+
+Test Case 1:  test_perf_virtio_one_vm_dpdk_fwd_vhost-cuse
+=========================================================
+
+On host:
+
+1. Start up vhost-switch; zero-copy 1 means zero-copy is enabled. In zero-copy mode, jumbo frames are disabled, so mergeable can only be 0::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 1 --vm2vm 0 --rx-desc-num 128
+   
+
+2. Start VM with vhost cuse as backend::
+
+    taskset -c 4-6  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
+    -drive file=/home/img/dpdk1-vm1.img \
+    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -netdev tap,id=vhost4,ifname=tap_vhost4,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost4,mac=52:54:00:00:00:02,id=net4,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
+    -localtime -nographic
+
+On guest:
+
+3. Ensure the dpdk folder is copied to the guest with the same config file and build process as on the host. Then bind the 2 virtio devices to igb_uio and start testpmd; below are the steps for reference::
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --max-pkt-len 9000
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. After typing start tx_first in testpmd, the user can see 2 virtio devices with their MAC and VLAN id registered in the vhost sample; the log is shown in the host's vhost-sample output.
+
+5. Send traffic (30 seconds) to virtio1 and virtio2, with packet sizes from 64 to 1518 bytes as well as 3000-byte jumbo frames. Check the performance in Mpps. The traffic sent to virtio1 should have virtio1's MAC as DEST MAC and virtio1's VLAN id; the traffic sent to virtio2 should have virtio2's MAC as DEST MAC and virtio2's VLAN id. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+    
+Test Case 2:  test_perf_virtio_one_vm_linux_fwd_vhost-cuse
+==========================================================
+
+On host:
+
+Same steps as in TestCase1.
+
+On guest:   
+
+1. Set up routing on guest::
+
+    $ systemctl stop firewalld.service
+    
+    $ systemctl disable firewalld.service
+    
+    $ systemctl stop ip6tables.service
+    
+    $ systemctl disable ip6tables.service
+
+    $ systemctl stop iptables.service
+    
+    $ systemctl disable iptables.service
+
+    $ systemctl stop NetworkManager.service
+    
+    $ systemctl disable NetworkManager.service
+ 
+    $ echo 1 >/proc/sys/net/ipv4/ip_forward
+
+    $ ip addr add 192.168.1.2/24 dev eth1
+    
+    $ ip neigh add 192.168.1.1 lladdr 00:00:00:00:0a:0a dev eth1
+    
+    $ ip link set dev eth1 up
+    
+    $ ip addr add 192.168.2.2/24 dev eth0
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a  dev eth0
+    
+    $ ip link set dev eth0 up
+
+2. Send traffic (30 seconds) to virtio1 and virtio2. According to the above script, traffic sent to virtio1 should have SRC IP (e.g. 192.168.1.1), DEST IP (e.g. 192.168.2.1), DEST MAC as virtio1's MAC and VLAN ID as virtio1's VLAN. Traffic sent to virtio2 has similar settings: SRC IP (e.g. 192.168.2.1), DEST IP (e.g. 192.168.1.1) and VLAN ID as virtio2's VLAN. Set the packet size from 64 to 1518 bytes as well as jumbo frames. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+        
+Test Case 3:  test_perf_virtio_vm2vm_dpdk_fwd_NIC_L2_switch_vhost-cuse
+======================================================================
+
+On host:
+
+1. Start up vhost-switch; vm2vm 2 means VM-to-VM communication through the NIC layer-2 switch::
+
+    taskset -c 1-3 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 2 --rx-desc-num 128
+   
+
+2. Start VM with vhost cuse as backend::
+
+    VM1 Startup:
+
+    taskset -c 4-6  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
+    -drive file=/home/img/dpdk1-vm1.img \
+    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -device pci-assign,host=08:10.0 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
+    -localtime -nographic
+
+    VM2 Startup:
+
+    taskset -c 7-9  /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm2 \
+    -drive file=/home/img/dpdk1-vm2.img \
+    -netdev tap,id=vhost4,ifname=tap_vhost4,vhost=on,script=no \
+    -device virtio-net-pci,netdev=vhost4,mac=52:54:00:00:00:02,id=net3,csum=off,gso=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -device pci-assign,host=08:10.2 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:02 \
+    -localtime -nographic
+  
+On guest: 
+
+3. Ensure the dpdk folder is copied to each guest with the same config file and build process as on the host. Then, in each VM, bind the virtio device and the VF to igb_uio and start testpmd; below are the steps for reference::
+
+    On VM1:
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --eth_peer=0,peer-mac(virtio in VM2) 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+    On VM2:
+
+    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 
+    
+    $ >set fwd mac
+    
+    $ >start tx_first
+
+4. After typing start tx_first in testpmd, the user can see the 2 virtio devices with their MAC and VLAN id registered in the vhost sample; the log is shown in the host's vhost-sample output.
+    
+5. Send traffic (30 seconds) to VF1; the flow should be VF1->Virtio1->Virtio2->VF2. Set the packet size from 64 to 1518 bytes as well as 3000-byte jumbo frames. The traffic sent to VF1 can have DEST MAC=VF1's MAC without a VLAN ID. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+
+
+Test Case 4:  test_perf_virtio_vm2vm_linux_fwd_NIC_L2_switch_vhost-cuse
+=======================================================================
+
+On host:
+
+Same steps as in TestCase3.
+
+On guest: 
+
+1. Set up routing on the guests::
+
+    on VM1::
+    
+    $ ip addr add 192.168.1.2/24 dev eth1 # Suppose eth1 is Virtio, eth2 is VF
+    
+    $ ip neigh add 192.168.1.1 lladdr 52:54:00:00:00:02 dev eth1 # Set the neighbor MAC to the next virtio's MAC
+    
+    $ ip link set dev eth1 up
+
+    $ ip addr add 192.168.2.2/24 dev eth2
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a  dev eth2
+    
+    $ ip link set dev eth2 up
+    
+    on VM2::
+
+    $ ip addr add 192.168.2.2/24 dev eth1 # Suppose eth1 is virtio, eth2 is VF
+    
+    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:0a:0a dev eth1
+    
+    $ ip link set dev eth1 up
+    
+    $ ip addr add 192.168.1.2/24 dev eth2
+    
+    $ ip neigh add 192.168.1.1 lladdr 90:e2:ba:36:99:3d dev eth2
+    
+    $ ip link set dev eth2 up
+
+2. Send traffic (30 seconds) to VF1; the flow should be VF1->Virtio1->Virtio2->VF2. Set the packet size from 64 to 1518 bytes as well as 3000-byte jumbo frames. The traffic sent to VF1 can have DEST MAC=VF1's MAC without a VLAN ID, DEST IP=192.168.1.1, SRC IP=192.168.2.1. Check the performance in Mpps. As to the functionality criteria, the received rate should not be zero. As to the performance criteria, check it with the developer or the design doc/PRD.
+    
+
+Test Case 5:  test_perf_virtio_one_vm_dpdk_fwd_vhost-user
+=========================================================
+
+This case is similar to TestCase1; just the backend changes from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase1. The command to launch the VM is different, see the example below::
+
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 -chardev socket,id=char1,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
+    
+Test Case 6:  test_perf_virtio_one_vm_linux_fwd_vhost-user
+==========================================================
+
+This case is similar to TestCase2; just the backend changes from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase2. The command to launch the VM is the same as in TestCase5.
+
+    
+Test Case 7:  test_perf_virtio_vm2vm_dpdk_fwd_NIC_L2_switch_vhost-user
+======================================================================
+
+This case is similar to TestCase3; just the backend changes from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase3. The command to launch the 2 VMs is different from vhost cuse, see the examples below for reference::
+
+    VM1 Startup:
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 -device pci-assign,host=08:10.0  -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01  -nographic
+
+    VM2 Startup:
+    <qemu-2.2.0_folder>/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm2 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm2.img -chardev socket,id=char0,path=<dpdk_folder>/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=52:54:00:12:34:22,netdev=mynet1 -device pci-assign,host=08:10.2  -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm2,id=net0,mac=00:00:00:00:00:10  -nographic
+
+Test Case 8:  test_perf_virtio_vm2vm_linux_fwd_NIC_L2_switch_vhost-user
+========================================================================
+
+This case is similar to TestCase4; just the backend changes from vhost cuse to vhost-user, so the dpdk on the host needs to be rebuilt for vhost-user. Other steps are the same as TestCase4. The command to launch the 2 VMs is the same as in TestCase7.
+    
-- 
1.9.3


