[dpdk-dev] [PATCH v5 0/4] add virtio offload support in us-vhost

Xu, Qian Q qian.q.xu at intel.com
Fri Nov 13 08:35:35 CET 2015


Tested-by: Qian Xu <qian.q.xu at intel.com>

- Test Commit: 6b6a94ee17d246a0078cc83257f522d0a6db5409
- OS/Kernel: Fedora 21/4.1.8
- GCC: gcc (GCC) 4.9.2 20141101 (Red Hat 4.9.2-1)
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Target: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Total 2 cases, 2 passed. DPDK vhost + legacy virtio work well with NIC TSO offload and with VM-to-VM iperf forwarding.

Test Case 1: DPDK vhost user + virtio-net one VM fwd TSO
========================================================

HW preparation: Connect two ports directly. In our case, 81:00.0 (port1) and 81:00.1 (port2) are connected back to back. Port1 is bound to igb_uio for the vhost sample to use, while port2 stays on its kernel driver.
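
To get port1 onto igb_uio, the usual binding sequence for a DPDK tree of this vintage is sketched below (the module path and bind-script location are assumptions from the default build layout; adjust the PCI address to your setup)::

    modprobe uio
    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    ./tools/dpdk_nic_bind.py --bind=igb_uio 81:00.0
    ./tools/dpdk_nic_bind.py --status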

SW preparation: Change one line of the vhost sample and rebuild::

    /* in function virtio_tx_route(), change */
    m->vlan_tci = vlan_tag;
    /* to */
    m->vlan_tci = 1000;
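
After the edit, rebuild the sample; a minimal sketch, assuming the usual RTE_SDK/RTE_TARGET example build flow::

    export RTE_SDK=/path/to/dpdk                 # adjust to your tree
    export RTE_TARGET=x86_64-native-linuxapp-gcc
    make -C examples/vhost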

1. Launch the vhost sample with the command below. --socket-mem must give memory to the socket where the PCI port resides; in our case the PCI BDF is 81:00.0, so memory is assigned to socket 1. For the TSO/CSUM test, set "--mergeable 1 --tso 1 --csum 1".::

    taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0 --tso 1 --csum 1

2. Launch VM1::

    taskset -c 21-22 \
    qemu-system-x86_64 -name us-vhost-vm1 \
     -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
     -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img  \
     -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on  \
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
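
Once VM1 is up, it is worth double-checking from inside the guest that the offloads were really negotiated; ethX stands for the guest's virtio interface::

    ethtool -k ethX | grep -E 'checksumming|tcp-segmentation-offload'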

3. On the host, configure port2; you will then see an interface called ens260f1.1000.::
   
    ifconfig ens260f1
    vconfig add ens260f1 1000
    ifconfig ens260f1.1000 1.1.1.8
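
vconfig comes from the legacy VLAN tools; on hosts without it, the equivalent iproute2 commands are::

    ip link add link ens260f1 name ens260f1.1000 type vlan id 1000
    ip link set ens260f1 up
    ip link set ens260f1.1000 up
    ip addr add 1.1.1.8/24 dev ens260f1.1000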

4. On VM1, set the virtio IP and verify connectivity::

    ifconfig ethX 1.1.1.2
    ping 1.1.1.8   # once virtio and port2 can ping each other, the ARP tables are set up automatically
    
5. On the host, run `iperf -s -i 1`; in the guest, run `iperf -c 1.1.1.8 -i 1 -t 60`, then check with tcpdump that the packets carry a 65160-byte payload.
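
One way to inspect the packet sizes is a capture on the receiving interface (a sketch; 5001 is iperf's default port)::

    tcpdump -i ens260f1.1000 -nn -v -c 20 tcp port 5001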

6. The iperf throughput should be relatively stable at ~9.4 Gbits/s.

Test Case 2: DPDK vhost user + virtio-net VM2VM=1 fwd TSO
=========================================================

HW preparation: No special setup needed. 

1. Launch the vhost sample with the command below. --socket-mem must give memory to the socket where the PCI port resides; in our case the PCI BDF is 81:00.0, so memory is assigned to socket 1. For the TSO/CSUM test, set "--mergeable 1 --tso 1 --csum 1 --vm2vm 1".::

    taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 1 --tso 1 --csum 1

2. Launch VM1 and VM2::

    taskset -c 21-22 \
    qemu-system-x86_64 -name us-vhost-vm1 \
     -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
     -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img  \
     -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on  \
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic

    taskset -c 23-24 \
    qemu-system-x86_64 -name us-vhost-vm2 \
     -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
     -smp cores=2,sockets=1 -drive file=/home/img/vm1.img  \
     -chardev socket,id=char1,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2  \
     -netdev tap,id=ipvm1,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:02 -nographic
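
Note that VM2's virtio device is launched without explicit offload flags, so QEMU's defaults apply; the negotiated state can be confirmed inside each guest the same way as in case 1::

    ethtool -k ethX | grep -E 'checksumming|tcp-segmentation-offload'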

3. On VM1, set the virtio IP and the static ARP entry::

    ifconfig ethX 1.1.1.2
    arp -s 1.1.1.8 52:54:00:00:00:02
    arp            # check that the ARP table is complete and correct

4. On VM2, set the virtio IP and the static ARP entry::

    ifconfig ethX 1.1.1.8
    arp -s 1.1.1.2 52:54:00:00:00:01
    arp            # check that the ARP table is complete and correct
 
5. Ensure virtio1 can ping virtio2. Then in VM1 run `iperf -s -i 1`; in VM2 run `iperf -c 1.1.1.2 -i 1 -t 60`, then check with tcpdump that the packets carry a 65160-byte payload.
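
As in case 1, the packet sizes can be inspected on the iperf server side, here with a capture inside VM1 (ethX is the guest's virtio interface)::

    tcpdump -i ethX -nn -v -c 20 tcp port 5001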

Thanks
Qian


-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jijiang Liu
Sent: Thursday, November 12, 2015 8:07 PM
To: dev at dpdk.org
Subject: [dpdk-dev] [PATCH v5 0/4] add virtio offload support in us-vhost

Adds virtio offload support in us-vhost.
 
The patch set adds the feature negotiation of checksum and TSO between us-vhost and a vanilla Linux virtio guest, adds support for these offload features in the vhost lib, and changes the vhost sample to test them.

v5 changes:
  Add clearer descriptions to explain these changes.
  Reset the 'virtio_net_hdr' value in the virtio_enqueue_offload() function.
  Reorganize patches.
  
 
v4 changes:
  Remove the virtio-net change; keep only the vhost changes.
  Add guest TX offload capabilities to support the VM to VM case.
  Split the cleanup code into a separate patch.
 
v3 changes:
  Rebase to the latest code.
 
v2 changes:
  Fill in virtio device information for TX offloads.


Jijiang Liu (4):
  add vhost offload capabilities
  remove ipv4_hdr structure from vhost sample.
  add guest offload setting in the vhost lib.
  change vhost application to test checksum and TSO for VM to NIC case

 examples/vhost/main.c         |  120 ++++++++++++++++++++++++++++-----
 lib/librte_vhost/vhost_rxtx.c |  150 ++++++++++++++++++++++++++++++++++++++++-
 lib/librte_vhost/virtio-net.c |    9 ++-
 3 files changed, 259 insertions(+), 20 deletions(-)

-- 
1.7.7.6


