[dpdk-dev] Why can't I get max line speed using ovs-dpdk and vhost-user ports?

Sam batmanustc at gmail.com
Thu Nov 9 11:11:52 CET 2017


Hi all,

I'm using ovs-dpdk with vhost-user ports and 10 VMs started by QEMU; the
topology is:

VM1  ...  VM10
 |          |
 +---OVS----+
      |
  dpdk-bond

The start command of ovs-vswitchd (from the ps output) is:

root      8969  200  0.1 107748696 231284 ?    S<Lsl Nov08 3318:18
> ovs-vswitchd --dpdk -c 0x40004 -n 4 --socket-mem 10240 --proc-type
> secondary -w 0000:01:00.0 -w 0000:01:00.1 --
> unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach --log-file
> --mlockall --no-chdir
> --log-file=/usr/local/var/log/openvswitch/ovs-vswitchd.log
> --pidfile=/usr/local/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
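
To see how the PMD threads are laid out, the standard OVS-DPDK commands below
can dump the rx-queue assignment and per-PMD statistics (a minimal sketch; the
pmd-cpu-mask value is only an example, not my real configuration):

    # show which PMD thread polls which rx queue
    ovs-appctl dpif-netdev/pmd-rxq-show

    # per-PMD packet and cycle statistics
    ovs-appctl dpif-netdev/pmd-stats-show

    # example only: spread PMD threads over more cores via a CPU mask
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC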


The QEMU command is:

root     48635  0.9  0.0 3550624 94492 ?       Sl   Nov07  30:37
> /usr/local/bin/qemu-system-x86_64_2.6.0 -enable-kvm -cpu
> qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+pat,+ss,+pclmulqdq,+xsave,level=13
> -machine pc,accel=kvm -chardev
> socket,id=hmqmondev,port=55924,host=127.0.0.1,nodelay,server,nowait -mon
> chardev=hmqmondev,id=hmqmon,mode=readline -rtc
> base=utc,clock=host,driftfix=none -usb -device usb-tablet -daemonize
> -nodefaults -nodefconfig -no-kvm-pit-reinjection -global
> kvm-pit.lost_tick_policy=discard -vga std -k en-us -smp 8 -name
> gangyewei-35 -m 2048 -boot order=cdn -vnc :24,password -drive
> file=/opt/cloud/workspace/disks/0ce6db23-627c-475d-b7ff-36266ba9492a,if=none,id=drive_0,format=qcow2,cache=none,aio=native
> -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5
> -drive
> file=/opt/cloud/workspace/disks/7f11a37e-28bb-4c54-b903-de2a5b28b284,if=none,id=drive_1,format=qcow2,cache=none,aio=native
> -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6
> -drive
> file=/opt/cloud/workspace/disks/f2a7e4fb-c457-4e60-a147-18e4fadcb4dc,if=none,id=drive_2,format=qcow2,cache=none,aio=native
> -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7
> -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 -drive
> id=ide0-cd0,media=cdrom,if=none -chardev
> socket,id=char-n-650b42fe,path=/usr/local/var/run/openvswitch/n-650b42fe,server
> -netdev type=vhost-user,id=n-650b42fe,chardev=char-n-650b42fe,vhostforce=on
> -device
> virtio-net-pci,netdev=n-650b42fe,mac=00:22:65:0b:42:fe,id=netdev-n-650b42fe,addr=0xf
> -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on
> -numa node,memdev=mem -pidfile
> /opt/cloud/workspace/servers/a6d3bb5f-fc6c-4891-864c-a0d947d84867/pid
> -chardev
> socket,path=/opt/cloud/workspace/servers/a6d3bb5f-fc6c-4891-864c-a0d947d84867/qga.sock,server,nowait,id=qga0
> -device virtio-serial -device
> virtserialport,chardev=qga0,name=org.qemu.guest_agent.0


The interface between ovs-dpdk and QEMU is netdev=n-650b42fe; on the ovs-dpdk
side it is a vhost-user type interface.
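
On the OVS side, such a port is added roughly like this (a sketch only; the
bridge name br0 is a placeholder, and depending on which side acts as the
vhost server the type may be dpdkvhostuser or dpdkvhostuserclient):

    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 n-650b42fe -- set Interface n-650b42fe type=dpdkvhostuser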

The dpdk-bond interface on the ovs-dpdk side is a DPDK bond port.
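
The bond was created along these lines (again a sketch; dpdk0/dpdk1 and the
bridge name are placeholders, and newer OVS releases want
options:dpdk-devargs=0000:01:00.0 style device arguments instead of the
dpdk0/dpdk1 naming):

    ovs-vsctl add-bond br0 dpdk-bond dpdk0 dpdk1 \
        -- set Interface dpdk0 type=dpdk \
        -- set Interface dpdk1 type=dpdk

    # check bond member state and traffic distribution
    ovs-appctl bond/show dpdk-bond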

Then I start the 10 VMs sending TCP traffic with iperf3. The NIC is a 10 Gbps
device, but I only see about 5600 Mbps through the dpdk-bond port. Why?
Has anyone tested VM traffic through an ovs-dpdk bond port, and if so, could
you please share the test report?
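
For reference, the traffic is generated roughly like this (the server address,
test duration and stream count below are examples, not the exact values I used):

    # on the receiving side
    iperf3 -s

    # inside each VM: TCP test, 60 s, 4 parallel streams
    iperf3 -c <server-ip> -t 60 -P 4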

