[dpdk-users] l2fwd performance in VM with SR-IOV

Furong WBAHACER at 126.com
Sat Dec 19 07:23:20 CET 2015


Hello, everybody.
     I have measured the performance of the example/l2fwd application in a VM with SR-IOV.
     My experiment server: CPU: Intel Xeon E5-4603 v2 @ 2.20GHz (32 cores), NIC: Intel 82599ES 10G, OS: Ubuntu 14.04.3.
     I started the VM with this command:

         # qemu-system-x86_64 -enable-kvm -cpu host -m 4G -smp 4 -net none \
               -device vfio-pci,host=<vf1-pcie-addr> \
               -device vfio-pci,host=<vf2-pcie-addr> \
               -hda vm.img -vnc :1
     In the VM:
         I bound vf1 & vf2 to igb_uio, then started the example/l2fwd application in the VM.
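         For reference, the steps inside the VM were roughly as follows; the driver path, tool location, and core masks are illustrative for a DPDK 2.x build, and <vf1-bdf>/<vf2-bdf> stand for the VFs' PCI addresses as seen inside the guest:

             # modprobe uio
             # insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
             # ./tools/dpdk_nic_bind.py --bind=igb_uio <vf1-bdf> <vf2-bdf>
             # ./examples/l2fwd/build/l2fwd -c 0x3 -n 4 -- -p 0x3

         (-c 0x3 runs l2fwd on two cores, -n 4 sets the number of memory channels, and -p 0x3 enables both ports.)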
     Then I started pktgen on another server (same hardware & OS as this one) to send small 64-byte packets.
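         The pktgen side looked roughly like this (the binary path and core mapping are illustrative):

             # ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x1f -n 4 -- -P -m "[1:2].0, [3:4].1"
             Pktgen> set all size 64
             Pktgen> start all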
     The results were:
         1. When I sent packets with pktgen from only one port, the throughput (measured by the pktgen RX/TX rates) was 7.0 Gbps.
         2. When I sent packets from both ports, the total throughput was 7.2 Gbps (3.6 Gbps per port).

     But I have also measured l2fwd performance on the host with SR-IOV (binding vf1 & vf2 to vfio-pci and starting l2fwd on the host).
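         The host-side steps were roughly (again illustrative, using the same core masks as in the VM):

             # modprobe vfio-pci
             # ./tools/dpdk_nic_bind.py --bind=vfio-pci <vf1-pcie-addr> <vf2-pcie-addr>
             # ./examples/l2fwd/build/l2fwd -c 0x3 -n 4 -- -p 0x3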
     The result:
         When I sent packets from both ports, the total throughput was 14.4 Gbps (7.2 Gbps per port).

     My question: when I run l2fwd in a VM, can I achieve performance similar to the host? Or are there methods to tune the performance?
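     For example, I wonder whether pinning the QEMU vCPU threads to dedicated host cores (taskset) and backing the guest RAM with host hugepages (-mem-path/-mem-prealloc) would help. A sketch of what I mean; the core list 2-5 and the 4 GB reservation of 2 MB hugepages are hypothetical:

         # echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
         # taskset -c 2-5 qemu-system-x86_64 -enable-kvm -cpu host -m 4G -smp 4 \
               -mem-path /dev/hugepages -mem-prealloc -net none \
               -device vfio-pci,host=<vf1-pcie-addr> \
               -device vfio-pci,host=<vf2-pcie-addr> \
               -hda vm.img -vnc :1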

     Thanks a lot!
     Furong


