[dpdk-users] DPDK OVS does not work properly in a multi-flow environment.

Heung Sik Choi hschoi at os.korea.ac.kr
Thu Jun 15 04:55:31 CEST 2017


Hi,



I want to measure the forwarding performance of DPDK OVS in the setup shown at the
URL below.

https://drive.google.com/a/os.korea.ac.kr/file/d/0BxEx0xE0gw2ETjhkaUpzN3YxSVU/view?usp=sharing



A similar experiment appears in the 'Intel® Open Network Platform Release 2.1
Performance Test Report', along with its results. Those results show that when
DPDK OVS uses multiple poll-mode (PMD) threads and receives multi-flow traffic,
it achieves very high throughput; in other words, it scales well in a
multi-flow environment.





However, I could not reproduce this result in my environment. When I generated
10 types of packets (10 flows) from the DPDK Pktgen machine, the throughput
dropped by 2 Gbps, whereas with a single flow and multi-threading the
throughput is 8 Gbps on my machine. This is a big problem for me.


What could be causing this problem in my experiment?


Please help me.



My setup is as follows:



Hardware (both machines are identical)

 - CPU: 2 x Xeon E5-2630 v2 (two NUMA sockets)

 - Memory: 48 GB (total of 1 GB hugepages: 16 GB)

 - NIC: Intel 82599ES 10 GbE



Software (DPDK OVS machine)

 - OS: Ubuntu 16.04

 - Kernel: 4.8.0-54-generic (default kernel)

 - DPDK 2.2, OVS 2.5.2 (installed from the Ubuntu package, e.g. apt-get
install openvswitch-switch-dpdk)

 - Flow table setup script:


"i=0

j=1

while [ $i != 1 ]

do

        while [ $j != 255 ]

        do

                ovs-ofctl add-flow br0
ip,nw_dst=192.0."$i"."$j",actions=mod_dl_dst:90:e2:ba:5b:88:2c,in_port



                j=$((j+1))

        done

        j=1

        i=$((i+1))

done"



 - Commands used to enable multiple queues & threads:

ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3f

ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4

ovs-vsctl set Open_vSwitch . other_config:n-dpdk-txqs=4
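
To check whether the four rx queues are really being polled by different PMD
threads, the per-thread statistics can be dumped (a sketch using the
dpif-netdev appctl commands that OVS 2.5 provides; the exact output format
may differ between versions):

# reset the counters, run the test, then inspect them
ovs-appctl dpif-netdev/pmd-stats-clear
# shows packets and polling cycles per PMD thread
ovs-appctl dpif-netdev/pmd-stats-show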





Software (Pktgen machine)

- Multi-flow configuration (Lua):

pktgen.dst_mac("all","start","90:e2:ba:5a:78:30");

pktgen.dst_mac("all","inc","00:00:00:00:00:00");

pktgen.dst_mac("all","min","90:e2:ba:5a:78:30");

pktgen.dst_mac("all","max","90:e2:ba:5a:78:30");



pktgen.src_mac("all","start","90:e2:ba:5b:88:2c");

pktgen.src_mac("all","inc","00:00:00:00:00:00");

pktgen.src_mac("all","min","90:e2:ba:5b:88:2c");

pktgen.src_mac("all","max","90:e2:ba:5b:88:2c");



--pktgen.delay(1000);

pktgen.dst_ip("all","start","192.0.0.1");

pktgen.dst_ip("all","inc","0.0.0.1");

pktgen.dst_ip("all","min","192.0.0.1");

pktgen.dst_ip("all","max","192.0.0.10");



--pktgen.delay(1000);

pktgen.src_ip("all","start","10.0.0.1");

pktgen.src_ip("all","inc","0.0.0.0");

pktgen.src_ip("all","min","10.0.0.1");

pktgen.src_ip("all","max","10.0.0.1");



--pktgen.delay(1000);

pktgen.src_port("all","start",9);

pktgen.src_port("all","inc",0);

pktgen.src_port("all","min",9);

pktgen.src_port("all","max",9);



--pktgen.delay(1000);

pktgen.dst_port("all","start",10);

pktgen.dst_port("all","inc",0);

pktgen.dst_port("all","min",10);

pktgen.dst_port("all","max",10);



--pktgen.delay(1000);

pktgen.pkt_size("all","start",64);

pktgen.pkt_size("all","inc",0);

pktgen.pkt_size("all","min",64);

pktgen.pkt_size("all","max",64);



pktgen.set_proto("all","tcp");

pktgen.set_type("all","ipv4");



- Using 10 cores:

./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x7ff -n 4 -- -f
multiflow.lua -N -m "[1-5:6-10].0"



Please let me know if you have any insights.

