Originally posted by zimeiw <zimeiw at 163.com> on the dpdk-dev mailing list
hi,

The tcp/ip stack is developed based on dpdk.

1. tcp/ip stack and APP deployment

|-------|       |-------|       |-------|
|  APP  |       |  APP  |       |  APP  |
|       |       |       |       |       |
|       |       |       |       |       |
|-------|       |-------|       |-------|
    |               |               |
--------------------------------------------------
                netdpsock
    |               |               |
    fd              fd              fd
    |               |               |
--------------------------------------------------
                  netdp
    |               |               |
|-------|       |-------|       |-------|
|  TCP  |       |  TCP  |       |  TCP  |
|       |       |       |       |       |
|       |       |       |       |       |
|       |       |       |       |       |
|---------------------------------------|
|              IP/ARP/ICMP              |
|---------------------------------------|
|       |       |       |       |       |
|LCORE0 |       |LCORE1 |       |LCORE2 |
|-------|       |-------|       |-------|
    |               |               |
    ---------------RSS---------------
                    |
|---------------------------------------|
|                  NIC                  |
|---------------------------------------|

The NIC distributes packets to different lcores based on RSS, so packets of the same TCP flow are always handled on the same lcore. Each lcore has its own TCP stack, so no data is shared between lcores and no locks are needed. IP/ARP/ICMP handling is shared between the lcores.

When an APP process runs as a TCP server, it listens on only one lcore and accepts TCP connections from that lcore, so the number of APP processes should be larger than the number of lcores. The APP processes are deployed across the lcores automatically and evenly.

When an APP process runs as a TCP client, it can communicate with every lcore; the TCP connection is placed on the appropriate lcore automatically.

APP processes can bind the same port if reuseport is enabled, and they then accept TCP connections in round-robin order.

If the NIC doesn't support multiple queues or RSS, opendp_main.c should be enhanced to reserve one lcore that receives and sends packets on the NIC and distributes them to the lcores of the netdp TCP stack by software RSS (a rough sketch of such a dispatcher is given below, after point 2).

2. netdpsock is compatible with the BSD socket API, so it is easy to port an app to run on the netdp stack (see the example right after this point). nginx has already been ported to run on netdp with only a few code changes, link: https://github.com/opendp/dpdk-nginx. redis has also been ported, link: https://github.com/opendp/dpdk-redis
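To illustrate what "compatible with the BSD socket API" means in practice, here is a minimal sketch of an ordinary BSD-socket TCP server of the kind that should need little or no change to run on netdpsock. The port number, the SO_REUSEPORT usage and the echo behaviour are my own illustrative assumptions, not part of netdp itself.

/* Plain BSD-socket TCP server sketch: create, bind, listen, accept, echo.
 * Nothing here is netdp-specific; the same calls are what netdpsock exposes. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* With reuseport enabled, several APP processes can bind the same port
     * and new connections are handed out round robin (see point 1). */
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8089);            /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 128);

    for (;;) {
        int c = accept(fd, NULL, NULL);
        if (c < 0)
            continue;
        char buf[2048];
        ssize_t n = recv(c, buf, sizeof(buf), 0);
        if (n > 0)
            send(c, buf, (size_t)n, 0);     /* echo the data back */
        close(c);
    }
}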
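For the software-RSS fallback mentioned in point 1 (NICs without multi-queue/RSS), the reserved I/O lcore could hash the 4-tuple of each received packet and hand it to a per-lcore ring, so that a flow always lands on the same TCP-stack lcore. The sketch below only illustrates the idea; soft_rss_hash, soft_rss_dispatch, stack_ring and NB_STACK_LCORES are hypothetical names, not the actual opendp_main.c code.

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define NB_STACK_LCORES 3          /* example: LCORE0..LCORE2 as in the figure */

/* One single-producer/single-consumer ring per TCP-stack lcore (assumed). */
extern struct rte_ring *stack_ring[NB_STACK_LCORES];

/* Simple deterministic hash of the 4-tuple, so every packet of a given
 * flow always maps to the same stack lcore. */
static inline uint32_t
soft_rss_hash(uint32_t sip, uint32_t dip, uint16_t sport, uint16_t dport)
{
    uint32_t h = (sip ^ dip) ^ (((uint32_t)sport << 16) | dport);
    h ^= h >> 16;
    h *= 0x45d9f3bU;
    h ^= h >> 16;
    return h;
}

/* Called on the reserved I/O lcore for each packet received from the NIC.
 * Returns 0 if the packet was handed to a stack lcore, negative on a full
 * ring (the caller should then free the mbuf). */
static inline int
soft_rss_dispatch(struct rte_mbuf *pkt, uint32_t sip, uint32_t dip,
                  uint16_t sport, uint16_t dport)
{
    uint32_t idx = soft_rss_hash(sip, dip, sport, dport) % NB_STACK_LCORES;
    return rte_ring_enqueue(stack_ring[idx], pkt);
}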
3. Performance

one lcore, one http server, ab testing:

Concurrency Level:      500
Time taken for tests:   0.642 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      4530000 bytes
HTML transferred:       1890000 bytes
Requests per second:    46695.59 [#/sec] (mean)
Time per request:       10.708 [ms] (mean)
Time per request:       0.021 [ms] (mean, across all concurrent requests)
Transfer rate:          6885.78 [Kbytes/sec] received

one lcore, one nginx server, ab testing:

Concurrency Level:      500
Time taken for tests:   0.965 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      25320000 bytes
HTML transferred:       18360000 bytes
Requests per second:    31092.43 [#/sec] (mean)
Time per request:       16.081 [ms] (mean)
Time per request:       0.032 [ms] (mean, across all concurrent requests)
Transfer rate:          25626.97 [Kbytes/sec] received

one lcore, one redis server, redis-benchmark testing:

root at h163:~/dpdk-redis# ./src/redis-benchmark -h 2.2.2.2 -p 6379 -n 100000 -c 50 -q
PING_INLINE: 86655.11 requests per second
PING_BULK: 90497.73 requests per second
SET: 84317.03 requests per second
GET: 85106.38 requests per second
INCR: 86580.09 requests per second
LPUSH: 83263.95 requests per second
LPOP: 83612.04 requests per second
SADD: 85034.02 requests per second
SPOP: 86430.43 requests per second
LPUSH (needed to benchmark LRANGE): 84245.99 requests per second
LRANGE_100 (first 100 elements): 46948.36 requests per second
LRANGE_300 (first 300 elements): 19615.54 requests per second
LRANGE_500 (first 450 elements): 11584.80 requests per second
LRANGE_600 (first 600 elements): 10324.18 requests per second
MSET (10 keys): 66401.06 requests per second

Multi-core TCP performance has not been tested yet, for lack of test tools and environment.

For detailed test results, please refer to https://github.com/opendp/dpdk-odp

--
Best Regards,
zimeiw