[dpdk-dev] [RFC PATCH 0/2] performance utility in testpmd

Thomas Monjalon thomas.monjalon at 6wind.com
Thu Apr 21 11:54:12 CEST 2016


2016-04-20 18:43, Zhihong Wang:
> This RFC patch proposes a general purpose forwarding engine in testpmd
> namely "portfwd", to enable performance analysis and tuning for poll mode
> drivers in vSwitching scenarios.
> 
> 
> Problem statement
> -----------------
> 
> vSwitching is often more I/O bound, since it involves many
> LLC/cross-core memory accesses.
> 
> In order to reveal memory/cache behavior in real usage scenarios and enable
> efficient performance analysis and tuning for vSwitching, DPDK needs a
> sample application that supports traffic flow close to real deployment,
> e.g. multi-tenancy, service chaining.
> 
> There is currently a vhost sample application that enables simple vSwitching
> scenarios, but it comes with several limitations:
> 
>    1) Traffic flow is too simple and not flexible
> 
>    2) Switching based on MAC/VLAN only
> 
>    3) Not enough performance metrics
> 
> 
> Proposed solution
> -----------------
> 
> The testpmd sample application is a good choice: it is a powerful poll mode
> driver management framework that hosts various forwarding engines.

Not sure it is a good choice.
The goal of testpmd is to test every PMD features.
How far can we go in adding some stack processing while keeping it
easily maintainable?

> Now with the vhost pmd feature, it can also handle vhost devices; only a
> new forwarding engine is needed to make use of it.

Why is a new forwarding engine needed for vhost?

> portfwd is implemented to this end.
> 
> Features of portfwd:
> 
>    1) Build up traffic from simple rx/tx to complex scenarios easily
> 
>    2) Rich performance statistics for all ports

Have you checked CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES and
CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS?
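For reference, both options live in the DPDK build configuration (file path
assumed from a typical DPDK tree of that time; testpmd must be rebuilt after
changing them):

```shell
# config/common_base (assumed location)
CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y   # per-engine CPU cycle accounting
CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=y   # RX/TX burst size histograms
```

With these set, testpmd reports the extra counters when forwarding stops.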

>    3) Core affinity manipulation
> 
>    4) Commands for run time configuration
> 
> Notice that portfwd has fair performance, but it's not for getting the
> "maximum" numbers:
> 
>    1) It buffers packets for burst send efficiency analysis, which increases
>       latency
> 
>    2) It touches the packet header and collects performance statistics, which
>       adds overhead
> 
> These "extra" overheads are actually what happens in real applications.
[...]
> Implementation details
> ----------------------
> 
> To enable flexible traffic flow setup, each port has 2 ways to forward
> packets in portfwd:

Shouldn't that be 2 forwarding engines?
Please first describe the existing engines to help in making a decision.

>    1) Forward based on dst ip
[...]
>    2) Forward to a fixed port
[...]

