[dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility

De Lara Guarch, Pablo pablo.de.lara.guarch at intel.com
Wed Jun 15 12:04:33 CEST 2016



> -----Original Message-----
> From: Wang, Zhihong
> Sent: Wednesday, June 15, 2016 12:08 AM
> To: dev at dpdk.org
> Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo;
> thomas.monjalon at 6wind.com
> Subject: [PATCH v3 0/5] vhost/virtio performance loopback utility
> 
> This patch series enables a vhost/virtio PMD performance loopback test in
> testpmd. All the features are for general usage.
> 
> The loopback test focuses on the maximum full-path packet forwarding
> performance between host and guest; it runs the vhost/virtio PMDs only,
> without introducing extra overhead.
> 
> Therefore, the main requirement is traffic generation, since there is no
> external packet generator such as IXIA to help.
> 
> In the current testpmd, iofwd is the best candidate for this loopback test
> because it is the fastest possible forwarding engine: start testpmd iofwd
> in the host with 1 vhost port, start testpmd iofwd in the connected guest
> with the corresponding virtio port, and these 2 ports form a forwarding
> loop: Host vhost Tx -> Guest virtio Rx -> Guest virtio Tx -> Host vhost Rx.
> 
> As for traffic generation, "start tx_first" injects a burst of packets into
> the loop.
> 
> However, 2 issues remain:
> 
>    1. If only 1 burst of packets is injected into the loop, there will
>       definitely be empty Rx operations. For example, when the guest virtio
>       port sends a burst to the host and then starts Rx immediately, the
>       packets are likely still being forwarded by the host vhost port and
>       haven't reached the guest yet.
> 
>       We need to fill up the ring to keep all PMDs busy.
> 
>    2. iofwd doesn't provide a retry mechanism, so if packet loss occurs,
>       there won't be a full burst in the loop.
> 
> To address these issues, this series:
> 
>    1. Adds a retry option in testpmd to prevent most packet losses (see the
>       sketch just below this list).
> 
>    2. Adds a parameter to make the tx_first burst number configurable.
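> 
> A minimal sketch of what such a per-burst retry could look like inside a
> forwarding engine; the knob names burst_tx_delay_time/burst_tx_retry_num
> and the helper io_fwd_with_retry are illustrative here, not the exact
> patch code:
> 
>     #include <rte_ethdev.h>
>     #include <rte_cycles.h>
>     #include <rte_mbuf.h>
> 
>     /* Illustrative retry knobs, set e.g. from the testpmd prompt. */
>     static uint32_t burst_tx_delay_time = 1;  /* us between retries */
>     static uint32_t burst_tx_retry_num = 64;  /* max retries per burst */
> 
>     static void
>     io_fwd_with_retry(uint8_t rx_port, uint16_t rx_q,
>                       uint8_t tx_port, uint16_t tx_q)
>     {
>             struct rte_mbuf *pkts[32];
>             uint16_t nb_rx, nb_tx, i;
>             uint32_t retry = 0;
> 
>             nb_rx = rte_eth_rx_burst(rx_port, rx_q, pkts, 32);
>             if (nb_rx == 0)
>                     return;
>             nb_tx = rte_eth_tx_burst(tx_port, tx_q, pkts, nb_rx);
>             /* Retry while the Tx ring is full instead of dropping,
>              * so the loop keeps a full burst in flight. */
>             while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) {
>                     rte_delay_us(burst_tx_delay_time);
>                     nb_tx += rte_eth_tx_burst(tx_port, tx_q,
>                                     &pkts[nb_tx], nb_rx - nb_tx);
>             }
>             /* Free anything that still could not be sent. */
>             for (i = nb_tx; i < nb_rx; i++)
>                     rte_pktmbuf_free(pkts[i]);
>     }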
> 
> Other related improvements include:
> 
>    1. Handle all rxqs when multiqueue is enabled: the current testpmd
>       forces a single core for each rxq, which causes inconvenience and
>       confusion.
> 
>       This change doesn't break anything: we can still force a single core
>       for each rxq by giving the same number of cores as the number of
>       rxqs.
> 
>       One example: a Red Hat engineer was doing a multiqueue test with 2
>       ports in the guest, each with 4 queues, and testpmd as the forwarding
>       engine in the guest. As usual, he used 1 core for forwarding, and as
>       a result he only saw traffic from port 0 queue 0 to port 1 queue 0.
>       A lot of emails and quite some time were then spent root-causing it,
>       and of course it was caused by this unreasonable testpmd behavior.
> 
>       Moreover, even when this behavior is understood, testing the above
>       case still requires 8 cores in a single guest to poll all the rxqs,
>       which is obviously too expensive.
> 
>       We have met quite a lot of cases like this; one recent example:
>       http://openvswitch.org/pipermail/dev/2016-June/072110.html
> 
>    2. Show topology at forwarding start: "show config fwd" also does this,
>       but showing it directly can reduce the possibility of
>       misconfiguration.
> 
>       In the case above, if testpmd had shown the topology at forwarding
>       start, all that debugging effort could probably have been saved.
> 
>    3. Add throughput information to the port statistics display for "show
>       port stats (port_id|all)" (see the sketch below).
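> 
> As a rough sketch of the throughput figures in item 3, a pps value can be
> derived from two consecutive stats reads; the static variables and the
> helper name show_port_pps below are illustrative, not the exact patch
> code:
> 
>     #include <stdio.h>
>     #include <inttypes.h>
>     #include <rte_cycles.h>
>     #include <rte_ethdev.h>
> 
>     /* Sample kept from the previous "show port stats" invocation. */
>     static uint64_t prev_pkts_rx, prev_pkts_tx, prev_cycles;
> 
>     static void
>     show_port_pps(uint8_t port_id)
>     {
>             struct rte_eth_stats stats;
>             uint64_t now, diff_cycles, hz = rte_get_tsc_hz();
> 
>             rte_eth_stats_get(port_id, &stats);
>             now = rte_rdtsc();
>             diff_cycles = prev_cycles ? now - prev_cycles : 0;
> 
>             /* pkts/interval * cycles/sec / cycles/interval = pkts/sec */
>             if (diff_cycles > 0)
>                     printf("Rx-pps: %"PRIu64"  Tx-pps: %"PRIu64"\n",
>                            (stats.ipackets - prev_pkts_rx) * hz / diff_cycles,
>                            (stats.opackets - prev_pkts_tx) * hz / diff_cycles);
> 
>             prev_cycles = now;
>             prev_pkts_rx = stats.ipackets;
>             prev_pkts_tx = stats.opackets;
>     }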
> 
> Finally, there's a documentation update.
> 
> Example of how to enable the vhost/virtio performance loopback test:
> 
>    1. Start testpmd in host with 1 vhost port only.
> 
>    2. Start testpmd in guest with only 1 virtio port connected to the
>       corresponding vhost port.
> 
>    3. "set fwd io retry" in testpmds in both host and guest.
> 
>    4. "start" in testpmd in guest.
> 
>    5. "start tx_first 16" in testpmd in host.
> 
> Then use "show port stats all" to monitor the performance.
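> 
> For reference, the retry behaviour can also be tuned from the testpmd
> prompt before starting forwarding; the delay/retry values below are only
> an example:
> 
>     testpmd> set burst tx delay 1 retry 64
>     testpmd> set fwd io retry
>     testpmd> start tx_first 16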
> 
> --------------
> Changes in v2:
> 
>    1. Add retry as an option for existing forwarding engines except rxonly.
> 
>    2. Minor code adjustments and a more detailed patch description.
> 
> --------------
> Changes in v3:
> 
>    1. Add more details in commit log.
> 
>    2. Give variables more meaningful names.
> 
>    3. Fix a typo in existing doc.
> 
>    4. Rebase the patches.
> 
> 
> Zhihong Wang (5):
>   testpmd: add retry option
>   testpmd: configurable tx_first burst number
>   testpmd: show throughput in port stats
>   testpmd: handle all rxqs in rss setup
>   testpmd: show topology at forwarding start
> 
>  app/test-pmd/Makefile                       |   1 -
>  app/test-pmd/cmdline.c                      | 116 ++++++++++++++++++-
>  app/test-pmd/config.c                       |  74 ++++++++++--
>  app/test-pmd/csumonly.c                     |  12 ++
>  app/test-pmd/flowgen.c                      |  12 ++
>  app/test-pmd/icmpecho.c                     |  15 +++
>  app/test-pmd/iofwd.c                        |  22 +++-
>  app/test-pmd/macfwd-retry.c                 | 167 ----------------------------
>  app/test-pmd/macfwd.c                       |  13 +++
>  app/test-pmd/macswap.c                      |  12 ++
>  app/test-pmd/testpmd.c                      |  12 +-
>  app/test-pmd/testpmd.h                      |  11 +-
>  app/test-pmd/txonly.c                       |  12 ++
>  doc/guides/testpmd_app_ug/run_app.rst       |   1 -
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  18 +--
>  15 files changed, 299 insertions(+), 199 deletions(-)
>  delete mode 100644 app/test-pmd/macfwd-retry.c
> 
> --
> 2.5.0

Series-acked-by: Pablo de Lara <pablo.de.lara.guarch at intel.com>


