Bug 487 - Worse performance with DPDK driver when MTU is set to 2022 or less
Summary: Worse performance with DPDK driver when MTU is set to 2022 or less
Status: UNCONFIRMED
Alias: None
Product: DPDK
Classification: Unclassified
Component: core
Version: 20.02
Hardware: All All
Importance: Low normal
Target Milestone: ---
Assignee: jeffguo
URL:
Depends on:
Blocks:
 
Reported: 2020-06-04 16:17 CEST by Vratko Polak
Modified: 2022-05-18 14:12 CEST
CC List: 4 users
Description Vratko Polak 2020-06-04 16:17:50 CEST
This is a copy of issue VPP-1876 [0], opened against the VPP project. See the description and comments there (or let me know and I will copy them here).

Performance of VPP (NIC: Intel-xxv710) depends on the MTU value set, but only when the DPDK driver is used. That does not prove the bug is in DPDK, but it is suspicious enough for me to open this bug against DPDK.

I tried to reproduce the behavior using other DPDK applications, but they tend to hit the line rate regardless of MTU. Is there a DPDK application that intentionally wastes some cycles, so I can try to confirm whether its performance depends on the MTU value?

[0] https://jira.fd.io/browse/VPP-1876
Comment 1 Georgii Tkachuk 2020-06-10 22:41:26 CEST
One additional observation:
with MTU > 2022, we see that calls to i40e_rx_burst (burst=256) can return 256 packets;
if MTU <= 2022, calls to i40e_rx_burst return at most 64 packets.
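
(For anyone trying to reproduce the observation above: a minimal sketch, assuming rte_eal_init() and port/queue setup have already happened elsewhere; poll_and_histogram is a hypothetical helper, not part of any DPDK example app. It histograms the per-call return value of rte_eth_rx_burst, which should make the 64-vs-256 split visible.)

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 256

    /* Histogram of how many packets each rte_eth_rx_burst() call returns.
     * Assumes rte_eal_init() has run and the port/queue are configured. */
    static void poll_and_histogram(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[BURST];
        uint64_t hist[BURST + 1] = {0};

        for (uint64_t calls = 0; calls < 1000000; calls++) {
            uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST);
            hist[nb]++;
            for (uint16_t i = 0; i < nb; i++)
                rte_pktmbuf_free(pkts[i]);
        }

        for (int i = 0; i <= BURST; i++)
            if (hist[i])
                printf("burst of %3d returned %" PRIu64 " times\n",
                       i, hist[i]);
    }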
Comment 2 Ajit Khaparde 2020-09-16 23:20:46 CEST
Jeff, can you take a look? Thanks.
Comment 3 Vratko Polak 2022-05-18 14:12:46 CEST
The fix [1] on the VPP side was to cap the nb_pkts argument at 32 when calling rte_eth_rx_burst. No idea whether that is the correct usage or just a workaround.

Either way, decreasing the importance of this bug.

[1] https://gerrit.fd.io/r/c/vpp/+/35620
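
For reference, a hedged sketch of what such a cap could look like (illustrative only, not the actual VPP patch; rx_burst_capped and RX_CAP are made-up names):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_CAP 32  /* per-call cap, matching the value in the VPP-side fix [1] */

    /* Gather up to nb_wanted packets, never asking the PMD for more
     * than RX_CAP at a time; stops early once the queue is drained.
     * A sketch of the workaround's shape, not the actual VPP code. */
    static uint16_t rx_burst_capped(uint16_t port_id, uint16_t queue_id,
                                    struct rte_mbuf **pkts,
                                    uint16_t nb_wanted)
    {
        uint16_t total = 0;

        while (total < nb_wanted) {
            uint16_t ask = nb_wanted - total;
            if (ask > RX_CAP)
                ask = RX_CAP;
            uint16_t got = rte_eth_rx_burst(port_id, queue_id,
                                            pkts + total, ask);
            total += got;
            if (got < ask)
                break;
        }
        return total;
    }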