This is a copy of issue VPP-1876 [0], opened against the VPP project. See the description and comments there (or let me know and I will copy them here). The performance of VPP (NIC: Intel-xxv710) depends on the MTU value set, but only when the DPDK driver is used. That does not prove the bug is in DPDK, but it is suspicious enough for me to open this bug against DPDK. I tried to reproduce the behavior with other DPDK applications, but they tend to hit line rate regardless of MTU. Is there a DPDK application that intentionally wastes some cycles, so I can check whether its performance depends on the MTU value? [0] https://jira.fd.io/browse/VPP-1876
One additional observation: with MTU <= 2022, calls to i40e_rx_burst (nb_pkts=256) can return up to 256 packets; with MTU > 2022, calls to i40e_rx_burst return at most 64 packets.
Jeff, can you take a look? Thanks.
The fix [1] on the VPP side was to cap the nb_pkts argument at 32 when calling rte_eth_rx_burst. No idea whether that is the correct usage or just a workaround. Either way, decreasing the importance of this bug. [1] https://gerrit.fd.io/r/c/vpp/+/35620