Testing scatter support for PMDs using testpmd

Boyer, Andrew Andrew.Boyer at amd.com
Fri Jan 26 16:04:07 CET 2024



On Jan 24, 2024, at 12:16 PM, Jeremy Spewock <jspewock at iol.unh.edu> wrote:

Hello maintainers,

While porting the first ethdev suite over to the new DTS framework, we found an inconsistency and were hoping someone could shed some light on it. The inconsistency is between Intel and Mellanox NICs: given the same options, one throws an error and refuses to start testpmd while the other works as expected.

In the original DTS suite for testing scattered packets, testpmd is started with the flags --max-packet-len=9000 and --mbuf-size=2048. This starts and works fine on Intel NICs, but the same flags on a Mellanox NIC produce the error shown below. testpmd also has an --enable-scatter flag, and when it is added the Mellanox NIC accepts the configuration and starts without error.

Our assumption is that this behavior should be consistent across NICs. Is there a reason one NIC allows testpmd to start without explicitly enabling scatter while the other doesn't? Should the flag always be required, and is it a bug that testpmd can start without it in the first place?

Here is the error provided when attempting to run on a Mellanox NIC:

mlx5_net: port 0 Rx queue 0: Scatter offload is not configured and no enough mbuf space(2048) to contain the maximum RX packet length(9000) with head-room(128)
mlx5_net: port 0 unable to allocate rx queue index 0
Fail to configure port 0 rx queues
Start ports failed
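
For reference, the error message suggests a check along these lines at Rx queue setup time: if a single mbuf (minus headroom) cannot hold the maximum Rx packet length and the scatter offload was not requested, setup fails. This is only an illustrative sketch, not the actual mlx5 source; the function name is hypothetical.

#include <errno.h>
#include <ethdev_driver.h>
#include <rte_mbuf.h>

/* Hypothetical sketch of the validation the error message describes. */
static int
check_rx_fits(struct rte_eth_dev *dev, struct rte_mempool *mp,
	      uint32_t max_rx_pktlen)
{
	/* e.g. 2048-byte mbufs with the default 128-byte headroom */
	uint32_t usable = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;

	if (max_rx_pktlen > usable &&
	    !(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
		return -EINVAL; /* "Scatter offload is not configured ..." */
	return 0;
}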

Thank you for any insight,
Jeremy

Hello Jeremy,

I can share a little bit of what I've seen while working on our devices.

The client can specify the max packet size, MTU, mbuf size, and whether to enable Rx or Tx scatter/gather (s/g) separately. For performance reasons we don't want to enable s/g if it's not needed.
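
In ethdev terms, those knobs look roughly like this (a minimal sketch; creating the mempool, which is where the mbuf size is set, is omitted):

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mtu = 9000,	/* max packet size via the MTU */
			.offloads = RTE_ETH_RX_OFFLOAD_SCATTER,	/* Rx s/g */
		},
		.txmode = {
			.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,	/* Tx s/g */
		},
	};

	/* 1 Rx queue, 1 Tx queue */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}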

Now, the client can easily set things up with a small MTU, start processing, stop the port, and then increase the MTU beyond what a single mbuf can hold. To avoid tearing down and rebuilding the queues on an MTU change just to enable s/g support, we automatically enable Rx s/g if the client presents mbufs that are too small to hold the max MTU.
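
Sketched in PMD terms (the names here are illustrative, not our actual driver code):

#include <ethdev_driver.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

/* If one mbuf can't hold the largest frame the MTU allows, turn Rx s/g
 * on for the client rather than failing queue setup. */
static void
maybe_enable_rx_sg(struct rte_eth_dev *dev, struct rte_mempool *mp)
{
	uint32_t usable = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
	uint32_t max_frame = dev->data->mtu + RTE_ETHER_HDR_LEN +
			     RTE_ETHER_CRC_LEN;

	if (max_frame > usable)
		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}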

Unfortunately the API to configure the Tx queues doesn't tell us anything about the mbuf size, and nothing stops the client from configuring Tx before Rx. So we can't reliably auto-enable Tx s/g, and it's possible to end up in a configuration where the Rx side produces chained mbufs that the Tx side can't handle.
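
You can see the asymmetry in the ethdev queue-setup prototypes: Rx setup is handed the mempool (and therefore the mbuf size), while Tx setup has no equivalent parameter:

int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
			   uint16_t nb_rx_desc, unsigned int socket_id,
			   const struct rte_eth_rxconf *rx_conf,
			   struct rte_mempool *mb_pool);

int rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
			   uint16_t nb_tx_desc, unsigned int socket_id,
			   const struct rte_eth_txconf *tx_conf);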

To avoid this misconfiguration, some versions of our PMD are set to fail to start if Rx s/g is enabled but Tx s/g isn't.
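
In other words, something like this in the start path (again illustrative, not our actual driver source):

#include <errno.h>
#include <ethdev_driver.h>

static int
my_dev_start(struct rte_eth_dev *dev)
{
	uint64_t rx = dev->data->dev_conf.rxmode.offloads;
	uint64_t tx = dev->data->dev_conf.txmode.offloads;

	/* Rx may hand the app chained mbufs that Tx can't send back out. */
	if ((rx & RTE_ETH_RX_OFFLOAD_SCATTER) &&
	    !(tx & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
		return -EINVAL;

	/* ... normal start path ... */
	return 0;
}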

Hope this helps,
Andrew
