Bug 1058 - Mellanox mlx5_pmd is reporting incorrect number of maximum rx and tx queues supported from rte_eth_dev_info_get
Summary: Mellanox mlx5_pmd is reporting incorrect number of maximum rx and tx queues supported from rte_eth_dev_info_get
Status: UNCONFIRMED
Alias: None
Product: DPDK
Classification: Unclassified
Component: ethdev
Version: 21.11
Hardware: x86 Linux
Importance: Normal normal
Target Milestone: ---
Assignee: Asaf Penso
URL:
Depends on:
Blocks:
 
Reported: 2022-07-25 03:54 CEST by Sahithi Singam
Modified: 2022-08-28 23:06 CEST
CC List: 1 user



Attachments

Description Sahithi Singam 2022-07-25 03:54:20 CEST
DPDK is incorrectly reporting the maximum number of Rx and Tx queues supported as 1024, whereas Linux correctly reports them in the ethtool output.

The maximum number of Rx queues supported on Mellanox ConnectX-6 Dx SR-IOV VFs was reported as 1024 when used with DPDK, but only 15 when used with Linux. The behavior is the same when DPDK is used on Mellanox ConnectX-4 based SR-IOV Virtual Functions.
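For reference, a minimal sketch of where the 1024 figure comes from (this snippet is illustrative and not part of the original report; it assumes port 0 is the mlx5 VF under test): the value is whatever the PMD fills into struct rte_eth_dev_info when rte_eth_dev_info_get() is called, which is the same source testpmd uses for the "Max possible RX/TX queues" lines further down.

/* Illustrative sketch only: query the per-port queue maxima via ethdev. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	struct rte_eth_dev_info dev_info;
	uint16_t port_id = 0; /* assumption: port 0 is the mlx5 VF under test */

	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0) {
		fprintf(stderr, "rte_eth_dev_info_get failed\n");
		return 1;
	}

	/* These are the values testpmd prints as "Max possible RX/TX queues". */
	printf("max_rx_queues=%u max_tx_queues=%u\n",
	       dev_info.max_rx_queues, dev_info.max_tx_queues);

	rte_eal_cleanup();
	return 0;
}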


~ # ethtool -i eth1
driver: mlx5_core
version: 5.0-0
firmware-version: 22.31.1660 (ORC0000000007)
expansion-rom-version: 
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

~ # ethtool -l eth1
Channel parameters for eth1:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       15
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       15

opt/dpdk-testpmd -l 2-7 -m 4 --allow 0000:00:04.0 -- --portmask=0x1 --mbcache=64 --forward-mode=io --eth-peer=0,02:00:17:0A:4B:FB --rxq=100 --txq=100 -i
EAL: Detected CPU lcores: 16
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No available 1048576 kB hugepages reported
EAL: Probe PCI driver: mlx5_pci (15b3:101e) device: 0000:00:04.0 (socket 0)
mlx5_net: cannot bind mlx5 socket: No such file or directory
mlx5_net: Cannot initialize socket: No such file or directory
mlx5_net: DV flow is not supported
TELEMETRY: No legacy callbacks, legacy socket not created
Set io packet forwarding mode
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=78848, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
mlx5_net: port 0 queue 19 empty mbuf pool
mlx5_net: port 0 Rx queue allocation failed: Cannot allocate memory
Fail to start port 0: Cannot allocate memory
Please stop the ports first
Done
testpmd> show port info 0 

********************* Infos for port 0  *********************
MAC address: 02:00:17:07:42:17
Device name: 0000:00:04.0
Driver name: mlx5_pci
Firmware-version: 22.31.1660
Devargs: 
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 50 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 128
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 512
Supported RSS offload flow types:
  ipv4
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-other
  ipv6
  ipv6-frag
  ipv6-tcp
  ipv6-udp
  ipv6-other
  ipv6-ex
  ipv6-tcp-ex
  ipv6-udp-ex
  user defined 60
  user defined 61
  user defined 62
  user defined 63
Minimum size of RX buffer: 32
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 65280
Current number of RX queues: 100
Max possible RX queues: 1024
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 100
Max possible TX queues: 1024
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x10( FLOW_SHARED_OBJECT_KEEP )
Switch name: 0000:00:04.0
Switch domain Id: 0
Switch Port Id: 65535
testpmd>
Comment 1 Asaf Penso 2022-07-25 08:27:37 CEST
Hello,

To make sure I understand your question: you see that ethtool shows 15 while DPDK shows 1024, is that correct?

From my limited knowledge of ethtool and the kernel, the value of 15 is the result of a calculation involving the number of MSI-X vectors and the number of cores.
Since the kernel works in interrupt mode, the number of channels that can be opened depends on those.

DPDK works in polling mode, so there is no dependency on the MSI-X vectors or the CPUs.
We read the value directly from the NIC capabilities.

To summarize, these two numbers don't have to be the same, and I do not see an issue.
If I misunderstood, please let me know.
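As a hedged illustration of that point (not from the thread; the configure_port() helper below is hypothetical): the dev_info maxima describe a capability ceiling rather than an allocation guarantee, so an application would typically clamp its requested queue counts to the advertised maxima and still check the configure return code, since allocation can fail later, as the "Cannot allocate memory" failure in the testpmd log above shows.

/* Hypothetical helper: clamp queue counts to the advertised maxima. */
#include <rte_common.h>
#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t want_rxq, uint16_t want_txq)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	uint16_t nb_rxq, nb_txq;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Clamp the request to what the PMD advertises. */
	nb_rxq = RTE_MIN(want_rxq, dev_info.max_rx_queues);
	nb_txq = RTE_MIN(want_txq, dev_info.max_tx_queues);

	/* Configuration can still fail if resources are insufficient. */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}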
Comment 2 Asaf Penso 2022-08-28 23:06:57 CEST
Hello Sahithi,
Have you seen my reply?
