Bug 10 - [Testpmd] NUMA, speed issue
Summary: [Testpmd] NUMA, speed issue
Status: CONFIRMED
Alias: None
Product: DPDK
Classification: Unclassified
Component: testpmd
Version: unspecified
Hardware: x86 Linux
Importance: Normal normal
Target Milestone: ---
Assignee: Anas
URL:
Depends on:
Blocks:
 
Reported: 2018-01-17 14:45 CET by Anas
Modified: 2018-08-29 20:14 CEST (History)
CC List: 3 users




Description Anas 2018-01-17 14:45:46 CET
Hello, 

I need help handling packets with DPDK on an Intel Xeon machine.
When I launch testpmd, I'm wondering whether the output traces below are
blocking for checking bandwidth:

>./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2

EAL: Detected 8 lcore(s)
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:10.1 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.3 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.5 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
Interactive-mode selected
previous number of forwarding ports 2 - changed to number of configured ports 1
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2240, socket=0

Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained

Configuring Port 0 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 0: 00:A0:C9:23:45:69
Configuring Port 1 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 1: 00:A0:C9:23:45:6A
Checking link statuses...
Port 0 Link Up - speed 0 Mbps - full-duplex
Port 1 Link Up - speed 0 Mbps - full-duplex


On my side, the traces seem to show NUMA, speed, and hugepage issues.
Do you have any idea?

Thank you
Comment 1 Qian 2018-02-07 15:08:25 CET
It looks like only a speed issue. How do you know there are NUMA and hugepage issues?
"1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size" doesn't mean a hugepage problem; the app just checks the default folder for a hugetlbfs mount. If hugepages were really broken, the app wouldn't start at all. 
As to NUMA, since you are using lcores 0-3, it will use socket 0 for memory allocation and core allocation. 
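If you want to double-check the NUMA placement yourself, sysfs and lscpu show it (PCI address taken from the EAL probe log above):

# NUMA node of one of the fm10k devices (-1 means single-socket / no NUMA info)
cat /sys/bus/pci/devices/0000:01:00.0/numa_node
# CPU-to-NUMA-node layout, to confirm lcores 0-3 are on socket 0
lscpu | grep -i numa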
What are the fm10k device firmware and DPDK versions?
Comment 2 xiao.w.wang@intel.com 2018-02-08 06:01:54 CET

fm10k can be used in PCIe x8 or x4 mode; the speed is ~50 Gbps per Gen3 x8 PCIe interface and ~25 Gbps per x4 interface. The speed can also be restricted by TestPoint (the switch manager).

Currently the driver leaves the 'speed' field at 0 because it is not aware of the x8 or x4 configuration.
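
To see what link the hardware actually negotiated, the PCIe link status can be read with lspci (PCI address from the EAL log; run as root for the full capability dump):

lspci -s 01:00.0 -vv | grep -i lnksta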
Comment 3 Ajit Khaparde 2018-08-29 20:14:35 CEST
nounoussma@hotmail.com,
Are you satisfied with the explanation?
Do you need anything more on this issue?
If not, can you close it? 

Thanks
Ajit
