[dpdk-dev] [Bug 10] [Testpmd] NUMA, speed issue

bugzilla at dpdk.org bugzilla at dpdk.org
Wed Jan 17 14:45:46 CET 2018


https://dpdk.org/tracker/show_bug.cgi?id=10

            Bug ID: 10
           Summary: [Testpmd] NUMA, speed issue
           Product: DPDK
           Version: unspecified
          Hardware: x86
                OS: All
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev at dpdk.org
          Reporter: nounoussma at hotmail.com
  Target Milestone: ---

Hello, 

I need help handling packets with DPDK on an Intel Xeon chip.
When I launch testpmd, I am wondering whether the output traces below are
blocking for checking the bandwidth:

>./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2

EAL: Detected 8 lcore(s)
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size
EAL: Probing VFIO support...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id
0
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:10.1 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.3 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.5 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
Interactive-mode selected
previous number of forwarding ports 2 - changed to number of configured ports 1
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2240,
socket=0

Warning! Cannot handle an odd number of ports with the current port topology.
Configuration must be changed to have an even number of ports, or relaunch
application with --port-topology=chained

Configuring Port 0 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 0: 00:A0:C9:23:45:69
Configuring Port 1 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 1: 00:A0:C9:23:45:6A
Checking link statuses...
Port 0 Link Up - speed 0 Mbps - full-duplex
Port 1 Link Up - speed 0 Mbps - full-duplex


On one side, the traces show that there are NUMA, speed and hugepage issues.
Do you have any idea?
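
In case it helps, here is what I was planning to try next; the /mnt/huge
mount point is just a guess on my side, not something taken from the traces:

  # The log reports 1024 reserved 2 MB hugepages but no mounted hugetlbfs,
  # so mount one where EAL can find it (mount point is illustrative)
  mkdir -p /mnt/huge
  mount -t hugetlbfs nodev /mnt/huge

  # Relaunch with the chained port topology, as the warning itself suggests,
  # so that an odd number of forwarding ports is accepted
  ./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2 --port-topology=chained

After that I would re-check the reported link speed from the interactive
prompt with "show port info all". Does that sound like the right direction?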

Thank you

-- 
You are receiving this mail because:
You are the assignee for the bug.
