[dpdk-dev] [Q] l2fwd in examples directory

Moon-Sang Lee sang0627 at gmail.com
Mon Oct 19 09:51:28 CEST 2015


Let me clear up my earlier confusion.

My processor is an L5520 (family 6, model 26), which is based on the Nehalem
microarchitecture according to Wikipedia
(https://en.wikipedia.org/wiki/Nehalem_(microarchitecture)).
On Nehalem the PCIe interface is not integrated into the CPU; it sits on the
chipset (IOH), so a NIC has no single CPU socket it is attached to.

Therefore, rte_eth_dev_socket_id(portid) always returning -1 seems to be
expected behavior rather than a problem.
My reading of the lstopo result was wrong: the HostBridge hangs off the
Machine object, not off either NUMANode, so hwloc sees no PCI-to-socket
affinity either.
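
If -1 does show up, a common way to cope with SOCKET_ID_ANY is to fall back
to the calling lcore's socket when setting up queues and mempools. A minimal
sketch, reusing nb_rxd and l2fwd_pktmbuf_pool from the l2fwd snippet quoted
further down (not a tested patch):

    int socket = rte_eth_dev_socket_id(portid);

    if (socket < 0)                     /* SOCKET_ID_ANY: no NUMA info */
        socket = (int)rte_socket_id();  /* socket of the calling lcore */

    ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd, socket,
                                 NULL, l2fwd_pktmbuf_pool);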

Thanks anyway.




On Mon, Oct 19, 2015 at 4:39 PM, Moon-Sang Lee <sang0627 at gmail.com> wrote:

>
> My NUT has a Xeon L5520, which is based on the Nehalem microarchitecture.
> Does Nehalem put the PCIe interface on the CPU or on the chipset?
>
> Anyhow, 'lstopo' reports the topology below, and it looks to me as if my
> PCI devices are connected to socket #0.
> I'm still wondering why rte_eth_dev_socket_id(portid) always returns -1;
> a sysfs cross-check follows the lstopo output.
>
> mslee at myhost:~$ lstopo
> Machine (31GB)
>   NUMANode L#0 (P#0 16GB) + Socket L#0 + L3 L#0 (8192KB)
>     L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
>       PU L#0 (P#0)
>       PU L#1 (P#8)
>     L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
>       PU L#2 (P#2)
>       PU L#3 (P#10)
>     L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
>       PU L#4 (P#4)
>       PU L#5 (P#12)
>     L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
>       PU L#6 (P#6)
>       PU L#7 (P#14)
>   NUMANode L#1 (P#1 16GB) + Socket L#1 + L3 L#1 (8192KB)
>     L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
>       PU L#8 (P#1)
>       PU L#9 (P#9)
>     L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
>       PU L#10 (P#3)
>       PU L#11 (P#11)
>     L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
>       PU L#12 (P#5)
>       PU L#13 (P#13)
>     L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
>       PU L#14 (P#7)
>       PU L#15 (P#15)
>   HostBridge L#0
>     PCIBridge
>       PCI 14e4:163b
>         Net L#0 "em1"
>       PCI 14e4:163b
>         Net L#1 "em2"
>     PCIBridge
>       PCI 1000:0058
>         Block L#2 "sda"
>         Block L#3 "sdb"
>     PCIBridge
>       PCIBridge
>         PCIBridge
>           PCI 8086:10e8
>           PCI 8086:10e8
>         PCIBridge
>           PCI 8086:10e8
>           PCI 8086:10e8
>     PCIBridge
>       PCI 102b:0532
>     PCI 8086:3a20
>     PCI 8086:3a26
>       Block L#4 "sr0"
> mslee at myhost:~$
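>
> As a cross-check on where the -1 comes from: as far as I can tell, DPDK's
> EAL fills in a device's numa_node from sysfs when it scans the PCI bus, and
> the kernel reports -1 there when it has no locality information for the
> device. A minimal sketch that reads the value directly (0000:0a:00.0 is a
> placeholder BDF; substitute the 82576's real address from lspci):
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         /* placeholder PCI address; take the real one from lspci */
>         FILE *f = fopen("/sys/bus/pci/devices/0000:0a:00.0/numa_node", "r");
>         int node = -1;
>
>         if (f != NULL) {
>             if (fscanf(f, "%d", &node) != 1)
>                 node = -1;
>             fclose(f);
>         }
>         /* -1 means the kernel sees no NUMA locality for this device,
>          * which is the value rte_eth_dev_socket_id() ends up returning */
>         printf("numa_node = %d\n", node);
>         return 0;
>     }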
>
>
>
> On Sun, Oct 18, 2015 at 2:51 PM, Moon-Sang Lee <sang0627 at gmail.com> wrote:
>
>>
>> Thanks, Bruce.
>>
>> I didn't know that PCI slots have direct socket affinity.
>> Is it static, or configurable through PCI configuration space?
>> On my NUT (a two-node NUMA system), rte_eth_dev_socket_id(portid) always
>> returns -1, whether portid is 0, 1, or any other value.
>> I'd appreciate it if you could explain more about how the affinity is
>> obtained.
>>
>> p.s.
>> I'm using an Intel Xeon processor and a 1G NIC (82576).
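>>
>> For reference, I'm checking the ports with a loop like the following
>> (quick diagnostic sketch; assumes rte_eal_init() has already run and the
>> NICs are bound to DPDK):
>>
>>     uint8_t p, nb_ports = rte_eth_dev_count();
>>
>>     for (p = 0; p < nb_ports; p++)
>>         printf("port %u -> socket %d\n",
>>                (unsigned)p, rte_eth_dev_socket_id(p));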
>>
>>
>>
>>
>> On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson <
>> bruce.richardson at intel.com> wrote:
>>
>>> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
>>> > There is code as below in examples/l2fwd/main.c, and I think
>>> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY),
>>> > since there is no code in the example associating a port with an
>>> > lcore.
>>>
>>> Can you perhaps clarify what you mean here? On modern NUMA systems, such
>>> as those
>>> from Intel :-), the PCI slots are directly connected to the CPU sockets,
>>> so the
>>> ethernet ports do indeed have a direct NUMA affinity. It's not something
>>> that
>>> the app needs to specify.
>>>
>>> /Bruce
>>>
>>> > (i.e. I need to find the matching lcore for portid in
>>> > lcore_queue_conf[] and call rte_lcore_to_socket_id(lcore_id);
>>> > a rough sketch follows the snippet below.)
>>> >
>>> >         /* init one RX queue */
>>> >         fflush(stdout);
>>> >         ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
>>> >                          rte_eth_dev_socket_id(portid),
>>> >                          NULL,
>>> >                          l2fwd_pktmbuf_pool);
>>> >         if (ret < 0)
>>> >             rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d,
>>> > port=%u\n",
>>> >                   ret, (unsigned) portid);
>>> >
>>> > It works fine even though memory is allocated on a different NUMA node,
>>> > but I wonder whether there is a DPDK API that internally associates an
>>> > lcore with a port, so that rte_eth_devices[portid].pci_dev->numa_node
>>> > contains the proper node.
>>> >
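>>> > What I have in mind is roughly the following (rough sketch; the
>>> > lcore_queue_conf[] field names are as in the l2fwd sources I'm reading,
>>> > so adjust if yours differ):
>>> >
>>> >     /* find the lcore that will poll this port and use its socket */
>>> >     int socket = SOCKET_ID_ANY;
>>> >     unsigned lcore_id, i;
>>> >
>>> >     for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>>> >         struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
>>> >
>>> >         for (i = 0; i < qconf->n_rx_port; i++) {
>>> >             if (qconf->rx_port_list[i] == portid) {
>>> >                 socket = (int)rte_lcore_to_socket_id(lcore_id);
>>> >                 break;
>>> >             }
>>> >         }
>>> >     }
>>> >     /* pass 'socket' to rte_eth_rx_queue_setup() in place of
>>> >      * rte_eth_dev_socket_id(portid) */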
>>> >
>>> > --
>>> > Moon-Sang Lee, SW Engineer
>>> > Email: sang0627 at gmail.com
>>> > Wisdom begins in wonder. *Socrates*
>>>
>>
>>
>>
>> --
>> Moon-Sang Lee, SW Engineer
>> Email: sang0627 at gmail.com
>> Wisdom begins in wonder. *Socrates*
>>
>
>
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627 at gmail.com
> Wisdom begins in wonder. *Socrates*
>



-- 
Moon-Sang Lee, SW Engineer
Email: sang0627 at gmail.com
Wisdom begins in wonder. *Socrates*

