[dpdk-users] problem running ip pipeline application

Singh, Jasvinder jasvinder.singh at intel.com
Wed Apr 6 11:22:03 CEST 2016


Hi,

> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Talukdar, Biju
> Sent: Wednesday, April 6, 2016 1:43 AM
> To: users <users at dpdk.org>
> Cc: Talukdar, Biju <Biju_Talukdar at student.uml.edu>
> Subject: [dpdk-users] problem running ip pipeline application
> 
> Hi,
> 
> 
> I am getting the following error when trying to run the DPDK ip_pipeline
> example. Could someone please tell me what went wrong?
> 
> 
> system configuration:
> 
> dpdk 2.2.0
> 
> ubuntu 14.04
> 
> 
> network driver:
> 
> /dpdk-2.2.0$ ./tools/dpdk_nic_bind.py --status
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
> 
> Network devices using kernel driver
> ===================================
> 0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3 drv=ixgbe unused=igb_uio
> 0000:05:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=igb_uio *Active*
> 0000:05:00.1 'I350 Gigabit Network Connection' if=eth2 drv=igb unused=igb_uio
> 
> Other network devices
> =====================
> 0000:03:00.0 'Device 15ad' unused=igb_uio
> 0000:03:00.1 'Device 15ad' unused=igb_uio
> 
> 
> Here is the dump -------->
> 
> 
> 
> ~/dpdk-2.2.0/examples/ip_pipeline/ip_pipeline/x86_64-native-linuxapp-gcc/app$ sudo -E ./ip_pipeline -f /home/uml/dpdk-2.2.0/examples/ip_pipeline/config/l2fwd.cfg -p 0x01
> [sudo] password for uml:
> [APP] Initializing CPU core map ...
> [APP] CPU core mask = 0x0000000000000003
> [APP] Initializing EAL ...
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 1 on socket 0
> EAL: Detected lcore 2 as core 2 on socket 0
> EAL: Detected lcore 3 as core 3 on socket 0
> EAL: Detected lcore 4 as core 4 on socket 0
> EAL: Detected lcore 5 as core 5 on socket 0
> EAL: Detected lcore 6 as core 6 on socket 0
> EAL: Detected lcore 7 as core 7 on socket 0
> EAL: Detected lcore 8 as core 0 on socket 0
> EAL: Detected lcore 9 as core 1 on socket 0
> EAL: Detected lcore 10 as core 2 on socket 0
> EAL: Detected lcore 11 as core 3 on socket 0
> EAL: Detected lcore 12 as core 4 on socket 0
> EAL: Detected lcore 13 as core 5 on socket 0
> EAL: Detected lcore 14 as core 6 on socket 0
> EAL: Detected lcore 15 as core 7 on socket 0
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 16 lcore(s)
> EAL: VFIO modules not all loaded, skip VFIO support...
> EAL: Setting up physically contiguous memory...
> EAL: Ask a virtual area of 0x40000000 bytes
> EAL: Virtual area found at 0x7f24c0000000 (size = 0x40000000)
> EAL: Requesting 1 pages of size 1024MB from socket 0
> EAL: TSC frequency is ~1999998 KHz
> EAL: Master lcore 0 is ready (tid=bc483940;cpuset=[0])
> EAL: lcore 1 is ready (tid=bad86700;cpuset=[1])
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:03:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7f2500000000
> EAL:   PCI memory mapped at 0x7f2500080000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> [APP] Initializing MEMPOOL0 ...
> [APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f24fb128340 sw_sc_ring=0x7f24fb127e00 hw_ring=0x7f24fb128880 dma_addr=0xffb128880
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f24fb115c40 hw_ring=0x7f24fb117c80 dma_addr=0xffb117c80
> PMD: ixgbe_set_tx_function(): Using simple tx code path
> PMD: ixgbe_set_tx_function(): Vector tx enabled.
> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=0).
> [APP] LINK0 (0) (10 Gbps) UP
> [APP] Initializing MSGQ-REQ-PIPELINE0 ...
> [APP] Initializing MSGQ-RSP-PIPELINE0 ...
> [APP] Initializing MSGQ-REQ-CORE-s0c0 ...
> [APP] Initializing MSGQ-RSP-CORE-s0c0 ...
> [APP] Initializing MSGQ-REQ-PIPELINE1 ...
> [APP] Initializing MSGQ-RSP-PIPELINE1 ...
> [APP] Initializing MSGQ-REQ-CORE-s0c1 ...
> [APP] Initializing MSGQ-RSP-CORE-s0c1 ...
> [APP] Initializing PIPELINE0 ...
> pipeline> [APP] Initializing PIPELINE1 ...
> Cannot find LINK1 for RXQ1.0


It looks like you have fewer ports bound to DPDK than l2fwd.cfg specifies. Running ip_pipeline with the default l2fwd.cfg configuration file requires 4 ports, but your --status output shows only one (0000:04:00.0) bound. So you can either edit the configuration file to remove the three extra ports (RXQ1.0, RXQ2.0, RXQ3.0 and TXQ1.0, TXQ2.0, TXQ3.0), or bind three more ports to DPDK and use the default l2fwd.cfg as-is; a sketch of both options follows.
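
For the first option, a single-port variant of the file might look like this (a sketch based on the default DPDK 2.2 l2fwd.cfg layout, so double-check against the file in your tree; with only one port, the PASS-THROUGH pipeline simply sends each packet back out the port it arrived on):

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0
pktq_out = TXQ0.0

For the second option, you could bind the second 82599 port plus the two 'Device 15ad' ports from your --status output (your EAL log shows them probed by rte_ixgbe_pmd, so they should be usable once bound). Bring eth3 down first, or add --force:

./tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.1 0000:03:00.0 0000:03:00.1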

Jasvinder

