[dpdk-users] issues with running ip_pipeline sample application

Zhang, Roy Fan roy.fan.zhang at intel.com
Tue Nov 17 17:40:16 CET 2015


Hello, 

Thanks for using ip_pipeline. 
If l2fwd works on your host, the standalone ip_pipeline should work as well. 
The problem is likely the port mask 0x11 in your command. 
The port mask works as follows: 

If your board has X NIC ports bound to DPDK, the port mask is an X-bit unsigned integer. 
A "1" in the Nth bit (counting from the least significant bit) means the Nth NIC port will be used by the ip_pipeline application. 
E.g., 0x3 (binary 11) selects the 1st and 2nd NIC ports, while 0x11 (binary 10001) selects the 1st and 5th. Your host has only two DPDK-bound ports, so LINK1, which maps to the 5th port (port 4), cannot be initialized; that is why app_init_link() panics with init error -22 (-EINVAL).
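
To make the bit arithmetic concrete, here is a minimal C sketch (illustration only, not DPDK code) of how an application could decode a port mask; the mask value 0x11 is taken from your command:

    /* Illustration only: port i is selected when bit i of the mask is set. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int portmask = 0x11;  /* mask from the reported command */
        for (unsigned int i = 0; i < 32; i++)
            if (portmask & (1u << i))
                printf("NIC port %u selected\n", i);  /* 0x11 -> ports 0 and 4 */
        return 0;
    }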

To map each port's sequence number to its actual NIC, please use the following command: ./tools/dpdk_nic_bind.py --status

The sample output is as follows:

Network devices using DPDK-compatible driver
============================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 1st port, port mask 0x01
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 2nd port, port mask 0x02
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 3rd port, port mask 0x04
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 4th port, port mask 0x08

To use all 4 ports, include the -p 0x0f option.
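Assuming the same build path as in your original command, that would be:

    sudo ./build/ip_pipeline -p 0x0f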

Hope this answers your question.

Best regards,
Fan

> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Grace Liu
> Sent: Monday, November 16, 2015 11:03 PM
> To: users at dpdk.org
> Subject: [dpdk-users] issues with running ip_pipeline sample 
> application
> 
> Hello DPDK community,
> 
> I ran into errors when running the ip_pipeline and test_ip_pipeline sample 
> applications. I'm using dpdk-2.1.0 on an Ubuntu 14.04 host with kernel 3.16.
> My host machine has two 10G ports; they work with the l2fwd sample 
> application but not with ip_pipeline, so I'm wondering whether there is 
> any specific NIC requirement to run this app. The command I use 
> is: sudo ./build/ip_pipeline -p 0x11.
> 
> The error message is as follows:
> 
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb9d8200000 (size = 0x200000)
> EAL: Requesting 256 pages of size 2MB from socket 0
> EAL: Requesting 768 pages of size 2MB from socket 1
> EAL: TSC frequency is ~2666753 KHz
> EAL: Master lcore 0 is ready (tid=74659900;cpuset=[0])
> EAL: lcore 1 is ready (tid=d81ff700;cpuset=[1])
> EAL: PCI device 0000:05:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:07:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d71ff000
> EAL:   PCI memory mapped at 0x7fba7465d000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> EAL: PCI device 0000:07:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d69ff000
> EAL:   PCI memory mapped at 0x7fba7461e000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> [APP] Initializing MEMPOOL0 ...
> [APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: i40e_dev_tx_queue_setup(): Using simple tx path
> [APP] Initializing LINK1 (4) (1 RXQ, 1 TXQ) ...
> PANIC in app_init_link():
> LINK1 (4): init error (-22)
> 6: [./build/ip_pipeline() [0x42dad3]]
> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fba73776ec5]]
> 4: [./build/ip_pipeline(main+0x55) [0x42c715]]
> 3: [./build/ip_pipeline(app_init+0x1530) [0x439f10]]
> 2: [./build/ip_pipeline(__rte_panic+0xc1) [0x427bdb]]
> 1: [./build/ip_pipeline(rte_dump_stack+0x18) [0x4abe98]]
> 
> 
> Thanks,
> 
> Grace

