[dpdk-users] issues with running ip_pipeline sample application

Zhang, Roy Fan roy.fan.zhang at intel.com
Tue Nov 17 22:06:55 CET 2015


Hello,

Can you provide the displayed messages?
After the pipeline(s) are initialized, you should see the following terminal output:

[APP] LINK0 (0) (10 Gbps) UP
[APP] LINK1 (1) (10 Gbps) UP
[APP] LINK2 (2) (10 Gbps) UP
[APP] LINK3 (3) (10 Gbps) UP
[APP] Initializing MSGQ-REQ-PIPELINE0 ...
[APP] Initializing MSGQ-RSP-PIPELINE0 ...
[APP] Initializing MSGQ-REQ-CORE-s0c0 ...
[APP] Initializing MSGQ-RSP-CORE-s0c0 ...
[APP] Initializing MSGQ-REQ-PIPELINE1 ...
[APP] Initializing MSGQ-RSP-PIPELINE1 ...
[APP] Initializing MSGQ-REQ-CORE-s0c1 ...
[APP] Initializing MSGQ-RSP-CORE-s0c1 ...
[APP] Initializing PIPELINE0 ...
pipeline> [APP] Initializing PIPELINE1 ...
[PIPELINE1] Pass-through

Here you should be able to type CLI commands such as “p 1 ping” to check whether the pipeline thread is alive. If it is alive, no message is shown; if it is not, an error message is displayed. The program exits only when you type the “quit” command. The best way to check that this example is working is to verify that traffic is actually flowing.
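
For example, a quick liveness check at the prompt would look like the lines below; this is only an illustration of the two commands mentioned above (no reply to “p 1 ping” means the pipeline thread is alive):

pipeline> p 1 ping
pipeline> quit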

Meanwhile, as ip_pipeline is a relatively large example, you can build your own DPDK application, from basic pass-through and routing up to more complex flow classification or Q-in-Q encapsulation, simply by providing a configuration file of your own. Currently ip_pipeline does not ship with many example configuration files (only one simple pass-through, I am afraid).

You may find useful information on how to build your configuration file at http://dpdk.org/doc/guides/sample_app_ug/ip_pipeline.html#.
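
To give you a rough idea, a minimal two-port pass-through configuration could look something like the sketch below. I am writing the section and parameter names (type, core, pktq_in, pktq_out) from memory of the 2.x config format, so please verify them against the guide above before relying on them:

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0
pktq_out = TXQ1.0 TXQ0.0

Note that this sketch only references RXQ0.0 and RXQ1.0, which is consistent with a two-port mask such as 0x3; a configuration that references RXQ2.0 or RXQ3.0 needs the corresponding LINKs (and port mask bits) to exist.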

If you have any further questions, please do contact me.

Regards,
Fan

From: Grace Liu [mailto:guyue.liu at gmail.com]
Sent: Tuesday, November 17, 2015 5:23 PM
To: Zhang, Roy Fan <roy.fan.zhang at intel.com>
Subject: Re: [dpdk-users] issues with running ip_pipeline sample application

Hello,

Thanks for your reply. I've changed the port mask to 0x3 and added a sleep() call; now it seems both LINKs are up, but then the program exits and my terminal freezes. Do you have any idea about this problem?

Thanks,
Grace


PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
[APP] Initializing MEMPOOL0 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: i40e_dev_tx_queue_setup(): Using simple tx path
[APP] Initializing LINK1 (1) (1 RXQ, 1 TXQ) ...
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
PMD: i40e_dev_tx_queue_setup(): Using simple tx path
[APP] LINK0 (0) (10 Gbps) UP
[APP] LINK1 (1) (10 Gbps) UP
[APP] Initializing MSGQ-REQ-PIPELINE0 ...
[APP] Initializing MSGQ-RSP-PIPELINE0 ...
[APP] Initializing MSGQ-REQ-CORE-s0c0 ...
[APP] Initializing MSGQ-RSP-CORE-s0c0 ...
[APP] Initializing MSGQ-REQ-PIPELINE1 ...
[APP] Initializing MSGQ-RSP-PIPELINE1 ...
[APP] Initializing MSGQ-REQ-CORE-s0c1 ...
[APP] Initializing MSGQ-RSP-CORE-s0c1 ...
[APP] Initializing PIPELINE0 ...
pipeline> [APP] Initializing PIPELINE1 ...
Cannot find LINK2 for RXQ2.0

On Tue, Nov 17, 2015 at 11:40 AM, Zhang, Roy Fan <roy.fan.zhang at intel.com> wrote:
Hello,

Thanks for using ip_pipeline.
If l2fwd works on your host, standalone ip_pipeline should also work.
The problem may be the port mask 0x11 in your command.
The port mask works this way:

If your board has X NIC ports, the port mask is an X-bit unsigned integer.
Setting the Nth bit to 1 indicates that the Nth NIC port will be used by the ip_pipeline application.
E.g., 0x3 means you want to use the 1st and 2nd NIC ports, while 0x11 means you want to use the 1st and 5th NIC ports.

To map a port's sequence number to the actual NIC port, please use the following command: ./tools/dpdk_nic_bind.py --status

The sample output is shown as follows:
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 1st port, port mask 0x01
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 2nd port, port mask 0x02
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 3rd port, port mask 0x04
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 4th port, port mask 0x08

To use all 4 ports, the -p 0x0f option should be included, e.g. sudo ./build/ip_pipeline -p 0x0f.
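
If it helps, here is a tiny standalone C sketch of my own (not part of ip_pipeline) that just illustrates the arithmetic: it sets one bit per 0-based port index and prints the resulting -p value:

#include <stdio.h>

int main(void)
{
        /* Illustrative only: set bit N for each 0-based NIC port N that
         * ip_pipeline should use, then pass the result via -p. */
        unsigned int ports[] = { 0, 1, 2, 3 };  /* the four ports listed above */
        unsigned int mask = 0;
        unsigned int i;

        for (i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
                mask |= 1u << ports[i];

        printf("-p 0x%02x\n", mask);            /* prints: -p 0x0f */
        return 0;
}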

Hope this answers your question.

Best regards,
Fan

> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Grace Liu
> Sent: Monday, November 16, 2015 11:03 PM
> To: users at dpdk.org
> Subject: [dpdk-users] issues with running ip_pipeline sample application
>
> Hello DPDK community,
>
> I met errors when running the ip_pipeline and test_ip_pipeline sample
> applications. I'm using dpdk-2.1.0 on an Ubuntu 14.04 host with kernel 3.16.
> My host machine has two 10G ports, and they work for the l2fwd sample
> application but not for ip_pipeline, so I'm wondering whether there is any
> specific NIC requirement to run this app? The command I use is:
> sudo ./build/ip_pipeline -p 0x11.
>
> The error message is as follows:
>
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb9d8200000 (size = 0x200000)
> EAL: Requesting 256 pages of size 2MB from socket 0
> EAL: Requesting 768 pages of size 2MB from socket 1
> EAL: TSC frequency is ~2666753 KHz
> EAL: Master lcore 0 is ready (tid=74659900;cpuset=[0])
> EAL: lcore 1 is ready (tid=d81ff700;cpuset=[1])
> EAL: PCI device 0000:05:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:07:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d71ff000
> EAL:   PCI memory mapped at 0x7fba7465d000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> EAL: PCI device 0000:07:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d69ff000
> EAL:   PCI memory mapped at 0x7fba7461e000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> [APP] Initializing MEMPOOL0 ...
> [APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: i40e_dev_tx_queue_setup(): Using simple tx path
> [APP] Initializing LINK1 (4) (1 RXQ, 1 TXQ) ...
> PANIC in app_init_link():
> LINK1 (4): init error (-22)
> 6: [./build/ip_pipeline() [0x42dad3]]
> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fba73776ec5]]
> 4: [./build/ip_pipeline(main+0x55) [0x42c715]]
> 3: [./build/ip_pipeline(app_init+0x1530) [0x439f10]]
> 2: [./build/ip_pipeline(__rte_panic+0xc1) [0x427bdb]]
> 1: [./build/ip_pipeline(rte_dump_stack+0x18) [0x4abe98]]
>
> Thanks,
>
> Grace


