[dpdk-users] Pktgen Cannot configure device panic
Wiles, Keith
keith.wiles at intel.com
Mon Mar 6 16:14:11 CET 2017
> On Mar 5, 2017, at 8:03 PM, Philip Lee <plee2 at andrew.cmu.edu> wrote:
>
> Hello all,
>
> I had a "working" install of pktgen that would transfer data but not
> provide statistics. The setup is two Netronome NICs connected
> together. It was suggested there was a problem with the Netronome PMD,
> so I reinstalled both the Netronome BSP and DPDK. Now I'm getting the
> following error when trying to start pktgen with: ./pktgen -c 0x1f
> -n 1 -- -m [1:2].0
>
>>>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf cache 1024
> === port to lcore mapping table (# lcores 5) ===
> lcore: 0 1 2 3 4
> port 0: D: T 1: 0 0: 1 0: 0 0: 0 = 1: 1
> Total : 0: 0 1: 0 0: 1 0: 0 0: 0
> Display and Timer on lcore 0, rx:tx counts per port/lcore
>
> Configuring 4 ports, MBUF Size 1920, MBUF Cache Size 1024
> Lcore:
> 1, RX-Only
> RX( 1): ( 0: 0)
> 2, TX-Only
> TX( 1): ( 0: 0)
> Port :
> 0, nb_lcores 2, private 0x8cca90, lcores: 1 2
>
> ** Default Info (5:8.0, if_index:0) **
> max_vfs : 0, min_rx_bufsize : 68, max_rx_pktlen : 0
> max_rx_queues : 0, max_tx_queues : 0
> max_mac_addrs : 1, max_hash_mac_addrs: 0, max_vmdq_pools: 0
> rx_offload_capa: 0, tx_offload_capa : 0, reta_size :
> 128, flow_type_rss_offloads:0000000000000000
> vmdq_queue_base: 0, vmdq_queue_num : 0, vmdq_pool_base: 0
> ** RX Conf **
> pthresh : 8, hthresh : 8, wthresh : 0
> Free Thresh : 32, Drop Enable : 0, Deferred Start : 0
> ** TX Conf **
> pthresh : 32, hthresh : 0, wthresh : 0
> Free Thresh : 32, RS Thresh : 32, Deferred Start :
> 0, TXQ Flags:00000f01
>
> !PANIC!: Cannot configure device: port=0, Num queues 1,1 (2)Invalid argument
> PANIC in pktgen_config_ports():
> Cannot configure device: port=0, Num queues 1,1 (2)Invalid argument6:
> [./pktgen() [0x43394e]]
> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f89dd0f7f45]]
> 4: [./pktgen(main+0x4d4) [0x432f54]]
> 3: [./pktgen(pktgen_config_ports+0x3108) [0x45f418]]
> 2: [./pktgen(__rte_panic+0xbe) [0x42f288]]
> 1: [./pktgen(rte_dump_stack+0x1a) [0x49af3a]]
> Aborted
>
> ------------------------------------------------------------------------------------------------------------------------
>
> I tried unbinding the NICs and rebinding them. I read in an older
> mailing list post that setup.sh needs to be run every reboot. I ran it
> again after the most recent reboot; it appears to repeat the pktgen
> setup steps I had already done manually. The output of the status
> check script is below:
> ./dpdk-devbind.py --status
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:05:08.0 'Device 6003' drv=igb_uio unused=
> 0000:05:08.1 'Device 6003' drv=igb_uio unused=
> 0000:05:08.2 'Device 6003' drv=igb_uio unused=
> 0000:05:08.3 'Device 6003' drv=igb_uio unused=
>
> Network devices using kernel driver
> ===================================
> 0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth0 drv=tg3
> unused=igb_uio *Active*
> 0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth1 drv=tg3
> unused=igb_uio
> 0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth2 drv=tg3
> unused=igb_uio
> 0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth3 drv=tg3
> unused=igb_uio
> 0000:05:00.0 'Device 4000' if= drv=nfp unused=igb_uio
> 0000:43:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=eth4
> drv=ixgbe unused=igb_uio
> 0000:43:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=eth7
> drv=ixgbe unused=igb_uio
> 0000:44:00.0 'MT27500 Family [ConnectX-3]' if=eth5,eth6 drv=mlx4_core
> unused=igb_uio
>
> Does anyone have any suggestions?
Try blacklisting (-b 0000:01:00.1 -b ...) all of the ports you are not using. The number of ports being set up is taken from the number of devices DPDK detects.
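For the dpdk-devbind status listing above, the invocation might look something like this (a sketch, not a tested command; the PCI addresses are copied from your output, and it assumes only 0000:05:08.0 is the port you intend to use):

```shell
# Blacklist the three unused igb_uio-bound Netronome ports so pktgen
# only configures 0000:05:08.0 (adjust addresses to your setup).
./pktgen -c 0x1f -n 1 \
    -b 0000:05:08.1 -b 0000:05:08.2 -b 0000:05:08.3 \
    -- -m [1:2].0
```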
The one thing I am worried about is that 'max_rx_queues : 0, max_tx_queues : 0' is reporting zero queues. It may be that other example code simply does not test the return code from the rte_eth_dev_configure() call, which is why it appeared to work. The max_rx_queues and max_tx_queues values should be at least 1.
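As a sketch of that check (this is not Pktgen's own code; it assumes the 17.02-era rte_ethdev API, where port ids are uint8_t), a caller can validate the PMD-reported queue limits before configuring, instead of letting rte_eth_dev_configure() fail with EINVAL:

```c
/* Hypothetical helper: reject a configure request that exceeds the
 * queue counts the PMD advertises, as happens here when the Netronome
 * PMD reports max_rx_queues = max_tx_queues = 0. */
#include <errno.h>
#include <rte_ethdev.h>
#include <rte_log.h>

static int
configure_port_checked(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
                       const struct rte_eth_conf *conf)
{
    struct rte_eth_dev_info info;

    rte_eth_dev_info_get(port_id, &info);

    if (info.max_rx_queues < nb_rxq || info.max_tx_queues < nb_txq) {
        RTE_LOG(ERR, USER1,
                "port %u: PMD supports %u RX / %u TX queues, asked for %u/%u\n",
                port_id, info.max_rx_queues, info.max_tx_queues,
                nb_rxq, nb_txq);
        return -EINVAL;
    }

    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
}
```

If this check fails on your ports, the PMD's dev_info reporting is the thing to chase, not the pktgen command line.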
>
> Thanks,
>
> Philip Lee
Regards,
Keith