[dpdk-users] Capture traffic with DPDK-dump

Wiles, Keith keith.wiles at intel.com
Mon Nov 7 23:42:37 CET 2016


> On Nov 7, 2016, at 9:50 AM, jose suarez <jsuarezv at ac.upc.edu> wrote:
> 
> Hello everybody!
> 
> I am new to DPDK. I am simply trying to capture traffic from a 10G physical NIC. I installed DPDK from source and enabled the following options in the config/common_base file:
> 
> CONFIG_RTE_LIBRTE_PMD_PCAP=y
> 
> CONFIG_RTE_LIBRTE_PDUMP=y
> 
> CONFIG_RTE_PORT_PCAP=y
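Just for reference (you are probably already doing this via dpdk-setup.sh): these options live in config/common_base, and the target has to be rebuilt after changing them so that the pdump/pcap support is actually compiled in. A manual equivalent would be something like:

make install T=x86_64-native-linuxapp-gcc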
> 
> Then I built DPDK using the dpdk-setup.sh script. I also added hugepages and checked that they were configured successfully:
> 
> AnonHugePages:      4096 kB
> HugePages_Total:    2048
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
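In case it helps, a minimal way to reserve and mount 2 MB hugepages by hand (dpdk-setup.sh can also do this for you) looks like this:

echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge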
> 
> To capture the traffic I guess I can use the dpdk-pdump application, but I don't know how to use it. First of all, does it work if I bind the interfaces using the uio_pci_generic driver? I guess that if I capture the traffic using the Linux kernel driver (ixgbe) I will lose a lot of packets.
> 
> To bind the NIC I write this command:
> 
> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
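That part looks fine. One thing worth double-checking (just an assumption on my part about your setup): the uio_pci_generic module has to be loaded before binding, and --status shows how the devices ended up, e.g.:

sudo modprobe uio_pci_generic
sudo ./tools/dpdk-devbind.py --status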
> 
> 
> When I check the interfaces I can see that the NIC was bound successfully. I also checked that my NIC (Intel 82599) is compatible with DPDK:
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=uio_pci_generic unused=ixgbe,vfio-pci
> 
> 
> To capture packets, I read on the mailing list that it is necessary to run the testpmd application and then dpdk-pdump on different cores. So I used the following commands:
> 
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
> 
> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'

I did notice you used lcores 1-2 for testpmd and then lcores 0-7 (0xff) for pdump. Normally you need to use something like 0xf8 for pdump so that the two processes do not end up with threads on the same core.
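For example, something along these lines keeps the lcore sets disjoint (untested, and 0xf8 is just one possible mask on an 8-lcore box; adjust it to your machine):

sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 2 -- --pdump 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'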

Not sure this will fix your problem.

> 
> Did I miss any steps? Do I need to run any other commands in testpmd's interactive mode?
> 
> 
> When I execute the pdump application I get the following error:
> 
> EAL: Detected 8 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
> EAL:    This may cause issues with mapping memory into secondary processes
> PMD: bnxt_rte_pmd_init() called for (null)
> EAL: PCI device 0000:01:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI device 0000:01:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> PMD: Initializing pmd_pcap for eth_pcap_rx_0
> PMD: Creating pcap-backed ethdev on numa socket 0
> Port 2 MAC: 00 00 00 01 02 03
> PDUMP: client request for pdump enable/disable failed
> PDUMP: client request for pdump enable/disable failed
> EAL: Error - exiting with code: 1
>  Cause: Unknown error -22
> 
> 
> In the testpmd app I get the following info:
> 
> EAL: Detected 8 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> PMD: bnxt_rte_pmd_init() called for (null)
> EAL: PCI device 0000:01:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI device 0000:01:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
> Configuring Port 0 (socket 0)
> Port 0: 00:E0:ED:FF:60:5C
> Configuring Port 1 (socket 0)
> Port 1: 00:E0:ED:FF:60:5D
> Checking link statuses...
> Port 0 Link Up - speed 10000 Mbps - full-duplex
> Port 1 Link Up - speed 10000 Mbps - full-duplex
> Done
> testpmd> PDUMP: failed to get potid for device id=01:00.0
> PDUMP: failed to get potid for device id=01:00.0
> 
> 
> Could you please help me?
> 
> Thank you!
> 
> 
> 

Regards,
Keith


