[dpdk-users] Packet losses using DPDK

dfernandes at toulouse.viveris.com
Wed May 17 09:53:26 CEST 2017


Thanks for your response!

I have installed Pktgen and I will perform some tests. So far it seems 
to work fine. I'll keep you informed. Thanks again.

David

On 12.05.2017 18:18, Wiles, Keith wrote:
>> On May 12, 2017, at 10:45 AM, dfernandes at toulouse.viveris.com wrote:
>> 
>> Hi!
>> 
>> I am working with MoonGen, which is a fully scriptable packet generator 
>> built on DPDK.
>> (→ https://github.com/emmericp/MoonGen)
>> 
>> The system on which I perform the tests has the following 
>> characteristics:
>> 
>> CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
>> NIC: X540-AT2 with 2x 10GbE ports
>> OS: Linux Ubuntu Server 16.04 (kernel 4.4)
>> 
>> I wrote a MoonGen script which asks DPDK to transmit packets from one 
>> physical port and to receive them on the second physical port. The two 
>> physical ports are directly connected with an RJ-45 Cat 6 cable.
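>> 
>> Just to make the test concrete: at the DPDK level it amounts to roughly 
>> the loop below. This is only an illustration, not my actual MoonGen/Lua 
>> script; it assumes both ports and their single RX/TX queues are already 
>> configured and started, and it does not fill in real frame contents.
>> 
>>   #include <rte_ethdev.h>
>>   #include <rte_mbuf.h>
>> 
>>   #define BURST 32
>> 
>>   /* Send n_pkts on queue 0 of tx_port while draining queue 0 of rx_port. */
>>   static void tx_rx_loop(uint16_t tx_port, uint16_t rx_port,
>>                          struct rte_mempool *pool, uint64_t n_pkts)
>>   {
>>       struct rte_mbuf *tx[BURST], *rx[BURST];
>>       uint64_t sent = 0;
>> 
>>       while (sent < n_pkts) {
>>           uint16_t n = RTE_MIN((uint64_t)BURST, n_pkts - sent);
>> 
>>           if (rte_pktmbuf_alloc_bulk(pool, tx, n) == 0) {
>>               for (uint16_t i = 0; i < n; i++)
>>                   tx[i]->data_len = tx[i]->pkt_len = 124;  /* test size */
>>               uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, tx, n);
>>               for (uint16_t i = nb_tx; i < n; i++)
>>                   rte_pktmbuf_free(tx[i]);  /* free what was not sent */
>>               sent += nb_tx;
>>           }
>> 
>>           /* Release whatever arrived on the receiving port. */
>>           uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, rx, BURST);
>>           for (uint16_t i = 0; i < nb_rx; i++)
>>               rte_pktmbuf_free(rx[i]);
>>       }
>>   }
>> 
>> (A real run also needs to keep polling the receive side briefly after 
>> the last burst, so that packets still in flight are not counted as lost.)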
>> 
>> The issue is that when I run the same test several times, with exactly 
>> the same script and the same parameters, the results are random. Most 
>> runs show no losses, but some show packet losses, and the percentage of 
>> lost packets varies widely. This happens even when the packet rate is 
>> very low.
>> 
>> Some examples of randomly failing tests:
>> 
>> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) → 
>> 10170 lost packets
>> 
>> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) → 
>> ALL packets lost
>> 
>> 
>> I tested the following system modifications, without success:
>> 
>> # BIOS parameters:
>> 
>>    Hyperthreading: enable (because the machine has only 2 cores)
>>    Multi-processor: enable
>>    Virtualization Technology (VTx): disable
>>    Virtualization Technology for Directed I/O (VTd): disable
>>    Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disable
>>    NUMA unavailable
>> 
>> # use of isolcpus to isolate the cores in charge of transmission and 
>> reception
>> 
>> # hugepage size = 1048576 kB
>> 
>> # descriptor ring sizes: tried with Tx = 512 and Rx = 128 descriptors, 
>> and also with Tx = 4096 and Rx = 4096 descriptors (see the sketch after 
>> this list for where these values are applied)
>> 
>> # Tested with 2 different X540-T2 NIC units
>> 
>> # I also tested everything on a Dell FC430, which has an Intel Xeon 
>> E5-2660 v3 CPU @ 2.6 GHz with 10 cores and 2 threads/core (tested with 
>> and without hyper-threading)
>>    → same results, or even worse
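>> 
>> For reference, those descriptor counts are the nb_rx_desc / nb_tx_desc 
>> arguments of the DPDK queue setup calls. A minimal single-queue setup 
>> sketch (default rte_eth_conf, queue 0 only, assumed function name) 
>> looks roughly like this:
>> 
>>   #include <string.h>
>>   #include <rte_ethdev.h>
>> 
>>   static int setup_port(uint16_t port, struct rte_mempool *pool,
>>                         uint16_t nb_rxd, uint16_t nb_txd)
>>   {
>>       struct rte_eth_conf conf;
>>       int ret;
>> 
>>       memset(&conf, 0, sizeof(conf));  /* default port configuration */
>> 
>>       ret = rte_eth_dev_configure(port, 1, 1, &conf);
>>       if (ret < 0)
>>           return ret;
>> 
>>       /* nb_rxd / nb_txd are the ring sizes tried above (128/512/4096). */
>>       ret = rte_eth_rx_queue_setup(port, 0, nb_rxd,
>>                                    rte_eth_dev_socket_id(port),
>>                                    NULL, pool);
>>       if (ret < 0)
>>           return ret;
>> 
>>       ret = rte_eth_tx_queue_setup(port, 0, nb_txd,
>>                                    rte_eth_dev_socket_id(port), NULL);
>>       if (ret < 0)
>>           return ret;
>> 
>>       return rte_eth_dev_start(port);
>>   }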
>> 
>> 
>> Remark concerning the NIC stats:
>>     I used the rte_eth_stats struct to get more information about the 
>> losses, and I observed that in some cases, when there are packet 
>> losses, ierrors is > 0 and also ierrors + imissed + ipackets < 
>> opackets. In other cases I get ierrors = 0 and imissed + ipackets = 
>> opackets, which makes more sense.
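>> 
>> For context, as far as I understand these counters ultimately come from 
>> rte_eth_stats_get(); a minimal sketch of reading them in C:
>> 
>>   #include <stdio.h>
>>   #include <inttypes.h>
>>   #include <rte_ethdev.h>
>> 
>>   static void dump_port_stats(uint16_t port)
>>   {
>>       struct rte_eth_stats st;
>> 
>>       if (rte_eth_stats_get(port, &st) != 0)
>>           return;
>> 
>>       /* ipackets/opackets: packets successfully received/transmitted,
>>        * imissed: dropped by the hardware because the RX ring was full,
>>        * ierrors: packets the NIC itself flagged as erroneous. */
>>       printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
>>              " imissed=%" PRIu64 " ierrors=%" PRIu64 "\n",
>>              (unsigned int)port, st.ipackets, st.opackets,
>>              st.imissed, st.ierrors);
>>   }
>> 
>> The extended statistics (rte_eth_xstats_get) expose the per-register 
>> ixgbe counters and might show which error counter is actually 
>> incrementing.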
>> 
>> What could be the origin of that erroneous packet counting?
>> 
>> Do you have any explanation for that behaviour?
> 
> Not knowing MoonGen at all other than from a brief look at the source, I
> may not be much help, but I have a few ideas to help locate the problem.
> 
> Try using testpmd in tx-only mode, or try Pktgen, to see if you get the
> same problem. I hope this will narrow the problem down to a specific
> area. As we know, DPDK works when correctly coded, and testpmd/pktgen
> work.
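> 
> For example, something along these lines (adjust the core list and the
> path to your testpmd build):
> 
>   ./testpmd -l 0-2 -n 4 -- -i
>   testpmd> set fwd txonly
>   testpmd> start
>   ...
>   testpmd> stop
>   testpmd> show port stats all
> 
> If testpmd shows the same random drops over the back-to-back link, the
> problem is below MoonGen; if not, it points at the script or MoonGen
> itself.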
> 
>> 
>> Thanks in advance.
>> 
>> David
> 
> Regards,
> Keith


