[dpdk-users] Packet losses using DPDK
dfernandes at toulouse.viveris.com
Mon May 15 15:49:37 CEST 2017
Hi Andriy!
Thanks for your response.
Yes, I do wait for the links to be up before sending.
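
Concretely, my wait loop is roughly equivalent to this DPDK C sketch
(the helper name wait_link_up is just for illustration; the uint8_t
port id matches the DPDK releases of this era):

    #include <rte_ethdev.h>
    #include <rte_cycles.h>

    /* Poll until the link on port_id reports "up". */
    static void wait_link_up(uint8_t port_id)
    {
        struct rte_eth_link link;

        do {
            /* Non-blocking link state query. */
            rte_eth_link_get_nowait(port_id, &link);
            if (!link.link_status)
                rte_delay_ms(100); /* retry every 100 ms */
        } while (!link.link_status);
    }
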
David
On 15.05.2017 10:25, Andriy Berestovskyy wrote:
> Hey,
> It might be a silly guess, but do you wait for the links to be up and
> ready to send/receive packets?
>
> Andriy
>
> On Fri, May 12, 2017 at 5:45 PM, <dfernandes at toulouse.viveris.com>
> wrote:
>> Hi!
>>
>> I am working with MoonGen, a fully scriptable packet generator built
>> on DPDK.
>> (→ https://github.com/emmericp/MoonGen)
>>
>> The system on which I perform the tests has the following
>> characteristics:
>>
>> CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
>> NIC: X540-AT2 with 2x 10GbE ports
>> OS: Ubuntu Server 16.04 (kernel 4.4)
>>
>> I coded a MoonGen script which asks DPDK to transmit packets from one
>> physical port and to receive them on the second physical port. The two
>> physical ports are directly connected with an RJ-45 Cat 6 cable.
>>
>> The issue is that when I run the same test several times, with exactly
>> the same script and the same parameters, the results show random
>> behavior. Most of the runs show no losses, but on some of them I
>> observe packet losses. The percentage of lost packets varies widely,
>> and losses happen even when the packet rate is very low.
>>
>> Some examples of randomly failing tests:
>>
>> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) →
>> 10,170 lost packets
>>
>> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) →
>> ALL packets lost
>>
>>
>> I tested the following system modifications without success:
>>
>> # BIOS parameters:
>>
>> Hyper-threading: enabled (because the machine has only 2 cores)
>> Multi-processor: enabled
>> Virtualization Technology (VT-x): disabled
>> Virtualization Technology for Directed I/O (VT-d): disabled
>> Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disabled
>> NUMA: unavailable
>>
>> # use of isolcpus in order to isolate the cores in charge of
>> transmission and reception
>>
>> # hugepage size = 1048576 kB (i.e. 1 GB hugepages)
>>
>> # size of the descriptor rings: tried with Tx = 512 and Rx = 128
>> descriptors, and also with Tx = 4096 and Rx = 4096 descriptors (see
>> the sketch after this list)
>>
>> # Tested with 2 different X540-T2 NIC units
>>
>> # I also tested everything on a Dell FC430, which has an Intel Xeon
>> E5-2660 v3 CPU @ 2.6 GHz (10 cores, 2 threads/core; tested with and
>> without hyper-threading)
>> → same results, or even worse
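>>
>> For reference, the sketch mentioned above: the ring sizes end up in
>> DPDK's queue-setup calls, roughly as follows (port and mbuf_pool are
>> placeholders for my actual setup):
>>
>>     #include <rte_ethdev.h>
>>
>>     /* Configure one RX and one TX queue with 4096 descriptors each. */
>>     static int setup_queues(uint8_t port, struct rte_mempool *mbuf_pool)
>>     {
>>         int ret;
>>
>>         ret = rte_eth_rx_queue_setup(port, 0 /* queue id */, 4096,
>>                                      rte_eth_dev_socket_id(port),
>>                                      NULL /* default RX conf */,
>>                                      mbuf_pool);
>>         if (ret < 0)
>>             return ret;
>>         return rte_eth_tx_queue_setup(port, 0 /* queue id */, 4096,
>>                                       rte_eth_dev_socket_id(port),
>>                                       NULL /* default TX conf */);
>>     }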
>>
>>
>> Remark concerning the NIC stats:
>> I used the rte_eth_stats struct to get more information about the
>> losses, and I observed that in some cases, when there are packet
>> losses, the ierrors value is > 0 and ierrors + imissed + ipackets <
>> opackets. In other cases I get ierrors = 0 and imissed + ipackets =
>> opackets, which makes more sense.
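>>
>> The check boils down to this C sketch (tx_port and rx_port stand for
>> the two port ids):
>>
>>     #include <stdio.h>
>>     #include <inttypes.h>
>>     #include <rte_ethdev.h>
>>
>>     /* Compare TX counters on the sending port with RX counters on
>>      * the receiving port. */
>>     static void print_counters(uint8_t tx_port, uint8_t rx_port)
>>     {
>>         struct rte_eth_stats tx, rx;
>>
>>         rte_eth_stats_get(tx_port, &tx);
>>         rte_eth_stats_get(rx_port, &rx);
>>
>>         /* Everything the RX side accounts for, one way or another:
>>          * received + dropped by the NIC + erroneous. */
>>         uint64_t accounted = rx.ipackets + rx.imissed + rx.ierrors;
>>
>>         printf("opackets=%" PRIu64 " accounted=%" PRIu64 "\n",
>>                tx.opackets, accounted);
>>     }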
>>
>> What could be the origin of this erroneous packet counting?
>>
>> Do you have any explanation for this behaviour?
>>
>> Thanks in advance.
>>
>> David