[dpdk-users] Packet losses using DPDK

Andriy Berestovskyy aber at semihalf.com
Mon May 22 14:10:20 CEST 2017


Hi,
Please have a look at https://en.wikipedia.org/wiki/High_availability
I was trying to calculate your link availability, but my Ubuntu
calculator gives me 0 for 2 / 34 481 474 846 ;)
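(For the record, the ratio is not actually zero: 2 / 34 481 474 846 is
about 5.8e-11, i.e. roughly 99.99999999% of the packets made it
through.)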

Most probably you dropped a packet during start/stop.
ierrors is what your NIC considers an errored Ethernet frame
(bad checksums, runts, giants, etc.)
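
For reference, a minimal sketch of how those counters can be read
directly through the DPDK API (these are the standard rte_eth_stats
fields; the port id type and the rte_eth_stats_get() return type vary
across DPDK releases, so treat this as illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Dump the per-port counters MoonGen/Pktgen report. ierrors counts
     * frames the NIC itself rejected (bad CRC, runts, giants, ...);
     * imissed counts frames dropped because the RX queues overflowed. */
    static void print_port_stats(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        rte_eth_stats_get(port_id, &stats);
        printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
               " ierrors=%" PRIu64 " imissed=%" PRIu64
               " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.opackets,
               stats.ierrors, stats.imissed, stats.rx_nombuf);
    }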

Regards,
Andriy

On Mon, May 22, 2017 at 11:40 AM,  <dfernandes at toulouse.viveris.com> wrote:
> Hi !
>
> I performed many tests using Pktgen and it seems to work much better.
> However, I observed that in one of the tests 2 packets were dropped. In
> this test I sent packets between the 2 physical ports in bidirectional
> mode for 24 hours. The packet size was 450 bytes and the rate on both
> ports was 1500 Mbps.
>
> The port stats I got are the following:
>
>
> ** Port 0 **  Tx: 34481474912. Rx: 34481474846. Dropped: 2
> ** Port 1 **  Tx: 34481474848. Rx: 34481474912. Dropped: 0
>
> DEBUG portStats = {
>   [1] = {
>     ["ipackets"] = 34481474912,
>     ["ierrors"] = 0,
>     ["rx_nombuf"] = 0,
>     ["ibytes"] = 15378737810752,
>     ["oerrors"] = 0,
>     ["opackets"] = 34481474848,
>     ["obytes"] = 15378737782208,
>   },
>   [0] = {
>     ["ipackets"] = 34481474846,
>     ["ierrors"] = 1,
>     ["rx_nombuf"] = 0,
>     ["ibytes"] = 15378737781316,
>     ["oerrors"] = 0,
>     ["opackets"] = 34481474912,
>     ["obytes"] = 15378737810752,
>   },
>   ["n"] = 2,
> }
>
> So 2 packets were dropped by port 0, and I see that the "ierrors"
> counter has a value of 1. Do you know what this counter represents? And
> how should it be interpreted?
> By the way, I also performed the same test with the packet size changed
> to 1518 bytes and the rate to 4500 Mbps (on each port), and 0 packets
> were dropped.
>
> David
>
>
>
>
> On 17.05.2017 09:53, dfernandes at toulouse.viveris.com wrote:
>>
>> Thanks for your response!
>>
>> I have installed Pktgen and I will perform some tests. So far it seems
>> to work fine. I'll keep you informed. Thanks again.
>>
>> David
>>
>> On 12.05.2017 18:18, Wiles, Keith wrote:
>>>>
>>>> On May 12, 2017, at 10:45 AM, dfernandes at toulouse.viveris.com wrote:
>>>>
>>>> Hi !
>>>>
>>>> I am working with MoonGen, which is a fully scriptable packet
>>>> generator built on DPDK.
>>>> (→ https://github.com/emmericp/MoonGen)
>>>>
>>>> The system on which I perform tests has the following characteristics:
>>>>
>>>> CPU : Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
>>>> NIC : X540-AT2 with 2x10GbE ports
>>>> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>>>>
>>>> I coded a MoonGen script which asks DPDK to transmit packets from one
>>>> physical port and to receive them on the second physical port. The 2
>>>> physical ports are directly connected with an RJ-45 Cat6 cable.
>>>>
>>>> The issue is that I run the same test, with exactly the same script
>>>> and the same parameters, several times, and the results show random
>>>> behavior. For most of the tests there are no losses, but for some of
>>>> them I observe packet losses. The percentage of lost packets is highly
>>>> variable. It happens even when the packet rate is very low.
>>>>
>>>> Some examples of randomly failing tests:
>>>>
>>>> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) →
>>>> 10170 lost packets
>>>>
>>>> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) →
>>>> ALL packets lost
>>>>
>>>>
>>>> I tested the following system modifications without success:
>>>>
>>>> # BIOS parameters :
>>>>
>>>>    Hyperthreading : enabled (because the machine has only 2 cores)
>>>>    Multi-processor : enabled
>>>>    Virtualization Technology (VT-x) : disabled
>>>>    Virtualization Technology for Directed I/O (VT-d) : disabled
>>>>    Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors) : disabled
>>>>    NUMA unavailable
>>>>
>>>> # use of isolcpus in order to isolate the cores which are in charge of
>>>> transmission and reception
>>>>
>>>> # hugepages size = 1048576 kB
>>>>
>>>> # size of buffer descriptors : tried with Tx = 512 descriptors and
>>>> Rx = 128 descriptors, and also with Tx = 4096 descriptors and Rx =
>>>> 4096 descriptors
>>>>
>>>> # Tested with 2 different X540-T2 NIC units
>>>>
>>>> # I tested everything with a Dell FC430, which has an Intel Xeon
>>>> E5-2660 v3 CPU @ 2.6GHz with 10 cores and 2 threads/core (tested with
>>>> and without hyper-threading)
>>>>    → same results, or even worse
>>>>
>>>>
>>>> Remark concerning the NIC stats:
>>>>     I used the rte_eth_stats struct in order to get more information
>>>> about the losses, and I observed that in some cases, when there are
>>>> packet losses, the ierrors value is > 0 and also ierrors + imissed +
>>>> ipackets < opackets. In other cases I get ierrors = 0 and imissed +
>>>> ipackets = opackets, which makes more sense.
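>>>>
>>>> (A hedged sanity-check sketch of that accounting identity, assuming
>>>> the counters are read after traffic has fully stopped; the function
>>>> and parameter names here are made up for illustration:)
>>>>
>>>>     #include <inttypes.h>
>>>>     #include <stdio.h>
>>>>     #include <rte_ethdev.h>
>>>>
>>>>     /* On a direct port-to-port link, every frame the peer transmits
>>>>      * should end up on the receiver as received, missed (RX
>>>>      * overflow) or errored. */
>>>>     static void check_accounting(uint16_t tx_port, uint16_t rx_port)
>>>>     {
>>>>         struct rte_eth_stats tx, rx;
>>>>
>>>>         rte_eth_stats_get(tx_port, &tx);
>>>>         rte_eth_stats_get(rx_port, &rx);
>>>>
>>>>         uint64_t accounted = rx.ipackets + rx.imissed + rx.ierrors;
>>>>         if (accounted != tx.opackets)
>>>>             printf("unaccounted frames: %" PRIu64 "\n",
>>>>                    tx.opackets - accounted);
>>>>     }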
>>>>
>>>> What could be the origin of that erroneous packet counting?
>>>>
>>>> Do you have any explanation for that behaviour?
>>>
>>>
>>> Not knowing MoonGen at all other than a brief look at the source, I
>>> may not be much help, but I have a few ideas to help locate the
>>> problem.
>>>
>>> Try using testpmd in tx-only mode, or try Pktgen, to see if you get the
>>> same problem. I hope this will narrow the problem down to a specific
>>> area. We know DPDK works when correctly coded, and testpmd/Pktgen are
>>> known to work.
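>>>
>>> For example, with testpmd in interactive mode (the core list and
>>> memory-channel count below are just placeholders for your machine):
>>>
>>>     ./testpmd -l 0-2 -n 4 -- -i
>>>     testpmd> set fwd txonly
>>>     testpmd> start
>>>     testpmd> show port stats all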
>>>
>>>>
>>>> Thanks in advance.
>>>>
>>>> David
>>>
>>>
>>> Regards,
>>> Keith
>
>



-- 
Andriy Berestovskyy

