[dpdk-dev] XL710 with i40e driver drops packets on RX even on a small rates.

Martin Weiser martin.weiser at allegro-packets.com
Tue Jan 3 13:18:36 CET 2017


Hello,

we are also seeing this issue on one of our test systems, while it does
not occur on other test systems running the same DPDK version (we tested
16.11 and the current master).

The system on which we can reproduce this issue also has an X552 ixgbe
NIC which can forward the exact same traffic using the same testpmd
parameters without a problem.
Even if we install an 82599ES ixgbe NIC in the same PCI slot that the
XL710 was in, the 82599ES can forward the traffic without any drops.

Like in the issue reported by Ilya, all packet drops occur on the testpmd
side and are accounted as 'imissed'. Increasing the number of rx
descriptors only helps a little at low packet rates.

Drops start occurring at pretty low packet rates, around 100000 packets
per second.

Any suggestions would be greatly appreciated.

Best regards,
Martin



On 22.08.16 14:06, Ilya Maximets wrote:
> Hello, All.
>
> I've run into a really bad situation with packet drops at small
> packet rates (~45 Kpps) while using an XL710 NIC with the i40e DPDK driver.
>
> The issue was found while testing PHY-VM-PHY scenario with OVS and
> confirmed on PHY-PHY scenario with testpmd.
>
> DPDK version 16.07 was used in all cases.
> XL710 firmware-version: f5.0.40043 a1.5 n5.04 e2505
>
> Test description (PHY-PHY):
>
> 	* Following cmdline was used:
>
> 	    # n_desc=2048
> 	    # ./testpmd -c 0xf -n 2 --socket-mem=8192,0 -w 0000:05:00.0 -v \
> 	                -- --burst=32 --txd=${n_desc} --rxd=${n_desc} \
> 	                --rxq=1 --txq=1 --nb-cores=1 \
> 	                --eth-peer=0,a0:00:00:00:00:00 --forward-mode=mac
>
> 	* DPDK-Pktgen application was used as a traffic generator.
> 	  Single flow generated.
>
> Results:
>
> 	* Packet size: 128B, rate: 90% of 10Gbps (~7.5 Mpps):
>
> 	  On the generator's side:
>
> 	  Total counts:
> 		Tx    :      759034368 packets
> 		Rx    :      759033239 packets
> 		Lost  :           1129 packets
>
> 	  Average rates:
> 		Tx    :        7590344 pps
> 		Rx    :        7590332 pps
> 		Lost  :             11 pps
>
> 	  All of these dropped packets are counted as RX-dropped on testpmd's side:
>
> 	  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
> 	  RX-packets: 759033239      RX-dropped: 1129          RX-total: 759034368
> 	  TX-packets: 759033239      TX-dropped: 0             TX-total: 759033239
> 	  +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> 	  At the same time a 10G NIC with the IXGBE driver works perfectly
> 	  without any packet drops in the same scenario.
>
> The situation is much worse in the PHY-VM-PHY scenario with OVS:
>
> 	* The testpmd application is used inside the guest to forward incoming
> 	  packets (almost the same cmdline as for PHY-PHY).
>
> 	* For 256 B packets at a rate of 1% of 10Gbps (~45 Kpps):
>
> 	  Total counts:
> 	        Tx    :        1358112 packets
> 	        Rx    :        1357990 packets
> 	        Lost  :            122 packets
>
> 	  Average rates:
> 	        Tx    :          45270 pps
> 	        Rx    :          45266 pps
> 	        Lost  :              4 pps
>
> 	  All 122 dropped packets can be found in the rx_dropped counter:
>
> 	    # ovs-vsctl get interface dpdk0 statistics:rx_dropped
> 	    122
>
> 	 And again, no issues with IXGBE in the exact same scenario.
>
>
> Results of my investigation:
>
> 	* I found that all of these packets are 'imissed'. This means that
> 	  the rx descriptor ring was overflowed.
>
> 	* I've modified the i40e driver to check the real number of free
> 	  descriptors not yet filled by the NIC and found that the HW fills
> 	  rx descriptors at an uneven rate. It looks like it fills them in
> 	  huge batches.
>
> 	* So, the root cause of the packet drops with the XL710 is the somehow
> 	  uneven rate at which the NIC fills the hw rx descriptors. This leads
> 	  to exhaustion of the rx descriptors and packet drops by the hardware.
> 	  The 10G IXGBE NIC works more smoothly and its driver is able to
> 	  refill the hw ring with rx descriptors in time.
>
> 	* The issue becomes worse with OVS because of the much bigger latencies
> 	  between 'rte_eth_rx_burst()' calls.
>
> The easiest solution to this problem is to increase the number of RX
> descriptors. Increasing it up to 4096 eliminates the packet drops but
> decreases performance a lot:
>
> 	For OVS PHY-VM-PHY scenario by 10%
> 	For OVS PHY-PHY scenario by 20%
> 	For the testpmd PHY-PHY scenario by 17% (22.1 Mpps --> 18.2 Mpps for 64B packets)
>
> As a result we have a trade-off between a zero drop rate at small packet
> rates and the higher maximum performance, which is very sad.
>
> Using 16B descriptors doesn't really help with performance.
> Upgrading the firmware from version 4.4 to 5.04 didn't help with the drops.
>
> Any thoughts? Can anyone reproduce this?
>
> Best regards, Ilya Maximets.


