Bug 924 - Mellanox ConnectX-6 DPDK dpdk-testpmd receive performance is very low for UDP packets with bad length and bad checksum
Summary: Mellanox ConnectX-6 DPDK dpdk-testpmd receive performance is very low for UDP packets with bad length and bad checksum
Status: IN_PROGRESS
Alias: None
Product: DPDK
Classification: Unclassified
Component: ethdev
Version: 21.11
Hardware: x86 Linux
Importance: High normal
Target Milestone: ---
Assignee: dev
URL:
Depends on:
Blocks:
 
Reported: 2022-01-20 10:01 CET by 188989801
Modified: 2022-10-08 04:12 CEST
CC: 2 users
Description 188989801 2022-01-20 10:01:43 CET
I use Ixia to construct two streams:
flow1: UDP, 64-byte small packets
flow2: UDP, 64-byte small packets with bad length and bad checksum
Sent at 30 Gbps, 57 million pps.


./dpdk-testpmd -l 4-22 -n 8 -- -i --rxq 19 --txq 19 --nb-cores 18 --rxd 2048 --txd 2048 --portmask 0xff

set fwd rxonly
start

  ######################## NIC statistics for port 6  ########################
  RX-packets: 178423861761 RX-missed: 646877     RX-bytes:  11419127182586
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:      5904277          Rx-bps:   3022990248
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
Received about 3 Gbps, 5.9 million pps; only about one tenth of the traffic was received.

The drops show up in the rx_discards_phy counter:
[root@localhost ~]#  ethtool -S enp202s0f0 |grep dis
     rx_discards_phy: 48832790759
     tx_discards_phy: 0
     rx_prio0_discards: 48832470594
     rx_prio1_discards: 0
     rx_prio2_discards: 0
     rx_prio3_discards: 0
     rx_prio4_discards: 0
     rx_prio5_discards: 0
     rx_prio6_discards: 0
     rx_prio7_discards: 0


When I change flow2 to well-formed UDP packets, everything is normal.

  ######################## NIC statistics for port 6  ########################
  RX-packets: 179251451823 RX-missed: 660384     RX-bytes:  11472092947106
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     57553942          Rx-bps:  29467618576
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

Received 29.4 Gbps, 57 million pps.



[root@localhost ~]# mlxfwmanager 
Querying Mellanox devices firmware ...

Device #1:
----------

  Device Type:      ConnectX6
  Part Number:      MCX653106A-ECA_Ax
  Description:      ConnectX-6 VPI adapter card; 100Gb/s (HDR100; EDR IB and 100GbE); dual-port QSFP56; PCIe3.0 x16; tall bracket; ROHS R6
  PSID:             MT_0000000224
  PCI Device Name:  0000:ca:00.0
  Base MAC:         08c0eb204e5a
  Versions:         Current        Available     
     FW             20.30.1004     20.32.1010    
     PXE            3.6.0301       3.6.0502      
     UEFI           14.23.0017     14.25.0017    

  Status:           Update required
Comment 1 188989801 2022-01-27 10:56:37 CET
Can anyone help me!!!
Comment 2 Asaf Penso 2022-03-13 12:28:10 CET
In case of traffic with bad checksums, please use the MPRQ vectorized burst function by setting mprq_en=1 in the devargs.
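The suggestion above maps to an mlx5 PMD device argument passed on the EAL command line. A minimal sketch of the adjusted invocation, assuming the device at PCI address 0000:ca:00.0 (taken from the mlxfwmanager output earlier in this report) and the same queue/core layout the reporter used:

```shell
# Attach the port with the mlx5 devarg mprq_en=1, which enables
# Multi-Packet RX Queue and selects a burst function that copes
# better with bad-checksum traffic. The PCI address is the one
# reported by mlxfwmanager; substitute your own device.
./dpdk-testpmd -l 4-22 -n 8 -a 0000:ca:00.0,mprq_en=1 -- \
    -i --rxq 19 --txq 19 --nb-cores 18 --rxd 2048 --txd 2048
```

Note that `-a` (allowlist) replaces the older `-w` option in DPDK 21.11; devargs are appended to the PCI address after a comma.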
Comment 3 Asaf Penso 2022-08-28 22:46:11 CEST
Is this issue still relevant?
Comment 4 188989801 2022-10-08 04:12:06 CEST
Hi @Asaf,

https://bugs.dpdk.org/show_bug.cgi?id=1053

How to solve this problem?
