Bug 930
| Field | Value |
|---|---|
| Summary | ConnectX6 DPDK dpdk-testpmd: receive TCP/UDP mixed flow performance is very low |
| Product | DPDK |
| Component | ethdev |
| Status | CONFIRMED |
| Severity | normal |
| Priority | High |
| Version | 21.11 |
| Target Milestone | --- |
| Hardware | x86 |
| OS | Linux |
| Reporter | killers (killerstemp) |
| Assignee | Asaf Penso (asafp) |
| CC | asafp, martin.weiser, rtox |
Description
killers
2022-01-28 06:47:54 CET
Hello, to get the best results for mixed traffic you need to set the correct value for the CQE compression devarg. See the following documentation snippet (I suggest reading 36.5.3.2. Driver options in full):

> rxq_cqe_comp_en parameter [int]
> Specifying 4 as a rxq_cqe_comp_en value selects L3/L4 Header format for better compression rate in case of mixed TCP/UDP and IPv4/IPv6 traffic. CQE compression format selection requires DevX to be enabled.

Please let us know the result.

Hello, with rxq_cqe_comp_en=4 the performance is better. Now there is a new problem: on 3rd Gen Intel® Xeon® Scalable Processors, receive performance for TCP/UDP packets with bad length or bad checksum is very low.

The test environment is the same as above. Ixia is used to construct two streams, 20 Gbps / 29,760,000 pps in total:

- flow1: TCP, 64-byte small packets, incorrect TCP checksum, sent at 10 Gbps / 14,880,000 pps
- flow2: TCP, 64-byte small packets, correct TCP checksum, sent at 10 Gbps / 14,880,000 pps

./dpdk-testpmd -l 4-22 -n 8 -- -i --rxq 19 --txq 19 --nb-cores 18 --rxd 2048 --txd 2048 --portmask 0xff
set fwd rxonly
start

Rx-pps: 5134559, and rx_discards_phy drops about ninety percent of the packets per second! This problem only appears on the 3rd Gen Intel® Xeon® Scalable Processors platform.

Can you please elaborate on the motivation for such a test? Why would you like to measure bad-checksum traffic performance?

An attacker will send attack traffic, which may consist of various malformed packets.

Can you please try to change the rx burst function? Currently it uses the vectorized one. Can you add the devarg mprq_en=1 and check the result? Another option to handle this is to add an rte_flow rule that matches on the csum integrity bit and does a drop action. More info can be found in http://doc.dpdk.org/guides/nics/mlx5.html

After using the mprq_en=1 parameter the problem remains the same. The following mixed flows all show very low performance, with massive rx_discards_phy packet drops:

- TCP and UDP mixed flow
- TCP (incorrect header checksum) mixed with TCP or UDP (correct header checksum)
- UDP (incorrect header checksum) mixed with TCP or UDP (correct header checksum)
- TCP (incorrect header length) mixed with TCP or UDP (correct header length)
- and many similar combinations

What is the reason behind it? Is it because the NIC and the CPU core operate on the CQE at the same time? Is there any way to solve it, for example by adjusting some parameters?

Regarding "Can you please try to change the rx burst function": I don't understand. Do you mean I should modify the code of rte_eth_rx_burst?

(Asaf Penso, comment #8) I can see two options to handle this case:
1. Add an rte_flow rule that matches on the csum integrity bit and does a drop action. More info can be found in http://doc.dpdk.org/guides/nics/mlx5.html
2. Disable checksum validation altogether in case an application doesn't care about checksums.
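As an illustration of option 1 above (not code from this report), here is a minimal sketch of an rte_flow rule that matches packets whose L4 checksum integrity bit is clear and drops them. It assumes a DPDK version and PMD that support RTE_FLOW_ITEM_TYPE_INTEGRITY, subject to NIC/firmware support and the constraints in the mlx5 guide (for example, the integrity item may need to be accompanied by the protocol items it refers to and placed in a non-root group). The port id, group number, and the choice of TCP as the matched protocol are placeholders.

```c
#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Sketch: drop ingress packets whose L4 checksum is reported bad by the NIC.
 * port_id and attr.group are placeholders; item order/placement may need
 * adjusting per the mlx5 guide and the NIC/FW capabilities. */
static int
install_bad_l4_csum_drop(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1, .group = 1 };

	/* Match packets where the l4_csum_ok integrity bit is 0 (bad checksum). */
	struct rte_flow_item_integrity spec = { .level = 0 }; /* outer headers */
	struct rte_flow_item_integrity mask = { .level = 0 };
	spec.l4_csum_ok = 0;
	mask.l4_csum_ok = 1;

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_INTEGRITY, .spec = &spec, .mask = &mask },
		/* The PMD may require the protocol items the integrity bits refer to. */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	struct rte_flow_error err;
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
	if (flow == NULL) {
		printf("flow create failed: %s\n", err.message ? err.message : "unknown");
		return -1;
	}
	return 0;
}
```

As the reporter notes further down in the thread, the suggested methods did not remove the rx_discards_phy drops in their setup, so such a rule may or may not help depending on where the NIC drops the packets.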
(In reply to Asaf Penso from comment #8)
> I can see two options to handle this case:
> 1. Add an rte_flow rule that matches on the csum integrity bit and does a drop action. More info can be found in http://doc.dpdk.org/guides/nics/mlx5.html
> 2. Disable checksum validation altogether in case an application doesn't care about checksums.

I tried all the methods, but they didn't take effect. Even with checksum disabled in DPDK, the network card still calculates the checksum.

(comment #10) Hi, we have continued to look into this issue and would like to know the following results:
1. mixed TCP/UDP traffic without CQE compression (set the devarg rxq_cqe_comp_en=0)
2. mixed TCP/UDP traffic with CQE compression
3. mixed TCP/UDP traffic with CQE compression and checksum errors

Hi @Asaf, we are observing the same issue despite rxq_cqe_comp_en=4. On a 100 Gbps load we still drop at least 40% of the packets, reported in xstats as rx_phy_discard_packets. See bug report https://bugs.dpdk.org/show_bug.cgi?id=1053

Hello, can you please provide the 3 results mentioned in comment #10?

Hi @Asaf, it's been a year. Is there new network card hardware, or a new OFED/FW solution, for the bad-csum performance issue?
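For reference, a rough sketch of what option 2 from comment #8 amounts to at the ethdev level: do not request any RX checksum offloads when configuring the port, so the application ignores checksum validation entirely. This is an illustration only (port id and queue counts are placeholders), and, as reported above, it did not stop the rx_discards_phy drops in the reporter's environment.

```c
#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch of option 2 from comment #8: configure the port without requesting
 * any RX checksum offloads, so checksum results are not asked for by the
 * application. port_id and queue counts are placeholders. */
static int
configure_port_without_rx_csum(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			/* Deliberately no RTE_ETH_RX_OFFLOAD_IPV4_CKSUM /
			 * RTE_ETH_RX_OFFLOAD_UDP_CKSUM /
			 * RTE_ETH_RX_OFFLOAD_TCP_CKSUM here. */
			.offloads = 0,
		},
	};

	int ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
	if (ret < 0)
		printf("rte_eth_dev_configure(port %u) failed: %d\n", port_id, ret);
	return ret;
}
```

Note that the NIC may still compute and check checksums internally; this setting only controls whether the results are requested and reported to the application, which is consistent with the reporter's observation that "the network card still calculates the checksum".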