Bug 798 - mlx5 hw flow performance problem
Summary: mlx5 hw flow performance problem
Status: IN_PROGRESS
Alias: None
Product: DPDK
Classification: Unclassified
Component: ethdev
Version: 21.08
Hardware: x86 Linux
Importance: Normal critical
Target Milestone: ---
Assignee: Asaf Penso
URL:
Depends on:
Blocks:
 
Reported: 2021-08-30 09:40 CEST by DKCopy
Modified: 2022-01-24 10:19 CET
CC List: 3 users



Description DKCopy 2021-08-30 09:40:48 CEST
DPDK: 18.11/19.11/20.11/21.02/21.05/21.08
NIC: Mellanox Technologies MT27800 Family [ConnectX-5]
FW: firmware-version: 16.31.1014 (MT_0000000012)
CPU: Intel(R) Xeon(R) Platinum 8170M CPU @ 2.10GHz
KERNEL: 5.4.17-2102.200.13.uek
PKTGEN: IxNetwork 9.00.1915.16
MLNX OFED: MLNX_OFED_LINUX-5.0-2.1.8.0

testpmd:

testpmd -l 26-51 --socket-mem=4096,4096 -w d8:00.0,dv_flow_en=0,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=1 -- -i --rxq=16 --txq=16 --nb-cores=16 --forward-mode=icmpecho --numa --enable-rx-cksum -a --rxd=2048 --txd=2048 --burst=64


testpmd -l 26-51 --socket-mem=4096,4096 -w d8:00.0,dv_flow_en=1,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=1 -- -i --rxq=16 --txq=16 --nb-cores=16 --forward-mode=icmpecho --numa --enable-rx-cksum -a --rxd=2048 --txd=2048 --burst=64


flow:
testpmd> flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions queue index  15 / end
Flow rule #0 created
testpmd> flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / udp / end actions queue index  15 / end
Flow rule #1 created
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
Flow rule #2 created


With these flows installed, no packets matched and testpmd received only 60.1 Mpps:
testpmd> show  port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 6512958506 RX-missed: 4476258    RX-bytes:  390777510360
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     60163078          Rx-bps:  28878277584
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> 

After flushing all the flows, testpmd received 148.8 Mpps:
testpmd> flow flush  0
testpmd> show  port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 10076834471 RX-missed: 4482703    RX-bytes:  604610068260
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:    148061620          Rx-bps:  71069577904
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd>
Comment 1 Asaf Penso 2021-10-03 13:52:36 CEST
Hi,

Can you describe your traffic pattern?
Please note all flows have the same priority (0).
It is possible that all traffic matches flows #1 or #2 so you see the performance of a single queue.
Another thing to try is to replace queue 15 with RSS to see the result.
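For example, a sketch of that suggestion, reusing the match pattern of rule #0 but replacing the fixed queue with the same plain "rss" action used elsewhere in this report:

testpmd> flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions rss / end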
Comment 2 DKCopy 2021-10-09 10:12:52 CEST
Thanks a lot!
All traffic uses UDP, with 1024 source addresses configured (14.175.242.132 [RepeatableRandom: 0.0.0.0, 223.255.255.255, 1, 1024]); the destination address is fixed (10.245.102.44).
I have confirmed the traffic is not landing in a single RX queue.
Comment 3 Asaf Penso 2021-10-09 19:45:38 CEST
After checking the configuration the issue is gone.
Comment 4 DKCopy 2021-10-10 04:42:57 CEST
How did you test it?
I tested it again, but the issue is still there.
Comment 5 Asaf Penso 2021-10-14 08:54:05 CEST
I misunderstood and thought it was resolved for you.
Can you remove flows #1 and #2? Let's leave only the RSS flow and see.
Also, can you provide the xstats output? Then we can see the distribution of the packets among the queues.
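For the xstats, something like the following should dump the per-queue counters (a sketch, assuming port 0 as in the rest of this report):

testpmd> show port xstats 0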
Comment 6 Ajit Khaparde 2021-10-25 19:16:44 CEST
Asaf, Assigning to you since you are following this up. Thanks
Comment 7 DKCopy 2021-10-28 08:59:01 CEST
(In reply to Asaf Penso from comment #5)
> I missunderstood and thought it was resolved for you.
> Can you remove flows #1 and #2? Let's leave only the RSS flow and see.
> Also, can  you provide the xstats output? We can see the distribution of the
> packets among the queues.

If the device has only the single RSS flow, RX can reach wire speed.
With more flows that do not match, RX performance drops.
Comment 8 Asaf Penso 2021-11-03 14:09:58 CET
Let's start again, since I think we don't have a clear understanding.

According to this traffic:
>all traffic use UDP protocol, and config 1024 source address(14.175.242.132
>[RepeatableRandom: 0.0.0.0, 223.255.255.255, 1, 1024]), dest address is
>fixed(10.245.102.44)!

These flows would not hit, because:
Flow rule #0 is matching on TCP
Flow rule #1 is ipv4 dst 1.1.1.1
There is a packet-processing impact: the traffic first has to miss these two rules before it can hit the other flow.

So, if you only have Flow rule #2, what perf result do you see?
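That would be something along these lines, just reusing the commands already shown above: flush everything, then re-create only the RSS rule:

testpmd> flow flush 0
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end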
Comment 9 DKCopy 2021-11-07 19:24:07 CET
(In reply to Asaf Penso from comment #8)
> Let's start again, since i think we don't have a clear understanding.
> 
> According to this traffic:
> >all traffic use UDP protocol, and config 1024 source address(14.175.242.132
> >[RepeatableRandom: 0.0.0.0, 223.255.255.255, 1, 1024]), dest address is
> >fixed(10.245.102.44)!
> 
> These flows would not hit, because:
> Flow rule #0 is matching on TCP
> Flow rule #1 is ipv4 dst 1.1.1.1
> There is an impact with the packet processing that we first need to miss
> these 2 before hitting the other flow.
> 
> So, if you only have Flow rule #2, what perf result do you see?

I'll test this again; please wait for my results.
Comment 10 DKCopy 2021-12-22 08:42:17 CET
Hi, I have tested again and the results are below.
The IXIA traffic is 64-byte UDP packets.
The packets' IPv4 destination address is 10.245.28.42 and the UDP destination port is 8080,
so no packets match any of the flows.
As the number of flows increases, the NIC RX performance (pps) decreases.

DPDK: 21.08
NIC: 
Mellanox Technologies MT27800 Family [ConnectX-5]
FW: firmware-version: 16.32.1010 (MT_0000000012)
Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx] 
FW: firmware-version: 22.28.1002 (MT_0000000359)

CPU: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz x2 (48 core x 2)
KERNEL: 5.4.17-2102.200.13.uek
PKTGEN: IxNetwork 9.00.1915.16
MLNX OFED: MLNX_OFED_LINUX-5.0-2.1.8.0

DPDK testpmd:
./testpmd-v20.8-cascadelake  -l 1,24-48  -m 4096 -n 4  -w '54:00.1,dv_flow_en=1,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=1' \
	--log-level="lib.eal":8 --log-level=pmd:8 --log-level="pmd.net.mlx5":3  \
	-- -i  --rxq=24 --txq=24 --nb-cores=24 --forward-mode icmpecho --numa \
	--enable-rx-cksum --auto-start


DPDK testpmd with no flow:
testpmd> show port stats  all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 32261848744 RX-missed: 3607570    RX-bytes:  1935710931360
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 2          TX-errors: 0          TX-bytes:  120

  Throughput (since last show)
  Rx-pps:    149175705          Rx-bps:  71604353904
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
testpmd> 


DPDK testpmd with flow:
flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.2 / tcp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / udp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.2 / udp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 5353 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 5060 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 8081 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 5353 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 5060 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 8081 / end actions count / rss / end

testpmd> show port stats  all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 41411062408 RX-missed: 3607570    RX-bytes:  2484663748260
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 2          TX-errors: 0          TX-bytes:  120

  Throughput (since last show)
  Rx-pps:     96406314          Rx-bps:  46275071592
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
ID	Group	Prio	Attr	Rule
0	0	0	i--	ETH IPV4 TCP => QUEUE
1	0	0	i--	ETH IPV4 TCP => QUEUE
2	0	0	i--	ETH IPV4 UDP => QUEUE
3	0	0	i--	ETH IPV4 UDP => QUEUE
4	0	0	i--	ETH IPV4 UDP => COUNT RSS
5	0	0	i--	ETH IPV4 TCP => COUNT RSS
6	0	0	i--	ETH IPV4 TCP => COUNT RSS
7	0	0	i--	ETH IPV4 TCP => COUNT RSS
8	0	0	i--	ETH IPV4 UDP => COUNT RSS
9	0	0	i--	ETH IPV4 UDP => COUNT RSS
10	0	0	i--	ETH IPV4 UDP => COUNT RSS
testpmd>
Comment 11 DKCopy 2021-12-23 07:06:22 CET
(In reply to Asaf Penso from comment #8)
> Let's start again, since i think we don't have a clear understanding.
> 
> According to this traffic:
> >all traffic use UDP protocol, and config 1024 source address(14.175.242.132
> >[RepeatableRandom: 0.0.0.0, 223.255.255.255, 1, 1024]), dest address is
> >fixed(10.245.102.44)!
> 
> These flows would not hit, because:
> Flow rule #0 is matching on TCP
> Flow rule #1 is ipv4 dst 1.1.1.1
> There is an impact with the packet processing that we first need to miss
> these 2 before hitting the other flow.
> 
> So, if you only have Flow rule #2, what perf result do you see?

Hi, I have tested again and the results are below.
The IXIA traffic is 64-byte UDP packets.
The packets' IPv4 destination address is 10.245.28.42 and the UDP destination port is 8080,
so no packets match any of the flows.
As the number of flows increases, the NIC RX performance (pps) decreases.

DPDK: 21.08
NIC: 
Mellanox Technologies MT27800 Family [ConnectX-5]
FW: firmware-version: 16.32.1010 (MT_0000000012)
Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx] 
FW: firmware-version: 22.28.1002 (MT_0000000359)

CPU: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz x2 (48 core x 2)
KERNEL: 5.4.17-2102.200.13.uek
PKTGEN: IxNetwork 9.00.1915.16
MLNX OFED: MLNX_OFED_LINUX-5.0-2.1.8.0

DPDK testpmd:
./testpmd-v20.8-cascadelake  -l 1,24-48  -m 4096 -n 4  -w '54:00.1,dv_flow_en=1,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=1' \
	--log-level="lib.eal":8 --log-level=pmd:8 --log-level="pmd.net.mlx5":3  \
	-- -i  --rxq=24 --txq=24 --nb-cores=24 --forward-mode icmpecho --numa \
	--enable-rx-cksum --auto-start


DPDK testpmd with no flow:
testpmd> show port stats  all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 32261848744 RX-missed: 3607570    RX-bytes:  1935710931360
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 2          TX-errors: 0          TX-bytes:  120

  Throughput (since last show)
  Rx-pps:    149175705          Rx-bps:  71604353904
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
testpmd> 


DPDK testpmd with flow:
flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.2 / tcp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / udp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.2 / udp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 5353 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 5060 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 8081 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 5353 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 5060 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 8081 / end actions count / rss / end

testpmd> show port stats  all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 41411062408 RX-missed: 3607570    RX-bytes:  2484663748260
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 2          TX-errors: 0          TX-bytes:  120

  Throughput (since last show)
  Rx-pps:     96406314          Rx-bps:  46275071592
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
ID	Group	Prio	Attr	Rule
0	0	0	i--	ETH IPV4 TCP => QUEUE
1	0	0	i--	ETH IPV4 TCP => QUEUE
2	0	0	i--	ETH IPV4 UDP => QUEUE
3	0	0	i--	ETH IPV4 UDP => QUEUE
4	0	0	i--	ETH IPV4 UDP => COUNT RSS
5	0	0	i--	ETH IPV4 TCP => COUNT RSS
6	0	0	i--	ETH IPV4 TCP => COUNT RSS
7	0	0	i--	ETH IPV4 TCP => COUNT RSS
8	0	0	i--	ETH IPV4 UDP => COUNT RSS
9	0	0	i--	ETH IPV4 UDP => COUNT RSS
10	0	0	i--	ETH IPV4 UDP => COUNT RSS
testpmd>
Comment 12 DKCopy 2021-12-23 07:21:25 CET
(In reply to Asaf Penso from comment #8)
> Let's start again, since i think we don't have a clear understanding.
> 
> According to this traffic:
> >all traffic use UDP protocol, and config 1024 source address(14.175.242.132
> >[RepeatableRandom: 0.0.0.0, 223.255.255.255, 1, 1024]), dest address is
> >fixed(10.245.102.44)!
> 
> These flows would not hit, because:
> Flow rule #0 is matching on TCP
> Flow rule #1 is ipv4 dst 1.1.1.1
> There is an impact with the packet processing that we first need to miss
> these 2 before hitting the other flow.
> 
> So, if you only have Flow rule #2, what perf result do you see?


With a matching flow rule, CX5 RX performance is also reduced:

testpmd> flow create 0 ingress pattern eth / ipv4 dst is 10.245.28.42 / end actions mark id 42 / count / rss / end
Flow rule #0 created
testpmd> flow list 0
ID	Group	Prio	Attr	Rule
0	0	0	i--	ETH IPV4 => MARK COUNT RSS

testpmd> show  port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 7634030896 RX-missed: 36190      RX-bytes:  458041856406
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 1          TX-errors: 0          TX-bytes:  66

  Throughput (since last show)
  Rx-pps:    142120399          Rx-bps:  68217734104
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
Comment 13 Asaf Penso 2021-12-23 12:26:40 CET
If you add more rules, the HW needs to process the packet against these rules to determine what actions to take.
Even if the packet doesn't match any rule, the HW still needs time to go over *all* the flows.
It is expected to see some degradation when adding more rules, even if they don't match.
It's also expected to see some degradation when the packet is matched. In this case, the HW not only does the matching, but also the actions.
When you offload flows, you want to offload the logic from the SW and accelerate in the HW.

As I wrote above, this is expected.
Do you still have some concern or an issue?
Comment 14 DKCopy 2021-12-30 10:42:27 CET
(In reply to Asaf Penso from comment #13)
> If you add more rules the HW needs to process the packet against these rules
> to determine what actions to do.
> Even if the packet doesn't match any rule, the HW still needs time to go
> over *all* the flows*.
> It is expected to see some degradation when adding more rules, even if they
> don't match.
> It's also expected to see some degradation when the packet is matched. In
> this case, the HW not only does the matching, but also the actions.
> When you offload flows, you want to offload the logic from the SW and
> accelerate in the HW.
> 
> As I wrote above, this is expected.
> Do you still have some concern or an issue?

Thanks a lot!
I see.
However, I tested multiple flows with an Intel E810 100G NIC and did not see this performance problem.
Comment 15 DKCopy 2021-12-30 13:45:50 CET
(In reply to Asaf Penso from comment #13)
> If you add more rules the HW needs to process the packet against these rules
> to determine what actions to do.
> Even if the packet doesn't match any rule, the HW still needs time to go
> over *all* the flows*.
> It is expected to see some degradation when adding more rules, even if they
> don't match.
> It's also expected to see some degradation when the packet is matched. In
> this case, the HW not only does the matching, but also the actions.
> When you offload flows, you want to offload the logic from the SW and
> accelerate in the HW.
> 
> As I wrote above, this is expected.
> Do you still have some concern or an issue?

I started a new test with 70-byte UDP packets; IXIA sends at 138888888 pps,

but the CX5/CX6 can only receive 112744290 pps:
testpmd> show port stats  all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1235553968 RX-missed: 4095439    RX-bytes:  81546561888
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:    112744290          Rx-bps:  59528973864
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> 

But with 64-byte packets, the CX5/CX6 can receive all packets at wire speed (148809523 pps).
Comment 16 Asaf Penso 2022-01-02 10:25:23 CET
Can you try adding a single rule to group 0 that matches all traffic and jumps to group 1?
Something like:
flow create 0 ingress pattern eth / end actions jump group 1 / end

Then add all of your flows to group 1.

What results do you see now?
Comment 17 DKCopy 2022-01-03 06:47:52 CET
(In reply to Asaf Penso from comment #16)
> Can you try adding a single rule to group 0 to match all traffic and jump to
> group 1?
> something like:
> flow create 0 ingress pattern eth / end actions jump group id 1/ end
> 
> Then add all of your flows to group 1.
> 
> What results do you see now?

OK, I will test it and report the results tomorrow.
Comment 18 DKCopy 2022-01-04 08:22:22 CET
(In reply to Asaf Penso from comment #16)
> Can you try adding a single rule to group 0 to match all traffic and jump to
> group 1?
> something like:
> flow create 0 ingress pattern eth / end actions jump group id 1/ end
> 
> Then add all of your flows to group 1.
> 
> What results do you see now?

no flow rule:

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 13767079146 RX-missed: 20838      RX-bytes:  826024748892
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:    148808971          Rx-bps:  71428306336
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
testpmd> 

with sample flow rules:
flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
flow create 0 ingress group 1 pattern eth / ipv4 / end actions rss / end
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 18662603754 RX-missed: 29476      RX-bytes:  1119756225384
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:    109444777          Rx-bps:  52533493136
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
ID	Group	Prio	Attr	Rule
0	0	0	i--	ETH => JUMP
1	1	0	i--	ETH IPV4 => RSS
testpmd> 

with more flow rules:
flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
flow create 0 ingress group 1 pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions queue index  15 / end
flow create 0 ingress group 1 pattern eth / ipv4 dst is 2.2.2.2 / tcp / end actions queue index  15 / end
flow create 0 ingress group 1 pattern eth / ipv4 dst is 1.1.1.1 / udp / end actions queue index  15 / end
flow create 0 ingress group 1 pattern eth / ipv4 dst is 2.2.2.2 / udp / end actions queue index  15 / end
flow create 0 ingress group 1 pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / tcp dst is 5353 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / tcp dst is 5060 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / tcp dst is 8081 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / udp dst is 5353 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / udp dst is 5060 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / udp dst is 8081 / end actions count / rss / end
flow create 0 ingress group 1 pattern eth / ipv4 / end actions rss / end

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 26920802980 RX-missed: 32024      RX-bytes:  1615248178968
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     43985714          Rx-bps:  21113143032
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
ID	Group	Prio	Attr	Rule
0	0	0	i--	ETH => JUMP
1	1	0	i--	ETH IPV4 TCP => QUEUE
2	1	0	i--	ETH IPV4 TCP => QUEUE
3	1	0	i--	ETH IPV4 UDP => QUEUE
4	1	0	i--	ETH IPV4 UDP => QUEUE
5	1	0	i--	ETH IPV4 UDP => COUNT RSS
6	1	0	i--	ETH IPV4 TCP => COUNT RSS
7	1	0	i--	ETH IPV4 TCP => COUNT RSS
8	1	0	i--	ETH IPV4 TCP => COUNT RSS
9	1	0	i--	ETH IPV4 UDP => COUNT RSS
10	1	0	i--	ETH IPV4 UDP => COUNT RSS
11	1	0	i--	ETH IPV4 UDP => COUNT RSS
12	1	0	i--	ETH IPV4 => RSS
testpmd>
Comment 19 DKCopy 2022-01-04 08:43:33 CET
(In reply to Asaf Penso from comment #16)
> Can you try adding a single rule to group 0 to match all traffic and jump to
> group 1?
> something like:
> flow create 0 ingress pattern eth / end actions jump group id 1/ end
> 
> Then add all of your flows to group 1.
> 
> What results do you see now?

Without creating a group (all flows in the default group 0):

flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.2 / tcp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / udp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.2 / udp / end actions queue index  15 / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 5353 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 5060 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / tcp dst is 8081 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 5353 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 5060 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / udp dst is 8081 / end actions count / rss / end
flow create 0 ingress pattern eth / ipv4 / end actions rss / end

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 42109410797 RX-missed: 66319      RX-bytes:  2529872463822
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     54996566          Rx-bps:  26398351792
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> flow list 0
ID	Group	Prio	Attr	Rule
0	0	0	i--	ETH IPV4 TCP => QUEUE
1	0	0	i--	ETH IPV4 TCP => QUEUE
2	0	0	i--	ETH IPV4 UDP => QUEUE
3	0	0	i--	ETH IPV4 UDP => QUEUE
4	0	0	i--	ETH IPV4 UDP => COUNT RSS
5	0	0	i--	ETH IPV4 TCP => COUNT RSS
6	0	0	i--	ETH IPV4 TCP => COUNT RSS
7	0	0	i--	ETH IPV4 TCP => COUNT RSS
8	0	0	i--	ETH IPV4 UDP => COUNT RSS
9	0	0	i--	ETH IPV4 UDP => COUNT RSS
10	0	0	i--	ETH IPV4 UDP => COUNT RSS
11	0	0	i--	ETH IPV4 => RSS
