Bug 1057 - Unable to use flow rules on VF
Summary: Unable to use flow rules on VF
Status: RESOLVED WONTFIX
Alias: None
Product: DPDK
Classification: Unclassified
Component: ethdev
Version: 21.11
Hardware: x86 Linux
Importance: Normal normal
Target Milestone: ---
Assignee: Asaf Penso
URL:
Depends on:
Blocks:
 
Reported: 2022-07-22 16:57 CEST by Hrvoje
Modified: 2024-04-02 04:29 CEST
CC List: 5 users



Attachments
rte_config file (3.66 KB, text/x-csrc)
2022-07-22 16:57 CEST, Hrvoje

Description Hrvoje 2022-07-22 16:57:58 CEST
Created attachment 214
rte_config file

Hi.

I'm trying to use the rte_flow API to steer packets to different queues, but to no avail.

An important note here is that I'm working with a VF (SR-IOV) inside a VM. The VM runs Ubuntu 20.04.

The DPDK version used is 21.11.1 (statically compiled). Tests are done with the testpmd application.

I tested the same rule with the following DPDK drivers:
ixgbevf - NOT WORKING (X520)
mlx5 - WORKING
iavf - NOT WORKING (XXV710, XL710, E810).

Cards used are:

xx:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
	Subsystem: Intel Corporation Ethernet Server Adapter X520-2

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
	Subsystem: Hewlett Packard Enterprise Ethernet 10/25/Gb 2-port 661SFP28 Adapter

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)
	Subsystem: Intel Corporation Ethernet Converged Network Adapter XL710-Q2

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
	Subsystem: Intel Corporation Ethernet Network Adapter E810-C-Q2

The cards behave the same in different servers; I tried them on Sandy Bridge, Skylake and Cascade Lake architectures.

Drivers used (and versions):

ixgbe-5.12.5
ixgbevf-4.12.4
i40e-2.18.9
ice-1.7.16
iavf-4.2.7

Driver versions are the same in the host and the guest (VM).

The main question here is: do those cards support the rte_flow API applied on a VF port? If they do, what is wrong here? If needed, I can provide more details.

Here is an example:

# /tmp/dpdk-testpmd -c 0xf -n 4 -a 00:05.0 -- -i --rxq=4 --txq=4
EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_iavf (8086:154c) device: 0000:00:05.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[0]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[1]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[2]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[3]

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event
Port 0: 26:03:E7:02:8C:AB
Checking link statuses...
Done
testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 26:03:E7:02:8C:AB
Device name: 00:05.0
Driver name: net_iavf
Firmware-version: not available
Devargs: 
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 25 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 64
Supported RSS offload flow types:
  ipv4
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-sctp
  ipv4-other
  ipv6
  ipv6-frag
  ipv6-tcp
  ipv6-udp
  ipv6-sctp
  ipv6-other
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 4
Max possible RX queues: 256
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 4
Max possible TX queues: 256
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 0
Max segment number per MTU/TSO: 0
Device capabilities: 0x0( )
testpmd> flow create 0 ingress pattern ipv4 dst is 192.168.0.5 / end actions queue index 1 / end
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
testpmd> 

Attached is rte_config.h.

H.
Comment 1 Asaf Penso 2022-08-28 23:05:40 CEST
Hello,
Please ensure the VF is configured as trusted if you wish to use rte_flow with the mlx5 PMD.
You can find instructions and more details here:
http://doc.dpdk.org/guides/nics/mlx5.html#how-to-configure-a-vf-as-trusted
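For reference, the kernel-side trust flag is usually set from the host with iproute2. This is only a minimal sketch; the PF interface name and VF index are placeholders, not values from this report:

    # on the host: mark the VF used by the VM as trusted, then verify
    ip link set dev <pf-interface> vf <vf-index> trust on
    ip link show <pf-interface>

The firmware-side trust configuration is mlx5-specific and is described in the guide above.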
Comment 2 Hrvoje 2022-08-29 10:54:59 CEST
Hi.

Thank you for your comment. I forgot to mention it, but trusted mode was enabled in all cases.

Also, ignore the MAC address in the previous example. On some cards the iavf kernel driver assigns a random MAC on reload (module unload/load).

Regards,

H.
Comment 3 Asaf Penso 2022-08-29 12:21:36 CEST
Can you please confirm you configured both the FW and the kernel to be trusted?
Comment 4 Hrvoje 2022-08-29 18:55:45 CEST
Hi.

I did enable trusted mode on the card. For example, for the port assigned to the VM which runs the DPDK app:

testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: FA:16:3E:8D:7E:6E
Device name: 0000:00:05.0
Driver name: net_iavf
...

And on the host on which the VM is created:

root@xyz:~# ip l | grep -i "FA:16:3E:8D:7E:6E"
    vf 14     link/ether fa:16:3e:8d:7e:6e brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust on

So, trusted mode is enabled. I'm not sure what you mean by configuring both the "FW" and the "kernel" to be trusted.

Regards,

H.
Comment 5 Asaf Penso 2022-08-29 22:10:41 CEST
My note about configuring the FW to work with trusted VFs is relevant for the mlx5 PMD.
Comment 6 Kevin.Liu 2022-11-15 10:06:50 CET
Intel NICs do not support this type of rule.
You can try:
flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.5 / end actions queue index 1 / end
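As a side note, testpmd's "flow validate" command can be used to check whether the PMD accepts a rule without actually programming it; a minimal sketch with the same rule:

    testpmd> flow validate 0 ingress pattern eth / ipv4 dst is 192.168.0.5 / end actions queue index 1 / end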
Comment 7 Hrvoje 2023-01-16 11:29:07 CET
Hi.

You are correct: adding "eth" seems to get me past the error. But this only works for the E810 controller.

Is there a comprehensive list anywhere of which flows are supported on which cards? That is, what are the requirements for flows to be "accepted"?

And I'm still unable to make it work on the X520.

Additionally, Mellanox cards do not accept flows with MPLS in them. The error that is displayed says that either MPLS is not supported or it is not enabled. Where can I see which is the case for the different types of cards (ConnectX-4/5/6)?

Regards,

H.
Comment 8 Asaf Penso 2023-01-16 12:11:50 CET
For mlx5 devices, please refer to our docs:
http://doc.dpdk.org/guides/platform/mlx5.html
To enable MPLS, you need to turn on the flex parser configuration.

"
     enable MPLS flow matching:
     FLEX_PARSER_PROFILE_ENABLE=1
"
Comment 9 Hrvoje 2023-01-17 15:58:19 CET
Hi.

Thanks for the pointer. Unfortunately, it seems this is not enough; I'm getting the following error:

(-22): 13 protocol filtering not compatible with MPLS layer

I'm not sure what this means. The filter which causes this error is of the form eth / vlan / MPLS label=x, with an action to move the packet to queue index 1.

And yes, this also does not work for the Intel cards.

So, again, is there a comprehensive list anywhere of which flows are supported on which cards? Any more complex examples?

Regards,

H.
Comment 10 dengkaiwen 2023-12-01 07:07:13 CET
Hi,

You can refer to iavf_fdir.c; the iavf_fdir_pattern table lists all the FDIR flows supported by the iavf driver.
For ice and i40e, refer to ice_fdir.c and i40e_fdir.c.
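As a quick way to list the supported patterns without reading the whole file, a grep over the DPDK source tree can help; the path is relative to the DPDK root and the "iavf_pattern_" prefix is an assumption based on the driver's naming:

    grep -n "iavf_pattern_" drivers/net/iavf/iavf_fdir.c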

Thanks.
Comment 11 dengkaiwen 2024-04-02 04:29:58 CEST
Closing this ticket.
