[RFC] ethdev: sharing indirect actions between ports

Ori Kam orika at nvidia.com
Thu Jan 26 16:15:14 CET 2023



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko at oktetlabs.ru>
> Sent: Friday, 20 January 2023 14:23
> 
> On 1/18/23 19:37, Slava Ovsiienko wrote:
> >
> >
> >> -----Original Message-----
> >> From: Thomas Monjalon <thomas at monjalon.net>
> >> Sent: Wednesday, January 18, 2023 6:22 PM
> >> To: Slava Ovsiienko <viacheslavo at nvidia.com>; Ori Kam
> >> <orika at nvidia.com>
> >> Cc: dev at dpdk.org; Matan Azrad <matan at nvidia.com>; Raslan Darawsheh
> >> <rasland at nvidia.com>; andrew.rybchenko at oktetlabs.ru;
> >> ivan.malov at oktetlabs.ru; ferruh.yigit at amd.com
> >> Subject: Re: [RFC] ethdev: sharing indirect actions between ports
> >>
> >> 18/01/2023 16:17, Ori Kam:
> >>> From: Thomas Monjalon <thomas at monjalon.net>
> >>>> 28/12/2022 17:54, Viacheslav Ovsiienko:
> >>>>> The RTE Flow API implements the concept of shared objects, known
> >>>>> as indirect actions (RTE_FLOW_ACTION_TYPE_INDIRECT).
> >>>>> An application can create an indirect action of the desired type and
> >>>>> configuration with the rte_flow_action_handle_create call and then
> >>>>> specify the obtained action handle in multiple flows.
> >>>>>
> >>>>> The initial concept supposes that the action handle is strictly
> >>>>> attached to the port it was created on and is used exclusively
> >>>>> in the flows being installed on that port.
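For reference, a minimal sketch of today's single-port usage; the counter
action and the port_id value are only illustrative:

  #include <rte_flow.h>

  uint16_t port_id = 0;  /* illustrative port */
  struct rte_flow_error error;
  struct rte_flow_indir_action_conf conf = { .ingress = 1 };
  struct rte_flow_action_count count_conf = { 0 };
  struct rte_flow_action action = {
          .type = RTE_FLOW_ACTION_TYPE_COUNT,
          .conf = &count_conf,
  };

  /* Create the indirect (shared) counter on the given port. */
  struct rte_flow_action_handle *handle =
          rte_flow_action_handle_create(port_id, &conf, &action, &error);

  /* Reference the same handle from any number of flows on that port. */
  struct rte_flow_action flow_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = handle },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };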
> >>>>>
> >>>>> Nowadays multipath network topologies are quite common: packets
> >>>>> belonging to the same connection might arrive and be sent over
> >>>>> multiple ports, and there is a rising demand to handle these
> >>>>> "spread" connections. To fulfil this demand it is proposed to
> >>>>> extend indirect action sharing across multiple ports. This kind
> >>>>> of sharing would be extremely useful for meters and counters,
> >>>>> allowing a single connection to be managed over multiple ports.
> >>>>>
> >>>>> This cross-port object sharing is hard to implement in a generic
> >>>>> way merely with software on the upper layers, but it can be
> >>>>> provided by the driver over a single hardware instance, where
> >>>>> multiple ports reside on the same physical NIC and share the
> >>>>> same hardware context.
> >>>>>
> >>>>> To allow this action sharing, the application should specify the
> >>>>> "host port" during flow configuration to claim the intention to
> >>>>> share indirect actions. All indirect actions reside within the
> >>>>> "host port" context and can be shared in flows being installed
> >>>>
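To illustrate what the proposal is about, a rough sketch of the intended
usage; the host_port_id field name is exactly what is being debated below,
the structure layout follows the RFC proposal rather than any final API,
and conf/action are as in the sketch above:

  uint16_t host_port_id = 0, other_port_id = 1;  /* illustrative IDs */
  struct rte_flow_error error;
  struct rte_flow_queue_attr qattr = { .size = 64 };
  const struct rte_flow_queue_attr *qattrs[] = { &qattr };

  /* Proposed: claim cross-port sharing by naming the "host port" at
   * flow configuration time. */
  struct rte_flow_port_attr pattr = {
          .nb_counters = 1024,
          .host_port_id = host_port_id,
  };
  rte_flow_configure(other_port_id, &pattr, 1, qattrs, &error);

  /* An indirect action created on the host port ... */
  struct rte_flow_action_handle *handle =
          rte_flow_action_handle_create(host_port_id, &conf, &action, &error);

  /* ... could then be referenced by flows installed on other_port_id too. */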
> >>>> I don't like the word "host" because it may refer to the host CPU.
> >>>> Also, if I understand well, the application must choose one port
> >>>> among all the ports of the NIC and keep using that same one.
> >>>> I guess we don't want to create a NIC id.
> >>>> So I would suggest renaming it to nic_ref_port or something like that.
> >>>>
> >>>
> >>> I think that host is the correct word since this port hosts all the
> >>> resources for the other ports (this is also why "host" is used in the
> >>> case of the CPU 😊).
> >>> I don't think it is right to settle for worse wording just because
> >>> someone else also uses this word.
> >>> In rte_flow we never talk about the host CPU, so I don't think this is
> >>> confusing.
> >>
> >> The confusion is that we can think of a port on the host.
> >
> > In my humble opinion, the "_port_id" suffix explicitly specifies what the
> > field is and does not leave too much room for confusion.
> >
> > "root_port_id"? "base_port_id"? "container_port_id"? "mgmnt_port_id"?
> > These look worse to me and do not reflect the exact meaning.
> > As Ori mentioned, this is the DPDK port ID that embraces all the shared actions.
> > It plays a host role for them.
> 
> Maybe 'owner_port_id' or 'rsrc_port_id' ?
> 
Rsrc?
Owner_port looks OK, but I'm not sure what the issue is with the original suggestion.

Best,
Ori

