[dpdk-dev] [RFC] Generic flow director/filtering/classification API

John Fastabend john.fastabend at gmail.com
Tue Aug 2 20:19:15 CEST 2016


On 16-07-23 02:10 PM, John Fastabend wrote:
> On 16-07-21 12:20 PM, Adrien Mazarguil wrote:
>> Hi Jerin,
>>
>> Sorry, looks like I missed your reply. Please see below.
>>
> 
> Hi Adrien,
> 
> Sorry for the slight delay, but here are a few comments that may be
> worth considering.
> 
> To start with, I completely agree on the general problem statement and
> the nice summary of all the current models. This is also a good start.
> 
>>
>> Considering that allowed pattern/actions combinations cannot be known in
>> advance and would result in an impractically large number of capabilities to
>> expose, a method is provided to validate a given rule from the current
>> device configuration state without actually adding it (akin to a "dry run"
>> mode).
> 
> Rather than have a query/validate process, why did we skip having an
> intermediate representation of the capabilities? Here you state it is
> impractical, but we know how to represent parse graphs, and the drivers
> could report their supported parse graph via a single query to a middle
> layer.
> 
> This will actually reduce the message chatter. Imagine many applications
> at init time, or boundary cases where a large set of applications come
> online at once and all start banging on the interface simultaneously;
> that seems less than ideal.
> 

A bit more detail on a possible interface for the capabilities query:

One way I've used to describe these graphs from driver to software
stacks is to use a set of structures to build the graph. For fixed
graphs this could just be a *.h file; for programmable hardware
(typically programmed via a firmware update on NICs) the driver can
read the parser details out of firmware and render the structures.

I've done this two ways: one is to define all the fields in their
own structures using something like,

struct field {
	char *name;
	u32 uid;
	u32 bitwidth;
};

This gives a unique id (uid) for each field along with its
width and a user-friendly name. The fields are organized into
headers via a header structure,

struct header_node {
	char *name;
	u32 uid;
	u32 *fields;
	struct parse_graph *jump;
};

Each node has a unique id and then a list of fields, where 'fields'
is a list of uids of the fields it carries. It's also easy enough to
embed the field struct directly in the header_node if that is simpler;
it's really a style question.
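
Purely as an illustration (the uid values and the trimmed-down IPv4
field list below are made up for this example; real values would come
from the predefined well-known set or from the firmware parse graph),
an IPv4 node could then be described roughly like,

/* Hypothetical uids, for this sketch only. */
enum { IPV4_PROTOCOL = 1, IPV4_SRC = 2, IPV4_DST = 3 };
enum { HDR_IPV4 = 100, HDR_TCP = 101 };

struct field ipv4_fields[] = {
	{ .name = "protocol", .uid = IPV4_PROTOCOL, .bitwidth = 8 },
	{ .name = "src",      .uid = IPV4_SRC,      .bitwidth = 32 },
	{ .name = "dst",      .uid = IPV4_DST,      .bitwidth = 32 },
};

/* Field list as uids; this sketch assumes zero termination. */
u32 ipv4_field_uids[] = { IPV4_PROTOCOL, IPV4_SRC, IPV4_DST, 0 };

struct header_node ipv4_node = {
	.name   = "ipv4",
	.uid    = HDR_IPV4,
	.fields = ipv4_field_uids,
	.jump   = NULL,	/* edges are shown further below */
};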

The 'struct parse_graph' gives the list of edges from this header node
to other header nodes, using a parse graph structure defined as,

struct parse_graph {
	struct field_reference ref;
	__u32 jump_uid;
};

Again, as a matter of style, you can embed the parse graph in the
header node as I did above, or keep it as its own object.

The field_reference, defined below, gives the id of the field and the
value to match, e.g. the tuple (ipv4.protocol, 6); jump_uid would then
be the uid of TCP.

struct field_reference {
	__u32 header_uid;
	__u32 field_uid;
	__u32 mask_type;
	__u32 type;
	__u8  *value;
	__u8  *mask;
};
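
So, reusing the hypothetical uids from the sketch above (mask_type and
type are left as placeholders since their encodings aren't defined
here), the IPv4 -> TCP edge for (ipv4.protocol, 6) could be written as,

__u8 tcp_proto = 6;	/* IPPROTO_TCP */
__u8 full_mask = 0xff;

/* Edge: if ipv4.protocol == 6, jump to the TCP header node. */
struct parse_graph ipv4_edges[] = {
	{
		.ref = {
			.header_uid = HDR_IPV4,
			.field_uid  = IPV4_PROTOCOL,
			.mask_type  = 0,	/* placeholder */
			.type       = 0,	/* placeholder */
			.value      = &tcp_proto,
			.mask       = &full_mask,
		},
		.jump_uid = HDR_TCP,
	},
};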

The cost of doing all this is some additional overhead at init time.
But building generic functions over this and having a set of
predefined uids for well-known protocols such as ip, udp, tcp, etc.
helps. What you get for the cost is a few things that I think are
worth it:

 (i)   new protocols can be added/removed without recompiling DPDK;
 (ii)  a software package can use the capability query to verify that
       the required protocols are off-loadable vs. issuing a possibly
       large set of test queries;
 (iii) when we program the device we can provide a tuple (table-uid,
       header-uid, field-uid, value, mask, priority), and the middle
       layer, "knowing" the above graph, can verify the command so
       drivers only ever see "good" commands;
 (iv)  finally, it should be faster in terms of commands per second
       because the drivers can map the tuple (table, header, field,
       priority) to a slot efficiently vs. parsing.

IMO points (iii) and (iv) will in practice make the code much simpler,
because we can maintain a common middle layer and not require parsing
by the drivers. Abstracting this into a common layer makes each driver
simpler.
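
To sketch what (iii) could look like in the middle layer (the rule
tuple type and the graph lookup helpers below are hypothetical, just
to show the shape of the check),

/* Hypothetical tuple handed to the middle layer by the application. */
struct flow_rule_tuple {
	u32 table_uid;
	u32 header_uid;
	u32 field_uid;
	__u8 *value;
	__u8 *mask;
	u32 priority;
};

/*
 * Validate a tuple against the graph the driver reported at init.
 * find_header(), header_has_field() and table_accepts_header() are
 * assumed helpers that walk the header_node/field arrays above.
 * Returns 0 if supported, negative otherwise, so the driver only ever
 * sees rules that passed this check.
 */
int
flow_rule_validate(const struct flow_rule_tuple *rule)
{
	const struct header_node *hdr = find_header(rule->header_uid);

	if (hdr == NULL)
		return -1;
	if (!header_has_field(hdr, rule->field_uid))
		return -1;
	if (!table_accepts_header(rule->table_uid, rule->header_uid))
		return -1;
	return 0;
}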

> Worse, in my opinion, it requires all drivers to write mostly
> duplicated validation code, where a common layer could easily do this
> if every driver instead reported a common data structure representing
> its parse graph. The nice fallout of this initial effort upfront is
> that the driver no longer needs to do error handling/checking/etc. and
> can assume all rules are correct and valid. It makes driver code much
> simpler to support. And IMO at least by doing this we get some other
> nice benefits described below.
> 
> Another related question is about performance.
> 
>> Creation
>> ~~~~~~~~
>>
>> Creating a flow rule is similar to validating one, except the rule is
>> actually created.
>>
>> ::
>>
>>  struct rte_flow *
>>  rte_flow_create(uint8_t port_id,
>>                  const struct rte_flow_pattern *pattern,
>>                  const struct rte_flow_actions *actions);
> 
> I gather this implies that each driver must parse the pattern/action
> block and map this onto the hardware. How many rules per second can this
> support? I've run into systems that expect a level of service somewhere
> around 50k cmds per second. So bulking will help at the message level
> but it seems like a lot of overhead to unpack the pattern/action section.
> 
> One strategy I've used in other systems that worked relatively well
> is to have the query for the parse graph above return a key for each
> node in the graph; then a single lookup can map the key to a node.
> It's unambiguous, and these operations simply become a table lookup.
> To be a bit more concrete, this changes the pattern structure in
> rte_flow_create() into a <key,value,mask> tuple where the key is known
> from the initial parse graph query. If you reserve a set of
> well-defined key values for well-known protocols like ethernet, ip,
> etc., then the query model also works, but the middle layer catches
> errors in this case and again the driver only gets known good flows.
> So something like this,
> 
>   struct rte_flow_pattern {
> 	uint32_t priority;
> 	uint32_t key;
> 	uint32_t value_length;
> 	uint8_t *value;
>   };
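
To illustrate that idea from my earlier mail, matching a TCP
destination port with such a key-based pattern might look like the
following (RTE_FLOW_KEY_TCP_DPORT is an invented placeholder for one
of the reserved well-known keys),

#define RTE_FLOW_KEY_TCP_DPORT	42	/* placeholder well-known key */

uint16_t dport = rte_cpu_to_be_16(80);	/* match TCP dport 80 */

struct rte_flow_pattern pattern = {
	.priority = 0,
	.key = RTE_FLOW_KEY_TCP_DPORT,
	.value_length = sizeof(dport),
	.value = (uint8_t *)&dport,
};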
> 
> Also, if we have multiple tables, what do you think about adding a
> table_id to the signature? It is probably not needed in the first
> generation, but is likely useful for hardware with multiple tables,
> so that it would be,
> 
>    rte_flow_create(uint8_t port_id, uint8_t table_id, ...);
> 
> Finally, one other problem we've had, which would be great to address
> if we are doing a rewrite of the API, is adding new protocols to
> already deployed DPDK stacks. This is mostly a Linux distribution
> problem where you can't easily update DPDK.
> 
> In the prototype header linked in this document it seems adding new
> headers requires adding a new enum value to rte_flow_item_type, but
> there is at least an attempt at a catch-all here,
> 
>> 	/**
>> 	 * Matches a string of a given length at a given offset (in bytes),
>> 	 * or anywhere in the payload of the current protocol layer
>> 	 * (including L2 header if used as the first item in the stack).
>> 	 *
>> 	 * See struct rte_flow_item_raw.
>> 	 */
>> 	RTE_FLOW_ITEM_TYPE_RAW,
> 
> Actually this is a nice implementation, because it works relative to
> the previous item in the stack, correct? So you can put it after
> "known" variable-length headers like IP. The limitation is that it
> can't get past undefined variable-length headers. However, if you use
> the parse graph reporting mechanism from the driver described above,
> and the driver always reports its largest supported graph, then we
> don't have the issue where a new hardware sku/ucode/etc. adds support
> for new headers but we have no way to deploy it to existing software
> users without recompiling and redeploying.
> 
> I looked at the git repo but I only saw the header definition; I guess
> the implementation is TBD once there is enough agreement on the
> interface?
> 
> Thanks,
> John
> 


