[dpdk-dev] [PATCH 01/22] ethdev: introduce generic flow API

Kevin Traynor ktraynor at redhat.com
Wed Dec 14 17:11:15 CET 2016


On 12/14/2016 01:54 PM, Adrien Mazarguil wrote:

>>
>>>>>>> + * @param[out] error
>>>>>>> + *   Perform verbose error reporting if not NULL.
>>>>>>> + *
>>>>>>> + * @return
>>>>>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>>>>>> + */
>>>>>>> +int
>>>>>>> +rte_flow_query(uint8_t port_id,
>>>>>>> +	       struct rte_flow *flow,
>>>>>>> +	       enum rte_flow_action_type action,
>>>>>>> +	       void *data,
>>>>>>> +	       struct rte_flow_error *error);
>>>>>>> +
>>>>>>> +#ifdef __cplusplus
>>>>>>> +}
>>>>>>> +#endif
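
(As an illustration only: a minimal sketch of how the query call quoted
above could be used to read a counter, assuming "flow" is a handle
previously returned by rte_flow_create() for a rule carrying a COUNT
action; the helper name and error handling are made up for the example.)

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_flow.h>

    /* Read (and reset) the counter attached to an existing flow rule. */
    static int
    dump_flow_counter(uint8_t port_id, struct rte_flow *flow)
    {
            struct rte_flow_query_count count = { .reset = 1 };
            struct rte_flow_error error;

            if (rte_flow_query(port_id, flow, RTE_FLOW_ACTION_TYPE_COUNT,
                               &count, &error))
                    return -1; /* rte_errno is set, error.message may help */
            if (count.hits_set)
                    printf("hits: %" PRIu64 "\n", count.hits);
            if (count.bytes_set)
                    printf("bytes: %" PRIu64 "\n", count.bytes);
            return 0;
    }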
>>>>>>
>>>>>> I don't see a way to dump all the rules for a port out. I think this is
>>>>>> necessary for debugging. You could have a look through dpif.h in OVS
>>>>>> and see how dpif_flow_dump_next() is used, it might be a good reference.
>>>>>
>>>>> DPDK does not maintain flow rules and, depending on hardware capabilities
>>>>> and level of compliance, PMDs do not necessarily do it either, particularly
>>>>> since it requires space, and applications probably have a better method to
>>>>> store these pointers for their own needs.
>>>>
>>>> understood
>>>>
>>>>>
>>>>> What you see here is only a PMD interface. Depending on application needs,
>>>>> generic helper functions built on top of these may be added to manage flow
>>>>> rules in the future.
>>>>
>>>> I'm thinking of the case where something goes wrong and I want to get a
>>>> dump of all the flow rules from hardware, not query the rules I think I
>>>> have. I don't see a way to do that, or anything to build a helper on top of.
>>>
>>> Generic helper functions would exist on top of this API and would likely
>>> maintain a list of flow rules themselves. The dump in that case would be
>>> entirely implemented in software. I think that recovering flow rules from HW
>>> may be complicated in many cases (even without taking storage allocation and
>>> rule conversion issues into account); therefore, if there is really a need
>>> for it, we could perhaps add a dump() function that PMDs are free to
>>> implement later.
>>>
>>
>> ok. Maybe some more generic stats that can be read from the hardware would
>> suffice to help debugging, like total flow rule hits/misses (i.e. not on a
>> per-flow-rule basis).
>>
>> You can get this from the software flow caches and it's widely used for
>> debugging. e.g.
>>
>> pmd thread numa_id 0 core_id 3:
>> 	emc hits:0
>> 	megaflow hits:0
>> 	avg. subtable lookups per hit:0.00
>> 	miss:0
>>
> 
> Perhaps a rule such as the following could do the trick:
> 
>  group: 42 (or priority 42)
>  pattern: void
>  actions: count / passthru
> 
> Assuming useful flow rules are defined with higher priorities (using lower
> group ID or priority level) and provide a terminating action, this one would
> count all packets that were not caught by them.
> 
> That is one example to illustrate how "global" counters can be requested by
> applications.
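> 
(For illustration, a minimal sketch of what such a catch-all counting rule
could look like with the proposed API; the group/priority value, the ingress
attribute and the helper name are only assumptions for the example. Its
counter could then be read with rte_flow_query() as in the earlier sketch.)

    #include <rte_flow.h>

    /* Lowest-priority catch-all: count packets not matched by earlier rules.
     * PASSTHRU keeps the rule non-terminating so packets still reach SW. */
    static struct rte_flow *
    create_miss_counter(uint8_t port_id, struct rte_flow_error *error)
    {
            static const struct rte_flow_attr attr = {
                    .group = 42, /* or .priority = 42 */
                    .ingress = 1,
            };
            static const struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_VOID },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            static const struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                    { .type = RTE_FLOW_ACTION_TYPE_PASSTHRU },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, error);
    }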
> 
> Otherwise you could just make sure all rules contain mark / flag actions, in
> which case mbufs would directly indicate whether they went through them or
> need additional SW processing.
> 
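
(Again only a sketch of the mark / flag approach, assuming the MARK value is
reported through PKT_RX_FDIR_ID and mbuf->hash.fdir.hi as in the proposed
documentation; the mark value 0xcafe and the helper name are arbitrary.)

    #include <rte_mbuf.h>
    #include <rte_flow.h>

    /* MARK action to attach (alongside a rule's other actions) so that
     * matched packets can be told apart on the RX path. */
    static const struct rte_flow_action_mark mark = { .id = 0xcafe };
    static const struct rte_flow_action mark_action = {
            .type = RTE_FLOW_ACTION_TYPE_MARK,
            .conf = &mark,
    };

    /* RX path: did this mbuf go through one of the HW rules? */
    static inline int
    hit_hw_rule(const struct rte_mbuf *m)
    {
            return (m->ol_flags & PKT_RX_FDIR_ID) &&
                   m->hash.fdir.hi == 0xcafe;
    }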

ok, sounds like there are at least some options to work with here, which is
good. thanks.

