[dpdk-dev,RFC,v2,3/5] ether: Add flow timeout support

Message ID 1513823719-36066-4-git-send-email-qi.z.zhang@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Headers

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/Intel-compilation success Compilation OK

Commit Message

Qi Zhang Dec. 21, 2017, 2:35 a.m. UTC
  Add new APIs to support flow timeout. The application is able to:
1. Set the time duration of a flow; the flow is expected to be deleted
automatically when the timeout expires.
2. Ping a flow to check whether it is still active.
3. Register a callback function that is invoked when a flow is deleted
due to timeout.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 doc/guides/prog_guide/rte_flow.rst | 37 ++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.c        | 38 +++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.h        | 49 ++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow_driver.h | 12 ++++++++++
 4 files changed, 136 insertions(+)
  

Comments

Alex Rosenbaum Dec. 21, 2017, 1:59 p.m. UTC | #1
On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
> Add new APIs to support flow timeout, application is able to
> 1. Setup the time duration of a flow, the flow is expected to be deleted
> automatically when timeout.

Can you explain how the application (OVS) is expected to use this API?
It will help to better understand the motivation here...

Are you trying to move the aging timer from application code into the
PMD? Or can your HW remove/disable/deactivate a flow at a certain time
without software context?

I would prefer to have the aging timer logic in a centralized
location, like the application itself or some DPDK library, instead of
having each PMD implement its own software timers.
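A centralized software aging table of the kind suggested here can be sketched in a few lines: the application records a deadline per flow and sweeps the table periodically from its own thread. This is an illustrative sketch only — `flow_entry`, `age_table_add()` and `age_table_sweep()` are hypothetical names, not DPDK APIs, and the `void *flow` stands in for a `struct rte_flow *`:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_FLOWS 64

/* Hypothetical application-side entry: one offloaded flow and its deadline. */
struct flow_entry {
	void *flow;        /* would be a struct rte_flow * in real code */
	uint64_t deadline; /* absolute expiry time, in seconds */
	int in_use;
};

static struct flow_entry age_table[MAX_FLOWS];
static int expired_count;

/* Example expiry action: in real code this would call rte_flow_destroy(). */
static void count_expire(void *flow)
{
	(void)flow;
	expired_count++;
}

static int age_table_add(void *flow, uint64_t now, uint32_t hard_timeout)
{
	for (size_t i = 0; i < MAX_FLOWS; i++) {
		if (!age_table[i].in_use) {
			age_table[i].flow = flow;
			age_table[i].deadline = now + hard_timeout;
			age_table[i].in_use = 1;
			return 0;
		}
	}
	return -1; /* table full */
}

/* Periodic sweep from the application's own thread: no per-PMD timers. */
static int age_table_sweep(uint64_t now, void (*on_expire)(void *flow))
{
	int expired = 0;

	for (size_t i = 0; i < MAX_FLOWS; i++) {
		if (age_table[i].in_use && now >= age_table[i].deadline) {
			on_expire(age_table[i].flow);
			age_table[i].in_use = 0;
			expired++;
		}
	}
	return expired;
}
```

Because the sweep runs in the application's own context, re-arming a flow is just rewriting its `deadline`, and no callback-thread synchronization is needed.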


> 3. Register a callback function when a flow is deleted due to timeout.

Is the application's 'struct rte_flow *' handle really deleted? Or was
the flow removed from HW and is just inactive at this time?

Can a flow be re-activated? Or does this require a call to
rte_flow_destroy() and rte_flow_create()?

Alex
  
Qi Zhang Dec. 22, 2017, 9:03 a.m. UTC | #2
Alex:

> -----Original Message-----
> From: Alex Rosenbaum [mailto:rosenbaumalex@gmail.com]
> Sent: Thursday, December 21, 2017 9:59 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: adrien.mazarguil@6wind.com; DPDK <dev@dpdk.org>; Doherty, Declan
> <declan.doherty@intel.com>
> Subject: Re: [dpdk-dev] [RFC v2 3/5] ether: Add flow timeout support
>
> On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
> > Add new APIs to support flow timeout, application is able to 1. Setup
> > the time duration of a flow, the flow is expected to be deleted
> > automatically when timeout.
>
> Can you explain how the application (OVS) is expected to use this API?
> It will help to better understand the motivation here...


I think the purpose of the APIs is to expose the hardware feature that supports
automatic flow deletion with a timeout.
As far as I know, in OVS every flow in the flow table has a time duration;
a flow offloaded to hardware is still required to be deleted at a specific time.
I think these APIs help OVS take advantage of the HW feature and simplify flow
aging management.

> Are you trying to move the aging timer from application code into the PMD?
> or can your HW remove/disable/inactivate a flow at certain time semantics
> without software context?


Yes, it is for a hardware feature.

> I would prefer to have the aging timer logic in a centralized location, leek the
> application itself or some DPDK library. instead of having each PMD
> implement its own software timers.
>
> > 3. Register a callback function when a flow is deleted due to timeout.
>
> Is the application 'struct rte_flow*' handle really deleted? or the flow was
> removed from HW, just in-active at this time?


Here the flow is deleted; the same thing happens as in rte_flow_destroy(), and
we need to call rte_flow_create() to re-enable the flow.
I will add more explanation to avoid confusion in the next release.

> Can a flow be re-activated? or does this require a call to
> rte_flow_destory() and ret_flow_create()?
>
> Alex


Thanks
Qi
  
Wiles, Keith Dec. 22, 2017, 2:06 p.m. UTC | #3
> On Dec 22, 2017, at 3:03 AM, Zhang, Qi Z <qi.z.zhang@intel.com> wrote:
> 
> Alex:
> 
>> -----Original Message-----
>> From: Alex Rosenbaum [mailto:rosenbaumalex@gmail.com]
>> Sent: Thursday, December 21, 2017 9:59 PM
>> To: Zhang, Qi Z <qi.z.zhang@intel.com>
>> Cc: adrien.mazarguil@6wind.com; DPDK <dev@dpdk.org>; Doherty, Declan
>> <declan.doherty@intel.com>
>> Subject: Re: [dpdk-dev] [RFC v2 3/5] ether: Add flow timeout support
>> 
>> On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
>>> Add new APIs to support flow timeout, application is able to 1. Setup
>>> the time duration of a flow, the flow is expected to be deleted
>>> automatically when timeout.
>> 
>> Can you explain how the application (OVS) is expected to use this API?
>> It will help to better understand the motivation here...
> 
> I think the purpose of the APIs is to expose the hardware feature that support
> flow auto delete with a timeout.
> As I know, for OVS, every flow in flow table will have time duration
> A flow be offloaded to hardware is still required to be deleted in specific time, 
> I think these APIs help OVS to take advantage HW feature and simplify the flow
> aging management
> 
>> 
>> Are you trying to move the aging timer from application code into the PMD?
>> or can your HW remove/disable/inactivate a flow at certain time semantics
>> without software context?
> 
> Yes, it for hardware feature.

We also need to support a software timeout feature here and not just a hardware one. The reason is to make the APIs consistent across all hardware. If you are going to include a hardware timeout then we need to add a software-supported timeout at the same time, IMO.

> 
>> 
>> I would prefer to have the aging timer logic in a centralized location, leek the
>> application itself or some DPDK library. instead of having each PMD
>> implement its own software timers.
>> 
>> 
>>> 3. Register a callback function when a flow is deleted due to timeout.
>> 
>> Is the application 'struct rte_flow*' handle really deleted? or the flow was
>> removed from HW, just in-active at this time?
> 
> Here the flow is deleted, same thing happen as rte_flow_destroy and we need to call
> rte_flow_create to re-enable the flow. 
> I will add more explanation to avoid confusion in next release.

Sorry, I'm a little late into this thread, but we cannot have 1000 callbacks, one per timeout; we need to make sure we batch up a number of timeouts at a time to make the feature more performant, IMO. Maybe that is discussed or addressed in the code.
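The batching Keith asks for can be sketched as collecting all expired handles into an array and delivering them in one callback invocation rather than one per flow. The names here (`collect_expired`, `batch_cb_t`) are illustrative, not part of the proposed rte_flow API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical batched notification: one call reports many expired flows. */
typedef void (*batch_cb_t)(void **flows, size_t n);

static size_t last_batch_size;

/* Example consumer: in real code this would destroy all reported flows. */
static void record_batch(void **flows, size_t n)
{
	(void)flows;
	last_batch_size = n;
}

/* Gather every flow whose deadline has passed and report them in ONE call. */
static size_t collect_expired(void **flows, const uint64_t *deadlines,
			      size_t n, uint64_t now,
			      void **out, batch_cb_t cb)
{
	size_t m = 0;

	for (size_t i = 0; i < n; i++)
		if (now >= deadlines[i])
			out[m++] = flows[i];
	if (m > 0 && cb != NULL)
		cb(out, m); /* single callback for the whole batch */
	return m;
}
```

Amortizing one callback over many expirations keeps the notification cost bounded even when thousands of flows age out in the same interval.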

> 
>> 
>> Can a flow be re-activated? or does this require a call to
>> rte_flow_destory() and ret_flow_create()?
>> 
>> Alex
> 
> Thanks
> Qi

Regards,
Keith
  
Alex Rosenbaum Dec. 22, 2017, 10:26 p.m. UTC | #4
On Fri, Dec 22, 2017 at 11:03 AM, Zhang, Qi Z <qi.z.zhang@intel.com> wrote:
>> On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
>> > Add new APIs to support flow timeout, application is able to 1. Setup
>> > the time duration of a flow, the flow is expected to be deleted
>> > automatically when timeout.
>>
>> Can you explain how the application (OVS) is expected to use this API?
>> It will help to better understand the motivation here...
>
> I think the purpose of the APIs is to expose the hardware feature that support
> flow auto delete with a timeout.
> As I know, for OVS, every flow in flow table will have time duration
> A flow be offloaded to hardware is still required to be deleted in specific time,
> I think these APIs help OVS to take advantage HW feature and simplify the flow
> aging management

Are you sure this will allow OVS to 'fire-and-forget' about the rule removal?
Or will OVS do rule cleanup from its application tables anyway?

Do you know if OVS flow timers are (or can be) re-armed in different
use cases? e.g. extending the timeout duration if traffic is still
flowing?



>> Are you trying to move the aging timer from application code into the PMD?
>> or can your HW remove/disable/inactivate a flow at certain time semantics
>> without software context?
>
> Yes, it for hardware feature.

So if the hardware auto-removes the hardware steering entry, what
software part deletes the rte_flow handle?
What software part triggers the application callback? From what
context? Will locks be required?

How do you prevent races between the application thread and the context
deleting/accessing the rte_flow handle?
I mean cases where the application wants to delete the flow before the
timeout expires, but the hardware deletes it at the same time.

Alex
  
Qi Zhang Dec. 26, 2017, 3:28 a.m. UTC | #5
Hi Alex:

> -----Original Message-----
> From: Alex Rosenbaum [mailto:rosenbaumalex@gmail.com]
> Sent: Saturday, December 23, 2017 6:27 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: adrien.mazarguil@6wind.com; DPDK <dev@dpdk.org>; Doherty, Declan
> <declan.doherty@intel.com>
> Subject: Re: [dpdk-dev] [RFC v2 3/5] ether: Add flow timeout support
>
> On Fri, Dec 22, 2017 at 11:03 AM, Zhang, Qi Z <qi.z.zhang@intel.com> wrote:
> >> On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
> >> > Add new APIs to support flow timeout, application is able to 1.
> >> > Setup the time duration of a flow, the flow is expected to be
> >> > deleted automatically when timeout.
> >>
> >> Can you explain how the application (OVS) is expected to use this API?
> >> It will help to better understand the motivation here...

> >
> > I think the purpose of the APIs is to expose the hardware feature that
> > support flow auto delete with a timeout.
> > As I know, for OVS, every flow in flow table will have time duration A
> > flow be offloaded to hardware is still required to be deleted in
> > specific time, I think these APIs help OVS to take advantage HW
> > feature and simplify the flow aging management
>
> Are you sure this will allow OVS to 'fire-and-forget' about the rule removal?
> or will OVS anyway do rule cleanup from application tables?


There is some framework design about offloaded flow management on the OVS side.
Since I'm not an OVS guy, I can't answer OVS-specific questions precisely right now,
but the feedback I got is that it would be nice if rte_flow could support flow timeout.
I may check with some OVS experts to give further explanation.
BTW, I think there is no harm in adding these APIs to rte_flow, since flow timeout is
quite a generic feature to me; it may be useful even for non-OVS cases in the future.

> Do you know if OVS flow timers are (or can be) re-armed in different use
> cases? e.g. extending the timeout duration if traffic is still flowing?


As far as I know, for OVS every flow just has a fixed time duration, so "hard_timeout"
addresses this requirement. Since the OpenFlow spec pairs idle_timeout with
hard_timeout, I added it as well because it is generic and may be useful in the future.
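The hard/idle distinction described here — delete after a fixed lifetime vs. delete after a window with no packet hits — can be expressed compactly. This is a behavioral sketch of the proposed `hard_timeout`/`idle_timeout` attribute semantics, not driver code; `flow_age` and `flow_expired()` are illustrative names:

```c
#include <stdint.h>

/* Mirrors the proposed rte_flow_attr timeout fields (0 = disabled). */
struct flow_age {
	uint64_t created;      /* creation timestamp, in seconds */
	uint64_t last_hit;     /* timestamp of the last packet hit */
	uint32_t hard_timeout; /* fixed lifetime since creation */
	uint32_t idle_timeout; /* max gap between packet hits */
};

/* Return 1 if the flow should be auto-deleted at time `now`. */
static int flow_expired(const struct flow_age *a, uint64_t now)
{
	if (a->hard_timeout && now - a->created >= a->hard_timeout)
		return 1; /* lifetime elapsed, regardless of traffic */
	if (a->idle_timeout && now - a->last_hit >= a->idle_timeout)
		return 1; /* no packet hit within the idle window */
	return 0;
}
```

Note that with only `idle_timeout` set, continued traffic keeps a flow alive indefinitely, whereas `hard_timeout` expires it even under load — exactly the OpenFlow pairing referred to above.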
>
> >> Are you trying to move the aging timer from application code into the
> PMD?
> >> or can your HW remove/disable/inactivate a flow at certain time
> >> semantics without software context?
> >
> > Yes, it for hardware feature.
>
> So if the hardware auto removes the hardware steering entry, what software
> part deletes the rte_flow handle?
> What software part triggers the application callback? from what context? will
> locks be required?
> How do you prevent races between application thread and the context
> deleting/accessing the rte_flow handle?
> I mean in cases that application wants to delete the flow before the timeout
> expires, but actually it is same time hardware deletes it.


Usually the automatic flow deletion runs on a separate background thread
(an interrupt handler or a watchdog thread, depending on hardware capability).
The low-level driver is responsible for handling the race condition between background and foreground flow deletion.
The application should be aware that the callback function runs on a separate thread, so it is also required to
handle race conditions if it accesses data that is shared with the foreground thread.
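The driver-side arbitration described above — making sure only one of the foreground destroy path and the background timeout path actually deletes the flow — can be sketched with a single atomic flag, so whichever path runs second sees the flow as already gone. A minimal sketch under that assumption (not actual PMD code; `guarded_flow` and `flow_delete_once()` are hypothetical names):

```c
#include <stdatomic.h>

/* Hypothetical per-flow flag shared by foreground and background paths. */
struct guarded_flow {
	atomic_int destroyed; /* 0 = live, 1 = already deleted */
};

/*
 * Both rte_flow_destroy() (foreground) and the timeout handler (background)
 * would funnel through this; atomic_exchange guarantees exactly one caller
 * wins and actually releases the HW entry and the rte_flow handle.
 */
static int flow_delete_once(struct guarded_flow *f)
{
	if (atomic_exchange(&f->destroyed, 1) == 0) {
		/* first caller: release HW steering entry, free handle here */
		return 1;
	}
	return 0; /* lost the race: the other context already deleted it */
}
```

The losing path simply reports "already deleted" instead of touching freed state, which is the behavior the application would have to tolerate when its destroy call races the hardware timeout.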

> Alex


Regards
Qi
  
Alex Rosenbaum Dec. 26, 2017, 7:44 a.m. UTC | #6
On Tue, Dec 26, 2017 at 5:28 AM, Zhang, Qi Z <qi.z.zhang@intel.com> wrote:
>> On Fri, Dec 22, 2017 at 11:03 AM, Zhang, Qi Z <qi.z.zhang@intel.com> wrote:
>> >> On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
>> >> > Add new APIs to support flow timeout, application is able to 1.
>> >> > Setup the time duration of a flow, the flow is expected to be
>> >> > deleted automatically when timeout.
>> >>
>> >> Can you explain how the application (OVS) is expected to use this API?
>> >> It will help to better understand the motivation here...
>> >
>> > I think the purpose of the APIs is to expose the hardware feature that
>> > support flow auto delete with a timeout.
>> > As I know, for OVS, every flow in flow table will have time duration A
>> > flow be offloaded to hardware is still required to be deleted in
>> > specific time, I think these APIs help OVS to take advantage HW
>> > feature and simplify the flow aging management
>>
>> Are you sure this will allow OVS to 'fire-and-forget' about the rule removal?
>> or will OVS anyway do rule cleanup from application tables?
>
> There is some framework design about offload flow management on OVS side.
> Since I'm not a OVS guy, I can't answer OVS specific question precisely right now,
> but the feedback I got is, it will be nice if rte_flow could support flow timeout
> I may check with some OVS expert to give further explanation.
> BTW, I think there is no harmful to add these APIs into rte_flow, since a flow timeout is quite
> generic feature to me. it may be useful even for non-OVS case in future.

I'm not a core OVS guy either :) but adding a feature to DPDK because
it "might be nice" rather than for a real benefit does not sound like
the right approach to me. Each new feature adds extra
work/support/bugs for something which might not really be used. I think
it is critical that the OVS guys provide you clear evidence of how this
will help.
And we need to try to make this generic so that applications other than
OVS can use it, e.g. by adding a re-arm to the timeout.


>> Do you know if OVS flow timers are (or can be) re-armed in different use
>> cases? e.g. extending the timeout duration if traffic is still flowing?
>
> As I know, for OVS every flow just has a fixed time duration, so "hard_timeout"
> is going for this requirement, but by following OpenFlow spec, idle_timeout is paired
> with hard_timeout so I just add it since its generic and maybe useful for future.

Yes, I also heard OF does issue a hard timeout value.
But I also understood from the OVS guys that OVS will manage the timeout
internally. OVS will sample traffic for activity before deleting
the flow, so the timeout value needs to be updated constantly
depending on connection-state information (re your other patch exposing
the last_seen_timeout).
So the current suggestion, based on a hard timeout, will not fit OVS, at
least as I understood from the OVS guys.
You can see that the kernel OVS guys did not add this to the TC flower
offload support either. I am sure they would have if it had improved
performance.
It would be a shame if we miss the target of the feature.


>> >> Are you trying to move the aging timer from application code into the
>> PMD?
>> >> or can your HW remove/disable/inactivate a flow at certain time
>> >> semantics without software context?
>> >
>> > Yes, it for hardware feature.
>>
>> So if the hardware auto removes the hardware steering entry, what software
>> part deletes the rte_flow handle?
>> What software part triggers the application callback? from what context? will
>> locks be required?
>> How do you prevent races between application thread and the context
>> deleting/accessing the rte_flow handle?
>> I mean in cases that application wants to delete the flow before the timeout
>> expires, but actually it is same time hardware deletes it.
>
> Usually the flow auto delete is running on a separate background thread
> (an interrupt handler or a watchdog thread base on hardware capability)
> The low level driver is responsible to take care of the race condition between background and foreground flow deleting.

Please explain in detail how the race is solved. Maybe a patch will
make this clearer? This is one of the main objections to this feature.
E.g.: if the application holds a rte_flow handle and the timeout has
expired but has not been handled yet, the application may try to use
the rte_flow whose state changed underneath it, or which even got deleted.

Besides, you now require each and every low-level driver to add this
same timeout-expiration logic, alarm registration, and race protection.
This should be done once for all PMDs.
PS: Will this callback thread clear the HW rule as well?

> For application, it should be aware

Need to make this clear in the API definitions.

> that the callback function is running on a separate thread, so it is also required to
> take care of race condition if it will access some data that is shared by foreground thread.

Does this mean locks in the application? Or some lock-less async model?
Isn't it simpler to let the application (OVS) delete the rte_flow from
its own thread, saving locks and allowing timeout updates according to
traffic counters?

Alex
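The alternative raised here — having the background context only *report* expirations, and letting the application delete flows from its own thread — can be sketched as a single-producer/single-consumer ring that the expiry context pushes handles into. The names (`expiry_ring`, `expiry_post`, `expiry_poll`) are illustrative, not rte_flow API:

```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_SZ 8 /* must be a power of two */

/* SPSC ring: background (producer) posts expired flows, app (consumer) pops. */
struct expiry_ring {
	void *slot[RING_SZ];
	atomic_size_t head; /* written only by the producer */
	atomic_size_t tail; /* written only by the consumer */
};

/* Called from the background expiry context: report, don't delete. */
static int expiry_post(struct expiry_ring *r, void *flow)
{
	size_t h = atomic_load(&r->head);

	if (h - atomic_load(&r->tail) == RING_SZ)
		return -1; /* full: retry on the next tick */
	r->slot[h & (RING_SZ - 1)] = flow;
	atomic_store(&r->head, h + 1);
	return 0;
}

/* Called from the application's own thread: the safe place to destroy. */
static void *expiry_poll(struct expiry_ring *r)
{
	size_t t = atomic_load(&r->tail);
	void *flow;

	if (t == atomic_load(&r->head))
		return NULL; /* nothing expired */
	flow = r->slot[t & (RING_SZ - 1)];
	atomic_store(&r->tail, t + 1);
	return flow;
}
```

Since the application alone calls the destroy path, the rte_flow handle is never freed underneath a thread that still holds it, which sidesteps the locking questions above.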
  
Qi Zhang Jan. 14, 2018, 2:03 a.m. UTC | #7
Hi Alex & Keith
	
	Base on my further understanding about OVS requirement and the new device's capability.
	I realize there is no strong point to have the timeout APIs from this patch, I'd like to withdraw it.
	Thanks for all your comments that help me to think it over.

Regards
Qi

> -----Original Message-----
> From: Wiles, Keith
> Sent: Friday, December 22, 2017 10:06 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: Alex Rosenbaum <rosenbaumalex@gmail.com>;
> adrien.mazarguil@6wind.com; DPDK <dev@dpdk.org>; Doherty, Declan
> <declan.doherty@intel.com>
> Subject: Re: [dpdk-dev] [RFC v2 3/5] ether: Add flow timeout support
> 
> 
> 
> > On Dec 22, 2017, at 3:03 AM, Zhang, Qi Z <qi.z.zhang@intel.com> wrote:
> >
> > Alex:
> >
> >> -----Original Message-----
> >> From: Alex Rosenbaum [mailto:rosenbaumalex@gmail.com]
> >> Sent: Thursday, December 21, 2017 9:59 PM
> >> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> >> Cc: adrien.mazarguil@6wind.com; DPDK <dev@dpdk.org>; Doherty, Declan
> >> <declan.doherty@intel.com>
> >> Subject: Re: [dpdk-dev] [RFC v2 3/5] ether: Add flow timeout support
> >>
> >> On Thu, Dec 21, 2017 at 4:35 AM, Qi Zhang <qi.z.zhang@intel.com> wrote:
> >>> Add new APIs to support flow timeout, application is able to 1.
> >>> Setup the time duration of a flow, the flow is expected to be
> >>> deleted automatically when timeout.
> >>
> >> Can you explain how the application (OVS) is expected to use this API?
> >> It will help to better understand the motivation here...
> >
> > I think the purpose of the APIs is to expose the hardware feature that
> > support flow auto delete with a timeout.
> > As I know, for OVS, every flow in flow table will have time duration A
> > flow be offloaded to hardware is still required to be deleted in
> > specific time, I think these APIs help OVS to take advantage HW
> > feature and simplify the flow aging management
> >
> >>
> >> Are you trying to move the aging timer from application code into the PMD?
> >> or can your HW remove/disable/inactivate a flow at certain time
> >> semantics without software context?
> >
> > Yes, it for hardware feature.
> 
> We also need to support a software timeout feature here and not just a
> hardware one. The reason is to make the APIs consistent across all hardware. If
> you are going to include hardware timeout then we need to add software
> supported timeout at the same time IMO.
> 
> >
> >>
> >> I would prefer to have the aging timer logic in a centralized
> >> location, leek the application itself or some DPDK library. instead
> >> of having each PMD implement its own software timers.
> >>
> >>
> >>> 3. Register a callback function when a flow is deleted due to timeout.
> >>
> >> Is the application 'struct rte_flow*' handle really deleted? or the
> >> flow was removed from HW, just in-active at this time?
> >
> > Here the flow is deleted, same thing happen as rte_flow_destroy and we
> > need to call rte_flow_create to re-enable the flow.
> > I will add more explanation to avoid confusion in next release.
> 
> Sorry, I little late into this thread, but we can not have 1000 callbacks for each
> timeout and we need make sure we bunch up a number of timeouts at a time to
> make the feature more performant IMO. Maybe that discussed or address in
> the code.
> 
> >
> >>
> >> Can a flow be re-activated? or does this require a call to
> >> rte_flow_destory() and ret_flow_create()?
> >>
> >> Alex
> >
> > Thanks
> > Qi
> 
> Regards,
> Keith
  

Patch

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index dcea2f6..1a242fc 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -181,6 +181,14 @@  directions. At least one direction must be specified.
 Specifying both directions at once for a given rule is not recommended but
 may be valid in a few cases (e.g. shared counters).
 
+Attribute: Timeout
+^^^^^^^^^^^^^^^^^^
+
+Two kinds of timeout can be assigned to a flow rule. With a "hard timeout", the
+flow rule is deleted when the given time duration has passed since its creation.
+With an "idle timeout", the flow rule is deleted when no packet has hit it
+within the given time duration.
+
 Pattern item
 ~~~~~~~~~~~~
 
@@ -1695,6 +1703,35 @@  definition.
                   void *data,
                   struct rte_flow_error *error);
 
+Is Active
+~~~~~~~~~
+
+Check if a flow is still active or not.
+
+It is possible that a flow has been deleted automatically due to timeout; this
+function helps to check whether the flow still exists.
+
+.. code-block:: c
+
+   int
+   rte_flow_is_active(uint8_t port_id,
+                      struct rte_flow *flow,
+                      uint8_t *active,
+                      struct rte_flow_error* error);
+
+Auto Delete Callback Set
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Register a callback function that is invoked when a flow is automatically deleted due to timeout.
+
+.. code-block:: c
+
+   int
+   rte_flow_auto_delete_callback_set(uint8_t port_id,
+                                     struct rte_flow *flow,
+                                     void (*callback)(struct rte_flow *),
+                                     struct rte_flow_error *error);
+
 Arguments:
 
 - ``port_id``: port identifier of Ethernet device.
diff --git a/lib/librte_ether/rte_flow.c b/lib/librte_ether/rte_flow.c
index 6659063..650c5a5 100644
--- a/lib/librte_ether/rte_flow.c
+++ b/lib/librte_ether/rte_flow.c
@@ -425,3 +425,41 @@  rte_flow_copy(struct rte_flow_desc *desc, size_t len,
 	}
 	return 0;
 }
+
+/** Check if a flow is still active or not. */
+int
+rte_flow_is_active(uint8_t port_id,
+		   struct rte_flow *flow,
+		   uint8_t *active,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->is_active))
+		return ops->is_active(dev, flow, active, error);
+	return -rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, rte_strerror(ENOSYS));
+}
+
+/** Register a callback function when flow is automatically deleted. */
+int
+rte_flow_auto_delete_callback_set(uint8_t port_id,
+				  struct rte_flow *flow,
+				  void (*callback)(struct rte_flow *),
+				  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->auto_delete_callback_set))
+		return ops->auto_delete_callback_set(dev, flow, callback, error);
+	return -rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, rte_strerror(ENOSYS));
+}
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 8e902f0..e09e07f 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -97,6 +97,10 @@  struct rte_flow_attr {
 	uint32_t ingress:1; /**< Rule applies to ingress traffic. */
 	uint32_t egress:1; /**< Rule applies to egress traffic. */
 	uint32_t reserved:30; /**< Reserved, must be zero. */
+	uint32_t hard_timeout;
+	/**< If !0, flow will be deleted after given number of seconds. */
+	uint32_t idle_timeout;
+	/**< If !0, flow will be deleted if no packet hit in given seconds. */
 };
 
 /**
@@ -1491,6 +1495,51 @@  rte_flow_copy(struct rte_flow_desc *fd, size_t len,
 	      const struct rte_flow_item *items,
 	      const struct rte_flow_action *actions);
 
+/**
+ * Check if a flow is still active or not.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param flow
+ *   Flow rule to check.
+ * @param[out] active
+ *   0 for not active, 1 for active.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+rte_flow_is_active(uint8_t port_id,
+		   struct rte_flow *flow,
+		   uint8_t *active,
+		   struct rte_flow_error *error);
+
+/**
+ * Register a callback function that is invoked when a flow is automatically
+ * deleted due to timeout.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param flow
+ *   Flow rule to track.
+ * @param callback
+ *   The callback function.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+rte_flow_auto_delete_callback_set(uint8_t port_id,
+				  struct rte_flow *flow,
+				  void (*callback)(struct rte_flow *),
+				  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ether/rte_flow_driver.h b/lib/librte_ether/rte_flow_driver.h
index 254d1cb..862d8ab 100644
--- a/lib/librte_ether/rte_flow_driver.h
+++ b/lib/librte_ether/rte_flow_driver.h
@@ -124,6 +124,18 @@  struct rte_flow_ops {
 		(struct rte_eth_dev *,
 		 int,
 		 struct rte_flow_error *);
+	/** See rte_flow_is_active(). */
+	int (*is_active)
+		(struct rte_eth_dev *,
+		 struct rte_flow *,
+		 uint8_t *,
+		 struct rte_flow_error *);
+	/** See rte_flow_auto_delete_callback_set(). */
+	int (*auto_delete_callback_set)
+		(struct rte_eth_dev *,
+		 struct rte_flow *,
+		 void (*)(struct rte_flow *),
+		 struct rte_flow_error *);
 };
 
 /**