[dpdk-dev,2/2] ether/ethdev: Allow pmd to advertise preferred pool capability

Message ID 20170601080559.10684-3-santosh.shukla@caviumnetworks.com (mailing list archive)
State Changes Requested, archived
Delegated to: Thomas Monjalon
Headers

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Santosh Shukla June 1, 2017, 8:05 a.m. UTC
A platform with two different NICs, such as an external PCI NIC and
an integrated NIC, may want each to use its preferred pool handle.
Right now there is no way for two different NICs on the same board
to use their choice of pool: both NICs are forced to use the same
pool, which is statically configured by setting
CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>.

So introduce a get_preferred_pool() API, which allows a PMD to
advertise its pool capability to the application. Based on that
hint, the application creates a separate pool for that driver.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
 lib/librte_ether/rte_ethdev.c          | 16 ++++++++++++++++
 lib/librte_ether/rte_ethdev.h          | 21 +++++++++++++++++++++
 lib/librte_ether/rte_ether_version.map |  7 +++++++
 3 files changed, 44 insertions(+)
  

Comments

Olivier Matz June 30, 2017, 2:13 p.m. UTC | #1
On Thu,  1 Jun 2017 13:35:59 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
> Platform with two different NICs like external PCI NIC and
> Integrated NIC, May want to use their preferred pool handle.
> Right now there is no way that two different NICs on same board,
> Could use their choice of a pool.
> Both NICs forced to use same pool, Which is statically configured
> by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>.
> 
> So Introducing get_preferred_pool() API. Which allows PMD driver
> to advertise their pool capability to Application.
> Based on that hint, Application creates separate pool for
> That driver.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> ---
>  lib/librte_ether/rte_ethdev.c          | 16 ++++++++++++++++
>  lib/librte_ether/rte_ethdev.h          | 21 +++++++++++++++++++++
>  lib/librte_ether/rte_ether_version.map |  7 +++++++
>  3 files changed, 44 insertions(+)
> 
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 83898a8f7..4068a05b1 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id,
>  				-ENOTSUP);
>  	return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en);
>  }
> +
> +int
> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool)
> +{
> +	struct rte_eth_dev *dev;
> +
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +
> +	dev = &rte_eth_devices[port_id];
> +
> +	if (*dev->dev_ops->get_preferred_pool == NULL) {
> +		pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
> +		return 0;
> +	}
> +	return (*dev->dev_ops->get_preferred_pool)(dev, pool);
> +}

Instead of this, what about:

/*
 * Return values:
 *   - -ENOTSUP: error, pool type is not supported
 *   - on success, return the priority of the mempool (0 = highest)
 */
int
rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)

By default, always return 0 (i.e. all pools are supported).

With this API, we can announce several supported pools (not only
one preferred), and order them by preference.

I also wonder if we should use a ops_index instead of a pool name
for the second argument.



> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 0f38b45f8..8e5b06af7 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t)
>  	 uint8_t en);
>  /**< @internal enable/disable the l2 tunnel offload functions */
>  
> +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev,
> +						const char *pool);
> +/**< @internal Get preferred pool handler for a device */
> +
>  #ifdef RTE_NIC_BYPASS
>  
>  enum {
> @@ -1573,6 +1577,8 @@ struct eth_dev_ops {
>  	/**< Get extended device statistic values by ID. */
>  	eth_xstats_get_names_by_id_t xstats_get_names_by_id;
>  	/**< Get name of extended device statistics by ID. */
> +	eth_get_preferred_pool_t get_preferred_pool;
> +	/**< Get preferred pool handler for a device */
>  };
>  
>  /**
> @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id);
>  int
>  rte_eth_dev_get_name_by_port(uint8_t port_id, char *name);
>  
> +/**
> + * Get preferred pool handle for a device
> + *
> + * @param port_id
> + *   port identifier of the device
> + * @param [out] pool
> + *   Preferred pool handle for this device.
> + *   Pool len shouldn't more than 256B. Allocated by pmd driver.

[out] ??
I don't get why it is allocated by the driver



> + * @return
> + *   - (0) if successful.
> + *   - (-EINVAL) on failure.
> + */
> +int
> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
> index d6726bb1b..819fe800e 100644
> --- a/lib/librte_ether/rte_ether_version.map
> +++ b/lib/librte_ether/rte_ether_version.map
> @@ -156,3 +156,10 @@ DPDK_17.05 {
>  	rte_eth_xstats_get_names_by_id;
>  
>  } DPDK_17.02;
> +
> +DPDK_17.08 {
> +	global:
> +
> +	rte_eth_dev_get_preferred_pool;
> +
> +} DPDK_17.05;
  
Santosh Shukla July 4, 2017, 12:39 p.m. UTC | #2
On Friday 30 June 2017 07:43 PM, Olivier Matz wrote:

> On Thu,  1 Jun 2017 13:35:59 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
>> Platform with two different NICs like external PCI NIC and
>> Integrated NIC, May want to use their preferred pool handle.
>> Right now there is no way that two different NICs on same board,
>> Could use their choice of a pool.
>> Both NICs forced to use same pool, Which is statically configured
>> by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>.
>>
>> So Introducing get_preferred_pool() API. Which allows PMD driver
>> to advertise their pool capability to Application.
>> Based on that hint, Application creates separate pool for
>> That driver.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> ---
>>  lib/librte_ether/rte_ethdev.c          | 16 ++++++++++++++++
>>  lib/librte_ether/rte_ethdev.h          | 21 +++++++++++++++++++++
>>  lib/librte_ether/rte_ether_version.map |  7 +++++++
>>  3 files changed, 44 insertions(+)
>>
>> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>> index 83898a8f7..4068a05b1 100644
>> --- a/lib/librte_ether/rte_ethdev.c
>> +++ b/lib/librte_ether/rte_ethdev.c
>> @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id,
>>  				-ENOTSUP);
>>  	return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en);
>>  }
>> +
>> +int
>> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool)
>> +{
>> +	struct rte_eth_dev *dev;
>> +
>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>> +
>> +	dev = &rte_eth_devices[port_id];
>> +
>> +	if (*dev->dev_ops->get_preferred_pool == NULL) {
>> +		pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
>> +		return 0;
>> +	}
>> +	return (*dev->dev_ops->get_preferred_pool)(dev, pool);
>> +}
> Instead of this, what about:
>
> /*
>  * Return values:
>  *   - -ENOTSUP: error, pool type is not supported
>  *   - on success, return the priority of the mempool (0 = highest)
>  */
> int
> rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
>
> By default, always return 0 (i.e. all pools are supported).
>
> With this API, we can announce several supported pools (not only
> one preferred), and order them by preference.

IMO: we should let the application decide on pool preference. The
driver should only advise its preferred or supported pool handle to
the application; it is up to the application to decide on the pool
selection scheme.

> I also wonder if we should use a ops_index instead of a pool name
> for the second argument.
>
>
>
>> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
>> index 0f38b45f8..8e5b06af7 100644
>> --- a/lib/librte_ether/rte_ethdev.h
>> +++ b/lib/librte_ether/rte_ethdev.h
>> @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t)
>>  	 uint8_t en);
>>  /**< @internal enable/disable the l2 tunnel offload functions */
>>  
>> +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev,
>> +						const char *pool);
>> +/**< @internal Get preferred pool handler for a device */
>> +
>>  #ifdef RTE_NIC_BYPASS
>>  
>>  enum {
>> @@ -1573,6 +1577,8 @@ struct eth_dev_ops {
>>  	/**< Get extended device statistic values by ID. */
>>  	eth_xstats_get_names_by_id_t xstats_get_names_by_id;
>>  	/**< Get name of extended device statistics by ID. */
>> +	eth_get_preferred_pool_t get_preferred_pool;
>> +	/**< Get preferred pool handler for a device */
>>  };
>>  
>>  /**
>> @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id);
>>  int
>>  rte_eth_dev_get_name_by_port(uint8_t port_id, char *name);
>>  
>> +/**
>> + * Get preferred pool handle for a device
>> + *
>> + * @param port_id
>> + *   port identifier of the device
>> + * @param [out] pool
>> + *   Preferred pool handle for this device.
>> + *   Pool len shouldn't more than 256B. Allocated by pmd driver.
> [out] ??
> I don't get why it is allocated by the driver
>
The driver advises its preferred pool to the application. That's why it
is an out parameter.

Thanks.

>
>> + * @return
>> + *   - (0) if successful.
>> + *   - (-EINVAL) on failure.
>> + */
>> +int
>> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool);
>> +
>>  #ifdef __cplusplus
>>  }
>>  #endif
>> diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
>> index d6726bb1b..819fe800e 100644
>> --- a/lib/librte_ether/rte_ether_version.map
>> +++ b/lib/librte_ether/rte_ether_version.map
>> @@ -156,3 +156,10 @@ DPDK_17.05 {
>>  	rte_eth_xstats_get_names_by_id;
>>  
>>  } DPDK_17.02;
>> +
>> +DPDK_17.08 {
>> +	global:
>> +
>> +	rte_eth_dev_get_preferred_pool;
>> +
>> +} DPDK_17.05;
  
Olivier Matz July 4, 2017, 1:07 p.m. UTC | #3
On Tue, 4 Jul 2017 18:09:33 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote:
> On Friday 30 June 2017 07:43 PM, Olivier Matz wrote:
> 
> > On Thu,  1 Jun 2017 13:35:59 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:  
> >> Platform with two different NICs like external PCI NIC and
> >> Integrated NIC, May want to use their preferred pool handle.
> >> Right now there is no way that two different NICs on same board,
> >> Could use their choice of a pool.
> >> Both NICs forced to use same pool, Which is statically configured
> >> by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>.
> >>
> >> So Introducing get_preferred_pool() API. Which allows PMD driver
> >> to advertise their pool capability to Application.
> >> Based on that hint, Application creates separate pool for
> >> That driver.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> ---
> >>  lib/librte_ether/rte_ethdev.c          | 16 ++++++++++++++++
> >>  lib/librte_ether/rte_ethdev.h          | 21 +++++++++++++++++++++
> >>  lib/librte_ether/rte_ether_version.map |  7 +++++++
> >>  3 files changed, 44 insertions(+)
> >>
> >> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> >> index 83898a8f7..4068a05b1 100644
> >> --- a/lib/librte_ether/rte_ethdev.c
> >> +++ b/lib/librte_ether/rte_ethdev.c
> >> @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id,
> >>  				-ENOTSUP);
> >>  	return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en);
> >>  }
> >> +
> >> +int
> >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool)
> >> +{
> >> +	struct rte_eth_dev *dev;
> >> +
> >> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> >> +
> >> +	dev = &rte_eth_devices[port_id];
> >> +
> >> +	if (*dev->dev_ops->get_preferred_pool == NULL) {
> >> +		pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
> >> +		return 0;
> >> +	}
> >> +	return (*dev->dev_ops->get_preferred_pool)(dev, pool);
> >> +}  
> > Instead of this, what about:
> >
> > /*
> >  * Return values:
> >  *   - -ENOTSUP: error, pool type is not supported
> >  *   - on success, return the priority of the mempool (0 = highest)
> >  */
> > int
> > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
> >
> > By default, always return 0 (i.e. all pools are supported).
> >
> > With this API, we can announce several supported pools (not only
> > one preferred), and order them by preference.  
> 
> IMO: We should let application to decide on pool preference. Driver
> only to advice his preferred or supported pool handle to application,
> and its upto application to decide on pool selection scheme. 

The api I'm proposing does not prevent the application from taking
the decision. On the contrary, it gives more clues to the application:
an ordered list of supported pools, instead of just the preferred pool.


> 
> > I also wonder if we should use a ops_index instead of a pool name
> > for the second argument.
> >
> >
> >  
> >> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> >> index 0f38b45f8..8e5b06af7 100644
> >> --- a/lib/librte_ether/rte_ethdev.h
> >> +++ b/lib/librte_ether/rte_ethdev.h
> >> @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t)
> >>  	 uint8_t en);
> >>  /**< @internal enable/disable the l2 tunnel offload functions */
> >>  
> >> +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev,
> >> +						const char *pool);
> >> +/**< @internal Get preferred pool handler for a device */
> >> +
> >>  #ifdef RTE_NIC_BYPASS
> >>  
> >>  enum {
> >> @@ -1573,6 +1577,8 @@ struct eth_dev_ops {
> >>  	/**< Get extended device statistic values by ID. */
> >>  	eth_xstats_get_names_by_id_t xstats_get_names_by_id;
> >>  	/**< Get name of extended device statistics by ID. */
> >> +	eth_get_preferred_pool_t get_preferred_pool;
> >> +	/**< Get preferred pool handler for a device */
> >>  };
> >>  
> >>  /**
> >> @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id);
> >>  int
> >>  rte_eth_dev_get_name_by_port(uint8_t port_id, char *name);
> >>  
> >> +/**
> >> + * Get preferred pool handle for a device
> >> + *
> >> + * @param port_id
> >> + *   port identifier of the device
> >> + * @param [out] pool
> >> + *   Preferred pool handle for this device.
> >> + *   Pool len shouldn't more than 256B. Allocated by pmd driver.  
> > [out] ??
> > I don't get why it is allocated by the driver
> >  
> Driver to advice his preferred pool to application. That's why out.

So how can it be const?
Did you try it?



> 
> Thanks.
> 
> >  
> >> + * @return
> >> + *   - (0) if successful.
> >> + *   - (-EINVAL) on failure.
> >> + */
> >> +int
> >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool);
> >> +
> >>  #ifdef __cplusplus
> >>  }
> >>  #endif
> >> diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
> >> index d6726bb1b..819fe800e 100644
> >> --- a/lib/librte_ether/rte_ether_version.map
> >> +++ b/lib/librte_ether/rte_ether_version.map
> >> @@ -156,3 +156,10 @@ DPDK_17.05 {
> >>  	rte_eth_xstats_get_names_by_id;
> >>  
> >>  } DPDK_17.02;
> >> +
> >> +DPDK_17.08 {
> >> +	global:
> >> +
> >> +	rte_eth_dev_get_preferred_pool;
> >> +
> >> +} DPDK_17.05;  
>
  
Jerin Jacob July 4, 2017, 2:12 p.m. UTC | #4
-----Original Message-----
> Date: Tue, 4 Jul 2017 15:07:14 +0200
> From: Olivier Matz <olivier.matz@6wind.com>
> To: santosh <santosh.shukla@caviumnetworks.com>
> Cc: dev@dpdk.org, hemant.agrawal@nxp.com, jerin.jacob@caviumnetworks.com
> Subject: Re: [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred
>  pool capability
> X-Mailer: Claws Mail 3.14.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
> 
> On Tue, 4 Jul 2017 18:09:33 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote:
> > On Friday 30 June 2017 07:43 PM, Olivier Matz wrote:

Hi Olivier,

> > 
> > >> +
> > >> +int
> > >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool)
> > >> +{
> > >> +	struct rte_eth_dev *dev;
> > >> +
> > >> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > >> +
> > >> +	dev = &rte_eth_devices[port_id];
> > >> +
> > >> +	if (*dev->dev_ops->get_preferred_pool == NULL) {
> > >> +		pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
> > >> +		return 0;
> > >> +	}
> > >> +	return (*dev->dev_ops->get_preferred_pool)(dev, pool);
> > >> +}  
> > > Instead of this, what about:
> > >
> > > /*
> > >  * Return values:
> > >  *   - -ENOTSUP: error, pool type is not supported
> > >  *   - on success, return the priority of the mempool (0 = highest)
> > >  */
> > > int
> > > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
> > >
> > > By default, always return 0 (i.e. all pools are supported).
> > >
> > > With this API, we can announce several supported pools (not only
> > > one preferred), and order them by preference.  
> > 
> > IMO: We should let application to decide on pool preference. Driver
> > only to advice his preferred or supported pool handle to application,
> > and its upto application to decide on pool selection scheme. 
> 
> The api I'm proposing does not prevent the application from taking
> the decision. On the contrary, it gives more clues to the application:
> an ordered list of supported pools, instead of just the preferred pool.

Does it complicate the mempool selection procedure from the application
perspective? I have a real-world use case; we can take this as a base
for brainstorming.

A system with two ethdev ports
- Port 0 # traditional NIC # Preferred mempool handler: ring
- Port 1 # integrated NIC # Preferred mempool handler: a_HW_based_ring

Some of the characteristics of a HW-based ring:
- TX buffer recycling is done by HW (packet allocation and free are
  done by HW; no software intervention is required)
- It will _not_ be fast when used with a traditional NIC, as a
  traditional NIC does packet alloc and free in SW, going through the
  mempool per-CPU caches, unlike the HW ring solution.
- So it does not really make sense for an integrated NIC with a
  HW-based ring to use the SW ring handler, and the other way around.

So in this context, all the application wants to know is the preferred
handler for the given ethdev port; any other non-preferred handlers are
_equally_ bad. Not sure what the preference for taking one over another
would be if _the_ preferred handler is not available to the integrated
NIC.

From the application perspective,
approach 1:

char pref_mempool[128];
rte_eth_dev_pool_ops_supported(ethdev_port_id, pref_mempool /* out */);
create_mempool_by_name(pref_mempool);
eth_dev_rx_configure(pref_mempool);


Approach 2 is very complicated. The first problem is an API to get the
available pools. Unlike ethdev, mempool uses a compiler-based
constructor scheme to register mempool PMDs, so a normal build will
contain all the mempool PMDs even when they are not used or applicable.
Isn't that complicating external mempool usage from the application
perspective?

If there is any real-world use for giving a set of pools to a given
eth port, then it makes sense to add that complication in the
application. Does anyone have such a _real world_ use case?

/Jerin
  

Patch

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 83898a8f7..4068a05b1 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3472,3 +3472,19 @@  rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id,
 				-ENOTSUP);
 	return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en);
 }
+
+int
+rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool)
+{
+	struct rte_eth_dev *dev;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+	dev = &rte_eth_devices[port_id];
+
+	if (*dev->dev_ops->get_preferred_pool == NULL) {
+		pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
+		return 0;
+	}
+	return (*dev->dev_ops->get_preferred_pool)(dev, pool);
+}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0f38b45f8..8e5b06af7 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -1381,6 +1381,10 @@  typedef int (*eth_l2_tunnel_offload_set_t)
 	 uint8_t en);
 /**< @internal enable/disable the l2 tunnel offload functions */
 
+typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev,
+						const char *pool);
+/**< @internal Get preferred pool handler for a device */
+
 #ifdef RTE_NIC_BYPASS
 
 enum {
@@ -1573,6 +1577,8 @@  struct eth_dev_ops {
 	/**< Get extended device statistic values by ID. */
 	eth_xstats_get_names_by_id_t xstats_get_names_by_id;
 	/**< Get name of extended device statistics by ID. */
+	eth_get_preferred_pool_t get_preferred_pool;
+	/**< Get preferred pool handler for a device */
 };
 
 /**
@@ -4607,6 +4613,21 @@  rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id);
 int
 rte_eth_dev_get_name_by_port(uint8_t port_id, char *name);
 
+/**
+ * Get preferred pool handle for a device
+ *
+ * @param port_id
+ *   port identifier of the device
+ * @param [out] pool
+ *   Preferred pool handle for this device.
+ *   Pool len shouldn't more than 256B. Allocated by pmd driver.
+ * @return
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index d6726bb1b..819fe800e 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -156,3 +156,10 @@  DPDK_17.05 {
 	rte_eth_xstats_get_names_by_id;
 
 } DPDK_17.02;
+
+DPDK_17.08 {
+	global:
+
+	rte_eth_dev_get_preferred_pool;
+
+} DPDK_17.05;