[dpdk-dev,v3,2/2] ethdev: allow pmd to advertise pool handle
Commit Message
Now that DPDK supports more than one mempool driver, and
each mempool driver works best for a specific PMD, for example:
- sw ring based mempool for Intel PMDs
- dpaa2 HW mempool manager for the dpaa2 PMD
- fpa HW mempool manager for the Octeontx PMD
applications would like to know the preferred mempool for a given
PMD in advance, before port setup.
Introduce the rte_eth_dev_get_preferred_pool_ops() API,
which allows a PMD to advertise its pool capability to the application.
The application-side programming sequence would be:
char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /*out*/);
rte_mempool_create_empty();
rte_mempool_set_ops_byname( , pref_mempool, );
rte_mempool_populate_default();
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v2 --> v3:
- Updated version.map entry to DPDK_17.11.
v1 --> v2:
- Renamed _get_preferred_pool to _get_preferred_pool_ops().
Per v1 review feedback, Olivier suggested renaming the API
to rte_eth_dev_pool_ops_supported(), with the 2nd param
of that API returning the pool handle 'priority' for that port.
However, per v1 [1], we're opting for approach 1), where the
ethdev API returns the _preferred_ pool handle to the application, and it's up
to the application to decide on policy - whether it wants to create
a pool with the received preferred pool handle or not. For more discussion
on this topic, refer to [1].
[1] http://dpdk.org/dev/patchwork/patch/24944/
- Removed the const qualifier from the _get_preferred_pool_ops() param.
- Updated the API description and changed the return value from -EINVAL to
-ENOTSUP (suggested by Olivier)
lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++
lib/librte_ether/rte_ether_version.map | 7 +++++++
3 files changed, 46 insertions(+)
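For reference, the sequence from the commit message could be fleshed out as
below. This is a sketch, not part of the patch: the pool name "port_pool",
the 8192 mbuf count, the cache size of 256 and the use of rte_socket_id()
are illustrative assumptions.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch: create an mbuf pool bound to the PMD-preferred ops. */
static struct rte_mempool *
create_pool_for_port(uint8_t port_id)
{
	char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
	struct rte_mempool *mp;

	/* Ask the PMD (or the EAL default) for the preferred ops name. */
	if (rte_eth_dev_get_preferred_pool_ops(port_id, pref_mempool) != 0)
		return NULL;

	mp = rte_mempool_create_empty("port_pool", 8192,
			sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
			256, sizeof(struct rte_pktmbuf_pool_private),
			rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Bind the pool to the preferred ops before populating it. */
	if (rte_mempool_set_ops_byname(mp, pref_mempool, NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	/* NULL opaque: data room defaults to elt_size - sizeof(rte_mbuf). */
	rte_pktmbuf_pool_init(mp, NULL);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
	return mp;
}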
Comments
Hi Santosh,
On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote:
> [...]
> v1 --> v2:
> - Renamed _get_preferred_pool to _get_preferred_pool_ops().
> Per v1 review feedback, Olivier suggested renaming the API
> to rte_eth_dev_pool_ops_supported(), with the 2nd param
> of that API returning the pool handle 'priority' for that port.
> However, per v1 [1], we're opting for approach 1), where the
> ethdev API returns the _preferred_ pool handle to the application, and it's up
> to the application to decide on policy - whether it wants to create
> a pool with the received preferred pool handle or not. For more discussion
> on this topic, refer to [1].
Well, I still think it would be more flexible to have an API like
rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
It supports the easy case (= one preferred mempool) without much pain,
and provides a more precise manner to describe what is supported or not
by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but
also supports "mempool_stack" and "mempool_ring", while "mempool_bar"
won't work at all.
Having only one preferred pool_ops also prevents smoothly renaming
a pool (supporting both names for some time) or having 2 names for
different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc).
But if the users (I guess at least Cavium and NXP) are happy with
what you propose, I'm fine with it.
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id,
>
> return 0;
> }
> +
> +int
> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool)
> +{
> + struct rte_eth_dev *dev;
> + const char *tmp;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +
> + dev = &rte_eth_devices[port_id];
> +
> + if (*dev->dev_ops->get_preferred_pool_ops == NULL) {
> + tmp = rte_eal_mbuf_default_mempool_ops();
> + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp);
> + return 0;
> + }
> + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool);
> +}
I think adding the length of the pool buffer to the function arguments
would be better: only documenting that the length is
RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it
changes to another value, the users of the function may not notice it
(no ABI/API change).
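For illustration, a length-taking variant might look as follows. This is
hypothetical, not part of the posted patch, and the dev_ops callback
signature would have to carry the length too:

/* Hypothetical variant carrying the destination buffer length. */
int
rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool, size_t len)
{
	struct rte_eth_dev *dev;
	const char *tmp;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);

	dev = &rte_eth_devices[port_id];

	if (*dev->dev_ops->get_preferred_pool_ops == NULL) {
		tmp = rte_eal_mbuf_default_mempool_ops();
		snprintf(pool, len, "%s", tmp);
		return 0;
	}
	/* The dev_ops callback would also need the length (assumed here). */
	return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool, len);
}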
One more comment: it would be helpful to have one user of this API in
the example apps or testpmd.
Olivier
Hi Olivier,
On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote:
> Hi Santosh,
>
> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote:
>> [...]
> Well, I still think it would be more flexible to have an API like
> rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
>
> It supports the easy case (= one preferred mempool) without much pain,
> and provides a more precise manner to describe what is supported or not
> by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but
> also supports "mempool_stack" and "mempool_ring", while "mempool_bar"
> won't work at all.
>
> Having only one preferred pool_ops also prevents smoothly renaming
> a pool (supporting both names for some time) or having 2 names for
> different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc).
>
> But if the users (I guess at least Cavium and NXP) are happy with
> what you propose, I'm fine with it.
The preferred-handle approach is based on a real-world use case, and the
same point was raised at [1].
Hi Hemant, are you ok with the proposed preferred API?
[1] http://dpdk.org/dev/patchwork/patch/24944/
>> [...]
> I think adding the length of the pool buffer to the function arguments
> would be better: only documenting that the length is
> RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it
> changes to another value, the users of the function may not notice it
> (no ABI/API change).
>
>
> One more comment: it would be helpful to have one user of this API in
> the example apps or testpmd.
Yes. I will add in v3. Thanks.
> Olivier
On 9/4/2017 6:44 PM, santosh wrote:
> Hi Olivier,
>
>
> On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote:
>> Hi Santosh,
>>
>> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote:
>> [...]
>
> The preferred-handle approach is based on a real-world use case, and the
> same point was raised at [1].
>
> Hi Hemant, are you ok with the proposed preferred API?
>
> [1] http://dpdk.org/dev/patchwork/patch/24944/
>
The current patch is ok, but it would be better if you could extend it to
provide a list of preferred pools (in preference order) instead of just
one pool. That would be helpful; I would avoid providing a list of
not-supported pools etc.
A driver can have more than one preferred pool: depending on resource
availability, one or the other can be used. I am also proposing this in my
proposal [1].
[1] http://dpdk.org/dev/patchwork/patch/26377/
Hi Hemant,
On Thursday 07 September 2017 02:51 PM, Hemant Agrawal wrote:
> [...]
> The current patch is ok, but it would be better if you could extend it to provide a list of preferred pools (in preference order) instead of just one pool. That would be helpful; I would avoid providing a list of not-supported pools etc.
>
> A driver can have more than one preferred pool: depending on resource availability, one or the other can be used. I am also proposing this in my proposal [1].
>
> [1] http://dpdk.org/dev/patchwork/patch/26377/
>
Ok, then sticking to Olivier's API, but with a slight change in the param;
example:
/** * Get list of supported pools for a port * * @param port_id [in] * Pointer to port identifier of the device. * @param pools [out] * Pointer to the list of supported pools for that port. * Returns with array of pool ops name handler of size * RTE_MEMPOOL_OPS_NAMESIZE. * @return * >=0: Success; PMD has updated supported pool list. * <0: Failure; */ int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks.
On Thursday 07 September 2017 03:36 PM, santosh wrote:
> Hi Hemant,
>
> [...]
Sorry for the font, resending proposed API:
/**
* Get list of supported pools for a port
* @param port_id [in]
* Pointer to port identifier of the device.
* @param pools [out]
* Pointer to the list of supported pools for that port.
* Returns with array of pool ops name handler of size
* RTE_MEMPOOL_OPS_NAMESIZE.
* @return
* >=0: Success; PMD has updated supported pool list.
* <0: Failure;
*/
int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
Hemant, Olivier: does the above API make sense? Please confirm. Thanks.
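For context, a caller of this list variant might look like the sketch
below. It is purely hypothetical: the MAX_POOL_OPS bound and the assumption
that a >= 0 return is the number of entries filled are mine, and this list
API was ultimately not adopted.

#include <stdio.h>
#include <rte_mempool.h>

#define MAX_POOL_OPS 8 /* assumed bound, not defined by the proposal */

static void
print_supported_pools(uint8_t port_id)
{
	char names[MAX_POOL_OPS][RTE_MEMPOOL_OPS_NAMESIZE];
	char *pools[MAX_POOL_OPS];
	int i, ret;

	/* The out param is an array of caller-owned name buffers. */
	for (i = 0; i < MAX_POOL_OPS; i++)
		pools[i] = names[i];

	/* Assumption: a >= 0 return is the number of names filled in. */
	ret = rte_eth_dev_pools_ops_supported(port_id, pools);
	for (i = 0; i < ret && i < MAX_POOL_OPS; i++)
		printf("port %u supports pool ops: %s\n", port_id, pools[i]);
}

The buffer juggling above is part of why a per-name query ends up simpler,
as discussed further down the thread.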
On 9/7/2017 3:41 PM, santosh wrote:
> [...]
>
> Sorry for the font, resending proposed API:
>
> /**
> * Get list of supported pools for a port
> * @param port_id [in]
> * Pointer to port identifier of the device.
> * @param pools [out]
> * Pointer to the list of supported pools for that port.
> * Returns with array of pool ops name handler of size
> * RTE_MEMPOOL_OPS_NAMESIZE.
> * @return
> * >=0: Success; PMD has updated supported pool list.
> * <0: Failure;
> */
>
> int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
>
> Hemant, Olivier: does the above API make sense? Please confirm. Thanks.
>
looks ok to me.
On Thu, Sep 07, 2017 at 04:38:39PM +0530, Hemant Agrawal wrote:
> On 9/7/2017 3:41 PM, santosh wrote:
> > [...]
> > Sorry for the font, resending proposed API:
> >
> > /**
> > * Get list of supported pools for a port
> > * @param port_id [in]
> > * Pointer to port identifier of the device.
> > * @param pools [out]
> > * Pointer to the list of supported pools for that port.
> > * Returns with array of pool ops name handler of size
> > * RTE_MEMPOOL_OPS_NAMESIZE.
> > * @return
> > * >=0: Success; PMD has updated supported pool list.
> > * <0: Failure;
> > */
> >
> > int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
> >
> > Hemant, Olivier: does the above API make sense? Please confirm. Thanks.
> >
>
> looks ok to me.
I think that returning a list is harder to use in an application than an
API that just returns an int (priority):
int rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
The possible return values are:
-ENOTSUP: mempool ops not supported
< 0: any other error
0: best mempool ops choice for this pmd
1: these mempool ops are supported
Let's take an example. Our application wants to select ops that
will match all pmds. The pseudo code would be like this:
best_score = -1
best_ops = NULL
for ops in mempool_ops:
score = 0
for p in ports:
ret = rte_eth_dev_pool_ops_supported(p, ops.name)
if ret < 0:
score = -1
break
score += ret
if score == -1:
continue
if best_score == -1 || score < best_score:
best_score = score
best_ops = ops
if best_score == -1:
print "no matching mempool ops"
else:
print "selected ops: %s", best_ops.name
You can do the exercise with the API you are proposing, but I think
it would be harder.
Olivier
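In C, that selection loop could be sketched as below. Assumptions: the
single-name rte_eth_dev_pool_ops_supported() variant discussed here,
contiguously numbered ports, and iteration over the registered ops via
rte_mempool_ops_table:

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Pick the mempool ops with the best (lowest) cumulative score
 * across all ports; returns NULL if no ops suits every port. */
static const char *
select_common_pool_ops(void)
{
	const char *best_name = NULL;
	int best_score = -1;
	unsigned int i;
	uint8_t p;

	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
		const char *name = rte_mempool_ops_table.ops[i].name;
		int score = 0;

		for (p = 0; p < rte_eth_dev_count(); p++) {
			int ret = rte_eth_dev_pool_ops_supported(p, name);

			if (ret < 0) { /* -ENOTSUP or other error */
				score = -1;
				break;
			}
			score += ret; /* 0 = preferred, 1 = supported */
		}
		if (score >= 0 && (best_score == -1 || score < best_score)) {
			best_score = score;
			best_name = name;
		}
	}
	return best_name;
}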
Hi Olivier,
On Monday 11 September 2017 03:03 PM, Olivier MATZ wrote:
> [...]
> I think that returning a list is harder to use in an application than an
> API that just returns an int (priority):
>
> int rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
>
> The possible return values are:
> -ENOTSUP: mempool ops not supported
> < 0: any other error
> 0: best mempool ops choice for this pmd
> 1: these mempool ops are supported
> [...]
The proposed model looks okay, and I'll implement the _pool_ops_supported()
API. But I can't promise the testpmd-related changes with this series:
since rc1 is closing, let the API go into the -rc1 release and the testpmd
changes will follow. Is that ok with you?
Thanks.
On Mon, Sep 11, 2017 at 06:10:36PM +0530, santosh wrote:
> [...]
> The proposed model looks okay, and I'll implement the _pool_ops_supported()
> API. But I can't promise the testpmd-related changes with this series:
> since rc1 is closing, let the API go into the -rc1 release and the testpmd
> changes will follow. Is that ok with you?
It's ok for me.
@@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id,
return 0;
}
+
+int
+rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool)
+{
+ struct rte_eth_dev *dev;
+ const char *tmp;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ dev = &rte_eth_devices[port_id];
+
+ if (*dev->dev_ops->get_preferred_pool_ops == NULL) {
+ tmp = rte_eal_mbuf_default_mempool_ops();
+ snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp);
+ return 0;
+ }
+ return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool);
+}
@@ -1425,6 +1425,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev,
struct rte_eth_dcb_info *dcb_info);
/**< @internal Get dcb information on an Ethernet device */
+typedef int (*eth_get_preferred_pool_ops_t)(struct rte_eth_dev *dev,
+ char *pool);
+/**< @internal Get preferred pool handle for port */
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1544,6 +1548,8 @@ struct eth_dev_ops {
eth_tm_ops_get_t tm_ops_get;
/**< Get Traffic Management (TM) operations. */
+ eth_get_preferred_pool_ops_t get_preferred_pool_ops;
+ /**< Get preferred pool handle for port */
};
/**
@@ -4436,6 +4442,21 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id,
uint16_t *nb_rx_desc,
uint16_t *nb_tx_desc);
+/**
+ * Get preferred pool handle for port
+ *
+ * @param port_id
+ * port identifier of the device
+ * @param [out] pool
+ * Preferred pool handle for this port.
+ * Maximum length of preferred pool handle is RTE_MBUF_POOL_OPS_NAMESIZE.
+ * @return
+ * - (0) if successful.
+ * - (-ENOTSUP, -ENODEV or -EINVAL) on failure.
+ */
+int
+rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool);
+
#ifdef __cplusplus
}
#endif
@@ -187,3 +187,10 @@ DPDK_17.08 {
rte_tm_wred_profile_delete;
} DPDK_17.05;
+
+DPDK_17.11 {
+ global:
+
+ rte_eth_dev_get_preferred_pool_ops;
+
+} DPDK_17.08;