[dpdk-dev,v1,3/9] mempool: remove callback to get capabilities

Message ID 1520696382-16400-4-git-send-email-arybchenko@solarflare.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon

Checks

Context               Check     Description
ci/checkpatch         success   coding style OK
ci/Intel-compilation  success   Compilation OK

Commit Message

Andrew Rybchenko March 10, 2018, 3:39 p.m. UTC
  The callback was introduced to let generic code know about the octeontx
mempool driver requirements: to use a single physically contiguous
memory chunk to store all objects and to align object addresses to the
total object size. Now these requirements are met using the new
callbacks to calculate the required memory chunk size and to populate
objects using the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and is likely to
be insufficient to describe mempool driver features. If required
in the future, an API that returns structured information may be
added.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - squash the mempool/octeontx patches that add the calc_mem_size and
   populate callbacks into this one in order to avoid breakage in the
   middle of the patchset
 - advertise the API changes in the release notes

 doc/guides/rel_notes/deprecation.rst            |  1 -
 doc/guides/rel_notes/release_18_05.rst          | 11 +++++
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 59 +++++++++++++++++++++----
 lib/librte_mempool/rte_mempool.c                | 44 ++----------------
 lib/librte_mempool/rte_mempool.h                | 52 +---------------------
 lib/librte_mempool/rte_mempool_ops.c            | 14 ------
 lib/librte_mempool/rte_mempool_ops_default.c    | 15 +------
 lib/librte_mempool/rte_mempool_version.map      |  1 -
 8 files changed, 68 insertions(+), 129 deletions(-)
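
As a quick illustration of the change described in the commit message, below is
a minimal sketch of how a driver can now express the "single physically
contiguous area, objects aligned to total object size" requirement through the
calc_mem_size callback instead of capability flags. It mirrors the
octeontx_fpavf_calc_mem_size() added in the patch below; the function name here
is illustrative.

static ssize_t
example_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		      uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	ssize_t mem_size;

	/* Ask for space for one extra object so that object start addresses
	 * can be aligned to the total element size. */
	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
							pg_shift,
							min_chunk_size, align);
	if (mem_size >= 0) {
		/* The whole area containing the objects must be physically
		 * contiguous, so the minimum chunk is the whole area. */
		*min_chunk_size = mem_size;
	}

	return mem_size;
}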
  

Comments

Burakov, Anatoly March 14, 2018, 2:40 p.m. UTC | #1
On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
> The callback was introduced to let generic code to know octeontx
> mempool driver requirements to use single physically contiguous
> memory chunk to store all objects and align object address to
> total object size. Now these requirements are met using a new
> callbacks to calculate required memory chunk size and to populate
> objects using provided memory chunk.
> 
> These capability flags are not used anywhere else.
> 
> Restricting capabilities to flags is not generic and likely to
> be insufficient to describe mempool driver features. If required
> in the future, API which returns structured information may be
> added.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Just a general comment - it is not enough to describe minimum memchunk
requirements. With the memory hotplug patchset that is hopefully getting
merged in 18.05, memzones will no longer be guaranteed to be
IOVA-contiguous. So, if a driver requires its mempool to be populated not
just from a single memzone, but from a single *physically contiguous*
memzone, going by the callbacks alone will not do, because whether or not
something should be a single memzone says nothing about whether this
memzone has to also be IOVA-contiguous.

So I believe this needs to stay in one form or another.

(also it would be nice to have a flag that a user could pass to
mempool_create that would force memzone reservation to be IOVA-contiguous,
but that's a topic for another conversation. The prime user for this would
be KNI.)
  
Andrew Rybchenko March 14, 2018, 4:12 p.m. UTC | #2
On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>> The callback was introduced to let generic code to know octeontx
>> mempool driver requirements to use single physically contiguous
>> memory chunk to store all objects and align object address to
>> total object size. Now these requirements are met using a new
>> callbacks to calculate required memory chunk size and to populate
>> objects using provided memory chunk.
>>
>> These capability flags are not used anywhere else.
>>
>> Restricting capabilities to flags is not generic and likely to
>> be insufficient to describe mempool driver features. If required
>> in the future, API which returns structured information may be
>> added.
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>
> Just a general comment - it is not enough to describe minimum memchunk 
> requirements. With memory hotplug patchset that's hopefully getting 
> merged in 18.05, memzones will no longer be guaranteed to be 
> IOVA-contiguous. So, if a driver requires its mempool to not only be 
> populated from a single memzone, but a single *physically contiguous* 
> memzone, going by only callbacks will not do, because whether or not 
> something should be a single memzone says nothing about whether this 
> memzone has to also be IOVA-contiguous.
>
> So i believe this needs to stay in one form or another.
>
> (also it would be nice to have a flag that a user could pass to 
> mempool_create that would force memzone reservation be 
> IOVA-contiguous, but that's a topic for another conversation. prime 
> user for this would be KNI.)

I think that min_chunk_size should be treated as IOVA-contiguous. So, we
have 4 levels:
  - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- IOVA-contiguous
is not required at all
  - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) --
each object should be IOVA-contiguous
  - min_chunk_size > total_obj_size -- a group of objects should be
IOVA-contiguous
  - min_chunk_size == <all-objects-size> -- all objects should be
IOVA-contiguous

If so, how should allocation be implemented? (A rough C sketch follows
below.)
  1. if (min_chunk_size > min_page_size)
     a. try to allocate everything IOVA-contiguous
     b. if that cannot be done, allocate IOVA-contiguous chunks of
min_chunk_size
  2. else allocate non-contiguous

--
Andrew.
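
A rough sketch of the allocation strategy outlined above, in C. reserve_mem()
is a hypothetical helper, not an existing DPDK function, and a real
implementation would loop over chunks until the whole pool is populated.

#include <stddef.h>

/* Hypothetical helper: reserve 'len' bytes, IOVA-contiguous if requested;
 * returns NULL on failure. Not an existing DPDK API. */
extern void *reserve_mem(size_t len, int iova_contig);

static void *
alloc_pool_memory(size_t total_mem_size, size_t min_chunk_size,
		  size_t min_page_size)
{
	void *mem;

	if (min_chunk_size > min_page_size) {
		/* 1a. try to reserve all objects in one IOVA-contiguous area */
		mem = reserve_mem(total_mem_size, 1);
		if (mem != NULL)
			return mem;
		/* 1b. fall back to IOVA-contiguous chunks of min_chunk_size */
		return reserve_mem(min_chunk_size, 1);
	}

	/* 2. no IOVA-contiguity required beyond the page size */
	return reserve_mem(total_mem_size, 0);
}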
  
Burakov, Anatoly March 14, 2018, 4:53 p.m. UTC | #3
On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>> The callback was introduced to let generic code to know octeontx
>>> mempool driver requirements to use single physically contiguous
>>> memory chunk to store all objects and align object address to
>>> total object size. Now these requirements are met using a new
>>> callbacks to calculate required memory chunk size and to populate
>>> objects using provided memory chunk.
>>>
>>> These capability flags are not used anywhere else.
>>>
>>> Restricting capabilities to flags is not generic and likely to
>>> be insufficient to describe mempool driver features. If required
>>> in the future, API which returns structured information may be
>>> added.
>>>
>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>> ---
>>
>> Just a general comment - it is not enough to describe minimum memchunk 
>> requirements. With memory hotplug patchset that's hopefully getting 
>> merged in 18.05, memzones will no longer be guaranteed to be 
>> IOVA-contiguous. So, if a driver requires its mempool to not only be 
>> populated from a single memzone, but a single *physically contiguous* 
>> memzone, going by only callbacks will not do, because whether or not 
>> something should be a single memzone says nothing about whether this 
>> memzone has to also be IOVA-contiguous.
>>
>> So i believe this needs to stay in one form or another.
>>
>> (also it would be nice to have a flag that a user could pass to 
>> mempool_create that would force memzone reservation be 
>> IOVA-contiguous, but that's a topic for another conversation. prime 
>> user for this would be KNI.)
> 
> I think that min_chunk_size should be treated as IOVA-contiguous.

Why? It's perfectly reasonable to e.g. implement a software mempool
driver that would perform some optimizations due to all objects being in
the same VA-contiguous memzone, yet not be dependent on the underlying
physical memory layout. These are two separate concerns IMO.

 > So, we
> have 4 levels:
>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- IOVA-congtiguous 
> is not required at all
>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
> object should be IOVA-contiguous
>   - min_chunk_size > total_obj_size  -- group of objects should be 
> IOVA-contiguous
>   - min_chunk_size == <all-objects-size> -- all objects should be 
> IOVA-contiguous

I don't think this "automagic" decision on what should be
IOVA-contiguous or not is the way to go. It needlessly complicates
things, when all it takes is another flag passed to the mempool allocator
somewhere.

I'm not sure what the best solution is here. Perhaps another option
would be to let mempool drivers allocate their memory as well? I.e.
leave the current behavior as the default, as it's likely to be
suitable for nearly all use cases, but provide another option to
override memory allocation completely, so that e.g. octeontx could just
do a memzone_reserve_contig() without regard for the default allocation
settings. I think this could be the cleanest solution.

> 
> If so, how allocation should be implemented?
>   1. if (min_chunk_size > min_page_size)
>      a. try all contiguous
>      b. if cannot, do by mem_chunk_size contiguous
>   2. else allocate non-contiguous
> 
> --
> Andrew.
  
Andrew Rybchenko March 14, 2018, 5:24 p.m. UTC | #4
On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>> The callback was introduced to let generic code to know octeontx
>>>> mempool driver requirements to use single physically contiguous
>>>> memory chunk to store all objects and align object address to
>>>> total object size. Now these requirements are met using a new
>>>> callbacks to calculate required memory chunk size and to populate
>>>> objects using provided memory chunk.
>>>>
>>>> These capability flags are not used anywhere else.
>>>>
>>>> Restricting capabilities to flags is not generic and likely to
>>>> be insufficient to describe mempool driver features. If required
>>>> in the future, API which returns structured information may be
>>>> added.
>>>>
>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> ---
>>>
>>> Just a general comment - it is not enough to describe minimum 
>>> memchunk requirements. With memory hotplug patchset that's hopefully 
>>> getting merged in 18.05, memzones will no longer be guaranteed to be 
>>> IOVA-contiguous. So, if a driver requires its mempool to not only be 
>>> populated from a single memzone, but a single *physically 
>>> contiguous* memzone, going by only callbacks will not do, because 
>>> whether or not something should be a single memzone says nothing 
>>> about whether this memzone has to also be IOVA-contiguous.
>>>
>>> So i believe this needs to stay in one form or another.
>>>
>>> (also it would be nice to have a flag that a user could pass to 
>>> mempool_create that would force memzone reservation be 
>>> IOVA-contiguous, but that's a topic for another conversation. prime 
>>> user for this would be KNI.)
>>
>> I think that min_chunk_size should be treated as IOVA-contiguous.
>
> Why? It's perfectly reasonable to e.g. implement a software mempool 
> driver that would perform some optimizations due to all objects being 
> in the same VA-contiguous memzone, yet not be dependent on underlying 
> physical memory layout. These are two separate concerns IMO.

It looks like there is some misunderstanding here, or I simply don't
understand your point.
Above I mean that the driver should be able to advertise its requirements
for IOVA-contiguous regions.
If the driver does not care about the physical memory layout, no problem.

> > So, we
>> have 4 levels:
>>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- 
>> IOVA-congtiguous is not required at all
>>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
>> object should be IOVA-contiguous
>>   - min_chunk_size > total_obj_size  -- group of objects should be 
>> IOVA-contiguous
>>   - min_chunk_size == <all-objects-size> -- all objects should be 
>> IOVA-contiguous
>
> I don't think this "automagic" decision on what should be 
> IOVA-contiguous or not is the way to go. It needlessly complicates 
> things, when all it takes is another flag passed to mempool allocator 
> somewhere.

No, it is not just one flag. We really need option (3) above: a group of
objects being IOVA-contiguous, as in [1].
Of course, it is possible to use option (4) instead: everything
IOVA-contiguous, but I think it is bad - the area may be very big and
hard or impossible to allocate due to fragmentation.

> I'm not sure what is the best solution here. Perhaps another option 
> would be to let mempool drivers allocate their memory as well? I.e. 
> leave current behavior as default, as it's likely that it would be 
> suitable for nearly all use cases, but provide another option to 
> override memory allocation completely, so that e.g. octeontx could 
> just do a memzone_reserve_contig() without regard for default 
> allocation settings. I think this could be the cleanest solution.

For me it is hard to say. I don't know the DPDK history well enough to say
why there is a mempool API to populate objects on externally provided
memory. If it can be removed, it is OK for me to do memory allocation
inside rte_mempool or mempool drivers. Otherwise, if it is still allowed
to allocate memory externally and pass it to the mempool, there must be a
way to express IOVA-contiguous requirements.

[1] https://dpdk.org/dev/patchwork/patch/34338/

>
>>
>> If so, how allocation should be implemented?
>>   1. if (min_chunk_size > min_page_size)
>>      a. try all contiguous
>>      b. if cannot, do by mem_chunk_size contiguous
>>   2. else allocate non-contiguous
>>
>> -- 
>> Andrew.
>
>
  
Burakov, Anatoly March 15, 2018, 9:48 a.m. UTC | #5
On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>> The callback was introduced to let generic code to know octeontx
>>>>> mempool driver requirements to use single physically contiguous
>>>>> memory chunk to store all objects and align object address to
>>>>> total object size. Now these requirements are met using a new
>>>>> callbacks to calculate required memory chunk size and to populate
>>>>> objects using provided memory chunk.
>>>>>
>>>>> These capability flags are not used anywhere else.
>>>>>
>>>>> Restricting capabilities to flags is not generic and likely to
>>>>> be insufficient to describe mempool driver features. If required
>>>>> in the future, API which returns structured information may be
>>>>> added.
>>>>>
>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>> ---
>>>>
>>>> Just a general comment - it is not enough to describe minimum 
>>>> memchunk requirements. With memory hotplug patchset that's hopefully 
>>>> getting merged in 18.05, memzones will no longer be guaranteed to be 
>>>> IOVA-contiguous. So, if a driver requires its mempool to not only be 
>>>> populated from a single memzone, but a single *physically 
>>>> contiguous* memzone, going by only callbacks will not do, because 
>>>> whether or not something should be a single memzone says nothing 
>>>> about whether this memzone has to also be IOVA-contiguous.
>>>>
>>>> So i believe this needs to stay in one form or another.
>>>>
>>>> (also it would be nice to have a flag that a user could pass to 
>>>> mempool_create that would force memzone reservation be 
>>>> IOVA-contiguous, but that's a topic for another conversation. prime 
>>>> user for this would be KNI.)
>>>
>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>
>> Why? It's perfectly reasonable to e.g. implement a software mempool 
>> driver that would perform some optimizations due to all objects being 
>> in the same VA-contiguous memzone, yet not be dependent on underlying 
>> physical memory layout. These are two separate concerns IMO.
> 
> It looks like there is some misunderstanding here or I simply don't 
> understand your point.
> Above I mean that driver should be able to advertise its requirements on 
> IOVA-contiguous regions.
> If driver do not care about physical memory layout, no problem.

Please correct me if I'm wrong, but my understanding was that you wanted
to use min_chunk as a way to express minimum requirements for
IOVA-contiguous memory. If I understood you correctly, I don't think
that's the way to go, because there could be valid use cases where a
mempool driver would like to advertise min_chunk_size to be equal to its
total size (i.e. allocate everything in a single memzone), yet not
require that memzone to be IOVA-contiguous. I think these are two
different concerns, and one does not, and should not, imply the other.

> 
>> > So, we
>>> have 4 levels:
>>>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- 
>>> IOVA-congtiguous is not required at all
>>>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
>>> object should be IOVA-contiguous
>>>   - min_chunk_size > total_obj_size  -- group of objects should be 
>>> IOVA-contiguous
>>>   - min_chunk_size == <all-objects-size> -- all objects should be 
>>> IOVA-contiguous
>>
>> I don't think this "automagic" decision on what should be 
>> IOVA-contiguous or not is the way to go. It needlessly complicates 
>> things, when all it takes is another flag passed to mempool allocator 
>> somewhere.
> 
> No, it is not just one flag. We really need option (3) above: group of 
> objects IOVA-contiguous in [1].
> Of course, it is possible to use option (4) instead: everything 
> IOVA-contigous, but I think it is bad - it may be very big and 
> hard/impossible to allocate due to fragmentation.
> 

Exactly: we shouldn't be forcing IOVA-contiguous memory just because a
mempool requested a big min_chunk_size, nor do I think it is wise to
encode such heuristics (referring to your 4 "levels" quoted above) into
the mempool allocator.

>> I'm not sure what is the best solution here. Perhaps another option 
>> would be to let mempool drivers allocate their memory as well? I.e. 
>> leave current behavior as default, as it's likely that it would be 
>> suitable for nearly all use cases, but provide another option to 
>> override memory allocation completely, so that e.g. octeontx could 
>> just do a memzone_reserve_contig() without regard for default 
>> allocation settings. I think this could be the cleanest solution.
> 
> For me it is hard to say. I don't know DPDK history good enough to say 
> why there is a mempool API to populate objects on externally provided 
> memory. If it may be removed, it is OK for me to do memory allocation 
> inside rte_mempool or mempool drivers. Otherwise, if it is still allowed 
> to allocate memory externally and pass it to mempool, it must be a way 
> to express IOVA-contiguos requirements.
> 
> [1] https://dpdk.org/dev/patchwork/patch/34338/

Populating mempool objects is not the same as reserving memory where 
those objects would reside. The closest to "allocate memory externally" 
we have is rte_mempool_xmem_create(), which you are removing in this 
patchset.

> 
>>
>>>
>>> If so, how allocation should be implemented?
>>>   1. if (min_chunk_size > min_page_size)
>>>      a. try all contiguous
>>>      b. if cannot, do by mem_chunk_size contiguous
>>>   2. else allocate non-contiguous
>>>
>>> -- 
>>> Andrew.
>>
>>
>
  
Andrew Rybchenko March 15, 2018, 11:49 a.m. UTC | #6
On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>> The callback was introduced to let generic code to know octeontx
>>>>>> mempool driver requirements to use single physically contiguous
>>>>>> memory chunk to store all objects and align object address to
>>>>>> total object size. Now these requirements are met using a new
>>>>>> callbacks to calculate required memory chunk size and to populate
>>>>>> objects using provided memory chunk.
>>>>>>
>>>>>> These capability flags are not used anywhere else.
>>>>>>
>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>> be insufficient to describe mempool driver features. If required
>>>>>> in the future, API which returns structured information may be
>>>>>> added.
>>>>>>
>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>> ---
>>>>>
>>>>> Just a general comment - it is not enough to describe minimum 
>>>>> memchunk requirements. With memory hotplug patchset that's 
>>>>> hopefully getting merged in 18.05, memzones will no longer be 
>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its 
>>>>> mempool to not only be populated from a single memzone, but a 
>>>>> single *physically contiguous* memzone, going by only callbacks 
>>>>> will not do, because whether or not something should be a single 
>>>>> memzone says nothing about whether this memzone has to also be 
>>>>> IOVA-contiguous.
>>>>>
>>>>> So i believe this needs to stay in one form or another.
>>>>>
>>>>> (also it would be nice to have a flag that a user could pass to 
>>>>> mempool_create that would force memzone reservation be 
>>>>> IOVA-contiguous, but that's a topic for another conversation. 
>>>>> prime user for this would be KNI.)
>>>>
>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>
>>> Why? It's perfectly reasonable to e.g. implement a software mempool 
>>> driver that would perform some optimizations due to all objects 
>>> being in the same VA-contiguous memzone, yet not be dependent on 
>>> underlying physical memory layout. These are two separate concerns IMO.
>>
>> It looks like there is some misunderstanding here or I simply don't 
>> understand your point.
>> Above I mean that driver should be able to advertise its requirements 
>> on IOVA-contiguous regions.
>> If driver do not care about physical memory layout, no problem.
>
> Please correct me if i'm wrong, but my understanding was that you 
> wanted to use min_chunk as a way to express minimum requirements for 
> IOVA-contiguous memory. If i understood you correctly, i don't think 
> that's the way to go because there could be valid use cases where a 
> mempool driver would like to advertise min_chunk_size to be equal to 
> its total size (i.e. allocate everything in a single memzone), yet not 
> require that memzone to be IOVA-contiguous. I think these are two 
> different concerns, and one does not, and should not imply the other.

Aha, you're saying that virtual-contiguous and IOVA-contiguous
requirements are different things, and that there could be use cases where
virtual contiguity is important but IOVA-contiguity is not required. That
is perfectly fine.
As I understand it, IOVA-contiguous (physical) typically means
virtual-contiguous as well. Requirements to have everything virtually
contiguous but only some blocks physically contiguous are unlikely. So, it
may be reduced to either virtual or physical contiguity. If a mempool does
not care about physical contiguity at all, the MEMPOOL_F_NO_PHYS_CONTIG
flag should be used and min_chunk_size should mean virtual contiguity
requirements. If a mempool requires physically contiguous objects, the
MEMPOOL_F_NO_PHYS_CONTIG flag is *not* set and min_chunk_size means
physical contiguity requirements.
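
Put differently, an allocator following this convention could decide whether
min_chunk_size implies IOVA-contiguity with a trivial check (illustrative
sketch only):

#include <stdbool.h>
#include <rte_mempool.h>

/* Illustrative only: under the convention described above, min_chunk_size
 * expresses an IOVA-contiguous requirement unless the pool opted out of
 * physical contiguity via MEMPOOL_F_NO_PHYS_CONTIG. */
static bool
min_chunk_is_iova_contig(const struct rte_mempool *mp)
{
	return (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG) == 0;
}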

>>
>>> > So, we
>>>> have 4 levels:
>>>>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- 
>>>> IOVA-congtiguous is not required at all
>>>>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) 
>>>> -- object should be IOVA-contiguous
>>>>   - min_chunk_size > total_obj_size  -- group of objects should be 
>>>> IOVA-contiguous
>>>>   - min_chunk_size == <all-objects-size> -- all objects should be 
>>>> IOVA-contiguous
>>>
>>> I don't think this "automagic" decision on what should be 
>>> IOVA-contiguous or not is the way to go. It needlessly complicates 
>>> things, when all it takes is another flag passed to mempool 
>>> allocator somewhere.
>>
>> No, it is not just one flag. We really need option (3) above: group 
>> of objects IOVA-contiguous in [1].
>> Of course, it is possible to use option (4) instead: everything 
>> IOVA-contigous, but I think it is bad - it may be very big and 
>> hard/impossible to allocate due to fragmentation.
>>
>
> Exactly: we shouldn't be forcing IOVA-contiguous memory just because 
> mempool requrested a big min_chunk_size, nor do i think it is wise to 
> encode such heuristics (referring to your 4 "levels" quoted above) 
> into the mempool allocator.
>
>>> I'm not sure what is the best solution here. Perhaps another option 
>>> would be to let mempool drivers allocate their memory as well? I.e. 
>>> leave current behavior as default, as it's likely that it would be 
>>> suitable for nearly all use cases, but provide another option to 
>>> override memory allocation completely, so that e.g. octeontx could 
>>> just do a memzone_reserve_contig() without regard for default 
>>> allocation settings. I think this could be the cleanest solution.
>>
>> For me it is hard to say. I don't know DPDK history good enough to 
>> say why there is a mempool API to populate objects on externally 
>> provided memory. If it may be removed, it is OK for me to do memory 
>> allocation inside rte_mempool or mempool drivers. Otherwise, if it is 
>> still allowed to allocate memory externally and pass it to mempool, 
>> it must be a way to express IOVA-contiguos requirements.
>>
>> [1] https://dpdk.org/dev/patchwork/patch/34338/
>
> Populating mempool objects is not the same as reserving memory where 
> those objects would reside. The closest to "allocate memory 
> externally" we have is rte_mempool_xmem_create(), which you are 
> removing in this patchset.

It is not the only function. Other functions remain:
rte_mempool_populate_iova, rte_mempool_populate_iova_tab,
rte_mempool_populate_virt. These functions may be used to add memory areas
to a mempool in order to populate objects. So, the memory is allocated
externally, and the external entity needs to know the memory allocation
requirements: the size, and whether the area must be virtually or both
virtually and physically contiguous.
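
For illustration, a minimal sketch of how an external entity could hand a
memory area it allocated to a mempool via rte_mempool_populate_iova(); the
wrapper name is mine, and the area is assumed to already satisfy the
min_chunk_size/align reported by the driver's calc_mem_size callback.

#include <rte_mempool.h>

/* Sketch: populate mempool objects in an externally allocated area.
 * Passing a NULL free callback assumes the caller keeps ownership of
 * the memory and releases it itself after the mempool is destroyed. */
static int
populate_from_external(struct rte_mempool *mp, char *vaddr,
		       rte_iova_t iova, size_t len)
{
	return rte_mempool_populate_iova(mp, vaddr, iova, len, NULL, NULL);
}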

>>
>>>
>>>>
>>>> If so, how allocation should be implemented?
>>>>   1. if (min_chunk_size > min_page_size)
>>>>      a. try all contiguous
>>>>      b. if cannot, do by mem_chunk_size contiguous
>>>>   2. else allocate non-contiguous
>>>>
>>>> -- 
>>>> Andrew.
>>>
>>>
>>
>
>
  
Burakov, Anatoly March 15, 2018, noon UTC | #7
On 15-Mar-18 11:49 AM, Andrew Rybchenko wrote:
> On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
>> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>>> The callback was introduced to let generic code to know octeontx
>>>>>>> mempool driver requirements to use single physically contiguous
>>>>>>> memory chunk to store all objects and align object address to
>>>>>>> total object size. Now these requirements are met using a new
>>>>>>> callbacks to calculate required memory chunk size and to populate
>>>>>>> objects using provided memory chunk.
>>>>>>>
>>>>>>> These capability flags are not used anywhere else.
>>>>>>>
>>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>>> be insufficient to describe mempool driver features. If required
>>>>>>> in the future, API which returns structured information may be
>>>>>>> added.
>>>>>>>
>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>> ---
>>>>>>
>>>>>> Just a general comment - it is not enough to describe minimum 
>>>>>> memchunk requirements. With memory hotplug patchset that's 
>>>>>> hopefully getting merged in 18.05, memzones will no longer be 
>>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its 
>>>>>> mempool to not only be populated from a single memzone, but a 
>>>>>> single *physically contiguous* memzone, going by only callbacks 
>>>>>> will not do, because whether or not something should be a single 
>>>>>> memzone says nothing about whether this memzone has to also be 
>>>>>> IOVA-contiguous.
>>>>>>
>>>>>> So i believe this needs to stay in one form or another.
>>>>>>
>>>>>> (also it would be nice to have a flag that a user could pass to 
>>>>>> mempool_create that would force memzone reservation be 
>>>>>> IOVA-contiguous, but that's a topic for another conversation. 
>>>>>> prime user for this would be KNI.)
>>>>>
>>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>>
>>>> Why? It's perfectly reasonable to e.g. implement a software mempool 
>>>> driver that would perform some optimizations due to all objects 
>>>> being in the same VA-contiguous memzone, yet not be dependent on 
>>>> underlying physical memory layout. These are two separate concerns IMO.
>>>
>>> It looks like there is some misunderstanding here or I simply don't 
>>> understand your point.
>>> Above I mean that driver should be able to advertise its requirements 
>>> on IOVA-contiguous regions.
>>> If driver do not care about physical memory layout, no problem.
>>
>> Please correct me if i'm wrong, but my understanding was that you 
>> wanted to use min_chunk as a way to express minimum requirements for 
>> IOVA-contiguous memory. If i understood you correctly, i don't think 
>> that's the way to go because there could be valid use cases where a 
>> mempool driver would like to advertise min_chunk_size to be equal to 
>> its total size (i.e. allocate everything in a single memzone), yet not 
>> require that memzone to be IOVA-contiguous. I think these are two 
>> different concerns, and one does not, and should not imply the other.
> 
> Aha, you're saying that virtual-contiguous and IOVA-contiguous 
> requirements are different things that it could be usecases where 
> virtual contiguous is important but IOVA-contiguos is not required. It 
> is perfectly fine.
> As I understand IOVA-contiguous (physical) typically means 
> virtual-contiguous as well. Requirements to have everything virtually 
> contiguous and some blocks physically contiguous are unlikely. So, it 
> may be reduced to either virtual or physical contiguous. If mempool does 
> not care about physical contiguous at all, MEMPOOL_F_NO_PHYS_CONTIG flag 
> should be used and min_chunk_size should mean virtual contiguous 
> requirements. If mempool requires physical contiguous objects, there is 
> *no* MEMPOOL_F_NO_PHYS_CONTIG flag and min_chunk_size means physical 
> contiguous requirements.
> 

Fair point. I think we're in agreement now :) This will need to be 
documented then.
  
Andrew Rybchenko March 15, 2018, 12:44 p.m. UTC | #8
On 03/15/2018 03:00 PM, Burakov, Anatoly wrote:
> On 15-Mar-18 11:49 AM, Andrew Rybchenko wrote:
>> On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
>>> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>>>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>>>> The callback was introduced to let generic code to know octeontx
>>>>>>>> mempool driver requirements to use single physically contiguous
>>>>>>>> memory chunk to store all objects and align object address to
>>>>>>>> total object size. Now these requirements are met using a new
>>>>>>>> callbacks to calculate required memory chunk size and to populate
>>>>>>>> objects using provided memory chunk.
>>>>>>>>
>>>>>>>> These capability flags are not used anywhere else.
>>>>>>>>
>>>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>>>> be insufficient to describe mempool driver features. If required
>>>>>>>> in the future, API which returns structured information may be
>>>>>>>> added.
>>>>>>>>
>>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>>> ---
>>>>>>>
>>>>>>> Just a general comment - it is not enough to describe minimum 
>>>>>>> memchunk requirements. With memory hotplug patchset that's 
>>>>>>> hopefully getting merged in 18.05, memzones will no longer be 
>>>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its 
>>>>>>> mempool to not only be populated from a single memzone, but a 
>>>>>>> single *physically contiguous* memzone, going by only callbacks 
>>>>>>> will not do, because whether or not something should be a single 
>>>>>>> memzone says nothing about whether this memzone has to also be 
>>>>>>> IOVA-contiguous.
>>>>>>>
>>>>>>> So i believe this needs to stay in one form or another.
>>>>>>>
>>>>>>> (also it would be nice to have a flag that a user could pass to 
>>>>>>> mempool_create that would force memzone reservation be 
>>>>>>> IOVA-contiguous, but that's a topic for another conversation. 
>>>>>>> prime user for this would be KNI.)
>>>>>>
>>>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>>>
>>>>> Why? It's perfectly reasonable to e.g. implement a software 
>>>>> mempool driver that would perform some optimizations due to all 
>>>>> objects being in the same VA-contiguous memzone, yet not be 
>>>>> dependent on underlying physical memory layout. These are two 
>>>>> separate concerns IMO.
>>>>
>>>> It looks like there is some misunderstanding here or I simply don't 
>>>> understand your point.
>>>> Above I mean that driver should be able to advertise its 
>>>> requirements on IOVA-contiguous regions.
>>>> If driver do not care about physical memory layout, no problem.
>>>
>>> Please correct me if i'm wrong, but my understanding was that you 
>>> wanted to use min_chunk as a way to express minimum requirements for 
>>> IOVA-contiguous memory. If i understood you correctly, i don't think 
>>> that's the way to go because there could be valid use cases where a 
>>> mempool driver would like to advertise min_chunk_size to be equal to 
>>> its total size (i.e. allocate everything in a single memzone), yet 
>>> not require that memzone to be IOVA-contiguous. I think these are 
>>> two different concerns, and one does not, and should not imply the 
>>> other.
>>
>> Aha, you're saying that virtual-contiguous and IOVA-contiguous 
>> requirements are different things that it could be usecases where 
>> virtual contiguous is important but IOVA-contiguos is not required. 
>> It is perfectly fine.
>> As I understand IOVA-contiguous (physical) typically means 
>> virtual-contiguous as well. Requirements to have everything virtually 
>> contiguous and some blocks physically contiguous are unlikely. So, it 
>> may be reduced to either virtual or physical contiguous. If mempool 
>> does not care about physical contiguous at all, 
>> MEMPOOL_F_NO_PHYS_CONTIG flag should be used and min_chunk_size 
>> should mean virtual contiguous requirements. If mempool requires 
>> physical contiguous objects, there is *no* MEMPOOL_F_NO_PHYS_CONTIG 
>> flag and min_chunk_size means physical contiguous requirements.
>>
>
> Fair point. I think we're in agreement now :) This will need to be 
> documented then.

OK, I'll do that. I don't mind rebasing my patch series on top of yours,
but I'd like to do it a bit later, when yours is closer to its final
version or even applied - it has really many prerequisites (pre-series)
which should be collected first. These are really major changes.
  
Olivier Matz March 19, 2018, 5:05 p.m. UTC | #9
Hi,

On Thu, Mar 15, 2018 at 03:44:34PM +0300, Andrew Rybchenko wrote:

[...]

> > > Aha, you're saying that virtual-contiguous and IOVA-contiguous
> > > requirements are different things that it could be usecases where
> > > virtual contiguous is important but IOVA-contiguos is not required.
> > > It is perfectly fine.
> > > As I understand IOVA-contiguous (physical) typically means
> > > virtual-contiguous as well. Requirements to have everything
> > > virtually contiguous and some blocks physically contiguous are
> > > unlikely. So, it may be reduced to either virtual or physical
> > > contiguous. If mempool does not care about physical contiguous at
> > > all, MEMPOOL_F_NO_PHYS_CONTIG flag should be used and min_chunk_size
> > > should mean virtual contiguous requirements. If mempool requires
> > > physical contiguous objects, there is *no* MEMPOOL_F_NO_PHYS_CONTIG
> > > flag and min_chunk_size means physical contiguous requirements.

Just as a side note, from what I understood, having VA="contiguous" and
IOVA="don't care" would be helpful for mbuf pools with mellanox drivers
because they perform better in that case.
  
Olivier Matz March 19, 2018, 5:06 p.m. UTC | #10
On Sat, Mar 10, 2018 at 03:39:36PM +0000, Andrew Rybchenko wrote:
> The callback was introduced to let generic code to know octeontx
> mempool driver requirements to use single physically contiguous
> memory chunk to store all objects and align object address to
> total object size. Now these requirements are met using a new
> callbacks to calculate required memory chunk size and to populate
> objects using provided memory chunk.
> 
> These capability flags are not used anywhere else.
> 
> Restricting capabilities to flags is not generic and likely to
> be insufficient to describe mempool driver features. If required
> in the future, API which returns structured information may be
> added.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Looks fine...


> --- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
> +++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
> @@ -126,14 +126,29 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
>  	return octeontx_fpa_bufpool_free_count(pool);
>  }
>  
> -static int
> -octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
> -				unsigned int *flags)
> +static ssize_t
> +octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
> +			     uint32_t obj_num, uint32_t pg_shift,
> +			     size_t *min_chunk_size, size_t *align)
>  {
> -	RTE_SET_USED(mp);
> -	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
> -			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
> -	return 0;
> +	ssize_t mem_size;
> +
> +	/*
> +	 * Simply need space for one more object to be able to
> +	 * fullfil alignment requirements.
> +	 */

...ah, just one typo:

  fullfil -> fulfil or fulfill
  

Patch

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c06fc67..4deed9a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -70,7 +70,6 @@  Deprecation Notices
 
   The following changes are planned:
 
-  - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index abaefe5..c50f26c 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -66,6 +66,14 @@  API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Removed mempool capability flags and related functions.**
+
+  Flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
+  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by octeontx mempool
+  driver to customize generic mempool library behaviour.
+  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
+  used to achieve it without specific knowledge in the generic code.
+
 
 ABI Changes
 -----------
@@ -86,6 +94,9 @@  ABI Changes
   to allow to customize required memory size calculation.
   A new callback ``populate`` has been added to ``rte_mempool_ops``
   to allow to customize objects population.
+  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
+  since its features are covered by ``calc_mem_size`` and ``populate``
+  callbacks.
 
 
 Removed Items
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d143d05..f2c4f6a 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -126,14 +126,29 @@  octeontx_fpavf_get_count(const struct rte_mempool *mp)
 	return octeontx_fpa_bufpool_free_count(pool);
 }
 
-static int
-octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
-				unsigned int *flags)
+static ssize_t
+octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
+			     uint32_t obj_num, uint32_t pg_shift,
+			     size_t *min_chunk_size, size_t *align)
 {
-	RTE_SET_USED(mp);
-	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
-			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
-	return 0;
+	ssize_t mem_size;
+
+	/*
+	 * Simply need space for one more object to be able to
+	 * fullfil alignment requirements.
+	 */
+	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
+							pg_shift,
+							min_chunk_size, align);
+	if (mem_size >= 0) {
+		/*
+		 * Memory area which contains objects must be physically
+		 * contiguous.
+		 */
+		*min_chunk_size = mem_size;
+	}
+
+	return mem_size;
 }
 
 static int
@@ -150,6 +165,33 @@  octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
 }
 
+static int
+octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
+			void *vaddr, rte_iova_t iova, size_t len,
+			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+					       obj_cb, obj_cb_arg);
+}
+
 static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.name = "octeontx_fpavf",
 	.alloc = octeontx_fpavf_alloc,
@@ -157,8 +199,9 @@  static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
+	.calc_mem_size = octeontx_fpavf_calc_mem_size,
+	.populate = octeontx_fpavf_populate,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index ed0e982..fdcda45 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -208,15 +208,9 @@  rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -240,18 +234,12 @@  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	rte_iova_t start, end;
 	uint32_t iova_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	/* if iova is NULL, assume contiguous memory */
 	if (iova == NULL) {
@@ -330,8 +318,6 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
-	unsigned total_elt_sz;
-	unsigned int mp_capa_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -354,27 +340,6 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	/* Get mempool capabilities */
-	mp_capa_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_capa_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_capa_flags;
-
-	/* Detect pool area has sufficient space for elements */
-	if (mp_capa_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
-			RTE_LOG(ERR, MEMPOOL,
-				"pool area %" PRIx64 " not enough\n",
-				(uint64_t)len);
-			return -ENOSPC;
-		}
-	}
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -386,10 +351,7 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp_capa_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
-		/* align object start address to a multiple of total_elt_sz */
-		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
-	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 49083bd..cd3b229 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -245,24 +245,6 @@  struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
-/**
- * This capability flag is advertised by a mempool handler, if the whole
- * memory area containing the objects must be physically contiguous.
- * Note: This flag should not be passed by application.
- */
-#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
-/**
- * This capability flag is advertised by a mempool handler. Used for a case
- * where mempool driver wants object start address(vaddr) aligned to block
- * size(/ total element size).
- *
- * Note:
- * - This flag should not be passed by application.
- *   Flag used for mempool driver only.
- * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
- *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
- */
-#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -388,12 +370,6 @@  typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Get the mempool capabilities.
- */
-typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
-		unsigned int *flags);
-
-/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -433,13 +409,7 @@  typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
- * If mempool driver requires object addresses to be block size aligned
- * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
- * reserved to be able to meet the requirement.
- *
- * Minimum size of memory chunk is either all required space, if
- * capabilities say that whole memory area must be physically contiguous
- * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * Minimum size of memory chunk is a maximum of the page size and total
  * element size.
  *
  * Required memory chunk alignment is a maximum of page size and cache
@@ -515,10 +485,6 @@  struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Get the mempool capabilities
-	 */
-	rte_mempool_get_capabilities_t get_capabilities;
-	/**
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
@@ -644,22 +610,6 @@  unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops get_capabilities callback.
- *
- * @param mp [in]
- *   Pointer to the memory pool.
- * @param flags [out]
- *   Pointer to the mempool flags.
- * @return
- *   - 0: Success; The mempool driver has advertised his pool capabilities in
- *   flags param.
- *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
- *   - Otherwise, pool create fails.
- */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags);
-/**
  * @internal wrapper for mempool_ops register_memory_area callback.
  * API to notify the mempool handler when a new memory area is added to pool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 1a7f39f..6ac669a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@  rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
@@ -99,19 +98,6 @@  rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to get external mempool capabilities. */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
-	return ops->get_capabilities(mp, flags);
-}
-
 /* wrapper to notify new memory area to external mempool */
 int
 rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57295f7..3defc15 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -11,26 +11,15 @@  rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     uint32_t obj_num, uint32_t pg_shift,
 				     size_t *min_chunk_size, size_t *align)
 {
-	unsigned int mp_flags;
-	int ret;
 	size_t total_elt_sz;
 	size_t mem_size;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags | mp_flags);
+					 mp->flags);
 
-	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
-		*min_chunk_size = mem_size;
-	else
-		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
 	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 90e79ec..42ca4df 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@  DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_get_capabilities;
 	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;