[dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device

santosh santosh.shukla at caviumnetworks.com
Mon Oct 9 11:19:42 CEST 2017


On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> 09/10/2017 07:46, santosh:
>> On Monday 09 October 2017 10:31 AM, santosh wrote:
>>> Hi Thomas,
>>>
>>>
>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
>>>> 08/10/2017 14:40, Santosh Shukla:
>>>>> This commit adds a section to the docs listing the mempool
>>>>> device PMDs available.
>>>> It is confusing to add a mempool guide, given that we already have
>>>> a mempool section in the programmer's guide:
>>>> 	http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
>>>>
>>>> And we will probably need also some doc for bus drivers.
>>>>
>>>> I think it would be more interesting to create a platform guide
>>>> where you can describe the bus and the mempool.
>>>> OK for doc/guides/platform/octeontx.rst ?
>>> No Strong opinion,
>>>
>>> But IMO, the idea of introducing a mempool PMD section was inspired by
>>> eventdev, which I find pretty organized.
>>>
>>> Yes, we have the mempool_lib guide, but that is more about the common mempool
>>> layer details like the API, structure layout, etc. I wanted
>>> to add a guide which tells about the mempool PMDs and their capabilities,
>>> if any; that's why I included octeontx as a starter and was thinking
>>> that other external-mempool PMDs like dpaa/dpaa2 and the sw ring pmd may come
>>> later.
> Yes sure it is interesting.
> The question is to know if mempool drivers make sense in their own guide
> or if it's better to group them with all related platform specifics.

I vote for keeping them in their own guide, just like eventdev/cryptodev,
which keep the vendor-specific PMDs under one roof (both s/w and h/w).

>>> If the above doesn't make sense, then I will follow Thomas' proposition
>>> and propose a patch.
>>>
>>> Thoughts?
>>>
>> Additional input:
>>
>> A mempool PMD can logically work across NICs... that could be a reason
>> not to place it under platform/octeontx or platform/dpaa, etc.
> I don't understand. OcteonTx mempool works only on OcteonTX?

It can work with other external PCIe NICs, though the current PMD doesn't support that.

> Are you saying that OcteonTX can be managed as a device?
>
Yes.
For example:
We have a standalone test application for mempool, for test purposes,
so we can test the mempool device on its own. If the user gives the
'octeontx_fpavf' pool handle, then the test works just like it does for
the s/w ring (a rough sketch of what I mean is below).
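
Roughly what I mean, as a minimal sketch (not the actual test code; the
function name, pool name and sizes below are made up for illustration):
the test just creates an empty pool, binds it to whatever handler name the
user passes in, e.g. "octeontx_fpavf" or the default "ring_mp_mc", and then
populates it.

#include <rte_mempool.h>

static struct rte_mempool *
create_test_pool(const char *ops_name)
{
	struct rte_mempool *mp;

	/* allocate only the pool header, no objects yet */
	mp = rte_mempool_create_empty("test_pool", 8192, 2048,
				      256, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/* bind the pool to the requested handler, e.g.
	 * "octeontx_fpavf" (HW FPA) or "ring_mp_mc" (SW ring) */
	if (rte_mempool_set_ops_byname(mp, ops_name, NULL) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* populate objects through the selected handler */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}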

BTW: a HW mempool offload behaves just like the ring handler, for example;
the only difference is that buffer management is offloaded. Having said that,
in theory the application should be agnostic to whether an offload mempool
driver or a s/w pool driver is behind it (see the small get/put snippet below).
(my 2 cents).
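
To illustrate the "agnostic" point (snippet only; 'mp' here would be a pool
created as in the sketch above): the application-facing calls are identical
whichever handler backs the pool, and the HW vs SW difference is resolved
inside the mempool ops.

	void *obj;

	if (rte_mempool_get(mp, &obj) == 0) {
		/* ... use the buffer ... */
		rte_mempool_put(mp, obj);
	}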

Thanks.

>> IMO, Its worth adding a new section for mempool PMD.
>>
>> Thoughts?
>>
>> Regards,
>>
>>>> I choose to integrate this series without this last patch.
>>>> I mark this patch as rejected.
>>>> Please submit a new one separately.
>>>>
>>>>> It then adds the octeontx fpavf mempool PMD to the listed mempool
>>>>> devices.
>>>>>
>>>>> Cc: John McNamara <john.mcnamara at intel.com>
>>>>>
>>>>> Signed-off-by: Santosh Shukla <santosh.shukla at caviumnetworks.com>
>>>>> Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
>>>>> Reviewed-by: John McNamara <john.mcnamara at intel.com>
>>>>> ---
>>>> [...]
>>>>> --- a/MAINTAINERS
>>>>> +++ b/MAINTAINERS
>>>>> @@ -340,6 +340,13 @@ F: drivers/net/liquidio/
>>>>>  F: doc/guides/nics/liquidio.rst
>>>>>  F: doc/guides/nics/features/liquidio.ini
>>>>>  
>>>>> +Cavium Octeontx Mempool
>>>>> +M: Santosh Shukla <santosh.shukla at caviumnetworks.com>
>>>>> +M: Jerin Jacob <jerin.jacob at caviumnetworks.com>
>>>>> +F: drivers/mempool/octeontx
>>>> A slash is missing at the end of the directory.
>>>>
>>>> Until now, the mempool and bus drivers are listed with net drivers.
>>>> We could move them in a platform section later.
>>>> For now, let's put it as "Cavium OcteonTX" in net drivers.
>>>>
>>>> I fixed and merged it with the first patch.
>>> Thanks.
>>>
>>> IMO, for the MAINTAINERS file:
>>> Just like we have an entry for "Eventdev Driver" with all the
>>> vendor-specific PMDs sitting underneath it, I was thinking to
>>> introduce a "Mempool Drivers" section such that all the
>>> external mempool PMDs + the s/w PMD (example: ring) sit underneath.
>>>
>>> thoughts?
> No need to move SW mempool drivers in a different section.
> They are maintained by Olivier with the mempool core code.
>
> I have the feeling that all platform specific stuff
> (bus, mempool, makefile and config file) are maintained by
> the same persons.
> I think it is easier to know who contact for issues with a platform.


