[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

Hunt, David david.hunt at intel.com
Thu Jun 9 14:55:54 CEST 2016



On 9/6/2016 12:41 PM, Shreyansh Jain wrote:
> Hi David,
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt at intel.com]
>> Sent: Thursday, June 09, 2016 3:10 PM
>> To: Shreyansh Jain <shreyansh.jain at nxp.com>; dev at dpdk.org
>> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
>> jerin.jacob at caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -----Original Message-----
>>>> From: Hunt, David [mailto:david.hunt at intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain <shreyansh.jain at nxp.com>; dev at dpdk.org
>>>> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
>>>> jerin.jacob at caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for the overly-eager email sent on this thread earlier. Will
>>>>> be more careful in future.)
>>>>> This is more of a question/clarification than a comment. (And I have
>>>>> taken only some snippets from the original mail to keep it cleaner.)
>>>>> <snip>
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> <snip>
>>>>>
>>>>> From the above, what I understand is that multiple packet pool handlers
>>>>> can be created.
>>>>> I have a use-case where the application has multiple pools but only the
>>>>> packet pool is hardware-backed. Using the hardware for general buffer
>>>>> requirements would prove costly.
>>>>> From what I understand from the patch, selection of the pool is based on
>>>>> the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers, for
>>>> backward compatibility, through the rte_mempool_create call. If you wish
>>>> to use a mempool handler that is not one of the defaults (i.e. a new
>>>> hardware handler), you would use rte_mempool_create_empty followed by
>>>> the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create an empty mempool, then set
>>>> the operations (ops) for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to the 'rte_mempool_create_empty ->
>>> rte_mempool_set_ops_byname' model would require modifications to these
>>> applications.
>>> Ideally, without any modifications, these applications should be able to
>>> use packet pools (backed by hardware) and buffer pools (backed by
>>> ring/others) - transparently.
>>> If I go by your suggestion, what I understand is that doing the above
>>> without modifying the applications would be equivalent to:
>>>     struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>>     CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> so that calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring-based).
> Agree with you.
> But some applications continue to use rte_mempool_create for allocating
> packet pools. Thus, even with a custom handler available (which, most
> probably, would be a hardware packet buffer handler), the application
> would unintentionally end up not using it.
> Probably, such applications should be changed? (e.g. pipeline)

Yes, agreed.  If those applications need to use external mempool 
handlers, then they should be changed. I would see that as outside the 
scope of this patchset.
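
For reference, the change in such an application would be along these
lines (an untested sketch against this patchset; "custom_hw_allocator",
NUM_MBUFS, MBUF_SIZE and CACHE_SIZE are placeholders, not names defined
by the patch):

    /* Replaces a direct rte_mempool_create() call; needs rte_mempool.h
     * and rte_mbuf.h. MBUF_SIZE stands for sizeof(struct rte_mbuf) plus
     * private area and data room. */
    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("pkt_pool", NUM_MBUFS, MBUF_SIZE,
            CACHE_SIZE, sizeof(struct rte_pktmbuf_pool_private),
            rte_socket_id(), 0);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mempool\n");

    /* Select the external (hardware) handler instead of the default
     * ring-based ops; must happen before the pool is populated. */
    if (rte_mempool_set_ops_byname(mp, "custom_hw_allocator") < 0)
        rte_exit(EXIT_FAILURE, "handler not registered\n");

    rte_pktmbuf_pool_init(mp, NULL);
    if (rte_mempool_populate_default(mp) < 0)
        rte_exit(EXIT_FAILURE, "cannot populate mempool\n");
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

This is essentially what rte_pktmbuf_pool_create() does internally, with
the set_ops_byname() call swapped in for the default.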


>>> But another problem arises here.
>>>
>>> There are two distinct paths for allocation of a memory pool:
>>> 1. A 'pkt' pool:
>>>      rte_pktmbuf_pool_create
>>>        \- rte_mempool_create_empty
>>>        |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>>        |
>>>        `- rte_mempool_set_ops_byname
>>>              (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>>              /* Override default 'ring_mp_mc' of
>>>               * rte_mempool_create */
>>>
>>> 2. Through generic mempool create API
>>>      rte_mempool_create
>>>        \- rte_mempool_create_empty
>>>              (passing pktmbuf and pool constructors)
>>>
>>> I found various instances in example applications where
>>> rte_mempool_create() is being called directly for packet pools -
>>> bypassing the more semantically correct call to rte_pktmbuf_* for
>>> packet pools.
>>> In the (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able
>>> to replace custom handler operations for packet buffer allocations.
>>> From a performance point of view, applications should be able to select
>>> between packet pools and non-packet pools.
>>
>> This is intended for backward compatibility, and API consistency. Any
>> applications that use rte_mempool_create directly will continue to use
>> the default mempool handlers. If they need to use a custom handler, they
>> will need to be modified to call the newer API,
>> rte_mempool_create_empty and rte_mempool_set_ops_byname.
> My understanding was that applications should be oblivious to how their
> pools are managed, except that they do understand packet pools should be
> faster (or accelerated) than non-packet pools.
> (Of course, some applications may be designed to explicitly take
> advantage of an available handler through
> rte_mempool_create_empty => rte_mempool_set_ops_byname calls.)
>
> From that perspective, I was expecting that applications should be
> calling:
>   -> rte_pktmbuf_* for all packet-related operations
>   -> rte_mempool_* for non-packet or explicit hardware handlers
>
> And leave the rest of the mempool handler related magic to the DPDK
> framework.

I think there is still some work on the applications side to know
whether to use an external or the internal (default) handler for
particular mempools.

I'll propose something based on the further comments in the other part
of this thread and the feedback so far.
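
In the meantime, one option on the application side would be to try the
external handler first and fall back to the default. A minimal sketch,
assuming rte_mempool_set_ops_byname() returns a negative errno for an
unregistered name, and with "custom_hw_allocator" as a hypothetical
handler name:

    /* Try the hardware handler; fall back to the compile-time default
     * (ring-based) ops if it was not registered on this platform. */
    if (rte_mempool_set_ops_byname(mp, "custom_hw_allocator") < 0)
        rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);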

Regards,
Dave.

>
>>
>>>>> <snip>
>>>>>> +	/*
>>>>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>>>> +	 * set the correct index into the table of ops structs.
>>>>>> +	 */
>>>>>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>>>> +	else if (flags & MEMPOOL_F_SP_PUT)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>>>> +	else if (flags & MEMPOOL_F_SC_GET)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>>>> +	else
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>>>> +
>>> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
>>> which, if specified, would:
> I read through some previous discussions and realized that something
> similar [1] had already been proposed earlier.
> I didn't want to hijack this thread with an old discussion - it was
> unintentional.
>
> [1] http://article.gmane.org/gmane.comp.networking.dpdk.devel/39803
>
> But [1] would make the distinction of the *type* of pool and its
> corresponding handler, whether default or external/custom, quite clear.

I can incorporate a bit in the flags (MEMPOOL_F_PKT_ALLOC) as you
suggest, which would allow the rte_mempool_create calls to use a custom
handler. Roughly, the flag check quoted above would gain one more case,
as in the sketch below.
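
A sketch of that check (assuming the new flag takes priority over the
SP/SC combinations; the exact name and precedence are still open):

    /* Proposed: MEMPOOL_F_PKT_ALLOC selects the configured packet pool
     * handler; otherwise fall through to the existing ring variants. */
    if (flags & MEMPOOL_F_PKT_ALLOC)
        rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
    else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
        rte_mempool_set_ops_byname(mp, "ring_sp_sc");
    else if (flags & MEMPOOL_F_SP_PUT)
        rte_mempool_set_ops_byname(mp, "ring_sp_mc");
    else if (flags & MEMPOOL_F_SC_GET)
        rte_mempool_set_ops_byname(mp, "ring_mp_sc");
    else
        rte_mempool_set_ops_byname(mp, "ring_mp_mc");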
