[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

Hunt, David david.hunt at intel.com
Wed Jun 1 12:46:20 CEST 2016



On 5/31/2016 10:11 PM, Jerin Jacob wrote:
> On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
>> Hi,
>>
>> On 05/31/2016 06:03 PM, Jerin Jacob wrote:
>>> On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
>>>>
>>>> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
>>>>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>>>>>> New mempool handlers will use rte_mempool_create_empty(),
>>>>>> rte_mempool_set_handler(),
>>>>>> then rte_mempool_populate_*(). These three functions are new to this
>>>>>> release, so there is no problem.
>>>>> Having separate APIs for external pool-manager creation is worrisome from an
>>>>> application perspective. Is it possible to have rte_mempool_[xmem]_create
>>>>> for both the external and the existing SW pool managers, and make
>>>>> rte_mempool_create_empty and rte_mempool_populate_* internal functions?
>>>>>
>>>>> IMO, We can do that by selecting  specific rte_mempool_set_handler()
>>>>> based on _flags_ encoding, something like below
>>>>>
>>>>> bits 0-15   // generic bits used across all the pool managers
>>>>> bits 16-23  // pool-handler-specific flag bits
>>>>> bits 24-31  // to select the specific pool manager (up to 256 different
>>>>> flavors of pool managers; for backward compatibility, '0' in bits 24-31
>>>>> selects the existing SW pool manager)
>>>>>
>>>>> and applications can choose the handlers by selecting the flag in
>>>>> rte_mempool_[xmem]_create. That way it will be easy in testpmd or any other
>>>>> application to choose the pool handler from the command line etc. in future.
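A minimal sketch of the flag encoding proposed above (the macro name
MEMPOOL_F_HANDLER and the handler index value are hypothetical; only the
bit layout comes from the proposal):

    /* bits 0-15: generic flags; bits 16-23: handler-specific flags;
     * bits 24-31: pool manager index, 0 = existing SW manager. */
    #define MEMPOOL_F_HANDLER_SHIFT 24
    #define MEMPOOL_F_HANDLER(idx) \
            (((uint32_t)(idx) & 0xffu) << MEMPOOL_F_HANDLER_SHIFT)

    /* An application would then pick a handler at create time, e.g.
     * a hypothetical manager registered at index 2: */
    struct rte_mempool *mp;

    mp = rte_mempool_create("pool", 8192, 2048, 256, 0,
                            NULL, NULL, NULL, NULL, rte_socket_id(),
                            MEMPOOL_F_HANDLER(2));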
>>>> There might be issues with the 8-bit handler number, as we'd have to add
>>>> an API call to first get the index of a given handler by name, then OR it
>>>> into the flags. That would also mean extra API calls for the non-default
>>>> external handlers. I do agree with the handler-specific bits though.
>>> That would be an internal API (mapping the upper 8 bits to a handler
>>> name), right? Seems OK to me.
>>>
>>>> Having the _empty and _set_handler APIs seems OK to me for the
>>>> moment. Maybe Olivier could comment?
>>>>
>>> But that needs 3 APIs, right? _empty, _set_handler and _populate? I believe
>>> it is better to reduce the public API in the spec wherever possible.
>>>
>>> Maybe Olivier could comment ?
>> Well, I think having 3 different functions is not a problem if the API
>> is clearer.
>>
>> In my opinion, the following:
>> 	rte_mempool_create_empty()
>> 	rte_mempool_set_handler()
>> 	rte_mempool_populate()
>>
>> is clearer than:
>> 	rte_mempool_create(15 args)
> But the proposed scheme does not add any new arguments to
> rte_mempool_create. It just extends the existing flags.
>
> rte_mempool_create(15 args) is still there as the API for internal pool
> creation.
>
>> Splitting the flags into 3 groups, with one not being flags but a
>> pool handler number, looks overcomplicated from a user perspective.
> I am concerned with seamless integration with existing applications.
> IMO, it's not worth having separate functions for external vs. internal
> pool creation from the application's perspective (now each and every
> application has to add this logic everywhere for no good reason), just
> my 2 cents.

I think there is always going to be some extra code in the applications
that want to use an external mempool. The _set_handler approach does
create, set_handler, populate. The flags method queries the handler list to
get the index, sets the flag bits, then calls create. Both methods will
work.

But I think the _set_handler approach is more user friendly, so that
is the method I would lean towards.
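For reference, a rough sketch of the _set_handler flow being discussed
(function signatures follow the patch set under review; "custom_handler"
is a placeholder for a registered handler name, and this assumes it runs
inside an init function that can return an error):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("pool", 8192, 2048, 256, 0,
                                  rte_socket_id(), 0);
    if (mp == NULL)
            return -1;

    /* attach the external handler by name before populating */
    if (rte_mempool_set_handler(mp, "custom_handler") < 0)
            return -1;

    if (rte_mempool_populate_default(mp) < 0)
            return -1;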

>>>>> and we can remove the "mbuf: get default mempool handler from configuration"
>>>>> change-set, OR, if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined, just set
>>>>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>>>>>
>>>>> What do you think?
>>>> The "configuration" patch is to allow users to quickly change the mempool
>>>> handler
>>>> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
>>>> handler. It could just as easily be left out and use the rte_mempool_create.
>>>>
>>> Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
>>> would be better if, by default, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
>>> defined in the config, and for a quick change developers can build
>>> with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
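A sketch of how such a build-time default might be applied internally
(illustrative only; the helper name is hypothetical and "ring_mp_mc" as
the fallback SW handler name is an assumption):

    /* Pick the compile-time default handler when one is configured,
     * otherwise fall back to the existing SW (ring-based) manager. */
    static int
    mempool_apply_default_handler(struct rte_mempool *mp)
    {
    #ifdef RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
            return rte_mempool_set_handler(mp,
                            RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
    #else
            return rte_mempool_set_handler(mp, "ring_mp_mc");
    #endif
    }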
>> My understanding of the compile-time configuration option was
>> to allow a specific architecture to define a specific hw-assisted
>> handler by default.
>>
>> Indeed, if there is no such need for now, we may remove it. But
>> we need a way to select another handler, at least in test-pmd
>> (in command line arguments?).
> Like txflags in testpmd, IMO, mempool flags will help to select the handlers
> seamlessly, as suggested above.
>
> If we are _not_ taking the flags-based selection scheme, then it makes sense
> to keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

see comment above

>>>>>>> 2) IMO, it is better to change void *pool in struct rte_mempool to
>>>>>>> an anonymous union type, something like below, so that the mempool
>>>>>>> implementation can choose the best type.
>>>>>>> 	union {
>>>>>>> 		void *pool;
>>>>>>> 		uint64_t val;
>>>>>>> 	};
>>>>>> Could we do this by using the union for the *pool_config suggested above?
>>>>>> Would that give you what you need?
>>>>> It would be extra overhead for an external pool manager to _alloc_ memory
>>>>> and store the allocated pointer in the mempool struct (as *pool) and use
>>>>> pool for pointing to other data structures, as some implementations need
>>>>> only a limited number of bytes to store the external pool manager specific
>>>>> context.
>>>>>
>>>>> In order to fix this problem, we may classify fast-path and slow-path
>>>>> elements in struct rte_mempool, move all fast-path elements into the first
>>>>> cache line, and create an empty opaque space in the remaining bytes of the
>>>>> cache line, so that if the external pool manager needs only limited space
>>>>> then it is not required to allocate separate memory to save the
>>>>> per-core cache in the fast path.
>>>>>
>>>>> something like below,
>>>>> union {
>>>>> 	void *pool;
>>>>> 	uint64_t val;
>>>>> 	uint8_t extra_mem[16]; // available free bytes in fast-path cache line
>>>>> };
>>>> Something for the future, perhaps? Will the 8 bits in the flags suffice for
>>>> now?
>>> OK. But a simple anonymous union of the same type should be OK to add now?
>>> Not much change, I believe. If it's difficult then postpone it.
>>>
>>> union {
>>> 	void *pool;
>>> 	uint64_t val;
>>> };
>> I'm ok with the simple union with (void *) and (uint64_t).
>> Maybe "val" should be replaced by something more appropriate.
>> Is "pool_id" a better name?
> How about "opaque"?

I think I would lean towards pool_id in this case.
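To make the agreed outcome concrete, a sketch of how the field could end
up looking in struct rte_mempool (surrounding fields elided; illustrative
only, assuming the anonymous-union extension DPDK already relies on):

    struct rte_mempool {
            /* ... other fields ... */
            union {
                    void *pool;        /* pointer-based handler context */
                    uint64_t pool_id;  /* id-based context, e.g. a HW pool id */
            };
            /* ... other fields ... */
    };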


Regards,
David.

