[dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool

Shreyansh Jain shreyansh.jain at nxp.com
Mon Sep 11 15:10:41 CEST 2017


Hello Pablo,

I have a comment inline:

On Monday 11 September 2017 04:38 PM, De Lara Guarch, Pablo wrote:
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Akhil Goyal
>> Sent: Wednesday, August 30, 2017 9:31 AM
>> To: De Lara Guarch, Pablo <pablo.de.lara.guarch at intel.com>; Doherty,
>> Declan <declan.doherty at intel.com>; Trahe, Fiona
>> <fiona.trahe at intel.com>; Jain, Deepak K <deepak.k.jain at intel.com>;
>> Griffin, John <john.griffin at intel.com>;
>> jerin.jacob at caviumnetworks.com; hemant.agrawal at nxp.com
>> Cc: dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single
>> mempool
>>
>> Hi Pablo,
>> On 8/18/2017 1:35 PM, Pablo de Lara wrote:
>>> In order to improve memory utilization, a single mempool is created,
>>> containing the crypto operation and mbufs (one if operation is
>>> in-place, two if out-of-place).
>>> This way, a single object is allocated and freed per operation,
>>> reducing the amount of memory in cache, which improves scalability.
>>>
>>> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch at intel.com>
>>> ---
>>>    app/test-crypto-perf/cperf_ops.c             |  96 ++++++--
>>>    app/test-crypto-perf/cperf_ops.h             |   2 +-
>>>    app/test-crypto-perf/cperf_test_latency.c    | 350 ++++++++++++--------------
>>>    app/test-crypto-perf/cperf_test_throughput.c | 347 ++++++++++++--------------
>>>    app/test-crypto-perf/cperf_test_verify.c     | 356 ++++++++++++---------------
>>>    5 files changed, 553 insertions(+), 598 deletions(-)
>>>
>> NACK.
>> This patch replaces rte_pktmbuf_pool_create with rte_mempool_create
>> for mbufs, which is not the preferred way to allocate memory for
>> pktmbufs.
>>
>> No example/test application in DPDK should be using this, as this
>> kind of usage will not be compatible with all DPDK drivers in general.
>>
>> This kind of usage of rte_mempool_create will not work for any
>> devices using hw-offloaded memory pools for pktmbufs.
>> One such example is dpaa2.
> 
> Hi Akhil,
> 
> Sorry for the delay on this reply and thanks for the review.
> 
> I think, since we are not getting the buffers from the NIC but are allocating
> them ourselves, it is not strictly required to call rte_pktmbuf_pool_create.
> In the end, we only need them as memory for the crypto PMDs and we are not touching
> anything in them, so I think calling rte_mempool_create should work ok.
> Having a single mempool would be much more performant and would avoid the scalability
> issues that we are having in this application now, and since this application
> was created to test crypto PMD performance, I think it is worth trying this out.
> 
> What exactly is needed for dpaa2? Is it the mempool handler?

If I recall correctly:
This is the call flow when rte_pktmbuf_pool_create is called:
  - rte_pktmbuf_pool_create
    `-> rte_mempool_create_empty
        `-> allocate and fill mempool object with defaults
    `-> rte_mempool_set_ops_byname
        `-> sets mempool handler to RTE_MBUF_DEFAULT_MEMPOOL_OPS
    `-> rte_mempool_populate_default
        `-> calls pool handler specific enqueue/dequeue

but that of rte_mempool_create is:
  - rte_mempool_create
    `-> rte_mempool_create_empty
        `-> allocate and fill mempool object with defaults
    `-> rte_mempool_set_ops_byname
        `-> set to one of ring_*_*
            No check/logic for the handler defined in the build
            configuration (RTE_MBUF_DEFAULT_MEMPOOL_OPS)
    `-> rte_mempool_populate_default
        `-> calls ring* handler specific enqueue/dequeue

Calling rte_mempool_create bypasses the check for any mempool handler 
configured through the build system.
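
To make that concrete, something along these lines would keep the handler
selection intact while still building the pool manually. This is just a
rough sketch mirroring the rte_pktmbuf_pool_create sequence; the helper
name, sizes and error handling are arbitrary, not code from the patch:

#include <stdint.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

/*
 * Sketch only: follow the same sequence as rte_pktmbuf_pool_create(),
 * so the handler configured at build time (RTE_MBUF_DEFAULT_MEMPOOL_OPS)
 * is selected before the pool is populated.
 */
static struct rte_mempool *
create_pool_with_default_ops(const char *name, unsigned int n,
		unsigned int cache_size, uint16_t data_room_size,
		int socket_id)
{
	struct rte_pktmbuf_pool_private priv = {
		.mbuf_data_room_size = data_room_size,
		.mbuf_priv_size = 0,
	};
	unsigned int elt_size = sizeof(struct rte_mbuf) + data_room_size;
	struct rte_mempool *mp;

	/* 1. Allocate an empty mempool, no objects yet. */
	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
			sizeof(struct rte_pktmbuf_pool_private),
			socket_id, 0);
	if (mp == NULL)
		return NULL;

	/* 2. Select the handler *before* populating; this is the step
	 * that plain rte_mempool_create() does not do. */
	if (rte_mempool_set_ops_byname(mp,
			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_pktmbuf_pool_init(mp, &priv);

	/* 3. Populate through the chosen handler and initialize each
	 * object as an mbuf. */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;
}

The important part is that rte_mempool_set_ops_byname runs before
rte_mempool_populate_default, so the objects end up in whatever backend
RTE_MBUF_DEFAULT_MEMPOOL_OPS selects.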

> Would it work for you if I create the mempool in a similar way to what
> rte_pktmbuf_pool_create does, i.e. calling rte_mempool_set_ops_byname?

Yes, but that would mean using the combination of 
rte_mempool_create_empty and rte_mempool_set_ops_byname which, 
effectively, would be equivalent to using rte_pktmbuf_pool_create.

Calling rte_mempool_set_ops_byname on a mempool created by 
rte_mempool_create would mean changing the enqueue/dequeue operations 
*after* the mempool has been populated. That would be incorrect.
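
For example (again just a sketch, not code from the patch, with made-up
pool name and sizes), this is the ordering that cannot work:

#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

static void
wrong_order_example(void)
{
	struct rte_pktmbuf_pool_private priv = {
		.mbuf_data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE,
		.mbuf_priv_size = 0,
	};

	/* rte_mempool_create() picks a ring_* handler internally and
	 * populates the pool in the same call. */
	struct rte_mempool *mp = rte_mempool_create("crypto_pool", 8192,
			sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
			256, sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, &priv,
			rte_pktmbuf_init, NULL,
			rte_socket_id(), 0);

	/* Too late: the pool is already populated through the ring
	 * handler, so (to my understanding) this call simply fails
	 * instead of switching to the configured handler. */
	if (mp != NULL)
		rte_mempool_set_ops_byname(mp,
				RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
}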

I am not sure what the intent is - whether these buffers should be 
allowed to be offloaded to hardware. If yes, then rte_mempool_create 
wouldn't help.

> 
> Thanks!
> Pablo
> 
> 
>>
>> -Akhil


