[dpdk-users] Memory requirements for crypto devices (QAT and AESNI) (using DPDK-17.02)

Trahe, Fiona fiona.trahe at intel.com
Tue May 9 17:52:36 CEST 2017


Hi Chinmaya,

> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Chinmaya Dwibedy
> Sent: Monday, May 8, 2017 2:54 PM
> To: users at dpdk.org
> Subject: Re: [dpdk-users] Memory requirements for crypto devices (QAT and
> AESNI) (using DPDK-17.02)
> 
> Hi,
> 
> Can anyone please respond to this email ? Thank you in advance for your
> suggestion and time.
> 
> Regards,
> Chinmaya
> 
> On Fri, May 5, 2017 at 6:20 PM, Chinmaya Dwibedy <ckdwibedy at gmail.com>
> wrote:
> 
> > Hi All,
> >
> >
> > We are using DPDK-17.02.  We use crypto via hardware (QAT) and software
> > acceleration (AESNI).  There is a one-to-one mapping between crypto device
> > and worker core. What are the memory requirements for the following?
> >
> > 1) Creation of one physical Crypto device.
> >
> > 2) Creation of one AESNI MB virtual Crypto device.
> >
> > Thereafter we configure each device with its default number of queue
> > pairs, as shown below.
> >
> >
> > #define CDEV_MP_CACHE_SZ 64
> >
> > rte_cryptodev_info_get(cdev_id, &info);
> >
> > dev_conf.nb_queue_pairs = info.max_nb_queue_pairs;
> > dev_conf.session_mp.nb_objs = info.sym.max_nb_sessions;
> > dev_conf.socket_id = SOCKET_ID_ANY;
> > dev_conf.session_mp.cache_size = CDEV_MP_CACHE_SZ;
> >
> > rte_cryptodev_configure(cdev_id, &dev_conf);
> >
> >
> > How do we calculate the minimum memory required to configure each HW and
> > each SW crypto device? Then we allocate and set up a receive queue pair
> > for each device as follows. As of now we use one queue pair per device
> > and the number of descriptors per queue pair is set to 2k. If we increase
> > the number of descriptors, will it improve throughput?
> >
> >
[Fiona] 
The QAT device can serve only a certain number of requests in parallel,
far fewer than 2k, so increasing the number of descriptors won't improve
throughput. In fact 2k is probably excessive and could lead to longer
latency if the queue is being filled up.
I would suggest trying values of 1k, 512 and 256; if you see no reduction
in throughput you can use a smaller queue and save some memory.
The optimal size partly depends on how bursty your traffic is.

> > #define CDEV_MP_NB_OBJS 2048
> >
> > qp_conf.nb_descriptors = CDEV_MP_NB_OBJS;
> >
> > rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf, dev_conf.socket_id);
> >
> >
[Fiona] Memory for each QAT queue pair (max 2 sym qps per QAT device):
  TX queue                              = qp_conf.nb_descriptors * 128 bytes
+ RX queue                              = qp_conf.nb_descriptors *  32 bytes
+ op cookies (used for SGL meta-data)   = qp_conf.nb_descriptors * 264 bytes

The op mempool size is entirely up to the user and is not bound to any device or PMD.
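
A minimal sketch of that arithmetic, assuming the per-descriptor byte counts
above (specific to the QAT PMD in this release) and an illustrative helper
name, so the 2k vs 1k/512/256 trade-off discussed earlier can be compared
directly:

#include <stdio.h>
#include <stdint.h>

/* Per-descriptor costs of one QAT sym queue pair, taken from the figures
 * above (DPDK 17.02 QAT PMD); treat them as approximations. */
#define QAT_TX_DESC_BYTES   128
#define QAT_RX_DESC_BYTES    32
#define QAT_COOKIE_BYTES    264

/* Illustrative helper: rough memory footprint of one QAT queue pair. */
static uint32_t
qat_qp_mem_bytes(uint32_t nb_descriptors)
{
    return nb_descriptors *
           (QAT_TX_DESC_BYTES + QAT_RX_DESC_BYTES + QAT_COOKIE_BYTES);
}

int
main(void)
{
    const uint32_t sizes[] = { 2048, 1024, 512, 256 };
    unsigned int i;

    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        printf("nb_descriptors=%4u -> ~%u KB per queue pair\n",
               sizes[i], qat_qp_mem_bytes(sizes[i]) / 1024);
    return 0;
}

With these figures, 2k descriptors come to roughly 848 KB per queue pair,
while 512 descriptors come to about 212 KB.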


The session mempool is per device (though this will change in 17.08).
The QAT session struct is 576 bytes long, plus memory for the
bpi_ctx and inst pointers.
The number of sessions in the pool is passed in to rte_cryptodev_configure().
This should be <= max_nb_sessions for that device, which can be queried using
rte_cryptodev_info_get().
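
A rough sizing sketch for the session pool, assuming the ~576-byte QAT
session size quoted above (it ignores the extra bpi_ctx/inst pointer memory,
so it is a floor rather than an exact figure); the helper name is
illustrative only:

#include <stdint.h>
#include <rte_cryptodev.h>

/* Approximate QAT session size quoted above (DPDK 17.02); excludes the
 * bpi_ctx and inst pointer allocations. */
#define QAT_SESSION_BYTES_APPROX 576

/* Illustrative helper: rough memory needed for one device's session mempool. */
static uint64_t
estimate_session_pool_bytes(uint8_t cdev_id, uint32_t nb_sessions_wanted)
{
    struct rte_cryptodev_info info;
    uint32_t nb_objs = nb_sessions_wanted;

    rte_cryptodev_info_get(cdev_id, &info);

    /* The pool size passed to rte_cryptodev_configure() should not exceed
     * the device's max_nb_sessions. */
    if (nb_objs > info.sym.max_nb_sessions)
        nb_objs = info.sym.max_nb_sessions;

    return (uint64_t)nb_objs * QAT_SESSION_BYTES_APPROX;
}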


> > We create a session for symmetric cryptographic operations per IPsec
> > Security Association.  What is the memory required to hold the session
> > data structure?
> >
> >
> > The intent behind this is to calculate the memory requirements in advance
> > (before EAL initialization) and, based upon the available memory, figure
> > out how many crypto devices can be initialized (note: our application
> > initializes the AESNI vdevs without using the EAL command-line option).
> > Say there are 24 worker cores and we need 24 AESNI crypto vdevs, but
> > there is not enough hugepage memory to create them. In that case, we will
> > allocate more hugepages, then call rte_eal_init() and expect it to
> > succeed.
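
For the vdev side of this, a minimal sketch of creating the AESNI-MB devices
programmatically after rte_eal_init() rather than via --vdev on the EAL
command line; it assumes the 17.02-era rte_eal_vdev_init() API and the
crypto_aesni_mb vdev arguments, and the device names and argument values are
illustrative only:

#include <stdio.h>
#include <rte_dev.h>

/* Illustrative only: create one AESNI-MB vdev per worker core after
 * rte_eal_init() has succeeded. Device and argument names assume the
 * DPDK 17.02 vdev API; check them against your DPDK version. */
static int
create_aesni_mb_vdevs(unsigned int nb_workers)
{
    char name[32];
    unsigned int i;

    for (i = 0; i < nb_workers; i++) {
        snprintf(name, sizeof(name), "crypto_aesni_mb%u", i);
        if (rte_eal_vdev_init(name,
                "max_nb_queue_pairs=1,max_nb_sessions=2048,socket_id=0") < 0)
            return -1;  /* e.g. not enough memory for this vdev */
    }
    return 0;
}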
> >
> >
> > Thank you in advance for your suggestion and time.
> >
> >
> >
> > Regards,
> >
> > Chinmaya