[dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework

Ananyev, Konstantin konstantin.ananyev at intel.com
Thu Oct 21 15:11:39 CEST 2021


 
> > The problem is that with new approach you proposed there is no simple way
> > for PMD to
> > fulfil that requirement.
> > In the current version of DPDK:
> > - The PMD reports the size of its private data; note that it reports the
> >   extra space needed to align that data properly inside the provided buffer.
> > - It is then up to the higher layer to allocate a mempool with elements big
> >   enough to hold the PMD private data.
> > - At session init that mempool is passed to the PMD's sym_session_configure(),
> >   and it is the PMD's responsibility to allocate a buffer (from the given
> >   mempool) for its private data, align it properly, and update
> >   sess->sess_data[].data.
> > With this patch:
> > - The PMD still reports the size of its private data, but now it is the
> >   cryptodev layer that allocates memory for the PMD private data and updates
> >   sess->sess_data[].data.
> >
> > So the PMD simply has no way to allocate/align its private data the way it
> > wants to.
> > Of course it can simply do the alignment on the fly for each operation,
> > something like:
> >
> > void *p = get_sym_session_private_data(sess, dev->driver_id);
> > sess_priv = RTE_PTR_ALIGN_FLOOR(p, PMD_SES_ALIGN);
> >
> > But it is way too ugly and error-prone.
> >
> > Another potential problem with that approach (where cryptodev allocates
> > memory for the PMD private session data and updates sess->sess_data[].data
> > for it): private data for different PMDs could end up on the same cache
> > line. If we ever have a case of simultaneous session processing by multiple
> > devices, that can cause all sorts of performance problems (false sharing).
> 
> To resolve the above 2 issues (performance and pointer CEIL in the PMD), can
> you check if the following diff in the library would work?
> ----------------------------------------------------------------------------------------------------
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index 9d5e08bba2..7beb5339ea 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -1731,12 +1731,13 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
>         RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_configure, -ENOTSUP);
> 
>         if (sess->sess_data[index].refcnt == 0) {
> -               sess->sess_data[index].data = (void *)((uint8_t *)sess +
> +               sess->sess_data[index].data = RTE_PTR_ALIGN_CEIL(
> +                               (void *)((uint8_t *)sess +
>                                 rte_cryptodev_sym_get_header_session_size() +
> -                               (index * sess->priv_sz));
> -               sess_iova = rte_mempool_virt2iova(sess) +
> +                               (index * sess->priv_sz)), RTE_CACHE_LINE_SIZE);
> +               sess_iova = RTE_ALIGN_CEIL(rte_mempool_virt2iova(sess) +
>                                 rte_cryptodev_sym_get_header_session_size() +
> -                               (index * sess->priv_sz);
> +                               (index * sess->priv_sz), RTE_CACHE_LINE_SIZE);
>                 ret = dev->dev_ops->sym_session_configure(dev, xforms,
>                                 sess->sess_data[index].data, sess_iova);
>                 if (ret < 0) {
> @@ -1805,7 +1806,7 @@ get_max_sym_sess_priv_sz(void)
>                 if (sz > max_sz)
>                         max_sz = sz;
>         }
> -       return max_sz;
> +       return RTE_ALIGN_CEIL(max_sz,RTE_CACHE_LINE_SIZE);
>  }
> 
>  struct rte_mempool *

Yep, aligning each PMD's private data on a cache line will help to overcome that
issue. Though it means that we need to allocate an extra CACHE_LINE worth of
bytes for each sess_data element, which could be a significant amount.
Also, I am still not sure that the cryptodev layer should allocate/manage space
for PMD private session data. It would be really hard to predict all the
possible requirements that each PMD can have. I think it is better to leave
that to the PMD itself, as it knows best what it needs.

> ----------------------------------------------------------------------------------------
> >
> > All in all, these changes (removing the second mempool, changing the way we
> > allocate/set up session private data) seem premature to me.
> > So, to go ahead with this series (hiding rte_cryptodev_sym_session) for
> > 21.11, I think we need to drop the changes to sess_data[] management and
> > allocation, and keep only the changes directly related to hiding
> > sym_session.
> > My apologies for not reviewing/testing properly that series earlier.
> >
> 
> The changes are huge and will affect a lot of people. We needed help
> from all the PMD owners to look into this.

Agree.

> We can drop this series, citing that not enough review happened, but the
> issues that were raised could have been resolved by RC2 for the cases that
> are currently broken.
> However, one more issue that was not highlighted here is that, in the case of
> the scheduler PMD, there is a lot of inappropriate stuff which hampers these
> changes. Because of that we will end up reserving a huge amount of memory
> which will go unused if the scheduler PMD is compiled in.
> We can have a simple single API for session creation, similar to
> rte_security, and let the scheduler PMD manage all the memory by itself for
> all the PMDs which it wants to schedule.

Yes, same thoughts here:
if we can have just one device per session (as we have for security) it would
help us come up with a simple and clean approach.
From my perspective, it is probably better to create a simple, clean API first,
and then try to rework the scheduler PMD to work with the new one.
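For what it's worth, a minimal standalone sketch of what such a single-device
session API could look like, loosely modeled on rte_security_session_create().
All names, the fake PMD hook, and the size/alignment values are hypothetical,
not actual DPDK code:

```c
#define _POSIX_C_SOURCE 200112L
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One device per session: the PMD itself allocates and aligns its
 * private data, so the generic layer never has to guess its layout. */
struct sym_session {
	uint16_t dev_id; /* the single device bound to this session */
	void *priv;      /* PMD-owned, PMD-aligned private data */
};

/* Fake PMD configure hook: the driver picks its own size (128 bytes
 * here) and alignment (a 64-byte cache line here). */
static int pmd_session_configure(struct sym_session *s)
{
	if (posix_memalign(&s->priv, 64, 128) != 0)
		return -1;
	memset(s->priv, 0, 128);
	return 0;
}

static struct sym_session *sym_session_create(uint16_t dev_id)
{
	struct sym_session *s = calloc(1, sizeof(*s));

	if (s == NULL)
		return NULL;
	s->dev_id = dev_id;
	if (pmd_session_configure(s) != 0) {
		free(s);
		return NULL;
	}
	return s;
}
```

With one device bound per session like this, the scheduler PMD would then own
and manage the sessions of the devices it fronts by itself, as suggested above.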
 
> We can defer this series for now and work on asymmetric crypto first
> (probably in 22.02), which is still in an experimental state. This will help
> in getting these changes mature enough for sym sessions, which we can take up
> in 22.11.

Sounds like a good plan to me.

> I believe the Intel people are planning new features in asymmetric crypto.
> It makes more sense that they align those with the discussed approach.
> 
> Regards,
> Akhil


