[dpdk-dev] [PATCH] crypto/scheduler: add multicore scheduling mode
De Lara Guarch, Pablo
pablo.de.lara.guarch at intel.com
Wed May 31 09:48:52 CEST 2017
Hi Kirill,
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Pablo de Lara
> Sent: Sunday, May 28, 2017 9:09 PM
> To: Doherty, Declan; Zhang, Roy Fan
> Cc: dev at dpdk.org; Rybalchenko, Kirill
> Subject: [dpdk-dev] [PATCH] crypto/scheduler: add multicore scheduling
> mode
>
> From: Kirill Rybalchenko <kirill.rybalchenko at intel.com>
>
> Multi-core scheduling mode is a mode where the scheduler distributes
> crypto operations on a round-robin basis between several cores
> assigned as workers.
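For reference, the distribution policy described above could look like this minimal sketch (the function and variable names are illustrative, not taken from the PMD): each enqueue advances a last-used index modulo the number of workers.

```c
#include <stdint.h>

/*
 * Illustrative sketch of round-robin worker selection; names are
 * hypothetical, not the actual PMD code.
 */
static uint32_t
next_worker(uint32_t *last_idx, uint32_t nb_workers)
{
	uint32_t idx = (*last_idx + 1) % nb_workers; /* wrap after the last worker */

	*last_idx = idx;
	return idx;
}
```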
>
> Signed-off-by: Kirill Rybalchenko <kirill.rybalchenko at intel.com>
> ---
> app/test-crypto-perf/cperf_test_throughput.c | 2 +
> drivers/crypto/scheduler/Makefile | 1 +
> drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 7 +
> drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 6 +
> drivers/crypto/scheduler/scheduler_multicore.c | 405 +++++++++++++++++++++
> drivers/crypto/scheduler/scheduler_pmd.c | 73 +++-
> drivers/crypto/scheduler/scheduler_pmd_private.h | 4 +
> lib/librte_cryptodev/rte_cryptodev.c | 2 +-
> 8 files changed, 497 insertions(+), 3 deletions(-)
> create mode 100644 drivers/crypto/scheduler/scheduler_multicore.c
>
> diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
> index 61b27ea..0504a37 100644
> --- a/app/test-crypto-perf/cperf_test_throughput.c
> +++ b/app/test-crypto-perf/cperf_test_throughput.c
> @@ -502,6 +502,8 @@ cperf_throughput_test_runner(void *test_ctx)
>
> }
>
> + rte_cryptodev_stop(ctx->dev_id);
This should be in a separate patch. Also, it is probably in the wrong place,
as this runner is called several times when multiple buffer sizes are used.
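One hedged sketch of the point above (hypothetical names, not the actual cperf code): stop the device once, after the loop over all buffer sizes has finished, rather than inside the per-size runner.

```c
#include <stdint.h>

/* Hypothetical stand-ins for the cperf context and device stop. */
struct test_ctx {
	uint8_t dev_id;
	int stopped; /* stands in for the device's started/stopped state */
};

static void
dev_stop(struct test_ctx *ctx)
{
	ctx->stopped = 1; /* would be rte_cryptodev_stop(ctx->dev_id) */
}

/* Run the measurement for every buffer size, then stop the device once. */
static void
run_all_sizes(struct test_ctx *ctx, const uint32_t *sizes, uint16_t n)
{
	uint16_t i;

	for (i = 0; i < n; i++) {
		/* the per-size runner would be invoked here with sizes[i] */
		(void)sizes[i];
	}
	dev_stop(ctx); /* once, after all iterations, not per runner call */
}
```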
> +
> return 0;
> }
>
...
> diff --git a/drivers/crypto/scheduler/scheduler_multicore.c
> b/drivers/crypto/scheduler/scheduler_multicore.c
> new file mode 100644
> index 0000000..12e5734
> --- /dev/null
> +++ b/drivers/crypto/scheduler/scheduler_multicore.c
> +struct mc_scheduler_qp_ctx {
> + struct scheduler_slave slaves[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
> + uint32_t nb_slaves;
> +
> + uint32_t last_enq_worker_idx;
> + uint32_t last_deq_worker_idx;
I would say these can be uint8_t.
...
> +static int
> +scheduler_stop(struct rte_cryptodev *dev)
> +{
> + struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> + struct mc_scheduler_ctx *mc_ctx = sched_ctx->private_ctx;
> +
> + mc_ctx->stop_signal = 1;
> + for (uint16_t i = 0; i < sched_ctx->nb_wc; i++)
Declare the variable "i" outside the for loop.
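For anyone unfamiliar with the convention: the DPDK coding style keeps declarations at the start of the block, so the counter goes above the for statement. A trivial sketch:

```c
#include <stdint.h>

/*
 * Trivial example of the requested style: the counter is declared
 * at the top of the block rather than inside the for statement.
 */
static uint32_t
sum_values(const uint32_t *vals, uint16_t n)
{
	uint16_t i;
	uint32_t sum = 0;

	for (i = 0; i < n; i++)
		sum += vals[i];
	return sum;
}
```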
> + rte_eal_wait_lcore(sched_ctx->wc_pool[i]);
> +
> + return 0;
> +}
> const struct scheduler_parse_map scheduler_ordering_map[] = {
> @@ -117,6 +124,17 @@ cryptodev_scheduler_create(const char *name,
> sched_ctx->max_nb_queue_pairs =
> init_params->def_p.max_nb_queue_pairs;
>
> + if (init_params->mode == CDEV_SCHED_MODE_MULTICORE) {
> + sched_ctx->nb_wc = 0;
> + for (uint16_t i = 0; i < MAX_NB_WORKER_CORES; i++) {
Declare the variable "i" outside the for loop.
> + if (init_params->wcmask & (1ULL << i)) {
> + sched_ctx->wc_pool[sched_ctx->nb_wc++] =
> i;
> + RTE_LOG(INFO, PMD, " Worker
> core[%u]=%u added\n",
> + sched_ctx->nb_wc-1, i);
> + }
> + }
> + }
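The bit scan in this hunk is easy to test in isolation; as a self-contained illustration (the helper name is hypothetical), collecting worker core IDs from a 64-bit mask:

```c
#include <stdint.h>

/*
 * Illustrative helper (not the PMD code): collect the core IDs set
 * in a 64-bit worker-core mask, up to 'max' entries.
 */
static uint16_t
mask_to_cores(uint64_t wcmask, uint16_t *cores, uint16_t max)
{
	uint16_t i, n = 0;

	for (i = 0; i < 64 && n < max; i++)
		if (wcmask & (1ULL << i))
			cores[n++] = i; /* core ID equals the bit position */
	return n;
}
```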
> +
> if (init_params->mode > CDEV_SCHED_MODE_USERDEFINED &&
> init_params->mode < CDEV_SCHED_MODE_COUNT)
> {
> ret = rte_cryptodev_scheduler_mode_set(dev->data->dev_id,
> @@ -251,6 +269,42 @@ parse_integer_arg(const char *key __rte_unused,
> return 0;
> }
>
...
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> b/lib/librte_cryptodev/rte_cryptodev.c
> index b65cd9c..5aa2b8b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -1032,8 +1032,8 @@ rte_cryptodev_stop(uint8_t dev_id)
> return;
> }
>
> - dev->data->dev_started = 0;
> (*dev->dev_ops->dev_stop)(dev);
> + dev->data->dev_started = 0;
Separate patch for this.
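Agreed on splitting it out. For context, a minimal sketch of why the reordering can matter (this is an assumption about the motivation, not confirmed by the patch): if the PMD's stop callback needs to signal and join worker cores while the device still appears started, the flag should be cleared only afterwards.

```c
/*
 * Illustrative model only; names are hypothetical. pmd_stop() stands
 * in for (*dev->dev_ops->dev_stop)(dev), which joins the workers.
 */
struct dev_state {
	int started;
	int workers_joined;
};

static void
pmd_stop(struct dev_state *d)
{
	d->workers_joined = 1; /* signal worker cores and wait for them */
}

static void
dev_stop_ordered(struct dev_state *d)
{
	pmd_stop(d);    /* stop callback runs while 'started' is still set */
	d->started = 0; /* cleared only after the workers have exited */
}
```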
> }
Last thing, there are some compilation issues to fix, according to patchwork:
http://dpdk.org/ml/archives/test-report/2017-May/020993.html
Thanks,
Pablo