[dpdk-dev,2/3] crypto/scheduler: improve slave configuration

Message ID 1487332862-5719-3-git-send-email-roy.fan.zhang@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Pablo de Lara Guarch
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Fan Zhang Feb. 17, 2017, 12:01 p.m. UTC
  Now that the device configuration API has been updated, the crypto
scheduler PMD can use it to configure its slaves automatically with the
same configuration it receives. Previously the slaves had to be
configured manually one by one; this patch reduces that coding
complexity.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 app/test/test_cryptodev.c                    | 24 +-----------------------
 drivers/crypto/scheduler/scheduler_pmd_ops.c | 18 +++++++++++++-----
 2 files changed, 14 insertions(+), 28 deletions(-)
  

Comments

Doherty, Declan March 23, 2017, 2:58 p.m. UTC | #1
On 17/02/17 12:01, Fan Zhang wrote:
> Now that the device configuration API has been updated, the crypto
> scheduler PMD can use it to configure its slaves automatically with the
> same configuration it receives. Previously the slaves had to be
> configured manually one by one; this patch reduces that coding
> complexity.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
...
>

Acked-by: Declan Doherty <declan.doherty@intel.com>
  
Doherty, Declan March 23, 2017, 3:09 p.m. UTC | #2
On 17/02/17 12:01, Fan Zhang wrote:
> Now that the device configuration API has been updated, the crypto
> scheduler PMD can use it to configure its slaves automatically with the
> same configuration it receives. Previously the slaves had to be
> configured manually one by one; this patch reduces that coding
> complexity.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
...
>

This patch needs to be rebased due to the movement of app/test/ to test/test
  
De Lara Guarch, Pablo March 23, 2017, 3:13 p.m. UTC | #3
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Declan Doherty
> Sent: Thursday, March 23, 2017 3:10 PM
> To: Zhang, Roy Fan; dev@dpdk.org
> Cc: De Lara Guarch, Pablo
> Subject: Re: [dpdk-dev] [PATCH 2/3] crypto/scheduler: improve slave
> configuration
> 
> On 17/02/17 12:01, Fan Zhang wrote:
> > Now that the device configuration API has been updated, the crypto
> > scheduler PMD can use it to configure its slaves automatically with the
> > same configuration it receives. Previously the slaves had to be
> > configured manually one by one; this patch reduces that coding
> > complexity.
> >
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > ---
> ...
> >
> 
> This patch needs to be rebased due to the movement of app/test/ to test/test

If there are no reworks required, I can do that myself when merging.
  

Patch

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 357a92e..6fe5362 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -7382,17 +7382,8 @@  test_scheduler_attach_slave_op(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	uint8_t sched_id = ts_params->valid_devs[0];
-	uint32_t nb_devs, qp_id, i, nb_devs_attached = 0;
+	uint32_t nb_devs, i, nb_devs_attached = 0;
 	int ret;
-	struct rte_cryptodev_config config = {
-			.nb_queue_pairs = 8,
-			.socket_id = SOCKET_ID_ANY,
-			.session_mp = {
-				.nb_objs = 2048,
-				.cache_size = 256
-			}
-	};
-	struct rte_cryptodev_qp_conf qp_conf = {2048};
 
 	/* create 2 AESNI_MB if necessary */
 	nb_devs = rte_cryptodev_count_devtype(
@@ -7418,19 +7409,6 @@  test_scheduler_attach_slave_op(void)
 		if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
 			continue;
 
-		ret = rte_cryptodev_configure(i, &config);
-		TEST_ASSERT(ret == 0,
-			"Failed to configure device %u of pmd : %s", i,
-			RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
-
-		for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
-			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
-				i, qp_id, &qp_conf,
-				rte_cryptodev_socket_id(i)),
-				"Failed to setup queue pair %u on "
-				"cryptodev %u", qp_id, i);
-		}
-
 		ret = rte_cryptodev_scheduler_slave_attach(sched_id,
 				(uint8_t)i);
 
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 79be119..ea755e0 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -52,11 +52,8 @@  scheduler_pmd_config(struct rte_cryptodev *dev,
 
 	for (i = 0; i < sched_ctx->nb_slaves; i++) {
 		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
-		struct rte_cryptodev *slave_dev =
-				rte_cryptodev_pmd_get_dev(slave_dev_id);
 
-		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev,
-				config);
+		ret = rte_cryptodev_configure(slave_dev_id, config);
 		if (ret < 0)
 			break;
 	}
@@ -340,11 +337,13 @@  scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
 /** Setup a queue pair */
 static int
 scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
 {
 	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
 	struct scheduler_qp_ctx *qp_ctx;
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	uint32_t i;
+	int ret;
 
 	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
 			"CRYTO_SCHE PMD %u QP %u",
@@ -357,6 +356,15 @@  scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	if (dev->data->queue_pairs[qp_id] != NULL)
 		scheduler_pmd_qp_release(dev, qp_id);
 
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_id = sched_ctx->slaves[i].dev_id;
+
+		ret = rte_cryptodev_queue_pair_setup(slave_id, qp_id,
+				qp_conf, socket_id);
+		if (ret < 0)
+			return ret;
+	}
+
 	/* Allocate the queue pair data structure. */
 	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
 			socket_id);