[v2] crypto/aesni_mb: fix cpu crypto cipher auth

Message ID 20201029163020.1034474-1-roy.fan.zhang@intel.com (mailing list archive)
State Rejected, archived
Delegated to: akhil goyal
Series [v2] crypto/aesni_mb: fix cpu crypto cipher auth

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/travis-robot success Travis build: passed
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

Fan Zhang Oct. 29, 2020, 4:30 p.m. UTC
This patch fixes the AESNI-MB PMD CPU crypto process function. Originally
the function tried to access the crypto vector's aad buffer even when it is
not needed.

Fixes: 8d928d47a29a ("cryptodev: change crypto symmetric vector structure")
Cc: roy.fan.zhang@intel.com

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
V2:
- fix typo.

 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 69 +++++++++++++++-------
 1 file changed, 49 insertions(+), 20 deletions(-)
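
For reference, a minimal standalone sketch of the idea behind the fix (not
the PMD code itself): only index the vector's aad array when the session is
an AEAD session, since callers using plain cipher/auth algorithms may
legitimately leave it NULL. The names below (va_ptr, sym_vec,
set_job_params, process_bulk) are hypothetical stand-ins for the
rte_crypto_sym_vec / set_cpu_mb_job_params machinery in the driver, which in
the real patch splits the loop on is_aead_algo() as shown in the diff.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for struct rte_crypto_va_iova_ptr */
struct va_ptr {
	void *va;
};

/* Hypothetical stand-in for struct rte_crypto_sym_vec */
struct sym_vec {
	unsigned int num;
	struct va_ptr *iv;
	struct va_ptr *aad;  /* NULL when the algorithm carries no AAD */
};

/* Stand-in for set_cpu_mb_job_params(): just records the pointers */
static void
set_job_params(unsigned int i, void *iv, void *aad)
{
	printf("job %u: iv=%p aad=%p\n", i, iv, aad);
}

static void
process_bulk(const struct sym_vec *vec, int session_is_aead)
{
	unsigned int i;

	for (i = 0; i != vec->num; i++) {
		/*
		 * The pre-fix code did the equivalent of
		 * set_job_params(i, vec->iv[i].va, vec->aad[i].va)
		 * unconditionally, dereferencing vec->aad even when the
		 * caller left it NULL for cipher/auth sessions.  Guarding
		 * on the session type avoids touching aad at all in that
		 * case.
		 */
		set_job_params(i, vec->iv[i].va,
			session_is_aead ? vec->aad[i].va : NULL);
	}
}

int
main(void)
{
	unsigned char iv0[16] = {0};
	struct va_ptr iv[1] = { { .va = iv0 } };
	struct sym_vec vec = { .num = 1, .iv = iv, .aad = NULL };

	/* cipher+auth session: vec.aad is NULL and must never be indexed */
	process_bulk(&vec, 0);
	return 0;
}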
  

Patch

diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index fbbb38af0..53834f9f3 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -1990,33 +1990,62 @@  aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev *dev,
 		RTE_PER_LCORE(sync_mb_mgr) = mb_mgr;
 	}
 
-	for (i = 0, j = 0, k = 0; i != vec->num; i++) {
+	if (is_aead_algo(s->auth.algo, s->cipher.mode)) {
+		for (i = 0, j = 0, k = 0; i != vec->num; i++) {
+			ret = check_crypto_sgl(sofs, vec->sgl + i);
+			if (ret != 0) {
+				vec->status[i] = ret;
+				continue;
+			}
 
+			buf = vec->sgl[i].vec[0].base;
+			len = vec->sgl[i].vec[0].len;
 
-		ret = check_crypto_sgl(sofs, vec->sgl + i);
-		if (ret != 0) {
-			vec->status[i] = ret;
-			continue;
+			job = IMB_GET_NEXT_JOB(mb_mgr);
+			if (job == NULL) {
+				k += flush_mb_sync_mgr(mb_mgr);
+				job = IMB_GET_NEXT_JOB(mb_mgr);
+				RTE_ASSERT(job != NULL);
+			}
+
+			/* Submit job for processing */
+			set_cpu_mb_job_params(job, s, sofs, buf, len,
+				vec->iv[i].va, vec->aad[i].va, tmp_dgst[i],
+				&vec->status[i]);
+			job = submit_sync_job(mb_mgr);
+			j++;
+
+			/* handle completed jobs */
+			k += handle_completed_sync_jobs(job, mb_mgr);
 		}
+	} else {
+		for (i = 0, j = 0, k = 0; i != vec->num; i++) {
+			ret = check_crypto_sgl(sofs, vec->sgl + i);
+			if (ret != 0) {
+				vec->status[i] = ret;
+				continue;
+			}
 
-		buf = vec->sgl[i].vec[0].base;
-		len = vec->sgl[i].vec[0].len;
+			buf = vec->sgl[i].vec[0].base;
+			len = vec->sgl[i].vec[0].len;
 
-		job = IMB_GET_NEXT_JOB(mb_mgr);
-		if (job == NULL) {
-			k += flush_mb_sync_mgr(mb_mgr);
 			job = IMB_GET_NEXT_JOB(mb_mgr);
-			RTE_ASSERT(job != NULL);
+			if (job == NULL) {
+				k += flush_mb_sync_mgr(mb_mgr);
+				job = IMB_GET_NEXT_JOB(mb_mgr);
+				RTE_ASSERT(job != NULL);
+			}
+
+			/* Submit job for processing */
+			set_cpu_mb_job_params(job, s, sofs, buf, len,
+				vec->iv[i].va, NULL, tmp_dgst[i],
+				&vec->status[i]);
+			job = submit_sync_job(mb_mgr);
+			j++;
+
+			/* handle completed jobs */
+			k += handle_completed_sync_jobs(job, mb_mgr);
 		}
-
-		/* Submit job for processing */
-		set_cpu_mb_job_params(job, s, sofs, buf, len, vec->iv[i].va,
-			vec->aad[i].va, tmp_dgst[i], &vec->status[i]);
-		job = submit_sync_job(mb_mgr);
-		j++;
-
-		/* handle completed jobs */
-		k += handle_completed_sync_jobs(job, mb_mgr);
 	}
 
 	/* flush remaining jobs */