[dpdk-stable] [dpdk-dev] [PATCH 1/6] app/test-eventdev: Enhancing perf-queue packet flow
Pavan Nikhilesh Bhagavatula
pbhagavatula at marvell.com
Thu Jul 2 05:24:07 CEST 2020
>Subject: [dpdk-dev] [PATCH 1/6] app/test-eventdev: Enhancing perf-queue packet flow
>
>The event ethernet Tx adapter provides a data path for the ethernet
>transmit stage. It enqueues a burst of event objects supplied by an
>event device.
>
NAK, please use pipeline_atq/queue to test Rx->Tx performance.
perf_atq/queue should only be used to test event device performance/latency, i.e.
<event_src(CPU/Rx/timer)> -> worker.
>Fixes: 2369f73329 ("app/testeventdev: add perf queue worker functions")
>Cc: stable at dpdk.org
>
>Signed-off-by: Apeksha Gupta <apeksha.gupta at nxp.com>
>---
> app/test-eventdev/test_perf_common.c | 11 ++++++++
> app/test-eventdev/test_perf_common.h |  1 +
> app/test-eventdev/test_perf_queue.c  | 42 ++++++++++++++++++++--------
> 3 files changed, 43 insertions(+), 11 deletions(-)
>
>diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
>index b3af4bfeca..341e16eade 100644
>--- a/app/test-eventdev/test_perf_common.c
>+++ b/app/test-eventdev/test_perf_common.c
>@@ -687,9 +687,20 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> return -ENODEV;
> }
>
>+ t->internal_port = 1;
> RTE_ETH_FOREACH_DEV(i) {
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf local_port_conf = port_conf;
>+ uint32_t caps = 0;
>+
>+		ret = rte_event_eth_tx_adapter_caps_get(opt->dev_id, i, &caps);
>+		if (ret != 0) {
>+			evt_err("failed to get event tx adapter[%d] caps", i);
>+			return ret;
>+		}
>+
>+		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
>+			t->internal_port = 0;
>
> ret = rte_eth_dev_info_get(i, &dev_info);
> if (ret != 0) {
>diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
>index d8fbee6d89..716199d8c9 100644
>--- a/app/test-eventdev/test_perf_common.h
>+++ b/app/test-eventdev/test_perf_common.h
>@@ -48,6 +48,7 @@ struct test_perf {
> int done;
> uint64_t outstand_pkts;
> uint8_t nb_workers;
>+ uint8_t internal_port;
> enum evt_test_result result;
> uint32_t nb_flows;
> uint64_t nb_pkts;
>diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
>index 29098580e7..f79e4a4164 100644
>--- a/app/test-eventdev/test_perf_queue.c
>+++ b/app/test-eventdev/test_perf_queue.c
>@@ -71,10 +71,12 @@ perf_queue_worker(void *arg, const int enable_fwd_latency)
> }
>
> static int
>-perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
>+perf_queue_worker_burst(void *arg, const int enable_fwd_latency,
>+ const uint32_t flags)
> {
> PERF_WORKER_INIT;
> uint16_t i;
>+ uint16_t nb_tx;
> /* +1 to avoid prefetch out of array check */
> struct rte_event ev[BURST_SIZE + 1];
>
>@@ -111,12 +113,20 @@ perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
> }
> }
>
>-		uint16_t enq;
>-
>-		enq = rte_event_enqueue_burst(dev, port, ev, nb_rx);
>-		while (enq < nb_rx) {
>-			enq += rte_event_enqueue_burst(dev, port,
>+		if (flags == TEST_PERF_EVENT_TX_DIRECT) {
>+			nb_tx = rte_event_eth_tx_adapter_enqueue(dev, port,
>+					ev, nb_rx, 0);
>+			while (nb_tx < nb_rx && !t->done)
>+				nb_tx += rte_event_eth_tx_adapter_enqueue(dev,
>+						port, ev + nb_tx,
>+						nb_rx - nb_tx, 0);
>+		} else {
>+			uint16_t enq;
>+			enq = rte_event_enqueue_burst(dev, port, ev, nb_rx);
>+			while (enq < nb_rx) {
>+				enq += rte_event_enqueue_burst(dev, port,
> 					ev + enq, nb_rx - enq);
>+			}
> 		}
> 	}
> 	return 0;
>@@ -130,16 +140,26 @@ worker_wrapper(void *arg)
>
> const bool burst = evt_has_burst_mode(w->dev_id);
> const int fwd_latency = opt->fwd_latency;
>-
>+ const bool internal_port = w->t->internal_port;
>+ uint32_t flags;
> /* allow compiler to optimize */
> if (!burst && !fwd_latency)
> return perf_queue_worker(arg, 0);
> else if (!burst && fwd_latency)
> return perf_queue_worker(arg, 1);
>-	else if (burst && !fwd_latency)
>-		return perf_queue_worker_burst(arg, 0);
>-	else if (burst && fwd_latency)
>-		return perf_queue_worker_burst(arg, 1);
>+	else if (burst && !fwd_latency && internal_port) {
>+		flags = TEST_PERF_EVENT_TX_DIRECT;
>+		return perf_queue_worker_burst(arg, 0, flags);
>+	} else if (burst && !fwd_latency && !internal_port) {
>+		flags = TEST_PERF_EVENT_TX_ENQ;
>+		return perf_queue_worker_burst(arg, 0, flags);
>+	} else if (burst && fwd_latency && internal_port) {
>+		flags = TEST_PERF_EVENT_TX_DIRECT;
>+		return perf_queue_worker_burst(arg, 1, flags);
>+	} else if (burst && fwd_latency && !internal_port) {
>+		flags = TEST_PERF_EVENT_TX_ENQ;
>+		return perf_queue_worker_burst(arg, 1, flags);
>+	}
>
> rte_panic("invalid worker\n");
> }
>--
>2.17.1