[dpdk-dev,v3,08/12] app/eventdev: add pipeline queue test

Message ID 20180110145144.28403-8-pbhagavatula@caviumnetworks.com (mailing list archive)
State Superseded, archived
Delegated to: Jerin Jacob
Headers

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Pavan Nikhilesh Jan. 10, 2018, 2:51 p.m. UTC
  This is a pipeline queue test case that aims at testing the following:
1. Measure the end-to-end performance of an event dev with an ethernet dev.
2. Maintain packet ordering from Rx to Tx.

The pipeline queue test configures the eventdev with Q queues and P ports,
where Q is (nb_ethdev * nb_stages) + nb_ethdev and P is nb_workers.

The user can choose the number of workers and the number of stages through the
--wlcores and --stlist application command line arguments respectively.
The probed ethernet devices act as producers for this application.

The ethdevs are configured as event Rx adapters, which enables them to
inject events into the eventdev based on the first stage schedule type in
the list requested by the user through the --stlist command line argument.

Based on the number of stages to process (selected through --stlist),
the application forwards the event to the next queue in the pipeline. When
the event reaches the last stage, it is enqueued onto the ethdev Tx queue if
its schedule type is ATOMIC; otherwise, to maintain ordering, its schedule
type is set to ATOMIC and it is enqueued onto the last stage queue.
On packet Tx, the application increments the number of events processed and
prints it every second to report the event processing rate.

Note: The --prod_type_ethdev option is mandatory for running the application.

Example command to run pipeline queue test:
sudo build/app/dpdk-test-eventdev -c 0xf -s 0x8 --vdev=event_sw0 -- \
--test=pipeline_queue --wlcores=1 --prod_type_ethdev --stlist=ao

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---

 v3 Changes:
 - add SPDX licence tags

 app/test-eventdev/Makefile              |   1 +
 app/test-eventdev/test_pipeline_queue.c | 166 ++++++++++++++++++++++++++++++++
 2 files changed, 167 insertions(+)
 create mode 100644 app/test-eventdev/test_pipeline_queue.c

--
2.15.1
  

Comments

Van Haaren, Harry Jan. 10, 2018, 4:38 p.m. UTC | #1
> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Wednesday, January 10, 2018 2:52 PM
> To: jerin.jacob@caviumnetworks.com; santosh.shukla@caviumnetworks.com; Van
> Haaren, Harry <harry.van.haaren@intel.com>; Eads, Gage
> <gage.eads@intel.com>; hemant.agrawal@nxp.com; nipun.gupta@nxp.com; Ma,
> Liang J <liang.j.ma@intel.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v3 08/12] app/eventdev: add pipeline queue test
> 
> This is a pipeline queue test case that aims at testing the following:
> 1. Measure the end-to-end performance of an event dev with a ethernet dev.
> 2. Maintain packet ordering from Rx to Tx.
> 
> The pipeline queue test configures the eventdev with Q queues and P ports,
> where Q is (nb_ethdev * nb_stages) + nb_ethdev and P is nb_workers.

Why (nb_ethdev * nb_stages) number of Queues?

I had expected that if the test is for an eventdev with Q queues and P ports, Q = number of stages is all that is required (possibly with +1 for the TX queue; IIRC some HW doesn't require the TX queue).

Am I missing something here? I've left the code snippet I don't understand below.


<snip>

> Note: The --prod_type_ethdev is mandatory for running the application.

Mandatory arguments seem pointless to me, unless there are other valid options to choose from.


> +
> +static int
> +pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options
> *opt)
> +{
> +	int ret;
> +	int nb_ports;
> +	int nb_queues;
> +	int nb_stages = opt->nb_stages;
> +	uint8_t queue;
> +	struct rte_event_dev_info info;
> +
> +	nb_ports = evt_nr_active_lcores(opt->wlcores);
> +	nb_queues = rte_eth_dev_count() * (nb_stages);


As per comment above, this is what I don't understand? Why more queues than stages?
  
Pavan Nikhilesh Jan. 10, 2018, 8:01 p.m. UTC | #2
Hi Harry,

Thanks for the review.

On Wed, Jan 10, 2018 at 04:38:35PM +0000, Van Haaren, Harry wrote:
> > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > Sent: Wednesday, January 10, 2018 2:52 PM
> > To: jerin.jacob@caviumnetworks.com; santosh.shukla@caviumnetworks.com; Van
> > Haaren, Harry <harry.van.haaren@intel.com>; Eads, Gage
> > <gage.eads@intel.com>; hemant.agrawal@nxp.com; nipun.gupta@nxp.com; Ma,
> > Liang J <liang.j.ma@intel.com>
> > Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v3 08/12] app/eventdev: add pipeline queue test
> >
> > This is a pipeline queue test case that aims at testing the following:
> > 1. Measure the end-to-end performance of an event dev with a ethernet dev.
> > 2. Maintain packet ordering from Rx to Tx.
> >
> > The pipeline queue test configures the eventdev with Q queues and P ports,
> > where Q is (nb_ethdev * nb_stages) + nb_ethdev and P is nb_workers.
>
> Why (nb_ethdev * nb_stages) number of Queues?
>
> I had expected if the test is for eventdev with Q queues, P ports, that that Q number of stages is all that is required, (possibly with +1 for TX queue, iirc some HW doesn't require the TX Queue).
>
> Am I missing something here? I've left the code snippet I don't understand below.
>

The idea is to reduce the load on the ingress event queue (mapped to the
ethernet Rx queue) by splitting the traffic across event queues for each
ethernet device.

For example, with nb_stages = 2 and nb_ethdev = 2:

	nb_queues = (2 * 2) + 2 = 6 (non atq)
	stride = 3 (nb_stages + 1 used for directing each ethernet dev traffic
			into a specific event queue)

queue ids 0,1,2,3,4,5

This allows us to direct the traffic from ethdev0 to event queue 0 and from
ethdev1 to queue 3 based on the stride (dev_id * stride). This in turn forms
two pipelines:
	ethdev0	0->1->2->tx
	ethdev1	3->4->5->tx

Without this, both ethdev0 and ethdev1 would have to inject into queue 0,
which leads to more congestion as the number of ethernet devices increases.

Hope this clears things up.

>
> <snip>
>
> > Note: The --prod_type_ethdev is mandatory for running the application.
>
> Mandatory arguments seem pointless to me, unless there are other valid options to choose from.
>

In the future we might have --prod_type_event_timer_adptr and --prod_type_cryptodev :-)

Cheers,
Pavan
>
> > +
> > +static int
> > +pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options
> > *opt)
> > +{
> > +	int ret;
> > +	int nb_ports;
> > +	int nb_queues;
> > +	int nb_stages = opt->nb_stages;
> > +	uint8_t queue;
> > +	struct rte_event_dev_info info;
> > +
> > +	nb_ports = evt_nr_active_lcores(opt->wlcores);
> > +	nb_queues = rte_eth_dev_count() * (nb_stages);
>
>
> As per comment above, this is what I don't understand? Why more queues than stages?
  
Van Haaren, Harry Jan. 15, 2018, 10:17 a.m. UTC | #3
> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Wednesday, January 10, 2018 8:01 PM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>;
> jerin.jacob@caviumnetworks.com; santosh.shukla@caviumnetworks.com; Eads,
> Gage <gage.eads@intel.com>; hemant.agrawal@nxp.com; nipun.gupta@nxp.com; Ma,
> Liang J <liang.j.ma@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 08/12] app/eventdev: add pipeline queue
> test
> 
> Hi Harry,
> 
> Thanks for the review.
> 
> On Wed, Jan 10, 2018 at 04:38:35PM +0000, Van Haaren, Harry wrote:
> > > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > > Sent: Wednesday, January 10, 2018 2:52 PM
> > > To: jerin.jacob@caviumnetworks.com; santosh.shukla@caviumnetworks.com;
> Van
> > > Haaren, Harry <harry.van.haaren@intel.com>; Eads, Gage
> > > <gage.eads@intel.com>; hemant.agrawal@nxp.com; nipun.gupta@nxp.com; Ma,
> > > Liang J <liang.j.ma@intel.com>
> > > Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> > > Subject: [dpdk-dev] [PATCH v3 08/12] app/eventdev: add pipeline queue
> test
> > >
> > > This is a pipeline queue test case that aims at testing the following:
> > > 1. Measure the end-to-end performance of an event dev with a ethernet
> dev.
> > > 2. Maintain packet ordering from Rx to Tx.
> > >
> > > The pipeline queue test configures the eventdev with Q queues and P
> ports,
> > > where Q is (nb_ethdev * nb_stages) + nb_ethdev and P is nb_workers.
> >
> > Why (nb_ethdev * nb_stages) number of Queues?
> >
> > I had expected if the test is for eventdev with Q queues, P ports, that
> that Q number of stages is all that is required, (possibly with +1 for TX
> queue, iirc some HW doesn't require the TX Queue).
> >
> > Am I missing something here? I've left the code snippet I don't understand
> below.
> >
> 
> The idea is to reduce the load on ingress event queue (mapped to ethernet Rx
> queue)
> by splitting the traffic across event queues for each ethernet device.
> 
> for example, nb_stages =  2 and nb_ethdev = 2 then
> 
> 	nb_queues = (2 * 2) + 2 = 6 (non atq)
> 	stride = 3 (nb_stages + 1 used for directing each ethernet dev traffic
> 			into a specific event queue)
> 
> queue id's 0,1,2,3,4,5
> 
> This allows us to direct the traffic from eth dev0 to event queue 0 and dev1
> to
> queue 3 based on the stride (dev_id * stride). This in turn forms two
> pipelines
> 	ethdev0	0->1->2->tx
> 	ethdev1	3->4->5->tx
> 
> In the absence of this both  ethdev0 and ethdev1 would have to inject to 0th
> queue and this leads to more congestion as the number of ethernet devices
> increase.
> 
> Hope this clears things up.


Ah ok, two parallel pipelines use-case. Makes sense, thanks for explaining.

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

<snip>
  

Patch

diff --git a/app/test-eventdev/Makefile b/app/test-eventdev/Makefile
index f2fb665d8..30bebfb2f 100644
--- a/app/test-eventdev/Makefile
+++ b/app/test-eventdev/Makefile
@@ -52,5 +52,6 @@  SRCS-y += test_perf_queue.c
 SRCS-y += test_perf_atq.c

 SRCS-y += test_pipeline_common.c
+SRCS-y += test_pipeline_queue.c

 include $(RTE_SDK)/mk/rte.app.mk
diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
new file mode 100644
index 000000000..4b50e7b54
--- /dev/null
+++ b/app/test-eventdev/test_pipeline_queue.c
@@ -0,0 +1,166 @@ 
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 Cavium, Inc.
+ */
+
+#include "test_pipeline_common.h"
+
+/* See http://dpdk.org/doc/guides/tools/testeventdev.html for test details */
+
+static __rte_always_inline int
+pipeline_queue_nb_event_queues(struct evt_options *opt)
+{
+	uint16_t eth_count = rte_eth_dev_count();
+
+	return (eth_count * opt->nb_stages) + eth_count;
+}
+
+static int
+worker_wrapper(void *arg)
+{
+	RTE_SET_USED(arg);
+	rte_panic("invalid worker\n");
+}
+
+static int
+pipeline_queue_launch_lcores(struct evt_test *test, struct evt_options *opt)
+{
+	return pipeline_launch_lcores(test, opt, worker_wrapper);
+}
+
+static int
+pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
+{
+	int ret;
+	int nb_ports;
+	int nb_queues;
+	int nb_stages = opt->nb_stages;
+	uint8_t queue;
+	struct rte_event_dev_info info;
+
+	nb_ports = evt_nr_active_lcores(opt->wlcores);
+	nb_queues = rte_eth_dev_count() * (nb_stages);
+	nb_queues += rte_eth_dev_count();
+
+	rte_event_dev_info_get(opt->dev_id, &info);
+
+	const struct rte_event_dev_config config = {
+			.nb_event_queues = nb_queues,
+			.nb_event_ports = nb_ports,
+			.nb_events_limit  = info.max_num_events,
+			.nb_event_queue_flows = opt->nb_flows,
+			.nb_event_port_dequeue_depth =
+				info.max_event_port_dequeue_depth,
+			.nb_event_port_enqueue_depth =
+				info.max_event_port_enqueue_depth,
+	};
+	ret = rte_event_dev_configure(opt->dev_id, &config);
+	if (ret) {
+		evt_err("failed to configure eventdev %d", opt->dev_id);
+		return ret;
+	}
+
+	struct rte_event_queue_conf q_conf = {
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = opt->nb_flows,
+			.nb_atomic_order_sequences = opt->nb_flows,
+	};
+	/* queue configurations */
+	for (queue = 0; queue < nb_queues; queue++) {
+		uint8_t slot;
+
+		slot = queue % (nb_stages + 1);
+		q_conf.schedule_type = slot == nb_stages ?
+			RTE_SCHED_TYPE_ATOMIC :
+			opt->sched_type_list[slot];
+
+		ret = rte_event_queue_setup(opt->dev_id, queue, &q_conf);
+		if (ret) {
+			evt_err("failed to setup queue=%d", queue);
+			return ret;
+		}
+	}
+
+	/* port configuration */
+	const struct rte_event_port_conf p_conf = {
+			.dequeue_depth = opt->wkr_deq_dep,
+			.enqueue_depth = info.max_event_port_dequeue_depth,
+			.new_event_threshold = info.max_num_events,
+	};
+
+	ret = pipeline_event_port_setup(test, opt, nb_queues, p_conf);
+	if (ret)
+		return ret;
+
+	ret = pipeline_event_rx_adapter_setup(opt, nb_stages + 1,
+			p_conf);
+	if (ret)
+		return ret;
+
+	if (!evt_has_distributed_sched(opt->dev_id)) {
+		uint32_t service_id;
+		rte_event_dev_service_id_get(opt->dev_id, &service_id);
+		ret = evt_service_setup(service_id);
+		if (ret) {
+			evt_err("No service lcore found to run event dev.");
+			return ret;
+		}
+	}
+
+	ret = rte_event_dev_start(opt->dev_id);
+	if (ret) {
+		evt_err("failed to start eventdev %d", opt->dev_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void
+pipeline_queue_opt_dump(struct evt_options *opt)
+{
+	pipeline_opt_dump(opt, pipeline_queue_nb_event_queues(opt));
+}
+
+static int
+pipeline_queue_opt_check(struct evt_options *opt)
+{
+	return pipeline_opt_check(opt, pipeline_queue_nb_event_queues(opt));
+}
+
+static bool
+pipeline_queue_capability_check(struct evt_options *opt)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(opt->dev_id, &dev_info);
+	if (dev_info.max_event_queues < pipeline_queue_nb_event_queues(opt) ||
+			dev_info.max_event_ports <
+			evt_nr_active_lcores(opt->wlcores)) {
+		evt_err("not enough eventdev queues=%d/%d or ports=%d/%d",
+			pipeline_queue_nb_event_queues(opt),
+			dev_info.max_event_queues,
+			evt_nr_active_lcores(opt->wlcores),
+			dev_info.max_event_ports);
+		return false;
+	}
+
+	return true;
+}
+
+static const struct evt_test_ops pipeline_queue =  {
+	.cap_check          = pipeline_queue_capability_check,
+	.opt_check          = pipeline_queue_opt_check,
+	.opt_dump           = pipeline_queue_opt_dump,
+	.test_setup         = pipeline_test_setup,
+	.mempool_setup      = pipeline_mempool_setup,
+	.ethdev_setup	    = pipeline_ethdev_setup,
+	.eventdev_setup     = pipeline_queue_eventdev_setup,
+	.launch_lcores      = pipeline_queue_launch_lcores,
+	.eventdev_destroy   = pipeline_eventdev_destroy,
+	.mempool_destroy    = pipeline_mempool_destroy,
+	.ethdev_destroy	    = pipeline_ethdev_destroy,
+	.test_result        = pipeline_test_result,
+	.test_destroy       = pipeline_test_destroy,
+};
+
+EVT_TEST_REGISTER(pipeline_queue);