[dpdk-dev] [PATCH v3 08/12] app/eventdev: add pipeline queue test

Pavan Nikhilesh pbhagavatula at caviumnetworks.com
Wed Jan 10 21:01:22 CET 2018


Hi Harry,

Thanks for the review.

On Wed, Jan 10, 2018 at 04:38:35PM +0000, Van Haaren, Harry wrote:
> > From: Pavan Nikhilesh [mailto:pbhagavatula at caviumnetworks.com]
> > Sent: Wednesday, January 10, 2018 2:52 PM
> > To: jerin.jacob at caviumnetworks.com; santosh.shukla at caviumnetworks.com; Van
> > Haaren, Harry <harry.van.haaren at intel.com>; Eads, Gage
> > <gage.eads at intel.com>; hemant.agrawal at nxp.com; nipun.gupta at nxp.com; Ma,
> > Liang J <liang.j.ma at intel.com>
> > Cc: dev at dpdk.org; Pavan Nikhilesh <pbhagavatula at caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v3 08/12] app/eventdev: add pipeline queue test
> >
> > This is a pipeline queue test case that aims at testing the following:
> > 1. Measure the end-to-end performance of an event dev with a ethernet dev.
> > 2. Maintain packet ordering from Rx to Tx.
> >
> > The pipeline queue test configures the eventdev with Q queues and P ports,
> > where Q is (nb_ethdev * nb_stages) + nb_ethdev and P is nb_workers.
>
> Why (nb_ethdev * nb_stages) number of Queues?
>
> I had expected if the test is for eventdev with Q queues, P ports, that that Q number of stages is all that is required, (possibly with +1 for TX queue, iirc some HW doesn't require the TX Queue).
>
> Am I missing something here? I've left the code snippet I don't understand below.
>

The idea is to reduce the load on the ingress event queue (mapped to the
ethernet Rx queue) by splitting the traffic across a separate set of event
queues for each ethernet device.

For example, with nb_stages = 2 and nb_ethdev = 2:

	nb_queues = (2 * 2) + 2 = 6 (non atq)
	stride = 3 (nb_stages + 1, used for directing each ethernet dev's traffic
			into a specific event queue)

queue ids: 0,1,2,3,4,5

This allows us to direct the traffic from eth dev0 to event queue 0 and from
dev1 to queue 3, based on the stride (dev_id * stride). This in turn forms two
pipelines:
	ethdev0	0->1->2->tx
	ethdev1	3->4->5->tx

Without this, both ethdev0 and ethdev1 would have to inject into queue 0,
which leads to more congestion as the number of ethernet devices increases.

Hope this clears things up.

>
> <snip>
>
> > Note: The --prod_type_ethdev is mandatory for running the application.
>
> Mandatory arguments seem pointless to me, unless there are other valid options to choose from.
>

In future we might have --prod_type_event_timer_adptr and --prod_type_cryptodev :-)

Cheers,
Pavan
>
> > +
> > +static int
> > +pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options
> > *opt)
> > +{
> > +	int ret;
> > +	int nb_ports;
> > +	int nb_queues;
> > +	int nb_stages = opt->nb_stages;
> > +	uint8_t queue;
> > +	struct rte_event_dev_info info;
> > +
> > +	nb_ports = evt_nr_active_lcores(opt->wlcores);
> > +	nb_queues = rte_eth_dev_count() * (nb_stages);
>
>
> As per comment above, this is what I don't understand? Why more queues than stages?
