[dpdk-dev] [PATCH 7/7] examples/eventdev_pipeline: adding example

Jerin Jacob jerin.jacob at caviumnetworks.com
Wed Nov 23 01:30:35 CET 2016


On Tue, Nov 22, 2016 at 02:04:27PM +0000, Richardson, Bruce wrote:
> 
> 
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> > Sent: Tuesday, November 22, 2016 6:02 AM
> > To: Van Haaren, Harry <harry.van.haaren at intel.com>
> > Cc: dev at dpdk.org; Eads, Gage <gage.eads at intel.com>; Richardson, Bruce
> > <bruce.richardson at intel.com>
> > Subject: Re: [dpdk-dev] [PATCH 7/7] examples/eventdev_pipeline: adding
> > example
> > 
> > On Wed, Nov 16, 2016 at 06:00:07PM +0000, Harry van Haaren wrote:
> > > This patch adds a sample app to the examples/ directory, which can be
> > > used as a reference application and for general testing. The
> > > application requires two ethdev ports and expects traffic to be
> > > flowing. The application must be run with the --vdev flags as follows
> > > to indicate to EAL that a virtual eventdev device called "evdev_sw0" is
> > > available to be used:
> > >
> > >     ./build/eventdev_pipeline --vdev evdev_sw0
> > >
> > > The general flow of the traffic is as follows:
> > >
> > >     Rx core -> Atomic Queue => 4 worker cores => TX core
> > >
> > > A scheduler core is required to do the packet scheduling, making this
> > > configuration require 7 cores (Rx, Tx, Scheduler, and 4 workers).
> > > Finally a master core brings the core count to 8 for this
> > > configuration. The
> > 
> > Thanks for the example application. I will try to share my views on the
> > ethdev integration and usability perspective. Hope we can converge.
> 
> Hi Jerin, 
> 
> thanks for the feedback. We'll take it on board for a subsequent version
> we produce. Additional comments and queries on your feedback inline below.

Thanks Bruce.

> 
> /Bruce
> 
> > 
> > Some of the high level details first before getting into exact details.
> > 
> > 1) From the HW and ethdev integration perspective, the integrated NIC
> > controllers do not need producer core(s) to push the events/packets to the
> > event queue. So, I was thinking of using the 6WIND rte_flow spec to create
> > the "ethdev port to event queue wiring" connection by extending the output
> > ACTION definition, which would specify the event queue that packets need to
> > be enqueued to for the given ethdev port (something you are doing in the
> > application).
> > 
> > I guess the producer part of this example can be created as common code,
> > somewhere in rte_flow/ethdev, for reuse. We would also need this scheme
> > when we deal with the external NICs + HW event manager use case.
> > 
> Yes. This is something to consider.
> 
> For the pure-software model, we also might want to look at the opposite
> approach, where we register an ethdev with the scheduler for it to "pull"
> new packets from. This would allow it to bypass the port logic for the new
> packets. 

Not sure I understand this completely. How is it different from integrating
with the rte_flow specification?

> 
> An alternative for this is to extend the schedule API to allow a burst of
> packets to be passed in to be scheduled immediately as "NEW" packets. The end
> results should be the same, saving cycles by bypassing unneeded processing
> for the new packets.
> 
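For reference, a rough sketch of what injecting a received burst as NEW events
could look like. This is a sketch only, using the draft names from this thread
(e.g. ev.operation); dev_id, rx_port, ev_port, qid and BURST_SIZE are
placeholders:

#include <rte_ethdev.h>
#include <rte_eventdev.h>

#define BURST_SIZE 32

/* Sketch only: pull a burst from ethdev and inject it straight into the
 * scheduler as NEW events, so the usual FORWARD-path processing is skipped.
 * Field names follow the draft spec discussed in this thread (ev.operation). */
static inline uint16_t
inject_new_events(uint8_t dev_id, uint8_t rx_port, uint8_t ev_port, uint8_t qid)
{
	struct rte_mbuf *mbufs[BURST_SIZE];
	struct rte_event evs[BURST_SIZE];
	uint16_t i;
	const uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, mbufs, BURST_SIZE);

	for (i = 0; i < nb_rx; i++) {
		evs[i].operation = RTE_EVENT_OP_NEW;	/* new work, no prior context */
		evs[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
		evs[i].queue_id = qid;
		evs[i].priority = 0;
		evs[i].event_type = RTE_EVENT_TYPE_ETHDEV;
		evs[i].flow_id = mbufs[i]->hash.rss;	/* e.g. flow from the RSS hash */
		evs[i].mbuf = mbufs[i];
	}
	return rte_event_enqueue_burst(dev_id, ev_port, evs, nb_rx);
}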
> > The complete event driven model can be verified and exercised without
> > integrating with the ethdev subsystem. So I think maybe we need to focus
> > on functional applications without ethdev to verify the eventdev features
> > (automatic multicore scaling, dynamic load balancing, pipelining,
> > packet ingress order maintenance and synchronization services) and then
> > integrate with ethdev.
> 
> Yes, comprehensive unit tests will be needed too. But I also think an
> example app that pulls packets from an external NIC is needed to get a feel
> for the performance of the scheduler with real traffic.

I agree, we need to have an example to showcase this with real traffic.

Please check on the ethdev integration aspects. Cavium has both server
platforms (which will use the SW event PMD) and NPU based platforms (which
will use the HW event PMD). So we would like to have a common approach that
allows both models to be integrated without changing the application.

I was thinking more of a "service core" and "rte_flow" based
integration methodology to make that happen.
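Roughly, that wiring could look something like the sketch below. This is
purely hypothetical: there is no event queue action in the rte_flow proposal,
so RTE_FLOW_ACTION_TYPE_EVENT_QUEUE and struct rte_flow_action_event_queue are
invented names used only to illustrate extending the output ACTION definition:

#include <rte_flow.h>
#include <rte_eventdev.h>

/* Hypothetical action configuration: "deliver matching packets of this
 * ethdev port to the given eventdev queue". Not part of any spec today. */
struct rte_flow_action_event_queue {
	uint8_t eventdev_id;
	uint8_t event_queue_id;
	uint8_t sched_type;		/* e.g. RTE_SCHED_TYPE_ATOMIC */
};

static struct rte_flow *
wire_port_to_event_queue(uint8_t eth_port, uint8_t evdev_id, uint8_t ev_qid)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* match all Ethernet traffic */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_event_queue conf = {
		.eventdev_id = evdev_id,
		.event_queue_id = ev_qid,
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
	};
	struct rte_flow_action actions[] = {
		/* hypothetical action type, for illustration only */
		{ .type = RTE_FLOW_ACTION_TYPE_EVENT_QUEUE, .conf = &conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(eth_port, &attr, pattern, actions, &err);
}

With something along these lines, the HW event manager case needs no producer
core at all, and a SW PMD could emulate the same action with a service core.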

> 
> > 
> > > +	const unsigned cores_needed = num_workers +
> > > +			/*main*/1 +
> > > +			/*sched*/1 +
> > > +			/*TX*/1 +
> > > +			/*RX*/1;
> > > +
> > 
> > 2) One of the prime aims of the event driven model is to remove the fixed
> > function core mappings and enable automatic multicore scaling, dynamic
> > load balancing, etc. I will try to use an example in the review section
> > below to show a method for removing the "consumer core" in this case.
> 
> Yes, I agree, but unfortunately, for some tasks, distributing those tasks
> across multiple cores can hurt overall performance due to resource contention.

Maybe only in the SW implementation.

>  
> > 
> > > application can be configured for various numbers of flows and worker
> > > cores. Run the application with -h for details.
> > >
> > 
> > Another way to deal with this without an additional consumer core (which
> > creates issues in scaling and load balancing) is:
> > 
> > in worker:
> > while (1) {
> > 
> > 	ev = dequeue(port);
> > 
> > 	// stage 1 app processing
> > 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> > 		// identify the Ethernet port and tx queue the packet needs to go to
> > 		// create the flow based on that
> > 		ev.flow_id = flow(port_id, tx_queue_id);
> > 		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> > 		ev.operation = RTE_EVENT_OP_FORWARD;
> > 		ev.event_type = RTE_EVENT_TYPE_CORE;
> > 	// stage 2 app processing
> > 	} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
> > 		port_id = function_of(ev.flow_id);	// look at stage 1 processing
> > 		tx_queue_id = function_of(ev.flow_id);	// look at stage 1 processing
> > 		// remaining ethdev based tx is the same as yours
> > 	}
> > 	enqueue(ev);
> > }
> >
> Yes, but you still need some core to do the work of pushing the packets into
> the scheduler from the NIC, if you don't have a hardware path from NIC to 
> HW scheduler. [Features like RSS can obviously help here with distributing that
> work if needed]

Yes, it makes sense to have the producer portion of the code as common code.
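As a sketch of how that common producer could be packaged behind a "service
core" so the application itself stays identical in both models (all names
below are placeholders; inject_new_events() refers to the earlier sketch, and
with an integrated NIC + HW event manager this loop is simply never launched):

#include <stdbool.h>
#include <stdint.h>

/* Placeholder configuration for the sketch. */
struct producer_conf {
	volatile bool done;
	uint8_t evdev_id;
	uint8_t eth_port;
	uint8_t ev_port;
	uint8_t ev_qid;
};

/* Producer loop for a "service core" in the SW model: poll ethdev and feed
 * the scheduler. With a HW event manager the rte_flow wiring replaces this,
 * so the rest of the application does not change between the two models. */
static int
producer_service_loop(void *arg)
{
	struct producer_conf *conf = arg;

	while (!conf->done)
		inject_new_events(conf->evdev_id, conf->eth_port,
				  conf->ev_port, conf->ev_qid);
	return 0;
}

The SW case would launch this with rte_eal_remote_launch(producer_service_loop,
&conf, lcore_id); the HW case would skip that call.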

> 
> In the case you do have a HW path - which I assume is the Cavium case - I assume
> that the EVENT_TYPE_ETHDEV branch above also needs to take care of descriptor-to-mbuf
> processing, as is normally done by the PMD?
>  
> > 
> > 
> > > +			ev->priority = 0;
> > > +			ev->sched_type = RTE_SCHED_TYPE_ATOMIC;
> > > +			ev->operation = RTE_EVENT_OP_FORWARD;
> > > +
> > > +			uint64_t now = rte_rdtsc();
> > > +			while(now + 750 > rte_rdtsc()) {}
> > 
> > Why the delay?
> 
> Simulate some work being done by the worker, which makes the app slightly more
> realistic and also helps the scheduler as there is not so much contention on the
> shared cache lines.

Maybe not for performance test-cases.
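One option is to make the simulated work a runtime knob so performance
test-cases can switch it off, along these lines (a sketch; --work-cycles is a
made-up option name and worker_cycles a placeholder variable):

#include <stdint.h>
#include <rte_cycles.h>

/* Sketch: replace the hard-coded 750-cycle busy-wait with a configurable
 * amount of simulated work. worker_cycles would be set from a hypothetical
 * --work-cycles option; the default of 0 skips the delay entirely. */
static uint64_t worker_cycles;

static inline void
simulate_work(void)
{
	if (worker_cycles == 0)
		return;

	const uint64_t start = rte_rdtsc();
	while (rte_rdtsc() - start < worker_cycles)
		;	/* busy-wait to emulate per-packet processing cost */
}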

> 

