[dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app

Jerin Jacob jerin.jacob at caviumnetworks.com
Wed May 17 20:03:16 CEST 2017


-----Original Message-----
> Date: Fri, 21 Apr 2017 10:51:37 +0100
> From: Harry van Haaren <harry.van.haaren at intel.com>
> To: dev at dpdk.org
> CC: jerin.jacob at caviumnetworks.com, Harry van Haaren
>  <harry.van.haaren at intel.com>, Gage Eads <gage.eads at intel.com>, Bruce
>  Richardson <bruce.richardson at intel.com>
> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> X-Mailer: git-send-email 2.7.4
> 
> This commit adds a sample app for the eventdev library.
> The app has been tested with DPDK 17.05-rc2, hence this
> release (or later) is recommended.
> 
> The sample app showcases a pipeline processing use-case,
> with event scheduling and processing defined per stage.
> The application receives traffic as normal, with each
> packet traversing the pipeline. Once the packet has
> been processed by each of the pipeline stages, it is
> transmitted again.
> 
> The app provides a framework to utilize cores for a single
> role or multiple roles. Examples of roles are the RX core,
> TX core, Scheduling core (in the case of the event/sw PMD),
> and worker cores.
> 
> Various flags are available to configure numbers of stages,
> cycles of work at each stage, type of scheduling, number of
> worker cores, queue depths etc. For a full explaination,
> please refer to the documentation.
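
Just to confirm my reading of the core-role model described above, a rough
sketch of how I picture the per-lcore role assignment. The rx_core/tx_core/
sched_core names match the hunk further down; worker_core[] and lcore_main()
are placeholder names of mine, not the patch's code:

/* Sketch only: per-lcore role flags as I understand them. A single lcore
 * may have several roles set, e.g. RX + scheduling on one core.
 * worker_core[] and lcore_main() are placeholders, not the patch's code.
 */
#include <stdbool.h>
#include <stdio.h>

#include <rte_lcore.h>

static bool rx_core[RTE_MAX_LCORE];     /* polls ethdev RX, injects events */
static bool tx_core[RTE_MAX_LCORE];     /* drains final events, transmits  */
static bool sched_core[RTE_MAX_LCORE];  /* calls rte_event_schedule() (event/sw) */
static bool worker_core[RTE_MAX_LCORE]; /* runs the pipeline stage work    */

static int
lcore_main(void *arg)
{
	const unsigned int lcore_id = rte_lcore_id();

	(void)arg;
	printf("lcore %u: rx=%d tx=%d sched=%d worker=%d\n", lcore_id,
	       rx_core[lcore_id], tx_core[lcore_id],
	       sched_core[lcore_id], worker_core[lcore_id]);
	return 0;
}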
> 
> Signed-off-by: Gage Eads <gage.eads at intel.com>
> Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
> Signed-off-by: Harry van Haaren <harry.van.haaren at intel.com>
> ---
> +
> +static inline void
> +schedule_devices(uint8_t dev_id, unsigned lcore_id)
> +{
> +	if (rx_core[lcore_id] && (rx_single ||
> +	    rte_atomic32_cmpset(&rx_lock, 0, 1))) {
> +		producer();
> +		rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
> +	}
> +
> +	if (sched_core[lcore_id] && (sched_single ||
> +	    rte_atomic32_cmpset(&sched_lock, 0, 1))) {
> +		rte_event_schedule(dev_id);

One question here:

Is rte_event_schedule()'s SW PMD implementation capable of running
concurrently on multiple cores?

Context:
I am currently writing a testpmd-like test framework to exercise
different use cases along with performance test cases such as throughput
and latency, and to make sure it works on both the SW and HW drivers.

I see the following segfault when rte_event_schedule() is invoked on
multiple cores. Is it expected?

#0  0x000000000043e945 in __pull_port_lb (allow_reorder=0, port_id=2,
sw=0x7ff93f3cb540) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406:11647:beg:0x43e945
[Current thread is 1 (Thread 0x7ff9fbd34700 (LWP 796))]
(gdb) bt
#0  0x000000000043e945 in __pull_port_lb (allow_reorder=0, port_id=2,
sw=0x7ff93f3cb540) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406
#1  sw_schedule_pull_port_no_reorder (port_id=2, sw=0x7ff93f3cb540) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:495
#2  sw_event_schedule (dev=<optimized out>) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:566
#3  0x000000000040b4af in rte_event_schedule (dev_id=<optimized out>) at
/export/dpdk-thunderx/build/include/rte_eventdev.h:1092
#4  worker (arg=<optimized out>) at
/export/dpdk-thunderx/app/test-eventdev/test_queue_order.c:200
#5  0x000000000042d14b in eal_thread_loop (arg=<optimized out>) at
/export/dpdk-thunderx/lib/librte_eal/linuxapp/eal/eal_thread.c:184
#6  0x00007ff9fd8e32e7 in start_thread () from /usr/lib/libpthread.so.0
#7  0x00007ff9fd62454f in clone () from /usr/lib/libc.so.6
(gdb) list
401			 */
402			uint32_t iq_num = PRIO_TO_IQ(qe->priority);
403			struct sw_qid *qid = &sw->qids[qe->queue_id];
404
405			if ((flags & QE_FLAG_VALID) &&
406			    iq_ring_free_count(qid->iq[iq_num]) == 0)
407				break;
408
409			/* now process based on flags. Note that for directed
410			 * queues, the enqueue_flush masks off all but the
(gdb)
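
If concurrent calls are not supported, I assume only one core is expected to
run the scheduler at a time, guarded the same way this patch does it. For my
test framework I would then wrap the call along these lines; a minimal sketch
mirroring the cmpset guard above (sched_lock and schedule_once() are my own
names, not from the patch):

/* Minimal sketch: serialize rte_event_schedule() so only the core that
 * wins the cmpset runs the scheduler; the others skip and try again on
 * their next loop iteration. Mirrors the guard in schedule_devices().
 */
#include <stdint.h>

#include <rte_atomic.h>
#include <rte_eventdev.h>

static uint32_t sched_lock; /* 0 = free, 1 = a core is scheduling */

static inline void
schedule_once(uint8_t dev_id)
{
	if (rte_atomic32_cmpset(&sched_lock, 0, 1)) {
		rte_event_schedule(dev_id);
		rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
	}
}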

> +		if (dump_dev_signal) {
> +			rte_event_dev_dump(0, stdout);
> +			dump_dev_signal = 0;
> +		}
> +		rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
> +	}
> +
> +	if (tx_core[lcore_id] && (tx_single ||
> +	    rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> +		consumer();
> +		rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> +	}
> +}
> +
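
And to double-check how this is meant to be driven: my understanding is that
every lcore calls schedule_devices() from its main loop, and the role flags
plus the cmpset guards decide what that core actually does on each pass.
A rough sketch under that assumption (worker_loop() and done are placeholder
names of mine, not the patch's code):

/* Sketch: each lcore runs this loop; schedule_devices() from the hunk
 * above picks up the RX/sched/TX duties for cores that have those roles,
 * and the per-stage work happens in the elided part.
 */
#include <stdint.h>

#include <rte_lcore.h>

static volatile int done;

static int
worker_loop(void *arg)
{
	const uint8_t dev_id = 0;
	const unsigned int lcore_id = rte_lcore_id();

	(void)arg;
	while (!done) {
		schedule_devices(dev_id, lcore_id);
		/* ... dequeue this stage's events, do the work, enqueue ... */
	}
	return 0;
}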

