[dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven programming model
Nipun Gupta
nipun.gupta at nxp.com
Thu Feb 2 12:18:52 CET 2017
Hi,
I had a few queries/comments regarding the eventdev patches.
Please see inline.
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> Sent: Wednesday, December 21, 2016 14:55
> To: dev at dpdk.org
> Cc: thomas.monjalon at 6wind.com; bruce.richardson at intel.com; Hemant
> Agrawal <hemant.agrawal at nxp.com>; gage.eads at intel.com;
> harry.van.haaren at intel.com; Jerin Jacob <jerin.jacob at caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> programming model
>
> In a polling model, lcores poll ethdev ports and their associated
> rx queues directly to look for packets. In an event driven model,
> by contrast, lcores call a scheduler that selects packets for
> them based on programmer-specified criteria. The eventdev library
> adds support for the event driven programming model, which offers
> applications automatic multicore scaling, dynamic load balancing,
> pipelining, packet ingress order maintenance and
> synchronization services to simplify application packet processing.
>
> By introducing an event driven programming model, DPDK can support
> both polling and event driven programming models for packet processing,
> and applications are free to choose whichever model
> (or combination of the two) best suits their needs.
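Just to check my understanding of the model described above: the intended worker loop would look roughly like the sketch below? This is only illustrative, assuming the rte_event_dequeue_burst() API declared later in this patch; device/port/queue setup is omitted and process_event() is a placeholder for the application handler.

```c
#include <rte_eventdev.h>

static void process_event(struct rte_event *ev); /* placeholder handler */

static int
worker(void *arg)
{
	const uint8_t dev_id = 0;
	const uint8_t port_id = *(const uint8_t *)arg;
	struct rte_event ev[16];

	for (;;) {
		/* The scheduler, not the lcore, decides which events this
		 * worker receives (load balancing, ordering, atomicity). */
		uint16_t nb = rte_event_dequeue_burst(dev_id, port_id,
						ev, 16, 0 /* no-wait */);
		for (uint16_t i = 0; i < nb; i++)
			process_event(&ev[i]);
	}
	return 0;
}
```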
>
> This patch adds the eventdev specification header file.
>
> Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> Acked-by: Bruce Richardson <bruce.richardson at intel.com>
> ---
> MAINTAINERS | 3 +
> doc/api/doxy-api-index.md | 1 +
> doc/api/doxy-api.conf | 1 +
> lib/librte_eventdev/rte_eventdev.h | 1275 ++++++++++++++++++++++++++++++++++++
> 4 files changed, 1280 insertions(+)
> create mode 100644 lib/librte_eventdev/rte_eventdev.h
<snip>
> +
> +/**
> + * Event device information
> + */
> +struct rte_event_dev_info {
> + const char *driver_name; /**< Event driver name */
> + struct rte_pci_device *pci_dev; /**< PCI information */
With 'rte_device' in place (rte_dev.h), should we not have 'rte_device' instead of 'rte_pci_device' here?
> + uint32_t min_dequeue_timeout_ns;
> + /**< Minimum supported global dequeue timeout (ns) by this device */
> + uint32_t max_dequeue_timeout_ns;
> + /**< Maximum supported global dequeue timeout (ns) by this device */
> + uint32_t dequeue_timeout_ns;
> + /**< Configured global dequeue timeout (ns) for this device */
> + uint8_t max_event_queues;
> + /**< Maximum number of event queues supported by this device */
> + uint32_t max_event_queue_flows;
> + /**< Maximum number of flows supported in an event queue by this device */
> + uint8_t max_event_queue_priority_levels;
> + /**< Maximum number of event queue priority levels supported by this
> + * device. Valid when the device has the RTE_EVENT_DEV_CAP_QUEUE_QOS
> + * capability.
> + */
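For what it's worth, I assume the intended usage is to query these limits before calling rte_event_dev_configure(), something like the sketch below (assuming an rte_event_dev_info_get() accessor as declared elsewhere in this patch; app_queues_wanted is a hypothetical application value):

```c
#include <stdio.h>
#include <rte_common.h>
#include <rte_eventdev.h>

static uint8_t
clamp_queue_count(uint8_t dev_id, uint8_t app_queues_wanted)
{
	struct rte_event_dev_info info;

	/* Query device capabilities before configuration. */
	if (rte_event_dev_info_get(dev_id, &info) != 0)
		return 0;

	printf("%s: up to %u queues, %u flows per queue\n",
	       info.driver_name, info.max_event_queues,
	       info.max_event_queue_flows);

	/* Clamp the application's request to what the device supports. */
	return RTE_MIN(app_queues_wanted, info.max_event_queues);
}
```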
<snip>
> +/**
> + * Dequeue a burst of events objects or an event object from the event port
> + * designated by its *event_port_id*, on an event device designated
> + * by its *dev_id*.
> + *
> + * rte_event_dequeue_burst() does not dictate the specifics of the scheduling
> + * algorithm, as each eventdev driver may have different criteria for
> + * scheduling an event. However, in general, from an application perspective
> + * the scheduler may use the following scheme to dispatch an event to the
> + * port:
> + *
> + * 1) Selection of the event queue, based on
> + * a) The list of event queues linked to the event port.
> + * b) If the device has the RTE_EVENT_DEV_CAP_QUEUE_QOS capability, then
> + * event queue selection from the list is based on event queue priority
> + * relative to other event queues, supplied as *priority* in
> + * rte_event_queue_setup()
> + * c) If the device has the RTE_EVENT_DEV_CAP_EVENT_QOS capability, then
> + * event queue selection from the list is based on event priority, supplied
> + * as *priority* in rte_event_enqueue_burst()
> + * 2) Selection of the event, based on
> + * a) The number of flows available in the selected event queue.
> + * b) The schedule type method associated with the event.
> + *
> + * The *nb_events* parameter is the maximum number of event objects to
> + * dequeue, which are returned in the *ev* array of *rte_event* structures.
> + *
> + * The rte_event_dequeue_burst() function returns the number of event
> + * objects it actually dequeued. A return value equal to *nb_events* means
> + * that all requested event objects have been dequeued.
> + *
> + * The number of events dequeued is the number of scheduler contexts held by
> + * this port. These contexts are automatically released in the next
> + * rte_event_dequeue_burst() invocation, or rte_event_enqueue_burst() can be
> + * invoked with the RTE_EVENT_OP_RELEASE operation to release the
> + * contexts early.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + * @param port_id
> + * The identifier of the event port.
> + * @param[out] ev
> + * Points to an array of *nb_events* objects of type *rte_event* structure
> + * for output to be populated with the dequeued event objects.
> + * @param nb_events
> + * The maximum number of event objects to dequeue, typically the value
> + * returned by rte_event_port_dequeue_depth() for this port.
> + *
> + * @param timeout_ticks
> + * - 0 no-wait, returns immediately if there is no event.
> + * - >0 wait for the event. If the device is configured with
> + * RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT, then this function will wait
> + * until an event is available or *timeout_ticks* time has elapsed.
Just for understanding - is the expectation that rte_event_dequeue_burst() will wait until the
timeout unless the requested number of events (nb_events) has been received on the event port?
> + * If the device is not configured with
> + * RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT, then this function will wait
> + * until an event is available or the *dequeue_timeout_ns* ns which was
> + * previously supplied to rte_event_dev_configure() has elapsed.
> + *
> + * @return
> + * The number of event objects actually dequeued from the port. The return
> + * value can be less than the value of the *nb_events* parameter when the
> + * event port's queue is not full.
> + *
> + * @see rte_event_port_dequeue_depth()
> + */
> +uint16_t
> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
> +		struct rte_event ev[], uint16_t nb_events,
> +		uint64_t timeout_ticks);
> +
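Also, to confirm the context-release semantics above, would the following be the expected way to release a scheduler context early for an event the application decides to drop? A sketch only, assuming the RTE_EVENT_OP_RELEASE semantics described in this patch; drop_event() and handle_event() are hypothetical application functions.

```c
#include <rte_eventdev.h>

static int drop_event(const struct rte_event *ev);   /* placeholder filter */
static void handle_event(const struct rte_event *ev); /* placeholder handler */

static void
dequeue_and_process(uint8_t dev_id, uint8_t port_id, uint64_t timeout_ticks)
{
	struct rte_event ev[32];
	uint16_t nb = rte_event_dequeue_burst(dev_id, port_id, ev,
					32, timeout_ticks);

	for (uint16_t i = 0; i < nb; i++) {
		if (drop_event(&ev[i])) {
			/* Release the scheduler context held for this event
			 * now, rather than waiting for the next
			 * rte_event_dequeue_burst() to release it. */
			ev[i].op = RTE_EVENT_OP_RELEASE;
			rte_event_enqueue_burst(dev_id, port_id, &ev[i], 1);
		} else {
			handle_event(&ev[i]);
		}
	}
}
```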
<snip>
Regards,
Nipun