[dpdk-dev] [RFC v2] lib: add compressdev API
Verma, Shally
Shally.Verma at cavium.com
Mon Nov 27 11:44:29 CET 2017
Hi Fiona
Thanks for the spec. It looks good enough to move ahead with an API proof-of-concept; however, we still need to address a few key open points from the RFC doc, plus other minor comments (please see inline):
A. new enum for operation types:
1. Add rte_comp_op_type
........................................
enum rte_comp_op_type {
	RTE_COMP_OP_STATELESS,
	RTE_COMP_OP_STATEFUL
};
B. Change rte_comp_op_pool_create() to take an op_type as input, so that the resources required for that op_type can be reserved (please see inline for more on this).
C. Keep enqueue_burst() for stateless operations, and add a new dedicated API to handle stateful compression:
static inline uint16_t rte_compdev_enqueue_stream();
We will add hash and other support as incremental patches on top of this once we align on these few open points.
For rest, please see inline.
> -----Original Message-----
> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
> Sent: 24 November 2017 22:26
> To: dev at dpdk.org; Verma, Shally <Shally.Verma at cavium.com>
> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
> Prasad <NarayanaPrasad.Athreya at cavium.com>;
> pablo.de.lara.guarch at intel.com; fiona.trahe at intel.com
> Subject: [RFC v2] lib: add compressdev API
>
// snip //
> +/**
> + * Compression Operation.
> + *
> + * This structure contains data relating to performing a compression
> + * operation on the referenced mbuf data buffers.
> + *
> + * All compression operations are Out-of-place (OOP) operations,
> + * as the size of the output data is different to the size of the input data.
> + *
> + * Comp operations are enqueued and dequeued in comp PMDs using the
> + * rte_compressdev_enqueue_burst() /
> rte_compressdev_dequeue_burst() APIs
> + */
> +struct rte_comp_op {
> +
[Shally] We need a void *user pointer here to enable re-ordering at the app if the PMD completes enqueued requests out of order (a requirement from the RFC doc):
void *user;
/**< User pointer. Set by the app, opaque to the PMD, passed back in dequeue_burst.
 * The app can point it at its own data so that operations can be reordered
 * if requests are processed out of order.
 * Particularly useful for stateless operations.
 */
//snip//
> +/**
> + * Creates an operation pool
> + *
> + * @param name pool name
> + * @param nb_elts number of elements in pool
> + * @param cache_size Number of elements to cache on lcore, see
> + * *rte_mempool_create* for further details
> about
> + * cache size
> + * @param priv_size Size of private data to allocate with each
> + * operation
> + * @param socket_id Socket to allocate memory on
> + *
> + * @return
> + * - On success pointer to mempool
> + * - On failure NULL
> + */
> +extern struct rte_mempool *
> +rte_comp_op_pool_create(const char *name,
> + unsigned nb_elts, unsigned cache_size, uint16_t priv_size,
> + int socket_id);
> +
[Shally] This can be modified to take the op_type as input: stateful/stateless. In that case, the pool would be allocated large enough to provide the additional memory (or other resources) needed by the PMD for that operation type, such as state/history buffers etc.
So the proposed API change is:
rte_comp_op_pool_create(const char *name, unsigned nb_elts,
			unsigned cache_size, uint16_t priv_size,
			enum rte_comp_op_type op_type,
			int socket_id);
Since the size of such resources can differ per implementation, a PMD can expose a new API in dev_ops that returns the resource size requirement for a given op_type.
//snip//
> +
> +/**
> + * Allocate an operation from a mempool with default parameters set
> + *
> + * @param mempool operation mempool
> + *
> + * @returns
> + * - On success returns a valid rte_comp_op structure
> + * - On failure returns NULL
> + */
> +static inline struct rte_comp_op *
> +rte_comp_op_alloc(struct rte_mempool *mempool)
> +{
[Shally] rte_comp_op_type should be one of the input args here, so that the required resources can be reserved according to the op_type.
> + struct rte_comp_op *op = NULL;
> + int retval;
> +
> + retval = __rte_comp_op_raw_bulk_alloc(mempool, &op, 1);
> + if (unlikely(retval != 1))
> + return NULL;
> +
> + __rte_comp_op_reset(op);
> +
> + return op;
> +}
> +
> +
> +/**
> + * Bulk allocate operations from a mempool with default parameters set
> + *
> + * @param mempool comp operation mempool
> + * @param ops Array to place allocated operations
> + * @param nb_ops Number of operations to allocate
> + *
> + * @returns
> + * - nb_ops if the number of operations requested were allocated.
> + * - 0 if the requested number of ops are not available.
> + * None are allocated in this case.
> + */
> +
> +static inline unsigned
> +rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
> + struct rte_comp_op **ops, uint16_t nb_ops)
> +{
[Shally] Similarly, this should be extended to take rte_comp_op_type as one of the args.
> + int i;
> +
> + if (unlikely(__rte_comp_op_raw_bulk_alloc(mempool, ops, nb_ops)
> + != nb_ops))
> + return 0;
> +
> + for (i = 0; i < nb_ops; i++)
> + __rte_comp_op_reset(ops[i]);
> +
> + return nb_ops;
> +}
//snip//
> +
> +/**
> + * Enqueue a burst of operations for processing on a compression device.
> + *
> + * The rte_compressdev_enqueue_burst() function is invoked to place
> + * comp operations on the queue *qp_id* of the device designated by
> + * its *dev_id*.
> + *
> + * The *nb_ops* parameter is the number of operations to process which
> are
> + * supplied in the *ops* array of *rte_comp_op* structures.
> + *
> + * The rte_compressdev_enqueue_burst() function returns the number of
> + * operations it actually enqueued for processing. A return value equal to
> + * *nb_ops* means that all packets have been enqueued.
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair on which
> operations
> + * are to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_compressdev_configure*.
> + * @param ops The address of an array of *nb_ops* pointers
> + * to *rte_comp_op* structures which contain
> + * the operations to be processed.
> + * @param nb_ops The number of operations to process.
> + *
> + * @return
> + * The number of operations actually enqueued on the device. The return
> + * value can be less than the value of the *nb_ops* parameter when the
> + * comp devices queue is full or if invalid parameters are specified in
> + * a *rte_comp_op*.
[Shally] To detect invalid params, should the app check the status of the (returned_num+1)-th ops entry, i.e. ops[returned_num], which the PMD should have set to INVALID_ARGS?
> + */
> +static inline uint16_t
> +rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> + struct rte_comp_op **ops, uint16_t nb_ops)
> +{
> + struct rte_compressdev *dev = &rte_compressdevs[dev_id];
> +
> + return (*dev->enqueue_burst)(
> + dev->data->queue_pairs[qp_id], ops, nb_ops);
> +}
> +
> +
> +/** compressdev session */
> +struct rte_comp_session {
> + __extension__ void *sess_private_data[0];
> + /**< Private session material */
> +};
> +
> +
> +/**
> + * Create symmetric comp session header (generic with no private data)
> + *
[Shally] I believe "symmetric" is a term carried over from the crypto API; it doesn't apply to compression.
//snip//
Thanks
Shally