[dpdk-dev,RFC,2/7] mbuf: use helper to create the pool

Message ID 1474292567-21912-3-git-send-email-olivier.matz@6wind.com (mailing list archive)
State Rejected, archived
Delegated to: Thomas Monjalon

Commit Message

Olivier Matz Sept. 19, 2016, 1:42 p.m. UTC
  When possible, replace the uses of rte_mempool_create() with
the helper provided in librte_mbuf: rte_pktmbuf_pool_create().

This is the preferred way to create a mbuf pool.

By the way, also update the documentation.
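
For example, a minimal before/after sketch (with arbitrary sizes, not
taken from the diff below):

    /* before: generic mempool API with explicit constructors */
    mp = rte_mempool_create("pool", 8191,
            2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM,
            32, sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
            rte_socket_id(), 0);

    /* after: the helper takes the data room size directly */
    mp = rte_pktmbuf_pool_create("pool", 8191, 32, 0,
            2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id(), NULL);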

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_link_bonding_rssconf.c               | 11 ++++----
 doc/guides/prog_guide/mbuf_lib.rst                 |  2 +-
 doc/guides/sample_app_ug/ip_reassembly.rst         | 13 +++++----
 doc/guides/sample_app_ug/ipv4_multicast.rst        | 12 ++++----
 doc/guides/sample_app_ug/l2_forward_job_stats.rst  | 33 ++++++++--------------
 .../sample_app_ug/l2_forward_real_virtual.rst      | 26 +++++++----------
 doc/guides/sample_app_ug/ptpclient.rst             | 12 ++------
 doc/guides/sample_app_ug/quota_watermark.rst       | 26 ++++++-----------
 drivers/net/bonding/rte_eth_bond_8023ad.c          | 13 ++++-----
 examples/ip_pipeline/init.c                        | 19 ++++++-------
 examples/ip_reassembly/main.c                      | 16 +++++------
 examples/multi_process/l2fwd_fork/main.c           | 14 ++++-----
 examples/tep_termination/main.c                    | 17 ++++++-----
 lib/librte_mbuf/rte_mbuf.c                         |  7 +++--
 lib/librte_mbuf/rte_mbuf.h                         | 29 +++++++++++--------
 15 files changed, 111 insertions(+), 139 deletions(-)
  

Comments

Santosh Shukla Jan. 16, 2017, 3:30 p.m. UTC | #1
Hi Olivier,


On Mon, Sep 19, 2016 at 03:42:42PM +0200, Olivier Matz wrote:
> When possible, replace the uses of rte_mempool_create() with
> the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
> 
> This is the preferred way to create a mbuf pool.
> 
> By the way, also update the documentation.
>

I am working on an ext-mempool PMD driver for the cvm SoC,
so I am interested in this thread.

I am wondering why this thread was not followed up. Is it
because we don't want to deprecate rte_mempool_create()?
Or, if we do, which release are you targeting?

Besides that, some high-level comments:
- Your changeset is missing the mempool test applications, i.e.
  test_mempool.c/test_mempool_perf.c; do you plan to accommodate them?
- ext-mempool does not necessarily need MBUF_CACHE_SIZE. Let the HW manager
  hand buffers directly to the application rather than caching the same
  buffers per core; it will save some cycles. What do you think?
- I figured out that the ext-mempool API does not map well onto the cvm hw,
  for a few reasons:
  Let's say the application calls:
  rte_pktmbuf_pool_create()
   --> rte_mempool_create_empty()
   --> rte_mempool_ops_byname()
   --> rte_mempool_populate_default()
         --> rte_mempool_ops_alloc()
                  --> ext-mempool-specific-pool-create handler

In my case, the ext-mempool pool-create handler will look for the
hugepage-mapped mz->vaddr/paddr, in order to program the HW manager with the
start/end addresses of the pool; the current ext-mempool API doesn't support
such a case. Therefore I chose to add a new op, something like the one below,
which could address it; we'll soon post the patch.

/**
 * Set the memzone va/pa addr range in the external pool.
 */
typedef void (*rte_mempool_populate_mz_range_t)(const struct rte_memzone *mz);

/** Structure defining mempool operations structure */
struct rte_mempool_ops {
        char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
        rte_mempool_alloc_t alloc;       /**< Allocate private data. */
        rte_mempool_free_t free;         /**< Free the external pool. */
        rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
        rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
        rte_mempool_get_count get_count; /**< Get qty of available objs. */
        rte_mempool_populate_mz_range_t populate_mz_range;
        /**< Set the memzone va/pa range for the pool. */
} __rte_cache_aligned;
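
A rough sketch of the intended call site (hypothetical, names not final),
somewhere in the populate path before objects are added:

static void
mempool_notify_mz_range(struct rte_mempool *mp, const struct rte_memzone *mz)
{
        struct rte_mempool_ops *ops;

        /* let the pool driver learn the memzone address range, so it
         * can program the HW manager with the pool start/end */
        ops = rte_mempool_get_ops(mp->ops_index);
        if (ops->populate_mz_range != NULL)
                ops->populate_mz_range(mz);
}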

Let me know your opinion.

Thanks.
> [...]
  
Olivier Matz Jan. 31, 2017, 10:31 a.m. UTC | #2
Hi Santosh,

On Mon, 16 Jan 2017 21:00:37 +0530, Santosh Shukla
<santosh.shukla@caviumnetworks.com> wrote:
> Hi Olivier,
> 
> 
> On Mon, Sep 19, 2016 at 03:42:42PM +0200, Olivier Matz wrote:
> > When possible, replace the uses of rte_mempool_create() with
> > the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
> > 
> > This is the preferred way to create a mbuf pool.
> > 
> > By the way, also update the documentation.
> >  
> 
> I am working on an ext-mempool PMD driver for the cvm SoC,
> so I am interested in this thread.
> 
> I am wondering why this thread was not followed up. Is it
> because we don't want to deprecate rte_mempool_create()?
> Or, if we do, which release are you targeting?

It seems that the RFC patchset was not the proper way to fix the issue.
On the other hand, this particular patch should be integrated, as
highlighted by Hemant too. Thanks for the reminder.

> Besides that, some high-level comments:
> - Your changeset is missing the mempool test applications, i.e.
> test_mempool.c/test_mempool_perf.c; do you plan to accommodate them?

As answered in the other thread, I think there is nothing to change
in test_mempool*.c, since this patch is just about mbuf pools.


> - ext-mempool does not necessarily need MBUF_CACHE_SIZE. Let the HW
> manager hand buffers directly to the application rather than caching
> the same buffers per core; it will save some cycles. What do you think?

It's still possible to set the cache size to 0. In that case, it will
directly call rte_mempool_ops_dequeue_bulk(). But, given the cost of the
function call to ops->dequeue() and the few extra checks, it is probably
faster to use the cache, even with a fast hw allocation.
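
For instance, a sketch using the prototype from this patchset, creating
a pool that bypasses the per-lcore cache:

        /* cache_size = 0: every alloc/free goes through the pool ops */
        mp = rte_pktmbuf_pool_create("nocache_pool", 8191,
                0, /* cache size */
                0, /* priv size */
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);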


> - I figured out that the ext-mempool API does not map well onto the
> cvm hw, for a few reasons:
>   Let's say the application calls:
>   rte_pktmbuf_pool_create()
>    --> rte_mempool_create_empty()
>    --> rte_mempool_ops_byname()
>    --> rte_mempool_populate_default()
>          --> rte_mempool_ops_alloc()
>                   --> ext-mempool-specific-pool-create handler
> [...]

I'm answering in the other thread.

Thanks,
Olivier
  

Patch

diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 34f1c16..dd1bcc7 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -67,7 +67,7 @@ 
 #define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
 
 #define NUM_MBUFS 8191
-#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
 #define MBUF_CACHE_SIZE 250
 #define BURST_SIZE 32
 
@@ -536,13 +536,12 @@  test_setup(void)
 
 	if (test_params.mbuf_pool == NULL) {
 
-		test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
-				SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
-				sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
-				NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
+		test_params.mbuf_pool = rte_pktmbuf_pool_create(
+			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id(), NULL);
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
-				"rte_mempool_create failed\n");
+				"rte_pktmbuf_pool_create failed\n");
 	}
 
 	/* Create / initialize ring eth devs. */
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 8e61682..b366e04 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -103,7 +103,7 @@  Constructors
 Packet and control mbuf constructors are provided by the API.
 The rte_pktmbuf_init() and rte_ctrlmbuf_init() functions initialize some fields in the mbuf structure that
 are not modified by the user once created (mbuf type, origin pool, buffer start address, and so on).
-This function is given as a callback function to the rte_mempool_create() function at pool creation time.
+This function is given as a callback function to the rte_pktmbuf_pool_create() or the rte_mempool_create() function at pool creation time.
 
 Allocating and Freeing mbufs
 ----------------------------
diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
index 3c5cc70..4b6023a 100644
--- a/doc/guides/sample_app_ug/ip_reassembly.rst
+++ b/doc/guides/sample_app_ug/ip_reassembly.rst
@@ -223,11 +223,14 @@  each RX queue uses its own mempool.
 
     snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
 
-    if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL,
-        rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
-
-            RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
-            return -1;
+    rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
+    	0, /* cache size */
+    	0, /* priv size */
+    	MBUF_DATA_SIZE, socket, "ring_sp_sc");
+    if (rxq->pool == NULL) {
+    	RTE_LOG(ERR, IP_RSMBL,
+    		"rte_pktmbuf_pool_create(%s) failed", buf);
+    	return -1;
     }
 
 Packet Reassembly and Forwarding
diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst
index 72da8c4..099d61a 100644
--- a/doc/guides/sample_app_ug/ipv4_multicast.rst
+++ b/doc/guides/sample_app_ug/ipv4_multicast.rst
@@ -145,12 +145,12 @@  Memory pools for indirect buffers are initialized differently from the memory po
 
 .. code-block:: c
 
-    packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
-                                     rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
-
-    header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
-    clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
-    CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
+    packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
+    	0, PKT_MBUF_DATA_SIZE, rte_socket_id(), NULL);
+    header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
+    	0, HDR_MBUF_DATA_SIZE, rte_socket_id(), NULL);
+    clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
+    	0, 0, rte_socket_id(), NULL);
 
 The reason for this is because indirect buffers are not supposed to hold any packet data and
 therefore can be initialized with lower amount of reserved memory for each buffer.
diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
index 2444e36..a1b3f43 100644
--- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
+++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
@@ -193,36 +193,25 @@  and the application to store network packet data:
 .. code-block:: c
 
     /* create the mbuf pool */
-    l2fwd_pktmbuf_pool =
-        rte_mempool_create("mbuf_pool", NB_MBUF,
-                   MBUF_SIZE, 32,
-                   sizeof(struct rte_pktmbuf_pool_private),
-                   rte_pktmbuf_pool_init, NULL,
-                   rte_pktmbuf_init, NULL,
-                   rte_socket_id(), 0);
+    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+    	rte_socket_id(), NULL);
 
     if (l2fwd_pktmbuf_pool == NULL)
         rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 
 The rte_mempool is a generic structure used to handle pools of objects.
-In this case, it is necessary to create a pool that will be used by the driver,
-which expects to have some reserved space in the mempool structure,
-sizeof(struct rte_pktmbuf_pool_private) bytes.
-The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
-A per-lcore cache of 32 mbufs is kept.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
 The memory is allocated in rte_socket_id() socket,
 but it is possible to extend this code to allocate one mbuf pool per socket.
 
-Two callback pointers are also given to the rte_mempool_create() function:
-
-*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
-    to initialize the private data of the mempool, which is needed by the driver.
-    This function is provided by the mbuf API, but can be copied and extended by the developer.
-
-*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
-    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
-    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
-    a new function derived from rte_pktmbuf_init( ) can be created.
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
 
 Driver Initialization
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
index a1c10c0..2330148 100644
--- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
+++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
@@ -197,31 +197,25 @@  and the application to store network packet data:
 
     /* create the mbuf pool */
 
-    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
-        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0);
+    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+    	rte_socket_id(), NULL);
 
     if (l2fwd_pktmbuf_pool == NULL)
         rte_panic("Cannot init mbuf pool\n");
 
 The rte_mempool is a generic structure used to handle pools of objects.
-In this case, it is necessary to create a pool that will be used by the driver,
-which expects to have some reserved space in the mempool structure,
-sizeof(struct rte_pktmbuf_pool_private) bytes.
-The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
 A per-lcore cache of 32 mbufs is kept.
 The memory is allocated in NUMA socket 0,
 but it is possible to extend this code to allocate one mbuf pool per socket.
 
-Two callback pointers are also given to the rte_mempool_create() function:
-
-*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
-    to initialize the private data of the mempool, which is needed by the driver.
-    This function is provided by the mbuf API, but can be copied and extended by the developer.
-
-*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
-    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
-    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
-    a new function derived from rte_pktmbuf_init( ) can be created.
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
 
 .. _l2_fwd_app_dvr_init:
 
diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
index 6e425b7..4bd87c2 100644
--- a/doc/guides/sample_app_ug/ptpclient.rst
+++ b/doc/guides/sample_app_ug/ptpclient.rst
@@ -171,15 +171,9 @@  used by the application:
 
 .. code-block:: c
 
-    mbuf_pool = rte_mempool_create("MBUF_POOL",
-                                   NUM_MBUFS * nb_ports,
-                                   MBUF_SIZE,
-                                   MBUF_CACHE_SIZE,
-                                   sizeof(struct rte_pktmbuf_pool_private),
-                                   rte_pktmbuf_pool_init, NULL,
-                                   rte_pktmbuf_init,      NULL,
-                                   rte_socket_id(),
-                                   0);
+    mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
+    	MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
+    	NULL);
 
 Mbufs are the packet buffer structure used by DPDK. They are explained in
 detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
diff --git a/doc/guides/sample_app_ug/quota_watermark.rst b/doc/guides/sample_app_ug/quota_watermark.rst
index c56683a..f3a6624 100644
--- a/doc/guides/sample_app_ug/quota_watermark.rst
+++ b/doc/guides/sample_app_ug/quota_watermark.rst
@@ -254,32 +254,24 @@  It contains a set of mbuf objects that are used by the driver and the applicatio
 .. code-block:: c
 
     /* Create a pool of mbuf to store packets */
-
-    mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
-        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
+    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
+    	MBUF_DATA_SIZE, rte_socket_id(), NULL);
 
     if (mbuf_pool == NULL)
         rte_panic("%s\n", rte_strerror(rte_errno));
 
 The rte_mempool is a generic structure used to handle pools of objects.
-In this case, it is necessary to create a pool that will be used by the driver,
-which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
+In this case, it is necessary to create a pool that will be used by the driver.
 
-The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each.
+The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room size
+of MBUF_DATA_SIZE each.
 A per-lcore cache of 32 mbufs is kept.
 The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket.
 
-Two callback pointers are also given to the rte_mempool_create() function:
-
-*   The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool,
-    which is needed by the driver.
-    This function is provided by the mbuf API, but can be copied and extended by the developer.
-
-*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
-
-The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
-If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
-a new function derived from rte_pktmbuf_init() can be created.
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
 
 Ports Configuration and Pairing
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2f7ae70..e234c63 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -888,8 +888,8 @@  bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
 	RTE_ASSERT(port->tx_ring == NULL);
 	socket_id = rte_eth_devices[slave_id].data->numa_node;
 
-	element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
-				+ RTE_PKTMBUF_HEADROOM;
+	element_size = sizeof(struct slow_protocol_frame) +
+		RTE_PKTMBUF_HEADROOM;
 
 	/* The size of the mempool should be at least:
 	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
@@ -900,11 +900,10 @@  bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
 	}
 
 	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
-	port->mbuf_pool = rte_mempool_create(mem_name,
-		total_tx_desc, element_size,
-		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
-		sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
-		NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
+	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
+		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
+			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
+		0, element_size, socket_id, NULL);
 
 	/* Any memory allocation failure in initalization is critical because
 	 * resources can't be free, so reinitialization is impossible. */
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
index cd167f6..d86aa86 100644
--- a/examples/ip_pipeline/init.c
+++ b/examples/ip_pipeline/init.c
@@ -316,16 +316,15 @@  app_init_mempool(struct app_params *app)
 		struct app_mempool_params *p = &app->mempool_params[i];
 
 		APP_LOG(app, HIGH, "Initializing %s ...", p->name);
-		app->mempool[i] = rte_mempool_create(
-				p->name,
-				p->pool_size,
-				p->buffer_size,
-				p->cache_size,
-				sizeof(struct rte_pktmbuf_pool_private),
-				rte_pktmbuf_pool_init, NULL,
-				rte_pktmbuf_init, NULL,
-				p->cpu_socket_id,
-				0);
+		app->mempool[i] = rte_pktmbuf_pool_create(
+			p->name,
+			p->pool_size,
+			p->cache_size,
+			0, /* priv_size */
+			p->buffer_size -
+				sizeof(struct rte_mbuf), /* mbuf data size */
+			p->cpu_socket_id,
+			NULL);
 
 		if (app->mempool[i] == NULL)
 			rte_panic("%s init error\n", p->name);
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 50fe422..8648161 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -84,9 +84,7 @@ 
 
 #define MAX_JUMBO_PKT_LEN  9600
 
-#define	BUF_SIZE	RTE_MBUF_DEFAULT_DATAROOM
-#define MBUF_SIZE	\
-	(BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define	MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
 
 #define NB_MBUF 8192
 
@@ -909,11 +907,13 @@  setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
 
 	snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
 
-	if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
-			sizeof(struct rte_pktmbuf_pool_private),
-			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
-			socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
-		RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
+	rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
+		0, /* cache size */
+		0, /* priv size */
+		MBUF_DATA_SIZE, socket, "ring_sp_sc");
+	if (rxq->pool == NULL) {
+		RTE_LOG(ERR, IP_RSMBL,
+			"rte_pktmbuf_pool_create(%s) failed", buf);
 		return -1;
 	}
 
diff --git a/examples/multi_process/l2fwd_fork/main.c b/examples/multi_process/l2fwd_fork/main.c
index 2d951d9..358a760 100644
--- a/examples/multi_process/l2fwd_fork/main.c
+++ b/examples/multi_process/l2fwd_fork/main.c
@@ -77,8 +77,7 @@ 
 
 #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
 #define MBUF_NAME	"mbuf_pool_%d"
-#define MBUF_SIZE	\
-(RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
 #define NB_MBUF   8192
 #define RING_MASTER_NAME	"l2fwd_ring_m2s_"
 #define RING_SLAVE_NAME		"l2fwd_ring_s2m_"
@@ -989,14 +988,11 @@  main(int argc, char **argv)
 		flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
 		snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid);
 		l2fwd_pktmbuf_pool[portid] =
-			rte_mempool_create(buf_name, NB_MBUF,
-					   MBUF_SIZE, 32,
-					   sizeof(struct rte_pktmbuf_pool_private),
-					   rte_pktmbuf_pool_init, NULL,
-					   rte_pktmbuf_init, NULL,
-					   rte_socket_id(), flags);
+			rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32,
+				0, MBUF_DATA_SIZE, rte_socket_id(),
+				NULL);
 		if (l2fwd_pktmbuf_pool[portid] == NULL)
-			rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+			rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
 		printf("Create mbuf %s\n", buf_name);
 	}
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index 622f248..2b786c5 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -68,7 +68,7 @@ 
 				(nb_switching_cores * MBUF_CACHE_SIZE))
 
 #define MBUF_CACHE_SIZE 128
-#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
 
 #define MAX_PKT_BURST 32	/* Max burst size for RX/TX */
 #define BURST_TX_DRAIN_US 100	/* TX drain every ~100us */
@@ -1200,15 +1200,14 @@  main(int argc, char *argv[])
 			MAX_SUP_PORTS);
 	}
 	/* Create the mbuf pool. */
-	mbuf_pool = rte_mempool_create(
+	mbuf_pool = rte_pktmbuf_pool_create(
 			"MBUF_POOL",
-			NUM_MBUFS_PER_PORT
-			* valid_nb_ports,
-			MBUF_SIZE, MBUF_CACHE_SIZE,
-			sizeof(struct rte_pktmbuf_pool_private),
-			rte_pktmbuf_pool_init, NULL,
-			rte_pktmbuf_init, NULL,
-			rte_socket_id(), 0);
+			NUM_MBUFS_PER_PORT * valid_nb_ports,
+			MBUF_CACHE_SIZE,
+			0,
+			MBUF_DATA_SIZE,
+			rte_socket_id(),
+			NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 3e9cbb6..4b871ca 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -62,7 +62,7 @@ 
 
 /*
  * ctrlmbuf constructor, given as a callback function to
- * rte_mempool_create()
+ * rte_mempool_obj_iter() or rte_mempool_create()
  */
 void
 rte_ctrlmbuf_init(struct rte_mempool *mp,
@@ -77,7 +77,8 @@  rte_ctrlmbuf_init(struct rte_mempool *mp,
 
 /*
  * pktmbuf pool constructor, given as a callback function to
- * rte_mempool_create()
+ * rte_mempool_create(), or called directly if using
+ * rte_mempool_create_empty()/rte_mempool_populate()
  */
 void
 rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
@@ -110,7 +111,7 @@  rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
 
 /*
  * pktmbuf constructor, given as a callback function to
- * rte_mempool_create().
+ * rte_mempool_obj_iter() or rte_mempool_create().
  * Set the fields of a packet mbuf to their default values.
  */
 void
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 774e071..352fa02 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -44,6 +44,13 @@ 
  * buffers. The message buffers are stored in a mempool, using the
  * RTE mempool library.
  *
+ * The preferred way to create a mbuf pool is to use
+ * rte_pktmbuf_pool_create(). However, in some situations, an
+ * application may want to have more control (ex: populate the pool with
+ * specific memory), in this case it is possible to use functions from
+ * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for
+ * details.
+ *
  * This library provide an API to allocate/free packet mbufs, which are
  * used to carry network packets.
  *
@@ -1189,14 +1196,14 @@  __rte_mbuf_raw_free(struct rte_mbuf *m)
  * This function initializes some fields in an mbuf structure that are
  * not modified by the user once created (mbuf type, origin pool, buffer
  * start address, and so on). This function is given as a callback function
- * to rte_mempool_create() at pool creation time.
+ * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
  *
  * @param mp
  *   The mempool from which the mbuf is allocated.
  * @param opaque_arg
  *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
+ *   for mbuf initialization. This pointer is the opaque argument passed to
+ *   rte_mempool_obj_iter() or rte_mempool_create().
  * @param m
  *   The mbuf to initialize.
  * @param i
@@ -1270,14 +1277,14 @@  rte_is_ctrlmbuf(struct rte_mbuf *m)
  * This function initializes some fields in the mbuf structure that are
  * not modified by the user once created (origin pool, buffer start
  * address, and so on). This function is given as a callback function to
- * rte_mempool_create() at pool creation time.
+ * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
  *
  * @param mp
  *   The mempool from which mbufs originate.
  * @param opaque_arg
  *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
+ *   for mbuf initialization. This pointer is the opaque argument passed to
+ *   rte_mempool_obj_iter() or rte_mempool_create().
  * @param m
  *   The mbuf to initialize.
  * @param i
@@ -1292,7 +1299,8 @@  void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
  *
  * This function initializes the mempool private data in the case of a
  * pktmbuf pool. This private data is needed by the driver. The
- * function is given as a callback function to rte_mempool_create() at
+ * function must be called on the mempool before it is used, or it
+ * can be given as a callback function to rte_mempool_create() at
  * pool creation. It can be extended by the user, for example, to
  * provide another packet size.
  *
@@ -1300,8 +1308,8 @@  void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
  *   The mempool from which mbufs originate.
  * @param opaque_arg
  *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
+ *   for mbuf initialization. This pointer is the opaque argument passed to
+ *   rte_mempool_create().
  */
 void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
 
@@ -1309,8 +1317,7 @@  void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
  * Create a mbuf pool.
  *
  * This function creates and initializes a packet mbuf pool. It is
- * a wrapper to rte_mempool_create() with the proper packet constructor
- * and mempool constructor.
+ * a wrapper to rte_mempool functions.
  *
  * @param name
  *   The name of the mbuf pool.