[EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature

Hanumanth Reddy Pothula hpothula at marvell.com
Mon Nov 21 18:45:34 CET 2022



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit at amd.com>
> Sent: Monday, November 21, 2022 11:02 PM
> To: Hanumanth Reddy Pothula <hpothula at marvell.com>; Aman Singh
> <aman.deep.singh at intel.com>; Yuying Zhang <yuying.zhang at intel.com>
> Cc: dev at dpdk.org; andrew.rybchenko at oktetlabs.ru;
> thomas at monjalon.net; yux.jiang at intel.com; Jerin Jacob Kollanukkaran
> <jerinj at marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram at marvell.com>
> Subject: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether the device
> > supports the multi-mempool feature or not.
> >
> 
> Validating 'max_rx_mempools' is not the main purpose of this patch; I would
> move the paragraph below up.
> 
> > Also, add a new testpmd command line argument, multi-mempool, to
> > control the multi-mempool feature. By default it is disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula <hpothula at marvell.com>
> >
> > ---
> > v6:
> >  - Updated run_app.rst file with multi-mempool argument.
> >  - defined and populated multi_mempool at related arguments.
> >  - invoking rte_eth_dev_info_get() within multi-mempool condition
> > v5:
> >  - Added testpmd argument to enable multi-mempool feature.
> >  - Simplified logic to distinguish between multi-mempool,
> >    multi-segment and single pool/segment.
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/parameters.c             |  4 ++
> >  app/test-pmd/testpmd.c                | 66 +++++++++++++++++----------
> >  app/test-pmd/testpmd.h                |  1 +
> >  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
> >  4 files changed, 50 insertions(+), 25 deletions(-)
> >
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index aed4cdcb84..d0f7b2f11d 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -155,6 +155,7 @@ usage(char* progname)
> >  	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
> >  	printf("  --txpkts=X[,Y]*: set TX segment sizes"
> >  		" or total packet length.\n");
> > +	printf(" --multi-mempool: enable multi-mempool support\n");
> 
> Indentation is wrong, one space is missing.
> 
> Can you also update the '--mbuf-size=' definition? It currently has:
> " ... extra memory pools will be created for allocating mbufs to receive
> packets with buffer splitting features.", but now it applies to both "buffer
> splitting and multi Rx mempool features."
> It may even be possible to reference the new argument there.
Sure, will update. 
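For reference, the usage() lines could end up something like the below (only a
sketch of the wording; the "..." stands for the existing '--mbuf-size=' help
text, which I am not reproducing here, and the two leading spaces address the
indentation comment above):

	printf("  --multi-mempool: enable multi-mempool support\n");
	printf("  --mbuf-size=N[,N1[,..Nn]]: ... extra memory pools will be created\n"
	       "    for allocating mbufs to receive packets with buffer splitting\n"
	       "    or multi Rx mempool features.\n");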
> 
> >  	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
> >  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
> >  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n"
> > @@ -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
> >  		{ "rxpkts",			1, 0, 0 },
> >  		{ "rxhdrs",			1, 0, 0 },
> >  		{ "txpkts",			1, 0, 0 },
> > +		{ "multi-mempool",              0, 0, 0 },
> 
> Thinking twice, I am not sure about the 'multi-mempool' name; because
> 'mbuf-size' already causes multiple mempools to be created, 'multi-mempool'
> can be confusing.
> As the ethdev variable name is 'max_rx_mempools', what do you think about
> using 'multi-rx-mempools' as the argument here?

Yes, 'multi-rx-mempools' looks clean.
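If we go with that, the argument handling would look roughly like this (just a
sketch; renaming the variable to 'multi_rx_mempools' is my assumption for the
next revision):

	/* long-option table entry in launch_args_parse() */
	{ "multi-rx-mempools",		0, 0, 0 },

	/* option handling, next to the other boolean flags */
	if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempools"))
		multi_rx_mempools = 1;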

> 
> >  		{ "txonly-multi-flow",		0, 0, 0 },
> >  		{ "rxq-share",			2, 0, 0 },
> >  		{ "eth-link-speed",		1, 0, 0 },
> > @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
> >  				else
> >  					rte_exit(EXIT_FAILURE, "bad txpkts\n");
> >  			}
> > +			if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
> > +				multi_mempool = 1;
> >  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
> >  				txonly_multi_flow = 1;
> >  			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 4e25f77c6a..0bf2e4bd0d 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
> >   */
> >  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
> >  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> > +uint8_t multi_mempool; /**< Enables multi-mempool feature */
> >  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
> >  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
> > uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> > @@ -258,6 +259,8 @@ uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT] = {
> >  };
> >  uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
> >
> > +
> > +
> 
> Unintended change.

Ack
> 
> >  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
> >  /**< Split policy for packets to TX. */
> >
> > @@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> >  	uint32_t prev_hdrs = 0;
> >  	int ret;
> >
> > -	/* Verify Rx queue configuration is single pool and segment or
> > -	 * multiple pool/segment.
> > -	 * @see rte_eth_rxconf::rx_mempools
> > -	 * @see rte_eth_rxconf::rx_seg
> > -	 */
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> > -		/* Single pool/segment configuration */
> > -		rx_conf->rx_seg = NULL;
> > -		rx_conf->rx_nseg = 0;
> > -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> > -					     nb_rx_desc, socket_id,
> > -					     rx_conf, mp);
> > -		goto exit;
> > -	}
> >
> > -	if (rx_pkt_nb_segs > 1 ||
> > -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > +	if ((rx_pkt_nb_segs > 1) &&
> > +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
> >  		/* multi-segment configuration */
> >  		for (i = 0; i < rx_pkt_nb_segs; i++) {
> >  			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
> > @@ -2701,22 +2689,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> >  		}
> >  		rx_conf->rx_nseg = rx_pkt_nb_segs;
> >  		rx_conf->rx_seg = rx_useg;
> > -	} else {
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> > +				    socket_id, rx_conf, NULL);
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +	} else if (multi_mempool == 1) {
> >  		/* multi-pool configuration */
> > +		struct rte_eth_dev_info dev_info;
> > +
> > +		if (mbuf_data_size_n <= 1) {
> > +			RTE_LOG(ERR, EAL, "invalid number of mempools %u",
> > +				mbuf_data_size_n);
> > +			return -EINVAL;
> > +		}
> > +		ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +		if (ret != 0)
> > +			return ret;
> > +		if (dev_info.max_rx_mempools == 0) {
> > +			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
> > +			return -ENOTSUP;
> > +		}
> >  		for (i = 0; i < mbuf_data_size_n; i++) {
> >  			mpx = mbuf_pool_find(socket_id, i);
> >  			rx_mempool[i] = mpx ? mpx : mp;
> >  		}
> >  		rx_conf->rx_mempools = rx_mempool;
> >  		rx_conf->rx_nmempool = mbuf_data_size_n;
> > -	}
> > -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> >  				    socket_id, rx_conf, NULL);
> > -	rx_conf->rx_seg = NULL;
> > -	rx_conf->rx_nseg = 0;
> > -	rx_conf->rx_mempools = NULL;
> > -	rx_conf->rx_nmempool = 0;
> > -exit:
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +	} else {
> > +		/* Single pool/segment configuration */
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> > +				    socket_id, rx_conf, mp);
> > +	}
> > +
> 
> Technically execution can reach this point without taking any of the
> branches above; in that case there should be an error here instead of
> silently continuing.
> 
> I think either there should be a check here (not sure how best to do it),
> or single mempool can be the default setup in the final 'else' block. What
> do you think?
> 
Yes, the default case (the final else) is going to be single pool/segment, so I think there is no need for an error return.

This function (rx_queue_setup()) returns the return value of rte_eth_rx_queue_setup(), which is set in each of the branches.
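Condensed, the resulting control flow is a three-way if/else where every branch
assigns 'ret' (a sketch of the structure in this version of the patch; the
error checks and the rx_seg/rx_mempools bookkeeping are omitted):

	if (rx_pkt_nb_segs > 1 &&
	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
		/* multi-segment: fill rx_conf->rx_seg/rx_nseg, pass mp == NULL */
		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, NULL);
	} else if (multi_mempool == 1) {
		/* multi Rx mempool: check dev_info.max_rx_mempools, fill
		 * rx_conf->rx_mempools/rx_nmempool, pass mp == NULL
		 */
		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, NULL);
	} else {
		/* default: single pool/segment */
		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, mp);
	}
	/* no path falls through without 'ret' being assigned */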
 
> >  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
> >  						RTE_ETH_QUEUE_STATE_STOPPED :
> >  						RTE_ETH_QUEUE_STATE_STARTED;
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> > index aaf69c349a..e4f9b142c9 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
> >  extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> >  extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
> >  extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> > +extern uint8_t multi_mempool; /**< Enables multi-mempool feature. */
> >  extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
> >  extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
> >
> > diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> > index 610e442924..329570e721 100644
> > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > @@ -365,6 +365,10 @@ The command line options are:
> >      Set TX segment sizes or total packet length. Valid for ``tx-only``
> >      and ``flowgen`` forwarding modes.
> >
> > +* ``--multi-mempool``
> > +
> > +    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
> > +
> >  *   ``--txonly-multi-flow``
> >
> >      Generate multiple flows in txonly mode.


