[EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are full and Tx fails

Rakesh Kudurumalla rkudurumalla at marvell.com
Tue Feb 1 07:30:44 CET 2022


ping

> -----Original Message-----
> From: Rakesh Kudurumalla
> Sent: Monday, January 10, 2022 2:35 PM
> To: Thomas Monjalon <thomas at monjalon.net>; Jerin Jacob Kollanukkaran
> <jerinj at marvell.com>
> Cc: stable at dpdk.org; dev at dpdk.org; david.marchand at redhat.com;
> ferruh.yigit at intel.com; andrew.rybchenko at oktetlabs.ru;
> ajit.khaparde at broadcom.com
> Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are
> full and Tx fails
> 
> ping
> 
> > -----Original Message-----
> > From: Rakesh Kudurumalla
> > Sent: Monday, December 13, 2021 12:10 PM
> > To: Thomas Monjalon <thomas at monjalon.net>; Jerin Jacob Kollanukkaran
> > <jerinj at marvell.com>
> > Cc: stable at dpdk.org; dev at dpdk.org; david.marchand at redhat.com;
> > ferruh.yigit at intel.com; andrew.rybchenko at oktetlabs.ru;
> > ajit.khaparde at broadcom.com
> > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > queues are full and Tx fails
> >
> >
> >
> > > -----Original Message-----
> > > From: Thomas Monjalon <thomas at monjalon.net>
> > > Sent: Monday, November 29, 2021 2:44 PM
> > > To: Rakesh Kudurumalla <rkudurumalla at marvell.com>; Jerin Jacob
> > > Kollanukkaran <jerinj at marvell.com>
> > > Cc: stable at dpdk.org; dev at dpdk.org; david.marchand at redhat.com;
> > > ferruh.yigit at intel.com; andrew.rybchenko at oktetlabs.ru;
> > > ajit.khaparde at broadcom.com
> > > Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > > queues are full and Tx fails
> > >
> > > 29/11/2021 09:52, Rakesh Kudurumalla:
> > > > From: Thomas Monjalon <thomas at monjalon.net>
> > > > > 22/11/2021 08:59, Rakesh Kudurumalla:
> > > > > > From: Thomas Monjalon <thomas at monjalon.net>
> > > > > > > 20/07/2021 18:50, Rakesh Kudurumalla:
> > > > > > > > Current pmd_perf_autotest() in continuous mode tries to
> > > > > > > > enqueue MAX_TRAFFIC_BURST completely before starting the test.
> > > > > > > > Some drivers cannot accept complete MAX_TRAFFIC_BURST even
> > > > > > > > though rx+tx desc count can fit it.
> > > > > > >
> > > > > > > Which driver is failing to do so?
> > > > > > > Why can it not enqueue 32 packets?
> > > > > >
> > > > > > The octeontx2 driver is failing to enqueue because the
> > > > > > hardware buffers are full before the test.
> > >
> > > Aren't you stopping the support of octeontx2?
> > > Why do you care now?
> >
> > Yes, we are no longer supporting octeontx2, but this issue is also
> > observed in the cnxk driver; the current patch fixes the same.
> > > > >
> > > > > Why are the hardware buffers full?
> > > > Hardware buffers are full because the number of descriptors in
> > > > continuous mode is less than MAX_TRAFFIC_BURST, so if the enqueue
> > > > fails, there is no way the hardware can drop the packets.
> > > > pmd_perf_autotest evaluates performance after enqueueing the
> > > > packets initially.
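
For context, the pre-load loop in app/test/test_pmd_perf.c before this
patch behaves roughly like the simplified, self-contained sketch below
(the helper name and includes are illustrative, not the verbatim test
code):

#include <stdint.h>

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST		32
#define MAX_TRAFFIC_BURST	2048

/* Pre-load the port with MAX_TRAFFIC_BURST packets before measuring.
 * If the PMD's queues are already full, rte_eth_tx_burst() returns 0
 * on every call, num never decreases, and this loop spins forever. */
static void
preload_unbounded(uint16_t portid, struct rte_mbuf **tx_burst)
{
	uint32_t num = MAX_TRAFFIC_BURST;
	uint32_t idx = 0;
	uint16_t nb_tx;

	while (num) {
		nb_tx = RTE_MIN(MAX_PKT_BURST, num);
		nb_tx = rte_eth_tx_burst(portid, 0, &tx_burst[idx], nb_tx);
		num -= nb_tx;	/* no progress when nb_tx == 0 */
		idx += nb_tx;
	}
}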
> > > > >
> > > > > > pmd_perf_autotest() in continuous mode tries to enqueue
> > > > > > MAX_TRAFFIC_BURST (2048) before starting the test.
> > > > > >
> > > > > > This patch changes the behaviour to stop enqueuing after a
> > > > > > few retries.
> > > > > > >
> > > > > > > If there is a real limitation, there will be issues in more
> > > > > > > places than this test program.
> > > > > > > I feel it should be addressed either in the driver or at ethdev level.
> > > > > > >
> > > > > > > [...]
> > > > > > > > @@ -480,10 +483,19 @@ main_loop(__rte_unused void *args)
> > > > > > > >  			nb_tx = RTE_MIN(MAX_PKT_BURST, num);
> > > > > > > >  			nb_tx = rte_eth_tx_burst(portid, 0,
> > > > > > > >  						&tx_burst[idx], nb_tx);
> > > > > > > > +			if (nb_tx == 0)
> > > > > > > > +				retry_cnt++;
> > > > > > > >  			num -= nb_tx;
> > > > > > > >  			idx += nb_tx;
> > > > > > > > +			if (retry_cnt == MAX_RETRY_COUNT) {
> > > > > > > > +				retry_cnt = 0;
> > > > > > > > +				break;
> > > > > > > > +			}


