[dpdk-stable] [dpdk-dev] [PATCH 1/2] net/qede: fix ovs-dpdk failure when using odd number of queues on 100Gb mode

Shahed Shaikh shshaikh at marvell.com
Wed Sep 4 19:52:11 CEST 2019


> -----Original Message-----
> From: Jerin Jacob Kollanukkaran <jerinj at marvell.com>
> Sent: Wednesday, September 4, 2019 7:01 PM
> To: Shahed Shaikh <shshaikh at marvell.com>; dev at dpdk.org
> Cc: Rasesh Mody <rmody at marvell.com>; ferruh.yigit at intel.com; GR-Everest-
> DPDK-Dev <GR-Everest-DPDK-Dev at marvell.com>; stable at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 1/2] net/qede: fix ovs-dpdk failure when using
> odd number of queues on 100Gb mode
> 
> > -----Original Message-----
> > From: dev <dev-bounces at dpdk.org> On Behalf Of Shahed Shaikh
> > Sent: Wednesday, September 4, 2019 5:01 PM
> > To: dev at dpdk.org
> > Cc: Rasesh Mody <rmody at marvell.com>; ferruh.yigit at intel.com; GR-
> > Everest-DPDK-Dev <GR-Everest-DPDK-Dev at marvell.com>; stable at dpdk.org
> > Subject: [dpdk-dev] [PATCH 1/2] net/qede: fix ovs-dpdk failure when
> > using odd number of queues on 100Gb mode
> >
> > As per the HW design of 100Gb mode, the device internally uses two
> > engines (eng0 and eng1), and both engines must be configured
> > symmetrically. Based on this requirement, the driver allows the user
> > to allocate only an even number of queues and splits those queues
> > equally across both engines.
> >
> > This approach limits the number of queues that can be allocated - i.e.
> > the user can't configure an odd number of queues in 100Gb mode.
> > OVS configures a DPDK port with 1 rxq and 1 txq, which causes
> > initialization of the qede port to fail.
> >
> > This patch changes the queue allocation method for 100Gb devices,
> > removing the above limitation and allowing the user to configure an
> > odd number of queues.
> >
> > Key changes in this patch -
> >  - Allocate the requested queue count on both engines, so that the
> >    actual hardware queue count will be double what the user requested.
> >  - Create a pair of queues, one from each engine, and provide it to
> >    the rte_ethdev queue structure, so ethdev sees only one queue for
> >    the underlying queue pair created across the hw engine pair.
> >  - Rx and Tx methods from ethdev will receive that queue pair
> >    object, and the PMD will internally split Rx and Tx packet
> >    processing across both engines in separately installed Rx and Tx
> >    handlers.
> >  - Consolidate statistics of both HW queues when reporting to the
> >    application.
> >  - Report engine-wise queue statistics in the xstats flow.
> >    e.g. - rx_q<hw_eng_id>.<qid>_xxxxxxx
> 
> 
Multiple logical changes in one patch. Please split the patch into smaller
logical ones for easier review.
Hi Jerin,
Sure, let me split this patch into logical patches.

> 
> 
> >
> > Fixes: 2af14ca79c0a ("net/qede: support 100G")
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Shahed Shaikh <shshaikh at marvell.com>