[dpdk-dev] [PATCH 5/6] ixgbe: Config VF RSS

Ouyang, Changchun changchun.ouyang at intel.com
Fri Dec 19 02:13:48 CET 2014


Hi,

> -----Original Message-----
> From: Vlad Zolotarov [mailto:vladz at cloudius-systems.com]
> Sent: Thursday, December 18, 2014 6:09 PM
> To: dev at dpdk.org; Ouyang, Changchun
> Subject: Re: [PATCH 5/6] ixgbe: Config VF RSS
> 
> 
> On 12/18/14 11:58, Vlad Zolotarov wrote:
> > From: Changchun Ouyang <changchun.ouyang at intel.com>
> >
> > RSS, IXGBE_MRQC and IXGBE_VFPSRTYPE need to be configured to enable VF RSS.
> >
> > Signed-off-by: Changchun Ouyang <changchun.ouyang at intel.com>
> > ---
> >   lib/librte_pmd_ixgbe/ixgbe_pf.c   | 15 +++++++++
> >   lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 66 +++++++++++++++++++++++++++++++++------
> >   2 files changed, 71 insertions(+), 10 deletions(-)
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> > index cbb0145..9c9dad8 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> > @@ -187,6 +187,21 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
> >   	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(hw->mac.num_rar_entries), 0);
> >   	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(hw->mac.num_rar_entries), 0);
> >
> > +	/*
> > +	 * VF RSS supports at most 4 queues per VF. Even if 8 queues
> > +	 * are available for each VF, they must be reduced to 4 here
> > +	 * because of this limitation; otherwise no queue will receive
> > +	 * any packets even when RSS is enabled.
> > +	 */
> > +	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_RSS) {
> > +		if (RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool == 8) {
> > +			RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
> > +			RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = 4;
> > +			RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
> > +				dev_num_vf(eth_dev) * 4;
> > +		}
> > +	}
> > +
> >   	/* set VMDq map to default PF pool */
> >   	hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > index f58f98e..5d071b4 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > @@ -3327,6 +3327,39 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
> >   }
> >
> >   static int
> > +ixgbe_config_vf_rss(struct rte_eth_dev *dev)
> > +{
> > +	struct ixgbe_hw *hw;
> > +	uint32_t mrqc;
> > +
> > +	ixgbe_rss_configure(dev);
> > +
> > +	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > +
> > +	/* MRQC: enable VF RSS */
> > +	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
> > +	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
> > +	switch (RTE_ETH_DEV_SRIOV(dev).active) {
> > +	case ETH_64_POOLS:
> > +		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
> > +		break;
> > +
> > +	case ETH_32_POOLS:
> > +	case ETH_16_POOLS:
> > +		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
> > +		break;
> > +
> > +	default:
> > +		PMD_INIT_LOG(ERR, "Invalid pool number in IOV mode");
> > +		return -EINVAL;
> > +	}
> > +
> > +	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
> > +
> > +	return 0;
> > +}
> > +
> > +static int
> >   ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
> >   {
> >   	struct ixgbe_hw *hw =
> > @@ -3358,24 +3391,34 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
> >   			default: ixgbe_rss_disable(dev);
> >   		}
> >   	} else {
> > -		switch (RTE_ETH_DEV_SRIOV(dev).active) {
> >   		/*
> >   		 * SRIOV active scheme
> >   		 * FIXME if support DCB/RSS together with VMDq & SRIOV
> >   		 */
> > -		case ETH_64_POOLS:
> > -			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQEN);
> > +		switch (dev->data->dev_conf.rxmode.mq_mode) {
> > +		case ETH_MQ_RX_RSS:
> > +		case ETH_MQ_RX_VMDQ_RSS:
> > +			ixgbe_config_vf_rss(dev);
> >   			break;
> >
> > -		case ETH_32_POOLS:
> > -			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT4TCEN);
> > -			break;
> > +		default:
> > +			switch (RTE_ETH_DEV_SRIOV(dev).active) {
> > +			case ETH_64_POOLS:
> > +				IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQEN);
> > +				break;
> >
> > -		case ETH_16_POOLS:
> > -			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT8TCEN);
> > +			case ETH_32_POOLS:
> > +				IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT4TCEN);
> > +				break;
> > +
> > +			case ETH_16_POOLS:
> > +				IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT8TCEN);
> > +				break;
> > +			default:
> > +				PMD_INIT_LOG(ERR, "invalid pool number in IOV mode");
> > +				break;
> > +			}
> >   			break;
> > -		default:
> > -			PMD_INIT_LOG(ERR, "invalid pool number in IOV mode");
> >   		}
> >   	}
> >
> > @@ -4094,6 +4137,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> >   			IXGBE_PSRTYPE_IPV6HDR;
> >   #endif
> >
> > +	/* Set RQPL for VF RSS according to the max Rx queue number */
> > +	psrtype |= (hw->mac.max_rx_queues >> 1) <<
> > +			IXGBE_PSRTYPE_RQPL_SHIFT;
> 
> Don't you have to take dev->data->nb_rx_queues into account here as
> well? What if a user has requested fewer than the maximum allowed
> number of Rx queues?
> 
Yes, it does, but not only here: the RQPL field has to stay consistent with MRQC.
Either both configure 2 queues or both configure 4 queues;
it doesn't work if one is configured for 2 queues while the other is configured for 4.
A v2 patch is needed that also considers data->nb_rx_queues as a second factor
when determining the queue number (count) per VF.
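
To make the idea concrete, here is a rough, untested sketch of how such a v2
might derive the per-VF queue count from both hw->mac.max_rx_queues and
dev->data->nb_rx_queues, so that RQPL and MRQC are programmed from one value.
The helper name ixgbe_vf_rss_queue_count is hypothetical, not part of this patch:

	/*
	 * Hypothetical v2 helper: pick the per-VF RSS queue count from
	 * both the hardware limit and the user-requested Rx queue number,
	 * so that PSRTYPE.RQPL and the MRQC pool mode stay consistent.
	 */
	static uint32_t
	ixgbe_vf_rss_queue_count(struct rte_eth_dev *dev, struct ixgbe_hw *hw)
	{
		uint16_t nb_q = dev->data->nb_rx_queues;

		/* Never exceed what the hardware allows per VF. */
		if (nb_q > hw->mac.max_rx_queues)
			nb_q = hw->mac.max_rx_queues;

		/* VF RSS only supports 2 or 4 queues; round down. */
		return (nb_q >= 4) ? 4 : 2;
	}

ixgbevf_dev_rx_init() would then program RQPL from that count instead of
from max_rx_queues alone, reusing the same >> 1 encoding as in this patch:

	psrtype |= (ixgbe_vf_rss_queue_count(dev, hw) >> 1) <<
			IXGBE_PSRTYPE_RQPL_SHIFT;

and the PF side would pick ETH_32_POOLS/VMDQRSS32EN for 4 queues per VF,
or ETH_64_POOLS/VMDQRSS64EN for 2, keeping both registers in agreement.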

Thanks
Changchun


