[dpdk-dev] [RFC PATCH 4/4] ethdev: add helpers to move to the new offloads API

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Aug 29 11:43:20 CEST 2017



> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs at mellanox.com]
> Sent: Tuesday, August 29, 2017 7:27 AM
> To: Ananyev, Konstantin <konstantin.ananyev at intel.com>; dev at dpdk.org
> Subject: RE: [dpdk-dev] [RFC PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> Monday, August 28, 2017 5:12 PM, Ananyev, Konstantin:
> > > Wednesday, August 23, 2017 3:29 PM, Ananyev, Konstantin:
> > > >
> > > > Hmm, and why would we need to reconfigure our device in the middle
> > > > of rx queue setup?
> > >
> > > The reason is that the old Rx offloads API is configured at device configure time.
> > > This "if" section is for applications which have already moved to the new
> > > offloads API while the underlying PMD still uses the old one.
> >
> > Ok, but as I remember, right now the initialization order is pretty strict:
> > rx_queue_setup() has to be always called after dev_configure().
> 
> Is this restriction backed up in the API definition?
> From what I saw in the dev_configure API documentation:
> 
> "
> This function must be invoked first before any other function in the Ethernet API. This function can also be re-invoked when a device is in
> the stopped state.
> "
> 
> It does not say that one is forbidden to do the following (while the port is stopped):
> dev_configure
> rx_queue_setup(queue 1)
> dev_configure (for the same configuration)
> rx_queue_setup(queue 2)
> 

It might work in some cases, but there is no guarantee it would work in general.
Though, as I understand it, in your case the config parameters for the second call
of dev_configure() might be different anyway.
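
Just to illustrate the strict order I mean, here is a rough sketch (not from the
patch; the function and variable names are made up, error handling trimmed) of the
initialization sequence ethdev assumes today - one dev_configure(), then all queue
setups, then start:

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative helper only, not part of the patch. */
static int
port_init(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
	  struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf port_conf;
	uint16_t q;
	int ret;

	/* With the old API, port-wide Rx/Tx offloads are chosen here. */
	memset(&port_conf, 0, sizeof(port_conf));

	/* Single dev_configure() before any queue setup. */
	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
	if (ret != 0)
		return ret;

	/* All queue setups follow; no dev_configure() in between. */
	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL, mbuf_pool);
		if (ret < 0)
			return ret;
	}
	for (q = 0; q < nb_txq; q++) {
		ret = rte_eth_tx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL);
		if (ret < 0)
			return ret;
	}

	return rte_eth_dev_start(port_id);
}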

> I didn't see any wording about it in the rx_queue_setup API documentation either. Maybe the API doc should be updated.

Yes, good point.

> 
> You are correct though that all examples and apps in the dpdk tree behave in the way you described.
> 
> > One of the reasons for that: rx_queue_setup() might change fields inside
> > dev->data->dev_private.
> > A second call to dev_configure() will void these changes and some of the rxq
> > config information will be lost.
> > So I think we should avoid calling dev_configure() inside rx_queue_setup().
> 
> In continuation of the above comment, is this because of a wrong implementation of the callbacks by the PMDs, or an actual
> limitation of ethdev?

I'd say it is a combination of limitations of the ethdev design and actual HW restrictions.
If we had an rx/tx function per queue, some of these limitations might be removed, I think.
Though for some HW, offloads can be enabled/disabled per device, not per queue,
so I guess some limitations would persist anyway.
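
For example, with today's API such offloads can only be requested port-wide -
a rough sketch, just for illustration (not from the patch; the helper name is
made up):

#include <rte_ethdev.h>

/*
 * Illustration only: the Rx checksum offload is requested for the whole
 * port via rxmode, after checking the device-level capability flags
 * reported in rte_eth_dev_info - there is no per-queue granularity here.
 */
static void
request_rx_cksum_if_supported(uint8_t port_id, struct rte_eth_conf *port_conf)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_IPV4_CKSUM)
		port_conf->rxmode.hw_ip_checksum = 1;
}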

> 
> >
> > My preference still would be to force all PMDs to move to the new version of
> > rx_queue_setup() first.
> > I think it would be much less error prone than supporting two flavors of
> > PMD config
> > and will allow us to catch errors early - in case this new scheme doesn't work
> > for some PMD for any reason.
> 
> I fully agree with you. If we can force all PMDs to move to the new API, that would be best.
> No need for a risky re-configuration in the middle of the queue setup.
> But how can we force them all to make such a transition? I saw there is a discussion on this RFC in the techboard - maybe it can be decided there.
> 
> As I said before, I think it will be very hard for me to migrate all PMDs to the new API by myself.
> It is not a simple function prototype change or some identical swap I need to do on each PMD.
> Each device has its own characteristics and restrictions on the offloads it can provide.
> Some offloads can be enabled only at specific places in the device initialization.
> Some offloads are only per port.
> To do such a transition properly I must fully understand the underlying device of each PMD.
> 
> I can commit to the transition for the devices I am familiar with: mlx4 and mlx5.

That's understandable.
For the tx_prepare() work, we used the following approach:
1. Submitted a patch with changes in rte_ethdev and the PMDs we are familiar with (the Intel ones).
    For other PMDs the patch contained just minimal changes to make it build cleanly.
2. Asked other PMD maintainers to review the rte_ethdev changes and provide a proper patch
    for the PMDs they own.

Probably the same approach can be used here.
Konstantin

> 
> >
> > Also it seems that you forgot to add:
> > struct rte_eth_rxmode rxmode = dev->data->dev_conf.rxmode;
> 
> Thanks, I discovered it right after I sent the RFC. Already fixed.
> 
> >
> > Konstantin
> >
> >
> > >
> > > >
> > > > > +			if (ret < 0) {
> > > > > +				RTE_PMD_DEBUG_TRACE(
> > > > > +					"unable to re-configure port %d "
> > > > > +					"in order to apply rxq offloads "
> > > > > +					"configuration\n", port_id);
> > > > > +			}
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > >  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> > > > >  					      socket_id, rx_conf, mp);
> > > >
> > > > BTW, I don't see changes in any PMD for new offload flags?
> > > > Is it because it is just an RFC and the full patch would contain such changes?
> > >
> > > Yes this is because this is an RFC.
> > >
> > > The full patch I intend to send will move all examples and testpmd to the
> > > new offloads API.
> > > In addition it will include the mlx5 PMD support for the new offloads API.
> > >
> > > As I said in a previous mail, I believe that the work to move the
> > > different PMDs to the new API should be done by their developers or
> > > maintainers.
> > >
> > > >
> > > >
> > > > >  	if (!ret) {
> > > > > @@ -1094,6 +1182,51 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> > > > >  	return ret;
> > > > >  }
> > > > >
> > > > > +/**
> > > > > + * A copy function from txq_flags to rte_eth_txq_conf offloads API,
> > > > > + * to enable PMDs to support only one of the APIs.
> > > > > + */
> > > > > +static void
> > > > > +rte_eth_copy_txq_flags(struct rte_eth_txq_conf *txq_conf)
> > > > > +{
> > > > > +	uint32_t txq_flags = txq_conf->txq_flags;
> > > > > +	uint64_t *offloads = &txq_conf->offloads;
> > > > > +
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * A copy function from the rte_eth_txq_conf offloads API to the txq_flags
> > > > > + * offloads API, to enable the application to be agnostic to the API
> > > > > + * supported by the PMD.
> > > > > + */
> > > > > +static void
> > > > > +rte_eth_copy_txq_offloads(struct rte_eth_txq_conf *txq_conf)
> > > > > +{
> > > > > +	uint32_t *txq_flags = &txq_conf->txq_flags;
> > > > > +	uint64_t offloads = txq_conf->offloads;
> > > > > +
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
> > > > > +}
> > > > > +
> > > > >  int
> > > > >  rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
> > > > >  		       uint16_t nb_tx_desc, unsigned int socket_id,
> > > > > @@ -1145,6 +1278,13 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
> > > > >  	if (tx_conf == NULL)
> > > > >  		tx_conf = &dev_info.default_txconf;
> > > > >
> > > > > +	if ((dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
> > > > > +	    (!(tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)))
> > > > > +		rte_eth_copy_txq_flags(tx_conf);
> > > > > +	else if (!(dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
> > > > > +		   (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE))
> > > > > +		rte_eth_copy_txq_offloads(tx_conf);
> > > > > +
> > > >
> > > > As I said in my previous mail - I think it is better to always convert
> > > > from the old txq_flags to the new TX offload flags and make each PMD
> > > > understand the new offload values only.
> > > > Konstantin
> > > >
> > > > >  	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id,
> > > > nb_tx_desc,
> > > > >  					       socket_id, tx_conf);
> > > > >  }
> > > > > --
> > > > > 2.12.0
