[dpdk-dev] [RFC PATCH v2 3/5] librte_ether: add API's for VF management

Ananyev, Konstantin konstantin.ananyev at intel.com
Wed Sep 28 18:52:39 CEST 2016


> 
> 2016-09-28 14:30, Ananyev, Konstantin:
> > From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> > > 2016-09-28 13:26, Ananyev, Konstantin:
> > > > From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> > > > > 2016-09-28 11:23, Ananyev, Konstantin:
> > > > > > If we go this way (force the user to include driver-specific headers
> > > > > > and call driver-specific functions), how do you guys plan to make this functionality available for multiple driver types?
> > > > >
> > > > > Multiple drivers won't have exactly the same specific features.
> > > > > But yes, there are some things common to several Intel NICs.
> > > > >
> > > > > > From discussion with Bernard I understand that customers would need similar functionality for i40e.
> > > > > > Does it mean that they'll have to re-implement this part of their code again?
> > > > > > Or would they have to create (and maintain) their own shim layer that would provide some sort of abstraction?
> > > > > > Basically their own version of rte_ethdev?
> > > > >
> > > > > No definitive answer.
> > > > > But we can argue the contrary: how to handle a generic API which
> > > > > is implemented only in 1 or 2 drivers? If the application tries to use it, we can imagine that a specific range of hardware is expected.
> > > >
> > > > Yes, as I understand, it is a specific subset of supported HW (just Intel NICs for now, but different models/drivers).
> > > > Obviously users would like to have the ability to run their app on all HW from this subset without rebuilding/re-implementing the app.
> > > >
> > > > >
> > > > > I think it is an important question.
> > > > > Previously we had the issue of having some API which are too
> > > > > specific and need a rework to be used with other NICs. In order
> > > > > to avoid such rework and API break, we can try to make them
> > > > > available in a driver-specific or vendor-specific staging area,
> > > > > waiting for
> > > a later generalization.
> > > >
> > > > Could you remind me why you guys were that opposed to the ioctl-style approach?
> > > > It is not my favorite thing either, but it seems a pretty generic way to handle such situations.
> > >
> > > We prefer having well-defined functions instead of opaque ioctl-style encoding.
> > > And it was not clear what the benefit of ioctl would be.
> > > Now I think I understand you would like to have a common ioctl service for features available on 2 drivers. Right?
> >
> > Yes.
> >
> > > Example (trying to read your mind):
> > > 	rte_ethdev_ioctl(port_id, <TLV encoding VF_PING service and VF id>);
> > > instead of
> > > 	rte_pmd_ixgbe_vf_ping(port_id, vf_id);
> > > 	rte_pmd_i40e_vf_ping(port_id, vf_id);
> > > Please confirm I understand what you are thinking about.
> >
> > Yep, you read my mind correctly :)
> 
> Both could coexist (if ioctl was accepted by community).

True.

> What about starting to implement the PMD functions and postponing ioctl to later with a dedicated thread?

You mean something like:
- 16.11: implement rte_pmd_ixgbe_vf_ping()
- 17.02:
	a) implement rte_pmd_i40e_vf_ping()
	b) introduce ioctl PMD API
	c) make possible to vf_ping via ioctl API
?
If so, then it sounds like a reasonable approach to me.
Though it would be interesting to hear what the other guys think.
Konstantin



