[dpdk-dev] [RFC PATCH v2 3/5] librte_ether: add API's for VF management

Thomas Monjalon thomas.monjalon at 6wind.com
Wed Sep 28 17:00:18 CEST 2016


2016-09-28 14:48, Iremonger, Bernard:
> <snip>
> 
> > > Subject: Re: [dpdk-dev] [RFC PATCH v2 3/5] librte_ether: add API's for
> > > VF management
> > >
> > > 2016-09-28 13:26, Ananyev, Konstantin:
> > > > From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> > > > > 2016-09-28 11:23, Ananyev, Konstantin:
> > > > > > If we go this way (force the user to include driver-specific
> > > > > > headers and call driver-specific functions), how do you guys
> > > > > > plan to make this functionality available for multiple driver
> > > > > > types?
> > > > >
> > > > > Multiple drivers won't have exactly the same specific features.
> > > > > But yes, there are some things common to several Intel NICs.
> > > > >
> > > > > > From discussion with Bernard I understand that customers would
> > > > > > need similar functionality for i40e.
> > > > > > Does it mean that they'll have to re-implement this part of
> > > > > > their code again?
> > > > > > Or would they have to create (and maintain) their own shim
> > > > > > layer that would provide some sort of abstraction?
> > > > > > Basically their own version of rte_ethdev?
> > > > >
> > > > > No definitive answer.
> > > > > But we can argue the contrary: how do we handle a generic API
> > > > > which is implemented in only 1 or 2 drivers? If the application
> > > > > tries to use it, we can imagine that a specific range of
> > > > > hardware is expected.
> > > >
> > > > Yes, as I understand it, it is a specific subset of supported HW
> > > > (just Intel NICs for now, but different models/drivers).
> > > > Obviously users would like the ability to run their app on all HW
> > > > from this subset without rebuilding/re-implementing the app.
> > > >
> > > > >
> > > > > I think it is an important question.
> > > > > Previously we had the issue of having some APIs which were too
> > > > > specific and needed a rework to be used with other NICs. In order
> > > > > to avoid such a rework and API breakage, we can try to make them
> > > > > available in a driver-specific or vendor-specific staging area,
> > > > > waiting for a later generalization.
> > > >
> > > > Could you remind me why you guys were so opposed to the ioctl-style
> > > > approach?
> > > > It is not my favorite thing either, but it seems a pretty generic
> > > > way to handle such situations.
> > >
> > > We prefer having well-defined functions instead of opaque ioctl-style
> > > encoding.
> > > And it was not clear what the benefit of ioctl would be.
> > > Now I think I understand: you would like to have a common ioctl
> > > service for features available on 2 drivers. Right?
> > 
> > Yes.
> > 
> > > Example (trying to read your mind):
> > > 	rte_ethdev_ioctl(port_id, <TLV encoding VF_PING service and VF id>);
> > > instead of
> > > 	rte_pmd_ixgbe_vf_ping(port_id, vf_id);
> > > 	rte_pmd_i40e_vf_ping(port_id, vf_id);
> > > Please confirm I understand what you are thinking about.
> > 
> > Yep, you read my mind correctly :)
> > Konstantin
> > 
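
To make the contrast concrete, the two styles could look roughly as
below. This is only a sketch: rte_ethdev_ioctl, the request enum and
the argument struct are made-up names, nothing of the kind exists in
the tree today.

#include <stdint.h>
#include <stddef.h>

/* ioctl style: one generic entry point in librte_ether; the request
 * id plus an opaque argument select the driver-specific service. */
enum rte_ethdev_ioctl_req {
	RTE_ETHDEV_IOCTL_VF_PING,	/* arg: struct rte_eth_vf_ping_arg */
	/* ... further vendor-specific services ... */
};

struct rte_eth_vf_ping_arg {
	uint16_t vf_id;
};

int rte_ethdev_ioctl(uint8_t port_id, enum rte_ethdev_ioctl_req req,
		void *arg, size_t arg_size);

/* function style: one well-typed function per driver and per feature,
 * exported from driver-specific headers (rte_pmd_ixgbe.h, ...). */
int rte_pmd_ixgbe_vf_ping(uint8_t port_id, uint16_t vf_id);
int rte_pmd_i40e_vf_ping(uint8_t port_id, uint16_t vf_id);

/* An application pinging VF 1 would then do either: */
static int ping_vf1_ioctl(uint8_t port_id)
{
	struct rte_eth_vf_ping_arg arg = { .vf_id = 1 };

	/* driver-agnostic: works on any PMD implementing the service */
	return rte_ethdev_ioctl(port_id, RTE_ETHDEV_IOCTL_VF_PING,
			&arg, sizeof(arg));
}

static int ping_vf1_direct(uint8_t port_id)
{
	/* the application must know it is running on an ixgbe port */
	return rte_pmd_ixgbe_vf_ping(port_id, 1);
}

The ioctl variant keeps the application driver-agnostic at the cost of
compile-time type checking on the argument; the named functions give
full type safety but tie the caller to one PMD.
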
> Adding the pmd_ops field to struct eth_dev_ops {}, discussed previously
> in this email thread, will allow driver-specific functions for multiple
> drivers and will get rid of the driver-specific header file
> rte_pmd_driver.h.
> Would this be an acceptable solution?

How would pmd_ops be different from eth_dev_ops?
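
As a sketch of how I read that proposal (the pmd_ops member and
struct ixgbe_pmd_ops are hypothetical, rte_ethdev.h contains nothing
like this today):

#include <stdint.h>

/* driver-private operations, defined by each PMD */
struct ixgbe_pmd_ops {
	int (*vf_ping)(uint8_t port_id, uint16_t vf_id);
	/* ... other ixgbe-specific hooks ... */
};

/* generic per-device ops table in librte_ether (heavily trimmed) */
struct eth_dev_ops {
	/* ... existing generic callbacks (dev_start, dev_stop, ...) ... */
	void *pmd_ops;	/* opaque pointer to driver-specific ops */
};

/* To call through it, the application (or a shim on top of it) still
 * needs the concrete type, e.g.:
 *
 *	const struct ixgbe_pmd_ops *ops = dev->dev_ops->pmd_ops;
 *	ops->vf_ping(port_id, vf_id);
 *
 * so a driver-specific header is still required for the definition. */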

