[dpdk-dev] [PATCH] gpudev: introduce memory API

Jerin Jacob jerinjacobk at gmail.com
Tue Jun 8 06:10:14 CEST 2021


On Mon, Jun 7, 2021 at 10:17 PM Thomas Monjalon <thomas at monjalon.net> wrote:
>
> 07/06/2021 15:54, Jerin Jacob:
> > On Mon, Jun 7, 2021 at 4:13 PM Thomas Monjalon <thomas at monjalon.net> wrote:
> > > 07/06/2021 09:20, Wang, Haiyue:
> > > > From: Honnappa Nagarahalli <Honnappa.Nagarahalli at arm.com>
> > > > > If we keep CXL in mind, I would imagine that in the future the devices on PCIe could have their own
> > > > > local memory. Maybe some of the APIs could use generic names. For example, instead of calling it
> > > > > "rte_gpu_malloc", we could call it "rte_dev_malloc". This way any future device which hosts
> > > > > its own memory that needs to be managed by the application can use these APIs.
> > > > >
> > > >
> > > > "rte_dev_malloc" sounds like a good name,
> > >
> > > Yes I like the idea.
> > > 2 concerns:
> > >
> > > 1/ Device memory allocation requires a device handle.
> > > So far we avoided exposing rte_device to the application.
> > > How should we get a device handle from a DPDK application?
> >
> > Each device behaves differently at this level. In the view of a
> > generic application, the architecture should look like:
> >
> > < Use DPDK subsystems such as rte_ethdev, rte_bbdev, etc. for a SPECIFIC function >
> >                     ^
> >                     |
> >              < DPDK driver >
> >                     ^
> >                     |
> > < rte_device with the new callbacks >
>
> I think the formatting went wrong above.
>
> I would add more to the block diagram:
>
> class device API    -    computing device API
>         |                          |
> class device driver  -  computing device driver
>         |                          |
>        EAL device with memory callback
>
> The idea above is that the class device driver can use services
> of the new computing device library.

Yes. The question is: do we need any public DPDK _application_ API for that?
If it is a public API, then the scope is much bigger than that, as the
application can use it directly, and that makes it non-portable.

If the scope is only class-driver consumption, then an existing
"bus"-like abstraction/API makes sense to me.

Where it abstracts:
- FW download to the device
- Memory management of the device
- An opaque way to enqueue/dequeue jobs to the device

And the above should be consumed by the "class driver", not the "application".
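The three abstracted services could be sketched as a per-device ops vector that only a class driver consumes. Every name below (rte_dev_compute_ops, the callbacks, the toy backing functions) is hypothetical and does not exist in DPDK; this is just a shape sketch, with host malloc() standing in for a real device driver:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical ops vector at the rte_device level, mirroring the three
 * services listed above. None of these names exist in DPDK today. */
struct rte_dev_compute_ops {
	int   (*fw_download)(void *dev_priv, const void *image, size_t len);
	void *(*mem_alloc)(void *dev_priv, size_t size);
	void  (*mem_free)(void *dev_priv, void *ptr);
	int   (*job_enqueue)(void *dev_priv, void *job);
	void *(*job_dequeue)(void *dev_priv);
};

/* Toy "driver" backing the memory callbacks with host malloc()/free(),
 * only to exercise the shape of the abstraction. */
static void *toy_alloc(void *dev_priv, size_t size)
{
	(void)dev_priv;
	return malloc(size);
}

static void toy_free(void *dev_priv, void *ptr)
{
	(void)dev_priv;
	free(ptr);
}

/* A class driver (e.g. an ethdev or crypto PMD) would be the consumer
 * of the ops; the application never touches them directly. */
static void *class_driver_get_buffer(const struct rte_dev_compute_ops *ops,
				     void *dev_priv, size_t size)
{
	return (ops != NULL && ops->mem_alloc != NULL) ?
	       ops->mem_alloc(dev_priv, size) : NULL;
}
```

The point of the sketch is the consumer: the ops stay internal, behind whatever public subsystem API (ethdev, cryptodev, ...) the class driver implements.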

If the application does that directly, we are in rte_rawdev territory.


> One basic API service is to provide a device ID for the memory callback.
> Other services are for execution control.
>
> > An implementation may decide to have "in tree" or "out of tree"
> > drivers or rte_device implementations.
> > But generic DPDK applications should not use devices directly, i.e.
> > rte_device needs to have this callback, and the
> > mlx ethdev/crypto drivers use it to implement the public API.
> > Otherwise, it is the same as rawdev in DPDK,
> > so I am not sure what this brings beyond rawdev if we are not
> > taking the above architecture.
> >
> > >
> > > 2/ Implementation must be done in a driver.
> > > Should it be a callback defined at rte_device level?
> >
> > IMO, Yes and DPDK subsystem drivers to use it.
>
> I'm not sure subsystems should bypass the API for device memory.
> We could do some generic work in the API function and call
> the driver callback only for device-specific stuff.
> In such a case, the callback and the API would be
> in the computing device library.
> On the other hand, having the callback and API in EAL would allow
> having a common function for memory allocation in EAL.
>
> Another thought: I would like to unify memory allocation in DPDK
> with the same set of flags in a single function.
> A flag could be used to target devices instead of the running CPU,
> and the same parameter could be shared for the device ID or NUMA node.
>
>
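The unification idea quoted above could look roughly like the sketch below. The flag and function names are purely hypothetical (no such API exists in DPDK), with the single "target" parameter carrying either a NUMA node or a device ID depending on the flag, as suggested:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical flag; it only illustrates the "target devices instead
 * of the running CPU" proposal above. */
#define RTE_MALLOC_ON_DEVICE 0x1u

/* "target" is the NUMA node for host allocations, or the device ID
 * when RTE_MALLOC_ON_DEVICE is set. */
static void *unified_malloc(size_t size, int target, unsigned int flags)
{
	if (flags & RTE_MALLOC_ON_DEVICE) {
		/* Would dispatch to the driver callback of device
		 * "target"; stubbed out here as there is no device. */
		(void)target;
		return NULL;
	}
	/* Host path: a real implementation would honour the NUMA
	 * node in "target"; plain malloc() stands in for it here. */
	(void)target;
	return malloc(size);
}
```

Whether such a flag belongs in EAL or in a separate computing-device library is exactly the open question in this thread.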

