[dpdk-dev] [RFC PATCH 0/1] mldev: introduce machine learning device library

Thomas Monjalon thomas at monjalon.net
Wed Jan 25 14:45:16 CET 2023


17/08/2022 08:58, Morten Brørup:
> > From: Jerin Jacob [mailto:jerinjacobk at gmail.com]
> > Sent: Wednesday, 17 August 2022 07.37
> > 
> > On Tue, Aug 16, 2022 at 9:15 PM Morten Brørup
> > <mb at smartsharesystems.com> wrote:
> > >
> > > > From: Jerin Jacob [mailto:jerinjacobk at gmail.com]
> > > > Sent: Tuesday, 16 August 2022 15.13
> > > >
> > > > On Wed, Aug 3, 2022 at 8:49 PM Stephen Hemminger
> > > > <stephen at networkplumber.org> wrote:
> > > > >
> > > > > On Wed, 3 Aug 2022 18:58:37 +0530
> > > > > <jerinj at marvell.com> wrote:
> > > > >
> > > > > > Roadmap
> > > > > > -------
> > > > > > 1) Address the comments for this RFC.
> > > > > > 2) Common code for mldev
> > > > > > 3) SW mldev driver based on TVM (https://tvm.apache.org/)
> > > > >
> > > > > Having a SW implementation is important because then it can be
> > > > > covered by tests.
> > > >
> > > > Yes. That is the reason for adding a TVM-based SW driver as item (3).
> > > >
> > > > Are there any other high-level or API-level comments before
> > > > proceeding with v1 and the implementation?
> > >
> > > Have you seriously considered whether the DPDK Project is the best home
> > > for this project? I can easily imagine the DPDK development process
> > > being a hindrance in many aspects for an evolving AI/ML library. Off
> > > the top of my head, it would probably be better off as a separate
> > > project, like SPDK.
> > 
> > Yes. The reasons are the following:
> > 
> > # AI/ML compiler libraries are more focused on model creation and
> > training etc. (that is where the AI/ML libraries can offer actual
> > value addition), with only a minimal part for inference (it is just
> > added for testing the model).
> > # Considering that inference is the scope for DPDK, DPDK is an ideal
> > place for the following reasons:
> > 
> > a) The inference scope is very limited.
> > b) Avoid memcpy of inference data (use it directly from the network or
> > another class of device like cryptodev, regexdev).
> > c) Reuse high-speed IO interfaces like PCI-backed drivers etc.
> > d) Integration with other DPDK subsystems like eventdev etc. for job
> > completion.
> > e) Also support more inline offloads by merging two device classes,
> > as rte_security does.
> > f) Run the inference model from different AI/ML compiler frameworks, or
> > abstract the inference usage.
> > A similar concept is already applied to other DPDK device classes:
> > 1) In regexdev, the compiler generates the rule database, which is out
> > of scope of DPDK. The DPDK API just loads the rule database.
> > 2) In gpudev, the GPU kernel etc. are out of scope of DPDK. DPDK cares
> > about the IO interface.
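
To make point (f) and the regexdev/gpudev analogy above concrete, here is a
rough sketch of the intended split of responsibilities. The names and types
below are placeholders invented for illustration, not the rte_ml_* definitions
proposed in the RFC: the model is compiled and trained outside DPDK, and the
DPDK side is limited to loading that artifact and running inference on a
buffer that may come straight from another device class (no extra memcpy).

/*
 * Illustrative sketch only: placeholder names, not the proposed rte_ml_* API.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct ml_model { const void *blob; size_t len; }; /* pre-compiled, opaque */
struct ml_op { const void *input; float *output; };/* one inference job */

/* Stand-ins for driver entry points; a real driver would program HW. */
static struct ml_model
ml_model_load(const void *blob, size_t len)
{
	return (struct ml_model){ .blob = blob, .len = len };
}

static void
ml_infer(const struct ml_model *m, struct ml_op *op)
{
	(void)m;
	op->output[0] = 0.5f; /* pretend the model emits one score */
}

int
main(void)
{
	uint8_t compiled_model[64] = { 0 }; /* produced outside DPDK, e.g. by TVM */
	uint8_t pkt_payload[128] = { 0 };   /* e.g. points into a received mbuf */
	float score;

	struct ml_model m = ml_model_load(compiled_model, sizeof(compiled_model));
	struct ml_op op = { .input = pkt_payload, .output = &score };

	ml_infer(&m, &op);
	printf("inference score: %f\n", score);
	return 0;
}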
> 
> Thank you for the detailed reply, Jerin.
> 
> These are good reasons for adding the new device class to the DPDK project - especially the regexdev comparison convinced me.
> 
> > 
> > > If all this stuff can be completely omitted at build time, I have no
> > > objections.
> > 
> > Yes, it can be completely omitted at build time.
> 
> Perfect.
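
As a side note on how "completely omitted at build time" typically plays out
for an application: the library would normally be switched off through the
build system (for example a meson disable option, assuming the usual DPDK
mechanism applies here), and application code can then guard its usage with
the RTE_LIB_<NAME> macro convention. The macro and header names below are
assumptions for illustration, not something fixed by the RFC.

#include <stdio.h>

#ifdef RTE_LIB_MLDEV
#include <rte_mldev.h>	/* header name assumed for illustration */
#endif

int
main(void)
{
#ifdef RTE_LIB_MLDEV
	printf("built with mldev support\n");
#else
	printf("built without mldev support\n");
#endif
	return 0;
}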
> 
> > Also, there is no plan to integrate with testpmd and other existing
> > applications. Planning to add only the app/test-mldev application.
> 
> +1 to that
> 
> > 
> > >
> > > A small note about naming (not intending to start a flame war, so
> > > please feel free to ignore!): I haven't worked seriously with ML/AI
> > > since university three decades ago, so I'm quite rusty in the domain.
> > > However, I don't see any Machine Learning functions proposed by this
> > > API. The library provides an API to an Inference Engine - but nobody
> > > says the inference model stems from Machine Learning; it might as well
> > > be a hand-crafted model. Do you plan to propose APIs for training the
> > > models? If not, the name of the library could confuse some potential
> > > users.
> > 
> > No, the scope is only inference, and that is documented in the programming
> > guide and API header file. I am trying to keep the name similar to
> > regexdev, gpudev etc., which have a similar scope. But I am open to another
> > short name if you have something in mind.
> 
> The AI(Artificial Intelligence)/ML(Machine Learning)/IE(Inference Engine) chip market still seems immature and fragmented, so I can't find any consensus on generic names for such hardware accelerator devices.
> 
> Some of the chip vendors represented on the DPDK mailing list offer AI/ML/IE accelerator chips. Perhaps their marketing departments could propose alternatives to "Machine Learning Device"/"mldev" for inference engine devices (with no acceleration for training the models). If not, the initially proposed name is good enough.
> 
> So: Everyone ask your marketing departments and speak up now, or the name "mldev" will be set in stone. ;-)
> 
> I'm thinking: While "Inference Engine Device"/iedev might be technically more correct, it doesn't have the same value as "Machine Learning Device"/"mldev" on a marketing scale. And we should choose a name that we expect might become an industry-standard consensus.

I don't know why, but I like mldev and dislike iedev.
I could be OK with aidev as well.
