[dpdk-dev] [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU

Ananyev, Konstantin konstantin.ananyev at intel.com
Thu Jan 18 13:14:05 CET 2018


Hi Simon,

> 
> Hi, Konstantin,
> On Tue, Jan 16, 2018 at 12:38:35PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of wei.guo.simon at gmail.com
> > > Sent: Saturday, January 13, 2018 2:35 AM
> > > To: Lu, Wenzhuo <wenzhuo.lu at intel.com>
> > > Cc: dev at dpdk.org; Thomas Monjalon <thomas at monjalon.net>; Simon Guo <wei.guo.simon at gmail.com>
> > > Subject: [dpdk-dev] [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU
> > >
> > > From: Simon Guo <wei.guo.simon at gmail.com>
> > >
> > > Currently an rx/tx queue is allocated from the buffer pool on the socket of:
> > > - the port's socket, if --port-numa-config is specified
> > > - or the per-port --ring-numa-config setting
> > >
> > > Both of the above "bind" a port's queues to a single socket via per-port
> > > configuration. Better performance can actually be achieved if one port's
> > > queues are spread across multiple NUMA nodes, with each rx/tx queue
> > > allocated on the socket of the lcpu that services it.
> > >
> > > This patch adds a new option "--ring-bind-lcpu" (no parameter). With
> > > this, testpmd can utilize the PCI-e bus bandwidth on other NUMA
> > > nodes.
> > >
> > > When the --port-numa-config or --ring-numa-config option is specified,
> > > this --ring-bind-lcpu option is ignored.
> >
> > Instead of introducing one more option - wouldn't it be better to
> > allow the user to manually define flows and assign them to particular lcores?
> > Then the user would be able to create any FWD configuration he/she likes.
> > Something like:
> > lcore X add flow rxq N,Y txq M,Z
> >
> > Which would mean: on lcore X, receive packets from port=N, rx_queue=Y,
> > and send them through port=M, tx_queue=Z.
> Thanks for the comment.
> Wouldn't that be too complicated a solution for the user, since flows
> would need to be defined specifically for each lcore? Modern platforms
> can have hundreds of lcores.

Why for all lcores?
Only for the ones that will do packet forwarding.
Also, if the configuration becomes too complex (or too big) to be written
manually, the user can write a script that generates the set of testpmd
commands needed to achieve the desired layout.
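As a rough sketch of that approach, a small shell script could emit one
flow-assignment command per forwarding lcore. Note that the
"lcore X add flow rxq N,Y txq M,Z" syntax is only a proposal in this thread,
not an existing testpmd command, and the port/queue numbering below is made
up purely for illustration:

```shell
#!/bin/sh
# Emit one proposed flow-assignment command per forwarding lcore:
# lcore i receives on port 0, rx_queue i and transmits on
# port 1, tx_queue i. (Hypothetical syntax from this discussion.)
gen_flow_cmds() {
    nb_fwd_lcores=$1
    i=0
    while [ "$i" -lt "$nb_fwd_lcores" ]; do
        echo "lcore $i add flow rxq 0,$i txq 1,$i"
        i=$((i + 1))
    done
}

# Example: generate commands for 4 forwarding lcores.
gen_flow_cmds 4
```

The generated lines could then be pasted into (or piped to) the testpmd
interactive prompt, so even large layouts need no manual typing.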
Konstantin

