[dpdk-dev] Mellanox Flow Steering

Raghav Sethi raghavs at CS.Princeton.EDU
Mon Apr 13 20:01:15 CEST 2015


Hi Olga,

Thanks for clarifying. It appears that the mlx driver does not allow me to
modify RSS options. lib/libret_pmd_mlx4/mlx4.c file states that RSS hash
key and options cannot be modified. However, I will need to modify the hash
function to be an identity/mask function, and the key to be dst mac for my
application.

Would it be correct to conclude that I cannot route packets to cores based
on dst mac using the Mellanox?

If so, given that I have complete control over the packet headers, is there
any other way to ensure deterministic, but equal partitioning of 5-tuple
space across cores using the Mellanox card? My application uses UDP, so I'm
not really concerned about flows. I'm sure the default RSS function
attempts to do just this, but some links to documentation/code for
DPDK+mlx4 default RSS would be great.

Best,
Raghav

On Sun, Apr 12, 2015 at 4:39 PM Olga Shern <olgas at mellanox.com> wrote:

> Hi Raghav,
>
> You are right with your observations,  Mellanox PMD and mlx4_en (kernel
> driver) are co-exist.
> When DPDK application run, all traffic is redirected to DPDK application.
> When DPDK application exit the traffic is received by mlx4_en driver.
>
> Regarding ethtool configuration you did, it influence only mlx4_en driver,
> it doesn't influence Mellanox PMD queues.
>
> Mellanox PMD doesn't support Flow Director, like you mention, and we are
> working to add it.
> Currently the only way to spread traffic between different PMD queues is
> using RSS.
>
> Best Regards,
> Olga
>
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 7:18 PM
> To: Zhou, Danny; dev at dpdk.org
> Subject: Re: [dpdk-dev] Mellanox Flow Steering
>
> Hi Danny,
>
> Thanks, that's helpful. However, Mellanox cards don't support Intel Flow
> Director, so how would one go about installing these rules in the NIC? The
> only technique the Mellanox User Manual (
>
> http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf
> )
> lists to use Flow Steering is the ethtool based method.
>
> Additionally, the mlx4_core driver is used both by DPDK PMD and otherwise
> (unlike the igb_uio driver, which needs to be loaded to use PMD) and it
> seems weird that only the packets affected by the rules don't hit the DPDK
> application. That indicates to me that the NIC is dealing with the rules
> somehow even though a DPDK application is running.
>
> Best,
> Raghav
>
> On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou at intel.com> wrote:
>
> > Currently, the DPDK PMD and NIC kernel driver cannot drive a same NIC
> > device simultaneously. When you use ethtool to setup flow director
> > filter, the rules are written to NIC via ethtool support in kernel
> > driver. But when DPDK PMD is loaded to drive same device, the rules
> > previously written by ethtool/kernel_driver will be invalid, so you
> > may have to use DPDK APIs to rewrite your rules to the NIC again.
> >
> > The bifurcated driver is designed to provide a solution to support the
> > kernel driver and DPDK coexist scenarios, but it has security concern
> > so netdev maintainer rejects it.
> >
> > It should not be a Mellanox hardware problem, if you try it on Intel
> > NIC the result is same.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Raghav Sethi
> > > Sent: Sunday, April 12, 2015 1:10 PM
> > > To: dev at dpdk.org
> > > Subject: [dpdk-dev] Mellanox Flow Steering
> > >
> > > Hi folks,
> > >
> > > I'm trying to use the flow steering features of the Mellanox card to
> > > effectively use a multicore server for a benchmark.
> > >
> > > The system has a single-port Mellanox ConnectX-3 EN, and I want to
> > > use 4
> > of
> > > the 32 cores present and 4 of the 16 RX queues supported by the
> > > hardware (i.e. one RX queue per core).
> > >
> > > I assign RX queues to each of the cores, but obviously without flow
> > > steering (all the packets have the same IP and UDP headers, but
> > > different dest MACs in the ethernet headers) each of the packets hits
> one core.
> > I've
> > > set up the client such that it sends packets with a different
> > > destination MAC for each RX queue (e.g. RX queue 1 should get
> > > 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01 and so on).
> > >
> > > I try to accomplish this by using ethtool to set flow steering rules
> > (e.g.
> > > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc
> > > 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc
> 2..).
> > >
> > > As soon as I set up these rules though, packets matching them just
> > > stop hitting my application. All other packets go through, and
> > > removing the rules also causes the packets to go through. I'm pretty
> > > sure my
> > application
> > > is looking at all the queues, but I tried changing the rules to try
> > > a
> > rule
> > > for every single destination RX queue (0-16), and that doesn't work
> > either.
> > >
> > > If it helps, my code is based on the l2fwd sample application, and
> > > is
> > here:
> > > https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> > >
> > > Also, I added the following to my /etc/init.d: options mlx4_core
> > > log_num_mgm_entry_size=-1, and restarted the driver before any of
> > > these tests.
> > >
> > > Any ideas what might be causing my packets to drop? In case this is
> > > a Mellanox issue, should I be talking to their customer support?
> > >
> > > Best,
> > > Raghav Sethi
> >
>


More information about the dev mailing list