[dpdk-dev] Appropriate DPDK data structures for TCP sockets

Matt Laswell laswell at infiniteio.com
Mon Feb 23 15:48:57 CET 2015


Hey Matthew,

I've mostly worked on stackless systems over the last few years, but I have
done a fair bit of work on high performance, highly scalable connection
tracking data structures.  In that spirit, here are a few counterintuitive
insights I've gained over the years.  Perhaps they'll be useful to you.
Apologies in advance for likely being a bit long-winded.

First, you really need to take cache performance into account when you're
choosing a data structure.  Something like a balanced tree can seem awfully
appealing at first blush, either on its own or as a chaining mechanism for
a hash table.  But the problem with trees is that there isn't much
locality of reference in your memory use - every step in your descent is
another cache miss.  This hurts you twice: once when you stall waiting
for the next node to load from main memory, and again when you have to
reload whatever that fetch pushed out of cache.

It's often better if, instead of a tree, you do a linear search across
arrays of hash values.  It's easy to size the array so that it is exactly
one cache line long, and you can generally search the whole thing in less
time than a single cache-line fill takes.  If you find a match, you can
then verify against the full tuple as needed.
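To make that concrete, here's a rough sketch of one such cache-line-sized bucket.  The layout and names (BUCKET_ENTRIES, bucket_lookup) are my own for illustration, not any real DPDK API; the idea is just sixteen 32-bit signatures packed into one 64-byte line:

```c
#include <stdint.h>

/* One bucket sized to a single 64-byte cache line: sixteen 32-bit
 * hash signatures.  A linear scan touches exactly one line, so it
 * usually finishes before a second cache-line fill even could. */
#define BUCKET_ENTRIES 16

struct bucket {
    uint32_t sig[BUCKET_ENTRIES];   /* 16 * 4 bytes = 64 bytes */
} __attribute__((aligned(64)));

/* Return the slot index of a matching signature, or -1.  On a hit the
 * caller still verifies the full 5-tuple (kept in a parallel array),
 * which also screens out false positives from empty/colliding slots. */
static int bucket_lookup(const struct bucket *b, uint32_t sig)
{
    for (int i = 0; i < BUCKET_ENTRIES; i++)
        if (b->sig[i] == sig)
            return i;
    return -1;
}
```

The full connection records live elsewhere; only the 4-byte signatures are scanned, which is what keeps the search inside one line.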

Second, rather than synchronizing (perhaps with locks, perhaps with
lockless data structures), it's often beneficial to create multiple
threads, each of which holds a fraction of your connection tracking data.
Every connection belongs to a single one of these threads, selected perhaps
by hash or RSS value, and all packets from the connection go through that
single thread.  This approach has a couple of advantages.  First,
obviously, there are no synchronization slowdowns.  Second, I've found
that when you spread packets from a single connection across many compute
elements, you inevitably start delivering packets out of order.  In many
applications this ultimately means extra processing to put things back in
order, which gives away the performance you gained.  Of course, this
approach brings its own complexities and challenges for your application,
and it doesn't always spread the work evenly across all of your cores.
But it might be worth considering.
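The steering step is simple once you have a flow hash.  A minimal sketch, assuming the RSS hash from the NIC is available per packet (NUM_SHARDS and flow_to_shard are illustrative names, not DPDK APIs):

```c
#include <stdint.h>

/* Each worker thread owns one shard of the connection table.  Every
 * packet of a flow maps to the same shard via its RSS hash, so no
 * shard is ever touched by two threads and no locks are needed. */
#define NUM_SHARDS 8   /* typically: number of worker lcores */

static inline unsigned flow_to_shard(uint32_t rss_hash)
{
    return rss_hash % NUM_SHARDS;
}
```

In a DPDK application you'd typically hand the packet to the owning worker over a per-shard ring; the key property is only that the mapping is deterministic per flow.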

Third, it's very worthwhile to have a cache for the most recently accessed
connection.  First, because network traffic is bursty, you'll frequently
see multiple packets from the same connection in succession.
Second, because it can make life easier for your application code.  If you
have multiple places that need to access connection data, you don't have to
worry so much about the cost of repeated searches.  Again, this may or may
not matter for your particular application.  But for ones I've worked on,
it's been a win.

Anyway, as predicted, this post has gone far too long for a Monday
morning.  Regardless, I hope you found it useful.  Let me know if you have
questions or comments.

--
Matt Laswell
infinite io, inc.
laswell at infiniteio.com

On Sun, Feb 22, 2015 at 10:50 PM, Matthew Hall <mhall at mhcomputing.net>
wrote:

>
> On Feb 22, 2015, at 4:02 PM, Stephen Hemminger <stephen at networkplumber.org>
> wrote:
> > Use userspace RCU? or BSD RB_TREE
>
> Thanks Stephen,
>
> I think the RB_TREE stuff is single threaded mostly.
>
> But user-space RCU looks quite good indeed, I didn't know somebody ported
> it out of the kernel. I'll check it out.
>
> Matthew.
