[dpdk-dev] Having troubles binding an SR-IOV VF to uio_pci_generic on Amazon instance

Avi Kivity avi at scylladb.com
Thu Oct 1 12:50:10 CEST 2015



On 10/01/2015 01:38 PM, Michael S. Tsirkin wrote:
> On Thu, Oct 01, 2015 at 12:59:47PM +0300, Avi Kivity wrote:
>>
>> On 10/01/2015 12:55 PM, Michael S. Tsirkin wrote:
>>> On Thu, Oct 01, 2015 at 12:22:46PM +0300, Avi Kivity wrote:
>>>> It's easy to claim that
>>>> a solution is around the corner, only no one was looking for it, but the
>>>> reality is that kernel bypass has been a solution for years for high
>>>> performance users,
>>> I never said that it's trivial.
>>>
>>> It's probably a lot of work. It's definitely more work than just abusing
>>> sysfs.
>>>
>>> But it looks like a write system call into an eventfd is about 1.5
>>> microseconds on my laptop. Even with a system call per packet, system
>>> call overhead is not what makes DPDK drivers outperform Linux ones.
>>>
>> 1.5 us = 0.6 Mpps per core limit.
> Oh, I calculated it incorrectly. It's 0.15 us. So 6Mpps.

You also trimmed the extra work I mentioned that would need to be 
done.  Maybe your ring proxy can work, maybe it can't.  In any case 
it's a hefty chunk of work.  Should this work block users from using 
their VFs, if they happen to need interrupt support?

> But for RX, you can batch a lot of packets.
>
> You can see by now I'm not that good at benchmarking.
> Here's what I wrote:
>
>
> #include <stdbool.h>
> #include <sys/eventfd.h>
> #include <inttypes.h>
> #include <unistd.h>
>
>
> int main(int argc, char **argv)
> {
>          int e = eventfd(0, 0);
>          uint64_t v = 1;
>
>          int i;
>
>          for (i = 0; i < 10000000; ++i) {
>                  write(e, &v, sizeof v);
>          }
> }
>
>
> This takes 1.5 seconds to run on my laptop:
>
> $ time ./a.out
>
> real    0m1.507s
> user    0m0.179s
> sys     0m1.328s
>
>
>> dpdk performance is in the tens of
>> millions of packets per system.
> I think that's with a bunch of batching though.

Yes, and it's with their application code running as well.  They didn't 
reach that kind of performance by spending cycles unnecessarily.

I'm not saying that the ring proxy is not workable; just that we don't 
know whether it is or not, and I don't think that a patch that enables 
_existing functionality_ for VFs should be blocked in favor of a new and 
unproven approach.

>
>> It's not just the lack of system calls, of course, the architecture is
>> completely different.
> Absolutely - I'm not saying move all of DPDK into kernel.
> We just need to protect the RX rings so hardware does
> not corrupt kernel memory.
>
>
> Thinking about it some more, many devices
> have separate rings for DMA: TX (device reads memory)
> and RX (device writes memory).
> With such devices, a mode where userspace can write TX ring
> but not RX ring might make sense.

I'm sure you can cause havoc just by reading, if you read from I/O memory.

>
> This will mean userspace might read kernel memory
> through the device, but can not corrupt it.
>
> That's already a big win!
>
> And RX buffers do not have to be added one at a time.
> If we assume 0.2 usec per system call, batching some 100 buffers per
> system call gives you 2 nanoseconds of overhead per buffer.  That
> seems quite reasonable.

You're ignoring the page table walk and other per-descriptor processing.

Again^2, maybe this can work.  But it shouldn't block a patch enabling 
interrupt support for VFs.  After the ring proxy is available and proven 
for a few years, we can deprecate bus mastering in uio, and after a 
few more years remove it.


