[dpdk-dev] rte_mbuf size for jumbo frame

Saurabh Mishra saurabh.globe at gmail.com
Tue Jan 26 18:14:57 CET 2016


Hi Lawrence --

>It sounds like you benchmarked Apache using Jumbo Packets, but not the
>DPDK app using large mbufs.
>Those are two entirely different issues.

I meant that I ran the Apache benchmark between two guest VMs through our
data-processing VM, which uses DPDK.

I saw 3x better performance with a 10 KB mbuf size vs. a 2 KB mbuf size
(with the MTU also set appropriately).

Unfortunately, we can't handle chained mbufs unless we copy them into a
large buffer. Even if we do start handling chained mbufs, we can't inspect
scattered mbuf payloads; we would have to coalesce them into one buffer
anyway to make sense of the packet's content. We inspect the full packet
(from the first byte to the last).

Thanks,
/Saurabh

On Tue, Jan 26, 2016 at 8:50 AM, Lawrence MacIntyre <macintyrelp at ornl.gov>
wrote:

> Saurabh:
>
> It sounds like you benchmarked Apache using Jumbo Packets, but not the
> DPDK app using large mbufs. Those are two entirely different issues.
>
> You should be able to write your packet inspection routines to work with
> the mbuf chains, rather than copying them into a larger buffer (although if
> there are multiple passes through the data, it could be a bit complicated).
> Copying the data into a larger buffer will definitely cause the application
> to be slower.
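>
> As a minimal sketch of that approach (the callback type and the names here
> are illustrative, not DPDK API), the scan walks the segment list and hands
> each piece to a matcher that keeps its own state across segments:
>
> #include <stdint.h>
> #include <rte_mbuf.h>
>
> /* Hypothetical matcher callback: consumes the payload piecewise and keeps
>  * its own state across segment boundaries. */
> typedef void (*inspect_cb_t)(void *ctx, const uint8_t *data, uint16_t len);
>
> /* Feed each mbuf segment, in order, to the matcher without coalescing. */
> static void
> inspect_chain(const struct rte_mbuf *m, inspect_cb_t cb, void *ctx)
> {
>         for (; m != NULL; m = m->next)
>                 cb(ctx, rte_pktmbuf_mtod(m, const uint8_t *),
>                    rte_pktmbuf_data_len(m));
> }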
>
> Lawrence
>
>
> This one time (01/26/2016 09:40 AM), at band camp, Saurabh Mishra wrote:
>
> Hi,
>
> Since we do full content inspection, we will end up coalescing mbuf chains
> into one before inspecting the packet, which would require allocating
> another, larger buffer.
>
> I am inclined towards larger size mbuf for this reason.
>
> I have benchmarked a bit using the Apache benchmark, and we see a 3x
> performance improvement over a 1500-byte MTU. Memory is not an issue.
>
> My only concern is: would all the DPDK drivers work with a larger mbuf
> size?
>
> Thanks,
> Saurabh
> On Jan 26, 2016 6:23 AM, "Lawrence MacIntyre" <macintyrelp at ornl.gov>
> wrote:
>
>> Saurabh:
>>
>> Raising the mbuf size will make the packet handling for large packets
>> slightly more efficient, but it will use much more memory unless the great
>> majority of the packets you are handling are of the jumbo size. Using more
>> memory has its own costs. In order to evaluate this design choice, it is
>> necessary to understand the behavior of the memory subsystem, which is VERY
>> complicated.
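>>
>> To put rough numbers on that trade-off, here is a sketch using
>> rte_pktmbuf_pool_create(); the mbuf count, cache size, and pool names are
>> arbitrary example values:
>>
>> #include <rte_lcore.h>
>> #include <rte_mbuf.h>
>> #include <rte_mempool.h>
>>
>> static void
>> create_example_pools(void)
>> {
>>         /* ~2 KB data room per mbuf: 65536 mbufs is roughly 140 MB of
>>          * packet buffer memory. */
>>         struct rte_mempool *p2k = rte_pktmbuf_pool_create("mbuf_2k",
>>                 65536, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
>>
>>         /* ~10.4 KB data room per mbuf: the same 65536 mbufs comes to
>>          * roughly 690 MB, whether or not the traffic is actually jumbo. */
>>         struct rte_mempool *p10k = rte_pktmbuf_pool_create("mbuf_10k",
>>                 65536, 256, 0, 10400 + RTE_PKTMBUF_HEADROOM,
>>                 rte_socket_id());
>>
>>         /* Error handling omitted for brevity. */
>>         (void)p2k;
>>         (void)p10k;
>> }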
>>
>> Before you go down this path, at least benchmark your application using
>> the regular-sized mbufs and the large ones and see what the effect is.
>>
>> This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A wrote:
>>
>>> Jumbo frames are generally handled by linked lists (but called something
>>> else) of mbufs.
>>> Enabling jumbo frames for the device driver should enable the part of
>>> the driver that handles the linked lists.
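>>>
>>> Roughly, the device-level knobs involved are the rxmode fields in the
>>> port configuration (the max_rx_pkt_len value below is only an example):
>>>
>>> #include <rte_ethdev.h>
>>>
>>> /* Accept jumbo frames and let the PMD scatter each oversized frame
>>>  * across a chain of normal-sized mbufs. */
>>> static const struct rte_eth_conf port_conf = {
>>>         .rxmode = {
>>>                 .max_rx_pkt_len = 9018, /* example jumbo frame size */
>>>                 .jumbo_frame    = 1,    /* accept frames > 1518 bytes */
>>>                 .enable_scatter = 1,    /* allow chained (multi-mbuf) RX */
>>>         },
>>> };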
>>>
>>> Don't make the mbufs huge.
>>>
>>> Mike
>>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Masaru OKI
>>> Sent: Monday, January 25, 2016 2:41 PM
>>> To: Saurabh Mishra; users at dpdk.org; dev at dpdk.org
>>> Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
>>>
>>> Hi,
>>>
>>> 1. Take care of the element size of the mbuf mempool.
>>> 2. Call rte_eth_dev_set_mtu() for each interface.
>>>      Note that some PMDs do not support changing the MTU.
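>>>
>>> A minimal sketch of point 2, checking the return value because some PMDs
>>> will refuse the change (the function name and looping over every port are
>>> just an example):
>>>
>>> #include <stdio.h>
>>> #include <rte_ethdev.h>
>>>
>>> /* Try to raise the MTU on every detected port. PMDs without an mtu_set
>>>  * operation return -ENOTSUP, so do not assume the change took effect. */
>>> static void
>>> set_jumbo_mtu(uint16_t mtu)
>>> {
>>>         uint8_t port;
>>>
>>>         for (port = 0; port < rte_eth_dev_count(); port++) {
>>>                 int ret = rte_eth_dev_set_mtu(port, mtu);
>>>
>>>                 if (ret != 0)
>>>                         printf("port %u: set_mtu(%u) failed: %d\n",
>>>                                (unsigned)port, (unsigned)mtu, ret);
>>>         }
>>> }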
>>>
>>> On 2016/01/26 6:02, Saurabh Mishra wrote:
>>>
>>>> Hi,
>>>>
>>>> We wanted to use an rte_mbuf size of 10400 bytes to enable jumbo
>>>> frames.
>>>> Do you guys see any problem with that? Would all the drivers, like
>>>> ixgbe, i40e, vmxnet3, virtio and bnx2x, work with a larger rte_mbuf size?
>>>>
>>>> We would want to avoid dealing with chained mbufs.
>>>>
>>>> /Saurabh
>>>>
>>>
>> --
>> Lawrence MacIntyre  macintyrelp at ornl.gov  Oak Ridge National Laboratory
>>  865.574.7401  Cyber Space and Information Intelligence Research Group
>>
>>
> --
> Lawrence MacIntyre  macintyrelp at ornl.gov  Oak Ridge National Laboratory
>  865.574.7401  Cyber Space and Information Intelligence Research Group
>
>

