[dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an offset

Venkatesan, Venky venky.venkatesan at intel.com
Tue May 13 15:54:07 CEST 2014


An alternative way to save 6 bytes (without the side effects this change has) would be to change the struct rte_mempool * in the mbuf to a uint16_t mempool_id. That limits the changes to the mbuf return (free) path, and the performance impact of that can be mitigated quite easily. 
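
A rough sketch of that alternative, purely for illustration (the table, the id field and the helper below are hypothetical, not something from this RFC):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hypothetical: instead of an 8-byte "struct rte_mempool *pool" in
     * each mbuf, keep a 2-byte index into a table of pools that is
     * filled in at mempool creation time. */
    #define MAX_MEMPOOLS 1024
    static struct rte_mempool *mempool_table[MAX_MEMPOOLS];

    /* The extra lookup is only paid on the mbuf return (free) path. */
    static inline void
    mbuf_return_to_pool(struct rte_mbuf *m, uint16_t pool_id)
    {
        rte_mempool_put(mempool_table[pool_id], m);
    }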

-Venky

-----Original Message-----
From: Neil Horman [mailto:nhorman at tuxdriver.com] 
Sent: Monday, May 12, 2014 11:40 AM
To: Venkatesan, Venky
Cc: Olivier MATZ; Thomas Monjalon; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an offset

On Mon, May 12, 2014 at 04:06:23PM +0000, Venkatesan, Venky wrote:
> Olivier,
> 
> The impact isn't going to be felt on the driver quite as much (and can 
> be mitigated) - the driver runs a pretty low IPC (~1.7) compared to 
> some of the more optimized code above it that actually accesses the 
> data. The problem with the dependent compute is like this - in effect 
> you are changing
> 
> struct eth_hdr * eth = (struct eth_hdr *) m->data;
> 
> to
> 
> struct eth_hdr * eth = (struct eth_hdr *) ((char *)m->buf_addr + m->data_offset);
> 
> We have some code that processes 4-8 packets in parallel (parse + hash), with a pretty high IPC. What we've done here is essentially replace a simple load with a load, load, add sequence in front of it. There is no real way to do these computations in parallel for multiple packets - they have to be done one or two at a time. What suffers quite significantly is the IPC of the overall function that does the parse/hash. It's those functions that I worry about more than the driver. I haven't yet been able to come up with a mitigation for this. 
> 
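> To make the parallel case concrete, the sketch below shows what such an inner loop would look like with the offset (illustrative only; mbuf names m0-m3 are made up, the field name follows the paraphrase above):
> 
>      /* Four packets handled per iteration (parse + hash). With the
>       * offset, every one of them needs a load of buf_addr, a load of
>       * data_offset and an add before its header can be read. */
>      struct eth_hdr *e0, *e1, *e2, *e3;
> 
>      e0 = (struct eth_hdr *)((char *)m0->buf_addr + m0->data_offset);
>      e1 = (struct eth_hdr *)((char *)m1->buf_addr + m1->data_offset);
>      e2 = (struct eth_hdr *)((char *)m2->buf_addr + m2->data_offset);
>      e3 = (struct eth_hdr *)((char *)m3->buf_addr + m3->data_offset);
> 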
> Neil,
> 
> The last time we looked at this change - and it has been a while - the negative effect on the upper-level functions built on this was on the order of 15-20%. It will probably get worse once we tune the code even more. Hope the above explanation gives you a flavour of the problem this will introduce. 
> 
I'm sorry, it doesn't.  I take you at your word that it was a problem, but I don't think we can just categorically deny patches based on past testing of potentially similar code, especially given that this series attempts to improve some traffic patterns via the implementation of TSO (meaning the net result will differ based on the use case).  

I understand what you're saying above, that this code incurs a second load operation (though I would think the loads could be issued in parallel, or at the very least accelerated by clever placement of data_offset relative to buf_addr to ensure that the second load is cache hot).
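
For what it's worth, the kind of placement I have in mind is simply keeping the two fields in the same cache line (a sketch only, not a proposal for the final layout):

    #include <stdint.h>

    /* If data_off sits next to buf_addr, the load of buf_addr pulls in
     * the cache line that also holds data_off, so the second load hits
     * a line that is already hot. */
    struct mbuf_layout_sketch {
        void *buf_addr;      /* loaded first                        */
        uint16_t data_off;   /* same cache line -> second load hot  */
        /* ... remaining mbuf fields ... */
    } __attribute__((aligned(64)));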

Regardless, my point is that just saying this can't be done because you saw a performance hit with something similar in the past isn't helpful.  If you think it's a problem, then we really need the details of your test case and the measurements you took, so that they can be reproduced and confirmed or refuted.

Regards
Neil.

> Regards,
> -Venky
> 
> 
> 
> 
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz at 6wind.com]
> Sent: Monday, May 12, 2014 8:07 AM
> To: Neil Horman; Venkatesan, Venky
> Cc: Thomas Monjalon; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer 
> by an offset
> 
> Hi Venky,
> 
> On 05/12/2014 04:41 PM, Neil Horman wrote:
> >> This is a hugely problematic change, and has a pretty large 
> >> performance impact (because of the dependency between compute and access). 
> >> We debated this for a long time during the early days of DPDK and 
> >> decided against it. This is also a repeated sequence - the driver 
> >> will do it twice (Rx + Tx) and the next level stack will do it 
> >> twice (Rx + Tx) ...
> >>
> >> My vote is to reject this particular change to the mbuf.
> >>
> >> Regards,
> >> -Venky
> >>
> > Do you have performance numbers to compare throughput with and 
> > without this change?  I always feel suspicious when I see the spectre 
> > of performance used to support or deny a change without supporting reasoning or metrics.
> 
> I agree with Neil. My feeling is that it won't impact performance, and this is consistent with the forwarding tests I've done with this patch.
> 
> I don't really understand what would cost more when storing the offset 
> instead of the virtual address. I agree that each time the stack 
> accesses the beginning of the mbuf data, there will be an arithmetic 
> operation, but it is compensated by other operations that will be
> accelerated:
> 
> - When receiving a packet, the driver will do:
> 
>      m->data_off = RTE_PKTMBUF_HEADROOM;
> 
>    instead of:
> 
>      m->data = (char*) m->buf_addr + RTE_PKTMBUF_HEADROOM;
> 
> - Each time the stack prepends data, it has to check whether the headroom
>    is large enough for the operation. This will be faster, as data_off
>    itself is the headroom (see the sketch after this list).
> 
> - When transmitting a packet, the driver will get the physical address:
> 
>      phys_addr = m->buf_physaddr + m->data_off;
> 
>    instead of:
> 
>      phys_addr = m->buf_physaddr +
>          ((char *)m->data - (char *)m->buf_addr);
> 
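> To make the second point concrete, a prepend helper could look like this
> with an offset (a sketch only; the helper name is made up and the length
> bookkeeping is omitted):
> 
>      static inline char *
>      pktmbuf_prepend_sketch(struct rte_mbuf *m, uint16_t len)
>      {
>          if (len > m->data_off)   /* data_off is exactly the headroom */
>              return NULL;
>          m->data_off -= len;
>          return (char *)m->buf_addr + m->data_off;
>      }
> 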
> Moreover, these operations look negligible to me (a few cycles) compared to the large number of arithmetic operations and tests done in the driver.
> 
> Regards,
> Olivier
> 

