[dpdk-dev] Why packet replication is more efficient when done using memcpy( ) as compared to rte_mbuf_refcnt_update() function?

Wiles, Keith keith.wiles at intel.com
Wed Apr 18 20:36:34 CEST 2018



> On Apr 18, 2018, at 11:43 AM, Shailja Pandey <csz168117 at iitd.ac.in> wrote:
> 
> Hello,
> 
> I am doing packet replication and I need to change the Ethernet and IP header fields for each replicated packet. I did it in two different ways:
> 
> 1. Share payload from the original packet using rte_mbuf_refcnt_update
>   and allocate new mbuf for L2-L4 headers.
> 2. memcpy() payload from the original packet to newly created mbuf and
>   prepend L2-L4 headers to the mbuf.
> 
> I performed experiments with varying replication factor as well as varying packet size and found that memcpy() is performing way better than using rte_mbuf_refcnt_update(). But I am not sure why it is happening and what is making rte_mbuf_refcnt_update() even worse than memcpy().
> 
> Here is the sample code for both implementations:


The two code fragments are doing this in two different ways: the first uses a loop to create possibly more than one replica, and the second one does not, correct? The loop can cause a performance hit, but it should be small.

The first one is using the hdr->next pointer, which is in the second cacheline of the mbuf header; this can and will cause a cacheline miss and degrade your performance. The second code does not touch hdr->next and will not cause that cacheline miss. When the packet data goes beyond 64 bytes you also hit a second cacheline, so are you starting to see the problem here? Every time you touch a new cacheline performance will drop unless the cacheline is prefetched into the cache first, but in this case that really cannot be done easily. Count the cachelines you are touching and make sure they are the same number in each case.

On Intel x86 systems the cacheline size is 64 bytes; other architectures have different sizes.
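As a quick way to see this (a minimal sketch of mine, not from your code), you can print where the next pointer actually sits inside struct rte_mbuf; on a 64-byte cacheline target it lands in the second cacheline:

    #include <stdio.h>
    #include <stddef.h>

    #include <rte_common.h>
    #include <rte_mbuf.h>

    int main(void)
    {
            /* next lives in the second cacheline of struct rte_mbuf, so
             * writing hdr->next on a freshly allocated header mbuf pulls
             * in an extra cacheline per replica. */
            printf("cacheline size       : %d\n", RTE_CACHE_LINE_SIZE);
            printf("offsetof(mbuf, next) : %zu\n",
                   offsetof(struct rte_mbuf, next));
            return 0;
    }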

> 
> 1. Using rte_mbuf_refcnt_update():
>
>         struct rte_mbuf *pkt = original packet;
>
>         rte_pktmbuf_adj(pkt, (uint16_t)sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr));
>         rte_pktmbuf_refcnt_update(pkt, replication_factor);
>         for (int i = 0; i < replication_factor; i++) {
>                 struct rte_mbuf *hdr;
>                 if (unlikely((hdr = rte_pktmbuf_alloc(header_pool)) == NULL)) {
>                         printf("Failed while cloning $$$\n");
>                         return NULL;
>                 }
>                 hdr->next = pkt;
>                 hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
>                 hdr->nb_segs = (uint8_t)(pkt->nb_segs + 1);
>                 /* update more metadata fields */
>
>                 rte_pktmbuf_prepend(hdr, (uint16_t)sizeof(struct ether_hdr));
>                 /* modify L2 fields */
>
>                 rte_pktmbuf_prepend(hdr, (uint16_t)sizeof(struct ipv4_hdr));
>                 /* modify L3 fields */
>                 ...
>         }
>
> 2. Using memcpy():
>
>         struct rte_mbuf *pkt = original packet;
>         struct rte_mbuf *hdr;
>
>         if (unlikely((hdr = rte_pktmbuf_alloc(header_pool)) == NULL)) {
>                 printf("Failed while cloning $$$\n");
>                 return NULL;
>         }
>
>         /* prepend new header */
>         char *eth_hdr = (char *)rte_pktmbuf_prepend(hdr, pkt->pkt_len);
>         if (eth_hdr == NULL) {
>                 printf("panic\n");
>         }
>         char *b = rte_pktmbuf_mtod(pkt, char *);
>         memcpy(eth_hdr, b, pkt->pkt_len);
>         /* change L2-L4 header fields in the new packet */
> 
> The throughput becomes roughly half when the packet size is increased from 64 bytes to 128 bytes and replication is done using rte_mbuf_refcnt_update(). The throughput remains more or less the same as packet size increases when replication is done using memcpy().

Why did you use memcpy() and not rte_memcpy() here? rte_memcpy() should be faster.
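It is a drop-in swap in your second fragment (a sketch, reusing the eth_hdr, b and pkt variables from your code):

    #include <rte_memcpy.h>

    /* rte_memcpy() has memcpy() semantics but uses vectorized
     * copies tuned for the target CPU */
    rte_memcpy(eth_hdr, b, pkt->pkt_len);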

I believe DPDK now has an rte_pktmbuf_alloc_bulk() function to reduce the number of rte_pktmbuf_alloc() calls, which should help if you know the number of replicas you need up front; see the sketch below.
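Roughly like this (an untested sketch; MAX_REPLICAS is a placeholder bound you would define, the rest reuses the names from your first fragment):

    struct rte_mbuf *hdrs[MAX_REPLICAS];

    /* one mempool operation instead of replication_factor separate allocs */
    if (unlikely(rte_pktmbuf_alloc_bulk(header_pool, hdrs,
                                        replication_factor) != 0)) {
            printf("Failed while cloning $$$\n");
            return NULL;
    }
    for (int i = 0; i < replication_factor; i++) {
            struct rte_mbuf *hdr = hdrs[i];
            /* build and prepend the L2-L4 headers as before */
    }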

> 
> Any help would be appreciated.
> 
> --
> 
> Thanks,
> Shailja
> 

Regards,
Keith


