[dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Cao, Waterman waterman.cao at intel.com
Wed Nov 5 02:32:16 CET 2014


Hi Yong,

	We tested your patch with VMware ESX 5.5.
	It works fine with R1.8 RC1.
	You can find more details in Xiaonan's reports.

Regards

Waterman 
>-----Original Message-----
>From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Yong Wang
>Sent: Tuesday, October 14, 2014 5:00 AM
>To: Thomas Monjalon
>Cc: dev at dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Only the last one is performance-related, and it merely gives hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the packet polling loop.
>
>We did performance evaluation on a Nehalem box with 4 cores at 2.8GHz x 2 sockets:
>On the DPDK side, it's running l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B TCP packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% of a core used for DPDK.  After the patch, we are seeing the same packet rate with only 45% of a core used.  CPU usage is collected factoring out the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts).  I can add this info to the review request.
>
>Yong
>________________________________________
>From: Thomas Monjalon <thomas.monjalon at 6wind.com>
>Sent: Monday, October 13, 2014 1:29 PM
>To: Yong Wang
>Cc: dev at dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi,
>
>2014-10-12 23:23, Yong Wang:
>> This patch series includes various fixes and improvements to the
>> vmxnet3 pmd driver.
>>
>> Yong Wang (5):
>>   vmxnet3: Fix VLAN Rx stripping
>>   vmxnet3: Add VLAN Tx offload
>>   vmxnet3: Fix dev stop/restart bug
>>   vmxnet3: Add rx pkt check offloads
>>   vmxnet3: Some perf improvement on the rx path
>
>Please, could you describe the performance gain for these patches?
>Benchmark numbers would be appreciated.
>
>Thanks
>--
>Thomas
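
For illustration only, here is a minimal C sketch (not the actual patch) of the kind of change described in the last patch of the series: an unlikely() hint on a rare error branch, plus a per-burst constant hoisted out of the packet polling loop. The struct and function names below are made up for the example; DPDK's real likely()/unlikely() macros come from rte_branch_prediction.h, and the __builtin_expect fallback is only there to keep the snippet self-contained.

#include <stdint.h>

#ifndef unlikely
/* DPDK defines this in rte_branch_prediction.h; fallback for a standalone build. */
#define unlikely(x) __builtin_expect(!!(x), 0)
#endif

/* Hypothetical descriptor/queue types, just for the example. */
struct pkt {
	uint16_t port;   /* input port recorded in each received packet */
	int      error;  /* set when the rx descriptor reports a bad packet */
};

struct rxq {
	uint16_t port_id;
};

static uint16_t
poll_burst(struct rxq *q, struct pkt *pkts, uint16_t n)
{
	uint16_t nb_rx = 0;

	/* Constant for the whole burst: read once here rather than
	 * re-read on every iteration inside the polling loop. */
	const uint16_t port_id = q->port_id;

	for (uint16_t i = 0; i < n; i++) {
		/* Bad descriptors are rare; the hint keeps the common
		 * case on the straight-line fall-through path. */
		if (unlikely(pkts[i].error))
			continue;

		pkts[i].port = port_id;
		nb_rx++;
	}
	return nb_rx;
}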
