[dpdk-dev] [PATCH v2 01/13] virtio: Introduce config RTE_VIRTIO_INC_VECTOR

Xie, Huawei huawei.xie at intel.com
Fri Dec 18 10:52:29 CET 2015


On 12/18/2015 7:25 AM, Stephen Hemminger wrote:
> On Thu, 17 Dec 2015 17:32:38 +0530
> Santosh Shukla <sshukla at mvista.com> wrote:
>
>> On Mon, Dec 14, 2015 at 6:30 PM, Santosh Shukla <sshukla at mvista.com> wrote:
>>> virtio_recv_pkts_vec and the other virtio vector APIs are written for
>>> SSE/AVX instructions. For arm64 in particular, a virtio vector
>>> implementation does not exist yet (todo).
>>>
>>> As a result, the virtio PMD does not build for targets such as i686 and
>>> arm64. Setting RTE_VIRTIO_INC_VECTOR=n lets the driver build for
>>> non-SSE/AVX targets and run in non-vectored virtio mode, as sketched
>>> below.
>>>
>>> Signed-off-by: Santosh Shukla <sshukla at mvista.com>
>>> ---
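
Concretely, the proposed knob is a build-time gate: with
RTE_VIRTIO_INC_VECTOR disabled, the driver falls back to the scalar RX
path. A minimal sketch of how such a gate could look (the selector helper
below is an illustrative placeholder, not the actual PMD code):

/* Illustrative sketch only: shows how an RTE_VIRTIO_INC_VECTOR=n build
 * could fall back to the scalar receive path.  virtio_select_rx_burst()
 * is a placeholder, not an actual virtio PMD symbol. */
#include <stdint.h>

struct rte_mbuf;        /* opaque for this sketch */

typedef uint16_t (*rx_burst_fn)(void *rxq, struct rte_mbuf **pkts,
                                uint16_t nb_pkts);

uint16_t virtio_recv_pkts(void *rxq, struct rte_mbuf **pkts, uint16_t n);
#ifdef RTE_VIRTIO_INC_VECTOR
uint16_t virtio_recv_pkts_vec(void *rxq, struct rte_mbuf **pkts, uint16_t n);
#endif

/* Choose the RX burst function at device setup time. */
static inline rx_burst_fn
virtio_select_rx_burst(void)
{
#ifdef RTE_VIRTIO_INC_VECTOR
        return virtio_recv_pkts_vec;    /* SSE/AVX batched path */
#else
        return virtio_recv_pkts;        /* portable scalar path */
#endif
}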
>> Ping?
>>
>> Any review or comments on this patch would be much appreciated. Thanks.
> The patches I posted (and which were ignored by Intel) to support indirect
> and any-layout should give a much bigger performance gain than all this
> low-level SSE bit twiddling.
Hi Stephen:
We only applied the SSE twiddling to RX, and it almost doubles the
performance compared to the normal path in the virtio/vhost performance
test case. The indirect and any-layout features mostly benefit TX. We also
optimized the single-segment, non-offload TX case without using SSE, which
gives a ~60% performance improvement in Qian's results. My optimization
targets the single-segment, non-offload case, which I call simple rx/tx.
I plan to add a virtio/vhost performance benchmark so that we can easily
measure the performance difference for each patch.
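
For reference, the RX gain comes from batching: the vector path moves
descriptor data with 128-bit loads/stores instead of one element at a
time. A rough, self-contained sketch of the idea (an illustration of the
batching only, not the actual driver code):

/* Rough sketch of the SSE idea: copy used-ring entries two at a time
 * with 128-bit loads/stores instead of element by element. */
#include <stdint.h>
#include <emmintrin.h>          /* SSE2 */

struct vring_used_elem {        /* 8 bytes, per the virtio spec */
        uint32_t id;
        uint32_t len;
};

static void
copy_used_sse(struct vring_used_elem *dst,
              const struct vring_used_elem *src, unsigned int n)
{
        unsigned int i;

        /* Two 8-byte elements fit in one 128-bit register. */
        for (i = 0; i + 2 <= n; i += 2) {
                __m128i v = _mm_loadu_si128((const __m128i *)&src[i]);
                _mm_storeu_si128((__m128i *)&dst[i], v);
        }
        for (; i < n; i++)      /* scalar tail */
                dst[i] = src[i];
}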

The indirect and any-layout features are useful for transmitting
multi-segment packet mbufs. I acked your patch when it was first posted
and thought it had been applied; I don't understand why you say it was
ignored by Intel.
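
To illustrate why indirect descriptors help multi-segment TX: instead of
consuming one ring slot per segment, the segments go into a side table and
a single ring descriptor points at that table. A sketch using the virtio
spec's descriptor layout (fill_indirect() is a hypothetical helper, not a
DPDK API):

/* Sketch: one ring slot describing an N-segment packet via an indirect
 * descriptor table, per the virtio spec.  fill_indirect() is a
 * hypothetical helper, not a DPDK API. */
#include <stdint.h>

#define VRING_DESC_F_NEXT      1
#define VRING_DESC_F_INDIRECT  4

struct vring_desc {
        uint64_t addr;          /* guest-physical address */
        uint32_t len;
        uint16_t flags;
        uint16_t next;
};

/* Build an indirect table for nseg data buffers, then make a single
 * ring descriptor point at the table.  One ring slot is used no matter
 * how many segments the packet has. */
static void
fill_indirect(struct vring_desc *ring_slot,
              struct vring_desc *table, uint64_t table_phys,
              const uint64_t *seg_addr, const uint32_t *seg_len,
              uint16_t nseg)
{
        for (uint16_t i = 0; i < nseg; i++) {
                table[i].addr  = seg_addr[i];
                table[i].len   = seg_len[i];
                table[i].flags = (i + 1 < nseg) ? VRING_DESC_F_NEXT : 0;
                table[i].next  = i + 1;
        }
        ring_slot->addr  = table_phys;
        ring_slot->len   = nseg * sizeof(struct vring_desc);
        ring_slot->flags = VRING_DESC_F_INDIRECT;
        ring_slot->next  = 0;
}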



