[dpdk-dev] [PATCH v3] vhost: Expose virtio interrupt need on rte_vhost API

Jan Scheurich jan.scheurich at ericsson.com
Wed Oct 4 12:00:56 CEST 2017


Friendly reminder: 
Could somebody please have a look at this patch now that the DPDK
summit is over?

Thanks, Jan

> -----Original Message-----
> From: Jan Scheurich
> Sent: Saturday, 23 September, 2017 22:32
> To: 'dev at dpdk.org' <dev at dpdk.org>
> Subject: [PATCH v3] vhost: Expose virtio interrupt need on rte_vhost API
> 
> Performance tests with the OVS DPDK datapath have shown
> that tx throughput over a vhostuser port into a VM with
> an interrupt-based virtio driver is limited by the overhead
> of virtio interrupts. The OVS PMD spends up to 30% of its
> cycles in system calls kicking the eventfd. In addition, the
> core running the vCPU is heavily loaded generating the virtio
> interrupts in KVM on the host and handling these interrupts
> in the virtio-net driver in the guest. This limits the
> throughput to about 500-700 Kpps with a single vCPU.
> 
> OVS is addressing this issue by batching packets to a
> vhostuser port for a short period to limit the virtio
> interrupt frequency. With a 50 us batching period we have
> measured a 15% increase in iperf3 throughput and a decrease
> in PMD utilization from 45% to 30%.
> 
> On the other hand, guests using virtio PMDs do not benefit
> from time-based tx batching. Instead they experience a 2-3%
> performance penalty and an average latency increase of
> 30-40 us. OVS therefore intends to apply time-based tx
> batching only to vhostuser tx queues that need to trigger
> virtio interrupts.
> 
> Today this information is hidden inside the rte_vhost library
> and not accessible to users of the API. This patch adds a
> function to the API to query it.
> 
> Signed-off-by: Jan Scheurich <jan.scheurich at ericsson.com>
> 
> ---
> 
> v2 -> v3:
> 	Fixed even more white-space errors and warnings
> v1 -> v2:
> 	Fixed too long commit lines
> 	Fixed white-space errors and warnings
> 
>  lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
>  lib/librte_vhost/vhost.c     | 23 +++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
> 
> diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
> index 8c974eb..d62338b 100644
> --- a/lib/librte_vhost/rte_vhost.h
> +++ b/lib/librte_vhost/rte_vhost.h
> @@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
>   */
>  uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
> 
> +/**
> + * Does the virtio driver request interrupts for a vhost tx queue?
> + *
> + * @param vid
> + *  vhost device ID
> + * @param qid
> + *  virtio queue index (for the multi-queue case)
> + * @return
> + *  1 if true, 0 if false
> + */
> +int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 0b6aa1c..c6e636e 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -503,3 +503,26 @@ struct virtio_net *
> 
>  	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
>  }
> +
> +int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
> +{
> +	struct virtio_net *dev;
> +	struct vhost_virtqueue *vq;
> +
> +	dev = get_device(vid);
> +	if (dev == NULL)
> +		return 0;
> +
> +	/* Guard against out-of-range queue indexes. */
> +	if (qid >= VHOST_MAX_VRING)
> +		return 0;
> +
> +	vq = dev->virtqueue[qid];
> +	if (vq == NULL)
> +		return 0;
> +
> +	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
> +		return 0;
> +
> +	return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
> +}

