[PATCH v6 05/16] vdpa/ifc: add vDPA interrupt for blk device
Xia, Chenbo
chenbo.xia at intel.com
Mon Apr 25 14:58:07 CEST 2022
Hi Andy,
> -----Original Message-----
> From: Pei, Andy <andy.pei at intel.com>
> Sent: Thursday, April 21, 2022 4:34 PM
> To: dev at dpdk.org
> Cc: Xia, Chenbo <chenbo.xia at intel.com>; maxime.coquelin at redhat.com; Cao,
> Gang <gang.cao at intel.com>; Liu, Changpeng <changpeng.liu at intel.com>
> Subject: [PATCH v6 05/16] vdpa/ifc: add vDPA interrupt for blk device
>
> For the block device type, we have to relay
> the commands on all queues.
The commit message is a bit short... although I can understand it, please add some background on the current implementation so that others can easily understand.
>
> Signed-off-by: Andy Pei <andy.pei at intel.com>
> ---
> drivers/vdpa/ifc/ifcvf_vdpa.c | 46 ++++++++++++++++++++++++++++++++-----------
> 1 file changed, 35 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
> index 8ee041f..8d104b7 100644
> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> @@ -370,24 +370,48 @@ struct rte_vdpa_dev_info {
> irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
> irq_set->start = 0;
> fd_ptr = (int *)&irq_set->data;
> + /* The first interrupt is for the configure space change
> notification */
> fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
> rte_intr_fd_get(internal->pdev->intr_handle);
>
> for (i = 0; i < nr_vring; i++)
> internal->intr_fd[i] = -1;
>
> - for (i = 0; i < nr_vring; i++) {
> - rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> - fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> - if ((i & 1) == 0 && m_rx == true) {
> - fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> - if (fd < 0) {
> - DRV_LOG(ERR, "can't setup eventfd: %s",
> - strerror(errno));
> - return -1;
> + if (internal->device_type == IFCVF_NET) {
> + for (i = 0; i < nr_vring; i++) {
> + rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> + if ((i & 1) == 0 && m_rx == true) {
> + /* For the net we only need to relay rx queue,
> + * which will change the mem of VM.
> + */
> + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> + if (fd < 0) {
> + DRV_LOG(ERR, "can't setup eventfd: %s",
> + strerror(errno));
> + return -1;
> + }
> + internal->intr_fd[i] = fd;
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> + }
> + }
> + } else if (internal->device_type == IFCVF_BLK) {
> + for (i = 0; i < nr_vring; i++) {
> + rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> + if (m_rx == true) {
> + /* For the blk we need to relay all the read cmd
> + * of each queue
> + */
> + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> + if (fd < 0) {
> + DRV_LOG(ERR, "can't setup eventfd: %s",
> + strerror(errno));
> + return -1;
> + }
> + internal->intr_fd[i] = fd;
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
There is a lot of duplicated code here for blk and net. What if we use this
condition to decide whether to create the eventfd:
if (m_rx == true && (is_blk_dev || (i & 1) == 0)) {
/* create eventfd and save now */
}
Thanks,
Chenbo
> }
> - internal->intr_fd[i] = fd;
> - fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> }
> }
>
> --
> 1.8.3.1