[dpdk-dev] [PATCH] vhost: fix wrong IOTLB initialization

Xia, Chenbo chenbo.xia at intel.com
Mon May 17 14:46:12 CEST 2021


Hi David,

> -----Original Message-----
> From: David Marchand <david.marchand at redhat.com>
> Sent: Thursday, May 13, 2021 10:12 PM
> To: Xia, Chenbo <chenbo.xia at intel.com>; Maxime Coquelin
> <maxime.coquelin at redhat.com>
> Cc: dev <dev at dpdk.org>; Kevin Traynor <ktraynor at redhat.com>; Pei Zhang
> <pezhang at redhat.com>; Yigit, Ferruh <ferruh.yigit at intel.com>; Thomas
> Monjalon <thomas at monjalon.net>
> Subject: Re: [PATCH] vhost: fix wrong IOTLB initialization
> 
> On Thu, May 13, 2021 at 2:38 PM Chenbo Xia <chenbo.xia at intel.com> wrote:
> >
> > This patch fixes an application crash caused by the vhost IOTLB not
> > being initialized when virtio has multiqueue enabled.
> >
> > IOTLB messages can arrive while some queues are not yet enabled. If we
> > initialize the IOTLB in vhost_user_set_vring_num, it can happen that an
> > IOTLB update comes in while the IOTLB pools of the disabled queues are
> > still uninitialized.
> 
> This makes the problem I reproduced disappear at init, but I noticed
> the segfault again after restarting testpmd once.
> And a little later, my VM crashed.

Oops.. Maybe there's some difference between our environments. My env works fine with the 'restart' test.

After checking the logs you provided, is the segfault still caused by the IOTLB cache
not being initialized? IMHO, based on the message sequence, the cache should already
have been initialized at that point.
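
For context, the crash path I had in mind is roughly the sketch below.
This is only a simplified view from memory (the iotlb_pool field name and
the exact insert logic may differ from the tree), but it shows why a
missing vhost_user_iotlb_init() ends up as a segfault:

    /* Simplified view of the IOTLB cache insert path. If
     * vhost_user_iotlb_init() was never called for this vring,
     * vq->iotlb_pool is still NULL and rte_mempool_get() dereferences
     * it, crashing the application.
     */
    void
    vhost_user_iotlb_cache_insert(struct vhost_virtqueue *vq, uint64_t iova,
            uint64_t uaddr, uint64_t size, uint8_t perm)
    {
        struct vhost_iotlb_entry *new_node;

        if (rte_mempool_get(vq->iotlb_pool, (void **)&new_node) != 0)
            return;

        /* ... fill in the entry and link it into the cache ... */
    }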

> 
> This is not systematic, so I guess there is some condition with how
> the virtio device is initialised in the vm.
> 
> 
> One question below.
> 
> 
> Bugzilla ID: 703
> 
> > Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU
> disabled")
> >
> 
> Reported-by: Pei Zhang <pezhang at redhat.com>
> 
> > Signed-off-by: Chenbo Xia <chenbo.xia at intel.com>
> > ---
> >  lib/vhost/vhost_user.c | 13 +++++++++----
> >  1 file changed, 9 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> > index 611ff209e3..ae4df8eb69 100644
> > --- a/lib/vhost/vhost_user.c
> > +++ b/lib/vhost/vhost_user.c
> > @@ -311,6 +311,7 @@ vhost_user_set_features(struct virtio_net **pdev,
> struct VhostUserMsg *msg,
> >         uint64_t features = msg->payload.u64;
> >         uint64_t vhost_features = 0;
> >         struct rte_vdpa_device *vdpa_dev;
> > +       uint32_t i;
> >
> >         if (validate_msg_fds(msg, 0) != 0)
> >                 return RTE_VHOST_MSG_RESULT_ERR;
> > @@ -389,6 +390,14 @@ vhost_user_set_features(struct virtio_net **pdev,
> struct VhostUserMsg *msg,
> >                 vdpa_dev->ops->set_features(dev->vid);
> >
> >         dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
> > +
> > +       if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
> > +               for (i = 0; i < dev->nr_vring; i++) {
> 
> I don't know the vhost-user protocol.
> At this point of the device init/life, are we sure nr_vring is set to
> the max number of vring?
> The logs I have tend to say it is the case, but is there a guarantee
> in the protocol?

I think you are correct.. Based on the current QEMU implementation, nr_vring should be
the correct value (correct me if there are corner cases). But I don't think there is a
guarantee, as the vhost-user protocol doesn't say that SET_FEATURES comes after the
per-vring messages. @Maxime Coquelin Am I missing anything?

> 
> 
> Another way to fix would be to allocate on the first
> VHOST_USER_IOTLB_MSG message received for a vring.

Emmm.. Could there be a case where some hypervisor initializes a queue after the first
IOTLB msg? If so, we may also need to check whether nr_vring has changed / a new
queue has been initialized.
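
If the lazy allocation is done per vring inside the IOTLB message handler
(rather than once on the first message), it would also cover the case
above, since any vring present at IOTLB-update time gets its pool created
on demand. Something like the rough sketch below (illustration only, not a
tested patch; the iotlb_pool field name is an assumption):

    for (i = 0; i < dev->nr_vring; i++) {
        struct vhost_virtqueue *vq = dev->virtqueue[i];

        /* Create the per-vring IOTLB pool on first use instead of
         * at SET_FEATURES / SET_VRING_NUM time.
         */
        if (!vq->iotlb_pool && vhost_user_iotlb_init(dev, i))
            return RTE_VHOST_MSG_RESULT_ERR;

        vhost_user_iotlb_cache_insert(vq, imsg->iova, imsg->uaddr,
                imsg->size, imsg->perm);
    }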

And David, thanks for testing and writing the revert patch for me during my leave.
That's much appreciated!

Thanks,
Chenbo

> 
> 
> > +                       if (vhost_user_iotlb_init(dev, i))
> > +                               return RTE_VHOST_MSG_RESULT_ERR;
> > +               }
> > +       }
> > +
> >         return RTE_VHOST_MSG_RESULT_OK;
> >  }
> >
> 
> 
> --
> David Marchand


