[dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage

Yuanhan Liu yuanhan.liu at linux.intel.com
Fri Nov 4 02:50:48 CET 2016


On Thu, Nov 03, 2016 at 10:11:43PM +0100, Maxime Coquelin wrote:
> 
> 
> On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> >Queue allocation should be done once, since the queue-related info (such
> >as the vring address) is passed to the vhost-user backend only once,
> >unless the virtio device is reset.
> >
> >That means if you allocate queues again after the vhost-user negotiation,
> >the vhost-user backend will not be informed any more, leading to a state
> >where the vring info mismatches between the virtio PMD driver and the
> >vhost backend: the driver switches to the new address that has just been
> >allocated, while the vhost backend still sticks to the old address that
> >was assigned in the init stage.
> >
> >Unfortunately, that is exactly how the virtio driver is coded so far:
> >queue allocation is done at the queue_setup stage (when
> >rte_eth_tx/rx_queue_setup is invoked). This is wrong, because queue_setup
> >can be invoked several times.
> >For example,
> >
> >    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
> >    > port stop 0
> >    > port config all txq 1 # just trigger the queue_setup callback again
> >    > port config all rxq 1
> >    > port start 0
> >
> >The right way is to allocate the queues in the init stage, so that the
> >vring info stays consistent with the vhost-user backend.
> >
> >Besides that, we should allocate the max number of queue pairs the device
> >supports, not the number of queue pairs initially configured, to make the
> >following case work.
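
(To make the intent concrete, here is a rough sketch of the idea; it is
not the patch itself, and virtio_alloc_queues() and the field names are
made up for illustration:)

    /*
     * Allocate every queue once, at device init time, sized by the
     * max queue pairs the device reports.  Names are illustrative.
     */
    #include <stdint.h>
    #include <stdlib.h>

    struct virtqueue {
        uint16_t vq_size;
        /* vring memory, notify address, ... */
    };

    struct virtio_hw {
        uint16_t max_queue_pairs;   /* read from device config space */
        uint16_t nb_queues;         /* 2 * max_queue_pairs here */
        struct virtqueue **vqs;     /* allocated once, then reused */
    };

    /* Called once from device init, before the vring addresses are
     * sent to the vhost-user backend. */
    int
    virtio_alloc_queues(struct virtio_hw *hw)
    {
        uint16_t i;

        hw->nb_queues = 2 * hw->max_queue_pairs;
        hw->vqs = calloc(hw->nb_queues, sizeof(*hw->vqs));
        if (hw->vqs == NULL)
            return -1;

        for (i = 0; i < hw->nb_queues; i++) {
            hw->vqs[i] = calloc(1, sizeof(**hw->vqs));
            if (hw->vqs[i] == NULL)
                return -1;
            /* vring memory would be set up here; its address is
             * what the backend remembers */
        }
        return 0;
    }

    /* The rx/tx queue_setup callbacks then only look up the queue
     * that already exists, so calling them again is harmless. */
    struct virtqueue *
    virtio_get_queue(struct virtio_hw *hw, uint16_t idx)
    {
        return hw->vqs[idx];
    }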
> I understand, but how much memory overhead does that represent?

We are allocating the max number of queue pairs the device supports, not
what the virtio-net spec supports, which, as you stated, would be too much.

I don't know the typical number of queue pairs used in production, but
normally I would assume it to be small, something like 2 or 4. It's 1 by
default, after all.

So I think it will not be an issue.
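
To put a rough number on it: assuming the usual 256-descriptor vrings
and the legacy ring layout from the virtio spec, each virtqueue costs
about 10 KB of vring memory, so pre-allocating even 8 queue pairs (16
vrings) is on the order of 160 KB. A quick back-of-envelope check:

    /* Rough per-virtqueue vring cost: legacy layout, 256 descriptors,
     * 4K alignment -- an estimate, not taken from the driver. */
    #include <stdio.h>

    #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((size_t)(a) - 1))

    int
    main(void)
    {
        size_t num = 256, align = 4096;
        size_t desc  = 16 * num;         /* descriptor table */
        size_t avail = 4 + 2 * num + 2;  /* avail ring + event idx */
        size_t used  = 4 + 8 * num + 2;  /* used ring + event idx */

        printf("%zu bytes per vring\n",
               ALIGN_UP(desc + avail, align) + used);  /* 10246 */
        return 0;
    }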

> Have you considered performing a device reset when queue number is
> changed?

Neither a good idea nor a clean solution, to me.

	--yliu
