[dpdk-dev] [PATCH v2] net/vhost: fix vhost invalid state
Tan, Jianfeng
jianfeng.tan at intel.com
Wed Apr 11 10:11:20 CEST 2018
On 4/11/2018 6:53 PM, Junjie Chen wrote:
> dev_start sets *dev_attached* after setting up the queues, which puts the
> device into an invalid state since no frontend is attached. Also,
> destroy_device sets *started* to zero, which keeps *allow_queuing* at zero
> until dev_start is called again. Actually, we should not determine queue
> existence by *dev_attached* but by the queue pointers or other separate
> variable(s).
I see two issues from your description:
- We should fix the wrong use of dev_attached introduced by commit
30a701a53737.
- The allow_queuing flag is not set correctly. But I doubt it's a real
issue, as we do update the queue status in dev_start(), dev_stop(), and
new_device(). Can you explain more?
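For reference, the gating condition the v2 patch introduces boils down to a
two-flag check: queuing is allowed only when the port is started *and* a
frontend is attached. A minimal standalone sketch of that logic (plain C11
atomics and a hypothetical pmd_state struct stand in for the driver's
rte_atomic32_t fields in struct pmd_internal):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical, simplified mirror of the PMD's two state flags. */
struct pmd_state {
	atomic_int started;      /* set by eth_dev_start(), cleared by eth_dev_stop() */
	atomic_int dev_attached; /* set by new_device(), cleared by destroy_device() */
};

/* Queuing is allowed only when both flags are set -- the combined
 * condition update_queuing_status() checks after this patch. */
static bool
allow_queuing(struct pmd_state *s)
{
	return atomic_load(&s->started) != 0 &&
	       atomic_load(&s->dev_attached) != 0;
}
```

With this split, a frontend reconnect (new_device() setting dev_attached)
re-enables queuing on an already-started port without another dev_start.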
>
> Fixes: 30a701a53737 ("net/vhost: fix crash when creating vdev
> dynamically")
>
> Signed-off-by: Junjie Chen <junjie.j.chen at intel.com>
> ---
> Changes in v2:
> - use started to determine vhost queues readiness
> - revert setting started to zero in destroy_device
> drivers/net/vhost/rte_eth_vhost.c | 61 ++++++++++++++++++++-------------------
> 1 file changed, 31 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 11b6076..ff462b3 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -528,10 +528,13 @@ update_queuing_status(struct rte_eth_dev *dev)
> unsigned int i;
> int allow_queuing = 1;
>
> - if (rte_atomic32_read(&internal->dev_attached) == 0)
> + if (!dev->data->rx_queues || !dev->data->tx_queues) {
> + RTE_LOG(ERR, PMD, "RX/TX queues not exist yet\n");
Even though it's not introduced in this patch, I think we should not
report an error log here.
> return;
> + }
>
> - if (rte_atomic32_read(&internal->started) == 0)
> + if (rte_atomic32_read(&internal->started) == 0 ||
> + rte_atomic32_read(&internal->dev_attached) == 0)
> allow_queuing = 0;
>
> /* Wait until rx/tx_pkt_burst stops accessing vhost device */
> @@ -607,13 +610,10 @@ new_device(int vid)
> #endif
>
> internal->vid = vid;
> - if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
> + if (rte_atomic32_read(&internal->started) == 1)
> queue_setup(eth_dev, internal);
> - rte_atomic32_set(&internal->dev_attached, 1);
> - } else {
> - RTE_LOG(INFO, PMD, "RX/TX queues have not setup yet\n");
> - rte_atomic32_set(&internal->dev_attached, 0);
> - }
> + else
> + RTE_LOG(INFO, PMD, "RX/TX queues not exist yet\n");
>
> for (i = 0; i < rte_vhost_get_vring_num(vid); i++)
> rte_vhost_enable_guest_notification(vid, i, 0);
> @@ -622,6 +622,7 @@ new_device(int vid)
>
> eth_dev->data->dev_link.link_status = ETH_LINK_UP;
>
> + rte_atomic32_set(&internal->dev_attached, 1);
> update_queuing_status(eth_dev);
>
> RTE_LOG(INFO, PMD, "Vhost device %d created\n", vid);
> @@ -651,23 +652,24 @@ destroy_device(int vid)
> eth_dev = list->eth_dev;
> internal = eth_dev->data->dev_private;
>
> - rte_atomic32_set(&internal->started, 0);
> - update_queuing_status(eth_dev);
> rte_atomic32_set(&internal->dev_attached, 0);
> + update_queuing_status(eth_dev);
>
> eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
>
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - vq = eth_dev->data->rx_queues[i];
> - if (vq == NULL)
> - continue;
> - vq->vid = -1;
> - }
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - vq = eth_dev->data->tx_queues[i];
> - if (vq == NULL)
> - continue;
> - vq->vid = -1;
> + if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
> + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> + vq = eth_dev->data->rx_queues[i];
> + if (!vq)
> + continue;
> + vq->vid = -1;
> + }
> + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> + vq = eth_dev->data->tx_queues[i];
> + if (!vq)
> + continue;
> + vq->vid = -1;
> + }
> }
>
> state = vring_states[eth_dev->data->port_id];
> @@ -792,11 +794,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
> {
> struct pmd_internal *internal = eth_dev->data->dev_private;
>
> - if (unlikely(rte_atomic32_read(&internal->dev_attached) == 0)) {
> - queue_setup(eth_dev, internal);
> - rte_atomic32_set(&internal->dev_attached, 1);
> - }
> -
> + queue_setup(eth_dev, internal);
> rte_atomic32_set(&internal->started, 1);
> update_queuing_status(eth_dev);
>
> @@ -836,10 +834,13 @@ eth_dev_close(struct rte_eth_dev *dev)
> pthread_mutex_unlock(&internal_list_lock);
> rte_free(list);
>
> - for (i = 0; i < dev->data->nb_rx_queues; i++)
> - rte_free(dev->data->rx_queues[i]);
> - for (i = 0; i < dev->data->nb_tx_queues; i++)
> - rte_free(dev->data->tx_queues[i]);
> + if (dev->data->rx_queues)
> + for (i = 0; i < dev->data->nb_rx_queues; i++)
> + rte_free(dev->data->rx_queues[i]);
> +
> + if (dev->data->tx_queues)
> + for (i = 0; i < dev->data->nb_tx_queues; i++)
> + rte_free(dev->data->tx_queues[i]);
>
> rte_free(dev->data->mac_addrs);
> free(internal->dev_name);
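The eth_dev_close() hunk above follows the usual defensive pattern: only walk
a queue array if it was ever allocated, since close can run before any queue
setup. A standalone sketch of the same pattern (hypothetical dev_data struct
and plain malloc/free in place of rte_free):

```c
#include <stdlib.h>

/* Hypothetical minimal device data mirroring the rx_queues array in
 * rte_eth_dev_data. */
struct dev_data {
	void **rx_queues;
	unsigned int nb_rx_queues;
};

/* Free each queue only when the array itself exists; individual slots
 * may be NULL, which free() tolerates. */
static void
free_rx_queues(struct dev_data *d)
{
	unsigned int i;

	if (!d->rx_queues)
		return;
	for (i = 0; i < d->nb_rx_queues; i++)
		free(d->rx_queues[i]);
	free(d->rx_queues);
	d->rx_queues = NULL;
}
```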