[dpdk-dev] [PATCH] vhost: Fix retrieval of numa information in PMD

Yuanhan Liu yuanhan.liu at linux.intel.com
Wed Apr 6 07:44:06 CEST 2016


On Wed, Apr 06, 2016 at 01:32:12PM +0800, Tan, Jianfeng wrote:
> Hi,
> 
> Just out of interest: it seems the message handling thread which runs
> new_device() is pthread_create()'d from the thread that calls
> dev_start(), usually the master thread, right? But the thread that
> polls packets from this vhost port is not necessarily the master
> thread, right? So what is the significance of recording the numa_node
> information of the message handling thread here? Should we instead
> make the numa_realloc decision based on the final PMD thread that is
> responsible for polling this vhost port?

It doesn't matter on which core we make the decision: the result
would be the same, since we are querying the numa node info of the
virtio_net dev struct.
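
To illustrate (a minimal standalone sketch, not PMD code -- it assumes
libnuma is installed and the box actually has a node 0): get_mempolicy()
with MPOL_F_NODE | MPOL_F_ADDR reports the node of the page backing the
given address, not the node of the calling thread, so any core gets the
same answer for the same dev pointer.

	#include <numaif.h>	/* get_mempolicy(), MPOL_F_* */
	#include <numa.h>	/* numa_alloc_onnode(), numa_free() */
	#include <stdio.h>

	int main(void)
	{
		/* Place a page on node 0 and touch it so it is backed. */
		char *buf = numa_alloc_onnode(4096, 0);
		if (buf == NULL)
			return 1;
		buf[0] = 0;

		int node = -1;
		/* MPOL_F_ADDR: query the node of the page holding 'buf',
		 * regardless of which thread/core runs this call. */
		if (get_mempolicy(&node, NULL, 0, buf,
				MPOL_F_NODE | MPOL_F_ADDR) < 0) {
			perror("get_mempolicy");
			return 1;
		}
		printf("buffer lives on node %d\n", node);

		numa_free(buf, 4096);
		return 0;
	}

(build with -lnuma)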

	--yliu
> 
> This is not related to the patch itself, though. The patch looks good to me.
> 
> 
> Thanks,
> Jianfeng
> 
> 
> 
> On 4/6/2016 12:09 AM, Ciara Loftus wrote:
> >After some testing, it was found that retrieving numa information
> >about a vhost device via a call to get_mempolicy is more
> >accurate when performed during the new_device callback versus
> >the vring_state_changed callback, in particular upon initial boot
> >of the VM.  Performing this check during new_device is also
> >potentially more efficient as this callback is only triggered once
> >during device initialisation, compared with vring_state_changed
> >which may be called multiple times depending on the number of
> >queues assigned to the device.
> >
> >Reorganise the code to perform this check and assign the correct
> >socket_id to the device during the new_device callback.
> >
> >Signed-off-by: Ciara Loftus <ciara.loftus at intel.com>
> >---
> >  drivers/net/vhost/rte_eth_vhost.c | 28 ++++++++++++++--------------
> >  1 file changed, 14 insertions(+), 14 deletions(-)
> >
> >diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> >index 4cc6bec..b1eb082 100644
> >--- a/drivers/net/vhost/rte_eth_vhost.c
> >+++ b/drivers/net/vhost/rte_eth_vhost.c
> >@@ -229,6 +229,9 @@ new_device(struct virtio_net *dev)
> >  	struct pmd_internal *internal;
> >  	struct vhost_queue *vq;
> >  	unsigned i;
> >+#ifdef RTE_LIBRTE_VHOST_NUMA
> >+	int newnode, ret;
> >+#endif
> >  
> >  	if (dev == NULL) {
> >  		RTE_LOG(INFO, PMD, "Invalid argument\n");
> >@@ -244,6 +247,17 @@ new_device(struct virtio_net *dev)
> >  	eth_dev = list->eth_dev;
> >  	internal = eth_dev->data->dev_private;
> >  
> >+#ifdef RTE_LIBRTE_VHOST_NUMA
> >+	ret  = get_mempolicy(&newnode, NULL, 0, dev,
> >+			MPOL_F_NODE | MPOL_F_ADDR);
> >+	if (ret < 0) {
> >+		RTE_LOG(ERR, PMD, "Unknown numa node\n");
> >+		return -1;
> >+	}
> >+
> >+	eth_dev->data->numa_node = newnode;
> >+#endif
> >+
> >  	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> >  		vq = eth_dev->data->rx_queues[i];
> >  		if (vq == NULL)
> >@@ -352,9 +366,6 @@ vring_state_changed(struct virtio_net *dev, uint16_t vring, int enable)
> >  	struct rte_vhost_vring_state *state;
> >  	struct rte_eth_dev *eth_dev;
> >  	struct internal_list *list;
> >-#ifdef RTE_LIBRTE_VHOST_NUMA
> >-	int newnode, ret;
> >-#endif
> >  
> >  	if (dev == NULL) {
> >  		RTE_LOG(ERR, PMD, "Invalid argument\n");
> >@@ -370,17 +381,6 @@ vring_state_changed(struct virtio_net *dev, uint16_t vring, int enable)
> >  	eth_dev = list->eth_dev;
> >  	/* won't be NULL */
> >  	state = vring_states[eth_dev->data->port_id];
> >-
> >-#ifdef RTE_LIBRTE_VHOST_NUMA
> >-	ret  = get_mempolicy(&newnode, NULL, 0, dev,
> >-			MPOL_F_NODE | MPOL_F_ADDR);
> >-	if (ret < 0) {
> >-		RTE_LOG(ERR, PMD, "Unknown numa node\n");
> >-		return -1;
> >-	}
> >-
> >-	eth_dev->data->numa_node = newnode;
> >-#endif
> >  	rte_spinlock_lock(&state->lock);
> >  	state->cur[vring] = enable;
> >  	state->max_vring = RTE_MAX(vring, state->max_vring);

