[dpdk-dev] [PATCH v3 2/2] vhost: Add VHOST PMD

Wang, Zhihong zhihong.wang at intel.com
Thu Nov 12 13:52:06 CET 2015


Hi Tetsuya,

In my test I created 2 vdevs using "--vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' --vdev 'eth_vhost1,iface=/tmp/sock1,queues=1'", and the QEMU messages got handled in the wrong order.
The reason is that 2 threads are created to handle the messages from the 2 sockets, but their fds are SHARED, so each thread is reading from both sockets.

This can lead to incorrect behavior; in my case the VHOST_USER_SET_MEM_TABLE sometimes got handled after VRING initialization, which led to destroy_device().
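
For illustration, here is a minimal standalone sketch (plain pthreads and a socketpair, not the PMD code) of why two threads reading the same fd split one connection's messages between them, so their handling order is no longer guaranteed:

/*
 * Standalone illustration only: two threads block in read() on the
 * same fd, so each message is handed to whichever thread wakes up
 * first.  Two session threads sharing one fdset split a single
 * connection's vhost-user messages across threads the same way.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int conn_fd;	/* stands in for the shared vhost-user socket */

static void *reader(void *arg)
{
	char buf[64];
	ssize_t n;

	while ((n = read(conn_fd, buf, sizeof(buf) - 1)) > 0) {
		buf[n] = '\0';
		printf("thread %s handled: %s\n", (const char *)arg, buf);
	}
	return NULL;
}

int main(void)
{
	const char *msgs[] = { "SET_OWNER", "SET_MEM_TABLE", "SET_VRING_KICK" };
	pthread_t t1, t2;
	int sv[2];
	unsigned int i;

	if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv) < 0)
		return 1;
	conn_fd = sv[0];

	pthread_create(&t1, NULL, reader, "A");
	pthread_create(&t2, NULL, reader, "B");

	/* The peer ("QEMU") sends the messages strictly in order ... */
	for (i = 0; i < 3; i++) {
		if (write(sv[1], msgs[i], strlen(msgs[i])) < 0)
			break;
	}

	/* ... but which thread handles which message is up to the
	 * scheduler, so the printed handling order can differ. */
	close(sv[1]);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	close(sv[0]);
	return 0;
}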

A detailed log is shown below: threads 69351 & 69352 are both reading fd 25. Thanks to Yuanhan for helping with the debugging!


Thanks
Zhihong


-----------------------------------------------------------------------------------------------------------------

---->  debug: setting up new vq conn for fd: 23, tid: 69352
VHOST_CONFIG: new virtio connection is 25
VHOST_CONFIG: new device, handle is 0
---->  debug: vserver_message_handler thread id: 69352, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
---->  debug: vserver_message_handler thread id: 69352, fd: 25
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
---->  debug: vserver_message_handler thread id: 69352, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:26
---->  debug: vserver_message_handler thread id: 69352, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:27
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:28
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:26
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
---->  debug: device_fh: 0: user_set_mem_table
VHOST_CONFIG: mapped region 0 fd:27 to 0x7ff6c0000000 sz:0xa0000 off:0x0
VHOST_CONFIG: mapped region 1 fd:29 to 0x7ff680000000 sz:0x40000000 off:0xc0000
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:30
---->  debug: vserver_message_handler thread id: 69352, fd: 25
VHOST_CONFIG: virtio is not ready for processing.
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
---->  debug: vserver_message_handler thread id: 69351, fd: 25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:31
VHOST_CONFIG: virtio is now ready for processing.
PMD: New connection established
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM

-----------------------------------------------------------------------------------------------------------------

> ...
> +
> +static void *vhost_driver_session(void *param __rte_unused)
> +{
> +	static struct virtio_net_device_ops *vhost_ops;
> +
> +	vhost_ops = rte_zmalloc(NULL, sizeof(*vhost_ops), 0);
> +	if (vhost_ops == NULL)
> +		rte_panic("Can't allocate memory\n");
> +
> +	/* set vhost arguments */
> +	vhost_ops->new_device = new_device;
> +	vhost_ops->destroy_device = destroy_device;
> +	if (rte_vhost_driver_pmd_callback_register(vhost_ops) < 0)
> +		rte_panic("Can't register callbacks\n");
> +
> +	/* start event handling */
> +	rte_vhost_driver_session_start();
> +
> +	rte_free(vhost_ops);
> +	pthread_exit(0);
> +}
> +
> +static void vhost_driver_session_start(struct pmd_internal *internal)
> +{
> +	int ret;
> +
> +	ret = pthread_create(&internal->session_th,
> +			NULL, vhost_driver_session, NULL);
> +	if (ret)
> +		rte_panic("Can't create a thread\n");
> +}
> +
> ...
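
FWIW, a rough (untested) sketch of one possible direction, reusing the names from the hunk above and assuming rte_vhost_driver_session_start() internally services every registered vhost-user socket: start the session thread only once, so a single thread owns the shared fds.

/*
 * Untested sketch only: create the session thread for the first
 * device and let later devices reuse it, so only one thread reads
 * the shared fds.  Assumes device creation happens from a single
 * (EAL) thread, hence the unsynchronized flag.
 */
static pthread_t session_th;
static int session_started;

static void vhost_driver_session_start(struct pmd_internal *internal __rte_unused)
{
	int ret;

	if (session_started)
		return;

	ret = pthread_create(&session_th, NULL,
			vhost_driver_session, NULL);
	if (ret)
		rte_panic("Can't create a thread\n");
	session_started = 1;
}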


