[dpdk-dev] [RFC v2] vhost: new rte_vhost API proposal

Stojaczyk, DariuszX dariuszx.stojaczyk at intel.com
Tue May 29 15:38:33 CEST 2018



> -----Original Message-----
> From: Stefan Hajnoczi [mailto:stefanha at redhat.com]
> Sent: Friday, May 25, 2018 12:06 PM
> On Fri, May 18, 2018 at 03:01:05PM +0200, Dariusz Stojaczyk wrote:
> > +struct rte_vhost2_msg {
> > +	uint32_t id;
> 
> Is this what the vhost-user specification calls the "request type"?  I
> suggest following the vhost-user spec terminology.
> 
> > +	uint32_t flags;
> > +	uint32_t size; /**< The following payload size. */
> > +	void *payload;
> > +	int fds[RTE_VHOST2_MEMORY_MAX_NREGIONS];
> 
> Is it necessary to expose file descriptor passing in the API?
> virtio-vhost-user doesn't have file descriptor passing, so it's best if this
> can be hidden inside rte_vhost2.

So that's another argument for not exposing raw message handling to the user.
If some backend-specific vhost-user message that carries an fd shows up in the future, it will need a set of new abstractions to work with virtio-vhost-user anyway.
I guess I'll bring back the original custom_msg idea from V1.

> 
> > +};
> > +
> > +/** Single memory region. Both physically and virtually contiguous */
> > +struct rte_vhost2_mem_region {
> > +	uint64_t guest_phys_addr;
> > +	uint64_t guest_user_addr;
> > +	uint64_t host_user_addr;
> > +	uint64_t size;
> > +	void *mmap_addr;
> > +	uint64_t mmap_size;
> > +	int fd;
> 
> virtio-vhost-user doesn't have an fd.  Why do API consumers need to
> know about the fd?

They don't. Ack. I'll strip this struct.

> 
> > +/**
> > + * Device/queue related callbacks, all optional. Provided callback
> > + * parameters are guaranteed not to be NULL unless explicitly specified.
> > + */
> 
> This is a good place to mention that all callbacks are asynchronous unless
> specified otherwise.  Without that knowledge statements below like "If
> this is completed with a non-zero status" are confusing on a void
> function.

Ack.

> 
> > +struct rte_vhost2_tgt_ops {
> > +	/**
> > +	 * New driver connected. If this is completed with a non-zero status,
> > +	 * rte_vhost2 will terminate the connection.
> > +	 */
> > +	void (*device_create)(struct rte_vhost2_dev *vdev);
> > +	/**
> > +	* Device is ready to operate. vdev data is now initialized. This callback
> > +	* may be called multiple times as e.g. memory mappings can change
> > +	* dynamically. All queues are guaranteed to be stopped by now.
> > +	*/
> > +	void (*device_init)(struct rte_vhost2_dev *vdev);
> > +	/**
> > +	* Features have changed in runtime. This is called at least once during
> 
> s/in/at/

Ack.

> 
> > +	/**
> > +	* Custom vhost-user message handler. This is called for
> > +	* backend-specific messages (net/crypto/scsi) that weren't recognized
> > +	* by the generic message parser. `msg` is available until
> > +	* \c rte_vhost2_tgt_cb_complete is called.
> > +	*/
> > +	void (*custom_msg)(struct rte_vhost2_dev *vdev, struct rte_vhost2_msg *msg);
> 
> What happens if rte_vhost2_tgt_cb_complete() is called with a negative
> rc?  Does the specific errno value matter?

My current implementation only checks for rc != 0 for now; the specific errno value doesn't matter yet. I'm still working this out.

> 
> Where is the API for sending a vhost-user reply message?

I didn't push any. Now that you've pointed out the fds in the public API, I think I'll roll this custom_msg handling back to the V1 approach.

D.
