[dpdk-dev] [RFC] ethdev: abstraction layer for QoS hierarchical scheduler

Bruce Richardson bruce.richardson at intel.com
Thu Dec 8 11:14:38 CET 2016


On Wed, Dec 07, 2016 at 10:58:49AM +0000, Alan Robertson wrote:
> Hi Cristian,
> 
> Looking at points 10 and 11 it's good to hear nodes can be dynamically added.
> 
> We've been trying to decide the best way to do this to support QoS on tunnels for
> some time now, and the existing implementation doesn't allow it, which effectively ruled
> out hierarchical queueing for tunnel targets on the output interface.
> 
> Having said that, has thought been given to separating the queueing from being so closely
> tied to the Ethernet transmit process?  When queueing on a tunnel, for example, we may
> be working with encryption.  When running with an anti-replay window it is much
> better to do the QoS (packet reordering) before the encryption.  To support this, would
> it be possible to have a separate scheduler structure which can be passed into the
> scheduling API?  This means the calling code can hang the structure off whatever entity
> it wishes to perform QoS on, and we get dynamic target support (sessions/tunnels etc.).
>
Hi,

just to note that not all ethdevs need to be actual NICs (physical or
virtual). It was also for situations like this that the ring PMD was
created. For the QoS scheduler, the common "output port" type chosen was
the ethdev, to avoid having to support multiple underlying types. To use
a ring as the output port instead, just create a ring and then call
rte_eth_from_ring() to get an ethdev port wrapper around the ring, which
you can then use with just about any API that wants an ethdev.
[Note: the rte_eth_from_ring API is in the ring driver itself, so you do
need to link against that driver directly if using shared libs]
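For illustration, a minimal (untested) sketch of that pattern; the ring
name, size, flags and error handling here are just placeholders:

#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_eth_ring.h>   /* rte_eth_from_ring(); needs the ring PMD linked in */

/* Create a ring-backed ethdev to serve as the scheduler's "output
 * port" in place of a physical NIC. Returns the new port id, or -1. */
static int
create_sched_output_port(void)
{
	struct rte_ring *r;
	int port_id;

	/* single-producer/single-consumer ring, 1024 entries */
	r = rte_ring_create("sched_out", 1024, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1;

	/* wrap the ring in an ethdev port */
	port_id = rte_eth_from_ring(r);
	if (port_id < 0)
		return -1;

	/* port_id now works with any API expecting an ethdev, e.g.
	 * rte_eth_tx_burst(port_id, 0, pkts, n). Packets "transmitted"
	 * to it land in the ring, where the application can dequeue
	 * them, e.g. to encrypt after scheduling, before the real TX. */
	return port_id;
}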

Regards,
/Bruce


