[dpdk-dev] [PATCH 1/2] ethdev: add buffered tx api

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Feb 2 14:49:52 CET 2016


Hi Tomasz,

> -----Original Message-----
> From: Kulasek, TomaszX
> Sent: Tuesday, February 02, 2016 10:01 AM
> To: Ananyev, Konstantin; dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 1/2] ethdev: add buffered tx api
> 
> Hi Konstantin,
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Friday, January 15, 2016 19:45
> > To: Kulasek, TomaszX; dev at dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH 1/2] ethdev: add buffered tx api
> >
> > Hi Tomasz,
> >
> > >
> > > +		/* get new buffer space first, but keep old space around */
> > > +		new_bufs = rte_zmalloc("ethdev->txq_bufs",
> > > +				sizeof(*dev->data->txq_bufs) * nb_queues, 0);
> > > +		if (new_bufs == NULL)
> > > +			return -(ENOMEM);
> > > +
> >
> > Why not allocate the space for txq_bufs together with tx_queues (as one
> > chunk for both)?
> > As I understand there is always one to one mapping between them anyway.
> > Would simplify things a bit.
> > Or even introduce a new struct to group all related tx queue info
> > together: struct rte_eth_txq_data {
> > 	void *queue; /*actual pmd  queue*/
> > 	struct rte_eth_dev_tx_buffer buf;
> > 	uint8_t state;
> > }
> > And use it inside struct rte_eth_dev_data?
> > Would probably give a better data locality.
> >
> 
> Introducing such a struct would require a huge rework of the PMD drivers. I don't think it's worth it just for this one feature.

Why not?
Things are getting more and more messy here: we already have a separate array of pointers to queues and a
separate array of queue states, and now you are going to add a separate array of tx buffers.
To me it seems logical to unite all three of these fields into one sub-struct.

> 
> 
> > > +/**
> > > + * @internal
> > > + * Structure used to buffer packets for future TX
> > > + * Used by APIs rte_eth_tx_buffer and rte_eth_tx_buffer_flush
> > > + */
> > > +struct rte_eth_dev_tx_buffer {
> > > +	struct rte_mbuf *pkts[RTE_ETHDEV_TX_BUFSIZE];
> >
> > I think it is better to make size of pkts[] configurable at runtime.
> > There are a lot of different usage scenarios - hard to predict what would
> > be an optimal buffer size for all cases.
> >
> 
> This buffer is allocated in eth_dev shared memory, so there are two scenarios:
> 1) We have a preallocated buffer of maximum size, and then we can set the threshold level without restarting the device, or
> 2) We need to set its size before starting the device.
> 
> The second one is better, I think.

Yep, I was thinking about 2) too.
Might be an extra parameter in struct rte_eth_txconf.

> 
> Tomasz

