[dpdk-dev] [PATCH v3] net/kni: add KNI PMD

Ferruh Yigit ferruh.yigit at intel.com
Fri Nov 4 13:21:32 CET 2016


Hi Yong,

Thank you for the review.

On 11/3/2016 1:24 AM, Yong Wang wrote:
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ferruh Yigit
>> Sent: Monday, October 10, 2016 6:20 AM
>> To: dev at dpdk.org
>> Cc: Ferruh Yigit <ferruh.yigit at intel.com>
>> Subject: [dpdk-dev] [PATCH v3] net/kni: add KNI PMD
>>
>> Add KNI PMD which wraps librte_kni for ease of use.
>>
>> KNI PMD can be used as any regular PMD to send / receive packets to the
>> Linux networking stack.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit at intel.com>
>> ---
>>
>> v3:
>> * rebase on top of latest master
>>
>> v2:
>> * updated driver name eth_kni -> net_kni
>> ---

<...>

>>  CONFIG_RTE_LIBRTE_KNI=n
>> +CONFIG_RTE_LIBRTE_PMD_KNI=n
> 
> Nit: change this to CONFIG_RTE_LIBRTE_KNI_PMD instead to be consistent with all other pmds.

There is an inconsistency between virtual and physical PMD config options.

Physical ones: xxx_PMD=
*IXGBE_PMD, *I40E_PMD, *ENA_PMD, ...

Virtual ones: PMD_xxx=
*PMD_RING, *PMD_PCAP, *PMD_NULL, ...
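
For example, both patterns already coexist in config/common_base:

CONFIG_RTE_LIBRTE_IXGBE_PMD=y
CONFIG_RTE_LIBRTE_PMD_RING=y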

So I am consistent with the inconsistency :)

<...>

>> +#define DRV_NAME net_kni
> 
> The name generated this way is not consistent with other vdevs.  Why not simply assign "KNI PMD" to drv_name?

Right, it is not consistent, but that is intentional.

With the macro RTE_PMD_REGISTER_VDEV(net_kni, xxx), rte_driver.name is
set to "net_kni".

If drivername were set to "KNI PMD" instead, the PMD would report its
driver name as "KNI PMD".

That would leave two different driver names for the same device, so I
tried to unify them into a single name. Some physical drivers already
do the same thing.
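
A minimal sketch of what I mean, against the current vdev API (the
probe/remove handler names are illustrative, not necessarily the exact
patch code):

#include <rte_vdev.h>

/* illustrative prototypes only */
static int eth_kni_probe(const char *name, const char *params);
static int eth_kni_remove(const char *name);

static struct rte_vdev_driver eth_kni_drv = {
	.probe = eth_kni_probe,
	.remove = eth_kni_remove,
};

/*
 * The first argument is stringified into rte_driver.name, so the
 * driver registers as "net_kni".  Reusing the same token (via the
 * DRV_NAME macro) for the reported driver name keeps both in sync.
 */
RTE_PMD_REGISTER_VDEV(net_kni, eth_kni_drv);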


<...>

>> +static uint16_t
>> +eth_kni_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>> +{
>> +	struct pmd_queue *kni_q = q;
>> +	struct rte_kni *kni = kni_q->internals->kni;
>> +	uint16_t nb_pkts;
>> +
>> +	nb_pkts = rte_kni_rx_burst(kni, bufs, nb_bufs);
>> +
>> +	kni_q->rx.pkts += nb_pkts;
>> +	kni_q->rx.err_pkts += nb_bufs - nb_pkts;
>> +
>> +	return nb_pkts;
>> +}
>> +
> 
> I don't think it's safe to do receive from two queues concurrently on two cores sharing the same underlying KNI device due to the current limitation of KNI user-space queues not being multi-thread safe.

You are right, the above code is not safe.
It would be possible to create a KNI interface per queue, but I don't
see any advantage of that over creating a new virtual KNI port.

So I will limit the PMD to a single queue and reject anything more at
configure time; see the sketch after your quoted question below.

> Is the proposed plan to have the application layer implement
> synchronization logic?
> If that's the case, it needs to be clearly documented and depending on
> the implementation, measurable overhead will be incurred.
> Otherwise (only single-queue supported), could you check queue number
> if the application tries to configure multi-queue?
> 
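Yes, the check can go into the configure hook. A minimal sketch,
assuming the current ethdev API (function name and log message are
illustrative, not the exact patch code):

#include <errno.h>

#include <rte_ethdev.h>
#include <rte_log.h>

/* reject multi-queue so two cores can never poll the same KNI device */
static int
eth_kni_dev_configure(struct rte_eth_dev *dev)
{
	struct rte_eth_dev_data *data = dev->data;

	if (data->nb_rx_queues > 1 || data->nb_tx_queues > 1) {
		RTE_LOG(ERR, PMD,
			"KNI PMD supports a single queue pair only\n");
		return -EINVAL;
	}

	return 0;
}

Alternatively, reporting max_rx_queues / max_tx_queues as 1 from the
dev_infos_get callback would let the ethdev layer reject the request
even before dev_configure is called.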



<...>

>> +static struct rte_eth_dev *
>> +eth_kni_create(const char *name, unsigned int numa_node)
>> +{
>> +	struct pmd_internals *internals = NULL;
>> +	struct rte_eth_dev_data *data;
>> +	struct rte_eth_dev *eth_dev;
>> +	uint16_t nb_rx_queues = 1;
>> +	uint16_t nb_tx_queues = 1;
> 
> Since these two values are always 1 here, I think they could be removed.

I will remove them and assign the counts directly, as in the sketch below.
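
For illustration, the tail of eth_kni_create() would then just be
(hedged sketch, not the exact patch code):

	data->nb_rx_queues = 1;
	data->nb_tx_queues = 1;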


Thanks,
ferruh

