[PATCH v2 01/21] net/cpfl: support device initialization

Zhang, Helin helin.zhang at intel.com
Fri Jan 13 14:32:21 CET 2023



> -----Original Message-----
> From: Mingxia Liu <mingxia.liu at intel.com>
> Sent: Friday, January 13, 2023 4:19 PM
> To: dev at dpdk.org; Zhang, Qi Z <qi.z.zhang at intel.com>; Wu, Jingjing
> <jingjing.wu at intel.com>; Xing, Beilei <beilei.xing at intel.com>
> Cc: Wu, Wenjun1 <wenjun1.wu at intel.com>; Liu, Mingxia
> <mingxia.liu at intel.com>
> Subject: [PATCH v2 01/21] net/cpfl: support device initialization
> 
> Support device init and add the following dev ops:
>  - dev_configure
>  - dev_close
>  - dev_infos_get
>  - link_update
>  - cpfl_dev_supported_ptypes_get
> 
> Signed-off-by: Mingxia Liu <mingxia.liu at intel.com>
> ---
>  MAINTAINERS                            |   9 +
>  doc/guides/nics/cpfl.rst               |  66 +++
>  doc/guides/nics/features/cpfl.ini      |  12 +
>  doc/guides/rel_notes/release_23_03.rst |   5 +
>  drivers/net/cpfl/cpfl_ethdev.c         | 769 +++++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
>  drivers/net/cpfl/cpfl_logs.h           |  32 +
>  drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
>  drivers/net/cpfl/cpfl_rxtx.h           |  25 +
>  drivers/net/cpfl/meson.build           |  14 +
>  drivers/net/meson.build                |   1 +
>  11 files changed, 1255 insertions(+)
>  create mode 100644 doc/guides/nics/cpfl.rst
>  create mode 100644 doc/guides/nics/features/cpfl.ini
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
>  create mode 100644 drivers/net/cpfl/cpfl_logs.h
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
>  create mode 100644 drivers/net/cpfl/meson.build
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 22ef2ea4b9..970acc5751 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -780,6 +780,15 @@ F: drivers/common/idpf/
>  F: doc/guides/nics/idpf.rst
>  F: doc/guides/nics/features/idpf.ini
> 
> +Intel cpfl
> +M: Qi Zhang <qi.z.zhang at intel.com>
> +M: Jingjing Wu <jingjing.wu at intel.com>
> +M: Beilei Xing <beilei.xing at intel.com>
> +T: git://dpdk.org/next/dpdk-next-net-intel
> +F: drivers/net/cpfl/
> +F: doc/guides/nics/cpfl.rst
> +F: doc/guides/nics/features/cpfl.ini
> +
>  Intel igc
>  M: Junfeng Guo <junfeng.guo at intel.com>
>  M: Simei Su <simei.su at intel.com>
> diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
> new file mode 100644
> index 0000000000..064c69ba7d
> --- /dev/null
> +++ b/doc/guides/nics/cpfl.rst
> @@ -0,0 +1,66 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright(c) 2022 Intel Corporation.
> +
> +.. include:: <isonum.txt>
> +
> +CPFL Poll Mode Driver
> +=====================
> +
> +The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode
> +driver support for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
> +
> +
> +Linux Prerequisites
> +-------------------
> +
> +Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK
> environment.
> +
> +To get better performance on Intel platforms, please follow the
> +:doc:`../linux_gsg/nic_perf_intel_platform`.
> +
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Runtime Config Options
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- ``vport`` (default ``0``)
> +
> +  The PMD supports creation of multiple vports for one PCI device,
> + each vport corresponds to a single ethdev.
> +  The user can specify the vports with specific ID to be created, for example::
> +
> +    -a ca:00.0,vport=[0,2,3]
> +
> +  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
> +
> +  If the parameter is not provided, the vport 0 will be created by default.
> +
> +- ``rx_single`` (default ``0``)
> +
> +  There are two queue modes supported by Intel\ |reg| IPU Ethernet ES2000 Series,
> +  single queue mode and split queue mode for Rx queue.
What is the relationship between the ES2000 Series and the IPU E2100 mentioned earlier in this document? Is the IPU Ethernet ES2000 a new product, or should the naming be consistent?

Thanks,
Helin

> +  User can choose Rx queue mode, example::
> +
> +    -a ca:00.0,rx_single=1
> +
> +  Then the PMD will configure Rx queue with single queue mode.
> +  Otherwise, split queue mode is chosen by default.
> +
> +- ``tx_single`` (default ``0``)
> +
> +  There are two queue modes supported by Intel\ |reg| IPU Ethernet ES2000 Series,
> +  single queue mode and split queue mode for Tx queue.
> +  User can choose Tx queue mode, example::
> +
> +    -a ca:00.0,tx_single=1
> +
> +  Then the PMD will configure Tx queue with single queue mode.
> +  Otherwise, split queue mode is chosen by default.
> +
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :doc:`build_and_test` for details.
> \ No newline at end of file
> diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
> new file mode 100644
> index 0000000000..a2d1ca9e15
> --- /dev/null
> +++ b/doc/guides/nics/features/cpfl.ini
> @@ -0,0 +1,12 @@
> +;
> +; Supported features of the 'cpfl' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +; A feature with "P" indicates only be supported when non-vector path
> +; is selected.
> +;
> +[Features]
> +Linux                = Y
> +x86-32               = Y
> +x86-64               = Y
> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
> index b8c5b68d6c..465a25e91e 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -55,6 +55,11 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
> 
> +* **Added Intel cpfl driver.**
> +
> +  Added the new ``cpfl`` net driver
> +  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
> +  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
> 
>  Removed Items
>  -------------
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> new file mode 100644
> index 0000000000..2d79ba2098
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -0,0 +1,769 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include <rte_atomic.h>
> +#include <rte_eal.h>
> +#include <rte_ether.h>
> +#include <rte_malloc.h>
> +#include <rte_memzone.h>
> +#include <rte_dev.h>
> +#include <errno.h>
> +#include <rte_alarm.h>
> +
> +#include "cpfl_ethdev.h"
> +
> +#define CPFL_TX_SINGLE_Q	"tx_single"
> +#define CPFL_RX_SINGLE_Q	"rx_single"
> +#define CPFL_VPORT		"vport"
> +
> +rte_spinlock_t cpfl_adapter_lock;
> +/* A list for all adapters, one adapter matches one PCI device */
> +struct cpfl_adapter_list cpfl_adapter_list;
> +bool cpfl_adapter_list_init;
> +
> +static const char * const cpfl_valid_args[] = {
> +	CPFL_TX_SINGLE_Q,
> +	CPFL_RX_SINGLE_Q,
> +	CPFL_VPORT,
> +	NULL
> +};
> +
> +static int
> +cpfl_dev_link_update(struct rte_eth_dev *dev,
> +		     __rte_unused int wait_to_complete) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct rte_eth_link new_link;
> +
> +	memset(&new_link, 0, sizeof(new_link));
> +
> +	switch (vport->link_speed) {
> +	case 10:
Would it be better to replace the magic number '10' with a meaningful macro?
The same comment applies to the ~20 lines of case labels below.
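
For example, something along these lines (an untested sketch; the helper name is
made up, and it assumes vport->link_speed is reported in Mbps, which matches the
numeric values of the RTE_ETH_SPEED_NUM_* macros):

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static uint32_t
cpfl_link_speed_to_ethdev(uint32_t link_speed_mbps)
{
	/* Speeds the device may report; RTE_ETH_SPEED_NUM_* are defined
	 * as the speed in Mbps, so no bare numbers are needed here.
	 */
	static const uint32_t cpfl_supported_speeds[] = {
		RTE_ETH_SPEED_NUM_10M,  RTE_ETH_SPEED_NUM_100M,
		RTE_ETH_SPEED_NUM_1G,   RTE_ETH_SPEED_NUM_10G,
		RTE_ETH_SPEED_NUM_20G,  RTE_ETH_SPEED_NUM_25G,
		RTE_ETH_SPEED_NUM_40G,  RTE_ETH_SPEED_NUM_50G,
		RTE_ETH_SPEED_NUM_100G, RTE_ETH_SPEED_NUM_200G,
	};
	uint32_t i;

	/* Return the matching ethdev speed, or NONE if unrecognized. */
	for (i = 0; i < RTE_DIM(cpfl_supported_speeds); i++) {
		if (link_speed_mbps == cpfl_supported_speeds[i])
			return cpfl_supported_speeds[i];
	}

	return RTE_ETH_SPEED_NUM_NONE;
}

Then the switch in cpfl_dev_link_update() reduces to:

	new_link.link_speed = cpfl_link_speed_to_ethdev(vport->link_speed);

If the device values can differ from the ethdev numbers, a small table mapping
the raw value to the RTE_ETH_SPEED_NUM_* macro would work just as well.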

Thanks,
Helin

> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> +		break;
> +	case 100:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> +		break;
> +	case 1000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> +		break;
> +	case 10000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> +		break;
> +	case 20000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> +		break;
> +	case 25000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> +		break;
> +	case 40000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> +		break;
> +	case 50000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> +		break;
> +	case 100000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> +		break;
> +	case 200000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> +		break;
> +	default:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> +	}
> +
> +	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> +		RTE_ETH_LINK_DOWN;
> +	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> +				  RTE_ETH_LINK_SPEED_FIXED);
> +
> +	return rte_eth_linkstatus_set(dev, &new_link); }
> +
> +static int
> +cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> +*dev_info) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +
> +	dev_info->max_rx_queues = adapter->caps.max_rx_q;
> +	dev_info->max_tx_queues = adapter->caps.max_tx_q;
> +	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
> +	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
> +
> +	dev_info->max_mtu = vport->max_mtu;
> +	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> +
> +	return 0;
> +}
> +
> +static const uint32_t *
> +cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused) {
> +	static const uint32_t ptypes[] = {
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
> +		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
> +		RTE_PTYPE_L4_FRAG,
> +		RTE_PTYPE_L4_UDP,
> +		RTE_PTYPE_L4_TCP,
> +		RTE_PTYPE_L4_SCTP,
> +		RTE_PTYPE_L4_ICMP,
> +		RTE_PTYPE_UNKNOWN
> +	};
> +
> +	return ptypes;
> +}
> +
> +static int
> +cpfl_dev_configure(struct rte_eth_dev *dev) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
> +
> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
> +			     conf->txmode.mq_mode);
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->lpbk_mode != 0) {
> +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
> +			     conf->lpbk_mode);
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->dcb_capability_en != 0) {
> +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
> +		return -ENOTSUP;
> +	}
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.lsc != 0) {
> +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.rxq != 0) {
> +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.rmv != 0) {
> +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +cpfl_dev_close(struct rte_eth_dev *dev) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_adapter_ext *adapter =
> +CPFL_ADAPTER_TO_EXT(vport->adapter);
> +
> +	idpf_vport_deinit(vport);
> +
> +	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
> +	adapter->cur_vport_nb--;
> +	dev->data->dev_private = NULL;
> +	adapter->vports[vport->sw_idx] = NULL;
> +	rte_free(vport);
> +
> +	return 0;
> +}
> +
> +static int
> +insert_value(struct cpfl_devargs *devargs, uint16_t id) {
> +	uint16_t i;
> +
> +	/* ignore duplicate */
> +	for (i = 0; i < devargs->req_vport_nb; i++) {
> +		if (devargs->req_vports[i] == id)
> +			return 0;
> +	}
> +
> +	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
> +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> +			     CPFL_MAX_VPORT_NUM);
> +		return -EINVAL;
> +	}
> +
> +	devargs->req_vports[devargs->req_vport_nb] = id;
> +	devargs->req_vport_nb++;
> +
> +	return 0;
> +}
> +
> +static const char *
> +parse_range(const char *value, struct cpfl_devargs *devargs) {
> +	uint16_t lo, hi, i;
> +	int n = 0;
> +	int result;
> +	const char *pos = value;
> +
> +	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
> +	if (result == 1) {
What does the "1" mean here?
I'd suggest replacing it with a meaningful macro, or at least a comment explaining the sscanf() return value.
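
For example (illustrative only, the macro names are made up): sscanf() returns
the number of conversions performed, and %n is not counted, so 1 means a lone
vport id such as "3" and 2 means a range such as "0-3". A tiny standalone demo:

#include <stdio.h>

/* Illustrative names only -- not in the patch. */
#define CPFL_VPORT_ID_MATCHED    1	/* "lo" only,  e.g. "3"   */
#define CPFL_VPORT_RANGE_MATCHED 2	/* "lo-hi",    e.g. "0-3" */

int
main(void)
{
	unsigned short lo = 0, hi = 0;
	int n = 0;

	printf("%d\n", sscanf("3", "%hu%n-%hu%n", &lo, &n, &hi, &n));   /* prints 1 */
	printf("%d\n", sscanf("0-3", "%hu%n-%hu%n", &lo, &n, &hi, &n)); /* prints 2 */
	return 0;
}

parse_range() could then compare 'result' against such macros instead of the
bare 1 and 2.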

Thanks,
Helin

> +		if (lo >= CPFL_MAX_VPORT_NUM)
> +			return NULL;
> +		if (insert_value(devargs, lo) != 0)
> +			return NULL;
> +	} else if (result == 2) {
> +		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
> +			return NULL;
> +		for (i = lo; i <= hi; i++) {
> +			if (insert_value(devargs, i) != 0)
> +				return NULL;
> +		}
> +	} else {
> +		return NULL;
> +	}
> +
> +	return pos + n;
> +}
> +
> +static int
> +parse_vport(const char *key, const char *value, void *args) {
> +	struct cpfl_devargs *devargs = args;
> +	const char *pos = value;
> +
> +	devargs->req_vport_nb = 0;
> +
> +	if (*pos == '[')
> +		pos++;
> +
> +	while (1) {
> +		pos = parse_range(pos, devargs);
> +		if (pos == NULL) {
> +			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
> +				     value, key);
> +			return -EINVAL;
> +		}
> +		if (*pos != ',')
> +			break;
> +		pos++;
> +	}
> +
> +	if (*value == '[' && *pos != ']') {
> +		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
> +			     value, key);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +parse_bool(const char *key, const char *value, void *args) {
> +	int *i = args;
> +	char *end;
> +	int num;
> +
> +	errno = 0;
> +
> +	num = strtoul(value, &end, 10);
> +
> +	if (errno == ERANGE || (num != 0 && num != 1)) {
> +		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
> +			value, key);
> +		return -EINVAL;
> +	}
> +
> +	*i = num;
> +	return 0;
> +}
> +
> +static int
> +cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
> +		   struct cpfl_devargs *cpfl_args)
> +{
> +	struct rte_devargs *devargs = pci_dev->device.devargs;
> +	struct rte_kvargs *kvlist;
> +	int i, ret;
> +
> +	cpfl_args->req_vport_nb = 0;
> +
> +	if (devargs == NULL)
Would a log message be useful here for debugging purposes, so it is clear that no devargs were provided and the defaults are being used?
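
Something like this would do (wording is only a suggestion):

	PMD_INIT_LOG(DEBUG, "No device arguments provided, creating vport 0 with default queue model");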

> +		return 0;
> +
> +	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
> +	if (kvlist == NULL) {
> +		PMD_INIT_LOG(ERR, "invalid kvargs key");
> +		return -EINVAL;
> +	}
> +
> +	/* check parsed devargs */
> +	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
> +	    CPFL_MAX_VPORT_NUM) {
> +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> +			     CPFL_MAX_VPORT_NUM);
> +		ret = -EINVAL;
> +		goto bail;
> +	}
> +
> +	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
> +		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
> +			PMD_INIT_LOG(ERR, "Vport %d has been created",
> +				     cpfl_args->req_vports[i]);
> +			ret = -EINVAL;
> +			goto bail;
> +		}
> +	}
> +
> +	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
> +				 cpfl_args);
> +	if (ret != 0)
> +		goto bail;
> +
> +	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
> +				 &adapter->base.txq_model);
> +	if (ret != 0)
> +		goto bail;
> +
> +	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
> +				 &adapter->base.rxq_model);
> +	if (ret != 0)
> +		goto bail;
Is the line above dead code? Execution falls through to the 'bail' label whether or not the goto is taken.

> +
> +bail:
> +	rte_kvargs_free(kvlist);
> +	return ret;
> +}
> +
> +static struct idpf_vport *
> +cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id) {
> +	struct idpf_vport *vport = NULL;
> +	int i;
> +
> +	for (i = 0; i < adapter->cur_vport_nb; i++) {
> +		vport = adapter->vports[i];
> +		if (vport->vport_id != vport_id)
> +			continue;
> +		else
> +			return vport;
Likely you just need to check whether vport->vport_id equals vport_id and return on a match, right? The continue/else looks redundant, and as written the function seems to return the last vport examined, rather than NULL, when nothing matches.
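
A minimal sketch of what I mean (untested); note it returns NULL when no vport
matches, which appears to be what the caller in cpfl_handle_virtchnl_msg() expects:

static struct idpf_vport *
cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
{
	int i;

	/* Return the matching vport, or NULL if the id is unknown. */
	for (i = 0; i < adapter->cur_vport_nb; i++) {
		if (adapter->vports[i]->vport_id == vport_id)
			return adapter->vports[i];
	}

	return NULL;
}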

> +	}
> +
> +	return vport;
> +}
> +
> +static void
> +cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t
> +msglen) {
> +	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> +
> +	if (msglen < sizeof(struct virtchnl2_event)) {
> +		PMD_DRV_LOG(ERR, "Error event");
> +		return;
> +	}
> +
> +	switch (vc_event->event) {
> +	case VIRTCHNL2_EVENT_LINK_CHANGE:
> +		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL2_EVENT_LINK_CHANGE");
> +		vport->link_up = vc_event->link_status;
> +		vport->link_speed = vc_event->link_speed;
> +		cpfl_dev_link_update(dev, 0);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, " unknown event received %u",
> vc_event->event);
> +		break;
> +	}
> +}
> +
> +static void
> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex) {
> +	struct idpf_adapter *adapter = &adapter_ex->base;
> +	struct idpf_dma_mem *dma_mem = NULL;
> +	struct idpf_hw *hw = &adapter->hw;
> +	struct virtchnl2_event *vc_event;
> +	struct idpf_ctlq_msg ctlq_msg;
> +	enum idpf_mbx_opc mbx_op;
> +	struct idpf_vport *vport;
> +	enum virtchnl_ops vc_op;
> +	uint16_t pending = 1;
> +	int ret;
> +
> +	while (pending) {
> +		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> +		if (ret) {
> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
> +			return;
> +		}
> +
> +		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
> +			   IDPF_DFLT_MBX_BUF_SIZE);
> +
> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> +		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> +
> +		switch (mbx_op) {
> +		case idpf_mbq_opc_send_msg_to_peer_pf:
> +			if (vc_op == VIRTCHNL2_OP_EVENT) {
> +				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
> +					PMD_DRV_LOG(ERR, "Error event");
> +					return;
> +				}
> +				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
> +				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
> +				if (!vport) {
> +					PMD_DRV_LOG(ERR, "Can't find vport.");
> +					return;
> +				}
> +				cpfl_handle_event_msg(vport, adapter->mbx_resp,
> +						      ctlq_msg.data_len);
> +			} else {
> +				if (vc_op == adapter->pend_cmd)
> +					notify_cmd(adapter, adapter->cmd_retval);
> +				else
> +					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
> +						    adapter->pend_cmd, vc_op);
> +
> +				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
> +					    "opcode = %d", vc_op);
> +			}
> +			goto post_buf;
> +		default:
> +			PMD_DRV_LOG(DEBUG, "Request %u is not
> supported yet", mbx_op);
> +		}
> +	}
> +
> +post_buf:
> +	if (ctlq_msg.data_len)
> +		dma_mem = ctlq_msg.ctx.indirect.payload;
> +	else
> +		pending = 0;
> +
> +	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
> +	if (ret && dma_mem)
> +		idpf_free_dma_mem(hw, dma_mem);
> +}
> +
> +static void
> +cpfl_dev_alarm_handler(void *param)
> +{
> +	struct cpfl_adapter_ext *adapter = param;
> +
> +	cpfl_handle_virtchnl_msg(adapter);
> +
> +	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
> +}
> +
> +static int
> +cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct
> +cpfl_adapter_ext *adapter) {
> +	struct idpf_adapter *base = &adapter->base;
> +	struct idpf_hw *hw = &base->hw;
> +	int ret = 0;
> +
> +	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
> +	hw->hw_addr_len = pci_dev->mem_resource[0].len;
> +	hw->back = base;
> +	hw->vendor_id = pci_dev->id.vendor_id;
> +	hw->device_id = pci_dev->id.device_id;
> +	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +
> +	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
> +
> +	ret = idpf_adapter_init(base);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init adapter");
> +		goto err_adapter_init;
> +	}
> +
> +	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
> +
> +	adapter->max_vport_nb = adapter->base.caps.max_vports;
> +
> +	adapter->vports = rte_zmalloc("vports",
> +				      adapter->max_vport_nb *
> +				      sizeof(*adapter->vports),
> +				      0);
> +	if (adapter->vports == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
> +		ret = -ENOMEM;
> +		goto err_get_ptype;
> +	}
> +
> +	adapter->cur_vports = 0;
> +	adapter->cur_vport_nb = 0;
> +
> +	adapter->used_vecs_num = 0;
> +
> +	return ret;
> +
> +err_get_ptype:
> +	idpf_adapter_deinit(base);
> +err_adapter_init:
> +	return ret;
> +}
> +
> +static const struct eth_dev_ops cpfl_eth_dev_ops = {
> +	.dev_configure			= cpfl_dev_configure,
> +	.dev_close			= cpfl_dev_close,
> +	.dev_infos_get			= cpfl_dev_info_get,
> +	.link_update			= cpfl_dev_link_update,
> +	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
> +};
> +
> +static uint16_t
> +cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad) {
> +	uint16_t vport_idx;
> +	uint16_t i;
> +
> +	for (i = 0; i < ad->max_vport_nb; i++) {
> +		if (ad->vports[i] == NULL)
> +			break;
> +	}
> +
> +	if (i == ad->max_vport_nb)
> +		vport_idx = CPFL_INVALID_VPORT_IDX;
Why not initialize vport_idx to CPFL_INVALID_VPORT_IDX at its declaration, or simply drop the variable and return directly?
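
For example (untested sketch):

static uint16_t
cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
{
	uint16_t i;

	/* Return the first free slot, or CPFL_INVALID_VPORT_IDX if full. */
	for (i = 0; i < ad->max_vport_nb; i++) {
		if (ad->vports[i] == NULL)
			return i;
	}

	return CPFL_INVALID_VPORT_IDX;
}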

> +	else
> +		vport_idx = i;
> +
> +	return vport_idx;
> +}
> +
> +static int
> +cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport_param *param = init_params;
> +	struct cpfl_adapter_ext *adapter = param->adapter;
> +	/* for sending create vport virtchnl msg prepare */
> +	struct virtchnl2_create_vport create_vport_info;
> +	int ret = 0;
> +
> +	dev->dev_ops = &cpfl_eth_dev_ops;
> +	vport->adapter = &adapter->base;
> +	vport->sw_idx = param->idx;
> +	vport->devarg_id = param->devarg_id;
> +	vport->dev = dev;
> +
> +	memset(&create_vport_info, 0, sizeof(create_vport_info));
> +	ret = idpf_create_vport_info_init(vport, &create_vport_info);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
> +		goto err;
> +	}
> +
> +	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init vports.");
> +		goto err;
> +	}
> +
> +	adapter->vports[param->idx] = vport;
> +	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
> +	adapter->cur_vport_nb++;
> +
> +	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN,
> 0);
> +	if (dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
> +		ret = -ENOMEM;
> +		goto err_mac_addrs;
> +	}
> +
> +	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
> +			    &dev->data->mac_addrs[0]);
> +
> +	return 0;
> +
> +err_mac_addrs:
> +	adapter->vports[param->idx] = NULL;  /* reset */
> +	idpf_vport_deinit(vport);
> +err:
> +	return ret;
> +}
> +
> +static const struct rte_pci_id pci_id_cpfl_map[] = {
> +	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
> +	{ .vendor_id = 0, /* sentinel */ },
> +};
> +
> +static struct cpfl_adapter_ext *
> +cpfl_find_adapter_ext(struct rte_pci_device *pci_dev) {
> +	struct cpfl_adapter_ext *adapter;
> +	int found = 0;
> +
> +	if (pci_dev == NULL)
> +		return NULL;
> +
> +	rte_spinlock_lock(&cpfl_adapter_lock);
> +	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
> +		if (strncmp(adapter->name, pci_dev->device.name,
> PCI_PRI_STR_SIZE) == 0) {
> +			found = 1;
> +			break;
> +		}
> +	}
> +	rte_spinlock_unlock(&cpfl_adapter_lock);
> +
> +	if (found == 0)
> +		return NULL;
> +
> +	return adapter;
> +}
> +
> +static void
> +cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter) {
> +	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
> +	idpf_adapter_deinit(&adapter->base);
> +
> +	rte_free(adapter->vports);
> +	adapter->vports = NULL;
> +}
> +
> +static int
> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> +	       struct rte_pci_device *pci_dev) {
> +	struct cpfl_vport_param vport_param;
> +	struct cpfl_adapter_ext *adapter;
> +	struct cpfl_devargs devargs;
> +	char name[RTE_ETH_NAME_MAX_LEN];
> +	int i, retval;
> +	bool first_probe = false;
> +
> +	if (!cpfl_adapter_list_init) {
> +		rte_spinlock_init(&cpfl_adapter_lock);
> +		TAILQ_INIT(&cpfl_adapter_list);
> +		cpfl_adapter_list_init = true;
> +	}
> +
> +	adapter = cpfl_find_adapter_ext(pci_dev);
> +	if (adapter == NULL) {
> +		first_probe = true;
> +		adapter = rte_zmalloc("cpfl_adapter_ext",
> +				      sizeof(struct cpfl_adapter_ext), 0);
> +		if (adapter == NULL) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> +			return -ENOMEM;
> +		}
> +
> +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> +		if (retval != 0) {
> +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> +			return retval;
> +		}
> +
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +	}
> +
> +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> +	if (retval != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> +		goto err;
> +	}
> +
> +	if (devargs.req_vport_nb == 0) {
> +		/* If no vport devarg, create vport 0 by default. */
> +		vport_param.adapter = adapter;
> +		vport_param.devarg_id = 0;
> +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> +			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
> +			return 0;
> +		}
> +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> +			 pci_dev->device.name);
> +		retval = rte_eth_dev_create(&pci_dev->device, name,
> +					    sizeof(struct idpf_vport),
> +					    NULL, NULL, cpfl_dev_vport_init,
> +					    &vport_param);
> +		if (retval != 0)
> +			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
> +	} else {
> +		for (i = 0; i < devargs.req_vport_nb; i++) {
> +			vport_param.adapter = adapter;
> +			vport_param.devarg_id = devargs.req_vports[i];
> +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> +				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
> +				break;
> +			}
> +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> +				 pci_dev->device.name,
> +				 devargs.req_vports[i]);
> +			retval = rte_eth_dev_create(&pci_dev->device, name,
> +						    sizeof(struct idpf_vport),
> +						    NULL, NULL, cpfl_dev_vport_init,
> +						    &vport_param);
> +			if (retval != 0)
> +				PMD_DRV_LOG(ERR, "Failed to create vport %d",
> +					    vport_param.devarg_id);
> +		}
> +	}
> +
> +	return 0;
> +
> +err:
> +	if (first_probe) {
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +		cpfl_adapter_ext_deinit(adapter);
> +		rte_free(adapter);
> +	}
> +	return retval;
> +}
> +
> +static int
> +cpfl_pci_remove(struct rte_pci_device *pci_dev) {
> +	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
> +	uint16_t port_id;
> +
> +	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF
> through rte_device */
> +	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
> +			rte_eth_dev_close(port_id);
> +	}
> +
> +	rte_spinlock_lock(&cpfl_adapter_lock);
> +	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> +	rte_spinlock_unlock(&cpfl_adapter_lock);
> +	cpfl_adapter_ext_deinit(adapter);
> +	rte_free(adapter);
> +
> +	return 0;
> +}
> +
> +static struct rte_pci_driver rte_cpfl_pmd = {
> +	.id_table	= pci_id_cpfl_map,
> +	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
> +	.probe		= cpfl_pci_probe,
> +	.remove		= cpfl_pci_remove,
> +};
> +
> +/**
> + * Driver initialization routine.
> + * Invoked once at EAL init time.
> + * Register itself as the [Poll Mode] Driver of PCI devices.
> + */
> +RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
> +RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
> +			      CPFL_TX_SINGLE_Q "=<0|1> "
> +			      CPFL_RX_SINGLE_Q "=<0|1> "
> +			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
> +
> +RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> new file mode 100644
> index 0000000000..83459b9c91
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _CPFL_ETHDEV_H_
> +#define _CPFL_ETHDEV_H_
> +
> +#include <stdint.h>
> +#include <rte_malloc.h>
> +#include <rte_spinlock.h>
> +#include <rte_ethdev.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_driver.h>
> +#include <ethdev_pci.h>
> +
> +#include "cpfl_logs.h"
> +
> +#include <idpf_common_device.h>
> +#include <idpf_common_virtchnl.h>
> +#include <base/idpf_prototype.h>
> +#include <base/virtchnl2.h>
> +
> +#define CPFL_MAX_VPORT_NUM	8
> +
> +#define CPFL_INVALID_VPORT_IDX	0xffff
> +
> +#define CPFL_MIN_BUF_SIZE	1024
> +#define CPFL_MAX_FRAME_SIZE	9728
> +#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
> +
> +#define CPFL_NUM_MACADDR_MAX	64
> +
> +#define CPFL_VLAN_TAG_SIZE	4
> +#define CPFL_ETH_OVERHEAD \
> +	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
> +
> +#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
> +
> +#define CPFL_ALARM_INTERVAL	50000 /* us */
> +
> +/* Device IDs */
> +#define IDPF_DEV_ID_CPF			0x1453
> +
> +struct cpfl_vport_param {
> +	struct cpfl_adapter_ext *adapter;
> +	uint16_t devarg_id; /* arg id from user */
> +	uint16_t idx;       /* index in adapter->vports[]*/
> +};
> +
> +/* Struct used when parse driver specific devargs */
> +struct cpfl_devargs {
> +	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
> +	uint16_t req_vport_nb;
> +};
> +
> +struct cpfl_adapter_ext {
> +	TAILQ_ENTRY(cpfl_adapter_ext) next;
> +	struct idpf_adapter base;
> +
> +	char name[CPFL_ADAPTER_NAME_LEN];
> +
> +	struct idpf_vport **vports;
> +	uint16_t max_vport_nb;
> +
> +	uint16_t cur_vports; /* bit mask of created vport */
> +	uint16_t cur_vport_nb;
> +
> +	uint16_t used_vecs_num;
> +};
> +
> +TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
> +
> +#define CPFL_DEV_TO_PCI(eth_dev)		\
> +	RTE_DEV_TO_PCI((eth_dev)->device)
> +#define CPFL_ADAPTER_TO_EXT(p)					\
> +	container_of((p), struct cpfl_adapter_ext, base)
> +
> +#endif /* _CPFL_ETHDEV_H_ */
> diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
> new file mode 100644
> index 0000000000..451bdfbd1d
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_logs.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _CPFL_LOGS_H_
> +#define _CPFL_LOGS_H_
> +
> +#include <rte_log.h>
> +
> +extern int cpfl_logtype_init;
> +extern int cpfl_logtype_driver;
> +
> +#define PMD_INIT_LOG(level, ...) \
> +	rte_log(RTE_LOG_ ## level, \
> +		cpfl_logtype_init, \
> +		RTE_FMT("%s(): " \
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> +			__func__, \
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define PMD_DRV_LOG_RAW(level, ...) \
> +	rte_log(RTE_LOG_ ## level, \
> +		cpfl_logtype_driver, \
> +		RTE_FMT("%s(): " \
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> +			__func__, \
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
> +
> +#endif /* _CPFL_LOGS_H_ */
> diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
> new file mode 100644
> index 0000000000..ea4a2002bf
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_rxtx.c
> @@ -0,0 +1,244 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include <ethdev_driver.h>
> +#include <rte_net.h>
> +#include <rte_vect.h>
> +
> +#include "cpfl_ethdev.h"
> +#include "cpfl_rxtx.h"
> +
> +static uint64_t
> +cpfl_tx_offload_convert(uint64_t offload) {
> +	uint64_t ol = 0;
> +
> +	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
> +		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
> +		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
> +
> +	return ol;
> +}
> +
> +static const struct rte_memzone *
> +cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
> +		      uint16_t len, uint16_t queue_type,
> +		      unsigned int socket_id, bool splitq) {
> +	char ring_name[RTE_MEMZONE_NAMESIZE];
> +	const struct rte_memzone *mz;
> +	uint32_t ring_size;
> +
> +	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
> +	switch (queue_type) {
> +	case VIRTCHNL2_QUEUE_TYPE_TX:
> +		if (splitq)
> +			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
> +					      CPFL_DMA_MEM_ALIGN);
> +		else
> +			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
> +					      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
> +		break;
> +	case VIRTCHNL2_QUEUE_TYPE_RX:
> +		if (splitq)
> +			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
> +					      CPFL_DMA_MEM_ALIGN);
> +		else
> +			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
> +					      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
> +		break;
> +	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
> +		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
> +				      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
> +		break;
> +	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
> +		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
> +				      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
> +		break;
> +		break;
> +	default:
> +		PMD_INIT_LOG(ERR, "Invalid queue type");
> +		return NULL;
> +	}
> +
> +	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
> +				      ring_size, CPFL_RING_BASE_ALIGN,
> +				      socket_id);
> +	if (mz == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
> +		return NULL;
> +	}
> +
> +	/* Zero all the descriptors in the ring. */
> +	memset(mz->addr, 0, ring_size);
> +
> +	return mz;
> +}
> +
> +static void
> +cpfl_dma_zone_release(const struct rte_memzone *mz) {
> +	rte_memzone_free(mz);
> +}
> +
> +static int
> +cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
> +		     uint16_t queue_idx, uint16_t nb_desc,
> +		     unsigned int socket_id)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	const struct rte_memzone *mz;
> +	struct idpf_tx_queue *cq;
> +	int ret;
> +
> +	cq = rte_zmalloc_socket("cpfl splitq cq",
> +				sizeof(struct idpf_tx_queue),
> +				RTE_CACHE_LINE_SIZE,
> +				socket_id);
> +	if (cq == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
> +		ret = -ENOMEM;
> +		goto err_cq_alloc;
> +	}
> +
> +	cq->nb_tx_desc = nb_desc;
> +	cq->queue_id = vport->chunks_info.tx_compl_start_qid +
> queue_idx;
> +	cq->port_id = dev->data->port_id;
> +	cq->txqs = dev->data->tx_queues;
> +	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
> +
> +	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
> +				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
> +				   socket_id, true);
> +	if (mz == NULL) {
Is an error log needed here, to match the other failure paths in this function?
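
For example (wording up to you), matching the other error paths in this file:

	PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for Tx completion queue");
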
> +		ret = -ENOMEM;
> +		goto err_mz_reserve;
> +	}
> +	cq->tx_ring_phys_addr = mz->iova;
> +	cq->compl_ring = mz->addr;
> +	cq->mz = mz;
> +	reset_split_tx_complq(cq);
> +
> +	txq->complq = cq;
> +
> +	return 0;
> +
> +err_mz_reserve:
> +	rte_free(cq);
> +err_cq_alloc:
> +	return ret;
> +}
> +
> +int
> +cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> +		    uint16_t nb_desc, unsigned int socket_id,
> +		    const struct rte_eth_txconf *tx_conf) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	uint16_t tx_rs_thresh, tx_free_thresh;
> +	struct idpf_hw *hw = &adapter->hw;
> +	const struct rte_memzone *mz;
> +	struct idpf_tx_queue *txq;
> +	uint64_t offloads;
> +	uint16_t len;
> +	bool is_splitq;
> +	int ret;
> +
> +	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
> +
> +	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
> +		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
> +	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
> +		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
> +	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
> +		return -EINVAL;
> +
> +	/* Allocate the TX queue data structure. */
> +	txq = rte_zmalloc_socket("cpfl txq",
> +				 sizeof(struct idpf_tx_queue),
> +				 RTE_CACHE_LINE_SIZE,
> +				 socket_id);
> +	if (txq == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
> +		ret = -ENOMEM;
> +		goto err_txq_alloc;
> +	}
> +
> +	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
> +
> +	txq->nb_tx_desc = nb_desc;
> +	txq->rs_thresh = tx_rs_thresh;
> +	txq->free_thresh = tx_free_thresh;
> +	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
> +	txq->port_id = dev->data->port_id;
> +	txq->offloads = cpfl_tx_offload_convert(offloads);
> +	txq->tx_deferred_start = tx_conf->tx_deferred_start;
> +
> +	if (is_splitq)
> +		len = 2 * nb_desc;
> +	else
> +		len = nb_desc;
> +	txq->sw_nb_desc = len;
> +
> +	/* Allocate TX hardware ring descriptors. */
> +	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
> +				   socket_id, is_splitq);
> +	if (mz == NULL) {
> +		ret = -ENOMEM;
> +		goto err_mz_reserve;
> +	}
> +	txq->tx_ring_phys_addr = mz->iova;
> +	txq->mz = mz;
> +
> +	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
> +					  sizeof(struct idpf_tx_entry) * len,
> +					  RTE_CACHE_LINE_SIZE, socket_id);
> +	if (txq->sw_ring == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
> +		ret = -ENOMEM;
> +		goto err_sw_ring_alloc;
> +	}
> +
> +	if (!is_splitq) {
> +		txq->tx_ring = mz->addr;
> +		reset_single_tx_queue(txq);
> +	} else {
> +		txq->desc_ring = mz->addr;
> +		reset_split_tx_descq(txq);
> +
> +		/* Setup tx completion queue if split model */
> +		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
> +					   2 * nb_desc, socket_id);
> +		if (ret != 0)
> +			goto err_complq_setup;
> +	}
> +
> +	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
> +			queue_idx * vport->chunks_info.tx_qtail_spacing);
> +	txq->q_set = true;
> +	dev->data->tx_queues[queue_idx] = txq;
> +
> +	return 0;
> +
> +err_complq_setup:
> +err_sw_ring_alloc:
> +	cpfl_dma_zone_release(mz);
> +err_mz_reserve:
> +	rte_free(txq);
> +err_txq_alloc:
> +	return ret;
> +}
> diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
> new file mode 100644
> index 0000000000..ec42478393
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_rxtx.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _CPFL_RXTX_H_
> +#define _CPFL_RXTX_H_
> +
> +#include <idpf_common_rxtx.h>
> +#include "cpfl_ethdev.h"
> +
> +/* In QLEN must be whole number of 32 descriptors. */
> +#define CPFL_ALIGN_RING_DESC	32
> +#define CPFL_MIN_RING_DESC	32
> +#define CPFL_MAX_RING_DESC	4096
> +#define CPFL_DMA_MEM_ALIGN	4096
> +/* Base address of the HW descriptor ring should be 128B aligned. */
> +#define CPFL_RING_BASE_ALIGN	128
> +
> +#define CPFL_DEFAULT_TX_RS_THRESH	32
> +#define CPFL_DEFAULT_TX_FREE_THRESH	32
> +
> +int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> +			uint16_t nb_desc, unsigned int socket_id,
> +			const struct rte_eth_txconf *tx_conf);
> +
> +#endif /* _CPFL_RXTX_H_ */
> diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
> new file mode 100644
> index 0000000000..106cc97e60
> --- /dev/null
> +++ b/drivers/net/cpfl/meson.build
> @@ -0,0 +1,14 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 Intel Corporation
> +
> +if is_windows
> +    build = false
> +    reason = 'not supported on Windows'
> +    subdir_done()
> +endif
> +
> +deps += ['common_idpf']
> +
> +sources = files(
> +        'cpfl_ethdev.c',
> +)
> \ No newline at end of file
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index 6470bf3636..a8ca338875 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -13,6 +13,7 @@ drivers = [
>          'bnxt',
>          'bonding',
>          'cnxk',
> +        'cpfl',
>          'cxgbe',
>          'dpaa',
>          'dpaa2',
> --
> 2.25.1


