[dpdk-dev,02/10] mempool/octeontx: probe timvf PCIe devices
Commit Message
On Octeontx HW, each event timer device is enumerated as a separate
SR-IOV VF PCIe device.

In order to expose it as an event timer device: on PCIe probe, the
driver stores the information associated with the PCIe device, and
later, when the application requests an event timer device through
`rte_event_timer_adapter_create`, the driver infrastructure creates the
timer adapter from the earlier probed PCIe VF devices.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
drivers/mempool/octeontx/Makefile | 1 +
drivers/mempool/octeontx/meson.build | 1 +
drivers/mempool/octeontx/octeontx_mbox.h | 7 +
drivers/mempool/octeontx/octeontx_timvf.c | 145 +++++++++++++++++++++
.../octeontx/rte_mempool_octeontx_version.map | 3 +
usertools/dpdk-devbind.py | 8 ++
6 files changed, 165 insertions(+)
create mode 100644 drivers/mempool/octeontx/octeontx_timvf.c
Comments
-----Original Message-----
> Date: Sat, 17 Feb 2018 03:06:52 +0530
> From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> To: jerin.jacob@caviumnetworks.com, santosh.shukla@caviumnetworks.com,
> erik.g.carrillo@intel.com
> Cc: dev@dpdk.org, Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH 02/10] mempool/octeontx: probe timvf PCIe devices
> X-Mailer: git-send-email 2.16.1
>
> On Octeontx HW, each event timer device is enumerated as a separate
> SR-IOV VF PCIe device.
>
> In order to expose it as an event timer device: on PCIe probe, the
> driver stores the information associated with the PCIe device, and
> later, when the application requests an event timer device through
> `rte_event_timer_adapter_create`, the driver infrastructure creates the
> timer adapter from the earlier probed PCIe VF devices.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---
> drivers/mempool/octeontx/Makefile | 1 +
> drivers/mempool/octeontx/meson.build | 1 +
> drivers/mempool/octeontx/octeontx_mbox.h | 7 +
> drivers/mempool/octeontx/octeontx_timvf.c | 145 +++++++++++++++++++++
> .../octeontx/rte_mempool_octeontx_version.map | 3 +
> usertools/dpdk-devbind.py | 8 ++
I suggest having a separate patch for the usertools/dpdk-devbind.py
common code change.
> diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
> index 18d938607..340643b70 100755
> --- a/usertools/dpdk-devbind.py
> +++ b/usertools/dpdk-devbind.py
> @@ -22,11 +22,14 @@
> 'SVendor': None, 'SDevice': None}
> cavium_pkx = {'Class': '08', 'Vendor': '177d', 'Device': 'a0dd,a049',
> 'SVendor': None, 'SDevice': None}
> +cavium_tim = {'Class': '08', 'Vendor': '177d', 'Device': 'a051',
> + 'SVendor': None, 'SDevice': None}
>
> network_devices = [network_class, cavium_pkx]
> crypto_devices = [encryption_class, intel_processor_class]
> eventdev_devices = [cavium_sso]
> mempool_devices = [cavium_fpa]
> +eventtimer_devices = [cavium_tim]
In order to reduce the number of device types, IMO we could group this
under "eventdev_devices" as well, since it comes up as a sub-device of
eventdev, i.e.:
eventdev_devices = [cavium_sso, cavium_tim]
On Sat, Feb 17, 2018 at 10:24:07AM +0530, Jerin Jacob wrote:
> I suggest having a separate patch for the usertools/dpdk-devbind.py
> common code change.
> In order to reduce the number of device types, IMO we could group this
> under "eventdev_devices" as well, since it comes up as a sub-device of
> eventdev, i.e.:
>
> eventdev_devices = [cavium_sso, cavium_tim]
Agreed, will do the specified changes and send them as a separate patch.
Thanks,
Pavan.
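For reference, the grouping agreed above can be sketched with a simplified version of devbind-style matching. The cavium_tim entry is from the patch; the cavium_sso device IDs and the `device_matches` helper here are illustrative, not devbind's actual code:

```python
# Device tables in the style of dpdk-devbind.py, with cavium_tim folded
# into eventdev_devices as suggested in the review.
cavium_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a04b,a04d',
              'SVendor': None, 'SDevice': None}
cavium_tim = {'Class': '08', 'Vendor': '177d', 'Device': 'a051',
              'SVendor': None, 'SDevice': None}

eventdev_devices = [cavium_sso, cavium_tim]

def device_matches(dev, groups):
    """Return True if dev (a dict with 'Class'/'Vendor'/'Device' keys)
    matches any group; 'Device' fields are comma-separated ID lists."""
    for g in groups:
        if (dev['Class'] == g['Class'] and dev['Vendor'] == g['Vendor']
                and dev['Device'] in g['Device'].split(',')):
            return True
    return False
```

With this grouping, the TIM VF (device ID a051) is reported alongside the SSO devices, so no separate `eventtimer_devices` table or `--status-dev event_timer` plumbing is needed.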
@@ -20,6 +20,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_ssovf.c
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_mbox.c
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_timvf.c
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
@@ -4,6 +4,7 @@
sources = files('octeontx_ssovf.c',
'octeontx_mbox.c',
'octeontx_fpavf.c',
+ 'octeontx_timvf.c',
'rte_mempool_octeontx.c'
)
@@ -21,6 +21,11 @@ enum octeontx_ssovf_type {
OCTEONTX_SSO_HWS, /* SSO hardware workslot vf */
};
+struct octeontx_timvf_info {
+ uint16_t domain; /* Domain id */
+ uint8_t total_timvfs; /* Total timvf available in domain */
+};
+
struct octeontx_mbox_hdr {
uint16_t vfid; /* VF index or pf resource index local to the domain */
uint8_t coproc; /* Coprocessor id */
@@ -32,5 +37,7 @@ int octeontx_ssovf_info(struct octeontx_ssovf_info *info);
void *octeontx_ssovf_bar(enum octeontx_ssovf_type, uint8_t id, uint8_t bar);
int octeontx_ssovf_mbox_send(struct octeontx_mbox_hdr *hdr,
void *txdata, uint16_t txlen, void *rxdata, uint16_t rxlen);
+int octeontx_timvf_info(struct octeontx_timvf_info *info);
+void *octeontx_timvf_bar(uint8_t id, uint8_t bar);
#endif /* __OCTEONTX_MBOX_H__ */
new file mode 100644
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Cavium, Inc
+ */
+
+#include <rte_eal.h>
+#include <rte_io.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+
+#include "octeontx_mbox.h"
+#include "octeontx_pool_logs.h"
+
+#ifndef PCI_VENDOR_ID_CAVIUM
+#define PCI_VENDOR_ID_CAVIUM (0x177D)
+#endif
+
+#define PCI_DEVICE_ID_OCTEONTX_TIM_VF (0xA051)
+#define TIM_MAX_RINGS (64)
+
+struct timvf_res {
+ uint16_t domain;
+ uint16_t vfid;
+ void *bar0;
+ void *bar2;
+ void *bar4;
+};
+
+struct timdev {
+ uint8_t total_timvfs;
+ struct timvf_res rings[TIM_MAX_RINGS];
+};
+
+static struct timdev tdev;
+
+int
+octeontx_timvf_info(struct octeontx_timvf_info *tinfo)
+{
+ int i;
+ struct octeontx_ssovf_info info;
+
+ if (tinfo == NULL)
+ return -EINVAL;
+
+ if (!tdev.total_timvfs)
+ return -ENODEV;
+
+ if (octeontx_ssovf_info(&info) < 0)
+ return -EINVAL;
+
+ for (i = 0; i < tdev.total_timvfs; i++) {
+ if (info.domain != tdev.rings[i].domain) {
+ mbox_log_err("GRP error, vfid=%d/%d domain=%d/%d %p",
+ i, tdev.rings[i].vfid,
+ info.domain, tdev.rings[i].domain,
+ tdev.rings[i].bar0);
+ return -EINVAL;
+ }
+ }
+
+ tinfo->total_timvfs = tdev.total_timvfs;
+ tinfo->domain = info.domain;
+ return 0;
+}
+
+void*
+octeontx_timvf_bar(uint8_t id, uint8_t bar)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return NULL;
+
+	if (id >= tdev.total_timvfs)
+ return NULL;
+
+ switch (bar) {
+ case 0:
+ return tdev.rings[id].bar0;
+ case 4:
+ return tdev.rings[id].bar4;
+ default:
+ return NULL;
+ }
+}
+
+static int
+timvf_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ uint64_t val;
+ uint16_t vfid;
+ struct timvf_res *res;
+
+ RTE_SET_USED(pci_drv);
+
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ if (pci_dev->mem_resource[0].addr == NULL ||
+ pci_dev->mem_resource[4].addr == NULL) {
+ mbox_log_err("Empty bars %p %p",
+ pci_dev->mem_resource[0].addr,
+ pci_dev->mem_resource[4].addr);
+ return -ENODEV;
+ }
+
+ val = rte_read64((uint8_t *)pci_dev->mem_resource[0].addr + 0x100);
+ vfid = (val >> 23) & 0xff;
+ if (vfid >= TIM_MAX_RINGS) {
+ mbox_log_err("Invalid vfid(%d/%d)", vfid, TIM_MAX_RINGS);
+ return -EINVAL;
+ }
+
+ res = &tdev.rings[tdev.total_timvfs];
+ res->vfid = vfid;
+ res->bar0 = pci_dev->mem_resource[0].addr;
+ res->bar2 = pci_dev->mem_resource[2].addr;
+ res->bar4 = pci_dev->mem_resource[4].addr;
+ res->domain = (val >> 7) & 0xffff;
+ tdev.total_timvfs++;
+ rte_wmb();
+
+ mbox_log_dbg("Domain=%d VFid=%d bar0 %p total_timvfs=%d", res->domain,
+ res->vfid, pci_dev->mem_resource[0].addr,
+ tdev.total_timvfs);
+ return 0;
+}
+
+
+static const struct rte_pci_id pci_timvf_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVICE_ID_OCTEONTX_TIM_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_timvf = {
+ .id_table = pci_timvf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+ .probe = timvf_probe,
+ .remove = NULL,
+};
+
+RTE_PMD_REGISTER_PCI(octeontx_timvf, pci_timvf);
@@ -5,5 +5,8 @@ DPDK_17.11 {
octeontx_ssovf_bar;
octeontx_ssovf_mbox_send;
+ octeontx_timvf_info;
+ octeontx_timvf_bar;
+
local: *;
};
@@ -22,11 +22,14 @@
'SVendor': None, 'SDevice': None}
cavium_pkx = {'Class': '08', 'Vendor': '177d', 'Device': 'a0dd,a049',
'SVendor': None, 'SDevice': None}
+cavium_tim = {'Class': '08', 'Vendor': '177d', 'Device': 'a051',
+ 'SVendor': None, 'SDevice': None}
network_devices = [network_class, cavium_pkx]
crypto_devices = [encryption_class, intel_processor_class]
eventdev_devices = [cavium_sso]
mempool_devices = [cavium_fpa]
+eventtimer_devices = [cavium_tim]
# global dict ethernet devices present. Dictionary indexed by PCI address.
# Each device within this is itself a dictionary of device properties
@@ -565,6 +568,9 @@ def show_status():
if status_dev == "mempool" or status_dev == "all":
show_device_status(mempool_devices, "Mempool")
+ if status_dev == "event_timer" or status_dev == "all":
+ show_device_status(eventtimer_devices, "Event Timer")
+
def parse_args():
'''Parses the command-line arguments given by the user and takes the
appropriate action for each'''
@@ -638,6 +644,7 @@ def do_arg_actions():
get_device_details(crypto_devices)
get_device_details(eventdev_devices)
get_device_details(mempool_devices)
+ get_device_details(eventtimer_devices)
show_status()
@@ -650,6 +657,7 @@ def main():
get_device_details(crypto_devices)
get_device_details(eventdev_devices)
get_device_details(mempool_devices)
+ get_device_details(eventtimer_devices)
do_arg_actions()
if __name__ == "__main__":