[dpdk-dev] [PATCH v6 7/7] nfp: adding nic guide

Alejandro.Lucero alejandro.lucero at netronome.com
Thu Nov 5 11:43:21 CET 2015


From: "Alejandro.Lucero" <alejandro.lucero at netronome.com>

Signed-off-by: Alejandro.Lucero <alejandro.lucero at netronome.com>
Signed-off-by: Rolf.Neugebauer <rolf.neugebauer at netronome.com>
---
 MAINTAINERS               |    1 +
 doc/guides/nics/index.rst |    1 +
 doc/guides/nics/nfp.rst   |  189 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 191 insertions(+)
 create mode 100644 doc/guides/nics/nfp.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 72abbb2..3129cd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -263,6 +263,7 @@ F: doc/guides/nics/mlx5.rst
 Netronome nfp
 M: Alejandro Lucero <alejandro.lucero at netronome.com>
 F: drivers/net/nfp/
+F: doc/guides/nics/nfp.rst
 
 RedHat virtio
 M: Huawei Xie <huawei.xie at intel.com>
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 2d4936d..1a7bffe 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -46,6 +46,7 @@ Network Interface Controller Drivers
     intel_vf
     mlx4
     mlx5
+    nfp
     virtio
     vmxnet3
     pcap_ring
diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
new file mode 100644
index 0000000..bb2afda
--- /dev/null
+++ b/doc/guides/nics/nfp.rst
@@ -0,0 +1,189 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Netronome Systems, Inc. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Netronome Systems, Inc. nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+NFP poll mode driver library
+============================
+
+Netronome's sixth generation of flow processors pack 216 programmable
+cores and over 100 hardware accelerators that uniquely combine packet,
+flow, security and content processing in a single device that scales
+up to 400 Gbps.
+
+This document explains how to use DPDK with the Netronome Poll Mode
+Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
+(NFP-6xxx).
+
+Currently the driver supports virtual functions (VFs) only.
+
+Dependencies
+------------
+
+Before using Netronome's DPDK PMD, some NFP-6xxx configuration unrelated
+to DPDK itself is required. The system requires installation of
+**Netronome's BSP (Board Support Package)**, which includes Linux
+drivers, programs and libraries.
+
+If you have an NFP-6xxx device you should already have the code and
+documentation for doing this configuration. Contact
+**support at netronome.com** to obtain the latest available firmware.
+
+The NFP Linux kernel drivers (including the required PF driver for the
+NFP) are available on Github at
+**https://github.com/Netronome/nfp-drv-kmods** along with build
+instructions.
+
+Using Netronome's NFP PMD requires the Netronome BSP module to be
+loaded.
+
+Building the software
+---------------------
+
+Netronome's PMD code is provided in the **drivers/net/nfp** directory.
+This PMD is listed in the DPDK **common_linuxapp** configuration file,
+but it is not enabled by default. If it is enabled without the BSP
+installed on the system, the compilation will fail.
+
+To enable the PMD, modify the **common_linuxapp** file to set:
+
+- **CONFIG_RTE_LIBRTE_NFP_PMD=y**
+
+Once DPDK is built, all the DPDK apps and examples include support for
+the NFP PMD.
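+
+As a sketch, assuming the default **common_linuxapp** file and the
+x86_64-native-linuxapp-gcc target (adjust both to your environment),
+enabling the PMD and building DPDK could look like:
+
+.. code-block:: console
+
+   sed -i 's/CONFIG_RTE_LIBRTE_NFP_PMD=n/CONFIG_RTE_LIBRTE_NFP_PMD=y/' config/common_linuxapp
+   make install T=x86_64-native-linuxapp-gcc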
+
+System configuration
+--------------------
+
+Using the NFP PMD is no different from using other PMDs. The usual steps are:
+
+#. **Configure hugepages:** All major Linux distributions have the hugepages
+   functionality enabled by default, although the system normally uses it only
+   through transparent hugepages. For DPDK, some hugepages need to be
+   created/reserved explicitly through the hugetlbfs file system.
+   First the virtual file system needs to be mounted:
+
+   .. code-block:: console
+
+      mount -t hugetlbfs none /mnt/hugetlbfs
+
+   The command uses the common mount point for this file system; the mount
+   point directory needs to be created first if it does not exist.
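+
+   To make the mount persistent across reboots, an entry may be added to
+   ``/etc/fstab`` (a sketch, assuming the same mount point as above):
+
+   .. code-block:: console
+
+      none /mnt/hugetlbfs hugetlbfs defaults 0 0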
+
+   Configuring hugepages is performed via sysfs:
+
+   .. code-block:: console
+
+      /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+   This sysfs file is used to specify the number of hugepages to reserve.
+   For example:
+
+   .. code-block:: console
+
+      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+   This will reserve 2GB of memory using 1024 2MB hugepages. The file may be
+   read to see if the operation was performed correctly:
+
+   .. code-block:: console
+
+      cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+   The number of unused hugepages may also be inspected. Before executing
+   the DPDK app it should match the value of nr_hugepages, indicating that
+   none of the reserved pages are in use yet:
+
+   .. code-block:: console
+
+      cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
+
+   The hugepages reservation should be performed at system initialisation and
+   it is usual to use a kernel parameter for configuration. If the reservation
+   is attempted on a busy system it will likely fail. Reserving memory for
+   hugepages may be done by adding the following to the grub kernel command line:
+
+   .. code-block:: console
+
+      default_hugepagesz=2M hugepagesz=2M hugepages=1024
+
+   This will reserve 2GB of memory using 2MB hugepages.
+
+   Finally, for a NUMA system the allocation needs to be made on the correct
+   NUMA node. In a DPDK app there is a master core which will (usually) perform
+   memory allocation. It is important that some of the hugepages are reserved
+   on the NUMA memory node where the network device is attached. This is because
+   of a restriction in DPDK by which TX and RX descriptor rings must be created
+   on the master core.
+
+   Per-node allocation of hugepages may be inspected and controlled using sysfs.
+   For example:
+
+   .. code-block:: console
+
+      cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+
+   For a NUMA system there will be a specific hugepage directory per node,
+   allowing control of hugepage reservation. A common problem occurs when
+   hugepage reservation is attempted after the system has been running for
+   some time: configuration through the global sysfs interface will appear
+   to succeed, but the per-node allocations may be unsatisfactory.
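+
+   For example, to reserve 512 2MB hugepages explicitly on node 0 (a sketch;
+   pick the node where the network device is attached):
+
+   .. code-block:: console
+
+      echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages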
+
+   The number of hugepages that need to be reserved depends on how the app uses
+   TX and RX descriptors, and packet mbufs.
+
+#. **Enable SR-IOV on the NFP-6xxx device:** The current NFP PMD works with
+   Virtual Functions (VFs) on an NFP device. Make sure that one of the Physical
+   Function (PF) drivers from the above Github repository is installed and
+   loaded.
+
+   Virtual Functions need to be enabled before they can be used with the PMD.
+   Before enabling the VFs it is useful to obtain information about the
+   current NFP PCI device detected by the system:
+
+   .. code-block:: console
+
+      lspci -d19ee:
+
+   Now, for example, configure two virtual functions on an NFP-6xxx device
+   whose PCI system identity is "0000:03:00.0":
+
+   .. code-block:: console
+
+      echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
+
+   The result of this command may be shown using lspci again:
+
+   .. code-block:: console
+
+      lspci -d19ee: -k
+
+   Two new PCI devices should appear in the output of the above command. The
+   ``-k`` option shows the device driver, if any, that the devices are bound
+   to. Depending on the modules loaded, at this point the new PCI devices may
+   be bound to the nfp_netvf driver.
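+
+   The created VFs may also be listed through the standard SR-IOV sysfs
+   entries, using the same example PCI address as above:
+
+   .. code-block:: console
+
+      ls -l /sys/bus/pci/devices/0000:03:00.0/virtfn*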
-- 
1.7.9.5
