5. BNX2X Poll Mode Driver
The BNX2X poll mode driver library (librte_pmd_bnx2x) implements support for the QLogic 578xx 10/20 Gbps family of adapters as well as their virtual functions (VF) in SR-IOV context. It is supported on standard Linux distributions such as Red Hat Enterprise Linux 7.x and SLES 12, and is compile-tested under FreeBSD.
More information can be found at QLogic Corporation’s Official Website.
5.1. Supported Features
BNX2X PMD has support for:
- Base L2 features
- Unicast/multicast filtering
- Promiscuous mode
- Port hardware statistics
- SR-IOV VF
5.2. Non-supported Features
The features not yet supported include:
- TSS (Transmit Side Scaling)
- RSS (Receive Side Scaling)
- LRO/TSO offload
- Checksum offload
- SR-IOV PF
- Rx/Tx scatter-gather
5.3. Co-existence considerations
- BCM578xx is a CNA and can have both NIC and storage personalities. However, coexistence with storage protocol drivers (cnic, bnx2fc and bnx2i) is not supported on the same adapter, so the storage personality has to be disabled on any adapter used by DPDK applications.
- In the SR-IOV case, the bnx2x PMD is bound to the SR-IOV VF device, while the Linux native kernel driver (bnx2x) remains attached to the SR-IOV PF.
5.4. Supported QLogic NICs
- Requires firmware version 7.13.11.0. It is included in most of the standard Linux distros. If it is not available visit QLogic Driver Download Center to get the required firmware.
5.6. Pre-Installation Configuration
5.6.1. Config File Options
The following options can be modified in the .config file. Please note that enabling debugging options may affect system performance.
- CONFIG_RTE_LIBRTE_BNX2X_PMD (default n)
  Toggle compilation of the bnx2x driver.
- CONFIG_RTE_LIBRTE_BNX2X_DEBUG (default n)
  Toggle display of generic debugging messages.
- CONFIG_RTE_LIBRTE_BNX2X_DEBUG_INIT (default n)
  Toggle display of initialization related messages.
- CONFIG_RTE_LIBRTE_BNX2X_DEBUG_TX (default n)
  Toggle display of transmit fast path run-time messages.
- CONFIG_RTE_LIBRTE_BNX2X_DEBUG_RX (default n)
  Toggle display of receive fast path run-time messages.
- CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC (default n)
  Toggle display of register reads and writes.
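For illustration, a sketch of the relevant .config lines, assuming the CONFIG_RTE_LIBRTE_BNX2X_* option names used by DPDK's common_base configuration; here the driver and generic debug output are enabled while the noisier fast-path toggles stay off:

```
CONFIG_RTE_LIBRTE_BNX2X_PMD=y
CONFIG_RTE_LIBRTE_BNX2X_DEBUG=y
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_INIT=n
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_TX=n
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_RX=n
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
```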
5.7. Driver compilation and testing
Refer to the document Compiling and Testing a PMD for a NIC for details.
5.8. SR-IOV: Prerequisites and sample Application Notes
This section provides instructions to configure SR-IOV with Linux OS.
Verify SR-IOV and ARI capabilities are enabled on the adapter using
lspci -s <slot> -vvv
[...]
Capabilities: [1b8 v1] Alternative Routing-ID Interpretation (ARI)
[...]
Capabilities: [1c0 v1] Single Root I/O Virtualization (SR-IOV)
[...]
Kernel driver in use: igb_uio
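The capability check above can be scripted; a minimal sketch (the helper name check_sriov_caps is illustrative) that scans lspci output supplied on stdin for both capability lines:

```shell
#!/bin/sh
# Sketch: read `lspci -s <slot> -vvv` output from stdin and succeed
# only if both the SR-IOV and ARI capability lines are present.
check_sriov_caps() {
    out=$(cat)
    echo "$out" | grep -q "Single Root I/O Virtualization (SR-IOV)" &&
    echo "$out" | grep -q "Alternative Routing-ID Interpretation (ARI)"
}
```

Typical usage would be `lspci -s 81:00.0 -vvv | check_sriov_caps && echo "SR-IOV ready"` (the slot is an example).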
Load the kernel module:
modprobe bnx2x
systemd-udevd: renamed network interface eth0 to ens5f0
systemd-udevd: renamed network interface eth1 to ens5f1
Bring up the PF ports:
ifconfig ens5f0 up
ifconfig ens5f1 up
Create VF device(s):
Echo the number of VFs to be created into the "sriov_numvfs" sysfs entry of the parent PF.
echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
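This step can be wrapped in a small guard that caps the request at the adapter's advertised limit; a hedged sketch (the helper name set_numvfs is illustrative, and the PF path is passed in as an argument):

```shell
#!/bin/sh
# Sketch: enable N VFs on a PF, clamped to the adapter's
# sriov_totalvfs limit so the sysfs write cannot over-request.
set_numvfs() {
    pf_path=$1
    want=$2
    total=$(cat "$pf_path/sriov_totalvfs")
    # Clamp the request to what the hardware supports.
    [ "$want" -gt "$total" ] && want=$total
    echo "$want" > "$pf_path/sriov_numvfs"
}
```

Usage on the example device above would be `set_numvfs /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0 2`.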
Assign VF MAC address:
Assign a MAC address to the VF using the iproute2 utility. The syntax is:
ip link set <PF iface> vf <VF id> mac <macaddr>
ip link set ens5f0 vf 0 mac 52:54:00:2f:9d:e8
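When several VFs are created, MAC assignment can be scripted per VF index; a sketch that derives a locally administered address from the example prefix above (the helper name vf_mac is illustrative):

```shell
#!/bin/sh
# Sketch: build a deterministic MAC for VF index 0-255 by using the
# index as the last octet of the example 52:54:00:2f:9d:xx prefix.
vf_mac() {
    printf '52:54:00:2f:9d:%02x\n' "$1"
}
```

A loop such as `for i in 0 1; do ip link set ens5f0 vf "$i" mac "$(vf_mac "$i")"; done` would then cover both VFs created earlier.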
The VF devices may be passed through to the guest VM using virt-manager, virsh, etc. The bnx2x PMD should be used to bind the VF devices inside the guest VM, following the instructions outlined in the application notes below.
Follow the instructions in the document Compiling and Testing a PMD for a NIC to run testpmd.
[...]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:   probe driver: 14e4:168e rte_bnx2x_pmd
EAL:   PCI memory mapped at 0x7f14f6fe5000
EAL:   PCI memory mapped at 0x7f14f67e5000
EAL:   PCI memory mapped at 0x7f15fbd9b000
EAL: PCI device 0000:84:00.1 on NUMA socket 1
EAL:   probe driver: 14e4:168e rte_bnx2x_pmd
EAL:   PCI memory mapped at 0x7f14f5fe5000
EAL:   PCI memory mapped at 0x7f14f57e5000
EAL:   PCI memory mapped at 0x7f15fbd4f000
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: bnx2x_dev_tx_queue_setup(): fp req_bd=512, thresh=512, usable_bd=1020, total_bd=1024, tx_pages=4
PMD: bnx2x_dev_rx_queue_setup(): fp req_bd=128, thresh=0, usable_bd=510, total_bd=512, rx_pages=1, cq_pages=8
PMD: bnx2x_print_adapter_info():
[...]
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>