[dpdk-users] Unable to Bind Device in VirtualBox VM

Nick Allen nick at nickallen.org
Wed Feb 10 14:45:20 CET 2016


Problem: I am unable to bind a virtual NIC with DPDK 2.2.0 inside an
Ubuntu 14.04 guest running in VirtualBox 5.0.14 on OS X 10.11.3 on a
2015 MacBook Pro.  Here is the error that I am seeing.

  $ ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:11.0
  Error: bind failed for 0000:00:11.0 - Cannot bind to driver vfio-pci

  $ dmesg | tail -1
  [  413.613076] vfio-pci: probe of 0000:00:11.0 failed with error -22

Note: Based on my research, this error is most often caused by not
adding 'iommu=pt intel_iommu=on' to the kernel boot options.  As shown
below, I believe that is not the problem in my case.
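
For what it's worth, error -22 is -EINVAL, which, as far as I can
tell, is what vfio-pci returns when a device has no IOMMU group.  A
quick way to check whether the guest kernel has a working IOMMU at all
(these are standard sysfs locations, nothing DPDK-specific):

  # non-empty only if the kernel actually set up an IOMMU
  ls /sys/kernel/iommu_groups/

  # any kernel messages about the Intel IOMMU
  dmesg | grep -i -e DMAR -e IOMMU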

Background: I have been following the instructions provided at
http://plvision.eu/blog/deploying-intel-dpdk-in-oracle-virtualbox/.
First, I created the VM in VirtualBox and added two bridged virtual
NICs of type 'Intel PRO/1000 MT Server (82545EM)'.  I also set
'Promiscuous Mode: Allow All' on each.
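
For reference, I believe the same NIC settings can be applied from the
command line along these lines ('VM name', the adapter numbers, and
the host interface 'en0' are placeholders for my setup):

  VBoxManage modifyvm "VM name" --nic2 bridged --bridgeadapter2 en0 \
    --nictype2 82545EM --nicpromisc2 allow-all
  VBoxManage modifyvm "VM name" --nic3 bridged --bridgeadapter3 en0 \
    --nictype3 82545EM --nicpromisc3 allow-all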

I then ran the following to expose SSE4.1/SSE4.2 to the guest, and
manually installed Ubuntu 14.04.

  VBoxManage setextradata "VM name" VBoxInternal/CPUM/SSE4.1 1
  VBoxManage setextradata "VM name" VBoxInternal/CPUM/SSE4.2 1
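
If it is useful, here is how I would confirm from inside the guest
that those flags are actually exposed:

  grep -o -e sse4_1 -e sse4_2 /proc/cpuinfo | sort -u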

After installing all of the dependencies, I ran the following commands
to build DPDK on the VM.

  curl http://dpdk.org/browse/dpdk/snapshot/dpdk-2.2.0.tar.gz | tar -xvz
  cd dpdk-2.2.0
  export DPDK_DIR=`pwd`
  sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp
  make install T=x86_64-ivshmem-linuxapp-gcc
  cd x86_64-ivshmem-linuxapp-gcc
  EXTRA_CFLAGS="-g -Ofast" make -j10
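
As a sanity check on the build (assuming the standard DPDK 2.2 output
layout), the kernel modules and the test application should exist
afterwards:

  ls $DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/kmod      # igb_uio.ko, rte_kni.ko
  ls $DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/app/testpmd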

I appended the following to '/etc/default/grub'.  I then ran
'update-grub2 && reboot'.

  GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on"
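
One thing I am not sure about: 1G hugepages require the 'pdpe1gb' CPU
flag, which a VirtualBox guest may not expose.  I would check it with:

  grep -o pdpe1gb /proc/cpuinfo | sort -u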

After the reboot I can see that the options took effect.

  $ cat /proc/cmdline
  BOOT_IMAGE=/vmlinuz-3.19.0-25-generic root=/dev/mapper/dpdk1--vg-root ro default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on
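
Since the command line taking effect does not necessarily mean the
pages were actually reserved, that can be confirmed separately:

  grep Huge /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages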

I then used the 'tools/setup.sh' script to do the following:

  [18] Insert VFIO module
  [20] Setup hugepage mappings for non-NUMA systems
  [25] Setup VFIO permissions
  [24] Bind Ethernet device to VFIO module

I am not able to bind the interface.  I get the same result when I
run the equivalent commands manually (sketched below) instead of
using the 'tools/setup.sh' script.
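
For completeness, the manual commands I mean are roughly the following
(my reading of what those setup.sh menu options do):

  # [18] insert the VFIO module
  sudo modprobe vfio-pci

  # [20] reserve the hugepages and mount hugetlbfs
  echo 16 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  sudo mkdir -p /mnt/huge
  sudo mount -t hugetlbfs nodev /mnt/huge

  # [25] VFIO permissions, as the script sets them
  sudo chmod a+x /dev/vfio
  sudo chmod 0666 /dev/vfio/*

  # [24] bind the second NIC
  sudo ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:11.0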

  root@dpdk1:/home/vagrant/dpdk-2.2.0# ./tools/dpdk_nic_bind.py --status
    Network devices using DPDK-compatible driver
    ============================================
    <none>

    Network devices using kernel driver
    ===================================
    0000:00:08.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=vfio-pci *Active*

    Other network devices
    =====================
    0000:00:11.0 '82545EM Gigabit Ethernet Controller (Copper)' unused=vfio-pci

  root@dpdk1:/home/vagrant/dpdk-2.2.0# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:11.0
    Error: bind failed for 0000:00:11.0 - Cannot bind to driver vfio-pci

  root@dpdk1:/home/vagrant/dpdk-2.2.0# dmesg | tail -1
    [ 2084.997657] vfio-pci: probe of 0000:00:11.0 failed with error -22
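
One more data point I can provide if useful: whether the device has an
IOMMU group at all (my understanding is that the -EINVAL above is what
vfio-pci returns when this symlink is missing):

  ls -l /sys/bus/pci/devices/0000:00:11.0/iommu_group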

What am I doing wrong here?  What else can I dig into?

