[dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM

Mussar, Gary gmussar at ciena.com
Fri Feb 27 15:17:18 CET 2015


This may be a long shot, but I have noticed that when dissimilar device types are used to launch the VM, those devices might not be bound to the same eth devices inside the VM. Are you sure that ens3 is the device you are expecting to use to talk to the host?
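
For reference, a quick way to confirm which PCI device an interface maps to inside the guest is a plain sysfs/ethtool check (nothing DPDK-specific; this assumes the interface name is ens3, as in your --status output):

  # PCI bus address backing the interface
  readlink /sys/class/net/ens3/device
  # Equivalent check via ethtool (see the bus-info field)
  ethtool -i ens3

Whichever interface you rely on to reach the VM should stay attached to its kernel driver.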

Gary

-----Original Message-----
From: Dpdk-ovs [mailto:dpdk-ovs-bounces at lists.01.org] On Behalf Of Srinivasreddy R
Sent: Friday, February 27, 2015 06:00
To: Bruce Richardson
Cc: dev at dpdk.org; dpdk-ovs at lists.01.org
Subject: Re: [Dpdk-ovs] [dpdk-dev] problem in binding interfaces of virtio-pci on the VM

hi,

Please find the output on the VM.

./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio *Active*
0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio

Other network devices
=====================
<none>


I am trying to bind the "Virtio network device" interfaces at PCI addresses 00:04.0 and 00:05.0.
When I run the command below, I hit the issue.
./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
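
As a generic sanity check (plain lspci, independent of the DPDK scripts), the driver actually in use for a device can be shown with, for example:

  # "Kernel driver in use" line shows what 00:04.0 is currently bound to
  lspci -ks 00:04.0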



When QEMU is not able to allocate memory for the VM from /dev/hugepages, it gives the error message below: "Cannot allocate memory".
In that case I am able to bind the interfaces to igb_uio.
Does this give any hint about what I am doing wrong?

Do I need to handle anything on the host when I bind to igb_uio on the guest for usvhost?
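
For the hugepage side of this, the host's pool can be checked and, if needed, enlarged before launching QEMU (standard hugetlbfs handling; the 2 MB page size and the count of 2048 pages below are example values chosen to match the -m 4096M used below, not something taken from your setup):

  # On the host: check the current hugepage pool
  grep Huge /proc/meminfo
  # Reserve 4 GB worth of 2 MB hugepages (example value)
  echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  # Make sure hugetlbfs is mounted where QEMU expects it
  mount -t hugetlbfs nodev /dev/hugepages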


 ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c  -hda /home/utils/images/vm1.img  -m 4096M -smp 3 --enable-kvm -name 'VM1'
-nographic -vnc :1 -pidfile /tmp/vm1.pid -drive file=fat:rw:/tmp/qemu_share,snapshot=off -monitor unix:/tmp/vm1monitor,server,nowait  -net none -no-reboot -mem-path /dev/hugepages -mem-prealloc -netdev type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
-device
virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-net nic -net tap,ifname=tap6,script=no
vvfat /tmp/qemu_share chs 1024,16,63
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio
qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio
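
Since the VM is started with a monitor socket (-monitor unix:/tmp/vm1monitor,server,nowait), the guest state can still be inspected from the host even when the console stops responding, e.g. with socat (socat is just one choice of UNIX-socket client, not something from the setup above):

  socat - UNIX-CONNECT:/tmp/vm1monitor
  # then, at the (qemu) prompt:
  #   info status
  #   info network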




thanks,
srinivas.



On Fri, Feb 27, 2015 at 3:36 PM, Bruce Richardson <bruce.richardson at intel.com> wrote:

> On Thu, Feb 26, 2015 at 10:46:58PM +0530, Srinivasreddy R wrote:
> > hi Bruce,
> > Thank you for your response.
> > I am accessing my VM via vncviewer, so ssh doesn't come into the picture.
> > Is there any way to find the root cause of my problem? Does DPDK
> > store any logs while binding interfaces to igb_uio?
> > I have looked at /var/log/messages but could not find any clue.
> >
> > The moment I gave the command below, my VM got stuck and stopped
> > responding until I forcefully killed QEMU and relaunched.
> > ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> >
>
> Does VNC not also connect using a network port? What is the output of 
> ./dpdk_nic_bind.py --status before you run this command?
>
> /Bruce
>
> >
> >
> > thanks,
> > srinivas.
> >
> >
> >
> > On Thu, Feb 26, 2015 at 10:30 PM, Bruce Richardson <bruce.richardson at intel.com> wrote:
> >
> > > On Thu, Feb 26, 2015 at 10:08:59PM +0530, Srinivasreddy R wrote:
> > > > hi Mike,
> > > > Thanks for your detailed explanation of your example. I usually do
> > > > something similar to you, and I am familiar with working with DPDK
> > > > applications.
> > > > My problem is:
> > > > 1. I have written code for host-to-guest communication [taken from
> > > > usvhost, which is developed in the OVDK vswitch].
> > > > 2. I launched the VM with two interfaces.
> > > > 3. I am able to send and receive traffic to and from the guest to the
> > > > host on these interfaces.
> > > > 4. When I try to bind these interfaces to igb_uio to run a DPDK
> > > > application, I am not able to access my instance. It gets stuck and
> > > > stops responding; I need to hard reboot the VM.
> > >
> > > Are you sure you are not trying to access the VM via one of the
> > > interfaces now bound to igb_uio? If you bind the interface you use
> > > for ssh to igb_uio, you won't be able to ssh to that VM any more.
> > >
> > > /Bruce
> > >
> > > >
> > > > My question is:
> > > > Surely I might have done something wrong in the code, as my VM is no
> > > > longer accessible when I try to bind the interfaces to igb_uio, and I
> > > > am not able to debug the issue.
> > > > Could someone please help me figure out the issue? I don't find
> > > > anything in /var/log/messages after relaunching the instance.
> > > >
> > > >
> > > > thanks,
> > > > srinivas.
> > > >
> > > >
> > > >
> > > > On Thu, Feb 26, 2015 at 8:42 PM, Polehn, Mike A <mike.a.polehn at intel.com> wrote:
> > > >
> > > > > In this example, the control network 00:03.0 remains unbound from the
> > > > > UIO driver and stays attached to the Linux device driver (ssh access
> > > > > with putty); just the target interfaces are bound.
> > > > > Below, the output shows all 3 interfaces, with the two target
> > > > > interfaces bound to the UIO driver; bound interfaces are not usable
> > > > > until a task uses the UIO driver.
> > > > >
> > > > > [root at F21vm l3fwd-vf]# lspci -nn
> > > > > 00:00.0 Host bridge [0600]: Intel Corporation 440FX - 82441FX PMC [Natoma] [8086:1237] (rev 02)
> > > > > 00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
> > > > > 00:01.1 IDE interface [0101]: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] [8086:7010]
> > > > > 00:01.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 03)
> > > > > 00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8]
> > > > > 00:03.0 Ethernet controller [0200]: Red Hat, Inc Virtio network device [1af4:1000]
> > > > > 00:04.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
> > > > > 00:05.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
> > > > >
> > > > > [root at F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
> > > > > [root at F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0
> > > > > [root at F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --status
> > > > >
> > > > > Network devices using DPDK-compatible driver
> > > > > ============================================
> > > > > 0000:00:04.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
> > > > > 0000:00:05.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
> > > > >
> > > > > Network devices using kernel driver
> > > > > ===================================
> > > > > 0000:00:03.0 'Virtio network device' if= drv=virtio-pci unused=virtio_pci,igb_uio
> > > > >
> > > > > Other network devices
> > > > > =====================
> > > > > <none>
> > > > >
> > > > > -----Original Message-----
> > > > > From: Dpdk-ovs [mailto:dpdk-ovs-bounces at lists.01.org] On Behalf Of Srinivasreddy R
> > > > > Sent: Thursday, February 26, 2015 6:11 AM
> > > > > To: dev at dpdk.org; dpdk-ovs at lists.01.org
> > > > > Subject: [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
> > > > >
> > > > > hi,
> > > > > I have written a sample program for usvhost supported by OVDK.
> > > > >
> > > > > I have initialized the VM using the command below.
> > > > > On the VM:
> > > > >
> > > > > I am able to see two interfaces, and they work fine with traffic in
> > > > > raw socket mode.
> > > > > My problem is that when I bind the interfaces to the PMD driver
> > > > > [igb_uio], my virtual machine hangs and I am not able to access it
> > > > > further.
> > > > > Now my question is: what may be the reason for this behavior, and how
> > > > > can I debug the root cause?
> > > > > Please help in finding out the problem.
> > > > >
> > > > >
> > > > >
> > > > >  ./tools/dpdk_nic_bind.py --status
> > > > >
> > > > > Network devices using DPDK-compatible driver 
> > > > > ============================================
> > > > > <none>
> > > > >
> > > > > Network devices using kernel driver 
> > > > > ===================================
> > > > > 0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio *Active*
> > > > > 0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> > > > > 0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> > > > >
> > > > > Other network devices
> > > > > =====================
> > > > > <none>
> > > > >
> > > > >
> > > > > ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> > > > >
> > > > >
> > > > >
> > > > > ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c -hda
> > > > > /home/utils/images/vm1.img -m 2048M -smp 3 --enable-kvm -name 'VM1'
> > > > > -nographic -vnc :1 -pidfile /tmp/vm1.pid -drive
> > > > > file=fat:rw:/tmp/qemu_share,snapshot=off -monitor
> > > > > unix:/tmp/vm1monitor,server,nowait -net none -no-reboot
> > > > > -mem-path /dev/hugepages -mem-prealloc -netdev
> > > > > type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device
> > > > > virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > > > > -netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
> > > > > -device
> > > > > virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > ----------
> > > > > thanks
> > > > > srinivas.
> > > > > _______________________________________________
> > > > > Dpdk-ovs mailing list
> > > > > Dpdk-ovs at lists.01.org
> > > > > https://lists.01.org/mailman/listinfo/dpdk-ovs
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > thanks
> > > > srinivas.
> > >
> >
> >
> >
> > --
> > thanks
> > srinivas.
>



--
thanks
srinivas.
_______________________________________________
Dpdk-ovs mailing list
Dpdk-ovs at lists.01.org
https://lists.01.org/mailman/listinfo/dpdk-ovs

