[dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8

Srinivasreddy R srinivasreddy4390 at gmail.com
Thu Jan 29 17:48:12 CET 2015


Hi,

I am using dpdk-1.8.0.

I am trying to run the vhost example. I followed the sample application user
guide at the link below.

http://www.dpdk.org/doc/guides/sample_app_ug/vhost.html



What may be the reason for the errors below? Maybe I am missing something.



I am facing the following problem while running:

VHOST_CONFIG: (0) Failed to find memory file for pid 5235

file_ram_alloc: can't mmap RAM pages: Cannot allocate memory

qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio
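My guess (not verified) is that this is a hugepage setup issue: the EAL log
below says "512 hugepages of size 2097152 reserved, but no mounted hugetlbfs
found for that size", and qemu then fails to mmap guest memory from
-mem-path /dev/hugepages. Is something along the following lines what the
sample app guide expects? The /mnt/huge-2M path is just my own example:

# check what is currently reserved and mounted
grep Huge /proc/meminfo
mount | grep hugetlbfs

# mount a hugetlbfs for the 2 MB pages and reserve some pages on it,
# then point qemu's -mem-path at that mount
mkdir -p /mnt/huge-2M
mount -t hugetlbfs nodev /mnt/huge-2M -o pagesize=2M
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages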





Vhost switch app:



/home/utils/dpdk-1.8.0/examples/vhost# ./build/app/vhost-switch -c f -n 4
-- -p 0x1    --dev-basename usvhost-1  --stats 2

EAL: Detected lcore 0 as core 0 on socket 0

EAL: Detected lcore 1 as core 1 on socket 0

EAL: Detected lcore 2 as core 2 on socket 0

EAL: Detected lcore 3 as core 3 on socket 0

EAL: Detected lcore 4 as core 0 on socket 0

EAL: Detected lcore 5 as core 1 on socket 0

EAL: Detected lcore 6 as core 2 on socket 0

EAL: Detected lcore 7 as core 3 on socket 0

EAL: Support maximum 128 logical core(s) by configuration.

EAL: Detected 8 lcore(s)

EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size

EAL:   cannot open VFIO container, error 2 (No such file or directory)

EAL: VFIO support could not be initialized

EAL: Setting up memory...

EAL: Ask a virtual area of 0x200000000 bytes

EAL: Virtual area found at 0x7f89c0000000 (size = 0x200000000)

EAL: Requesting 8 pages of size 1024MB from socket 0

EAL: TSC frequency is ~3092841 KHz

EAL: Master core 0 is ready (tid=f500d880)

PMD: ENICPMD trace: rte_enic_pmd_init

EAL: Core 3 is ready (tid=f26e5700)

EAL: Core 2 is ready (tid=f2ee6700)

EAL: Core 1 is ready (tid=f36e7700)

EAL: PCI device 0000:01:00.0 on NUMA socket -1

EAL:   probe driver: 8086:1521 rte_igb_pmd

EAL:   PCI memory mapped at 0x7f8bc0000000

EAL:   PCI memory mapped at 0x7f8bc0100000

PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521

EAL: PCI device 0000:01:00.1 on NUMA socket -1

EAL:   probe driver: 8086:1521 rte_igb_pmd

EAL:   0000:01:00.1 not managed by UIO driver, skipping

EAL: PCI device 0000:03:00.0 on NUMA socket -1

EAL:   probe driver: 8086:10d3 rte_em_pmd

EAL:   0000:03:00.0 not managed by UIO driver, skipping

EAL: PCI device 0000:04:00.0 on NUMA socket -1

EAL:   probe driver: 8086:10d3 rte_em_pmd

EAL:   0000:04:00.0 not managed by UIO driver, skipping

pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af7e00
hw_ring=0x7f8a0bbc2000 dma_addr=0x24bbc2000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af5d00
hw_ring=0x7f8a0bbd2000 dma_addr=0x24bbd2000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af3c00
hw_ring=0x7f8a0bbe2000 dma_addr=0x24bbe2000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af1b00
hw_ring=0x7f8a0bbf2000 dma_addr=0x24bbf2000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0aefa00
hw_ring=0x7f8a0bc02000 dma_addr=0x24bc02000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0aed900
hw_ring=0x7f8a0bc12000 dma_addr=0x24bc12000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0aeb800
hw_ring=0x7f8a0bc22000 dma_addr=0x24bc22000

PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0ae9700
hw_ring=0x7f8a0bc32000 dma_addr=0x24bc32000

PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.

PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae7600
hw_ring=0x7f8a0bc42000 dma_addr=0x24bc42000

PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.

PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae5500
hw_ring=0x7f8a0bc52000 dma_addr=0x24bc52000

PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.

PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae3400
hw_ring=0x7f8a0bc62000 dma_addr=0x24bc62000

PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.

PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae1300
hw_ring=0x7f8a0bc72000 dma_addr=0x24bc72000

PMD: eth_igb_start(): <<

VHOST_PORT: Max virtio devices supported: 8

VHOST_PORT: Port 0 MAC: 2c 53 4a 00 28 68

VHOST_DATA: Procesing on Core 1 started

VHOST_DATA: Procesing on Core 2 started

VHOST_DATA: Procesing on Core 3 started





Device statistics ====================================

======================================================

VHOST_CONFIG: (0) Device configuration started

Device statistics ====================================

======================================================

VHOST_CONFIG: (0) Failed to find memory file for pid 5235



Device statistics ====================================

======================================================
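In case it matters, I could also point the EAL explicitly at the hugepage
mount I want vhost-switch to use, so that it does not grab the pages qemu
needs. This is my understanding of the --huge-dir EAL option (untested here,
and /mnt/huge-1G is only a placeholder for my 1 GB mount):

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge-1G -- -p 0x1 --dev-basename usvhost-1 --stats 2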











Qemu:



./qemu-wrap.py -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu host -smp
2,sockets=2,cores=1,threads=1  -netdev tap,id=hostnet1,vhost=on -device
virtio-net-pci,netdev=hostnet1,id=net1  -hda /home/utils/images/vm1.img  -m
2048  -vnc 0.0.0.0:2   -net nic -net tap,ifname=tap3,script=no -mem-path
/dev/hugepages -mem-prealloc

W: /etc/qemu-ifup: no bridge for guest interface found

file_ram_alloc: can't mmap RAM pages: Cannot allocate memory

qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio
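Before starting qemu I plan to check whether any pages of the size qemu needs
are still free after vhost-switch has taken its 1 GB pages (assuming
/proc/meminfo and /proc/mounts are the right places to look):

grep -i huge /proc/meminfo       # HugePages_Free and Hugepagesize just before launching qemu
grep huge /proc/mounts           # which page size /dev/hugepages is actually mounted with
ls -l /dev/hugepages             # qemu's file_ram_alloc should create its backing file here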






--------
thanks
srinivas.

