[spp] [PATCH 1/3] doc: remove ivshmem reference from documents
Ferruh Yigit
ferruh.yigit at intel.com
Thu Feb 23 16:58:41 CET 2017
ivshmem support has been removed from DPDK as of v16.11, so drop the references to it from the documentation.
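For reference, the build flow the updated docs describe looks like the following sketch (the DPDK path is an assumption; any valid target string works in place of the gcc one):

```shell
# Sketch of the documented build steps with a generic native target
# replacing the removed ivshmem target. RTE_SDK path is illustrative.
export RTE_SDK=$HOME/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
echo "Would build with: make T=$RTE_TARGET install"
```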
Signed-off-by: Ferruh Yigit <ferruh.yigit at intel.com>
---
docs/setup_guide.md | 87 +++++------------------------------------------------
1 file changed, 7 insertions(+), 80 deletions(-)
diff --git a/docs/setup_guide.md b/docs/setup_guide.md
index 6ec47b7..0b35b47 100644
--- a/docs/setup_guide.md
+++ b/docs/setup_guide.md
@@ -4,8 +4,8 @@ Compilation
===========
Change to DPDK directory
Set RTE_SDK variable to current folder
-Set RTE_TARGET variable to "x86_64-ivshmem-linuxapp-*"
-Compile DPDK: "make T=x86_64-ivshmem-linuxapp-gcc install"
+Set RTE_TARGET variable to any valid target.
+Compile DPDK: "make T=x86_64-native-linuxapp-gcc install"
Change to SPP directory
Compile SPP: "make"
@@ -16,35 +16,21 @@ python spp.py -p 5555 -s 6666
Start spp_primary
=================
-sudo ./src/primary/src/primary/x86_64-ivshmem-linuxapp-gcc/spp_primary -c 0x02 -n 4 --socket-mem 512,512 --huge-dir=/dev/hugepages --proc-type=primary -- -p 0x03 -n 4 -s 192.168.122.1:5555
+sudo ./src/primary/src/primary/x86_64-native-linuxapp-gcc/spp_primary -c 0x02 -n 4 --socket-mem 512,512 --huge-dir=/dev/hugepages --proc-type=primary -- -p 0x03 -n 4 -s 192.168.122.1:5555
Start spp_nfv
=============
-sudo ./src/nfv/src/nfv/x86_64-ivshmem-linuxapp-gcc/spp_nfv -c 0x06 -n 4 --proc-type=secondary -- -n 1 -s 192.168.122.1:6666
-sudo ./src/nfv/src/nfv/x86_64-ivshmem-linuxapp-gcc/spp_nfv 0x0A -n 4 --proc-type=secondary -- -n 1 -s 192.168.122.1:6666
+sudo ./src/nfv/src/nfv/x86_64-native-linuxapp-gcc/spp_nfv -c 0x06 -n 4 --proc-type=secondary -- -n 1 -s 192.168.122.1:6666
+sudo ./src/nfv/src/nfv/x86_64-native-linuxapp-gcc/spp_nfv -c 0x0A -n 4 --proc-type=secondary -- -n 1 -s 192.168.122.1:6666
Start VM (QEMU)
===============
-[NOTE: Custom QEMU version required]
-
Common qemu command line:
sudo ./x86_64-softmmu/qemu-system-x86_64 -cpu host -enable-kvm -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -hda /home/dpdk/debian_wheezy_amd64_standard.qcow2 -m 2048 -smp cores=4,threads=1,sockets=1 -device e1000,netdev=net0,mac=DE:AD:BE:EF:00:01 -netdev tap,id=net0 -nographic -vnc :2
To start spp_vm "qemu-ifup" script required, please copy docs/qemu-ifup to host /etc/qemu-ifup
-Two types of VM interfaces supported:
-* ring based (ivshmem)
-* vhost interface
-
-* ring based (ivshmem)
-----------------------
- - This requires custom qemu
- - Needs ivshmem argument for qemu, refer to spp_primary logs to get the ivshmem metadata command line:
- APP: QEMU command line for config 'pp_ivshmem':
- -device ivshmem,size=2048M,shm=fd:/dev/hugepages/rtemap_0:0x0:0x40000000:/dev/zero:0x0:0x3fffc000:/var/run/.dpdk_ivshmem_metadata_pp_ivshmem:0x0:0x4000
-
- Insert into qemu command line:-
- sudo ./x86_64-softmmu/qemu-system-x86_64 -cpu host -enable-kvm -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -hda /home/dpdk/debian_wheezy_amd64_standard.qcow2 -m 2048 -smp cores=4,threads=1,sockets=1 -device e1000,netdev=net0,mac=DE:AD:BE:EF:00:01 -netdev tap,id=net0 -device ivshmem,size=2048M,shm=fd:/dev/hugepages/rtemap_0:0x0:0x40000000:/dev/zero:0x0:0x3fffc000:/var/run/.dpdk_ivshmem_metadata_pp_ivshmem:0x0:0x4000 -nographic
+The vhost interface is supported for communication between guest and host:
* vhost interface
-----------------
@@ -55,7 +41,7 @@ Two types of VM interfaces supported:
Start spp_vm (Inside the VM)
============================
-sudo ./src/vm/src/vm/x86_64-ivshmem-linuxapp-gcc/spp_vm -c 0x03 -n 4 --proc-type=primary -- -p 0x01 -n 1 -s 192.168.122.1:6666
+sudo ./src/vm/src/vm/x86_64-native-linuxapp-gcc/spp_vm -c 0x03 -n 4 --proc-type=primary -- -p 0x01 -n 1 -s 192.168.122.1:6666
@@ -220,65 +206,6 @@ spp > sec 0;forward
spp > sec 1;forward
-Test Setup 3: Dual NFV with VM through ring pmd
- __
- +----------------------+ |
- | guest | |
- | | |
- | +--------------+ | | guest
- | | spp_vm | | |
- | | 0 1 | | |
- +---+--------------+---+ __|
- ^ :
- | |
- : v
- +-+ +-+
- +-+ +-+
- ring 0 +-+ +-+ ring 1
- +-+ +-+
- ^ |
- | V __
- +----------+ +----------+ |
- | spp_nfv1 | | spp_nfv2 | |
- | 2 | | 3 | |
- +----------+ +----------+ |
- ^ : |
- | | |
- : v |
- +----+----------+-------------------------------------------------+ |
- | | primary | ^ : | |
- | +----------+ | : | |
- | : : | |
- | : | | | host
- | : v | |
- | +--------------+ +--------------+ | |
- | | phy port 0 | | phy port 1 | | |
- +------------------+--------------+------------+--------------+---+ __|
- ^ :
- | |
- : v
-
-Legend:-
-sec 0 = spp_nfv1
-sec 1 = spp_nfv2
-sec 2 = spp_vm
-
-
-Configuration for Uni directional L2fwd:-
-spp > sec 0;add ring 0
-spp > sec 0;add ring 1
-spp > sec 1;add ring 0
-spp > sec 1;add ring 1
-spp > sec 2;add ring 0
-spp > sec 2;add ring 1
-spp > sec 2;patch 0 1
-spp > sec 0;patch 0 2
-spp > sec 1;patch 3 1
-spp > sec 2;forward
-spp > sec 1;forward
-spp > sec 0;forward
-
-
Test Setup 4: Single NFV with VM through vhost pmd
__
--
2.9.3