[dpdk-users] Slow DPDK startup with many 1G hugepages

Tan, Jianfeng jianfeng.tan at intel.com
Fri Jun 2 03:40:16 CEST 2017



> -----Original Message-----
> From: Marco Varlese [mailto:marco.varlese at suse.com]
> Sent: Thursday, June 1, 2017 6:12 PM
> To: Tan, Jianfeng; Imre Pinter; users at dpdk.org
> Cc: Gabor Halász; Péter Suskovics
> Subject: Re: [dpdk-users] Slow DPDK startup with many 1G hugepages
> 
> On Thu, 2017-06-01 at 08:50 +0000, Tan, Jianfeng wrote:
> >
> > >
> > > -----Original Message-----
> > > From: users [mailto:users-bounces at dpdk.org] On Behalf Of Imre Pinter
> > > Sent: Thursday, June 1, 2017 3:55 PM
> > > To: users at dpdk.org
> > > Cc: Gabor Halász; Péter Suskovics
> > > Subject: [dpdk-users] Slow DPDK startup with many 1G hugepages
> > >
> > > Hi,
> > >
> > > We experience slow startup times in DPDK-OVS when backing its memory
> > > with 1G hugepages instead of 2M hugepages.
> > > Currently we map 2M hugepages as the memory backend for DPDK OVS.
> > > In the future we would like to allocate this memory from the 1G
> > > hugepage pool. Currently our deployments have a significant amount of
> > > 1G hugepages allocated (min. 54G) for VMs and only 2G of memory on 2M
> > > hugepages.
> > >
> > > Typical setup for 2M hugepages:
> > > GRUB:
> > > hugepagesz=2M hugepages=1024 hugepagesz=1G hugepages=54
> > > default_hugepagesz=1G
> > >
> > > $ grep hugetlbfs /proc/mounts
> > > nodev /mnt/huge_ovs_2M hugetlbfs rw,relatime,pagesize=2M 0 0
> > > nodev /mnt/huge_qemu_1G hugetlbfs rw,relatime,pagesize=1G 0 0
> > >
> > > Typical setup for 1GB hugepages:
> > > GRUB:
> > > hugepagesz=1G hugepages=56 default_hugepagesz=1G
> > >
> > > $ grep hugetlbfs /proc/mounts
> > > nodev /mnt/huge_qemu_1G hugetlbfs rw,relatime,pagesize=1G 0 0
> > >
> > > DPDK OVS startup times based on the ovs-vswitchd.log logs:
> > >
> > >   *   2M (2G memory allocated) - startup time ~3 sec:
> > >
> > > 2017-05-03T08:13:50.177Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x1
> > > --huge-dir /mnt/huge_ovs_2M --socket-mem 1024,1024
> > >
> > > 2017-05-03T08:13:50.708Z|00010|ofproto_dpif|INFO|netdev@ovs-netdev:
> > > Datapath supports recirculation
> > >
> > >   *   1G (56G memory allocated) - startup time ~13 sec:
> > >
> > > 2017-05-03T08:09:22.114Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x1
> > > --huge-dir /mnt/huge_qemu_1G --socket-mem 1024,1024
> > > 2017-05-03T08:09:32.706Z|00010|ofproto_dpif|INFO|netdev@ovs-netdev:
> > > Datapath supports recirculation
> > >
> > > I used DPDK 16.11 for OVS and testpmd, and tested on Ubuntu 14.04 with
> > > kernels 3.13.0-117-generic and 4.4.0-78-generic.
> >
> >
> > You can shorten the startup time like this:
> >
> > (1) Mount 1 GB hugepages into two directories.
> > nodev /mnt/huge_ovs_1G hugetlbfs rw,relatime,pagesize=1G,size=<how much you want to use in OVS> 0 0
> > nodev /mnt/huge_qemu_1G hugetlbfs rw,relatime,pagesize=1G 0 0
> I understood (reading Imre) that this does not really work because of the
> non-deterministic allocation of hugepages in a NUMA architecture, e.g. we
> would potentially end up using hugepages allocated on different nodes even
> when accessing the OVS directory.
> Did I understand this correctly?
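
On the NUMA point: before starting OVS you can check how the pre-allocated
1G pages are spread across the nodes, e.g. (assuming the standard sysfs
layout, where the 1G page size shows up as hugepages-1048576kB):

$ cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
$ cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/free_hugepages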

Did you try step 2? Sergio has also given more options in another email in this thread, for your reference.
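
As a rough sketch of how (1) and (2) fit together (the mount point name
follows the suggestion above, size=2G just matches "--socket-mem 1024,1024",
and the exact ovs-vswitchd options depend on your setup):

$ mkdir -p /mnt/huge_ovs_1G
$ mount -t hugetlbfs -o pagesize=1G,size=2G nodev /mnt/huge_ovs_1G
$ numactl --interleave=all ovs-vswitchd ... --huge-dir /mnt/huge_ovs_1G --socket-mem 1024,1024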

Thanks,
Jianfeng

> 
> >
> > (2) Force use of the memory interleave policy
> > $ numactl --interleave=all ovs-vswitchd ...
> >
> > Note: keep the huge-dir and socket-mem options, "--huge-dir /mnt/huge_ovs_1G --socket-mem 1024,1024".
> >
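
Once ovs-vswitchd is running, you can also verify on which nodes its 1G
pages actually ended up via the process's numa_maps (just a generic check,
not something from this thread):

$ grep huge /proc/$(pidof ovs-vswitchd)/numa_maps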

