[dpdk-stable] [dpdk-dev] [PATCH 17.11] mem: fix memory initialization time
Burakov, Anatoly
anatoly.burakov at intel.com
Fri Nov 16 16:56:16 CET 2018
On 16-Nov-18 2:42 PM, Alejandro Lucero wrote:
>
>
> On Fri, Nov 16, 2018 at 1:35 PM Burakov, Anatoly
> <anatoly.burakov at intel.com> wrote:
>
> On 16-Nov-18 12:49 PM, Alejandro Lucero wrote:
> >
> >
> > On Thu, Nov 15, 2018 at 1:16 PM Burakov, Anatoly
> > <anatoly.burakov at intel.com> wrote:
> >
> > On 12-Nov-18 11:18 AM, Alejandro Lucero wrote:
> > > When using a large amount of hugepage-based memory, doing all
> > > the hugepage mapping can take quite significant time.
> > >
> > > The problem is that the hugepages are initially mmapped at
> > > virtual addresses which will later be tried for the final
> > > hugepage mapping. This forces the final mapping to call mmap
> > > with another hint address, which can happen several times
> > > depending on the amount of memory to mmap, with each mmap
> > > taking more than a second.
> > >
> > > This patch changes the hint for the initial hugepage mapping
> > > to a starting address which will not collide with the final
> > > mapping.
> > >
> > > Fixes: 293c0c4b957f ("mem: use address hint for mapping
> > > hugepages")
> > >
> > > Signed-off-by: Alejandro Lucero <alejandro.lucero at netronome.com>
> > > ---
> >
> > Hi Alejandro,
> >
> > I'm not sure I understand the purpose. When the final mapping is
> > performed, we reserve a new memory area and map pages into it. (I
> > don't quite understand why we unmap the area before mapping
> > pages, but that's how it's always been and I didn't change it in
> > the legacy code.)
> >
> > Which addresses are causing the collision?
> >
> >
> > Because the hint for the final mapping is at the 4GB address, and
> > the hugepages are initially mapped individually starting at low
> > virtual addresses, when the memory to map is 4GB or more, the
> > hugepages will end up using that hint address and above. The more
> > hugepages to mmap, the more addresses above the hint address get
> > used, and the more mmaps fail to obtain the virtual addresses
> > needed for the final mmap.
>
> Yes, but I still don't understand what the problem is.
>
> Before the final mapping, all of the pages get unmapped. They no
> longer occupy any VA space at all. Then, we create a VA area the
> size of the IOVA-contiguous chunk we have, but then we also unmap
> *that* (again, no idea why we actually do that, but that's how it
> works). So, the final mapping is performed with the knowledge that
> there are no pages at the specified addresses, and mapping at the
> specified addresses is performed after the first mapping has
> already been unmapped.
>
> As far as I understand, at no point do we hold addresses for the
> initial and final mappings concurrently. So, where does the
> conflict come in?
>
>
> Are you sure about this? I can see that the call to
> unmap_all_hugepage_init happens after the second call to
> map_all_hugepages.
>
> Maybe you are looking at the legacy code in a newer version, which
> does not perform exactly the same steps.
Ah yes, you're right - we do remap the pages before we unmap the
original mappings. This patch makes perfect sense then. It'd still
collide with mappings when --base-virtaddr is set to the same address,
but it's not going to fail (just be slow again), so it's OK.
Acked-by: Anatoly Burakov <anatoly.burakov at intel.com>
--
Thanks,
Anatoly