[7/7] eal/mem: use DMA mask check for legacy memory

Message ID 20181031172931.11894-8-alejandro.lucero@netronome.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series: fix DMA mask check

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Alejandro Lucero Oct. 31, 2018, 5:29 p.m. UTC
  If a device reports addressing limitations through a DMA mask,
the IOVAs for mapped memory need to be checked to ensure
correct functionality.

Previous patches introduced this DMA check for the main memory code
currently in use, but other options, such as legacy memory and the
no-hugepages one, also need to be considered.

This patch adds the DMA check for those cases.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
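
For background on what the check guards against: a device with an N-bit DMA mask can only address IOVAs whose bits above N are zero, so every mapped segment must fall entirely below that boundary. Below is a minimal, self-contained sketch of that comparison — illustrative only; the real rte_mem_check_dma_mask_unsafe() walks EAL's memseg lists rather than a flat region, and the helper name here is made up:

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative check: is the region [iova, iova + len) fully addressable
 * by a device limited to 'maskbits' bits of IOVA space? With a 40-bit
 * mask, for example, any address with bits 40..63 set is unreachable.
 * Assumes 0 < maskbits < 64. */
static bool
iova_fits_dma_mask(uint64_t iova, size_t len, uint8_t maskbits)
{
	/* Bits above the mask width; any of them set means out of range. */
	uint64_t mask = ~((1ULL << maskbits) - 1);

	/* Both the first and the last byte must be below the boundary. */
	return (iova & mask) == 0 && ((iova + len - 1) & mask) == 0;
}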
  

Comments

Anatoly Burakov Nov. 1, 2018, 10:40 a.m. UTC | #1
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> If a device reports addressing limitations through a DMA mask,
> the IOVAs for mapped memory need to be checked to ensure
> correct functionality.
> 
> Previous patches introduced this DMA check for the main memory code
> currently in use, but other options, such as legacy memory and the
> no-hugepages one, also need to be considered.
> 
> This patch adds the DMA check for those cases.
> 
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---

IMO this needs to be integrated with patch 5.

>   lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
>   1 file changed, 17 insertions(+)
> 
> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> index fce86fda6..2a3a8c7a3 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> @@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
>   
>   			addr = RTE_PTR_ADD(addr, (size_t)page_sz);
>   		}
> +		if (mcfg->dma_maskbits) {
> +			if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> +				RTE_LOG(ERR, EAL,
> +					"%s(): couldn't allocate memory due to DMA mask\n",

I would use suggested rewording from patch 5 :)

> +					__func__);
> +				goto fail;
> +			}
> +		}
>   		return 0;
>   	}
>   
> @@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
>   		rte_fbarray_destroy(&msl->memseg_arr);
>   	}
>   
> +	if (mcfg->dma_maskbits) {
> +		if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> +			RTE_LOG(ERR, EAL,
> +				"%s(): couldn't allocate memory due to DMA mask\n",

Same as above.

> +				__func__);
> +			goto fail;
> +		}
> +	}
> +
>   	return 0;
>   
>   fail:
>
  
Alejandro Lucero Nov. 1, 2018, 1:39 p.m. UTC | #2
On Thu, Nov 1, 2018 at 10:40 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > If a device reports addressing limitations through a DMA mask,
> > the IOVAs for mapped memory need to be checked to ensure
> > correct functionality.
> >
> > Previous patches introduced this DMA check for the main memory code
> > currently in use, but other options, such as legacy memory and the
> > no-hugepages one, also need to be considered.
> >
> > This patch adds the DMA check for those cases.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
>
> IMO this needs to be integrated with patch 5.
>
>
Not sure about this. Patch 5 is a follow-up to patch 4, and this is to add
support for other EAL-supported memory options.


> >   lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
> >   1 file changed, 17 insertions(+)
> >
> > diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> > index fce86fda6..2a3a8c7a3 100644
> > --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> > +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> > @@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
> >
> >                       addr = RTE_PTR_ADD(addr, (size_t)page_sz);
> >               }
> > +             if (mcfg->dma_maskbits) {
> > +                     if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> > +                             RTE_LOG(ERR, EAL,
> > +                                     "%s(): couldn't allocate memory due to DMA mask\n",
>
> I would use suggested rewording from patch 5 :)
>

Ok


>
> > +                                     __func__);
> > +                             goto fail;
> > +                     }
> > +             }
> >               return 0;
> >       }
> >
> > @@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
> >               rte_fbarray_destroy(&msl->memseg_arr);
> >       }
> >
> > +     if (mcfg->dma_maskbits) {
> > +             if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> > +                     RTE_LOG(ERR, EAL,
> > +                             "%s(): couldn't allocate memory due to DMA mask\n",
>
> Same as above.
>
> > +                             __func__);
> > +                     goto fail;
> > +             }
> > +     }
> > +
> >       return 0;
> >
> >   fail:
> >
>
>
> --
> Thanks,
> Anatoly
>
  
Anatoly Burakov Nov. 1, 2018, 2:28 p.m. UTC | #3
On 01-Nov-18 1:39 PM, Alejandro Lucero wrote:
> 
> 
> On Thu, Nov 1, 2018 at 10:40 AM Burakov, Anatoly 
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> 
>     On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
>      > If a device reports addressing limitations through a DMA mask,
>      > the IOVAs for mapped memory need to be checked to ensure
>      > correct functionality.
>      >
>      > Previous patches introduced this DMA check for the main memory code
>      > currently in use, but other options, such as legacy memory and the
>      > no-hugepages one, also need to be considered.
>      >
>      > This patch adds the DMA check for those cases.
>      >
>      > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
>     <mailto:alejandro.lucero@netronome.com>>
>      > ---
> 
>     IMO this needs to be integrated with patch 5.
> 
> 
> Not sure about this. Patch 5 is a follow-up to patch 4, and this is to 
> add support for other EAL-supported memory options.
> 

So it's a follow-up to patch 5, adding pretty much the same functionality 
(only in a different place). It's pretty safe to say these should be in 
the same patch, or at the very least one after the other.
  
Alejandro Lucero Nov. 1, 2018, 2:32 p.m. UTC | #4
On Thu, Nov 1, 2018 at 2:28 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 01-Nov-18 1:39 PM, Alejandro Lucero wrote:
> >
> >
> > On Thu, Nov 1, 2018 at 10:40 AM Burakov, Anatoly
> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> >
> >     On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> >      > If a device reports addressing limitations through a DMA mask,
> >      > the IOVAs for mapped memory need to be checked to ensure
> >      > correct functionality.
> >      >
> >      > Previous patches introduced this DMA check for the main memory code
> >      > currently in use, but other options, such as legacy memory and the
> >      > no-hugepages one, also need to be considered.
> >      >
> >      > This patch adds the DMA check for those cases.
> >      >
> >      > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> >     <mailto:alejandro.lucero@netronome.com>>
> >      > ---
> >
> >     IMO this needs to be integrated with patch 5.
> >
> >
> > Not sure about this. Patch 5 is a follow-up to patch 4, and this is to
> > add support for other EAL-supported memory options.
> >
>
> So it's a follow-up to patch 5, adding pretty much the same functionality
> (only in a different place). It's pretty safe to say these should be in
> the same patch, or at the very least one after the other.
>
>
Ok. I'll do so.

Thanks


> --
> Thanks,
> Anatoly
>
  

Patch

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fce86fda6..2a3a8c7a3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1393,6 +1393,14 @@  eal_legacy_hugepage_init(void)
 
 			addr = RTE_PTR_ADD(addr, (size_t)page_sz);
 		}
+		if (mcfg->dma_maskbits) {
+			if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+				RTE_LOG(ERR, EAL,
+					"%s(): couldn't allocate memory due to DMA mask\n",
+					__func__);
+				goto fail;
+			}
+		}
 		return 0;
 	}
 
@@ -1628,6 +1636,15 @@  eal_legacy_hugepage_init(void)
 		rte_fbarray_destroy(&msl->memseg_arr);
 	}
 
+	if (mcfg->dma_maskbits) {
+		if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+			RTE_LOG(ERR, EAL,
+				"%s(): couldn't allocate memory due to DMA mask\n",
+				__func__);
+			goto fail;
+		}
+	}
+
 	return 0;
 
 fail:
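
For completeness, here is a sketch of the driver side that makes mcfg->dma_maskbits non-zero in the first place, so the checks added above actually fire. The setter name is taken from earlier patches in this series, but treat the exact signature as an assumption; the 40-bit value matches the NFP device limitation that motivated the series:

#include <rte_memory.h>

/* The NFP devices behind this series can only address 40-bit IOVAs. */
#define EXAMPLE_DMA_MASK_BITS 40

/* Hypothetical probe-time hook: registering the device limitation
 * records the most restrictive mask seen so far in mcfg->dma_maskbits,
 * which the legacy and no-hugepages init paths patched above then
 * validate all mapped IOVAs against. */
static void
example_register_dma_limit(void)
{
	rte_mem_set_dma_mask(EXAMPLE_DMA_MASK_BITS);
}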