[dpdk-stable] [PATCH] net/af_xdp: fix 32-bit build for older kernels

Loftus, Ciara ciara.loftus at intel.com
Mon Nov 16 15:24:31 CET 2020


> 
> On 11/12/2020 4:35 PM, Ciara Loftus wrote:
> > 'uint64_t' is used to hold pointers in multiple locations in the
> > copy-mode code (used for kernels before 5.4). For a 32-bit build
> > this assumption is wrong and results in build errors. This commit
> > replaces such instances of 'uint64_t' with 'uintptr_t'.
> >
> > While the copy-mode code will now compile for 32-bit, the PMD is
> > not expected to work and will fail at initialisation due to some
> > limitations in the kernel that were subsequently removed in v5.4.
> > Add a note to the docs to flag this limitation.
> >
> > Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD")
> > Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus at intel.com>
> > ---
> >   doc/guides/nics/af_xdp.rst          | 1 +
> >   drivers/net/af_xdp/rte_eth_af_xdp.c | 6 +++---
> >   2 files changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
> > index 052e59a3ae..5ed24374f8 100644
> > --- a/doc/guides/nics/af_xdp.rst
> > +++ b/doc/guides/nics/af_xdp.rst
> > @@ -50,6 +50,7 @@ This is a Linux-specific PMD, thus the following prerequisites apply:
> >   *  For PMD zero copy, it requires kernel version later than v5.4-rc1;
> >   *  For shared_umem, it requires kernel version v5.10 or later and libbpf version
> >      v0.2.0 or later.
> > +*  For 32-bit OS, a kernel with version 5.4 or later is required.
> >
> 
> +1 to doc update
> 
> >   Set up an af_xdp interface
> >   -----------------------------
> > diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > index 4076ff797c..75ff1c00b2 100644
> > --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> > +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > @@ -349,7 +349,7 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> >
> >   	for (i = 0; i < rcvd; i++) {
> >   		const struct xdp_desc *desc;
> > -		uint64_t addr;
> > +		uintptr_t addr;
> >   		uint32_t len;
> >   		void *pkt;
> >
> > @@ -402,7 +402,7 @@ pull_umem_cq(struct xsk_umem_info *umem, int size, struct xsk_ring_cons *cq)
> >   	n = xsk_ring_cons__peek(cq, size, &idx_cq);
> >
> >   	for (i = 0; i < n; i++) {
> > -		uint64_t addr;
> > +		uintptr_t addr;
> >   		addr = *xsk_ring_cons__comp_addr(cq, idx_cq++);
> 
> Hi Ciara,
> 
> As far as I can see the API 'xsk_ring_cons__comp_addr()' returns fixed size
> variable ('__u64'),
> and when the PMD is compiled for 32bit, won't it be assigning a 64bit variable
> to the 32bit storage.

Correct. However, we can assume the upper 32 bits are zero in this case.
The 'addr' we consume via this API is one we previously enqueued to the buf_ring, and we always cast it to (void *) on enqueue, so on a 32-bit build it can only ever hold a 32-bit value.
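
For illustration, a minimal standalone sketch of that round trip (the names below are made up for the example, not the PMD's actual ones):

	#include <inttypes.h>
	#include <stdio.h>

	/* Sketch of the buf_ring round trip: the address is enqueued as a
	 * pointer-sized (void *), so when it later comes back from the
	 * completion ring as a fixed-width __u64 its upper 32 bits are
	 * already zero on a 32-bit build, and narrowing to uintptr_t
	 * loses nothing. */
	int main(void)
	{
		uintptr_t frame = 0x1000;              /* fits in 32 bits */
		void *enqueued = (void *)frame;        /* cast done on enqueue */

		uint64_t completed = (uint64_t)(uintptr_t)enqueued; /* value read back from the cq */
		uintptr_t addr = (uintptr_t)completed; /* safe narrowing */

		printf("addr = 0x%" PRIxPTR "\n", addr);
		return 0;
	}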

> 
> I guess libbpf also needs to be adjusted for the 32bit support, what about
> making PMD changes after libbpf changed?

I'm not sure whether that is planned, but maybe it makes sense to wait and see rather than rely on the assumptions above.

> 
> >   #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
> >   		addr = xsk_umem__extract_addr(addr);
> > @@ -1005,7 +1005,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
> >   	char ring_name[RTE_RING_NAMESIZE];
> >   	char mz_name[RTE_MEMZONE_NAMESIZE];
> >   	int ret;
> > -	uint64_t i;
> > +	uintptr_t i;
> >
> 
> Not sure on this one, 'i' seems not to hold a pointer but index, and result of
> calculation cast to "void *", I assume intention is to prevent calculation
> result to be 64 bit to cover the case "void *" is 4 bytes, for that what do you
> think making variable uint32_t?

Do you suggest something like:
#ifdef RTE_ARCH_64
       uint64_t i;
#else
       uint32_t i;
#endif
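
For reference, a rough standalone sketch of what I have in mind, with a simplified enqueue loop (the constants and loop body are made up for illustration, not the PMD's actual code; RTE_ARCH_64 normally comes from the DPDK build, so a plain compile of this sketch takes the 32-bit branch):

	#include <stdint.h>
	#include <stdio.h>

	#define NB_BUFS    4
	#define FRAME_SIZE 2048

	int main(void)
	{
	#ifdef RTE_ARCH_64
		uint64_t i;
	#else
		uint32_t i;
	#endif
		/* The index is only used to derive an address that is then
		 * cast to (void *) for the buf_ring, so it never needs to
		 * be wider than a pointer. */
		for (i = 0; i < NB_BUFS; i++) {
			void *addr = (void *)(uintptr_t)(i * FRAME_SIZE);
			printf("buf %u -> %p\n", (unsigned int)i, addr);
		}
		return 0;
	}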

I can submit a v2 with just the doc update and hold off on the other changes until the necessary changes to libbpf are in place. Let me know what you think.

Thanks,
Ciara

