[dpdk-dev] [PATCH v2 0/6] bonding: locks

Bruce Richardson bruce.richardson at intel.com
Fri Jun 10 16:45:42 CEST 2016


On Thu, May 26, 2016 at 05:38:41PM +0100, Bernard Iremonger wrote:
> Add spinlock to bonding rx and tx queues.
> Take spinlock in rx and tx burst functions.
> Take all spinlocks in slave add and remove functions.
> With spinlocks in place remove memcpy of slaves.
> 
> Changes in v2:
> Replace patch 1.
> Add patch 2 and reorder patches.
> Add spinlock to bonding rx and tx queues.
> Take all spinlocks in slave add and remove functions.
> Replace readlocks with spinlocks.
> 
> Bernard Iremonger (6):
>   bonding: add spinlock to rx and tx queues
>   bonding: grab queue spinlocks in slave add and remove
>   bonding: take queue spinlock in rx/tx burst functions
>   bonding: add spinlock to stop function
>   bonding: add spinlock to link update function
>   bonding: remove memcpy from burst functions
> 
>  drivers/net/bonding/rte_eth_bond_api.c     |  52 +++++++-
>  drivers/net/bonding/rte_eth_bond_pmd.c     | 196 ++++++++++++++++++-----------
>  drivers/net/bonding/rte_eth_bond_private.h |   4 +-
>  3 files changed, 173 insertions(+), 79 deletions(-)
> 
> -- 
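
For readers skimming the archive, a minimal sketch of the pattern the cover
letter describes (a spinlock embedded in each rx/tx queue and taken in the
burst path) might look roughly as follows. The structure and field names are
illustrative assumptions, not code taken from the patches:

    /*
     * Illustrative sketch only. Assumes the bonding PMD's per-queue
     * structure (called bond_rx_queue here) gains an rte_spinlock_t and
     * that the rx burst function uses a trylock, so the data path skips a
     * poll rather than blocking while the control plane holds the lock.
     */
    #include <rte_spinlock.h>
    #include <rte_mbuf.h>

    struct bond_rx_queue {
            /* ... existing queue fields ... */
            rte_spinlock_t lock;    /* added: guards slave list while polling */
    };

    static uint16_t
    bond_rx_burst_sketch(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            struct bond_rx_queue *bd_rx_q = queue;
            uint16_t nb_rx = 0;

            /* If slave add/remove currently holds the lock, return without
             * polling instead of spinning in the fast path. */
            if (rte_spinlock_trylock(&bd_rx_q->lock)) {
                    /* ... iterate the active slaves and call
                     * rte_eth_rx_burst() on each, filling bufs ... */
                    rte_spinlock_unlock(&bd_rx_q->lock);
            }

            (void)bufs;
            (void)nb_pkts;
            return nb_rx;
    }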

The patches in this set do not explain the reasoning behind each change: the
commit messages are empty for every patch.

I'm also concerned that this patchset adds lock operations all over the
bonding PMD. While other PMDs also need synchronization between control-plane
and data-plane threads, e.g. so that IO is not done on a stopped port, they
avoid locks in order to get maximum performance. Nowhere in the patchset is it
explained why bonding is so different that it needs locks where other PMDs can
do without them. Each individual patch should also explain why the area it
covers needs locks in this PMD, again compared to other PMDs.
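
For comparison, the lock-free pattern the bonding burst functions rely on
today, the "memcpy of slaves" that the cover letter proposes to remove, works
roughly like the sketch below: a private snapshot of the active slave list is
taken on every burst, so the control plane can modify the list concurrently
without the data path holding a lock. The structure and field names here are
approximate, not the exact code:

    /*
     * Rough sketch of the existing snapshot-based approach; names are
     * approximations, not the actual bonding PMD definitions.
     */
    #include <string.h>
    #include <rte_ethdev.h>    /* RTE_MAX_ETHPORTS, rte_eth_rx_burst() */

    struct bond_dev_private_sketch {
            uint16_t active_slave_count;
            uint16_t active_slaves[RTE_MAX_ETHPORTS];
            /* ... */
    };

    static uint16_t
    bond_rx_burst_snapshot_sketch(struct bond_dev_private_sketch *internals,
                    uint16_t queue_id, struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            uint16_t slaves[RTE_MAX_ETHPORTS];
            uint16_t num_slaves, i, nb_rx = 0;

            /* Work on a private copy of the active slave list so a
             * concurrent slave add/remove cannot change it while the data
             * plane iterates; the cost is a small copy on every burst. */
            num_slaves = internals->active_slave_count;
            memcpy(slaves, internals->active_slaves,
                            sizeof(slaves[0]) * num_slaves);

            for (i = 0; i < num_slaves && nb_rx < nb_pkts; i++)
                    nb_rx += rte_eth_rx_burst(slaves[i], queue_id,
                                    bufs + nb_rx, nb_pkts - nb_rx);

            return nb_rx;
    }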

Thanks,
/Bruce

