[dpdk-dev] [PATCH v2 2/6] ethdev: add port ownership

Matan Azrad matan at mellanox.com
Thu Jan 18 18:20:31 CET 2018


Hi Neil

From: Neil Horman, Thursday, January 18, 2018 6:55 PM
> On Thu, Jan 18, 2018 at 02:00:23PM +0000, Matan Azrad wrote:
> > Hi Neil
> >
> > From: Neil Horman, Thursday, January 18, 2018 3:10 PM
> > > On Wed, Jan 17, 2018 at 05:01:10PM +0000, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > > > > Sent: Wednesday, January 17, 2018 2:00 PM
> > > > > To: Matan Azrad <matan at mellanox.com>
> > > > > Cc: Ananyev, Konstantin <konstantin.ananyev at intel.com>; Thomas
> > > > > Monjalon <thomas at monjalon.net>; Gaetan Rivet
> > > > > <gaetan.rivet at 6wind.com>; Wu, Jingjing <jingjing.wu at intel.com>;
> > > > > dev at dpdk.org; Richardson, Bruce <bruce.richardson at intel.com>
> > > > > Subject: Re: [PATCH v2 2/6] ethdev: add port ownership
> > > > >
> > > > > On Wed, Jan 17, 2018 at 12:05:42PM +0000, Matan Azrad wrote:
> > > > > >
> > > > > > Hi Konstantin
> > > > > > From: Ananyev, Konstantin, Sent: Wednesday, January 17, 2018
> > > > > > 1:24 PM
> > > > > > > Hi Matan,
> > > > > > >
> > > > > > > > Hi Konstantin
> > > > > > > >
> > > > > > > > From: Ananyev, Konstantin, Tuesday, January 16, 2018 9:11
> > > > > > > > PM
> > > > > > > > > Hi Matan,
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Hi Konstantin
> > > > > > > > > > From: Ananyev, Konstantin, Monday, January 15, 2018
> > > > > > > > > > 8:44 PM
> > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > From: Ananyev, Konstantin, Monday, January 15,
> > > > > > > > > > > > 2018
> > > > > > > > > > > > 1:45 PM
> > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > From: Ananyev, Konstantin, Friday, January 12,
> > > > > > > > > > > > > > 2018 2:02 AM
> > > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > > > From: Ananyev, Konstantin, Thursday,
> > > > > > > > > > > > > > > > January 11, 2018
> > > > > > > > > > > > > > > > 2:40 PM
> > > > > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > > > > > From: Ananyev, Konstantin, Wednesday,
> > > > > > > > > > > > > > > > > > January 10,
> > > > > > > > > > > > > > > > > > 2018
> > > > > > > > > > > > > > > > > > 3:36 PM
> > > > > > > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > >  <snip>
> > > > > > > > > > > > > > > > > > > It is good to see that now
> > > > > > > > > > > > > > > > > > > scanning/updating rte_eth_dev_data[]
> > > > > > > > > > > > > > > > > > > is lock protected, but it might not be
> > > > > > > > > > > > > > > > > > > very plausible to protect both data[]
> > > > > > > > > > > > > > > > > > > and next_owner_id using the same lock.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > I guess you mean the owner structure in
> > > > > > > > > > > > > > > > > > rte_eth_dev_data[port_id].
> > > > > > > > > > > > > > > > > > The next_owner_id is read by ownership
> > > > > > > > > > > > > > > > > > APIs (for owner validation), so it makes
> > > > > > > > > > > > > > > > > > sense to use the same lock.
> > > > > > > > > > > > > > > > > > Actually, why not?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Well to me next_owner_id and
> > > > > > > > > > > > > > > > > rte_eth_dev_data[] are not directly related.
> > > > > > > > > > > > > > > > > You may create a new owner_id but it
> > > > > > > > > > > > > > > > > doesn't mean you would update
> > > > > > > > > > > > > > > > > rte_eth_dev_data[] immediately.
> > > > > > > > > > > > > > > > > And vice versa - you might just want to
> > > > > > > > > > > > > > > > > update rte_eth_dev_data[].name or .owner_id.
> > > > > > > > > > > > > > > > > It is not very good coding practice to use
> > > > > > > > > > > > > > > > > the same lock for unrelated data structures.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > I see the relation as follows:
> > > > > > > > > > > > > > > > Since the ownership mechanism
> > > > > > > > > > > > > > > > synchronization is ethdev's responsibility,
> > > > > > > > > > > > > > > > we must protect against user mistakes as
> > > > > > > > > > > > > > > > much as we can by using the same lock.
> > > > > > > > > > > > > > > > So, if a user tries to set an invalid owner
> > > > > > > > > > > > > > > > (exactly the ID which is currently being
> > > > > > > > > > > > > > > > allocated) we can protect against it.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hmm, not sure why you can't do the same
> > > > > > > > > > > > > > > checking with a different lock or an atomic
> > > > > > > > > > > > > > > variable?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > The set ownership API is protected by the
> > > > > > > > > > > > > > ownership lock and checks the owner ID validity
> > > > > > > > > > > > > > by reading the next owner ID.
> > > > > > > > > > > > > > So, the owner ID allocation and set API should
> > > > > > > > > > > > > > use the same atomic mechanism.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sure, but all you are doing for checking validity
> > > > > > > > > > > > > is checking that owner_id > 0 &&
> > > > > > > > > > > > > owner_id < next_owner_id, right?
> > > > > > > > > > > > > As you don't allow owner_id overlap (16/32 bits)
> > > > > > > > > > > > > you can safely do the same check with just
> > > > > > > > > > > > > atomic_get(&next_owner_id).
> > > > > > > > > > > > >
> > > > > > > > > > > > It will not protect it. Scenario:
> > > > > > > > > > > > - current next_id is X.
> > > > > > > > > > > > - thread 0 calls set ownership of port A with owner
> > > > > > > > > > > > id X (by user mistake).
> > > > > > > > > > > > - context switch.
> > > > > > > > > > > > - thread 1 allocates a new id, gets X, and changes
> > > > > > > > > > > > next_id to X+1 atomically.
> > > > > > > > > > > > - context switch.
> > > > > > > > > > > > - thread 0 validates X by atomic_read and succeeds
> > > > > > > > > > > > in taking ownership.
> > > > > > > > > > > > - the system has lost the port (or it will be
> > > > > > > > > > > > managed by two entities) - crash.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Ok, and how would using a lock protect you in such a scenario?
> > > > > > > > > >
> > > > > > > > > > The owner set API validation by thread 0 should fail
> > > > > > > > > > because the owner validation is included in the
> > > > > > > > > > protected section.
> > > > > > > > >
> > > > > > > > > Then your validation function would fail even if you
> > > > > > > > > use atomic ops instead of a lock.
> > > > > > > > No.
> > > > > > > > With atomics, this specific scenario will cause the
> > > > > > > > validation to pass.
> > > > > > >
> > > > > > > Can you explain to me how?
> > > > > > >
> > > > > > > static int
> > > > > > > rte_eth_is_valid_owner_id(uint16_t owner_id)
> > > > > > > {
> > > > > > > 	int32_t cur_owner_id =
> > > > > > > 		RTE_MIN(rte_atomic32_read(&next_owner_id), UINT16_MAX);
> > > > > > >
> > > > > > > 	if (owner_id == RTE_ETH_DEV_NO_OWNER ||
> > > > > > > 	    owner_id > cur_owner_id) {
> > > > > > > 		RTE_LOG(ERR, EAL, "Invalid owner_id=%d.\n", owner_id);
> > > > > > > 		return 0;
> > > > > > > 	}
> > > > > > > 	return 1;
> > > > > > > }
> > > > > > >
> > > > > > > Let's say your next_owner_id == X, and you invoke
> > > > > > > rte_eth_is_valid_owner_id(owner_id=X+1) - it would fail.
> > > > > >
> > > > > > Explanation:
> > > > > > The scenario with locks:
> > > > > > next_owner_id = X.
> > > > > > Thread 0 calls the set API (with invalid owner Y=X) and takes the lock.
> > > > > > Context switch.
> > > > > > Thread 1 calls owner_new and blocks on the lock.
> > > > > > Context switch.
> > > > > > Thread 0 does the owner id validation and fails (Y >= X) - unlocks
> > > > > > the lock and returns failure to the user.
> > > > > > Context switch.
> > > > > > Thread 1 takes the lock and updates X to X+1, then unlocks the lock.
> > > > > > Everything is OK!
> > > > > >
> > > > > > The same scenario with atomics:
> > > > > > next_owner_id = X.
> > > > > > Thread 0 calls the set API (with invalid owner Y=X) and takes the lock.
> > > > > > Context switch.
> > > > > > Thread 1 calls owner_new and changes X to X+1 (atomically).
> > > > > > Context switch.
> > > > > > Thread 0 does the owner id validation and succeeds (Y < (atomic) X+1) -
> > > > > > unlocks the lock and returns success to the user.
> > > > > > Problem!
> > > > > >
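To make the difference concrete, here is a minimal sketch of the lock-based set flow (simplified for illustration - the struct layout and the exact bodies are my assumptions, not the literal v2 patch code). Because the validation runs inside the same critical section the allocator uses, next_owner_id cannot move between the check and the ownership write:

/*
 * Minimal sketch, assuming a spinlock shared by the allocator and
 * the set API. Valid ids are 1..next_owner_id-1, so an id that is
 * "currently being allocated" by another thread still fails here.
 */
static rte_spinlock_t rte_eth_dev_ownership_lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t next_owner_id = 1; /* RTE_ETH_DEV_NO_OWNER == 0 */

int
rte_eth_dev_owner_set(uint16_t port_id, const struct rte_eth_dev_owner *owner)
{
	int ret = 0;

	rte_spinlock_lock(&rte_eth_dev_ownership_lock);
	if (owner->id == RTE_ETH_DEV_NO_OWNER || owner->id >= next_owner_id)
		ret = -EINVAL; /* the atomics-based scenario above fails here */
	else
		rte_eth_devices[port_id].data->owner = *owner; /* assumed field */
	rte_spinlock_unlock(&rte_eth_dev_ownership_lock);
	return ret;
}
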
> > > > >
> > > > >
> > > > > Matan is correct here, there is no way to perform parallel set
> > > > > operations using just an atomic variable here, because multiple
> > > > > reads of next_owner_id need to be performed while it is stable.
> > > > > That is to say, rte_eth_next_owner_id must be compared to
> > > > > RTE_ETH_DEV_NO_OWNER and owner_id in rte_eth_is_valid_owner_id. If
> > > > > you were to only use an atomic_read on such a variable, it could
> > > > > be incremented by the owner_new function between the checks and
> > > > > an invalid owner value could become valid because a third
> > > > > thread incremented the next value. The state of next_owner_id
> > > > > must be kept stable during any validity checks.
> > > >
> > > > It could still be incremented between the checks - if, let's say, a
> > > > different thread invokes owner_new, grabs the lock, updates the
> > > > counter, and releases the lock - all that before the check.
> > > I don't see how. All of the contents of rte_eth_dev_owner_set are
> > > protected under rte_eth_dev_ownership_lock, as are those of
> > > rte_eth_dev_owner_new.
> > > next_owner_id might increment between another thread's calls to
> > > owner_new and owner_set, but that will just cause a transition from
> > > an ownership id being valid to invalid, and that's ok, as long as
> > > there is consistency in the model that enforces a single valid owner
> > > at a time (in that case the subsequent caller to owner_new).
> > >
> >
> > I'm not sure I fully understand you, but see:
> > we can't protect against all of the user mistakes (using the wrong owner id),
> > but we are doing as much as we can.
> >
> Yeah, my writing was atrocious, apologies.  All I meant to say was that the
> locking you have is ok, in that it maintains a steady state for the data being
> read during the period it's being read.  The fact that a given set operation
> may fail because someone else created an ownership record is an artifact of
> the api, not a bug in its implementation.  I think we're basically in
> agreement on the semantics here, but this goes to my argument about
> complexity (more below).
> 
> >
> > > Though this confusion does underscore my assertion that this
> > > API is overly complicated.
> > >
> >
> > I really don't think it is complicated - just take ownership of a port (by
> > the owner id allocation and set APIs) and manage the port as you want.
> >
> But that's not all.  The determination of success or failure in claiming
> ownership is largely dependent on the behavior of other threads' actions, not
> a function of the state of the system at the moment ownership is requested.
> That is to say, if you have N threads, and they all create ownership objects
> identified as X, X+1, X+2...X+N, only the thread with id X+N will be able to
> claim ownership of any port, because they all will have incremented the
> shared next_id variable.

Why? Each one will get its own owner id according to some order (the critical section is protected by a spinlock).
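
As a sketch (assuming the spinlock-protected allocator from the patch, simplified here), the read-and-increment happens entirely inside the critical section, so N concurrent callers leave with N distinct ids X, X+1, ..., X+N-1 in some serialized order:

int
rte_eth_dev_owner_new(uint64_t *owner_id)
{
	rte_spinlock_lock(&rte_eth_dev_ownership_lock);
	*owner_id = next_owner_id++; /* unique per caller, by mutual exclusion */
	rte_spinlock_unlock(&rte_eth_dev_ownership_lock);
	return 0;
}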

>  Determination of ownership by the programmer will
> have to be done via debugging, and errors will likely be transient, dependent
> on the order in which threads execute (subject to scheduling jitter).
> 
Yes.

> Rather than making ownership success dependent on any data contained
> within the ownership record, ownership should be entirely dependent on
> the state of port ownership at the time that it was requested.  That is to say,
> port ownership should succeed if and only if the port is unowned at the time
> that a given thread requests ownership.

Yes.
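
For illustration only (port_ownership_take and the owner_id field below are hypothetical, not the v2 patch), such "succeed iff unowned" semantics could be expressed with a single compare-and-swap on the port's owner field:

/*
 * Hypothetical sketch of this proposal: the claim succeeds if and
 * only if the port is unowned at the instant of the request, with
 * no dependence on how many owner ids other threads have created.
 */
static int
port_ownership_take(uint16_t port_id, uint64_t my_owner_id)
{
	volatile uint64_t *owner =
		&rte_eth_devices[port_id].data->owner_id; /* assumed field */

	/* Atomically: if (*owner == NO_OWNER) *owner = my_owner_id; */
	if (rte_atomic64_cmpset(owner, RTE_ETH_DEV_NO_OWNER, my_owner_id))
		return 0;	/* this thread owns the port now */
	return -EBUSY;		/* already owned by someone else */
}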

>  Any ancillary data regarding which
> context owns the port should be exactly that, ancillary, and have no impact
> on whether or not the port ownership request succeeds.
> 

Yes, I understand what you are saying - there is no deterministic state for ownership set success.
Actually, I think it will be very hard to reach determinism in DPDK regarding port ownership when multi-threading is in the game,
especially since it depends on the implementation of a lot of DPDK entities.
But the current non-deterministic approach keeps good order in the game.



> Regards
> Neil
> 
> > > Neil
> >
> >

