[dpdk-dev] [PATCH v2 2/6] ethdev: add port ownership

Matan Azrad matan at mellanox.com
Wed Jan 17 21:34:25 CET 2018



From: Matan Azrad, Wednesday, January 17, 2018 8:02 PM
> Hi Konstantin
> 
> From: Ananyev, Konstantin, Wednesday, January 17, 2018 6:53 PM
> > Hi Matan,
> >
> > >
> > > Hi Konstantin
> > > From: Ananyev, Konstantin, Wednesday, January 17, 2018 2:55 PM
> > > > >
> > > > >
> > > > > Hi Konstantin
> > > > > From: Ananyev, Konstantin, Sent: Wednesday, January 17, 2018
> > > > > 1:24 PM
> > > > > > Hi Matan,
> > > > > >
> > > > > > > Hi Konstantin
> > > > > > >
> > > > > > > From: Ananyev, Konstantin, Tuesday, January 16, 2018 9:11 PM
> > > > > > > > Hi Matan,
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Hi Konstantin
> > > > > > > > > From: Ananyev, Konstantin, Monday, January 15, 2018 8:44
> > > > > > > > > PM
> > > > > > > > > > Hi Matan,
> > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > From: Ananyev, Konstantin, Monday, January 15, 2018
> > > > > > > > > > > 1:45 PM
> > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > From: Ananyev, Konstantin, Friday, January 12,
> > > > > > > > > > > > > 2018
> > > > > > > > > > > > > 2:02 AM
> > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > > From: Ananyev, Konstantin, Thursday, January
> > > > > > > > > > > > > > > 11,
> > > > > > > > > > > > > > > 2018
> > > > > > > > > > > > > > > 2:40 PM
> > > > > > > > > > > > > > > > Hi Matan,
> > > > > > > > > > > > > > > > > Hi Konstantin
> > > > > > > > > > > > > > > > > From: Ananyev, Konstantin, Wednesday,
> > > > > > > > > > > > > > > > > January 10,
> > > > > > > > > > > > > > > > > 2018
> > > > > > > > > > > > > > > > > 3:36 PM
> > > > > > > > > > > > > > > > > > Hi Matan,
> > > > > > > > >  <snip>
> > > > > > > > > > > > > > > > > > It is good to see that now
> > > > > > > > > > > > > > > > > > scanning/updating rte_eth_dev_data[]
> > > > > > > > > > > > > > > > > > is lock protected, but it might not be
> > > > > > > > > > > > > > > > > > a good idea to protect both data[] and
> > > > > > > > > > > > > > > > > > next_owner_id using the same lock.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > I guess you mean the owner structure in
> > > > > > > > > > > > > > > > > rte_eth_dev_data[port_id].
> > > > > > > > > > > > > > > > > The next_owner_id is read by the ownership
> > > > > > > > > > > > > > > > > APIs (for owner validation), so it makes
> > > > > > > > > > > > > > > > > sense to use the same lock.
> > > > > > > > > > > > > > > > > Actually, why not?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Well, to me next_owner_id and
> > > > > > > > > > > > > > > > rte_eth_dev_data[] are not directly related.
> > > > > > > > > > > > > > > > You may create a new owner_id but it doesn't
> > > > > > > > > > > > > > > > mean you would update rte_eth_dev_data[]
> > > > > > > > > > > > > > > > immediately.
> > > > > > > > > > > > > > > > And vice versa - you might just want to
> > > > > > > > > > > > > > > > update rte_eth_dev_data[].name or .owner_id.
> > > > > > > > > > > > > > > > It is not very good coding practice to use
> > > > > > > > > > > > > > > > the same lock for non-related data structures.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I see the relation like next:
> > > > > > > > > > > > > > > Since the ownership mechanism synchronization
> > > > > > > > > > > > > > > is in ethdev responsibility, we must protect
> > > > > > > > > > > > > > > against user mistakes as much as we can by
> > > > > > > > > > > > > > > using the same lock.
> > > > > > > > > > > > > > > So, if a user tries to set an invalid owner
> > > > > > > > > > > > > > > (exactly the ID which is currently being
> > > > > > > > > > > > > > > allocated) we can protect against it.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Hmm, not sure why you can't do the same checking
> > > > > > > > > > > > > > with a different lock or an atomic variable?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > The set ownership API is protected by the
> > > > > > > > > > > > > ownership lock and checks the owner ID validity
> > > > > > > > > > > > > by reading the next owner ID.
> > > > > > > > > > > > > So, the owner ID allocation and the set API should
> > > > > > > > > > > > > use the same atomic mechanism.
> > > > > > > > > > > >
> > > > > > > > > > > > Sure, but all you are doing for checking validity
> > > > > > > > > > > > is checking that owner_id > 0 && owner_id <
> > > > > > > > > > > > next_owner_id, right?
> > > > > > > > > > > > As you don't allow owner_id overlap (16/3248 bits)
> > > > > > > > > > > > you can safely do the same check with just
> > > > > > > > > > > > atomic_get(&next_owner_id).
> > > > > > > > > > > >
> > > > > > > > > > > It will not protect it, scenario:
> > > > > > > > > > > - current next_id is X.
> > > > > > > > > > > - call set ownership of port A with owner id X by
> > > > > > > > > > > thread 0 (by user mistake).
> > > > > > > > > > > - context switch
> > > > > > > > > > > - allocate new id by thread 1 and get X and change
> > > > > > > > > > > next_id to X+1 atomically.
> > > > > > > > > > > - context switch
> > > > > > > > > > > - Thread 0 validates X by atomic_read and succeeds in
> > > > > > > > > > > taking ownership.
> > > > > > > > > > > - The system lost the port (or it will be managed by
> > > > > > > > > > > two entities) - crash.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Ok, and how would using a lock protect you in such a scenario?
> > > > > > > > >
> > > > > > > > > The owner set API validation by thread 0 should fail
> > > > > > > > > because the owner
> > > > > > > > validation is included in the protected section.
> > > > > > > >
> > > > > > > > Then your validation function would fail even if you use
> > > > > > > > atomic ops instead of a lock.
> > > > > > > No.
> > > > > > > With atomics this specific scenario will cause the validation to pass.
> > > > > >
> > > > > > Can you explain to me how?
> > > > > >
> > > > > > rte_eth_is_valid_owner_id(uint16_t owner_id) {
> > > > > > 	int32_t cur_owner_id = RTE_MIN(rte_atomic32_get(next_owner_id),
> > > > > > 				       UINT16_MAX);
> > > > > >
> > > > > > 	if (owner_id == RTE_ETH_DEV_NO_OWNER || owner_id > cur_owner_id) {
> > > > > > 		RTE_LOG(ERR, EAL, "Invalid owner_id=%d.\n", owner_id);
> > > > > > 		return 0;
> > > > > > 	}
> > > > > > 	return 1;
> > > > > > }
> > > > > >
> > > > > > Let's say your next_owner_id == X, and you invoke
> > > > > > rte_eth_is_valid_owner_id(owner_id=X+1) - it would fail.
> > > > >
> > > > > Explanation:
> > > > > The scenario with locks:
> > > > > next_owner_id = X.
> > > > > Thread 0 calls the set API (with invalid owner Y=X) and takes the lock.
> > > >
> > > > Ok I see what you mean.
> > > > But, as I said before, if thread 0 grabs the lock first -
> > > > you'll experience the same failure.
> > > > I understand now that for some reason you treat these two scenarios
> > > > as something different, but for me it is pretty much the same case.
> > > > And to me it means that neither lock nor atomic can fully
> > > > protect you here.
> > > >
> > >
> > > I agree that we are not fully protected even when using locks, but one
> > > lock is more protective than either atomics or 2 different locks.
> > > So, I think keeping it as is (with one lock) makes sense.
> >
> > Ok if that your preference - let's keep your current approach here.
> >
> > >
> > > > > Context switch.
> > > > > Thread 1 calls owner_new and gets stuck on the lock.
> > > > > Context switch.
> > > > > Thread 0 does the owner id validation and fails (Y>=X) - unlocks
> > > > > the lock and returns failure to the user.
> > > > > Context switch.
> > > > > Thread 1 takes the lock and updates X to X+1, then unlocks the lock.
> > > > > Everything is OK!
> > > > >
> > > > > The same scenario with atomics:
> > > > > next_owner_id = X.
> > > > > Thread 0 calls the set API (with invalid owner Y=X) and takes the lock.
> > > > > Context switch.
> > > > > Thread 1 calls owner_new and changes X to X+1 (atomically).
> > > > > Context switch.
> > > > > Thread 0 does the owner id validation and succeeds (Y < (atomic)X+1) -
> > > > > unlocks the lock and returns success to the user.
> > > > > Problem!
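> > > > >
> > > > > To make the difference concrete, here is a minimal sketch of the
> > > > > single-lock flow I have in mind (the names and fields below are
> > > > > illustrative only, not the exact patch code):
> > > > >
> > > > > /* illustrative globals - in the patch they live in ethdev data */
> > > > > static rte_spinlock_t ownership_lock = RTE_SPINLOCK_INITIALIZER;
> > > > > static uint16_t next_owner_id = 1; /* RTE_ETH_DEV_NO_OWNER == 0 */
> > > > >
> > > > > int owner_new(uint16_t *owner_id)
> > > > > {
> > > > > 	rte_spinlock_lock(&ownership_lock);
> > > > > 	*owner_id = next_owner_id++;
> > > > > 	rte_spinlock_unlock(&ownership_lock);
> > > > > 	return 0;
> > > > > }
> > > > >
> > > > > int owner_set(uint16_t port_id, uint16_t owner_id)
> > > > > {
> > > > > 	int ret = 0;
> > > > >
> > > > > 	rte_spinlock_lock(&ownership_lock);
> > > > > 	/* The validation and the next_owner_id update are serialized by
> > > > > 	 * the same lock, so owner_new() cannot make Y=X "valid" in the
> > > > > 	 * middle of this call - the scenario above fails here. */
> > > > > 	if (owner_id == RTE_ETH_DEV_NO_OWNER || owner_id >= next_owner_id)
> > > > > 		ret = -EINVAL;
> > > > > 	else
> > > > > 		rte_eth_dev_data[port_id].owner_id = owner_id;
> > > > > 	rte_spinlock_unlock(&ownership_lock);
> > > > > 	return ret;
> > > > > }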
> > > > >
> > > > > > > With the lock, no next_id changes can be done while the thread
> > > > > > > is in the set API.
> > > > > > >
> > > > > > > > But in fact your code is not protected for that scenario -
> > > > > > > > it doesn't matter whether you use lock or atomic ops.
> > > > > > > > Let's consider your current code with the following scenario:
> > > > > > > >
> > > > > > > > next_owner_id == 1
> > > > > > > > 1) Process 0:
> > > > > > > >      rte_eth_dev_owner_new(&owner_id);
> > > > > > > >      now owner_id == 1 and next_owner_id == 2
> > > > > > > > 2) Process 1 (by mistake):
> > > > > > > >     rte_eth_dev_owner_set(port_id=1, owner->id=1);
> > > > > > > >     It will complete successfully, as owner_id == 1 is
> > > > > > > >     considered as valid.
> > > > > > > > 3) Process 0:
> > > > > > > >     rte_eth_dev_owner_set(port_id=1, owner->id=1);
> > > > > > > >     It will also complete with success, as owner->id is valid
> > > > > > > >     and equal to the current port owner_id.
> > > > > > > > So you finished with 2 processes assuming that they each own
> > > > > > > > the same port exclusively.
> > > > > > > >
> > > > > > > > Honestly, in that situation locking around next_owner_id
> > > > > > > > wouldn't give you any advantages over atomic ops.
> > > > > > > >
> > > > > > >
> > > > > > > This is a different scenario that we can't protect against with
> > > > > > > atomics or locks.
> > > > > > > But for the first scenario I described I think we can.
> > > > > > > Please read it again, I described it step by step.
> > > > > > >
> > > > > > > > >
> > > > > > > > > > I don't think you can protect yourself against such a
> > > > > > > > > > scenario with or without locking.
> > > > > > > > > > Unless you make it harder for the misbehaving thread to
> > > > > > > > > > guess a valid owner_id, or add some extra logic here.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > > The set (and other) ownership APIs already use the
> > > > > > > > > > > > > ownership lock, so I think it makes sense to use the
> > > > > > > > > > > > > same lock also in ID allocation.
> > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > In fact, for next_owner_id, you don't
> > > > > > > > > > > > > > > > > > need a lock - just rte_atomic_t should
> > > > > > > > > > > > > > > > > > be enough.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > I don't think so; it is problematic with
> > > > > > > > > > > > > > > > > next_owner_id wraparound and may complicate
> > > > > > > > > > > > > > > > > the code in other places which read it.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > IMO it is not that complicated; something
> > > > > > > > > > > > > > > > like the following should work, I think.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > /* init to 0 at startup */
> > > > > > > > > > > > > > > > rte_atomic32_t owner_id;
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > int new_owner_id(void) {
> > > > > > > > > > > > > > > >     int32_t x;
> > > > > > > > > > > > > > > >     x = rte_atomic32_add_return(&owner_id, 1);
> > > > > > > > > > > > > > > >     if (x > UINT16_MAX) {
> > > > > > > > > > > > > > > >        rte_atomic32_dec(&owner_id);
> > > > > > > > > > > > > > > >        return -EOVERFLOW;
> > > > > > > > > > > > > > > >     } else
> > > > > > > > > > > > > > > >         return x;
> > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Why not just keep it simple and use the
> > > > > > > > > > > > > > > > > same lock?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Lock is also fine, I just think it had better
> > > > > > > > > > > > > > > > be a separate one - one that would protect
> > > > > > > > > > > > > > > > just next_owner_id.
> > > > > > > > > > > > > > > > Though if you are going to use uuid here -
> > > > > > > > > > > > > > > > all that is probably not relevant any more.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I agree about the uuid but still think the
> > > > > > > > > > > > > > > same lock should be used for
> > > > > > > > > > > > both.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > But with uuid you don't need next_owner_id at
> > > > > > > > > > > > > > all, right?
> > > > > > > > > > > > > > So the lock will only be used for
> > > > > > > > > > > > > > rte_eth_dev_data[] fields anyway.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > Sorry, I meant uint64_t, not uuid.
> > > > > > > > > > > >
> > > > > > > > > > > > Ah ok, my thought was that uuid_t is better, as with
> > > > > > > > > > > > it you don't need to support your own code to allocate
> > > > > > > > > > > > a new owner_id, but can rely on system libs instead.
> > > > > > > > > > > > But I wouldn't insist here.
> > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Another alternative would be to use 2
> > > > > > > > > > > > > > > > > > locks - one for next_owner_id, a second
> > > > > > > > > > > > > > > > > > for actual data[] protection.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Another thing - you'll probably need to
> > > > > > > > > > > > > > > > > > grab/release a lock inside
> > > > > > > > > > > > > > > > > > rte_eth_dev_allocated() too.
> > > > > > > > > > > > > > > > > > It is a public function used by drivers,
> > > > > > > > > > > > > > > > > > so it needs to be protected too.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Yes, I thought about it, but decided not to
> > > > > > > > > > > > > > > > > use a lock in the next ones:
> > > > > > > > > > > > > > > > > rte_eth_dev_allocated
> > > > > > > > > > > > > > > > > rte_eth_dev_count
> > > > > > > > > > > > > > > > > rte_eth_dev_get_name_by_port
> > > > > > > > > > > > > > > > > rte_eth_dev_get_port_by_name
> > > > > > > > > > > > > > > > > maybe more...
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > As I can see in patch #3 you protect by lock
> > > > > > > > > > > > > > > > the access to rte_eth_dev_data[].name (which
> > > > > > > > > > > > > > > > seems like a good thing).
> > > > > > > > > > > > > > > > So I think any other public function that
> > > > > > > > > > > > > > > > accesses rte_eth_dev_data[].name should be
> > > > > > > > > > > > > > > > protected by the same lock.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I don't think so. I can understand using the
> > > > > > > > > > > > > > > ownership lock here (as in port creation) but I
> > > > > > > > > > > > > > > don't think it is necessary either.
> > > > > > > > > > > > > > > What are we exactly protecting here?
> > > > > > > > > > > > > > > Don't you think it is just timing? (ask in the
> > > > > > > > > > > > > > > next moment and you may get another answer)
> > > > > > > > > > > > > > > I don't see a potential crash.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Not sure what you mean here by timing...
> > > > > > > > > > > > > > As I understand, rte_eth_dev_data[].name uniquely
> > > > > > > > > > > > > > identifies the device and is used by the port
> > > > > > > > > > > > > > allocation/release/find functions.
> > > > > > > > > > > > > > As you stated above:
> > > > > > > > > > > > > > "1. The port allocation and port release
> > > > > > > > > > > > > > synchronization will be managed by ethdev."
> > > > > > > > > > > > > > To me it means that the ethdev layer has to make
> > > > > > > > > > > > > > sure that all accesses to rte_eth_dev_data[].name
> > > > > > > > > > > > > > are atomic.
> > > > > > > > > > > > > > Otherwise what would prevent the situation when
> > > > > > > > > > > > > > one process does
> > > > > > > > > > > > > > rte_eth_dev_allocate()->snprintf(rte_eth_dev_data[x].name, ...)
> > > > > > > > > > > > > > while a second one does
> > > > > > > > > > > > > > rte_eth_dev_allocated(rte_eth_dev_data[x].name, ...) ?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > The second will get True or False and that is it.
> > > > > > > > > > > >
> > > > > > > > > > > > Under a race condition - in the worst case it might
> > > > > > > > > > > > crash, though for that you'll have to be really unlucky.
> > > > > > > > > > > > Though in most cases, as you said, it would just not
> > > > > > > > > > > > operate correctly.
> > > > > > > > > > > > I think if we start to protect dev->name by a lock we
> > > > > > > > > > > > need to do it for all instances (both read and write).
> > > > > > > > > > > >
> > > > > > > > > > > Since under the ownership rules, the user must take
> > > > > > > > > > > ownership
> > > > of a
> > > > > > > > > > > port
> > > > > > > > > > before using it, I still don't see a problem here.
> > > > > > > > > >
> > > > > > > > > > I am not talking about owner id or name here.
> > > > > > > > > > I am talking about dev->name.
> > > > > > > > > >
> > > > > > > > > So? The user still should take ownership of a device
> > > > > > > > > before using it (by name or by port id).
> > > > > > > > > It can just read it without owning it, but not manage it.
> > > > > > > > >
> > > > > > > > > > > Please, can you describe a specific crash scenario and
> > > > > > > > > > > explain how the locking could fix it?
> > > > > > > > > >
> > > > > > > > > > Let's say thread 0 is doing
> > > > > > > > > > rte_eth_dev_allocate()->snprintf(rte_eth_dev_data[x].name, ...),
> > > > > > > > > > and thread 1 is doing
> > > > > > > > > > rte_pmd_ring_remove()->rte_eth_dev_allocated()->strcmp().
> > > > > > > > > > And because of the race condition -
> > > > > > > > > > rte_eth_dev_allocated() will return the rte_eth_dev * for
> > > > > > > > > > the wrong device.
> > > > > > > > > Which wrong device do you mean? I guess it is the device
> > > > > > > > > which is currently being created by thread 0.
> > > > > > > > > > Then rte_pmd_ring_remove() will call rte_free() for the
> > > > > > > > > > related resources, while it can still be in use by someone else.
> > > > > > > > > The rte_pmd_ring_remove caller (some DPDK entity) must take
> > > > > > > > > ownership (or validate that it is the owner) of a port
> > > > > > > > > before doing it (free, release), so no issue here.
> > > > > > > >
> > > > > > > > Forget about ownership for a second.
> > > > > > > > Suppose we have a process that created a ring port for itself
> > > > > > > > (without setting any ownership) and used it for some time.
> > > > > > > > Then it decided to remove it, so it calls rte_pmd_ring_remove()
> > > > > > > > for it.
> > > > > > > > At the same time a second process decides to call
> > > > > > > > rte_eth_dev_allocate() (let's say for another ring port).
> > > > > > > > They could collide trying to read (process 0) and modify
> > > > > > > > (process 1) the same string rte_eth_dev_data[].name.
> > > > > > > >
> > > > > > > Do you mean that process 0 will successfully compare against
> > > > > > > the process 1 new port name?
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > > The state is in local process memory - so process 0 will not
> > > > > > > compare against the process 1 port; from its point of view this
> > > > > > > port is in UNUSED state.
> > > > > > >
> > > > > >
> > > > > > Ok, and why can't it be in attached state in process 0 too?
> > > > >
> > > > > Someone in process 0 should attach it using protected
> > > > > attach_secondary
> > > > somewhere in your scenario.
> > > >
> > > > Yes, process 0 can have this port attached too, why not?
> > > See the function with inline comments:
> > >
> > > struct rte_eth_dev *
> > > rte_eth_dev_allocated(const char *name) {
> > > 	unsigned i;
> > >
> > > 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
> > >
> > > 		The below state is in local process memory,
> > > 		so, if here process 1 allocates a new port (the current i),
> > > 		updates its local state to ATTACHED and writes the name,
> > > 		the state is not visible to process 0 until someone in
> > > 		process 0 attaches it by rte_eth_dev_attach_secondary.
> > > 		So, to use rte_eth_dev_attach_secondary, process 0 must take
> > > 		the lock, and it can't, because it is currently locked by
> > > 		process 1.
> >
> > Ok I see.
> > Thanks for your patience.
> > BTW, that means that if, let's say, process 0 calls
> > rte_eth_dev_allocate("xxx") and process 1 calls
> > rte_eth_dev_allocate("yyy"), we can end up with the same port_id being
> > used for different devices and 2 processes overwriting the same
> > rte_eth_dev_data[port_id]?
> 
> No, contrary to the state, the lock itself is in shared memory, so 2 processes
> cannot allocate a port at the same time (you can see it in the next patch of
> this series).
>

Actually, I think only one process (the primary) should allocate ports; the others should attach to them.
The race of port allocation is only between the threads of the primary process.
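
A rough sketch of the allocation path I mean, with the lock kept in shared
memory (the names eth_dev_shared_data, ownership_lock and
eth_dev_find_free_port() are illustrative here, not the exact patch code):

struct rte_eth_dev *
eth_dev_allocate(const char *name)
{
	struct rte_eth_dev *dev = NULL;
	uint16_t port_id;

	/* The lock lives in shared memory, so the threads of the primary
	 * process (and any other process) serialize here. */
	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);

	port_id = eth_dev_find_free_port(); /* hypothetical helper */
	if (port_id != RTE_MAX_ETHPORTS) {
		dev = &rte_eth_devices[port_id];
		dev->data = &rte_eth_dev_data[port_id];
		snprintf(dev->data->name, sizeof(dev->data->name), "%s", name);
		dev->state = RTE_ETH_DEV_ATTACHED; /* local-process state */
	}

	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
	return dev;
}

A secondary process would never run this path; it would only call
rte_eth_dev_attach_secondary() under the same shared lock.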

 
> > Konstantin
> >
> > >
> > > 		if ((rte_eth_devices[i].state == RTE_ETH_DEV_ATTACHED) &&
> > > 		    strcmp(rte_eth_devices[i].data->name, name) == 0)
> > > 			return &rte_eth_devices[i];
> > > 	}
> > > 	return NULL;
> > > }
> > >
> > >


