[dpdk-dev] Question on mlx5 PMD txq memory registration

Shahaf Shuler shahafs at mellanox.com
Thu Jul 20 17:20:32 CEST 2017


Hi Sagi, 

Thursday, July 20, 2017 5:06 PM, Sagi Grimberg:
> >> It's worse than just a drop: without debug enabled the error
> >> completion is ignored, and the wqe_pi is taken from an invalid field,
> >> which leads to bogus mbuf frees (elts_tail is not valid).
> >
> > Right
> 
> A simple work-around would be to fill a correct tail so that error
> completions will still have it (although I'm not sure which fields are
> reliable other than the status in error completions).

As Nélio said, we have patches which solve the issue at its root cause.
In the meanwhile, the workaround is to have a large enough MR cache.
I agree the documentation is missing.
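
To make the cache point concrete, below is a rough sketch of the kind of
per-txq mempool-to-lkey cache being discussed (the structure and helper
names are illustrative only, not the actual mlx5 PMD code): with a large
enough cache every mbuf's mempool hits the fast path, while an undersized
cache forces evictions and costly re-registration from the data path.

/* Illustrative sketch only -- not the actual mlx5 PMD code. */
#include <stdint.h>
#include <rte_mempool.h>

struct txq_mr_cache_entry {
	const struct rte_mempool *mp; /* mempool the mbuf came from */
	uint32_t lkey;                /* lkey to place in the WQE */
};

#define TXQ_MR_CACHE_SIZE 8 /* assumed cache depth */

/* Hypothetical slow path: in the real PMD this registers the mempool's
 * memory (ibv_reg_mr()) and caches the resulting lkey, possibly evicting
 * an older entry when the cache is full.  Stubbed out here. */
static uint32_t
txq_mp2mr_register(struct txq_mr_cache_entry *cache,
		   const struct rte_mempool *mp)
{
	(void)cache;
	(void)mp;
	return UINT32_MAX; /* placeholder "invalid lkey" */
}

/* Fast path, called per packet: resolve the lkey for an mbuf's mempool. */
static uint32_t
txq_mp2mr_lookup(struct txq_mr_cache_entry *cache,
		 const struct rte_mempool *mp)
{
	unsigned int i;

	for (i = 0; i != TXQ_MR_CACHE_SIZE && cache[i].mp != NULL; ++i)
		if (cache[i].mp == mp)
			return cache[i].lkey; /* cache hit */
	/* Cache miss: fall back to the costly registration path. */
	return txq_mp2mr_register(cache, mp);
}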
 
> 
> >> Not sure why it needs a lock at all. it *may* need an rcu protection
> >> or rw_lock if at all.
> >
> > Tx queues may run on several CPUs; there is a need to be sure this array
> > cannot be modified by two threads at the same time.  Anyway it is
> > costly.
> 
> As I said, there are primitives which are designed to handle frequent reads
> and rare mutations.

Even with such primitives, locking rarely still costs more than never locking.
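
For the record, here is roughly what such a read-mostly scheme could look
like with DPDK's rte_rwlock (a minimal sketch with an assumed table layout
and names, not a proposal for the actual code). Even here the read lock is
an atomic operation taken on every lookup, which is the cost I mean:

#include <stdint.h>
#include <rte_rwlock.h>

/* Assumed MR table layout, for illustration only.
 * Initialize tbl->lock with rte_rwlock_init(). */
struct mr_table {
	rte_rwlock_t lock;   /* writers: rare MR (de)registration */
	unsigned int n;      /* number of valid entries */
	struct {
		uintptr_t start;
		uintptr_t end;
		uint32_t lkey;
	} mr[16];
};

/* TX data path: many concurrent readers share the cheap read lock. */
static uint32_t
mr_table_lookup(struct mr_table *tbl, uintptr_t addr)
{
	uint32_t lkey = UINT32_MAX; /* "not found" */
	unsigned int i;

	rte_rwlock_read_lock(&tbl->lock);
	for (i = 0; i != tbl->n; ++i) {
		if (addr >= tbl->mr[i].start && addr < tbl->mr[i].end) {
			lkey = tbl->mr[i].lkey;
			break;
		}
	}
	rte_rwlock_read_unlock(&tbl->lock);
	return lkey;
}

/* Control path: the rare mutation takes the exclusive write lock. */
static void
mr_table_add(struct mr_table *tbl, uintptr_t start, uintptr_t end,
	     uint32_t lkey)
{
	rte_rwlock_write_lock(&tbl->lock);
	if (tbl->n < 16) {
		tbl->mr[tbl->n].start = start;
		tbl->mr[tbl->n].end = end;
		tbl->mr[tbl->n].lkey = lkey;
		tbl->n++;
	}
	rte_rwlock_write_unlock(&tbl->lock);
}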

> 
> >> AFAICT, all this caching mechanism is just working around the fact
> >> that mlx5 allocates resources on top of the existing verbs interface.
> >> I think it should work like any other pmd driver, i.e. use mbuf the
> >> physical addresses.
> >>
> >> The mlx5 device (like all other rdma devices) has a global DMA lkey
> >> that spans the entire physical address space. Just about all the
> >> kernel drivers heavily use this lkey. IMO, the mlx5_pmd driver should
> >> be able to query the kernel what this lkey is and ask for the kernel
> >> to create the QP with privilege level to post send/recv operations with
> that lkey.
> >>
> >> And then, mlx5_pmd becomes like other drivers working with physical
> >> addresses instead of working around the memory registration sub-
> optimally.
> >
> > It is one possibility discussed also with Mellanox guys, the point is
> > this breaks the security point of view which is also an important stuff.
> 
> What security aspect? The entire dpdk model builds on top of physical
> address awareness while running under root permissions. I'm not saying
> we should expose it to the application nor grant remote permissions to
> the physical space.
> 
> mlx5_pmd is a network driver, and as a driver, it should be allowed to use the
> device dma lkey as it sees fit. I honestly think it's pretty much mandatory
> considering the work-arounds mlx5_pmd tries to do (which we agreed are
> broken).

True, there are many PMDs which can work only with physical memory.
However, Mellanox NICs have the option to work with virtual addresses, which provides more security.
Running under root doesn't mean you have privileges to access every physical page on the server (even if you try very hard to be aware of them).

The issue here, AFAIU, is performance.
We are now looking into ways to provide the same performance as if there were only a single lkey, while preserving the security feature.
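
To illustrate the gap we are trying to close (the WQE data-segment layout
and function names below are made up for illustration, not taken from the
PMD): with a single privileged DMA lkey the per-packet work is two constant
assignments, while the registration-based model needs a per-packet lkey
lookup like the cache sketch earlier in this mail.

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical WQE data-segment layout, for illustration only. */
struct wqe_dseg {
	uint32_t lkey;
	uint64_t addr;
};

/* Single privileged DMA lkey: no lookup at all, a constant lkey plus
 * the packet's bus address fill the segment. */
static inline void
fill_dseg_single_lkey(struct wqe_dseg *dseg, const struct rte_mbuf *m,
		      uint32_t global_dma_lkey)
{
	dseg->lkey = global_dma_lkey;
	dseg->addr = m->buf_physaddr + m->data_off;
}

/* Registration-based model: resolve the lkey per packet first (reusing
 * the txq_mp2mr_lookup() cache sketch from earlier in this mail), then
 * use the virtual address that the MR actually covers.  This is the
 * extra per-packet cost we want to hide while keeping the protection
 * that real memory registration gives. */
struct txq_mr_cache_entry; /* defined in the earlier sketch */
uint32_t txq_mp2mr_lookup(struct txq_mr_cache_entry *cache,
			  const struct rte_mempool *mp);

static inline void
fill_dseg_registered(struct wqe_dseg *dseg, struct rte_mbuf *m,
		     struct txq_mr_cache_entry *cache)
{
	dseg->lkey = txq_mp2mr_lookup(cache, m->pool);
	dseg->addr = (uintptr_t)rte_pktmbuf_mtod(m, void *);
}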
 
> 
> > If this is added in the future it will certainly be as an option, this
> > way both will be possible, the application could then choose about
> > security vs performance.
> 
> Why should the application even be aware of that? Does any other driver
> expose to the user how it maps pkt mbufs to the NIC? Just like the MR
> handling, it's 100% internal to mlx5 and there is no reason why the user should ever
> be exposed to any of these details.

Another option is the reserved lkey, as you suggested, but it loses the security guarantees.
Like every performance optimization, it should be the application's decision.
In fact, there are already some discussions on the ML about exposing the option to use VA instead of PA [1].

> 
> > I don't know any planning on this from Mellanox side, maybe Shahaf have.
> 
> rdma-core has a very nice vendor extension mechanism (which is important
> because we really don't want to pollute the verbs API just for dpdk).
> It's very easy to expose the dma lkey and create the TX queue-pairs with
> reserved lkey attributes via this mechanism. Just the kernel needs to verify
> root permissions before exposing it.
> 
> >> And while were on the subject, what is the plan of detaching mlx5_pmd
> >> from its MLNX_OFED dependency? Mellanox has been doing a good job
> >> upstreaming the needed features (rdma-core). CC'ing Leon (who is
> >> co-maintaining the user-space rdma tree).
> >
> > This is also in progress in the PMD part; it should be part of the next
> > DPDK release.
> 
> That is *very* good to hear! Can you guys share a branch? I'm willing to take
> it for testing.

The branch is still premature; it should be good enough for external testing in about two weeks.
Contact me directly and I will provide it to you.

[1] http://dpdk.org/ml/archives/dev/2017-June/067156.html


