[dpdk-dev] [PATCH 4/4] pmd_hw_support.py: Add tool to query binaries for hw support information

Neil Horman nhorman at tuxdriver.com
Wed May 18 14:03:00 CEST 2016


On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
> On 05/16/2016 11:41 PM, Neil Horman wrote:
> > This tool searches for the primer string PMD_DRIVER_INFO= in any ELF binary
> > and, if found, parses the remainder of the string as JSON-encoded data,
> > outputting the results in either a human-readable or raw, script-parseable
> > format.
> > 
> > Signed-off-by: Neil Horman <nhorman at tuxdriver.com>
> > CC: Bruce Richardson <bruce.richardson at intel.com>
> > CC: Thomas Monjalon <thomas.monjalon at 6wind.com>
> > CC: Stephen Hemminger <stephen at networkplumber.org>
> > CC: Panu Matilainen <pmatilai at redhat.com>
> > ---
> >  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 174 insertions(+)
> >  create mode 100755 tools/pmd_hw_support.py
> > 
> > diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
> > new file mode 100755
> > index 0000000..0669aca
> > --- /dev/null
> > +++ b/tools/pmd_hw_support.py
> > @@ -0,0 +1,174 @@
> > +#!/usr/bin/python3
> 
> I think this should use /usr/bin/python to be consistent with the other
> python scripts and, like the others, work with both python 2 and 3. I only
> tested it with python 2 after changing this, and it seemed to work fine, so
> the compatibility side should be fine as-is.
> 
Sure, I can change the python executable; that makes sense.
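
For reference, the guts of the scan are simple enough that staying 2/3
compatible is no problem. A minimal sketch of the idea (illustration only,
not the actual tool code, and assuming the JSON blob is stored as a
NUL-terminated C string after the primer):

  import json
  import sys

  def find_pmd_info(path):
      # read the whole binary and scan for the primer string
      with open(path, 'rb') as f:
          data = f.read()
      marker = b'PMD_DRIVER_INFO='
      results = []
      start = data.find(marker)
      while start != -1:
          start += len(marker)
          # assume the JSON payload is a NUL-terminated C string
          end = data.find(b'\x00', start)
          if end == -1:
              break
          results.append(json.loads(data[start:end].decode('utf-8')))
          start = data.find(marker, end)
      return results

  if __name__ == '__main__':
      for info in find_pmd_info(sys.argv[1]):
          print(json.dumps(info))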

> On the whole, AFAICT the patch series does what it promises, and works for
> both static and shared linkage. Using JSON formatted strings in an ELF
> section is a sound working technical solution for the storage of the data.
> But the difference between the two cases makes me wonder about this all...
You mean the difference between checking static binaries and dynamic binaries?
Yes, there is some functional difference there.

> 
> For a static library build, you'd query the application executable, e.g.
Correct.

> testpmd, to get the data out. For a shared library build, that method gives
> absolutely nothing because the data is scattered around in individual
> libraries which might be just about wherever, and you need to somehow
Correct. I figured that users would be smart enough to realize that with
dynamically linked executables they would need to look at the DSOs, but I
agree, it's a glaring difference.

> discover the location + correct library files to be able to query that. For
> the shared case, perhaps the script could be taught to walk files in
> CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
My initial thought would be to run ldd on the executable, use a heuristic to
determine the relevant PMD DSOs, and then feed each of those through the
python script.  I didn't want to go to that trouble unless there was consensus
on it, though.
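
Roughly, the heuristic I was picturing (a sketch; the librte_pmd name match
is just a guess at what counts as "relevant" and would need consensus):

  import subprocess

  def find_pmd_dsos(executable):
      # ldd lines look like: "libfoo.so.1 => /path/libfoo.so.1 (0x...)"
      out = subprocess.check_output(['ldd', executable])
      dsos = []
      for line in out.decode('utf-8').splitlines():
          if 'librte_pmd' in line and '=>' in line:
              dsos.append(line.split('=>')[1].split()[0])
      return dsos

  # each path returned here would then be fed through the scan above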


> when querying the executable as with static builds. If identical operation
> between static and shared versions is a requirement (without running the app
> in question) then query through the executable itself is practically the
> only option. Unless some kind of (auto-generated) external config file
> system a la kernel depmod / modules.dep etc. is brought into the picture.
Yeah, I'm really trying to avoid that, as I think it's really not a typical
part of how user space libraries are interacted with.

> 
> For shared library configurations, having the data in the individual pmds is
> valuable as one could for example have rpm autogenerate provides from the
> data to ease/automate installation (in case of split packaging and/or 3rd
> party drivers). And no doubt other interesting possibilities. With static
> builds that kind of thing is not possible.
Right.
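
As a rough illustration of the sort of thing an rpm provides generator could
do with the JSON (the pmdinfo(vendor:device) provide naming here is invented
for illustration, not an established convention):

  def provides_from_pmd_info(info):
      # map each supported vendor/device pair to a Provides: string
      provs = []
      for vend, dev in info.get('pci_ids', []):
          provs.append('pmdinfo(%04x:%04x)' % (vend, dev))
      return provs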

Note, this also leaves out PMDs that are loaded dynamically (i.e. via dlopen).
For those situations I don't think we have any way of 'knowing' that the
application intends to use them.

> 
> Calling up the list of requirements from
> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> technical requirements but perhaps we should stop for a moment to think
> about the use-cases first?

To enumerate the list:

- query all drivers in static binary or shared library (works)
- stripping resiliency (works)
- human friendly (works)
- script friendly (works; both output modes illustrated below)
- show driver name (works)
- list supported device id / name (works)
- list driver options (not yet, but possible)
- show driver version if available (nope, but possible)
- show dpdk version (nope, but possible)
- show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
- room for extra information? (works)
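
To make the human friendly / script friendly distinction concrete, the two
output modes look roughly like this (not exact tool output, just to
illustrate the distinction; the -r option name here is illustrative):

  $ ./tools/pmd_hw_support.py testpmd      # human readable
  PMD rte_igb_pmd supports:
   8086:10c9 (Intel 82576 Gigabit Network Connection)

  $ ./tools/pmd_hw_support.py -r testpmd   # raw, script parseable
  {"name": "rte_igb_pmd", "pci_ids": [[32902, 4297]]}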

Of the items that are missing, I've already got a V2 started that can do driver
options, and is easier to expand.  Adding in the DPDK and PMD versions should
be easy (though I think they can be left out, as there's currently no globally
defined DPDK release version, it's all just implicit, and driver versions
aren't really there either).  I'm also hesitant to include kernel dependencies
without defining exactly what they mean (just module dependencies, or feature
enablement, or something else?).  Once we define that, though, adding it can be
easy.
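
For illustration, the kind of extended payload I have in mind for v2 would be
built up along these lines on the driver side (field names are illustrative,
not final):

  import json

  # hypothetical sketch of the driver-side data; "params" would carry the
  # driver options mentioned above, "pci_ids" the supported devices
  pmd_info = {
      "name": "rte_igb_pmd",
      "params": "RX_QUEUES=<int>",
      "pci_ids": [[0x8086, 0x10c9]],
  }
  # what ends up embedded in the binary:
  primer = "PMD_DRIVER_INFO=" + json.dumps(pmd_info)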

I'll have a v2 posted soon, with the consensus corrections you have above, as
well as some other cleanups.

Best
Neil

> 
> To name some from the top of my head:
> - user wants to know whether the hardware on the system is supported
> - user wants to know which package(s) need to be installed to support the
> system hardware
> - user wants to list all supported hardware before going shopping
> - [what else?]
> 
> ...and then think how these things would look like from the user
> perspective, in the light of the two quite dramatically differing cases of
> static vs shared linkage.
> 
> P.S. Sorry for being late to this party, I'm having some health issues so my
> level of participation is a bit on-and-off at the moment.
> 
> 	- Panu -
> 

